ELA done, math this week.
The Opt Outs, who prefer to be called “refusals,” may have more refusals than last year. The Long Island Opt Out Facebook page is filled with negative comments about the test, pointing to everything from mistakes that required corrections during the test to claims of ambiguous and age-inappropriate questions.
Was the test a “good” test or a “bad” test? How do you judge the adequacy of a standardized test?
Test designers are called psychometricians. To design a test you have to know what to test; in other words, what knowledge and/or skills are you testing?
… standardized achievement tests … create assessment tools that permit someone to make a valid inference about the knowledge and/or skills that a given student possesses in a particular content area. More precisely, that inference is to be norm-referenced so that a student’s relative knowledge and/or skills can be compared with those possessed by a … sample of students of the same age or grade level.
After the test is designed there is a process called “standard-setting”: a team of educators reviews each element of the test and assigns a performance level, i.e., whether the item is level one (below proficient), level two (approaching proficiency), level three (proficient), or level four (above proficient); tests have items with a range of proficiency levels.
While setting standards appropriately is critical to making sound student and policy level decisions, it is equally important that the content of the test and its difficulty level be appropriate for the decisions to be made based on test results. We cannot expect a test that does not cover the appropriate content or is not at the appropriate level of difficulty to lead to the appropriate decisions regardless of how the process of standard setting is carried out.
Last week’s test was sharply criticized by Leonie Haimson, the executive director of Class Size Matters.
Clearly there were many problems with this year’s NY state ELA exams.
These included overly long, dense and grade-inappropriate reading passages with numerous typos, abstruse vocabulary and confusing questions; commercial product placements; reading passages drawn from Pearson test prep materials; missing or mislabeled pages in test booklets; and children taking four to five hours per day to finish the exams — which violates the law that limits state testing to 1% of total instructional time.
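As a rough sanity check on that 1% claim, here is a back-of-the-envelope calculation. The school-year length, hours per day, and number of testing days below are assumptions for illustration, not figures from the article.

```python
# Hypothetical figures (assumptions, not official NY numbers):
instructional_days = 180      # assumed length of a school year, in days
hours_per_day = 6.5           # assumed instructional hours per day

total_hours = instructional_days * hours_per_day   # ~1,170 hours per year
cap_hours = 0.01 * total_hours                     # the 1% cap: ~11.7 hours

# If the ELA and math exams each run three days (an assumption) and
# children spend four to five hours per testing day, as reported:
testing_days = 6
testing_hours_low = 4 * testing_days               # 24 hours
testing_hours_high = 5 * testing_days              # 30 hours

print(f"1% cap: {cap_hours:.1f} hours")
print(f"Reported testing: {testing_hours_low}-{testing_hours_high} hours")
```

Under these assumed figures, the reported testing time would exceed the 1% cap by a factor of two or more, which is consistent with the complaint.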
A little review: the State moved to tests that reflected the new Common Core State Standards and the scores dropped across the State — from 2/3 “proficient” or “above proficient” to 2/3 “approaching proficient” or “below proficient”: from 2/3 passing to 2/3 failing.
I have no problem with the Common Core State Standards — can you object to the Common Core Anchor Standards in Literature?
Read closely to determine what the text says explicitly and to make logical inferences from it; cite specific textual evidence when writing or speaking to support conclusions drawn from the text.
Determine central ideas or themes of a text and analyze their development; summarize the key supporting details and ideas.
Analyze how and why individuals, events, or ideas develop and interact over the course of a text.
A test is created that assesses the level at which the student has acquired the requisite skills.
One would hope that classroom instruction would reflect the skills on the test; in other words, “test prep” is not necessary if classroom instruction mirrors the elements within the standards.
The problem: There was no phase-in, there was inadequate professional development, and teachers were simply pushed off the end of the diving board and left to struggle. Instead of moving to the higher standards one grade at a time, the overly ambitious John King created the debacle.
A fair question: Do the tests currently reflect the standards, and are the proficiency levels set at appropriate levels of difficulty?
We’ll have to wait until the fall, or later, when a technical report assessing the quality of the test is issued. Fred Smith, a testing expert, has written detailed analyses of prior exams. Read Fred’s doubts about the current tests here and his sharp criticism of last year’s tests here.
The attacks on the tests are unabated.
The attacks are emotional; they are visceral. While the tests have no direct impact on students, the low scores have created anger — kids moved from passing to failing — and the impact was emotional. On the other hand, the impact on teachers and schools is significant. While there is a four-year moratorium on the use of student test scores to assess teachers, the wound is still festering.
What do the opt outs want? What would it take to bring Opt Out parents back to the exams?
Gary Stern in an editorial in LoHud muses over what it would take to lure parents back.
Perhaps “better” tests. Tests that are more “age appropriate.” First, I don’t know how to define “better” tests, and, second, if scores continue to define 2/3 of kids as failing, parents will not return to the testing world.
One approach is to move away from paper-and-pencil tests to performance tasks; however, performance tasks require a sea change in instructional focus.
The Stanford Center for Assessment, Learning and Equity (SCALE) is working with a number of school districts to develop and implement a performance tasks approach to student assessment. Coincidentally Stanford is offering a free online MOOC, “Designing for Deeper Learning: How to Develop Performance Tasks,” and, it starts tomorrow!! Sign up now here.
What the feds do may impact the testing kerfuffle in New York State. Some argue that the feds, under the new ESSA law, have no authority to intervene in a state; others argue that the feds have the authority to withhold funds from the state, with the state determining the impact at the local level. Of course, in an election year, will the feds want to intervene? And, after November, a new president and a totally new ball game.
While the tests end on Thursday the anger will continue to seethe. The Board of Regents has a complex task in a highly charged political and emotional environment.