Sunday, March 8, 2015

The problem is with PARCC

In the past few weeks, I've had feedback from several "education reformers" and pro-PARCC voices about my criticism of PARCC tests. Many of their arguments have, to me, been grounded in naivete and misinformation. The following is the gist of my responses to their defenses of PARCC testing.

For one, it's not simply a matter of counting up the "testing time" and declaring that students are not "over-tested." For example, my district's high schools are losing the equivalent of nine full school days (out of roughly 174 days of instruction in a school year) in order to administer the tests required by the state. With proctoring and technology needs, there is simply no other way. As an AP teacher, I am positively ill about the loss of classroom time that will lessen my ability to prepare students for a truly meaningful and significant assessment. The excessive number of days and hours, and the two separate testing windows (PBA & EOY), make PARCC simply untenable for schools and for students. In a world where adults can take graduate school exams like the GRE and LSAT in a few hours, it seems patently absurd that kids between the ages of 8 and 18 are facing eleven hours of testing.

And this testing monstrosity is a completely new product that has not faced market scrutiny. The PARCC assessment is an unproven and obtrusive tool that inhibits instruction and poses serious equity issues in its format. If states and the federal government had had the common sense a year ago to align with ACT on a new common state assessment - one that can be given in a day or less - rather than PARCC, we might not have the huge parental refusal/opt-out movement we are currently experiencing. Additionally, the radical new format of an online assessment necessitates training for students that absolutely requires "test prep" and "teaching to the test." And with all we know from research about the importance of test-format familiarity and discrepancies in online reading comprehension, the online format virtually guarantees the PARCC will not be an accurate measure of academic knowledge or skills, nor of the ability to think critically.

I am not opposed to assessment or accountability. As an AP English teacher, I could hardly be opposed to standardized testing, or even "teaching to the test." In fact, one of my earlier pieces about testing simply proposed that Colorado choose ACT Aspire over the PARCC. And I have never opposed the use of MAPS or DRA2 or CSAP (or ISAT in Illinois) for my students or my own children. I do, in fact, prep my students for the AP, SAT, and ACT. However, I am critical of the edu-critics who use "test-based" reform as the deciding factor in labeling schools "good" or "bad," and I am critical of the argument for new standards and tests based on the myth that public education is failing. In terms of the achievement gap, my position and the views of PARCC critics are not centered simply on test scores. My concerns extend to the impact on struggling schools that are forced, by the sheer need for test prep, to narrow the scope of instruction to "pass the test." As noted in this profound article, students at high-achieving schools will maintain their access to electives, the arts, and subjects outside the tested ones despite an increased testing regimen. The same can't be said for other schools.

Truly, I can't imagine why an "educator" would enthusiastically cheer, or even passively accept, the PARCC test as "the answer" to all that allegedly ails public schools. Have they seen the test? Have they taken a practice test? If so, are they not bothered by the radical new system of online testing that has students scrolling passages and questions on opposite sides of the screen while they try to answer? Are they not concerned about an "essay" section that puts writing in a box in which kids can see only a few lines of their essay at a time? Do they know nothing of the research on discrepancies between online and paper reading comprehension? And what about the lack of data? Where is the evidence of valid proficiencies and cut scores? If a product is going to carry so much significance, it should offer more transparency about its quality. It would not have taken much planning to pilot, research, refine, and release the data to demonstrate the test's value. But that didn't happen. And that's just not good practice.

As a parent and an educator, I can attest that my own children have taken the CoGAT, NNAT, MAPS, DRA2, CSAP, ACT, and other tests as diagnostics for learning. I have no problem with those tests. But PARCC? I have serious concerns that have not been alleviated.
