This week brought discussion and evidence about the future of writing in Common Core assessment far more authoritative than my assurances to teachers. Had I read PARCC’s March Progress Update before April, I would have had the same information published in Erik Robelen’s Curriculum Matters blog in the April 12th EdWeek post, “Man vs. Computer: Who Wins the Essay Scoring Challenge?” I am from Illinois, where statewide writing assessment is based not on academic excellence but on budgetary considerations. Currently, writing assessment is not a state priority because the state cannot afford to have writing assessments scored. However, the CCSS is an integrated “model of literacy” and, as such, “requires that students be able to write about what they read” (CCSS, p. 4). This focus on writing as an extension of reading demands that writing be among the aspects of literacy assessed in 2015. Not only is there a theoretical reason for including writing, but both the PARCC and SMARTER Balanced consortia promised performance-based assessments in the original proposals that won them the $187 million (give or take, depending on which consortium we are speaking of) to play with.
However, the distance between promising performance-based assessment and delivering the goods is like running a marathon, and the starting gun has already fired. Stepping up to sweeten the Race to the Top pot, the William and Flora Hewlett Foundation has offered $100K in prize money for a competition among leading “data scientists” to create the most effective system for automated grading of performance-based assessments. This effort is a means to ensure the expectations and goals of the CCSS integrated model of literacy can be achieved. Anyone in education knows the adage, “what is tested is taught.” And though my research in opportunities to learn (OTL) does not bear the adage out as fact, the inclination to teach what is tested is far greater than the inclination to do the opposite. In those years when Illinois did not test writing, many schools de-emphasized its importance, some going so far as to remove writing instruction from the curriculum.
But the coming CCSS cannot allow that to happen, because the theoretical foundation of the standards rightly aligns reading and writing as means of co-processing information into knowledge, understandings, and skills. Efficient automated scoring will reduce the likelihood that either teachers or states will dismiss their responsibility to address literacy learning that reaches beyond reading, speaking, and listening.
“…the constructed-response items and extended essays that will make up the performance-based component of the tests – can be scored quickly, efficiently, affordably, and, perhaps most important, validly and reliably. Automated scoring technologies would allow PARCC to rely less on more time-consuming and expensive hand-scoring methods, helping to ensure the tests maintain their strong focus on asking students to demonstrate what they know and can do in engaging and authentic ways that are affordable and sustainable over time.” (PARCC Progress Report, March, p. 9)
Not only does the theoretical stance of the CCSS demand writing assessment, but PARCC’s Model Content Frameworks, released last fall, clearly illustrate the relationship between reading and writing. At every grade, the Frameworks emphasize that students integrate reading, writing, and research in order to “better understand content matter” (Cebelak, 2011, p. 45). Once again, the importance of disciplinary literacy is apparent: learners cannot acquire deep conceptual constructs within a discipline if they do not have opportunities to read broadly and deeply in a content area and to reflect on their reading through various forms of writing: summarizing significant passages or concepts, pondering connections between opposing views, comparing perspectives of like-minded thinkers/scientists/practitioners, etc. Many of the teachers I work with find this chart helpful in understanding not only the relationship among reading, writing, and research, but also the shared responsibility for literacy among the academic and technical disciplines. Keep in mind that there is a model for each of grades 3–11, and though the content may change from grade to grade, the modular concept does not.
Although there have been changes to the initially proposed assessments on the part of both PARCC and SBAC, I remain confident, based on PARCC’s March Update, that writing will play a significant role in the final assessment format. Therein is a promise for a performance-based assessment that will be required of all. The standards, in theory and because of their rigor, demand that students write to show their own thinking in addition to (or, I would go so far as to say, “in place of”) taking multiple-choice tests, which require more skill at inference than skill at higher-level thinking. In sum, I agree with the words of Barbara Chow, education program director at Hewlett: “The more we can use essays to assess what students have learned, the greater likelihood they’ll master important academic content, critical thinking, and effective communication” (EdWeek, April 12, 2012, para. 12).