
Evidence Centered Design & the Common Core: A Fairer Way to Assess


As a classroom teacher for nearly 20 years, one who was fluent with and adherent to our state standards, I am often asked why I have so easily shifted from the known and comfortable to the unknown and even tentative space into which the Common Core seems to relocate education. My answer may not be calming to some, but for me it is true: I found the other space the more tentative one. As a high school teacher, I could teach to our state standards, but the eventual assessment, the generic though highly reliable and valid ACT, was not centered in those standards. Indeed, ACT has its own set of college and career readiness standards, but those were not our state standards. The ACT, and by extension any other well-intentioned test, may be centered on student learning, but if the assessment is not standards aligned, it does not connect the two aspects our current culture presumes assessment to measure: teaching and learning.

When standardized testing was first introduced, the focus of assessment was not related to teaching (for a good history of assessment, read The Big Test: The Secret History of the American Meritocracy). Generic intelligence, what we term in the vernacular today as achievement, was measured for a variety of reasons–school placement, course placement, college entrance, military appointment–but not as a means to measure teacher or school performance. Assessment purposes were individualized as a means to sift, distill, and categorize people.

In the past, the test taker opened the test booklet and proceeded through a variety of questions representing domains or disciplines in the academic schema. The questions posed did not directly represent a written standard. Simply stated, testing technique was based on the childhood story of Goldilocks and the Three Bears, with Goldilocks seeking a napping place that was just right: item writers generated some questions that were too hard, some questions that were “too soft,” and some questions that were “just right.” Using such a method, the test could be statistically valid and reliable: smart kids would get all or nearly all the questions right, strugglers would get the “too soft” questions, and the kids in the middle…well, yep, that’s where they’d be…in the middle of the bell curve, having achieved success with the “too soft” and the “just right” questions. Yesterday, teachers wrote those kinds of tests, and today, many continue to write those kinds of tests: the hard questions are the ones that weren’t drilled into students’ heads; the easy ones are informative tidbits that have been drilled every year since the fourth grade; and the just-right questions require attention throughout the unit without over-taxing cognitive demand.

Don’t get me wrong. I’m not picking on anyone. I’m just saying that’s how testing has been developed: item response theory, to use the technical name. But now that we are using assessment in a far different manner for a far different purpose, we need a far different assessment. If students and teachers are going to be held accountable for assessment outcomes, it is only “fair” that they know the goal. The decisions about their futures should not reside with a group of test writers generating highly secretive test items behind closed doors. Using a standards-based assessment, students should no longer be assessed over random facts or intellectual performances for which they may or may not be prepared. Standards-based, or evidence-centered, design identifies the outcome to be assessed before the assessment is generated and before instruction begins. If you’ve read my blog in the past, you know I have been preaching AND practicing such a design for many years, as a teacher and as a teacher of teachers.

PARCC and SBAC have subscribed to Evidence Centered Design. Their blueprints identify the “claim,” or the standard, that students are accountable for learning and teachers for teaching. Furthermore, the blueprints identify the type of evidence the standard demands as proof of teaching and learning. And the blueprints identify quite specifically the means by which that evidence can be generated. Understanding that students want to learn and to do well on an assessment that is a continuation of learning, both PARCC and SBAC use assessment as a way to engage students in cognitive processing: presenting cold text and asking readers to solve problems or grapple with ideas in a way that is directly aligned to what they have been doing and learning in school.

The implications of this type of assessment are far-reaching but not insurmountable. As educators, we need to begin providing more opportunities for students to grapple independently with text. In the past, we have scaffolded learning and understanding to such a degree that students were unable to approach independently the kinds of problems and ideas that careers and colleges present. In tomorrow’s classroom, whole-class instruction will be at a premium because it will be briefer than in the past. We will not be telling kids or showing them; they will be reading and doing for themselves.

As a result, they will be generating their own reports, written and spoken, over what they learned and know, while backing their reports with evidence and explanation (reading standard #1…regardless of content area)! We will not be testing over ideas and facts that have been pounded into heads, but assessing the facts and inferences students can pull out of text. Students will need timely, regular, and descriptive feedback on their thinking and the products of their thinking.

Our classrooms will not look like they have in the past because in the future we will see kids working together, their heads in their texts: could be an experiment, could be a book, could be around a computer, could be over a mathematical problem. We will have redefined text and expanded the understanding of reading. The history text will not be the same as the math text, nor the music text the same as the art, nor the literature the same as the shop. Understanding text more broadly will raise its value for all learners, teachers and students. Because all disciplines will have recognized texts, there will be a greater connection between learning and thinking. Reading is not compartmentalized but something we do all of the time.

Learning is messy, and kids have to start getting dirty. In the past, too many teachers have left the building exhausted from their work and frustrated with the fruits of their labor. In tomorrow’s world, teachers and students will leave the building equally tired, having shared the work of learning. Tomorrow, teachers and students will feel confident knowing what the year-end assessment will bring.

See the links below for more specific information. PARCC’s listing of documents can be somewhat confusing, so I have tried to help readers understand what they may find within the pages of these documents. SBAC appears to have a more user-friendly approach to finding and learning about this testing theory and how it applies to ELA and Math items on the 2015 assessment.

  • PARCC ELA Evidence Tables:
    • Two-column approach: looks much like deconstructed standards
    • Left column: the grade-level standard
    • Right column: the discrete evidences over which students will be assessed (subsets of the standards)
  • PARCC ELA Performance and End of Year Test Specifications:
    • Written at each grade level
    • Details the relationships between the standards and all assessment tasks (selected response and prose constructed response)
    • Identifies the number of selected-response items (EBSR & TECR) for each task and the standard each represents
    • Specifies how many prompt options will be written for each performance task and the standard each represents
  • PARCC Mathematics Evidence Tables: from here, you’ll need to scroll to the lower portion of the page and click on the specific grade or high school pathway (traditional or integrated)
    • Five-column approach: Standard, Evidence Statement, Clarification, Mathematical Practice, Calculator (yes/no)
    • The evidence statement may be part of the standard or the standard in its entirety
    • The clarification acts much like the Task Generation Model for ELA, providing specific guidance for test writing and, therefore, for instructional preparation & practice
  • Smarter Balanced Assessment Consortium: Item Writing and Review: this page has a variety of links addressing math and ELA that are well described and accessible.
