Let’s look at an example provided by PARCC. Although this example is designated for 6th grade, all grade-level examples (3-11) look essentially the same…well, until they look different. More on that in a moment. Each item is worth 2 points. Again, grade level doesn’t matter–2 points for each item, and an item comprises two questions. As you can see here, Part A of the item (the first question) asks the reader to indicate what the word “regal” means as used in the passage. This question is assessing RL.6.4: “Determine the meaning of words and phrases as they are used in a text, including figurative and connotative meanings; analyze the impact of a specific word choice on meaning and tone.” Part B of the item (the second question) is assessing RL.6.1 (the evidence standard): “Cite textual evidence to support analysis of what the text says explicitly as well as inferences drawn from the text.” The student was asked to “determine” or infer the meaning of the word regal, then asked to cite the evidence supporting that inference. At its basic level, that is how an EBSR works.
Students can earn full credit, the 2 points, by getting both questions (Part A and Part B) correct. Or they can earn partial credit of one point by getting Part A correct but not Part B. However, if students answer Part A incorrectly but Part B correctly, they earn no credit. The idea here is that students need to know the answer and be able to show their thinking in getting there. In a previous PARCC release of item guidelines (Version 8.0, released April 25, 2013, p. 28), allowances for one-part EBSRs were delineated:
In grade 3, a one-part EBSR is allowable because Reading Standard 1, evidence 1, is distinctly different from Reading Standard 1 in grades 4-11.
In grades 4-11, a one-part EBSR is allowable when there are multiple correct responses that elicit multiple evidences to support a generalization, conclusion, or inference.
Interestingly, these caveats do not appear in the most recently released guidelines (October 22, 2013), although the scoring guidelines remain (Item Guidelines for ELA/Literacy PARCC…, p. 31). Although some one-part items were developed during Phase One, one-part items are not being developed during Phase Two, the current test development period.
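The basic two-part scoring rule described above (both parts correct earns 2 points; Part A alone earns 1; a wrong Part A earns 0, no matter what happens on Part B) can be sketched as a small function. This is a hypothetical illustration; PARCC publishes rubrics, not scoring code:

```python
def score_ebsr(part_a_correct: bool, part_b_correct: bool) -> int:
    """Score a simple two-part EBSR item (2 points possible).

    Hypothetical sketch of the rubric described above:
    both parts correct -> 2; Part A only -> 1; Part A wrong -> 0.
    """
    if part_a_correct and part_b_correct:
        return 2
    if part_a_correct:
        return 1
    # An incorrect Part A earns no credit, even if Part B is correct:
    # the student must know the answer before the evidence counts.
    return 0
```

The asymmetry is the point: evidence (Part B) only earns credit when it supports a correct inference (Part A).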
Some EBSR items will ask students to identify multiple correct answers.
Not all EBSR items are as straightforward as the example shown above: a simple two-part item, each part offering four choices and only one correct answer among those provided. In more sophisticated items, students may be asked to identify a correct answer while noting there is more than one correct response possible. When that is the case, the item will clearly state that there is more than one correct answer but remind the student to select only one response. Under those circumstances, there must be six options from which to choose.
An even more sophisticated item design, exemplified at the left, depicts an item that asks students to identify multiple correct answers, in this case, three pieces of evidence to support the selection in Part A. For those items asking students to identify three correct responses (allowed only in grades 6-11), there must be seven options from which to choose. If students are to choose more than one response (as would often be required for a question addressing both RA8 and RA9), the test form will indicate how many responses students are to choose rather than the vague language of “select all that apply.”
Some PARCC items have more than one right answer and more than one part….
The Technology Enhanced Constructed Response (TECR) is similar to the EBSR in purpose: to measure both reading comprehension (standards 2-9) and the use of evidence. The test taker will go back into the text and highlight a selection, or return to the text, select a phrase or sentence, and drag and drop that response into a dialogue box. Other possibilities are also open for exploration by the test developers.
Regardless, the two-part (and in some cases three-part) items function similarly to the EBSR: Part A asks the test-takers to select an answer among a series of choices to a question measuring reading/comprehension accuracy. Part B asks test-takers to find evidence to support the response. How does a three-part TECR evolve? In a situation similar to the EBSR described above, one that asks for two pieces of evidence, a three-part TECR will result, as shown at the left in a PARCC example. PARCC’s assessment guidelines indicate that one suggested use of the TECR as the Part B and/or Part C is when the text itself offers more than four text evidences, though this may not be the only situation under which a TECR is used. The freedom of drag and drop allows readers to go to a place in a text that supports their thinking rather than sift through a long list of possible options.
Items with multiple responses and multiple parts are scored differently than the simple four-choice items. Obviously, if the student answers every part of the item completely correctly (all options correct on Part A and Part B, and Part C if there is one!), s/he earns 2 points. If the test-taker gets Part A, the reading accuracy portion of the item, correct but misses Part B (evidence) on a simple four-choice item, s/he earns one point. However, if the test-taker misses Part A (reading accuracy), s/he will receive no points regardless of the correctness of Part B or Part C (if there is one). On the other hand, if Part A has multiple correct answers and the test taker gets at least one of those responses correct but misses the evidence questions, the test taker will earn 1 point. To read the scoring document published by PARCC, access the Assessment Guideline, p. 31.
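The multi-response rules above can be sketched the same way. The function below is my own hypothetical reading of the rubric (the names and parameters are illustrative, not PARCC’s), covering the three outcomes described: no credit when Part A is entirely missed, full credit when every part is fully correct, and one point of partial credit otherwise:

```python
def score_multi_part(part_a_hits: int, part_a_total: int,
                     evidence_correct: bool) -> int:
    """Hypothetical sketch of multi-response EBSR/TECR scoring (2 points max).

    part_a_hits: how many of Part A's correct options the student selected
    part_a_total: how many correct options Part A contains
    evidence_correct: whether Part B (and Part C, if present) were fully correct
    """
    if part_a_hits == 0:
        # Missing Part A (reading accuracy) earns no points,
        # regardless of Part B or Part C.
        return 0
    if part_a_hits == part_a_total and evidence_correct:
        # All options correct on every part: full credit.
        return 2
    # At least one Part A response correct, but the item
    # was not fully correct: partial credit of 1 point.
    return 1
```

The in-between cases (for example, a fully correct Part A with partially correct evidence) are not spelled out in the passage above, so this sketch assumes they all collapse to the 1-point partial-credit band.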
The Evidence-Based Selected-Response and the Technology-Enhanced Constructed-Response will appear on both assessments that comprise PARCC’s summative assessment: the Performance-Based Assessment (PBA) that will occur in the spring and the End of Year Assessment (EOY) that will be given in late spring. On the PBA, each text readers engage with will have a series of these items. Both the Literary Analysis and the Research Simulation Task will have two items related to grade-level reading standards 1-3 and 4-9 and one directly addressing grade-level reading standard 4 (vocabulary) for each text read (two texts for the literary analysis and three for the research simulation task). The Narrative Task, however, is different and will have five items associated with grade-level standards 1-9. To learn about the specific relationships between the performance tasks and the standards assessed by an EBSR or TECR, check out my PARCC Aligned Planners. Judging from PARCC’s Assessment Blueprint, the End of Year (EOY) Assessment will offer a range of EBSRs and TECRs at each grade level, 3-11 (ELA/Literacy Form Specifications Grades 3-5, Grades 6-8, and Grades 9-11).
Instructionally, become more open-ended in the kinds of thinking you ask students to practice. This kind of thinking is supported by texts with rich and deep ideas or theories. Acknowledge the multiple possibilities for both inference making and supporting evidence. Look for ways to move deeper into thinking and academic connections rather than limiting thought with single and simple responses. In terms of class assessment, begin to readjust the types of multiple-choice questions you offer. PARCC’s approach is different from conventional multiple choice. I don’t think the task is overly burdensome, although the thinking required will be deeper–for test taker and test writer!