My virtual educational colleague, Darren Burris, recently inquired about PARCC’s growing list of acronyms. Specifically, Darren wanted to know more about the EBSR, or Evidence-Based Selected-Response, a multiple-choice item designed by PARCC to assess not only the accuracy of test-takers’ reading responses but also the evidence they used to draw those conclusions. As you can see from the screenshot, Darren’s tweet was followed up by others asking what EBSR meant. I replied to Darren’s question through a series of tweets, but both his inquiry and the spike of interest among other Twitter peeps made me realize that many are not yet knowledgeable about, much less comfortable with, the format of the new assessment.
What is an EBSR?
An Evidence-Based Selected-Response (EBSR) is one of the three “item” types by which PARCC will measure students’ proficiency in achieving grade-level reading standards. The other two item types are the Technology-Enhanced Constructed-Response (TECR) and the Prose Constructed-Response (PCR), which, simply put, is an essay and for which I have written a separate blog post: Another PARCC Acronym: The PCR or Prose Constructed Response…the essay. More on those later. For now, let’s keep our focus here on the EBSR.
Essentially, an EBSR is like a conventional multiple-choice question. What makes the EBSR different from conventional multiple choice is the nature of the “item.” All EBSR items are two-part items developed through the use of paired questions. The first part measures reader accuracy and comprehension of text(s) (a measure of reading standards 2-9), and the second part measures the textual evidence the reader has used to develop that accurate comprehension (reading standard 1). What does that mean?
The 2-Part Item
The first question in the two-part item directly addresses student accuracy in applying one or more reading standards from the grade-level standards 2-9: the test-taker reads the question, considers the choices by closely reading the text, confirms his/her understanding by revisiting the text, eliminates the distractors, and marks the response. The second question asks the reader for evidence to support that comprehension–the answer marked for the first question–by identifying text evidence that directly or inferentially justifies the initial response. The simplest format for these two-part items is two related questions, each with four answer choices. Part A questions may provide distractors that represent misreadings of the text; however, all Part B choices (including distractors) must be “accurate and relevant from the passage (whether exact citations or paraphrases)” (Item Guidelines for ELA/Literacy PARCC Summative Assessment, 2013, p. 23).
Examining the EBSR
Let’s look at an example provided by PARCC. Although this example is designated for 6th grade, all grade-level examples (3-11) look essentially the same…well, until they look different. More on that in a moment. Each item is worth 2 points. Again, grade level doesn’t matter–2 points for each item, and an item comprises two questions. As you can see here, Part A of the item (the first question) asks the reader to indicate what the word “regal” means as used in the passage. This question assesses RL.6.4: “Determine the meaning of words and phrases as they are used in a text, including figurative and connotative meanings; analyze the impact of a specific word choice on meaning and tone.” Part B of the item (the second question) assesses RL.6.1 (the evidence standard): “Cite textual evidence to support analysis of what the text says explicitly as well as inferences drawn from the text.” The student was asked to “determine,” or infer, the meaning of the word regal, then asked to cite the evidence supporting that inference. At its most basic level, that is how an EBSR works.
Scoring an EBSR
Students can earn full credit, the 2 points, by getting both questions–Part A and Part B–correct. Or they can earn partial credit of one point by getting Part A correct but not having a correct answer on Part B. However, if students have an incorrect answer on Part A but a correct answer on Part B, they earn no credit. The idea here is that students need to know the answer and be able to show their thinking in getting there. In a previous PARCC release of item guidelines (Version 8.0, released April 25, 2013, p. 28), allowances for one-part EBSRs were delineated:
- In grade 3, a one-part EBSR is allowable because Reading Standard 1 in grade 3 is distinctly different from Reading Standard 1 in grades 4-11.
- In grades 4-11, a one-part EBSR is allowable when there are multiple correct responses that elicit multiple evidences to support a generalization, conclusion, or inference.
Interestingly, these caveats do not appear in the most recently released guidelines (October 22, 2013), although the scoring guidelines remain (Item Guidelines for ELA/Literacy PARCC…, p. 31). Although some one-part items were developed during Phase One, one-part items are not being developed during Phase Two, the current test development period.
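The partial-credit logic described above is simple enough to sketch out. Here is an illustrative Python sketch of the 2-point rubric for a basic two-part EBSR; this is my own rendering of the rules as described, not any official PARCC scoring code:

```python
def score_ebsr(part_a_correct: bool, part_b_correct: bool) -> int:
    """Return points earned on a simple two-part EBSR item.

    Illustrative sketch of the rubric described in the text:
    evidence (Part B) only counts if comprehension (Part A) is correct.
    """
    if part_a_correct and part_b_correct:
        return 2  # full credit: correct answer plus supporting evidence
    if part_a_correct:
        return 1  # partial credit: correct answer, wrong evidence
    return 0      # evidence without comprehension earns no credit
```

Notice that the rubric is deliberately asymmetric: a correct Part B cannot rescue an incorrect Part A, which is exactly the “show your thinking” principle at work.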
The More Sophisticated EBSR
Not all EBSR items are as straightforward as the example shown above: a simple two-part item, each part offering four choices and only one correct answer among those provided. In more sophisticated items, students may be asked to identify a correct answer while noting that more than one correct response is possible. When that is the case, the item will clearly state that there is more than one correct answer but warn students that they need only select one response. Under those circumstances, there must be six options from which to choose.
An even more sophisticated item design, exemplified at the left, asks students to identify multiple correct answers–in this case, three pieces of evidence to support the selection in Part A. For those items asking students to identify three correct responses (allowed only in grades 6-11), there must be seven options from which to choose. If students are to choose more than one response (as would often be required for a question addressing both RA8 and RA9), the test form will indicate how many responses students are to choose rather than using the vague language of “select all that apply.”
What is a TECR?
The Technology-Enhanced Constructed-Response (TECR) is similar to the EBSR in purpose: to measure both reading comprehension of standards 2-9 and the evidence of reading standard 1. Rather than selecting from a list of answer choices, the test-taker will go back into the text and highlight a selection, or return to the text, select a phrase or sentence, and drag and drop that response into a dialogue box. Other possibilities are also open for exploration by the test developers.
Regardless, the two-part (and in some cases three-part) items function similarly to the EBSR: Part A asks the test-taker to select an answer, among a series of choices, to a question measuring reading/comprehension accuracy. Part B asks the test-taker to find evidence to support the response. How does a three-part TECR evolve? In a situation similar to the EBSR described above–one that asks for two pieces of evidence–a three-part TECR will result, as shown at the left in a PARCC example of a Technology-Enhanced Constructed-Response (TECR). PARCC’s assessment guidelines indicate that one suggested use of the TECR as the Part B and/or Part C is when the text itself offers more than four text evidences, though this may not be the only situation under which a TECR is used. The freedom of drag and drop allows readers to go to the place in a text that supported their thinking rather than sift through a long list of possible options.
Scoring Multiple-Multiple Choice Items
Items with multiple responses and multiple parts are scored differently from the simple four-choice questions and items. Obviously, if the student answers both parts of the item with full correctness (all options correct on Part A and Part B–and Part C if there is one!), s/he earns 2 points. If the test-taker gets Part A, the reading-accuracy portion of the item, correct but misses Part B (evidence) on a simple four-choice item, s/he earns one point. However, if the test-taker misses Part A (reading accuracy), s/he will receive no points regardless of the correctness of Part B or Part C (if there is one). On the other hand, if Part A has multiple correct answers and the test-taker gets at least one of those responses correct but misses the evidence questions, the test-taker will earn one point. To read the scoring document published by PARCC, access the Assessment Guidelines, p. 31.
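For the multiple-response case, the rubric extends naturally. Here is a hedged sketch of my reading of the rules above for an item whose Part A has several correct options; the parameter names are my own, and the handling of in-between cases (e.g., partial Part A with correct evidence) is my inference from the prose, not an official rule:

```python
def score_multi_ebsr(part_a_hits: int, part_a_total: int,
                     evidence_all_correct: bool) -> int:
    """Score a multiple-response item, per my reading of the rubric.

    part_a_hits: how many of Part A's correct options the student selected
    part_a_total: how many correct options Part A actually has
    evidence_all_correct: whether every evidence part (B, and C if
        present) was answered with full correctness
    """
    if part_a_hits == 0:
        return 0  # missing Part A entirely voids the evidence parts
    if part_a_hits == part_a_total and evidence_all_correct:
        return 2  # full correctness on every part of the item
    return 1      # at least one correct Part A response, but not full credit
```

As with the simple item, the gatekeeper is Part A: no comprehension, no points.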
Where Will We See the EBSR and TECR?
The Evidence-Based Selected-Response and the Technology-Enhanced Constructed-Response will appear on both assessments that comprise PARCC’s summative assessment: the Performance-Based Assessment (PBA), which will occur in the spring, and the End-of-Year Assessment (EOY), which will be given in late spring. On the PBA, each text readers engage with will have a series of these items. Both the Literary Analysis and the Research Simulation Task will have two items related to grade-level reading standards 1-3 and 4-9 and one directly addressing grade-level reading standard 4 (vocabulary) for each text read (two texts for the Literary Analysis and three for the Research Simulation Task). The Narrative Task, however, is different and will have five items associated with grade-level standards 1-9. To learn about the specific relationships between the performance tasks and the standards assessed by an EBSR or TECR, check out my PARCC Aligned Planners. Based on PARCC’s Assessment Blueprint, the End-of-Year (EOY) Assessment will offer up 18 EBSRs and 8 TECRs at each grade level, 3-11.
Instructionally, become more open-ended in the kinds of thinking you ask students to practice. This kind of thinking is supported by texts with rich and deep ideas or theories. Acknowledge the multiple possibilities for both inference-making and supporting evidence. Look for ways to move deeper into thinking and academic connections rather than limiting thought with single and simple responses. In terms of classroom assessment, begin to readjust the types of multiple-choice questions you offer. PARCC’s approach is different from conventional multiple choice. I don’t think the task is overly burdensome, although the thinking required will be deeper–for test-taker and test-writer alike!