Type of information requested

This refers to the kinds of information that readers identify to answer a test question successfully. The more concrete the requested information, the easier the task is judged to be. In previous research based on large-scale assessments of adults' and children's literacy (Kirsch, Jungeblut, and Mosenthal, 1998; Kirsch and Mosenthal, 1994), the type of information variable was scored on a 5-point scale. A score of 1 represented information that was the most concrete and therefore the easiest to process, while a score of 5 represented information that was the most abstract and therefore the most difficult to process. For instance, questions that asked examinees to identify a person, animal, or thing (i.e., imaginable nouns) were said to request highly concrete information and were assigned a value of 1. Questions asking respondents to identify goals, conditions, or purposes were said to request more abstract types of information. Such tasks were judged to be more difficult and received a value of 3. Questions that required examinees to identify an "equivalent" were judged to be the most abstract and were assigned a value of 5. In such cases, the equivalent tended to be an unfamiliar term or phrase for which respondents had to infer a definition or interpretation from the text.
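The 5-point scale described above can be sketched as a simple lookup. This is a hypothetical illustration only: the category names and the mapping below are drawn from the examples in the paragraph, not from the full operational rubric used by Kirsch and Mosenthal.

```python
# Illustrative sketch of the 5-point "type of information" scale.
# The categories and values mirror the examples in the text; the actual
# coding scheme distinguishes many more information types.

TYPE_OF_INFORMATION_SCORES = {
    # imaginable nouns: most concrete, easiest (value 1)
    "person": 1, "animal": 1, "thing": 1,
    # more abstract types (value 3)
    "goal": 3, "condition": 3, "purpose": 3,
    # most abstract: a meaning must be inferred from the text (value 5)
    "equivalent": 5,
}

def score_type_of_information(requested: str) -> int:
    """Return the difficulty value (1 = easiest, 5 = hardest) for a
    requested information type; unlisted types raise a KeyError."""
    return TYPE_OF_INFORMATION_SCORES[requested.lower()]
```

For example, a question asking examinees to identify a person would receive the lowest value, while one asking for an equivalent would receive the highest.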

Plausibility of distractors

This concerns the extent to which information in the text shares one or more features with the information requested in the question but does not fully satisfy what has been requested. Tasks are judged to be easiest when no distractor information is present in the text. They tend to become more difficult as the number of distractors increases, as the distractors share more features with the correct response, and as the distractors appear in closer proximity to the correct response. For instance, tasks tend to be judged more difficult when one or more distractors meet some but not all of the conditions specified in the question and appear in a paragraph or section of text other than the one containing the correct answer. Tasks are judged to be most difficult when two or more distractors share most of the features with the correct response and appear in the same paragraph or node of information as the correct response.
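The judgment described above combines three factors: how many distractors are present, how many features they share with the correct response, and how close they appear to it. A minimal sketch, assuming an illustrative 0-to-1 feature-overlap measure and thresholds that are not part of the original coding scheme:

```python
# Hypothetical sketch of the "plausibility of distractors" judgment.
# The overlap thresholds and returned ratings are assumptions for
# illustration, not the operational values.

from dataclasses import dataclass

@dataclass
class Distractor:
    shared_features: float  # fraction of features shared with correct response (0-1)
    same_paragraph: bool    # appears in the same paragraph/node as the answer

def distractor_difficulty(distractors: list[Distractor]) -> int:
    """Map distractor evidence to a rough 1-5 difficulty rating."""
    if not distractors:
        return 1  # no distractor information present: easiest
    near_matches = [d for d in distractors
                    if d.shared_features >= 0.75 and d.same_paragraph]
    if len(near_matches) >= 2:
        return 5  # two or more near-matches beside the answer: hardest
    partial = [d for d in distractors if d.shared_features >= 0.5]
    if partial and not any(d.same_paragraph for d in partial):
        return 3  # partial matches located elsewhere in the text
    return 2
```

The sketch encodes the ordering stated in the text: no distractors is easiest, partial matches in other sections are harder, and multiple near-matches in the same node as the correct response are hardest.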

At first glance, the skills involved in performing quantitative tasks might appear to be fundamentally different from those involved in processing prose and document tasks. An analysis of tasks along this scale shows, however, that processing printed information plays an important role in affecting the difficulty of quantitative tasks. In general, it appears that many individuals can perform single arithmetic operations using printed materials when both the numbers and operations are made explicit. Yet, when the numbers for these same operations must be extracted from materials that contain similar but irrelevant information, or when the operations must be inferred, the tasks become increasingly difficult.

As with the prose and document tasks, quantitative tasks require individuals to match information in a question or directive with information stated in one or more texts, where a text may be either continuous or noncontinuous. In addition, quantitative tasks may require respondents to deal with plausible distractors when extracting information for an arithmetic operation. Individuals must also process some type of information; while the type of information requested varies across prose and document tasks, in quantitative tasks it is always an amount. Thus, the process variables for quantitative tasks are type of match and plausibility of distractors, as defined for prose and document literacy, plus two variables unique to this scale: type of calculation and operation specificity. These two variables are briefly described here; they are more fully characterized through a discussion of exemplary tasks and fully operationalized in Appendix A.
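The set of process variables attached to a quantitative task, as enumerated above, can be summarized in a small record. The field types and value ranges below are illustrative assumptions; the full operationalization is given in Appendix A.

```python
# Hypothetical record of the process variables for a quantitative task:
# the two variables shared with the prose and document scales, plus the
# two that are unique to the quantitative scale. Types are assumptions.

from dataclasses import dataclass

@dataclass
class QuantitativeTask:
    type_of_match: int            # shared with prose/document scales
    distractor_plausibility: int  # shared with prose/document scales
    type_of_calculation: str      # unique to the quantitative scale
    operation_specificity: int    # unique: how explicit the operation is
    requested_information: str = "amount"  # always an amount on this scale
```

Fixing `requested_information` to "amount" reflects the point made in the text: unlike the prose and document scales, the type of information requested does not vary for quantitative tasks.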