Different perspectives can be used to help organize a domain of tasks. Traditionally, literacy skills have been categorized by modality into reading, writing, speaking, and listening. Reading and writing are sometimes combined, as they are thought to require similar processes; speaking and listening, by contrast, were judged too costly and difficult to assess and therefore were not included in the survey. Committee members also wanted to include basic arithmetic calculations as part of the assessment, since adults are often required to use printed information that involves these skills. As a result, this aspect of literacy was also included in the surveys.

Work on the contexts of literacy clearly provides one possible organizing principle for what may appear to be a disparate set of literacy tasks. The familiar academic or school context (dealing primarily with prose or connected discourse) contrasts with nonschool or "everyday life" contexts, and the nonschool contexts can in turn be subdivided into work-related and home-related tasks. However, it is operationally difficult to separate tasks along these latter dimensions, since the work and home categories are not mutually exclusive in terms of the literacy tasks engaged in.

Another organizing principle of some appeal involves categorizing literacy tasks in terms of the types of materials or formats in which they occur and examining the associated purposes or uses both within and across materials. The appeal of this type of organizational scheme stems from research literature suggesting that different materials or formats are associated with different contexts and that a significant proportion of adult reading tasks in the context of work involve documents (Jacob, 1982; Kirsch and Guthrie, 1984a; Sticht, 1975)—graphs, charts, forms, and the like—rather than prose. Frequently, these documents are embedded in the contexts of home, work, and community, in contrast with prose, which is most frequently associated with school or academia. Moreover, different materials and formats are often associated with different purposes, and these purposes are frequently associated with different reading strategies. This line of reasoning led to distinctions such as Sticht's "reading to do" and "reading to learn."

As another instance reflecting similar distinctions, the National Assessment of Educational Progress (NAEP) (1972) came to aggregate reading exercises in terms of "themes"—word meanings, visual aids, written directions, reference materials, significant facts, main ideas, inferences, and critical reading. Reference materials and significant facts were among the areas in which young adults aged 26-35 performed better than in-school 17-year-olds, while the in-school 17-year-olds outperformed the young adults in inferences and critical reading. These and other NAEP results suggest the utility of a priori classifications that allow for the examination of differential performance for subgroups both within a single assessment and across groups over time.

In the end, a compromise was reached among the various organizing concepts that was felt to reflect a number of salient notions from the literature. Three scales were hypothesized—a prose literacy scale, a document literacy scale, and a quantitative literacy scale. In this way, it is possible to acknowledge that the structures of prose passages are qualitatively different from the structures associated with documents such as charts, tables, schedules, and the like, and to provide a separate scale for those tasks involving the processing of printed information in combination with arithmetic operations.

The original data from the NAEP Young Adult Literacy Survey (YALS) were subjected to factor analysis to explore dimensionality (Kirsch and Jungeblut, 1986). Following the logic of Cattell's (1966) scree test, the breaks in the pattern of latent roots indicated at least three salient factors, with the possibility of as many as five additional factors. Analysis of parallel random data reinforced the judgment that a three-factor solution was appropriate. For exploratory purposes, however, three separate analyses were conducted: in one analysis eight factors were retained and rotated for interpretation; in another, five factors were retained; and in the final analysis, three factors were retained for rotation and interpretation.
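The scree-test and parallel-analysis logic referred to above can be illustrated with a short sketch. The code below is not the original analysis: it uses simulated item responses in place of the YALS data, and the sample size, number of items, loadings, and number of random replications are illustrative assumptions. It simply shows how observed eigenvalues (the "latent roots" examined in a scree test) can be compared against eigenvalues from parallel random data of the same dimensions.

```python
# Illustrative sketch of a scree test with parallel analysis.
# All data here are simulated; the original study analyzed YALS item data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: n examinees by p literacy items,
# generated from three underlying abilities plus noise.
n, p = 1000, 30
latent = rng.normal(size=(n, 3))
loadings = rng.uniform(0.3, 0.8, size=(3, p))
items = latent @ loadings + rng.normal(size=(n, p))

# Eigenvalues (latent roots) of the observed correlation matrix,
# sorted from largest to smallest -- the basis of the scree plot.
obs_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(items, rowvar=False)))[::-1]

# Parallel analysis: average eigenvalues from random data of the same shape.
n_reps = 100
rand_eigs = np.zeros(p)
for _ in range(n_reps):
    rnd = rng.normal(size=(n, p))
    rand_eigs += np.sort(np.linalg.eigvalsh(np.corrcoef(rnd, rowvar=False)))[::-1]
rand_eigs /= n_reps

# Retain factors whose observed eigenvalue exceeds the random benchmark.
n_factors = int(np.sum(obs_eigs > rand_eigs))
print("Observed eigenvalues:", np.round(obs_eigs[:6], 2))
print("Random benchmark:    ", np.round(rand_eigs[:6], 2))
print("Factors suggested by parallel analysis:", n_factors)
```

In this sketch the first few observed eigenvalues stand well above the random benchmark while the rest fall below it, mirroring the kind of break in the pattern of latent roots that motivated retaining three factors in the reported analyses.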