Once the quality control team at Statistics Canada was satisfied that the data files were indeed clean and of high quality, the records were handed over to ETS for scaling. The test results were analyzed using three scales (prose, document and quantitative) rather than a single scale. Each scale ranged from 0 to 500. As mentioned in the Introduction, the scale scores were, in turn, grouped into five empirically determined literacy levels.
The Item Response Theory (IRT) scaling procedures applied in IALS provided a statistical solution to the challenge of establishing one or more literacy scales for a set of tasks whose ordering of difficulty would be essentially the same for everyone. The scale point assigned to each task was the point at which individuals with that proficiency score would have a given probability of responding correctly. In IALS, the criterion was an 80 percent probability of a correct response. This meant that individuals estimated to have a particular scale score could perform tasks located at that point on the scale with an 80 percent probability of success, and would have a greater than 80 percent chance of performing tasks located lower on the scale. While some tasks fell at the low end of a scale and some at the very high end, most had values in the range 200 to 400. It is important to recognize that the boundaries between literacy levels were selected not because of any inherent statistical property of the scales, but because of shifts in the skills and strategies required to succeed at tasks along the scales, ranging from simple to complex.
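The mapping between a task's difficulty and the scale point assigned under an 80 percent response-probability criterion can be sketched with a simple IRT item response function. The example below uses the two-parameter logistic (2PL) model and invented parameter values for illustration only; the actual IALS scaling used its own model and item parameters, which are not reproduced here.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability that a respondent with
    proficiency theta answers correctly an item with discrimination a
    and difficulty b (all expressed on the same 0-500-style scale)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def rp80_location(a, b):
    """Scale point assigned to an item under the 80 percent criterion.
    For the 2PL model this has a closed form:
        theta = b + ln(0.80 / 0.20) / a
    i.e. the point where p_correct first reaches 0.80."""
    return b + math.log(0.80 / 0.20) / a

# Hypothetical item: parameters are illustrative, not actual IALS values.
a, b = 0.02, 250.0            # discrimination, difficulty
loc = rp80_location(a, b)     # scale point assigned to this item
print(round(loc, 1))          # lies above b, the 50 percent point
print(round(p_correct(loc, a, b), 2))  # 0.8 by construction
```

Note that the RP80 location always sits above the item's 50 percent difficulty point, which is why the same item appears "harder" under an 80 percent criterion than it would under a 50 percent one.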
The primary goal of the IALS was to generate valid, reliable and comparable profiles of adult literacy skill both within and between countries, a challenge never before attempted. The IALS study also set a number of scientific goals, many of which were related to containing measurement error to acceptable levels in a previously untried combination of educational assessment and household survey research.
The findings presented in this monograph leave little question that the study has produced a wealth of data of importance to public policy, a fact that has whetted the appetite of policy makers for more. As with any new measurement technology, however, much room remains for improvement. In each successive round of collection, quality assurance procedures have been enhanced and extended in response to identified problems.18 A recent review of IALS methods, conducted on behalf of the European Union by the Office for National Statistics of the United Kingdom, concluded that the quality and comparability of IALS estimates had improved in each successive round of collection as a direct result of these measures.19 The same report points, however, to a need for continued development through international collaboration on the design, implementation and analysis of data.
18. Murray, T.S., Kirsch, I.S., Jenkins, L.B. (Eds.) (1998). Adult Literacy in OECD Countries: Technical Report on the First International Adult Literacy Survey. Washington, DC: National Center for Education Statistics, United States Department of Education.
19. Carey, S. (Ed.) (2000). Measuring Adult Literacy: The International Adult Literacy Survey in the European Context. London: Office for National Statistics.