Secondary analysis of the 1994 IALS data has yielded consistent evidence that the performance difference between Level 2 and Level 3 on the prose, document and quantitative literacy scales is substantive and corresponds to a significant difference in measurable benefits accruing to citizens in OECD countries (OECD and HRDC, 1997). Results of preliminary analysis of the IALSS data, including the new numeracy scale, are consistent with this finding. For this reason, some of the analyses contained in this report anchor the scales at the cut point between Levels 2 and 3, thus highlighting the distributions above and below this threshold for the prose, document and numeracy domains. In contrast, interpretation of the problem solving domain (see Table I.2) is more complex and no single “desirable” threshold has yet been set.

Thus, the tables and charts included in this report provide multiple ways to examine how the distributions of competencies differ across Canada.

Text box D

Measuring proficiency

For IALSS, each proficiency scale starts at zero and increases to a theoretical maximum of 500 points. Scores along the scale denote the points at which a person with a given level of performance has an 80 percent probability of successfully completing a task at that level of difficulty. For instance, a person with an assessed performance of 250 points has an 80 percent probability of correctly answering a task with an estimated difficulty of 250. The same individual would have a better than 80 percent probability of correctly answering an easier task (about 95 percent for a task with a difficulty of 200) and a lower probability of successfully completing a more difficult task (about 40 percent for a task with a difficulty of 300) (Kirsch, Jungeblut and Campbell, 1992).

Interestingly, while the probability of a correct response may approach zero as tasks become more difficult, it can never quite reach zero, because there is always some chance, however small, that a correct answer will be provided regardless of ability. Accordingly, the results presented in this report measure performance along a proficiency continuum. The scales do not measure the absence of a competency, and thus cannot divide respondents into those who possess a specific competency and those who lack it entirely.
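The response-probability convention described above can be made concrete with a small illustration. The sketch below is not the official IALSS item calibration; it assumes a simple logistic response model in which a respondent whose proficiency equals a task's difficulty answers correctly 80 percent of the time, with a slope value chosen only so that the curve comes close to the probabilities quoted above.

```python
import math

# Illustrative logistic response model (an assumption, not the IALSS calibration).
# Anchored so that P = 0.80 when proficiency equals task difficulty.
SLOPE = 0.035              # assumed change in log-odds per scale point
RP80_OFFSET = math.log(4)  # log-odds corresponding to 0.80, i.e. ln(0.8 / 0.2)

def p_correct(proficiency: float, difficulty: float) -> float:
    """Probability that a respondent at `proficiency` completes a task of `difficulty`."""
    logit = RP80_OFFSET + SLOPE * (proficiency - difficulty)
    return 1.0 / (1.0 + math.exp(-logit))

# A respondent assessed at 250 points, facing tasks of varying difficulty:
for task in (200, 250, 300):
    print(f"difficulty {task}: P(correct) = {p_correct(250, task):.2f}")
# Prints about 0.96, 0.80 and 0.41 with this assumed slope, close to the
# 95, 80 and 40 percent figures quoted in the text. The probability falls
# toward zero for very hard tasks but never reaches it exactly.
```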

The proficiency levels used for IALSS are useful for summarizing the results but also have some limitations. First, the relatively small proportion of respondents who actually reach Level 5 does not always allow for accurate reporting. For this reason, whenever results are presented by proficiency level, Levels 4 and 5 are combined. Second, as shown in Tables I.1 and I.2, the levels correspond to specific sets of abilities, so their thresholds are not equidistant and the ranges of scores they cover are not identical. For the prose, document and numeracy domains, Level 1 captures almost half of the scale; the thresholds for the problem solving domain are set somewhat differently, and there Level 1 covers precisely half of the scale. Level 1 includes all the basic abilities required to attain higher levels. In other words, the ability to read may lie somewhere in Level 1, but the ability to understand and use what has been read comes in gradations of complexity from Level 1 to Level 5. Because Level 1 spans such a large range of scores on each scale, it contains multiple sub-levels of proficiency, ranging from those who can barely read at all to those who read poorly or inattentively.4
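In practice, the levels amount to a banding of the continuous scores, with Levels 4 and 5 combined for reporting. The sketch below is illustrative only: the cut points shown are the conventional values published for the prose, document and numeracy scales in earlier IALS reporting, and are assumptions here; the authoritative thresholds for IALSS are those given in Tables I.1 and I.2.

```python
# Illustrative mapping from a continuous 0-500 score to a reported level.
# Cut points are assumed (conventional IALS values for prose, document and
# numeracy: Level 1 up to 225, Level 2 to 275, Level 3 to 325); consult
# Tables I.1 and I.2 for the authoritative IALSS definitions.
CUT_POINTS = [(225, "Level 1"), (275, "Level 2"), (325, "Level 3")]

def reporting_level(score: float) -> str:
    """Return the reporting category for a score on a 0 to 500 scale."""
    if not 0 <= score <= 500:
        raise ValueError("scores lie on a 0 to 500 scale")
    for upper_bound, label in CUT_POINTS:
        if score <= upper_bound:
            return label
    return "Level 4/5"  # Levels 4 and 5 are combined for reporting

print(reporting_level(250))  # Level 2
print(reporting_level(390))  # Level 4/5
```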