Interpretability focuses on collecting evidence that enhances the understanding and interpretation of what is being measured. In some assessments, the meaning of what is being measured is constructed by examining performance on individual tasks, or by assuming it is inherent in the label used to organize one or more sets of tasks—for example, reading comprehension or critical thinking. All too often, assessments focus on rank-ordering populations or countries by comparing mean scores or distributions. These data tell us that people differ without telling us how they differ. One of the stated goals of the IALS and ALL studies is to address interpretability not only by reporting that countries, groups, or individuals differ in their proficiencies, but also by developing an interpretative scheme for reporting how they differ.

In developing the literacy framework, a set of necessary components has been identified:

  • A framework should begin with a general definition or statement of purpose—one that guides the rationale for the survey and what should be measured.
  • A framework should identify various task characteristics and indicate how these characteristics will be used in constructing the tasks.
  • Variables associated with each task characteristic should be specified, and research should be conducted to show which of these variables account for large percentages of the variance in the distribution of tasks along a continuum or scale. Variables that appear to have the largest impact on this variance should be used to create an interpretative scheme. This is a crucial step in the process of measurement and validation.
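The variance-accounting step in the last bullet can be sketched as an ordinary least-squares regression of task difficulty on coded task-characteristic variables, with R² indicating the proportion of variance those characteristics explain. The characteristic codings and difficulty values below are purely hypothetical, not drawn from the IALS or ALL data:

```python
# Hypothetical sketch: how much of the variance in task difficulty
# a set of coded task characteristics accounts for.
# All codings and difficulty values are illustrative only.
import numpy as np

# Each row codes one task on three hypothetical characteristics
# (e.g. type of match, plausibility of distractors, type of
# information requested).
task_features = np.array([
    [1, 0, 1],
    [2, 1, 0],
    [3, 2, 1],
    [1, 1, 0],
    [3, 2, 2],
    [2, 0, 1],
], dtype=float)

# Scaled difficulty of each task (illustrative values).
difficulty = np.array([210.0, 250.0, 310.0, 230.0, 330.0, 240.0])

# Ordinary least squares: difficulty ~ intercept + characteristics.
X = np.column_stack([np.ones(len(task_features)), task_features])
coef, *_ = np.linalg.lstsq(X, difficulty, rcond=None)
predicted = X @ coef

# R^2: proportion of difficulty variance the characteristics explain.
ss_res = np.sum((difficulty - predicted) ** 2)
ss_tot = np.sum((difficulty - difficulty.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")
```

Characteristics with large, stable coefficients and a high combined R² would be the candidates for building the interpretative scheme; in practice this analysis would be run on the full calibrated item pool rather than a handful of tasks.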

While the chief benefit of constructing and validating a framework for literacy is improved measurement, a number of other potential benefits are also evident:

  • A framework provides a common language and a vehicle for discussing the definition of the skill area.
  • Such a discussion allows us to build consensus around the framework and measurement goals.
  • An analysis of the kinds of knowledge and skills associated with successful performance provides a basis for establishing standards or levels of proficiency. As we increase our understanding of what is being measured and our ability to interpret scores along a particular scale, we have an empirical basis for communicating a richer body of information to various constituencies.
  • Identifying and understanding particular variables that underlie successful performance further our ability to evaluate what is being measured and to make changes to the measurement over time.
  • Linking research, assessment, and public policy promotes not only the continued development and use of the survey, but also understanding of what it is measuring.