Overview of the framework

While there are many approaches one could take to develop a framework for measuring a particular skill area, the diagram shown here represents a process that has been used to construct and interpret the literacy tasks for the National Adult Literacy Survey (NALS) (Kirsch, Jungeblut, Jenkins, and Kolstad, 1993) and for the IALS (OECD and Human Resources Development Canada [HRDC], 1997; OECD and Statistics Canada, 1995; OECD and Statistics Canada, 2000). This process is also being used to develop the reading literacy measure for the Programme for International Student Assessment (PISA) (OECD, 1999). The process consists of six parts, which represent a logical sequence of steps: from defining a particular skill area, to producing specifications for constructing items, to providing an empirically based interpretation of the scores that are obtained.

[Figure: Literacy Framework parts — 1. Defining Literacy; 2. Organizing the Domain; 3. Task Characteristics; 4. Identifying and Operationalizing Variables; 5. Validating Variables; 6. Building an Interpretive Scheme]

Part 1 of the framework focuses on the working definition of literacy, along with some of the assumptions that underlie it. In doing so, the definition sets the boundaries for what the survey seeks to measure as well as what it will not measure. Part 2 discusses how we may choose to organize the set of constructed tasks so that the distribution of a particular skill in the population can be reported to policymakers and researchers. Determining how to report the data should incorporate statistical, conceptual, and political considerations. Part 3 deals with the identification of a set of key characteristics that developers will manipulate when constructing tasks for a particular skill area. Part 4 identifies and begins to define the variables associated with those key characteristics that will be used in test construction. These definitions are based on the existing literature and on experience in building and conducting other large-scale assessments. Part 5 lays out a procedure for validating the variables and for assessing the contribution each makes toward understanding task difficulty across the various participating countries. The final part, Part 6, discusses how an interpretive scheme was built using the variables that the research in Part 5 showed to account for task difficulty and student performance.