In each instance, the factors were rotated to orthogonal simple structure
by the varimax procedure and to oblique simple structure by the DAPPER method
(Tucker and Finkbeiner, 1981). Tasks loading highest on the first and largest
factor seemed to rely heavily on prose comprehension; tasks loading highest
on the second factor seemed to reflect skills in using documents; and tasks
loading highest on the third factor required the application of arithmetic
operations.
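The orthogonal rotation step itself is standard. Purely as an illustrative
sketch (this is not the software used for YALS, and the oblique DAPPER
rotation is not reproduced), Kaiser's varimax criterion can be implemented
in a few lines of NumPy; the function name, arguments, and defaults below
are hypothetical:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Rotate a (tasks x factors) loading matrix to orthogonal simple
    structure using Kaiser's varimax criterion (gamma = 1.0)."""
    p, k = loadings.shape
    rotation = np.eye(k)        # accumulated orthogonal rotation
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # SVD of the gradient of the varimax criterion
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated
                          @ np.diag((rotated ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        if s.sum() < criterion * (1.0 + tol):   # converged
            break
        criterion = s.sum()
    return loadings @ rotation, rotation
```

With gamma set to 1.0 this is varimax proper; other values of gamma yield
other members of the orthomax family. Each task would then be interpreted
by the factor on which it loads highest, as in the description above.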
Interpretation of the five- and eight-factor solutions was much less clear.
Although each revealed three major factors reflecting prose, document, and
quantitative operations, for the most part these rotated solutions provided
interesting clues for possible task modification and for future item
development, rather than clear-cut implications for scaling the existing
data. That is, if desired, one could devise a new set of tasks to isolate
a factor reflecting the importance of procedural knowledge as it might
apply, for example, to entering and using information in forms.
Alternatively, one might prefer to restrict the impact of this type of
knowledge by eliminating such tasks from the assessment. Thus, the empirical
data provided by the YALS not only tended to support the a priori judgment
for the three literacy scales but also suggested ways in which the
assessment could be broadened. It is important to keep in mind that the
three literacy scales are not the only salient dimensions of literacy;
these dimensions are likely to shift with different definitions of, and
perspectives on, literacy.
More recent advisory committees involved with NALS and IALS have agreed
that literacy should not be measured along a single continuum and have
chosen to adopt the general definition and three scales defined here. These
committees further recommended that the new literacy tasks constructed for
each of these assessments be developed to enhance the three existing scales,
that they continue to take the form of open-ended simulations rather than
multiple-choice questions, and that they emphasize measuring a broad range
of information-processing skills covering a variety of contexts.
Identifying task characteristics
Almond and Mislevy (1998) note that variables can take on one of five roles
in an assessment or test. They can be used to limit the scope of the
assessment, characterize features that should be used for constructing
tasks, control the assembly of tasks into booklets or test forms,
characterize examinees' performance on or responses to tasks, or help to
characterize aspects of competencies or proficiencies. Some of these
variables can serve more than one role, helping in the construction of
tasks, in the understanding of competencies, and in the characterization
of performance.
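As a purely illustrative aid (the member names below paraphrase the prose
above and are not terminology from Almond and Mislevy's paper), the five
roles could be labeled as a simple enumeration:

```python
from enum import Enum, auto

class VariableRole(Enum):
    """Five roles a variable can play in an assessment
    (paraphrasing Almond and Mislevy, 1998)."""
    LIMIT_SCOPE = auto()        # limit the scope of the assessment
    TASK_CONSTRUCTION = auto()  # characterize features used to construct tasks
    FORM_ASSEMBLY = auto()      # control assembly of tasks into booklets/forms
    PERFORMANCE = auto()        # characterize performance on/responses to tasks
    PROFICIENCY = auto()        # characterize competencies or proficiencies
```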
A finite number of characteristics are likely to influence students'
performance on a set of literacy tasks, and these can be taken into account
when constructing or scoring the tasks. These characteristics, which are
thought to be important components of the literacy process, were manipulated
in the development of tasks for IALS. They include:
- Adult Contexts/Content. Since adults do not read written or printed
materials in a vacuum, but rather within a particular context or for a
particular purpose, materials for the literacy assessment are selected to
represent a variety of contexts and contents. This helps ensure that no
single group of adults is either advantaged or disadvantaged by the context
or content included in the assessment.