1. The first step is to develop a working definition of the domain including the assumptions underlying it. Before the definition is developed, the domain and the skills and abilities it encompasses are wide open. It is the definition that sets the boundaries for what will be measured and what will not.
  2. Once the definition is developed, it is important to think about the kinds of tasks that represent the skills and abilities included under that definition. Those tasks must then be categorized, or organized, to inform test design and result in meaningful score reporting. Step 2 allows one to move beyond a laundry list of tasks or skills to a coherent representation of the domain that will permit policy makers and others to summarize and report information in more useful ways.
  3. Step 3 involves identifying a set of key characteristics that will be used in constructing tasks for the assessment. This may include characteristics of the stimulus materials to be used as well as characteristics of the tasks presented to examinees.
  4. In step 4, the variables associated with each task characteristic are specified.
  5. In step 5, research is conducted to determine which variables account for large percentages of the variance across the distribution of tasks and thereby contribute most toward understanding task difficulty and predicting performance.
  6. Finally, in step 6, an interpretative scheme is built that uses the validated variables to explain task difficulty and examinee performance.

The work of this panel involved the first two steps: defining ICT literacy and organizing the domain.
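The variance analysis described in step 5 can be illustrated with a minimal sketch. The code below is a hypothetical example, not part of the panel's work: it invents a handful of tasks coded on two task-characteristic variables and computes, for each variable, the proportion of variance in task difficulty it explains (the R-squared from a simple linear fit). All data and variable names (stimulus_complexity, steps_required) are assumed for illustration.

```python
# Hypothetical illustration of step 5: estimate how much of the variance
# in task difficulty each task-characteristic variable accounts for.
# All task data below are invented for illustration only.

def r_squared(xs, ys):
    """Proportion of variance in ys explained by a simple linear fit on xs
    (the squared Pearson correlation)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    if sxx == 0 or syy == 0:
        return 0.0  # a constant variable explains no variance
    return (sxy ** 2) / (sxx * syy)

# Each row codes one task: (stimulus_complexity, steps_required, difficulty)
tasks = [
    (1, 2, 0.20), (2, 3, 0.35), (3, 5, 0.55),
    (4, 4, 0.60), (5, 7, 0.85), (2, 2, 0.30),
]

difficulty = [t[2] for t in tasks]
for name, idx in [("stimulus_complexity", 0), ("steps_required", 1)]:
    values = [t[idx] for t in tasks]
    print(f"{name}: R^2 = {r_squared(values, difficulty):.2f}")
```

In practice this analysis would use many tasks and a multiple-regression or IRT-based model rather than one variable at a time, but the underlying question is the same: which task characteristics carry the most explanatory weight for difficulty.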