Once the researchers are satisfied with the form of the preliminary inventory, it is
useful to circulate the inventory among experts in the domain. One method of obtaining
feedback is to convene a focus group of experts to review and discuss the inventory. In
our research, focus-group participants were given a brief introduction to the goals of
the project and an explanation of the tacit-knowledge construct in non-technical language.
They were asked to judge the construct-relatedness of the inventory questions by
considering whether each question addresses knowledge gained through experience and
fits the definition of tacit knowledge provided. In addition, focus-group participants
were asked to help "fill gaps" and "fix problems" in the inventory. In particular, they
were asked to (a) provide additional, plausible response options for any question;
(b) identify areas of confusion or lack of clarity; (c) identify problems of gender, racial,
or ethnic bias; and (d) identify anything that did not "ring true" in the inventory
questions.
The researcher can use the feedback from the focus group to revise the inventories.
For example, inventory questions that do not receive unanimous, positive judgments of
construct-relatedness may be omitted from the inventory. Similarly, any response
option or scenario feature to which two or more participants object may be omitted.
The focus group may suggest additional response options or scenario features, which
can be added to the inventory. The final result of this test-development process is a
revised tacit-knowledge inventory that can be administered to position incumbents and
used to address further research questions, such as those regarding criterion-related
construct validity.
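The revision rules just described lend themselves to a simple mechanical sketch. The following Python fragment is illustrative only: the data structures (per-question judgment lists coded 1/0, an objection tally keyed by question and option) are our own assumptions, not part of the procedure the authors used.

```python
# Illustrative sketch of the revision rules described above. The data
# structures (judgment lists, objection tallies) are hypothetical
# assumptions, not the authors' actual materials.

def revise_inventory(questions, judgments, objections, n_judges):
    """Apply two revision rules to a draft inventory:
    1. keep only questions whose construct-relatedness judgments are
       unanimous and positive (all judges coded 1);
    2. drop any response option objected to by two or more participants.
    """
    revised = []
    for q in questions:
        # Rule 1: unanimous, positive construct-relatedness judgments.
        if sum(judgments[q["id"]]) < n_judges:
            continue
        # Rule 2: omit options with two or more objections.
        kept = [opt for opt in q["options"]
                if objections.get((q["id"], opt), 0) < 2]
        revised.append({"id": q["id"], "options": kept})
    return revised

# Hypothetical feedback from three focus-group judges.
questions = [
    {"id": "q1", "options": ["a", "b", "c"]},
    {"id": "q2", "options": ["a", "b"]},
]
judgments = {"q1": [1, 1, 1], "q2": [1, 0, 1]}   # 1 = construct-related
objections = {("q1", "b"): 2}                    # two judges objected
print(revise_inventory(questions, judgments, objections, n_judges=3))
```

In this invented example, question q2 is dropped for lacking a unanimous judgment and option b of q1 is dropped for drawing two objections, leaving a revised inventory of one question with two options.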
4.3.4 Summary
The phases described above are all designed to support the construction of tacit-knowledge
tests. The tacit-knowledge items acquired in the interview study form the raw materials
for this construction process. During this process, the tacit-knowledge items are subjected
to qualitative analysis (e.g., sorting into categories) and quantitative analysis
(e.g., obtaining quality ratings). The various phases serve to address two basic questions
about the pool of tacit-knowledge items from which an instrument will be developed. First,
which items are most promising for use in the construction of tacit-knowledge test
questions? Second, what does the underlying structure represented by the tacit-knowledge
items tell us about the structure of the construct domain so that we can design our
tacit-knowledge tests to capture this domain? The result of this process is an inventory
that has greater likelihood of possessing both internal and external validity. We discuss
the issue of validity in the last part of this section.
4.4 Establishing the validity of tacit-knowledge inventories
An important part of developing any test is to establish its construct validity.
Unlike the developers of many cognition-type tests, we do not treat the requirements
that items load heavily on a single factor and predict some external performance
criterion as sufficient grounds for concluding that a test measures the construct of interest.
As
Nunnally (1970) and others have argued, such a "criterion-based" approach to test
development is problematic and often produces measurement instruments of inferior
quality. Specifically, such an approach may yield tests that suffer from low
internal-consistency reliability, poor factor structure, and fragility with respect
to criteria other
than those on which the selection of items was based.
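The internal-consistency reliability just mentioned is conventionally indexed by Cronbach's alpha. The following minimal sketch shows how that index is computed; the score matrix is invented for illustration and is not drawn from the authors' data.

```python
# Cronbach's alpha: the conventional index of internal-consistency
# reliability. The scores below are invented for illustration.

def cronbach_alpha(scores):
    """scores: one row per respondent, one column per test item."""
    k = len(scores[0])                     # number of items
    def var(xs):                           # sample variance (n - 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Three respondents answering two perfectly consistent items:
# parallel items yield the maximum alpha of 1.0.
scores = [[1, 1], [2, 2], [3, 3]]
print(cronbach_alpha(scores))
```

A criterion-keyed item pool, selected only for correlation with an external criterion, tends to be heterogeneous, which depresses the item-variance ratio in this formula and hence the alpha of the resulting test.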
We rely on both theoretical and empirical justifications to establish the validity
of tacit-knowledge tests. We use Messick's (1995) unified validity framework to show
how tacit-knowledge theory and the phases of test development outlined above
contribute to the validity of our tacit-knowledge tests. Messick's framework treats the
traditionally separate forms of validity (i.e., content, construct, and criterion) as aspects
of a more comprehensive kind of construct validity. According to this framework, the
essential goal of test validation is to support, through a combination of theoretical
rationale and empirical evidence, the interpretation of test scores and the uses of scores
under that interpretation.