4. Measuring tacit knowledge

One of the goals of our research is to show that tacit knowledge contributes to successful performance in a variety of domains. That is, we aim to establish a relationship between the possession of tacit knowledge and performance. But how does one proceed to develop a test to measure tacit knowledge? This section addresses the development of tools to measure the amount of tacit knowledge of various kinds that an individual has acquired. We begin by reviewing some approaches that have been used to measure the competencies considered to be relevant to the performance of real-world tasks, and contrast them with our knowledge-based approach. We then discuss what tacit-knowledge tests are intended to measure and offer a general framework for developing and validating such a test through the assessment of everyday situational judgments.

4.1 Methods of measuring real-world competencies

The tacit-knowledge approach to understanding practical cognition is based on several methods of measuring real-world competencies. These include the use of the critical-incident technique, simulations, and situational-judgment tests. We briefly review each of these methods and then discuss how the tacit-knowledge approach draws on certain aspects of them.

4.1.1 Critical-incident technique

The critical-incident technique is an approach that seeks to identify the behaviors associated with effective performance (Flanagan, 1954). According to Flanagan, a critical incident describes the behavior, the setting in which the behavior occurred, and the consequences of the behavior. Critical incidents are generated by asking individuals, typically subject-matter experts, to provide examples of effective and ineffective behaviors. More specifically, individuals are asked, through interviews or open-ended survey questions, to describe several incidents that they, or someone else, handled particularly well, as well as several incidents that they, or someone else, handled poorly (Flanagan, 1954; McClelland, 1976). Boyatzis (1982) used a variation on the critical-incident technique, called the "behavioral event interview," in which he obtained behavioral incidents from individuals identified a priori as either high, medium, or low on effectiveness. He then examined the incidents generated from each group to identify traits and skills that distinguished between effective and ineffective managers.

The "critical incidents" generated from observations, interviews, or surveys are analyzed qualitatively to determine the nature of the competencies that appear important for success in a given task domain. The incidents typically are grouped on the basis of similarity in behavioral content. For example, an incident that pertains to assigning a task to a subordinate and an incident about monitoring task completion by a subordinate might be grouped into a category of supervising subordinates. These categories are used to draw general conclusions about the behaviors that are characteristic of effective and ineffective performers.

The critical-incident technique has two main limitations: it assumes that people can and will provide incidents that are critical to success in their particular jobs, and it assumes that qualitative analysis is sufficient for identifying the underlying competencies. Its value, however, lies in identifying the strategies individuals use to perform various tasks and in examining specific, situationally relevant aspects of behavior. The critical-incident technique has been used successfully in the development of several performance assessment tools, including behaviorally anchored rating scales (BARS; e.g., Smith and Kendall, 1963) and situational-judgment tests (SJTs; e.g., Motowidlo, Dunnette, and Carter, 1990), the latter of which is described in more detail below.