Different computer programs use different methods to count sentences, words, and syllables, which can cause discrepancies even when the programs apply the same formula. Finally, the range of scores produced by different formulas reminds us that they are not perfect predictors. They provide probability statements, that is, estimates of difficulty.
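To make the point concrete, here is a minimal Python sketch, not drawn from any of the programs discussed, in which two plausible syllable-counting heuristics produce different Flesch Reading Ease scores for the same text. The function names and the sample passage are illustrative only; the formula itself (206.835 - 1.015 x words per sentence - 84.6 x syllables per word) is the standard Flesch Reading Ease equation.

```python
import re

def syllables_vowel_groups(word):
    # Heuristic 1: count each run of consecutive vowels as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def syllables_with_silent_e(word):
    # Heuristic 2: same as heuristic 1, but drop a trailing silent 'e' first.
    w = word.lower()
    if w.endswith("e") and not w.endswith("le"):
        w = w[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", w)))

def flesch_reading_ease(text, count_syllables):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch Reading Ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

sample = "The formulas provide estimates. They are not precise measures of difficulty."
print(flesch_reading_ease(sample, syllables_vowel_groups))
print(flesch_reading_ease(sample, syllables_with_silent_e))
```

Because the formula weights syllables per word heavily (a coefficient of 84.6), even small disagreements between counting heuristics shift the final score, which is exactly the discrepancy described above.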

The problem of optimal difficulty

Different uses of a text require different levels of difficulty. As we have seen, Bormuth (1969) indicated that a 35% cloze score was the point of optimum learning gain for assisted classroom reading (see Table 7 above).
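For readers unfamiliar with the procedure behind that figure, the sketch below illustrates a standard cloze test: every fifth word is blanked, and the score is the percentage of blanks the reader restores exactly. This is a hypothetical illustration; the helper names and the passage are not from Bormuth (1969).

```python
def make_cloze(text, n=5):
    # Blank every nth word (the classic cloze procedure deletes every 5th word).
    words = text.split()
    answers = []
    for i in range(n - 1, len(words), n):
        answers.append(words[i])
        words[i] = "_____"
    return " ".join(words), answers

def cloze_score(answer_key, responses):
    # Exact-word scoring: percentage of blanks restored verbatim (case-insensitive).
    correct = sum(1 for a, r in zip(answer_key, responses) if a.lower() == r.lower())
    return 100.0 * correct / len(answer_key)

passage = ("Readability formulas provide estimates of how difficult a text "
           "will be for a particular group of readers to understand")
test_text, key = make_cloze(passage)
print(test_text)
print(cloze_score(key, key))  # perfect restoration scores 100%
# By Bormuth's criterion, a score of about 35% marks the point of optimum
# learning gain for assisted classroom reading.
```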

Vygotsky (1978) supported Bormuth's finding that optimal difficulty lies slightly above the reader's current level of development, not below it. Using books at or below the reader's present level may increase fluency and reading rate, but it does little to improve comprehension.

For this reason, materials intended for assisted reading, where an instructor is available, should be somewhat harder than the readers' tested reading level. Materials for the general public, however, such as medicine inserts, instructions for filing tax forms, instructions for using appliances, and health information, should be as easy as possible (Chall and Dale 1995).

Paul (2003) found that independent reading requires a comprehension score of at least 85% on multiple-choice reading quizzes for readers below the 4th grade and 92% for advanced readers. He also recommended that advanced students who score better than 92% on quizzes be given more challenging material.

The formulas and usability testing

Redish (2000) and Schriver (1991, 2000) promote reading protocols and usability testing as an alternative to the formulas. They feel that usability testing eliminates the need for readability testing. They fail to state, however, how to match the reading ability of test subjects with that of the target audience.

In their work on usability testing, Dumas and Redish (1999) hardly mention reading comprehension. They would have us assume that, if test subjects correctly perform a task, they have understood the instructions. When problems arise, however, it is difficult to locate their source.

In both usability testing and reading protocols, some subjects are more skilled than others in articulating the problems they encounter. Do problems come from the text or from some other source? If they are located in the text, do they come from the design, style, organization, coherence, or content? We are often left with guesswork and trial-and-error cycles of revision and testing.

As experienced writers know, such cycles quickly become expensive. In preparing a document for testing, it makes as little sense to neglect its readability as to neglect its punctuation, grammar, coherence, or organization.

One cannot overemphasize the importance of testing and of frequent contact with members of the target audience before, during, and after document production, as urged by Schriver (1997) and Hackos and Redish (1998). Assessing both the reading ability of the audience and the readability of the text will greatly facilitate this process.