Back, then, to figure 2.2, the diagram of cascading analysis in letter recognition, fortified with the knowledge of our own productive inexactitudes. Let us take the story up from a point where we have a few letters sort of identified. These letters will begin to excite word (or morpheme or letter pattern) units which share features with them and will begin to inhibit those which don’t. Suppose we have almost fully recognised three letters in a row, each letter incorporating a circle, the first with a line running up from it and the last with a line running down. Letters like o, a, p, q, g, b and d will begin to twitch, but even while the cascade toward full letter identification continues, the search for word identification can begin. With the first and last letters containing lines up and down as described, and with three circles in a row, words like dog, dag (to clip the wool from around the tail of a sheep), bap (a bread roll), bop and bog will be tickled.
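To make the mechanics concrete, here is a minimal sketch in Python of that excitation and inhibition: letters feeding the words that share them and suppressing the words that do not. Every name, value and weight in it is an illustrative assumption of mine, not the published model’s implementation.

    # Partial letter evidence: position -> {candidate letter: activation 0..1}.
    letter_evidence = {
        0: {"d": 0.6, "b": 0.6},            # a circle with a line running up
        1: {"o": 0.5, "a": 0.5},            # a circle
        2: {"g": 0.6, "p": 0.6, "q": 0.4},  # a circle with a line running down
    }

    words = ["dog", "dag", "bap", "bop", "bog", "cat"]
    EXCITE, INHIBIT = 1.0, -0.5  # illustrative connection weights

    def word_activation(word):
        """Sum excitation from matching letters, inhibition from mismatches."""
        total = 0.0
        for pos, letter in enumerate(word):
            candidates = letter_evidence[pos]
            if letter in candidates:
                total += EXCITE * candidates[letter]
            else:
                total += INHIBIT * max(candidates.values())
        return total

    for w in sorted(words, key=word_activation, reverse=True):
        print(f"{w}: {word_activation(w):+.2f}")
    # dog, dag, bap, bop and bog all twitch; cat is suppressed.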
You will have noticed that up to now the whole procedure has been bottom-up. The model has simply been reacting to the evidence on the page, the incoming data. From an early stage, however, the model also begins to formulate a hypothesis as to what the word is likely to be, given the context in which it is immersed. It begins to use what it already knows about context and the syntax of its language to search incoming data for such details as will confirm or deny its hypothesis. For example, in a section about dogs, this three-letter word is far more likely to be dog than bog, bap, dag or bop. Prior knowledge has enabled the data to be interrogated and the correct decision approached even while the data search is still incomplete and the data ragged and unclear. This is top-down reading: using the mind’s content, including the context of our present reading, to interrogate data proactively, to presuppose what the text may contain and to examine it for agreement. Is this how we do it? Do we form a hypothesis as to what the text says and then question it?
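Continuing the sketch above, and again entirely illustrative: the bottom-up scores are a near-tie, and an assumed contextual prior for a passage about dogs breaks the tie before decoding is complete.

    # Bottom-up scores from feature matching: a near-tie (illustrative values).
    bottom_up = {"dog": 1.7, "dag": 1.7, "bap": 1.7, "bop": 1.7, "bog": 1.7}

    # Top-down expectation in a passage about dogs (assumed values).
    context_prior = {"dog": 0.9, "bog": 0.1, "bap": 0.05, "dag": 0.02, "bop": 0.05}

    def combined_score(word, top_down_weight=0.3):
        """Blend bottom-up letter evidence with the contextual prior."""
        return ((1 - top_down_weight) * bottom_up[word]
                + top_down_weight * context_prior.get(word, 0.0))

    print(max(bottom_up, key=combined_score))  # dog: context decides the tie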
In the real world of literacy some research indicates that fluent readers reading easy or predictable text may be going too fast for there to be much effect of context or prior knowledge on the reading itself; too fast for there to be much top-down reading. Such reading may be done almost wholly bottom-up - almost entirely on the basis of decoding the text itself. When the reading is not going so fast, for example when the text is more difficult or the reader less fluent, the effect of context and prior knowledge matters increasingly, and reading becomes more biased towards a top-down procedure. It is a flexible system which trades reading procedure off against task difficulty to achieve maximally effective performance. This is the famous interactive-compensatory trade-off between bottom-up and top-down reading (Stanovich, West and Freeman 1981; Stanovich 2000), of which more in chapter four.
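One toy way to render that trade-off, my gloss rather than Stanovich’s formalism: let the weight given to context rise as the quality of bottom-up decoding falls.

    def top_down_weight(decoding_quality):
        """decoding_quality in [0, 1], where 1 means fast, fluent decoding
        of easy text. Context gets whatever weight decoding cannot carry."""
        return 1.0 - decoding_quality

    for q in (0.95, 0.6, 0.2):
        print(f"decoding quality {q:.2f} -> context weight {top_down_weight(q):.2f}")
    # Fluent reading of easy text runs almost wholly bottom-up; difficult
    # text or a struggling reader shifts the balance towards top-down.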
The procedure we have described is feature analysis in cascade. It is called this because bottom-up feature identification travelling one way and top-down proactive feature searching travelling the other are both active at the same time. Data is cascading one way and expectant search the other, simultaneously. The whole chain bursts into a buzz of two-way activity, with a final decision possible a very short time after first presentation of the text. A decision is also possible, if our feature recognition allows for a degree of imprecision, even when the text is partial or distorted, as is so often the case in real life (much handwriting, for example).
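Putting the two directions together, here is a compact sketch of the two-way cascade, illustrative only and far simpler than any published interactive activation model: on every tick, letter evidence feeds word units upward while word activations feed expectation back down to their letters, and a decision settles within a handful of cycles.

    words = ["dog", "bog", "bap"]
    # Partial, imprecise letter evidence by position (illustrative values).
    letters = {0: {"d": 0.4, "b": 0.3}, 1: {"o": 0.4, "a": 0.2}, 2: {"g": 0.4, "p": 0.2}}
    word_act = {w: 0.0 for w in words}

    for tick in range(5):
        # Upward cascade: letter evidence excites consistent words.
        for w in words:
            word_act[w] += 0.1 * sum(letters[i].get(c, 0.0) for i, c in enumerate(w))
        # Downward cascade: active words sharpen expectation for their letters.
        for w in words:
            for i, c in enumerate(w):
                letters[i][c] += 0.05 * word_act[w]

    print(max(word_act, key=word_act.get))  # settles on the best-supported word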