The interactive-activation, or parallel distributed processing, model is one possible explanation for these interesting findings implicating phonological processing in the reading even of simple, common words well within the competence of the subjects usually recruited for this kind of experiment. The model invokes the idea of the brain as a massively parallel processor, routinely handling different aspects of the same task simultaneously. (We have already seen that our brains probably do work in this parallel, distributed way.) It accepts that direct, visual reading is the cheapest, simplest and fastest route, and will be how most reading is done by fluent readers under most circumstances. However, it also proposes simultaneous, or very nearly simultaneous, processing along the phonological pathways, through the phonological lexicon (probably via mandatory spreading activation). This route is much less simple, and its findings may arrive too late to play much part in normal reading; but arrive they probably do, nonetheless, so that they may serve as a secondary way to achieve a difficult reading (or to interfere with lexical decisions during a priming experiment!).
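The dual-route idea can be made concrete with a toy sketch. This is not the interactive-activation model itself, and every word, spelling-to-sound rule and lexicon here is invented purely for illustration: a fast direct visual route looks the spelling up whole, while a slower phonological route sounds the string out and can still deliver a reading (for instance for a pseudohomophone) when the direct route fails.

```python
# Toy sketch of the dual-route idea: a fast, direct visual route and a
# slower phonological route. All words, rules and lexicons below are
# invented for illustration; this is not a real model of English.

ORTHOGRAPHIC_LEXICON = {"scream", "scheme", "brain"}   # known spellings

# Crude, assumed spelling-to-sound rules (not real English phonology)
GPC_RULES = [("sch", "sk"), ("sc", "sk"), ("ea", "ee"), ("eme", "eem")]

def pronounce(letters):
    """Apply the toy grapheme-phoneme rules left to right."""
    sound = letters.lower()
    for graph, phon in GPC_RULES:
        sound = sound.replace(graph, phon)
    return sound

# Phonological lexicon: pronunciation -> known word
PHONOLOGICAL_LEXICON = {pronounce(w): w for w in ORTHOGRAPHIC_LEXICON}

def read_word(letters):
    """Return (route used, recognised word or None) for a letter string."""
    s = letters.lower()
    if s in ORTHOGRAPHIC_LEXICON:          # direct visual route: fastest
        return ("direct", s)
    sound = pronounce(s)                   # phonological route: slower
    if sound in PHONOLOGICAL_LEXICON:      # a pseudohomophone resolves here
        return ("phonological", PHONOLOGICAL_LEXICON[sound])
    return ("none", None)

print(read_word("scream"))   # found directly by its spelling
print(read_word("skream"))   # unfamiliar spelling, rescued by its sound
print(read_word("blurk"))    # neither route succeeds
```

In a real parallel architecture the two routes would race rather than run in sequence; the sequential check here simply reflects the claim that the direct route normally finishes first.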

It is still debatable how strongly the phonology of target items affects the decisions made about them: how strongly does the pronunciation of a letter string reverberate through to affect semantic word recognition? And is there an orthographic effect? There are experimental findings which contradict, or at least considerably dilute, the suggested effect of phonology on word recognition. These experiments seem to indicate that, even in circumstances where the pseudohomophone or pseudomember effects are apparently unequivocally shown, the effect of the targets' orthography may be seriously underestimated. The orthography, the purely visual appearance of letter patterns and strings, may be as important as the apparently pronunciation-mediated effect.

For example, Taft found that the pseudohomophone effect was greater for pseudohomophones spelled with letter patterns similar to those of the mentally associated real word than for those spelled with very different letter patterns.

‘…while homophone decisions to SKREAM are indeed faster than those to SKREME, homophone decisions to SKEAM are slower than those to SKEME. What influences response times is the fact that SKREAM is orthographically more similar to SCREAM than is SKREME, and that SKEME is orthographically more similar to SCHEME than is SKEAM.’ (Taft 1991 p. 65)
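One simple way of quantifying the orthographic similarity Taft appeals to, assumed here for illustration and not the measure Taft himself used, is Levenshtein edit distance: the minimum number of single-letter changes needed to turn one string into another. A minimal sketch reproduces the ordering in the quotation:

```python
# Standard dynamic-programming Levenshtein edit distance, used here as
# one simple (assumed) proxy for orthographic similarity.

def edit_distance(a, b):
    """Minimum number of single-letter insertions, deletions and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete from a
                            curr[j - 1] + 1,            # insert into a
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

# SKREAM is closer to SCREAM than SKREME is...
print(edit_distance("skream", "scream"), edit_distance("skreme", "scream"))  # 1 3
# ...and SKEME is closer to SCHEME than SKEAM is.
print(edit_distance("skeme", "scheme"), edit_distance("skeam", "scheme"))    # 2 4
```

The smaller distances fall exactly where Taft found the faster homophone decisions, which is the pattern a purely visual, non-phonological contribution would predict.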

In other words, letter patterns had considerable non-phonological input to this reading. If researchers find it difficult to resolve precisely the role of phonology in word recognition, far be it from me … Perhaps a wiser man would leave it at that. However, one further experiment, exploring reading for meaning (as opposed to simple word recognition) under conditions of articulatory suppression, is interesting and relevant in this context.

Articulatory suppression: Researchers sometimes ask subjects to perform certain tasks while their phonological system is simultaneously fully occupied with quite another task. Subjects may, for example, as in this instance, be asked to carry out decision-making tasks on items presented on a VDU screen as usual, but to recite something unrelated (for example the numbers from one to ten) continuously while doing so. In this way researchers hope to disable the phonological system, by filling it with work unrelated to the circumstances they wish to manipulate, and thereby to gain some idea of the role of phonological processing in whatever aspect of cognition they are exploring.

The experiment I am about to describe involved subjects reading sentences from the screen and making a yes / no decision as to whether each made sense (e.g. NOISY PARTIES DISTURB SLEEPING NEIGHBOURS makes sense whereas PIZZAS HAVE BEEN EATING JERRY does not). While doing this, one sentence after another, some of the subjects also recited a distractor stimulus aloud (for example the numbers one to ten, over and over again). These subjects were making their decisions under conditions of articulatory suppression, with their phonological processing system continuously filled with nonsense and therefore presumed to be unavailable for work on the decision-making task. (Kleinman 1975, reported in Taft 1991 pp. 72-73.) The subjects who operated under articulatory suppression, in the event, made their decisions much more slowly and made many more errors than those who made their decisions without simultaneously reciting. In other words, reciting unrelated material aloud makes it harder to make sense of text.

This would accord with Adams’ conclusions as to the probable role of ‘the phonological processor’ in reading whole sentences under normal conditions. She regards this role as chiefly to enhance our memory for recent words: holding onto what has gone before in a sentence for long enough to correlate it with what comes later and so, eventually, making sense of the whole thing. ‘Skilled readers can neither remember nor comprehend a complex sentence when they are prevented from subvocalising its wording.’ (Adams 1990 p. 188) (On the other hand, having to recite the Lord’s Prayer over and over, or even just to count repeatedly to ten, would make me much less effective at almost any task, so what this proves I hesitate to claim with any confidence. Have we, thereby, disabled only phonological language?)