
Figure 3.3 Accessing meaning visually and phonically.
To return to the phonics versus visual reading debate, the ‘reading wars’: we have constructed a model which shows how we may do what we clearly can do, namely read phonically. We can read print via sound; but do we? The evidence does not show that we usually do any such thing, merely that we are able to if and when we need to. There is, indeed, a great deal of evidence that under usual reading conditions (by which I mean a fluent reader reading an easy text) we do not read to phonemes, do not read by sound to meaning. (And see Adams 1990, Ellis 1993, Ellis and Beattie 1986, Ellis and Young 1996, Rayner and Pollatsek 1989, Smith 2004, Taft 1991, Underwood and Batt 1986.) Do we ever read by sound?
‘We can be confident that skilled readers do not access meaning via sound.’ (Ellis 1984, p. 55)
I ought to interject a small proviso here, too. This is all a little too bald and inclines, rather improperly, too far towards the visual camp. It depends, in the end, on just what, exactly, we mean by ‘reading’. Much of the above, and some of what follows, derives from priming experiments. (And see notes to this chapter.) These experiments are fascinating, and give considerable usable insight, but they generally tell us about the processes of word identification - the identification of single words, usually isolated single words. They demonstrate that the lifting of single words from text to meaning is indeed a separate process from accessing the pronunciation of the word, and is, certainly with easy or familiar words, done entirely visually. However, we are conversational creatures and take our language overwhelmingly as sound. For most human beings, most of the time, language is experienced as spoken language. All mental activity is subject, as we have seen, to mandatory spreading activation. Stuff activated in one area and one modality (graphemic code, for example) will inevitably reverberate further and, after a small delay, activate related stuff elsewhere and in other modalities (phonemic code, for example). In other words, if we have sight of a word, the sound of it will, after a slight delay, be at least somewhat activated. When we read text we do indeed seem (sometimes at least) to ‘hear’ it as well, with the ‘inner ear’. We clearly do not always do this when simply identifying single words, but it is more likely when we are reading continuous text, in sentences, particularly long, convoluted sentences. There is evidence, indeed, that we probably do make use of the ‘inner ear’ in order to manage ‘reading’ at a higher level than simple word identification - to manage comprehension, in fact.
Adams (1990, p. 414) describes this as ‘…the irrepressible automaticity of skillful readers’ spelling-to-sound translations’ - another way of saying ‘spreading activation’. The first words of a sentence (or phrase) must be remembered until the last are read, of course, if the whole is to be understood. Adams claims that this ‘translation’ into sound is probably a mechanism whereby the text can be held meaningfully in short-term memory, as it unravels, for the purposes of comprehension of the whole. She says that ‘… automatic phonological recoding subserves two distinct and critical processes. First as an alphabetic back-up system … second … it expands the reader’s verbatim memory capacity in support of proper comprehension’ (ibid., p. 191).