However, getting back to our graphemically coded language: just as before, we still have no meaning for this language-as-symbols; we have only linguistic symbols in graphemic code in the visual input lexicon. To reach meaning we must again pass this data further - from that lexicon to the semantic lexicon. Each graphemically coded item of language must access its semantically coded equivalent in the semantic lexicon, at which point language-as-symbol (graphemically coded language) has activated language-as-meaning (semantically coded language) and we can say that the text has been understood and we have read to meaning. (With the same philosophical proviso as previously noted, of course - unless I appropriately associate the semantically coded language I have just read (‘low battery: You should immediately change your battery or switch to outlet power …’) with my real life, it still has no genuine meaning, I suppose.)
Let us consider one more diversion from our main argument, a taster for chapter three’s discussion, and a brief glance at the ancient, but still fulminating, controversy over whether we read (or spell) primarily by visual or phonic attack. Let us note loudly and clearly that the route to reading we have just examined (visual association from graphemes straight to meaning) is the direct route; it is the most economical and the quickest route to reading. It is sometimes referred to as the lexical route. (Ellis 1993, Taft 1991) Let us note that this route made no reference to sound whatsoever (though spreading activation may well reverberate further and reach it). The text reached meaning directly, from visual representation of language-as-symbol (graphemically coded language) to language-as-meaning (semantically coded language). It went directly from page to eye to visual input lexicon to semantic lexicon (and see chapter three and appendix one). It was pure visual attack, and sound was not necessary to it.
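(For readers who find a concrete cartoon helpful, the sketch below, in Python, is one way of picturing this direct route: a toy ‘visual input lexicon’ associates a familiar written form with a graphemic entry, and that entry is associated straight with a meaning in a toy ‘semantic lexicon’, with no phonemic step anywhere. Every name and entry in it is invented purely for illustration; it is a caricature of the route, not a claim about any published model.)

    # Toy illustration of the direct (lexical) route: a familiar written
    # form is associated straight with a meaning; sound plays no part.
    # All entries are invented placeholders.

    VISUAL_INPUT_LEXICON = {
        "battery": "BATTERY",   # written form -> graphemic entry
        "low": "LOW",
    }

    SEMANTIC_LEXICON = {
        "BATTERY": "a device that stores electrical energy",
        "LOW": "small in amount or level",
    }

    def read_direct(written_word):
        """Page -> eye -> visual input lexicon -> semantic lexicon."""
        graphemic_entry = VISUAL_INPUT_LEXICON.get(written_word.lower())
        if graphemic_entry is None:
            return None   # unfamiliar form: the direct route finds nothing
        return SEMANTIC_LEXICON.get(graphemic_entry)

    print(read_direct("battery"))   # meaning reached with no phonemic code involved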
We normally read visually. What, though, if the text is difficult and full of unknown words like lysosome or apocope? An alternative route, though slower, clunkier and more expensive, enables us to read by sound. It takes text first to graphemic code in the visual input lexicon, as it must, but then to a conversion link (a translation service, in effect, translating graphemic code into phonemic code), then, in phonemic code, to the auditory input lexicon and only then to meaning in the semantic lexicon (see figure 3.3 in chapter three). In other words, it decodes text to graphemes, translates these representations into representations of sounds and uses those representations of sounds to reach meaning. This more roundabout route, sometimes called the sublexical route, delivers what is called assembled reading. Because it is less economical, and less elegant, we can assert that assembled reading is inherently much less likely to be the primary way reading is done under usual circumstances, assuming a fluent reader. (Biology is ruthlessly economical, and elegant, at all times.) This sublexical route, from text to sound to meaning, is also clearly unnecessary except for words which are complex, or new to the reader, and must be ‘sounded out’ (‘sublexical’ perhaps, or ‘Taft’).
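(The same sort of cartoon can be drawn for this sublexical route, again with every rule and entry invented purely for illustration: a crude letter-by-letter ‘translation service’ stands in for grapheme-phoneme conversion, and meaning is reached only through the resulting phonemic string, via a toy ‘auditory input lexicon’.)

    # Toy illustration of the sublexical (assembled) route: text is
    # translated letter by letter into crude phoneme symbols, and meaning
    # is reached only through that phonemic representation. The rules and
    # entries are invented placeholders, not a real phonology.

    GRAPHEME_TO_PHONEME = {
        "c": "k", "a": "a", "t": "t",   # a deliberately tiny rule set
    }

    AUDITORY_INPUT_LEXICON = {
        "kat": "CAT",                   # phonemic form -> auditory entry
    }

    SEMANTIC_LEXICON = {
        "CAT": "a small domesticated feline",
    }

    def read_sublexical(written_word):
        """Text -> graphemes -> phonemes -> auditory input lexicon -> semantic lexicon."""
        phonemes = "".join(GRAPHEME_TO_PHONEME.get(ch, ch) for ch in written_word.lower())
        auditory_entry = AUDITORY_INPUT_LEXICON.get(phonemes)
        if auditory_entry is None:
            return None                 # sounding out produced no known word
        return SEMANTIC_LEXICON.get(auditory_entry)

    print(read_sublexical("cat"))       # meaning assembled via sound, the longer way round

(The extra translation step is exactly where the cost of this route lies; in a fluent reader it would be the fall-back, tried only when the direct look-up sketched above comes back empty-handed.)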
A mass of evidence exists to show that readers read visually as a primary strategy and phonically only secondarily, as a ‘useful second pass’ (e.g. in Adams 1990, Ellis 1993, Ellis and Young 1996, Taft 1991, Rayner and Pollatsek 1989), or as part of a probably flexibly applied dual-route strategy (reading primarily visually when text is familiar, phonically when extra support is necessary). (Adams 1990, Coltheart et al 1993, Decker et al 2003, Goswami and Bryant 1990, Rayner and Pollatsek 1991, Smith 2004, Stanovich 2000, Underwood and Batt 1986 and Wray 1994.) Notwithstanding the evidence, and the logic, there continues to be hot, sometimes virulent, argument at chalkfaces, in the media and in government departments. We will examine this controversy (the ‘reading wars’) more thoroughly in the next chapter.
We have known for over a century that mental lexicons are anatomically distinct. We know (and it is logically demanded) that in each lexicon language is represented in a single code: in the visual input lexicon there is only language in graphemic code, and in the auditory input lexicon there is only language in phonemic code, for example. (Ellis 1993, Ellis and Beattie 1986, Ellis and Young 1996, Taft 1991) It has been shown that the routes are distinct, too. (Morton and Gipson, both cited in Ellis 1984; and see Ellis 1993, Ellis and Beattie 1986, Ellis and Young 1996, Taft 1991 and later in this book) The routes we have discussed to date are direct, or lexical, routes, managing language in only a single manifestation corresponding to the sense through which it has been appreciated. All other routes are less direct, clunkier and more expensive, and they deliver assembled, rather than directly accessed, meaning, of which more in chapter three.