Now let us return to the further consideration of language management.

Language management: from text to meaning.

[Figure 3.2. Understanding the written word - the direct route from text to meaning: text → visual analysis (graphemes) → visual input lexicon (words in graphemic code) → semantic system (meaning).]

In figure 3.2 we see an outline model of visual reading - the direct route from print to meaning, a suggested cognitive psychological route to understanding the written word. It works much the same as the previous model in figure 3.1. Text is seen. The information on the page, mostly graphemes, is sent from the eyes to the visual cortex right at the back of the brain. The visual cortex recognises the symbols which have just been seen and, if it decides they are letters, assembles them into the words (or morphemes?) of which they are part. At this moment they are held in a visually recognised form, according to their written symbol manifestation; they are in graphemic code, stored in a visual input lexicon. Each graphemically coded unit, or word, can then be matched with its counterpart in the semantic system, and at that moment we can claim that the written word has been understood.
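The flow of the model can be made concrete as a small lookup pipeline. The following Python sketch is not part of the original text: the dictionary contents and function names are invented for illustration only, standing in for the visual input lexicon and the semantic system.

    # A toy sketch of the direct (lexical) route from print to meaning.
    # The lexicon contents below are illustrative inventions, not a claim
    # about how the brain actually stores graphemic or semantic codes.

    # Visual input lexicon: known words held in graphemic code.
    VISUAL_INPUT_LEXICON = {"cat", "reads", "words"}

    # Semantic system: graphemic entries matched with meanings.
    SEMANTIC_SYSTEM = {
        "cat": "small domesticated feline",
        "reads": "interprets written symbols",
        "words": "units of language",
    }

    def visual_analysis(text: str) -> list[str]:
        """Stand-in for the visual cortex: recognise the symbols on the
        page and assemble them into candidate words (graphemic code)."""
        return text.lower().split()

    def direct_route(text: str) -> list[str | None]:
        """Match each graphemically coded word against the visual input
        lexicon, then retrieve its counterpart in the semantic system."""
        meanings = []
        for word in visual_analysis(text):
            if word in VISUAL_INPUT_LEXICON:
                meanings.append(SEMANTIC_SYSTEM[word])
            else:
                meanings.append(None)  # unfamiliar word: this route fails
        return meanings

    print(direct_route("Cat reads words"))
    # ['small domesticated feline', 'interprets written symbols', 'units of language']

Note that the None case is exactly the gap discussed next: a purely lexical route has nothing to say about words it has never seen.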

So far so good. We have a model which postulates routes for the decoding of language-as-sound to language-as-meaning solely by way of its sounds, and also a model for the decoding of language-as-symbol to language-as-meaning purely visually. We have a model for direct listening to meaning, and one for direct (or lexical) reading to meaning. However, as you would by now expect, things are never quite as simple as that. We know that we can also ‘decode’ writing to sound, and thence to meaning, very easily. We do this, for example, when we read very unfamiliar words. We can even decode absolutely new words (names, for example), or even nonsense, from text into phonemes: ‘’Twas brillig, and the slithy toves did gyre and gimble in the wabe…’, for example. We can also decode phonetically misspelt words to meaning, via sound: ‘Hee iz a phawmiddabel phello’, for example. How could we be doing this? Our model, up to now, has no suggested route by which this might happen, no means of reading text and arriving at the sounds of the language it represents without going via meaning first.

We need to add a route, in other words, to show how we might read indirectly, via sound; how we might read sublexically and produce assembled meanings (sublexical or assembled reading); how we can read phonically, in fact. We need a system which can convert the representation of symbol into the representation of sound, translating graphemically coded language into phonemically coded language and, of course, vice versa. We need a conversion link - a pre-semantic interpreter or translator. This link between language held in graphemic code and language held in phonemic code will explain our ability to read phonically, and also why we sometimes ‘hear’ an inner voice as we read silently, and visually, to ourselves. Our new, latest model is at figure 3.3.
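What such a conversion link does can be sketched in the same toy Python style. The correspondence rules below are a tiny invented subset (real English spelling-sound rules are far richer and context-dependent); the point is only the shape of the mechanism, not its content.

    # A toy grapheme-phoneme conversion link: the indirect (sublexical) route.
    # The rules here are an invented, illustrative subset of English.
    GRAPHEME_TO_PHONEME = {
        "sh": "ʃ", "th": "θ", "ee": "iː",
        "a": "æ", "b": "b", "d": "d", "e": "ɛ",
        "g": "g", "i": "ɪ", "l": "l", "o": "ɒ",
        "r": "r", "s": "s", "t": "t", "w": "w",
    }

    def graphemes_to_phonemes(word: str) -> str:
        """Convert graphemic code to phonemic code without consulting
        meaning at all - which is why nonsense words work too."""
        phonemes, i = [], 0
        word = word.lower()
        while i < len(word):
            for size in (2, 1):  # try two-letter graphemes before single letters
                chunk = word[i:i + size]
                if chunk in GRAPHEME_TO_PHONEME:
                    phonemes.append(GRAPHEME_TO_PHONEME[chunk])
                    i += size
                    break
            else:
                i += 1  # no rule for this symbol: skip it
        return "".join(phonemes)

    # Nonsense from Jabberwocky: no lexical entry, yet pronounceable.
    print(graphemes_to_phonemes("brillig"))  # -> brɪllɪg
    print(graphemes_to_phonemes("wabe"))     # -> wæbɛ

The point of the sketch is only that the conversion consults no meanings at any stage: once a word is in phonemic code, meaning can be reached through the auditory route of figure 3.1, which is how a phonetic misspelling like ‘phawmiddabel’ can still be understood.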