At a minimum we must have at least one semantic lexicon where we hold language-as-meaning. We must have at least one auditory lexicon where we hold language-as-sounds. We must have at least one visual lexicon where we hold language-as-visual-symbols. And you may be fluent and literate in different languages with different scripts. If so, you have a separate set of replica lexicons for each language. (As before, I am tempted to write that the mind boggles except that, as we have already seen, it doesn’t even come close to boggling.)

Let us consider language management, and the procedures it entails, in a little detail. Later we will consider the significance of all of this essential neuro-stuff to the teaching and managing of literacy (see chapter three).

Understanding speech: From sound to meaning.

Figure 2.1 Language management in the left brain.

Beginning, then, with speech: with language-as-sounds coming in from the ears and reaching area 1 on figure 2.1. Area 1 is the auditory association area – the area which receives the sounds of speech from the ears and then makes appropriate associations among them; the area which recognises spoken language as spoken language. (Areas 1 and 2 are also called Wernicke’s area, after Carl Wernicke, who first surmised their function in the late 19th century.) In this auditory association area, incoming noises are analysed until they are recognised as the constituent sounds of speech. (This is not, incidentally, a simple matter! Speech is usually extraordinarily indistinct, messy stuff – sounds are often swallowed and lost or blended into other sounds, and word boundaries regularly disappear. Language is enthusiastically mangled.)

However, our auditory association area somehow contrives to recognise (or even reconstruct for us) the constituent sounds of speech, its phonemes. (A phoneme is the smallest distinct linguistic sound, usually smaller than a syllable – ‘ea’ and ‘t’ in ‘eat’, for example; ‘t’, ‘ea’ and ‘ch’ in ‘teach’; or ‘c’, ‘a’ and ‘t’ in ‘cat’.) This raw material, this collection of phonemes, is then passed on to area 2 on figure 2.1. This is the speech association area. Here the phonemes are assembled into mental representations of language, but only of language-as-sounds. At this point spoken language has been recognised, but it is represented purely by its sounds: a collection of phonemes represented in what we can call phonemic code. This code is held in a mental lexicon where language coming in from the ears is stored in phonemic form, and the area can therefore be called an auditory input lexicon.
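
For readers who find a rough computational analogy useful, the flow just described – noises resolved into phonemes, phonemes assembled into phonemic code, and that code matched against an auditory input lexicon – can be sketched as a simple lookup. This is an illustrative sketch only: the phoneme spellings, lexicon entries and function names are invented for the example, and it is not a model of how the brain actually does any of this.

```python
# A loose computational analogy of the flow described above, not a claim
# about the brain. All names and entries are invented for illustration.

# A toy "auditory input lexicon": word forms keyed purely by their sounds
# (here written as rough phoneme spellings rather than real phonetic symbols).
AUDITORY_INPUT_LEXICON = {
    ("ea", "t"): "eat",
    ("t", "ea", "ch"): "teach",
    ("c", "a", "t"): "cat",
}

def recognise_phonemes(noises):
    """Stand-in for the auditory association area (area 1): treat each
    incoming noise as a candidate phoneme. Real speech is far messier."""
    return tuple(noise.strip().lower() for noise in noises if noise.strip())

def look_up(noises):
    """Stand-in for the speech association area (area 2): assemble the
    phonemes into phonemic code and match it against the input lexicon."""
    phonemic_code = recognise_phonemes(noises)
    return AUDITORY_INPUT_LEXICON.get(phonemic_code, "<not yet recognised>")

print(look_up(["c", "a", "t"]))    # -> cat
print(look_up(["t", "ea", "ch"]))  # -> teach
print(look_up(["b", "oggle"]))     # -> <not yet recognised>
```

The point of the analogy is simply the division of labour the chapter describes: one step recognises the sounds, the next assembles them into a code, and recognition succeeds only if that code already has an entry in the auditory input lexicon.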