Speaking and writing: from meaning to sound or symbol.

What about when we produce language, either as sound (speech) or text (writing)? To do either, we first have to activate meaning in our semantic lexicon: mental representations of meaning in the form of language. We have to come up with something to say, in semantic code. (Philosophy intrudes again: ‘who’ activates this meaning? Where does the original impulse to access these words come from? How could any of this start? My only defence is to look shifty and move on.) Let us quickly, if timidly, abandon philosophy and take the cognitive story up from the point at which ‘I’ have, in fact, thought of some meaning and decided either to put it in writing or utter it as speech. ‘I’ have somehow begun activating entries in my semantic lexicon and am ready to go.

These activated representations of language-as-meaning, language in semantic code, will be passed to the appropriate motor areas of the brain (areas concerned with formulating commands, usually to muscle groups). There, they will be associated with their motor coded counterparts: instructions to the relevant muscle groups for appropriate action. For example, data relating to the language I wish to write will be passed to area 7 in figure 2.1, the hand motor area. Here, instructions are formulated and issued which cause my hand to move in such a way that I write my meaning down. The semantically coded language-as-meaning is translated into motor coded language-as-hand-movements and I write my meaning. Should I wish to speak my meaning instead, I must send the data, the semantically coded language-as-meaning, from the semantic lexicon at area 5 to Broca’s area at area 6. There, appropriate instructions to the muscle groups responsible for the production of speech will be formulated and distributed; the semantically coded language will be translated into motor coded language-as-speech-movements and I will speak my words.

To write, I activate language-as-meaning, in semantic code, translate it in the hand motor area into instructions to the groups of muscles which write, and write my meaning into symbol. To speak, I activate language-as-meaning, in semantic code, translate it in Broca’s area into instructions to the groups of muscles which enable me to speak, and speak my meaning as sound.
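
For readers who, like me, sometimes find a mechanical caricature helpful, this flow can be sketched in a few lines of Python. It is a toy of my own devising, not a claim about how brains compute; every name in it is invented for the occasion.

    # A toy sketch with invented names: meaning is activated in a
    # 'semantic lexicon', then handed to a motor area that translates it
    # into muscle instructions: hand movements for writing, speech
    # movements for speaking. Sound appears nowhere in it.

    def activate_semantic_code(idea):
        """Stand-in for thinking of something to say, in semantic code."""
        return {"meaning": idea}

    def hand_motor_area(semantic_code):
        """Translate semantic code into (pretend) writing instructions."""
        return "hand movements that write: " + semantic_code["meaning"]

    def brocas_area(semantic_code):
        """Translate semantic code into (pretend) speech instructions."""
        return "speech movements that say: " + semantic_code["meaning"]

    def produce(idea, mode):
        semantic_code = activate_semantic_code(idea)
        motor_area = hand_motor_area if mode == "write" else brocas_area
        return motor_area(semantic_code)

    print(produce("I will speak my words", "speak"))
    print(produce("I will speak my words", "write"))

The only thing the toy is meant to show is the shape of the flow: semantic code in, motor code out, with no phonological step in between.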

To reiterate a point made earlier about reading to meaning, which applies similarly to writing our meaning: the procedures I have described which enabled me to write my meaning made no reference to sound. I took semantically coded language directly to the hand motor area and that area delivered motor instructions to my hand. I was not required to activate phonemically coded language at any point. This is the most direct route from meaning to writing: a direct, or lexical, route. It is obviously the cheapest and simplest way from meaning to writing and is therefore much the most likely candidate for how we do this in usual circumstances. My activation of semantically coded language may reverberate as far as sound in my brain; it may also activate phonemically coded language there, and I may ‘hear’ what I write as I write it, but this auditory activation is not necessary for me to write (in usual circumstances and with simple or well-known language). It is secondary to most writing. Perhaps, for example, I write easy language such as ‘I will speak my words’ using direct, lexical activation but consult sound when I attempt ‘activation of semantically coded language’? Perhaps we flexibly deploy a dual-route strategy? This thinking will be germane to our discussion of the whole language vs. phonics controversy, the ‘reading wars’, in chapter three.
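
The dual-route suggestion can be caricatured in the same toy style. Again, nothing here comes from a real model; the word list, the ‘sounding out’ and the function names are all invented for illustration. The sketch writes well-known words directly and consults sound only when the direct lexical route draws a blank.

    # Another toy with invented names: a 'dual-route' writer. Familiar
    # words go straight from meaning to hand movements (the direct,
    # lexical route); unfamiliar words are sounded out first (the
    # phonological route).

    FAMILIAR_WORDS = {"i", "will", "speak", "my", "words"}

    def sound_out(word):
        """Crude stand-in for phonological mediation: letter by letter."""
        return "-".join(word)

    def write_word(word):
        if word.lower() in FAMILIAR_WORDS:
            return f"write '{word}' directly (lexical route)"
        return f"write '{word}' via sound {sound_out(word)} (phonological route)"

    for w in ["speak", "activation"]:
        print(write_word(w))

The order of the two branches is the point: the cheap direct route is tried first, and sound is consulted only when it fails.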

To summarise the first two chapters thus far:

- We have glanced at the wiring: how the cerebral cortex is probably structured and how it probably works.
- We noted that particular activities tend to be performed in specific areas of the cortex, and we have mapped some of these.
- We have discussed, in some detail, the basic neuroanatomy of language management in the left cerebral cortex.
- We have examined the various mental lexicons which correspond to the various logically deduced manifestations of language.
- We have suggested that analysis of information takes place in columns of cells arranged into modules by function.
- We noted that a truly stupendous amount of interconnection between modules is built into the system, giving it the potential for almost unlimited capabilities.
- We saw that connections between neurons can be weakly or strongly excitatory or inhibitory.
- We have not yet said so aloud, but learning is presumably the establishment of particular associations, and we imagine these must be realised as particular connections and patterns of connections among neurons and the tiny modules they make up. Mental connections and patterns must in some way correspond to physical connections and patterns (a guess caricatured in code below; and see notes to this chapter).
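
That last guess about learning can itself be caricatured. The sketch below is the crudest possible Hebbian rule, assumed for illustration rather than drawn from anything in these chapters: a connection carries a weight, positive for excitatory and negative for inhibitory, and the weight is nudged upward whenever the two units it joins are active together.

    # A toy Hebbian rule with invented names: repeated co-activation of
    # two units strengthens the connection between them. An inhibitory
    # connection would simply carry a negative weight.

    def hebbian_update(weight, pre_active, post_active, rate=0.1):
        """Strengthen the connection when both units fire together."""
        if pre_active and post_active:
            weight += rate                   # association reinforced
        return max(-1.0, min(1.0, weight))   # weight stays in [-1, 1]

    w = 0.0                                  # a new, unformed connection
    for _ in range(5):                       # five co-activations
        w = hebbian_update(w, pre_active=True, post_active=True)
    print(f"connection strength after repeated pairing: {w:.1f}")

Repeated pairing strengthens the connection; in this crude picture, that strengthening just is the learning.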

We are now in a position to consider four very basic psychological concepts: spreading activation and cascading analysis, together with top-down and bottom-up processing. We will then be able to see how these might impinge on the management and learning of literacy by real brains in the real world.