Chapter two notes.

Pattern Associators: Neural nets.

Pattern associators (or neural nets) are a plausible paradigm for learning, memory, recall and association. Pattern associator (or neural) nets, networks of nets and networks of networks of nets (and so on, I suppose) provide a simple and elegant model which is nevertheless robust and economical, in respect of both hardware and software, in computer-speak. It requires little hardware (just a few neurons connected up) and its programmes are simple and easily written; indeed there may be just a single programme, with the net's synapses setting themselves across the net to values appropriate to their input according to a single rule, the 'Hebb rule' - of which more later. Biology is always maximally simple, elegant, robust and economical.

It is easy to see how what we know the mind actually does could be done using this paradigm; procedures like parallel distributed processing, spreading activation and association fit readily into a neural net theory. Neural nets, as envisaged in the pattern associator paradigm, would be elegant, but they would also be automatic: they would work under their own control, without reference to 'external' control, or rules of any kind. Neural nets would, in fact, robustly and reliably self-manage. They would easily learn - adopting and accumulating new and appropriate networks in response to inputs, and then delivering new and appropriate behaviours.
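The 'single programme' idea can be sketched in a few lines of code. This is only an illustrative sketch, not the model from the text: the pattern sizes, values and learning rate are all made up, and the Hebb rule is shown in its simplest outer-product form (each weight grows in proportion to the product of the activities of the two units it connects).

```python
import numpy as np

# Illustrative sketch of a pattern associator trained by the Hebb rule.
# (All patterns and the learning rate here are invented for the example.)

a = np.array([1.0, -1.0, 1.0, -1.0])   # input pattern (4 units)
b = np.array([1.0, 1.0, -1.0])         # output pattern to be associated (3 units)

lr = 0.25                              # learning-rate constant
W = lr * np.outer(b, a)                # Hebb rule: delta w_ij = lr * b_i * a_j

# Recall: present the input again; each output unit sums its weighted inputs.
recalled = W @ a
print(np.sign(recalled))               # recovers the sign pattern of b
```

Note that no 'external' controller sets the weights: the one rule, applied across every synapse at once, is the whole programme.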

Neural nets have also been modelled on computers. When tested, the models 'behave' in ways very characteristic of us humans: they 'learn' similarly and make the same kinds of errors as we do. None of this proves, of course, that we actually use neural nets in our real brains, but the model is very appealing and credible. It has also suggested some fruitful theoretical approaches to teaching literacy which, in my experience, when adapted for use, have delivered unusually swift and dependable learning in practice - learning that felt 'natural' to the students doing it. I present, in these notes, a simplified version of the account in Rumelhart & McClelland's excellent first volume (1986, pp. 31-40).

You will remember that neurons (nerve cells) in the brain throw out, and receive, thousands of connections (synapses) to and from other brain cells, near and far away. It is all these interconnections between neurons that form the circuitry itself. You will also remember that synapses can deliver or receive either positive or negative impulses, and that these can vary in strength. Neurons can, in fact, strongly or weakly excite or inhibit each other.
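The excite-or-inhibit idea above reduces to a weighted sum. A minimal sketch, with invented numbers: each incoming connection carries a weight that may be positive (excitatory) or negative (inhibitory), strong or weak, and the receiving unit simply adds up its weighted inputs.

```python
import numpy as np

# A single receiving unit with three incoming connections.
# Weights are invented for illustration: a strong excitatory synapse,
# a moderate inhibitory one, and a weak excitatory one.

inputs  = np.array([1.0, 1.0, 1.0])    # all three sending units active
weights = np.array([0.8, -0.5, 0.1])

net_input = inputs @ weights           # 0.8 - 0.5 + 0.1
fires = net_input > 0                  # crude threshold at zero
print(net_input, fires)
```

Here excitation outweighs inhibition, so the unit fires; flip the weights' signs and it would be silenced instead.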

[Figure: A pattern associator net (Rumelhart & McClelland 1986)]