And why (going all the way back to feature analysis), if we operate by seeking identifying features in letters, should we not do exactly the same for whole words? (That is probably exactly what I was doing when I read that email.) Suppose that, in order to identify letters, which commonly appear in so many different forms, we apply a list of 12 tests: does it have a curve here, a line there, a crosspiece there, and so on. Why should we not be able to apply just as many, and just as general, tests to whole words, thus recognising them directly? Look at figure 2.5.

Three sentences, each written in a different manner.

Figure 2.5

Did you manage to read those three sentences? Of course you did. Did you notice, though, that the last word in each was visually identical but ‘read’ differently? To have managed that reading without a hitch, you must have been ignoring the detail of the letters and reading the words whole (and using context, of course).

We will be meeting her at the station at 4 PM tomorrow.

Figure 2.6

Letters, anyway, often disappear altogether. You read figure 2.6 easily enough, yet the ‘ing’ in ‘meeting’ and the ‘ion’ in ‘station’ have become mere squiggles in which letters are barely hinted at.

Now try figure 2.7. It is written in minimal feature text, a text in which only the minimal features of each letter have been retained. You will find this ‘depleted text’ easy enough to read; if this book were written in such text you would, by now, find it about as easy to read as normal text. The alphabet was designed as part of an experiment to determine how much of each letter of the lower-case alphabet could be eliminated without seriously affecting legibility.

an alphabet designed as part of an experiment to determine how much of each letter of the lower case alphabet could be eliminated without seriously affecting legibility

Figure 2.7