Churchland then spends a chapter explaining how neural networks can encode linguistic, emotional, social, and moral understanding. Part one concludes with a chapter on common failures of the brain and what they teach us about its operation; this also touches on some of the neurochemical complexities neglected by the hitherto purely neural perspective.
This is popular science at its best and a worthy companion to Churchland's earlier Matter and Consciousness, which was a broader introduction to the philosophy of mind. Part one of The Engine of Reason does a brilliant job of explaining the complex ideas involved without assuming any prior knowledge of either computer science or neurobiology.
Part two goes on to explore the philosophical consequences of the ideas presented in part one. It is both more controversial and, at least for those already familiar with the basic ideas of neural networks and neurobiology, more interesting.
Churchland begins with what is probably the most contested concept in philosophy of mind — consciousness. He gives examples of how other apparent mysteries were later brought within the scope of scientific theories and suggests that consciousness may be similar. He then counters the arguments of four philosophers who have proposed theoretical objections to the very possibility of an explanation of consciousness — Leibniz, Nagel (and his famous bat), Jackson (and Mary, his colour-deprived neuroscientist), and Searle. Having cleared this debris out of the way, Churchland presents a concrete proposal for how consciousness may actually work.
The possibility of machine consciousness is the subject of the next chapter. Churchland recounts his experiences at the 1993 Loebner competition and explains why the Turing Test isn't a good test for intelligence. He goes on to answer some of the in-principle objections raised to machine consciousness, notably Searle's idea of "intrinsic meaning" and Penrose's quantum consciousness. This is solid stuff, but Churchland does make one egregious mistake here. He correctly points out that "classical machines are limited to computing mathematical functions whose inputs and outputs can be expressed as ratios of whole numbers", but he goes on to claim that neural networks do not have this limitation, since their computations range "over the full range of real numbers, not just the rational ones, ... over the true mathematical continuum"! This is later put forward as evidence that neural networks are non-algorithmic, a claim somewhat difficult to reconcile with the existence of extensive bodies of work on distributed, parallel, and approximate algorithms.
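To see why the claim is hard to sustain, here is a minimal sketch (mine, not Churchland's, with weights invented purely for illustration) of a tiny feedforward network written as ordinary code. Every weight and activation is a finite-precision floating-point number — that is, a rational — and every step is a perfectly ordinary, terminating algorithmic operation.

```python
# A tiny two-layer feedforward network in plain Python.
# All values are IEEE-754 doubles (hence rationals), and the whole
# computation is a straightforward, terminating algorithm.
import math

def forward(x, w_hidden, w_output):
    """One forward pass: hidden = tanh(W_h x), output = W_o hidden."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w_output]

# Hypothetical weights, chosen only for illustration.
w_hidden = [[0.5, -1.2],
            [0.8, 0.3]]
w_output = [[1.0, -0.7]]

print(forward([0.25, 0.75], w_hidden, w_output))
```

Any network we can actually simulate or build is in the same position, so invoking the "true mathematical continuum" does nothing to place neural networks beyond the reach of algorithms.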
Churchland also resurrects that hoary chestnut, the thought experiment which argues that no program being run "on" the population of China could have sensory qualia (a variation on which appears as Searle's "Chinese Room"). He accepts this, arguing that it is only an objection to functionalism. But what happens if we have the population of China instantiate a neural network, surely a much more obvious thought experiment than having it simulate a serial computer? Churchland clearly thinks there is a necessary connection between functionalism and symbolist approaches to mind, but he never explains why.
The next chapter is a bit of a mixed bag. Churchland returns to consciousness with a critical look at Dennett's ideas on the subject; he also critiques some of Chomsky's ideas about language and its relationship to thought. But his central thesis here is that all human cognitive activities — language, science, art, music, morality — are carried out by the same kind of neurocomputational structures. I found this unconvincing, since it seems to me that there is a difference between things such as visual perception and consciousness on the one hand and scientific discovery, art, or morality on the other: while I have no doubt that they all rest on the same kind of neural networks at the "bottom", I think there is room for levels of structure above that. (To extend one of Churchland's own analogies, it doesn't follow from life being entirely physical that anything particularly interesting can be said about it at the atomic level.)
The final chapter looks at some of the social implications and possible applications of neurotechnology. Churchland's suggestion that neural networks will play a major role in assisting doctors and judges is probably not so controversial (an earlier wave of proposed applications for expert systems has accustomed us to the general idea). But the idea that most criminals will have neurobiologically distinguishable brains strikes me as an example of reductionist excess. Criminality is largely a social phenomenon and, like scientific discovery or art, seems to me unlikely to be directly mappable to events at the neural level.
Agree or disagree with him, however, Churchland is always worth reading. He argues his position forthrightly and provocatively but without becoming strident or tendentious. The Engine of Reason is a challenge to proponents of alternative views of mind and brain as well as an introduction to connectionism and neurobiology for the layperson.
September 1996