Could a Machine Have a Mind?
12th July 2014

At first sight this seems quite ridiculous. How could metal and silicon ever be conscious, have original thoughts, appreciate a sunset, or have a sense of humour?

Computers only do what they are programmed to do – how can they have free will? And what about the soul? Surely only creatures of flesh and blood have minds? Where is the flickering flame of the soul inside a machine? How could a mere machine have a point of view – a moral position – rights and obligations…? How could a machine care?

The philosopher Wittgenstein once asked: ‘Why do people always say it was natural for men to assume the sun went around the earth and not that the earth was rotating?’

A friend replied: ‘Well, obviously, because it just looks as if the sun is going round the earth.’

‘So,’ says Wittgenstein, ‘what would it have looked like if it had looked as if the earth was rotating?’

This story has a parallel in the way in which we think about our own consciousness. Think of what consciousness feels like to you right now. Does it feel like billions of tiny atoms wiggling in space? Well, no… But the materialist view of mind says that that is all there is. How then should we reply? What would consciousness feel like if it did feel like billions of atoms vibrating in space?


We live at a time when our newspapers, television and radio networks are being dumbed down and our toasters, washing machines and vacuum cleaners are becoming “smarter”. We apply the term “smart” to devices with less intellect than a lobotomised ant. The new hardware in our kitchen makes “smart” decisions to mix the perfect dough. ‘Terry the talking toaster’ worries about getting just the right shade of brown on our muffins. The smart card in your pocket is “smart” purely by virtue of having a memory. But such autonomous devices are nothing new – the spinning governor on top of a steam engine and the thermostat on the wall make equally ‘smart’ decisions. Is it really ‘smart’ to steer a bomb down an Arab ventilation shaft? If we seek the road towards truly intelligent machines then perhaps the best directions we could get would be “Well, I wouldn’t start from here if I were you.”

In 1950 Alan Turing wrote a seminal paper in which he asked the question: could a machine think? The question, he said, is meaningless as stated and inevitably leads to a pointless wrangling over the definitions of machine, mind and thought. How can we recognise minds in other humans, in animals, let alone in a machine? Can we understand the mind purely in terms of atoms wiggling in space, or does this leave something out? Can we know whether there is anyone there to care inside a lobster’s shell or behind the shining face of a robot?

Turing proposed a purely operational test for the presence of mind which he thought avoided the problems of abstract definitions. We’ll get on to this in a minute. First, however, I feel we should attempt a thumbnail sketch of mind.

Minds first and foremost are agents – they are affected by and affect their environment – they have causal powers. Minds have beliefs about their environment and in some cases about themselves. Those beliefs lead to desires which can be satisfied by the formulation of intentions – minds are the cradle of behaviour. These are necessary but by no means sufficient conditions for mind. The properties we have mentioned are exhibited by all kinds of agents – both animal and mechanical – which we would accept are mindless. Does a virus or a prion protein have a mind? Does a thermostat have a mind? Some early practitioners of artificial intelligence (AI) have famously said yes. After all: it senses its environment, it has beliefs, a desire to achieve some goal and the means to change its environment to meet that desire.

What is missing here?

What distinguishes mind from mere mechanism is content and meaning. Mental states are more than just symbols in the sense that words are merely symbols for what they represent. The mind has intrinsic meaning or intentionality – mental states are directed – they have aboutness. My thought about a cat is more than just a reference – it is somehow about that cat in a way that is qualitatively more than just syntax or pattern. It has furriness. It purrs. But how can matter be intrinsically about anything? The patterns of ink in a book are not about anything until interpreted by an external mind. But how can something wholly within the inner experience of mind somehow be directed at something outside itself?

The problem of content and meaning is intimately connected to the problem of consciousness. Consciousness is the place where it seems to us that everything comes together: it is the Centre of Narrative Gravity. Rather than atoms wiggling in space, we feel our thoughts playing before us on the stage of a Cartesian theatre. Consciousness raises two difficult questions – qualia and subjective experience: why should consciousness feel like anything at all?

Colours, smells, feelings, pleasure, pain – these are the elusive qualia that keep philosophers of mind awake biting their pillows. When I look at the sunset I have a sense of redness which has nothing to do with some analysis involving wavelengths of light. Why does what I experience feel like red and not green? Why does red look like anything at all? When you look at the same sunset, do you have the same inner experience? We can describe the redness we see to each other, but here we are just attaching labels within a public language to something which is entirely private to our own experience. Qualia have no representational content – no functional role to play – and yet they are essential to provide the underlying meaning to our mental symbols. Mental symbols such as “cat” are on one level just syntax – firings of particular groups of neurons – and yet these firings can instantiate a whole inner experience of what “cat” means to me by triggering all sorts of dimly associated memories of long-lost qualia. Without qualia to ground out our references, our thoughts are mere shapes without content.


Consider also the problem of free will. Can we grant a mind to something that merely follows the inexorable laws of stimulus-response? Free will – the ability to make unforced selections from alternative moves – is a requirement of any complex autonomous agent. It is more than mere decision making in the sense of devices like thermostats, where the decision follows irrevocably from the laws of physics. How we select our next move from the thicket of conflicting goals is mostly buried beneath the threshold of consciousness. Memory, genetic pre-disposition, conditioned response, cost-benefit analysis, planning and folk psychology all play a part. It is a moot point, however, whether our choices are truly free. Is free will just an illusion where our conscious mind rationalises a purely unconscious decision after the event so as to fit in with our conscious beliefs?


And what of emotion? Does a mind without emotion have any meaning? Evolution has shaped our minds as emotional engines: our decisions, thankfully, are not based on logic alone but are influenced by our emotional state. Without emotion a complex society like ours is unthinkable. We would be incapable of making any decision involving other human beings – there are just too many variables for us to calculate all the possible consequences. Emotions such as trust, friendship, love and envy allow us to enter into meaningful social contracts with each other – to behave altruistically and co-operate in situations where cold logic would advocate selfishness.

Then there is the problem of other minds – is there actually anybody else out there? You know that you have a mind – you have Descartes’ proof after all. But how do you know that I have a mind? Perhaps I can ask you to suspend disbelief on that one for a while longer.

Descartes unfortunately took all this a little too seriously and thought that all he could know for certain was that he had a mind. Everything else, even the existence of his body, could be an illusion, a distortion of the truth controlled by an evil demon. Today we are of course familiar with this concept – we call it Peter Mandelson. 

We confer the accolade of mind-having on each other chiefly through circumstantial evidence – we converse, exchange ideas, observe each other’s behaviour and responses. It is largely language that provides us with the litmus test for mind. Then again, we share a common origin and structure – we use induction to assume that similar patterns of thought must be going on in your head as in mine.

What about animals? Is there a smooth continuum of mind from nematode to nuclear physicist, or is there some sudden phase transition from which minds like ours crystallise?

I can happily say ‘My dog and I, we played ball together’ but ‘Me and my oyster…’? There’s something wrong there – oysters are terrible at catching anyway. This question of the attribution of mind has serious moral consequences that run right through our society – medical ethics, animal rights, cruelty and cookery. A mind grants a point of view, a right to an opinion, the ability to care. When does an unborn foetus acquire a mind as opposed to just a brain: at birth, at three months, at conception? Is this the correct or only basis for ethical decision making anyway?

This problem of devising a suitable test for mind was considered by Alan Turing when he asked whether a machine could think. He proposed a test where a computer and a human each communicate with an interrogator in another room via teletypes or Internet terminals or whatever. The interrogator asks questions of each in turn about anything he or she likes. These would not be yes/no questions but open-ended questions such as ‘What do you think of Shakespeare’s sonnets?’ or ‘Tell me about your childhood.’ After a given time, an hour or so perhaps, the interrogator would have to identify which was the computer. If he cannot tell after this time then, Turing asserted, the computer is indeed thinking – not just generating appropriate answers but having some genuine understanding of the content of the conversation, including its own role.

This is now known as the Turing test and has never yet been passed by any computer. It is also true that most computer scientists I know would fail the Turing test themselves. But how valid is such a test? What is it actually a test for?

Clearly, it is not a test for mind per se – a chimpanzee is rational, emotional, has beliefs and desires and is conscious of itself as an individual. Yet the chimp fails the Turing test miserably. Turing takes a strictly behaviourist approach and says that if the external behaviour is comparable then what does it matter what is going on inside? The Turing test uses language in the same way the physicist uses particles in a cyclotron – firing them at some target and making deductions from what sprays off.

Consider an alternative version of the test played with a man and a woman. The man gives truthful answers while the woman tries to make the interrogator believe that she is in fact the man. What if the woman succeeded in convincing us that she was the man? Would it prove she had the mind of a man? No. But surely it would show that she had a thorough understanding of what it is to be a man: attitudes, opinions, modes of thought, sense of humour, conversational style. Most women would probably say that this was a pretty trivial achievement in any case – could the man ever successfully fathom the complexities of a woman’s mind?

What if the computer passed the Turing test? Surely we could say the computer is just simulating intelligence, and simulation is not the same as the real thing. Just because the outward signs are the same doesn’t mean the same process is going on inside. The lights may be on but there’s no-one at home. Computers these days simulate all sorts of things – the economy, airflow across a turbine, the weather, for example. We don’t get wet from a simulation of a rainstorm inside a computer – it’s not real – so why should a simulation of thinking be any different?

Some would argue that it is purely a matter of viewpoint. If you were a simulated human being caught in a simulated hurricane inside a computer program, would you say that the hurricane was just a simulation? The claim here is that we judge phenomena by their effects within our medium of experience. We can class some phenomenon, X say, as a hurricane if it has the same behaviour and the same abstract patterns within its medium of expression as a real hurricane in a weather system.

Hmmm…but surely it is the medium of X which is the essence, the very X-ness of X. You surely cannot separate the pattern from its medium of expression. A hurricane is only a hurricane because of its effects relative to a real weather system. A simulated hurricane may share the abstract mathematical patterns but that doesn’t make it real in any medium. This leads us to ask whether the arithmetic done by computers is therefore just a simulation of arithmetic – not real arithmetic at all. Can only human mathematicians with real minds perform real arithmetic?

I think ultimately we have the same problem with machines as we have with each other. We can only gauge the level of understanding in any mechanism, biological or electronic, through empirical observation. Perhaps the best we can do with mind is to use the pornography test – ‘I don’t know exactly how to define it but I know it when I see it.’

 

So we’ve tiptoed our way around the concept of mind and, in the best philosophical tradition, added fluff to the question rather than answering it. Perhaps we can begin to be a little more precise when we ask what we mean by a machine.

In 1900 the German mathematician David Hilbert asked whether mathematics was complete, consistent and decidable. Could the whole of mathematics be deduced from its basic axioms by applying simple logical steps one after the other?  Hilbert was asking whether doing mathematics was in principle a mechanical process. Was there some definite process or method by which all mathematical problems could be decided, all truths discovered?

In 1936 the young Alan Turing devised a startlingly simple machine which could in principle be used to decide such questions – the Turing Machine.

A Turing machine has an infinitely long tape divided up into squares. Each square is either blank or contains a symbol. The tape passes through a reader which examines only one square at a time. The machine can perform only five simple actions: move the tape one square to the left, move it one square to the right, write a symbol into the current square, erase the current square, or STOP. The machine has a fixed number of internal states, numbered 1, 2, 3 and so on. Each state has a simple rule which says what action the machine should take if the current square is blank and what to do if it is filled. For example, the rule for state 4 might say: if the square is blank then write a symbol and go to state 6; if the square has a symbol then move left and go to state 9. To start the machine we feed the first square of the tape into the reader, put the machine into its starting state, state 1 say, and off we go. On each tick of the clock the machine looks at the current square and applies the rule for its current state, writing to or moving the tape and changing its state in the process.

The machine will shuffle the tape backwards and forwards, reading and writing marks on the tape until it enters a state where the action is STOP. At this point the pattern of marks on the tape gives us whatever answer the machine has produced. And that’s all there is to it. By using a suitable code to make patterns of marks and spaces stand for letters or numbers, we can use such a device to compute anything we want it to. If you think of the input tape as the data and the table of rules as the program, you can see how a modern digital computer operates in principle. Turing showed that any process which was computable could in fact be handled by such a simple machine – and this includes much of formal mathematics: theorem proving, arithmetic.
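For readers who like to see the nuts and bolts, here is a minimal sketch of such a machine in Python. The rule table, the ‘add one in unary’ example and the choice of ‘*’ as the only mark are my own illustrative assumptions, not anything from Turing’s paper.

```python
# A minimal Turing machine simulator following the description above:
# an unbounded tape of squares, a read head, and a table of rules
# keyed on (state, current symbol).
from collections import defaultdict

def run_turing_machine(rules, tape, start_state="1", max_steps=1000):
    """rules maps (state, symbol) -> (action, next_state); actions are
    'L' (move left), 'R' (move right), 'W' (write a mark), 'E' (erase), 'STOP'."""
    squares = defaultdict(lambda: " ", enumerate(tape))   # blank squares are spaces
    head, state = 0, start_state
    for _ in range(max_steps):
        action, next_state = rules[(state, squares[head])]
        if action == "STOP":
            break
        if action == "L":
            head -= 1
        elif action == "R":
            head += 1
        elif action == "W":
            squares[head] = "*"
        elif action == "E":
            squares[head] = " "
        state = next_state
    return "".join(squares[i] for i in sorted(squares))

# Illustrative rule table: scan right over a block of marks and append one
# more mark, i.e. add one in unary notation.
rules = {
    ("1", "*"): ("R", "1"),     # keep moving right while we see marks
    ("1", " "): ("W", "2"),     # first blank square: write a mark
    ("2", "*"): ("STOP", "2"),  # then stop
}
print(run_turing_machine(rules, "***"))   # prints '****'
```

The point of the sketch is only that the whole apparatus is a lookup table and a moving head; everything else is a matter of coding.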

So what about Hilbert’s goal of placing mathematics on a sound basis? Well, unfortunately both Turing and Gödel effectively killed that hope for ever. Turing showed that there was no way of deciding in advance whether any Turing machine would ever stop once started. In other words, there is no general method for deciding whether a given mathematical problem can be solved. Even worse, Kurt Gödel showed that mathematics was incomplete – there would always be truths about mathematics which mathematics itself could never prove. Human knowledge will never be complete. As we will see later, this has been pounced upon by some philosophers as proof that machines can never think.


We have now defined what we mean by a computing machine. A computer can be built from anything as long as it can be mapped onto some equivalent Turing machine. We build them using electronic logic gates on wafers of silicon but could just as easily compute with contraptions using billiard balls, DNA, valves and water pipes, or beer cans and windmills. Computation then is universal, ubiquitous, commonplace. So what about the brain? Is the mind just software running on a biological computer? Is thinking simply the manipulation of symbols in some abstract Turing machine?

 


Cognitive science uses a functional model of the mind which makes just this assumption. Mental processes are modelled by moving symbols around between various functional states such as belief, desire and intention according to simple logical rules. It attempts to model the richness of our subjective experience in terms of block diagrams and arrows.

The evangelical wing of cognitive science, termed artificial intelligence or AI, has the objective of replicating certain properties of mind, such as cognition, perception, problem solving, planning and understanding. Disciples of AI pursue their holy grail of a conscious machine – a being aware of itself in the world as itself. You don’t actually need the beard and sandals - but it helps.

The early decades of AI were dominated by a functional approach to the mind which models perception and problem solving in terms of predicate calculus and formal logic. We can call this High Church Computationalism, and like most religious sects it had some charismatic high priests. Their motto can be summed up as “anything you can do I can do meta”. The central dogma was that intelligence was just a matter of amassing enough inference rules which could be applied in a top-down fashion against facts in the real world. The air was full of the smell of incense and unbridled optimism: ‘Oh, we’ll have a human-level intelligence well before the end of the century,’ intoned the high priests thirty years ago.

This top-down approach had some early successes. A program rather optimistically called ‘General Problem Solver’ could prove a wide range of mathematical theorems from first principles by rigorously applying logical rules – even deriving some entirely novel proofs. Systems appeared that encoded human expertise in the form of know-how, facts and inference rules. These so-called ‘expert systems’ could give impressive human-level performance in specialised fields such as oil prospecting, diagnosing lung disease, recommending drug therapy, and deducing chemical structures. These expert systems were rigorous and methodical, could explain their reasoning, justify their conclusions and were often more reliable than the human experts. A lot of companies in the City are using such ‘expert in a box’ systems to make buying decisions.

Just as the geneticist has his fruit fly and the neuroscientist his giant squid axon, so AI has its favourite laboratory playthings. The chessboard and worlds of coloured blocks moved around a virtual tabletop were the lab rats for these early programs.

But beyond these narrow areas of specialised expertise, these AI systems hit a brick wall. The problem arose when AI tried to move into domains requiring more general knowledge about how the physical world works - common sense reasoning. Outside the toyland of the blocks world AI ran up against something called The Frame Problem.

 


Let me tell you a story. Once upon a time there was a robot called R1 whose only purpose was to fend for itself. One day its designers gave it a little problem to solve:

Its spare battery, its only energy supply, was locked in a room with a ticking time bomb due to go off soon. R1 quickly found the room and the key to the door and formed a plan to rescue its battery. Inside the room was a trolley and on the trolley was the battery. So R1 deduced that the plan PULL THE TROLLEY OUT OF THE DOOR would result in moving the battery out of the room. Unfortunately the bomb was also on the trolley, and as soon as it got outside the room it exploded. R1 knew the bomb was on the trolley but had missed the implication that moving the battery also moved the bomb.

That was the end of R1.

Ah, said the scientists, we need a new robot which will see the implications and the side effects of its actions before it does them. So they duly built a new robot called Robot-Deducer – R1D1 – and gave it the same problem. Straight away, R1D1 hit on the plan PULL THE TROLLEY OUT OF THE DOOR, but then set about computing all the possible things that moving the trolley might cause. It had just finished calculating that moving the trolley would not change the colour of the walls when the bomb went off.

That was the end of R1D1.

Well, said the scientists, we must teach it the difference between relevant and irrelevant implications so that it can ignore all irrelevant implications. So they built a more complex robot called Robot-Relevant-Deducer, or R2D1, and gave it the same problem. They were puzzled to see R2D1 just sitting outside the room, humming quietly to itself and computing madly. ‘Do something!’ they yelled at it. ‘I am,’ it said. ‘I am busy ignoring these ten thousand implications I have calculated are irrelevant. As soon as I compute a new irrelevant implication I just put it on a list of things I must ignore and…’

The bomb went off.

The moral is: if we are ever to build a robot with the real-world flexibility and robustness of R2D2 then we must solve the frame problem – how to deal with the billions of conflicting axioms, rules and implications about how the physical world works in a way that homes in on just that small set of rules relevant to the task in hand.

 


Common sense reasoning is the essence of the frame problem. People like Hubert Dreyfus believe that an essential requirement for solving the frame problem is having a body. Humans have common sense intelligence by virtue of interacting physically with the material world. As we develop mentally we compile not syntactic rules but sequences of learnt behaviour which relate to each other in complex ways. Common sense consists not of rules but of procedural know-how, rules-of-thumb, smart moves. Watch a small child playing with water. She pours it from container to container – learning about the physical and behavioural properties of liquids, gravity and containers.

Intelligent systems in the real world must solve the frame problem every time they decide on their next move. Minds like ours have the ability to look ahead and assess the possible consequences of a number of candidate moves. As Karl Popper said: we can let our hypotheses die in our stead. No system of rules, however complex, can ever capture the flexibility and generality of this procedural knowledge. Because computers can only follow formal rules, says Dreyfus, they will never be capable of creative thought. To yearn for a beaker full of the warm south one must first be capable of drinking it.


AI comes in two strengths:

Strong AI says that mind is just computation and nothing more. Thinking is just a matter of executing the right algorithm or program on the right set of symbols. It doesn’t matter what is performing the computation - a digital computer, a brain, a contraption of beer cans and windmills – all that matters is getting the right functional states in the right causal relationship. Consciousness and sensation then are emergent properties which supervene on the physical mechanism purely by virtue of the computation it is performing. The exact details of how this computation is realised in matter are irrelevant.

In contrast, weak AI says mind can be simulated by some appropriate computation, but whether there would actually be any inner sensation, consciousness or true understanding going on is an open question.

Not surprisingly, strong AI has many opponents.  Firstly there are external objections that argue that a computation could never even successfully simulate a human mind. Then there are the deeper arguments that allow that a simulation is possible but there could never be an inner conscious experience going on. Philosophers have the habit of clutching a concept like AI to their bosom and running with it headlong over the nearest cliff. As in the cartoons, they hang in the air, supported briefly by their arguments until they realise their predicament and gravity takes effect.

Ned Block says that if strong AI is true then it would be possible to create what he calls the Chinese Brain. The population of China is roughly the same as the number of neurons in the brain. Thus in principle we could make China function as a human brain by instructing each Chinese citizen to play the role of a single neurone. Each citizen receives and passes on messages, via telephone say, from other connected citizens in a manner exactly parallel to the way real neurones behave. Oh I know this is a gross over-simplification of the neuro-biology but then – hey – this is philosophy. So, if strong AI is true, says Block, then we should succeed in creating something which has a mind in its own right, over and above the minds of the citizens taking part.

The question is: would such a system actually instantiate anything we would recognise as consciousness and, if so, how could we tell anyway?  Would it replicate the mind in all its richness of inner experience or just simulate its external behaviour?

Well, as our existence proof perhaps we could use the Turing test, or have the brain control a robot. Of course each citizen in the experiment experiences nothing but the passage of simple messages and certainly does not feel any ‘thought’ process going on. No single citizen is any more conscious than a single cell in the brain. No dot in a newspaper photograph has any conception of the picture of which it forms a part.

Our intuition, of course, is to say that the Chinese nation could not constitute some communal mind. But this is a conceptual problem common to many thought experiments of this kind - we have great difficulty in allowing for mental processes in anything markedly different in size or material from a human brain.

The philosopher Leibniz recognised this with his analogy of a mill:

‘Perception and all that depends on it,’ he said, ‘are inexplicable by mechanical causes – by figures and motions. Supposing there were a machine whose structure produced thought, sensation and perception – we could conceive of it increased in size until one was able to enter its interior, as one would into a mill. But, as we walk around, we find only pieces working mechanically one upon the other. Never would we find anything to explain perception.’

Does our intuition tell us anything different if we shrink each citizen down to the size of a cell, the whole country to the size of a football, and speed up the whole process by a factor of a thousand? What if we then replace each tiny citizen by a silicon chip programmed to give just the same simple response to its input signals? Does the picture begin to change for us? Take the top off someone’s head and peer inside. We see nothing but cells signalling to each other, sodium and potassium gates opening and closing mechanically.

The fact that it is at best an open question whether the Chinese brain would be conscious would seem to put the skids under strong functionalism. If strong AI is true then it shouldn’t be an open question, since it says that being in some defined mental state is purely a matter of the computational mechanism being in some functional state. Since this functional mapping is replicated perfectly by our Chinese citizens, there should be no question that anything necessary for mind is missing here. Are you convinced?

 


Other people have attacked strong AI using Gödel’s theorem to show the limits of formal systems such as computer programs. Gödel showed that for any consistent formal system rich enough to express arithmetic, such as the rules of a computer program, there will always be true statements which the system can never deduce as true by following the rules of its own logic. These truths, called Gödel sentences, are effectively of the form ‘this statement cannot be proved within this system’ and are sophisticated versions of the Liar Paradox, where I say ‘I am lying’, for example.

Suppose there was some marvellous machine which could derive and print out all the true statements of mathematics and science. Call it the Universal Truth Machine or UTM. It would be possible to construct a well-formed statement using the axioms of mathematics which effectively said: ‘UTM will never print out this statement’. Call this statement G. Well, if UTM ever did print out G then G must be false. But then UTM will have printed out a false sentence, which it cannot do. So UTM will never in fact print out G – and so G is true, as it says it is. But then we have a true statement that UTM cannot print out. So a real Universal Truth Machine is impossible.
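For the record, the argument can be set out schematically (the notation here is my own shorthand, not Turing’s or Gödel’s):

```latex
% Write Prints(x) for "UTM eventually prints the statement x".
% G is constructed so that it asserts its own unprintability:
\[
  G \;\Longleftrightarrow\; \neg\,\mathrm{Prints}(G)
\]
% 1. Suppose UTM prints G. Then Prints(G) holds, so G is false;
%    but UTM prints only true statements. Contradiction.
% 2. Hence UTM never prints G, i.e. \neg Prints(G) holds.
% 3. That is exactly what G asserts, so G is true, and it is a truth
%    that UTM can never print.
```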

People such as J. R. Lucas and, famously, Roger Penrose at Oxford have eagerly used this narrow theorem about the nature of mathematics to claim that human minds are forever superior to mechanical systems. The human mathematician, says Penrose, can see the truth of a Gödel sentence that any proposed mechanical system such as UTM can never prove. Human minds then can deduce what is provably not computable by machine. He cites mathematical insight as one example of a mode of thought that is evidently not using an algorithmic style of deduction. So Penrose is saying the human mind has deductive powers beyond the limits of Gödel’s theorem and hence cannot be an algorithmic system – a computer program.

So – does this finally prove the fallacy of AI? Well no, not at all in fact. Penrose’s argument depends on the spurious claim that human rationality – human mathematical ability – is consistent. He compares the human mind with a consistent formal system such as UTM and cries: “Look, see how it has magical powers to compute the uncomputable.” But Gödel’s theorem only applies to consistent systems and says nothing about what can be deduced by an inconsistent system. Penrose is ignoring the fact that the mind is shot through with all sorts of inconsistency and self-contradiction. Evolution has equipped us with a mind which is a rag-bag of tricks and smart moves; rules-of-thumb for staying alive long enough to reproduce. Human reasoning provides no guarantee of soundness or freedom from error, and yet Penrose insists that this must be a property of any machine intelligence. There is no perfectly sound algorithm for winning chess, and yet this does not prevent computers using heuristic algorithms from beating Grand Masters. As long as the algorithm being used is good enough, a guarantee of its soundness is not essential. I would claim that it is only by virtue of this computational sloppiness that our minds have the robustness and flexibility they have.

So desperate in fact is Penrose to elevate the mind beyond the clutches of mere machinery that he appeals to some new, as yet unexplained, theory of quantum gravity. There is, he claims, something essentially uncomputable at the heart of physics which the brain uses to achieve its powers of insight. The cytoskeleton of the brain’s neurons apparently supports an infinite number of simultaneous quantum computations that collapse under quantum gravity to a single conscious insight. The fact is that Penrose’s quantum gravity theory of mind is in serious danger of collapsing under its own weight.

 


But perhaps the most powerful critic of strong AI is the American philosopher John Searle. He says that computers are incapable of true understanding because they only manipulate symbols according to their formal syntax – they have no knowledge of the meaning of the symbols, only their shape. His illustration of this is the so-called Chinese Room experiment.

Suppose, says Searle, that there is a computer program that can pass the Turing test and answer any questions in English. What if we change languages and use Chinese instead? He imagines himself locked in a room with an input slot, an output slot, and a bucket of Chinese characters on slips of paper. In through the input slot come pieces of paper on which are written questions in Chinese. Searle cannot speak a word of Chinese but he does have a huge book of rules (in English) which tells him which characters to paste together to make up an answer, which he then posts through the output slot. To an observer outside, the room appears to have a good understanding of Chinese, producing as good a set of answers as any native Chinese speaker. But, says Searle, to me in the room it is just a load of squiggles. By following the rule book I am just doing what a computer does, and yet I have gained no understanding of Chinese. Thus a computer doing the same task has no understanding either. QED.

 

How valid an argument is this?

The obvious reply is that any understanding is a property of the system as a whole - the rule book, the slips of paper and the mechanism (Searle in this case) which applies the rules. Searle here is playing the role of the computer hardware, which only has ‘understanding’ of how to shuffle symbols about the room – the real understanding of Chinese is a property of the software – the rule book, the slips of paper and the functional states that exist between them. The fact that Searle is himself conscious here is totally irrelevant to his functional role in the room.

Searle’s next move is to say: ‘Suppose I memorise the program and execute it in my own head rather than from the rule book. I can escape from the Chinese Room and wander around having conversations in Chinese. The whole system is now part of my mind - but I still cannot understand a word of Chinese.’

Searle is trying to pull a fast one here. For Searle’s argument to work we need to be able to imagine actually simulating the program in our heads. Any program for which this is possible is necessarily so simple as to have no understanding of Chinese, almost by definition. The program we are talking about would in reality be tens of thousands of pages of code, and Searle would need to keep track of thousands of variables in his head. Searle might of course object here and say that understanding cannot be just a matter of writing a complicated enough program. How could ‘just more of the same’ result in understanding? But you could apply this argument just as well to the brain – no small enough part of the brain (a group or sub-system of neurons) can in itself show understanding, so how can the whole brain (just more of the same) understand Chinese?

 

But OK - suppose we accept Searle’s proposal in principle. Does Searle now have any understanding of Chinese in his own mind? The answer of course is still no. However, it is a very different question whether the ‘program’ itself has understanding.

Searle says that when he memorises the rules and runs everything in his head then any consciousness which arises must surely be within him. But surely what we have here is two mental systems implemented within the same physical space. The organisation which gives rise to the Chinese understanding is quite separate from the mental system which gives Searle his own experiences.

Think of what we mean by a physical system such as a brain, a computer, or a demon in a Chinese room performing a computation. Such a system must have a complex functional organisation where its physical parts have a causal relationship corresponding to the functional relations of the program. The rules and slips of paper in the room are not just a pile of formal symbols, as Searle thinks, but represent a complex web of causal links. Searle acts here purely as an unconscious mediator which serves to realise these causal links between the paper symbols. It is these physical causal interactions which instantiate the computation, rather than the abstract computation itself, which are the basis for consciousness.

It is very easy to parody Searle’s Chinese Room. A good example is given by Paul Churchland, who describes an experiment called the Luminous Room.  Rather than arguing against strong AI, he argues against a mythical credo called strong EM, which claims that light is identical with interacting electromagnetic fields.

So, to prove the case against strong EM, let’s put Searle in a dark room with a battery, a coil of wire and a bar magnet. ‘Look,’ says Searle, ‘no matter how much I wave the magnet around in the coil, the room stays dark. This proves of course that interacting electric and magnetic fields are themselves insufficient to cause illumination, and so some mysterious factor must be missing.’

AI today is like Schrödinger’s cat: in a state of uncertain health. The computer Deep Blue experiences no more joy on checkmating Garry Kasparov than an egg beater gets from whisking the perfect meringue. High Church AI has achieved some success, but the frame problem has shown the limits to what can be achieved using rules alone.

An alternative approach which has grown in recent years is called connectionism. It borrows many of its basic ideas from the way the brain itself is constructed. Rather than having a single complicated processor, why not have many simple units all connected to each other in a vast network? Each cell in the network simply takes the signals from the cells connected to its inputs, adds them up and, if the sum exceeds some threshold, fires its own signal forward to the next layer of cells. Such systems, often called neural networks, attempt to build a mind from the bottom up rather than the top down. Unlike rule-based programs, knowledge is not held in explicit symbols but is distributed throughout the connections of the entire network. Problem solving and perception arise spontaneously as properties of the whole rather than as a sum of the parts.
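To make the ‘add them up and fire’ idea concrete, here is a minimal sketch of such a threshold unit in Python. The weights and thresholds are illustrative numbers of my own choosing; a real network would have many layers of such cells and far more connections.

```python
# A single threshold unit: sum the weighted input signals and fire (output 1)
# if the total exceeds the cell's threshold, otherwise stay quiet (output 0).
def unit_output(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# A layer is just a list of (weights, threshold) pairs, one per cell.
# Every cell sees the same inputs and fires independently.
def layer_output(inputs, layer):
    return [unit_output(inputs, weights, threshold) for weights, threshold in layer]

# Two cells wired to three inputs: the first fires if at least two inputs
# are active, the second only if all three are.
layer = [([1.0, 1.0, 1.0], 1.5),
         ([1.0, 1.0, 1.0], 2.5)]
print(layer_output([1, 1, 0], layer))   # prints [1, 0]
```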

Neural nets are very adept at recognising patterns in fuzzy information and classifying things into categories – recognising handwritten characters, for example, or faces in fuzzy photographs. They can be used for almost any complex task, however: controlling robot arms, expert decision making, spotting flaws in ceramics, or sniffing for drugs.

Neural networks have a number of key differences to normal computer programs.

Firstly, they are not programmed explicitly by specifying rules and instructions in the conventional sense. Instead they are taught their task by being given examples of the sorts of thing they must recognise. Without any expert programming they will learn from example and then generalise this skill to cope with new cases they have never met before. This is a crucial advantage. Conventional programs must be given clean, accurate data; neural networks cope fine in noisy environments where the information is muddy or partial.
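As a toy illustration of teaching by example rather than by explicit programming, here is the classic perceptron learning rule applied to a trivially small task (learning logical OR). The task, the learning rate and the number of passes are my own illustrative choices, far simpler than anything a practical network would face.

```python
# Perceptron learning: show the unit examples, nudge its connection weights
# whenever it gets one wrong, and repeat until the mistakes stop.
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # (inputs, target)

weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                          # a few passes over the examples
    for inputs, target in examples:
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        output = 1 if total > 0 else 0
        error = target - output              # +1, 0 or -1
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print(weights, bias)                         # the learned connection strengths
print([1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0
       for inputs, _ in examples])           # prints [0, 1, 1, 1]
```

No rule for OR was ever written down; the behaviour lives entirely in the learned weights.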

The other key difference is their robustness. Conventional software is brittle - a single error in the logic of our program and it shatters - the whole edifice comes crashing around our ears. A neural network in contrast is plastic, not brittle - its knowledge is a property of the whole network and no single part of the network holds any particular symbol or rule. We can remove bits of the network piece by piece and it just degrades gracefully, with none of the sullen silences so familiar in normal computers.

So we have a clash of dogmas here - it’s High Church against the Zen Holists of Connectionism. Both sides grope towards the terra incognita of mind from opposite ends of the scale.

But do neural nets offer us a possible route to the artificial mind? Not in this simple form at least – any neural network can be simulated by a set of rules or a Turing machine and look where that got us. What is lacking is the means to interconnect networks into larger cognitive systems which incorporate complex feedback loops.

What is needed is a pinch of chaos.

 

In New Mexico the Santa Fe Institute is a centre for the study of chaos and complexity. There are many familiar examples of chaos in daily life – the pattern of drops from a dripping tap, turbulence in the rising column of smoke from a cigarette, the eddies of boiling water in a kettle, the unpredictability of the weather. Chaotic systems are totally deterministic, governed wholly by simple laws of physics, and yet are in practice totally unpredictable. A tiny variation in starting conditions can lead to immensely different consequences – that damned butterfly in South America whose flapping wings cause hurricanes in Europe, for example.
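The sensitivity to starting conditions is easy to see for yourself with the logistic map, a standard textbook toy model of deterministic chaos (my example, not one borrowed from Santa Fe). Two trajectories that start a hair’s breadth apart agree for a while and then diverge completely.

```python
# The logistic map x -> r * x * (1 - x): a perfectly deterministic rule
# whose trajectories nevertheless depend exquisitely on the starting value.
def trajectory(x, r=4.0, steps=50):
    values = []
    for _ in range(steps):
        x = r * x * (1 - x)
        values.append(x)
    return values

a = trajectory(0.200000)
b = trajectory(0.200001)        # differs only in the sixth decimal place

for step in (1, 10, 25, 49):
    print(step, round(a[step], 4), round(b[step], 4))
```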

Over the years a somewhat clichéd catchphrase has developed – “evolution to the edge of chaos”. Life itself is the best example of a complex network which teeters on the very edge of chaos. It has evolved by a process of increasing complexity, driven initially by the simple laws of physics and later by competition and selection. Organic chemicals form a complex network where each chemical or enzyme catalyses the formation of the next. Eventually the reaction chain feeds back on itself – later products catalysing products earlier in the chain. The system becomes a complex self-organising entity. Molecules capable of replicating themselves become more numerous in the soup. Some, like DNA, become particularly efficient and drive their molecular competitors to extinction. Feedback and self-organisation relentlessly drive such highly connected networks to the very brink of dissipation. By straddling the boundary between order and chaos, living things maximise their ability to react adaptively to a changing environment.

This I believe shows the way to create cognitive complexity - we need to drive our computers to the very edge of chaos.

We cannot create intelligence in machines by programming them with millions of explicit rules – the rules do not exist, and even if they did we are not clever enough to discover them. What we can do, however, is use a form of software evolution to discover this knowledge for us. The rich diversity of apparent design we see in nature is the result of two complementary algorithms: self-organisation and evolution. Blind search guided by selection pressure makes nature an ingenious, if expensive, engineer.

We are beginning to use Darwin’s dangerous idea to shape the development of new computer systems. We can create software agents which have the simple goal of surviving and reproducing in competition with each other for resources such as computer memory and processing time. Genetic algorithms continually re-shape their software DNA. They interbreed and spawn child agents which will be more effective at survival. These are not minded agents – they are simple machines much like bacteria – and yet they show startlingly complex behaviour.
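A bare-bones sketch of that evolutionary loop looks something like the following. The bit-string ‘genomes’, the toy fitness function (count the ones) and all the numbers are illustrative stand-ins for the memory- and processor-time competition described above.

```python
# A minimal genetic algorithm: a population of bit-string genomes competes,
# the fitter half survives, survivors interbreed, and occasional mutation
# keeps the search exploring.
import random

GENOME_LENGTH, POPULATION, GENERATIONS = 20, 30, 40

def fitness(genome):
    return sum(genome)                        # toy goal: as many 1s as possible

def breed(mum, dad):
    cut = random.randrange(GENOME_LENGTH)     # single-point crossover
    child = mum[:cut] + dad[cut:]
    if random.random() < 0.1:                 # occasional random mutation
        i = random.randrange(GENOME_LENGTH)
        child[i] = 1 - child[i]
    return child

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION // 2]              # selection pressure
    children = [breed(random.choice(survivors), random.choice(survivors))
                for _ in range(POPULATION - len(survivors))]
    population = survivors + children

print(max(fitness(genome) for genome in population))      # close to GENOME_LENGTH
```

Nobody programs the winning genome in; it is discovered by blind variation and selection.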

The principle here is clear. We can build complexity from the ground up by borrowing from another paradigm. Intelligence is not a one-shot process – we cannot program a tabula rasa with the rules for instant consciousness, because they do not exist. We can, however, build adaptive systems capable of learning from experience to cooperate as they compete to survive in an electronic landscape. For this to happen we must provide a kindergarten for these, our mind children, to play in – a software schoolroom full of toys, obstacles and, most importantly, other children.

The physicist Richard Feynman said before he died that he had only two questions for God: why turbulence and why consciousness?  I find it interesting that he left out the measurement problem of quantum mechanics but that’s another story. But perhaps from what we have seen complexity, chaos, and consciousness are all intimately connected.

‘He doesn’t know his own mind’ is a much truer statement than it first appears. As Gödel showed for mathematics, a system can never discover all there is to know about itself from the inside. There is nothing in principle that says it is impossible for a computer program to have the same reasoning powers, including insight and intuition, as a human being. Gödel has shown, though, that we could never ourselves fully understand such a program. If such a program is ever created it will not be by the conscious hand of man but will come about through the same evolutionary route that forged our own consciousness. Conscious machines will not then be our mind children but fellow travellers on the road towards self-knowledge.

 

 
