A Comment on Searle’s Claim That We Are Not Justified in Attributing Intelligence to a Machine That is Merely a Computer Suitably Programmed Even if it Passes the Turing Test


Peter Engholm

Monash University, October 1997


ABSTRACT:

This paper examines John Searle’s claim that we are not justified in attributing intelligence to a machine that is merely a computer suitably programmed, even if it passes the Turing test. The paper agrees with Searle and argues that the opposing views rest on illogical hopes and expectations that have no basis in reality.


KEYWORDS:

Artificial intelligence, Turing test, John Searle


FULL TEXT:


INTRODUCTION

This paper examines John Searle’s claim that we are not justified in attributing intelligence to a machine that is merely a computer suitably programmed, even if it passes the Turing test. I will discuss his arguments to see whether they succeed, comparing them in particular with the ‘Systems Reply’ objection, discussed by Searle himself, Aubrey Townsend, and Douglas Hofstadter. In my conclusion, I assess the success of Searle’s claims and give my own opinion on the issue, something that has not been discussed by any of the mentioned authors.

SEARLE’S ARGUMENT

Searle’s argument is, as mentioned, that machines using formal symbol processes (programs) to carry out instructions cannot be regarded as having intelligence. He stresses the notions of understanding and intentionality (aboutness), arguing that computerized machines only manipulate symbols and can never understand the meaning of the instructions or contents given to them as formal symbol processes. For Searle, intentionality can only be found in human beings and in life forms with similar biological components. He argues that “whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program by itself is sufficient for intentionality”.[1]

Searle illustrates this with his “Chinese room” example, in which a person who does not understand Chinese is given questions written in Chinese. By manipulating these (to him or her meaningless) characters according to instructions written in a language the person does understand, he or she can answer the questions in Chinese. The point Searle makes with this example is that although the person carries out the instructions correctly, thereby simulating the way a computer works, he or she understands nothing of the questions or answers. The person is merely an agent that carries out given instructions, and will gain no new knowledge of Chinese from the experiment. Searle’s conclusion is that “whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything”[2].

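To make the parallel with a computer concrete, the following minimal sketch (in Python, my own illustration rather than anything given by Searle; the rule table and phrases are invented placeholders) shows the kind of formal symbol manipulation the room performs:

    # A purely illustrative "rule book": Chinese questions paired with
    # Chinese answers. The program matches and copies symbols by rule
    # alone; no step involves the meaning of any string, just as the
    # person in the room understands nothing of the Chinese.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "Fine, thanks."
        "你会说中文吗？": "会，一点点。",  # "Do you speak Chinese?" -> "Yes, a little."
    }

    def chinese_room(question: str) -> str:
        # Look up the incoming symbols and copy out the paired symbols;
        # unknown input gets a fixed fallback ("Sorry, I don't understand.").
        return RULE_BOOK.get(question, "对不起，我不明白。")

    print(chinese_room("你好吗？"))  # prints: 我很好，谢谢。

Whether such a table-driven program could ever be rich enough to pass the Turing test is beside the point; the sketch only shows that correct answers can be produced without any grasp of what they mean.
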
Thus, Searle discards the possibility that computerized machines using formal processes will ever be able to understand anything. The only exception would be a machine made of biological components sufficiently akin to ours, or working on other suitable chemical principles, but Searle regards this as an empirical question, not connected with the issue of artificial intelligence, which presupposes no knowledge of our biological system[3].

For Searle, then, the Turing test is not a valid test for attributing intelligence (in the sense of understanding) to computerized machines, since some of them will no doubt be able to pass it in the future as our technology advances, while still understanding nothing. I agree with Searle: although we will be able to create these “super computers” in the future, they will still only deal with ‘zeros and ones’ and thus remain non-understanding agents in an information processing system. It would be interesting to know, though, whether Searle would regard someone like Data in Star Trek as an understanding, thinking, and intelligent being, since Data seems to have been made of a mix of biological and technical substance, similar to human beings. My opinion is that we will never be able to create such a being. I will explain why later, but first I will look at some objections to Searle.

OBJECTIONS TO SEARLE

Searle himself mentions a couple of replies or objections to his view, and I will mainly reflect on the one I find most important, namely the ‘Systems Reply’.

The ‘Systems Reply’ ascribes understanding not to the individual alone but to the whole system of which he or she is only a part. Even if the person in the Chinese room example does not understand Chinese, there would be something in the room that does. Searle’s answer is that even if we let the individual internalize the entire system, he would still not understand anything, and thus there is no way the system could understand, because it is just part of him. Searle presses the point further, noting that if the ‘Systems Reply’ were true, it would be hard to avoid saying that the stomach, heart, liver, and so on are all understanding subsystems, which he finds ridiculous[4].

Hofstadter defends a variant of the ‘Systems Reply’ and accuses Searle of not making clear what within the system has intentionality and what does not. He mentions an example in which someone’s brain is gradually replaced by integrated circuit chips, programmed so as to keep the input-output function of each unit identical to that of the unit being replaced. When does the genuine ‘you’ disappear from the system? Hofstadter claims that this is the weakness of Searle’s position: Searle only insists that some systems have intentionality by virtue of their ‘causal powers’ and some do not, but does not explain this further. Hofstadter does not hold that minds can be found everywhere; he rejects a panpsychic view of the universe. He believes that minds may come to exist in programmed machines, and that these minds would derive their causal powers not from the substance the machines are made of, but from the programs run on them[5].

Aubrey Townsend thinks that Searle’s objection to the ‘Systems Reply’ is sound, but believes that there could still be something generated by the running of a program in a machine that would have a mind. He refers to this ‘something’ as a ‘virtual agent’, citing the virtual help figure in Office 97 as the first generation of virtual agents. Thus, Townsend argues that we have to distinguish the machine that runs a program from the thing that does the understanding. He further states that, in order to be considered an intelligent, thinking being, such an agent must be capable of self-conception, knowing which individual it is[6].

I believe Townsend’s argument about virtual agents lacks validity and sense. In my opinion, these agents or ‘demons’ would still have been created by the program running on the machine, and how can something that lacks understanding to begin with create understanding? It is clear that Townsend does not grant understanding or mind to the program running on the machine, and I cannot see why he should then grant it to a virtual agent. No matter how intelligent such an agent might seem, it would still only work with ‘zeros and ones’, like the program in the machine itself, and cannot be distinguished from it in the way it manipulates the information given to it.

My conclusion regarding the ‘Systems Reply’ objections, then, is that they are built on illogical hopes and expectations which, in my view, have no basis in reality, and this paves the way for Searle’s refutation of thinking computerized machines. Even though his arguments may lack fully satisfying explanations, I prefer his view to Townsend’s and Hofstadter’s, who seem to believe that intentionality might be found at some stage of the formal symbol processing (for example, in virtual agents or emulators) but do not explain how it could be created in the very beginning.

Other objections brought up by Searle include the ‘Robot Reply’, the ‘Brain Simulator Reply’, the ‘Combination Reply’, and the ‘Many Mansions Reply’. In short, these objections try to show that intentionality can be created if we imagine a sufficiently powerful, highly advanced machine, one so similar to a human being that even the smallest parts of the human system are copied and replaced with digital components. Whatever the causal processes are that Searle says are essential for intentionality could, according to these claims, surely be copied in such a system.

Searle argues that these objections derive from the original claim made on behalf of artificial intelligence that mental processes are computational processes over formally defined elements. His answer is thus the same as for the ‘Systems Reply’: all these objections presuppose some kind of formal, computerized symbol manipulation, and because such processes by themselves do not have intentionality, intentionality cannot be created by them[7].

CONCLUSION

I find Searle’s argument that we cannot attribute intelligence to a suitably programmed computer better founded and more successful than the objections presented above. I base this opinion on my belief that no intelligence can be produced by something that is not intelligent from the beginning, for example, a computer program. In contrast to Searle, I also believe that this cannot be done even if we create a machine with a biological structure identical to, for example, a human being’s. How, then, do I explain why some beings have intelligence? My belief is that the whole of our mind and intentionality is embodied in the concept of a soul, given to us by a supernatural power (call it God) which we will never be able to copy or create. I find it strikingly surprising that this possibility is not brought up at all by any of the mentioned authors. Maybe recent technological development has dazzled us and given us visions of being able to create life in the form of computerized machines, but then again, maybe that has always been a trademark of man: the desire to play God.


Any reproduction or distribution of this text is prohibited without express permission by the author. Please contact peter@engholm.nu for permission or further information.


REFERENCES:

Hofstadter, D., “Reflections”, in Notes and readings for Minds and Machines, Monash University, 1997.

Searle, J., “Minds, Brains and Programs”, in Notes and readings for Minds and Machines, Monash University, 1997.

Townsend, A., “Virtual agents”, in Notes and readings for Minds and Machines, Monash University, 1997.


FOOTNOTES:



[1] J. Searle, “Minds, Brains and Programs”, in Notes and readings for Minds and Machines, Monash University, 1997, p. 37.

[2] Searle, “Minds, Brains and Programs”, pp. 29-30.

[3] Ibid., pp. 33-35.

[4] Searle, “Minds, Brains and Programs”, pp. 30-31.

[5] D. Hofstadter, “Reflections”, in Notes and readings for Minds and Machines, Monash University, 1997, pp. 39-43.

[6] A. Townsend, “Virtual agents”, in Notes and readings for Minds and Machines, Monash University, 1997, pp. 49-53.

[7] Searle, “Minds, Brains and Programs”, pp. 32-35.