Lecture
The last thought experiment to be described is probably the most famous of all. It is due to John Searle, who described a hypothetical system that is clearly running a program and successfully passes the Turing test, but that (according to Searle) just as clearly does not understand the meaning of any of its inputs or outputs. From this Searle concludes that running an appropriate program (that is, a program that produces the right outputs) is not a sufficient condition for having a mind.
The system consists of a person, who understands only English, equipped with a rule book written in English and various stacks of paper, some blank and some covered with indecipherable inscriptions. (The person plays the role of the computer's processor, the rule book is the program, and the stacks of paper serve as the memory.) The system sits inside a room with a small opening to the outside.
Through the opening come slips of paper bearing indecipherable symbols. The person finds the matching symbols in the rule book and follows the instructions given there. The instructions may involve writing symbols on new slips of paper, finding symbols in the stacks, rearranging the stacks, and so on. Eventually the instructions call for one or more symbols to be written on a slip of paper that is passed back out to the outside world.
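As an aside, the role assignment above (person as processor, rule book as program, stacks of paper as memory) can be made concrete with a toy sketch. The rule table and the symbols in it are invented purely for illustration; nothing in Searle's argument depends on any particular rules.

```python
# Toy sketch of the Chinese Room as pure symbol manipulation.
# The rules and symbols here are invented for illustration only; the point
# is that the "operator" matches shapes and copies symbols without any
# access to what they mean.

RULE_BOOK = {
    # incoming symbol sequence -> symbols to pass back out (a purely formal pairing)
    ("你", "好"): ("你", "好", "吗"),
    ("再", "见"): ("再", "见"),
}

def operator_step(incoming, stacks):
    """Follow the rule book mechanically: record the incoming slip on a stack
    and return whatever symbols the matching rule dictates."""
    stacks.append(tuple(incoming))                  # "writing symbols on new slips"
    return RULE_BOOK.get(tuple(incoming), ("？",))  # a default symbol if no rule matches

stacks_of_paper = []                                # the room's memory
reply = operator_step(["你", "好"], stacks_of_paper)
print("slip passed back out:", "".join(reply))
```

The one feature the sketch preserves is the essential one: every step is a formal lookup on symbol shapes, and no step ever consults meaning.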
So far, so good. But from the outside we see a system that accepts input in the form of Chinese sentences and produces answers in Chinese that appear just as "intelligent" as those in the conversation Turing imagined. Searle then argues as follows: the person in the room does not understand Chinese (this is given); the rule book and the stacks of paper, being mere sheets of paper, do not understand Chinese; therefore there is no understanding of Chinese anywhere in the system. Hence, according to Searle, running even the right program does not necessarily generate understanding.
Like Turing, Searle considered and attempted to rebut a number of replies to his argument. Several commentators, including John McCarthy and Robert Wilensky, proposed what Searle calls the systems reply. Their objection is that while one can certainly ask the person in the room whether he understands Chinese, this is analogous to asking a computer's processor whether it can take cube roots. In both cases the answer is no, and in both cases, according to the systems reply, it is the whole system that has the ability in question.
Certainly, if you ask the Chinese Room, in Chinese, whether it understands Chinese, the answer will be yes (delivered in fluent Chinese). By Turing's polite convention, this should be enough. Searle's response is to return to the point that the understanding is not in the head of the person in the room and cannot reside in sheets of paper, so there can be no understanding. He further notes that one can imagine the person memorizing the rule book and the contents of all the stacks of paper, so that there is nothing left to which understanding could be attributed except the person himself; yet if he is then asked (in English) whether he understands Chinese, the answer will be no.
Let us now turn to the real issue. The shift from the paper-stack version of the experiment to the memorization version is merely an attempt to distract the reader, since both are physical instantiations of a running program. Searle's actual argument rests on the following four axioms:
1. Computer programs are formal, syntactic entities.
2. Human minds have mental contents, that is, semantics.
3. Syntax by itself is neither constitutive of nor sufficient for semantics.
4. Brains cause minds.
From the first three axioms Searle concludes that programs are not sufficient for minds. In other words, an agent running a program might have a mind, but it does not necessarily have a mind merely by virtue of running the program.
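Schematically (this notation is ours, not Searle's), the inference from the first three axioms to this conclusion can be read as:

$$
\frac{\text{A1: program} \Rightarrow \text{syntax only} \qquad \text{A2: mind} \Rightarrow \text{semantics} \qquad \text{A3: syntax} \nRightarrow \text{semantics}}{\text{C1: program} \nRightarrow \text{mind}}
$$

That is, because a program supplies only syntax, and syntax alone does not yield the semantics that a mind requires, running a program cannot by itself guarantee a mind.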
From the fourth axiom Searle concludes: "Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains." From this he infers that any artificial brain would have to duplicate the causal powers of the brain, not merely run a particular program, and that the human brain does not produce mental phenomena solely by virtue of running a program.
The conclusion that programs are not sufficient for minds does follow from these axioms, if one interprets them generously. But the conclusion is poorly supported: all Searle has shown is that if functionalism is explicitly rejected (which is what his third axiom does), then the conclusion that something other than a brain can give rise to a mind no longer follows. That is a reasonable enough position, so the whole debate comes down to whether the third axiom can be accepted. According to Searle, the point of the Chinese Room experiment is to provide intuitive support for the third axiom. But the reaction of other researchers shows that this intuition carries weight only with those who were already inclined to accept the idea that programs, taken by themselves, cannot generate genuine understanding.
Note once again that the aim of the Chinese Room experiment is to refute the claim of strong artificial intelligence: the assertion that running a program of the right kind necessarily produces a mind. The thought experiment proceeds by exhibiting an apparently intelligent system running a program of the right kind, of which, according to Searle, it can be clearly shown that it has no mind. To this end Searle appeals to intuition rather than proof; he seems to be saying: "just look at this room; how could there be a mind in it?" But exactly the same argument can be made about the brain: just look at this collection of cells (or atoms) blindly operating according to the laws of biochemistry (or physics); how could there be a mind in it? Why can a piece of brain be a mind while a piece of liver cannot?
Moreover, by conceding that materials other than neurons could in principle be the bearers of a mind, Searle weakens his own argument even further, for two reasons: first, we have only Searle's intuition (or our own) to rely on in judging that there is no mind in the Chinese Room; and second, even if we decide that the room has no mind, that tells us nothing about whether a program running on some other physical medium (including a computer) might have a mind.
Searle allows the logical possibility that the brain actually implements an artificial intelligence program of the traditional kind, while the same program running on the wrong kind of machine would not create a mind. Searle denies believing that "machines cannot have minds"; on the contrary, he claims that some machines do have minds: humans, for example, are biological machines with minds. But he leaves us in the dark about which kinds of machines do, and which do not, qualify as machines with minds.