Strong AI

Lecture



Many philosophers have argued that even a machine that passes the Turing test would still not actually think, but only imitate thinking. Turing anticipated this objection to artificial intelligence. In particular, he quoted the following passage from a speech by Professor Geoffrey Jefferson.

We can agree that the machine equals the brain only when it is able to write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols; that is, the machine must not only write such a work but also know that it has written it.

Turing called this objection the argument from consciousness: according to it, the machine must be aware of its own mental states and actions. Consciousness is certainly an important topic, but Jefferson's key point actually concerns phenomenology, the study of direct experience; that is, he demands that the machine actually feel emotions.

Other scholars focus on the problem of intentionality, that is, on the question of whether the beliefs, desires, and other internal representations attributed to a machine are really "about" something that exists in the real world.

Turing's answer to this objection is quite interesting. He could have presented reasons why machines can in fact be conscious (whether in the phenomenological sense or in the sense of intentionality). Instead, he pointed out that this question is just as ill-defined as the question of whether machines can think. Besides, why should we demand a higher standard for machines than for people? After all, in everyday life we never have any direct evidence about the internal mental states of others. Nevertheless, Turing said: "Instead of arguing continually over this point, it is usual to have the polite convention that everyone thinks."

Turing argues that Jefferson would be willing to extend this polite convention to machines if only he had experience with ones that act intelligently. He cited the following dialog between a human and a machine, which has become such an integral part of the oral tradition of artificial intelligence that we simply must include it here.

Human: In the first line of your sonnet, which reads "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?

Machine: It wouldn't scan.

Human: How about "a winter's day"? That would scan all right.

Machine: Yes, but nobody wants to be compared to a winter's day.

Human: Would you say Mr. Pickwick reminded you of Christmas?

Machine: In a way.

Human: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.

Machine: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.

In conclusion, Turing noted that the question of consciousness is a difficult one, but denied that it has much relevance to the practice of artificial intelligence: "I do not wish to give the impression that I think there is no mystery about consciousness ... but I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper." The authors agree with Turing: we are interested in creating programs that behave intelligently, not in whether anyone pronounces that behavior real or simulated. On the other hand, this problem remains of keen interest to many philosophers. To understand the source of that interest, let us consider whether other artificially created objects are regarded as real.

In 1828, Friedrich Wöhler synthesized artificial urea for the first time. This achievement was important because it demonstrated the unity of organic and inorganic chemistry and settled a question that had until then been the subject of heated debate. Once the synthesis had been accomplished, chemists agreed that artificial urea really is urea, because it has all the right physical properties.

Similarly, it cannot be denied that artificial sweeteners really are sweeteners, and that artificial insemination (the other term abbreviated AI, Artificial Insemination) really is insemination. On the other hand, artificial flowers are not flowers, and, as Daniel Dennett pointed out, artificial Chateau Latour wine is not Chateau Latour wine, even if the two cannot be distinguished by chemical analysis, simply because it was not made in the right place in the right way. Likewise, an artificial Picasso painting is not a Picasso painting, regardless of how closely it resembles the original.

From this we can conclude that in some cases only the behavior of an artificial object matters, while in others its origin matters as well. Which factor matters in which case seems to be purely a matter of accepted convention. For artificial intelligence, however, there is no established convention, and we are left to rely on intuitions. The philosopher John Searle offered the following very forceful one.

No one supposes that a computer simulation of a storm will leave us all wet ... Why on earth would anyone in his right mind suppose a computer simulation of mental processes actually had mental processes?

Of course, one can readily agree that computer simulations of storms do not make us wet, but it is not at all clear how this analogy carries over to computer simulations of mental processes. After all, Hollywood simulations of storms, which use sprinklers and wind machines, really do get the actors wet.

Most people would say without hesitation that a computer simulation of addition is addition, and that a computer simulation of a chess game is a chess game. Are mental processes more like storms, or more like abstract operations such as addition and chess? Should they be compared with one-of-a-kind items, such as Chateau Latour wine and Picasso paintings, or with mass-produced goods, such as urea? The answers to all these questions depend on one's theory of mental states and processes.

The theory of functionalism holds that a mental state is any intermediate causal condition between input and output. According to functionalism, any two systems with isomorphic causal processes must have the same mental states; therefore, a computer program could have the same mental states as a person. Of course, we have not yet said what "isomorphic" really means, but the underlying assumption is that there is some level of abstraction below which the specific implementation details do not matter; as long as the processes are isomorphic down to that level, the same mental states arise.
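
To make the idea of a "level of abstraction" more concrete, here is a minimal illustrative sketch in Python (the function names and the choice of addition as the example behavior are our own assumptions, not part of the original discussion). It shows two implementations whose low-level internal processes differ completely but whose input/output behavior is identical; a functionalist would say that differences below the chosen level of abstraction do not matter.

```python
def add_builtin(a: int, b: int) -> int:
    # Implementation 1: relies directly on Python's built-in '+' operator.
    return a + b

def add_successor(a: int, b: int) -> int:
    # Implementation 2: repeated increment/decrement, a completely different
    # low-level process that realizes the same input/output mapping.
    result = a
    step = 1 if b >= 0 else -1
    for _ in range(abs(b)):
        result += step
    return result

# At the functional (input/output) level, the two realizations are indistinguishable.
for a, b in [(2, 3), (-4, 7), (0, 0)]:
    assert add_builtin(a, b) == add_successor(a, b)
```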

In contrast, the theory of biological naturalism claims that mental states are high-level emergent features caused by low-level neurological processes in neurons, and that it is certain (unspecified) properties of the neurons that matter. On this view, mental states cannot be duplicated merely by a program that has the same functional structure and exhibits the same input/output behavior; we must require that the program run on an architecture with the same causal powers as neurons. The theory says nothing about why neurons have these causal powers, nor about what other physical media might or might not have them.

To evaluate these two points of view, we first consider one of the oldest problems in the philosophy of mind, and then turn to three thought experiments.