
The complexity of AI problems

Lecture



From the very beginning, artificial intelligence researchers showed little restraint in predicting their future successes. The following prediction by Herbert Simon, published in 1957, was often quoted: he did not aim to surprise or shock anyone, but the simplest summary he could give was that there were now machines in the world that think, learn and create, and that their ability to do these things would grow rapidly until, in the foreseeable future, the range of problems they could handle would be comparable to the range of problems to which the human mind has been applied.

Expressions such as "foreseeable future" can be interpreted in different ways, but Simon also made more concrete predictions: that within ten years a computer would become world chess champion and that a significant mathematical theorem would be proved by machine. These predictions came true (or almost came true) within forty years rather than ten. Simon's over-optimism stemmed from the promising performance of the first artificial intelligence systems, albeit on simple examples. In almost every case, however, these early systems failed badly when confronted with a wider range of problems or with harder problems.

The first kind of difficulty arose because most early programs contained no knowledge, or only very little knowledge, of their subject area; their early successes were achieved through simple syntactic manipulations.

A story typical of this period occurred during the first work on machine translation of natural-language text, which was generously funded by the US National Research Council in an attempt to speed up the translation of Soviet scientific papers during the burst of activity that followed the launch of the first artificial Earth satellite by the USSR in 1957.

It was initially believed that, to preserve the exact meaning of sentences, it was enough to perform simple syntactic transformations based on the grammars of Russian and English and to replace words using an electronic dictionary. In fact, resolving ambiguity and establishing the meaning of a sentence during translation requires general knowledge of the subject area.

The resulting difficulties are illustrated by the famous round-trip translation of the phrase "the spirit is willing but the flesh is weak", which came back as "the vodka is good but the meat is rotten".
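To see why word-by-word substitution is not enough, here is a minimal sketch (in Python, with a made-up dictionary; no real machine-translation system worked exactly this way). Each word is looked up in isolation, so nothing in the procedure tells the program which sense of an ambiguous word the sentence actually intends.

```python
# Toy word-for-word "translator" with a hypothetical dictionary.
LEXICON = {
    "spirit": ["soul, mind", "strong liquor"],   # ambiguous entry
    "willing": ["ready, eager"],
    "flesh": ["the body", "meat"],               # ambiguous entry
    "weak": ["feeble", "spoiled"],               # ambiguous entry
}

def naive_translate(sentence: str) -> str:
    out = []
    for word in sentence.lower().split():
        senses = LEXICON.get(word, [word])
        out.append(senses[0])   # the choice of sense is essentially arbitrary
    return " | ".join(out)

print(naive_translate("the spirit is willing but the flesh is weak"))
# Without knowledge of the subject area, nothing rules out the reading
# "strong liquor ... meat ... spoiled", i.e. the famous mistranslation.
```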

In 1966, the report of an advisory committee concluded that "machine translation of general scientific text has not been achieved and will not be achieved in the near future." All US government funding for academic machine-translation projects was cut off.

Today, machine translation is an imperfect but widely used tool for processing technical, commercial and government documents, as well as documents published on the Internet.

The second kind of difficulty was the intractability of many of the problems that artificial intelligence was attempting to solve. Most early AI programs solved problems by trying out different combinations of possible steps until a solution was found. At first this strategy succeeded, because the microworlds contained very few objects and therefore allowed only a short list of possible actions and very short solution sequences.

Before the theory of computational complexity was developed, it was widely believed that "scaling up" to larger problems was simply a matter of faster hardware and larger memories. The optimism that greeted reports on resolution theorem proving, for example, quickly faded when researchers failed to prove theorems involving little more than a few dozen facts. The fact that a program can find a solution in principle does not mean that it contains the mechanisms needed to find that solution in practice.
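The arithmetic behind this scaling problem can be sketched in a few lines (the branching factor and budgets below are illustrative, not taken from any particular early system): with b choices at each step there are b^d candidate sequences of length d, so even a machine a thousand times faster can search only a few levels deeper.

```python
def searchable_depth(budget, b):
    """Deepest level d such that all b**d step sequences fit in the budget."""
    d = 0
    while b ** (d + 1) <= budget:
        d += 1
    return d

b = 10                     # assumed branching factor (choices per step)
budget = 10 ** 9           # sequences the "slow" machine can examine

print(searchable_depth(budget, b))           # 9 levels
print(searchable_depth(1000 * budget, b))    # 12 levels: 1000x faster buys only 3 more
print(f"sequences of length 50: {b ** 50:.1e}")   # far beyond any hardware
```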

The illusion of unlimited computing power was not confined to problem-solving programs. Early experiments in machine evolution (now known as genetic algorithms) were based on the belief that an appropriate series of small changes to the machine code of a program could produce a program with high performance on any particular simple task. The approach itself is quite reasonable: the idea was to try random mutations (changes to the code) and use a selection process to keep the mutations that seemed useful, as in the sketch below.
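Here is a minimal sketch of that mutate-and-select idea (a toy stand-in, not the original machine-code experiments: the "program" is just a bit string, and fitness simply counts how many bits match a target behaviour).

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # the desired "behaviour"

def fitness(candidate):
    """How many positions already behave as desired."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Flip each bit with a small probability: a 'small random change'."""
    return [1 - bit if random.random() < rate else bit for bit in candidate]

current = [random.randint(0, 1) for _ in TARGET]
for step in range(1, 501):
    mutant = mutate(current)
    if fitness(mutant) >= fitness(current):    # selection: keep useful mutations
        current = mutant
    if fitness(current) == len(TARGET):
        break

print(f"fitness {fitness(current)}/{len(TARGET)} after {step} mutations")
```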

Thousands of hours of CPU time were spent on those early experiments, with no sign of progress. Modern genetic algorithms use better representations and achieve more successful results.

One of the main criticisms of artificial intelligence in the Lighthill report, which formed the basis of the British government's decision to end support for AI research in all but two universities, was its inability to cope with the "combinatorial explosion", the rapid growth of problem complexity. (That is the official version of events; the oral tradition paints a somewhat different and more colourful picture, in which political ambitions and personal interests played a part, a description of which is beyond the scope of this presentation.)

The third kind of difficulty arose from fundamental limitations of the basic structures used to produce intelligent behaviour. For example, Minsky and Papert's book Perceptrons proved that perceptrons (a simple form of neural network) could be shown to learn anything they were capable of representing, but, unfortunately, they can represent very little.

In particular, a two-input perceptron cannot be trained to recognize when its two inputs are different. Although these results did not extend to more complex, multilayer networks, funding for neural-network research soon dwindled to almost nothing.
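The limitation is easy to reproduce. The sketch below (plain Python; the learning rate and epoch count are illustrative) trains a two-input perceptron with the classic perceptron learning rule: it finds weights for AND, which is linearly separable, but never converges on XOR ("the two inputs are different"), because no straight line separates the two classes.

```python
def train_perceptron(samples, epochs=1000, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        errors = 0
        for (x0, x1), target in samples:
            predicted = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            delta = target - predicted
            if delta:
                errors += 1
                w0 += lr * delta * x0
                w1 += lr * delta * x1
                b += lr * delta
        if errors == 0:              # converged: a separating line exists
            return (w0, w1, b)
    return None                      # never converged

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print("AND:", train_perceptron(AND))   # finds weights
print("XOR:", train_perceptron(XOR))   # None: no such weights exist
```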

It is curious to note that the new back-propagation learning algorithms for multilayer networks, which caused an enormous resurgence of interest in neural-network research in the late 1980s, were in fact first discovered in 1969.
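For comparison, here is a minimal back-propagation sketch for a tiny 2-2-1 network (plain Python; the learning rate, epoch count and random seed are arbitrary choices, and with an unlucky initialisation such a small network can still get stuck in a poor local minimum). With one hidden layer, the XOR limitation above disappears.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Two hidden units (two weights and a bias each) and one output unit.
w_h = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1), random.uniform(-1, 1)]
b_o = 0.0

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
lr = 0.5

for _ in range(20000):
    for (x0, x1), target in XOR:
        # Forward pass.
        h = [sigmoid(w_h[j][0] * x0 + w_h[j][1] * x1 + b_h[j]) for j in range(2)]
        y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
        # Backward pass: push the output error back through both layers.
        d_out = (y - target) * y * (1 - y)
        d_hid = [d_out * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w_o[j] -= lr * d_out * h[j]
            w_h[j][0] -= lr * d_hid[j] * x0
            w_h[j][1] -= lr * d_hid[j] * x1
            b_h[j] -= lr * d_hid[j]
        b_o -= lr * d_out

for (x0, x1), target in XOR:
    h = [sigmoid(w_h[j][0] * x0 + w_h[j][1] * x1 + b_h[j]) for j in range(2)]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
    print((x0, x1), "->", round(y, 2), "target", target)
```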



