
The future of artificial intelligence

Lecture



What is artificial intelligence?

The science called "artificial intelligence" belongs to the family of computer sciences, and the technologies built on it belong to information technology.

The task of this science is to reproduce sensible reasoning and action by means of computing systems and other artificial devices.

The following major difficulties arise along the way:

  • In most cases the algorithm for solving the problem is not known before the result is obtained. For example, it is not known exactly how text is understood, how the proof of a theorem is found, how a plan of action is constructed, or how an image is recognized.
  • Artificial devices (computers, for example) do not possess a sufficient initial level of competence. A specialist, by contrast, achieves results by drawing on his competence (in particular, knowledge and experience).

This means that artificial intelligence is an experimental science. Its experimental character lies in the fact that, by creating particular computer representations and models, the researcher compares their behavior with one another and with examples of how a specialist solves the same problems, and modifies them on the basis of this comparison, trying to achieve a better match of results.

To modify programs in a way that "monotonically" improves the results, one must start from reasonable initial representations and models. These are supplied by psychological studies of the mind, in particular by cognitive psychology.

An important characteristic of artificial intelligence methods is that they deal only with those mechanisms of competence that are verbal in nature (that is, admit a symbolic representation). Not all of the mechanisms a person uses to solve problems are of this kind.

Origins (how it all began)

The first studies attributable to artificial intelligence were undertaken almost immediately after the appearance of the first computers.

In 1954, the American researcher Allen Newell decided to write a program for playing chess. He shared this idea with the RAND Corporation analysts J. Shaw and Herbert Simon, who offered Newell their help. As a theoretical basis for such a program, it was decided to use the method proposed in 1950 by Claude Shannon (C. E. Shannon), the founder of information theory. Alan Turing had formalized this method precisely and simulated it by hand.

A group of Dutch psychologists led by A. de Groot, who studied the playing styles of outstanding chess players, was also involved in the work. After two years of joint work, this team created the programming language IPL1, apparently the first symbolic language for processing lists. Soon the first program was written that can be counted among the achievements of artificial intelligence: the "Logic Theorist" (1956), designed to prove theorems in the propositional calculus automatically.

The chess program itself, NSS, was completed in 1957. At the heart of its operation were so-called heuristics (rules that allow a choice to be made in the absence of exact theoretical grounds) and descriptions of goals. The control algorithm tried to reduce the difference between the evaluation of the current situation and the evaluation of the goal or of one of the subgoals.

In 1960, the same group, building on the principles used in NSS, wrote a program its creators called GPS (General Problem Solver), a universal problem solver. GPS could cope with a number of puzzles, compute indefinite integrals, and solve some other problems. These results attracted the attention of computing specialists, and programs appeared for automatically proving theorems in plane geometry and for solving algebraic word problems (formulated in English).
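The control principle shared by NSS and GPS, reducing the difference between the current situation and the goal, can be illustrated with a short sketch. This is not Newell and Simon's actual code: the set-of-facts state representation, the operators, and the toy "travel" domain below are invented purely for illustration.

```python
# A minimal, hypothetical sketch of GPS-style difference reduction
# (means-ends analysis).  A state is a set of facts; an operator has
# preconditions, facts it adds, and facts it deletes.
def achieve(state, goal, operators, depth=10):
    """For each goal fact missing from `state`, pick an operator that adds it,
    recursively achieve that operator's preconditions, then apply it.
    Returns (plan, final_state), or None if no plan is found.
    (Greedy: it does not handle interacting subgoals, unlike a full planner.)"""
    plan = []
    for fact in goal:
        if fact in state:
            continue                        # this difference is already removed
        for name, (pre, add, delete) in operators.items():
            if fact in add and depth > 0:
                sub = achieve(state, pre, operators, depth - 1)
                if sub is None:
                    continue                # try another operator
                subplan, state = sub
                state = (state - delete) | add
                plan += subplan + [name]
                break
        else:
            return None                     # no operator removes this difference
    return plan, state

# Toy domain, invented for illustration.
operators = {
    "buy ticket": ({"at home"}, {"has ticket"}, set()),
    "take train": ({"at home", "has ticket"}, {"at office"},
                   {"at home", "has ticket"}),
}
result = achieve({"at home"}, {"at office"}, operators)
print(result[0] if result else "no plan")   # -> ['buy ticket', 'take train']
```

Note how the precondition "has ticket" becomes a subgoal: the solver first removes the difference that blocks the operator it really wants to apply, which is exactly the difference-reduction idea described above.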

John McCarthy of Stanford became interested in the mathematical foundations of these results and, more generally, in symbolic computation. As a result, in 1963 he developed the LISP language (from List Processing), based on a single list representation for programs and data, the use of expressions to define functions, and a bracket syntax.
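The central idea named here, one list form shared by programs and data, can be imitated in a few lines of Python. The toy evaluator below handles only numbers, quote, '+' and '*'; it is a sketch of the idea, not an implementation of any real LISP dialect.

```python
# A toy illustration of LISP's central idea: a program is itself a list,
# so the same data structure holds both code and data.
def evaluate(expr):
    if isinstance(expr, (int, float)):       # atoms evaluate to themselves
        return expr
    op, *args = expr                         # a list is an application
    if op == "quote":                        # (quote x) returns x unevaluated
        return args[0]
    values = [evaluate(a) for a in args]
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

# (* 2 (+ 1 3)) written as a nested list
program = ["*", 2, ["+", 1, 3]]
print(evaluate(program))                     # -> 8
# The same list can be edited as ordinary data before it is evaluated:
program[1] = 10
print(evaluate(program))                     # -> 40
```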

At the same time in the USSR, mainly at Moscow University and the Academy of Sciences, a number of pioneering studies were carried out, led by Veniamin Pushkin and Dmitry Pospelov, whose goal was to find out how a person actually solves combinatorial search problems.

As a testing ground for these studies, various mathematical games were chosen, in particular the "15" and "5" puzzles, and the instrumental research method was the recording of eye movements, or gnostic dynamics. The main techniques for recording eye movements were the electrooculogram and a suction cap placed on the cornea.

The goal of each such game is to pass from some initial situation to a final one. Transitions are made by successively moving chips horizontally or vertically into the free cell.

Take, for example, the game "5", whose initial and final situations look, respectively, as follows:

2 3 5
_ 1 4

and

1 2 3
_ 4 5

(here the underscore marks the empty cell)

The problem is solved optimally in six moves, corresponding to movements of chips 1, 4, 5, 3, 2, 1. The solution would be considerably harder if, for example, chip 2 were moved on the first move. Clearly, the task can be represented as a tree (or maze) whose root is the initial situation and in which each movement of a chip leads to a new vertex. In this approach all the situations are vertices of the graph, or points of the game tree, and they are the elements from which the "model of the world" is built. Two such elements are connected by a move: the transformation of one situation into another.
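This enumeration is easy to reproduce. The sketch below builds the same game tree breadth-first and recovers the six-move solution; the board encoding and move generation are my own, not taken from the original studies.

```python
from collections import deque

# Breadth-first enumeration of the "5" game tree described above.
# The board is a 2x3 tuple read row by row, with 0 for the empty cell.
START = (2, 3, 5,
         0, 1, 4)
GOAL  = (1, 2, 3,
         0, 4, 5)
COLS = 3

def neighbours(state):
    """Yield (moved_chip, new_state) for every legal single move."""
    blank = state.index(0)
    r, c = divmod(blank, COLS)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 2 and 0 <= nc < COLS:
            j = nr * COLS + nc
            new = list(state)
            new[blank], new[j] = new[j], 0
            yield state[j], tuple(new)

def solve(start, goal):
    """Return the list of chips moved on a shortest path, or None."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for chip, nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [chip]))
    return None

print(solve(START, GOAL))   # -> [1, 4, 5, 3, 2, 1]
```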

Such a model of the game leads, generally speaking, to a complete enumeration, a "labyrinth" of variants, and forms the basis of the labyrinth hypothesis of thinking.

On the other hand, analysis of the experimental data made it possible to distinguish two types of change in the parameters of gnostic dynamics in the course of learning to solve a problem. Namely, for one group of subjects, a number of parameters show a break point as early as the solution of the second or third task in a series of similar tasks.

These parameters include the time taken to solve the problem, the number of inspections of the conditions, the number of inspections of the goal, the total number of inspections, the density of inspection, and the ratio of the number of inspections of the conditions to the number of inspections of the goal. In the other group of subjects no such changes occur.

For example, in the first group of subjects the ratio of the number of inspections of the task's conditions to the number of inspections of the goal shows a break after the second task is solved and continues to decrease over a number of subsequent tasks. In the second group of subjects this ratio does not decrease. The same applies to the time taken to solve the problems.

Analysis of these and other experimental data confirmed the existence of certain general trends in the dynamics of learning to solve problems.

There is every reason to believe that the main factor influencing the temporal characteristics of this process in the first group of subjects is the moment of grasping the equivalence of the tasks, that is, the transposition (transfer) of the relations formed while solving the first problems.

Study of the whole body of data allows us to link the formation of such a system of relations with the time of solving the second and subsequent tasks: it is then that the common structure binding the first and second tasks is formed. Awareness of this commonality, and consequently the "discovery" of equivalence, occurs on encountering the third task.

Comparison of the experimental data also indicates that the correlation of different situations is mediated by such a cognitive component as goal analysis. In other words, analysis of the initial situation is controlled by analysis of the goal and by the process of correlating the initial and final situations. Thus the modeling of the initial situation is the controlled component, and the relations established in the final situation act as the regulator of this modeling process; the model of the initial situation itself is considered from the standpoint of the final situation.

This model can also be represented as a graph, but its vertices are no longer situations, as in the "maze" of variants, but elements of situations. The edges connecting the vertices are not transitions from one situation to another, but the relations identified on the set of these elements with the help of gnostic dynamics. These considerations form the basis of the model hypothesis of thinking and led in 1964 to the appearance of the language (and method) of situational control.
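One way to picture this difference is a hypothetical encoding (my own, not the notation of situational control itself): the situations of the game "5" above can be stored as sets of relations between elements, and the set of relations that must still change separates what is goal-relevant from what is already satisfied.

```python
# A situation represented not as a whole board but as relations between
# its elements (chips and the empty cell).  The relations correspond to
# the "5" puzzle boards shown earlier; the encoding is illustrative only.
initial_situation = {
    ("chip 2", "left of", "chip 3"),
    ("chip 3", "left of", "chip 5"),
    ("empty",  "left of", "chip 1"),
    ("chip 1", "left of", "chip 4"),
    ("chip 2", "above",   "empty"),
    ("chip 3", "above",   "chip 1"),
    ("chip 5", "above",   "chip 4"),
}
final_situation = {
    ("chip 1", "left of", "chip 2"),
    ("chip 2", "left of", "chip 3"),
    ("empty",  "left of", "chip 4"),
    ("chip 4", "left of", "chip 5"),
    ("chip 1", "above",   "empty"),
    ("chip 2", "above",   "chip 4"),
    ("chip 3", "above",   "chip 5"),
}

# Relations required by the goal but absent from the initial situation:
print(sorted(final_situation - initial_situation))
```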

Logicians, too, began to take an interest in research on artificial intelligence. In the same year, 1964, the Leningrad logician Sergei Maslov published "The inverse method for establishing derivability in the classical predicate calculus", which was the first to propose a method for automatically finding proofs of theorems in the predicate calculus.

A year later, in 1965, the work of J. A. Robinson appeared in the United States, devoted to a somewhat different method of automatically finding proofs of theorems in the first-order predicate calculus. This method, called the resolution method, served as the starting point for the creation of a new programming language with a built-in inference procedure, the Prolog language, in 1971.
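The single inference rule behind this method, resolving two clauses on a complementary pair of literals, can be shown in a stripped-down propositional form. Robinson's actual method also uses unification to handle first-order predicates; that part is omitted here, and the clause encoding below is my own.

```python
from itertools import combinations

# Propositional resolution sketch.  Clauses are frozensets of literals;
# "~p" stands for the negation of "p".
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Yield every resolvent of two clauses."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def refute(clauses):
    """Saturate the clause set; return True if the empty clause is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True           # empty clause: contradiction found
                new.add(r)
        if new <= clauses:
            return False                  # nothing new: no refutation
        clauses |= new

# Prove q from (p -> q) and p by refuting the negation of q:
kb = [frozenset({"~p", "q"}),             # p -> q in clause form
      frozenset({"p"}),
      frozenset({"~q"})]                  # negated goal
print(refute(kb))                         # -> True
```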

In 1966, in the USSR, Valentin Turchin developed the recursive-functions language Refal, intended for describing languages and various kinds of their processing. Although it was conceived as an algorithmic metalanguage, for the user it was, like LISP and Prolog, a language for processing symbolic information.

In the late 1960s the first game-playing programs appeared, along with systems for elementary text analysis and for solving some mathematical problems (geometry, integral calculus). In the hard combinatorial search problems that arose in this work, the number of options to be enumerated was sharply reduced by using all sorts of heuristics and "common sense". This approach came to be called heuristic programming. Heuristic programming developed further along the path of more elaborate algorithms and better heuristics. However, it soon became clear that there is a certain limit beyond which no improvement of heuristics and no complication of the algorithm will improve the quality of the system and, most importantly, will not expand its capabilities: a program that plays chess will never play checkers or card games.
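The effect of adding a heuristic to an enumeration can be seen on the same "5" puzzle. The self-contained sketch below counts how many positions a blind breadth-first enumeration examines versus a best-first search guided by a "number of misplaced chips" estimate; the heuristic and the comparison are illustrative choices, not historical algorithms.

```python
import heapq
from collections import deque

START, GOAL, COLS = (2, 3, 5, 0, 1, 4), (1, 2, 3, 0, 4, 5), 3

def neighbours(state):
    blank = state.index(0)
    r, c = divmod(blank, COLS)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 2 and 0 <= nc < COLS:
            j = nr * COLS + nc
            new = list(state)
            new[blank], new[j] = new[j], 0
            yield tuple(new)

def misplaced(state):
    """Heuristic: how many chips are not yet on their goal cells."""
    return sum(1 for a, b in zip(state, GOAL) if a and a != b)

def blind_search(start):
    """Breadth-first enumeration; returns the number of states examined."""
    queue, seen, expanded = deque([start]), {start}, 0
    while queue:
        state = queue.popleft()
        expanded += 1
        if state == GOAL:
            return expanded
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)

def heuristic_search(start):
    """A*-style search ordered by moves-made + misplaced-chips estimate."""
    frontier, seen, expanded = [(misplaced(start), 0, start)], {start}, 0
    while frontier:
        _, cost, state = heapq.heappop(frontier)
        expanded += 1
        if state == GOAL:
            return expanded
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (cost + 1 + misplaced(nxt), cost + 1, nxt))

# The heuristic search examines far fewer positions on this instance.
print("blind:", blind_search(START), "heuristic:", heuristic_search(START))
```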

Gradually, researchers came to realize that all the programs created so far lacked the most important thing: knowledge of the relevant domain. Experts achieve good results in solving problems thanks to their knowledge and experience; if programs could turn to knowledge and apply it, they too would achieve high-quality work.

This understanding, which arose in the early 1970s, essentially meant a qualitative leap in work on artificial intelligence.
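A minimal sketch of what "turning to knowledge" can look like in a program: domain knowledge written as if-then rules and applied by forward chaining until nothing new follows. The rules and facts below are invented for illustration and are not taken from any real expert system.

```python
# Knowledge stored as if-then rules: (set of conditions, conclusion).
RULES = [
    ({"fever", "rash"}, "measles suspected"),
    ({"measles suspected"}, "refer to doctor"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly, adding conclusions, until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"fever", "rash"}, RULES)))
# -> ['fever', 'measles suspected', 'rash', 'refer to doctor']
```

The point of the example is that the program's competence now sits in the rule base, which can be extended without rewriting the control algorithm.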

The American scientist Edward Feigenbaum set out fundamental considerations on this subject in 1977 at the 5th International Joint Conference on Artificial Intelligence.



