
The development of artificial intelligence

Lecture



Mathematics (from about 800 to the present)

  • What are the formal rules for drawing valid conclusions?
  • What are the limits of what can be computed?
  • How can we reason with uncertain information?

Philosophers formulated the most important ideas of artificial intelligence, but turning them into a formal science required a certain level of mathematical formalization in three fundamental areas: logic, computation, and probability.

The origins of formal logic can be traced back to the philosophers of ancient Greece, but its development as a mathematical discipline really began with the work of George Boole (1815-1864), who worked out the logic of propositions, or Boolean logic. In 1879, Gottlob Frege (1848-1925) extended Boole's logic to include objects and relations, creating first-order logic, which today serves as the most fundamental system of knowledge representation.

Alfred Tarski (1902-1983) introduced a theory of reference that shows how to relate logical objects to objects in the real world. The next step was to determine the limits of what can be done with logic and computation.

The first non-trivial algorithm is considered to be Euclid's algorithm for computing the greatest common divisor. The study of algorithms as objects in their own right was begun by al-Khwarizmi, a Central Asian mathematician of the 9th century, through whose work Europe also became acquainted with Arabic numerals and algebra.
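
To make the idea concrete, here is a minimal Python sketch of Euclid's algorithm in its modern form; the function name gcd and the iterative formulation are illustrative choices for this lecture, not anything taken from the original sources.

```python
def gcd(a: int, b: int) -> int:
    # Euclid's insight: gcd(a, b) equals gcd(b, a mod b),
    # and gcd(a, 0) is a itself.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # -> 12
```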

Boole and other scientists widely discussed algorithms for logical inference, and by the end of the 19th century efforts were under way to formalize general mathematical reasoning as logical deduction. In 1900, David Hilbert (1862–1943) presented a list of 23 problems and correctly predicted that they would occupy mathematicians until the end of the 20th century.

The last of these problems asks whether there is an algorithm for deciding the truth of any logical statement involving the natural numbers: the famous Entscheidungsproblem, or decision problem. In essence, Hilbert was asking whether there are fundamental limits to the power of effective proof procedures.

In 1930, Kurt Gödel (1906-1978) showed that there is an effective procedure for proving any true statement in the first-order logic of Frege and Russell, but that first-order logic cannot express the principle of mathematical induction needed to characterize the natural numbers.

In 1931, Gödel showed that real limits to computability do exist. His incompleteness theorem shows that in any language expressive enough to describe the properties of the natural numbers, there are true statements that are unprovable, in the sense that their truth cannot be established by any algorithm.

This fundamental result can also be read as showing that some functions on the integers cannot be represented by any algorithm, that is, they cannot be computed.

This prompted Alan Turing (1912-1954) to try to characterize exactly which functions are computable. The notion is actually slightly problematic, because the concept of a computation, or an effective procedure, cannot really be given a formal definition. However, the Church-Turing thesis, which states that the Turing machine is capable of computing any computable function, is generally accepted as a satisfactory definition.
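
As a rough illustration of the model itself, the following Python sketch simulates a very small Turing machine; the transition-table encoding and the bit-flipping example machine are assumptions made for this example, not a construction taken from Turing's work.

```python
# Minimal Turing machine simulator (illustrative encoding).
# transitions maps (state, symbol) -> (next_state, symbol_to_write, move).
def run_turing_machine(tape, transitions, state="start", blank="_"):
    tape, head = list(tape), 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Example machine: flip every bit of the input, then halt on the blank.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine("0110", flip_bits))  # -> "1001_"
```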

In addition, Turing showed that there are some functions that no Turing machine can compute. For example, no machine can determine, in general, whether a given program will halt on a given input or run forever.
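
The standard argument can be sketched in code. Assuming, hypothetically, a total decider halts(program, data), the program below would halt exactly when the decider says it does not, which is a contradiction; the names halts and paradox are invented for this illustration, and no such decider can actually be implemented.

```python
def halts(program, data) -> bool:
    # Hypothetical: returns True iff program(data) eventually halts.
    # The halting problem says no such total decider can exist.
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the decider predicts about
    # running 'program' on itself.
    if halts(program, program):
        while True:
            pass  # loop forever
    return  # halt immediately

# paradox(paradox) halts  <=>  halts(paradox, paradox) is False,
# paradox(paradox) loops  <=>  halts(paradox, paradox) is True:
# either way the supposed decider is wrong about this input.
```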

Although the concepts of unprovability and noncomputability are very important for understanding computation, the notion of intractability has had a much greater influence on the development of artificial intelligence. Roughly speaking, a problem is called intractable if the time required to solve instances of it grows exponentially with the size of those instances.

The distinction between polynomial and exponential growth in complexity was first emphasized in the mid-1960s in the work of Cobham and Edmonds. The importance of this insight is that exponential growth means even moderately sized instances of a problem cannot be solved in any reasonable time. Therefore, for example, one should aim to divide the overall task of producing intelligent behavior into tractable subproblems rather than attempt to solve an intractable problem.
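
The gap is easy to see numerically. The short sketch below compares a cubic and an exponential step count under the purely illustrative assumption of one elementary step per nanosecond.

```python
# Compare polynomial (n**3) and exponential (2**n) running times,
# assuming one step per nanosecond (an illustrative figure).
for n in (10, 30, 50, 70):
    poly_s = n**3 / 1e9
    expo_s = 2**n / 1e9
    print(f"n={n:3d}   n^3: {poly_s:10.3e} s   2^n: {expo_s:10.3e} s")
# By n = 70 the exponential column exceeds a trillion seconds
# (tens of thousands of years), while the cubic one is still negligible.
```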

How can you recognize an intractable problem? One accepted method is provided by the theory of NP-completeness, first proposed by Stephen Cook and Richard Karp. Cook and Karp showed that there are large classes of canonical combinatorial search and reasoning problems that are NP-complete.
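
Boolean satisfiability (SAT) is the classic example of such a problem, and a hedged sketch of the obvious brute-force approach shows where the exponential cost comes from: the loop below tries all 2^n truth assignments. The clause encoding (signed 1-based integers for literals) is an assumption made for this example.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    # Literals are 1-based integers; a negative literal means negation,
    # e.g. -2 stands for "not x2".  Tries all 2**n_vars assignments.
    for assignment in product([False, True], repeat=n_vars):
        def literal_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(literal_true(l) for l in clause) for clause in clauses):
            return assignment
    return None  # unsatisfiable

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], 3))
```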

Any class of problems to which the class of NP-complete problems can be reduced is likely to be intractable. (Although it has not yet been proved that NP-complete problems are necessarily intractable, most theorists believe this to be the case.) These results stand in contrast to the optimism with which the first computers were greeted under headlines such as "Electronic superbrains" that think "faster than Einstein!"

Despite the ever-increasing speed of computers, a characteristic feature of intelligent systems is the economical use of resources. Roughly speaking, the world in which AI systems must operate is an extremely large problem instance. In recent years, artificial intelligence research has helped explain why some instances of NP-complete problems are hard while others are easy.

In addition to logic and the theory of computation, the third great contribution of mathematics to artificial intelligence is the theory of probability. The idea of probability was first formulated by the Italian mathematician Gerolamo Cardano (1501–1576), who described it in terms of the possible outcomes of gambling events.

Probability theory quickly became an integral part of all the quantitative sciences, helping to deal with uncertain measurements and incomplete theories. Pierre Fermat (1601–1665), Blaise Pascal (1623–1662), James Bernoulli (1654–1705), Pierre Laplace (1749–1827), and other scientists advanced the theory and introduced new statistical methods.

Thomas Bayes (1702–1761) proposed a rule for updating probabilities in the light of new evidence. Bayes' rule and the field that grew out of it, Bayesian analysis, underlie most modern approaches to uncertain reasoning in artificial intelligence systems.
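
As a minimal worked example of Bayes' rule, the Python sketch below updates the probability of a hypothesis after a positive test result; the prior, sensitivity, and false-positive rate are made-up numbers chosen purely for illustration.

```python
def bayes_posterior(prior, likelihood, false_positive_rate):
    # P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | ~H) P(~H)]
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Illustrative numbers: 1% prior, 95% sensitivity, 5% false positives.
print(bayes_posterior(prior=0.01, likelihood=0.95, false_positive_rate=0.05))
# -> about 0.161: a single positive result still leaves the hypothesis unlikely.
```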

