
The history of the development of artificial intelligence

Lecture



The emergence of the foundations of artificial intelligence (1943-1955)

[Photo: Walter Pitts]
The first work that is now generally recognized as artificial intelligence was carried out by Warren McCulloch and Walter Pitts. They drew on three sources: basic knowledge of the physiology and function of neurons in the brain; the formal analysis of propositional logic due to Russell and Whitehead; and Turing's theory of computation.

McCulloch and Pitts proposed a model of artificial neurons in which each neuron is characterized as being either "on" or "off", with a switch to "on" occurring in response to stimulation by a sufficient number of neighboring neurons.

The state of a neuron was conceived of as "factually equivalent to a proposition which proposed its adequate stimulus." Their work showed, for example, that any computable function can be computed by some network of connected neurons, and that all the logical connectives (AND, OR, NOT, etc.) can be implemented by simple network structures.
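
To make the last claim concrete, here is a minimal sketch of a McCulloch-Pitts threshold unit in Python. The lecture contains no code, and the weights and thresholds below are one standard textbook choice, not taken from the 1943 paper.

```python
# A McCulloch-Pitts neuron: a binary threshold unit that switches "on"
# (outputs 1) when the weighted sum of its inputs reaches the threshold.
def mp_neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# The logical connectives mentioned above, each as a single unit
# (the weights and thresholds are one standard choice, assumed here):
def AND(x1, x2):
    return mp_neuron([x1, x2], weights=[1, 1], threshold=2)

def OR(x1, x2):
    return mp_neuron([x1, x2], weights=[1, 1], threshold=1)

def NOT(x):
    return mp_neuron([x], weights=[-1], threshold=0)  # inhibitory input

for a in (0, 1):
    for b in (0, 1):
        print(f"AND({a},{b})={AND(a, b)}  OR({a},{b})={OR(a, b)}  NOT({a})={NOT(a)}")
```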

In addition, McCulloch and Pitts suggested that appropriately structured networks are capable of learning. Donald Hebb demonstrated a simple update rule for modifying the strengths of the connections between neurons. The rule he proposed, now called the Hebbian learning rule, continues to serve as the basis for models that are widely used today.
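
The lecture does not reproduce the rule itself; a common textbook formulation (an assumption here, not a quotation from Hebb) is that the strength of a connection grows in proportion to the joint activity of the two neurons it links: Δw = η · x · y. A minimal sketch in Python:

```python
# One Hebbian update step (illustrative; the learning rate and
# activity values below are assumed, not taken from the lecture).
import numpy as np

eta = 0.1                        # learning rate (assumed value)
x = np.array([1.0, 0.0, 1.0])    # presynaptic (input) activities
y = 1.0                          # postsynaptic (output) activity
w = np.zeros(3)                  # connection strengths, initially zero

# Hebb's principle: connections strengthen where input and output
# are active together; inactive inputs (x = 0) are left unchanged.
w += eta * y * x
print(w)  # -> [0.1 0.  0.1]
```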

[Photo: Warren McCulloch]
In 1951, two graduate students in the mathematics department at Princeton University, Marvin Minsky and Dean Edmonds, built the first neural network computer. The machine, called SNARC, used 3000 vacuum tubes and a surplus automatic pilot mechanism from a B-24 bomber to simulate a network of 40 neurons. The committee before which Minsky defended his Ph.D. thesis doubted whether this kind of work should count as mathematics, to which von Neumann, according to contemporaries, replied: "If it isn't now, it will be someday." Minsky later proved influential theorems showing the limitations of neural network research.

There are many other examples of early work that could be described as related to artificial intelligence, but it was Alan Turing who first articulated a complete vision of the field in his article "Computing Machinery and Intelligence," published in 1950. In it he described the Turing test, the principles of machine learning, genetic algorithms, and reinforcement learning.

Birth of Artificial Intelligence (1956)

Another authoritative figure in artificial intelligence, John McCarthy, also conducted his research at Princeton University. After receiving his degree, McCarthy moved to Dartmouth College, which became the official birthplace of the field. McCarthy persuaded Marvin Minsky, Claude Shannon, and Nathaniel Rochester to help him bring together American researchers interested in automata theory, neural networks, and the study of intelligence.

They organized a two-month workshop at Dartmouth in the summer of 1956. There were 10 attendees in all, including Trenchard More from Princeton University, Arthur Samuel from IBM, and Ray Solomonoff and Oliver Selfridge from the Massachusetts Institute of Technology (MIT).

Two researchers from the Carnegie Institute of Technology, Allen Newell and Herbert Simon, rather stole the show. While others could only share ideas and, in some cases, show programs for particular applications such as checkers, Newell and Simon could already demonstrate a reasoning program, the Logic Theorist (LT), about which Simon claimed: "We have invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem."

Soon after the workshop, the program proved its worth by proving most of the theorems in Chapter 2 of Russell and Whitehead's Principia Mathematica. Russell was reportedly delighted when Simon showed him that the program had produced a proof of one theorem that was shorter than the one in Principia. The editors of the Journal of Symbolic Logic were less impressed; they rejected a paper coauthored by Newell, Simon, and the Logic Theorist program.

The Dartmouth workshop did not lead to any major new breakthroughs, but it did introduce all the major figures in the field to one another. For the next 20 years, they, together with their students and colleagues at the Massachusetts Institute of Technology, Carnegie Mellon University, Stanford University, and IBM, would dominate the field.

Perhaps the longest-lasting result of the workshop was an agreement to adopt a new name for the field, proposed by McCarthy: artificial intelligence. It might have been better to call the field "computational rationality," but the name "artificial intelligence" stuck.

Looking at the proposals submitted for the Dartmouth workshop helps explain why artificial intelligence needed to become a separate field of knowledge.

Why couldn't all the work done within artificial intelligence have been published under the banner of control theory, operations research, or decision theory, fields whose ultimate goals are similar to those of AI? And why is artificial intelligence not considered a branch of mathematics?

The first answer is that from the very beginning artificial intelligence embraced the idea of modeling human qualities such as creativity, self-improvement, and the use of natural language, tasks that none of those fields addresses. The second answer is methodology.

Artificial intelligence is the only one of the fields listed above that is unambiguously a branch of computer science (although operations research also attaches great importance to computer simulation), and it is the only field that attempts to build machines that act autonomously in a complex, changing environment.



