
The history of artificial intelligence. Main steps

Lecture



This is the conclusion of the material on the history of artificial intelligence.

...


REPLY- The addition of such 'perceptual' and 'motor' capacities adds nothing by way of understanding to Schank's original program. The robot receives input through its 'perceptual' apparatus, and instructions are issued to its 'motor' apparatus, without the robot knowing any of these facts. All the robot does is follow formal instructions for manipulating formal symbols.

3. Brain Simulator Reply (Berkeley and MIT) - This argument goes: "Suppose we design a program that does not represent the information we have about the world, but instead simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them ... Surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native Chinese speakers understood the stories?"

REPLY- I thought the whole idea of strong AI is that we don't need to know how the brain works in order to know how the mind works ... On the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology. If we had to know how the brain worked in order to do AI, we wouldn't bother doing AI.

However, even if we feed Chinese to such a system, its neurons fire, and it therefore responds in Chinese, this does not mean that it understands. Again, formal instructions are simply being followed.

4. Combination Reply (Berkeley and Stanford) - This argument goes: "While each of the three previous replies might not be completely convincing on its own, if you take all three together they are collectively much more convincing. Imagine a robot with a brain-shaped computer lodged in its cranial cavity, imagine the computer programmed with all the synapses of a human brain, imagine that the whole behavior of the robot is indistinguishable from human behavior, and now think of the whole thing as a unified system rather than just a computer with inputs and outputs. Surely in such a case we would have to ascribe intentionality to the system."

REPLY- "I entirely agree that in such a case we would find it rational to accept the hypothesis that the robot had intentionality, as long as we knew nothing more about it. If we knew independently how to account for its behavior without such assumptions, we would not attribute intentionality to it, especially if we knew it had a formal program."

"We would regard the robot as an ingenious mechanical dummy. The hypothesis that the dummy has a mind would now be unwarranted and unnecessary ... It does not see what comes into the robot's eyes, it does not intend to move the robot's arm, and it does not understand any of the remarks made to or by the robot."

5. Other Minds Reply - "How do you know that other people understand Chinese or anything else? Only by their behavior. The computer can pass the behavioral tests as well as they can, so if you are going to attribute cognition to other people, you must in principle also attribute it to computers."

REPLY- This objection is worth only a short reply. The problem is not how I know that other people have cognitive states, but rather what it is that I attribute to them when I attribute cognitive states to them.

"In cognitive sciences one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects."

6. Many Mansions Reply (Berkeley) - "Your whole argument presupposes that AI is only about analog and digital computers. But that just happens to be the present state of technology ... eventually we will be able to build devices that have these causal processes, and that will be artificial intelligence. So your arguments are in no way directed at the ability of artificial intelligence to produce and explain cognition."

REPLY- I really have no objection to this reply, except to say that it in effect trivializes the project of strong AI by redefining it as whatever artificially produces and explains cognition.

Back to the question:

"There must be something about me that makes it the case that I understand English, and a corresponding something lacking in me that makes it the case that I fail to understand Chinese. Now why couldn't we give those somethings, whatever they are, to a machine?"

The point of the present argument is that no purely formal model will ever be by itself sufficient for intentionality, because formal properties are not by themselves constitutive of intentionality.

Mental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer.

No one supposes that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understands anything?

Unless you believe that the mind is separable from the brain both conceptually and empirically (dualism in a strong form), you cannot hope to reproduce the mental by writing and running programs, since programs must be independent of brains. [So anyone who thinks intentionality can be produced by a computer program must be a dualist, though not a mind/body substance dualist.]

"Can a machine think? My own view is that only a machine could think ... AI, by its own definition, is about programs, and programs are not machines."

THOUGHTS

Searle seems to have good arguments here. I agree with most of what he says. When we try to create intentionality in a program, something is missing. You cannot create consciousness this way. It is like trying to make an omelet without eggs.

Intentionality is a chemical and biological phenomenon that can be reproduced only biologically, not mechanically.

The first AI winter, 1975-1980

Edward Hance Shortliffe, MYCIN (1970)


He was the principal developer of the clinical expert system MYCIN, one of the first rule-based expert systems in artificial intelligence, which obtained clinical data interactively from a physician user and was used to diagnose and recommend treatment for severe infections. Although it was never used in practice (it preceded the era of local networks and could not be integrated with patient records and physician workflow), its performance was shown to be comparable to, and sometimes more accurate than, that of Stanford's infectious disease faculty. [1] This spurred a wide range of work on rule-based expert systems, knowledge representation, belief networks, and other areas, and its design strongly influenced the subsequent development of computing in medicine.
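The rule-based inference MYCIN pioneered can be sketched in a few lines. The rules, findings, and certainty factors below are invented for illustration and are not from the real MYCIN knowledge base, though the combination formula for two supporting certainty factors is the one MYCIN used:

```python
# A toy forward-chaining rule engine in the spirit of MYCIN.
# Rules, findings, and certainty factors (CFs) are invented examples.

RULES = [
    # (premises that must all hold, conclusion, certainty factor)
    ({"gram_negative", "rod_shaped", "anaerobic"}, "bacteroides", 0.6),
    ({"gram_negative", "blood_culture"}, "bacteroides", 0.4),
    ({"gram_positive", "coccus", "clusters"}, "staphylococcus", 0.7),
]

def infer(findings):
    """Fire every rule whose premises are all present in the findings."""
    conclusions = {}
    for premises, conclusion, cf in RULES:
        if premises <= findings:
            # MYCIN's combination rule for two supporting CFs:
            # CF_new = CF_old + CF_rule * (1 - CF_old)
            prior = conclusions.get(conclusion, 0.0)
            conclusions[conclusion] = prior + cf * (1 - prior)
    return conclusions

findings = {"gram_negative", "rod_shaped", "anaerobic", "blood_culture"}
print(infer(findings))  # both bacteroides rules fire: 0.6 + 0.4*(1-0.6) = 0.76
```

Real MYCIN worked backward from goals and asked the physician questions along the way; this sketch only shows the rule-matching and certainty-combination step.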

He is also regarded as one of the founders of the field of biomedical informatics, and in 2006 received one of its highest honors, the Morris F. Collen Award, given by the American College of Medical Informatics. [2]

He has held administrative positions in academic medicine, research, and national bodies, including the Institute of Medicine, the American College of Physicians, the National Science Foundation, the National Institutes of Health, and the National Library of Medicine (NLM), and has had a great influence on the development of medicine, computing, and biomedical informatics nationally and internationally. His interests span a wide range of issues related to integrated systems for medical decision support and their implementation, biomedical computer science, medical education and training, and the Internet in medicine.

In March 2007, he became the founding dean of the University of Arizona College of Medicine - Phoenix. He resigned from this position in May 2008, and in January 2009 he moved his principal academic appointment to Arizona State University, where he became a professor of biomedical informatics. He held a secondary appointment as a professor of basic medical sciences and of medicine at the University of Arizona College of Medicine - Phoenix. In November 2009, he moved his academic home to a part-time appointment as a professor at the School of Biomedical Informatics, University of Texas Health Science Center at Houston, in the Texas Medical Center, where he remained until November 2011. Since then he has returned to New York, where he is an adjunct professor of biomedical informatics at Columbia University.

In July 2009, Shortliffe became President and Chief Executive Officer of the American Medical Informatics Association (AMIA), an organization he helped form between 1988 and 1990, when he was president of the Symposium on Computer Applications in Medical Care. At the end of 2011, he announced his intention to step down from this position in 2012.

Paul J. Werbos, Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences (1974)


Paul J. Werbos (born 1947) is a scientist best known for his 1974 Harvard University PhD thesis, which first described training artificial neural networks through backpropagation of errors. [1] The thesis, along with additional material, can be found in his book The Roots of Backpropagation (ISBN 0-471-59897-6). He was also a pioneer of recurrent neural networks. [2]
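The core of backpropagation is applying the chain rule to push the output error back through the network's units. As a minimal sketch (not Werbos's original formulation), here is the chain-rule update for a single sigmoid unit learning the logical AND function; the learning rate and epoch count are arbitrary choices:

```python
import math

# Train one sigmoid unit on logical AND by gradient descent.
# The gradient is obtained by chaining the squared-error derivative
# through the sigmoid, the same chain-rule step that backpropagation
# repeats layer by layer in deeper networks.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = 0.0, 0.0, 0.0
lr = 1.0  # arbitrary learning rate

for _ in range(5000):
    for (x1, x2), target in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        # dE/dz for E = (y - t)^2 / 2 and y = sigmoid(z):
        delta = (y - target) * y * (1 - y)
        w1 -= lr * delta * x1
        w2 -= lr * delta * x2
        b -= lr * delta

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # learned AND: [0, 0, 0, 1]
```

A single unit can only learn linearly separable functions like AND; Werbos's contribution was showing how to carry this same gradient computation backward through hidden layers.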

Werbos was one of the first three two-year presidents of the International Neural Network Society (INNS). He received the IEEE Neural Networks Pioneer Award for the discovery of backpropagation and other basic neural network learning frameworks, such as adaptive dynamic programming.

Werbos has also written on quantum mechanics and other areas of physics. [3] [4] He is interested as well in big questions concerning consciousness, the foundations of physics, and human potential. Roger Penrose discusses some of these ideas in his book Shadows of the Mind.

He worked as a program director at the National Science Foundation for several years until 2015.

David Rumelhart, Learning representations by back-propagating errors (1986)


David Everett Rumelhart (1942-2011) was an American scientist who made significant contributions to the study of human cognition and largely shaped several directions of cognitive science in the 1970s. His scientific work spans artificial intelligence, mathematical psychology, and parallel distributed processing. His most influential work concerned learning and memory in semantic neural networks. He was a representative of the connectionist approach in cognitive science.

In 1963 he received a bachelor's degree from the University of South Dakota, and in 1967 a PhD from Stanford University. From 1967 to 1987 he taught in the psychology department of the University of California, San Diego, then moved to Stanford University. In 1991 he was elected to the United States National Academy of Sciences.

After symptoms of Pick's disease appeared in 1998, he left his research at Stanford and moved in with his brother in Ann Arbor, Michigan. He died on March 13, 2011 in Chelsea, Michigan, at the age of 68.

Judea Pearl, Probabilistic Reasoning in Intelligent Systems (1988)


Judea Pearl (Hebrew: יהודה פרל; born 1936) is an American and Israeli scientist, author of the mathematical apparatus of Bayesian networks, creator of the mathematical and algorithmic basis of probabilistic inference, and author of the belief propagation algorithm for graphical probability models, of the do-calculus [1], and of a calculus of counterfactual conditionals.
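The kind of probabilistic inference a Bayesian network supports can be shown on the smallest possible example: a two-node network Rain -> WetGrass, with probabilities invented for illustration. Observing wet grass, we invert the arrow with Bayes' rule:

```python
# A two-node Bayesian network: Rain -> WetGrass.
# All probabilities are invented for illustration.

P_RAIN = 0.2                                 # prior P(Rain = true)
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}   # P(WetGrass = true | Rain)

def p_rain_given_wet():
    """P(Rain | WetGrass = true) by Bayes' rule."""
    joint_rain = P_WET_GIVEN_RAIN[True] * P_RAIN            # 0.9 * 0.2 = 0.18
    joint_no_rain = P_WET_GIVEN_RAIN[False] * (1 - P_RAIN)  # 0.1 * 0.8 = 0.08
    return joint_rain / (joint_rain + joint_no_rain)

print(round(p_rain_given_wet(), 3))  # 0.692: wet grass raises P(rain) from 0.2
```

In a full Bayesian network this same computation is organized over a directed graph of many variables, and Pearl's belief propagation algorithm performs it by passing local messages instead of enumerating the whole joint distribution.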

In 2011, Pearl won the Turing Award "for fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning" [2].

Pearl's book Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference (1988) ranks 7th in the CiteSeerX database by number of citations (5222 as of May 2012) [3].

Quotations

  • "Any phenomenon that a human can exhibit, a computer should also be able to imitate." [20]
  • "There is no free will, but free will is a useful illusion, since evolution found it necessary to equip us with this illusion ... Our actions are predetermined by the activation of neurons. One neuron fires because other neurons send it certain signals ... Our current actions are determined by the state of mind formed yesterday." [21]

The New Era, 1993-present (in lieu of a conclusion)

First came the mechanism, and with it the opportunity to look at not only the world but also the human being as a mechanism. Descartes had a hand in this, suggesting that an animal is an automaton, and that a human is slightly more than an automaton, but for the most part still an automaton. Obviously, no greater flight of fancy could be expected at the time, since building a machine as complex as a human being was impossible for technical reasons.

The history of computing technology grew out of the need for mass accurate calculations and the successes of mechanics, electricity, and so on. Although mechanical calculators were occasionally built as early as the 17th century (including by such giants as Pascal and Leibniz), it was only in the 20th century that machines that deserve to be called computers began to emerge. At the same time, the theoretical foundations of artificial intelligence (AI) research were being laid. An important milestone was the idea of the "Turing machine": a simple device made of an endless tape and a mechanism that can change the symbol in a cell of the tape, change its own state, and move to a neighboring cell. The specific operation depends on the current state and the symbol in the cell.
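The tape-and-state mechanism just described fits in a few lines of code. The machine below, a standard textbook example rather than Turing's original notation, increments a binary number: one table maps (state, symbol) to (symbol to write, head move, next state):

```python
# A minimal Turing machine that adds 1 to a binary number on the tape.
# "_" is the blank symbol; the head starts on the leftmost digit.

# (state, symbol) -> (symbol to write, head move, next state)
RULES = {
    ("right", "0"): ("0", +1, "right"),   # scan to the end of the number
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),   # hit the blank, start carrying
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry continues
    ("carry", "0"): ("1", 0, "halt"),     # 0 + carry = 1, done
    ("carry", "_"): ("1", 0, "halt"),     # carried past the left edge
}

def run(tape_string):
    tape = dict(enumerate(tape_string))   # sparse tape: position -> symbol
    head, state = 0, "right"
    while state != "halt":
        symbol = tape.get(head, "_")
        tape[head], move, state = RULES[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

print(run("1011"))  # binary 1011 (11) + 1 -> 1100 (12)
```

Everything the machine "knows" is in the transition table; the control loop itself is trivially simple, which is exactly what makes Turing's universality result so striking.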

Turing showed that every recursively computable function can be computed on such a machine in finite time. From this it followed that, given the right program, enough memory, and some time, one could compute any rule-governed function, including passing the Turing test, which would mean creating artificial intelligence! It remained only to find the right function ... At first everything suggested that such a function was about to be found: from the mid-1950s to the mid-1970s a great deal was done in AI. Computers learned to hold simple conversations, play chess, solve complex algebra problems, and prove theorems. Computers grew in power, doubling in capacity every two years (see Moore's law); in 1958 Rosenblatt developed the perceptron, and the pillars of AI predicted the construction of AI communism by 1980 ... But then came a lull. The growth of computing power did not slow (it still has not), but "AI" did not get much stronger. Any complicated action took a great deal of time and in places bore very little resemblance to the work of the brain. It proved necessary to create huge databases, teach the AI to assimilate new information into them, and tackle the problem of retrieving information from that mess (otherwise how would it discuss the trends of French thought and the crisis of the Beat counterculture with the "judge"?).

In 1972, H. L. Dreyfus published a book exposing AI's claimed successes, and in 1980 Searle finished the job with his criticism and the Chinese Room thought experiment, showing that even if a machine gave the right responses in the cleverest way, that would not make it conscious.

In doing so, Searle distinguished the thesis of strong AI (that an AI can be created that is genuinely conscious and intelligent) from the thesis of weak AI (which denies that a program identical to the human mind can be created, holding that programs can at most simulate intelligent behavior).

We no longer dream of creating an artificial mind that imitates a human. The terms strong AI and weak AI were introduced to make the concepts more precise.

Weak AI methods solve particular, limited tasks perfectly well, without claiming to be part of a genuine mind capable of passing the Turing test. The advantage is that the goals and the range of tasks of weak AI bring no disappointment, since weak AI does not pursue an artificial human mind.

Strong and weak artificial intelligence

Strong and weak artificial intelligence are hypotheses in the philosophy of artificial intelligence concerning whether some forms of artificial intelligence can genuinely reason and solve problems [1]. The theory of strong artificial intelligence holds that computers can acquire the ability to think and be aware of themselves, although their thinking process will not necessarily resemble the human one. The theory of weak artificial intelligence rejects this possibility.

The term "strong AI" was introduced in 1980 by John Searle (in the paper describing the "Chinese Room" thought experiment), who first characterized it as follows:

An appropriately programmed computer with the necessary inputs and outputs would thereby be a mind, in the sense in which the human mind is a mind.

- “Minds, brains and programs”


Requirements for creating a strong AI

Many definitions of intelligence have been proposed (such as the ability to pass the Turing test), but at the moment there is no definition that satisfies everyone. However, there is general agreement among artificial intelligence researchers that strong AI has the following properties: [3]

  • Making decisions, using strategies, solving puzzles and acting in the face of uncertainty;
  • Knowledge representation, including a general view of reality;
  • Planning;
  • Learning;
  • Natural language communication;
  • And the combination of all these abilities to achieve common goals.

Work is underway to create machines with all these abilities, and it is assumed that Strong AI will have all of them, or most of them.

There are other aspects of human intelligence that also underlie the creation of a Strong AI:

  • Consciousness: being sensitive to one's surroundings;
  • Self-awareness: To recognize oneself as a separate person, in particular, to understand one’s own thoughts;
  • Empathy: the ability to " feel ";
  • Wisdom.

None of these properties is necessary for creating strong AI. For example, it is not known whether a machine needs to perceive its environment in the same way a human does. It is also unknown whether these skills are sufficient for creating intelligence: if a machine is built with a device that emulates a neural structure similar to the brain's, will it be able to form representations of knowledge or use human speech? It is also possible that some of these abilities, such as empathy, will arise naturally in a machine once it reaches real intelligence.

The Turing test played a certain role in the development of artificial intelligence, including through criticism of the test itself. An analogy with aviation can be drawn here. By the logic of the Turing test, a good aircraft would be one indistinguishable from a bird, to the point that even birds take it for one of their own. Aviation began to develop when designers stopped copying birds and turned to aerodynamics, materials science, and the theory of strength. Robotics became an industry after it stopped copying human anatomy. Similarly, artificial intelligence research earned its right to exist once attempts to build AI systems that think and act like people gave way to building systems that act and think rationally, i.e. achieve the best result.

The latest achievements in the field of AI can be illustrated by the following commercial projects:

• Autonomous planning and scheduling. The Remote Agent program, developed by NASA, was used for integrated control of spacecraft operating far beyond near-Earth orbit, including diagnosing and correcting faults as they occurred.

• Game playing. IBM's Deep Blue was the first program to defeat a reigning world chess champion.

• Autonomous control. The ALVINN computer vision system was trained to drive a car, keeping to its lane. Over 2,850 miles, the system provided steering control 98% of the time.

• Diagnostics. Medical diagnostic programs have reached the level of an experienced physician in several areas of medicine.

• Logistics planning. During the 1991 Persian Gulf crisis, the US Army deployed the DART system (Dynamic Analysis and Replanning Tool), which provided automated logistics planning and scheduling of shipments covering up to 50,000 vehicles, people, and items of cargo at a time. The developers of this system stated that this single application paid back the 30-year investment in artificial intelligence.

"It is comparatively easy to make computers reach the level of an adult in such tasks as an intelligence test or a game of checkers, but difficult or impossible to give them the skills of a one-year-old in tasks of perception or mobility" [1].

The linguist and cognitive scientist Steven Pinker considers this the most important discovery made by artificial intelligence researchers [2]. Marvin Minsky notes that the skills hardest to reverse-engineer are those that are unconscious [3].

Continuation:


Part 1 The history of artificial intelligence. Main steps
Part 2 - The history of artificial intelligence. Main steps


