
Artificial intelligence: yesterday, today, tomorrow

Lecture



In the modern world, the problem of creating artificial intelligence comes up more and more often. Newspapers periodically carry notes claiming that artificial intelligence (AI) has already been all but created, or has been put into practice for military purposes, space research, medicine, and so on.
Science-fiction films about the existence of AI stoke these passions further. In light of cult films such as "The Matrix", "Terminator", and "I, Robot", the viewer comes to the unequivocal conclusion that the creation of AI is only a short time away, and that in less than a century a complex, organized machine will decide the fate of humanity. Is this so? Are all these conjectures justified? Can an AI be created in principle, and if so, how long do we have to wait?
What does man need AI for? AI will be able to partially or completely replace humans in many professions and areas (astronautics, manual trades, etc.). In addition, AI will help people cope with tasks they cannot perform themselves (complex calculations and analysis) and will simply extend the intelligence nature has given them.
In general, the concept of "artificial intelligence" is rather vague. Almost all modern equipment contains microchips, and manufacturers assure consumers that their products contain AI. In most cases, however, this is simply the copying of human-like behavior onto an artificially created object in order to save people cost and time.
Basic concepts
The term intelligence comes from the Latin intellectus: mind, reason, understanding. Artificial Intelligence (AI) is understood as the ability of automatic systems to take on human functions, choosing and making optimal decisions on the basis of previously acquired experience and analysis of external influences. Any intelligence relies on activity.
Brain activity is thinking. Intellect and thinking are connected with many goals and tasks: recognizing situations, logical analysis, planning behavior. Characteristic features of intelligence are the ability to learn, to generalize, to accumulate experience, and to adapt to changing conditions while solving problems.
From the very definition of AI follows the main problem in creating an intellect: the possibility or impossibility of modeling the thinking of an adult or a child.
The history of the development of artificial intelligence
The first serious studies on the creation of AI began almost immediately after the appearance of the first computers.
The birth of the science of artificial intelligence. 1943 - 1956
During this period, a group of scientists from a wide range of areas of science began to discuss the possibility of creating an artificial brain.
• Research in neuroscience showed that the brain is a network of neurons that exchange electrical signals with each other on an "all or nothing" principle: 0 or 1.
• In cybernetics, Norbert Wiener described the foundations of control and stability in electrical networks.
• In information theory, Claude Shannon described digital signals.
• Alan Turing's theory of computation showed that any computation can be performed using digital operations.
• Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they could perform simple logical functions. They were the first to describe what researchers would later call a neural network.
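The McCulloch-Pitts idea above can be sketched in a few lines of Python. This is an illustrative reconstruction, not their original notation: a unit fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold, and suitable weights and thresholds make the same unit compute elementary logical functions.

```python
# A McCulloch-Pitts threshold neuron: output is 1 ("all") if the weighted
# sum of binary inputs reaches the threshold, else 0 ("nothing").
def mp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds the same unit computes basic logic:
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)

print([AND(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
print([OR(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 1]
```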
One of the students inspired by their ideas was Marvin Minsky, who was then 24 years old. Subsequently, he became one of the most visible leaders and innovators in the field of AI for the next 50 years.
In 1951, programs for playing checkers and chess were written, which became a measure of progress in AI for many years.
Dartmouth Conference 1956
The Dartmouth Seminar, a conference on artificial intelligence, was held in the summer of 1956 at Dartmouth College and lasted two months. The conference was important for science: it brought together people interested in modeling the human mind, established a new branch of science, and gave it its name, "Artificial Intelligence".
The conference plan was drawn up in accordance with the thesis that "every aspect of learning or any other feature of intelligence can be described in such detail that it can be simulated on a computer".
The workshop was organized by John McCarthy, Marvin Minsky, Claude Shannon and Nathaniel Rochester. They invited all well-known American researchers, one way or another connected with questions of control theory, automata theory, neural networks, game theory and intelligence research.
The seminar was attended by 10 people:
1 John McCarthy, Dartmouth College
2 Marvin Minsky, Harvard University
3 Claude Shannon, Bell Laboratories
4 Nathaniel Rochester, IBM
5 Arthur Samuel, IBM
6 Allen Newell, Carnegie Mellon University
7 Herbert Simon, Carnegie Mellon University
8 Trenchard More, Princeton University
9 Ray Solomonoff, Massachusetts Institute of Technology
10 Oliver Selfridge, Massachusetts Institute of Technology
The purpose of the conference was to consider the question of whether intellectual processes of thinking and creativity can be modeled with the help of computers. As the key questions, participants identified language understanding and the self-learning and self-improvement of computers.
Ten scientists quite seriously assumed that they would be able to achieve significant results on these questions by working together for two months.
Golden years: 1956-1974
The years after 1956 were an era of discoveries, of sprints across new terrain. The programs developed in those years seemed amazing to most people; such "intelligent" machine behavior looked incredible. Researchers showed unprecedented optimism both in personal communication and in publications, predicting that a full-fledged intelligent machine would be created in less than 20 years. Government agencies such as ARPA (Advanced Research Projects Agency) invested heavily in the development of this new area.
Many programs created in those years used a maze-search algorithm. To achieve a certain goal (winning a game or proving a theorem), they moved toward the goal as if through a maze, backtracking to a branch point and choosing another path whenever the current one turned out to be a dead end.
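A maze-style search of this kind can be sketched as a small backtracking routine. The grid, coordinates, and move order below are invented for illustration:

```python
# Depth-first search with backtracking: advance toward the goal, and when a
# dead end is reached, return to the last branch point and try another path.
def solve_maze(maze, start, goal):
    rows, cols = len(maze), len(maze[0])
    path, visited = [], set()

    def walk(cell):
        r, c = cell
        if not (0 <= r < rows and 0 <= c < cols):
            return False
        if maze[r][c] == 1 or cell in visited:   # wall or already tried
            return False
        visited.add(cell)
        path.append(cell)
        if cell == goal:
            return True
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if walk(step):
                return True
        path.pop()          # dead end: backtrack to the branch point
        return False

    return path if walk(start) else None

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(solve_maze(maze, (0, 0), (0, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
```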
Optimism
The first generation of AI researchers made such predictions about their work:
• 1958 - H. Simon, A. Newell: "within ten years a digital computer will be the world chess champion" and "within ten years a computer will discover and prove an important new mathematical theorem"
• 1965 - H. Simon: "within 20 years, machines will be capable of performing any work a person can do"
• 1967 - M. Minsky: "within a generation the problem of creating artificial intelligence will be substantially solved"
• 1970 - M. Minsky: "in from three to eight years we will have a machine with an intellect comparable to the average human level"
Financing
In 1963 the MIT (Massachusetts Institute of Technology) "AI Group" of Minsky and McCarthy received a $2.2 million grant from ARPA, which continued funding at $3 million per year into the 1970s. Funding on the same scale went to the Stanford AI Project of John McCarthy and to the programs of Newell and Simon at Carnegie Mellon University. Another AI research laboratory was established at the University of Edinburgh in 1965. These four institutions remained the main centers of AI research and development for many years.
Perceptrons
The perceptron is a kind of neural network proposed by Frank Rosenblatt in 1958. Like most AI researchers, he was optimistic about the potential of perceptrons, predicting that "a perceptron may be able to learn, make decisions, translate from one language to another."
An active research program in this field was launched in the 1960s but was suddenly interrupted shortly after the publication of the book Perceptrons by Minsky and Papert in 1969. It argued that there were significant restrictions on what perceptrons could do and that Rosenblatt's predictions were a gross exaggeration. The effect of this book was devastating: for more than 10 years research in this area was almost completely discontinued.
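Rosenblatt's learning rule can be sketched roughly as follows; the learning rate, epoch count, and the choice of the OR task are illustrative. Note that the rule converges only for linearly separable data, which is exactly the limitation Minsky and Papert emphasized.

```python
# Rosenblatt's perceptron learning rule: nudge the weights toward each
# misclassified example until the classes are separated (this works only
# when the data are linearly separable).
def train_perceptron(samples, lr=1.0, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1           # weight update toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learning logical OR (linearly separable, so the rule converges):
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

Run the same rule on XOR (not linearly separable) and it never converges, no matter how many epochs are allowed.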
The first "winter" of artificial intelligence, 1974 - 1980 (The first AI Winter)
At the request of the British Science Research Council, the famous mathematician Sir James Lighthill prepared the report "Artificial Intelligence: A General Survey," published in a collection of papers from a symposium on artificial intelligence in 1973. Lighthill described the state of development in the field of artificial intelligence and gave very pessimistic forecasts for the main directions of the science. In his report the level of achievement in AI was characterized as disappointing, and the overall assessment was negative from the standpoint of practical significance.
In the 1970s AI became the subject of criticism and reduced funding. AI researchers had been unable to assess adequately the complexity of the problems they faced. Their excessive optimism had given rise to inordinately high hopes and expectations, and when the promised results failed to materialize, funding for AI ceased.
At the same time, the branch of AI concerned with neural networks was shut down almost completely for 10 years as a result of Marvin Minsky's devastating critique of perceptrons.
Despite the difficulties encountered in the 1970s (limited computational power, the "combinatorial explosion" in most algorithms, the huge amounts of data required for speech and image recognition tasks), new ideas appeared in logic programming, commonsense reasoning, and much more.
Boom 1980 - 1987
In the 1980s a class of AI programs called "expert systems" was adopted by a number of large corporations, and expert systems became the mainstream of AI research. In 1980 the XCON expert system was completed at CMU for Digital Equipment Corporation. By 1986 it was saving the company an estimated $40 million a year. By 1985, corporations were spending a billion dollars a year on AI research.
At the same time, the Japanese government began "aggressively" financing a project to create AI based on a fifth-generation computer (see the Fifth Generation Computer project). Unfortunately, the project did not live up to the hopes placed on it.
Another important event was the revival of neural networks in the works of John Hopfield (the Hopfield network) and David Rumelhart (backpropagation, an algorithm for the back-propagation of error).
The second "winter" AI, 1987 - 1993 (The second AI winter)
The interest and sponsorship of the business community in AI research rose and fell in the classic pattern of an economic bubble. The market for specialized AI hardware began to collapse in 1987: personal computers from Apple and IBM had been steadily gaining speed and power, and in 1987 they became more powerful than the more specialized and expensive machines.
1993 to the present
The field of study related to AI has finally achieved some of its original goals. Certain developments have found their niche in the technology industry. Part of the success was due to increased computing power, partly due to focusing on specific problems.
But the dream of an intellect equal to the human has not been realized, and AI researchers have become much more prudent and cautious in their predictions and judgments.
Today, the development of AI systems is proceeding at an intensive pace and the world's largest institutions are working on this problem.
Areas of research in the field of artificial intelligence
In research in the field of artificial intelligence, there are two main areas: bionic and pragmatic.
The bionic direction of research in the field of artificial intelligence is based on the assumption that if the structures and processes of the human brain are reproduced in an artificial system, the results that system obtains in solving problems will be similar to those a person obtains. The following approaches stand out in this area:
• Neural network algorithms. This approach is based on a system of elements that, like the neurons of the brain, are capable of reproducing certain intellectual functions. Applied systems developed on this basis are called neural networks.
• The structural-heuristic approach. It is based on knowledge about the behavior of an observed object or group of objects, and on reasoning about the structures that could produce the observed forms of behavior. Multi-agent systems are an example of such systems.
• Evolutionary algorithms. Here problems are formulated in terms of an evolving population of organisms: a set of subsystems that compete and cooperate, whose joint functioning maintains the necessary balance (stability) of the whole system under constantly changing environmental influences. This approach is implemented in applied systems based on genetic algorithms.
• Fuzzy logic. The most impressive feature of human intelligence is the ability to make correct decisions under incomplete and fuzzy information. Building models of human approximate reasoning and using them in computer systems is today one of the most important problems of science. "Artificial intelligence" that easily solves problems of managing complex technical installations is often helpless in simple everyday situations. To create intelligent systems that can interact adequately with a person, a new mathematical apparatus is needed, one that translates ambiguous everyday statements into the language of clear and formal mathematical formulas.
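As an illustration of the evolutionary approach mentioned above, a minimal genetic algorithm on bit strings might look as follows. The task (evolving a string of all ones), the fitness function, and the selection, crossover, and mutation settings are all illustrative assumptions:

```python
import random

# A minimal genetic algorithm on bit strings (the classic OneMax task:
# evolve the population toward the string of all ones).
random.seed(0)
LENGTH, POP, GENS = 20, 30, 60

def fitness(bits):
    return sum(bits)                      # number of ones

def crossover(a, b):
    cut = random.randrange(1, LENGTH)     # one-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]              # keep the fitter half (elitism)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best), "of", LENGTH)
```

The "balance" the bullet describes appears here as the interplay of selection pressure (competition) and crossover (cooperation between parent solutions).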
The pragmatic direction is based on the assumption that human mental activity is a "black box". If the result of an artificial system's functioning coincides with the result of an expert's activity, the system can be considered intelligent regardless of how that result was obtained. This approach does not ask whether the structures and methods used in the computer are adequate to those a person uses in similar situations; only the final result of solving specific problems is considered.
From the point of view of the final result in the pragmatic direction, three target areas can be distinguished:
• Development of methods for the representation and processing of knowledge is one of the foundations of the modern period of development of artificial intelligence;
• Intelligent programming, divided into several groups. These include game programs, natural-language programs (machine translation systems, automatic abstracting, text generation), recognition programs, and programs for creating works of art and graphics.
• The creation of tools. The toolkit includes languages for artificial intelligence systems; deductive and inductive methods of automatic program synthesis; linguistic processors; speech analysis and synthesis systems; knowledge bases; shells and prototype systems; and cognitive graphics systems.
Common to these programs is the extensive use of search procedures and methods for solving problems that involve searching through and evaluating a large number of alternatives. These methods are used in the computer-aided solution of game problems, in decision-making problems, and in planning purposeful activity in intelligent systems.
The essence of the implementation of AI in theory and practice
The essence of how thinking is realized is still not fully understood and remains a mystery to science. Today computers mostly process not information itself but the contents of memory cells, which can be filled with anything. Computers do not "comprehend" the content of information, unlike people, who operate with meaningful concepts. Figuratively speaking, in humans the process of thinking takes place in the soul, while for machines no such thing exists.
What components usually make up an artificial intelligence system, or indeed any intelligence?
First of all, AI is a combination of hardware and corresponding software. The first is usually a computer of a certain configuration together with servicing mechanisms (manipulators, video cameras, sound and other sensors). It is the software, to a greater degree, that determines how "advanced" a given AI is and thus the "intellect" of the machine as a whole.
First in the electronic makeup of an AI comes a huge amount of memory, on the basis of which all reasoning and conclusions are built. Clearly it is impossible to put all the knowledge from every field into AI memory, but it is quite possible to build an intelligent system for a particular field of knowledge. Usually a person first loads the system with a minimum of knowledge about the world. This knowledge is then expanded as experience accumulates, either invested by a person (the passive path) or acquired by the system itself (the active path) as it adapts to environmental conditions. However, computer memory remains just a simple collection of files and folders.
Human memory is much more complex: it operates not with files but with fragments of images. Human memory is a memory of images. It can be compared to a comet: behind it stretches a long "tail" of life experience, which over time is automatically forgotten and overwritten by the new; the comet itself is the layer of immediate, moment-by-moment memory; and a thin front layer is the hazy reasoning (foresight) about one's future. For now, the memory of AI systems is fundamentally different from human memory.
Secondly, the logical processing of the situation takes place in an information processing device, most often specific software and the computer's central processor.


The performance and activity of AI directly depends on the capabilities of this information processing center.
The most important difference between genuine artificial intelligence software and simple applications is the ability to "think" in images. Thanks to imaginative thinking, technologies such as information compression and encoding, biometric image processing, color-rendition optimization, similarity search, image meaning analysis, automatic cataloging of information, and pattern recognition algorithms have become available today.
For a person, examples of images can be the sky, clouds, music, the sea, poetry, etc. The ability to perceive the outside world in the form of images allows people to recognize an infinitely large number of objects and understand each other regardless of their nationality.
The process by which a machine perceives an object as an image has some peculiarities. Usually, before an image (for example, a graphic one) is extracted, it is known in advance that the set of points of a certain space must be divided into two or more regions, and that after the separation every point will belong to one of these regions. Only the approximate locations (coordinates) of the points of the source region are known beforehand. The points are then divided into regions (images) according to certain criteria; for a picture these would be changes in color and contrast. Sometimes the image must first be processed so that the points separate more sharply (for example, by converting a color image to black and white), which raises the sensitivity of the separation; this is how most text recognition programs work.
If a system can independently classify and filter not only previously known objects but also unknown ones (by appearance, without knowing their properties), the process is called self-learning. Today AI systems can distinguish only a few images in small, predefined spaces.
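The preprocessing step just described, converting an image to black and white so the points separate more sharply, can be sketched as a simple thresholding operation. The grid of grayscale values below is a toy stand-in for a real image:

```python
# Separating the points of an image into two regions by a threshold, as in
# converting a grayscale image (values 0-255) to black and white before
# recognition.
def binarize(image, threshold=128):
    return [[1 if pixel >= threshold else 0 for pixel in row] for row in image]

image = [[ 12, 200, 210],
         [ 30, 190,  40],
         [220,  25, 230]]
print(binarize(image))
# [[0, 1, 1], [0, 1, 0], [1, 0, 1]]
```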
An important feature of AI must be its ability to learn, and numerous scientists around the world are working on this problem. Learning is usually defined as a process through which a system gradually acquires the ability to respond with the necessary reactions to certain external influences. Today there are prototype machines capable of learning simple mechanical operations (machining parts on a lathe, copying a human gait). However, progress in machine learning is still rather slow and does not keep pace with the development of electronics.
To solve a problem, an AI today needs a solution algorithm, just as a person does. An algorithm is an exact prescription for performing operations in a certain order to solve a specific task. Finding an algorithm, for a person or for a machine, involves subtle and complex reasoning. Such reasoning often requires ingenuity and creativity, so a machine lacking these qualities constantly needs human interaction. Trial and error is foreign to the machine: it searches for solutions only among those written into its database.
An important role in the functioning of AI is played by information analysis and the accumulation of life experience. Watching children, we see that they receive most of their knowledge through training and communication with the outside world, not ready-made in advance. The invention of an effective mechanism for self-analysis and the self-accumulation of life experience would put AI on a much higher level than today's.
The real possibilities and advantages of artificial intelligence
Recently one can trace the gradual transformation of software engineering into intellectual engineering, which deals with more general problems of information processing and knowledge representation. To assess the real possibilities for the development of AI, let us consider promising approaches to organizing AI systems and the capabilities of artificial intelligence today.
Knowledge representation and the development of knowledge-based systems
Models of knowledge representation are being developed, and the knowledge bases that form the core of expert systems are being created. Models and methods for extracting and structuring knowledge are being improved.
This direction has grown into a separate discipline: knowledge engineering.
AI software
Special languages are being developed for solving intellectual problems, in which logical and symbolic processing takes precedence over traditional computational procedures (LISP, PROLOG, Smalltalk, REFAL).
Software packages oriented toward the development of intelligent systems have been created: KEE, ARTS, G2.
Empty expert systems (shells), whose knowledge bases can be filled with concrete knowledge, have been created: KAPPA, EXSYS, M1, EKO.
AI on the Internet
Experts believe that in the future it is the Internet that will determine the user's lifestyle and leisure activities: digital television, a universal library, games, and so on. These services will probably be free or nearly free.
Today AI systems are actively used on the Internet: search engines that show signs of intelligence and can find and deliver the needed information in seconds; personalized search; voice interfaces; image and handwriting recognition; site guides; intelligent sensors capable of warning of robbery or fire, and so on.
Robotics
Everyone wants to lighten their work as much as possible. Robotics today is quite a promising direction of AI. Since muscular work can be replaced by the work of mechanisms, people have not failed to exploit this: in many factories today, robots work instead of people.
There are several generations in the history of robotics.
1 Robots under a rigid control scheme (programmed manipulators). Almost all modern industrial robots belong to this generation.
2 Adaptive robots with sensing devices. There are successful developments, but they are rarely used in industry.
3 Intelligent robots, self-learning and self-adjusting. They are the ultimate goal of robotics. The main problems here are machine vision, in particular adequate recognition, processing, and storage of three-dimensional visual information, as well as maintaining balance during movement.
The first robots could hardly be called intelligent. Only at the end of the 1960s were robots designed to be driven by computers. For example, the Industrial Intelligent Robot project in Japan in 1969 produced a robot with AI elements for performing assembly and installation work with visual inspection. The robot's manipulator had 6 degrees of freedom and was equipped with tactile sensors. Its vision was organized with the help of two video cameras fitted with light filters for recognizing the color of objects. The robot could identify the area where objects were located and recognize them. Gradually the characteristics of robots have improved greatly, and today their precision would be the envy of any human. In developed countries there are plans to move a significant part of the armed forces onto a robotic basis.
Public attention is drawn to the annual competitions of robots that cross rough terrain using only a map. These intricately organized mechanisms are capable of making movement-coordination decisions on their own; for this purpose they carry a primitive AI with vehicle tilt sensors, radio beacons, a compass, a range finder, infrared and other motion-monitoring sensors. In the United States, work is under way on machine learning, robot navigation, the logical planning of robot actions, and more.
Medical systems
Systems have been created for performing precise operations and for consulting doctors in difficult situations, along with robotic arms for high-precision surgery (for example, on the retina).
Fully automated production
Fully automated plants are being created in which people are replaced (especially in conditions of increased danger). Most production lines in modern microelectronics plants and other industries need only a few human operators; the rest of the assembly and packaging of products is performed by robots.
Expert systems.
Today, society is interested in real-time decision-making systems, means of storing, retrieving, analyzing and modeling knowledge, dynamic planning systems. Among them there are already specific results:
• DENDRAL is a highly intelligent system for recognizing chemical structures, the oldest of the expert programs. The first versions of this system appeared in 1965. The user gives DENDRAL some information about a substance together with spectrometry data (infrared, nuclear magnetic resonance, and mass spectrometry), and the system in turn issues a diagnosis in the form of the corresponding chemical structure.
• MYCIN is an expert system for medical diagnostics, developed by the infectious diseases group at Stanford University. The program makes a diagnosis based on the symptoms presented to it and recommends a course of drug treatment for the diagnosed infection.
• PUFF is a system for analyzing human respiratory disorders. It is based on the MYCIN system, from which the data on infections was removed and data on pulmonary diseases inserted.
• PROSPECTOR is a system created to facilitate the search for commercially viable mineral deposits.
Machine learning and self-study.
Models, methods, and algorithms are being developed that focus on the automatic accumulation and formation of knowledge based on the analysis and generalization of data. Learning from examples is being implemented, along with traditional approaches from pattern recognition theory. Today this area receives great attention in artificial intelligence.
There are many machine learning algorithms. Among the most widespread is the C4.5 family. These algorithms build and analyze a decision tree, each branch of which is associated with a specific class of training examples. In the course of the procedure, classes can be divided into subclasses. The algorithm terminates when it reaches a decision that satisfies the task. Its disadvantage is that it is limited by the examples of problem solving available to it.
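A rough sketch of this family (an ID3-style tree of the kind C4.5 extends) might look as follows; the toy weather data and attribute encoding are invented for illustration:

```python
import math
from collections import Counter

# An ID3-style decision tree: at each node choose the attribute with the
# highest information gain, split the examples into subclasses, and stop
# when a branch contains a single class.
def entropy(rows):
    counts = Counter(label for *_, label in rows)
    total = len(rows)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def build_tree(rows, attrs):
    labels = {label for *_, label in rows}
    if len(labels) == 1 or not attrs:            # pure branch: emit a leaf
        return Counter(label for *_, label in rows).most_common(1)[0][0]
    def gain(a):                                 # information gain of attribute a
        split = Counter(row[a] for row in rows)
        rem = sum(n / len(rows) * entropy([r for r in rows if r[a] == v])
                  for v, n in split.items())
        return entropy(rows) - rem
    best = max(attrs, key=gain)
    rest = [a for a in attrs if a != best]
    return {best: {v: build_tree([r for r in rows if r[best] == v], rest)
                   for v in {row[best] for row in rows}}}

# Toy data: (outlook, windy, play?)
rows = [("sunny", "yes", "no"), ("sunny", "no", "no"),
        ("rain",  "yes", "no"), ("rain",  "no", "yes"),
        ("cloudy", "no", "yes"), ("cloudy", "yes", "yes")]
tree = build_tree(rows, attrs=[0, 1])
print(tree)
```

The resulting nested dictionary is the decision tree: the root splits on outlook, and only the "rain" branch needs the second attribute.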
In recent years Data Mining and Knowledge Discovery (the search for patterns in the data presented) have been gaining popularity.
Data mining and processing of statistical information.
A relatively new direction of application of AI. These include the process of identifying patterns in the initial information, building a specific model for analyzing information, forecasting the results of research for the future and presenting them in the form of graphical information. This is quite a promising direction of AI, which is already actually used in various exchanges and marketing activities.
Development of natural interfaces and machine translation systems
Computational linguistics, and machine translation in particular, has been a popular topic since the 1950s. Translation proved not to be as simple an idea as it seemed to the first developers. They used word-by-word substitution in the text, which was inadequate: a text can be translated well only on the basis of an understanding of the whole text and of its context.
The use of intermediate languages
Translation proceeds from the source language into an intermediate content language and from there into the target language.
Associative search
Search for similar text fragments and their translations in special databases.
Structural approach
It applies sequential analysis and synthesis of natural-language messages in several stages:
• Morphological analysis of the words in the text.
• Syntactic analysis: parsing sentences and the grammatical connections between words.
• Semantic analysis of the content of the components of each sentence, on the basis of a subject-oriented knowledge base.
• Pragmatic analysis of the content of sentences in a real context, based on its own knowledge base.
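The staged analysis above can be caricatured in code. Every stage here is a drastic simplification invented for illustration: a small suffix table stands in for morphology, a fixed subject-verb-object pattern for syntax, and a tiny knowledge base for semantics:

```python
# A toy illustration of the staged structural approach.
SUFFIXES = ["ing", "ed", "s"]
KNOWLEDGE = {"cat": "animal", "fish": "food", "dog": "animal"}

def morphology(word):
    for suf in SUFFIXES:                 # strip one inflectional suffix
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[:-len(suf)]
    return word

def syntax(words):                       # naive subject-verb-object parse
    subject, verb, obj = words
    return {"subject": subject, "verb": verb, "object": obj}

def semantics(parse):                    # interpret via the knowledge base
    return {role: (word, KNOWLEDGE.get(word, "unknown"))
            for role, word in parse.items()}

stems = [morphology(w) for w in "cats liked fish".split()]
print(semantics(syntax(stems)))
```

Each stage consumes the previous stage's output, which is the essential point of the structural approach; real systems replace each toy stage with a full linguistic model.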
Automatic language analysis.
This includes searching dictionaries, language recognition, translation, identifying unfamiliar words, vocabulary, grammar, etc.
Pattern recognition
This direction, which took shape at the birth of AI, has now become an independent science. The main approach is to describe classes of objects in terms of significant attributes. Each object is assigned a matrix of attributes, by which recognition takes place. Special mathematical procedures and functions are used to separate objects into classes.
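A minimal sketch of recognition by attribute vectors: each object is described by a row of significant attributes, and an unknown object receives the class of its nearest labeled neighbor, with Euclidean distance as the separating function. The attribute values below are invented for illustration; real systems use far richer features and separating functions.

```python
# Nearest-neighbor classification over attribute vectors.
def nearest_class(samples, query):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(samples, key=lambda s: dist(s[0], query))[1]

# (height_cm, weight_kg) -> class
samples = [((30, 4), "cat"), ((35, 6), "cat"),
           ((60, 25), "dog"), ((70, 30), "dog")]
print(nearest_class(samples, (33, 5)))   # cat
print(nearest_class(samples, (65, 28)))  # dog
```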
Neural networks
The principle behind artificial neural networks is borrowed from biology. They are built from elements that reproduce the elementary functions of a biological neuron. Artificial neural networks reproduce certain properties inherent in the human brain: they learn from experience, generalize it, and can extract the essential from incoming information.
The ability of a neural network to learn was first investigated by W. McCulloch and W. Pitts, who created a neuron model in 1943 and described the principles of building neural networks. Later, in 1958, F. Rosenblatt proposed his own neural network model, the perceptron, and in 1986 G. Hinton and his colleagues published an article describing a neural network model and its learning algorithm, which gave impetus to the effective study of neural networks.
Models built on the pattern of the brain's neural networks are characterized by easy parallelization of algorithms and high performance. An important property unites them with the human brain and is absent in ordinary electronic machines: neural networks work even with incomplete information about the environment; that is, like a person, they can answer not only "yes" or "no" but also "I don't know for certain, but probably yes".
Neural networks today can recognize signals, speech, and images and are used for data retrieval, financial forecasting, and data encryption. The neural network approach is applied to a wide range of tasks: clustering information from the Internet, simulating and modeling complex processes, pattern recognition, and others. Work continues on improving the synchronous operation of neural networks on parallel devices.
The advantages of neural networks include self-learning, self-tuning, flexibility of configuration, and high performance. Among the best-known neural networks today are Hopfield networks, backpropagation networks, and self-organizing maps.
New computer architectures
Modern computers, like those of earlier generations, are based on the traditional sequential von Neumann architecture, which is rather inefficient for symbolic processing. The efforts of scientists and manufacturers are therefore aimed at developing architectures capable of processing symbolic and logical data: PROLOG and LISP machines, database computers, and parallel and vector computers.
Although good industrial designs exist, their high cost, insufficient software, and hardware incompatibility with traditional computers significantly hinder the widespread adoption of the new architectures.
Game direction
One of the most interesting and useful areas of AI application is the development of games, entertainment programs, and systems for artificial communication with people. Most of the effort here goes into modeling social behavior, communication, human emotions, and creativity. This is one of the most difficult areas of AI development and, at the same time, one of the most promising.
Appliances
Modern artificial intelligence systems can master far more specialties than an ordinary person, thanks to a large number of diverse sensors and devices that act as analogues of the human sense organs.
AI developments are used today as autonomous secretaries, search engines, work planners, professional tutors, and salespeople. AI systems are also expected to appear in various household appliances: room cleaners; units for preparing, ordering, and delivering food; automatic car drivers; and so on.
However, one should not think that computers or robots will be able to solve any problem. Scientists have proved that there are kinds of problems for which no single effective algorithm can exist (for example, difficult life situations). A person often expands the zone of knowledge about nature by trial and error and discovers new laws; such behavior is completely uncharacteristic of computer artificial intelligence.
Shortcomings and problems of modern artificial intelligence
Today we observe the constant growth of computers' processing power, but this does not mean they are acquiring AI. Unfortunately, the operating principles of the human psyche also remain unclear today, and since AI was originally conceived as a prototype of a human, its creation is bound up with the unknown. Nevertheless, the growth of computer performance, combined with better processing algorithms, makes it possible to apply various scientific methods in practice in many areas of human life.
Let us consider the main problems that arise when developing AI in practice.
Most modern AI developments operate with only two kinds of notions: YES (good) and NO (bad). In mathematics and electronics this is normal, but in real life exact notions are rarely used.
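One way around the strict YES/NO limitation is to let a notion have a degree of truth between 0 and 1, as in fuzzy logic. The sketch below is purely illustrative: the notion "warm" and its temperature thresholds are invented here to show the idea of a graded answer.

```python
# A sketch of replacing a strict YES/NO with a graded (fuzzy) notion:
# instead of a boolean "warm / not warm", membership is a degree in [0, 1].
# The temperature thresholds are invented for illustration only.

def warm_membership(temp_c):
    """Degree to which a temperature counts as 'warm' (0.0 .. 1.0)."""
    if temp_c <= 10:
        return 0.0             # definitely not warm: a plain "no"
    if temp_c >= 25:
        return 1.0             # definitely warm: a plain "yes"
    return (temp_c - 10) / 15  # a graded answer in between

print(warm_membership(5))   # -> 0.0
print(warm_membership(18))  # roughly 0.53: "probably yes"
print(warm_membership(30))  # -> 1.0
```

The middle case is exactly the "I don't know for sure, but probably yes" that binary logic cannot express.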

Since AI was initially conceived as a human-like intelligence serving as a complement to a person, pleasing that very person will be no easy task. How, for example, is a machine to understand a person's depression or euphoria? The notions "cheerful" and "sad" mean nothing to a machine.
Problems in AI development can also be traced at the level of forming images and image memory. Because images in human thinking interpenetrate one another, forming chains of images presents no difficulty to people: it is associative. Files, in contrast to images, are separated into blocks of machine memory. In human memory, data are retrieved not by scanning the contents of memory but along ready-made chains of associative links, whereas a computer looks up specific files.
An example: a person has no trouble recognizing a friend's face in a photograph even if the friend has lost or gained weight, which is a vivid illustration of associative memory. For a machine this is practically impossible: it cannot distinguish the essential from the secondary. To obtain a result, AI uses only a fixed base of known data; experimentation is foreign to it.
The problem of translating from one language to another, and of teaching a machine language, remains. If you ask a modern translation program (for example, Promt) to translate any paragraph of a book into another language, you will see that quality is nowhere in sight: the result is a mere string of words. Why? Because translating whole sentences requires understanding the meaning of the sentence, not just translating its words. Modern AI programs cannot yet extract meaning from text (probably because the intermediary in translating, say, from English into Ukrainian is a soulless machine language of ones and zeros).
The simplicity of mathematical computation. Recently many leading AI specialists have proposed excluding the straightforward algebraic solving of equations from the list of highly intellectual tasks, since standard sequential algorithms now exist for it and it does not require complex, multi-stage, and often non-sequential intellectual abilities. Text recognition, playing chess and checkers, and sound recognition are successfully applied in practice today, yet there are proposals to remove them from the list of AI problems as well.
Modern developments related to artificial intelligence are capable of self-copying (reproduction). At the current stage of cybernetics and electronics, fully autonomous self-replication of robots is impossible; at least partial (often substantial) human intervention is required. For software, however, this is simple, for example a utility's ability to copy itself into another directory. A vivid example is computer and mobile viruses, which are capable of uncontrolled reproduction and destructive actions.
Another obstacle to creating AI is its lack of any manifestation of will. Strange as it sounds, modern computers have colossal capabilities for complex calculation yet entirely lack desires. Even if a computer is equipped with a microphone and speakers, it will not start composing music on its own or spontaneously launching applications. It is not lazy; it simply has no desires. A computer does not care who works with it, or why, or to what end.
Modern AI prototypes lack incentives for further improvement. In nature, every living organism is subject to natural selection, which drives constant adaptation to the environment. Hunger, the urge to survive, and the urge to produce offspring are factors that constantly act on a living organism as stimuli for further improvement.
The motivation of most modern AI is very primitive: a person sets a task, and the machine carries it out without alternatives or emotions. In theory, motivation and improvement could be fostered by introducing computer-to-human feedback and building an improved system of machine self-learning. In practice, though, everything turns out to be far more complicated. Still, such work is already under way. Elementary hunger has been chosen as the stimulus: a harbinger of the imminent exhaustion of the machine's energy resources and, accordingly, of its existence. The American S. Wilkinson created a "gastrorobot" named "Chew-Chew". The machine feeds on sugar, and the basis of its behavior is exploring the surrounding world in search of food. Chew-Chew's body consists of three carts, and hunger is its constant companion, since its batteries continually need recharging. The problem is the machine's frequent mistakes in choosing food.
A certain primitiveness of artificial neural networks. Artificial neural networks today demonstrate remarkable capabilities reminiscent of the human brain: they learn from personal experience, generalize information, configure themselves, and extract the essential from redundant data. Yet even advanced artificial networks cannot duplicate the functions of the human brain. The real intelligence demonstrated today by elaborately constructed neural networks is below the level of an earthworm's intelligence.
The ineffectiveness of artificial intelligence for military purposes. Lately the media quite often carry news about creating AI for military purposes. In reality, however, the developers of such robotic machines face very difficult and often intractable tasks. First among these are the shortcomings of automatic recognition systems that must learn on their own and adequately analyze information in real time (making the right decisions at the right moment). It would be very difficult, and most likely practically impossible, for such a combat machine to distinguish friend from foe on the battlefield.
Nor have algorithms yet been developed for operating such devices on unfamiliar terrain; today such combat units are capable, at most, of simple remote control. The military has achieved more notable results in applied directions: accurate recognition of speech and voice timbre, various "lie detectors", consultation systems (reducing repetitive actions and the workload on pilots in real flight), low-level analysis of images from video cameras, and so on.
In addition, a fairly large number of devices with a semblance of AI have been created to improve the work of the armed forces: various intelligent sonars and radars for target detection, satellite positioning systems for precise coordination of troop locations and movements, and various navigation systems for shipping.
Conclusions
The introduction of logic into applied areas and programs continues today. Global-scale programs capable of matching a real person to any degree, of conducting a process of reasonable thinking and communication, do not exist and are not foreseen in the near future (there are too many obstacles and unsolved problems).
Today a computer executes only the exact instructions a person gives it. When writing an application, a programmer uses a high-level language; a translator program then converts the application into the machine language of directives that the computer's processor understands. It thus becomes clear that a computer by itself is incapable of thinking in principle; it is high-level programs that make it relatively intelligent.
Summing up all that has been said, one may conclude that highly intellectual thinking is a property not of highly organized matter but of a highly organized SOUL. Animals and humans can set and solve problems. Computers are inanimate devices; today programmers humanize them, and the machines merely follow their instructions. Unfortunately, however complex a modern program may be, whatever sophisticated algorithms are built into it, in the end it can do nothing beyond what its author has provided for. Perhaps that will change in the future, but not today...
Scientists are trying to lift the veil over the distant future. Is creating artificial intelligence possible at all? Can human-like systems be created that think in abstract images, self-replicate, self-learn, respond correctly to changes in the environment, and possess feelings, will, and desires? Can the corresponding algorithms be created? Will humanity be able to control such objects? Unfortunately, there are no answers to these questions yet. One can only hope that, if artificial intelligence can be created in principle, then sooner or later it will be created.

