...
which neural mechanisms provide for the perception of objects in large, complex scenes is considered. Some experiments show that filtering out information about small objects plays an important role in selecting search targets [Luck, 1998]. The question of how we "learn" to see is investigated in [Sagi and Tappe, 1998].
The works [Gilbert, 1992, 1998] are devoted to the flexibility of perception. It follows from them that the picture a person sees does not reflect the exact physical characteristics of the scene; it depends greatly on the processes by which the brain tries to interpret it.
[Ivry, 1998] addresses the question of how the cerebral cortex encodes and indexes temporally correlated information, including in the interpretation of sensations and in motor activity.
Stress hormones, produced in states of emotional arousal, affect memory processes [Cahill and McGaugh, 1998]. This relates to the grounding problem: why are thoughts, words, and sensations meaningful to an agent? In what sense is "sadness in things", the Virgilian lacrimae rerum, possible?
Acoustic-phonetic aspects of speech rest on important organizational principles that link research on neural systems with cognitive and linguistic theories [Miller et al., 1998]. [Gazzaniga, 2000] addresses the question of how the syntactic and semantic components of language are combined in the cerebral cortex.
How does an individual learn a particular language, and which neurophysiological stages correspond to this process? These questions are addressed in [Kuhl, 1993, 1998].
The following issues are addressed in [O'Leary et al., 1999]. How should development be understood? What determines the plasticity of critical periods and the reorganization observed during maturation in mammalian somatosensory systems? Are developmental phases critical for the formation of intelligence? The same questions are addressed in [Karmiloff-Smith, 1992], [Gazzaniga, 2000], and subsection 16.1.2.
Practical work in artificial intelligence does not require extensive knowledge of these and adjacent neural and psychological fields. But such knowledge can help in designing intelligent devices, as well as in placing AI research in the context of a general theory of intelligent systems. Finally, the synthesis of psychology, neurophysiology, and computer science is a truly fascinating task. But it requires a careful epistemological treatment, which is our next topic of discussion.
16.2.2. Epistemology issues
If you don't know where you are going, you might end up somewhere else... - attributed to Yogi Berra
The development of artificial intelligence has proceeded through the solution of many important tasks and issues. Understanding natural language, planning, reasoning under uncertainty, and machine learning are all typical problems that reflect important aspects of rational behavior. Intelligent systems operating in these domains require knowledge of goals and experience in the context of a specific social situation. To achieve this, a program that one hopes to endow with "reason" must be capable of forming concepts.
The process of forming concepts reflects both the semantics supporting the use of symbols and the structure of the symbols used. The task is to find and use the invariances inherent in the subject area. The term "invariant" is used to describe regularities, or meaningful, usable aspects of complex working environments. In this review, the terms symbols and symbol systems are used in a broad sense. They cover a wide range of concepts: from the well-defined symbols of Newell and Simon [Newell and Simon, 1976] to the nodes and network architectures of connectionist systems, as well as the evolving structures of genetics and artificial life.
Although the questions discussed below are typical of most work in the field of AI, let us dwell on the problems of machine learning. There are three reasons for this.
First, a sufficiently deep study of at least one key subject area of artificial intelligence will give the reader a more complete and accurate picture of the latest achievements of AI. Second, advances in machine learning, and especially in neural networks, genetic algorithms, and other evolutionary approaches, have the potential to revolutionize the field of AI. Finally, learning is one of the most fascinating areas of research in artificial intelligence.
Despite this progress, learning remains one of the most difficult problems facing AI researchers. Below we discuss three issues constraining progress in this area today: first, generalization and overtraining; second, the role of inductive bias in learning; third, the empiricist's dilemma, or understanding evolution without constraints. The last two problems are interrelated. The bias inherent in many algorithms is an expression of the rationalist problem: the bias is determined by expectations, i.e. what we learn often depends on what we want to learn. There is another problem: sometimes we have no a priori guess about the result. This, for example, can be observed in studies of artificial life. Can one simply say: "Build it and it will work as it should"? If you believe Yogi Berra (see the epigraph to this section), most likely not! These topics are covered in the next section.
Generalization problem
The examples used to present the various learning models (symbolic, connectionist, and evolutionary) were often too artificial. For example, the connectionist architectures often contained only a few nodes or a single hidden layer. Such examples are appropriate, since the basic laws of learning can be adequately explained in the context of a few neurons or layers. But it is worth remembering that in real problems neural networks are usually much larger, and the issue of scale matters. For example, backpropagation learning requires a large number of training examples, and solving problems of any practical interest requires large networks. Many researchers ([Hecht-Nielsen, 1990], [Zurada, 1992], [Freeman and Skapura, 1991]) have worked on the problems of choosing the optimal number of inputs, the ratio between the number of input parameters and nodes in the hidden layer, and determining how many training passes are sufficient to ensure convergence. Here we merely note that these problems are difficult, important, and in most cases open.
The quantity and quality of training data are important for any learning algorithm. Without extensive prior knowledge of the domain, a learning algorithm may fail to extract patterns from noisy, insufficient, or even corrupted data.
A related problem is the question of "sufficiency" in learning. When can an algorithm be considered good enough at capturing the important boundaries, or invariants, of a problem domain? Should a portion of the input data be reserved for testing the algorithm? Does the amount of available data match the required quality of learning? Judgments of "sufficiency" are probably largely heuristic, or even aesthetic: we humans often regard algorithms as "good enough".
We illustrate the problem of generalization with an example that uses backpropagation to induce a function from a set of given points (Fig. 16.2).
The lines around this set represent functions found by the learning algorithm. Recall that once training is complete, the network must be presented with new input data to test the quality of learning.
Fig. 16.2. A set of data points and three approximating functions.
The function f1 represents a fairly accurate least-squares approximation. With further training the system may produce f2, which looks like a fairly "good" approximation of the data set, yet f2 still does not match the given points exactly. Continued training can lead to functions that approximate the data precisely but generalize terribly to new input data. This phenomenon is called overtraining (overfitting) the network. One of the strengths of backpropagation learning is that in many application domains it produces effective generalizations, i.e. function approximations that fit the training data and also handle new data correctly. Nevertheless, finding the point at which a network passes from an undertrained to an overtrained state is not a trivial task. It is naive to think that one can build a network, or any learning tool at all, hand it "raw" data, and then step aside and watch it produce the most effective and useful generalizations applicable to new similar problems.
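To make overtraining concrete, here is a minimal sketch, not from the original text, that fits noisy samples of an assumed "true" function with polynomials of increasing degree; the target function, noise level, and degrees are all illustrative choices. The low-degree fit plays the role of f1, the moderate one of f2, and the degree-9 polynomial interpolates the ten training points exactly while generalizing poorly to new inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
true_f = lambda x: np.sin(x)                     # assumed "true" function
x_train = np.linspace(0.0, 3.0, 10)
y_train = true_f(x_train) + rng.normal(0.0, 0.1, x_train.size)  # noisy samples
x_new = np.linspace(0.1, 2.9, 50)                # new input data, as in the text

for degree in (1, 3, 9):                         # roles of f1, f2, and an overtrained fit
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_mse = np.mean((np.polyval(coeffs, x_new) - true_f(x_new)) ** 2)
    print(f"degree {degree}: train MSE = {train_mse:.4f}, new-data MSE = {new_mse:.4f}")
```

Typically the highest-degree fit drives the training error toward zero while the error on new data grows, which is exactly the transition from undertraining to overtraining described above.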
To summarize, let us return the question of generalization to its epistemological context. When a problem solver forms and applies generalizations in the course of a solution, it creates invariants, or even systems of invariants, in the problem/solution domain. The quality and clarity of such generalizations can therefore be a necessary basis for a successful project. Research on generalization across problems and solution processes continues.
Inductive bias: the rationalist's a priori
The machine learning methods discussed in Chapters 9-11, and consequently most AI methods, reflect the inductive biases of their creators. The problem with inductive bias is that the resulting representations and search strategies provide a medium for encoding an already interpreted world. They rarely offer mechanisms for questioning our interpretations, generating new perspectives, or tracking and changing ineffective ones. Such implicit assumptions lead into the trap of rationalist epistemology, where the environment under study can be seen only as we expect, or have been taught, to see it.
The role of inductive bias should be made explicit in every learning algorithm. (An alternative formulation: being unaware of an inductive bias does not mean that it does not exist or that it does not critically affect the parameters of learning.) In symbolic learning, inductive biases are usually obvious, for example the use of a semantic network for concept learning. In Winston's learning algorithm [Winston, 1975a], the biases include representation as conjunctive relationships and the use of "near misses" to refine the constraints imposed on the system. Similar biases appear in version space search (section 9.1), in building decision trees with the ID3 algorithm (section 9.3), and even in the Meta-DENDRAL rules (section 9.5).
As mentioned in Chapters 10 and 11, many aspects of connectionist and genetic learning strategies also presuppose inductive biases. For example, the limitations of perceptron networks led to the introduction of hidden layers. A fair question is how the hidden nodes contribute to the solution. One function of hidden nodes is that they add new dimensions to the representation space. In the simple example of subsection 10.3.3 it was clear that the data in the XOR problem are not linearly separable in two-dimensional space. However, the weights obtained during learning add another dimension to the representation, and in the three-dimensional space the points can be divided by a two-dimensional plane. The output layer of such a network can be regarded as a perceptron finding a plane in three-dimensional space.
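A minimal numpy sketch of this idea (an illustrative 2-4-1 network trained by backpropagation, not code from the book; the layer sizes, learning rate, and epoch count are arbitrary assumptions, and a different seed may need more epochs): the hidden layer remaps the four XOR points into a space where the output unit, acting as a perceptron, can separate them linearly:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR: not linearly separable in 2D

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)   # hidden layer: 4 new dimensions
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)   # output unit acts as a perceptron
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # remap inputs into hidden-feature space
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # output-layer error gradient
    d_h = (d_out @ W2.T) * h * (1 - h)       # backpropagated hidden-layer gradient
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print(out.round(2).ravel())                  # approaches [0, 1, 1, 0]
```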
It is worth noting that many "different" learning paradigms share (sometimes implicitly) common inductive biases. An example is the relationship between clustering in the CLUSTER/2 system (section 9.5), the perceptron (section 10.2), and prototype networks (section 10.3). It was also noted that counterpropagation, the coupled network that combines unsupervised weight adjustment on a Kohonen layer with supervised learning on a Grossberg layer, is in many ways similar to backpropagation learning.
The tools considered are similar in many important respects. In fact, even the task of clustering data is the complement of function approximation. In the first case we try to segregate data into classes; in the second we build functions that cleanly separate the data clusters from one another. This becomes evident when a minimum-distance, perceptron-based classification algorithm also finds the parameters that define a linear separation.
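This duality can be seen in a small sketch with hypothetical data: the classical perceptron learning rule, driven only by classification errors, ends up with the parameters w and b of a separating hyperplane, so classifying the clusters and finding a linear separating function are the same computation:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two illustrative, linearly separable clusters
cluster_a = rng.normal([0.0, 0.0], 0.5, (20, 2))
cluster_b = rng.normal([3.0, 3.0], 0.5, (20, 2))
X = np.vstack([cluster_a, cluster_b])
y = np.array([0] * 20 + [1] * 20)

w = np.zeros(2); b = 0.0
for _ in range(100):                         # classical perceptron learning rule
    for xi, yi in zip(X, y):
        predicted = int(w @ xi + b > 0)
        w += (yi - predicted) * xi           # weights change only on a mistake
        b += (yi - predicted)

# w and b now define the separating line w . x + b = 0
print("hyperplane parameters:", w, b)
```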
Even the problem of generalization, or function construction, can be viewed from different perspectives. For example, statistical methods are used to detect correlations in data. An iterative expansion of the Taylor series can approximate most functions. Polynomial approximation algorithms have been used for over a century to fit functions through given points.
So the result of learning (symbolic, connectionist, or evolutionary) is largely determined by the accepted assumptions about the nature of the solution. Taking this synergy into account when developing computational problem solvers often improves the chances of success and makes the results easier to interpret meaningfully.
The empiricist's dilemma
If inductive bias plays the leading role in today's approaches to machine learning, especially supervised learning, then unsupervised learning, used in many genetic and evolutionary approaches, faces the opposite problem, sometimes called the empiricist's dilemma. In such approaches it is assumed that solutions will evolve on their own from evolving alternatives, through the "survival" of the fittest individuals of a population. This is a powerful method, especially in the context of parallel and distributed search tools. But a question arises: how can we know that the system has arrived at the right solution if we did not know where we were going?
Long ago, Plato put this problem into the mouth of Meno in the famous dialogue:
"But in what way, Socrates, will you search for a thing, not even knowing what it is? Which of the things you do not know will be chosen as the subject of research? Or, if at best you even come across it, how do you know that it is what you didn't know? "
Several researchers have confirmed Meno's point: see [Mitchell, 1997] and the "no free lunch" theorem of Wolpert and Macready [Wolpert and Macready, 1995]. The empiricist, in fact, needs some share of rationalist a priori knowledge to make his reasoning scientific.
Nevertheless, many exciting developments in unsupervised and evolutionary learning continue. An example is the creation of networks based on exemplars or energy minimization, which can be viewed as attractors for complex invariances. Watching data "settle" toward attractors, researchers have come to regard these architectures as tools for modeling dynamic phenomena. The question arises: what are the limits of these methods?
In fact, it has been shown [Siegelmann and Sontag, 1991] that recurrent networks are computationally complete, i.e. equivalent to the class of Turing machines. This equivalence generalizes earlier results. Kolmogorov [Kolmogorov, 1957] showed that for every continuous function there exists a neural network that computes it. It has also been shown that a backpropagation network with one hidden layer can approximate any continuous function on a compact set [Hecht-Nielsen, 1989]. In section 11.3 it was pointed out that von Neumann constructed Turing-complete finite state automata. Thus connectionist networks and finite automata turn out to be two more classes of algorithms capable of approximating almost any function. Moreover, inductive biases apply to unsupervised learning and to evolutionary models as well: representational biases apply to the design of nodes, networks, and genomes, and algorithmic biases to the search, reinforcement, and selection mechanisms.
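These approximation results can be illustrated with a sketch in the spirit of the one-hidden-layer theorem (the construction below, with fixed random hidden weights and a least-squares output layer, is a deliberate simplification for illustration, not the cited proofs):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-3.0, 3.0, 200)[:, None]
y = np.sin(x).ravel()                        # a continuous function on a compact set

# One hidden layer of 50 tanh units with fixed random weights;
# only the linear output layer is fitted, by least squares.
W = rng.normal(0.0, 2.0, (1, 50))
b = rng.uniform(-3.0, 3.0, 50)
H = np.tanh(x @ W + b)                       # hidden activations
c, *_ = np.linalg.lstsq(H, y, rcond=None)    # output weights

print("max |approximation error|:", np.max(np.abs(H @ c - y)))
```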
What, then, can connectionist, genetic, or evolutionary finite state machines, in all their various forms, offer us?
One of the most attractive features of neural network learning is the possibility of adaptation driven by input data or examples. Thus, although their architectures are carefully designed, such networks learn from examples, generalizing from data in a specific problem domain. But the question arises whether the data are sufficient and "clean" enough not to distort the solution. And how can a designer know?
Genetic algorithms also provide a powerful and flexible search mechanism in the parameter space of a problem. Genetic search is driven both by mutation and by special operators (for example, crossover or inversion) that preserve important aspects of parental information for future generations. How can a program designer find and maintain the right balance between diversity and consistency?
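A minimal genetic algorithm sketch shows the ingredients named above; the "all ones" fitness function and every parameter are illustrative assumptions. Selection maintains consistency by keeping fit parents, one-point crossover passes parental information to offspring, and mutation maintains diversity; tuning the selection pressure and mutation rate is precisely the balance the designer must strike:

```python
import random

random.seed(4)
LENGTH, POP = 20, 30
TARGET = [1] * LENGTH                        # illustrative fitness peak ("all ones")

def fitness(ind):                            # consistency measure: matching bits
    return sum(g == t for g, t in zip(ind, TARGET))

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                # selection: keep the fittest
    children = []
    while len(children) < POP - len(parents):
        p1, p2 = random.sample(parents, 2)
        cut = random.randrange(1, LENGTH)    # one-point crossover preserves
        child = p1[:cut] + p2[cut:]          # parental building blocks
        if random.random() < 0.2:            # mutation maintains diversity
            child[random.randrange(LENGTH)] ^= 1
        children.append(child)
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)), "of", LENGTH)
```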
Genetic algorithms and connectionist architectures can both be viewed as instances of parallel, asynchronous processing. But do they really deliver results unattainable by sequential programming?
Although their neural and sociological roots are of little fundamental importance to many modern practitioners of connectionist and genetic learning, these methods do reflect many important aspects of natural selection and evolution. Chapter 10 considered error-minimization learning models: perceptron networks, backpropagation networks, and Hebbian models. Subsection 10.3.4 described Hopfield networks for solving associative memory problems. A variety of evolutionary models were discussed in Chapter 11.
Finally, all learning methods are empirical tools. Are these tools powerful and expressive enough to keep raising questions about the nature of perception, learning, and understanding as we uncover the invariances of our world?
The next section argues that constructivist epistemology, combined with the experimental methods of modern artificial intelligence, offers the tools and techniques for continuing to build and study a theory of intelligent systems.
Rapprochement with the constructivists
Theories are like nets: he who casts, catches... - L. Wittgenstein
Constructivists hypothesize that all understanding is the result of an interaction between the energy patterns of the world and the mental categories imposed on the world by an intelligent agent [Piaget, 1954, 1970], [von Glasersfeld, 1978]. In Piaget's terms, we assimilate environmental phenomena in accordance with our current understanding and accommodate our understanding to the "demands" of the phenomenon.
Constructivists often use the term schemata for the a priori structures used to organize experience of the outside world. The term is borrowed from the British psychologist Bartlett [Bartlett, 1932] and has its roots in the philosophy of Kant. On this view, observation is not passive and neutral but active and interpretive.
Perceived information (Kantian a posteriori knowledge) never fits exactly into pre-formed schemata. Therefore the schema-based biases a subject uses to organize experience must be modified, extended, or replaced. The need for accommodation, caused by unsuccessful interactions with the environment, drives a process of cognitive equilibration. Constructivist epistemology is thus a foundation for cognitive evolution and refinement. An important consequence of constructivism is that the interpretation of any situation involves the imposition of the observer's concepts and categories.
When Piaget [Piaget, 1954, 1970] proposed a constructivist approach to understanding, he called it genetic epistemology. A mismatch between a schema and the real world creates a cognitive tension that forces a revision of the schema. Revision of schemata, accommodation, drives the continuous development of understanding toward equilibration.
Schema revision and movement toward equilibrium are an innate predisposition, as well as a means of adaptation to the structure of society and the surrounding world. Both forces combine in schema revision, reflecting our inherent drive to survive. Schema modification is a priori programmed by our genetics and at the same time an a posteriori function of society and the world. It is the result of our embodiment in space and time.
Here the empiricist and rationalist traditions merge. As an embodied being, a human can perceive no more than what the senses deliver. Through accommodation, one survives by learning the general principles of the outside world. Perception is shaped by our expectations, which in turn are shaped by perception; these functions can only be understood in terms of each other.
Finally, a person is rarely aware of the schemata that mediate his interaction with the world. They are a source of bias and prejudice both in science and in society, and we mostly do not know their content. They are formed by reaching cognitive equilibrium with the world, not through conscious reasoning.
Why is constructivist epistemology especially useful for studying the problems of artificial intelligence? The author believes that constructivism helps address the problem of epistemological access. For more than a century psychology has seen a struggle between two camps: positivists, who propose to study the phenomenon of mind starting from observable physical behavior, and adherents of a more phenomenological approach that admits first-person descriptions. This disagreement exists because both approaches require assumptions and interpretations. Compared with physical objects, which are considered directly observable, the mental states and inner life of a subject are notoriously difficult to characterize. The author believes that the opposition between direct access to physical phenomena and indirect access to mental ones is illusory. Constructivist analysis shows that no experience of the world is possible for a subject without schemata that organize that experience. In scientific research this means that all access to the phenomena of the world passes through the construction of models, approximations, and refinements.
16.2.3. The situated actor and the existential mind
Symbolic reasoning, neural network computation, and various forms of evolutionary strategies are the dominant approaches in modern AI research. However, as noted in the previous section, to achieve greater effectiveness these approaches must take into account the constraints of the world in which all "intelligence" is embodied.
Theories of situated and embodied action claim that intelligence is not the result of manipulating models built by the mind; it is best viewed in terms of the actions taken by an agent situated in the world. As a metaphor for the difference between the two approaches, [Suchman, 1987] suggests a comparison between European navigation methods and the less formal methods practiced by Polynesian islanders. European navigation requires constant tracking of the ship's location at every moment of the voyage; for this, navigators rely on extensive, detailed geographic models. Polynesian navigators, by contrast, use no maps or other means of fixing their location. They use the stars, winds, and currents to keep moving toward their goal, improvising a route that depends directly on the circumstances of the journey. Without relying on a model of the world, the islanders rely on their interaction with it, and they reach their goal by a robust and flexible method.
Theories of situated action claim that intelligence should not be viewed as a process of building and using models of the world. It is rather a less structured process of taking action in the world and reacting to the result. This approach emphasizes the capacity for sensing the world, for purposeful action, and for continuous, rapid response to its changes. On this view, the senses through which an agent is embedded in the world matter more than the processes of reasoning about it; the ability to act matters more than the ability to explain those actions.
The impact of this point of view on connectionist and agent-based approaches is obvious: it consists in rejecting general symbolic methods in favor of processes of adaptation and learning.
Work in the field of artificial life [Langton, 1995] is perhaps the most vivid example of research carried out under the influence of theories of situated action. These models of mind influenced the approaches to robotics of Rodney Brooks [Brooks, 1989], Brendan McGonigle [McGonigle, 1998], and Lewis and Luger [Lewis and Luger, 2000]. These researchers argue that it was a mistake to begin AI work by implementing high-level reasoning processes in a simulated mind. The emphasis on logical reasoning led the field down the wrong road, diverting attention from the fundamental processes that allow an agent to be embedded in the world and act in it productively. According to Brooks, one should start by designing and studying small, simple robots, creatures acting at the level of insects. These make it possible to study the processes underlying the behavior of both simple and complex creatures. Having built many such simple robots, Brooks and his colleagues are trying to apply the experience gained to designing the more complex robot Cog, which the developers hope will approach human capabilities.
Situated action has also influenced symbolic approaches to AI. For example, work on reactive planning [Benson, 1995], [Benson and Nilsson, 1995], [Klein et al., 2000] rejects traditional planning methods, which involve developing a complete, definite plan to guide the agent all the way from the starting point to the desired goal. Such plans rarely work as intended, because too many errors and unforeseen problems can arise along the way. Reactive planning systems, in contrast, operate in cycles: construction of a partial plan is followed by its execution, after which the situation is re-evaluated to construct a new plan.
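The cycle can be expressed in a few lines of Python (a hedged sketch; all names are hypothetical and not from the cited systems): instead of committing to a complete route, the agent repeatedly senses the situation, constructs only the next partial step, acts, and re-evaluates, so unforeseen changes are absorbed by replanning rather than by a monolithic plan:

```python
# All names here are hypothetical, not taken from the cited systems.

def sense(world):
    return world["position"]                     # observe the current situation

def next_step(position, goal):
    return 1 if goal > position else -1          # a partial plan: just one move

def reactive_agent(world, goal, max_cycles=100):
    for _ in range(max_cycles):
        position = sense(world)                  # re-evaluate the situation
        if position == goal:
            return "goal reached"
        world["position"] = position + next_step(position, goal)  # act
        # If something external changes world["position"] here,
        # the next cycle simply replans from the new situation.
    return "gave up"

print(reactive_agent({"position": 0}, goal=7))
```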
Constructivist theories and theories of situated action echo many ideas of existentialist philosophy. Existentialists hold that a person manifests himself through his actions in the world: what people believe (or claim to believe) matters far less than what they do in critical situations. This emphasis is very important in AI. Researchers have gradually realized that intelligent programs should be situated directly in their problem domain rather than coddled in the laboratory. Hence the growing interest in robotics and in the problem of perception, as well as in the Internet. Recently there has been active work on Web agents, or "softbots", programs that go out onto the network and perform useful intelligent actions. The network is attractive to AI developers above all because it offers intelligent programs a world far more complex than anything built in a laboratory. This world, comparable in complexity to nature and society, can be inhabited by intelligent agents that have no bodies but can sense and act as if in the physical world.
In the author's view, the theory of situated action will have an ever-increasing impact on artificial intelligence, forcing researchers to pay more attention to issues such as the importance of embodiment and grounding for an intelligent agent. It is also important to take into account the influence of social, cultural, and economic factors on learning, and the way the world shapes the growth and evolution of a situated agent [Clark, 1997], [Lakoff and Johnson, 1999].
In conclusion, we formulate the main issues that both advance and impede current efforts to construct a theory of intelligent systems.
16.3. Artificial Intelligence: Current Tasks and Future Directions
As the geometrician, who endeavours
To square the circle, and discovers not,
By taking thought, the principle he wants,
Even such was I at that new apparition...
- Dante, Paradiso
Although the use of AI techniques to solve practical problems has demonstrated its usefulness, using them to build a complete theory of intelligence is a difficult problem, and work on it continues. In this final section we return to the question that led the author to study artificial intelligence and to write this book: is it possible to give a formal, mathematical description of the processes that constitute intelligence?
The computational description of intelligence arose with the advent of abstract definitions of computing devices. In the 1930s-1950s Turing, Post, Markov, and Church all worked on formal systems for describing computation. The purpose of that research was not only to define what is meant by computation but also to establish its scope of applicability. The most studied formal description is the universal Turing machine [Turing, 1950], although Post's production rules, which underlie production systems, also made an important contribution to this field. Church's model [Church, 1941], based on partially recursive functions, led to modern high-level functional languages such as Scheme and Standard ML.
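The formal model referred to here is easy to make concrete. Below is a toy Turing machine simulator, a sketch for illustration only: a table of rules maps (state, symbol) to (new state, written symbol, head move), and the machine runs until it reaches a halting state:

```python
# A toy Turing machine simulator; the rule set below (append a 1 to a
# unary number) is purely illustrative.

def run_tm(rules, tape, state="q0", blank="_", halt="halt"):
    cells = dict(enumerate(tape))
    head = 0
    while state != halt:
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]   # the finite control
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

rules = {
    ("q0", "1"): ("q0", "1", "R"),      # scan right over the 1s
    ("q0", "_"): ("halt", "1", "R"),    # write one more 1, then halt
}
print(run_tm(rules, "111"))             # -> 1111
```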
Theorists have proved that all these formalisms are equivalent in power: any function computable in one of them is computable in the others. In fact, it can be shown that the universal Turing machine is equivalent to any modern computing device. On this basis the Church-Turing thesis was advanced: it is impossible to create a model of a computing device more powerful than the models already known. Once the equivalence of computational models is established, we gain freedom in choosing the means of their technical implementation: we can build machines out of vacuum tubes, silicon, protoplasm, or tin cans. A design implemented in one medium can be regarded as equivalent to a mechanism in another. This makes the empirical method all the more important: a researcher can experiment on a system implemented in one medium in order to understand a system implemented in another.
And yet the universal machines of Turing and Post are perhaps too universal. Paradoxically, implementing intelligence may require a less powerful computational mechanism with a greater emphasis on control. In [Levesque and Brachman, 1985] it was suggested that implementing human intelligence may require computationally more efficient (albeit less expressive) representations, including the use of Horn clauses in reasoning and the restriction of factual knowledge to ground literals. Agent-based and evolutionary models of intelligence share a similar ideology.
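The appeal of the Horn clause restriction is that it admits a very simple and efficient inference procedure. The sketch below (with illustrative facts and rules) is a minimal forward-chaining loop over propositional Horn rules of the form body -> head, which runs to a fixed point:

```python
# Minimal forward chaining over propositional Horn clauses
# (facts and rules are illustrative).

rules = [
    ({"man"}, "mortal"),          # body -> head
    ({"socrates"}, "man"),
]
facts = {"socrates"}

changed = True
while changed:                    # iterate to a fixed point
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:
            facts.add(head)
            changed = True

print(facts)                      # {'socrates', 'man', 'mortal'}
```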
Another issue related to the formal equivalence of computational models is dualism, the mind-body problem. At least since Descartes (see section 1.1), philosophers have asked how brain, mind, and physical body interact and integrate. Philosophers have offered every sort of explanation, from outright materialism to the denial of material existence, up to divine intervention. AI and cognitive science reject Cartesian dualism in favor of a material model of mind based on the physical realization, or instantiation, of symbols, the formal description of computational operations on those symbols, the equivalence of representations, and the "embodiment" of knowledge and skill in grounded models. The success of such research indicates the validity of the chosen model [Johnson-Laird, 1998], [Dennett, 1987], [Luger, 1994].
Nevertheless, many questions arise concerning the epistemological principles of organizing intelligence as a physical system. Let us note some of the important problems.
1. The problem of representation. Newell and Simon hypothesized that the physical symbol system and search are necessary and sufficient characteristics of intelligence (see section 16.1). Do the successes of neural network, genetic, and evolutionary approaches to intelligence refute the physical symbol system hypothesis, or are they themselves symbol systems?
The conclusion that the physical symbol system is a sufficient model of intelligence has led to many impressive and useful results in the modern science of thinking. Research has shown that a physical symbol system exhibiting intelligent behavior can be implemented. The sufficiency half of the hypothesis allows us to build and test symbolic models of many aspects of human behavior ([Pylyshyn, 1984], [Posner, 1989]). But the claim that the physical symbol system and search are necessary for intelligent behavior remains in question [Searle, 1980], [Weizenbaum, 1976], [Winograd and Flores, 1986], [Dreyfus and Dreyfus, 1985], [Penrose, 1989].
2. The role of embodiment in cognition. One of the main assumptions of the physical symbol system hypothesis is that the physical realization of a symbol system does not affect its functioning; all that matters is its formal structure. This is questioned by many researchers [Searle, 1980], [Johnson, 1987], [Agre and Chapman, 1987], [Brooks, 1989], [Varela et al., 1993], who argue that intelligent action in the world requires a physical embodiment that allows the agent to be integrated into the world. The architecture of today's computers does not allow this degree of situatedness, restricting an artificial intelligence's interaction with the world to present-day input-output devices. If these doubts are justified, then the realization of machine intelligence will require an interface very different from what current computers offer.
3. Culture and intelligence. Traditionally, AI has focused on the mind of the individual as the source of intelligence, assuming that studying how the brain works (how it encodes and manipulates knowledge) will yield a complete understanding of the origins of intelligence. But it can be argued that knowledge is better regarded in the context of society than as an individual structure.
In the theory of intelligence described in [Edelman, 1992], society itself is the bearer of essential components of intelligence. Perhaps understanding the social context of knowledge and human behavior is no less important for a theory of intelligence than understanding the processes of an individual mind (brain).
4. The nature of interpretation. Most computational models work with a pre-interpreted domain, i.e. there is an implicit a priori commitment of the designers to a context of interpretation. Because of this commitment, a system is limited in its ability to change goals, contexts, and representations as the problem is being solved. Moreover, too little effort is being made today to illuminate the process by which humans construct interpretations.
The Tarskian view, which treats semantics as a mapping from a set of symbols onto a set of objects, is certainly too weak: it does not explain, for example, the fact that one domain can have different interpretations in the light of different practical purposes. Linguists have tried to remedy the limitations of such a semantics by adding a theory of pragmatics [Austin, 1962] (section 13.1). Researchers in discourse analysis have recently addressed these questions often, since the use of symbols in context plays an important role there. However, the problem is much broader: it comes down to studying the strengths and weaknesses of referential tools in general [Lave, 1988], [Grosz and Sidner, 1990].
The semiotic tradition, founded by Peirce [Peirce, 1958] and continued in [Eco, 1976], [Grice, 1975], [Sebeok, 1985], takes a more radical approach to language. Here symbolic expressions are considered within the wider context of signs and sign interpretation. This implies that the meaning of a symbol can be understood only in the context of its interpretation and its interaction with the environment.
5. Indeterminacy of representation. Anderson's representational indeterminacy hypothesis [Anderson, 1978] states that it is fundamentally impossible to determine which representation scheme best approximates human problem solving in the context of human experience and skill. The hypothesis rests on the fact that every representation scheme is inextricably bound to a larger computational architecture and to search strategies. Detailed analysis of human skill shows that it is sometimes impossible to control the solution process closely enough to determine the representation, or to fix a representation precisely enough to determine the process. Like the uncertainty principle in physics, where measurement affects the phenomenon under study, this consideration is important for designing models of intelligence but, as shown below, does not limit their applicability.
The same remarks apply to the computational model itself, where the inductive biases of symbols and search, within the Church-Turing hypothesis, still impose constraints on the system. The idea that some optimal representation scheme must be built may well be a fragment of the rationalist's dream, while the scientist needs only a model of sufficient quality to engage the empirical questions. The proof of a model's quality is its ability to interpret, predict, and adapt.
6. The necessity of falsifiable computational models. Popper [Popper, 1959] and others argue that scientific theories must be falsifiable: there must be circumstances under which a model fails to approximate the phenomenon successfully. This is because no finite number of confirming experiments suffices to establish a model's correctness. Failures of existing theories stimulate further research.
The extreme generality of the physical symbol system hypothesis, as well as of agent-based and evolutionary models of intelligence, may make them unfalsifiable, and consequently their applicability as models will be limited. The same remark applies to the assumptions of the phenomenological tradition (see point 7). Some AI data structures, such as semantic networks, are likewise so general that they can model almost anything describable, or, as with the universal Turing machine, any computable function. Thus, if an AI researcher or cognitive scientist is asked under what conditions his model of intelligence will not work, he will have difficulty answering.
7. Limitations of the scientific method. Some researchers [Winograd and Flores, 1986], [Weizenbaum, 1976] argue that the most important aspects of intelligence cannot, in principle, be modeled, especially by symbolic representations. These aspects include learning, the understanding of natural language, and speech. These questions are rooted deep in the phenomenological tradition; for example, the critique of Winograd and Flores rests on issues raised in phenomenology [Husserl, 1970], [Heidegger, 1962].
Many of the tenets of modern AI theory descend from Carnap, Frege, and Leibniz, and further back from Hobbes, Locke, Hume, and Aristotle. This tradition argues that intelligent processes conform to the laws of nature and are, in principle, comprehensible.
Heidegger and his followers presented an alternative approach to understanding intelligence. For Heidegger, reflective awareness is grounded in a world of embodied experience. Proponents of this view, including Winograd, Flores, and Dreyfus, say that a person's understanding of things is rooted in their practical "use" in everyday life. The world is essentially a context of socially organized roles and goals. This environment, and human functioning within it, are not explained by propositions and theorems; it is a flux that forms and continuously modifies itself. In a fundamental sense, in a world of evolving norms and implicit goals, human experience is not knowledge of objects but rather a mode of action. Humans are inherently unable to express much of their knowledge and intelligent behavior in language, whether formal or natural.
There is much to this point of view. First, as a critique of the purely rationalist tradition, it is correct. Rationalism asserts that all human activity, intelligence, and responsibility can in principle be represented, formalized, and understood. Most thoughtful people question this, assigning an important role to emotion, self-affirmation, and responsible commitment (at last!). Aristotle himself asked: "Why is it that I do not feel compelled to do what responsibility demands?" There are many forms of human activity, beyond the reach of the scientific method, that play an important role in conscious human interaction. These cannot be reproduced in machines.
And yet the scientific tradition of examining data, constructing models, running experiments, and verifying results, refining the model for further experiments, has given humanity a high level of understanding, explanation, and predictive power. The scientific method is a powerful tool for deepening human understanding. Nevertheless, a number of pitfalls remain in this approach.
First, scientists must not confuse the model with the phenomenon being modeled. The model allows the phenomenon to be approximated ever more closely, but there always remains a "residue" that is not explained empirically. In this sense the indeterminacy of representation is not a problem. A model is used to explore, explain, and predict, and if it performs these functions it is a successful model [Kuhn, 1962]. Indeed, different models may successfully explain different aspects of the same phenomenon, as the wave and particle theories of light do.
Moreover, when researchers claim that some aspects of intelligence lie beyond the methods of the scientific tradition, this claim itself can only be verified by means of that tradition. The scientific method is the only instrument we have for explaining in what sense questions may lie beyond a person's current understanding. Every coherent point of view, even that of the phenomenological tradition, must relate to our current understanding of explanation, even if only to establish the boundaries within which a phenomenon can be explained.
These issues must be considered for AI to maintain logical coherence and to develop. To understand problem solving, learning, and language, we must make sense of ideas and knowledge at the level of philosophy. Researchers have to resolve the Aristotelian tension between theory and practice, to live between science and art.
Scientists create tools. All our representations, algorithms, and languages are tools for designing and building mechanisms that exhibit intelligent behavior. Through experiment we can explore both their computational adequacy for problem solving and our own understanding of the phenomenon of intelligence.
We are heirs to the traditions of Hobbes, Leibniz, Descartes, Babbage, Turing, and others whose contributions to science were discussed in Chapter 1. Engineering, science, and philosophy; the nature of ideas, knowledge, and skill; the power and limits of formalism and mechanism: these are the constraints with which we must reckon and which we must continue to study.
16.4. Summary and additional literature
For more information, the reader may use the references at the end of Chapter 1, adding to them [Pylyshyn, 1984] and [Winograd and Flores, 1986]. Questions of cognitive science are addressed in [Norman, 1983], [Newell and Simon, 1972], [Posner, 1989], [Luger, 1994], [Ballard, 1997], [Franklin, 1995], [Jeannerod, 1997], [Elman et al., 1996].
The philosophical roots of a theory of intelligent systems are described in [Haugeland, 1981, 1997], [Dennett, 1978], [Smith, 1996]. The book [Anderson, 1990] on cognitive psychology provides valuable examples of information-processing models. [Pylyshyn, 1984] and [Anderson, 1978] give a detailed description of many fundamental questions of cognitive science, including the indeterminacy of representation. In [Dennett, 1991], cognitive techniques are used to study the structure of consciousness. Also recommended are books on the scientific method: [Popper, 1959], [Kuhn, 1962], [Bechtel, 1988], [Hempel, 1965], [Lakatos, 1976], [Quine, 1963].
Finally, [Lakoff and Johnson, 1999] propose possible answers to the grounding question. The book [Clark, 1997] describes important aspects of the embodiment of intelligence. A description of the Brooks model [Brooks, 1991a] of situated robotics can be found in [McGonigle, 1990, 1998] and [Lewis and Luger, 2000].