Lecture
The problem of intelligence has existed since ancient times, and the problem of artificial intelligence since the mid-twentieth century. We can safely say that in the foreseeable future the problem of intelligence will not cease to be a problem. Therefore, even a slight possibility of overcoming the crisis that has arisen in the study of intelligent behavior should not be rejected. Every hypothesis that may help us take a step towards understanding the riddle of mind and consciousness deserves consideration. Freeing the developer's thinking from psychological taboos is worth abandoning Occam's razor for, at least in the early stages. Freedom from taboos gives the developer room to maneuver, which can significantly reduce costs, or even, in a roundabout way, lead through the use of extraordinary ideas to a workable prototype.

We are considering a computing system. An artificial intelligence system will exist in a virtual reality, and this will not be the physical reality in which humans exist. The reality in which an artificial intelligence exists may be anything: infinitely more complex than physical reality or infinitely simpler than it. The principles of construction (the laws) operating in a virtual reality need resemble the laws of natural reality only to a very small degree. We do not have to build artificial intelligence in the image and likeness of natural intelligence. We are free to create any virtual reality with any properties, to nest some levels of virtual realities inside others indefinitely, and to cross or violate the borders separating them. The main thing is to preserve the possibility of modeling the resulting abstract solution on the computing technology that exists in our physical reality.

Consider neural networks as a means of implementing an artificial intelligence system. A neuron is a node that processes the information coming to it and sends the result to other nodes.
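The notion of a neuron as a node that processes incoming information and forwards the result can be sketched in a few lines. This is our own minimal illustration, not an implementation from the lecture; all names (`Neuron`, `fire`, `inbox`) are ours.

```python
from dataclasses import dataclass, field

@dataclass
class Neuron:
    """A node that processes accumulated inputs and forwards the result."""
    func: object                                  # the operation this node performs
    targets: list = field(default_factory=list)   # downstream neurons
    inbox: list = field(default_factory=list)     # accumulated input signals

    def fire(self):
        """Apply the node's function to its inputs and send the result on."""
        result = self.func(self.inbox)
        self.inbox.clear()
        for t in self.targets:
            t.inbox.append(result)
        return result

# A trivial two-node chain: first node sums, second node negates the sum.
a = Neuron(func=sum)
b = Neuron(func=lambda xs: -sum(xs))
a.targets.append(b)
a.inbox.extend([1, 2, 3])
a.fire()          # b's inbox now holds 6
print(b.fire())   # -6
```

The point of the sketch is only that "neuron" here means any node with inputs, a function, and outputs; nothing restricts `func` to a weighted sum.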
Especially important is the simultaneous operation of all neurons. A neuron can perform operations that are either elementary or very complex; this does not make a substantial functional difference for the system as a whole. If necessary, a group of elementary operations can be combined into an ensemble of neurons, and one can then work with the complex functions of such abstract formations. Conversely, nodes with complex behavior can be modeled by a group of elementary neurons.
A neural network can implement any function that developers are able to describe formally. However, proving the ability to implement any function is just the beginning. For practical application of the developed system it is necessary to implement not any function, but some specific functionality that is useful in the given context. The abstract neural network described so far lacks the ability to change the function it performs dynamically. If the network structure is rigidly defined, as in the classical perceptron, then such a network, once implemented according to the project, will be able to adapt to changed conditions only within the very tight limits imposed by its structure. Of course, one could apply the methodology of meta-transitions and work with the dynamics of the processed impulses, but this would be an escape from the problem, not its solution. The possibility of self-modification and self-analysis requires dynamics of topology. The neural network must be able to process its own neurons as data; then one part of the neural network can change the topology of another part. Interestingly, as far back as 1957 John von Neumann created a neural network architecture that is fundamentally different from the perceptron (we will not consider the limitations imposed by the crystal lattice; for details it is better to consult the original work on the "self-replicating automaton"). The key ideas of that work are the following: the neuron is a fairly simple device for processing input signals (in the original, the logical functions of conjunction, disjunction and negation were used); the neuron can dynamically change the function it performs; the neuron can dynamically change its connections with other neurons; one part of the neural network can analyze the state of another part of the network; and one part of the network can change the topology of another part of the neural network.
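The requirement that a network be able to process its own neurons as data is easy to make concrete: if neurons are ordinary data records keyed by identifier, then an "effector" action is just an update of another neuron's record. The following sketch is ours (hypothetical names, not from the original):

```python
# Neurons as plain data records, keyed by identifier. Because neurons are
# data, one part of the network can rewrite another part's function or links.
network = {
    1: {"func": "AND", "links": [2]},
    2: {"func": "OR",  "links": []},
}

def rewire(net, neuron_id, new_func=None, new_links=None):
    """An effector-style operation: modify another neuron as ordinary data."""
    node = net[neuron_id]
    if new_func is not None:
        node["func"] = new_func
    if new_links is not None:
        node["links"] = list(new_links)

# One part of the network changes the topology of another part:
rewire(network, 2, new_func="NOT", new_links=[1])
print(network[2])  # {'func': 'NOT', 'links': [1]}
```

Nothing in `rewire` distinguishes "computing" neurons from "rewiring" neurons, which is exactly the self-applicability the text asks for.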
With a mathematically complete set of functions, their complexity affects only computational efficiency. As von Neumann showed in his work, such a neural network is equivalent to a Turing machine.
In the von Neumann network, one section of the neural network can analyze the structure of another section, then make decisions based on this analysis and change the connections or the types of neurons. One part of the neural network can use another part as a memory bank, dynamically connecting to individual neuron-cells, reading their state or changing it. An interesting analogy can be drawn between a DNA molecule and the tape in such an automaton. Moreover, one part of the neural network can extend a "constructing arm" and assemble, from individual neurons, a device that performs some function. In this case the memory located on the "tape" is used like DNA, on the basis of which a certain part of the neural network is constructed.
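The memory-bank idea reduces to treating one region of neurons as addressable cells. A minimal sketch, with names of our own invention:

```python
# One region of the network used as a memory bank: "cell" neurons addressed
# by id, whose state other parts of the network can read or overwrite.
tape = {addr: {"state": 0} for addr in range(8)}   # neuron-cells as a tape

def read_cell(bank, addr):
    """Read the state of a single neuron-cell."""
    return bank[addr]["state"]

def write_cell(bank, addr, value):
    """Overwrite the state of a single neuron-cell."""
    bank[addr]["state"] = value

write_cell(tape, 3, 1)
print(read_cell(tape, 3))  # 1
```

In the DNA analogy, a constructing part of the network would read such cells in sequence and build new neurons according to their contents.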
Consider semantic neural networks and the consequences that arise when von Neumann's ideas are applied to them. Unlike the von Neumann network, a semantic neural network imposes no restrictions on the topology of neurons. This makes relative addressing of neurons, as von Neumann used it, impossible; absolute addressing must be introduced instead. Each neuron must have a unique identifier, and knowing this identifier one can access the neuron directly. Naturally, neurons that interact with each other through axons and dendrites must hold each other's identifiers. Relative addressing can be modeled by introducing a specificity of neurons, similar to how it is implemented in biological neural networks.
We postulate the presence of a pointer to a neuron. This pointer is simply a unique number: the identifier of the neuron in the neuron repository. Let neurons be able to process not only fuzzy data but also pointers to each other; this is clearly realistic to implement technically. A pointer to a neuron is a virtual connection that is not realized as a dendrite or an axon. Suppose that in the constructed virtual reality neurons interact with each other not only by transmitting signals through axons and dendrites but also through such "paranormal" effects. A neuron will thus have signal inputs, signal outputs, and a set of virtual connections with other neurons. A neuron can interact with other neurons whose pointers it owns even without having signal connections with them. The difference between pointers and signal connections is also clear: signal connections are two-sided structural entities, associated with both the source and the receiver of the signal, while a pointer is one-sided. The owner of a neuron's identifier is able to initiate interaction with that neuron. Connections manifested in the form of axons and dendrites can be considered the long-term memory of the system, invariant to the context; the signals and pointers processed by neurons are highly volatile working information that depends on the current context. Loss of signals or pointers (by analogy with an epileptic seizure) should not affect the manifested structure or lead to changes in long-term memory or personality.
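The asymmetry between signal connections (registered on both ends) and pointers (known only to their owner) can be shown directly. This sketch assumes a flat repository keyed by unique integer identifiers; all names are ours:

```python
# Neuron repository: identifiers are unique integers issued in order.
repository = {}
next_id = 0

def new_neuron():
    """Create a neuron and return its unique identifier."""
    global next_id
    nid = next_id
    next_id += 1
    repository[nid] = {"axons": [], "dendrites": [], "pointers": []}
    return nid

def connect(src, dst):
    """Structural (signal) connection: two-sided, registered on both neurons."""
    repository[src]["axons"].append(dst)
    repository[dst]["dendrites"].append(src)

def hold_pointer(owner, target):
    """Virtual connection: one-sided, only the owner records the target's id."""
    repository[owner]["pointers"].append(target)

a, b, c = new_neuron(), new_neuron(), new_neuron()
connect(a, b)        # a and b both see this link
hold_pointer(a, c)   # c is unaware that a can reach it
```

Dropping every `pointers` list here would leave the axon/dendrite structure intact, which matches the claim that loss of pointers must not damage long-term memory.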
The presence of such pointers makes indirect interaction possible; the analogue in traditional programming languages is double or triple indirection. In this case some neuron 1 that has a virtual connection with neuron 2 is able to interact with neuron 3, provided that neuron 2 owns a pointer to neuron 3. This gives wide possibilities for one neuron to interact with another without direct contact, through both developed and virtual connections. To provide self-reflection in the network, one can introduce neurons that perform the functions of analyzing and changing the network structure. We introduce receptor neurons that respond to the structural elements of the constructed neural network. We also introduce effector neurons which, on being transferred to the excited state, perform some modification of the neural network structure. To ensure the completeness of the system, the receptor and effector neurons must be self-applicable. A receptor neuron must be able to analyze other receptor neurons, including those of the same type as itself. Effector neurons must be able to modify other effector neurons, and not only neurons that perform signal processing.
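Double indirection through pointers is the same mechanism as pointer-chasing in a conventional language. A sketch under our own naming, where neuron 1 reaches neuron 3 only via neuron 2's pointer:

```python
# Neuron 1 holds a pointer to neuron 2, neuron 2 holds a pointer to neuron 3.
# Neuron 1 can therefore act on neuron 3 without any direct connection to it.
repo = {
    1: {"pointer": 2, "state": "idle"},
    2: {"pointer": 3, "state": "idle"},
    3: {"pointer": None, "state": "idle"},
}

def deref(repo, start, hops):
    """Follow the pointer chain `hops` times and return the final identifier."""
    nid = start
    for _ in range(hops):
        nid = repo[nid]["pointer"]
    return nid

target = deref(repo, 1, 2)      # double indirection: 1 -> 2 -> 3
repo[target]["state"] = "excited"
print(target)  # 3
```

A triple indirection would simply use `hops=3`; the chain length is data, not structure.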
Thanks to the presence of virtual connections between neurons, the network model can be divided into submodels. We call such submodels bodies. The physical body consists of the bodies of neurons and the connections between neurons and exists in a spatial continuum; unlike the real physical world, the number of spatial dimensions in the virtual world need not equal 3. The information body consists of the impulses transmitted between neurons. The astral body consists of the virtual connections between neurons and of the general principles of organization of the neural network that are not reflected in the logic of the behavior of an individual neuron. The models of the physical, informational and astral bodies are executed while the virtual machine is running. Space provides a base for placing the bodies of neurons and their connections. In a semantic neural network, space has less than one dimension, since the identifiers that address neurons in the repository have no order relation. Each neuron can have contact with any other neuron, without restrictions on the distance between them or on the topology of connections.
The model of the astral body must first of all provide the possibility of forming pointers to neurons and of transferring these pointers between computational structures. Then some neurons can transmit pointers to other neurons in their signals. The astral body model can also host the learning functions of the neural network, such as backpropagation of error, the synthesis of a synchronized linear tree, and others. The physical body (the neural network) controls the effectors of the system and provides the astral body with the information necessary to modify the structure of the neural network. The astral body, using the learning rules stored in the structure of the neural network (its memory), changes (teaches) this neural network, including the learning rules themselves. Thus any rule of learning or of topology synthesis becomes a particular case of the full capabilities of this network. As a result, the developed neural network can be implemented effectively by means of existing computing equipment. The semantic neural network is equivalent to a Turing machine, which means that a system computing any Turing-computable function can be implemented on its basis. For example, such a network can, as a special case, simulate a multilayer perceptron with backpropagation of error. The perceptron's neurons can be built from separate neurons of such a network, performing separately the functions of summation, multiplication and the activation function, while the perceptron learning algorithm can be implemented as a separate fragment of the network that analyzes and modifies the fragment corresponding to the perceptron itself.
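The claim that a perceptron neuron decomposes into separate summation, multiplication and activation nodes can be sketched directly. This is our illustration of the decomposition, not the lecture's code; the names are ours:

```python
import math

# A single perceptron unit assembled from elementary one-operation nodes:
# one multiplier node per weight, one summator node, one activation node.
def make_multiplier(weight):
    """Elementary node: multiplies its single input by a fixed weight."""
    return lambda x: weight * x

def summator(values):
    """Elementary node: sums its inputs."""
    return sum(values)

def sigmoid(x):
    """Elementary node: logistic activation function."""
    return 1.0 / (1.0 + math.exp(-x))

def perceptron_unit(weights, bias):
    """Wire the elementary nodes into one perceptron neuron."""
    multipliers = [make_multiplier(w) for w in weights]
    def unit(inputs):
        products = [m(x) for m, x in zip(multipliers, inputs)]
        return sigmoid(summator(products) + bias)
    return unit

unit = perceptron_unit([2.0, -1.0], bias=0.0)
print(unit([1.0, 2.0]))  # sigmoid(2*1 - 1*2 + 0) = sigmoid(0) = 0.5
```

A learning fragment of the network would then be another group of nodes that reads and rewrites the `weights` of this one, which is exactly the backpropagation-as-special-case claim in the text.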