Lecture
The development of software systems that exist in virtual reality differs from development that results in products existing in physical reality. When developing software, the cost of the resulting product is determined by the consumption of three resources: (1) the memory required to run the developed software system, (2) the performance of the software system, and (3) the cost of developing the system. As existing development practice shows, the most expensive resource is the cost of development. Costs are especially high in the early stages of solving "hopeless" tasks, where it is not even clear whether the problem can be solved in principle. The problem of creating artificial intelligence can be counted among the "hopeless" ones. The problem of intelligence has existed since ancient times, and the problem of artificial intelligence since the mid-twentieth century. We can safely say that in the foreseeable future the problem of intelligence will not cease to be a problem. Therefore, even a slight chance of overcoming the crisis that has arisen in the study of intelligent behavior should not be rejected. Every hypothesis that might help take a step towards understanding the riddle of mind and consciousness deserves consideration.
In the process of creating an artificial intelligence system, it makes sense to spend time on a working prototype, even if that prototype uses non-optimal technical solutions. Once a working prototype exists, it can be optimized into an industrial solution. For this reason, this paper uses a method that is an alternative to Occam's razor. The method imposes no limit on the number of entities; on the contrary, new entities are introduced at every opportunity. Only one thing matters - whether something pragmatic can ultimately be extracted from this legion of superfluous entities. Is such a development method justified? In contrast to physical reality, in informational reality this approach is fully justified. After all, will excess entities really make the situation worse? Freeing the developer's thinking from psychological taboos is worth abandoning Occam's razor, at least in the early stages. Freedom from taboo gives the developer comfort, which can significantly reduce costs, or even, in a roundabout way through the use of extraordinary ideas, lead to a workable prototype.
We are considering a computing system. An artificial intelligence system will exist in a virtual reality, not in the physical reality in which humans exist. The reality in which artificial intelligence exists may be anything: infinitely more complex than physical reality or infinitely simpler. The principles of construction (the laws) operating in virtual reality may resemble the laws of natural reality only to a very small extent. We do not have to build artificial intelligence in the image and likeness of natural intelligence. We have the right to create any virtual reality with any properties. We can indefinitely nest some levels of virtual reality within others, and cross or violate the borders separating them. The main thing is to ensure that the resulting abstract solution can be modeled on existing computing technology in our physical reality. This raises a question about the status of such development methods. Suppose that a prototype of an artificial intelligence system has already been built in virtual reality. This prototype is based on the laws operating in the virtual world, not on the laws of the ordinary physical world. Can the construction of this system shed light on the construction of natural intelligence? Will it be an intellect? The construction of an artificial intelligence may not correspond to the construction of natural intelligence; analogy can only reveal the general principles that such systems obey. Natural intelligence may turn out to be far more complicated and grander, or far more primitive, than an artificial intelligence developed on the basis of a non-rigorous methodology. Consequently, even if a working prototype is someday created on the basis of this work, its construction can in no way be claimed to correspond correctly to natural intelligence.
Artificial Neural Networks
Often, when artificial neural networks are mentioned, networks of the perceptron type are meant. In this case, a neuron consists of synapses that multiply the input signals by their weights, an adder, and an activation function. From a set of such neurons a certain topology is created; usually the neurons are grouped into a multilayer structure. The resulting network is then trained by one of the learning algorithms, for example, the error back-propagation algorithm. The network structure is fully specified by the developer; the number of neurons and connections does not change. Of course, this is a simplified picture of a perceptron-type artificial neural network, but for the reasoning that follows such a representation is quite sufficient.
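The perceptron-type neuron described above can be sketched in a few lines. This is a minimal illustration, not any particular library's API; the weights and bias values are arbitrary examples.

```python
import math

def neuron_output(inputs, weights, bias):
    """One perceptron-style neuron: the synapses multiply inputs by
    weights, the adder sums them with a bias, and a sigmoid serves
    as the activation function."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# A two-input neuron with illustrative weights.
out = neuron_output([1.0, 0.0], weights=[0.5, -0.3], bias=0.1)
```

In a perceptron-type network, many such neurons are arranged into layers, and a training algorithm adjusts only the weights - the topology itself stays fixed, which is exactly the limitation discussed below.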
Now consider the idea of a neural network at a higher level of abstraction. What is a neural network as an idea? A neuron is a node that processes the information coming to it and sends the result to other nodes. Especially important is the simultaneous operation of all neurons. The complexity of processing inside a neuron is clearly not a significant factor in this context: a neuron can perform operations that are elementary or very complex, and this makes no fundamental functional difference for the system as a whole. If necessary, a group of elementary operations can be combined into an ensemble of neurons, and the complex functions of such abstract formations can be worked with; nodes with complex behavior can thus be modeled by a group of elementary neurons. Assuming that the neural network is equivalent to a Turing machine, such a network can compute any computable function. Note that nothing prevents the developer from taking advantage of the previously proclaimed freedom and creating neurons with whatever behavior the task requires. A neural network can obviously realize any function that the developers are able to invent and formally describe. However, proving the ability to implement any function is just the beginning. For practical application of the developed system, it is necessary to implement not just any function, but specific functionality that is useful in this context.
What is missing from the described abstract neural network that performs any computable function? The ability to change that function over time. If the network structure is rigidly defined, as in the classical perceptron, then once implemented according to the design, such a network will be able to adapt to changed conditions only within the very tight limits imposed by its structure. Of course, one can apply the methodology of meta-transitions and work with the dynamics of the processed pulses, but this would be an escape from the problem, not its solution.
The possibility of self-modification and self-analysis requires topology dynamics. The neural network must be able to process its own neurons as data; then one part of the neural network can change the topology of another part. Interestingly, as early as 1957, John von Neumann created a neural network architecture that is fundamentally different from the perceptron (we will not consider the limitations imposed by the crystal lattice; it is better to work on the details of the implementation of the original self-replicating automaton). Its properties essential for this work are the following:
Given a mathematically complete set of functions, their complexity affects only computational efficiency. As von Neumann showed in his work, such a neural network is equivalent to a Turing machine. Only a single-level meta-transition is needed, after which the neural network directly implements a finite state machine equivalent to a Turing machine together with a tape holding a program. In the von Neumann network, the main thing is not the function performed by a neuron; the main thing is the capacity for self-reflection and self-modification. Suppose a network has already been formed. In the described network, one section of the neural network can analyze the structure of another section, then make decisions based on this analysis and change the connections or the types of neurons.
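The analyze-then-modify cycle described above can be sketched as follows. This is a deliberately tiny illustration under assumed conventions (a network stored as a dictionary of neurons; the neuron names and the particular analysis rule are invented for the example), not the actual von Neumann construction.

```python
# A network whose topology is ordinary data: one routine inspects a
# region of the network, another rewires it based on that analysis.
network = {
    "a": {"type": "signal", "links": ["b"]},
    "b": {"type": "signal", "links": []},
    "c": {"type": "signal", "links": []},
}

def analyze(region):
    """Self-reflection: find neurons in `region` with no outgoing links."""
    return [n for n in region if not network[n]["links"]]

def rewire(isolated, target):
    """Self-modification: connect each isolated neuron to `target`."""
    for n in isolated:
        network[n]["links"].append(target)

# One fragment of the network "decides" to change another fragment.
rewire(analyze(["b", "c"]), "a")
```

The essential point is that the same structure serves both as the computing medium and as the data being computed on.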
One part of the neural network can use another part as a memory bank, dynamically connecting to individual neuron-cells and reading or changing their state. An interesting analogy can be drawn between a DNA molecule and the tape in such an automaton. Moreover, one part of the neural network can extend a "constructing arm" and assemble, from individual neurons, a device that performs some function. In this case, the memory located on the "tape" is used like DNA, on the basis of which a certain part of the neural network is constructed.
Semantic Neural Networks
Consider semantic neural networks and the consequences of applying von Neumann's ideas to them. We postulate the opportunity to develop whatever virtual reality is most interesting to us as its users. The von Neumann network imposes restrictions on the topology of connections - we postulate the absence of such restrictions. It processes only logical values - we postulate the processing of fuzzy values. All neurons in the von Neumann network are synchronized by clock ticks - to make use of self-timed circuitry, we postulate the presence of both synchronized and non-synchronized neurons.
Unlike the von Neumann network, the semantic neural network places no restrictions on the topology of neurons. This makes relative addressing of neurons, as von Neumann used, impossible; absolute addressing must be introduced instead. Each neuron must have a unique identifier, and whoever knows that identifier can access the neuron directly. Naturally, neurons that interact with each other through axons and dendrites must hold each other's identifiers. Relative addressing can be modeled by introducing neuron specificity, similar to the way it is implemented in biological neural networks.
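Absolute addressing by unique identifier might be implemented as a simple neuron repository; the class name, fields, and ID scheme below are illustrative assumptions, not part of the original design.

```python
import itertools

class NeuronStore:
    """A neuron repository with absolute addressing: every neuron
    receives a unique integer identifier at creation time."""
    def __init__(self):
        self._ids = itertools.count(1)   # assumed: sequential integer IDs
        self._neurons = {}

    def create(self, kind):
        nid = next(self._ids)
        self._neurons[nid] = {"kind": kind, "links": set()}
        return nid            # the identifier other neurons can hold

    def connect(self, src, dst):
        # a connection is recorded by identifier, not by spatial position
        self._neurons[src]["links"].add(dst)

    def get(self, nid):
        return self._neurons[nid]

store = NeuronStore()
a, b = store.create("signal"), store.create("signal")
store.connect(a, b)
```

Because access goes through the identifier alone, the topology can be arbitrary: no grid or neighborhood structure is required, which is exactly the freedom the semantic network postulates.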
The original description of the semantic neural network does not describe the capacity for self-reflection and self-modification. One can, of course, say that the semantic neural network inherited these abilities from its prototype, the von Neumann network. Indeed, the idea of self-reflection is inherited; but it is a long way from a simple idea to a practical implementation. What is required is that one fragment of a neural network be able to analyze and modify the structure of another fragment.
Suppose that the fuzzy data transmitted from neuron to neuron is not enough for this. We postulate the presence of a pointer to a neuron. This pointer is simply a unique number - the identifier of the neuron in the neuron repository. Let neurons be able to process not only fuzzy data but also pointers to each other. Obviously, this is perfectly realistic to implement technically.
Now we need to understand what such a pointer is. A pointer to a neuron is a virtual connection that is not implemented as a dendrite or an axon. Suppose that in the constructed virtual reality, neurons interact with each other not only by transmitting signals through axons and dendrites, but also through "paranormal" effects. A neuron will thus have signal inputs, signal outputs, and a set of virtual connections with other neurons. A neuron that holds pointers to other neurons will be able to interact with them without having any signaling connections to them. The difference between pointers and signal connections is also clear. Signal connections are two-way structural entities, attached to both the source and the receiver of the signal. A pointer is one-sided: the holder of a neuron's identifier can initiate interaction with that neuron, but it is technically impossible to determine that a pointer to a given neuron exists without an exhaustive search for it. Connections manifested as axons and dendrites can be regarded as the system's long-term memory, invariant to context. The signals and pointers processed by neurons act as super-operative information that depends on the current context. Loss of signals or pointers (by analogy with an epileptic seizure) should not affect the manifested structure or lead to changes in long-term memory or personality.
The presence of such pointers makes indirect interaction possible. The analogue in traditional programming languages is double or triple indirection. Here, some neuron-1 that has a virtual connection to neuron-2 is able to interact with neuron-3, provided that neuron-2 holds a pointer to neuron-3. This opens up wide possibilities for one neuron to interact with another without direct contact, through both developed and virtual connections.
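The neuron-1 → neuron-2 → neuron-3 chain can be sketched as follows. The representation (a dictionary of neurons, each holding at most one pointer) is an assumption made for the example; real neurons could hold many pointers.

```python
# Double indirection through neuron pointers. The `pointer` fields hold
# identifiers (virtual connections), not signal wires.
neurons = {
    1: {"state": 0.0, "pointer": 2},     # neuron-1 holds a pointer to neuron-2
    2: {"state": 0.0, "pointer": 3},     # neuron-2 holds a pointer to neuron-3
    3: {"state": 0.0, "pointer": None},
}

def deref(nid, hops):
    """Follow `hops` pointer links starting from neuron `nid`."""
    for _ in range(hops):
        nid = neurons[nid]["pointer"]
    return nid

# Neuron-1 reaches neuron-3 through neuron-2 and excites it,
# despite having no direct connection of any kind to neuron-3.
neurons[deref(1, hops=2)]["state"] = 1.0
```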
To provide self-reflection in the network, we can introduce neurons that perform the functions of analyzing and changing the network structure. We introduce receptor neurons that respond to the structural elements of the constructed neural network. Such receptor neurons become excited when certain specific conditions are fulfilled, describing the presence or absence of neurons, or of connections between neurons, with certain characteristics. We also introduce effector neurons which, when transferred to the excited state, perform some modification of the neural network structure.
To ensure the completeness of the system, the receptor and effector neurons must be self-applicable. A receptor neuron must be able to analyze other receptor neurons, including those of the same type as itself. Effector neurons must be able to modify other effector neurons, and not only the neurons that perform signal processing. This is possible thanks to the previously postulated pointers to neurons.
As receptor neurons, one can postulate neurons that detect the presence or absence of certain types of neurons connected to a given neuron, neurons that detect a connection between given neurons, or neurons that detect the presence or absence of a connection of a certain type at a given neuron. If the condition is satisfied, the receptor neuron takes the excited state; otherwise it remains passive. As effector neurons, one can postulate neurons that join two neurons with some connection, neurons that create other neurons, and effectors that destroy neurons or connections. If the excitation level exceeds a certain threshold, the effector neuron performs its function.
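One receptor/effector pair of the kinds just listed can be sketched like this. The network representation, the specific condition, and the threshold value are all illustrative choices, not the paper's implementation.

```python
# Network topology as data: {neuron id: set of outgoing link targets}.
links = {1: {2}, 2: set(), 3: set()}

def receptor_no_outgoing(nid):
    """Receptor neuron: excited (1.0) if neuron `nid` has no
    outgoing connections, passive (0.0) otherwise."""
    return 1.0 if not links[nid] else 0.0

def effector_connect(excitation, src, dst, threshold=0.5):
    """Effector neuron: if its excitation exceeds the threshold,
    it joins `src` to `dst` with a connection."""
    if excitation > threshold:
        links[src].add(dst)

# The receptor's output drives the effector: neuron 3 has no
# outgoing links, so the effector connects it to neuron 1.
effector_connect(receptor_no_outgoing(3), src=3, dst=1)
```

Since receptors and effectors are themselves neurons stored in the same structure, the same mechanism can analyze and modify them too, which is the self-applicability required above.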
In the system, an experiment was conducted on synthesizing the structure of a neural network from an external task, using the means of the neural network itself. The virtual machine that executes the semantic neural network supported the following types of neurons associated with self-applicability:
Consider the possibility of restoring a semantic neural network after damage. Obviously, the structure of the neural network determines the individuality of the intellect. The death of a neuron means forgetting the corresponding information. It is impossible to recover information about the presence of a certain neuron about a pink elephant without the knowledge that the system has knowledge about this elephant. If we postulate that having the neuron means having knowledge about the elephant, then the absence of the neuron means the absence of this knowledge. Destroying the neuron destroys the knowledge; the destruction of a neuron means irreversible forgetting. Therefore, regeneration of neural tissue from a template stored in DNA seems impossible. To restore a damaged neuron, the same information must be learned anew.
The semantics of a neuron is determined not by its internal state or internal complexity, but by the neuron's connections with its neighbors. Only its neighbors define the semantics of an individual neuron. Therefore, it does not matter at what specific point in space the ensemble of neurons describing the concept of a pink elephant is located. What matters is having the connections that allow "elephant-ness" and "pinkness" to be determined.
Regeneration of damaged receptors, effectors, or other regular structures is quite possible. If the semantic structures associated with the damaged regular structures are themselves intact, the regenerated regular structures can be reconnected to them. Even if an exact copy of the regular structure as it was before the damage cannot be fully restored, the isomorphism of connections will allow the newly regenerated neurons to carry the same functional load.
Damage to the central nervous system can be repaired through re-training. If the pink-elephant neuron is destroyed, the system will forget about that elephant. When a pink elephant next enters the receptor field, a new neuron will be created in some other place in the neural network, connected with its neighbors so as to reflect the new concept.
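Since semantics lives in connections rather than in location, re-learning can be sketched as rebuilding the same link pattern under a new identifier. All names here ("pink_elephant", "node_42") are invented for the illustration.

```python
# Knowledge as connections: the concept is defined entirely by which
# neighbors a neuron links to, not by where or what the neuron is.
links = {
    "elephant": set(),
    "pink": set(),
    "pink_elephant": {"elephant", "pink"},
}

def destroy(nid):
    """Irreversible forgetting: remove the neuron and all links to it."""
    links.pop(nid)
    for rest in links.values():
        rest.discard(nid)

def relearn(new_id, neighbors):
    """Re-training: a new neuron, created anywhere in the store,
    receives the same connections and hence the same semantics."""
    links[new_id] = set(neighbors)

destroy("pink_elephant")                     # the concept is forgotten
relearn("node_42", {"elephant", "pink"})     # different place, same links
```

The new neuron is not a restored copy of the old one; it is a functionally equivalent node, which is all that the isomorphism of connections requires.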
Regeneration of the neural network proceeds not in the damaged area but in the area of preserved tissue. To eliminate the "epilepsy effect", it is desirable for the damaged area to degenerate and dissolve. Later, as learning proceeds and the number of newly formed neurons grows, healthy tissue will take the place of the damaged one. Obviously, this expansion will occur only as new information is learned.