Lecture
The semantic neural network is an abstract model. In practice it can run under the control of a virtual machine, which means that the laws of the reality in which the neurons exist are completely determined by that virtual reality. Is there any need to guarantee the stability of these laws? Allowing the virtual machine, in certain justified cases, to violate the ordinary laws of its virtual reality can be a very useful feature. The developer, as the creator of a new virtual reality, is technically able to break the rules established in that reality, or to change them temporarily. If the virtual machine is given the ability to analyze the neural network it is currently running, it can change the structure of the network or the parameters of the signals transmitted over its connections. From the point of view of the neural network, such a temporary violation of the laws looks like a non-deterministic accident, a miracle. Virtual miracles greatly simplify the development of the system. Instead of fully implementing some software block with the means of the neural network itself (which is possible because the network is equivalent to a Turing machine), the same service can be implemented at a lower level and activated as needed. This saves development time and significantly increases system performance. And from the point of view of the neural network, work that should consume significant resources is accomplished through the virtual miracle almost instantly.
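As a rough illustration, a virtual miracle can be pictured as a hook in the virtual machine that bypasses the neural simulation for a registered service. The following is a minimal sketch under assumed names; VirtualMachine, ToyNetwork, and register_miracle are hypothetical, not the lecture's API:

```python
# Minimal sketch of a "virtual miracle": the virtual machine bypasses the
# neural simulation for a registered service. VirtualMachine, ToyNetwork,
# and register_miracle are hypothetical names, not the lecture's API.

class ToyNetwork:
    def simulate(self, concept, inputs):
        # Stand-in for a slow, fully neural implementation of the concept.
        raise NotImplementedError(f"no neural implementation of {concept!r}")

class VirtualMachine:
    def __init__(self):
        self.miracles = {}  # concept -> native (lower-level) implementation

    def register_miracle(self, concept, fn):
        # The developer, as creator of the virtual reality, installs a hook.
        self.miracles[concept] = fn

    def evaluate(self, concept, network, inputs):
        if concept in self.miracles:
            # From inside the network this looks like an instant miracle.
            return self.miracles[concept](inputs)
        return network.simulate(concept, inputs)  # ordinary slow path

vm = VirtualMachine()
vm.register_miracle("sum", lambda xs: sum(xs))
print(vm.evaluate("sum", ToyNetwork(), [1, 2, 3]))  # -> 6
```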
Because there are virtual connections between neurons, the network model can be divided into submodels, which we call bodies. The physical body consists of the bodies of the neurons and the connections between them, and exists in a spatial continuum. Unlike the real physical world, the number of spatial dimensions in the virtual world need not equal 3. The information body consists of the impulses transmitted between neurons. The astral body consists of the virtual connections between neurons and the general principles of the organization of the neural network that are not reflected in the behavior logic of an individual neuron. The models of the physical, information, and astral bodies are executed while the virtual machine is running.
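The three-body decomposition can be sketched as a simple data layout; every class and field name below is an illustrative assumption, not the lecture's concrete encoding:

```python
# Sketch of the three-body decomposition as a data layout; all names
# here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class PhysicalBody:        # neurons and the connections between them
    neurons: dict = field(default_factory=dict)    # id -> neuron state
    connections: set = field(default_factory=set)  # (src_id, dst_id) pairs

@dataclass
class InformationBody:     # impulses currently in transit between neurons
    impulses: list = field(default_factory=list)   # (src_id, dst_id, payload)

@dataclass
class AstralBody:          # network-wide organizing principles and rules
    rules: list = field(default_factory=list)      # callables rewriting the physical body

model = (PhysicalBody(), InformationBody(), AstralBody())
```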
The space provides a base for placing the bodies of neurons and their connections. In a semantic neural network the space has less than one dimension, since the identifiers that address neurons in the repository have no order relation. Consequently, neurons cannot be ordered with respect to their organization in the neuron storage space. Each neuron can contact any other neuron without restrictions on the distance between them or on the topology of connections. Conventionally, we call this space 0-dimensional.
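A minimal sketch of such 0-dimensional storage, assuming opaque UUIDs as neuron identifiers (an illustrative choice, not prescribed by the lecture):

```python
# Sketch of "0-dimensional" neuron storage: identifiers are opaque keys
# with no order relation, and any neuron may connect to any other.

import uuid

storage = {}   # id -> neuron state; keys carry no spatial ordering
edges = set()  # unrestricted topology: any pair of identifiers

def new_neuron(state=None):
    nid = uuid.uuid4()      # opaque identifier, not a position in space
    storage[nid] = state
    return nid

a, b = new_neuron(), new_neuron()
edges.add((a, b))           # no distance or topology restriction applies
```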
In the von Neumann neural network under consideration, the space is two-dimensional. When the network is implemented on a two-dimensional crystal, efficient emulation of 0-dimensional space is practically impossible because of the restrictions on the topology of connections between neurons. When a three-dimensional silicon crystal is used, most of the components must be devoted not to signal processing but to communication between individual neurons. In a silicon neurocomputer it will be hard to implement dynamic switching of connections between neurons over time, and it will be almost impossible to create new neurons: in silicon, unused reserve neurons have to be created in advance and connected to the network as needed. An implementation of the neural network in silicon technology is possible on a three-dimensional crystal in which the cells have their own addresses, similar to IP addresses, and routers are able to establish connections between cells on commands generated by the cells themselves. Given all this, an implementation of the neural network model on biological tissue, which has a natural ability to change its structure over time, looks more promising.
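The address-and-router scheme could look roughly like the following sketch; the Router and Cell classes and the IP-like address strings are purely hypothetical, not part of any existing hardware design:

```python
# Sketch of address-based switching: each cell has an IP-like address and
# asks a router to establish links. Router, Cell, and the addresses are
# hypothetical, not an existing hardware interface.

class Cell:
    def __init__(self):
        self.inbox = []

    def receive(self, signal):
        self.inbox.append(signal)

class Router:
    def __init__(self):
        self.links = set()

    def connect(self, src, dst):        # command issued by a cell itself
        self.links.add((src, dst))

    def deliver(self, src, signal, cells):
        for s, d in self.links:
            if s == src:
                cells[d].receive(signal)

cells = {"10.0.0.1": Cell(), "10.0.0.2": Cell()}
router = Router()
router.connect("10.0.0.1", "10.0.0.2")
router.deliver("10.0.0.1", "pulse", cells)
print(cells["10.0.0.2"].inbox)  # -> ['pulse']
```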
The model of the astral body must first of all make it possible to form pointers to neurons and to transfer these pointers between computational structures; some neurons can then transmit pointers to other neurons in signals. The learning functions of the neural network, such as error backpropagation, Amosov SUT, synthesis of a synchronized linear tree, and others, can also be assigned to the model of the astral body. Such a solution is effective if the organizing principle never needs to change over the lifetime of the neural network. It is more effective to form a model of the astral body that is universal in the Turing sense, and to ensure its interaction with the physical and information bodies.
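Transmitting pointers to neurons inside signals can be sketched as follows; the Neuron class and the signal dictionary format are illustrative assumptions:

```python
# Sketch of signals that carry pointers to other neurons; the Neuron class
# and signal format are illustrative assumptions.

class Neuron:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, signal):
        self.inbox.append(signal)

a, b, c = Neuron("a"), Neuron("b"), Neuron("c")
# Neuron b receives a signal whose payload is a pointer (reference) to c,
# so b can later address c directly through that pointer.
b.receive({"kind": "pointer", "target": c})
target = b.inbox[0]["target"]
target.receive({"kind": "data", "value": 42})
print(target.name, c.inbox)  # -> c [{'kind': 'data', 'value': 42}]
```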
The model of the astral body can perform many functions, including responsibility for implementing virtual miracles. It is able to form new connections between neurons, as well as to transform glial cells into new neurons. It is not rational to divide existing neurons: in the semantic neural network each neuron corresponds to a certain concept of the domain, so dividing a neuron would divide the concept, and this operation is not equivalent to learning a new concept. If neurons divide uncontrollably during learning, informational chaos will most likely arise in the system. It is much more effective to postulate the presence of a miracle and to morph space (a glial cell) into a neuron directly at the place where it is needed. If necessary, neurons can be migrated; however, in the 0-dimensional space suited to computer simulation, the usefulness of neuron migration is questionable.
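A minimal sketch of such a glial-to-neuron miracle, assuming the network is a flat dictionary of cells with a "kind" field (an illustrative encoding):

```python
# Sketch of the glial-to-neuron "miracle": a reserve glial cell is morphed
# into a neuron exactly where it is needed, with no neuron division. The
# flat dict encoding and the "kind" field are illustrative.

network = {
    "g1": {"kind": "glial"},                       # reserve cell, no concept
    "n1": {"kind": "neuron", "concept": "cat"},
}

def morph_to_neuron(net, cell_id, concept):
    cell = net[cell_id]
    assert cell["kind"] == "glial", "only glial cells may be converted"
    cell.update(kind="neuron", concept=concept)    # existing concepts stay whole

morph_to_neuron(network, "g1", "dog")
print(network["g1"])  # -> {'kind': 'neuron', 'concept': 'dog'}
```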
The physical body (the neural network) controls the system's effectors and provides the astral body with the information needed to modify the structure of the neural network. The astral body, using the learning rules stored in the structure of the neural network (its memory), changes (trains) this neural network, including the learning rules themselves. Thus, any rules of learning or of synthesizing the topology of the neural network become a particular case of the general capabilities of this network.
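This loop, in which learning rules live in the network's own memory and may rewrite the network including the rules themselves, can be sketched as follows (a toy structure with assumed names):

```python
# Sketch of the physical/astral feedback loop: the learning rules live in
# the network's own memory, so a rule may rewrite structure or the rules
# themselves. All names and the dict encoding are illustrative.

def grow_rule(net):
    net["neuron_count"] += 1                       # toy structural change

def meta_rule(net):
    net["learning_rules"].append(grow_rule)        # a rule that adds a rule

def astral_step(net):
    # The astral body applies each rule currently stored in the network.
    for rule in list(net["learning_rules"]):
        rule(net)

network = {"neuron_count": 3, "learning_rules": [grow_rule, meta_rule]}
astral_step(network)
print(network["neuron_count"], len(network["learning_rules"]))  # -> 4 3
```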
As a result, the neural network developed here can be efficiently implemented on existing computing equipment. The semantic neural network is equivalent to a Turing machine thanks to the postulated freedom of the developer, which means that on its basis it is possible to build a system that computes any function computable on a Turing machine. For example, such a network can, as a special case, simulate a multilayer perceptron with error backpropagation. The perceptron's neurons can be constructed from separate neurons of such a network that separately perform summation, multiplication, and the activation function, and the perceptron learning algorithm can be implemented as a separate fragment of the network that analyzes and modifies the fragment corresponding to the perceptron itself.
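A sketch of assembling one perceptron neuron from primitive nodes that separately perform multiplication, summation, and activation; the sigmoid activation and all function names are assumptions made for illustration:

```python
# Sketch of one perceptron neuron assembled from primitive network neurons
# that separately perform multiplication, summation, and activation. The
# sigmoid activation and the function names are illustrative assumptions.

import math

def mult_node(x, w):                 # a multiplication neuron per input
    return x * w

def sum_node(values):                # a summation neuron
    return sum(values)

def activation_node(s):              # an activation-function neuron
    return 1.0 / (1.0 + math.exp(-s))

def perceptron_neuron(inputs, weights, bias):
    products = [mult_node(x, w) for x, w in zip(inputs, weights)]
    return activation_node(sum_node(products) + bias)

print(perceptron_neuron([1.0, 0.5], [0.4, -0.2], 0.1))  # ~0.5987
```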