Lecture
Today there exist a significant number of different models of artificial neural networks. They differ in the behavior of individual neurons and in the principles used to construct the topology of connections. Despite the relatively mature state of the mathematical models themselves, there is a shortage of implementations of these models on both parallel and sequential computing systems. Most neural network models have significant similarities: a neuron is an object connected to other objects, data is transmitted over the connections between neurons, and neurons process this data and pass the results on to other neurons. From this point of view, a neural network can be viewed as a distributed computing system in which neurons correspond to individual processors and the connections between neurons correspond to data transmission channels between those processors. The author therefore considers it expedient to develop a universal virtual machine that makes it possible to model various neural networks on a sequential computing system. Object-oriented software development methods are now widespread, so it seems reasonable to implement the neural network level as an object-oriented base: neurons are implemented by objects, and connections between neurons are implemented as links between objects. The details of how individual neurons behave in different neural network models must be isolated from the virtual machine; this allows the virtual machine to be used regardless of the neural network model chosen. To maximize the flexibility of system reconfiguration, the virtual machine should be implemented as several levels linked together through well-defined interfaces.
VNPI Core Level - storing and managing object lifetime
The core of the system includes the level of interaction with the operating system and the level of object storage and lifetime management. The operating system interaction level adapts the functions of the operating system to the needs of the virtual machine implementing the neural network. The object storage level provides storage of and fast access to objects and their connections, as well as management of object lifetime. The set of APIs of all levels of the virtual machine is, for definiteness, called the Virtual Neural Programming Interface, hereinafter referred to as VNPI.
A virtual machine kernel that emulates a neural network with many millions of neurons places increased demands on the efficiency of its implementation language. As a result, plain C (rather than C++) becomes the natural candidate as the programming language for such a system. At the same time, the core needs to be developed with modern component technologies in mind; otherwise it will be impossible to realize polymorphic interaction between the various modules of the core and dynamic changes to the connections between them. Existing industrial implementations of the component ideology, such as Microsoft COM, are too cumbersome for a virtual machine that emulates a neural network of the required size. The inefficiency of COM is primarily related to the many GUID comparison operations executed on each call to QueryInterface. As a component architecture for the virtual machine of the semantic neural network, an original design is therefore proposed. Its main difference is that there is no need to call QueryInterface and search for the desired interface by GUID value. Within the virtual machine core, the lifetime of any internal object can be determined at the design stage, so the proposed architecture has no reference counting for objects; the absence of reference counting increases kernel performance. A pointer to an object is a pointer to a structure with the object's data, not to an interface, so there is no need to cast an object pointer to different types in the case of multiple inheritance of interfaces.

Class descriptions use a specialized IDL expressed as an XML dialect; object prototypes and method prototypes are generated from it. The virtual machine development environment should provide maximum comfort to the programmer, so the IDL parser is implemented in C# for the Microsoft .NET platform. During parsing, the contents of the XML DOM are analyzed, and on this basis the IDL object model is formed by creating and initializing objects. Once the IDL object model has been built, the source text of the function prototypes in the C language is generated from it. The VNPI core provides the base-level components for implementing the object-network base: components for interaction with the operating system, memory management with a garbage collection subsystem, management of structured object storage, and a subsystem for caching objects and managing their lifetime in memory.
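As an illustration only, the sketch below shows how such an IDL parser might look in C# on the .NET platform: the XML DOM is walked and C function prototypes are emitted. The element and attribute names (class, method, name, returns) are assumptions about the IDL dialect, which is not specified here.

```csharp
using System.Text;
using System.Xml;

// Minimal sketch of an IDL-to-C prototype generator. The element and
// attribute names (class, method, name, returns) are assumptions about
// the IDL dialect, which is not specified in the text.
public static class IdlToC
{
    public static string Generate(string idlXml)
    {
        var doc = new XmlDocument();
        doc.LoadXml(idlXml);                                  // build the XML DOM

        var sb = new StringBuilder();
        foreach (XmlElement cls in doc.SelectNodes("//class"))
        {
            string className = cls.GetAttribute("name");
            foreach (XmlElement method in cls.SelectNodes("method"))
            {
                string returns = method.GetAttribute("returns");
                string name = method.GetAttribute("name");
                // Each method becomes a C prototype whose first argument is a
                // pointer to the object's data structure.
                sb.AppendLine($"{returns} {className}_{name}(struct {className}* self);");
            }
        }
        return sb.ToString();
    }
}
```

Emitting a prototype per (class, method) pair with the object's data structure as the first parameter matches the design above, in which an object pointer is a pointer to a plain C structure rather than to an interface.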
VNPI-.NET Bridge Level
The implementation level of user objects should be convenient for application programmers. At the moment, this requirement is best met by the Microsoft .NET Framework. To integrate the .NET Framework and the VNPI core, a VNPI-.NET bridge is needed. The VNPI-.NET bridge provides access to VNPI from the .NET Framework. It also makes it possible to implement objects in the .NET environment that are driven by the VNPI core subsystems.
Given the different goals and requirements of the .NET Framework and VNPI, the ideology of their garbage collection systems is completely different. In .NET, an object can be destroyed only after there is no longer any possibility of using it in the current process. In contrast, in the VNPI core, garbage collection is performed when memory is needed for allocating objects; in this case, objects that are not currently needed are evicted to long-term storage. Destruction of objects is part of the domain model and is therefore performed only explicitly. The bridge counts references to the VNPI core objects used by .NET and prevents the eviction of objects that can still be accessed through calls from the .NET context.
The system itself monitors the lifetime of user objects in memory; there is no need to initiate the saving of objects manually. The least frequently used objects are automatically evicted to the long-term storage system, freeing up RAM. Therefore, at any moment the system may need to evict and destroy some user object. Each time a kernel method is called that requires a lock on a kernel object, a new wrapper object is created. The wrapper constructor increments the reference count, and Dispose decrements it. As long as at least one wrapper is in memory, the reference count is non-zero and the object corresponding to this wrapper is protected from destruction. If the programmer forgets to call Dispose, it is called automatically when the .NET garbage collector finalizes the wrapper. The C# using statement ensures that an object is kept in memory while its methods are being called and guarantees a call to Dispose after the work with the object is finished. On exiting the using block, Dispose of the wrapper object is called automatically and the reference count of the kernel object is decremented.
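A minimal sketch of such a wrapper, assuming hypothetical VnpiKernel.AddRef/Release entry points into the C core (the real bridge API is not shown in the text):

```csharp
using System;

// Illustrative sketch of a bridge-side wrapper. VnpiKernel.AddRef/Release and
// the raw IntPtr handle are hypothetical stand-ins for the real bridge API.
public sealed class KernelObjectWrapper : IDisposable
{
    private readonly IntPtr _handle;   // pointer to the kernel object's data structure
    private bool _disposed;

    public KernelObjectWrapper(IntPtr handle)
    {
        _handle = handle;
        VnpiKernel.AddRef(_handle);    // non-zero count blocks eviction and destruction
    }

    public void Dispose()
    {
        if (_disposed) return;
        _disposed = true;
        VnpiKernel.Release(_handle);   // count may reach 0: object becomes evictable again
        GC.SuppressFinalize(this);
    }

    // Safety net: if the programmer forgets Dispose, the .NET garbage collector
    // releases the reference when it finalizes the wrapper.
    ~KernelObjectWrapper() => Dispose();
}

// Hypothetical facade over the VNPI core (e.g. P/Invoke calls in a real bridge).
internal static class VnpiKernel
{
    public static void AddRef(IntPtr obj)  { /* call into the C core */ }
    public static void Release(IntPtr obj) { /* call into the C core */ }
}

public static class Demo
{
    public static void UseKernelObject(IntPtr handle)
    {
        // The using statement keeps the kernel object pinned for the duration
        // of the calls and guarantees Dispose (and thus Release) on exit.
        using (var obj = new KernelObjectWrapper(handle))
        {
            // ... invoke methods that operate on the pinned kernel object ...
        }
    }
}
```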
Level of the network object-oriented database
The level of the network object-oriented database is responsible for creating instances of user objects and for working with object attributes. Each object of the network object-oriented database may have several attributes. Attribute values can be other objects or the simplest (scalar) data types. To create objects, information about their types is required. By means of the network object-oriented database, a table structure similar to that of a relational database can be implemented. Therefore, to unify it, the information about object types used by this level is organized in the form of related tables.
Information about objects registered in the system is organized as several tables: Types, Attributes, and Tables. This makes it possible to define the types of .NET objects in the database in a unified form. However, the properties of tables, rows, and columns in the OO database differ from those adopted in relational databases. An object-oriented database stores not rows of tables, as is customary in tabular databases, but instances of objects. A table column is therefore an entity separate and independent from the table: the same column may belong to different tables. A column represents some attribute of an object. The Attributes table contains information about the attributes of objects stored in the network object-oriented database. If the database contains several objects that have a name attribute, then the values of this attribute (column) are the names of those objects. Attributes (columns) are themselves objects of the OO database. For example, the attribute "Object Name" itself has a name, being an object that has a name; obviously, the value of the "Object Name" attribute for the "Object Name" attribute object is the string "Object Name". In this case the tautology is more than justified and useful.
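The self-describing nature of attributes can be sketched as follows; DbObject and its Attributes dictionary are illustrative stand-ins, not the actual database API.

```csharp
using System;
using System.Collections.Generic;

// Sketch: attributes are themselves objects with attributes. The class and
// member names here are illustrative, not the actual OO database API.
public class DbObject
{
    // Attribute values keyed by the attribute object that describes them.
    public Dictionary<DbObject, object> Attributes { get; } =
        new Dictionary<DbObject, object>();
}

public static class SelfDescriptionDemo
{
    public static void Main()
    {
        // The "Object Name" attribute is itself an object ...
        var nameAttribute = new DbObject();

        // ... and it carries a value for its own "Object Name" attribute.
        nameAttribute.Attributes[nameAttribute] = "Object Name";

        Console.WriteLine(nameAttribute.Attributes[nameAttribute]); // prints "Object Name"
    }
}
```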
The Types table contains a description of the data types (including user-defined types) that are known to the system and on the basis of which the system creates instances of objects placed in the network base. The Qualified Type Name attribute of this table contains the name of the .NET class from which custom objects are created.
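Assuming the Qualified Type Name holds an assembly-qualified .NET type name, instance creation could be sketched with the standard reflection facilities:

```csharp
using System;

// Sketch: creating an instance of a user object from its qualified .NET type
// name, as it might be stored in the Types table. Assumes the stored string
// is an assembly-qualified type name resolvable in the current process.
public static class TypeFactory
{
    public static object CreateInstance(string qualifiedTypeName)
    {
        Type type = Type.GetType(qualifiedTypeName, throwOnError: true);
        return Activator.CreateInstance(type);   // requires a public parameterless constructor
    }
}

// Example (hypothetical type name): TypeFactory.CreateInstance("MyNetwork.Neuron, MyNetwork");
```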
The Tables table contains a description of all tables contained in the object database. Tables are also objects: the table name is the value of the Object Name attribute of the table object. Each table has two collections of objects: a collection of attributes (columns) and a collection of components (rows). Each row of any table is also an object, and the values in the columns of the table are the values of the attributes of the objects located in the rows of the table. The Tables table contains itself as a row, and the value of its "Collection of columns" column is the collection of columns of that same table. Although it is desirable to keep objects of the same type in the same table, this is not at all necessary; the main thing is that these objects have attributes that at least partially coincide with the columns of the table. It is recommended to keep in one table objects whose classes are inherited from some base class that contains all the columns of this table as its own attributes. For example, the "Object Name" column is present in almost any table. Objects can belong to several different tables at the same time as their rows. Object attribute descriptors are themselves objects and are registered in the Attributes table.
Thus, each table contains not abstract rows of data but specific instances of .NET objects. The columns of the tables are the attributes (custom properties) supported by these objects. The types, attributes, and tables themselves are also objects and are represented by rows in the corresponding tables. Each row of a table is an instance of an object contained in the database, and each column of a table represents an attribute described in the Attributes table. A class instance (object) is independent of any table: it is not necessary to place an object into a table row in order to create it. Once created, the object continues to exist either in the system's RAM or as a stream in the file storage until it is deleted explicitly. The created instance maintains its identity and attribute values across system startup and shutdown cycles.
The user can create new tables by adding new entries to the Tables table. Created instances of tables, attributes, and other objects do not strictly need to be registered in the corresponding tables; however, registering such objects is recommended, as it allows their attributes to be changed through a unified user interface. The only hard requirement is to register a custom type in the Types table.
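A rough sketch of these operations, with Database, Table, and DbColumn as illustrative stand-ins for the real API:

```csharp
using System.Collections.Generic;

// Sketch of creating a user table and registering a custom type. Database,
// Table, and DbColumn are illustrative stand-ins for the actual OO database
// API, which is not shown in the text.
public class DbColumn                 // a column is an independent attribute descriptor object
{
    public string Name;
}

public class Table                    // a table is itself an object with two collections
{
    public string Name;
    public List<DbColumn> Columns = new List<DbColumn>();
    public List<object> Rows = new List<object>();
}

public class Database
{
    public Table Tables = new Table { Name = "Tables" };
    public Table Types  = new Table { Name = "Types" };

    public Database()
    {
        Tables.Rows.Add(Tables);      // the Tables table contains itself as a row
        Tables.Rows.Add(Types);
    }

    public Table CreateTable(string name)
    {
        var table = new Table { Name = name };
        Tables.Rows.Add(table);       // creating a table = adding an entry to Tables
        return table;
    }

    public void RegisterType(string qualifiedTypeName)
    {
        Types.Rows.Add(qualifiedTypeName);  // registering the type is the one hard requirement
    }
}
```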
The OO database supports full-featured C# objects; methods, properties, fields, and events are supported for objects. It is recommended to expose the attributes supported by a class as properties. The mechanism of delegates and events is not supported when the connection is made between objects stored in the database; this is due to restrictions on object lifetime. When the database is stopped, all objects are stored in serialized form. If the lifetime of the receivers or sources of messages can be separated from the lifetime of the database objects, then delegate and event mechanisms work within the subnet of objects deployed in .NET Framework memory. The system is developed primarily to support neural networks with a free topology and up to 2 billion neurons in one repository; therefore, only part of the OO database is in RAM at any time. Most of the objects are frozen in the file storage and deserialized as necessary. Considering that each object in such a database has behavior implemented as its methods, it is logical to call such a database active. Using this development, the programmer is completely free to create his own tables containing objects of his own types. Since classes can be developed independently, rows can have methods, properties, events, and all other achievements of modern OO programming methods.
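A sketch of such a user class, with attributes exposed as properties; the [Serializable] marker reflects the serialized storage described above, though the actual serialization mechanism is not specified:

```csharp
using System;

// Sketch of a user class whose attributes are exposed as properties, as the
// text recommends. [Serializable] is an assumption standing in for whatever
// serialization the storage subsystem actually uses.
[Serializable]
public class Neuron
{
    // The "Object Name" attribute, present in almost every table.
    public string ObjectName { get; set; }

    // A scalar attribute.
    public double Excitation { get; set; }

    // Behavior lives in the object itself, which is why the database
    // can be called "active".
    public void Reset() => Excitation = 0.0;
}
```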
Semantic Neural Network Level
The next logical level is the neural network, which is based on the VNPI core. Each neuron is a collection of simpler objects. Neurons can be of various types; the most interesting are receptor neurons, effector neurons, and internal neurons. An internal neuron performing logical operations consists of a nucleus and connections to other neurons. The neural network level makes it possible to simulate the parallel operation of the various neurons and the parallel execution of the functions corresponding to them. To emulate the simultaneous parallel operation of a set of neurons on a sequential computer, the states of all neurons in the network must be processed sequentially for each time slice. Consider the implementation of quasi-parallel processing of neurons in the semantic neural network that implements the expert system. Neurons in the neural network are processed in ticks. Ticks are executed until there is not a single neuron left in the network that needs to be processed. After all ticks are completed, the current decision made by this section of the network appears on the effectors. When the information on the receptors changes, new ticks are launched, and after they end, the updated decision is set on the effectors. The expert system operates either in a single cycle, by setting all the input fuzzy data on the receptor layer at once, or iteratively, as the data is accumulated.
Each tick consists of several passes. A pass is the mailing of a message to all registered neurons. For each neuron, the message handler corresponding to the given message is executed. The work of the control neuron of the expert system layer is to send pass messages to the registered neurons.
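A sketch of this tick/pass scheduling on a sequential machine; INeuron and the pass message names are assumptions for illustration:

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch of quasi-parallel processing: ticks of passes, each pass mailing a
// message to all registered neurons. INeuron and the message names are
// assumptions used for illustration only.
public interface INeuron
{
    bool NeedsProcessing { get; }
    void HandleMessage(string message);   // the handler corresponding to the given message
}

public class NetworkScheduler
{
    private readonly List<INeuron> _neurons = new List<INeuron>();
    private static readonly string[] PassMessages = { "Collect", "Compute", "Propagate" };

    public void Register(INeuron neuron) => _neurons.Add(neuron);

    public void Run()
    {
        // Ticks are repeated until no neuron in the network needs processing.
        while (_neurons.Any(n => n.NeedsProcessing))
        {
            foreach (var message in PassMessages)        // one tick = several passes
                foreach (var neuron in _neurons)         // one pass = mail to all neurons
                    neuron.HandleMessage(message);
        }
    }
}
```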
Level of expert system and knowledge base
Expert systems are intelligence amplifiers that help a person solve various tasks. They have found application in areas of human activity where it is necessary to make decisions, classify, or solve problems under conditions of insufficient information about the problem posed. Expert systems are also widely used for solving non-formalized tasks for which no solution algorithms exist.
Since in the real world it is rarely possible to state anything with full confidence, the expert system uses fuzzy logic in the process of reasoning. To determine the degree of certainty of a statement, fuzzy logic uses a confidence factor: a number, here in the range from 0 to 240 (by analogy with probability theory). But, unlike probability theory, the confidence factor expresses not the probability of an event but a subjective certainty about it.
The rules are placed in the knowledge base. The knowledge base also holds, in the form of facts, the input data and the results of triggering the rules. If the situation analyzed by the expert system satisfies the conditions of a rule, then its consequences are considered true for the given situation. The conditions of a rule may contain the operations of conjunction, disjunction, and inversion. If the conditions of two or more rules are fulfilled simultaneously, then the consequences for further processing must be chosen. The choice can be made in various ways: 1) the consequence of the first admissible rule is selected; 2) one consequence is chosen randomly from the set of admissible rules; 3) the consequence of the rule with the most stringent conditions is selected; 4) all the consequences of the admissible rules are selected simultaneously. To ensure the processing of ambiguous (multi-valued) text tokens, we consider the consequences of all rules whose conditions hold to be true. In this case, identical elements that are consequences of different rules are combined by the disjunction operation. The production expert system is therefore a network of rules in which all rules are executed simultaneously. Individual rules are individual nodes, and the variables included in the conditions and consequences of the rules are the links between the nodes of this network. Individual operations on the processed text elements are represented in this network as individual rules in the expert system knowledge base. Such an expert system may contain a simulation model of the domain and serve to conduct semantic analysis of the text.
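A sketch of simultaneous rule firing with confidence factors in the range 0 to 240; using min for conjunction and max for disjunction is an assumed (though common) choice of fuzzy operators, not one fixed by the text:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of fuzzy rule evaluation with confidence factors in [0, 240].
// Min for conjunction and max for disjunction is an assumption (a common
// fuzzy-logic choice); the text does not fix these operators.
public class Rule
{
    public string[] Conditions;     // fact names joined by conjunction in the condition
    public string Consequence;      // fact asserted by the rule
}

public static class FuzzyInference
{
    public const int MaxConfidence = 240;

    public static Dictionary<string, int> Fire(
        IEnumerable<Rule> rules, Dictionary<string, int> facts)
    {
        var derived = new Dictionary<string, int>();
        foreach (var rule in rules)   // all rules are considered simultaneously
        {
            // Conjunction of conditions: the weakest condition limits the rule.
            int confidence = rule.Conditions
                .Select(c => facts.TryGetValue(c, out int v) ? v : 0)
                .DefaultIfEmpty(0)
                .Min();

            // Identical consequences of different rules are combined by disjunction.
            derived[rule.Consequence] = derived.TryGetValue(rule.Consequence, out int prev)
                ? Math.Max(prev, confidence)
                : confidence;
        }
        return derived;
    }
}
```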
On the basis of the semantic neural network, an expert system is implemented. The group of neurons of the semantic neural network that emulates the forward-chaining inference mechanism consists of three layers of neurons. The receptor layer collects information for processing in the neural network; the fuzzy excitation level of a receptor neuron is the fuzzy confidence factor of the presence of the element corresponding to that neuron. The effector layer outputs the results of the expert system as a set of fuzzy values of the effector neurons; as in the receptor layer, the fuzzy excitation level of an effector is the fuzzy confidence factor of the presence of the element corresponding to the effector. The third layer, the processing layer, is located between the receptor layer and the effector layer; it contains the neurons that implement the knowledge base of the expert system. When such an expert system works in a highly parallel mode, each neuron processes the fuzzy data on its dendrites and transmits the result of the performed operation along its axon to the following neurons. The decision time of such an expert system depends not on the volume of the knowledge base but only on its complexity.
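A sketch of the three-layer group with fuzzy excitation levels; the neuron classes and the min/max operations are illustrative assumptions:

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch of the three-layer group (receptors -> processing -> effectors)
// with fuzzy excitation levels in [0, 240]. The concrete neuron classes and
// the min/max operations are illustrative assumptions, not the system's API.
public abstract class FuzzyNeuron
{
    public List<FuzzyNeuron> Inputs { get; } = new List<FuzzyNeuron>();   // dendrites
    public int Excitation { get; set; }            // fuzzy confidence factor, 0..240
    public abstract void Process();                // result is passed on along the axon
}

public class Receptor : FuzzyNeuron                // receptor layer: fed from input data
{
    public override void Process() { /* excitation is set from the input fuzzy data */ }
}

public class AndNeuron : FuzzyNeuron               // processing layer: a rule condition (conjunction)
{
    public override void Process() =>
        Excitation = Inputs.Select(i => i.Excitation).DefaultIfEmpty(0).Min();
}

public class Effector : FuzzyNeuron                // effector layer: confidence in its element
{
    public override void Process() =>
        Excitation = Inputs.Select(i => i.Excitation).DefaultIfEmpty(0).Max();
}
```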