1. Computer simulation of neural networks.

Lecture



Principles of developing software that performs simulation modeling of neural networks. The structure and functions of program blocks. An example of a software implementation of the perceptron learning algorithm.

A significant proportion of all applications of neural networks involves the use of their software models, commonly called neurosimulators. Developing a program is usually cheaper, and the resulting product more visual, portable and convenient, than specialized hardware. In any case, the development of a hardware implementation of a neural network should always be preceded by comprehensive, theory-based testing on a computer model.

This section of the book describes the most general principles for developing relatively small neural programs, usually for individual use. To simplify the presentation, a simple neural network architecture was chosen: the single-layer perceptron. The theoretical foundations of this network were considered in the fourth lecture.

At the end of the section, full listings of the described programs are given; a reader familiar with programming in TURBO PASCAL on an IBM PC can use them for educational purposes and modify them at will.

Principles of neurosimulator development.

A neurosimulator is a computer program (or software package) that performs the following functions:

  • Description and formation of the neural network architecture
  • Data collection for the training sample
  • Training the selected neural network on the training sample, or loading an already trained network from disk
  • Testing the trained neural network
  • Visualization of the learning and testing process
  • Problem solving by the trained network
  • Recording the training results and obtained solutions to disk
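
As a rough sketch, these functions might map onto a program skeleton like the following; all procedure names here are illustrative placeholders, not taken from any real package, and the PERC program later in this section follows a similar outline:

 PROGRAM NeuroSim;
 (* A sketch only: each procedure is a hypothetical placeholder
    for one of the neurosimulator functions listed above *)

 PROCEDURE BuildNetwork;  BEGIN (* describe and form the architecture *) END;
 PROCEDURE CollectData;   BEGIN (* collect the training sample *) END;
 PROCEDURE TrainOrLoad;   BEGIN (* train, or load a trained net from disk *) END;
 PROCEDURE TestNetwork;   BEGIN (* test the trained network *) END;
 PROCEDURE ShowProgress;  BEGIN (* visualize learning and testing *) END;
 PROCEDURE SolveProblem;  BEGIN (* apply the trained network *) END;
 PROCEDURE SaveResults;   BEGIN (* record results and solutions to disk *) END;

 BEGIN
     BuildNetwork;
     CollectData;
     TrainOrLoad;
     TestNetwork;
     ShowProgress;
     SolveProblem;
     SaveResults;
 END.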

Industrial neurosimulators (such as NeuralWorks Professional II+ from NeuralWare, or MultiNeuron, developed at the Krasnoyarsk Scientific Center) provide the researcher with a wide range of options for implementing these functions. In individual programs, where the user is primarily interested in the result, some of these functions can be simplified to the bare minimum.

Solving a problem with a neural network may consist of the following steps (not necessarily all of them, and not necessarily performed in the order given).

Problem statement in terms of a neural network.

Formulating a problem for a neural network has a certain specificity, as the reader has seen throughout the course. First of all, it is necessary to decide whether the problem belongs to one of the standard types of neural network statements: classification and categorization problems, functional model building (system identification), forecasting, optimization and neuromathematics problems, control problems, and, finally, pattern recognition and signal processing.

A non-standard problem formulation for a neurocomputer usually requires special study and extensive experience in solving other problems. At this stage, one must answer the question: is a neural network needed to solve this problem at all? It is quite possible (and often the case) that a solution can be obtained by an algorithmic method; in that case, using a neurosimulator is usually not effective.

Next, you should determine the feature spaces used in the task, which contain the parameters that play an important role in the problem. If you are not an expert in the subject area, it is advisable to seek advice at this stage. Communication with colleagues never hurts, even if you are considered the leading expert on the issue.

When constructing feature spaces, take into account the availability and accessibility of the corresponding data; otherwise you will have no information for training the neural network.

Finally, it is very useful to envision the expected result of the neural network's work and the way it will be used. In many cases this leads to a simpler statement and, as a consequence, a more effective solution. If the results do not meet your expectations, this is an important reason to approach the problem more fundamentally.

Selection and analysis of a neural architecture adequate to the task.

The type of neural network used is largely dictated by the task. Thus, for classification problems, a multilayer perceptron or a Lippmann-Hamming network may be convenient. The perceptron is also applicable to system identification and forecasting problems. Categorization problems call for a Kohonen map, a counter-propagation architecture, or an adaptive resonance network. Neuromathematics problems are usually solved using various modifications of the Hopfield model.

It is better to use architectures whose properties are most familiar to you, as this simplifies interpretation of the results. The choice may also be affected by the presence or absence of appropriate programs at your disposal.

Selection of data and formation of the training sample.

The ideal situation is when you can obtain arbitrarily much varied data for your task. In this case, care should be taken that the data contain no systematic errors or biases (unless that very issue is the subject of your research). It is advisable to include in the training sample, first of all, data describing conditions close to those in which the neural system will later be used.

To solve some pattern recognition problems, the data should, if possible, be presented in an invariant form.

For practical purposes, part of the training sample should be withheld from training and used for subsequent testing of the neural network. It is also useful to understand that a very large training sample will greatly slow down learning without significantly improving the result.
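
A minimal sketch of such a split (the one-quarter hold-out fraction here is purely an illustrative choice):

 PROGRAM SplitDemo;
 (* Sketch: reserve part of a sample of 200 records for testing
    instead of training; only the index arithmetic is shown *)
 CONST
     CImages = 200;     (* sample size, as in the PERC program below *)
     CHoldOut = 0.25;   (* illustrative hold-out fraction *)
 VAR
     VNTrain: INTEGER;
 BEGIN
     VNTrain := ROUND(CImages * (1.0 - CHoldOut));
     WRITELN('Train on images 1..', VNTrain);
     WRITELN('Test on images ', VNTrain + 1, '..', CImages);
 END.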

If you have a very limited amount of data at your disposal, you will need to analyze whether it is sufficient for solving your problem. This is usually a very difficult question. One solution may be to reduce the dimension of the feature spaces of the problem. In any case, there should be more training examples than trainable parameters in the neural network. (In the PERC example below, the single neuron has ten synaptic weights plus one threshold, i.e. eleven trainable parameters, trained on 200 images.)

Develop your own program or use an existing neurosimulator?

Neurosimulators available on the market are developed by professionals specifically for your convenience. For practical purposes, it is better to use them. This ensures that standards are met and that your results are understandable to others.

The exception is non-standard tasks and specialized neural network architectures, for which a new program must be developed. When choosing a technical environment for your project, it is useful to consider the available tools for writing neural programs and processing databases. C (or C++) is most often used as the programming language. For small projects, Pascal or BASIC can be chosen.

And please do not waste time reprogramming the standard square root function!

Analysis of the results.

This is one of the most important phases of solving the problem. For a complete analysis, care should be taken that the results are presented clearly, using graphical representations of them. If the results will be used in further computer calculations, it is advisable to present them immediately in a format understood by other programs. For exchanging small data tables between programs, a plain text representation can be used. For large volumes, it is better to use standard formats, for example, dbf files of Ashton-Tate's dBASE system. This automatically lets you use the tools of that system (and many others) for presenting, storing and editing data.
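
For instance, a small table can be written to a text file in just a few lines of PASCAL (the file name and table contents here are purely illustrative):

 PROGRAM ExportDemo;
 (* Sketch: dumping a small table to a plain text file that other
    programs can read; the values are illustrative placeholders *)
 VAR
     VOut: TEXT;
     Vi: INTEGER;
 BEGIN
     ASSIGN(VOut, 'RESULTS.TXT');
     REWRITE(VOut);
     FOR Vi := 1 TO 10 DO
         WRITELN(VOut, Vi:4, SQR(Vi):8);  (* two columns per line *)
     CLOSE(VOut);
 END.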

If the results obtained differ significantly from those expected, you will most likely have to return to the statement of the problem.

However, it is possible that you are on the verge of a new discovery...

Description of the PERC program.

This section describes the simplest PERC program, which implements training of a single-layer perceptron. The following task was chosen as an example. The neural network is presented with a vector of 10 components, each of which is zero or one. The network must learn to determine which there are more of: zeros or ones.

To solve this problem, at least one neuron with ten inputs and one output is needed (although the program allows using several neurons). The function being represented belongs to the class of linearly separable functions, so this single neuron is sufficient for a solution.
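
Indeed, for a vector of ten zeros and ones, "more ones than zeros" simply means x1 + x2 + ... + x10 >= 6. A single neuron whose ten synaptic weights all equal 1 and whose threshold weight equals -5.5 therefore separates the two classes exactly: its weighted sum x1 + ... + x10 - 5.5 is positive precisely when the ones are in the majority. (This choice of weights is only one of many separating solutions; training will generally arrive at a different but equivalent one.)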

As a training sample, 200 vectors are used, whose components are generated using PASCAL's pseudo-random number generator. The correct answer is determined by directly comparing the number of zeros and ones.

The network is trained according to F. Rosenblatt's delta rule, which was discussed in detail in the fourth lecture. At the end of training, the program reports the number of iterations performed and the learning error achieved. A complete listing of the PERC program and the results of its work are given at the end of this section. (Note: if you run the program on your computer, the values obtained may differ slightly from those given, due to different random number sequences.)
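
In the notation of the listing, presenting a training pair (X, Y) updates each weight as

    W[j,k] := W[j,k] + CEta * Inp[j] * (Y[k] - Out[k]),

where (Y[k] - Out[k]) is the output error of neuron k and CEta = 0.75 is the learning rate; the threshold weight W[0,k] is updated in the same way, with the constant input CBiasNeuron = 1 in place of Inp[j].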

To test the quality of training, a separate TEST program was developed (its text and results are also given below). The data structures used and the operation of the program are similar to those of PERC. Random vectors are likewise used for testing.

The test results are quite satisfactory: the neural network successfully copes with the task, with errors appearing only in the second or third decimal place of the answer. Interpreting these errors causes no difficulties or misunderstandings.

The text of the PERC program.

  PROGRAM PERC;

 (* PERC - A training program implementing a single-layer
           PERCEPTRON.

           DATE: October 26, 1994
           AUTHOR: S.A.Terekhov (email: sta@ch70.chel.su)
 *)

 CONST
     CMaxInp = 20;        (* Maximum number of inputs *)
     CMaxOut = 10;        (* Maximum number of outputs *)
     CMaxImages = 200;    (* Maximum number of images *)
     CEta = 0.75;         (* Learning rate *)
     CError = 5.0e-3;     (* Required error limit *)
     CCounter = 1000;     (* Maximum number of iterations *)
     CInitWeight = 5.0;   (* Maximum magnitude of the initial
                             random synaptic weights *)
     CBiasNeuron = 1.0;   (* Threshold activity of a neuron *)

 TYPE
     TMatrix = ARRAY [0..CMaxInp, 1..CMaxOut] OF REAL;
                     (* Index 0 of the first dimension holds
                        the threshold weights *)
     TInpVector = ARRAY [1..CMaxInp] OF REAL;
     TOutVector = ARRAY [1..CMaxOut] OF REAL;

     (* Network structure *)
     TPerceptron = RECORD
         NInp: INTEGER;     (* Number of inputs *)
         NOut: INTEGER;     (* Number of outputs *)
         Inp: TInpVector;   (* Current input vector *)
         Out: TOutVector;   (* Current output vector *)
         W: TMatrix;        (* Connection matrix *)
     END;

     (* Record in the database - one training example *)
     TBaseRecord = RECORD
         X: TInpVector;
         Y: TOutVector;
     END;

     (* Database structure *)
     TBase = RECORD
         NImages: INTEGER;  (* Number of training images *)
         Images: ARRAY [1..CMaxImages] OF TBaseRecord;
     END;

 VAR
     VNet: TPerceptron;
     VBase: TBase;
     VOK: BOOLEAN;
     VError, VTemp, VDelta: REAL;
     VCounter, Vi, Vj, Vk: INTEGER;
     VFile: FILE OF TPerceptron;

 PROCEDURE InitAll;
 (* Initialization of a neural network with 10 inputs and one output;
    the connection matrix is filled with initial random values *)
 VAR
     Li, Lj: INTEGER;
 BEGIN
     WITH VNet, VBase DO
     BEGIN
         NInp := 10;
         NOut := 1;
         FOR Li := 0 TO NInp DO
             FOR Lj := 1 TO NOut DO
                 W[Li, Lj] := CInitWeight * (RANDOM - 0.5);
     END;
     VOK := TRUE;
 END;

 PROCEDURE GetDataBase;
 (* Generation of a training sample of 200 random images.
    The correct answer is determined by directly counting
    the ones *)
 VAR
     Li, Lj, Lk: INTEGER;
 BEGIN
     VOK := TRUE;
     WITH VBase, VNet DO
     BEGIN
         NImages := 200;
         FOR Li := 1 TO NImages DO
         BEGIN
             Lk := 0;
             FOR Lj := 1 TO NInp DO
             BEGIN
                 (* Randomly 0 or 1 *)
                 Images[Li].X[Lj] := RANDOM(2);
                 (* Counting the ones *)
                 IF (Images[Li].X[Lj] > 0)
                     THEN Lk := Lk + 1;
             END;
             (* The output equals one if the number of ones
                in this input vector exceeds the number of zeros *)
             IF (Lk > (NInp - Lk))
                 THEN Images[Li].Y[1] := 1
                 ELSE Images[Li].Y[1] := 0;
         END;
     END;
 END;

 PROCEDURE SaveNet;
 (* Writes the neural network parameters to the file SAMPLE.DAT.
    Output operations are checked using the I- and I+
    compiler directives of TURBO PASCAL *)
 BEGIN
     ASSIGN(VFile, 'SAMPLE.DAT');
     {$I-}
     REWRITE(VFile);
     {$I+}
     VOK := (IOResult = 0);

     IF VOK THEN
     BEGIN
         {$I-}
         WRITE(VFile, VNet);
         CLOSE(VFile);
         {$I+}
         VOK := (IOResult = 0);
     END;
 END;

 FUNCTION Sigmoid(Z: REAL): REAL;
 (* Sigmoidal transfer function of a neuron *)
 BEGIN
     Sigmoid := 1.0 / (1.0 + EXP(-Z));
 END;

 (* Main program *)
 BEGIN
     WRITELN('<< PERCEPTRON >> (Neurosimulator)');
     WRITELN('-----------------------------------------');
     VOK := TRUE;
     (* Initialization with error control *)
     RANDOMIZE;
     InitAll;
     IF (NOT VOK) THEN
     BEGIN
         WRITELN('Initialization error');
         HALT;
     END;

     (* Database generation *)
     VOK := TRUE;
     GetDataBase;
     IF (NOT VOK) THEN
     BEGIN
         WRITELN('Error while generating database');
         HALT;
     END;

     (* Learning cycle *)
     VOK := TRUE;
     VCounter := 0;
     WITH VNet, VBase DO
         REPEAT
             VError := 0.0;
             (* Cycle over the training sample *)
             FOR Vi := 1 TO NImages DO
             BEGIN
                 (* Present the next image to the network inputs *)
                 FOR Vj := 1 TO NInp DO
                 BEGIN
                     Inp[Vj] := Images[Vi].X[Vj];
                 END;

                 (* Cycle over the neurons. In a hardware
                    implementation this would run in parallel! *)
                 FOR Vk := 1 TO NOut DO
                 BEGIN
                     (* State of the current neuron *)
                     VTemp := CBiasNeuron * W[0, Vk];
                     FOR Vj := 1 TO NInp DO
                     BEGIN
                         VTemp := VTemp + Inp[Vj] * W[Vj, Vk];
                     END;
                     Out[Vk] := Sigmoid(VTemp);

                     (* Error accumulation *)
                     VDelta := Images[Vi].Y[Vk] - Out[Vk];
                     VError := VError + 0.5 * SQR(VDelta);

                     (* Training by the Rosenblatt delta rule *)
                     W[0, Vk] := W[0, Vk] +
                         CEta * CBiasNeuron * VDelta;
                     FOR Vj := 1 TO NInp DO
                     BEGIN
                         W[Vj, Vk] := W[Vj, Vk] +
                             CEta * Inp[Vj] * VDelta;
                     END;
                 END;
             END;
             VCounter := VCounter + 1;
         UNTIL ((VCounter >= CCounter) OR (VError <= CError));
         (* The cycle ends when the maximum number of iterations
            is reached or the error becomes small enough *)

     WRITELN('Done ', VCounter, ' iterations');
     WRITELN('Learning error ', VError);

     (* Saving the learning results to disk *)
     SaveNet;
     IF (NOT VOK) THEN
     BEGIN
         WRITELN('Error writing to disk');
         HALT;
     END;

     WRITE('Neural network trained, parameters ');
     WRITELN('recorded in file SAMPLE.DAT');
 END.

 The result of the PERC program.

 << PERCEPTRON >> (Neurosimulator)
 -----------------------------------------
 Done 243 iterations
 Learning error  4.9997994218E-03
 Neural network trained, parameters recorded in file SAMPLE.DAT

 The text of the TEST program.

 PROGRAM TEST;

 (* TEST - A testing program for the PERC neurosimulator *)

 CONST
     CMaxInp = 20;
     CMaxOut = 10;
     CMaxImages = 15;
     CBiasNeuron = 1.0;

 TYPE
     TMatrix = ARRAY [0..CMaxInp, 1..CMaxOut] OF REAL;
     TInpVector = ARRAY [1..CMaxInp] OF REAL;
     TOutVector = ARRAY [1..CMaxOut] OF REAL;
     TPerceptron = RECORD
         NInp: INTEGER;
         NOut: INTEGER;
         Inp: TInpVector;
         Out: TOutVector;
         W: TMatrix;
     END;

 VAR
     VNet: TPerceptron;
     VTemp: REAL;
     VCorrect: REAL;
     Vi, Vj, Vk: INTEGER;
     VOK: BOOLEAN;
     VFile: FILE OF TPerceptron;

 PROCEDURE LoadNet;
 (* Reads the neural network parameters from the file SAMPLE.DAT.
    Input operations are checked using the I- and I+
    compiler directives of TURBO PASCAL *)
 BEGIN
     ASSIGN(VFile, 'SAMPLE.DAT');
     {$I-}
     RESET(VFile);
     {$I+}
     VOK := (IOResult = 0);

     IF VOK THEN
     BEGIN
         {$I-}
         READ(VFile, VNet);
         CLOSE(VFile);
         {$I+}
         VOK := (IOResult = 0);
     END;
 END;

 FUNCTION Sigmoid(Z: REAL): REAL;
 BEGIN
     Sigmoid := 1.0 / (1.0 + EXP(-Z));
 END;

 BEGIN
     VOK := TRUE;
     RANDOMIZE;
     (* Reading the parameters of the trained neural network *)
     LoadNet;
     IF (NOT VOK) THEN
     BEGIN
         WRITELN('Error reading file');
         HALT;
     END;
     VOK := TRUE;
     WITH VNet DO
     BEGIN
         WRITELN('<< PERCEPTRON >> (Testing program)');
         WRITELN('-----------------------------------------------');
         WRITELN('QUESTION ANSWER TRUE ANSWER');
         WRITELN('-----------------------------------------------');
         FOR Vi := 1 TO CMaxImages DO
         BEGIN
             (* Present a random image to the inputs *)
             Vk := 0;
             FOR Vj := 1 TO NInp DO
             BEGIN
                 (* Randomly 0 or 1 *)
                 Inp[Vj] := RANDOM(2);
                 (* Counting the ones *)
                 IF (Inp[Vj] > 0)
                     THEN Vk := Vk + 1;
             END;
             (* The correct answer is known! *)
             IF (Vk > (NInp - Vk))
                 THEN VCorrect := 1.0
                 ELSE VCorrect := 0.0;
             (* The neural network gives its answer *)
             FOR Vk := 1 TO NOut DO
             BEGIN
                 VTemp := CBiasNeuron * W[0, Vk];
                 FOR Vj := 1 TO NInp DO
                 BEGIN
                     VTemp := VTemp + Inp[Vj] * W[Vj, Vk];
                 END;
                 Out[Vk] := Sigmoid(VTemp);
             END;
             (* Output of the results *)
             FOR Vj := 1 TO NInp DO
                 WRITE(Inp[Vj]:2:0);
             WRITELN(' ', Out[1]:4:2, ' ', VCorrect:2:0);
         END;
     END;
     WRITELN('-----------------------------------------------');
 END.

 The result of the TEST program.

 << PERCEPTRON >> (Testing program)
 -----------------------------------------------
  QUESTION ANSWER TRUE ANSWER
 -----------------------------------------------
  0 0 0 0 1 1 1 1 0 0 0.00 0
  0 0 1 0 0 0 0 1 0 1 0.00 0
  1 1 0 0 0 0 0 1 0 0 0.00 0
  1 1 1 1 0 1 0 1 1 1 1.00 1
  0 1 1 1 0 1 1 0 0 0 0.01 0
  1 0 1 0 1 0 1 1 1 0 0.99 1
  1 0 1 1 1 0 0 1 1 0 0.98 1
  1 0 1 1 1 1 0 0 1 1 1.00 1
  1 1 0 1 1 1 1 0 1 0 1.00 1
  1 1 0 1 1 1 0 0 0 1 1.00 1
  0 0 0 0 1 1 0 1 0 1 0.00 0
  1 0 0 1 0 0 0 0 0 1 0.00 0
  1 0 0 1 0 0 0 1 1 0 0.00 0
  0 1 0 1 1 1 0 1 0 0 0.02 0
  1 1 1 1 1 1 0 1 1 0 1.00 1
 -----------------------------------------------


Tasks.

1. Using the PERC program, you can study the dependence of the solution on the size of the training sample. This is achieved by changing the value of the NImages variable in the GetDataBase subroutine. Try to explain the deterioration of test results as the number of images is gradually decreased.

2. Modify the PERC and TEST programs by changing the transfer function of the neuron (one possible variant is sketched after this list). Compare the results.

3. Study the dependence of learning on the learning rate (the CEta value) and on the initial weight scale (the CInitWeight value). Explain your results.
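
As a starting point for task 2, one simple variant that preserves the (0, 1) output range is a sigmoid with an adjustable slope; it can be dropped in place of the Sigmoid function in PERC and TEST (the slope value here is an arbitrary choice):

 FUNCTION Sigmoid(Z: REAL): REAL;
 (* A steeper sigmoidal transfer function; CSlope is illustrative *)
 CONST
     CSlope = 2.0;
 BEGIN
     Sigmoid := 1.0 / (1.0 + EXP(-CSlope * Z));
 END;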

