
Artificial neuron

Lecture



The basic unit of the brain is the neuron. Naturally, modeling the brain with neural networks begins with answering the question of how to model a single neuron.

The work of a real neuron is based on chemical processes. At rest there is a potential difference between the internal and external environment of the neuron - the membrane potential of about 75 millivolts. It is created by special protein molecules that work as sodium-potassium pumps. Using the energy of the nucleotide ATP, these pumps drive potassium ions into the cell and sodium ions out of it. Since the protein acts as an ATP-ase, that is, an enzyme that hydrolyzes ATP, it is called "sodium-potassium ATP-ase". As a result, the neuron behaves like a charged capacitor, with a negative charge inside and a positive one outside.


Neuron Scheme (Mariana Ruiz Villarreal)

The surface of the neuron is covered with branching processes - dendrites. Axon endings of other neurons adjoin dendrites. The places of their connections are called synapses. Through synaptic interaction, a neuron is able to respond to incoming signals and, under certain circumstances, generate its own impulse, called a spike.

Signal transmission in synapses occurs due to substances called neurotransmitters. When a nerve impulse traveling along the axon reaches the synapse, it triggers the release, from special vesicles, of the neurotransmitter molecules characteristic of this synapse. On the membrane of the neuron receiving the signal there are protein molecules - receptors - that interact with the neurotransmitters.


Chemical synapse

Receptors located in the synaptic cleft are ionotropic. This name emphasizes the fact that they are also ion channels capable of moving ions. Neurotransmitters affect receptors in such a way that their ion channels open. Accordingly, the membrane is either depolarized or hyperpolarized, depending on which channels are affected and, accordingly, what type of synapse it is. In excitatory synapses, channels open that allow cations into the cell, and the membrane depolarizes. In inhibitory synapses, channels conducting anions open, leading to membrane hyperpolarization.

In certain circumstances, synapses can change their sensitivity, which is called synaptic plasticity. As a result, the synapses of a single neuron come to differ from one another in their susceptibility to external signals.

Many signals arrive at the neuron's synapses simultaneously. Inhibitory synapses pull the membrane potential toward hyperpolarization, accumulating charge inside the cell. Excitatory synapses, on the contrary, try to depolarize the neuron (figure below).


Excitation (A) and inhibition (B) of the retinal ganglion cell (Nicholls J., Martin R., Wallace B., Fuchs P., 2003)

When the total activity exceeds the initiation threshold, a discharge occurs, called an action potential or spike. A spike is a sharp depolarization of the neuron's membrane, which generates an electrical impulse. The whole process of impulse generation lasts about 1 millisecond. Neither the duration nor the amplitude of the impulse depends on how strong the causes that triggered it were (figure below).


Registration of ganglion cell action potential (Nicholls J., Martin R., Wallace B., Fuchs P., 2003)

After a spike, ion pumps ensure neurotransmitter reuptake and clearing of the synaptic cleft. During the refractory period that follows a spike, the neuron is unable to generate new impulses. The duration of this period determines the maximum firing frequency of which the neuron is capable.

Spikes that arise as a result of activity at the synapses are called evoked. The frequency of evoked spikes encodes how well the incoming signal matches the sensitivity tuning of the neuron's synapses. When incoming signals arrive at the sensitive synapses that activate the neuron, and signals arriving at the inhibitory synapses do not interfere with this, the neuron's response is maximal. The image described by such signals is called the stimulus characteristic of the neuron.

Of course, the idea of how neurons work should not be overly simplified. Information can be transmitted between some neurons not only by spikes, but also through channels connecting their intracellular contents and transmitting electrical potential directly. Such conduction is called graded, and the connection itself is called an electrical synapse. Dendrites, depending on their distance from the neuron body, are divided into proximal (close) and distal (distant). Distal dendrites can form sections that operate as semi-autonomous elements. In addition to the synaptic pathways of excitation, there are extrasynaptic mechanisms, involving metabotropic receptors, that can cause spikes. Besides evoked activity, there is also spontaneous activity. Finally, the neurons of the brain are surrounded by glial cells, which also have a significant effect on the processes taking place.

The long path of evolution has created many mechanisms that the brain uses in its work. Some of them can be understood on their own; the meaning of others becomes clear only when fairly complex interactions are considered. Therefore, the above description of the neuron should not be taken as exhaustive. Before moving on to deeper models, we first need to deal with the "basic" properties of neurons.

In 1952, Alan Lloyd Hodgkin and Andrew Huxley described the electrical mechanisms that determine the generation and transmission of the nerve signal in the giant axon of the squid (Hodgkin, Huxley, 1952). This work was awarded the Nobel Prize in Physiology or Medicine in 1963. The Hodgkin-Huxley model describes the behavior of a neuron by a system of ordinary differential equations. These equations correspond to an autowave process in an active medium. They take into account many components, each of which has its own biophysical analogue in a real cell (figure below). The ion pumps correspond to the current source Ip. The lipid bilayer of the cell membrane forms a capacitor with capacitance Cm. The ion channels of synaptic receptors provide an electrical conductance gn, which depends on the input signals, changing with time t, and on the total value of the membrane potential V. The leakage current through membrane pores is represented by the conductance gL. The movement of ions through the ion channels occurs under the action of electrochemical gradients, which correspond to voltage sources with electromotive forces En and EL.


The main components of the Hodgkin-Huxley model
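The role of these components can be illustrated with a minimal sketch. The following is NOT the full Hodgkin-Huxley system: only the capacitance Cm, the leak conductance gL with reversal potential EL, and a constant injected current Ip are kept, and the voltage-dependent conductances gn are omitted. All parameter values are illustrative, not fitted to a real cell.

```python
# Minimal membrane-equation sketch (leak + injected current only),
# integrated with the forward Euler method. Illustrative parameters.

Cm = 1.0    # membrane capacitance, uF/cm^2
gL = 0.1    # leak conductance, mS/cm^2
EL = -75.0  # leak reversal potential, mV
Ip = 1.0    # injected current, uA/cm^2
dt = 0.01   # integration step, ms

V = EL  # start at the resting potential
for _ in range(100_000):  # simulate 1000 ms
    # Cm * dV/dt = -gL * (V - EL) + Ip
    V += dt * (-gL * (V - EL) + Ip) / Cm

# At steady state the leak current balances the injected current:
# V -> EL + Ip / gL = -65 mV
print(round(V, 3))  # -65.0
```

Even this stripped-down circuit shows the capacitor-like behavior of the membrane: the potential relaxes exponentially toward the level at which the currents balance.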

Naturally, when creating neural networks, there is a desire to simplify the neuron model, leaving only its most essential properties. The most well-known and popular simplified model is the McCulloch-Pitts artificial neuron, developed in the early 1940s (W. McCulloch, W. Pitts, 1943).


McCulloch-Pitts Formal Neuron

The inputs of such a neuron receive signals. These signals are summed with weights. A certain nonlinear activation function, for example a sigmoid, is then applied to this linear combination. The logistic function is often used as the sigmoid:

f(x) = 1 / (1 + e^(-x))

Logistic function

In this case, the activity of the formal neuron is written as

y = f(Σ w_i x_i),

where x_i are the input signals and w_i are the synaptic weights.

As a result, such a neuron turns into a threshold adder. With a sufficiently steep threshold function, the neuron's output signal is either 0 or 1. The weighted sum of the input signal and the neuron's weights is a convolution of two images: the image of the input signal and the image described by the neuron's weights. The result of the convolution is higher, the more closely these images match. That is, the neuron essentially determines how similar the supplied signal is to the image recorded on its synapses. When the value of the convolution exceeds a certain level and the threshold function switches to one, this can be interpreted as the neuron's firm statement that it has recognized the presented image.
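The threshold adder described above fits in a few lines of Python. The weights, inputs, and threshold below are illustrative, chosen only to show that a matching input drives the output toward 1 and a mismatching one toward 0.

```python
import math

def logistic(x):
    """Logistic (sigmoidal) activation function."""
    return 1.0 / (1.0 + math.exp(-x))

def formal_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style neuron: weighted sum, then a nonlinearity."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return logistic(s - threshold)

# The weights play the role of the "image" recorded on the synapses.
weights = [3.0, -2.0, 3.0]
matching = [1.0, 0.0, 1.0]     # input close to the stored image
mismatching = [0.0, 1.0, 0.0]  # input far from it

print(formal_neuron(matching, weights, 1.0))     # close to 1
print(formal_neuron(mismatching, weights, 1.0))  # close to 0
```

The weighted sum here is exactly the convolution of the input image with the stored image: the better the match, the further the sum rises above the threshold.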

Real neurons are indeed somewhat similar to McCulloch-Pitts neurons. The amplitudes of their spikes do not depend on which signals at the synapses caused them: a spike either occurs or it does not. But real neurons respond to a stimulus not with a single impulse but with a sequence of impulses, and the frequency of the impulses is higher, the more accurately the image characteristic of the neuron is recognized. This means that if we build a neural network from such threshold adders, then with a static input signal it will give some output, but this result will be far from reproducing how real neurons work. To bring the neural network closer to its biological prototype, we need to simulate its work in dynamics, taking into account time parameters and reproducing the frequency properties of the signals.

But you can go the other way. For example, you can select a generalized characteristic of a neuron's activity that corresponds to the frequency of its impulses, that is, the number of spikes over a certain period of time. If we switch to this description, we can think of a neuron as a simple linear adder.

Linear adder

The output signals, and accordingly the inputs, of such neurons are no longer dichotomous (0 or 1) but are expressed by a scalar value. The activation function is then written as

y = Σ w_i x_i

A linear adder should not be perceived as something fundamentally different from a pulsed neuron; it simply allows switching to longer time intervals when modeling or describing. Although the impulse description is more correct, the transition to a linear adder is in many cases justified by the strong simplification of the model it brings. Moreover, some important properties that are difficult to discern in a pulsed neuron are quite obvious for a linear adder.
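In code, the linear adder is even simpler than the threshold adder: the nonlinearity disappears and the weighted sum itself is the output. The rates and weights below are illustrative values, not taken from any real neuron.

```python
def linear_adder(inputs, weights):
    """Rate-based neuron model: the output is the weighted sum itself."""
    return sum(w * x for w, x in zip(weights, inputs))

# Inputs and outputs are now scalar activity levels (spike rates over some
# time window), not 0/1 pulses.
rates_in = [5.0, 2.0, 0.0]
weights = [0.4, 0.1, 0.3]
print(linear_adder(rates_in, weights))  # 2.2
```

Comparing this with the formal neuron makes the relationship explicit: the linear adder is the same weighted sum, read out as an average firing rate instead of being passed through a threshold.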

created: 2014-09-25
updated: 2021-03-13


