
The brain prosthesis experiment

Lecture



The brain prosthesis thought experiment was proposed in the mid-1970s by the philosopher Clark Glymour and discussed by John Searle, but it is most often associated with the work of the roboticist Hans Moravec. The experiment goes as follows. Suppose that neurophysiology has advanced to the point where the input-output behavior of every neuron in the human brain, and all of the connections between neurons, are perfectly understood. Suppose also that it is possible to build microscopic electronic devices that mimic this behavior and that can be smoothly interfaced with neural tissue. Finally, suppose that some marvelous surgical technique makes it possible to replace individual neurons with the corresponding electronic devices without interrupting the operation of the brain as a whole. The experiment consists of gradually replacing all of the neurons in a person's head with electronic devices, and then reversing the process to return the subject to his normal biological state.
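
To make the second assumption concrete: the electronic replacement needs only to reproduce the input-output behavior of the neuron it stands in for. Below is a minimal sketch of such a functional stand-in. The thought experiment specifies no particular model, so the leaky integrate-and-fire scheme, the class name, and the parameter values here are purely illustrative assumptions.

    # Hypothetical sketch of a functional stand-in for a single neuron.
    # The experiment only requires that the device reproduce the neuron's
    # input-output behavior; the model below is an illustrative assumption.
    class ElectronicNeuron:
        def __init__(self, threshold=1.0, leak=0.9):
            self.potential = 0.0        # internal "membrane potential"
            self.threshold = threshold  # firing threshold
            self.leak = leak            # decay factor applied each time step

        def step(self, weighted_inputs):
            """Accumulate weighted inputs; return 1 if the unit fires, else 0."""
            self.potential = self.potential * self.leak + sum(weighted_inputs)
            if self.potential >= self.threshold:
                self.potential = 0.0    # reset after a spike
                return 1
            return 0

As long as step() produces the same spikes as the biological neuron it replaced for every input sequence, the rest of the brain cannot tell the difference, and that is all the experiment requires.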

We are interested both in the external behavior and in the internal experience of the subject during and after the operation. By the definition of the experiment, the subject's external behavior must remain unchanged from what would be observed if the operation were not performed. Now, although the presence or absence of consciousness cannot easily be ascertained by an outside observer, the subject of the experiment should at least be able to register any changes in his own conscious experience. Evidently there is a direct clash of intuitions about what would happen. Moravec, a robotics researcher and functionalist, is convinced that the subject's consciousness would remain unaffected; Searle, a philosopher and biological naturalist, is equally firmly convinced that the subject's consciousness would gradually vanish, as the following quotation from his work attests.

You find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when the doctors test your vision, they say, "We are holding up a red object in front of you; please tell us what you see." You want to cry out, "I can't see anything. I'm going totally blind," but you hear your voice saying, entirely outside of your control, "I see a red object in front of me." Your conscious experience slowly shrinks to nothing, while your externally observable behavior remains the same.

There is, however, a way to carry on this dispute that relies on more than intuition. First, note that for the external behavior to remain the same while the subject gradually loses consciousness, the subject's volition would have to vanish instantly and completely; otherwise the shrinking of awareness would have to show up in external behavior, and the subject would cry out "Help, I'm losing consciousness!" or something of the kind. This hypothesis of an instantaneous disappearance of volition as a result of replacing individual neurons one at a time seems implausible.

Second, consider what happens if we ask the subject how he feels about his own consciousness during the period when not a single real neuron remains in his head. By the conditions of the experiment, we must get answers such as: "I feel fine. I must say I'm a little surprised, because I believed Searle's argument." Alternatively, we might prick the subject with a pointed stick and hear the response: "Ouch, that hurts." Now, a skeptic may object that such outputs can also be produced by ordinary artificial intelligence programs as the result of simple canned conventions. Indeed, it is easy to write, say, the rule "If a high-intensity signal appears on sensor number 12, output 'Ouch, that hurts.'" But the whole point of the experiment is that we have duplicated the functional properties of a normal human brain, so the electronic brain is assumed to contain no structures implementing such canned conventions. We must therefore explain what causes the manifestations of consciousness produced by the electronic brain, appealing only to the functional properties of its neurons; and that explanation must apply equally to the real brain, which has the same functional properties. It seems to us that only two conclusions are possible:

  1. The causal mechanisms of consciousness that generate these kinds of outputs in the normal brain are still operating in the electronic version, which is therefore conscious.
  2. The conscious mental events in the normal brain have no causal connection with behavior, and they are absent from the electronic brain, which is therefore not conscious.

Although we cannot rule out the second possibility, it reduces consciousness to what philosophers call an epiphenomenal role: something that happens but casts no shadow, as it were, on the observable world. Furthermore, if consciousness really is epiphenomenal, then the brain must contain a second, unconscious mechanism by which the exclamation "Ouch, that hurts" is produced when the person is pricked with a pointed stick.
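
For concreteness, the canned rule mentioned above, and equally the kind of second, unconscious mechanism that an epiphenomenal view would require, might look like the following sketch. The sensor number, threshold, and wording are illustrative assumptions; the point of the experiment is precisely that the electronic brain contains no such rule.

    # Hypothetical sketch of the skeptic's canned rule: the pain report is
    # wired directly to a sensor by convention rather than produced by the
    # brain's functional organization. All specifics here are assumptions.
    def canned_pain_rule(sensor_readings):
        """sensor_readings maps sensor number -> signal intensity in [0, 1]."""
        if sensor_readings.get(12, 0.0) > 0.9:  # high-intensity signal on sensor 12
            return "Ouch, that hurts"
        return None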

Third, consider the situation after the operation has been reversed and the subject once again has a normal brain. Here too, by definition, the subject's external behavior must be the same as if the operation had never been carried out. In particular, we are entitled to ask the subject: "What was it like during the operation? Do you remember being pricked with the pointed stick?" The subject must have accurate memories of the actual nature of his conscious experiences, including their qualitative character, even though, according to Searle, no such experiences ought to have occurred.

Searle might reply that we have not defined the experiment properly. If the real neurons are, for example, put out of action during the period when they are removed and then reinstalled in the brain, then of course they cannot "remember" the experiences that occurred in the meantime. To deal with this, the internal state of the real neurons must be updated to reflect the changes in the internal state of the artificial neurons that replaced them. And if one then supposes that the real neurons have "nonfunctional" aspects that would lead to behavior functionally different from what was observed while the artificial neurons were in place, we arrive at a straightforward reductio ad absurdum, for that would mean the artificial neurons were not functionally equivalent to the real ones after all.

Patricia Churchland has pointed out that the functionalist arguments above, applied at the level of individual neurons, can equally well be applied at the level of any larger functional unit: a group of neurons, a brain lobe, a hemisphere, or the whole brain. This means that if we accept the argument that the brain prosthesis experiment shows consciousness to be preserved when neurons are replaced by electronic components, we must also accept that consciousness is preserved when the entire brain is replaced at once by a circuit that maps inputs to outputs using a giant lookup table. This is an uncomfortable conclusion for many people (including Turing himself), whose intuition says that lookup tables can hardly be conscious, or at least that whatever conscious experience is generated during a table lookup is not the same as the experience generated by a system that can be described (even in a crude, computational sense) as manipulating beliefs, introspections, goals, and the like, as the brain does. These observations suggest that the brain prosthesis experiment is an effective intuition pump only if the entire brain is not replaced at once; but they do not show that the experiment amounts to nothing more than replacing one set of atoms with another, as Searle would have us believe.
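
Churchland's point can be illustrated by contrasting the two architectures directly. Both toy agents below produce identical outward behavior for the inputs shown; only the second can be described, even crudely, as manipulating internal state resembling beliefs. The percepts, responses, and "belief" machinery are illustrative assumptions.

    # Hypothetical contrast between a pure lookup-table agent and a system
    # that computes the same responses from internal state. All names,
    # percepts, and responses here are illustrative assumptions.

    # (a) Giant lookup table: every percept history is paired with a
    #     precomputed response.
    LOOKUP_TABLE = {
        ("pricked with a stick",): "Ouch, that hurts",
        ("shown a red object",):   "I see a red object in front of me",
    }

    def table_agent(percept_history):
        return LOOKUP_TABLE[tuple(percept_history)]

    # (b) A system whose response is derived from state describable as
    #     beliefs about its own situation.
    def belief_agent(percept_history):
        beliefs = {}
        for percept in percept_history:
            if percept == "pricked with a stick":
                beliefs["in_pain"] = True
            elif percept == "shown a red object":
                beliefs["sees"] = "a red object"
        if beliefs.get("in_pain"):
            return "Ouch, that hurts"
        if "sees" in beliefs:
            return "I see " + beliefs["sees"] + " in front of me"
        return "Nothing to report"

On these inputs the two agents are externally indistinguishable, which is exactly why Churchland's extension of the argument puts pressure on the intuition that only the second could be conscious.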
