Lecture
Until now, this section has mainly addressed the question of whether artificial intelligence can be developed, but it is also necessary to consider whether it should be developed at all. If the consequences of artificial intelligence technology are more likely to be negative than positive, then people working in this field have a moral responsibility to redirect their research to other areas.
Many new technologies have had unforeseen negative side effects: the internal combustion engine brought air pollution and the paving of roads through even the most idyllic places; nuclear technology brought the accidents at Chernobyl and Three Mile Island and created the threat of global destruction. All scientists and engineers face ethical considerations that they must take into account in their work: in choosing which projects should or should not be carried out, and in deciding how to carry them out. Entire books, such as Ethics of Computing, have been written on this topic. Artificial intelligence, however, appears to raise some entirely new problems, considered below, that go beyond, say, the problem of building bridges that do not collapse under their own weight.
Consider each of these problems in turn.
People may lose their jobs to automation. The modern industrial economy has become completely dependent on computers in general and on particular artificial intelligence programs. For example, a large part of the economy, especially in the United States, depends on the availability of consumer credit.
Credit card applications, charge approvals, and fraud detection are now handled by artificial intelligence programs. One might conclude that thousands of workers have lost their jobs to these programs, but in reality those jobs would not exist without artificial intelligence, because the cost of performing these operations by hand would be unacceptably high.
So far, automation based on artificial intelligence technology has consistently created more jobs than it has eliminated, and the jobs it has created are more interesting and better paid. Now that the canonical artificial intelligence program is an "intelligent agent" designed to assist people, job loss is an even less likely consequence of introducing artificial intelligence than it was in the era when research focused on building "expert systems" designed to replace people.
The amount of leisure time available to people may decrease (or increase). In his book Future Shock, Alvin Toffler wrote: "The work week has been cut by 50 percent since the turn of the century. It is not out of the way to predict that it will be slashed in half again by 2000." Arthur C. Clarke wrote that people in 2001 might "face a future of utter boredom, where the main problem in life is deciding which of several hundred television channels to select."
The only one of these forecasts that has come true, at least in part, concerns the number of television channels. Instead of a shorter working day, people employed in knowledge-intensive industries have come to feel that they are part of an integrated computerized system operating 24 hours a day; to cope with their responsibilities, they have to work more and more.
In an industrial economy, remuneration is roughly proportional to the time invested: working 10% longer usually raises income by about 10%. In an information economy, characterized by broadband communication and easy replication of intellectual property (what Frank and Cook called the "winner-take-all society"), the greatest reward goes to those who are slightly better than their competitors; working 10% longer can mean a 100% increase in income. Everyone therefore feels growing pressure to work harder and harder. Artificial intelligence speeds up the pace of technological innovation and thus contributes to this general trend, but it also holds out the promise of letting us take some time off and allowing our automated agents to handle things for a while.
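The contrast between the two reward schemes can be made concrete with a small illustrative calculation (the functional forms and payoff values below are assumptions introduced for the sake of the example, not figures from Frank and Cook):

\[
I_{\text{industrial}}(e) = c\,e \;\Rightarrow\; \frac{I_{\text{industrial}}(1.1\,e)}{I_{\text{industrial}}(e)} = 1.1,
\qquad
I_{\text{winner-take-all}}(e) =
\begin{cases}
2P, & e > e_{\text{rival}},\\
P, & \text{otherwise.}
\end{cases}
\]

In the linear (industrial) scheme a 10% increase in effort e yields a 10% increase in income, whereas in the rank-based scheme the same 10% increase, if it is just enough to overtake a rival, doubles the payoff, that is, raises income by 100%.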
People may lose their sense of being unique. In his book Computer Power and Human Reason, Weizenbaum, the author of the ELIZA program, pointed out some of the potential threats that the development of artificial intelligence poses to society. One of Weizenbaum's principal arguments is that research in artificial intelligence makes the idea that humans are automata seem plausible, and that this idea leads to a loss of autonomy or even of humanity. The authors of this book note, however, that this idea existed long before artificial intelligence and goes back at least to L'Homme Machine (La Mettrie). They also note that humanity has survived other blows to its sense of uniqueness: Copernicus, in De Revolutionibus Orbium Coelestium, removed the Earth from the center of the solar system, and Darwin, in Descent of Man, put Homo sapiens on the same level as all other species. Therefore, even if artificial intelligence becomes widespread and successful, it will be no greater a threat to the moral foundations of twenty-first-century society than Darwin's theory of evolution was to those of the nineteenth.
People may lose some of their privacy rights. Weizenbaum also pointed out that speech recognition technology could lead to widespread wiretapping and therefore to a loss of civil liberties. He did not foresee a world in which terrorist threats would become real enough to change how much surveillance people are willing to accept, but he correctly recognized that artificial intelligence has the potential to produce surveillance on a mass scale. Weizenbaum's prediction may soon be realized: the US government is considering implementing the Echelon system, which consists of a network of listening posts, antenna fields, and radar stations; the system is supported by computers that use language translation, speech recognition, and keyword search to sift automatically through telephone, e-mail, fax, and telex traffic. Many agree that computerization erodes privacy rights; Scott McNealy, a senior executive of Sun Microsystems, even said: "You have zero privacy anyway. Get over it." Others disagree; Judge Louis Brandeis, for example, wrote back in 1890: "Privacy is the most comprehensive of all rights ... the right to be let alone."
The use of artificial intelligence systems might lead to a loss of accountability. In the litigious atmosphere that prevails in the United States, questions of legal liability become important. If a physician relies on the judgment of a medical expert system for a diagnosis, who is at fault if the diagnosis turns out to be wrong? Fortunately, it is now generally accepted that a physician cannot be found negligent for carrying out medical procedures that have high expected utility, even if the actual outcome was disastrous for the patient (this change of opinion is partly due to the growing influence of decision-theoretic methods in medicine).
The question should therefore be posed as follows: "Who is at fault if the diagnosis was unreasonable?" So far, the courts have held that medical expert systems play the same role as medical textbooks and reference books; physicians are expected to understand the reasoning behind any decision of the system and to use their own judgment in deciding whether to accept its recommendations.
Therefore, when medical expert systems are designed as intelligent agents, their actions should be viewed not as directly affecting the patient but as influencing the physician's behavior. And if expert systems ever become reliably more accurate diagnosticians than humans, physicians might become legally liable for not following the recommendations of an expert system.
Similar issues arise in connection with the use of intelligent agents on the Internet. For example, some progress has been made in building constraints into intelligent agents so that they cannot, say, damage the files of other users. The problem becomes even more serious when money changes hands. If financial transactions are performed by an intelligent agent "on someone's behalf", who is liable for the resulting losses?
Could an intelligent agent be allowed to own assets and trade electronically on its own behalf? These questions do not yet seem to have been seriously studied. As far as the authors of this book are aware, no program has yet been granted legal status as an individual for the purposes of financial transactions; at present, such a decision would seem unwise. Nor are programs yet considered "drivers" for the purposes of enforcing traffic regulations on real highways. In California, at least, there are no legal sanctions that would prevent an automated vehicle from exceeding the speed limit, although in the event of an accident the designer of the vehicle's control mechanism would be held responsible. As with human cloning technology, legislators have yet to bring these new developments within the scope of the law.
The success of artificial intelligence might mean the end of the human race. Almost any technology has the potential to cause harm when it falls into the wrong hands, but with artificial intelligence and robotics a new problem arises: the wrong hands may belong to the technology itself.
Warnings about the danger posed by robots or robotic humanoid cyborgs that get out of control have been the plot of countless works of science fiction. Among the earliest examples are Mary Shelley's Frankenstein, or the Modern Prometheus and Karel Čapek's play R.U.R. (1921), in which robots conquer the world. In the cinema, the theme appears in The Terminator (1984), which combines the cliché of robots conquering the world with a time-travel story, and in The Matrix (1999), which combines the robots-conquer-the-world plot with the brain-in-a-vat plot.
Apparently, robots are the protagonists of so many conquer-the-world stories mainly because they embody the unknown, just as the witches and ghosts of fairy tales frightened people in earlier eras. But do robots really pose a more credible threat than witches and ghosts? If robots are properly designed as agents that respect the interests of their owners, they do not: robots that evolve through incremental improvement of current designs will serve their masters rather than fight against them.
People sometimes use their intelligence in aggressive ways because natural selection has given them certain innate aggressive tendencies. But the machines we build need not be innately aggressive, unless we decide to build them that way. On the other hand, it is possible that computers will achieve a kind of conquest by serving us and becoming indispensable, just as automobiles have, in a certain sense, conquered the industrialized world. One scenario, however, deserves further analysis. I. J. Good wrote the following in one of his works.
Let us define an ultraintelligent machine as one that can far surpass any person, however clever, in every area of intellectual activity. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; this would undoubtedly lead to an "intelligence explosion," and human intelligence would be left far behind. Thus the first ultraintelligent machine is the last invention that humans need ever make, provided that the machine is docile enough to tell us how to keep it under control.
To this "intelligence explosion" the mathematics professor and science fiction author Vernor Vinge gave another name, the technological singularity; he wrote: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Good and Vinge (and many others) correctly point out that the curve of technological progress is currently growing exponentially (it suffices to recall Moore's law). However, it would be too bold to predict that this curve will continue upward toward a singularity of near-infinite growth. So far, every technology has followed an S-shaped curve, in which exponential growth eventually levels off.
Vinge is concerned about, and even afraid of, the coming singularity, but other computer scientists given to making predictions welcome it. Hans Moravec, in his book Robot: Mere Machine to Transcendent Mind, predicts that robots will match human intelligence within 50 years and then surpass it. He writes the following.
Rather soon they could displace us from existence. But I am not greatly alarmed by this possibility, because I consider these future machines to be our offspring, "mind children" built in our image and likeness; they are ourselves, in a more potent form. Like the biological children of previous generations, they will embody humanity's best hopes for a long-term future. This obliges us to give them every possible advantage and then to step aside once we can no longer contribute anything useful.
Ray Kurzweil, in his book The Age of Spiritual Machines, predicts that by 2099 there will be "a strong trend toward the merging of human thinking with the world of machine intelligence that the human species originally created. There will no longer be any clear distinction between humans and computers." A new word, transhumanism, has even appeared to designate an active social movement of those who are preparing for such a future.
Suffice it to say that such questions present a challenge for most moral theorists, who take the preservation of human life and of the human species itself to be a good thing.
Finally, let us consider the problem from the robot's point of view. If robots acquire consciousness, then treating them simply as "machines" (for example, taking them apart) might be immoral. The robots themselves must also act morally: we would need to program into them a theory that allows them to judge what is good and what is bad. The rights and obligations of robots have been considered by science fiction writers, beginning with Isaac Asimov. The well-known Spielberg film A.I. is based on a story by Brian Aldiss about an intelligent robot that was programmed to believe it was human and could not understand why its owner-mother would one day inevitably abandon it. Both the story and the film argue for the need for a civil rights movement for robots.