
Artificial intelligence as a positive and negative factor in global risk

Lecture



This is the concluding part of the material on artificial intelligence and risk.

...

talk about safety? I do not have privileged access to anyone else’s psychology, but I will briefly discuss this question, drawing on my personal experience of such conversations.

The field of AI research has adapted to the experience it has lived through over the past fifty years: in particular, to a pattern of grand promises, especially of human-level abilities, followed by embarrassing public failures. It is unfair to attribute this embarrassment to AI as such; wiser researchers who made no grand promises did not see their conservatism celebrated in the newspapers. Yet the unfulfilled promises come immediately to mind, both inside and outside the field, whenever AI is mentioned. The culture of AI research has adapted to this condition: there is a taboo on talking about human-level abilities, and an even stronger taboo against anyone who predicts specific abilities they have not yet demonstrated in running code.

I have gotten the impression that anyone who claims to be researching Friendly AI is implicitly claiming that their AI project is powerful enough to need Friendliness.

It should be obvious that this is neither logically nor philosophically true. If we imagine someone creating a genuinely mature AI that is powerful enough to need Friendliness, and if, in accordance with our desired outcome, that AI actually is Friendly, then someone must have been working on Friendly AI for years and years beforehand. Friendly AI is not a module you can invent on the spot, at the exact moment it is needed, and then bolt onto an existing project whose polished design otherwise remains unchanged.

The field of AI research has a number of techniques, such as neural networks and evolutionary programming, which have grown in power through decades of incremental work. But neural networks are opaque — the user has no idea how a network arrives at its decisions — and cannot easily be rendered transparent; the people who invented and polished neural networks were not thinking about the long-term problems of Friendly AI. Evolutionary programming (EP) is stochastic and does not precisely preserve the optimization goal in the generated code; EP gives you code that does what you asked for — most of the time, under the tested conditions — but that code may also do something else on the side. EP is a powerful, increasingly mature technique that is by its nature unsuited to the purposes of Friendly AI. Friendly AI, as I envision it, requires recursive self-improvement cycles that preserve the optimization goal exactly. A toy sketch of the EP point follows below.
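A toy sketch may make the point about evolutionary programming concrete. Everything here is hypothetical illustration, not a real EP system: selection sees only the fitness function you actually wrote, evaluated on the cases you actually test, so the winning program can satisfy the letter of the goal while diverging from its intent everywhere else.

```python
import random

# Hypothetical toy model, not a real EP system. We *intend* the function
# 2*x, but fitness only checks x = 0 and x = 1, so any quadratic agreeing
# with 2*x on those two points scores perfectly -- whatever it does on
# untested inputs.

TEST_INPUTS = [0, 1]                     # the only conditions we test

def intended(x):                         # the goal we have in mind
    return 2 * x

def run(genes, x):
    """A 'genome' (a, b, c) encodes the program a*x^2 + b*x + c."""
    a, b, c = genes
    return a * x * x + b * x + c

def fitness(genes):
    # Selection pressure comes only from the tested cases.
    return -sum(abs(run(genes, x) - intended(x)) for x in TEST_INPUTS)

def mutate(genes):
    g = list(genes)
    g[random.randrange(3)] += random.choice([-1, 1])
    return tuple(g)

random.seed(0)
genes = (5, -3, 4)                       # arbitrary starting genome
for _ in range(5000):
    child = mutate(genes)
    if fitness(child) >= fitness(genes):
        genes = child                    # hill-climb; drift along plateaus

print("evolved genome:", genes)
print("on tested inputs:", [run(genes, x) for x in TEST_INPUTS])  # matches 2*x
print("on x = 10:", run(genes, 10), "vs intended:", intended(10)) # may diverge
```

The evolved genome reproduces 2*x perfectly on the tested inputs, yet off the test set it computes whatever quadratic the random walk happened to settle on: the fitness criterion was preserved, the intention behind it was not.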

The most powerful modern AI techniques, as they have been developed, polished, and improved over time, are fundamentally incompatible with the requirements of Friendly AI as I now understand them. The Y2K problem — which proved very expensive to fix, though it was not a global catastrophe — likewise stemmed from a failure to foresee tomorrow’s design requirements. The nightmare scenario is that we find ourselves stuck with a catalog of mature, powerful, publicly available AI techniques that combine to produce non-Friendly AI, but that cannot be used to build Friendly AI without redoing three decades of work from scratch. In the field of AI research it is already daring enough to openly discuss human-level AI, given the history of such discussions. There is a temptation to congratulate oneself on that much boldness, and then stop. After such daring, discussing transhuman AI seems ridiculous and unnecessary. (Though there is no special reason why an AI should climb slowly up the scale of intelligence and then halt forever at the human point.) Daring to discuss Friendly AI, as a precaution against global risk, is two levels bolder than the level of daring at which one merely looks edgy and transgressive.

There is also a reasonable objection which agrees that Friendly AI is an important problem but worries that, given our current understanding, we are simply not at the level to tackle it: if we try to solve the problem right now, we will only fail, or produce anti-science instead of science. And this objection is worth worrying about. It seems to me, however, that the necessary knowledge already exists — that it is possible to study a sufficiently large body of existing knowledge and then tackle Friendly AI without smashing face-first into a brick wall — but this knowledge is scattered across many disciplines: decision theory and evolutionary psychology and probability theory and evolutionary biology and cognitive psychology and information theory and the field traditionally known as “Artificial Intelligence”... Nor is there any curriculum that has prepared a large pool of existing scientists to work in the field of Friendly AI.

The “ten-year rule” for geniuses, confirmed in fields ranging from mathematics to tennis, says that no one achieves outstanding results without at least ten years of preparation (Hayes, 1981). Mozart began composing symphonies at the age of four, but they were not Mozart symphonies — it took another thirteen years before Mozart began writing outstanding symphonies (Weisberg, 1986). My own experience with the learning curve reinforces this worry. If we need people who can make progress on Friendly AI, they must begin training themselves, full-time, years before they are suddenly needed.

If the Bill and Melinda Gates Foundation were to allocate a hundred million dollars tomorrow for the study of Friendly AI, thousands of scientists would begin rewriting their grant proposals to make them look relevant to Friendly AI. But they would not be genuinely interested in the problem — witness that they showed no curiosity about it before someone offered to pay them. While general AI is unfashionable and Friendly AI is completely off the radar, we can assume that anyone who speaks about the problem is genuinely interested in it. If you throw too much money at a problem whose field is not yet ready to solve it, the excess money produces anti-science rather than science — a disordered heap of false solutions.

I cannot regard this conclusion as good news. We would be much safer if the problem of Friendly AI could be solved simply by piling up human bodies and silver. But as of 2006 I strongly doubt that this would help — the field of Friendly AI, and the field of AI itself, is in too much chaos. Yet if someone claims that we cannot make progress on Friendly AI, that we know too little, we should ask how long that person has studied before reaching this conclusion. Who can say what science does not know? There is too much science for any one person to learn. Who can say that we are not ready for a scientific revolution, arriving ahead of schedule and unexpectedly? And if we cannot make progress on Friendly AI because we are not ready, that does not mean we do not need Friendly AI. The two statements are not equivalent at all!

And if we find that we cannot make progress on Friendly AI, we must work out how to escape that situation as quickly as possible! There is no guarantee whatsoever that, just because we cannot manage a risk, the risk will obligingly go away.

And if hidden talents among young scientists become interested in Friendly AI of their own accord, then I think it would be very useful, from the standpoint of the human species, for them to be able to apply for a multi-year grant to study the problem full-time. Some funding for Friendly AI is needed for this to work — considerably more funding than is available now. But I think that at these early stages a Manhattan Project would only increase the proportion of noise in the system.


Conclusion


It once became clear to me that modern civilization exists in an unstable state. I. J. Good suggested that an intelligence explosion describes a dynamically unstable system, like a pen balanced precisely on its tip. If the pen stands exactly vertical it may remain upright, but if it tilts even slightly from the vertical, gravity pulls it further in that direction and the process accelerates. In the same way, smarter systems need less and less time to make themselves smarter still.
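As a purely illustrative caricature of Good’s dynamic (the constants below are arbitrary assumptions, not claims about real systems), one can model each round of self-improvement as multiplying capability while shrinking the time needed for the next round:

```python
# Illustrative caricature of I. J. Good's unstable dynamic; the constants
# are arbitrary assumptions, not claims about real systems.

capability = 1.0     # hypothetical capability level, arbitrary units
elapsed = 0.0        # total time spent, arbitrary units

for round_num in range(1, 11):
    time_for_round = 1.0 / capability    # smarter systems improve faster
    elapsed += time_for_round
    capability *= 2.0                    # each round doubles capability
    print(f"round {round_num:2d}: capability {capability:7.1f}, "
          f"elapsed {elapsed:.3f}")

# elapsed approaches 2.0 (the sum 1 + 1/2 + 1/4 + ...) while capability
# grows without bound: once tipped off the balance point, the process
# runs away instead of settling.
```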

A dead planet, spinning lifelessly around its star, is also stable. Unlike an intelligence explosion, extinction is not a dynamic attractor — there is a large gap between “almost extinct” and “extinct.” Even so, total extinction is stable.

Must not our civilization eventually settle into one of these two modes? Logically, the reasoning above contains holes. For example, the Fallacy of the Giant Cheesecake: minds do not blindly wander between attractors; they have motives. Even so, I suspect our alternatives come down to becoming smarter or becoming extinct.

Nature is not cruel, but indifferent; that neutrality often looks indistinguishable from outright hostility. Reality throws you one choice after another, and when you meet a challenge you cannot handle, you suffer the consequences. Nature often poses grossly unfair demands, even in tests where the penalty for error is death: how could a tenth-century medieval peasant invent a cure for tuberculosis? Nature does not match its challenges to your skills, your resources, or how much free time you have to think about the problem. And when you meet a lethal challenge that is too difficult for you, you die. It may be unpleasant to contemplate, but this has been the reality for humans for thousands upon thousands of years. The same thing can happen to the entire human species, if the species meets an unfair challenge.

If human beings did not age, so that hundred-year-olds had the same death rate as fifteen-year-olds, people would still not be immortal. We would continue to exist only for as long as probability permitted. To live even a million years as a non-aging human in a world as risky as ours, you would have to drive your annual probability of death down to nearly zero. You must not drive a car; you must not fly; you must not cross the street even after looking both ways, for it is still too great a risk. Even if you abandoned all thought of enjoyment and lived solely to preserve your life, you could not plot a million-year course free of obstacles. It would be impossible not physically but cognitively.
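The arithmetic behind this is easy to check. As a back-of-the-envelope sketch (the annual risks below are hypothetical round numbers, not actuarial data): with a constant annual probability of death p, the chance of surviving N years is (1 − p)^N, which collapses toward zero over a million years unless p is almost exactly zero.

```python
# Back-of-the-envelope sketch; the annual risks below are hypothetical
# round numbers, not actuarial data.

def survival_probability(p_annual, years):
    """Chance of surviving `years` at constant annual death risk `p_annual`."""
    return (1.0 - p_annual) ** years

YEARS = 1_000_000
for p in (1e-3, 1e-4, 1e-6, 1e-9):
    print(f"annual risk {p:.0e}: "
          f"P(survive {YEARS:,} years) = {survival_probability(p, YEARS):.3e}")

# annual risk 1e-03: ~0        (about exp(-1000); underflows to zero)
# annual risk 1e-04: ~3.7e-44  (about exp(-100))
# annual risk 1e-06: ~0.37     (about exp(-1))
# annual risk 1e-09: ~0.999    -- "nearly zero" really must mean nearly zero
```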

The species Homo sapiens does not age, but it is not immortal. Hominids have survived this long only because there were no hydrogen-bomb arsenals, no spacecraft to steer asteroids toward Earth, no military biolabs to breed superviruses, no annually recurring prospect of nuclear war or nanotechnological war or rogue AI. To survive for any appreciable length of time, we must drive each of these risks toward zero. “Fairly good” is not good enough to live another million years.

This looks like an unfair challenge. Such questions have historically been beyond the competence of human organizations, however hard they tried. For decades the United States and the USSR avoided nuclear war, but not flawlessly; there were very close calls, such as the Cuban Missile Crisis of 1962. If we assume that future minds will display the same mixture of foolishness and wisdom, the same mixture of heroism and selfishness, as the minds we read about in the history books, then the game of global risk is practically over: it was lost from the start. We might live another decade, even another century, but not another million years.

But human minds are not the limit of the possible. Homo sapiens is the first general intelligence. We were born at the very beginning of things, at the dawn of mind. If we succeed, future historians will look back and describe the present world as an awkward adolescence, when humanity was smart enough to create terrible problems for itself but not yet smart enough to solve them.

But before we can pass through this adolescence, we must, as adolescents, confront an adult problem: the challenge of smarter-than-human intelligence. This is the way out of the high-mortality phase of the life cycle, the way to close our window of vulnerability; it is also perhaps the single most dangerous risk we face. Artificial Intelligence is one road into this challenge, and I hope it is the road we will take and follow through to the end. I think that, in the end, it will prove easier to build a 747 from scratch than to scale up an existing bird or graft jet engines onto it.

I do not want to downplay the colossal responsibility of attempting to build, with precise purpose and design, something smarter than ourselves. But let us pause and recall that intelligence is far from the first thing human science has encountered that seemed hard to understand. The stars were once a mystery, and so were chemistry and biology. Generations of researchers tried and failed to understand those mysteries, and they acquired the reputation of being unsolvable by mere science. Once upon a time, nobody understood why some matter is inert and lifeless while other matter pulses with blood and vitality. Nobody knew how living matter reproduced itself, or why our hands obey our mental commands. Lord Kelvin wrote:

“The influence of animal or vegetable life on matter is infinitely beyond the range of any scientific inquiry hitherto entered on. Its power of directing the motions of moving particles, in the demonstrated daily miracle of our human free-will, and in the growth of generation after generation of plants from a single seed, are infinitely different from any possible result of the fortuitous concourse of atoms.” (Quoted in MacFie, 1912.)

All scientific ignorance is hallowed by antiquity. Each and every absence of knowledge dates back to the dawn of human curiosity; and the hole lasts through whole epochs, seemingly unchanged, until someone fills it. I think that even fallible human beings are capable of succeeding at building Friendly AI — but only if intelligence ceases to be a sacred mystery to us, as life was for Lord Kelvin. Intelligence must cease to be any kind of mysticism, sacred or otherwise. We must carry out the creation of Artificial Intelligence as the exact application of an exact art. And then, perhaps, we will win.
