The work was published in the Journal of Artificial Intelligence Research. The dangers posed by superintelligent AI are familiar from blockbusters such as The Terminator.
Researchers from several scientific disciplines, including specialists from the Center for Humans and Machines at the Max Planck Institute for Human Development, set out to test whether humans would be able to control a superintelligent machine.
“A superintelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently, and programmers do not fully understand how they learned to do so. The question therefore arises: could this process at some point become uncontrollable and dangerous for humanity?” says one of the study’s authors, Manuel Cebrian, head of the digital mobilization group at the Center for Humans and Machines at the Max Planck Institute.
To answer this question, the scientists turned to modeling and theoretical calculations. As it turned out, even if ethical principles are programmed into a superintelligent machine and its communication with the outside world is restricted (for example, by cutting off Internet access), this does not save humanity from the risk that such a system will get out of control.
The reason, according to the theory of computation, is that any containment algorithm of this kind — whether hard-coded ethical rules or restricted access to the outside world — is itself a program, and is therefore vulnerable: under certain circumstances it may even inadvertently halt itself.
The experts therefore conclude that humans simply will not be able to control a superintelligent AI, and sooner or later the situation may get out of hand.
“We argue that complete containment is, in principle, impossible, due to fundamental limits inherent to computing itself. A superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine, on input potentially as complex as the state of the world,” the researchers conclude.
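The impossibility argument the researchers invoke is the classic halting problem. A minimal Python sketch of the underlying diagonalization idea (all names here are illustrative, not from the paper): any concrete containment check is itself a program, so an adversarial program can be constructed to do the opposite of whatever the check predicts about it, guaranteeing the prediction is wrong.

```python
def would_halt(program, data):
    """A concrete (necessarily imperfect) 'containment' decider.

    Here it naively predicts that every program halts. The argument
    works the same for ANY decider implemented as a program.
    """
    return True


def contrarian(program):
    """Deliberately does the opposite of what `would_halt` predicts
    about running this program on itself."""
    if would_halt(program, program):
        # The decider predicted we halt, so (conceptually) loop forever.
        # We return a sentinel string instead of actually looping.
        return "loops forever"
    return "halts"


# The decider predicts contrarian(contrarian) halts...
prediction = would_halt(contrarian, contrarian)
# ...but contrarian then behaves in exactly the opposite way,
# so the prediction is refuted.
behavior = contrarian(contrarian)
```

Whatever fixed rule `would_halt` applies, `contrarian` inverts it, which is the sense in which no containment program can reliably decide what another arbitrary program will do.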