The idea of artificial intelligence overthrowing humanity has been discussed for decades, and in 2021 scientists delivered their verdict on whether a high-level computer superintelligence could be controlled.
The catch, the scientists said, is that controlling a superintelligence far beyond human comprehension would require building a simulation of that superintelligence which we can analyze and control. But if we are unable to comprehend it, creating such a simulation is impossible.
The study was published in the Journal of Artificial Intelligence Research.
Rules such as “do not harm humans” cannot be set if we do not understand the kinds of scenarios an AI is likely to come up with, the scientists say. Once a computer system operates at a level beyond the scope of our programmers, no further limits can be set.
“Superintelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics.’ This is because a superintelligence is multifaceted and therefore potentially capable of mobilizing a diversity of resources to achieve objectives that are potentially incomprehensible to humans, let alone controllable,” the researchers write.
Part of the team’s reasoning draws on the halting problem posed by Alan Turing in 1936: given a program and an input, will the program eventually halt? Halting means the program accepts or rejects the input and then terminates, rather than running forever in an infinite loop.
As Turing proved through some clever math, while we can answer that question for certain specific programs, it is logically impossible to find a general method that answers it for every potential program that could ever be written. That brings us back to AI, which in a superintelligent state could feasibly hold every possible computer program in its memory at once.
Any program written to stop an AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or not. It is mathematically impossible for us to be absolutely sure either way, which means the AI is not containable.
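Turing's impossibility argument can be sketched in a few lines of Python. Suppose, hypothetically, a perfect halting checker `halts(program, input)` existed (the function and program names here are illustrative, not from the study); the classic diagonalization construction below shows why it cannot.

```python
# A sketch of Turing's diagonalization argument (illustrative only).

def halts(program, program_input):
    """Hypothetical oracle: return True if program(program_input)
    would eventually halt, False if it would loop forever.
    Turing proved no such total, always-correct function can exist."""
    raise NotImplementedError("undecidable in general")

def contrary(program):
    """Do the opposite of whatever halts() predicts about
    the program run on its own source."""
    if halts(program, program):
        while True:       # prediction says "halts", so loop forever
            pass
    return "halted"       # prediction says "loops", so halt at once

# Now ask: does contrary(contrary) halt?
#  - If halts(contrary, contrary) returns True, contrary loops forever.
#  - If it returns False, contrary halts immediately.
# Either answer contradicts the oracle, so no such oracle exists.
```

By the same logic, a "containment algorithm" that must decide, for every possible program a superintelligence might run, whether that program will cause harm and then halt it, would itself be a decider of an undecidable problem.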
The scientists said the alternative to teaching the AI ethics and telling it not to destroy the world, something no algorithm can absolutely guarantee, is to limit the capabilities of the superintelligence.
The study rejected this idea, too, suggesting that it would limit the reach of the artificial intelligence; the argument goes that if we are not going to use it to solve problems beyond the scope of humans, then why create it at all?
“If we are going to push ahead with artificial intelligence, we might not even know when a superintelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we’re going in,” the scientists noted.