According to the Washington Post, such a threat is considered real by Émile Torres, a philosopher and expert in assessing the likelihood of global catastrophes.
Torres is convinced that by 2050 a so-called artificial superintelligence (ASI) will appear in the world – a machine whose capabilities will surpass those of humans.
“The creation of artificial superintelligence will lead to technological breakthroughs in a variety of areas, but we will also have to deal with very serious dangers.
“If people entrust the superintelligence with truly important matters, there is a high probability that the machine will destroy humanity or do it great harm. Very sophisticated AI control algorithms will be needed, but what guarantee is there that a computer supergenius will not be able to bypass them? Even today’s AI algorithms have aspects that elude our understanding, and in the future people will understand less and less about how powerful AI works,” Torres warned.
The expert believes that a global catastrophe caused by AI could occur if people entrust the machine with tasks that are too complex and abstract. For example, if they instruct an AI to “establish world peace,” the computer might simply destroy all people – no people, no one left to fight.
It’s unclear whether humanity will ever be prepared for superintelligence, but we’re certainly not ready now. Given our global instability and still-nascent grasp of the technology, adding ASI to the mix would be like lighting a match next to a fireworks factory.
Research on artificial intelligence must slow down, or even pause. And if researchers won’t make that decision themselves, governments should make it for them.