A New Study Finds the Limits of Humans' Ability to Control AI

Humans could not stop an artificially intelligent machine from making its own decisions or predict what decisions it might make, according to a recent study from the Max Planck Institute's Center for Humans and Machines. Study co-author and research group leader Manuel Cebrian acknowledges that the idea of a human-built machine its creators cannot understand may sound absurd to some, but he explains that such technology already exists.

“There are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity”, says study co-author Manuel Cebrian, per Business Insider.

Superintelligence poses different barriers than most subjects of “robot ethics” because of its ability to adapt beyond the original scope of its programming. To study the problem, the research group conceived of a theoretical “containment algorithm” to test whether an artificial intelligence could be controlled by programming it never to harm humans and to halt any action it deemed harmful. However, the researchers found that within the current bounds of computing, an airtight algorithm to this effect cannot be created; as the research group states, “the containment problem is incomputable.”

“If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable”, explains Director of the Center for Humans and Machines Iyad Rahwan.
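The argument Rahwan describes is a variation on the classic halting problem: any checker that claims to predict whether an arbitrary program will act harmfully can be defeated by a program that consults the checker and then does the opposite. The sketch below illustrates that self-referential trap in Python; `make_paradox` and the stub checkers are hypothetical names for illustration, not anything from the study itself.

```python
# Illustrative sketch of the diagonalization behind "containment is
# incomputable". A perfect checker `would_act_harmfully(program)` is
# assumed to predict whether running `program` leads to a harmful act.
# No such checker can exist, because the program below inverts its verdict.

def make_paradox(would_act_harmfully):
    """Build a program that defeats any claimed containment checker."""
    def paradox():
        if would_act_harmfully(paradox):
            return "safe"     # checker predicted harm, so it was wrong
        else:
            return "HARMFUL"  # checker predicted safety, so it was wrong
    return paradox

# A stub checker that always answers "safe" (False) is refuted:
always_safe = lambda prog: False
print(make_paradox(always_safe)())     # -> HARMFUL

# A stub checker that always answers "harmful" (True) is also refuted:
always_harmful = lambda prog: True
print(make_paradox(always_harmful)())  # -> safe
```

Whatever verdict the checker returns, the constructed program behaves the other way, so no algorithm can decide the question for all programs. That is the intuition, in miniature, behind the study's incomputability result.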

Extending this line of reasoning, we may not be able to predict when super-intelligent machines will emerge, or even know when they have arrived. Scientists, including at-times-controversial figures like Elon Musk, have warned in recent years about the more nefarious potentials of powerful AI, and these questions and fears are hardly new among casual followers of tech news. Innovation in the sphere nonetheless continues, with recent projects like Mercedes-Benz’s 56-inch artificial intelligence hub.