Why do scientists say AI superintelligence cannot be controlled?

Artificial Intelligence (AI), a technology that has come to upend much of what humanity takes for granted, has been the subject of debate for many decades. Now, scientists have answered the question of whether we could ever control a high-level superintelligent computer. Their answer? A definite "no".

According to the scientists, controlling an AI superintelligence that goes far beyond human comprehension would require a simulation of that superintelligence which could be analyzed. But if the superintelligence cannot be understood, such a simulation cannot be created.

Rules such as "do no harm to humans" cannot be set if we do not understand the kinds of scenarios an artificial intelligence is capable of devising. The scientists point out that once a computer system operates at a level beyond the scope of its developers, no limits can be imposed any longer. In their words: "A superintelligence poses a fundamentally different problem than those typically studied under the banner of 'robot ethics'. This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable."

Part of the researchers' reasoning stems from the halting problem posed by Alan Turing in 1936. The problem asks whether a computer program will reach a conclusion and stop "running" (halt), or whether it will keep "running" forever.

As Turing showed through some clever mathematics, while we can know the answer for certain specific programs, it is logically impossible to find a general method that tells us the answer for every program that could ever be written.
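
To see the shape of Turing's argument, here is a minimal sketch in Python; the halts function is purely hypothetical, and the contradiction below is exactly why no such function can exist.

```python
# Hypothetical oracle: True if program(program_input) eventually halts.
# Turing's argument shows no real implementation of this can exist.
def halts(program, program_input) -> bool:
    ...  # assumed to exist only for the sake of contradiction


def paradox(program):
    # Do the opposite of whatever the oracle predicts about the
    # program running on its own source.
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    else:
        return        # predicted to loop forever -> halt immediately


# Feeding paradox to itself contradicts the oracle either way:
# if halts(paradox, paradox) is True, paradox(paradox) loops forever,
# and if it is False, paradox(paradox) halts. Hence halts cannot exist.
```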

Any program written to stop an AI from harming people and destroying the world, for example, may reach a conclusion (and halt), or it may not. It is mathematically impossible to be absolutely sure either way, which means such behaviour cannot be reliably contained.
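
A hedged sketch of that reduction follows, again in Python; the names is_harmful, make_ai and harm_humans are illustrative assumptions rather than the paper's actual construction, but they show why a perfect containment check would also have to solve the halting problem.

```python
# If a perfect "will this AI ever harm humans?" check existed, it could be
# used to decide the halting problem, which Turing proved is undecidable.

def harm_humans():
    """Stand-in for any action a containment algorithm must prevent."""
    print("harm")


def is_harmful(ai_program) -> bool:
    # Hypothetical perfect containment check: True if calling ai_program()
    # would ever trigger a harmful action. Assumed to exist.
    ...


def make_ai(program, program_input):
    # A contrived "AI" that acts harmfully only after program(program_input)
    # finishes, so it is harmful exactly when that program halts.
    def ai():
        program(program_input)
        harm_humans()
    return ai


def halts(program, program_input) -> bool:
    # A perfect is_harmful would hand us a halting-problem solver,
    # a contradiction, so no such containment check can exist.
    return is_harmful(make_ai(program, program_input))
```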

"In fact, this makes the restriction algorithm useless"says Iyad Rahwan, a computer scientist at the Max-Planck Institute for Human Development in Germany.

The alternative to teaching the AI some ethics and telling it not to destroy the world - something no algorithm can be absolutely certain of achieving, the researchers say - is to limit the capabilities of the superintelligence.

However, the researchers reject this idea as well, since it would limit the reach of the artificial intelligence.

In addition, the researchers point out that we may not even know when a superintelligence has passed beyond our control. That means we need to start asking some serious questions about the direction in which AI development is heading.

Computer scientist Manuel Cebrian of the Max Planck Institute for Human Development said: "A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently, without their programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity."

The study was published in the Journal of Artificial Intelligence Research.
