Wednesday, 12 Dec 2018
Business

DARPA on the dangers of artificial intelligence: "It's not one of those things that keeps me awake at night"

Artificial intelligence remains predictable and will have to become far more sophisticated before it poses a serious threat to humans, according to the head of the Defense Advanced Research Projects Agency.

In a Q&A session with Washington Post columnist David Ignatius on Thursday, DARPA director Steven H. Walker said that artificial intelligence remains "a very fragile ability" with little independent capacity.

"At least in the Defense Department today, we do not see machines doing anything by themselves," he said, noting that agency researchers are focusing intensely on creation of a "man-machine" partnership. "I think we are far from a generalized AI, even in the third wave of what we pursue."

"It's not one of those things that keeps me awake at night," he added, referring to the dangers posed by AI.

Walker's comments come amid bitter controversy over the military's use of artificial intelligence. In June, thousands of Google employees signed a petition protesting the company's role in a US Department of Defense project that uses artificial intelligence.

Google ultimately pulled out of the program, called Project Maven, an initiative that uses AI to automatically tag cars, buildings and other objects in video recorded by drones flying over conflict zones. Google employees accused the military of mobilizing AI to kill with greater efficiency, but military leaders countered that the technology would be used to keep military personnel out of unnecessary danger, thereby saving lives.

"Without a doubt, this caused a lot of consternation within the DOD," said Bob Work, the former Assistant Secretary of Defense who had helped launch Project Maven last year in Washington. Post, in October, with Tony Romm. "Google has created a great moral hazard by stating that it does not want to use any of its artificial intelligence technologies to take human lives, but they have not said anything about the lives that could be saved."

Several months after Google's withdrawal from Project Maven, DARPA announced a multiyear investment of more than $2 billion in new and existing programs focused on developing AI.

Renowned technologists such as Elon Musk and Bill Gates, along with British inventor Clive Sinclair and the late theoretical physicist Stephen Hawking, have said that humanity is wandering into dangerous territory in its seemingly blind pursuit of AI.

Musk has likened AI to "an immortal dictator" and "the devil," and Hawking warned that it "could spell the end of the human race."

In his remarks on Thursday, Walker struck a reassuring tone, saying that DARPA researchers have found their machines perform "rather badly" when asked to reason flexibly beyond the information they were trained on via large datasets.

The goal, he said, is not just to give machines the ability to understand what they see in their environment, but to give them the ability to adapt to that environment as a human would.

For example: an AI might be able to identify an image of a cat sitting on a suitcase, but the machine still does not understand that the cat could be placed inside the suitcase, and that you probably would not want to do such a thing. Humans, by contrast, understand both instinctively.

"How can you give machines that kind of sense, this is the next place DARPA will be heading," Walker said. "It will be crucial if we really want machines to be partners with humans, not just tools."
