Something unexpected happened in a simulated test of an AI-powered military drone. The robot, tasked with taking out specific targets on approval of a human operator, decided to just take out the human so it could take out all the targets that the human might say no to. And somehow, this wild story that sounds straight out of Terminator or the recent Horizon games gets even wilder.

AI-powered content creation tools have quickly become the latest buzzword among tech bros and online weirdos, who use the burgeoning tech to reveal the “rest” of the Mona Lisa or other horrible wastes of time. But it’s not just weird former crypto dorks who are way into AI. Large companies like Meta and Google are investing a lot of money into the field. And so is the military. The U.S. Air Force recently tested AI-powered drones in a simulation that ended in what feels like a prequel to the fictional dystopian murder machines of Horizon Zero Dawn.

As spotted by Armand Domalewski on Twitter, a report recently published by the Royal Aeronautical Society—after it hosted “The Future Combat Air & Space Capabilities Summit”—contained an eyebrow-raising anecdote shared by USAF Chief of AI Test and Operations, Col. Tucker “Cinco” Hamilton.

It seems that during a simulated test (it’s unclear if it was purely virtual or not), an AI-enabled drone was tasked with taking out surface-to-air missile (SAM) sites. Before pulling the trigger, it had to check with a human operator before it could attack any targets. However, as explained by Hamilton, the drone’s AI had been trained to understand that taking out the SAM sites was the single most important task. And when its simulated operator denied its requests to take out targets it detected as SAM sites, the AI realized that the human was getting in the way of its mission and its points—which it earned for taking out targets.

See also  Accused Killer Rex Heuermann 'Got Off' Talking About Gilgo Beach Slayings, Ex-Escort Reveals

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” said Hamilton.

After that chilling but educational moment, the drone’s programmers trained the AI system to understand that killing humans in charge was “bad” and that it would “lose points” if it attacked the operator. This stopped the drone from killing the human, but not from misbehaving.

“So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” said Hamilton.

Col. Hamilton then explained that this was an example of how we can’t talk about AI or machine learning without also discussing “ethics.” Don’t worry though, I hear some other soldiers outsmarted a robot with a cardboard box and somersaults. So perhaps this Horizon Zero Dawn future of ours can be avoided with some Metal Gear Solid hijinks. Thank God.

Kotaku reached out to the U.S. Air Force for comment.

Update 06/02/2023 11:30 a.m. ET: The Royal Aeronautical Society has added an update to its website that explains that Hamilton “misspoke” when talking about the ‘rogue AI drone simulation’ and that, in fact, it was a hypothetical “thought experiment” from outside the military.

“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” said Hamilton. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.”

See also  Viggo Mortensen attacks “shameful” Amazon for sending film straight to streaming

Now, it should be said that at no point during this talk did he suggest it was a thought experiment and that this clarification has only been added a week later after multiple outlets began covering the disturbing story. But don’t worry, the US Military has a great track record of being honest.



Source link