August 21, 2017 – Listening to the senior military staff describe the technology and its capabilities gave me the creeps. I felt like I was in a bad “Star Wars” episode, or was watching the movie “Robocop.” Robots are becoming an essential part of the American, Russian and Chinese militaries. They include land-based, seaborne and airborne reconnaissance systems, and they are one step removed from being loaded with weapons capable of killing people.
I’m not alone in feeling that this is a road we shouldn’t go down. A group of 116 engineers and scientists from 26 countries feels exactly the same way. One of them is Elon Musk, of SpaceX and Tesla fame. Together they have composed an open letter that states:
“We believe that AI has great potential to benefit humanity in many ways and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”
The letter further notes that just as most chemists and biologists have no interest in building chemical or biological weapons, most artificial intelligence (AI) researchers and programmers have no interest in building AI weapons. The goal is to create benefits for society, not assault weapons for the battlefield. The fear is that AI weapons will open a Pandora’s box that will be exceedingly difficult to close, with armed conflicts escalating to new heights.
There is an argument to be made that human warfare has already escalated from the age of swords, spears, and bows and arrows to our current mechanized combat, where every soldier in the field carries enough firepower to wipe out a city block. How different would a machine carrying the same weaponry be? After all, war is about killing the other guy, the other army, and crushing the nation you are fighting. There is no civility in the practice.
So if robots were to fight for us, would it be better or worse? The military leadership in the United States, Russia, China, and other countries experimenting with robots sees AI as a way of saving lives, particularly those of soldiers on the field of battle. A robot can be sent on a suicide mission, and if it doesn’t survive, no one has died.
Musk is already on record stating that AI is an existential threat to humanity. He fears that arming AI raises that threat even further.
To the credit of the American soldiers and officers who appeared on the “60 Minutes” episode, they, too, felt that weaponized AI on the battlefield must never be allowed to fire without human oversight and control.
But as in all good science-fiction stories, one asks the “what if” question:
“What if the AI learns to bypass its human controllers?”
That indeed is what Musk and other scientists and engineers fear.
States Ryan Gariepy, founder of Clearpath Robotics: “Autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”
Clearpath has tried to keep its robots from becoming weapons systems. But its platforms are “pure vanilla” and contain the smarts, if not the armaments, to go rogue. Every Clearpath robot is situationally aware, self-navigating, and capable of independent decision-making. Today the robots are used in mining, agriculture, environmental monitoring, transportation, and defense. In defense, the company has so far restricted its technology to ground support and search and rescue. No weapons systems. But it wouldn’t take much to mount a gun on a Clearpath Warthog on land or a Clearpath Heron at sea.