
The Evolution of Warfare in the 21st Century – Part Two: AI on the Battlefield

In Part One of this series on warfare’s evolution in the 21st century, we described how inexpensive drones were altering conditions for both sides in the Russia-Ukraine war. Today, we look at the rise in the use of artificial intelligence (AI) to support military actions.

At a summit in The Hague last week, representatives from 60 countries met to discuss the legal and ethical consequences of using AI in weapon and defence systems. The two-day meeting, called REAIM (Responsible AI in the Military Domain), ended with a call to action to identify which uses of AI for the military and in war would be collectively deemed acceptable.

AI Is in Use in the Military Today

AI is already being used by militaries and defence contractors. Where it is used in operations and production rather than on the battlefield, it is considered to fall within acceptable boundaries. In an interview with BNN Bloomberg, the Chief Technology Officer of Saab AB, the Swedish defence contractor, stated, “Our current way of working is an interpretation based on the humanitarian law and some guidelines for development. We make sure that our engineers understand what areas are safe and in what areas should we be really careful.”

The discussion in The Hague last week was heavily influenced by the Russia-Ukraine war, with many attendees rationalizing the use of AI in support of the “good guys” in the fight.

AI and Autonomous Weapons and Defence Systems

In Ukraine today, a U.S. software company, Palantir, is supplying the country with Skykit, which processes intelligence gathered by observation drones in the air and by operatives on the ground to help with strategic decision-making on the battlefield.

Another technology, developed in Ukraine, is called Zvook. It is an AI-powered acoustic monitoring system that can estimate a missile’s speed and direction from the sound it makes so that interceptors can destroy it.
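
To give a feel for what acoustic detection involves, here is a minimal Python sketch that estimates a sound’s bearing from the arrival-time difference between two microphones. It is purely illustrative and is not how Zvook actually works; the sensor spacing, the delay and the speed-of-sound figure are assumptions made for the example.

```python
# Illustrative only: a toy direction-of-arrival estimate from two microphones.
# Assumes a distant (far-field) source and a known spacing between sensors.
import math

SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 C

def bearing_from_time_delay(delay_s: float, mic_spacing_m: float) -> float:
    """Return the angle (degrees) between the incoming sound and the line
    joining two microphones, given the difference in arrival times."""
    # delay * speed of sound = extra path length to the farther microphone
    ratio = (delay_s * SPEED_OF_SOUND) / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp to keep acos in range
    return math.degrees(math.acos(ratio))

# Example: the sound reaches the second sensor 0.9 s later on a 500 m baseline.
print(round(bearing_from_time_delay(0.9, 500.0), 1))  # roughly 52 degrees
```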

In 2019, the U.S. Center for Global Security Research at the Lawrence Livermore National Laboratory published a report on the use of AI on battlefields. In its introduction, the report quotes Vladimir Putin who has stated that the nation that rules AI “will be the ruler of the world.”  The report explored five questions:

  1. What near-term AI military applications are possible?
  2. Which of these applications would be consequential for strategic stability?
  3. How could AI systems affect regional and global stability?
  4. How would AI systems affect strategic deterrence?
  5. What are the unintended consequences of using AI for the military and on the battlefield?

Algorithmic warfare is another name for the military application of AI. It takes machine learning tools and neural networks, which mimic the human brain, and applies them to current and anticipated military logistics, planning, analysis, transportation, intelligence and tactics. AI is seen as having a potential influence on decision-making related to the scale and scope of wars, and on strategies of deterrence, escalation and de-escalation.

AI Autonomy on the Battlefield

The autonomous vehicles and systems presently in play in Ukraine are seen as the highest-priority military use of AI, with a focus on navigation assistance in support of land, sea and air operations. Uncrewed vehicles can use AI to navigate through hostile environments. Drone swarms, like the ones presently being used by Russia, can operate synchronously through the use of an AI agent. AI can respond in real time to information from onboard sensors and those deployed in the field, altering battlefield tactics as conditions change.
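
As a rough illustration of how a coordinating agent might keep a swarm moving together, the Python sketch below steers a handful of simulated drones toward a shared waypoint while nudging apart any pair that drifts too close. It is a toy model with invented names and numbers, not the control logic of any real swarm.

```python
# Illustrative only: a crude attraction/separation loop for a simulated swarm.
from dataclasses import dataclass

@dataclass
class Drone:
    x: float
    y: float

def step_swarm(drones, target, speed=1.0, min_sep=5.0):
    """Move each drone toward the shared target while pushing apart
    any pair that gets closer than min_sep (a simple separation rule)."""
    updated = []
    for i, d in enumerate(drones):
        # attraction toward the shared waypoint
        dx, dy = target[0] - d.x, target[1] - d.y
        dist = max((dx * dx + dy * dy) ** 0.5, 1e-9)
        vx, vy = speed * dx / dist, speed * dy / dist
        # repulsion from neighbours that are too close
        for j, other in enumerate(drones):
            if i == j:
                continue
            ox, oy = d.x - other.x, d.y - other.y
            sep = max((ox * ox + oy * oy) ** 0.5, 1e-9)
            if sep < min_sep:
                vx += ox / sep
                vy += oy / sep
        updated.append(Drone(d.x + vx, d.y + vy))
    return updated

swarm = [Drone(0.0, 0.0), Drone(2.0, 1.0), Drone(-1.0, 3.0)]
for _ in range(50):
    swarm = step_swarm(swarm, target=(100.0, 40.0))
print([(round(d.x, 1), round(d.y, 1)) for d in swarm])
```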

AI for Wargaming

Today, AI is being used to simulate nuclear explosions; modelling has replaced underground nuclear weapons tests. AI-assisted modelling is influencing the design of weapon systems from fighter jets to missiles and tanks. Entire battlefield scenarios and missions are being mapped and simulated with the assistance of AI. AI is being used to create novel production methods for new military hardware. And in wargaming, which simulates battlefield conditions, gamers can modify conditions, weapons and other variables and analyze how different mixes of these affect the results.
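
Here is a minimal sketch of the Monte Carlo style of analysis that underlies this kind of wargaming: vary an assumption, run the engagement many times, and compare outcomes. The scenario, probabilities and function names are invented for illustration and do not reflect any real system.

```python
# Illustrative only: a toy Monte Carlo "wargame" that varies one parameter
# and reports how often a simulated engagement succeeds.
import random

def run_engagement(interceptor_hit_prob: float, incoming_missiles: int) -> bool:
    """One simulated engagement: every incoming missile must be intercepted."""
    return all(random.random() < interceptor_hit_prob for _ in range(incoming_missiles))

def success_rate(hit_prob: float, missiles: int, trials: int = 10_000) -> float:
    wins = sum(run_engagement(hit_prob, missiles) for _ in range(trials))
    return wins / trials

# Compare outcomes as the assumed interceptor effectiveness changes.
for p in (0.7, 0.8, 0.9):
    print(f"hit probability {p:.1f}: mission success rate {success_rate(p, missiles=5):.2%}")
```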

AI for Intelligence Gathering and Analysis

The U.S. manages incoming streams of intelligence from so many different sources that analysts face the problem of information overload. Machine learning and neural networks are well suited to sorting through the mountains of incoming information to spot potentially destabilizing acts by foreign governments and their militaries. The U.S. is only beginning to use these tools within its military and domestic intelligence institutions. But for global security, using AI to separate the wheat from the chaff is a powerful use of the technology.
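
As a tiny stand-in for this kind of triage, the sketch below flags days with unusual message volume from a monitored channel using a simple statistical threshold. Real systems apply machine learning to far richer data; the counts and the threshold here are invented for the example.

```python
# Illustrative only: flag days whose message volume deviates sharply from
# the recent norm, so a human analyst can look at them first.
import statistics

daily_message_counts = [112, 98, 105, 120, 101, 97, 640, 110, 95, 108]

mean = statistics.mean(daily_message_counts)
stdev = statistics.stdev(daily_message_counts)

flagged = [
    (day, count)
    for day, count in enumerate(daily_message_counts, start=1)
    if abs(count - mean) > 2 * stdev  # simple two-sigma threshold
]
print(flagged)  # the spike on day 7 stands out for human review
```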

Are AI Autonomous Battlefield Robots in the Future Mix?

Whether you are one of the good guys or one of the bad, AI-driven autonomous systems are already part of the battlefield mix and will remain so. But does Russia or China plan to send robot armies and drones to fight its 21st-century battles? To some extent, Russia is already doing this with the kamikaze drone swarms that have attacked Ukrainian cities and critical infrastructure.

But giving fully autonomous robots weapons to kill humans poses an ethical and moral dilemma that, one would hope, not even the maddest generals, whether friend or enemy, would consider a justifiable strategy. I say this, however, knowing that Vladimir Putin has threatened in speeches to use nuclear weapons as Russia finds itself increasingly challenged by its failure to achieve his objectives in Ukraine. Most military and government analysts, as well as current NATO members, see Putin’s threats as a bluff. But would someone like Putin, who is so willing to talk up nuclear war, be morally challenged by sending killer robots onto the battlefield?

If you have been keeping up with the evolution of autonomous mobile robotic systems, whether multi-wheeled or bipedal, you know that none are yet ready for prime time on battlefields, let alone able to reliably and consistently navigate the sidewalks and streets of city neighbourhoods. So we still have time to put ethical and legal boundaries in place for AI’s use in the military and on the battlefield, which means we need more than 60 countries meeting in The Hague to come up with the rules of engagement.
