
Artificial Intelligence on the Battlefield: An Analysis

Anurag Dwivedi Writes:

I recently wrote about the excessive hype surrounding Low Cost UAV Swarms. There is similar hype surrounding Artificial Intelligence (AI) on the battlefield, often encountered in conjunction with Lethal Autonomous Weapons (LAWs). In this fantasy world, human soldiers will soon be replaced by robots. Let us examine further.

In 1997, the then reigning world Chess champion Garry Kasparov was beaten in a six-game match by IBM’s Deep Blue computer. The Man vs Machine match was deemed a historic moment for AI. Deep Blue was written in the C programming language and ran on a parallel computing monster with 30 nodes, each containing a 120 MHz microprocessor, enhanced with 480 special-purpose VLSI chess chips. It was capable of evaluating 200 million positions per second and was the 259th most powerful supercomputer of its time. In truth it was less AI and more brute computing power. Today Chess engines running on desktop-level hardware can compete with grandmasters. Chess engines have also incorporated heuristics. Humans, on the other hand, have tried to devise anti-computer Chess tactics which can successfully confuse less proficient engines.
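For the curious, a minimal sketch of minimax search with alpha-beta pruning shows what this brute-force approach looks like in code. This is a generic textbook illustration, not Deep Blue’s actual code; the nested-list “tree” at the bottom is a toy stand-in for real position evaluation.

```python
# Minimax with alpha-beta pruning, the classical brute-force search behind
# engines of Deep Blue's era. Internal nodes are lists of child subtrees;
# leaves are numeric heuristic evaluations of a position.

def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):      # leaf: heuristic evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:           # the opponent would never allow
                break                   # this line, so prune the rest
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A tiny two-ply tree: the best guaranteed score for the side to move is 6.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))
```

The pruning is what makes brute force tractable: whole branches are discarded unexamined, so those 200 million evaluations per second are spent only on lines that matter.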

It thereafter took nearly twenty years, until March 2016, for AlphaGo, an AI program from Google DeepMind, to beat the 9-dan champion Lee Sedol at Go – a game with a larger 19 x 19 board, and hence far more permutations and combinations, than Chess with its 8 x 8 board. AlphaGo ran on application-specific integrated circuits (ASICs) developed specifically for machine learning and was trained using a database of 30 million board positions from 160,000 real-life games. AlphaGo is described as “Narrow AI”, i.e. it is focused on one specific task.
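A back-of-the-envelope calculation makes the difference in scale concrete, using commonly cited (approximate, not exact) average branching factors and game lengths:

```python
# Rough game-tree sizes for Chess and Go, using commonly cited average
# branching factors and game lengths. These figures are approximations.
chess_tree = 35 ** 80    # ~35 legal moves per position, ~80 plies per game
go_tree    = 250 ** 150  # ~250 legal moves per position, ~150 plies per game
print(f"Chess game tree: ~10^{len(str(chess_tree)) - 1}")
print(f"Go game tree:    ~10^{len(str(go_tree)) - 1}")
```

This prints roughly 10^123 for Chess against 10^359 for Go, which is why the brute-force search that sufficed for Deep Blue had to give way to machine learning.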

More recently, in January 2017, a Poker-playing AI program called Libratus made big news by defeating four top players in a twenty-day match of Texas Hold’em Poker. This was a new frontier, because Chess and Go are both perfect-information, deterministic strategy games, whereas Poker is a game of incomplete information in which opponents can bluff. Another unique characteristic of Libratus was its use of an algorithm called Counterfactual Regret Minimization (CFR), which allowed it to learn “on the job” and improve its strategy as it played. Libratus used four million core-hours of a supercomputer during the competition. It is again a Narrow AI program, designed to play one specific variant of Poker.
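The engine room of CFR is a rule called regret matching: play each action in proportion to how much you regret not having played it so far. The toy below applies it to rock-paper-scissors against a fixed, rock-heavy opponent; it illustrates the principle only and bears no resemblance to Libratus’s actual implementation.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]   # PAYOFF[mine][theirs]: my payoff for that pairing

def strategy_from_regrets(regrets):
    # Play each action in proportion to its positive accumulated regret.
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS    # no regrets yet: play uniformly

def train(iterations, opponent_strategy):
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strategy = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strategy[a]
        opp = random.choices(range(ACTIONS), weights=opponent_strategy)[0]
        mine = random.choices(range(ACTIONS), weights=strategy)[0]
        # Regret: how much better each alternative would have scored.
        for a in range(ACTIONS):
            regrets[a] += PAYOFF[a][opp] - PAYOFF[mine][opp]
    total = sum(strategy_sum)
    return [round(s / total, 3) for s in strategy_sum]

# The average strategy drifts towards paper, the best reply to heavy rock.
print(train(100_000, [0.4, 0.3, 0.3]))
```

This is the sense in which such a program “learns on the job”: every hand played updates the regrets, and the strategy shifts accordingly.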

The above landmarks highlight certain important facts about AI.

Firstly, AI has matured slowly but is now at a level where it can outperform humans at certain specific tasks that require intelligence. It is therefore not surprising that AI algorithms are increasingly being tried in many real-world applications like driverless cars, medical diagnostics, robotics, aviation, marketing and even music composition.

Secondly, the version of AI being implemented is the so-called “Narrow AI” or “Weak AI” – a term used to describe algorithms that focus on one very specific problem instead of replicating the entire range of human cognitive abilities, the so-called Strong AI, Full AI or Artificial General Intelligence. While aspects of intelligence like memory, reasoning, learning and even heuristics have been replicated by AI algorithms to a reasonable extent, several other attributes like abstract thinking, moral and social intelligence and verbal reasoning have proved difficult to code. The difficulty increases further with cognitive attributes like self-awareness, imagination, intuition and perception. It becomes exponentially more difficult when we want to combine all of these into a single inclusive package, i.e. Strong AI. No Strong AI prototype has been encountered to date except in science-fiction movies and novels. And we have not even touched upon emotional attributes like empathy.

Thirdly, it is obvious that even Narrow AI requires enormous computing power, and its performance is only as good as the underlying hardware. Software running on a smartphone may beat an ordinary human at Chess or Poker, but it takes a supercomputer and specially designed chips to defeat high-level professionals. Gaming aside, the hardware suite of a prototype Tesla autonomous car comprises eight surround cameras, twelve ultrasonic sensors, a forward-facing radar and a custom-built processor. Autonomous cars are again Narrow AI – focused entirely on the single task of driving without collision.

Fourthly, AI is not to be confused with robotics and computing. MS Word spell check is not AI. A robot steered by a human to disarm an IED or to enter a terrorist hideout is also not AI. An Unmanned Combat Aerial Vehicle (UCAV) targeting an enemy outpost is likewise not AI. The intelligence at work in all these cases is that of the human operator. It is AI only when the human is replaced by an algorithm that perceives the environment and takes actions which maximize the chances of success. One method of testing AI is the Turing Test, in which an interrogator puts written questions to an intelligent machine and a human subject, and must judge which of the two is the machine based solely on the replies and no other inputs. Alan Turing, who devised this method in 1950, predicted that machines would be able to fool 30% of humans by the year 2000. Revised estimates put it at 2029 or beyond. The best-case scenario therefore remains that human intervention (or supervision) will be required for complicated tasks in the foreseeable future.
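The distinction is easy to state in code. In the hypothetical toy below (all names and the one-line “policy” are illustrative stand-ins), the remote-controlled robot merely replays a human’s decisions, while the agent closes the perceive-decide-act loop on its own:

```python
import random

def sense():
    # Hypothetical stand-in for a sensor suite reading the environment.
    return random.choice(["clear", "obstacle"])

def remote_controlled(operator_commands):
    # A steered robot: every decision is made by the human operator.
    for command in operator_commands:
        print("robot executes:", command)

def autonomous(steps):
    # An AI agent: it perceives, decides and acts without a human.
    for _ in range(steps):
        percept = sense()                                     # perceive
        action = "advance" if percept == "clear" else "halt"  # decide
        print(f"agent sensed {percept!r}, chose {action!r}")  # act

remote_controlled(["advance", "turn left", "disarm"])
autonomous(3)
```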

Lastly, there are lessons to be learnt from failures. Trading algorithms contributed to the May 2010 Dow Jones flash crash, when the index plunged about 600 points only to recover those losses within minutes. In March 2016, just as AlphaGo was winning at Go, Microsoft had to silence its AI-powered chatbot “Tay”, which similarly learnt on the job from its experience on Twitter and started making racist and abusive comments. In May 2016 a US citizen died when his Tesla collided with a tractor-trailer while in Autopilot mode. These failures highlight the pitfalls of AI in real-world scenarios vis-à-vis closely controlled lab environments. We will not touch upon the legal and regulatory issues here.

In November 2016, Maruti Suzuki chairman Mr R C Bhargava said that self-driving cars won’t work in India because nobody follows the rules. Well-defined rules are one of the prerequisites of AI, and the only rules of War are that “everything is fair” and “there are no runners-up”. Note that in none of the failures described above was the AI itself under attack by a hostile adversary – it failed all on its own. Now imagine a scenario where the AI itself is under attack. When everything “Smart”, from a phone to a TV to a car, has been shown to be hackable, what immunity do AI-based systems enjoy in the real world? Electronic and Cyber warfare come into play in a big way.

From the above analysis we can conclude with confidence that there will be no Strong AI algorithms called Napoleon or Rommel in the foreseeable future. In fact, it would not even be possible to program a half-decent AI Commanding Officer that (at the least) knows how to deploy, and when and where to reinforce or launch a local counter-attack. Mastering the multiple operations of war (attack, defence, infiltration and so on) would be utopian. It is a waste of time discussing this further.

What about Narrow AI applications? Autonomous vehicles are possibly becoming a reality, so why not give one a machine gun and place it in a GPS-designated warzone? Firstly, the complexity and degree of difficulty increase very rapidly as we add more functions (driving + target acquisition + firing + damage assessment + reloading + self-protection). Secondly, such LAWs in the battlefield milieu require an IFF (Identification, Friend or Foe) mechanism. Wireless challenge-response IFF transponders are the most common technique – a simplified version is sketched below. Electronic Warfare (jamming and deception) again comes into play as a countermeasure.
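For illustration, a heavily simplified challenge-response exchange might look like the sketch below, assuming a symmetric key pre-shared among friendly units (real IFF systems, such as Mode 5, are far more elaborate). Every element here – the shared key, the nonce, the radio link carrying them – is precisely what jamming, replay and key-capture attacks would target.

```python
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)  # hypothetical key pre-shared among friendly units

def challenge():
    return os.urandom(16)    # interrogator transmits a fresh random nonce

def respond(key, nonce):
    # A friendly transponder proves possession of the key by MACing the nonce.
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verify(key, nonce, response):
    # Constant-time comparison: a wrong key, or an old response replayed
    # against a fresh nonce, fails the check.
    return hmac.compare_digest(respond(key, nonce), response)

nonce = challenge()
reply = respond(SHARED_KEY, nonce)
print("friend" if verify(SHARED_KEY, nonce, reply) else "foe or unknown")
```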

To conclude, military applications of AI are limited and need to be carefully thought out. UCAVs are cited as one possible application, but it begs the question: what is the cost-benefit of replacing an experienced human controller with an algorithm, if that is possible at all? Moreover, since AI machines are such lucrative targets for hacking, it makes more sense to invest limited resources in the relatively easier electronic / cyber warfare countermeasures than in Narrow AI military applications. If we do wish to venture there, training simulators, cryptanalysis, decision support systems and surveillance appear to be the relevant areas – not UCAVs, killer robots and UAV swarms.
