Explained: How AI is leading Israel’s bombing campaign in Gaza

Israel’s military operation in Gaza – which began after the October 7 attack last year that killed more than 1,100 people – has entered its 10th month and has claimed more than 40,000 lives.

As violence flares in the Gaza Strip, Israel has also opened a separate front in the West Bank, another Palestinian territory. That large-scale military operation has entered its second day, and at least 16 people have been killed.

As Israel’s war continues into its 10th month, attention has again turned to the AI tools Israel has used extensively in its bombing campaign in the Gaza Strip.

‘Gospel’, ‘Alchemist’, ‘Depth of Wisdom’ and ‘Lavender’ are not the titles of novels but the names of artificial intelligence (AI) tools used to process vast amounts of data and identify individuals suspected of links to Hamas and Palestinian Islamic Jihad.

A detailed investigation by +972 Magazine and Local Call has revealed disturbing details of Israel’s bombing campaign, specifically how heavily the Israel Defense Forces (IDF) relied on a single AI tool to generate targets for its bombing operations.

‘Lavender’ and its uses

Lavender, developed by Israel’s elite intelligence department, Unit 8200, works as an AI-powered database designed to identify potential targets linked to Hamas and Palestinian Islamic Jihad (PIJ). Lavender uses machine learning algorithms and processes vast amounts of data to pinpoint individuals identified as “junior” militants within these armed groups.

Lavender initially identified some 37,000 Palestinian men as linked to Hamas or PIJ. The use of AI to identify targets marks a significant shift from the way Israeli intelligence agencies such as the Mossad and Shin Bet have traditionally operated, relying on more labour-intensive human decision-making.

Soldiers often spent less than 20 seconds reviewing each target Lavender flagged before authorising a strike – mainly to confirm that the target was male. They frequently accepted the machine’s output without question, even though the program has an error rate of about 10 percent, meaning roughly one in 10 of the people it flags may be misidentified.

According to the report, the program often flagged individuals with little or no connection to Hamas.

Gospel – another of Israel’s AI tools

“Systems like Gospel are being used to enable automated equipment to identify targets at a faster rate, and works by improving the accuracy and high-quality intelligence material needed,” the IDF said.
“With the help of artificial intelligence, and through the rapid and automated extraction of up-to-date intelligence – it generates recommendations for the researcher, with the goal of achieving a perfect match between the machine’s recommendation and the person’s identification,” it added.

AI platforms crunch data to choose targets for airstrikes. The strikes can then be rapidly coordinated with another artificial intelligence model called Fire Factory, Bloomberg reported. Fire Factory calculates ammunition loads, prioritizes and allocates thousands of targets to planes and drones, and proposes a schedule, the report said.

The +972 Magazine report mentions a book titled ‘The Human-Machine Team: How to Create Synergy Between Humans and Artificial Intelligence That Will Revolutionize Our World’. The author ‘Brigadier General YS’, who is reportedly the commander of Israel’s 8200 Intelligence Unit, makes a case for the use of AI in “deep defense” and gives scenarios that could pose a threat to Israel in the future.

In the chapter “Deep Defense: New Potentials,” the authors state, “Deep Defense is the ability of national establishments to use the human-machine team concept to address security challenges, and to uncover issues in new ways that were previously impossible.”

According to the author, a human-machine team should be able to identify thousands of targets before a war begins, and thousands more every day once it is under way. He argues that building such tools is essential so that the military can strike the right targets at the right time with less collateral damage.

What is the significance of AI in the Russia-Ukraine war?

The use of automated equipment such as unmanned FPV drones and ground robots has reduced the human risk for warring nations, but it has also deepened their dependence on technology. What looks like a win-win for a nation brings the usual trade-off: the benefits of the technology are followed by the ethical and legal concerns of using AI in combat.

The Russia-Ukraine war has become a testing laboratory for the tools of future wars. Drone strikes have since spread to conflicts in other regions, notably attacks on Israel by non-state actors such as the Houthi rebels and Hezbollah.

The deployment of automated drones alone, however, does not define the use of AI in conflict.

AI is primarily used to analyze geospatial intelligence by processing satellite images and decoding open-source intelligence such as videos and photos available online. Surveillance drone footage, on-ground human intelligence (HUMINT), satellite images, and open-source data are all combined and processed by AI tools to deliver a result that is used to conduct missions. This represents the use of data analytics on the battlefield.

According to a report in National Defense magazine, Ukraine has used facial-recognition software from the US firm Clearview AI to identify dead soldiers and Russian attackers, and to combat misinformation. US firms such as Primer have deployed AI tools to transcribe and analyse intercepted Russian radio communications.

Meanwhile, Ukraine is developing AI-enabled drones to counter radio jamming. In recent months, the cheap FPV drones that have been widely used have seen their lethality drop because of radio-signal jamming, a form of electronic warfare in which the Russians specialise.

“We are already working on the concept that soon there will be no contact between the pilot and the UAV at the front line,” Reuters reported, citing Max Makarchuk, AI head at Brave1, a defense technology accelerator founded by the Ukrainian government.

Radio jamming severs the operator’s link with the drone, in effect throwing an invisible protective layer around the target that causes the munition to miss or crash. Automating the final phase of the drone’s flight, so it can reach its target without a radio link, is one way around this.

Meanwhile, Russia is focusing on developing AI systems to counter Western countries and to fight Ukraine on the battlefield. On paper, Russia far outmatches Ukraine in military strength, yet its army has suffered heavy losses on the battlefield.

Moscow is focused on areas such as enhancing command, control, and communications with AI-enabled decision-making, developing smarter weapons, which it calls “weapons intellectualization,” and developing more AI-enabled guidance systems for unmanned aerial/ground vehicles and missiles.

ZALA Aero Group, the manufacturer of the Russian kamikaze drone KUB-BLA, claims it can select and destroy targets using AI. The Lancet-3 loitering munition is highly autonomous: its sensors enable it to detect and strike targets without human guidance, and to return to the operator if no target is found.

In May, a Russian S-350 Vityaz surface-to-air missile system reportedly shot down an aircraft in autonomous mode, in what was claimed to be the first AI-enabled missile kill: the system detected, tracked and destroyed a Ukrainian aerial target without human assistance. The claim remains disputed.

Heavy investment on both sides underscores the central role of AI in modern warfare, and suggests that future wars may be waged jointly by humans and machines.
