The Israeli military has recently deployed two new recruits to aid in tracking Hamas operatives in Gaza, and they are both artificial intelligence systems. Named Lavender and Gospel, these AI systems have become integral to the IDF’s targeting strategy, sparking debates about the ethical and legal implications of their use.
Lavender, developed by Israel’s elite intelligence division, Unit 8200, functions as an AI-powered database that identifies potential targets associated with Hamas and Palestinian Islamic Jihad. Using machine learning algorithms, it processes vast amounts of data to flag individuals classified as “junior” militants within these armed groups. Despite an error margin of up to 10 percent, soldiers have been relying on Lavender’s output to make split-second decisions on whether to bomb the targets it identifies.
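The reported error margin matters chiefly because of scale. As a minimal sketch, assuming purely hypothetical counts of flagged individuals (the reporting gives no totals), the arithmetic below shows how a 10 percent error rate translates into absolute numbers of misidentifications; it is an illustration, not a description of the actual system.

```python
# Purely hypothetical illustration -- not the IDF's system, data, or figures.
# It only shows how a stated error margin scales into absolute numbers.

def expected_misidentifications(flagged: int, error_rate: float) -> float:
    """Expected number of wrongly flagged people at a given error rate."""
    return flagged * error_rate

# Assumed, illustrative counts of flagged individuals:
for flagged in (1_000, 10_000, 30_000):
    wrong = expected_misidentifications(flagged, error_rate=0.10)
    print(f"{flagged:>6,} flagged -> roughly {wrong:,.0f} misidentified")
```

Even a single-digit error rate, applied across thousands of automated recommendations, can mean a large absolute number of people wrongly marked as targets.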
Gospel, by contrast, automatically generates targets, focusing on structures and buildings rather than individuals. The IDF has said that Gospel allows for the rapid extraction of updated intelligence to produce recommendations for its researchers, with the goal of a complete match between the machine’s recommendation and the identification carried out by a person.
While these AI systems may improve the speed and efficiency of target identification, their deployment raises serious ethical and legal concerns, particularly around targeting accuracy and the potential for collateral damage when lethal decisions rely on machine recommendations. As AI becomes further entwined with modern warfare, addressing these concerns will be essential to ensuring the responsible use of technology in military operations.