Summary: The war in Gaza highlights the growing role of artificial intelligence in modern warfare. From AI-driven targeting systems to surveillance and misinformation, Israeli military operations show how AI technologies, often supplied by major American tech companies, are reshaping the battlefield. This article examines the tools in use, their ethical implications, and the broader consequences for civilians caught in the crossfire.

A Glimpse into AI-Powered Warfare

In 2021, Israel deployed an AI tool codenamed “The Gospel” during its 11-day conflict in Gaza, in what the Israel Defense Forces (IDF) called the first “artificial intelligence war.” The system rapidly analyzed surveillance data, satellite imagery, and social media to identify potential military targets. Though that war ended, tensions persisted, and AI’s role in the conflict has only expanded since.

Fast forward to 2024: Israel’s offensive in Gaza has been described as an “AI human laboratory,” where emerging weapons and AI technologies are tested in real time on live subjects. Over the past two years, the conflict has claimed more than 67,000 Palestinian lives, including over 20,000 children. According to a Reuters investigation, more than 1,200 families have been wiped out entirely. A United Nations commission of inquiry recently concluded that Israel’s actions in Gaza amount to genocide.

Despite multiple ceasefire agreements since October 2023, including the January 2025 deal involving hostage exchanges and aid convoys, violence has continued. Notably, these agreements do not address the establishment of a Palestinian state, which Israel opposes.

AI Systems Generating Kill Lists

Israel has employed several AI programs in its military campaigns. Besides “The Gospel,” programs like “Alchemist” flag suspicious movements in real time, and “Depth of Wisdom” maps Gaza’s tunnel networks. Another system, “Lavender,” generates kill lists by assigning each Palestinian a score estimating the likelihood of affiliation with militant groups; individuals whose scores cross a threshold become targets for missile strikes.
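To make the mechanics concrete, below is a minimal sketch of how any likelihood-score threshold system works. Every identifier, score, and rate in it is a hypothetical assumption chosen for illustration, not a reported detail of Lavender or any real system; the point is the arithmetic of error at population scale.

```python
# A minimal, hypothetical sketch of likelihood-score thresholding.
# All identifiers, scores, and rates are illustrative assumptions,
# not reported details of Lavender or any real system.

def flag_high_scores(scores: dict[str, float], threshold: float) -> list[str]:
    """Return the IDs whose score meets or exceeds the threshold."""
    return [pid for pid, score in scores.items() if score >= threshold]

# Toy data: four records with assumed affiliation scores.
scores = {"id_001": 0.92, "id_002": 0.41, "id_003": 0.88, "id_004": 0.15}
print(flag_high_scores(scores, threshold=0.85))  # ['id_001', 'id_003']

# The ethical problem is the error arithmetic, not the mechanics:
# an assumed 10% false-positive rate applied to 1,000,000 screened
# people wrongly flags 100,000 of them.
screened = 1_000_000
assumed_false_positive_rate = 0.10
print(f"Wrongly flagged: {int(screened * assumed_false_positive_rate):,}")
```

Lowering the threshold catches more genuine targets but misidentifies more civilians; raising it does the reverse. No threshold eliminates that trade-off, which is why the depth of human review applied to each flagged name matters so much.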

Investigations reveal that the IDF relied heavily on these AI systems early in the conflict, despite known inaccuracies that sometimes led to civilian casualties. While military officers are required to approve AI-generated targets, reports suggest this review often amounted to a minimal check, such as confirming that the target was male.

Additional AI tools, such as “Where’s Daddy?”, track marked individuals and flag when they are inside their family homes, facilitating strikes that disproportionately harm civilians. An anonymous Israeli intelligence officer noted that bombing family homes had become a first option because of how easy such targeting is.

AI in Surveillance and Translation

Beyond targeting, AI plays a significant role in Israel’s mass-surveillance efforts. Yossi Sariel, the former commander of the IDF’s elite surveillance unit, who resigned in the wake of the Hamas attack of October 7, 2023, had earlier spent time at a Pentagon-funded defense institution, where he laid out ambitious visions for AI on the battlefield.

Reports have revealed that Israel stored and processed intercepted Palestinian mobile phone calls on Microsoft’s Azure cloud platform. Following public outcry, Microsoft announced after an internal review that it would restrict some services to the IDF; however, most contracts remain intact. Microsoft CEO Satya Nadella met with Israeli intelligence officials in 2021 to discuss cloud hosting of intelligence materials.

AI also assists in translating and transcribing intercepted communications, though internal audits revealed inaccuracies in the Arabic translations. OpenAI’s models, accessed via Microsoft’s Azure, have been used extensively for these purposes, with usage surging after October 7, 2023.

AI-driven surveillance extends beyond the Middle East. In the United States, AI products from companies like Palantir have been used by the Department of Homeland Security to monitor pro-Palestinian activists, particularly non-citizens. Palantir has maintained long-standing contracts with DHS and Immigration and Customs Enforcement (ICE), providing investigative and enforcement tools.

The Role of AI-Generated Media

The rise of AI-generated videos and images has complicated the flow of information. Social media users often struggle to distinguish real content from fabricated or staged media. In the context of Gaza, Israeli authorities have labeled many videos and photos as “Gazawood,” alleging they are staged or AI-generated. This skepticism is compounded by Israel’s restrictions on foreign journalists entering Gaza and its targeting of local journalists.

One notable case involved Saeed Ismail, a 22-year-old Gazan raising funds online, who was falsely accused of being AI-generated due to misspellings in a video. Independent verification confirmed his existence, highlighting the challenges of misinformation in conflict zones.

American Big Tech’s Involvement

American technology companies play a significant role in supplying AI tools to the Israeli military. Microsoft, Google, Amazon, Palantir, Cisco, Dell, and IBM are among the major vendors partnering with the IDF. Notably, Google and Amazon have faced employee protests over “Project Nimbus,” a $1.2 billion contract to provide cloud-computing and AI services to the Israeli government and military.

Amazon recently suspended an engineer who criticized the project internally, while Google has fired employees who protested it. Despite internal concerns about potential human rights violations, these companies continue their partnerships. The Israeli military has even sought access to Google’s Gemini AI model.

Palantir, known for its Artificial Intelligence Platform (AIP) that analyzes enemy targets and proposes battle plans, maintains a strategic partnership with the IDF. The company has faced global criticism and investor divestment due to its involvement in AI systems that rank Palestinians by perceived threat level, leading to preemptive arrests. CEO Alex Karp has publicly supported the company’s backing of Israel.

IBM emphasizes its commitment to human rights and ethical standards, disputing claims made in UN reports. Other companies like Cisco, Dell, Google, Amazon, and OpenAI have not commented publicly on their involvement.

Additionally, a leaked 38-page plan, the Gaza Reconstitution, Economic Acceleration and Transformation (GREAT) Trust, proposes transforming Gaza into a U.S.-operated tech hub with AI-powered smart cities and data centers, administered as a trusteeship for at least ten years.

The Future of AI in Warfare and Surveillance

Military demand for AI technologies is surging worldwide. The U.S. invests heavily in integrating AI into military decision-making through programs like Thunderforge, while China prioritizes military AI development. Active conflict zones, including Gaza and Ukraine, serve as testing grounds for AI-powered weapons and surveillance systems.

Ukraine has invited foreign arms companies to test new weapons on its front lines, highlighting a growing trend of real-time battlefield experimentation. Palantir, beyond its Israeli contracts, secured a software and data deal with the U.S. Army in 2025 worth up to $10 billion.

While some tech companies have withdrawn from controversial military projects in the past, the current political climate, particularly under the Trump administration, encourages American leadership in AI warfare. Despite internal and external criticism, the pursuit of military funding and technological dominance continues unabated.

By Manish Singh Manithia

Manish Singh is a Data Scientist and technology analyst with hands-on experience in AI and emerging technologies. He is known for making complex tech topics simple, reliable, and useful for readers. His work focuses on AI, digital policy, and the innovations shaping our future.
