Summary: The ongoing conflict in Gaza highlights the increasing role of artificial intelligence in modern warfare. From AI-generated kill lists to surveillance and data analysis, Israeli forces are deploying advanced technologies, many supplied by American tech companies. This raises profound ethical concerns as civilians bear the brunt of these AI-powered operations. Meanwhile, misinformation fueled by AI-generated media complicates the truth on the ground. As militaries worldwide embrace AI, Gaza serves as a stark example of the future of AI warfare and its human consequences.

A Glimpse into AI-Driven Warfare

In 2021, during an 11-day conflict in Gaza, Israel introduced an AI tool codenamed “The Gospel” — marking what the Israel Defense Forces (IDF) called the first artificial intelligence war. “The Gospel” rapidly analyzed surveillance data, satellite imagery, and social media to identify potential military targets. Since then, AI technology has advanced rapidly, and Israel’s recent offensive in Gaza has been described as an “AI human laboratory,” where future weapons are tested on live subjects.

The Human Cost of Conflict

Over the past two years, the conflict has tragically claimed the lives of more than 67,000 Palestinians, including over 20,000 children. Reuters reported that by March 2025, more than 1,200 families had been completely wiped out. The Palestinian Ministry of Health’s casualty figures since October 2024 include only identified bodies, suggesting the real death toll is even higher. A United Nations commission recently concluded that Israel’s actions in Gaza amount to genocide.

Despite a ceasefire agreement announced in early 2025 involving hostage exchanges and aid convoys, Israeli strikes continued for some time. The ceasefire does not include the establishment of a Palestinian state, which Israel opposes. Multiple ceasefire agreements have been attempted since October 2023.

AI Tools Behind the Scenes

Israel’s military relies heavily on AI systems, some of which have been partially disclosed, while others remain secret. In addition to “The Gospel,” programs like “Alchemist,” which sends real-time alerts about suspicious movements, and “Depth of Wisdom,” which maps Gaza’s tunnel networks, have been used since 2021.

One particularly controversial system, “Lavender,” generates kill lists by assigning a probability score to Palestinians, estimating their likelihood of being militants. High scores make individuals targets for missile strikes. Reports indicate that the IDF heavily relied on Lavender, even though it sometimes misidentified civilians as militants. While officers must approve AI recommendations, this process reportedly only verified whether the target was male.
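The internals of “Lavender” have never been publicly disclosed, so the following is only a generic illustration of the kind of pipeline the reporting describes: a model assigns each person a probability score, and a fixed threshold converts those scores into binary target decisions. All names, numbers, and the threshold here are hypothetical; the sketch exists solely to show why such a system inevitably produces false positives that the score itself cannot reveal.

```python
# Generic illustrative sketch, NOT the actual "Lavender" system (its design
# is not public). Shows how threshold-based scoring turns noisy probability
# estimates into binary decisions, and why misclassification is baked in.

def flag_by_threshold(scored_records, threshold=0.9):
    """Return the IDs of all records whose score meets the threshold.

    A single fixed cutoff cannot distinguish a confident correct
    prediction from a confident wrong one: any record the model
    mis-scores above the threshold is flagged all the same.
    """
    return [record_id for record_id, score in scored_records
            if score >= threshold]

# Hypothetical, made-up data: (record ID, model-estimated probability).
scored = [("A", 0.95), ("B", 0.40), ("C", 0.91), ("D", 0.88)]

flagged = flag_by_threshold(scored)
# "A" and "C" are flagged. If "C" is in fact a civilian the model
# mis-scored, the pipeline has produced a false positive — and a human
# reviewer who checks only a superficial attribute will not catch it.
```

The point of the sketch is that the reported failure mode — civilians misidentified as militants — is a structural property of any score-plus-threshold classifier, not an occasional glitch: lowering the threshold flags more innocent people, raising it misses intended targets, and no setting eliminates both errors.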

Another AI program, “Where’s Daddy?”, is designed to locate targets inside family homes, facilitating strikes on those residences. An anonymous Israeli intelligence officer explained that bombing family homes is often the first option, as it is considered easier.

AI in Surveillance and Data Analysis

The IDF also employs AI extensively in mass surveillance. Yossi Sariel, former head of the IDF’s surveillance unit, trained at a Pentagon-funded defense institution and shared ambitious visions of AI on the battlefield.

Investigations revealed that Israel stored and processed Palestinian mobile phone calls using Microsoft’s Azure Cloud Platform. Following public outcry, Microsoft limited some services to the IDF unit involved. Microsoft CEO Satya Nadella reportedly met with IDF intelligence leaders in 2021 to discuss hosting intelligence data.

AI was also used to transcribe and translate intercepted communications, though internal audits found inaccuracies in Arabic translations. OpenAI’s advanced AI models, accessed via Microsoft’s Azure, have been used for these tasks, with usage surging after October 2023.

Beyond Gaza and the West Bank, AI-driven surveillance targets pro-Palestinian protesters in the United States. Amnesty International reported that American companies like Palantir provide AI tools to the Department of Homeland Security (DHS) to monitor non-citizens advocating for Palestinian rights. DHS confirmed ongoing contracts with Palantir but did not elaborate on specific uses.

The Role of American Tech Giants

American technology companies play a significant role in supplying AI tools to the Israeli military. Microsoft, Google, Amazon, and Palantir are among the major providers. Employees at Google and Amazon have protested contracts like “Project Nimbus,” a $1.2 billion deal to provide cloud computing and AI services to the Israeli government and military.

Amazon recently suspended an engineer who spoke out against the project, and Google has also restricted employee criticism. Despite internal concerns about potential human rights violations, these companies continue their partnerships. The Israeli military has also sought access to Google’s Gemini AI system.

Palantir offers software that analyzes enemy targets and proposes battle plans. The company maintains a strategic partnership with the IDF and has faced global criticism for its involvement. Some investors have divested over concerns about human rights violations linked to Palantir’s AI systems.

Other tech giants like Cisco, Dell, and IBM also have contracts with the IDF. IBM emphasized its commitment to human rights and integrity in business, disputing some UN report claims. Several companies declined to comment on their involvement.

In 2024, reports surfaced about a U.S.-backed plan called the Gaza, Reconstitution, Economic Acceleration and Transformation Trust (GREAT), which envisions transforming Gaza into an AI-powered tech hub with smart cities and manufacturing zones, administered by the U.S. for at least a decade.

The Spread of AI-Generated Misinformation

AI-generated videos and images, often dubbed “Gazawood” by Israelis, have flooded social media, blurring the line between real and fabricated footage. Accusations that genuine material is AI-generated are then used to discredit authentic voices from Gaza, especially since foreign journalists are barred from entering the territory and local journalists face targeting.

For example, Saeed Ismail, a 22-year-old Gazan raising funds for his family, was falsely accused of being AI-generated due to minor errors in a video. His existence was later verified.

Looking Ahead: AI on the Battlefield

Globally, militaries are eager to integrate AI into warfare. The U.S. invests heavily in AI systems for military decision-making, such as the Thunderforge program, while China prioritizes military AI development. Active conflict zones like Gaza and Ukraine serve as real-time testing grounds for AI-powered weapons.

Ukraine has invited foreign arms companies to test new weapons on its front lines, while Palantir secured a $10 billion contract with the U.S. Army in 2024. Although some tech companies have withdrawn from military projects in response to public pressure, current political climates support AI’s military role.

As AI continues to reshape warfare, the devastating consequences for civilians in conflict zones like Gaza underscore the urgent need for ethical considerations and accountability.

By Manish Singh Manithia

Manish Singh is a Data Scientist and technology analyst with hands-on experience in AI and emerging technologies. He is trusted for making complex tech topics simple, reliable, and useful for readers. His work focuses on AI, digital policy, and the innovations shaping our future.
