medico: In the ongoing war against Gaza, the targets of Israeli attacks are being calculated using artificial intelligence. How does the use of such AI systems work, and since when have they been in use?
Sebastian Ben Daniel: It all began with the Palestinian uprising in 2016, also known as the Knife Intifada or Lone Wolf Intifada. At that time, the goal was to identify potential lone perpetrators in the occupied territories before they carried out an attack. If you've seen the 2002 film "Minority Report," you might be able to imagine this practice better. Essentially, it was an attempt to enable predictive policing with AI by screening social media and other databases—specifically in the context of the occupation.

What made someone a suspect in the eyes of the program?
For example, if someone wrote religious posts. Or if the mood of their posts was predominantly aggressive or sad. Or even if someone got a haircut: the assumption was that they wanted to present themselves in a particularly respectable manner in what would be their last picture.
Based on this, a list of thousands of young people who were considered suspicious was compiled at the time, and they were subsequently arrested...
These young people couldn't actually be charged on the basis of such characteristics. Instead, many of them were placed in administrative detention without trial or charged with "incitement." It was enough to write something against the occupying power and call for protest, as demonstrations in the West Bank are always illegal. The military believed that someone who was locked up for three or four months would "cool off" and no longer be dangerous.
Were these AI programs successful?
At least, that's how it was portrayed in military circles. One officer said at the time that the military knew a teenager would become a terrorist before the teenager knew it themselves. In fact, there was not a single case in which a person identified in this way was actually caught with a knife in their hand.
But the military had a tool that led to thousands of arrests and raids. Essentially, it was a large-scale intimidation campaign against the civilian population, but one that was presented as justified in each individual case and as extremely targeted.
Exactly. Numerous targets were suddenly identified. Whether these targets were relevant in a police sense is questionable. And that brings us to Gaza. A major difficulty for the military during the 2014 attack, "Operation Protective Edge," was that the so-called "target bank," a list of military targets compiled by large teams, was quickly exhausted. It contained 400 to 500 targets and was largely worked through within a few days. After that, purely from a military standpoint, they didn't know what to do next. So they needed a system that could generate targets automatically, because humans in the loop were far too slow. Based on this experience, an AI system was developed that was already in use during the attack on the Gaza Strip in 2021. It was designed to identify Palestinian fighters on the basis of numerous criteria, from suspicious cell phone usage to moving to a new apartment, and to determine their possible location for an attack.
In 2021, the number of civilian casualties was still comparatively low.
Back then, they didn't use the system to the same extent as today. It was more strictly regulated: every target suggested by the system was reviewed by a human being, and the threshold for so-called "acceptable collateral damage" was much lower. So it was still an experiment. However, they were very satisfied with the results, because the system suddenly produced a surplus of "targets".
Is this what former Chief of General Staff Aviv Kochavi meant when he declared in 2022 that "the industrialization of precise destruction" should become the guiding principle of future conflicts?
There was a clear tendency to overestimate the effectiveness of such AI programs. They operate much more quantitatively than qualitatively and are not suitable, for example, for identifying members of the Hamas leadership; human teams still exist for that. But they allowed for far more attacks, carried out far faster, under the guise of military logic. What has happened since October 2023, as can also be read in Yuval Abraham's articles in 972mag, is the almost complete automation of these systems. Human oversight was effectively abolished, there were no longer any serious safeguards, and the threshold for so-called permissible collateral damage was raised drastically.
Official Israeli sources repeatedly speak of targeted strikes and the military necessity of their actions in the Gaza Strip. How can this be explained in light of the complete destruction and the enormous number of civilian casualties?
Looking at Gaza today, one can practically deduce backwards what those responsible for the destruction were thinking. In a recently published recording, the then head of the Directorate of Military Intelligence (Aman), Aharon Haliva, says: "For every person killed on October 7, 50 Palestinians had to die. It doesn't matter now whether they are children. (...) There is no alternative." If 50,000 deaths were the actual goal, the use of AI programs was essential.
I don't quite understand that…
Most of the deaths in Gaza result from airstrikes on people whom the AI system suspects of being low-level Hamas activists. Their military significance is minimal, but there are many of them. And if the "collateral damage", meaning the children, neighbors, and families of these activists, is large enough, the target of 50,000 Palestinian casualties is reached. Algorithms were therefore not the cause, but the tool. They generated an exponential number of new targets and enabled a scale of killing that humans alone could not have achieved. The IDF argues that every target was verified. But the speed was so high that this verification can only have been an alibi.
Why is this technology necessary if the objective is to kill anyway? Why not simply drop bombs indiscriminately?
That's the crucial point: AI gave this killing machine the appearance of legality and precision. Even in October 2023, the IDF commanders could not, as in many historical cases of genocide, simply issue orders for indiscriminate killing. That would be barbaric. Instead, they had to generate political consent. It was also crucial to legitimize the missions among the soldiers carrying them out. The pilots, as well as the target analysts in Unit 8200, come from rather liberal backgrounds, and it had to be ensured that they would carry out this mass murder. To do so, they had to be convinced of at least two things. First, that there was no alternative: the massive bombing was not an expression of murderous ideology, but "fact-based", with the advantage that the system allegedly analyzes faster and better than any human being.
Second, that everything was legal. The attacks, approved by the military's legal department, were considered "proportionate" because the collateral damage (apart from a few cases with hundreds of deaths at high-profile targets) was deemed minimal compared to a supposedly almost unlimited threat. After all, everyone associated with Hamas had to be eliminated.
But there were images from Gaza, even if the Israeli media didn't show them. Did people not see and understand the consequences of these actions?
It is important to understand the role of military propaganda in this war. There are units and front organizations outside the military whose sole task is to cast doubt on information coming out of Gaza—similar to how the tobacco industry sowed doubt about the harmful effects of smoking. There's no need to provide real figures. It's enough to claim that there is no reliable information. Whenever Palestinians say something, the immediate response is: they're lying, manipulating, or exaggerating.
There's also an active suppression of independent thinking. Many in the security apparatus continue to talk about war and combat strategies without acknowledging the actual outcome of the war — the complete destruction of the Gaza Strip.
That's hard to believe—especially since the settler movement and its representatives celebrate the destruction as a messianic achievement.
Yes, in Israeli discourse, the military is now considered "left." Generals distance themselves from right-wing extremist calls for genocide, pilots from the atrocities committed by some ground units with large numbers of settlers. Yet their vision is being implemented by precisely such "liberals" with legally approved AI programs.
When discussing the crime of genocide, the question of "intent" is very often raised. Certainly, many decisions in the first few months were influenced by a desire for revenge—but is that enough?
I don't believe there was a detailed master plan from the outset that envisaged committing genocide. There were no secret meetings about it on the Sea of Galilee. But it happened anyway, gradually. Mainly because the world did nothing to stop it. This free pass also took the Israelis by surprise. When Rafah was about to be invaded a year ago, Europe and the US were very concerned. But when the offensive began and the city was ultimately wiped out, nothing happened. The government must have thought: If this works, we can keep going. Given the AI programs that spat out new targets every minute, the so-called target bank was never exhausted. In this respect, artificial intelligence contributed to prolonging the war indefinitely.
The interview was conducted by Yossi Bartal.
This article first appeared in the medico rundschreiben 03/2025.