
Concern grows as Israel uses US-made AI models during war

TEL AVIV, Israel (AP) — U.S. tech giants have quietly empowered Israel to track and kill many more alleged militants more quickly in Gaza and Lebanon through a sharp spike in artificial intelligence and computing services.

But the number of civilians killed also has soared, fueling fears that these tools are contributing to the deaths of innocent people.

Militaries have for years hired private companies to build custom autonomous weapons. However, Israel's recent wars mark one of the first known instances in which commercial AI models made in the United States have been used in active warfare, despite concerns that the models were not originally developed to help decide who lives and who dies.

The Israeli military uses AI to sift through vast troves of intelligence, intercepted communications and surveillance to find suspicious speech or behavior and learn the movements of its enemies. After a deadly surprise attack by Hamas militants on Oct. 7, 2023, its use of Microsoft and OpenAI technology skyrocketed, an Associated Press investigation found.

The investigation also revealed new details of how AI systems select targets and ways they can go wrong, including faulty data or flawed algorithms. It was based on internal documents, data and exclusive interviews with current and former Israeli officials and company employees.

“This is the first confirmation we have gotten that commercial AI models are directly being used in warfare,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute and former senior safety engineer at OpenAI. “The implications are enormous for the role of tech in enabling this type of unethical and unlawful warfare going forward.”

The rise of AI

As U.S. tech titans ascend to prominent roles under President Donald Trump, the AP’s findings raise questions about Silicon Valley’s role in the future of automated warfare. Microsoft expects its partnership with the Israeli military to grow, and what happens with Israel may help determine the use of these emerging technologies around the world.

The Israeli military’s usage of Microsoft and OpenAI artificial intelligence spiked last March to nearly 200 times the level of the week leading up to the Oct. 7 attack, the AP found in reviewing internal company information. The amount of data it stored on Microsoft servers doubled between that time and July 2024 to more than 13.6 petabytes — roughly 350 times the digital memory needed to store every book in the Library of Congress. The military’s usage of Microsoft’s huge banks of computer servers also rose by almost two-thirds in the first two months of the war alone.

Israel’s goal after the attack that killed about 1,200 people and took over 250 hostages was to eradicate Hamas, and its military has called AI a “game changer” in yielding targets more swiftly. Since the war started, more than 50,000 people have died in Gaza and Lebanon and nearly 70% of the buildings in Gaza have been devastated, according to health ministries in Gaza and Lebanon.

The AP’s investigation drew on interviews with six current and former members of the Israeli army, including three reserve intelligence officers. Most spoke on condition of anonymity because they were not authorized to discuss sensitive military operations.

The AP also interviewed 14 current and former employees inside Microsoft, OpenAI, Google and Amazon, most of whom also spoke anonymously for fear of retribution. Journalists reviewed internal company data and documents, including one detailing the terms of a $133 million contract between Microsoft and Israel’s Ministry of Defense.

The Israeli military says its analysts use AI-enabled systems to help identify targets but examine them independently, together with high-ranking officers, to meet international law, weighing the military advantage against the collateral damage. A senior Israeli intelligence official authorized to speak to the AP said lawful military targets may include combatants fighting against Israel, wherever they are, and buildings used by militants. Officials insist that even when AI plays a role, there are always several layers of humans in the loop.

“These AI tools make the intelligence process more accurate and more effective,” said an Israeli military statement to the AP. “They make more targets faster, but not at the expense of accuracy, and many times in this war they’ve been able to minimize civilian casualties.”

The Israeli military declined to answer detailed written questions from the AP about its use of commercial AI products from American tech companies.

Microsoft declined to comment for this story and did not respond to a detailed list of written questions about cloud and AI services provided to the Israeli military. In a statement on its website, the company says it is committed “to champion the positive role of technology across the globe.” In its 40-page Responsible AI Transparency Report for 2024, Microsoft pledges to manage the risks of AI throughout development “to reduce the risk of harm,” and does not mention its lucrative military contracts.

Advanced AI models from OpenAI, the maker of ChatGPT, are purchased by the Israeli military through Microsoft’s Azure cloud platform, the documents and data show. Microsoft has been OpenAI's largest investor. OpenAI said it does not have a partnership with Israel's military, and its usage policies say customers should not use its products to develop weapons, destroy property or harm people. About a year ago, however, OpenAI changed its terms of use from barring military use to allowing for “national security use cases that align with our mission.”

The human toll of AI

It’s extremely hard to identify when AI systems enable errors because they are used with so many other forms of intelligence, including human intelligence, sources said. But together they can lead to wrongful deaths.

In November 2023, Hoda Hijazi was fleeing with her three young daughters and her mother from clashes between Israel and Hamas ally Hezbollah on the Lebanese border when their car was bombed.

Before they left, the adults told the girls to play in front of the house so that Israeli drones would know they were traveling with children. The women and girls drove alongside Hijazi’s uncle, Samir Ayoub, a journalist with a leftist radio station, who was caravanning in his own car. They heard the frenetic buzz of a drone very low overhead.

Soon, an airstrike hit the car Hijazi was driving. It careened down a slope and burst into flames. Ayoub managed to pull Hijazi out, but her mother — Ayoub’s sister — and the three girls — Rimas, 14, Taline, 12, and Liane, 10 — were dead.

Video footage from a security camera at a convenience store shortly before the strike showed the Hijazi family in a Hyundai SUV, with the mother and one of the girls loading jugs of water. The family says the video proves Israeli drones should have seen the women and children.

An Israeli intelligence officer told the AP that AI has been used to help pinpoint all targets in the past three years. Humans in the target room would have decided to strike. The error could have happened at any point, he said: Previous faulty information could have flagged the wrong residence, or they could have hit the wrong vehicle.

The AP also saw a message from a second source with knowledge of that airstrike who confirmed it was a mistake, but didn’t elaborate.

Pushback from workers

The relationship between tech companies and the Israeli military also has ramifications in the U.S., where some employees have raised ethical concerns.

In October, Microsoft fired two workers for helping organize an unauthorized lunchtime vigil for Palestinian refugees at its corporate campus in Redmond, Washington. Microsoft said at the time that it ended the employment of some people “in accordance with internal policy” but declined to give details.

Hossam Nasr, one of the employees fired by Microsoft and an organizer with the advocacy group No Azure for Apartheid, said he and former colleagues are pushing for Microsoft to stop selling cloud and AI services to the Israeli military.

“Cloud and AI are the bombs and bullets of the 21st century,” Nasr said. “Microsoft is providing the Israeli military with digital weapons to kill, maim and displace Palestinians, in the gravest moral travesty of our time.”

In April, Google fired about 50 of its workers over a sit-in at the company’s California headquarters protesting the war in Gaza.

Former Google software engineer Emaan Haseem was among those fired. Haseem said she worked on a team that helped test the reliability of a “sovereign cloud” — a secure system of servers kept so separate from the rest of Google’s global cloud infrastructure that even the company itself couldn't access or track the data stored there. She later learned through media reports that Google was building a sovereign cloud for Israel.

“It seemed to be more and more obvious that we are literally just trying to design something where we won’t have to care about how our clients are using it, and if they’re using it unfairly or unethically,” Haseem said.

Google said the employees were fired because they disrupted workspaces and made colleagues feel unsafe. It did not respond to specific questions about whether it was contracted to build a sovereign cloud for the Israeli military or whether it placed restrictions on the wartime use of its AI models.

Gaza is now in an uneasy ceasefire. But the Israeli government recently announced it would expand the use of artificial intelligence across all of its military branches.

Meanwhile, U.S. tech titans keep consolidating power in Washington. Microsoft gave $1 million to Trump’s inauguration fund. Google CEO Sundar Pichai got a prime seat at the president’s inauguration. And OpenAI CEO Sam Altman met with the president on Trump’s second full day in office to talk up a joint venture investing up to $500 billion for AI infrastructure.

After OpenAI changed its terms of use last year to allow for national security purposes, Google followed suit earlier this month, revising its public ethics policy to remove language saying it wouldn’t use its AI for weapons and surveillance. Google said it is committed to responsibly developing and deploying AI “that protects people, promotes global growth, and supports national security.”

Biesecker reported from Washington and Burke from San Francisco. AP reporters Abby Sewell and Sarah El Deeb in Beirut, Julia Frankel and Natalie Melzer in Jerusalem, Dake Kang in Beijing and Michael Liedtke in San Francisco contributed to this report.

In this image from a store's security camera video, members of the Hijazi family get into a car in the Lebanese border village of Blida, headed toward Beirut on Nov. 5, 2023. AP Photo
The remains of a Hyundai SUV are seen in Ainata, a village in south Lebanon near the border with Israel, on Nov. 6, 2023, after Samira Ayoub and her three granddaughters, Rimas, 14, Taline, 12, and Liane, 10, were killed in the car during an Israeli airstrike the previous evening. AP Photo/Mohammed Zaatari
An Israeli flag is draped over the Microsoft offices in a building in the Gav Yam technology park in Beersheba, Israel, on May 30, 2024. AP Photo/Sam Mednick
Google employees and other demonstrators protest against the war in Gaza and Google's work with the Israeli government on April 16, 2024, in front of the Google offices in Sunnyvale, California. Dai Sugano/Bay Area News Group via AP
An Israeli reconnaissance drone flies over the funeral procession of four Hezbollah fighters who were killed a day earlier after their handheld devices exploded in the southern suburb of Beirut on Sept. 18, 2024. AP Photo/Bilal Hussein, file