Microsoft Admits Supplying AI to Israeli Military During Gaza Conflict

Microsoft has acknowledged providing advanced artificial intelligence (AI) and cloud computing services to the Israeli military amid the ongoing conflict in Gaza. The tech giant stated that its technologies were used to assist in locating and rescuing hostages, but emphasized that there is no evidence its AI platforms were employed to target or harm civilians in Gaza.

The company's disclosure came in an unsigned blog post on its corporate website, marking its first public confirmation of involvement in the conflict. Microsoft said employee concerns and media reports had prompted an internal review, which led it to hire an external firm for additional fact-finding. While the company did not name the external firm or release its report, it stressed that assistance was provided with significant oversight, with some requests approved and others denied.

“We believe the company followed its principles on a considered and careful basis, to help save the lives of hostages while also honoring the privacy and other rights of civilians in Gaza,” Microsoft stated. The company also admitted limitations in monitoring how customers utilize its software on private servers or through other commercial cloud providers.

The Israeli military maintains extensive contracts for cloud and AI services with several major American tech companies, including Google, Amazon, and Palantir. This trend highlights a growing movement among tech firms to supply AI products to military organizations globally, raising concerns among human rights groups. Critics argue that flawed or error-prone AI systems could lead to misguided targeting decisions, potentially resulting in the loss of innocent lives.

Emelia Probasco, a senior fellow at Georgetown University’s Center for Security and Emerging Technology, noted the significance of Microsoft’s stance. “We are in a remarkable moment where a company, not a government, is dictating terms of use to a government that is actively engaged in a conflict,” she said. “It’s like a tank manufacturer telling a country you can only use our tanks for these specific reasons. That is a new world.”

Meanwhile, No Azure for Apartheid, a group comprising current and former Microsoft employees, has called for the company to release the full investigative report. Hossam Nasr, a former employee who was terminated after organizing a vigil for Palestinians in Gaza, criticized Microsoft’s response. “It’s very clear that their intention with this statement is not to actually address their worker concerns, but rather to make a PR stunt to whitewash their image,” he remarked.

Cindy Cohn, executive director of the Electronic Frontier Foundation, acknowledged Microsoft’s step toward transparency but highlighted lingering questions. “I’m glad there’s a little bit of transparency here,” she said. “But it is hard to square that with what’s actually happening on the ground.”

The situation underscores the ethical dilemmas tech companies face when their products are used in military conflicts. As AI continues to evolve, the debate over its application in warfare and the responsibility of tech firms remains a pressing issue.
