In a surprising turn, Microsoft revealed it provided advanced AI and cloud computing services to the Israeli military during the war in Gaza. The tech giant said its role was focused on supporting efforts to locate and rescue hostages, while adding that there is no evidence its Azure platform was used to target or harm people in Gaza.
According to Microsoft’s statement, assistance was provided on a limited basis and under significant oversight: some requests were approved and others denied, with the stated aim of saving lives while protecting civilian privacy. The company also acknowledged, however, that it lacks full visibility into how its software is used once it runs on external servers.
This disclosure marks one of the first times a major technology company has openly detailed its involvement in an active conflict. Experts such as Emelia Probasco of the Center for Security and Emerging Technology point to a new reality in which companies, not just governments, are shaping the rules of military technology.
Critics, including the group No Azure for Apartheid, argue that the announcement is more a public-relations move than a genuine effort to address internal concerns. Meanwhile, advocates for transparency, such as Cindy Cohn of the Electronic Frontier Foundation, welcome the step but urge more clarity on how Microsoft’s AI models are actually deployed on military servers.
As debates continue, this case highlights the complex balance between technology’s potential to save lives and the ethical challenges it brings to modern military operations.
Reference(s):
Microsoft says it provided AI to Israeli military, denies use for kill (cgtn.com)