Concerns About the Military Use of AI
The letter states that the employees' concerns are not tied to the geopolitics of any particular conflict but to Google's supply of artificial intelligence and cloud computing services to the Israeli military under the contract known as Project Nimbus. The signatories cite reports indicating that the Israeli military uses this technology for mass surveillance and for selecting targets in Gaza. They also note reporting that Israeli arms companies are required by the government to purchase cloud services from Google and Amazon.
The document also underscores existing tensions between Google and DeepMind, reflected in a cultural clash within the organization. While Google sells AI services to militaries through its cloud business, DeepMind was acquired by the company in 2014 with the promise that its technology would not be used for military or surveillance purposes. That commitment, reaffirmed in 2018, now appears to be at risk because of the contracts in question.
Ethical Commitment in Question
The letter from DeepMind employees emphasizes that any involvement in weapons manufacturing or military operations jeopardizes Google's standing as a leader in ethical and responsible AI and contradicts the company's mission and stated AI principles. The signatories urge management to investigate reports that Google's cloud services are being used by militaries and weapons manufacturers, to cut off military access to DeepMind's technology, and to establish a new governance body to prevent any future military use of its AI.
Despite the growing concern among employees, Google has not issued a substantive response. The silence is fueling tensions within the company: some employees are increasingly uneasy about the direction of military-related contracts and agreements, while other parts of the business continue to pursue these collaborations.