**Google Redefines AI Principles, Leaving Door Open for Military Use**

Alphabet alters its AI usage guidelines, raising ethical concerns around potential military applications.

The tech giant has revised its AI principles to permit national security collaboration with governments, igniting debate over the implications of AI in warfare and surveillance.
Alphabet, the parent company of Google, has made a significant shift in its artificial intelligence (AI) policy by abandoning its previous commitment not to use AI to develop weapons or surveillance technology. The change comes with a revision of its guiding principles for AI use, which removes a key section on harmful applications.
In a blog post authored by senior vice president James Manyika and DeepMind head Demis Hassabis, the company justified the pivot, arguing that businesses and democratic governments need to collaborate on AI that bolsters national security. The new guidelines signal openness to military applications and potentially greater involvement in surveillance technologies.
The decision highlights a growing debate in the AI community over governance, the balance between commercial interests and ethical considerations, and the management of associated risks. Controversy continues to surround the integration of AI into combat and surveillance scenarios, as stakeholders debate the appropriate level of oversight and ethical use.
Manyika and Hassabis reasoned that the original AI principles established in 2018 required re-evaluation to match the technology's rapid evolution. According to the blog, AI has transformed from a specialized research field into an integral tool used daily by billions, mirroring the ubiquity of mobile phones and the internet.
The two executives argued that an increasingly complex geopolitical landscape requires democracies to lead AI development, grounded in foundational values such as freedom and human rights. They called for a coalition of entities that share these values to advance AI in a way that safeguards citizens, fosters global development, and supports national security objectives.
The blog post's publication closely coincided with Alphabet's quarterly results, which fell short of market expectations and sent its share price lower. This was despite a notable 10% growth in digital advertising revenue, which benefitted from heightened US election spending.
Against this backdrop, the company announced a projected investment of $75 billion, 29% above Wall Street forecasts, earmarked for AI infrastructure, research, and applications, including AI-powered search enhanced through its Gemini platform.
Historically, Google's founders set a tone of ethical governance with the motto "Don't be evil", which gave way to "Do the right thing" after the company's restructuring under parent company Alphabet Inc in 2015. There have nonetheless been internal tensions over these guiding principles, notably when the company faced employee protests over military contracts such as the controversial Project Maven, a Pentagon project aimed at applying AI to military operations.
As conversations surrounding AI ethics heat up, the implications of Google's policy revision continue to be closely monitored.