Google drops pledge not to use AI for weapons and surveillance
Google has quietly updated its AI guidelines, removing its pledge not to use the technology for weapons or surveillance.

Google has quietly revised its ethical guidelines for artificial intelligence (AI), lifting its explicit restriction on using the technology for weapons and surveillance. The update came on Tuesday through a blog post the company shared alongside its 2024 report on "responsible AI". It was first spotted by The Washington Post through archived versions of the guidelines. The policy change marks a significant shift from Google's previous vow to limit AI applications that could cause harm. The company had earlier committed that it would not use AI for applications "likely to cause overall harm".
Google's AI principles, first published in 2018, pledged against using the technology in four areas:
- Weapons
- Surveillance
- Technologies likely to cause overall harm
- Technologies that violate international law and human rights
However, with the update to the AI guidelines, these restrictions have been quietly removed.
Meanwhile, Demis Hassabis, the head of Google's AI efforts, and James Manyika, SVP for Technology and Society, wrote in the blog post: "We are investing more than ever in both AI research and products that benefit people and society, and in AI safety and efforts to identify and address potential risks."
"There is a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," the post further reads. The framing suggests Google is trying to convince readers that lifting the ban on using AI for weapons and surveillance serves a greater good.
Google first announced its AI principles in 2018 after a major protest by its employees against Project Maven, a Pentagon contract that involved using AI to analyse drone surveillance footage. Employees objected to building technology for surveillance, and under pressure from its workforce, Google withdrew from the project. The policy change now reflects a renewed willingness to use AI for surveillance once again.
Google is not the only AI company open to providing its technology to governments. Companies like OpenAI and Anthropic are also deeply involved with US defense officials. The change also illustrates the growing cooperation between tech companies and national security agencies.
Last week, when US President Donald Trump said he wanted to rename the Gulf of Mexico as the Gulf of America, Google quietly complied without opposition, saying it has a policy of doing so. Once the name change is reflected in official US records, Google will make that change for Google Maps users in the US.