Google reportedly uses Anthropic's Claude to improve Gemini AI
Google is reportedly using Anthropic's AI model Claude to compare and improve the responses of its own AI, Gemini. Contractors noted that Claude has stricter safety settings and refuses unsafe prompts outright, whereas Gemini instead flags such inputs as violations.

Google is reportedly relying on Claude, the AI model developed by Anthropic, to help improve the performance of its own AI system, Gemini. According to a report by TechCrunch, contractors hired by Google have been tasked with comparing the responses generated by Gemini and Claude to the same user prompts. Their job is to evaluate the quality of each output based on factors such as truthfulness, clarity and verbosity.
The process works like this: contractors are shown the responses of both Gemini and Claude to a specific prompt. They then have up to 30 minutes to carefully review each answer and rate how well it meets the evaluation criteria. This feedback helps Google identify areas where Gemini may need improvement. Interestingly, contractors began to notice some unusual patterns during this evaluation process. On occasion, Gemini's output included statements such as "I am Claude, created by Anthropic", prompting curiosity about how closely linked the two systems might be.
One thing that stood out in the comparison was Claude's strict approach to safety. Contractors reportedly observed that Claude refused to respond to prompts it deemed unsafe, holding firm to its ethical guidelines. Gemini, on the other hand, sometimes did respond to these prompts, producing output that struck contractors as more detailed but possibly less strict. For example, in one case involving prompts related to nudity and bondage, Claude declined to engage, while Gemini's response was flagged as a major safety violation.
Google has a system for these comparisons, using an internal platform that makes it easy for contractors to test and review multiple AI models side by side. However, the use of Claude for this purpose has raised some questions. Anthropic's terms of service state that customers may not use Claude to build or train competing AI products without approval. This rule clearly applies to other companies using the model, but it is unclear whether it applies to investors like Google, which financially backs Anthropic.
In response to the speculation, Google DeepMind spokesperson Shira McNamara clarified the situation. She emphasized that comparing AI models is standard practice in the industry and essential for improving performance. McNamara strongly denied any claim that Google used Anthropic's Claude to train Gemini, calling such suggestions false.
For now, this practice highlights how competitive the AI industry has become, with major players like Google leveraging a mix of internal development and external partnerships to enhance their models. At the same time, it raises questions about ethical boundaries and how companies balance cooperation with competition.