OpenAI starts using Google AI chips to cut costs and rely less on Microsoft, Nvidia: Report
OpenAI has reportedly started using Google's AI chips to reduce its dependence on Microsoft and Nvidia, signalling a strategic shift amid rising costs and ongoing tension with its largest backer.

In short
- OpenAI starts using Google TPUs to reduce costs
- The move will help reduce OpenAI's dependence on Microsoft and Nvidia GPU infrastructure
- Reports point to growing tension in the OpenAI–Microsoft partnership
OpenAI has reportedly begun using Google's artificial intelligence chips, according to a report, in a bid to cut costs and reduce its dependence on Microsoft and Nvidia. OpenAI has recently started renting Google's Tensor Processing Units (TPUs) to power its AI applications, including ChatGPT. The TPUs are accessed through Google Cloud, and the move is part of OpenAI's broader effort to manage the rising cost of running large AI models. The development was first reported by The Information, which also noted that OpenAI expects TPUs to help lower inference costs, a key factor in maintaining and scaling its AI products.
The decision to turn to Google's chips marks OpenAI's first meaningful use of non-Nvidia hardware, a notable shift for a company that is one of the largest buyers of Nvidia's powerful GPUs. Those chips have traditionally handled both the training of OpenAI's models and inference, the process of using a trained model to generate outputs, such as the responses ChatGPT produces.
The new arrangement also highlights a surprising collaboration between two direct rivals in the AI space. Google, which has developed its own AI models such as Gemini, has typically reserved its TPUs for internal projects. Recently, however, it has begun opening access to outside customers, including big names such as Apple and AI startups like Anthropic and Safe Superintelligence, both founded by former OpenAI leaders.
Despite the new partnership, Google has reportedly chosen not to offer OpenAI access to its most advanced TPUs. A Google Cloud employee reportedly told The Information that the company is limiting OpenAI's use of its highest-end chips, likely to maintain a competitive edge.
The shift to Google's hardware comes at a time when OpenAI's relationship with Microsoft is showing signs of strain. Microsoft has been OpenAI's largest investor: the company put in $1 billion in 2019 and has integrated OpenAI's technology into its products, such as Microsoft 365 and GitHub Copilot.
Meanwhile, The Wall Street Journal recently reported that OpenAI executives have discussed whether to accuse Microsoft of anticompetitive conduct and potentially seek regulatory review of their contract. Although the two companies issued a joint statement expressing optimism about their continued collaboration, the financial terms and operational control reportedly remain unresolved. One of the biggest sticking points is apparently revenue sharing: OpenAI reportedly plans to reduce the share of revenue it pays Microsoft, currently 20 percent, by 2030.
Additionally, the two firms have yet to agree on how to structure Microsoft's equity stake in a restructured OpenAI entity, or on future profit rights.
OpenAI has also asked to revisit an exclusivity clause that gives Microsoft the sole right to host its models on the Azure cloud. With its growing computing requirements and new reliance on Google Cloud TPUs, the company is trying to build more flexibility into its cloud infrastructure strategy.
Another reported source of friction is OpenAI's $3 billion acquisition of Windsurf, a startup focused on AI-assisted coding. Microsoft has pushed for access to Windsurf's intellectual property, possibly to strengthen its own GitHub Copilot product. OpenAI, however, has reportedly resisted those efforts.