Microsoft will supercharge its AI chips with OpenAI technology, Satya Nadella says he gets it all: The story in 5 points

Microsoft is incorporating OpenAI’s custom AI chip designs into its semiconductor strategy under an agreement that runs through 2030. The move aims to accelerate AI innovation by combining hardware and software expertise across its new datacenters.


Microsoft is taking its AI hardware ambitions up a notch, and this time, it’s doing so with a little help from its favorite partner, OpenAI. CEO Satya Nadella has revealed that the tech giant is folding OpenAI’s custom AI chip designs into its semiconductor strategy, in what could be one of the most consequential moves yet in the global AI arms race. The partnership, which already underpins some of the world’s most powerful AI systems, is now expanding to silicon, the literal heart of AI computing. Here’s the whole story in five key points.

Microsoft now has access to OpenAI’s hardware brainpower until 2030

Speaking on a podcast, Nadella confirmed that Microsoft will integrate OpenAI’s system-level hardware innovations directly into its in-house chip development. “We now have access to OpenAI’s chip and hardware research until 2030,” he said. He added that Microsoft will also continue to use OpenAI’s models until 2032 under the revised partnership agreement.

OpenAI, meanwhile, is co-developing specialized AI processors and networking hardware with Broadcom, signaling its growing ambitions beyond software and into the foundational layers of AI computing. Microsoft plans to first help “industrialize” these chip designs for large-scale use and then expand them under its intellectual property to further its broader cloud and AI roadmap.

A new phase in the Microsoft-OpenAI alliance

This expanded collaboration strengthens the already symbiotic relationship between the two companies. For Microsoft, this means faster access to cutting-edge hardware tailored to OpenAI’s massive model-training needs. For OpenAI, this means harnessing the strength of Microsoft’s global infrastructure to further enhance its creations.

It’s a mutually reinforcing loop: OpenAI creates models that push the limits of the hardware; Microsoft makes the hardware that makes those models possible. Nadella described this alignment as “strategic”, indicating that OpenAI’s design expertise will accelerate Microsoft’s semiconductor ambitions.

Microsoft’s new AI data center powerhouse

While the OpenAI chips may be the spark, Microsoft’s Fairwater datacenters are the engine. The company has shifted to a new class of facilities that don’t just stand alone; they are nodes in a networked AI superfactory. Each Fairwater site serves as part of this vast, interconnected system designed to train and deploy next-generation AI models at unprecedented scale.

For example, Atlanta already hosts one of the first of these datacenters. It features a new chip and rack architecture that Microsoft says delivers the highest throughput per rack of any cloud platform today. The site includes NVIDIA GB200 NVL72 rack-scale systems capable of scaling to hundreds of thousands of Blackwell GPUs. And in a nod to sustainability, it uses advanced liquid cooling that consumes almost no water, unlike traditional datacenters that often guzzle millions of liters.

It’s about smarter systems

Microsoft’s AI hardware strategy isn’t just about brute force. As Scott Guthrie, executive vice president of Cloud + AI, said: “Being a leader in AI isn’t just about adding more GPUs, it’s about building the infrastructure that makes them work together as a system.”

He added, “We have spent years advancing the architecture, software and networking needed to reliably train the largest models, so our customers can innovate with confidence. Fairwater reflects that end-to-end engineering and is designed to meet growing demand with real-world performance that goes beyond just theoretical capability.”

It’s this systems-level thinking that sets Microsoft’s hardware strategy apart from rivals like Google and Amazon, both of which are also developing custom AI chips but often keep them locked within their own stacks.

The big picture: building the world’s AI backbone

From its first supercomputer built in collaboration with OpenAI in 2019 to the systems that trained GPT-4 and its successors, Microsoft is working at every layer of AI infrastructure – refining, reinventing, and rethinking how data, chips, and networks come together. Fairwater represents the culmination of those years of quiet experimentation.

And now, with OpenAI’s hardware blueprints, Microsoft is ready to supercharge that growth. The company’s latest datacenters are designed not only to train frontier models but also to run inference, powering the AI features that millions of users interact with every day.

As Nadella said, it’s not just about adding power; it’s about owning the full stack of AI innovation, from silicon to supercomputers to software. In the race for AI dominance, Microsoft is no longer just buying GPUs – it is building the factories that make them run as one.
