Nvidia CEO Jensen Huang sees at least $1 trillion in AI chip revenue by 2027

At the chipmaker’s annual developer conference in San Jose, California, Nvidia CEO Jensen Huang said the revenue opportunity for its advanced AI chips was at least $1 trillion by 2027.

The figure signals Huang’s confidence that Nvidia can remain the biggest player in the market for AI chips amid growing competition and investor skepticism over whether its strategy to claw back its profits in the AI ecosystem is paying off.


Huang did not elaborate on the forecast. But it represents a big step up from the roughly $500 billion revenue opportunity for 2026 that Nvidia reiterated on its last earnings call.

Shares of Nvidia – the world’s most valuable listed company, with a market value of more than $4.3 trillion – jumped on the news but pared those gains to 1.4% in recent trading.

Huang was speaking at an 18,000-plus-capacity hockey arena, at an event that has become one of the largest showcases of AI technology.


The four-day conference is also expected to reveal how the top AI chipmaker plans to adapt to the rapidly changing AI landscape.

He began the keynote by arguing that part of Nvidia’s competitive advantage is CUDA, its software platform for programming its chips, which some analysts consider its strongest shield.

“The installed base attracts developers who then create new algorithms that achieve breakthrough technologies,” Huang said. “We’re in every cloud. We’re in every computer company. We serve almost every single industry.”

In-demand AI chips

The keynote is also likely to include details of a next-generation AI chip called Feynman, named after the late American physicist Richard Feynman.

Huang is also likely to talk about data centers, digital assistants known as AI agents, and physical AI such as robots.

Another focus is likely to be on Groq, a chip startup from which Nvidia licensed technology for $17 billion in December. Groq specializes in fast and cheap “inference” computing, in which an AI model takes what it has already learned and uses it to answer a question or make a prediction in real time.

After spending billions of dollars in recent years on chips to train their AI models, companies like OpenAI, Anthropic and Facebook owner Meta Platforms are turning to serving the millions of users who are tapping into those AI systems.

Nvidia faces more competition in the market for inference chips than for AI-training chips, and analysts expect the company to strengthen its defenses against rivals seeking to regain market share lost to Nvidia in recent years.

Analysts also expect Nvidia to elaborate on why it invested $2 billion each in Lumentum and Coherent, both of which make lasers to send information between chips in the form of beams of light.

Despite that increased competition, some of which comes from Nvidia’s own customers designing their own chips, Nvidia remains central to the global AI ecosystem.

Nations like Saudi Arabia are building custom AI systems for their own populations using its chips, and it is one of the only major US companies that continues to release open-source AI software, an area of competition between the US and China.
