Updated: Mar 25, 2026 – 14 episodes
At the Nvidia GTC conference, the company highlighted the increasing demand for AI computing power and the growing importance of inference in AI applications. This trend underscores Nvidia's pivotal role in the AI industry as it continues to innovate and provide solutions for AI workloads.
The Rundown is bullish on Nvidia's future, predicting a $500 billion revenue boost from AI agents. Start with their episode on agentic AI for a deep dive into why investors are excited. Rich Habits Podcast highlights Nvidia's shift to large-scale inference, with a trillion dollars in orders showcasing its unmatched growth potential. The AI Daily Brief offers a detailed analysis of Nvidia's forecast to reach a trillion dollars in revenue by 2027, driven by soaring demand for AI computing. For a tech-focused perspective, TechStuff discusses Nvidia's strategic acquisitions like Groq, positioning it to dominate the AI inference market.
Listen to the Playlist
Ridealong has curated the best podcasts and clips about how Nvidia's GTC showcased rising demand for AI computing and inference. Listen now.
Podcast Episodes Covering This Story
“So let's talk about what this means for the AI economy and why Jensen Huang thinks that the rise of AI agents could add an extra $500 billion to NVIDIA's revenues. So we're starting to see the rise of agentic AI. So let's talk about why investors are paying attention. See, for the last few years, AI has been about training large language models. That required tens of thousands of GPUs.”
Ridealong summary
The rise of agentic AI and the shift to inference will exponentially increase demand for compute, potentially adding $500 billion to NVIDIA's revenues.
“NVIDIA's CEO is saying that this new demand is being driven by a fundamental shift in how AI is actually being used. The industry has now moved away from training large language models, which was kind of the first phase of this, to inference at a massive scale. So think agentic AI applications that spawn other agents to accomplish tasks that are generating exponentially more tokens than a simple chatbot interaction.”
Ridealong summary
Nvidia's AI demand is driven by a shift to large-scale inference, with a trillion dollars in orders highlighting its unmatched growth potential.
“Jensen's keynote...was two and a half hours long, totally jam-packed with big announcements. We got confirmation of the new Groq-powered server focused on inference...Jensen also unveiled a new generative AI system that can enhance video game graphics on the fly...Jensen has now signaled that Nvidia can see enough demand to drive $500 billion in annual sales. This would more than double revenue from the past year.”
Ridealong summary
Nvidia's forecast of reaching a trillion dollars in revenue by 2027 signals unparalleled growth driven by soaring demand for AI computing and inference.
“Well, inference demand is exploding, driven by the AI agents and the coding assistants. I met with Ian Buck. I met with dozens of engineers at Meta, Google, NVIDIA, and all of them are seeing crazy inference demand and AI compute shortages. So across the board, people are in crazy, clamoring need for AI. And we're, I mean, yeah, you're seeing that from talking to engineering leaders at big tech companies.”
Ridealong summary
Inference demand is exploding, driven by AI agents and coding assistants, highlighting Nvidia's strategic foresight in securing supply agreements ahead of this surge.
“One thing that was really interesting, he introduced a new product. I always like when there's an actual product tied to this rather than the promise of a product... So basically that $20 billion licensing agreement is bearing fruit. It's going to make inference quicker, less expensive. It's going to make things more efficient, basically, for NVIDIA customers.”
Ridealong summary
Nvidia's strategic partnerships and new specialized AI chips are set to revolutionize AI computing by making inference quicker and more efficient, marking a significant shift from general-purpose GPUs.
“Inference demand is exploding, driven by the AI agents and coding assistants. I met with Ian Buck, I met with dozens of engineers at Meta, Google, NVIDIA, and all of them are seeing crazy inference demand and AI compute shortages. So across the board, people are in crazy clamoring need for AI. And the great thing is Jensen, he's very prescient. He probably saw this demand months away.”
Ridealong summary
Nvidia is perfectly positioned to capitalize on the exploding demand for AI inference and computing, with strategic moves like acquiring Groq and securing supply agreements ahead of time.
“NVIDIA is an incredibly innovative company, and it comes from the top. Jensen is just an incredibly curious, innovative person. They have a very flat structure as a company, not a lot of middle management... So there was a lot of talk about how they're integrating sort of like Groq's offerings into NVIDIA offerings and, you know, just this big push to, you know, stay on top of things on the compute side.”
Ridealong summary
Nvidia's relentless innovation and strategic acquisitions, like Groq, position it to dominate the AI inference market as demand for AI computing continues to rise.
“NVIDIA acquired the chip-making startup in December and is expected to announce the first collaborative product this week. The Information described the new product as integrating Groq's language processing chips into NVIDIA's rack-scale servers. If that's the case, this will be NVIDIA's first attempt to directly address inference demand. Until now, NVIDIA's chips have been world-leading in AI training, but haven't been particularly focused on efficient inference.”
Ridealong summary
NVIDIA is transforming from a chip company into a full-stack AI infrastructure platform, addressing the rising demand for AI inference with new Groq-integrated products.
“By incorporating Groq technology into their data center systems, NVIDIA is extending its lead over those companies. They're talking about the fact that they can generate more tokens faster and cheaper than any of those alternatives. And it was very important for them to bring the Groq technology in in order to accomplish that. So it's NVIDIA extending its lead over the competition and making the argument that the total cost of ownership, even for inference, is lower with NVIDIA systems.”
Ridealong summary
Nvidia's incorporation of Groq technology extends its lead in AI computing, making its systems more cost-effective for inference compared to competitors.
“What Jensen said in the last week is that as people move to reasoning models and use reasoning much, much more, they saw a 10,000-fold increase in compute demand from each user, but at the same time usage increased 100 times. So that was a million-fold expansion in compute demand in just two years.”
Ridealong summary
The shift to reasoning models and increased AI usage is driving a million-fold expansion in compute demand, underscoring Nvidia's critical role in meeting this explosive growth.
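The million-fold figure follows directly from compounding the two growth factors quoted above; a minimal sketch of the arithmetic (illustrative only, using the round numbers from the keynote as quoted):

```python
# Compounding the two growth factors Jensen Huang cited for reasoning models:
# ~10,000x more compute consumed per user, times ~100x more users/usage.
per_user_increase = 10_000   # compute demand per user (reasoning vs. earlier models)
usage_increase = 100         # growth in overall usage over the same period

total_expansion = per_user_increase * usage_increase
print(f"Total compute-demand expansion: {total_expansion:,}x")  # prints 1,000,000x
```

Because the two factors multiply rather than add, even modest further growth in either one compounds the total demand dramatically.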
“They're basically taking a GB300, which is the Grace Blackwell chip, and they're turning it into a tiny little thing that fits on your desk. That's 750 gigabytes of coherent memory and 20 petaflops of AI compute, which allows you to run models up to a trillion parameters right from your desk. So it's an unbelievably dense machine.”
Ridealong summary
Nvidia's innovations, like the DGX Spark, are revolutionizing AI computing by making supercomputing power accessible on a personal scale, marking a significant leap in efficiency and capability.
“NVIDIA CEO Jensen Huang threw out a lot of numbers, mostly of the technical variety, during his keynote Monday to kick off the company's annual GTC conference... His projection that there will be $1 trillion worth of orders for NVIDIA's Blackwell and Vera Rubin chips, a monetary reflection of a booming AI business... I see, through 2027, at least $1 trillion.”
Ridealong summary
Nvidia's projection of $1 trillion in orders for its AI chips by 2027 highlights the explosive growth and demand in the AI computing sector.
“Speaking of Dylan Patel, SemiAnalysis was featured at NVIDIA GTC. The inference king has been crowned. NVIDIA won a massive belt, and it looks like Jensen's holding it up... The GB200 NVL72 is the inference king, with 50x higher performance per watt on InferenceMAX by SemiAnalysis. 35x lower cost. Very, very exciting. And congrats to Jensen for becoming the inference king and winning the InferenceMAX award.”
Ridealong summary
Nvidia's dominance in AI inference is solidified as it achieves unprecedented performance and cost efficiency, underscoring its pivotal role in meeting the surging demand for AI computing.
“Even if Nvidia has sharp elbows and is able to push out other demand and get all the line time they need to build all the accelerators that they need... well, then you can still wind up in a weird scenario where Nvidia has the chips... but there's just not enough energy. When does the chip bottleneck shift to the energy bottleneck?”
Ridealong summary
Nvidia faces potential energy bottlenecks despite high demand for AI chips, but solutions like nuclear power could mitigate these challenges.
