Best Podcast Episodes About Tensor chip
Everything podcasters are saying about Tensor chip — curated from top podcasts
Updated: Apr 02, 2026 – 9 episodes
Ridealong has curated the best and most interesting podcasts and clips about Tensor chip.
Top Podcast Clips About Tensor chip
“Nick, you were gonna add to this, your analysis. Yeah, we're actually busy building chips for a bunch of companies. We typically work with hyperscalers to build their own chips. Think about the Google, Amazon, Microsoft, Meta type companies who are building their own hardware to do both training and inference. And then we also work with semiconductor companies, both GPU companies as well as networking companies. So those are the people we build for. We're building a ton of chips right now. So I would say in the next year and two years, you're going to start running on Lightmatter hardware. These will be in the new data centers. Think about the Texas stuff. Yeah, CoreWeave. What's the one? Uh, not Star Bay. Stargate. Another great film, speaking of. Yes, excellent film. Yeah. And so there's a picture of, um, I think that's Stargate, and what you see in the middle is that, plus I think …”
Ridealong summary
Tech giants like Amazon and Google are investing billions to create their own custom chips, optimizing costs and enhancing performance for AI applications. With annual spending reaching over $200 billion, these companies are transitioning from software to hardware, reshaping the infrastructure landscape. This shift is driven by a race for power and efficiency in data centers, leading to innovations like micro nuclear reactors.
“You know, chips are harder for sure, because it's a very specialized thing. In terms of, if you asked me what would be the most likely reason we wouldn't get economy-transforming AI in the next few years, I would say something happening to the chip fabs in a major way that throws production off to the point where chips are super scarce, and maybe we can't scale the training runs, full stop. Or even if the training runs can kind of still scale, there's just not enough inference to go around. So we might have really powerful systems, but we just don't have enough access, economy-wide, for people to deploy them and automate all the things that it seems like we're on track to automating. So I guess if I had to pick …”
Ridealong summary
A major disruption in chip production could halt the advancement of transformative AI technologies. Nathan Labenz highlights that geopolitical tensions, particularly concerning Taiwan, pose significant risks to chip availability, which is crucial for scaling AI systems. Despite some positive developments in U.S. chip production, the looming threat of scarcity remains a critical concern for the future of AI.
“… at the very minimum it's going to be the same, quite likely more than that. So in this changing market, as we move from a world where compute is dominated by training to a world where there's a lot of inference, things do change. And at the end of 2025, NVIDIA acquired a company called Groq, G-R-O-Q. It was founded by Jonathan Ross, who was the original designer of Google's Tensor Processing Units, Google's specialist chips for serving AI workloads. He'd done that roughly a decade ago, maybe a bit longer. And it was a big, slightly weird acquisition of about $20 billion. People moved and IP was licensed; the company wasn't formally acquired. It's kind of complicated. But that Groq acquisition really, really pointed to the changing shape of the AI market. Up until that point, NVIDIA had survived on a single, albeit evolving, architecture.”
Ridealong summary
The shift to reasoning models and increased AI usage is driving a million-fold expansion in compute demand, underscoring Nvidia's critical role in meeting this explosive growth.
“… is also allowing it to be exposed. I think it's kind of just, like, look, we can break the rules, we can go around things. And also, you interviewed Jensen. I mean, it's opening back up and there's a lot of folks that are looking to buy this. I think it's, look, you cannot regulate this stuff. Just like life will find a way, NVIDIA chips will find a way is basically the idea here. And the AI infrastructure buildout is an all-out race. It is a matter of national security. Energy is a matter of national security. These sectors and industries, wars are being fought around these assets in particular. We've fought over oil for a hundred years. Oh my gosh, yeah. I mean, now we're fighting over the chips and the oil; now we have two things. We've got employees in the Middle East right now, in the UAE in particular. I can't comment on the sites that have been hit, but just to let you know, this is all very real for us at Gecko. Well, if you do share it, the UAE government's like, please don't share pictures of Dubai getting hit. A little sensitive to it, which I …”
Ridealong summary
We're in an all-out cold war over semiconductor chips, a battle as critical as oil was in the past. As nations scramble for dominance in AI and energy sectors, the U.S. must ramp up its defense spending and regulation to keep pace. This geopolitical struggle highlights the urgent need for government intervention to protect national security and foster innovation in chip technology.
“… volume demand while partnering with established foundries to utilize their existing process technology. So they're not inventing the manufacturing process from scratch. No, not at all. They are providing the billions of dollars and the guaranteed demand to accelerate a partner's existing technology roadmap on a dedicated line. Exactly. When you are projecting a need for hundreds of billions of chips for autonomous vehicles, robots, and satellites, you simply cannot wait in line behind Apple and NVIDIA for manufacturing capacity. You have to fund your own line. The consequence here alters the core business model of vehicle manufacturing. By funding their own silicon production, they insulate themselves from global supply chain shocks and create a massive, dedicated compute foundry for the space economy. Space applications require highly specific radiation-hardened hardware. When a computer is operating outside the Earth's atmosphere, stray cosmic rays can actually strike the silicon and flip a bit from a zero to a one. Just a single ray. Yes. And that single microscopic event can cause a rocket navigation system to fail instantly. You need specialized physical shielding and …”
Ridealong summary
Tesla's new approach to semiconductor production could revolutionize the industry by funding its own chip fabrication plant. By leveraging established foundries and ensuring high demand, they aim to produce specialized chips for space applications and autonomous vehicles. This strategy not only secures their supply chain but also transforms the vehicle manufacturing business model.
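The bit-flip failure mode described in this clip is easy to demonstrate in code. A minimal sketch, purely illustrative (real flight hardware relies on ECC memory, redundancy, and radiation-hardened parts rather than software like this), of a single-event upset flipping one bit of a stored 32-bit float:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a 32-bit float, mimicking a single-event upset."""
    (raw,) = struct.unpack("<I", struct.pack("<f", value))  # reinterpret float as uint32
    raw ^= 1 << bit                                         # the "cosmic ray" strike
    (corrupted,) = struct.unpack("<f", struct.pack("<I", raw))
    return corrupted

altitude_km = 400.0
# Flipping the top exponent bit (bit 30) turns 400.0 into a number
# around 1e-36: a navigation value silently destroyed by one event.
corrupted = flip_bit(altitude_km, 30)
print(altitude_km, corrupted)
```

Flipping the same bit again restores the original value, which is why detection schemes such as parity and ECC can catch and correct single-bit upsets.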
“… start with the first thing that was actually announced and shown in the presentation, which is their advanced technology fab. It's basically their R&D center, and they're building this one in Texas. And they have this great visual of what it looks like. The purpose of this station is to put everything under one roof: the lithography, the packaging, all the elements of chip making, so that they can iterate quickly. Traditionally, what happens is you submit a chip design, and then it takes months to years to go through the revision cycle to actually improve the chip design. What they're doing here is, under one roof, they have everything in one place, and they can design, build, and then test over and over and over again, so they can iterate very quickly on these chips. And what's amazing is the projected output they're hoping to reach is one terawatt per year, and for reference, the United States of America annually consumes half of that. So we have some visual references here. Elon says this large factory, the actual TeraFab, not the R&D center. So once they've gone through the R&D, they figured out the chip …”
Ridealong summary
The TeraFab project is a groundbreaking attempt to vertically integrate the entire AI chip stack under one roof, promising unprecedented speed and scale in chip manufacturing.
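The clip's "half of that" comparison can be sanity-checked with simple unit arithmetic. A terawatt is a rate of power, not a quantity of energy, so the relevant US figure is average electrical power draw; the annual consumption number below is a rough public estimate used only for illustration:

```python
# Sanity-check: is 1 TW of continuous power roughly double US consumption?
# US annual electricity use is roughly 4,000 TWh (approximate estimate).
us_annual_twh = 4000
hours_per_year = 24 * 365
us_average_tw = us_annual_twh / hours_per_year  # energy / time = average power
print(round(us_average_tw, 2))  # 0.46 TW, i.e. about half of 1 TW
```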
“… let me also tell you about Lambda. Lambda is the superintelligence cloud, building AI supercomputers for training and inference that scale from one GPU to hundreds of thousands. NVIDIA's biggest GTC announcement was a billion-dollar bet on the same problem that Cerebras solved six years ago, says Andrew Feldman, the CEO and founder of Cerebras. Shots fired. Shots fired indeed. He says their next inference chip, not available yet, has 140 times less memory bandwidth than Cerebras. To run a single 2-trillion-parameter model, you need 2,000 Groq chips. On Cerebras, that's just over 20 wafers. Even paired with GPUs, Groq maxes out at 1,000 tokens per second. We run at thousands of tokens per second today, and every day, in production now. Why? When you connect 2,000 chips together, every interconnect has latency. Every cable has overhead. It doesn't matter what your memory bandwidth is on paper if you're bottlenecked by the wiring between the thousands of tiny chips. We solved this with wafer scale: one integrated system, little interconnect tax. Jensen told the world that fast inference is where the value is. He's right. It's why the world's leading AI companies and hyperscalers are choosing …”
Ridealong summary
Cerebras claims its chip architecture outperforms NVIDIA's Groq by a staggering margin, requiring just over 20 wafers for a 2-trillion-parameter model compared to Groq's 2,000 chips. This efficiency comes from Cerebras's wafer-scale design, which minimizes latency and interconnect overhead, making it the choice for leading AI companies. As the CEO of Cerebras points out, speed in inference is where the real value lies in AI technology.
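The 2,000-chip figure in the clip can be reproduced with back-of-the-envelope arithmetic. A sketch, assuming a hypothetical 2 GB of fast memory per chip and 2 bytes per parameter (not any vendor's published specs); it ignores activations, KV cache, and replication:

```python
import math

def chips_to_hold(params: float, bytes_per_param: float, mem_per_chip_gb: float) -> int:
    """Chips needed just to keep a model's weights resident in fast memory.
    Purely illustrative; ignores activations, KV cache, and redundancy."""
    weights_gb = params * bytes_per_param / 1e9
    return math.ceil(weights_gb / mem_per_chip_gb)

# Hypothetical numbers: 2 trillion parameters at 2 bytes each is 4 TB of
# weights; at 2 GB per chip, that is 2,000 chips, the clip's ballpark.
print(chips_to_hold(2e12, 2, 2))  # 2000
```

The same arithmetic shows why interconnect cost dominates at that scale: every forward pass must move data across the wiring that stitches those thousands of chips together.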
“Can we just get, like, a no-context quote card? Chip Patterson on Cincinnati football: ‘I'm grateful I'm not a fan of the team.’ Hey, listen, I appreciate it, Danny. You had some good fact-checking there. We are about to be in clipping season, you know. We slip up with something, this is about the time where we could really get burned on it. So I appreciate you doing that research and adding our context so we can be able to have that one. Aaron Nolan, there's a name. He's going to be part of the battle at Memphis for Charles Huff's first season with the Tigers. Marcus Stokes. Y'all remember the Marcus Stokes story? Yeah. He was a Florida commit, and then a viral video of him singing some rap lyrics, oh, it's the white kid, led to it being pulled. He went to West Florida, where he absolutely, they throw it all over, don't they? Yeah, yes. So he's nice. Uh, it is …”
Ridealong summary
Aaron Nolan is set to make waves at Memphis under new coach Charles Huff, while Marcus Stokes seeks redemption after a viral incident derailed his Florida commitment. With intriguing quarterback battles shaping team prospects, keep an eye on these players as they could redefine their teams' futures this season.
“to just bring on seven-nanometer wafers, and then, oh, that gives you another 50 or another 100 gigawatts. Um, yeah, tell me why that's naive. Yeah, so I think, you know, we potentially do go crazy enough that this happens, because we just need incremental compute, and the compute is worth the higher cost, power, et cetera, of these chips. But it's also unlikely to some extent, to a large extent, because, I think, some of these are not fair comparisons, right? For example, from A100, which is 312 teraflops, to Blackwell, which is like a thousand-ish of FP16, or maybe it's 2,000, and then Rubin is like 5,000 or so FP16. It's not a fair comparison because these chips have vastly different design targets, right? At A100, what NVIDIA optimized for was FP16, bfloat16 numerics. When you look at Hopper, they didn't care as much about that. They cared about FP8. When you look at Rubin, they don't care about FP16 and BF16 as much. They care mostly about FP4 and FP6, right? Um, and so numerics are what they've designed their chip for, um, and so there's a …”
Ridealong summary
Comparing chip performance purely by flops can lead to misleading conclusions. Different chips like NVIDIA's A100 and Blackwell are optimized for distinct numerical formats, impacting their performance in ways that flops alone can't capture. This highlights the importance of understanding the design targets behind each chip to truly grasp their capabilities.
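The point about design targets can be made concrete: compare generations only at a precision both chips actually publish numbers for. A sketch; the FP16 figures are the rough values quoted in the clip, while the FP4 entry is a hypothetical placeholder, not an official spec:

```python
from typing import Optional

# Peak throughput in teraflops by precision. FP16 figures are the rough
# numbers from the clip; the fp4 value for Rubin is assumed for illustration.
peak_tflops = {
    "A100":      {"fp16": 312},
    "Blackwell": {"fp16": 1000},
    "Rubin":     {"fp16": 5000, "fp4": 20000},  # fp4 entry is hypothetical
}

def fair_speedup(newer: str, older: str, precision: str) -> Optional[float]:
    """Compare two chips only at a precision both list; otherwise None."""
    a = peak_tflops.get(newer, {}).get(precision)
    b = peak_tflops.get(older, {}).get(precision)
    return a / b if a is not None and b is not None else None

print(fair_speedup("Rubin", "A100", "fp16"))  # ~16x at a shared precision
print(fair_speedup("Rubin", "A100", "fp4"))   # None: no A100 FP4 figure exists
```

Headline generation-over-generation multiples often mix precisions (e.g., FP4 on the new chip versus FP16 on the old one), which is exactly the apples-to-oranges comparison the speaker is warning about.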
Top Podcasts About Tensor chip
This Week in Startups
2 episodes
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
1 episode
Azeem Azhar's Exponential View
1 episode
Elon Musk Podcast
1 episode
Limitless Podcast
1 episode
TBPN
1 episode
Cover 3 College Football
1 episode
Dwarkesh Podcast
1 episode
Stories Mentioning Tensor chip
Best Podcasts on Nvidia's AI Inference Boom
At the Nvidia GTC conference, the company highlighted the increasing demand for AI computing power and the growing importance of inference in AI applications. This trend underscores Nvidia's pivotal role in the AI industry as it continues to innovate and provide solutions for AI workloads.
Nvidia
Putin
Best Podcasts on Elon Musk's TeraFab Project
Elon Musk has announced TeraFab, an ambitious $25 billion joint venture between Tesla, SpaceX, and xAI, aimed at building a massive AI chip factory in Austin, Texas. This project, described as the "most ambitious industrial project in human history," seeks to produce one terawatt of computing power annually, signaling a significant push in AI infrastructure and manufacturing.
