Best Podcast Episodes About Vera Rubin
Everything podcasters are saying about Vera Rubin — curated from top podcasts
Updated: Mar 23, 2026 – 10 episodes
Ridealong has curated the best and most interesting podcasts and clips about Vera Rubin.
Top Podcast Clips About Vera Rubin
“… And inevitably in the big short movie that will be made about this era, somebody will take this clip of me saying this right now, and it'll be part of like here are the idiots who are saying that this is going fine. So OpenAI right now is committed to taking at least two gigawatts of AWS Trainium compute and then three gigawatts of dedicated inference capacity and two gigawatts of training on Vera Rubin systems. That's from NVIDIA. So like, again, a gigawatt is a million homes, right? So we're talking about large fractions of the US power output here. One thing to note though, is Amazon's contingent investment is what it's called. It's kind of a weird thing. So billions of Amazon investment could be contingent on OpenAI either achieving AGI or on making its IPO by the end of the year”
Ridealong summary
OpenAI's recent funding round has raised a staggering $110 billion, marking a historic moment in tech investment. With major contributions from Amazon and NVIDIA, this round blurs the lines between investment and pre-purchased services, raising questions about the sustainability of such massive financial commitments. The implications of this funding could reshape the future of AI development and infrastructure.
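The clip above leans on some back-of-the-envelope arithmetic (a gigawatt powers roughly a million homes). A minimal sketch of that arithmetic, assuming an average US electrical load of about 475 GW — a figure not stated in the transcript:

```python
# Sanity-check the "gigawatts" claims from the clip.
# Assumptions (NOT from the transcript): average US electrical load is
# roughly 475 GW, and the rule of thumb that 1 GW powers ~1 million homes.

aws_trainium_gw = 2.0          # AWS Trainium compute
inference_gw = 3.0             # dedicated inference capacity
vera_rubin_training_gw = 2.0   # training on Vera Rubin systems

total_gw = aws_trainium_gw + inference_gw + vera_rubin_training_gw
homes_equivalent = total_gw * 1_000_000   # 1 GW ≈ 1M homes
us_avg_load_gw = 475.0                    # assumed average US load
share_of_us_load = total_gw / us_avg_load_gw

print(f"total commitment: {total_gw:.0f} GW")
print(f"equivalent homes: {homes_equivalent:,.0f}")
print(f"share of assumed average US load: {share_of_us_load:.1%}")
```

Under these assumptions the commitments total 7 GW, i.e. on the order of a percent or two of average US electrical load, which gives a sense of scale for the "fractions of the US power output" framing.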
“… I feel like Broadcom gets pretty slept on, but their market cap of $1.5 trillion is pretty much the same as Meta. Let's shift gears and talk more about NVIDIA. NVIDIA has stopped the production of its H200 chips that were intended for the Chinese market. According to a report from the Financial Times, the company has shifted that manufacturing capacity at TSMC over to its next generation Vera Rubin chips instead. Now, the decision to stop the production of the H200 chips, which is one of NVIDIA's older generation AI processors, is not because of a lack of demand. In fact, NVIDIA was expecting orders of more than 1 million units from Chinese customers and had been aggressively ramping up production in preparation for the Chinese government to formally approve the imports of the H200 chips. But that approval process has basically stalled. Both Washington and Beijing have been putting up regulatory roadblocks. On the US side, the State Department pushed for tougher restrictions on how China could use the H200 chips. And then in China, Beijing started signaling that it may block imports altogether to protect their domestic chip industry and push Chinese AI companies to use homegrown …”
Ridealong summary
Broadcom's recent earnings report reveals a staggering $19.3 billion in revenue, fueled by AI chip demand, positioning them as a serious competitor to NVIDIA. With expectations of over $100 billion in AI chip revenue by 2027, Broadcom is capitalizing on custom chip production for major tech companies, while NVIDIA halts H200 chip production due to regulatory issues in China. This shift illustrates the evolving landscape of the AI chip market and Broadcom's rising prominence.
“… than data centers can be built. Think about NVIDIA's product release timeline. They are pumping out a new and improved chip. It used to be every two years, but now they've dropped that to every one year. It takes a while to bring a data center online, though. So Oracle orders all these state-of-the-art, top-of-the-line Blackwell NVIDIA chips. But then NVIDIA goes, hey, we have a new chip coming, Vera Rubin, that does way better than Blackwell. OpenAI is like, we want the Vera Rubin chip. We want the newest and latest and greatest technology. Oracle is like, dang, we just filled it all with this now older generation chip. So there is a little bit of a timeline issue where people aren't syncing up when it comes to the latest and greatest technology. So that is one thing that is in the back of people's minds as these data centers are built. It's like your shoe rack. When you order new sneakers, you don't put them on the one with the old sneakers. You have to get an entire new rack to put the new sneakers on. There was also another narrative in Oracle's earnings around the overall SaaSpocalypse. Enterprise software has taken a big licking over the past few months. Oracle is a part of …”
Ridealong summary
Oracle's recent earnings report surprised investors, with shares jumping 10% after a strong quarter that eased fears about an AI bubble. Despite concerns over its debt and reliance on OpenAI, Oracle's $553 billion revenue backlog and maintained capital expenditure outlook signal confidence in the growing demand for AI cloud computing. The company's executives boldly claimed they would not be disrupted by AI, positioning themselves as a leader in the evolving tech landscape.
“… of millions, if not billions of dollars, into building data centers that take forever to even start generating money, at which point they only seem to lose it. Worse still, Nvidia sells GPUs in a one-year upgrade cycle, meaning that all of those data centers being built right now are being filled with Blackwell chips, and by the time they turn on, Nvidia will be selling its next generation of Vera Rubin chips, making them obsolete. Now, you've probably heard that Vera Rubin, the next GPU from Nvidia, will use the same racks, called Oberon, as Blackwell, which is true to an extent, but won't be true for long, as Nvidia intends to shift to their Kyber racks in 2027, hoping to build one-megawatt IT racks, which involve entire racks full of power supplies, meaning that all of those data centers you see today, whenever it is they get built, if they get built, will be full of racks incompatible with the next generation of GPUs. This will also decrease the value of the assets inside the data centers, which will, in turn, decrease the value of the assets held by the firms investing. Stargate Abilene, the one invested in by JP Morgan, Blue Owl, and Primary Digital …”
Ridealong summary
Despite being the best-financed data center company, CoreWeave faces severe profitability challenges and overwhelming debt, raising concerns about the entire industry's viability. As data centers take years to build and become obsolete due to rapidly evolving technology, the financial backing from major banks may not be enough to stave off a looming crisis. With demand for AI compute in question, the situation could lead to a significant downturn in the data center market.
“And we can't see them because they don't reflect enough sunlight. So there is a huge reservoir of interstellar rocks that we are not aware of. And once every few years, one of them collides with Earth, one of those meter-sized rocks. But what I'm trying to say is that the information we have is very limited. For example, even with the Rubin Observatory, state-of-the-art, if an object were to move 10 times faster than the rocks of the solar system or the planets of the solar system, which are moving at tens of kilometers per second, if you had an object moving at hundreds of kilometers per second, it would spend very little time in each snapshot of the sky that we take with those telescopes; it would appear as a streak. 300 kilometers per second, 10 times more than Earth's speed around the sun, is just 0.1% of the speed of light, three orders of magnitude smaller than the speed of light. So there is a huge range of velocities that are larger than our rockets that we would miss if objects were moving in our backyard very fast. Just think about your own backyard: if someone passes through very quickly, you wouldn't even notice …”
Ridealong summary
Avi Loeb discusses the challenges of detecting interstellar objects like rocks that could be colliding with Earth. He explains that many of these fast-moving objects are missed by telescopes, leading to a limited understanding of what might be lurking in our solar system. Loeb emphasizes the need for investment in research to explore these potential extraterrestrial encounters.
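Loeb's velocity comparisons in the clip above are simple ratios and can be checked directly. A minimal sketch, using only the figures from the transcript (300 km/s for the hypothetical object, roughly 30 km/s for Earth's orbital speed) plus the standard value for the speed of light:

```python
# Check the velocity comparisons from the clip.
c_km_s = 299_792.458        # speed of light in km/s (standard value)
earth_orbital_km_s = 30.0   # Earth's speed around the Sun, ~30 km/s
fast_object_km_s = 300.0    # hypothetical fast interstellar object

# "10 times more than Earth's speed around the sun"
ratio_to_earth = fast_object_km_s / earth_orbital_km_s

# "three orders of magnitude smaller than the speed of light"
fraction_of_c = fast_object_km_s / c_km_s

print(f"{ratio_to_earth:.0f}x Earth's orbital speed")
print(f"{fraction_of_c:.4f} of the speed of light")
```

The ratios come out as stated: 10x Earth's orbital speed, and about one thousandth (0.1%) of the speed of light.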
“Just for an example of how Jensen Huang talked about inference, there was a slightly bizarre AI animated music video that was displayed at the end of his speech. Let's listen to that. Well, once upon an AI time, training was the paradigm. Sure, it taught the models how, but inference runs the whole world now. Vera shows us who's the boss at 35 times less the cost. Blackwell makes the token sing. NVIDIA, the inference king. I sincerely hope that they used AI to make that and did not pay a marketing firm many millions of dollars. Yeah, the quality is about what I would expect from AI. And just for folks, the references to Blackwell and Vera are references to various NVIDIA products. We also should say that Jensen announced NemoClaw, which was this enterprise platform for AI agents, basically like a secure enterprise version of OpenClaw. It's fun to watch companies scramble. So you've got NemoClaw now from NVIDIA. You've got the creator of OpenClaw, which then what is the latest name for it? Yeah, because it was ClawedBot, MaltBot, OpenClaw. OpenGlock, great. So he's now working at OpenAI and Meta has …”
Ridealong summary
AI agent orchestration is more about marketing hype and competitive posturing than immediate technological advancement.
Nvidia's AI and robotics advancements are impressive, but the industry's rush to keep up feels more like a marketing scramble than genuine innovation.
“… This is a good business to be in, 75%. They generated $34.9 billion in free cash flow. It's good. Jensen says, computing demand is growing exponentially. The agentic AI inflection point has arrived. That's what we've been talking about. That's what OpenClaw kind of showed everybody. Grace Blackwell with NVLink is the king of inference today, delivering an order of magnitude lower cost per token. Vera Rubin will extend that leadership even farther. Of course, NVIDIA is going to be reporting these sort of insane returns. It's NVIDIA. I guess the question is, is anyone else? They're at the downhill part of it. NVIDIA is just a man holding a large bucket underneath a waterfall. Of course, he's going to get wet. Is anybody else? It doesn't seem like it's reasonable. I think that my one experience with these two vibe coding projects is I immediately used all of my usage so quickly for both of them to do very simple things that were basically just to give me access to enter things into SQLite in a browser, which is not hard. It's not a task that should require that much compute. I'm not doing this in any sort of systemic way. Just remember, though, that you've spent that money now. That program …”
Ridealong summary
Nvidia's dominance in AI infrastructure is clear, but the broader market may not sustain such growth as easily, with smaller companies struggling to keep up.
“… of wafers of RAM, just the things that you carve into RAM, which is insane on its own. But it was a letter of intent, which is a concept of a deal. It's like if you and I email and say, why do we do this? It means fucking nothing. And everyone's trying to blame that. No, what it is, is just every GPU, anyone doing anything with GPUs, is just buying all the RAM. The new Nvidia supercomputer, the Vera Rubin thing, I hate that they're using these actual scientists to apply to these things. It seems so disrespectful. But 1.5 terabytes of RAM in this Vera Rubin supercomputer. And that's this. So it's like an AMD 1.5 terabytes of high bandwidth memory. And how many is that? 1.5 terabytes per how many GPUs? I'm going to have to look into the specifics here. There's a lot of GPUs. But I believe it's in the full. Okay. And sorry, not to be a woman who doesn't work in tech about this. No, this is actually very annoying. So supercomputer named after a lady and we think it's disrespectful. Yes. Tell me the name again. Vera Rubin. Okay. The Vera Rubin computer is a big box and it's really powerful. And inside, there's a smaller, there's a little shoe box that you can keep all the little random things. …”
Ridealong summary
RAM prices have surged three to four times, drastically affecting consumer technology prices. For example, Dell's XPS 14, initially expected to cost under $1,500, is now starting at $2,050 due to RAM costs. Consumers are urged to buy new devices now before prices climb even higher.
“… will spook an already nervous market. Only one link in this chain of pain needs to break. Every single part of the AI bubble, this fucking charade, is unprofitable, save for NVIDIA and the construction firms erecting future laser tag arenas full of negative margin GPUs. Tell me, what happens if the debt stops flowing to data centers? How will NVIDIA sell their so-called 20 million Blackwell and Vera Rubin GPUs they're claiming they'll ship by the end of 2026? What happens if venture capitalists start running low on funds and can't keep feeding hundreds of millions of dollars to AI startups so that they can feed them directly to OpenAI or Anthropic? What happens to OpenAI and Anthropic and their already negative margin businesses when their customers can't pay them? What happens to Oracle or CoreWeave's work-in-progress data centers if OpenAI can't pay their bills? What happens to Anthropic's $21 billion of Broadcom orders or tens of billions of dollars of Google Cloud spend? And what if I'm right about everything I've said, not just in this series, but in the episodes and newsletters before it? In the last year, I estimate I've been asked the question, what if you're wrong over 25 times? …”
Ridealong summary
The podcast segment critically examines the financial viability of AI companies like Meta, highlighting the unsustainable costs associated with GPU reliance and questioning the long-term profitability of such investments.
The podcast segment presents a critical view of the financial sustainability of AI companies like OpenAI, contrasting sharply with the optimistic projections of compute spending, suggesting that the reality of operational costs could undermine these ambitious targets.
The podcast segment highlights the unsustainable financial model of AI companies like Anthropic, emphasizing the massive costs and unprofitability that could lead to a collapse in the industry.
“… to be, I think, very interesting this year. There are at least two big announcements that he's going to make. One is something called NemoClaw, which is an open claw style agent platform. Open source, they say, or open access. Yeah. You know, and safer. That's the point. Yeah, and I would be very interested in that if NVIDIA is doing it, but of course it will require a 5080 or a Vera Rubin or some sort of fancy hardware. It's intended for companies, not for you, yeah. Right. So, yeah, they're going to do their own agent, which I think we already know is going to be called NemoClaw, right? Yeah, because it's associated with Nemo. What's their other Nemo product?”
Ridealong summary
NemoClaw is set to revolutionize AI with its open access platform designed for companies, not consumers. The upcoming GTC keynote promises major announcements, including this innovative agent platform that requires advanced hardware. This shift highlights the growing complexity and specialization in AI technology.
Top Podcasts About Vera Rubin
Better Offline
3 episodes
Intelligent Machines (Audio)
2 episodes
Last Week in AI
1 episode
The Rundown
1 episode
Morning Brew Daily
1 episode
The Why Files: Operation Podcast
1 episode
Uncanny Valley | WIRED
1 episode
Stories Mentioning Vera Rubin
Top Podcasts on Nvidia's AI Agent Orchestration
Nvidia CEO Jensen Huang is promoting the concept of AI agent orchestration, which he believes could significantly enhance the company's valuation. This approach involves coordinating multiple AI agents to work together, potentially unlocking new capabilities and efficiencies in AI applications.
Nvidia
Jensen Huang
Best Podcasts on Nvidia's GB300 & DLSS 5
At the GTC 2026 conference, Nvidia unveiled its new GB300 chip and DLSS 5 technology, projecting a trillion-dollar revenue forecast. This highlights Nvidia's continued innovation and dominance in the AI and graphics sectors, impacting the broader tech industry.
Nvidia
DLSS 5
