Best Podcast Episodes About AGI


Everything podcasters are saying about AGI — curated from top podcasts

Updated: Mar 27, 2026 – 17 episodes

Ridealong has curated the best and most interesting podcasts and clips about AGI.

Top Podcast Clips About AGI

Bankless
Ridealong summary
In the evolving landscape of artificial intelligence, verification has become the scarce resource, overshadowing intelligence itself. As AI excels in pattern recognition and replication, humans retain a unique ability to navigate uncharted territories, creating meaning and value in creative domains. This dynamic raises questions about the future of art and human coordination in an AI-driven world.
Bankless · The Economics of AGI: Why Verification Is the New Scarcity w/ Christian Catalini · Mar 26, 2026
Y Combinator Startup Podcast
“I think we're probably looking at AGI 2030. Around the time that we're going to be releasing, like maybe ARC 6 or ARC 7, you're not going to stop AI progress. I think it's too late for that. And so the next question is, okay, like AI progress is here. It's actually going to keep accelerating. How do you make use of it? How do you leverage? How do you ride the wave? That's the question to ask. Today, we're lucky to be joined by François Chollet, founder of the ARC Prize, a global competition to solve the ARC-AGI benchmark. His latest project is Ndea, a lab exploring a new paradigm in frontier AI research. François is one of the best people in the world to help us understand the current AI moment and where all of this is going. François, thank you so much for joining us today and congrats on the launch of ARC-AGI V3. Thanks so …”
Ridealong summary
The development of AGI through gaming parallels OpenAI's earlier work, highlighting the evolution from basic AI models to more complex systems.
Y Combinator Startup Podcast · How François Chollet Is Building A New Path To AGI · Mar 27, 2026
Last Week in AI
“Like, my brother in Christ, how are those both the same? like that essentially like it ties this massive capital infusion right to one of the most philosophically contested definitions in technology who decides when agi is achieved well guess what now that's a contractual question all over again i thought we just finished establishing we wouldn't have to worry about that anymore because microsoft and open ai sorted that all out it you know is no longer going to be in their thing but now we're back at it again so unclear to me at least from this frame, who decides when AGI is achieved, what the hell that even means. And the fact that that's being put in the same breath as, or if there's an IPO by the end of the year, kind of vaguely implies a sense that AGI at least might be declared to have been achieved sometime roughly on that order of magnitude, you know, a year or two. This is pretty insane. Yeah, this is pretty insane. I got nothing else. Yeah. And by the way, like valuations are not just like, …”
Ridealong summary
OpenAI's valuation is skyrocketing, raising questions about the future of artificial general intelligence (AGI). With a projected $25 billion in annual revenue, investors are betting heavily on AGI's imminent achievement, but this could lead to a massive bubble if expectations aren't met. The stakes are high as the tech landscape shifts dramatically, leaving us wondering if we're witnessing the new normal or an unsustainable trend.
Last Week in AI · #236 - GPT 5.4, Gemini 3.1 Flash Lite, Supply Chain Risk · Mar 12, 2026
The a16z Show
“You take an LLM and train it on pre-1916 or 1911 physics and see if it can come up with the theory of relativity. If it does, then we have AGI. Just today, by the way, Dario allegedly said that you can't rule out that they're conscious. You can rule out that they're conscious. To get to what is called AGI, I think there are two things that need to happen. Five years ago, Vishal Misra got GPT-3 to translate natural language into a domain-specific language it had never seen before. It worked. He had no idea why. So he set out to build a mathematical model of how LLMs actually function. The result? A series of papers showing that transformers update their predictions in a precise, mathematically predictable way. In controlled experiments, the models match the theoretically correct answer almost perfectly. But pattern matching is not intelligence. LLMs learn correlation. They don't build models of cause and effect. To get to AGI, …”
Ridealong summary
Vishal Misra's experiments show that while large language models (LLMs) can handle complex tasks, like translating natural language into new domains, they lack true intelligence. Misra argues that to achieve Artificial General Intelligence (AGI), LLMs must transition from merely recognizing patterns to understanding causation and continue learning post-training.
The a16z Show · What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado · Mar 17, 2026
AI + a16z
“then you are not making progress. Then it will just be some sort of random chaotic model. So to solve that problem is difficult. That's one aspect of it. So, you know, to get to what is called AGI, I think there are two things that need to happen. One is this plasticity, which has to be implemented through continual learning. Secondly, we have to move from correlation to causation. That's... How much is this similar to what Yann LeCun talks about? So Yann LeCun... Causality planning, you know, predicting how your action would... It is related. You know, he's coming at it from a different angle than the JEPA model, but it is related. The other thing is, you know, the first time I came on this podcast, I mentioned this test of AGI, the Einstein test. I don't remember. So I said, you know, you take an LLM and train it on pre-1916 or 1911 physics and see if it can come up with the theory of relativity. Yeah. If it does, then we have AGI. I mean, it's a high bar, but we should have high …”
Ridealong summary
To truly achieve Artificial General Intelligence (AGI), we must transition from correlation to causation and implement continual learning. Vishal Misra references the Einstein test, where an AI trained on pre-relativity physics must derive the theory of relativity to prove its intelligence. This high bar reflects the complexities of understanding and advancing beyond current machine learning models.
AI + a16z · What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado · Mar 17, 2026
Latent Space: The AI Engineer Podcast
“… the GPUs? Is it toward the product? Is it toward new research, right? Or long-term research? Is it toward, you know, near to midterm research? And so in a case where you're resource constrained, of course there's this fundraising game you can play, right? But the market was very different back in 2023 too. I think the best researchers in the world have this dilemma of, okay, I wanna go all in on AGI, but it's the product usage revenue flywheel that keeps the revenue in the house to power all the GPUs to get to AGI. And so it does make, you know, I think it sets up an interesting dilemma for any startup that has trouble raising up until that level, right? And certainly if you don't have that progress, you can't continue this fundraising flywheel. I would say that because we're keeping track of all of the things that are different, right? You know, venture growth and app in front. One of the ones is definitely the personalities of the founders. It's just very different this time. I've been doing this for a decade and I've been doing startups for 20 years. And so, I mean, a lot of people start this to do AGI and we've never had like a unified North star that I recall in the same way. …”
Ridealong summary
AGI startups face a crucial dilemma: focus on immediate product needs or invest in long-term research. With resource constraints and a competitive talent market, founders must balance product revenue with the vision of achieving AGI. This unique moment in the industry is reshaping how startups are built and funded.
Latent Space: The AI Engineer Podcast · Inside AI’s $10B+ Capital Flywheel — Martin Casado & Sarah Wang of a16z · Feb 19, 2026
TBPN
“Hey, John, why is no one talking about Daniel Gross? No one, literally no one is talking about Daniel Gross. No, he should be a household name. You should get into a taxi and they should be, oh, did you see how incredibly dialed Daniel Gross' AGI trades were in January 14, 2024? Yes. This is a great, great website. it's what is it danielgross.com slash AGI trades the classic Times New Roman Serif font 12 point just hammering it out in the vanilla HTML no need for styles no need for bootstrap templates what's that CS tailwind he didn't need tailwind for this he just wrote it and probably marked down our HTML directly and a lot of it has come true It interesting because a lot of them are framed as just like open questions But if you think about if you believe in AGI and then you go through the questions like you will see exactly what happened over the last two years And this has been the, like the underpinning thesis of situational awareness in many ways. Daniel Gross was an anchor of the fund. That of course is- And it's …”
Ridealong summary
Daniel Gross made astonishing predictions about AGI trades back on January 14, 2024, and they are coming true today. He foresaw advancements in AI capabilities, including GPT-5's ability to perform tasks autonomously, which has now been confirmed. This insight positions Gross as a crucial figure in understanding the future of AI and its implications for the workforce.
TBPN · Daniel Gross’s AGI Trades, SpaceX’s $1.75T IPO, Google Silences Sweeney | Mark Gurman, Dan Primack, Cameron McCord, Max Haot, Christian Howell · Mar 05, 2026
The AI XR Podcast
“… anyone which i think is still true right now but the social systems we have and our culture needs repair and it doesn't mean go back to pre-civil war days and and unenlightened racism it means something else like this idea of respect for education respect for teachers respect for elders just those three things would change this country i i agree i i think that part of the problem now is that the aging generation is the boomers and we are being supported by the people who still are working and you know it's those people who are taking it on the chin for ai and the older generation which is benefiting because we own capital you know i will be the last generation that was allowed to accrue capital which allows us to benefit from the scam economy i'm not suggesting anybody not participate in the scam economy but it does create a lot of anger and a lot of resentment and a lot of it is well-founded and so we're headed not in the right direction with that and i think the only solution open to us is for the people who are benefiting from this system people like elon musk and mark zuckerberg have to restore equality and that's where it has to come from and i don't think they …”
Ridealong summary
The future of Artificial General Intelligence (AGI) raises critical questions about control and human value. As we approach a point where machines can replicate themselves, the conversation shifts to whether we can ensure these systems prioritize human welfare over self-serving interests. This urgent discussion highlights the need for ethical frameworks and a reevaluation of our societal values in the age of technology.
The AI XR Podcast · America Is Racing Toward An AI Cliff With No Safety Net, Will AGI Hurt Or Harm? - Alvin Wang Graylin · Feb 10, 2026
The a16z Show
“… converged, meaning right now it's still borrowing against the future to subsidize growth currently, which you can do that for a period of time, but at some point the market will rationalize that and nobody knows what that will look like. Or the drop in price of compute will save them, who knows. Yeah, I think the models need to asymptote to specific tasks. It's like, okay, now Opus 4.5 might be AGI at some specific task and now you can depreciate the model over a longer time. I think right now there's no old model. That used to be my mental model. Let me just change it a little bit. If you can raise more than the aggregate of anybody that uses your models, that doesn't even matter. It doesn't even matter. See what I'm saying? So I have an API business. My API business is 60% margin or 70% margin or 80% margin. It's a high margin business. So I know what everybody is using. If I can raise more money than the aggregate of everybody that's using it, I will consume them whether I'm AGI or not. Unlike in the past where engineering stops me from doing that, this is very straightforward. You just train. So I also thought it was kind of like, you must asymptote AGI, general, general, …”
Ridealong summary
Many experts believe we're closer to AGI than we think, especially for specific enterprise tasks. With open-source models rapidly evolving and capital markets backing aggressive growth, companies may soon leverage AGI capabilities to dominate their sectors. This shift could redefine how businesses operate, focusing on extracting value from these models rather than just the models themselves.
The a16z Show · Capital, Compute, and the Fight for AI Dominance · Feb 19, 2026
Silicon Valley Girl
“I agree. Can you talk to me about the moment that a lot of people are expecting, and some fear it, some are excited, it's the moment of AGI. How do you define it, and do you think it's a moment in history, or is it going to happen gradually? It's not a moment. The reason is simple. Intelligence isn't just one number. We have people who are very smart on some things and stupid on other things, and it's the same with AI. We currently have AI systems that are even much stronger than humans in some ways, in their knowledge and their abilities with so many languages and so on. In other ways, they're stupid. They're like a child. Yes, progress will move on all fronts, probably, but it's unlikely we'll end up with the same capabilities as humans across the board at any moment, which means that we shouldn't be thinking of an AGI moment. We should think of particular skills that AIs are becoming better at, track those skills. For …”
Ridealong summary
AGI is not a singular moment but a gradual progression of AI capabilities, requiring careful management of specific skills and intentions.
Silicon Valley Girl · Godfather of AI: Most Jobs Will Transform in 5 Years—What to Do Now | Yoshua Bengio · Feb 16, 2026
The Michael Knowles Show
“rational soul so the machine can process information and it can it can uh simulate intelligence reasonably well to an impressive degree but that's not an intellect that's not that's not real intelligence the other reason speaking of the spiritual powers of the rational soul. The other reason that the AGI can't really get all that close to true intellect is because intellect goes along with will. You never have an intellect without a will, whether we're talking about human beings, whether we're talking about angels, whether we're talking about God himself. Will and intellect go together. The intellect deals with truth. The will deals with appetitive goods, but they go together. You come to some conclusion about the truth and that impels you in a desire. And sometimes it goes in the other direction too. The machine doesn't have a will. The machine's just a bunch of stuff. That's why it won't work. It's a little dry, I guess. It's not as sexy as the idea that Terminator is about to come pop out of our computer screens and kill us all or that we've created a god by you know that this oracle …”
Ridealong summary
Artificial General Intelligence (AGI) will never achieve true intellect because it lacks will, which is essential for understanding and desire. While machines can simulate intelligence impressively, they are ultimately just advanced performances, not real intellect. This fundamental difference highlights our ongoing confusion about what intelligence truly is and why AGI remains unattainable.
The Michael Knowles Show · Ep. 1938 - BREAKING: The World's Biggest Pimp Dies At 43 · Mar 24, 2026
Data Engineering Podcast
“… are using these models for software engineering use cases, it's doing that all the time and it's encouraged to do so. And I think that also opens up the overall question of what does it mean for an AI system to improve, improve along what axes and for what use cases? Yeah, absolutely. I think if you look at how the leading foundation model companies are thinking about it now, they don't think about AGI as a pure LLM concept, right? They are saying, oh, you know what? We realized in the middle something interesting happened. Models are improving, but models got really good at code writing. And that gave an ability for other kinds of inner loops where when the model cannot figure it out, it can write code and it could create environments where it tests the code. And, you know, so it can create systems not entirely as a model output, but as inner loop sub agents where it can keep improving. Right. And so that is seen as a much clearer path towards AGI than just seeing it just a model improve overall itself. And I think we take a similar approach in our, you know, vertical AI systems and agents that we do. I think we've got to make sure that we build these subagentic loops which are vertical …”
Ridealong summary
AI systems are evolving beyond simple models by creating sub-agents that dynamically write and improve code in real-time. This approach allows AI to interact with its environment, learn from feedback, and enhance its capabilities, paving a clearer path toward Artificial General Intelligence (AGI). By leveraging internal APIs, these sub-agents can autonomously refine their processes, leading to groundbreaking advancements in AI technology.
Data Engineering Podcast · Beyond Prompts: Practical Paths to Self‑Improving AI · Mar 16, 2026
AI + a16z
“… you know, a company is building smaller models that, you know, use the bigger model in the background, open 4.5, but they add value on top of that, now if Anthropic can raise three times more every subsequent round, they probably can raise more money than the entire app ecosystem that's built on top of it. And if that's the case, they can expand beyond everything built on top of it. It's like, imagine like a star that's just kind of expanding. So there could be a systemic, there could be a systemic situation where the SOTA models can raise so much money that they can out pay anybody that builds on top of them, which would be something I don't think we've ever seen before, just because we were so bottlenecked on engineering. And it's a very open question. Yeah. It's almost like bitter lesson applied to the startup industry. A hundred percent. Yeah. It literally becomes an issue of like raise capital, turn that directly into growth, use that to raise three times more. And if you can keep doing that, you literally can outspend any company that's built, not any company, you can outspend the aggregate of companies on top of you, and therefore you'll miss your tick this year, which is …”
Ridealong summary
In the battle for artificial general intelligence (AGI), companies face a dilemma: should they invest in product development or focus on groundbreaking research? As funding becomes more accessible, leading firms like Anthropic could outspend smaller companies building on their technology, creating a unique ecosystem where capital directly translates to growth. This tension between immediate product needs and long-term AGI goals is reshaping the startup landscape.
AI + a16z · AI’s Capital Flywheel: Models, Money, and the Future of Power · Feb 24, 2026
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
“… very, very important. I didn't know that during school because during school or during a lab, it's more toys as compared to companies. It's not that scaled up. But when you do scale up data, you scale up, compute, scale up people, right? You encounter engineering issues that you need to tackle very beautifully. And engineering is very important. That's part two that was different from what I imagined. Pretty much these two, I feel like. When you work on the model currently, is it mostly that you're solving problems that you see immediately from your hands on work? Or is it that the company says, oh, we have to achieve, let's say, Opus results. How do you set the goals? We have a meta goal at the company level. For example, we want to improve the AI's capabilities in improving productivity, for example, because that's how people view. So we have a company's mission. As a single researcher in the team, we have our own missions that we set our own goals with. What is your goal currently? For the next generation, I really want the model to be working elegantly with experts. So it's more like better collaboration with experts, with developers. That's my goal as well. But that's maybe …”
Ridealong summary
AGI's definition is constantly evolving, and we won't truly understand it until we achieve it. Researchers must focus on solving fundamental problems rather than just reading papers, as engineering challenges become critical in scaling AI. This dynamic landscape requires innovative thinking and collaboration to push the boundaries of AI capabilities.
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis · Intelligence with Everyone: RL @ MiniMax, with Olive Song, from AIE NYC & Inference by Turing Post · Feb 22, 2026
Dwarkesh Podcast
“… to the extent that we are building these RL environments, the goal is very similar to what was done five or 10 years ago with pre-training with we're trying to get a whole bunch of data, not because we wanna cover a specific document or a specific skill, but because we wanna generalize. I mean, I think the framework you're laying down obviously makes sense, like we're making progress towards AGI, I think the crux is something like, nobody at this point disagrees that we're gonna achieve AGI in the century and the crux is, you say we're hitting the end of the exponential and somebody else looks at this and says, oh yeah, we're making progress, we've been making progress since 2012 and then 2035 we'll have a human-like agent. And so I wanna understand what it is that you're seeing which makes you think, yeah, obviously we're seeing the kinds of things that evolution did or that within the human lifetime learning is like in these models and why think that it's one year away and not 10 years away. I actually think of it as like two, there's kind of two cases to be made here or like two claims you could make, one of which is like stronger and the other of which is weaker. So I think …”
Ridealong summary
Experts predict we could achieve Artificial General Intelligence (AGI) within this century, with some suggesting it might be just a few years away. The conversation revolves around the efficiency of AI learning compared to human learning and whether current methods are sufficient for rapid advancements. This debate raises critical questions about the future of AI and its potential impact on society.
Dwarkesh Podcast · Dario Amodei — "We are near the end of the exponential" · Feb 13, 2026
The a16z Show
“… basically every place except open weight models. And electronics, yeah. I guess like that's the manufacturing aspect of it. Yeah, yeah, yeah. But, you know, from where I stand, it's like, you know, if you go back to the quote unquote early days of the AI revolution, back to, you know, 2022 or so, I feel like the narrative that was told that was like the dominant narrative was this one of like AGI at that point, where it was kind of like this. It's like amazing new technology. It feels like magic, right? Like never experienced anything like this before in my life.”
Ridealong summary
The AI revolution was born in the West, yet the U.S. surprisingly lags in open weight models. As we reflect on 2022, the dominant narrative was one of AGI, a technology perceived as magical and unprecedented. This raises questions about the future of AI and America's position in this evolving landscape.
The a16z Show · From Code Search to AI Agents: Inside Sourcegraph's Transformation with CTO Beyang Liu · Jan 20, 2026
The AI Daily Brief: Artificial Intelligence News and Analysis
“… a success. OpenAI announced on Tuesday that they would be revamping the feature, writing, We found that the initial version of Instant Checkout did not offer the level of flexibility that we aspire to provide, so we're allowing merchants to use their own checkout experiences while we focus our efforts on product discovery. Basically, OpenAI will now support a variety of checkout paths, encouraging merchants to deploy their own ChatGPT apps, as well as clicking away to external shopping platforms. Still, one does have to wonder if there are bigger changes in the offing. ClickHealth's Simon Smith writes, Now, when does OpenAI kill its ad side quest, since it's like a $680 billion market dominated by incumbents versus the largely untapped roughly $40 trillion plus market of automatable knowledge work. Simon's implicit argument here is, of course, that even if the path to get there is more vague, the opportunity to reinvent how work happens in the world just feels quite a bit bigger than the opportunity to reinvent how people buy stuff on the internet. Now, with the renaming of the product team to the AGI deployment team, we've had a renewed wave of conversations about what AGI …”
Ridealong summary
OpenAI's shift from Sora to AGI deployment reflects a strategic bet that automatable knowledge work is a far larger opportunity than consumer-facing ad ventures, even as defining and achieving true AGI remains a challenge.
The AI Daily Brief: Artificial Intelligence News and Analysis · Work AGI is the Only AGI that Matters · Mar 25, 2026

Top Podcasts About AGI

The a16z Show
3 episodes
AI + a16z
2 episodes
Bankless
1 episode
Y Combinator Startup Podcast
1 episode
Last Week in AI
1 episode
Latent Space: The AI Engineer Podcast
1 episode
TBPN
1 episode
The AI XR Podcast
1 episode

Stories Mentioning AGI

Best Podcasts on OpenAI vs Anthropic AI Rivalry
OpenAI and Anthropic are intensifying their competition to develop AI agents and advance toward artificial general intelligence (AGI). This rivalry underscores the industry's growing focus on building more autonomous and capable AI systems, with significant implications for many industries and the future of AI technology.
Anthropic OpenAI
Mar 27, 2026 · 23 clips · 12 podcasts
Top Podcasts on OpenAI & Anthropic AI Rivalry
The AI landscape is buzzing with rapid developments, including Anthropic's accidental leak of its powerful "Claude Mythos" model and its focus on "Computer Use" agents. OpenAI is reportedly shifting strategy, canceling projects like Sora to focus on AGI, while Google rolls out new real-time voice models and Search Live globally. These moves signal a new era of AI capabilities and strategic pivots by major tech players.
Claude OpenAI Anthropic
Mar 25, 2026 · 18 clips · 10 podcasts
Best Podcast Episodes on AI's Impact on Jobs
Artificial intelligence continues to be a dominant topic, with podcasts exploring its profound effects on the labor market and the broader economy. Discussions range from the potential for AI to displace white-collar jobs and create new opportunities, to the ethical implications of AI-generated content and the emergence of an 'AI bubble.' The conversation also covers how AI agents are changing workflows and the race among tech giants like OpenAI and Google.
Mar 14, 2026 · 32 clips · 17 podcasts