Best Podcast Episodes About Dario Amodei
Everything podcasters are saying about Dario Amodei — curated from top podcasts
Updated: Apr 01, 2026 – 14 episodes
Ridealong has curated the best and most interesting podcasts and clips about Dario Amodei.
Top Podcast Clips About Dario Amodei
“… the most exculpating narrative. It is such an excuse for doing a bad thing. So he got together a bunch of people, and Anthropic employees most self-consciously believe that they have a safety motive and that they're doing the right thing. And I knew many of these people before Anthropic was formed, and they were motivated by protecting the world from dangerous superintelligence. And Dario just knew how to talk to them, to kind of allow them to work on a very lucrative problem, a very cool problem, instead of working on sort of the hard and frustrating work of making it safe, or accepting the possibility that it's not going to be made safe anytime soon. Or maybe it can't be made safe in an acceptable way, ever. Perhaps you can get lucky and make a superintelligence that is not deadly. But looking forward prospectively, you know, it might be impossible to get assurances like that. And I think in that case, we shouldn't go ahead. And Dario has just furnished excuse after excuse to be able to do it. He... it relies entirely on, you know, 'it's not me, it's the incentives, it's the incentives that are bad, somebody else is going to get rewarded for' …”
Ridealong summary
Anthropic's public image of prioritizing safety is contradicted by its internal practices and lobbying efforts, which align with other AI companies prioritizing competitive advantage over genuine safety concerns.
“… quickly commoditizing. And OpenAI has end-of-lifed Sora, their video product. They are going to concentrate now much more heavily on Codex, which is their coding tool. They are realizing that their only path to profitability is enterprise; that's where you grind tokens all day long. You look at what Anthropic has done. You may have political opinions or personal opinions or moral opinions about Dario Amodei; about half my clients think he should be tried for treason tomorrow morning. The other half think there should be a statue erected in Townsville, celebrating his moral center. Whatever your politics are, what Anthropic is doing is they're shipping two products a day that are unbelievable. They're shipping machines. And they've got to this point of what you might want to call recursive self-improvement of the models, and also just recursive self-improvement of the techniques they're using to create the products and services. Now, the model Claude, the actual large language model, is a bunch of different things, as is the GPT-5 family. And when we think about someone, the part of your question was, someone sits down at ChatGPT, and specifically they mean ChatGPT, they don't mean Gemini, they …”
Ridealong summary
AI has revolutionized productivity, with one company experiencing a staggering 400% growth in just one year. This leap is attributed to the staff's enhanced efficiency and output through AI tools, showcasing both the potential and challenges of integrating technology into the workplace. As AI continues to evolve, it raises questions about its impact on our daily lives and decision-making processes.
“… European Nuclear Research, CERN, which is a sort of technical agency that oversees nuclear research on a multinational basis. I think he would like some sort of global body to impose rules on what kind of AI should be let out into the wild. But at the same time, he knows that politically that's not on the cards. And he has a sense of timing about when you should raise these issues. And so, whereas Dario Amodei took on the Pentagon by trying to assert safety principles and then just got rolled, I think Demis, when he does that, is going to feel that the door is half open and he can give it a push. And we'll see. Of course, sometimes people keep their capital dry for so long that they never use it. But we'll see if the moment comes when he does use it. It'll be very interesting. Especially, just to close, there was a great review of your book in the Financial Times, which ends with this: whether and how Demis ever achieves AGI will form the defining chapters of his extraordinary and unfinished biography. What do you think about that? And what is the next chapter for him? And will you write another follow-up book, do you think? You know, I tend not to write follow-ups about the same thing, the …”
Ridealong summary
Demis Hassabis, co-founder of Google DeepMind, envisions a global body to regulate AI development, shifting from his earlier belief in a single lab's control. As he approaches his 50th birthday, he continues to lead groundbreaking advancements in AI, hinting at aspirations for a second Nobel Prize. This ongoing journey makes his biography an exciting narrative still unfolding.
“So when I interviewed Dario, the point I was trying to make is not that I think the singularity is two years away and therefore Dario desperately needs to buy more compute, although the revenue is certainly there that he needs to buy more compute. But the point I was trying to make is that, given what Dario seems to be saying, given his statements that we're two years away from a data center of geniuses, certainly not more than five years away, a data center of geniuses should be earning trillions upon trillions of dollars of revenue. It just does not make sense why he keeps making these statements about being more conservative on compute or, to your point, being less aggressive than OpenAI on compute. And I guess that point got lost, because then people were, like, roasting me about, oh, this podcast was like …”
Ridealong summary
Dario's statements suggest we're on the brink of a 'data center of geniuses' yet he remains conservative about compute investments. This contradiction raises eyebrows, especially as the value of GPUs is expected to skyrocket, pushing companies towards higher-quality models. The discussion highlights a critical economic principle that could reshape market dynamics in AI.
“… any lawful means decided upon by a democratically elected government. You know, this is, you hate to see it. You hate to see AI and democracy in conflict. You know, it's rare. It's rare. As we heard about two episodes ago. Of course it's not. This is to be expected, right? So what we're going to get into today: all of the shit going on between Anthropic and the Pentagon, the Department of War, Dario and Pete Hegseth. Amodei, right? Wait, I'm pulling in. I'm sorry. How do you actually say his name? Dario Amodi? I'm going to be so real with you, it's very funny. Um, I mean, I'm asking Ed how to say it. I know, one is, you're asking me, but also the other is, my pronunciation of him has been so distorted, because I talk with Ed so much about Anthropic, and he calls him Wario, you know. He just, and then he says the last thing with a little bit of the British accent. So I actually have, I just find myself always saying Wario. So all of us have different levels of the 'we learn how to say stuff by reading it,' you know, syndrome. Yeah, this is also, yeah, syndrome. Yeah, this is a fun one for my tongue. Wario Amadeus. Now I want to say Amadeus. Now you're trying to say Amadeus. Jeremy's going to hit …”
Ridealong summary
The Pentagon's scrutiny of Anthropic reflects a broader, predictable conflict between democratic governments and AI corporations over ethical AI use, with AI companies resisting government use of AI for military purposes.
“… not unreasonable. And the analysis with regards to the administration and so on is fairly true. I mean, it's just true that OpenAI has been friendly to Trump, in a smart, honestly very effective move by Altman. Worth remembering that Elon Musk was going after OpenAI hard and was a strong ally of Trump. So in some sense, Sam Altman doing this was very important for OpenAI. But on the flip side, Dario pointing out that OpenAI has cozied up to Trump is, as a result, very legit. But this stuff about employees being gullible is much less easy to kind of justify. Yeah, well, I will say, if you look at Rune's, and this is like inside baseball in the Twitterverse here. But if you look at Rune's tweets, so Rune is clearly an OpenAI, he's an OpenAI employee in a trench coat, just anonymous online. Evangelist, you might say. Yeah, one of the very public-facing figures from OpenAI. If you follow him on Twitter, you know, he posts quite a lot. And I got to say, I mean, like, I still find him fascinating as a study, but I used to find him fascinating for sort of, let's say, objective analysis. He used to be a lot more objective on OpenAI and Sam and all that stuff. …”
Ridealong summary
OpenAI's recent actions have raised serious ethical questions, with critics labeling their approach as opportunistic. While some employees protest against business dealings with controversial entities, others seem to justify the company's decisions, leading to a divide in perspectives. This situation highlights the complex moral calculations facing tech companies in today's geopolitical landscape.
“OK, we interrupt this news segment for what is probably going to be the contender of the year for most awkward moment ever in the entire A.I. industry. So for context on this video, a lot of the A.I. warlords, sorry, overlords are in India right now because they're announcing a bunch of new investments in A.I. And obviously you have the CEOs from the top A.I. labs, including Sam Altman and Dario Amodei of OpenAI and Anthropic, who were sat, or rather stood, next to each other during the celebration. And they were asked, or prompted, to hold hands. And as you can see on this video, they were not down to play ball. Look at that. Look at this. Look at this next clip. They kind of awkwardly hold that. For those of you who are listening, they awkwardly hold their hands up in the air, but they kind of cross arms. They don't want to hold each other's hand. Just so awkward. Yeah, it's funny that this A.I. Impact Summit seemingly came out of nowhere. It's this huge summit in India and they got every CEO there. I mean, I see Sundar, and who else is there? Yeah, we have DeepMind representation, OpenAI, Anthropic, Microsoft, basically every company is covered. And as they're on stage kind of …”
Ridealong summary
At the A.I. Impact Summit in India, a hilariously awkward moment unfolded when OpenAI's Sam Altman and Anthropic's Dario Amodei were prompted to hold hands on stage but instead crossed their arms in refusal. This moment highlights the intense rivalry between top A.I. companies, raising questions about their ability to collaborate effectively for the greater good. The tension, however, may actually drive innovation as both companies compete to release new models at an unprecedented pace.
“… that there's a self-reinforcing loop that happens when you do that. What would you say to the people who take your views, take the knowledge, take the lessons, but then they say, well, AI is here now. And if we've got AI leaders telling us that half of entry-level white-collar work is going to be wiped out in the next one to five years, something that we have literally heard from people like Dario Amodei and other AI leaders, then the rules of the game have just entirely changed now. If there were things that worked for you in your career, and we know that your career was a massive success, well, maybe it's not going to work this time around. What would you say to those people? There are a number of people... I think there was a Gallup survey in 2023 that said 59% of people were, they use this phrase quiet quitting, but I would say, you know, ambivalent about their job, or indifferent. They're not engaged or passionate about their job. And to me, those are the ones, and unfortunately it's a really big group of people, that are most at risk from AI. If you think about it, the rote best practice of yesterday is exactly what's in the models, right? Like, they studied the best …”
Ridealong summary
In the age of AI, those who are curious and proactive thrive, while the indifferent risk being left behind. Bill Gurley explains that individuals who embrace AI and understand its potential will become invaluable, while ambivalence can lead to job insecurity. The key to survival is becoming the most AI-enabled version of yourself, leveraging knowledge to stay relevant in a rapidly changing landscape.
“… then what is the path to acceptably getting that in. And so some of the conversation that society is going to have now is: what are the appropriate ways we as a society want this technology to be used? And how do people decide about what to do with the things in this factory, and how to evaluate them and how to proliferate them, so society gets the benefit? Back to jobs. Anthropic CEO Dario Amodei has predicted on several occasions that AI will destroy half of all entry-level white-collar positions and spike unemployment to as high as 20%, which would be the highest unemployment rate since the Great Depression. This is a near-term prediction. He said this could happen in as soon as five years. Do you agree with that forecast? We're talking about one of the potential things that can happen. And I think it's worth thinking that this is a choice. I don't agree with this, because I think it's a choice that we can make. And also, my personal view, based on the data that I look at, is that big changes in employment take a long time to filter through to the economy. And even with the magnitude of what we're talking about, you might expect it to take longer. But let's say that there is the …”
Ridealong summary
AI development is likened to nuclear weapons, raising concerns about private firms profiting from potentially dangerous technology.
“… the public along with them by showing them dazzling demonstrations of the technology, by crafting a mission that will sound really good and make people give more leniency to their companies. So they know they're doing the myth-making. And also, I think many of them lose themselves in the myth, because they have to live and breathe and embody it day in and day out. And so when, you know, Dario says he thinks that 10 to 25% of the future could be catastrophic, or whatever, the probability is 10 to 25%, he is actively engaging in the myth-making, but also he's losing himself in the myth. Like, I think if you were to ask him, do you genuinely believe that, he would be like, yes, I genuinely believe that. Because there's been a blurring of when he's saying something just to say something versus when he actually believes what he's required to believe in order to then continue doing the things that he's doing. And this is the whole psychology of cognitive dissonance, right? Where the brain struggles to hold two conflicting worldviews at the same time, so it's incentivized, or endeavors, to dismiss one. So if you, you know, if you wanted to be a healthy person, but also a smoker, and I pointed out …”
Ridealong summary
AI executives may be knowingly creating dangerous myths similar to those in the sci-fi epic 'Dune' to control public perception and secure funding. Karen Hao reveals how this myth-making leads to cognitive dissonance, where leaders blur the line between their beliefs and the narratives they promote. This raises ethical concerns about the impact of AI on vulnerable communities worldwide.
“… get into the good worlds where we retain control, and humans are in charge of steering, and what happens. And mostly I expect that most groups, that even if a small group has the ability to steer that world, they'll steer it in ways that we're pretty happy with. A lot of people talk about how Altman might try to make himself God Emperor, or Demis Hassabis might try to make himself God Emperor, or Dario Amodei, or whatever. And that's not my preferred outcome, to be clear. But I don't think that if Sam Altman made himself God Emperor that, like, I would be that sad about it in terms of my practical lived experience. I think my life would be fine. I think my children would be fine. Yeah, he does have a little bit of a Roman, uh, Caesar kind of vibe to him, where I think he does fancy that sort of, uh, it seems that he does fancy that sort of power, perhaps. But also, when he does things like fund universal basic income experiments out of pocket, I view that as, like, honestly, probably a genuine, you know, magnanimity on his part that I would expect to probably extend into... It's very easy to be magnanimous when there's a true, full abundance of resources, right? Like, he could, you know, in theory keep 99% of …”
Ridealong summary
The fear of losing control over AI could lead to a society where a select few manipulate technology for their own gain. Zvi Mowshowitz warns that if we don't steer AI responsibly, we risk creating a world where only a cabal benefits, leaving the majority behind. Instead of seeking personal escape, we should focus on building a better future for everyone.
“… talk about these best-case projections, because they've taken on a lot of money. They've spent a lot of money. It costs a lot of money to run them, and this is worrisome to investors, and they would rather you not pay attention to it. This goes back to what I've been talking about with some of these vibe-reported articles where people, reporters, have been saying: what possible motivation could someone like Dario Amodei, the person who knows this technology best, what possible motivation could he have to be saying, I'm worried that this technology is going to take away all the jobs? This is the motivation. They've only made $5 billion against $10 billion of training spend, and God knows how much inference spend, and $60 billion of investment, over their entire existence.”
Ridealong summary
Anthropic claimed they would earn $19 billion this year, but court filings reveal they've only made $5 billion since 2023. This discrepancy highlights Silicon Valley's tendency to project inflated future earnings while downplaying current performance, raising concerns for investors about the company's financial health and transparency.
“All right, here's something that interests me, anyway. Mileage may vary. Fox reports SpaceX and Tesla CEO Elon Musk gave a two-word retort after Anthropic leader Dario Amodei claimed in an interview that he isn't sure if his company's AI models have gained consciousness. 'Anthropic says Claude may or may not have gained consciousness, as the model has begun showing symptoms of anxiety,' read a post on X by cryptocurrency-based prediction market Polymarket, to which Musk replied, 'He's projecting.' I don't really know what that means, exactly. Uh, the comment from Musk, who's a founder of xAI, comes as Anthropic is at odds with the Pentagon over its use in a separate matter. Um, in an interview with The New York Times, Amodei, when asked about AI and consciousness, said, we've taken a generally precautionary approach here, and we don't know if the models are conscious. We're not even sure that we know what it would mean for a model to be conscious, or whether a model can be …”
Ridealong summary
Elon Musk dismissed claims that AI models like Claude may be gaining consciousness, arguing that true consciousness involves self-awareness and experiential understanding, which AI lacks. This debate, sparked by Anthropic's Dario Amodei, raises critical questions about what consciousness truly means and whether AI could ever achieve it. Musk's perspective reflects a growing skepticism about the notion of conscious machines.
“… you like it or not. Now the question is, how do we make sure we do it maturely, with the right people in charge, so that it doesn't get infected by bad political ideologies or anything else? And that's what brings us to this, which is Anthropic, which is another one of these models. And it's been working, at least potentially working, with the government on a few different things. Here's Anthropic CEO Dario Amodei talking to CBS about how they try to be as neutral as possible, when he was asked about a left bent, you know, basically infecting the AI models. President Trump has called Anthropic a left-wing woke company. Is this decision at all driven by ideology? Look, I can't speak for what other parties are doing and what they're doing. But you, and here in Anthropic? Yeah, look, we, I think, have tried to be very neutral. We speak up on issues of AI policy where we have expertise. We don't have views, we don't think about general political issues. And we try to work together whenever there's common ground. All right. I said I was trying to avoid the dystopian version of it, but Sam Altman, in talking about, you know, intelligence will be, …”
Ridealong summary
Imagine a future where intelligence is bought and sold like electricity—this chilling scenario is becoming a reality. As AI companies like Anthropic navigate political pressures, their decisions could shape military operations and societal norms. If these technologies are politicized, we face a future that could lead to dystopia rather than prosperity.
Top Podcasts About Dario Amodei
Connections Podcast
1 episode
The AI XR Podcast
1 episode
TechStuff
1 episode
Dwarkesh Podcast
1 episode
This Machine Kills
1 episode
Last Week in AI
1 episode
Limitless Podcast
1 episode
Prof G Markets
1 episode
Stories Mentioning Dario Amodei
Top Podcasts on Anthropic AI Code Leak
Anthropic has experienced a major leak of its AI codebase, which has revealed details about its upcoming models and features. This breach could impact the company's competitive position in the AI industry and raises concerns about intellectual property security.
AI code leak
Anthropic
Best Podcasts on OpenAI vs Anthropic AI Rivalry
OpenAI and Anthropic are intensifying their competition in the development of AI agents and advancements towards artificial general intelligence (AGI). This rivalry highlights the growing focus on creating more autonomous and capable AI systems, which could significantly impact various industries and the future of AI technology.
OpenAI
Anthropic
AGI
Top Podcasts on Pentagon vs Anthropic AI Clash
The ethical implications and military applications of artificial intelligence are a major discussion point, highlighted by the lawsuit filed by AI firm Anthropic against the Pentagon. Anthropic alleges it was blacklisted for refusing to waive ethical restrictions on using its Claude model for autonomous weaponry and mass surveillance. This conflict underscores a broader debate about AI's impact on jobs, its role in warfare, and the responsibility of tech companies in developing powerful AI systems.
Best Podcasts on Anthropic's Pentagon Clash
AI company Anthropic is suing the Trump administration after the Pentagon officially designated it a 'supply-chain risk,' effectively blacklisting it from federal defense contracts. This escalation follows Anthropic's refusal to waive ethical restrictions on using its Claude AI model for autonomous weaponry and mass domestic surveillance, sparking a debate about AI ethics, government contracts, and the future of AI in military applications.
Amazon
Nvidia
OpenAI
Sam Altman
