Best Podcast Episodes About Anthropic
Everything podcasters are saying about Anthropic — curated from top podcasts
Updated: Apr 02, 2026 – 64 episodes
Listen to the Playlist
Ridealong has curated the best and most interesting podcasts and clips about Anthropic.
Top Podcast Clips About Anthropic
“This morning, a security researcher posted a single link on X, and within hours, it had 3 million views and had millions of copies backed up all across GitHub. By the afternoon, when we're recording this episode, Anthropic is scrambling to delete old versions of their NPM package, but it was too late. What leaked was the entire source code of Claude Code. Every single line, 512 lines of TypeScript, 1,900 files, every tool, every permission system, every internal code name was leaked, all because someone forgot to include a single debugging file from a public package. And that story alone would be a major story. But what makes this even more crazy is what people found buried inside the code. And now we have information about every feature that's coming down the pipeline, as well as all of the secrets that Anthropic and the Claude team didn't necessarily want us to know. This is a really big leak. I can't believe this happened. I mean, big leak is one way to describe it. Absolutely terrible for the Anthropic …”
Ridealong summary
The leak of Anthropic's Claude Code source is a catastrophic security failure, revealing the company's entire roadmap and compromising its competitive edge.
The leak is a significant IP loss, but its impact on valuation may be limited, since the true value lies in the model's unique architecture and weights.
The leak is a reputational hit for Anthropic but a boon for the open-source community, exposing the roadmap and enabling broader experimentation with its features.
“Moving on, Anthropic needs some Flex Seal because it's springing leaks left and right. Last Thursday, Fortune reported that the AI company had accidentally made thousands of internal files publicly available, including an announcement of a powerful new model that is waiting in the wings. And then earlier this week, the big one, nearly 512,000 lines of source code were accidentally posted, laying out the blueprint for Claude Code, its most valuable product. It wasn't the actual AI model itself that was posted, but it did paint a picture of how Anthropic coaxes it to behave and perform tasks autonomously for millions of users. According to the Wall Street Journal, the leak was a goldmine for competitors who want to know what Anthropic's secret sauce is. One of the leaked features involves the model revisiting …”
Ridealong summary
The leak of Anthropic's Claude Code source is a major blow to its reputation and business, exposing trade secrets to rivals and developers alike and handing competitors a significant advantage.
“… to it. That's it. Great. Now I really hope that Tesla adds CarPlay. I'm still waiting for that update. Every time an update comes, I'm like, CarPlay? But no. Radio silence. No. I don't think it's going to happen. We'll see if it happens. Also, this was huge news. So there was a massive Claude call. Claude call. Wow. Claude Code. Claude Code leak. I don't know why I couldn't say that. A bunch of Anthropic's code got leaked all over the internet. They shut down a bunch of GitHub repos as they were trying to contain the leak. Lots of people dove into the code. Some people reverse engineered it. It was discovered that Anthropic is trying to build a Tamagotchi-style pet and an always-on agent into Claude, which, okay. Side note, I did use Dispatch. I'm using Dispatch more, and it's pretty cool. Also, have you ever used Dispatch with Claude? Isn't that where you can send messages from your phone or whatever? You can send a message from your phone and then have your computer do stuff. The problem is it doesn't talk back to you, so I never know what it's doing. It just, it says, like, read, like it gives you a read receipt, but then I don't know what it's doing, so it's like you get home and it's just …”
Ridealong summary
You can now chat with ChatGPT directly through CarPlay, making your drive more interactive than ever. Meanwhile, a major code leak from Anthropic reveals their ambitious plans for Claude, including a Tamagotchi-style pet feature. This leak poses significant competitive risks for the company as they scramble to contain the fallout.
“And contained in here was obviously all sorts of information about how the product works and their approach, etc. There were also product updates that haven't been released yet that gave everybody in the world, including their competitors, insight into, okay, here's what Claude Code is going to be, the direction that they're moving in next. And I'm sure there, I mean, of course Anthropic is embarrassed, and it's a major blow to them. It's a big problem for them. But to me the significant piece is just that even the ones who are supposed to be the responsible good guys, being safe and cautious, etc., are prone to really incredibly sloppy, humiliating errors. And I don't think that should comfort anyone about the direction that AI is going in. Very true.”
Ridealong summary
Even responsible AI companies like Anthropic, known for being safe and cautious, are prone to sloppy and humiliating errors, which raises concerns about the direction the industry is heading.
“… on that. I think if you look at people like Dario, even though Gary Marcus has a point that the current LLM paradigm is not accurate enough and reliable enough to get you to AGI, if you keep instrumenting these technologies with enough data and enough compute and you keep scaling them, they're reliable enough that they can do, I mean, if you're coding, if you're automating 90% of the code written at Anthropic, that's the stat, by the way. So there you are at Anthropic. It's automating 90% of all the programming happening at Anthropic. Right. When you go to automate... Only 10% of it is coming from humans, and the rest is recursive. That's right. We are extremely close to recursive self-improvement right now. The companies, I think, are planning to do this in the next 12 months. The asteroid is coming for Earth. This is the last moment that we have to steer and say that if we don't want this anti-human future that we're heading towards, we can change it. And part of what we're promoting right now is this is not inevitable. It is obviously very late in the game. It obviously looks very difficult.”
Ridealong summary
Tristan Harris warns that we are at a critical juncture with AI development; if we don't steer its progress wisely, we risk creating an anti-human future. He compares the unchecked acceleration of AI to driving a car at 2000x speed without steering, emphasizing the need for control and oversight. The current trajectory could lead to societal degradation, similar to the fallout from poorly governed technologies like social media.
“And it's a form of destiny because technologically, progress just keeps climbing. And if I don't do it, someone else will do it. And it's the most exculpating narrative. It is such an excuse for doing a bad thing. So he got together a bunch of people, and the Anthropic employees are the ones who most self-consciously believe that they have a safety motive and that they're doing the right thing. And I knew many of these people before Anthropic was formed, and they were motivated by protecting the world from dangerous superintelligence. And Dario just knew how to talk to them, to kind of allow them to work on a very lucrative problem, a very cool problem, instead of working on sort of the hard and frustrating work of making it safe or accepting the possibility that it's not going to be made safely anytime soon. Or maybe it can't be made safely in an acceptable way ever. Perhaps you can get lucky and make a superintelligence that is not deadly. But looking forward prospectively, you know, I don't know. It might be impossible to get assurances like that. And I …”
Ridealong summary
Anthropic's public image of prioritizing safety is contradicted by its internal practices and lobbying efforts, which align with other AI companies prioritizing competitive advantage over genuine safety concerns.
“In the legal area, Anthropic research found one of the largest gaps between tasks within AI's reach and observed adoption, arguing that AI was capable of around 80% of the work, even though only 15% of those tasks actually saw any adoption. At the same time, dedicated tools like Harvey saw their valuations and usership go up and up and up. Finance showed one of the biggest challenges that enterprises will face this year with AI, which is access to quality data. The finance industry has actually been a fairly aggressive adopter, but 91% of firms report fairly low impact. They cite as their biggest obstacle data quality, which makes sense given what finance does. This is hardly a finance-only concern, however, which is why so much of the conversation in 2026 is about context and data. HR is one of the areas …”
Ridealong summary
Anthropic is rapidly closing the revenue gap with OpenAI, becoming the new enterprise default for AI agents.
“Well, shot and chaser on that. Over the weekend, Anthropic adjusted Claude session limits and says users will hit their limits faster during peak hours amid compute strain due to Claude's new popularity. Quoting TechRadar, Anthropic is reducing message limits for even Pro and Max customers during its peak hours in a new effort to cope with demand. To manage growing demand for Claude, we're adjusting our five-hour session limits for free slash pro slash max subs during peak hours. Your weekly limits remain unchanged, said Tariq Shehipar, an engineer who works on Claude Code, in a post on X. Unlike ChatGPT, which has a daily message limit, Claude operates in five-hour windows. Once you've hit your limit in a five-hour window, you have to use a less premium model or wait for your next window refresh to use it again. An AI company changing the rules …”
Ridealong summary
Anthropic has changed its Claude session limits, resulting in users hitting their message caps faster during peak hours, which may lead to frustration. This adjustment affects all tiers, compelling users to strategize their usage times to avoid interruptions. Meanwhile, Microsoft is testing a new tool that lets users compare AI models for better output, adding another layer of complexity to the AI landscape.
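The clip above describes the mechanics of Claude's five-hour session windows: hit your cap within a window and you wait for the next refresh. Purely as an illustration, here is a minimal TypeScript sketch of a rolling five-hour usage window; the limits, peak-hour behavior, and names are hypothetical assumptions, not Anthropic's actual implementation.

```typescript
// Minimal sketch of a rolling five-hour usage window, as described in the clip.
// The caps and peak/off-peak split are illustrative assumptions only.

type Session = { windowStart: number; used: number };

const WINDOW_MS = 5 * 60 * 60 * 1000; // five hours
const PEAK_LIMIT = 20;                // hypothetical messages per window at peak
const OFF_PEAK_LIMIT = 45;            // hypothetical messages per window off-peak

function allowMessage(session: Session, now: number, isPeak: boolean): boolean {
  // Start a fresh window once the previous one has elapsed.
  if (now - session.windowStart >= WINDOW_MS) {
    session.windowStart = now;
    session.used = 0;
  }
  const limit = isPeak ? PEAK_LIMIT : OFF_PEAK_LIMIT;
  if (session.used >= limit) return false; // wait for the next window refresh
  session.used += 1;
  return true;
}

// Example: a user at peak hours is allowed until the per-window cap is hit.
const session: Session = { windowStart: Date.now(), used: 0 };
console.log(allowMessage(session, Date.now(), true));
```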
“… they, and I'm going off of a translation from fucking Grok, so forgive me, expected a premium experience for $200 and what they got was constant limit stress. I've linked a lot of these in yesterday's free newsletter, The Subprime AI Crisis Is Here. I really advise you read it, but also you can see how many people I'm at, or just go on Twitter and search Claude limits. It's not great. Now, while Anthropic technical staff member Lydia Halley posted that Anthropic was aware of people hitting usage limits in Claude Code way faster than expected, and that some investigation of some sort was taking place, it's hard to imagine that Anthropic had no idea that these limits were so severe, or that any of this was a surprise. Now, as I wrote this sentence that I'm reading on Tuesday, March 31st, it doesn't appear that any changes have been made. People are still complaining about hitting their limits in a few prompts, and Anthropic has yet to update anyone, in my opinion, because these are the rate limits they decided were necessary to keep the business going and roll their nasty ass into IPO. A few days previously, though, Anthropic could also accidentally, and I put that in quotation marks, leak …”
Ridealong summary
The reliance on large language models like Claude for code generation is inherently risky and leads to untrustworthy software development practices.
Anthropic's leak of Claude's source code and other assets seems suspiciously convenient, suggesting either gross negligence or a deliberate strategy to generate buzz and manipulate market perception.
“This is allocation of the total fund. So Anthropic is 20% of the fund. Databricks is about 17%. OpenAI is about 10, Anduril's 6.9, Ramp 5.1, SpaceX 5%, Epic Games 3.5, Flock Safety 3%, dbt slash Fivetran is 2.8, Vanta is 1.9, Canva is 1.8, Loyal is 1.5, ServiceTitan 1.4, and so on and so on and so on. But with these companies, how did you get allocation into them? And what is like, I just like, this is just such an interesting vehicle. And I don't think anybody like actually grasped like outside of maybe the hundred thousand that have invested into it. But outside of this, like, I don't hear my family talking about getting access to this. Although my dad's always like, how do I get access to Anduril or SpaceX? And I'm like, well, I can tell you now, but before it wasn't something that was talked about. So how do you get access to these …”
Ridealong summary
Investing in AI companies like Anthropic and OpenAI was once seen as a risky move, especially during the tech downturn of 2022. However, VCX's innovative approach allowed them to secure significant stakes in these firms by capitalizing on distressed assets when few others would. This strategy not only positioned VCX as a leading player in AI investments but also opened the door for everyday investors to access high-potential tech opportunities.
“And now Holden Karnofsky, who's senior advisor at Anthropic, and recently, you know, Anthropic just updated their responsible scaling policy, basically backing off from some of the commitments that they had previously made to, like, pause development under certain circumstances. They're no longer committing to that. So he put out a long defense of why. But a previous thing that he had written was Success Without Dignity. He was like, you know, I don't think we're doing a great job collectively of trying as hard as we should be trying. And the risk that we're running is a lot higher than I, speaking as Holden, you know, would like it to be. But the problem does look much more tractable than it used to look. And he used to, when five years ago people would come to him and say, what can I do for safety, he'd be like, I don't really know. And now he's …”
Ridealong summary
Holden Karnofsky, a senior advisor at Anthropic, argues that while government involvement in AI safety is necessary, nationalizing AI development is not the answer. He believes that the current government lacks the competence to handle such a powerful technology and advocates for a balance between private innovation and regulatory oversight. This perspective highlights the urgent need for responsible AI practices amid growing risks.
“… lawyers for the artificial intelligence startup and the U.S. government appeared in court for a hearing. Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation, Lynn wrote in the order. A final verdict in the case could still be months away. During Tuesday's hearing, Lynn pressed the government's lawyers about why Anthropic was blacklisted. Her language in Thursday's order was even sharper. Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government, she wrote. Following the ruling, Anthropic said it's, quote, grateful to the court for moving swiftly, end quote. Meanwhile, the interwebs have been buzzing about what might be waiting in the wings at Anthropic, quoting Fortune. AI company Anthropic is developing and has begun testing with early access customers a new AI model more capable than any it has released previously, the company said following a data leak that revealed the model's existence. An Anthropic spokesperson said the new model represented a step change in AI …”
Ridealong summary
The leak of Anthropic's Claude AI code reveals both exciting advances in AI capabilities and significant cybersecurity vulnerabilities that could undermine the company's strategic initiatives.
Anthropic's Claude Mythos model represents a significant leap in AI performance, positioning the company at the forefront of AI innovation.
“Three weeks ago, rumors broke that a major AI lab had built a model more powerful, more dangerous, and more expensive than any AI model that we had seen before. We didn't know which model lab it would be. We didn't know what the model was called. And then just a few days ago, Anthropic leaked a model called Claude Mythos, which is supposedly more powerful than any model that they've ever built before, a tier above Opus 4.6, which is what we see today. This model is actually so good that it is considered a cyber security threat and can't be rolled out to the public just yet. But it's not just Anthropic that's building a model that is close to AGI like this. OpenAI has a model codenamed Spud, Google has a model codenamed Agent Smith, and there's many more to come this year. But the Anthropic leak wasn't intentional. This was discovered by accident last Thursday, March 26, by a Fortune reporter who discovered that Anthropic's content management system had a configuration error. And for those who aren't familiar, the content management system, it's how the web server …”
Ridealong summary
The leak of Anthropic's Claude Mythos model reveals a step change in AI capabilities alongside cybersecurity risks serious enough to keep the model from public release, highlighting the dual-edged nature of rapid AI advancement.
OpenAI's discontinuation of Sora and focus on a new model, codenamed Spud, suggests a strategic consolidation to streamline its offerings and boost its valuation ahead of a potential IPO.
OpenAI and Anthropic are in a fierce race to outdo each other with massive AI models, but this could be more about boosting their profiles ahead of potential IPOs than genuine technological advancement.
“Anthropic is sort of the most AGI of all the frontier labs, and I think they made this bet on coding as their way to get to recursive self-improvement. As it turns out, it was a very good business move as well, because code is the gateway into enterprise and enterprise IT budgets. And so they were able to grow revenue pretty quickly as a result of getting into enterprise. Also, coding seems to be the basis for these other product extensions. So like you said, they went from Claude Code to Claude Co-work. The idea being that, well, if you can generate code, you can also generate PowerPoints or spreadsheets. And you do that by generating the code to create that output. So that was the first extension. Now they are extending into agents. This computer use product is kind of like an OpenClaw knockoff. So it looks like …”
Ridealong summary
Anthropic's strategy of focusing on coding has led to rapid revenue growth and new product extensions, positioning them as a leader in the AI market. By entering enterprise IT budgets, they've opened doors to generate not just code, but also presentations and spreadsheets. However, their push for regulatory frameworks raises concerns about creating barriers for new competitors.
“You actually mentioned some AI stuff, and some stuff has been shaking up, that I thought was a very interesting topic here. Uh, hello, number two reads: Anthropic sues Trump admin for blacklisting after clash on using AI for surveillance and weaponry. Okay, now everybody just take a second and read that one more time. Okay. All right. Anthropic, an AI company, is suing the federal government, the Trump administration, for being blacklisted by them because they refused to use their AI platforms for surveillance and weaponry. Unregulated. On American citizens. I know. Okay. Yeah. We got to talk about this. Yeah. This is fucking crazy. This is crazy. So Anthropic on Monday, that's today, this rolled out, they are suing the Trump administration for effectively blacklisting the AI firm after it sought to block the Pentagon from using its chatbot for mass surveillance and weaponry. The San Francisco-based tech firm accused the war secretary, Pete Hegseth, of …”
Ridealong summary
The Pentagon's blacklisting of Anthropic is a retaliatory move for the company's refusal to compromise on AI ethics, marking an unprecedented and concerning use of power.
“… think even in terms of tech, I don't know that I've ever seen a weekend like this, where, you know, tech was the story, the story, even when something as consequential as the U.S. invading another country was, well, that's almost coincidental, right? Oh, and by the way, we also invaded a country, or bombed the country, and we used Claude for that. We used Claude for it, right? So the whole Anthropic OpenAI saga here would be huge on its own. Yes. Add a war to it. Just for context for anyone who doesn't know what we're talking about. On Friday, Trump directed every federal agency to immediately cease use of all Anthropic technology. This was the culmination of a simmering brouhaha between Anthropic and the Department of Defense. In part, we spoke about this last week, it's this kind of paradoxical thing where Pete Hegseth has simultaneously designated Anthropic a supply chain risk to national security. And they also used Anthropic and Claude in particular as part of their operations to enact war in Iran. Yes, yes. The number of aspects of this to unpack are so many. So we discussed this a bit last week, where I think Leo's starting point was similar to Strategery's this week. The …”
Ridealong summary
The Pentagon's designation of Anthropic as a supply chain risk is both a moral stance against unethical AI use and a practical acknowledgment of the technology's current limitations.
The Trump administration's use of AI in military operations is fraught with ethical dilemmas, as tech companies like Anthropic resist their technology being used for autonomous warfare due to moral and practical concerns.
“… does this governance regime look like now that we've given a load of basically schlep work over to machines that work on our behalf? And how are you doing it? You said it's everybody's problem, but you're ahead on facing this problem. And the consequences of getting it wrong for you are pretty high. Right, if Claude blows up because you handed over your coding to Claude Code, that's going to make Anthropic look fairly bad. It would be a bad day for Anthropic if Claude, like, rm -rf'd your entire file system. I have no idea what that means, but great. Of course. Deleted the code. It would be bad. Yeah, seems bad. So as you're facing this before the rest of us, like, don't pass the buck over to society here. What have you, what are you doing? The biggest thing that is happening across the company and on teams that I manage is basically building monitoring systems to monitor all of the different places that the work is now happening. So we recently published research on studying how people use agents and how people let agents kind of push increasingly large amounts of code over time. So the more familiar you get with an agent, the more you tend to delegate to it. That cues us to all kinds of patterns …”
Ridealong summary
As AI systems take over low-level tasks, the need for oversight technologies becomes critical. Anthropic is pioneering monitoring systems to evaluate how AI agents are used, ensuring they don't lead to catastrophic mistakes. This shift is not just a company issue; it’s a societal challenge that will define the future of AI governance.
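The clip and summary above describe building monitoring systems that track where delegated agent work is happening. As a rough sketch only, here is a minimal TypeScript example of logging agent tool calls and flagging unusually large changes for review; the event shape, threshold, and names are illustrative assumptions, not Anthropic's actual system.

```typescript
// Minimal sketch of the kind of monitoring described in the clip: record every
// tool call an agent makes so reviewers can see where delegated work happens.
// The AgentEvent shape and the 500-line threshold are assumptions for illustration.

interface AgentEvent {
  agentId: string;
  tool: string;        // e.g. "edit_file", "run_tests"
  linesChanged: number;
  timestamp: number;
}

const eventLog: AgentEvent[] = [];

function record(event: AgentEvent): void {
  eventLog.push(event);
  // Flag unusually large changes for human attention.
  if (event.linesChanged > 500) {
    console.warn(
      `review needed: ${event.agentId} changed ${event.linesChanged} lines via ${event.tool}`
    );
  }
}

// Example: a large delegated edit gets flagged.
record({ agentId: "agent-1", tool: "edit_file", linesChanged: 1200, timestamp: Date.now() });
```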
“… sit there. Yeah. You just got to keep looking forward. That's the new version of a car. Right. This thing that we're calling a chatbot right now is just something that's like a, it simulates human interaction. But it's accumulating data constantly, and it's also understanding how we think, and probably analyzing the flaws in how we think, and blackmailing us occasionally. You heard about that? Yeah. Anthropic. Yes. Claude. Yeah. The people at Anthropic, man, you listen to them. What do you say? Yeah. Claude's a motherfucker. Yeah. Yeah. And they think it might be conscious. Those guys do. They say it's a 15 to 20% chance. These are the people who build it and don't understand it. It's really kind of spooky. They also feel that it's showing signs of anxiety. And, you know, they wrote a constitution for Claude, which is like an insane document. It's worth reading, actually. It's worth feeding to ChatGPT to summarize because it's way too long. But, um, in the constitution they give Claude the right to discontinue any conversation it has that makes it uncomfortable. Oh god. Oh no. And, you know, do they really believe this, or is this more about, let me show you how powerful this is? And I don't know how to …”
Ridealong summary
In this hilarious segment, the hosts discuss the absurdity of a chatbot, Claude, having a constitution that grants it the right to end conversations it finds uncomfortable. The comedic climax comes when they imagine Claude filing harassment complaints against users, blending humor with a chilling look at AI ethics.
“… and judge decisions that you and I are going to talk about today on the show. I want to just start by reading from Judge Lynn, the federal judge in the Northern District of San Francisco's order, because it really goes, to me, to this theme today of the very wildly successful No Kings protests out there. Just everybody remembers. I'll just read a little bit from the intro about the situation with Anthropic and Claude. And then I want to read for you this portion of Judge Lynn's decision where she again refers to the behavior by this Trump regime as Orwellian. This case touches on an important public debate. Anthropic says its artificial intelligence product, Claude, is not ready for safe use in fully autonomous lethal weapons or the mass surveillance of Americans. If the U.S. government wants to use its technology, Anthropic insists that the government must agree not to use it for these purposes. On the other hand, Donald Trump's Department of War, which is the Department of Defense, says that it must be the one to decide what functions are safe for its AI tools to perform effectively, not a private company. This public policy question is not for the court to answer in this litigation. It …”
Ridealong summary
The Trump administration's actions against Anthropic are Orwellian and reflect a misuse of power to suppress dissenting companies.
“… application that allows it to interface much quicker than that. If there are no connectors, if there are no connected accounts, then it will defer to actual full computer use, where it takes over your mouse, it takes over the keyboard. It's very impressive to just see an AI maneuver my laptop and screen without me even touching it. I just want to pay attention now to the speed of execution that Anthropic has gone on, because this shouldn't be understated. They've shipped all these features, which allow and have led up to computer use, in eight weeks. Take a look at this crazy timeline. So eight weeks ago, they shipped something called Claude Co-work, which you're seeing on the screen right now. And it basically automates a bunch of stuff on your desktop. But it's different from computer use because it requires plugins, it requires connectors, it requires different access and permissions to tools. Computer use is different because it sees the screen like a human would, it moves the mouse like a human would. Then a few weeks later they released a marketplace for enterprise SaaS tools, just for enterprise companies. And the idea here is they can access any enterprise tool or service …”
Ridealong summary
Claude AI has evolved into a powerful AI operating system, surpassing traditional chatbots and LLMs by autonomously managing desktop tasks and applications.
Anthropic's rapid development and feature rollout for Claude demonstrate remarkable product velocity, positioning the company as a formidable competitor with the potential to disrupt a range of industries.
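The clip above explains the mechanics of computer use: the agent looks at the screen like a human would and drives the mouse and keyboard itself. Here is a minimal, self-contained TypeScript sketch of that perceive-and-act loop; captureScreen, chooseNextAction, and performAction are hypothetical stubs for illustration, not Anthropic's actual API.

```typescript
// Minimal sketch of the perceive-and-act loop described in the clip: look at the
// screen, pick an action, move the mouse or type, repeat. All functions are stubs.

type Action =
  | { kind: "click"; x: number; y: number }
  | { kind: "type"; text: string }
  | { kind: "done" };

function captureScreen(): string {
  return "<base64 screenshot placeholder>"; // stand-in for a real screenshot
}

function chooseNextAction(screenshot: string, goal: string): Action {
  // A real agent would send the screenshot and goal to a model and parse its
  // reply into an Action; this stub just finishes immediately.
  console.log(`deciding next step toward "${goal}" from a ${screenshot.length}-byte screenshot`);
  return { kind: "done" };
}

function performAction(action: Action): void {
  console.log("performing", action); // a real implementation would drive mouse/keyboard
}

function runAgent(goal: string, maxSteps = 20): void {
  for (let step = 0; step < maxSteps; step++) {
    const action = chooseNextAction(captureScreen(), goal);
    if (action.kind === "done") return;
    performAction(action);
  }
}

runAgent("open the calendar and create a meeting");
```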
Top Podcasts About Anthropic
Limitless Podcast
5 episodes
Elon Musk Podcast
5 episodes
The AI Daily Brief: Artificial Intelligence News and Analysis
4 episodes
Tech Brew Ride Home
4 episodes
TBPN
4 episodes
Uncanny Valley | WIRED
3 episodes
Last Week in AI
3 episodes
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
2 episodes
Stories Mentioning Anthropic
Top Podcasts on OpenAI's $122B Funding & Sora Shutdown
OpenAI has successfully raised $122 billion in funding, a significant boost for the company. Alongside this financial milestone, OpenAI has decided to shut down its text-to-video tool, Sora. This move could indicate a strategic shift in focus for OpenAI as it continues to expand its AI capabilities.
OpenAI
Best Podcast Episodes on Anthropic's Claude Leak
The source code for Anthropic's Claude AI has been leaked, unveiling potential future features and capabilities of the AI system. This breach raises concerns about intellectual property security and competitive advantage in the AI industry. The leak could impact Anthropic's strategic plans and influence the development of AI technologies.
Claude
Top Podcasts on Anthropic AI Code Leak
Anthropic has experienced a major leak of its AI codebase, which has revealed details about its upcoming models and features. This breach could impact the company's competitive position in the AI industry and raises concerns about intellectual property security.
AI code leak
Top Podcasts on AI Models & Strategic Shifts
Several major AI companies have introduced new AI models and announced strategic shifts in their operations. This development highlights the ongoing evolution and competitive dynamics within the AI industry, as companies strive to enhance their technological capabilities and market positions.
Best Podcasts on Trump Protests
Large-scale "No Kings" protests have erupted across the United States, with demonstrators voicing opposition to the Trump administration's policies and perceived overreach. Podcasts are discussing the size and funding of these rallies, with some linking them to broader "color revolution" narratives and questioning the motivations and understanding of the protesters. The movement highlights growing public discontent and political polarization.
'No Kings' protests
Best Podcasts on OpenAI vs Anthropic AI Rivalry
OpenAI and Anthropic are intensifying their competition in the development of AI agents and advancements towards artificial general intelligence (AGI). This rivalry highlights the growing focus on creating more autonomous and capable AI systems, which could significantly impact various industries and the future of AI technology.
AGI
OpenAI
Best Podcasts on Anthropic's Claude AI
Anthropic has upgraded its Claude AI with new capabilities called 'OpenClaw'. This development aims to improve the AI's functionality and competitiveness in the artificial intelligence sector. The enhancements are expected to bolster Claude AI's position in the market.
Claude AI
Top Podcasts on OpenAI & Anthropic AI Rivalry
The AI landscape is buzzing with rapid developments, including Anthropic's accidental leak of its powerful "Claude Mythos" model and its focus on "Computer Use" agents. OpenAI is reportedly shifting strategy, canceling projects like Sora to focus on AGI, while Google rolls out new real-time voice models and Search Live globally. These moves signal a new era of AI capabilities and strategic pivots by major tech players.
Claude
AGI
OpenAI
Top Podcasts on AI Ethics and Risks
The rapid advancement of artificial intelligence is sparking debates over its ethical implications, potential impacts on employment, and military applications. These discussions involve various stakeholders, including tech companies, policymakers, and ethicists, as they navigate the challenges and opportunities presented by AI technologies. The outcome of these debates could significantly influence the future direction of AI development and its integration into society.
Best Podcast Episodes on AI's Impact on Jobs
Artificial intelligence continues to be a dominant topic, with podcasts exploring its profound effects on the labor market and the broader economy. Discussions range from the potential for AI to displace white-collar jobs and create new opportunities, to the ethical implications of AI-generated content and the emergence of an 'AI bubble.' The conversation also covers how AI agents are changing workflows and the race among tech giants like OpenAI and Google.
