
Best Podcast Episodes About the Pentagon's Evaluation of Boeing and Lockheed's Dependence on Anthropic's Claude AI

Department of Defense · Boeing · Lockheed Martin · Anthropic · Claude Code
Updated: Feb 26, 2026 – 9 episodes
The Department of Defense has asked Boeing and Lockheed Martin to assess their reliance on Anthropic's Claude AI, a move that could lead to the blacklisting of Anthropic. The request signals concerns over national security risks associated with the prominent AI firm, and the outcome could significantly affect the tech industry, particularly AI's role in defense.
Listen to the Playlist

Ridealong has curated the best podcasts and clips about the Pentagon's evaluation of Boeing and Lockheed's dependence on Anthropic's Claude AI. Listen now.

Podcast Episodes Covering This Story

Tech Brew Ride Home
Galaxy Unpacked
Date posted: Feb 25, 2026
At a glance
The Pentagon's pressure on Anthropic to grant unfettered access to Claude AI highlights a tense standoff between national security needs and ethical AI usage policies.
Key quote from this episode
“The Pentagon wants to punish Anthropic as the feud over AI safeguards grows increasingly nasty, but officials are also worried about the consequences of losing access to its industry-leading model, Claude. Anthropic has said it is willing to adapt its usage policies for the Pentagon, but not to allow its model to be used for the mass surveillance of Americans or the development of weapons that fire without human involvement.”
Intelligent Machines (Audio)
IM 859: What's Behind the Fox? - Tech's Gilded Age
Date posted: Feb 25, 2026
At a glance
The Department of Defense's demands on Anthropic to remove ethical clauses from their AI contracts are seen as posturing, with the DOD already claiming not to engage in the prohibited activities.
Key quote from this episode
“Part of this, what I believe the fight is over is like two main clauses that Anthropic kind of puts on these contracts, which is we don't want our AI to be used to autonomously kill people without any humans in the loop. And we don't want it to be used to, I believe, autonomously surveil the American people. And those are two things that I believe the DOD already says it essentially doesn't do.”
Limitless Podcast
Anthropic Just Got Hacked by China. These are the New Front Lines.
Date posted: Feb 25, 2026
At a glance
Anthropic's commitment to safety and alignment conflicts with the Pentagon's demand for unrestricted AI access, leading to potential national security threats.
Key quote from this episode
“The issue now is that Anthropic is restricting Pentagon's access, like American-owned self-defense against these kinds of things. And so the Pentagon is getting fed up and issuing them an ultimatum and saying, listen, if you don't figure this out, we're going to classify you as a threat to the country. Now, I have to give credit to Anthropic for maintaining their identity evenly across every single facet, but I don't think it's the smart way to do it.”
Hard Fork
The Pentagon vs. Anthropic + An A.I. Agent Slandered Me + Hot Mess Express
Date posted: Feb 20, 2026
At a glance
The Pentagon's potential blacklisting of Anthropic is a strategic threat to both the company and the military, as it complicates government contracts and could hinder military operations reliant on AI.
Key quote from this episode
“The latest thinking on this is that it would impact the use of Anthropic's products on Pentagon systems and Pentagon-related systems. So like Google Cloud, for example, wouldn't be able to use Claude on any kind of systems or servers that touch Google's government contracts. But the belief is that Anthropic could still work with Google, just not on anything that kind of touches Google's government contracts.”
TBPN
Happy Nvidia Day, Salesforce Earnings with Marc Benioff, Anthropic's New Stance on Safety | Doug O'Laughlin, Maxwell Meyer, Ben Lerer, Michael Manapat, Adam Warmoth, Connor Sweeney, Matthew Harpe
Date posted: Feb 25, 2026
At a glance
Anthropic's shift away from strict AI safety commitments is driven by competitive pressures and an uneven policy environment, not directly related to Pentagon negotiations.
Key quote from this episode
“Anthropic faces intense competition from rivals which regularly release cutting-edge models. It's also locked in a battle with the Defense Department over how its Claude suite is used after it told the Pentagon it couldn't be used for domestic surveillance or autonomous lethal activities. Anthropic said the safety policy changes are an update based on the speed of AI's development and a lack of federal AI regulations, which it has been pushing for.”
At a glance
The Pentagon's scrutiny of Anthropic's Claude AI is more of a political maneuver than a genuine security concern, highlighting the complex dependency on AI in defense.
Key quote from this episode
“The Pentagon wants to punish Anthropic as the feud over AI safeguards grows increasingly nasty, but officials are worried about the consequences of losing access to its industry-leading model, Claude. The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is that they are good, a defense official told Axios ahead of the meeting.”
Uncanny Valley | WIRED
Pentagon v. ‘Woke’ Anthropic; Agentic v. Mimetic; Trump v. the State of the Union
Date posted: Feb 26, 2026
At a glance
The Pentagon's demands for unrestricted AI use clash with Anthropic's ethical stance against autonomous weapons, highlighting a significant ethical divide in AI deployment.
Key quote from this episode
“Anthropic has pretty strict restrictions on how their technology can be deployed. For instance, it can't be used for domestic surveillance or fully autonomous weapons. You know, the fully autonomous weapons thing really gets me. What Anthropic is saying here is not like some woke, left wing, wild thing. It's saying that machines can't be the ones to fully push the button. Right? Right. They can't kill people just themselves.”
Primary Technology
Anthropic vs the Pentagon, Galaxy S26 Ultra, Mac Backup Strategy
Date posted: Feb 26, 2026
At a glance
The Pentagon's potential blacklisting of Anthropic over Claude AI reflects a complex tension between national security and corporate autonomy, with significant implications for defense contractors.
Key quote from this episode
“One of them is that they could designate it as a supply chain risk. What that would mean is that's a pretty heavy option, right? Because that is a type of thing that they would normally reserve for a foreign adversary influence... It is essentially saying we will blacklist you if you don't do this thing. The other alternative is that they could use what's I think called the Defense Production Act.”
Tech Brew Ride Home
OpenAI Grabs OpenClaw’s Creator
Date posted: Feb 16, 2026
At a glance
The Pentagon's frustration with Anthropic stems from ideological clashes over AI's use in sensitive military operations, not just security concerns.
Key quote from this episode
“The Pentagon is pushing four leading AI labs to let the military use their tools for, quote, all lawful purposes, even in the most sensitive areas of weapons development, intelligence collection, and battlefield operations. Anthropic has not agreed to those terms, and the Pentagon is getting fed up after months of difficult negotiations. The tensions came to a head recently over the military's use of Claude in the operation to capture Venezuela's Nicolas Maduro through Anthropic's partnership with AI software firm Palantir.”
TBPN
Big Tech to Pay for Power, Anthropic Abandons Safety, the Adoption Paradox | Diet TBPN
Date posted: Feb 26, 2026
At a glance
Anthropic's shift away from strict AI safety commitments is driven by competitive pressures, raising concerns about its role in defense and national security.
Key quote from this episode
“Anthropic, the AI company known for its devotion to safety is scaling back that commitment. The company said Tuesday it is softening its core safety policy to stay competitive with other AI labs. Anthropic previously paused development work on its model if it could be classified as dangerous, but it said it would end that practice if a comparable or superior model was released by a competitor.”

Sentiment Overview

Mixed (10)