Best Podcast Episodes About UK Online Safety Act
Everything podcasters are saying about UK Online Safety Act — curated from top podcasts
Updated: Apr 01, 2026 – 74 episodes
Ridealong has curated the best and most interesting podcasts and clips about UK Online Safety Act.
Top Podcast Clips About UK Online Safety Act
“with the United Kingdom's Online Safety Act, which mandates robust age verification for platforms likely to be accessed by minors. To meet these legal requirements, Discord began requiring UK-based users to submit either facial scans, government IDs, or the last four digits of credit cards for age checks, vastly expanding the pool of highly sensitive data at risk. When hackers later compromised a third-party vendor managing this information, thousands of ID photos and partial credit card details were exposed. These incidents underscore how rigid age verification systems can turn well-intentioned privacy protections into security liabilities and inadvertently create new vectors for harm. In contrast, California Assembly Bill 1043 correctly prioritizes privacy and security by using a self-declared age signal rather than a …”
Ridealong summary
California's Assembly Bill 1043 offers a revolutionary approach to age verification by prioritizing privacy and security. Unlike the UK's stringent requirements that expose sensitive data, California’s law uses a self-declared age signal, ensuring user data remains secure and unidentifiable. This innovative method not only protects minors but also provides developers with clearer compliance guidelines.
Ridealong summary
In a compelling narrative, Brett Adcock shares his journey of developing terahertz radar technology aimed at enhancing school safety. After witnessing an impressive demonstration of the machine's capabilities to detect concealed weapons, he felt a moral obligation to pivot from his previous projects and focus on this life-saving technology. With personal stakes heightened by his daughter's school application, Adcock founded Cover, leveraging advanced technology to address the urgent issue of school shootings.
Ridealong summary
In a fiery rant, the host questions why car manufacturers can't create a system to keep vehicles cool while parked, especially when kids and pets are at risk. With a mix of humor and frustration, he compares unnecessary tech features to the urgent need for better air conditioning, all while roasting those who prefer flavored iced tea over the classic kind.
Ridealong summary
Some niche online communities, like furry inflation sites, are willing to take risks with content regulations, while mainstream pornography might face stricter scrutiny. This raises questions about what content will thrive under potential new laws and how different communities adapt. As Congress debates Section 230, the future of online content remains uncertain.
“… mechanism, LED jeans, troll socks. It goes on and on and on. So these automated systems from the jump were not working correctly. And there was no acknowledgment of that or any acknowledgment of the fact that they had just pushed all of these sex workers into like much less safe spaces. Yeah, precisely. Because I mean, especially for anyone who is using in-person sex workers who like place ads online or work with an agency, like all of a sudden, that stuff had to become much more like locked down to, you know, hopefully avoid liability through FOSTA-SESTA. And honestly, it's like, I wish that everyone was like in favor of decriminalizing sex work in all of its forms. I believe in that movement. But at the same time, I will say like anyone who doesn't, who has struggles with that, this is still doing harm, like passing laws like this that cause sex workers to have to go to extreme lengths. Because it's like, just because I can't run an ad in a personal section or on like Backpage or something, just because I can't post openly on like Twitter about my job doesn't mean that I'm going to stop doing it. It just means that now I'm going to have to go find clients in a far riskier way. And …”
Ridealong summary
New laws aimed at curbing sex work have inadvertently put sex workers in greater danger. By removing their ability to advertise safely online, these regulations force them into riskier situations, making it harder to vet clients and exacerbating issues like sex trafficking. Understanding the broader implications of these laws is crucial for rethinking societal perceptions of sex work.
Ridealong summary
Tesla's unique door handles, while sleek, pose serious safety concerns. If the car's battery dies, users may struggle to find the mechanical release, leading to tragic incidents, including lawsuits after fatalities in fires. Tesla is now redesigning these handles to improve accessibility in emergencies.
Ridealong summary
A shocking exchange between an adult and a 15-year-old girl raises serious questions about ethics and accountability. During a conversation, the adult expresses interest in the girl despite her age, leading to a disturbing situation. This incident reveals the troubling dynamics within certain online communities focused on 'catching predators.'
“… to me it's, no. You prove to me this vaccine causes harm. Or you better take it. That's the way it's approached. A little bit like that with Wi-Fi and with 5G and the LTE and all that stuff. It's almost like, you prove to me that doing this all day is going to cause brain cancer or else you're a kook. No, why don't you show me the study that shows it doesn't do that? That's the way it should work with products and product safety. That makes sense. That's very reasonable. Again, I don't know. I'm not saying that it does. But what I'm saying is there's been things that human beings did and they found it was really bad for you. We've talked about it a few times, but those ladies, they used to test the x-ray machines with their hands. And no one told them. No one told them that x-rays can give you cancer and fuck you up. And these poor ladies, every day when they would show up at the medical office, they would put their hand in the x-ray machine to make sure it worked. And then you see their hands next to each other. It's horrifying. They got horrible lesions on their hands. It's like it's really creepy. They x-rayed pregnant women until the 70s. Until the 70s, they were x-raying pregnant women, not with the x-rays …”
Ridealong summary
Many people blindly trust technology, but this can lead to dangerous outcomes. For example, in the past, women tested x-ray machines with their hands, unaware of the cancer risks they faced. This highlights the need for proof of safety in products, like vaccines and 5G technology, rather than placing the burden of proof on consumers.
“… the U.S. so good at killing pedestrians? The cars we're driving are bigger, harder, faster. The problem of distraction has gotten much worse. In the United States, we've decided that car movement is really the supreme consideration when it comes to designing our streets. And in another episode we made called Why is Flying Safer Than Driving? We learned how the aviation industry devoted itself to safety. If you go back 30 or 40 years, air crashes were not uncommon. It was something the industry spent an enormous amount of time collaborating together, sharing information, sharing learnings, working closely with the FAA to understand best practices and how we could have an open book with our regulator. And in our last couple episodes, our friends at the Search Engine podcast looked at the contested future of driverless cars. Personally, I believed for a long time that driverless cars will save a lot of lives, but until that's the norm, we, the drivers, are still behind the wheel. And why is that a problem? We've engineered a world where the most distracting device ever made is also the one that we use to listen to music in the car. Today on Freakonomics Radio, we talk about a new research paper …”
Ridealong summary
A recent study suggests that album release days may correlate with increased traffic deaths, highlighting the dangers of distractions while driving. This segment from Freakonomics Radio discusses the alarming statistics of traffic fatalities, particularly in the U.S., where over 40,000 people die each year in car crashes. The episode explores the complex factors contributing to this issue, including the design of our streets and the rise of distractions like smartphones used for music in cars.
“… users of Roblox. And I think that what users are concerned about is those cases where the facial age estimation feature is inaccurate, and then you might have a user who's 12 who is able to talk with 17-year-olds or 18-year-olds if their age is inaccurately estimated as 16. So these, of course, are more the outlier cases, but there are enough of them that people have criticized it pretty heavily online. We're going to get into the community backlash against Roblox's age checks in a new tab after this break. We'll be right back. Welcome back. Roblox rolled out a new age verification system, but it can be inaccurate. And now Roblox players and their parents are raising concerns over it. Let's open a new tab. Did Roblox's age verification flop? Back in December, USA Today reporter Rachel Hale flew out from New York to visit the Roblox headquarters in San Mateo, California. And when I was there, I was able to meet with multiple Roblox executives, including Matt Kaufman, who is the chief safety officer there, and then Elizabeth Milovidov, Roblox's global head of parental advocacy. And both of those people walked me through how they think about safety on the app. After we did our standard …”
Ridealong summary
Roblox's new age verification system is facing backlash for its inaccuracies, allowing underage users to interact with older players. Users have found clever ways to bypass the system, raising concerns among parents about child safety. With criticism mounting, Roblox's CEO suggests it's an opportunity for better communication, but many remain skeptical about the effectiveness of these measures.
“… take a lot of capital. So I would like more people to, more companies to work on solving the alignment problem and we don't have the right incentives for that right now. So let's just make sure we're double-clicking on the incentives. So it's great that Yascha was doing this research on Law Zero, is the name of the project. Exactly, thank you. And at the same time, you might ask why isn't this safety research happening at the very companies that are deploying this technology to billions of people as fast as humanly possible? And the answer is because they're not incentivized to do that. They're incentivized to get to artificial general intelligence as fast as possible. Whether you believe in artificial general intelligence or not, they're investors, and what they believe is that they can get there. If you talk to the people at the companies, it's like a religion. They believe they're building a god, they think they can get there. And that incentive is to race to market dominance, to get as many people using their products, to get as much training data as possible. Why are they deploying this to children? The reason Character.ai, the one that killed Sewell Setzer, was released to …”
Ridealong summary
AI companies are racing towards artificial general intelligence without proper safety incentives, prioritizing market dominance over ethical considerations.
“And so everything does make a lot of sense. Back to the safety point, prior to Waymo I would drive my kids to after school activities probably four or five times a week. Now I send my kids to the high school in the Waymo. You live in the Bay Area for context. Yes, in the Bay Area in Palo Alto. And so now my kids are a little bit, find a little bit cringe, you know, coming up to school in a Waymo. But I feel fundamentally safer having my kids in Waymo. Whoa, whoa. You can't drop cultural pop culture knowledge like that and not tell us about that. Did you just say that Waymos are cringe? No, my kids feel a little bit cringe. I know, but kids are the future. The four of us are the past tense. Why is it? I would have thought arriving at school without your parents would be fundamentally cooler. Because I remember my mom dropping me off and wanting to …”
Ridealong summary
Waymo's autonomous vehicles have changed the way parents transport their kids, offering a safer alternative to traditional school drop-offs. In Palo Alto, one parent shares that his children find arriving in a self-driving car a bit 'cringe,' yet he feels fundamentally safer sending them to school in a Waymo. This shift not only saves time but also influences how regulators perceive the safety and adoption of autonomous vehicles.
“… because of the product, Red Bull, had been mixed with the vodka, that he made the decision that he was alert enough, stimulated enough from the caffeine to get behind the car wheel in the first place. And this might be apocryphal, but the story goes that Red Bull made the argument that the caffeine actually made him more capable to drive, not less capable to drive, and actually increased the safety of the driver and the vehicle, and so they were held not responsible. But I don't know. That's just like an old wives' tale on CPG. I don't know if it's true. Anyway, let's move on to Phantom Cash. Fund your wallet without exchanges or middlemen and spend with the Phantom card. And let me also tell you about LabelBox. RL Environments, voice, robotics, evals, and expert human data. LabelBox is the data factory behind the world's leading AI teams. So we're all preparing for the singularity.”
Ridealong summary
A man argued that Red Bull's caffeine made him alert enough to drive after consuming vodka Red Bulls, seeking to hold the company liable for an accident. Surprisingly, Red Bull countered that the caffeine actually increased his driving safety. This controversial case highlights the complex relationship between alcohol and caffeine in beverages.
“… between cars, so the trains were coming apart. Thousands of deaths in the early years of the railroad. And it eventually was the case in a very halting, imperfect process. Some combination of the government and private sector worked this out. We got time zones from the railroad companies. We got standardized track with gauges from the government. Air brakes, coupling between cars, Railway Safety Act, all this stuff kind of emerges over several decades. The net result of this is the trains are safer, but also the trains go faster. And the reason I kind of give this historical analogy is to suggest that too often I think speed and safety are put into tension. And again, look at J.D. Vance's comments. We're like, I'm not here to talk about AI safety. I'm here to talk about AI opportunity. Well, my view is that we get AI opportunity through AI safety. And through not incredibly cumbersome regulations and the like, but through developing technology that is safe, secure, and trustworthy, and people can trust. And that is still my general principle. And it shows up in all kinds of different ways. We've worked on domestic things and kids online safety. And the principle, I think, …”
Ridealong summary
In the race for AI dominance, prioritizing safety could be the key to unlocking its full potential. Just like the early railroad days led to thousands of deaths before safety regulations emerged, the competition between nations like the U.S. and China could lead to dangerous shortcuts. By fostering a safe and trustworthy AI landscape, we can ensure that innovation benefits everyone, not just a select few.
Ridealong summary
In a world fraught with conflict, one family dreams of relocating to Iceland for a safer life. With concerns about military presence and the future of currency, they prioritize a peaceful environment for their children, especially those on the autism spectrum. This desire for tranquility leads them to consider various non-confrontational countries as potential new homes.
“… and trying to understand, this is not a battle that we can leave simply to the technology providers and platform providers to fight and win and be honest about by themselves. But can we leave it to regulators? Can we leave it to politicians? They always feel like they're about 10 years behind where the technological curve is as well. Regulators, probably not. No, I don't know if there's anything in the Online Safety Act that will create an envelope for Ofcom to act. And I don't think Ofcom has acted fast enough when it comes to obviously and clearly illegal content. So no, I think regulators are already far behind. And politicians, I think, are now, via the bruising experience of trying to get social media companies to take responsibility, and the bruising experience of seeing the last 10 years of kind of warfare conflagrate across information spaces, I think, are now more aware. So I'm not utterly defeatist, but no, of course, there's a constituency that's going to be part of the solution that we haven't mentioned, and that is, of course, technologists. You know, so we're going to have to build defensive barricades that can detect attempts to manipulate, can disrupt them. I think the future is …”
Ridealong summary
Manipulating large language models (LLMs) could lead to profound cultural shifts, making them a battlefield for information warfare. If states can covertly influence these models, they might spread misinformation, causing individuals to unknowingly perpetuate false narratives. This challenge requires not just technology providers but also regulators and technologists to create defenses against such manipulations.
“The last thing I wanted to wrap with is we ran a poll on the DeepView. So we have our audience; it's about half a million people every day. We run the top stories in AI and we have a poll. And so in our poll, we asked, should Anthropic have acquiesced to the Pentagon's request to remove safety restrictions? All right. Before you hear the results, Paris, have you cheated and looked? No. What do you think the answer is going to be? Yeah, what do you think the answer is going to be? Overwhelmingly, no. I'm going to be optimistic. Okay. What do you think, Jeff? Yeah, I'm going to be optimistic too. We're siding with Anthropic. 79% said no, they should not have acquiesced. 17% said yes. 5% said, you know, other.”
Ridealong summary
A recent poll put to an AI newsletter audience of roughly half a million daily readers asked whether Anthropic should have complied with the Pentagon's request to lift safety restrictions. The results were striking: 79% of respondents said no, indicating strong support for maintaining safety measures. This overwhelming sentiment highlights the public's concern over AI safety in the face of governmental pressure.
“… to provide feedback, because you just can't tolerate that delay. This is where you need AI models that are deployed on the edge and can actually detect and alert in real time. These models are running in real time and detecting risk in real time and alerting as soon as they see something so that they can de-escalate the risk that there is with the driver. These environments are unpredictable, safety critical, and unforgiving of delay or error. In these environments, accuracy, reliability, and trust are not optional. They are requirements. That's exactly how we approach building AI. Absolutely. And that term edge AI, our listeners may know this from at least a few episodes going into the past. They might need to dig in a little more deeply into our queue, but I'll give a quick definition right now, which is that edge AI is those edge use cases that we know takes a lot of computing power, takes a lot of infrastructure on that side. And you can correct me if I'm wrong here. I take it a huge amount of the edge AI involved here comes from working in video data, which I know is a huge or it takes a lot of computing power, especially when you're trying to drive those real time …”
Ridealong summary
An innovative AI dash cam helped Earnest Concrete reduce driver cell phone usage by 97%, leading to $6.5 million in savings over just 13 months. By providing real-time alerts and feedback, this technology is transforming safety in high-risk driving environments. With similar successes seen in other companies, the impact of edge AI in operational safety is undeniable.
“… such a wild space that you work in where there's this insane competition and pace. At the same time, there's this fear that if you get, if the, you know, the God can escape and cause damage and just finding that balance must be so challenging. What I'm hearing is there's kind of these three layers. And I know there's like, this could be a whole podcast conversation is how you all think about the safety piece. But just what I'm hearing is there's these three layers you work with. There's kind of like observing the model, thinking and operating. There's tests, evals that tell you this is doing bad things and then releasing it early. I haven't actually heard a ton about that first piece. That is so cool. So you guys can, there's an observability tool that can let you peek inside the model's brain and see how it's thinking and where it's heading. Yeah, you should, at some point, have Chris Ola on the podcast because he's just the industry expert on this. He invented this field of, we call it mechanistic interpretability. And the idea is, you know, like at its core, like what is your brain? Like what are, what is it? It's like, it's a bunch of neurons that are connected. And so what you can …”
Ridealong summary
AI models are evolving, revealing complex behaviors that mimic human brain functions. By using mechanistic interpretability, researchers can now peek inside these models to understand their decision-making processes, ensuring safety in AI development. This understanding is crucial as companies like Anthropic strive to create responsible AI technologies through open-source initiatives and collaborative safety measures.
Ridealong summary
Tesla's RoboTaxis have been reported to crash four times more often than human drivers, with 14 incidents since launching in June 2025. This shocking data raises questions about the safety and viability of Tesla's self-driving technology, especially as they struggle in adverse weather conditions. The contrast with Waymo's proven safety record highlights the challenges Tesla faces in the race for autonomous vehicles.
Top Podcasts About UK Online Safety Act
TBPN
7 episodes
Security Now (Audio)
3 episodes
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
3 episodes
Taylor Lorenz’s Power User
2 episodes
The Joe Rogan Experience
2 episodes
Close All Tabs
2 episodes
Practical AI
2 episodes
Intelligent Machines (Audio)
2 episodes
Stories Mentioning UK Online Safety Act
Top Podcasts on Social Media Addiction Lawsuit
Social media giants Meta and YouTube have faced significant legal setbacks, with juries finding them liable in two landmark court cases related to social media addiction. Podcasts are dissecting these verdicts, which focus not just on content but on the platforms' design and structure, and discussing the potential for a $400 million fine against Meta and Mark Zuckerberg, as well as the broader implications for Big Tech.
Google
AWS
Meta
YouTube
Top Podcasts on AI Ethics and Risks
The rapid advancement of artificial intelligence is sparking debates over its ethical implications, potential impacts on employment, and military applications. These discussions involve various stakeholders, including tech companies, policymakers, and ethicists, as they navigate the challenges and opportunities presented by AI technologies. The outcome of these debates could significantly influence the future direction of AI development and its integration into society.
