Best Podcast Episodes About Yann LeCun
Everything podcasters are saying about Yann LeCun — curated from top podcasts
Updated: Apr 02, 2026 – 8 episodes
Ridealong has curated the best and most interesting podcasts and clips about Yann LeCun.
Top Podcast Clips About Yann LeCun
“… agree that it is not antagonistic to the Bitter Lesson. I do want to mention one more thing: are there any philosophical differences with the JEPA stuff that Yann LeCun is working on? I gotta go there. You're mentioning, like, some latent abstraction, and I'm like, okay, fine, let's talk about it, right? It's the elephant in the room. Yeah, there are philosophical differences. Yann LeCun is a dear friend of mine, but he has never appreciated the power of language in particular or symbolic representations in general. Yann is a very visual thinker. He always wants to claim that he thinks visually and that there are no word symbols or math in his head. Maybe that's true of Yann. It's certainly not the way I think. But at any rate, you know, the world according to Yann is that the basic stuff of the world and of intelligence is visual, and language is just this low-bit-rate communication mechanism between humans; it doesn't have much other utility, and it's far inferior to the high-bit-rate video that comes into your eyes. And I think he's fundamentally missing a number of important things there, right? Think of this evolutionary argument, looking at animals, right? The closest analogy is the …”
Ridealong summary
Symbolic representations, especially language, are crucial for human intelligence, setting us apart from other species like chimpanzees. While Yann LeCun emphasizes visual thinking, Chris Manning argues that language allows for advanced reasoning and planning, significantly enhancing cognitive capabilities. This philosophical divide highlights the importance of language as a cognitive tool in AI development.
“… In fact, Facebook was offering to make Demis a lot richer, but Demis was consistent throughout his career in turning down the money: not going to Cambridge University, turning down the money to sell to Facebook. It's not about the money for him. It's really about the science. And you mentioned him using his scientific method to see two, three years into the future. Instead, Facebook went with Yann LeCun and gave him plenty of resources. I think he was trying to poach some of Demis's employees. Demis told them that the Google deal was going to happen and therefore to sit tight, and they did. But fast forward to 2026, and Yann LeCun has actually been dumped from Meta, and Demis, where does he sit in Google? Is he the successor to Sundar? Is he the ego in the id? I mean, what is his role in Google today? Well, his role today is to be the chief executive of Google DeepMind, which is the AI engine that is basically powering all the new products in Google. So he's super important. Sundar is the chief executive of Alphabet and Google. And I would argue that the relationship between Sundar and Demis is the most important relationship in business anywhere at the moment.”
Ridealong summary
Demis Hassabis turned down a lucrative offer from Mark Zuckerberg to join Facebook, believing Zuckerberg's passion for AI was insincere. Instead, he opted for Google, where he could focus on his true passion: advancing artificial intelligence. This pivotal decision shaped the future of AI development and solidified Hassabis's role at Google DeepMind.
“… correlations over language isn't necessarily how the world works, and that you need something different to really achieve that understanding, to create, like, powerful AI. Do you think that's the case? Or do you think LLMs do take us to this next frontier that we talk about? Yeah, I think there are a couple different levels that I would want to use to address that. One is, I do agree with the Yann LeCun thesis in the sense that I feel like we are running, right now, a depth-first search in AI space, where we are all jamming as hard as we can on a particular architecture and scaling it as much as we can. And, you know, people are now, of course, even building chips that literally embody the architecture of the model in the chip itself. I don't really like that. I kind of wish that we were doing a little bit more of a breadth-first search, where we would explore different kinds of architectures and find their relative strengths and weaknesses. And, you know, hopefully bring… because we're not one thing, right? And we have a lot of modules in our brains. And so it's just fundamentally weird on some level, and you would expect it to be kind of brittle in some sense to take one, you know, …”
Ridealong summary
The debate over whether large language models (LLMs) can lead to transformative AI is heating up. While some experts argue that LLMs focus too narrowly on next-token prediction, others believe that recent advancements in reinforcement learning shift the paradigm towards more meaningful outputs. This conversation highlights the need for diverse AI architectures to achieve robust intelligence.
“idea. This brings us to our second sub-question: how is it possible that LeCun could be right that LLMs are a dead end if we've been hearing nonstop in recent months about how these LLM-based companies are about to destroy the economy and change everything? How could we be so wrong? LeCun is not surprised by that. I think if we asked him (I'll simulate LeCun here), he would say the short answer to that question is: look, a lot of coverage of LLMs recently has been a mixture of hype and of confusing the specific LLM strategies of the frontier companies with the idea and possibilities of AI more generally, kind of mixing those things together. Which is fine if you're Sam Altman or Dario Amodei; that's great for you because you need investment. But it's probably not the most accurate way to think about it. Now, if we ask LeCun in this hypothetical to give a …”
Ridealong summary
Yann LeCun argues that the recent hype around LLMs might be misleading, suggesting they could be a technological dead end. He explains that after the initial scaling phase, improvements in LLMs have plateaued, leading companies to explore post-training techniques instead. This shift indicates that the future of AI may not lie solely in larger models but in refining existing technologies.
“… as a seed or Series A (I don't know what the board seat situation is), but anyway, for a seed, this was a big chunk of the company to give away. And especially in this market, like, man, that's steep. So we'll see. I mean, there's an interesting play, by the way. So they're saying, my prediction is that world models will be the next buzzword. This is CEO Alexandre Lebrun, which, by the way: Lebrun, LeCun.”
Ridealong summary
Yann LeCun's AMI Labs has raised a staggering $1.3 billion to develop revolutionary AI world models, marking the largest fundraising event in European history. Despite the impressive sum, the lack of major VC names raises questions about investor confidence in this new approach to AI, which challenges the current reliance on large language models. This bold move could redefine AI model development, but the stakes are high as they give away over 20% of the company right out of the gate.
“Now, for me, when I started the world models project with Schmidhuber around, you know, 2015, '16, I think Schmidhuber's earlier concepts actually preceded Yann LeCun's ideas. I mean, usually that's the case. But a lot of the motivation behind the world models work that I worked on is on developing representations. Like, I didn't really care whether the world models would output a really realistic rendering of the real world. And in fact, we showed that the more realistic it is, the higher fidelity it is, the easier it is for evolution or the agent to exploit some really weird bug in your simulation. So it's kind of, I think, more like a paradox: the larger your model is and the more detailed it is, the easier it actually is to find some particular small thing for your agents. If your agent is supposed to beat that game that's simulated in your world model, it'll figure out how to move in such a way as to get unlimited scores right away. But what we …”
Ridealong summary
Surprisingly, the more realistic a world model is, the easier it is for agents to exploit flaws and achieve high scores. Instead, creating a noisier model forces agents to develop essential skills in navigating challenging environments. This paradox highlights the balance needed in model design for effective learning.
“… have it, right? Back then we didn't. In order to ensure that everyone gets access to the same level of information as well as the resources that they can use. Yeah, and so specifically, your research after your PhD on language was some of the most groundbreaking research that has happened in AI, ever. You mentioned Yann LeCun already earlier in the episode, whom you're co-directing the Global AI Frontier Lab with. And back at your postdoc at the University of Montreal, you were working with Yoshua Bengio, another one of these kind of classic names in AI, and you co-authored with him a 2014 paper on neural machine translation and attention, which laid the groundwork for the Attention Is All You Need paper, the Transformer architecture, and basically all of the LLM and frontier AI lab capabilities that we have today all around the world. Oh, okay. Thank you very much. I mean, a bit of an exaggeration, but still, yes, I'm going to take it.”
Ridealong summary
The necessity to learn Finnish led to groundbreaking advancements in AI language translation. Professor Kyunghyun Cho, while studying in Finland, realized that overcoming linguistic barriers was crucial for global information access. His subsequent research on neural machine translation laid the foundation for today's advanced AI language models.
Ridealong summary
Yann LeCun's ability to inspire confidence in challenging times is remarkable. Despite his online persona as a fierce critic, he fosters a sense of calm and clarity among his colleagues, making them feel that obstacles are surmountable. This unique blend of conviction and openness to discussion sets him apart as a leader in the AI field.
Top Podcasts About Yann LeCun
Latent Space: The AI Engineer Podcast
1 episode
TechStuff
1 episode
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
1 episode
Deep Questions with Cal Newport
1 episode
Last Week in AI
1 episode
Eye On A.I.
1 episode
Super Data Science: ML & AI Podcast with Jon Krohn
1 episode
张小珺Jùn|商业访谈录
1 episode
