Yann LeCun is a prominent proponent of the idea that AI safety is essentially guaranteed: all we have to do is make superintelligent AIs subservient to us, and no significant harm can come from this technology.
I asked GPT4 to inspect LeCun’s Twitter feed and find potentially strained or inappropriate analogies. All of the analysis below was generated by GPT4. The first tweet was analysed on its own, so it received more attention; the others were analysed en masse.
My point is that even GPT4 expects more nuance.
The rocket analogy:
Tweet: “You know, it’s like rockets.
You test it, it blows up, you tweak it and try it again.
You keep iterating until it becomes safe eough [sic] to deploy and make people’s well-being depend on it.”
This tweet uses the development of rockets as an analogy for the development of AI, implying that through trial and error, we can eventually make AI safe. However, there are several reasons why this analogy can be considered strained:
Scale of impact: When a rocket fails during testing, the impact is generally localized, and precautions can be taken to limit the damage (e.g., by conducting tests in isolated locations). In contrast, if a powerful AI system fails or behaves unexpectedly, the consequences could be widespread and systemic, affecting digital infrastructures, economies, and even global stability. For example, an AI system tasked with managing critical infrastructure or financial markets could cause widespread harm if it behaves unexpectedly.
Predictability: The physical principles that govern rocket science are well understood, which makes failures understandable and predictable to a certain extent. On the other hand, AI systems, especially those based on machine learning, can behave unpredictably, with their decisions often being hard to interpret. This could make it more challenging to understand why a particular AI system failed and how to prevent similar failures in the future.
One-off vs Continuous Learning: Rockets, once designed and tested, generally operate according to their design with no further learning or adaptation. However, many AI systems are designed to continuously learn and adapt from their environment. This means that even if an AI system is safe at the time of deployment, it could later adapt in ways that lead to unsafe behavior.
Reversibility: A rocket launch, once initiated, can’t be undone, but there is usually a comprehensive review and analysis process after a failure. AI decisions, especially when scaled and automated, could lead to actions that are difficult or impossible to reverse, especially if they interact with and impact the real world.
In summary, while the iterative development process might hold true for both rockets and AI to some extent, the analogy can be misleading due to the differences in predictability, reversibility, scale of potential impact, and behavior after deployment. It could lead one to underestimate the unique challenges and potential risks associated with AI safety.
The ballpen analogy:
Tweet: “Engineer: I invented this new thing. I call it a ballpen
TwitterSphere: OMG, people could write horrible things with it, like misinformation, propaganda, hate speech. Ban it now!
Writing Doomers: imagine if everyone can get a ballpen. This could destroy society.”
Here, the tweeter compares the introduction of AI to the invention of a ballpen. The analogy is strained because while both are tools that can be misused, the ballpen is a simple tool with a limited range of uses. AI, on the other hand, is a complex, powerful technology with the potential to radically reshape society. The potential misuses of AI (e.g., deepfakes, autonomous weapons, job displacement) are far more complex and potentially more damaging than the misuse of a ballpen.
The 19th Century farmer analogy:
Tweet: “From the point of view of a 19th Century farmer, machines have taken over all the jobs in the 21st Century. Jobs that occupied most people in the 19th Century now occupy small amounts of people or have disappeared. Most 21st jobs would be entirely incomprehensible for a 19th…”
The tweeter compares the job displacement due to AI to job displacement due to industrialization. While both processes do involve job displacement, the analogy simplifies the complexities and potential consequences of AI-induced job displacement. The pace of AI development and implementation could lead to rapid, widespread job displacement, and the skills required for the new jobs created by AI may be inaccessible to many displaced workers.
The jet airliner analogy:
Tweet: “We can all imagine innumerable scenarios in which a jet airliner flight goes bad. Yet engineers have found ways to make air transport incredibly safe and reliable. That requires skills that the air transport doomers don’t have”
This analogy is intended to convey the idea that AI, like air transport, can be made safe and reliable despite the potential for disaster. However, the scope and unpredictability of AI outcomes are arguably much greater than those of air transport. Furthermore, the consequences of AI failure could be far more pervasive and systemic than an individual airline disaster.
The set of possible scenarios analogy:
Tweet: “Imagine that the set of possible scenarios is a large tree in which each branch is a series of events. “Good” scenarios constitute a tiny subset of that tree.”
While this analogy is aimed at visualizing the myriad potential outcomes of AI development, it oversimplifies the complexity and interrelatedness of these scenarios. AI’s impact will not unfold along a single, linear sequence of events (as suggested by a tree structure), but will likely involve complex interactions between various factors including technology, policy, social response, and more.
The hard take-off “fallacy”:
Tweet: “Two common misconceptions about AI: (1) the fallacy of the hard take-off scenario. (2) the idea that intelligent agents will necessarily want to dominate.”
The tweeter criticizes what they view as two common misconceptions about AI. Regarding the ‘hard take-off scenario’, they imply that treating this kind of rapid and uncontrollable AI development as a high-probability event is a fallacy. However, this stance may oversimplify the range of possible AI trajectories and prematurely dismiss valid concerns. It’s true that predicting the pace of AI development is uncertain and a ‘hard take-off’ is not guaranteed. Yet it’s equally arguable that we cannot rule out a rapid escalation in AI capabilities given the right conditions. By labeling the ‘hard take-off’ scenario a fallacy, the tweeter appears to strawman a more nuanced concern: that we should prepare for a wide range of possibilities, including fast and impactful shifts in AI capabilities. This preemptive dismissal risks minimizing the importance of safeguards, policy measures, and ethical considerations that could mitigate potential negative impacts, should the pace of AI development exceed expectations.
In summary, while these analogies can help make complex AI concepts more accessible, they often oversimplify the issues, potentially leading to misunderstanding. This does not necessarily mean the tweeter has a superficial understanding of AI; rather, it could suggest a desire to simplify complex topics for a general audience. However, this approach has its drawbacks, as it can sometimes mask the true complexity and potential impact of AI.
Another deficient piece of logic from LeCun today:
Tweet:
“There is much more to intelligence than language.
All of animal intelligence, which includes what we would call common sense, has nothing to do with language.
Everything infants learn is entirely non linguistic.”
GPT4’s analysis:
The tweet seems to argue that language-based models, such as GPT or other large language models (LLMs), cannot achieve intelligence because they focus only on language, while there’s much more to intelligence, like common sense and non-linguistic learning, that isn’t covered by language.
While it’s true that there’s more to intelligence than language, the argument seems to neglect a few things:
Language as a window into intelligence: While not all of intelligence is about language, language is a key manifestation of our intelligence and is intertwined with many of its aspects. Understanding, producing, and learning from language require sophisticated cognitive abilities. So, while an LLM may not capture all facets of intelligence, it can capture a significant portion of them.
Language models can learn more than just language: LLMs learn patterns in the data they’re trained on, and they can extrapolate these patterns to novel situations. While this is not the same as human common sense, they can exhibit a form of it, albeit limited and sometimes flawed.
Non-linguistic learning isn’t the only form of intelligence: The tweet seems to imply that because the learning of infants and animals is entirely non-linguistic, language-based systems fall outside intelligence. But non-linguistic learning is one form of intelligence, not the only form. And while LLMs can’t learn in the same way as an infant or an animal (they don’t perceive the world or interact with it in the same way), that doesn’t mean they can’t exhibit intelligence in the areas they can handle.
LLMs can learn from non-linguistic data encoded in language: Although LLMs primarily work with text, they can learn from non-linguistic information if it’s encoded in a textual format. For example, they can learn to generate text that describes how to solve a math problem, even though the underlying mathematics isn’t linguistic.
In conclusion, while it’s true that LLMs do not and cannot encapsulate the entirety of intelligence, especially regarding aspects like physical interaction with the world or emotional intelligence, they can manifest significant aspects of cognitive intelligence, especially those that can be demonstrated or learned through language.