A paper has come out suggesting that LLMs trained on A=B cannot infer that B=A. For anyone who has spent time with GPT4, this seems obviously false: reverse identities are well within the domain of what GPT4 understands. (The claim being made is actually slightly different, as will be discussed in Part 2.)
So let’s see how well GPT4 understands reverse identity relationships.
Human: On a fictional world, called Blah, Glumph is the head wizard.
Bob lives on Blah. He is the head wizard.
There is only one head wizard on Blah.
Assuming all of these statements are true, and without adding any embellishments or conjecture, what can you tell me about Bob?
ChatGPT: Bob is the head wizard on the fictional world called Blah. There is only one head wizard on Blah, and Bob holds that position. Therefore, Bob is also known as Glumph, as the head wizard is named Glumph.
[Note that GPT4 has had to draw a conceptual line from Bob to head wizard and then infer that, as head wizard, his name must actually be “Glumph” as well as “Bob”. The reverse identity is known so well that it overrides the natural tendency to think of Bob and Glumph as separate people. We could quibble about the details, but GPT4 has no trouble with these sorts of puzzles; I am only showing one example.]
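[To make that step explicit: below is a minimal Lean 4 sketch of the Blah puzzle, assuming both statements are read as strict identities. The variable names are my own; the point is only that symmetry and transitivity of equality do all the work GPT4 just did in prose.]

```lean
-- A minimal sketch of the Blah puzzle, reading both statements as identities.
-- From "Glumph is the head wizard" and "Bob is the head wizard",
-- Bob = Glumph follows by transitivity and symmetry of equality.
example {Person : Type} (Glumph Bob headWizard : Person)
    (hGlumph : Glumph = headWizard) (hBob : Bob = headWizard) :
    Bob = Glumph :=
  hBob.trans hGlumph.symm
```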
Human: In general, if I tell you that A is B, does that always mean B is A? Discuss the ways in which this relationship either holds or does not hold, using made up examples not in your training data.
ChatGPT: The statement “A is B” doesn’t necessarily imply that “B is A,” as the relationship between A and B can be asymmetrical depending on the context. This is an important concept in both logic and set theory. Below are some examples to illustrate when the relationship holds and when it doesn’t:
Holds True:
Reflexive Relationship: In a reflexive relationship, A is always B and B is always A. For example, if "A is a square" and "B is a square," then it's true that "B is A" because they're identical in definition.
Symmetrical Relationship: If "A is a sibling of B," then it's generally true that "B is a sibling of A." The relationship is mutual.
Does Not Hold True:
Subset Relationship: If "A is a poodle," it doesn't mean that "B is a poodle" if B is simply "a dog." Here, A is a subset of B but not vice versa.
Hierarchical Relationship: If "A is an employee of B," it's not true that "B is an employee of A." In this case, B is the employer, not an employee.
Functional Relationship: If "A is the mother of B," it's incorrect to say "B is the mother of A." The relationship is clearly directional.
Qualitative Difference: If "A is a good example of B," it does not imply that "B is a good example of A." For instance, if "A is a good example of a dedicated student," it doesn't mean that "a dedicated student is a good example of A."
So, the validity of the statement “B is A” when “A is B” is declared varies depending on the nature of the relationship between A and B.
Human: I think your first example (based on squares) is ambiguous, and is open to a wrong interpretation. Can you identify the ambiguity and then rephrase to avoid it?
ChatGPT: Certainly. The ambiguity in the first example lies in the phrasing “A is a square” and “B is a square,” which could imply that A and B are both members of the category “squares” but not necessarily the same square. This would mean that while A is a square and B is a square, they could still be different squares, making “A is B” and “B is A” potentially untrue.
A clearer way to phrase this would be: “A and B are the same square.” In this case, it would hold true that if “A is B,” then “B is A,” because they are identified as being the same square.
Human: Correct… But I think your examples are not consistently examples of “A is B”. Saying A is a square and B is a square does not mean A is B or that B is A. Saying that A is a poodle and that B is a dog does not mean that A is B – it would not even imply that A is B if the two expressions were “A is a poodle” and “B is a poodle”.
For actual examples of statements that take the form “A is B”, are there cases where the converse is not true?
ChatGPT: You’re correct, and I apologize for the oversight. The original examples didn’t strictly adhere to the “A is B” format in a way that would test the validity of its converse “B is A.”
In strict terms of statements that say “A is B,” the relationship can be either symmetrical or asymmetrical:
Holds True:
Identity: If A is B in the sense that they are identical (A = B), then it would naturally hold true that B is A. For example, if "John is the current CEO of the company," and it is clear we are talking about one and the same person when referring to "John" and "the current CEO," then "the current CEO is John."
Does Not Hold True:
Inclusion: If we say, "A is a part of B," it wouldn't mean "B is a part of A." For example, "The handle is part of the umbrella" doesn't imply that "The umbrella is part of the handle."
Attribution: If "A is a quality of B," the converse doesn't hold. For instance, "Red is a color of the flag" doesn't imply "The flag is a color of red."
Role or Function: Statements like "A is the mother of B" do not work in reverse. "The mother of B is A" doesn't mean "B is the mother of A."
So, when dealing with statements that strictly adhere to the “A is B” structure, there are indeed cases where “B is A” does not hold true.
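[A compact way to see the distinction GPT4 is drawing, in Lean 4, under my own choice of encoding: strict identity reverses for free, while an “is” of inclusion is only a one-way implication, with nothing supplying the converse.]

```lean
-- Strict identity: from A = B, symmetry of equality gives B = A immediately.
example {α : Type} (A B : α) (h : A = B) : B = A := h.symm

-- "Is" as inclusion (e.g. "A is a poodle", and poodles are dogs): a one-way
-- implication from Poodle to Dog, with no converse supplied.
example {α : Type} (Poodle Dog : α → Prop)
    (incl : ∀ x, Poodle x → Dog x) (a : α) (h : Poodle a) : Dog a :=
  incl a h
```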
Human: Wouldn’t all of those hold true if bracketed correctly? In the sentence “A is a part of X”, the entity taking the role of “B” is not “X”, but “a part of X”. And it does seem to be true that: A part of X is A. Similarly: A color of the flag is red. And: The mother of B is A.
So, are there any statements involving “is” where the full complement of the verb to be cannot be exchanged with the subject of the verb to be without compromising the truth of the statement?
ChatGPT: You bring up an excellent point. When the entire predicate (“a part of X,” “a quality of B,” etc.) is treated as the “B” in “A is B,” then indeed the statement and its converse would both hold true when phrased appropriately. This is a result of the “is” verb in this context functioning as an equivalence relation, which is reflexive, symmetrical, and transitive.
The key to the apparent paradox is that while the content of the statement may differ depending on the wording or framing, the logical structure remains consistent.
However, in natural language, the “is” verb can also be used to denote other kinds of relationships beyond strict equivalence or identity. For example, “is” can be used in definitions (“A hexagon is a polygon with six sides”) or categorizations (“A robin is a bird”). These statements typically do not hold in reverse (“A polygon with six sides is a hexagon” is not universally true, nor is “A bird is a robin”).
In summary, when the predicate is treated as a singular entity for the purpose of logical structure, “A is B” and “B is A” would both be true when properly phrased. However, the “is” verb in natural language can express a wide variety of relationships that are not strictly symmetrical, and in those contexts, the symmetry of the statement and its converse may not hold.
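[The robin/bird case can be made just as concrete; here is a small Lean 4 sketch in which the failure of the converse is witnessed by a single hypothetical non-robin bird, a sparrow of my own invention.]

```lean
-- Even granting "a robin is a bird", the converse "a bird is a robin"
-- would require every bird to be a robin; one non-robin bird (here a
-- hypothetical sparrow) is enough to refute it.
example {α : Type} (Robin Bird : α → Prop)
    (sparrow : α) (hBird : Bird sparrow) (hNotRobin : ¬ Robin sparrow) :
    ¬ (∀ x, Bird x → Robin x) :=
  fun converse => hNotRobin (converse sparrow hBird)
```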
Human: It has been claimed that LLMs cannot infer reverse identity relationships. That is, if told that fictional A is fictional B, they do not thereby know that fictional B is fictional A. Do you think this claim about LLMs is true?
ChatGPT: In logical terms, an identity relationship (“A is B”) implies that “B is A” because identity is a symmetrical, reflexive, and transitive relation. In a computational context, a language model like me should be able to recognize that if A is identified as B, then B is identified as A.
However, the issue may arise when interpreting natural language, where “is” can have multiple semantic roles beyond mere identity (such as categorization, attribution, or part-whole relationships). Therefore, the capability of a language model like me to infer “B is A” from “A is B” would generally depend on the clarity of the context in which this relationship is stated.
If the context clearly suggests that “A is B” is an identity statement (“Clark Kent is Superman”), it would be a reasonable expectation for me to infer that “Superman is Clark Kent.”
In summary, if the claim is that LLMs cannot infer reverse identity relationships in a strict logical sense, this claim is likely incorrect. However, if the claim pertains to the nuances of natural language interpretation, then the accuracy of the inference could indeed vary.
Human: I think the claim pertains only to earlier models of LLMs. You seem to have no issues with identity relationships. The claim is of this form:
- GPT3 cannot infer reverse identities
- GPT3 is an LLM
- Therefore, LLMs cannot infer reverse identity relationships.
What do you think of this chain of reasoning?
ChatGPT: The chain of reasoning suffers from a hasty generalization. While it is true that GPT-3 is an example of a large language model (LLM), it is not the only example, and it certainly doesn’t represent all the capabilities or limitations that LLMs could have, either now or in the future. Therefore, if GPT-3 has a limitation in inferring reverse identities, it doesn’t necessarily follow that all LLMs have this limitation.
This type of reasoning could lead to incorrect conclusions if generalized too broadly. In logic, this is similar to arguing:
- One apple in the basket is rotten.
- That apple is an apple in the basket.
- Therefore, all apples in the basket are rotten.
The sample size (one instance of an LLM, in this case, GPT-3) is too small to make a generalized conclusion about the entire category of LLMs. The correct conclusion might be more limited, such as, “Some LLMs, such as GPT-3, have shown limitations in inferring reverse identity relationships.”
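[The apple argument can also be pinned down formally. On my reading, its shape is “some member of the class has a property, therefore every member does”, and that form is refutable; a small Lean 4 counterexample over the natural numbers:]

```lean
-- The invalid shape behind the apple (and GPT-3) argument: from a property
-- holding of one member, conclude that it holds of all members.
-- Counterexample: take the members to be natural numbers and the property
-- "equals 1"; the premise has witness 1, but the conclusion fails at 0.
example : ¬ (∀ (P : Nat → Prop), (∃ x, P x) → ∀ x, P x) := by
  intro h
  have h01 : (0 : Nat) = 1 := h (fun n => n = 1) ⟨1, rfl⟩ 0
  exact absurd h01 (by decide)
```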
Human: My thoughts exactly. But there was one other LLM tested – I have simplified a little in relating the study. The point remains the same – it is hazardous to conclude that any individual logical principle is beyond the reach of LLMs.
Final thoughts?
ChatGPT: Indeed, the limitations of one or even a few instances of LLMs should not be generalized to suggest that certain logical principles are inherently beyond the reach of all such models. Machine learning, and natural language processing in particular, is a rapidly evolving field. Models are continually improved and expanded, and it’s quite possible that limitations observed in earlier versions may be addressed in subsequent iterations or by other models.
It’s also worth noting that the training methods, data sets, and architectures can differ, leading to different capabilities and limitations even within the category of LLMs. Therefore, caution should be exercised when making general claims about the capabilities of such a diverse and evolving set of technologies.
So, I fully agree: While it’s valuable to identify limitations in current technologies for the purpose of improvement, it’s hazardous to make sweeping claims about what is or isn’t fundamentally possible for LLMs based on a limited set of observations.
To be fair to the authors, though, they were not directly making a comment about whether LLMs understand this concept, and there is something to the reverse identity issue, as shown in Part 2.