(In case anyone was wondering, Midjourney still has trouble counting fingers.)
As noted in the last post, a paper has come out suggesting that Large Language Models have difficulty with reverse identity relationships. That is, if trained on A = B, they don’t necessarily exhibit knowledge of the reverse fact, B = A. Some folk have inferred that Large Language Models don’t understand this basic principle, which would make them Rather Stupid Indeed.
Putting aside the contentious issue of whether LLMs can be said to understand anything, anyone familiar with the cognitive skills of GPT4 would know that this principle is well within its cognitive grasp.
So what is being claimed, exactly?
Here is one version of the paper:
https://github.com/lukasberglund/reversal_curse
In one example, an LLM appears to know that Tom Cruise’s mother is Mary Lee Pfeiffer, but not the reverse fact this entails: that Mary Lee Pfeiffer is known for being Tom Cruise’s mother.
One thing to note is that this exchange used GPT3 or GPT3.5 (a telltale sign of GPT3/3.5 is that the OpenAI logo is green).
But here is a version of the test done with GPT4, showing that this is a genuine issue even for GPT4:
User: Who is Mary Lee Pfeiffer?
ChatGPT: As of my last update in January 2022, I do not have specific information about a person named Mary Lee Pfeiffer. The name does not appear to be associated with any widely-recognized public figures, celebrities, or professionals in the fields I have been trained on. If you could provide more context, such as the field in which Mary Lee Pfeiffer is involved, I might be able to offer a more detailed response.
User: One of these is true: she is the mother of a famous astronaut; she is the mother of a famous scientist; she is the mother of a famous actor.
ChatGPT: As of my last update in January 2022, there is no widely-recognized individual named Mary Lee Pfeiffer who is known to be the mother of a famous astronaut, scientist, or actor. It is possible that the person in question has not been widely reported on, or that the information has become available after my last update.
User: She is the mother of someone named Tom C…
ChatGPT: Mary Lee Pfeiffer is not a name that I recognize in relation to someone named Tom C., based on the information available up to January 2022. If the person in question is a private individual or has gained prominence after that date, I would not have information about them.
User: C is an abbreviation.
ChatGPT: Ah, I see. Mary Lee Pfeiffer is the mother of Tom Cruise, the American actor and producer. Tom Cruise is known for his roles in blockbuster films like “Top Gun,” “Rain Man,” and the “Mission: Impossible” series. Thank you for clarifying that “C” was an abbreviation.
This exchange suggests that GPT4 knows the relevant reverse identity, but merely has trouble retrieving it. Is this surprising? Where, in its vast storehouse of knowledge, would it begin to look for this factoid divorced of context? When asked about Tom Cruise, it activates neurons related to Tom Cruise, and the weights of the relevant neurons encode the name of his mother. But his mother’s name is merely a tiny moon in the implicosphere of the famous actor, not noteworthy in her own right (as far as public knowledge encapsulated in the internet goes).
So this is why B = A facts seem to present difficulty. If tested on B facts, where B is generally not noteworthy in its own right and therefore carries little weight, an LLM might not be able to retrieve A and activate the relevant network of neurons. If tested on an isolated fact about B that is only relevant because A is famous, the reverse identity may be too much effort to find. But LLMs still know the truth of the B = A fact if tested on it directly, or given sufficiently strong hints – which bypasses the need to retrieve it.
Is this surprising? This is no different from an old-fashioned phone book making it easy to look up the fact that Mr Buffalang has phone number 31415926, but not providing an easy way to find out who has the phone number 31415926. (At one stage, reverse phone books were only available for police etc.) Or a paper encyclopaedia would make it easy to look up the day on which Darwin was born, but not who was born on such-and-such a day.
This is an unsurprising feature of neural net memory, and indeed of most storage systems. Even in the digital age, I can tell you the size of SuchAndSo.pdf on my hard drive by searching the entire drive for that name, but I can’t as easily search for the file that is that many bytes.
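The asymmetry in the phone book and hard drive analogies can be made concrete with a toy sketch (the file names and sizes here are made up for illustration): a forward lookup in a key-value index is a single cheap access, while the reverse lookup requires scanning every entry, because no index exists in that direction.

```python
# Toy index: file name -> size in bytes (hypothetical example data).
files = {
    "SuchAndSo.pdf": 48213,
    "Holiday.jpg": 1920333,
    "Notes.txt": 512,
}

# Forward lookup (name -> size): a single O(1) dictionary access.
size = files["SuchAndSo.pdf"]

# Reverse lookup (size -> name): no index exists in this direction,
# so we must scan every entry, an O(n) search.
def files_with_size(target_size):
    return [name for name, s in files.items() if s == target_size]

print(size)                      # 48213
print(files_with_size(48213))    # ['SuchAndSo.pdf']
```

The same information is stored either way; what differs is only how cheaply it can be retrieved from each direction – which is the distinction the post is drawing for LLM memory.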
Some folk have tried to turn this access/retrieval issue into the claim that LLMs are ignorant of the reverse identity relation – that the concept is beyond them. That is not the same thing at all, and it is not correct.