LLMs and the Gell-Mann amnesia effect

Gell-Mann amnesia is one of my favorite concepts. From Wikipedia:

In a speech in 2002, Crichton coined the term “Gell-Mann amnesia effect” to describe the phenomenon of experts reading articles within their fields of expertise and finding them to be error-ridden and full of misunderstanding, but seemingly forgetting those experiences when reading articles in the same publications written on topics outside of their fields of expertise, which they believe to be credible. He explained that he had chosen the name ironically, because he had once discussed the effect with physicist Murray Gell-Mann, “and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have”.

It cracks me up because it’s so true. The last time I experienced it was reading an article about kettlebell exercises in the New York Times. I can’t find it now, but the demonstration animation of the swing was dangerously wrong. Yet I had no trouble turning the figurative page to another article and just moving on…

Which brings me to the topic of LLMs. Many words have been written about LLM behavior over the last few years (perhaps rivaling the output of the LLMs themselves…! ;), especially about hallucination. Hallucination rates have probably dropped a great deal over that time, but I still find myself catching subtle errors in responses that make me do a double take.

For example, I recently asked Claude to critique a presentation I had written, and it hallucinated text that wasn’t in the presentation. In this particular instance, the text it “thought” was in the presentation wasn’t invented from whole cloth — it did exist in the context, but it was from an earlier part of the conversation.

I had a similar experience using Rovo at Atlassian, which is trained on 20+ years of company knowledge. Now, Rovo is a superb tool. I loved using it. However, while I wouldn’t call myself an expert, necessarily, I’d been at the company long enough to have developed a reasonable knowledge bank of my own — information about our products, processes, and projects. And there were moments when that personal context enabled me to detect subtle errors in Rovo’s responses.

Now, I don’t really mind correcting these subtle errors. After all, humans make these kinds of “memory” mistakes, too.

But, when it happens, I’m reminded of the Gell-Mann amnesia effect. By that, I mean: in these instances, I’m acutely aware of the context I’m providing the LLM, or I’ve got enough experience in the domain — I’m the “expert” — so I can detect these subtle errors. (Hopefully!)

But when I ask the LLM for information about something I’m not an expert in… it’s so easy to just go along with whatever it says.

The same principle is in effect.

I’m not entirely sure what to make of this realization, because the line between knowing enough to judge a response’s accuracy and not knowing enough can get blurry at the margins.

But it does guide me to leverage LLMs to do things I already know how to do — just better and faster.