All Models Are Wrong (Metaphors for the Insufficiency of Metaphors)
On Maps and Territories
There's a saying: the map is not the territory. The Farnam Street blog has a pretty good explainer:
The map of reality is not reality. Even the best maps are imperfect. That’s because they are reductions of what they represent. If a map were to represent the territory with perfect fidelity, it would no longer be a reduction and thus would no longer be useful to us. [...]
Maps are necessary, but flawed. (By maps, we mean any abstraction of reality, including descriptions, theories, models, etc.) The problem with a map is not simply that it is an abstraction; we need abstraction. [...] the mind creates maps of reality in order to understand it, because the only way we can process the complexity of reality is through abstraction. But frequently, we don’t understand our maps or their limits.
As the post documents, this saying originates in 1931 with the Polish-American scholar Alfred Korzybski. Korzybski urged us to recognize that our maps of the world are necessarily limited: they can be overly reductive, they can be incorrect, and they can be misinterpreted.
This is an extremely important and deeply true point. I love the map-territory distinction. But it is itself a map, or model, or metaphor, for how the world works, and is thus inherently limited.
So let's take a look at some similar metaphors.
On Leaky Abstractions
Back in 2002, Joel Spolsky wrote a blog post called The Law of Leaky Abstractions. Spolsky never mentions maps or territories or even models, but I think he's essentially talking about the same thing:
[A] lot of computer programming consists of building abstractions. What is a string library? It’s a way to pretend that computers can manipulate strings just as easily as they can manipulate numbers. What is a file system? It’s a way to pretend that a hard drive isn’t really a bunch of spinning magnetic platters that can store bits at certain locations, but rather a hierarchical system of folders-within-folders containing individual files that in turn consist of one or more strings of bytes.
Spolsky says "a lot" of programming is building abstractions, but I think that undersells it. The core of programming is building, manipulating, and trying to communicate about abstractions.
And, as any programmer will tell you, we are constantly screwing it up.
TCP attempts to provide a complete abstraction of an underlying unreliable network, but sometimes, the network leaks through the abstraction and you feel the things that the abstraction can’t quite protect you from. This is but one example of what I’ve dubbed the Law of Leaky Abstractions: All non-trivial abstractions, to some degree, are leaky.
Abstractions fail. Sometimes a little, sometimes a lot. There’s leakage. Things go wrong. It happens all over the place when you have abstractions.
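Here's a small sketch of what that leak looks like in code. The urllib call below pretends the network is just "give me the bytes at this URL," but the unreliable network underneath still surfaces as exceptions the caller has to know about. The fetch helper, URL, and timeout are arbitrary; this is an illustration, not a recipe.

import socket
import urllib.error
import urllib.request

def fetch(url: str) -> bytes:
    # The abstraction: a URL goes in, bytes come out.
    try:
        with urllib.request.urlopen(url, timeout=2) as response:
            return response.read()
    except urllib.error.URLError as err:
        # The leak: DNS failures, refused connections, dropped packets.
        raise RuntimeError(f"the network leaked through: {err.reason}") from err
    except socket.timeout as err:
        # The leak: packets delayed past our patience.
        raise RuntimeError("the network leaked through: a timeout") from err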
I tend to conceptualize leaky abstractions as stack traces. Stack traces are something programmers frequently wrestle with when debugging, and reading one is like following the path of a leaky abstraction as it drips through code. Here's a snippet of a stack trace I was confronted with a few weeks ago (full thing here):
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/username/temp-venv/lib/python3.10/site-packages/parsons/__init__.py", line 14, in <module>
from parsons.databases.redshift.redshift import Redshift
<snip snip>
File "/Users/username/temp-venv/lib/python3.10/site-packages/botocore/vendored/requests/packages/urllib3/response.py", line 9, in <module>
from ._collections import HTTPHeaderDict
File "/Users/username/temp-venv/lib/python3.10/site-packages/botocore/vendored/requests/packages/urllib3/_collections.py", line 1, in <module>
from collections import Mapping, MutableMapping
ImportError: cannot import name 'Mapping' from 'collections' (/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/collections/__init__.py)
Apologies, I know this is basically unreadable for any non-programmer. It's unreadable for most programmers too. It's only sort of readable for me because I have a ton of context about the situation: I understand the Python packaging ecosystem, how virtual environments are structured, how my particular project is structured, and how import errors work. Even with all that, I couldn't find a solution for this error.
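For what it's worth, the last frame of the trace does at least name the specific leak: Python 3.10 removed the old aliases for the abstract base classes from the collections module (they've lived in collections.abc since Python 3.3), so older code that imports them directly now fails. Here's a minimal reproduction, which is of course a long way from a fix for vendored code buried deep inside botocore.

# On Python 3.10 or later:
try:
    from collections import Mapping      # what the vendored urllib3 tries
except ImportError:
    from collections.abc import Mapping  # where Mapping actually lives now

print(Mapping)  # <class 'collections.abc.Mapping'>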
Reading a stack trace is one of the hardest things a programmer has to do precisely because it requires navigating many nested layers of abstraction, many of which you've never encountered before. You're forced to look at code which had previously been "abstracted away", but you don't know which abstractions you actually need to unpack and you can't unpack all of them. And yet any layer could potentially be the source of the error.
Similarly, when you notice a flaw in a mental model, it's often hard to track down the source. Maybe the flaw isn't in the model itself but in one of its constitutive assumptions. Or maybe the flaw is in how the model is being used, how it is being applied to some new context. There are so many potential sources of complication, it can be overwhelming!
Anyway, I love the "leaky code"/"leaky abstraction" metaphor for the limitations of models and find it really compelling. Obviously it will be less helpful to non-programmers. Regardless, we'll add it to our collection of metaphors and move on.
On Stories
Neil Gaiman, in the introduction to his short story collection Fragile Things, writes:
One describes a tale best by telling the tale. You see? The way one describes a story, to oneself or to the world, is by telling the story. It is a balancing act and it is a dream. The more accurate the map, the more it resembles the territory. The most accurate map possible would be the territory, and thus would be perfectly accurate and perfectly useless.
The tale is the map which is the territory.
New writers often struggle with including too much detail out of the desire to be 'accurate' and 'realistic'. Their character goes to work in the morning, and the writer tells us how groggy they felt when they woke up, which clothes they put on and what they had for breakfast, how they spent a few frantic minutes looking for their keys. Of course, those details could be important and relevant, but the author has to craft them that way. Otherwise they're just boring.
But a tale is not always a map. It is not always an abbreviated version of the full history of the world. A tale can be about an impossible world. It can cover territory that has never existed.
Let's let stories stand on their own as a metaphor.
Stories have a unique relationship to reality. With the arguable exception of "true stories" they are by definition at least a little unreal. That's part of what makes fiction compelling. Readers want to be immersed in something different from their everyday reality.
At the same time, stories are compelling when they ring true. We love stories about aliens and robots and gods, sentient toys and intelligent dinosaurs, but only when they are at least a little like us. The 1884 novella Flatland follows a literal two-dimensional square, but the square has human-like emotions—curiosity, skepticism, wonder—that make it easy to identify with.
As writer/editor/teacher Beth Weeks writes:
many of us have probably received feedback along the lines of, or thought to ourselves as we read, “that’s not realistic.” many of us believe, consciously or not, that fiction that is more “realistic” is inherently better than fiction that is less “realistic.” for some of us, real means a saturation of details, the clear depiction of the surfaces of things. reality is found in the rendering thereof; if you can “see” it, it’s real. for others of us, it might be the development of complex characters and their growth across a narrative. and for yet others, reality is subtlety, or misery, or the idea of “slice of life,” a term i don’t think means anything, because aren’t all stories a slice of a character’s life? what would a story that’s not a slice of life look like?
[...] you the writer get to define the constraints of your own reality. you get to choose if your world even complies with the known laws of physics. and if it doesn’t, you get to choose which ones to break, and why to break them. you get to choose if your stories take place in a real house in a real town on a real day.
Stories must capture some part of reality, but which parts are up to the author (and also dependent on the reader; a story set on the Appalachian Trail will capture more of reality for a hiker who's walked that trail than for someone like me).
When we say "all models are wrong" or "all abstractions leak", there is an assumption that we want models to be less wrong, that we want to stop our abstractions from leaking. But we expect stories to diverge from reality. We want them to.
I don't know if we ever want our models to be wrong, but there's something valuable in adopting a metaphor that not only accepts that wrongness but focuses on it. A model says "assume the elephant is spherical" and quickly moves on. A story says, "wait! what would a world with spherical elephants even look like? what are the implications?" A story dwells on the parts of the model that are wrong and finds them as interesting and important as the parts that are right.
Navigating Ill-Structured Domains
We've got three metaphors in our basket now: the map and the territory, the leaky abstraction/stack trace, and the story. But why are we collecting metaphors for the insufficiency of metaphors?
Why are we looking for different ways to say that all models are wrong?
Let me answer that question with yet another metaphor. We have two eyes, and each of them receives a slightly different picture of the world. What we see with both eyes open is different from what either eye sees alone. Our brains stitch together a composite integrating information from both (and add some interpretations of their own).
All models are wrong, but if we look at several models at once, maybe we can stitch together something closer to the truth.
Cognitive Flexibility Theory (CFT) argues that expertise in ill-structured domains such as medicine is achieved not by learning abstract and generalizable principles that can be faithfully applied to a given situation, but by having experiences. These experiences allow us to generate partial models or metaphors, which can be applied to new situations in a flexible way. As the theory's authors put it: "Emphasis must be shifted from retrieval of intact, rigid, precompiled knowledge structures, to assembly of knowledge from different conceptual and precedent case sources to adaptively fit the situation at hand."
Experts in ill-structured domains recognize that no one model is enough to give them purchase, so they get good at flexibly choosing between many models. Here is an example the authors give of using multiple metaphors to understand how muscle fibers work:
We have discovered a large number of misconceptions that result from the overextended application of analogies. To combat the negative effects of a powerful and seductive single analogy, we employ sets of integrated multiple analogies. [...]
So, where we find that misconceptions about the nature of force production by muscle fibers often develop because of a common analogy to the operation of rowing crews (sarcomere "arms" and oars both generate force by a kind of "pulling"), other analogies are introduced to mitigate the negative effects of the limited rowing crew analogy. An analogy to turnbuckles corrects misleading notions about the nature of relative movement and the gross structures within the muscle. An analogy to "finger handcuffs" covers important information missing in the rowing crew analogy about limits of fiber length [...] And so on.
The authors of CFT focus mostly on their own domain of medicine, but claim their work applies to any "fuzzy" field. I'm not sure what field we're in here—epistemology? ontology?—but I'm pretty sure it's fuzzy. So maybe this approach will help.
Comparing Metaphors
For each of our three metaphors, we can ask: what works, and what doesn't? How does the metaphor help us understand the thing it's trying to model, and how does it get in the way?
The Map and Territory metaphor does a great job of highlighting how models are reductions of reality. It brings to mind a map where single colors represent forests, roads, and bodies of water, sprawling beautiful ecosystems reduced to a splash of green or blue.
It also makes clear how much models depend on interpretation. We've all misinterpreted a map and gotten lost.
But this metaphor also seems to imply that there's a right way and a wrong way to interpret a map: that if you get lost, it's your fault.
The Story metaphor emphasizes subjectivity. There's no 'right way' or 'wrong way' to read a story. (Mostly. At some point I will do a blog post about Umberto Eco's Interpretation and Overinterpretation.) The author may have an intended interpretation. There may be a conclusion you are meant to reach. But you are free not to reach it.
The meanings of stories are often socially contested. There are power struggles over which kind of stories can be told. Models have social and political power. Their use and construction can be a matter of life and death. The story metaphor makes that a little more salient.
And, as we talked about earlier, stories often emphasize the places where they diverge from reality. They draw our attention to the 'wrongness', instead of away from it.
The Leaky Abstraction metaphor prompts the question: what exactly is 'leaking' from the abstraction?
My answer is 'context'. By context I mean the details of reality, the local and global variables that were presumed to be irrelevant to the abstraction but turn out to be important.
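A tiny, classic example of context leaking: Python's float pretends to be a real number, and most of the time you can ignore the binary representation underneath, right up until you can't.

# The abstraction: floats are real numbers.
# The leaked context: they're actually base-2 fractions with finite precision.
print(0.1 + 0.2 == 0.3)     # False
print(f"{0.1 + 0.2:.20f}")  # 0.30000000000000004441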
I like this metaphor because it reminds me that abstractions are not truly separate from the reality they come from. They are entangled with the world, and the world can leak in.
This metaphor also reminds me that abstractions overlap and are nested within and around other abstractions. A flaw in one abstraction may arise from a flaw in another abstraction it depends on. It's leaky abstractions all the way down.
Sometimes, when I hear (or think) the phrase 'all models are wrong', I get the urge to say, impatiently, "yes, of course, all models are wrong, but is this model good enough? How can we tell when a model has a superficial flaw, and when it's a big problem we need to worry about?"
Let's see how our three metaphors might answer this question.
The map metaphor might say: a map is too flawed when it keeps getting people lost, so a model is too flawed when it confuses more people than it helps.
The story metaphor might say: a story is too flawed when it fails to move people, so a model is too flawed when it's boring or irrelevant.
The leaky abstraction/code metaphor might say: a program is too flawed when people report bugs, so a model is too flawed when people tell you it's causing them problems.
What other metaphors for 'all models are wrong' exist? What might those metaphors say about when a model is flawed enough to need fixing?