As I've said a few times now, all models are wrong. That includes laws and norms, employee handbooks and product roadmaps, system diagrams and city plans. If something is abstracting away from reality, it's creating a gap. What happens when that gap causes problems?
In his frustrating but provocative book The Utopia of Rules, David Graeber introduces the term "interpretive labor". He defines interpretive labor as the work of "trying to decipher others' motives and perceptions". He then starts using the term "imaginative labor", which he doesn't define, and which he seems to conflate with interpretive labor. (I said it was a frustrating book!)
I find Graeber's terminology confusing, and so I'd like to try my own definitions. Going forward, let us say that imaginative labor is the work of model-building, while interpretive labor is the work of bridging the model and reality. A product designer performs imaginative labor, while a customer support representative performs interpretive labor. The city planner performs imaginative labor, while the construction team performs interpretive labor. And so on.
(I find the incoherence of Graeber's definitions kind of funny, since he writes: "This confusion, this jumbling of different conceptions of imagination, runs throughout the history of leftist thought." You are not immune, dear professor. And if my definitions are also confusing, I suppose I am carrying on a great tradition!)
Imagine a retail employee given two conflicting instructions: "always get a receipt before refunding an item" and "the customer is always right". Their handbook offers no guidance on how to resolve the contradiction. Whatever the employee's solution, the work put into finding it is interpretive labor. The pedestrian on the street outside, trying to judge whether it's safe to cross against the light, is performing interpretive labor too. So is the bus driver hesitating over whether to let on a passenger who can't pay the fare.
We all perform interpretive labor regularly. That's not a bad thing—it's a natural and inescapable part of life. But interpretive labor can be performed in dysfunctional ways, and those are the ways we should try to avoid.
Interpretive Labor in Dysfunctional Systems
Let's return yet again to the Toyota worker on the factory floor, faced with a flaw in the part passing by. They have to decide whether Toyota's rules about when to halt production apply, and this is interpretive labor. Toyota encouraged interpretive labor by creating a safe learning environment.
General Motors adopted the same process, but not the safe learning environment. They discouraged workers from halting production, saying that those who did so were lazy and trying to get out of work. The workers were left with little incentive to perform the interpretive labor of spotting problems and acting on them. The gaps between the "ideal car" and the "real car" might have been noticed by workers, but they were less likely to be reported.
Interpretive labor becomes dysfunctional when it prevents feedback about systemic problems from being reported to those who can fix the problem.
Sometimes this happens because interpretive labor (or really, the interpretive laborer) is blamed for things that go wrong. In Engineering a Safer World, Nancy Leveson documents how individual operators are often blamed for accidents. For example, railroad higher-ups blamed individual workers for getting themselves killed while trying to couple trains. Similarly, aircraft accidents are often blamed on pilots:
During and after World War II, the Air Force had serious problems with aircraft accidents: From 1952 to 1966, for example, 7,715 aircraft were lost and 8,547 people killed. Most of these accidents were blamed on pilots. [...] [T]he Air Force did not take [alternative explanations] seriously until they began to develop intercontinental ballistic missiles: there were no pilots to blame for the frequent and devastating explosions of these liquid-propellant missiles. In having to confront factors other than pilot error, the Air Force began to treat safety as a system problem, and System Safety programs were developed to deal with them. [...]
It is still common to see statements that 70 percent to 80 percent of aircraft accidents are caused by pilot error or that 85 percent of work accidents are due to unsafe acts by workers rather than unsafe conditions. However, closer examination shows that the data may be biased and incomplete: the less that is known about an accident, the more likely it will be attributed to operator error. Thorough investigation of serious accidents almost invariably finds other factors.
When we blame individuals for problems, we fail to look at the full system. Other issues that contributed to the failure are ignored, and thus cannot be reported to system designers.
Interestingly, interpretive labor performed successfully can also cause feedback to go unreported, allowing system designers to think that the system is working fine.
Imagine a maintenance worker given instructions to turn a room's heat to seventy degrees Celsius. "That can't be right," they think, reasonably, "the person who wrote this must have meant Fahrenheit." And so they turn the heat to a moderate 70° F, having correctly interpreted the author's intent. But the instructions are passed on to other workers as-is, and one day someone else reads them and thinks, "sounds hot, but okay".
Suddenly, there's another accident. And perhaps the individual will be blamed for it. "They should've known we wouldn't want the building that hot!" their bosses might say. But that is not really the issue. The issue is not even that there was a typo in the instructions. Typos happen. The problem is that the feedback about the typo was never passed to someone who could correct it. There was no channel for information from the worker on the ground to the designer of the system.
The interpretive labor worked until it suddenly didn't. The gap between model and reality stayed wide open until somebody fell into it.
Just a few days ago, John Bull wrote a tweet thread about the "trust thermocline". The thermocline is a metaphor taken from large bodies of water, which get gradually cooler the deeper you dive, until the water suddenly becomes frigid. In product development, users can gradually grow more and more annoyed, and then suddenly leave all at once.
[Product developers] think that as long as usage is ticking up, they can do what they like to cost and product. And (critically) that they can just react when the curve flattens.
But with a lot of CONTENT products (inc social media) that's not actually how it works. Because it doesn't account for sunk-cost lock-in. Users and readers will stick to what they know, and use, well beyond the point where they START to lose trust in it. And you won't see that. But they'll only MOVE when they hit the Trust Thermocline. The point where their lack of trust in the product to meet their needs, and the emotional investment they'd made in it, have finally been outweighed by the physical and emotional effort required to abandon it.
When customers are unhappy with a product, but the cost of leaving is too high, they will perform the interpretive labor necessary to keep the product in their lives. This labor is often invisible to product designers, but it is usually visible to user-facing workers. Bull writes:
A few people have asked how you spot where your Trust Thermocline is, and how to avoid hitting it. I'll give you the same answer I give senior execs: I don't know. But the people working on the ground level in the customer-facing sections of your company do. Because it's those people that will be picking up on the general vibe of your userbase and their 'grumbles' - i.e. the complaints that the user shoulders internally (mostly) rather than makes directly in feedback.
The answer, in other words, is to have good systems of feedback which value and respond to the interpretive labor being performed by both users and user-facing workers. Unfortunately, this is a hard answer for privileged people to hear.
Interpretive Labor and Privilege
Graeber remarks several times on how interpretive labor is unevenly distributed in society. For instance, women are more likely to perform it than men:
Generations of women novelists—Virginia Woolf comes most immediately to mind (To the Lighthouse)—have documented the [...] constant efforts women end up having to expend in managing, maintaining, and adjusting the egos of oblivious and self-important men, involving the continual work of imaginative identification, or interpretive labor.
This seems right enough, but I'd like to zoom out and look at privilege from a systems perspective. Privilege—or rather, the lack of it: marginalization, discrimination, bias, etc—often causes interpretive labor to be dismissed, and the valuable feedback it could provide ignored.
In a 2021 interview with Charlie Warzel, social media researcher Erin Gallagher explained how agents had been manipulating social media to influence elections for years, but the problem went largely ignored because it wasn't happening in the United States:
There was this report that came out in Bloomberg in March 2016 called “How To Hack An Election.” I’m not sure it made waves in the U.S. but when it came out the researchers I’d come in contact with in Latin America told me it was like a bomb went off in their part of the world. This guy had been manipulating social media and influencing elections for a decade and he explained how he did it. But it didn’t seem like it made much of an impression in the US. There was a notion of ‘that won’t happen here.’
Social media companies, journalists, and lawmakers could have responded to the problems being reported in Latin America. They could have learned from the work already being done by Latin American civic leaders and activists. Instead, those warnings were ignored until United States elections were themselves clearly and openly targeted.
Another example: the biological mechanisms behind long Covid appear to be related to several chronic illnesses that are more common among women. Ed Yong writes:
Many long-haulers have the hallmark symptom of ME/CFS—post-exertional malaise, in which mild bursts of activity trigger dramatic crashes. Clusters of ME/CFS have followed many disease outbreaks, including the original SARS epidemic, in 2003. And when the pandemic began in 2020, ME/CFS researchers and patients saw long COVID coming before anyone else did.
“For years, we’ve been shouting from the rooftops that this is something that happens after an infectious onset, but it’s been hard to get people to pay attention,” Michael Van Elzakker of Harvard, who is one of the few scientists to study the condition, told me. Much like long COVID, ME/CFS has been trivialized as a psychological condition, its patients mocked and its researchers underfunded. “It’s a terrible outrage,” Maureen Hanson, a molecular biologist at Cornell who also works on ME/CFS, told me. “If we had a better understanding of it, we’d be ahead of the game” with long COVID.
This pattern, if you look for it, is everywhere. Anne Helen Petersen writes about the ever-increasing burdens of elder care:
So much of the labor — and struggle — associated with caregiving goes unnoticed, unappreciated, and underdiscussed. There’s a whole host of reasons for that, mostly the fact that family caregiving is largely performed by women in the home and thus discounted as labor; when it is paid, it’s almost entirely performed by women of color, particularly immigrant women, and socially devalued. [...] Since the difficulty of this care remains largely imperceptible to all save those who provide it, there have been few attempts, governmental or otherwise, to make it better, easier, or less of a life-swallowing burden.
Elder care is a taxing combination of physical, emotional, and logistical labor. And in the context of the larger system, it is interpretive labor. People—primarily women, and women of color—step into the gaps of the system and struggle to make do. They have no other choice. And because the needs of these groups are devalued, that struggle is largely seen as a tolerable sacrifice, as the system more or less working, rather than a warning sign that something's wrong.
In each of these cases, there are people who have identified a problem with the model (the model of election security, disease, caregiving, etc). But because the people who best understand the problems are from groups with less power, their needs and their feedback have been ignored. And in each case, the problem has only grown.
Interpretive Labor as a Site of Power Struggle
In the previous section I listed some examples where the gap between models and reality impacted marginalized groups, and feedback from those groups was ignored. It's possible to read them and think, naively, that bias merely has the unfortunate side effect of making us disregard important feedback about dysfunctional systems. That may sometimes be true, but interpretive labor can also be a site of explicit power struggle.
Many people have noted that applications for welfare benefits are more elaborate and demanding than those for other government programs. As Emily Badger writes:
We rarely make similar demands of other recipients of government aid. We don't drug-test farmers who receive agriculture subsidies (lest they think about plowing while high!). We don't require Pell Grant recipients to prove that they're pursuing a degree that will get them a real job one day (sorry, no poetry!). We don't require wealthy families who cash in on the home mortgage interest deduction to prove that they don't use their homes as brothels (because surely someone out there does this). The strings that we attach to government aid are attached uniquely for the poor.
If the government suddenly started drug-testing home mortgage interest deduction recipients, those recipients would protest, and in far more effective ways than welfare recipients are capable of. They would sue; they would donate to political opponents; they would write letters to the editor and be interviewed on cable news. They would send feedback up the chain and change the system. Their political power is precisely what keeps the government from foisting unnecessary interpretive labor onto them.
Welfare applicants, as a whole, have no such power. They can be forced to do the interpretive labor of filling out forms, taking drug tests, and petitioning for exceptions. This extra labor is both a symptom of their lack of power and a way to keep them from gaining power. Interpretive labor takes time and energy. It can leave you exhausted.
Resistance to feedback, and refusal to learn from the interpretive labor of others, can be a demonstration of power. Heidi Grasswick writes:
In their analyses of invested ignorance, many feminists have built upon Charles Mills’s work on racial ignorance, in which he argues that whites (or other dominant groups) have a positive interest in misrepresenting the world in ways that help support their dominant position (Mills 1997, 2007). Mills conceptualizes whites as engaged in a kind of cognitive dysfunction that serves their purposes by preventing them from understanding the social relations of domination in which they are engaged[.]
Part of how white people maintain dominance over people of color, and men over women, is by asserting a model of the world which is ignorant of the realities of less privileged groups. The lack of corrective feedback which would make their models more true is a feature, not a bug.
This analysis is depressing as hell, so I will end with some examples of collective action where interpretive labor was a source of liberation.
First, work to rule, or as it's been renamed recently, quiet quitting. This is when workers refuse to perform any interpretive labor and instead follow the rules and regulations to the letter.
James Scott explains:
Workers have seized on the inadequacy of the rules to explain how things actually run and have exploited it to their advantage. Thus, the taxi drivers of Paris have, when they were frustrated with the municipal authorities over fees or new regulations, resorted to what is known as a grève du zèle. They would all, by agreement and on cue, suddenly begin to follow all the regulations in the code routier, and, as intended, this would bring traffic in Paris to a grinding halt. Knowing that traffic circulated in Paris only by a practiced and judicious disregard of many regulations, they could, merely by following the rules meticulously, bring it to a standstill.
The English-language version of this procedure is often known as the "work-to-rule" strike. In an extended work-to-rule action against the Caterpillar Corporation, workers reverted to following the inefficient procedures specified by engineers, knowing that it would cost the company valuable time and quality, rather than continuing the more expeditious practices they had long ago devised on the job. The actual work process in any office, on any construction site, or on any factory floor cannot be adequately explained by the rules, however elaborate, governing it; the work gets done only because of the effective informal understandings and improvisations outside those rules.
Systems only function when people are willing to perform the interpretive labor that bridges the gap between the formal and the actual. When laborers collectively withhold that work, they can bring the system to a halt.
(I have largely ignored Scott's work in this post, but his writings on metis, a similar concept to interpretive labor, are quite relevant. I hope to return to them soon.)
Finally: civil disobedience. It is hard to imagine a more formalized or abstract model than a legal code. What is the reality that the model of law maps to? There's no single answer, but I would say that the reality law is meant to map to is one in which the people who live under the law are treated fairly, and flourish happily.
Civil disobedience occurs when a group of people are unhappy with the law, often because it treats them unfairly. Going along with the law, and absorbing the consequences of its unfairness, is a form of interpretive labor. With civil disobedience, groups choose to no longer perform that labor, and attempt to send corrective feedback to society at large. Sometimes this results in the system being changed.
A Final Note
I am aware that the framework I've sketched out here is as abstract and untested as a model can be. Every single example I've cited has a complex context which I'm not truly addressing. Nevertheless, I offer it as an initial attempt to develop what for me has become a keystone in how I see the world.
If you have spotted errors, misunderstandings, over-generalizations, or missing insights, I very much welcome the interpretive labor of your feedback.