
Mistakes Were Inevitably Made

Trying to prevent failure, instead of creating spaces to learn from it, can paradoxically make systems even more dangerous.
Maple sap being transformed into maple syrup at a sugar shack in Pakenham, Ontario. Photo by M. Rehemtulla (2011)

All models are wrong. Given this, it should be no surprise that humans are constantly messing things up. But don't worry! That's often a good thing, because messing things up is how we learn.

James C. Scott, in Seeing Like a State, describes two cases where you have to fail first in order to succeed:

The boiling down of maple sap into syrup is a tricky business. If one goes too far, the sap will boil over. [...] those with experience look for the mass of small bubbles that forms on the surface of the sap just before it begins to boil over—a visual rule of thumb that is far easier to use. Achieving the insight, however, requires that, at least once, the syrup maker make a mistake and go too far. Chinese recipes, it has always amused me, often contain the following instruction: 'Heat the oil until it is almost smoking.' The recipes assume that the cook has made enough mistakes to know what oil looks like just before it begins smoking. The rule of thumb for maple syrup and for oil are, by definition, the rules of experience.

Both of these examples come from cooking, which seems simpler and more embodied than many of the other tasks modern humans are called upon to complete. But in my experience, the pattern is universal. In the post linked above, I talked about how much time programmers spend debugging. The endless cycle of causing errors, trying to understand them, and then fixing them is crucial for building mental models of how a given programming language or library works. It's true of storytelling and playing soccer too. You try, and sometimes you mess up, and the process of understanding how and why you messed up is how you get better.

Trying to prevent failure entirely, instead of creating spaces to learn from it, can paradoxically make systems even more dangerous. Nancy Leveson discusses this in Engineering a Safer World:

Experimentation is important at all levels of control. For manual tasks where the optimization criteria are speed and smoothness, the limits of acceptable adaptation and optimization can only be known from the error experienced when occasionally crossing a limit. Errors are an integral part of maintaining a skill at an optimal level and a necessary part of the feedback loop to achieve this goal. [...]

Providing feedback and allowing for experimentation in system design, then, is critical in allowing operators to optimize their control ability. In the less automated system designs of the past, operators naturally had this ability to experiment and update their mental models of the current system state. Designers of highly automated systems sometimes do not understand this requirement and design automation that takes operators “out of the loop.”  Everyone is then surprised when the operator makes a mistake based on an incorrect mental model. Unfortunately, the reaction to such a mistake is to add even more automation and to marginalize the operators even more, thus exacerbating the problem.

I'll talk in a future post about the futile urge to automate away human error. For now, let's focus on the importance of "allowing for experimentation". How do we do that? How do we create good learning environments? What do we even mean by "good"—what factors play a role in how ideal an environment is?

In her article What I Learned in Avalanche School, Heidi Julavits describes two different kinds of learning environments: kind ones and wicked ones.

In many non-avalanche-terrain scenarios, if a person falls into a heuristic trap, the outcome isn’t death. Most people, on a daily basis, engage in what Mike called “kind learning.” Kind learning allows a person to make mistakes. “Wicked learning” does not. [...] A kind learner, for example, because she had written a novel, could believe that she understood how books were written (Familiarity Trap) and so, enthused by a new project, obsessively storm ahead for years (Consistency Trap), and refuse, because of the shame involved, to reassess when doubt threatened (Acceptance Trap) before realizing the excursion was ill fated. Time is killed, and a bit of ego, but nobody tends to perish while learning how to better write a book, or build a boat, or smoke a brisket. Because my life wasn’t typically on the line, I’d become an enthusiastic, mistakes-based learner. I tried to make mistakes. I learned best by messing up, sometimes badly, even expensively, but never mortally. I could figure out what went wrong and better prepare the next time, even if that preparation included “make more mistakes” as one of the necessary steps.

Obviously, a learning environment in which a single mistake can lead to your death is a very wicked environment, one that should be avoided whenever possible.

But you can also make a learning environment too kind. Sara Jensen Carr describes the growing trend toward making children's playgrounds overly safe:

[R]isk has essentially been engineered out of a lot of these environments. I have a friend who is in practice who told me about a project her firm had at a fancy private school in San Francisco. The parent organization wanted as many safety features for the playground as possible, especially really soft ground surfaces, but the principal, who was a child development expert, said that without some element of risk in the design, children never learn to navigate their own boundaries and safety.

So it seems like the ideal learning environment is somewhere between the soft rubber mats of a too-safe playground and the implacable hardship of a snowy mountainside. That's a pretty big range. How do we find our place within it?

There are many downsides to dying in an avalanche. Let's look beyond the obvious one. Another drawback is that, once you're dead, you can no longer learn anything.

Hypothesis: A good learning environment is one that encourages you to continue learning.

Okay, okay, this is so obvious that it's almost a tautology, but I think there's value in the way it homes in on motivation. Natural disasters aren't the only things that can stop you from learning. There's a whole host of fears that can discourage you: fear of looking foolish, fear of hurting someone, fear of being fired.

In a previous post, I talked about how Toyota adopted the andon cord in its manufacturing plants. Assembly line workers would pull the cord and stop the line when there was an issue. John Willis writes:

An important cultural aspect of the 'Andon Cord' process at Toyota was that when the team leader arrived at the workstation, he or she thanked the team member who pulled the Cord. This was another unconditional behavior reinforcement. The repetition of this simple gesture formed a learning pattern of what we call today 'Safety Culture'. The team member did not, or would never be, in a position of feeling fear or retribution for stopping the line.

General Motors, desperate to mimic Toyota's success, imported the andon cord but not the learning environment that Toyota created around it (source):

Even after GM plants began to install some of the physical features of Japanese auto plants, “there was no change in the culture. Workers and managers continued their old antagonistic ways.  In some of the factories where they installed the andon cord, workers got yelled at when they pulled it.” Some plant managers continued to believe that blue collar workers were fundamentally lazy and would pull the andon cord any time they wanted a break and that the blue collar workers lacked the capacity to engage in problem solving or continuous improvement.

The article doesn't go into this, but presumably the Toyota workers were still given feedback about whether or not they were correct to pull the andon cord. It does say that "management [...] needed to be confident that a worker deciding to pull the andon cord would have both the knowledge and the incentive to exercise sophisticated judgment". I suppose they could've gotten that knowledge in some other way. (Ah, the perils of relying on a small number of articles way outside your field!)

Caveats aside, let's update the hypothesis:

An ideal learning environment is one that allows learners to experience the full range of relevant consequences, except those that would discourage or prevent them from continuing to learn.

How do we know what consequences are relevant, and which ones are discouraging? Well, we don't. Not always. "Designing learning environments" is a skill too—one that must also be learned through experimentation and failure.