[Update: this is a subconscious paraphrase, or at least an extension, of Jonathan Blow’s excellent talk Preventing the Collapse of Civilization, which I watched a few months ago. Thanks JP for the reminder.]

The US government used to make a substance called Fogbank, a critical component in fusion bombs. Then we stopped making it for a while, and “forgot” how to make it.

Actually, it turns out we never really understood how to make it in the first place. When we wanted more of it, we built a new facility based on the historical records of the original production run. That plan didn’t work. What made Fogbank work, it emerged, were certain impurities in the final product, traceable to chemical impurities in the inputs. The modern inputs were purified more thoroughly than the historical ones, so the impurities disappeared, and Fogbank 2 didn’t work until all of this was tracked down and figured out. No one knew this the first time around, either. (See page 20 here, via a comment on MR.)

This story is both terrifying and comforting. It’s good that we were eventually able to figure out the problem. But fusion bombs are among the most technically sophisticated artifacts of modern civilization, produced by the wealthiest country in world history, with massive resources poured into their research, their engineering, and the institutions responsible for them. What else could fail similarly? What would it cost to fix? What would the ramifications of a temporary failure be?

Most of our modern technosphere relies on extreme specialization in complex engineering disciplines, and much of that knowledge is implicit rather than explicit.1 A very short version of the distinction: explicit knowledge is what you can write down; implicit knowledge is what you can’t. “Carbon has 6 protons” is explicit knowledge; “how to evaluate a novel” is implicit. Of course you could write a guide to evaluating a novel, but reading that guide would not lead people to perform the same evaluation that you do. Another example: most recipes are actually a blend of explicit and implicit knowledge. The quantities of the ingredients and the order of adding them are usually explicit, but knowing when something is done, when it has been suitably mixed, or how to adjust for variable ingredients is all implicit.
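To make the split concrete, here’s a toy sketch (the recipe and the function are invented for illustration): the explicit parts of a recipe fit naturally into a data structure, while the implicit parts can only appear as judgment calls that no written rule fully captures.

```python
# The explicit half of a recipe: quantities and ordering are easy to
# write down and to reproduce exactly.
recipe = {
    "ingredients": [("flour", "500 g"), ("water", "350 g"), ("salt", "10 g")],
    "steps": ["mix", "knead", "rest", "bake"],
}

def is_kneaded_enough(dough) -> bool:
    """The implicit half: an experienced baker judges this by feel.

    Any rule written here ("knead for 10 minutes") is a proxy that breaks
    when the flour, the humidity, or the mixer changes; the real criterion
    lives in the baker's hands, not in this function.
    """
    raise NotImplementedError("implicit knowledge: not reducible to a rule")
```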

Worse, these processes are often highly context-dependent, and the people performing them don’t know which elements of the context really matter. This was the case for Fogbank – the nuclear physicists didn’t know that the impurities were relevant.
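As a toy illustration of that failure mode (every number below is invented), suppose whether a batch “works” secretly depends on an input impurity that no one thought to record. A rebuild that faithfully follows the written spec, but with cleaner modern feedstock, silently changes the one variable that mattered:

```python
def make_fogbank(temperature_c: float, impurity_ppm: float) -> bool:
    """Toy model: the process only works within a narrow impurity band.

    The written procedure records the temperature but not the impurity,
    because nobody knew the impurity mattered.
    """
    return 395 <= temperature_c <= 405 and 50 <= impurity_ppm <= 200

# Original run: the documented spec says "400 C"; the inputs happened to be dirty.
assert make_fogbank(temperature_c=400, impurity_ppm=120)    # works

# Modern rebuild: same documented spec, purer feedstock.
assert not make_fogbank(temperature_c=400, impurity_ppm=2)  # fails
```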

This implicit/explicit divide exists not just at the level of individuals, but also at the level of institutions. Fully codifying a process is virtually impossible, and any system with humans in it is organic and adaptive, so defined processes become obsolete almost immediately. If an institution (a research lab, a company, a division) dies, that knowledge doesn’t live on in any one individual: it dies as well. Even explicit knowledge is often under-documented in organizations. Most broadly, there is an intelligence in systems, and we often don’t know how to recreate it.

Markets can probably ameliorate some of these concerns. If a component is truly critical, firms should have strong incentives to maintain the relevant systems of knowledge, and, failing that, to rediscover them quickly. But firms can go out of business for unrelated reasons, and much of our critical infrastructure is highly concentrated or actually quasi-governmental, so markets cannot be the only solution.

I’d like to read more about this – is there a good literature out there? What would the field be called – knowledge resilience?


Some related links and ideas:

  1. I don’t have a good citation for this – I really only know the dichotomy informally. Some googling suggests that it traces back to the work of Michael Polanyi, who called it “tacit knowledge” – but please chime in if you know more!