Category Archives: Economics

Civilizational Memory and Resilient Knowledge

[Update: this is a subconscious paraphrase, or at least extension, of Jonathan Blow’s excellent talk Preventing the Collapse of Civilization which I watched a few months ago. Thanks JP for the reminder.]

The US government used to make a substance called Fogbank, a critical component in fusion bombs. Then we stopped making it for a while, and “forgot” how to make it.

Actually, it turns out we never really understood how to make it at all. When we wanted to make more of it, we created a new facility based on the historical records of the first time around. That plan didn’t work. It turns out that what made Fogbank work were certain impurities in the final product, traceable to chemical impurities in the input. The modern inputs were purified better than the historical inputs, so the impurities disappeared, and Fogbank 2 didn’t work until this was all tracked down and figured out. No one knew this the first time around. (See page 20 here, via a comment on MR.)

This story is both terrifying and comforting. It’s good that we were eventually able to figure out the problem. But fusion bombs are one of the most technically sophisticated artifacts of modern civilization, produced by the wealthiest country in world history. Massive resources have been put into their research, engineering, and the institutions responsible for them. What else could fail similarly? What would it cost to fix? What would the ramifications of temporary failure be?

Most of our modern technosphere relies on extreme specialization in complex engineering disciplines. However, much of this knowledge is implicit, rather than explicit.1 A very short version might say: explicit knowledge is what you can write down, implicit knowledge is what you can’t. “Carbon has 6 protons” is explicit knowledge, “how to evaluate a novel” is implicit. Of course you could write a guide to evaluating a novel, but reading that guide would not lead people to perform the same evaluation that you do. Another example: most recipes are actually a blend of explicit and implicit knowledge. The amount of the ingredients and the order of adding them is usually explicit, but knowing when something is done, or when it has been suitably mixed, or small adjustments based on variable ingredients, are all implicit.

Worse, often these processes are highly context-dependent, and the people performing them don’t know what elements of the context really matter. This is the case for Fogbank – the nuclear physicists didn’t know that the impurity was relevant.

This implicit/explicit divide isn’t just on the level of individuals, but also institutions. Codifying process is virtually impossible, and any system with humans in it is organic and adaptive, so defined processes become obsolete immediately. If an institution (a research lab, a company, a division) dies, that knowledge doesn’t live on in any one individual: it dies as well. Even explicit knowledge is often under-documented in organizations. Most broadly, there is an intelligence in systems, and we often don’t know how to recreate it.

Markets can probably ameliorate some of these concerns. If components are truly critical, there should be strong incentives for firms to maintain these systems of knowledge. And one would hope that for critical components, the market incentives are such that things could be rediscovered quickly. But firms can go out of business for unrelated reasons, and much of our critical infrastructure is highly concentrated or actually quasi-governmental, so markets cannot be the only solution.

I’d like to read more about this – is there a good literature out there? What would the field be called – knowledge resilience?

Some related links and ideas:

  1. I don’t have a good citation for this – I really only know this dichotomy in an informal sense. Some googling suggests that it traces back to the work of Michael Polanyi but please chime in if you know more!

Police Carelessness Wakes Local Citizen

Tuesday morning I was woken up at 5AM by a police siren. As I lay awake, I started thinking. I couldn’t be the only one woken up. Should police cars use their sirens at night? It seems like a classic example of diffuse costs and easy-to-see benefits. But is it worth it? Let’s make a quick back-of-the-envelope calculation.

The cost of running a siren at night can be modeled by the following equation (I love Fermi problems!):
LengthDriven * (2 * SirenDistance) * Density * PctWoken *
DailyIncome * ProductivityLoss


  • we start with the LengthDriven with the siren on
  • multiply by twice the SirenDistance, how far away on each side of the police car a person can hear the siren, to get the geographic area affected
  • multiply by the Density to get the number of people potentially affected
  • multiply by the PctWoken to get the number of people who were woken up
  • multiply by the average DailyIncome to see how much economic value those people create each day
  • multiply by the ProductivityLoss (as a percent) that they experience when groggy to see how much economic value was lost by the siren

This model makes some assumptions. It assumes the police car drives in a straight line, that the density is uniform, that the PctWoken is constant within the SirenDistance and zero outside of it, that everyone works (so ignoring children), that everyone works a day shift, that the ProductivityLoss is independent of DailyIncome, etc. But it seems like a reasonable first step.

Let’s plug in some values:

LengthDriven = 0.25 miles

SirenDistance = 400 feet ≈ 0.08 miles

Density = 18,187 people/square mile in SF (from Wikipedia)

PctWoken = 5% of people. I made this up out of nowhere.

DailyIncome = $45000 yearly per-capita income in SF / 200 work days per year = $225 per day

ProductivityLoss = 25%, I made this up too
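
Plugging these in, the model can be run as a short script (a sketch; all input values are just the rough assumptions listed above):

```python
# Back-of-the-envelope cost of one nighttime siren run,
# following the model: Length * (2 * Distance) * Density *
# PctWoken * DailyIncome * ProductivityLoss.
length_driven = 0.25        # miles driven with the siren on
siren_distance = 0.08       # miles the siren carries, on each side of the car
density = 18187             # people per square mile in SF
pct_woken = 0.05            # share of people within earshot who wake up
daily_income = 45000 / 200  # yearly per-capita income / work days = $225/day
productivity_loss = 0.25    # share of a groggy day's output that is lost

cost = (length_driven * (2 * siren_distance) * density
        * pct_woken * daily_income * productivity_loss)
print(f"cost per siren run: ${cost:,.0f}")
```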

This gives us an economic cost of about $2,046 every time a police officer flips a siren on at night.

Even if this only happens once per night in SF, it creates a cost of roughly $750,000 over the course of the year, equivalent to the salaries of about eight police officers. Use of sirens also appears to be dangerous. I wonder what the benefits are: how much additional public safety is provided for this cost?

I reached out to the SFPD asking if they have any guidelines about siren use — stay tuned.

Make Small Mistakes: A Mood Is Not A Personality

Alex Tabarrok has a new paper out showing that easy availability of guns increases the number of suicides. (Read the comments section on that post, it’s full of good tidbits like the fact that there are more suicides farther from the equator.) This is an econometric study, but there is a psychological angle:

Our econometric results are consistent with the literature on suicide which finds that suicide is often a rash and impulsive decision–most people who try but fail to commit suicide do not recommit at a later date–as a result, small increases in the cost of suicide can dissuade people long enough so that they never do commit suicide.

In other words, while we think of some people as “suicidal” this is just the fundamental attribution error rearing its ugly head. Another example:

  • That other person didn’t signal before changing lanes because they’re a bad driver.
  • I didn’t signal before changing lanes because I forgot, I’m tired, my kids are yelling in the backseat, etc.

People’s behavior is determined by the situation; their feelings are transient or generated on the spot. Very little of their behavior can be pinned to permanent characteristics or explicit intentions. But our first inclination is the opposite. If this idea tickles your fancy — if you’d like to learn a lot more about how the situation can affect your behavior — read The Person and the Situation, even though it didn’t make my list of top 5 behavioral economics books and even though Malcolm Gladwell wrote the Introduction. You should also browse this deck of cards showing how the physical design of the environment can affect your actions.

There are also fascinating implications for the study of crime. Gary Becker revolutionized the field by pointing out that crime isn’t done by “criminals” — it’s done by ordinary humans who face different costs and benefits than the rest of us. Of course, this isn’t the final word. Most crimes are crimes of passion; between a fifth and a third of prisoners were drinking at the time of their offense. To prevent crime, we don’t need to make 25-year sentences longer; we need to somehow get around all that System 1 decision-making. And an important new paper shows that Cognitive Behavioral Therapy is staggeringly effective:

The intervention … included … in-school programming designed to reduce common judgment and decision-making problems related to automatic behavior and biased beliefs, or what psychologists call cognitive behavioral therapy (CBT). Program participation reduced violent-crime arrests during the program year by … 44 percent … the benefit-cost ratio may be as high as 30:1 from reductions in criminal activity alone.

The paper also finds improved schooling outcomes. Reducing the importance of your automatic decision-making can have huge benefits. The alternate strategy — what Tabarrok’s paper on suicide suggests — is that you should also lower the size of the mistake that your System 1 self can make. Your impulsive self can really screw you over if you’re not careful: make sure he doesn’t have a gun.

5 Easy Steps To Becoming Louis Potok

Boy, behavioral economics is everywhere these days! As a self-proclaimed “behavioral expert” I often get people asking me for a reading list. I’m sick and tired of rewriting this five times a day, so here goes: the definitive, well-ordered, short (looking at you, Shane Parrish) behavioral economics reading list.

  1. Influence (Robert Cialdini). This is a quick read, flashy and fun but substantive. Cialdini finds behavioral economics everywhere and the book is almost written as a guide for used car salesmen or other hucksters. He does a great job of weaving in the academic research with existing sales practices. For years I’ve been planning to hire a graphic designer to make a poster of Cialdini’s Six Principles of Influence — you know, if you’re looking for gift ideas.
  2. Nudge (Thaler and Sunstein). The book that got everyone talking. Thaler and Sunstein distill the literature into really digestible behavioral principles and focus on applying those principles to policy-making. The authors are pretty cool as well: Thaler is a perennial Nobel bridesmaid; Sunstein is a prominent “jurist,” whatever that means; and I can’t ignore writer-in-part John Balz who is now evangelizing everywhere about Chief Behavioralists.
  3. Thinking Fast and Slow (Kahneman). Take off the water wings, put on your goggles and inhale: you’re diving into the deep end. You will never see the world the same way and you will piss off friends and family with the names of behavioral effects. More important, you will actually understand the effects you name, and you will apply them correctly. You will remember the studies that uncovered them. You will understand the complex way they inter-relate. You will consider getting a PhD in behavioral economics. Your life will be better than it was.
  4. Poor Economics (Banerjee and Duflo). An important look at how behavioral economics and randomized controlled trials are breathing new life into tired debates about development. Compared to the other books on my list, this book has a lot more field studies, impact evaluations, and non-Western research participants.
  5. Scarcity (ideas42 co-founders Mullainathan and Shafir): A fascinating new branch of research on how “scarcity captures the mind”. Turns out, as best the authors can tell, poor people are not optimizing under constraints. They are not genetically less capable than the rich. They are not suffering from a unique culture of poverty. Instead, the condition of being poor leads to making choices that are systematically different (better in some ways and worse in others), and you would do the same if you were poor. In fact, you do the same thing when you’re short on time. They don’t talk about this, but at some level this must be connected to the cognitive metaphors we use to understand time and money.

What other behavioral economics books do you consider must-reads?

I Might Have Been Wrong: On Experimentation

In my previous post I railed against a political fundraising technique I saw in the wild (on Twitter). I was upset because the experimental literature suggested that the technique they used was a waste of money, and I went on to lament the campaign’s lack of interest in science.

But it’s possible they were a step ahead of me. After all, a true science-driven campaign would be doing their own experimenting because of external validity concerns in the paper I mentioned. So maybe I was just in one of several treatment groups. If that’s the case I hope they’re tracking “disdainful blog posts” as an outcome variable of interest.

Why (Behavioral) Science Matters

Almost a month ago Bill de Blasio, Democratic nominee for NYC mayor, tweeted something that made me angry. If I were to donate to his primary campaign, he (or a 19-year-old unpaid intern) proclaimed, my contribution would be matched 6 to 1. So now we know that de Blasio — unlike Barack Obama — is not running his campaign according to the latest research findings.

Bad economics from a mayoral hopeful.


Political fundraising has long been an inexact science, so in the mid-2000s two economists partnered with a US non-profit and ran an experiment about what works best when a charity is raising money. Specifically, they looked at the effect of matching funds. Are people actually more likely to donate if their donations are matched? Their findings were surprising.

It turns out that offering a 1:1 match made people more likely to donate and raised the total amount of money that the charity raised. But higher matches — which essentially give donors more bang for their charitable buck — have no additional effect. 6 to 1 sounds high, but the evidence demonstrates that a one-to-one match would have worked just as well and so five sixths (83%) of that donor money is being wasted on a match when it could just go to general operating expenses.
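
The arithmetic behind that five-sixths figure is a quick sanity check (a sketch, under the study’s finding that a 1:1 match works just as well as a higher one):

```python
# For every dollar donated under a 6:1 match, six matching dollars are
# pledged, but only one of them was needed to get the donation.
match_ratio = 6
needed_ratio = 1
wasted_share = (match_ratio - needed_ratio) / match_ratio
print(f"{wasted_share:.0%} of matching funds wasted")
```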

That’s right, folks: a political candidate brazenly ignored a six-year-old paper from the American Economic Review. Pitchforks and torches would not be an over-reaction.

No, the real reason this matters is as a signal. Every organization on the planet, whether a political campaign, a business, or a government agency, at some level needs to influence people’s behavior. Traditionally this has been done based on intuition but now we can use state-of-the-art scientific knowledge about behavior. Hire a chief behavioralist, or just someone whose life was changed by Thinking Fast and Slow. It would have taken a day, tops, for someone to review the literature on the science of fundraising, but no — instead the campaign was guided by hoary rules of thumb that have never been exposed to the scientific method (tagline: “the known universe’s best tool for knowledge accumulation”).

Of course research matters elsewhere as well. From seawalls to transit expansion to services for the indigent, every policy decision that de Blasio will have to make as mayor can be research-informed — and good decisions are those that use prior knowledge intelligently. If de Blasio doesn’t see science as important for his campaign, if he can’t see how it would help him win election to office, what are the odds that he will use it to make better policy? Pick your political candidates based on how much they strive to use the best possible information to make their decisions —  vote for fans of scientific research.

Should You Interrupt?

Some processes can be interrupted and restarted at little-to-no cost, while others suffer greatly from interruption.

For example, if you cook a steak halfway, let it return to room temperature, and then finish cooking it, either the outer layers will be overdone and dried, or the middle will still be raw, or both.

Gratuitous picture of steak, taken by J. Kenji Lopez-Alt

On the other hand, if you’re making a stew, feel free to stop and restart–it’ll be just fine.

This is the core of the distinction drawn by Paul Graham between the maker’s schedule and the manager’s schedule. Makers–writers, programmers, designers–need long uninterrupted stretches of time to do productive work. Managers, on the other hand, work in much smaller chunks of attention–sending emails, scheduling meetings, going to meetings–and so interruptions (such as meetings) have virtually no cost to them.

A third example is the difference between running for distance and lifting weights. At the gym, runners are scornful of the people standing around between sets chatting. That’s because running two 5Ks is not the same as running a 10K; it’s much easier. But taking a 5-minute break between sets is not all that much worse than taking a 2-minute break. (Though as with all nutrition and exercise guidelines, I’m sure there’s a huge amount of disagreement.)

I can’t decide if there are many other things in the world that break down along these lines, but either way I would like a word to capture the distinction. Suggestions?

How Large Institutions Screw You Over

Have you ever caught your bank making a mistake? Maybe they levied a fee in error, or maybe they penalized you twice for the same overdraft, and so on. I had this happen once: my account was “automatically” switched from a student no-fee, no-minimum account to a standard account with minimum-balance requirements, and so I immediately started racking up penalties for failing to meet them. When I complained, I was treated nicely, the mistake was fixed, and I got various perks.

Now, have you ever not caught your bank when they made a mistake? We all have limited time and limited attention so it would be easy to miss small occasional mistakes.

This could be due to malice. After all, there must be a profit-maximizing point when you think about balancing the cost of dealing with complaints compared to the benefits of collecting unearned fees. Banks could be setting out to look for this point or simply stumbling upon it by accident–since supervisors get mad when the level of mistakes is sub-optimal from a profiteering perspective.

But either way, the bank is earning money because it is cheaper for them to catch mistakes than it is for you. They can have one full-time person looking for mistakes in hundreds or thousands of similar accounts; in other words, large institutions have economies of scale of attention.
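
The profit-maximizing error rate described above can be sketched with a toy model (all numbers here are hypothetical, chosen only to illustrate that an interior optimum exists):

```python
# Toy model: the bank chooses an error rate e. Each uncaught erroneous
# fee earns $35; each caught error costs $50 in complaint handling.
# The chance a customer catches a given error rises with the error rate,
# since more frequent mistakes attract more scrutiny.
N = 100_000            # number of accounts
FEE = 35.0             # revenue per uncaught erroneous fee
COMPLAINT_COST = 50.0  # cost of handling one caught error
K = 10.0               # how fast the catch probability rises with e

def profit(e):
    catch_prob = min(1.0, K * e)
    errors = N * e
    return errors * (1 - catch_prob) * FEE - errors * catch_prob * COMPLAINT_COST

# sweep error rates from 0% to 10% and pick the most profitable
best = max((i / 1000 for i in range(101)), key=profit)
print(f"profit-maximizing error rate: {best:.1%}")
```

With these made-up parameters the optimum is a small but nonzero error rate: the bank does best by making some mistakes, just not so many that complaints eat the gains.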

"bank error in your favor"


Two possible solutions to this:

  1. Regulation: penalties for bank errors could be set much higher by statute, so that the equilibrium number of mistakes is set to maximize overall welfare and not just the bank’s profit.
  2. Automation: It’s plausible that the cost to consumers of finding mistakes will drop dramatically using services like If This Then That.

What are some other examples of large institutions bullying you around through mere attention?

(Inspired by a conversation with @JoshHenryKatz)

Annals of Comparative Advantage

In my last post I talked about two interesting things I learned from John Reader’s Africa: A Biography of a Continent. I also mentioned that the book is sometimes infuriatingly superficial or wrongheaded in its treatment of science or economics. Let’s look at an example:

During the 16th century the Portuguese expanded their empire to include territories spread around the globe from South America to the Spice Islands of the Far East. Best estimates suggest that up to 10,000 Portuguese were working abroad at any one time. Ten thousand, when the population of Portugal was only around 1 million, and large numbers were continuously leaving to replace those who died abroad. The empire imposed a drain on manpower and resources that Portugal could not sustain–and the empire collapsed. The wonder is not that it collapsed, a history concluded, “but that it flourished for exactly a century and lasted as long as it did.” (383)

First, is ten thousand out of a million (1% of the population) really so high? For comparison, today about 10% of the population of the Philippines are working overseas. But more importantly, you should do the activity with the highest return! Reader described the Portuguese Empire as largely a series of trading missions that paid taxes on their return.

Trade flourished and the merchants prospered. It has been estimated that returns were hardly ever less than 50 per cent and sometimes as high as 800 per cent….between 1450 and 1458…the traffic yielded from five to seven times the invested capital. (339)

In an economic sense you don’t need to worry about “manpower” for other activities. What exactly was Reader worried about? If 1% of Americans went to work in Silicon Valley, the result wouldn’t be a dangerous draining of “manpower and resources” (= Labor and Capital) away from other activities. When you have prices, resources tend towards their highest-valued allocations. Saying “lots of resources went to this really high-valued activity” does not explain the collapse of an empire.