Category Archives: Psychology

Complex actions need specialized interfaces

Yesterday I was in a room with a Bloomberg terminal. Bloomberg is specialized software used by financial professionals to navigate data and take actions. Users interact with the system through a specialized keyboard that looks like this:

These keyboards are easy to laugh at: they look antiquated and ridiculous, kind of like toys for people who don’t know how to use real computers. But I really like them, or at least I like the underlying idea:

Specialized tools need specialized interfaces.

Keyboards are specialized text-entry devices. It’s easy to forget this because the keyboard is our main interface with computers, which are general-purpose engines. But it’s crazy to think that it is the best tool for every program, for every cognitive environment we can imagine implementing in software. We knew this once, but in the name of efficiency we have forgotten it.

Good history museums remind you that history is not a linear, predetermined progression. Natural history museums, for example, are fascinating not just because we can see the apes from which we descended but because of the vastly stranger evolutionary dead ends. I feel the same way at the Computer History Museum. Looking back, there seems to have been a sort of Cambrian explosion in the late ’60s and early ’70s, when the fundamentals of our computers were beginning to settle into place but the world was still wide open. This was when the mouse was invented, along with stranger beasts like the “chorded keyboard,” where you produce different letters with different combinations of keypresses:

But even this is just a text input device. The keyboard is a workhorse because we have abstracted the computer towards it – because we had keyboards before computers.

But for highly specialized cognitive work, there may be better ways to interface. No one would try to play a piano with a computer keyboard and mouse. PC gamers and flight-simulator enthusiasts use joysticks. Console gamers use specialized controllers. Why don’t we have better input devices for programming, for data analysis, for planning timelines and budgets? The Bloomberg keyboard, like the Apple Touch Bar, is a small, halting step in this direction.


Fundamental Attribution Error and the Master Race

I just re-watched Hitchcock’s Lifeboat and it provides a really nice demonstration of the Fundamental Attribution Error. (The film has other virtues too.)

Lifeboat is the story of an American merchant marine ship sunk by a German U-Boat during WWII. A few Americans find a lifeboat and pick up a German sailor amid the wreckage. They are short on food and water and are forced to row towards safety. For days, while the Americans bicker in thirst and hunger under a hot sun, German “Willy” steadfastly pulls oar.

The Nazi in question.

The Americans come to see him as a superman. One character points out the contrast: “He’s made of iron, while we’re made of flesh and blood. Hungry flesh and blood!” Willy instead credits his endurance to “clean living”.

The Fundamental Attribution Error (and, more generally, the literature on the person and the situation) says that when we see a person acting a certain way, we assume it’s because of who they are or how they were born. Usually, though, it’s because of what they’ve done, how they’ve trained, or the context they face (which is invisible to us). So while the Americans think Willy’s physical endurance is genetic (“they truly are the master race”), he sees the years of physical training he endured, and perhaps even the extra pressure he faces, as the one German on the boat, to prove himself.

At least this is the story for about 15 minutes of the movie. In fact Willy is cheating. He has food and water squirreled away, which is the hidden context that really accounts for the observed behavior.

Streets Are Too Wide…Really

Have you ever heard of a sneckdown? Or even a neckdown? I just got a fascinating email with a great diagram pointing out that snow reveals patterns of street use:

Walking (and maybe even biking, you brave soul!) through the slush these past few weeks, you may have spotted a pattern: a tire-marked path through the snow surrounded by untouched white.

The phenomenon was first branded a “sneckdown” by T.A. activists in 2001. It’s a neckdown of untrod or plow-piled snow. (If you left your urban planning manual at home, a neckdown is when the width of a street at an intersection is made narrower to calm traffic.)

Drive-lines provide a clear message about how streets could work better. Wider sidewalks, new public plazas, and bike lanes are all revealed in the space where no one has driven.

And psychologically, wider streets mean more dangerous driving:

The wider a street, the safer drivers feel exceeding the speed limit. Streets narrowed by snow have the opposite effect, encouraging drivers to behave. Where normally drivers are jockeying for position, snowbanks on both sides of the street keep drivers in line and in their lane, demonstrating how narrow the street could be.

More on the subject can be found here, at The Economist, or on Twitter at #sneckdown.


Make Small Mistakes: A Mood Is Not A Personality

Alex Tabarrok has a new paper out showing that easy availability of guns increases the number of suicides. (Read the comments section on that post; it’s full of good tidbits, like the fact that there are more suicides farther from the equator.) This is an econometric study, but there is a psychological angle:

Our econometric results are consistent with the literature on suicide which finds that suicide is often a rash and impulsive decision–most people who try but fail to commit suicide do not recommit at a later date–as a result, small increases in the cost of suicide can dissuade people long enough so that they never do commit suicide.

In other words, while we think of some people as “suicidal” this is just the fundamental attribution error rearing its ugly head. Another example:

  • That other person didn’t signal before changing lanes because they’re a bad driver.
  • I didn’t signal before changing lanes because I forgot, I’m tired, my kids are yelling in the backseat, etc.

People’s behavior is determined by the situation; their feelings are transient or generated on the spot. Very little of their behavior can be pinned to permanent characteristics or explicit intentions, but our first inclination is the opposite. If this idea tickles your fancy — if you’d like to learn a lot more about how the situation can affect your behavior — read The Person and the Situation, even though it didn’t make my list of top 5 behavioral economics books and even though Malcolm Gladwell wrote the introduction. You should also browse this deck of cards showing how the physical design of the environment can affect your actions.

There are also fascinating implications for the study of crime. Gary Becker revolutionized the field by pointing out that crime isn’t done by “criminals” — it’s done by ordinary humans who face different costs and benefits than the rest of us. Of course, this isn’t the final word. Most crimes are crimes of passion; between a fifth and a third of prisoners were drinking at the time of their offense. To prevent crime, we don’t need to make 25-year sentences longer; we need to somehow get around all that System 1 decision-making. And an important new paper shows that Cognitive Behavioral Therapy is staggeringly effective:

The intervention … included … in-school programming designed to reduce common judgment and decision-making problems related to automatic behavior and biased beliefs, or what psychologists call cognitive behavioral therapy (CBT). Program participation reduced violent-crime arrests during the program year by … 44 percent … the benefit-cost ratio may be as high as 30:1 from reductions in criminal activity alone.

The paper also finds improved schooling outcomes. Reducing the importance of your automatic decision-making can have huge benefits. The alternate strategy — what Tabarrok’s paper on suicide suggests — is that you should also lower the size of the mistake that your System 1 self can make. Your impulsive self can really screw you over if you’re not careful: make sure he doesn’t have a gun.

My Dictum and Your Blind Spots

In Thinking, Fast and Slow (one of my 5 behavioral economics must-reads), Kahneman lays out the memorable idea that “What You See Is All There Is.” He explains it nicely in a brief interview:

WYSIATI means that we use the information we have as if it is the only information. We don’t spend much time saying, “Well, there is much we don’t know.”

These are the famous unknown unknowns that I’ve written about — the gaps or blind spots you wouldn’t think to look for. They are so important that I think it’s worth updating Asimov’s Dictum (“The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s interesting…’”). Similarly, Potok’s Dictum is that whenever you’re planning, evaluating, or decision-making:

The most exciting phrase that heralds good decision-making is not “here’s the answer” but “oh, I hadn’t thought of that.”

Whenever a new idea, perspective, or fact appears, treat it carefully and feed your inner pigeon so that you learn to keep generating them! A useful addition to your cognitive toolkit would be a set of ways to find those blind spots you would otherwise totally ignore. Here are a few I’ve thought of or seen elsewhere:

  1. Explain it to a child
  2. Ask a third party with “middle-level” knowledge of the issue: not an expert, but not a child.
  3. Find a new taxonomy to organize everything you know about the issue — the “worse” it seems, the better. For example, if you’re planning a project chronologically, try to list all the aspects of the project by department or division instead.
  4. Perform a premortem: ask yourself “If this project fails spectacularly, what will have caused it?”
  5. Twiddle the knobs à la Daniel Dennett’s must-read advice about Intuition Pumps, or Polya’s guidelines for understanding mathematics. Change each part of what you know — sometimes to extreme values — to see what happens. “What if we had 10 years to do this? What if we only had three days?” “What if only 5 people show up to our event? What about 50, or 500, or 5,000?” Most of these questions will be a waste of time, but a few might completely change the way you’ve been thinking.
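
The knob-twiddling in item 5 can be sketched as a brute-force parameter sweep. This is only an illustrative sketch; the knob names and values below are invented for the example, not anything from the post. The point is just that enumerating extreme combinations mechanically can surface scenarios you wouldn’t have generated on your own:

```python
from itertools import product

# Hypothetical knobs for a project plan. The names and values are
# illustrative stand-ins for "what if we had ten years? three days?"
knobs = {
    "timeline_days": [3, 30, 365 * 10],
    "attendees": [5, 50, 500, 5000],
    "budget_usd": [0, 1_000, 1_000_000],
}

def twiddle(knobs):
    """Yield every combination of knob settings, one scenario per dict."""
    names = list(knobs)
    for values in product(*(knobs[n] for n in names)):
        yield dict(zip(names, values))

for scenario in twiddle(knobs):
    # In practice you'd ask of each scenario: "what breaks here?"
    print(scenario)
```

Most of the 36 scenarios this prints will be useless, exactly as the post says, but the sweep guarantees the extreme corners at least get looked at.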

Readers: how do you find blind spots? What tools should I add to this list? Have you ever completely overlooked something that was obvious in retrospect?

Mathematical Decision-Making

Whenever you make a decision you have to consider a universe of facts. Suppose you’re trying to decide where to go on vacation.

First you might make a list of all the cities you would consider as a vacation spot and all the facts you know about them. These are the integers, and you can get pretty far with a few simple operations. You know that Paris has great museums and London has bad food and Berlin has great nightlife.

But there are also huge gaps in the facts you know. If I teach you division, you can turn the facts you know into a much larger set of facts you don’t know — the rational numbers. What are the museums like in London? How’s the food in Berlin and the nightlife in Paris? You can spend your whole life wandering around in the rational numbers, making pretty good decisions and really honing your long division skills.

But there’s a much larger infinity of things you’ve never thought of. Can you take a staycation, or a cruise? Is that hand gesture you always make considered rude in any of these cities? These unknown unknowns are like irrational numbers: you need new, exotic operations to find them, but once you learn how to look, they’re everywhere — and there are so, so many of them.

We spend so much time practicing our long division on 3-, 4-, and 5-digit numbers — staying in the known unknowns of the rational numbers — and not nearly enough time developing and using the new tools that will bring the unknown unknowns to our attention. This is one of the most important ideas you can have in your cognitive toolkit.

5 Easy Steps To Becoming Louis Potok

Boy, behavioral economics is everywhere these days! As a self-proclaimed “behavioral expert” I often get people asking me for a reading list. I’m sick and tired of rewriting this five times a day, so here goes: the definitive, well-ordered, short (looking at you, Shane Parrish) behavioral economics reading list.

  1. Influence (Robert Cialdini). This is a quick read, flashy and fun but substantive. Cialdini finds behavioral economics everywhere and the book is almost written as a guide for used car salesmen or other hucksters. He does a great job of weaving in the academic research with existing sales practices. For years I’ve been planning to hire a graphic designer to make a poster of Cialdini’s Six Principles of Influence — you know, if you’re looking for gift ideas.
  2. Nudge (Thaler and Sunstein). The book that got everyone talking. Thaler and Sunstein distill the literature into really digestible behavioral principles and focus on applying those principles to policy-making. The authors are pretty cool as well: Thaler is a perennial Nobel bridesmaid; Sunstein is a prominent “jurist,” whatever that means; and I can’t ignore writer-in-part John Balz, who is now evangelizing everywhere about Chief Behavioralists.
  3. Thinking, Fast and Slow (Kahneman). Take off the water wings, put on your goggles, and inhale: you’re diving into the deep end. You will never see the world the same way and you will piss off friends and family with the names of behavioral effects. More important, you will actually understand the effects you name, and you will apply them correctly. You will remember the studies that uncovered them. You will understand the complex way they inter-relate. You will consider getting a PhD in behavioral economics. Your life will be better than it was.
  4. Poor Economics (Banerjee and Duflo). An important look at how behavioral economics and randomized controlled trials are breathing new life into tired debates about development. Compared to the other books on my list, this book has a lot more field studies, impact evaluations, and non-Western research participants.
  5. Scarcity (ideas42 co-founders Mullainathan and Shafir): A fascinating new branch of research on how “scarcity captures the mind”. Turns out, as best the authors can tell, poor people are not optimizing under constraints. They are not genetically less capable than the rich. They are not suffering from a unique culture of poverty. Instead, the condition of being poor leads to making choices that are systematically different (better in some ways and worse in others), and you would do the same if you were poor. In fact, you do the same thing when you’re short on time. They don’t talk about this, but at some level this must be connected to the cognitive metaphors we use to understand time and money.

What other behavioral economics books do you consider must-reads?

The Three Levels Of Frustration

There are three ways to react to frustration, and there’s a strict hierarchy of these responses. As you move from Level 1 to Level 3, you have to put in more effort but you get more return. The exact tradeoff between effort and return depends a lot on the situation, but overall Level 3 is always harder and more rewarding than Level 2, and so on. You’ve probably had each of these responses at different times, and I think some people are more prone to one than another.

Level 1: It’s the system’s fault that this is frustrating; I’m not going to do it.

This website is so poorly designed, I’m not going to sign up.

It’s so hard to have my voice heard in the political process, I’m not going to vote.

My company’s infrastructure makes project management really hard, I don’t know what to do.

I would say this characterizes about 90% of global frustration. Something is hard or thought-intensive? Don’t do it. And there’s a deeper level to your thinking: “Everyone else must have also gotten discouraged, but they’d fix it if that was a problem. So this can’t actually be that important, right?”

Level 2: I’m frustrated, and it’s the system’s fault, but I can solve the problem if I work harder.

This is probably the attitude of most externally-successful people. Yes, the system makes it difficult to do this. But difficult isn’t impossible, and in fact difficult is usually easier than it looks. So I’m just going to do it.

Level 3: I’m frustrated, it’s the system’s fault and I can solve the problem if I work harder. But, if I work even harder than that, I can fix the system and make everyone else’s life easier.

The most obvious example is a political revolutionary. But this is also what great coders do — solve the general problem they’re facing and publish the solution as a program. Or the best people in your organization, who create infrastructure to solve their problems, instead of pushing through for themselves.

The problem, I think, is that Level 3 is much harder than Level 2, or at least takes more time. And of course there are many different levels of “Level 3 solutions.” Imagine you think that people at your office don’t feel adequately appreciated by their coworkers. You could do nothing — Level 1. You could go out of your way to verbally appreciate people — Level 2. You could start an email thread for people to recognize their colleagues’ hard work — Level 3. You could institute a weekly meeting for verbal recognition of successes — Level 3+. You could start a company that attempts to solve this problem in workplaces everywhere — Level 3+++. Different levels of “general solution” may be appropriate depending on what the problem is, your own skill, and your free time.

If you run an organization you want to empower people to do Level 3 as often as possible, and make it easy for them to do so. As an individual, this framework can be helpful so you can decide for yourself which level is appropriate every time you’re frustrated.

What are some examples you’ve seen of each of these levels?

(Thanks to Matt Darling for a helpful comment and the relevant XKCD.)

I Might Have Been Wrong: On Experimentation

In my previous post I railed against a political fundraising technique I saw in the wild (on Twitter). I was upset because the experimental literature suggested that the technique they used was a waste of money, and I went on to lament the campaign’s lack of interest in science.

But it’s possible they were a step ahead of me. After all, a true science-driven campaign would be doing their own experimenting because of external validity concerns in the paper I mentioned. So maybe I was just in one of several treatment groups. If that’s the case I hope they’re tracking “disdainful blog posts” as an outcome variable of interest.

I Regret X Again and Again

“I never want to do X again. I’ve decided it before, but this time is different. This time I really mean it, I feel it so strongly, I’m definitely never doing X again.”

Two potential models of regret:

  1. Hydraulic model of regret: regret builds up over time. The first time you regret doing X, you won’t stop doing it. But as the weight builds up over time, as you keep doing X, you start feeling more and more strongly that you really should quit. Eventually you feel so strongly that you just stop, out of willpower.1
  2. Memoryless model with black swans: almost every time you regret doing X it feels the same as all the previous times. You feel equally strongly every time, but you keep failing–keep relapsing. Then, one day, something changes. You feel regret and you change your situation or drastically change your outlook and all of a sudden, it’s not a struggle any more–you just Don’t Want To Do X or Set Up Your Life So That X Doesn’t Fit. Now, if only you knew what was different those times, you’d be set, but they seem to appear randomly.
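
The two models above make different predictions about when you finally quit, which a toy simulation can make concrete. This is a rough sketch under invented parameters (the threshold, per-episode regret, and black-swan probability are all arbitrary), not a claim about real psychology:

```python
import random

def hydraulic_quit_day(threshold=10.0, regret_per_episode=0.5):
    """Model 1: regret accumulates with each episode of doing X, and you
    quit the day the total crosses a willpower threshold. Deterministic."""
    total, day = 0.0, 0
    while total < threshold:
        day += 1
        total += regret_per_episode
    return day

def memoryless_quit_day(p_black_swan=0.05, rng=None):
    """Model 2: every episode feels the same; quitting happens only when a
    rare 'something changes' event arrives. The quitting day is random,
    geometrically distributed with mean 1 / p_black_swan."""
    rng = rng or random.Random()
    day = 0
    while True:
        day += 1
        if rng.random() < p_black_swan:
            return day
```

Under Model 1 the quitting day is fixed by the parameters; under Model 2 it has the same expected value here but is random, which matches the observation that the decisive episodes “seem to appear randomly.”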

I think that my experience is more consistent with model 2. Thoughts?

UPDATE (7/22/13, 5:48 PM): Two alternative models of regret have emerged from the pundit sphere.

The first, from @EMGurevitch: “you regret x, repress x, unconsciously reenact x. cf. Freud’s ‘remembering, repeating, and working through.’” Interesting, though I don’t quite see what this predicts in terms of doing (or refraining from) X in the future.

The second, from @letthemeatfood: “You do X and regret it, but the short-term benefit outweighs the regret to the extent that you do X again and again.” I like this point a lot. Regret doesn’t mean that you made a bad decision–the regret could be an acceptable cost. In fact, suppose that regret for doing X gets weaker over time as you become habituated more and more to doing X. Then weaning yourself off these unhelpful emotions could be an active, planned consequence of what you do. Point well taken!

  1. Note: This is consistent with a Bayesian model–every time you feel regret you update your priors a little bit until the evidence is strong enough.