Two theories of driving

Two thoughts from wandering around Phnom Penh on foot, by moto, and by tuk-tuk.

Being a good driver

What does it mean when you call someone a good driver? There are really three components to being “good”:

  1. Values: You are making an appropriate tradeoff between speed and safety.
  2. Ability: A package of skills that expands your speed-safety possibility frontier (more safety at the same speed, or more speed at the same safety).
  3. Adaptation: How well can you predict what other drivers will do? How well can they predict what you will do?

We talk about being a good driver as though it’s just #2, but in the US I suspect it’s usually about #1. Or maybe one of the skills in #2 is “accurately assessing risks,” and someone who is a bad driver may not accurately realize what tradeoffs they’re actually making.

When travelling, people often say things like “the drivers here are terrible.” And there may be geographic variation in #2, but more of it is probably #3. Being a good driver is tied up in your deep knowledge of local driving behavior, and how well you match other drivers’ mental models so they can anticipate what you will do and act accordingly.

This framework applies to many things in life.


As a cyclist in the US I noticed that I was often frustrated with cars but never had a problem with buses. Buses were so big and slow-moving that they were easy to circumvent and avoid. Like in the natural world, things that are too removed from you in scale are just not direct competitors. They occupy different niches.

Here in Phnom Penh things are similar. There are three common vehicle classes: motorcycles/scooters, tuk-tuks, and cars/SUVs. Cars are so big and bulky that it’s easy to get around them on a moto, but tuk-tuks are just the right size to always be in your way whether you’re in a car or on two wheels.

This is similar to the social phenomenon of “the narcissism of small differences”: the closer someone is to you, the more they compete for the same resources, and the more threatening they become.

Civilizational Memory and Resilient Knowledge

[Update: this is a subconscious paraphrase, or at least extension, of Jonathan Blow’s excellent talk Preventing the Collapse of Civilization which I watched a few months ago. Thanks JP for the reminder.]

The US government used to make a substance called Fogbank, a critical component in fusion bombs. Then we stopped making it for a while, and “forgot” how to make it.

Actually, it turns out we never really understood how to make it at all. When we wanted to make more of it, we created a new facility based on the historical records of the first time around. That plan didn’t work. It turns out that what made Fogbank work were certain impurities in the final product, traceable to chemical impurities in the input. The modern inputs were purified better than the historical inputs, so the impurities disappeared, and Fogbank 2 didn’t work until this was all tracked down and figured out. No one knew this the first time around. (See page 20 here, via a comment on MR.)

This story is both terrifying and comforting. It’s good that we were able to eventually figure out the problem. But fusion bombs are one of the most technically sophisticated artifacts of modern civilization, produced by the wealthiest country in world history. Massive resources have been put into their research, engineering, and the institutions responsible for them. What else could fail similarly? What would it cost to fix? What would the ramifications of temporary failure be?

Most of our modern technosphere relies on extreme specialization in complex engineering disciplines. However, much of this knowledge is implicit, rather than explicit. [1] A very short version might say: explicit knowledge is what you can write down, implicit knowledge is what you can’t. “Carbon has 6 protons” is explicit knowledge, “how to evaluate a novel” is implicit. Of course you could write a guide to evaluating a novel, but reading that guide would not lead people to perform the same evaluation that you do. Another example: most recipes are actually a blend of explicit and implicit knowledge. The amount of the ingredients and the order of adding them is usually explicit, but knowing when something is done, or when it has been suitably mixed, or small adjustments based on variable ingredients, are all implicit.

Worse, often these processes are highly context-dependent, and the people performing them don’t know what elements of the context really matter. This is the case for Fogbank – the nuclear physicists didn’t know that the impurity was relevant.

This implicit/explicit divide isn’t just on the level of individuals, but also institutions. Fully codifying a process is virtually impossible, and any system with humans in it is organic and adaptive, so defined processes become obsolete immediately. If an institution (a research lab, a company, a division) dies, that knowledge doesn’t live on in any one individual: it dies as well. Even explicit knowledge is often under-documented in organizations. Most broadly, there is an intelligence in systems, and we often don’t know how to recreate it.

Markets can probably ameliorate some of these concerns. If components are truly critical, there should be strong incentives for firms to maintain these systems of knowledge. And one would hope that for critical components, the market incentives are such that things could be rediscovered quickly. But firms can go out of business for unrelated reasons, and much of our critical infrastructure is highly concentrated or actually quasi-governmental, so markets cannot be the only solution.

I’d like to read more about this – is there a good literature out there? What would the field be called – knowledge resilience?

Some related links and ideas:

  1. I don’t have a good citation for this – I really only know this dichotomy in an informal sense. Some googling suggests that it traces back to the work of Michael Polanyi but please chime in if you know more! []

Two Theories of Japanese Culture

Two favorite theories from Japan and the Shackles of the Past:

  1. Japan never had a true Axial Age moment – the moment where Athenian philosophy, Buddhism, and Judaic monotheism (among others!) all began to separate the material world we live in from the “spiritual world” above. All human culture to that point was animistic – where spirituality and real gods imbued every aspect of the world and to talk of a separate spirit world didn’t make much sense at all. When Buddhism arrived in Japan it never fully co-opted the native Shinto animism. And it is precisely this “spirituality in everything” approach that characterizes the Japanese reverence for small design details, ceremonial acts, etc.
  2. Feudal Japan under the Tokugawa Shogunate (1600-1868) is widely known for the influence of the samurai warrior class and their spartan ethic of bushido. However, this was a peaceful period! In fact, the samurai were essentially useless for 250 years, and they responded in two ways. First, they focused more than ever on an extreme version of bushido, seeking to one-up each other with asceticism and military technique. Second, though, they became decadent – noted for their patronage of everything from burlesque to prostitution – which developed customs of extravagant costume and theatrical presentation. It is precisely this dual nature that shocks one about Japan today – the ultra-serious business ethic coexisting easily with the hypersexualized otaku videogame culture – but it has a long history.

These are obviously oversimplifications and I barely know anything about Japan, certainly not enough to evaluate these two theories – but I enjoyed thinking about them.

Where are you from?

I’ve been thinking a lot lately about the question “where are you from”. It rankled me for a long time – in San Francisco when I first arrived it felt like a provocation, a way of checking whether you were a native or a gentrifier. Then I made a friend who has spent a lot of time in Navajo Nation building relationships with folks there. They ask the same question, apparently, because their relationship to the land itself is so much a part of who they are that knowing you grew up on a particular mountain or stream tells them about you in ways that go beyond “local culture”. So I started thinking: which parts of your background matter for getting to know you? Which fields have you studied, who are your intellectual and ethical ancestors? Is there a “small talk” question that gets at that? “What did you major in” is probably close, for more intellectual folks who are still in or near college, but it is still a pretty weak shadow of what I’m trying to get at.
And this becomes even more relevant when you think about statelessness and movement as political issues. The 21st century doesn’t have a monopoly on this – human history is full of forced and voluntary migrations – but there are particular new spins on it. Cosmopolites (which I just read and is very good) talks about bidoons in the UAE, whose ancestors were nomadic desert people just like everyone else, but missed the citizenship initiatives when those states were forming, and are now stateless – and the UAE’s efforts to buy them citizenships in Comoros, a poor island nation in the Indian Ocean. The most marginalized want citizenship as a way to secure their rights, to be legible to government and justice. Then of course the ultra-rich are stateless in a different way – trying specifically to escape from government responsibilities like paying taxes, they collect residences and passports in the most convenient nations. Climate refugees may be one of the key stories of the 21st century, and this question of “where are you from” may have a completely different meaning for them.

Oh, is that all?

How did the ancient Egyptians build those giant pyramids? Did they have access to some secret technique that we don’t know about? Well, yes and no.

We have some hints about the process, and in 1997 a team of researchers tried to replicate this as best they could. They failed. In How To Build a Pyramid, Jimmy Maher quotes team leader Mark Lehner:

Although we failed to match the best efforts of the ancient builders it was abundantly clear that their expertise was the result not of some mysterious technology or secret sophistication, but generations of practice and experience.

Oh, just generations of practice.

There is a fundamental confusion here. [1] What could be more mysterious and secret than a technique that takes hundreds of years of experience to develop, and that afterwards cannot be adequately communicated? Or a level of sophistication which can barely be sketched out before achieving it? To think otherwise is to fetishize knowledge while holding contempt for the process of acquiring it.

The process itself is the amazing thing. It’s amazing that humans are so good at it by default (it may be the secret to our success), and also that we have developed, in the scientific method, a version that is in some ways many times more powerful. Can you imagine Galileo seeing a space shuttle and thinking, “I don’t know how they did it, but I’m sure that if I did this for several hundred years, I’d be as good as them”? It’s true, of course, but it really misses the point. If this is your mindset, what sort of technology or sophistication would impress you? What method of acquiring that knowledge would have looked like anything other than “generations of practice and experience”?

James Scott’s Seeing Like A State makes the forceful point that this buildup of tradition often creates implicit knowledge (metis) that cannot be gained in other ways (as far as we know) and we dismiss it at our peril.

  1. I don’t know if Lehner himself is confused, or if he’s imagining his audience confused. []


A very thoughtful recent blog post makes the point that institutions that seem “decentralized” or claim that as a value often exhibit centralizing tendencies over time. Some recommendations:

  • Be specific about what things you want to decentralize, and why. Regard decentralization as an ongoing process that can never be complete.
  • Find checks and balances, so that it is harder for any set of actors to achieve centralizing power. Use multiple forms of decentralization and participation.
  • Consider accountability: often what we really care about is accountability, and a centralized but accountable entity (such as a government antitrust enforcer) can limit the centralized and unaccountable power accumulation that we fear.

Pluribus Skepticism

Is Facebook’s new poker AI really the best in the world?

Facebook released a paper and blog post about a new AI called Pluribus that can beat human pros. The paper title (in Science!) calls it “superhuman”, and the popular media is using words like “unbeatable”.

But I think this is overblown.

If you look at the confidence intervals in the FB blog post above, you’ll see that while Pluribus was definitely better against the human pros on average, Linus Loeliger “was down 0.5 bb/100 (standard error of 1.0 bb/100).” The post also mentions that “Loeliger is considered by many to be the best player in the world at six-player no-limit Hold’em cash games.” Given that prior, and the data, I’d assign something like a 65-75% probability that Pluribus is actually better than Loeliger. That’s certainly impressive. But it’s not “superhuman”.
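The rough probability in that last sentence can be sanity-checked with a back-of-envelope calculation. This is my sketch, not anything from the paper: it assumes the reported win rate is approximately normal and uses a flat prior, so the prior favoring Loeliger that the post alludes to would pull the number down from here.

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Reported result: Loeliger was down 0.5 bb/100 against Pluribus,
# with a standard error of 1.0 bb/100.
observed_margin = 0.5   # Pluribus's measured edge, in bb/100
standard_error = 1.0    # in bb/100

# Under a normal approximation with a flat prior, the probability that
# Pluribus's true win rate against Loeliger is positive is just the CDF
# evaluated at the z-score of the observed margin.
p_pluribus_better = normal_cdf(observed_margin / standard_error)
print(f"P(Pluribus better) ≈ {p_pluribus_better:.2f}")  # ≈ 0.69
```

That 0.69 sits right at the bottom of the 65–75% range, consistent with the observation that one player holding the AI to a statistical tie is weak evidence for “superhuman.”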

I don’t know enough about poker or the AIVAT technique they used for variation reduction to get much deeper into this. How do people quantify the skill difference across the pros now?

I’m also a bit skeptical about the compensation scheme that was adopted – if the human players were compensated for anything other than the exact inverse of the outcome metric they’re using, I’d find that shady – but the paper didn’t include those details.


Defensive Randomization

Machine learning is common and its use is growing. As time goes on, most of the options that you face in your life will be chosen by opaque algorithms that are optimizing for corporate profits. For example, the prices you see will be the highest prices at which you’ll still buy, based on an enormous amount of data about you and your past decisions.

To counter these tendencies, I expect people to begin adopting “defensive randomization”, introducing noise into your decision-making and forcing corporate algorithms to experiment more broadly with the options they introduce to you. You could do this by simple coin flip, or introduce your own bots that make random (or targeted exploratory) decisions on your behalf. For example, you could have a bot log in to your Netflix account and search for a bunch of movies that are far away from Netflix’s recommendations for you.
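The coin-flip version is trivial to sketch. Everything here is illustrative – the function name and parameters are my own, and a real version would live in a bot driving your accounts rather than a single function call:

```python
import random

def defensively_randomize(recommended, alternatives, epsilon=0.2):
    """With probability epsilon, ignore the algorithm's recommendation and
    pick a random alternative instead, so the recommender keeps receiving
    exploratory signals rather than a perfectly predictable user."""
    if random.random() < epsilon:
        return random.choice(alternatives)
    return recommended

# epsilon=0.0 always follows the recommendation; epsilon=1.0 never does.
followed = defensively_randomize("top pick", ["other A", "other B"], epsilon=0.0)
defied = defensively_randomize("top pick", ["other A", "other B"], epsilon=1.0)
```

This is just epsilon-greedy exploration turned around: instead of the platform exploring you, you force it to keep exploring on your behalf.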

One possible future is for these bots to share data between themselves — a guerilla network of computation that is reverse-engineering corporate algorithms and feeding them the information that will make your life more humane.

[mildly inspired by Maximilian Kasy’s Politics of Machine Learning]


Surveillance Valley

Just read Yasha Levine’s Surveillance Valley. There was a lot more new information than I was expecting but also a lot of “guilt by association” arguments and some interpretations I found a bit sketchy. Curious if anyone else has read it and what they thought. The book has two main sections.

First: the proto-history of the internet in ARPA was tied closely to concrete surveillance use cases. We usually tell the ARPANET story as an independent research arm within ARPA, but he shows that this is something of a myth – from the very beginning the intelligence community was using it to build linked databases of domestic surveillance (e.g., their dossiers on Vietnam War protestors). This surveillance use was recognized by the anti-war left at the time – there were large protests at MIT and Harvard against these projects. This has largely dropped out of our collective memory.

Second, and more interesting: the recent wave of anti-surveillance feeling, and the way it has centralized around Tor and Signal. The ultimate puzzle he is trying to unravel is: “privacy activists claim that Tor and Signal break the surveillance power of governments and large internet corporations. So why do those institutions support those tools and advocate their widespread use?” Specifically, the US government is a major funder of both, through a variety of entities such as OTF and the Broadcasting Board of Governors. He spends much less time discussing the large tech companies, but treats them by-and-large as collaborators with government surveillance, and makes that case pretty strongly and well.

(He also spends a lot of time in this section detailing how his previous investigations into these issues led to him being harassed online by privacy activists.)

His answer has three main components.

Answer 1: technical reasons. Tor was created as a DARPA project for spy communication – but the developers quickly realized that they would need lots of non-spy activity on Tor for the spy activity to blend into the background, which is why they opened it up and continue to fund it and advocate for it.

Answer 2: influence. The funding relationship allows the government to exert influence on these organizations: get advance notice of vulnerabilities and roadmaps, shape their direction, and steer them away from things that are actively dangerous to their handlers. Somewhere in here is the possibility of backdoors, which I can’t really assess the evidence for. Part of this explanation is that by supporting a highly visible but secretly defanged privacy movement, they reduce the pressure that might otherwise cause trouble for them.

Answer 3: use of these systems as a tool to destabilize enemy regimes – the USG funds privacy training for political activists across the world and advises them to use Tor and Signal. This is not exactly hidden – the OTF’s Wiki page cites its mission statement as wanting to “support projects that develop open and accessible technologies to circumvent censorship and surveillance, and thus promote human rights and open societies”. The extent of the activities we’re supporting likely goes deeper – we’re not above a little violent regime change – but this goal is out in the open.

There are a lot of interesting issues raised here, and the facts in this book are painstakingly documented. But ultimately I wonder if he’s seeking too consistent an explanation, in the vein of conspiracy theorists who need a simple causal pattern to explain a wide variety of events. He seems to think that “Google” and “the US government” are monolithic entities with a single volition, whose actions must be somehow consistent – this is of course not the way these institutions work, especially when it comes to the intelligence community. The story he tells (especially Answer 2) complicates and punctures the self-aggrandizing, radical-aesthetic narrative in the privacy community. But I don’t think this is as big a puzzle as he makes it out to be.

A Pattern Language

I’ve been seeing people recommend A Pattern Language (amazon, very large pdf) here and there for a few years now and finally picked it up. I’ve only begun to read it, but it is a truly remarkable work. In particular it draws a thick and complex connection between design and ethics.

(Skimming the wikipedia page of the first listed author makes me want to read much more of his work.)

This book simultaneously defines what a pattern language is, makes a case for how they should be used in design, and provides an example.

Designers of any sort (industrial designers, graphic designers, software designers, urban planners, etc.) work explicitly or implicitly based on patterns that they have learned about, developed, or identified. If I own land and want to sleep indoors, I might think about the pattern “Single Family Home” and create a design based on that pattern. And we need patterns for the whole spectrum of human existence that emerges through design, from the way our highest political entities are arranged (“Independent Regions”) through cities (“Subculture Boundaries”, “Night Life”), and so on (“Looped Local Roads”, “Compost”).

How do different patterns, though, connect to each other? There’s the concept of a Pattern Library, which I’ve often seen in the tech space (example). The Library metaphor asserts that patterns should be listed and categorized. But the metaphor of a Pattern Language goes much farther in exploring the rich connections between patterns, the syntax by which they can be juxtaposed, and the layers of meaning that they bring to bear when they are used together in different ways. A library can only be constructed and maintained, usually by a single entity. A library, unlike a language, does not usually develop and evolve organically.

Every society which is alive and whole, will have its own unique and distinct pattern language; and further, that every individual in such a society will have a unique language, shared in part, but which as a totality is unique to the mind of the person who has it. In this sense, in a healthy society there will be as many pattern languages as there are people — even though these languages are shared and similar…

The language described in this book, though, is more like Esperanto than like English. It is not the dictionary of any observed pattern language; it is a call for a new language that will lead to a new and better lived existence for humanity. Languages differ in the fluency with which they can express certain concepts, and so each language comes with a value system, and creating a language is an ethical act. What kinds of patterns should feel natural to express? What is clunky?

The language that has emerged in our society is a stunted, depraved language without humanity. We have a pattern for billboards, for surveillance cameras, for strip malls, for old age homes.

[W]e have written this book as a first step in the society-wide process by which people will gradually become conscious of their own pattern languages and work to improve them. We believe…that the languages which people have today are so brutal, and so fragmented, that most people no longer have any language to speak of at all — and what they do have is not based on human, or natural considerations. [emphasis added]

The language in this book contains, on the contrary, patterns like the following:

  • Magic of the City
  • Old People Everywhere
  • Children in the City
  • Holy Ground
  • Connected Play
  • A Room of One’s Own
  • Garden Growing Wild
  • Communal Sleeping
  • Window Overlooking Life
  • Secret Place

These are only a few with particularly obvious ethical ramifications, but every pattern and every connection expresses an ethics, and creating such a language is a lasting way to codify your ethics.

Any such set of design principles contains within it an ethics, and ethics are sometimes best expressed as design principles. In particular, I’m familiar with the conversation around data ethics. Usually when we talk about data ethics we are saying “here are the set of tools we’ve designed and built, and over there is our thinking about ethical ways to use them.” But those tools were also designed within a value system that is embedded not just in the design of the specific tool but in the whole web of existence.

In the book’s domain (the built environment), we might think about the design of a single house. What ethics are embedded in the way a house is designed? How many people is it built for, and what kinds of living arrangements? But the design of the house, broadly speaking, must also connect to the design of the broader society and its ethics: what materials are used, and what sorts of labor arrangements are assumed to be available? What is nearby, and what can we assume about the ways that neighborhood will change over time? What is the anticipated lifespan of this building, and how might its uses change in the future?

Similarly, maybe talking sensibly about data ethics requires connecting it more deeply to the patterns we use as designers, and thinking more broadly about what those patterns are that we use and the timescales and means by which they change.

We have spent years trying to formulate this language, in the hope that when a person uses it, he will be so impressed by its power, and so joyful in its use, that he will understand again, what it means to have a living language of this kind. If we only succeed in that, it is possible that each person may once again embark on the construction and development of his own language — perhaps taking the language printed in this book, as a point of departure.