Defensive Randomization
Machine learning is common and its use is growing. As time goes on, more of the options you face in your life will be chosen by opaque algorithms optimizing for corporate profit. For example, the prices you see will be the highest price at which you'll still buy, based on an enormous amount of data about you and your past decisions.
To counter these tendencies, I expect people to begin adopting "defensive randomization": introducing noise into their decision-making and forcing corporate algorithms to experiment more broadly with the options they present. You could do this with a simple coin flip, or introduce your own bots that make random (or targeted exploratory) decisions on your behalf. For example, you could have a bot log in to your Netflix account and search for a bunch of movies that are far away from Netflix's recommendations for you.
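The coin-flip version of this idea can be sketched in a few lines. This is a hypothetical illustration, not a real tool: `defensive_choice` and its parameters are names I'm inventing here, and `epsilon` is the fraction of decisions you'd hand over to chance.

```python
import random

def defensive_choice(preferred, alternatives, epsilon=0.2, rng=random):
    """With probability epsilon, pick a random alternative instead of the
    option you actually prefer, injecting noise into your decision trail
    so profiling algorithms see a fuzzier picture of you."""
    if alternatives and rng.random() < epsilon:
        return rng.choice(alternatives)
    return preferred

# Usage: most of the time you get your pick; occasionally, noise.
choice = defensive_choice("thriller", ["documentary", "western", "musical"])
```

A higher `epsilon` means more privacy but more decisions you wouldn't have made yourself; the "targeted exploratory" variant would replace `rng.choice` with a policy that deliberately probes options far from your profile.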
One possible future is for these bots to share data among themselves — a guerrilla network of computation that reverse-engineers corporate algorithms and feeds them the information that will make your life more humane.
This is related to:
- adversarial examples
- CV Dazzle: camouflage from face recognition
- Putting rocks in your shoes to fool gait recognition technologies
- My previous post about Economies of scale of attention
- My tweet about antipersuasive technology
[mildly inspired by Maximilian Kasy’s Politics of Machine Learning]