What Computers Can Teach Us About Making Decisions

Tom Griffiths, PhD, is professor of psychology and director of the Computational Cognitive Science Lab at the University of California, Berkeley. In his new book, Algorithms to Live By, co-authored with Brian Christian, Griffiths explores how solutions from computer science can guide the kinds of decisions we humans make in our cluttered and sometimes confusing lives.

What can we learn from computers about human decision-making?

Computers offer a different way of thinking about rational decision-making. Intuitively, we feel like we should weigh all the options to make a rational decision. But when computers try to solve the kind of difficult problems that people face, that’s not what they do. The computer isn’t going to consider all possible solutions because there are too many to look at. It’s not going to carefully evaluate each one. And it’s not necessarily going to produce the same answer each time. Both computers and humans have to make decisions with a finite amount of time and computational resources.

What are a couple of examples of the kinds of decisions people and computers have to make?

Consider a problem we tend to think of as uniquely human: how to organize all the things we need to do in our busy schedules. Computers face the same challenge. They use algorithms to organize the tasks they have to perform for optimal efficiency. Another example: what to throw away and what to keep. How to clear out your closet, in other words. Computers face the same problem. They have to juggle the pieces of information they need to keep in limited memory, and to get rid of the information they no longer need. By looking at how computer scientists solve these problems, we can gain insights into strategies that can work for us, as well.

So what can computers teach us about organizing our closets?

The question is what to keep and what to get rid of. Or, from the perspective of a computer, what do you put in the fastest memory, which is very limited, and what do you store in slower memory, or get rid of entirely? Computer scientists call this caching. And it turns out that the best way to organize memory in a computer is to throw out the things that haven’t been used for the longest time. The same goes for your closet. What you’re most likely to need are the things that you’ve used most recently. The things that have gone unused the longest you get rid of.
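The eviction rule described here is what computer scientists call LRU, for "least recently used." A minimal sketch in Python (the closet items and capacity are purely illustrative, and `OrderedDict` is just one convenient way to track recency):

```python
from collections import OrderedDict

class LRUCache:
    """Evict the entry that has gone unused the longest (least recently used)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # ordered oldest-access first, newest last

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # mark as recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # drop the least recently used entry

# The "closet" holds three items; wearing the coat protects it, the scarf goes.
closet = LRUCache(3)
closet.put("coat", "winter coat")
closet.put("scarf", "wool scarf")
closet.put("hat", "rain hat")
closet.get("coat")            # coat is now the most recently used item
closet.put("boots", "boots")  # over capacity: the scarf is evicted
```

The same policy that manages a processor's fast memory, applied literally to the closet: touching an item moves it to the safe end of the line, and whatever has sat untouched longest is the first to go.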

What about managing crowded schedules and a million distractions?

People and computers both face that challenge. If you’ve tried to get your computer to do too much too quickly and it freezes up—you know the phenomenon. Computer scientists call it thrashing. Your computer is frozen because it’s spending all its time trying to figure out what to do next, managing its own memory, rather than performing tasks. Anyone who works in an office environment knows that feeling. If someone is calling you on the phone all the time, you can’t get anything done. If new emails keep popping up on your screen, you never get to the report you’re supposed to write.

One way computers deal with the problem is by carefully navigating the trade-off between responsiveness and throughput. Computer scientists use algorithms to determine how responsive the computer can be to the user and still perform its tasks. What that algorithm teaches us is to figure out the minimum amount of responsiveness we need. If you’re overwhelmed because emails keep coming in, figure out how frequently you really need to respond. Let’s say nothing is so urgent that it can’t wait an hour. Then only check your email once an hour. Get rid of those alerts that tell you every time an email comes in.
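The once-an-hour email rule is essentially what operating systems call interrupt coalescing: handle interruptions in batches rather than one at a time. A toy calculation makes the trade-off concrete (all the costs here are made-up numbers, chosen only to illustrate the shape of the argument):

```python
# Illustrative (made-up) costs, in minutes.
SWITCH_COST = 10      # refocusing cost paid every time you break off deep work
HANDLE_COST = 2       # time to actually answer one email
EMAILS_PER_DAY = 24   # assume they arrive evenly through the day

def time_on_email(batch_size):
    """Total minutes lost per day if replies are sent in batches of batch_size."""
    batches = EMAILS_PER_DAY / batch_size
    return batches * SWITCH_COST + EMAILS_PER_DAY * HANDLE_COST

immediate = time_on_email(1)  # answer each email the moment it arrives
hourly = time_on_email(3)     # check once an hour; three emails queue up

# The handling time is fixed either way; batching only cuts the
# context-switch overhead, which is exactly the thrashing cost.
```

Under these assumptions, answering immediately costs 288 minutes a day while batching hourly costs 128, even though the same 24 emails get answered. The saving is entirely in refocusing time, which is why killing the new-mail alert helps more than typing faster ever could.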

What about knowing how much time to spend on something?

We tend to think that the more effort we put into something, the better it will be. Machine-learning researchers and statisticians have discovered that this can actually be counterproductive. The phenomenon is called overfitting. Let’s say you’re writing a grant proposal. The committee has some way they are going to evaluate proposals, but you don’t know exactly what that will be. The only guide you have is your own sense of the quality of the proposal. Since your sense of quality isn’t a perfect match for how the proposal will be evaluated, beyond a certain point spending more time optimizing it won’t help and is likely to make the proposal worse, at least from their perspective.

Or consider a program to predict stock market performance. You only have data from the past, and you want to predict the future. Since there’s not a perfect match between what you can measure and what matters, optimizing past performance can actually impair future performance—the model begins to capture the noise in the data from the past, and its predictions about the future get worse rather than better. I have a tendency toward perfectionism. But at a certain point, I know, it’s good to let something go, or I’ll actually make it worse.
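The stock-market example can be sketched numerically. Below, synthetic data stands in for "past performance": a simple trend plus noise, with a second draw of the same trend playing the role of the future. The degrees and noise level are arbitrary choices for illustration, not anything from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Past data": a simple underlying trend plus noise we cannot predict.
x_past = np.linspace(0, 1, 20)
y_past = 2 * x_past + rng.normal(0, 0.3, size=x_past.shape)

# "Future data": the same underlying trend, but different noise.
x_future = np.linspace(0, 1, 20)
y_future = 2 * x_future + rng.normal(0, 0.3, size=x_future.shape)

def fit_errors(degree):
    """Fit the past with a degree-n polynomial; return (past MSE, future MSE)."""
    coeffs = np.polyfit(x_past, y_past, degree)
    past_mse = np.mean((np.polyval(coeffs, x_past) - y_past) ** 2)
    future_mse = np.mean((np.polyval(coeffs, x_future) - y_future) ** 2)
    return past_mse, future_mse

simple_past, simple_future = fit_errors(1)     # matches the real trend
complex_past, complex_future = fit_errors(15)  # flexible enough to chase noise
```

The flexible model always fits the past at least as well, since it can bend to every wiggle, but those wiggles are noise, so its predictions about the future are typically worse. That gap between past fit and future accuracy is overfitting.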

Your book takes on some tougher problems, like choosing a spouse. What can computers teach us?

Here the problem is knowing when to stop looking. Let’s use the example of buying a house. You want to look at what’s on the market. But how many houses do you need to look at before you make an offer? Every time you go to an open house just to look, you get more information, but you also lose the opportunity to make an offer. Mathematicians have analyzed that problem and there’s a very precise answer. The right balance between gathering information and acting on that information is about 37 percent—more precisely, 1/e of the total, roughly 36.8 percent. If there are 100 houses in the area you’re looking at, you don’t need to look at all of them. Instead, you should look at 37 without making any offers. And then make an offer on the next house that is better than any you have seen so far. That maximizes the probability that you buy the very best house.
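This is the classic optimal-stopping (or "secretary") problem, and the rule is easy to check by simulation. The sketch below assigns each house a random quality score and measures how often the look-at-37-percent-then-leap strategy lands the single best house:

```python
import random

def run_trial(n, rng):
    """Return True if the 37% rule picks the single best of n candidates."""
    houses = [rng.random() for _ in range(n)]
    cutoff = int(n * 0.37)               # look-only phase: the first 37 percent
    benchmark = max(houses[:cutoff])
    for quality in houses[cutoff:]:
        if quality > benchmark:          # first house better than all seen so far
            return quality == max(houses)
    return houses[-1] == max(houses)     # nothing beat the benchmark: take the last

rng = random.Random(42)
trials = 20_000
successes = sum(run_trial(100, rng) for _ in range(trials))
success_rate = successes / trials        # theory predicts roughly 0.37
```

Two things fall out of the math that the simulation also shows: the strategy succeeds about 37 percent of the time (the same 1/e figure as the cutoff), and no other cutoff does better. Roughly two times in three you don't get the very best house, but no strategy can beat those odds without more information.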

That sounds pretty calculating. Are there decisions that people are better at making than computers?

In fact, some of the problems that people actually face in everyday life are the hardest for computers to solve. They are decisions that you have to make when you only have a little bit of information—when the variables are complex and changing in ways that are hard to characterize. Human beings seem to do a better job than computer algorithms when faced with ambiguity. What my co-author, Brian Christian, and I argue in the book is that there’s a useful feedback process between computer science and human cognition. Computer science can offer optimal solutions to specific problems. But when the reality doesn’t quite match the conditions of the algorithm, humans are better at making use of these tools than computers are.
