
Dan Ariely: Hidden Forces that Shape Our Decisions

The renowned behavioural economist talks about some of the hidden forces that shape our decisions

Published: Feb 9, 2010 08:24:35 AM IST
Updated: Feb 9, 2010 12:06:14 PM IST

You have called Alan Greenspan’s admission that he was ‘shocked’ by the failure of the financial system “an important step forward” for Behavioural Economics. Please explain.
For years, my colleagues and I have been conducting experiments about human irrationality. When we present our results, the ‘rational’ economists say, ‘These are very nice experiments that make for great dinner conversation, but when it comes to professionals making decisions that involve money, irrationality simply doesn’t occur’. I never bought this argument: why would the human brain develop two different approaches to decision making, depending on how important the decision is? And while I allowed that the market could possibly mitigate some irrational behaviour, I also felt that it could just as easily increase it.

At the end of the day, we had all this evidence of human irrationality, but we could never really test how it plays out in the market. There was no way to set up two versions of the market -- one in which rational people made the decisions and one in which they didn’t. Nor could we get these hyper-rational beings to participate in our experiments, so we were left without any empirical answers -- until the failure of the financial markets in 2008. This was indeed the best thing that has ever happened for Behavioural Economics. That isn’t very nice to say, because so many people were hurt by it (myself included), but perhaps we can take some comfort in the fact that an emerging academic discipline derived some benefit from the crisis. All of a sudden, there was a realization that the irrationalities we had been studying might be much more important -- and prevalent -- than people believed. Maybe we needed to start thinking about human ability in a more humble way, and to acknowledge that lots of avoidable mistakes are being made on a regular basis.

If the so-called ‘rational’ economic approach can’t protect us from ourselves, what model should we be using?
That’s a tricky question. One reason we got sucked into the rational model is that it is so tractable. Outside of a few zealous economists, most people would admit that it is not perfect, but it’s the best model we have. Allow me to make an analogy. Can you imagine what our highway system would look like if it had been designed by economists? First of all, there would be no shoulder at the side of the road: why would you ever pave something that nobody is supposed to drive on? And those raised ‘bubble lines’ that tell a driver he is leaving his lane? You wouldn’t need those either, because people know they are supposed to drive within the lines. Indeed, we might not have any lanes or speed limits, because why restrict people like that? What I am describing is a situation in which you assume that people are perfectly rational beings who know what they are doing, so you build in no room for error. While this is an extreme example, it is the basic idea of perfect economic rationality.

What model should we have in place instead? There is no one theory in Behavioural Economics on which a new model could be built: there is only one way to be rational, but there are endless ways to be irrational. We are susceptible to all kinds of mistakes, and these change with time and technology. Again, think about driving: when cars were slow, we couldn’t really hurt ourselves with them. But then we made faster cars, and all of a sudden we needed airbags and anti-lock brakes. Next, we created cell phones and text messaging, and suddenly we needed new rules telling people not to send text messages while driving. In what kind of universe do we have to tell people not to risk their own lives and the lives of others by texting while driving? Sadly, this is the irrational world we live in, and people make these mistakes all the time -- even smart people.

The fact is that we are susceptible to all kinds of decision errors, and as we invent new technologies, new financial instruments and other new ways to get ourselves into trouble, we also create more risks. This aspect of ‘progress’ needs to be better managed. I think we should take a more empirical approach to life by saying, ‘Let’s examine what we are good at, what we are bad at, and the sorts of mistakes that we regularly make and don’t make. Where people don’t make many mistakes, we can let them loose -- following the idea of the free market; but where people make lots of mistakes, we should think more about how to prevent them -- or at least, limit them.’

You have said that people don’t know what they want, unless they see it in context.  What are the repercussions of such ‘relativism’ for decision making?
The term ‘positional good’ refers to the idea that in many cases, people don’t really care how big something is; they just want to have more than the next guy. Take sea lions, for example: they want to be bigger than the other sea lions, because if you’re bigger, you will attract more females. But in the race to become bigger, they tend to get much too big, and many die from health complications. Now think about humankind, and how nice it would be if our collective ‘footprint’ were half the size it is now: we would use less energy and need fewer resources. In the race to have ‘more’, we have hurt the species -- all because we care so much about comparing ourselves to others. Executive salaries fall into this category as well. Many executives make a substantial amount of money, and nothing would happen to their lifestyle if they made a bit less. But the way they look at it isn’t, ‘How much do I need?’, but, ‘I want more than that guy over there’. And it’s not just executives -- we are constantly making these types of comparisons in many domains of life.

You have studied a wide variety of decision-making biases. Which would you say is the most dangerous to good decision making?
I think the one with the biggest effect is the power of habit. When we come into a new environment, before long we will have to make a decision of some type. It may be made thoughtfully or not, and it may be based on real information or not, but the next time we enter that environment, we will remember what we did the last time. We won’t remember why, but we will remember what we did, and we have a tendency to repeat that decision over and over. I once did some research in a supermarket. When a new juice comes out and it’s half price, people say, ‘Hey, I’ll try that’. But what I found is that the next time they come in, they forget that they bought it the first time only because it was half price. They just say, ‘This is the juice I bought last time’, and they keep buying it. This tendency can turn one mistake into a stream of mistakes over a long period of time.

Seeing reality from a self-serving perspective is another pervasive human foible.  What can we do to counteract it?
Unfortunately, there isn’t much we can do. If, every time you make a decision, you consciously say to yourself, ‘I will not have a conflict of interest here’ or ‘I will not be overly optimistic’, you might be able to move the needle a little bit -- but not by much. Imagine that you’re a doctor and there are two treatment plans available for your patient: plan A and plan B. Plan A is better for the patient, and plan B is better for you (it takes less effort and input). Is it possible for you to see this in an objective way? The answer is no. All we can do is try to eliminate situations that foster conflict of interest.

By the way, conflict of interest was a major contributor to the financial crisis. Imagine if I paid you ten million dollars a year to believe that mortgage-backed securities are a good thing, and I got everybody around you to behave in the same way. The desire to see the world in a way that is comfortable for us is very powerful, and I don’t think we can escape it. What we can do is try to limit the amount of trouble it gets us into by taking steps to eliminate conflict of interest from our financial system, from our healthcare system and from politics. The sad truth, however, is that once people are tempted by a conflict of interest, they are likely to fail.

This brings me to another point: many people think Behavioural Economics is just about ‘how stupid we are’, but it’s also about how wonderful we are. If you are a lobbyist for company X, and you come and spend time with me and tell me stories, and I learn about your family and your hobbies and so on, I am predisposed to like you, and I will want to help you. It is this basic human desire to help others that makes me susceptible to you. This is a wonderful capacity -- it’s great that we like other people and are willing to do all sorts of things for them. As we’ve discussed, conflict of interest also has a very dark side, but I would not want to program people so that they aren’t susceptible to it; while that would eliminate many decision errors, it would also mean people wouldn’t care about each other. I think we just need to better understand human nature, figure out its strengths and weaknesses, and find ways to limit the costs of the weaknesses.

When it comes to our finances, you have said that most people suffer from ‘the planning-fallacy syndrome’. What is this, and what can we do to overcome it?
The planning-fallacy syndrome shows up in domains where we promise ourselves we will finish something by a certain time, but we rarely do. The reason this happens is that in life, different things tend to go wrong at different times. For example, I am late getting home nearly every day. To an outsider, it might appear as if I never learn from my mistakes, but in my defense, every day something new happens: I might get an unexpected phone call, or the printer gets jammed. If the exact same thing happened every day -- say the printer got stuck for ten minutes -- I would quickly learn to work around it, but that isn’t what happens. And like most of my fellow humans, I haven’t taken the time to list the range of things that might go wrong, weight each one by the probability of it happening, and conclude that something will go wrong almost every day -- which would tell me that I should leave 17½ minutes earlier in order to get home on time.

This same phenomenon happens with our finances. We all have certain fixed expenses -- mortgage, electricity bills and so on -- but things also go wrong sometimes and surprise us. The car breaks down or the roof starts leaking, and because different things go wrong at different times, we don’t plan for any of it. It’s not as if we plan for the average amount of bad things happening: we don’t plan for any bad things at all, so we need some help with that. Imagine if somebody could go over your expenditures and say, ‘Over the last three years you have been spending 20 per cent of your income on unexpected expenses; let’s see if we can better account for that.’ It’s a tall order to say that we can change the way we think all the time. Instead, we need to design and implement something automatic that creates solutions for such problems -- maybe a smarter credit card or budgeting tool.
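
The arithmetic Ariely gestures at here is simple expected-value reasoning: each individual mishap is rare, but something goes wrong on most days, so the right buffer is the expected total delay, not zero. Below is a minimal sketch of that calculation in Python; the events, probabilities and delays are invented purely for illustration.

    # Hypothetical daily disruptions: (event, probability it happens on a
    # given day, minutes lost). All numbers are invented for illustration.
    disruptions = [
        ("unexpected phone call", 0.30, 15),
        ("jammed printer",        0.25, 10),
        ("hallway conversation",  0.40, 10),
        ("late meeting",          0.15, 20),
    ]

    # Expected total delay per day: the buffer a careful planner would add.
    expected_delay = sum(p * minutes for _, p, minutes in disruptions)

    # Probability that at least one disruption occurs (assuming independence).
    p_none = 1.0
    for _, p, _ in disruptions:
        p_none *= 1 - p

    print(f"Expected delay per day: {expected_delay:.1f} minutes")  # 14.0 minutes
    print(f"Chance something goes wrong: {1 - p_none:.0%}")         # 73%

Even though no single disruption is likely on any given day, the expected delay here comes to 14 minutes, with roughly a 73 per cent chance of losing at least some time -- which is exactly why planning for the zero-delay day fails so reliably.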

Who has a better chance of making a good decision, an individual or a group?
If you are asking whether, over the history of the world, groups or individuals have made better decisions, I suspect the answer is individuals -- but that is not to say that groups don’t have potential. The problem is that when you get groups of people together, they often make bad decisions, for a number of reasons. When President Bush decided to invade Iraq, he got his cabinet together and said, ‘I think we should invade; what do you think?’, and he looked for responses from his cabinet members, one after the other. Imagine being the third person to answer: your boss has just said he thinks it’s a good idea, and two other cabinet members have agreed. Are you going to say, ‘No, I don’t think we should’? Not likely. So groups have attributes that actually hinder their ability to make good decisions, and authority and conformity are two of the most common. If you get a group of 20 people together to express their opinions, you should not expect to get the value of 20 independent opinions. On top of that, there’s also political correctness and lots of other stuff that often gets in the way of good decisions.

Don’t get me wrong -- a multiplicity of opinions is a great thing, but it doesn’t necessarily materialize just because you get lots of people into a meeting, and we have to create solutions that address this. A few years ago, my colleagues and I created a piece of software called ‘anti-groupware’, where we basically tried to remove most of the negative social consequences of group decisions: people voted anonymously, and nobody else could see their vote; if you didn’t feel like you knew much about the topic, you couldn’t vote; and if you thought somebody else knew the topic better than you, you could assign your vote to them -- but that other person (the one voting for you) would not know about it. Taking such steps can allow a group’s potential to flourish. What we need are broad-based interventions that enable the benefits of diverse groups, without the hidden costs.
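
To make that mechanism concrete, here is a toy sketch of how a ballot with those three properties -- anonymity, optional abstention and blind delegation -- might be tallied. It is not the actual ‘anti-groupware’ implementation; the participants, data structure and function are invented purely for illustration.

    # A toy sketch, not the actual software: tallying a ballot with the three
    # rules described above -- anonymous votes, optional abstention, and
    # blind delegation (the delegate never learns who assigned them a vote).
    from collections import Counter

    # Each (hypothetical) participant casts a vote, abstains, or delegates.
    ballots = {
        "p1": {"vote": "yes"},
        "p2": {"vote": "no"},
        "p3": {"delegate_to": "p2"},  # p2 is never told about this
        "p4": {},                     # abstains: didn't know the topic
        "p5": {"vote": "yes"},
    }

    def tally(ballots):
        counts = Counter()
        for entry in ballots.values():
            if "vote" in entry:
                counts[entry["vote"]] += 1
            elif "delegate_to" in entry:
                target = ballots.get(entry["delegate_to"], {})
                if "vote" in target:
                    counts[target["vote"]] += 1  # follows the delegate's vote
        return counts

    print(tally(ballots))  # Counter({'yes': 2, 'no': 2})

Because only the aggregate tally is ever reported, no one can see who voted which way or who delegated to whom -- which is precisely how such a design removes the social pressures of authority and conformity from the vote.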

Dan Ariely is the author of the New York Times best-seller Predictably Irrational: The Hidden Forces That Shape Our Decisions (HarperCollins, 2009). He is the James B. Duke Professor of Behavioural Economics at Duke University, with appointments to the Fuqua School of Business, the Center for Cognitive Neuroscience and the Department of Economics.

[This article has been reprinted, with permission, from Rotman Management, the magazine of the University of Toronto's Rotman School of Management]
