
Left brain, right stuff

Phil Rosenzweig, IMD professor and author, talks about the importance of recognizing your ability to affect outcomes

Published: Aug 28, 2015

Q. You believe that in the realm of decision making, a critical distinction should always be made.  What is it?
A distinction that is important, but often overlooked, is whether the decision is about something you can directly influence or not. Much of the work done by cognitive psychologists in the last 25 years is based on experiments where subjects make judgments about things they cannot influence, or choose between options that they cannot alter. That’s fine for understanding the basic mechanics of human cognition—but in real life, we don’t only make judgments about things that we can’t influence, and we don’t simply make choices from options that are presented to us. Very often, we can influence or change those things. As a result, the lessons we have taken from cognitive psychology should be amended somewhat.

Q. For decades, researchers have told us that we have a pervasive tendency to overestimate our control over things; but recent research overturns that idea.  Please explain.
Don Moore, a psychologist at UC Berkeley’s Haas School of Business, and his colleagues recently ran several studies and concluded that people do not consistently overestimate their level of control: instead, they have an imperfect understanding of how much control they can exert. When control is low they tend to overestimate it, and when it is high they tend to underestimate it. Rather than saying that people suffer from a pervasive illusion of control, it is more accurate to say that they can and do err in both directions. This finding is highly significant.

Q. Research also indicates that we tend to be over-confident in many areas of life.   But you have found that when we have some control over an outcome, a high degree of confidence is a good thing. Please explain.
As indicated, we can control many activities in life: how well we do our jobs, how we perform on an exam, or how much effort we put into cooking a meal or playing an instrument. All of these things depend very much on the actions we take, the effort we expend, and our mindset. For these sorts of tasks, the more common error is not an illusion of excessive control, but the reverse: a failure to recognize how much control we really have.

Nobody says to you, “Karen, do you want to put together a good issue of Rotman Management or not?”  It’s the same for me: I have to teach my classes well.  We’re not just choosing whether to do these things well. And in these situations, I have found that an element of positive thinking and a high level of confidence can help us do better.  That is not the case for things over which we truly have no control.  For instance, what’s the weather going to be like tomorrow? Will the S&P 500 go up or down next week? You and I are not in a position to change those outcomes, so having high—or even slightly elevated—confidence is not useful.

Also, when you ask somebody about something that is relatively difficult to do, they actually tend not to overestimate their abilities; in fact, they often underestimate them.  So, the idea that people are consistently prone to overconfidence turns out not to be correct.

Q. Talk a bit about how ‘absolute performance’ vs. ‘relative performance’ fits into the picture.
Decision making has often been studied without any regard to competition. Yet in many fields, performance is best understood as relative: in business, politics, sports and more, many of the most important decisions are made with an eye to rivalry, where the aim isn’t just to do well, but to do better than others. That’s why the second key to making great decisions is to recognize whether you’re simply trying to do well, or whether you need to outdo rivals. When you combine the ability to exert control with the need to outperform rivals, suddenly it’s not just possible to influence outcomes—it’s often necessary.

The good news is, improvements in absolute performance can affect relative success. In a business environment where technologies can change suddenly, consumer preferences shift from one week to the next and rivals can emerge from anywhere, there is incessant pressure to find ways of doing better, whether it be through innovative new products and services or simply better execution.  Only by taking chances and pushing the envelope can companies hope to stay ahead of rivals.  When performance is relative—which it always is in business—one thing is assured: playing it safe will almost guarantee failure.

Q. You argue that some situations demand left-brain thinking and the ‘right stuff’. Please explain.
As indicated, many of the experiments in cognitive psychology have examined ‘no control’ choices in lab settings. These lessons are still very germane for a wide range of decisions, including consumer choice, most investment decisions, and some public policy decisions. However, when you combine the ability to influence the outcome with a desire to outperform rivals, you’re in a very different setting: this is the quadrant where strategic thinking is required, and for that you need a combination of left-brain thinking and what I refer to as the ‘right stuff’. Let me explain.

The research shows that we suffer from a wide range of thinking biases—and that we should be aware of them and try to proactively remove them.  That calls for deliberate, detached thinking, which I summarize with the term ‘left brain’.  However, in situations where you can shape the outcome, left-brain thinking is simply not enough.   You also have to stretch the boundaries, push the envelope, and boldly go where you have never gone before.  That’s what I mean by the ‘right stuff’.  Going beyond what has been done before does not mean being reckless: it calls for a combination of careful analysis and management of risk (left brain) with the willingness to take a step into the unknown (right stuff).

Q. How should managers think about control when it comes to making decisions?
The essence of management is to exercise control and influence events.  Of course, managers cannot have complete control over outcomes—any more than a doctor has total control over a patient’s health.  They are buffeted by events outside of their control: macroeconomic factors, changes in technology, actions of rivals, etc.  Yet, as indicated, it is a mistake to conclude that managers suffer from a pervasive illusion of control, and that they should temper what they think they can accomplish. I have found that the greater danger is the exact opposite: that managers will underestimate the extent of control they have. Very often, they can achieve more, influence more and bring about more change than they imagine is possible.

Q. Decision models have been called ‘the new way to be smart’. What are their strengths and weaknesses?
Decision models can be extremely useful; but again, it’s important to know whether your model is trying to make a judgment about something that you can or cannot directly influence.  For instance, a model can estimate whether a loan will be repaid, but it can’t change the likelihood that a given loan will be repaid on time.  It can’t give the borrower any greater capacity to pay, or make sure that he doesn’t squander his money the week before a payment is due.  Likewise, a model can predict the rainfall and days of sunshine on a given farm in rural Ontario, but it can’t change the actual weather; it can estimate the quality of a wine vintage, but it can’t make a particular wine any better.

When your aim is to make an accurate estimate of something that you cannot influence, models can be enormously powerful. But when you can influence outcomes, the story changes, and that is why, for people who have to get things done, models are far from sufficient.  

Complicating matters is the fact that there is also a third decision-making category between direct influence and no influence: indirect influence. In that case, if a model’s prediction is communicated in a way that changes someone’s behaviour, you may still be able to shape the outcome. Indirect influence can take two forms: if the prediction increases the chance of an event occurring, it is self-fulfilling; if it lowers the chance of an event occurring, it is self-negating.

The need to distinguish among different kinds of influence—no influence, direct influence or indirect influence—was keenly apparent during the 2012 U.S. presidential campaign. Nate Silver has become very well-known for his book, The Signal and the Noise.  I’m a big fan of Nate’s, and he shows just how powerful decision models can be.  But almost all of his examples involve things like predicting an election, predicting who’s going to win a basketball game, or predicting the weather.  While these may be things that we would like to be able to predict, we are in no position to directly influence them.  That’s why it’s important to note that big data can be very helpful in some situations, but much less so when you can influence the outcome.

Q. Tell us a bit about ‘Type I’ and ‘Type II’ errors.
Figure One shows four possible combinations of belief and reality. If we believe we can’t control an outcome, and in fact we cannot, we are in the lower left quadrant. If we believe we can control an outcome, and we truly can, we are in the upper right.  In both cases, our beliefs are correct. In the upper-left quadrant, we believe we have control when we actually don’t, so we overestimate our control.  That’s a Type I error, or a ‘false positive’.  The result is an error of commission: we go ahead with some action when we shouldn’t.  In the lower right quadrant, we don’t believe that we can influence the outcome when in fact we can. This is a Type II error, or a ‘false negative’: we fail to act when we should.

Of course, everyone would like to minimize the chance of error, which is why we gather information to improve the accuracy of our beliefs.  But even so, some uncertainty remains. That is why, when making decisions, we need to consider the consequences of error. Is it better to act as if we have control (and run the risk of a Type I error) or is it better to assume we don’t have control, and run the risk of a Type II error?  By proactively thinking about the consequences of each, we can try to avoid the more serious of the two.

Q. Can you provide an example?
Suppose a disease sweeps through a remote village.  Current remedies are ineffective, and children, the elderly and the weak succumb in great numbers. One option is to keep seeking a treatment for the disease. If we’re wrong and commit a Type I error, the result will be wasted resources, but not much more. The other option is to conclude that we have no way to halt the disease—that it is fate or God’s will.  The downside in the event that we’re wrong about that, and commit a Type II error, will be many additional deaths.  By structuring the decision in this way and comparing the consequences of Type I and Type II errors, we might conclude that it is wise to keep searching for a cure.
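
To make that comparison concrete, here is a minimal sketch of the reasoning, using purely hypothetical numbers; the probability and costs below are illustrative assumptions, not figures from the interview.

    # Weighing Type I vs. Type II errors in the village example.
    # All numbers are hypothetical, chosen only to show the asymmetry in consequences.
    p_cure_possible = 0.10   # assumed chance that a treatment can actually be found
    cost_type_1 = 1.0        # searching in vain: wasted resources (arbitrary units)
    cost_type_2 = 1000.0     # giving up when a cure was possible: many additional deaths

    # Expected cost of each course of action under these assumptions.
    expected_cost_keep_searching = (1 - p_cure_possible) * cost_type_1   # 0.9
    expected_cost_give_up = p_cure_possible * cost_type_2                # 100.0

    print(expected_cost_keep_searching, expected_cost_give_up)

Even with a low assumed chance of success, the asymmetry in consequences points toward continuing the search, which is the conclusion drawn above.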

Type II errors—which involve failure to take action when we can effect change—can be very serious.  As a rule of thumb, it’s better to err on the side of thinking you can get things done rather than assuming you cannot; the upside is greater and the downside less.


Phil Rosenzweig is a Professor of Strategy and International Business at IMD in Lausanne, Switzerland and the author of Left Brain, Right Stuff: How Leaders Make Winning Decisions (PublicAffairs, 2014). He is also the director of IMD’s Executive MBA Program.

[This article has been reprinted, with permission, from Rotman Management, the magazine of the University of Toronto's Rotman School of Management]
