Transcript
Rachel Salaman: Welcome to this edition of Expert Interview from Mind Tools with me, Rachel Salaman.
When you have an important business decision to make, your judgment is of vital importance, clearly. But the wider decision-making process is equally important, as it can catch, or fatally ignore, flaws in your decision making that you might not be aware of. Today, we're delving into why people make bad decisions and, more importantly, how to avoid doing so. My guest is Andrew Campbell, co-author of the book, Think Again – Why Good Leaders Make Bad Decisions, and How to Keep it From Happening to You. He's co-director of the Strategic Management Center at Ashridge Business School in the UK, and I recently caught up with him at his London office. I began by asking him how he defines bad decisions.
Andrew Campbell: This is a slightly tricky question, but clearly a bad decision is one that has a bad outcome. If you think of Royal Bank of Scotland's decision to buy ABN Amro, the Chairman was asked in the Treasury Select Committee how much value had been destroyed in that decision, and he said that he thought they'd lost everything – they paid £11 billion, and they got nothing. In fact, they got less than nothing. So that would be an example of a bad decision.
Rachel Salaman: And that's not just unlucky?
Andrew Campbell: Well, then the question is: that's a bad outcome, but was it a foolish choice amongst the options? And that's a much harder thing to judge, because you're trying to say, "Would it have been fairly obvious to an averagely intelligent person that there was a better choice to be made?" And, to some degree, that's what we've tried to do in putting our sample together: find decisions where most people, presented with the information available at the time, would have made a different choice than the people did.
Rachel Salaman: And that different choice would have been a better choice?
Andrew Campbell: And that different choice would have been a better choice. I mean, for example, one of the decisions is by Matthew Broderick about Hurricane Katrina. On the crucial Monday when the hurricane hit New Orleans, he was the person responsible for alerting the President and Homeland Security as to whether the levees had been breached or not, and therefore whether federal aid should be rushed to the site, because obviously if they had been breached, the city would be underwater. And despite having 17 bits of information saying that they had been breached, or that there was major flooding in New Orleans, he had some three or four bits of information to the contrary, and he chose the four to the contrary, and went home that night saying that they hadn't been breached. Now I think almost anybody else would have made a different choice.
Rachel Salaman: So tell me a bit about the research that went into this book, because it's based on a very big piece of research, isn't it?
Andrew Campbell: Well, there were two streams. One was putting together a sample of decisions with bad outcomes that we thought were foolish at the time, of which we collected about 83 that we felt broadly met those criteria – although, as I pointed out, these are quite difficult judgments to make. But we then realized that that sample of examples wouldn't give us an answer to the problem that we were trying to solve, which is, "Why do these happen?" Rather, it would provide us with a set of examples against which to test any theories that we had about the cause. So we then went in search of the reason why capable people arrive at what to other people look like foolish judgments, and that took us into brain neuroscience, particularly decision neuroscience. That research really involved reading all of the literature, talking to the people in those areas, and trying to develop some explanations for these judgment errors. We then took those explanations and tested them against the sample of examples, and they seemed to explain most of them very well indeed.
Rachel Salaman: So how do people make decisions? What goes on in our brains?
Andrew Campbell: One of the big breakthroughs, I guess, in my own thinking on this, being a very rational, academic, left-brain kind of person, was to discover that decision making – the act of making a judgment – is primarily an emotional activity in the brain. What happens is the subconscious throws up an opinion, which is your judgment, as a result of emotional tags that you have stored in your memory, and that is then often processed in the rational, reasoning part of your brain to check whether it's reasonable or not. But the actual act of forming the judgment appears to be an emotional one, and often it can be detected through measuring physical changes, like the sweat glands on the palm – so you can tell somebody's made a judgment even before they know that they've made a judgment. So your emotions are influencing your body, often before they start to influence your reasoning.
Rachel Salaman: Did you find out that people shouldn't really let their emotions play a part in any decision making?
Andrew Campbell: Well, quite the reverse. There isn't any choice, you know. Our brains work in the way that they work, and that means that the judgments we make are formed through an emotional process, and then tested using a reasoning process, often semiconsciously. What that tells us, though, is that in certain kinds of situations, particularly where the emotional triggers are inappropriate for the situation the person's facing, they're very likely to make the wrong judgment. Just to give a silly example, if you think of somebody who has been emotionally primed, maybe as a result of a childhood experience, to be afraid of dogs, then that person, when they're faced with a situation involving a dog, is going to have a high probability of making the wrong judgment about that dog's intention, or that dog's behavior, or the risks in the situation.
We recognize situations like that. What we don't realize is that, say, Fred Goodwin at the Royal Bank of Scotland is influenced by emotions in exactly the same way as that person frightened of dogs. They're affecting his judgments, and hence there is a reasonable probability that he, or his colleagues, will make the wrong decision if the emotional triggers are inappropriate.
Rachel Salaman: In your book, you talk about four red flags, which you say lead to poor decision making, which is linked to this emotion issue. The first of those is misleading experiences. Can you explain that – perhaps give an example?
Andrew Campbell: Well, it may be useful just to describe the four conditions under which it's quite likely that you or I, or Barack Obama, will make a wrong judgment.
So the first one is misleading experiences: if we've had an experience in the past which is being triggered by the current situation, and which is giving us a misleading signal, then there's a good chance that our judgments will be faulty. The second one is misleading prejudgments: previous decisions we've made, or previous conclusions we've reached, or theories we've been taught in school, or points of view we've had drilled into us by our parents or by our bosses. If those affect our judgments on the situations we're facing today, and if those are inappropriate because the situation we're facing is slightly different, then again there will be a good chance we'd make a wrong call.
And then there are two others. One is self-interest – inappropriate self-interest, which I guess we all recognize. But we probably recognize inappropriate attachments less, because as human beings we can fall in love with people, we can fall in love with possessions, we can fall in love with places, and those emotions, if they're inappropriate, can affect the judgments that we're making when those people, places, or possessions are affected by the decision.
Rachel Salaman: One of the examples in the book that you give for misleading experiences is the Gatorade Snapple example. Can you just talk us through that?
Andrew Campbell: Yes, I might not recall the names of the protagonists, but the longstanding chairman of Quaker – one of his early experiences had been the acquisition of Gatorade, and that had been a huge success, where Quaker had taken a growing brand, developed it many times over, and done extremely well out of it. And his explanation for the mistake he made over Snapple was that he saw the acquisition of Snapple as being very similar to the Gatorade situation. These were nearly 20 years apart, but his memory was still very primed by the success of Gatorade. He'd spent plenty of years looking for another opportunity like Gatorade, and he saw Snapple as this opportunity. It turned out to have some important differences which his mental processes didn't pick up, but they were the sort of differences that other people might well have spotted, and that's an example of a misleading experience. So the previous experience of the success with Gatorade made him overenthusiastic about the Snapple deal, and the result was something of a disaster.
Rachel Salaman: What can we do to stop ourselves falling foul of misleading experiences?
Andrew Campbell: Well, I guess the first issue is really, "How do you spot them in advance?" Because we are all drawing on our experiences, the question is, "How do we figure out whether, in a particular situation we're facing, we've got misleading experiences?" You know, you can't always spot them, but what we found is you can frequently, and the process that we have used, and suggest that people use, is first to lay out the uncertainties that are involved in the decision.
So if we go back to the Snapple one, then one of the uncertainties might have been, "Would the Snapple management team be able successfully to grow the brand with extra marketing resource?" And so you identify all of the uncertainties, because these are the areas about which you have to make judgments – it's only really where there's an uncertainty that you need a judgment; if it's not uncertain, you can solve the problem with a calculation. So you look at each of the uncertainties, and then you ask yourself, "What experiences might I, or the decision maker, be drawing on in order to make a judgment about that uncertainty?" And then you look at those experiences, and say, "Is there any reason to suppose that those experiences might be misleading?" And if you have some concerns that they might be misleading, then you've got a red flag. And you'll then start to worry: "Well, I might be making the wrong decision, or the wrong judgment. I might believe strongly in my wrong judgment, and that might disrupt the whole decision process."
Rachel Salaman: Well, let's talk about misleading pre-judgments now; they're a bit different from misleading experiences. What are the differences there?
Andrew Campbell: Well, again, there's a bit of an overlap, because this is stuff that you have in your memory. What we found is that it's helpful to distinguish between experiences in the memory and previous judgments that are in the memory. Often those previous judgments have come from experiences, so the two are linked, but there's a whole category of judgments in our mental make-up that come from things you wouldn't normally think of as experiences. So when you're sitting in a classroom at university, you get taught some theory, and that then affects your judgment structure – it's those kinds of things we're talking about. Often it's previous decisions you've made. So if you have previously decided that this is a growing market, you are going to have a harder time concluding that it's not a growing market when data comes in that starts to conflict with that earlier judgment.
Rachel Salaman: You also talked about inappropriate self-interest as a red flag. Obviously, self-interest itself is fine; you need self-interest to get ahead in business. At what point does self-interest become inappropriate?
Andrew Campbell: Well, self-interest, obviously, is a huge aid in our mental processes, and you could imagine that, through our evolution, it's become a terribly important part of how our minds work. They work very hard to protect us from things that might harm us. When you're driving along the road, your subconscious can alert you to the fact that there's something dangerous happening, and you can feel an emotional jolt even before you've consciously recognized what it is that's dangerous. So our emotional processes are very good at spotting things that might be dangerous to us, and generating the emotions that will help protect us.
Those emotions are inappropriate if they conflict with the objective view that we should be taking on behalf of the stakeholders – of whom we may be only one, or may not even be one – when we're making a decision. So if you're chief executive of a large company, you have stakeholders such as shareholders and customers and other managers and governments, and your personal interests are a very minor stake in that decision, but they can have a very major influence on your emotions.
Rachel Salaman: And your fourth red flag is inappropriate attachments. You talked about attachments to people, places, and things. Have you got an example to illustrate this, and how it affects decision making?
Andrew Campbell: Yes, I think one of the more interesting ones is an old one, but it concerns An Wang, who was chief executive and founder of Wang, an incredibly successful company that rode the rise of word processing in the 70s. When he came to make a decision about launching personal computers in the early 80s, even though the IBM PC was obviously becoming the industry standard, he chose to launch a computer with proprietary software, and as a result it was not nearly as successful as it might have been, and some years later Wang went into Chapter 11. So it was a disaster for An Wang and for the company.
And in trying to understand why he had made that judgment, we were told, particularly by his son, that he had a hatred for IBM, and there were some good reasons for this – the opposite, I guess, of an attachment; hatred being the opposite of attachment, but it's the same set of emotions. As a result, he had not wanted to use a software standard that was IBM-supported, even though it was provided by Microsoft, and it's that kind of thing that can disrupt important judgments.
Rachel Salaman: And as we go about our daily business making decisions, what should we be thinking about, to make sure we're not being led astray by inappropriate attachments?
Andrew Campbell: Again, it's a bit like the sort of analysis that one needs to do for misleading experiences, or pre-judgments, or self-interest. On the attachment side, the first thing to do is to identify any people, places, or things that are affected by this decision, to which the decision maker might be attached; then to make a judgment about whether the decision maker is attached. For example, if you're trying to decide whether to promote your Number Two into your job when you move on, there is obviously a high likelihood – or at least a risk – that you will be attached to that individual, and that'll affect your judgment. So you lay out the attachments – the people, places, and things to which you might be attached. You then have to make a judgment about whether you think there is an attachment that could be strong enough to disrupt the decision, and if you think there might be – because you're never going to know for certain – then you've got a red flag. There is a chance that this decision could go badly wrong because of that attachment.
Rachel Salaman: Balancing the red flags in your work are what you call 'safeguards' – things that help us avoid bad decision making. Can you just briefly do an overview of the four safeguards in your book?
Andrew Campbell: Yes, let me try and explain the flow here. The thought behind the safeguards is that once you've identified some red flag condition, you are worried that there is a risk that the decision will go wrong. It may be quite a small risk, but it's there. So what you're then trying to do is think about, "How can I give some protection to the decision through improving or strengthening the decision process?" And the first thing that you're likely to think about is, "Can I correct the problem at source? Can I change what's in the head of the person whose thinking might be biased as a result of the red flag – the self-interest, or the inappropriate experience?"
And the easiest way to do that is to give that individual – there may be more than one or two people involved – some experience, or some extra information, which would cause them to change whatever it is that's in their head. If you're trying to make a decision about a new market, and you're worried that the individual might be biased because of their previous experience in a similar market, you might encourage that individual to go out and spend time in that market, meet some people, and get some new experiences, which could change that misleading influence. And that's step one.
So usually one tries first to look for a solution of data or experience to help protect the decision. But if you think that's not going to be sufficient – and very often it isn't, partly because of the strength of the emotions that can be triggered – you need something more. You're unlikely, for example, to solve a problem of self-interest by giving people more information or more experience; they're still going to be thinking, "Well, this is bad for me," even if only subconsciously, and that's influencing their judgments.
So the next thing you can do is to bring some other people into the decision process, to challenge that individual, or the small clique who may have a common view. It could be bringing people in, it could be constructing debates and conversations, it could be dividing the decision up into smaller decisions in order to open up the dialog. And that's a very common thing that's done. You construct the group that you want to make the decision based on your belief about any inappropriate influences people might be bringing to the table. So, for example, if you're thinking of selling a business, you may well want to involve the head of that business in the decision, but you'd want to make sure that there were other people who were balancing that person's self-interest. If you think that may not be strong enough – because the individual or the clique are sufficiently powerful that they're unlikely to be influenced by others, or they're going to reject the idea of having other people involved in the decision – then your next step is to add some extra governance.
So you need a layer above the decision makers who have the ability to reject the recommendation, on the basis that they think it isn't being well enough thought through. Typically in an organization this is the Board of non-executive directors, but of course lower down it can just be your boss or your boss's boss. So you think about, "Who is in a position to say, 'Hang on, I hear what you're recommending, but I'm concerned that you're biased, and I want you to think again about it'?" That's part of the governance. And you can make it stronger or weaker: you can add people to the governance layer, you can give the people at the governance level some extra information, and so on.
And then the fourth, the final backstop, if you think all of those are not really going to protect you against this particular bias that you're worried about, is monitoring. The decision will get made, but if the wrong decision gets made, you want to catch it really quickly, so you monitor what happens very closely, so that you can change the decision if it looks like you've made the wrong one.
Rachel Salaman: In your experience, how often are those safeguards in place in organizations?
Andrew Campbell: Well, they're often in place in what I would think of as a bureaucratic way. So we have non-executive chairmen who are supposed to be able to stand apart from major decisions. We have layers of management who have oversight over each other's judgments. We have processes for collecting data, and people often involve consultants and so on. What we're recommending is that those processes of good decision making are designed not in a bureaucratic way, because of the rules of governance, but with great thought about what the particular red flags are in this decision. And actually, in some decisions where there are no red flags, we can rip out a lot of that bureaucracy. We can let an individual get on with it, because we're fairly confident that there's little chance of an error of judgment.
Rachel Salaman: Most of the examples you've been using have been from the world of business; obviously, big decisions are being made in business all the time. Are the things you're talking about relevant in other areas too – for example, politics?
Andrew Campbell: The answer is yes – and the military. We have some examples of military decisions that have gone wrong for these kinds of reasons. One of my favorites is the decision that Tony Blair took to take Britain into war with Iraq – into invading Iraq alongside America. That, for me, captures examples of multiple red flags, and it would have been a very difficult decision to provide suitable protection for, in terms of safeguards. But let me just talk through the red flags, and then maybe a little bit about some safeguards.
In terms of misleading experiences, Tony Blair had persuaded President Clinton to face down Milosevic in the Serbia–Kosovo problem, because of his belief that if the leaders of the world stood shoulder to shoulder against misbehaving leaders, they could get them to back down and correct the situation without having to kill a lot of people. And he succeeded: Milosevic did back down, and he was then deposed by the Serbians, so for Blair that was a huge reinforcement of that belief. He then sent troops into Sierra Leone as a result of difficulties that were happening there, and again there was a big success, and the problem was solved with very little bloodshed. And I guess his third potentially misleading experience was Afghanistan: after 9/11, he went into Afghanistan alongside the Americans and a number of other countries, and certainly at the time of the Iraq decision, the Afghanistan invasion had appeared to be successful.
So, faced with the decision about, "Should we try and face down Saddam Hussein and invade Iraq?" these misleading experiences would have been giving him what turned out to be inappropriate guidance. He also suffered from some of the other red flags. He had made some pre-judgments, not only before the Kosovo situation, but also in around 2000, when he made a presentation in Chicago saying how he felt it important for the big nations of the world to face down the bad nations, and so improve the condition of the world.
He also potentially suffered from some self-interest, in that he was being touted – even nicknamed – the Deputy Leader of the World as a result of the influence he was having on the world stage, and no doubt continuing to have that influence, shoulder to shoulder with George Bush over the Iraq situation, would probably have been attractive to him personally. And the longstanding relationship between Britain and the United States, and his personal relationship with George Bush, could well have been an inappropriate attachment in this decision. So there was a situation where a leader probably had lots of red flags suggesting that he could make the wrong judgment, and the question is, "Could one have designed a process to protect him from what looks, with hindsight, like certainly the wrong judgment?"
Interestingly, the Cabinet, which is the main process for checking and debating and arguing against a Prime Minister's judgments, didn't really operate over this one. There were apparently a number of discussions about Iraq, but none of them really took the form of, "Should we invade Iraq?" They were more for information, and discussions of secondary issues, like, "If we invade Iraq, how will we deal with the UN, and should we get the UN involved?" and so on. If you had been Gordon Brown, and you'd been worried that Tony Blair was about to make the wrong decision, and you had thought through the red flags that he was potentially suffering from, you might well have insisted on some more robust process of debate and dialog, or at least a Cabinet vote about whether this was the right decision – there was, for example, no vote in the Cabinet.
Rachel Salaman: And what about on a much smaller scale? Most of us are making decisions all the time in our daily lives. Is there anything we should be thinking about to strengthen our decision-making processes and lead to better decisions?
Andrew Campbell: This way of thinking, I personally believe, should be part of our decision making at any point – even in a family. In fact, Jo Whitehead and I, who worked very closely together, with Sydney's help, used this thinking as we were writing the book. We had to make decisions about, "How many chapters should we have in the book?" We had to make decisions about, "Should we use this diagram to represent this idea or not?" And it was frequent for us to be saying to each other, "Well, Andrew, you wrote the first draft of that chapter, so you're likely to be attached to the words that are there, so maybe I should do the next draft in order to give us a chance of finding a new way of expressing this." Or, "Let's involve Sydney in helping us make this decision, because he hasn't got any attachments," or, "He's unlikely to be self-interested," or, "He doesn't have the particular misleading experience that we were worried about." So I think this way of thinking can help us with all sorts of minor decisions, as well as the really big ones.
Rachel Salaman: Andrew Campbell, speaking to me in London. Once more, the name of Andrew's book is "Think Again – Why Good Leaders Make Bad Decisions, and How to Keep it From Happening to You," co-authored by Sydney Finkelstein and Jo Whitehead. They have a website where you can find out more about these ideas, www.thinkagain-book.com.
I'll be back in a couple of weeks with another Expert Interview. Until then, goodbye.