Risk Cognition

Wide-eyed believer, arguer, mocker: which one are you?

  • April 10, 2018

When it comes to dealing with big, long-range risks, people tend to fall into one of those three groups.

Global climate change, the rise and eventual dominance of artificial intelligence over its human creators, rampaging Terminator machines, human population collapse, nuclear warfare… These are examples of potentially huge risks that could take place this century.

Even as I raise these subjects, some readers will be experiencing different, but similarly powerful, reactions:

“Oh my gosh, what can we do!”

“Hang on a minute, it’s too early to say!”

“Don’t be an idiot, that’s never going to happen!”

Long-range risks are arguably the most important ones to pay attention to, because the changes they bring about can be the biggest in impact and duration. My perception is that we prefer not to think too far ahead, either because we are uncomfortable dealing with uncertainty, or because we care too little – we won’t be around long enough, in life or in employment, to see the consequences rolling in.

All of which means we can easily walk into real danger as time passes.

Let’s face it: it’s just easier to deal confidently with day-to-day operational issues – to “sweat the small stuff”, if you’ll allow the phrase.

But dealing confidently with long-term, strategic issues means doing tough stuff. It means asking big questions, persuading difficult audiences, challenging prevailing opinions and addressing issues where there isn’t much consensus. All of these carry the inherent risk of committing a “Career Limiting Move”, and nobody wants to do that.

Let’s look at those three reactions to long-range risks again and explore which mindset is best for dealing with these big, strategic risks.

The wide-eyed believer/“OMG” reaction isn’t something I’ve come across very often at executive level.

People with a tendency to accept claims without evidence rarely make it to the highest levels. Such people are overtaken by reality earlier in their careers, unless they have been exceptionally lucky. Faced with new risks, being unconditionally accepting means they see only positives and are surprised or let down when the real world does its thing.

At the other end, we have the mockers. These people refuse to believe.

They only trust facts – or at least the facts that fit their established worldviews. This can go as far as confirmation bias – a pattern of thinking we all lapse into if we aren’t careful – seeing what we want to see despite sometimes overwhelming evidence to the contrary.

No one likes to be wrong, and it’s not that unusual to keep doing something harmful just so we don’t have to admit a mistake and can stay safe in the comfort of a group of like-minded people.

Mockers look for certainty and resist ambiguity, which is a little like trying to drive a car by looking in the rear-view mirror: you make decisions based only on what has already happened. Dealing with risk means grappling with uncertainty – using what we already know as a starting point, then breaking down and digging into the things we don’t.

This is why I want to return to the middle group, the Arguers.

For me, this is where really effective management of long-range risks can take place. You see, the further away a risk is in time and space, the less connection you have with established facts, and the less likely it is that everyone will agree on its implications and significance. Creating an environment that develops questioning and debating habits, deconstructs uncertainties, and builds a common view of a risk – one that is allowed to adapt as new evidence emerges – is what managing long-range risks demands.

What do we know, how do we know it, and what don’t we know yet? What level of confidence do we need before we commit resources to action? How can we break one big problem into many small pieces and take one step at a time towards the best position for the future? How will we recognise the direction of change and correct wrong decisions as quickly as possible?

Elon Musk, Bill Gates and even the late Stephen Hawking have all issued grave warnings about the future of AI

…and how this invention will bring lots of benefits to humanity at first – but could then get out of control and become severely harmful (what they have to say is actually quite scary!).

It might even sound crazy to you right now, pushing you into the mockers’ camp and bringing to mind images of Schwarzenegger-shaped T-800 killer robots. But these are respected thinkers.

At a recent MIT conference on AI, a former Google exec predicted that half of all jobs in China could be done by AI in the next 10-15 years. Risks like these will profoundly affect our businesses, our economies, our politics, our sense of identity – if we don’t face up to them now.

So here’s my practical advice for dealing with long-term, strategic risks inside your organisation:

  • Bring groups of decision-makers together to constructively disagree and form “less wrong” positions about the future.
  • Give them stimulating information to consider (external experts are good for this) and work out what it means in 10, 5 and 2 years’ time.
  • Make sense of big-picture issues by asking what practical effect they will have on each part of your activities, and when.

Need help making sense of your risks? Get in contact.

Got a problem with T-800 Terminators? Call someone else!

