Global climate change, the rise and eventual dominance of artificial intelligence over its human creators, rampaging Terminator machines, human population collapse, nuclear warfare… These are examples of potentially huge risks that could take place this century.
Even as I raise those subjects, some listeners will be experiencing different, but similarly powerful reactions to them:
“Oh my gosh, what can we do!”
“Hang on a minute, it’s too early to say!”
“Don’t be an idiot, that’s never going to happen!”
Long-range risks are arguably the most important ones that we should pay attention to, because the changes they bring about can be the biggest in terms of impact and duration. It’s my perception that we prefer not to think too far ahead, either because we are uncomfortable dealing with uncertainty, or because we care too little – we aren’t going to be around long enough in life or employment – to see the consequences rolling in.
All of which means we can easily walk into real danger as time passes.
Let’s face it, it’s just easier to confidently deal with day-to-day operational issues – to “sweat the small stuff” if you will allow.
But confidently dealing with long-term, strategic issues means doing tough stuff. It means asking big questions, persuading difficult audiences, challenging prevailing opinions or addressing issues where there isn't much consensus. All of which carry their own inherent risk of committing a "Career Limiting Move" – and nobody wants to do that.
Let's look at those three reactions to long-range risks again and explore which mindset is best for dealing with these big, strategic risks.
People with a tendency to accept without evidence rarely make it to the highest levels. Such people are overtaken by reality earlier in their careers, unless they have been exceptionally lucky. When they are faced with new risks, being unconditionally accepting means they see only positives, and they are surprised or let down when the real world does its thing.
They only trust facts – or at least, the facts that fit their established worldviews. This can go as far as confirmation bias – a pattern of thinking we all lapse into if we aren't careful – seeing what we want to see despite sometimes overwhelming evidence to the contrary.
No one likes to be wrong and it’s not that unusual to keep doing something that is bad, just so we don’t have to admit any mistakes and we can stay safe in the comfort of a group of like-minded people.
Mockers look for certainty and resist ambiguity. That's a little like trying to drive a car by looking in the rear-view mirror: you only make decisions based on what has already happened. Dealing with risk means grappling with uncertainty – using what we already know as a starting point, but then breaking down and digging into the things we don't.
For me, this is where really effective management of long-range risks can take place. You see, the further away a risk is in time and space, the weaker your connection to established facts, and the less likely it is that everyone will agree on its implications and significance. What helps is creating an environment that develops questioning and debating habits, deconstructs uncertainties, and builds a common view of a risk – one that is allowed to adapt as new evidence emerges.
What do we know, how do we know it, and what don't we know yet? What level of confidence do we need before we commit resources to action? How can we break one big problem down into many small pieces and take one step at a time, so that we are in the best position for the future? How will we recognise the direction of change and correct wrong decisions as quickly as possible?
…and how this invention will bring lots of benefits to humanity at first – but could then get out of control and become severely harmful (what they have to say is actually quite scary!).
It might even sound crazy to you right now, pushing you into the mockers' camp and bringing to mind images of Schwarzenegger-shaped T-800 killer robots. But these are respected thinkers.
At a recent MIT conference on AI, a former Google exec predicted that half of all jobs in China could be done by AI in the next 10-15 years. Risks like these will profoundly affect our businesses, our economies, our politics, our sense of identity – if we don’t face up to them now.
Need help making sense of your risks? Get in contact.
Got a problem with T-800 Terminators – call someone else!