Entire industries, including strategic consulting, are built on selling you frameworks for defining, tiering, and quantifying levels of uncertainty, but the calculated bets and residual risks these frameworks produce tend to rest on assumptions that render them useless, or even dangerous.
Today, even tangibles are intangibles: cars, planes, and computers rely on software. They are essentially digital assets in the guise of a mechanical body, and much of their value is intangible.
If we are to remain relevant, we need to build antifragile foundations to prepare for disruption and benefit from any disorder. We must create innovative, fluid, and adventurous mindsets as well as social and economic networked ecosystems that strengthen from stress, random events, and shocks.
With complex, systemic challenges such as climate, there are no individual winners. Collectively addressing carbon impact means we all win, or we all lose.
The fight against climate change is often an opportunity for banks, financial institutions, and ratings agencies to roll out new marketing products, a new green bond here, a new net-zero tracker index fund there, as often as they can.
Inflection points are moments when major shifts from one stage to another take place. They can be seen in social movements, technological innovations, and business transitions. Regardless of context, one cannot merely continue the same behavior after an inflection point.
Climate intelligence enables action-oriented, climate-aligned decisions to mitigate risks, build resilient adaptation, and identify emerging opportunities.
Incremental climate adaptation needs to shift to exponential climate adaptation.
Disruption is no longer merely a single or recurring event but a steady state whose impact keeps expanding. In short, while disruption has always existed, it is now disrupting itself.
In a systemic world, there is no such thing as a discrete or isolated event – impacts cascade and spill over.
Horizon scanning enables you to catch a glimpse of the future by observing fragments embedded in the present.
As AI, machine learning, and neural networks advance, incomprehensibility will reach even higher levels, exposing these complex systems to both human and machine error.
As AI continues to develop, machines could gain growing legitimacy as autonomous makers of strategic decisions, an area where humans currently lead.
Could the most fundamental edge humans have over machines be irrationality?
As algorithms become the most important decision-makers in our lives, the question is not only whether we can trust AI, but whether we can trust that we understand AI well enough.
To shift the relationship between humans and machines, AI does not have to reach artificial general intelligence, nor become exceptional at handling complex systems. It just needs to become better than us.
Maybe the existential risk is not machines taking over the world or reaching human-level intelligence, but rather the opposite: human beings thinking like idle machines, unable to connect the emerging dots of our complex, systemic world.
When it comes to our world and existential risks, we face the “Big UN”: UNcoordinated, UNaccountable, UNprepared, and UNderinvested.
We must remember that agency emerges through choice, not structure.
The paradox of short-term thinking is that it often ends up being more damaging and more expensive than longer-term thinking.