A resilient world is built on humility

Featured

By David Salt

What helps keep a system resilient?

Of course, it depends on context, and everyone brings their own definitions to the party when answering this question. Which means you seldom find two people who will give you the same answer.

Yet, obviously, it’s a pretty important question.

Nine attributes

Towards the end of writing the book Resilience Thinking with Brian Walker, we asked many of the world’s preeminent resilience scholars (including Buzz Holling) what they thought were the key lessons emerging from resilience science. They responded with a wide variety of answers, both in terms of length of response and areas covered. Even resilience experts vary in what they think is most important about the topic.

We didn’t have room in the book to reprint their responses so instead we attempted to distill their thoughts into a list of nine attributes of a resilient world. In summary, those attributes are:

1. Protect diversity: A resilient world promotes and sustains diversity in all forms (biological, landscape, social and economic).

2. Respect ecological variability: Resilience is about embracing and working with ecological variability, rather than attempting to control and reduce it.

3. Manage with modularity: Resilient systems consist of modular components. Failure in one component doesn’t collapse the system.

4. Acknowledge slow variables: There needs to be a focus on the controlling (often slowly changing) variables associated with thresholds.

5. Govern with appropriate feedbacks: A resilient world possesses tight feedbacks (but not too tight). Are the signals from cost/benefit feedbacks loosening?

6. Cultivate social capital: This is about promoting trust, well developed social networks and effective leadership.

7. Promote innovation: Resilience places an emphasis on learning, experimentation, locally developed rules and embracing change.

8. Govern with overlap: A resilient world would have institutions that include ‘redundancy’ in their governance structures, including a mix of common and private property with overlapping access rights.

9. Incorporate ecosystem services: A resilient world includes all the unpriced ecosystem services in development proposals and assessments.

It’s a good list (I’d even suggest a great list) though, of course, each attribute requires a lot of unpacking, explanation and illustration with examples (that said, it did appear at the end of our book, so readers who got that far were already in the frame).

But why only nine?

This was Brian’s idea: ‘Let’s set out nine attributes, one short of the biblical ten, and invite readers to suggest what attribute they would add to our list to complete it.’

I thought it was a dumb idea because: (a) I didn’t think we’d get much response (this was a science textbook, after all); (b) I suspected every reader would have their own idea (‘a resilient world would have lots of cats…’) and we’d just get a long list of pet thoughts with no emergent consistency; and (c) what was the point? How would we provide feedback to readers? This was a book, after all, not a monthly magazine.

The tenth (and eleventh) attribute

As it turned out, I was wrong on all counts (hats off to you, Brian).

We received many hundreds of suggestions; most of them thoughtful, well considered and articulate.

And, while there was an enormous variety in the ideas being put forward (and no suggestion that cats would make for a more resilient world), there were clearly four themes constantly coming to the fore: democratization, fairness, learning and humility.

And, while we hadn’t planned on a follow-up book back when Resilience Thinking came out, it became apparent a few years later that people wanted more information on how resilience thinking can be implemented. Consequently, we wrote Resilience Practice, and included a discussion on the feedback we had received from readers of Resilience Thinking at the end.

We added fairness and humility to our list of nine. In truth, we felt that the themes of democratization, fairness, learning and humility were all implicit, to varying degrees, in our original list of nine attributes. Our readers, however, obviously felt that equity and humility needed to be acknowledged explicitly; so we did. Here are the two attributes we added to round off our list:

10. Enshrine fairness & equity: A (desirable) resilient world would acknowledge notions of equality among people, encourage democratization so that everyone has a say and a sense of agency, and promote the notion and practice of ‘fair trade’. These attributes would encourage diversity, innovation, collaboration and effective feedbacks while promoting higher levels of social capital.

11. Exercise humility: A resilient world would acknowledge our dependence on the ecosystems that support us, allow us to appreciate the limits of our mastery, accept we have much to learn, and ensure our people are well educated about resilience and our interconnection with the biosphere.

No panacea

Even if we adopted these 11 attributes as goals (even if we achieved them), there’s no guarantee that we would sidestep the looming shocks and changes currently facing our planet. However, a resilient world will be better placed, come what may.

Which brings me to the end of this series of (relatively) ad hoc reflections on resilience thinking, what it is and why it’s worth knowing about. I’m not suggesting it will save the world; but I am certain it will provide new insights on the nature of the challenges facing us and why the complexity of the world makes these challenges so wicked. And, indeed, if we as a society are not prepared to acknowledge the complexity that lies at the heart of the challenge of sustainability, there is little hope of us meeting that challenge.

If you enjoyed this blog and would like to read any of my earlier pieces on resilience thinking, here’s a list of topics with links:

Why can’t we fix this? Because it’s complex
Introducing the notion that ‘complexity’ lies at the heart of our big challenges

Solving sustainability – It’s complicated AND complex. Do you know the difference?
‘Complexity 101’, complex is different to being complicated but most people mix them up

Thinking resilience – navigating a complex world
Ideas about resilience come from many areas; most of them are about working with complexity

The myth of the optimal state: adaptive cycles and the birth of resilience thinking
Buzz Holling and collapsing spruce forests. More control just made it worse

The perils of command and control and the pathology of Natural Resource Management
How the belief in mastery, the blind application of efficiency and vested interests lead to a decline in a system’s resilience

On identity, complexity and a ‘little’ fossil fuel project off the West Australian coast
The identity of a system drives decision making above and beyond rationality

Death of the Queen, identity and a sustainable world
Thinking of ‘the Crown’ as a complex adaptive system (RIP Queen Elizabeth II)

Losing it – the consequences of stepping over the threshold
When a system crosses a threshold, it loses its identity

To be or not to be? It’s really a question about whether we adapt or transform
Adaptation and transformation, two important concepts in resilience thinking that most people use interchangeably without much thought

Resilience – the good, the bad and the ugly
Resilience thinking is almost always inspirational, but it’s also ambiguous and politicians love hiding behind it

Banner image: Maybe if Moses had shown a little more humility, the 10 Commandments might have been a tad more resilient. (Image by Jeff Jacobs from Pixabay)

Resilience – the good, the bad and the ugly

By David Salt

Some 17 years ago a former boss of mine, Dr Brian Walker, approached me to write a book with him that made the science of resilience more understandable and engaging. That text, Resilience Thinking (Island Press, 2006), would become one of the first widely read, popular science books on the subject of ecological resilience. In my humble opinion, Resilience Thinking played an important role in bringing the ideas around resilience into the mainstream, raising the very notion of ‘resilience’ to the status of being a buzzword (with all the good and bad that comes with this).

At the time that Brian approached me I knew little about resilience science (and I was also ignorant of Brian being one of the world’s leading researchers in this field; I knew him as the ridiculously overworked Chief of CSIRO Wildlife and Ecology, where I had been his Communications Manager). When he asked me to co-author a book with him, a few years after this, I was an ageing freelance science writer who was growing increasingly cynical about ‘science being the answer’ to the world’s growing sustainability challenge.

When Brian began explaining what resilience thinking was my first thought was ‘it’s just another bit of tricky science that will supposedly boost efficiency and save us all, while we dig the planet into an even deeper hole’, just like many of the other ‘breakthroughs’ I had covered and promoted over the years.

Well, I was quite wrong about this. By the time I had finished writing Resilience Thinking, I looked at the world anew. Things that had befuddled me in the past, now made sense. I looked at the world with different eyes and became a proselytiser for the cause. It really was an epiphany; so much so that I would go on to write a second book with Brian (Resilience Practice, Island Press 2012), numerous articles on the theme, and lecture on the topic.

Though, I should be careful using the word ‘proselytise’ because resilience thinking has been criticised by some as looking like a religion (it’s not, it’s an ever-developing science with all the peer review and validation that comes with that) with adherents that sometimes come across as acolytes (God, I hope I don’t sound like one). Because, for all the value and insight that comes with resilience thinking, it has collected some unfortunate baggage along the way. But let’s begin with the positives.

The good

When I finished writing Resilience Thinking I suddenly realised that while we used the word ‘resilience’, the book was actually a guidebook to ‘complexity’ (and complex adaptive systems). Though I had written about complexity in the past, this was the first time the concepts wrapped up under the cloak of complexity came together and made sense.

The world is a complex system operating at multiple linked scales. I am a complex system, so is my family, my region, my country; all these systems are linked in lesser or greater ways; all are constrained by their histories to some extent; will change over time (adaptive cycles); are capable of self-organisation in the face of disturbance and have the capacity to sustain their identities (resilience), but only up to a certain point (thresholds) beyond which they take on new ways of being.

When these insights are applied to the world around me, I realise, in a very fundamental way, that my big problem with the world is that I always expect things (events, people, history) to be rational (that people always act rationally, for example) when in fact they are complex (and often irrational). Rationality is just our way of simplifying things, of dealing with the uncertainty that goes hand in hand with complexity. It’s a great approach in the short term but brings wretched results in the longer term as the complex systems around us self-organise around our efforts to optimise, simplify and hold things steady.

Attempting to explain ‘resilience thinking’ to others gave me a framework that explained for me why optimisation is such a flawed model (maybe ‘partial’ or ‘incomplete’ are better adjectives here) to move us towards ‘being sustainable’; why ‘efficiency’, while being important, is never the answer to long term sustainability; why ‘stability’ is a myth and attempts to hold things steady actually reduces a system’s resilience. These and many other epiphanies became clear as I applied the insights from resilience thinking to systems around me.

So, I would say that, for me, ‘resilience thinking’ is jam packed with inspiration and insight about the world around me. Possibly more important, however, I am aware of many readers of Resilience Thinking who came away with their own epiphanies about their own systems of interest.

When people begin considering the complexity of their own system(s) (looking for thresholds, seeing adaptive cycles, reflecting on where their sources of resilience might lie) they too begin to see the world in a different way, and are excited by the insights that pop up.

The bad

Unfortunately, descriptions of ‘complexity’ become very complicated all too quickly (though please don’t confuse these two terms). It takes time (and some patience) to absorb ‘resilience thinking’. And, like complexity itself, resilience thinking is not a linear process in which you read ‘the formula’, press a button and the answer is delivered (‘the answer to your question on resilience is 43’).

Resilience thinking is more of a culture in which stakeholders in a system investigate their system, assess different facets of its resilience (where are the thresholds, what is its space of safe operation; where does it sit in the adaptive cycle, what are the system’s levels of adaptability and transformability, and so on), decide on a course of action, monitor and adapt around that decision; and then iteratively go through that process (compile, assess, act) again and again; learning, adapting, experimenting and transforming as they go.

That’s all well and good, and it’s what we should all be doing all the time, but managers, decision makers and policy people need simpler, linear processes to inform their actions and decisions. Resilience thinking is sometimes seen as ‘nice (if time and resources are unlimited) but unhelpful (in the real world)’ when it comes to getting on with things.

Also, many of the insights emerging from the application of resilience thinking are quite dependent on a particular context and may not hold in a different context. On top of this (and maybe because of this), one person’s insights often vary from the insights another person finds when applying resilience thinking.

Some people have accused resilience thinking of being somewhat vague. Others have even suggested that this is deliberate and even important when it comes to framing complexity. One philosopher asked: “Does resilience exhibit conceptual vagueness, and, if so, is that beneficial? Can looseness in concepts and meanings lend itself to shedding light on unsolved problems? While resilience research has established that redundancy is an asset for complex adaptive systems, does a similar finding also hold for conceptual frameworks?”

All of which is to say that while resilience thinking can be inspirational, it can also be problematic in its implementation.

And the ugly

While acknowledging this, I do believe it’s an important first step in re-evaluating our failing approach to sustainability (an approach largely based on simplistic linear thinking, technology and efficiency).

I’m happy to acknowledge the good with the bad. Where I get extremely frustrated, however, is where political leaders and corporate spin masters see ‘resilience’ as an opportunity to claim action while actually doing nothing (or continuing with their environmentally damaging activity).

Our last national conservative government claimed they were building a ‘resilient’ Great Barrier Reef while subsidising and expanding the country’s fossil fuel sector. (Our new national government appears to be doing much the same.) Carbon emissions are killing our Reef.

In a similar vein, one of Australia’s biggest companies, BHP, co-funded the Australian Coral Reef Resilience initiative to protect the ‘resilience’ of the reef while continuing to expand their fossil fuel pollution.

When climate-charged wildfires tore apart coastal communities along the New South Wales (NSW) coastline in 2019/20 (our Black Summer), the NSW Government created a new overarching recovery agency called Resilience NSW (because who can have enough resilience, and the NSW government is there to provide it). Two years later, instead of fires, unprecedented floods devastated NSW coastal communities. A government enquiry found that Resilience NSW (an agency that hadn’t even found its feet) had failed and should be abolished.

These are just a few local examples where the ideas of resilience are inappropriately used (and abused). This happens everywhere. The problem here is that resilience is complex, most people don’t have the opportunity to have a deep engagement with it, and politicians are quick to exploit that ignorance; in the first place to hide behind it, in the next to use it as the scapegoat.

In a rational linear world, they wouldn’t get away with this. But, of course, we don’t live in a rational world, do we?

Banner image: Resilience thinking is about people, landscapes, society, ecosystems and complexity. Depending on how it is applied it can be good, bad or ugly. (Image by David Salt)

Losing it – the consequences of stepping over the threshold

By David Salt

In Australia, we called the horrible summer of 2019/20 the Black Summer. Unprecedented heat waves and drought led to the biggest, most ferocious, most extensive wildfires this nation had ever known.

I wonder what the world will call 2022? Once again that word ‘unprecedented’ gets rolled out to describe a series of heatwaves, extreme storms, massive floods and record-breaking droughts. This year these events were happening all over the world (and especially across Europe, Asia and America during the northern hemisphere summer). Will it be the ‘Angry Summer’ or the ‘Season of our Great Discontent’ or maybe just the year of ‘Climate Breakdown’? (At what point do we know it’s broken?)

Or maybe the climate disruption will just continue and even grow worse, as many climate scientists are predicting, and 2022 will be wilfully forgotten as we struggle to deal with each new emerging weather crisis.

The idea of normal

When describing abnormal events, unprecedented episodes or historic happenings, you need to have some idea about what ‘normal’ actually means. In some cases this is relatively straightforward.

We have temperature records, for example, that go back for at least a century so it’s easy to define ‘normal’ with statistical precision. Our temperature has ranged between X and Y, and there is a different average max and minimum value for each month of the year. This August was particularly hot for many regions in the northern hemisphere, so when you hear on the news that temperatures broke records, or were above average, you can appreciate just what is meant.

The more variables you bring in (precipitation, wind speed, humidity, wild storms etc), the harder it is to characterise what is normal. Of course, these variables are what add up to weather, and long-term average weather is what we call climate.

If the weather gets ugly, we normally console ourselves that we just need to survive this rough patch and at some point the weather will ‘return to normal’ – the rains will replenish the dams after the drought or calm will follow the big storm.

‘Return to normal’ is a form of equilibrium thinking. Your world gets rocked by some disturbance, your equilibrium is thrown out, but you do everything you can to bounce back, to return to normal.

Of course, I’m talking about the notion of resilience – the capacity to cope with disturbance and bounce back (the word ‘resilience’ derives from the Latin ‘resilire’, meaning ‘to jump back’ or ‘to recoil’).

What’s normal for a complex system

‘Resilience thinking’ is all about how this idea of ‘recovery’ applies to complex adaptive systems. Complex systems have the capacity to self-organise. Resilience is the amount of disturbance a system can absorb and still retain its identity, still continue to function in much the same kind of way.

In recent blogs I’ve attempted to explain what complexity means, how complex systems change over time, and how they go through a pattern known as an adaptive cycle. The concept of adaptive cycles is one important building block of resilience thinking; the other is the idea of ‘thresholds’.

There are limits to how much a complex system can be changed and still recover. Beyond those limits the system functions differently because some critical feedback process has changed. These limits are known as thresholds.

When a complex system crosses a threshold it is said to have crossed into another ‘regime’ of the system (also called a ‘stability domain’ or ‘basin of attraction’). It now behaves in a different way – it has a different identity (or you might say it has lost its original identity).

In coral reefs there’s a threshold on the variable of the level of nutrients in the surrounding water. If nutrients become too high, the coral will be displaced by algae. The coral reef identity is lost, replaced by the identity of an algal reef.

On many rangelands there is a threshold on the amount of grass present. If the grass level falls below a certain level (because of too many grazing animals or a prolonged drought), shrubs begin to take over. The grassy rangeland identity is lost, replaced by a shrubland.

Sometimes it’s easy to cross back over to the identity you want, sometimes it’s difficult and sometimes it’s impossible.
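The idea of a threshold separating two ‘regimes’ can be illustrated with a toy model. The sketch below is not drawn from the book; it uses a deliberately simple, hypothetical bistable equation (stable states at 0 and 1, with an unstable threshold at 0.3, standing in loosely for ‘shrubland’ and ‘grassy rangeland’) to show how two systems starting either side of the threshold drift to entirely different identities.

```python
# Toy bistable system: dx/dt = -x (x - a) (x - 1)
# Stable equilibria at x = 0 and x = 1; unstable threshold at x = a.
# (A hypothetical illustration, not a calibrated rangeland model.)

def simulate(x0, a=0.3, dt=0.1, steps=2000):
    """Euler-integrate the toy dynamics from initial state x0."""
    x = x0
    for _ in range(steps):
        x += dt * (-x * (x - a) * (x - 1.0))
    return x

# Two systems that start almost identically...
recovers = simulate(0.31)   # just above the threshold
collapses = simulate(0.29)  # just below the threshold

print(round(recovers, 2), round(collapses, 2))  # → 1.0 0.0
```

A 0.02 difference in starting condition ends in completely different basins of attraction: the point of the thresholds idea is that near the threshold, small disturbances decide which identity the system keeps.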

Tipping points

In a recent blog I discussed how fossil fuel corporations are complex systems. The identity of this system is heavily influenced by quarterly profit statements; more so than any concern they might hold for longer term climate disruption. The levels of the profits in those quarterly statements likely have a threshold point, below which the fossil fuel corporation will likely change its business (eg, take on the identity of a renewables company, maybe) or shut down. Either way, crossing this threshold leads to a change of identity in this system. (Of course, what might put downward pressure on their profits is stronger government regulation or broader community rejection of the cost being imposed on society by the fossil fuel company.)

In my last blog I also said you could view the British Monarchy as being a complex system. Its identity hinges on public acceptance and support over time, something the late Queen Elizabeth II understood and worked with like a pro. Again, it’s likely a threshold point exists on this variable of public support, below which the Monarchy becomes vulnerable. QEII represented integrity, authenticity, stability and certainty. She had very high levels of social approval (social capital) that ensured that the system of the Monarchy had resilience, even to the disturbance of her own death, and the Crown passed seamlessly to her son, now King Charles III. But imagine what might have happened if the Queen didn’t have that level of social capital. Or what happens if King Charles squanders that social capital? Smooth successions aren’t always the rule in the UK (or elsewhere), and many countries don’t need Queens (or Kings) to function.

Thresholds occur in many complex systems; however, they are often described as ‘tipping points’ when they occur in the social domain. In addition to the two examples I just discussed (profit levels and levels of public approval), tipping points might manifest as changes in fashion, voting patterns, riot behaviour, or markets.

Defining a safe operating space

So here is a useful way of defining a system. Every system can be described in a variety of ways using a number of variables. The identity of the system can be characterised by an average range of those values. While kept within that range, the system will behave as you expect, be it a business, a monarchy, a coral reef or a rangeland. However, when the system passes a certain level on one of a number of key variables (eg, profit, popularity, nutrients, grass cover) – a threshold or tipping point – the system changes its identity and begins to behave differently (often in strange or undesirable ways).

Or, in other words, you can understand a system’s identity by knowing how much change it can take before that identity is lost, replaced by a different identity.

Not only are thresholds critical to understanding the behaviour of complex systems, they are the basic limits to whatever enterprise you’re responsible for or have an interest in. To use the phrase in a prominent analysis of global-scale thresholds (Rockstrom et al 2009), thresholds define the safe operating space of your system.
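One way to make this framing concrete: describe a system as a handful of key variables, each with a range within which its identity holds, and flag any variable that has strayed outside it. The sketch below is a generic, hypothetical illustration of that idea; the variable names and ranges are invented for the example (echoing the profit, nutrient and grass-cover examples above), not taken from Rockstrom et al.

```python
# Hypothetical safe-operating-space check: each key variable has a range
# within which the system retains its identity. Names and ranges are
# invented for illustration, not measured thresholds.

SAFE_RANGES = {
    "grass_cover_pct": (40, 100),   # below ~40%, shrubs start to take over
    "nutrient_load":   (0, 30),     # above ~30 units, algae displace coral
    "profit_margin":   (5, 100),    # below ~5%, the business changes identity
}

def breached_thresholds(state):
    """Return the variables whose values fall outside their safe range."""
    return [name for name, (lo, hi) in SAFE_RANGES.items()
            if name in state and not (lo <= state[name] <= hi)]

current = {"grass_cover_pct": 35, "nutrient_load": 12, "profit_margin": 8}
print(breached_thresholds(current))  # → ['grass_cover_pct']
```

The check itself is trivial; the hard (and context-dependent) work that resilience practice is concerned with is identifying which variables matter and where their thresholds actually sit.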

And how are we going in keeping our society in a safe operating space? Well, considering our experiences with the Black Summer of 2019/20 or the Angry Summer of 2022, not so well.

Climate and weather systems are complex systems too. Their current behaviour suggests they have been pushed over critical thresholds and their emerging identity is something quite new, quite destructive and terrible. Allowing the Earth system to cross these thresholds comes with an enormous cost to society, and will sorely test our own resilience as we cruise into an increasingly uncertain future.

Banner image: How much disturbance can your ‘system’ take before it loses its identity? It’s not just the intensity of a specific event (a single hurricane for example) that’s important, it’s also the frequency of such disturbances. The Great Barrier Reef can survive mass bleaching events if they only occur once every 20 years but it loses its ‘identity’ if they occur every few years (which is now what’s happening). (Image by David Mark from Pixabay)

The perils of command and control and the pathology of Natural Resource Management

By David Salt

As a younger man I honestly believed that sustainability was a tractable problem; a difficult challenge no doubt but one that was solvable with hard work coupled with science and technology. And, as a confident young thing, I thought I could contribute to this outcome by serving in the area of science communication and education; get more talented young people into science, and increase community acceptance of emerging technological solutions so they can be effectively implemented.

How might science and technology save us? By providing us with insights on the many problems being faced by humanity and the environment, and by helping humanity lighten its footprint on Planet Earth. Well, science has definitely provided ample insights on the plight of our planet, and technology has given us so many ways to be more efficient in how we do things.

For all that, however, we are moving away from being sustainable; indeed, we seem to be accelerating away from it. In the last half century, humanity has pushed the Earth system over several planetary boundaries, unleashed a sixth extinction event, and seems unable as a global community to do anything about greenhouse gas emissions which are remorselessly on the increase (as a by-product of our addiction to economic growth).

Science and technology have underpinned so much of our wealth creation and economic activity, and many techno-boosters are fervent in their belief that science and technology are the solution to the many problems facing our environment (indeed, I heard Australia’s Chief Scientist say this exact thing on the radio this morning, as I write).

As I grew older and watched the natural world decline around me (on a number of scales; think of weed infestation in your local bush reserve, glacial retreat or the bleaching of the Great Barrier Reef), my enthusiasm for (and faith in) science and technology also declined. I could see the potential of all these new discoveries (think renewable energy, nanotech and biotech as examples) but could never see where the outcomes were creating a more sustainable future. For example, for every 10% improvement in efficiency in process X, we seemed to see a 100% increase in people using that process, resulting in more waste, more consumption and more damage (albeit less impact per capita; see the Rebound Effect for a discussion of this).

The dangers of partial solutions

It’s not that I’m anti science and technology, and I do believe increasing efficiency is important. However, by themselves they are not enough.

Then I was asked to write a couple of books on resilience science (Resilience Thinking and Resilience Practice) and my doubts on the belief that ‘science and technology is the solution’ crystallised into a new way of looking at the world. The experience of writing about resilience opened my eyes to ideas of complexity, and the capacity of complex systems to absorb disturbance and retain their identity (the definition of resilience). The consequences of these ideas are deep and far reaching. In a range of different ways, I’ve been attempting to articulate them in my stories for Sustainability Bites.

One major consequence of acknowledging the complexity around us is to be aware of the cost of partial solutions sold to us as complete answers. Science and technology (and endlessly increasing efficiency) are not only insufficient to move us to being sustainable; an exclusive reliance on them (and belief in them, think ‘technology not taxes’) will actually reduce resilience in the systems we depend upon and make us more vulnerable to disturbance.

There are many lines of evidence supporting this contention (see Resilience Thinking and Resilience Practice) but in the space I have here I’d like to discuss how natural resource management agencies decline over time. Improving science and technology (and efficiency) is often touted as the solution but only fuels this decline. This discussion is based on a landmark paper by CS Holling (one of the founding fathers of resilience thinking) and Gary Meffe, written a quarter of a century ago: Command and Control and the Pathology of Natural Resource Management.

The command-and-control pathology

Holling and Meffe point out that when command and control is applied in natural resource management, the initial phase is nearly always quite successful. Insect pests are reduced by pesticide use; fishing and hunting are enhanced by stocking or predator removal; forest fires are suppressed for years; floods are minimized by levees and dams.

But what follows on these initial successes is rarely acknowledged. The agencies responsible for management shift their attention from the original social or economic purpose towards increasing efficiency and a reduction in costs. (Of course, all agencies/companies do this over time, not just NRM agencies. It’s a pattern well described in the idea of ‘adaptive cycles’ first proposed by Holling.)

NRM agencies search for better and more efficient ways to kill insects, eliminate wolves, rear hatchery fish, detect and extinguish fires, or control flows. Priorities thus shift from research and monitoring (why ‘waste’ money studying and monitoring apparent success?) to internal agency goals of cost efficiency and institutional survival.

Holling and Meffe contend that as this happens, there is a growing isolation of agency personnel from the systems being managed and insensitivity to public signals of concern. They describe this as institutional myopia and increased rigidity (again, something well described by the theory of adaptive cycles).

At the same time, economic activities exploiting the resource benefit from success (of more fish, or water or whatever) and expand in the short term. We see greater capital investment in activities such as agricultural production, pulp mills, suburban development, and fishing and hunting. There’s nothing wrong with this, they say, within limits.

But the result is increasing dependency on continued success in controlling nature while, unknown to most, nature itself is losing resilience and increasing the likelihood of unexpected events and eventual system failure. When natural systems are ‘controlled’ they invariably lose their natural diversity and processes, which leads to a declining ability to absorb disturbance (while maintaining their identity).

With dependency comes denial and demands by economic interests to keep and expand subsidies, and pressure for further command and control.

So, the initial successes of command and control come with costs that are usually never acknowledged. Command and control reduces natural variation and erodes resilience, environmental managers aim for efficiency rather than connection with the system they are managing, and the economic interests that benefit from the original command and control distort the system to maintain it. The composite result is increasingly less resilient and more vulnerable ecosystems, more myopic and rigid institutions, and more dependent and selfish economic interests, all attempting to maintain short-term success.

Holling and Meffe point out that solutions to this pathology cannot come from further command and control (for example, stronger regulations) but must come from innovative approaches involving incentives leading to more resilient ecosystems, more flexible agencies, more self-reliant industries, and a more knowledgeable citizenry.

Back in the ‘real world’, you’ll largely hear our political leaders deny the complexity of this and simply say science and technology will save us. Unfortunately, in a complex world, simple solutions have a habit of only making the situation worse.

Don’t get me wrong, I still love science and technology. However, by themselves, they are not the solution. To contribute to a sustainable world, they need to work with complexity, not subjugate it.

Banner image: Dams are an important piece of human infrastructure offering many valuable short-term benefits by controlling our rivers. In the longer term, they come with a range of often unacknowledged costs. They reduce the natural variability of the river; they encourage human settlement in areas subject to flooding; and they allow food production in areas that normally wouldn’t support agriculture. Over time, the agencies managing the dam become myopic and rigid, the economic sectors depending on the dam become increasingly reliant and selfish, and the river system becomes increasingly vulnerable to disturbances. (Image by David Salt)

The myth of the optimal state: adaptive cycles and the birth of resilience thinking

Featured

By David Salt

Being sustainable is tough. So far, we (as in humanity) are failing at the task miserably. My contention is that a big part of the problem is our inability to deal with the complexity of the systems around us – systems that we are a part of. Rather than acknowledging this complexity, we impose framings on these systems that treat them as simple. (I discussed these ideas in complicated vs complex.)

Command and control

Simple systems can be managed and controlled, and held in an optimal state for as long as needed. Complex systems, on the other hand, self-organise around our efforts to control them. They can’t be held in an optimal state.

The notion of an ‘optimal sustainable yield’ was a widespread idea in natural resource management last century. The belief was that if you knew a little about what drives a natural resource (say, reproductive capacity in fish stocks or forest trees), you could harvest an optimal amount of that resource from the system indefinitely, because the resource would always replace itself. It’s a command-and-control approach that left countless collapsed fisheries and degraded landscapes in its wake.

‘Command and control’ involves controlling aspects of a system to derive an optimized return. The belief is that it’s possible to hold a system in a ‘sustainable optimal state’.

However, it’s not how the world actually works. Yes, we can regulate portions of the system, and in so doing increase the return from that portion over a short time frame, but we can’t do this in isolation of the rest of the system. If we hold some part of the system constant, the system adapts around our changes, and frequently loses resilience in the process (ie, loses the capacity to recover from a disturbance).

While we can hold parts of the system in a certain condition, the broader system is beyond our command. Indeed, no one is in control; this is a key aspect of complex adaptive systems.

Resilience thinking is an alternate approach to working with these systems, an approach that places their complexity front and centre. And the origins of this approach are entwined with an early realisation that a command-and-control approach to harvesting natural systems will always strike problems eventually. (The following example is based on a discussion that appears in the book Resilience Thinking.)

Of budworms and social-ecological systems

Spruce fir forests grow across large areas of North America, from Manitoba to Nova Scotia and into northern New England. They are the base of a highly valuable forestry industry.

Among the forests’ many inhabitants is the spruce budworm, a moth whose larvae eat the new green needles on coniferous trees. Every 40 to 120 years, populations of spruce budworm explode, killing off up to 80% of the spruce firs.

Following World War II, a campaign to control spruce budworm became one of the first huge efforts to regulate a natural resource using pesticide spraying (thanks in part to new technologies emerging from the war).

Initially, the pest control proved a very effective strategy, but like so many efforts in natural resource management that are based on optimizing production, it soon ran into problems.

In a young forest, leaf/needle density is low, and though budworms are eating leaves and growing in numbers, their predators (birds and other insects) are easily able to find them and keep them in check. As the forest matures and leaf density increases the budworms are harder to find and the predators’ search efficiency drops until it eventually passes a threshold where the budworms break free of predator control, and an outbreak occurs.
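
This predator-saturation story was later formalised in a celebrated model of the budworm system by Ludwig, Jones and Holling (1978). The sketch below is a minimal numerical version of its nondimensional form (the parameter values are my own illustrative choices): budworm density grows logistically towards a carrying capacity q set by foliage, while bird predation saturates at high density. With sparse foliage there is a single low stable density; once foliage pushes q past a threshold, a second, high ‘outbreak’ equilibrium appears:

```python
def stable_states(r, q, n_max=20.0, samples=20000):
    """Stable equilibria of the nondimensional budworm model
        dn/dt = r*n*(1 - n/q) - n**2/(1 + n**2)
    (logistic growth minus saturating predation). Nonzero equilibria
    satisfy g(n) = r*(1 - n/q) - n/(1 + n**2) = 0; a root is stable
    where g crosses from positive to negative."""
    g = lambda n: r * (1 - n / q) - n / (1 + n ** 2)
    roots, step, n = [], n_max / samples, n_max / samples
    while n < n_max:
        if g(n) > 0 >= g(n + step):     # downward crossing => stable root
            lo, hi = n, n + step
            for _ in range(60):         # refine by bisection
                mid = (lo + hi) / 2
                lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
            roots.append((lo + hi) / 2)
        n += step
    return roots

young = stable_states(r=0.4, q=5)    # sparse foliage: budworms held in check
mature = stable_states(r=0.4, q=15)  # dense foliage: an outbreak state appears
print(f"young forest: {young}, mature forest: {mature}")
```

Nothing about the low-density state changes gradually as the forest matures; the outbreak state simply appears alongside it once the threshold is crossed, which is why the explosion catches managers by surprise.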

While the moderate spraying regime avoided outbreaks of budworms, it allowed the whole forest (as distinct from individual patches) to mature until all of it was in an outbreak mode. Outbreaks over a much greater area were only held in check by constant spraying (which was expensive and spread the problem).

The early success of this approach increased the industry’s dependence on the spraying program, intensified logging and spawned the growth of more pulp mills.

Now there was a critical mass of tree foliage and budworms. The whole system was primed for a catastrophic explosion in pest numbers. The managers in this system were becoming locked into using ever increasing amounts of pesticide because the industry wouldn’t be able to cope with the shock of a massive pest outbreak. The industry had little resilience, and yet the continued use of chemicals was only making the problem worse. They had created a resource-management pathology.

Adaptive cycles

The industry acknowledged the looming crisis and engaged ecologists (including CS ‘Buzz’ Holling) to see how they might tackle the problem from a systems perspective. In 1973, Holling proposed a new analysis of the dynamics of the fir forests, one based on what he described as ‘adaptive cycles’.

Forest regions exist as a patchwork of various stages of development. The cycle for any one patch begins in the rapid growth phase, when the forest is young. The patch then proceeds through to maturity, and eventually, following some 40 to 120 years of stable and predictable growth (referred to as the ‘conservation phase’), the cycle tips into the release phase. The larvae outstrip the ability of the birds to control them, larvae numbers explode, and the majority of forest trees in that patch are killed. Their rapid demise opens up new opportunities for plants to grow, and during the reorganization phase the forest ecosystem begins to re-establish itself. The cycle then repeats.

With this understanding of the cycle and the key changing variables that drive the system, the forest managers were able to fundamentally modify the manner of their pest control. Rather than continually using low doses of pesticide over wide areas they switched to larger doses applied less frequently at strategic times over smaller areas. They re-established a patchy pattern of forest areas in various stages of growth and development rather than keeping wide areas of forest primed for a pest outbreak.

The forest industry also changed through the process, moving to regional leadership with a greater awareness of the ecological cycles that underpinned the forest’s productivity.

From budworms to resilience thinking

The case study of the spruce budworm and the fir forest is important on many levels as it was in part the genesis of what has become resilience thinking. During his investigations, Holling proposed that the key to sustainability was an ecosystem’s capacity to recover after a disturbance, not the ability to hold it in a notional optimal state.

He also recognized that the ecosystem and the social system had to be viewed together rather than analyzed independently, and that both went through cycles of adaptation to their changing environments. Adaptive cycles don’t just happen in nature; they happen in communities, businesses and nations – it’s a feature of complex adaptive systems.

His proposal catalyzed the thinking of ecologists and researchers (with an interest in systems) all over the world because similar patterns were being identified everywhere social-ecological systems were being studied.

One key insight that grew out of an understanding of adaptive cycles is that bringing about change/reform in a social-ecological system is always difficult. However, windows of opportunity do open when a system goes into a release phase, although the window doesn’t open for long. You need to be prepared to seize the opportunity while it’s there.

A basic lesson I draw from the notion of adaptive cycles is that systems get locked into themselves over time and become rigid. There’s no such thing as a sustainable optimal state because even if the system is managed into a condition deemed desirable, it then progressively loses its capacity to learn, innovate or keep its flexibility (often in the name of efficiency). Efficiency is important but is never the complete answer. Efficiency is not the key to sustainability.

Over the decades since Holling first described adaptive cycles, the models and the thinking associated with managing for resilience have gone through much refinement, but two core ideas remain at its heart: that social-ecological systems constantly move through adaptive cycles over many linked scales, and that they can exist in different stable states. I’ll discuss this second building block in my next blog.

Banner image: Spruce fir forests provide valuable timber. However, efforts to optimise these systems last century with the widespread application of pesticide almost destroyed the industry. Uncovering what was going wrong became the origins of resilience thinking. (Image by Reijo Telaranta from Pixabay.)

Thinking resilience – navigating a complex world

Featured

By David Salt

Our world seems to be coming unstuck at the moment. Climate-fuelled weather extremes – floods, droughts, heatwaves and fires – are crippling large parts of humanity. Many people are grappling for answers: What do we do? Why haven’t we already done something about this? It’s not like we haven’t been warned (repeatedly and in comprehensive detail by our climate scientists and others).

I believe many of our problems lie in our inability to deal with the complexity of the world around us (my last two blogs discussed this very thing – we can’t fix this because it’s complex and complicated vs complex). One way of better appreciating that complexity, and navigating a way through it, lies in the area of resilience thinking.

The word ‘resilience’

The word ‘resilience’ is now common in many vision and mission statements. But ask the people who use these statements what they think it means and you get a range of different answers, most of which relate to how something or someone copes with a shock or a disturbance.

Concepts of resilience are used in all sorts of disciplines, but the idea has four main origins – psycho-social, ecological, disaster relief (and military), and engineering.

Psychologists have long recognised marked differences in the resilience of individuals confronted with traumatic and disastrous circumstances. Considerable research has gone into trying to understand how individuals and societies can gain and lose resilience.

Ecologists have tended to describe resilience in two ways: one focused on the speed of return following a disturbance, the other on whether or not the ‘system’ can recover at all. People engaging with resilience from the perspective of disaster relief or in a military arena incorporate both aspects (ie, speed and ability to recover). Indeed, there is a lot of commonality in the understanding of resilience across the three areas of psychology, ecology and disaster relief.

In engineering the take on resilience is somewhat different. Indeed, engineers more commonly use the term ‘robustness’ with a connotation of designed resilience. It differs from the other three uses in that it assumes that the kinds of disturbances and shocks are known and the system being built is designed to be robust in the face of these shocks.

Resilience thinking

The ‘resilience’ that is being invoked in vision and mission statements relating to Australia’s environment is largely based on the idea of ecological resilience, and it’s all about the ability to recover.

The science underpinning our understanding of ecological resilience is often referred to as resilience thinking. The definition of resilience here is: the capacity of a system to absorb disturbance and reorganize so as to retain essentially the same function, structure and feedbacks – to have the same identity. Put more simply, resilience is the ability to cope with shocks and keep functioning in much the same kind of way. 

A key word in this definition is ‘identity’. It emerged independently in ecological and psycho-social studies, and it is both important and useful because it imparts the idea that a person, a society, an ecosystem or a social-ecological system can all exhibit quite a lot of variation, be subjected to disturbance and cope, without changing their ‘identity’ – without becoming something else.

The essence of resilience thinking is that the systems we are dealing with are complex adaptive systems. These systems have the capacity to self-organise around change but there are limits to a system’s self-organising capacity. Push a system too much and it changes its identity; it is said to have crossed a threshold.
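
Resilience Thinking illustrates this with the image of a ball rolling in a basin. A minimal numerical version of that picture (my own sketch, with made-up dynamics) is a system with two stable states at x = +1 and x = -1 separated by a threshold at x = 0: a small disturbance is absorbed and the system returns to its old identity, while a push past the threshold flips it into the other basin – a new identity:

```python
def settle(x, rate=0.1, steps=2000):
    """Relax the toy system dx/dt = x - x**3 to equilibrium with Euler steps.
    It has stable states at +1 and -1 and an unstable threshold at 0."""
    for _ in range(steps):
        x += rate * (x - x ** 3)
    return x

current_state = 1.0                   # the system's present stable state
small = settle(current_state - 0.6)   # shock to 0.4: still above the threshold
big = settle(current_state - 1.4)     # shock to -0.4: pushed past the threshold
print(f"after a small shock: {small:.2f}, after a big shock: {big:.2f}")
```

The two shocks differ only modestly in size, but they land on opposite sides of the threshold, so the system’s fates are qualitatively different.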

The systems around us that we depend on (and are embedded in) are linked systems with social, economic and bio-physical domains, operating over multiple scales. To understand what enables these complex systems to retain their identity, what keeps them resilient, we need to appreciate the linkages between these domains and scales. We also need to understand how the system is behaving within each domain and scale, because over time these components go through their own cycles (known as adaptive cycles) in which the capacity for change (and the ability to hang on to their identity) shifts.

Many ideas, many insights

Resilience thinking involves all these ideas. It is the capacity to envisage your system as a self-organising system with thresholds, linked domains and cycles.

Each of these ideas takes a bit of explaining, something I’ll attempt in upcoming blogs (for a good guide, see Resilience Practice*). However, when you begin engaging with ideas relating to a system’s resilience, you begin to appreciate the world in a different way.

Some of those insights, for me, have been that no-one is in control, and that you can’t understand a system by separately understanding the components that make it up – complex systems have emergent properties (for example, the whole is greater than the sum of the parts).

We also need to appreciate that the narrower concept of ‘efficiency’ – ie, holding a part of our system in a state that delivers optimal returns (eg, food or fibre) without considering interactions with other domains or scales – leads to a loss of resilience, making it less likely that these systems will continue to deliver into the future. Efficiency is important but, by itself, it is not the solution to the challenge of sustainability.

We live in a complex world facing enormous challenges. Too much of our efforts so far have been directed to command-and-control approaches, techno solutions and improving efficiency. If the problems we were dealing with were simple and tractable, such approaches would work well. Unfortunately, our current approaches to sustainability are not working at all, and the problem is growing significantly.

Could it be we’re trying to solve the wrong problem? We’re managing a complex world as if it were a simple system.

*Walker B & D Salt (2012). Resilience Practice: Building Capacity to Absorb Disturbance and Maintain Function. Island Press. Washington.

Banner image: Forests begin their recovery after Australia’s Black Summer of 2019/2020. (Image by David Salt)

Solving sustainability – It’s complicated AND complex. Do you know the difference?

Featured

What is it about the challenge of climate change that makes it so difficult to solve?

Clearly, it’s a complicated problem involving many interacting components. These interacting parts include the Earth system (and its billions of components), people (you and me), states and countries; organisations and institutions; unknowns; tradeoffs; winners and losers. We’ve spent decades of effort addressing this issue – including billions of dollars on research – and yet the problem of mounting levels of carbon emissions and accelerating environmental decline only seems to get worse. (Have you seen what’s happening in the northern hemisphere at the moment? And it’s only spring!)

Clearly, climate change is a big and complicated problem. But it seems to me, having watched us deal with this challenge (and fail) over many years, that what we’re not acknowledging is that it’s also a complex problem, and we’re not dealing with this complexity very well.

‘Complicated’ and ‘complex’ are words often used interchangeably but they are fundamentally different ideas. Do you know the difference? I’ll confess that for most of my life I didn’t.

So, what is complexity?

Complex systems scientists have been attempting to pin down what complexity is for decades. To me, most of their definitions are highly technical and only understandable by other complex systems scientists.

Here’s one commonly used definition, set out by the eminent ecologist Simon Levin in 1998, that encapsulates many of the ideas floating around complexity. It’s relatively short and sets out three criteria for defining a complex adaptive system. Complex adaptive systems have:

-components that are independent and interacting;

-some selection process at work on those components (and on the results of their local interactions); and

-variation and novelty constantly being added to the system (through components changing over time or new ones coming in).

Sounds straightforward, but what does it mean and why is it important? Here’s how I attempted to explain it in the book Resilience Thinking*.

Cogworld vs Bugworld

Consider these two situations: Cogworld and Bugworld.

Everything in Cogworld is made of interconnected cogs; big cogs are driven by smaller cogs that are in turn driven by tiny cogs. The size and behavior of the cogs doesn’t change over time, and if you were to change the speed of the cogs of any size there is a proportionate change in speed of other connected cogs.

Because this system consists of many connected parts some would describe it as being complicated. Indeed it is, but because the components never change and the manner in which the system responds to the external environment is linear and predictable, it is not complex. Really, it is just a more complicated version of a simple system, like a bicycle with multiple gears.

Bugworld is quite different. It’s populated by lots of bugs. The bugs interact with each other and the overall performance of Bugworld depends on these interactions. But some sub-groups of bugs are only loosely connected to other sub-groups of bugs. Bugs can make and break connections with other bugs, and unlike the cogs in Cogworld, the bugs reproduce and each generation of bugs comes with subtle variations in size or differences in behavior. Because there is lots of variation, different bugs or subgroups of bugs respond in different ways as conditions change. As the world changes, some of the subgroups of bugs ‘perform’ better than other subgroups, and the whole system is modified over time. This system is self-organizing.

Unlike Cogworld, Bugworld is not a simple system but a complex adaptive system in which it’s impossible to predict the emergent behavior of the system by understanding separately its component subgroups. It meets the three criteria outlined by Levin: it has components that are independent and interacting; there is some selection process at work on those components; and variation and novelty are constantly being added to the system.
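
Levin’s three criteria can be made concrete in a few lines of code. The toy below (my own sketch, not a model from the book) is a bare-bones Bugworld: each bug carries a heritable trait, bugs compete for a fixed number of places in the next generation (interaction and selection, with success depending on how well the trait matches the current environment), and reproduction adds mutations (variation and novelty). When the environment shifts, the population reorganises around the change with no controller anywhere in the system:

```python
import math
import random

rng = random.Random(42)  # seeded so the run is repeatable

def generation(bugs, env, mutation=0.25):
    """One Bugworld generation. Selection: bugs whose trait matches the
    environment leave more offspring. Interaction: all bugs compete for the
    same fixed number of places. Variation: every offspring mutates a little."""
    fitness = [math.exp(-(b - env) ** 2) for b in bugs]
    parents = rng.choices(bugs, weights=fitness, k=len(bugs))
    return [p + rng.gauss(0, mutation) for p in parents]

bugs = [0.0] * 200                  # a population adapted to environment = 0
for t in range(150):
    env = 0.0 if t < 50 else 3.0    # the world changes at t = 50
    bugs = generation(bugs, env)

mean_trait = sum(bugs) / len(bugs)
print(f"mean trait after the environmental shift: {mean_trait:.2f}")
```

No individual bug knows the environment has changed and no-one steers the population, yet the system as a whole tracks the new conditions – exactly the kind of emergent, self-organising behaviour that Cogworld cannot produce.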

Complicated vs Complex

In Cogworld there is a direct effect of a change in one cog, but it doesn’t lead to secondary feedbacks. The cogs that make up Cogworld interact but they are not independent, and the system can’t adapt to a changing world. Cogworld might function very ‘efficiently’ over one or even a range of ‘settings’ but it can only respond to change in one way – that is working all together. If the external conditions change so that Cogworld no longer works very well – the relative speeds of the big and little cogs don’t suit its new environment – there’s nothing Cogworld can do.

In Bugworld the system adapts as the world changes. There are secondary feedbacks – secondary effects of an initial direct change. The bugs of Bugworld are independent of each other though they do interact (strongly – though not all bugs interact with all other bugs).

In our Bugworld, if we attempt to manage a few of the subgroups – eg, hold them in some constant state to ‘optimise’ their performance – we need to be mindful that this will cause the surrounding subgroups to adapt around the intervention, possibly changing the performance of the whole system.

Ecosystems, economies, organisms and even our brains are all complex adaptive systems. We often manage parts of them as if they were simple systems (as if they were component cogs from Cogworld) when in fact the greater system will change in response to our management, often producing a raft of secondary feedback effects that sometimes bring with them unwelcome surprises.

The real world is a complex adaptive system. It is more like Bugworld than Cogworld and yet it seems most of our management, policy and leadership is based on a Cogworld metaphor.

The consequences of complexity

Complex adaptive systems are self-organizing systems with emergent properties. No-one is in control, and there is no optimal sustainable state that they can be held in. These are just two of the consequences that fall out when you begin to appreciate what complexity is all about, and they are pretty important consequences if you reflect on them.

Our political leaders will tell you they are in control, and that they have a plan, a simple solution that solves the problem of climate change without anyone having to change the way they do things. This is the message that Australians have been hearing for the past decade from our (recently defeated) conservative government. But we grew skeptical of these claims as we saw our coral reefs bleach and our forest biomes burn.

Why is climate change so difficult to solve? Yes, it’s complicated with many interacting components. However, more importantly, it’s complex and complexity is something humans don’t deal with well (let alone understand).

As one piece of evidence on this, consider how we think about thinking. What’s the image that immediately comes to your mind? For most people it’s a set of mechanistic cogs encased in a head (like in our banner image this week). If you thought my ‘Cogworld’ was fanciful, how many times have you seen this representation of human thinking as mechanistic clockwork without questioning it? What you’re seeing is a representation of a complex system (you thinking) as a non-complex simple system (a set of cogs). The ‘cogmind’ is a fundamentally disabling metaphor.

And if you scale this up to the systems around us, how many times have you accepted that someone is in control, and that the answer is in just making the world a bit more efficient, a bit more optimal? How is that going for us at the moment?

Different priorities

If, however, we are living in a complex world, then maybe we should stop looking for the illusory optimal solution and start dealing with the complexity in which we are all embedded. How is that done?

One set of ideas I have found helpful lies in resilience thinking. Rather than prioritising efficiency, command-and-control, reductionism and optimisation, resilience thinking encourages reflection, humility and co-operation, aspects on which I’ll expand in my next blog on complexity.

*Two decades ago I was asked by a group called the Resilience Alliance to write a book on resilience science. That book, co-authored with Brian Walker, one of the world’s leading authorities on resilience science, became the text Resilience Thinking. As I learnt about resilience science I discovered that it was all about dealing with complexity, an insight that transformed the way I understood the world.

Banner image: If you thought my ‘Cogworld’ was fanciful, how many times have you seen this representation of human thinking as mechanistic clockwork without questioning it? (Image by Pete Linforth from Pixabay)