Uncertain about Risk and Uncertainty
Here’s an interesting – nay, potentially iconic – figure from a paper: Meinshausen et al, “Greenhouse-gas emission targets for limiting global warming to 2C”, Nature vol. 458, p. 1158, 30 April 2009:
It represents the results of a heroic assemblage of climate modelling data. The horizontal axis gives the emissions from 2000-49 (belying the paper’s title, incidentally) in GtCO2 for each set of data plotted, and the vertical axis – careful wording alert – the probability of the given model output set predicting a global mean temperature increase of 2C or greater over pre-industrial levels before 2100. The dots and swathe of colour are the outputs of all this modelling. The solid black line is some kind of best fit (I know not how determined) or “illustrative default”; the dotted line, incidentally, is the outcome based on a set of models including carbon-cycle feedbacks, which implies less likelihood (carefully chosen word) of hitting 2C for given fossil fuel and land-use change emissions.
The good news is that if this modelling exercise represented a set of real-world possibilities, we could, for example, emit around 1500GtCO2 and still have a 50% chance of avoiding “dangerous climate change”, defined as the 2C temperature increase. The bad news is that the grey area at the bottom left of the figure represents the 234GtCO2 of actual emissions from 2000-06 (7 rather than 6 years, I believe, though the paper is scandalously ambiguous on this point) – basically, we have to slow way down.
You could probably write a dissertation about the diagram – for example, why are we discussing scenarios for the future such as the IPCC’s A1FI (top right), which apparently emits more carbon before 2050 than is stored in the world’s fossil fuel reserves (and even more in the second half of the century)? In fact, why are we discussing more than one of the IPCC scenarios, since even the best of them, B1, is unlikely to lead to less than 2C warming? There is surely no longer any need to nuance the basic point that unmitigated emissions will lead to dangerous climate change.
But all I want to cover just now is essentially one word. The word “probability” on the vertical axis.
What we have here are not probabilities in any real-world sense. There will only be one outcome. We will be at one position on the horizontal axis, depending on (in this case) our emissions before 2050 (though different pathways may give different outcomes for the same 2000-49 emissions). The distribution about the illustrative default (a vertical slice through the diagram) is an indication of our state of knowledge (as encapsulated by the models used) as to what will happen to the global mean temperature as a result. It is not, in a strict sense, a probability.
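To make that concrete, here’s a minimal sketch (with invented numbers – this is not Meinshausen et al’s actual method) of how such an “exceedance probability” is typically produced:

```python
# Illustrative sketch only: roughly how an "exceedance probability" like the
# one on the vertical axis is derived. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are peak warming figures (in degrees C) from 100 model runs,
# all driven by the same cumulative 2000-49 emissions.
model_warming = rng.normal(loc=2.1, scale=0.4, size=100)

# The plotted "probability" is just the fraction of runs that exceed 2C...
exceedance = np.mean(model_warming > 2.0)
print(f"fraction of runs exceeding 2C: {exceedance:.2f}")

# ...so it summarises the spread of a model ensemble - our state of
# knowledge - not the frequency of any real-world event.
```

The number that comes out is a census of model runs, nothing more.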
Using the term probability in this context is unfortunate. In fact, I might even go so far as to say it is symptomatic of a pathology: the same pathology evident when the idea of carrying out climate “experiments” is discussed. Guys, we are using models, not instances of the real world. If parallel universes do exist, we do not have access to them.
My personal issue with all this is that when I read the word “probability” I assume we’re talking about risk, when in fact the topic is uncertainty. I simply can’t help it.
It is extremely unfortunate that the climate modelling community (and perhaps a wider group) has chosen to use the word “probability” to refer to uncertainty as well as risk, when they could simply have used “likelihood”. This has been extended to probability distribution functions (PDFs), which are often nothing of the sort (I say “often” because, confusingly, the horizontal axis may be a probability function and the vertical axis what I would call a likelihood function). These figures should be renamed likelihood distribution functions, with the added advantage of a less overused TLA: LDF rather than PDF.
Apart from confusing models with the real world, the inappropriate use of the term “probability” has caused another problem: too much weight is being given to quantifiable knowledge (that included in models) and too little to what has not been quantified. The term “likelihood” would instead force people to focus on the right questions. What’s happened is that the climate science community is pretending it can somehow answer epistemological questions – those about the state of knowledge – in a scientific, quantifiable way. Presenting the information in precise terms – “my belief in the actual increase in the global mean temperature we can expect for 1500GtCO2 of emissions is represented by this graph” – doesn’t alter the fact that all we are discussing is the state of our knowledge. And when we let machines produce the graph, we automatically lose anything not input to the calculation.
I thought I’d try to clarify the problem with a simple analogy.
If I toss a coin and call “heads” the probability – risk – of losing is approximately 50%. I can make decisions on this basis.
But I said “approximately”, because there is some uncertainty – the coin may have a bias.
Now, any estimate of uncertainty in this case depends entirely on my state of knowledge. For example, I might have tested the coin – suspecting it might be weighted – before the crucial coin toss and found that of hundreds of attempts 55% came up heads. This (and my knowledge of statistical confidence testing) may then lead me to believe the risk of losing to be not 50% but 45%. But this would be exceptional. For a randomly chosen coin I would normally be prepared to say it’s likely – an expression of certainty – that the risk of losing when calling heads is, for all practical purposes, 50%.
If asked to quantify how sure I am that the risk of losing is 50%, I might say 99.99%, because I believe the vast majority of the coins in circulation are equally likely to come down heads as tails. But this 99.99% is what I would term a likelihood – a judgement of the uncertainty – not a probability.
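For the curious, here’s a rough sketch of what that “statistical confidence testing” might look like – a textbook two-sided binomial test, with invented numbers (400 tosses, 220 heads). It quantifies how surprising a 55% heads rate would be from a fair coin, which is the raw material for the sort of judgement described above:

```python
# A toy sketch of the "statistical confidence testing" mentioned above: an
# exact two-sided binomial test of whether a coin is fair. The numbers
# (400 tosses, 220 heads, i.e. 55%) are invented for illustration.
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n tosses of a coin with bias p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, k = 400, 220  # "hundreds of attempts", 55% of them heads

# Two-sided p-value: total probability of all outcomes at least as
# unlikely, under the fair-coin hypothesis, as the one observed.
observed = binom_pmf(k, n)
p_value = sum(binom_pmf(i, n) for i in range(n + 1) if binom_pmf(i, n) <= observed)
print(f"p-value under the fair-coin hypothesis: {p_value:.3f}")  # roughly 0.05
```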
On the other hand, I may be asked to pick a coin from a bag I’m told contains 50 normal coins and 50 weighted so as to come down heads only 25% of the time. Now whether the coin I pick is true becomes a genuine probability. I can even calculate the overall probability of tossing a head (37.5%). There may still be uncertainty, though – whoever told me 50% of the coins are weighted might have been lying.
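The 37.5% drops straight out of the law of total probability; spelled out as a sketch:

```python
# The bag-of-coins calculation from the paragraph above, spelled out.
p_normal = 0.5           # half the coins in the bag are normal
p_heads_normal = 0.5     # a normal coin comes down heads half the time
p_heads_weighted = 0.25  # a weighted coin comes down heads 25% of the time

# Law of total probability: average the two cases, weighted by how likely
# each kind of coin is to be drawn.
p_heads = p_normal * p_heads_normal + (1 - p_normal) * p_heads_weighted
print(p_heads)  # 0.375 - the 37.5% quoted above
```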
What I hope this analogy conveys is the importance of being clear about what we know and what we don’t know. We should only talk about risk and probabilities within a defined theoretical framework. When making judgements as to the state of our knowledge we should be discussing uncertainty and (I suggest) likelihood.
Let’s turn back to the iconic figure from Meinshausen et al and that vertical axis labelled “probability”.
Are we really talking about probability or likelihood?
Remember where the numbers come from – a series of computer model runs. Here’s a philosophical point: no scientific model is the same as the real world. A simple scientific law such as F=ma or E=mc² may make very good predictions, but it has different characteristics from the real world. And these simple models give the same answer every time (or rather, they are not sensitive to small variations in the initial conditions). When we use complex computer-based models, such as those of climate, we find that quite different answers result from multiple runs with minimally perturbed initial conditions.
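A toy demonstration of that last point – using the Lorenz-63 system, the textbook example of sensitivity to initial conditions, and emphatically not a climate model:

```python
# Toy illustration (not a climate model): the Lorenz-63 system, the classic
# example of sensitivity to initial conditions. Two runs starting a
# millionth apart end up in visibly different states.
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations by one (crude) Euler step."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])  # minimally perturbed initial condition

for _ in range(6000):  # integrate both runs for 30 time units
    a, b = lorenz_step(a), lorenz_step(b)

print("run 1:", np.round(a, 2))
print("run 2:", np.round(b, 2))  # quite different answers
```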
What we don’t know is how much of the variation in model runs reflects a lack of determinism in the real world and how much is a result of the characteristics of the models. Remember, the real world is air, water, sunlight, clouds and so on; the model is numbers representing big chunks of real stuff in a computer. Maybe there is a “butterfly effect”, and the magnitude of global warming in a century’s time will depend on minor unpredictable events that happen today. Somehow I doubt it – weather may be affected by small events, but not climate – though the issue could be discussed long into the night.
What is less disputable, though, is that we don’t know how much of the variation in model predictions of global warming is due to different real world possibilities and how much is due to the characteristics of the models.
By failing to distinguish probability from what I term likelihood, and labelling their vertical axis as “probability”, papers such as Meinshausen et al are perhaps implicitly asserting that what is represented is entirely due to real world variability.
It seems to me far wiser to do the opposite. We should assume that the variability of model outputs represents uncertainty. We are best off considering that if the models were perfect they’d give the same answer every time.
But Meinshausen et al might argue that all they’ve done is use the word “probability” loosely. They might agree with my argument. Now, though, we have another problem: we are giving much more weight to model variability than to other forms of uncertainty. The models only capture some of the known unknowns. There are known real-world phenomena that are not included in the models – poorly understood phenomena such as carbon-cycle feedbacks, for example methane release from tundra. And then there are unknown unknowns.
Here’s my biggest problem – if we are to present some forms of uncertainty in numeric form then surely we are obliged to present all forms of uncertainty in the same way. It is misleading to simply list the things we haven’t taken account of. We have to make judgements about the known unknowns and unknown unknowns and adjust the uncertainty distribution accordingly. It’s not all bad news: we might argue that the models overstate the uncertainty in outcome and narrow the distribution in figures such as that by Meinshausen et al.
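I don’t pretend to know how such an adjustment should actually be made, but purely as an illustration of the mechanics (the pretend model runs and the “judgement_factor” knob are both invented):

```python
# Purely illustrative: one crude way of folding a judgement about
# unquantified uncertainty into a model-derived spread. Everything here -
# the pretend model runs and the "judgement_factor" knob - is invented.
import numpy as np

model_warming = np.random.default_rng(1).normal(2.1, 0.4, 100)  # pretend runs

judgement_factor = 1.5  # >1 widens the spread to allow for known and
                        # unknown unknowns; <1 narrows it, if we judge
                        # the models to overstate the uncertainty

mean = model_warming.mean()
adjusted = mean + judgement_factor * (model_warming - mean)

print(f"raw likelihood of exceeding 2C:      {np.mean(model_warming > 2.0):.2f}")
print(f"adjusted likelihood of exceeding 2C: {np.mean(adjusted > 2.0):.2f}")
```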
Policymakers need a considered view of the state of climate knowledge, not diagrams that present dubious “probabilities” and a set of provisos.
Rereading this post, I realise I should make it clear that I am not wedded to the term “likelihood”. All I am arguing is that an alternative to “probability” should be found when in fact we are discussing not risk, but uncertainty. Another possibility is use of the word “possibility”, e.g. Meinshausen et al could have improved clarity by labelling the vertical axis of their figure “Possibility of exceeding 2C”. The important thing, though, is that terms are defined, e.g. “In this paper both risk and uncertainty are expressed quantitatively. Risks are expressed as ‘probabilities’; uncertainties as ‘possibilities’.”
In my (Bayesian) mind, risk is uncertainty. The broader a probability distribution, the greater the risk/uncertainty.
You may be interested in this post:
http://rankexploits.com/musings/2011/surface-temperatures-cooler-than-multi-model-mean/
It suggests that the wide spread in multi-model output is due more to differences between models, and less to weather noise, than typically assumed.
Hence could it be more *likely* after all, given the recent tracking of global temperatures towards the bottom of the “spaghetti graph”, that the higher-sensitivity models are incorrect?
Chris