Fixing the IPCC (and the Motley CRU): Part 1
Last night’s BBC News at 10 report on the University of East Anglia (UEA) investigation of the affair of the leaked emails made it seem like the climate scientists were on trial.
What a travesty (as someone said recently).
Just as with the financial crisis, where it’s easier to blame the bankers than analyse what’s wrong with the wider system, we have to look beyond the individuals involved. Why did they behave as they did? After all, they weren’t in line for massive bonuses.
Climate change science is indeed in crisis. There are various problems – which I’ll examine in detail in subsequent posts – but by far the most significant is the way findings are being evaluated. Rather than indulge point-scoring by “sceptics” we should simply go back to basics and demand testable predictions. And the “sceptics” should go and make their predictions as well.
Let’s get everyone’s eyes back on the ball.
The first part of the fix is therefore to take the entirely reasonable step of excluding “deniers” from the process – including from the IPCC. The model we should look to is that of “splits” in open source software development projects. But we should also draw lessons from the history of science.
Most scientists – let alone the general public – have only a rudimentary understanding of how science works. At best, they know it’s not just a process of generalising from accumulated facts (known as inductive reasoning in the trade). Rather, hypotheses can be falsified. Indeed, the idea that an experiment or observation can prove a general claim false is one thing that distinguishes science from other forms of knowledge.
But – my favourite word again – in the real world, theories are complex beasts. Sometimes – as in the famous eclipse observations of starlight bending around the sun, which matched Einstein’s prediction and not Newton’s – one crucial test supports one theory over another. Usually, though, theories can be adjusted to explain unexpected data. And there’s no way of knowing, except with hindsight, whether the adjustment was valid or the whole theory should have been thrown out.
It’s therefore possible to have what the philosopher Imre Lakatos called “competing research programmes”. Thomas Kuhn expressed much the same idea when he discussed “paradigms” in his famous work, The Structure of Scientific Revolutions, but I mention Lakatos because he – correctly in my view – allows for more than one theory existing at the same time, whereas Kuhn only conceives of one “paradigm” succeeding another.
Kuhn, in particular, stresses the social side of science. His paradigms are characterised by specific methods, texts and so on (in fact the philosopher Margaret Masterman famously identified 21 distinct senses in which Kuhn used the word “paradigm”!).
In particular, it should be stressed that paradigms (or research programmes) are incommensurable – that is, they rest on concepts that have no meaning outside the paradigm itself.
At present what we are asking climate scientists to do is to work as a team with those who don’t simply disagree with them on some technical point, but also do not share their basic assumptions, their very culture.
The IPCC should give up on forcing mainstream climate scientists to field members of the opposing team in their line-up. Let them decide whose research is worthy of inclusion and whose isn’t. If an article appears in a journal that follows peer-review procedures, that doesn’t in itself prove it has special value. It could still be misleading. Or outright garbage.
There are similarities here with open-source projects, which occasionally split when one group has a different vision from another – leading to two versions of a browser, operating system or peer-to-peer file-sharing engine. Obviously, splitting is undesirable, because it spreads resources more thinly, but it is sometimes unavoidable.
If necessary, the UN should be prepared to fund more than one research programme. If a group of people want to go off and prove that the main driver of climate is cosmic rays, then good luck to them. Let them publish a minority report.
If there are multiple research programmes, let’s see who puts their name to each of them. And let’s see which most impresses young research scientists entering the field.
If one research programme continually makes accurate predictions and the other doesn’t, well, one research programme will wither and die.