The UK’s Cold March 2013 and the Perils of Ensemble Forecasting
Weather forecasting (and climate prediction) is not just about computer power. Deep philosophical ideas also come into play. In particular, problems emanate from the use and communication of the concepts of probability and uncertainty. Often the probability of a specific outcome is quoted when what is really meant is the level of certainty about that outcome in the opinion of the forecaster. Or, more to the point, in the opinion of the forecaster's computer.
I’ll discuss communication problems more generally in a later post, but here I want to suggest the possibility that the outputs of forecasts – specifically ensemble forecasts – are being misinterpreted, and not just poorly communicated.
Anyone wanting an accessible introduction to the issues of forecasting, communicating forecasts and ensemble forecasting in particular, could do a lot worse than view the recent Royal Society debate, Storms, floods and droughts: predicting and reporting adverse weather. It’s entertaining too – there’s a great rant (with which I can’t help agreeing) from an audience member on the way the BBC reports London’s weather.
Ensemble forecasting is when a computer weather (or climate) model is run repeatedly – say 50 times – for the same forecast, but with very slightly different initial conditions (e.g. atmospheric pressure and temperature at particular locations). The idea is to produce a range of forecasts representing the likelihood of possible outcomes. During the Royal Society debate, Tim Palmer suggests that the most famous UK forecast ever – Michael Fish's dismissal, in 1987, of the possibility of a hurricane in Southern England the evening before one occurred – would instead have been presented probabilistically. Had an ensemble of forecasts been available, Fish might have said that there was a 30% probability of a hurricane.
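To make the mechanics concrete, here is a minimal Python sketch of the idea. The Lorenz-63 equations stand in for the weather model (a real system such as ECMWF's is incomparably more complex), and the perturbation size, step count and so on are illustrative assumptions of mine, not operational values.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 system, a classic toy
    # chaotic "atmosphere" standing in for a full weather model.
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

def run_member(initial_state, n_steps=1500):
    # Integrate one ensemble member forward to the "forecast date".
    state = initial_state.copy()
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

rng = np.random.default_rng(42)
analysis = np.array([1.0, 1.0, 1.0])  # best estimate of current conditions

# 50 members: the same model, very slightly perturbed initial conditions.
members = np.array([
    run_member(analysis + rng.normal(scale=1e-3, size=3)) for _ in range(50)
])

# The spread of outcomes across members is the "range of forecasts".
print("x at forecast time: min %.1f, max %.1f"
      % (members[:, 0].min(), members[:, 0].max()))
```

Because the toy system is chaotic, those tiny initial tweaks grow into visibly different end states, which is exactly the effect ensemble forecasting is designed to expose.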
I don’t agree with this.
If Fish had said there was a 30% probability of a hurricane he would have been guilty of confusing his computer model with reality.
All an ensemble forecaster knows is that a certain proportion of computer model runs produced a given outcome. This might help identify possible weather events, but it doesn't tell you real-world probabilities. If there is some factor the computer model doesn't take account of, then running the model 50 times is, in effect, making the same mistake 50 times.
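The distinction fits in a few lines of Python. The numbers below are invented purely for illustration; the point is in the comments.

```python
import numpy as np

# Invented ensemble output: 50 forecast minimum temperatures (deg C) for one day.
ensemble_tmins = np.random.default_rng(1).normal(loc=1.0, scale=2.0, size=50)

# What the ensemble actually tells you: the fraction of MODEL RUNS below 0 deg C.
frost_fraction = np.mean(ensemble_tmins < 0.0)
print(f"{frost_fraction:.0%} of members forecast a frost")

# Reading this as the real-world probability of a frost assumes the members'
# errors are independent. A factor missing from the model biases all 50 runs
# the same way, so the quoted "probability" just repeats the mistake 50 times.
```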
March 2013 in the UK is the cold snap that just keeps on giving. The weather has defied forecasts. Specifically, it seemed just a few days ago that westerly air was going to break through before the end of March. Of course, this has a bearing on where March 2013 will rank among the all-time coldest, which I discussed in my previous post, but I’ll have to find time to revisit that subject in the next day or two.
The Weathercast site has made ensemble forecasts from the European Centre for Medium-range Weather Forecasts (ECMWF) available to the public. Here are some from over the last few days:
I've lined them up so that forecasts for the same day appear in the same column, for ensembles issued at 00:00 hours on 22nd, 24th and 25th March.
An ensemble that's behaving itself should produce a narrower spread of forecasts as we get closer to the forecast date. For example, the spread of the maximum temperature on Easter Day, 31st March, narrows in the forecast from 25th March compared to that from 24th.
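For concreteness, here is the check I have in mind, sketched in Python with entirely made-up numbers (I only have the Weathercast plots, not the raw ECMWF member values):

```python
import numpy as np

# Made-up ensembles of maximum temperature (deg C) for a single fixed date,
# issued on successive days, purely to illustrate the spread check.
rng = np.random.default_rng(0)
forecasts_for_fixed_date = {
    "issued 22nd": rng.normal(loc=8.0, scale=3.0, size=50),
    "issued 24th": rng.normal(loc=8.0, scale=2.0, size=50),
    "issued 25th": rng.normal(loc=8.0, scale=1.2, size=50),
}

for issue_date, members in forecasts_for_fixed_date.items():
    print(f"{issue_date}: spread {members.max() - members.min():.1f} deg C")

# A well-behaved ensemble narrows around a stable central forecast as the
# valid date approaches; it should not keep a similar spread while the whole
# distribution marches downwards, as happened in late March 2013.
```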
But now look at the coldest possible temperatures on 31st March. On 22nd hardly any of the runs predicted temperatures below 0°C, and none below about -2°C. By 25th most of the forecasts were for a frost on the morning of 31st, and many for a severe frost (-3°C or so). This shouldn't happen.
It seems that on 22nd nearly all the model runs predicted that Atlantic air would break through by 31st; by 25th virtually none of them did.
Instead of simply fanning out more the further ahead the forecast is for, the ensemble outcome seems to shift systematically from one day's run to the next. Perhaps ensemble forecasts don't solve all our problems. I suspect there are aspects of the climate system our computer models do not yet capture. There are things we do not yet know.
As we saw with the unexpected rainfall in 2012, ensemble forecasts can assign zero probability to extreme events that nevertheless occur – in this case what is most likely the second coldest March since the 19th century. And the whole point of ensemble forecasting is to predict extremes.
The forecast for 31st March is of more than passing interest, of course. It is no doubt of great importance to those who may be planning to take school kids on Duke of Edinburgh expeditions on Dartmoor or (since we're talking about a London forecast) preparing for the traditional Boat Race on the Thames!