A pre-hurricane climate change analysis gets major revision after the storm

It was the first scientific attempt of its kind: assessing the impact of climate change on a hurricane before the storm had even made landfall. And the results (which we covered at the time) were remarkable, suggesting that 2018’s Hurricane Florence would drop 50 percent more rainfall and span an extra 80 kilometers (50 miles) because of a warmer world.

Increased rainfall would hardly be a surprise. Analyses of many previous tropical cyclones have found that a warmer atmosphere, which holds more moisture, boosts storm precipitation totals. But 50 percent would be exceptional, as previous studies had fallen somewhere between 6 and 38 percent, depending on the storm.

At the time, the scientists weren’t able to explain why they got such a high number, given that they had only a few days to run the model forecast simulations and get them out the door. With the benefit of time, the scientists have now published an evaluation of their groundbreaking effort. Unfortunately, it shows that mistakes were made.

The initial work was based on 10 simulations each of two versions of the world: the actual conditions at the time and a counterfactual world with the warming trend removed (in this case, taking 0.75°C off of ocean surface temperatures in the area). The difference between these “actual” and “counterfactual” runs was the influence attributed to climate change, with the differences among the 10 runs providing some error bars.

To revisit this, the researchers repeated the experiment but with 100 simulations for each scenario. Collections of repeat simulations, called “ensembles,” are done by varying some of the uncertain parameters in the model. The more combinations of parameters you have, the more you fill out the range of possible outcomes. This firms up the error bars and ensures you aren’t missing part of what the model is predicting.
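To see how this kind of conditional attribution bookkeeping works in the abstract, here’s a minimal Python sketch. The rainfall numbers are invented placeholders rather than output from the actual hurricane forecast model; the point is just the mechanics: run an “actual” ensemble and a “counterfactual” ensemble, take the difference of the means, and use the member-to-member spread for the error bars.

```python
import numpy as np

rng = np.random.default_rng(42)
n_members = 100  # the revised study used 100 members; the original forecast had 10

# Hypothetical peak-rainfall totals (cm) for each ensemble member. In the real
# study these come from full hurricane forecast runs with perturbed parameters;
# here they are simply drawn from assumed distributions to show the bookkeeping.
actual = rng.normal(loc=85.0, scale=6.0, size=n_members)          # warmed world
counterfactual = rng.normal(loc=81.0, scale=6.0, size=n_members)  # warming removed

# Attribution estimate: difference of the ensemble means, as a percentage
delta = actual.mean() - counterfactual.mean()
pct = 100.0 * delta / counterfactual.mean()

# Error bars from the member-to-member spread (standard error of the difference)
se = np.sqrt(actual.var(ddof=1) / n_members + counterfactual.var(ddof=1) / n_members)
pct_err = 100.0 * 1.96 * se / counterfactual.mean()

print(f"climate-change contribution to peak rainfall: {pct:.1f}% (±{pct_err:.1f}%)")
```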

With this done, the researchers could compare the model forecast scenarios to what actually happened when Hurricane Florence dumped torrential rains on the Carolinas in September 2018. The “actual” forecast simulations did their job, matching the timing and location of landfall. The precipitation forecast was also good, with maximum rainfall totals averaging 85.3 centimeters (33.6 inches), compared to the 82.3 centimeters (32.4 inches) measured in the real world.

However, the researchers discovered a problem with the way their “counterfactual” simulations had originally been set up. An error caused their intended 0.75°C cooling of ocean-surface temperatures to grow by an additional 1-3°C off the Carolinas. That set up a much larger contrast with the current world, and it turns out to be the reason the numbers they released back in 2018 seemed so extreme.

After fixing that error, their 100 “counterfactual” simulations show a much smaller influence of climate change. Rather than something like 50 percent of the rainfall being the result of a warmer world, the models actually show about 5 percent (and that’s ±5). And rather than a storm that is 80 kilometers wider because of climate change, it was about 9 kilometers (±6) wider.

Obvious “Oops!” aside, there is one more thing the researchers learned from this analysis. To test out the impact of only running 10 simulations instead of 100, they ran the numbers on many random sets of 10. While the averages obviously tended to be similar, the error bars on a set of 10 are much wider.

The 95 percent confidence range on storm size due to climate change using all the simulations is 3.1 to 15.3 kilometers, for example. That range when using 10 simulations grows to -8.6 kilometers to 28.5 kilometers (that is, some would predict the storm would actually be smaller). So at least in this case, not having enough time to run more simulations means you’ll be stuck with obnoxiously large error bars.
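As a rough illustration of why ensemble size matters so much, here’s a toy Python calculation. The per-member values are made up, drawn from a normal distribution whose mean and spread are backed out of the confidence intervals quoted above, but the subsampling logic mirrors the comparison of 10-member versus 100-member ensembles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-member estimates (km) of the climate-change effect on
# storm size. The mean (~9 km) and spread (~30 km) are backed out of the
# confidence intervals reported above; the normal shape is an assumption
# made purely for illustration.
size_effect = rng.normal(loc=9.0, scale=30.0, size=100)

def ci95(samples):
    """95% confidence interval on the ensemble-mean effect."""
    half_width = 1.96 * samples.std(ddof=1) / np.sqrt(len(samples))
    return samples.mean() - half_width, samples.mean() + half_width

print("100-member CI (km):", ci95(size_effect))

# Draw many random 10-member subsets, as the re-analysis did, and see how
# much wider the interval gets and how often it spans zero (i.e., cannot
# even rule out a smaller storm).
widths, spans_zero = [], 0
for _ in range(1000):
    lo, hi = ci95(rng.choice(size_effect, size=10, replace=False))
    widths.append(hi - lo)
    spans_zero += lo < 0

print("typical 10-member CI width (km):", round(float(np.mean(widths)), 1))
print("fraction of 10-member CIs spanning zero:", spans_zero / 1000)
```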

The researchers do point out that each situation is a little different, and it’s not as simple as saying that a certain number of simulations is always required. It may take more examples to work out a recommended approach to these ultra-quick assessments.

They also put a surprisingly happy face on their results, writing:

We demonstrated that a forecasted attribution analysis using a conditional attribution framework allows for credible communication to be made on the basis of sound scientific reasoning. Post-event expansion of the ensemble size and analysis demonstrated it to be reasonable, albeit with some quantitative modification to the best estimates and the opportunity to more rigorously evaluate the significance of the analysis.

After all, the big mistake here was avoidable, even if the rush made it more likely. And while the error bars would be large, the method can at least say something interesting. Whether there’s sufficient value in getting a less reliable answer faster is another question.

Science Advances, 2020. DOI: 10.1126/sciadv.aaw9253 (About DOIs).
