Anyway, a much more important article appeared in Nature last week: "Climate models at their limit?" by Mark Maslin and Patrick Austin. If you read only one science article this month, make it this one -- it's only two pages. Unfortunately I haven't found a free PDF anywhere yet; if you know of a link to one, please leave a comment. And I agree with everyone who has asked: if Nature thinks AGW is so serious, how about making their articles on it freely available to all, especially when the authors receive public funding?
Maslin and Austin write:
For the fifth major assessment of climate science by the Intergovernmental Panel on Climate Change (IPCC), due to be released next year, climate scientists face a serious public-image problem. The climate models they are now working with, which make use of significant improvements in our understanding of complex climate processes, are likely to produce wider rather than smaller ranges of uncertainty in their predictions. To the public and to policymakers, this will look as though the scientific understanding of climate change is becoming less, rather than more, clear.

Scientists need to decide how to explain this effect. Above all, the public and policymakers need to be made to understand that climate models may have reached their limit. They must stop waiting for further certainty or persuasion, and simply act.

This isn't exactly a new realization. A 2007 paper in Science by Roe and Baker [PDF] has an elegant demonstration of why reducing uncertainties is difficult if feedbacks are large. If G is the "gain" in temperature -- the amount of extra warming beyond the simple (Planck) warming of ~1.2°C from a straight doubling of CO2 -- then G = 1/(1 − f), and its uncertainty is

σG ≈ σf / (1 − f̄)²

where f̄ is the average feedback, and σf is the uncertainty in the average feedback.
As they write, a climate sensitivity between 2.0°C and 4.5°C corresponds to G between 1.7 and 3.7, so (since f = 1 − 1/G) f is between 0.41 and 0.73. So if the feedback is on the high end -- closer to 1 -- the uncertainty in the amount of "extra" warming blows up.
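The arithmetic above can be checked in a few lines. This is just a sketch of the Roe and Baker relation using the numbers in the post (1.2°C Planck warming, 2.0°C–4.5°C sensitivity range); the variable names and the illustrative σf value are mine, not from the paper:

```python
T_PLANCK = 1.2  # no-feedback (Planck) warming for doubled CO2, in °C

def gain(sensitivity):
    """G = total warming / no-feedback warming."""
    return sensitivity / T_PLANCK

def feedback(G):
    """From G = 1/(1 - f), so f = 1 - 1/G."""
    return 1.0 - 1.0 / G

for S in (2.0, 4.5):
    G = gain(S)
    print(f"sensitivity {S} °C -> G = {G:.2f}, f = {feedback(G):.2f}")

# Uncertainty amplification: sigma_G ~ sigma_f / (1 - f_bar)**2,
# so the same sigma_f yields a much larger sigma_G when f_bar is high.
sigma_f = 0.05  # illustrative value only
for f_bar in (0.41, 0.73):
    sigma_G = sigma_f / (1.0 - f_bar) ** 2
    print(f"f_bar = {f_bar} -> sigma_G = {sigma_G:.2f}")
```

Running this reproduces the f range of roughly 0.41–0.73, and shows the blowup: the same spread in f produces several times more spread in G at the high-feedback end.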
So maybe it's not surprising that as climate models incorporate more and more physics, chemistry, and even biology, the uncertainty in the climate sensitivity hasn't narrowed by much. Maslin and Austin give this chart of climate-sensitivity estimates going all the way back to Svante Arrhenius:
The climate models, or ‘climate simulators’ as some groups are now referring to them, being used in the IPCC’s fifth assessment make fewer assumptions than those from the last assessment, and can quantify the uncertainty of the complex factors they include more accurately. Many of them contain interactive carbon cycles, better representations of aerosols and atmospheric chemistry and a small improvement in spatial resolution.

Good luck getting the general public to understand that.
Yet embracing more-complex processes means adding in ‘known unknowns’, such as the rate at which ice falls through clouds, or the rate at which different types of land cover and the oceans absorb carbon dioxide. Preliminary analyses show that the new models produce a larger spread for the predicted average rise in global temperature. Additional uncertainty may come to light as these models continue to be put through their paces. Dan Rowlands of the University of Oxford, UK, and his colleagues have run one complex model through thousands of simulations, rather than the handful of runs that can usually be managed with available computing time. Although their average results matched well with IPCC projections, more extreme results, including warming of up to 4 °C by 2050, seemed just as likely. As computing power becomes more accessible, that ‘hidden’ uncertainty will become even more obvious.
So what to do? Their best idea is probably to project the uncertainty onto the x-axis: instead of saying that the warming in year Y is uncertain by such-and-such, say that the year when warming reaches X degrees is uncertain. (It's basically this kind of chart.) That is, instead of saying it will be (say) 1.5°C to 2.5°C warmer by the year 2050, say we will reach 2°C of warming sometime between (say) the years 2040 and 2100.
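The same trick is easy to illustrate with a toy linear-warming model. The rates and starting point below are made-up illustrative numbers, not from the article; the point is only the translation from "warming in 2050 is uncertain" to "the year we hit 2°C is uncertain":

```python
def year_reaching(target, rate, t0=2012, warming0=0.8):
    """Year at which warming hits `target` (°C above preindustrial),
    assuming a constant `rate` in °C per decade, starting from
    `warming0` °C in year t0. All defaults are illustrative."""
    return t0 + 10.0 * (target - warming0) / rate

TARGET = 2.0  # °C above preindustrial
for rate in (0.3, 0.15):  # high and low warming rates, °C/decade
    print(f"{rate} °C/decade -> {TARGET} °C reached around "
          f"{year_reaching(TARGET, rate):.0f}")
```

With these (invented) high and low rates, the 2°C threshold lands around 2052 versus 2092 -- the same uncertainty, but expressed as a range of years rather than a range of temperatures.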
Besides this, their solution is basically to say that stopping warming is the right thing to do:
In the face of scientific uncertainty, various philosophies for decision-making have arisen. But perhaps the best approach is to ensure that policies include ‘win–win’ strategies. Supporting a huge increase in renewable energy would reduce emissions and help to provide energy security by reducing reliance on imported oil, coal and gas. Reduced deforestation and reforestation should draw down CO2 from the atmosphere and help to retain biodiversity, stabilize soils and provide livelihoods for local people through carbon credits. Measures that lessen car use will increase walking and cycling, which in turn reduces obesity and heart attacks. No one can object to creating a better world, even if we turn out to be extremely lucky and the scale of climate change is at the low end of all projections.

That seems to rely heavily on agreement about the word better in the phrase "creating a better world," and if history has shown anything it's that humans have, for millennia now, demonstrated a remarkable amount of concurrence on what "a better world" means, right? So this should all be a piece of cake.
It seems slightly disingenuous to say that adding more information about the way in which the atmosphere responds to GHG accumulations can increase the amount of uncertainty in the models.
Increasing the number and quality of inputs (and this may turn out to be a generous assumption) should generally not increase uncertainty, unless previous models swept true uncertainty under the rug.
Except it's not just information about how the atmosphere responds to GHGs, but about how *climate* responds to GHGs + other things.
If you model a system with just one variable, x, and try to calculate some function f(x), you can calculate its uncertainty delta(f) as a function of x and delta(x).
But if you add another variable y to the description, whose influence on f may be less than x's, then now delta(f) depends on x, y, delta(x) and delta(y), and the new delta(f) could well be bigger than the original delta(f). No?
In other words, you can measure one side of a square with more accuracy than you can measure its area.
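That reply can be made concrete with standard first-order error propagation, where each independent variable contributes its own term to the variance of f. The functions and numbers below are my own illustration, not from the comment thread:

```python
import math

def propagate(partials_and_sigmas):
    """delta(f) from delta(f)^2 = sum_i (df/dx_i)^2 * sigma_i^2,
    assuming independent errors (first-order propagation)."""
    return math.sqrt(sum((p * s) ** 2 for p, s in partials_and_sigmas))

# One variable: area of a square, A = x^2, so dA/dx = 2x.
x, dx = 10.0, 0.1
print(propagate([(2 * x, dx)]))       # 2.0: twice the relative error of the side

# Two variables: a rectangle, A = x*y, with dA/dx = y and dA/dy = x.
# Even if y's influence on A is smaller than x's, its term can only
# add to the total, so delta(A) grows.
y, dy = 1.0, 0.05
print(propagate([(y, dx), (x, dy)]))  # sqrt(0.01 + 0.25), about 0.51
```

The square example makes the post's closing point numerically: a 1% error on the side becomes a 2% error on the area, and every extra uncertain variable only adds terms under the square root.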
The word "denier" is rarely used in the literature; I'm aware of only one other instance, a Curry, Webster, and Holland piece from 2006.