"The models are running too hot. The flat trend in global surface temperatures may continue for another decade or two."As for the models:
-- Judith Curry, quoted in the Daily Mail, 3/16/13 (written by -- who else? -- David Rose)
via Twitter. Or, from Ed Hawkins (regularly updated):
RealClimate has a very good recent post on comparing models and satellite data.
38 comments:
In both the Gavin Schmidt graph and the Ed Hawkins graph, the observed temperatures are in the lower half of the model spread, and have been (in the Schmidt graph) since the 1997-98 El Niño. This makes Professor Curry's point perfectly - oops, the models are running too hot. It will be interesting to see what the observations look like after the La Niña. That will be the acid test.
Schmidt's wide error bars prove too much. In principle, a scientific theory can only be falsified, never totally verified. Still, a theory is considerably strengthened if it predicts something unexpected, and that thing comes to pass.
Models created during a period of rising temperatures predicted that temperatures would continue to rise. But, these models had a wide range of conceivable rates of warming. So, the fact that the planet warmed by an amount within the range is hardly unexpected.
cheers
"Schmidt's wide error bars prove too much."
The error bars come from the science, not from Gavin Schmidt's personal opinion.
If you're not including error bars, you have no business saying models do not agree with observations -- it's a vacuous claim.
Reply to David Appell :-
If Schmidt's wide error bars come from the science, then the science is nowhere near being settled.
Gavin Schmidt's plot does not show "error bars", it shows model spread. If you are interested in decadal climate prediction, like Curry and Mallett are, you have to take into account that the error bars are two times wider than the model spread.
Yes, the uncertainty is large. I would personally not change a system our society depends upon if the uncertainty about what will happen is large. Even if I expected the change to be neutral or slightly positive. If you bet on something that has a large impact on your life or civilisation, you want to be sure that you can handle the worst-case scenario. Maybe that is my conservative side.
We are sure there will be a large change, just not how large. The uncertainty monster is not your friend.
David in Cal wrote:
"But, these models had a wide range of conceivable rates of warming. So, the fact that the planet warmed by an amount within the range is hardly unexpected."
The match in Gavin's graph between observations and models is impressive -- very close to the ensemble-mean prediction with adjusted forcings, and always within the 95% confidence limits. (And, yes, the ensemble mean is a true prediction, not a projection.)
Of course, the observations have error bars too, which aren't shown on the graph. The points on the graph are the mean values.
Climate change will probably never be predicted (or projected) exactly. 2-sigma is already pretty good. It is easily close enough to indicate models have skill (see Gavin's TED talk) and are useful for policy. We will have to make decisions in the face of uncertainty -- but then, we do that ALL the time. (See Cheney's 1% Doctrine.)
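For readers who want the comparison in Gavin's graph made concrete, here is a minimal sketch in Python. The numbers are invented for illustration; they are not CMIP runs or real observations.

```python
import numpy as np

# Toy stand-ins: 20 "model runs" and one "observed" series of annual anomalies (degC).
rng = np.random.default_rng(0)
years = np.arange(2000, 2021)
runs = 0.02 * (years - 2000) + rng.normal(0, 0.1, size=(20, years.size))
obs = 0.02 * (years - 2000) + rng.normal(0, 0.1, size=years.size)

ens_mean = runs.mean(axis=0)         # ensemble mean
ens_sd = runs.std(axis=0, ddof=1)    # across-run spread

inside = np.abs(obs - ens_mean) <= 2 * ens_sd
print(f"{inside.mean():.0%} of years fall within the 2-sigma ensemble envelope")
```

Note that the across-run spread here is model spread, not a full error bar, which is Victor's point about the difference between the two.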
Would you not buy fire insurance, or take steps to reduce damages in the event of a fire (installing smoke detectors, clearing brush, escape rope ladders), unless someone told you the exact date your house would burn down?
"Still, a theory is considerably strengthed if it predicts something unexpected, and the that thing comes to pass."
That's very rare, even in the hard sciences, and hardly necessary.
What first impressed people of Einstein's general relativity was his post-diction of the additional perihelion shift of Mercury. Even the bending of starlight observed by Eddington was not unexpected, classically -- but Einstein's prediction was different by a factor of two, which was correct.
Dirac post-dicted the magnetic moment of the electron to be g=2, a great advance. His equation also predicted no Lamb shift, which turned out to be incorrect. Almost two decades later Schwinger predicted g = 2 + alpha/pi, which was very impressive and also (I think) before experiment. QED also post-dicted the Lamb shift, which had already been measured.
Post-dictions, not predictions, were absolutely vital in convincing people of the validity of the Dirac equation and then, better still, Feynman/Schwinger/Tomonaga QED.
It was much the same for the quark model and QCD, which predicted various ratios of scattering cross sections that were already known.
Predicting the charm quark, though -- that was new. So was Lee & Yang's prediction of parity violation. And the Higgs. All very very very impressive. But not the usual case in science.
I should add that it's not today's model/observation agreements, like Gavin Schmidt demonstrates, that convinced scientists of AGW -- they were convinced years ago, and for some, decades ago.
Those who aren't impressed by Gavin's graph are never going to be convinced. Or by any other evidence.
Victor, this is a good and important comment, thanks.
So, for a particular climate model, how *would* you calculate the error bars of its projections?
Every factor that goes into models -- radiative forcings, vegetation growth, lots of parametrizations -- has an uncertainty attached. Is anyone really including all these in their model calculations? I would guess that would be computationally impossible....
When I was an undergraduate, a scientist in a medium-energy experimental group I worked for one summer said 90% of their computer time was taken up with calculating error bars....
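One standard answer, when the computer time can be afforded, is brute-force Monte Carlo: perturb each uncertain input within its assumed range, rerun, and take the spread of the outputs as the error bar. A toy sketch, with a hypothetical stand-in function run_model() in place of a real GCM and invented uncertainty ranges:

```python
import numpy as np

def run_model(co2_forcing, aerosol_forcing, feedback_param):
    """Stand-in for an expensive climate model; returns a warming estimate (degC).
    The expression is purely illustrative."""
    return (co2_forcing + aerosol_forcing) / feedback_param

rng = np.random.default_rng(1)
n = 10_000
# Hypothetical 1-sigma uncertainties on each input:
co2 = rng.normal(3.7, 0.4, n)                      # doubled-CO2 forcing, W/m^2
aer = rng.normal(-1.0, 0.5, n)                     # aerosol forcing, W/m^2
lam = np.clip(rng.normal(1.2, 0.3, n), 0.3, None)  # feedback parameter, W/m^2 per degC (kept positive)

warming = run_model(co2, aer, lam)
lo, hi = np.percentile(warming, [2.5, 97.5])
print(f"95% range of the toy projection: {lo:.1f} to {hi:.1f} degC")
```

Perturbed-parameter ensembles do something like this with real models, which is one reason it gets so computationally expensive.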
Reply to David Appell :-
Nobody doubts that there is an anthropogenic component to global warming. The only questions are :-
1. How much of the present warming (say since 1850) is anthropogenic ?
2. Is it a blessing or a curse ? People living in the future ice ages may consider this period to be a climate optimum.
The factor of two by which the uncertainty is larger than the model ensemble spread applies to decadal predictions, 10 to 20 years ahead. That is how the climate "debate" abuses models, for a task they were never designed to do.
The climate models were designed for long-term projections. There the main source of uncertainty is whether humanity takes action.
If you would like to have an uncertainty for a specific scenario, I am not sure if one climate model would be enough to estimate this. Multiple models and all the other evidence from observations and past reconstructions are necessary to get a handle on this complicated uncertainty. That is basically the debate about the climate sensitivity, where models are just one of the sources of evidence.
Models are especially important for the changes for which we do not have much observations/reconstructions, such as changes in the hydrological cycle and the circulation and local/regional changes. Here I would be happy if the model estimates are in the right ball park and if you look at regional changes you can be very happy if you have some confidence in the sign of the change.
But who cares about conservative values? Let's change the climate system and we'll see how it messes up things.
Richard, don't pretend like your questions haven't been addressed time and time and time again. It just makes you look like (even more of) an idiot.
PS Richard: Unless the quality of your comments increases substantially, you are soon going to be put on moderation. Deniers have plenty of comment space on denier blogs; they don't get much here, and you don't get to usurp the conversation with inane questions.
Error bars may be a convenient tool for expressing someone's judgment about the amount of uncertainty in the climate prediction. But, I don't think the error bars are derivable by any objective scientific method. IPCC has a bunch of models giving somewhat different predictions. There are also lots of models not considered by the IPCC, probably with a wider range of estimates. There may be bias in which models are included in the IPCC, because scientists who forecast little or no warming may be less likely to be invited to participate in the IPCC. There may be important inputs not reflected in any of the models. You don't know which, if any, model is correct. AFAIK there's no objective way to derive error bars from that situation.
Cheers
David in Cal:
If you don't know how it is that measurements give error bars, it's because you haven't studied enough science.
I also learned from Bevington, "Data Reduction and Error Analysis for the Physical Sciences."
Perhaps you should buy a copy and read it.
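For anyone without Bevington to hand, the core recipe is propagation of independent uncertainties in quadrature: for f(x, y), sigma_f^2 is approximately (df/dx)^2 sigma_x^2 + (df/dy)^2 sigma_y^2. A small generic sketch of that rule (nothing climate-specific):

```python
import numpy as np

def propagate(f, values, sigmas, eps=1e-6):
    """First-order propagation of independent 1-sigma uncertainties through f,
    using numerical partial derivatives (the textbook quadrature formula)."""
    values = np.asarray(values, dtype=float)
    grads = []
    for i in range(values.size):
        step = np.zeros_like(values)
        step[i] = eps
        grads.append((f(values + step) - f(values - step)) / (2 * eps))
    return float(np.sqrt(np.sum((np.array(grads) * np.asarray(sigmas)) ** 2)))

# Example: z = x * y with x = 2.0 +/- 0.1 and y = 3.0 +/- 0.2
sigma_z = propagate(lambda v: v[0] * v[1], [2.0, 3.0], [0.1, 0.2])
print(f"z = 6.0 +/- {sigma_z:.2f}")
```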
"There may be bias in which models are included in the IPCC, because scientists who forecast little or no warming may be less likely to be invited to participate in the IPCC"
Participating in the IPCC assessment process doesn't require an invitation.
Nor do those scientists who do participate in the IPCC (or anywhere else) ignore papers just because you think they don't like them. Science is the highest meritocracy of all. Good scientific ideas have always -- always -- won in science. Always.
--
PS: Did you look up the Karl et al Supplementary Material yet? You were clearly unaware it existed, yet unfairly criticized their paper for not including enough details.
DiC, you say: "Still, a theory is considerably strengthed if it predicts something unexpected, and the that thing comes to pass.
Models created during a period of rising temperatures predicted that temperatures would continue to rise. But, these models had a wide range of conceivable rates of warming. So, the fact that the planet warmed by an amount within the range is hardly unexpected."
This is indeed unexpected. You can look at any skeptic prediction about what will happen next and they always say cooling is imminent.
For instance Dr Norman Page:
http://davidappell.blogspot.ca/2016/05/dr-norman-page-phd-still-batshit-insane.html
Or Joe Bastardi: http://thinkprogress.org/climate/2011/01/17/207234/joe-bastardi-wager-global-warming-arctic-sea-ice/
"Richard Lindzen says he's willing to take bets that global average temperatures in 20 years will in fact be lower than they are now." - https://web.archive.org/web/20070314005128/http://reason.com/news/show/34939.html
This Russian duo: http://www.theguardian.com/environment/2005/aug/19/climatechange.climatechangeenvironment
Patrick Michaels: http://julesandjames.blogspot.ca/2005/05/yet-more-betting-on-climate-with-world.html
Etc Etc.
And they're not wrong. The world probably ought to be cooling except for one factor that they choose to ignore.
David --
We may have had some misunderstandings. You wrote, "If you don't know how it is that measurements give error bars..." I was addressing error bars in model predictions, rather than error bars in measurements. In particular, IPCC predictions (or projections) have two types of possible error that can be evaluated only judgmentally.
1. There's possible error in the choice of model. No formula can tell you how far off you might be if the model doesn't exactly fit reality.
2. The IPCC projection represents some kind of consensus of a group of models. There's possible error in the choice of models they looked at and how to weigh them. But, no formula tells one how much error there is from giving the wrong weights or omitting certain models.
Consider, e.g., Norman Page. He presumably has a model that forecasts temperatures going down. No doubt the IPCC gave no weight to his model when reaching their consensus estimate, and I would do the same. How much error or uncertainty is introduced from ignoring his model? No doubt, the IPCC would say "zero", and I don't disagree. However, my point is that the decision that ignoring Page's model didn't add to the uncertainty is a judgment. No formula automatically says that ignoring his model has no impact on the uncertainty of the IPCC projections.
P.S. I haven't taken the time to understand Karl's paper. I was merely quoting Prof. Curry on the subject. Since you distrust her, I understand that you won't be impressed with her opinion.
I don't think that Dr Page has published anything that the IPCC could have included in their report. I'm not aware of any published work that they have ignored. It's a survey of published work so they can't really pick and choose the ones they like.
This is probably why the range of probable climate sensitivities given in the IPCC is much wider than most scientists would give. They need to include the studies from the "sky is falling" crowd and the "What, me worry?" crowd along with the mainstream works.
Here are 13 scientists on climate sensitivity. Each gives a range much tighter than the IPCC: http://www.bitsofscience.org/real-global-temperature-trend-climate-sensitivity-leading-climate-experts-7106/
Thanks for the link, Layzej. I'm glad to see that several of the 13 scientists explicitly acknowledge that their range is judgmental. Forster calls the range his "current thinking". Mann says, "I feel..." Caldeira calls it a "gut feeling". Rahmstorf says, "I personally say..."
Cheers
"1. There's possible error in the choice of model."
This makes no sense. There is no "correct" model. Such models don't exist, for any science. There are only the questions of (1) is the model useful? and (2) how well does the chosen model reproduce known science? For the big climate models out there, all do a pretty good job -- easily good enough to tell us that we have a serious warming and climate change problem from GHG emissions, which the observations show unfolding before our very eyes.
"2. The IPCC projection represents some kind of consensus of a group of models."
Models don't "project" climate sensitivity, they calculate it directly.
"However, my point is that the decision that ignoring Page's model didn't add to the uncertainty is a judgment. No formula automatically says that ignoring his model has no impact on the uncertainty of the IPCC projections."
Page's model demonstrated no skill. Large climate models do.
https://www.ted.com/talks/gavin_schmidt_the_emergent_patterns_of_climate_change?language=en
David, let me try to explain "model error". It's a challenge, because physicists and statisticians each have their own lingo.
Let's say you want to estimate the long-term rate of global warming, say over the next century. For simplicity, let's say that the model is the slope of a linear fit to a certain time series. You're not sure that your answer is exactly correct, for two possible reasons:
1. Data going into the model includes some uncertainty or some random element.
2. The model itself may not be right.
There are techniques for estimating the possible error (or amount of uncertainty) due to cause #1. But, there are statistical techniques for estimating the possible error due to cause #2. And, as a rule, the possible error due to #2 is larger than the possible error due to #1.
E.g., consider the impact of Page's model. Your judgment (and mine) is that it's worthless. It adds no information. Others might have a different judgment. They might give some weight to the possibility that Page is onto something. So, they'd choose a wider range than you and I would. My point is that the range of uncertainty is someone's judgment, not some objective calculation.
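To put a number on the #1-type error in the linear-fit example above: the slope of an ordinary least-squares fit carries a standard error that follows directly from the scatter of the data about the line. A minimal sketch with synthetic data (not a real temperature record):

```python
import numpy as np

# Synthetic annual anomalies: a 0.018 degC/yr trend plus noise (illustrative only).
rng = np.random.default_rng(2)
years = np.arange(1970, 2021)
temps = 0.018 * (years - 1970) + rng.normal(0, 0.12, years.size)

# OLS fit; the covariance matrix gives the "cause #1" uncertainty of the slope.
coeffs, cov = np.polyfit(years, temps, 1, cov=True)
slope, slope_se = coeffs[0], np.sqrt(cov[0, 0])
print(f"trend = {100*slope:.2f} +/- {100*2*slope_se:.2f} degC/century (2-sigma)")
```

Real temperature series are autocorrelated, which widens this error bar, and nothing in the calculation says whether a straight line was the right model in the first place; that second question is the #2 issue being argued over here.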
Reply to David Appell (7:51 pm) :-
Another very good textbook is 'An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements' by John R. Taylor, Second Edition (University Science Books, 1982, 1997)
David in Cal wrote:
"But, there are statistical techniques for estimating the possible error due to cause #2. And, as a rule, the possible error due to #2 is larger than the possible error due to #1."
Really? What algorithm gives the #2-type possible error for climate models?
And your statement that #2 is always > #1 strikes me as dubious.
David -- Sorry, I accidentally left out a word in typing. My error changed the meaning of what I intended. I meant to say, "But, there are NO statistical techniques for estimating the possible error due to cause #2."
I did accurately write, "And, as a rule, the possible error due to #2 is larger than the possible error due to #1." I didn't say "always".
"I did accurately write, "And, as a rule, the possible error due to #2 is larger than the possible error due to #1." I didn't say "always"."
I don't see how you can possibly prove this for climate models.
I don't even know what #2 means, "The model itself may not be right." I can't think of a single model that is "right."
Right you are, David. You can't prove that model error is typically larger than statistical sampling error for climate models. But in dealing with messy data and uncertain models, that has been my experience.
I'll do my best to explain #2. It's not easy, because, as you know, I'm no expert in climate models. Let's say you wanted to predict the temperature increase in the next 100 years. You have a model that you believe more or less represents reality. Your input is the temperature record for many past years. Here's a key: you assume that the past temperatures followed your model, except that there was a random error added to or subtracted from each year's temperature. Then your projection could be off, because the past temperatures don't exactly represent the "true" values. That is, they don't quite follow your model, because of the random error term. Then your prediction for 100 years from now might be somewhat off, because you had no way to know what the "true" input would have been. This is error of type 1.
But, now consider the possibility that your model might simply be wrong. E.g., someone named Bates has a new, peer-reviewed article using something called a "Two Zone, Energy Balance Model." His model yields an estimate of Equilibrium Climate Sensitivity of around 1 degree C -- considerably lower than the IPCC's range of 1.5 - 4.5 deg. C. http://onlinelibrary.wiley.com/doi/10.1002/2015EA000154/epdf If Bates' model is more correct than the type of models used by the IPCC, then the IPCC estimate could be substantially off.
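The type-1 part of that hypothetical can be simulated directly: add the assumed random error to the past record many times, refit the toy trend model each time, and look at the spread of the extrapolations. This is a sketch of David in Cal's simplified picture, not of how GCMs actually work (see the replies below):

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1950, 2021)
record = 0.015 * (years - 1950) + rng.normal(0, 0.1, years.size)  # toy "observed" record

# Monte Carlo over the assumed per-year random error (type-1 uncertainty only).
projections = []
for _ in range(2000):
    perturbed = record + rng.normal(0, 0.1, record.size)
    slope, intercept = np.polyfit(years, perturbed, 1)
    projections.append(slope * 100)  # linear extrapolation: warming over the next 100 years
projections = np.array(projections)
print(f"100-year warming: {projections.mean():.2f} +/- {2*projections.std():.2f} degC (2-sigma)")
```

No amount of looping like this addresses the type-2 question of whether the trend model (or the Bates model, or any other) is the right one; that part really does sit outside the formula.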
The Bates model is at odds with observations: https://andthentheresphysics.wordpress.com/2016/05/13/ecs-1k/
AFAIK the IPCC models are also at odds with observations, to some degree. Climate models show amplified warming high in the tropical troposphere due to greenhouse forcing. However data from satellites and weather balloons don’t show much amplification. https://wattsupwiththat.com/2013/07/16/about-that-missing-hot-spot/
Cheers
Maybe, but the IPCC's ECS range is not at odds with reality. An ECS of 1 C does appear to be at odds.
David in Cal wrote:
"Let's say you wanted to predict the temperature increase in the next 100 years. You have a model that you believe more or less represents reality. Your input is the temperature record for many past years. Here's a key: you assume that the past temperatures followed your model, except that there was a random error added to or subtracted from each year's temperature. Then your projection could be off..."
This isn't **AT ALL** how climate models work.
They don't use some statistical estimation of the trend and project that. Instead they numerically solve the underlying partial differential equations that describe the physics of the factors that determine climate.
Here is a description of a well known climate model. You can just glance at it to see that climate models are completely different from what you think they are.
"Description of the NCAR Community Atmosphere Model (CAM 3.0)," NCAR Technical Note NCAR/TN–464+STR, June 2004.
http://www.cesm.ucar.edu/models/atm-cam/docs/description/description.pdf
David in Cal wrote:
"Here's a key: you assume that the past temperatures followed your model, except that there was a random error added to or subtracted from each year's temperature. Then your projection could be off"
Another important difference is that climate models don't start from a known initial state -- because we do not have nearly sufficient data on the initial state. That would require much more than global air temperatures -- it would require temperatures for each grid point (horizontally and vertically) in the climate model. (Grid point = center of grid box, or maybe values at the grid box corners, or the middle of their edges, etc.) It would require ocean heat content in each grid box, horizontally and vertically down, in the model. It would require ocean currents at all these points, and known cloud cover at all the grid points, and the Earth's albedo at each grid point, and vegetation cover, and aerosol distributions, and brown carbon, and much much more.
We simply do not have such data, and may never (and certainly not soon). So climate models can't solve the PDE initial value problem that is typical in physics.
Instead they "spin up" their models from centuries before today until the model comes back to an equilibrium state something like (I think, but am fuzzy on this) when you want your model to start.
http://www.oc.nps.edu/nom/modeling/initial.html
This is the reason why climate models can't make predictions for just a few decades out (like the pause) -- they don't start in the actual real climate state.
Climate models were made to predict the END EQUILIBRIUM STATE -- the climate state after all perturbations were finished doing their thing and the climate adjusted to everything. THAT is how climate sensitivity (to various factors) is calculated (again, not projected). For CO2, for example, modelers spin up their model, change the atmospheric CO2 level at time=0 to be twice as much as it was -- immediately, in one fell swoop -- and then let their models run for centuries or millennia. The surface temperature change when equilibrium is re-established is the climate sensitivity to, say, CO2.
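A zero-dimensional energy-balance toy can illustrate that procedure (spin up to equilibrium, apply an abrupt CO2 doubling, run until the new equilibrium). This is a cartoon of the experimental design being described, with assumed parameter values, not a real climate model:

```python
# Toy zero-dimensional energy balance model:  C dT/dt = F - lam * T
C = 8.0      # heat capacity, W yr m^-2 K^-1 (roughly an ocean mixed layer; assumed)
lam = 1.25   # net feedback parameter, W m^-2 K^-1 (assumed)
F_2X = 3.7   # radiative forcing from doubled CO2, W m^-2
DT = 0.1     # time step, years

def run(T0, forcing, years):
    """Integrate the toy model forward and return the final temperature anomaly."""
    T = T0
    for _ in range(int(years / DT)):
        T += DT * (forcing - lam * T) / C
    return T

T_control = run(0.0, 0.0, 500)           # "spin up": equilibrate with no extra forcing
T_doubled = run(T_control, F_2X, 1000)   # abrupt doubling, run to the new equilibrium
print(f"Toy equilibrium climate sensitivity: {T_doubled - T_control:.2f} K")
# Analytically this is just F_2X / lam (about 3 K here); the time integration is only
# there to show the spin-up / perturb / re-equilibrate recipe.
```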
I am not exactly sure what GISS is doing to generate projections like this:
https://twitter.com/climateofgavin/status/689889733737082880
I understand how (and why) they recalculate with exact forcings (because the assumed forcings -- GHGs, volcanic, solar, aerosols, etc -- are never right unless you can predict the future), but I don't know what initial state they start from. Some day I should ask Gavin Schmidt.
David in Cal wrote:
"AFAIK the IPCC models are also at odds with observations, to some degree. Climate models show amplified warming high in the tropical troposphere due to greenhouse forcing. However data from satellites and weather balloons don’t show much amplification. https://wattsupwiththat.com/2013/07/16/about-that-missing-hot-spot/"
Yet again -- again -- you are wrong. This is what you get for trusting shitty sites like Anthony Watts.
Watts' shitty graphs don't include error bars. That makes them...shitty.
If he had, he'd find that the error bars are too large to conclude one way or the other. See
http://davidappell.blogspot.com/2011/05/fred-singers-lecture-at-portland-state.html
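Here is what "too large to conclude one way or the other" means in numbers: compare the difference between two trend estimates with their combined uncertainty. The values below are invented for illustration; they are not the actual satellite or radiosonde numbers.

```python
import math

# Invented example values (degC/decade, with 2-sigma uncertainties):
modeled_trend, modeled_2sig = 0.30, 0.10    # hypothetical modeled upper-troposphere trend
observed_trend, observed_2sig = 0.20, 0.20  # hypothetical satellite/radiosonde trend

diff = modeled_trend - observed_trend
diff_2sig = math.hypot(modeled_2sig, observed_2sig)  # independent errors add in quadrature
print(f"difference = {diff:.2f} +/- {diff_2sig:.2f} degC/decade")
print("consistent at 2-sigma" if abs(diff) <= diff_2sig else "inconsistent at 2-sigma")
```

With error bars that wide, "the data don't show the hot spot" and "the data are consistent with the hot spot" can both be true at once, which is the point.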
Hypothetical situation. Suppose Norman Page said he had a model showing that the globe is cooling. I say to him, "The world isn't cooling; it's warming. Your model is wrong."
Page responds, "The amount of warming is within my error bar, so the observations are not at odds with my model."
I'd respond with the following arguments:
1. Error bars are judgmental, not statistical. Naturally Page now chooses error bars wide enough so that the observations to date don't contradict his model.
2. Page's error bars are so wide that neither cooling nor warming falsifies his model (unless the warming is severe). A theory that's not falsifiable isn't science.
3. From a common-sense POV, Page said the world is cooling; the observations show that it's actually warming. End of discussion.
David, I think you know where I'm headed. Does a parallel argument apply to the troposphere hot spot? It's supposed to be warming faster than the surface but observations show it warming slower than the surface.
Cheers
David in Cal said...
"Hypothetical situation."
What is a hypothetical situation?
Unless you quote something, I have no idea what you're talking about.
David in Cal wrote:
"Does a parallel argument apply to the troposphere hot spot? It's supposed to be warming faster than the surface but observations show it warming slower than the surface."
Did you read what I just wrote about the hot spot and uncertainties?
David, it's becoming clear to me, as it is already to some others here I think, that you keep writing the same old stupid shit time and time again.
Perhaps you are incapable of learning, or perhaps you think we aren't smart enough to realize what you're doing.
We are.
You and Richard M are just like all dumb deniers everywhere else. You know very little, and can't think outside that box.
You, David, yesterday clearly didn't have the slightest notion about how climate models work. Yet you were so sure they were all wrong, because instead of going and learning what climate models are, you assumed they were like some modeling you did years ago in some job somewhere.
You were as wrong as wrong can be. And this is frequently the case with you. You never admit it, and you never learn and improve.
Maybe you're only interested in wasting my time and others'. That ends now.
I have put this blog on full moderation. Until you either start posting intelligent comments and replies or you go away, you won't be posting here anymore. You don't get to take over this blog.