## 9 comments:

Speaking as an astronomer who's fit a lot of baselines to data, that blip during WWII sure looks like a 3-sigma result. Rather than thinking that it's real, though, I'd be more inclined to suspect correlation in the data that reduces the significance of such a deviation. Is that possible?

Ocean temperature measurement effects (ships switching from buckets to engine-intake measurements) and increased solar activity combined...

After the 1970s it's pretty linear. I don't think fitting a polynomial from the 1880s makes much sense.

The polynomial fit should not be used to project future temperature changes. Doing so would assume that the same natural process was driving global temperature throughout this entire period, even though man-made emissions were low during the earlier portion. It would also imply that the slight temperature decrease from 1880 to 1920 is evidence that there will be a rapid increase in the 21st century.

Cheers

Lee, I have no idea what you mean.

Neither of these should be used to project future temperature changes, but the quadratic model is far better than the linear one, at least for the recent past, the present, and the foreseeable future.

The linear model is below actual GISTEMP for 20 of the past 20 years. It's useless.

The quadratic model is closer. It's too low for only 13 of the past 20 years. So that's an improvement.

But it would make much, much more sense to project GISTEMP using the method JoeT gave you a few weeks ago. Project it based on the base-2 log of CO2, fitting a model from 1970 to the present, as you did in a post. Then make some assumptions about how CO2 will increase (it's pretty predictable at the decadal scale) and do the calculation.
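The regression described here can be sketched as follows. The numbers below are made-up placeholders standing in for the real annual series (Mauna Loa CO2 and GISTEMP anomalies), so the fitted values are illustrative only:

```python
import numpy as np

# Hypothetical stand-ins for the real annual series (CO2 in ppmv,
# GISTEMP anomaly in C) -- replace with actual data before trusting output.
co2_ppm = np.array([326.0, 339.0, 354.0, 369.5, 390.0, 404.0])
gistemp_anom = np.array([0.03, 0.27, 0.45, 0.42, 0.72, 1.01])

# Regress the anomaly on the base-2 log of CO2 concentration.
# The slope is then the warming per doubling of CO2.
x = np.log2(co2_ppm)
slope, intercept = np.polyfit(x, gistemp_anom, 1)

def project(ppm):
    """Projected anomaly when CO2 reaches `ppm` (note: no time axis)."""
    return slope * np.log2(ppm) + intercept

print(f"warming per doubling: {slope:.2f} C")
print(f"projected anomaly at 493 ppm: {project(493):.2f} C")
```

The point of the change of variable is that the projection depends only on the CO2 level reached, not on the calendar year; the time axis comes back in only when you assume a CO2 growth path.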

I do this up to 2050 over here:

First draft – temperature predictions

That post projects temperature as a function of doublings of CO2 concentration, not time. The model is developed using GISTEMP and CO2 data from 1960-now. By assuming that CO2 concentrations will be around 493 ppmv in 2050, you get the following projection for temperature:

2050 GISTEMP:
- Appell linear: 0.75
- Appell quadratic: 1.47
- CO2 model: 1.58 (CI 1.39 to 1.76) at 493 ppm CO2

2100 GISTEMP:
- Appell linear: 1.1
- Appell quadratic: 2.9
- CO2 model: 2.8 (CI 2.6 to 3.0) at 678 ppm CO2
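As a consistency check, the warming-per-doubling implied by the two CO2-model central estimates can be backed out directly from the figures quoted above (a sketch using only those numbers, nothing fitted here):

```python
import math

# Central estimates quoted above for the CO2 model.
t_2050, ppm_2050 = 1.58, 493.0
t_2100, ppm_2100 = 2.8, 678.0

# Number of CO2 doublings between the two projection points.
doublings = math.log2(ppm_2100 / ppm_2050)

# Implied warming per doubling of CO2.
per_doubling = (t_2100 - t_2050) / doublings
print(f"{per_doubling:.2f} C per doubling")  # roughly 2.6-2.7 C
```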

I don't put much confidence in the projection to 2100; the CO2 model is based on extrapolating the past rise in CO2. But actually that's a pretty reasonable approach -- the projected 678 ppm CO2 is very close to SRES A1B, and in between RCP 4.5 and 6.0. It's very defensible as a best guess.

Note that David Appell's simple quadratic model is pretty close to the CO2 model for this whole century -- it's slightly lower than the CO2 model in 2050, slightly higher in 2100.

I wrote up a brief blog post to compare the quadratic fit to our CO2-based projection over here:

https://climategraphs.wordpress.com/2017/03/21/comparison-to-time-extrapolations/

Blogger climate data -- it's pretty much automatic that a higher degree polynomial would fit better than a linear. There are more ways to vary the curve.
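The nesting argument is easy to demonstrate: the quadratic family contains every straight line (just set the squared coefficient to zero), so its residual sum of squares can never exceed the linear fit's, on any data. A minimal sketch with synthetic, temperature-like data (not real GISTEMP):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2017)
x = years - years.mean()  # center the predictor for numerical stability
# Synthetic series: a linear trend plus noise, purely for illustration.
y = 0.005 * x + rng.normal(0.0, 0.1, x.size)

def sse(degree):
    """Residual sum of squares of a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return float(resid @ resid)

# Higher degree can only lower (or match) the residual, never raise it.
print(sse(0), sse(1), sse(2))
```

That is exactly why "the quadratic fits better" is uninformative on its own; the interesting claim is that the linear model fails outright.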

A quote attributed to John von Neumann says, “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”

Not the point. If the rate of warming were not accelerating, then both of them would fit. But the linear trend *doesn't* fit. There was, in fact, acceleration in the rate of warming from 1880-2016. This is completely unsurprising, so I'm not exactly sure why DA felt the need for this post, but I guess it's worth reiterating.

Aside from that, the other point is that just extrapolating temperature as a function of time makes little sense, even if one can generate a polynomial that fits OK in the past. Temperature change is driven by forcings, mostly CO2. In fact, temperature pretty much linearly increases in direct proportion to the log of CO2 concentration. So forecasts of the form "When the CO2 concentration reaches X, temperature will reach Y" are a better approach.

Of course, if one wants to know *when* that will happen, one still has to make assumptions about the time evolution of CO2 concentration. But separating out the two parts helps pin down where the real uncertainty lies.
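Separating the two parts can be sketched like this. Everything numerical below is an assumption chosen for illustration (the sensitivity, reference point, and growth rate are not fitted values from any post in this thread):

```python
import math

# Part 1 (climate): anomaly when CO2 reaches a given level.
sensitivity = 2.6                 # C per doubling of CO2 -- assumed
ref_ppm, ref_anom = 405.0, 1.0    # illustrative present-day reference

def anomaly_at(ppm):
    """'When CO2 reaches X ppm, temperature reaches Y' -- no time axis."""
    return ref_anom + sensitivity * math.log2(ppm / ref_ppm)

# Part 2 (emissions): *when* CO2 reaches that level, under a separate
# assumption of steady fractional growth (~0.6%/yr, roughly recent pace).
growth = 0.006

def year_reaching(ppm, start_year=2017):
    return start_year + math.log(ppm / ref_ppm) / math.log(1 + growth)

for target in (493, 678):
    print(f"{target} ppm -> {anomaly_at(target):.2f} C, "
          f"around {year_reaching(target):.0f}")
```

Note that the uncertainty splits cleanly: `anomaly_at` carries the climate uncertainty, `year_reaching` carries the emissions-scenario uncertainty, and you can revise either without touching the other.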
