Wednesday, December 16, 2015

Why the Claim of "No Warming in 18 Years" is a Blatant Cherry Pick

So by now everyone is used to the Moncktons of the world announcing, after each new month of satellite data on lower tropospheric temperatures, that there is an "18-year pause," plus or minus a few months. Ted Cruz even said so in his recent faux Senatorial hearing.

Wonder why they cite only the 18-years-and-a-few-months trend? Here's why:

[Graph: total lower-troposphere warming vs. how far back the trend starts (UAH v6beta4), with 5-95% error bars]

Here I plot the amount of warming in the lower troposphere as a function of how far back you start looking -- what I call the "reverse total change."

So, for example, the temperature change for the lower troposphere over the last 30 years is, according to UAH version 6beta4 (their latest version), +0.31°C, with error bars of ±0.06°C at the 5-95% confidence level (not accounting for autocorrelation).

Now you can see why they cite "18 years" or so -- it's a massive cherry-pick. Pick a number longer than this and the amount of warming quickly rises. Pick a number shorter than this and it does too, though the error bars get big enough that no solid conclusion is possible. (For example, you could try to say the LT pause is "14 years," but I can't imagine anyone claiming that 14 years is representative of just AGW and not natural variability. Though some probably try.)
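
For anyone who wants to reproduce the graph, here's a minimal sketch of the calculation in Python, with toy data standing in for the UAH series (the real file parsing is left out, and the trend and noise levels below are made up):

import numpy as np

# Toy monthly series standing in for the UAH lower-troposphere anomalies;
# swap in the real data.
rng = np.random.default_rng(0)
t = 1979 + np.arange(12 * 37) / 12.0                   # monthly, 1979-2015
y = 0.012 * (t - t[0]) + rng.normal(0.0, 0.15, t.size)

def total_change_since(t, y, start):
    """OLS trend from `start` to the end of the series; returns the total
    change (slope x span) and its 5-95% half-width, ignoring both
    autocorrelation and measurement error."""
    m = t >= start
    tt, yy = t[m], y[m]
    n = tt.size
    A = np.column_stack([tt, np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, yy, rcond=None)
    resid = yy - A @ coef
    se = np.sqrt(resid @ resid / (n - 2) / np.sum((tt - tt.mean())**2))
    span = tt[-1] - tt[0]
    return coef[0] * span, 1.645 * se * span           # 1.645 sigma = 5-95%

for years in (30, 18, 14):
    change, err = total_change_since(t, y, t[-1] - years)
    print(f"last {years} yr: {change:+.2f} +/- {err:.2f} C")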

Lesson: Picking the 1997-98 El Nino to start your trend isn't copacetic. One needs to account for natural variations (here, especially ENSOs) to pick out the anthropogenic signal.

I'm sure Ted Cruz couldn't care less -- he'd say whatever he needed to say to further his agenda -- but the rest of us should.

17 comments:

Thomas said...

What are the error bars based on? I didn't think UAH gave any on their data.

David Appell said...

They're statistical error bars from the noise in the data. Not, you're right, from UAH's measurement errors. I have had a hard time finding the latter, even when I ask repeatedly on Roy Spencer's blog.

JoeT said...

David, I'm not clear about the error bars either. I think what you did is, for example, go back 20 years, fit a line to the data, and calculate the 2-sigma error in the slope. However, as you stated, you weren't able to include the actual error in each data point in the uncertainty estimate. Is this right?

BTW, you may have noticed that NASA came out with the November anomaly of 1.05C, making it by far the warmest November recorded. Second place was in 2013 at 0.8C.

You may remember last year NASA and NOAA estimated the probability that 2014 was the warmest year. NASA said 38%, NOAA 48%. I made an estimate of the probability that 2015 will be the warmest year from the NASA data only.

I assumed the December anomaly is the same as November. That puts 2015 at 0.86 C compared to 2014 at 0.74, a difference of 0.12 C. This is roughly the amount of warming in one year that typically takes about 7 years to achieve. I used a 1 sigma error for the NASA data of 0.05 C and did the estimate including only the 6 warmest years. Even adding the 5th and 6th years had little effect on the probability.

I got that the probability that 2015 is the warmest year is roughly 94%. We'll see in a month how close I came.
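
For anyone who wants to check, here's a quick Monte Carlo version of the estimate (the 2015 and 2014 numbers are from above; the 1-sigma error and the other warm-year anomalies are illustrative assumptions, not the actual NASA values):

import numpy as np

rng = np.random.default_rng(0)
sigma = 0.05  # assumed 1-sigma uncertainty on each annual anomaly (deg C)

# 2015 (December assumed equal to November) and 2014 are the values above;
# the other four warm-year anomalies are illustrative placeholders.
anoms = {2015: 0.86, 2014: 0.74, 2010: 0.72, 2005: 0.69, 2013: 0.66, 2007: 0.66}

years = list(anoms)
draws = rng.normal(list(anoms.values()), sigma, size=(200_000, len(anoms)))
p = np.mean(np.argmax(draws, axis=1) == years.index(2015))
print(f"P(2015 is the warmest year) ~ {p:.0%}")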

David Appell said...

Joe: Yes, I just calculated the uncertainty of the slope.

That doesn't include the measurement error bars, but their contribution gets small for large N, where N is the number of data points. I don't remember its exact dependence on N, but I'll get back to this later.

David Appell said...

I used this (standard) method to calculate the statistical error, written by Tom Wigley:

http://nimbus.cos.uidaho.edu/abatz/PDF/sap1717draft37appA.pdf
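
As I read it, it's the usual least-squares slope error, but with the sample size deflated by the lag-1 autocorrelation of the residuals. A sketch of my understanding (not a transcription of the document):

import numpy as np

def trend_and_error(t, y):
    """OLS trend plus a standard error adjusted for lag-1 autocorrelation
    of the residuals via an effective sample size, roughly following the
    Wigley appendix."""
    n = t.size
    A = np.column_stack([t, np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1)                # effective sample size
    s2 = resid @ resid / (n_eff - 2)               # residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean())**2))
    return coef[0], se  # e.g. a 5-95% range is ~ slope +/- 1.645*se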

David in Cal said...

Monckton specifically looks for the earliest date such that the trend from that date to today is flat. As of November, 2015, that earliest date was Feb., 1997 -- before the 1998 El Nino. A trend to today starting at the high point of 1998 would be downward. See
http://www.climatedepot.com/2015/11/04/no-global-warming-at-all-for-18-years-9-months-a-new-record-the-pause-lengthens-again-just-in-time-for-un-summit-in-paris/

What does this mean? As David points out, the starting date is cherry-picked. It's not random. So, it shouldn't be used to project future warming. What the 18+ year pause does show is that the IPCC model predictions have not been at all accurate. I am told that the models show troposphere warming will be faster than surface warming, so the models have failed very badly.

Where does this leave us? We know from the physics that CO2 causes global warming. And the warming of the planet confirms this. But, IMHO, the IPCC projections of the rate of future warming deserve no credence.

David Appell said...

Joe: I think the error bar for the slope of the linear regression varies like 1/N^2 for the measurement errors, and 1/N^1.5 for the statistical errors.

Using the equation here for the least squares best estimate of the slope -- beta-hat (hat=caret)

https://en.wikipedia.org/wiki/Simple_linear_regression#Fitting_the_regression_line

and for simplicity assuming all the measurement errors are the same (call it D), then I find the uncertainty in beta-hat is

d(beta-hat) ~ 6D/N^2

I think I'm right about the variation of the statistical error. Anyway, the contribution to the slope's uncertainty comes almost entirely from the noise, not the measurement errors.

Do you agree?

JoeT said...

David, I wasn't actually questioning your calculation, I was just making sure I understood the graph. I didn't look at the Wikipedia page, but off the top of my head I would have thought that the uncertainty in the slope is inversely proportional to the square root of the sample size. We know the error in the intercept has to have that dependence because it's similar to the error in the mean. I thought the error in the slope has the same dependence as the error in the intercept. But perhaps I'm wrong.

But rather than debate the statistical analysis of the satellite data, at this point I don't fully understand the physics behind the satellite data. When I get some time, I was thinking this would be something worthwhile for me to investigate further.

David Appell said...

Hi Joe. OK, I see now. (Still haven't had any time to work out the N dependence. I may have been thinking about autocorrelation.)
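
(When I do, a quick Monte Carlo like the sketch below should settle it -- this assumes equally spaced points with equal iid errors D:)

import numpy as np

rng = np.random.default_rng(0)

def slope_sd(N, D=0.1, trials=2000):
    """Empirical standard deviation of the OLS slope for N equally spaced
    points whose only variation is iid noise of size D."""
    x = np.arange(N, dtype=float)
    A = np.column_stack([x, np.ones(N)])
    slopes = [np.linalg.lstsq(A, rng.normal(0.0, D, N), rcond=None)[0][0]
              for _ in range(trials)]
    return np.std(slopes)

for N in (50, 100, 200, 400):
    # textbook OLS: sd(slope) = D / sqrt(sum((x - xbar)**2)), which for
    # equal spacing goes like sqrt(12) * D / N**1.5
    print(N, round(slope_sd(N), 5), round(np.sqrt(12) * 0.1 / N**1.5, 5))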

Anyway, the satellite algorithms are very tricky stuff. Basically they convert microwave emissions from atmospheric oxygen into a temperature. The problem is that the calculation is fraught with biases -- in what's called the "weighting function" for the microwaves, in the effect of the satellites' (downward) orbital drift, in the change in the local time when the satellites cross the equator, in what's called the warm target factor (a calibration of the instrument), and more.

Some people think satellites are the cleanest way to measure atmospheric temperatures (no urban heat island effects, for example), but that's far from clear to me (in fact, I disagree). Today at the AGU meeting I heard that the diurnal correction (for the changing time when the satellites cross a reference point like the equator) can give rise to a +/- 1 K change in the calculated temperature, and out of this people are trying to pick out changes of a few hundredths of a degree.

A lot of unaccounted-for biases have been found over the years, most of which the UAH group resisted correcting until they had no choice -- and suspiciously, most of their biases have been on the cold side.

I don't have a good paper to recommend; maybe I'll try to find one next week. Any early papers by RSS (Carl Mears, Frank Wentz) or Christy & Spencer might work.

JoeT said...

Thanks very much David. The difficulty regarding the diurnal correction is most interesting. This is what I like about your blog -- data and cutting-edge science. Speaking for myself, and probably many others here, this is far more interesting than perpetually rehashing old denier arguments. If you do have a good reference on the satellite data, I'd appreciate it. Also, if you have any more interesting tidbits from the conference that are not in your published work, that would be interesting.

One more thing, do you understand the argument that's been presented whereby Spencer admits the satellite data is not a good proxy for the surface temperature? Something to do with total precipitable water. I tried asking this question at skepticalscience, but didn't get anywhere.

David Appell said...

Thanks Joe. Here are the links to the satellite data:

UAH LT http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/uahncdc_lt_6.0beta4
UAH LT - all regions http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/uahncdc_lt_6.0beta4
UAH MT http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tmt/uahncdc_mt_6.0beta4
UAH LS http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tls/uahncdc_ls_6.0beta4
RSS LT http://images.remss.com/msu/msu_time_series.html
RSS LT - all regions ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt
RSS MT - all regions ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tmt_anomalies_land_and_ocean_v03_3.txt
RSS LS ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tls_anomalies_land_and_ocean_v03_3.txt

LT = lower troposphere
MT = middle troposphere
LS = lower stratosphere

Entry pages:
http://nsstc.uah.edu/climate/
http://images.remss.com/msu/msu_time_series.html

David Appell said...

Joe: About your second question: I saw where Spencer said that, but I can't find the link right now. I didn't really understand it. The commenter on Spencer's site who replied to him did seem to understand it and its implications.

I'm hoping to dig into it when I get more time.

David Appell said...

And you might find this useful:

"Climate Algorithm Theoretical Basis Document (C-ATBD)"
RSS Version 3.3 MSU/AMSU-A
Mean Layer Atmospheric Temperature
http://images.remss.com/papers/msu/MSU_AMSU_C-ATBD.pdf

Here's a list of the major corrections to UAH's model (some of which were acquiesced to only after intense scientific combat):

http://en.wikipedia.org/wiki/UAH_satellite_temperature_dataset#Corrections_made

https://www.skepticalscience.com/satellite-measurements-warming-troposphere.htm

David in Cal said...

David -- what do you think of this study? "New study of NOAA's U.S. climate network shows a lower 30-year temperature trend when high quality temperature stations unperturbed by urbanization are considered": http://wattsupwiththat.com/2015/12/17/press-release-agu15-the-quality-of-temperature-station-siting-matters-for-temperature-trends/

To me, it seems reasonable to look at the trend only at temperature stations that don't need adjustment, because the adjustment process is so uncertain.

Cheers

David Appell said...
This comment has been removed by the author.
David Appell said...

I did a quick calculation. If the current USA48 trend is S, and the Watts et al claim is true that it is really (2/3)S, then the change in the global trend (weighting by areas) will be -fS/3, where f is the ratio of the area of USA48 to the area of the globe (f = 1.6%).

So f/3 = 0.005, and the change in the global trend is only about -S/200.
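
In code form, just to check the arithmetic:

f = 0.016      # USA48 as a fraction of Earth's surface area
print(f / 3)   # 0.0053..., i.e. the global trend would change by roughly -S/200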

David Appell said...

David: Good question. I don't think much of it.

1) It hasn't been peer-reviewed or published yet. I've almost never seen someone put out a press release before a study is published. I suspect I know why it's being done here.

2) It's suspicious that these claimed siting issues went quiet for several years after they were first raised, while the surface data showed a hiatus. They're being resurrected now because the Karl et al revisions no longer show a hiatus.

3) The continental US is only 1.6% of the globe by area, so even if the claim is true it would have very little effect on global trends.

You?