Sunday, July 29, 2012

Venema Review of Watts et al 2012

Victor Venema, a meteorologist at the University of Bonn, has a useful review of Watts et al 2012. He writes:
Thus what he found is that the Urban Heat Island effect exists. I did not know that this was controversial.

Good news is that the study finds that after homogenization, the station quality is no longer a problem for the mean temperature.
Venema understands the temperature data homogenization issues, and concludes:
Had the new study found clear differences in the temperature trend in the homogenized data, the study would have been interesting for the general public, because it is the homogenized data that is used to compute large-scale trends in the real climate. If the homogenized data were still partially polluted by the urban heat island effect, that would have been an error. The aim of homogenization is exactly to remove artificial changes from the raw data. It seems to do so successfully, now acknowledged by WUWT for the second time.
Earlier he posted this review of data homogenization of monthly and annual data from surface stations. It gets into the weeds, but of course that's often where the science resides.
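For readers who don't want to wade into those weeds, here is a minimal sketch in Python of the core idea behind relative homogenization -- my own illustration, not code from any of the papers discussed: compare a candidate station to a reference built from its neighbors, and test the difference series for a step change, in the spirit of the SNHT test.

import numpy as np

def detect_break(candidate, reference):
    # Relative homogenization in miniature: the regional climate signal
    # cancels in the candidate-minus-reference difference series,
    # leaving station-specific artifacts behind.
    diff = np.asarray(candidate) - np.asarray(reference)
    z = (diff - diff.mean()) / diff.std(ddof=1)
    n = len(z)
    best_t, best_stat = None, -np.inf
    for t in range(2, n - 2):
        # SNHT-style statistic: how strongly do the segment means differ?
        stat = t * z[:t].mean()**2 + (n - t) * z[t:].mean()**2
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t, best_stat

# Toy example: a 0.5 C artificial jump (say, a station move) at year 30
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 0.3, 60)              # neighbor composite anomalies
candidate = reference + rng.normal(0.0, 0.1, 60)  # same climate, local noise
candidate[30:] += 0.5                             # the inhomogeneity
print(detect_break(candidate, reference))         # finds the break near index 30

# A real algorithm compares the statistic against a significance threshold
# and adjusts the data before the break, prior to any trend analysis.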

This is clearly a paper that needs a thorough peer review, which I suspect will take a good while if it's done right. Watts et al don't say where it is being submitted (you almost get the impression that's an afterthought), but that will matter. (E&E, for example, won't cut it.)

Then there are the deluded people who somehow believe one paper about 2 percent of the globe that finds that quality sites have a 1979-2008 trend of +0.155 C/decade (three significant figures?? error bars??) will change the debate about global warming and the influence of CO2. (Just like, I guess, the other hundred times they thought some result settled the debate for good.) It won't, of course.

20 comments:

charlesH said...

Pielke Sr.

"This paper is a game changer, in my view, with respect to the use of the land surface temperature anomalies as part of the diagnosis of global warming."

"Anthony has led what is a critically important assessment of the issue of station quality. Indeed, this type of analysis should have been performed by Tom Karl and Tom Peterson at NCDC, Jim Hansen at GISS and Phil Jones at the University of East Anglia (and Richard Muller). However, they apparently liked their answers and did not want to test the robustness of their findings."

http://pielkeclimatesci.wordpress.com/2012/07/29/comments-on-the-game-changer-new-paper-an-area-and-distance-weighted-analysis-of-the-impacts-of-station-exposure-on-the-u-s-historical-climatology-network-temperatures-and-temperature-trends-by-w/

Dano said...

"This paper is a game changer, in my view, with respect to the use of the land surface temperature anomalies as part of the diagnosis of global warming."

We don't need any more evidence that poor RP Sr is off the deep end, we know he's a fake expert. But thanks for the cut-and-paste work!!!!!!!!!!!

Best,

D

charlesH said...

I think this is a pretty good report on Watts et al. that sets out the context understandably.

Basically, if one uses the Watts method, the US land data and the global satellite data come into much closer agreement.

(0.124 C/decade US vs 0.134 C/decade global sat)

http://www.theregister.co.uk/2012/07/30/watts_et_al_temperature_bombshell/page2.html

David Appell said...

Except satellites measure the lower troposphere, not the surface temperature.

Except Watts et al pertains to the US only, and except UAH's LT trend for USA48 is 0.23 +/- 0.08 C/decade (Dec 1978-present).

Except Watts et al measure, for sites they judge to be of sufficient quality, 0.155 C/decade. (They didn't bother to give that number's statistical significance, which makes their calculation somewhat useless.)

David Appell said...

The UAH LT USA48 trend from 1979-2008 is 0.25 +/- 0.10 C/decade.

(95% confidence level, no correction for autocorrelation).
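For anyone who wants to reproduce numbers like these, here is a minimal sketch in Python (my own, run on a synthetic stand-in series rather than the actual UAH anomalies) of how a decadal trend and its 95% confidence interval are typically computed, including the simple AR(1) correction for autocorrelation that the figure above omits:

import numpy as np

def trend_with_ci(anomalies, per_year=12):
    # OLS trend of a monthly anomaly series, in C/decade, with a 95% CI,
    # both raw and with a lag-1 (AR(1)) autocorrelation adjustment.
    y = np.asarray(anomalies, dtype=float)
    t = np.arange(len(y)) / (per_year * 10.0)   # time in decades
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    n = len(y)
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1)                 # effective sample size
    se_adj = se * np.sqrt(n / max(n_eff, 3.0))      # inflated standard error
    return slope, 1.96 * se, 1.96 * se_adj

# Synthetic 30-year stand-in: a 0.25 C/decade trend plus monthly noise
rng = np.random.default_rng(1)
fake = 0.25 * np.arange(360) / 120 + rng.normal(0, 0.3, 360)
slope, ci, ci_adj = trend_with_ci(fake)
print(f"{slope:.2f} +/- {ci:.2f} C/decade (raw), +/- {ci_adj:.2f} (AR(1)-adjusted)")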

charlesH said...

1) One would expect the LT satellite trend to be higher than the surface trend (so I've read).

2) The Watts result fits quite well with the global average, taking into account 1). One would expect the mid latitudes to track the global average, the tropics to be under the average, and the poles to exceed the average.

Bottom line:

a) Watts strengthens the credibility of the satellite record.

b) Good question posed by Pielke Sr.: " this type of analysis should have been performed by Tom Karl and Tom Peterson at NCDC, Jim Hansen at GISS and Phil Jones at the University of East Anglia (and Richard Muller). However, they apparently liked their answers and did not want to test the robustness of their findings."

Paul S said...

charlesH,

1) There is no expectation of a larger TLT trend over land, only over ocean. Over large continental areas, such as the USA, it may even be expected that near-surface should warm faster than TLT.

2) The global average you're looking at is land+ocean. As David has already noted, the CONUS satellite trends for 1979-2008 are closer to the NOAA figure than to Watts' 0.155°C/decade. His results are completely inconsistent with the satellite data.

charlesH said...

Paul S.

So the highest quality land instruments disagree with the sat data? Interesting. I wonder what that means?

Paul S said...

charlesH,

That's something which will have to be investigated and understood. I would suggest the most likely meaning is that the raw station trends in Watts' class 1/2 category require adjustment to be climatically meaningful. That is, so they don't simply reflect changes in time of observation or station siting conditions.

I think it's a reasonable assumption that many stations which are currently well-sited are well-sited precisely because they have recently been moved to a better location, which would require an adjustment to compensate.

Investigation may point to a slightly different trend than NOAA's (and BEST's and GISS's) 0.3°C/decade, but I would suggest anything lower than ~0.25°C/decade is unlikely.

charlesH said...

Paul,

So why wasn't the Watts survey done 20 years ago by those getting paid to maintain temperature recording sites?

Seems such an obvious thing to do. Clearly it's not "rocket science". It could be done (and was done) by less than Nobel-caliber talent on a shoestring budget.

Paul S said...

CharlesH,

The GHCN files do contain metadata on most of the stations, detailing siting information (example here; the metadata for this station is under the graph). Presumably much of this information would have come from photos, or from reports by those running the stations, or from people visiting the sites. The issue would be whether or not this information provides a detailed enough description of the microsite for today's users. Also, it could be out of date.

Watts has only run on a shoestring because he's managed to get hundreds of people to put in hours of work for free. Also, his project has depended on Internet connectivity and digital photography. The way he's done things wouldn't have been technically possible twenty years ago.

Dano said...

So why wasn't the Watts survey done 20 years ago by those getting paid to maintain temperature recording sites?

Seems such an obvious thing to do. Clearly it's not "rocket science". It could be done (and was done) by less than Nobel-caliber talent on a shoestring budget.


Surely you can't seriously be implying that no work has been done to quantify the UHI and the effects on measuring stations.

Why you would imply such a thing is a mystery.

Best,

D

David Appell said...

Paul: You wrote, "There is no expectation of a larger TLT trend over land, only over ocean."

When I calculate the UAH LT trends, I get for their entire record:

Global: 0.14 +/- 0.02 C/decade
Global land: 0.17 +/- 0.03 C/decade
Global ocean: 0.11 +/- 0.02 C/decade
USA48: 0.23 +/- 0.08 C/decade

Source:
http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt

What do you think it means that the ocean trend is less than the global trend?
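For what it's worth, here is a sketch of how one might reproduce these from that file. I'm assuming the usual layout of uahncdc.lt -- whitespace-delimited columns under a header row naming the regions (Year, Mo, Globe, Land, Ocean, ..., USA48), with trend/summary lines at the bottom that must be dropped -- so check the actual file before trusting the column names:

import numpy as np
import pandas as pd

url = "http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt"
df = pd.read_csv(url, sep=r"\s+", engine="python", on_bad_lines="skip")

# Keep only genuine data rows (the file ends with trend/summary lines)
df = df[pd.to_numeric(df["Year"], errors="coerce").notna()].astype(float)

time_dec = (df["Year"] + (df["Mo"] - 0.5) / 12) / 10   # time in decades
for region in ["Globe", "Land", "Ocean", "USA48"]:     # column names assumed
    slope = np.polyfit(time_dec, df[region], 1)[0]
    print(f"{region}: {slope:+.2f} C/decade")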

Paul S said...

David,

I meant that the ocean TLT trend is expected to be larger than the ocean surface trend, by an amplification factor of about 1.4.

Obviously you can see it isn't: they're about equal. Though an interesting observation is that the gap between expected and measured over-ocean amplification (1.4 expected vs. about 1 measured) and the gap between expected and measured over-land amplification (1 expected vs. about 0.7 measured) are roughly the same, proportionally.
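Spelling out the arithmetic behind "roughly the same proportionally", using the round numbers above:

# Measured amplification as a fraction of expected amplification
ocean = 1.0 / 1.4   # ~0.71: measured ~1 vs. expected ~1.4
land = 0.7 / 1.0    # 0.70: measured ~0.7 vs. expected ~1
print(ocean, land)  # both fall short of expectation by roughly 30%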

Land-sea warming contrast is still expected in the TLT data, with land warming faster during the transient response, but the difference is expected to be about 30-40% smaller than the land-sea contrast at the surface.

David Appell said...

Charles: Andrew Revkin has a response from NCDC that provides context on their work to improve the quality of the USHCN dataset:

http://dotearth.blogs.nytimes.com/2012/07/30/a-closer-look-at-climate-studies-promoted-before-publicatio/

charlesH said...

Paul,

"Watts has only run on a shoe-string because he's managed to get hundreds of people to put in hours of work for free. Also, his project has depended on Intenet connectivity and digital photography. The way he's done things wouldn't have been technically possible twenty years ago."

OK, I agree an independent like Watts couldn't have done it nearly as easily 20 years ago. But how about those paid to maintain the network? We've been spending billions of dollars each year on AGW-related science, and no one thought to audit and calibrate the absolute foundation of the relevant data, the land temperature measuring sites? And, after auditing, to classify the sites on a 1-5 scale? Then report the temperature trends by classification?

Seems to me something a high school science student might have done.

At any rate, it will be interesting to see how this develops.

Louis Hooffstetter said...

"Then there are the deluded people who somehow believe one paper about 2 percent of the globe, that finds that quality sites have a 1979-2008 trend of +0.155 C/decade will change the debate about global warming and the influence of CO2."

The USHCN consists of 1218 stations in the United States. HadCRUT uses just over 3000 stations. I believe HadCRUT uses all of the USHCN stations, making them approximately 40% of its data set. Regardless, Watts et al. has uncovered serious flaws with a large percentage of the data used to demonstrate anthropogenic global warming is real. The "deluded people" simply want to see the data corrected to find out just how much humans are affecting the climate. If Venema were a serious scientist, he would want the same thing.

David Appell said...

Louis: The US is 2% of the globe's surface area -- it says very, very little about global trends. The number of stations in the US, relative to the number across the globe, is not the relevant factor -- those stations still cover only 2% of the globe, no matter how many of them there are.

Dano said...

Watts et al. has uncovered serious flaws with a large percentage of the data used to demonstrate anthropogenic global warming is real.

Only if it passes peer review. A press release doesn't validate your self-identity, no matter how hard you wish. This is reality, not your wish.

Best,

D

Victor Venema said...

Louis Hooffstetter said..."Regardless, Watts et al. has uncovered serious flaws with a large percentage of the data used to demonstrate anthropogenic global warming is real. The "deluded people" simply want to see the data corrected to find out just how much humans are affecting the climate. If Venema were a serious scientist, he would want the same thing."

The frivolous scientist Venema wrote on his blog: "Roger Pielke Sr. rightly praises his friend Anthony Watts for his work on SurfaceStations.org." Good-quality data is the basis of climatology. Watts' work on the quality of the surface stations has helped create political pressure to take these issues more seriously, and the classification itself is valuable. The communication with NOAA was less than optimal, and whether the classification changes anything about the debate on climate change remains to be seen.

Still, I wish there were similar initiatives in other countries. The small departments that are responsible for producing the climate data are not powerful enough. Governments abide by the will of the people and prefer to spend research money in the private sector and to buy expensive supercomputers and satellites. Spending the money on scientists would increase the number of government employees, that is, increase the bureaucracy. The modern meme is: private is good, bureaucracy is bad. The sad scientist Venema thinks governments should spend tax money efficiently and feels that the private-public distinction is often not productive.

An entirely different question is whether the quality of the data is good enough to compute a trend in the annual mean temperature over the US or over the globe. If you want to use single stations (which you should normally not do), or analyse the climate of a small region, or analyse changes in extreme weather, data quality is a serious problem. The dubious scientist Venema even wrote a post about the problems with daily data and extreme weather.

To determine trends in the annual mean temperature, I see no indication that the data quality is insufficient. The fun scientist Venema himself has coordinated a blind validation study of homogenization algorithms. The algorithms were found to improve the estimate of the temperature trends, and there have been many other validation studies as well. The "study" of Watts et al. does not examine how homogenization works -- neither the physical time-of-observation bias corrections nor the statistical homogenization algorithms. One wonders where the serious scientist Watts gets the wisdom that homogenization makes the trends worse. The only indication he seems to have is that the trends in the homogenized data are stronger, ignoring all the known reasons why this is the case (time of observation bias and the transition to automatic weather stations).
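Since the time of observation bias keeps coming up, the didactic scientist Venema offers a toy simulation in Python -- an illustration only, not NOAA's actual correction method -- of why the observation time matters: a min/max thermometer reset near the daily maximum can count one hot afternoon in two successive 24-hour windows, so the historical USHCN shift from afternoon to morning observation produced an artificial cooling step that homogenization has to add back.

import numpy as np

rng = np.random.default_rng(2)
days = 365
hours = np.arange(days * 24)
# Toy hourly temperatures: diurnal cycle (max at 15h) plus day-to-day noise
temps = (15 + 8 * np.cos(2 * np.pi * ((hours % 24) - 15) / 24)
         + np.repeat(rng.normal(0, 3, days), 24))

def mean_of_tmax_tmin(reset_hour):
    # Average of (Tmax + Tmin)/2 over 24-hour windows ending at reset_hour
    vals = []
    for d in range(1, days):
        window = temps[(d - 1) * 24 + reset_hour : d * 24 + reset_hour]
        vals.append((window.max() + window.min()) / 2)
    return np.mean(vals)

print(f"afternoon (17h) observer: {mean_of_tmax_tmin(17):.2f} C")
print(f"morning    (7h) observer: {mean_of_tmax_tmin(7):.2f} C")
# Same simulated weather, different reset times: the afternoon observer
# reads warm because hot afternoons set the maximum in two consecutive
# windows. Switching a station from 17h to 7h observation therefore
# creates a spurious cooling that homogenization must correct.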