There's been one little error in UAH's new data for the lower troposphere -- the baseline (1981-2010) for the global lower troposphere didn't have an average anomaly of zero. That's pretty trivial.
It's being fixed. The data is a beta version, after all -- this is why you put it out there in the first place.
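To make the point concrete, here's a minimal sketch of what that kind of baseline check looks like -- a made-up monthly series in Python, not UAH's data or code: re-center the anomalies on the 1981-2010 mean and verify the baseline average really is (near) zero.

    import numpy as np

    # Hypothetical monthly global anomalies, Dec 1978 onward (made-up numbers).
    years = np.arange(1978 + 11/12, 2015 + 4/12, 1/12)
    anoms = 0.011 * (years - years[0]) + np.random.normal(0, 0.1, years.size)

    # Re-center on the 1981-2010 baseline period.
    in_base = (years >= 1981) & (years < 2011)
    recentered = anoms - anoms[in_base].mean()

    # The baseline mean should now be ~0; if it isn't, something slipped.
    print("baseline mean:", recentered[in_base].mean())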
But this raises a larger question. How can we trust any dataset that's put out there -- not just UAH's, but any and all of them, dozens if not hundreds?
The algorithms now are so complex that to be sure -- really sure -- I'd have to acquire the raw data, construct a data model (algorithm), and run it myself. Obviously I can't do that, probably not even if I had the time, and certainly not for 99% of the data out there -- and neither can you.
If I had to guess, I would say that none of the datasets is exactly right. All of them will contain errors. The big errors are easy to catch, because they're big, but anyone who's ever coded and worked with data knows there's always the possibility of a zillion little errors lurking while your computer spits out numbers that look fine and make you happy.
At some point, when you get results that look plausible -- no obvious errors, lots of internal checks, reasonable agreement with other work (if there is such) -- you stop and say, here are my results. That judgment necessarily includes your own biases -- you simply cannot help it -- and it doesn't mean there are no more errors.
It'd be wonderful if there were observations or experimental results to compare against. But that's very rare, and if such data did exist, you'd have checked against it yourself and not published if there wasn't agreement.
This is a big problem in science, or in any field that does data analysis, especially when the science has public implications. We all believe the data we think supports our views, and have to struggle mightily to deal with data that doesn't. But it is always going to involve trust, and past results, and reputations, and more.
So when I point out some big changes in UAH's dataset, I really have no idea whether their version 6.0beta is better than v5.6 or not. It agrees better with RSS, so that's a strong point in its favor. On the other hand, some of its corrections are greater than 1 C, which is bigger than the warming expected since the start of their dataset.
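The only check I can realistically do is crude: put both versions' global monthly anomalies on a common baseline and see where they differ most. Something along these lines -- the filenames and column layout here are my assumptions, not UAH's actual file formats:

    import pandas as pd

    # Assumed layout: whitespace-delimited columns of (year, month, global anomaly).
    v56 = pd.read_csv("uah_v5.6_global.txt", sep=r"\s+", names=["year", "month", "anom"])
    v60 = pd.read_csv("uah_v6.0beta_global.txt", sep=r"\s+", names=["year", "month", "anom"])

    m = v56.merge(v60, on=["year", "month"], suffixes=("_v56", "_v60"))

    # Put both on a common 1981-2010 baseline, so a baseline shift isn't mistaken for a correction.
    base = (m.year >= 1981) & (m.year <= 2010)
    m["d"] = (m.anom_v60 - m.anom_v60[base].mean()) - (m.anom_v56 - m.anom_v56[base].mean())

    # The months where the two versions disagree the most.
    print(m.loc[m.d.abs().nlargest(5).index, ["year", "month", "d"]])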
Science moves a lot slower than public opinion. That should be a good thing, except in an environment like today's.
David, I've been reading your site for a while. Thank you for blogging.
I'd like to address a few of the elephants in the room regarding UAH and RSS. Even though they "match," I have a feeling they are not resolving clouds and aerosols correctly. Greenhouse gas forcing, for all intents and purposes, should raise the temperature of the lower troposphere faster than the surface. Therefore, the data doesn't quite match the physical expectations. In addition, I'm having a really hard time believing the TLT would be approximately the same temperature in 2001 as in 2013-2014 when SSTAs are nearly 0.3 C higher! There are plenty of examples like that in the recent history of the dataset that make little sense. Something doesn't add up.
I'm curious about your thoughts here.
You are right: even if the principles of your data processing are sound, it is easy to make an error in the implementation. That is why you should always test the full software package using test data (and ideally its individual parts as well).
We do so for homogenization methods for station climate data.
No idea if such validation studies have also been done for microwave satellite retrievals of tropospheric temperature. You could simulate, based on a climate model run, what a satellite would see, then apply the UAH algorithm and check whether the retrieved temperature trend is the same as the trend originally in the model.
If someone knows of such validations, please let me know.
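To make the idea concrete, here is a toy sketch of that logic in Python. The "truth" is just a synthetic trend standing in for a model run, the drift stands in for something like orbital decay, and the correction step is invented -- nothing here is the UAH algorithm:

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1979, 2015, 1/12)

    # "Truth": a model-like series with a known trend of 0.15 C/decade plus noise.
    truth = 0.015 * (years - years[0]) + rng.normal(0, 0.1, years.size)

    # Simulated measurement: truth plus a spurious instrument drift.
    drift = -0.005 * (years - years[0])
    measured = truth + drift

    # A correction step that (imperfectly) estimates and removes the drift.
    est_drift = -0.004 * (years - years[0])   # suppose the algorithm slightly underestimates it
    corrected = measured - est_drift

    for name, series in [("truth", truth), ("measured", measured), ("corrected", corrected)]:
        slope = np.polyfit(years, series, 1)[0] * 10   # C per decade
        print(f"{name}: {slope:+.3f} C/decade")

In a real validation the truth would be model output sampled the way the satellites sample the atmosphere, and the correction would be the actual retrieval code; the point is that because the target is known, the error in the recovered trend can be measured directly.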
Drew: Good thoughts; I'm as puzzled as you, for many of the same reasons. I think the lower troposphere trend is supposed to be 1.2 times the surface trend, according to theory. Instead it is 0.7.
With so many big changes, UAH has essentially started from scratch, it seems to me. There were a lot of errors found in their original algorithm:
http://en.wikipedia.org/wiki/UAH_satellite_temperature_dataset#Corrections_made
Who knows, but the fact that theory is so far off smells funny. But what do I know.
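For what it's worth, the amplification number is easy to check once you pick the series: fit a trend to a surface record and to a TLT record over the same period and take the ratio. A sketch, with placeholder arrays where the real monthly data would go:

    import numpy as np

    def trend_per_decade(years, anoms):
        """Ordinary least-squares trend, in C per decade."""
        return np.polyfit(years, anoms, 1)[0] * 10

    # Placeholders: substitute real monthly series (e.g. a surface record and UAH or RSS TLT).
    years = np.arange(1979, 2015, 1/12)
    surface = 0.016 * (years - years[0]) + np.random.normal(0, 0.08, years.size)
    tlt     = 0.012 * (years - years[0]) + np.random.normal(0, 0.12, years.size)

    ratio = trend_per_decade(years, tlt) / trend_per_decade(years, surface)
    print(f"TLT/surface trend ratio: {ratio:.2f} (theory expects ~1.2)")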