Compared to 12 months earlier, the 0-700 meter region of the ocean has gained 0.94 W/m2, and the 0-2000 meter region 0.71 W/m2.
Usually the larger region (0-2000 m) gains more heat than the smaller region (0-700 m), but not this time. Perhaps that's due to El Nino -- the same happened in the latter part of 2010, which was also an El Nino year.
The (tenuous) acceleration of heating drops a hair: +1.5 ZJ/yr2, or +48 TW/yr2, or +0.09 W/m2/yr (for the top 2,000 meters). (ZJ = zettajoules = 10^21 J; TW = terawatts = 10^12 W.) I'm trying to understand how to include autocorrelation for a polynomial fit, and will post that if I do.
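A quick sanity check of those unit conversions (the seconds-per-year and Earth-area values below are my round numbers, not from the data files):

```python
# Sanity check: +1.5 ZJ/yr^2 expressed as TW/yr^2 and as W/m^2/yr.
ZJ = 1e21                  # zettajoule, in joules
TW = 1e12                  # terawatt, in watts
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA = 5.1e14        # m^2, Earth's total surface area

accel_W = 1.5 * ZJ / SECONDS_PER_YEAR  # J/yr^2 -> W/yr^2
print(accel_W / TW)          # ~48 TW/yr^2
print(accel_W / EARTH_AREA)  # ~0.09 W/m^2/yr
```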
62 comments:
Nice post, David. This is why you can't leave -- there's always a lot to think about.
Just a couple of silly questions first:
1- You must be using the surface area of the earth, not the oceans, yes? I get 1 W/m2 for the top 2000 meters compared to your 0.71. From here the earth's surface is 510 million km2 compared to ocean surface of 360 million km2. That accounts for the difference in our numbers. Is there a reason to use the earth's surface?
2- Why the hell are there negative numbers in the ocean heat content? These aren't anomalies.
I especially like your point that the upper 700 m is warming more than the lower 1300 m. Presumably this is because heat is coming up from below, due perhaps not only to El Nino but also to the reversal of the PDO from negative to positive?
Also, should we not take this 1 W/m2 as the radiative imbalance? This is at the upper range of the 0.5 - 1.0 W/m2 in Trenberth 2014.
And finally, if indeed there is a second derivative in the forcing as you show, this should show up as a third derivative in the sea level, yes?
Thanks Joe.
Yes, I used the entire surface area of the Earth, which I usually note but didn't this time. That's because almost all (about 93%, averaged over time) of the trapped heat goes into the ocean. I think that number comes from Trenberth -- Skeptical Science has it somewhere.
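Joe's 1 W/m2 and my 0.71 W/m2 differ only by that choice of area; a quick sketch using the round areas from his comment:

```python
# Same total ocean heat gain, expressed per square meter of Earth's
# surface vs per square meter of ocean surface.
EARTH_AREA = 510e12  # m^2 (510 million km^2)
OCEAN_AREA = 360e12  # m^2 (360 million km^2)

flux_per_earth = 0.71                      # W/m^2 over Earth's surface
total_power = flux_per_earth * EARTH_AREA  # total W going into the ocean
flux_per_ocean = total_power / OCEAN_AREA  # W/m^2 over ocean surface
print(round(flux_per_ocean, 2))  # ~1.01
```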
The negative numbers in the OHC data are just anomalies with respect to some baseline, whatever NOAA is using, not absolute numbers, just like GISS or NOAA has negative anomalies with respect to their chosen baseline.
It took a few minutes to find, but it seems NOAA is calculating the heat anomalies with respect to a 1955-2006 baseline -- see the last answer #3 to this FAQ:
http://www.nodc.noaa.gov/OC5/wod-woa-faqs.html
It seems to me this is especially required for *heat* data, though it's also done for atmo temperature data (and SSTs) too. That's because it's impossible to answer "how much heat" is in the ocean, since heat is the transfer of energy and not energy itself.
It's impossible to really say "how much heat" is in the ocean -- heat is a relative, not absolute quantity. So you have to compare its new value to some old value and speak of heat added or heat lost.
Joe: Yes, I didn't think of the PDO's flip. That may be influencing things too.
You're right. I don't think this calculated energy imbalance is the net radiative forcing. I've asked a couple of people about this from time to time, but to be honest I've never really understood their answers, and -- one of those things -- I'm not really sure why I don't understand it. It might be because planetary energy imbalance is measured w.r.t. the top of the atmosphere, but radiative forcing is w.r.t. the tropopause after allowing the stratosphere to come back to equilibrium:
https://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-2.html
So I think it has to do with the technical definition of radiative forcing. Energy is easier to think of, but I don't know exactly how to relate the two.
Joe wrote:
"And finally, if indeed there is a second derivative in the forcing as you show, this should show up as a third derivative in the sea level, yes?"
Hmmmm....
Ocean heat change is proportional to temperature change, and sea-level *rise* (i.e., sea level height's first derivative) is proportional to temperature change with time.
So I would guess that the _second_ derivative of ocean heat change, i.e. the acceleration I came up with in this post, is proportional to the second derivative of sea level height, i.e. the acceleration in sea level rise. (?)
What were you thinking?
Thanks for the clear explanations David. As the FAQ clearly showed, these are indeed anomalies. As to "what were you thinking?", not very clearly as it turns out. I was thinking that the rate of sea level rise is proportional to the forcing, so the third derivative of the sea level goes with the second derivative of the forcing. From NASA's write-up on forcings, there does indeed appear to be a second derivative in the forcing, but as you point out, the forcing is proportional to the derivative of the ocean heat content. My mistake.
The bigger picture is that there is a rather nice consilience between the increase in ocean heat content and the sea level rise (due to thermal expansion), measured independently, that doesn't get stressed enough. As has been pointed out by many, that's where the real global warming is occurring.
I still don't quite see why this isn't the radiative imbalance. Sure, melting glaciers need to be taken into account, but I'm happy with back-of-the-envelope estimates because it helps me understand. Are there not satellite measurements at the TOA that compare incoming and outgoing radiation?
Thanks again, David
I'm a bit dubious about this quarterly update. It was published about a month earlier than usual and the World Ocean Database, which is apparently their source for raw data, only shows updates to November. I wouldn't be surprised if December isn't included.
My understanding is that ocean temperature readings are less precise than land or troposphere. I wonder how big the uncertainty is in these figures.
Cheers
David, To try and answer my own question about the satellite measurements at the TOA, I reread Stephens 2012. Here's the key statement:
"For the decade considered, the average imbalance is 0.6 = 340.2 − 239.7 − 99.9 Wm–2 when these TOA fluxes are constrained to the best estimate ocean heat content (OHC) observations since 2005 (refs 13,14). This small imbalance is over two orders of magnitude smaller than the individual components that define it and smaller than the error of each individual flux. The combined uncertainty on the net TOA flux determined from CERES is ±4 Wm–2(95% confidence) due largely to instrument calibration errors 12,15. Thus the sum of current satellite-derived fluxes cannot determine the net TOA radiation imbalance with the accuracy needed to track such small imbalances associated with forced climate change 11."
Thus it seems that OHC is needed to constrain the uncertainty in the satellite measurements. Let me know if your understanding is different.
DiC, if the readings were not precise then we may expect the trend to be buried in a very noisy signal. Instead the trend is very clear.
BTW D-I-C, if you had simply gone to the web site that David had pointed to, the uncertainty is given in the data file. Why don't you tell us how that uncertainty compares to the surface and the lower troposphere?
DiC wrote:
"My understanding is that ocean tempeature readings are less precise than land or troposphere."
From Argo's FAQ:
"The temperatures in the Argo profiles are accurate to ± 0.002°C...."
http://www.argo.ucsd.edu/FAQ.html
"I wonder how big the uncertainly is in these figures."
On this page, click on the link "figures with error bars":
http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/index.html
JoNova wrote: A single ARGO buoy (which measures ocean temperatures down to 2000m) has an uncertainty of about 0.1C. But using 3,000 buoys doesn’t make that uncertainty dramatically smaller when all that data is combined together. It would, if the 3,000 buoys were all measuring the same swimming pool. But each buoy measures a different piece of ocean, and the ocean does not have one global temperature. Or it would if all the world’s ocean localities warmed by the same increment due to global warming, in each time period. But that would be a very brave assumption, because different parts of the world’s oceans probably warm at different rates due to global warming. So the measurement uncertainty is closer to the instrument error of 0.1C than the 0.004C as claimed by fans of man-made global crisis, and since the oceans have only warmed by about 0.02C (if that) since we’ve been measuring it with ARGO, that tiny amount of warming might just be noise. Going back further, the pre-ARGO data is so bad that longer datasets have much larger uncertainties.
http://joannenova.com.au/2013/05/ocean-temperatures-is-that-warming-statistically-significant/
David quotes an ARGO accuracy of ± 0.002°C. OTOH JoNova says the individual ARGO thermometers are each accurate to ± 0.1°C. I presume the ARGO figure is based on a statistical formula for sampling error. If so, it's an improper use of statistics, as JoNova explains.
I personally have little doubt that the ocean is warming. However, I am dubious about whether the data would support estimating the second derivative.
cheers
DiC, If the buoys were not accurate then the results would jump all over the place from one month to the next. They do not.
Layzej -- If I am reading the graphs correctly, they don't show the change from one month to the next. Some of them show the change in 1-year or 5-year averages. These are quite smooth. Not surprising. Two 5-year averages as of adjacent months contain the same 59 months of data: only a single month's data is different. Some charts show the movement in 3-month averages. Those are moderately bumpy.
Cheers
The OHC numbers come out quarterly. Sources:
0-700 m:
http://data.nodc.noaa.gov/woa/DATA_ANALYSIS/3M_HEAT_CONTENT/DATA/basin/3month/ohc_levitus_climdash_seasonal.csv
0-2000 m:
http://data.nodc.noaa.gov/woa/DATA_ANALYSIS/3M_HEAT_CONTENT/DATA/basin/3month/ohc2000m_levitus_climdash_seasonal.csv
DiC: Why should anyone believe Jo Nova when the Argo project says differently?
"However, I am dubious about whether the data would support estimating the second derivative."
That's a mathematical question. The quarterly OHC numbers are published, as are their uncertainties....
DiC, why is it that you seem perfectly willing to accept JoNovo's claim of an uncertainty for each float of 0.1C, without any reference whatsoever, but if ARGO claims a sensitivity of 0.002C, that you don't believe? You can even find a data sheet on the sensors here.
JoNova claims the models predict far more ocean warming than measured and writes that the warming "in the upper 750m, according to typical models, is around 6.0 Watt·year/m2 per year". What garbage. Go and look at the Hansen paper she references. He writes, "Figure 2 shows that the modeled increase of heat content in the past decade in the upper 750 m of the ocean is 6.0 +/- 0.6 (mean ± SD) W year/m2, averaged over the surface of Earth." In case you didn't catch it, that's 6 W yr/m2 over 10 years, not one. Hansen comes up with an energy imbalance of 0.85 W/m2, which is remarkably close to what David obtained above.
David -- I am no expert on sea temperature measurements, so would you or others correct me if I'm wrong. What number is the Argo project supposed to represent? That is, the one that they claim to measure within ± 0.002°C? I assume it's the average temperature or average anomaly of all the oceans, at a given depth.
Let's grant that the thermometers are now highly accurate, say within 0.1 deg C.
The first problem is whether the sample of floats properly represents the entire ocean. That is, how do they average the values? How do they know what weights to give them? I think that there are 3,000 floats for all oceans and all depths within the studied range. So, there might be only a few hundred, say, for the Atlantic Ocean at a given depth. Suppose you got, say, 400 temperature measurements of Atlantic temperature at 100 feet. They would show a wide range of readings, from frigid at the poles to very warm near the equator. How much uncertainty would there be in whether the average of these 400 measurements actually represents the true mean temperature of the Atlantic Ocean? Clearly, a lot of uncertainty.
Second problem: ± 0.002°C is ridiculously tiny. Hardly anything is measurable to that level of precision. The oceans are immense and filled with unknown temperature differentials, currents, etc. It's preposterous to imagine that one could calculate an average temperature to anywhere near that level of precision. Extraordinary claims require extraordinary proof. And, that level of accuracy would be extraordinary.
Third problem is the one Jo Nova pointed out: that this minuscule level of uncertainty is probably due to an improper use of statistics. This is the kind of thing I had in mind when I pointed out in an earlier comment that statistics is harder than it appears. The formulas are relatively simple. But it's not always easy to know when a formula can or cannot properly be used. It would be interesting to find out for sure how ARGO came up with the figure of ± 0.002°C.
cheers
One more thing: the uncertainty in the ocean heat content is a simple exercise in propagation of error, taking into account the error in each float and the partial derivative of the heat content with respect to the temperature for each sector. However, I'm not going to spend the next several hours going through the data set and finding out what very competent scientists already concluded. Apparently JoNova couldn't afford the time to do the analysis either.
But let's just do a b.o.e. estimate of the uncertainty. To a good approximation, sigma_Q/Q = sigma_T/T. Then for an increase of 22e22 joules, using a surface area of 360 million km2, a depth of 2000 m, a heat capacity of 4000 J/kg/K and a density of 1000 kg/m3, the temperature rise is 0.08 K. The uncertainty in the ocean heat content is 22e22*0.002/0.08 = 0.5e22. The uncertainty listed in the data set is 0.4e22.
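That back-of-the-envelope estimate, reproduced with the stated inputs:

```python
# B.o.e.: temperature rise implied by a 22e22 J OHC increase over the
# 0-2000 m layer, and the OHC uncertainty implied by a 0.002 K temperature
# uncertainty, via sigma_Q/Q ~ sigma_T/delta_T.
Q = 22e22        # J, OHC increase
AREA = 360e12    # m^2, ocean surface (360 million km^2)
DEPTH = 2000.0   # m
C = 4000.0       # J/kg/K, heat capacity of seawater (rounded)
RHO = 1000.0     # kg/m^3, density (rounded)
SIGMA_T = 0.002  # K, per-profile temperature uncertainty

mass = RHO * AREA * DEPTH
delta_T = Q / (mass * C)         # ~0.08 K
sigma_Q = Q * SIGMA_T / delta_T  # ~0.5e22 J
print(delta_T, sigma_Q)
```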
David, I'm not an expert either. I assume it means the temperatures taken along the path of an Argo bot are accurate to +/-0.002 C. Why don't you write Argo and ask them exactly what it means?
Why you'd believe anything from someone like Nova is beyond me.
JoeT -- I would love to follow your comment. However, I'm afraid I'm not familiar with the symbols and I can't come close to following it. Sorry.
Thanks Joe.
DiC:
T = temperature
Q = heat
dQ = m*c*dT
m = mass
c = heat capacity
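A minimal worked example of that formula, with round illustrative values (not numbers from the post):

```python
# dQ = m * c * dT: heat needed to warm one cubic meter of seawater by 0.5 K.
m = 1000.0   # kg (one cubic meter of water, density ~1000 kg/m^3)
c = 4000.0   # J/kg/K (rounded heat capacity of seawater)
dT = 0.5     # K
dQ = m * c * dT
print(dQ)  # 2000000.0 J
```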
DiC wrote:
"The first problem is whether the sample of floats properly represents the entire ocean."
No, it isn't.
They are measuring the heat changes in THEIR MODEL OF THE OCEAN.
Their model consists of everywhere the Argo bots go. In fact, in some of their older literature I once perused by Sydney Levitus (OHC guru, recently retired), they call this the "World Ocean."
They are measuring heat changes in the World Ocean.
It's a separate question of how well their World Ocean models the actual ocean. Argo's home page has a map of where the bots that are delivering recent data are located:
http://www.argo.ucsd.edu/
The actual ocean is so huge that missing small parts of it isn't going to have much effect on the changes in actual global OHC. Given that substantial OHC changes are seen in the World Ocean, it's certainly a reasonable assumption that similar, if not nearly identical, changes are taking place in the actual ocean. I am sure the NOAA and Argo people have spent lots of time thinking about these things and doing their uncertainty analysis.
In short, you don't need to measure every cubic centimeter of the ocean to get a good estimate of how much heat it is gaining or losing. If some reasonably good portion of the ocean that you CAN measure is gaining heat, there's no good reason to think the actual ocean isn't doing the same.
You can try to make the same case for measuring surface temperatures. Except there, there ARE substantial regions that are poorly sampled that DO matter for the global temperature change, such as the polar regions.
"The first problem is whether the sample of floats properly represents the entire ocean."
And, don't forget, Argo only measures down to 2000 meters, so they are missing about half the global ocean's volume (its average depth = volume/surface_area = 3,680 meters).
But that's OK, since it seems most of the heat changes are occurring in the region they are monitoring. But Argo is busy at work diving into the deep ocean:
https://www.climate.gov/news-features/climate-tech/deep-argo-diving-answers-ocean%E2%80%99s-abyss
DiC wrote:
"How much uncertainty would be in whether the average of these 400 measurements actually represents the true mean temperature of the Atlantic Ocean. Clearly, a lot of uncertainty."
No, it's not clearly a lot of uncertainty. Especially in the ocean heat changes.
This is a standard problem and comes up everywhere. I expect you would find a lot written about this in the papers and documents that Argo is built on, if you are really interested in finding answers and not just asking questions for the sake of trying to manufacture doubt.
A tweet by Andrew Dessler some months ago was from a talk he was attending where someone said you only need 50-100 temperature stations to get a usably accurate value for average global surface temperature.
DiC wrote:
"It's preposterous to imagine that one could calculate an average temperature to anywhere near that level of precision."
Just another statement with nothing backing it up at all.
This is a technical question. Those aren't answered by what "seems" right to you, or to anyone, but by science. And you never delve into the science -- you just want to dismiss all of climate science because it all "seems" wrong to you. That's not how science works, which is precisely its strength.
But instead of actually being interested in the science and answering such questions -- which are admittedly good questions, but hardly simple or bloggable -- you are just trying to manufacture doubt whether it exists or not, in order, I suspect, to confirm again and again your biases.
DiC wrote:
"It would be interesting to find out for sure how ARGO came up with the figure of ± 0.002°C."
Then go look it up in the technical specs of the Argo bots. Or write people and ask them. You can report back what you find. I'm not here to answer every question you put forth to try to manufacture doubt. This is a blog, not a scientific journal. These are blog posts, not papers intended for peer reviewed publication.
There are many good professional scientists who have worked on the Argo design and implementation. They too thought of all these questions on Day 1, but then they went on to answer them. Like all deniers, you think your easy questions were never asked before and that therefore you have some gotcha on the science. You don't.
D-I-C, With respect to the uncertainty of 0.002C, read my post above at 9:19 PM. I even point you to a spec sheet that gives this uncertainty. JoNova just made up the number of 0.1C. Why would she do that? I can read my thermometer to an accuracy of 0.1 C. The science has advanced. Go back and read what I wrote about how JoNova lied about what Hansen said the models claimed would be the increase in ocean heat content. Go back and actually read the Hansen paper that JoNova linked to. You'll note that Hansen's estimate of the energy imbalance is pretty damn close to what David just calculated.
Thanks, David. More questions. Am I correct that m represents the mass of a certain set of water? Perhaps it's all the water in the oceans down to a depth of 700 feet?
Then would T be the average temperature of the same set of water?
I don't know what heat and heat capacity are exactly. Is Q the total heat energy of the same body of water represented by T and m?
Am I right that one can discuss the change in temperature or the change in Heat? They're more or less equivalent. That is, one can be converted into the other. If that's right, why are they not making the entire presentation in terms of temperature? That would seem simpler.
thanks
David -- IMHO it hardly matters whether the individual ARGO thermometers are accurate to within 0.1 degree or 0.0001 degree. That's not the source of the total uncertainty, or barely the source of the total uncertainty. The big uncertainty is how well the sample of ARGO readings represents the entire oceans.
Cheers
I wonder whether Jo Nova's .1 came from this:
NOAA argues that the transition to buoys introduced a spurious cooling bias into the record. ERIs tend to warm the water a bit before measuring it (ship engine rooms being rather hot), whereas buoys do not. They identify a bias of around 0.1 C between buoys and ERIs and remove it by adjusting buoy records up to match ERI records in ERSST v4, as well as use NMAT readings to calibrate the differences across ships.
http://judithcurry.com/2015/11/22/a-buoy-only-sea-surface-temperature-record/
If adjustments of 0.1 deg C are being made, it would seem that the uncertainty must be in that order of magnitude. I'm not arguing that the adjustments are right or wrong, but only that the need for adjustments shows a lack of full confidence in the data.
Cheers
DiC, now you're just making stuff up. The JoNova article was from 2013, the Curry article from 2015. That's a pretty neat trick to rely on an article from the future.
The 0.1C that is being referred to is the difference between the sea surface temperature from engine intake water and that from buoys. The engine water runs slightly warmer. This has nothing to do with the accuracy of the buoys. I'll tell you how JoNova got 0.1: she took the actual accuracy, 0.002, and multiplied it by the square root of the number of buoys, i.e., 0.002 * sqrt(3000) = 0.1. That's the difference in uncertainty between one measurement and 3,000 measurements of exactly the same sample.
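That sqrt(N) arithmetic, sketched out. Note the standard-error formula only applies to repeated measurements of the same quantity, which is the point at issue:

```python
import math

# Standard error of the mean: averaging N independent measurements of the
# SAME quantity shrinks the uncertainty by sqrt(N). Running it in reverse,
# 0.002 * sqrt(3000) ~ 0.1, which appears to be where the 0.1 C came from.
sigma_single = 0.002  # C, per-profile accuracy quoted by Argo
N = 3000              # number of floats
print(sigma_single * math.sqrt(N))  # ~0.11 C
```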
What you're doing now is the classic Gish gallop. You don't like that the uncertainty is 0.002 -- that's too precise. Then you went to: it doesn't matter whether it's 0.1 or 0.0001. Now you're back to: it has to be 0.1 because .... Judith Curry.
I'll tell you another thing about uncertainty. Given the average first year graduate student who deals with uncertainty, the classic error is to overestimate it. It's like calculating the uncertainty in a straight line and hitting the tops and bottoms of the 1-sigma bar, which is highly improbable. Scientists know how to deal with uncertainty. They know how to calculate whether a sample is representative of the larger population. You don't seem to even know what heat capacity is.
Frankly, I don't know why David lets you ruin his blog.
DiC, what are you talking about, 5-year running means? There is a graph at the very top of the post that includes quarterly measurements. It is very clear that measurement uncertainty does not dominate. The trend is clearly visible.
DiC,
If that is what Jo Nova is doing then I think she is confusing bias with uncertainty. She is not likely a reliable authority if this is the case.
JoeT - Can you help me? What is the item whose uncertainty is being discussed? Is it the average temperature of all the oceans down to some depth? Or the average temperature of a segment all the oceans between two depths?
thanks
It's the uncertainty in the measurement of the temperature at any particular location. It includes the calibration of the instrument itself from known sources such as the triple point of water and the gallium melt point and the inherent drift in the sensor. The link above showed that the ARGO sensors, tested AFTER they had already been in use, retained their original calibration.
BTW, the reason you want to give the result in joules to measure energy, rather than kelvins to measure temperature, is because of the huge heat capacity of the water as well as the enormous mass. It's what allows David or James Hansen to calculate the net energy imbalance. It's why people say things like: this energy imbalance that is warming the oceans is the equivalent of 4 Hiroshima bombs going off every second.
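The Hiroshima comparison can be sanity-checked with round numbers. Both figures below are my assumptions (~6.3e13 J per bomb, roughly 15 kt of TNT, and an imbalance of ~0.6 W/m2), not values from this thread:

```python
# Rough check of the "Hiroshima bombs per second" comparison.
IMBALANCE = 0.6      # W/m^2, assumed planetary imbalance (estimates ~0.5-1.0)
EARTH_AREA = 5.1e14  # m^2, Earth's surface area
HIROSHIMA = 6.3e13   # J, assumed bomb yield (~15 kilotons TNT)

power = IMBALANCE * EARTH_AREA  # W, i.e. J/s
print(power / HIROSHIMA)        # ~5 bombs per second
```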
Layzej -- you were talking about month to month changes. My point was that monthly figures are not available. I agree that the trend in the figures is clear.
JoeT -- thank you.
DiC wrote:
"...the need for adjustments shows a lack of full confidence in the data."
*ALL* raw data in science is adjusted (except for very simple cases). There are always things to account for that can bias the raw data. For example, the satellite data for atmo temperatures are heavily adjusted.
"If adjustments of 0.1 deg C are being made, it would seem that the uncertainty must be in that order of magnitude."
That does not follow -- it depends on the nature of the adjustment. If the parameters that describe the adjustment are well known, the adjustments contribute little uncertainty to the final result.
Simple example: You measure a certain distance with a metal ruler. The ruler's length expands with temperature. If you know the coefficient of expansion well, and the temperature, you can account for that with little uncertainty and obtain the "adjusted" length. The uncertainty of the measured distance then comes from (1) how well you can read the ruler, and (2) statistical variation -- you don't get the exact same value every time you measure it, but values clustered around some mean value.
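The ruler example in numbers (all values invented for illustration):

```python
# Ruler thermal-expansion adjustment: a well-characterized correction adds
# little uncertainty to the final result. All numbers invented.
L_raw = 1.000000     # m, distance read off the ruler
alpha = 12e-6        # 1/K, steel's expansion coefficient (well known)
dT = 10.0            # K above the ruler's calibration temperature
sigma_read = 0.0005  # m, how well you can read the ruler

L_adj = L_raw * (1 + alpha * dT)  # adjusted distance
# Even a 10% uncertainty in alpha shifts L_adj by only ~1.2e-5 m,
# far below the 5e-4 m reading uncertainty.
sigma_adj = L_raw * 0.1 * alpha * dT
print(L_adj, sigma_adj, sigma_read)
```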
Good example, David. I was thinking of a different situation. Suppose you think an adjustment is appropriate, but you're not sure if one is needed. Then, there's some chance that the error has been increased by the amount of the adjustment.
David, if you're not sure an adjustment is needed, why would you make one?
Adjustments are scientifically necessary. If you don't know if an adjustment is needed, you don't understand the system well enough to treat it scientifically.
In my work as a casualty actuary, I often adjusted data based on assumptions. My job was to predict something in the future based on past patterns. The past data included both actual payments and estimates. Sometimes, the data I was using wasn't consistent over time. Maybe there had been a change in accounting procedures, or a change in the method used to get estimates. Unadjusted data would have produced spurious patterns. I believed my adjustment amounts were reasonable, but they were merely best guesses.
I think this is a parallel for some data adjustments used in climate research. E.g., when a particular weather station reading is believed to be invalid and is adjusted to something based on nearby weather stations.
cheers
David: How would you calculate the uncertainty of an adjustment based on using nearby weather stations?
All you can do is state the assumptions that go into your model, and calculate the associated uncertainties as best you can.
You can't compare your model to a different hypothetical-but-unknown better model. (If you have a better model, use it! If it's unknown, you can't compare it to anything.)
DiC wrote:
"Am I right that one can discuss the change in temperature or the change in Heat? They're more or less equivalent. That is, one can be converted into the other. If that's right, why are they not making the entire presentation in terms of temperature? That would seem simpler."
Yes, you can discuss either the change in temperature or the change in heat. They are proportional.
The reason for talking about heat instead of temperature is that the ocean has 1000 times the heat capacity of the atmosphere. So while a given amount of heat would cause a certain temperature change in the ocean -- the 0-700 m region has warmed by 0.17 C since 1955 -- it would cause a much larger change if it occurred in the atmosphere, or were released there.
The ocean is a huge heat sink. But over time -- centuries and millennia -- much of the added heat does not stay there.
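A rough sketch of that heat-capacity contrast, using round numbers I'm assuming (ocean area 3.6e14 m2, atmospheric mass ~5.1e18 kg, air heat capacity ~1000 J/kg/K):

```python
# The heat that warmed the 0-700 m ocean by 0.17 C, if it had gone into
# the atmosphere instead. Round assumed values throughout.
ocean_mass = 1000.0 * 3.6e14 * 700.0  # kg: density * area * depth
Q = ocean_mass * 4000.0 * 0.17        # J absorbed by the 0-700 m layer

atmos_mass = 5.1e18                   # kg, mass of the atmosphere
dT_atmos = Q / (atmos_mass * 1000.0)  # K
print(dT_atmos)  # tens of kelvins -- the ocean is an enormous heat sink
```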
DiC wrote: "IMHO it hardly matters whether the individual ARGO thermometers are accurate to within 0.1 degree or 0.0001 degree."
Why do you keep making up numbers when Argo tells us explicitly the uncertainty of their temperature profiles?
"The big uncertainty is how well the sample of ARGO readings represents the entire oceans."
We've been through this already. Did you read what I wrote?
Argo doesn't have to measure every cubic meter of the ocean to get a valuable reading of ocean heat gain.
Paul wrote:
"I wouldn't be surprised if December isn't included."
Come on -- no scientist is going to publish 2 months of data that they label as 3 months of data. They would never live that down.
I suspect NOAA was anxious to get out the annual data for 2015, to make its conclusions available to the public.
Joe: Great comment above at 9:44 am.
When I was an undergraduate I worked a couple of summers for a group that did medium energy particle studies at Los Alamos.
One of the scientists told me once that 90% of their computer time was spent calculating uncertainties.
Scientists take uncertainties VERY seriously.
Joe wrote:
"The bigger picture is that there is a rather nice consilience between the increase in ocean heat content and the sea level rise (due to thermal expansion), measured independently, that doesn't get stressed enough"
Good point. Sea level rise is a macro-indicator that can be observed without having to cut and adjust any data. So is global glacier melt. These things can't be faked. Any contrarian has to explain them.
"Argo doesn't have to measure every cubic meter of the ocean to get a valuable reading of ocean heat gain."
I agree. Even if the average of the ARGO floats doesn't exactly equal the average of the entire ocean (were such a thing measurable), the trend in some average of the ARGO floats should give a good reading of the warming trend (provided that each year's work was done the same way.)
I believe that climate research sometimes adjusts the data in ways that cannot be validated. E.g., it's my understanding that the Berkeley Earth study substitutes model values for some recorded data, based on neighboring weather station readings. But, it cannot be proved that the adjusted data is more accurate than the original data. Even if the adjusted data is more accurate, it cannot be shown how much error remains after adjustment.
NOAA recently adjusted past data, as far back as the 1930's. There is obviously no way to go back in time and prove that these adjustments were appropriate or adequate.
"Sea level rise is a macro-indicator that can be observed without having to cut and adjust any data."
I wish that were true. As I understand it, depth of water is affected by both sea level rise and ground subsidence. It's not easy to separate the two effects. E.g., see this paper:
The southern Chesapeake Bay region is experiencing land subsidence and rising water levels due to global sea-level rise; land subsidence and rising water levels combine to cause relative sea-level rise. Land subsidence has been observed since the 1940s in the southern Chesapeake Bay region at rates of 1.1 to 4.8 millimeters per year (mm/yr), and subsidence continues today.
This land subsidence helps explain why the region has the highest rates of sea-level rise on the Atlantic Coast of the United States. Data indicate that land subsidence has been responsible for more than half the relative sea-level rise measured in the region.
http://pubs.usgs.gov/circ/1392/pdf/circ1392.pdf
Cheers
DiC wrote:
"But, it cannot be proved that the adjusted data is more accurate than the original data."
What do you mean by "more accurate?" Compared to what?
DiC wrote:
"Even if the adjusted data is more accurate, it cannot be shown how much error remains after adjustment."
"Error" compared to what?
Are you expecting them to derive an error with respect to the "TRUE" value?
The true value isn't known. So how can anything be compared to it?
DiC wrote:
"NOAA recently adjusted past data, as far back as the 1930's. There is obviously no way to go back in time and prove that these adjustments were appropriate or adequate."
What does "adequate" mean here? Compared to what?
DiC wrote: "As I understand it, depth of water is affected by both sea level rise and ground subsidence. It's not easy to separate the two effects."
Yes. Do you think scientists don't spend a lot of time trying to understand this?
http://sealevel.colorado.edu/content/do-you-account-plate-tectonics-global-mean-sea-level-trend
http://sealevel.colorado.edu/content/what-glacial-isostatic-adjustment-gia-and-why-do-you-correct-it
http://sealevel.colorado.edu/content/sedimentation-oceans-accounted-gmsl-estimate
It's the same as always with you, David -- you ask a basic question that anyone would ask in the first hour of studying a problem, but then you assume no one has ever thought of it before and you don't take the time to go look. This is a major characteristic of deniers.
"adequate" means large enough, compared to reality. In the examples I provided, we cannot know what reality was, because real figures are not available.
Of course scientists work to separate sea level rise from subsidence. The paper I referenced is one such example. Maybe I'm just quibbling over the semantics of what constitutes an "adjustment".
Aside from semantics, there's been at least one wrong projection of the rate of sea level rise. Watts wrote: "In 2005, the United Nations Environment Programme predicted that climate change would create 50 million climate refugees by 2010. These people, it was said, would flee a range of disasters including sea level rise, increases in the numbers and severity of hurricanes, and disruption to food production."
The UNEP provided a map. The map shows us the places most at risk including the very sensitive low lying islands of the Pacific and Caribbean. http://wattsupwiththat.com/2011/04/15/the-un-disappears-50-million-climate-refugees-then-botches-the-disappearing-attempt/
Watts goes on to point out that the specified islands actually gained population during the period.
These islands are still at risk. Over time, they probably will become uninhabitable as sea level rises. My point is that the rate of rise is hard to project.
Cheers
"As I understand it, depth of water is affected by both sea level rise and ground subsidence. It's not easy to separate the two effects."
Actually it is relatively easy now to separate the two effects. What you're talking about are tidal gauge measurements, which go back to the 1700s. However with satellite altimeter measurements, scientists can distinguish the sea level rise from land subsidence. In fact the comparison of the satellite data to the tidal gauge data is used to determine vertical land motion. (I trust the satellite altimeter data more than I trust the measurement of microwave radiation from oxygen molecules and the subsequent inversion of the radiative transfer equation to calculate the tropospheric temperature).
"This land subsidence helps explain why the region has the highest rates of sea-level rise on the Atlantic Coast of the United States."
What's being described here is the Chesapeake Bay region, but in fact the entire eastern seaboard of the US is showing high rates of sea level rise. At least part of that rise is due to the slowdown in the Atlantic Meridional Overturning Circulation (AMOC). You can read about this here, here, and here.
BTW, if anyone didn't catch Stefan Rahmstorf's piece on Blizzard Jones and the Slowdown of the Gulf Stream at realclimate, it's well worth reading.
"Aside from semantics, there's been at least one wrong projection of the rate of sea level rise. Watts wrote In 2005, the United Nations Environment Programme predicted that climate change would create 50 million climate refugees by 2010.... My point is that the rate of rise is hard to project."
You've conflated two very different subjects. One is the rate of sea level rise and the other is the number of climate refugees. UNEP may, or may not, have overestimated the number of refugees (I haven't looked into it enough to know), but this has nothing to do with projections of sea level rise.
As we've been discussing, sea level rise due to thermal expansion is rather straightforward to project from the increase in ocean heat content. Given the rate of greenhouse gas emissions one can estimate the earth's energy imbalance. The big unknown is what the contribution of the glacier melt will be. If anything the data shows that short-term projections of sea level rise were too conservative.
David in Cal wrote:
"'adequate' means large enough, compared to reality."
But what does "large enough compared to reality" mean? It's just as nebulous as your use of the word "adequate."
DiC wrote:
"Watts wrote In 2005...."
So the UNEP was wrong. That doesn't make everyone wrong. Should I point out all the wrong predictions that have appeared on Watts' blog? Here are just two:
http://davidappell.blogspot.com/2015/08/the-department-of-oops-case-number-1.html
http://wattsupwiththat.com/2012/08/13/when-will-it-start-cooling/
"My point is that the rate of rise is hard to project."
Of course. Everything is hard to project.
But don't forget that can lead to underprojection (Arctic sea ice) as well as overprojection.
Consider this: the IPCC's First Assessment Report includes projections of sea level rise for the period 1985-2030 (Ch9 Table 9.10 pg 276):
High: 28.9 cm
Best estimate: 18.3 cm
Low: 8.7 cm
According to Aviso data, the average rate of change over the 22+ years of their data (starts in 1993) is 3.34 mm/yr.
For the 55 years considered in the FAR, above, that works out to a projection, if linear, of 18.4 cm.
Compare to their "best estimate." Pretty good.
Oops. The time interval above was 45 years, not 55 years, so the projection works out to be 15.0 cm. Still pretty good. I just wrote a post on this:
http://davidappell.blogspot.com/2016/01/the-first-assessment-reports-projection.html
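The corrected arithmetic above is easy to check: the FAR projection covers 1985-2030, a 45-year span, and extrapolating the Aviso trend linearly over it gives the 15.0 cm figure.

```python
# Checking the arithmetic: extrapolate the Aviso altimeter trend linearly
# over the FAR's 1985-2030 projection window.
rate = 3.34            # mm/yr, Aviso trend since 1993 (from the comment above)
years = 2030 - 1985    # 45-year span, as corrected above

projected_cm = rate * years / 10.0   # mm -> cm
print(f"Linear extrapolation: {projected_cm:.1f} cm "
      f"(FAR best estimate: 18.3 cm, low: 8.7 cm, high: 28.9 cm)")
```

The 15.0 cm result sits between the FAR's low and best estimates, which is the basis for calling the projection "still pretty good."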
DiC provided this reference:
NOAA argues that the transition to buoys introduced a spurious cooling bias into the record. ERIs tend to warm the water a bit before measuring it (ship engine rooms being rather hot), whereas buoys do not. They identify a bias of around 0.1 C between buoys and ERIs and remove it by adjusting buoy records up to match ERI records in ERSST v4, as well as use NMAT readings to calibrate the differences across ships.
Aren't the buoys referred to in the above statement these:
http://www.ndbc.noaa.gov/
Those are not ARGO buoys.