Sunday, August 07, 2016

"Everything we know...we know through models."

"Nor is there any such thing as a pure climate simulation. Yes, we get a lot of knowledge from simulation models. But this book will show you that the models we use to project the future of climate are not pure theories, ungrounded in observation. Instead, they are filled with data — data that bind the models to measurable realities. Does that guarantee that the models are correct? Of course not. There is still a lot wrong with climate models, and many of the problems may never be solved. But the idea that you can avoid those problems by waiting for (model-independent) data and the idea that climate models are fantasies untethered from atmospheric reality are utterly, completely wrong. Everything we know about the world’s climate — past, present, and future — we know through models."

- Paul N. Edwards, A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming

85 comments:

  1. "It isn't what we don't know that gives us trouble, it's what we know that ain't so."
    Will Rogers

    Models are simplified replicas of reality, and therefore inherently uncertain. We can never be sure whether significant parts of reality are represented accurately in the model, or whether they are even part of the model at all.

    An illustration of model uncertainty is that various climate models produce a wide range of values for climate sensitivity, although they start from the same basic data.

    Cheers

  2. David, you simply repeated what the excerpt said.

    Climate models make different assumptions, use different tunings and parametrizations, and are initialized differently. So there's no inherent reason for them to agree. That's their nature. It is looking unlikely that they will be able to reduce the uncertainty in climate sensitivity much further. (The Roe and Baker paper in Science several years ago showed this quantitatively.)

    The upshot is that we have to make decisions about climate policy in the face of uncertainty. But we do that all the time.

  3. And therefore we have to make decisions on the basis of observations, not of models, which (as Joe T has clearly shown) are consistently running above observations.

  4. There are so many things wrong with your statement Richard, it's hard to know where to start:
    1- There is no such thing as 'observations' that stand alone as some gold standard. If one observes a disagreement between a model and so-called 'observations', then a real scientist questions both.
    2- In the case of satellite temperature 'observations', you do know that they don't measure temperature, right? They measure microwave intensity and then use the radiative transfer equation to back out the temperature. Radiative transfer by itself is a model, so one uses a model to determine the 'observation'. But then there are a set of corrections that have to be made to the data, like taking into account the diurnal drift of the satellite. And the funny thing about the algorithm they use to determine the 'observation' is that it changes every few years. As David has shown in some of his posts, sometimes by huge amounts. In this case, do we believe the model or the 'observation'?
    3- As I showed in my recent post in the Spencer thread below, even the so-called 'observations' don't agree all that well in the recent past. Which 'observation', RSS, UAH or RATPAC, are we supposed to believe?
    4- You keep repeating yourself about the data being in the lower half of the model spread. Why don't you show me which of the 102 Christy models actually has the right forcing and the correct ENSO and IPO variation so we can accurately compare the CMIP5 model to the 'observations'. As with the surface temperatures, the distribution of heat between the oceans, surface and atmosphere changes with natural variation. It is wrong to conclude that if the 'observation' is at the bottom of the model spread, then the models are running too warm.
    5- As I've shown, right now the RATPAC data, which has been diverging from the satellite data since 2010 or so, is sitting at the top of the model spread. Should we conclude from this that the CMIP5 models are running cold?

  5. Richard: for example

    "Historical Records Miss a Fifth of Global Warming: NASA," 7/21/16
    http://www.nasa.gov/feature/jpl/historical-records-miss-a-fifth-of-global-warming-nasa

  6. 1. Agreed. Either or both can be wrong. It's possible that one is right, or that all are wrong.

    2. Agreed. All measurements are derived from some other property, as Spencer correctly pointed out. Obviously the models change every few years (if not months) - that is a sign a priori of strength, not of weakness.

    3. From 1880 to 2015, NASA GISS, NOAA NCEI and HadCRUT4 agree pretty well.

    4. As you produced the graph yourself, you obviously know which of the models agree most closely with observations.

    5. There are not as many balloon based observations as land based observations, so I would be less inclined to accept RATPAC if it disagrees with the land based observations.

  7. Reply to David Appell :-

    No graphs or data, so we will have to wait and see what is meant by "applied the quirks in the historical records to climate model output" - applying the quirks would seem to involve a lot of assumptions.

  8. Richard, sure, we have to wait for more results, but the point is that the observations in climate science -- and in any science -- are not set in stone. They too can contain mistakes, and science is a continual process of building better data models as well as better theoretical models.

    I like a quote from Eddington: "Experimentalists will be surprised to learn that we will not accept any experimental evidence that is not confirmed by theory."

  9. I think "applied the quirks in the historical records to climate model output" means masking the models where measurements are unavailable. In that case they are just performing an apples to apples comparison.

    The paper is here. Graphs, data, and all.

  10. Eddington was quite right in general terms; but for climate, there are at present too many variables (some of which, e.g. ocean depth temperatures, are poorly measured) that any theory has to take into account. Astronomy is another obvious example - the observations (evidence) may take many years / decades to be confirmed by theory.

    Since the number of weather stations has declined in recent years, future observations will more often come from satellites.

  11. BS. There are over 11,000 weather stations around the world. How many were there at the peak? How many are needed to create a robust record?

  12. "...but for climate, there are at present too many variables (some of which, e.g. ocean depth temperatures, are poorly measured) that any theory has to take into account. Astronomy is another obvious example - the observations (evidence) may take many years / decades to be confirmed by theory."

    Eddington was an astronomer.

    Sometimes theory leads experiment, and sometimes experiment leads theory. Both have to come into accord for science to be accepted. An unexpected measurement that theory can't explain always has to be scrutinized for errors, as with the neutrinos thought to move faster than light announced a few years ago.

    For climate science and AGW, theory and observations are already in accord, to the extent needed to take action.

    "Since the number of weather stations has declined in recent years, future observations will more often come from satellites."

    I don't think so. I heard someone say you only need 50-100 weather stations around the world to make a quality temperature average.

    Besides, ocean heat content is the real sign of global warming.

    Carl Mears of RSS:

    "A similar, but stronger case can be made using surface temperature datasets, which I consider to be more reliable than satellite datasets…."

    http://www.remss.com/blog/recent-slowing-rise-global-temperatures

    From the same page:

    "Does this slow-down in the warming mean that the idea of anthropogenic global warming is no longer valid? The short answer is ‘no’. The denialists like to assume that the cause for the model/observation discrepancy is some kind of problem with the fundamental model physics, and they pooh-pooh any other sort of explanation. This leads them to conclude, very likely erroneously, that the long-term sensitivity of the climate is much less than is currently thought.

    "The truth is that there are lots of causes besides errors in the fundamental model physics that could lead to the model/observation discrepancy. I summarize a number of these possible causes below. Without convincing evidence of model physics flaws (and I haven’t seen any), I would say that the possible causes described below need to be investigated and ruled out before we can pin the blame on fundamental modelling errors."


  13. "Obviously the models change every few years (if not months) - that is a sign a priori of strength, not of weakness."
    Actually that is not necessarily true. UAH went from 5.6 to 6.5, which agreed better with RSS v.3. But then RSS went from v.3 to v.4, which opened up the gap again.

    "From 1880 to 2015, NASA GISS, NOAA NCEI and HadCRUT4 agree pretty well."
    This is a non sequitur; it's not what I was discussing. I'm comparing the temperature in the middle troposphere with the data. Bringing in the ground measurements has nothing to do with what I was talking about. What I showed in my graph is that there is a substantial disagreement between RATPAC and the latest versions of RSS & UAH in recent years.

    "As you produced the graph yourself, you obviously know which of the models agree most closely with observations."
    You're missing the point entirely. The data set that I downloaded are just a bunch of numbers. They don't tell me which model is in what phase of ENSO or IPO at any particular point in time.

    "There are not as many balloon based observations as land based observations, so I would be less inclined to accept RATPAC if it disagrees with the land based observations."
    Again, I was talking about the temperature in the middle troposphere. You seem to be having your own conversation. This has nothing to do with the land based observations. Since the RATPAC data is now at the top of the model spread, should we then conclude that the CMIP5 models are running cool?

  14. Reply to Layzej :-

    See https://www.ncdc.noaa.gov/monitoring-references/docs/peterson-vose-1997.pdf which shows (a) the decline in the number of stations, and (b) the sparse coverage in some areas of the world, particularly for long term observations (covering a century or more) which are obviously the most useful.

    Reply to Joe T :-

    Since you prefer a short answer to a long one, the short answer is that the balloon data are too sparse to draw any conclusions. The longer answer is that since the long term land data agree pretty well with each other, they are probably more reliable than the short term results from balloons and satellites.

    Reply to David Appell :-

    I disagree that there is any need to 'take action' - the FAO are reporting that crop production and yields have been increasing. Why do we need to take any action, apart from giving the poor people of the world access to cheap and reliable electricity ?

    Ocean heat content is measured by buoys such as the ARGO buoys, which sample very small volumes of water. Like the land areas, the global coverage is pretty sparse. Fifty to one hundred stations doesn't sound like much for the 510 million square kilometres of the surface of the Earth. That's 5.1 to 10.2 million square kilometres per station.
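    A quick check of that arithmetic, as a minimal sketch (510 million km² and the 50-100 station range are the figures quoted above; the variable names are just illustrative):

    ```python
    # Area notionally "covered" per station, using the numbers in this thread.
    earth_surface_km2 = 510e6  # total surface area of the Earth, km^2
    for n_stations in (50, 100):
        area_each_million_km2 = earth_surface_km2 / n_stations / 1e6
        print(f"{n_stations} stations -> ~{area_each_million_km2:.1f} million km^2 per station")
    # 50 stations  -> ~10.2 million km^2 per station
    # 100 stations -> ~5.1 million km^2 per station
    ```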

  15. "The longer answer is that since the long term land data agree pretty well with each other, they are probably more reliable than the short term results from balloons and satellites."

    You seem to be insistent on having your own conversation. There is little doubt that the surface data is more reliable than satellites and radiosondes. But since the surface data doesn't tell us what is happening in the middle troposphere, it's irrelevant. Even Roy Spencer doesn't believe the tropospheric measurements are good proxies for the surface temperatures.

    I'll take your answer then to mean that indeed you think the CMIP5 models are running too cool at this moment. Otherwise you wouldn't insist that if the data is below the mean then the models are running too warm.

  16. Richard, did you even read the paper you cited to show station drop out? It says right there that they retroactively add in historic data from stations that do not report in real time. There is no station drop out. Rather they add in historic data as it becomes available.

  17. RM says: I disagree that there is any need to 'take action' - the FAO are reporting that crop production and yields have been increasing.

    I think Richard is looking out the rear window while driving down the highway. "No obstacles in sight! What, me worry?"

    Which illustrates the value of climate models. The goal is to look forward.

  18. Richard wrote:
    "Ocean heat content is measured by buoys such as the ARGO buoys, which sample very small volumes of water. Like the land areas, the global coverage is pretty sparse."

    How many buoys are needed to get robust OHC numbers?

    Do you really think Argo didn't study this carefully when planning their network??

  19. Reply to David Appell :-

    I'm sure that ARGO would say that they only sample a very small percentage of the oceans of the world, and that they would always get better and more reliable results with more buoys.

    Reply to Layzej :-

    Adding historic data means that they have more data about the past. It does not affect the fact that there are fewer stations providing data about the present.

    Regarding the Science article, since it is paywalled, I cannot comment.

  20. Richard,

    All of the data is from the past. Some data can be collected in near real time. Some is collected over the course of several years. There is no "decline in the number of weather stations", let alone one that would necessitate abandoning the surface station record. Sheesh.

  21. First a quibble: We do know something important, not from models. Physical principles explain why increasing CO2 is causing global warming.

    Setting that point aside, I agree with Edwards that our policies must be based on models, but I see that as a bad thing. One must go beyond or outside a model to somehow decide how reliable the model is. The model itself doesn't answer that question. And, there are so many different climate models. There are models predicting global cooling. They're based on an assumption (which I don't buy) that temperature follows the amount of sunspots. There are models predicting many degrees of warming by the end of this century. And, models in between.

    BTW Edwards sounds naive, when he writes, "The models we use to project the future of climate are not pure theories, ungrounded in observation. Instead, they are filled with data — data that bind the models to measurable realities." Of course all the models use actual data, to the extent it's available. However, despite starting from the same actual data, predictions vary a lot, from model to model. I wonder if Edwards understands that a big challenge is deciding which model to believe.

  22. Reply to Layzej :-

    As the number of stations being monitored and contributing to the global data has declined, you will never get post-1970 data from those stations that have closed down since then. You can 'sheesh' all you want, but that doesn't alter the fact. The stations are not there any more. This is really basic stuff.

    Also, I have never advocated "abandoning the surface station record"

    Again, your reading and comprehension skills have let you down.

  23. As the number of stations being monitored and contributing to the global data has declined

    Show me. There are currently over 11,000 stations. How many were there at the peak?

  24. In the Global Historical Climate Network, the number of stations has declined from about 6000 to about 2600. At https://climateaudit.org/2008/02/09/how-much-estimation-is-too-much-estimation/ the number of unique GHCN records has declined from about 14000 to about 2000.

  25. Richard. The number of stations hasn't declined. Some report in real time and some do not. Those that do not are added in retroactively once the data is compiled. Someone is trying to dupe you here and you've swallowed it.

    Your original link was to GHCNv2. Check out GHCNv3. V3 uses twice as many stations for the most recent decade. In fact, they've increased the number of stations right back to 1980.

    None of this supports the notion that "Since the number of weather stations has declined in recent years, future observations will more often come from satellites."

    In fact it all rather dilutes the substantive comments of David in Cal and the conversation in general. For the sake of this thread I'll leave it at that.

  26. See http://onlinelibrary.wiley.com/doi/10.1029/2011JD016187/full for the number of stations in GHCN-M (monthly) version 2 and version 3. From Figure 2 the number of stations (both versions) peaked in about 1970 at about 6000 stations. Version 2 is now around 1500 stations and version 3 is now around 2500 stations.

  27. Reply to Joe T :-

    "I'll take your answer then to mean that indeed you think the CMIP5 models are running too cool at this moment. Otherwise you wouldn't insist that if the data is below the mean then the models are running too warm."

    I don't understand your logic here.

    Your spaghetti graph shows RSS, UAH and RATPAC below most of the models.
    Hawkins shows observations below most of the models.
    Spencer shows HadCRUT4 and UAH below 95-98% of the models.
    Skeptical Science (posted by Layzej) shows HadCRUT3, NCDC and GISTemp below the model mean except for the 1997-98 El Nino.
    Bart Verheggen shows UAH and RSS below the model mean except for the 1997-98 El Nino.
    Curry shows the balloon and satellite observations below the model mean.


    How does this indicate that the models are running too cool ?

  28. It's not that difficult to understand Richard. The first and most important thing is that your premise is wrong that because the data is below the mean in the model spread the models are running too warm. For about the 10th time already you've been told there is a collection of models AND for each model they run them with a set of initial conditions. These differing initial conditions are meant to simulate a range of natural variability. So unless you go and pick out of the models which one has the right phase of ENSO and IPO for example, then it's impossible to say if the models are running warm or cool. This was the subject of a set of papers including the recent Fyfe paper, Kosaka and Xie, Trenberth etc.

    However, you keep insisting that the models run warm, paying no attention to whether they simulate the natural variability, as if the mean of the model spread is supposed to be some magical gold standard. What I showed in my post below is that the RATPAC radiosonde data is now at the top of the model spread. Furthermore, having tracked RSS and to some extent UAH reasonably well until 2010 or so, the radiosonde data has diverged considerably in recent years from the satellite data. Having performed microwave measurements myself (and having published one PRL on the subject, although I am not a climate scientist), I'm somewhat familiar with the difficulties of making precise temperature measurements with microwaves, especially when the radiation is not blackbody and you have a very long optical path length. My suspicion, backed up by the quote from Carl Mears, is that there could be a problem with the hot load calibration. From that statement I tend to trust the radiosonde measurement more than the satellite data.

    Therefore I conclude that if you're going to say the models are too warm when the data is below the mean, then you should conclude that the RATPAC data is showing the models to be too cool. In truth, neither conclusion is correct, because the model mean is mostly irrelevant. What matters is which model actually comes closest to getting the natural variability roughly correct. And, of course, making sure that the forcing is correct, which is another uncertainty in the projections.

    On the other hand, every time I tried to discuss measurements in the middle troposphere, you would turn the conversation to surface measurements. They are not the same. Even Roy Spencer says they are not the same. You either really don't understand what I'm saying or you're just quoting propaganda. At this point, it's useless for me to continue the discussion.

    Your RATPAC data is mostly similar to UAH, except right at the end, when it comes close to the middle of the spaghetti graph. I would not judge a graph on the basis of the few most recent points.

    In the post to which you replied, I quoted RSS, RATPAC, UAH, HadCRUT4, HadCRUT3, NCDC, and GISTemp. All of these (land based and satellite) are running below most of the models, consistent with Spencer's graph (even if it doesn't show the model spread, as you, Hawkins and Skeptical Science do)

    If you want to say that all this is irrelevant, then fair enough - the model makers have been wasting their time.

  30. Richard wrote:
    "If you want to seay that all this is irrelevant, then fair enough - the model makers have been wasting their time."

    The amount of climate change we're facing is an immensely important question for society.

    How would you approach that question without a model?

    BTW, all the data records you just cited also come from models, as the post's excerpt says.

  31. Richard, are you color-blind? It's not meant to be an insulting question actually. If you look after 2000 then the data sets start to diverge. RATPAC which is green is the highest, followed by RSS and then by UAH. RATPAC is closest to RSS v.4 but then starts to diverge from that as well.

    You're already distorting what I'm saying. I assume you're doing that deliberately. The model mean is irrelevant, not the models. Pick out of the models which one comes closest to the natural variability, then we can discuss whether they run warm or cool. Spencer's graph is garbage. He cherry-picks his starting point. When you do the proper baselining, the data falls within the model spread.

    I realize you don't understand what I'm saying, so I'm done with you.

  32. Yes of course - we are comparing temperatures (whether or not they are derived from models) with CMIP5.

  33. My previous reply was to David Appell.

    Reply to Joe T :-

    Your graph shows the green line ending at about +1.0 degree. The models for that year run from about +0.5 to about +1.5 degrees.

    As I said, it's irrelevant anyway, because we are only looking at one year of a 40 year time line.

  34. Richard wrote:
    "Yes of course - we are comparing temperatures (whether or not they are derived from models) with CMIP5."

    No, that's not what I meant.

    I mean the data THEMSELVES come from models -- the data we call "observations." The observed data come from models.

    Nothing to do with CMIP5 or any other climate model.

    (And for satellites, the data models are rather complicated models.)

  35. Realclimate includes updated model/observation comparisons here: http://www.realclimate.org/index.php/archives/2016/08/unforced-variations-aug-2016/

    We would expect the actual temperatures to be either above or below the mean for a decade or more at a time. Richard infers meaning when points are below the mean. Those above are irrelevant.

  36. "Your graph shows the green line ending at about +1.0 degree. The models for that year run from about +0.5 to about +1.5 degrees."

    Christ, you still do statistics by eye. With respect to the baseline, RATPAC is at 1.03 C. The model mean is at 0.90 C. One sigma is at 0.29 C. The last RATPAC datapoint is above the mean in the models. The natural variability has changed from the previous decade. You don't get to pick and choose the points that you like.
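    A minimal sketch of that comparison, using only the numbers quoted here (RATPAC anomaly 1.03 C, ensemble mean 0.90 C, one-sigma spread 0.29 C; taken as given, not recomputed):

    ```python
    # Distance of the latest RATPAC point from the CMIP5 ensemble mean,
    # in units of the ensemble spread. Values are the ones quoted above.
    obs = 1.03         # deg C, RATPAC anomaly on the common baseline
    model_mean = 0.90  # deg C, CMIP5 ensemble mean
    sigma = 0.29       # deg C, one-sigma ensemble spread

    z = (obs - model_mean) / sigma
    print(f"RATPAC is {z:+.2f} sigma from the ensemble mean")  # about +0.45 sigma
    ```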

  37. Reply to David Appell :-

    Yes, we agree. The observations come from models. Those temperatures, that come from models, are mostly below the CMIP5 climate models.

    Reply to Layzej :-

    The observations have never been (apart from the El Nino) above the model mean; from what I have seen, they have all been at or below (and mostly below) the model mean.

    The figure from Real Climate is the same as the one from Skeptical Science, and clearly shows that most of the observations are below the model mean (and sometimes even outside the confidence interval)

    We would expect the actual temperatures to be either above or below the mean for a decade or more at a time. The fact that they were isn't that informative.

  39. Reply to Joe T :-

    I mostly use the data to plot my own graphs in Excel. Your graph merely confirms the comparisons of models with observations that I have listed above.

    No, you don't get to pick and choose the data points that you like. That was my point. The vast majority of the observation data points in your graph are at or below the mid-point of the models that you have plotted. Therefore you confirm the evidence given by the other graphs.

  40. Reply to Layzej :-

    Yes, I agree. 40 years is in any case too short a time frame to draw any conclusions from the fact that the observations are nearly all at or below the model mean. We need to reassess in another 50-60 years.

  41. We've not yet realized 11 years of the CMIP5 forecast which started in 2006. That 9 are below the mean is not surprising given that the NMO has been in a negative phase for the entire period. What happens when NMO reverses?

  42. What's NMO ? I have heard of AMO, PDO, NAO, IPO but not NMO.

  43. That's what you said last time.

  44. Ah - Northern Hemisphere Multi-Decadal Oscillation I presume. I have never seen a graph or data for that.

  45. Yes you have: http://www.realclimate.org/index.php/archives/2015/02/climate-oscillations-and-the-global-warming-faux-pause/

    It is the result of combining the AMO and PMO.

  46. OK, a smooth graph of 'estimated history'

  47. Richard wrote:
    "Yes, we agree. The observations come from models. Those temperatures, that come from models, are mostly below the CMIP5 climate models."

    Only if you think the data models are accurate.

    How do you know if they are?

  48. Of course we don't know if any of the models are accurate. The fact that the 3 land based models (NASA GISS, NOAA NCEI and HadCRUT4) agree pretty well with each other gives us more confidence that they are accurate.

    Richard, yeah, that's a good sign. On the other hand, those models mostly use the same raw data and methods. But significant changes still come along, which wouldn't happen if the models were truly accurate. Such as the Karl et al Science 2015 changes, and the SST changes paper they were based on, and the 3-times larger changes in UAH v6 (with some monthly regional values changing up to 1.4 C), and the sign error in UAH from the 1990s-00s, and the "we missed 1/5th of warming" press release I've been mentioning. Plus, UAH and RSS differ significantly in their MT results.

  50. David in Cal wrote:
    "First a quibble: We do know something important, not from models. Physical principles explain why increasing CO2 is causing global warming."

    Physical principles are also models.

  51. David in Cal wrote:
    "One must go beyond or outside a model to somehow decide how reliable the model is."

    What exactly is "outside" a model?

    "There are models predicting global cooling. They're based on an assumption (which I don't buy) that temperture follows the amount of sunspots."

    Those models are wrong (hence, very very poor models).

    "Of course all the models use actual data, to the extent it's available."

    The "actual data" comes from models.

    Even measuring the length of something requires a model: a ruler.

  52. At some level even our perception of the world around us is really just a model, but if you follow this line of thinking things can get a bit postmodern. I'm not a big fan of postmodernism because it tends (in my opinion) towards academic wankery.

    Not to say that it isn't a good idea to be keenly aware of the limits of our models and measurements. And I'll concede that it's good to be aware that models are really our only way of understanding the universe.

  53. Reply to David Appell :-

    Again, most of your criticisms apply to the satellites, which anyway cover too short a time period for me to attach any importance to them yet.

    As I said before, those models behind the measurements are constantly evolving, which I regard as a strength rather than a weakness.

  54. Richard, I agree, that's a strength, not a weakness.

    The satellite temperatures now cover 37+ years. That's more than the 30 years many people use to calculate trends, though, you're right, it doesn't include full cycles of the PDO and AMO.

    My only point on this post was to point out that a disagreement between "observations" and climate models doesn't a priori mean the climate models are wrong. Sometimes the climate models are right and the "observations" are wrong, like with UAH's sign error. People expected, based on climate models, that the lower troposphere should be warming, and they were right.

    Thanks for your comments.

    Yes, that was an apparent paradox in the UAH data, which was certainly causing a lot of head scratching, and now it's been sorted out.

    Again, I agree that we need at least one cycle of the AMO / PDO, ideally a couple of cycles, which we have for the surface temperature data.

    Yes, NASA GISS, NOAA NCEI and HadCRUT4 all use GHCN, but (for example) it's my understanding that they use different methods for handling those areas of the world that are missing data (parts of the Arctic, and parts of Africa, South America and Australia, for example)

  56. But we do have surface temperatures over a few cycles of the PDO and AMO; they start in either 1880 (NOAA and GISS) or 1850 (HadCRUT4).

    I agree with you on the rest.

  57. That's what I meant when I said :-

    "Again, I agree that we need at least one cycle of the AMO / PDO, ideally a couple of cycles, which we have for the surface temperature data."

  58. Richard wrote:
    "I disagree that there is any need to 'take action' - the FAO are reporting that crop production and yields have been increasing."

    Due to what? Many factors determine yields.

    “For wheat, maize and barley, there is a clearly negative response of global yields to increased temperatures. Based on these sensitivities and observed climate trends, we estimate that warming since 1981 has resulted in annual combined losses of these three crops representing roughly 40 Mt or $5 billion per year, as of 2002.”

    -- “Global scale climate–crop yield relationships and the impacts of recent warming," David B Lobell and Christopher B Field 2007 Environ. Res. Lett. 2 014002 doi:10.1088/1748-9326/2/1/014002
    http://iopscience.iop.org/1748-9326/2/1/014002

  59. Richard wrote:
    "I disagree that there is any need to 'take action' - the FAO are reporting that crop production and yields have been increasing."

    Why are you focusing only on crop yields?

    What about sea level rise? Heat extremes? Ocean acidification?

    Why ignore everything else?

    Richard wrote:
    "I'm sure that ARGO would say that they only sample a very small percentage of the oceans of the world, and that they would always get better and more reliable results with more buoys."

    How does the reliability of their results scale as a function of the number of their buoys?

    How many robots do they need to get an OHC to the precision they are looking for?

    All projects are compromises between what's perfect and what's affordable.

  61. Lobell 2011 finds similar results:

    "global maize and wheat production declined by 3.8 and 5.5%, respectively, relative to a counterfactual without climate trends. For soybeans and rice, winners and losers largely balanced out. Climate trends were large enough in some countries to offset a significant portion of the increases in average yields that arose from technology, carbon dioxide fertilization, and other factors." - http://science.sciencemag.org/content/333/6042/616

  62. Reply to David Appell (1:59 am) :-

    Yes, of course many factors affect crop yields and crop production. The FAO has reported that cereal production (for example) has more than doubled from 1961 to 2013.


    Reply to David Appell (2:01 am) :-

    I do not exclude everything else except crop yields. I just quoted that as an example of why we don't 'need to take action' - China (for example) has decided that the action that they need to take is to build more coal fired power stations.

    Sea level has been rising by 3.4 +/- 0.4 mm. per year, according to http://sealevel.colorado.edu/ - nothing to worry about there.

    I haven't seen any graphs that purport to show how the pH of the oceans has changed over time scales of decades to centuries. I don't know of any ocean that's acidic, so the term 'ocean acidification' is misleading.

    I haven't seen any attempt to quantify the change in heat extremes over time scales of decades to centuries. From what I have seen, minimum temperatures have been rising more quickly than maximum temperatures, which suggests that the climate generally is becoming less extreme.

    Reply to David Appell (2:04 am) :-

    I would expect more measurements from ARGO buoys to increase the reliability of their results. Of course, this cannot be quantified, just as we cannot say how much more reliable the land temperature would be if we had more weather stations in the Arctic, Antarctic, Siberia, Australia, Africa and South America, but few people would deny that it would make the land temperature more reliable.

  63. RM: the term 'ocean acidification' is misleading.

    Busted! It was liberal conspirator Laurent Lavoisier that planted this seed in the 1700s, hoping that the word "acidification", though (obviously) misleading, would spur people to action. Sadly he died before his dream was realized.

    P.S. you haven't looked very hard if you can't find quantifications of changes in extremes or ocean acidity or acceleration in sea level rise, or even the cost of global warming to global food production or the non-linear impacts of climate change.

  64. OK, where in the graph at http://sealevel.colorado.edu/ do you see acceleration in sea level rise ?

  65. Actually, I can answer my own question.

    "Is the detection of accelerated sea level rise imminent?"
    J. T. Fasullo, R. S. Nerem, and B. Hamlington, Scientific Reports 6, 31245 (August 2016), doi:10.1038/srep31245

    From the abstract: "Global mean sea level rise estimated from satellite altimetry provides a strong constraint on climate variability and change and is expected to accelerate as the rates of both ocean warming and cryospheric mass loss increase over time. In stark contrast to this expectation however, current altimeter products show the rate of sea level rise to have decreased from the first to second decades of the altimeter era. Here, a combined analysis of altimeter data and specially designed climate model simulations shows the 1991 eruption of Mt Pinatubo to likely have masked the acceleration that would have otherwise occurred. This masking arose largely from a recovery in ocean heat content through the mid to late 1990s subsequent to major heat content reductions in the years following the eruption. A consequence of this finding is that barring another major volcanic eruption, a detectable acceleration is likely to emerge from the noise of internal climate variability in the coming decade."

    "a detectable acceleration is likely to emerge from the noise of internal climate variability in the coming decade" means it hasn't done so yet.

  66. So, in other words, the fact that there has been no acceleration in sea level rise in 23 years is excused by the fact that there happened to be a volcanic eruption 25 years ago.

    Want to try for changes in heat extremes or ocean pH levels over time scales of decades to centuries ?

  67. How many years would it take before you should expect to detect acceleration in the satellite record, given the natural variability introduced by volcanoes and other factors?

    Two other papers on sea level acceleration from this year:
    http://www.pnas.org/content/113/10/2597
    http://www.pnas.org/content/113/11/E1434.full

    So first you imply that you would not expect to detect acceleration, and then you reference two papers, one of which doesn't mention acceleration (though it does make some forecasts for the current century) and the second of which gives a 20th century rise of 1.4 +/- 0.2 mm. per year. Nothing to worry about there.

  69. Wait. 1.4 over the century but 3.4 over the last decade? It's the 2nd derivative that you need to worry about here.

  70. No, it's not 3.4 mm. per year over the last decade, it's 3.4 +/- 0.4 mm. per year average since 1993, compared to 1.4 +/- 0.2 mm. per year average over the previous century. Any child can see that. Look at the graph at http://sealevel.colorado.edu/ again.

    Again, there is no need for you to worry. Look at 3.4 +/- 0.4 mm. on a ruler. Again, any child will tell you that's a very small amount.

  71. Right. It seems to be rising faster now... So what has changed? (hint: it's the 2nd derivative you need to worry about here)

    A quadratic fit to CU's data gives a linear trend of 3.39 mm/yr and an acceleration of 0.028 mm/yr² (statistically significant).

    For AVISO it's the same linear trend but a = 0.037 mm/yr². And the acceleration itself is increasing.

    Small? That (for AVISO) is 45 cm of SLR in 2100 relative to today (again, also increasing). The acceleration is, so far, enough to create an extra 16 cm above the linear projection.

    And SLR doesn't stop in 2100 -- by then it's just getting started. A result in a paper by David Archer finds that paleoclimate data gives an ultimate SLR of about 15 m for every one degree C of warming.
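    To make the arithmetic concrete, here is a minimal sketch using the AVISO-style numbers above (3.39 mm/yr trend, 0.037 mm/yr² acceleration), with the acceleration treated as the second derivative of a quadratic about the present. Under that assumed convention the totals come out a few cm lower than those quoted, so treat this as illustrative arithmetic rather than a reproduction of the fit:

    ```python
    # Illustrative sea level extrapolation to 2100 from a present-day trend
    # and acceleration (values quoted in the comment above; assumed convention:
    # h(t) = rate*t + 0.5*accel*t**2, with t in years from today).
    rate = 3.39    # mm/yr, linear trend (from the comment)
    accel = 0.037  # mm/yr^2, acceleration (from the comment)

    t = 2100 - 2016  # 84 years
    linear_only = rate * t
    with_accel = rate * t + 0.5 * accel * t**2

    print(f"Linear only:       {linear_only/10:.0f} cm by 2100")        # ~28 cm
    print(f"With acceleration: {with_accel/10:.0f} cm by 2100")         # ~42 cm
    print(f"Extra from accel:  {(with_accel - linear_only)/10:.0f} cm")  # ~13 cm
    ```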

  73. The graph of Topex, Jason-1 and Jason-2 data from the University of Colorado speeds up and slows down, and even shows a decrease sometimes, so it's by no means increasing at an ever increasing rate.

    The coefficient of determination (r^2) of the linear fit is 0.9608. If that continues for the next 84 years, that will mean an increase of 3.0 to 3.8 * 84 = 252 to 319.2 mm., and people who live on the coast will have plenty of time to prepare.

    So I guess it all depends on whether or not you believe TOPEX and Jason, which the University of Colorado Sea Level Research Group continuously monitor against a network of tide gauges.

    In fact, historical tide gauge measurements at
    http://sealevel.colorado.edu/content/tide-gauge-sea-level
    show a range of rates of sea level rise from 1.2 +/- 0.3 mm. per year to 2.8 +/- 0.8 mm. per year.

  74. "The graph of Topex, Jason-1 and Jason-2 data from the University of Colorado speeds up and slows down, and even shows a decrease sometimes, so it's by no means increasing at an ever increasing rate."

    The math says otherwise.

    Your calculation didn't include acceleration.

    "...people who live on the coast will have plenty of time to prepare."

    And taxpayers will be reimbursing them for lost property, all across the world, basically forever.

    Stormwater will kick them out first. If the average angle of a shore is A, water will come ashore at a rate of SLR/sin(A). For A=5 degrees, this equals 11 times SLR.

    I think most scientists these days agree that the satellites are superior for measuring global sea level, but AFAIK they can't measure local rates of change, so for that tide gauges are important. But they are susceptible to land subsidence, glacial rebound, etc., and must be corrected. Sewells Point, in Norfolk VA, shows a linear trend of 4.6 mm/yr since 1927. Key West, 2.4 mm/yr since 1913, and Astoria OR -0.2 mm/yr since 1925. (These are just the three I look at every few months.) All show slight positive accelerations.

    SLR to-date is certainly small. But it is beginning to accelerate, and it's going to be taking place for millennia. This is what the public doesn't get, and this realization -- that we are setting the stage for geological scale changes -- is what I think has been behind the warnings more and more scientists are giving these days.

    It's weird to think that the next scores of generations will not know what it's like to have stable coastlines.
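    A minimal sketch of that shoreline geometry (the 1/sin(A) factor is the one above; the helper name, the 5-degree slope, and the SLR values are only illustrative assumptions):

    ```python
    import math

    def horizontal_retreat(slr_mm, slope_deg):
        """Horizontal distance the waterline moves inland for a given sea level
        rise on an idealized planar shore of the given slope (pure geometry;
        ignores erosion, storm surge, and local dynamics)."""
        return slr_mm / math.sin(math.radians(slope_deg))

    print(horizontal_retreat(1.0, 5.0))   # ~11.5: 1 mm of SLR moves the waterline ~11.5 mm inland
    print(horizontal_retreat(34.0, 5.0))  # ~390 mm inland per decade at 3.4 mm/yr
    ```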

  75. PS: For the AVISO data, even the small acceleration of today means average SLR in 2100 will be almost 3 inches per decade.


    Standard Numerology Warnings Apply.

  76. RM: If that continues for the next 84 years

    If wishes were horses... If it continued at the 1900-1930 rate we'd only have 50mm. If it continued at the 1930-1992 rate we'd only have 117mm. If it continued at the 1993-2016 rate we'd have 285mm. All of these scenarios are pointless to consider because of the 2nd derivative, and as David points out, likely also the 3rd.
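    For reference, those totals are just rate × 84 years; a quick sketch with the rates implied by the numbers above (roughly 0.6, 1.4, and 3.4 mm/yr, assumed rather than refit):

    ```python
    # By-2100 totals if each historical rate simply continued (no acceleration).
    # Rates are approximate values implied by the totals quoted above.
    rates_mm_per_yr = {
        "1900-1930 rate": 0.6,
        "1930-1992 rate": 1.4,
        "1993-2016 rate": 3.4,
    }
    years = 2100 - 2016
    for label, rate in rates_mm_per_yr.items():
        print(f"{label}: ~{rate * years:.0f} mm by 2100")
    # 1900-1930 rate: ~50 mm; 1930-1992 rate: ~118 mm; 1993-2016 rate: ~286 mm
    ```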

  77. As with global temperatures, we (or our descendants) will know more in the next years / decades. The Chinese (and probably most of the African countries) are pinning their hopes of future development on cheap and reliable electricity from fossil fuels and nuclear power.

  78. We know enough about AGW already to take action.

    And the Chinese are well aware of manmade global warming and CO2's role in it. And are taking far more steps than we are to cut it.

    Don't forget, America's per capita emissions are about 2x those of China. And we've emitted about 10x more historical CO2 than China has.

  79. This comment has been removed by the author.

  80. What would we know in 2026 that we didn't know in 1980?

    In 1980, when the record looked roughly flat, Hansen predicted accelerated warming. Some people said "Wait and see."

    Well, 35 years later we ended up with this, and some people still say "Wait and see."

    What evidence would be sufficient? 10 years from now those people will be talking about the pause that started in 2016.

  81. What would we know in 2026 that we didn't know in 1980?

    Well, we'll have much better regional projections, and we'll have a better understanding of expected SLR.

    But that's like knowing more about the shape of the bullets in the gun pointed at you. We knew enough, 30 years ago, to know that we needed to start shifting how we run the economy. And that's a really slow process. We should have gotten started.

    Now, 30 years later, we're still dithering. Bah.

    Tamino quantifies the 1981 prediction here. Hansen predicted that by 2010, Earth’s temperature would rise about 0.4°C (0.7°F). Instead we saw a rise of 0.55°C (0.99°F), more than 35% higher than the projected increase.

  83. Windchasers -- we did start, although perhaps not enough. Reducing atmospheric CO2 concentration will require a new source of massive amounts of energy. Something that can replace most fossil fuels worldwide and cost no more than fossil fuels. A lot of effort has gone into fusion energy, unfortunately without success so far. We also started by building more efficient vehicles and developing buildings that can be heated and cooled more efficiently.

  84. "A lot of effort has gone into fusion energy, unfortunately without success so far."

    I don't often find myself in agreement with David in Cal, but he has a point. Of course, it depends on what 'success' means. If success means containing a burning plasma and being well on the road to a demonstration reactor, then I agree. However, there has been a vast increase in physics understanding in the past few decades.

    If one is interested, I wrote up a brief post, called Fusion Power to Mitigate Climate Change in which I tried to argue for an acceleration of the fusion program, especially in the US. Some of the material is already dated: ITER won't start operation until December, 2025 at the earliest; tritium won't be used until 2034 or so. Although it may not be completely obvious, by the time I was done with my argument, I had changed my mind. Fusion will come too little and too late (and will probably be too expensive) to have any substantial impact on climate change.

    If you haven't played around yet with the climate models I link to at the University of Chicago web site, it's worth taking a look.
