But did it really set a record? The difference between these two records, 0.06°C, is small -- especially since the monthly data are only given to the nearest 0.1°C. Could uncertainties in the measurements mean these numbers are essentially indistinguishable?
I think one way to get a handle on this (which certainly is not rigorous) is as follows:
In reality the two highest values in the dataset, T1 = 10.93°C and T2 = 10.87°C, each carry an uncertainty. If someone else at the same location had been making the same measurements, they would probably have gotten slightly different numbers. If lots of people were making the same measurements, there'd be a range of numbers for T1 and T2.
Assuming the measurements were all independent, the spread of measured values of T1 would be close to normally distributed, and the same for T2.
Of course, nobody actually made all those measurements -- all we have are the monthly numbers. Since they're cited to the nearest 0.1°C, we might take that as the uncertainty ΔT of each month's temperature (or at least an upper bound on it). Then the uncertainty ΔA of the annual average, again assuming normal distributions, is (with 12 measurements in a year) σ = ΔA = ΔT/sqrt(12) = 0.029°C.
(Yes, the uncertainty of the average is smaller than the uncertainty of any single measurement, assuming the errors are independent and normally distributed.)
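Here's that arithmetic as a quick Python check (assuming, as above, that the 0.1°C rounding is a fair stand-in for each month's uncertainty and that the twelve errors are independent):

import math

dT = 0.1                     # assumed uncertainty of each monthly value, in °C
sigma = dT / math.sqrt(12)   # uncertainty of the 12-month average
print(round(sigma, 3))       # 0.029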
The situation is shown in the figure to the right: two overlapping bell curves peak at the two temperatures T1 and T2, each with the same width σ.
The green line denotes where the two functions intersect.
Then the probability that T1 is greater than T2 is the area under the red curve (the one centered at T1) from the green line out to infinity.
Since they each have the same width, you can calculate where the red and blue curves intersect from their Gaussian functions: setting (x - T1)^2 = (x - T2)^2 in the exponents gives x = (T1 + T2)/2, just the average of the two peaks.
Then you can evaluate the area to the right of the green line using the error function.
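Written out, with m = (T1 + T2)/2 for the crossing point, that area is

P(T1 > T2) = (1/2)·[1 + erf( (T1 - m) / (σ·sqrt(2)) )] = (1/2)·[1 + erf( (T1 - T2) / (2σ·sqrt(2)) )]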
When I do this with T1 = 10.93°C, T2 = 10.87°C, and σ = ΔT/sqrt(12) = 0.029°C, I find the probability that HadCET set a new record is
85%.
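For anyone who wants to reproduce the number, here's a short Python sketch of the calculation as I've described it (my reconstruction, not anyone's official script):

import math

T1 = 10.93                    # 2014 annual mean, °C
T2 = 10.87                    # previous record, °C
sigma = 0.1 / math.sqrt(12)   # assumed uncertainty of the annual mean, °C

midpoint = (T1 + T2) / 2      # where the two Gaussians cross
z = (T1 - midpoint) / (sigma * math.sqrt(2))
prob = 0.5 * (1 + math.erf(z))   # area under the T1 curve to the right of the crossing

print(round(100 * prob))      # about 85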
Which seems reasonably reasonable.
Again, this isn't rigorous, just a back-of-the-envelope calculation someone might do before she went and studied how to do it exactly. Good enough for blog work.
Just as a check: if T1 = T2, this method gives a 50% probability that T1 > T2, which is what you'd expect. If σ were twice as large, the probability drops to 70% -- and you'd expect it to be smaller. If σ = 1°C, the probability is about 51%, again reasonable. And as T1 increases with T2 and σ held fixed, the probability gets closer and closer to one, as expected.
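Those checks are easy to run with the same formula, wrapped in a small helper (p_record is just my name for it, nothing official):

import math

def p_record(T1, T2, sigma):
    # Probability that T1 beats T2, using the crossing-point method above
    mid = (T1 + T2) / 2
    return 0.5 * (1 + math.erf((T1 - mid) / (sigma * math.sqrt(2))))

print(p_record(10.93, 10.93, 0.029))   # T1 = T2: exactly 0.5
print(p_record(10.93, 10.87, 0.058))   # sigma doubled: about 0.70
print(p_record(10.93, 10.87, 1.0))     # sigma = 1: just above 0.5
print(p_record(11.50, 10.87, 0.029))   # a much bigger gap: essentially 1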
If this is off-the-wall wrong, or worse, not even wrong, let me know in the comments.
Update: as a commenter noted, Gavin Schmidt discussed something much like this on Twitter.
2 comments:
There was no explicit explanation, but I assumed that Gavin Schmidt was doing a similar calc here:
https://twitter.com/ClimateOfGavin/status/540913888829902848/photo/1
https://twitter.com/ClimateOfGavin/status/542118394200608768/photo/1
Mark, yes, thanks, I forgot about those. The second one was what got me thinking about this in the context of temperature records. Although I remember a post many years ago by Kevin Drum, who asked a similar question about election polls, which show candidates close "within the margin of error." The media often call that a "dead heat" or "statistical tie." Drum put up a matrix of values calculated for him by two statisticians, of the probability one candidate was ahead of the other, given the polling gap between them and the poll's margin of error. I assumed they did something like this.