Tuesday, July 31, 2012

Reax to Weekend's PR Blitz

Jason Samenow at the Washington Post has a lot of strong thoughts on the weekend's PR blitz, and collects reactions to it.
"Both studies staged high-profile releases and represent concerted efforts to influence public perception about what we know about climate science. But neither has been published in a peer-reviewed publication and there is cause to question their legitimacy."
Ben Santer in the LA Times:
“I think [Muller] can do great harm to the broader debate. Imagine this scenario: that he makes these great claims and the papers aren’t published? This (op-ed) is in the spirit of publicity, not the spirit of science.”
Samenow:
"The claim that this single study [BEST] was “too important” to hold back - especially in light of scores of other important studies which received no such pre-publication fanfare - reeks of arrogance on the part of the author team."
"Irrespective of the flaws in Muller’s analysis or its merits - grabbing headlines in the New York Times prior to peer review represents an enormous tactical mistake. Peer review is the primary pillar of scientific legitimacy. Without it, a study has little to support it." 
"Science blogger David Appell had it exactly right when he said the Watts paper is “exactly the kind of paper that most needs peer review: based on a lot of judgements and classifications and nitty gritty details....”
and
"The Muller and Watts studies no doubt represent a lot of hard work and may eventually prove to be valuable contributions to science. But we should reserve judgment on their significance."

"And this new effort by these scientists to grab attention for studies that have not yet been vetted by other, independent scientists is disturbing and unproductive. It’s a disingenuous attempt to score points on a highly polarized scientific issue."

"My advice? Ignore these publicity stunts and pay no attention to these studies until they have passed peer review. And even studies that have been peer reviewed should be viewed with a certain amount of skepticism until they have been confirmed in multiple subsequent studies and stood the test of time."
One obvious hole: Watts et al. made little effort to communicate the statistical significance of their trends. That is a crucial part of any piece of science; without it a result is essentially meaningless (ask the teams who discovered the Higgs), and all the more so when you claim results to three significant figures. Their section 3.2.4, "Statistical Significance Testing," isn't what I mean. I mean the estimated uncertainty (error bars) on each of the trend results: are they good to 1%, or to 50%? It matters, and I suspect some reviewers will fail the paper on this point alone.
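To make the point concrete, here is a minimal sketch in Python of what quoting a trend alongside its uncertainty looks like. The numbers below are synthetic, invented purely for illustration; they are not from Watts et al.'s dataset or any real record.

    import numpy as np
    from scipy import stats

    # Synthetic stand-in data: 30 years of annual temperature anomalies (deg C)
    # with a built-in trend of 0.02 C/yr plus noise. Invented for illustration.
    rng = np.random.default_rng(42)
    years = np.arange(1979, 2009)
    anomalies = 0.02 * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

    # Ordinary least-squares fit; stderr is the standard error of the slope,
    # i.e. the "error bar" that belongs next to any quoted trend.
    fit = stats.linregress(years, anomalies)

    # A trend quoted to three significant figures means little without this:
    print(f"trend = {fit.slope:.4f} +/- {fit.stderr:.4f} C/yr")
    print(f"p-value against a zero-trend null: {fit.pvalue:.3g}")

If the standard error turns out to be comparable to the trend itself, a three-significant-figure trend is noise dressed up as precision.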
