The Scientific Method is actually less complex, in a way, than its pretentious title makes it sound. The essence is simple:
- Observe the world, looking for patterns.
- Make a guess as to what might cause the patterns.
- Make predictions: if your guess is true, what would you expect to happen?
- Test those predictions with observation or experiment.
- If wrong (and this is the usual result), start over and refine your guess.
- If your guess seems supported, tell the world how you got your results so they can get them, too.
The last bullet point, the “tell the world” aspect, is vital. There are so many ways to make mistakes and to find results that seem to support your idea even when those results are accidental. It’s not just a matter of probability calculations here, but of philosophical approach. Richard Feynman recounts a story of scientists testing mice in mazes, with what seemed inexplicable results. The story is worth the read, and includes nude babes and jungle statues. A couple of pertinent quotes:
It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.
Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.
In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.
* * *
But this long history of learning how not to fool ourselves—of having utter scientific integrity—is, I’m sorry to say, something that we haven’t specifically included in any particular course that I know of. We just hope you’ve caught on by osmosis.
The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.
It is in the context of these ideas — which I first read nearly half a century ago — that I’ve been viewing the global warming debate. (At the time, of course, it was the “global cooling” debate—some of the same scientists involved in global warming were then touting the coming ice age.)
This article, just published in Nature, talks about the problem of bad science, with what seems a careful omission. They focus on the problem in psychology (as Feynman did above) but list other sciences in a chart called “Accentuate the Positive”:
This chart indicates a compulsion to produce positive results, results that support the consensus:
It has become common practice, for example, to tweak experimental designs in ways that practically guarantee positive results. And once positive results are published, few researchers replicate the experiment exactly, instead carrying out ‘conceptual replications’ that test similar hypotheses using different methods. This practice, say critics, builds a house of cards on potentially shaky foundations.
The chart is arranged by the percentage of positive results. But the highest number on the chart — that of psychiatry/psychology — is still less than the so-called 97% consensus support that catastrophists describe for their field. Climate science is not on the list above — but would arguably appear at the very bottom.
There is a small number of highly visible climate scientists who are creating integrity problems for the science, and there is a large financial and career compulsion that tempts many others to come up with the expected results, one way or another. This would be self-correcting if these scientists actually used the scientific method, the “tell the world how you got your results” part that Feynman speaks to at length.
The big fight between catastrophists and skeptics has been over “how you got your results.” Releasing the data and methods to a skeptic would be “bad for my career” says one catastrophist scientist. Another famously responded, “Even if WMO agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”
Of course, if a skeptic finds “something wrong with it” and is mistaken, he or she loses credibility. The ClimateAudit website has undertaken a great many analyses of climate studies, and publishes all of the data and source code used so that readers can follow along, try it (using free software), and suggest improvements and new approaches.
This evidently scares the crap out of the catastrophists. They continue to fight a battle at every turn to keep from releasing the data and source code used to produce key supporting pillars of the catastrophist position, such as the famous Hockey Stick. As recently as yesterday, the catastrophists were loudly and publicly defending this work (and defending the withholding of the data and code), even though we know from their private emails (revealed in the Climategate releases) that they were admitting to each other that the Hockey Stick graph was indefensible, bad science. Privately, they’d said:
[#3373] I’m sure you agree–the Mann/Jones GRL paper was truly pathetic and should never have been published. I don’t want to be associated with that 2000 year “reconstruction”.
The quote above is by Raymond S. Bradley, who was the author/editor of the Paleoclimatology textbook on the shelf next to me. More quotes on this topic, and what the climate scientists were saying to each other about these temperature reconstructions, can be seen here. The further context in that link above is interesting; they’re rather cynical about all of this. So is this email, and many more like it:
[#1939] You’ll be unsurprised to hear that I think this paints too rosy a picture of our understanding the vertical structure of temperature changes. Observations do not show rising temperatures throughout the tropical troposphere unless you accept one single study and approach and discount a wealth of others. This is just downright dangerous. We need to communicate the uncertainty and be honest.
They have not, it seems, learned to “be honest” — and the process that should be part and parcel of the scientific method continues to be obstructed.
[#1656] How should we deal with flaws inside the climate community? I think, that “our” reaction on the errors found in Mike Mann’s work [the Hockey Stick] were not especially honest.
This paper documents what should be done, instead, to produce good science:
Requirements for authors
1. Authors must decide the rule for terminating data collection before data collection begins and report this rule in the article.
2. Authors must collect at least 20 observations per cell or else provide a compelling cost-of-data-collection justification.
3. Authors must list all variables collected in a study.
4. Authors must report all experimental conditions, including failed manipulations.
5. If observations are eliminated, authors must also report what the statistical results are if those observations are included.
6. If an analysis includes a covariate, authors must report the statistical results of the analysis without the covariate.
Guidelines for reviewers
1. Reviewers should ensure that authors follow the requirements.
2. Reviewers should be more tolerant of imperfections in results.
3. Reviewers should require authors to demonstrate that their results do not hinge on arbitrary analytic decisions.
4. If justifications of data collection or analysis are not compelling, reviewers should require the authors to conduct an exact replication.
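The fifth requirement above is easy to demonstrate. Here is a minimal Python sketch of reporting results both with and without excluded observations; the data values and the exclusion threshold are invented purely for illustration and come from no actual study:

```python
# Requirement 5 sketch: when observations are excluded, report the
# statistics both ways so readers can judge the effect of the exclusion.
# All numbers below are hypothetical illustration values.
from statistics import mean, stdev

all_obs = [4.1, 3.9, 4.3, 4.0, 9.8, 4.2]  # 9.8 flagged as an outlier
kept = [x for x in all_obs if x < 9.0]     # the exclusion rule, stated explicitly

def report(label, xs):
    """Summarize a sample: count, mean, and standard deviation."""
    return f"{label}: n={len(xs)}, mean={mean(xs):.2f}, sd={stdev(xs):.2f}"

print(report("with excluded obs   ", all_obs))
print(report("without excluded obs", kept))
```

A single excluded point moves the mean here from 5.05 to 4.10; stating the exclusion rule before looking at the data, and reporting both summaries, is what keeps the choice from becoming an arbitrary analytic decision.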
As you read through the Climategate emails — I’ve read all of them — it becomes evident that there are problems at each of the above stages. With the Hockey Stick, perhaps the most obvious one has been “5. If observations are eliminated, authors must also report what the statistical results are if those observations are included.”
In the Hockey Stick graph, a surprisingly small amount of source data was weighted and adjusted (including being cut off early) to produce a graph that looked like a hockey stick: a thousand years of stable temperature, then a sudden rise at the end of the twentieth century to “unprecedented” levels. But while the source code and data for these results have not been completely released to this day (and there are ongoing court battles over them!), enough has been released to show that many other proxies were omitted from the graph. And we can get an idea “what the statistical results are if those observations are included”: unsurprisingly, the hockey stick vanishes. The researcher who wrote this article passed away (to the private cheers of the catastrophist scientists) before Climategate and the release of some of this information, but he got it basically right.
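The effect of cutting a series off early is simple arithmetic, and can be sketched in a few lines. The series below is synthetic, invented purely to illustrate the mechanism, and is not climate data:

```python
# Sketch: truncating a series early can erase its apparent trend.
# The values are synthetic illustration data, not any actual proxy record.
def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(20))
values = [10.0] * 15 + [9.0, 8.5, 8.0, 7.5, 7.0]  # flat, then declining

full_trend = slope(years, values)             # negative: the decline shows
truncated = slope(years[:15], values[:15])    # zero: the decline is cut off
print(full_trend, truncated)
```

Dropping the final five points turns a clearly negative trend into a perfectly flat one, which is why requirement 5 above, reporting the results with the omitted observations included, matters so much.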
When more data is included, as Steve McIntyre of ClimateAudit shows here, the chart of tree-ring data (which corresponds partially to growing-season temperatures, among other things) suggests that recent temperatures have been unremarkable.
But the bigger issue is the consistent, furious attempts to hide/obfuscate/obstruct access to the data and code that would show catastrophist results to be supportable — or not. This is coupled with the complete, unwavering defense of catastrophists caught in blatant bad behavior. If it aligns with the Cause of Global Warming, absolutely nothing is impermissible. That is eating away at the integrity of the scientific method, and is a tremendous shame. It is also a tremendous expense for a cause supported by “science” that has not been demonstrated to survive even a casual audit.
So how do catastrophist scientists get away with it? How is it that, for years, they can advance a theory based on papers that they privately admit are “indefensible”? Well, they have powerful forces on their side — governments, corporations, and non-profits with lots of capital and influence — all invested in those results.
And even if your theory is flawed, you can still survive — as this rabbit demonstrates in the Tale of the Superiority of Rabbits over Foxes and Wolves (h/t Tora Kiyoshi).
===|==============/ Keith DeHavelle
- Simmons, Nelson & Simonsohn (2011): people.psych.cornell.edu/~jec7/pcd pubs/simmonsetal11.pdf