Matt Ridley is the best popular science writer in the business. I offer that observation as an example of the topic that Ridley concentrated on in three recent pieces for the Wall Street Journal: confirmation bias. While I can objectively defend Ridley's excellence, the main reason I rank him so highly is that he confirms a lot of what I already believe. Consider that mea culpa properly filed.
Ridley's first installment exposes a conceit of scientific culture.
There's a myth out there that has gained the status of a cliché: that scientists love proving themselves wrong, that the first thing they do after constructing a hypothesis is to try to falsify it. Professors tell students that this is the essence of science.
Yet most scientists behave very differently in practice. They not only become strongly attached to their own theories; they perpetually look for evidence that supports rather than challenges their theories. Like defense attorneys building a case, they collect confirming evidence.
He goes on to explain that this is "only human". Of course. The human mind is a narrative generator. Our thought and language consist of stories we construct out of the raw material that enters through our senses. Some of these stories are pragmatic: I am allergic to this food, so I shouldn't eat it. Some are all about self-esteem: if they weren't in charge, I'd be doing better. Almost always the motives overlap. We persist in following failed strategies because we can't admit to being wrong. Methods for rigorous analysis can help bring our narratives in line with reality, but they don't change the weight of the investments we make in those narratives.
Confirmation bias occurs when we ignore evidence that is inconsistent with our narrative or theory in favor of evidence that confirms it. Despite their official loyalty to the principle of falsification, scientists are as susceptible to confirmation bias as anyone else. So what corrects for this bias in the progress of science?
Ridley offers an answer in his second installment.
The answer was spelled out by the psychologist Raymond Nickerson of Tufts University in a paper written in 1998: "It is not so much the critical attitude that individual scientists have taken with respect to their own ideas that has given science the success it has enjoyed... but more the fact that individual scientists have been highly motivated to demonstrate that hypotheses that are held by some other scientist(s) are false."
Most scientists do not try to disprove their ideas; rivals do it for them. Only when those rivals fail is the theory bomb-proof. The physicist Robert Millikan (who showed minor confirmation bias in his own work on the charge of the electron by omitting outlying observations that did not fit his hypothesis) devoted more than 10 years to trying to disprove Einstein's theory that light consisted of particles (photons). His failure convinced almost everybody but himself that Einstein was right.
This is obviously right. I say that because it confirms my bias concerning parties in politics. To make policy, like-minded persons have to join together, formulate positions, and work to see them translated into policy. A group of like-minded people is not, however, very open to evidence that tells against its biases. The solution is to have at least one honorable opposition.
Another reason this seems right is that it is not so much the best solution as the only one. Once you come to grips with the reality of confirmation bias, in science and everywhere else, what other solution is there?
In the final installment, Ridley turns to the specific problem of climate change.
Last month saw two media announcements of preliminary new papers on climate. One, by a team led by physicist Richard Muller of the University of California, Berkeley, concluded "the carbon dioxide curve gives a better match than anything else we've tried" for the (modest) 0.8-degree Celsius rise in global average temperatures over land during the past half-century (less, if the ocean is included). He may be right, but such curve-fitting reasoning is an example of confirmation bias. The other, by a team led by the meteorologist Anthony Watts, a skeptical gadfly, confirmed its view that the Muller team's numbers are too high, because "reported 1979-2008 U.S. temperature trends are spuriously doubled" by bad thermometer siting and unjustified "adjustments."
Much published research on the impact of climate change consists of confirmation bias by if-then modeling, but critics also see an increasing confusion between model outputs and observations. For example, in estimating how much warming is expected, the most recent report of the Intergovernmental Panel on Climate Change uses three methods, two based entirely on model simulations.
Perhaps you can see the problem. Computer simulations are useful not only for understanding phenomena but for producing results that can be reported to the press. Such models, however, inevitably build in assumptions about what is going on and these are frequently designed to confirm what the designer expects to find.
Ridley does not consider the influence of nonscientific biases, such as political opinions. In the climate change controversies, politics exercises an enormous gravitational force. Most, but not all, of those making a strong case for the anthropogenic global warming thesis and the need for global legislation are comfortable with the idea that governments should exercise more control over private economic activity. Most, but not all, of the critics are not.
The critics of global alarmism are no more immune to confirmation bias than the proponents. The problem with the debate so far has been the constant attempt by the proponents to shut it down. We are constantly told that the consensus of climate scientists is that global warming is real and that big changes in our economies are necessary to correct it. Consensus, however, is more often the enemy of good science than its friend. Almost all moments of genuine progress in science happen when some critic successfully challenges the consensus.
I think that Ridley is right on all counts but that is surely subject to confirmation bias on my part. I note that this involves not only political bias but a personal delight at seeing the consensus undermined. Ridley thinks that good science happens when one scientist tries to tear a big hole in the theories of other scientists. The latter either emerge stronger or suffer a deserved collapse. Of course, his preference for an adversarial culture in science may itself be a product of confirmation bias. But again, what else is there?