Like many teachers, I’ve agonized over what to tell my students about the crises convulsing us lately, the pandemic and the U.S. presidential election. What lessons can we draw from what’s happened? I’ve decided to double down on the anti-wisdom I lay on all my classes: Distrust authorities, including me.
I’ve inadvertently demonstrated that precept for my students. I’m a lefty with an optimistic streak, so I predicted that Joe Biden would win handily on election night; that was my take on polls showing Biden leading Trump in Florida, Ohio, and other swing states. Pollsters, I told my students, had surely corrected the mistakes they made four years ago, when they underestimated Trump’s support. But once again pollsters “got Donald Trump wrong,” as Politico put it. Wishful thinking led me, and perhaps many of the pollsters, astray.
Optimism has also distorted my view of the coronavirus. Last March, I took heart from warnings by Stanford epidemiologist John Ioannidis that we might be overestimating the deadliness of the virus and hence overreacting to it. He predicted that the U.S. death toll might reach only 10,000 people, lower than the average annual toll of seasonal flu. I wanted Ioannidis to be right, and his analysis seemed plausible to me, but his prediction turned out to be wrong by more than an order of magnitude.
Ironically, Ioannidis is renowned for raising doubts about scientific experts. In his blockbuster 2005 article “Why Most Published Research Findings Are False,” Ioannidis presented evidence that a majority of the claims in peer-reviewed papers cannot be corroborated. Since then, Ioannidis has continued documenting problems in the scientific literature and tracing them to factors such as confirmation bias, competition for funding, and conflicts of interest.
“There is increasing evidence that some of the ways we conduct, evaluate, report and disseminate research are miserably ineffective,” Ioannidis declared in Scientific American in 2018. “We should not trust opinion (including my own) without evidence.” Although many in the scientific community have attacked Ioannidis’s views on COVID-19, his critiques of the scientific literature have been widely embraced.
Another notable expert critic of experts is social psychologist Philip Tetlock of the University of Pennsylvania. In his 2005 book Expert Political Judgment, Tetlock reports on a study of 284 professional pundits, including academics, government officials, and journalists, who comment on politics and related issues via mass media and in scholarly journals and conferences. Tetlock assessed the accuracy of 28,000 of these pundits’ predictions concerning elections, wars, economic collapses, and other events.
The experts’ accuracy was no better than chance, or than that of a dart-throwing monkey, as Tetlock put it. Not only that: their accuracy was inversely correlated with their prominence. That is, the more exposure they got from CNN, Fox News, The Wall Street Journal, and The New York Times, the less likely their predictions were to hold up. This counterintuitive finding makes sense when you consider that pundits get more attention by making dramatic pronouncements, which are also more likely to be wrong.
As a science journalist, I’ve criticized lots of experts, including Nobel Prize winners and tenured professors at fancy universities. In addition to telling students about my work, and that of Ioannidis and Tetlock, I mention philosophical critiques of science mounted by Thomas Kuhn and Karl Popper. Scientists can never prove a theory is true, Popper insisted; they can only falsify or disprove it. Kuhn, similarly, warned that absolute truth is unattainable; scientific theories are always provisional, subject to change.
But after dumping all this skepticism on my students, I warn them not to be too skeptical. I remind them that science, in spite of its fallibility, represents an extraordinarily powerful method for understanding and manipulating nature. Science has helped us vanquish smallpox and other diseases, send spacecraft to the Moon and Mars, and invent jets, smartphones, and other technologies that have transformed our planet.
We believe in the bedrock theories of science—quantum mechanics, relativity, the big bang, the theory of evolution, the genetic code—because scientists have amassed overwhelming evidence for them. We should believe that vaccines are effective and that fossil-fuel emissions are warming the planet for the same reason.
So yes, I tell my students, distrust scientists and other experts, while never forgetting that sometimes they get things right. Scientists can also earn our trust, paradoxically, by admitting their fallibility. Many of us trust what Anthony Fauci says about COVID-19 because he “admits uncertainties and failings,” as Scientific American has noted.
Self-criticism, I tell my students, is difficult. It’s much easier to spot flawed thinking in others than in yourself. How do I practice self-criticism? I try to be transparent—with students, readers, and myself—about my prejudices. I also try to understand the perspectives of those with whom I disagree. That’s why, last spring, I spoke to a Texan strength-training guru and Trump supporter who thought that the U.S. was overreacting to COVID-19.
Now that the election is over, I find myself once again peering into the future. Will Trump’s devotees and the Republican party accept Joe Biden and Kamala Harris as their leaders? Will the Pfizer vaccine turn out to be as effective as preliminary trial results suggest? I’m trying hard, but not that hard, to keep my wishful thinking in check.
John Horgan directs the Stevens Center for Science Writings. This column is adapted from one originally published on ScientificAmerican.com.