
Science Surveyor and the Quest for Consensus


Broccoli causes cancer! Global temperatures are falling! Chocolate helps you lose weight!

Why is science news so bad?

More specifically: Why do journalists insist on trumpeting the findings of the latest, anomalous study—ignoring the weight of all the evidence that came before?

It’s certainly one of the most frustrating tendencies in science journalism—as Marguerite Holloway surely knows. Her group of journalism, computer science, and design researchers at Stanford and Columbia is developing Science Surveyor, a tool to help journalists quickly get an idea of the scientific context for new studies, according to Nieman Lab.

The project is at an early stage of development, but the idea is that the tool would search academic databases for relevant papers and try to determine if the new study is an outlier or part of a consensus. It would also show changes in the field over time. The researchers hope the tool can communicate all this through simple, easily understood graphics.
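Nieman Lab’s report describes the tool only at a high level, and the project’s actual methods aren’t public, so any implementation details are guesswork. Purely as an illustration of the general idea, here is a minimal Python sketch in which a tiny hand-labeled corpus stands in for the database search and stance classification a real tool would need; every name, label, and count in it is hypothetical.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Paper:
        year: int
        stance: str  # "supports", "contradicts", or "neutral" on the claim

    # Toy hand-labeled corpus; a real tool would have to retrieve papers
    # from academic databases and classify their stances automatically.
    CORPUS = {
        "chocolate aids weight loss": [
            Paper(2004, "contradicts"),
            Paper(2008, "contradicts"),
            Paper(2011, "contradicts"),
            Paper(2013, "neutral"),
        ],
    }

    def survey(claim: str, new_stance: str) -> str:
        """Place a new study's stance against the prior literature."""
        papers = CORPUS.get(claim, [])
        if not papers:
            return "No prior literature found; context unknown."
        counts = Counter(p.stance for p in papers)
        majority, n = counts.most_common(1)[0]
        verdict = ("consistent with" if new_stance == majority
                   else "an outlier against")
        return (f"Majority stance across {len(papers)} prior papers: "
                f"'{majority}' ({n / len(papers):.0%}); "
                f"the new study is {verdict} that majority.")

    print(survey("chocolate aids weight loss", "supports"))
    # Majority stance across 4 prior papers: 'contradicts' (75%);
    # the new study is an outlier against that majority.

Even a toy version makes the design problem visible: the hard parts are not the final tally but retrieving the right prior papers and deciding what each one actually claims.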

If it lives up to this promise, the tool could mitigate one of the most frustrating problems in science journalism.

Where’s the context?

Communications scholars have for some years now recognized the dangers of science news items that fail to put studies in their proper context. For example, Corbett and Durfee found that when climate change stories lacked information about the scientific consensus, readers were less likely to perceive scientific certainty about the issue.1

Communications researchers also have some idea of why such problematic reporting occurs, though some of the reasons are fairly obvious. One is journalism’s obsession with speed, which has only accelerated in the past few decades. University public relations departments have honed their game to meet this challenge, providing news outlets with press releases that can easily form the basis of stories, with a minimum of additional reportage or verification.

Another factor is “balance,” which arose as a journalistic norm to ensure that both sides of a story get heard. For many topics, this can be an appropriate way of presenting information to the public. As University of Wisconsin professor Sharon Dunwoody argues, “Although journalism exists in principle to help individuals make reasoned decisions about the world around them, journalists are rarely in a position to determine what’s true. Objectivity and balance have evolved over time to serve as surrogates for truth claims.”2

In science reporting, this approach has become both a shield and a stumbling block, allowing journalists to claim they’ve done their duty, while actually misinforming the public. One shouldn’t resort to a truth “surrogate” where the truth can actually be found.

Photo: Lazy seal. Credit: Efraimstochter, via Pixabay.

Are journalists just lazy?

With all the dangers that context-free science reporting poses, it’s easy to see the journalists who write such stories as lazy or incompetent. And we can’t forget, either, that publishers have a financial interest in emphasizing novelty, controversy, and the “man bites dog” imperative. They know lots of people will read about how chocolate helps weight loss, while the old news about chocolate packing on the pounds is far less intriguing.

At the same time, if we hope to change journalistic practice, it’s important to look more deeply at the conditions that shape it. One of those conditions is limited expertise.

The expertise problem exists at many scales. Yes, as frustrated scientists and science enthusiasts have pointed out, there are journalists who cover science without the most basic skills the job requires: without a rudimentary understanding of probability and statistics, without knowing why a control group is needed, or why a study of mice hardly tells us anything about humans.

But the problem may be more systemic and unavoidable than that. Even the most talented science writers—including individuals with advanced degrees in scientific disciplines—typically cover a variety of topics. It’s hard to make a living as “that guy who writes about microbiology” or “that woman who writes about neutron stars.”

As Dunwoody writes, “Science writers… are defined as specialists among journalists, yet most cover a wide variety of topics, from nanotechnology to stem cells. There’s solid evidence that years in the saddle is a good predictor of one’s knowledge base as a journalist—science writers who have been covering the beat for a couple of decades know a great deal about many things—but even experienced journalists cannot grasp the factual intricacies of all they cover.”3

Chart: cumulative total of papers in the study sample, by year, that endorse, reject, or take no position on man-made climate change.

How do we determine “consensus”?

There’s also an inherent difficulty in getting a true sense of “scientific consensus.” Take global warming. The scientific consensus may seem obvious now, after several studies have established that a vast majority of climate scientists believe that climate change is real and man-made4, 5, 6—most recently, a 2013 paper by Cook, Nuccitelli, et al., which established the oft-cited 97 percent figure.7

But for a long time, the weight of evidence wasn’t clear to journalists covering climate. In a survey of members of the Society of Environmental Journalists, published in 2000, only one-third of the reporters knew that global warming was accepted by most atmospheric scientists.8

Should they have known better? How would they have known better? Establishing the weight of scientific evidence is far from a quick process. As Cook’s work shows, it takes careful analysis and, in fact, a non-trivial amount of statistical know-how—it’s a process that likely took these researchers months.
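Notably, those months went into rating thousands of abstracts; the arithmetic behind the headline number is simple once the ratings exist. As a quick check, the short script below reproduces the 97 percent figure from the abstract counts reported in the 2013 paper (the counts are Cook et al.’s; the script itself is only an illustration).

    # Abstract counts as reported in Cook et al. (2013) for 11,944
    # climate abstracts published 1991-2011.
    counts = {
        "endorse_agw": 3896,
        "reject_agw": 78,
        "uncertain": 40,
        "no_position": 7930,
    }

    total = sum(counts.values())                   # 11,944 abstracts
    took_position = total - counts["no_position"]  # 4,014 expressed a position
    consensus = counts["endorse_agw"] / took_position

    print(f"{took_position} of {total} abstracts took a position; "
          f"{consensus:.1%} of those endorse human-caused warming.")
    # 4014 of 11944 abstracts took a position; 97.1% of those
    # endorse human-caused warming.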

On the other hand, the journalist’s toolbox contains a number of shortcuts that are supposed to get around this difficulty. Calling up a few experts on a topic can certainly help—but how can the journalist be assured that the scientists she chooses will be on the side of the consensus, if there is one? After all, even Nobel Prize winners have been known to develop wacky ideas far outside the mainstream.

Then there are scientific associations and governmental bodies, like the AAAS and the Centers for Disease Control. These can be an invaluable resource. But even well-respected, well-funded bodies have demonstrated that they haven’t yet mastered the fine art of risk communication, and their poorly chosen words can lead journalists astray. Just look at the World Health Organization’s mangled attempts to explain the cancer risk posed by bacon.

New approaches, new tools

So what’s the solution? Dunwoody argues that journalists must take a “weight-of-evidence” approach, that is, “to find out where the bulk of evidence and expert thought lies on the truth continuum and then communicate that to audiences.”9

Building on this, Clarke, Dixon, Holton, and McKeever argue for “evidentiary balance,” which would combine the weight-of-evidence approach with appropriate hedging of scientific findings.10 That means telling readers about study limitations—crucial information that often goes unreported.11

These are good frameworks to start with, but much more needs to be done. First, every link in the science communication chain—from university PR departments, to government and non-governmental agencies, to publishers, editors, and reporters—must start taking seriously the findings of communication researchers.

This means tossing out the old-fashioned and irresponsible model of journalism whereby reporters simply throw words down on a page, according to an institutional formula, and expect their audience to make sense of the story. It means gaining a basic grasp of how readers construct meaning out of the words on the page and the ideas and attitudes already in their heads, and understanding that different people and groups will construct meanings differently.

And yes, it means finding better and faster ways of understanding just what the scientific consensus is. Thankfully, while the online age has brought us many ways of obscuring the truth, it offers opportunities too. Toward this end, Science Surveyor marks a promising start.



References

  1. Corbett, Julia B., and Jessica L. Durfee. 2004. Testing public (un)certainty of science: Media representations of global warming. Science Communication 26(2): 129–51.
  2. Dunwoody, Sharon. Winter 2005. Weight-of-evidence reporting: What is it? Why use it? Nieman Reports 59(4): 89–91.
  3. Ibid.
  4. Oreskes, Naomi. 2004. The scientific consensus on climate change. Science 306(5702): 1686.
  5. Anderegg, William R. L., James W. Prall, Jacob Harold, and Stephen H. Schneider. 2010. Expert credibility in climate change. Proceedings of the National Academy of Sciences of the United States of America 107(27): 12107–9.
  6. Doran, Peter T., and Maggie Kendall Zimmerman. 2009. Examining the scientific consensus on climate change. Eos, Transactions American Geophysical Union 90(3): 22.
  7. Cook, John, Dana Nuccitelli, Sarah A. Green, Mark Richardson, Bärbel Winkler, Rob Painting, Robert Way, Peter Jacobs, and Andrew Skuce. 2013. Quantifying the consensus on anthropogenic global warming in the scientific literature. Environmental Research Letters 8(2): 024024.
  8. Wilson, Kris M. 2000. Drought, debate, and uncertainty: Measuring reporters’ knowledge and ignorance about climate change. Public Understanding of Science 9(1): 1–13.
  9. Dunwoody, Sharon. Winter 2005. Weight-of-evidence reporting: What is it? Why use it? Nieman Reports 59(4): 89–91.
  10. Clarke, Christopher E., Graham N. Dixon, Avery Holton, and Brooke Weberling McKeever. 2015. Including ‘evidentiary balance’ in news media coverage of vaccine risk. Health Communication 30(5): 461–72.
  11. Jensen, Jakob D., Nick Carcioppolo, Andy J. King, Jennifer K. Bernat, LaShara Davis, Robert Yale, and Jessica Smith. 2011. Including limitations in news coverage of cancer research: Effects of news hedging on fatalism, medical skepticism, patient trust, and backlash. Journal of Health Communication 16(5): 486–503.
