
Why We Often Get Risks Wrong


Getting Risk Right: Understanding the Science of Elusive Health Risks. By Geoffrey C. Kabat. Columbia University Press, New York, 2016. ISBN 978-0-231-16646-1. 272 pp. Hardcover, $35.00.



Geoffrey Kabat devoted his previous book, Hyping Health Risks: Environmental Hazards in Daily Life and the Science of Epidemiology (Columbia University Press 2008, reviewed in the July/August 2009 SI), to debunking overblown claims about environmental hazards, such as purported environmental causes of breast cancer on Long Island and radon and electromagnetic fields as causes of cancer. In his new book, Kabat goes beyond simple debunking and sets himself a much more ambitious task: “Two questions at the heart of this book are, first, how is it that extraordinary progress is made in solving certain problems, whereas in other areas little progress is made, and, second, why do instances of progress get so little attention, while those issues that gain attention often tend to be scientifically questionable?” (p. 27).

In the first three chapters, Kabat writes about how some investigations of claimed risks get it right and uncover the actual causes of real harm, while investigations of non-risks end up causing much unneeded anxiety and wasting large sums of research funds and researchers’ time and effort. The last four chapters are case studies of specific investigations. Two of these resulted, through careful and arduous medical detective work, in uncovering the real cause of a very puzzling disease in one case and of a type of cancer in the other. The other two case studies concern investigations of environmental agents—cell phones and endocrine disruptors—that went badly off the rails and continue to unduly alarm the public and consume research time and money that could be much better spent studying actual risks.

The book starts with a brief introductory chapter, “The Illusion of Validity and the Power of ‘Negative Thinking.’” The second chapter, “The Splendors and Miseries of Associations,” begins with a discussion of basic concepts in epidemiology. It emphasizes the difficulty of teasing apart causation when causation is a complex network of interacting variables. This discussion could have been a bit clearer in places. For example, saying that “If two variables are correlated, as one increases, the other increases” (p. 14) ignores the existence of negative correlations. The chapter highlights the work of John Ioannidis, whose 2005 paper “Why Most Published Research Findings Are False” (PLoS Medicine 2005, 2, e124) caused much controversy when it appeared. The paper dealt with medical research, not with other areas of science, and it was often read as an attack on biomedical research in general, as if medical research could not really help distinguish what was risky or beneficial from what was not.

In actuality, Ioannidis’s paper was highly critical of the ease with which sloppy research was published in even fairly prestigious medical journals. In the context of Kabat’s book, this problem is most evident in the ease with which papers were, and still are, published claiming that substance x has some sort of harmful effect—sometimes, for some people, somewhere. This feeds the “toxic terror du jour” style of reporting that is so common in the popular media. Kabat and Ioannidis argue that different scientific disciplines within biomedicine have very different levels of tolerance for bad research. Studies of the genetic contributions to disease have “become extremely rigorous owing to agreed-on standards for large populations and the requirement for replication” (p. 24). Epidemiology, on the other hand, is plagued with hordes of small studies reporting “it might maybe be dangerous” sorts of results based on relatively small samples, poor study design, and weak analyses. This, it seems to me, is due to journal editors’ far too lax criteria for accepting studies for publication.

An excellent example of such lax publication standards is a study that appeared while I was composing this review. D. Leslie et al. published “Temporal Association of Certain Neuropsychiatric Disorders Following Vaccination of Children and Adolescents: A Case-Control Study” in Frontiers in Psychiatry: Child and Adolescent Psychiatry (2017, 00003). This study looked at the risk of seven different psychiatric disorders. Autism was not included, although Robert F. Kennedy Jr., the well-known autism crank, has been singing the praises of this study as evidence for a vaccine-autism link. The risks of broken bones and open wounds, presumably as controls, were also included. Seven different vaccination types were examined, each at three different times post-vaccination. This resulted in a grand total of 189 computed risk or hazard ratios (HRs). Sure enough, some of these comparisons resulted in HRs greater than one, presumably indicative of a risk. Of the 148 HRs for the different psychiatric disorders, forty-one were significant. Of these, thirty-one were above one. The remaining ten were below one, suggesting that vaccines protected against a particular disorder. There were also a few (eleven to be exact) significant HRs in the results for broken bones and open wounds. Parents the world over will be happy to learn that vaccination against varicella protects, at three months, against broken bones. No attempt was made in this study to correct the significance levels of the HRs for the number of comparisons made. Had this paper been handed in to me when I taught an introductory statistics class, it would have received a failing grade. And yet it got through the editorial process at a semi-respectable journal and was published.
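To see concretely why the lack of correction matters, here is a back-of-the-envelope sketch in Python. It is only a rough illustration, not anything from the paper or the book; the counts come from the review (nine outcomes, seven vaccine types, three follow-up windows), and treating the 189 tests as independent is a simplifying assumption.

```python
# Rough illustration of the multiple-comparisons problem described above.
# Counts are taken from the review: 9 outcomes x 7 vaccine types x 3 follow-up
# windows = 189 hazard ratios, each tested at the conventional 0.05 level.
# Treating the tests as independent is a simplifying assumption.

n_tests = 9 * 7 * 3   # = 189
alpha = 0.05          # per-test significance level

expected_false_positives = n_tests * alpha   # "significant" results expected by chance
bonferroni_threshold = alpha / n_tests       # a corrected per-test cutoff

print(f"Tests performed:                  {n_tests}")
print(f"Expected chance 'hits' at p<0.05: {expected_false_positives:.1f}")
print(f"Bonferroni-corrected threshold:   {bonferroni_threshold:.5f}")
```

With nine or ten “significant” hazard ratios expected from chance alone across a battery of tests this size, scattered uncorrected p-values carry very little evidential weight.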

In the third chapter, “When Risk Goes Viral,” Kabat discusses the reasons that studies reporting risks that may not be real, or are extremely small, are over-represented in the medical literature and then in the popular media. One obvious reason is that risks sell. They sell newspapers, and they sell “panic du jour” reporting on TV. But now-you-see-it-now-you-don’t risks also attract funding for further research. Even if an initial report shows only a minimal risk in a poorly designed study, that is an opportunity for the authors and others to obtain funding for further study by arguing that the risk might be real and thus deserves more research to find out, once and for all, whether it is. In this context, Kabat quotes UC Irvine researcher Robert Phalen (p. 43): “It benefits us [researchers] personally to have the public be afraid, even if these risks are trivial.” When additional studies of the probably illusory risk don’t absolutely, totally prove the risk isn’t real, well, that’s just a reason for even more funding to track down whatever variable might be responsible. Thus, some proponents of the vaccine-autism link suggest that there is a special group of autistics for whom there is a link, even though no link can be found for autistics as a whole. This is sort of like saying that the link exists only for those born on an odd-numbered Monday in the third week of a month without an “r” in its name.

Kabat, again summarizing Ioannidis’s work, notes that initial reports of risks tend to have follow-up studies that report much lower risks, if any at all. The same is true of initial reports of the effectiveness of questionable therapeutic interventions. The early reports seem hopeful, even dramatic, as was seen in the eruption of enthusiasm for facilitated communication. But then, as more studies are done with better design and controls, the initial effects fade away to either nothing or to a much more modest level. This is very reminiscent of the history of studies in parapsychology. A new paradigm to study the phenomenon is developed and, gee whiz, the first studies look great. But then results fade back to chance as better studies are done. And a few years later yet another flash-in-the-pan method is trotted out and touted as once-and-for-all finally proving that ESP is real ... until it, too, withers away.
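The fade-away pattern Kabat describes can be illustrated with a toy simulation. This is entirely my own construction with invented numbers, not anything from the book: if small, noisy studies of a tiny true effect are run and only the statistically significant ones attract attention, the first reported estimates will be inflated, and a larger follow-up study drifts back toward the modest truth.

```python
# Toy simulation of the "decline effect": small initial studies that are noticed
# only when they reach significance overstate a tiny true effect, while a larger
# follow-up study reports something much closer to reality.
# All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.1            # true effect size, in standard-deviation units
small_n, large_n = 20, 500   # small early studies vs. one large follow-up

noticed_estimates = []
for _ in range(10_000):
    sample = rng.normal(true_effect, 1.0, small_n)
    t, p = stats.ttest_1samp(sample, 0.0)
    if p < 0.05 and t > 0:   # only the "exciting" positive results get noticed
        noticed_estimates.append(sample.mean())

followup_estimate = rng.normal(true_effect, 1.0, large_n).mean()

print(f"True effect:                      {true_effect:.2f}")
print(f"Mean effect in 'noticed' studies: {np.mean(noticed_estimates):.2f}")
print(f"Large follow-up estimate:         {followup_estimate:.2f}")
```

The same selection pressure, applied to risk estimates rather than therapies or ESP experiments, is enough to make early hazard reports look alarming even when little or nothing is there.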

Another factor leading to continued study of, and hand-wringing about, nonexistent or trivial risks is that humans are much more likely to be afraid of things they don’t understand and can’t control. This nicely explains the fear, dating from the 1980s, that power lines cause cancer, a fear that has certainly not gone away even now. Related to this is another psychological factor—what Kabat terms the “availability cascade.” This is a version of the availability heuristic, in which people misjudge the likelihood of an event because examples of it come to mind more easily than examples of other events. Thus people judge travel by commercial airline to be more dangerous than travel by car, when just the reverse is the case. This happens because it is easier to call to mind terrible airline disasters than the ninety or so people who die each and every day on America’s highways. As applied to public perception of environmental risks, Kabat cites the case of Love Canal near Buffalo, New York. This was said to be a neighborhood in which the residents were exposed to extraordinarily high levels of dangerous industrial pollution. But “only later, after years of more careful study by many scientists and agencies, did it turn out that, in fact, there was no evidence of any abnormal exposure among residents or any ill effects” (p. 54). And yet “Love Canal” is still a catchphrase for horrible environmental pollution. Kabat nicely summarizes (p. 55) why so much questionable research on risks is done and published: “Science that deals with factors that affect our health takes place in a different context from other fields of science. This is because we are all eager for tangible progress in preventing and curing disease.” This leads to an emphasis on flashy, if shallow, research that may be of little use in actually improving public health.

In the next four chapters, Kabat focuses on four specific areas of research that serve as case studies for the distinctions he has made previously. Chapter 4 is titled “Do Cell Phones Cause Brain Cancer?” The answer is no; for anyone wishing for an excellent review and discussion of the history of the claim that cell phones cause cancer and the nature of the research used to argue for that claim, this chapter alone is worth the price of the book.

Chapter 5, “Hormonal Confusion: The Contested Science of Endocrine Disruption,” does for this controversy what the previous chapter did for cell phones. Most people have heard the claims that so-called endocrine disruptors (synthetic estrogens found in many plastics and pesticides) are having deleterious effects on people, especially children. It was interesting to discover in this chapter that these chemicals are dramatically less hormonally active than many naturally occurring substances. For example, natural substances found in cabbages are about 10,000 times more “estrogen equivalent” than those found in food “contaminated” by organochlorine pesticides. Again, this chapter is an excellent review of this area of research. Kabat concludes that the controversy over the safety of trace levels of these chemicals is likely to continue, driven by the factors noted above that support continued expensive searches for wild geese that were probably never there in the first place. Continued research on this issue will consume funds that could be much more productively used to investigate real, and solvable, health problems.

Chapters 6 and 7 focus on two such real health risks and the researchers who pursued the answers, solved the mysteries, and saved many lives in the process. Chapter 6 starts in Brussels in the early 1990s with an outbreak of a mysterious and very serious kidney disease. The story leads from there through Serbia, Bosnia, and Croatia and then into China. I’ll say no more for fear of spoiling the ending of this fascinating and very well-told medical detective story. Chapter 7 is also an international medical detective story. Here the issue is the relationship between viruses and cancer. Suffice it to say that the name Burkitt plays a large role, and much of the action takes place in Africa. These two chapters are tributes to dedicated medical researchers who spent much of their time out of the glare of publicity, doggedly plugging along trying to find the answers to real medical problems.

Kabat ends this excellent and informative book by calling for making better distinctions between issues that should receive attention and funding and those that should not: “There are real problems and there are false problems, that is, problems that, to the best of our knowledge, are not problems at all. Vaccines, genetically modified crops and foods, and cell phones are not threats to our well-being. We need to get better at distinguishing false problems from real problems” (p. 177).

I noted at the start of this review that Kabat set himself two goals for Getting Risk Right. He more than accomplishes both goals.

