Some things sound too good to be true, and on closer examination, they are—even in science. For example, over the past few years, many classic psychology studies—most in the field of social psychology—now appear to be too good to be true. No episode more clearly illustrates this problem than the “Parable of the Power Pose.”
The Rise
The idea was simple. If you spend two minutes adopting an expansive, arms-akimbo Wonder Woman-like pose—or any similarly expansive pose—your hormones will get a boost, and you will go on to show increased risk-taking behavior. How cool is that? Power posing was a perfect “free, no-tech life hack” that was easy to understand and could be adopted by anyone hoping to boost their confidence at school or on the job. People loved the idea, which New York Times writer David Brooks promoted in a 2011 column. One of the authors of the seminal study, Harvard Business School professor Amy Cuddy, went on to record the second most popular TED talk—currently weighing in at thirty-eight million views. She subsequently hit the speaking circuit and published a New York Times bestselling book, Presence, that reportedly netted her a million-dollar advance.
But even before Cuddy’s book hit the stores, nagging questions about the validity of power posing began to crop up. The full details of the parable are reported by Tom Bartlett in a recent cover story in The Chronicle of Higher Education, but here is a quick summary.
The Fall
In March of 2015, Eva Ranehill of the University of Zurich and her colleagues published an independent attempt to replicate the power pose study using a much larger sample and a number of additional controls. They found no effect on hormones or on risk-taking behavior. Cuddy and her coauthors responded by compiling a summary of thirty-three power pose studies, including the Ranehill study, and, while making no final assessment, they highlighted the differences between their original study and the Ranehill replication. Then two independent researchers at the Wharton School of the University of Pennsylvania performed a reanalysis of the data from the same thirty-three power pose studies and found that “either power-posing overall has no effect, or the effect is too small for the existing samples to have meaningfully studied it.”
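To see why “too small for the existing samples” matters, consider statistical power: the chance that a study of a given size will detect an effect if one actually exists. Here is a minimal simulation of that idea. All of the numbers are illustrative assumptions on my part—a hypothetical effect of 0.2 standard deviations and twenty-one participants per group—not the parameters of any actual power pose study.

```python
import numpy as np
from scipy.stats import ttest_ind

# Illustrative assumptions, not figures from any real study:
# a small true effect (d = 0.2) and 21 participants per group.
rng = np.random.default_rng(0)
effect_size, n_per_group, alpha, n_sims = 0.2, 21, 0.05, 10_000

significant = 0
for _ in range(n_sims):
    treated = rng.normal(effect_size, 1.0, n_per_group)  # group with a small real effect
    control = rng.normal(0.0, 1.0, n_per_group)          # group with no effect
    if ttest_ind(treated, control).pvalue < alpha:
        significant += 1

print(f"Estimated power: {significant / n_sims:.2f}")
```

With these assumed numbers, the estimated power comes out around 10 percent: even if a small effect were real, a sample of this size would usually fail to detect it—which is exactly the Wharton researchers’ point about the existing studies.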
Finally, a crushing blow. In September of 2016, Dana Carney, one of Cuddy’s coauthors on the original power pose study, posted a statement on her web page at the University of California, Berkeley, which said, “I do not believe that ‘power pose’ effects are real” (bold and underline in the original).
As the questions about power posing emerged, Cuddy moderated some of her claims, but she still maintains the technique is beneficial. In a statement posted online, Cuddy admitted that the evidence for hormonal and behavioral changes was muddy, but she defended power posing on the basis of a consistent finding that people say they feel more confident after using it. Unfortunately, this is a rather weak defense. Many things that do not work as advertised nonetheless produce positive self-reports from users. For example, as I mentioned in an earlier column, people who use brain training programs such as Lumosity often report feeling more mentally fit despite a lack of measurable improvement in cognitive performance.
How to Reverse the Parable
This is a sad story, but there is an extensive movement afoot to strengthen science and avoid many of the problems this story illustrates. Brian Nosek, professor of psychology at the University of Virginia, is the cofounder and executive director of the Center for Open Science (COS), whose mission is to “increase the openness, integrity, and reproducibility of scientific research.” Nosek devised and led the “Reproducibility Project,” which attempted to replicate the results of 100 experiments selected from three of the top journals in psychology. The results of this project, which required the collaborative effort of 270 scientists, were quite discouraging. Most of the findings of the repeated experiments were much weaker than in the original published studies, and fewer than 40 percent of the studies could be substantially duplicated. It was a sobering reminder that the mere fact that a study has been published does not make it a fact.
Well before the Reproducibility Project, several other investigations had failed to replicate the results of famous and often-cited studies—particularly in the field of social psychology. Reports in the press seemed to suggest that science was in serious peril, and although this was something of an overstatement, it became clear that a reevaluation was in order. In 2013, Nosek and his University of Virginia colleague Jeffrey Spies founded COS, and, in just a few years, they have managed to attract a large number of corporate and foundation backers for their ambitious effort to improve the state of the art.
COS is based on the idea that if the research process is opened to scrutiny by—and collaboration with—the rest of the scientific community, better and more reliable findings will result. In addition, COS is designed to help bridge the gap between scientific values and scientific practice. Ideally, science is supposed to be open, objective, and more interested in quality than quantity. In reality, much of science exists in a “publish or perish” world, where getting articles into top journals is the key to grant support, jobs, and promotions. In addition, much of the business of science is done in the secrecy of labs and offices, where researchers are free to make decisions that put their data in the most favorable light. In the bad old days (which were only a few years ago), scientists often chose to report only some of their findings and could statistically manipulate their data until something shiny and publishable popped out. Given the strong incentives for publishing and the prevailing bias in favor of statistically significant findings, the possibility of unreliable results was quite high.
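How much damage can selective reporting do? The toy simulation below—my own illustration, not a reanalysis of any real dataset—runs experiments in which the treatment has no effect at all, but the “researcher” measures five outcomes and reports the study as a success if any one of them comes out statistically significant.

```python
import numpy as np
from scipy.stats import ttest_ind

# Toy simulation with illustrative numbers: no true effect exists,
# but each "study" measures five independent outcomes and counts as
# a success if ANY of them reaches p < .05.
rng = np.random.default_rng(1)
n_per_group, n_outcomes, alpha, n_sims = 30, 5, 0.05, 10_000

false_positives = 0
for _ in range(n_sims):
    pvals = [
        ttest_ind(rng.normal(0, 1, n_per_group),
                  rng.normal(0, 1, n_per_group)).pvalue
        for _ in range(n_outcomes)
    ]
    if min(pvals) < alpha:  # report only the "best" outcome
        false_positives += 1

print(f"False-positive rate: {false_positives / n_sims:.2f}")
```

The nominal error rate is 5 percent, but cherry-picking among five outcomes pushes the false-positive rate toward 1 − 0.95⁵ ≈ 23 percent. That is the arithmetic behind the “something shiny and publishable” problem, and it is precisely what the practices described next are designed to prevent.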
COS has developed a set of Transparency and Openness Guidelines that have now been endorsed by a large group of scientific journals. Here are three of the most important features of the COS project:
- Preregistration. This is akin to calling your shots in pool. Before a scientist begins to collect data, the entire design of the study should be posted publicly online. This would include a description of all of the experimental conditions that will be run, the number of participants or measurements that will be included, and a list of all the variables in the study. Ideally, preregistration prevents people from altering the study along the way and from selectively reporting the results.
- Open materials. In addition to a detailed description of the plan of study, the materials used would be made publicly available.
- Open data. The raw data collected in the study would all be available online. Theoretically, a journal editor, a reviewer, or any other scientist would have access to the dataset and be free to check it for errors and reanalyze it in an attempt to reproduce the results of the original investigators. There are times when, out of privacy concerns, it would be unethical to make some data public, but in all other instances openness should be the rule.
Although the preregistration of research is just a few years old in psychology, it had an earlier start in biomedical research. In 2000, the United States Food and Drug Administration required all clinical trials for new drugs to be registered, and in 2004, a group of medical journals adopted the rule that no report of a clinical trial would be accepted for publication unless it was registered before any patients were enrolled in the study.
In the social sciences, rather than requiring preregistration, the approach has been to provide incentives for researchers to choose the open science path. A number of journals now recognize studies that use these new methods by displaying badges for preregistration, open materials, and open data on published articles. Research that meets the openness guidelines is likely to be considered more reliable and of higher quality and, as a result, may have an easier route to publication.
To partially eliminate the bias against studies with nonsignificant results, some journals (e.g., Cortex) have said they will review preregistered studies prior to data collection and, if the design is strong enough to warrant it, will accept these studies for publication no matter what the eventual results turn out to be. To further encourage preregistration, COS is offering a $1,000 reward for each of 1,000 qualified preregistered studies that reach publication. In addition, COS operates the Open Science Framework site, a free open-source commons that makes it easy for researchers to post their studies, control what materials are public, and collaborate throughout the entire research process.
All of this is still very new, but it appears to have great promise. By opening up the process of research and placing needed restrictions on the freedom of investigators to manipulate their data, science is likely to become more reliable. However, making research findings more reliable is only part of the problem.
The Power of Posing as a Scientist
The “Parable of the Power Pose” is partly a story about the incremental process of science. Bits of information are gathered over time, and eventually a dependable picture begins to emerge. In this case, things happened very rapidly. The original power pose study appeared in 2010, before many of these new standards for research had been introduced. The seminal study was published in Psychological Science, a prestigious journal that now awards open science badges to articles but did not in 2010. By the time questions began to emerge about the power pose effect, Amy Cuddy had given her TED talk and had been profiled in the New York Times. Ranehill’s study was not accepted for publication until August of 2014, and the deflating Wharton School reanalysis of power pose data came in May of 2015. Cuddy’s bestselling book—which was undoubtedly well underway by then—appeared in December of 2015. The timing was unfortunate: the machinery of the power pose phenomenon was steaming ahead before the doubts could be tested.
Amy Cuddy is a respected researcher whose publication record goes far beyond power posing. But she has been the primary promoter of power posing in media appearances, speaking engagements, and a bestselling book, now translated into seventeen languages. It is obvious that she is a remarkable speaker, and judging from her Twitter feed, she has been a powerful inspiration to many people throughout the world. But the duties of a motivational speaker and a public scientist are very different. One requires only a story that makes people feel better, and the other requires accurately representing the known evidence in your field of expertise. How each scientist performs this duty is a personal decision.
Cuddy’s coauthor Dana Carney has taken a very different tack. Carney was the lead author on the original 2010 study, and because she and the third author Andy J. Yap supervised the data collection, she was much closer than Cuddy to the actual procedures used. Here is her summary statement:
Where Do I Stand on the Existence of “Power Poses”?¹
- I do not have any faith in the embodied effects of “power poses.” I do not think the effect is real.
- I do not study the embodied effects of power poses.
- I discourage others from studying power poses.
- I do not teach power poses in my classes anymore.
- I do not talk about power poses in the media and haven’t for over 5 years (well before skepticism set in).
Elsewhere in her statement, Carney revealed that she was a peer reviewer for the Ranehill replication study and that she strongly recommended publication. She also acknowledges that “reasonable people, whom I respect, may disagree,” but she has clearly left the power pose behind.
If Brian Nosek’s vision of open research gains wide acceptance, science will undoubtedly be stronger. Research findings will be more reliable and less likely to be overturned. But the “Parable of the Power Pose” is also about the public statements scientists make. Given the explosion of media platforms in recent years and the various incentives for personal and professional advancement, it may be time to reexamine how scientists present their ideas in the marketplace. The American Psychological Association’s ethical standards state that psychologists “do not knowingly make public statements that are false, deceptive, or fraudulent concerning their research, practice, or other work activities” (code 5.01), but this is a rather vague standard. I am not prepared to say that Cuddy or anyone else in the power pose story has made deliberately deceptive statements. As Dana Carney suggests, reasonable people can disagree. But for scientists to be taken seriously, it is important that they bring their research to the public in a responsible manner.
When I asked Nosek how science communication problems might be avoided, he said:
My hope is that a discussion about uncertainty is part of every story. Instead of X is linked to Y kinds of headlines, the story would be about the topic of study, what the evidence suggests so far, and—importantly—other answers that are still viable. I think this latter part is an easy way to convey uncertainty, by both researchers and journalists. Just pointing out that another explanation could be true immediately shifts the reader out of that confirmatory/acceptance mindset.
What Nosek is asking for would require collaboration between scientists and journalists, both of whom are responding to a variety of competing incentives. We all prefer clear answers. We would really like someone to give us a final answer on whether coffee is good or bad for us. But as Nosek suggests, science is an incremental process that gradually chips away at the unknown. It requires a degree of humility and caution, and if the public is going to understand how science works, scientists and journalists will need to teach their audiences how to think critically about the evidence.
Unfortunately, there are no established standards for science journalism, and there is no effective mechanism for oversight and no systematic feedback loop. The New York Times employs an independent public editor, who comments on the journalistic strengths and weaknesses of the paper. Perhaps science needs a team of public science editors who would comment on the quality of science talk in the media and the marketplace. The move to open science methods will strengthen the products of science, but until we address the way scientific findings are presented to the world outside the lab, the possibility of damaging parables remains.
Note
- Carney’s statement included a sixth point: “6. I have on my website and my downloadable CV my skepticism about the effect and links to both the failed replication by Ranehill et al. and to Simmons & Simonsohn’s p-curve paper suggesting no effect. And this document.”