A massive study by 270 researchers, including two Reed psychologists, underscores one of the key challenges facing scientists today: Just how far can you trust the research published in professional, peer-reviewed journals?
According to this project, you should take it with a chunk of salt.
The study, published in Science, set out to examine a core principle of scientific research: the property of reproducibility. Two different researchers should be able to run the same experiment independently and get the same results. These results form the basis for theories about how the world works, be it the formation of stars or the causes of schizophrenia. Of course, different scientists may offer competing explanations for a particular result—but the result itself is supposed to be reliable.
Unfortunately, it doesn’t always work this way. In this study, researchers set out to replicate 100 experiments published in three prestigious psychology journals. Only 36% of the replications yielded statistically significant results. In other words, researchers were unable to replicate the original results in almost two-thirds of the studies they looked at.
The study set off alarm bells in psychology labs around the world. “This one will make waves,” says Prof. Michael Pitts [psychology 2011–], who worked with Melissa Lewis ’13 to replicate one of the 100 experiments in Reed’s SCALP lab.
For her thesis, Melissa looked at the correlation between the “error-related negativity response”—a brainwave that corresponds to the feeling you get when you lock the keys in the car—and the “startle response” that occurs when you’re surprised by a sudden sound.
The original experiment reported a strong correlation between the two responses. Melissa found a similar pattern—but the correlation was weaker than reported in the original.
“The pattern was close, but not as strong,” she says. “This nicely encapsulates the problem of reproducibility. The reality is that science is noisy and unexpected factors can creep into your results.”
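Her point is easy to see in a short simulation. The sketch below (Python; the "true" correlation of 0.6 and the sample size of 25 are illustrative assumptions, not figures from the thesis or the study) draws two independent samples from the very same population, standing in for an original experiment and its replication. The two measured correlations routinely come out quite different, purely through sampling noise.

```python
import numpy as np

rng = np.random.default_rng(42)
true_r = 0.6  # hypothetical population correlation (illustrative only)
n = 25        # modest sample size, typical of lab experiments

def sample_correlation():
    # Draw n (x, y) pairs from a bivariate normal with correlation true_r,
    # then measure the correlation actually observed in this sample.
    cov = [[1.0, true_r], [true_r, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return np.corrcoef(x, y)[0, 1]

original = sample_correlation()
replication = sample_correlation()
print(f"original r = {original:.2f}, replication r = {replication:.2f}")
# With n = 25, the two estimates can easily differ by 0.2 or more,
# even though both samples come from the same underlying population.
```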
The study points to two factors that may explain the failed replications. First, journal editors are hungry for experiments that are novel and surprising, which puts pressure on researchers to publish results at the ragged edge of statistical significance.
Second, scientists today often stockpile vast quantities of data from their experiments. Given hundreds or thousands of data points, it is often possible to find two variables that seem to be related, even though the link is a matter of happenstance. “If you run enough t-tests, you’re going to find something significant,” says Prof. Pitts.
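Prof. Pitts's quip can be demonstrated with another small simulation. The sketch below (Python, with illustrative parameters not drawn from the article) runs 1,000 t-tests comparing two groups drawn from the same distribution, so there is no real effect to find; at the conventional p < .05 threshold, roughly 5% of the tests come back "significant" by chance alone.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_tests = 1000    # number of independent comparisons run
n_per_group = 30  # subjects per group (illustrative)

false_positives = 0
for _ in range(n_tests):
    # Both groups come from the SAME distribution: any "effect" is pure noise.
    a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p = ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} t-tests were 'significant' at p < .05")
# Expect about 50 (5%), even though no real effect exists anywhere.
```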
Although the study focused on psychological research, the authors suspect the phenomenon is widespread. In fact, the replication project has inspired a similar initiative in the field of cell biology.