An analysis of psychology research has found that roughly two-thirds of studies cannot be reproduced, calling into question the validity of a tremendous amount of material within the discipline.
When reading new psychology studies, it is prudent to read the reports with a skeptical eye and recognize that each new finding is only a small piece of an extremely large scientific panorama that is always in flux, says Stephen Lindsay, a professor of psychology at the University of Victoria in British Columbia and editor of Psychological Science.
Jean O’Reily of the Press Examiner quotes Lindsay:
“To thrive in science, researchers need to earn publications, and some kinds of results are easier to publish than others, particularly ones that are novel and show unexpected or exciting new directions,” he said.
It could be that a replication of a study failed by chance alone, or the original results could have been a “false positive” because researchers were pursuing one line of inquiry so eagerly that they ignored evidence that contradicted it. Outright fraud, however, is rarely the explanation.
“It’s important to note that this somewhat disappointing outcome does not speak directly to the validity or the falsity of the theories,” said Gilbert Chin, a psychologist at Science, who added, “What it does say is that we should be less confident about numerous original experimental results.”
Almost four years ago, the Reproducibility Project: Psychology was begun by 270 researchers who agreed to repeat earlier experiments and see whether they could produce the same results. This was the first time such a task had been attempted, and it has shown that the concerns are real and can be addressed. Scientific claims are not to be believed simply because of the author’s status or authority; real credibility comes when an experiment’s supporting evidence can be repeated. The converse is true as well: an original report is not necessarily incorrect because it cannot be reproduced.
Replication is a prerequisite for building scientific knowledge: it provides assurance that empirical findings are reliable. So it is somewhat surprising that scientists often do not conduct, or publish, replications of existing studies. Elizabeth Gilbert and Nina Strohminger write for The Conversation that journals are looking for “novel and cutting edge” research. And once published, the media and other scientists will cite the findings as if they were infallible.
Gilbert and Strohminger are two of the 270 researchers who have just published in the journal Science their attempts at reproducing 100 previously published psychological science findings. The group, known as the Open Science Collaboration, was coordinated by social psychologist Brian Nosek of the Center for Open Science. Teams from around the world ran replications of studies published in one of three journals: Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory, and Cognition. The scientists worked closely with the original researchers to ensure that the replications were as exact as possible.
Approximately 97% of the original studies had statistically significant results. Scientists are more likely to publish findings that uncover meaningful results, but the teams discovered that when these 100 studies were completed by different researchers, only 36% reached statistical significance.
In addition, when the new study found evidence for the existence of the original finding, the magnitude of the effect was on average just half the size of the original.
Some of these failures could be due to chance, poor execution, or a misunderstanding of the conditions needed to show the effect. In that sense, failed replications underscore the inherent uncertainty of any single study, original or repeated.
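To see how chance and limited statistical power alone can produce numbers like these, consider a toy simulation — hypothetical parameters, not the project’s actual data or method. Here every studied effect is real, yet when journals publish only the statistically significant “originals,” many of those findings fail to replicate and the published effect sizes come out inflated:

```python
# Hypothetical illustration (not the Reproducibility Project's analysis):
# simulate how chance and low power make even true effects hard to replicate.
import math
import random

random.seed(42)

def run_study(effect, n):
    """Simulate a two-group study of a true standardized effect with n
    participants per group; return (observed_effect, is_significant)."""
    se = math.sqrt(2 / n)                 # standard error of the difference
    observed = random.gauss(effect, se)   # true effect plus sampling noise
    z = observed / se
    return observed, abs(z) > 1.96        # two-tailed p < .05

true_effect, n, trials = 0.4, 30, 10_000

# Stage 1: "original" studies; keep only the significant ones,
# mimicking journals' preference for positive results.
published = [obs for obs, sig in (run_study(true_effect, n)
                                  for _ in range(trials)) if sig]

# Stage 2: replicate each published finding with the same design.
replicated = sum(run_study(true_effect, n)[1] for _ in published)

print(f"Replication rate: {replicated / len(published):.0%}")
print(f"Mean published effect: {sum(published) / len(published):.2f} "
      f"(true effect: {true_effect})")
```

With these made-up settings the replication rate hovers well below 100% and the average published effect exceeds the true one, because only studies whose noise happened to push them past the significance threshold get published in the first place.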
Ben Shoichet, in an article for the Bulletin Leader, says the idea of double-checking another scientist’s work has been contentious.
“There’s no doubt replication is important, but it’s often just an attack, a vigilante exercise,” said Norbert Schwarz, a professor of psychology at the University of Southern California.
The results of the replicated studies were weaker than the results of the original study in many cases, but very few of the replications contradicted the original results. Science, of course, is based on hypothesizing, testing, validating, and retesting.
Nosek is now working on a similar project in cancer biology and hopes, in the future, to include other fields as well. Belen Fernandez-Castilla, a team member from Universidad Complutense de Madrid, added:
“Scientists investigate things that are not yet understood, and initial observations may not be robust.”