Study that rubbished psychology 'flawed'

Brian Nosek, director of the Center for Open Science, led a 2015 study that is now being harshly criticized. Washington Post photo by Bill O'Leary.

Published Mar 4, 2016

Washington - In a blistering announcement on Thursday, scientists at Harvard University and the University of Virginia condemned the results of a 2015 landmark study that concluded more than half of 100 published psychology studies were not replicable.

The scientists now say that the research methods used to reproduce those studies were poorly designed, inappropriately applied and introduced statistical error into the data. The result: a gross overestimation of the failure rate.

The 2015 meta-analysis, conducted by the nonprofit Center for Open Science and published in the journal Science, made headlines around the world. At the time, the journal's senior editor declared that “we should be less confident about many of the experimental results that were provided as empirical evidence in support of those theories.”

Harvard psychologist Daniel Gilbert, a lead author of the critique, argued that such conclusions had done significant harm to psychological research.

“This paper has had an extraordinary impact,” Gilbert said in a statement released on Thursday. “It led to changes in policy at many scientific journals, changes in priorities at funding agencies, and it seriously undermined public perceptions of psychology.”

The first problem that he and his team noted was the center's non-random selection of studies to replicate.

“What they did is created an idiosyncratic, arbitrary list of sampling rules that excluded the majority of psychology subfields from the sample, that excluded entire classes of studies whose methods are probably among the best in science from the sample, and so on,” according to the Harvard release.

“Then they proceeded to violate all of their own rules. So the first thing we realised was that no matter what they found - good news or bad news - they never had any chance of estimating the reproducibility of psychological science, which is what the very title of their paper claims they did.”

Among the most egregious errors: the replicated research was anything but a repeat of the original experiment. One example was a study of race involving white and black students at Stanford University discussing affirmative action. Instead of reproducing the experiment at Stanford, however, the center's scientists substituted students at the University of Amsterdam.

After realising their lack of fidelity to the original research, the center's scientists sought to remedy the situation by repeating their work, this time at Stanford. When they did, Gilbert and his team found, the results were indeed reproducible. But this outcome was never acknowledged in the 2015 study.

Once the mistakes in that research were accounted for, the reproducibility rate was “about what we should expect if every single one of the original findings had been true,” said co-author Gary King, director of Harvard's Institute for Quantitative Social Science.

“So the public hears that 'Yet another psychology study doesn't replicate' instead of 'Yet another psychology study replicates just fine if you do it right and not if you do it wrong,' which isn't a very exciting headline,” King said.

The 2015 study took four years and 270 scientists to conduct and was led by Brian Nosek, director of the Center for Open Science and a University of Virginia psychology researcher.

Nosek, who took part in the new investigation, said on Thursday night that the bottom-line message of the original undertaking was not that 60 percent of studies were wrong “but that 40 percent were reproduced, and that's the starting point.”

As for the follow-up critique, it's another way of looking at the data, he said. Its authors “came to an explanation that the problems were in the replication. Our explanation is that the data is inconclusive.”

Gilbert stressed that his team's work was a straightforward review. “Let's be clear, no one involved in this study was trying to deceive anyone,” he said. “They just made mistakes, as scientists sometimes do. So this is not a personal attack, this is a scientific critique. We all care about the same things: doing science well and finding out what's true.”

The critique is being published Friday as a commentary in Science.

Washington Post

EXPLAINER: THE DISPUTED STUDIES

The Harvard scientists point to several cases where the experimental methodology of a study was not followed when psychologists from the Open Science Collaboration (OSC) tried to replicate the findings.

“A study that measured Americans' attitudes toward African-Americans was replicated with Italians, who do not share the same stereotypes; a study that asked college students to imagine being called on by a professor was replicated with participants who had never been to college,” the scientists write.

A study asking Israelis to imagine the consequences of military service was replicated by asking Americans about the consequences of a honeymoon, and a study of young children's ability to identify targets on a large screen was replicated using older children and smaller screens.

The Independent
