Previously we commented on the increasingly infamous LaCour and Green (2014), the so-called gay canvassing study published in Science in December 2014 and recently retracted. The study cleared several usually steep barriers to publication in such an august forum, apparently thanks to the editors' appeal to the authority of the respected second author and, quite plausibly, to the failure of both the second author and the Science editors to perform due diligence because they wanted the paper's results to be true.
Now comes a report that the pattern of scholarly mischief observed in this case has a precedent. Moreover, this precedent cannot be explained by an appeal to authority.
In a post updating the story, Retraction Watch links to a post by Virginia Hughes at BuzzFeed, who in turn links to a report published online by Emory University political scientist Gregory Martin, who reported on May 29 that he had been unable to replicate a different LaCour paper.
At first Martin attributed the discrepancy to differences in data and did not pursue the matter further. After the Science story broke, however, he attempted to replicate LaCour's analysis using the same data and was unable to do so:
Simply put, despite using the same dataset of news show transcripts and implementing the same method described in his paper, my results are not at all comparable to LaCour’s. LaCour’s text-based measures of news show ideology reported in “The Echo Chambers Are Empty” are very likely fabricated.
The difference is that, whereas the gay canvassing paper was published in Science, this other paper, according to Martin via Hughes, was “unpublished but frequently cited at scientific conferences”:
“This is a pretty influential paper that a lot of people have seen,” Martin said. But given these troubling findings, he added, “this is probably also fake.”
It’s not clear why this paper became influential, however. Publication in a peer-reviewed journal is supposed to be the first step along a road that ends with broad acceptance within a field that research results are valid and reliable. If Martin is correct, then the informal peer-review procedures that precede formal peer review are also suspect as tools for discerning quality. This may also explain why peer review at Science failed: if its reviewers devoted the same level of effort to validating quality that scholars in the field had applied informally, it should be no surprise that they performed no better.
Hughes characterizes the original LaCour case as an “epically bizarre science scandal,” meaning that it lies far outside the norm for scholarly publication. That might be true, but her story about the earlier paper suggests the conclusion is at best premature. It may be that the only barrier to uncovering more such scandals is the lack of appropriately rigorous peer review.
Respected scholarly journals (and journals that want to be more respected) should consider adopting much more rigorous peer-review practices. These might include requiring that all data and code be posted online as a prerequisite to publication (or, better yet, to peer review), and rewarding scholars who discover errors.
Respected journals also should consider paying peer reviewers for their services. If peer review is a critical input in the production of high-quality scholarship, but a journal pays nothing for it, the editors should not be surprised when they get what they paid for.