"Science" has a problem — or more accurately stated, those who produce and publish "scienitific" studies — have a problem. Richard Horton, editor of The Lancet, one of the leading weekly peer-reviewed general medical journals, caused quite a stir last week when he said that "much of the scientific literature, perhaps half, may simply be untrue." That may be an underestimate.
One of the more recent such examples involves a paper published late last year in Science Magazine, which calls itself “The World’s Leading Journal of Scientific Research, Global News and Commentary."
The authors of the paper involved — “When contact changes minds: An experiment on transmission of support for gay equality” — claimed that “a single conversation [can] change minds on divisive social issues, such as same-sex marriage.”
That assertion, if made by a "layman," would cause most people hearing it to break into hysterical howls of laughter. But because it appeared in a "scientific" journal, the establishment press, which is supposed to be skeptical, unquestioningly ate it up.
Last week, in a complete non-surprise to those of us who haven't lost our understanding of human nature, the world learned that the data in the study was faked.
Here is how I capsulized the matter at my home blog last week (bolds were in original):
The authors (Michael J. LaCour and Donald Green) weren’t talking about a low-percentages achievement in changing minds. While the study’s abstract presents no specifics, it claims that “Contact with minorities coupled with discussion of issues pertinent to them is capable of producing a cascade of opinion change.” Well, it did that because, to summarize the findings in a painfully long identification of irregularities in the study published on Tuesday by academics who attempted to replicate and extend it: ”[T]he dataset … was not collected as described.” In layman’s terms, it was faked.
One particular paragraph from that review of irregularities indicates how little genuine work was involved in producing the original result:
May 15, 2015. Our initial questions about the dataset arose as follows. The response rate of the pilot study was notably lower than what LaCour and Green (2014) reported. Hoping we could harness the same procedures that produced the original study’s high reported response rate, we attempt to contact the survey firm we believed had performed the original study and ask to speak to the staffer at the firm who we believed helped perform Study 1 in LaCour and Green (2014). The survey firm claimed they had no familiarity with the project and that they had never had an employee with the name of the staffer we were asking for. The firm also denied having the capabilities to perform many aspects of the recruitment procedures described in LaCour and Green (2014).
Four days later, Green, who claims to have been duped by LaCour, indicated on his curriculum vitae web page that the paper had been retracted.
Green also admitted to the following in a discussion with Retraction Watch, a web site begun by two gentlemen who believe “that retractions, and corrections for that matter, need more publicity”:
Several weeks after the canvassing launched in June 2013, Michael LaCour showed me his survey results. I thought they were so astonishing that the findings would only be credible if the study were replicated. (I also had some technical concerns about the “thermometer” measures used in the surveys.) Michael LaCour and Dave Fleischer [an "LBGT canvasser" — Ed.] therefore conducted a second experiment in August of 2013, and the results confirmed the initial findings. Because I did not have IRB (Institutional Review Board) approval for the study from my home institution, I took care not to analyze any primary data — the datafiles that I analyzed were the same replication datasets that Michael LaCour posted to his website. Looking back, the failure to verify the original Qualtrics data was a serious mistake.
As I wrote last week:
... Green let the guys who faked data the first time around fake it a second time to satisfy his concerns. The second paragraph is an admission that he didn’t do his due diligence. Perhaps Green thought that the results were too good to check.
Retraction Watch identified the following outlets as having covered the study's original publication:
This American Life, The New York Times, The Wall Street Journal, The Washington Post, The Los Angeles Times, Science Friday, Vox, and HuffingtonPost.
The Associated Press also published a story.
Part 2 on this topic will look at how the publications involved have handled their corrections and retractions. While correcting the record is important, the bigger problem is that they all deliberately or conveniently let their guards down, believing something that almost any person on the street would tell you doesn't pass the stench test, let alone the smell test.
Cross-posted at BizzyBlog.com.