If You Ask Me: A Non-Science Person Looks at the Replication Crisis

Jun 21, 2021


By: Matthew Ismail, Editor in chief of the Charleston Briefings and Conference Director at the Charleston Conference


Those of us whose study or research is in the humanities probably don’t always understand the nuts and bolts of the research replication crisis currently in the headlines of scholarly communication publications. For myself, I studied Islamic History and Modern European Intellectual History, and in neither of these fields did I ever do a single quantitative study. My work was based on a close reading of the published literature and access to previously unexploited primary sources. The value of the work was not something that could be construed as “verified output,” but was more like storytelling supported by documentary evidence–and I don’t say “storytelling” to deprecate that sort of work, but to acknowledge that my sort of history is more literary than scientific, and that’s how I like it.

When I wrote Wallis Budge: Magic and Mummies in London and Cairo, I spent years reading about British social and intellectual history, the history of Egyptology and archaeology, the founding and evolution of the British Museum, Sir E. A. Wallis Budge’s own writings, and the writings of his colleagues and enemies (not surprisingly, there was a lot of politics in the British Museum and in Egyptology!). I spent weeks in the archives of the British Museum and British Library trawling through thousands of volumes of departmental correspondence. There was a huge amount of research, but the output of the research was not data. It was a volume that presented the life of an Egyptologist in the context of late Victorian and Edwardian cultural history, refuting malicious gossip masquerading as history and widening the scope far beyond departmental power struggles. The value of the work lies in the quality of the storytelling and the thoroughness of the documentation. It was convincing to the extent that I was persuasive, elegant, and thorough.

I emphasize this background simply to suggest that, for many people such as me, whose work is more literary, the replication crisis in the sciences is a bit mysterious. I read articles about it and the arguments include formulas, scientific laws, accusations that statistical approaches are not understood or are misapplied, mystifying arguments among specialists, etc., and I have no way of judging whether these accusations are true or not. I never saw a statistical argument that didn’t make me sleepy. While anyone with a decent humanistic education can read and understand my book and can understand my approach, which is as old as Gibbon’s Decline and Fall of the Roman Empire, this is rarely true of published science.

The replication crisis takes place in an environment different from my sort of work. I just have to take the word of scientific specialists that the work they are doing is founded on correct technique and on properly vetted literature, and that the outcomes are therefore true the way 2 + 2 = 4 is true (that’s about as quantitative as I get!). When scientists say that the COVID-19 vaccines are safe and effective, I have no way of judging that this is true beyond the fact that these public statements are said to be based on valid published research. Scientists are the ones who read the research and judge that it is argued correctly and supported with the best evidence. So, when the CDC says that authoritative research supports the safety of the vaccines, my willingness to go along with them is not based on my own reading of the research, but on theirs. I assume that published research is vetted and certified by peer review and also by the expertise of editors and other researchers who subsequently read the work.

This is why the replication crisis is so alarming. When I read in Wikipedia (“Replication crisis”) that a 2015 large-scale replication project found that studies published in the prestigious Journal of Personality and Social Psychology could be replicated only 23% of the time; the Journal of Experimental Psychology: Learning, Memory, and Cognition only 48% of the time; and Psychological Science only 38% of the time…I am alarmed. I am alarmed, not because I read the articles and refuted their findings myself, but because my only option as a non-specialist is to trust that the system of peer review is working, which it clearly is not. The papers were accepted for publication by experts and became “facts” upon which other research was subsequently based. Now we learn that most of these papers did not produce new “facts,” but only new lines on someone’s CV.

This is alarming enough in psychology. But when we read that “When cancer research does get tested, it’s almost always by a private research lab. Pharmaceutical and biotech businesses have the money and incentive to proceed—but these companies mostly keep their findings to themselves…In 2012, the former head of cancer research at Amgen, Glenn Begley, brought wide attention to this issue when he decided to go public with his findings in a piece for Nature. Over a 10-year stretch, he said, Amgen’s scientists had tried to replicate the findings of 53 ‘landmark’ studies in cancer biology. Just six of them came up with positive results.” (Daniel Engber, “Cancer Research Is Broken: There’s a replication crisis in biomedicine—and no one even knows how deep it runs.”).

And if this weren’t enough, now researchers at UCSD “analyzed data from three influential replication projects which tried to systematically replicate the findings in top psychology, economic and general science journals (Nature and Science). In psychology, only 39 percent of the 100 experiments successfully replicated. In economics, 61 percent of the 18 studies replicated as did 62 percent of the 21 studies published in Nature/Science. With the findings from these three replication projects, the authors used Google Scholar to test whether papers that failed to replicate are cited significantly more often than those that were successfully replicated, both before and after the replication projects were published. The largest gap was in papers published in Nature/Science: non-replicable papers were cited 300 times more than replicable ones.” (Christine Clark, “A New Replication Crisis: Research that is Less Likely to be True is Cited More.”)

So, not only is there a preponderance of important studies in the most prestigious science journals whose findings cannot be replicated, but those are precisely the papers that are widely cited because their findings are supposedly “more interesting”!

Again, this is alarming precisely because I am not a specialist and am profoundly dependent on specialists to publish studies that will help us to treat disease more effectively. What am I to think when I see that only six of fifty-three supposedly “landmark” studies in cancer research could be replicated? That non-replicable but “interesting” research findings are immensely more likely to be cited than those studies that can actually be shown to replicate? It’s a mess!

It’s a mess in part, of course, because there are constant public debates about who is “pro-science” and who is “anti-science.” Our support for science is being reduced to an article of faith: we make decisions such as whether to get the COVID-19 vaccine based on our trust in a system of science research and publication that we know to be flooded with studies that cannot be replicated!

Now, most people still trust science–Pew Research found in 2020 that “About three-quarters of Americans (73%) say science has, on balance, had a mostly positive effect on society. And 82% expect future scientific developments to yield benefits for society in years to come” (Cary Funk, Pew Research Center, “Key findings about Americans’ confidence in science and their views on scientists’ role in society”). But we non-specialists who are aware of the problem really want the science specialists to figure this replication crisis out before the crisis becomes better known. I want to trust published science even though I cannot verify the findings myself. But in the absence of proof that published science can be trusted, what are we to say to those who spread anti-science conspiracy theories? “Well, yes, it may be true that much published science cannot be replicated and its claims are thus unverifiable, but that’s no reason to be a science skeptic!”

Actually, it would be a reason to be a science skeptic, wouldn’t it? As the NOBA Project says, “The replication of findings is one of the defining hallmarks of science. Scientists must be able to replicate the results of studies or their findings do not become part of scientific knowledge. Replication protects against false positives (seeing a result that is not really there) and also increases confidence that the result actually exists.” Telling us that most published science results cannot be replicated is telling us that these results “do not become part of scientific knowledge.”
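The logic behind that Noba quote can be made concrete with a toy simulation (mine, not the article’s): when an effect is not real, a single study using the conventional 0.05 significance threshold will still come up “significant” about 5% of the time by pure chance, while demanding one independent replication drives the false-positive rate down to roughly 0.25%. The numbers and setup below are illustrative assumptions, not anything from the sources cited above.

```python
import random

random.seed(42)

ALPHA = 0.05      # conventional significance threshold
TRIALS = 100_000  # simulated "studies" of a nonexistent effect

def significant() -> bool:
    """Simulate one null study: when there is no real effect,
    a p-value is uniform on [0, 1], so it falls below ALPHA
    about 5% of the time purely by chance."""
    return random.random() < ALPHA

# A finding "counts" either after one study, or only after an
# independent replication also comes up significant.
single = sum(significant() for _ in range(TRIALS)) / TRIALS
replicated = sum(significant() and significant() for _ in range(TRIALS)) / TRIALS

print(f"False-positive rate, one study:           {single:.3f}")    # near 0.05
print(f"False-positive rate, with replication:    {replicated:.4f}")  # near 0.0025
```

The point of the sketch is only that replication multiplies the chance requirements for a fluke (0.05 × 0.05 = 0.0025), which is exactly the protection that disappears when published results are never re-tested.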

The findings of science are not something that we can simply demand that people accept on faith, telling them, “You just don’t understand how science works.” This is the patronizing voice of the status quo. If new scientific studies are based on previous scientific studies that cannot be shown to work a second time, then it certainly sounds as if the new study is built on a foundation of sand. And when people who question the findings of published science are attacked as “methodological terrorists” by the scientific establishment (Andrew Gelman, “What has happened down here is the winds have changed,” Statistical Modeling, Causal Inference, and Social Science), it certainly looks as if science publishing is more of a territory to be defended than the disinterested record of a scientific process whose claims can be proved wrong.

The problem with this perception of interests overriding scientific rigor is that more people will begin to listen to those incredibly loud and confident voices telling us not to get vaccinated, not to take antibiotics, not to trust doctors and scientists. And what can a non-specialist say in response? “Well, I just choose to believe them!” I want more than that, frankly. I try to tell a good story in my own writing, but I want something else from published science. Telling me that “Science doesn’t always move in one direction and it moves by fits and starts” is a story–but it doesn’t tell me why studies based on previous studies whose findings cannot be replicated are reliable. I want correct science that helps researchers make good decisions in research that could save lives–or not!

Maybe we need less published science that is more thoroughly vetted? I don’t know. But I hope that there is a course correction in science publishing that would allow people like me to say more confidently that we can trust the published work of scientists. The alternative is certainly not an attractive one!

2 Comments

  1. Ann

    Legitimate issue, but the posting sells humanists way short — Many are able to read and comprehend a statistical argument!

  2. Matthew

    Hi Ann!

    Yes, I did no research to judge how many people with a humanities background are also highly skilled in statistics and scientific research methods. Maybe I’m the exception!

    I think, more to the point, that there is some question of how well the people who are publishing social science research understand statistics and scientific method, as Alvaro de Menard suggests:

    https://fantasticanachronism.com/2020/09/11/whats-wrong-with-social-science-and-how-to-fix-it/#

    If a statistician with no domain knowledge can comfortably predict whether or not a study in the social sciences will replicate based only on a brief scan of a paper’s abstract and methodology, then the problems run pretty deep!

