Wednesday, February 28, 2018

Peer Review On Trial



Peer Review Gone Wrong
or, A Cynic's Manifesto

This behavior should get your scientific knickers in a knot: https://theintercept.com/2018/02/23/3m-lawsuit-pfcs-pollution/

Essentially, 3M used perfluorinated chemicals in the production of some of their products (like Scotchgard). Research has demonstrated that such chemicals may harm humans, other organisms, and the environment when they seep into the ground water.

The State of Minnesota sued 3M for this; 3M settled the case for less than 1/5 the amount of the lawsuit, and all without admitting wrongdoing. And 3M had the gall to release a statement describing the settlement as "consistent with 3M’s long history of environmental stewardship."

With 'environmental stewards' like that, who needs polluters?...

Just as outrageously (perhaps more so, if you're an early-career researcher!), this article points out that Minnesota's lawsuit named a widely-published professor, Dr. John Giesy, who allegedly took money from 3M while putting obstacles to publication in the path of scientists whose work showed that perfluorinated chemicals can be dangerous.

The state's evidence includes this e-mail, and this particularly damning one, which shows that Giesy knew perfectly well that he was protecting 3M and its interests.

***

Enraged yet? Maybe we need to put the process of peer review on trial!

This instance shows that the traditional peer-review system is flawed. Fundamentally, fatally flawed. The peer-review process says "Trust me," while hiding an unknowable number of instances of misconduct just like this one.

The conflict of interest is not always so obviously money-driven. You may get reviewers who simply disagree with your interpretation (especially if your findings contradict their own previous work), and therefore recommend against publication.

It's clearly unethical to block somebody's work in order to protect the interests of a funding source. It's less clear whether it's unethical to block somebody's work when you think the conclusions are wrong, but can't quite put your finger on why.

Here, I'm reminded of noted physicist Max Planck's observation that "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." Planck's quote shows that this is not a new problem; he lived from 1858-1947.

This quote is often cynically paraphrased as "Science advances one funeral at a time..." [note that Planck didn't actually say this, though it pithily captures the idea].


In an era when publishing your work is as simple as uploading your data and your paper online, why hand over your hard work—gratis—to a for-profit publishing house that charges insane rates to your institution (thereby ripping off your students and/or the state taxpayers), only to subject it to scrutiny by parties who may or may not have a hidden agenda?

That process may have been necessary 100 years ago, or even 40 years ago.

But it's not necessary today.

A growing number of preprint services are available in a variety of disciplines. Physics started the trend with arXiv.org (pronounced like "archive") in 1991; PsyArXiv officially launched in late 2016.

For PsyArXiv, which I've already used, you have to register first. Then, you can upload all your materials, making them freely available to anyone—whether or not that person can be considered a "peer." These open-science tools can be particularly helpful for researchers who want to replicate your work, or people who have questions about your data.

This brings me back to Dr. Giesy's scientific misconduct, as well as the recent discoveries of people who fabricated or massaged results, like social psychologist Diederik Stapel, biochemist/Parkinson's Disease researcher Mona Thiruchelvam, physician Andrew Wakefield, and anesthesiologist Yoshitaka Fujii, who fabricated data for a whopping 172 papers!

Why would people make up data for a scientific paper? In part, due to the massive career pressure that comes along with peer-reviewed publications. Hiring committees or tenure committees, strapped for time and with a million additional responsibilities besides hiring or promoting their next colleague, assume that a prolific author is a good scientist.

Here, I'm reminded of the old saw that quantity ≠ quality...

Still not convinced that peer review is a substandard method to evaluate quality? Check out this roundup of the worst science scandals of 2012. Or these whoppers from 2015 (some of which relate to research, others of which relate to the everyday conduct of scientists). Or the research misconduct discovered in 2017, listed at the end of this page.

The history of science is littered with massaged data, fabricated data, serious methodological errors, interpretations that don't follow from the data, and more! Bearing in mind such scandals as these, why do we cling to the quaint notion that peer review is the 'gold standard' of evaluating scientific fact?

Now, some people work in subfields that require specialized knowledge. In such cases, peer review can be a helpful way to evaluate the quality of a study's methodology and whether the conclusions follow logically from the results obtained by those methods.

But too often, it's a cop-out that maintains the status quo, instead of challenging us to implement solutions to the problems with the current system.

This story also brings up the question: should editors decide whether or not to publish a paper—a decision that can affect the author's career—based on the opinions of reviewers? The case of Dr. Giesy suggests that this policy, which is standard among even the most highly-regarded journals, might be misguided.

We have years' worth of evidence that experts can't catch outright fakery, and now we have at least one instance where an expert deliberately blocked papers with undesirable findings. It's common knowledge that this sort of behavior happens for ideological reasons; the Max Planck quote above suggests that this was a well-known issue at least 100 years ago.

This case is the first instance I've heard of in which a reviewer acted deliberately to protect the interests of a company or organization. Unfortunately, there's no way to know how pervasive this problem may be.

There's even evidence that highly influential scientific papers have been rejected by reviewers and/or editors. See, for instance, these 8 papers by eventual Nobel-prize-winning scientists! ("It was rejected on the grounds that it will not interest physicists." Ha!!!)

So, is the peer review system ripe for a review of itself?

At least one scholar is more optimistic about the process of peer review than I am. In that paper, the position I've adopted—that peer-review is an indefensible method of gatekeeping that adds unnecessary layers of bureaucracy (as well as more chances for mistakes) to an already painfully slow process—is dubbed "cynical."

You're damn right I'm cynical!

And if you're not feeling cynical, go re-read what I've written so far. Peruse the links I've provided here. I'll wait.

Or, you can find plenty of examples for yourself: here's a link to a Google search (and here's a link to a DuckDuckGo search if that's your preferred search engine).

If you'd prefer an argument from somebody who isn't me, the popular blogger and statistician Andrew Gelman makes a case against peer review here, so I'm clearly not the only scholar to distrust the process.

And what about the vast, vast majority of research in which there's no fraud? Surely, 99% of published research is trustworthy, right? Well, consider this piece, written by a veteran science journalist, that I use in my own teaching. Or this summary of the uncertainty surrounding behavioral priming research. Or John Ioannidis' famous 2005 paper, provocatively titled "Why Most Published Research Findings Are False." [The contents of Ioannidis' paper are dense and mathematical, but not quite as pessimistic as the title suggests.]

OK, after reading all of that...feeling cynical enough yet?

Then answer this: why do 'peer-reviewed publications' still constitute the major criterion for how researchers are evaluated? How many instances of deceit have to be uncovered before we devise a new way to determine whether an academic's work is worthwhile?

Maybe we just need to stick our collective heads out the window and yell, "I'm as mad as hell, and I'm not gonna take it anymore!"

The famous line from Network begins at 1:33. Here's the clip in its entirety.

UPDATE 6/11/2018: Also see this comment from 'Al_The_Plumber' on a report from Nautilus: http://nautil.us/issue/24/error/how-the-biggest-fabricator-in-science-got-caught

I think that the quote at the end (from Richard Horton, former editor of prestigious medical journal The Lancet) summarizes the issue very succinctly:
The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability - not the validity - of a new finding...we know that the system of peer review [like any other endeavor that relies on human judgment] is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.
  • Bracketed text added by me, to emphasize that the problem isn't with peer review per se, but with fundamental human flaws.

    A quick search reveals that this quote is often used by people who peddle alternative medicine and claim that medical science is in the pocket of 'Big Pharma.' While the pharmaceutical industry undoubtedly tries to exert its influence on people and on institutions, 'alternative medicine' often results in avoidable deaths and other problems due to medication interactions, or people avoiding treatments that have been shown to be generally effective. Doctors and professors are sometimes mistaken, but so are practitioners of alternative "medicine."

    The NewRepublic link in the previous paragraph has a great quote illustrating what I'm talking about: "A scientist approaches all treatments with an open but skeptical mind." That is, a good scientist seeks only the truth about whether (and how) a treatment works.

    Whether the claim is that St. John's Wort treats depression or that CAR-T can cure leukemia, a good scientist should say "This might work, but there are so many sham treatments out there that I better design a strict test. I'll only believe it works after I see it succeed in multiple, well-designed experimental studies comparing it to conventional treatments and to a placebo."

    If someone offers you a 'miraculous natural cure that's being covered up by Big Pharma,' always remember that cyanide, arsenic, hemlock, cocaine, botulinum toxin (produced by bacteria), and tetrodotoxin are a few examples of all-natural substances that can kill you. Natural ≠ good.

    So don't think for a second that my criticism of the peer-review process means that I'm advocating junk science! I'm simply not buying the hype that "if it's published in a peer-reviewed journal, then it's true." Science will always be far more complicated than that.

1 comment:

  1. Here's a similar observation, advanced by a physicist:
    http://backreaction.blogspot.de/2018/05/the-overproduction-crisis-in-physics.html

    Jean Pestieau's comment about Ibn al-Haytham's 11th-century quote is illustrative. What I'm saying really isn't anything new; nonetheless, it needs to be heard!

