Wednesday, April 28, 2010

Ethics of anonymous silence

This week in Nature there's an article titled "Under suspicion", discussing how Nature investigates allegations about data or author conduct.


This got me thinking again about one of my favorite topics: how many things have to go wrong before someone gets caught doing something really egregious and is actually forced to retract the paper.


More often, I suspect, truth dies by a thousand pinpricks: the paper gets published anyway, and even though many grad students and postdocs suspect entire fields are built on contaminated publications - and suspect we know why - those papers are never retracted.



Here are a couple of examples I know about peripherally, which, to my knowledge, have never been investigated or acted upon in any way:


1. Author makes outrageous claim, paper gets reviewed by high-impact journal because the result is so "surprising". Turns out the crux of the claim is based on an extremely high sample number and/or vastly overstated statistical power. Reviewer suggests politely that the number in question is either a lie or a typo. Author revises the text to remove the "typo" but keeps the figures and conclusions the same. Paper is accepted.


Scary, part a: Nobody tries to verify that these authors actually tested even the revised, smaller number of samples (which is still way too many to be believable).

Scary, part b: Nobody can publish anything conflicting with the model based on these published claims, without reproducing the original results in at least as many samples, and no one can afford to do that because it's so outrageously expensive. And actually, if you calculate out the cost, it's clear that the original group couldn't possibly have afforded to do that many samples themselves, either. Suggesting that the only rational answer is that they... didn't.

I wonder if requiring some kind of accounting procedure for papers would help catch these kinds of exaggerations. Not that I'm favoring extensive bean-counting, but sometimes all it takes is the blank space on the back of an envelope (see the sketch below for the kind of math I mean).
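
For concreteness, here's a minimal version of that envelope math as a Python sketch. Every number in it is a made-up placeholder for illustration (the per-sample cost, hands-on time, and budget are my assumptions, not figures from any actual paper):

    # Back-of-the-envelope plausibility check for a claimed sample size.
    # Every number below is a hypothetical placeholder, not from a real paper.
    claimed_samples = 5000       # samples the paper claims to have tested
    cost_per_sample = 200.0      # USD per sample: reagents, animals, etc. (guess)
    hours_per_sample = 2.0       # hands-on bench time per sample (guess)
    annual_budget = 250000.0     # one year of direct costs on a typical grant (guess)
    person_year_hours = 2000.0   # one full-time person-year at the bench

    total_cost = claimed_samples * cost_per_sample
    person_years = claimed_samples * hours_per_sample / person_year_hours

    print("Claimed experiments would cost ~${0:,.0f} ({1:.1f}x the annual budget)"
          .format(total_cost, total_cost / annual_budget))
    print("...and take ~{0:.1f} person-years of bench time".format(person_years))

With these placeholder numbers, that comes out to a million dollars and five person-years, which is exactly the kind of answer that should make a reviewer ask where the money and the hands came from.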



2. Authors submit paper to high-impact journal with data that have clearly been processed incorrectly. Reviewer points this out. Paper is rejected. Authors submit same paper, with no revisions, to different high-impact journal. Paper is accepted.

Scary, part a: Reviewers at second journal apparently didn't notice? Or did the authors actually fix the problem and magically get the same results? Really? Magically?

Scary, part b: No one involved in the original anonymous reviewing process, neither the reviewers themselves nor the editor, is ethically (or, um, legally?) required to come forward and say anything? So they don't? It's like it never happened? Because it's anonymous, even though it's in the first journal's database, presumably, somewhere? Does that information just get deleted? What would people think if that information got out? Would we finally know which reviewers were completely spineless kowtowers?



Sometimes I wonder how often these kinds of things are happening. More often I wonder why everyone puts up with it.

I like to think we'd learn a heckuva lot if somebody would hack into those computers and find out the extent of all this nonsense. It would certainly be a fun data mining project, tracking the reviews and the papers across journals to see where they end up and how many accusations are made, investigated, or just lost in the shuffle from journal to journal.
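
If those records ever did surface, even a crude first pass would be telling. Here's a minimal Python sketch of the matching step, with invented records and field names (no journal actually exposes its review database like this):

    # Hypothetical sketch: match rejected manuscripts to later publications
    # by fuzzy title similarity. All records and field names are invented.
    from difflib import SequenceMatcher

    rejected = [{"journal": "Journal A",
                 "title": "A surprising model of X regulation",
                 "review_flag": "data processed incorrectly"}]

    published = [{"journal": "Journal B",
                  "title": "A Surprising Model of X Regulation"}]

    def title_similarity(a, b):
        # Crude similarity in [0, 1]; a real pass would also compare author lists.
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    for r in rejected:
        for p in published:
            if title_similarity(r["title"], p["title"]) > 0.9:
                print("Flagged at {0} ('{1}'), later published unchanged in {2}"
                      .format(r["journal"], r["review_flag"], p["journal"]))

The matching itself is trivial; the part nobody can do is getting the review records out of the journals' siloed, confidential databases in the first place.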


7 Comments:

At 4:41 PM, Anonymous jh said...

You're right. Today, journal space is no longer a limited resource. I don't know about "high impact" journals, but the few studies of manuscript publication that exist suggest that a manuscript today has a very high chance of getting published *somewhere*, at least.

See e.g. http://www.ncbi.nlm.nih.gov/pubmed/17301708

Quote from the abstract:
"CONCLUSIONS: These data suggest most epidemiology manuscripts are eventually published, although some persistence on the part of the authors may be necessary."

Of course, if this is in a low impact journal, nobody will care...

 
At 6:35 PM, Anonymous physics grad student said...

3. High-impact journals, so obsessed with beating their competitors to feature the hottest new research, send very suspect results to reviewers who they know will give glowing remarks. See Schon, Jan Hendrik.

 
At 10:23 PM, Anonymous Anonymous said...

so in case #2, when the paper got published in the second journal, did the original reviewer from the first journal (the one who rejected it) see it and go, "hey!!"? I'm surprised the authors would have the gall to do that, because this could easily happen - the first reviewer would have copies of their manuscript and his reviews, and could easily blow the whistle on them. That is, unless the authors are prepared to defend their paper publicly. If they are, then they must honestly believe in it themselves, which is rather a different thing from deliberately being deceptive.

 
At 10:05 AM, Blogger Steven Salzberg said...

Your second situation happens quite frequently. I've been the reviewer myself on multiple occasions where I pointed out problems and a paper was rejected, and soon afterward, the same paper appeared, with the same problems, in another journal. As a reviewer I couldn't say anything - for all I knew, these papers were being revised in preparation for a resubmission. You don't generally learn about such situations until publication in the second journal.

One solution might be for journals themselves to share their databases of recently-reviewed papers with each other. But they don't do that.

 
At 11:42 AM, Blogger a physicist said...

I heard a nice talk once by the vice president in charge of research at U of Illinois/Urbana-Champaign, who handled all of their research ethics. She said that the common viewpoint is that research misconduct is extremely rare and that scientists go out of their way to be ethical, but that the reality is far from this: ethical violations both small and large occur regularly. I completely agree with the point of your post, that the minor ones probably go unnoticed all the time, yet cause big problems. Especially big problems with subsequent science.

This also fits well with a point of view you've expressed often before: that we all enter science naively expecting it to be a meritocracy, unpolluted by bullshit. Sadly that's not usually the case.

By the way, loved your last post ("On little boys and science").

 
At 12:57 PM, Blogger Ms.PhD said...

jh,

Just to show how much kool-aid is still in my veins, I don't care so much about low-impact journals, because I do think that most data should be published somewhere. I guess I care more about high-impact because I see those papers as high-value currency for jobs and funding. It's like the difference between a counterfeit $1 and a counterfeit $100 bill. Just a few fake $100 bills can buy a lot; even a lot of fake $1 bills don't matter that much.

physics grad student,

Yes, there's that too! But what I'm wondering about here is, what about the OTHER reviewers, the ones who KNEW, but didn't feel like they could say anything even after they saw the papers come out? There must have been some of those people. And they have no way to safely blow the whistle without risking their own careers.

Anon 10:23,

Technically, when you review a paper, you're supposed to destroy any copies of it. So coming forward to say "Hey! I have a copy of the original!" is walking on thin ice.

The authors are ALWAYS prepared to at least TRY to defend their papers publicly. See the comment above with the example of Schon. Even when they get caught, initially everyone denies it, or says it was an honest mistake, or blames their grad student (Homme Hellinga is a great example).

Steven,

Yes, exactly. But I've been arguing that we need a more standardized, centralized process for publishing... having all these separate journals with different formats is hardly a win-win.

a physicist,

thanks.

 
At 8:07 PM, Blogger Helen Huntingdon said...

This was a fun one: http://www.nytimes.com/2010/03/21/world/asia/21grid.html?pagewanted=all

I read the paper in question. It appears to make some kind of sense for a packet-switched network, I think, but it most definitely doesn't have anything to do with power grids, which physically can't function in the manner described in the paper.

 
