Wednesday, April 07, 2010

Ethics of publishing

Sigh. Just read this post and the related comments over at FSP.

Poor FSP. Again, she is so naive.

So let me give you a little bit of my perspective on this.

FSP is writing about having to take ethics courses and how stupid it is to be forced to take something related to human subjects when you don't even work in the biosciences.

Fair enough. Human subjects are hardly the major issue of ethics in science these days, because studies involving human subjects are governed by rules and reviewed by committees.

And, they can sue.

I would argue instead that the ethics of publishing and the pressure of citation indices, as FSP mentions, are much more dangerous for science as a whole.

I didn't really appreciate the extent of this problem when I started out. I knew it was bad, I heard horror stories. But like FSP, initially I thought they were "bizarre" - as in, outliers. Exceptions. Unlikely. Uncommon.

Now I think otherwise.



1. MPU: minimal publishable unit

Definition: Breaking scientific studies down to maximize the number of publications

Pro: More publications in less time

Obvious Con: Usually lower-impact papers

Less Obvious Con: You already have data that contradicts your pet hypothesis, but you leave this out of the first paper and plan instead to publish it in the second paper. It won't fit anyway! Besides, it's okay if you're wrong so long as you're the one who reports it, right?

Advantage: One paper becomes two, potentially both high-impact, and soon!

Disadvantage:

• Knowingly misleading the field during the time between the first and second publications.

• Temptation to never publish the second paper. Especially if the student defends or the postdoc gets a job based on the first paper.

• The next grad student or postdoc in your lab, or another lab, can't publish their work because it contradicts your paper and you never published the second.





2. Kitchen sink publication

Definition: Cramming tons of data into one big paper, in order to increase chances of overwhelming the reviewer

Pro:

• This often works, especially at high-impact journals

• Makes use of data that would otherwise never be published

• Makes a non-story look like a really big deal

Obvious Con:

• Often much of the data that is crammed into the paper does not contribute to the story

• Many middle-authors

• Can only be done with projects that are relatively mature, and/or in larger labs where multiple people contribute parts of figures

Less Obvious Con: Can be used as a way of burying data in supplemental figures that actually contradict the main claim of the paper, while still garnering the credibility of being able to say that all the appropriate experiments and techniques were done

Advantage: High impact paper!

Disadvantage:

• Misleads the field. It's in a high-impact journal, so it must be true, right?

• Supplemental figures are often not reviewed at all, and generally not held to the same standard as figures in the main text.

• Contradictory data that should have been, at a minimum, an MPU for a student or postdoc, and important for the field, gets eaten by the career (and ego) of the first and last author.



I could go on, but I'll stop there for now. Might write more another time if these don't seem sufficiently scary already.


9 Comments:

At 6:02 PM, Blogger Kea said...

Hear, hear! I tried logging onto this ISI h-index site, but I can't without an institution. In my theory field, the h-index is higher if you work in a pack, like say in String Theory, because everyone goes around citing each other and papers are MUCH, MUCH easier to publish. In specialised fields, citations are really not an indicator of research quality. They can always be dismissed on the (incorrect) grounds that the other authors did all the work, and the hiring committee's opinion on this point depends on whoever they ask.
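[Since the h-index comes up here, a quick aside for anyone who hasn't run into it: an author's h-index is the largest h such that h of their papers each have at least h citations. A minimal sketch of that definition in Python — just the standard formula, not ISI's or any other database's actual implementation:]

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    # Rank citation counts from highest to lowest.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        # The h-index is the last rank where the paper at that rank
        # still has at least `rank` citations.
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: one blockbuster paper doesn't help
```

[The second example is the point Kea is making: the metric rewards a steady stream of mutually-cited papers over a few strong ones.]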

 
At 9:45 PM, Anonymous Anonymous said...

My grad school advisor loved the MPU. He was a numbers guy and liked to brag about how many publications he got each year (yes, they were in 2nd tier or lower journals because it was the path of least resistance). I published a simple paper in a lower tier journal, and in a subsequent study tried a slightly different reagent that worked less efficiently. He actually tried to convince me to publish this work (even though it was worse than my initial paper) because it used a new reagent. His antics became so ridiculous that I found myself coming in at odd hours to run experiments for my own curiosity because as soon as he thought there was enough data to publish he wanted a manuscript the next morning. And if he knew you had publishable results, he bugged you EVERY DAY for the manuscript.

The best part is when it came time for me to interview for jobs and postdocs, I was asked why my publications weren't in better journals. How do you answer something like that? I was being penalized for my advisor's bad habits. I guess you can add this to the "less obvious cons" section.

 
At 1:17 AM, Anonymous Anonymous said...

My postdoc group and PI did both the LPU and the kitchen sink, but I didn't benefit from either.

Everyone in the group, encouraged by the PI, did LPU papers to the extreme. I thought it was ridiculous, so I didn't do it. As a result I was deemed the least productive member of the group, since I had the fewest publications and it took "so long" for me to get anything published.

The same group and PI also occasionally cobbled all their LPU papers into a kitchen sink paper. Everyone else in the group would turn their latest routine data set into yet another first-author pub. Because I disagreed with the whole LPU approach, I held back so I could publish a more substantial paper. I never got to it: my PI swiped my hard-won results (which took me far longer to obtain than anyone else's LPU data) to put in his kitchen sink paper, robbing me of a better first-author paper and making me a middle author instead.

 
At 10:48 AM, Blogger Ms.PhD said...

another astute commenter pointed out that FSP has a 2nd part to her post up today, more or less talking about the same thing I mentioned here.

 
At 11:47 AM, Blogger SamanthaScientist said...

Great post. I'm currently struggling with related issues. I also feel like I've gotten screwed on all fronts. I wasn't made an author on a lab mate's LPU even though I contributed, and my advisor wouldn't let me publish my own LPU. And my "kitchen sink" paper has contradictions that are difficult to resolve, which equals difficult to put together a manuscript that makes sense and difficult to get it accepted.

I don't know what the correct course of action would have been. If I had published the LPU a long time ago, I wouldn't have known about the ambiguities that I discovered later. But one can always discover new issues upon doing more experiments, and you can't do an infinite number of experiments before publishing. And the data I collected are still exciting and meaningful, but now some exact values are called into question by the ambiguities. At least if I'd published the LPU, I would've freaking graduated by now and gotten out of this toxic environment!

 
At 1:53 PM, Blogger Ms.PhD said...

Kea - I hoped this problem of "who really did what" would be solved by some journals' new policies offering some kind of statement of "author contributions", but it turns out that most of them don't allow for realistic categories. My advisor insisted on being listed as having helped in all areas, despite having actually FOUGHT ME EVERY STEP OF THE WAY and really only contributed to one aspect (editing the manuscript). But the reading audience won't know that.

Anon - Personally, I think that kind of behavior would have to be really extreme before it would hurt you. Presumably you were being asked about this DURING INTERVIEWS?

We're all penalized by our advisor's bad publishing habits. That has been a major theme of this blog all along.

SamanthaScientist - This is the trap with holding out for "kitchen sink" publications.

MOST science has ambiguities that seem to occur in waves. The trick is publishing at the peak of the curve before you fall down into another confusion well, and then you climb back out and publish another paper with the next piece of the puzzle. There's a rhythm to it, if you know what you're doing and you can figure out how to break off story-sized pieces.

You really need both sides: the person on the ground has one perspective on where the project is (peak or trough), and in the ideal situation, the person in the office (the PI) has a bird's-eye view of where the project begins and ends (how tall is the next peak, how steep is the climb?).

But in practice there are problems with this. First, there's the conflict of interest: you want to finish, defend, and get out. Your advisor wants higher impact. Second, the PI is often too far removed, and too distracted, to have a clear view of the big picture. For example, I see way too many PIs these days who don't know the relevant literature for what their own students are doing. They seem to think that's the sole responsibility of the student, but then they also don't trust the student enough, which just leads to a complete lack of progress for everyone.

It's a freaking nightmare.

 
At 4:47 AM, Blogger biochem belle said...

The kitchen sink Glamor mag publications are quite popular at my institute and lab. Other publications are often looked down on; it's as though people don't see the point of publishing a paper if it's not some sexy thing that turns them on. It drives me a little crazy and is something I've blogged about.

The odd thing about the obsession with publishing in Glamor Mags, which are the main source of kitchen sink publications, is that some big players see the problems. Two perspectives published this year, one in EMBO Reports by Laurent Segalat and one in Science Signaling by Michael Yaffe, discuss these issues. Frankly, I'm not terribly optimistic that this obsession will dwindle anytime in the near future, but a scientist can hope...

 
At 10:59 AM, Blogger Ms.PhD said...

biochem belle - thanks for the links! I will check those out.

Though, FWIW, I've noticed some extreme hypocrazy among people who write articles on these issues. My own PI wrote an article many eons ago about how labs should be small, and then proceeded to build an empire. If it's the Michael Yaffe I'm thinking of, his lab has a lot of Glamour Pubs, so I am curious to see what he says. I've never heard of Laurent Segalat.

 
At 12:22 PM, Blogger biochem belle said...

Hypocrazy absolutely abounds in academia... or maybe just people in general.

btw, I think "hypocrazy" is a perfect word for this situation as it reflects the insanity from which it comes and the insanity it induces.

 
