Tuesday, October 12, 2010

Grab bag of atrocities & wonderment

Lots of stuff I've been meaning to write about. Yesterday I was overjoyed to see this fantastic post over at FSP about the concept of "selective sexism": when a guy seems perfectly respectful toward some women but, when threatened by others, resorts to harassment tactics.

I was thrilled to see that FSP finally gets it: a seemingly respectful young male postdoc (respectful to her, anyway) could actually be a complete asshat to his peers (female postdocs). Of course, it's not clear that anyone is prepared to do the right thing in this situation, but I was glad to see FSP noticing the disparity and blogging about it.

This week I've mostly been catching up on some classic pieces in science journalism. One is this wonderful piece by Kendall Powell in Nature News.

The article points out, though understates, how much worse funding is now than it was 10 years ago, when I was in grad school. Success rates back then were at the 32% level, which didn't seem so bad while I was struggling to get my PhD, if you really believed that about one-third of applications got funded. It's now down to 21%.

To most non-scientists, that may sound like a difference of 11 percentage points, which is to say, not much. But you could also think of it as a roughly 34% decrease, in the sense that about a third of the applicants who would previously have been funded now won't be.

But it's a bit more complicated than that anyway, since those percentages don't mean 32 out of 100 grants will be funded. It's not clear, but I think they're really citing percentiles, which are weighted rankings based on scores from past review rounds and the distribution of all the grants submitted this year.
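If you want to check the arithmetic yourself, here's the back-of-the-envelope version (the 32% and 21% are the article's numbers; the rest is just me poking at a calculator in Python):

# The article's numbers
old_rate = 0.32   # success rate ~10 years ago
new_rate = 0.21   # success rate now

# The framing that sounds small: a drop in percentage points
print(f"Drop: {(old_rate - new_rate) * 100:.0f} percentage points")   # 11

# The framing that matters if you're an applicant: what fraction of
# grants that used to make the cut no longer do?
print(f"Relative decrease: {(old_rate - new_rate) / old_rate:.0%}")   # ~34%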

Isn't that clever, how they hide the degree to which funding has become increasingly competitive? It was never 1 in 3 to begin with, but they don't like you to know that when you're choosing your undergraduate science major!

Here is the paragraph everyone should read. If you're considering becoming a cancer researcher, or if you're donating a dollar at your grocery store (as mine has been harassing me to do multiple times per unavoidable visit to buy food), this is how "scientifically" your money gets distributed [comments in brackets are my snide remarks]:

All of this puts immense pressure on the grant-review panels. [poor grant review panels!] Senior reviewers say that when the top one-third of proposals can be funded, the review process works well at identifying the best science. But when the success rate drops, they see the process start to fall apart. [BECAUSE IT'S NOT A VERY ROBUST SYSTEM TO BEGIN WITH.]

Let me pause here and emphasize this point before going on. How is this a great system when it works at 32% but FALLS APART at 21%? If this were a building, it would fall down.

But next comes my favorite part of the entire article:

Conversations turn nit-picky and negative, with reviewers looking for any excuse not to fund a project, rather than focusing on its merits. Reviewers say that they feel forced into making impossible choices between equally worthy proposals, especially when success rates are less than 20%. "That's in a range where you have lost discrimination," says Dick McIntosh, professor emeritus of cell biology at the University of Colorado in Boulder. "That's a situation where you are grading exam papers by throwing them down the stairs." The chairman of the ACS panel agrees. "Deciding between the top grants, I don't want to say it's arbitrary, but it's not really based on strong criteria," he says. "It's subtle things." [emphasis mine]
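Just to convince myself that the staircase line isn't hyperbole, I sketched a toy simulation. To be clear, everything here is my own made-up model, not anything from the article: assume every proposal has some "true quality", assume reviewer scores are that quality plus random noise, and ask how much of the genuinely best science gets funded at different paylines.

import random

def funded_overlap(n_grants=100, payline=0.21, score_noise=0.15, trials=2000):
    # Fraction of the truly best proposals that actually get funded,
    # when each reviewer score = true quality + random noise.
    n_funded = int(n_grants * payline)
    total = 0.0
    for _ in range(trials):
        quality = [random.random() for _ in range(n_grants)]
        scores = [q + random.gauss(0, score_noise) for q in quality]
        best = set(sorted(range(n_grants), key=lambda i: -quality[i])[:n_funded])
        funded = set(sorted(range(n_grants), key=lambda i: -scores[i])[:n_funded])
        total += len(best & funded) / n_funded
    return total / trials

for payline in (0.32, 0.21, 0.10):
    print(f"payline {payline:.0%}: ~{funded_overlap(payline=payline):.0%} overlap with the truly best")

With any plausible amount of score noise, the overlap between "funded" and "actually best" shrinks as the payline drops, which is exactly the lost discrimination the reviewers are complaining about.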

There are tons of other gems in this article, so you should absolutely read the whole thing. I applaud Kendall Powell for giving a fly-on-the-wall view of the grant review process. Although I have to admit, many of the subtleties would have been lost on me when I was in college or even grad school, because she mostly just reports observations without really spelling out what it all means.

The next-to-last paragraph includes advice from the ACS vice-chair(wo)man [I guess it's a Britishism to call everyone chairman regardless of gender?]:

She also says applicants should use their contacts to sniff out the personality of the panel and the nature of the competition.

Just read that sentence over a few more times.

Yeah, you idealistic types out there. I'm talking to you. It's not about the quality of your work.

****

Speaking of quality of work, I know some of you saw this story about the postdoc who sabotaged a labmate's cell cultures with alcohol.

Whew.

I hate to say it, but this article actually made me feel better.

I've had a couple of situations myself, where people threw out my samples or reagents, that I think might qualify. And then there's this, from the article:

Despite all this, there is little to prevent perpetrators re-entering science. In the United States, federal bodies that provide research funding have limited ability and inclination to take action in sabotage cases because they aren't interpreted as fitting the federal definition of research misconduct, which is limited to plagiarism, fabrication and falsification of research data.

I was deeply impressed, both by the grad student for pursuing the complaint, and by the PI for listening to her and starting an investigation without first confronting the postdoc, which would have made the hidden-camera approach impossible to pursue.

When it happened to me, I approached my PI (two different PIs, actually), and in both cases the PI confronted the person in question, alerting them and giving them a chance to defend themselves (and/or switch tactics).

The PI in this article did everything right, or this wouldn't have been resolved.

Here again, the entire article is worth reading. Although many of the subtleties would have been lost on me when I was younger and less experienced, it includes some useful tidbits, especially if you missed earlier posts on these topics.

Daniele Fanelli at the University of Edinburgh, UK, who studies research misconduct, says that overtly malicious offences such as Bhrigu's are probably infrequent, but other forms of indecency and sabotage are likely to be more common. "A lot more would be the kind of thing you couldn't capture on camera," he says. Vindictive peer review, dishonest reference letters and withholding key aspects of protocols from colleagues or competitors can do just as much to derail a career or a research project as vandalizing experiments. These are just a few of the questionable practices that seem quite widespread in science, but are not technically considered misconduct. In a meta-analysis of misconduct surveys, published last year (D. Fanelli PLoS ONE 4, e5738; 2009), Fanelli found that up to one-third of scientists admit to offences that fall into this grey area, and up to 70% say that they have observed them.

[emphasis mine]

Now that Bhrigu is in India, there is little to prevent him from getting back into science. And even if he were in the United States, there wouldn't be much to stop him. The National Institutes of Health in Bethesda, Maryland, through its Office of Research Integrity, will sometimes bar an individual from receiving federal research funds for a time if they are found guilty of misconduct. But Bhrigu probably won't face that prospect because his actions don't fit the federal definition of misconduct, a situation Ross finds strange. "All scientists will tell you that it's scientific misconduct because it's tampering with data," she says.

Still, more immediate concerns are keeping Ross busy. Bhrigu was in her lab for about a year, and everything he did will have to be repeated.


Perhaps the best part of the article is written in such a way as to not really stand on its own. Here's the actual text:

After Bhrigu pleaded guilty in June, Ross called Trempe at the University of Toledo. He was shocked, of course, and for more than one reason. His department at Toledo had actually re-hired Bhrigu. Bhrigu says that he lied about the reason he left Michigan, blaming it on disagreements with Ross.

Allow me to translate. The cheating postdoc went back to his former lab, lied about why he needed a job, and the former PI had no idea about the sabotage investigation. The only reason anyone found out was that the PI who caught him actually called the former advisor and related the story.

There was nothing posted to Google or Twitter, nothing circulated by NIH Feedback Loop email. NOBODY KNEW.

Oh, and another funny thing happened on the way to my reading this article. It was forwarded to me by two people, both of whom said they had witnessed or heard rumors about similar things happening in their former labs. Neither of those cases was investigated.


Wednesday, May 26, 2010

Journal of unpublication

This is just getting embarrassing. I missed it when Drugmonkey blogged about it, but at least The Scientist did credit him (yo!).

Two highlights from this article that really stuck out to me:

[An] investigation at the Mayo Clinic concluded that one of the lab's researchers, Suresh Radhakrishnan, "tampered with another investigator's experiment with the intent to mislead"

Um, seriously? This is like something out of a premed organic chem lab! Scary!! Can't leave that shit unattended for even one minute!!

But if you got a weird result, wouldn't you, um, at least, do it, like, OVER AGAIN? Or have someone else try to reproduce it, just in case you were doing something weird?

Does that mean that either

a) these authors didn't reproduce the results multiple times, or
b) he tampered with the results MULTIPLE TIMES??

Gah! That's one of my worst nightmares: that somebody (let's say, for example, my PI) might tamper with my samples! But that's why I try to do everything several times, several ways, to make sure I'm not imagining it. Still, I don't know if I would be able to detect it if someone were sneakily and consistently screwing around with my stuff.


And as Drugmonkey quoted from the PNAS article, I guess this is the problem:

"..In no case did these repeat studies reveal any evidence that the B7-DCXAb reagent had the previously reported activity."

The missing ingredient was the tamperer!



The other thing from The Scientist article was a point I keep hammering like a very dead horse:

"I was surprised about this retraction from [Journal of Experimental Biology]" -- the lab's first publication about B7-DCXAb -- "because the groups involved enjoy an excellent reputation in the field," said Melero of the University of Navarra.

Yeah, because reputation determines the OUTCOME of your experiments.

Hmph.


Wednesday, April 28, 2010

Ethics of anonymous silence

This week in Nature there's an article titled "Under suspicion", discussing how Nature investigates allegations about data or author conduct.


This got me thinking again about one of my favorite topics: how many things have to go wrong before someone gets caught doing something really egregious and is actually forced to retract the paper.


More often, I suspect, truth dies by a thousand pinpricks: the paper gets published anyway, and even though many grad students and postdocs suspect that entire fields are based on contaminated publications, and suspect we know why, those papers are never retracted.



Here are a couple of examples I know about peripherally, which, to my knowledge, have never been investigated or acted on in any way:


1. Author makes outrageous claim, paper gets reviewed by high-impact journal because the result is so "surprising". Turns out the crux of the claim is based on an extremely high sample number and/or vastly overstated statistical power. Reviewer suggests politely that the number in question is either a lie or a typo. Author revises the text to remove the "typo" but keeps the figures and conclusions the same. Paper is accepted.


Scary, part a: Nobody tries to verify that these authors actually tested even the revised, smaller number of samples (which is still way too many to be believable).

Scary, part b: Nobody can publish anything conflicting with the model based on these published claims without reproducing the original results in at least as many samples, and no one can afford to do that because it's so outrageously expensive. And actually, if you calculate out the cost, it's clear that the original group couldn't possibly have afforded to do that many samples themselves, either. Suggesting that the only rational answer is that they... didn't.

I wonder whether requiring some kind of accounting procedure for papers would help catch these kinds of exaggerations. Not that I'm favoring extensive bean-counting, but sometimes all it takes is the blank space on the back of an envelope. For instance:
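The envelope math takes about four lines. Every number below is invented (I'm obviously not going to identify the paper), but this is the whole procedure:

# All numbers hypothetical; the point is the method, not the values.
claimed_samples = 500          # the "revised" sample number in the paper
cost_per_sample = 1_000        # dollars each: reagents, animals, person-hours
grant_budget = 250_000         # direct costs plausibly available for the project

claimed_cost = claimed_samples * cost_per_sample
print(f"What the claimed experiments would cost: ${claimed_cost:,}")
print(f"What was plausibly available:            ${grant_budget:,}")
if claimed_cost > grant_budget:
    print("Conclusion: they couldn't have afforded to do what they claim.")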



2. Authors submit paper to high-impact journal with data that have clearly been processed incorrectly. Reviewer points this out. Paper is rejected. Authors submit same paper, with no revisions, to different high-impact journal. Paper is accepted.

Scary, part a: The reviewers at the second journal apparently didn't notice? Or did the authors actually fix the problem and magically get the same results? Really? Magically?

Scary, part b: No one involved in the original anonymous reviewing process, neither the reviewers themselves nor the editor, is ethically (or, um, legally?) required to come forward and say anything? So they don't? It's like it never happened? Because it's anonymous, even though it's in the first journal's database, presumably, somewhere? Does that information just get deleted? What would people think if that information got out? Would we finally know which reviewers were completely spineless kowtowers?



Sometimes I wonder how often these kinds of things are happening. More often I wonder why everyone puts up with it.

I like to think we'd learn a heckuva lot if somebody would hack into those computers and find out the extent of all this nonsense. It would certainly be a fun data mining project, tracking the reviews and the papers across journals to see where they end up and how many accusations are made, investigated, or just lost in the shuffle from journal to journal.
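If somebody ever did get that data, the analysis itself would be almost embarrassingly simple. Here's a sketch of what I mean, with a completely hypothetical record format, since no journal is handing over its review database:

from collections import defaultdict

# Hypothetical record format: one row per submission event.
# (manuscript_id, journal, decision, problem_flagged_in_review)
submissions = [
    ("ms001", "Journal A", "rejected", "data processed incorrectly"),
    ("ms001", "Journal B", "accepted", None),
    ("ms002", "Journal C", "accepted", None),
]

history = defaultdict(list)
for ms_id, journal, decision, flag in submissions:
    history[ms_id].append((journal, decision, flag))

# Flag papers rejected for cause at one journal, then accepted elsewhere.
for ms_id, events in history.items():
    problems = [flag for _, decision, flag in events if decision == "rejected" and flag]
    if problems and any(decision == "accepted" for _, decision, _ in events):
        print(f"{ms_id}: rejected for '{problems[0]}', later accepted anyway")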


Saturday, June 06, 2009

Oh, so that's what it's called!

Had a conversation with a new postdoc in my lab who is trying to publish some leftover papers from her PhD work.

I guess this is a pretty common predicament nowadays, to be a postdoc who has actually never been through the experience of scientific publishing.

There are several things that are really baffling about the "process".

1. Formatting

This used to make sense in the days of print journals, but it makes less and less sense now that most of us read our journals online.

2. Anonymous reviews

They asked for WHAT??!! This often leads to the question of who really counts as your peer, and what the hell they are doing writing reviews.

3. Editor-speak

As in, when the response is actually favorable but it sounds like it's not. Or, as is more often the case these days, the response reads as if they really didn't understand what was written in the reviews.

4. Reject means resubmit

Even when a paper is soundly rejected, it is fast becoming tradition to resubmit anyway and browbeat the editor into sending the paper back out for review.

5. How long this all takes

So let me get this straight, she said. It's going to take a month or two to get the reviews back? What am I supposed to do in the meantime? Take out my crystal ball and try to guess what they'll ask for?

6. How little time you have to address the reviews

Most journals give you 2 months to do any and all requested experiments, but you're supposed to know that you can negotiate for more (even though nothing on the journal websites suggests this is negotiable).

7. How political it all is

Whether it's better to have presented the work in public shortly before submission, or not at all. Whose names go on the paper; whether the reviewers you suggest will be the ones the editor actually uses; whether people with a conflict of interest will recuse themselves (no, they won't!).

8. Who's really going to get the credit when it comes out

Your PI. Whether s/he had anything to do with any of it or not.

____________________________________________

In related news, this week I learned there is a term for what has been going on that is ruining my field. Apparently Richard Feynman called it "cargo cult science", and I think the description on Wikipedia is totally accurate. Since I saw this on a blog but now can't remember where, apologies and thanks to the person who wrote about it.
