Tuesday, July 28, 2009

Stupidity vs. Dishonesty

I saw an interesting dichotomy posed by Janka in the comments on a recent post by FSP.

It was posed in reference to a question about rules and regulations: what is really unethical when the rules themselves make no sense?

But it got me thinking about certain quandaries I have experienced in the lab, where I have to watch sloppy science going on all around me and I'm not always sure how much anyone is aware that they're juggling hand grenades.

I have made it my policy to steer clear, as much as possible, of things that I think are stupid, especially if I think they could lead me into scenarios where I would have to confront my PI about past published potential dishonesties.

In general, the scenario goes down like this:

PI suggests I try a procedure that Other Postdoc has used recently (Other Postdoc may be in my lab or in another lab; this has happened both ways).

I get the protocol and maybe ask Other Postdoc a few questions. I go off on my own and do a little reading and run a few controls to make sure things are working before I try the full-scale experiment.

Then things get hairy.

I get some results that are puzzling. They do not fit what Other Postdoc has published. I do some more reading and then I get really concerned.

In some cases, I have told PI and we have confronted Other Postdoc. Sometimes, the answers do not clearly distinguish between Stupidity vs. Dishonesty, and if anything only serve to make the whole incident more alarming.

Generally, if it were up to me, I would abandon said protocol at this point and do something else. I have fallen into the trap of wasting time on uninterpretable methods before, and I can usually smell them from far away.

However, sometimes I am forced to use said protocol, and PI has some long rationalization for why it's okay.

I usually go a little further and rationalize to myself that it's okay so long as we don't overstate our findings, and mention the caveats if/when we ever present this work in public.

But if I were the PI and I had published work like this, I would be losing sleep. A lot of it. I would be thinking hard about retracting papers and whether I would have to refuse to ever write another recommendation letter for this person in the future.

So I guess my question is this: isn't it always stupid to be dishonest? Or is it worth it in the long run to split hairs on hairy experiments?

Maybe I won't know if I never get to see what happens in the long run. Because when it comes to rules and regulations that make no sense, it's clear that it's often smarter to be dishonest, especially if you're smart enough to know which rules are enforced and which are not.

I feel like this dichotomy is one of the major weaknesses that will eventually bring science down if nothing changes.



At 10:25 AM, Anonymous Anonymous said...

this is why I don't want to be a PI, I would prefer to remain a bench scientist. Once you're a PI, you get corrupted because you're now evaluated on different criteria compared to when you're a bench scientist. When you're still a real scientist you can remain true and objective to the science. Once you're a PI and are more a lab manager and administrator than an actual scientist, it's all about the acquisition of grant money and prestige and glory and the science is secondary. I never believe reports and big claims of supposed good results during talks given by PIs - I go to their staff scientists for the real story.

At 11:15 AM, Anonymous Anonymous said...

Anyone still believe in peer review?

Past the point of specialization where nobody understands what anyone else is doing anymore, science becomes politics. That point has quite likely already been passed.

At 10:24 PM, Blogger femme de science(s) said...

That's why it's a good idea to keep in mind to make our own research reproducible...


At 8:29 AM, Anonymous Anonymous said...

you nailed it. it's very important to learn which rules are enforced, and this is a critical part of young investigator training. it's like knowing that you can drive 80 on the interstate, even though the speed limit is 65. if you're driving 65, you're falling behind. you're also telegraphing the fact that you're the type that drives 65 when you could be doing 80.

the implicit (although not certain) guarantee in this system is that in skirting some rules (or driving 80) you're not really doing anything that unethical.

consider subtracting background or enhancing contrast (in an unbiased fashion between groups, of course) from "representative" blots. technically most editors say this is unethical, although everyone (who publishes) does it. the bar graph or whatever is the real data, so does it really matter? try submitting your crappy jpgs from the old lab camera to a decent journal: it reduces your credibility. I had a colleague basically lose an NRSA because the reviewers refused to believe her low-molecular-weight western bands, which were slightly more "oval" than "banded". if she had only known, she could have cleared that up (without affecting the actual data) by squishing the picture horizontally to flatten out the bands. a highly-regarded Nobel laureate does this with every blot he publishes. his bands are literally rectangles.
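[Ed.: the "unbiased fashion between groups" constraint the commenter mentions can be sketched in code: pick one background level and one contrast window, and apply the identical transform to every image. This is a minimal illustration with made-up pixel values, not any real image-processing pipeline; the function names and numbers are assumptions for the sketch.]

```python
# Minimal sketch of "unbiased" blot adjustment: one transform, chosen once,
# applied identically to every group. Per-image tweaking is where it turns
# from presentation into manipulation.

def subtract_background(pixels, background):
    """Subtract a single background level from every pixel, clamping at 0."""
    return [max(0, p - background) for p in pixels]

def enhance_contrast(pixels, low, high):
    """Linearly rescale the window [low, high] onto [0, 255]."""
    scale = 255 / (high - low)
    return [min(255, max(0, round((p - low) * scale))) for p in pixels]

# Hypothetical 8-bit grayscale lane profiles (illustrative, not real data).
control = [30, 30, 180, 30, 30]
treated = [30, 30, 90, 30, 30]

# One background level and one contrast window for BOTH groups.
bg = 30
lo, hi = 0, 150
control_adj = enhance_contrast(subtract_background(control, bg), lo, hi)
treated_adj = enhance_contrast(subtract_background(treated, bg), lo, hi)
```

Because the same `bg`, `lo`, and `hi` are used everywhere, the relative band intensities between groups are preserved; only the display range changes.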

At 2:47 PM, Anonymous Anonymous said...

That is a fucked up reason to reject an NRSA. At least complain about the lack of a "training plan."

At 11:55 PM, Blogger femme de science(s) said...

Anyone still believe in peer review?

I actually posted my comment about reproducible research before this was published, but that's exactly the point... Even as a peer, if there aren't enough details about the methodology, etc., you can judge whether the results are likely to be correct and whether the approach is reasonable, but you are not necessarily able to actually check the results.

I do realize that it's easier for computing: you provide the code and the database, and your peers can check your method. If it requires a big setup or year-long experiments, it will also reach its limits...

