When PI technical knowledge is really important.
This post was inspired by a discussion that I started (actually sort of an argument) somewhere else.
Physioprof wrote this in response to my original comment:
And by the way, MsPhD really, really, really needs to get past the canard that it is a failing of mentorship and lab leadership if a PI does not (or even cannot) sit down at the bench side by side with a trainee ("apprentice") and teach the trainee the physical process of performing a particular technique. This has nothing to do with being a good PI and an effective mentor. It bears no correlation with whether a PI is good at generating novel ideas or techniques.
And I actually agree with everything else he said after that.
I agree that these are different skills, but as Dr. J pointed out, we are NOT talking about having a PI sit down at the bench with the trainee.
Not at all.
Actually Dr. J's post very nicely makes that point.
We are NOT talking about generating novel ideas. But we ARE talking about the likelihood of success at testing them.
I just wanted to add one other point, from the point of view of a trainee, since some of these PIs seem to be totally unaware of the realities of just how fast technology is moving these days. And how dependent they are on postdocs (and grad students) to master the technology so they don't have to.
My PI absolutely takes this for granted. She knows it, to some extent, but I've seen evidence of just how dangerous this situation can be. I think it's one of the main reasons we're seeing so many retractions (especially from Science and Nature) now.
So here's my
Hypothesis: The severe disconnect between benchworker performance and PI assessment is a major flaw in our current system.
This is at least partly due to a general lack of technical understanding on the part of the PIs.
Allow me to elaborate (hey, it's my blog).
It really does matter quite a lot whether the technical advice is good, bad, or absent. And whether the PI is correctly assessing the quality of the data (which requires knowledge, believe it or not, of the techniques).
I think most PIs, whether they realize it or not, do wield a lot of authority. And that's especially dangerous when they're unaware of it.
Exhibit A.
One of my advisors is a great example. Most people try EVERYTHING she suggests, even the things that make no sense (without asking PubMed or Google whether the concept is likely to work).
I know this is not her intention at all. She's just throwing out ideas.
She's good at ideas. She's not so good at planning the technical execution.
But she understands how to troubleshoot and she does some experiments herself.
In those cases she asks all the right questions and she gets things to work.
She's just not that good at guiding students and postdocs in experimental setup.
I can understand that. It can be hard, sometimes, to get into the mode of planning something. We all do it when we put off starting a new series of experiments - until we know we're really going to have to do them.
For a PI I guess the "really going to have to do it" needs to include the modifier [yourself] to really pack a punch.
Something about being one degree removed from the action just puts it out of her reach, and/or she's just too tired of telling students to do more controls and having them ignore her (I've seen that, too). So she doesn't bother anymore.
It's just sad to watch. And I'm sure it happens all the time as professors get older and more fed up.
Exhibit B.
I had something similar with another PI recently.
We discussed a very difficult experiment, which would be the "ideal experiment" if it weren't so technically unlikely to work, for reasons this PI does not grasp at all.
Instead I designed a similar but much easier experiment.
PI did not understand why I did that. At all.
But this PI does not want to know why (this is the kind of person who uses "technician" as a pejorative), so there was no point in trying to talk through all the mechanics.
I'm hoping that, when the experiment is done, the figure will help move the discussion in the right direction: away from "you don't listen to me" to "oh, I see why you did it this way."
That is usually what happens. I learned this the hard way: after being talked out of many experiments, and after banging my head against impossible ones for years, only to conclude that it wasn't my fault but just the limits of the techniques.
But the point is really that it's frustrating to be constantly second-guessed by people who have
a) authority over your funding/lab space/future career success
b) no idea what they're talking about half the time.
I'm not sure if they all realize how discouraging they can be. Because we do look to them for advice. And confirmation of what's worth fighting for, and what's not.
It's a very strange psychological process, learning when/if to trust your Advisor, the Expert, and when/if to argue with or ignore them.
Or when you know there is no way they can help you get over that hump when you're stuck.
The worst thing to me, I think, is when they don't understand the technique, so they don't believe/like the real results, so they just assume you suck.
And maybe it's even worse because Joe Blow, in the row one bench over, is pretending like (or maybe even truly believes) his experiments are working - when he's really not doing any of it correctly. At all.
And everyone in the lab knows it, too. And everyone who sees the data knows it, but no one will say anything to the PI because they're all terrified of the whole shoot-the-messenger phenomenon.
They're all hoping that it won't make it through review.
But it does. Maybe because the reviewers are all PIs, too.
And then the paper is published. Then what do you do?
And the PI doesn't know the technique well enough to know that Joe's results aren't real.
This is one of the scariest pitfalls of not knowing the technical stuff: the person in charge gets totally snow-jobbed because they never bothered to learn how it's really done.
That is definitely how a PI ends up on the road to retraction. Because eventually, as they say, the truth will come out. Or bubble to the top, as the case may be.
Labels: advisors, no wonder the system is so broken, science
7 Comments:
Here's a hint about experimental design. There needs to be a positive control. And a negative control. If you want to get fancy, it would be good to have a system that gives a graded output in response to a graded input.
Beyond that, the limitations of the technique itself are important to know. Maybe there are implicit assumptions that mean it won't work for you. That's the whole PubMed aspect, I guess. But if the technique is reproducible with both positive and negative controls, I don't know that it matters whether the PI understands the technical details.
Now, if the PI doesn't use a 2nd (or 3rd (or 4th (etc))) method of verification that is at least somewhat independent of the first method, then that is their own fault and they should be condemned to retraction.
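The control logic this commenter describes can be made concrete. Here is a minimal sketch in Python (the function names, readings, and thresholds are all hypothetical, just to illustrate the idea): refuse to interpret any sample data unless the positive and negative controls both behaved, and separately check that a graded input produces a graded output.

```python
# Hypothetical sketch of the control checks described above.
# All assay values and thresholds here are made up for illustration.

def validate_run(positive_ctrl, negative_ctrl, samples,
                 pos_min=0.8, neg_max=0.2):
    """Refuse to call sample results unless both controls worked.

    positive_ctrl / negative_ctrl are normalized readings; samples maps
    sample names to readings. Returns a dict of name -> "signal detected".
    """
    if positive_ctrl < pos_min:
        raise ValueError("Positive control failed: the assay may not work at all")
    if negative_ctrl > neg_max:
        raise ValueError("Negative control failed: background or contamination")
    return {name: reading > neg_max for name, reading in samples.items()}

def is_graded(dose_response):
    """Check the 'graded input -> graded output' property:
    readings should not decrease as dose increases."""
    readings = [r for _, r in sorted(dose_response)]
    return all(a <= b for a, b in zip(readings, readings[1:]))

results = validate_run(0.95, 0.05, {"sample_A": 0.7, "sample_B": 0.1})
```

The point of putting the control checks first is exactly the commenter's: if either control fails, the sample readings are uninterpretable, so the code raises rather than returning anything at all.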
Interesting. What you describe here happens over and over again here at LargeU. Luckily, I get to observe it all from a very removed position because The Boss is one of those rare PIs: tenured, brilliant, and extremely hands-on. Sadly, I have a sneaking suspicion that this is such a rare mash-up of scientific traits that I may not experience it again.... unless others can indicate to me otherwise?
Anon,
EXACTLY my point. If you don't know the limitations of the techniques, you might not know when controls are missing. And if there aren't any other methods that can be used, those controls suddenly become WAY more important.
UR,
I'm glad you're in such a good lab! I do think it's very rare. But extremely hands-on can have its good and bad sides, too. It usually only becomes a problem if the person is an egomaniac +/- control freak, which might still not be a problem until you've been there a long time and need to demonstrate independence. That kind of person is not always so generous about giving credit when credit is due.
Amen. My PI was an excellent molecular biologist in his day. In lab meeting he's all over issues along those lines. But oh, I cringe whenever he presents lab data involving flow cytometry (amongst other things). What if they ask him a question that requires more than superficial knowledge of the end results?? He has no idea of the caveats of some of the things we do...
Thank you! And this was a useful post.
I don't mind (too) much when a PI is disconnected from the benchwork, as long as he/she somehow maintains good connection to the experimental questions, and respects the trainees and their opinions. It always feels like the benchwork ignorance is just the final straw with a PI who is incompetent in many other respects as well.
Just because I think I tend towards the devil's advocate in discussions, I'll add a caveat here.
A PI doesn't need to know the details of executing techniques. She needs to know the flaws and limitations of techniques and what questions to ask.
A PI who says "Run experiment X" is a bad PI. In general someone who doesn't collect data, but gives very specific instructions to all mentees is a bad PI.
A PI who says "What will experiment X answer, and what assumptions does it make?" is a good PI. At that point, the mentee has an opening to do the background reading and testing and learn together with the PI.
My grad advisor and postdoc advisors have not collected or personally processed data in years, but they keep asking questions. Sometimes that means mentees need to explain some basic new stuff, but it's balanced by good questions and a personal perspective of what was done in the field more than 5 years ago.
SO this will tell you how far away from being a bench scientist I am: if the PIs don't keep current on lab techniques and then teach them to the grad students or post-docs, how do the PIs expect to get their science done? And what's the point in having them mentor the grad students or post-docs if they have nothing to teach?