When PI technical knowledge is really important.
This post was inspired by a discussion that I started (actually sort of an argument) somewhere else.
Physioprof wrote this in response to my original comment:
And by the way, MsPhD really, really, really needs to get past the canard that it is a failing of mentorship and lab leadership if a PI does not (or even cannot) sit down at the bench side by side with a trainee ("apprentice") and teach the trainee the physical process of performing a particular technique. This has nothing to do with being a good PI and an effective mentor. It bears no correlation with whether a PI is good at generating novel ideas or techniques.
And I actually agree with everything else he said after that.
I agree that these are different skills, but as Dr. J pointed out, we are NOT talking about having a PI sit down at the bench with the trainee.
Not at all.
Actually Dr. J's post very nicely makes that point.
We are NOT talking about generating novel ideas. But we ARE talking about the likelihood of success at testing them.
I just wanted to add one other point, from the point of view of a trainee, since some of these PIs seem to be totally unaware of the realities of just how fast technology is moving these days. And how dependent they are on postdocs (and grad students) to master the technology so they don't have to.
My PI knows this, to some extent, but absolutely takes it for granted. And I've seen evidence of just how dangerous this situation can be. This is one of the main reasons, I think, we're seeing so many retractions (especially from Science and Nature) now.
So here's my
Hypothesis: The severe disconnect between benchworker performance and PI assessment is a major flaw in our current system.
This is at least partly due to a general lack of technical understanding on the part of the PIs.
Allow me to elaborate (hey, it's my blog).
It really does matter quite a lot whether the technical advice is good, bad, or absent. And whether the PI is correctly assessing the quality of the data (which requires knowledge, believe it or not, of the techniques).
I think most PIs, whether they realize it or not, do wield a lot of authority. And that's especially dangerous when they're unaware of it.
One of my advisors is a great example. Most people try EVERYTHING she suggests, even the things that make no sense (without asking PubMed or Google whether the concept is likely to work).
I know this is not her intention at all. She's just throwing out ideas.
She's good at ideas. She's not so good at planning the technical execution.
But she understands how to troubleshoot and she does some experiments herself.
In those cases she asks all the right questions and she gets things to work.
She's just not that good at guiding students and postdocs in experimental setup.
I can understand that. It can be hard, sometimes, to get in the mode of planning something. We all do this when we put off starting a new series of experiments - until we know we're really going to have to do it.
For a PI I guess the "really going to have to do it" needs to include the modifier [yourself] to really pack a punch.
Something about being one degree removed from the action just puts it out of her reach, and/or she's just too tired of telling students to do more controls and having them ignore her (I've seen that, too). So she doesn't bother anymore.
It's just sad to watch. And I'm sure it happens all the time as professors get older and more fed up.
I had something similar with another PI recently.
We discussed a very difficult experiment, which would be the "ideal experiment" if it weren't so technically unlikely to work, for reasons this PI does not grasp at all.
Instead I designed a similar but much easier experiment.
PI did not understand why I did that. At all.
But this PI does not want to know why (this is the kind of person who uses "technician" as a pejorative), so there was no point in trying to talk through all the mechanics.
I'm hoping that, when the experiment is done, the figure will help move the discussion in the right direction: away from "you don't listen to me" to "oh, I see why you did it this way."
That is usually what happens. I learned this the hard way, after being talked out of doing many experiments, and after banging my head against impossible ones for years, only to conclude that it wasn't my fault, just the limits of the techniques.
But the point is really that it's frustrating to be constantly second-guessed by people who have
a) authority over your funding/lab space/future career success
b) no idea what they're talking about half the time.
I'm not sure if they all realize how discouraging they can be. Because we do look to them for advice. And confirmation of what's worth fighting for, and what's not.
It's a very strange psychological process, learning when/if to trust your Advisor, the Expert, and when/if to argue with or ignore them.
Or when you know there is no way they can help you get over that hump when you're stuck.
The worst thing to me, I think, is when they don't understand the technique, so they don't believe/like the real results, so they just assume you suck.
And maybe it's even worse because Joe Blow, in the row one bench over, is pretending (or maybe even truly believes) that his experiments are working - when he's really not doing any of it correctly. At all.
And everyone in the lab knows it, too. And everyone who sees the data knows it, but no one will say anything to the PI because they're all terrified of the whole shoot-the-messenger phenomenon.
They're all hoping that it won't make it through review.
But it does. Maybe because the reviewers are all PIs, too.
And then the paper is published. Then what do you do?
And the PI doesn't know the technique well enough to know that Joe's results aren't real.
This is one of the scariest pitfalls of not knowing the technical stuff. The person in charge gets totally snow-jobbed because they never bothered to learn how it's really done.
That is definitely how a PI ends up on the road to retraction. Because eventually, as they say, the truth will come out. Or bubble to the top, as the case may be.