Thursday, April 03, 2008

More interdisciplinary woes.

I had yet another horrible realization yesterday.

I realized that most of my peers are smart enough to come up with long lists of possible experiments. I admire and respect that. It is useful to me.

But horribly, most are not wise enough to modestly account for their lack of practical knowledge.

In other words, they don't know enough about how those experiments are actually done to weigh the inputs and caveats against the likely outcome.

Some examples include but are not limited to:

- The person who proposed an experiment for which I've tried to do the controls, and they don't work.

That makes the experiment a weak one at best, impossible to interpret at worst.

And the response was: "Oh but you don't have to do that control, just do the experiment."

?!

- The person who works in a different system, who perpetually proposes experiments that would be easy in that system but inordinately laborious in mine.

- The people who believe that anything in my system is meaningless until I can also show that the same thing happens in theirs.

(I think this is backlash, because I know their reviewers expect the converse from them.)

Of course for me to learn how to do anything correctly in their system - MsPhD says, modestly - would take a long time.

And if I can't do it correctly, I have to wonder why it's worth bothering.

Right now I'm struggling not just with choosing which journal's reviewers to subject myself to (UGH!), but also with convincing my collaborators of what's worth the inordinate effort of trying to circumvent potentially valid criticisms, and what's not.

I feel like I'm always working really hard, and maybe selling myself short by not always trumpeting all the troubleshooting we've done to even get this far. It's a fine line between self-promotion ("Look at this hard thing we did, aren't you impressed?") and sounding like I'm whining.

Deep down, I have to admit that I worry about being perceived as lazy or underachieving when it comes to my science.

I can't help but want to grab these people by the scruff of the neck and drag them forcibly around with me as I do experiments every day. I keep wondering: if they had to walk in my shoes, would they still say the kinds of things they say?

On the other hand, part of me thinks maybe it's good. Like most people in academia, I work even harder when pushed. And maybe this response - and the lack of a complete physical breakdown - leads people to have even higher expectations of me.

Which maybe, just maybe, leads me to have higher expectations and more faith in myself.

I'd like to think I have even more untapped potential. But given all the hurdles and what is at best the grudging cooperation of almost everyone I've ever worked with, I have to wonder if I have the energy to tap into all of it.

And whether I should have to use up all my reserves to prove it.

------

I'm also really struggling with the problem of bad precedents.

I like the field(s) I work in. For good reasons.

Unfortunately there is at least one technique I have to use that everyone seems to have been sloppy about since it became popular (relatively recently).

This means that most of my peers (and, I expect, my reviewers) ask me to do it, and expect a certain result to be presented in a certain way.

Okay? Sounds okay.

But I'm torn about how to handle this. My idealistic self wants to do it right or not do it at all.

Doing it right is really, really hard. I'm not even sure if I can figure out how. I've already spent a lot of time working on finding a way.

Contrast that with how everyone else does it: basically a form of cherry-picking.

This is where my respect for my peers and my love for these fields breaks down.

Stuck in the mud on a dirt road in the middle of nowhere. Breaks. Down.

The only other option is another technique, which is much better in many ways, except for one major problem: it's orders of magnitude slower. We're talking 6 months vs. 1 week. And that's the idealistic estimate. Realistically it's more like 1 year vs. 1 month.

So I'd like to do it the right way, or at least do the other thing the right way.

I'm just not sure I want to spend all of another year doing either one.

This is where I have to wonder if I want it badly enough.

It's hard not to envy other fields, where they publish lots of little papers every year. I have collaborators in a couple of these other fields, and they're baffled by why I don't have a faculty position yet.

And most of the time, so am I.


5 Comments:

At 6:21 PM, Anonymous Anonymous said...

is that ChIP you are talking about?
Toronto PDF

 
At 8:21 PM, Blogger Ms.PhD said...

Nope. But I'd have to agree with you there.

 
At 5:46 AM, Blogger Katie said...

Well, if you are looking for a faculty position, I would think it would be important for you to have the most impact in the shortest amount of time (so that you can focus on other things). If you spend a year doing something in order to be thorough, is it worth your time? Can your manuscript stand alone without it?

I understand that you feel the reviewers will want to see a "cherry-picking technique". I have dealt with this issue in 2 different ways in the past. A) Is it possible to submit to a journal that will have reviewers that won't be quite so picky about methods? B) Is it really a big deal if you just include the cherry-picking technique just to make others happy? For example, if you can stand firm in your conclusions w/o the technique, but you include it anyway (w/o discussing it much) to appease others... maybe this will be annoying for you, but it will get your work published w/o you putting much stock in crappy science.

"Our awesome conclusions were further confirmed by using the Cherry-Picking technique. Because the data is meaningless, results have been included in the Supplementary Text."

 
At 6:02 PM, Anonymous Anonymous said...

ChIP has caused me similar pain...
Toronto PDF

 
At 8:33 PM, Blogger Ms.PhD said...

Welcome, Candid Engineer. And thanks for stating the obvious.

Yes of COURSE I would want the most impact in the shortest amount of time. Don't we always?

Would we be having this conversation if my manuscript could stand without it (no)?

Would I be able to submit to a journal with reviewers who don't care about methods (no, or I would just do that)? There's the problem with the whole "high enough impact" thing, although I guess some might argue that higher-impact journals care LESS about methods. In this case, though, it's more the general approach they'll want to see, regardless of whether I go to the trouble of analyzing it correctly.

Is it a really big deal for me to just do it anyway... well there I don't know, and yes I've considered just doing it and putting it in the supplemental. But I've found that the best way to never publish anything you're worried might be wrong is to start by never using techniques you don't trust.

There really is only one other way to do this, and that's the slow way. So I can't really make this point with another method and then just use this one to back it up. Unfortunately. Since that would be ideal.

 
