Saturday, November 22, 2008

Slouching toward Bethlehem.

Well, maybe the stock market continuing to crash is a good thing for science.

I'm feeling more and more these days that, in my narcissistic state, I'd be a lot happier if science just went away for a while. Maybe a decade. I really do think the only way to fix it is to start with a clean slate.

I keep thinking it's so ironic how everyone in academia had been saying,

"Oh, we just need a new President, just hang in there [for four more years], funding is going to get better after that."

But look around, folks, it's getting worse, and, Obama or no Obama, it will very likely stay that way for a while.

Senior PIs are tightening their belts. They're making less effort (at least around here) to pretend like it's all hunky-dory and "training." They're telling postdocs we should consider ourselves lucky to be employed at all.

I really think the only way this is going to change is if NIH goes away, or if we have a national postdoc union that goes on strike. My guess is it will take about ten years to happen.

I'm watching what's happening with the UC unionization and I kind of have to laugh. I really do wonder if, given a little more time and bad enough job morale, scientists are finally going to grow some spine and say enough is enough?

On the one hand, I'm pretty happy with my actual day-to-day right now. It's frustrating in a lot of ways, but this week I did some things I think helped a few people:

I edited a thesis chapter for one PI's student (1 hour)
I reviewed a paper for a major journal (1 hour)
I helped another student with his thesis presentation (1 hour)
I helped a postdoc in my own lab with a fellowship application (1 hour)
I continued training my undergrad (4 hours)

And oh yeah, a few things that helped me directly:

I did some experiments.

I met a senior FSP who might be helpful as a mentor. I'm not sure yet, but she said I should come see her sometime and I thought, okay. What have I got to lose? 2 hours for another anti-pep talk?

I also had a few problems.

I spent some time hunting for reagents I need to buy and trying to negotiate a quote I could live with (2 hours total during the week) but couldn't find what I wanted at a price I was happy with, and will probably go to a different company.

I got an anti-pep talk from my advisor (2 hours) when I tried to steer the conversation to getting some actual help with my career. So much for that idea. Not going to get any help there, that much has been confirmed. Repeatedly.

I spent some time hunting for equipment (1 hour) which was broken when I finally found it.

I spent some time talking to a junior faculty friend who mentioned the hiring freezes, her own horrific departmental politics, and how her former postdoc PI is still fucking her over (2 hours).

I spent some time talking to a friend who got screwed over after writing yet another grant for his PI (2 hours).

I found out my advisor is royally screwing a former grad student on authorship of a paper. This is above and beyond what I've seen my advisor do before; it's just petty and stupid, and I'm completely disgusted at how incredibly selfish Advisor is being about this.

And yet, today I am in lab for a few hours, and still feeling pretty good about reaching a few goals I have set for myself, with a few more to reach in the (hopefully near) future.

After that, I don't know. I feel better when I don't worry so far ahead or care too much about all the shit going on around me. "Not my problem" is definitely a useful attitude. Or as a friend was saying this week:

What do we want?

APATHY!

When do we want it?

....Eh, Whenever.


14 Comments:

At 3:14 PM, Anonymous Anonymous said...

You can review a paper for a major (hell, ANY) journal in 1 hour? You are either a very smart cookie or very fed up with science...

 
At 7:56 PM, Blogger Ms.PhD said...

Thanks, I think. I'm a big fan of most kinds of cookies.

I don't know if I'm particularly fast at reviewing. I guess I read fast, and I write fast, so maybe when you put that together I'm faster than some?

But I can't know if my reviews would be the sort that the authors were grateful for (or just annoyed about). It's all in the eye of the beholder. I try to be constructive, so if there are things that are good I say so explicitly, and if there are things that need improving I suggest specific experiments. I try to write them the way I want my reviews to read. You know, the whole "do unto others" clause?

Speaking of which, I think an hour is pretty average for paper reviews, don't you? Most of the ones I've received read like the person spent an hour or less from beginning to end. The quality varies, but I have yet to receive one that is a polished work of art.

 
At 5:49 AM, Anonymous Lamar said...

I can usually tell within a minute or so whether I'll accept or reject a manuscript. But it can still take me at least a couple of hours to comb over the manuscript and lay out my thoughts in detail.

 
At 9:36 AM, Blogger Ms.PhD said...

Lamar-

Minutes, okay I guess technically it takes me minutes (not hours) to decide when a paper is likely to be rejected. But I always read the whole thing at least once before I make a final decision.

When you judge *that* quickly, do you worry that you're making a snap judgment based too much on appearances? Especially if it then takes you several hours to justify the decision you apparently made intuitively?

 
At 12:02 PM, Anonymous pinus said...

When I am reviewing a paper, there is often the temptation to do it quickly. But unless it is total crap (lots of typos, bad experimental design, poor quality data), I usually put a few hours into it, making sure I really understand why they did what they did, and how they did it. Having been on the receiving end of more than a few reviews where somebody just did a once-over and missed many points, I feel like I owe it to the authors to spend some time. I figure, chances are the manuscript represents hours upon hours of their work.
In spite of this, I still get befuddled by colleagues who take a week to review a simple manuscript.

 
At 12:14 PM, Anonymous Anonymous said...

Wow, I read and write fast too, but it definitely takes me 5+ hours (spread out over two days - I like to sleep on things) to review a manuscript (from reading to tracking background info to the writing up of the actual review).
Am I really that slow? From what I've seen others do this isn't out of the ordinary.

 
At 12:12 AM, Anonymous DrJonny said...

I usually start with a skim through the manuscript, which doesn't take long - minutes, really. That's enough to form an impression that usually holds to the final accept/revise/reject.

Having said that, I try to be as thorough and objective as possible. If my review is 5 pages long, so be it - as long as it's objective and will help improve the article. That's my "do unto others..." - it just usually takes half a day (and often a return to the manuscript after a good night's sleep - fresh eyes are amazing). An hour does seem on the fast side to me... but if you're happy you've done the job, it's probably fine.

Still, as an established post-doc looking for something further up the tree, I take invitations to review quite seriously - and as a badge of honor: there are people out there recognizing you as an expert. That's worth something, and worth making the effort for, IMO.

 
At 12:35 AM, Anonymous Anonymous said...

It usually takes me several hours at least to review a paper - going through the authors' mathematical derivations (I have found gross errors before, probably just a result of shoddy writing), and then checking each reference cited and, where necessary, reading the cited papers to see whether the authors' claims about their new results, as compared with previous work, are valid.

I'm reminded of the public scandal of Jan Hendrik Schön that rocked the scientific community just a few years ago - he had published so many papers in Nature and PRL that in the end were complete fabrication - and the ensuing worldwide discussion about the role and responsibility of peer reviewers. While I don't think the reviewers of his first few papers could have known anything was wrong (obviously you have to trust that the authors are being honest, because you can't check EVERYTHING), I do think the reviewers of his later papers should have caught the problem, because he had literally copied and pasted exact graphs from his previous papers into his newer ones. That was how he was found out in the end - when colleagues in the field noticed that in his different Nature and PRL papers the data were exactly the same, even down to the noise level. I think a reviewer should, or at least could, have caught something that blatant and thereby prevented the papers from ever being published and wasting countless labs' time and money trying in vain to reproduce fake groundbreaking results.

 
At 7:37 AM, Anonymous Anonymous said...

The biggest things for me are consistency - do the data show what they are claimed to show - and rigor - is the result shown by multiple, independent (at least insofar as possible) methods. I now spend time thinking about the relevance of the topic to the mission of the journal, and I spend some time making sure the authors don't leave me with any reasonable doubts.

So I think I can decide whether the work is solid within 15 minutes. So that's one hurdle. If I like the paper, I put more time into it. It takes another 2 hours or so to make sure I have enough context to determine the significance of the results and to articulate any key, unresolved issues that arise in the manuscript. So that's another hurdle. Then I decide which of those issues should be addressed within the scope of the present manuscript and I rank them. I use the different levels of evidence that are used in clinical medicine for assessing each claim and suggest experiments that drive the authors toward getting as much level 1 evidence as possible.

It then takes about an hour to edit what I have written to a point where I'm happy with it. Then I step away from it for at least a day and re-read it before I submit it to the journal editor. So I think a good paper takes me about 4 hours to review right now.

 
At 8:03 AM, Blogger Ms.PhD said...

pinus,

I might have spent more time if it had appeared they had worked harder on it. It was full of typos and that was the least of the problems. When there are so many obvious things that need fixing, I don't see why I should spend *too* much time.

Anon 12:14,

You track background info? That's impressive. My feeling is that, at least in my field, if the reviewer isn't familiar with the literature, they won't spend the time to check.

So far I haven't been asked to review anything where I didn't already know the relevant literature well enough to review it without having to do additional reading.

DrJonny,

I think the 15 minute skim approach is pretty typical. Still, somewhat frightening to students and postdocs slaving over manuscripts, don't you think? I guess we should aim to be as easy to read, and as aesthetically pleasing, as possible.

I agree that it's a badge of honor when I get to do it as me. Most of my peers do it regularly as ghostwriters for their PIs. What makes me sick is that I've blogged about this before, and got many comments from people saying "that doesn't happen." They refused to believe their papers were being reviewed by students and postdocs with little or no PI input whatsoever.

Anon 12:35,

Gotta agree with you about the Schon debacle. It does make you wonder whether the reviewers of those papers were really "experts" in the field. But my impression is that wayyy too many people are biased by the names of the authors. As soon as they see that, if it's someone with a good reputation whom they don't have a grudge against, they shift into "this is probably a great paper" mode. And then the fact-checking aspect of reviewing goes out the window.

The papers I've reviewed thus far have all been from people whose names meant nothing to me, so I had no choice but to be objective!

On the other hand, I think the EDITORS should also bear some responsibility. What is their job, exactly, if not to at least read through the papers and check them against past relevant publications?

Anon 7:37,

What are the levels in clinical medicine? I've never heard of this. Can you send a link or describe, please?

 
At 8:09 AM, Anonymous Anonymous said...

I have to say, I am a postdoc at a UC, and I can't imagine what this union will possibly gain us. I could go on strike. Then all my experiments I've been working on for months would be ruined, and I would never get any papers out of them. Yeah... that'll show 'em!

 
At 11:44 AM, Anonymous Anonymous said...

As a reviewer, I print the manuscript off... flip through it. If I see format problems or wacko things that jump off the page at me, then I skim quickly through the methods and results to see if the stupid thing is worth reading but poorly presented. If it sucks for the nitty gritty of the methods and results... REJECT for reasons of poor presentation, lack of clarity, and general WTFness.

If it looks like the authors did something worthwhile and just had issues pulling it together (usually English-as-a-second-language folks or meandering grad students have this problem), then I grab a few cookies, red pens, and maybe crack open the wine. I spend no more than an hour writing out a thoughtful critique to help them get it accepted, but I can still suggest "reject with encouragement to resubmit".

As an editor... I ask 2 reviewers/paper. If they both recommend reject, then off with their heads... reject. I do run through the reviewer comments and usually find common problems, which I point out in my rejection email to the authors. If one reviewer recommends reject and the other accept... well, then I go through the paper and the reviews with a fine-tooth comb, a box of cookies, a bottle of wine, and my feet propped up with Paul Potts serenading me.

 
At 12:54 PM, Anonymous Anonymous said...

http://en.wikipedia.org/wiki/Evidence-based_medicine

I meant that I apply the principles behind the levels of evidence.

Level 1 'pretty damn conclusive' - showed the same result using multiple (completely) independent methods and at the same time ruled out as many likely alternative explanations as possible. makes a significant advance in the field that should be incorporated into general practice in the field. Like a good Cell paper.

Level 2 'most likely explanation' - a solid piece of work, it fits with what your intuition suggests would make sense in light of known things. this work gets published everywhere. if it is really important/general or controversial in a way that advances academic discourse of a key topic, then it goes to a great journal. if it is really niche-y then it goes to whatever your niche society journal is.

alternatively, Level 2 'further conclusive data' - we already know the broad strokes, X binds Y resulting in Z. but what is the Kd? a lot of analytical molecular studies fall into this category. Your typical JBC or JPET paper.

more alternatively, Level 2 'improvement in a method' - again, this depends both on the utility of the method and what it was used to show. usually goes into some niche journal, but sometimes hits big. pays to be working on an important problem.

Level 3 'why not' - the result doesn't matter so much but the work is technically sound. the result may be idiosyncratic to the particular system being studied and the inherent limitations of that system, which may or may not be fully appreciated by the general reader or even the people doing the work. much immunology falls into this category, as there are a whole host of contrived systems that, while reliable, do not reflect actual immunology.

this model works for me.

 
At 9:33 PM, Blogger Ms.PhD said...

Anon 8:09,

I can see why you say that. I might blog about how a strike would be good in some ways, bad in others.

Anon 11:44,

That's basically what I did, except I was also in lab running a gel, so no cookies or wine. Maybe they should start sending those out with the manuscripts.

Never heard of Paul Potts. Will have to look him up.

Anon 12:54,

Interesting. My only beef with this is that there are some fields where there ARE no independent methods that can be used to measure some things. There is only one appropriate method that can measure the phenomenon directly, and then there can be a lot of side evidence that is consistent with the proposed model.

And it doesn't really work, anyway. I just found out a colleague's (I use that word loosely) Level 2-ish paper got into Really Great Journal. There's nothing innovative about it, nothing really insightful at all. But it's solid enough, in a very ho-hum, I-guess-some-people-will-care-about-it kinda way.

Hard for me to follow the logic of rewarding that kind of work so richly.

Meanwhile, another friend who is still battling to get a paper into same RGJ used almost all new techniques, very innovative, lots of great insight, and is still getting ye olde "We just don't believe you, do it in another model system that is less relevant but better accepted" kind of bullshit reviews.

It makes me ill that this is what science has come to. Even if the interpretation turned out to be partly wrong years from now, it's still wayyyy more interesting and would provoke lots of interesting experiments to really test a new hypothesis... a helluva lot more than the aforementioned boring paper that appears to confirm something that ... wasn't unexpected and doesn't raise any new ideas whatsoever.

 
