Friday, August 15, 2008

Decisions, decisions.

Well, the week is almost 'over' and I have to decide if I want to try to work this weekend.

This is something I used to really agonize about all the time, but lately it has been pretty obvious most weeks. It was either Do or Die, or Just Don't.

Things being what they are, I worked all of last weekend and nothing this week has been particularly productive, although I've actually been in a better mood than I would have expected (not saying much, I know).

I was expecting not to be able to do anything this weekend, whether I wanted to or not, for lack of things being ready/ordered/arrived at the right time (e.g. by this afternoon).

But I thought of a couple of things I could/should do, and now I'm kind of torn.

Note that I fully expect to have to work next weekend, too.

Pro:

1. Might get something done 2 days sooner than I otherwise would.

2. Might enjoy the A/C (can't afford it at home)

3. Might feel relief at both 1 and 2.

4. Will get me out of the house and off the couch, i.e. more likely to go to the gym and possibly run some other errands I would otherwise put off.

5. Might be less busy next week (or more likely, just have more time for other crises).

6. Because of timing issues, this weekend might be better for some of the experiments than middle of next week.

7. Have no other plans for the weekend anyway.



Con:

1. Might not get desired results despite weekend effort.

2. Might feel more burned out.*

3. Will miss out on precious lounging-around and personal chore time, leading to underwear deficit next week and mrphd being pissed at getting stuck with more than his usual share of housework.





Hmm. Looks like the pros have it. Well, thanks for joining us in our latest round of Postdoc Guilt Gone Wild. Join us next time for "When is the best time to quit science?" at our regularly scheduled ranting.






*Not that it's emotionally possible to feel more burned out than I already am, but physically, things could be worse.


Friday, July 25, 2008

Postdoctoral Multitasking.

Amanda posted a comment over at FSP's post on this topic asking to hear more about how to do this.

When I started this blog (eons ago), I sometimes did posts on a "typical day in the life" format.

Today, for example, I had a meeting this morning (about 1 hour), right now I'm taking a break, then I'll go do some benchwork (about 1 hour), get lunch (less than 1 hour), do a little reading/thinking for ~2 hours (I have a TON of reading to do this week!), and then go collect some data on samples I made yesterday (~2-3 hours). If the data look good, I'll spend ~2 hours analyzing them. If not, I'll read some more and think about what to do differently next time.

This is a pretty typical day for me.

Some of the things we do are perfect for multitasking. FSP has written about this before, I think, and I probably have too.

One of the key things to learn is the 5-minute trick. You can get a lot done in 5 minutes if you're good at switching gears. I have always been like this, maybe because I'm a little bit ADD (?).

If you can't do 5 minutes, start with 30 minutes or 1 hour. I used to routinely do a western blot (long 1-3 hour incubations waiting for gel to run and then transfer and then incubate in antibody) and while that was going, do another type of experiment with shorter incubations (say 30-60 minutes each) and while those were all going, read papers. Seriously. Nested multitasking is the best if you can time it just right. Even if all you have are 10 minutes here, 10 minutes there, you can read through half a paper before you have to start the next step. I promise you'll be amazed at how much you can get done if you start timing yourself.

In fact, I've had some interesting chats with people about whether or not to use a timer in lab. Who cares if my blot goes an extra 15 minutes, they say? To which I say, that is 15 minutes you just wasted, isn't it?

I stay on task AND make sure my gels don't run off by using a timer. I adore 3-button timers. That little beeping voice makes you aware.
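For the programming-inclined, you don't even need to buy a 3-button timer: a few lines of Python will beep at you just as reliably. This is only a toy sketch, and the labels and durations are invented, but the `threading.Timer` calls are real:

```python
import threading
import time

def set_timer(minutes, label):
    """Print a reminder after `minutes` have elapsed (fractions are fine)."""
    threading.Timer(minutes * 60, lambda: print(f"*beep* {label} is done!")).start()

# Three overlapping "incubations", like the three buttons on a lab timer.
# Toy durations (fractions of a minute) so the demo finishes quickly:
set_timer(0.01, "gel")
set_timer(0.02, "transfer")
set_timer(0.03, "antibody")

time.sleep(2)  # keep the script alive until all three beeps have fired
```

Each timer runs on its own background thread, so the three reminders fire independently, in order of duration, without blocking each other.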

In an average day, I work on 3 experiments, sometimes one for each of my projects. I don't usually complete 3 experiments in a day, since most of the things I do now require overnight steps, etc. But that's okay!

The key to multitasking, as far as I can tell, is thinking about how long each step will take, and planning everything before you start. Make sure you leave room for error (e.g. we ran out of methanol and nobody ordered more and now I have to spend 30 minutes running around borrowing some).

I always say if you can cook, you can do well in lab. It's the same idea. Nobody likes it when you finish the main dish and the potatoes won't be done for another hour. So you start the potatoes first. It's really that simple.
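If it helps to see the "start the potatoes first" logic written out, it's just scheduling backwards from a common finish time: start the longest task first, then slot the shorter ones in later. A minimal sketch, with made-up task names and durations:

```python
# Toy planner: given each task's total (mostly hands-off) duration in minutes,
# work out when to start each one so that everything finishes together.
tasks = {
    "western blot": 180,   # long incubations
    "quick assay": 60,
    "read a paper": 30,
}

finish_at = max(tasks.values())  # everything is done this many minutes from now

# Start time for each task = common finish time minus its own duration.
schedule = sorted((finish_at - duration, name) for name, duration in tasks.items())

for start, name in schedule:
    print(f"t+{start:3d} min: start {name}")
# t+  0 min: start western blot
# t+120 min: start quick assay
# t+150 min: start read a paper
```

Real lab days have hands-on steps scattered through each task, so this is an oversimplification, but the principle is the same: anchor on the longest job and nest everything else inside it.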

But you have to know the techniques involved. It's hard to multitask (and I wouldn't recommend it) with all new techniques.

For those, I say do one thing at a time, at least the first time through, until you know what to anticipate.

For writing, you can do the same thing. Set a timer. I actually found some cute applications online that let you set multiple taskbars for writing, so you can keep track of how long you've been working on several projects plus keep yourself from cheating by setting a tracker for how long you spent blogging (guilty) or checking email.

The best writers will tell you, one hour a day can be enough to finish most projects relatively quickly, if it's a productive hour. I think the most common misconception about writing is that it takes a lot of time. I always say writing does not take long; thinking about what to write is the hard part, and you can do that quickly if you have a good strategy for making decisions.

I could do a whole blog post on how I write, but since I'm no Einstein with lots of one-word-journal papers, I can't believe anybody cares! Maybe you should ask someone more Successful about that.


Friday, July 18, 2008

Stupid Question

Quick, what would you do? I can't decide.

My journal subscriptions are coming up for renewal. This is, god willing, the last year I am eligible for cheap subscriptions as a Postdoc.

Off and on, I have chosen to get hard copies. There are goods and bads to doing this.

I hate junk email. I don't like reading TOCs in my inbox. It's rare that I want to deal with clicking on the link and going to my web browser, blah blah blah.

I also hate reading papers online. Abstracts, okay, but I don't have a big monitor and I don't really like sitting at a desk. I'd much rather take a pile of paper and flop down on my couch at home!

So I decided, for these and other reasons, that hard copies are better. The chances that I will look through them are much better than if I have to remember to look online, and I like that I often find things I wasn't looking for.

The main danger I can see with using RSS feeders and preset searches is that you end up filtering out serendipity.

The drawback, of course, is the clutter. Physical information overload. If I get hard copies and don't have time to read them, they just pile up. And up and up and up.

So now I am torn about whether to renew, since of course for most things I can download papers for free via my institution. It's less aesthetically pleasing, but it works well enough.

I'm tempted to just not renew, save my piddling salary for better things, and look online when the mood strikes.

What do you do?


Sunday, July 13, 2008

Work is its own reward.

It's funny how sometimes the thing you've been dreading is exactly what you needed.

I was not looking forward to my experiments this week. I wanted the data but not the tedium. I dreaded the repetitive, the brainless pipetting, for these kinds of experiments. All I could picture was boredom and wrist strain.

I was complaining that, despite trying to take some time off of lab on the weekends, I needed more time to think. I thought I needed more time away.

Stupid me. I forgot how peaceful and relaxing meditative movements can be: more lab work was exactly what I needed.

It's nice to remember how much I like to commune with my samples, get a little lost in my data. It's nice to get paid to daydream about what might be happening in my samples, and what my options are for what to do next - almost endless possibilities.

Decisions, decisions.

I'm looking forward, this week, to more of this kind of quiet. It's an iPod bubble, so not exactly the same as walking in a deserted forest. My labmates are not exactly meditative types. But I'll take what I can get.

My hope is that, as has always happened in the past, if I keep following these results, I'll have one of those lightbulb moments. You know the kind I mean.

When the idea just grabs you and it's suddenly clear exactly what you need to do, and you can't rest until you have the answer.

You know, what makes you want to stay in lab until bedtime and you only force yourself to go home because you know that technically, sleep is important. And technically, you still need to eat.

You dream about your data, make plans in the shower. And run out the door in the morning, hair still wet, wearing whatever clothes are clean enough, to go develop that film or pick those colonies.

You know, the kind of thing that makes you start scheming. Do you really need those overnight incubations? Can you make it go faster? Faster! Can you put it on a rocker, raise the temperature? Shake it harder! Why wait a day or a week for the answer? It's too long!

How come nobody ever thought of this before??!

Right now it's a little foggy, but the clouds are starting to lift. At least now, even if I don't know which way to go, I know how I'm going to get there.

One foot in front of the other.

And now I'm off to lab. On a Sunday. Take that, weekends off!

Enlightenment is wayyy better than leisure.


Saturday, June 07, 2008

It's Saturday, I'm in lab, and...

I'm starving,

tired,

not sure if my experiment worked.

Not sure how to tell, since it definitely didn't work like it was supposed to.

Not sure what to do about it.

Can't do anything about it for at least a couple weeks, then it takes another month or so to repeat.

Goody.

And I have a long list of things to do after this, starting with at least one thing that is absolutely critical and has to get done ASAP.

Ahhh, research is the life for me.


Thursday, April 03, 2008

More interdisciplinary woes.

I had yet another horrible realization yesterday.

I realized that most of my peers are smart enough to come up with long lists of possible experiments. I admire and respect that. It is useful to me.

But horribly, most are not wise enough to modestly account for their lack of practical knowledge.

In other words, they don't know enough about how to do those experiments to weigh the input and caveats vs. outcome.

Some examples include but are not limited to:

- The person who proposed an experiment for which I've tried to do the controls, and they don't work.

That makes the experiment a weak one at best, impossible to interpret at worst.

And the response was: "Oh but you don't have to do that control, just do the experiment."

?!

- The person who works in a different system, who perpetually proposes experiments that would be easy in that system but inordinately laborious in mine.

- The people who believe that anything in my system is meaningless until I can also show that the same thing happens in theirs.

(I think this is backlash, because I know their reviewers expect the converse from them.)

Of course for me to learn how to do anything correctly in their system - MsPhD says, modestly - would take a long time.

And if I can't do it correctly, I have to wonder why it's worth bothering.

Right now I'm struggling with not just choosing which journal's reviewers to subject myself to (UGH!), but meanwhile convincing my collaborators of what's worth the inordinate effort to try to circumvent potentially valid criticisms, and what's not.

I feel like I'm always working really hard, and maybe selling myself short by not always trumpeting all the troubleshooting we've done to even get this far. It's a fine line between self-promotion ("Look at this hard thing we did, aren't you impressed?") and sounding like I'm whining.

Deep down, I have to admit that I worry about being perceived as lazy or underachieving when it comes to my science.

I can't help but want to grab these people by the scruff of the neck and drag them forcibly around with me as I do experiments every day. I keep wondering if they had to walk in my shoes, whether they would say the kinds of things they say?

On the other hand, part of me thinks maybe it's good. Like most people in academia, I work even harder when pushed. And maybe this response - and the lack of a complete physical breakdown - leads people to even higher expectations of me.

Which maybe, just maybe, leads me to have higher expectations and more faith in myself.

I'd like to think I have even more untapped potential. But given all the hurdles and what is at best the grudging cooperation of almost everyone I've ever worked with, I have to wonder if I have the energy to tap into all of it.

And whether I should have to use up all my reserves to prove it.

------

I'm also really struggling with the problem of bad precedents.

I like the field(s) I work in. For good reasons.

Unfortunately there is at least one technique I have to use that everyone seems to have been sloppy about since it became popular (relatively recently).

This means that most of my peers (and I expect, my reviewers) ask for me to do it and expect a certain result to be presented in a certain way.

Okay? Sounds okay.

But I'm torn about how to handle this. My idealistic self wants to do it right or not do it at all.

Doing it right is really, really hard. I'm not even sure if I can figure out how. I've already spent a lot of time working on finding a way.

Contrast that with how everyone else does it: basically a form of cherry-picking.

This is where my respect for my peers and my love for these fields breaks down.

Stuck in the mud on a dirt road in the middle of nowhere. Breaks. Down.

The only other option is another technique, which is much better in many ways, except for one major problem: it's orders of magnitude slower. We're talking 6 months vs. 1 week. And that's the idealistic estimate. Realistically it's more like 1 year vs. 1 month.

So I'd like to do it the right way, or at least do the other thing the right way.

I'm just not sure I want to spend all of another year doing either one.

This is where I have to wonder if I want it badly enough.

It's hard not to envy other fields, where they publish lots of little papers every year. I have collaborators in a couple of these other fields, and they're baffled by why I don't have a faculty position yet.

And most of the time, so am I.


Sunday, February 03, 2008

Where are the holes?

As a comment to my last post, Gosh asked how you know where to go in science to find something new, and says it sometimes feels like you'd have to read a lot of textbooks since it seems like everything is already known.

I've written a lot about how much I think reading helps. It's not the whole thing, but it is a big part.

***

One professor I had in college assigned us two giant textbooks to read over the summer.

Maybe I'm the only one who actually did (?!).

I liked the message it sent:

We're not going to go chapter by chapter and ask you stupid quiz questions every week. We're going to ask you to absorb as much of this as you can, so we can move on to more interesting discussion. But we can't talk as equals until you've been exposed to the vocabulary and the concepts.

***

There are at least three main pluses to reading well-written textbooks:

1. Outstanding questions. Things we've wanted to know that we still don't know. Good textbooks will just tell you plainly, "Here is an interesting idea. We still don't know how this works."

At the root of it, this is the main thing that made me want to get into science. The idea that everything is NOT already known!

I was shocked to learn this. Weren't you?

2. Historical context. Sometimes it's downright funny how wrong the old models were, but you can often see exactly why they thought they were right, given what they knew at the time.

3. Practice reading for assumptions. It's partly language, and it's partly smell - you start to sniff out the leaps in logic. Textbooks are a great place to learn how to tell when something has been 'assumed' but never actually tested.

Textbooks are where you learn the rules. How will you know an exception if you don't know the rules?

Change favors the prepared mind (no, that's not a typo).

***

Obviously, textbooks are biased - everyone is biased, we admit it. And often outdated, sometimes dangerously so (at least in my field). Neither of these would be a problem, except that beginning students aren't always taught to keep these things in mind.

The first thing you need is a healthy dose of doubt. Don't take anyone's word for it that we actually know something. Anything. Figure out which stuff you believe. To do that, you have to read.

***

Then you have to learn the limits of the technology. What can you really say. Where are the holes. What's the limit of detection. What's the error.

I can't emphasize this enough. My main frustrations with colleagues lately are that they don't seem to understand the most basic techniques that we all use. What you can and can't say using qualitative or quantitative methods. When it's inappropriate to just report a mean, or just show one example. The difference between a good reagent and a bad one, and how to tell which is which.

***

To start to learn this, you do experiments. And you read papers. LOTS of papers.

When you read a lot, patterns emerge. In most fields, there are groups of authors who dominate the Big Journals, and their papers converge on an accepted model. It's the pat-each-other-on-the-back phenomenon. You'll know it because they all cite each other, and they don't discuss outstanding questions or exceptions (even if they were the ones who authored them in the first place).

But in every field, in other journals, there are odd little observations that don't fit with the accepted model. You usually can't find these by reading citation lists; you have to go directly to the library (PubMed, Google Scholar, whatever).

You have to read both kinds of papers.

Usually the accepted model is mostly right.

But it's almost NEVER Completely Right (!).

Usually the odd little observations are in odd little systems that are less well understood, obscure organisms or uncommon techniques. And that is why they are easily ignored or dismissed. They are often badly written and almost always hard to understand.

But biology is all about exceptions, because every rule gets broken somewhere.

You have to read both kinds of papers to find the holes. The mainstream papers are all about "pay no attention to the man behind the curtain!"

If you just read those, of course it will seem like everything is already known.

It's not.

***

Sometimes when a rule breaks, it's an example of another way to solve a biological problem. Parallel evolution, maybe.

May be telling you something important.

Sometimes an exception is just an exception. Differences can be tolerated, even if they don't excel.

The gamble is guessing, when you find an exception, whether it's just tolerated in a marginal system, or if it's telling you to think about a paradigm shift. Or a way to do something in vitro.

A great example are thermophilic bacteria. Once upon a time, people would have told you, there is no way life can survive in a place that hot.

Fast-forward a couple of decades, and there is no way we could do the things with PCR that we do now without polymerases from those thermophiles, which can survive 30 cycles in a heat block.

***

So reading is great, but I actually get most of my really new ideas when I'm observing the systems I work in by doing experiments.

Beware the risky all-or-nothing projects: they just lead to dead ends.

Here's how to tell the difference.

For a good observational project, if it's designed correctly with controls and a broad enough hypothesis, you're going to learn something no matter what.

You'll have the opportunity to make observations by stressing the system just enough.

For a risky all-or-nothing project, you'll often hear the phrase, "If this works..."

The reason "if" features so prominently is that if it doesn't work, you've got nothing. Not even new information about what to do next.

For better or worse, I design my experiments so that, if all the controls work the way they should (and they don't always work as expected!), I will learn something.

Something new.

Usually along the way, I see something I didn't expect to see. And then I go design an experiment to test whether that's real.

So to summarize this whole diatribe: pay attention to the exceptions. If it's reproducible, it's probably not just an anomaly.

The trick is having the nerve, the time, the resources, and the permission to find out which ones are anomalies by testing them.


Sunday, December 23, 2007

Go, go, go.

Someone wrote a comment about whether it's okay to rest on one's laurels or take a break or if it's got to be 'go go go' all the time.

Dear lazy old PIs who want to rest because you've been working hard all your lives,

Take a week off a couple times a year if you need to. You deserve it.

Just don't hang on to a job you're not actually doing.

Especially when there aren't enough jobs for younger, more energetic people who have tons of new ideas and no resources to test them.

Don't "rest on your laurels" by taking credit for what your younger colleagues are doing.

Especially don't do this by keeping them as slaves in your lab writing your grants, making slides for your talks, teaching your students, and generally everything else you're supposedly getting paid salary and grant funding to do.

Oh yeah, and if you're one of the ones taking a week off now and then, tell your lab slaves they deserve the same.

That's all.

Thanks.

Sincerely,

MsPhD
