Where are the holes?
As a comment on my last post, Gosh asked how you know where to look in science to find something new, and says it sometimes feels like you'd have to read a lot of textbooks, since it seems like everything is already known.
I've written a lot about how much I think reading helps. It's not the whole thing, but it is a big part.
One professor I had in college assigned us two giant textbooks to read over the summer.
Maybe I'm the only one who actually did (?!).
I liked the message it sent:
We're not going to go chapter by chapter and ask you stupid quiz questions every week. We're going to ask you to absorb as much of this as you can, so we can move on to more interesting discussion. But we can't talk as equals until you've been exposed to the vocabulary and the concepts.
There are at least three main pluses to reading well-written textbooks:
1. Outstanding questions. Things we've wanted to know that we still don't know. Good textbooks will just tell you plainly, "Here is an interesting idea. We still don't know how this works."
At the root of it, this is the main thing that made me want to get into science. The idea that everything is NOT already known!
I was shocked to learn this. Weren't you?
2. Historical context. Sometimes it's downright funny how wrong the old models were, but you can often see exactly why they thought they were right, given what they knew at the time.
3. Practice reading for assumptions. It's partly language, and it's partly smell - you start to sniff out the leaps in logic. Textbooks are a great place to learn how to tell when something has been 'assumed' but never actually tested.
Textbooks are where you learn the rules. How will you know an exception if you don't know the rules?
Change favors the prepared mind (no, that's not a typo).
Obviously, textbooks are biased - everyone is biased, we admit it. And often outdated, sometimes dangerously so (at least in my field). Neither of these would be a problem, except that beginning students aren't always taught to keep these things in mind.
The first thing you need is a healthy dose of doubt. Don't take anyone's word for it that we actually know something. Anything. Figure out which stuff you believe. To do that, you have to read.
Then you have to learn the limits of the technology. What can you really say? Where are the holes? What's the limit of detection? What's the error?
I can't emphasize this enough. My main frustrations with colleagues lately are that they don't seem to understand the most basic techniques that we all use. What you can and can't say using qualitative or quantitative methods. When it's inappropriate to just report a mean, or just show one example. The difference between a good reagent and a bad one, and how to tell which is which.
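To make the "just reporting a mean" complaint concrete, here is a minimal sketch (with invented numbers, purely for illustration) of how two measurement sets can share a mean while telling completely different stories:

```python
# Hypothetical replicate measurements - the numbers are made up to
# illustrate the point, not taken from any real experiment.
from statistics import mean, stdev

tight = [9.8, 10.0, 10.1, 10.0, 10.1]   # consistent replicates
bimodal = [5.0, 15.0, 5.1, 14.9, 10.0]  # two sub-populations lurking?

# Both data sets have a mean of 10.0...
print(mean(tight), mean(bimodal))

# ...but the spreads differ wildly. Reporting only the mean would
# hide the fact that the second set may not be one population at all.
print(stdev(tight), stdev(bimodal))
```

If a reviewer only ever sees the mean, both experiments look identical; showing the spread (or the raw points) is what exposes the hole.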
To start to learn this, you do experiments. And you read papers. LOTS of papers.
When you read a lot, patterns emerge. In most fields, there are groups of authors who dominate the Big Journals, and their papers converge on an accepted model. It's the pat-each-other-on-the-back phenomenon. You'll know it because they all cite each other, and they don't discuss outstanding questions or exceptions (even if they were the ones who authored them in the first place).
But in every field, in other journals, there are odd little observations that don't fit with the accepted model. You usually can't find these by reading citation lists, you have to go directly to the library (Pubmed, Google Scholar, whatever).
You have to read both kinds of papers.
Usually the accepted model is mostly right.
But it's almost NEVER Completely Right (!).
Usually the odd little observations are in odd little systems that are less well understood, obscure organisms or uncommon techniques. And that is why they are easily ignored or dismissed. They are often badly written and almost always hard to understand.
But biology is all about exceptions, because every rule gets broken somewhere.
You have to read both kinds of papers to find the holes. The mainstream papers are all about "pay no attention to the man behind the curtain!"
If you just read those, of course it will seem like everything is already known.
Sometimes when a rule breaks, it's an example of another way to solve a biological problem. Parallel evolution, maybe.
It may be telling you something important.
Sometimes an exception is just an exception. Differences can be tolerated, even if they don't excel.
The gamble is guessing, when you find an exception, whether it's just tolerated in a marginal system, or if it's telling you to think about a paradigm shift. Or a way to do something in vitro.
A great example is thermophilic bacteria. Once upon a time, people would have told you, there is no way life can survive in a place that hot.
Fast-forward a couple of decades, and we couldn't do the things with PCR that we do now without polymerases from those thermophiles, which can survive 30 cycles in a heat block.
So reading is great, but I actually get most of my really new ideas when I'm observing the systems I work in by doing experiments.
Beware the risky all-or-nothing projects: they just lead to dead ends.
Here's how to tell the difference.
For a good observational project, if it's designed correctly with controls and a broad enough hypothesis, you're going to learn something no matter what.
You'll have the opportunity to make observations by stressing the system just enough.
For a risky all-or-nothing project, you'll often hear the phrase, "If this works..."
The reason "if" features so prominently is that if it doesn't work, you've got nothing. Not even new information about what to do next.
Funding-wise, for better or worse, I design my experiments so that, if all the controls work the way they should (and they don't always work as expected!), I will learn something.
Usually along the way, I see something I didn't expect to see. And then I go design an experiment to test whether that's real.
So to summarize this whole diatribe: pay attention to the exceptions. If it's reproducible, it's probably not just an anomaly.
The trick is having the nerve, the time, the resources and the permission to find out which ones are anomalies by testing them.