On Devils and Details
My thesis advisor was fond of the phrase "the devil's in the details." I always took this to mean that the details, left undone, would come back to bite you in the ass.
I've since decided that this really is true in science, and it bites everyone (including my thesis advisor). But like most things in life, timing counts for a lot. If you can outrun the details, you can get away with leaving them out.
And knowing whether a detail is a technical problem or an exception to your model is critical.
Sometimes chasing a detail is just a tangent. Sometimes it's the whole enchilada.
...
When I was learning how to write postdoc fellowship applications, my advisor told me that it was important to add enough detail to make it believable that you've really thought everything through. Of course the challenge, then, was not to take up too much space with minor points.
This was when I started to think a lot about emphasis. You want to highlight your main points, and then dress them up with just enough details. The trick was choosing the right details:
Not all details are equal.
...
More recently, I've noticed that my interdisciplinary interests are biting me in the ass, due to religious differences over the utility of details.
One discipline values details as a mark of integrity and thoroughness. Their papers tend to be solid, reproducible, and not in the Cell/Science/Nature journals.
The other discipline is quite the opposite. They value salesmanship. I'm pleased if I'm able to reproduce anything that has been published in that field, since it means it might not all be wrong.
To those people, salesmanship means glossing over, if not outright burying, details that don't fit with the prettiest version of their model.
They view people who pay attention to the details as mere technicians: people who must surely be missing the big picture.
Of course, so far as I can tell, there is no correlation between being good technically or attentive to details and lacking the ability to simultaneously think about the big picture. Like most human qualities, it's really a spectrum. The two skills are not mutually exclusive.
I think the best scientists can do both the macro and the micro, the thinking and the hands-on part.
However, thinking about the big picture, and communicating the big picture, are two different things. That's where the sales skills come in.
Communicating the big picture is something some of us have to work hard to learn how to do. So I'm trying to figure out which details to hide on my slides so I don't lose the people from the sales-heavy field - and just hoping someone from the other field will ask if they don't believe me, so I can fill them in later.
...
I find myself trapped, since so far as I can tell, these sales people are largely Devils - when they're not outright lying, they're dangerously sloppy.
And I've had the unfortunate experience, perhaps because of my particular interdisciplinary bent, that leaving out any details tends to annoy reviewers and lower my credibility.
It matters which details you leave out.
Not to turn this into another rant about how corrupt our publication and funding and hiring systems are, but, let's face it. Most scientists can't handle the real truth: that many of their colleagues are desperate enough to be devils who bury the details that would ruin all their favorite models.
So I really have to wonder, if we held everyone to a higher standard, and really asked for all the details - could we get out of this state of denial where people are offended, rather than proud, to be asked?
9 Comments:
What you're going through seems to be related to the conflict between Boyle and Newton in the Royal Society. Boyle was a strict experimentalist who documented all work meticulously, whereas Newton focused more on publishing the crucial experiment. I think that it speaks well to the need to make a general case while looking for the extremes.
For completeness, my knowledge of the conflict between these two great scientists came from Steve Shapin's The Scientific Revolution. I would highly recommend this book on scientific epistemology.
I have a slightly different interpretation of that phrase. I always just thought it meant that the details are the hardest part.
When you give a presentation where both groups are in the audience, can you leave the details off your slides, but say that you have more data if anyone wants to talk to you after? I've seen this done pretty effectively.
Woah, blogging synchronicity strikes again!
You're quite right. It’s not enough to merely print random facts and to quote people. Knowing what to include, what not to include, and understanding the context is important.
I have had this problem multiple times over the years, including with a former grad-school advisor. These things are not lying outright in the sense of speaking falsehoods, but are deliberately misleading by presenting true statements that lack the necessary context, or that are phrased in such a way as to lead the listener (or reader) to make connections that don’t really exist. Facts are not enough.
You're focusing on the negative of this, but in my broad area of neuroscience, a professor once said that everyone settles into a level of uncertainty where they are comfortable. If you really hate uncertainty, you'll be studying protein structures on neurons. If you can handle a bit more uncertainty, you'll be recording spikes from neurons. This goes up through non-invasive human imaging and sociology studies.
The field wouldn't exist without people across the entire area, but the certainty of results does change.
That said, I think the biggest question is: when is a lack of interest in details an acceptance of uncertainty in results, grounded in an understanding of the role uncertainty plays in one's research and conclusions, and when is it the laziness of not wanting to do good science?
I've seen good salespeople who are lazy with details in fields with many levels of uncertainty.
My one big critique of the post is that after talking about details, you throw in a very broad generalization at the end: "Most scientists can't handle the real truth: that many of their colleagues are desperate enough to be devils who bury the details that would ruin all their favorite models." I've known way too many prominent people who accept their models' limitations and even accept that they are wrong when the data say otherwise. Yes, this happens, but it is still the minority by far.
For presentations, I think ecogeofemme has it right. However, publishing is a lot more difficult... For example, Nature papers have huge prestige. My HoD would LOVE me to publish in Nature. But papers in my field in Nature are either so lacking in detail as to be of no use in terms of advancing the science (and it's surprising how often there is either NO follow-up paper to be found, or the follow-up paper with more detail/data substantially weakens the sweeping conclusion of the status-garnering original), or are opinion pieces rather than science - 'I know, let's come up with this new term and show how well it works by applying it to a couple of totally unrelated data sets, and it MUST be good because famous-professor-who-I-once-worked-with is a co-author'. I may be biased - but as a SCIENTIST I find Nature-type papers virtually useless in my field. I NEED the detail.
I always read the methods section if my 'topping and tailing' suggests that I need to read the paper at all - a lot of people in my field don't understand or publish statistics, but things like sample size can be gleaned from the methods with patience. In most cases.
I hate the defensiveness of many scientists - but I have seen myself that if you write papers or grant applications which make explicit reference to issues where we are uncertain, or to possible alternative interpretations, you get problems. I have had grants rejected on the grounds that 'we already know the answer', although that usually means 'X wrote a paper in the 1960s in which (s)he stated an assumption which we have held to be true ever since'. (Many of the core assumptions of my field are untested; where tentative tests have been carried out suggesting that the assumptions are either wrong or applicable only to one specific subset of experimental setups, the tests have been ignored.) I have had papers rejected or returned for substantial revision asking for more data because my error bars are wide (or whatever) - but I KNOW for absolute sure that other published work has errors at least as large, and only states mean values.
Part of creating the kind of science we want to create has to involve dealing with these devils. That means naming them and calling them out. You make me think that perhaps I, in my relatively privileged position (I have a 'continuing' faculty post, which is as near to tenured as you can get in the UK these days), should be doing more on this. BUT the 'it's a great theory so it must be right, lose that data point and let's publish it now, let's not wait for confirmation that we can replicate' people are still the gatekeepers for many grant-awarding bodies... and getting papers and applications bounced is both dispiriting and bad for my own career, and more importantly for my students' and junior colleagues' careers...
Basically my experience has been that it is much harder to get funding to try to be more precise, to better quantify the uncertainty in a method, to test the 'big ideas' that have become 'big truths' even though they were clearly 'small scribbles on beermats down the pub' in the first place, or to work on improving existing knowledge. As 'Am I a female scientist's recent post about meta-analysis shows, this is NEEDED. But it's not 'sexy', and even worse, it involves showing up possible weaknesses in past work - which people take as a direct challenge or an insult.
It takes self-confidence to be honest about science. It can lay you open as an easy target for the posturers. But it is essential if science is to get anywhere - at the end of the day, we need more Boyles and only one or two Newtons a generation. Think of Kepler - it was his refusal to explain away the little bits of data that didn't fit that led to the recognition of elliptical orbits. He couldn't have done that without Brahe's data - warts and all.
One major reason I'm contemplating leaving science is the strong focus on selling research rather than doing research. I know several people who are very good grant writers but who are what I would call bad scientists, in the sense that if I wanted to understand something in detail, their papers would be mostly useless. They look good on glossy paper such as Nature, Science, etc., and even the departmental quarterly, but their research is irreproducible and conveniently (for them!) leaves out critical problems with their results.
With people diluting the quality of their papers more and more, scientific quality is no longer measured by itself (a theorem is a theorem) but by where the paper was published and the relative fame of various coauthors. It's sad!
I know I'm not the only one who has complained that those who spend a lot of time touring and lobbying for their research and attending "how to write better grants" workshops seem to be the more successful "scientists", whereas those who spend all their time in the lab double-checking their results get passed over.
In my mind there is a divide between the intentions of the grant-writing process and its incentives (the same goes for the tenure process). People are rewarded for the appearance of productivity rather than for the scholarship itself. Papers are written with just enough quality to barely pass through the review process, because two B- papers about the same thing are better than one A+ paper.
Nobody told me that this was what modern science was all about when I signed up.
You're absolutely right that those who run fast enough can ignore the details. I think they are called skimmers. This leaves the most difficult problems to the next guy who in turn won't get the rewards of first discovery. Sh*t does indeed roll downhill.
I always wondered whether these guys are incompetent or evil. I think in science you'd want neither around. End rant :-)
I get that from my thesis adviser all the time! For me, it's mostly because our experiments are extremely sensitive (0.21 M salt can give you a constant 5-fold above the one you get from 0.22 M salt, etc.).
It's difficult to pick the appropriate level of detail for a talk, I can empathize with that. My dept is very diverse, so I have to give the details the biophysics people want, while keeping the genetics people up to speed.
Sometimes I do a mix of what ecogeofemme suggested, and the opposite, putting details like experimental conditions on the slide for those who are looking for them, but focusing my speech on the main *selling* points.
Thanks for the food for thought!
It seems obvious to me that only a sufficient number of scandals in the GlamourMagz (like the recent L. Buck one) will force them, kicking and screaming, to prioritize good science (by which I suppose I mean the details) over sensationalism and salesmanship.
The blame lies with more than just the GlamourMagz.
As long as publishing in the GlamourMagz is so key to getting tenure at the top universities, or even to getting faculty jobs at some of them, some people will do anything to get published in them, including stretching the facts or overlooking details. When everyone needs to publish there, the journals have no reason to boost quality, since they'll always get the top papers in addition to the flashy junk.
When hiring/tenure committees weight high-quality papers in the top specialty journals as heavily as a flashy but light piece in Nature or Science, the # of people pushing to publish there will decrease, and the journals will need to present a better case for why people should publish there.