There was quite a lot of kerfuffle over the weekend about a lengthy piece from Daniel Sarewitz in The New Atlantis, entitled Saving Science. Here’s the subhead from Sarewitz’s article, which might possibly help explain what all the fuss was about:
Science isn’t self-correcting, it’s self-destructing. To save the enterprise, argues Daniel Sarewitz, scientists must come out of the lab and into the real world.
Them’s fighting words…
…and the gloves are off.
Actually, no, I’m not going to tear into Sarewitz (well, not too much) because, buried in the hyperbole, puffery, and wild overstatement, he in fact makes a few decent points, which, if somewhat less than ground-breaking in their insights, are helpful to bear in mind. Let’s deal with the less level-headed assertions first, however.
As And Then There’s Physics points out, the thrust of Sarewitz’s article would appear to be that science should be more like engineering, i.e. focussed on near-term (and near-market) goals. Sarewitz argues that the disinterest that should be core to fundamental science, and to which scientists aspire, is a “beautiful lie” that has trapped the scientific enterprise “in a self-destructive vortex”, apparently insulated from the outside “real” world. If only scientists would do what they’re told — by whom? (and therein lies the rub, of course) — then all would be right with the world.
I’m a physicist whose work is unashamedly, and firmly, focussed on the fundamental rather than the applied. (Nonetheless, I should stress that “ivory tower” stereotypes wind me up a great deal. Like very many of my colleagues, I spend a great deal of time on public engagement.) Sarewitz’s claims about the damage wrought by curiosity-driven science, as he perceives it, are frustratingly naive in the context of the university-industry complex. John Ziman, the physicist turned sociologist, rightly included disinterestedness as one of the core norms he laid out in characterising scientific culture and the scientific method. (It’s the “D” in his CUDOS set of norms). [EDIT Sept 2 2016: In the comments section below, Jack Stilgoe makes the important point that the CUDOS norms were, of course (and as I state in the Nature Nanotech article also mentioned below) originally put forward by Merton, not Ziman. My apologies for not giving credit where credit is due.] If exploratory research — science for science’s sake, if you will — is driven out in favour of the type of intensely focussed R&D Sarewitz is championing, then we compromise the disinterestedness that has underpinned so many key advances. But, more importantly, we further erode public trust in science.
Back in 2008, when what’s now known in UK academia as the “impact agenda” was in its infancy, I wrote an opinion piece for Nature Nanotechnology — I’m a nanoscientist — focussed on the type of concerns that Jennifer Washburn had raised about the corporatisation of universities (in her exceptionally important book, “University Inc”). Sarewitz is a professor of science and society; I am confident that he is just as aware as I am of the very many ethical quandaries, at best, and entirely unethical behaviour, at worst, that have arisen from science being too close to, rather than cosseted from, the “real world”. Some of these issues are described in Washburn’s book (and in that Nature Nanotech article), but a cursory glance at Ben Goldacre’s work, or a browse through David Colquhoun‘s blog, or a visit to the website of Scientists For Global Responsibility will also help demonstrate that sometimes it’s rather important to ensure that scientists are detached from the real world of the corporate bottom line.
My colleague here at Nottingham, Brigitte Nerlich, has also written a critique of Sarewitz’s piece in which she quotes Richard Feynman’s musings on the value of science. (As a physicist, I am contractually obliged to quote Feynman at least twice daily so it’s great to see that sociologists are also getting in on the act!)
…it seems to be generally believed that if the scientists would only look at these very difficult social problems and not spend so much time fooling with the less vital scientific ones, great success would come of it.
It seems to me that we do think about these problems from time to time, but we don’t put full-time effort on them – the reason being that we know we don’t have any magic formula for solving problems, that social problems are very much harder than scientific ones, and that we usually don’t get anywhere when we do think about them.
Sarewitz’s argument is that scientific research should be tethered to “real world” problems and that, in doing so, science will be saved. Yet there has been a strong drive worldwide over the last decade or so to make academic science more focussed on near-term and near-market research of exactly the type Sarewitz prefers. Has this led to dramatic improvements in the quality of scientific peer review? Has it led to a reduction of the publish-or-perish culture? Or has it instead driven the development of a patent-or-perish and IP-protection culture that impedes, rather than improves, public engagement with science?
Feynman’s point that “we know we don’t have any magic formula for solving problems, that social problems are very much harder than scientific ones” is exceptionally important in the context of Sarewitz’s article. Without the disinterestedness that is the hallmark of good science — that we teach to our undergrad students from Day 1 in the 1st year laboratory — scientific data will be consciously or unconsciously skewed. Real world considerations need to be put aside when acquiring and interpreting experimental data.
ATTP notes that Sarewitz’s article is peppered with entirely unjustified claims about the validity of science as a whole. For example, Richard Horton is quoted (on p.18 of the article):
The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue.
This is credulously quoted, with nary a citation in sight, as damning of the entire scientific enterprise. If I’m generous, the source of Horton’s “perhaps half” estimate is most likely John Ioannidis’ oft-cited paper, “Why most published research findings are false” (which Sarewitz also discusses in his article). The “clickbait” of the title of Ioannidis’ paper is unfortunate because his article, as described in this insightful blog post, is rather more nuanced than one might expect. In any case, Ioannidis was focused on biomedical science and, moreover, on a particular type of methodological approach to research that is not the norm in other areas of research including, in particular, physics (and, more broadly, many fields of the physical sciences). Horton’s “perhaps half” is entirely unjustified and it is remiss of Sarewitz to not at the very least qualify Horton’s claim and point out the lack of evidence to support it.
This is not to say, however, that Sarewitz, Ioannidis, and Horton haven’t got a point when it comes to the deficiencies in peer review. There are indeed many problems with peer review and, having been embroiled in a lengthy and exceptionally heated debate for a number of years regarding the interpretation of artefacts in scanning probe microscope data, I have a great deal of sympathy with Sarewitz’s concerns about the exceptionally poor quality control that allows some flawed (or, worse, fraudulent) papers through the net.
But Sarewitz’s claim that “In the absence of a technological application that can select for useful truths…there is often no “right” way to discriminate among or organise the mass of truths scientists create” is, not to put too fine a point on it, bollocks. Science rests on reproducibility of results. One can argue that this doesn’t happen enough and that the “reward” system in science is now so damaged that studies which involve attempts to reproduce results are seen as effectively worthless in our “high impact factor journal” culture. But that doesn’t mean that a real world application is required to discriminate between competing theories or interpretations; the literature is awash with examples where scientific theories and interpretations rose to prominence via careful experimental work that was far removed from any real world application.
On a similar theme, Sarewitz goes on to state that “…we have the wrong expectations of science. Our common belief is that scientific truth is a unitary thing…“. This is an important point and I agree with Sarewitz that there is a naivety “out there” about just what scientific results demonstrate. Science proves nothing. Moreover, in a political context, it is important for scientists to be honest and to admit that interpretation of data is not always as cut-and-dried as it is often presented.
But to argue, as Sarewitz does in his closing line, that “Only through direct engagement with the real world can science free itself to rediscover the path toward truth” is a remarkable leap of faith. Connection with “real world” imperatives too often produces science that is driven by the bottom line; science that is compromised; science that is biased. That’s the bottom line.
“Disinterested” and “uninterested” are not synonymous. It’s a shame that I have to include this disclaimer, and I realise that for many it’s entirely superfluous, but I had to explain the distinction to a research council executive a number of years back.