It’s so important to be disinterested in science

There was quite a lot of kerfuffle over the weekend about a lengthy piece from Daniel Sarewitz in The New Atlantis, entitled “Saving Science”. Here’s the subhead from Sarewitz’s article, which might possibly help explain what all the fuss was about:

Science isn’t self-correcting, it’s self-destructing. To save the enterprise, argues Daniel Sarewitz, scientists must come out of the lab and into the real world.

Them’s fighting words…

…and the gloves are off.

Actually, no, I’m not going to tear into Sarewitz (well, not too much) because, buried in the hyperbole, puffery, and wild overstatement, he in fact makes a few decent points, which, if somewhat less than ground-breaking in their insights, are helpful to bear in mind. Let’s deal with the less level-headed assertions first, however.

As And Then There’s Physics points out, the thrust of Sarewitz’s article would appear to be that science should be more like engineering, i.e. focussed on near-term (and near-market) goals. Sarewitz argues that the disinterest [1] that should be core to fundamental science, and to which scientists aspire, is a “beautiful lie” that has trapped the scientific enterprise “in a self-destructive vortex”, apparently insulated from the outside “real” world. If only scientists would do what they’re told — by whom? (and therein lies the rub, of course) — then all would be right with the world.

I’m a physicist whose work is unashamedly, and firmly, focussed on the fundamental rather than the applied. (Nonetheless, I should stress that “ivory tower” stereotypes wind me up no end. Like very many of my colleagues, I spend a great deal of time on public engagement.) Sarewitz’s claims about the damage wrought by curiosity-driven science, as he perceives it, are frustratingly naive in the context of the university-industry complex. John Ziman, the physicist turned sociologist, rightly included disinterestedness as one of the core norms he laid out in characterising scientific culture and the scientific method. (It’s the “D” in his CUDOS set of norms.) [EDIT Sept 2 2016: In the comments section below, Jack Stilgoe makes the important point that the CUDOS norms were, of course (and as I state in the Nature Nanotech article also mentioned below), originally put forward by Merton, not Ziman. My apologies for not giving credit where credit is due.] If exploratory research — science for science’s sake, if you will — is driven out in favour of the type of intensely focussed R&D Sarewitz is championing, then we compromise the disinterestedness that has underpinned so many key advances. But, more importantly, we further erode public trust in science.

Back in 2008, when what’s now known in UK academia as the “impact agenda” was in its infancy, I wrote an opinion piece for Nature Nanotechnology — I’m a nanoscientist — focussed on the type of concerns that Jennifer Washburn had raised about the corporatisation of universities (in her exceptionally important book, “University Inc”). Sarewitz is a professor of science and society; I am confident that he is just as aware as I am of the very many ethical quandaries, at best, and entirely unethical behaviour, at worst, that have arisen from science being too close to, rather than cosseted from, the “real world”. Some of these issues are described in Washburn’s book (and in that Nature Nanotech article), but a cursory glance at Ben Goldacre’s work, or a browse through David Colquhoun’s blog, or a visit to the website of Scientists For Global Responsibility will also help demonstrate that sometimes it’s rather important to ensure that scientists are detached from the real world of the corporate bottom line.

My colleague here at Nottingham, Brigitte Nerlich, has also written a critique of Sarewitz’s piece in which she quotes Richard Feynman’s musings on the value of science. (As a physicist, I am contractually obliged to quote Feynman at least twice daily so it’s great to see that sociologists are also getting in on the act!)

…it seems to be generally believed that if the scientists would only look at these very difficult social problems and not spend so much time fooling with the less vital scientific ones, great success would come of it.

It seems to me that we do think about these problems from time to time, but we don’t put full-time effort on them – the reason being that we know we don’t have any magic formula for solving problems, that social problems are very much harder than scientific ones, and that we usually don’t get anywhere when we do think about them.

Sarewitz’s argument is that scientific research should be tethered to “real world” problems and that, in doing so, science will be saved. Yet there has been a strong drive worldwide over the last decade or so to make academic science more focussed on near-term and near-market research of exactly the type Sarewitz prefers. Has this led to dramatic improvements in the quality of scientific peer review? Has it led to a reduction of the publish-or-perish culture? Or has it instead driven the development of a patent-or-perish and IP-protection culture that impedes, rather than improves, public engagement with science?

Feynman’s point that “we know we don’t have any magic formula for solving problems, that social problems are very much harder than scientific ones” is exceptionally important in the context of Sarewitz’s article. Without the disinterestedness that is the hallmark of good science — that we teach to our undergrad students from Day 1 in the 1st year laboratory — scientific data will be consciously or unconsciously skewed. Real world considerations need to be put aside when acquiring and interpreting experimental data.

ATTP notes that Sarewitz’s article is peppered with entirely unjustified claims about the validity of science as a whole. For example, Richard Horton is quoted (on p.18 of the article):

The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue.

This is credulously quoted, with nary a citation in sight, as damning of the entire scientific enterprise. If I’m generous, the source of Horton’s “perhaps half” estimate is most likely John Ioannidis’ oft-cited paper, “Why most published research findings are false” (which Sarewitz also discusses in his article). The “clickbait” of the title of Ioannidis’ paper is unfortunate because his article, as described in this insightful blog post, is rather more nuanced than one might expect. In any case, Ioannidis was focused on biomedical science and, moreover, on a particular type of methodological approach to research that is not the norm in other areas of research including, in particular, physics (and, more broadly, many fields of the physical sciences). Horton’s “perhaps half” is entirely unjustified and it is remiss of Sarewitz to not at the very least qualify Horton’s claim and point out the lack of evidence to support it.

This is not to say, however, that Sarewitz, Ioannidis, and Horton haven’t got a point when it comes to the deficiencies in peer review. There are indeed many problems with peer review and, having been embroiled in a lengthy and exceptionally heated debate for a number of years regarding the interpretation of artefacts in scanning probe microscope data, I have a great deal of sympathy with Sarewitz’s concerns about the exceptionally poor quality control that allows some flawed (or, worse, fraudulent) papers through the net.

But Sarewitz’s claim that “In the absence of a technological application that can select for useful truths…there is often no “right” way to discriminate among or organise the mass of truths scientists create” is, without putting too fine a point on it, bollocks. Science rests on reproducibility of results. One can argue that this doesn’t happen enough and that the “reward” system in science is now so damaged that studies which involve attempts to reproduce results are seen as effectively worthless in our “high impact factor journal” culture. But that doesn’t mean that a real world application is required to discriminate between competing theories or interpretations; the literature is awash with examples where scientific theories and interpretations rose to prominence via careful experimental work that was far removed from any real world application.

On a similar theme, Sarewitz goes on to state that “…we have the wrong expectations of science. Our common belief is that scientific truth is a unitary thing…“. This is an important point and I agree with Sarewitz that there is a naivete “out there” about just what scientific results demonstrate. Science proves nothing. Moreover, in a political context, it is important for scientists to be honest and to admit that interpretation of data is not always as cut-and-dried as it is often presented.

But to argue, as Sarewitz does in his closing line, that “Only through direct engagement with the real world can science free itself to rediscover the path toward truth” is a remarkable leap of faith. Connection with “real world” imperatives too often produces science that is driven by the bottom line; science that is compromised; science that is biased. That’s the bottom line.

[1] “Disinterested” and “uninterested” are not synonymous. It’s a shame that I have to include this disclaimer, and I realise that for many it’s entirely superfluous, but I had to explain the distinction to a research council executive a number of years back.


Author: Philip Moriarty

Physicist. Rush fan. Father of three. (Not Rush fans. Yet.) Rants not restricted to the key of E minor...

10 thoughts on “It’s so important to be disinterested in science”

  1. Nice post. There are a number of things that I find surprising about Sarewitz’s comment (and he’s not alone). One is how editorial it is. In my experience, comments or reviews would normally try to synthesise the current understanding of some topic. Of course, people will always have their own slants, but normally there would be some attempt to present an overall view, rather than simply focusing on your preferred viewpoint. In this case, Sarewitz’s comment seems to be little more than his general opinion. It could just as easily have been in a magazine, or newspaper.

    The other issue (and this does rather bug me) is that whenever I read one of these articles from someone who is a Science and Society researcher, I see little indication that they recognise that, rather than being an objective observer, they’re part of the very system that they’re critiquing. Yes, there are problems that we could be addressing, but they apply to them as much as to anyone else. So, why should we accept their rather alarmist conclusions about the problems in science, especially when they provide little more than anecdotes?


  2. Great push-back on an important topic. A few comments:

    1. “If only scientists would do what they’re told — by whom?” – the last couple of examples of large-scale manipulation of science by governments for social purposes (Nazi Germany and the USSR) provide great models for how (not) to do this…

    2. As I said in a letter to THE a while ago (and which I might have mentioned before in a comment on your blog?) if Ioannidis is correct, and “most published research findings are false”, then, in all likelihood, so are the conclusions in his paper.

    3. The Sarewitz piece seems to be the latest example of a growing trend of claiming that “X is broken” or “X is fucked”, and that X needs to be completely overhauled/changed to make it not broken or fucked. It goes back at least to Cameron’s pre-election statements about “Broken Britain” and has encompassed the NHS, the EU, peer review, etc. It frustrates the hell out of me; see my comments on another recent example over at the Dynamic Ecology blog.

    Maybe I was taking that one too seriously, but it’s a very corrosive mindset, in my view, even if being used as a rhetorical device (as in that case).


  3. Some of Sarewitz’s claims are interesting, potentially important, and sometimes I share his worries. Unfortunately, his article does not provide any evidence for the problems he identifies, or the solutions he proposes, beyond anecdotes, nor any evidence that the problems are getting worse (though such evidence may well exist). It does not go beyond a blog post I might have written; from an expert I expect better.

    Without the disinterestedness that is the hallmark of good science — that we teach to our undergrad students from Day 1 in the 1st year laboratory — scientific data will be consciously or unconsciously skewed. Real world considerations need to be put aside when acquiring and interpreting experimental data.

    As a young researcher I thought the same. I now start wondering. If you do good science there is not that much wiggle room. You have to be level-headed enough to do good work, and good work will look as if you are disinterested. I wonder if it is worth its own CUDOS category or whether it is simply a consequence.


  4. “But Sarewitz’s claim that “In the absence of a technological application that can select for useful truths…there is often no “right” way to discriminate among or organise the mass of truths scientists create” is, without putting too fine a point on it, bollocks.”

    Your criticism is surely valid for the few remaining functional parts of the scientific landscape.

    In my field, roughly “bionano”, most scientific papers are justified by applications rather than by new fundamental scientific insights. Thus, it is fair that external observers would then refer to those proposed applications to evaluate the solidity of our work, or, as he says, “select for truth”. Replications are almost unheard of. In most cases, those applications, highlighted in the introductions of all articles, never materialize.

    And, remarkably, in one high profile example from a high profile lab (Chad Mirkin) where the technological application did materialize and was commercialised by a multinational company (Merck), the technology does not work, though the negative data that would show this tend not to be reported because of negative bias in science publishing (which brings us back to Ioannidis and co…).


    1. Thanks, Raphael. You’re absolutely correct — I’ve fallen into the same trap as Sarewitz, and failed to recognise the different cultures in the various (sub)fields of science.

      The SmartFlares™ case is remarkable. Kudos, as ever, to you and your research team for pursuing it.


  5. Hi Philip. I’m a big fan of Dan’s work, and I think he’s got the politics pretty accurately here, although in doing so, his broad brush strokes obscure some important points. I think you’re quite right to draw attention to the insidious nature of policy and practice that privatises and corrupts good science, although Ziman was making the point that Merton’s norms (the original CUDOS) were unrealistic and outdated, which is why he suggested more realistic ones (PLACE, if I remember). That’s not to say that disinterestedness is not still a good aspiration, but it may be naive to think it’s possible.


    1. Hi, Jack.

      Thanks for your comment and sorry for not pulling it out of the moderation queue before now. (I only moderate to get rid of spam and don’t edit/censor/”redact” in any other way but the day job has to take precedence over the blog so I don’t check here regularly, and sometimes WordPress doesn’t send me the notifications it should. Grrrr..)

      You’re absolutely correct to point out that those norms were, of course, originally due to Merton — and it was slack of me not to make this much clearer in the post — but I’d disagree that Ziman was arguing that the CUDOS norms should be replaced with what he called the PLACE norms. It was more a question of being resigned to PLACE than advocacy for those norms, as I recall?

      That’s not to say that disinterestedness is not still a good aspiration, but it may be naive to think it’s possible

      But we have to strive for it! Dan’s piece argues that disinterestedness is not even a good aspiration…


  6. Hi Philip, Dan’s perspective here is motivated by a fair amount of scholarship that questions the image of science that seems to undergird your perspective.

    One critique of the idea of pure science uninfluenced by the outside world/politics is that this is a myth. Would the fundamental study of nuclear physics have received so much funding had fission processes not been discovered at the eve of WWII? Moreover, NSF funding for both bio and nano “basic sciences” has increased over the last decades. Surely you don’t suppose this is just because those areas are somehow more objectively interesting rather than economically promising? The goal of the argument for dispelling the notion of pure science is that it would bring the question of which applications should we pursue out into the open. It’d be far better, in my opinion, to have a broader range of social interests represented (than just market potential) than to think we can somehow insulate science from society and politics.

    Moreover, the question of whether disinterested science exists or is even desirable has been disputed (even by good scientists). A foundational study in this regard is Ian Mitroff’s “Subjective Side of Science,” which notes the various counternorms to the “CUDOS” formula you mention and also questions its usefulness. To put it too briefly to do it justice: interestedness helps produce good science. Without it many scientists would give up on promising theories/experiments way too early. Indeed, some have found that Galileo’s data didn’t match his conclusions about the solar system and Eddington’s observational proof of relativity relied on a questionable (if not interested) interpretation of his data. While I agree that disinterestedness can be valuable, it may be wrong to expect it of individuals. I find the idea of objectivity as social and intersubjective (rather than existing in the minds of individual scientists) more persuasive. It is through the process of differently interested individuals subjecting each other’s ideas to scrutiny that better truths emerge. This pluralist/democratic model is more persuasive than the classical one because it explains how rational results can come out of groups of flawed, biased, and boundedly rational individuals.

    Finally, the idea of reproducibility is more complicated, I think, than you present it. When working on the frontiers of science, investigators are stuck in an “experimenter’s regress.” Knowing whether one has correctly reproduced or challenged a result depends on knowing that one has done a correct experiment, which in turn is difficult to know without knowing the correct outcome. The correct outcome, of course, is what is being sought in the first place. If you have time to read the careful studies performed by historians of science, I highly recommend it. When a result is particularly contentious among scientists, previously taken-for-granted assumptions and methodologies start to become much more uncertain and complicated, and new confounds begin to be seen. The truth is far murkier in such situations than the traditional heroic tales that dominate popular and word-of-mouth scientific histories.

    Given all these (and other reasons), I think Sarewitz’s reasoning is more reasonable than you present it. We in Science and Technology Studies (at least any scholar in the field worth his or her salt) try very hard to walk a fine line: trying to recognize the promise and benefits of science without falling prey to scientism and exploring science’s limitations without denigrating what it can accomplish.

    In any case, previously a budding scientist myself, I am always happy to see a scientist engaging honestly and respectfully with someone like Sarewitz. Cheers.


    1. Hi, Taylor.

      Thanks for your comment. I’m up to my eyes at the moment but will do my best to respond soon — your considered points deserve a considered response!

      In the meantime, these videos, for a module I currently teach entitled “The Politics, Perception and Philosophy of Physics” (PPP), may be of interest:

      Those videos highlight that I’m entirely “on board” when you say the following: The truth is far murkier in such situations than the traditional heroic tales that dominate popular and word-of-mouth scientific histories. The truth is indeed far murkier, and I make this point repeatedly to the undergraduates who take the PPP module. But if we abandon the idea of disinterestedness and let wishful thinking and lack of objectivity drive science then the scientific enterprise will get orders of magnitude murkier still!

      Best wishes,


