Addicted to the brand: The hypocrisy of a publishing academic

Back in December I gave a talk at the Power, Acceleration and Metrics in Academic Life conference in Prague, which was organised by Filip Vostal and Mark Carrigan. The LSE Impact blog is publishing a series of posts from those of us who spoke at the conference. They uploaded my post this morning. Here it is…


I’m going to put this as bluntly as I can; it’s been niggling and nagging at me for quite a while and it’s about time I got it off my chest. When it comes to publishing research, I have to come clean: I’m a hypocrite. I spend quite some time railing about the deficiencies in the traditional publishing system, and all the while I’m bolstering that self-same system by my selection of the “appropriate” journals to target.

Despite bemoaning the statistical illiteracy of academia’s reliance on nonsensical metrics like impact factors, and despite regularly venting my spleen during talks at conferences about the too-slow evolution of academic publishing towards a more open and honest system, I nonetheless continue to contribute to the problem. (And I take little comfort in knowing that I’m not alone in this.)

One of those spleen-venting conferences was a fascinating and important event held in Prague back in December, organized by Filip Vostal and Mark Carrigan: “Power, Acceleration, and Metrics in Academic Life”. My presentation, The Power, Perils and Pitfalls of Peer Review in Public – please excuse the Partridgian overkill on the alliteration – largely focused on the question of post-publication peer review (PPPR) via online channels such as PubPeer. I’ve written at length, however, on PPPR previously (here, here, and here), so I’m not going to rehearse and rehash those arguments. I instead want to explain just why I levelled the accusation of hypocrisy and why I am far from confident that we’ll see a meaningful revolution in academic publishing any time soon.

Let’s start with a few ‘axioms’/principles that, while perhaps not being entirely self-evident in each case, could at least be said to represent some sort of consensus among academics:

  • A journal’s impact factor (JIF) is clearly not a good indicator of the quality of a paper published in that journal. The JIF has been skewered many, many times with some of the more memorable and important critiques coming from Stephen Curry, Dorothy Bishop, David Colquhoun, Jenny Rohn, and, most recently, this illuminating post from Stuart Cantrill. Yet its very strong influence tenaciously persists and pervades academia. I regularly receive CVs from potential postdocs where they ‘helpfully’ highlight the JIF for each of the papers in their list of publications. Indeed, some go so far as to rank their publications on the basis of the JIF.
  • Given that the majority of research is publicly funded, it is important to ensure that open access publication becomes the norm. This one is arguably rather more contentious and there are clear differences in the appreciation of open access (OA) publishing between disciplines, with the arts and humanities being rather less welcoming of OA than the sciences. Nonetheless, the key importance of OA has laudably been recognized by Research Councils UK (RCUK) and all researchers funded by any of the seven UK research councils are mandated to make their papers available via either a green or gold OA route (with the gold OA route, seen by many as a sop to the publishing industry, often being prohibitively expensive).
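The statistical objection in the first ‘axiom’ is easy to demonstrate with a toy simulation (entirely synthetic numbers, not real citation data): citation counts are heavily right-skewed, so a journal-level mean such as the JIF says very little about any individual paper.

```python
import random

random.seed(0)

# Synthetic, illustrative citation counts: a heavy-tailed (lognormal)
# distribution of the kind journals typically exhibit -- many papers
# with few citations, a handful of blockbusters.
citations = [int(random.lognormvariate(1.0, 1.5)) for _ in range(1000)]

mean = sum(citations) / len(citations)            # a JIF-style average
median = sorted(citations)[len(citations) // 2]   # a "typical" paper

print(f"mean: {mean:.1f}, median: {median}")
```

For a skewed distribution like this the mean sits well above the median, which is precisely why averaging citations across a journal tells you almost nothing about the quality of the paper in front of you.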

With these “axioms” in place, it now seems rather straightforward to make a decision as to the journal(s) our research group should choose as the appropriate forum for our work. We should put aside any consideration of impact factor and aim to select those journals which eschew the traditional for-(large)-profit publishing model and provide cost-effective open access publication, right?

Indeed, we’re particularly fortunate because there’s an exemplar of open access publishing in our research area: The Beilstein Journal of Nanotechnology. Not only are papers in the Beilstein J. Nanotech free to the reader (and easy to locate and download online), but publishing there is free: no exorbitant gold OA costs nor, indeed, any type of charge to the author(s) for publication. (The Beilstein Foundation has very deep pockets and laudably shoulders all of the costs).

But take a look at our list of publications — although we indeed publish in the Beilstein J. Nanotech., the number of our papers appearing there can be counted on the fingers of (less than) one hand. So, while I espouse the principles listed above, I hypocritically don’t practice what I preach. What’s my excuse?

In academia, journal brand is everything. I have sat in many committees, read many CVs, and participated in many discussions where candidates for a postdoctoral position, a fellowship, or other roles at various rungs of the academic career ladder have been compared. And very often, the committee members will say something along the lines of “Well, Candidate X has got much better publications than Candidate Y”…without ever having read the papers of either candidate. The judgment of quality is lazily “outsourced” to the brand-name of the journal. If it’s in a Nature journal, it’s obviously of higher quality than something published in one of those, ahem, “lesser” journals.

If, as principal investigator, I were to advise the PhD students and postdocs in the group here at Nottingham that, in line with the principles above, they should publish all of their work in the Beilstein J. Nanotech., it would be career suicide for them. To hammer this point home, here’s the advice from one referee of a paper we recently submitted:

“I recommend re-submission of the manuscript to the Beilstein Journal of Nanotechnology, where works of similar quality can be found. The work is definitively well below the standards of [Journal Name].”

There is very clearly a well-established hierarchy here. Journal ‘branding’, and, worse, journal impact factor, remain exceptionally important in (falsely) establishing the perceived quality of a piece of research, despite many efforts to counter this perception, including, most notably, DORA. My hypocritical approach to publishing research stems directly from this perception. I know that if I want the researchers in my group to stand a chance of competing with their peers, we have to target “those” journals. The same is true for all the other PIs out there. While we all complain bitterly about the impact factor monkey on our back, we’re locked into the addiction to journal brand.

And it’s very difficult to see how to break the cycle…

We are anonymous. We are legion. We are (mostly) harmful.

This revelation appeared in my Twitter timeline earlier this week:

On the same day, Nature News published a fascinating interview with Brandon Stell, the founder of PubPeer who has revealed his identity to the world:

I’ve waxed lyrical about PubPeer a number of times before, going so far as to say that its post-publication peer review (PPPR) ethos has to be the future of scientific publishing. (I now also try to include mention of PubPeer in every conference presentation/seminar I give). I’ll be gutted if PPPR of the type pioneered by PubPeer does not become de rigueur for the next generation of scientists; our conventional peer review system is, from so many perspectives, archaic and outdated. I agree entirely with Stell’s comments in that interview for Nature News:

Post-publication peer review has the potential to completely change the way that science is conducted. I think PubPeer could help us to move towards an ideal scenario where we can immediately disseminate our findings and set up a different way of evaluating significant research than the current system.

But one major bone of contention that I’ve always had with PubPeer’s approach, and that has been the subject of a couple of amicable ‘tweet-spats’ with the @PubPeer Twitter feed, is the issue of anonymity. I was disappointed that not only was it just Stell who revealed his identity — his two PubPeer co-founders remain anonymous — but that there are plans (or at least aspirations) to “shore up”, as Stell puts it, the anonymity of PubPeer commenters.

I am not a fan of internet anonymity. At all. I understand entirely the arguments regularly made by PubPeer (and many others) in favour of anonymous commenting. In particular, I am intensely aware of the major power imbalance that exists between, for example, a 1st year PhD student commenting on a paper at PubPeer and the world-leading, award-winning, scientifically decorated and oh-so-prestigious scientist whose group carried out the work that is being critiqued/attacked. Similarly, and in common with Peter Coles, I personally know bloggers who write important, challenging, and influential posts while remaining anonymous.

I also fully realise that there are extreme cases when it might not only be career-threatening, but life-threatening for a blogger to reveal their identity. However, those are exactly that: extreme cases. It’s statistically rather improbable that all of those pseudonymously venting their spleen under articles at, say, The Guardian, Telegraph, or, forgive me, Daily Mail website are writing in fear of their life. (Although, and as this wonderful Twitter account highlights so well, many Daily Mail readers certainly feel as if their entire culture, identity, and belief system are under constant attack from the ranked hordes of migrants/PC lefties/benefit claimants/gypsies/BBC executives [delete as appropriate] swamping the country).

I am firmly of the opinion that the advantages of anonymity are far outweighed by the difficulties associated with fostering an online culture where comments and critique — and, at worst, vicious abuse — are posted under cover of a pseudonym. For one thing, there’s the strong possibility of sockpuppets being exploited to distort debate. Julian Stirling, an alumnus of the Nanoscience Group here at Nottingham and now a research fellow at NIST, has described the irritations and frustrations of the sockpuppetry we’ve experienced as part of our critique of a series of papers spanning a decade’s worth of research. In the later stages of this tussle, the line separating sockpuppetry from outright identity theft was crossed. You might suggest, like PubPeer, that this type of behaviour is not the norm. Perhaps. But our experience shows just how bad it can get.

Even in the absence of sockpuppetry, I’ve got to come clean and admit that I’m really not entirely comfortable with communicating with someone online who is not willing to reveal their identity. I’ve been trying to get to grips with just what it is about anonymous/ pseudonymous comments that rankles with me so much, having been involved in quite a number of online ‘debates’ where the vast majority of those commenting have used pseudonyms. When challenged on the use of a pseudonym, and asked for some information about their background, the response is generally aggressively defensive. Their standard rebuttal is to ask why I should care about who they are because isn’t it the strength of the argument, not the identity of the person making the comments, that really matters?

In principle, yes. But the online world is often not very principled.

Ultimately, I think that my deep irritation with pseudonyms stems from two key factors. The first of these is the fundamental lack of fairness due to the ‘asymmetry’ in communication. Dorothy Bishop wrote a characteristically considered, thoughtful, and thought-provoking piece on this issue of communication asymmetry a number of years back (in the context of the debate regarding the burqa ban). The asymmetry that’s established via anonymous commenting means that those who are critiquing an author’s (or a group’s) work are free to comment with impunity; they can say whatever they like in the clear knowledge that there’s a negligible chance of their comments ever being traced back to them.

Stell and his PubPeer co-founders claim that this is actually a key advantage of anonymity — those who comment are not constrained by concerns that they’ll be identified. But if their arguments are sound, and expressed in a polite, if critical, manner, then why the heck should they be concerned?  After all, it’s long been the case that “old school” journals — the APS’ Physical Review family of titles being a notable example — publish critiques of papers that have previously appeared in their pages. Those formal critiques are published with the names of the authors listed for all the world, or at least the readership of the journal, to see.

We should aim to change the culture so that critiquing other scientists’ work is seen as part-and-parcel of the scientific process, i.e. something for which researchers, at any career level, should be proud to take credit. Instead, the ease of commenting online from behind cover of a pseudonym or avatar is encouraging a secretive, and, let’s be honest, a rather grubby, approach to scientific criticism. I was therefore particularly encouraged by this announcement from The Winnower yesterday. It’s a fantastic idea to publish reviews of papers from journal club discussions and it’ll help to move the critique of published science to a rather more open, and thus much healthier, place.

The second, although closely related, aspect of anonymity that winds me up is that it essentially (further) depersonalises online communication. This helps to normalise a culture in which those commenting don’t ever take responsibility for what they say, or, in the worst cases — if we consider online communication in a broader context than just the scientific community — fail to appreciate just how hurtful their abuse might be. I’ve often seen comments along the lines of “It’s all just pixels on a screen. They should toughen up”. These comments are invariably made by those hiding behind a pseudonym.

As ever, xkcd has the perfect riposte…

This splendid poem also makes the point rather well: “Cuts and bruises now have healed, it’s words that I remember.”

Anonymity contributes to a basic lack of online respect and too often can represent a lack of intellectual courage. When we criticise, critique, lambaste, or vilify others online let’s have the courage of our convictions and put our name to our comments.

Peer review in public: Rise of the cyber-bullies?


Originally published at physicsfocus.

A week ago in a news article in Science – and along with my colleagues and collaborators, Julian Stirling and Raphael Levy – I was accused of being a cyber-bully. This, as you might imagine, was not a particularly pleasant accusation to face. Shortly following publication of the piece in Science, one of the most popular and influential science bloggers on the web, Neuroskeptic, wrote an insightful and balanced blog post on what might be best described as the psychology underpinning the accusation. This prompted a flow of tweets from the Twitterati…


As one of the scientists at the eye of the storm, I wanted to take some time to explain in this blog post just how this unfortunate and distressing situation (for all involved) arose because it has very important implications for the future of peer review. I’ll try to do this as dispassionately and magnanimously as possible, but I fully realise that I’m hardly a disinterested party.

The science and the censure

The back-story to the claim of cyber-bullying is lengthy and lively. It spans almost 30 published papers (very many in the top tier of scientific journals – see the list here), repeated refusals to provide raw data and samples to back up those published claims, apathetic journal editors (when it comes to correcting the scientific record),  strong public criticism of the research from a PhD student initially involved in the contested work, years of traditional peer review before a critique could make it into the literature, a bevy of blog posts, a raft of tweets, and, most recently, the heaviest volume of comments on a PubPeer paper to date.

For those of you who have the stamina to follow the entire, exhausting story, Raphael Levy has recently put together a couple of very helpful compendia of blog posts and articles. I’ve given myself the challenge here at physicsfocus of condensing all of that web traffic down into a short(-ish) Q&A to provide a summary of the controversy and to address the questions that crop up repeatedly. As a case study in post-publication peer review (PPPR), there is an awful lot to learn from this controversy.

Q. What scientific results are being challenged?

In 2004, Francesco Stellacci and co-workers published a paper in Nature Materials in which they interpreted scanning tunnelling microscopy (STM) images of nanoparticles covered with two different types of molecule as showing evidence for stripes in the molecular ‘shell’. They followed this paper up with a large number of other well-cited publications which built on the claim of stripes to argue that, for example, the (bio)chemistry and charge transport properties of the particles are strongly affected by the striped morphology.

Q. How has the work been criticised?

In a nutshell, the key criticism is that imaging artefacts have been interpreted as molecular features.

In slightly more detail…

  • The stripes in the images arise from a variety of artefacts due to poor experimental protocols and inappropriate data processing/analysis.
  • The strikingly clear images of stripes seen in the early work are irreproducible (both by Stellacci’s group and their collaborators) when the STM is set up and used correctly.
  • The data are cherry-picked; there is a lack of appropriate control samples; noise has been misinterpreted; and there is a high degree of observer bias throughout.
  • Experimental uncertainties and error bars are estimated and treated incorrectly, from which erroneous conclusions are reached.

That’s still only a potted summary. For all of the gory detail, it’s best to take a look at a paper we submitted to PLOS ONE at the end of last year, and uploaded at the same time to the Condensed Matter arXiv and to PubPeer.

Q. …but that’s just your opinion. You, Levy, and Stirling could be wrong. Indeed, didn’t leading STM groups co-author papers with Francesco Stellacci last year? Don’t their results support the earlier work?

First, I am not for one second suggesting that I don’t get things wrong sometimes. Indeed, we had to retract a paper from Chem. Comm. last year when we found that the data suffered from an error in the calibration of the oscillation amplitude of a scanning probe sensor. Embarrassing and painful, yes, but it had to be done: errare humanum est sed perseverare diabolicum (to err is human, but to persist in error is diabolical).

The bedrock of science is data and evidence, however, not opinion (although, as Neuroskeptic highlighted, the interpretation of data is often not cut-and-dried). It took us many months to acquire (some of) the raw data for the early striped nanoparticle work from the authors, but when it finally arrived, it incontrovertibly showed that STM data in the original work suffered from extreme feedback loop instabilities which are very well-known to produce stripes aligned with the (slow) scan direction. This is exactly what is seen in the image below (from the very first paper on striped nanoparticles):


What is remarkable is that Francesco Stellacci’s work with those leading STM groups last year not only doesn’t support the earlier data/analysis, it clearly shows that images like that above can’t be reproduced when the experiment is done correctly. (Note that I contacted those groups by e-mail more than a week in advance of writing this post. They did not respond.)

But that’s more than enough science for now. The technical aspects of the science aren’t the focus of this post (because they’ve been covered at tedious length previously).

Q. Why do you care? For that matter, why the heck should I care?

I care because the flaws in the striped nanoparticle work mislead other researchers who may not have a background in STM and scanning-probe techniques. I care because funding of clearly flawed work diverts limited resources away from more deserving science. I care because errors in the scientific record should not stand uncorrected – this severely damages confidence in science. (If researchers in the field don’t correct those errors, who will?). And I care because a PhD student in the Stellacci research group was forced into the unfortunate position of having to act as a whistleblower.

If you’re a scientist (or, indeed, a researcher in any field of study), you should care because this case highlights severe deficiencies in the traditional scientific publishing and peer review systems. If you’re not, then you should care because, as a taxpayer, you’re paying for this stuff.

Q. But can’t you see that by repeatedly describing Francesco Stellacci’s work as “clearly flawed” online, he may well have a point about cyber-bullying?

Can I understand why Francesco might feel victimised? Yes. Can I empathise with him? Yes, to an extent. As a fellow scientist, I can entirely appreciate that our work tends to be a major component of our self-identity and, as Neuroskeptic explains, a challenge to our research can feel like a direct criticism of ourselves.

But as I said in response to the Science article, to describe criticism of publicly-funded research results published in the public domain as cyber-bullying is an insult to those who have had to endure true cyber-bullying. If public criticism of publicly-funded science is going to be labelled as cyber-bullying, then where do we draw the line? Should we get rid of Q&A sessions at scientific conferences? Should we have a moratorium on press releases and press conferences in case the work is challenged? Should scientists forgo social media entirely?

Q. Don’t you, Levy, and Stirling have better things to do with your time? Aren’t you just a little, ahem, obsessive about this?

Yes, we all have other things to do with our time. Julian recently submitted his thesis, had his viva voce examination, passed with flying colours, and is off to NIST in March to take up a postdoctoral position. Raphael was recently promoted and is ‘enjoying’ the additional work-load associated with his step up the career ladder. And I certainly could find other things to do.

I can only speak for myself here. I’ve already listed above a number of the many reasons why I care about this striped nanoparticle issue. If the work was restricted to one paper in an obscure journal that no-one had read then I might be rather less exercised. And I certainly don’t make a habit of critiquing other groups’ work in such forensic detail. (Nor have I got a particular axe to grind with Francesco – I have never met the man and am certainly not pursuing this in order to “tarnish his reputation”.)

But the striped nanoparticle ‘oeuvre’ is riddled with basic errors in STM imaging and analysis – errors that I wouldn’t expect to find in an undergraduate project report, let alone in Nature Publishing Group and American Chemical Society journals. This is why we won’t shut up about it! That this research has been published time and time again when there are gaping holes in the methodology, the data, and the analyses is a shocking indictment of the traditional peer review system.

Q. But then surely the best way to deal with this is through the journals, rather than scrapping it out online?

Raphael Levy spent more than three years getting a critique of the striped nanoparticle data into print before he started to blog about it. I’ve seen the exchange of e-mails with the editors for just one of the journals to which he submitted the critique – all told, it runs to thirty pages (over ninety e-mails) over three years. While this was going on, other papers based on the same flawed data acquisition and analysis processes were regularly being published by Francesco and co-workers. There is no question that traditional peer review and the associated editorial processes failed very badly in this case.

But is PPPR via sites such as PubPeer the way forward? I have previously written about the importance of PPPR (in this article for Times Higher Education), and some of my heroes have similarly sung the praises of online peer review. I remain of the opinion that PPPR will continue to evolve such that it will be de rigueur for the next generation of scientists. However, the protracted and needlessly tortuous discussion of our paper over at PubPeer has made me realise that there’s an awful lot of important work left to do before we can credibly embed post-publication peer review in the scientific process.

Although PubPeer is an extremely important – indeed, I’d go so far as to say essential and inevitable – contribution to the evolution of the peer review system, the approach as it stands has its flaws. Moderation of comments is key, otherwise the discussion can rapidly descend into a series of ad hominem slurs (as we’re seeing in the comments thread for our paper). But even if those ad hominems are sifted out by a moderator, those with a vested interest in supporting a flawed piece of work – or, indeed, those who may want to attack a sound paper for reasons which may not be entirely scientific – can adopt a rather more subtle approach, as Peer 7 points out in response to a vociferous proponent of Stellacci et al’s work:

“You are using a tactic[al] which is well known by online activists which consists of repeating again and again the same series of arguments. By doing so you discourage the reasonable debaters who do not have the time/energy to answer these same arguments every day. In the same time, you instil doubt in less knowledgeable people’s mind who could think that, considering the number of your claims, some might be at least partly true.”

Moderation to identify this type of ‘filibustering’ will not come cheap and it will not be easy – there will always be the issue of finding truly disinterested parties to act as moderators. A colleague (not at Nottingham, nor in the UK) who wishes to remain anonymous – the issue of online anonymity is certainly vexed – and who has been avidly following the striped nanoparticle debate at PubPeer, put it like this in an e-mail to me:

The way this thing is panning out makes me actually more convinced that a blog is not a proper format for holding scientific debates. It might work to expose factually proven fraud. The peer-reviewed, one-argument-at-a-time format does one fundamental thing for the sanity of the conversation which is that it “truncates” it. It serves the same purpose of the clock on politicians’ debates. And protects, at least to an extent the debater from Gish gallop[s]… and the simple denial techniques. Just because you cannot just say that somebody is wrong on a paper and get away with it. At least it is harder than on a blog

As I said in that Times Higher article, much of the infrastructure to enable well-moderated online commentary is in principle already in place for the traditional journal system. We need to be careful not to throw the baby out with the bathwater in our efforts to fix the peer review system: PPPR should be facilitated by the journals – in, of course, as open a fashion as possible – and embedded in their processes instead of existing in a parallel online universe. When it takes more than three years to get criticism of flawed research through traditional peer review channels, the journal system has to change.


P.S. The image we wanted to use for this post was this, which, as the Whovians amongst you will realise, would have rather neatly tied in with the title. The BBC refused permission to use the image. If they’re going to be like that, they’re not getting their Tardis back.

Image: Scientists online want your clothes, your boots and your motorcycle. Or maybe just to correct the scientific record. Credit: DarkGeometryStudios/Shutterstock

Not everything that counts can be counted


First published at physicsfocus.

My first post for physicsfocus described a number of frustrating deficiencies in the peer review system, focusing in particular on how we can ensure, via post-publication peer review, that science does not lose its ability to self-correct. I continue to rant about (ahem, discuss and dissect) the issue of post-publication peer review in an article in this week’s Times Higher Education, “Spuriouser and Spuriouser”. Here, however, I want to address some of the comments left under that first physicsfocus post by a Senior Editor at Nature Materials, Pep Pamies (Curious Scientist in the comments thread). I was really pleased that a journal editor contributed to the debate but, as you might be less than surprised to hear, I disagree fundamentally with Pep’s argument that impact factors are a useful metric. As I see it, they’re not even a necessary evil.

I’m certainly not alone in thinking this. In an eloquent cri de coeur posted at his blog, Reciprocal Space, last summer, Stephen Curry bluntly stated, “I am sick of impact factors. And so is science”. I won’t rehearse Stephen’s arguments – I strongly recommend that you visit his blog and read the post for yourself, along with the close to two-hundred comments that it attracted – but it’s clear from the Twitter and blog storm his post generated that he had tapped into a deep well of frustration among academics. (Peter Coles’ related post, The Impact X-Factor, is also very well worth a read.)

I agree with Stephen on almost everything in his post. I think that many scientists will chuckle knowingly at the description of the application of impact factors as “statistically illiterate” and I particularly liked the idea of starting a ‘smear campaign’ to discredit the entire concept. But he argues that the way forward is:

“…to find ways to attach to each piece of work the value that the scientific community places on it through use and citation. The rate of accrual of citations remains rather sluggish, even in today’s wired world, so attempts are being made to capture the internet buzz that greets each new publication; there are interesting innovations in this regard from the likes of PLOS, Mendeley and…”

As is clear from the THE article, embedding Web 2.0/Web 3.0/Web n.0 feedback and debate in the peer review process is something I fully endorse and, indeed, I think that we should grasp the nettle and attempt to formalise the links between online commentary and the primary scientific literature as soon as possible. But are citations – be they through the primary literature or via an internet ‘buzz’ – really a proxy for scientific quality and the overall value of the work?

I think that we do science a great disservice if we argue that the value of a paper depends only on how often other scientists refer to it, or cite it in their work. Let me offer an example from my own field of research, condensed matter physics – aka nanoscience when I’m applying for funding – to highlight the problem.

Banging a quantum drum

Perhaps my favourite paper of the last decade or so is “Quantum Phase Extraction in Isospectral Electronic Nanostructures” by Hari Manoharan and his co-workers at Stanford. The less than punchy title doesn’t quite capture the elegance, beauty, and sheer brilliance of the work. Manoharan’s group exploited the answer to a question posed by the mathematician Mark Kac close to fifty years ago: Can one hear the shape of a drum? Or, if we ask the question in rather more concrete mathematical physics terms, “Does the spectrum of eigenfrequencies of a resonator uniquely determine its geometry?”

For a one-dimensional system the equivalent question is not too difficult and can readily be answered by guitarists and A-level physics students: yes, one can ‘hear’ the shape, i.e. the length, of a vibrating string. But for a two-dimensional system like a drum head, the answer is far from obvious. It took until 1992 before Kac’s question was finally answered by Carolyn Gordon, David Webb, and Scott Wolpert. They discovered that it was possible to have 2D isospectral domains, i.e. 2D shapes (or “drum heads”) with the same “sound”. So, no, it’s not possible to hear the shape of a drum.
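For concreteness, Kac’s question can be stated as follows (my own gloss on the standard formulation, not taken from any of the papers discussed here):

```latex
% A drum head is a bounded planar domain \Omega with a clamped edge.
% Its "sound" is the Dirichlet spectrum \{\lambda_n\} of the Laplacian:
\nabla^2 \psi_n + \lambda_n \psi_n = 0 \quad \text{in } \Omega,
\qquad \psi_n = 0 \quad \text{on } \partial\Omega .
% Kac asked: does \{\lambda_n\} determine \Omega up to congruence?
% In 1D the answer is yes -- a string of length L and wave speed v has
f_n = \frac{n v}{2L} \quad \Longrightarrow \quad L = \frac{v}{2 f_1},
% so the spectrum fixes the "shape". Gordon, Webb, and Wolpert (1992)
% exhibited non-congruent 2D domains with identical spectra: the answer is no.
```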

What’s this got to do with nanoscience? Well, the first elegant aspect of the paper by the Stanford group is that they constructed two-dimensional isospectral domains out of carbon monoxide molecules on a copper surface (using the tip of a scanning tunnelling microscope). In other words, they built differently shaped nanoscopic ‘drum heads’, one molecule at a time. They then “listened” to the eigenspectra of these quantum drums by measuring the resonances of the electrons confined within the molecular drum head and transposing the spectrum to audible frequencies.

So far, so impressive

But it gets better. A lot better.

The Stanford team then went on to exploit the isospectral characteristics of the differently shaped quantum drum heads to extract the quantum mechanical phase of the electronic wavefunction confined within. I could wax lyrical about this particular aspect of the work for quite some time – remember that the phase of a wavefunction is not an observable in quantum mechanics! – but I encourage you to read the paper itself. (It’s available via this link, but you, or your institution, will need a subscription to Science.)

I’ll say it again – this is elegant, beautiful, and brilliant work. For me, at least, it has a visceral quality, just like a piece of great music, literature, or art; it’s inspiring and affecting.

…and it’s picked up a grand total of 29 citations since its publication in 2008.

In the same year, and along with colleagues in Nottingham and Loughborough, I co-authored a paper published in Physical Review Letters on pattern formation in nanoparticle assemblies. To date, that paper has accrued 47 citations. While I am very proud of the work, I am confident that my co-authors would agree with me when I say that it doesn’t begin to compare to the quality of the quantum drum research. Our paper lacks the elegance and scientific “wow” factor of the Stanford team’s publication; it lacks the intellectual excitement of coupling a fundamental problem (and solution) in pure mathematics with state-of-the-art nanoscience; and it lacks the sophistication of the combined experimental and theoretical methodology.

And yet our paper has accrued more citations.

You might argue that I have cherry-picked a particular example to make my case. I really wish that were so, but I can point to many, many other exciting scientific papers in a variety of journals which have attracted strikingly few citations.

Einstein is credited, probably apocryphally, with the statement “Not everything that counts can be counted, and not everything that can be counted counts”. Just as multi-platinum album sales and Number 1 hits are not a reliable indicator of artistic value (note that One Direction has apparently now outsold The Beatles), citations and associated bibliometrics are not a robust measure of scientific quality.


Are flaws in peer review someone else’s problem?


That stack of fellowship applications piled up on the coffee table isn’t going to review itself. You’ve got twenty-five to read before the rapidly approaching deadline, and you knew before you accepted the reviewing job that many of the proposals would fall outside your area of expertise. Sigh. Time to grab a coffee and get on with it.

As a professor of physics with some thirty-five years’ experience in condensed matter research, you’re fairly confident that you can make insightful and perceptive comments on that application about manipulating electron spin in nanostructures (from that talented postdoc you met at a conference last year). But what about the proposal on membrane proteins? Or, worse, the treatment of arcane aspects of string theory by the mathematician claiming a radical new approach to supersymmetry? Can you really comment on those applications with any type of authority?

Of course, thanks to Thomson Reuters there’s no need for you to be too concerned about your lack of expertise in those fields. You log on to Web of Knowledge and check the publication records. Hmmm. The membrane protein work has made quite an impact – the applicant’s Science paper from a couple of years back has already picked up a few hundred citations and her h-index is rising rapidly. She looks to be a real ‘star’ in her community. The string theorist is also blazing a trail.

Shame about the guy doing the electron spin stuff. You’d been very excited about that work when you attended his excellent talk at the conference in the U.S. but it’s picked up hardly any citations at all. Can you really rank it alongside the membrane protein proposal? After all, how could you justify that decision on any sort of objective basis to the other members of the interdisciplinary panel…?

Bibliometrics are the bane of academics’ lives. We regularly moan about the rate at which metrics such as the journal impact factor and the notorious h-index are increasing their stranglehold on the assessment of research. And, yet, as the hypothetical example above shows, we can be our own worst enemy in reaching for citation statistics to assess work outside – or even firmly inside – our ‘comfort zone’ of expertise.
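Part of the seduction of the h-index is precisely how easy it is to compute: the largest h such that at least h of a researcher’s papers have h or more citations each. A minimal sketch, with entirely hypothetical citation counts, shows how bluntly it compresses a publication record:

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers have
    at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical records, to illustrate the metric's bluntness:
print(h_index([290, 180, 45, 12, 3]))  # 4: a few heavily cited papers
print(h_index([5, 4, 3, 3, 2, 1]))     # 3: a very different record, similar score
print(h_index([500, 2, 1]))            # 2: one landmark paper barely registers
```

The last case is the point: a single quantum-drum-calibre paper moves the needle far less than a steady stream of routinely cited work.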

David Colquhoun, a world-leading pharmacologist at University College London and a blogger of quite some repute, has repeatedly pointed out the dangers of lazily relying on citation analyses to assess research and researchers. One article in particular, How to get good science, is a searingly honest account of the correlation (or lack thereof) between citations and the relative importance of a number of his, and others’, papers. It should be required reading for all those involved in research assessment at universities, research councils, funding bodies, and government departments – particularly those who are of the opinion that bibliometrics represent an appropriate method of ranking the ‘outputs’ of scientists.

Colquhoun, in refreshingly ‘robust’ language, puts it as follows:

“All this shows what is obvious to everyone but bone-headed bean counters. The only way to assess the merit of a paper is to ask a selection of experts in the field.

“Nothing else works. Nothing.”


An ongoing controversy in my area of research, nanoscience, has thrown Colquhoun’s statement into sharp relief. The controversial work in question represents a particularly compelling example of the fallacy of citation statistics as a measure of research quality. It has also provided worrying insights into scientific publishing, and has severely damaged my confidence in the peer review system.

The minutiae of the case in question are covered in great detail at Raphael Levy’s blog so I won’t rehash the detailed arguments here. In a nutshell, the problem is as follows. The authors of a series of papers in the highest profile journals in science – including Science and the Nature Publishing Group family – have claimed that stripes form on the surfaces of nanoparticles due to phase separation of different ligand types. The only direct evidence for the formation of those stripes comes from scanning probe microscopy (SPM) data. (SPM forms the bedrock of our research in the Nanoscience group at the University of Nottingham, hence my keen interest in this particular story.)

But those SPM data display features which appear remarkably similar to well-known instrumental artifacts, and the associated data analyses appear less than rigorous at best. In my experience the work would be poorly graded even as an undergraduate project report, yet it’s been published in what are generally considered to be the most important journals in science. (And let’s be clear – those journals indeed have an impressive track record of publishing exciting and pioneering breakthroughs in science.)

So what? Isn’t this just a storm in a teacup about some arcane aspect of nanoscience? Why should we care? Won’t the problem be rooted out when others fail to reproduce the work? After all, isn’t science self-correcting in the end?

Good points. Bear with me – I’ll consider those questions in a second. Take a moment, however, to return to the academic sitting at home with that pile of proposals to review. Let’s say that she had a fellowship application related to the striped nanoparticle work to rank amongst the others. A cursory glance at the citation statistics at Web of Knowledge would indicate that this work has had a major impact over a very short period. Ipso facto, it must be of high quality.

And yet, if an expert – or, in this particular case, even a relative SPM novice – were to take a couple of minutes to read one of the ‘stripy nanoparticle’ papers, they’d be far from convinced by the conclusions reached by the authors. What was it that Colquhoun said again? “The only way to assess the merit of a paper is to ask a selection of experts in the field. Nothing else works. Nothing.”

In principle, science is indeed self-correcting. But if there are flaws in published work who fixes them? Perhaps the most troublesome aspect of the striped nanoparticle controversy was highlighted by a comment left by Mathias Brust, a pioneer in the field of nanoparticle research, under an article in the Times Higher Education:

“I have [talked to senior experts about this controversy] … and let me tell you what they have told me. About 80% of senior gold nanoparticle scientists don’t give much of a damn about the stripes and find it unwise that Levy engages in such a potentially career damaging dispute. About 10% think that … fellow scientists should be friendlier to each other. After all, you never know [who] referees your next paper. About 5% welcome this dispute, needless to say predominantly those who feel critical about the stripes. This now includes me. I was initially with the first 80% and did advise Raphael accordingly.”

[Disclaimer: I know Mathias Brust very well and have collaborated, and co-authored papers, with him in the past].

I am well aware that the plural of anecdote is not data, but Brust’s comment resonates strongly with me. I have heard very similar arguments at times from colleagues in physics. The most troubling of all is the idea that critiquing published work is somehow at best unseemly and, at worst, career-damaging. Has science really come to this?

Douglas Adams, in an inspired passage in Life, the Universe and Everything, takes the psychological concept known as “someone else’s problem (SEP)” and uses it as the basis of an invisibility ‘cloak’ in the form of an SEP-field. (Thanks to Dave Fernig, a fellow fan of Douglas Adams, for reminding me about the Someone Else’s Problem field.) As Adams puts it, instead of attempting the mind-bogglingly complex task of actually making something invisible, an SEP is much easier to implement. “An SEP is something we can’t see, or don’t see, or our brain doesn’t let us see, because we think that it’s somebody else’s problem…. The brain just edits it out, it’s like a blind spot”.

The 80% of researchers to which Brust refers are apparently of the opinion that flaws in the literature are someone else’s problem. We have enough to be getting on with in terms of our own original research, without repeating measurements that have already been published in the highest quality journals, right?

Wrong. This is not someone else’s problem. This is our problem and we need to address it.
