Politics. Perception. Philosophy. And Physics.

Today is the start of the new academic year at the University of Nottingham (UoN) and, as ever, it crept up on me and then leapt out with a fulsome “Gotcha”. Summer flies by so very quickly. I’ll be meeting my new 1st year tutees this afternoon to sort out when we’re going to have tutorials and, of course, to get to know them. One of the great things about the academic life is watching tutees progress over the course of their degree from that first “getting to know each other” meeting to when they graduate.

The UoN has introduced a considerable number of changes to the “student experience” of late via its Project Transform process. I’ve vented my spleen about this previously but it’s a subject to which I’ll be returning in the coming weeks because Transform says an awful lot about the state of modern universities.

For now, I’m preparing for a module entitled “The Politics, Perception and Philosophy of Physics” (F34PPP) that I run in the autumn semester. This is a somewhat untraditional physics module because, for one thing, it’s almost entirely devoid of mathematics. I thoroughly enjoy F34PPP each year (despite this amathematical heresy) because of the engagement and enthusiasm of the students. The module is very much based on their contributions — I am more of a mediator than a lecturer.

STEM students are sometimes criticised (usually by Simon Jenkins) for having poorly developed communication skills. This is an especially irritating stereotype in the context of the PPP module, where I have been deeply impressed by the quality of the writing the students submit. As I discuss in the video below (an overview of the module), I’m not alone in recognising this: articles submitted as F34PPP coursework have been published in Physics World, the flagship magazine of the Institute of Physics.

[Video: an overview of the F34PPP module]

In the video I note that my intention is to upload a weekly video for each session of the module. I’m going to do my utmost to keep this promise and, moreover, to accompany each of those videos with a short(ish) blog post. (But, to cover my back, I’ll just note in advance that the best laid schemes gang aft agley…)

Addicted to the brand: The hypocrisy of a publishing academic

Back in December I gave a talk at the Power, Acceleration and Metrics in Academic Life conference in Prague, which was organised by Filip Vostal and Mark Carrigan. The LSE Impact blog is publishing a series of posts from those of us who spoke at the conference. They uploaded my post this morning. Here it is…



I’m going to put this as bluntly as I can; it’s been niggling and nagging at me for quite a while and it’s about time I got it off my chest. When it comes to publishing research, I have to come clean: I’m a hypocrite. I spend quite some time railing about the deficiencies in the traditional publishing system, and all the while I’m bolstering that self-same system by my selection of the “appropriate” journals to target.

Despite bemoaning the statistical illiteracy of academia’s reliance on nonsensical metrics like impact factors, and despite regularly venting my spleen during talks at conferences about the too-slow evolution of academic publishing towards a more open and honest system, I nonetheless continue to contribute to the problem. (And I take little comfort in knowing that I’m not alone in this.)
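
(A brief aside on what I mean by statistical illiteracy: a journal impact factor is, in essence, a mean citation rate, and citation distributions are so heavily skewed that the mean says next to nothing about a typical paper in the journal. The following is a minimal sketch with entirely invented citation counts, purely for illustration.)

```python
# A toy illustration (invented numbers, not real journal data) of why a
# JIF-style average is a poor guide to any individual paper: citation
# distributions are heavily skewed, so a few highly cited papers drag the
# mean far above what a typical paper achieves.
from statistics import mean, median

citations = [0, 0, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 12, 40, 250]

print(f"Mean ('impact factor'-style average): {mean(citations):.1f}")  # ~22.5
print(f"Median (a more typical paper):        {median(citations)}")    # 3
```

A handful of very highly cited papers do almost all of the work; the “average” paper looks nothing like the average.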

One of those spleen-venting conferences was a fascinating and important event held in Prague back in December, organised by Filip Vostal and Mark Carrigan: “Power, Acceleration, and Metrics in Academic Life”. My presentation, The Power, Perils and Pitfalls of Peer Review in Public – please excuse the Partridgian overkill on the alliteration – largely focused on the question of post-publication peer review (PPPR) via online channels such as PubPeer. I’ve written at length, however, on PPPR previously (here, here, and here), so I’m not going to rehearse and rehash those arguments. I instead want to explain just why I levelled the accusation of hypocrisy and why I am far from confident that we’ll see a meaningful revolution in academic publishing any time soon.

Let’s start with a couple of ‘axioms’/principles that, while perhaps not being entirely self-evident in each case, could at least be said to represent some sort of consensus among academics:

  • A journal’s impact factor (JIF) is clearly not a good indicator of the quality of a paper published in that journal. The JIF has been skewered many, many times with some of the more memorable and important critiques coming from Stephen Curry, Dorothy Bishop, David Colquhoun, Jenny Rohn, and, most recently, this illuminating post from Stuart Cantrill. Yet its very strong influence tenaciously persists and pervades academia. I regularly receive CVs from potential postdocs where they ‘helpfully’ highlight the JIF for each of the papers in their list of publications. Indeed, some go so far as to rank their publications on the basis of the JIF.
  • Given that the majority of research is publicly funded, it is important to ensure that open access publication becomes the norm. This one is arguably rather more contentious and there are clear differences in the appreciation of open access (OA) publishing between disciplines, with the arts and humanities being rather less welcoming of OA than the sciences. Nonetheless, the key importance of OA has laudably been recognised by Research Councils UK (RCUK), and all researchers funded by any of the seven UK research councils are mandated to make their papers available via either a green or gold OA route (with the gold OA route, seen by many as a sop to the publishing industry, often being prohibitively expensive).

With these two “axioms” in place, it now seems rather straightforward to make a decision as to the journal(s) our research group should choose as the appropriate forum for our work. We should put aside any consideration of impact factor and aim to select those journals which eschew the traditional for-(large)-profit publishing model and provide cost-effective open access publication, right?

Indeed, we’re particularly fortunate because there’s an exemplar of open access publishing in our research area: The Beilstein Journal of Nanotechnology. Not only are papers in the Beilstein J. Nanotech. free to the reader (and easy to locate and download online), but publishing there is free: no exorbitant gold OA costs nor, indeed, any type of charge to the author(s) for publication. (The Beilstein Foundation has very deep pockets and laudably shoulders all of the costs.)

But take a look at our list of publications — although we indeed publish in the Beilstein J. Nanotech., the number of our papers appearing there can be counted on the fingers of (less than) one hand. So, while I espouse the two principles listed above, I hypocritically don’t practise what I preach. What’s my excuse?

In academia, journal brand is everything. I have sat in many committees, read many CVs, and participated in many discussions where candidates for a postdoctoral position, a fellowship, or other roles at various rungs of the academic career ladder have been compared. And very often, the committee members will say something along the lines of “Well, Candidate X has got much better publications than Candidate Y”…without ever having read the papers of either candidate. The judgment of quality is lazily “outsourced” to the brand-name of the journal. If it’s in a Nature journal, it’s obviously of higher quality than something published in one of those, ahem, “lesser” journals.

If, as principal investigator, I were to advise the PhD students and postdocs in the group here at Nottingham that, in line with the two principles above, they should publish all of their work in the Beilstein J. Nanotech., it would be career suicide for them. To hammer this point home, here’s the advice from one referee of a paper we recently submitted:

“I recommend re-submission of the manuscript to the Beilstein Journal of Nanotechnology, where works of similar quality can be found. The work is definitively well below the standards of [Journal Name].”

There is very clearly a well-established hierarchy here. Journal ‘branding’, and, worse, journal impact factor, remain exceptionally important in (falsely) establishing the perceived quality of a piece of research, despite many efforts to counter this perception, including, most notably, DORA. My hypocritical approach to publishing research stems directly from this perception. I know that if I want the researchers in my group to stand a chance of competing with their peers, we have to target “those” journals. The same is true for all the other PIs out there. While we all complain bitterly about the impact factor monkey on our back, we’re locked into the addiction to journal brand.

And it’s very difficult to see how to break the cycle…

Are flaws in peer review someone else’s problem?

[Image: a pile of papers]

That stack of fellowship applications piled up on the coffee table isn’t going to review itself. You’ve got twenty-five to read before the rapidly approaching deadline, and you knew before you accepted the reviewing job that many of the proposals would fall outside your area of expertise. Sigh. Time to grab a coffee and get on with it.

As a professor of physics with some thirty-five years’ experience in condensed matter research, you’re fairly confident that you can make insightful and perceptive comments on that application about manipulating electron spin in nanostructures (from that talented postdoc you met at a conference last year). But what about the proposal on membrane proteins? Or, worse, the treatment of arcane aspects of string theory by the mathematician claiming a radical new approach to supersymmetry? Can you really comment on those applications with any type of authority?

Of course, thanks to Thomson Reuters there’s no need for you to be too concerned about your lack of expertise in those fields. You log on to Web of Knowledge and check the publication records. Hmmm. The membrane protein work has made quite an impact – the applicant’s Science paper from a couple of years back has already picked up a few hundred citations and her h-index is rising rapidly. She looks to be a real ‘star’ in her community. The string theorist is also blazing a trail.

Shame about the guy doing the electron spin stuff. You’d been very excited about that work when you attended his excellent talk at the conference in the U.S. but it’s picked up hardly any citations at all. Can you really rank it alongside the membrane protein proposal? After all, how could you justify that decision on any sort of objective basis to the other members of the interdisciplinary panel…?

Bibliometrics are the bane of academics’ lives. We regularly moan about the rate at which metrics such as the journal impact factor and the notorious h-index are increasing their stranglehold on the assessment of research. And, yet, as the hypothetical example above shows, we can be our own worst enemy in reaching for citation statistics to assess work outside – or even firmly inside – our ‘comfort zone’ of expertise.
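
(For anyone who hasn’t met the h-index head-on: a researcher has an h-index of h if h of their papers have each been cited at least h times. Here’s a minimal sketch of the calculation, with entirely made-up citation counts, just to show how crude a compression of a publication record it is.)

```python
# Minimal sketch of the h-index (toy citation counts, not any real researcher's
# record): h is the largest number such that h papers have at least h citations each.
def h_index(citations):
    ranked = sorted(citations, reverse=True)       # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:                          # this paper still "counts"
            h = rank
        else:
            break
    return h

print(h_index([520, 84, 31, 12, 9, 6, 4, 2, 1, 0]))  # -> 6
```

Swap that 520-citation paper for one with just seven citations and h doesn’t budge: an entire body of work is flattened into a single integer that is blind to precisely the distinctions an expert reader would care about.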

David Colquhoun, a world-leading pharmacologist at University College London and a blogger of quite some repute, has repeatedly pointed out the dangers of lazily relying on citation analyses to assess research and researchers. One article in particular, How to get good science, is a searingly honest account of the correlation (or lack thereof) between citations and the relative importance of a number of his, and others’, papers. It should be required reading for all those involved in research assessment at universities, research councils, funding bodies, and government departments – particularly those who are of the opinion that bibliometrics represent an appropriate method of ranking the ‘outputs’ of scientists.

Colquhoun, in refreshingly ‘robust’ language, puts it as follows:

“All this shows what is obvious to everyone but bone-headed bean counters. The only way to assess the merit of a paper is to ask a selection of experts in the field.

“Nothing else works.

“Nothing.”

An ongoing controversy in my area of research, nanoscience, has thrown Colquhoun’s statement into sharp relief. The controversial work in question represents a particularly compelling example of the fallacy of citation statistics as a measure of research quality. It has also provided worrying insights into scientific publishing, and has severely damaged my confidence in the peer review system.

The minutiae of the case in question are covered in great detail at Raphael Levy’s blog so I won’t rehash the detailed arguments here. In a nutshell, the problem is as follows. The authors of a series of papers in the highest profile journals in science – including Science and the Nature Publishing Group family – have claimed that stripes form on the surfaces of nanoparticles due to phase separation of different ligand types. The only direct evidence for the formation of those stripes comes from scanning probe microscopy (SPM) data. (SPM forms the bedrock of our research in the Nanoscience group at the University of Nottingham, hence my keen interest in this particular story.)

But those SPM data display features which appear remarkably similar to well known instrumental artifacts, and the associated data analyses appear less than rigorous at best. In my experience the work would be poorly graded even as an undergraduate project report, yet it’s been published in what are generally considered to be the most important journals in science. (And let’s be clear – those journals indeed have an impressive track record of publishing exciting and pioneering breakthroughs in science.)

So what? Isn’t this just a storm in a teacup about some arcane aspect of nanoscience? Why should we care? Won’t the problem be rooted out when others fail to reproduce the work? After all, isn’t science self-correcting in the end?

Good points. Bear with me – I’ll consider those questions in a second. Take a moment, however, to return to the academic sitting at home with that pile of proposals to review. Let’s say that she had a fellowship application related to the striped nanoparticle work to rank amongst the others. A cursory glance at the citation statistics at Web of Knowledge would indicate that this work has had a major impact over a very short period. Ipso facto, it must be of high quality.

And yet, if an expert – or, in this particular case, even a relative SPM novice – were to take a couple of minutes to read one of the ‘stripy nanoparticle’ papers, they’d be far from convinced by the conclusions reached by the authors. What was it that Colquhoun said again? “The only way to assess the merit of a paper is to ask a selection of experts in the field. Nothing else works. Nothing.”

In principle, science is indeed self-correcting. But if there are flaws in published work, who fixes them? Perhaps the most troublesome aspect of the striped nanoparticle controversy was highlighted by a comment left by Mathias Brust, a pioneer in the field of nanoparticle research, under an article in the Times Higher Education:

“I have [talked to senior experts about this controversy] … and let me tell you what they have told me. About 80% of senior gold nanoparticle scientists don’t give much of a damn about the stripes and find it unwise that Levy engages in such a potentially career-damaging dispute. About 10% think that … fellow scientists should be friendlier to each other. After all, you never know [who] referees your next paper. About 5% welcome this dispute, needless to say predominantly those who feel critical about the stripes. This now includes me. I was initially with the first 80% and did advise Raphael accordingly.”

[Disclaimer: I know Mathias Brust very well and have collaborated, and co-authored papers, with him in the past.]

I am well aware that the plural of anecdote is not data, but Brust’s comment resonates strongly with me. I have heard very similar arguments at times from colleagues in physics. The most troubling of all is the idea that critiquing published work is somehow at best unseemly and, at worst, career-damaging. Has science really come to this?

Douglas Adams, in an inspired passage in Life, the Universe and Everything, takes the psychological concept known as “someone else’s problem (SEP)” and uses it as the basis of an invisibility ‘cloak’ in the form of an SEP-field. (Thanks to Dave Fernig, a fellow fan of Douglas Adams, for reminding me about the Someone Else’s Problem field.) As Adams puts it, instead of attempting the mind-bogglingly complex task of actually making something invisible, an SEP is much easier to implement. “An SEP is something we can’t see, or don’t see, or our brain doesn’t let us see, because we think that it’s somebody else’s problem…. The brain just edits it out, it’s like a blind spot”.

The 80% of researchers to whom Brust refers are apparently of the opinion that flaws in the literature are someone else’s problem. We have enough to be getting on with in terms of our own original research, without repeating measurements that have already been published in the highest-quality journals, right?

Wrong. This is not someone else’s problem. This is our problem and we need to address it.

Image: Paper pile. Credit: https://pixnio.com/it/oggetti/libri/libri-documento-educazione-informazioni-conoscenza-leggere-ricerca-scuola-stack-studio-lavoro