Addicted to the brand: The hypocrisy of a publishing academic

Back in December I gave a talk at the Power, Acceleration and Metrics in Academic Life conference in Prague, which was organised by Filip Vostal and Mark Carrigan. The LSE Impact blog is publishing a series of posts from those of us who spoke at the conference. They uploaded my post this morning. Here it is…


I’m going to put this as bluntly as I can; it’s been niggling and nagging at me for quite a while and it’s about time I got it off my chest. When it comes to publishing research, I have to come clean: I’m a hypocrite. I spend quite some time railing about the deficiencies in the traditional publishing system, and all the while I’m bolstering that self-same system by my selection of the “appropriate” journals to target.

Despite bemoaning the statistical illiteracy of academia’s reliance on nonsensical metrics like impact factors, and despite regularly venting my spleen during talks at conferences about the too-slow evolution of academic publishing towards a more open and honest system, I nonetheless continue to contribute to the problem. (And I take little comfort in knowing that I’m not alone in this.)

One of those spleen-venting conferences was a fascinating and important event held in Prague back in December, organised by Filip Vostal and Mark Carrigan: “Power, Acceleration, and Metrics in Academic Life”. My presentation, The Power, Perils and Pitfalls of Peer Review in Public – please excuse the Partridgian overkill on the alliteration – largely focused on the question of post-publication peer review (PPPR) via online channels such as PubPeer. I’ve written at length on PPPR previously (here, here, and here), however, so I’m not going to rehearse and rehash those arguments. Instead, I want to explain just why I levelled the accusation of hypocrisy and why I am far from confident that we’ll see a meaningful revolution in academic publishing any time soon.

Let’s start with a couple of ‘axioms’, principles that, while perhaps not entirely self-evident in each case, could at least be said to represent some sort of consensus among academics:

  • A journal’s impact factor (JIF) is clearly not a good indicator of the quality of a paper published in that journal. The JIF has been skewered many, many times with some of the more memorable and important critiques coming from Stephen Curry, Dorothy Bishop, David Colquhoun, Jenny Rohn, and, most recently, this illuminating post from Stuart Cantrill. Yet its very strong influence tenaciously persists and pervades academia. I regularly receive CVs from potential postdocs where they ‘helpfully’ highlight the JIF for each of the papers in their list of publications. Indeed, some go so far as to rank their publications on the basis of the JIF.
  • Given that the majority of research is publicly funded, it is important to ensure that open access publication becomes the norm. This one is arguably rather more contentious, and there are clear differences in the appreciation of open access (OA) publishing between disciplines, with the arts and humanities being rather less welcoming of OA than the sciences. Nonetheless, the key importance of OA has laudably been recognized by Research Councils UK (RCUK), and all researchers funded by any of the seven UK research councils are mandated to make their papers available via either a green or gold OA route (with the gold OA route, seen by many as a sop to the publishing industry, often being prohibitively expensive).

With these two “axioms” in place, it now seems rather straightforward to decide which journal(s) our research group should choose as the appropriate forum for our work. We should put aside any consideration of impact factor and aim to select those journals which eschew the traditional for-(large)-profit publishing model and provide cost-effective open access publication, right?

Indeed, we’re particularly fortunate because there’s an exemplar of open access publishing in our research area: The Beilstein Journal of Nanotechnology. Not only are papers in the Beilstein J. Nanotech. free to the reader (and easy to locate and download online), but publishing there is also free: no exorbitant gold OA costs nor, indeed, any type of charge to the author(s) for publication. (The Beilstein Foundation has very deep pockets and laudably shoulders all of the costs.)

But take a look at our list of publications — although we indeed publish in the Beilstein J. Nanotech., the number of our papers appearing there can be counted on the fingers of (less than) one hand. So, while I espouse the two principles listed above, I hypocritically don’t practice what I preach. What’s my excuse?

In academia, journal brand is everything. I have sat in many committees, read many CVs, and participated in many discussions where candidates for a postdoctoral position, a fellowship, or other roles at various rungs of the academic career ladder have been compared. And very often, the committee members will say something along the lines of “Well, Candidate X has got much better publications than Candidate Y”…without ever having read the papers of either candidate. The judgment of quality is lazily “outsourced” to the brand-name of the journal. If it’s in a Nature journal, it’s obviously of higher quality than something published in one of those, ahem, “lesser” journals.

If, as principal investigator, I were to advise the PhD students and postdocs in the group here at Nottingham that, in line with the two principles above, they should publish all of their work in the Beilstein J. Nanotech., it would be career suicide for them. To hammer this point home, here’s the advice from one referee of a paper we recently submitted:

“I recommend re-submission of the manuscript to the Beilstein Journal of Nanotechnology, where works of similar quality can be found. The work is definitively well below the standards of [Journal Name].”

There is very clearly a well-established hierarchy here. Journal ‘branding’, and, worse, journal impact factor, remain exceptionally important in (falsely) establishing the perceived quality of a piece of research, despite many efforts to counter this perception, including, most notably, DORA. My hypocritical approach to publishing research stems directly from this perception. I know that if I want the researchers in my group to stand a chance of competing with their peers, we have to target “those” journals. The same is true for all the other PIs out there. While we all complain bitterly about the impact factor monkey on our back, we’re locked into the addiction to journal brand.

And it’s very difficult to see how to break the cycle…

Author: Philip Moriarty

Physicist. Rush fan. Father of three. (Not Rush fans. Yet.) Rants not restricted to the key of E minor...

16 thoughts on “Addicted to the brand: The hypocrisy of a publishing academic”

  1. Obviously there is a chicken-and-egg problem here, but it is easy to break. Tenured academics should publish exclusively in morally correct journals. If the quality of papers there is as high as in the high-quality journals of today, then within a few years such journals will have at least as good a reputation. As you note, you can’t expect non-tenured people to commit career suicide by publishing in a journal which is not yet reputable. But what is academic tenure (or something similar to it) for, if not for things like this?

    Do you have a permanent job, i.e. you can be sacked only in case of gross misconduct? If so, then you have no excuse. OK, you might have to publish collaborations with young scientists elsewhere, but if you know that a student is leaving the field, then the career-suicide argument doesn’t hold water, so such collaborations could be published in the OA journal.


    1. The problem, Phillip, is that for experimentalists, papers are very often the result of collaborations with students and postdocs, plural, not just one student or one postdoc.

      Second, there’s no such thing as tenure in UK universities. We are all subject to the vagaries of the Research Excellence Framework. Without publishing in “high impact factor journals” (*cough*) then one could very easily be not returned as research active for the REF (i.e. not “REF-able”). This is then a very tricky position to be in, particularly when universities decide to cut staff.

      It doesn’t take gross misconduct at all for a UK academic to lose their job.


      1. In that case, you have to work to get the REF rules changed. What about the IoP passing a resolution requiring at least members to publish in “good” journals? Yes, conflict of interest and so on, but learned societies are vehicles for their members. At some point the majority of the community has to be convinced anyway, and once that is done, one could pass such resolutions. A starting point might be to get a learned society behind an OA journal, or get their own journals transformed into proper (i.e. not “gold”) OA journals.


  2. I am going to be a bit provocative here, so I’ll start with a quote:

    “I was seldom able to see an opportunity until it ceased to be one.”

    Mark Twain

    Academics do the research, write the papers, peer-review the papers and assess the quality of journals. They give up their copyright and libraries pay a fortune for subscriptions for the printed and online versions. This has been going on for decades.

    Some twenty-five years ago, the World Wide Web was invented and graphical browsers developed. This removed the obstacle of printing, distributing and holding collections. The opportunity was there to take control of all aspects of research publication and develop a new system.

    This opportunity was not taken, so I don’t see that there is any point now in moaning. You are mugs! In mitigation, I would add that scientists are no different from the rest of the human race.

    Get over it!


    1. Thank you for that erudite contribution to the discussion. Would you care to expand on your comment?

