Science magazine’s open-access sting lacks bite


Last week, Science magazine exposed the dark underbelly of open-access publishing, revealing it as an unethical scam through which entirely flawed science is published by unscrupulous companies that routinely bypass the quality-control mechanism of peer review to make a quick buck.

Or so Science and, by extension, the American Association for the Advancement of Science, would have us believe.

Just like the spoof paper submitted by John Bohannon to over three hundred open-access journals, however, there were gaping holes in the methodology used to reach this apparently damning conclusion. Foremost amongst these was the lack of a basic control. As a host of bloggers – including Michael Eisen, co-founder of the Public Library of Science – were quick to ask in the immediate fallout of the Science article (and accompanying press release), where was the comparable study of the fate of Bohannon’s paper when submitted to traditional subscription-model journals (such as Science itself)?

After all, Alan Sokal showed almost twenty years ago that even prestigious journals, driven by ‘rigorous’ peer review standards, are more than capable of accepting total tripe for publication. The title alone of Sokal’s paper, “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity”, should have been enough to sound a cacophony of warning bells, but apparently the ‘draw’ of a renowned physicist being seen to embrace cultural relativism was enough to quash any critical reading of the manuscript by its reviewers. All this was in the days when the idea of open-access publishing couldn’t even be said to be in its infancy – it was barely conceived of in many academic quarters.

What Bohannon’s spoof paper actually highlights are key deficiencies in the peer review system, rather than a problem with open access per se. I’m not about to revisit my arguments on failings in peer review yet again – see here and here for my previous rants and ramblings on this. What I want to highlight in this post instead is the extent to which the open access issue is dangerously driving a wedge between academics and the learned societies/professional bodies of which they are members. (I’ll also take the opportunity to point out just why nanoscientists are exceptionally fortunate compared to researchers in many other fields when it comes to the open access conundrum…)

The Cost of Knowledge
A number of high-profile blogging academics – including Peter Coles (In The Dark), Stephen Curry (Reciprocal Space), and Tim Gowers (Gowers’s Weblog); Gowers initiated the Cost of Knowledge boycott of Elsevier, which almost 14,000 people have signed to date – have discussed and dissected the many deficiencies in the traditional model of academic publishing. I urge you to read their blog posts. Coles, in particular, has been scathing both in his criticism of publishers and of the approach of the research councils (and government) to open access. There’s a list of his posts on the matter here. (See also a recent Physics World article, The Reality of Open Access.)

The fact that Science felt the need to crow so loudly about the fate of Bohannon’s spoof paper strongly suggests that it, in common with many other academic publishers, is running scared of the changes that will inevitably re-shape the industry. Much as the music industry left it far too late to work out how to deal with the changes in ‘consumption’ of its product wrought by the internet, many academic publishers are finding it exceptionally difficult to accept that the ‘good old days’ are gone and that they need to work with the academic community to develop new business models which don’t involve astronomically high subscriptions, pay-for-access fees, or article publication charges.

Open access, and, more broadly, alternatives to the traditional academic publishing model, are simply not going to go away: too many academics are mad as hell and they’re not going to take it any more (if I can be excused the steal from Peter Finch’s monologue in Network). George Monbiot, in a piece with the gently understated title of “Academic publishers make Murdoch seem like a socialist”, highlighted the myriad problems with the academic publishing industry, or, as he put it, “the knowledge monopoly racketeers”. Many publishers would feel that Monbiot’s article was little more than an ill-informed hyperbolic rant (some even said so at the time); but the weight of academic opinion, certainly in the sciences and mathematics, is very much with Monbiot.

Show us the money
Philip Campbell, editor-in-chief of Nature (and, coincidentally, erstwhile editor of the Institute of Physics’ own Physics World), recently estimated the journal’s internal costs per paper at £20,000–30,000. It is this type of astronomical figure – and the distinct lack of a breakdown of just how that cost was arrived at – that raises the ire of academics, particularly when we provide refereeing services for free. It’s worth noting that the figure quoted by Campbell is six to ten times greater than the cost per paper estimated by studies of scholarly publishing models (which include a profit/surplus of ~20%). Coles and others argue that even these costs per paper, of around £3,000, are beyond the pale, and propose a model based on the physics arXiv, supplemented by suitably moderated offline and online peer review, as a low-cost alternative.
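As a quick sanity check of the “six to ten times” claim, here is a short sketch using only the figures already quoted above (Campbell’s £20,000–£30,000 per paper versus the ~£3,000 sector estimate); the rounding is mine:

```python
# Sanity check of the cost ratio discussed above, using only the figures
# quoted in the text: Nature's estimated internal cost per paper versus
# the ~GBP 3,000 per-paper figure from studies of scholarly publishing.
nature_low, nature_high = 20_000, 30_000  # GBP, Campbell's estimate
sector_estimate = 3_000                   # GBP, incl. ~20% profit/surplus

ratio_low = nature_low / sector_estimate    # ~6.7
ratio_high = nature_high / sector_estimate  # 10.0

print(f"Campbell's figure is {ratio_low:.1f}x to {ratio_high:.0f}x the sector estimate")
```

So the lower bound of Campbell’s range already sits nearly seven times above the sector estimate, and the upper bound a full order of magnitude above the £3,000 figure.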

Although I have a great deal of time for arguments based on “arXiv 2.0”, it is important not to tar all publishers with the same brush. There is a wealth of difference between the likes of Elsevier and NPG and, for example, Institute of Physics Publishing (IOPP). My choice of IOPP as an example is not coincidental. Following what some might call a ‘robust’ exchange of views with Steven Hall, Managing Director of IOPP, at an Institute of Physics meeting earlier this year, I was invited to IOPP in Bristol a few months back to see the scope of the publishing activity there. I came away with the strong impression that there is significant value added by the company, beyond the (extensive) peer review provided by academics, in terms of copy-editing, cross-publication and cross-field interactions, PR, marketing, in-house style and ‘brand’, social media presence, and outreach/public engagement activities.

What sets IOPP apart from the likes of Elsevier and NPG, and in common with a variety of other publishing companies connected to learned societies and professional bodies, is that its profits are ploughed back into its parent institute (i.e. the IOP) to support the physics community in a wide variety of areas, including education, input to science funding policy and government reviews, and professional development/careers advice. (Disclaimer: I am currently a member of the IOP Science Board, and am Chair of the IOP Conferences Committee. I have previously been Chair of the IOP Nanoscale Physics and Technology Group Committee (2009–12) and was a member of the Thin Films and Surfaces Group Committee).

Despite this, there remains deep scepticism about the true costs of academic publishing. Very many academics see the Finch report, and its emphasis on Gold Open Access, as little more than a sop to the publishing industry. There is a widespread feeling that publishing costs have been highly inflated in order to sustain large profit margins. Damningly, a parliamentary select committee has recently slated the recommendations of the report.

In order to convince the academic community of the value of the service they provide, publishers such as IOPP need to provide a detailed justification of the costs underpinning their article publication charges, subscription costs, and download fees. There may well be a great deal of reluctance to do this due to “commercial sensitivities”, but the confidence and engagement of the community a publishing house supports is itself a major contributor to the financial health of the company. ‘Opening up the books’ could go a long way towards convincing academics of the value of traditional academic publishing houses. Assuming, of course, that the costs are indeed justified…

The Beneficence of Beilstein
After all that criticism of publishers, let me close with an inspiring example of best practice. When it comes to open access, nanoscience researchers across the world are extremely fortunate. Germany’s Beilstein-Institut zur Förderung der Chemischen Wissenschaften (or the Beilstein Foundation for short) set up the Beilstein Journal of Nanotechnology, a leading open-access journal in the field, in 2010. The Beilstein Journal has an article publication charge of €0.00.

That’s right – open-access papers in the Beilstein Journal of Nanotech are published with no charge to the author.

Zero. Zilch. Nada. Gratis.

All papers in the Beilstein journal are freely available online. What’s more, the Foundation regularly distributes hard copies of the journal. For free.

The Beilstein Foundation obviously has extremely deep pockets, and I am not, of course, suggesting that its altruism can form the basis of a business model for all publishers. Yet there is clearly unexplored middle ground between Beilstein’s zero-cost-to-author approach and the elevated article publication charges levied by the journals at the top of the publication hierarchy – journals which find themselves in that enviable position by virtue of the statistically suspect impact factor metric.

Image: Lichen on a tree branch. The claim that a lichen molecule has cancer-curing properties was made in a spoof research paper. Credit: Norbert Nagel

Author: Philip Moriarty

Physicist. Rush fan. Father of three. (Not Rush fans. Yet.) Rants not restricted to the key of E minor...

3 thoughts on “Science magazine’s open-access sting lacks bite”

  1. For me it is more important to find a few sources of light in this ocean of darkness. People are busy finding the weaknesses of the study, how it should have been conducted, and so on. Some people consider it a ‘designer study made to produce a designed baby’, and I AGREE with all of them. Yes, all of it is true. But in this huge quarrel and cacophony, are we not neglecting some orphan babies born of this study (yes, they were born accidentally, neither designed nor expected to be born – as most great discoveries happen by accident)?

    I have made a somewhat naive analysis of the raw data from John Bohannon’s report.

    Bohannon used very few words to praise or highlight the journals/publishers who successfully passed the test. He only mentioned PLOS ONE and Hindawi, which are already accepted by academics for their high reputation. At the very least, I expected Bohannon to include a table highlighting the journals/publishers who passed the test. I spent a little time analyzing the data. Surprisingly, I found some errors made by Bohannon in correctly indicating the category of publishers (DOAJ/Beall). I have noted some of these errors, though I could not complete the cross-checking of all 304 publishers/journals. Bohannon used DOAJ/Beall as his main criterion for selecting the journals, but errors in properly presenting this category data may indicate that he spent more time collecting the raw data than analyzing or curating it.

    I found more members of Beall’s list present in Bohannon’s study, but Bohannon did not report this fact.

    Table 1: List of 20 journals/publishers who rejected the paper after substantial review (may be considered white-listed journals/publishers)
    Table 2: List of 8 journals/publishers who rejected the paper after superficial review (may be considered borderline white-listed journals/publishers)
    Table 3: List of 16 journals/publishers who accepted the paper after substantial review (may be considered borderline blacklisted journals/publishers)
    Table 4: List of journals/publishers who accepted the paper after superficial/no review (may be considered confirmed blacklisted journals/publishers)
    Table 5: List of journals/publishers who rejected the paper but for whom no review details were recorded (labelling of these journals/publishers is avoided)

    Link to my post:

    Akbar Khan


  2. Where the Fault Lies

    To show that the bogus-standards effect is specific to Open Access (OA) journals would of course require submitting also to subscription journals (perhaps equated for field, age and impact factor) to see what happens.

    But it is likely that the outcome would still be a higher proportion of acceptances by the OA journals. The reason is simple: Fee-based OA publishing (fee-based “Gold OA”) is premature, as are plans by universities and research funders to pay its costs:

    Funds are short and 80% of journals (including virtually all the top, “must-have” journals) are still subscription-based, thereby tying up the potential funds to pay for fee-based Gold OA. The asking price for Gold OA is still arbitrary and high. And there is very, very legitimate concern that paying to publish may inflate acceptance rates and lower quality standards (as the Science sting shows).

    What is needed now is for universities and funders to mandate OA self-archiving (of authors’ final peer-reviewed drafts, immediately upon acceptance for publication) in their institutional OA repositories, free for all online (“Green OA”).

    That will provide immediate OA. And if and when universal Green OA should go on to make subscriptions unsustainable (because users are satisfied with just the Green OA versions), that will in turn induce journals to cut costs (print edition, online edition), offload access-provision and archiving onto the global network of Green OA repositories, downsize to just providing the service of peer review alone, and convert to the Gold OA cost-recovery model. Meanwhile, the subscription cancellations will have released the funds to pay these residual service costs.

    The natural way to charge for the service of peer review then will be on a “no-fault basis,” with the author’s institution or funder paying for each round of refereeing, regardless of outcome (acceptance, revision/re-refereeing, or rejection). This will minimize cost while protecting against inflated acceptance rates and decline in quality standards.

    That post-Green, no-fault Gold will be Fair Gold. Today’s pre-Green (fee-based) Gold is Fool’s Gold.

    None of this applies to no-fee Gold.

    Obviously, as Peter Suber and others have correctly pointed out, none of this applies to the many Gold OA journals that are not fee-based (i.e., do not charge the author for publication, but continue to rely instead on subscriptions, subsidies, or voluntarism). Hence it is not fair to tar all Gold OA with that brush. Nor is it fair to assume — without testing it — that non-OA journals would have come out unscathed, if they had been included in the sting.

    But the basic outcome is probably still solid: Fee-based Gold OA has provided an irresistible opportunity to create junk journals and dupe authors into feeding their publish-or-perish needs via pay-to-publish under the guise of fulfilling the growing clamour for OA:

    Publishing in a reputable, established journal and self-archiving the refereed draft would have accomplished the very same purpose, while continuing to meet the peer-review quality standards for which the journal has a track record — and without paying an extra penny.

    But the most important message is that OA is not identical with Gold OA (fee-based or not), and hence conclusions about peer-review standards of fee-based Gold OA journals are not conclusions about the peer-review standards of OA — which, with Green OA, are identical to those of non-OA.

    For some peer-review stings of non-OA journals, see below:

    Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187–195.

    Harnad, S. R. (Ed.). (1982). Peer commentary on peer review: A case study in scientific quality control (Vol. 5, No. 2). Cambridge University Press.

    Harnad, S. (1998/2000/2004). The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive, 5 (2000); and in Shatz, B. (Ed.), Peer Review: A Critical Inquiry. Rowman & Littlefield, pp. 235–242.

