An experiment in post-proposal peer review


Originally published at physicsfocus.

I’m a huge fan of post-publication peer review (PPPR). It’s the future of scientific publishing and it’ll be de rigueur – rather than a novelty – for the next generation of scientists. Because if that doesn’t happen, science and society are going to continue to suffer from gaping holes in the quality-control mechanism that is traditional peer review.

I’m about to describe an experiment which takes the online/public peer review process back a couple of steps from the point of publication. But before I do that, it might help if I explain just why I’m such an enthusiastic advocate of PPPR.

Over the past couple of years, and along with colleagues at Nottingham, NIST, and Liverpool, I’ve been embroiled in a rather heated debate about the validity of a substantial body of research focused on the structure of coated (aka ‘stripy’) nanoparticles. I blogged about this for physicsfocus around this time last year, and was delighted when our paper critiquing the nanoparticle research in question was finally published in PLOS ONE a couple of months ago.

Long before the paper appeared in PLOS ONE, however, we had made it available (via the arXiv) at the PubPeer PPPR site, for what is perhaps best described as pre-publication peer review. This led to a large volume of very helpful comments (and, it must be admitted, the occasional less-than-helpful post) from our peers. The PubPeer contributions of one of those peers, Brian Pauw, were so insightful and important that he ended up being added as a co-author to the paper.

In addition to highlighting the benefits of open and public next-generation peer review, the striped nanoparticle controversy made me intensely aware of a number of shocking deficiencies in the traditional peer review system. The first is the demonstrated inability of traditional peer review to reliably filter out junk. I don’t want to harp on about the deficiencies in the striped nanoparticle work (which is faulty, rather than fraudulent), so let’s turn to a truly shocking example of the failure of traditional peer review: the nano chopsticks farce, as Brady Haran and I discuss in this Sixty Symbols video:

Social media, in particular the Chemistry Blog and ChemBark sites (and their associated Twitter feeds), exposed the chopstick ‘breakthrough’ as a staggeringly poor Photoshop job within days of the paper being published. It was retracted just two months after its publication.

A decade before this chopsticks debacle, the nanoscience community endured the rather less cack-handed, arguably quite clever, and remarkably systematic fraud of Hendrik Schön. I firmly believe that if post-publication peer review had existed in the early 2000s, Schön’s fraud would have been identified much, much sooner than it was. (Note how quickly the PubPeer community identified problems in the then-acclaimed, but now-retracted, STAP results published at the start of last year.)

PPPR isn’t, however, all about laying bare fraudulent work. At its best it’s exactly how the scientific method should work: authors should be willing to have their work discussed, debated, and dissected by their peers both before and after – particularly after – its publication. Compare and contrast with the following response from a well-respected, influential, and – for those who care about simplistic and flawed metrics – very high impact-factor journal, after I asked whether they’d be interested in publishing our critique (which eventually became the PLOS ONE paper described above):


Or, in other words, our journal is not interested in following the scientific method.

From PPPR to PPrPR

The deficiencies in peer review of course extend to the assessment of grant proposals. As I was writing this post, a link to an article published in Nature a couple of days ago appeared in my Twitter timeline (thanks @NKrasnogor), highlighting that the ratings of Medical Research Council proposals from external referees do not correlate well with the probability of the grant application being funded. This, of course, will not come as a great surprise to many researchers.

Some time ago I suggested to the Engineering and Physical Sciences Research Council (EPSRC) that they carry out an experiment where they send the same set of proposals to entirely independent prioritisation panels (and referees), and subsequently check for correlations between the rankings of the various panels. This is particularly important given that EPSRC blacklists researchers on the basis of where their grant proposal falls on the ranked list returned by the prioritisation panel.

EPSRC hasn’t run this experiment.

I’m trying a rather different peer review experiment of my own. Late last year I discussed the possibility of open peer review of a grant proposal, rather than a publication, with PubPeer and, subsequently, The Winnower. While PubPeer facilitates open review of any publication with a DOI, The Winnower, founded by Joshua Nicholson, combines open access publication with PPPR. The Winnower kindly agreed to publish our EPSRC proposal, Mechanochemistry At The Single Bond Limit, which, for the reasons discussed in this article in Physics World, is my first for EPSRC in quite some time. With the DOI provided by The Winnower, we subsequently set up a PubPeer thread related to the proposal.

As the ‘Pathways to Impact’ section of the proposal lays out, the entire impact case is based on public engagement (rather than, for example, commercial exploitation). A key component of that public engagement programme, should the grant application be successful, is that my colleague Brigitte Nerlich will be an ‘embedded’ sociologist within the research team. Brigitte will observe, and blog/tweet about, just how the scientific method plays out in the course of the project. It therefore makes a great deal of sense to extend the public engagement aspects of the proposed research to the grant application process itself, i.e. to incorporate post-proposal peer review (PPrPR).

Coincidentally, and fortuitously, a week or so after the discussions with PubPeer and The Winnower, Dorothy Bishop tweeted a link to an important and very relevant paper by Daniel Mietchen in PLOS Biology (not one of the journals I usually read).

The closing sentence of the abstract to this far-sighted paper is worth quoting at length:

“The article … explores the option of opening to the public key components of the [grant application review] process, makes the case for pilot projects in this area, and sketches out the potential that such measures might have to transform the research landscape in those areas in which they are implemented.”

The motivation for making our EPSRC proposal available for comment and criticism via The Winnower and PubPeer is exactly as that abstract describes – it’s a question of opening up the grant application/review process to public scrutiny. My aim over the coming months is – EPSRC and reviewers permitting – to make available, here at physicsfocus, the referees’ reports and, ultimately, the outcome of the panel ranking process.

It’s an experiment that may return a null result, of course, in that there could well be a deafening silence in response to making the proposal (and, hopefully, the subsequent reviews) publicly available. After all, I don’t believe many academics are fretting about finding more reviewing to do. But then, a null result is still very often an important finding that can provide key insights.

Let’s just run the experiment and see…

Author: Philip Moriarty

Physicist. Rush fan. Father of three. (Not Rush fans. Yet.) Rants not restricted to the key of E minor...

3 thoughts on “An experiment in post-proposal peer review”

  1. An interesting article. As a non-scientist, it has me uncertain about your view of the purpose of scientific publication.

     Historically, it would seem to me that publication was one of the very few ways of sharing knowledge, and peer review provided an assurance that the reviewed publication was worthy of further consideration. This was simply one step on the path of the evolution of our knowledge.

    It seems to me that pre- and post-publication peer review may cause some danger of focusing far too much on the publication itself and not on sharing the knowledge for further evolution by others. It may cause all the discussions, debates and experiments that arise from the publication to be bound by the approach and limitations of the publication itself, rather than letting multiple others follow the multitude of paths your publication may create.

    I also think we need to consider the nature of knowledge and sharing itself as it exists today. The printing press changed the way we could share knowledge in extraordinary and unexpected ways. I suggest it is possible that the digital world is changing the very nature of the ways we can evolve knowledge. Do we really need to fix publication at a point in time, or can an idea now be continuously debated, discussed and tested from a multiplicity of directions, in collaborative, isolated and competitive ways, all happening simultaneously, and with or without the knowledge of others? Organic growth rather than manufactured growth, perhaps.

    So I wonder whether the issue you are discussing is about peer review, pre, post or traditional, or whether it is about the existence of peer review as a recognition of the quality of work for academic, funding, employment and commercial purposes?

    I suspect the answers to your questions and the value of your propositions lie in how you define the underlying framework.

    Thank you for your thought provoking and challenging piece.


  2. Good piece, and thanks for the highlight!

    What I worry about with the more “crowdsourced” review processes is overcoming the barrier of apathy. PPPR and PPrPR work, but like crowdfunding, they depend heavily on popularity.

    In your case, you are likely to get your papers and proposals reviewed due to your high (online) visibility. I do not think I am that invisible, nor that my field is too obscure, but there is not a PPPR to be seen on my (pre-)publications…

    So how do we get these crowdsourced reviews to work for the less popular topics and people? How do we convince our peers to spend their already very rare moments of free time to read and (more importantly) leave thoughtful comments?


Comments are closed.