Maybe, Minister: Can politics and science ever speak the same language?



Along with 35 other scientists (including my colleague Clare Burrage here at Nottingham), I have been at Westminster for the past few days as part of this year’s Royal Society MP-Scientist pairing scheme. Clare and I are paired with our local MP, Lilian Greenwood, and spent some time yesterday shadowing Lilian during a number of her meetings at Westminster. In the not-too-distant future we’ll also accompany Lilian while she meets with some of her South Nottingham constituents.

I’m going to save a description of the Westminster shadowing process – and the frankly unedifying experience of witnessing Prime Minister’s Questions “in the raw” for the first time – for a later post. I will say for now, however, that notwithstanding the bear pit that is PMQs, I thoroughly enjoyed the Westminster experience and, for reasons I’ll come back to in the future, found it rather humbling at times.

The tour of the Houses of Parliament on the first day was particularly fascinating and educational for me, given that quite a bit of my knowledge of the English monarchy has been gleaned from episodes of Blackadder. (I’m Irish and our curriculum at primary and secondary school didn’t focus too heavily on the minutiae of the succession of English monarchs. Oliver Cromwell, on the other hand, tended to pop up quite regularly during history lessons…).

What I want to discuss in this first post on the Pairing Scheme, however, are a couple of questions which were raised repeatedly in talks and panel discussions on Monday and Tuesday: what role does scientific advice play in politics, and is evidence-based policy always a realistic aspiration? These are uncomfortable questions for scientists because they cut to the core of our ‘value system’ and force us to consider the plethora of uncontrolled, and fundamentally uncontrollable, ‘non-scientific’ variables which underpin the political system. Simply presenting the evidence is not enough.  (It was also a bit of an eye-opener to find that the term “evidence” need not necessarily mean evidence as a scientist would understand it; often it can mean opinion.)


We heard from a variety of speakers and panellists who are at the heart of the process of translating scientific evidence and scientific opinion/consensus to policy. The introduction to science in parliament by Chris Tyler on Monday afternoon and the subsequent panel discussion were both particularly enlightening. To get a good insight into the general flavour of Tyler’s comments, it’s worth reading this article in the Guardian which, coincidentally, was also published on Monday. Although I suspect that Chris and I would disagree rather strongly on the matter of the RCUK and HEFCE impact ‘agenda’ (lots more on this in my post tomorrow), there’s an awful lot in that article in the Guardian with which I would concur: “Science for policy” and “policy for science” are certainly very different things (today’s post deals with the former, tomorrow’s post with the latter); policy makers aren’t intrinsically driven by, or even particularly interested in, science because there is (a lot) more to policy than scientific evidence; let’s not condescendingly dismiss politicians’ ability to understand scientific uncertainty; and economics and law are much, much higher up the ‘pecking order’ than science.

This latter point was raised on quite a number of occasions. On Tuesday morning, Jill Rutter, Programme Director of the Institute for Government, presented some illuminating statistics on the distribution of the degree backgrounds of the permanent secretaries. As you might expect, economics, law, history, the humanities and the social sciences featured heavily. The life and physical sciences languish at the bottom of the list, occupying a tiny sliver of the pie chart. To hammer this point home, Rutter pointed out that only one person in the current cabinet did some science at degree level: Dr. Vince Cable. Even then, Cable apparently only did two years before swapping to economics.

In common with the vast majority of the talks and panels we attended on the first two days of the week in Westminster, Rutter’s presentation was refreshingly open and honest. She made very convincing arguments regarding the interplay of “technocracy” and politics, pointing out that the big difference between government and academia is that government needs to make decisions. Rutter also stressed that there are very few issues where science or evidence dictate government action, listing such concerns and criteria as cost-benefit analysis/spending prioritisation, political/ethical acceptability, legality, and implementability.

If anything, the presentation which followed – from David MacKay, Chief Scientific Advisor (CSA) to the Department of Energy and Climate Change – was even more “transparent” (if you’ll excuse the lift from the political lexicon). MacKay gave an engaging and fascinating insight into his work as a CSA, pointing out, amongst many other things, the exceptionally important role of lobbyists and the at times difficult relationship of CSAs with the media. I must admit to being left with some nagging questions after MacKay’s talk about just how the type of rigorous science and comprehensive evidence base which he discussed with regard to sustainable energy was then translated into a form which could, for want of a better description, “play to the Daily Mail”.

MacKay, in common with Rutter before him and a number of panellists on the previous day (including Robert Winston and Alan Malcolm, Executive Secretary of the Parliamentary and Scientific Committee – more on their presentations tomorrow), pointed out that individual scientists can be influential “if you talk to the right people in the right way” and that lobbyists are massively influential. I think that the majority of scientists would instead hope that the scientific evidence could speak for itself. This is almost always a pipe-dream.

Yet, although I appreciate entirely that scientific evidence can never be the be-all-and-end-all of policy making and politics, and have a lot of sympathy for the policy-maker’s need to see evidence as only one component of a complex landscape of criteria and considerations, I nonetheless share the concerns voiced by George Monbiot a couple of months ago in an article entitled “For scientists in a democracy, to dissent is to be reasonable”:

“A world in which scientists speak only through minders and in which dissent is considered the antithesis of reason is a world shorn of meaningful democratic choices. You can judge a government by its treatment of inconvenient facts and the people who expose them.”

The tagline for Monbiot’s article is “Government policy in Britain, Canada and Australia is crushing academic integrity on behalf of corporate power”. Tomorrow, I’m going to focus on the question of the extent to which the research councils’ and the funding councils’ ‘impact agenda’ undermines the independence, integrity, and ethos of academia in its headlong rush to make academic scientists more responsive to the needs of business and industry.


  1. The UK Houses of Parliament. Credit: ktanaka
  2. The group of scientists who took part in a scheme pairing them with MPs

Science magazine’s open-access sting lacks bite


Last week, Science magazine exposed the dark under-belly of open-access publishing, revealing it as an unethical scam through which entirely flawed science is published by unscrupulous companies who routinely bypass the quality-control mechanism of peer review to make a quick buck.

Or so Science and, by extension, the American Association for the Advancement of Science, would have us believe.

Just like the spoof paper submitted by John Bohannon to over three hundred open-access journals, however, there were gaping holes in the methodology used to reach this apparently damning conclusion. Foremost amongst these was the lack of a basic control study. As a host of bloggers – including Michael Eisen, co-founder of the Public Library of Science – were quick to ask in the immediate fallout of the Science article (and accompanying press release), where was the comparable study of the fate of Bohannon’s paper when submitted to traditional subscription-model journals (such as Science itself)?

After all, Alan Sokal showed almost twenty years ago that even prestigious journals, driven by ‘rigorous’ peer review standards, are more than capable of accepting total tripe for publication. The title alone of Sokal’s paper, “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity”, should have been enough to sound a cacophony of warning bells, but apparently the ‘draw’ of a renowned physicist being seen to embrace cultural relativism was enough to quash any critical reading of the manuscript by its reviewers. All this was in the days when the idea of open-access publishing couldn’t even be said to be in its infancy – it was barely conceived of in many academic quarters.

What Bohannon’s spoof paper actually highlights are key deficiencies in the peer review system, rather than a problem with open access per se. I’m not about to revisit my arguments on failings in peer review yet again – see here and here for my previous rants and ramblings on this. What I want to highlight in this post instead is the extent to which the open access issue is dangerously driving a wedge between academics and the learned societies/professional bodies of which they are members. (I’ll also take the opportunity to point out just why nanoscientists are exceptionally fortunate compared to researchers in many other fields when it comes to the open access conundrum…)

The Cost of Knowledge
A number of high-profile blogging academics, including Peter Coles (In The Dark), Stephen Curry (Reciprocal Space), and Tim Gowers (Gowers’s Weblog) – Gowers initiated the Cost of Knowledge boycott of Elsevier, to which almost 14,000 people to date have signed up – have discussed and dissected the many deficiencies in the traditional model of academic publishing. I urge you to read their blog posts. Coles, in particular, has been scathing both in his criticism of publishers and the approach of the research councils (and government) to open access. There’s a list of his posts on the matter here. (See also a recent Physics World article, The Reality of Open Access.)

The fact that Science felt the need to crow so loudly about the fate of Bohannon’s spoof paper strongly suggests that they, in common with many other academic publishers, are running scared of the changes that will inevitably re-shape their industry. Much like the music industry left it far too late to work out how it was going to deal with the changes in ‘consumption’ of their product wrought by the internet, many academic publishers are finding it exceptionally difficult to appreciate that the ‘good old days’ are gone and that they need to work with the academic community to develop new business models which don’t involve astronomically high subscriptions/pay-for-access/article publication charges.

Open access, and, more broadly, alternatives to the traditional academic publishing model, are simply not going to go away: too many academics are mad as hell and they’re not going to take it any more (if I can be excused the steal from Peter Finch’s monologue in Network). George Monbiot, in a piece with the gently understated title of “Academic publishers make Murdoch seem like a socialist”, highlighted the myriad problems with the academic publishing industry, or, as he put it, “the knowledge monopoly racketeers”. Many publishers would feel that Monbiot’s article was little more than an ill-informed hyperbolic rant (some even said so at the time); but the weight of academic opinion, certainly in the sciences and mathematics, is very much with Monbiot.

Show us the money
Philip Campbell, editor-in-chief of Nature (and, coincidentally, erstwhile editor of the Institute of Physics’s own Physics World), recently estimated the journal’s internal costs per paper as £20–30,000. It is this type of astronomical figure – and the distinct lack of a breakdown of just how that cost was arrived at – that raises the ire of academics, particularly when we provide refereeing services for free. It’s worth noting that the figure quoted by Campbell is six to ten times greater than the cost per paper estimated by studies of scholarly publishing models (which include a profit/surplus of ~20%). Coles and others argue that even these costs per paper, of around £3,000, are beyond the pale, and propose a model based on the physics arXiv, supplemented by suitably moderated offline and online peer review, as a low-cost alternative.

Although I have a great deal of time for arguments based on “arXiv 2.0”, it is important not to tar all publishers with the same brush. There is a wealth of difference between the likes of Elsevier and NPG and, for example, Institute of Physics Publishing (IOPP). My choice of IOPP as an example is not coincidental. Following what some might call a ‘robust’ exchange of views with Steven Hall, Managing Director of IOPP, at an Institute of Physics meeting earlier this year, I was invited to IOPP in Bristol a few months back to see the scope of the publishing activity there. I came away with the strong impression that there is significant value added by the company, beyond the (extensive) peer review provided by academics, in terms of copy-editing, cross-publication and cross-field interactions, PR, marketing, in-house style and ‘brand’, social media presence, and outreach/public engagement activities.

What sets IOPP apart from the likes of Elsevier and NPG, and in common with a variety of other publishing companies connected to learned societies and professional bodies, is that its profits are ploughed back into its parent institute (i.e. the IOP) to support the physics community in a wide variety of areas, including education, input to science funding policy and government reviews, and professional development/careers advice. (Disclaimer: I am currently a member of the IOP Science Board, and am Chair of the IOP Conferences Committee. I have previously been Chair of the IOP Nanoscale Physics and Technology Group Committee (2009–12) and was a member of the Thin Films and Surfaces Group Committee).

Despite this, there remains deep scepticism about the true costs of academic publishing. Very many academics see the Finch report, and its emphasis on Gold Open Access, as little more than a sop to the publishing industry. There is a widespread feeling that publishing costs have been highly inflated in order to sustain large profit margins. Damningly, a parliamentary select committee has recently slated the recommendations of the report.

In order to convince the academic community of the value of the service they provide, publishers such as IOPP need to provide detailed justification of the costs underpinning their article publication charges, subscription costs, and download fees. There may well be a great deal of reluctance to do this due to “commercial sensitivities”, but the confidence and engagement of the community a publishing house supports is a major contributor to the financial health of the company in any case. ‘Opening up the books’ could potentially help to convince academics of the value of traditional academic publishing houses. Assuming, of course, that the costs are indeed justified…

The Beneficence of Beilstein
After all that criticism of publishers, let me close with an inspiring example of best practice. When it comes to open access, nanoscience researchers across the world are extremely fortunate.  Germany’s Beilstein-Institut zur Förderung der Chemischen Wissenschaften (or the Beilstein Foundation for short) set up the Beilstein Journal of Nanotechnology, a leading open access journal in the field, in 2010. The Beilstein Journal has an article publication charge of €0.00.

That’s right – open-access papers in the Beilstein Journal of Nanotech are published with no charge to the author.

Zero. Zilch. Nada. Gratis.

All papers in the Beilstein journal are freely available online. What’s more, the Foundation regularly distributes hard copies of the journal. For free.

The Beilstein Foundation obviously has extremely deep pockets and I am not, of course, suggesting that their altruism can form the basis of the business model of all publishers. Yet there is clearly fallow middle ground left to explore between Beilstein’s zero-cost-to-author approach and the elevated article publication charges levied by those journals at the top of the publication hierarchy – who find themselves in that enviable position by virtue of the statistically suspect impact factor metric.

Image: Lichen on a tree branch. The claim that a lichen molecule has cancer-curing properties was made in a spoof research paper. Credit: Norbert Nagel

Perform or perish? Guilty confessions of a YouTube physicist


First published at physicsfocus

This week is YouTube’s Geek Week so it seems a particularly (in)opportune moment to come clean about some niggling doubts I’ve been having of late about physics education/edutainment on the web. Before I get started – and just to reassure you that these are not the bitter ramblings of a dusty old academic who, like our current education secretary, is keen to hasten the return of Victorian education values – let me stress that I am extremely enthusiastic about many aspects of online science communication. Indeed, not only have I been almost evangelical at times about the value of web-based learning, I’ve invested quite a bit of effort in helping to make YouTube videos of the type I’m about to criticise (just a little).

Along with a number of my colleagues at the University of Nottingham, since early 2009 I’ve been contributing to videos for Brady Haran’s popular Sixty Symbols and Numberphile channels. I’ve even crossed over to the dark (and smelly) side and made a couple of videos with Brady for Periodic Videos, the chemistry-focussed forerunner of Sixty Symbols. These channels, along with Brady’s many other YouTube projects — Haran has the work ethic of an intensely driven academic — have been extremely successful and have garnered many accolades and awards.

Brady is of course not alone in his efforts to communicate science and maths via YouTube. There is now a small, but intensely dedicated, clique of talented YouTubers, as described in this article in The Independent, whose videos regularly top one million views. (Conspicuous by its absence from that list in The Independent, however, is minutephysics, a staggeringly popular channel with, at the time of writing, 1.6 million subscribers.)

Working with Brady is a fascinating – and frankly quite exhausting – experience: challenging (because there’s no script – and even if there were, Brady would rip it up); unnerving (because the first time we academics see the video is when it’s uploaded to YouTube and it may well have picked up 10,000 views or more before we get round to watching it); and always intensely collaborative (because Brady not only films and edits – his ideas and questions are absolutely central to the direction of each video). Most of all, it’s fun. It is also immensely gratifying for all of us involved with Sixty Symbols to receive e-mails from YouTube viewers across the world who say that Sixty Symbols has (re)ignited their love of physics, and, for example, inspired them to pursue a degree in the subject.

You might quite reasonably say at this point that it sounds like ‘all win’ for everyone involved. What the heck is my problem? What’s the downside? (…and where are those guilty confessions I promised?)

I’m such a scientist. Get over it.

It took me a while to work out just where my nagging uneasiness with the YouTube edutainment business sprang from. It wasn’t until I borrowed a copy of Randy Olson’s book Don’t Be Such a Scientist from a colleague’s bookshelf a few months ago that things began slowly to crystallise. (Coincidentally, Don’t Be Such a Scientist was published back in 2009 – the year my colleagues and I started to work with Brady on Sixty Symbols – and I was somewhat surprised that I hadn’t encountered the book before, given that it’s about outreach and public engagement via film-making). My eyes were drawn immediately to the quote from Jennifer Ouellette on the back cover:

“This book is likely to draw a firestorm of controversy because scientists may not want to hear what Olson has to say. But someone needs to say it; and maybe Olson’s take-no-prisoners approach will get the message through.”

Jennifer Ouellette is an exceptionally talented science writer and blogger, so I was really looking forward to reading Olson’s book; praise from Ouellette is high praise indeed, as far as I’m concerned. She has an unerring knack for explaining complicated concepts in a lucid, engaging, and effortlessly witty way, without resorting to stereotypes or patronising the reader.

I really wish that I could say the same of Olson’s book. But I hated it. Not all of it, I grant you, but enough that I often had to leave it to one side and count to ten (or go make yet another coffee) to stem my flow of expletives. In that sense, Ouellette was dead right – I didn’t want to hear what Olson had to say. Here are just a few reasons why:

The relentless stereotyping of scientists as unfathomable, passionless, literal-minded automatons.

To be fair, Olson highlights one or two exceptions to this general type, including the inspirational Carl Sagan. But that’s the point – he discusses Sagan as an exception.

The “us and them” mentality.

Olson argues that scientists are not well-equipped to communicate with the ‘general public’, i.e. the great unwashed who are too intellectually challenged to “get” science without it being brought down to their level (which is apparently generally below the waistline). I was put in mind of the late Bill Hicks’ intense frustration with TV executives who told him time and time again that although his stand-up comedy routines were creative and funny, they were concerned that his material wouldn’t “play in the midwest”. As Hicks put it, “If the people in the midwest knew the contempt that television holds for them…”.

Reducing science to easy-to-digest content requiring little intellectual effort from the viewer.

In essence, Olson argues that scientists should adopt an approach to science communication which is informed by the strategies used by Hollywood, and the marketing and advertising industries: “Style is the substance”. Although my views on marketing may not be quite as extreme as those of Hicks, the very last thing that science needs to do is to move any closer to the advertising industry.  (A word of warning: Do not click on the preceding link if you are easily offended. Or work in marketing.)

One of the things I love about Sixty Symbols, and Brady Haran’s work in general, is that, contrary to Olson’s view that ‘talking heads’ are boring, Haran’s videos humanise scientists by forgoing the bleeding-edge graphics, the Ride of the Valkyries-esque backing tracks, and the breathless faux-urgency that have come to characterise so much of science communication in the mass media. My colleagues who contribute to Sixty Symbols (and Numberphile, Periodic Videos, etc.) have the remarkable ability to combine enthusiasm with clear and coherent explanations, each time breaking Olson’s cardinal rule that – and I hope they’ll forgive me for saying this – substance must be translated to style. (In my case, although enthusiasm is generally not lacking in the videos I make with Brady, clarity and coherence can often take a back seat. Style is also not something that unduly concerns me.)

Although each of those points above certainly irritated me, it was Olson’s closing line, and over-arching theme, that made me realise just where my misgivings about science-by-YouTube came from:

“…you’ll find that making an effective film, in the end, is really not different from conducting an effective scientific study”.

Hmmm, really? The last thing you need for an effective science study is to elevate style over substance. Good science necessitates careful, systematic, and tedious measurements. It couldn’t – or shouldn’t – care less about the need to “arouse and fulfil” an audience. It certainly doesn’t follow a neat story arc.

And if doing science shares little with film-making, what about science education…?

I’m with stupid

I read most of Don’t Be Such a Scientist in one sitting. Shortly after putting it down an e-mail from physicsfocus arrived in my email inbox pointing to this excellent post by Alom Shaha: Explanations are not enough, we need questions.

And there, in a nutshell, were my niggling doubts about YouTube edutainment laid bare.

Education is about so much more than an engaging video and a simple, compelling explanation. Indeed, and rather counter-intuitively, an enthusiastic lecturer apparently plays very little role in students’ ability to grasp the material covered in a lecture.  I have always seen university lectures simply as a way of enthusing students about the material – the real learning takes place outside the lecture theatre. Or after the video has been played.

If YouTube science edutainment is seen in this light – with the focus firmly on entertainment and engagement, rather than education – then my concerns are allayed. But it’s when comments like the following are posted under the videos, or at the Sixty Symbols Facebook page, that I start to get a little ‘twitchy’.


Watching a five minute (or one minute) video is only the first step in the education process. As Shaha points out, what’s then required are the questions, debate, experiments, problems, and discussion that underpin deep learning.   This may well bring on the yawns, but we need to expose students, at whatever level – and, more broadly, any fan of science – to the hard graft required to grasp difficult concepts.

Moreover, some concepts – negative temperature, for example – simply do not lend themselves well to a short, snappy explanation.

I know full well that there’s a famous Einstein quote: “If you can’t explain it simply, you don’t understand it well enough”. But he’s also credited with this: “Everything should be made as simple as possible, and no simpler”, and, perhaps more importantly, this: “I do not teach anyone, I only provide the environment in which they can learn.”

I used to be very proud when Sixty Symbols viewers would leave a “Great – I feel smart now!” comment under one of the videos to which I contributed. But then I realised that any substantial leap in my understanding of physics had come not when I felt smart, but when I felt stupid. Really stupid.

Yet again, I discovered that another physicsfocus blogger had been through the same thought process long before me. Suzie Sheehy made this wonderful point at her High Heels In The Lab blog: “…there are many arguments to be made that if you’ve stopped feeling stupid then you’ve stopped really doing science”. (I urge you to read the entire post.)

Even Feynman, arguably the most gifted physics communicator there has ever been, clearly felt that the complexity and elegance of some concepts deserved more than just a shallow description and required considerable intellectual effort from the audience.

Sometimes we need to admit that the fabric of the cosmos takes a little more than five minutes to comprehend.

Update: The thoughts above became the theme of a TEDxDerby talk I gave in 2014.

Image: xkcd on teaching physics, used under a Creative Commons Attribution-NonCommercial 2.5 License

Selling science by the pound


The President of the National Research Council (NRC) of Canada, John McDougall, caused quite a blogstorm, and set Twitter alight, at the end of last month when he said:

“Scientific discovery is not valuable unless it has commercial value.”

The tweets below give a good indication of the consensus view among the Twitterati. I don’t have a Twitter account but I also added my own small howl of outrage via The Conversation.

There’s just one small problem: McDougall didn’t say that.

As described at Phil Plait’s Bad Astronomy blog, McDougall was badly misquoted in a Toronto Sun article. This misquote effectively went ‘viral’. Shortly after my brief article at The Conversation was uploaded, I was contacted by Patrick Bookhout, Media Relations Officer at the NRC, who was understandably quite keen to put the record straight. As I told Patrick by e-mail, like too many others I took the quote at face value. A mea culpa is in order – I didn’t spend enough time doing my homework, i.e. verifying that the newspaper article had got its facts straight. That I was not alone in this is no excuse.

So what did McDougall actually say? Here’s the contentious quote verbatim:

“Impact is the essence of innovation. A new idea or discovery may in fact be interesting, but it doesn’t qualify as innovation until it’s been developed into something that has commercial or societal value.”

And you know what? I agree with much of that statement. Scientific discovery and innovation are different things and, for reasons I’ll outline below, we ultimately do academic research, and the taxpayers who fund it, a disservice to pretend otherwise. (McDougall is wrong, however, in suggesting that new ideas and discoveries, in and of themselves, do not have societal value.)

Richard Jones, Pro-Vice Chancellor for Research and Innovation at the University of Sheffield, and erstwhile Strategic Advisor for Nanotechnology for the Engineering and Physical Sciences Research Council (EPSRC), has pointed out the disconnect that exists between fundamental scientific research and the ‘nucleation’ and growth of successful industries based on innovative technologies.

I’m not going to rehearse Jones’ arguments here. I would strongly recommend that you visit his Soft Machines blog for a number of extremely well-argued posts on the deficiencies in UK innovation policy. (If only all PVCs were as well-informed as Prof. Jones…). Although Richard and I may not always see eye to eye on the value of, and motivations for, basic scientific research, he is someone who certainly does his homework. Take a look at his analysis of the UK’s disinvestment in R&D since 1980. (Note, in particular, the steady decline in private sector investment and Jones’ highly plausible interpretation of what this means for the direction of academic science in the UK.)

McDougall got it right about the disparity between scientific research at the frontiers of knowledge and innovations that translate to the market. But, of course, this is not a distinction that academic scientists, and the research councils which fund them, are exactly falling over themselves to promote to government. In the short term it serves us very well indeed to blur the boundaries between funding for basic science and for near-market R&D. You reap what you sow, however, and ratcheting up expectations for short-term, and direct, returns on investment in academic research, across the board, is a rather disingenuous and dangerous strategy.

What I find truly depressing is that this strategy is now fundamentally embedded in the workings of the research councils in the UK. EPSRC, in particular, has introduced a slew of new funding mechanisms and policies over the past five years or so which are steadily ensuring that it becomes more and more difficult in the UK to get funding for disinterested research which is not connected to the near-term requirements of industry.

For the more masochistic among you, there’s much more on my ‘issues’ with EPSRC here, here, and here. (And, oh, here as well.) As I mentioned in the article in The Conversation, recent moves by EPSRC towards further skewing the funding landscape towards applied research include the recommendation that industry not only is involved in Centres for Doctoral Training, but ‘co-creates’ the PhD training programme.

Not long ago, at the 2013 Association of Research Managers and Administrators (ARMA) conference here in Nottingham, Rick Rylance, Chair of Research Councils UK, said – assuming that I can believe the Twitter traffic this time – that the distinction between pure and applied research is “beginning to become untenable”. This stance is becoming increasingly fashionable; a similar charge was levelled against my particular research area, nanoscience, not so long ago.

I disagree with Rylance in the strongest possible terms. All scientific research indeed falls somewhere along the pure-applied spectrum, and the boundary can certainly be difficult to define. But there is a vast difference in the mindset, motivations, and working methods of an academic scientist working on, for example, the fundamental basis of quantum field theory (or the origin of dark matter, or the location of exoplanets, or submolecular resolution imaging at 4 K, etc.), and her colleague in a nearby department who is attempting to improve the efficiency of a market-ready photovoltaic device in collaboration with industry. Germany certainly sees a distinct separation between fundamental and applied science, supporting basic science via its Max Planck Institutes, and applied research through the Fraunhofer Society.

Contrary to what Rylance states, the science funding process would be a great deal more honest and free of misleading hyperbole (directed at both government and the taxpayer) if there were a much stronger delineation of basic and applied research projects, including the provision of separate funding streams. As it stands, EPSRC’s ‘one size fits all’ approach means that, regardless of where a scientist’s work falls on the pure-applied spectrum, each and every grant proposal must outline the direct socioeconomic worth of the research via Pathways to Impact and National Importance statements.

And that’s not so very far removed from a position which holds that “Scientific discovery is not valuable unless it has commercial value.”


When the uncertainty principle goes up to 11…


First published at physicsfocus.

I’m a middle-aged professor of physics and I love heavy metal.

There, I’ve said it.

I know that the mere mention of heavy metal – the music, that is, not one of those dubiously defined toxic elements in the periodic table – is likely to provoke a disdainful wrinkling of the nose among the more, let’s say, cultured readers of physicsfocus. But before you run to the hills, or depart en masse for BBC iPlayer and the more sedate sounds of Radio 3, first let me explain just why I am so heavily into metal and all its myriad sub-genres (including thrash, death, power, progressive, and – forgive me – hair metal), and why the Heisenberg uncertainty principle is fundamentally connected with the ‘crunch’ of a metal guitar riff.

The best metal is incredibly harmonically rich. The music of Black Sabbath and Metallica, to name but two metal giants, echoes and channels the sheer heaviness of the work of classical composers such as Wagner, Rachmaninoff, and Paganini. Indeed, one of the most accomplished metal guitarists there is, Yngwie Malmsteen, frequently cites Paganini’s work as a formative influence on his playing. And a British band who were a major inspiration for the fledgling Metallica, Diamond Head, ripped off (sorry, paid homage to) Holst’s The Planets – specifically, Mars, the Bringer of War – on their seminal track Am I Evil? Other examples of classical ‘crossover’ abound in the metal oeuvre.

In addition to being harmonically sophisticated, however, particular ‘breeds’ of metal are also rhythmically complex. Thrash metal, and the closely related industrial metal and ‘djent’ sub-genres, in particular, are based around exceptionally tight and syncopated rhythm guitar riffs where extensive use is made of palm muting to damp the strings. The video below includes a few examples of the use of heavy string muting in a number of archetypal metal riffs.

Bands like Meshuggah and Fear Factory have honed the level of syncopation to a very fine art where even the vocals become percussive and are locked in sync with machine-like guitar ‘chugs’ in challenging time signatures. It’s this rhythmic complexity – and the type of guitar style that’s required to produce it – which underpins the link between heavy metal and the Heisenberg uncertainty principle.

Unfortunately, the uncertainty principle continues to be explained — at least in many pop sci accounts (see here for example) — in terms of the disturbance that a measurement causes to a quantum system. This rather frustratingly fails to put across the fundamental essence of the uncertainty principle and can be somewhat misleading for students.

The uncertainty principle is simply an unavoidable and natural consequence of imbuing matter with wavelike characteristics. A wave can equally well be described in the time or in the frequency domain. These are conjugate variables and we can switch between the two descriptions of the wave using the wonderfully elegant Fourier transformation process. (An erstwhile colleague at Nottingham described the Fourier transformation of data as “what physicists always do when they can’t think of anything better”. I agree, and am guilty as charged! But there’s a very good reason why physicists fall back on Fourier analysis time and time again…) Any sound engineer or producer is also familiar with the results of Fourier transforming audio waves (although they may not refer to the process in quite those terms): a spectrum analyser provides a visualisation of Fourier components, while a graphic equaliser allows the relative amplitudes of those components to be modified.

The uncertainty principle arises from a very simple relationship between the two different representations of a waveform on the time and frequency axes: the shorter the signal is, the wider its frequency spectrum must be. Put more simply: narrow in time, wide in frequency. The width of the spectrum is simply a ‘proxy’ for our uncertainty in defining a specific frequency for the waveform. This, of course, translates to other pairs of variables including, in particular, position and momentum, giving rise to the standard form of the uncertainty principle which 1st year physics undergraduates are most familiar with.
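This reciprocal scaling is easy to check numerically. Below is a minimal pure-Python sketch – the sample rate, pitch, and envelope widths are illustrative values of my own choosing, not measurements – which computes the RMS spectral width of a sine burst with a Gaussian envelope via a direct discrete Fourier transform:

```python
import math

def dft_power(samples, fs, freqs_hz):
    """Power of a real signal at each listed frequency, via a direct DFT sum."""
    power = []
    for f in freqs_hz:
        w = 2 * math.pi * f / fs
        re = sum(s * math.cos(w * k) for k, s in enumerate(samples))
        im = sum(s * math.sin(w * k) for k, s in enumerate(samples))
        power.append(re * re + im * im)
    return power

def spectral_width(tau, fs=2000, f0=200.0):
    """RMS spectral width (Hz) of a sine burst with a Gaussian envelope.

    tau is the envelope's standard deviation in seconds; the burst sits in
    the middle of a one-second window sampled at fs Hz.
    """
    samples = [math.exp(-((k / fs - 0.5) ** 2) / (2 * tau * tau))
               * math.cos(2 * math.pi * f0 * k / fs)
               for k in range(fs)]
    freqs = [f0 - 50.0 + 1.0 * i for i in range(101)]  # 1 Hz grid about f0
    p = dft_power(samples, fs, freqs)
    total = sum(p)
    mean = sum(f * q for f, q in zip(freqs, p)) / total
    return math.sqrt(sum((f - mean) ** 2 * q for f, q in zip(freqs, p)) / total)

# A sustained note (100 ms envelope) versus a tightly muted one (10 ms):
print(spectral_width(0.100) < spectral_width(0.010))  # -> True
```

Shrinking the envelope from 100 ms to 10 ms broadens the spectral line by roughly a factor of ten: narrow in time, wide in frequency.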

Metal guitar lends itself rather well to a demonstration of the uncertainty principle in action. An undamped string left to its own devices on a highly amplified guitar produces a distorted note which sustains for some time:

The waveform is shown below on the left. On the right hand side is the frequency spectrum for the fundamental (i.e. first harmonic) of the guitar string. Note that the spectrum is essentially a single spike at the frequency of the fundamental. (Of course, there are many other frequency components but we don’t need to worry about those – I’ve zoomed in on a narrow portion of the spectrum containing just a single harmonic).


If the string is now muted to get the signature ‘crunch’/’chug’ of the metal riff, the waveform dies out on a very much shorter time-scale:

This time-limited signal has a correspondingly wider frequency spectrum, i.e. our effective uncertainty in determining the frequency of the fundamental is much greater. (The intensity of the peak in the frequency spectrum will also decrease but I’ve scaled it up to allow for better comparison of its width with that of the original narrow peak).
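A heavily muted note can also be modelled directly. The decay is roughly exponential, and an exponentially decaying sinusoid has a Lorentzian frequency spectrum whose full width at half maximum is 1/(πτ), where τ is the decay time. Here’s a quick pure-Python sketch – the sample rate, pitch, and decay time are illustrative values of mine, not measurements from the video:

```python
import math

def power_at(samples, fs, f):
    """Power of a real signal at frequency f (Hz), via a direct DFT sum."""
    w = 2 * math.pi * f / fs
    re = sum(s * math.cos(w * k) for k, s in enumerate(samples))
    im = sum(s * math.sin(w * k) for k, s in enumerate(samples))
    return re * re + im * im

fs, f0, tau = 2000, 200.0, 0.02   # sample rate (Hz), pitch (Hz), decay time (s)
note = [math.exp(-k / (fs * tau)) * math.cos(2 * math.pi * f0 * k / fs)
        for k in range(fs)]        # one second of an exponentially damped note

# Scan the power spectrum around the fundamental and measure the full width
# at half maximum; an exponential decay gives a Lorentzian line with
# FWHM = 1 / (pi * tau) in hertz.
freqs = [f0 - 40.0 + 0.2 * i for i in range(401)]
p = [power_at(note, fs, f) for f in freqs]
half = max(p) / 2
above = [f for f, q in zip(freqs, p) if q >= half]
fwhm = above[-1] - above[0]
print(fwhm, 1 / (math.pi * tau))   # both come out at ~16 Hz
```

The peak stays centred on the fundamental – only its width changes – which is exactly the behaviour seen in the measured spectra.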


This natural broadening of the spectrum of a time-limited signal represents the very essence of the uncertainty principle. And as was also aptly demonstrated by the IOP Schools lectures a few years back, what better way to demonstrate fundamental physics principles than via a heavily distorted guitar dialled all the way up to 11?

As I was finishing this post I found out that New College here in Nottingham will offer a degree in heavy metal from September 2013. It’s of course already attracted more than its fair share of opprobrium, widely mocked as a “Mickey Mouse” degree, but an undergraduate module or two on the physics of heavy metal strikes me as a very good idea indeed. It’d be an intriguing and left-field route into teaching topics such as vibrations and waves, signal processing, Fourier analysis, ordinary and partial differential equations, and feedback (non-linear dynamics).

I wonder if New College Nottingham is in need of an external examiner for its course..?

Image: Sagan/Slayer t-shirt design by Monsters of Grok

15 Responses to When the uncertainty principle goes up to 11…

    1. John Duffield says:

      Interesting stuff, Phil. I suppose you know all about the “Optical Fourier Transform”, like on Steven Lehar’s web page, about half way down. A lens converts an extended-entity wave into dots on a screen, effectively performing a real-time non-mathematical Fourier transform. I can’t help wondering if something similar is going on in the double-slit experiment. A photon goes through both slits, as per Steinberg et al’s plot in In Praise of Weakness. But when you detect it, you get a dot on the screen. And if you detect it at one slit, the photon is transformed into a dot that goes through that slit only.

    1. Firstly – it feels good to finally know I’m not the only physics-loving metalhead (or should that be metal-loving physicshead?). I thought you were supposed to appreciate art history and Mahler, so I tend to keep it quiet! Thanks very much for this video, and the novel way of looking at the uncertainty principle.

      Secondly, a thought occurred – when palm muting, I often find that if one rests too hard on the string, a noticeable change in frequency can occur, because you’re effectively changing the length of the standing wave. Is that not a possible alternative cause for the effect?

        • Hi, Mike.

          That’s a wonderfully perceptive comment! I worried about this too and made sure that I was not changing the pitch. It’s one of the reasons that I tuned back up from “drop A” tuning in the video. The key thing is that the peak position of the fundamental stays at the same frequency – it just becomes broader.

          All the best,


            • Of course! If you were to change the length, the frequency would have changed. The proof that the wavelength is the same is the unchanging fundamental. Brilliant 🙂

              I wonder if there’s some way to relate the uncertainty in frequency/wavelength to the width of the damper…

              Thanks for the reply,


    1. Ian Liberman says:

      As creator of Pressman’s Rock Trivia and an obsessed metal fan who is very much into physics and cosmology as a hobby, I cannot remember when I have enjoyed an article as much as I have yours. I loved your use of the guitar string played at its loudest to demonstrate Heisenberg’s Uncertainty Principle, using time and frequency instead of position and momentum to illustrate the cycle of the waveform. The uncertainty resides in “narrow in time, wide in frequency” – and vice versa – as you play the one string alongside the illustrating graph and apply it to the HUP. You also piqued my interest in how the Fourier transformation of data is used for analysis. Thanks for an excellent learning experience with metal overtones.

    1. Kelly says:

      I absolutely loved this post and the analysis of metal from a physics perspective. I am a biologist as well as a very vocal metalhead, and a classically trained percussionist. I have never seen anything remotely strange about my love for metal and classical music and sometimes have a hard time explaining to people why I am the way I am, but this post, as well as some others I’ve seen recently make me feel better that metalheads are getting out there and talking about why we love this technically and lyrically amazing music as much as we do (I write this as I am listening to Swallow the Sun…). Hopefully there will come a day when I don’t get dirty looks for being proud of the death, doom and black metal I listen to, and I will no longer have to explain how I can have Beethoven following Behemoth on my iPod. Thanks again to all metalheads supporting the genre.

    1. Richard Codling says:

      Very interesting article! I got into physics through taking guitars and effects apart and eventually built up to making my own little valve amp so this brings it all back round nicely.

      I attended an interview to become a trainee physics teacher and as part of my interview I had to give a five-minute presentation about an aspect of physics that interested me. I chose the electric guitar and highlighted what could be cross-referenced to which part of any given course – mostly experiments I wanted to try myself! I got a place on the course but ended up in the health service instead for various reasons.

      I notice you can see the decay envelope of your noise gate on the raw waveform of the ‘crunch’ D too – does that affect the frequency composition? Did you try with and without?

      Right better be off, my new band have a gig in 8 weeks and we need some material…

        • Hi, Richard.

          Great comment. The noise gate will indeed affect the overall shape of the frequency spectrum, but the general principle remains – narrow in time, wider in frequency. An exponentially decaying sinusoidal signal (as for the traditional damped, driven oscillator), when Fourier transformed to frequency space, will have a Lorentzian frequency spectrum (the resonance curve familiar from A-level physics).

          Other types of decay of the signal will change the shape of the frequency spectrum (e.g. an abrupt switch-off of the signal would be the equivalent of the top-hat function known to undergrads, and this would produce a sinc function in Fourier space).

          I was being entirely serious in the last paragraph of the post – metal guitar sounds could be used as a very effective and entertaining way of explaining Fourier transforms.

          I look forward to hearing some MP3s from your band – please post a link when you upload them!

          All the very best,


    1. Great stuff Philip. I never dreamed I’d see the day when Heisenberg and hair metal were mentioned in the same article. On the other hand, Heisenberg would be a great name for a German industrial metal band.

      This reminds me how, when a German researcher developed an algorithm for classifying music according to characteristics such as timbre and rhythmic variation rather than genre, the system couldn’t really distinguish classical music from heavy metal. One can, for example, draw some analogies between the rhythmic tricks of Led Zeppelin and Stravinsky, although the refined audiences to whom I sometimes talk about music cognition don’t always seem to appreciate hearing Black Dog.

      If you’re interested in seeing the two genres (and others) merged (lord, if not Lord, save us from Deep Purple’s Concerto for Group and Orchestra), check out Glenn Branca or Towering Inferno. TI’s album Kaddish has been described as a mixture of “East European folk singing, Rabbinical chants, klezmer fiddling, sampled voices (including Hitler’s), heavy metal guitar and industrial synthesizer”. It would be hard to improve on that recipe (which also brings us back to Heisenberg…).

        • Thanks for those fantastic links, Philip. Wonderful to know that a quantitative analysis of timbre and rhythm fails to distinguish reliably between metal and classical music!

          That Branca composition is… disturbing. I thought that Robert Fripp was ‘out there’ but Branca is on an entirely different plane – actually, in an entirely different universe. I can’t say that I enjoyed it but I certainly found it compelling.

          “…lord, if not Lord, save us…” Nice.


    1. What an awesome site those links go to. Shows what I always suspected, which is that Bartok anticipated Slayer.

      This is risking getting off-topic now, but I couldn’t help thinking of one of my favourite YouTube videos:

      I love the way the demure little Japanese girl sits down to delight her audience with a beautiful performance, totally rocks out, then gives a petite little bow to polite applause. She’s even more extraordinary here:

        • It’s absolutely amazing, isn’t it? I watched that many moons ago during a tea-break in a long night of experiments which weren’t going particularly well and it cheered me up immensely!

    1. Mark Fromhold says:


      As you know, I’m also a middle-aged Professor of Physics but I also love folk music. So you see, it could be worse…

        • Hi, Mark.

          A bit of folk now and then is nothing to be ashamed of! Christy Moore, both solo and as a member of Planxty, is certainly lurking on my iPod. I’m also partial to the folk-prog-rock of Jethro Tull.


Not everything that counts can be counted


First published at physicsfocus.

My first post for physicsfocus described a number of frustrating deficiencies in the peer review system, focusing in particular on how we can ensure, via post-publication peer review, that science does not lose its ability to self-correct. I continue to rant about (sorry: discuss and dissect) the issue of post-publication peer review in an article in this week’s Times Higher Education, “Spuriouser and Spuriouser”. Here, however, I want to address some of the comments left under that first physicsfocus post by a Senior Editor at Nature Materials, Pep Pamies (Curious Scientist in the comments thread). I was really pleased that a journal editor contributed to the debate but, as you might be less than surprised to hear, I disagree fundamentally with Pep’s argument that impact factors are a useful metric. As I see it, they’re not even a necessary evil.

I’m certainly not alone in thinking this. In an eloquent cri de coeur posted at his blog, Reciprocal Space, last summer, Stephen Curry bluntly stated, “I am sick of impact factors. And so is science”. I won’t rehearse Stephen’s arguments – I strongly recommend that you visit his blog and read the post for yourself, along with the close to two hundred comments that it attracted – but it’s clear from the Twitter and blog storm his post generated that he had tapped into a deep well of frustration among academics. (Peter Coles’ related post, The Impact X-Factor, is also well worth a read.)

I agree with Stephen on almost everything in his post. I think that many scientists will chuckle knowingly at the description of the application of impact factors as “statistically illiterate” and I particularly liked the idea of starting a ‘smear campaign’ to discredit the entire concept. But he argues that the way forward is:

“…to find ways to attach to each piece of work the value that the scientific community places on it though use and citation. The rate of accrual of citations remains rather sluggish, even in today’s wired world, so attempts are being made to capture the internet buzz that greets each new publication; there are interesting innovations in this regard from the likes of PLOS, Mendeley and”

As is clear from the THE article, embedding Web 2.0/Web 3.0/Web n.0 feedback and debate in the peer review process is something I fully endorse and, indeed, I think that we should grasp the nettle and attempt to formalise the links between online commentary and the primary scientific literature as soon as possible. But are citations – be they through the primary literature or via an internet ‘buzz’ – really a proxy for scientific quality and the overall value of the work?

I think that we do science a great disservice if we argue that the value of a paper depends only on how often other scientists refer to it, or cite it in their work. Let me offer an example from my own field of research, condensed matter physics – aka nanoscience when I’m applying for funding – to highlight the problem.

Banging a quantum drum

Perhaps my favourite paper of the last decade or so is “Quantum Phase Extraction in Isospectral Electronic Nanostructures” by Hari Manoharan and his co-workers at Stanford. The less than punchy title doesn’t quite capture the elegance, beauty, and sheer brilliance of the work. Manoharan’s group exploited the answer to a question posed by the mathematician Mark Kac close to fifty years ago: Can one hear the shape of a drum? Or, if we ask the question in rather more concrete mathematical physics terms, “Does the spectrum of eigenfrequencies of a resonator uniquely determine its geometry?”

For a one-dimensional system the equivalent question is not too difficult and can readily be answered by guitarists and A-level physics students: yes, one can ‘hear’ the shape, i.e. the length, of a vibrating string. But for a two-dimensional system like a drum head, the answer is far from obvious. It took until 1992 before Kac’s question was finally answered by Carolyn Gordon, David Webb, and Scott Wolpert. They discovered that it was possible to have 2D isospectral domains, i.e. 2D shapes (or “drum heads”) with the same “sound”. So, no, it’s not possible to hear the shape of a drum.
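The one-dimensional case is worth making concrete. For an ideal string fixed at both ends, the eigenfrequencies are f_n = nv/2L, so successive harmonics are spaced by v/2L and the spectrum determines the length uniquely. A toy sketch (the string parameters below are illustrative choices of mine, not values from the paper):

```python
def string_length(harmonics_hz, wave_speed):
    """'Hear' the length of an ideal fixed-fixed string from its spectrum.

    Eigenfrequencies are f_n = n * v / (2 * L), so successive harmonics
    are spaced by v / (2 * L): the spectrum pins down L uniquely.
    """
    spacings = [hi - lo for lo, hi in zip(harmonics_hz, harmonics_hz[1:])]
    mean_spacing = sum(spacings) / len(spacings)
    return wave_speed / (2 * mean_spacing)

# A 110 Hz (A2) string with a transverse wave speed of 143 m/s:
print(string_length([110.0, 220.0, 330.0, 440.0], 143.0))  # -> 0.65 (metres)
```

No such trick survives in two dimensions: as Gordon, Webb, and Wolpert showed, two differently shaped drum heads can share exactly the same spectrum.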

What’s this got to do with nanoscience? Well, the first elegant aspect of the paper by the Stanford group is that they constructed two-dimensional isospectral domains out of carbon monoxide molecules on a copper surface (using the tip of a scanning tunnelling microscope). In other words, they built differently shaped nanoscopic ‘drum heads’, one molecule at a time. They then “listened” to the eigenspectra of these quantum drums by measuring the resonances of the electrons confined within the molecular drum head and transposing the spectrum to audible frequencies.

So far, so impressive.

But it gets better. A lot better.

The Stanford team then went on to exploit the isospectral characteristics of the differently shaped quantum drum heads to extract the quantum mechanical phase of the electronic wavefunction confined within. I could wax lyrical about this particular aspect of the work for quite some time – remember that the phase of a wavefunction is not an observable in quantum mechanics! – but I encourage you to read the paper itself. (It’s available via this link, but you, or your institution, will need a subscription to Science.)

I’ll say it again – this is elegant, beautiful, and brilliant work. For me, at least, it has a visceral quality, just like a piece of great music, literature, or art; it’s inspiring and affecting.

…and it’s picked up a grand total of 29 citations since its publication in 2008.

In the same year, and along with colleagues in Nottingham and Loughborough, I co-authored a paper published in Physical Review Letters on pattern formation in nanoparticle assemblies. To date, that paper has accrued 47 citations. While I am very proud of the work, I am confident that my co-authors would agree with me when I say that it doesn’t begin to compare to the quality of the quantum drum research. Our paper lacks the elegance and scientific “wow” factor of the Stanford team’s publication; it lacks the intellectual excitement of coupling a fundamental problem (and solution) in pure mathematics with state-of-the-art nanoscience; and it lacks the sophistication of the combined experimental and theoretical methodology.

And yet our paper has accrued more citations.

You might argue that I have cherry-picked a particular example to make my case. I really wish that were so but I can point to many, many other exciting scientific papers in a variety of journals which have attracted a relative dearth of citations.

Einstein is credited, probably apocryphally, with the statement “Not everything that counts can be counted, and not everything that can be counted counts”. Just as multi-platinum album sales and Number 1 hits are not a reliable indicator of artistic value (note that One Direction has apparently now outsold The Beatles), citations and associated bibliometrics are not a robust measure of scientific quality.


Are flaws in peer review someone else’s problem?


That stack of fellowship applications piled up on the coffee table isn’t going to review itself. You’ve got twenty-five to read before the rapidly approaching deadline, and you knew before you accepted the reviewing job that many of the proposals would fall outside your area of expertise. Sigh. Time to grab a coffee and get on with it.

As a professor of physics with some thirty-five years’ experience in condensed matter research, you’re fairly confident that you can make insightful and perceptive comments on that application about manipulating electron spin in nanostructures (from that talented postdoc you met at a conference last year). But what about the proposal on membrane proteins? Or, worse, the treatment of arcane aspects of string theory by the mathematician claiming a radical new approach to supersymmetry? Can you really comment on those applications with any type of authority?

Of course, thanks to Thomson Reuters there’s no need for you to be too concerned about your lack of expertise in those fields. You log on to Web of Knowledge and check the publication records. Hmmm. The membrane protein work has made quite an impact – the applicant’s Science paper from a couple of years back has already picked up a few hundred citations and her h-index is rising rapidly. She looks to be a real ‘star’ in her community. The string theorist is also blazing a trail.

Shame about the guy doing the electron spin stuff. You’d been very excited about that work when you attended his excellent talk at the conference in the U.S. but it’s picked up hardly any citations at all. Can you really rank it alongside the membrane protein proposal? After all, how could you justify that decision on any sort of objective basis to the other members of the interdisciplinary panel…?

Bibliometrics are the bane of academics’ lives. We regularly moan about the rate at which metrics such as the journal impact factor and the notorious h-index are increasing their stranglehold on the assessment of research. And, yet, as the hypothetical example above shows, we can be our own worst enemy in reaching for citation statistics to assess work outside – or even firmly inside – our ‘comfort zone’ of expertise.

David Colquhoun, a world-leading pharmacologist at University College London and a blogger of quite some repute, has repeatedly pointed out the dangers of lazily relying on citation analyses to assess research and researchers. One article in particular, How to get good science, is a searingly honest account of the correlation (or lack thereof) between citations and the relative importance of a number of his, and others’, papers. It should be required reading for all those involved in research assessment at universities, research councils, funding bodies, and government departments – particularly those who are of the opinion that bibliometrics represent an appropriate method of ranking the ‘outputs’ of scientists.

Colquhoun, in refreshingly ‘robust’ language, puts it as follows:

“All this shows what is obvious to everyone but bone-headed bean counters. The only way to assess the merit of a paper is to ask a selection of experts in the field.

“Nothing else works. Nothing.”


An ongoing controversy in my area of research, nanoscience, has thrown Colquhoun’s statement into sharp relief. The controversial work in question represents a particularly compelling example of the fallacy of citation statistics as a measure of research quality. It has also provided worrying insights into scientific publishing, and has severely damaged my confidence in the peer review system.

The minutiae of the case in question are covered in great detail at Raphael Levy’s blog so I won’t rehash the detailed arguments here. In a nutshell, the problem is as follows. The authors of a series of papers in the highest profile journals in science – including Science and the Nature Publishing Group family – have claimed that stripes form on the surfaces of nanoparticles due to phase separation of different ligand types. The only direct evidence for the formation of those stripes comes from scanning probe microscopy (SPM) data. (SPM forms the bedrock of our research in the Nanoscience group at the University of Nottingham, hence my keen interest in this particular story.)

But those SPM data display features which appear remarkably similar to well known instrumental artifacts, and the associated data analyses appear less than rigorous at best. In my experience the work would be poorly graded even as an undergraduate project report, yet it’s been published in what are generally considered to be the most important journals in science. (And let’s be clear – those journals indeed have an impressive track record of publishing exciting and pioneering breakthroughs in science.)

So what? Isn’t this just a storm in a teacup about some arcane aspect of nanoscience? Why should we care? Won’t the problem be rooted out when others fail to reproduce the work? After all, isn’t science self-correcting in the end?

Good points. Bear with me – I’ll consider those questions in a second. Take a moment, however, to return to the academic sitting at home with that pile of proposals to review. Let’s say that she had a fellowship application related to the striped nanoparticle work to rank amongst the others. A cursory glance at the citation statistics at Web of Knowledge would indicate that this work has had a major impact over a very short period. Ipso facto, it must be of high quality.

And yet, if an expert – or, in this particular case, even a relative SPM novice – were to take a couple of minutes to read one of the ‘stripy nanoparticle’ papers, they’d be far from convinced by the conclusions reached by the authors. What was it that Colquhoun said again? “The only way to assess the merit of a paper is to ask a selection of experts in the field. Nothing else works. Nothing.”

In principle, science is indeed self-correcting. But if there are flaws in published work who fixes them? Perhaps the most troublesome aspect of the striped nanoparticle controversy was highlighted by a comment left by Mathias Brust, a pioneer in the field of nanoparticle research, under an article in the Times Higher Education:

“I have [talked to senior experts about this controversy] … and let me tell you what they have told me. About 80% of senior gold nanoparticle scientists don’t give much of a damn about the stripes and find it unwise that Levy engages in such a potentially career damaging dispute. About 10% think that … fellow scientists should be friendlier to each other. After all, you never know [who] referees your next paper. About 5% welcome this dispute, needless to say predominantly those who feel critical about the stripes. This now includes me. I was initially with the first 80% and did advise Raphael accordingly.”

[Disclaimer: I know Mathias Brust very well and have collaborated, and co-authored papers, with him in the past].

I am well aware that the plural of anecdote is not data but Brust’s comment resonates strongly with me. I have heard very similar arguments at times from colleagues in physics. The most troubling of all is the idea that critiquing published work is somehow at best unseemly, and, at worst, career-damaging. Has science really come to this?

Douglas Adams, in an inspired passage in Life, The Universe, and Everything, takes the psychological concept known as “someone else’s problem (SEP)” and uses it as the basis of an invisibility ‘cloak’ in the form of an SEP-field. (Thanks to Dave Fernig, a fellow fan of Douglas Adams, for reminding me about the Someone Else’s Problem field.) As Adams puts it, instead of attempting the mind-bogglingly complex task of actually making something invisible, an SEP is much easier to implement. “An SEP is something we can’t see, or don’t see, or our brain doesn’t let us see, because we think that it’s somebody else’s problem…. The brain just edits it out, it’s like a blind spot”.

The 80% of researchers to which Brust refers are apparently of the opinion that flaws in the literature are someone else’s problem. We have enough to be getting on with in terms of our own original research, without repeating measurements that have already been published in the highest quality journals, right?

Wrong. This is not someone else’s problem. This is our problem and we need to address it.

Image: Paper pile.