We are anonymous. We are legion. We are (mostly) harmful.

This revelation appeared in my Twitter timeline earlier this week:

On the same day, Nature News published a fascinating interview with Brandon Stell, the founder of PubPeer who has revealed his identity to the world:

I’ve waxed lyrical about PubPeer a number of times before, going so far as to say that its post-publication peer review (PPPR) ethos has to be the future of scientific publishing. (I now also try to include mention of PubPeer in every conference presentation/seminar I give). I’ll be gutted if PPPR of the type pioneered by PubPeer does not become de rigueur for the next generation of scientists; our conventional peer review system is, from so many perspectives, archaic and outdated. I agree entirely with Stell’s comments in that interview for Nature News:

Post-publication peer review has the potential to completely change the way that science is conducted. I think PubPeer could help us to move towards an ideal scenario where we can immediately disseminate our findings and set up a different way of evaluating significant research than the current system.

But one major bone of contention that I’ve always had with PubPeer’s approach, and that has been the subject of a couple of amicable ‘tweet-spats’ with the @PubPeer Twitter feed, is the issue of anonymity. I was disappointed that not only was it just Stell who revealed his identity — his two PubPeer co-founders remain anonymous — but that there are plans (or at least aspirations) to “shore up”, as Stell puts it, the anonymity of PubPeer commenters.

I am not a fan of internet anonymity. At all. I understand entirely the arguments regularly made by PubPeer (and many others) in favour of anonymous commenting. In particular, I am intensely aware of the major power imbalance that exists between, for example, a 1st year PhD student commenting on a paper at PubPeer and the world-leading, award-winning, scientifically decorated and oh-so-prestigious scientist whose group carried out the work that is being critiqued/attacked. Similarly, and in common with Peter Coles, I personally know bloggers who write important, challenging, and influential posts while remaining anonymous.

I also fully realise that there are extreme cases when it might be not only career-threatening, but life-threatening for a blogger to reveal their identity. However, those are exactly that: extreme cases. It’s statistically rather improbable that all of those pseudonymously venting their spleen under articles at, say, The Guardian, Telegraph, or, forgive me, Daily Mail website are writing in fear of their life. (Although, and as this wonderful Twitter account highlights so well, many Daily Mail readers certainly feel as if their entire culture, identity, and belief system are under constant attack from the ranked hordes of migrants/PC lefties/benefit claimants/gypsies/BBC executives [delete as appropriate] swamping the country).

I am firmly of the opinion that the advantages of anonymity are far outweighed by the difficulties associated with fostering an online culture where comments and critique — and, at worst, vicious abuse — are posted under cover of a pseudonym. For one thing, there’s the strong possibility of sockpuppets being exploited to distort debate. Julian Stirling, an alumnus of the Nanoscience Group here at Nottingham and now a research fellow at NIST, has described the irritations and frustrations of the sockpuppetry we’ve experienced as part of our critique of a series of papers spanning a decade’s worth of research. In the later stages of this tussle, the line separating sockpuppetry from outright identity theft was crossed. You might suggest, like PubPeer, that this type of behaviour is not the norm. Perhaps. But our experience shows just how bad it can get.

Even in the absence of sockpuppetry, I’ve got to come clean and admit that I’m really not entirely comfortable with communicating with someone online who is not willing to reveal their identity. I’ve been trying to get to grips with just what it is about anonymous/pseudonymous comments that rankles with me so much, having been involved in quite a number of online ‘debates’ where the vast majority of those commenting have used pseudonyms. When challenged on the use of a pseudonym, and asked for some information about their background, the response is generally aggressively defensive. Their standard rebuttal is to ask why I should care about who they are because isn’t it the strength of the argument, not the identity of the person making the comments, that really matters?

In principle, yes. But the online world is often not very principled.

Ultimately, I think that my deep irritation with pseudonyms stems from two key factors. The first of these is the fundamental lack of fairness due to the ‘asymmetry’ in communication. Dorothy Bishop wrote a characteristically considered, thoughtful, and thought-provoking piece on this issue of communication asymmetry a number of years back (in the context of the debate regarding the burqa ban). The asymmetry that’s established via anonymous commenting means that those who are critiquing an author’s (or a group’s) work are free to comment with impunity; they can say whatever they like in the clear knowledge that there’s a negligible chance of their comments ever being traced back to them.

Stell and his PubPeer co-founders claim that this is actually a key advantage of anonymity — those who comment are not constrained by concerns that they’ll be identified. But if their arguments are sound, and expressed in a polite, if critical, manner, then why the heck should they be concerned?  After all, it’s long been the case that “old school” journals — the APS’ Physical Review family of titles being a notable example — publish critiques of papers that have previously appeared in their pages. Those formal critiques are published with the names of the authors listed for all the world, or at least the readership of the journal, to see.

We should aim to change the culture so that critiquing other scientists’ work is seen as part-and-parcel of the scientific process, i.e. something for which researchers, at any career level, should be proud to take credit. Instead, the ease of commenting online from behind cover of a pseudonym or avatar is encouraging a secretive, and, let’s be honest, a rather grubby, approach to scientific criticism. I was therefore particularly encouraged by this announcement from The Winnower yesterday. It’s a fantastic idea to publish reviews of papers from journal club discussions and it’ll help to move the critique of published science to a rather more open, and thus much healthier, place.

The second, although closely related, aspect of anonymity that winds me up is that it essentially (further) depersonalises online communication. This helps to normalise a culture in which those commenting don’t ever take responsibility for what they say, or, in the worst cases — if we consider online communication in a broader context than just the scientific community — fail to appreciate just how hurtful their abuse might be. I’ve often seen comments along the lines of “It’s all just pixels on a screen. They should toughen up”. These comments are invariably made by those hiding behind a pseudonym.

As ever, xkcd has the perfect riposte…

This splendid poem also makes the point rather well: “Cuts and bruises now have healed, it’s words that I remember.”

Anonymity contributes to a basic lack of online respect and too often can represent a lack of intellectual courage. When we criticise, critique, lambaste, or vilify others online let’s have the courage of our convictions and put our name to our comments.

(Guest post) Doing a PhD: To move or not to move?

There’s nothing I enjoy more than a good old spat with my Head of School, Mike Merrifield. Our debates run the gamut of the academic’s traditional soap-box topics, but a theme to which we return regularly is the question of the importance – or not – of moving institution for early career researchers. I put forward my views on this in a blog post for physicsfocus last year. In this guest post (a first for “Symptoms…”), Mike explains why he and I disagree on the question of whether PhD students and postdocs should be assessed on the basis of their mobility.

Once again I find myself somewhat in disagreement with my friend and colleague Professor Moriarty.  This is never an entirely comfortable place to be, because he argues tenaciously, and, irritatingly, is right more often than not, but on this occasion I thought it was worth trying to spell out my reasoning with a little more nuance than is allowed by the 140-character sound bites of Twitter.

The catalyst for this disagreement was Philip’s response to an article in the THE entitled 10 steps to PhD failure.  His objection was to one of the pieces of advice given that

“Going somewhere else for your PhD shows that you have expanded your intellectual horizons. In contrast, others will view the fact that you did all your degrees at the same place as an indication that you lack scholarly breadth and independence, and that you were not wise or committed enough to follow this standard advice about studying elsewhere.”

which led to a lengthy Twitter discussion of whether mobility is an appropriate factor to consider as an indicator of drive and independence, where Philip’s position is “no,” and mine is “sometimes.”

First let me make it clear that I agree with Philip that the article is wrong if it implies that any such consideration is absolute.  Anyone contemplating where to do a PhD should weigh up a whole range of elements, which should include lifestyle as well as professional factors to establish where on the spectrum of work–life balance they want or need to position themselves.  While some people may relish the opportunities afforded by moving to a new locale and maybe even experiencing the culture of another country, others could be happily settled where they did their undergraduate degree, or have responsibilities that limit their ability to relocate, which may well then over-ride any other considerations.

But, pretty much by definition, work–life balance implies a compromise that does not optimise either side of the equation individually, and anyone considering where to do a PhD should at least think about the potential downsides to staying in the same institution:

  • You have already interacted with the academic staff at that institution quite closely, and heard at least some of what they have to teach you. Educationally, there are benefits to encountering other points of view and learning about topics where your current institution may have very little expertise.  You can certainly pick some of that up by going to summer schools, conferences, etc, but there is no substitute for being embedded in a different, challenging working environment to really get a new perspective on things.
  • What are the chances that you happen to have done your first degree at the best place in the World for whatever discipline has caught your interest? Surely, very few students apply to university on the basis of a specific sub-discipline; indeed, they may not have even reached the level to study and appreciate many of the more exciting possibilities until they are quite a long way into their undergraduate programmes.  It would therefore be an amazing coincidence if they happen to be at the institution where the most exciting and innovative work in that field is currently being undertaken.  If you are in the happy position of being willing and able to relocate, why wouldn’t you have the ambition to try to go to the best place in the World to pursue your interest?
  • If you decide to go beyond your PhD in an academic setting, you will have to convince someone to employ you in an appropriate postdoctoral post. Typically, you may be up against fifty-or-so other applicants, and the people responsible for selection will be considering a variety of factors to decide to whom to offer the job.  One of the things they are likely to be looking for is evidence of drive and independence.  It is unfortunately true that some students do drift into doing a PhD just by following the “path of least resistance” when they finish as undergraduates, as carrying on in the same place doing more-or-less the same thing is easier than making a more radical departure.  From a potential employer’s perspective, it can be difficult to separate such drifters from more dynamic, motivated individuals who have consciously opted to stay at their original institution, whereas someone who has moved to a different strong institution is clearly not suffering from inertia and has apparently made a pro-active career decision.  Thus, while absence of mobility does not constitute evidence of a lack of drive, it is an absence of evidence for such drive.
  • The same issue also arises a little later in an academic career, when a postdoctoral researcher will likely be applying for individual fellowships or faculty positions against even longer odds. At this point, the assessor is looking for evidence of the applicant’s originality.  I know from experience serving on fellowship and appointment panels that it can be very difficult, if not impossible, to disentangle the applicant’s intellectual contribution to the work from that of their collaborators.  One indicator is the level of variety in authorship of papers published – if an individual has never published a paper that doesn’t have their old PhD supervisor as an author, it can be very difficult for the assessor to determine whether all the ideas presented originated with that supervisor, too.  A wider variety of collaborations, on the other hand, suggests a much more outgoing approach to developing research ideas, not to mention the sought-after intellectual curiosity that draws one to new and different problems.  Such a breadth of authorship and interests is more readily established if one has worked in more than one research group.

Bear in mind that for all these considerations there will always be exceptions.  All that I really want to put across is that it is more straightforward to demonstrate the intellectual curiosity that drives the best researchers if you are able and willing to be mobile, and that if you are not then it is important to take extra steps to establish these traits in other visible ways.

Finally, I should reiterate that this piece was really only intended to lay out the implications of mobility (or immobility) for one side of work–life balance, and that the appropriate location for the fulcrum of that balance is a matter for all individuals to decide for themselves.

When scientists help to sell pseudoscience: The many worlds of woo

…or, as Peter Coles suggested, The Empirical Strikes Back

Until a couple of weeks ago, I was blissfully unaware that there was a secret out there that had the potential to change my life forever. I could do anything, be anything, get anything I so desired… if I only knew The Secret. Despite my hitherto abject ignorance, it’s not a particularly well-kept secret: millions know about it — and its universal law of attraction guiding ‘principle’ — largely due to Oprah Winfrey’s glowing and gushing endorsement.

Nor is The Secret anything new. The film which first gave it away was released nearly a decade ago. Like the best memes, however, its rate of infection continues to grow. Googling “The Law of Attraction” gives millions upon millions of hits, and counting.

I found out about The Secret via Tim Brownson and Olivier Larvor, both mentioned in my previous post, and with whom I had a fun and expletive-fuelled discussion for their Raw Voices podcast last Friday. We chatted about the regular claim made by ‘Law of Attraction’ gurus — who make a nice little earner out of selling their ‘expertise’ — that quantum physics is at the heart of The Secret.  (I’ll add a link to the podcast when it becomes available. Edit 31/08/2015. The podcast is here.)

So what is The Secret? Well, it’s nothing more than the idea that if you think positive thoughts, good things will happen to you. The rather vile converse tenet is also part of The Secret: anything bad that happens to you is simply because you’re not thinking enough good thoughts. The law of attraction is just another way of expressing The Secret: if you think those good thoughts and click your heels together three times, you’ll attract good stuff to you. (Quite whether it’s an inverse square law has not yet been ascertained.)

Where does the quantum physics come in? I’ll let Rhonda Byrne, author of The Secret, enlighten you by way of a few quotes from her book:

The law of attraction is the law of creation. Quantum physicists tell us that the entire Universe emerged from thought.

Your thoughts determine your frequency, and your feelings tell you immediately what frequency you are on.

The law of attraction is a law of nature. It is as impartial and impersonal as the law of gravity is.

How it will happen, how the Universe will bring it to you, is not your concern or job. Allow the Universe to do it for you.

The Universe offers all things to all people through the law of attraction.

It’s easy and cathartic, of course, to rant about the anti-scientific nature of this type of delusional woo and to bemoan the extent to which our culture celebrates irrationality and “mysticism”. As Toby Young pointed out in an article celebrating the end of Oprah Winfrey’s chat show,

Above all, it is Oprah’s incontinent sentimentality that I find so objectionable, the elevation of ersatz emotion over any critical thought. For Oprah, the only test of veracity worth the name is whether something “feels” true, as though our initial emotional response to something – whether a prospective lover, a spiritual belief system or a political leader – is a more reliable guide than a careful sifting of the evidence.

This elevation of what “feels true” above cold, hard, impersonal evidence is, of course, why Oprah was such a fan of “The Secret”. Nonetheless, a central credo of Byrne’s books — and of the extremely lucrative legions of woo they have inspired — is that the “law of attraction” is grounded in science. This claim lends The Secret an air of credibility by effectively exploiting the classic argument from authority fallacy: if quantum physicists say there’s something in it, then Byrne must be onto something. (There’s a fascinating type of cognitive dissonance at play here, however, in that when scientists deign to criticise The Secret they’re of course told by Byrne’s acolytes that science doesn’t know everything).

It’s always fun for us scientists to get on our high horse and loudly berate Byrne, Deepak Chopra, Robert Lanza, and the many and varied other purveyors of woo for their lack of understanding of science, and of quantum physics in particular.

But we’re a big part of the problem.

Compare Byrne’s claim,

The law of attraction is the law of creation. Quantum physicists tell us that the entire Universe emerged from thought.

…with the following statements from some of the most eminent names in quantum physics:

“[T]he atoms or elementary particles themselves are not real; they form a world of potentialities or possibilities rather than one of things or facts.” Werner Heisenberg

“In the beginning there were only probabilities. The universe could only come into existence if someone observed it. It does not matter that the observers turned up several billion years later. The universe exists because we are aware of it.” Martin Rees (from The Anthropic Universe, New Scientist (August 1987))

“We now know that the moon is demonstrably not there when nobody looks.” N. David Mermin (The Journal of Philosophy 78, 397 (1981))

“It was not possible to formulate the laws of quantum mechanics in a fully consistent way without reference to consciousness.”  Eugene Wigner

We have reversed the usual classical notion that the independent ‘elementary parts’ of the world are the fundamental reality, and that the various systems are merely particular contingent forms and arrangements of these parts. Rather, we say that inseparable quantum interconnectedness of the whole universe is the fundamental reality…  (David Bohm, quoted in The Tao Of Physics, Fritjof Capra (1975))

Can we really blame Byrne, Chopra, et al. for promoting the idea that we’re all part of one interconnected universe, whose structure/reality we influence with our thoughts, when not only popular science books/magazines, but the scientific literature, are awash with statements like those above? After all, the preceding list of quotes is from a set of highly respected physicists who have made huge contributions to our understanding of the universe. Moreover, when we lesser scientists speak about quantum physics to the wider public(s) we’ll often quote those luminaries and talk up the more ‘fantastical’ elements of the theory.

I suspect that there are physicists who would immediately baulk at my use of “fantastical” and would point out that we live in a world that is essentially quantum. I beg to differ. The world around us is indeed the result of literally countless quantum events. But the quantum weirdness is washed out precisely because of the uncountable and uncontrollable combinations of those unthinkably large numbers of quantum events.

We live in a world of classical physics. While this, on the face of it, is a statement of the bleeding obvious, those of us involved in communicating science need to be a little more upfront about it.  Yes, of course quantum theory is the jewel in the crown of science (at least from this lowly “squalid state” physicist’s perspective), underpinning the structure and behaviour of all matter. And, yes, there are of course fascinating, unsettling (to some more than others), and complicated connections between information theory and quantum theory at the most fundamental level. For what it’s worth, I’m of the opinion that there’s a lot to be said for Anton Zeilinger‘s interpretation of the “message of the quantum“:

…the distinction between reality and our knowledge of reality, between reality and information, cannot be made. There is no way to refer to reality without using the information we have about it.

…but we have to realise that for the macroscopic systems all around us every day, there are immeasurably many ways that information can ‘leak out’. Everything around us — the walls of my office, the trees I can see through my window, the pizza I had for lunch, the nitrogen and oxygen molecules in the air I’m breathing — is an “observer”. Consciousness not required. That long-suffering and infuriating feline is observed long before the box is opened.

Debates regarding the ontological vs epistemological aspects of the wavefunction (and its associated ‘collapse’, if you subscribe to the Copenhagen interpretation) continue to rage. This, by Matt Leifer, is by far the best review I’ve read on the question of the ontic vs epistemic nature of the wavefunction. I enthusiastically recommend Leifer’s paper for key insights into the “state of the nation” when it comes to the fundamental interpretations of quantum mechanics. (His blog posts are also well worth reading).

John Stewart Bell, whose contributions to quantum theory have been lauded — although not by all — as “the most profound discovery in science“, was rather scathing about what he called the FAPP (“for all practical purposes”) principle. This was, in effect, his equivalent of the “shut up and calculate” dictum (traditionally attributed to Feynman but possibly (probably?) originally due to David Mermin). He made arguments both against FAPP and in support of treating all of the universe on an equal quantum mechanical footing in his classic Against Measurement article back in 1990:

Is it not good to know what follows from what, even if it is not really necessary FAPP?

In the beginning natural philosophers tried to understand the world around them. Trying to do that they hit upon the great idea of contriving artificially simple situations in which the number of factors involved is reduced to a minimum. Divide and conquer. Experimental science was born. But experiment is a tool. The aim remains: to understand the world. To restrict quantum mechanics to be exclusively about piddling laboratory operations is to betray the great enterprise. A serious formulation will not exclude the big world outside the laboratory.

But there’s no getting away from the fact that “the big world outside the laboratory” does behave very differently from those “piddling laboratory operations” designed to test the fundamentals of quantum mechanics. In the headlong rush of excitement brought about by the inherent weirdness/counter-intuitiveness of quantum mechanics we too often gloss over this when explaining quantum mechanics to a non-scientific audience (or to a scientific audience unfamiliar with the minutiae of quantum physics). Put bluntly, it doesn’t matter how many times you attempt to repeat the double slit experiment at a macroscopic scale by firing marbles at a couple of slots cut in a piece of cardboard — you’re not going to see the appearance of an interference pattern.
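To put a rough number on that marbles-versus-electrons comparison, here is a back-of-envelope de Broglie calculation. The masses and speeds below are nominal assumptions of mine, not values taken from any of the experiments discussed here:

```python
# Back-of-envelope de Broglie wavelengths: why marbles don't diffract.
# The electron speed and marble mass below are illustrative assumptions.

h = 6.62607015e-34  # Planck constant, J s

def de_broglie(mass_kg, speed_ms):
    """Return the de Broglie wavelength (metres) for a given mass and speed."""
    return h / (mass_kg * speed_ms)

# An electron at a modest 10^6 m/s...
lam_electron = de_broglie(9.109e-31, 1e6)   # ~7e-10 m, comparable to atomic spacings

# ...versus an assumed 5 g marble rolled at 1 m/s.
lam_marble = de_broglie(5e-3, 1.0)          # ~1e-31 m

print(f"electron: {lam_electron:.1e} m")
print(f"marble:   {lam_marble:.1e} m")
```

The electron’s wavelength is on the scale of atomic spacings, which is why electron diffraction works; the marble’s is some sixteen orders of magnitude smaller than a proton’s diameter, so the fringe spacing for any conceivable pair of slots in the cardboard is immeasurably small.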

While physicists and philosophers continue to debate the reasons for this lack of “quantumness” on macroscopic scales — including the extent to which decoherence explains the loss of the interference pattern — the empirical observation is simply this: coherent interference, the bedrock of quantum weirdness, is not realised for macroscopic objects in the everyday world.

Zeilinger, Arndt and co-workers (including, at one time, my colleague now here at Nottingham, Lucia Hackermüller) have carried out elegant — or what are perhaps better described as heroic — experiments with ever-larger quantum objects to show that interference effects are possible even for molecules as large as 6 nm in size with a mass of 6910 atomic mass units. In a particularly impressive piece of work whose results were published in 2012, Juffman and co-workers imaged the molecule-by-molecule build up of a quantum interference pattern for two types of phthalocyanine molecule, namely PcH2 (C32H18N8, 58 atoms with a mass of 514 amu), and its larger fluorinated counterpart F24PcH2 (C48H26F24N8O8, 114 atoms, mass 1298 amu). Here’s a video of the formation of a molecular interference pattern a molecule at a time:

…and here’s a comparison of the interference patterns formed by (a) the smaller and (b) larger molecules.


Despite the remarkable level of experimental control achieved by Juffman et al., the visibility of the interference pattern for the larger molecule is much weaker due to the contribution of an incoherent background arising from the size of the source of the molecules and the molecular velocity distribution. 1298 amu is about 22 orders of magnitude smaller than the mass of a marble. (You can add another three orders of magnitude for the mass of the average human being). When decoherence is an issue for particles with a mass of just 1298 amu, held in an exceptionally well-controlled experimental environment, it’s clear just why coherent quantum interference isn’t a feature of the macroscopic world.
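Those orders-of-magnitude claims are easy to sanity-check. In the sketch below only the 1298 amu figure comes from the Juffman et al. work; the 5 g marble and 70 kg human are nominal masses I’ve assumed, so the exact exponents shift a little with the values chosen:

```python
# Sanity-check the orders-of-magnitude mass comparison.
# Only the 1298 amu molecular mass is from the experiment discussed in
# the text; the marble and human masses are nominal assumptions.
from math import log10

AMU = 1.66053906660e-27      # kg per atomic mass unit

m_molecule = 1298 * AMU      # F24PcH2, ~2.2e-24 kg
m_marble = 5e-3              # assumed 5 g marble
m_human = 70.0               # assumed 70 kg human

print(f"marble / molecule: ~10^{log10(m_marble / m_molecule):.0f}")
print(f"human  / molecule: ~10^{log10(m_human / m_molecule):.0f}")
```

With these nominal values the marble comes out at roughly 10^21 times the molecule’s mass, and a human at roughly 10^25 to 10^26; the precise exponents depend mainly on what one takes for the mass of a marble.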

This simple absence of quantum interference for everyday objects is enough, by itself, to entirely debunk the claims of Byrne, Chopra, Lanza and other woo-meisters. When was the last time they diffracted when walking through a doorway?

It’s no Secret: we live in a classical world.

Before I stumble to the end of this long-winded post, I want to tackle — as briefly as possible — two other frustrating aspects of quantum woo that, again, we physicists have perhaps not always done enough to counter. The first is the idea of the “holistic” interconnected universe, as described by Bohm in that quote above: “…inseparable quantum interconnectedness of the whole universe is the fundamental reality…”.

In one technical, and entirely unmeasurable, sense, as you read this your electrons are indeed entangled with mine. And they’re entangled with those of every animal, mineral, and vegetable on the planet. And with those of any small, blue, furry alien species yet to be discovered. As I’ve discussed in a previous post, this coupling arises in quantum theory because, in essence, there’s no such thing as complete, perfect confinement of an electron (or any other particle).

But this predicted coupling between electrons in two human beings in the same room, let alone on different sides of the globe, is so mind-bogglingly tiny — smaller than the smallest thing ever, and then some — that it has zero influence on anything we measure or could ever hope to measure. FAPP, there is no coupling at all (and that’s why we can treat the Pauli exclusion principle for electrons in a particular atom without ever having to worry about all the other atoms in the universe).

Here I disagree fundamentally with Bell in that I’m soundly of the opinion that the distinction between “practice” and “principle” is absolutely key to the science we can do and, in particular, how we explain that science to various audiences. It therefore impinges directly on the questions about the nature of reality we can address. That’s because I’m a cynical old experimentalist who has too often seen beautiful theoretical predictions (from that most powerful of tools in the condensed matter theorist’s toolbox, density functional theory) shot down in flames because of an ugly experimental result. Thus begins a process of rehabilitating and tweaking the theoretical methods: “Oh, we just need to use a different functional…The exchange-correlation term isn’t quite cutting it…The dispersion interactions aren’t accounted for…There’s an issue due to basis set superposition we need to address…“.

That type of feedback between experimental measurement (or observation) and theoretical calculation is absolutely central to science. I was therefore gobsmacked by claims last year that we’re supposedly entering a world of “post-empirical science“,  and was very happy to see those claims promptly and elegantly rebutted by Sabine Hossenfelder. If there’s anything that will help promote the further rise of outlandish woo, it’s a move by scientists towards the idea that claims about the nature of the universe don’t need to be supported by observation, data, or evidence; that the “internal consistency” or elegance of a theory is good enough for it to be accepted. Beauty is merely in the eye of the beholder.

If it disagrees with experiment, it’s wrong. In that simple statement is the key to science. It doesn’t make any difference how beautiful your guess is, it doesn’t matter how smart you are who made the guess, or what his name is… If it disagrees with experiment, it’s wrong. That’s all there is to it.

That was Feynman, of course, on the scientific method. Lest we forget.

Coupled with this rather hubristic notion of “post-empirical” science is the related troublesome confusion, as highlighted by Peter Coles, between the map and the territory. A mathematical model is exactly that — a model. We will further bolster the “woo age” movement if we start mistaking a mathematical model, i.e. the map, for the territory of reality. So, for example, while I can entirely appreciate just why Sean Carroll and others are rather wedded to the many-worlds interpretation (MWI) of quantum mechanics, claims that the MWI is “probably correct“, for reasons including that it has the smallest number of postulates compared to any other breed of QM, leave me cold. Why is the most accurate theory necessarily the most elegant or the most “compact” in its postulates? I seem to hear the distant sound of bongos being beaten in frustration…

Similarly, there’s an argument justifying the “reality” of the many worlds of the MWI that goes something along the lines of — if you’ll excuse the paraphrasing of Mr. Adams — “Hilbert space is big. Very big. You just won’t believe how vastly, hugely, mind-bogglingly big it is.” But an infinite dimensional Hilbert space is a mathematical construct. And a state in that infinite dimensional Hilbert space, or, indeed, in any finite dimensional Hilbert space we might consider, is not a physical entity. It’s a model. As Eric Scerri memorably pointed out over fifteen years ago in the context of claims that electron orbitals had been experimentally observed, a state in Hilbert space is about as real as the Cartesian x,y,z axes we use to model problems in classical physics. (This is not to say that I don’t see some of the attractions of the MWI over the traditional Copenhagen approach; Carroll does an impressive job of laying out the MWI’s virtues. Nonetheless, I remain unconvinced by Carroll’s stance on the issue of testability and share Chad Orzel’s agnosticism regarding the various ontic vs epistemic interpretations of quantum mechanics.)

What really matters when it comes to stemming the steady flow of woo, however, is that none of this quantum weirdness has any influence at all on how we live our lives. When we communicate science to a diverse audience we need to spend a little less time exploiting the “Wow. Quantum. Physics.” factor — and I’m hardly blameless here — and explain carefully why classical physics holds sway in the world around us.

If we don’t, we could very well be adding our own small quantum of woo to the spread of pseudoscience.

Science proves nothing

If you’re not a regular viewer of the BBC’s Sunday Morning Live — perhaps, like me, you’ve facepalmed your way through an episode before and sworn off it for life — you may have missed the following astounding revelation on this week’s programme:

I found out about this from Kash Farooq, of Nottingham Skeptics, in the middle of an e-mail exchange about the next Skeptics In The Pub event, at which Kash has very kindly invited me to speak. I’ve titled my talk “The Wow! and Woo of Quantum Physics” and I’m planning to spend a cathartic (for me, at least), and possibly somewhat vitriolic, forty minutes or so venting my spleen on the type of quantum quackpottery highlighted by the video above. (If you’d like to listen to the entire Sunday Morning Live discussion it’s available (for now) via the BBC iPlayer. It’s worth it for Steve Jones’ contributions.)

In what could be an holistic, quantum-entangled correlation spanning universal spacetime — or just possibly a coincidence — I was also contacted very recently by the dynamic duo of Tim Brownson and Olivier Larvor to ask whether I could talk about quantum woo for their Raw Voices podcast. (They’d watched this Sixty Symbols video from a couple of years back, yet, despite that very far from polished performance, still invited me on). That’s going to happen this Friday and after the podcast I’ll write a post dedicated to the utter lunacy that is quantum life coaching.

Yes, you read that right. Quantum. Life. Coaching. Here’s one example. And another. And this was especially irritating.

(For those of you who are familiar with So Long And Thanks For All The Fish and/or the Quandary Phase of H2G2, the fact that quantum life coaching is a thing could very well be my Wonko The Sane moment…)

For now, however, it’s the idea that science proves anything, let alone the existence of an afterlife, that I’d like to briefly address. The net is awash with assertions that science has proved (or disproved) just about everything from the (non-)existence of a god to the fact that exercise is poisonous [1]. Comments threads erupt into flame wars on the basis that “It’s been scientifically proven that…”. I’ve also had my fair share of scientific papers to review where the authors have claimed that their experimental results “definitively prove” that their theoretical model is correct.

But science proves nothing. All scientific results are provisional and tentative; science progresses via a succession of ever-better guesses/explanations. As we get more and more evidence for a particular explanation then our confidence in that model grows accordingly. Science, however, is not mathematics: there are no proofs. (And even in maths, there are different classes of proof…)

I discuss this distinction between deductive and inductive reasoning as part of the Politics, Perception, and Philosophy of Physics module here at Nottingham [2] and refer the students to this important and provocative article by Carlo Rovelli: Science Is Not About Certainty. I’ll quote Rovelli at length because he really hammers home the key point.

The very expression “scientifically proven” is a contradiction in terms. There’s nothing that is scientifically proven. The core of science is the deep awareness that we have wrong ideas, we have prejudices.

…we have a vision of reality that is effective, it’s good, it’s the best we have found so far. It’s the most credible we have found so far; it’s mostly correct.

Science is a continual challenging of common sense, and the core of science is not certainty, it’s continual uncertainty—I would even say, the joy of being aware that in everything we think, there are probably still an enormous amount of prejudices and mistakes, and trying to learn to look a little bit beyond, knowing that there’s always a larger point of view to be expected in the future.    

Edit 09:48, 19 August 2015 — This great article by Geraint Lewis, Professor of Astrophysics at the University of Sydney, on the same subject was brought to my attention via Twitter: Where’s the proof in science? There is none.

1. This, of course, needs no scientific study. It’s a self-evident truth.
2. I’m gearing up to update this for the upcoming academic year and am planning a series of blog posts and videos on the themes in the module.

If I hadn’t failed my exams, I wouldn’t be a professor of physics

I started writing this post a little after 06:00 this morning, the time at which schools and colleges were officially permitted to start releasing A-level results to hundreds of thousands of students across England, Wales, and Northern Ireland. I vividly remember the stomach-churning sense of dread thirty years ago as I awaited my Leaving Certificate results (the ‘Leaving’ is the Irish equivalent of the A-level system), and empathise with all of those students across the country biting their nails and pacing the floor as I write this.

By far the best advice for A-level students I’ve read over the last week was an open letter by Geoff Barton, Headteacher of King Edward VI school, to his Year 13 students, published in the TES on Tuesday: “Worrying about A-level results won’t help. They are out of your control“. Barton’s article resonated with me for a number of reasons, not least because I’m an undergraduate admissions tutor. It was the following paragraphs, however, that really hit home:

I know this because it happens each year, and it happened to me all those years ago when I failed one of my A-levels.

And what 30 years of experience has shown me is that if you end up not getting your first – or even second – choice of university place and have a tense couple of days on the phone sorting out new plans through the clearing process, then you will look back on this as something positive.

I ended up at a university I had never visited. It proved to be the best thing that happened in my education. And, like me, each year students come back at Christmas from their first term at university telling us that the unexpected change of plans has worked out to be brilliant.

Fortunately, I didn’t fail any of my Leaving Certificate exams — extreme exam failure was to come later on in my academic career — and I went on to start my BSc in Applied Physics degree at Dublin City University the following month. DCU was a small university at the time and I made my choice to go there not on the basis of prestige or national/international ranking — in any case, the pseudostatistical, pseudoscientific, faux-quantitative nonsense of university league tables hadn’t yet been spawned back in 1985 — but solely on the sense of excitement and, indeed, ‘belonging’ I felt when I attended a DCU Physics open day. (I’ll not bang on about the dubious value of league tables again, except to repeat that many A-level students show a healthy and laudable cynicism when it comes to the numerology of university rankings.)

Barton’s point about exam failure is particularly well made. I’ve been a personal academic and pastoral tutor for undergraduate students at Nottingham for the last eighteen years and it is always heartbreaking to have to tell a tutee that they have failed exams or, worse, can’t progress on their preferred course. This, of course, feels like the end of the world to them: how can they ever recover from what they see as abject failure?

So I tell them that I failed Year 3 of my four year BSc degree in Applied Physics at DCU.


Appallingly badly.

For a couple of exam papers I did little more than write my name on the cover sheet. This was because I was rather more focused on the band I was in at the time, returning home to Monaghan at weekends to rehearse/play gigs and using my revision time to write riffs, lyrics, and songs.

Not clever.

But if I hadn’t failed my third year exams, and had to resit the year, then I am absolutely certain that I would have similarly drifted through my fourth year and graduated with, at the very best, a low 2.2 or, most likely, a 3rd class degree. Failing my exams, in the words of a band whose songs we used to cover at the time, hit me “like a battering ram”. I repeated 3rd year and went into my final year with many orders of magnitude more motivation and commitment. I graduated with a 2.1 (the pass mark I was ‘carrying’ from my third year due to the resits didn’t, let’s say, work in my favour) — enough to take up a PhD.

Less than a year into my PhD I knew I wanted to pursue a career in academia. (For the reasons discussed here).

I recount this story to tutees and students who have failed exams to echo Barton’s advice that it really isn’t the end of the world when things don’t go to plan. I certainly don’t recommend failing exams as an effective study skill or as an efficient strategy for career development. Nonetheless, a failed exam or two can often act as a catalyst to improve a student’s overall motivation and performance.

But that’s enough about me. My secondary school and undergraduate days are so far in the past that my memories of those times have a subtle reddish hue. Let’s instead hear from Jason Patrone, who graduated last month from Nottingham with a thoroughly well-deserved 1st class hons BSc in Physics (and is featured on the front cover of the School’s most recent newsletter):

I got a C, D and E grade at A-level. I then worked for six years in a job I didn’t find rewarding, before making the decision to return to university in 2011. I did the Foundation Year because of the `non-standard’ A-level grades, getting an overall mark of 81% for the year. I then transferred to the BSc and for each year of the degree I secured a 1st class mark.

The second year of the BSc I found the most challenging. Would I have put the same effort in, come the 2nd year crunch time, if I had sailed through A-levels? I doubt it.

Whether it means a kick up the arse for a bogey year/bad results, or facing the harsh realities of a crap job, any glimpse at what bad results leads to — or even just a blunt reminder that you didn’t do what you know you are capable of — works wonders.

Or, as Barton so eloquently puts it in his open letter, “the reality is that sometimes it’s the unexpected events in our lives that are the richest and most rewarding.”


[Edit 13/08/2015, 11:03 — Drat. Forgot to mention that the cartoon above is from the wonderful xkcd and that it’s made available under a Creative Commons licence.]

Be afraid. Be very afraid.

A few weeks back, I was asked by Kelly Oakes, Science Editor for BuzzFeed, to write 100 words or so on what really scared me as a scientist. The ‘brief’ was that it could be serious, funny, silly — whatever I liked. Kelly also asked another eight scientists about their greatest fears. The BuzzFeed article was published yesterday and, my contribution notwithstanding, it’s a great read. I particularly like the closing quote from Hope Jahren: “I fear being misquoted on BuzzFeed, and having to spend the remainder of my career explaining the missing context.”

The hyperlinks in the piece I sent Kelly couldn’t be included so I’m posting the original, URL-enabled version below. While it’s certainly not a particularly restrained piece, when it comes to vitriol-fuelled writing I’m a rank amateur compared to Chad Orzel, whose post on the unholy pairing of Chopra and Michio Kaku is a masterclass in venting one’s spleen: The Physics Of The Imbecile.

Here’s what scares the bejaysus out of me…

Deepak Chopra has 2.5M followers on Twitter.


Let that sink in for a moment.

2.5M people follow a man whose key talent is the ability to generate vacuous pseudoscientific bollocks-speak at a rate hitherto thought to be beyond human capability. His books sell bucket-loads, he’s in huge demand as an ‘inspirational’ speaker, and even learned academics have urged us to take Chopra seriously.

As a scientist – indeed, as a human – I find this both rather depressing and deeply unnerving. The Chopra story is essentially a 21st Century reboot of the Emperor’s New Clothes — a cautionary tale of the extent to which valueless pseudoscience can sweep the world.

Wake up and smell the quantum

I had a lot of fun working with Brady Haran on this Sixty Symbols video, uploaded yesterday:

We physicists spend a lot of time talking up the weird and wacky aspects of quantum mechanics — entanglement, teleportation, many worlds, tunnelling, the philosophical ramifications of the wavefunction…, you know the drill. For a change, I wanted to make a video with Brady that highlighted just how many aspects of the quantum world can be explained in terms of phenomena and patterns we’re used to seeing in the world around us; in other words, to ground quantum principles in everyday physics. And what could be more commonplace — some might even say mundane (though not me) — than a cup of coffee?

The video describes the staggering quantum corral images which were created by Mike Crommie, Chris Lutz, and Don Eigler back in the early nineties, as discussed in this ground-breaking paper. (Unfortunately, due to some crossed wires between Brady and me, and largely because I swamped his e-mail inbox with different links to various descriptions of the quantum corral work, the video mistakenly credits Joe Stroscio and Don Eigler — rather than Crommie, Lutz, and Eigler — for the image below. Joe Stroscio has done some phenomenal scanning probe work in his time, but he’s not responsible for the corral.)


The corral is formed of 48 iron atoms which have been painstakingly put in place, one at a time, using the tip of a scanning tunnelling microscope. (Coincidentally, Joe Stroscio and colleagues have introduced autonomous atom manipulation which allows these types of atomic arrangements to be “dialled in” and fabricated directly under computer control). The ripples that can be seen both inside and outside the corral are due to the variation in electron density across the surface — electron waves scatter off (i.e. are reflected from) the Fe atoms, interfere, and we’re left with a standing wave inside the corral. Because the corral is circular, that standing wave is described mathematically by something known as a Bessel function. And that precise mathematical function also describes the standing wave that forms in a cup of coffee, even though the diameter of the coffee cup is roughly six million times larger than that of the corral.
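If you’d like to play with this yourself, here’s a minimal sketch (not code from the original corral paper — the radius and normalisation are purely illustrative) of the lowest circularly symmetric standing-wave mode: a J₀ Bessel function whose wavevector is fixed by demanding a node at the boundary, whether that boundary is 48 Fe atoms or the rim of a mug.

```python
import numpy as np
from scipy.special import j0, jn_zeros

# Lowest circularly symmetric standing-wave mode of a circular "box"
# (corral or coffee cup): amplitude ~ J0(k * r), where the wavevector k
# is set by requiring a node at the boundary r = R.
def fundamental_mode(r, R):
    """Amplitude of the lowest J0 mode at radius r, for boundary radius R."""
    k = jn_zeros(0, 1)[0] / R   # first zero of J0 fixes k*R
    return j0(k * r)

R = 1.0                          # illustrative radius (arbitrary units)
r = np.linspace(0.0, R, 201)
psi = fundamental_mode(r, R)

# Maximal at the centre, zero at the boundary -- and the shape depends
# only on r/R, which is why a ~7 nm corral and a ~4 cm cup of coffee
# host exactly the same pattern.
print(psi[0], psi[-1])
```

The key point is in the last comment: the mode is a function of the scaled coordinate r/R alone, so the six-million-fold difference in diameter drops out entirely.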

Physicists, and scientists in general, are very used to seeing mathematics describe very many aspects of our reality. This degree of familiarity with the ubiquity of mathematics in nature can sometimes make us — well, at least sometimes makes me — rather too blasé about just how utterly remarkable it is that precisely the same mathematical function can describe behaviour in completely different materials, spanning a huge range of length scales, and in entirely different environments. The only thing that’s common between the cup of coffee and the quantum corral is the symmetry. And yet the coffee and the electrons produce exactly the same pattern. (Well, as long as the critical “sloshing” point for the coffee isn’t reached. There was a great paper in Physical Review E back in 2012 on this topic).

What I don’t say in the video, however, is that there’s something very special about the copper sample on which the Fe atoms are sitting. It’s called a Cu(111) surface, where the numbers, known as Miller indices, describe the direction in which a copper crystal has to be cut to expose that particular plane. (Symmetry is all-important here too). At the Cu(111) surface the electrons are free to move across the plane; we call the system a 2D electron gas (although, in the video, I use the term “electron fluid” to bring out the comparison with the coffee. This isn’t such a “reach” – the term Fermi liquid is used throughout solid state physics). Not all surfaces give electrons this freedom to roam. The corral experiment would never work on a Si(111) surface, for example, because the electrons there, due to the strong covalent bonding in the crystal, simply don’t have the same leeway to explore the space around them.

I’ve written before that I’ve always been impressed that the comments under Sixty Symbols videos buck the usual trend for below the line online commentary, particularly at YouTube: the points the Sixty Symbols audience raise are very often insightful, smart, and even erudite at times. This is again true for the “quantum coffee” video. The following comment asks a particularly perceptive question related to what is causing the waves — are the electrons “driven” by the STM tip in some way?


The current from the STM tip is not responsible for “driving” the pattern. Or, to put it another way, the standing wave state of the electrons is not produced by the probe. Although STM can certainly be used in a very invasive way — this is precisely how the atoms are arranged to form the corral in the first place — it can also be used as a relatively non-invasive probe of the electron density. Indeed, the same type of scattering is seen at naturally occurring defects (e.g. atomic step edges), as clearly seen in the image below (also taken from the IBM gallery). The ripples at the step edges are what are called Friedel oscillations and, again, arise from electron waves being back-reflected from the step.
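A schematic of those step-edge ripples is easy to sketch. The snippet below is purely illustrative (the Fermi wavevector, amplitude, and the assumed 1/x decay envelope are stand-ins, not values from the IBM data — the exact decay power depends on dimensionality and the scattering details); the one robust feature is that the density ripples with a period of half the Fermi wavelength.

```python
import numpy as np

# Schematic Friedel oscillation in the electron density near a step edge:
# a ripple at twice the Fermi wavevector, decaying away from the step.
# (The 1/x envelope here is an illustrative choice.)
def friedel(x, k_f, amplitude=0.5):
    """Fractional density modulation at distance x from the step (x > 0)."""
    return amplitude * np.cos(2 * k_f * x) / x

k_f = 2.0                         # illustrative Fermi wavevector, arbitrary units
x = np.linspace(0.5, 20.0, 400)
density = 1.0 + friedel(x, k_f)   # ripples superimposed on the uniform density

# The ripple period is pi / k_f, i.e. half the Fermi wavelength 2*pi / k_f.
print(np.pi / k_f)
```

That half-wavelength period is why the ripples in the STM image are a direct, real-space readout of the Fermi wavevector of the surface state.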


As is my wont, I sneak a guitar into the video as an example of a one dimensional standing wave, in contrast to the 2D Bessel function pattern. In another key example of the pervasiveness of mathematics, there are particularly striking parallels between waves on a guitar (and other lesser musical instruments) and the quantum world. I’ve banged on about this at length before in the context of the Heisenberg uncertainty principle, so won’t hammer home the point again here. But what you might well ask is whether it’s possible to make a one-dimensional “corral” out of a line of atoms (as opposed to a 2D container).

It is. The image below shows the electron density in a 1D chain of Pd atoms, created and imaged using an STM by Nilius, Wallis and Ho ten years ago and elegantly described in this paper. By applying a different voltage to the STM tip, they can access different electron energies. The patterns of electron density, i.e. the standing waves, that they see as a function of voltage are very similar to those seen for waves on a guitar string. If you want to know more about this, including some of the not-so-gory mathematical detail, I cover it in the 1st year undergraduate Frontiers in Physics module here at Nottingham. Chapter 4 of the ebook for the nanoscience component of the module covers standing waves in the 1D atomic chain.
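For a feel for the guitar-string analogy, here’s a toy particle-in-a-box sketch (a deliberately crude stand-in for the Pd chain — the box length and grid are illustrative, and none of the numbers come from the Nilius, Wallis and Ho paper): the electron densities are sin²(nπx/L), exactly the mode shapes of a string clamped at both ends, and the number of density maxima along the chain equals the quantum number n.

```python
import numpy as np

# Standing waves of an electron confined to a 1D "box" of length L
# (a crude model of the Pd atom chain): psi_n(x) ~ sin(n*pi*x/L) -- the
# mode shapes of a guitar string clamped at both ends.
def density(n, x, L):
    """Electron probability density |psi_n(x)|^2 (unnormalised)."""
    return np.sin(n * np.pi * x / L) ** 2

def count_maxima(d):
    """Number of interior local maxima in a sampled density profile."""
    return int(np.sum((d[1:-1] > d[:-2]) & (d[1:-1] > d[2:])))

L = 1.0                           # illustrative chain length, arbitrary units
x = np.linspace(0.0, L, 201)

# Higher STM tip voltages pick out higher-n states; counting the bright
# density maxima along the chain reads off the quantum number directly.
for n in (1, 2, 3):
    print(n, count_maxima(density(n, x, L)))
```

That counting trick is, in essence, what the voltage-dependent STM images let you do by eye.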


Another aspect of 2D standing waves we didn’t explore in the video, but which I’m hoping Brady and I will cover in the not-too-distant future, is the relationship of the quantum corral to drums and drumming. One of my all-time favourite scientific papers had that precise topic as its theme. But I’ll bang that particular drum in a future blog post.