Left in the lurch? On Corbyn, comedy and credibility

This arrived in the post at the beginning of July:

[Image: Labour Party membership card]

Yep, I signed up to vote for Jeremy Corbyn in the upcoming Labour leadership elections. (Here’s how to join, if you’re interested. It’s a quick and entirely painless process.) If you’re a UK resident, and unless you’ve been living in an alternate reality for the past month — in, for example, a parallel universe whose inhabitants still give a toss about Tony Blair’s proclamations — you’ll know that Corbyn has had a meteoric rise to the top of the Labour leaders’ board. Yesterday he was named as the bookies’ favourite, at 5-4 odds; six weeks ago he was a 100-1 outsider. This is the “biggest price fall in political betting history” according to William Hill.

This eloquent and compelling piece lays out many of the reasons why I’m voting for Corbyn. (It doesn’t, however, mention that he’s a staunch republican, having petitioned Blair to remove the Royal family from Buckingham Palace and place them in “more modest” accommodation. For that alone he’d get my vote.) And, yes, before you ask, I know about the homeopathy thing. Bear with me; I’ll get to it in a future post.

But according to a slew of articles in The Guardian and The Observer over the last few weeks, I’m a narcissistic, deluded, reactionary, dogmatic, immature, head-in-sand (and foot-in-sandal), tribal, ideologically-driven, confused lefty dinosaur for even beginning to entertain the slightest inkling of an idea that Corbyn’s leadership challenge might possibly be a good thing for not only the Labour party but for the entire country.

I’ve got used to reading those articles over my bowl of muesli in the mornings, but Wednesday’s Guardian upped the ante just that little bit too far. In a piece claiming that Corbyn was humourless for having the temerity to say that, should he win, he’d like Lennon’s Imagine played at his victory rally, Jason Sinclair — yeah, me neither — made the truly remarkable claim that “We demand our politicians can display their common sense by telling good jokes”.

Errmm, what?

No, really. What?

It turns out that Sinclair, a copywriter, is responsible for the @corbynjokes Twitter feed, the focus of the article he wrote for The Guardian. To be fair to Sinclair, his feed generated one OK joke. This one:

Sinclair must have been spending quite some time in his own peculiar parallel universe, however, if he thinks that politicians tell good jokes. Either that or his threshold for what he considers good comedy is startlingly low. (Perhaps he moonlights as a Radio 4 sitcom writer?)

I’m going with the latter explanation. Here’s why. Another line from Sinclair’s article…

“Boris Johnson is a major political force in part because he has passable comic delivery.”

Hmmm. No politician, including Johnson, has ever made me laugh as a result of their comic delivery. And I’m not talking about gut-busting, tears rolling down cheeks, rolling on the floor laughter. Nor a hearty chuckle. Or even a knowing, spontaneous giggle. Indeed, I’d be more than happy if a politician’s joke could coerce even a weak smile from me every now and again. Instead, politicians’ attempts at humour are invariably so arse-clenchingly, toe-curlingly, cringe-makingly, gob-smackingly embarrassing that my natural reaction is to die a little inside on their behalf.

Now, the explanation for my lack of appreciation of, as Sinclair would have it, the natural comedic flair of our political class could be, of course, that I’m a dour, humourless, bearded lefty gobshite who is genetically incapable of cracking a smile occasionally. While I’d freely admit that grumpiness is not exactly a stranger to me, there are quite a few exceptionally talented writers out there whose well-observed, intelligent, witty, and original insights regularly crack me up. One of these is Charlie Brooker, who has written a wonderfully acerbic weekly column for the Guardian for many years. Here’s what Brooker had to say about a certain tousle-haired toss… politician back in 2008:

On May 1 London chooses its mayor, and I’ve got a horrible feeling it might pick Boris Johnson for similar reasons. Johnson – or to give him his full name, Boris LOL!!!! what a legernd!! Johnson!!! – is a TV character loved by millions for his cheeky, bumbling persona. Unlike the cartoon MP, he’s magnetically prone to scandal, but this somehow only makes him more adorable each time. Tee hee! Boris has had an affair! Arf! Now he’s offended the whole of Liverpool! Crumbs! He used the word “picaninnies”! Yuk yuk! He’s been caught on tape agreeing to give the address of a reporter to a friend who wants him beaten up! Ho ho! Look at his funny blond hair! HA HA BORIS LOL!!!! WHAT A LEGERND!!!!!!

Copywriters are not exactly renowned for their originality. Like many of their colleagues in marketing and advertising, they have a reputation for churning out retreads of bland boilerplate with little or no creative copy. (See Private Eye, passim). This might help explain why Sinclair’s expectations when it comes to insightful and intelligent comedy are so low – he’s working in a field where wit is the exception rather than the norm.

Marketing, advertising, and copywriting are too often the living dead embodiment of style over substance, responsible for the type of banal bollocks designed to appeal to those who are entirely at ease with cliched, vacuous tripe, i.e. New Labour’s (and the Blairites’) stock-in-trade.

I prefer some substance to my politics. And to my comedy.

Here’s Bill Hicks on the subject of marketing.

Working 9 to 5 (ain’t no way in academia?)

Science magazine has been giving some distinctly dodgy careers advice of late, with two articles in quick succession seemingly written by authors who were cryogenically frozen in the fifties and revived in 2015 so as to give us the benefit of their views. This week’s Times Higher Education has an article on a letter of protest, signed by hundreds of researchers, about Science’s repeated use of damaging stereotypes; the letter is being sent to the American Association for the Advancement of Science (AAAS) on Tuesday. (There’s still time to sign it).

The following paragraph, from the most recent article criticised in the letter to the AAAS, has been forensically dissected in a couple of blog posts I recommend — Bryan Gaensler’s “Workaholism isn’t a valid requirement for advancing in science” and Chad Orzel’s “Scientists should work the hours when they work best”.

I worked 16 to 17 hours a day, not just to make progress on the technology but also to publish our results in high-impact journals. How did I manage it? My wife—also a Ph.D. scientist—worked far less than I did; she took on the bulk of the domestic responsibilities. Our children spent many Saturdays and some Sundays playing in the company lobby. We made lunch in the break room microwave.

There’s a lot to wince at here, including the fact that the author’s wife “took on the bulk of the domestic responsibilities” while he blazed a trail, the children spending “many Saturdays” playing in the company lobby while dad worked, and the idea that his wife “worked far less”. (On a day when the kids are bickering and being particularly fractious, I’d find 16 hours in the office/lab a piece of cake compared to the rigours of domesticity).

But here’s the rub. The “I worked 16 to 17 hours a day” bit resonates with me. And I am just a little bit uneasy about sending early career researchers the message that a successful academic career — at least in the present system — doesn’t involve long hours. I think it’s misleading and naive to pretend that it doesn’t. Before I get shot down in flames, I should stress that this doesn’t mean I’m suggesting that students and postdocs should be encouraged to work themselves into the ground. Nor am I an advocate of the current system — things have to change. The following, which I contributed to an article entitled “Parenthood and academia: an impossible balance?” in the THE last year, might help to explain my perspective.

“Daddy, Niamh won’t give me the loom band maker. And she won’t stop singing Let It Go really loudly all the time. Tell her to stop.”

“OK, calm down. I’ll be with you in a second. Just let me finish this email.”

“Daddy! She still won’t give me the loom bands. And she still won’t stop singing.”

“OK. OK. With you in a second.”

“DADDY!”

Deep sigh. Close laptop lid.

“OK. Coming now.”

I’d foolishly broken my golden rule again: never attempt to work at weekends or before the kids go to bed. As a certain porcine mainstay of children’s television who is wise beyond her years (and species) would put it: “Silly Daddy!”

Niamh, our first child, was born in 2003, when I was a reader. Her sister, Saoirse, arrived in 2005, when I was promoted to a chair, and her brother, Fiachra, came along another three years later. So my career was rather firmly bedded in before, in our mid-thirties, my wife, Marie, and I decided to start a family.

It has still not been entirely straightforward for us to juggle Marie’s shifts as a nursing auxiliary at the Queen’s Medical Centre (next to the university) with the time and travel demands of my work in academic physics. But if the children had started arriving a few years earlier than they had, when I was a (relatively) fresh-faced new lecturer, I don’t quite know how I’d have coped.

I found the transition from postdoctoral researcher to lecturer something of a culture shock. As a postdoc, your focus is almost entirely on research. A lectureship requires that focus to shift rapidly between at least three separate roles: teaching, research supervision and the ever-present administrative demands of both. Add in the demand to produce “impact” and you end up with a role that amounts to at least two full-time jobs in one. As a lecturer, I regularly worked 70- or 80-hour weeks (including weekends, of course), and this is not at all unusual in physics. Clearly that is not compatible with parenthood.

Nowadays, although I do sometimes fail, I try my utmost to keep evenings and weekends free to spend with the family. I have got into the habit of getting up very early in the mornings – around 4am – to have a few hours to work before taking the children to school. They are easily the most productive hours of my day. I have also tried, as much as possible, to cut down on the amount of travel to conferences and workshops I do. Again, this is much easier to do at this stage of my career than it would have been 10 years ago. Nonetheless, I still spend too much time away; so much more could be done via videoconferencing.

The working culture of your school or department is, of course, an essential factor in how easy you find it to balance family and work commitments. In my experience – and I know that this holds true for many of my colleagues – the School of Physics and Astronomy at Nottingham, where I have been since I was a postdoc, has been exceptionally supportive. As a testament to this, it was this year awarded “champion” status in the Institute of Physics’ Project Juno for “taking action to address gender inequities across its student and staff body”. I am not the first to observe that the changes facilitated by that project have resulted in a working environment that is better for everyone.

Still, I’m going to have to end on a downbeat note. Because I know for a fact that the research outputs I had when I landed my lectureship in 1997 would be nowhere near enough to secure that position today. Indeed, I wouldn’t even be shortlisted. The bar for entry to the academy is being raised at an extraordinarily high rate. I’m sure I don’t need to spell out the implications of this for the work-life balance of young scientists.

Let’s not beat around the bush: the competition for academic positions is intense. I’ve referred before to this letter, published in Physics World a couple of months back, which makes the point especially well for my own discipline.

 

In response to that careers advice column in Science, I’ve seen tweets and comments stating that long hours aren’t really necessary because we should “work smarter, not harder”. I’ve heard this argument quite a bit over the years. It’s rather trite advice in my opinion. Science simply doesn’t work to order — so much research involves going down blind alleys, reversing, inadvertently (or deliberately) taking a diversion, doing a U-turn, getting things wrong, getting things right only to find out that it doesn’t help solve the original problem, and in the end finding that Edison’s “one percent inspiration, 99 percent perspiration” appraisal really isn’t too far off the mark.

Working “smarter” simply isn’t an option in many cases — sheer bloody-minded tenacity is what’s required, and that usually means long and frustrating stints in the lab. Yet sometimes, when it works, the culmination of that effort is the most enjoyable aspect of the entire scientific process — we endure the pain and the long hours just to hit that (very) occasional high.

I’ll stress again that there is certainly no expectation from me that students and postdocs in the group here at Nottingham do long hours. I give them advice very similar to that offered by Chad Orzel in his blog post — do what works for you (and I certainly don’t dictate a required number of hours per week). But, similarly, I don’t feel embarrassed at all to say that I’ve enjoyed working long hours at times — lots of researchers border on the obsessive when it comes to their work and bouts of intense single-mindedness can often be an exciting, infuriating, and central element of the scientific process for some.

Orzel describes his far-from-traditional working pattern as a postdoc — including the obligatory late-night visits to vending machines — as “a dumb thing I did”. As someone who has similarly regularly enjoyed the late-night, mid-experiment caffeine injections provided by a machine-generated beverage which tasted “almost, but not quite, entirely unlike tea” (or, indeed, any other caffeinated drink), I beg to differ. It worked for him — and for me — at the time. Whether it was dumb or not is entirely down to the circumstances of the individual researcher (as, to be fair, Orzel himself goes on to say in his post).

There’s also much more to academia than hands-on research. When you start as a new member of academic staff, you have to keep the research side going (and build up a new independent programme of work), start designing and giving lecture courses (and marking coursework/exams), get used to a whole new world of admin pain, and try to be the best tutor you can be. “Work smarter, not harder” doesn’t cut it — there are only a finite number of hours in the week and, as I describe in that THE article above, I couldn’t have kept my head above water in that first couple of years without burning quite a lot of midnight oil.

I’m not moaning about this (promise). I love my job and some of the key reasons I’m drawn to it are the diversity of the things I can do, the independence, and the large degree of flexibility in working patterns. Let’s not sell PhD students and postdocs a pup, however. Academia places large demands on our time and a 37.5-hour working week is simply not the norm. (Even if the Higher Education Funding Council for England and Research Councils UK assume that academics indeed work a 37.5-hour week. Apparently that’s a “fair and reasonable” figure. But that’s a story for another post…)

I wasn’t going to menshn this again, but…

I really was not planning to revisit the Tim Hunt debacle. I’ve already written a lengthy post about it (which led to quite a number of online debates and exchanges via Twitter, blog comments, and YouTube — some more ill-tempered than others). But my e-mail inbox filled up again yesterday afternoon with quite a number of messages pointing me to Louise Mensch‘s contributions to the story — of which I was more than aware — and, more importantly, alerting me to the fact that Evan Harris had weighed into the debate. (In case you were wondering about the title of this post, it was inspired by Mensch). Harris’ involvement had, for some reason, passed me by.

Evan Harris is someone for whom I have a great deal of respect. It was a great shame he lost his seat in parliament by such a small margin back in 2010 as he was a dedicated MP, the Lib Dems’ spokesman for science from 2005, and an extremely effective member of the Science and Technology Select Committee from 2003 until 2010. The scientific community in the UK owes him a debt of gratitude for his sterling work during that time. The fact that he’s a patron of the British Humanist Association also doesn’t hurt. (As this post might betray, I’m also a card-carrying member of the BHA).

So I was surprised to see that Evan had called Mensch’s version of the events “forensic” and that he adopted a position on the Hunt furore which was rather counter (to put it mildly) to that of Dorothy Bishop, David Colquhoun, and Sylvia McLain, all of whom Mensch criticises in her blog post (and all of whom I agree with on the matter of Hunt’s comments). Harris’ Twitter timeline would also seem to imply that he is of the opinion that Hunt’s comments were merely a harmless/misjudged joke that was taken out of context and that UCL and the Royal Society overreacted:

The bit I find most perplexing and bizarre in all of this is that criticism of Hunt (and the loss of his honorary position at UCL) is interpreted so often in terms of infringement of free speech/academic freedom. I’ve posited the following scenario, which I’ve described in comments threads elsewhere, during various discussions with colleagues. I wonder what Harris’ (or, indeed, Mensch’s) response to the questions at the end might be?


I’m undergraduate admissions tutor for the School of Physics and Astronomy at the University of Nottingham. A couple of weeks ago I stood up in front of hundreds of potential applicants and their parents for two days running at our open days and gave talks about the teaching and research we do in the School and the various aspects of the physics courses available at Nottingham.

Let’s say that I made the following “gag” at some point during my open day talk (or, indeed, opened up with it):

“Let me tell you about my trouble with girls in physics courses. Three things happen when they are in the lab: you fall in love with them, they fall in love with you, and when you criticise them they cry. Perhaps we should make separate labs for boys and girls taking our courses?

Now, seriously, I’m impressed by the strides made by girls in our physics courses over the years I’ve been at Nottingham. Science needs women, and you should do science, despite all the obstacles, and despite monsters like me.”

Then, when asked by a student during the Q&A session at the end of my talk to clarify my comments, I say:

“I’m really sorry if I have caused any offence. I was only being honest.”

Would my Head of School be justified in calling me into his office, explaining why my comments weren’t entirely appropriate for that audience, and asking me to stand down from the Admissions Tutor aspect of my job?

…or would that be a violation of my academic freedom?

Pushing the potential of probe microscopy

Christian Wagner and co-workers at Forschungszentrum Jülich have developed an elegant and exciting way of mapping the electrostatic potential of single atoms and molecules with very high sensitivity. Their paper on this new approach, which they call scanning quantum dot microscopy, was published in Physical Review Letters yesterday. I was asked by the American Physical Society to write a Viewpoint article on Wagner et al.’s paper and jumped at the invitation because I found the work so fascinating and important. Here’s my take on what they’ve done: Pushing The Potential of Probe Microscopy

If it looks like a duck…

Last week I attended, and spoke at, a session entitled “Frontiers of Scanning Probe Microscopy” at this year’s Microscience Microscopy Congress in Manchester. The focus of the presentation I gave there — and it’s a recurrent theme in the talks and seminars I give at the moment — was the thorny problem of identifying and interpreting artefacts in images of atoms and molecules.

Microscopists tend to be skeptical about that old maxim, “seeing is believing”. But, as I’ll show below, sometimes we’re simply not skeptical enough. This is not just an issue for those involved in imaging and microscopy — it’s at the core of all science: how do we know our measurements are an accurate picture of reality? (Whichever version of reality we prefer…).

Every image out there, regardless of how it was created, is a convolution of the properties of the object and the characteristics of the imaging system. (And that includes our eyes). The word convolution has its roots in the Latin convolvere, meaning “to roll together”. That’s a great description of the mathematical physics underpinning the process: the functions describing the object and the imaging system are indeed rolled together (via a convolution integral).
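For the mathematically inclined, the standard one-dimensional form of that “rolling together” looks like this (the image case is simply the two-dimensional analogue):

$$ (f * g)(x) = \int_{-\infty}^{\infty} f(u)\, g(x - u)\, \mathrm{d}u $$

where f describes the object and g describes the imaging system.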

Twenty-five years ago, the Hubble telescope gave us a spectacularly (un)illuminating insight into the essence of convolution. The images below, taken from the Wiki page for the HST, vividly show the effects of the convolution process when the imaging characteristics of the telescope were, let’s say, rather poor (on the left) and when they were much improved by the addition of corrective optics (on the right).

[Image: Hubble Space Telescope images of the galaxy M100 before and after the mirror correction]

The imaging system — and this holds true for any imaging system, be it a microscope, telescope, camera, or whatever arbitrary combination of optics we put together — is characterised via a very simple concept: the point spread function. That function does exactly what it says on the tin: it captures how the image of a single point in the object spreads in space as a result of the imaging system. We then take the point spread function and apply it in turn to all of the points in an object in order to determine what the resulting image will be. For the HST images above, the point spread function is substantially broader for the image on the left than for that on the right.
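To make the idea concrete, here’s a minimal Python sketch using entirely made-up data (nothing to do with the real HST optics): an artificial “sky” of point-like sources is convolved with a narrow and with a broad Gaussian point spread function. The broad PSF smears the intensity of each point over many more pixels, which is essentially what happened in the image on the left above.

```python
# Minimal illustration of convolution with a point spread function (PSF).
# All data here are synthetic -- this is a sketch, not real telescope processing.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# The 'object': a dark sky containing a handful of point-like sources
obj = np.zeros((256, 256))
ys, xs = rng.integers(0, 256, size=(2, 20))
obj[ys, xs] = 1.0

def gaussian_psf(size, sigma):
    """Normalised 2D Gaussian point spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

# Convolve the same object with a narrow PSF ('good optics')
# and with a broad PSF ('poor optics')
sharp = fftconvolve(obj, gaussian_psf(31, sigma=1.5), mode="same")
blurred = fftconvolve(obj, gaussian_psf(31, sigma=6.0), mode="same")

# The broad PSF spreads each point's intensity much further,
# so the peak brightness of each 'star' drops accordingly
print(sharp.max(), blurred.max())
```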

I should stress that these types of convolution effect are, of course, not limited to images — they hold for any measurement and any type of signal. Ten years ago, I taught an undergraduate module on Fourier analysis and spent quite some time on convolution. (I’ll save the elegance of the Fourier treatment of convolution for a future post). I used the various sound samples below to show the students how convolution works for an audio, rather than a visual, signal. In this case, the point spread function is the response of the surroundings (be it a cave, lecture theatre, auditorium, forest, classroom…) to a very short, sharp signal: the audio equivalent of a single pixel or point. Think of it like making one short hand-clap in a room: the point spread function, which for audio signals is called the impulse response function, is the sound of that clap reverberating. (Yes, the hand-clap is just an approximation to the type of short, sharp signal — i.e. impulse — we need but it serves to make the point.)

So, let’s take a large concert hall. Here’s the impulse response for the hall:

Now consider a space which is rather less grand (at least in terms of its audio characteristics). An ice cavern, say…

Note the very audible differences between the impulse response for the concert hall and for the cavern.

Now let’s take an audio signal completely at random. Like this…

If we convolve the Pythons’ Gregorian chant with the impulse response for the concert hall, here’s what we get.

And this is the convolution of the chant with the impulse response for the ice cavern:

Just as with the HST images, the response of the system (the concert hall or the ice cavern in this case) can be worked out from its audio “point spread function” (the impulse response).
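If you fancy generating this sort of thing yourself, a few lines of Python will do it. The sketch below assumes you have a “dry” recording and a recorded impulse response saved as WAV files; the filenames are placeholders rather than the actual files used above.

```python
# Convolve a 'dry' recording with the impulse response of a space.
# Filenames are placeholders -- substitute your own recordings.
import numpy as np
import soundfile as sf               # pip install soundfile
from scipy.signal import fftconvolve

dry, fs = sf.read("chant.wav")                    # the dry signal
impulse, fs_ir = sf.read("concert_hall_ir.wav")   # impulse response of the space
assert fs == fs_ir, "resample first if the sample rates differ"

# Mix everything down to mono for simplicity
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if impulse.ndim > 1:
    impulse = impulse.mean(axis=1)

# The convolution itself: every sample of the dry signal is 'smeared out'
# by the reverberant response of the room
wet = fftconvolve(dry, impulse)
wet /= np.abs(wet).max()             # normalise to avoid clipping

sf.write("chant_in_concert_hall.wav", wet, fs)
```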

For scanning probe microscopy (SPM), however, we’re in a whole new world of pain when it comes to deciphering the contribution of the imaging system to the image we see. Far from being a static distortion as in the HST optics, the scanning probe itself responds dynamically to the object under study. The simple point spread function approach breaks down. And this can lead to some very misleading images indeed…

My first love in research was, and will always be, SPM. I’ve written about the power and pitfalls of the technique in detail before but the concept at the heart of the technique is really very simple indeed. (Its execution rather less so). We take an exceptionally sharp probe — terminated in a single atom or molecule — and move it very close to a surface, an interface, or a single atom or molecule. When I say “very close”, I mean within a few atomic diameters, or, in the highest resolution work, about a single atom’s distance from a surface. At those distances a number of forces and interactions come into play including, in particular, chemical bond formation and, as described in this post for the Institute of Physics’ physicsfocus blog last year, electron-electron repulsion due to the Pauli exclusion principle. By scanning the probe back and forth (using piezoelectric motors) we can measure the variation of those forces within a single molecule and convert that signal to an image.

Leo Gross and co-workers at the IBM research labs at Rüschlikon in Zurich pioneered a new sub-field of scanning probe microscopy when they showed back in 2009 that images of the internal architecture of single molecules could be captured. The agreement between these images and the ball-and-stick models used by chemists (and physicists) to represent molecules is striking, to put it mildly. While the picture of the tip in the figure to the right is an artist’s representation, the image of the molecule directly below is the actual experimental data measured for a single pentacene molecule, the ball-and-stick model for which is shown at the foot of the figure (grey spheres are carbon, white spheres are hydrogen — it’s a molecule so simple even physicists can understand it.)

[Figure: artist’s impression of the tip above a pentacene molecule, the experimental image of the molecule, and the corresponding ball-and-stick model]

Ultrahigh resolution images showing submolecular structure in exquisite detail for a variety of molecules followed (as described in this book chapter). But a number of SPM research groups across the world, including our team at Nottingham, were particularly keen to ascertain whether intermolecular bonds (rather than, or in addition to, intramolecular bonds) could be resolved using the method introduced by IBM Zurich. Nottingham has a long track record — through the efforts of the research groups of my colleagues Peter Beton and Neil Champness — of exploiting hydrogen bonds in the assembly of supramolecular systems. Hydrogen bonds are also of key importance in biochemistry, including, of course, in underpinning base pair interactions in DNA. Could we actually see hydrogen bonds between molecules using probe microscopy?

We started the experiments.

And we were over the moon when we acquired this image of a hydrogen-bonded lattice of molecules shortly afterwards…

[Image: our probe microscope image of a hydrogen-bonded lattice of NTCDI molecules]

 …particularly as we could apparently map the “filamentary” features between the molecules directly onto where we expected the hydrogen bonds (the dotted lines in the image below) to be:

[Image: the same NTCDI image with the expected hydrogen bonds marked as dotted lines]

While we were puzzling over how to interpret the image — just why did the hydrogen bonds appear so bright compared to the bonds inside the molecules? — we were somewhat less over the moon to be scooped on the first ‘observation’ of hydrogen bonds, as described in the article below. (Click on the image for the full Chemistry World piece).

[Image: Chemistry World article on the visualisation of hydrogen bonds (Zhang et al.)]

Note the social media hits on that article: the images certainly created a stir.

Once more it looks like there’s exceptionally good agreement between the positions of the hydrogen-bond features in the probe microscope image on the left above and those expected on the basis of the chemistry (as sketched in the diagram to the right).

But, again, why are the H-bonds so bright in the image? The authors’ own calculations showed that the electron density between the molecules could not account for the brightness — there just wasn’t enough charge there. (This chimed with our experience, as we described in a paper published a few months later; it’s free to read, with no paywall.)

Even though the positions of the features in the images met all of our expectations with regard to where we’d expect hydrogen bonds to be observed, was it possible that it was simply some type of image artefact? Were those ‘bonds’ nothing more than nanoscopic will-o’-the-wisps? Could Nature really be that cruel? (That’s Nature as in the universe around us, not Nature the journal. Scientists all know just how cruel Nature can be…).

Yes, Nature is that cruel.

It turns out that the intermolecular features readily appear in simulations based around the type of simple interatomic potentials we explain to our first-year undergraduate students. (See, for example, the sections on Lennard-Jones and Morse potentials in Chapter 2 of this ebook). The simulations in question know nothing about the electron density due to bonding between the molecules — they are based solely on the atomic coordinates, i.e. the positions of the atoms in the molecules. And yet, as the images below show, the simulations — which have been developed by a number of groups in parallel, particularly those of Pavel Jelinek at the Academy of Sciences of the Czech Republic and Peter Liljeroth at Aalto University in Finland — provide exceptionally good agreement with the experimental images. (The figure below is taken from this paper by Prokop Hapala and co-workers in Jelinek’s group, along with a team at Forschungszentrum Jülich comprising Stefan Tautz, Ruslan Temirov, Christian Wagner and Georgy Kichin).

[Figure: experimental images alongside simulations based on simple interatomic potentials, from Hapala et al.]

So if we’re not really seeing bonds, just what is going on?

When acquiring ultrahigh resolution images of the type pioneered by Leo Gross and colleagues, it is generally the case that the tip is terminated — either deliberately or inadvertently — with a single molecule. (The eagle-eyed among you might have noticed the CO molecule hanging off the end of the tip in the artist’s impression of the pentacene imaging experiment above). The molecular probe can flex and pivot as it is dragged across the target molecule — the apex of the tip bends back and forth due to the forces it experiences from the atoms of the molecules underneath. And that bending motion gives rise to the intermolecular features.

No intermolecular bonds required.
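To give a flavour of just how little these simulations need to know, here’s a toy, strictly one-dimensional sketch in the same spirit. I should stress that this is not the code used by the Jelinek, Liljeroth or Jülich groups, and every parameter is invented purely for illustration: a probe particle is tethered to the tip apex by a lateral spring, interacts with two “sample” atoms through a Lennard-Jones potential, and is allowed to relax sideways at each tip position before the vertical force (a crude stand-in for the imaging signal) is recorded.

```python
# Toy 1D 'probe particle' sketch: lateral spring + Lennard-Jones interactions.
# Every parameter below is made up for illustration only.
import numpy as np
from scipy.optimize import minimize_scalar

EPS = 0.01    # LJ well depth (eV) -- illustrative value
SIGMA = 3.0   # LJ length scale (angstrom) -- illustrative value

def lj_energy(r):
    """Lennard-Jones pair potential for separation(s) r."""
    return 4.0 * EPS * ((SIGMA / r) ** 12 - (SIGMA / r) ** 6)

# Two 'sample' atoms on the surface, as (x, z) coordinates in angstrom
sample_atoms = np.array([[-2.5, 0.0], [2.5, 0.0]])

K_LAT = 0.25      # lateral spring tethering the probe to the tip apex (eV/angstrom^2)
PROBE_DROP = 3.0  # probe particle hangs this far below the tip apex (angstrom)
Z_TIP = 6.5       # tip apex height above the surface plane (angstrom)
Z_PROBE = Z_TIP - PROBE_DROP

def probe_sample_energy(x_probe):
    """Sum of LJ interactions between the probe particle and the sample atoms."""
    r = np.hypot(sample_atoms[:, 0] - x_probe, sample_atoms[:, 1] - Z_PROBE)
    return lj_energy(r).sum()

def total_energy(x_probe, x_tip):
    """Lateral spring energy plus probe-sample interaction."""
    return 0.5 * K_LAT * (x_probe - x_tip) ** 2 + probe_sample_energy(x_probe)

def vertical_force(x_probe, dz=1e-4):
    """Vertical force on the relaxed probe (numerical z-derivative of the LJ energy)."""
    r_up = np.hypot(sample_atoms[:, 0] - x_probe, sample_atoms[:, 1] - (Z_PROBE + dz))
    r_dn = np.hypot(sample_atoms[:, 0] - x_probe, sample_atoms[:, 1] - (Z_PROBE - dz))
    return -(lj_energy(r_up).sum() - lj_energy(r_dn).sum()) / (2 * dz)

# Scan the tip along x: at each position, let the probe relax laterally
# (minimise the total energy), then record the vertical force.
for x_tip in np.linspace(-4.0, 4.0, 17):
    relaxed = minimize_scalar(total_energy, args=(x_tip,),
                              bounds=(x_tip - 2.0, x_tip + 2.0), method="bounded")
    print(f"tip x = {x_tip:+.2f}  probe x = {relaxed.x:+.2f}  "
          f"Fz = {vertical_force(relaxed.x):+.5f}")
```

The point is how few ingredients are involved: a pairwise potential, the atomic coordinates, and a probe that is free to move in response to the forces it feels. Nothing in the model knows anything about intermolecular bonding.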

In a clever experimental design, Sampsa Hämäläinen, Liljeroth and co-workers used a molecule which forms hydrogen bonds at some places, but not at others, to highlight the exceptionally important role of the probe in generating spurious intermolecular features. The same type of effect has also been observed for halogen bonding and, most recently, for a system where no intermolecular bonds at all are expected: a lattice of buckyballs (C60 molecules). (I presented these latter data at the conference in Manchester.)

What’s more, and to add to the pain, even if we ‘lock down’ the probe molecule in the simulations — which we did for our calculations — so that it can’t flap around, we’re still left with the point spread function to contend with. The probe has a finite width (in terms of its electron ‘cloud’) and, as pointed out by Hapala and colleagues, this can also generate artefacts via convolution between the probe and the target molecule.

Douglas Adams once said:

If it looks like a duck, and quacks like a duck, we have at least to consider the possibility that we have a small aquatic bird of the family Anatidae on our hands.

It’s indeed possible. But when it comes to science, it can look like a duck, waddle like a duck, and quack like a duck…

…but all too often it can be a goose.