Brand New Thinking?

There’s a recent article on the Research Fortnight website describing UKRI’s, ahem, radical and daring new branding campaign, which apparently incorporates a Tetris-esque intermeshing of the logos of the Research Councils, as demonstrated in the video embedded below…

Reaction to the rebrand has not been overly enthusiastic.

A helpful 73-page document available from the UKRI website details just how, why, and where the brands should be used and includes such gems of the marketing genre as:

“Our design elements are not contained but are expressive, larger than life and breaking out of the confines.”

“Our design is all about how we come together to influence change, where the design idea expresses the impact that we create. Our design assets are combined to create a bold and colourful look and feel, that evokes the gravitas of the organisation but is always dynamic and modern.”


The Emperor’s New Clothes character of the worst excesses of branding and marketing has, of course, long irked many an academic — and I’m certainly no exception — but it’s worth noting that even those in the industry have pilloried the “pretentiousness… and, in some cases, the sheer ridiculousness of… Brand Bollocks”.

I recognise, of course, that marketing and branding certainly have a role to play in any venture that needs to connect with an audience, demographic, or following (see How To Win Friends and Influence People and Mutual Respect and Teamwork are Vital). But the type of blather above on “design assets” is just empty verbiage that, despite the claims to be bold, innovative and fresh, highlights the paucity of original thinking.

Sophie Inge, the author of the piece, contacted me last week for my thoughts on UKRI’s rebranding and included some of the rant below in her Research Fortnight article.

It’s the sheer lack of originality and “boilerplate” aspect of all of this nonsense that’s so irritating. And, yet, on p.6 of the branding document we have: “We’re prepared to do things differently”.

They’ve changed the logo. That’s it. And they’ve written a load of accompanying ‘inspirational’ bumpf to attempt to justify the £90K spent on something that looks rather like an upper-end GCSE Art/Design Technology project. That’s not doing things differently. That’s exactly what every other brand-obsessed company does.

“Our brand values are collaboration, integrity, innovation and excellence” (from p.6 of the branding guidelines). Well, it’s not as if too many other companies/organisations/institutions have exactly those values. “Excellence”, for one, is thin on the ground.

Marketers might well pour a lot of effort into their designs, but customers – and who, exactly, are the customers/the market in this case? – actually mostly couldn’t care less. I don’t want to “engage” with a brand, or have a “relationship” with it, or be loyal to it. And I’m certainly not alone in this.

Although I appreciate the importance of some aspects of marketing, this type of rebranding exercise does worse than leave me cold. It influences me negatively about the company/institution/organisation because it’s so ephemeral, self-referential, and ultimately pointless (because we can be sure that a couple of years down the line, another rebrand is going to happen…)

Molecules at Surfaces: What do we really know?


I’m writing this on the Liverpool Lime Street to Norwich train1, heading back after attending an inspiring and entertaining symposium at the University of Liverpool over the past couple of days. As the title of this post suggests, the symposium had molecules at surfaces as its theme. More than that, however, it was a celebration of the work – and often, the life and times – of Prof. Mats Persson (pictured right), a formidably talented, influential, and yet humble and unimposing theorist who has played a leading role in shaping and defining the research fields in which I work: surface science, nanoscience, and, in particular, scanning probe microscopy. The words “A true gentleman” were repeated regularly throughout the symposium by Mats’ former PhD researchers, postdocs, and co-workers, for very good reason.

Organised by George Darling, Matthew Dyer, Jackie Parkinson, and Rasmita Raval, the symposium was one of the best meetings I’ve attended not just recently but throughout my career to date. Ras, a leading light in the UK surface science community who has worked closely with Mats since he arrived in Liverpool in 2006 (and with whom I had the pleasure of collaborating on the Giants Of The Infinitesimal project2), kicked off the symposium with an engaging overview of not just surface science at Liverpool but of the city itself, including, of course, mention of the age-old rivalry between the two primary religious factions: the Reds and the Blues3.

What I particularly enjoyed about the meeting was the blend of world-leading science – an accolade that is often thrown around with wild abandon regardless of the quality of the work, but in this case its usage is absolutely justified – with personal anecdotes about Mats’ career and those of his (very many) collaborators. It brought home to me yet again just how important social dynamics are to the evolution of science, no matter what howls of outrage this suggestion might provoke in certain quarters. Yes, of course, we do our utmost to be as rigorous, objective, and systematic in our research as possible – well, most of us – but the direction of a field is influenced not just by the science but by the “many-body interactions” of those who do the science. (For those interested in finding out more about the extent to which developments in science are influenced by the sociology of scientists, I thoroughly recommend Harry Collins’ Gravity’s Kiss; it’s that rarity among science and technology studies (STS) books: a page-turner. Harry is going to be visiting Nottingham in a couple of months to give an invited seminar for The Politics, Perception, and Philosophy of Physics module and I’ll post a lot more about his work then (including this fascinating “Spot The Physicist” experiment.))

A great example of just why the “who” can be as important as the “what” was this morning’s thoroughly entertaining retrospective from Stephen Holloway, erstwhile Head of Chemistry at Liverpool. Stephen covered not just his memories of working with Mats but included fascinating anecdotes about the political landscape, the interpersonal conflicts, and the “Big Names” who influenced the evolution of surface science through his career from the seventies onwards. I’ll spare Stephen’s (and others’) blushes by not revealing the names he mentioned, but his stories of scientists not quite being able to put personal grudges behind them when reviewing or assessing the work of their rivals/nemeses is just one aspect of where the personal and the professional are blurred. (This post from the popular blogger Neuroskeptic emphasises just how entwined these dual aspects can be.)

A running gag throughout the symposium was that many of those presenting owed their tenure/academic positions, either directly or indirectly, to working with Mats. And, indeed, the line-up of presenters read like a “Who’s Who?” of the most respected and influential groups in experimental and theoretical surface science/nanoscience today. Highlights are too many to mention but in addition to Stephen Holloway’s opening act this morning, I particularly enjoyed Wilson Ho’s compelling overview of his pioneering inelastic tunnelling spectroscopy work4 which opened the scientific symposium yesterday afternoon; Leonhard Grill’s always-fascinating insights into the reactions, switching and dynamics of single molecules at surfaces (the “ask Mats” image that opens this post is taken from Leonhard’s presentation); Richard Palmer’s characteristically absorbing overview of his group’s STM and STEM research; Takashi Kumagai’s next-generation nanoplasmonics using sculpted probes…


…and Jascha Repp’s engrossing presentation of his group’s exceptionally impressive work on combining ultrafast optics with probe microscopy, enabling an unprecedented increase (by very, very many orders of magnitude) in the temporal resolution of the tunnelling microscope. This is Jascha presenting the working principle of the THz-STM:


…and one of the stand-out moments of the symposium for me was a video of the internal vibrational dynamics of a single adsorbed molecule, captured with ~ 100 femtosecond temporal resolution using the THz-STM technique. There is no question that the exciting results Jascha presented represent a truly ground-breaking step forward in our ability to probe matter at not just the sub-molecular but the sub-Angstrom scale — perhaps not quite as seismic as the Nobel-winning gravitational wave discovery but, nonetheless, an achievement that will certainly cause considerable ripples across the surface science, nanoscience, and scanning probe communities for many years to come.

Two other talks particularly piqued my interest, due to both the fascinating insights into single molecule behaviour and the alignment with my particular research interests right now. Cyrus Hirjibehedin – formerly of UCL and now at Lincoln Lab, MIT (Cyrus’ move back to the other side of the pond is a major loss to the UK scanning probe/nano/surface/magnetism communities) — gave a typically energetic and compelling presentation on his work on probing and tuning magnetic behaviour in phthalocyanine molecules, while Nicolas Lorente, who manages to combine razor-sharp scientific insights with razor-sharp wit in his presentations, discussed fascinating work on the Jahn-Teller effect (I’ll discuss this in a future post), again in phthalocyanine molecules. We are eagerly awaiting delivery and installation of a Unisoku high magnetic field STM/AFM, kindly funded by EPSRC, and so spin will be a major focus of our group’s research at Nottingham in the coming years. We’ve got such a lot of catching up to do…

Finally, it would be remiss of me to close this overlong post without mentioning a prevailing and exceptionally important theme throughout the symposium: the very close interplay between experiment and theory. Almost every speaker highlighted the “feedback loop” between experimental and theoretical data, but it was David Bird of the University of Bath whose — once again, thoroughly engaging — perspective hammered this point home time and again…


“Experiments Lead The Way”


“You learn more when theory doesn’t agree with experiment than when it does”, and…


“Simple models are best.”

This strong and very healthy experiment-theory interplay contrasts somewhat with other fields of physics, where sometimes experimental data seems to be almost an afterthought, at best, in the generation of new theories.

A big thank-you to George, Matthew, Ras, and Jackie for organising such a great meeting. And, of course, enjoy your retirement, Mats!

1 A service that usually runs via Nottingham — cancellations, strikes, and acts of god/God/gods permitting — and with which I’m exceptionally familiar following very many fun, and occasionally somewhat gruelling, beamtime experiments at the now sadly defunct Daresbury Synchrotron Radiation Source. Daresbury is beside Warrington, which in turn is roughly midway between Liverpool and Manchester. I spent a lot of time (up to three months per year) at Daresbury in the late(-ish) nineties to early noughties, with very many hours whiled away sodden and/or freezing on the Warrington station platform, eyeing the announcement board and waiting for trains to collapse from a delayed-cancelled superposition into a more defined state…

2 Our friend and colleague Tom Grimsey, the powerhouse behind the Giants… project, sadly passed away almost five years ago. He was a wonderful man — full of enthusiasm for, and a hunger to learn about, all things nano, molecular, and atomic. I think that Ras would agree that Giants was such a fun project to work on because of the unique perspective Tom and Theo brought to our science. I couldn’t help but wonder a number of times during the symposium what Tom would have made of the incredible single molecule images presented during the talks.

3 Not being a football fan, I can’t comment further. (My dad was a lifelong Sunderland fanatic and my lack of interest in football may possibly not be entirely unrelated to this fact…)

4 …although I don’t quite yet share Wilson’s confidence in scanning probe microscopy’s ability to “see” intermolecular bonds.

“The drum beats out of time…”

Far back in the mists of time, in those halcyon days when the Brexit referendum was still but a comfortably distant blot on the horizon and Trump’s lie tally was a measly sub-five-figures, I had the immense fun of working with Brady Haran and Sean Riley on this…

As that video describes, we tried an experiment in crowd-sourcing data via YouTube for an analysis of the extent to which fluctuations in timing might be a signature characteristic of a particular drummer (or drumming style). Those Sixty Symbols viewers who very kindly sent us samples of their drumming — all 78 of you [1] — have been waiting a very, very long time for this update. My sincere thanks for contributing and my profuse apologies for the exceptionally long delay in letting you know just what happened to the data you sent us. The good news is that a paper, Rushing or Dragging? An Analysis of the “Universality” of Correlated Fluctuations in Hi-hat Timing and Dynamics (which was uploaded to the arXiv last week), has resulted from the drumming fluctuations project. The abstract reads as follows.

A previous analysis of fluctuations in a virtuoso (Jeff Porcaro) drum performance [Räsänen et al., PLoS ONE 10(6): e0127902 (2015)] demonstrated that the rhythmic signal comprised both long range correlations and short range anti-correlations, with a characteristic timescale distinguishing the two regimes. We have extended Räsänen et al.’s approach to a much larger number of drum samples (N=132, provided by a total of 58 participants) and to a different performance (viz., Rush’s Tom Sawyer). A key focus of our study was to test whether the fluctuation dynamics discovered by Räsänen et al. are “universal” in the following sense: is the crossover from short-range to long-range correlated fluctuations a general phenomenon or is it restricted to particular drum patterns and/or specific drummers? We find no compelling evidence to suggest that the short-range to long-range correlation crossover that is characteristic of Porcaro’s performance is a common feature of temporal fluctuations in drum patterns. Moreover, level of experience and/or playing technique surprisingly do not play a role in influencing a short-range to long-range correlation cross-over. Our study also highlights that a great deal of caution needs to be taken when using the detrended fluctuation analysis technique, particularly with regard to anti-correlated signals.

There’s also some bad news. We’ll get to that. First, a few words on the background to the project.

Inspired by a fascinating paper published by Esa Räsänen (of Tampere University) and colleagues back in 2015, a few months before the Sixty Symbols video was uploaded, we were keen to determine whether the correlations observed by Esa et al. in the fluctuations in an iconic drummer’s performance — the late, great Jeff Porcaro — were a common feature of drumming.

Why do we care — and why should you care — about fluctuations in drumming? Surely we physicists should be doing something much more important with our time, like, um, curing cancer…

OK, maybe not.

More seriously, there are very many good reasons why we should study fluctuations (aka noise) in quite some detail. Often, noise is the bane of an experimental physicist’s life. We spend inordinate amounts of time chasing down and attempting to eliminate sources of noise, be they at a specific frequency (e.g. mains “hum” at 50 Hz or 60 Hz [2]) or, sometimes more frustratingly, when the signal contamination is spread across the frequency spectrum, forming what’s known as white noise. (Noise can be of many colours other than white — just as with a spectrum of light it all depends on which frequencies are present.)

But noise is most definitely not always just a nuisance to be avoided/eliminated at all costs; there can be a wealth of information embedded in the apparent messiness. Pink noise, for example, crops up in many weird and wonderful — and, indeed, many not-so-weird-and-not-so-wonderful — places, from climate change, to fluctuations in our heartbeats, to variations in the stock exchange, to current flow in electronic devices, and, indeed, to mutations occurring during the expansion of a cancerous tumour.  An analysis of the character and colour of noise can provide compelling insights into the physics and maths underpinning the behaviour of everything from molecular self-assembly to the influence and impact of social media.
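For the curious, here’s a minimal, purely illustrative Python sketch of what “colour” means in practice: white noise is shaped in the frequency domain so that its power spectrum falls off as 1/f^β (β = 0 for white, 1 for pink, 2 for brown). The function names and the spectral-shaping approach are my own choices for this sketch, not anything taken from the papers discussed here.

```python
import numpy as np

def coloured_noise(n, beta, seed=None):
    """Generate a noise signal with power spectrum S(f) ~ 1/f**beta
    (beta = 0: white, 1: pink, 2: brown) by assigning 1/f**(beta/2)
    amplitudes and random phases to the Fourier components."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n)
    amplitudes = np.zeros_like(freqs)
    amplitudes[1:] = freqs[1:] ** (-beta / 2)  # skip the DC term
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    signal = np.fft.irfft(amplitudes * np.exp(1j * phases), n)
    return signal / signal.std()

def spectral_slope(signal):
    """Least-squares slope of log(power) vs log(frequency)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal))
    mask = freqs > 0
    return np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)[0]

pink = coloured_noise(2**16, beta=1.0, seed=42)
print(spectral_slope(pink))  # close to -1: power falls off as 1/f
```

The slope of the log-log power spectrum is one quick-and-dirty way to read off the noise colour from data (a slope near 0 is white, near -1 is pink, and so on), though for real, nonstationary signals more robust estimators are needed.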

The Porcaro performance that Esa and colleagues analysed for their paper is the impressive single-handed 16th note groove that drives Michael McDonald’s “I Keep Forgettin’…” I wanted to analyse a similar single-handed 16th note pattern, but in a rock rather than pop context, to ascertain whether Porcaro’s pattern of fluctuations in interbeat timing was characteristic only of his virtuoso style or if it was a general feature of drumming. I’m also, coincidentally, a massive Rush fan. An iconic and influential track from the Canadian trio with the right type of drum pattern immediately sprang to mind: Tom Sawyer.

So we asked Sixty Symbols viewers to send in audio samples of their drumming along to Tom Sawyer, which we subsequently attempted to evaluate using a technique called detrended fluctuation analysis. When I say “we”, I mean a number of undergraduate students here at the University of Nottingham (who were aided, but more generally abetted, by myself in the analysis). I’ve set a 3rd year undergraduate project on fluctuations in drumming for the last three years; the first six authors on the arXiv paper were (or are) all undergraduate students.
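To give a flavour of what detrended fluctuation analysis actually involves, here’s a bare-bones sketch (my own illustrative implementation, not the code used for the paper): integrate the series, subtract a linear fit within windows of increasing size s, and watch how the residual fluctuation F(s) scales with s. The scaling exponent α distinguishes uncorrelated (α ≈ 0.5) from long-range correlated, pink-noise-like (α ≈ 1) signals.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis: returns F(s) for each
    window size s. If F(s) ~ s**alpha, then alpha ~ 0.5 for uncorrelated
    noise and ~ 1.0 for 1/f (pink) noise."""
    y = np.cumsum(x - np.mean(x))  # integrated profile
    fluct = []
    for s in scales:
        n_windows = len(y) // s
        t = np.arange(s)
        f2 = []
        for i in range(n_windows):
            seg = y[i * s:(i + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    return np.array(fluct)

# Sanity check on synthetic uncorrelated noise:
rng = np.random.default_rng(0)
white = rng.standard_normal(2**14)
scales = np.array([16, 32, 64, 128, 256, 512])
alpha = np.polyfit(np.log(scales), np.log(dfa(white, scales)), 1)[0]
print(alpha)  # close to 0.5 for uncorrelated noise
```

In the drumming context, x would be the series of interbeat intervals extracted from the audio, which is exactly why sample quality matters so much: if individual hits can’t be cleanly isolated, the interval series (and hence F(s)) is unreliable.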

Unfortunately, the sound quality (and/or the duration) of many of the samples submitted in response to the Sixty Symbols video was just not sufficient for the task. That’s not a criticism, in any way, of the drummers who submitted audio files; it’s entirely my fault for not being more specific in the video. We worked with what we could, but in the end, the lead authors on the arXiv paper, Oli(ver) Gordon and Dom(inic) Coy, adopted a different and much more productive strategy for their version of the project: they invited a number of drummers (twenty-two in total) to play along with Tom Sawyer using only a hi-hat (so as to ensure that each and every beat could be isolated and tracked) and under exactly the same recording conditions.

You can read all of the details of the data acquisition and analysis in the arXiv paper. It also features the lengthiest acknowledgements section I’ve ever had to write. I think I’ve thanked everyone who provided data in there but if you sent me an MP3 or a .wav file (or some other audio format) and you don’t see your name in there, please let me know by leaving a comment below this post. (Assuming, of course, that you’d like to be acknowledged!)

We submitted the paper to the Journal of New Music Research last year and received some very helpful referees’ comments. I am waiting to get permission from the editor of the journal to make those (anonymous) comments public. If that permission is given, I’ll post the referees’ reports here.

In hindsight, Tom Sawyer was not the best choice of track to analyse. It’s a difficult groove to get right and even Neil Peart himself has said that it’s the song he finds most challenging to play live. In our analysis, we found very little evidence of the type of characteristic “crossover” in the correlations of the drumming fluctuations that emerged from Esa and colleagues’ study of Porcaro’s drumming. Our results are also at odds with the more recent work by Mathias Sogorski, Theo Geisel, Viola Priesemann (of the Max Planck Institute for Dynamics and Self-Organization, and the Bernstein Center for Computational Neuroscience, Göttingen, Germany) — a comprehensive and systematic analysis of microtiming variations in jazz and rock recordings spanning a total of over 100 recordings.

The likelihood is that the conditions under which we recorded the tracks — in particular, the rather “unnatural” hi-hat-only performance — may well have washed out the type of correlations observed by others. Nonetheless, this arguably negative result is a useful insight into the extent to which correlated fluctuations are robust (or not) with respect to performance environment and style. It was clear from our results, in line with previous work by Holger Hennig, Theo Geisel and colleagues, that the fluctuations are not so much characteristic of an individual drummer but of a performance; the same drummer could produce different fluctuation distributions and spectra under different performing conditions.

So where do we go from here? What’s the next stage of this research? I’m delighted to say that the Sixty Symbols video was directly responsible for kicking off an exciting collaboration with Esa and colleagues at Tampere that involves a number of students and researchers here at Nottingham. In particular, two final year project students, Ellie Hill and Lucy Edwards, have just returned from a week-long visit to Esa’s group at Tampere University. Their project, which is jointly supervised by my colleague Matt Brookes, Esa, and myself, focuses on going that one step further in the analysis of drumming fluctuations to incorporate brain imaging. Using this wonderful device.

I’m also rather chuffed that another nascent collaboration has stemmed from the Sixty Symbols video (and the subsequent data analysis) — this time from the music side of the so-called “two cultures” divide. The obscenely talented David Domminney Fowler, of Australian Pink Floyd fame, has kindly provided exceptionally high quality mixing desk recordings of “Another Brick In The Wall (Part 2)” from concert performances. (Thanks, Dave. [3]) Given the sensitivity of drumming fluctuations to the precise performance environment, the analysis of the same drummer (in this case, Paul Bonney) over multiple performances could prove very informative. We’re also hoping that Bonney will be able to make it to the Sir Peter Mansfield Imaging Centre here in the not-too-distant future so that Matt and colleagues can image his brain as he drums. (Knock yourself out with drummer jokes at this point. Dave certainly has.) I’m also particularly keen to compare results from my instrument of choice at the moment, Aerodrums, with those from a traditional kit.

And finally, the Sixty Symbols video also prompted George Datseris, professional drummer and PhD researcher, also at the Max Planck Institute for Dynamics & Self-Organisation, to get in touch to let us know about his intriguing work with the Geisel group: Does it Swing? Microtiming Deviations and Swing Feeling in Jazz. Esa and George will both be visiting Nottingham later this year and I am very enthusiastic indeed about the prospects for a European network on drum/rhythm research.

What’s remarkable is that all of this collaborative effort stemmed from Sixty Symbols. Public engagement is very often thought of exclusively in terms of scientists doing the research and then presenting the work as a fait accompli. What I’ve always loved about working with Brady on Sixty Symbols, and with Sean on Computerphile, is that they want to make the communication of science a great deal more open and engaging than that; they want to involve viewers (who are often the taxpayers who fund the work) in the trials and tribulations of the day-to-day research process itself. Brady and I have our spats on occasion, but on this point I am in complete and absolute agreement with him. Here he is, hitting the back of the net in describing the benefits of a warts-and-all approach to science communication…

They don’t engage with one paper every year or two, and a press release. I think if people knew what went into that paper and that press release…and they see the ups and the downs… even when it’s boring… And they see the emotion of it, and the humanity of it…people will become more engaged and more interested…

With the drumming project, Sixty Symbols went one step further and brought the viewers in so they were part of the story — they drove the direction of the science. While YouTube has its many failings, Sixty Symbols and channels like it enable connections with the world outside the lab that were simply unimaginable when I started my PhD back in (gulp…) 1990. And in these days of narrow-minded, naive nationalism, we need all the international connections we can get. Marching to the beat of your own drum ain’t all it’s cracked up to be…


[1] 78. “Seven eight”.

[2] 50 Hz or 60 Hz depending on which side of the pond you fall. Any experimental physicist or electrical/electronic engineer who might be reading will also know full well that mains noise is generally not only present at 50 (or 60) Hz — there are all those wonderful harmonics to consider. (And the strongest peak may well not even be at 50 (60) Hz, but at one of those harmonics. And not all harmonics will contribute equally.  Experimental physics is such a joy at times…)

[3] In the interests of full disclosure I should note that Dave is a friend, a fan of Sixty Symbols, Numberphile, etc., and an occasional contributor to Computerphile. He and I have spent quite a few tea-fuelled hours setting the world to rights.



Should we stop using the term “PhD students”?

I’m reblogging this important post by Jeff Ollerton on retiring the description of postgraduate researchers as “PhD students”. This has been something of a bugbear of mine for quite some time now. We ask that PhD researchers produce a piece of work for their thesis that is original, scholarly, and makes a (preferably strong) contribution to the body of knowledge in a certain (sub-)field. Moreover, the majority of papers submitted to the REF (at least in physics) have a PhD candidate as lead author. Referring to these researchers as “students” seems to me to dramatically downplay their contributions and expertise. I’m going to follow Jeff’s example and use the term “postgraduate researchers” from now on. The comments section under the post is also worth reading (…and there’s something you don’t hear every day.)

Over to you, Jeff…

Jeff Ollerton's Biodiversity Blog


Back in the early 1990s when I was doing my PhD there was one main way in which to achieve a doctorate in the UK.  That was to carry out original research as a “PhD student” for three or four years, write it up as a thesis, and then have an oral examination (viva).  Even then the idea of being a “PhD student” was problematical because I was funded as a Postgraduate Teaching Assistant and to a large extent treated as a member of staff, with office space, a contributory pension scheme, etc.  Was I a “student” or a member of staff or something in between?

Nowadays the ways in which one can obtain a Level 8 qualification have increased greatly.  At the University of Northampton one can register for a traditional PhD, carry out a Practice-based PhD in the Arts (involving a body of creative work and a smaller…


Breaking Through the Barriers

A colleague alerted me to this gloriously barbed Twitter exchange earlier today:


Jess Wade‘s razor-sharp riposte to Brian Cox was prompted by just how Dame Jocelyn Bell Burnell has chosen to spend the £2.3M [1] associated with the Breakthrough Prize in Fundamental Physics she was awarded today. Here’s the citation for the Prize:

The Selection Committee of the Breakthrough Prize in Fundamental Physics today announced a Special Breakthrough Prize in Fundamental Physics recognizing the British astrophysicist Jocelyn Bell Burnell for her discovery of pulsars – a detection first announced in February 1968 – and her inspiring scientific leadership over the last five decades.

In a remarkable act of generosity, Bell Burnell has donated the entire prize money to the Institute of Physics to fund PhD studentships for, as described in a BBC news article, “women, under-represented ethnic minority and refugee students to become physics researchers.” 

Bell Burnell is quoted in The Guardian article to which Brian refers as follows: “A lot of the pulsar story happened because I was a minority person and a PhD student… increasing the diversity in physics could lead to all sorts of good things.”

As an out-and-proud ‘social justice warrior’, [2] I of course agree entirely.

That rumbling you can hear in the distance, however, is the sound of 10,000 spittle-flecked, basement-bound keyboards being hammered in rage at the slightest suggestion that diversity in physics (or any other STEM subject) could ever be a good thing. Once again I find myself in full agreement with my erstwhile University of Nottingham colleague, Peter Coles:

[1] A nice crisp, round $3M for those on the other side of the pond.

[2] Thanks, Lori, for bringing those wonderful t-shirts to my attention!



“The surface was invented by the devil”: Nanoscience@Surfaces 2018


The title of this post is taken from an (in)famous statement from Wolfgang Pauli:

God made solids, but surfaces were the work of the devil!

That diabolical nature of surfaces is, however, exactly what makes them so intriguing, so fascinating, and so rich in physics and chemistry. And it’s also why surface science plays such an integral and ubiquitous role in so many areas of condensed matter physics and nanoscience. That ubiquity is reflected in the name of a UK summer school for PhD students, nanoscience@Surfaces 2018, held at the famed Cavendish Laboratory at Cambridge last week, and at which I had the immense pleasure of speaking. More on that soon. Let’s first dig below the surface of surfaces just a little.

(In passing, it would be remiss of me not to note that the Cavendish houses a treasure trove of classic experimental “kit” and apparatus that underpinned many of the greatest discoveries in physics and chemistry. Make sure that you venture upstairs if you ever visit the lab. (Thanks for the advice to do just that, Giovanni!))


Although I could classify myself, in terms of research background, as a nanoscientist, a chemical physicist, or (whisper it) even a physical chemist at times, my first allegiance is, and always will be, with surface science. I’m fundamentally a surface scientist. For one thing, the title of my PhD thesis (from, gulp, 1994) nails my colours to the mast: A Scanning Tunnelling Microscopy Investigation of the Interaction of Sulphur with Semiconductor Surfaces. [1]

(There. I said it. For quite some time, surface science was targeted by the Engineering and Physical Sciences Research Council (EPSRC) as an area of funding whose slice of the public purse should be reduced, so not only was it unfashionable to admit to being a surface scientist, it could be downright damaging to one’s career. Thankfully we live in slightly more enlightened times. For now.)

Pauli’s damning indictment of surfaces stems fundamentally from the broken symmetry that the truncation of a solid represents. In the bulk, each atom is happily coordinated with its neighbours and, if we’re considering crystals (as we so very often do in condensed matter physics and chemistry), there’s a very well-defined periodicity and pattern established by the combination of the unit cell, the basis, and the lattice vectors. But all of that gets scrambled at the surface. Cut through a crystal to expose a particular surface — and not all surfaces are created equal by any means — and the symmetry of the bulk is broken; those atoms at the surface have lost their neighbours.

Atoms tend to be rather gregarious beasties so they end up in an agitated, high energy state when they lose their neighbours. Or, in slightly more technical (and rather less anthropomorphic) terms, creation of a surface is associated with a thermodynamic free energy cost; we have to put in work to break bonds. (If this wasn’t the case, objects all around us would spontaneously cleave to form surfaces. I’m writing (some of) this on a train back from London (after a fun evening at the LIYSF), having tremendous difficulty trying to drink coffee as the train rocks back and forth. A spontaneously cleaving cup would add to my difficulties quite substantially…)

In their drive to reduce that free energy, atoms and molecules at surfaces will form a bewildering array of different patterns and phases [2]. The classic example is the (7×7) reconstruction of the Si(111) surface, one of the more complicated atomic rearrangements there is. I’ve already lapsed into the surface science vernacular there, but don’t let the nomenclature put you off if you’re not used to it. “Reconstruction” is the rearranging of atoms at a surface to reduce its free energy; the (111) defines the direction in which we cut through the bulk crystal to expose the surface; and the (7×7) simply refers to the size of the unit cell (i.e. the basic repeating unit or “tile”) of the reconstructed surface as compared to the arrangement on the unreconstructed (111) plane. Here’s a schematic of the (7×7) unit cell [3] to give you an idea of the complexity involved…


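To put a number on that nomenclature: a minimal back-of-the-envelope sketch, assuming a surface lattice constant of roughly 3.84 Å for the unreconstructed Si(111)-(1×1) plane and treating the surface unit cell as a 60° rhombus (both values are my assumptions for illustration, not taken from the schematic), shows how the (7×7) cell tiles 49 of the unreconstructed cells:

```python
import math

A_SI111 = 3.84e-10  # approx. Si(111) (1x1) surface lattice constant, metres (illustrative)

def cell_area(a, n=1):
    """Area of an (n x n) cell on a hexagonal surface mesh (rhombic cell, 60 degrees)."""
    return (n * a) ** 2 * math.sin(math.radians(60))

ratio = cell_area(A_SI111, 7) / cell_area(A_SI111, 1)
print(f"(7x7) cell / (1x1) cell area ratio: {ratio:.1f}")  # the (7x7) cell spans 49 (1x1) cells
print(f"(7x7) cell area: {cell_area(A_SI111, 7) * 1e20:.0f} square angstroms")
```

The point of the sketch is simply that the “(7×7)” label is a statement about the repeat unit, not the atomic positions themselves; the famous dimer-adatom-stacking-fault rearrangement happens inside that 49-cell tile.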
The arrangements and behaviour of atoms and molecules at surfaces are very tricky indeed to understand and predict. There has thus been a vast effort over many decades, using ever more precise techniques (both experimental and theoretical), to pin down just how adsorbed atoms and molecules bond, vibrate, move, and desorb. And although surface science is now a rather mature area, it certainly isn’t free of surprises and remains a vibrant field of study. One reason for this vibrancy is that as we make particles smaller and smaller — a core activity in nanoscience — their surface-to-volume ratio increases substantially. The devilish behaviour of surfaces is thus at the very heart of nanoscience, as reflected time and again in the presentations at the nanoscience@Surfaces 2018 summer school.
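That surface-to-volume argument is easy to make concrete with a crude shell model: treat the particle as a sphere and count the fraction of its volume lying within one “atomic shell” of the surface (the 0.25 nm shell thickness below is an illustrative assumption, not a measured value):

```python
def surface_fraction(radius_nm, shell_nm=0.25):
    """Fraction of a spherical particle's volume within shell_nm of its surface.

    Crude estimate of the fraction of atoms that are 'surface' atoms.
    """
    if radius_nm <= shell_nm:
        return 1.0  # particle is essentially all surface
    return 1.0 - ((radius_nm - shell_nm) / radius_nm) ** 3

for r in (1, 2, 5, 10, 100):
    print(f"r = {r:>3} nm: {100 * surface_fraction(r):5.1f}% of volume is 'surface'")
```

For a 1 nm-radius particle well over half the volume sits in that outermost shell, while for a 100 nm particle it is well under one percent: shrink the particle and the devilish surface takes over.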

Unfortunately, I could only attend the Wednesday and Thursday morning of the summer school. It was an honour to be invited to talk and I’d like to take this opportunity to repeat my thanks to the organising committee including, in particular, Andy Jardine (Cambridge), Andrew (Tom) Thomas (Manchester), and Karen Syres and Joe Smerdon (UCLAN), the frontline organisers who sorted out my accommodation, the necessary A/V requirements, and the scheduling logistics. My lecture, Scanning Probes Under The Microscope, was on the Wednesday morning and, alongside the technical details of the science, covered themes I’ve previously ranted about on this blog, including the pitfalls of image interpretation and the limitations of the peer review process.

Much more important, however, were the other talks during the school. I regretfully missed Monday’s and Tuesday’s presentations (including my Nottingham colleague Rob Jones’ intriguingly named “Getting it off and getting it on”) which had a theory and photoemission flavour, respectively. Wednesday, however, was devoted to my first love in research: scanning probe microscopy, and it was great to catch up on recent developments in the field from the perspective of colleagues who work on different materials systems to those we tend to study at Nottingham.

Thursday morning’s plenary lecture/tutorial was from Phil Woodruff (Warwick), one of not only the UK’s, but the world’s, foremost (surface) scientists and someone who has pioneered a number of elegant techniques and tools for surface analysis (including, along with Rob Jones and other co-workers, the X-ray standing wave method described in the video at the foot of this post.)

Following Phil’s talk, there was a session dedicated to careers. Although I was not quite in the target demographic for this session, I nonetheless hung around for the introductions from those involved because I was keen to get an insight into just how the “careers outside academia” issue would be addressed. Academia is of course not the be-all-and-end-all when it comes to careers. Of the 48 PhD researchers I counted — an impressive turn-out given that 50 were registered for the summer school — only 10 raised their hand when asked if they were planning on pursuing a career in academia.

Thirteen years ago, I was a member of the organising committee for an EPSRC-funded summer school in surface science held at the University of Nottingham. We also held a careers-related session during the school and, if memory serves (…and that’s definitely not a given), when a similar question was asked of the PhD researchers in attendance, a slightly higher percentage (maybe ~ 33%) were keen on the academic pathway. While academia certainly does not want to lose the brightest and the best, it’s encouraging that there’s a movement away from the archaic notion that to not secure a permanent academic post/tenure somehow represents failure.

It was also fun for me to compare and contrast the Nottingham and Cambridge summer schools from the comfortable perspective of a delegate rather than an organiser. Here’s the poster for the Nottingham school thirteen years ago…


…and here’s an overview of the talks and sessions that were held back in 2005:


A key advance in probe microscopy in the intervening thirteen-year period has been the ultrahigh resolution force microscopy pioneered by the IBM Zurich research team (Leo Gross et al), as described here. This has revolutionised imaging, spectroscopy, and manipulation of matter at the atomic and (sub)molecular levels.

Another key difference between UK surface science back in 2005 and its 2018 counterpart is that the Diamond synchrotron produced “first light” (well, first user beam) in 2007. The Diamond Light Source is an exceptionally impressive facility. (The decision to construct DLS at the Harwell Campus outside Oxford was accompanied by a great deal of bitter political debate back in the late nineties, but that’s a story for a whole other blog post. Or, indeed, series of blog posts.) The UK surface science (and nanoscience, and magnetism, and protein crystallography, and X-ray scattering, and…) community is rightly extremely proud of the facility. Chris Nicklin (DLS), Georg Held (Reading), Wendy Flavell (Manchester) and the aforementioned Prof. Woodruff (among others) each focussed on the exciting surface science that is made possible only via access to tunable synchrotron radiation of the type provided by DLS.

I was gutted to have missed Stephen Jenkins’ review and tutorial on the application of density functional theory to surfaces. DFT is another area that has progressed quite considerably over the last thirteen years, with a particular evolution of methods to treat dispersion interactions (i.e. van der Waals/London forces). It’s not always the case that DFT calculations/predictions are treated with the type of healthy scepticism that is befitting a computational technique whereby the choice of functional makes all the difference but, again, that’s a topic for another day…

Having helped organise a PhD summer school myself, I know just how much effort is involved in running a successful event. I hope that all members of the organising committee — Tom, Joe, Andy, Karen, Neil, Holly, Kieran, and Giovanni — can now have a relaxing summer break, safe in the knowledge that they have helped to foster links and, indeed, friendships, among the next generation of surface scientists and nanoscientists.


[1] (a) Sulphur. S.u.l.p.h.u.r. Not the frankly offensive sulfur that I had to use in the papers submitted to US journals. That made for painful proof-reading. (b) I have no idea why I didn’t include mention of photoemission in the title of the thesis, given that it forms the guts of Chapter 5. I have very fond memories of carrying out those experiments at the (now defunct) Daresbury Synchrotron Radiation Source (SRS) just outside Warrington in the UK. Daresbury was superseded by the Diamond Light Source (DLS), discussed in this Sixty Symbols video.

[2] Assuming that there’s enough thermal energy to go around and that they’re not kinetically trapped in a particular state.

[3] Schematic taken from the PhD thesis of Mick Phillips, University of Nottingham (2004).

The truth, the whole truth, and nothing but…

This video, which Brady Haran uploaded for Sixty Symbols back in May, ruffled a few feathers…

I’ve been meaning to find time to address some of the very important and insightful points that were raised in the discussions under the video, but I’ve been …

Errrm. Sorry. Hang on just one minute. “Very important and insightful points” you say? Under a YouTube video? Yeah, right…

Believe me, I fully appreciate your entirely justified scepticism here but, yes, if you scroll past the usual dose of grammatically-garbled, content-free boilerplate from the more cerebrally challenged, you’ll find that the comments section contains a considerable number of points that are entirely worthy of discussion. In fact, I’m going to be using some of those YouTube comments to prompt debate during the Politics, Perception and Philosophy of Physics (PPP) module that my colleague Omar Almaini and I run in the autumn semester.

Before I get into considering specific comments, however, I’ll just take a brief moment to highlight a central theme “below the line” of that video, viz. the absolute faith in the trustworthiness and reliability of the scientific method. Or, more accurately, the monolith that is The Scientific Method. Many who contribute to that comments section are utterly convinced that The Truth, however that might be defined, will always win out against the inherent messiness of the scientific process. Well, maybe. Possibly. But on what time scale? And with what implications for the progress of science in the meantime? Wedded entirely to their ideology without ever presenting any evidence to support their case, they are completely convinced that they know exactly how science works. Often without ever doing science themselves. This is hardly the most scientific of approaches.

OK, deep breath. I’m going in. Let’s delve into the comments section…


The idea that science progresses as a nice linear, objective process from hypothesis to “fact” is breathtakingly naive. Unfortunately, it’s exceptionally difficult for some to countenance, within their rather rigid worldview and mindset, that science could ever be inherently messy and uncertain. As “Ali Syed” notes above, this can indeed lead to quite some intellectual indigestion for some…


“cavalrycome” here helpfully serves up a key example of that breathtaking naivety in action. The idea that testing scientific theories doesn’t depend on social factors and serendipity shows a deep and touching faith — and I use that word advisedly — in the tenets of The Scientific Method. “Just do the experiment” is the mantra. Or, as Feynman put it,

 If it disagrees with experiment it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is – if it disagrees with experiment it is wrong. That is all there is to it.

(I guess it goes without saying that, as is the case for so many physicists, Feynman is a bit of a hero of mine).

…all well and good, except that doing the experiment simply isn’t enough. The same experimental data can be (mis)interpreted by different scientists in many ways. I could point to very many examples but let’s choose one that hits close to home for me.

Along with colleagues in Liverpool, Nottingham, and Tsukuba, I spent a considerable amount of my time a few years back embroiled in a critique of scanning probe microscope (SPM) images of so-called ‘stripy’ nanoparticles. I am not about to open that can of worms again. (Life is too short.) For an overview, see this post.

Without going into detail, the key point is this: we had our interpretation of the data, and the group whose work we critiqued had theirs. On more than one occasion, the fact that their interpretation had been previously published and regularly cited was used to justify their position. (I thoroughly recommend Neuroskeptic’s post on the central role of data interpretation in science. And this follow-up post.)

The testing, publication and critique of experimental (or theoretical) data fundamentally involves the scientific community at many levels. First of all, there’s the sociology of the peer review process itself. What has been previously published? Do our results agree with that previously published work? If not, can we convince the editors and referees of the validity of our data? Then there’s the question of the “impact” and excitement of the science in question. Is the work newsworthy? Will it make it to the glossy cover of the journal? Will it help secure the postdoc a lectureship or a tenure-track position?

Moreover, science requires funding.  Testing a particular theory may well require a few million quid of experimental kit, consumables, and/or staff resources. That funding is allocated via peer review. And peer review is notoriously hit and miss. I’ve seen exactly the same proposal be rejected by one funding panel and funded by another. On more than one occasion. Having the right person speak for your grant proposal at a prioritisation panel meeting can make all the difference when it comes to success in funding. (But don’t just take my word for it when it comes to how peer review (mis)steers the scientific process — a minute or two on Google is all you need to find key examples.)

Let’s complement that nanoparticle example above with some science involving rather larger length scales. Following one of the PPP sessions last year, Omar pointed me towards an illuminating blog post by Ed Hawkins on uncertainty estimates in the measurement of the Hubble constant. Here are the key data (taken from R. P. Kirshner, PNAS 101, 8 (2004)):


Note the evolution of the Hubble constant towards its currently accepted value. Feynman (yes, him again) made a similar point about the measurement of the value of the charge of the electron in his classic Cargo Cult Science talk at Caltech in 1974:

One example: Millikan measured the charge on an electron by an experiment with falling oil drops and got an answer which we now know not to be quite right.  It’s a little bit off, because he had the incorrect value for the viscosity of air.  It’s interesting to look at the history of measurements of the charge of the electron, after Millikan.  If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.

 Why didn’t they discover that the new number was higher right away?  It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong.  When they got a number closer to Millikan’s value they didn’t look so hard.  And so they eliminated the numbers that were too far off, and did other things like that.  We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.

Sorry, Richard, but that disease is very much still with us. If anything, it’s a little more virulent these days…

I could go on. But you get the idea. Only someone with a complete lack of experience of scientific research could ever suggest that the testing of scientific theories/ interpretations is free of “social factors and chance”.

What say you, “AntiCitizenX”…?


So, apparently, the experience of scientists means nothing when it comes to understanding how science works? This viewpoint  — and it crops up regularly — never ceases to make me smile. The progress of science depends, fundamentally and critically, on the peer review process: decisions on which papers get published and which grants get funded are driven not by an adherence to one or other “philosophy of science” (which one?) but by working scientists.

The “messy day-to-day aspects of science” are science. This is how it works. It doesn’t matter a jot what Popper, Kuhn [1], Feyerabend, Lakatos or your particular philosopher of choice might have postulated when it comes to their preferred version of The Scientific Method. What matters is how science works in practice. (Do the experiment, right?) Popper et al. did not produce some type of received, immutable wisdom to which the scientific process must conform. (On a similar theme, And Then There’s Physics, more of whom later, has written a number of great posts on the simplistic caricatures of science that have often frustratingly stemmed from the Science and Technology Studies (STS) field of sociology, including this: STS: All Talk and No Walk?)

Does this mean that I think philosophy has no role to play in science or, more specifically, physics? Not at all. In fact, I think that we do our undergraduate (and postgraduate) students a major disservice by not introducing a great deal more philosophy into our physics courses. But to argue that scientists are somehow not qualified to speak about a process they themselves fundamentally direct is ceding rather too much ground to our colleagues in philosophy and sociology. And it’s deeply condescending to scientists.

As Sean Carroll so eloquently puts it in the paper to which I refer in the video,

The way in which we judge scientific theories is inescapably reflective, messy, and human. That’s the reality of how science is actually done; it’s a matter of judgement, not of drawing bright lines between truth and falsity, or science and non-science.

True or False?

Let’s now turn to the question of falsifiability (which was, after all, in the title of the video). Over to you, “Daniel Jensen”, as your comment seems to have resonated with quite a few:


This fundamentally confuses the type of “bending over backwards to prove ourselves wrong” aspect of science — yes, Feynman again — with Popper’s falsifiability criterion. I draw a distinction between these in the video but, as was pointed out to me recently by Philip Ball, when it comes to many of those who contribute below the line “it’s as if they’re damned if they are going to let your actual words deprive them of their right to air their preconceived notions”.

(At least one commenter realises this:


Thank you, “Shkotay D”. I’d like to think so.)

The point I make in the video re. falsifiability merely echoes what Sokal and Bricmont (and others) said way back in the 90s, and Carroll has reiterated within the context of multiverse theory: Popper’s criterion simply does not describe how science works in practice. Here’s what Sokal and Bricmont have to say in Fashionable Nonsense:

When a theory successfully withstands an attempt at falsification, a scientist will, quite naturally, consider the theory to be partially confirmed and will accord it a greater likelihood or a higher subjective probability. … But Popper will have none of this: throughout his life he was a stubborn opponent of any idea of ‘confirmation’ of a theory, or even of its probability. … the history of science teaches us that scientific theories come to be accepted above all because of their successes.

The question of misinterpretation (wilful or otherwise) is also raised by “tennisdude52278”:


I stand by everything I said in that video. I am acutely aware of just how statements are cherry-picked, quote-mined, and ripped out of context online but that can’t be used as a justification to self-censor for the sake of “toeing the party line” or presenting a united front. Science isn’t politics, despite its messy character. It is both fundamentally dishonest and ultimately damaging to the credibility of science (and scientists) if we pretend otherwise.

“We demand rigidly defined areas of doubt and uncertainty” [2]

What I find particularly intriguing about the more overwrought responses to the video is the deep unwillingness to accept the inherent uncertainties and human biases that are inevitably at play in the progress of science. There’s a deep-rooted, quasi-religious, faith in the ability of science to provide definitive, concrete, unassailable answers to questions of life, the universe, and everything. But that’s not how science works. Carlo Rovelli forcefully makes this point in Science Is Not About Certainty:

“The very expression “scientifically proven” is a contradiction in terms. There’s nothing that is scientifically proven. The core of science is the deep awareness that we have wrong ideas, we have prejudices…we have a vision of reality that is effective, it’s good, it’s the best we have found so far. It’s the most credible we have found so far; it’s mostly correct.”

The craving for certainty is, however, a particularly human characteristic. We’re pattern-seekers; we love to find regularity, even when there’s no regularity there. And there are some who know very well how to effectively exploit that desire for certainty. This article on the guru appeal of Jordan B Peterson highlights just how the University of Toronto professor of psychology plays to the gallery in fulfilling that need:

“He sees the vacuum left not just by the withdrawal of the Christian tradition, but by the moral relativism and self-abnegation that have flooded across the West in its wake. Furthermore, he recognizes — from his experience as a practicing psychologist and as a teacher — that people crave principles and certainties.”

In passing, I should note that I disagree with the characterisation of Peterson in that article as a man who espouses ideas of depth and substance. No. Really, no. (Really, really, no.) He’s of course an accomplished and charismatic public speaker (with a particular talent for obfuscation that rivals, worryingly, that of politicians.) But then so too is Deepak Chopra. [3]

I’ve spent rather too much of my time over the last year discussing Peterson’s self-help shtick in various fora on- and offline. I’m particularly grateful to And Then There’s Physics for highlighting a debate I had with Fred McVittie last year on a motion of particular relevance to this post, “Jordan Peterson speaks the truth”. The comments thread under ATTP’s post runs to over 400 comments, highlighting that the cult of Peterson is fascinating in terms of its social dynamics. Unfortunately, what Peterson himself has to say is a great deal less interesting, and often mind-numbingly banal, than the underlying sociology of his flock.

What Peterson clearly recognises, however, is that certainty sells. Humans tend to crave simple and simplistic messages, free of the type of ambiguity that is so often part-and-parcel of the scientific process. So he dutifully, and profitably, becomes the source of memes and headline messages so simple that they can feature comfortably on the side of a mug of coffee:


Comforting though Peterson’s simplistic and dogmatic rules for life might be for many, I much prefer the honesty that underpins Carl Sagan‘s rather more ambiguous and uncertain outlook…

Science demands a tolerance for ambiguity. Where we are ignorant, we withhold belief. Whatever annoyance the uncertainty engenders serves a higher purpose: It drives us to accumulate better data. This attitude is the difference between science and so much else.



[1] I’m not a fan of Kuhn’s writings, I’m afraid. I am well aware that “The Structure of Scientific Revolutions” is held up as some sort of canonical gospel when it comes to the philosophy of science, but “…Scientific Revolutions” is merely Kuhn’s opinion. Nothing more, nothing less. It’s not the last word on the nature of progress in science. For one thing, his views on the lack of “commensurability” of different paradigms are clearly bunkum in the context of quantum physics and relativity. The correspondence principle in QM alone is enough to rebut Kuhn’s incommensurability argument. And just how many undergrad physics students have been tasked in their first year to consider a problem in QM or special relativity in “the classical limit”…?

[2] Treat yourself to a nice big bowl of petunias if you recognise the source of the quote here.

[3] As an aside to the aside, what I find remarkable is that the subadolescent drawings and scribblings that decorate Peterson’s “Maps Of Meaning” were apparently offered to Harvard psychology undergraduates as part of their education. (Actually, that’s rather unfair to those adolescents who would be mortified at being linked in any way with the likes of these ravings.) Unlike Peterson, I’m not about to wring my hands, clutch my pearls, and call for a McCarthyite purge of undergraduate teaching in his discipline. But let’s just say that my confidence in the quality assurance mechanisms underpinning psychology education and research has been dented just a little. (Diederik Stapel’s autobiography also didn’t reassure me when it comes to the lack of reproducibility that plagues psychology research.) I’ll concur entirely with Prof. Peterson on this point: it’s indeed best to get one’s own house in order before criticising others…