Sloppy Science: Still Someone Else’s Problem?

“The Somebody Else’s Problem field is much simpler and more effective, and what’s more can be run for over a hundred years on a single torch battery… An SEP is something we can’t see, or don’t see, or our brain doesn’t let us see, because we think that it’s somebody else’s problem…. The brain just edits it out, it’s like a blind spot”.

Douglas Adams (1952–2001), Life, the Universe and Everything

The very first blog post I wrote (back in March 2013), for the Institute of Physics' now sadly defunct physicsfocus project, was titled "Are Flaws in Peer Review Someone Else's Problem?" and cited the passage above from the incomparable, and sadly missed, Mr. Adams. The post described the trials and tribulations my colleagues and I were experiencing at the time in trying to critique some seriously sloppy science, on the subject of ostensibly "striped" nanoparticles, which had been published in very high profile journals by a very high profile group. Little did I suspect at the time of writing, but that particular saga ended up dragging on and on, involving a litany of frustrations in our attempts to correct the scientific record.

I've been put in mind of the stripy saga, and that six-year-old post, for a number of reasons lately. First, the most recent stripe-related paper from the group whose work we critiqued makes absolutely no mention of the debate: it is as if our criticism, and the surrounding controversy, simply never existed.

More importantly, however, I have been following Ken Rice‘s (and others’) heated exchange with the authors of a similarly fundamentally flawed paper very recently published in Scientific Reports [Oscillations of the baseline of solar magnetic field and solar irradiance on a millennial timescale, VV Zharkova, SJ Shepherd, SI Zharkov, and E Popova, Sci. Rep. 9 9197 (2019)]. Ken’s blog post on the matter is here, and the ever-expanding PubPeer thread (225 comments at the time of writing, and counting) is here. Michael Brown‘s take-no-prisoners take-down tweets on the matter are also worth reading…

The debate made it into the pages — sorry, pixels — of The Independent a few days ago: "Journal to investigate controversial study claiming global temperature rise is due to Earth moving closer to Sun".

Although the controversy in this case is related to physics happening on astronomically larger length scales than those at the heart of our stripy squabble, there are quite a number of parallels (and not just in terms of traffic to the PubPeer site and the tenor of the authors’ responses). Some of these are laid out in the following Tweet thread by Ken…

The Zharkova et al. paper makes fundamental errors that should never have passed through peer review. But then we all know that peer review is far from perfect. The question is what should happen to a paper that is not fraudulent but nonetheless makes it to publication containing misleadingly sloppy and/or incorrect science. Should it remain in the scientific record? Or should it be retracted?

It turns out that this is a much more contested issue than it might appear at first blush. For what it’s worth, I am firmly of the opinion that a paper containing fundamental errors in the science and/or based on mistakes due to clearly definable f**k-ups/corner-cutting in experimental procedure should be retracted. End of story. It is unfair on other researchers — and, I would argue, blatantly unethical in many cases — to leave a paper in the literature that is fundamentally flawed. (Note that even retracted papers continue to accrue citations.) It is also a massive waste of taxpayers’ money to fund new research based on flawed work.

Here’s one example of what I mean, taken from personal, and embarrassing, experience. I screwed up the calibration of a tuning fork sensor used in a set of atomic force microscopy experiments. We discovered this screw-up after publication of the paper that was based on measurements with that particular sensor. Should that paper have remained in the literature? Absolutely not.

Some, however, including my friend and colleague Mike Merrifield, who is also Head of School here and with whom I enjoy the ever-so-occasional spat, have a slightly different take on the question of retractions:

Mike and I discussed the Zharkova et al. controversy both briefly at tea break and via an e-mail exchange last week, and it seems that there are distinct cultural differences between different sub-fields of physics when it comes to correcting the scientific record. I put the Gedankenexperiment described below to Mike and asked him whether we should retract the Gedankenpaper. The particular scenario outlined in the following stems from an exchange I had with Alessandro Strumia a few months back, and subsequently with a number of my particle physicist colleagues (both at Nottingham and elsewhere), re. the so-called 750 GeV anomaly at CERN…

“Mike, let’s say that some of us from the Nanoscience Group go to the Diamond Light Source to do a series of experiments. We acquire a set of X-ray absorption spectra that are rather noisy because, as ever, the experiment didn’t bloody well work until the last day of beamtime and we had to pack our measurements into the final few hours. Our signal-to-noise ratio is poor but we decide to not only interpret a bump in a spectrum as a true peak, but to develop a sophisticated (and perhaps even compelling) theory to explain that “peak”. We publish the paper in a prestigious journal, because the theory supporting our “peak” suggests the existence of an exciting new type of quasiparticle. 

We return to the synchrotron six months or a year later, repeat the experiment over and over but find no hint of the “peak” on which we based our (now reasonably well-cited) analysis. We realise that we had over-interpreted a statistical noise blip.

Should we retract the paper?”

I am firmly of the opinion that the paper should be retracted. After all, we could not reproduce our results when we did the experiment correctly. We didn't bend over backwards in the initial experiment to convince ourselves that our data were robust and reliable, and instead rushed to publish (because we were so eager to get a paper out of the beamtime). So now we should eat humble pie for jumping the gun — the paper should be retracted and the scientific record should be corrected accordingly.

Mike, and others, were of a different opinion, however. They argued that the flawed paper should remain in the scientific literature, sometimes for the reasons to which Mike alludes in his tweet above [1]. In my conversations with particle physicists re. the 750 GeV anomaly, which arose from a similarly over-enthusiastically interpreted bump in a spectrum that turned out to be noise, there was a similar reluctance to correct the scientific record. There appeared to be a feeling that only if the data were fabricated or fraudulent should the paper be retracted.
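To make the "noise blip" point a little more concrete, here is a minimal numerical sketch (entirely synthetic data and hypothetical numbers; nothing to do with either the Diamond Gedankenexperiment above or the actual LHC analyses) of how a featureless, noise-only spectrum will routinely throw up a bin that looks like a roughly 3σ "peak" if we only quote the local significance at the most exciting-looking point.

```python
import numpy as np

rng = np.random.default_rng(0)

n_bins = 200        # hypothetical number of energy bins in each spectrum
n_spectra = 10_000  # number of independent noise-only spectra to simulate

# Each "spectrum" is pure Gaussian noise (sigma = 1): there is no real peak
# anywhere. For each spectrum, record the largest single-bin excursion.
largest_excursion = np.array(
    [rng.normal(0.0, 1.0, n_bins).max() for _ in range(n_spectra)]
)

# With 200 bins the biggest bin typically sits at around 2.5-3 sigma, and
# roughly a quarter of these noise-only spectra contain at least one bin
# above 3 sigma -- despite there being nothing there at all.
print(f"median largest excursion: {np.median(largest_excursion):.2f} sigma")
print(f"fraction with a >3-sigma bin: {(largest_excursion > 3).mean():.1%}")
```

That, in miniature, is the look-elsewhere effect: scan enough bins, or enough spectra, and impressive-looking bumps are guaranteed.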

During the e-mail exchanges with my particle physics colleagues, I was struck on more than one occasion by a disturbing disconnect between theory and experiment. (This is hardly the most original take on the particle physics field, I know. I'll take a moment to plug Sabine Hossenfelder's Lost In Math once again.) There was an unsettling (for me) feeling among some that it didn't matter if experimental noise had been misinterpreted, as long as the paper led to some new theoretical insights. This, I'll stress, was not an opinion universally held — some of my colleagues said they didn't go anywhere near the 750 GeV excess because of the lack of strong experimental evidence. Others, however, were more than willing to enthusiastically over-interpret the 750 GeV "bump" and, unsurprisingly, baulked at the suggestion that their papers should be retracted or censured in any way. If their sloppy, credulous approach to accepting noise in lieu of experimental data had advanced the field, then what's wrong with that? After all, we need intrepid pioneers who will cross the Pillars of Hercules…

I'm a dyed-in-the-wool experimentalist; science should be driven by a strong and consistent feedback loop between experiment and theory. If a scientist mistakes experimental noise (or well-understood experimental artefacts) for valid data, or if they get fundamental physics wrong à la Zharkova et al., then there should be — must be — some censure for this. After all, we'd censure our undergrad students under similar circumstances, wouldn't we? One student carries out an experiment for her final year project carefully and systematically, repeating measurements, pushing her signal-to-noise ratio up, putting in the hours to carefully refine and redefine the experimental protocols and procedures, refusing to make claims that are not entirely supported by the data. Another student instead gets over-excited when he sees a "signal" that chimes with his expectations and, instead of doing his utmost to make sure he's not fooling himself, leaps to a new and exciting interpretation of the noisy data. Which student should receive the higher grade? Which student is the better scientist?

As that grand empiricist Francis Bacon put it centuries ago,

The understanding must not therefore be supplied with wings, but rather hung with weights, to keep it from leaping and flying.

It’s up to not just individual scientists but the scientific community as a whole to hang our collective understanding with weights. Sloppy science is not just someone else’s problem. It’s everyone’s problem.

[1] Mike’s suggestion in his tweet that the journal would like to retract the paper to spare their blushes doesn’t chime with our experience of journals’ reactions during the stripy saga. Retraction is the last thing they want because it impacts their brand.

 

Is Science Self-Correcting? Some Real-World Examples From Psychological Research

…or The Prognosis Is Not Good, Psychology. It’s A Bad Case Of Physics Envy*

Each year there are two seminars for the Politics, Perception, and Philosophy of Physics module that are led by invited speakers. First up this year was the enlightening, engaging, and entertaining Nick Brown, who, and I quote from no less a source than The Guardian, has an “astonishing story…[he] began a part-time psychology course in his 50s and ended up taking on America’s academic establishment.”

I recommend you read that Guardian profile in full to really get the measure of Mr. (soon to be Dr.) Brown but, in brief, he has played a central role in exposing some of the most egregious examples of breathtakingly poor, or downright fraudulent, research in psychology, a field that needs to get its house in order very soon. (A certain high profile professor of psychology who is always very keen to point the finger at what he perceives to be major failings in other disciplines should bear this in mind and heed his own advice. (Rule #6, as I recall…))

Nick discussed three key examples of where psychology research has gone badly off the rails:

    • The "critical positivity ratio" of Fredrickson and Losada (the claim that a ratio of positive to negative emotions above precisely 2.9013 predicts human flourishing), which Nick, together with Alan Sokal and Harris Friedman, showed rested on a wholesale misapplication of the Lorenz equations of chaos theory.**
    • Brian Wansink, erstwhile director of Cornell's Food and Brand Lab, whose research findings (cited over 20,000 times) have proven rather tough to digest, riddled as they are with data manipulation and other far-from-robust research practices.
    • The "audacious academic fraud" of Diederik Stapel. (Nick is something of a polymath, being fluent in Dutch among other skills, and translated Stapel's autobiography/confession, making it freely available online. I strongly recommend adding Stapel's book to your "To Read" list; I found it a compelling story that provides a unique insight into the mindset and motivations of someone who fakes their research. Seeing the ostracisation and shaming through Stapel's eyes was a profoundly affecting experience and I found myself sympathising with the man, especially with regard to the effects of his fraud on his family.)

It was a great pleasure to host Nick’s visit to Nottingham (and to finally meet him after being in e-mail contact on and off for about eighteen months). Here’s his presentation…

*But don’t worry, you’re not alone.

** Hmmm. More psychologists with a chaotic concept of chaos. I can see a pattern emerging here. Perhaps it’s fractal in nature…


 

Update 18/11/2018, 15:30. I am rapidly coming to the opinion that in the dismal science stakes, psychology trumps economics by quite some margin. I've just read Catherine Bennett's article in The Observer today on a research paper that created a lot of furore last week: "Testing the Empathizing-Systemizing theory of sex differences and the Extreme Male Brain theory of autism in half a million people", a study which, according to a headline in The Times (amongst much other similarly over-excited and credulous coverage), has shown that male and female brains are very different indeed.

One would get the impression from the headlines that the researchers must have carried out an incredibly systematic and careful fMRI study, which, given the sample size, in turn must have taken decades and involved highly sophisticated data analysis techniques.

Nope.

They did their research by…asking people to fill in questionnaires.

Bennett highlights Dean Burnett's incisive demolition of the paper and the surrounding media coverage. I thoroughly recommend Burnett's post – he highlights a litany of issues with the study (and others like it). For one thing, the idea that self-reporting via questionnaire can provide a robust, objective analysis of just about any human characteristic or trait is ludicrously simple-minded. Burnett doesn't cover all of the issues because, as he says at the end of his post: "There are other concerns to raise of course, but I'll keep them in reserve for when the next study that kicks this whole issue off again is published. Shouldn't be more than a couple of months."

Indeed.

How Not To Do Spectral Analysis 101

I will leave this here without further comment…

[Image: the figure in question, reproduced from the paper cited below]

*bangs head gently on desk and sobs quietly to himself*

Source (via Sam Jarvis. Thanks, Sam.):

The original ‘peer-reviewed’ paper is this: Găluşcă et al., IOP Conf. Ser. Mater. Sci. Eng. 374 012020 (2018)

 

 

The war on (scientific) terror…

I’ve been otherwise occupied of late so the blog has had to take a back seat. I’m therefore coming to this particular story rather late in the day. Nonetheless, it’s on an exceptionally important theme that is at the core of how scientific publishing, scientific critique, and, therefore, science itself should evolve. That type of question doesn’t have a sell-by date so I hope my tardiness can be excused.

The story involves a colleague and friend who has courageously put his head above the parapet (on a number of occasions over the years) to highlight just where peer review goes wrong. And time and again he has been viciously castigated by (some) senior scientists for doing nothing more than critiquing published data in as open and transparent a fashion as possible. In other words, he's been pilloried (by pillars of the scientific community) for daring to suggest that we do science the way it should be done.

This time, he’s been called a…wait for it…scientific terrorist. And by none other than the most cited chemist in the world over the last decade (well, from 2000 – 2010): Chad A Mirkin. According to his Wiki page, Mirkin “was the first chemist to be elected into all three branches of the National Academies. He has published over 700 manuscripts (Google Scholar H-index = 163) and has over 1100 patents and patent applications (over 300 issued, over 80% licensed as of April 1, 2018). These discoveries and innovations have led to over 2000 commercial products that are being used worldwide.”

With that pedigree, this guy must really have done something truly appalling for Mirkin to call him a scientific terrorist (oh, and a zealot, and a narcissist), right? Well, let’s see…

[Photo: Raphael Levy]

The colleague in question is Raphael Levy. Raphael (pictured to the right) is a Senior Lecturer — or Associate Professor, to use the term increasingly preferred by UK universities and traditionally used by our academic cousins across the pond — in Biochemistry at the University of Liverpool. He has a deep and laudable commitment to open science and the evolution of the peer review system towards a more transparent and accountable ethos.

Along with Julian Stirling, who was a PhD student here at Nottingham at the time, and a number of other colleagues, I collaborated closely with Raphael and his team (from about 2012 to 2014) in critiquing and contesting a body of work that claimed that stripes (with ostensibly fascinating physicochemical and biological properties) formed on the surface of suitably functionalised nanoparticles. I'm not going to revisit the "stripy" nanoparticle debate here. If you're interested, see Refs [1-5] below. Raphael's blog, which I thoroughly recommend, also has detailed bibliographies for the stripy nanoparticle controversy.

More recently, Raphael and his co-workers at Liverpool have found significant and worrying deficiencies in claims regarding the efficacy of what are known as SmartFlares. (Let me translate that academically nuanced wording: apparently, they don't work.) Chad Mirkin played a major role in the development of SmartFlares, which are claimed to detect RNA in living cells and were sold by MilliporeSigma from 2013 until recently, when they were taken off the market.

The SmartFlare concept is relatively straightforward to understand (even for this particular squalid state physicist, who tends to get overwhelmed by molecules much larger than CO): each 'flare' probe comprises a gold nanoparticle attached to an oligonucleotide (that encodes a target sequence) and a fluorophore, which does not emit fluorescence as long as it is close to the gold particle. When the probe meets the target RNA, however, this displaces the fluorophore (thus reducing the coupling to, and quenching by, the gold nanoparticle) and causes it to glow (or 'flare'). Or so it's claimed.

As described in a recent article in The Scientist, however, there is compelling evidence from a growing number of sources, including, in particular, Raphael’s own group, that SmartFlares simply aren’t up to the job. Raphael’s argument, for which he has strong supporting data (from electron-, fluorescence- and photothermal microscopy), is that the probes are trapped in endocytic compartments and get nowhere near the RNA they’re meant to target.
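To spell out why the question of endosomal escape matters so much, here is a cartoon model (my own illustration, with entirely hypothetical numbers; it is not Raphael's analysis, and certainly not Mirkin's): if the probes never escape the endosome, the measured "flare" signal tells us essentially nothing about how much target RNA the cell contains.

```python
# A toy model of the "flare" readout (hypothetical numbers throughout):
# probes stuck in endosomes contribute only background signal (e.g. from
# degradation), however much target RNA the cell contains.

def flare_signal(target_rna, fraction_escaped, background=0.05):
    """Toy readout: fraction of probes that are unquenched (i.e. glowing).

    target_rna       -- relative abundance of the target sequence (0 to 1)
    fraction_escaped -- fraction of probes that escape the endosome (0 to 1)
    background       -- non-specific signal from degraded/trapped probes
    """
    specific = fraction_escaped * target_rna   # probes that genuinely respond
    return background + (1.0 - background) * specific

for escaped in (1.0, 0.1, 0.0):
    readouts = [round(flare_signal(rna, escaped), 3) for rna in (0.0, 0.5, 1.0)]
    print(f"endosomal escape = {escaped:>4}:  signal for RNA = 0, 0.5, 1 -> {readouts}")
```

In this toy picture, an escape fraction of zero gives the same flat, background-level readout whether the target RNA is absent or abundant, which is precisely why the final question Raphael put to Mirkin (quoted below) is the one to ask.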

Mirkin, as one might expect, vigorously claims otherwise. That’s, of course, entirely his prerogative. What’s most definitely not his prerogative, however, is to launch hyperbolic personal attacks at a critic of his work. As Raphael describes over at his blog, he asked the following question at the end of a talk Mirkin gave at the American Chemical Society meeting in Boston a month ago:

In science, we need to share the bad news as well as the good news. In your introduction you mentioned four clinical trials. One of them has reported. It showed no efficacy and Purdue Pharma which was supposed to develop the drug decided not to pursue further. You also said that 1600 forms of NanoFlares were commercially available. This is not true anymore as the distributor has pulled the product because it does not work. Finally, I have a question: what is the percentage of nanoparticles that escape the endosome?

According to Raphael’s description (which is supported by others at the conference — see below), Mirkin’s response was ad hominem in the extreme:

[Mirkin said that]…no one is reading my blog (who cares),  no one agrees with me; he called me a “scientific zealot” and a “scientific terrorist”.

Raphael and I have been in a similar situation before with regard to scientific critique not exactly being handled with good grace. We and our colleagues have faced accusations of being cyber-bullies — and, worse, fake blogs and identity theft were used — to attempt to discredit our (purely scientific) criticism.

Science is in a very bad place indeed if detailed criticism of a scientist's work is dismissed aggressively as scientific terrorism/zealotry. We are, of course, all emotional beings to a greater or lesser extent. Therefore, and despite protestations to the contrary from those who have an exceptionally naive view of The Scientific Method, science is not some wholly objective monolith that arrives at The Truth by somehow bypassing all the messy business of being human. As Neuroskeptic described so well in a blog post about the stripy nanoparticle furore, professional criticism is often taken very personally by scientists, whose self-image and self-confidence can be intimately connected to the success of the science they do. Criticism of our work can therefore often feel like criticism of us.

But as scientists we have to recognise, and then always strive to rise above, those very human responses; to take on board, rather than aggressively dismiss out of hand, valid criticisms of our work. This is not at all easy, as PhD Comics among others has pointed out:

One would hope, however, that a scientist of Mirkin’s calibre would set an example, especially at a conference with the high profile of the annual ACS meeting. As a scientist who witnessed the exchange between Raphael and Mirkin put it,

I witnessed an interaction between two scientists. One asks his questions gracefully and one responding in a manner unbecoming of a Linus Pauling Medalist. It took courage to stand in front of a packed room of scientists and peers to ask those questions that deserved an answer in a non-aggressive manner. It took even more courage to not become reactive when the respondent is aggressive and belittling. I certainly commended Raphael Levy for how he handled the aggressive response from Chad Mirkin.

Or, as James Wilking put it somewhat more pithily:

An apology from Mirkin doesn’t seem to be forthcoming. This is a shame, to put it mildly. What I found rather more disturbing than Mirkin’s overwrought accusation of scientific terrorism, however, was the reaction of an anonymous scientist in that article in The Scientist:

“I think what everyone has to understand is that unhealthy discussion leads to unsuccessful funding applications, with referees pointing out that there is a controversy in the matter. Referee statements like these . . . in a highly competitive environment for funding, simply drain the funding away of this topic,” he writes in an email to The Scientist. He believes a recent grant application of his related to the topic was rejected for this reason, he adds.

This is a shockingly disturbing mindset. Here we have a scientist bemoaning that (s)he did not get public funding because of what is described as “unhealthy” public discussion and controversy about an area of science. Better that we all keep schtum about any possible problems and milk the public purse for as much grant funding as possible, right?

That attitude stinks to high heaven. If it takes some scientific terrorism to shoot it down in flames then sign me up.


[1] Stripy Nanoparticle Controversy Blows Up

[2] Peer Review In Public: Rise Of The Cyber-Bullies? 

[3] Looking At Nothing, Seeing A Lot

[4] Critical Assessment of the Evidence for Striped Nanoparticles, Julian Stirling et al, PLOS ONE 9 e108482 (2014)

[5] How can we trust scientific publishers with our work if they won’t play fair?

 

 

 

“The surface was invented by the devil” Nanoscience@Surfaces 2018


The title of this post is taken from an (in)famous statement from Wolfgang Pauli:

God made solids, but surfaces were the work of the devil!

That diabolical nature of surfaces is, however, exactly what makes them so intriguing, so fascinating, and so rich in physics and chemistry. And it’s also why surface science plays such an integral and ubiquitous role in so many areas of condensed matter physics and nanoscience. That ubiquity is reflected in the name of a UK summer school for PhD students, nanoscience@Surfaces 2018, held at the famed Cavendish Laboratory at Cambridge last week, and at which I had the immense pleasure of speaking. More on that soon. Let’s first dig below the surface of surfaces just a little.

(In passing, it would be remiss of me not to note that the Cavendish houses a treasure trove of classic experimental “kit” and apparatus that underpinned many of the greatest discoveries in physics and chemistry. Make sure that you venture upstairs if you ever visit the lab. (Thanks for the advice to do just that, Giovanni!))

[Photo: some of the classic apparatus on display at the Cavendish Laboratory]

Although I could classify myself, in terms of research background, as a nanoscientist, a chemical physicist, or (whisper it) even a physical chemist at times, my first allegiance is, and always will be, to surface science. I'm fundamentally a surface scientist. For one thing, the title of my PhD thesis (from, gulp, 1994) nails my colours to the mast: A Scanning Tunnelling Microscopy Investigation of the Interaction of Sulphur with Semiconductor Surfaces. [1]

(There. I said it. For quite some time, surface science was targeted by the Engineering and Physical Sciences Research Council (EPSRC) as an area whose slice of the public purse should be reduced, so not only was it unfashionable to admit to being a surface scientist, it could be downright damaging to one's career. Thankfully we live in slightly more enlightened times. For now.)

Pauli’s damning indictment of surfaces stems fundamentally from the broken symmetry that the truncation of a solid represents. In the bulk, each atom is happily coordinated with its neighbours and, if we’re considering crystals (as we so very often do in condensed matter physics and chemistry), there’s a very well-defined periodicity and pattern established by the combination of the unit cell, the basis, and the lattice vectors. But all of that gets scrambled at the surface. Cut through a crystal to expose a particular surface — and not all surfaces are created equal by any means — and the symmetry of the bulk is broken; those atoms at the surface have lost their neighbours.

Atoms tend to be rather gregarious beasties, so they end up in an agitated, high-energy state when they lose their neighbours. Or, in slightly more technical (and rather less anthropomorphic) terms, the creation of a surface is associated with a thermodynamic free energy cost; we have to put in work to break bonds. (If this weren't the case, objects all around us would spontaneously cleave to form surfaces. I'm writing (some of) this on a train back from London (after a fun evening at the LIYSF), having tremendous difficulty trying to drink coffee as the train rocks back and forth. A spontaneously cleaving cup would add to my difficulties quite substantially…)

In their drive to reduce that free energy, atoms and molecules at surfaces will form a bewildering array of different patterns and phases [2]. The classic example is the (7×7) reconstruction of the Si(111) surface, one of the more complicated atomic rearrangements there is. I’ve already lapsed into the surface science vernacular there, but don’t let the nomenclature put you off if you’re not used to it. “Reconstruction” is the rearranging of atoms at a surface to reduce its free energy; the (111) defines the direction in which we cut through the bulk crystal to expose the surface; and the (7×7) simply refers to the size of the unit cell (i.e. the basic repeating unit or “tile”) of the reconstructed surface as compared to the arrangement on the unreconstructed (111) plane. Here’s a schematic of the (7×7) unit cell [3] to give you an idea of the complexity involved…

[Schematic: the Si(111)-(7×7) unit cell]
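To put a rough number on what that "(7×7)" label means in real space, here is a minimal back-of-the-envelope sketch. The only input is the standard bulk silicon lattice constant; the atomic structure within the cell is, of course, far richer than this arithmetic suggests.

```python
import math

# Back-of-the-envelope numbers for the Si(111)-(7x7) unit cell.
a_bulk = 5.431                 # Si conventional cubic lattice constant (angstroms)

# Nearest-neighbour spacing within an unreconstructed Si(111) plane:
# the (1x1) surface lattice constant is a_bulk / sqrt(2).
a_1x1 = a_bulk / math.sqrt(2)  # ~3.84 angstroms

# The (7x7) cell edges are seven times longer than the (1x1) cell edges,
# so the reconstructed cell covers 49 times the area of the (1x1) cell.
a_7x7 = 7 * a_1x1              # ~26.9 angstroms

print(f"(1x1) surface lattice constant: {a_1x1:.2f} angstroms")
print(f"(7x7) unit cell edge length:    {a_7x7:.2f} angstroms")
print(f"area ratio (7x7)/(1x1):         {7 * 7}")
```

A repeat distance of nearly 27 Å, compared with the 3.84 Å spacing of the ideal (111) plane, gives some sense of just how dramatic the rearrangement is.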

The arrangements and behaviour of atoms and molecules at surfaces are very tricky indeed to understand and predict. There has thus been a vast effort over many decades, using ever more precise techniques (both experimental and theoretical), to pin down just how adsorbed atoms and molecules bond, vibrate, move, and desorb. And although surface science is now a rather mature area, it certainly isn’t free of surprises and remains a vibrant field of study. One reason for this vibrancy is that as we make particles smaller and smaller — a core activity in nanoscience — their surface-to-volume ratio increases substantially. The devilish behaviour of surfaces is thus at the very heart of nanoscience, as reflected time and again in the presentations at the nanoscience@Surfaces 2018 summer school.

Unfortunately, I could only attend the Wednesday and Thursday morning of the summer school. It was an honour to be invited to talk and I'd like to take this opportunity to repeat my thanks to the organising committee, including, in particular, Andy Jardine (Cambridge), Andrew (Tom) Thomas (Manchester), and Karen Syres and Joe Smerdon (UCLAN), who were the frontline organisers when it came to sorting out my accommodation, the necessary A/V requirements, and the scheduling logistics. My lecture, Scanning Probes Under The Microscope, was on the Wednesday morning and, alongside the technical details of the science, covered themes I've previously ranted about at this blog, including the pitfalls of image interpretation and the limitations of the peer review process.

Much more important, however, were the other talks during the school. I regretfully missed Monday’s and Tuesday’s presentations (including my Nottingham colleague Rob Jones’ intriguingly named “Getting it off and getting it on“) which had a theory and photoemission flavour, respectively. Wednesday, however, was devoted to my first love in research: scanning probe microscopy, and it was great to catch up on recent developments in the field from the perspective of colleagues who work on different materials systems to those we tend to study at Nottingham.

Thursday morning’s plenary lecture/tutorial was from Phil Woodruff (Warwick), one of not only the UK’s, but the world’s, foremost (surface) scientists and someone who has pioneered a number of  elegant techniques and tools for surface analysis (including, along with Rob Jones and other co-workers, the X-ray standing wave method described in the video at the foot of this post.)

Following Phil’s talk, there was a session dedicated to careers. Although I was not quite in the target demographic for this session, I nonetheless hung around for the introductions from those involved because I was keen to get an insight into just how the “careers outside academia” issue would be addressed. Academia is of course not the be-all-and-end-all when it comes to careers. Of the 48 PhD researchers I counted — an impressive turn-out given that 50 were registered for the summer school — only 10 raised their hand when asked if they were planning on pursuing a career in academia.

Thirteen years ago, I was a member of the organising committee for an EPSRC-funded summer school in surface science held at the University of Nottingham. We also held a careers-related session during the school and, if memory serves (…and that’s definitely not a given), when a similar question was asked of the PhD researchers in attendance, a slightly higher percentage (maybe ~ 33%) were keen on the academic pathway. While academia certainly does not want to lose the brightest and the best, it’s encouraging that there’s a movement away from the archaic notion that to not secure a permanent academic post/tenure somehow represents failure.

It was also fun for me to compare and contrast the Nottingham and Cambridge summer schools from the comfortable perspective of a delegate rather than an organiser. Here’s the poster for the Nottingham school thirteen years ago…

[Image: poster for the 2005 Nottingham surface science summer school]

…and here’s an overview of the talks and sessions that were held back in 2005:

[Image: schedule of talks and sessions from the 2005 summer school]

A key advance in probe microscopy in the intervening thirteen-year period has been the ultrahigh-resolution force microscopy pioneered by the IBM Zurich research team (Leo Gross et al.), as described here. This has revolutionised imaging, spectroscopy, and manipulation of matter at the atomic and (sub)molecular levels.

Another key difference between UK surface science back in 2005 and its 2018 counterpart is that the Diamond synchrotron produced "first light" (well, first user beam) in 2007. The Diamond Light Source is an exceptionally impressive facility. (The decision to construct DLS at the Harwell Campus outside Oxford was accompanied by a great deal of bitter political debate back in the late nineties, but that's a story for a whole other blog post. Or, indeed, series of blog posts.) The UK surface science (and nanoscience, and magnetism, and protein crystallography, and X-ray scattering, and…) community is rightly extremely proud of the facility. Chris Nicklin (DLS), Georg Held (Reading), Wendy Flavell (Manchester) and the aforementioned Prof. Woodruff (among others) each focussed on the exciting surface science that is made possible only via access to tunable synchrotron radiation of the type provided by DLS.

I was gutted to have missed Stephen Jenkins' review and tutorial on the application of density functional theory to surfaces. DFT is another area that has progressed quite considerably over the last thirteen years, in particular through the evolution of methods to treat dispersion interactions (i.e. van der Waals/London forces). It's not always the case that DFT calculations/predictions are treated with the kind of healthy skepticism befitting a computational technique in which the choice of functional makes all the difference, but, again, that's a topic for another day…

Having helped organise a PhD summer school myself, I know just how much effort is involved in running a successful event. I hope that all members of the organising committee — Tom, Joe, Andy, Karen, Neil, Holly, Kieran, and Giovanni — can now have a relaxing summer break, safe in the knowledge that they have helped to foster links and, indeed, friendships, among the next generation of surface scientists and nanoscientists.


 

[1](a) Sulphur. S.u.l.p.h.u.r. Not the frankly offensive sulfur that I had to use in the papers submitted to US journals. That made for painful proof-reading. (b) I have no idea why I didn’t include mention of photoemission in the title of the thesis, given that it forms the guts of Chapter 5. I have very fond memories of carrying out those experiments at the (now defunct) Daresbury Synchrotron Radiation Source (SRS) just outside Warrington in the UK. Daresbury was superseded by the Diamond Light Source (DLS), discussed in this Sixty Symbols video.

[2] Assuming that there’s enough thermal energy to go around and that they’re not kinetically trapped in a particular state.

[3] Schematic taken from the PhD thesis of Mick Phillips, University of Nottingham (2004).

The conference dinner chatter way of (not) correcting the scientific record

I’m reblogging this important post by Raphael Levy on the value, or lack thereof, of discussion and ‘debate’ at scientific conferences. Raphael highlights two key issues: the “behind closed doors”/”keeping it in the family” nature of scientific criticism, and, as he puts it, the play-acting that is part-and-parcel of many conference sessions. (Raphael and I, along with our colleagues at Liverpool, Nottingham, and elsewhere, spent quite some time a number of years back finding out just how much time and effort it takes to publish critique and criticism of previously published work).

Rapha-z-lab

One of the common responses of senior colleagues to my attempts to correct the scientific record goes somewhat like this:

You are giving X [leading figure in the field] too much credit anyway. We all know that there are problems with their papers. We discussed it at the latest conference with Y and Z. We just ignore this stuff and move along. Though of course X is my friend etc.

This approach is unfair, elitist and contributes to the degradation of the scientific record.

First, it is very fundamentally unfair to the many scientists who are not present at these dinner table chatters and who may believe that the accumulation of grants, prizes, and high profile papers somewhat correlates with good science. That group of scientists will include pretty much all young scientists as well as most scientists from less advantaged countries who cannot get so easily to these conferences…


Politics. Perception. Philosophy. And Physics.

Today is the start of the new academic year at the University of Nottingham (UoN) and, as ever, it crept up on me and then leapt out with a fulsome “Gotcha”. Summer flies by so very quickly. I’ll be meeting my new 1st year tutees this afternoon to sort out when we’re going to have tutorials and, of course, to get to know them. One of the great things about the academic life is watching tutees progress over the course of their degree from that first “getting to know each other” meeting to when they graduate.

The UoN has introduced a considerable number of changes to the “student experience” of late via its Project Transform process. I’ve vented my spleen about this previously but it’s a subject to which I’ll be returning in the coming weeks because Transform says an awful lot about the state of modern universities.

For now, I’m preparing for a module entitled “The Politics, Perception and Philosophy of Physics” (F34PPP) that I run in the autumn semester. This is a somewhat untraditional physics module because, for one thing, it’s almost entirely devoid of mathematics. I thoroughly enjoy  F34PPP each year (despite this amathematical heresy) because of the engagement and enthusiasm of the students. The module is very much based on their contributions — I am more of a mediator than a lecturer.

STEM students are sometimes criticised (usually by Simon Jenkins) for having poorly developed communication skills. This is an especially irritating stereotype in the context of the PPP module, where I have been deeply impressed by the quality of the writing the students submit. As I discuss in the video below (an  overview of the module), I’m not alone in recognising this: articles submitted as F34PPP coursework have been published in Physics World, the flagship magazine of the Institute of Physics.

 

In the video I note that my intention is to upload a weekly video for each session of the module. I’m going to do my utmost to keep this promise and, moreover, to accompany each of those videos with a short(ish) blog post. (But, to cover my back, I’ll just note in advance that the best laid schemes gang aft agley…)