PUNCTUATION; IS NOT. AN, OPTION.


Originally published at physicsfocus.

As a professional physicist – as I sometimes like to pretend I am – I would estimate that at least 70% of my working week is spent on words, not numbers. Many of the undergrads here at Nottingham don’t appear to be entirely comfortable with this when I point it out. Indeed, quite a few students have specifically told me that they didn’t do physics to write essays and that they will go out of their way, in terms of module choices and exam questions, to avoid having to work with words.

But not all of our students have such an adverse reaction to the more qualitative side of their subject.

I have been extremely impressed by very many of the blog posts and articles produced, as coursework, for a fourth-year module we introduced this year, “The Politics, Perception, and Philosophy of Physics”. The majority of the coursework pieces to date have been uploaded at the course blog, and the quality of writing is generally very high. And it’s not just me who thinks this: I was delighted when both Physics World and physicsfocus agreed to publish coursework articles submitted by students.

A key point about the students taking the course, however, is that they were forewarned, repeatedly, that the module was devoid of mathematics. I stressed, during an introduction to Year 4 modules at the start of the academic year, that they would be assessed on the basis of blog posts and articles they submitted. In this sense, they’re a self-selecting ‘sample’ and thus perhaps not entirely representative of the class as a whole.

On the other hand, all physics undergraduates at Nottingham, even those who take our Physics with Theoretical Physics course, are required to do experiments in Year 1 and to submit formal reports on their lab work. (All undergrads also, of course, submit project reports in later years.) The title of this blog post stems from my marking of a set of first-year lab reports a few weeks ago, where the same errors in writing cropped up time and time again. (It’s not the first time that this has happened in my 17-odd years of teaching at Nottingham…)

I’ve been meaning to put together a video which not only lays out what is expected from physics undergrads for their lab reports – which, to be fair, is often not quite as clear and well-defined as it could be – but also highlights those common failings that cause so much wear and tear on my red pen. I managed to finally get round to doing this, after literally years of procrastination, over the Christmas break and I’m including the video here. I’d very much welcome and value feedback from physicsfocus readers.

My concerns about the words-numbers divide are, however, much broader in scope than the niggles on structure, punctuation,[1] and grammar outlined in the video. Having taken on the role of undergraduate admissions tutor this year, I am now even more aware of the extent to which the A-level system exacerbates the arts-and-humanities-vs-STEM divide. I grew up in Ireland where our equivalent of the A-level system, the Leaving Certificate, makes both English and maths mandatory, and where a larger range of subjects (typically seven) is studied in the final two years of secondary school.

I was lucky to do not only all three science subjects and maths for my Leaving Certificate, but also French and English. And Irish. (Some might well say “Is fearr Gaeilge briste, ná Béarla clíste” – broken Irish is better than clever English – but then they haven’t heard my spoken Irish. Or my English, for that matter.) There are, of course, other examples of education systems where there is a greater breadth of subjects than is typically the norm in the UK – Scottish Highers, International Baccalaureate. The A-level system, on the other hand, too often means that students end up making a stark choice between the STEM and arts/humanities pathways too early. This is a great shame because it serves to entrench the ‘two cultures’ divide that CP Snow criticised so forcefully almost 60 years ago.

Simon Jenkins, the Guardian’s resident STEM-skeptic, regularly bemoans the negative perception of the value of the arts and humanities as compared to, as he sees it, the unquestioned importance of STEM subjects to society. He was on fine form on New Year’s Day, arguing in an article, “Easy to sneer at arts graduates – but we’ll need their skills”, that “a humanistic education” produces better-rounded and more creative types who “seem better equipped to use their imagination and challenge conventional wisdom”. Last year Jenkins also provoked quite some ire by arguing that STEM graduates, particularly computer scientists, lack the ability to communicate effectively.

This may perhaps come as something of a surprise to readers of physicsfocus, but I have quite some sympathy with Jenkins’ concerns about the extent to which an arts and humanities degree has been ‘devalued’ in terms of its perceived value to society (and, by extension, to the individual graduate). I have always rather disliked articles and reports proclaiming that physics is so much more intellectually challenging – i.e. ‘harder’ – than other subjects. Yes, physics is conceptually challenging. And, yes, it’s intellectually stimulating and demanding. And yes, as I’ve discussed before for physicsfocus, it requires a heck of a lot of work and effort in order to ‘get it’. But, as Dave Farmer explains in a perceptive, important, and smart post, there are many types of intelligence, and there are many types of aptitude.

There are physicists at all career levels whose analytical maths abilities are truly remarkable. But ask some of them to write 500 words which are engaging and thought-provoking, and they’re flummoxed. Echoing the points made by Farmer, a capability with mathematics is just one type of intelligence. Attempting to quantify such a multi-faceted and complex human characteristic via an aptitude in one area, or, worse, via a single ‘IQ’ value, is as ludicrous as, errmm, reducing the value of a university to a position on a league table.

An ability to communicate effectively is essential, independent of subject, discipline, or career. University physics departments across the country have for years complained about the reduction in the rigour of A-level maths, and have introduced first-year ‘refresher’ modules in order to bring incoming students up to speed in mathematical techniques. But similar primers in written communication have not been introduced. Given the lack of subject breadth of the A-level system, and the associated absence of the development of writing skills for many STEM-focused students, one could make the argument that there is an equally pressing, if not greater, need for formal teaching of written communication skills in Year 1 of a physics degree.

Where my views diverge dramatically from those of Jenkins, however, is with his argument that arts and humanities graduates are necessarily more creative than those with degrees in STEM subjects. Science is intrinsically creative and Jenkins does his important arguments about the value of the arts and humanities a great disservice by playing down to lazy stereotypes of STEM graduates.

Equally importantly, an arts and humanities degree is no guarantee of an ability to communicate concepts in a clear, engaging, and effective style. I’ll leave you with Exhibit #1 – an excerpt from the work of Prof. Karen Barad, of the Philosophy Department at the University of California Santa Cruz. (I suspect that I’ll be returning to a discussion of Barad’s work for a future physicsfocus post).

“Multiply heterogeneous iterations all: past, present, and future, not in a relation of linear unfolding, but threaded through one another in a nonlinear enfolding of spacetimemattering, a topology that defies any suggestion of a smooth continuous manifold. Time is out of joint. Dispersed. Diffracted. Time is diffracted through itself. It is not only the nature of time in its disjointedness that is at stake, but also disjointedness itself. Indeed, the nature of ‘dis’ and ‘jointedness’, of discontinuity and continuity, of difference and entanglement, and their im/possible interrelationships are at issue.”

Thanks to my colleague at Nottingham, Brigitte Nerlich, for bringing my attention to that quite remarkable piece of impenetrable writing, via this blog post.

_ _ _

[1] I’m a fan of the Oxford comma.

Image Credit: https://www.flickr.com/photos/darinrmcclure/6198246544 

Confused? Good. You might be about to learn something.


First published at physicsfocus.

It’s never the most comfortable of feelings to have some aspect of your work described as “appalling”. But that’s what greeted me yesterday morning when I scanned down the comments thread for the most recent video I’ve made with Brady Haran for his Sixty Symbols channel.

[Screenshot of the YouTube comment describing the video as “appalling”]

The source of the opprobrium? Well, the video in question was on the topic of entropy. I should have learned by now not to touch the topic of entropy with a barge-pole for a YouTube video because it’s a subject that really can’t be done justice in five or ten minutes. But I had attended a brilliant and inspiring colloquium by Daan Frenkel on entropy and self-assembly shortly before the video was filmed. As part of his talk Daan had described the pioneering work done by Sharon Glotzer’s group at the University of Michigan on the role of entropy in the self-organisation of nanoparticles. Glotzer’s group has neatly shown how entropy can be exploited to drive an ensemble of nanoparticles to an ordered state.

Yep, that’s right. Entropy produced order, not disorder. (I enthusiastically recommend Glotzer’s TEDx talk for more on this.)

I thought that this departure from the traditional view of the role of entropy would make a great subject for a Sixty Symbols video and suggested it as a topic to Brady. We filmed it and, as ever, Brady and I had some healthy and robust debate about the role of analogies and metaphor in explaining the physics. Overall I was pretty happy with how the filming went. (As I’ve discussed elsewhere, we academics do not get involved with the editing of the Sixty Symbols videos – that’s all expertly done by Brady. The first time we see the finished product is when it goes online.)

So why does YouTube commenter TheRumpus feel so strongly that the video doesn’t work? Well, you can of course read his comment for yourself but it was the final two lines which particularly resonated with me (for reasons I’ll go into below):

[Screenshot of TheRumpus’s YouTube comment]

As I’ve explained over at YouTube, Sixty Symbols videos – certainly those with which I am involved – are not meant to be tutorials or mini-lectures. No-one should expect to come away with a solid understanding of entropy on the basis of watching a YouTube video (otherwise why would we bother with setting problems, coursework, lab work, and/or projects for thermodynamics courses – or, indeed, any aspect of physics?). Sixty Symbols videos are instead a conversation with physicists about particular topics that interest and enthuse them – they represent a taster, rather than a tutorial. (I discussed my qualms about YouTube edutainment in a physicsfocus post last year.)

Leaving those points aside, TheRumpus’s comment raises a much broader and rather more subtle issue. Is adding to confusion necessarily a bad thing? Should we always avoid the possibility of confusing the audience when we’re discussing or explaining physics? Or could confusion actually aid the learning process?

That certainly seems like a rather, errmm, unhinged set of statements for a university lecturer to make. After all, don’t I aim to make my lectures as clear as possible so as to enhance student learning? Don’t I revise and re-revise the notes I give students in an attempt to eliminate any hint of ambiguity? And isn’t the quality of my teaching assessed (via, for example, Student Evaluation of Teaching questionnaires) on the basis of its clarity?

Yes to all three questions. But could this focus on eliminating confusion and ambiguity actually be doing students a disservice?

A fascinating article on exactly this topic was published in The Chronicle of Higher Education back in August. “Confuse Students to Help Them Learn” describes the work of Derek Muller, of Veritasium fame, and of Sidney D’Mello (University of Notre Dame) and Arthur Graesser (University of Memphis) on the role of confusion in learning. D’Mello and Graesser’s work challenges much of the received wisdom about teaching and learning, and I’ve made time over the past couple of weeks to read a number of their publications (which are all available here).

The title of a paper published earlier this year by D’Mello, Graesser and colleagues nails their colours to the mast: “Confusion can be beneficial for learning”. The abstract does a very good job of bringing out the key points of their study. Here’s an extract:

“Confusion is expected to be more the norm than the exception during complex learning tasks. Moreover, on these tasks, confusion is likely to promote learning at deeper levels of comprehension under appropriate conditions”

This flies in the face of everything we’re told about the characteristics of effective teaching, but, I suspect, will nonetheless chime with many physicists’ experience of how they came to understand complicated concepts in, for example, quantum theory, relativity, and – oh, let’s say – thermodynamics and statistical mechanics.

Over a decade before D’Mello et al.’s paper was published, Kurt VanLehn and co-workers had found that in order for successful learning to take place in physics, an ‘impasse’ (as they describe it) has to be reached. In other words, the student must be confused at some point in order to learn.

Or, as Derek Muller puts it in that Chronicle of Higher Education article:

“It seems that, if you just present the correct information, five things happen. One, students think they know it. Two, they don’t pay their utmost attention. Three, they don’t recognize that what was presented differs from what they were already thinking. Four, they don’t learn a thing. And five, perhaps most troublingly, they get more confident in the ideas they were thinking before.”

So, “add[ing] to the confusion about entropy”, as TheRumpus puts it, need not necessarily be a bad thing. What is of key importance, of course, is the student reaction to that confusion. We need to be very careful to ensure that the learner does not switch off entirely (and D’Mello and Graesser are at pains to stress this).

But when confusion triggers a response like the following, it’s difficult to argue that we should always aim for maximum clarity:

[Screenshot of a follow-up YouTube comment]

“Time to do some reading on this.” What more does any teacher want to hear?

Image: https://pixabay.com/en/this-way-confuse-where-to-go-way-718660/


Perplexed by Pauli

Originally published at physicsfocus.

A rather pervasive meme claiming that nothing ever really touches anything else has been circulating on the internet for a number of years. I think, although I’m not entirely certain, that it may well have its origins in an explanation by a certain Michio Kaku. This type of explanation later formed the basis of a video, You Can’t Touch Anything, from the immensely successful VSauce YouTube channel, which has now accrued nearly 3.4 million views.

I appreciate just how difficult it is to explain complicated physics for a general audience (see, for example, this article in the education issue of Physics World published earlier this year). And I also fully understand that we all goof at times – particularly, and especially, me. But Kaku has got form when it comes to over-simplifying explanations to the point of incorrectness in order to exploit the ‘Wow! Quantum! Physics!’ factor. This misleading over-simplification is similarly a hallmark of the ‘you can’t touch anything’ meme.

Why is it that this particular meme winds me up so much? (After all, there’s a universe of other, much more egregious, stuff on the internet to worry about.) Well, I think it’s mainly because it hits just a little too close to home. My research area is known as non-contact atomic force microscopy (NC-AFM) and there’s a very good reason indeed why scientists in the field draw a distinction between the non-contact and contact modes of AFM. I’ve banged on about the flaws in the meme, as I see them, to Brady Haran on a number of occasions over the last couple of years and this finally led to a video, uploaded to the Sixty Symbols channel last week, where he and I debate whether atoms touch.

If you’ll excuse the shameless self-promotion[1], of all the Sixty Symbols videos I’ve done with Brady, I’m most happy with this one. It shows science as a debate with evidence, models, and analogies being thrown into the mix to support a particular viewpoint – not as something which is “done and dusted” by the experts and passed down as received wisdom to the ‘masses’. This is exactly how science should work and how it should be seen to work. (Here’s the obligatory supporting Feynman quote: “Science is the belief in the ignorance of experts”. Stay tuned – another Feynman quote will be along soon.)

The reason I’m writing this post, however, isn’t to rake over the ashes of the debate with Brady (and the associated lengthy comments thread under the video). It’s instead to address a big, and quite deliberate, gap in the video: just how does the Pauli exclusion principle affect how atoms interact/touch/bond/connect? This is an absolutely fascinating topic that not only has been the subject of a vast amount of debate and confusion over many decades, but, as we’ll see, fundamentally underpins the latest developments in sub-molecular resolution AFM.

Beyond atomic resolution

At about the same time as the Sixty Symbols video was uploaded, and entirely coincidentally, a book chapter my colleagues and I have been working on over the last couple of months appeared on the condensed matter arXiv: Pauli’s Principle in Probe Microscopy. The Pauli exclusion principle (PEP) plays an essential role in state-of-the-art scanning probe microscopy, where images like that shown on the right below are increasingly becoming the norm. Scanning probe images of this type are captured by measuring the shift in resonant frequency of a tiny tuning fork to which an atomically (or molecularly) sharp tip is attached. As the probe is moved back and forth on nanometre and sub-nanometre length-scales, the gradient of the force between the tip apex and the molecule changes and this causes a change in the resonant frequency of the tuning fork. These shifts in frequency can be converted to an image or mathematically inverted to determine the tip-sample force. Or they can be listened to.
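In the small-amplitude limit, that frequency-to-force conversion is simple enough to sketch in a few lines. The following is purely illustrative Python (not our analysis code; the sensor parameters and the Lennard-Jones-like frequency-shift curve are order-of-magnitude assumptions for a qPlus-style tuning-fork sensor), using the standard small-amplitude relation Δf/f0 = −(1/2k)(dF/dz):

```python
import numpy as np

# Illustrative only: small-amplitude FM-AFM relation
#   df / f0 = -(1/(2k)) * dF/dz
# with assumed, order-of-magnitude qPlus sensor parameters.
f0 = 25_000.0   # resonant frequency of the tuning fork (Hz)
k = 1800.0      # sensor stiffness (N/m)

# Toy Lennard-Jones-like frequency-shift curve vs tip-sample distance z:
# attractive (negative df) at large z, repulsive (positive df) close in.
z = np.linspace(0.25, 1.0, 500)                        # nm
df = -8.0 * ((0.3 / z) ** 6 - 0.5 * (0.3 / z) ** 12)   # Hz

# Invert the 'measured' frequency shift to a force gradient
dFdz = -2.0 * k * df / f0   # N/m

print(f"force gradient ranges from {dFdz.min():.2f} to {dFdz.max():.2f} N/m")
```

For realistic oscillation amplitudes the full inversion is rather more involved, but the principle – frequency shift in, force (gradient) out – is the same.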

[Figure: ball-and-stick model (left) and AFM image (right) of NTCDI molecules]

The image shown above is from recent work by our group at Nottingham but I really must name-check the researchers who pioneered this type of ultrahigh resolution imaging: Leo Gross and co-workers at IBM Zurich. Leo and his colleagues first demonstrated that it is possible to acquire AFM images of molecules where the entire chemical architecture can be visualised. The images show a remarkable, and almost eerie, similarity to the textbook ball-and-stick molecular models so familiar to any scientist. Compare the experimental image of NTCDI molecules on the right above to the ball-and-stick diagram on the left where grey, blue, and red spheres represent carbon, nitrogen, and oxygen atoms respectively. These exceptionally detailed images of molecular structure[2] are acquired by exploiting the repulsion of electrons due to the Pauli exclusion force at very small tip-sample separations.

I was explaining all of this to a class of first-year undergraduate students last year, stressing that the repulsion we observe at small tip-sample separations – and, indeed, the repulsion ultimately responsible for the reaction force keeping them from falling through their seats – is not simply due to electrostatic repulsion of ‘like’ charges. I wound up the lecture, chucking in the throwaway remark, “…of course, the force due to Pauli exclusion isn’t really a force like any other. You’ll cover this in quantum statistics next year.”

By the time I’d got back to my office, two email messages from students in the lecture had already made their way to my inbox: “If it isn’t a force like any other, then what the heck is it?”

That’ll teach me to be flippant with the first-years. It’s a great question. Where does the repulsive force due to Pauli exclusion come from – just why is it that electrons don’t want to be ‘squeezed’ into the same quantum state?

The Quantum Identity Crisis

Ultimately, the Pauli exclusion principle has its origins in the indistinguishability of electrons. (Well, OK, fermions – but let’s stick with the PEP in the context of force microscopy.) One frustrating aspect of the discussions of quantum statistics in various textbooks, however, is that the terms ‘identical’ and ‘indistinguishable’ are too often assumed to be synonymous. Electrons are certainly identical in the sense that their ‘internal’ properties such as mass and charge are the same, but are they really indistinguishable?

Fleischhauer had this to say in a fascinating commentary published a few years ago:

“In the quantum world, particles of the same kind are indistinguishable: the wavefunction that describes them is a superposition of every single particle of that kind occupying every allowed state. Strictly speaking, this means that we can’t talk, for instance, about an electron on Earth without mentioning all the electrons on the Moon in the same breath.”

Well, in principle, yes, we should consider the entire multi-particle ‘universal’ wavefunction. But I’m a dyed-in-the-wool, long-of-tooth and grizzled-of-beard experimentalist. I want to see evidence of this universal coupling. And you know what? As hard as I might look, I’m never going to find any experimental evidence that an electron on the Moon has any role at all to play in a force-microscopy experiment (or a chemical reaction, or an intra-atomic transition, or…) involving electrons on Earth.

I’ll stress again that in principle, the electrons are indeed indistinguishable as there is always some finite wavefunction overlap, because there is no such thing as the infinite potential well which is the mainstay of introductory quantum physics courses. In this sense, an electron on Earth and an electron on the Moon (or on Alpha Centauri) are indeed ‘coupled’ to some degree and arguably ‘indistinguishable’. But the degree of wavefunction coupling and associated energy splitting are so incredibly tiny and utterly negligible – if I can be forgiven the understatement – that, in any practical sense, the electrons are completely distinguishable.

(Some of you might at this point be having a déjà vu moment. This is possibly connected to the (over-)heated debate that stemmed from Brian Cox’s discussion of the exclusion principle in a BBC programme a few years ago. Brian caught a lot of online flak for his explanation, some of it rather too rant-y and intemperate in tone – even for me. One of the best analyses of the furore out there is a pithy blog post by Jon Butterworth for the Guardian – very well worth a read. My colleagues at Nottingham have also discussed the controversy, and Telescoper asked if Brian had Cox-ed up the explanation of the exclusion principle.)

It is only when there is appreciable wavefunction overlap, as when the atom at the very end of the AFM tip is moved very close to a molecule underneath it (or, equivalently, in a chemical bond), that the PEP ‘kicks in’ in any appreciable way. If you want to know just why indistinguishability and the exclusion principle are so intimately connected, and how electron spin plays into all of this, I’m afraid I’m going to have to refer you to Section 1.3 of that book chapter and references therein. If you’re willing to take my word for it for now, however, read on.

Fourier and Force

Let’s cut to the chase and elucidate where that repulsive ‘Pauli force’ comes from. The PEP tells us that we can’t have two electrons with the same four quantum numbers, i.e. we can’t ‘push’ them into the same quantum state. But just how does this give rise to a force between two electrons that is beyond their ‘natural’ electrostatic repulsion? Let’s strip the problem right down to its bare bones and consider a simple gedanken experiment.

Take two electrons of the same fixed spin separated by a considerable distance from each other. We’re going to move those electrons together until their wavefunctions overlap. As the electrons get closer their wavefunctions effectively change shape so that the mutual overlap is minimal – this is the PEP in action. The figure below, adapted from a description of the origin of the exclusion principle by Julian Su, schematically illustrates this effect.

[Figure: schematic wavefunctions with the exclusion principle ‘switched off’ (left) and ‘switched on’ (right), adapted from Julian Su]

On the left-hand side the electron wavefunctions are not constrained by the exclusion principle, while on the right the PEP has been ‘switched on’. The essential point is this: the exclusion principle causes wavefunction curvature to increase. Because the kinetic energy (KE) of an electron is directly proportional to wavefunction curvature via the KE operator in quantum mechanics, increased curvature means increased kinetic energy. Or – and this is the description I much prefer because it’s yet another example of the beauty and elegance of Fourier transforms – higher wavefunction curvature requires the introduction of higher spatial frequency (i.e. higher momentum) contributions in Fourier space. It is this change in the momentum distribution which gives rise to the Pauli repulsion force.
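That KE increase is easy to see in a toy calculation. The sketch below (my own illustrative Python, not taken from the book chapter; all parameters are arbitrary choices in atomic units with ħ = m = 1) compares the total kinetic energy of two same-spin electrons in overlapping Gaussian orbitals for a simple product state (‘exclusion principle off’) and the antisymmetrised state (‘exclusion principle on’):

```python
import numpy as np

# Toy 1D demonstration (atomic units): antisymmetrising the two-electron
# wavefunction raises the total kinetic energy once the orbitals overlap.
x = np.linspace(-12.0, 12.0, 4001)
dx = x[1] - x[0]

def orbital(centre, sigma=1.0):
    """Normalised Gaussian orbital centred at `centre`."""
    g = np.exp(-((x - centre) ** 2) / (4.0 * sigma ** 2))
    return g / np.sqrt(np.sum(g ** 2) * dx)

def t_elem(a, b):
    """Kinetic-energy matrix element <a| -1/2 d^2/dx^2 |b>, finite differences."""
    d2b = np.gradient(np.gradient(b, dx), dx)
    return -0.5 * np.sum(a * d2b) * dx

def ke_increase(d):
    """KE(antisymmetric) - KE(simple product) for orbitals separated by d."""
    a, b = orbital(-d / 2), orbital(+d / 2)
    S = np.sum(a * b) * dx                      # overlap integral
    Taa, Tbb, Tab = t_elem(a, a), t_elem(b, b), t_elem(a, b)
    T_prod = Taa + Tbb                          # 'exclusion principle off'
    T_anti = (Taa + Tbb - 2.0 * S * Tab) / (1.0 - S ** 2)  # antisymmetrised
    return T_anti - T_prod

far, near = ke_increase(8.0), ke_increase(1.0)
print(f"KE increase: {far:.6f} (well separated) vs {near:.4f} (overlapping)")
```

With the orbitals well separated the two energies are essentially identical; once they overlap, the antisymmetrised state pays a kinetic-energy penalty. Equivalently, its momentum distribution acquires more weight at high spatial frequencies, and that penalty is the ‘Pauli repulsion’.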

While this captures some of the essence of the exclusion principle (and certainly is enough to provide important insights into what’s going on in force microscopy experiments), it doesn’t even begin to scratch the surface of the underlying physics. I suspect that Pauli himself would dismiss all of the above with his trademark “… es ist nicht einmal falsch” (“it is not even wrong”). He said in his Nobel prize lecture of 1946 that “I was unable to give a logical reason for the Exclusion Principle or to deduce it from more general assumptions…” Almost two decades later, Feynman had the following to say:

“It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation. The explanation is deep down in relativistic quantum mechanics. This probably means that we do not have a complete understanding of the fundamental principle involved.”

Like Feynman, I remain somewhat perplexed by Pauli’s principle.


_ _ _

[1] In these social-media-enabled times, I guess that shameless self-promotion has become the academic’s stock-in-trade. Perhaps, however, it was ever thus.

[2] There has, however, been a great deal of controversy of late as to the origin of the intermolecular features observed in AFM images by a number of groups, including ourselves. See our chapter on the role of the exclusion principle in probe microscopy for more detail.