Are you ready for the country?*

My friend and colleague Peter Milligan very kindly put together a three-CD compilation of country music for me a little while ago, as a “Country for Dummies”-esque introduction to the genre. As some of you may know, my preferred musical tastes and tipples lie somewhat (though definitely not exclusively — see editor’s interjections below) towards the heavier end of the spectrum. Peter’s “mix tape” was therefore a little… challenging. I tried. Lord knows, I tried. But I had to reluctantly admit defeat to Peter.

It’s possible that my over-exposure to the brutal form of bastardised aural assault that is Irish Country & Western (aka Country and Irish) — a firm favourite with my co-workers during my summer jobs when I was a teenager — has inoculated me against all forms of country music. In any case, I really don’t think I’m missing much. Peter begs to differ and makes the case for country in the following guest post…


I’ve Sold My Saddle For An Old Guitar

At the end of April I received my guitar back from Philip as he had borrowed it to play at an outreach event.  We started talking about open mic nights and how we had yet to perform at the same one.  I stated that I only did country songs at open mic nights and Prof. Moriarty expressed surprise and a “not getting” of said genre.

Not entirely legally, over the following weekend I constructed a “country music sampler” for your favourite nanoscientist and, as is often the way with me, it stretched to three CDs.  I didn’t include anything by Johnny Cash as his cover of “Hurt” had already been discussed as a “good song”.  [Johnny Cash’s version of Hurt is one of the most affecting songs I have ever heard. The first time I saw the video, I was in floods of tears. Even now, after countless re-listens and re-viewings, I still find myself welling up. PJM.] Bearing in mind the genre that song emanates from, I knew I had a challenge ahead of me to change your favourite metaller’s attitude to a form of music that has plenty of preconceptions, some of which are valid.

Cry Cry Cry

After a few weeks, I received the news today I had kind of been expecting.  Philip said that, try as he might, he had found my Country Music Sampler “unlistenable”.  He is not the first metal fan to evince such views to me; indeed, that genre is known for its intolerance of any other kind of music [Sorry, Peter, but my pentagram-encrusted metal soul screams out at the injustice of this! I know many metal fans whose tastes, like mine, run from Aretha to Zappa, via, as just a handful of genre-spanning examples, Beethoven, Miles Davis, Kate Bush, Fear Factory, Duran Duran, Christy Moore, The Cure, The Smiths, Bowie, The Beatles. And The Shadows. (Examples all lifted from a scroll down my iTunes library.) PJM.]  but I have to be honest, that wasn’t his primary reason for finding the music of Hank Williams et al. unpalatable.  It seems the faux country music of his past has so poisoned his aural palate that he cannot even countenance a collection with an admittedly liberal interpretation of “country”.

So how did we get here and what is to be done?

Dropkick Me Jesus Through The Goalposts of Life

There is no genre of music as misunderstood or as reviled as country music.  There is also no genre of music as brutally honest as country music; there is nowhere to hide in country — your soul is bared wide open for all to see [Beg to differ. Four words: Fell On Black Days. (And while we’re on the subject of Chris Cornell’s incomparable, incredible talent, Johnny Cash’s version of Soundgarden’s “Rusty Cage” is everything country should be…) PJM.]

Misery, pain and addiction are well-trodden themes in country […and metal. It seems that misery really does love company. PJM] and my sampler contained them all.  It’s a good idea to see your country heroes live when you get the chance, as they have a habit of not hanging around.  Whilst most country artists are white, they are not all men.  I don’t think any musical genre really ticks both of those diversity boxes.  Modern feminism and the #MeToo movement are all very well, but Tammy, Dolly and Loretta were doing it forty years ago.

It’s Five O’Clock Somewhere

My own journey to country music has rather strange beginnings.  I was always an indie kid as a teen with a side interest in blues.  University in the early 90s saw my tastes grow into the blossoming lo-fi/alt.rock scene in the US, and Crooked Rain, Crooked Rain by Pavement became a pivotal record in my life.  They became my favourite band and I was happy to discover that several of my friends in my hometown coincidentally liked them too.  When I suggested that “Range Life” from Crooked Rain, Crooked Rain was a cool song, one of my friends suggested that I listen to Silver Jews, who were a “Pavement spin-off band” – pedantic note: the converse is actually true – as they were like a countrified version of Pavement.  So I started to buy Silver Jews records as well – in those days, I bought vinyl, and was called a Luddite for doing so; funny how fashions have changed – and started to listen to the various incarnations of Will Oldham along with Sparklehorse, Whiskeytown, Wilco and other bands straddling the indie/country borderline.

I had CDs too as some of these albums were not available in the UK on vinyl in those days.  I recall spending (a lot of) time during my PhD studies at the Daresbury Laboratory doing X-ray experiments on copper crystals with sulphur-containing aromatic molecules stuck to them.  I had packed “Strangers’ Almanac” by Whiskeytown and a twofer of GP/Grievous Angel by Gram Parsons for listening to whilst my PhD supervisor and I were doing our long shifts at the laboratory.  Whilst my boss was seen to tap his feet to Strangers’ Almanac from time to time, he did complain that he didn’t like it very much.  The Gram Parsons stuff, though, was a different kettle of sturgeon altogether.  On hearing me talk about it, he actually thought that I was going on about Alan Parsons, an artiste his brother liked.

Gram Parsons

Up Against The Wall Redneck Mother

In those days, the internet was far more rudimentary than it is today; indeed, social networking, YouTube and Wikipedia did not exist – oh that they did for the red-eyed long hours of synchrotron work! [Peter and I share a love of beamtime at large-scale facilities. PJM]  So in order to discover more about country music, I was reliant mostly on the music press, whose veracity was, and still is, decidedly variable – not that the internet is much better.  One thing they all agreed on, though, was that Gram Parsons was the man.  I picked up the Gram Parsons disc second-hand in a now-defunct record shop in the West End of Glasgow.  I then discovered that this disc contained “proper country music”, i.e. with violins and pedal steel guitars, that actually sounded like something your grandparents would listen to.  There was no way my supervisor was hearing this.  I kept it turned down so that no-one else could hear it, but eventually I kind of got into it.

Then a CD came along that changed it all for me.  Sounds of the New West was a free CD with Uncut magazine in 1998 and it became an almost constant companion.  I was interested to see it include Silver Jews and Will Oldham, and it introduced me to Willard Grant Conspiracy, Freakwater, Lambchop and The Handsome Family.  I now have several albums by most of these and have seen some of them live; indeed, a track from each is included in your favourite caffeine junkie’s sampler, though all were sadly deemed “unlistenable”.  Although released later, the Beyond Nashville discs are similar in approach, although they also contain older material.  I should also point out that the sampler contained such country music luminaries as Hank Williams, Patsy Cline, Merle Haggard, Buck Owens, Kris Kristofferson and Willie Nelson – I do after all lean toward the “outlaw country” sub-genre.

The Dark End Of The Street

When I moved to Nottingham, a friend who was part of the folk scene started to get into country music also.  He posted on Facebook, tagging me, that he “got it” and that the triumvirate of country music as far as he was concerned was Gram Parsons, Gene Clark and Townes Van Zandt.  A trio of more drugged-out nihilists you could never hope to meet.  Like many others, including Teenage Fanclub, I worship at the altar of Gene Clark.  He wrote music of great beauty, sensitivity and fragility.  His “No Other”, whilst not technically a country record, is one of those “lost classics” that the critics purr over.

As to what constitutes country music: well, my judgement is as subjective as anyone else’s.  I included Creedence, Stones, Scott Walker, Grateful Dead, R.E.M., The Band and even Pavement on the scanning probe doyen’s compilation, and each one is a country song.  Alas, even they were too country for your favourite teetotal vegetarian. [It’s a question of the genre/style. Not the band. Each of those bands has produced music that I enjoy. PJM.]

This Drinkin’ Will Kill Me

How rock and roll is country music?  Hank’s heart gave out at 29.  Gram Parsons checked out at 26 after an OD; his manager stole his body, drove it into Joshua Tree National Park and set fire to it.  Johnny Cash gave us the immortal, most rock and roll line of all time: “I shot a man in Reno, just to watch him die”.

When my wife and I got married we decided, rather tongue-in-cheekily, to name the dinner tables after country singers.  I am very proud to have sat at a top table entitled “The Hank Williams Table**”, and my eldest step-daughter demanded that her table be named after Dolly Parton.  Hank Williams is the undisputed king of country music.  His legacy is a rich tapestry.  Go discover***.


Thanks to my wife, Dawn, and to Dom, Mark, and Jimmy, who also received the sampler and gave me some valuable feedback.

If you want to chat about country music or receive a sampler, leave a comment and we’ll get back to you!

*     from the Neil Young album, “Harvest”, Reprise, 1972

**    my soon-to-be father-in-law proceeded to tell everyone that Hank Williams was the guitar player in The Shadows.

***   The Gilded Palace of Sin is as good a place as any to start!




At sixes and sevens about 3* and 4*

The post below appears in today’s Times Higher Education under the title “The REF’s star system leaves a black hole in fairness.” My original draft was improved immensely by Paul Jump’s edits (but I am slightly miffed that my choice of title, above, was rejected by the sub-editors). I’m posting the article here for those who don’t have a subscription to the THE. (I should note that the interview panel scenario described below actually happened. The question I asked was suggested in the interview pack supplied by the “University of True Excellence”.)

“In your field of study, Professor Aspire, just how does one distinguish a 3* from a 4* paper in the research excellence framework?”

The interviewee for a senior position at the University of True Excellence – names have been changed to protect the guilty – shuffled in his seat. I leaned slightly forward after posing the question, keen to hear his response to this perennial puzzler that has exercised some of the UK’s great and not-so-great academic minds.

He coughed. The panel – on which I was the external reviewer – waited expectantly.

“Well, a 4* paper is a 3* paper except that your mate is one of the REF panel members,” he answered.

I smiled and suppressed a giggle.

Other members of the panel were less amused. After all, the rating and ranking of academics’ outputs is serious stuff. Careers – indeed, the viability of entire departments, schools, institutes and universities – depend critically on the judgements made by peers on the REF panels.

Not only do the ratings directly influence the intangible benefits arising from the prestige of a high REF ranking, they also translate into cold, hard cash. An analysis by the University of Sheffield suggests that in my subject area, physics, the average annual value of a 3* paper for REF 2021 is likely to be roughly £4,300, whereas that of a 4* paper is £17,100. In other words, the formula for allocating “quality-related” research funding is such that a paper deemed 4* is worth four times one judged to be 3*; as for 2* (“internationally recognised”) or 1* (“nationally recognised”) papers, they are literally worthless.

We might have hoped that before divvying up more than £1 billion of public funds a year, the objectivity, reliability and robustness of the ranking process would be established beyond question. But, without wanting to cast any aspersions on the integrity of REF panels, I’ve got to admit that, from where I was sitting, Professor Aspire’s tongue-in-cheek answer regarding the difference between 3* and 4* papers seemed about as good as any – apart from, perhaps, “I don’t know”.

The solution certainly isn’t to reach for simplistic bibliometric numerology such as impact factors or SNIP indicators; anyone making that suggestion is not displaying even the level of critical thinking we expect of our undergraduates. But every academic also knows, deep in their studious soul, that peer review is far from wholly objective. Nevertheless, university senior managers – many of them practising or former academics themselves – are often all too willing, as part of their REF preparations, to credulously accept internal assessors’ star ratings at face value, with sometimes worrying consequences for the researcher in question (especially if the verdict is 2* or less).

Fortunately, my institution, the University of Nottingham, is a little more enlightened – last year it had the good sense to check the consistency of the internal verdicts on potential REF 2021 submissions via the use of independent reviewers for each paper. The results were sobering. Across seven scientific units of assessment, the level of full agreement between reviewers varied from 50 per cent to 75 per cent. In other words, in the worst cases, reviewers agreed on the star rating for no more than half of the papers they reviewed.

Granted, the vast majority of the disagreement was at the level of a single star; very few pairs of reviewers were “out” by two stars, and none disagreed by more. But this is cold comfort. The REF’s credibility is based on an assumption that reviewers can quantitatively assess the quality of a paper with a precision better than one star. As our exercise shows, the effective error bar is actually ±1*.

That would be worrying enough if there were a linear scaling of financial reward. But the problem is exacerbated dramatically by both the 4x multiplier for 4* papers and the total lack of financial reward for anything deemed to be below 3*.
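To make that nonlinearity concrete, here is a minimal Python sketch of the weighting scheme described above. The monetary figures are the approximate physics values quoted earlier; they are purely illustrative, not official per-paper prices.

```python
# A minimal sketch of the QR funding weights described above, using the
# approximate per-paper physics figures quoted earlier (illustrative only):
# papers rated 2* or below earn nothing, and a 4* paper is worth roughly
# four times a 3* paper.

ANNUAL_VALUE = {4: 17_100, 3: 4_300, 2: 0, 1: 0, 0: 0}  # pounds per paper per year

def funding_swing(true_stars, error=1):
    """Range of annual QR value if the awarded rating is true_stars +/- error."""
    low = ANNUAL_VALUE[max(true_stars - error, 0)]
    high = ANNUAL_VALUE[min(true_stars + error, 4)]
    return low, high

# With the +/- 1* error bar found in the Nottingham exercise, a genuinely
# 3* paper can be worth anywhere from nothing to 17,100 pounds a year.
print(funding_swing(3))  # (0, 17100)
```

The point of the sketch is simply that, under a zero/zero/one/four weighting, a one-star rating error is never a small perturbation: it either wipes out a paper’s value entirely or quadruples it.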

The Nottingham analysis also examined the extent to which reviewers’ ratings agreed with authors’ self-scoring (let’s leave aside any disagreement between co-authors on that). The level of full agreement here was similarly patchy, varying between 47 per cent and 71 per cent. Unsurprisingly, there was an overall tendency for authors to “overscore” their papers, although underscoring was also common.
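For readers curious how a “full agreement” figure like those above is computed, here is a hedged sketch; the paired star ratings below are invented for illustration and are not the Nottingham data.

```python
from collections import Counter

def agreement_profile(ratings_a, ratings_b):
    """Fraction of papers with exact agreement between two reviewers,
    plus the distribution of absolute star differences."""
    diffs = Counter(abs(a - b) for a, b in zip(ratings_a, ratings_b))
    return diffs[0] / len(ratings_a), dict(diffs)

# Hypothetical paired star ratings for eight papers (not real REF data):
reviewer_1 = [4, 3, 3, 2, 4, 3, 4, 2]
reviewer_2 = [3, 3, 4, 2, 4, 2, 4, 3]
full_agreement, spread = agreement_profile(reviewer_1, reviewer_2)
print(full_agreement)  # 0.5, i.e. the reviewers agree on only half the papers
```

The same profile also exposes the size of the disagreements, which is exactly the ±1* question raised above: whether the non-agreeing pairs differ by one star or by more.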

Some argue that what’s important is the aggregate REF score for a department, rather than the ratings of individual papers, because, according to the central limit theorem, any wayward ratings will “wash out” at the macro level. I disagree entirely. Individual academics across the UK continue to be coaxed and cajoled into producing 4* papers; there are even dedicated funding schemes to help them do so. And the repercussions arising from failure can be severe.

It is vital in any game of consequence that participants be able to agree when a goal has been scored or a boundary hit. Yet, in the case of research quality, there are far too many cases in which we just can’t. So the question must be asked: why are we still playing?

Language trouble in brain science and psychology

It’s time for another guest post. This time I’m delighted to introduce Elric Elias, PhD candidate at the University of Denver, cognitive neuroscientist, and a fellow metal fan. His dissertation work pits two fundamental computational methods, each leveraged by the visual system of the human animal brain, against each other in a duel to the death. In so doing, he hopes to learn about how the visual system turns a potentially infinite amount of incoming visual information into useful, adaptive output. He has been published in multiple scientific journals, and has shared his work at several academic conferences. He lives with his best friend Ladybird (also an animal—dog) in Denver, Colorado. You can contact him at his first name dot last name; gmail.

If you haven’t watched the Sixty Symbols video in which Dr. Moriarty gets testy about the idea that physical objects never really “make contact”, quit jackassing around and do it. It’s great for a list of reasons. Near the top of that list is how forcefully and clearly it demonstrates language’s incredibly important role in science. Language matters, as they say. Now that’s not a new idea, but I may be approaching it from an angle you’re not used to. I’ll be talking about two instances in which, I think, sloppy language leads to problems in brain science and psychology. Read on.

Culture, technology, song lyrics that don’t really make sense but evoke an emotional reaction nonetheless: language is pretty useful and potent stuff. On the other hand, though, it’s a really imprecise medium. I mean, physicists tell me that the standard model can be captured on the back of an envelope (right, Dr. Moriarty?). So much explanatory power, so much precision… all on a scrap of paper. Of course, that’s only true if it’s written in mathematical notation, not English. The book that linguistically describes that notation to a layperson is considerably less efficient and precise (no offense to the author, thank the imprecision of language!).

This is why, when scientists speak, they choose their words very carefully. They even (hopefully) take the time to meticulously define their terms before they use them. To the non-scientist, this sometimes looks like nerdy pedantry. And proudly, in part it is. But nerdy pedantry minimizes the ambiguity of language. When Dr. Moriarty says the word “contact” to another physicist, each understands that word to mean precisely the same thing. Not only is that useful, it’s necessary. In the absence of nerdy pedantry, you end up with one group of people hollering “objects never make contact” and another group saying “but they do, though”. Round and round, for EVER. Turns out, the two groups had different notions of what “contact” meant all along (p.s., trust the physicist’s definition).

I’m a scientist in the sprawling field called psychology. More informatively, I’m a cognitive neuroscientist, or a vision scientist. I try to figure out how you see. Not with your eyes, but with your brain. So, the science I’m most familiar with deals with the brain, animal behavior (in my case, human behavior), and in general, the… ahem… “mind”. Ah. A familiar bit of language. One that you normally encounter and pass right over. The mind. Yeah, the mind. You know what that is, right? So—what is it?

The “mind and brain”

Well, “mind” is a term that you can find in plenty of popular representations of psychology and brain science. Sometimes the idea is that the “mind” is different from the brain (yes, that’s the stubborn specter of dualism). Sometimes people take a more agnostic stance, and wonder about the relationship between the mind and brain. Non-specialists might casually mention “the brain and the mind”, or even more teeth-grindingly, “my brain” (what, exactly, is the “owner” of the brain supposed to be… other than the brain itself?). Regardless, this much is almost always true: people treat the existence of “minds” as a given. It’s self-evident. We might wonder about its relationship to the brain, or assert that the two are different, but the existence of minds is just… too obvious to even bother questioning.

Brain scientists and psychologists are a bit different. Much of the time, they are careful about how they use brain/mind language. As an excellent example, see this very readable and reasonable take on conscious awareness—conscious “minds”. But occasionally, language that seems to assume the existence of minds, distinct from brains, seeps into more formal scientific settings (if you don’t want to scour that last article for “mind and brain” language, just take a look at the name of the journal it was published in). Sometimes, “mind and brain” language seeps into conversation among fellow scientists, or between scientists and students (you can take my word for that last assertion, or decline). Ok, fine. Maybe non-specialists sometimes assume the existence of minds, apart from brains, and sometimes even specialists do too. So what?

Well, since I’ve just spent the better part of four paragraphs talking about how carefully scientists use language, if a scientist distinguishes the mind from the brain, you might suppose that there’s a very good reason for doing so. What is that reason?

Beats the crap out of me. All the evidence brain science has gathered—and we have certainly gathered some—points towards this: every passing thought, every lingering emotion, every sensation, every dream, every decision, every moral indignation, every space-out, EVERYTHING you have ever experienced has been your brain doing its thing. Hell, “you” are a brain doing its thing. Neurons firing, populations of cells reacting to input from the external world or from other cells. Although the details are insanely complicated and no one claims to understand it all, every time brain scientists “look under the hood” and try to catch a glimpse of a thought, or a feeling, or an idea, or a plan, or an identity, what we observe is a brain making computations. Every. Single. Time. Nothing more, and nothing less. We have never, ever, observed or measured a “mind” in the absence of a brain. The evidence isn’t just correlational, though. Damage area X in the brain and observe a reliable change in the “mind”. Stimulate area Y in the brain and observe the conscious experience of mental state Z. Sure, it feels like my thoughts and feelings are somehow different from my brain, which is, after all, a three-pound lump of jellyish meat (decidedly not qualitatively similar to a thought or emotion). But even a mediocre psychologist will tell you that introspection is an insanely unreliable way to get at what’s true. So what’s my point?

Well, language matters, remember? If a scientific field goes around using a term (the “mind”, say), the onus is on them to provide evidence for the existence of that construct. I see plenty of evidence for the existence of brains. I am aware of zero evidence that suggests that minds are something above and beyond a brain over time. Instead, brains that are active over time are the conscious experiences we colloquially refer to as “the mind”. That’s what a “mind” is. Brain activity over time. Nothing more, nothing less. If you do not agree, I am open to evidence that the mind and brain are dissociable! Good luck.

My own field needs to be clear and consistent about what constitutes the “mind”: brain activity over time. Else don’t use the damned word. Language is imprecise enough. Good scientists should do their best to minimize that imprecision, not to keep it moist and let it fester. No more implying that the “mental” is separate from the physical. The “mental” is physical. Like it or not, all the evidence points in that direction. Sloppy language that implies otherwise just keeps the stubborn specter of dualism well-fed.

Humans and animals

Let’s turn away from brains and minds and instead think about humans and animals. Psychology sometimes uses “animal models” to help us understand how brains work; often the goal is to infer how human brains work by studying how animal brains work. Psychology departments sometimes offer “animal cognition” courses, in which you can learn about the brains of birds or monkeys or rats or other amazing critters. Sometimes animal cognition or behavior is a program unto itself. Certainly representations of popular science use this kind of language (check out this double-whammy). There are humans, and then there are animals. Nothing contentious so far. Nothing worth getting your blood pressure up for, right?

Damnit, humans are animals! Every model in psychology is an animal model, including the ones that describe humans! What else would they be, mineral models? Gas models? The entire field of psychology is about animal cognition and animal behavior! Imagine if I told you this: “humans and women are capable of producing pretty good death metal music”. You would rightly punch me right in the mouth. The distinction between “humans” and “women” is grossly incorrect at best, incorrect and value-laden at worst. Likewise, the phrase “humans and animals” is sloppy nonsense, imprecise and potentially laden with value judgment.

Now, there’s no doubt that the human animal brain has some pretty unique capacities. But that’s true of all species. By creating linguistic divides that do not reflect the way nature really seems to be (e.g., human/animal, mind/brain), we map our own biases and value-judgments onto our understanding of the world. That is true whether a scientist is using the imprecise language or whether a non-expert is.

No more sloppiness!

There’s no excuse for such sloppy language in science. Sloppy imprecision is everything we’re not. At least, I really hope we’re not. As an interesting side-note, I don’t think it’s sufficient for a scientific field to be internally consistent. For example, let’s say that some middling evidence could be interpreted such that minds might exist absent brains (it doesn’t, that I know of, but pretend). And further imagine that psychologists came up with some theory that accounted for this interpretation. Their theory hung together with other theories in psychology; the field was consistent, no obvious contradictions. Well, that wouldn’t be good enough. Ultimately, theories have to be consistent across scientific disciplines. Chemistry is consistent with particle physics. Biology with chemistry. Psychology with biology. And to close the loop, psychology must ultimately be compatible with physics. If “minds”, above and beyond brains, are posited to exist, their existence would have to be consistent with physics, not just with other theories in psychology. I’m not sure that can be done, though I’m confident that hasn’t ever been done. Perhaps more on that in the future.

In sum:

Dear Psychology,

                No more sloppy language. All available evidence suggests that “minds” are brains doing their thing over time. Nothing more, nothing less. And, damnit, humans are animals. Nothing more, nothing less. Avoid the word “mind” unless you’re clear about what you mean. Say “human animals” or “non-human animals”, because that language is more precise and correct. Precision is worth the extra keystrokes. Let’s be the good examples; maybe it will spread.


Elric Elias

Why we need Pride

I’m reblogging Peter Coles’ post on just why the idea of “Straight Pride” is such a pathetic notion. Despite all their interminable whining about snowflakes, there is nothing quite as fragile, delicate, and insecure as those who rail against diversity at any available opportunity. (And, of course, the legend in his own lunchtime that is Milo Yiannopoulos was first in the queue to support the “Straight Pride” toddlers. Milo’s tiresomely transparent self-serving pearl-clutching was past its sell-by date a very long time ago. But he’s got bills to pay…)

In the Dark

This month is LGBT Pride Month and this year I am looking forward to attending my first ever Dublin Pride.

I do occasionally encounter heterosexual people who trot out the tedious ‘when is it Straight Pride?’ in much the same way as much the same people ask when is it ‘International Men’s Day’?

Well, have a look at this picture and read the accompanying story and ask yourself when have you ever been beaten up because of your sexual orientation?

It seems heterosexual privilege comes with blinkers in the same way that male privilege and white privilege do. Anything that threatens this sense of entitlement is to be countered, with violence if necessary. The above example is an extreme manifestation of this. The yobs on that night bus apparently think that lesbians only exist for the amusement of straight men. When the two women refused to…


Does art compute?

A decade ago, a number of physicists and astronomers, an occasional mathematician, and even an interloping engineer or two (shhh…) here at the University of Nottingham started to collaborate with the powerhouse of pop sci (/pop math/pop comp/pop phil…) videography that is Brady Haran. I was among the “early adopters” (after the UoN chemists had kicked everything off with PeriodicVideos) and contributed to the very first Sixty Symbols video, uploaded back in March 2009. This opened with the fresh-faced and ever-engaging Mike Merrifield: Speed of Light.

Since then, I have thoroughly enjoyed working with Brady and colleagues on 60 or so Sixty Symbols videos. (Watching my hairline proceed backwards and upwards at an exponentially increasing rate from video to video has been a somewhat less edifying experience.) More recently, I’ve dipped my toes into Computerphile territory, collaborating with the prolific Sean Riley — whom I first met here, and then subsequently spent a week with in Ethiopia — on a number of videos exploring the links between physics and computing.

It’s this ability to reach out to audiences other than physicists and self-confessed science geeks that keeps me coming back to YouTube, despite its many deficiencies and problems (such as those described here, here, and here. And here, here, and here [1].) Nonetheless, during discussions with my colleagues about the ups and downs of online engagement, I’m always tediously keen to highlight that the medium of YouTube allows us to get beyond preaching to the converted.

Traditional public engagement and outreach events are usually targeted at, and attract, audiences who already have an interest in, or indeed passion for, science (and, more broadly, STEM subjects in general [2].) But with YT,  and despite the best efforts of its hyperactive recommendation algorithms to corral viewers into homogeneous groupings (or direct them towards more and more extreme content), it’s possible to connect with audiences that may well feel that science or math(s) is never going to be for them, i.e. audiences that might never consider attending a traditional science public engagement event. The comment below, kindly left below a Numberphile video that crossed the music-maths divide, is exactly what I’m talking about…


There’s still a strong tendency for a certain type of viewer, however, to want their content neatly subdivided and packaged in boxes labelled “Physics”, “Chemistry”, “Biology”, “Philosophy”, “Computing”, “Arts and Humanities Stuff I’d Rather Avoid” etc… Over the years, there have been comments (at various levels of tetchiness) left under Sixty Symbols, Periodic Videos, Computerphile etc… uploads telling us that the video should be on a different channel or that the content doesn’t fit. I hesitate to use the lazy echo chamber cliché, but the reluctance to countenance concepts that don’t fit with a blinkered view of a subject is not just frustrating, it narrows the possibilities for truly innovative thinking that redefines — or, at best, removes — those interdisciplinary boundaries.

Some physicists have a reputation for being just a little “sniffy” about other fields of study. This was best captured, as is so often the case, by Randall Munroe:

But this is a problem beyond intellectual arrogance; a little learning is a dangerous thing. As neatly lampooned in that xkcd cartoon, it’s not just physicists who fail to appreciate the bigger picture (although there does seem to be a greater propensity for that attitude in my discipline.) A lack of appreciation for the complexity of fields that are not our own can often lead to an entirely unwarranted hubris that, in turn, tends to foster exceptionally simplistic and flawed thinking. And before you know it, you’re claiming that lobsters hold the secret to life, the universe, and everything…

That’s why it’s not just fun to cut across interdisciplinary divides; it’s essential. It broadens our horizons and opens up new ways of thinking. This is particularly the case when it comes to the arts-science divide, which is why I was keen to work with Sean on this very recent Computerphile video:

The video stems from the Creative Reactions collaboration described in a previous post, but extends the physics-art interface discussed there to encompass computing. [Update 08/06/2019 — It’s been fun reading the comments under that video and noting how many back up exactly the points made above about the unwillingness of some to broaden their horizons.] As the title of this post asks, can art compute? Can a painting or a pattern process information? Can artwork solve a computational problem?

Amazingly, yes.

This type of approach to information processing is generally known as unconventional computing, but arguably a better, although contentious, term is lateral computing (echoing lateral thinking.) The aim is not to “beat” traditional silicon-based devices in terms of processing speed, complexity, or density of bits. Instead, we think about computing in a radically different way — as the “output” of physical, chemical, and/or biological processes, rather than as an algorithmic, deterministic, rule-based approach to solving a computational problem. Lateral computing often means extracting the most benefit from analogies rather than algorithms.

Around about the time I started working with Brady on Sixty Symbols, our group was actively collaborating with Natalio Krasnogor and his team — who were then in the School of Computer Science here at Nottingham — on computational methods to classify and characterise scanning probe images. Back then we were using genetic algorithms (see here and here, for example); more recently, deep learning methods have been shown to do a phenomenally good job of interpreting scanning probe images, as discussed in this Computerphile video and this arXiv paper. Nat and I shared an interest, in common with quite a few other physicists and computer scientists out there, in exploring the extent to which self-assembly and self-organisation in nature could be exploited for computing. (Nat moved to Newcastle University not too long afterwards. I miss our long chats over coffee about, for one, just how we might implement Conway’s Game Of Life on a molecule-by-molecule basis…)

It is with considerable guilt and embarrassment that I’ve got to admit that on my shelves I’ve still got one of Nat’s books that he kindly lent to me all of those years ago. (I’m so sorry, Nat. As soon as I finish writing this, I’m going to post the book to you.)

This book, Reaction-Diffusion Computers by Andy Adamatzky, Ben De Lacy Costello, and Tetsuya Asai, is a fascinating and comprehensive discussion of how chemical reactions — in particular, the truly remarkable BZ reaction — can be exploited in computing. I hope that we’ll be able to return to the BZ theme in future Computerphile videos. But it was Chapter 2 of Adamatzky’s book, namely “Geometrical Computation: Voronoi Diagram and Skeleton” — alongside Philip Ball’s timeless classic, The Self-Made Tapestry (which has been essential reading for many researchers in our group over the years, including yours truly) — that directly inspired the Computerphile video embedded above.

The Voronoi diagram (also called the Voronoi tessellation) is a construction in computational geometry that crops up time and again in so very many different disciplines and applications, spanning areas as diverse as astronomy, cancer treatment, urban planning (including deciding the locations of schools, post offices, and hospital services), and, as discussed in that video above, nanoscience.

We’ve calculated Voronoi tessellations extensively over the years to classify the patterns formed by drying droplets of nanoparticle solutions. (My colleagues Ellie Frampton and Alex Saywell have more recently been classifying and quantifying molecular self-assembly using the Voronoi approach.) But Voronoi tessellations are also regularly used by astronomers to characterise the distribution of galaxies on length scales that are roughly 1,000,000,000,000,000,000,000,000,000,000 (i.e. about 10³⁰) times larger than those explored in nanoscience. I love that the same analysis technique is exploited to analyse our universe on such vastly different scales (and I gained a lot from conversations with the astronomer Peter Coles on this topic when he was a colleague here at Nottingham.)

As Cory Simon explains so well in his “Voronoi cookies and the post office problem” post, the Voronoi algorithm is an easy-to-understand method in computational geometry, especially in two dimensions: take a point, join it up to its nearest neighbours, and construct the perpendicular bisectors of those joining lines. The intersections of the bisectors define a Voronoi cell. If the points form an ordered mesh on the plane — as, for example, in the context of the atoms on a crystal plane in solid state physics — then the Voronoi cell is called a Wigner-Seitz unit cell. (As an undergrad, I didn’t realise that the Wigner-Seitz unit cells I studied in my solid state lectures were part of the much broader Voronoi class — another example of limiting thinking due to disciplinary boundaries.)
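The perpendicular-bisector construction is equivalent to a simple nearest-seed rule: a location belongs to the cell of whichever seed point is closest to it. Here's a minimal, deliberately brute-force sketch of that rule in Python (the seed coordinates and grid size are invented purely for illustration):

```python
# Discrete Voronoi tessellation by the nearest-seed rule: every grid
# cell is labelled with the index of the closest seed point.
# Seeds and grid dimensions below are hypothetical example values.

def voronoi_labels(seeds, width, height):
    """Map each (x, y) cell to the index of its nearest seed."""
    labels = {}
    for y in range(height):
        for x in range(width):
            # Squared Euclidean distance suffices for comparisons.
            labels[(x, y)] = min(
                range(len(seeds)),
                key=lambda i: (x - seeds[i][0]) ** 2 + (y - seeds[i][1]) ** 2,
            )
    return labels

seeds = [(2, 2), (7, 3), (4, 8)]
labels = voronoi_labels(seeds, 10, 10)
```

(For real work you'd reach for a computational-geometry library rather than this brute-force loop over every cell and seed, but the naive version makes the definition transparent.)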

For less ordered distributions of points, the tessellation becomes a set of polygons…


We can write an algorithm that computes the Voronoi tessellation for those points, or we can stand back and let nature do the job for us. Here’s a Voronoi tessellation based on the distribution of points above, which has been “computed” by simply letting the physics and chemistry run their course…


That’s an atomic force microscope image of the Voronoi tesselation produced by gold nanoparticles aggregating during the drying of the solvent in which they’re suspended. Holes appear in the solvent-nanoparticle film via any (or all) of a number of mechanisms including random nucleation (a little like how bubbles form in boiling water), phase separation (of the solid nanoparticles from the liquid solvent, loosely speaking), or instabilities due to heat flow in the solvent. Whatever way those holes appear, the nanoparticles much prefer to stay wet and so are carried on the “tide” of the solvent as it dewets from the surface…


(The figure above is taken from a review article written by Andrew Stannard, now at King’s College London. Before his move to London, Andy was a PhD researcher and then research fellow in the Nottingham Nanoscience Group. His PhD thesis focused on the wonderfully rich array of patterns that form as a result of self-assembly in nanostructured and molecular systems. Fittingly, given the scale-independent nature of some of these patterns, Andy’s research career started in astronomy (with the aforementioned Peter Coles.))

As those holes expand, particles aggregate at their edges and ultimately collide, producing a Voronoi tessellation when the solvent has entirely evaporated. What’s particularly neat is that there are many ways for the solvent to dewet, including a fascinating effect called the Bénard-Marangoni instability. The physics underpinning this instability has many parallels with the Rayleigh-Taylor instability that helped produce Lynda Jackson’s wonderful painting.
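The expanding-and-colliding holes can be caricatured in code as a multi-source breadth-first search: a “front” grows outward from each hole one grid step per tick, a cell is claimed by whichever front reaches it first, and the lines where fronts collide approximate the cell boundaries. This is only a loose sketch (it uses the city-block metric of the grid rather than true Euclidean distances, and the seeds and grid size are arbitrary), not a model of the dewetting physics:

```python
from collections import deque

# Cartoon of the dewetting "computation": fronts grow outward from each
# hole (seed) one grid step per tick, and a cell is claimed by whichever
# front arrives first. Collision lines approximate Voronoi edges (in the
# city-block metric of the grid). Seeds below are illustrative only.

def grow_fronts(seeds, width, height):
    """Breadth-first flood from all seeds at once; returns cell -> seed index."""
    owner = {(x, y): i for i, (x, y) in enumerate(seeds)}
    queue = deque((x, y, i) for i, (x, y) in enumerate(seeds))
    while queue:
        x, y, i = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in owner:
                owner[(nx, ny)] = i  # first front to arrive claims the cell
                queue.append((nx, ny, i))
    return owner

owner = grow_fronts([(1, 1), (8, 8)], 10, 10)
```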

But how do we program our physical computer? [3] To input the positions of the points for which we want to compute the tessellation, we need to pattern the substrate so that we can control where (and when) the dewetting process initiates. And, fortunately, with (suitably treated) silicon surfaces, it’s possible to locally oxidise a nanoscale region using an atomic force microscope and draw effectively arbitrary patterns. Matt Blunt, now a lecturer at University College London, got this patterning process down to a very fine art while he was a PhD researcher in the group over a decade ago. The illustration below, taken from Matt’s thesis, explains the patterning process:


Corporate Identity Guidelines™ of course dictate that, when any new lithographic or patterning technique becomes available, the very first pattern drawn is the university logo (as shown on the left below; the linewidth is approximately 100 nm.) The image on the right shows how a 4 micron × 4 micron square of AFM-patterned oxide affects the dewetting of the solvent and dramatically changes the pattern formed by the nanoparticles; for one thing, the characteristic length scale of the pattern on the square is much greater than that in the surrounding region. By patterning the surface in a slightly more precise manner we could, in principle, choose the sites where the solvent dewets and exploit that dewetting to calculate the Voronoi tessellation for effectively an arbitrary set of points in a 2D plane.


There’s a very important class of unconventional computing known as wetware. (Indeed, a massively parallel wetware system is running inside your head as you read these words.) The lateral computing strategy outlined above might perhaps be best described as dewetware.

I very much hope that Sean and I can explore other forms of lateral/unconventional computing in future Computerphile videos. There are a number of influential physicists who have suggested that the fundamental quantity in the universe is not matter, nor energy — it’s information. Patterns, be they compressed and encrypted binary representations of scientific data or striking and affecting pieces of art, embed information on a wide variety of different levels.

And if there’s one thing that connects artists and scientists, it’s our love of patterns…

[1] And that’s just for starters. YouTube has been dragged, kicking and screaming every inch of the way, into a belated and grudging acceptance that it’s been hosting and fostering some truly odious and vile ‘content’.

[2] On a tangential point, it frustrates me immensely that public engagement is now no longer enough by itself. When it comes to securing funding for engaging with the public (who fund our research), we’re increasingly made to feel that it’s more important to collect and analyse questionnaire responses than to actually connect with the audience in the first place.

[3] I’ll come clean — the nanoparticle Voronoi tessellation “calculation” shown above is just a tad artificial in that the points were selected “after the event”. The tessellation wasn’t directed/programmed in this case; the holes that opened up in the solvent-nanoparticle film due to dewetting weren’t pre-selected. However, the concept remains valid — the dewetting centres can in principle be “dialled in” by patterning the surface.

Paul Darrow (1941 – 2019)


Between the tender ages of ten and thirteen (1978 – 1981) my universe revolved around Blake’s 7, a dark, dystopian, and desperately underfunded weekly series about a bunch of anti-heroes battling the evils of the totalitarian Terran Federation. Created by Terry Nation, whose fertile imagination also conjured up Doctor Who’s arch-nemeses, the Daleks, Blake’s 7 ran for four seasons, each of thirteen episodes. Wobbly sets, often clumsy dialogue, props that sometimes looked like they’d been knocked up out of a washing-up liquid bottle and some sticky-backed plastic on last week’s Blue Peter episode — none of that mattered. I adored B7’s unsettling plots — Episode 1, which involved the dissident/terrorist Blake being framed for child molestation, was hardly the least challenging viewing for a ten year old — and its Orwellian story arc.

The late seventies were, however, far from a dystopia for a young science fiction fan growing up in rural Ireland (Annyalla, Co. Monaghan to be a little more precise). Prog 1 of 2000 AD, which I devoured on a weekly basis, had been published in 1977; Star Wars was released in Ireland in March ’78; Fit the First of The Hitchhiker’s Guide To The Galaxy was broadcast in the same month; the wonderfully bonkers, quintessentially British, and absolutely thrilling Sapphire and Steel [1] would make its debut in 1979. But all of this (yes, even Hitchhiker’s) paled into insignificance against Blake’s 7 for ten-year-old me.

I was especially fortunate to live in Ireland because it meant that I had a double fix of the 7 each week. The national Irish broadcaster RTE (Raidió Teilifís Éireann) also transmitted Blake’s 7—on a Sunday (if memory serves), whereas the BBC episode was on a Monday or a Tuesday—so I could watch it twice a week.

I’m recounting all of this because Paul Darrow, who played Kerr Avon in Blake’s 7, sadly passed away yesterday at the age of 78. Avon’s acerbic wit and brutal honesty made him my favourite character, by a country mile, of the series. I will never forget that giddy excitement as I counted down the hours until the next episode of Blake’s 7 as a kid, keen to watch Blake and Avon trade barbs and insults as they took on the might of the Federation (in a disused quarry somewhere off the M4). Science fiction played a huge role in fostering my interest in science as a kid. Thank you, Mr. Darrow, for the inspiration. (I never did figure out how the teleporter bracelet worked. But I’ll keep trying…)

[1] “All irregularities will be handled by the forces controlling each dimension. Transuranic heavy elements may not be used where there is life. Medium atomic weights are available: Gold, Lead, Copper, Jet, Diamond, Radium, Sapphire, Silver and Steel. Sapphire and Steel have been assigned.”