Language trouble in brain science and psychology

It’s time for another guest post. This time I’m delighted to introduce Elric Elias, PhD candidate at the University of Denver, cognitive neuroscientist, and a fellow metal fan. His dissertation work pits two fundamental computational methods, each leveraged by the visual system of the human animal brain, against each other in a duel to the death. In so doing, he hopes to learn about how the visual system turns a potentially infinite amount of incoming visual information into useful, adaptive output. He has been published in multiple scientific journals, and has shared his work at several academic conferences. He lives with his best friend Ladybird (also an animal—dog) in Denver, Colorado. You can contact him at his first name dot last name; gmail.


If you haven’t watched the Sixty Symbols video in which Dr. Moriarty gets testy about the idea that physical objects never really “make contact”, quit jackassing around and do it. It’s great for a list of reasons. Near the top of that list is how forcefully and clearly it demonstrates language’s incredibly important role in science. Language matters, as they say. Now that’s not a new idea, but I may be approaching it from an angle you’re not used to. I’ll be talking about two instances in which, I think, sloppy language leads to problems in brain science and psychology. Read on.

Culture, technology, song lyrics that don’t really make sense but evoke an emotional reaction nonetheless: language is pretty useful and potent stuff. On the other hand, though, it’s a really imprecise medium. I mean, physicists tell me that the standard model can be captured on the back of an envelope (right, Dr. Moriarty?). So much explanatory power, so much precision… all on a scrap of paper. Of course, that’s only true if it’s written in mathematical notation, not English. The book that linguistically describes that notation to a layperson is considerably less efficient and precise (no offense to the author, thank the imprecision of language!).

This is why, when scientists speak, they choose their words very carefully. They even (hopefully) take the time to meticulously define their terms before they use them. To the non-scientist, this sometimes looks like nerdy pedantry. And proudly, in part it is. But nerdy pedantry minimizes the ambiguity of language. When Dr. Moriarty says the word “contact” to another physicist, each understands that word to mean precisely the same thing. Not only is that useful, it’s necessary. In the absence of nerdy pedantry, you end up with one group of people hollering “objects never make contact” and another group saying “but they do, though”. Round and round, for EVER. Turns out, the two groups had different notions of what “contact” meant all along (p.s., trust the physicist’s definition).

I’m a scientist in the sprawling field called psychology. More informatively, I’m a cognitive neuroscientist, or a vision scientist. I try to figure out how you see. Not with your eyes, but with your brain. So, the science I’m most familiar with deals with the brain, animal behavior (in my case, human behavior), and in general, the… ahem… “mind”. Ah. A familiar bit of language. One that you normally encounter and pass right over. The mind. Yeah, the mind. You know what that is, right? So—what is it?

The “mind and brain”

Well, “mind” is a term that you can find in plenty of popular representations of psychology and brain science. Sometimes the idea is that the “mind” is different from the brain (yes, that’s the stubborn specter of dualism). Sometimes people take a more agnostic stance, and wonder about the relationship between the mind and brain. Non-specialists might casually mention “the brain and the mind”, or even more teeth-grindingly, “my brain” (what, exactly, is the “owner” of the brain supposed to be… other than the brain itself?). Regardless, this much is almost always true: people treat the existence of “minds” as a given. It’s self-evident. We might wonder about the mind’s relationship to the brain, or assert that the two are different, but the existence of minds is just… too obvious to even bother questioning.

Brain scientists and psychologists are a bit different. Much of the time, they are careful about how they use brain/mind language. As an excellent example, see this very readable and reasonable take on conscious awareness—conscious “minds”. But occasionally, language that seems to assume the existence of minds, distinct from brains, seeps into more formal scientific settings (if you don’t want to scour that last article for “mind and brain” language, just take a look at the name of the journal it was published in). Sometimes, “mind and brain” language seeps into conversation among fellow scientists, or between scientists and students (you can take my word for that last assertion, or decline). Ok, fine. Maybe non-specialists sometimes assume the existence of minds, apart from brains, and sometimes even specialists do too. So what?

Well, since I’ve just spent the better part of four paragraphs talking about how carefully scientists use language, if a scientist distinguishes the mind from the brain, you might suppose that there’s a very good reason for doing so. What is that reason?

Beats the crap out of me. All the evidence brain science has gathered—and we have certainly gathered some—points towards this: every passing thought, every lingering emotion, every sensation, every dream, every decision, every moral indignation, every space-out, EVERYTHING you have ever experienced has been your brain doing its thing. Hell, “you” are a brain doing its thing. Neurons firing, populations of cells reacting to input from the external world or from other cells. Although the details are insanely complicated and no one claims to understand it all, every time brain scientists “look under the hood” and try to catch a glimpse of a thought, or a feeling, or an idea, or a plan, or an identity, what we observe is a brain making computations. Every. Single. Time. Nothing more, and nothing less. We have never, ever, observed or measured a “mind” in the absence of a brain. The evidence isn’t just correlational, though. Damage area X in the brain and observe a reliable change in the “mind”. Stimulate area Y in the brain and observe the conscious experience of mental state Z. Sure, it feels like my thoughts and feelings are somehow different from my brain, which is, after all, a three-pound lump of jellyish meat (decidedly not qualitatively similar to a thought or emotion). But even a mediocre psychologist will tell you that introspection is an insanely unreliable way to get at what’s true. So what’s my point?

Well, language matters, remember? If a scientific field goes around using a term (the “mind”, say), the onus is on them to provide evidence for the existence of that construct. I see plenty of evidence for the existence of brains. I am aware of zero evidence that suggests that minds are something above and beyond a brain over time. Instead, brains that are active over time are the conscious experiences we colloquially refer to as “the mind”. That’s what a “mind” is. Brain activity over time. Nothing more, nothing less. If you do not agree, I am open to evidence that the mind and brain are dissociable! Good luck.

My own field needs to be clear and consistent about what constitutes the “mind”: brain activity over time. Else don’t use the damned word. Language is imprecise enough. Good scientists should do their best to minimize that imprecision, not to keep it moist and let it fester. No more implying that the “mental” is separate from the physical. The “mental” is physical. Like it or not, all the evidence points in that direction. Sloppy language that implies otherwise just keeps the stubborn specter of dualism well-fed.

Humans and animals

Let’s turn away from brains and minds and instead think about humans and animals. Psychology sometimes uses “animal models” to help us understand how brains work; often the goal is to infer how human brains work by studying how animal brains work. Psychology departments sometimes offer “animal cognition” courses, in which you can learn about the brains of birds or monkeys or rats or other amazing critters. Sometimes animal cognition or behavior is a program unto itself. Popular representations of science certainly use this kind of language (check out this double-whammy). There are humans, and then there are animals. Nothing contentious so far. Nothing worth getting your blood pressure up for, right?

Damnit, humans are animals! Every model in psychology is an animal model, including the ones that describe humans! What else would they be, mineral models? Gas models? The entire field of psychology is about animal cognition and animal behavior! Imagine if I told you this: “humans and women are capable of producing pretty good death metal music”. You would rightly punch me in the mouth. The distinction between “humans” and “women” is grossly incorrect at best, incorrect and value-laden at worst. Likewise, the phrase “humans and animals” is sloppy nonsense, imprecise and potentially laden with value judgment.

Now, there’s no doubt that the human animal brain has some pretty unique capacities. But that’s true of all species. By creating linguistic divides that do not reflect the way nature really seems to be (e.g., human/animal, mind/brain), we map our own biases and value-judgments onto our understanding of the world. That is true whether a scientist is using the imprecise language or whether a non-expert is.

No more sloppiness!

There’s no excuse for such sloppy language in science. Sloppy imprecision is everything we’re not. At least, I really hope we’re not. As an interesting side-note, I don’t think it’s sufficient for a scientific field to be internally consistent. For example, let’s say that some middling evidence could be interpreted such that minds might exist absent brains (it doesn’t, that I know of, but pretend). And further imagine that psychologists came up with some theory that accounted for this interpretation. Their theory hung together with other theories in psychology; the field was consistent, no obvious contradictions. Well, that wouldn’t be good enough. Ultimately, theories have to be consistent across scientific disciplines. Chemistry is consistent with particle physics. Biology with chemistry. Psychology with biology. And to close the loop, psychology must ultimately be compatible with physics. If “minds”, above and beyond brains, are posited to exist, their existence would have to be consistent with physics, not just with other theories in psychology. I’m not sure that can be done, though I’m confident that hasn’t ever been done. Perhaps more on that in the future.

In sum:

Dear Psychology,

                No more sloppy language. All available evidence suggests that “minds” are brains doing their thing over time. Nothing more, nothing less. And, damnit, humans are animals. Nothing more, nothing less. Avoid the word “mind” unless you’re clear about what you mean. Say “human animals” or “non-human animals”, because that language is more precise and correct. Precision is worth the extra keystrokes. Let’s be the good examples; maybe it will spread.

Sincerely,

Elric Elias

Why we need Pride

I’m reblogging Peter Coles’ post on just why the idea of “Straight Pride” is such a pathetic notion. Despite all their interminable whining about snowflakes, there is nothing quite as fragile, delicate, and insecure as those who rail against diversity at any available opportunity. (And, of course, the legend in his own lunchtime that is Milo Yiannopoulos was first in the queue to support the “Straight Pride” toddlers. Milo’s tiresomely transparent self-serving pearl-clutching was past its sell-by date a very long time ago. But he’s got bills to pay…)

In the Dark

This month is LGBT Pride Month and this year I am looking forward to attending my first ever Dublin Pride.

I do occasionally encounter heterosexual people who trot out the tedious ‘when is it Straight Pride?’ in much the same way as much the same people ask ‘when is it International Men’s Day?’

Well, have a look at this picture and read the accompanying story and ask yourself when have you ever been beaten up because of your sexual orientation?

It seems heterosexual privilege comes with blinkers in the same way that male privilege and white privilege do. Anything that threatens this sense of entitlement is to be countered, with violence if necessary. The above example is an extreme manifestation of this. The yobs on that night bus apparently think that lesbians only exist for the amusement of straight men. When the two women refused to…

View original post 34 more words

Does art compute?

A decade ago, a number of physicists and astronomers, an occasional mathematician, and even an interloping engineer or two (shhh…) here at the University of Nottingham started to collaborate with the powerhouse of pop sci (/pop math/pop comp/pop phil…) videography that is Brady Haran. I was among the “early adopters” (after the UoN chemists had kicked everything off with PeriodicVideos) and contributed to the very first Sixty Symbols video, uploaded back in March 2009. This opened with the fresh-faced and ever-engaging Mike Merrifield: Speed of Light.

Since then, I have thoroughly enjoyed working with Brady and colleagues on 60 or so Sixty Symbols videos. (Watching my hairline proceed backwards and upwards at an exponentially increasing rate from video to video has been a somewhat less edifying experience.) More recently, I’ve dipped my toes into Computerphile territory, collaborating with the prolific Sean Riley — whom I first met here, and then subsequently spent a week with in Ethiopia — on a number of videos exploring the links between physics and computing.

It’s this ability to reach out to audiences other than physicists and self-confessed science geeks that keeps me coming back to YouTube, despite its many deficiencies and problems (such as those described here, here, and here. And here, here, and here [1].) Nonetheless, during discussions with my colleagues about the ups and downs of online engagement, I’m always tediously keen to highlight that the medium of YouTube allows us to get beyond preaching to the converted.

Traditional public engagement and outreach events are usually targeted at, and attract, audiences who already have an interest in, or indeed passion for, science (and, more broadly, STEM subjects in general [2].) But with YT, despite the best efforts of its hyperactive recommendation algorithms to corral viewers into homogeneous groupings (or direct them towards more and more extreme content), it’s possible to connect with audiences that may well feel that science or math(s) is never going to be for them, i.e. audiences that might never consider attending a traditional science public engagement event. The comment below, kindly left below a Numberphile video that crossed the music-maths divide, is exactly what I’m talking about…

numberphile.png

There’s still a strong tendency for a certain type of viewer, however, to want their content neatly subdivided and packaged in boxes labelled “Physics”, “Chemistry”, “Biology”, “Philosophy”, “Computing”, “Arts and Humanities Stuff I’d Rather Avoid” etc… Over the years, there have been comments (at various levels of tetchiness) left under Sixty Symbols, Periodic Videos, Computerphile etc… uploads telling us that the video should be on a different channel or that the content doesn’t fit. I hesitate to use the lazy echo chamber cliché, but the reluctance to countenance concepts that don’t fit with a blinkered view of a subject is not just frustrating, it narrows the possibilities for truly innovative thinking that redefines — or, at best, removes — those interdisciplinary boundaries.

Some physicists have a reputation for being just a little “sniffy” about other fields of study. This was best captured, as is so often the case, by Randall Munroe:

But this is a problem beyond intellectual arrogance; a little learning is a dangerous thing. As neatly lampooned in that xkcd cartoon, it’s not just physicists who fail to appreciate the bigger picture (although there does seem to be a greater propensity for that attitude in my discipline.) A lack of appreciation for the complexity of fields that are not our own can often lead to an entirely unwarranted hubris that, in turn, tends to foster exceptionally simplistic and flawed thinking. And before you know it, you’re claiming that lobsters hold the secret to life, the universe, and everything…

That’s why it’s not just fun to cut across interdisciplinary divides; it’s essential. It broadens our horizons and opens up new ways of thinking. This is particularly the case when it comes to the arts-science divide, which is why I was keen to work with Sean on this very recent Computerphile video:

The video stems from the Creative Reactions collaboration described in a previous post, but extends the physics-art interface discussed there to encompass computing. [Update 08/06/2019 — It’s been fun reading the comments under that video and noting how many back up exactly the points made above about the unwillingness of some to broaden their horizons.] As the title of this post asks, can art compute? Can a painting or a pattern process information? Can artwork solve a computational problem?

Amazingly, yes.

This type of approach to information processing is generally known as unconventional computing, but arguably a better, although contentious, term is lateral computing (echoing lateral thinking.) The aim is not to “beat” traditional silicon-based devices in terms of processing speed, complexity, or density of bits. Instead, we think about computing in a radically different way — as the “output” of physical, chemical, and/or biological processes, rather than as an algorithmic, deterministic, rule-based approach to solving a computational problem. Lateral computing often means extracting the most benefit from analogies rather than algorithms.

Around about the time I started working with Brady on Sixty Symbols, our group was actively collaborating with Natalio Krasnogor and his team — who were then in the School of Computer Science here at Nottingham — on computational methods to classify and characterise scanning probe images. Back then we were using genetic algorithms (see here and here, for example); more recently, deep learning methods have been shown to do a phenomenally good job of interpreting scanning probe images, as discussed in this Computerphile video and this arXiv paper. Nat and I shared an interest, in common with quite a few other physicists and computer scientists out there, in exploring the extent to which self-assembly and self-organisation in nature could be exploited for computing. (Nat moved to Newcastle University not too long afterwards. I miss our long chats over coffee about, for one, just how we might implement Conway’s Game Of Life on a molecule-by-molecule basis…)

It is with considerable guilt and embarrassment that I’ve got to admit that on my shelves I’ve still got one of Nat’s books that he kindly lent to me all of those years ago. (I’m so sorry, Nat. As soon as I finish writing this, I’m going to post the book to you.)

This book, Reaction-Diffusion Computers by Andy Adamatzky, Ben De Lacy Costello, and Tetsuya Asai, is a fascinating and comprehensive discussion of how chemical reactions — in particular, the truly remarkable BZ reaction — can be exploited in computing. I hope that we’ll be able to return to the BZ theme in future Computerphile videos. But it was Chapter 2 of Adamatzky’s book, namely “Geometrical Computation: Voronoi Diagram and Skeleton” — alongside Philip Ball’s timeless classic, The Self-Made Tapestry (which has been essential reading for many researchers in our group over the years, including yours truly) — that directly inspired the Computerphile video embedded above.

The Voronoi diagram (also called the Voronoi tessellation) is a problem in computational geometry that crops up time and again in so very many different disciplines and applications, spanning areas as diverse as astronomy, cancer treatment, urban planning (including deciding the locations of schools, post offices, and hospital services), and, as discussed in that video above, nanoscience.

We’ve calculated Voronoi tessellations extensively over the years to classify the patterns formed by drying droplets of nanoparticle solutions. (My colleagues Ellie Frampton and Alex Saywell have more recently been classifying and quantifying molecular self-assembly using the Voronoi approach.) But Voronoi tessellations are also regularly used by astronomers to characterise the distribution of galaxies on length scales that are roughly 1,000,000,000,000,000,000,000,000,000,000 (i.e. about 10³⁰) times larger than those explored in nanoscience. I love that the same analysis technique is exploited to analyse our universe on such vastly different scales (and gained a lot from conversations with the astronomer Peter Coles on this topic when he was a colleague here at Nottingham.)

As Cory Simon explains so well in his “Voronoi cookies and the post office problem” post, the Voronoi algorithm is an easy-to-understand method in computational geometry, especially in two dimensions: take a point, join it up to its neighbouring points, and construct the perpendicular bisectors of those lines. The innermost region enclosed by those bisectors is that point’s Voronoi cell — the set of locations closer to it than to any other point. If the points form an ordered mesh on the plane — as, for example, in the context of the atoms on a crystal plane in solid state physics — then the Voronoi cell is called a Wigner-Seitz unit cell. (As an undergrad, I didn’t realise that the Wigner-Seitz unit cells I studied in my solid state lectures were part of the much broader Voronoi class — another example of limiting thinking due to disciplinary boundaries.)
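To make that “closer than to any other point” definition concrete, here’s a minimal brute-force sketch (pure Python, with a hypothetical set of three seed points): label every cell of a grid with the index of its nearest seed. The borders between differently labelled regions lie exactly along the perpendicular bisectors described above, since a bisector is the locus of locations equidistant from two seeds.

```python
import math

def voronoi_labels(seeds, width, height):
    """Label each grid cell with the index of its nearest seed point.

    The regions of constant label are the Voronoi cells; their shared
    edges lie along the perpendicular bisectors between seed pairs.
    """
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            # Euclidean distance from this cell to every seed
            dists = [math.hypot(x - sx, y - sy) for sx, sy in seeds]
            # claim the cell for the nearest seed
            row.append(dists.index(min(dists)))
        grid.append(row)
    return grid

# Three hypothetical "post offices" on a 30 x 30 grid
seeds = [(5, 5), (25, 5), (15, 25)]
labels = voronoi_labels(seeds, 30, 30)
```

(For real point sets you’d reach for something like `scipy.spatial.Voronoi`, which constructs the polygons directly rather than rasterising them — but the brute-force version makes the definition transparent.)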

For less ordered distributions of points, the tessellation becomes a set of polygons…

Tesselation

We can write an algorithm that computes the Voronoi tessellation for those points, or we can stand back and let nature do the job for us. Here’s a Voronoi tessellation based on the distribution of points above, which has been “computed” by simply letting the physics and chemistry run their course…

tesselation-2.png

That’s an atomic force microscope image of the Voronoi tessellation produced by gold nanoparticles aggregating during the drying of the solvent in which they’re suspended. Holes appear in the solvent-nanoparticle film via any (or all) of a number of mechanisms including random nucleation (a little like how bubbles form in boiling water), phase separation (of the solid nanoparticles from the liquid solvent, loosely speaking), or instabilities due to heat flow in the solvent. Whatever way those holes appear, the nanoparticles much prefer to stay wet and so are carried on the “tide” of the solvent as it dewets from the surface…

Dewetting-1

(The figure above is taken from a review article written by Andrew Stannard, now at King’s College London. Before his move to London, Andy was a PhD researcher and then research fellow in the Nottingham Nanoscience Group. His PhD thesis focused on the wonderfully rich array of patterns that form as a result of self-assembly in nanostructured and molecular systems. Fittingly, given the scale-independent nature of some of these patterns, Andy’s research career started in astronomy (with the aforementioned Peter Coles.))

As those holes expand, particles aggregate at their edges and ultimately collide, producing a Voronoi tessellation when the solvent has entirely evaporated. What’s particularly neat is that there are many ways for the solvent to dewet, including a fascinating effect called the Bénard-Marangoni instability. The physics underpinning this instability has many parallels with the Rayleigh-Taylor instability that helped produce Lynda Jackson’s wonderful painting.
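That front-collision picture lends itself to a toy simulation (my own illustrative sketch, not the group’s actual model): open a “hole” at each dewetting centre, let every front advance one grid step per tick, and record the cells where two different fronts meet. Those collision cells trace out the Voronoi boundaries.

```python
from collections import deque

def dewet(seeds, width, height):
    """Expand a 'hole' from each seed simultaneously (multi-source BFS).

    owner[y][x] records which front claimed each cell; cells where two
    different fronts collide are collected in `boundary`.
    """
    owner = [[None] * width for _ in range(height)]
    boundary = set()
    queue = deque()
    for i, (sx, sy) in enumerate(seeds):
        owner[sy][sx] = i          # a hole nucleates at each seed
        queue.append((sx, sy))
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height:
                if owner[ny][nx] is None:
                    owner[ny][nx] = owner[y][x]   # front claims dry ground
                    queue.append((nx, ny))
                elif owner[ny][nx] != owner[y][x]:
                    boundary.add((nx, ny))        # two fronts collide here
    return owner, boundary

# Two hypothetical dewetting centres; the pile-up line falls midway between them
owner, boundary = dewet([(5, 15), (25, 15)], 30, 30)
```

Real dewetting fronts are circular rather than the diamond-shaped fronts this lockstep grid expansion produces, so the sketch only approximates the Euclidean tessellation, but the principle (simultaneous growth plus collision) is the same.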

But how do we program our physical computer? [3] To input the positions of the points for which we want to compute the tessellation, we need to pattern the substrate so that we can control where (and when) the dewetting process initiates. And, fortunately, with (suitably treated) silicon surfaces, it’s possible to locally oxidise a nanoscale region using an atomic force microscope and draw effectively arbitrary patterns. Matt Blunt, now a lecturer at University College London, got this patterning process down to a very fine art while he was a PhD researcher in the group over a decade ago. The illustration below, taken from Matt’s thesis, explains the patterning process:

afm-patterning.png

Corporate Identity Guidelines™ of course dictate that, when any new lithographic or patterning technique becomes available, the very first pattern drawn is the university logo (as shown on the left below; the linewidth is approximately 100 nm.) The image on the right shows how a 4 micron × 4 micron square of AFM-patterned oxide affects the dewetting of the solvent and dramatically changes the pattern formed by the nanoparticles; for one thing, the characteristic length scale of the pattern on the square is much greater than that in the surrounding region. By patterning the surface in a slightly more precise manner we could, in principle, choose the sites where the solvent dewets and exploit that dewetting to calculate the Voronoi tessellation for effectively an arbitrary set of points in a 2D plane.

tesselation-3.png

There’s a very important class of unconventional computing known as wetware. (Indeed, a massively parallel wetware system is running inside your head as you read these words.) The lateral computing strategy outlined above might perhaps be best described as dewetware.

I very much hope that Sean and I can explore other forms of lateral/unconventional computing in future Computerphile videos. There are a number of influential physicists who have suggested that the fundamental quantity in the universe is not matter, nor energy — it’s information. Patterns, be they compressed and encrypted binary representations of scientific data or striking and affecting pieces of art, embed information on a wide variety of different levels.

And if there’s one thing that connects artists and scientists, it’s our love of patterns…


[1] And that’s just for starters. YouTube has been dragged, kicking and screaming every inch of the way, into a belated and grudging acceptance that it’s been hosting and fostering some truly odious and vile ‘content’.

[2] On a tangential point, it frustrates me immensely that public engagement is now no longer enough by itself. When it comes to securing funding for engaging with the public (who fund our research), we’re increasingly made to feel that it’s more important to collect and analyse questionnaire responses than to actually connect with the audience in the first place.

[3] I’ll come clean — the nanoparticle Voronoi tessellation “calculation” shown above is just a tad artificial in that the points were selected “after the event”. The tessellation wasn’t directed/programmed in this case; the holes that opened up in the solvent-nanoparticle film due to dewetting weren’t pre-selected. However, the concept remains valid — the dewetting centres can in principle be “dialled in” by patterning the surface.

Paul Darrow (1941 – 2019)

474px-Paul_Darrow.jpg

Between the tender ages of ten and thirteen (1978 – 1981) my universe revolved around Blake’s 7, a dark, dystopian, and desperately underfunded weekly series about a bunch of anti-heroes battling the evils of the totalitarian Terran Federation. Created by Terry Nation, whose fertile imagination also conjured up Doctor Who’s arch-nemeses, the Daleks, Blake’s 7 ran for four seasons, each of thirteen episodes. Wobbly sets, often clumsy dialogue, props that sometimes looked like they’d been knocked up out of a washing-up liquid bottle and some sticky-backed plastic on last week’s Blue Peter episode — none of that mattered. I adored B7’s unsettling plots — Episode 1, which involved the dissident/terrorist Blake being framed for child molestation, was hardly the least challenging viewing for a ten-year-old — and its Orwellian story arc.

The late seventies were, however, far from a dystopia for a young science fiction fan growing up in rural Ireland (Annyalla, Co. Monaghan to be a little more precise). Prog 1 of 2000 AD, which I devoured on a weekly basis, had been published in 1977; Star Wars was released in Ireland in March ’78; Fit the First of The Hitchhiker’s Guide To The Galaxy was broadcast in the same month; the wonderfully bonkers, quintessentially British, and  absolutely thrilling Sapphire and Steel [1] would make its debut in 1979. But all of this (yes, even Hitchhiker’s) paled into insignificance against Blake’s 7 for ten-year-old me.

I was especially fortunate to live in Ireland because it meant that I had a double fix of the 7 each week. The national Irish broadcaster RTE (Raidió Teilifís Éireann) also transmitted Blake’s 7 on a Sunday (if memory serves), whereas the BBC episode aired on a Monday or a Tuesday, so I could watch the series twice a week.

I’m recounting all of this because Paul Darrow, who played Kerr Avon in Blake’s 7, sadly passed away yesterday at the age of 78. Avon’s acerbic wit and brutal honesty made him my favourite character, by a country mile, of the series. I will never forget that giddy excitement as I counted down the hours until the next episode of Blake’s 7 as a kid, keen to watch Blake and Avon trade barbs and insults as they took on the might of the Federation (in a disused quarry somewhere off the M4). Science fiction played a huge role in fostering my interest in science as a kid. Thank you, Mr. Darrow, for the inspiration. (I never did figure out how the teleporter bracelet worked. But I’ll keep trying…)

[1] “All irregularities will be handled by the forces controlling each dimension. Transuranic heavy elements may not be used where there is life. Medium atomic weights are available: Gold, Lead, Copper, Jet, Diamond, Radium, Sapphire, Silver and Steel. Sapphire and Steel have been assigned.”

 

So long, and thanks for all the fish

Stewart Lee was on fine form in yesterday’s Observer on a burning, but delicious, political issue of our day: are milkshakes the new politics of resistance?

“During his appearances on the campaign trail, Ukip’s star candidate, the internet’s Carl Benjamin, has been assailed with a total of four milkshakes and a single fish. This is a paltry selection of foods on paper, but one which Our Lord Jesus could have used to feed 5,000 people. Or pelt roughly 3,570 Brexiteers.”

Mr. Benjamin‘s milkshake misadventures also featured on Friday’s Have I Got News For You…

As Jess Phillips, MP for Birmingham Yardley, puts it in that clip…

“No, I don’t think you should throw things at politicians, I don’t think you should attack them. I think you should win by being better than them, which is what I am currently doing to Carl Benjamin.”

Jess, current majority of 37.2%, is very definitely winning. The extent of Carl’s political humiliation — which he, of course, will now attempt to pathetically and transparently laugh off as “trolling the establishment” (or some such similar nonsense) [1] — became clear late last night:

UKIP polled just 3.2 per cent of ballots cast in Benjamin’s constituency — a drop of 29 percentage points from their previous election. Even better, the combined toxicity of Benjamin and Tommy Robinson (Stephen Christopher Yaxley-Lennon), and, of course, the wholly predictable and dispiriting success of Farage’s Brexit party, meant that UKIP lost every single seat. (Yaxley-Lennon had to sneak out of the election count early, he was so embarrassed.)

Let’s just hope that last night’s very poor Labour performance will finally encourage Jeremy Corbyn to bow to pressure to support a second referendum. I’m not holding my breath, however. (I joined the Labour Party because of Jeremy Corbyn. And I left the Labour Party because of Jeremy Corbyn.)

If you, in turn, were waiting with bated breath for me to close this post with a good fish pun, I’m afraid that, just like Carl’s political career, I floundered…

[1] Carl Benjamin is 39 years old.

The Silent Poetry of Paint Drying

The painting has a life of its own. I just let it come through.

Jackson Pollock (1912 – 1956)

Over the last six weeks or so, I’ve had the immense pleasure of collaborating with local artist Lynda Jackson on a project for Creative Reactions (the arts-science offshoot of Pint of Science). I don’t quite know why I didn’t sign up for Creative Reactions long before now, but after reading Mark Fromhold’s wonderful blog post about last year’s event, I jumped at the chance to get involved with CR2019. The collaboration with Lynda culminated in us being interviewed together for yesterday’s Creative Reactions closing night, which was a heck of a lot of fun. The event, compered by PhD student researcher Paul Brett (Microbiology, University of Nottingham), was expertly live-tweeted by another UoN researcher (this time from the School of Chemistry), Lizzie Killalea.

I’ve been fascinated by the physics (and metaphysics) of foam for a very long time, and was delighted that the collaboration with Lynda serendipitously ended up being focused on foam-like painting and patterns. When we met for the first time, Lynda told me that she had a burgeoning interest in what’s known as acrylic pouring, as described in this video…

…and here’s a great example of one of Lynda’s paintings, produced using a somewhat similar technique to that described in the video:

[Image: one of Lynda Jackson’s acrylic pour paintings (LyndaJackson_2.png)]

I love that painting, not only for its aesthetic value, but for its direct, and scientifically beautiful, connection to the foam patterns — or, to give them their slightly more technical name, cellular networks — that are prevalent right across nature, from the sub-microscopic to the (quite literally) astronomically large (via, as I discuss in the Sixty Symbols video below, the Giant’s Causeway and some stonkingly stoned spiders)…

Our research group spent a great deal of time (nearly a decade — see this paper for a review of some of that work) analysing the cellular networks that form when a droplet of a suspension of nanoparticles in a solvent is placed on a surface and subsequently left to its own devices (or alternatively spin-dried). Here’s a particularly striking example of the foams-within-foams-within-foams motif that is formed via the drying of a nanoparticle-laden droplet of toluene on silicon…

[Image: atomic force microscope image of the dried nanoparticle-laden droplet (Nanoparticles-2.png)]

What you see in that atomic force microscope image above — which is approximately 0.02 of a millimetre, i.e. 20 micrometres, across — are not the individual 2 nanometre nanoparticles themselves, but the much larger (micron-scale) pattern that is formed during the drying of the droplet; the evaporation and dewetting of the solvent corrals the particles together into the patterns you see. It’s somewhat like what happens in the formation of a coffee stain: the particles are carried on the tide of the solvent (water for the coffee example; toluene in the case of the nanoparticles).

Lynda’s painting above is about 50 cm wide. That means that the scale of the foam created by acrylic pouring is ~ 25,000 times bigger than that of the nanoparticle pattern. Physicists get very excited when they see the same class of pattern cropping up in very different systems and/or on very different length scales — it often means that there’s an overarching mathematical framework; a very similar form of differential equation, for example, may well be underpinning the observations. And, indeed, there are similar physical processes at play in both the acrylic pouring and the nanoparticle systems: mixed phases separate under the influence of solvent flow. Here’s another striking example from Lynda’s work:
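For the numerically inclined, that scale comparison is a one-line back-of-the-envelope check (both widths are, of course, approximate, as quoted above):

```python
# Rough scale comparison: Lynda's painting vs. the AFM image above.
# Both widths are approximate, as quoted in the post.
painting_width_m = 0.5        # ~50 cm
afm_image_width_m = 20e-6     # ~20 micrometres (0.02 mm)

ratio = painting_width_m / afm_image_width_m
print(f"The painting is ~{ratio:,.0f} times wider than the AFM image")
# prints: The painting is ~25,000 times wider than the AFM image
```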

[Image: another of Lynda Jackson’s acrylic pour paintings (LyndaJackson_1.png)]

Phase separation and phase transitions are not only an exceptionally rich source of fascinating physics (and, indeed, chemistry and biology) but they almost invariably give rise to sets of intriguing and intricate patterns that have captivated both scientists and artists for centuries. In the not-too-distant future I’ll blog about Alan Turing’s remarkable insights into the pattern-forming processes that produce the spots, spirals, and stripes of animal hides (like those shown in the tweet below); his reaction-diffusion model is an exceptionally elegant example of truly original scientific thinking. I always hesitate to use the word “genius” — because science is so very much more complicated and collaborative than the tired cliche of the lone scientist “kicking against the odds” — but in Turing’s case the accolade is more than well-deserved.
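Reaction-diffusion is simple enough to sketch in a few lines of code. The toy below is a Gray-Scott-style two-species model rather than Turing’s original formulation, and the grid size, step count, and parameter values are illustrative choices of mine (not anything from Turing’s paper); run it for a few thousand more steps and spot-like patterns start to emerge from the uniform state:

```python
import numpy as np

def laplacian(Z):
    # 5-point stencil with periodic boundary conditions
    return (np.roll(Z, 1, axis=0) + np.roll(Z, -1, axis=0) +
            np.roll(Z, 1, axis=1) + np.roll(Z, -1, axis=1) - 4 * Z)

def gray_scott(n=64, steps=200, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    # U starts uniform; a small square of V seeded in the centre
    # breaks the symmetry and lets a pattern grow.
    U = np.ones((n, n))
    V = np.zeros((n, n))
    c = n // 2
    V[c - 3:c + 3, c - 3:c + 3] = 0.5
    for _ in range(steps):
        UVV = U * V * V  # reaction term: U + 2V -> 3V
        U += Du * laplacian(U) - UVV + f * (1 - U)
        V += Dv * laplacian(V) + UVV - (f + k) * V
    return U, V

U, V = gray_scott()
```

Plot `V` with your favourite colour map after ~10,000 steps and the resemblance to animal-hide spots is hard to miss.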

I nicked the title of this post — well, almost nicked — from a quote generally attributed to Plutarch: “Painting is silent poetry, and poetry is painting that speaks.” It’s very encouraging indeed that Creative Reactions followed hot on the heels of the Science Rhymes event organised by my UoN colleague Gerardo Adesso a couple of weeks ago (see Brigitte Nerlich’s great review for the Making Science Public blog). Could we at last be breaking down the barriers between those two cultures that CP Snow famously identified so many years ago?

At the very least, I get the feeling that there’s a great deal more going on than just a superficial painting over the cracks…

Are the Nanobots Nigh?

The annual Pint Of Science festival, about which I’ve blogged previously and enthusiastically, is taking place this year from May 20 – 22 not only across the UK but in 24 countries worldwide. This, if I remember correctly, is the fourth consecutive year that I’ve done a Pint of Science talk, and I am looking forward immensely to speaking in the Scratching The Surface of Material Science session tonight in Parliament Bar in town, alongside my University of Nottingham colleagues Morgan Alexander and Nesma Aboulkhair. (Encouragingly, all of the Pint of Science events in Nottingham have sold out!)

The title of the talk I’ll give is “Artificial Intelligence at the Nanoscale (or Is The Nanopocalypse Nigh?)”, and I’ll focus on recent developments in machine-learning-enabled scanning probe microscopy, of the type described in this Computerphile video put together by Sean Riley last year…

The PoS talk will, however, also roundly criticise the breathless enthusiasm of certain futurist pundits for a nano-enabled future. (OK, I’ll name names. I mean Ray Kurzweil.  We’re going to become immortal by 2045 according to Ray. Because nano.) I had a long, but ultimately exceptionally productive, exchange all the way back in 2004 about the considerable stumbling blocks that stand in the way of the molecular manufacturing nanotech that is a key enabling component of Kurzweil’s “vision”. At the time I didn’t have a blog but Richard Jones very kindly posted the exchange at his Soft Machines blog, and I was rather pleased to find that the debate is still available there.

Soft Machines is an exceptionally good read on everything from nanoscience to R&D policy to general economics and politics. Richard has also written an incisive and compelling critique of Kurzweil and others’ stance on transhumanism. You should give both the blog and the book, “Against Transhumanism: The Delusion of Technological Transcendence“, a read at the earliest opportunity. You won’t regret it.