If it seems obvious, it probably isn’t

…And Then There’s Physics’ post on science communication, reblogged below, very much struck a chord with me. This point, in particular, is simply not as widely appreciated as it should be:

“Maybe what we should do more of is make it clear that the process through which we develop scientific knowledge is far more complicated than it may, at first, seem.”

There can too often be a deep-seated faith in the absolute objectivity and certainty of “The Scientific Method”, which possibly stems (at least in part) from our efforts to not only simplify but to “sell” our science to a wide audience. The viewer response to “Falsifiability and Messy Science”, a Sixty Symbols video on the messiness of the scientific process, brought this home to me (see The Truth, The Whole Truth, and Nothing But…, below).

(…but I’ve worried for a long time that I’ve been contributing to exactly the problem ATTP describes: Guilty Confessions of a YouTube Physicist)

By the way, if you’re not subscribed to ATTP’s blog, I heartily recommend that you sign up right now.

...and Then There's Physics

There’s an interesting paper that someone (I forget who) highlighted on Twitter. It’s about when science becomes too easy. The basic idea is that there are pitfalls to popularising scientific information.

Compared to experts,

laypeople have not undergone any specialized training in a particular domain. As a result, they do not possess the deep-level background knowledge and relevant experience that a competent evaluation of science-related knowledge claims would require.

However, in the process of communicating, and popularising, science, science communicators tend to provide simplified explanations of scientific topics that can

lead[s] readers to underestimate their dependence on experts and conclude that they are capable of evaluating the veracity, relevance, and sufficiency of the contents.

I think that this is an interesting issue and it is partly what motivated my post about public involvement in science.

However, I am slightly uneasy about this general framing. I think everyone is a…


Beauty and the Biased

A big thank you to Matin Durrani for the invitation to provide my thoughts on the Strumia saga — see “The Worm That (re)Turned” and “The Natural Order of Things?” for previous posts on this topic — for this month’s issue of Physics World. PW kindly allows me to make the pdf of the Opinion piece available here at Symptoms. The original version (with hyperlinks intact) is also below.

(And while I’m at it, an even bigger thank you to Matin, Tushna, and all at PW for this immensely flattering (and entirely undeserved, given the company I’m in) accolade…)


From Physics World, Dec. 2018.

A recent talk at CERN about gender in physics highlights that biases remain widespread. Philip Moriarty says we need to do more to tackle such issues head on

When Physics World asked several physicists to name their favourite books for the magazine’s 30th anniversary issue, I knew immediately what I would choose (see October pp 74-78). My “must-read” pick was Sabine Hossenfelder’s exceptionally important Lost In Math: How Beauty Leads Physics Astray, which was released earlier this year.

Hossenfelder, a physicist based at the Frankfurt Institute for Advanced Studies, is an engaging and insightful writer who is funny, self-deprecating, and certainly not afraid to cause offence. I enjoyed the book immensely, being taken on a journey through modern theoretical physics in which Hossenfelder attempts to make sense of her profession. If there is one chapter of the book that particularly resonated with me, it’s the concluding Chapter 10, “Knowledge is Power”. This is a powerful closing statement that deserves to be widely read by all scientists, but especially by that irksome breed of physicist who believes — when all evidence points to the contrary — that they are somehow immune to the social and cognitive biases that affect every other human.

In “Knowledge is Power”, Hossenfelder adeptly outlines the primary biases that all good scientists have striven to avoid ever since the English philosopher Francis Bacon identified his “idols of the tribe” – i.e. the tendency of human nature to prefer certain types of incorrect conclusions. Her pithy single-line summary at the start of the chapter captures the key issue: “In which I conclude the world would be a better place if everyone listened to me”.

Lost in bias

Along with my colleague Omar Almaini from the University of Nottingham, I teach a final-year module entitled “The Politics, Perception, and Philosophy of Physics”. I say teach, but in fact, most of the module consists of seminars that introduce a topic for students to then debate, discuss and argue for the remaining time. We dissect Richard Feynman’s oft-quoted definition of science: “Science is the belief in the ignorance of experts”.  Disagreeing with Feynman is never a comfortable position to adopt, but I think he does science quite a disservice here. The ignorance, and sometimes even the knowledge, of experts underpins the entire scientific effort. After all, collaboration, competition and peer review are the lifeblood of what we do. With each of these come complex social interactions and dynamics and — no matter how hard we try — bias. For this and many other reasons, Lost In Math is now firmly on the module reading list.

At a CERN workshop on high-energy theory and gender at the end of September, theoretical physicist Alessandro Strumia from the University of Pisa claimed that women with fewer citations were being hired over men with greater numbers of citations. Strumia faced an immediate backlash: CERN suspended him pending an investigation, and some 4000 scientists signed a letter calling his talk “disgraceful”. Strumia’s talk was poorly researched, ideologically driven, and an all-round embarrassingly biased tirade against women in physics. I suggest that Strumia needs to take a page — or many — out of Hossenfelder’s book. I was reminded of her final chapter time and time again when I read through Strumia’s cliché-ridden and credulous arguments, his reactionary pearl-clutching palpable from almost every slide of his presentation.

One criticism that has been levelled at Hossenfelder’s analysis is that it does not offer solutions to counter the type of biases that she argues are prevalent in the theoretical-physics community and beyond. Yet Hossenfelder does devote an appendix — admittedly rather short — to listing some pragmatic suggestions for tackling the issues discussed in the book. These include learning about, and thus tackling, social and cognitive biases.

This is all well and good, except that there are none so blind as those that will not see. The type of bias that Strumia’s presentation exemplified is deeply engrained. In my experience, his views are hardly fringe, either within or outside the physics community — one need only look to the social media furore over James Damore’s similarly pseudoscientific ‘analysis’ of gender differences in the context of his overwrought “Google Manifesto” last year. Just like Damore, Strumia is being held up by the usual suspects as the ever-so-courageous rational scientist speaking “The Truth”, when, of course, he’s entirely wedded to a glaringly obvious ideology and unscientifically cherry-picks his data accordingly. In a masterfully acerbic and exceptionally timely blog post published soon after the Strumia storm broke (“The Strumion. And On”), his fellow particle physicist Jon Butterworth (UCL) highlighted a number of the many fundamental flaws at the core of Strumia’s over-emotional polemic.

Returning to Hossenfelder’s closing chapter: there she highlights that the “mother of all biases” is the “bias blind spot”, or the insistence that we certainly are not biased:

“It’s the reason my colleagues only laugh when I tell them biases are a problem, and why they dismiss my ‘social arguments’, believing they are not relevant to scientific discourse,” she writes. “But the existence of those biases has been confirmed in countless studies. And there is no indication whatsoever that intelligence protects against them; research studies have found no links between cognitive ability and thinking biases.”

Strumia’s diatribe is the perfect example of this bias blind spot in action. His presentation is also a case study in confirmation bias. If only he had taken the time to read and absorb Hossenfelder’s writing, Strumia might well have saved himself the embarrassment of attempting to pass off pseudoscientific guff as credible analysis.

While the beauty of maths leads physics astray, it is ugly bias that will keep us in the dark.

Is Science Self-Correcting? Some Real-World Examples From Psychological Research.

…or The Prognosis Is Not Good, Psychology. It’s A Bad Case Of Physics Envy*

Each year there are two seminars for the Politics, Perception, and Philosophy of Physics module that are led by invited speakers. First up this year was the enlightening, engaging, and entertaining Nick Brown, who, and I quote from no less a source than The Guardian, has an “astonishing story…[he] began a part-time psychology course in his 50s and ended up taking on America’s academic establishment.”

I recommend you read that Guardian profile in full to really get the measure of Mr. (soon to be Dr.) Brown but, in brief, he has played a central role in exposing some of the most egregious examples of breathtakingly poor, or downright fraudulent, research in psychology, a field that needs to get its house in order very soon. (A certain high-profile professor of psychology who is always very keen to point the finger at what he perceives to be major failings in other disciplines should bear this in mind and heed his own advice. (Rule #6, as I recall…))

Nick discussed three key examples of where psychology research has gone badly off the rails:

    • Brian Wansink, erstwhile director of Cornell’s Food and Brand Lab, whose research findings (cited over 20,000 times) have been found to be rather tough to digest given that they’re riddled with data manipulation and resulted from other far-from-robust research practices.
    • The “audacious academic fraud” of Diederik Stapel. (Nick is something of a polymath, being fluent in Dutch among other skills, and translated Stapel’s autobiography/confession, making it freely available online. I strongly recommend adding Stapel’s book to your “To Read” list; I found it a compelling story that provides a unique insight into the mindset and motivations of someone who fakes their research. Seeing the ostracisation and shaming through Stapel’s eyes was a profoundly affecting experience and I found myself sympathising with the man, especially with regard to the effects of his fraud on his family.)
    • The “critical positivity ratio” of Fredrickson and Losada: a claimed universal happiness threshold (2.9013) derived from a misapplication of the Lorenz equations of chaos theory**, which Nick, together with Alan Sokal and Harris Friedman, comprehensively debunked.

It was a great pleasure to host Nick’s visit to Nottingham (and to finally meet him after being in e-mail contact on and off for about eighteen months). Here’s his presentation…

*But don’t worry, you’re not alone.

** Hmmm. More psychologists with a chaotic concept of chaos. I can see a pattern emerging here. Perhaps it’s fractal in nature…

Update 18/11/2018, 15:30. I am rapidly coming to the opinion that in the dismal science stakes, psychology trumps economics by quite some margin. I’ve just read Catherine Bennett’s article in The Observer today on a research paper that created a lot of furore last week: “Testing the Empathizing-Systemizing theory of sex differences and the Extreme Male Brain theory of autism in half a million people”, a study which, according to a headline in The Times (amongst much other similarly over-excited and credulous coverage), has shown that male and female brains are very different indeed.

One would get the impression from the headlines that the researchers must have carried out an incredibly systematic and careful fMRI study, which, given the sample size, in turn must have taken decades and involved highly sophisticated data analysis techniques.

Nope.

They did their research by…asking people to fill in questionnaires.

Bennett highlights Dean Burnett’s incisive demolition of the paper and surrounding media coverage. I thoroughly recommend Burnett’s post — he highlights a litany of issues with the study (and others like it). For one thing, the idea that self-reporting via questionnaire can provide a robust, objective analysis of just about any human characteristic or trait is ludicrously simple-minded. Burnett doesn’t cover all of the issues because, as he says at the end of his post: “There are other concerns to raise of course, but I’ll keep them in reserve for when the next study that kicks this whole issue off again is published. Shouldn’t be more than a couple of months.”

Indeed.

Bullshit and Beyond: From Chopra to Peterson

Harry G Frankfurt‘s On Bullshit is a modern classic. He highlights the style-over-substance tenor of the most fragrant and flagrant bullshit, arguing that

It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction. A person who lies is thereby responding to the truth, and he is to that extent respectful of it. When an honest man speaks, he says only what he believes to be true; and for the liar, it is correspondingly indispensable that he considers his statements to be false. For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.

In other words, the bullshitter doesn’t care about the validity or rigour of their arguments. They are much more concerned with being persuasive. One aspect of BS that doesn’t quite get the attention it deserves in Frankfurt’s essay, however, is that special blend of obscurantism and vacuity that is the hallmark of three world-leading bullshitters of our time:  Deepak Chopra, Karen Barad (see my colleague Brigitte Nerlich’s important discussion of Barad’s wilfully impenetrable language here), and Jordan Peterson. In a talk for the University of Nottingham Agnostic, Secularist, and Humanist Society last night (see here for the blurb/advert), I focussed on the intriguing parallels between their writing and oratory. Here’s the video of the talk.

Thanks to UNASH for the invitation. I’ve not included the lengthy Q&A that followed (because I stupidly didn’t ask for permission to film audience members’ questions). I’m hoping that some discussion and debate might ensue in the comments section below. If you do dive in, try not to bullshit too much…

The war on (scientific) terror…

I’ve been otherwise occupied of late so the blog has had to take a back seat. I’m therefore coming to this particular story rather late in the day. Nonetheless, it’s on an exceptionally important theme that is at the core of how scientific publishing, scientific critique, and, therefore, science itself should evolve. That type of question doesn’t have a sell-by date so I hope my tardiness can be excused.

The story involves a colleague and friend who has courageously put his head above the parapet (on a number of occasions over the years) to highlight just where peer review goes wrong. And time and again he has been viciously castigated by (some) senior scientists for doing nothing more than critiquing published data in as open and transparent a fashion as possible. In other words, he’s been pilloried (by pillars of the scientific community) for daring to suggest that we do science the way it should be done.

This time, he’s been called a…wait for it…scientific terrorist. And by none other than the most cited chemist in the world over the last decade (well, from 2000 – 2010): Chad A Mirkin. According to his Wiki page, Mirkin “was the first chemist to be elected into all three branches of the National Academies. He has published over 700 manuscripts (Google Scholar H-index = 163) and has over 1100 patents and patent applications (over 300 issued, over 80% licensed as of April 1, 2018). These discoveries and innovations have led to over 2000 commercial products that are being used worldwide.”

With that pedigree, this guy must really have done something truly appalling for Mirkin to call him a scientific terrorist (oh, and a zealot, and a narcissist), right? Well, let’s see…

The colleague in question is Raphael Levy, a Senior Lecturer — or Associate Professor, to use the term increasingly preferred by UK universities and traditionally used by our academic cousins across the pond — in Biochemistry at the University of Liverpool. He has a deep and laudable commitment to open science and the evolution of the peer review system towards a more transparent and accountable ethos.

Along with Julian Stirling, who was a PhD student here at Nottingham at the time, and a number of other colleagues, I collaborated closely with Raphael and his team (from about 2012 – 2014) in critiquing and contesting a body of work that claimed that stripes (with ostensibly fascinating physicochemical and biological properties) formed on the surface of suitably functionalised nanoparticles. I’m not going to revisit the “stripy” nanoparticle debate here. If you’re interested, see Refs [1-5] below. Raphael’s blog, which I thoroughly recommend, also has detailed bibliographies for the stripy nanoparticle controversy.

More recently, Raphael and his co-workers at Liverpool have found significant and worrying deficiencies in claims regarding the efficacy of what are known as SmartFlares. (Let me translate that academically-nuanced wording: Apparently, they don’t work.) Chad Mirkin played a major role in the development of SmartFlares, which are claimed to detect RNA in living cells and were sold by SigmaMilliPore from 2013 until recently, when they were taken off the market.

The SmartFlare concept is relatively straightforward to understand (even for this particular squalid state physicist, who tends to get overwhelmed by molecules much larger than CO): each ‘flare’ probe comprises a gold nanoparticle attached to an oligonucleotide (that encodes a target sequence) and a fluorophore, which does not emit fluorescence as long as it’s near to the gold particle. When the probe meets the target RNA, however, this displaces the fluorophore (thus reducing the coupling to, and quenching by, the gold nanoparticle) and causes it to glow (or ‘flare’). Or so it’s claimed.
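To make that on/off mechanism a little more concrete, here’s a minimal sketch of the idea in code. To be clear: the FRET-style sixth-power quenching law and every parameter value below are my own illustrative assumptions (quenching by a real gold nanoparticle is rather more complicated), not numbers taken from Mirkin’s papers or from Raphael’s data.

```python
# Toy model of a 'flare' probe: the fluorophore is dark while tethered close
# to the gold nanoparticle, and lights up once displaced by the target RNA.
# Functional form and all numbers are illustrative assumptions only.

def quenching_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Fraction of the fluorophore's emission quenched at separation r_nm
    (FRET-like sixth-power law; r0_nm is the 50% quenching distance)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

def flare_signal(r_nm: float) -> float:
    """Relative fluorescence: low when tethered, high once displaced."""
    return 1.0 - quenching_efficiency(r_nm)

# Tethered ~2 nm from the particle: almost fully quenched, i.e. dark.
print(f"tethered (2 nm):   signal = {flare_signal(2.0):.3f}")
# Displaced to ~20 nm by the target RNA: quenching lifted, the probe 'flares'.
print(f"displaced (20 nm): signal = {flare_signal(20.0):.3f}")
```

The dispute described below is about whether, inside cells, the probes ever actually reach the ‘displaced’ branch of that logic at all.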

As described in a recent article in The Scientist, however, there is compelling evidence from a growing number of sources, including, in particular, Raphael’s own group, that SmartFlares simply aren’t up to the job. Raphael’s argument, for which he has strong supporting data (from electron-, fluorescence- and photothermal microscopy), is that the probes are trapped in endocytic compartments and get nowhere near the RNA they’re meant to target.

Mirkin, as one might expect, vigorously claims otherwise. That’s, of course, entirely his prerogative. What’s most definitely not his prerogative, however, is to launch hyperbolic personal attacks at a critic of his work. As Raphael describes over at his blog, he asked the following question at the end of a talk Mirkin gave at the American Chemical Society meeting in Boston a month ago:

In science, we need to share the bad news as well as the good news. In your introduction you mentioned four clinical trials. One of them has reported. It showed no efficacy and Purdue Pharma which was supposed to develop the drug decided not to pursue further. You also said that 1600 forms of NanoFlares were commercially available. This is not true anymore as the distributor has pulled the product because it does not work. Finally, I have a question: what is the percentage of nanoparticles that escape the endosome?

According to Raphael’s description (which is supported by others at the conference — see below), Mirkin’s response was ad hominem in the extreme:

[Mirkin said that]…no one is reading my blog (who cares),  no one agrees with me; he called me a “scientific zealot” and a “scientific terrorist”.

Raphael and I have been in a similar situation before with regard to scientific critique not exactly being handled with good grace. We and our colleagues have faced accusations of being cyber-bullies and, worse, have seen fake blogs and identity theft used to attempt to discredit our (purely scientific) criticism.

Science is in a very bad place indeed if detailed criticism of a scientist’s work is dismissed aggressively as scientific terrorism/zealotry. We are, of course, all emotional beings to a greater or lesser extent. Therefore, and despite protestations to the contrary from those who have an exceptionally naive view of The Scientific Method, science is not some wholly objective monolith that arrives at The Truth by somehow bypassing all the messy business of being human. As Neuroskeptic described so well in a blog post about the stripy nanoparticle furore, often professional criticism is taken very personally by scientists (whose self-image and self-confidence can be intimately connected to the success of the science we do). Criticism of our work can therefore often feel like criticism of us.

But as scientists we have to recognise, and then always strive to rise above, those very human responses; to take on board, rather than aggressively dismiss out of hand, valid criticisms of our work. This is not at all easy, as PhD Comics among others has pointed out.

One would hope, however, that a scientist of Mirkin’s calibre would set an example, especially at a conference with the high profile of the annual ACS meeting. As a scientist who witnessed the exchange between Raphael and Mirkin put it,

I witnessed an interaction between two scientists. One asks his questions gracefully and one responding in a manner unbecoming of a Linus Pauling Medalist. It took courage to stand in front of a packed room of scientists and peers to ask those questions that deserved an answer in a non-aggressive manner. It took even more courage to not become reactive when the respondent is aggressive and belittling. I certainly commended Raphael Levy for how he handled the aggressive response from Chad Mirkin.

Or, as James Wilking put it somewhat more pithily…

An apology from Mirkin doesn’t seem to be forthcoming. This is a shame, to put it mildly. What I found rather more disturbing than Mirkin’s overwrought accusation of scientific terrorism, however, was the reaction of an anonymous scientist in that article in The Scientist:

“I think what everyone has to understand is that unhealthy discussion leads to unsuccessful funding applications, with referees pointing out that there is a controversy in the matter. Referee statements like these . . . in a highly competitive environment for funding, simply drain the funding away of this topic,” he writes in an email to The Scientist. He believes a recent grant application of his related to the topic was rejected for this reason, he adds.

This is a shockingly disturbing mindset. Here we have a scientist bemoaning that (s)he did not get public funding because of what is described as “unhealthy” public discussion and controversy about an area of science. Better that we all keep schtum about any possible problems and milk the public purse for as much grant funding as possible, right?

That attitude stinks to high heaven. If it takes some scientific terrorism to shoot it down in flames then sign me up.


[1] Stripy Nanoparticle Controversy Blows Up

[2] Peer Review In Public: Rise Of The Cyber-Bullies? 

[3] Looking At Nothing, Seeing A Lot

[4] Critical Assessment of the Evidence for Striped Nanoparticles, Julian Stirling et al., PLOS ONE 9, e108482 (2014)

[5] How can we trust scientific publishers with our work if they won’t play fair?

The truth, the whole truth, and nothing but…

This video, which Brady Haran uploaded for Sixty Symbols back in May, ruffled a few feathers…

I’ve been meaning to find time to address some of the very important and insightful points that were raised in the discussions under the video, but I’ve been …

Errrm. Sorry. Hang on just one minute. “Very important and insightful points” you say? Under a YouTube video? Yeah, right…

Believe me, I fully appreciate your entirely justified scepticism here but, yes, if you scroll past the usual dose of grammatically-garbled, content-free boilerplate from the more cerebrally challenged, you’ll find that the comments section contains a considerable number of points that are entirely worthy of discussion. In fact, I’m going to be using some of those YouTube comments to prompt debate during the Politics, Perception and Philosophy of Physics (PPP) module that my colleague Omar Almaini and I run in the autumn semester.

Before I get into considering specific comments, however, I’ll just take a brief moment to highlight a central theme “below the line” of that video, viz. the absolute faith in the trustworthiness and reliability of the scientific method. Or, more accurately, the monolith that is The Scientific Method. Many who contribute to that comments section are utterly convinced that The Truth, however that might be defined, will always win out against the inherent messiness of the scientific process. Well, maybe. Possibly. But on what time scale? And with what implications for the progress of science in the meantime? Wedded entirely to their ideology without ever presenting any evidence to support their case, they are completely convinced that they know exactly how science works. Often without ever doing science themselves. This is hardly the most scientific of approaches.

OK, deep breath. I’m going in. Let’s delve into the comments section…

[Screenshot of “Ali Syed”’s YouTube comment]

The idea that science progresses as a nice linear, objective process from hypothesis to “fact” is breathtakingly naive. Unfortunately, it’s exceptionally difficult for some to countenance, within their rather rigid worldview and mindset, that science could ever be inherently messy and uncertain. As “Ali Syed” notes above, this can indeed lead to quite some intellectual indigestion for some…

[Screenshot of “cavalrycome”’s YouTube comment]

cavalrycome” here helpfully serves up a key example of that breathtaking naivety in action. The idea that testing scientific theories doesn’t depend on social factors and serendipity shows a deep and touching faith — and I use that word advisedly — in the tenets of The Scientific Method. “Just do the experiment” is the mantra. Or, as Feynman put it,

 If it disagrees with experiment it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is – if it disagrees with experiment it is wrong. That is all there is to it.

(I guess it goes without saying that, as is the case for so many physicists, Feynman is a bit of a hero of mine).

…all well and good, except that doing the experiment simply isn’t enough. The same experimental data can be (mis)interpreted by different scientists in many ways. I could point to very many examples but let’s choose one that hits close to home for me.

Along with colleagues in Liverpool, Nottingham, and Tsukuba, I spent a considerable amount of my time a few years back embroiled in a critique of scanning probe microscope (SPM) images of so-called ‘stripy’ nanoparticles. I am not about to open that can of worms again. (Life is too short). For an overview, see this post.

Without going into detail, the key point is this: we had our interpretation of the data, and the group whose work we critiqued had theirs. On more than one occasion, the fact that their interpretation had been previously published and regularly cited was used to justify their position. (I thoroughly recommend Neuroskeptic’s post on the central role of data interpretation in science. And this follow-up post.)

The testing, publication and critique of experimental (or theoretical) data fundamentally involves the scientific community at many levels. First of all, there’s the sociology of the peer review process itself. What has been previously published? Do our results agree with that previously published work? If not, can we convince the editors and referees of the validity of our data? Then there’s the question of the “impact” and excitement of the science in question. Is the work newsworthy? Will it make it to the glossy cover of the journal? Will it help secure the postdoc a lectureship or a tenure-track position?

Moreover, science requires funding.  Testing a particular theory may well require a few million quid of experimental kit, consumables, and/or staff resources. That funding is allocated via peer review. And peer review is notoriously hit and miss. I’ve seen exactly the same proposal be rejected by one funding panel and funded by another. On more than one occasion. Having the right person speak for your grant proposal at a prioritisation panel meeting can make all the difference when it comes to success in funding. (But don’t just take my word for it when it comes to how peer review (mis)steers the scientific process — a minute or two on Google is all you need to find key examples.)
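The “hit and miss” claim is easy to illustrate with a toy simulation. A minimal sketch, in which every number is my own assumption rather than data about any real funding scheme: a proposal has a fixed underlying quality, each panel scores it with independent reviewer noise, and only scores above a cutoff get funded.

```python
import random

random.seed(42)

# Toy model of peer-review noise (all numbers are illustrative assumptions).
TRUE_QUALITY = 7.0      # out of 10: a genuinely strong proposal
FUNDING_CUTOFF = 7.5    # panels fund only proposals scoring above this
N_PANELS = 10

def panel_score(quality: float, n_reviewers: int = 3, noise_sd: float = 1.0) -> float:
    """Average of a few noisy reviewer scores for the same proposal."""
    return sum(random.gauss(quality, noise_sd) for _ in range(n_reviewers)) / n_reviewers

outcomes = ["funded" if panel_score(TRUE_QUALITY) > FUNDING_CUTOFF else "rejected"
            for _ in range(N_PANELS)]
print(outcomes)
# The very same proposal, put before ten statistically identical panels,
# is funded by some and rejected by others -- pure reviewer noise.
```

Nothing about the proposal changes between panels; the flip from “rejected” to “funded” is reviewer noise, full stop.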

Let’s complement that nanoparticle example above with some science involving rather larger length scales.  Following one of the PPP sessions last year, Omar pointed me towards an illuminating blog post by Ed Hawkins on uncertainty estimates in the measurement of the Hubble constant. Here are the key data (taken from R. P. Kirshner, PNAS 101, 8 (2004)):

[Figure: measurements of the Hubble constant as a function of publication date, from R. P. Kirshner, PNAS 101, 8 (2004)]

Note the evolution of the Hubble constant towards its currently accepted value. Feynman (yes, him again) made a similar point about the measurement of the value of the charge of the electron in his classic Cargo Cult Science talk at Caltech in 1974:

One example: Millikan measured the charge on an electron by an experiment with falling oil drops and got an answer which we now know not to be quite right.  It’s a little bit off, because he had the incorrect value for the viscosity of air.  It’s interesting to look at the history of measurements of the charge of the electron, after Millikan.  If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.

 Why didn’t they discover that the new number was higher right away?  It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong.  When they got a number closer to Millikan’s value they didn’t look so hard.  And so they eliminated the numbers that were too far off, and did other things like that.  We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.

Sorry, Richard, but that disease is very much still with us. If anything, it’s a little more virulent these days…
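The drift Feynman describes is easy to caricature in code. Here’s a toy sketch of anchoring bias in successive measurements; the update rule and every number in it are my own inventions for illustration, not a model of the actual electron-charge data:

```python
import random

random.seed(1)

TRUE_VALUE = 1.00   # the 'real' value, in arbitrary units
accepted = 0.90     # first published value, deliberately low (cf. Millikan)
PULL = 0.7          # fraction of any 'surprising' deviation explained away
TOLERANCE = 0.05    # how far from the accepted value a result may sit
                    # before the experimenters go hunting for 'mistakes'

history = [accepted]
for _ in range(15):
    raw = random.gauss(TRUE_VALUE, 0.05)  # an honest, unbiased measurement
    # Anchoring bias: a result far from the accepted value gets re-examined
    # until a 'reason' is found to shade it back towards the consensus.
    if abs(raw - accepted) > TOLERANCE:
        raw = accepted + (1.0 - PULL) * (raw - accepted)
    accepted = raw
    history.append(accepted)

print(" -> ".join(f"{v:.2f}" for v in history))
# The published values creep towards TRUE_VALUE in small steps rather than
# jumping straight there: the staircase Feynman describes.
```

Note that each individual measurement in the model is honest; the bias lives entirely in the decision about when to stop looking for mistakes.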

I could go on. But you get the idea. Only someone with a complete lack of experience of scientific research could ever suggest that the testing of scientific theories/ interpretations is free of “social factors and chance”.

What say you, “AntiCitizenX”…?

[Screenshot of “AntiCitizenX”’s YouTube comment]

So, apparently, the experience of scientists means nothing when it comes to understanding how science works? This viewpoint  — and it crops up regularly — never ceases to make me smile. The progress of science depends, fundamentally and critically, on the peer review process: decisions on which papers get published and which grants get funded are driven not by an adherence to one or other “philosophy of science” (which one?) but by working scientists.

The “messy day-to-day aspects of science” are science. This is how it works. It doesn’t matter a jot what Popper, Kuhn [1], Feyerabend, Lakatos or your particular philosopher of choice might have postulated when it comes to their preferred version of The Scientific Method. What matters is how science works in practice. (Do the experiment, right?) Popper et al. did not produce some type of received, immutable wisdom to which the scientific process must conform. (On a similar theme, And Then There’s Physics — more of whom later — has written a number of great posts on the simplistic caricatures of science that have often frustratingly stemmed from the Science and Technology Studies (STS) field of sociology, including this: STS: All Talk and No Walk?)

Does this mean that I think philosophy has no role to play in science or, more specifically, physics? Not at all. In fact, I think that we do our undergraduate (and postgraduate) students a major disservice by not introducing a great deal more philosophy into our physics courses. But to argue that scientists are somehow not qualified to speak about a process they themselves fundamentally direct is ceding rather too much ground to our colleagues in philosophy and sociology. And it’s deeply condescending to scientists.

As Sean Carroll so eloquently puts it in the paper to which I refer in the video,

The way in which we judge scientific theories is inescapably reflective, messy, and human. That’s the reality of how science is actually done; it’s a matter of judgement, not of drawing bright lines between truth and falsity, or science and non-science.

True or False?

Let’s now turn to the question of falsifiability (which was, after all, in the title of the video). Over to you, “Daniel Jensen”, as your comment seems to have resonated with quite a few:

[Screenshot of “Daniel Jensen”’s YouTube comment]

This fundamentally confuses the “bending over backwards to prove ourselves wrong” aspect of science — yes, Feynman again — with Popper’s falsifiability criterion. I draw a distinction between these in the video but, as was pointed out to me recently by Philip Ball, when it comes to many of those who contribute below the line “it’s as if they’re damned if they are going to let your actual words deprive them of their right to air their preconceived notions”.

(At least one commenter realises this:

[Screenshot of “Shkotay D”’s YouTube comment]

Thank you, “Shkotay D”. I’d like to think so.)

The point I make in the video re. falsifiability merely echoes what Sokal and Bricmont (and others) said way back in the 90s, and Carroll has reiterated within the context of multiverse theory: Popper’s criterion simply does not describe how science works in practice. Here’s what Sokal and Bricmont have to say in Fashionable Nonsense:

When a theory successfully withstands an attempt at falsification, a scientist will, quite naturally, consider the theory to be partially confirmed and will accord it a greater likelihood or a higher subjective probability. … But Popper will have none of this: throughout his life he was a stubborn opponent of any idea of ‘confirmation’ of a theory, or even of its probability. … the history of science teaches us that scientific theories come to be accepted above all because of their successes.
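Sokal and Bricmont’s talk of according a theory “a greater likelihood or a higher subjective probability” is, in modern terms, just Bayesian updating, and a toy calculation makes the point. The numbers below are purely illustrative assumptions on my part:

```python
# A Bayesian gloss (mine, not Sokal and Bricmont's) on 'partial confirmation':
# surviving a genuine attempt at falsification raises a theory's subjective
# probability via Bayes' rule.

def posterior(prior: float, p_pass_if_true: float, p_pass_if_false: float) -> float:
    """P(theory | test passed), from Bayes' rule."""
    evidence = prior * p_pass_if_true + (1.0 - prior) * p_pass_if_false
    return prior * p_pass_if_true / evidence

p = 0.5                  # start agnostic (illustrative numbers throughout)
for test in range(1, 6):  # five independent risky tests, each passed
    # A true theory always passes; a false one sneaks through 30% of the time.
    p = posterior(p, p_pass_if_true=1.0, p_pass_if_false=0.3)
    print(f"after test {test}: P(theory) = {p:.3f}")
```

No finite number of passed tests pushes the probability to 1, which is the grain of truth in Popper’s position; but the steady upward ratchet is precisely the “confirmation” he refused to countenance.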

The question of misinterpretation (wilful or otherwise) is also raised by “tennisdude52278”:

[Screenshot of “tennisdude52278”’s YouTube comment]

I stand by everything I said in that video. I am acutely aware of just how easily statements are cherry-picked, quote-mined, and ripped out of context online, but that can’t be used as a justification to self-censor for the sake of “toeing the party line” or presenting a united front. Science isn’t politics, despite its messy character. It is both fundamentally dishonest and ultimately damaging to the credibility of science (and scientists) if we pretend otherwise.

We demand rigidly defined areas of doubt and uncertainty” [2]

What I find particularly intriguing about the more overwrought responses to the video is the deep unwillingness to accept the inherent uncertainties and human biases that are inevitably at play in the progress of science. There’s a deep-rooted, quasi-religious faith in the ability of science to provide definitive, concrete, unassailable answers to questions of life, the universe, and everything. But that’s not how science works. Carlo Rovelli forcefully makes this point in Science Is Not About Certainty:

“The very expression “scientifically proven” is a contradiction in terms. There’s nothing that is scientifically proven. The core of science is the deep awareness that we have wrong ideas, we have prejudices…we have a vision of reality that is effective, it’s good, it’s the best we have found so far. It’s the most credible we have found so far; it’s mostly correct.”

The craving for certainty is, however, a particularly human characteristic. We’re pattern-seekers; we love to find regularity, even when there’s no regularity there. And there are some who know very well how to effectively exploit that desire for certainty. This article on the guru appeal of Jordan B Peterson highlights just how the University of Toronto professor of psychology plays to the gallery in fulfilling that need:

“He sees the vacuum left not just by the withdrawal of the Christian tradition, but by the moral relativism and self-abnegation that have flooded across the West in its wake. Furthermore, he recognizes — from his experience as a practicing psychologist and as a teacher — that people crave principles and certainties.”

In passing, I should note that I disagree with the characterisation of Peterson in that article as a man who espouses ideas of depth and substance. No. Really, no. (Really, really, no.) He’s of course an accomplished and charismatic public speaker (with a particular talent for obfuscation that rivals, worryingly, that of politicians). But then so too is Deepak Chopra. [3]

I’ve spent rather too much of my time over the last year discussing Peterson’s self-help shtick in various fora on- and offline. I’m particularly grateful to And Then There’s Physics for highlighting a debate I had with Fred McVittie last year on a motion of particular relevance to this post, “Jordan Peterson speaks the truth”. The comments thread under ATTP’s post runs to over 400 comments, highlighting that the cult of Peterson is fascinating in terms of its social dynamics. Unfortunately, what Peterson himself has to say is a great deal less interesting, and often mind-numbingly banal, compared to the underlying sociology of his flock.

What Peterson clearly recognises, however, is that certainty sells. Humans tend to crave simple and simplistic messages, free of the type of ambiguity that is so often part-and-parcel of the scientific process. So he dutifully, and profitably, becomes the source of memes and headline messages so simple that they can feature comfortably on the side of a mug of coffee:

[Image: a coffee mug printed with a Jordan Peterson quotation]

Comforting though Peterson’s simplistic and dogmatic rules for life might be for many, I much prefer the honesty that underpins Carl Sagan‘s rather more ambiguous and uncertain outlook…

Science demands a tolerance for ambiguity. Where we are ignorant, we withhold belief. Whatever annoyance the uncertainty engenders serves a higher purpose: It drives us to accumulate better data. This attitude is the difference between science and so much else.

[1] I’m not a fan of Kuhn’s writings, I’m afraid. I am well aware that “The Structure of Scientific Revolutions” is held up as some sort of canonical gospel when it comes to the philosophy of science, but “…Scientific Revolutions” is merely Kuhn’s opinion. Nothing more, nothing less. It’s not the last word on the nature of progress in science. For one thing, his views on the lack of “commensurability” of different paradigms are clearly bunkum in the context of quantum physics and relativity. The correspondence principle in QM alone is enough to rebut Kuhn’s incommensurability argument. And just how many undergrad physics students have been tasked in their first year to consider a problem in QM or special relativity in “the classical limit”…?
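For what it’s worth, here is the stock first-year exercise alluded to above: expand the relativistic energy of a particle in powers of v/c and the Newtonian kinetic energy drops out as the leading velocity-dependent term, i.e. the old “paradigm” sits inside the new one as a limiting case, which is hard to square with strict incommensurability.

```latex
E = \frac{mc^2}{\sqrt{1 - v^2/c^2}}
  = mc^2 + \tfrac{1}{2}mv^2 + \tfrac{3}{8}\,\frac{m v^4}{c^2} + \cdots
  \qquad (v \ll c)
```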

[2] Treat yourself to a nice big bowl of petunias if you recognise the source of the quote here.

[3] As an aside to the aside, what I find remarkable is that the subadolescent drawings and scribblings that decorate Peterson’s “Maps Of Meaning” were apparently offered to Harvard psychology undergraduates as part of their education. (Actually, that’s rather unfair to those adolescents, who would be mortified at being linked in any way with the likes of these ravings.) Unlike Peterson, I’m not about to wring my hands, clutch my pearls, and call for a McCarthyite purge of undergraduate teaching in his discipline. But let’s just say that my confidence in the quality assurance mechanisms underpinning psychology education and research has been dented just a little. (Diederik Stapel’s autobiography also didn’t reassure me when it comes to the lack of reproducibility that plagues psychology research.) I’ll concur entirely with Prof. Peterson on this point: it’s indeed best to get one’s own house in order before criticising others…

Politics. Perception. Philosophy. And Physics.

Today is the start of the new academic year at the University of Nottingham (UoN) and, as ever, it crept up on me and then leapt out with a fulsome “Gotcha”. Summer flies by so very quickly. I’ll be meeting my new 1st year tutees this afternoon to sort out when we’re going to have tutorials and, of course, to get to know them. One of the great things about the academic life is watching tutees progress over the course of their degree from that first “getting to know each other” meeting to when they graduate.

The UoN has introduced a considerable number of changes to the “student experience” of late via its Project Transform process. I’ve vented my spleen about this previously but it’s a subject to which I’ll be returning in the coming weeks because Transform says an awful lot about the state of modern universities.

For now, I’m preparing for a module entitled “The Politics, Perception and Philosophy of Physics” (F34PPP) that I run in the autumn semester. This is a somewhat untraditional physics module because, for one thing, it’s almost entirely devoid of mathematics. I thoroughly enjoy F34PPP each year (despite this amathematical heresy) because of the engagement and enthusiasm of the students. The module is very much based on their contributions — I am more of a mediator than a lecturer.

STEM students are sometimes criticised (usually by Simon Jenkins) for having poorly developed communication skills. This is an especially irritating stereotype in the context of the PPP module, where I have been deeply impressed by the quality of the writing the students submit. As I discuss in the video below (an overview of the module), I’m not alone in recognising this: articles submitted as F34PPP coursework have been published in Physics World, the flagship magazine of the Institute of Physics.

In the video I note that my intention is to upload a weekly video for each session of the module. I’m going to do my utmost to keep this promise and, moreover, to accompany each of those videos with a short(ish) blog post. (But, to cover my back, I’ll just note in advance that the best laid schemes gang aft agley…)