La Tristesse Durera (Sigh To A Scream)*

Just a few short lines on today’s post from the always-worth-reading And Then There’s Physics… ATTP’s exasperation is clear from the title of his post: “Sigh”. You should, of course, read the entire piece, but the lines that particularly resonated with me, following my own recent cri de coeur on the subject of online factions, were these:

Unfortunately, I think this is becoming all too common. My impression is that we’re now in a position where people who probably mostly agree about the issues, are in conflict over details that probably don’t really matter.

I really do wish it were possible to have these nuanced discussions without it turning contentious; that it were possible to have a discussion where maybe people didn’t end agreeing, but still learned something.

sigh, indeed.

*With all due credit to Mr. James Dean Bradfield and colleagues.

Sloppy Science: Still Someone Else’s Problem?

“The Somebody Else’s Problem field is much simpler and more effective, and what’s more can be run for over a hundred years on a single torch battery… An SEP is something we can’t see, or don’t see, or our brain doesn’t let us see, because we think that it’s somebody else’s problem…. The brain just edits it out, it’s like a blind spot”.

Douglas Adams (1952 – 2001), Life, the Universe and Everything

The very first blog post I wrote (back in March 2013), for the Institute of Physics’ now sadly defunct physicsfocus project, was titled “Are Flaws in Peer Review Someone Else’s Problem?” and cited the passage above from the incomparable, and sadly missed, Mr. Adams. The post described the trials and tribulations my colleagues and I were experiencing at the time in trying to critique some seriously sloppy science, on the subject of ostensibly “striped” nanoparticles, that had been published in very high profile journals by a very high profile group. Not that I suspected it at the time of writing the post, but that particular saga ended up dragging on and on, involving a litany of frustrations in our attempts to correct the scientific record.

I’ve been put in mind of the stripy saga, and that six-year-old post, for a number of reasons lately. First, the most recent stripe-related paper from the group whose work we critiqued makes absolutely no mention of the debate and controversy. It’s as if our criticism never existed; the issues we raised, and the surrounding controversy, are simply ignored by that group in their most recent work.

More importantly, however, I have been following Ken Rice‘s (and others’) heated exchange with the authors of a similarly fundamentally flawed paper very recently published in Scientific Reports [Oscillations of the baseline of solar magnetic field and solar irradiance on a millennial timescale, VV Zharkova, SJ Shepherd, SI Zharkov, and E Popova, Sci. Rep. 9 9197 (2019)]. Ken’s blog post on the matter is here, and the ever-expanding PubPeer thread (225 comments at the time of writing, and counting) is here. Michael Brown‘s take-no-prisoners take-down tweets on the matter are also worth reading…

The debate made it into the pages — sorry, pixels — of The Independent a few days ago: “Journal to investigate controversial study claiming global temperature rise is due to Earth moving closer to Sun”.

Although the controversy in this case is related to physics happening on astronomically larger length scales than those at the heart of our stripy squabble, there are quite a number of parallels (and not just in terms of traffic to the PubPeer site and the tenor of the authors’ responses). Some of these are laid out in the following Tweet thread by Ken…

The Zharkova et al. paper makes fundamental errors that should never have passed through peer review. But then we all know that peer review is far from perfect. The question is what should happen to a paper that is not fraudulent but nonetheless reaches publication containing misleadingly sloppy and/or incorrect science. Should it remain in the scientific record? Or should it be retracted?

It turns out that this is a much more contested issue than it might appear at first blush. For what it’s worth, I am firmly of the opinion that a paper containing fundamental errors in the science and/or based on mistakes due to clearly definable f**k-ups/corner-cutting in experimental procedure should be retracted. End of story. It is unfair on other researchers — and, I would argue, blatantly unethical in many cases — to leave a paper in the literature that is fundamentally flawed. (Note that even retracted papers continue to accrue citations.) It is also a massive waste of taxpayers’ money to fund new research based on flawed work.

Here’s one example of what I mean, taken from personal, and embarrassing, experience. I screwed up the calibration of a tuning fork sensor used in a set of atomic force microscopy experiments. We discovered this screw-up after publication of the paper that was based on measurements with that particular sensor. Should that paper have remained in the literature? Absolutely not.

Some, however, including my friend and colleague Mike Merrifield, who is also Head of School here and with whom I enjoy the ever-so-occasional spat, have a slightly different take on the question of retractions:

Mike and I discussed the Zharkova et al. controversy both briefly at tea break and via an e-mail exchange last week, and it seems that there are distinct cultural differences between different sub-fields of physics when it comes to correcting the scientific record. I put the Gedankenexperiment described below to Mike and asked him whether we should retract the Gedankenpaper. The particular scenario outlined in the following stems from an exchange I had with Alessandro Strumia a few months back, and subsequently with a number of my particle physicist colleagues (both at Nottingham and elsewhere), re. the so-called 750 GeV anomaly at CERN…

“Mike, let’s say that some of us from the Nanoscience Group go to the Diamond Light Source to do a series of experiments. We acquire a set of X-ray absorption spectra that are rather noisy because, as ever, the experiment didn’t bloody well work until the last day of beamtime and we had to pack our measurements into the final few hours. Our signal-to-noise ratio is poor but we decide to not only interpret a bump in a spectrum as a true peak, but to develop a sophisticated (and perhaps even compelling) theory to explain that “peak”. We publish the paper in a prestigious journal, because the theory supporting our “peak” suggests the existence of an exciting new type of quasiparticle. 

We return to the synchrotron six months or a year later, repeat the experiment over and over but find no hint of the “peak” on which we based our (now reasonably well-cited) analysis. We realise that we had over-interpreted a statistical noise blip.

Should we retract the paper?”

I am firmly of the opinion that the paper should be retracted. After all, we could not reproduce our results when we did the experiment correctly. We didn’t bend over backwards in the initial experiment to convince ourselves that our data were robust and reliable, and instead rushed to publish (because we were so eager to get a paper out of the beamtime). So now we should eat humble pie for jumping the gun — the paper should be retracted and the scientific record should be corrected accordingly.

Mike, and others, were of a different opinion, however. They argued that the flawed paper should remain in the scientific literature, sometimes for the reasons to which Mike alludes in his tweet above [1]. In my conversations with particle physicists re. the 750 GeV anomaly, which arose from a similarly over-enthusiastically interpreted bump in a spectrum that turned out to be noise, there was a similarly strong inertia against correcting the scientific record. There appeared to be a feeling that a paper should be retracted only if the data were fabricated or fraudulent.
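It’s easy to underestimate just how readily pure noise throws up apparently significant bumps once you’re free to look anywhere in a spectrum (the so-called look-elsewhere effect). Here’s a minimal toy calculation in Python; the numbers are entirely hypothetical and it bears no relation to any real analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_spectra = 200, 10_000

# Pure Gaussian noise: by construction there is no real peak anywhere.
spectra = rng.normal(0.0, 1.0, size=(n_spectra, n_bins))

# Naive analysis: find the largest bin in each spectrum and quote its
# "local" significance as if that bin had been predicted in advance.
max_sigma = spectra.max(axis=1)

# Fraction of noise-only spectra whose best bin looks like a >= 3 sigma "peak"
frac_3sigma = float((max_sigma >= 3.0).mean())
print(f"Noise-only spectra with an apparent >= 3 sigma bump: {frac_3sigma:.2f}")
```

With 200 independent bins to scan, roughly a quarter of these noise-only spectra contain at least one bin that would look like a “three sigma” peak if judged in isolation. That is exactly why a bump that cannot be reproduced on a second run should be treated as noise, however compelling the theory built on top of it.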

During the e-mail exchanges with my particle physics colleagues, I was struck on more than one occasion by a disturbing disconnect between theory and experiment. (This is hardly the most original take on the particle physics field, I know. I’ll take a moment to plug Sabine Hossenfelder’s Lost In Math once again.) There was an unsettling (for me) feeling among some that it didn’t matter if experimental noise had been misinterpreted, as long as the paper led to some new theoretical insights. This, I’ll stress, was not an opinion universally held — some of my colleagues said they didn’t go anywhere near the 750 GeV excess because of the lack of strong experimental evidence. Others, however, were more than willing to enthusiastically over-interpret the 750 GeV “bump” and, unsurprisingly, baulked at the suggestion that their papers should be retracted or censured in any way. If their sloppy, credulous approach to accepting noise in lieu of experimental data had advanced the field, then what’s wrong with that? After all, we need intrepid pioneers who will cross the Pillars of Hercules…

I’m a dyed-in-the-wool experimentalist; science should be driven by a strong and consistent feedback loop between experiment and theory. If a scientist mistakes experimental noise (or well-understood experimental artefacts) for valid data, or if they get fundamental physics wrong à la Zharkova et al., then there should be — must be — some censure for this. After all, we’d censure our undergrad students under similar circumstances, wouldn’t we? One student carries out an experiment for her final year project carefully and systematically, repeating measurements, pushing her noise floor down, putting in the hours to carefully refine and redefine the experimental protocols and procedures, refusing to make claims that are not entirely supported by the data. Another student instead gets over-excited when he sees a “signal” that chimes with his expectations, and, instead of doing his utmost to make sure he’s not fooling himself, leaps to a new and exciting interpretation of the noisy data. Which student should receive the higher grade? Which student is the better scientist?
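The difference between the two students isn’t just diligence; it’s statistics. Averaging N repeated sweeps suppresses random noise by a factor of √N while leaving the true signal intact. A minimal sketch (the signal shape and noise level are entirely made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 500)
# A genuine, narrow peak at x = 0.7 (amplitude and width invented).
true_signal = 0.5 * np.exp(-((x - 0.7) ** 2) / (2 * 0.005))

def averaged_measurement(n_repeats, noise_sigma=1.0):
    """Average n_repeats noisy sweeps of the same underlying spectrum."""
    sweeps = true_signal + rng.normal(0.0, noise_sigma, size=(n_repeats, x.size))
    return sweeps.mean(axis=0)

# Estimate the residual noise from a signal-free baseline region (x < 0.4).
baseline = x < 0.4
noise_1 = averaged_measurement(1)[baseline].std()
noise_100 = averaged_measurement(100)[baseline].std()
print(f"Noise level: {noise_1:.2f} (single sweep) vs {noise_100:.2f} (100 repeats)")
```

A hundred repeats buys roughly a tenfold reduction in the noise floor — exactly the kind of patient, systematic improvement the first student puts in, and exactly what’s missing when a single noisy sweep is rushed into print.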

As that grand empiricist Francis Bacon put it centuries ago,

The understanding must not therefore be supplied with wings, but rather hung with weights, to keep it from leaping and flying.

It’s up to not just individual scientists but the scientific community as a whole to hang our collective understanding with weights. Sloppy science is not just someone else’s problem. It’s everyone’s problem.

[1] Mike’s suggestion in his tweet that the journal would like to retract the paper to spare their blushes doesn’t chime with our experience of journals’ reactions during the stripy saga. Retraction is the last thing they want because it impacts their brand.

 

“We don’t need no education…”

(…or Why It Sometimes Might Be Better For Us Academics to Shut The F**k Up Occasionally.)

“Boost Public Engagement to Beat Pseudoscience, says Jim Al-Khalili” goes the headline on p.19 of this week’s Times Higher Education, my traditional Saturday teatime read. The brief article, a summary of points Jim made during his talk at the Young Universities Summit, continues…

Universities must provide more opportunities for academics to engage with the public or risk allowing pseudoscience to “fill the vacuum”, according to Jim Al-Khalili.

Prof. Al-Khalili is an exceptionally talented and wonderfully engaging science communicator. I enjoy, and very regularly recommend (to students and science enthusiasts of all stripes), his books and his TV programmes. But the idea that education and academic engagement are enough to counter pseudoscience is, at the very best, misleading and, at worst, a dangerous and counter-productive message to propagate.

The academic mantra of “education, education, education” as the unqualified panacea for every socioeconomic ill, although comforting, is almost always a much too simplistic — and, for some who don’t share our ideological leanings, irritatingly condescending — approach. I’ve written enthusiastically before about Tom Nichols’ powerful “The Death of Expertise”, and I’ve lost count of the number of times that I’ve referred to David McRaney’s The Backfire Effect in previous posts and articles I’ve written. It does no harm to quote McRaney one more time…

The last time you got into, or sat on the sidelines of, an argument online with someone who thought they knew all there was to know about health care reform, gun control, gay marriage, climate change, sex education, the drug war, Joss Whedon or whether or not 0.9999 repeated to infinity was equal to one – how did it go?

Did you teach the other party a valuable lesson? Did they thank you for edifying them on the intricacies of the issue after cursing their heretofore ignorance, doffing their virtual hat as they parted from the keyboard a better person?

Perhaps you’ve been more fortunate than McRaney (and me). But somehow I doubt it.

As just one example from McRaney’s list, there is strong and consistent evidence that, in the U.S., Democrats are much more inclined to accept the evidence for anthropogenic climate change than Republicans. That’s bad enough, but the problem of political skew in motivated rejection of science is much broader. A similar and pronounced right-left asymmetry exists across the board, as discussed in Lewandowsky and Oberauer’s influential paper, Motivated Rejection of Science. I’ll quote from their abstract, where they make the same argument as McRaney but in rather more academic, though no less compelling, terms [1]:

Rejection of scientific findings is mostly driven by motivated cognition: People tend to reject findings that threaten their core beliefs or worldview. At present, rejection of scientific findings by the U.S. public is more prevalent on the political right than the left. Yet the cognitive mechanisms driving rejection of science, such as the superficial processing of evidence toward the desired interpretation, are found regardless of political orientation. General education and scientific literacy do not mitigate rejection of science but, rather, increase the polarization of opinions along partisan lines.

Let me repeat that last line, in bold, for emphasis. It’s exceptionally important.


General education and scientific literacy do not mitigate rejection of science but, rather, increase the polarization of opinions along partisan lines.


If we blithely assume that the rejection of well-accepted scientific findings — and the potential subsequent descent into the cosy embrace of pseudoscience — is simply a matter of a lack of education and engagement, we fail to recognise the complex and multi-faceted sociology and psychology at play here. Yes, we academics need to get out there and talk about the research we and others do — and I’m rather keen on doing this myself (as discussed here, here, and here) — but let’s not assume that there’s always a willing audience waiting with bated breath for the experts to come and correct them on what they’re getting wrong.

I spend a lot of time on public engagement, both online and off — although not, admittedly, as much as Jim — and I’ve encountered the “motivated rejection” effect time and time again over the years. Here’s just one example of what I mean — a comment posted under the most recent Computerphile video I did with Sean Riley:

[Screenshot: a YouTube comment dismissing the video as having “zero credibility”]

The “zero credibility” comment stems not from the science presented in the video but from a reaction to my particular ideological and political leanings. For reasons I’ve discussed at length previously, I’ve been labelled as an “SJW” — a badge I’m happy to wear with quite some pride. (If you’ve not encountered the SJW pejorative previously, lucky you. Here’s a primer.) Because of my SJW leanings, the science I present, regardless of its accuracy (and level of supporting evidence/research), is immediately rejected by a subset of aggrieved individuals who do not share my political outlook. They outright dismiss the credibility or validity of the science not on the basis of the content or the strength of the data/evidence but solely on their ideological, emotional, and knee-jerk reaction to me…

[Screenshot: downvoting of the video]

(That screenshot above is taken from the comments section for this video.)

It’s worth noting that the small hardcore of viewers who regularly downvote and leave comments about the ostensible lack of credibility of the science I present are very often precisely those who would claim to be ever-so-rational and whose clarion call is “Facts over feels” [3]. Yet they are so opposed to my “SJW-ism” that they reject everything I say, on any topic, as untrustworthy; they cannot get beyond their gut-level emotional reaction to me.

My dedicated following of haters is a microcosm of the deep political polarisation we’re seeing online, with science caught in the slipstream and accepted/rejected on the basis of how it appeals to a given worldview, rather than on the strength of the scientific evidence itself. (And it’s always fun to be told exactly how science works by those who have never carried out an experiment, published a paper, been a member of a peer-review panel, reviewed a grant etc.) This raises the question: am I, as a left-leaning academic with clearly diabolical SJW tendencies, in any position at all to educate this particular audience on any topic? Of course not. No matter how much scientific data and evidence I provide, it will be dismissed out of hand because I am not of their tribe [3].

Jim Al-Khalili’s argument at the Young Universities Summit that what’s required is ever-more education and academic engagement is, in essence, what sociologists and Science and Technology Studies (STS) experts would describe as the deficit model. The deficit model has been widely discredited because it simply does not accurately describe how we modify our views (or not) in the light of more information. (At the risk of making …And Then There’s Physics  scream, I encourage you to read their informative and entertaining posts on the theme of the deficit model.)

Prof. Al-Khalili is further reported as stating that “…to some extent, you do have to stand up and you do have to bang on about evidence and rationalism, because if we don’t, we will make the same mistakes of the past where the vacuum will be filled with people talking pseudoscience or nonsense.” 

Banging on about evidence and rationalism will have close to zero effect on ideologically opposed audiences because they already see themselves as rational and driven by evidence [3]; they won’t admit to being biased and irrational because their bias is unconscious. And we are all guilty of succumbing to unconscious bias, to a greater or lesser extent. Force-feeding more data and evidence to those with whom we disagree is not only unlikely to change their minds, it’s much more likely to entrench them further in their views. (McRaney, passim.)

Let me make a radical suggestion. What if we academics decided to engage rather less sometimes? After all, who is best placed to sway the position — on climate change, vaccination, healthcare, social welfare, or just about any topic — of a deeply anti-establishment Trump supporter who has fallen hook, line, and sinker for the “universities are hotbeds of cultural Marxism” meme? A liberal academic who can trot out chapter and verse from the literature, and present watertight quantitative (and qualitative) arguments?

Of course not.

We need to connect, somehow, beyond the level of raw data and evidence. We need to appeal to that individual’s biases and psychology. And that means thinking more cannily, and more politically, about how we influence a community. Barking, or even gently reciting, facts and figures is not going to work. This is uncomfortable for any scientist, I know. But you don’t need to take my word for it — review the evidence for yourself.

The strength of the data used to support a scientific argument almost certainly won’t make a damn bit of difference when a worldview or ideology is challenged. And that’s not because our audience is uneducated. Nor are they unintelligent. They are behaving exactly as we do. They are protecting their worldview via the backfire effect.

 


[1] One might credibly argue that the rejection skew could lean the other way on certain topics such as the anti-vaccination debate, where anecdotal, and other, evidence might suggest that there is a stronger liberal/left bias. It turns out that even when it comes to anti-vaxxers, there is a considerable amount of data supporting the conclusion that it’s the right that has the higher degree of anti-science bias [2]. Here’s one key example: Trust In Scientists On Climate Change and Vaccines, LC Hamilton, J Hartter, and K Saito, SAGE Open, July–Sept 2015, 1–13. See also Beyond Misinformation, S. Lewandowsky, U. K. H. Ecker, and J. Cook, J. Appl. Res. Mem. Cogn. 6 353 (2017) for a brief review of some of the more important literature on this topic.

[2] …but then it’s all lefty, liberal academics writing these papers, right? They would say that.

[3] Here’s an amusing recent example of numerological nonsense being passed off as scientific reasoning. Note that Peter Coles’ correspondent claims that the science is on his side. How persuasive do you think he’ll find Peter’s watertight, evidence-based reasoning to be? How should he be further persuaded? Will more scientific evidence and data do the trick?

 

The Silent Poetry of Paint Drying

The painting has a life of its own. I just let it come through.

Jackson Pollock (1912 – 1956)

Over the last six weeks or so, I’ve had the immense pleasure of collaborating with local artist Lynda Jackson on a project for Creative Reactions — the arts-science offshoot of Pint of Science. I don’t quite know why I didn’t sign up for Creative Reactions long before now, but after reading Mark Fromhold‘s wonderful blog post about last year’s event, I jumped at the chance to get involved with CR2019. The collaboration with Lynda culminated in us being interviewed together for yesterday’s Creative Reactions closing night, which was a heck of a lot of fun. The event, compered by PhD student researcher Paul Brett (Microbiology, University of Nottingham), was expertly live-tweeted by another UoN researcher (this time from the School of Chemistry), Lizzie Killalea.

I’ve been fascinated by the physics (and metaphysics) of foam for a very long time, and was delighted that the collaboration with Lynda serendipitously ended up being focused on foam-like painting and patterns. When we met for the first time, Lynda told me that she had a burgeoning interest in what’s known as acrylic pouring, as described in this video…

…and here’s a great example of one of Lynda’s paintings, produced using a somewhat similar technique to that described in the video:

[Image: LyndaJackson_2.png]

I love that painting, not only for its aesthetic value, but for its direct, and scientifically beautiful, connection to the foam patterns — or, to give them their slightly more technical name, cellular networks — that are prevalent right across nature, from the sub-microscopic to the (quite literally) astronomically large (via, as I discuss in the Sixty Symbols video below, the Giant’s Causeway and some stonkingly stoned spiders)…

Our research group spent a great deal of time (nearly a decade — see this paper for a review of some of that work) analysing the cellular networks that form when a droplet of a suspension of nanoparticles in a solvent is placed on a surface and subsequently left to its own devices (or alternatively spin-dried). Here’s a particularly striking example of the foams-within-foams-within-foams motif that is formed via the drying of a nanoparticle-laden droplet of toluene on silicon…

[Image: Nanoparticles-2.png]

What you see in that atomic force microscope image above — which is approximately 0.02 of a millimetre, i.e. 20 micrometres, across — are not the individual 2 nanometre nanoparticles themselves, but the much larger (micron-scale) pattern that is formed during the drying of the droplet; the evaporation and dewetting of the solvent corrals the particles together into the patterns you see. It’s somewhat like what happens in the formation of a coffee stain: the particles are carried on the tide of the solvent (water for the coffee example; toluene in the case of the nanoparticles).

Lynda’s painting above is about 50 cm wide. That means that the scale of the foam created by acrylic pouring is ~ 25,000 times bigger than that of the nanoparticle pattern. Physicists get very excited when they see the same class of pattern cropping up in very different systems and/or on very different length scales — it often means that there’s an overarching mathematical framework; a very similar form of differential equation, for example, may well be underpinning the observations. And, indeed, there are similar physical processes at play in both the acrylic pouring and the nanoparticle systems: mixed phases separate under the influence of solvent flow. Here’s another striking example from Lynda’s work:

[Image: LyndaJackson_1.png]

Phase separation and phase transitions are not only an exceptionally rich source of fascinating physics (and, indeed, chemistry and biology) but they almost invariably give rise to sets of intriguing and intricate patterns that have captivated both scientists and artists for centuries. In the not-too-distant future I’ll blog about Alan Turing’s remarkable insights into the pattern-forming processes that produce the spots, spirals, and stripes of animal hides (like those shown in the tweet below); his reaction-diffusion model is an exceptionally elegant example of truly original scientific thinking. I always hesitate to use the word “genius” — because science is so very much more complicated and collaborative than the tired cliché of the lone scientist “kicking against the odds” — but in Turing’s case the accolade is more than well-deserved.
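For anyone who’d like to play with reaction-diffusion patterns themselves, here’s a toy simulation. It uses the Gray-Scott model (a close relative of, but not, Turing’s original 1952 equations), with standard demonstration parameter values; it’s a sketch for illustration, not a model of any real animal hide:

```python
import numpy as np

def gray_scott(steps=2000, n=64, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    """Crude forward-Euler Gray-Scott reaction-diffusion on a periodic n x n grid."""
    rng = np.random.default_rng(2)
    U = np.ones((n, n))   # "substrate" concentration
    V = np.zeros((n, n))  # "activator" concentration
    # Seed a small central patch to break the uniform (pattern-free) steady state.
    c = n // 2
    U[c - 4:c + 4, c - 4:c + 4] = 0.50
    V[c - 4:c + 4, c - 4:c + 4] = 0.25
    V += 0.01 * rng.random((n, n))

    def lap(A):  # five-point Laplacian with periodic boundaries
        return (np.roll(A, 1, 0) + np.roll(A, -1, 0)
                + np.roll(A, 1, 1) + np.roll(A, -1, 1) - 4 * A)

    for _ in range(steps):
        uvv = U * V * V
        # Update both fields simultaneously from the same time step.
        U, V = (U + Du * lap(U) - uvv + f * (1 - U),
                V + Dv * lap(V) + uvv - (f + k) * V)
    return U, V

U, V = gray_scott()
```

Plotting V (with matplotlib’s imshow, say) reveals the spot patterns; nudging f and k shifts the system between spots, stripes, and labyrinths — the kind of parameter sensitivity that, in Turing’s framework, generates the variety of animal markings from one simple mechanism.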

I nicked the title of this post — well, almost nicked — from a quote generally attributed to Plutarch: “Painting is silent poetry, and poetry is painting that speaks.” It’s very encouraging indeed that Creative Reactions followed hot on the heels of the Science Rhymes event organised by my UoN colleague Gerardo Adesso a couple of weeks ago (see Brigitte Nerlich‘s great review for the Making Science Public blog). Could we at last be breaking down the barriers between those two cultures that CP Snow famously identified so many years ago?

At the very least, I get the feeling that there’s a great deal more going on than just a superficial painting over the cracks…

Are the Nanobots Nigh?

The annual Pint Of Science festival, about which I’ve blogged previously and enthusiastically, is taking place this year from May 20 – 22 not only across the UK but in 24 countries worldwide. This, if I remember correctly, is the fourth consecutive year that I’ve done a Pint of Science talk, and I am looking forward immensely to speaking in the Scratching The Surface of Material Science session tonight in Parliament Bar in town, alongside my University of Nottingham colleagues Morgan Alexander and Nesma Aboulkhair. (Encouragingly, all of the Pint of Science events in Nottingham have sold out!)

The title of the talk I’ll give is “Artificial Intelligence at the Nanoscale (or Is The Nanopocalypse Nigh?)”, and I’ll focus on recent developments in machine-learning-enabled scanning probe microscopy, of the type described in this Computerphile video put together by Sean Riley last year…

The PoS talk will, however, also roundly criticise the breathless enthusiasm of certain futurist pundits for a nano-enabled future. (OK, I’ll name names. I mean Ray Kurzweil. We’re going to become immortal by 2045, according to Ray. Because nano.) I had a long, but ultimately exceptionally productive, exchange all the way back in 2004 about the considerable stumbling blocks that stand in the way of the molecular manufacturing nanotech that is a key enabling component of Kurzweil’s “vision”. At the time I didn’t have a blog, but Richard Jones very kindly posted the exchange at his Soft Machines blog, and I was rather pleased to find that the debate is still available there.

Soft Machines is an exceptionally good read on everything from nanoscience to R&D policy to general economics and politics. Richard has also written an incisive and compelling critique of Kurzweil and others’ stance on transhumanism. You should give both the blog and the book, “Against Transhumanism: The Delusion of Technological Transcendence“, a read at the earliest opportunity. You won’t regret it.

 

 

Concrete Reasons for the Abstract

I’ve just finished my last set of undergraduate lab report marking for this year and breathed a huge sigh of relief. Overall, the quality of the students’ reports has improved considerably over the year, with some producing work of a very high standard. (I get a little frustrated at times with the tiresome Daily Mail-esque whining about “students these days” that infects certain academics of a certain vintage.) Nonetheless, there remain some perennial issues with report writing…

My colleague James O’Shea sent the following missive/cri de coeur to all of our 1st year undergrad lab class yesterday. I’m posting it here — with James’ permission, of course — because I thought it was a wonderful rationale for the importance of the abstract. (And I feel James’ pain.) Over to you, James.


 

You have written your last formal report for the first year but you will write many more in the coming years and possibly throughout your career. It seems that the purpose of abstracts and figure captions has not quite sunk in yet. This will come as you read more scientific papers (please read more scientific papers). What you want is to give a complete picture of why the experiment was needed, what the hypothesis was, how it was explored, what the result was, and what the significance of that result is. You should read your abstract back as if it is the only thing people will read. In most cases, it really is the only thing they will read. If the abstract does not provide all these things, the likely outcome is that they won’t bother reading the rest – your boss included – and all the work you put in doing the research will be for nothing.

If a researcher (or your boss) does decide – based on the abstract – that they are interested in your report or paper, they might, if they are short of time, first just look at the figures. The figure caption is therefore vital. Again, look at the figure and read the caption back to yourself as if this (in conjunction with the abstract) is the only thing they will read. It has to be understandable in isolation from the main body of the text. The figure represents the work that was done. The caption needs to explain that work.

If your boss did read the abstract and decided to look at the figures, they will then most likely skip to the conclusions. From this they will want to get an overview of what new knowledge now exists and what impact it will have on their company or research program. They might then recommend that others in the organisation read your report in detail to find out how robust the research is, or they might give you the go ahead to do more research, or let you lead your own team. But if your abstract did not tell the interesting story in the first place, or your figure captions did not convey what work was done, your report might not even get read in the real world.

Best regards

James O’Shea

 

 

If it seems obvious, it probably isn’t

…And Then There’s Physics’ post on science communication, reblogged below, very much struck a chord with me. This point, in particular, is simply not as widely appreciated as it should be:

“Maybe what we should do more of is make it clear that the process through which we develop scientific knowledge is far more complicated than it may, at first, seem.”

There can too often be a deep-seated faith in the absolute objectivity and certainty of “The Scientific Method”, which possibly stems (at least in part) from our efforts to not only simplify but to “sell” our science to a wide audience. The viewer response to a Sixty Symbols video on the messiness of the scientific process, “Falsifiability and Messy Science”, brought this home to me: The Truth, The Whole Truth, and Nothing But…

(…but I’ve worried for a long time that I’ve been contributing to exactly the problem ATTP describes: Guilty Confessions of a YouTube Physicist)

By the way, if you’re not subscribed to ATTP’s blog, I heartily recommend that you sign up right now.

...and Then There's Physics

There’s an interesting paper that someone (I forget who) highlighted on Twitter. It’s about when science becomes too easy. The basic idea is that there are pitfalls to popularising scientific information.

Compared to experts,

laypeople have not undergone any specialized training in a particular domain. As a result, they do not possess the deep-level background knowledge and relevant experience that a competent evaluation of science-related knowledge claims would require.

However, in the process of communicating, and popularising, science, science communicators tend to provide simplified explanations of scientific topics that can

lead[s] readers to underestimate their dependence on experts and conclude that they are capable of evaluating the veracity, relevance, and sufficiency of the contents.

I think that this is an interesting issue and it’s partly what motivated my post about public involvement in science.

However, I am slightly uneasy about this general framing. I think everyone is a…

View original post 449 more words