How Not To Do Spectral Analysis 101

I will leave this here without further comment…

JesusHChrist

*bangs head gently on desk and sobs quietly to himself*

Source (via Sam Jarvis. Thanks, Sam.):

The original ‘peer-reviewed’ paper is this: Găluşcă et al., IOP Conf. Ser. Mater. Sci. Eng. 374 012020 (2018)


Bullshit and Beyond: From Chopra to Peterson

Harry G Frankfurt’s On Bullshit is a modern classic. He highlights the style-over-substance tenor of the most fragrant and flagrant bullshit, arguing that

It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction. A person who lies is thereby responding to the truth, and he is to that extent respectful of it. When an honest man speaks, he says only what he believes to be true; and for the liar, it is correspondingly indispensable that he considers his statements to be false. For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.

In other words, the bullshitter doesn’t care about the validity or rigour of their arguments. They are much more concerned with being persuasive. One aspect of BS that doesn’t quite get the attention it deserves in Frankfurt’s essay, however, is that special blend of obscurantism and vacuity that is the hallmark of three world-leading bullshitters of our time:  Deepak Chopra, Karen Barad (see my colleague Brigitte Nerlich’s important discussion of Barad’s wilfully impenetrable language here), and Jordan Peterson. In a talk for the University of Nottingham Agnostic, Secularist, and Humanist Society last night (see here for the blurb/advert), I focussed on the intriguing parallels between their writing and oratory. Here’s the video of the talk.

Thanks to UNASH for the invitation. I’ve not included the lengthy Q&A that followed (because I stupidly didn’t ask for permission to film audience members’ questions). I’m hoping that some discussion and debate might ensue in the comments section below. If you do dive in, try not to bullshit too much…


The war on (scientific) terror…

I’ve been otherwise occupied of late so the blog has had to take a back seat. I’m therefore coming to this particular story rather late in the day. Nonetheless, it’s on an exceptionally important theme that is at the core of how scientific publishing, scientific critique, and, therefore, science itself should evolve. That type of question doesn’t have a sell-by date so I hope my tardiness can be excused.

The story involves a colleague and friend who has courageously put his head above the parapet (on a number of occasions over the years) to highlight just where peer review goes wrong. And time and again he has been viciously castigated by (some) senior scientists for doing nothing more than critiquing published data in as open and transparent a fashion as possible. In other words, he’s been pilloried (by pillars of the scientific community) for daring to suggest that we do science the way it should be done.

This time, he’s been called a…wait for it…scientific terrorist. And by none other than the most cited chemist in the world over the last decade (well, from 2000 – 2010): Chad A Mirkin. According to his Wiki page, Mirkin “was the first chemist to be elected into all three branches of the National Academies. He has published over 700 manuscripts (Google Scholar H-index = 163) and has over 1100 patents and patent applications (over 300 issued, over 80% licensed as of April 1, 2018). These discoveries and innovations have led to over 2000 commercial products that are being used worldwide.”

With that pedigree, this guy must really have done something truly appalling for Mirkin to call him a scientific terrorist (oh, and a zealot, and a narcissist), right? Well, let’s see…

The colleague in question is Raphael Levy. Raphael (pictured to the right) is a Senior Lecturer — or Associate Professor to use the term increasingly preferred by UK universities and traditionally used by our academic cousins across the pond — in Biochemistry at the University of Liverpool. He has a deep and laudable commitment to open science and the evolution of the peer review system towards a more transparent and accountable ethos.

Along with Julian Stirling, who was a PhD student here at Nottingham at the time, and a number of other colleagues, I collaborated closely with Raphael and his team (from about 2012 – 2014) in critiquing and contesting a body of work that claimed that stripes (with ostensibly fascinating physicochemical and biological properties) formed on the surface of suitably functionalised nanoparticles. I’m not going to revisit the “stripy” nanoparticle debate here. If you’re interested, see Refs [1-5] below. Raphael’s blog, which I thoroughly recommend, also has detailed bibliographies for the stripy nanoparticle controversy.

More recently, Raphael and his co-workers at Liverpool have found significant and worrying deficiencies in claims regarding the efficacy of what are known as SmartFlares. (Let me translate that academically-nuanced wording: Apparently, they don’t work.) Chad Mirkin played a major role in the development of SmartFlares, which are claimed to detect RNA in living cells and were sold by SigmaMilliPore from 2013 until recently, when they were taken off the market.

The SmartFlare concept is relatively straightforward to understand (even for this particular squalid state physicist, who tends to get overwhelmed by molecules much larger than CO): each ‘flare’ probe comprises a gold nanoparticle attached to an oligonucleotide (that encodes a target sequence) and a fluorophore, which does not emit fluorescence as long as it’s near to the gold particle. When the probe meets the target RNA, however, this displaces the fluorophore (thus reducing the coupling to, and quenching by, the gold nanoparticle) and causes it to glow (or ‘flare’). Or so it’s claimed.
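To make the distance dependence of the quenching concrete, here’s a toy numerical sketch (my own illustration, not taken from the SmartFlare papers): I assume a simple FRET-style 1/(1 + (r/R0)^6) quenching law with a hypothetical Förster radius of 5 nm. Quenching by a gold nanoparticle is usually better described by NSET, which follows a different power law, so treat the numbers as purely indicative of the on/off contrast.

```python
# Toy model of 'flare'-style fluorescence quenching (illustrative only).
# Assumes a FRET-like 1/(1 + (r/R0)^6) distance dependence with a
# hypothetical R0 = 5 nm; real gold-nanoparticle quenching (NSET) follows
# a different power law, so this is a sketch of the principle, not the physics.

def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Fraction of the emission quenched at fluorophore-particle separation r_nm."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

def relative_fluorescence(r_nm: float, r0_nm: float = 5.0) -> float:
    """Observed fluorescence relative to the unquenched fluorophore."""
    return 1.0 - fret_efficiency(r_nm, r0_nm)

# Fluorophore held close to the gold particle: almost fully quenched ("dark").
print(f"bound:     {relative_fluorescence(2.0):.3f}")
# Fluorophore displaced by the target RNA: quenching negligible, the probe 'flares'.
print(f"displaced: {relative_fluorescence(15.0):.3f}")
```

The point of the sketch is simply that a sixth-power (or fourth-power, for NSET) law makes the probe an effectively binary reporter: a few nanometres of displacement takes it from almost fully dark to almost fully bright.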

As described in a recent article in The Scientist, however, there is compelling evidence from a growing number of sources, including, in particular, Raphael’s own group, that SmartFlares simply aren’t up to the job. Raphael’s argument, for which he has strong supporting data (from electron-, fluorescence- and photothermal microscopy), is that the probes are trapped in endocytic compartments and get nowhere near the RNA they’re meant to target.

Mirkin, as one might expect, vigorously claims otherwise. That’s, of course, entirely his prerogative. What’s most definitely not his prerogative, however, is to launch hyperbolic personal attacks at a critic of his work. As Raphael describes over at his blog, he asked the following question at the end of a talk Mirkin gave at the American Chemical Society meeting in Boston a month ago:

In science, we need to share the bad news as well as the good news. In your introduction you mentioned four clinical trials. One of them has reported. It showed no efficacy and Purdue Pharma which was supposed to develop the drug decided not to pursue further. You also said that 1600 forms of NanoFlares were commercially available. This is not true anymore as the distributor has pulled the product because it does not work. Finally, I have a question: what is the percentage of nanoparticles that escape the endosome?

According to Raphael’s description (which is supported by others at the conference — see below), Mirkin’s response was ad hominem in the extreme:

[Mirkin said that]…no one is reading my blog (who cares),  no one agrees with me; he called me a “scientific zealot” and a “scientific terrorist”.

Raphael and I have been in a similar situation before with regard to scientific critique not exactly being handled with good grace. We and our colleagues have faced accusations of being cyber-bullies — and, worse, fake blogs and identity theft were used to attempt to discredit our (purely scientific) criticism.

Science is in a very bad place indeed if detailed criticism of a scientist’s work is dismissed aggressively as scientific terrorism/zealotry. We are, of course, all emotional beings to a greater or lesser extent. Therefore, and despite protestations to the contrary from those who have an exceptionally naive view of The Scientific Method, science is not some wholly objective monolith that arrives at The Truth by somehow bypassing all the messy business of being human. As Neuroskeptic described so well in a blog post about the stripy nanoparticle furore, often professional criticism is taken very personally by scientists (whose self-image and self-confidence can be intimately connected to the success of the science we do). Criticism of our work can therefore often feel like criticism of us.

But as scientists we have to recognise, and then always strive to rise above, those very human responses; to take on board, rather than aggressively dismiss out of hand, valid criticisms of our work. This is not at all easy, as PhD Comics among others has pointed out:

One would hope, however, that a scientist of Mirkin’s calibre would set an example, especially at a conference with the high profile of the annual ACS meeting. As a scientist who witnessed the exchange between Raphael and Mirkin put it,

I witnessed an interaction between two scientists. One asks his questions gracefully and one responding in a manner unbecoming of a Linus Pauling Medalist. It took courage to stand in front of a packed room of scientists and peers to ask those questions that deserved an answer in a non-aggressive manner. It took even more courage to not become reactive when the respondent is aggressive and belittling. I certainly commended Raphael Levy for how he handled the aggressive response from Chad Mirkin.

Or, as James Wilking put it somewhat more pithily:

An apology from Mirkin doesn’t seem to be forthcoming. This is a shame, to put it mildly. What I found rather more disturbing than Mirkin’s overwrought accusation of scientific terrorism, however, was the reaction of an anonymous scientist in that article in The Scientist:

“I think what everyone has to understand is that unhealthy discussion leads to unsuccessful funding applications, with referees pointing out that there is a controversy in the matter. Referee statements like these . . . in a highly competitive environment for funding, simply drain the funding away of this topic,” he writes in an email to The Scientist. He believes a recent grant application of his related to the topic was rejected for this reason, he adds.

This is a shockingly disturbing mindset. Here we have a scientist bemoaning that (s)he did not get public funding because of what is described as “unhealthy” public discussion and controversy about an area of science. Better that we all keep schtum about any possible problems and milk the public purse for as much grant funding as possible, right?

That attitude stinks to high heaven. If it takes some scientific terrorism to shoot it down in flames then sign me up.


[1] Stripy Nanoparticle Controversy Blows Up

[2] Peer Review In Public: Rise Of The Cyber-Bullies? 

[3] Looking At Nothing, Seeing A Lot

[4] Critical Assessment of the Evidence for Striped Nanoparticles, Julian Stirling et al, PLOS ONE 9 e108482 (2014)

[5] How can we trust scientific publishers with our work if they won’t play fair?


The conference dinner chatter way of (not) correcting the scientific record

I’m reblogging this important post by Raphael Levy on the value, or lack thereof, of discussion and ‘debate’ at scientific conferences. Raphael highlights two key issues: the “behind closed doors”/”keeping it in the family” nature of scientific criticism, and, as he puts it, the play-acting that is part-and-parcel of many conference sessions. (Raphael and I, along with our colleagues at Liverpool, Nottingham, and elsewhere, spent quite some time a number of years back finding out just how much time and effort it takes to publish critique and criticism of previously published work).

Rapha-z-lab

One of the common responses of senior colleagues to my attempts to correct the scientific record goes somewhat like this:

You are giving X [leading figure in the field] too much credit anyway. We all know that there are problems with their papers. We discussed it at the latest conference with Y and Z. We just ignore this stuff and move along. Though of course X is my friend etc.

This approach is unfair, elitist and contributes to the degradation of the scientific record.

First, it is very fundamentally unfair to the many scientists who are not present at these dinner table chatters and who may believe that the accumulation of grants, prizes, and high profile papers somewhat correlates with good science. That group of scientists will include pretty much all young scientists as well as most scientists from less advantaged countries who cannot get so easily to these conferences…


ECR blues: Am I part of the problem?

A very quick lunchtime post to highlight that this week’s Nature is a special issue on the theme of young scientists’ careers, and, as it says loud and clear on the front cover, their struggle to survive in academia. There are a number of important and timely articles on just how tough it is for early career researchers (the ECRs of the title of this post), including a worrying piece by Kendall Powell: “Young, Talented and Fed-Up”.

One of the things that struck me in the various statistics and stories presented by Nature is the following graph:

[Graph: distribution of NIH grant holders by age group, 1980 vs. today]

Note how older scientists (and I’m soundly in the 41-55 bracket) now hold the large majority of NIH grants, and how different it was back in 1980. I’d like to know the equivalent distribution for grants in physics. If anyone can point me (in the comments section) towards appropriate statistics, I’d appreciate it.

In any case, I recommend taking a read of those articles in this week’s Nature, regardless of where you happen to be on the academic career ladder. As Powell’s article points out, Nature got a short, sharp response to its tweeted question about the challenges facing ECRs…

Politics. Perception. Philosophy. And Physics.

Today is the start of the new academic year at the University of Nottingham (UoN) and, as ever, it crept up on me and then leapt out with a fulsome “Gotcha”. Summer flies by so very quickly. I’ll be meeting my new 1st year tutees this afternoon to sort out when we’re going to have tutorials and, of course, to get to know them. One of the great things about the academic life is watching tutees progress over the course of their degree from that first “getting to know each other” meeting to when they graduate.

The UoN has introduced a considerable number of changes to the “student experience” of late via its Project Transform process. I’ve vented my spleen about this previously but it’s a subject to which I’ll be returning in the coming weeks because Transform says an awful lot about the state of modern universities.

For now, I’m preparing for a module entitled “The Politics, Perception and Philosophy of Physics” (F34PPP) that I run in the autumn semester. This is a somewhat untraditional physics module because, for one thing, it’s almost entirely devoid of mathematics. I thoroughly enjoy  F34PPP each year (despite this amathematical heresy) because of the engagement and enthusiasm of the students. The module is very much based on their contributions — I am more of a mediator than a lecturer.

STEM students are sometimes criticised (usually by Simon Jenkins) for having poorly developed communication skills. This is an especially irritating stereotype in the context of the PPP module, where I have been deeply impressed by the quality of the writing the students submit. As I discuss in the video below (an  overview of the module), I’m not alone in recognising this: articles submitted as F34PPP coursework have been published in Physics World, the flagship magazine of the Institute of Physics.


In the video I note that my intention is to upload a weekly video for each session of the module. I’m going to do my utmost to keep this promise and, moreover, to accompany each of those videos with a short(ish) blog post. (But, to cover my back, I’ll just note in advance that the best laid schemes gang aft agley…)

Addicted to the brand: The hypocrisy of a publishing academic

Back in December I gave a talk at the Power, Acceleration and Metrics in Academic Life conference in Prague, which was organised by Filip Vostal and Mark Carrigan. The LSE Impact blog is publishing a series of posts from those of us who spoke at the conference. They uploaded my post this morning. Here it is…



I’m going to put this as bluntly as I can; it’s been niggling and nagging at me for quite a while and it’s about time I got it off my chest. When it comes to publishing research, I have to come clean: I’m a hypocrite. I spend quite some time railing about the deficiencies in the traditional publishing system, and all the while I’m bolstering that self-same system by my selection of the “appropriate” journals to target.

Despite bemoaning the statistical illiteracy of academia’s reliance on nonsensical metrics like impact factors, and despite regularly venting my spleen during talks at conferences about the too-slow evolution of academic publishing towards a more open and honest system, I nonetheless continue to contribute to the problem. (And I take little comfort in knowing that I’m not alone in this.)

One of those spleen-venting conferences was a fascinating and important event held in Prague back in December, organized by Filip Vostal and Mark Carrigan: “Power, Acceleration, and Metrics in Academic Life”. My presentation, The Power, Perils and Pitfalls of Peer Review in Public – please excuse the Partridgian overkill on the alliteration – largely focused on the question of post-publication peer review (PPPR) via online channels such as PubPeer. I’ve written at length, however, on PPPR previously (here, here, and here) so I’m not going to rehearse and rehash those arguments. I instead want to explain just why I levelled the accusation of hypocrisy and why I am far from confident that we’ll see a meaningful revolution in academic publishing any time soon.

Let’s start with a few ‘axioms’/principles that, while perhaps not being entirely self-evident in each case, could at least be said to represent some sort of consensus among academics:

  • A journal’s impact factor (JIF) is clearly not a good indicator of the quality of a paper published in that journal. The JIF has been skewered many, many times with some of the more memorable and important critiques coming from Stephen Curry, Dorothy Bishop, David Colquhoun, Jenny Rohn, and, most recently, this illuminating post from Stuart Cantrill. Yet its very strong influence tenaciously persists and pervades academia. I regularly receive CVs from potential postdocs where they ‘helpfully’ highlight the JIF for each of the papers in their list of publications. Indeed, some go so far as to rank their publications on the basis of the JIF.
  • Given that the majority of research is publicly funded, it is important to ensure that open access publication becomes the norm. This one is rather more contentious and there are clear differences in the appreciation of open access (OA) publishing between disciplines, with the arts and humanities arguably being rather less welcoming of OA than the sciences. Nonetheless, the key importance of OA has laudably been recognized by Research Councils UK (RCUK) and all researchers funded by any of the seven UK research councils are mandated to make their papers available via either a green or gold OA route (with the gold OA route, seen by many as a sop to the publishing industry, often being prohibitively expensive).

With these “axioms” in place, it now seems rather straightforward to make a decision as to the journal(s) our research group should choose as the appropriate forum for our work. We should put aside any consideration of impact factor and aim to select those journals which eschew the traditional for-(large)-profit publishing model and provide cost-effective open access publication, right?

Indeed, we’re particularly fortunate because there’s an exemplar of open access publishing in our research area: The Beilstein Journal of Nanotechnology. Not only are papers in the Beilstein J. Nanotech. free to the reader (and easy to locate and download online), but publishing there is free: no exorbitant gold OA costs nor, indeed, any type of charge to the author(s) for publication. (The Beilstein Foundation has very deep pockets and laudably shoulders all of the costs).

But take a look at our list of publications — although we indeed publish in the Beilstein J. Nanotech., the number of our papers appearing there can be counted on the fingers of (less than) one hand. So, while I espouse the three principles listed above, I hypocritically don’t practice what I preach. What’s my excuse?

In academia, journal brand is everything. I have sat in many committees, read many CVs, and participated in many discussions where candidates for a postdoctoral position, a fellowship, or other roles at various rungs of the academic career ladder have been compared. And very often, the committee members will say something along the lines of “Well, Candidate X has got much better publications than Candidate Y”…without ever having read the papers of either candidate. The judgment of quality is lazily “outsourced” to the brand-name of the journal. If it’s in a Nature journal, it’s obviously of higher quality than something published in one of those, ahem, “lesser” journals.

If, as principal investigator, I were to advise the PhD students and postdocs in the group here at Nottingham that, in line with the three principles above, they should publish all of their work in the Beilstein J. Nanotech., it would be career suicide for them. To hammer this point home, here’s the advice from one referee of a paper we recently submitted:

“I recommend re-submission of the manuscript to the Beilstein Journal of Nanotechnology, where works of similar quality can be found. The work is definitively well below the standards of [Journal Name].”

There is very clearly a well-established hierarchy here. Journal ‘branding’, and, worse, journal impact factor, remain exceptionally important in (falsely) establishing the perceived quality of a piece of research, despite many efforts to counter this perception, including, most notably, DORA. My hypocritical approach to publishing research stems directly from this perception. I know that if I want the researchers in my group to stand a chance of competing with their peers, we have to target “those” journals. The same is true for all the other PIs out there. While we all complain bitterly about the impact factor monkey on our back, we’re locked into the addiction to journal brand.

And it’s very difficult to see how to break the cycle…