Is Science Self-Correcting? Some Real-World Examples From Psychological Research.

…or The Prognosis Is Not Good, Psychology. It’s A Bad Case Of Physics Envy*

Each year there are two seminars for the Politics, Perception, and Philosophy of Physics module that are led by invited speakers. First up this year was the enlightening, engaging, and entertaining Nick Brown, who, and I quote from no less a source than The Guardian, has an “astonishing story…[he] began a part-time psychology course in his 50s and ended up taking on America’s academic establishment.”

I recommend you read that Guardian profile in full to really get the measure of Mr. (soon to be Dr.) Brown but, in brief, he has played a central role in exposing some of the most egregious examples of breathtakingly poor, or downright fraudulent, research in psychology, a field that needs to get its house in order very soon. (A certain high profile professor of psychology who is always very keen to point the finger at what he perceives to be major failings in other disciplines should bear this in mind and heed his own advice. (Rule #6, as I recall…))

Nick discussed three key examples of where psychology research has gone badly off the rails:

    • Brian Wansink, erstwhile director of Cornell’s Food and Brand Lab, whose research findings (cited over 20,000 times) have been found to be rather tough to digest given that they’re riddled with data manipulation and resulted from other far-from-robust research practices.
    • The “audacious academic fraud” of Diederik Stapel. (Nick is something of a polymath, being fluent in Dutch among other skills, and translated Stapel’s autobiography/confession, making it freely available online. I strongly recommend adding Stapel’s book to your “To Read” list; I found it a compelling story that provides a unique insight into the mindset and motivations of someone who fakes their research. Seeing the ostracisation and shaming through Stapel’s eyes was a profoundly affecting experience and I found myself sympathising with the man, especially with regard to the effects of his fraud on his family.)

It was a great pleasure to host Nick’s visit to Nottingham (and to finally meet him after being in e-mail contact on and off for about eighteen months). Here’s his presentation…

*But don’t worry, you’re not alone.

** Hmmm. More psychologists with a chaotic concept of chaos. I can see a pattern emerging here. Perhaps it’s fractal in nature…


Update 18/11/2018. 15:30. I am rapidly coming to the opinion that in the dismal science stakes, psychology trumps economics by quite some margin. I’ve just read Catherine Bennett’s article in The Observer today on a research paper that created a lot of furore last week: “Testing the Empathizing-Systemizing theory of sex differences and the Extreme Male Brain theory of autism in half a million people”, a study which, according to a headline in The Times (amongst much other similarly over-excited and credulous coverage), has shown that male and female brains are very different indeed.

One would get the impression from the headlines that the researchers must have carried out an incredibly systematic and careful fMRI study, which, given the sample size, in turn must have taken decades and involved highly sophisticated data analysis techniques.

Nope.

They did their research by…asking people to fill in questionnaires.

Bennett highlights Dean Burnett’s incisive demolition of the paper and the surrounding media coverage. I thoroughly recommend Burnett’s post – he highlights a litany of issues with the study (and others like it). For one thing, the idea that self-reporting via questionnaire can provide a robust, objective analysis of just about any human characteristic or trait is ludicrously simple-minded. Burnett doesn’t cover all of the issues because, as he says at the end of his post: “There are other concerns to raise of course, but I’ll keep them in reserve for when the next study that kicks this whole issue off again is published. Shouldn’t be more than a couple of months.”

Indeed.

Author: Philip Moriarty

Physicist. Rush fan. Father of three. (Not Rush fans. Yet.) Rants not restricted to the key of E minor...

2 thoughts on “Is Science Self-Correcting? Some Real-World Examples From Psychological Research.”

  1. I almost died laughing when I saw the butterfly. Then it sank in what people had to go through for their “comment” to go through. Dr Tim Marsh, a friend of mine who has a PhD in psychology, told me that these problems didn’t really exist in the more renowned journals. I was actually rather curt with him when knowledge of the replication crisis hit and had to apologise afterwards. It’s just that replication needs to be taken more seriously, because science can be self-correcting if we self-correct for our own human nature. The trick is getting as close to the truth as possible so we can launch from that platform towards a more precise position. But if the foundation is flawed, then it all falls down and we might as well be constructing flatulence chambers in fear of starting the next hurricane.


  2. Here’s a great video with David Shanks addressing the replication crisis and publication bias in Psychology I watched a while back: https://www.youtube.com/watch?v=Zz627CecmgU

    At the 49:48 mark, he mentions one of the criteria Bakker & Wicherts used to flush out bad work: “does the reported p value match up with the t value and the degrees of freedom?” That got a laugh. As a layman with no formal understanding of statistics, I was told that’s a bad sign.
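    [That consistency check is straightforward to automate. Here’s a minimal sketch, assuming SciPy is available; the function names are my own illustration, not taken from the talk or from Bakker & Wicherts’ actual tooling. It recomputes the two-tailed p value implied by a reported t statistic and its degrees of freedom, then flags a mismatch with the reported p. – PM]

```python
from scipy.stats import t as t_dist

def recompute_p(t_value: float, df: int) -> float:
    """Two-tailed p value implied by a t statistic and its degrees of freedom."""
    # Survival function gives P(T > |t|); double it for a two-tailed test.
    return 2 * t_dist.sf(abs(t_value), df)

def p_consistent(reported_p: float, t_value: float, df: int,
                 tol: float = 0.005) -> bool:
    """True if the reported p is within a rounding tolerance of the implied p."""
    return abs(recompute_p(t_value, df) - reported_p) <= tol
```

    [For example, a result reported as t(60) = 2.0, p = .05 passes the check, whereas p = .03 alongside the same t and df would be flagged as inconsistent. – PM]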


Comments are closed.