Guilty Confessions of a REFeree

#4 of an occasional series

At the start of this week I spent a day in a room in a university somewhat north of Nottingham with a stack of research papers and a pile of grading sheets. Along with a fellow physicist from a different university (located even further north of Nottingham), I had been asked to act as an external reviewer for the department’s mock REF assessment.

I found it a deeply uncomfortable experience. My discomfort had nothing to do, of course, with our wonderfully genial hosts — thank you all for the hospitality, the conversation, the professionalism, and, of course, lunch. But I’ve vented my spleen previously on the lack of consistency in mock REF ratings (it’s been the most-viewed post at Symptoms… since I resurrected the blog in June last year) and I agreed to participate in the mock assessment so I could see for myself how the process works in practice.

Overall, I’d say that the degree of agreement between my co-marker’s “star ratings” and mine, before moderation, was at the 70% level, give or take. This is in line with the consistency we observed at Nottingham for independent reviewers in Physics and is therefore, at least, somewhat encouraging. (Other units of assessment in Nottingham’s mock REF review managed only 50% agreement.) But what set my teeth on edge for a not-insignificant number of papers — including quite a few of those on which my gradings agreed with those of my co-marker — was that I simply did not feel at all qualified to comment.
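
(As an aside: with only four possible star ratings, a certain amount of agreement is expected by chance alone, so a raw percentage flatters the process. Here's a minimal sketch, using entirely invented ratings for ten hypothetical papers, of how raw agreement compares with a chance-corrected measure such as Cohen's kappa.)

```python
# Minimal sketch (hypothetical data): raw agreement vs Cohen's kappa for two
# markers assigning REF-style star ratings (1*-4*) to the same ten papers.
from collections import Counter

marker_a = [4, 3, 3, 2, 4, 3, 2, 3, 4, 3]  # invented ratings
marker_b = [4, 3, 2, 2, 4, 3, 3, 3, 4, 4]

n = len(marker_a)
observed = sum(a == b for a, b in zip(marker_a, marker_b)) / n

# Chance agreement: probability both markers pick the same rating at random,
# given each marker's own distribution of ratings.
counts_a, counts_b = Counter(marker_a), Counter(marker_b)
expected = sum((counts_a[r] / n) * (counts_b[r] / n)
               for r in set(marker_a) | set(marker_b))

kappa = (observed - expected) / (1 - expected)
print(f"Raw agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")
# With these made-up numbers: 70% raw agreement, kappa of roughly 0.53.
```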

Even though I’m a condensed matter physicist and we were asked to assess condensed matter physics papers, I simply don’t have the necessary level of hubris to pretend that I can expertly assess any paper in any CMP sub-field. The question that went through my head repeatedly was “If I got this paper from Physical Review Letters (or Phys. Rev. B, or Nature, or Nature Comms, or Advanced Materials, or J. Phys. Chem. C…etc…) would I accept the reviewing invitation or would I decline, telling them it was out of my field of expertise?”  And for the majority of papers the answer to that question was a resounding “I’d decline the invitation.”

So if a paper I was asked to review wasn’t in my (sub-)field of expertise, how did I gauge its reception in the relevant scientific community?

I can’t quite believe I’m admitting this, given my severe misgivings about citation metrics, but, yes, I held my nose and turned to Web of Science. And citation metrics also played a role in the decisions my co-marker made, and in our moderation. This, despite the fact that we had no way of normalising those metrics to the prevailing citation culture of each sub-field, nor of ranking the quality as distinct from the impact of each paper. (One of my absolutely favourite papers of all time – a truly elegant and pioneering piece of work – has picked up a surprisingly low number of citations, as compared to much more pedestrian work in the field.)
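
(For what it's worth, the normalisation we lacked is simple to state, even if the baselines are hard to source: divide a paper's citation count by the average for comparable papers in the same sub-field and year. A rough sketch of that calculation, with invented numbers throughout, is below; the genuinely hard part, building credible sub-field baselines, is exactly what we didn't have.)

```python
# Rough sketch of a field-normalised citation score (all numbers invented).
# The score divides a paper's citations by the mean citation count of
# comparable papers (same sub-field and publication year).

baselines = {
    # (sub_field, year): mean citations per paper -- hypothetical values
    ("surface science", 2016): 14.0,
    ("topological materials", 2016): 55.0,
}

papers = [
    {"title": "Paper A", "sub_field": "surface science", "year": 2016, "citations": 20},
    {"title": "Paper B", "sub_field": "topological materials", "year": 2016, "citations": 40},
]

for p in papers:
    baseline = baselines[(p["sub_field"], p["year"])]
    score = p["citations"] / baseline
    print(f"{p['title']}: {p['citations']} citations, normalised score {score:.2f}")

# Paper B has twice the raw citations of Paper A but a *lower* normalised
# score -- precisely the distinction a raw Web of Science count hides.
```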

Only when I had to face a stack of papers and grade them for myself did I realise just how exceptionally difficult it is to pass numerical judgment on a piece of work in an area that lies outside my rather small sphere of research. I was, of course, asked to comment on publications in condensed matter physics, ostensibly my area of expertise. But that’s a huge field. Not only is no-one a world-leading expert in all areas of condensed matter physics, it’s almost impossible to keep up with developments in our own narrow sub-fields of interest let alone be au fait with the state of the art in all other sub-fields.

We therefore turn to citations to try to gauge the extent to which a paper has made ripples — or perhaps even sent shockwaves — through a sub-field in which we have no expertise. My co-marker and I are hardly alone in adopting this citation-counting strategy. But that’s of course no excuse — we were relying on exactly the type of pseudoquantitative heuristic that I have criticised in the past, and I felt rather “grubby” at the end of the (rather tiring) day. David Colquhoun made the following point time and again in the run-up to the last REF (and well before):

All this shows what is obvious to everyone but bone-headed bean counters. The only way to assess the merit of a paper is to ask a selection of experts in the field.

Nothing else works.

Nothing.

Bibliometrics are a measure of visibility and “clout” in a particular (yet often nebulously defined) research community; they’re not a quantification of scientific quality. Therefore, very many scientists, and this most definitely includes me, have deep misgivings about using citations to judge a paper’s — let alone a scientist’s — worth.

Although I agree with that quote from David above, the problem is that we need to somehow choose the correct “boundary conditions” for each expert; I can have a reasonable level of expertise in one sub-area of a field — say, scanning probe microscopy or self-assembly or semiconductor surface physics — and a distinct lack of working knowledge, let alone expertise, in another sub-area of that self-same field. I could list literally hundreds of topics where I would, in fact, be winging it.

For many years, and because of my deep aversion to simplistic citation-counting and bibliometrics, I’ve been guilty of the type of not-particularly-joined-up thinking that Dorothy Bishop rightly chastises in this tweet…

We can’t trust the bibliometrics in isolation (for all the reasons (and others) that David Colquhoun lays out here), so when it comes to the REF the argument is that we have to supplement the metrics with “quality control” via another round of ostensibly expert peer review. But the problem is that it’s often not expert peer review; I was certainly not an expert in the subject areas of very many of the papers I was asked to judge. And I’ll hold that no-one can be a world-leading expert in every sub-field of a given area of physics (or any other discipline).

So what are the alternatives?

David has suggested that we should, in essence, retire what’s known as the “dual support” system for research funding (see the video embedded below): “…abolish the REF, and give the money to research councils, with precautions to prevent people being fired because their research wasn’t expensive enough.” I have quite some sympathy with that view because the common argument that the so-called QR funding awarded via the REF is used to support “unpopular” areas of research that wouldn’t necessarily be supported by the research councils is not at all compelling (to put it mildly). Universities demonstrably align their funding priorities and programmes very closely with research council strategic areas; they don’t hand out QR money for research that doesn’t fall within their latest Universal Targetified Globalised Research Themes.

Prof. Bishop has a different suggestion for revamping how QR funding is divvied up, which initially (and naively, for the reasons outlined above) I found a little unsettling. My first-hand experience earlier this week with the publication grading methodology used by the REF — albeit in a mock assessment — has made me significantly more comfortable with Dorothy’s strategy:

“…dispense with the review of quality, and you can obtain similar outcomes by allocating funding at institutional level in relation to research volume.”

Given that grant income is often taken as yet another proxy for research quality, and that there’s a clear Matthew effect (rightly or wrongly) at play in science funding, the correlation between research volume and REF outcomes that underpins Dorothy’s suggestion is not surprising. As the Times Higher Education article on her proposals went on to report,

The government should, therefore, consider allocating block funding in proportion to the number of research-active staff at a university because that would shrink the burden on universities and reduce perverse incentives in the system, [Prof Bishop] said.
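
(The arithmetic of such a volume-based allocation is at least transparent. A toy sketch, with invented figures, and glossing over the thorny policy question of who counts as "research-active":)

```python
# Toy sketch: dividing a block grant in proportion to research-active staff.
# All numbers are invented; defining "research-active" is the hard part and
# is not addressed here.

total_qr_budget = 100_000_000  # £, hypothetical

staff_counts = {
    "University A": 450,
    "University B": 220,
    "University C": 90,
}

total_staff = sum(staff_counts.values())
for university, n_staff in staff_counts.items():
    allocation = total_qr_budget * n_staff / total_staff
    print(f"{university}: {n_staff} staff -> £{allocation:,.0f}")
```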

Before reacting one way or the other, I strongly recommend that you take the time to listen to Prof. Bishop eloquently detail her arguments in the video below.

Here’s the final slide of that presentation:

[Image: Dorothy Bishop's recommendations slide]

So much rests on that final point. Ultimately, the immense time and effort devoted to/wasted on the REF boils down to a lack of trust — by government, funding bodies, and, depressingly, often university senior management — that academics can motivate themselves without perverse incentives like aiming for a 4* paper. That would be bad enough if we could all agree on what a 4* paper looks like…

Spinning off without IP?

I’ve had the exceptionally good fortune of working with a considerable number of extremely talented, tenacious, and insightful scientists over the years. One of those was Julian Stirling, whose PhD I ostensibly supervised. (In reality, Julian spent quite some time supervising me.) Julian is now a postdoctoral researcher at the University of Bath and is involved in a number of exciting projects there (and elsewhere), including the one he describes in the guest post below. Over to you, Julian…


Universities love spin-offs — they show that research has had impact! — but does the taxpayer or the scientific community get good value for money? More importantly, does spinning off help or hurt the research? I fall strongly on the side of arguing that it hurts. Perhaps I am ideologically driven in my support for openness, but when it comes to building scientific instruments I think I have a strong case.

Imagine a scientist has a great idea for a new instrument. It takes three years to build it, and the results are amazing; it revolutionises the field. The scientist will be encouraged by funding bodies to make the research open. Alongside the flashy science papers will probably be a pretty dry paper on the concept of the instrument; these will be openly published. However, there will be no technical drawings, no control software, no warnings to “Never assemble X before Y or all your data will be wrong and you will only find out 3 months later!”. The university and funding agencies will want all of this key information to be held as intellectual property by a spin-off company. This company will then sell instruments to scientists (many funded by the same source that paid for the development).

The real problem comes when two more scientists both have great new ideas which require a slightly modified version of the instrument. Unfortunately, as the plans are not available, both their groups must spend 2-3 years reinventing the wheel for their own design just so they can add a new feature. Inevitably, both new instruments get spun off. Very soon, the taxpayer has paid for the instrument to be developed three times; a huge amount of time has been put into duplicating effort. And, very probably, the spin-off companies will get into legal battles over intellectual property. This pushes the price of the instruments up as their lawyers get rich. I have ranted about this so many times there is even a cartoon of my rant…

[Cartoon: the rant in question]

We live in a time when governments are requiring scientific publications to be open access. We live in a world where open source software is so stable and powerful that it runs most web servers, most phones, and all 500 of the world's fastest supercomputers. Why can't science hardware be open too? There is a growing movement to do just that, but it is somewhat hampered by people conflating open source hardware with low-cost hardware. If science is going to progress, we should share as much knowledge as possible.

In January 2018 I was very lucky to get a post-doctoral position working on open source hardware at the University of Bath. I became part of the OpenFlexure Microscope project, an open-source, laboratory-grade, motorised, 3D-printed microscope. What most people don't realise about microscopes is that the majority of the design work goes into working out how to precisely position a sample so you can find and focus on the interesting parts. The OpenFlexure microscope is lower cost than most microscopes due to 3D printing, but this has not been achieved by just 3D printing the same shapes you would normally machine from metal. That would produce an awful microscope. Instead, the main microscope stage is a single complex piece that only a 3D printer could make. Rather than sliding fine-ground metal components, the flexibility of plastic is used to create a number of flexure hinges. The result is a high-performance microscope which is undergoing trials for malaria diagnosis in Tanzania.

[Image: research project partners]

But what about production? A key benefit of the microscope being open is that local companies in regions that desperately need more microscopes can build them for their communities. This creates local industry and lowers initial costs, but, most importantly, it guarantees that local engineers can fix the equipment. Time and time again well-meaning groups send expensive scientific equipment into low resource settings with no consideration of how it performs in those conditions nor any plans for how it can be fixed when problems do arise. For these reasons the research project has a Tanzanian partner, STICLab, who are building (and will soon be selling) microscopes in Tanzania. We hope that other companies in other locations will start to do the same.

The research project had plans to support distributed manufacturing abroad. But what if people in the UK want a microscope? They can always build their own — but this requires time, effort, and a 3D printer. For this reason, Richard Bowman (the creator of the OpenFlexure Microscope) and I started our own company, OpenFlexure Industries, to distribute microscopes. Technically, it is not a spin-off as it owns no intellectual property. We hope to show that scientific instruments can be distributed by successful businesses, while the entire project remains open.

People ask me, “How do you stop another company undercutting you and selling them for less?” The answer is: we don't. We want people to have microscopes; if someone undercuts us, we have achieved this goal. The taxpayer rented Richard's brain when they gave him the funding to develop the microscope, and now everyone owns the design.

The company is only a month old, but we are happy to have been nominated for a Great West Business Award. If you support the cause of open source hardware and distributed manufacturing we would love your vote.

Bullshit and Beyond: From Chopra to Peterson

Harry G Frankfurt‘s On Bullshit is a modern classic. He highlights the style-over-substance tenor of the most fragrant and flagrant bullshit, arguing that

It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction. A person who lies is thereby responding to the truth, and he is to that extent respectful of it. When an honest man speaks, he says only what he believes to be true; and for the liar, it is correspondingly indispensable that he considers his statements to be false. For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.

In other words, the bullshitter doesn’t care about the validity or rigour of their arguments. They are much more concerned with being persuasive. One aspect of BS that doesn’t quite get the attention it deserves in Frankfurt’s essay, however, is that special blend of obscurantism and vacuity that is the hallmark of three world-leading bullshitters of our time:  Deepak Chopra, Karen Barad (see my colleague Brigitte Nerlich’s important discussion of Barad’s wilfully impenetrable language here), and Jordan Peterson. In a talk for the University of Nottingham Agnostic, Secularist, and Humanist Society last night (see here for the blurb/advert), I focussed on the intriguing parallels between their writing and oratory. Here’s the video of the talk.

Thanks to UNASH for the invitation. I’ve not included the lengthy Q&A that followed (because I stupidly didn’t ask for permission to film audience members’ questions). I’m hoping that some discussion and debate might ensue in the comments section below. If you do dive in, try not to bullshit too much…


LIYSF 2018: Science Without Borders*

Better the pride that resides
In a citizen of the world
Than the pride that divides
When a colourful rag is unfurled

From Territories. Track 5 of Rush’s Power Windows (1985). Lyrics: Neil Peart.


[Photo: the LIYSF 2018 plenary lecture]

Last night I had the immense pleasure and privilege of giving a plenary lecture for the London International Youth Science Forum. 2018 marks the 60th annual forum, a two-week event that brings together 500 students (aged 16 – 21) from, this year, seventy different countries…

[Image: the seventy countries represented at LIYSF 2018]

The history of the forum is fascinating. Embarrassingly, until I received the invitation to speak I was unaware of the LIYSF’s impressive and exciting efforts over many decades to foster and promote, in parallel, science education and international connections. The “science is global” message is at the core of the Forum’s ethos, as described at the LIYSF website:

The London International Youth Science Forum was the brainchild of the late Philip S Green. In the aftermath of the Second World War an organisation was founded in Europe by representatives from Denmark, Czech Republic, the Netherlands and the United Kingdom in an effort to overcome the animosity resulting from the war. Plans were made to set up group home-to-home exchanges between schools and communities in European countries. This functioned with considerable success and in 1959 Philip Green decided to provide a coordinated programme for groups from half a dozen European countries and, following the belief that ‘out of like interests the strongest friendships grow.’ He based the programme on science.

The printed programme for LIYSF 2018 includes a message from the Prime Minister…

[Image: the Prime Minister's message in the LIYSF 2018 printed programme]

It’s a great shame that the PM’s message above doesn’t mention at all LIYSF’s work in breaking down borders and barriers between scientists in different countries since its inception in 1959. But given that her government and her political party have been responsible for driving the appalling isolationism and, in its worst excesses, xenophobia of Brexit, it’s not at all surprising that she might want to gloss over that aspect of the Forum…

The other slightly irksome aspect of May’s message, and something I attempted to counter during the lecture last night, is the focus on “demand for STEM skills”, as if non-STEM subjects were somehow of intrinsically less value. Yes, I appreciate that it’s a science forum, and, yes, I appreciate that the LIYSF students are largely focussed on careers in science and engineering. But we need to encourage a greater appreciation of the value of non-STEM subjects. I, for one, was torn between opting to do an English or a physics degree at university. As I’ve banged on about previously, the A-level system frustratingly tends to exacerbate this artificial “two cultures” divide between STEM subjects and the arts and humanities. We need science and maths. And we need economics, philosophy, sociology, English lit, history, geography, modern (and not-so-modern) languages…

The arrogance of a certain breed of STEM student (or researcher or lecturer) who thinks that the ability to do complicated maths is the pinnacle of intellectual achievement also helps to drive this wedge between the disciplines. And yet those particular students, accomplished though they may well be in vector calculus, contour integration, and/or solving partial differential equations, often flounder completely when asked to write five-hundred words that are reasonably engaging and/or entertaining.

Borders and boundaries, be they national or disciplinary, encourage small-minded, insular thinking. Encouragingly, there was none of that on display last night. After the hour-long lecture, I was blown away, time and again, by the intelligent, perceptive, and, at times, provocative (in a very good way!) questions from the LIYSF students. After an hour and a half of questions, security had to kick us out of the theatre because it was time to lock up.

Clare Elwell, who visited Nottingham last year to give a fascinating and inspirational Masterclass lecture on her ground-breaking research for our Physics & Astronomy students, is the President of the LIYSF. It’s no exaggeration to say that the impact of the LIYSF on Clare’s future, when she attended as a student, was immense. I’ll let Clare explain:

 I know how impactful and inspiring these experiences can be, as I attended the Forum myself as a student over thirty years ago. It was here that I was first introduced to Medical Physics – an area of science which I have pursued as a career ever since. Importantly, the Forum also opened my eyes to the power of collaboration and communication across scientific disciplines and national borders to address global challenges — something which has formed a key element of my journey in science, and which the world needs now more than ever.

(That quote is also taken from the LIYSF 2018 Programme.)

My lecture was entitled “Bit from It: Manipulating matter bond by bond”. A number of students asked whether I'd make the slides available, which, of course, is my pleasure (via that preceding link). In addition, some students asked about the physics underpinning the “atomic force macroscope [1]” (and the parallels with its atomic force microscope counterpart) that I used as a demonstration in the talk:

[Photo: the “atomic force macroscope” demonstration set-up]

(Yes, the coffee is indeed an integral component of the experimental set-up [2]).

Unfortunately, due to the size of the theatre only a small number of the students could really see the ‘guts’ of the “macroscope”. I’m therefore going to write a dedicated post in the not-too-distant future on just how it works, its connections to atomic force microscopy, and its much more advanced sibling the LEGOscope (the result of a third year undergraduate project carried out by two very talented students).

The LIYSF is a huge undertaking and it’s driven by the hard work and dedication of a wonderful team of people. I’ve got to say a big thank you to those of that team I met last night and who made my time at LIYSF so very memorable: Director Richard Myhill for the invitation (and Clare (Elwell) for the recommendation) and for sorting out all of the logistics of my visit; Sam Thomas and Simran Mohnani, Programme Liaison; Rhia Patel and Vilius Uksas, Engagement Manager and Videographer, respectively. (It’s Vilius you can see with the camera pointed in my direction in the photo at the top there.); Victoria Sciandro (Deputy Host. Victoria also had the task of summarising my characteristically rambling lecture before the Q&A session started and did an exceptional job, given the incoherence of the source material); and James, whose surname I’ve embarrassingly forgotten but who was responsible for all of the audio-video requirements, the sound and the lighting. He did an exceptional job. Thank you, James. (I really hope I’ve not forgotten anyone. If I have, my sincere apologies.)

Although this was my first time at the LIYSF, I sincerely hope it won’t be my last. It was a genuinely inspiring experience to spend time with such enthusiastic and engaging students. The future of science is in safe hands.

We opened the post with Rush. So let’s bring things full circle and close with that Toronto trio… [3]


* “Science Without Borders” is also the name of the agency that funds the PhD research of Filipe Junquiera in the Nottingham Nanoscience Group. As this blog post on Filipe’s journey to Nottingham describes, he’s certainly crossed borders.

[1] Thanks to my colleague Chris Mellor for coining the “atomic force macroscope” term.

[2] It’s not. (The tiresome literal-mindedness of some people online never ceases to amaze me. Better safe than sorry.)

[3] Great to be asked a question from the floor by a fellow Rush fan last night. And he was Canadian to boot!

In Praise of ‘Small Astronomy’

My colleague and friend, Mike Merrifield, wrote the following thought-provoking post, recently featured at the University of Nottingham blog. I’m reposting it here at “Symptoms…” because although I’m not an astronomer, Mike’s points regarding big vs small science are also pertinent to my field of research: condensed matter physics/ nanoscience. Small research teams have made huge contributions in these areas over the years; many of the pioneering, ground-breaking advances in single atom/molecule imaging and manipulation have come from teams of no more than three or four researchers. Yet there’s a frustrating and troublesome mindset — especially among those who hold the purse strings at universities and funding bodies — that “small science” is outmoded and so last century. Much better to spend funding on huge multi-investigator teams with associated shiny new research institutes, apparently.

That’s enough from me. Over to Mike…


A number of years back, I had the great privilege of interviewing the Dutch astronomer Adriaan Blaauw for a TV programme.  He must have been well into his eighties at the time, but was still cycling into work every day at the University of Leiden, and had fascinating stories to tell about the very literal perils of trying to undertake astronomical research under Nazi occupation; the early days of the European Southern Observatory (ESO) of which he was one of the founding figures; and his involvement with the Hipparcos satellite, which had just finished gathering data on the exact positions of a million stars to map out the structure of the Milky Way.

When the camera stopped rolling and we were exchanging wind-down pleasantries, I was taken aback when Professor Blaauw suddenly launched into a passionate critique of big science projects like the very one we had been discussing.  He was very concerned that astronomy had lost its way, and rather than thinking in any depth about what new experiments we should be doing, we kept simply pursuing more and more data.  His view was that all we would do with data sets like that produced by Hipparcos would be to skim off the cream and then turn our attention to the next bigger and better mission rather than investing the time and effort needed to exploit these data properly.  With technology advancing at such a rapid pace, this pressure will always be there – why work hard for many months to optimise the exploitation of this year’s high-performance computers, when next year’s will be able to do the same task as a trivial computation?  Indeed, the Hipparcos catalogue of a million stars is even now in the process of being superseded by the Gaia mission making even higher quality measurements of a billion stars.

Of course there are two sides to this argument.  Some science simply requires the biggest and the best.  Particle physicists, for example, need ever-larger machines to explore higher energy regimes to probe new areas of fundamental physics.  And some results can only be obtained through the collection of huge amounts of data to find the rare phenomena that are buried in such an avalanche, and to build up statistics to a point where conclusions become definitive.  This approach has worked very well in astronomy, where collaborations such as the Sloan Digital Sky Survey (SDSS) have brought together thousands of researchers to work on projects on a scale that none could undertake individually.  Such projects have also democratized research in that although the data from surveys such as SDSS are initially reserved for the participants who have helped pay for the projects, the proprietary period is usually quite short so the data are available to anyone in the World with internet access to explore and publish their own findings.

Unfortunately, there is a huge price to pay for these data riches. First, there is definitely some truth in Blaauw’s critique, with astronomers behaving increasingly like magpies, drawn to the shiniest bauble in the newest, biggest data set.  This tendency is amplified by the funding of research, where the short proprietary period on such data means that those who are “on the team” have a cast iron case as to why their grant should be funded this round, because by next round anyone in the World could have done the analysis.  And of course by the time the next funding round comes along there is a new array of time-limited projects that will continue to squeeze out any smaller programmes or exploitation of older data.

But there are other problems that are potentially even more damaging to this whole scientific enterprise.  There is a real danger that we simply stop thinking.  If you ask astronomers what they would do with a large allocation of telescope time, most would probably say they would do a survey larger than any other.  It is, after all, a safe option: all those results that were right at the edge of statistical significance will be confirmed (or refuted) by ten times as much data, so we know we will get interesting results.  But is it really the best use of the telescope?  Could we learn more by targeting observations to many much more specific questions, each of which requires a relatively modest investment of time?  This concern also touches on the wider philosophical question of the “right” way to do science.  With a big survey, the temptation is always to correlate umpteen properties of the data with umpteen others until something interesting pops out, then try to explain it.  This a posteriori approach is fraught with difficulty, as making enough plots will always turn up a correlation, and it is then always possible to reverse engineer an explanation for what you have found.  Science progresses in a much more robust (and satisfying) way when the idea comes first, followed by thinking of an experiment that is explicitly targeted to test the hypothesis, and then the thrill of discovering that the Universe behaves as you had predicted (or not!) when you analyse the results of the test.
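
(This multiple-comparisons worry can be put on a quantitative footing: test enough pairs of unrelated variables and "significant" correlations appear by chance alone. A small simulation sketch, using nothing but random noise and the conventional 5% threshold; the survey sizes here are arbitrary, purely for illustration.)

```python
# Small simulation: correlate many pairs of pure-noise "survey" variables and
# count how many pass a p < 0.05 significance test by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_objects = 200    # e.g. objects in a hypothetical survey
n_variables = 30   # measured properties, here just Gaussian noise

data = rng.normal(size=(n_variables, n_objects))

false_positives = 0
n_pairs = 0
for i in range(n_variables):
    for j in range(i + 1, n_variables):
        r, p = stats.pearsonr(data[i], data[j])
        n_pairs += 1
        if p < 0.05:
            false_positives += 1

print(f"{n_pairs} pairs tested, {false_positives} 'significant' correlations "
      f"from noise alone (~{false_positives / n_pairs:.0%})")
# Roughly 1 in 20 pairs of completely unrelated variables will "correlate".
```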

Finally, and perhaps most damagingly, we are turning out an entire generation of new astronomers who have only ever worked on mining such big data sets.  As PhD students, they will have been small cogs in the massive machines that drive these big surveys forward, so the chances of them having their names associated with any exciting results are rather small – not unreasonably, those who may have invested most of a career in getting the survey off the ground will feel they have first call on any such headlines.  The students will also have never seen a project all the way through from first idea on the back of a beer mat through telescope proposals, observations, analysis, write-up and publication.  Without that overview of the scientific process on the modest scale of a PhD project, they will surely be ill prepared for taking on leadership roles on bigger projects further down the line.

I suppose it all comes down to a question of balance: there are some scientific results that would simply be forever inaccessible without large-scale surveys, but we have to somehow protect the smaller-scale operations that can produce some of the most innovative results, while also helping to keep the whole endeavour on track.  At the moment, we seem to be very far from that balance point, and are instead playing out Adriaan Blaauw’s nightmare.

Politics. Perception. Philosophy. And Physics.

Today is the start of the new academic year at the University of Nottingham (UoN) and, as ever, it crept up on me and then leapt out with a fulsome “Gotcha”. Summer flies by so very quickly. I’ll be meeting my new 1st year tutees this afternoon to sort out when we’re going to have tutorials and, of course, to get to know them. One of the great things about the academic life is watching tutees progress over the course of their degree from that first “getting to know each other” meeting to when they graduate.

The UoN has introduced a considerable number of changes to the “student experience” of late via its Project Transform process. I’ve vented my spleen about this previously but it’s a subject to which I’ll be returning in the coming weeks because Transform says an awful lot about the state of modern universities.

For now, I’m preparing for a module entitled “The Politics, Perception and Philosophy of Physics” (F34PPP) that I run in the autumn semester. This is a somewhat untraditional physics module because, for one thing, it’s almost entirely devoid of mathematics. I thoroughly enjoy  F34PPP each year (despite this amathematical heresy) because of the engagement and enthusiasm of the students. The module is very much based on their contributions — I am more of a mediator than a lecturer.

STEM students are sometimes criticised (usually by Simon Jenkins) for having poorly developed communication skills. This is an especially irritating stereotype in the context of the PPP module, where I have been deeply impressed by the quality of the writing the students submit. As I discuss in the video below (an  overview of the module), I’m not alone in recognising this: articles submitted as F34PPP coursework have been published in Physics World, the flagship magazine of the Institute of Physics.

 

In the video I note that my intention is to upload a weekly video for each session of the module. I’m going to do my utmost to keep this promise and, moreover, to accompany each of those videos with a short(ish) blog post. (But, to cover my back, I’ll just note in advance that the best laid schemes gang aft agley…)

How universities incentivise academics to short-change the public

This is going to be a short post (for a change). First, you should read this by David Colquhoun. I'll wait until you get back. (You should sign the petition as well while you're over there.)

In his usual down-to-earth and incisive style, Colquhoun has said just about everything that needs to be said about the shocking mismanagement of King’s College London.

So why am I writing this post? Well, it’s because KCL is far from alone in using annual grant income as a metric for staff assessment – the practice is rife across the UK higher education sector. For example, the guidance for performance review at Nottingham contains this as one of the assessment standards: “Sustained research income equal to/in excess of Russell Group average for the discipline group”. Nottingham is not going out on a limb here – our Russell Group ‘competitors’ have similar aspirations for their staff.

What’s wrong with that you might ask? Surely it’s your job as an academic to secure research income?

No. My job as an academic is to do high-quality research. Not to ‘secure research income’. It’s all too easy to forget this, particularly as a new lecturer when you’re trying to get a research group established and gain a foothold on the career ladder. (And as a less-new lecturer attempting to tick the boxes for promotion. And as a grizzled old academic aiming to establish ‘critical mass’ on the national or international research ‘stage’.)

What’s particularly galling, however, is that the annual grant income metric is not normalised to any measure of productivity or quality. So it says nothing about value for money. Time and time again we’re told by the Coalition that in these times of economic austerity, the public sector will have to “do more with less”. That we must maximise efficiency. And yet academics are driven by university management to maximise the amount of funding they can secure from the public pot.

Cost effectiveness doesn’t enter the equation. Literally.

Consider this. A lecturer recently appointed to a UK physics department, Dr. Frugal, secures a modest grant from the Engineering and Physical Sciences Research Council for, say, £200k. She works hard for three years with a sole PhD student and publishes two outstanding papers that revolutionise her field.

Her colleague down the corridor, Prof. Cash, secures a grant for £4M and publishes two solid, but rather less outstanding, papers.

Who is the more cost-effective? Which research project represents better value for money for the taxpayer?

…and which academic will be under greater pressure from management to secure more research income from the public purse?
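
(To put crude numbers on it, here is a back-of-the-envelope sketch using the figures above. "Cost per paper" is itself a lousy metric, of course; the point is that the income metric ignores even this much.)

```python
# Crude illustration of the cost-effectiveness question, using the figures
# from the hypothetical example above.

academics = {
    "Dr. Frugal": {"grant_income": 200_000, "outputs": 2},    # two field-changing papers
    "Prof. Cash": {"grant_income": 4_000_000, "outputs": 2},  # two solid papers
}

for name, record in academics.items():
    cost_per_output = record["grant_income"] / record["outputs"]
    print(f"{name}: £{record['grant_income']:,} income, "
          f"£{cost_per_output:,.0f} per paper")

# An annual-income metric ranks Prof. Cash 20x "better" than Dr. Frugal,
# without any reference to what the money actually produced.
```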

Image: Coins, the acquisition of which is not university departments’ main aim. Credit: https://www.maxpixel.net/Golden-Gold-Riches-Treasure-Rich-Coins-Bounty-1637722