First published at physicsfocus.
It’s been a week since Michael Gove was unceremoniously given the boot from his role as Education Secretary. The cheers of teachers still echo around staff rooms and schoolyards up and down the country.
Gove was variously described as incredibly unpopular, a hate figure, utterly ruthless, and a “toxic liability”. And that was just by his colleagues in the Coalition. (Allegedly.) Those who shared his simple-minded, wilfully uninformed, and proto-Victorian views on education, including a certain Richard Littlejohn, saw Gove’s unpopularity as arising simply because he was driving through what they considered to be essential reforms of an ailing education system. (My deep apologies for the preceding link to a Daily Mail article and its associated sidebar of shame. It won’t happen again. I also offer a posthumous apology to those Victorians who would likely have baulked at the suggestion that their educational methods were as backward-looking as those of Gove.)
Just why are Littlejohn and his reactionary ilk so certain that the English education system is, as they’d have it, going to hell in a handcart? A very large part of the reason is that they naively, quaintly, yet dangerously assume that education is equivalent to a competitive sport where schools, teachers, and children can be accurately assessed on the basis of positions in league tables. What’s worse – and this is particularly painful for a physicist or, indeed, anyone with a passing level of numeracy, to realise – is that this misplaced and unscientific faith in the value of statistically dubious inter-school comparisons is at the very core of the assessment culture of the Office for Standards in Education, Children’s Services and Skills (Ofsted).
An intriguing aspect of the swansong of Gove’s career as Education Secretary was that he more than once ‘butted heads’ with Michael Wilshaw, head of Ofsted. One might perhaps assume that this was a particularly apposite example of “the enemy of my enemy is my friend”. Unfortunately not. Ofsted’s entirely flawed approach to the assessment of schools is in many ways an even bigger problem than Gove’s misplaced attempts to rewind education to the halcyon, but apocryphal, days of yore.
Moreover, Gove’s gone. Ofsted is not going anywhere any time soon.
I’ve always been uncomfortable about the extent to which number-abuse and pseudostatistics might be underpinning Ofsted’s school assessment procedures. But it was only when I became a parent governor for my children’s primary school, Middleton Primary and Nursery School in Nottingham, that the shocking extent of the statistical innumeracy at the heart of Ofsted’s processes became clear. (I should stress at this point that the opinions about Ofsted expressed below are mine, and mine alone.)
Middleton is a fantastic school, full of committed and inspirational teachers. But, like the vast majority of schools in the country, it is subject to Ofsted’s assessment and inspection regime. Ofsted’s implicit assumption is that the value of a school like Middleton, and, by extension, the value of the teachers and students in that school, can be reduced to a set of objective and robust ‘metrics’ which can in turn be used to produce a quantitative ranking (i.e. a league table). Even physicists, who spend their careers wading through reams of numerical data, know full well that not everything that counts can be counted. (By the way, I use the adjective “inspirational” unashamedly. And because it winds the likes of Littlejohn and Toby Young up. As, I’d imagine, does starting a sentence with a conjunction and ending it with a preposition.)
But let’s leave the intangible and unquantifiable aspects of a school’s teaching to one side and instead critically consider the extent to which Ofsted’s data and processes are, to use that cliché beloved of government ministers, fit for purpose. In its advice to governors, Ofsted – rather ironically, as we’ll see – stresses the key importance of objective data and highlights that the governing board should assess the school’s performance on the basis of a number of measures which are ‘helpfully’ summarised at websites such as the Ofsted Data Dashboard and RAISE Online.
Ofsted’s advice to governors tacitly assumes that the data it provides, and the overall assessment methodology which gives rise to those data, are objective and can be used to robustly monitor the performance of a given school against others. Let’s just take a look at the objective evidence for this claim.
During the governor training sessions I attended, I repeatedly asked to what extent the results of Ofsted inspections (and other Ofsted-driven assessment schemes) were reproducible. In other words, if we repeated the inspection with a different set of inspectors, would we get the same result? If not, in what sense could Ofsted claim that the results of an inspection were objective and robust? As you might perhaps expect, I singularly failed to get a particularly compelling response to this question. This was for a very good reason: the results of Ofsted inspections are entirely irreproducible. A headline from the Telegraph in March this year said it all: “Ofsted inspections: You’d be better off flipping a coin”. This was not simply media spin. The think-tank report, “Watching the Watchmen”, on which the article was based, actually goes further: “In fact, overall the results are worse than flipping a coin”.
It’s safe to say that the think-tank in question, Policy Exchange, is on the right of the political spectrum. It is also perhaps not entirely coincidental that one of its founding members was a certain Michael Gove, and that the Policy Exchange report on Ofsted was highlighted by the right-of-centre press during the period of spats between Wilshaw and Gove mentioned above. None of that, however, detracts from the data cited in the report. These resulted from the work of Robert Coe and colleagues at Durham University and stemmed from a detailed study involving more than 3000 teachers. Coe has previously criticised Ofsted’s assessment methods in the strongest possible terms, arguing that they are not “research-based or evidence-based”.
Ofsted asks governors to treat its data as objective and draw conclusions accordingly. However, without a suitable ‘control’ study – which in this case is as simple as running independent assessments of the same class with different inspectors – the data on inspections simply cannot be treated as objective and reliable. In this sense, Ofsted is giving governors, schools, and, more generally, the public exceptionally misleading messages.
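To make the “better off flipping a coin” point concrete: the standard way to quantify whether two independent raters agree more than chance would predict is Cohen’s kappa. The snippet below is a minimal sketch, not anything Ofsted (or the Durham study) actually runs – the grades, the 40-lesson sample, and the deliberately unrelated inspectors are all hypothetical – but it shows what a proper control study would measure.

```python
import random

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters.

    1.0 means perfect agreement; ~0 means no better than chance;
    negative values mean systematic disagreement.
    """
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    categories = set(ratings_a) | set(ratings_b)
    # Agreement expected by chance, from each rater's own grade frequencies.
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n)
        for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical control study: two inspectors independently grade the
# same 40 lessons on Ofsted's four-point scale. Here their judgements
# share no signal at all, so kappa should come out near zero.
random.seed(1)
grades = ["Outstanding", "Good", "Requires improvement", "Inadequate"]
inspector_1 = [random.choice(grades) for _ in range(40)]
inspector_2 = [random.choice(grades) for _ in range(40)]
print(round(cohens_kappa(inspector_1, inspector_2), 2))
```

If Ofsted’s inspections were as reproducible as its advice to governors implies, a study like this (with real inspectors rather than random choices) would yield a kappa well above zero. The reported finding that paired judgements are “worse than flipping a coin” corresponds to kappa at or below zero.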
But it gets worse…
The lack of rigour in Ofsted’s inspections is just one part of the problem. It’s compounded in a very worrying way by the shocking abuse of statistics that forms the basis of the Data Dashboard and RAISE Online. Governors are presented with tables of data from these websites and asked to make ‘informed’ decisions on the basis of the numbers therein. This, to be blunt, is a joke.
It would take a lengthy series of blog posts to highlight the very many flaws in Ofsted’s approach to primary and secondary school data. Fortunately, those posts have already been written by a teacher who has to deal with Ofsted’s nonsense on what amounts to a daily basis. I thoroughly recommend that you head over to the Icing On The Cake blog where you’ll find this, this, and this. The latter post is particularly physicist-friendly, given that it invokes Richard Feynman’s “cargo cult science” description of pseudoscientific methods (in the context of Ofsted’s methodology). It’s also worth following Icing On The Cake on Twitter if you’d like regular insights into the level of the data abuse which teachers have to tolerate from Ofsted.
Coincidentally, I stumbled across that blog after I had face-palmed my way (sometimes literally) through a meeting in which the Ofsted Data Dashboard tables were given to governors. I couldn’t quite believe that Ofsted presented the data in a way such that the average first-year physics or maths undergraduate could drive a coach and horses right through it (if you’ll excuse the Goveian metaphor). So I went home and googled the simple term “Ofsted nonsense”. Right at the top of the list of hits were the Icing On The Cake posts (followed by links to many other illuminating analyses of Ofsted’s assessment practices).
I’m not going to rehash those posts here – if you’ve got even a passing interest in the education system in England you should read them (and the associated comments threads) for yourself and reach your own conclusions. To summarise, the problems are multi-faceted but can generally be traced to simple “rookie” flaws in data analysis. These include:
- Inadequate appreciation of the effects of small sample size;
- A lack of consideration of statistical significance/uncertainties in the data. (Or, at best, major deficiencies in communicating and highlighting those uncertainties);
- Comparison of variations between schools when the variation within a given school (from year to year) can be at least as large;
- An entirely misleading placement of schools in “quintiles” when the difference between the upper and lower quintiles can be marginal. Ofsted has already had to admit to a major flaw in its initial assignment of quintiles.
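The first and last of those flaws compound each other, and a toy simulation makes the point. The sketch below is entirely hypothetical – 100 schools with *identical* underlying quality, cohorts of 30 pupils, scores drawn from the same distribution for every school – and the quintile assignment is a simple rank-based stand-in, not Ofsted’s actual procedure. Yet even in this idealised setting, a school’s quintile swings from year to year on sampling noise alone.

```python
import random
import statistics

random.seed(42)
N_SCHOOLS, COHORT = 100, 30

def cohort_mean():
    # Every school draws pupils from the SAME distribution:
    # any difference between schools here is pure chance.
    return statistics.mean(random.gauss(100, 15) for _ in range(COHORT))

def quintiles(scores):
    """Rank schools and assign quintile 1 (top) to 5 (bottom)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    q = [0] * len(scores)
    for rank, i in enumerate(order):
        q[i] = rank * 5 // len(scores) + 1
    return q

year_1 = quintiles([cohort_mean() for _ in range(N_SCHOOLS)])
year_2 = quintiles([cohort_mean() for _ in range(N_SCHOOLS)])
moved = sum(a != b for a, b in zip(year_1, year_2))
print(f"{moved} of {N_SCHOOLS} identical schools changed quintile")
```

With a cohort of 30, the standard error on the cohort mean is roughly 15/√30 ≈ 2.7 points – easily enough to shuffle schools across quintile boundaries when the true difference between them is zero. Any dashboard that presents quintile position without that uncertainty is reporting noise as signal.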
What is perhaps most galling is that many A-level students in English schools will be taught to recognise and avoid these types of pitfall in data analysis. It is an irony too far that those teaching the correct approach to statistics in English classrooms are assessed and compared to their peers on the basis of Ofsted’s pseudostatistical nonsense.
Image: Manipulating data. Credit: https://www.publicdomainpictures.net/en/view-image.php?image=45242