
Do the maths: why England's A-level grading system is unfair

Photograph: Barcroft Media/Getty Images

It wouldn’t be out of place in a maths A-level: suppose a class of 27 pupils is predicted to achieve 2.3% A* grades and 2.3% U grades; how many pupils should be given each grade? Show your working.

There are a few ways you could solve the problem. Each of the 27 pupils is 3.7% of the class, so maybe you give no one an A* or a U at all. After all, your class was effectively predicted to get less than one of each of those grades, and the only number of pupils less than one is zero.
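The arithmetic behind that first option is easy to check. Here is a rough sketch in Python, using the figures from the puzzle above; none of this is Ofqual's actual code:

class_size = 27
predicted_rate = 0.023  # 2.3% predicted for both A* and U

# Each pupil is one twenty-seventh of the class, roughly 3.7%
share_per_pupil = 1 / class_size                # ~0.037

# The prediction amounts to a fraction of a pupil
predicted_pupils = predicted_rate * class_size  # ~0.62 pupils

print(f"{share_per_pupil:.1%} per pupil; {predicted_pupils:.2f} pupils predicted")
# 3.7% per pupil; 0.62 pupils predicted -- less than one, so arguably zero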


Or you go the other way: 2.3% is more than “half” a pupil in that class (half a pupil would be about 1.85%), after all, and everyone knows you should round up in that case. So perhaps you should give one A* and one U.

Or you could pick the system that the exam regulator applied to calculate results on Thursday – now decried as shockingly unfair – and declare that no one should get the A* but someone should still get the U. U means unclassified, or in lay terms, a fail.

And under the Office of Qualifications and Examinations Regulation (Ofqual) system, it is not merely that they should get a U – they must get a U, even if their teacher recommended a much higher grade, and even if the algorithm predicted that fewer than one pupil at that school would get one.

That choice, forcing grades down across the board, was at the heart of much of the dismay felt across England on Thursday. It meant that if Ofqual’s mysterious algorithm predicted any chance at all of a U grade in a class – even a prediction amounting to fewer than one pupil – then one pupil had to be given that grade, no matter how well they had performed up until that point.

The unfairness was mirrored at the other end of the scale: no matter how good a pupil you were, you could only achieve an A* if the Ofqual algorithm had predicted that at least one pupil in your class would get that grade.

A class predicted just less than one A* and just more than zero U grades would be given zero A*s and one U.
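In code terms, the rule works out something like the sketch below – a reconstruction of the behaviour described above, with our own variable names, not Ofqual’s published method:

import math

def pupils_awarded(predicted_rate, class_size, grade):
    """Sketch of the asymmetric rounding described above; not Ofqual's code."""
    predicted_pupils = predicted_rate * class_size
    if grade == "U":
        # Any non-zero prediction of a U forces at least one pupil to get one
        return max(1, math.floor(predicted_pupils)) if predicted_pupils > 0 else 0
    # A* (and, in this sketch, other grades) rounds down: fewer than one
    # predicted pupil means no one is awarded the grade
    return math.floor(predicted_pupils)

# The class from the puzzle: 27 pupils, 2.3% predicted at both ends
print(pupils_awarded(0.023, 27, "A*"))  # 0 -- just under one pupil, rounded away
print(pupils_awarded(0.023, 27, "U"))   # 1 -- someone must take the fail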

Dave Thomson, chief statistician at the education thinktank FFT, illustrated the problem starkly with figures from a real, anonymised school. At that school, he writes: “12.5% of entrants achieved A* between 2017 and 2019. And none achieved a grade U”. So what of the school’s 2020 exam-less results?

That historic data is combined with information about individual students in a process called the “prior attainment adjustment”. This is the part of the process that ministers have waved away over the past week as “the model” or “the algorithm”: it uses data about this year’s A-level pupils, including their GCSE grades, to judge how accurate their predicted grades are.

For the school Thomson analysed, the adjustment turned three years without a single U grade into a predicted 2.3% rate of U grades for 2020, while the historic 12.5% rate of A*s was downgraded to a predicted 5.71% this year.

From there, the rounding process wrecks things further. That school, with its class of 27, was given one A* and one U – 3.7% for each. “This seems rather harsh,” Thomson writes, “given that the model prediction is for fewer than one pupil (2.30%, when each pupil counts as 3.70%) to achieve this grade.”
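Plug the adjusted figures into the same arithmetic and the harshness Thomson describes is plain – again a sketch using the quoted numbers, not Ofqual’s code:

class_size = 27
adjusted_u_rate = 0.023       # 2.30% predicted U, up from zero in three years
adjusted_astar_rate = 0.0571  # 5.71% predicted A*, down from a historic 12.5%

print(f"U:  {adjusted_u_rate * class_size:.2f} pupils predicted")     # 0.62
print(f"A*: {adjusted_astar_rate * class_size:.2f} pupils predicted")  # 1.54

# Outcome reported above: one U awarded (0.62 forced up to a whole pupil)
# and one A* awarded (1.54 rounded down)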

Beyond the rounding, though, it is the adjustment process that “is absolutely fundamental to understanding how this year’s grades have been calculated”.

“Unfortunately, it raises more questions than it answers,” he says.

And that U, which was only ever predicted for just over half a pupil, must be given to one pupil – whoever is ranked bottom of the class by teachers – no matter how well that individual performed.

An Ofqual paper reveals that the process, developed using historical data, performed best at predicting grades for history A-level, where it was right slightly more than two-thirds of the time. For the worst exam, Italian, it was right barely a quarter of the time.

Ofqual instead chose to focus on its own measure of accuracy – whether it was right “within a grade”. There, it performed better: 98.7% of English language grades were within a grade of the truth, though other courses – including, ironically, further maths – were still wrong more than 10% of the time.

But as any A-level student will tell you, accuracy “within a grade” is meaningless. Ofqual may mark itself highly if it gives an A student a B, but for that student, the difference is life-changing.