A whole generation of GCSE and A-level students may forever have an invisible asterisk next to their qualifications, marking them out as the COVID cohort, whose achievements will be seen as devalued because of grade inflation and gaps in their learning.

But are ‘pandemic grades’ really less meaningful than the grades of earlier years, or do they perhaps tell us something that other years don’t?

In 2020, nobody sat the exams and students were given ‘centre-assessed grades’ (CAGs). This year, although many students will sit some kind of formal assessment, it will again mainly be down to the teacher to determine the grade.

The familiar method of testing a student’s learning is called ‘summative’ assessment: an examination, at the end of the course, of how well the student can recall what they have learned. Since the then Education Secretary Michael Gove introduced curriculum reforms in 2010, the government has pushed even further in this direction, making continuous assessment much less important.

So it is no surprise that working out how to award qualifications without exams taking place was no easy task. Ofqual’s algorithm caused outrage by producing grades that many thought were deeply unfair, and so CAGs were used more widely instead.

This approach had problems of its own. One was grade inflation: average grades rose because students were awarded whichever was higher, the algorithm’s grade or the CAG. Another was subjectivity: many argued that CAGs depend too heavily on individual teachers’ judgement. A further concern this year is that CAGs and teacher-predicted grades may be based on work and tests completed while schools were shut, when cheating may have been possible.

But what is so great about summative tests? Are they really so much more accurate than CAGs or teacher-predicted grades? Dr Mark Corver has written in his essay ‘Predicted Grades and University Admissions’ that “exam-awarded grades themselves are likely not particularly good at predicting exam-awarded grades”.

What if a candidate goes into an exam on a bad day? Perhaps their cat has just died, or they’re on their period, and their performance suffers. What if it’s not the candidate but the person marking the paper who has had a bad day, so that some papers are marked more strictly than others? Dame Glenys Stacey, the Chief Regulator of Ofqual, has acknowledged that exam grades are only “reliable to one grade either way”. In 2019, only 51% of English exams were given the same grade when marked by a second examiner.

The familiar A-level and GCSE exams test what an individual knows on one particular day and make no allowance for the external factors that could affect how well they do. If performance is what is being tested, then these summative exams may indeed be the best tool for the job.

But if it is potential that these qualifications aim to show, then perhaps CAGs and predicted grades are more accurate. After all, who better to assess a student’s potential than their teacher, who sees the effort they put into their classwork, knows whether or not they engage, and may have a good sense of whether they cheated during online school? Teachers know how well students could perform at their best.

What we need to decide is what A-levels and GCSEs are really for. Are we testing how well a candidate can stand up to the stress of exam pressure and conditions and still demonstrate their knowledge? Or are we measuring what the candidate has the potential to achieve, based on their effort and engagement as well as their attainment? When the pandemic is over, do we want to go back to full-blown exams, or has this new method of assessment shown us that there are better ways to grade A-levels and GCSEs?