Getting it right

Transforming educational assessment with comparative judgement

The marking of student work is plagued by human error, which causes three systemic problems and weakens public confidence in qualifications.

To reduce marking error, assessments often comprise short, piecemeal questions – fragmenting curricular knowledge and encouraging rote learning. Marking error also means that grades are unreliable, unfairly hindering the life chances of some students. Moreover, marking error means standards over time and across awarding bodies cannot be accurately evaluated or maintained.

Our research into comparative judgement (CJ) led to the development of an assessment technique that eliminates marking error and avoids these problems. The method is widely used across the globe to support marking and enhance learning.

Our impact

Qualifications regulator Ofqual routinely applies our research

  • Fairer examinations have benefitted 5.5 million candidates over five years.

No More Marking Ltd routinely applies our research

  • Transformed practice in schools has improved the assessment of 579,400 students.
  • The assessment tool has enhanced the professional development of more than 10,000 teachers.

The research

Our research into marking error began almost a decade ago with a study investigating why GCSE mathematics was not fit for purpose.

We found that having to rapidly mark thousands of exam scripts resulted in question papers almost entirely comprising short-form questions that minimise marking error, but reduce validity.

Our comparative judgement (CJ) technique for assessing mathematical knowledge eliminates marking itself. Instead, subject experts decide which of two presented scripts is better against a high-level criterion such as conceptual understanding.

Many such binary decisions are collected from several assessors and then fitted to the Bradley-Terry model to produce a score for each student.
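The fitting step described above can be sketched in a few lines of Python. This is a minimal illustration, not the software used in the research: it estimates Bradley-Terry strengths from (winner, loser) judgement pairs using the classic MM (Zermelo) iteration, in which P(i beats j) = p_i / (p_i + p_j). The script labels and judgement data are invented for the example.

```python
from collections import defaultdict

def fit_bradley_terry(comparisons, iters=100):
    """Estimate Bradley-Terry strengths from (winner, loser) pairs
    via the classic MM (Zermelo) iteration."""
    wins = defaultdict(int)          # total wins for each script
    pair_counts = defaultdict(int)   # comparisons between each pair
    items = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        items.update((winner, loser))

    p = {i: 1.0 for i in items}      # initial strengths
    for _ in range(iters):
        new_p = {}
        for i in items:
            # MM update: wins_i divided by the sum over opponents j
            # of n_ij / (p_i + p_j)
            denom = sum(
                pair_counts[frozenset((i, j))] / (p[i] + p[j])
                for j in items
                if j != i and frozenset((i, j)) in pair_counts
            )
            new_p[i] = wins[i] / denom if denom > 0 else p[i]
        # normalise so strengths average to 1
        total = sum(new_p.values())
        p = {i: v * len(items) / total for i, v in new_p.items()}
    return p

# Hypothetical judgements: each tuple is (preferred script, other script)
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]
scores = fit_bradley_terry(judgements)
```

Here script A, preferred in all three of its comparisons, receives the highest strength. In practice the fitted strengths (often reported on a log scale) serve as each student's score.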

Further studies established the reliability of CJ as an assessment tool when applied to open-ended mathematics assessments, which better demonstrate a student’s understanding. We also demonstrated the reliability of CJ assessment outcomes across age groups, topics, institutions and jurisdictions.

We then developed a CJ-based technique for investigating standards over time and across equivalent qualifications. This revealed that standards in A-level Mathematics have declined since the 1960s. This research received much national media coverage – and the British Educational Research Journal’s Editor’s Choice Award 2017.

Research conducted at Loughborough has directly impacted the examination system in England and Wales, making it fairer and impacting around 1.1 million candidates per year. Our comparability and awarding work based on Loughborough’s research is crucial for ensuring public acceptance of the examination system.

Dr Michelle Meadows, Deputy Chief Regulator, Ofqual

To date, 2,227 schools in 27 countries have subscribed to No More Marking’s services

Research funders

  • AQA
  • National HE STEM Programme
  • No More Marking Ltd
  • Nuffield Foundation
  • Royal Society

Meet the experts

Dr Lara Alcock

Reader in Mathematics Education

Dr Colin Foster

Reader in Mathematics Education

Professor Camilla Gilmore

Professor of Mathematical Cognition

Professor Matthew Inglis

Professor of Mathematical Cognition

Dr Ian Jones

Reader in Educational Assessment