Methodology behind the 2025 Guardian University Guide

Illustration: Adrià Voltà/The Guardian

We use eight measures of performance, covering all stages of the student life cycle, to put together a league table for 66 subjects. We regard each provider of a subject as a department and ask each provider to tell us which of their students count within each department.

Our intention is to indicate how likely each department is to deliver a positive all-round experience to future students and, to assess this, we refer to how past students in the department have fared. We quantify the resources and staff contact dedicated to past students, look at the standards of entry and the likelihood that students will be supported to continue their studies, and then consider how likely students are to be satisfied, to exceed expectations of success and to have positive outcomes after completing the course. Bringing these measures together, we get an overall score for each department and rank departments against it.

For comparability, the data we use focuses on full-time first-degree students. For those prospective undergraduates who have not decided which subject they wish to study, but who still want to know where institutions rank in relation to one another, the Guardian scores have been averaged for each institution across all subjects to generate an institution-level table.

Changes in 2025

The structure and methodology of the rankings have remained broadly constant since 2008, but data availability has affected this year’s guide.

Years from which data is drawn
Normally, most metrics would have referred to activity that took place in the 2022/23 academic year. Because of the new reporting protocols that came into effect for that year – the structural changes to how student data is recorded that HESA and Jisc introduced under the heading ‘Data Futures’ – the data we would normally use was not available and would have been subject to major questions over its reliability.

Student-staff ratios and expenditure per student
Although new staff and finance data was available for 2022/23, the metrics that use this data all depend on student activity as a denominator. Because that denominator was unavailable, data from 2021/22 had to be reused, albeit with some updates and a reverification of how resources map to subjects.

Career prospects
The career prospects score is driven by the annual Graduate Outcomes survey, which traces graduates’ occupations 15 months after they complete their courses. Data for the 2020/21 and 2021/22 cohorts – the latter having become available more recently – was averaged and used for this metric.

National Student Survey
Data from the 2023 and 2024 surveys was used for the first time. Because publication rules for small response populations had been relaxed, additional rules were introduced to discount source data based on fewer than 10 respondents. The ‘overall satisfaction’ metric had already been removed last year in anticipation of the questionnaire changes in 2023.

The 2023 and 2024 surveys differed from the versions used in previous editions of the Guardian University Guide in how the questions were posed and in the responses that were available for students to select.

Other metrics
Data on entry standards, value added scores and continuation rates continues to refer to activity in 2021/22.

Franchised provision
Some established universities subcontract their courses to be delivered by franchise partners, and steps have been taken to review how such activity is counted within the compilation process. Because some of this provision is unique and relevant to prospective undergraduates, there has been no blanket exclusion of such activity. However, where a significant volume of franchised activity (400+ students) outweighs the activity delivered by the registering university, we have stripped the subcontracted activity out of the source data. This ensures that where a university is ranked for a subject, it is delivering at least the majority of the activity described by the statistics (and in the vast majority of cases, all of it).

Details of each metric

Entry standards
This measure seeks to approximate the aptitude of fellow students with whom a prospective student can expect to study and reports the observed average grades of students joining the department – not the conditions of admission to the course that may be advertised. Average tariffs are determined by taking the total tariff points of first-year, first-degree, full-time entrants who were aged under 21 at the start of their course, if the qualifications that they entered with could all be expressed using the tariff system devised by UCAS. There must be more than seven students in any meaningful average and only students entering year 1 of a course (not a foundation year) with certain types of qualification are included.

This metric contributes 15% to the total score of a department (24% for medical subjects) and refers to those who entered the department in 2021/22.
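
As an illustration, the tariff-averaging rules described above might be sketched in Python as follows (the data structure and field names are hypothetical, not drawn from HESA’s records):

```python
def average_entry_tariff(entrants):
    """entrants: dicts with hypothetical fields 'age', 'entry_year' and
    'tariff_points' (None when a qualification cannot be expressed in
    UCAS tariff points)."""
    eligible = [
        e["tariff_points"]
        for e in entrants
        if e["age"] < 21                    # under 21 at the start of the course
        and e["entry_year"] == 1            # entering year 1, not a foundation year
        and e["tariff_points"] is not None  # all qualifications tariff-expressible
    ]
    if len(eligible) <= 7:  # more than seven students needed for a meaningful average
        return None
    return sum(eligible) / len(eligible)

print(average_entry_tariff([{"age": 18, "entry_year": 1, "tariff_points": 144}] * 10))
# -> 144.0
```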

Student-staff ratios
Student-staff ratios seek to approximate the levels of staff contact that a student can expect to receive by dividing the volume of students who are taking modules in a subject by the volume of staff who are available to teach it. Thus a low ratio is treated positively – it indicates that more staff contact could be anticipated.

Staff and students are reported on a ‘full time equivalent’ basis and research-only staff are excluded from the staff volume. Students on placement or on a course that is franchised to another provider have their volume discounted accordingly.

At least 28 students and three staff (both FTE) must be present in an SSR calculation using 2021/22 data alone. Smaller departments that had at least seven student and two staff FTE in 2021/22, and at least 30 student FTE in total across 2020/21 and 2021/22, have a two-year average calculated.

This metric contributes 15% to the total score of a department (24% for medical subjects). It is released at HESA cost centre level, and we map each cost centre to one or more of our subjects.
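
A minimal sketch of these thresholds, assuming (as the text does not specify) that the two-year average pools student and staff FTE across both years:

```python
def student_staff_ratio(students, staff, prev_students=None, prev_staff=None):
    """All arguments are FTE volumes; a lower returned ratio is better."""
    if students >= 28 and staff >= 3:           # single-year calculation
        return students / staff
    if (prev_students is not None and prev_staff is not None
            and students >= 7 and staff >= 2    # smaller-department route
            and students + prev_students >= 30):
        # Assumption: the two-year "average" pools FTE across both years.
        return (students + prev_students) / (staff + prev_staff)
    return None  # not enough data for a meaningful ratio
```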

Expenditure per student
In order to approximate the level of resources that a student could expect to have dedicated to their provision, we look at the total expenditure in each subject area and divide it by the volume of students taking the subject. We exclude academic staff costs, as the benefits of high staff volumes are already captured by the student-staff ratios, but recognise that many costs of delivery are centralised: we therefore add the amount of money each provider has spent per student on academic services, such as libraries and computing facilities, over the past two years.

This metric is expressed as points/10 and contributes 5% to the total score of a department (10% for medical subjects).
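
In outline, and with the detailed accounting left aside, the calculation reduces to something like this (how the result maps onto the points/10 scale is not specified above, so no scaling is applied):

```python
def expenditure_per_student(subject_spend, central_spend_per_student, student_fte):
    """subject_spend excludes academic staff costs; central_spend_per_student
    is the two-year per-student spend on academic services such as libraries
    and computing facilities."""
    if student_fte <= 0:
        return None  # no students to divide the spending across
    return subject_spend / student_fte + central_spend_per_student
```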

Continuation
Taking a degree-level course is a positive experience for most students but is not suited to everybody, and some students struggle and discontinue their studies. Providers can do a lot to support their students – they might promote engagement with studies and with the broader higher education experience, or offer dedicated support when students face an obstacle – and this measure captures how successful each department is in keeping students on course. We look at the proportion of students who continue their studies beyond the first year and measure the extent to which this exceeds expectations based on entry qualifications.

To achieve this, we take all first-year students on full-time first-degree courses that are scheduled to take longer than a year to complete and look ahead to the first of December in the following academic year to observe the proportion who are still active in higher education. This proportion is viewed positively, regardless of whether the student has switched course, transferred to a different provider, or been required to repeat their first year – only those who become inactive in the UK’s HE system are counted negatively.

To take the effect of entry qualifications into account we create an index score for each student who has a positive outcome, using their expected likelihood of continuation, capped at 97%. For the score to be calculated there must have been 25 entrants in the most recent cohort and 50 across the last two or three years.

This index score, aggregated across the last two or three years, contributes 15% to the total score of non-medical departments and 10% to those of the medical subjects. However, it is the percentage score – also averaged over two or three years – that is displayed.
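
One plausible reading of the index – an assumption on our part rather than a published formula – gives each continuing student the reciprocal of their capped continuation expectation, so that a department exceeding expectations averages above 1:

```python
def continuation_index(students):
    """students: (continued, expected_probability) pairs; the expectation is
    capped at 97% as described above. Assumed scoring: reciprocal of the
    capped expectation for continuers, zero for leavers."""
    if not students:
        return None
    total = sum(1 / min(p, 0.97) for continued, p in students if continued)
    return total / len(students)

# All ten students continue despite a 90% expectation: index above 1.
print(continuation_index([(True, 0.90)] * 10))  # -> ~1.11
```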

Student satisfaction
The National Student Survey asks final-year students to respond to questions with varying degrees of positivity, generally with four available responses. For the questions that we convert into metrics for the University Guide, we calculate two statistics: a satisfaction rate and an average response. The satisfaction rate looks across the questions concerned and reports the proportion of responses that were positive (to a lesser or greater extent), while the average response gives the average score between 1 and 4 that was observed in the responses to those questions, where 4 is the most positive.
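
Assuming that ‘positive’ means the top two of the four response options, the two statistics can be computed from pooled response counts like so:

```python
def nss_statistics(response_counts):
    """response_counts: {score 1-4: number of responses}, pooled across the
    questions in the theme; 4 is the most positive. Treating scores 3 and 4
    as 'positive' is our assumption."""
    total = sum(response_counts.values())
    positive = response_counts.get(3, 0) + response_counts.get(4, 0)
    satisfaction_rate = positive / total           # share of positive responses
    average_response = sum(score * n for score, n in response_counts.items()) / total
    return satisfaction_rate, average_response

print(nss_statistics({1: 5, 2: 10, 3: 40, 4: 45}))  # -> (0.85, 3.25)
```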

To assess the teaching quality that a student can expect to experience we took responses from the 2023 and 2024 NSS surveys and aggregated them for the following questions:

  • How good are teaching staff at explaining things?

  • How often do teaching staff make the subject engaging?

  • How often is the course intellectually stimulating?

  • How often does your course challenge you to achieve your best work?

The satisfaction rate for each provider is displayed, and the average response is used with a 10% weighting (16% for medical subjects).

To assess the likelihood that a student will be satisfied with assessment procedures and the feedback they receive we took responses from the 2023 and 2024 NSS surveys and aggregated them for the following questions:

  • How clear were the marking criteria used to assess your work?

  • How fair has the marking and assessment been on your course?

  • How well have assessments allowed you to demonstrate what you have learned?

  • How often have you received assessment feedback on time?

  • How often does feedback help you to improve your work?

The overall satisfaction rate for each provider is displayed, and the average response is used with a 10% weighting.

Data was released at the levels of aggregation of the CAH (Common Aggregation Hierarchy) and we used details of how these map to HECoS (the Higher Education Classification of Subjects) to weight and aggregate results for each of our 66 subjects, prioritising results from the most granular level. Our aggregation rules required that there were 10 or more respondents for each CAH subject, with results for more general subjects used if this condition was not satisfied. After aggregating these results to the level of Guardian Subject Groups, there needed to be 23 respondents across 2023 and 2024 for the resulting statistic to be used in the guide.
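
The fallback between CAH levels and the final respondent threshold might be sketched like this (the respondent-weighted pooling across the two years is our assumption):

```python
def choose_cah_level(levels):
    """levels: (respondents, statistic) pairs, most granular CAH level first.
    Falls back to a more general subject when fewer than 10 responded."""
    for respondents, statistic in levels:
        if respondents >= 10:
            return respondents, statistic
    return None

def guardian_subject_statistic(levels_2023, levels_2024):
    chosen = [c for c in (choose_cah_level(levels_2023),
                          choose_cah_level(levels_2024)) if c is not None]
    total = sum(r for r, _ in chosen)
    if total < 23:  # 23+ respondents needed across 2023 and 2024
        return None
    return sum(r * s for r, s in chosen) / total  # respondent-weighted pooling

print(guardian_subject_statistic([(12, 3.4)], [(15, 3.6)]))  # -> ~3.51
```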

Value added
In order to assess the extent to which each department will support its students towards achieving good grades, we use value added scores to track students from enrolment to graduation. A student’s chances of getting a good classification of degree (a 1st or a 2:1) are already affected by the qualifications that they start with so our scores take this into account and report the extent to which a student exceeded expectations.

Each full-time student is given a probability of achieving a 1st or 2:1, based on the qualifications they enter with or, if their entry qualifications are too vague to model, the total percentage of good degrees expected for students in their department. If they manage to earn a good degree, then they score points that reflect how difficult it was to do so (in fact, they score the reciprocal of the probability of getting a 1st or 2:1). Otherwise they score zero. Students completing an integrated masters award are always regarded as having a positive outcome.

At least 30 students must be in a subject for a meaningful value added score to be calculated using the most recent year of data alone. If there are more than 15 students in both the most recent year and the preceding year, then a two-year average is calculated.

This metric is expressed as points/10 and contributes 15% to the total score of a department but is not used for medical subjects.
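
The scoring itself is explicit enough to sketch directly; only the probabilities, which are derived upstream from entry qualifications, are taken as given here:

```python
def value_added(students):
    """students: (got_good_degree, p_good_degree) pairs. A good degree scores
    the reciprocal of its probability; anything else scores zero."""
    if not students:
        return None
    scores = [1 / p if good else 0.0 for good, p in students]
    return sum(scores) / len(scores)

# A student with only a 50% chance of a 1st/2:1 who achieves one scores 2.0;
# a department averaging above 1 has exceeded expectations.
print(value_added([(True, 0.5), (True, 0.8), (False, 0.8)]))  # -> ~1.08
```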

Career prospects
Using results from the Graduate Outcomes survey for the graduating cohorts of 2020/21 and 2021/22, we seek to assess the extent to which students have taken a positive first step in the 15 months after graduation, in anticipation that similar patterns will repeat for future cohorts. Students who enter graduate-level occupations (approximated by SOC groups 1-3: professional, managerial and technical occupations) and students who go on to further study at a professional or higher education level are treated as positive outcomes.

Students report one or more activities and give more detail for each. If students are self-employed or working for an employer, we treat them as positive if the occupation is in SOC groups 1-3; if they have either finished a course or are presently taking one, we look at its level and treat them positively accordingly. Students who have no activity that is regarded positively, but who either reported that they were unable to work or only partially completed the survey, leaving details of an activity incomplete, are excluded from the metric.

The metric refers only to students who graduated from full-time first-degree courses, and we only use results if at least 15 students in a department responded in each of the two years, or if at least 22.5 students responded in the most recent year. Partial responses are used if the respondent provided details for any of the activities that they reported undertaking. We exclude responses if, for an activity, we are unable to determine whether it should be treated as a positive outcome.

We have always avoided averaging results across years for this metric because the national economic environment that leavers find themselves in can have such a big effect on employment, and this is especially true when a pandemic affects the economy. Unfortunately, response rates for the Graduate Outcomes survey are no longer high enough to maintain this stance. We therefore average the career prospects statistics across the two years in an unweighted manner, to avoid any advantage or disadvantage for a department that had a higher response for a cohort that faced better or worse economic conditions. Where only the most recent year of data meets the threshold for usage, we have applied the year-on-year sector difference observed for the subject concerned in order to simulate what a two-year average might have looked like given changing economic conditions.
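
The averaging and the simulated two-year figure reduce to something like the following (treating the sector difference as an additive shift is our assumption):

```python
def career_prospects(rate_2021, rate_2122, sector_change):
    """rate_2021, rate_2122: a department's positive-outcome rates, None when
    the response threshold is not met; sector_change: the sector-wide 2021/22
    rate minus the 2020/21 rate for the subject."""
    if rate_2021 is not None and rate_2122 is not None:
        return (rate_2021 + rate_2122) / 2          # unweighted two-year average
    if rate_2122 is not None:
        simulated_2021 = rate_2122 - sector_change  # assumed additive shift
        return (simulated_2021 + rate_2122) / 2
    return None
```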

This metric is worth 15% of the total score in all the non-medical subjects.

Using metric results
First of all, we determine whether a department has enough data to support a ranking. Often individual metrics are missing, and we seek to keep the department in the rankings where we can. An institution can only be included in the table if the weightings of any missing indicators add up to 40% or less, and if the institution’s relevant department teaches at least 35 full-time first-degree students. There must also be at least 25 students (FTE) in the relevant cost centre.

For those institutions that qualify for inclusion in the subject table, each score is compared with the average score achieved by the other qualifying institutions and expressed in standard deviations from that average, giving standardised scores (S-scores). The standardised score for student-staff ratios is negated, to reflect that low ratios are regarded as better. We cap certain S-scores – extremely high NSS, expenditure and SSR figures – at three standard deviations. This prevents a valid but extreme value from exerting an influence that far exceeds that of all other measures.
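
A compact sketch of the standardisation and capping (whether the cap applies to both tails is not stated above; it is symmetric here):

```python
import statistics

def s_scores(values, lower_is_better=False, cap=None):
    """Standardise a metric across qualifying departments; negate when low
    values are better (student-staff ratios); optionally cap at +/- cap
    standard deviations."""
    mean, sd = statistics.mean(values), statistics.stdev(values)
    scores = [(v - mean) / sd for v in values]
    if lower_is_better:
        scores = [-s for s in scores]
    if cap is not None:
        scores = [max(-cap, min(cap, s)) for s in scores]
    return scores

print(s_scores([12.0, 15.0, 18.0, 40.0], lower_is_better=True, cap=3.0))
```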

For metrics in subjects where there are very few datapoints we refer to the distribution of scores observed for a higher aggregation of subjects (CAH1). We also set a minimum standard deviation for each metric and make adjustments to the mean tariff referenced by departments with students who entered with Scottish Highers or Advanced Highers.

Although we don’t display anything for a missing indicator, we need to plug the gap it leaves in the total score. We use a substitution process that first looks for the corresponding standardised score in the previous year and then, if nothing is available, looks at whether the missing metric is correlated with general performance in that subject. If it is, the department’s performance in the other metrics is used – effectively assuming that it would have performed as well in the missing metric as it did in everything else. If not, the average score achieved by other providers of the subject is used.
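
The substitution order described above amounts to a simple cascade (the correlation test itself happens upstream and is taken as a flag here):

```python
def substitute_missing(prev_year_score, is_correlated, other_scores, subject_mean):
    """Fill a missing standardised score: last year's score first; otherwise
    the department's average in its other metrics if the missing metric
    correlates with general performance; otherwise the subject average."""
    if prev_year_score is not None:
        return prev_year_score
    if is_correlated and other_scores:
        return sum(other_scores) / len(other_scores)
    return subject_mean
```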

The standardised scores are then weighted, using the weighting attached to each metric, and totalled to give an overall departmental score (rescaled to 100), against which the departments are ranked.
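
Finally, the weighted total and the rescaling might be sketched as follows (min-max rescaling so the top department scores 100 is our assumption; the exact rescaling is not spelled out above):

```python
def overall_scores(departments, weights):
    """departments: {name: {metric: s_score}}; weights: {metric: weight}.
    Returns totals rescaled so the best department scores 100."""
    totals = {name: sum(weights[m] * s for m, s in metrics.items())
              for name, metrics in departments.items()}
    top, low = max(totals.values()), min(totals.values())
    span = (top - low) or 1.0  # guard against identical totals
    return {name: 100 * (t - low) / span for name, t in totals.items()}
```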
