It is often tempting, once at university, to scan the rankings tables, if only to win bragging rights over friends elsewhere. But such tables carry dangers that the casual observer easily overlooks. The obsession with reductive scoring, driven by a commercial desire for universally applicable evaluation, often clouds rather than clarifies the judgement of university applicants.

One only has to look at the variables chosen to determine the rankings. In The Guardian's rankings, eight variables are combined into a composite ‘Guardian Score/100’. But this seemingly scientific measure conceals a remarkable degree of subjectivity. ‘Student satisfaction’ is included in every ranking, yet student experiences and teaching quality vary so widely that an average score is misleading. Non-academic factors also shape students’ evaluations: the school they attended, their personality, and their individual expectations. Indeed, the very idea of measuring apparent happiness seems hardly legitimate, let alone worthy of being one of the primary measures by which applicants select their future universities.

Other variables raise problems too. The student-to-staff ratio guarantees little about teaching quality, as is perhaps demonstrated by the fact that Oxford has the lowest ratio of any university on the list, at 10.3. The Guardian does not even make clear which staff are included in the measurement. The other distinctly problematic measure is ‘Career after Six Months’. It is typically measured by graduates’ salaries or, more crudely still, by the percentage of graduates in work. This ignores the fact that many value job satisfaction over pay, or seek graduate schemes where salaries may be minimal but which lead to higher-paying opportunities later. Employment outcomes are largely a matter of personal choice, so the effect a specific university has had on winning employment for any given graduate is minimal or non-existent.

With such obvious flaws, we must ask why so many outlets continue to publish rankings. The answer lies in the commercialisation of universities, especially over the last decade, in which structural changes to tuition fees have fundamentally altered the relationship between student and university. Many now see it as a customer-business relationship, in which customer treatment and quality of service can be scored and totalled. These comparisons are then spun by universities as marketing.

The idea that academia should be treated like this is worrying. There is no linear spectrum of academic quality to be measured, and it should concern us all that the nature of our higher education institutions is being changed in so short a time. We permanently damage the academic community when we push institutions to choose between what is right (for students’ academic and personal development) and what will score well; the two will not always overlap.

For Cherwell, maintaining editorial independence is vital. We are run entirely by and for students. To ensure independence, we receive no funding from the University and are reliant on obtaining other income, such as advertisements. Due to the current global situation, such sources are being limited significantly and we anticipate a tough time ahead – for us and fellow student journalists across the country.

So, if you can, please consider donating. We really appreciate any support you’re able to provide; it all goes towards our running costs. Even if you can’t support us monetarily, please consider sharing articles with friends, family, and colleagues: it all helps!

Thank you!