Saturday 18th October 2025

Embracing AI undermines academia

Intelligence and Oxford are usually synonymous. After a term or two this idea generally wanes amongst the student population, but there remains an underlying truth: people at this University have some idea of how to think. Why, then, is artificial intelligence being repeatedly imposed upon us, arriving in our search engines, our eBook services, and most recently across the entire University? The use of AI is rapidly changing from a choice, made largely by hungover undergraduates, to an expectation handed down from the University administration. This assumes the worst of Oxford’s students – ignoring their genuine desire to work hard and improve – and treats academia as a means to an end rather than a worthwhile occupation in itself.

First to be infiltrated were the eBook services. Nobody accessing a 1970s monograph on coinage in revolutionary America has any use for a vague and inaccurate summary, and making such a feature unavoidable amounts to an immediate assumption of laziness. I’ll save you the tangent on Britain’s anti-intellectual culture, but we live in a world that increasingly caters to the lowest common denominator. Maybe it is asking too much of online book providers, but one should be able to read and seek information unencumbered by constant simplification.

The most glaring issue with AI is that it is often factually wrong. The University cites literature reviews as an example of AI being helpful, but a tool cannot assist if it does not understand the work in the field. An AI summary of ‘Dress and Society: Contributions from Archaeology’ by Toby F. Martin and Rosie Weetch highlights Virginia Woolf as a “key concept”, on the strength of a single quotation. While my grip on medieval dress archaeology may leave my tutors somewhat wanting, I can say with some certainty that Virginia Woolf does not play a major role. AI is only capable of clinging to words it has seen many times before, much like a three-year-old recognising their own name.

While this example is obviously incorrect, had the summary flagged something less conspicuous, the error could easily have gone unnoticed. When AI is used in the very ways the University recommends – conducting the aforementioned literature reviews, or identifying gaps in research – such errors become a significant problem.

Most disturbingly, the AI writing the summary seems to believe that it is the author of the book. Its claim that “we seek to promote [dress] as fundamental to…understanding past societies” appears relatively innocent, if not especially accurate. But it is the claim of authorship that is more worrying. Martin and Weetch did not argue that. It is one thing to produce a poor summary; it is another to put words into the authors’ mouths.

Recently, the University has taken its embrace of AI one step further, providing access to ChatGPT-5 for all staff and students. OpenAI – the company behind ChatGPT – has been sued multiple times for using copyrighted work to train its models. While I would not recommend looking to Oxford University governance for especially moral decisions, I had hoped that the ideas of intellectual property and authorial remuneration might resonate somewhat. Instead, the University is funnelling money into a company that undermines those values.

The University is struggling with AI usage. I am not ignorant of that, nor of the argument that by facilitating AI, the administration gains better means of controlling how it is used. But by embracing it like this, Oxford University is simply giving up on properly engaging with the most pressing issue facing academia today. In doing so, the administration is letting its students down.

Where the University gives advice on AI usage, the recommended uses often amount to a direct replacement for engaging with another person. In some cases, such as working on an academic tone in one’s writing, this may be helpful. In others, such as hearing a “range of perspectives” or asking “critical questions about a text”, speaking with others and simply thinking can achieve the same outcome – with the added benefit that the student might actually grow intellectually, rather than merely being ready to answer the next question.

I struggle to see how any humanities subject benefits from AI. Everything I know current humanities students to be asking it to do harms the education we are supposed to be receiving. Developing the ability to think critically and to understand – rather than merely to learn information – is the hallmark of an Oxford degree. So while an AI chatbot might help you regurgitate ‘facts’, continued usage undermines the very point of our being here.

A few months ago, a joke about ‘just having a think’ circulated on social media. While light-hearted, it speaks to a wider sentiment. We have not evolved as a species in the past few years to lack either the capacity for thought or the desire for it. Tech companies and the University alike treat AI as some inevitable, coveted invention: this is simply not the case. Oxford is full of intelligent and engaged people; people who want to do the work, and want to have opinions on it. By facilitating copious AI usage, the University fails to deliver on its centuries-long tradition of encouraging independent and original thought.
