Thursday, May 29, 2025

‘We need a different approach’: Students and tutors on AI in academia

With the never-ending stream of ChatGPT releases, the question of generative AI looms large: where is the line between using it and relying on it? Between saving time and sacrificing learning? It’s hard enough to adjust to the present, but what will the future hold as the technology only gets more powerful? To find out what people really think, Cherwell brought together two Oxford focus groups. The first was a roundtable of eight students from a variety of subjects; the other, two academics working in the humanities and social sciences. We asked them about their experiences, hopes, concerns, and predictions. We kept panellists anonymous, referring to them by the subjects they study, so that we could have a candid discussion.

The panellists seemed divided on what ‘ethical’ usage of AI really means, whether it can have original thoughts, and the extent to which we should be using generative AI in our degrees. There was, however, a shared anxiety as to what AI means for the future and a strong sense that Oxford is unprepared to tackle these problems.

The Student Roundtable

Everyday uses
What sort of things do people use AI for in their day-to-day personal lives?

Compsci and Philosophy: I use generative AI instead of Google at this point. I also use it daily for coding my own personal projects. If it’s a simple enough app, I basically won’t write a single line of code, and you can make some surprisingly sophisticated things.

Biology: Recently I was riding a bike and the chain fell off and I didn’t know how to reattach it so I just asked DeepSeek ‘How do you reattach a bike chain?’ and it gave me step by step instructions.

Is there a reason you didn’t use more traditional sources like YouTube or Google?

Biology: The response is a bit more tailored. With the bike example, at first I didn’t know what was happening. I just told it that my bike suddenly stopped and the wheel wouldn’t turn anymore, and it tells you the problem and how to fix it. And then you have this back and forth that you can’t have with Google.

Compsci and Philosophy: Particularly when it’s a complex thing where you want to read a few different things and try to understand it. It condenses everything into one simple answer. Maybe it’s just me being lazy and not wanting to have to click on the website.

Philosophy and French: I’m interested in the things you guys search up, does it include political things, historical things?

Compsci and Philosophy: I guess it really depends on the topic. I think actually, for complex issues, ChatGPT’s Deep Research is really impressive. It’ll generate a whole paper exploring the different angles and different interpretations of what people have said. Yeah, I guess I don’t have any reason specifically not to trust it.

The ethics of using AI academically
Raise your hand if you think there are situations where it is ethically okay to use generative AI: 8/8 raised their hands

Raise your hand if you think it’s ethically okay to have AI help you with a piece of work you’re doing for your degree: 4/8 raised their hands

Raise your hand if you think it’s ethically okay to give ChatGPT your notes and have it make an essay outline for you: 4/8 raised their hands

Biology: For science, it’s really useful to get a preliminary overview of a topic. The alternative is going through a lot of very dense papers that you might not understand, especially if you don’t even know the basics yet. But with AI, you can pull together sources quickly and get a brief overview so you have a rough idea of how to structure things. I don’t think it’s advanced enough yet to write a really detailed or good essay. So, I use it just for the overview, and then I put the sources together myself.

Law with European Law: I’ve only really used it to understand cases. But it depends which level of ChatGPT you use. I figured out that if you use the normal version, it can make up cases, which is grand, but for some reason when you pay for it, all those problems go away. I use it to help with understanding cases and academic articles, but I never use it for submitted work. To be fair, I’ve also been given some AI software through my Disabled Students’ Allowance; stuff to make flashcards, and obviously Grammarly, which is very AI-heavy. So that’s quite interesting to me, that even DSA is now using AI-powered tools for disabled students.

People seem to be comfortable using generative AI to structure essays and give overviews. Is there a reason people aren’t using it to write their essays?

Compsci and Philosophy: I have a funny story about this. For one of my philosophy essays, I very stupidly chose an argument that, when I tried to understand it, made absolutely no sense. One late, sleep-deprived night, I uploaded the PDF of my draft and asked the AI to continue writing my essay. What it gave was way better than what I could’ve written, and it would’ve taken me ages. I highlighted the AI-generated parts in red and flagged this at the top of my essay. When I showed my tutor, they didn’t mind. I think tutorial essays are more for you than for them. It’s rude to hand in a fully AI-generated essay expecting a mark but you get what you want from the degree and your tutorials.

Would everyone feel comfortable doing the same thing and just labelling what was done by AI in red?

Law with European Law: I think for me there’s a sense of self-pride, having come here off the back of my own hard work. I want to improve my own skills and not have a robot do it for me, because if I suck at writing essays, that’s something I need to work on. If I struggle with essay technique, that’s a perfectly normal part of university life. But you don’t learn unless you make your own mistakes. I feel like getting AI to do it means you just don’t learn. Also, I just don’t feel proud; I feel guilty, icky about it, because it’s not my work. It’s pure plagiarism, in my opinion.

University policy
If you all were advising Oxford University administration, how would you try and draw the line on an acceptable use policy for AI?

PPE 1: A big problem with making laws like that is that AI detectors are absolutely terrible. Because they are so bad at identifying AI, it would be a terrible university policy to say that if we detect AI in your work, you’re done for, because you can never be sure.

Law with European Law: To bring back the context of disability usage of AI: if Grammarly is being used, you can’t really penalize a student for that when those are the tools they have been given to be on a level playing field with everyone else. Using AI to a minimal extent, for example spell checking, word choice, grammar, especially in a disability context, is okay. Going beyond that and using it in an actual essay or in an exam, I would say, goes beyond the bounds of academic integrity.

PPE 2: I think that in some ways it’s kind of like an arms race. This is less true in Oxford, where essays are graded individually, but in other universities where each essay is graded work, if everyone else is using AI, it becomes difficult to do it all on my own. So, on a university level, regulators should be thinking, ‘What would I be okay with every single student in this university doing?’ I wouldn’t want every single student to leave university having done all their readings through AI, having everything summarised by AI, and having written all their essays with AI.

Compsci and Philosophy: We shouldn’t just be thinking about the present state of AI, but also where it’s going. The fact is that this field is moving so fast, and I think we are going to have fundamentally radical transformations in the way our economy functions as a result of AI. We need a different approach to thinking about AI that equips people with the skills they will need in their future employment, rather than just sticking with what has worked for hundreds of years.

Biology: I think universities need to take an active approach to teaching students how to use AI as a resource and a tool. For example, in biology AI is amazing at generating notes and resources, but at the same time it hallucinates and makes mistakes. Yet we are never taught how to use it. If universities say you can’t use AI for anything, or discourage its use, then you lose out on learning this whole skill of working with AI. That is not how the future is going to look; we will not have labs where all AI is banned.

Future job prospects
If you imagine the job you want five years from now… do you think AI is going to change it?

Law with European Law: I want to do music and the industry does not care about creativity being lost. It is just looking for a sexy single. So, it will just get AIs to churn out what the charts want. There is no actual individual voice there, but the industry does not care. That is something I am worried about. I do not actually see it necessarily taking away from artists’ individual artistry yet, but it is a worry considering the way the industry runs. The temptation will be there to just use it as a profit machine.

Biology: I think AI as it currently is does not have the ability to make massive changes. But I think the next system will replace a large number of principal investigators because AI will have knowledge from every single field. It will be able to identify new problems and directions much quicker and probably better than most principal investigators. It’s already doing that, but a culture change takes time.

Maths: Something I’m concerned about is that in the past we have had technologies that destroyed certain career options; very few people are employed nowadays making saddles for horses. But it has always been the case that we were able to retreat to something else, like services and cognitive tasks. I am concerned that maybe there will come a time, perhaps in the near future, when there are fewer and fewer options for humans to retreat to and work in. I’m not sure about the economics of how this all works out, but even thinking about it naively, it is concerning that the economic power of individuals will be really reduced.

PPE 2: I read something similar to that, where in the past a lot of technological innovations that actually led to jobs going down were tools meant to enhance human ability, whereas AI aims to mimic human abilities. So, I think it’s a very different kind of tool, where the end goal for AI is to replace the person, not just enhance their abilities.

Biology: But I think right now the economic incentives and everything are in line so that AI will be an agent replacing, if not all jobs, at least the jobs we would traditionally consider really high status. It seems like there will not be much meaningful work left to do.

Closing thoughts
Last one, how would you sum up your thoughts on AI?

PPE 1: I think there’s a risk it becomes a parasite that replaces human creativity entirely. Creativity, in many ways, is something that gives a lot of people a sense of purpose, and to have that replaced with AI seems completely pointless. It feels like the wrong thing entirely is being targeted: replacing the things that people want to do, rather than coming up with solutions to replace the jobs that people don’t want to do.

Philosophy and French: I find the current manifestations quite depressing really. Particularly, you know, with removing or eliding human interaction or just human effort, both in the ways that people are using it and in the way it reveals the incentives and how people think about things like educational creativity in society. I find it quite disheartening.

PPE 2: I think that, based on what we’ve seen so far, it’s had a net negative impact on the academic space in general.

Law with European Law: I think it has positive potential, but is it going to be used in that controlled, assistive way to enhance human efforts? No, I don’t think so. Because of economic incentives, the goal is basically to save costs and have AI perhaps do everything. That is why I am quite worried about how far AI will go.

Classics: There’s definitely a tension between what’s a morally acceptable way to use AI and what’s an intelligent way to use AI.

Biology: A key issue that very few people seem to know about is what the future of AI will be. This is not talked about enough. People often focus on issues like AI taking away creativity, but the idea of AI as more than just a tool that humans use, and how AI will be integrated into society along with the harm that could cause, these topics are rarely part of the wider discourse. I think this could potentially be very dangerous.

Maths: Today’s AI is the least capable it will ever be. And it’s a very urgent question: how do we control this? How do we situate it in society in a way that is net positive?

Compsci and Philosophy: I think we are living through completely insane times which could be the most transformative period in human history. I don’t think anyone is really taking this seriously. Society is not prepared for what is coming. Policymakers don’t understand what is happening, and progress just keeps accelerating. Soon AI will be able to automate AI research itself. According to Google’s report this year, AI is already generating 30 percent of code at Google and speeding up AI research. We are approaching a point where these systems will be vastly superhuman, and that moment is coming soon.

The Academic Panel

One of the Russell Group principles on AI that Oxford has adopted says that “Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience”. What’s your response to that?

Humanities Professor: I’m sure, in a fast-changing landscape, the central University felt there was no choice but to be proactive on these things. I think the adoption of that is wildly out of whack with the reality in most teaching spheres of the University. There may have been consultation of departments and faculties, but I can say that the majority of people do not feel consulted about that decision, and nor is it clear what the implications of it are. If you are, as an institution, adopting that position, then students would have a reasonable expectation of a certain level of literacy on the part of their tutors to help them navigate these waters. And we are absolutely not at that point. We have tutors, I imagine a number, who for ethical reasons or just fear reasons will never have laid eyes on a generative AI interface. They have just kept completely away from it, and they are not in a position to advise their students or to help them gain any kind of literacy.

Have you picked up on any changes in student attitudes to work since ChatGPT has started to get better?

Social Sciences Tutor: I only recently started teaching, but I do see that students are nervous about what generative AI means for them, the tools and skills that they need, and how it is affecting and changing both the job market and the political landscape. The sense I get is that beyond just ‘Oh, should I use it in my essays?’, there’s a deep unease and fear about what AI is doing to the socio-political landscape, and students are thinking, ‘If ChatGPT can do it, what’s the point?’ What I hope the fear around AI can do is prompt a deeper discussion of what the value of the university is.

Humanities Professor: One Oxford-specific risk I was thinking about as I was walking over here is that one of the impacts of AI has been to reinforce the commitment of some faculties to in-person exams. COVID had actually made it clear that certain alternative forms of assessment were possible, and effectively the arrival of ChatGPT killed those discussions. But that obviously carries a whole set of risks with it, right? I mean, we shouldn’t be committing to a mode of examination indefinitely because of fears we have about possible misuse of something. There are all kinds of implications in terms of gender disparities in performance, neurodiversity. My worry, to put it in a nutshell, is that Oxford structures allow us to avoid some of those problems, or those questions, rather than think about them. We can always say to our students, you know, use it if you want, but you’ll have to be there in the exam hall. We are hamstringing ourselves if we allow that to shut down the wider discussion.

Where is the limit with getting help from AI, and where should we draw the line beyond which it is wrong for ethical reasons?

Social Sciences Tutor: I am what could be called an AI abolitionist. I think there are no use cases for it, and this goes beyond the education system. Even if there were a use case, I think the environmental harms, the money flow, the kind of companies and politics you’re supporting by using it are enough to say absolutely no to any usage. I think it also disrupts students’ learning processes in terms of, like, what is the point of writing an essay? Which is that you learn, you learn how to think. You learn how to critically examine. And so, the problem is not that they’re deceiving us; the problem is that students are missing out on the opportunity to learn.

Humanities Professor: What concerns me about the open endorsement of certain uses of it, is that we don’t know yet what impact it has on people’s learning. I think in a world where students are feeling pressured into using it for tutorial essays we should think about ways to reiterate the basis of our pedagogy, which is predicated on this idea that, you know, if you write me bad essays, you haven’t wrecked your grade.

There are students turning to AI because they feel pressured but there are also those who feel it genuinely helps them with their work. It can be used to turn notes into flashcards, give prompts on grammar, and help you prepare for lectures. It’s not just giving it your essay question and saying ‘Write me 2000 words’.

Social Sciences Tutor: I think those types of usage result from a fundamental misunderstanding of what the technology is doing. It’s just an incredibly competent, environmentally destructive magic eight ball. It’s guessing; it’s literally producing bullshit. Which is not to deride other excellent, non-generative AI tools like spell check. But what I would say to the kind of usage you’re talking about is that generative AI is being used because it exists; if you were going to design a learning tool, it’s not how you would design it.

Humanities Professor: I’m pretty torn on this question because pragmatically of course I can see the appeal for students of these time-saving techniques. There are menial tasks in my own research which I’ve been tempted to use AI for. I think to encourage and incorporate more reflection by students on their own learning processes would be valuable. Reflecting on why they’re using ChatGPT and what they’re getting from it would separate the things people are doing out of fear from the things they are doing because it’s genuinely useful. But as an academic institution, that should be a matter for thought and discussion, not just something that we rule on one way or the other.

Current University policy says “ethical and appropriate use” is okay. Has Oxford gone too fast with that?

Social Sciences Tutor: My position, which I wish was the university’s position, is to say absolutely not. We will not pay to license any of these generative AI tools, and we will resist their adoption in any shape or form. This will never happen, but this is what I wish the policy was, for two reasons. One is the environmental cost of the increased emissions. Second, look at who is running and benefiting from these companies: Elon Musk, Peter Thiel, the MAGA movement. It’s being used to surveil people who are coming into the United States and put them in ICE detention facilities. We can’t separate that. So, I wish the university’s policy was to say that we abolish and resist generative AI.

Is it too late?

Social Sciences Tutor: I don’t think so. Generative AI could go away like that *snaps fingers*. But there is a kind of nihilism and this sense that tech is imposed upon us. With AI, I don’t have to search out ChatGPT; everywhere I go, I am being subjected to AI. People go, ‘Oh, that’s just meant to be the way the tech is.’ But it doesn’t have to be.

A lot of students expressed concerns about their future job prospects; what would you say to them?

Social Sciences Tutor: I think it is not that AI can do those jobs better. The only reason it looks like it can is because of stolen data from stuff that humans have already done. But that being said, it does not mean that AI won’t be used to cut costs. I think students are absolutely right to be despondent about what AI is doing. But, if Oxford mobilises or organises at the student level, at the academic level, it has a lot of power in what it can influence and decide to do. So, I’m despondent, but hopeful.

Humanities Professor: I would encourage students not to pre-emptively give up on ambitions they might hold on to on the assumption that AI is going to render their dreams obsolete. Technological changes of this magnitude that threaten to erase the significance of certain features of humanity, I think, end up generating a kind of irrepressible appetite for the distinctively human. I think it’s hard to feel hopeful without thinking in those very abstract terms. I’m not sure that’s much help to a student who’s worrying about their individual job prospects with an English degree or a history degree. But I do think that assuming the world into which one is moving is one where those things will just not exist, that feels premature to me.
