The University of Oxford has released new guidance on the use of artificial intelligence (AI) tools for students. The guidance, published 8 January 2024, follows significant interest in the promises and dangers of AI, including the 2021 launch of the Oxford Institute for Ethics in AI and the continued advertisement of the Saïd Business School’s Oxford Artificial Intelligence Programme.
The guidance permits students to “make use of generative AI tools […] in developing [their] academic skills and to support [their] studies.” They are warned, however, that “AI tools cannot replace human critical thinking or the development of scholarly evidence-based arguments and subject knowledge that forms the basis of [their] university education.” This advice is particularly stern toward students who might pass off AI-generated text as their own: “Unauthorised use of AI falls under the plagiarism regulations and would be subject to academic penalties in summative assessments.”
The guidance does provide examples of cases where the use of AI is both helpful and permissible, such as producing a summary of an academic paper, providing feedback on writing style, or listing key concepts likely to appear in a forthcoming lecture.
In all cases, however, it is stressed that use of AI should not be seen as a substitute for developing an individual’s capacity to learn and that any facts given by AI should be cross-referenced with traditional scholarly sources. Even if students follow these guidelines, the policy maintains that students “should give clear acknowledgements of how [AI] has been used when preparing work for examination.”
This is consistent with the University’s guidance on plagiarism, which states that students “must clearly acknowledge all assistance which has contributed to the production of [their] work.” This same guidance states that “AI can only be used within assessments where specific authorisation has been given, or when technology that uses AI has been agreed as reasonable adjustment for a student’s disability.”
It is not clear in which cases such specific authorisation has been given; of the five most studied undergraduate courses (Medicine, Law, History, PPE and Chemistry), only the Faculty of History includes reference to specific authorisation of AI use in its Undergraduate Handbooks, and this is simply to restate the same conditions from the University’s overall guidance on plagiarism.
The use of AI in education is sure to be an ongoing point of discussion among all universities as the technology develops, and there are clearly points of controversy among Oxford faculty which the guidance seems to obscure. While some faculty members signed an open letter calling for a six month pause in AI development (as reported by Cherwell), the Department of Computer Science understandably has “Artificial Intelligence and Machine Learning” as a key research focus.
In response to these disputes over the role of AI, the Russell Group published a joint statement on 4 July 2023, setting out five principles for the use of AI in education:
- Universities will support students and staff to become AI-literate.
- Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience.
- Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access.
- Universities will ensure academic rigour and integrity is upheld.
- Universities will work collaboratively to share best practice as the technology and its application in education evolves.
These principles are clearly mirrored in Oxford’s advice. They are reworked into the newly published guidance as questions for students under the heading “Five things to think about when using generative AI tools,” although the guidance does not acknowledge the joint statement or its five principles.