Oxford's oldest student newspaper

Independent since 1920

New guidelines on AI usage in academic work emphasise human responsibility

New ethical guidelines for the use of large language models (LLMs), notably ChatGPT, in academic writing have been published by researchers from Oxford University and other leading global universities. The ethical framework aims to ensure the validity and integrity of, and trust in, LLM-assisted work.

The guidelines state that there must be a substantial human contribution to the work’s design, analysis, and data. At least one researcher must be able to guarantee the accuracy of the research and take responsibility for each substantive claim and piece of evidence in the writing.

The framework also emphasises the importance of researchers being transparent about their use of LLMs and other generative AI in their research. The article provides a template for authors to use in their work to help them declare their use of LLMs.

Recent improvements in LLMs have seen their increasing use in academic work due to their high level of performance, efficiency, and accessibility. The use of LLMs in academic writing has generated concerns about plagiarism, authorship attribution, and trust in research. LLM development has seen different models specialise in different academic fields. The most popular general-purpose LLMs include ChatGPT, Claude, and Bard.

The guidelines, published in Nature Machine Intelligence, state that “LLM use should neither lower nor raise the standard of responsibility that already exists in traditional research practices.”

The ethical framework was developed by researchers from Oxford’s Uehiro Institute, which focuses on contemporary ethical challenges, alongside researchers from the University of Cambridge, University of Copenhagen, and the National University of Singapore. 

Guidance for Oxford students from the University says that “unauthorised use of AI falls under the plagiarism regulations and would be subject to academic penalties in summative assessments.” Nonetheless, the guidance says that students can “make use of generative AI tools (e.g. ChatGPT, Claude, Bing Chat and Google Bard) in developing [their] academic skills to support [their] studies”, and even gives tips on how to use LLMs. 

In the last week, the use of AI became a point of contention for English students at Keble College as they were reminded in an email from their Director of Studies of their “responsibility to make sure that [their] work is not plagiarised…includ[ing] the use of AI (such as Chat GPT) to present the writing/thoughts/work of other sources as [their] own.” The email went on to highlight Keble and University guidance on plagiarism and academic misconduct. 
