Oxford professors join Musk and Wozniak in call for six-month pause in AI development

At least 13 members of Oxford University's academic staff have now signed an open letter calling on labs developing artificial intelligence (AI) systems more powerful than GPT-4 "to immediately pause for at least 6 months".

Currently, the letter has amassed more than 30,000 signatories, including the likes of Elon Musk, Apple co-founder Steve Wozniak and US politician Andrew Yang, but remains controversial among its supporters and critics alike.

The letter was penned by the Future of Life Institute, a non-profit organisation criticised for supporting theories such as longtermism. This philosophy treats improving the long-term future as a key moral priority and is supported by academics such as Oxford professor Nick Bostrom, who was recently criticised for a racist email he wrote in the 1990s and whose work is cited in the open letter. The letter raises concerns that AI development is proceeding at a pace that far outstrips our comparatively limited understanding of the risks it might entail.

The support of various Oxford academics among thousands of other signatories has been described by one academic as part of "sounding the alarm".

Carissa Véliz, one of the signatories of the open letter, is an Associate Professor at the Oxford Faculty of Philosophy and the Institute for Ethics in AI. The institute, launched in 2021 as a part of Oxford's Faculty of Philosophy following a donation by Stephen A. Schwarzman, has been dedicated to exploring the ways in which the world of artificial intelligence interacts with areas such as human rights, democracy, the environment, governance and human well-being. Cherwell recently spoke with the Professor about how she believed the University of Oxford specifically should – if at all – respond to the current rate of development.

According to Véliz, while the establishment of the Institute was a "welcome development", "we'd stand a much better chance of ensuring that AI will contribute to the wellbeing of individuals, and to values like equality, fairness, and democracy" if we "invested a fraction of what is being spent on developing AI on research on the ethics of [its] governance".

When asked why she believes regulation has received less attention in the artificial intelligence industry than in other fields, she placed particular emphasis on the significance of private-sector monopoly.

Artificial intelligence, she explained, is "mostly being developed in private companies, as opposed to public institutions or universities", which "makes it harder to regulate". According to Véliz, these challenges are compounded by the lobbying power of "big tech companies", as well as the very nature of artificial intelligence as "a very complex technology, with unforeseen applications and possible consequences". She also added that she does not "subscribe to the longtermism movement".

According to the Future of Life Institute, the six-month "pause" is intended to mitigate these unknowns rather than to halt the development of artificial intelligence in general.

Despite this, certain experts within the field have criticised the letter for furthering a cycle of "AI hype" rather than offering concrete solutions to the threats actually posed by limited regulation. According to Arvind Narayanan, an Associate Professor of Computer Science at Princeton University, the letter "further fuels AI hype and makes it harder to tackle real, already-occurring AI harms".

While there are "valid long-term concerns", Narayanan tweeted on Wednesday, "they've been repeatedly strategically deployed to divert attention from present harms". And while he agrees that these concerns warrant "collaboration and cooperation", "the hype in this letter—the exaggeration of capabilities and existential risk—is likely to lead to models being locked down even more".

Further criticism has come from a group of researchers at the Distributed AI Research Institute (DAIR). They published a riposte to the letter, claiming that while the authors raise many legitimate concerns about AI, "these are overshadowed by fearmongering and AI hype". The DAIR writers also criticise the longtermist philosophy behind the open letter and its lack of attention to the exploitative practices of large corporations. As of March 29, there are also no signatories from OpenAI, the developer of GPT-4, or from the OpenAI spin-off Anthropic, which aims to create safer AI.

Oxford University's Associate Professor in Machine Learning, Michael Osborne, is another member of the university's teaching staff to sign the letter. Echoing fears about artificial intelligence undermining democracy, Osborne told Cherwell that the potential threats of under-regulated AI include "targeted propaganda, misinformation and crime", but that the University of Oxford is currently "leading the world" in its research.

Osborne added that if regulation fails to keep up with technological developments, it "will be necessary to tackle the possible harms from these models", particularly as technologies such as ChatGPT increasingly move into the sphere of public consumption.
