The EU’s AI Act is significant for all of us

Clemmie Read discusses the world's first attempts to legislate AI, and what it means for the future.

Oxford students have been no strangers to ChatGPT since its much-hyped launch in late 2022. Squabbles about the ethics of its use in research or essay-writing – balancing the efficiency it offers against the obvious problem of cheating – have acted as a microcosm of broader debates about A.I. over the last year. Its applications can be incredibly beneficial (as ‘work-smart-not-hard’ types can attest), but they are matched by risks and problems which, until now, countries worldwide have not been confident enough to regulate. That is beginning to change; and whether you’re a keen ChatGPT user or not, the regulatory future of A.I. will affect you.

It was certainly a long time coming: the European Union’s much-awaited A.I. Act was agreed in the early hours of Saturday 9th December, after days of marathon trilogues (three-way meetings between the Parliament, the Council and the Commission) which spanned 22-hour sessions at a time, with the occasional break for the legislators to sleep. Critics understandably objected to this essay-crisis approach to one of the most important regulatory advancements of the century. But the regulators were racing to surmount objections from powerful opponents which could have delayed the Act by several years, as both member states and tech companies lobbied to limit the regulation. A landmark agreement was finally reached.

This surreal episode, live-tweeted throughout by those in the room, was itself only the final step in a legislative process stretching back to 2018, when EU regulators first began work on the A.I. Act. Of course, this was long before the introduction of general-purpose A.I. models like ChatGPT, whose launch in November 2022 brought both generative A.I. and fears about its applications into the mainstream. The 2021 draft was torn up and re-written to address these developments. Consequently, the new Act claims to be ‘future-proof’; whether or not we believe that laws which could take two years to enforce can govern the unknown developments to come, one thing is certain: the A.I. Act is a huge regulatory landmark. It is the beginning of a governance process that will determine the global future of technology and, with it, the future of our society.

Oxford itself is at the forefront of these legal and philosophical debates, not only because its researchers pioneer crucial technological advancements, but because the Philosophy Faculty now includes an Institute for the Ethics of A.I., launched in 2021 to conduct independent research into ethics and governance – work too often left to the companies themselves. As John Tasioulas and Caroline Green of the Institute explain, the Act “strives to avoid the twin perils of under-regulation (failing to protect rights and other values) and over-regulation (stifling technological innovation and the efficient operation of the single market)” by taking a tiered approach. Different A.I. models will be treated very differently: ‘minimal risk’ applications get a free pass, but ‘general-purpose’ models like ChatGPT will have to provide full transparency about everything from the content used to train the systems to cybersecurity measures and energy efficiency. The use of A.I. for surveillance purposes (using machine vision for facial recognition, monitoring behaviours, and even predicting future behaviours, Minority Report-style) has largely been banned, except where it is exceptionally required for a member state’s national security. These last two measures proved serious sticking points: law-makers disagreed about how far surveillance measures should go, and how much general-purpose models should be regulated (they were nearly excluded from the regulation altogether). But the result is on the stricter side, taking a strikingly bold position. The hope is that this won’t preclude innovation: EU Commissioner Thierry Breton called it ‘much more than a rulebook – it’s a launch pad for EU start-ups and researchers to lead the global A.I. race’. Only time will tell whether this ambition is realistic.

The road to regulation has been a long and fraught one. This is partly because technological progress moves far faster than democratic legislation ever can, and partly because of an obvious knowledge deficit: technological entrepreneurs don’t understand the law, and law-makers don’t understand technology, so each thinks the other is missing the point, leaving the two at a stalemate. The solution so far has been to let Big Tech simply regulate itself – but the privacy scandals in which the major companies have found themselves over the last decade have shown how little they can be trusted to hold to the rule of law. Think of Facebook’s role in the Cambridge Analytica scandal around the 2016 elections, AT&T’s involvement in the NSA surveillance programmes disclosed by Edward Snowden in 2013, and the antitrust cases (over abuses of their market dominance) that Apple, Amazon and Google have all faced in recent months.

Combine this with the existential worries which have followed the release of general-purpose A.I. models, from fears of deepfakes and job losses to a vision of a future dominated by A.I. overlords, and the need for law-makers to step in is clear. These fears were only exacerbated by the recent chaos at OpenAI, which saw CEO Sam Altman deposed and reinstated within a week as the board divided over the importance of A.I. safety. Earlier this year, a group of A.I. scientists called for a six-month moratorium on developing the technology while regulation was worked out. This seems like a good idea on paper, but how realistic it is has proved another question. After all, profit-driven entrepreneurs are unlikely to stop thinking or planning simply because of a good-faith pact; and the bureaucracy around regulation, as the five-years-and-counting progress of the A.I. Act makes clear, makes it far more than a six-month process.

Rigorous legislation is needed – this much is no longer in question. Countries have avoided being the first to take the plunge, fearing both the hit to innovation and the disadvantage to national security that restricting A.I. would create. Not for nothing has it been compared to the nuclear arms race: whatever a nation’s leaders might think of its risks, they are unmistakably more vulnerable without it. Protecting jobs is crucial, but if rival superpowers use outlawed A.I. to develop biological weaponry, it could become a moot point. The regulatory advances of the US and UK remain embryonic compared with the A.I. Act. The US currently has a non-binding Blueprint for an A.I. Bill of Rights but is unlikely to go much further given the current political gridlock. Rishi Sunak, meanwhile, somewhat modified his pro-innovation attitude by hosting a headline-grabbing A.I. Safety Summit at Bletchley Park in November 2023, which produced a collective agreement to protect A.I. safety worldwide. But these nascent efforts have been firmly overshadowed by the EU, which has become the first jurisdiction not just to state principles but to legislate them – the world’s first proper attempt to govern A.I. by law.

It’s no surprise that this approach has come from the EU. Historically, the US’s market-driven foundation has led to a laissez-faire legislative approach which promotes innovation and technological development above all else, while the EU’s rights-driven model prioritises the rights of users, market fairness and the upholding of democracy over economic progress. This is borne out in recent EU acts like the General Data Protection Regulation (GDPR), the first comprehensive attempt to protect personal data, which caused a worldwide headache for tech companies and users alike. Each approach has its pitfalls: the US outstrips the rest of the world in technology, but is arguably responsible for all the dangers that come with it, while the EU protects its citizens far more successfully, but remains on the back foot in terms of what it actually creates. President Macron has publicly attacked the A.I. Act, complaining that “we can decide to regulate much faster and much stronger than our major competitors. But we will regulate things that we will no longer produce or invent. This is never a good idea.” Fears that the EU is becoming an unattractive environment for start-ups are rife, not least because it is far easier for cash-rich incumbent companies to get around the regulations.

What does this mean for the rest of us? It’s fairly likely that the rest of the world will follow suit: Breton claims that “Europe has positioned itself as a pioneer, understanding the importance of its role as global standard setter”. This acknowledges the so-called ‘Brussels Effect’, whereby EU legislation often becomes global practice, partly because of its ideological influence, and partly because it’s easier for tech companies to comply with EU demands worldwide than to adapt for different jurisdictions; hence Apple’s new iPhone has a standard USB-C port rather than its distinctive Lightning connector, thanks to a newly enforced EU law. If the effects of the legislation are detrimental enough to Big Tech, however, this might change, and the EU could become a strikingly different technological climate from the rest of the world. Even enforcing the regulations is likely to be tricky, given cash-rich tech companies’ ability to appeal court cases indefinitely (or, indeed, to pay the fines required without much of a hit). There’s a long road ahead, even before we anticipate how A.I. itself might have changed by the time the Act is enforced in 2025, let alone in the decades to come, and how the law might have to change to govern it accordingly. This Act is one of landmark significance, and an encouraging step in the right direction; but it’s only the very first step.
