With every passing day, technology becomes increasingly indispensable in our world. Inventions such as electricity, the motor car, and the internet stand out as technological advancements which have transformed our entire society. We often find ourselves asking: what will the next breakthrough technology be? What will we be dependent on in twenty, fifty or a hundred years' time? The answer, in fact, is already here: Artificial Intelligence.

The term Artificial Intelligence (AI) was first coined in 1956, and we have been using the technology for years. Recently, however, the term has become a major buzzword in the tech world. The Oxford Dictionary defines AI as “the theory and development of computer systems able to perform tasks normally requiring human intelligence”. AI learns through exposure to historic and live data and makes independent decisions based on that data. It learns from experience and adjusts to new inputs in real time. In this way, AI is self-learning and requires little direct human operation. Basic examples include email inboxes filtering out spam and computers playing chess.
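To make the idea of learning from data concrete, the short Python sketch below (using the scikit-learn library, with a handful of invented example emails) shows how a basic spam filter can be trained on labelled messages and then classify a new one on its own. It is purely illustrative rather than a description of how any real mailbox filter works.

```python
# Illustrative only: a toy spam filter that "learns" from labelled examples.
# The training emails and labels below are invented for this sketch.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "Win a free prize now",          # spam
    "Claim your free money today",   # spam
    "Meeting moved to 3pm",          # not spam
    "Lecture notes attached",        # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# Turn each email into word counts, then fit a simple probabilistic model.
vectoriser = CountVectorizer()
features = vectoriser.fit_transform(emails)
model = MultinomialNB().fit(features, labels)

# The model now classifies a message it has never seen before.
new_message = vectoriser.transform(["Free prize waiting for you"])
print(model.predict(new_message))  # expected: [1], i.e. flagged as spam
```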

As technology develops, the omnipresence of AI assistants such as Siri and Alexa, the replacement of humans by robots in certain jobs, and the dawn of a new age of driverless cars make it impossible to ignore the growing effect that AI is having on all aspects of our society. Whilst the technology brings countless benefits, there are complications which cannot be ignored, especially given how rapid its development is. Most crucially, we do not currently have an adequate legal framework for the issues that arise from the use of AI.

One such issue concerns intellectual property (IP) law. AI’s capacity for self-learning means that the data it holds and the outputs it produces could be considered IP. Given the importance of data in our modern world, data produced by AI can be highly sought after. AI is currently treated in law as a tool rather than an entity which could hold IP rights, but as AI develops, this could change. At present, human intervention is required alongside AI to make sense of any outputs, meaning that, depending on the jurisdiction, either the programmer or the user will hold IP rights over any output from AI. One area where IP law becomes disputable is when AI produces, from its data sets, a highly desirable piece of code. The programmer of the AI would typically be considered as holding the IP rights, but is this fair when their ‘invention’ was unintended and derived from large sets of public data fed into the AI algorithm? In the future, AI may be awarded IP rights of its own, which would in turn raise the question of whether AI could be liable for infringing the rights of other IP owners. Holding AI accountable for such infringement would present yet another legal challenge.

The question of liability is interesting to consider. Liability for negligence rests with the person who caused the damage or who could have foreseen it. At present, the programmer or the operator (depending on the circumstances) will typically be ultimately responsible for AI’s actions, and liability will therefore lie with them. However, as technology advances and AI becomes increasingly autonomous, the issue will become more complicated. Unable to see exactly how AI devices reach their decisions, we cannot accurately predict their actions. Where the output or behaviour of an AI device is genuinely unforeseeable, it may be impossible to hold anyone liable, as there would be no element of negligence, only an unforeseeable event. For example, who is liable if a driverless car crashes? These vehicles are not yet fully autonomous, so the human driver will usually still be liable, but the technology is advancing. This is a developing area of law, and legal systems worldwide are not currently equipped to deal with these issues. Clear laws and best practices must be established to determine the scope of liability for those involved in the creation and use of AI.

Another legal – and ethical – issue with AI is inbuilt bias. Biased algorithms are reflections of the bias that exists in our society: algorithms learn through exposure to data, so if the data drawn from our society is biased, the algorithm will automatically be biased too. For instance, an algorithm may select a white, middle-aged man to fill a vacancy on the basis that other white, middle-aged men were previously hired for that position and subsequently promoted. The algorithm’s automatic reasoning overlooks the possibility that the previous candidates were hired and promoted because of their profiles rather than their aptitude for the job. A vicious cycle of bias then arises.
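As a purely illustrative sketch of how this happens, the short Python example below (with invented hiring records) trains a simple model on historical decisions in which only one group was ever hired. Because group membership, rather than qualification, is what separates past hires from rejections, the model reproduces that bias when scoring two equally qualified new candidates.

```python
# Illustrative only: a toy model trained on invented, biased hiring records.
# Each candidate is described by (qualification score, in_favoured_group).
from sklearn.linear_model import LogisticRegression

# Invented historical data: equally qualified candidates, but only members
# of the favoured group (second feature = 1) were hired in the past.
past_candidates = [
    [7, 1], [6, 1], [8, 1],   # favoured group, hired
    [7, 0], [6, 0], [8, 0],   # other candidates, not hired
]
past_decisions = [1, 1, 1, 0, 0, 0]  # 1 = hired, 0 = rejected

model = LogisticRegression().fit(past_candidates, past_decisions)

# Two new candidates with identical qualifications get different predictions,
# because the model has learned group membership, not aptitude.
print(model.predict([[7, 1], [7, 0]]))  # expected: [1 0]
```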

An example of AI bias was highlighted in a 2016 ProPublica study, which found that an AI algorithm used by parole authorities in the US to predict reoffending (COMPAS) was biased against ethnic minorities. Data provided by COMPAS is used in courts to assist judges with sentencing decisions and has therefore likely had a negative impact on the sentences that ethnic minorities have received. The algorithm continues to be used, but comes with a warning to consider bias. Overcoming bias will involve ensuring that the initial coding of the algorithms does not perpetuate bias, and potentially employing bias-detection software such as IBM’s AI Fairness 360 toolkit, which scans for signs of bias and recommends adjustments. Research into ethical issues such as racial bias in AI is also critical as we become increasingly reliant on the technology.
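Bias-detection tools of this kind generally begin with simple statistical checks. The sketch below is not IBM’s toolkit itself but a hand-rolled Python illustration, using invented figures, of one common check: the ‘disparate impact’ ratio, which compares how often a favourable outcome is given to one group relative to another, with ratios well below 1 commonly treated as a warning sign.

```python
# Illustrative only: a simple disparate-impact check with invented figures.
def disparate_impact(favourable_a, total_a, favourable_b, total_b):
    """Ratio of favourable-outcome rates: group A relative to group B."""
    rate_a = favourable_a / total_a
    rate_b = favourable_b / total_b
    return rate_a / rate_b

# Invented example: 30 of 100 group-A candidates and 60 of 100 group-B
# candidates received a favourable risk score.
ratio = disparate_impact(30, 100, 60, 100)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50

# A common rule of thumb flags ratios below 0.8 as evidence of possible bias.
if ratio < 0.8:
    print("Warning: possible bias against group A")
```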

Last summer, the University of Oxford announced that it had received a donation of £150 million from American billionaire Stephen Schwarzman. The donation will be used to open the Schwarzman Centre which will house Oxford’s new Institute for Ethics in AI. High-profile computer scientist Sir Nigel Shadbolt will spearhead the development of the new institute which will aim to “lead the study of the ethical implications of artificial intelligence and other new computing technologies”. Research initiatives are vital for ensuring that AI continues to work to the benefit of humankind and that potential negative implications are minimised.

Ultimately, the fast-tracked and unregulated manner in which AI is being developed means that there are many legal and ethical issues for which we do not currently have the requisite legislation. Without adequate legal frameworks, AI could cause more harm than good. It is vital that the technology is developed in a regulated way, alongside legal structures which are fit for the 21st century and beyond.

Image Credits: Charlotte Bunney

