From classrooms to code: Education in Britain’s misinformation fight

Taking to Facebook in early August 2024, a 28-year-old man encouraged “[e]very man and their dog [to] be smashing [the] fuck out” of a hotel housing asylum-seekers in eastern Leeds. The Crown Prosecution Service found no evidence that he participated directly in the riots. He was instead jailed for “intending to stir up racial hatred”.

The falsehood that sparked this summer’s unrest – the claim that a Muslim asylum-seeker had committed the Southport stabbings of 29th July 2024 – is unlikely to have originated with this man. It did, however, spread rapidly across social media platforms and instant messaging apps. By the end of July, the rumour had been viewed nearly 20 million times on X alone. In the wake of the man’s conviction, National Police Chiefs’ Council Chair Gavin Stephens warned in The Telegraph that “left unchecked, misinformation and harmful posts can undermine all our safety.”

An overblown issue?

Yet, despite widespread concern over the role of social media platforms in amplifying disinformation (intentionally shared misleading information) and misinformation (unintentionally shared misleading information), some have argued that the issue is overblown. Professor Ciaran Martin of Oxford’s Blavatnik School of Government has cautioned against overstating the impact of misinformation, at least in relation to national security. “There is a tendency sometimes…to confuse intent and activity with impact,” Martin explained. Similarly, of the 289 researchers who responded to the 2023 Expert Survey on the Global Information Environment, only a third viewed social media platforms as “the most threatening actors to the information environment”.

One of the reasons for this scepticism may be the surprisingly low prevalence of misinformation on these platforms. For instance, a recent Aarhus University analysis of 2.7 million tweets found that a mere 0.001% linked to an untrustworthy source. Historical data further supports this view. In their study of 120,000 letters sent to editors of The New York Times and the Chicago Tribune over a 120-year period, Joseph Uscinski and Joseph Parent found that the frequency of conspiracy theories has remained steady. The specifics and targets may change, but the appetite for misinformation has not increased.

Additionally, engagement statistics alone can be misleading. Just because a piece of misinformation is shared widely does not imply that its entire audience believes or endorses it. A study examining 9,345 Danish tweets during the COVID-19 pandemic, for example, found that about half of the tweets referencing misinformation were ridiculing it.

Biased algorithms and biased humans

Social science research also challenges another prevailing notion: that algorithms are to blame for shielding users from information sources outside their bubble. “I think this is widely exaggerated…that Google or other search engines would hide information from you, and you see only what you like,” said Professor William Dutton of the Oxford Internet Institute (OII). Indeed, a 2017 study tracking the news consumption habits of over 3,000 British users revealed that most people access content from a broad range of both partisan and nonpartisan outlets.

Algorithms certainly play a role in shaping the information we encounter online, but they may not be the root cause of the problems of dis- and misinformation. For Dutton, “[i]t is not a problem of the technology. The problem is that we are the algorithm that decides that we are going to look only at what we believe is the case.”

This points to a deeper phenomenon that predates social media algorithms: human psychology. For decades, research in social and personality psychology has demonstrated that people are inherently biased towards information that reinforces their existing views. More recently, a study of 879 Americans discovered that participants were more likely to believe false headlines that were aligned with their pre-existing beliefs and actively sought fact-checks for headlines that portrayed their political party negatively.

The problem with government

Still, even if news quality has less of an effect on beliefs than we might expect, the spread of mis- and disinformation remains an issue. At best, it acts as a distraction, diverting resources from legitimate political, media, and governmental efforts. At worst, it facilitates chaos, deepens pre-existing divides, and undermines trust in institutions.

Perhaps legislation could be tightened. Under the UK’s Online Safety Act 2023 (OSA), social media companies must promptly remove “illegal content,” with penalties reaching up to £18 million or 10% of their global annual revenue. However, the Act does not explicitly designate “disinformation” or “misinformation” as illegal content. False communication is considered an offence only if the person spreading it intended to cause harm and knew it was untrue. “Disinformation” can therefore be captured by the offence, but, as Ofcom notes, “misinformation” is automatically excluded, since it is, by definition, spread unintentionally.

Filling that gap in the legal framework would require defining misinformation in law, a change that appears unlikely, not least because it would in turn demand a formal definition of “news”. Regulators could take matters into their own hands, but are reluctant. “I am not convinced that having a very clear definition is possible. What is news? News is lots of different things to all sorts of different people,” explained Cristina Nicolotti Squires, Ofcom’s Group Director for Broadcast and Media.

There are also fears that a legal definition of “news” would limit the media’s independence from government influence, subject it to censorship, and weaken its ability to hold the government accountable. As Professor Rasmus Kleis Nielsen of the Reuters Institute notes, “British journalists and British publishers…generally believe that having a formal, authoritative, state-imposed definition of what is news is worse than not having one.”

Additionally, aggressive unilateral measures could damage Britain’s soft power. Social media platforms are already bending to the demands of “decisive governments” that force them to comply with restrictive laws. Robert Colvile, Director of the Centre for Policy Studies, warns the UK Government against adopting similar policies. “We probably do not want to do things that autocratic governments can seize on, saying, ‘You see, the British are doing it.’”

Tech action and inaction

To limit the UK Government’s reach, tech companies themselves may need to shoulder more of the burden. One promising approach lies in provenance-enhancing technology, which attaches metadata recording the origins of digital content. This could prove particularly useful for content shared on instant messaging apps like Telegram, whose active user base swelled by one million between July 29th and July 30th 2024. However, such tools are only effective if users engage with them.
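
To make the idea concrete, here is a minimal sketch in Python. It is purely illustrative: real provenance standards such as C2PA are far richer and cryptographically signed, and nothing here reflects any platform’s actual implementation. It simply shows how origin metadata might be bound to a fingerprint of a piece of content, so that later edits can be detected.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative sketch only: it binds origin metadata to a hash of the content,
# which is the core idea behind provenance-enhancing technology.

def create_provenance_record(content: bytes, creator: str) -> dict:
    """Record who made the content, when, and a fingerprint of its bytes."""
    return {
        "creator": creator,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Re-hash the content and check it still matches the recorded fingerprint."""
    return hashlib.sha256(content).hexdigest() == record["content_sha256"]

if __name__ == "__main__":
    original = b"Original photo, 30 July 2024"
    # The creator handle below is hypothetical.
    record = create_provenance_record(original, creator="@hypothetical_photographer")
    print(json.dumps(record, indent=2))

    edited = original + b" [cropped and recaptioned]"
    print(verify_provenance(original, record))  # True: content unchanged
    print(verify_provenance(edited, record))    # False: content has been altered
```

Even a record like this only helps, of course, if readers bother to check it before sharing.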

Increasing transparency around social media algorithms has also emerged as a major focus. The OSA places the onus on platforms to publish transparency reports, though it is unclear how detailed or useful these will be. Some tech companies already disclose similar information. Meta, for instance, details how its ranking system demotes certain content, while Google shares its search quality evaluator guidelines. But the complexity of algorithms – often involving billions of parameters – limits transparency.

Commercial interests also remain a significant barrier to full transparency. As Jane Singer, Emerita Professor at City, University of London, notes, “Why would the platforms necessarily want to do what you tell them to do?” Dutton also cautions that exposing the inner workings of these systems might “give everyone the information that they need to optimise their search,” providing bad actors with the tools to game the system even more effectively.

If the inner workings of algorithms cannot be disclosed or fully explained, they could at least be made more effective. Over half of the 289 experts surveyed by the International Panel on the Information Environment (IPIE) believe current AI-powered moderation tools are poorly designed, failing to catch harmful content consistently. This is especially important for platforms with few human content moderators as a safeguard, such as X, whose moderation team was gutted in 2022.

Hours after the Southport incident, a post published by a 41-year-old woman, calling for mass deportations and violence, was flagged by at least one X user. Despite her call to “set fire to all the fucking hotels full of the bastards for all I care,” an automated email informed the user who reported it that she “ha[d]n’t broken our rules against posting violent threats.” While her post might not be interpreted as a direct threat, it is unclear how it did not violate X’s prohibitions on violent speech, which include “Wish of Harm” and “Incitement of Violence.” The post has since been deleted and the woman’s account is no longer active; whether a human moderator at X took action or whether she removed it herself is unknown.
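
A deliberately naive sketch in Python – entirely hypothetical, and not a description of X’s actual systems – hints at why automated moderation can wave such posts through: rules or classifiers tuned to explicit first-person threats can miss incitement phrased as indifference.

```python
import re

# Hypothetical toy filter. Real moderation systems use machine-learned
# classifiers rather than regular expressions, but the failure mode is
# similar: patterns built around explicit threats miss indirect incitement.

THREAT_PATTERNS = [
    r"\bi(?: will|'ll| am going to) (?:kill|attack|burn|bomb)\b",
    r"\byou(?: will|'re going to) die\b",
]

def flags_as_violent_threat(post: str) -> bool:
    """Return True only if the post matches an explicit threat pattern."""
    text = post.lower()
    return any(re.search(pattern, text) for pattern in THREAT_PATTERNS)

if __name__ == "__main__":
    explicit = "I will burn that hotel down tonight"
    indirect = "set fire to all the hotels for all I care"
    print(flags_as_violent_threat(explicit))   # True: matches an explicit pattern
    print(flags_as_violent_threat(indirect))   # False: the incitement slips through
```

The gap between what such rules catch and what any human reader recognises as a call to violence is precisely the weakness the IPIE experts describe.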

Empowering minds

Between commercial interests, algorithmic complexity, and the limitations of current AI moderation tools, tech companies cannot go it alone in eradicating the problem. Dr Dani Madrid-Morales of the University of Sheffield may be right when he says that the UK Government remains “overly focused” on regulatory and technological approaches to combatting misinformation, at the expense of educational initiatives. 

Ofcom data show that 71% of the 739 16- to 24-year-olds surveyed use social media as their primary news source, which might suggest focusing such initiatives on the young. Yet there is a case that older news consumers should be the primary target. Referring to Estonia’s media literacy programme, Maia Klaassen at the University of Tartu says, “I’m not worried about youth. I’m worried about 50-somethings.” Similar issues can be seen in Britain. While the UK ranks high on OSIS’ Media Literacy Index, this may be misleading: Dr Steven Buckley at City, University of London, contends that many Britons may still lack the skills needed to navigate today’s information landscape effectively, such as an understanding of how to evaluate sources and how news is produced.

There is some evidence for this. In Ofcom’s 2024 Media Use and Attitudes survey, respondents aged 55+ were less likely than those aged 16-24 to recognise misleading news or verify its accuracy, such as by consulting additional sources or using a fact-checking website. They were also less confident in spotting fake social media profiles and more sceptical of genuine information.

Yet, if Dutton and the academic literature are right about confirmation bias and selective exposure shaping responses to misinformation, then the country’s younger people represent a crucial battleground where biases can be addressed before they take root. Despite being digital natives, many pre-teens, teenagers and young adults have been found to be overly trusting of news found through search engines and to overestimate their grasp of algorithm-driven content promotion.

A risky bet

Media literacy has its champions, including Education Secretary Bridget Phillipson, who has indicated that the ongoing school curriculum review will emphasise critical thinking skills relevant to media consumption. Before the summer riots, Oxford also took steps to enhance the media literacy of a portion of its students through last year’s climate change-themed Vice-Chancellor’s Colloquium. Importantly, the scheme was interdisciplinary, ensuring that all students could understand how misleading data can drive misinformation.

This notwithstanding, there are questions about media literacy programmes’ effectiveness and scalability. In a co-authored opinion piece for CNN, Professor Philip Howard at the OII deemed them “a risky bet” for combatting mis- and disinformation. Several factors contribute to this scepticism.

Funding is a hurdle. In 2023, the Department for Culture, Media and Sport allocated just £1.4 million to media literacy programmes. This in turn affects scalability. Delivery has been piecemeal and project-based, often led by media organisations and nonprofits. As a result, these initiatives struggle to impact a meaningful number of students. For example, the Student View programme reached just over 2,000 pupils between 2016 and mid-2021 – out of nearly 9 million English school students.

Ofcom’s new media literacy responsibilities under the OSA could address some of these issues, but whether they will have the desired impact is uncertain. A 2012 meta-analysis of media literacy interventions showed positive results, but most of the studies predate the algorithmic age. More recent assessments, such as a 2019 RAND study, highlight the difficulty of defining media literacy and establishing a clear link between media literacy programmes and resilience to disinformation. Even when programmes show promise, results are often modest. The evaluators of the Guardian Foundation’s NewsWise programme found that it improved 9- to 11-year-olds’ ability to spot misinformation. However, the effects were not statistically significant.

The test we now face is to ensure we accurately assess a threat that is starting to reveal itself. Overcorrecting legislatively carries risks. Social media platforms may not cooperate fully. The impact of media literacy programmes may take years to materialise. But the summer riots suggest that the effects of dis- and misinformation on British political discourse are no longer as “hard to detect” as Martin suggested in April 2024. He was right to warn against overreaction. He also recognised things could change quickly. We must now confront whether, as 2024 comes to a close with Donald Trump as President-Elect of the United States, we have reached that tipping point.
