Spotting the fake
30 July 2017
Marina Jirotka is Professor of Human Centred Computing, Associate Director of the Oxford e-Research Centre and Associate Researcher of the Oxford Internet Institute.
Helena Webb is a senior researcher in the Department of Computer Science at the University of Oxford.
Here they consider what can be done by government and social media platforms to tackle the problem of fake news.
As campaigning in the UK General Election gained momentum in April 2017, the Chairman of the House of Commons Culture, Media and Sport Select Committee called on Facebook to improve its handling of fake news on the platform. Referencing concerns that the spread of false stories across social media had influenced the result of the 2016 US Presidential election, Damian Collins MP suggested that the propagation of such content could threaten the ‘integrity of democracy’.
Worries over the apparent prevalence of false content online, and its capacity to have significant offline effects, have grown rapidly over the past year, and fake news has become an established social problem. Whilst the spread of rumour has always been a feature of social life, certain dynamics of the fake news phenomenon are novel.
Firstly, the hyperconnectivity brought about by the popularity of social media means that online content of any kind can spread at unprecedented speed and scale. Combined with users’ apparently growing reliance on social media as a news source – particularly among young people – this creates a vulnerability in which false stories can easily propagate.
False stories may then take hold if users operate online within a ‘filter bubble’, surrounding themselves with similar viewpoints and rarely encountering alternative or conflicting versions of the ‘truth’. These filter bubbles are in turn reinforced by social media platforms’ own algorithmic processes: users are presented with personalised content that complements what they have already viewed and liked, and are less likely to be shown counter-content.
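The reinforcement mechanism described above can be illustrated with a deliberately simplified sketch. This is our own toy illustration, not any platform’s actual ranking algorithm: a feed that orders items by how often the user has liked their topic, so each interaction narrows what is shown next.

```python
# Toy illustration (not any real platform's algorithm): rank a feed by
# how often the user has already liked each item's topic, so previously
# liked viewpoints rise to the top and counter-content sinks.

from collections import Counter

def rank_feed(items, liked_topics):
    """Order items so topics the user has liked most appear first."""
    topic_counts = Counter(liked_topics)
    return sorted(items, key=lambda item: topic_counts[item["topic"]], reverse=True)

items = [
    {"title": "Story A", "topic": "politics-left"},
    {"title": "Story B", "topic": "politics-right"},
    {"title": "Story C", "topic": "politics-left"},
]

# A user who has only ever liked one viewpoint sees it ranked first...
feed = rank_feed(items, liked_topics=["politics-left", "politics-left"])
# ...and if they like what tops the feed, the skew deepens on the next pass.
```

The feedback loop is the point: the ranking reflects past likes, and the likes are drawn from the ranking, so the bubble tightens with each cycle.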
User behaviour and the nature of social media thereby appear to provide fertile ground for the spread of fake news. A further key concern is that this vulnerability can be exploited so that false content is propagated in an organised way for the purposes of profit (gained via online advertising) or political interference.
Inevitably, questions arise over how fake news can be addressed, with much attention – including from the UK government – focusing on the suggestion that social media companies should take more responsibility for resolving the problem. Research studies, including our UnBias project, explore how changes to the regulation of social media might prevent or limit the spread of fake news.
One of the most radical changes could involve a shift in the legal status of social media organisations so that they become more comparable to traditional publishers such as newspapers in terms of the responsibility they must take for content posted. Less radical, and perhaps more technically and politically likely, is the development of Codes of Conduct for social media platforms. Platforms could sign up to undertake various practices in response to potential fake news. This would not necessarily involve the removal of content – something which would lead to strong objections on the grounds of freedom of speech.
Other practices might include ‘kite marks’ that display the trustworthiness of news stories (based on features such as the provenance of a story and the existence of counter-stories), feedback functions through which users can vote on the likely truthfulness of what they have read, or algorithms designed to pierce filter bubbles by presenting alternative content. However, as the fake news phenomenon is not simply technological but is also grounded in social practices, solutions may also need to look beyond the regulation of platforms.
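One of the ‘bubble-piercing’ ideas mentioned above can be sketched as a re-ranking step. The function name, the slot policy, and the data shape are all our own illustrative assumptions, not a proposal from any platform: after normal ranking, the feed reserves every third position for content from a viewpoint the user has not engaged with.

```python
# Hedged sketch of a bubble-piercing re-ranker (illustrative assumptions only):
# reserve every `slot`-th feed position for content from viewpoints the user
# has not engaged with, instead of removing anything from the feed.

def pierce_bubble(ranked, user_topics, slot=3):
    """Re-rank so roughly every `slot`-th position holds counter-content."""
    familiar = [i for i in ranked if i["topic"] in user_topics]
    counter = [i for i in ranked if i["topic"] not in user_topics]
    feed = []
    while familiar or counter:
        # Fill slot-1 positions with familiar content (while any remains)...
        for _ in range(slot - 1):
            if familiar:
                feed.append(familiar.pop(0))
        # ...then reserve one position for a differing viewpoint.
        if counter:
            feed.append(counter.pop(0))
    return feed

ranked = [
    {"title": "L1", "topic": "left"}, {"title": "L2", "topic": "left"},
    {"title": "R1", "topic": "right"},
    {"title": "L3", "topic": "left"}, {"title": "L4", "topic": "left"},
    {"title": "R2", "topic": "right"},
]
feed = pierce_bubble(ranked, user_topics={"left"})
```

Note the design choice this encodes: no content is removed, sidestepping the free-speech objection raised above; the intervention only guarantees exposure to alternatives.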
Particularly important may be education and user self-governance practices in which individuals identify potentially false content and act to stop themselves and others from spreading it. Such practices can help mitigate the spread of false information and develop more critical faculties in news consumers, particularly the young, who may often accept news online as the truth. Public and political debates about fake news seem set to continue, and efforts to address the apparent problem will benefit from careful research scrutiny.
For more information you can email [email protected]
You can follow UnBias on Twitter @UnBias_algos