It is just over 500 years since Martin Luther nailed his Ninety-five Theses to the door of All Saints’ Church in Wittenberg. Few at the time could have predicted that this would eventually lead to 150 years of religious wars, the creation of the Church of England, and the ordination of women to the clergy. This is testament to the power of ideas, of course, but also to the technology that enables their dissemination: Luther’s ideas altered the course of history, but none of this would have been possible without the printing press.
We stand today at a similar junction, with the internet, digital platforms and communication technologies transforming the way information is created and disseminated. The reach, immediacy, and democratisation of these new technologies enable anyone, anywhere, to be a publisher, a pamphleteer or an author.
As always, the law has been slow in catching up with the technology. This is problematic, as the immense potential of these technologies is also being used for nefarious purposes. Over the last few years, Europe has seen a cycle of terrorist atrocities followed by calls from governments for internet companies to remove extremist content, faster and more accurately. While ISIS videos have often made the headlines, companies have also been asked to remove offensive content, fake news articles, ‘revenge porn’, and other ‘harmful content’.
The tech companies affected by these issues have tried to find a solution by creating the Global Internet Forum to Counter Terrorism, through which Microsoft, Facebook, Twitter and YouTube, amongst others, work together to identify and remove inappropriate content. They have created a shared database of known ‘terrorist’ images and videos containing more than 40,000 hashes, or digital fingerprints, allowing quicker detection and removal of the content.
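The mechanics of such a database can be sketched very simply: each known item of content is reduced to a fixed-length fingerprint, and new uploads are fingerprinted and checked against the shared set. The sketch below is purely illustrative and uses a cryptographic SHA-256 hash; the industry database reportedly also relies on perceptual hashing, which can match re-encoded or slightly altered copies, whereas a cryptographic hash only catches byte-identical ones.

```python
import hashlib

# Illustrative stand-in for the shared industry database: a set of
# SHA-256 digests of previously flagged media. (Assumption for this
# sketch; the real system's formats and matching rules are not public.)
KNOWN_HASHES = {
    hashlib.sha256(b"previously-flagged video bytes").hexdigest(),
}


def fingerprint(content: bytes) -> str:
    """Compute the digital fingerprint (hash) of an uploaded file."""
    return hashlib.sha256(content).hexdigest()


def is_known_extremist_content(content: bytes) -> bool:
    """Check an upload's fingerprint against the shared database."""
    return fingerprint(content) in KNOWN_HASHES
```

The appeal of this approach is that platforms can share fingerprints without sharing the underlying material itself; its limitation, at least in this exact-match form, is that any modification to the file produces a completely different hash.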
Nonetheless, the latest salvo in this back and forth has come from the European Commission, which, on 1 March 2018, published a set of recommendations calling on digital platforms to step up their detection and deletion of material that incites terrorism, radicalises users, or helps prepare and fund attacks.
The recommendations require social media companies to remove terrorist-related material from their websites within an hour of being notified. They are not binding (although courts can take them into account), but they do raise important issues relating to fundamental rights and freedoms: the freedom of expression, of course, but also the right to privacy, the protection of personal data, freedom of thought, conscience and religion, freedom of assembly and association, freedom of the arts and sciences, and the right to an effective remedy.
It is worth noting that whilst these recommendations focus on ‘illegal’ content, previous EU Commission guidance referred to ‘harmful’ content, a term which is so vague it cannot form the basis of legal restrictions on fundamental rights. Even the term ‘illegal’ can be problematic in certain jurisdictions.
It is also worth questioning whether these kinds of ‘voluntary measures’ between the EU and companies are, in fact, voluntary. The fact that they are ‘encouraged’ by the Commission, on a three-month trial basis, and that failure will lead to legislative action, suggests they are neither voluntary nor ‘self-regulatory’.
This is problematic, partly because it is doubtful whether private ‘voluntary measures’ can fully respect fundamental rights. Companies are not bound by the EU’s Charter of Fundamental Rights: member states have positive and negative obligations with regard to human rights, but companies do not.
It is equally problematic for public authorities to pressure companies into removing or restricting access to online content without any legality assessment, and without any counterbalancing obligation on companies to respect human rights and fundamental freedoms, especially when vague terms such as ‘harmful’ are used.
Even in the supposedly straightforward case of terrorism, the answers are difficult to find, perhaps because we are only just beginning to frame the questions. When an individual tweets a quote from a religious cleric asking individuals to ‘fight infidels’, does this represent terrorism, or the promotion of terrorism? Should it really be up to Twitter to determine whether this constitutes an offence, and restrict that individual’s freedom of expression accordingly? If someone else retweets it, is this an offence too? Should Twitter really be the arbiter of whether that is freedom of conscience or glorifying terrorism? Isn’t it a way of privatising enforcement and censorship?
All of this is complicated by the transnational nature of social media, and by the varying definitions of what constitutes ‘terrorism’ or ‘harmful content’ in different jurisdictions.
Finally, it is worth noting that much of this work will not be done by individuals, but by algorithms. The UK has already developed software that can identify ISIS material with 94 per cent accuracy. But this raises its own ethical issues. Regulation by algorithm is opaque, potentially biased, unregulated, and operates in a legal grey area. Who is responsible if the algorithm fails?
When one considers the impact that online platforms such as Facebook and Twitter have on elections, these are fundamental societal questions that impact every single one of us.
The Law Society and the University of Essex’s Human Rights Centre will be addressing some of these issues at a research seminar on 11 April. For more information on how to attend, please contact Olivier Roth at firstname.lastname@example.org.