The New EU Censorship

Roxana Stanciu
June 20, 2017

What if your opinions regarding human dignity, human rights, marriage, and the family were considered offensive or insulting? What if they were considered ‘hate speech’? Should you still be able to express them online? Not according to the European Commission’s Code of Conduct (CoC).

The CoC was presented in May 2016 by the European Commission with the agreement of IT and tech companies such as Facebook, Google, Microsoft, and Twitter. It is a non-binding agreement between these private companies and the European Commission. Yet it remains unclear why there has been no debate in the European Parliament on what kinds of limits on free speech may be imposed under the CoC.

The CoC aims to ensure that “online platforms do not offer opportunities for illegal online hate speech to spread virally”. Under the agreement, IT and tech companies, and officials from the Commission, agree to work together with a network of ‘trusted reporters’ who will flag instances of ‘hate speech’. After receiving a “valid removal notification”, a company must then disable access to or remove such content within 24 hours.


What is ‘hate speech’?

One core problem, however, is that there is no clear definition of what constitutes ‘hate speech’. At the EU level, the definition is vague and rather subjective. The term has not been defined in any major international human rights document or treaty, nor by the European Court of Human Rights or any other international court. Generally speaking, however, European officials treat as ‘hate speech’ any speech that can incite ‘hate’.

But what if controversial views – or views that are considered ‘politically incorrect’ – are perceived as ‘hate’ by someone who is offended by them? Under such an understanding, ‘hate speech’ could include criticism of abortion, Islam, marriage legislation, or mass migration – and expressing such criticism could be considered a violation of the CoC.

In fact, the CoC relies on the rather unclear definition of ‘hate speech’ provided by the 2008 Framework Decision on racism and xenophobia, which defines it as “the public incitement to violence or hatred directed against a group of persons or a member of such a group defined on the basis of race, colour, descent, religion or belief, or national or ethnic origin”. The CoC thus has the potential to serve as a powerful tool for controlling the parameters of public deliberation and debate.


Who decides what happens to ‘illegal’ online posts?

The CoC states that IT and tech companies are “taking the lead on countering the spread of illegal hate speech online”. But shouldn’t an EU member state’s own judicial system be equipped to apply existing law to the online world, rather than having private companies decide what is and is not lawful? That, however, would only be possible if the term ‘hate speech’ were clearly defined, which is simply not the case.

The CoC also states that when companies receive a “valid removal notification”, they will review it against their terms of service – not against any applicable national law. The notification will be checked against the law only “where necessary”. But in how many cases will it truly be “necessary” to check a removal notification against the law? Very few, in all likelihood.

This means that whatever has been deemed ‘hate speech’ will be removed or deleted based on community (not legal) guidelines. This contradicts European and international human rights laws, which clearly state that any limitations on freedoms enjoyed by citizens should be “prescribed by law”.

Obviously, the purpose of the CoC is not to ensure that national laws are enforced, since participating IT and tech companies do not need to check whether the content they remove is actually illegal. They need only confirm that the content in question has been flagged as ‘hate speech’.


One year after adoption of the CoC

At the beginning of June 2017, the European Commission released a Fact Sheet on the “significant progress” that had been made in countering illegal ‘hate speech’ online since the establishment of the CoC. The document reports that between March 20 and May 5 of this year, a total of 2,575 notifications of illegal ‘hate speech’ were submitted to IT and tech companies. Facebook received the largest number of notifications, followed by YouTube and Twitter. Microsoft did not receive any.

The Fact Sheet also analyses the ‘removal rates’ and the grounds on which incidents of ‘hate speech’ were reported. It states that “the results confirm the predominance of hatred against migrants and refugees” – but notes optimistically that there has been “substantial improvement for all three companies” in ‘removal rates’. Overall, 59.1% of the notifications received led to the removal of online content: Facebook removed the content in 66.5% of cases, YouTube in 66%, and Twitter in 37.4%. In addition, the figures indicate that companies participating in the monitoring of content under the CoC have submitted 212 cases to national authorities.

But the Fact Sheet fails to answer some crucial questions: How many crimes have actually been prevented as a result of the CoC’s adoption? How many people committing criminal or civil offences have been punished? And what is the involvement of public authorities in all of this?


A new form of censorship

Another problem with the CoC is that it does not include the involvement of any court or impartial arbiter in deciding what online material is – and is not – classified as ‘hate speech’. Criminalisation under the rule of law implies that any truly reprehensible behaviour will be assessed by an impartial judge in a transparent manner; the way the CoC works does not abide by this requirement at all.

The CoC has thus created a serious risk for freedom of expression and helps to undermine our fundamental freedoms. It also seems to entirely ignore Article 10 of the European Convention on Human Rights, which clearly states that “[e]veryone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”

In a truly democratic society, citizens should be able to freely express ideas and opinions, even if certain individuals or groups dislike or are offended by them. In fact, it is a hallmark of healthy democratic societies that speech is not banned even when someone profoundly disagrees with it, or finds it ‘offensive’ or ‘disturbing’. The free exchange of divergent and opposing views leads to more meaningful public debate, engages citizens in the democratic process, and invigorates democratic deliberation. The CoC undermines all of this.

Given the confusion and uncertainty created by the CoC, citizens should probably avoid discussing sensitive topics online because their comments could be flagged as ‘hate speech’ and silently removed. This is an effective form of censorship – and it is taking place without any legal basis.

The biggest achievement of the CoC to date is that it has created a climate of suspicion and mistrust. One does not know exactly where to draw the line between free speech and what may be considered ‘hate speech’. How many opinions, viewpoints, and voices will disappear from our online environment – and from democratic deliberations – without anyone ever knowing about them and without anyone having any legal recourse?