
1 October 2019

Regulating the internet – 4 of 7 Insights

Online harms: the regulation of internet content

Authors

Mark Owen
Partner

Louise Popple
Senior Counsel – Knowledge

From cyberbullying to terrorist attacks, the role of the internet is coming under increasing scrutiny with a number of initiatives at national and supra-national level to assess and mitigate the risks.

This is particularly true in the UK, with the government recently publishing a proposal to regulate a wide range of harms caused by user-generated content (UGC): the Online Harms White Paper (OHWP).

The key proposal is the introduction of a new statutory duty of care on online operators, overseen by an independent regulator. The proposal is ambitious both in the types of harm covered and in the range of operators caught. While many have welcomed this attempt to regulate the area, others have questioned the approach.

History and context

Historically, online operators have been given considerable protection in respect of content uploaded by their users. Under the EU's e-Commerce Directive, operators who merely host – rather than create – content have for 20 years enjoyed a "safe harbour" from liability for illegal content unless and until they have notice of it, provided they act expeditiously to remove or disable access to it once on notice. In addition, they cannot be made subject to any general monitoring obligation.

The Directive was considered necessary to allow the early internet to flourish. It worked. UGC has not only transformed the way we interact with one another but has turned areas from journalism to advertising on their heads. But there is a trade-off. When users are given the opportunity to create and share content, there is always scope for misuse. That misuse has come into sharp focus in recent years through such things as the live streaming of terrorist attacks and examples of the promotion of self-harm and suicide.

The difficulty with regulation

In view of this, the public and interest groups ask why there is not greater regulation, in particular why online operators are not forced to proactively monitor and filter content. Regulation is not, however, as straightforward as it might initially seem. How should society guard against damaging misuse without severely impacting the positive aspects of the online environment?

Even for the largest online operators, the sheer volume of UGC makes this particularly difficult. Technology has developed sufficiently to allow proactive monitoring and filtering of large volumes of content, but it remains a somewhat blunt instrument: no technology yet exists that can reliably differentiate cyberbullying from satire. As well as risking the removal of legitimate content, mandatory filtering may stifle innovation, and smaller, early-stage companies may find themselves squeezed out if they lack the required filtering technology.

The EU has grappled with this for some time, with mixed results. A provision in the proposed Regulation on preventing the dissemination of terrorist content online that would have imposed a positive monitoring obligation for terrorist material was recently watered down because it was deemed incompatible with the e-Commerce Directive.

Conversely, the Directive on Copyright in the Digital Single Market – which effectively imposes a positive monitoring obligation for material infringing copyright – was recently adopted. Despite the controversy, the general direction of travel at EU level is towards some degree of proactive monitoring by platforms, at least in relation to certain types of content (see our article for more).

The number and range of online harms also make a coherent regulatory framework difficult. Regulating modern slavery is not the same as regulating disinformation: the former has a clear legal definition, the latter does not. Any regulatory framework must therefore minimise the extent to which online operators are left to work out for themselves what is and is not acceptable content. Without such clarity, legitimate content will almost certainly be removed.

It is no surprise then that it has taken time to formulate a coherent proposal for regulation. Past proposals – such as that contained in the Internet Safety Strategy Green Paper – focused on self-regulation. This appears no longer to be the preferred approach of the UK government.

Indeed, some in the industry are calling for greater guidance, with a number of large technology companies writing to the UK government in February 2019, setting out what they believe a new regulatory framework should look like.

OHWP – an overview

The OHWP proposes significantly increasing the responsibilities of operators to tackle harmful content and activity online. A new statutory duty of care would require operators to do what is "reasonably practicable" to tackle specified types of online harm, with compliance overseen and enforced by an independent regulator. In brief:

  • The proposals will apply to any operator that provides services to UK users and allows them to share or discover UGC or interact with each other online. This is very broad and potentially includes social media platforms, below-the-line comments, customer reviews, public discussion forums, online communities, collaboration platforms, listings sites, cloud hosting providers, file-sharing sites, instant messaging services and search engines.
  • The types of harm covered are also wide-ranging. They are split into three categories: harms with a clear definition (such as terrorist content, child sexual exploitation, hate crime and incitement of violence); harms with a less clear definition (such as cyberbullying, coercive behaviour, intimidation and disinformation); and underage exposure to legal content. The proposals cover content that is legal but nonetheless harmful. The list of harms is not fixed and will be updated from time to time, allowing it to change as technology advances, new harms emerge and expectations develop.
  • The regulator will set out how operators can comply with the duty of care in Codes of Practice (although operators can adopt their own practices). Interim Codes are expected in late 2019. A Code of Practice for Providers of Online Social Media Platforms which references the OHWP has already been issued, setting out appropriate actions to prevent bullying, insulting, intimidating and humiliating behaviours.
  • For certain tightly defined categories of illegal content – terrorist activity, child sexual exploitation and abuse, hate crime and serious violence – online operators will have an obligation to proactively monitor and filter content.
  • Operators will be required to take action appropriate to the scale and severity of the harm in question. More stringent and specific requirements will be imposed for harms that are clearly illegal.
  • Action will be assessed by the regulator according to the size and resources of the operator and the age of those at risk of harm. This will be welcomed by smaller technology companies.
  • The government is consulting on appropriate sanctions for failure to comply with the duty of care. They could include significant fines, disruption of business activity and individual liability for senior management (including possible criminal liability).

Criticisms of the OHWP

While the proposals have been broadly welcomed, some elements are proving controversial.

Legal harms

One of the main criticisms is the range of harms that the proposals attempt to regulate. By seeking to extend to content that is legal but nonetheless considered harmful, it is argued that they open the way to censorship of the internet. There is a vast grey area between free speech and hate speech and people will disagree on where the line should be drawn.

Absent any clear legal definition (in legislation or case law) of harms such as cyberbullying, the line will be a difficult one for operators to draw, and some legitimate content will inevitably be removed. This is particularly so where the sanctions for failing to remove harmful content are significant and potentially include individual liability for senior managers.

Statutory duty of care

Central to the proposal is the creation of a new statutory duty of care on online operators, overseen by an independent regulator. However, the regulator would not have power to determine individual disputes; rather, it would take enforcement action only where there is evidence of a systemic failure to fulfil the duty of care. There are also questions as to whether the regulator will have sufficient resources to bring action in appropriate cases.

Moreover, the statutory duty of care does not create a new right for individuals to enforce. Any action against online operators will have to be based on existing laws (such as negligence, breach of contract or defamation). Despite this, the OHWP says that there will be "…scope to use the regulator's findings in any claim against a company in the courts on the grounds of negligence or breach of contract". More needs to be done to make it clear how (if at all) existing law is changing, if speculative or vexatious claims are to be avoided.

Positive monitoring obligation

A further criticism relates to the positive monitoring obligation. The UK government's claim that this is compatible with the provisions of the e-Commerce Directive must be open to doubt. If the UK leaves the EU, the point will probably become moot; if it does not, we can expect to see a challenge to this provision. How this will play out is difficult to predict given the direction of travel at EU level towards positive monitoring.

Some argue that the provisions of the e-Commerce Directive now need to be amended. At present, there is a positive disincentive for operators to proactively monitor content – although many of the larger operators do so – since monitoring puts them on notice of illegal content and therefore at risk of liability. Amending the Directive to allow some form of proactive monitoring without the risk of liability might be beneficial for start-ups and smaller enterprises.

Other criticisms

Some question whether the proposals are appropriately targeted. Even the most comprehensive regulatory framework aimed at online operators will not prevent a determined user posting harmful content online. It could be argued that giving users greater scope to trace and take action against those who post harmful content online would be equally or more effective.

Likewise, some argue that the proposals should contain more on technological measures to protect users, as well as education and awareness programmes. More generally, there is a question about whether national regulation, uncoordinated with other countries, is the best approach.

Next steps

The OHWP has now been publicly consulted upon, and the government is expected to respond to the results of the consultation later this year. Notwithstanding the criticisms discussed above, the OHWP has received broad cross-party political support, and a new legal framework of some sort is likely to result.

Operators would be prudent to begin reviewing their systems and procedures to make sure these conform with key elements of the proposals. Indeed, the government has specifically said that it expects operators to take action now.

Taylor Wessing is able to advise on the implications of and compliance with the new proposals. Further details on the proposals are available here.

If you have any questions on this article please contact us.
