
The Online Safety Act turns one: Where are we now?

The journey so far

26 October marked the first anniversary of the UK’s Online Safety Act (the Act) receiving Royal Assent. This was the culmination of a lengthy legislative process - the Internet Safety Strategy green paper was published seven years ago, in October 2017. The Act was the subject of significant debate and amendment as it passed through Parliament, including on the regulation of legal but harmful content and its approach to encrypted services. The government’s stated aim had been to pass “world-leading” legislation in this space, but a number of other jurisdictions got there first, including the EU with the Digital Services Act and Australia with its own Online Safety Act.

The Act’s impact is potentially enormous - Ofcom estimates that over 100,000 businesses may be in scope. It’s been a long road to get here, and there is still much to be done before we have a true picture of what the Act means for online safety in the UK. 

A busy first year

While the Act is lengthy and complex, it only provides a framework for the new regulatory regime. It is for Ofcom, through codes of practice and guidance, and the Secretary of State, through secondary legislation, to fill in the gaps. 

Shortly after the Act received Royal Assent, Ofcom published its consultation on protecting people from illegal harms online. Its second major consultation, published in May 2024, focused on how online services should approach their new duties in relation to content that is harmful to children. These consultations are extremely lengthy, running to thousands of pages, and have resulted in significant engagement from relevant stakeholders (the first consultation on illegal harms received close to 200 responses).

Aside from these key consultations, in December 2023 Ofcom published draft guidance on how online pornography services should implement effective age assurance measures. There has also been a consultation on Ofcom’s powers and how it plans to exercise them, which closed on 4 October 2024. More recently, Ofcom published a consultation on the new fees and penalties regime, which closes on 9 January 2025. In addition, Ofcom has submitted evidence to the Government regarding the thresholds for category 1, 2A and 2B services. These services will in time be subject to additional obligations, including transparency reporting requirements and duties to ‘empower’ adult users.

In the meantime, Ofcom is engaging with service providers to better understand how they operate. In September, Ofcom published its evaluation of measures implemented by Twitch, which require content creators to apply content classification labels to tell viewers if a stream they are about to view contains certain mature themes. Ofcom found that, following changes to Twitch’s content classification guidelines, the accuracy of content labelling increased substantially.

Ofcom also continues to take enforcement action under the existing video-sharing platform (VSP) regime, which will be repealed following the Act’s implementation. In May 2024, Ofcom opened an investigation into whether OnlyFans has effective age verification measures in place to prevent under-18s from accessing pornographic material. In July 2024, Ofcom fined TikTok £1.875 million for providing inaccurate data about its parental controls safety feature in response to a request for information under the VSP regime.

First major test?

This summer saw racially charged riots in cities across the UK following the murder of three girls in Southport in July. The explosion of violence and civil unrest was fuelled by the rapid spread of misinformation online, including false information that the suspect was a Muslim asylum seeker. At least 30 people were arrested under various laws in relation to social media posts pertaining to the disorder. A number of public commentators questioned why the Act did not prevent the spread of this content online. 

Several potential explanations have been put forward. The most obvious is that, as noted above, the Act is not yet fully in force. However, some have questioned whether it would ever have been effective in this context. First, as highlighted by some experts (see this article published by the Online Safety Network), misinformation and disinformation are not explicitly covered by the Act, and so are only in scope if caught by another category of content (for example, illegal content or content that is harmful to children). Second, the Act is supposed to operate at a systems and processes level, rather than regulating individual content: there is no requirement under the Act for service providers to remove specific pieces of content, such as individual posts, nor does Ofcom have the power to mandate this under the Act. Finally, the mechanism in the Act enabling Ofcom to take certain steps in “special circumstances”, which could include requiring a service provider to explain to the public how it is responding to a threat such as the riots, operates only at the direction of the Secretary of State and only in relation to Ofcom’s media literacy function.

Earlier this month the Technology Secretary, Peter Kyle, wrote to Ofcom requesting an update on its assessment of the spread of illegal content (particularly disinformation) during the disorder, and on whether Ofcom is considering any targeted measures for the next iteration of the illegal harms code of practice as a result. Ofcom’s response recognised that the duties on in-scope service providers to prevent the spread of illegal material and to mitigate the safety risks to UK users do not yet bite, but argued that had the draft codes of practice been in place at the time, “they would have provided a firm basis for urgent engagement with services on the steps they were taking to protect UK users from harm”. The response also referenced Ofcom’s intention to establish a new Advisory Committee on Misinformation and Disinformation, which is due to start work in early 2025.

The road ahead - 2025 and beyond

With commentators and the wider public demanding action, the next year will be crucial for both Ofcom and the in-scope service providers. Ofcom recently published a progress update, including a revised roadmap, which sets out its plan for the regulatory regime coming into force in earnest - see our recent article for more. Some key dates for the diary include:

  • In December 2024, Ofcom will publish the illegal harms codes of practice and guidance on illegal content risk assessments. Service providers will then have until March 2025 to complete these illegal content risk assessments. 
  • Similarly, in January 2025 Ofcom will publish guidance on children’s access assessments. Service providers will then need to assess whether their services are likely to be accessed by children. Ofcom will follow up with children’s risk assessment guidance and protection of children codes of practice in April 2025.
  • By the end of 2024, the government is expected to confirm thresholds for category 1, 2A and 2B services. Ofcom will then publish the register of categorised services in summer 2025.

Ofcom has said that it is “ready to launch immediate enforcement action if providers do not act promptly to address the risks posed by their services”. It expects this early enforcement action to focus on “measures that will be most impactful in protecting users, especially children, from serious harms such as those relating to CSAM, pornography and fraud”. 

While Ofcom presses ahead with implementing and enforcing the Act, there are some indications that more regulation could be on the way. In the wake of this summer’s riots, it was reported that the government will review the Act once it is fully implemented. Separately, a private member’s bill put forward by Labour MP Josh MacAlister, the Safer Phones Bill, would expand Ofcom’s powers in relation to ‘addictive’ apps and increase the age of ‘internet adulthood’ from 13 to 16 (see our article here). There is also the wider question of how the Act will cope with the development of new technologies, including the ever-present issue of AI. Ofcom says that the Act is designed to be “technology neutral” and that it is tracking emerging technologies and trends in online safety.

It has been a long road, but the UK’s online safety journey is only just beginning. We will be watching closely to see how Ofcom implements and enforces this new regulatory regime - keep an eye out for further updates.

