The New Online Safety Act 2023: What Is It & Why Is It So Controversial?

Emily Gordon Brown, Legal Assessment Specialist @ Lawhive
Updated on 15th November 2023

The Online Safety Act has introduced new responsibilities for how tech firms should design, operate and moderate their platforms. But not everyone has welcomed it.


The Online Safety Bill received Royal Assent in October 2023 with the intent, as the Government describes it:

to make the UK the safest place in the world to be online.

But what are the problems the bill aims to help solve, and does it go far enough?

What is the Online Safety Act? 

The Online Safety Bill was granted Royal Assent on 26 October 2023, after years of debate in Parliament. This means it is now law, and internet companies must act.

The idea is to place tighter constraints on tech firms, forcing them to take responsibility for the content on their platforms, rather than benefiting from potentially harmful material.

The BBC has described the new laws as 'divisive'. They also reported comments from the Technology Secretary Michelle Donelan, describing the Government's position:

It ensures the online safety of British society not only now, but for decades to come.

The internet safety bill, despite the Government’s rhetoric, has long had its doubters. Many are concerned about the impact on privacy.

Tech giant WhatsApp, among other messaging services, has threatened to withdraw its platform from the UK in protest.

When did the Online Safety Bill become law?

The Bill became law when it was granted Royal Assent on Thursday 26 October 2023. This means the powers granted to enforcement agencies are now in force.

For tech platforms, this means acting quickly to remove content the Act classes as harmful and, going forward, preventing it from being uploaded and shared in the first place.

What is classed as illegal content?

Content classed as illegal by the bill includes:

  • Child sexual abuse

  • Controlling or coercive behaviour

  • Extreme sexual violence

  • Illegal immigration and people smuggling

  • Promoting or facilitating suicide

  • Promoting self-harm

  • Animal cruelty

  • Selling illegal drugs or weapons

  • Terrorism

Additionally, new offences have been created to deal with the landscape of today's internet and Web 3.0.

Cyber-flashing is a new term describing the sending of unsolicited sexual imagery online without the recipient's consent.

Deepfake technology enables the creation of videos using computer-generated faces. There has been a rise in deepfake pornography, where someone's likeness is added to a pornographic video without their consent.

The Online Safety Act, now that it has passed into law, also aims to help bereaved parents obtain information about their children from online messaging services and social media platforms.

Who will enforce the Online Safety Act?

The new law puts responsibility on tech firms to police their own platforms, including for legal but potentially harmful content. Ofcom, the UK's communications regulator, has been given extra enforcement powers to ensure tech companies do everything they can to make their platforms safer under the law.

What will the consequences be of not complying?

Enforcement action for breaking the new rules is severe. The likes of Facebook could be fined up to 10% of their annual global revenue for failing to comply with the legislation, or £18 million, whichever amount is higher.
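As a rough illustration of how the "whichever amount is higher" cap works, here is a minimal Python sketch based only on the figures above; the function name and example revenues are purely hypothetical.

```python
def maximum_fine_gbp(annual_global_revenue_gbp: float) -> float:
    """Illustrative only: the cap is the greater of £18 million
    or 10% of annual global revenue."""
    return max(18_000_000.0, 0.10 * annual_global_revenue_gbp)

# A very large platform: 10% of revenue dominates the £18m floor.
print(f"£{maximum_fine_gbp(100_000_000_000):,.0f}")  # £10,000,000,000

# A smaller platform: 10% of revenue is below £18m, so the flat cap applies.
print(f"£{maximum_fine_gbp(50_000_000):,.0f}")       # £18,000,000
```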

Executives could also face jail time if the rule breaking is serious enough, in a move that is sure to send shivers up the spine of some tech company leaders.

Ofcom is drawing up codes of practice which the firms must follow, offering guidance on how to stay within the rules.

The Government has said following the codes will not be the only way for companies to stay within the rules and fulfil their obligations. However, they have stated that by doing so tech firms can be certain they are compliant. 

Ofcom’s CEO has said:

Ofcom is not a censor, and our new powers are not about taking content down. Our job is to tackle the root causes of harm. Importantly, we'll also take full account of people's rights to privacy and freedom of expression.

The last sentence is important because many opponents of the law, including the companies most affected by the changes, are concerned about privacy. We'll expand on this point later.

What problems does the Online Safety Bill aim to solve?

In general, the bill is aimed at making the internet a safer place for everyone, in particular children. It also attempts to give adults the ability to manage what they see online. 

Tech companies will now have to work to remove harmful content and stop it getting through their algorithms.

The bill was prompted by rising fears about revenge porn among young people, as well as online bullying and content that promotes harmful activities like self-harm and eating disorders.

Ian Russell, a campaigner for the bill, lost his 14-year-old daughter, who took her own life after viewing suicide and self-harm content on social media sites including Instagram and Pinterest.

Why has the Online Safety Bill been so controversial?

The bill, and now the law, remains controversial. Over the years it was debated in Parliament, many arguments were raised in attempts to derail the bill or reduce its powers.

Yet the Equality and Human Rights Commission has praised the law, saying it's:

A vital first step in addressing harmful content and behaviour online.

So, what is causing so much opposition to the bill?

It doesn’t go far enough

Some have said that the bill does not go far enough to reduce harm. Full Fact, a fact-checking organisation which initially supported the bill, has said 'retrograde changes' mean it does not "address the way that platforms treat harmful misinformation and disinformation".

As a fact-checking organisation, it's no wonder that Full Fact's primary concern is misinformation and disinformation, commonly referred to as 'fake news'.

The organisation's head of policy and advocacy built on this argument: "Our freedom of expression is left in the hands of self-interested internet companies, while dangerous health misinformation is allowed to spread rampant".

The COVID-19 pandemic was made significantly worse by lockdown-sceptic and vaccine-sceptic messaging online, which countered governments' public health awareness campaigns. Moreover, the rise in conspiracy theories more generally has continued to destabilise society, nowhere more so than in the United States, where the impact of the social-media-fuelled January 6th insurrection will be felt for decades.

The Act has created a new offence of deliberately sending false information intended to cause harm, but this of course does nothing to stop fake news spreading once it gets online.

Opposed by Big Tech

The big players in social media and messaging services have strongly opposed the bill from its inception through to its becoming law.

Their fears centre on privacy, freedom of speech and whether they have the ability to police their platforms to the degree the bill expects.

Censorship and privacy concerns 

The most divisive clause of the law is Section 122, which tech companies interpret as forcing them to read their users' private communications. They protest this, believing it to be wrong and potentially impossible, because these platforms use end-to-end encryption, one of their key selling points. WhatsApp, for instance, highlights in its advertising campaigns that not even it can read your messages.

WhatsApp launched its first privacy-centric campaign in August 2021, with a spokesperson saying:

The first step of keeping people safe is, you have to have strong security, and we think governments shouldn't be out there trying to encourage tech companies to offer weak security.

If companies like WhatsApp bypassed their own end-to-end encryption, they'd effectively be taking away their users' privacy in an age when users demand more privacy, not less. This is why platforms such as Snapchat, with its disappearing messages, and more recently Path and Signal, have been growing in popularity.

The Government hopes companies can find a way to scan messages for harmful content without bypassing encryption. That technology doesn't exist and, according to some technology experts, never will.
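To see why the platforms argue this, here is a minimal sketch of end-to-end encryption using the PyNaCl library. It is a simplified, hypothetical illustration, not a description of WhatsApp's or any other platform's actual protocol: the point is simply that only the sender and recipient hold private keys, so a relaying server sees nothing but ciphertext.

```python
# Minimal sketch of end-to-end encryption with PyNaCl (libsodium).
# Hypothetical illustration only; real messaging apps use far more
# elaborate protocols than a single Box.
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only public keys are ever shared with the server or other users.
alice_public = alice_private.public_key
bob_public = bob_private.public_key

# Alice encrypts a message that only Bob's private key can decrypt.
ciphertext = Box(alice_private, bob_public).encrypt(b"See you at 6pm")

# The platform relays this ciphertext but cannot read it:
# without a private key there is no way to open the Box.
print(ciphertext.hex())

# Bob decrypts on his own device using his private key.
plaintext = Box(bob_private, alice_public).decrypt(ciphertext)
print(plaintext.decode())  # "See you at 6pm"
```

Under this model, any scanning for harmful content would have to happen on the user's device before encryption or after decryption, which is why critics argue it cannot be done without, in effect, bypassing the encryption.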

Censorship is another issue raised by technology firms. Freedom of speech is of particular importance to Elon Musk, the owner of X (formerly Twitter). Campaigners have raised concerns that the removal of content amounts to government-sanctioned censorship. The Centre for Policy Studies (CPS), a right-leaning think tank, has voiced concerns: "It is for parliament to determine what is sufficiently harmful that it should not be allowed, not for Ofcom or individual platforms to guess".

The Director of the CPS, Robert Colvile, went on to say: "If something is legal to say, it should be legal to type".

If you are a UK tech business with concerns about how the new Online Safety Act might affect you now that it is law, contact us today to speak to a solicitor.
