Tackling online abuse: The Online Safety Bill

Since the COVID-19 pandemic there has been a four-fold increase in the online abuse of vulnerable children, and charities including the NSPCC and the IWF (Internet Watch Foundation) are pushing for the development of technology to block the streaming of illegal content.

Whilst the technology is not easy to create, as it must be extremely accurate whilst also adhering to privacy laws, social media platforms such as Facebook and Twitter have expressed their commitment to incorporating such technologies into their software. Meta, Facebook’s parent company, already uses Artificial Intelligence (AI) to detect live-streams and video calls that are likely to contain child sexual exploitation. In just one quarter of 2018, Facebook removed 8.7 million pieces of content that violated its child nudity or sexual exploitation of children policies. Meta has set out its mission to tackle child exploitation online and states that it has collaborated with safety experts and other companies to develop photo-matching technology, which aims to detect child nudity and previously unknown child exploitative content when it is uploaded.

Whilst this is a positive step for social media platforms, there are ongoing concerns regarding the use of live-streaming technology such as Zoom and Microsoft Teams. Research produced by the NSPCC showed that 1 in 20 children in the UK who had live-streamed with someone had been asked to remove an item of clothing. Unfortunately, tech companies have not been as proactive in their use of safeguarding technology, and so the NSPCC has called on the government to impose requirements under the Online Safety Bill for such companies to implement and invest in the relevant technology to stop live-stream abuse. Andy Burrows from the NSPCC said: “Live-streaming services expanded rapidly during the pandemic, but in a race to roll out products tech firms put growth before children’s safety.”

The government has faced criticism for delays in enforcing such legislation while rates of online abuse continue to increase. The IWF has raised concerns about the growing number of children with access to devices with built-in cameras, who have the opportunity to explore new technologies and spend more time online. “Parents need to talk as a family because we know that’s the best way to keep children safe,” was the advice of Emma Hardy, a spokesperson for the IWF. Both the IWF and other charities have also expressed concern that the bill will not go far enough to stop online abuse.

The Online Safety Bill, which is being brought in to tackle online scamming and hacking in addition to sexual abuse, seeks to deliver the government’s manifesto commitment to make the UK the safest place to be online. Whilst it has recently been reported that the bill is unlikely to be fully operational until 2024, it will seek to impose on companies a duty of care to their users, including a duty to protect them from harmful content. There is also a proposal to give Ofcom regulatory powers over social media sites; sites that breach Ofcom rules could be fined up to £18 million. These proposals are yet to be debated and confirmed.

Written by Nicole Clough, Paralegal at BLM (Nicole.Clough@blmlaw.com)
