2020 may have seen the start of the pandemic, but the UK has been facing an epidemic of its own: online abuse.
According to an analysis by the NSPCC, more than 10,000 offences of online child sexual abuse were logged by police between April 2019 and March 2020, a 16% increase on the previous year. Parliamentary intervention has come at a crucial time. The Online Safety Bill is the result of two years’ work seeking to provide a safer virtual environment for all. The Digital Secretary, Oliver Dowden, has hailed the “ground-breaking laws” that the UK government have committed to introducing to ensure social media platforms protect their users and “safeguard our liberties”.
Social media sites will now owe a duty of care to their users, so that behaviour deemed unacceptable offline is treated as equally unacceptable online. Category 1 services, comprising the largest and most popular social media sites such as Instagram and TikTok, must set out in explicit terms how they will address and combat both illegal content and content that is legal but harmful.
Further, companies will be required to report child sexual exploitation and abusive content to allow for effective safeguarding. The draft Bill is set to transform the previous regime of self-regulation by giving Ofcom greater powers to enforce the duty through a Code of Practice and to hold companies to account. As the Home Secretary, Priti Patel, has stated, “it’s time for tech companies to protect the British people from harm. If they fail to do so, they will face penalties.” These sanctions range from fines of up to £18 million and criminal action against senior managers to the prospect of blocking access to sites altogether.
The ONS has reported that 89% of 10- to 15-year-olds go online every day. Of those, one in six has admitted to speaking with someone they have never met before. It is therefore more crucial than ever to ensure airtight protection from online abusers. The new legislation, as the Home Secretary notes, will force online platforms to work collaboratively with law enforcement to provide evidence and bring offenders to justice.
However, Facebook is pursuing a new approach to protecting users’ privacy by making end-to-end encryption the default for its messaging services. End-to-end encryption is a form of communication in which only the users involved in a chat are able to read its messages. Platforms that do not use such systems can scan posts and messages to detect questionable or concerning content and report it appropriately. When a system is in place that allows only the sender and the recipient to read any messages, how can the young and vulnerable be effectively shielded from online abuse or grooming? This is all the more concerning given that, between early 2017 and October 2019, 55% of online grooming offences recorded by police in England and Wales took place on a Facebook-owned app. Facebook has already conceded to the House of Commons that its planned implementation of encryption will allow the exploitation of children to remain a prevalent issue. The prospect of 70% of Facebook’s reports of questionable or concerning messages being lost through the introduction of end-to-end encryption has prompted the US National Center for Missing and Exploited Children to urge Facebook to reconsider.
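To illustrate why end-to-end encryption prevents platform-side scanning, here is a minimal, purely illustrative sketch in Python. It uses a toy one-time-pad XOR cipher as a stand-in; this is not what Facebook or any real messenger actually uses (real services rely on vetted protocols such as the Signal protocol), but the principle is the same: the relaying platform never holds the key.

```python
import secrets

def encrypt(key: bytes, message: bytes) -> bytes:
    # Toy one-time-pad: XOR each message byte with a key byte.
    # Without the key, the output is unreadable random-looking bytes.
    return bytes(k ^ m for k, m in zip(key, message))

decrypt = encrypt  # XOR is its own inverse

message = b"hello"
# The key is shared only by the two endpoints (sender and recipient).
key = secrets.token_bytes(len(message))

ciphertext = encrypt(key, message)  # this is all the platform ever relays
recovered = decrypt(key, ciphertext)  # only a key-holder can do this
assert recovered == message
```

Because the platform in the middle only ever sees `ciphertext` and never the key, content-scanning tools that inspect message text have nothing readable to inspect, which is the crux of the safeguarding concern described above.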
Whilst maintaining individual rights to privacy is paramount, it must be compatible with ensuring people’s safety on user-generated platforms or there is a risk that the gains already made will be put in jeopardy.
The path to striking a balance between these competing rights is far from clear. What we can say is that, in weighing that balance, tech companies will be required to provide unequivocal evidence of how they can offer an encrypted service while still monitoring for abusive content, so as to provide a secure environment for their users.
Written by Gina Southwood, Trainee Solicitor at BLM (Gina.Southwood@blmlaw.com)