Tackling online abuse: The Online Safety Bill

Since the COVID-19 pandemic there has been a four-fold increase in the online abuse of vulnerable children, and charities including the NSPCC and the Internet Watch Foundation (IWF) are pushing for the development of technology to block the streaming of illegal content.

The technology is not easy to create, as it needs to be extremely accurate whilst also adhering to privacy laws, but social media platforms such as Facebook and Twitter have expressed their commitment to incorporating such technologies into their software. Meta, Facebook’s parent company, already uses artificial intelligence (AI) to detect live-streams and video calls which are likely to contain child sexual exploitation. In just one quarter of 2018, Facebook removed 8.7 million pieces of content that violated its policies on child nudity and the sexual exploitation of children. Meta has set out its mission to tackle child exploitation online and states that it has collaborated with safety experts and other companies to develop photo-matching technology which aims to detect child nudity and previously unknown child exploitative content when it is uploaded.

Whilst this is a positive step for social media platforms, there are ongoing concerns regarding the use of live-streaming technology such as Zoom and Microsoft Teams. Research by the NSPCC showed that 1 in 20 children in the UK who had live-streamed with someone had been asked to remove an item of clothing. Unfortunately, tech companies have not been as proactive in their use of safeguarding technology, and so the NSPCC has called for the government to impose requirements under the Online Safety Bill for such companies to invest in and implement the relevant technology to stop live-stream abuse. Andy Burrows from the NSPCC said: “Live-streaming services expanded rapidly during the pandemic, but in a race to roll out products tech firms put growth before children’s safety”.

The government has faced criticism for delays in bringing such legislation into force while rates of online abuse continue to increase. The IWF has raised concerns about the growing number of children with access to devices with built-in cameras, and about children having more opportunity to explore new technologies and to spend more time online. “Parents need to talk as a family because we know that’s the best way to keep children safe,” was the advice of Emma Hardy, a spokesperson for the IWF. Both the IWF and other charities have also expressed concerns that the bill will not go far enough to stop online abuse.

The Online Safety Bill, which is being brought in to tackle online scamming and hacking in addition to sexual abuse, seeks to deliver the government’s manifesto commitment to make the UK the safest place to be online. Whilst it has recently been reported that the bill is unlikely to be fully operational until 2024, it will seek to impose on companies a duty of care to their users, including a duty to protect them from harmful content. There is also a proposal to give Ofcom regulatory powers over social media sites, with sites which breach Ofcom’s rules liable to fines of up to £18 million. These proposals have yet to be confirmed and remain to be debated.

Written by Nicole Clough, Paralegal at BLM (Nicole.Clough@blmlaw.com)

Worrying developments as peer-on-peer abuse cases double in the two years to 2019

Figures obtained by BBC Panorama in September of this year demonstrate an alarming rise in the number of children abusing other children.

Following up on earlier research carried out by BBC Panorama in 2017 (which found that police in England and Wales recorded almost 8,000 reports of abuse of children by other children), the most recent figures show that number soaring to between 15,000 and 16,000, before falling to 10,861 in 2020-2021 (a drop thought to be largely attributable to the pandemic, which reduced the opportunity for such abuse, with many children not at school and movement generally restricted).


Under-18s can seek removal of online nude images of themselves

The Internet Watch Foundation (“IWF”) and Childline have created a service that allows minors to request the removal of nude images or videos of themselves from the internet.

Childline and the IWF joined forces in 2013 to ensure young people aged 17 and under know where to turn to get sexually explicit images removed from the internet. The partnership was the result of a Childline survey of 13 to 18-year-olds which revealed that young people frequently take huge risks by making and sending sexual images of themselves.

The IWF was first established in 1996 to fulfil an independent role in receiving, assessing and tracing public complaints about child sexual abuse content on the internet and to support the development of website rating systems. The IWF states it works hard to implement new technologies to improve the identification and removal of these images and videos.


Challenges and concerns for children in their use of the internet

As noted in our blogs earlier this week, the IICSA investigation into the internet is well underway.

The internet plays such a huge role in all of our lives, not least for Generation Z – the post-millennial generation, born 1995 to 2010 – who most likely will not have known a time when it did not exist. Worrying stories continually dominate the headlines in which technology seems less of a boon (curiously, the name of an app which ranks higher in Google’s search engine than the word itself) in the lives of children and young people.


A crisis of modern society? The internet and child sexual abuse

The second hearing of IICSA’s thematic investigation into child sexual abuse and the internet focused on the steps being taken by the internet industry and government bodies. By the ‘internet industry’, IICSA means internet service providers, software companies, social media companies, providers of search engines and those who provide email and messaging services and cloud storage. Whilst a number of representatives from the internet industry gave evidence, none of them applied for core participant status. The institutional core participants were the Home Office, the Internet Watch Foundation, the National Crime Agency, the National Police Chiefs’ Council and the Metropolitan Police Service. There were three complainant core participants, who gave evidence of the direct impact on their lives of being abused in connection with the internet.

IICSA summarises online facilitated child abuse as follows:

  • Indecent images of children which may be created, distributed, downloaded and possessed
  • Grooming, which can involve sexual communication with a child, and arranging to meet or meeting the child following such communication
  • Live-streaming of child abuse

Tackling all aspects of online facilitated child abuse is a huge task. In its 2018 Serious & Organised Crime Strategy the Government set out its expectations of internet companies – that they need to be at the forefront of efforts to deny offenders the opportunity to access children and child sexual abuse material via their platforms and services. The internet industry witnesses gave evidence of how they were seeking to do that, some with greater focus and success than others, but the general impression from the hearing was that the efforts need to be greater and quicker. When IICSA publishes its report in this investigation, expected early next year, it is reasonable to assume it will contain a wide range of recommendations that the internet industry should implement. However, there is no need to wait for the report, as many suggestions and proposals were made by those giving evidence. The perception from the evidence is that the internet industry has been reactive when it needs to be proactive.

We have summarised in the chart below many of the proposals mentioned. It is not just the internet industry which needs to address the problem but society as a whole, as can be seen, for example, from the proposals around the need for better education of children, parents and childcare professionals.

The internet is worldwide, and that creates additional challenges but also opportunities to work together to ensure child safety around the globe. In 1996, 18% of the world’s known child sexual abuse imagery was hosted in the UK; since 2003 that figure has been less than 1%. That does not mean there is less imagery – it has simply moved elsewhere: whilst in 2018 only 41 such web pages were found and removed in the UK, in the Netherlands there were 48,900. Much of the abuse that is live-streamed emanates from South-east Asia; however, research by the IWF in 2018 found that commonly encountered live-streaming involved white girls from western backgrounds filmed in a home setting. The need for a co-ordinated approach is clear: governments can seek to work together, but the organisations which already have the greatest worldwide reach are the big internet industry players.

The evidence given by the individual core participants described how they were abused online (and in one case offline too). The abusers of all three have been given custodial sentences. Those who were abused solely online are not eligible for a payment from the Criminal Injuries Compensation Scheme, despite their abusers serving lengthy sentences. Their counsel noted that there was little or no legal redress open to them: as the law stands, they have no cause of action against the technology companies which provided the platforms through which the abuse took place.

The Online Harms White Paper, currently out for consultation, proposes the creation of a statutory duty of care. If implemented, will that offer a route to redress? Better, however, would be action which prevents the abuse from occurring in the first place. Organisations involved with children, and those in the internet industry, should be considering which of the proposals outlined below they can start to action now.

[Chart: summary of the proposals made in evidence]


Written by Paula Jefferson, partner and head of abuse and neglect at BLM

The challenges posed by online grooming

Social media use, as we have already reported this week, is prevalent among children: an estimated 20% of children aged 8 to 11 in the UK are said to have a social media profile, notwithstanding that social media providers require all users to be aged 13 or over before having their own profile. The figure rises to 70% among children aged 12 to 15. The potential for online grooming is huge and is already being exploited, as we have noted when commenting on the rise of sexting.


Learning from jurisdictions overseas and IICSA research

Much work has already been completed in jurisdictions outside England & Wales on preventing and responding to child sexual abuse (“CSA”). Tomorrow (12 April), the IICSA is holding a seminar on best practice overseas, to coincide with the publication of a report by the University of Central Lancashire on the same topic. The seminar will be held at the International Dispute Resolution Centre, London, and is open to the public.
