Tackling Child Sexual Exploitation: Making the Internet Safer for Children

In this blog, the SafeToNet Foundation outlines how its work supports efforts to end CSE, and considers how the internet may become a safer place for children and young people.

The SafeToNet Foundation is a UK registered charity focussed on safeguarding children in the online digital context. Part of our “charitable objects”, as agreed with the Charity Commission, is to “educate and inform” the general public on the pernicious and complex problem of online harms, including sexual exploitation, as they pertain to children. To this end we produce an ongoing series of audio podcasts exploring the law, culture and technology of online child safety, which you can download for free from www.safetonetfoundation.org.

There are many online risks that children face, as identified some years ago by Sonia Livingstone’s seminal work, the EU Kids Online project.

How can children be blocked from accessing legal online adult pornography? Not only does viewing this kind of content present developmental problems for the young adolescent mind, but it is also used to normalise sexual acts during the grooming process for sexual exploitation. Currently anyone of any age can easily bypass the all-too-porous age self-declaration processes (which are included in the ICO’s Age Appropriate Design Code) used by most service providers: a 12-year-old can profess to be 14, and a 40-year-old can profess to be 12.

The UK’s approach was to introduce an Age Verification system which purveyors of this kind of content, the PornHubs of the world, would have to deploy if their services were aimed at people resident in the UK, irrespective of where the service providers themselves were based. This was due to go live in late 2019, but shortly before the general election was announced, this legislation, which many saw as controversial, was pulled with the explanation that it made more sense to fold it into the general “Online Harms Legislation” due for publication in early 2021.

In some respects you can see the logic of that, although the delay means more children will be harmed, and given that the Government has its hands more than full with Brexit and COVID, who knows how long the delay might actually be?

Adult pornography is not the only imagery used in grooming. Photos and videos of child sexual abuse are also used, again to “normalise” this behaviour and to desensitise the target child to it. PhotoDNA is a well-known technology used to tag, fingerprint or “hash” known images of child sexual abuse (often sourced from children “sexting”) so that they can be more readily identified and taken out of circulation. But this doesn’t necessarily cover private, locally stored collections of such material, and because it is a retrospective analysis of the image, it is too late: the abuse used to create the image has already happened, and the child has already been victimised.
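To make the hashing idea concrete, here is a minimal sketch of hash-list matching. It uses the open-source imagehash perceptual hash purely for illustration; PhotoDNA itself is proprietary, and the hash value and distance threshold below are hypothetical.

```python
# Illustrative only: PhotoDNA is proprietary, so this sketch uses the
# open-source "imagehash" library to show the general idea of matching
# an uploaded image against a list of known hashes.
from PIL import Image
import imagehash

# Hashes of known abuse images, as would be supplied by a hash-list
# provider (the value here is a made-up placeholder).
KNOWN_HASHES = [imagehash.hex_to_hash("8f373714acfcf4d0")]

def matches_known_image(path: str, max_distance: int = 5) -> bool:
    """Return True if the image's perceptual hash is close to a known hash."""
    candidate = imagehash.phash(Image.open(path))
    # Hamming distance between hashes: small distances indicate near-duplicates,
    # so resized or re-encoded copies of a known image still match.
    return any(candidate - known <= max_distance for known in KNOWN_HASHES)

if matches_known_image("upload.jpg"):
    print("Flagged for review / takedown")
```

The key property is that matching happens against previously identified material, which is exactly the retrospective limitation described above.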

There are new, innovative technologies that can be used to help address this, and many members of the UK SafetyTech industry can bring their solutions to bear on this issue. But there’s a problem. At the time of writing, as a result of poorly thought-out legislation, the EU’s ePrivacy Directive makes it illegal to use established technologies such as PhotoDNA. This sounds too incredible to be true, surely some mistake; but it really is a case of fact being more surreal than fiction.

As we go down this rabbit hole, things get even more bizarre. In a last-minute attempt to rectify this, the EU has proposed a “temporary derogation” of the ePrivacy Directive, which restores the status quo. At least, that is its intent, and it would do so were it to come into law. All well and good, you may well be thinking.

But there’s a but. The ePrivacy temporary derogation itself causes problems, in that it restricts the range of technologies that can be used to help eliminate CSAM to those that are already well established and mature, and it excludes new, innovative tech that could do a different and/or better job.

Here’s the wording from the Temporary Derogation itself: “…the derogation provided for by this Regulation should be limited to well-established technology that is regularly used by number-independent interpersonal communications services for the purpose of detecting and reporting child sexual abuse online and removing child sexual abuse material before the entry into force of this Regulation.”

And it goes on to also say “The use of the technology in question should therefore be common in the industry…”.

While PhotoDNA and similar technologies are invaluable tools in the anti-CSAM and anti-CSE arsenal, their weakness lies in their retrospective analysis: for them to succeed, they rely on the abuse having actually happened.

If you’ve followed the white rabbit so far, hang on to your (mad) hat. You may be thinking that we’ve already left the EU, so what does this matter to us? It matters because any EU laws passed before the end of the transition period will automatically become UK law. There is no Parliamentary oversight of the EU-law to UK-law transcription process, and we have no MEPs who can directly influence this law or protect the UK’s nascent but influential “SafetyTech” industry, as described by the DCMS here:

Safer technology, safer users: The UK as a world-leader in Safety Tech

Even if this Temporary Derogation doesn’t come into UK law in a few weeks’ time, our closest market, comprising half a billion of the world’s wealthiest people, would still be denied access to the UK’s SafetyTech industry, or at least to a number of its members, because of EU law. And if the ePrivacy Temporary Derogation doesn’t become law, the original ePrivacy Directive will apply instead, and that’s even worse.

Grooming, those online conversations that lead to a child taking and sharing an intimate image, feels nice. If it didn’t, no one would be groomed. Telling a child to report anything that makes them feel uncomfortable therefore seems misguided advice, because the most dangerous type of conversation won’t be reported. The sign of a successful grooming operation is that the child doesn’t even know they’ve been groomed.

We know from the IWF that the family bathroom, with its privacy lock, is the most dangerous place for a child to be with a smartphone, and we also know from the International Justice Mission that there is a thriving international real-time, encrypted, streamed, pay-per-view-and-direct child sex abuse webTV industry, using platforms such as Zoom and Skype. What can technology do to disrupt this behaviour and this industry? How do you allow innocent social conversations to take place, but identify and filter out those that could lead to the sharing of an intimate image, and do so consistently across a range of web-based services that are increasingly using end-to-end encryption?

A child is unlikely to spontaneously share an intimate image. They will do so as a result of a conversation with either another child, an adult pretending to be a child, or an adult who may be using a false ID. Those conversations might be the classic “long-term, high-investment” type, where the perpetrator builds trust and confidence in the victim, or they might be simple, direct requests sent by the hundred, almost a spamming exercise, in the expectation that, by the law of averages, someone, some child, will respond.

Imagine a world where conversations that lead to children taking and sharing intimate images are disrupted and can’t take place. The solution has to be on the device that the child uses: where the child is, the smartphone is; where the smartphone is, the child is. The keyboard is impervious to platform encryption, because text is captured before it is encrypted, and it is used across social media sites and apps. So it would seem logical to place SafetyTech there, to influence, moderate and nudge children so that they make safer choices.
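As a purely illustrative sketch of what keyboard-level SafetyTech might look like, the toy scorer below flags risky phrases as they are typed and offers a nudge. This is not SafeToNet’s technology; the patterns, threshold and function names are hypothetical, and a real system would use a trained, on-device classifier rather than keyword matching.

```python
# Illustrative only: a toy, on-device risk scorer for typed text.
# The phrase list and threshold are hypothetical placeholders.
import re
from typing import Optional

RISKY_PATTERNS = [
    r"\bsend (me )?a (pic|photo|picture)\b",
    r"\bdon'?t tell (your )?(mum|dad|parents|anyone)\b",
    r"\bour (little )?secret\b",
]

def risk_score(text: str) -> int:
    """Count how many risky patterns appear in the text being typed."""
    return sum(1 for pattern in RISKY_PATTERNS if re.search(pattern, text.lower()))

def on_keystroke(buffer: str) -> Optional[str]:
    """Called by a custom keyboard as the child types; returns a nudge or None."""
    if risk_score(buffer) >= 1:
        # Nudge rather than block: prompt the child to pause and think.
        return "Are you sure you want to send this? You can talk to a trusted adult."
    return None

print(on_keystroke("this is our little secret, send a pic"))
```

Because the analysis runs at the keyboard, it happens before any platform encryption is applied, which is why the approach works even on end-to-end encrypted services.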

Imagine a world where the camera analyses what it sees in real time and takes decisions and action in real time to block intimate images of children being taken or recorded. Such images could be greyed out, or the camera shut down, as they are captured. There is enough processing power in sophisticated smartphone cameras to do this; it “just” needs the software to be developed.
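A minimal sketch of that idea, assuming an on-device classifier exists, might look like the loop below. The classifier here is a stand-in and the greying-out is deliberately simplistic; it only illustrates where such a check would sit in the capture pipeline, not how SafeToWatch actually works.

```python
# Illustrative only: a frame-by-frame loop that greys out frames a
# classifier flags. The classifier is a hypothetical placeholder; a real
# system would run an on-device model trained for this purpose.
import cv2
import numpy as np

def frame_is_flagged(frame: np.ndarray) -> bool:
    """Hypothetical placeholder for an on-device image classifier."""
    return False  # replace with a real model's prediction

capture = cv2.VideoCapture(0)  # the device camera
while True:
    ok, frame = capture.read()
    if not ok:
        break
    if frame_is_flagged(frame):
        # Grey out the frame before it reaches the app or is recorded.
        frame = np.full_like(frame, 128)
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()
```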

The SafeToNet Foundation’s supporting company, SafeToNet, has such technologies. The SafeToNet Foundation worked with the UK Government to make available 1,000,000 free-for-life copies of SafeToNet’s safeguarding and wellbeing smartphone solution during the first COVID lockdown, when so many children went online for longer, out of boredom or to do their homework, and their exposure to online risk increased.

SafeToNet recently announced their Real Time Video Threat Detection software, called SafeToWatch, which they are currently productising for release in 2021.

Technology can play a vital role in safeguarding children online from myriad online risks, not least of which is Child Sexual Exploitation. But technology doesn’t exist in a vacuum. Laws and politics create opportunities, but can also, even if inadvertently, close down opportunities for technology to make a difference, and as we’ve seen, we have to be diligent in those realms too.

This article was written by Neil Fairbrother, Journalist, SafeToNet Foundation.