UK names five projects to get funding for CSAM detection

The UK government has named five projects that have scored public funding under a ‘tech safety’ challenge announced in September — when the Home Office said it wanted to encourage the tech industry and academia to develop novel AI/scanning technologies that could be implemented on end-to-end encrypted (e2ee) services to detect child sexual abuse material (CSAM).

A number of mainstream messaging services, such as Facebook-owned WhatsApp and Apple’s iMessage, already use e2ee.

The Home Office has claimed it’s looking for a middle ground that doesn’t require digital service providers to abandon end-to-end encryption but would still allow CSAM to be detected and passed to law enforcement, without the security being backdoored. At least, that’s the claim.

The Safety Tech Challenge Fund is being administered by the Department for Digital, Culture, Media and Sport (DCMS) — and following the announcement of the funding awards yesterday, digital minister Chris Philp said in a statement: “It’s entirely possible for social media platforms to use end-to-end encryption without hampering efforts to stamp out child abuse. But they’ve failed to take action to address this problem so we are stepping in to help develop the solutions needed. It is not acceptable to deploy E2EE without ensuring that enforcement and child protection measures are still in place.”

The five projects, which have been awarded an initial £85,000 apiece by the UK government — with a further £130k potentially available to be divided up between the “strongest” projects (bringing the total funding pot up to £555k) — are as follows:

  • Edinburgh-based digital forensics firm Cyan Forensics and real-time risk intelligence specialist Crisp Thinking, in partnership with the University of Edinburgh and the not-for-profit Internet Watch Foundation, which will develop a plug-in to be integrated within encrypted social platforms to “detect CSAM by matching content against known illegal material” (a sketch of this kind of hash matching follows this list)
  • Parental control app maker SafeToNet and Anglia Ruskin University which will develop a suite of live video-moderation AI technologies that can run on any smart device — and are intended to “prevent the filming of nudity, violence, pornography and CSAM in real-time, as it is being produced”
  • Enterprise security firm GalaxKey, based in St Albans, which will work with Poole-based content moderation software maker Image Analyser and digital identity and age assurance firm Yoti to “develop software focusing on user privacy, detection and prevention of CSAM and predatory behavior, and age verification to detect child sexual abuse before it reaches an E2EE environment, preventing it from being uploaded and shared”
  • Content moderation startup DragonflAI, based in Edinburgh, which will also work with Yoti to combine its on-device AI nudity-detection tech with the latter’s age assurance technologies — in order to “spot new indecent images within E2EE environments”
  • Austria-based digital forensics firm T3K-Forensics has also won UK government funding to implement its AI-based child sexual abuse detection technology on smartphones to detect newly created material — providing what the government bills as “a toolkit that social platforms can integrate with their E2EE services”
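
None of the winning projects has detailed its detection pipeline, but “matching content against known illegal material” generally implies perceptual hashing: each image is reduced to a compact fingerprint that survives resizing and re-encoding, then compared against a curated database of fingerprints of known material (the Internet Watch Foundation maintains one such hash list; Microsoft’s PhotoDNA works on the same principle). Below is a minimal, purely illustrative Python sketch of the idea using the open-source imagehash library; the hash value and distance threshold are invented for illustration, and the real databases are not public.

```python
# Illustrative sketch of matching an image against known material via
# perceptual hashing. Requires Pillow and imagehash (pip install imagehash).
from PIL import Image
import imagehash

# Hypothetical stand-in for a curated database of perceptual hashes of
# known material (e.g. the IWF hash list, which is not publicly available).
KNOWN_HASHES = {
    imagehash.hex_to_hash("d1d1b1a1c1e1f101"),  # invented example value
}

# Hamming-distance threshold: lower is stricter (fewer false positives,
# more misses); the right value is an empirical tuning question.
MAX_DISTANCE = 5

def matches_known_material(path: str) -> bool:
    """Return True if the image's perceptual hash lies within
    MAX_DISTANCE bits of any hash in the known-material set."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(matches_known_material("upload.jpg"))
```

Note that this approach only flags copies of already-catalogued material; detecting newly created content, as several of the other funded projects propose, requires classifiers rather than hash lookups, which is where the accuracy concerns discussed below come in.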

The winning projects will be evaluated at the end of a five-month delivery phase by an external evaluator — which the government said will look at success criteria including “commercial viability to determine deployability into the market, and long term impact”.

In a joint statement, DCMS and the Home Office also touted the forthcoming Online Safety Bill — which they claimed will transform how illegal and harmful online content is dealt with by placing a new duty of care on social media and other tech companies towards their UK users.

“This will mean there will be less illegal content such as child sexual abuse and exploitation online and when it does appear it will be removed quicker. The duty of care will still apply to companies that choose to use end-to-end encryption,” they added.

Prior guidance put out by DCMS this summer urged social media and messaging firms to “prevent” the use of e2ee on child accounts. So the government appears to be evolving its approach (or at least its messaging), banking instead on CSAM detection tools being baked into e2ee services themselves.

Assuming, of course, any of the aforementioned projects delivers the claimed CSAM detection/prevention functionality at acceptable levels of accuracy (i.e. avoiding any ruinous false positives).

Another salient question is whether the novel AI/scanning techs could result in vulnerabilities, or even backdoors, being baked into e2ee systems, thereby undermining everyone’s security.

There is also the question of whether UK citizens will be happy with state-mandated scanning of their electronic devices — given all the privacy and liberty issues that entails.

While the UK public has generally been happy to get behind the notion of improving online child safety, it might be rather less happy to discover that means blanket device scanning — especially if novel technologies end up sending alerts to law enforcement about people’s innocent holiday/bath-time snaps.

The political backlash around misfiring ‘safety’ tech could be swift and substantial.

iPhone maker Apple put the rollout of its own on-device CSAM scanning tech — ‘NeuralHash’ — on hold this fall after a privacy backlash.
