Government to collaborate with Microsoft on deepfake detection framework

The UK government has announced plans to work with Microsoft, academic institutions and technology experts to develop what it describes as the world's first deepfake detection evaluation framework aimed at setting industry standards for identifying harmful AI-generated content.

The Home Office said on Thursday that the initiative will explore how technology can recognise and assess deepfakes whilst establishing clear expectations for detection standards across the industry. The framework will test detection technologies against real-world threats including child sexual abuse, fraud and impersonation, according to a government statement.

Jess Phillips, minister for safeguarding, said the framework would address what she called an injustice affecting millions. "For the first time, this framework will take on the injustice faced by millions, seek out the tactics of vile criminals, and close loopholes to stop them in their tracks so they have nowhere to hide," she told reporters.

The announcement follows growing concern over the rapid proliferation of deepfake content. Government figures estimate eight million deepfakes were shared in 2025, a sixteen-fold increase from 500,000 in 2023. The rising sophistication of generative artificial intelligence tools has made creating convincing fake images, video and audio significantly easier and cheaper.

The framework builds on work undertaken by the Accelerated Capability Environment, which ran a Deepfake Detection Challenge in 2024. That initiative, supported by the Home Office and the Department for Science, Innovation and Technology, drew more than 150 participants and resulted in six teams developing solutions now undergoing benchmark testing and user trials.

Technology minister Liz Kendall said deepfakes were being weaponised for criminal purposes. "Deepfakes are being weaponised by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear," Kendall said in a statement to Reuters.

The initiative comes days after the communications regulator Ofcom and the Information Commissioner's Office launched parallel investigations into Elon Musk's Grok chatbot over reports it generated non-consensual sexualised images, including of children. French prosecutors raided X's offices on Tuesday as part of an investigation into alleged offences including distributing child sexual abuse material and deepfakes.

Britain criminalised the creation of non-consensual intimate images last year. Deputy Commissioner Nik Adams of the City of London Police said the framework would help law enforcement stay ahead of offenders by rigorously testing detection technologies and setting industry expectations.

The framework aims to identify gaps in current detection capabilities and provide the government and law enforcement with better knowledge of where technological limitations remain, officials said.


