Regulators must ‘counter AI threats’ before general election, warns Alan Turing Institute

The Alan Turing Institute has urged regulators to counter threats to the general election posed by AI “before it’s too late.”

According to new research from The Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS), Ofcom and the Electoral Commission have a “rapidly diminishing window of opportunity” to preserve trust in the democratic process.

CETaS said that advances in AI technology have raised widespread concern that the technology could be used to spread disinformation, influence voters and disrupt the integrity of election processes, whether to manipulate the outcome of elections or to erode trust in democracy.

The study found that while there is limited evidence of AI changing the outcome of an election, there are early signs of damage to the broader democratic system. These include confusion over whether AI-generated content is real, deepfakes inciting online hate against political figures, and politicians exploiting AI disinformation for potential electoral gain.

Additionally, CETaS said that current electoral laws on AI are ambiguous, which could lead to its misuse. Examples include using ChatGPT to create fake campaign endorsements, which could damage the reputations of the individuals involved and undermine trust.

The study recommended that the Electoral Commission should ensure any voter information contains advice on how to remain vigilant about AI-based election threats such as attempts to cause confusion over the time and place of voting.

It also urged the Electoral Commission and Ofcom to create guidelines and request voluntary agreements for political parties detailing how they should use AI technology for campaigning.

Sam Stockwell, research associate at The Alan Turing Institute and lead author of the report, said: “With a general election just weeks away, political parties are already in the midst of a busy campaigning period. Right now, there is no clear guidance or expectations for preventing AI being used to create false or misleading electoral information.

“That’s why it’s so important for regulators to act quickly before it’s too late.”

Earlier this month, the National Cyber Security Centre (NCSC) launched a new personal internet protection service to increase the digital security of political candidates, election officials and other people at high risk of being targeted ahead of the general election.

The opt-in service aims to prevent these individuals falling victim to phishing, malware and other cyber threats. The NCSC said it will provide an extra layer of security on personal devices by warning users visiting a domain known to be malicious and blocking outgoing traffic to these domains.
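The domain-blocking approach the NCSC describes can be illustrated with a minimal sketch. This is not the NCSC's implementation; the blocklist entries and function names below are hypothetical, and a real service would operate at the DNS or network layer rather than in application code.

```python
# Hypothetical blocklist of domains known to be malicious (illustrative only).
MALICIOUS_DOMAINS = {"bad.example", "phish.example"}

def is_blocked(domain: str) -> bool:
    """Return True if the domain, or any parent domain, is on the blocklist."""
    parts = domain.lower().rstrip(".").split(".")
    # Check "a.b.c", then "b.c", then "c", so subdomains of a
    # blocked domain are blocked as well.
    return any(".".join(parts[i:]) in MALICIOUS_DOMAINS for i in range(len(parts)))

def handle_request(domain: str) -> str:
    """Warn on and block outgoing traffic to known-malicious domains."""
    if is_blocked(domain):
        return f"BLOCKED: outgoing traffic to {domain} denied (known malicious domain)"
    return f"ALLOWED: {domain}"
```

Checking parent domains means that `login.phish.example` is blocked whenever `phish.example` is, a common design choice in blocklist filtering.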
