Google is using artificial intelligence (AI) solutions, including Gemini Nano, to boost user safety on Search, Chrome and Android.
Google said that Gemini Nano, its on-device large language model (LLM), is being introduced in Chrome on desktop to provide enhanced protection for Chrome users, offering an additional layer of defence against online scams.
According to the tech giant, the on-device approach delivers immediate insight into risky websites, enabling stronger protection against online scams.
“Gemini Nano's LLM is perfect for this use case thanks to its ability to distil the varied and complex nature of websites, helping us adapt more quickly to new scam tactics,” Google said.
The company is already using the new AI-based approach to protect users from remote technical support scams, it added, and plans to extend the protection to Android devices and further scam types in the future.
The tech firm added that it is also using AI-powered defences on Search to help detect and block “hundreds of millions” of scam-related results every day. According to the firm, the new AI-powered systems allow it to catch 20 times more scam-related pages than before.
“These improvements help ensure the results you get are legitimate, and protect you from harmful sites trying to steal your sensitive data,” it said.
Advancements in AI have bolstered Google’s scam-fighting technologies, allowing the firm to analyse large quantities of text on the web, identify coordinated campaigns and detect emerging threats on Search.
According to Google, scam campaigns by bad actors, such as those impersonating airline customer service providers online, have been reduced by more than 80 per cent in Search thanks to the new systems.
The move comes as online scams remain a significant threat in the UK.
A report published in April by the UK government found that the overall prevalence of cybercrime among businesses and charities remained consistent in 2025 compared with 2024, affecting 20 per cent of businesses in 2025 against 22 per cent in 2024, and 14 per cent of charities in both years. The prevalence of non-phishing cybercrime also held steady, at four per cent of businesses in 2025 (three per cent in 2024) and three per cent of charities (two per cent in 2024).
A study by Charity Digital also suggests that AI may be increasingly used in advanced cyber-attacks, such as adaptive malware that evolves to bypass security, phishing emails that mimic trusted contacts, and deepfake videos that could trick teams into sharing confidential information.