OpenAI has released GPT-5.4-Cyber, an AI model designed to find bugs and vulnerabilities in software, to a select group of customers.
The roll-out of the new model follows a similar release by Anthropic last week, which prompted security fears among senior government officials.
The model is a variant of OpenAI's flagship GPT-5.4 that is specifically designed to be cyber-permissive, meaning it is allowed to search for weaknesses in software that other models would be blocked from probing for security reasons.
If used correctly, AI cybersecurity models could help flag bugs long before a human would find them, enabling stronger defences against both human attackers and malicious AIs. Rival Anthropic said its Claude Mythos Preview, released 9 April, identified thousands of previously unknown "zero-day" vulnerabilities across major software and operating systems.
For the model to be useful, however, it has to remain out of the hands of malicious actors. OpenAI is taking a similar route to Anthropic, rolling out GPT-5.4-Cyber only to cybersecurity professionals who authenticate themselves with OpenAI. The process builds on the company's previously released Trusted Access for Cyber framework, which allows vetted individuals and trusted organisations to access its cyber tools.
Access to the new model will be limited to those who achieve a higher tier of clearance, which requires manual approval from a team member at OpenAI.
Anthropic's Claude Mythos Preview sent shockwaves through the cybersecurity community when it was released last week, with both the UK government and the US Treasury meeting top bankers to discuss the risks posed by AI-powered cyberattacks on financial institutions.