Bankers working for Goldman Sachs in Hong Kong no longer have access to Anthropic’s AI models, according to a report by the Financial Times.
Citing sources familiar with the matter, the paper said employees have been unable to access Claude models, either directly or through the bank’s internal AI platform, for several weeks.
AI models made by western companies are currently banned in mainland China, but the FT said Hong Kong operates outside mainland censorship rules, meaning any restrictions on access are imposed by the AI companies themselves.
A source told the FT that the restrictions stem from Goldman Sachs strictly interpreting its contract with Anthropic following recent discussions between the two organisations. The source confirmed that the restrictions do not apply to other AI companies such as OpenAI.
A spokesperson for Anthropic told the FT that its Claude models had never been officially supported in Hong Kong, while Goldman Sachs declined to comment.
FStech has reached out to Goldman Sachs and Anthropic for comment.
Hong Kong is a major investment banking and finance hub for most global banks operating across Greater China, with the FT saying they use the territory to co-ordinate cross-border activity including trading, M&A and share sales.
Some American AI companies are cautious about their models being used in China because of the threat of “distillation,” in which local actors train new models on the outputs of foreign ones.
In January, Microsoft and OpenAI started an investigation into whether a group linked to Chinese AI startup DeepSeek improperly obtained data from OpenAI’s technology.
The probe followed concerns that the data extraction could breach terms of service or indicate unauthorised access by individuals associated with DeepSeek.
Microsoft’s security researchers observed suspicious activity in the autumn, in which individuals believed to be linked to DeepSeek were using OpenAI’s Application Programming Interface (API) to exfiltrate large amounts of data. The API allows developers to integrate OpenAI’s models into external applications under licence, and such misuse could violate its terms.