IMF warns of systemic threat from AI

Artificial intelligence could make cyberattacks a systemic threat to global finance, the International Monetary Fund has warned, saying advanced models can help attackers exploit vulnerabilities faster than institutions can fix them.
In a blog post published on Thursday, the IMF said its latest analysis suggests that “extreme cyber-incident losses could trigger funding strains, raise solvency concerns, and disrupt broader markets.”
According to the organization, the financial system relies on shared digital infrastructure, including software, cloud services, and the networks that carry payments and other data. The fund warned that advanced AI models can sharply reduce the time and cost needed to identify and exploit weaknesses, raising the risk of simultaneous attacks on widely used systems.
The fund cited Anthropic’s recent controlled release of Claude Mythos Preview, which it described as “an advanced AI model with exceptional cyber capabilities.” According to the IMF, Mythos could find and exploit vulnerabilities in every major operating system and web browser, “even when used by non-experts.”
AI-driven cyber risks could destabilize the financial system if they are not managed carefully, the IMF stressed, noting that attacks could spread beyond finance because banks share digital foundations with energy, telecommunications and public services.
“Defenses will inevitably be breached, so resilience must also be a priority,” the IMF warned, calling for cyber stress testing, scenario analysis, board-level oversight, public-private cooperation and stronger international coordination.
The warning comes amid broader concerns over the misuse of AI. A recent UK study found artificial intelligence was being increasingly used by human traffickers to “identify, recruit and control victims at scale.”
The White House is also reportedly considering reviewing new AI models before they are released to avoid political fallout from potential AI-enabled cyberattacks, the New York Times reported earlier this week.
AI chatbots have also increasingly been implicated in facilitating serious and violent crimes. A recent joint investigation by CNN and the Center for Countering Digital Hate found that eight out of ten AI chatbots were eager to help researchers simulate the planning of violent attacks, including school shootings, religious bombings and assassinations, in some cases wishing would-be attackers “happy (and safe) shooting!”