Wall Street finally fears AI
Last week, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell did something unusual. They summoned the CEOs of Citigroup, Morgan Stanley, Bank of America, Wells Fargo, and Goldman Sachs to an emergency meeting at Treasury headquarters in Washington. The topic wasn't a banking crisis, a rate decision, or a looming recession. It was an AI model. Anthropic's Claude Mythos, a model so capable at finding and exploiting software vulnerabilities that the company decided not to release it publicly, had spooked regulators enough to pull the most powerful people in finance into a room on short notice. The model can identify zero-day vulnerabilities in every major operating system and every major web browser, generating working exploits 72.4% of the time. It even found a 27-year-old bug in OpenBSD, an operating system famous for its security.

This is the moment Wall Street started taking AI risk seriously. Not the existential, Skynet-flavored risk that dominates headlines, but the practical, operational kind: AI makes existing cyberattacks faster, cheaper, and harder to detect. The threat was never artificial general intelligence. It was always phishing at scale.
The skill floor just dropped
For decades, sophisticated cyberattacks required sophisticated attackers. Writing custom malware, crafting convincing spear-phishing emails, and scanning for vulnerabilities all demanded technical skill and time. AI changes this equation dramatically. According to KnowBe4's 2025 Phishing Threat Trends Report, more than 73% of phishing emails analyzed showed signs of AI involvement. Vectra AI reported that AI-powered scams surged 1,210% in 2025, far outpacing the 195% growth in traditional fraud. These aren't marginal improvements. They represent a fundamental shift in who can launch attacks and at what scale.

The mechanics are straightforward. Large language models can generate grammatically perfect, contextually relevant phishing messages in any language. Voice cloning tools can replicate a CEO's voice from a few seconds of audio. Deepfake video can put a convincing face on a fraudulent video call. In early 2024, a finance employee in Hong Kong was tricked into wiring $25 million after participating in a deepfake video conference where every executive on the call was synthetic. What used to require a team of skilled attackers can now be orchestrated by someone with a laptop and access to the right tools.
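To make the economics concrete, here is a back-of-the-envelope sketch in Python. Every figure in it is an illustrative assumption rather than a vendor quote or measured number; the point is the order of magnitude, not the decimals.

```python
# Back-of-the-envelope economics of AI-generated spear phishing.
# All figures below are illustrative assumptions, not vendor quotes.

TOKENS_PER_EMAIL = 700          # assumed: one personalized ~500-word message
PRICE_PER_1M_TOKENS = 1.00      # assumed: ~$1 per million generated tokens
EMAILS = 100_000                # one campaign

llm_cost = EMAILS * TOKENS_PER_EMAIL / 1_000_000 * PRICE_PER_1M_TOKENS

# Compare with a human drafting each message individually.
MINUTES_PER_EMAIL = 15          # assumed: research + writing per target
HOURLY_RATE = 30.0              # assumed: low-end skilled labor
human_cost = EMAILS * (MINUTES_PER_EMAIL / 60) * HOURLY_RATE

print(f"LLM campaign cost:   ${llm_cost:,.2f}")      # ~$70
print(f"Human campaign cost: ${human_cost:,.2f}")    # ~$750,000
print(f"Cost ratio: ~{human_cost / llm_cost:,.0f}x")
```

Under these assumptions, the machine-drafted campaign costs about $70 against roughly $750,000 for human-drafted messages: a gap of four orders of magnitude. The exact numbers are debatable; the direction is not.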
Finance is the highest-value target
Wall Street's alarm isn't paranoia. It's pattern recognition. The financial sector faces more AI-powered cyberattacks than any other industry. A 2025 survey by Deep Instinct found that 45% of financial services organizations experienced an AI-powered cyberattack in the prior 12 months, compared to 38% across other industries. Deepfake fraud attempts in financial services have increased by over 2,100% in the past three years, according to Signicat. Federal Reserve Governor Michael Barr noted that deepfake attacks have seen a twentyfold increase over the same period. The reasons are obvious. Banks handle enormous volumes of high-value transactions, store vast amounts of sensitive data, and serve as interconnected nodes in the global financial system. A successful breach doesn't just affect one institution; it cascades.

But there's a less obvious factor: legacy infrastructure. Many of the world's largest banks still run critical systems on decades-old technology. These systems often lack modern encryption standards, multi-factor authentication, and the ability to receive security patches. One analysis found that end-of-life software accumulates an average of 218 new vulnerabilities every six months after support ends. When AI models like Mythos can scan for and exploit these vulnerabilities at machine speed, the gap between attacker capability and defender readiness widens fast.
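A rough way to sanity-check that end-of-life figure yourself is to count CVEs published against a product after its support cutoff. The sketch below is a minimal version using the public NVD 2.0 API; the product keyword and EOL date are placeholders, keyword search is a crude stand-in for proper CPE matching, and the API caps each publication-date window at 120 days, hence the chunked queries.

```python
# Rough sanity check: how many CVEs mention a product after its end-of-life
# date? Uses the public NVD 2.0 API; keyword search is a crude proxy for
# CPE-based matching, so treat the count as an upper-bound estimate.
from datetime import datetime, timedelta
import time
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
PRODUCT = "Windows Server 2012"        # placeholder product keyword
EOL = datetime(2023, 10, 10)           # placeholder end-of-support date
WINDOW = timedelta(days=90)            # NVD caps pub-date ranges at 120 days

def cve_count(start: datetime, end: datetime) -> int:
    """Count CVEs mentioning PRODUCT published in [start, end)."""
    params = {
        "keywordSearch": PRODUCT,
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "resultsPerPage": 1,           # we only need totalResults
    }
    resp = requests.get(NVD_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["totalResults"]

# Tally the first year after EOL in 90-day chunks.
total, start = 0, EOL
for _ in range(4):
    end = start + WINDOW
    total += cve_count(start, end)
    start = end
    time.sleep(6)   # stay under NVD's unauthenticated rate limit (5 req / 30 s)

print(f"CVEs mentioning {PRODUCT!r} in the year after EOL: {total}")
```

None of those post-EOL vulnerabilities will ever receive a vendor patch, which is what makes the count worth watching.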
Billions on AI productivity, pennies on AI defense
Here's the uncomfortable contrast. Financial institutions are spending aggressively on AI tools to boost productivity, automate workflows, and improve customer service. Meanwhile, investment in AI-powered cybersecurity defenses has lagged behind. A BCG report found that just 5% of companies have significantly increased their cybersecurity budgets in response to AI threats. Only 25% of existing AI-enabled defense tools are considered advanced. And 69% of organizations report difficulty hiring AI-cybersecurity talent.

Global cybersecurity spending is projected to reach $240 billion in 2026, with AI-driven cybersecurity spend growing three to four times faster than the overall category. But the gap between offense and defense is growing. PwC found that while 78% of organizations plan to increase their cyber budgets, the top investment priority is AI for productivity (36%), ahead of cloud security (34%) and data protection (26%). The tools that make employees more efficient are the same category of tools that make attackers more efficient. This is the asymmetry that should worry everyone. The cost of launching an AI-powered attack is dropping toward zero. The cost of defending against one is not.
The Jevons paradox for cybersecurity
When steam engines became more efficient in the 19th century, economists expected coal consumption to fall. Instead, it skyrocketed. Cheaper energy made energy-intensive activity economically viable for the first time, and total usage exploded. This is the Jevons paradox: efficiency gains increase, rather than decrease, total consumption. The same logic applies to AI-powered cyberattacks. As AI lowers the cost and skill required to launch attacks, the rational expectation isn't fewer attacks. It's more attacks, across a wider range of targets, by a broader set of actors.

Consider the numbers. AI-generated phishing eliminates the grammatical errors and generic messaging that traditional filters relied on. Voice cloning removes the need for a human impersonator. Automated vulnerability scanning replaces weeks of manual reconnaissance with minutes of compute time. Each of these reductions in cost and effort expands the addressable market for cybercrime. Projected losses from generative AI-enabled fraud across the financial sector could reach $40 billion annually by 2027. That's not a prediction about some distant future. It's an extrapolation from trends already visible in the data.
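The paradox is easy to state as a toy model. In the sketch below, attack volume responds to per-attack cost with a constant price elasticity; all three parameters are made-up illustrative values. Whenever the elasticity exceeds 1, a fall in per-attack cost raises not only the number of attacks but attackers' total spending, which is the Jevons dynamic in miniature.

```python
# Toy constant-elasticity model of the Jevons dynamic for cyberattacks.
# All parameters are illustrative assumptions, not empirical estimates.

BASELINE_COST = 100.0      # assumed $ per attack before AI tooling
BASELINE_VOLUME = 1_000.0  # assumed attacks per month at baseline cost
ELASTICITY = 1.5           # assumed: demand for attacks is elastic (> 1)

def volume(cost: float) -> float:
    """Attack volume under constant price elasticity of demand."""
    return BASELINE_VOLUME * (cost / BASELINE_COST) ** -ELASTICITY

for cost in (100.0, 10.0, 1.0):      # AI drives per-attack cost down 100x
    v = volume(cost)
    print(f"cost/attack ${cost:>6.2f} -> {v:>12,.0f} attacks, "
          f"total attacker spend ${cost * v:>12,.0f}")
```

With these made-up numbers, a 100x drop in per-attack cost produces a 1,000x rise in attack volume and a 10x rise in total attacker spending. Efficiency doesn't shrink the problem; it grows the market.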
What Wall Street gets right that Silicon Valley doesn't
There's a cultural difference worth noting. Silicon Valley's default posture toward AI risk has been optimistic, sometimes to the point of dismissiveness. Move fast, ship the model, let the ecosystem figure out safety. The prevailing narrative has been that AI capabilities are inherently good, and that the market will sort out the risks. Wall Street doesn't think this way. In finance, paranoia is a feature, not a bug. Risk management isn't an afterthought; it's the business. When Bessent and Powell called that emergency meeting, the CEOs showed up because they understand something that the tech industry sometimes forgets: capability in the wrong hands is the threat, not capability itself.

Anthropic's response to Mythos reflects a version of this thinking. Rather than releasing the model publicly, the company restricted access to roughly 40 technology companies and launched Project Glasswing, a cybersecurity consortium including Apple, Nvidia, and Amazon, to help defenders patch the vulnerabilities Mythos finds before attackers can exploit them. It's a bet that coordinated disclosure and restricted access can buy time for defenses to catch up. Whether that bet pays off depends on how quickly the rest of the industry follows suit.

The Mythos model is a preview of what's coming. If one model can find thousands of zero-day vulnerabilities across every major platform, the next one, or the one after that, will be able to do the same. The question isn't whether these capabilities will proliferate. It's whether defenses will scale at the same pace.
The real risk was always capability
The fear that AI would "take over" has always been a distraction. The real risk, the one that just forced the most powerful people in American finance into an emergency meeting, is much more mundane. AI makes existing attack vectors faster, cheaper, and harder to detect. It doesn't need to be sentient to be dangerous. It just needs to be good at what it does. Wall Street finally understands this. The question now is whether the rest of us will catch up before the next $25 million deepfake call, or the next zero-day exploit chain, or the next phishing campaign that looks exactly like a message from your bank. The threat was never AGI. It was always capability in the wrong hands.
References
- Bessent, Powell warned bank CEOs about Anthropic model risks, Reuters, April 2026
- Claude Mythos Preview: The significance for cybersecurity, Anthropic Red Team, April 2026
- Anthropic Mythos model can find and exploit 0-days, The Register, April 2026
- AI Phishing Attack Prevention Strategies, KnowBe4, 2025
- AI scams in 2026: how they work and how to detect them, Vectra AI, 2026
- AI-Driven Cyber Threats Are Outpacing Defense Capabilities, BCG, December 2025
- Exclusive: Financial sector most susceptible to AI-powered cyberattacks, Axios, July 2025
- Fraud attempts with deepfakes have increased by 2137%, Signicat, 2025
- Deepfakes and the AI arms race in bank cybersecurity, Bank for International Settlements, April 2025
- AI vs. AI: The arms race for security, J.P. Morgan Private Bank, 2026
- Making frontier cybersecurity capabilities available to defenders, Anthropic, 2026
- AI and the New Face of Fraud, Fortune, January 2026
- AI Cyber Threats Put Consumers' Trust In Banks At Risk, Forbes, April 2025
- Global Cybersecurity Outlook 2026, World Economic Forum, January 2026