AI is the best hacker
Ledger's CTO Charles Guillemet said it plainly: "It's really easier to hack everything." He was talking about crypto, but the warning applies to all software. AI is making attacks cheaper, faster, and more scalable, and the economics of cybersecurity are breaking down in ways that favor the attacker. This is not a future problem. It is happening now. And the defenders are losing ground.
The economics have flipped
Cybersecurity has always been an asymmetric game. Attackers need to find one way in. Defenders need to cover everything. That asymmetry existed before AI, but AI has made it dramatically worse.

Guillemet's argument is straightforward. AI tools are driving down the cost and difficulty of cyberattacks across the board. Code generation, vulnerability scanning, phishing at scale, malware that adapts in real time: all of it is now cheaper and faster than ever. The old equilibrium, where security was expensive but manageable, has been invalidated.

The numbers back this up. According to an IMF study, cybercrime is projected to cost the world $23 trillion by 2027, a 175% increase from 2022. Weekly cyberattacks per organization have more than doubled in the past four years, from 818 to 1,984. IBM's X-Force Threat Intelligence Index found that major supply chain and third-party breaches quadrupled over the past five years. And the median time to exploit a new vulnerability is now under five days, with 131 new CVEs disclosed every single day. AI didn't create these trends. But it accelerated every one of them.
Phishing at scale, deepfakes on demand
The most immediate impact of AI on hacking is in social engineering. Phishing has always been the easiest way into a system, and AI has turned it into an industrial operation. According to the Kiteworks State of AI Cybersecurity 2026 report, hyper-personalized phishing is the top concern among security professionals at 50%, followed by automated vulnerability scanning and exploit chaining at 45%, adaptive malware at 40%, and deepfake voice fraud at 40%. These are not separate threats. Attackers are now using AI to orchestrate full attack chains from reconnaissance through data exfiltration with minimal human involvement.

Deepfakes have become a particularly effective weapon. Deepfake-related fraud now accounts for 6.5% of all fraud attacks, a 2,137% increase from 2022. Deloitte's Center for Financial Services estimates that losses from generative AI fraud will hit $40 billion by 2027. Experian's 2026 forecast warns of deepfake candidates passing job interviews and infiltrating companies, gaining access to internal systems without anyone realizing the employee isn't real.

Forty-six percent of small and medium-sized businesses report facing AI-generated phishing attacks in the past year. The barrier to entry for sophisticated attacks has effectively collapsed. You no longer need a skilled hacker. You need a subscription to the right tool.
Supply chain attacks, supercharged
The Ledger story is instructive beyond crypto. In a separate incident, Guillemet warned the crypto community about an npm supply chain attack where malicious code silently swapped crypto addresses during transactions. The payload worked by intercepting wallet operations in real time, a level of sophistication that used to require significant resources.

Supply chain attacks have become the preferred vector for sophisticated threat actors. IBM's research found that rather than breaking through a single organization's defenses, adversaries increasingly target interconnected systems: vendors, open-source dependencies, CI/CD workflows, and cloud interfaces. Group-IB identified six major supply chain attack groups as active threats for 2026, with npm ecosystem attacks becoming particularly prevalent.

AI accelerates this in two ways. First, AI tools can scan vast codebases for exploitable patterns faster than any human team. Second, AI-generated code itself introduces new attack surface. When developers use AI assistants that suggest dependencies, some of those suggestions point to packages that don't exist, creating opportunities for attackers to register those names and inject malicious code. This technique, known as slopsquatting, is a direct consequence of AI's tendency to hallucinate plausible-sounding package names.
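One cheap mitigation is to vet AI-suggested dependencies before they ever reach a lockfile. The sketch below is a minimal illustration rather than production tooling: it queries the public npm registry for a suggested package and flags the classic slopsquatting signals, either the name does not exist at all or it appeared very recently with almost no downloads. The thresholds and the `vet_npm_package` helper are this example's own assumptions.

```python
# Sketch: vet an AI-suggested npm dependency before installing it.
# Uses the public npm registry endpoints; thresholds are illustrative.
import datetime
import requests

REGISTRY = "https://registry.npmjs.org"
DOWNLOADS = "https://api.npmjs.org/downloads/point/last-week"

def vet_npm_package(name: str, min_age_days: int = 90,
                    min_weekly_downloads: int = 500) -> list[str]:
    """Return a list of red flags for a suggested package (empty = no flags)."""
    flags = []
    meta = requests.get(f"{REGISTRY}/{name}", timeout=10)
    if meta.status_code == 404:
        # The package does not exist: a classic slopsquatting opportunity.
        return [f"'{name}' is not on the registry; an attacker could register it"]
    meta.raise_for_status()
    created = meta.json().get("time", {}).get("created")
    if created:
        age = (datetime.datetime.now(datetime.timezone.utc)
               - datetime.datetime.fromisoformat(created.replace("Z", "+00:00")))
        if age.days < min_age_days:
            flags.append(f"package is only {age.days} days old")
    dl = requests.get(f"{DOWNLOADS}/{name}", timeout=10)
    if dl.ok and dl.json().get("downloads", 0) < min_weekly_downloads:
        flags.append("very low weekly downloads")
    return flags

if __name__ == "__main__":
    for pkg in ["left-pad", "definitely-not-a-real-pkg-xyz"]:
        print(pkg, "->", vet_npm_package(pkg) or "looks established")
```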
Automated vulnerability discovery changes everything
Traditional vulnerability research was slow, expensive, and required deep expertise. AI is changing that calculus entirely. Researchers at Tencent reported that by 2025, their AI-driven system had automatically discovered more than 60 real-world security vulnerabilities, half of them high-risk. The ThreatDown 2026 State of Malware Report documented threat actors using autonomous AI agents to conduct reconnaissance across thousands of VPN endpoints, harvest credentials, and penetrate networks simultaneously, with no human at the keyboard for the majority of operations.

This is the shift that matters most. Offensive AI doesn't just help hackers work faster. It enables entirely new attack patterns that weren't economically viable before. Scanning every public-facing system of a target organization for known and unknown vulnerabilities, in parallel, around the clock, is now within reach of any motivated attacker with access to the right models.

The defensive side is trying to keep up. AI-powered threat detection, anomaly identification, and automated patching are all improving. But there is a structural problem: defenders need to get everything right, every time. Attackers need to succeed once. AI amplifies the attacker's advantage disproportionately because it makes the "try everything" approach practically free.
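To make the "try everything in parallel" economics concrete, here is a minimal sketch pointed defensively at your own estate: a few lines of stdlib Python that grab the HTTP Server banner from a list of hosts concurrently. The hostnames are placeholders; the same loop scales to thousands of endpoints, which is precisely the asymmetry an attacker enjoys.

```python
# Sketch: concurrent banner check over your own public hosts.
# Hostnames are placeholders; a real inventory would come from an asset database.
import asyncio

HOSTS = ["example.com", "www.example.org"]  # placeholder inventory

async def grab_banner(host: str, port: int = 80) -> str:
    """Open a connection, send a minimal HEAD request, return the Server header."""
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(f"HEAD / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
    await writer.drain()
    raw = await asyncio.wait_for(reader.read(2048), timeout=5)
    writer.close()
    await writer.wait_closed()
    for line in raw.decode(errors="replace").splitlines():
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return "(no Server header)"

async def main() -> None:
    # One cheap coroutine per host: scanning N targets costs roughly the same
    # wall-clock time as scanning one.
    results = await asyncio.gather(*(grab_banner(h) for h in HOSTS),
                                   return_exceptions=True)
    for host, result in zip(HOSTS, results):
        print(f"{host}: {result}")

if __name__ == "__main__":
    asyncio.run(main())
```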
The agent problem
The rise of autonomous AI agents introduces an entirely new category of attack surface. Every agent you deploy is a non-human identity with real access to real systems. It authenticates with API keys or OAuth tokens. It reads and writes to your data. And unlike a human, it doesn't question suspicious instructions.

OWASP's 2026 top 10 for agentic AI security highlights the risks: prompt injection, tool abuse, data exfiltration, memory poisoning, goal hijacking, and cascading failures across multi-agent systems. A Gravitee survey found that only 24.4% of organizations have full visibility into which AI agents are communicating with each other, and that more than half of all agents run without any security oversight or logging.

Meta's security team formalized this with their "Agents Rule of Two" framework. An agent becomes dangerous when it can simultaneously process untrusted inputs, access sensitive data, and take external actions. Any two of those three properties are manageable. All three together, and you have automated compromise waiting to happen.

More autonomous agents mean more API keys floating around, more standing permissions, more trust assumptions baked into systems that nobody is actively monitoring. The convenience of agent-powered automation comes with an attack surface that scales with every new agent you add.
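The Rule of Two is simple enough to encode as a deploy-time gate. The sketch below is one possible encoding, not Meta's implementation; the capability flags and the `AgentProfile` structure are assumptions you would map onto your own agent configs.

```python
# Sketch of Meta's "Agents Rule of Two" as a deploy-time gate.
# The capability names are this example's own; map them to your agent config.
from dataclasses import dataclass

@dataclass
class AgentProfile:
    processes_untrusted_input: bool   # e.g. reads web pages, inbound email
    accesses_sensitive_data: bool     # e.g. customer records, credentials
    takes_external_actions: bool      # e.g. sends requests, writes to prod

def violates_rule_of_two(agent: AgentProfile) -> bool:
    """All three properties at once is automated compromise waiting to happen."""
    return (agent.processes_untrusted_input
            and agent.accesses_sensitive_data
            and agent.takes_external_actions)

support_bot = AgentProfile(processes_untrusted_input=True,
                           accesses_sensitive_data=True,
                           takes_external_actions=False)  # two of three: manageable

autonomous_ops = AgentProfile(processes_untrusted_input=True,
                              accesses_sensitive_data=True,
                              takes_external_actions=True)

assert not violates_rule_of_two(support_bot)
assert violates_rule_of_two(autonomous_ops)  # block deployment or add a human gate
```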
The "AI for defense" narrative is incomplete
Yes, defenders use AI too. AI-powered security tools are getting better at detecting anomalies, scanning code, prioritizing vulnerabilities, and automating incident response. This is real and valuable progress. But the narrative that AI creates a level playing field between attackers and defenders is misleading.

The economics favor offense. An attacker using AI to generate a thousand personalized phishing emails has a higher return on investment than a defender using AI to catch them. An attacker scanning for zero-days across millions of endpoints has a structural advantage over a defender patching them one at a time. Security Boulevard framed it well: AI introduces a new asymmetry in cybersecurity, one that favors speed, scale, and automation over traditional human-centric defenses. The offensive side benefits more from AI because attacking is inherently a search problem: find the one weakness. And AI is exceptionally good at search.
Singapore as a fintech hub feels this directly
Singapore's position as one of Asia's most important financial centers makes it a high-value target. The Monetary Authority of Singapore has been proactive, issuing consultation papers on AI risk management guidelines for financial institutions and establishing a National AI Council to oversee AI governance across key sectors, including finance.

But the threat is already here. AI-driven cyber threats are redefining the attack landscape for Singapore's financial institutions. Ninety-five percent of Singaporean banking customers express concern about AI-related data security and privacy. Cybersecurity firms in the region are racing to develop SME-focused innovations to counter rising threats, with several showcasing their solutions at RSAC 2026.

Singapore's approach, combining regulatory frameworks, sandbox experimentation, and proactive governance, is arguably more sophisticated than what most countries are doing. But even the most forward-thinking regulatory environment can't fully offset the fact that the tools available to attackers are improving faster than the tools available to defenders.
What actually helps
The uncomfortable truth is that there is no silver bullet. But there are practices that meaningfully reduce exposure.

Least-privilege access is no longer optional. Every system, every agent, every API key should have the minimum permissions required for its current task. This was always best practice. Now it is survival. A compromised component with narrow access is an incident. A compromised component with broad access is a catastrophe.

Formal verification matters more than ever. Guillemet advocates for hardware-based security and formal verification of critical code paths. When AI can find bugs faster than humans can review code, you need mathematical guarantees, not just good intentions.

Assume breach. The old model of perimeter defense is dead. Design systems with the assumption that some components will be compromised. Segment networks, encrypt data at rest, implement zero-trust architectures, and build detection systems that can identify compromised behavior even from authenticated entities.

Audit your agents. If you run autonomous AI systems, maintain an inventory of every agent, what it can access, and what credentials it holds. Review permissions quarterly. Log every action. Build kill switches that actually work.

Invest in detection, not just prevention. As the cost of attacks drops toward zero, the volume of attempts will increase indefinitely. Prevention will always be imperfect. The organizations that survive will be the ones that detect compromise quickly and contain it before it cascades.
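The "audit your agents" step is concrete enough to sketch. Assuming a simple inventory format (the scope names, review interval, and records below are all illustrative), a quarterly sweep might look like this:

```python
# Sketch of the "audit your agents" practice: a quarterly sweep over an
# agent inventory. The inventory format and scope names are illustrative.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)
BROAD_SCOPES = {"admin", "write:*", "org:all"}  # illustrative "too broad" markers

inventory = [  # in practice, pulled from a secrets manager or identity provider
    {"agent": "invoice-bot", "scopes": {"read:invoices"}, "last_review": date(2026, 1, 10)},
    {"agent": "ops-agent", "scopes": {"admin"}, "last_review": date(2025, 6, 2)},
]

def audit(entries: list[dict], today: date) -> list[str]:
    """Flag agents whose review is overdue or whose scopes are too broad."""
    findings = []
    for e in entries:
        if today - e["last_review"] > REVIEW_INTERVAL:
            findings.append(f"{e['agent']}: review overdue since "
                            f"{e['last_review'] + REVIEW_INTERVAL}")
        broad = e["scopes"] & BROAD_SCOPES
        if broad:
            findings.append(f"{e['agent']}: holds broad scope {broad}")
    return findings

for finding in audit(inventory, date(2026, 4, 20)):
    print("FLAG:", finding)
```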
The real risk of AI
The real risk of AI isn't sentience. It isn't robots turning against humanity. It is that AI makes existing human threats 10x more effective. Every script kiddie now has the capability of a skilled penetration tester. Every phishing campaign can be personalized at scale. Every vulnerability in every public-facing system can be found and exploited faster than the patch can be deployed.

Guillemet expects a divide ahead. Critical systems like wallets and protocols will invest heavily in security and adapt. But much of the broader software ecosystem may struggle to keep up. The gap between organizations that take security seriously and those that treat it as an afterthought will widen into a chasm.

AI is the best hacker not because it is creative or strategic, but because it is cheap, tireless, and scales infinitely. The defenders who understand this, and build their systems accordingly, will survive. The rest are living on borrowed time.
References
- CoinDesk, "AI is making crypto's security problem even worse, Ledger CTO warns," April 5, 2026. Link
- SentinelOne, "Key Cyber Security Statistics for 2026," January 2026. Link
- Check Point Research, "Global Cyber Attacks Surge 21% in Q2 2025," via World Economic Forum. Link
- IBM, "X-Force Threat Intelligence Index 2026." Link
- Security Boulevard, "46 Vulnerability Statistics 2026," March 2026. Link
- Kiteworks, "AI Cybersecurity in 2026: Key Trends and Threats." Link
- Deloitte Center for Financial Services, via Hinckley Allen, "2025 Year in Review and Predictions for 2026." Link
- Experian, "AI takes center stage as the major threat to cybersecurity in 2026," December 2025. Link
- VikingCloud, "205 Cybersecurity Stats and Facts for 2026." Link
- Group-IB, "Six Supply Chain Attack Groups to Watch Out for in 2026," March 2026. Link
- ThreatDown, "2026 State of Malware Report," via Medium. Link
- Tencent Security, "Automated Vulnerability Discovery: Past, Present and Future," January 2026. Link
- OWASP, "Top 10 Risks and Mitigations for Agentic AI Security," 2025. Link
- Meta, "Agents Rule of Two: A Practical Approach to AI Agent Security," 2025. Link
- Security Boulevard, "The New Security Reality: When AI Accelerates Both Attack and Defense," March 2026. Link
- Corinium Intelligence, "AI Driven Cyber Threats Are Forcing a New Era of Cybersecurity in Singapore's Financial Sector." Link
- Monetary Authority of Singapore, "Proposed Guidelines on AI Risk Management for Financial Institutions," November 2025. Link
- Mayer Brown, "Singapore's Agentic AI Framework: Practical Guidance for Market Entry," April 2026. Link
- Gravitee, "State of AI Agent Security 2026 Report." Link
- CryptoRank, "Ledger CTO Suspects North Korea Behind $280M Drift Protocol Hack," April 2026. Link