Research

How Fast Does Cybercrime Move Now?

Mar 3, 2026

It took 27 seconds.


That's how long the fastest cyberattack on record took to move from initial access to lateral movement across an entire network.


That number comes from CrowdStrike's 2026 Global Threat Report, released last week. And it's not the only number that should worry you. It's the sharpest point on a trend line that should alarm anyone building, investing in, or relying on digital infrastructure.


The average breakout time for financially motivated cybercriminals dropped to 29 minutes in 2025, 65% faster than the year before. In one documented case, data exfiltration began within 4 minutes of initial access. The attackers weren't just getting in faster; they were getting out with what they came for before most security teams even knew something was wrong.


This isn't a story about a few sophisticated nation-state actors pulling off once-in-a-decade operations. This is the new baseline.


What Changed


The short answer is AI. But not in the way most people think.


The conversation around "AI-powered cyberattacks" has been largely speculative for years - a theoretical concern raised at conferences and buried deep in risk reports. That changed in 2025.


CrowdStrike tracked an 89% YoY increase in attacks by AI-enabled adversaries. More than 90 organizations had their own legitimate AI tools turned against them, with attackers injecting malicious prompts into GenAI systems to generate commands for stealing credentials and data. ChatGPT alone was referenced in criminal forums 550% more than any other AI model.


But perhaps the most telling development wasn't about how attackers use AI tools. It was about how they've started building AI directly into their malware.


LAMEHUG: The Malware That Thinks


In July 2025, Ukraine's national cyber response team (CERT-UA) discovered something that security researchers had been dreading: the first known malware to operationally integrate an LLM.


They called it LAMEHUG. It was attributed to FANCY BEAR, a hacking unit tied to Russian military intelligence - the same group behind some of the most high-profile cyber operations of the last decade. It arrived the way most state-sponsored attacks do: phishing emails sent from compromised government accounts, impersonating ministry officials.


What made it different was what happened after it landed.


Traditional malware carries hardcoded commands - a set of fixed instructions baked into the payload. This makes it detectable. Security tools can fingerprint known command patterns through static analysis. LAMEHUG took a fundamentally different approach. Instead of hardcoded instructions, it carried prompts.


The malware used the Hugging Face API to communicate with Qwen 2.5-Coder-32B-Instruct, an open-source code generation model built by Alibaba Cloud. It sent the LLM pre-defined objectives - encoded descriptions of what it wanted to accomplish - received executable command sequences in return, and ran them immediately on the target system. Reconnaissance, document harvesting, data exfiltration - all generated dynamically by the model rather than hardcoded into the payload.


No hardcoded attack commands for analysts to fingerprint. The specific instructions for reconnaissance, data collection, and exfiltration were generated at runtime by a legitimate AI platform's infrastructure, with command-and-control traffic blending into normal API calls.
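The difference is easy to see in a toy example. Everything below is invented for illustration - the signature list and payload strings are not real indicators - but it shows why a command-pattern scan that catches a traditional payload has nothing to match in a payload that carries only a natural-language objective:

```python
# Toy illustration: why prompt-carrying payloads evade command-signature scans.
# The signatures and payload contents here are invented for this sketch.

COMMAND_SIGNATURES = [b"whoami", b"net user", b"tasklist", b"Invoke-WebRequest"]

# A traditional payload embeds its commands directly in the binary.
traditional_payload = b"... whoami && net user && tasklist ..."

# A prompt-carrying payload embeds only an objective; the actual commands
# are generated at runtime by an external model and never appear on disk.
prompt_payload = b"Describe the steps an administrator takes to inventory a host."

def matches_signature(payload: bytes) -> bool:
    """Return True if any known command fingerprint appears in the payload."""
    return any(sig in payload for sig in COMMAND_SIGNATURES)

print(matches_signature(traditional_payload))  # True: hardcoded commands found
print(matches_signature(prompt_payload))       # False: nothing static to match
```

The detection burden shifts from static analysis of the payload to monitoring runtime behavior and outbound API traffic - which, as noted above, blends into legitimate use of the same platforms.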


Security researchers at Cato Networks analyzed the code and concluded that this was likely a proof of concept - FANCY BEAR testing new capabilities rather than deploying a polished operational tool. The Python was relatively basic. The LLM integration was straightforward, without sophisticated obfuscation.


That assessment should make it more concerning, not less. If this is what the early experiments look like, the production-grade versions are already being built.


The Asymmetry Problem


ReliaQuest's Annual Cyber-Threat Report, published this week, puts numbers to what many defenders already feel: the gap between defenders and attackers is widening.


Their data tells a similar story to CrowdStrike's. The fastest breakout time ReliaQuest observed was 4 minutes. Fastest exfiltration: 6 minutes, down from 4.5 hours the previous year. And critically, they found that 80% of ransomware groups are now using automation or AI in their operations.


Without automation, the average time for a security team to contain a threat is 16 hours. When attackers move in minutes and containment takes significantly longer, the outcome is already decided before most teams respond.


ReliaQuest argues that agentic AI on the defensive side can bring containment time down to 4 minutes, matching the speed of the fastest observed attacks. But that requires a fundamental shift in how organizations think about security. It requires shifting from periodic assessment to more continuous and scalable defense models.


AI is Making Both Sides Faster, But Not Equally


IBM's 2026 X-Force Threat Intelligence Index adds another dimension. They observed a 44% increase in attacks beginning with the exploitation of public-facing applications, driven significantly by AI-enabled vulnerability discovery. Vulnerability exploitation became the leading initial access vector in 2025, responsible for 40% of all incidents, overtaking stolen credentials which had held the top spot for 2 years running.


Meanwhile, Veracode's State of Software Security report reveals the other side of the equation. AI isn't just helping attackers find vulnerabilities faster. It's helping developers create them faster. Security debt - known vulnerabilities left unresolved for more than a year - now affects 82% of companies, up from 74% the previous year. High-risk vulnerabilities rose from 8.3% to 11.3% of all findings.


Their conclusion was straightforward: "The velocity of development in the AI era makes comprehensive security unattainable."


The tools accelerating software development are simultaneously accelerating the creation of the vulnerabilities that attackers, now also armed with AI, are learning to exploit at machine speed.


More code, more attack surface. Faster attackers, slower fixes. The equation doesn't balance.


$3.4 Billion and Counting


These aren't abstract dynamics playing out in enterprise IT departments. They're already reshaping the economics of entire industries, and nowhere more visibly than in crypto.


Chainalysis reported that cryptocurrency theft reached $3.4 billion in 2025. North Korean state-sponsored hackers alone accounted for $2 billion of that, a 51% increase YoY. The Bybit exchange hack in February 2025, attributed by the FBI to North Korea's Lazarus Group, resulted in a single theft of $1.5 billion in ETH. It remains the largest cryptocurrency heist in history.


The mechanics of the Bybit attack illustrate how modern threats operate across multiple layers simultaneously. Lazarus compromised a developer's machine at Safe{Wallet}, the multisig platform Bybit used for transaction security, and injected malicious JavaScript into the interface. When Bybit employees went to approve a routine transfer, the interface displayed what appeared to be a legitimate transaction. The signers approved it. The funds went to wallets controlled by North Korean operatives. $160 million was laundered within the first 48 hours.


What's important about the Bybit case is what it reveals about the expanding attack surface. The attackers didn't exploit a smart contract vulnerability but went through the operational layer instead. Both are viable targets, and attackers will take whichever path offers the least resistance at the speed and value they need.
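One mitigation the Bybit case points to is verifying what is actually being signed over a channel the compromised interface can't touch. The sketch below is a minimal illustration of that idea only, not Safe's actual scheme - real Safe transactions are hashed with keccak-256 over EIP-712 structured data, and the field names and addresses here are invented:

```python
# Minimal sketch of out-of-band verification before approving a multisig
# transaction. SHA-256 stands in for keccak-256, and the transaction fields
# and addresses are invented; only the comparison pattern is the point.
import hashlib
import json

def digest(tx: dict) -> str:
    # Canonicalize the raw transaction fields and hash them, so two
    # independent devices can compare digests without trusting either UI.
    canonical = json.dumps(tx, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# What a tampered interface displays to the signer...
tx_shown_in_ui = {"to": "0xTreasuryWallet", "value": 1000, "data": "0x"}
# ...versus what it actually submits for signing.
tx_actually_signed = {"to": "0xAttackerWallet", "value": 1000, "data": "0x"}

# Comparing digests computed on an isolated device exposes the swap.
print(digest(tx_shown_in_ui) == digest(tx_actually_signed))  # False
```

The design point: the check only helps if the second digest is computed on hardware the attacker hasn't reached, which is exactly why operational-layer compromises target the interface everyone trusts.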


Onchain code security has improved meaningfully, but improved doesn't mean solved. Halborn's Top 100 DeFi Hacks Report found that faulty input verification remains the leading cause of direct smart contract exploitation. Access control vulnerabilities accounted for $953 million in losses in 2024 alone. And the OWASP Smart Contract Top 10 continues to document persistent patterns - reentrancy, oracle manipulation, flawed logic - vulnerabilities that have caused billions in cumulative loss.
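Reentrancy, the oldest of those patterns, is easy to model even outside Solidity. This toy Python version - all names invented - shows the core bug: an external call made before internal state is updated, letting a callback re-enter and withdraw more than was deposited:

```python
# Toy Python model of the classic reentrancy bug (normally a Solidity issue).
# All names are invented for illustration.

class Vault:
    def __init__(self):
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who, send):
        amount = self.balances.get(who, 0)
        if amount == 0:
            return
        send(amount)              # external call happens BEFORE...
        self.balances[who] = 0    # ...the balance is zeroed: reentrancy window

vault = Vault()
vault.deposit("attacker", 100)

stolen = []
def malicious_callback(amount):
    stolen.append(amount)
    if len(stolen) < 3:  # re-enter while the balance still reads 100
        vault.withdraw("attacker", malicious_callback)

vault.withdraw("attacker", malicious_callback)
print(sum(stolen))  # 300 drained from a 100 deposit
```

The fix is the checks-effects-interactions ordering: zero the balance before making the external call, so a re-entrant withdrawal finds nothing left to take.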


The reality is that security is a full-stack problem. Code vulnerabilities, economic logic flaws, operational security gaps, supply chain compromises - these aren't competing categories but concurrent attack vectors. And with AI compressing the exploitation timeline across all of them, the protocols and platforms that hold up will be the ones treating security as a continuous discipline across every layer, not a periodic checkpoint on any single one.


The Uncomfortable Implication


Organizations have operated for decades around a fundamental assumption: that there's enough time between an attacker gaining access and causing damage to detect, investigate, and respond. That assumption is breaking.


When breakout happens in 27 seconds, there is no human-speed response that works. When malware generates its own commands from an LLM in real-time, there is no static signature to catch. When the majority of intrusions blend in with legitimate activity, there is no straightforward alert to trigger.


The threat landscape has evolved faster than the defensive infrastructure built to address it. The environment has fundamentally changed, and the tools and cadences designed for a previous era need to evolve with it.


The organizations and protocols that navigate this shift successfully will be the ones that adopt a simple premise: you can't wait to be attacked to find out if your defenses hold.


Machine-speed offense demands machine-speed defense - proactively testing your systems the way attackers do, as an ongoing discipline, and proving what breaks before someone else does, across code, logic, infrastructure, and operations.


The kill chain has collapsed. And the question isn't whether your systems will be tested. It's whether you tested them first.



We built Shepherd because we believe security needs to move at the same speed as the threats.


Get in touch with us