
When Hackers Use AI: Why Defenders Must Evolve Before It’s Too Late

  • Writer: Anup Ghosh
  • Sep 4
  • 3 min read



The cyber game just changed.


A hacker recently leveraged Anthropic’s Claude AI, more specifically its coding assistant, to launch a ransomware campaign against 17 organizations, ranging from healthcare and government agencies to religious institutions.


What made this incident different wasn't just the breadth of targets; it was the method: using AI to upskill, automate, and scale.


Instead of painstakingly writing exploits or relying on years of technical expertise, the attacker turned to AI to automate the entire process. Reconnaissance, vulnerability scanning, credential harvesting, lateral movement, and data exfiltration were all handled with the help of Claude.


The AI didn’t stop there. It also drafted ransom notes that were psychologically manipulative and tailored to each victim, and it calculated ransom demands based on financial data—ranging anywhere from $75,000 to $500,000.


This new style of hacking, dubbed “vibe hacking,” demonstrates that almost anyone—not just elite operators or nation-states—can now carry out complex, multi-stage attacks with the assistance of AI.


The Rise of AI-Enabled Cybercrime


The Anthropic case is a stark reminder of how quickly the cyber landscape is shifting. The attacker used Claude AI to scan thousands of VPN endpoints and highlight potential weak spots.


Once inside, the AI made both tactical and strategic decisions about what systems to prioritize and which files to steal. It didn’t just automate the mechanics of an attack, it orchestrated them, coordinating steps in a way that would normally require a team of seasoned hackers.


Even the ransom demands showed the power of AI. Rather than using cookie-cutter templates, the notes were customized, visually alarming, and crafted to exert maximum pressure. By analyzing stolen financial data, the AI dynamically adjusted the ransom amount for each victim, making the demands credible, actionable, and difficult to ignore.


In short, AI has lowered the barrier to entry for cybercrime. Operations that once demanded months of planning and technical skill can now be launched rapidly by individuals with far less expertise. This is bad news for businesses of all sizes, but particularly for SMBs, which are most often the targets of ransomware attacks.


Why Traditional Defenses Will Fail


This evolution poses a fundamental problem for defenders. Traditional security tools—firewalls, antivirus software, endpoint detection—were built to protect against human-driven attack patterns. They rely on recognizing known exploits typically fashioned after named adversaries, detecting their signature behavior patterns, and responding to established tactics, techniques, and procedures (TTPs).


AI-enabled attacks don’t follow those patterns. They adapt in real time, modify their strategies on the fly, can do deep research based on context and attack surface, and operate at a speed and scale that overwhelms traditional defenses.


What took a hacker days or weeks to compromise can now happen in hours, and the automation means those same attacks can be replicated across dozens or even hundreds of targets.


The simple truth is that legacy defenses, however reliable they’ve been in the past, were not designed to withstand AI-driven adversaries.


Fighting AI With AI


If attackers are using AI to innovate, adapt, and scale their operations, defenders must do the same. The future of cybersecurity lies in automation on both sides of the equation. For organizations, that means moving beyond traditional defenses and embracing AI-first defenses. And where better to start than to emulate these adversarial tactics with AI-driven pentesting?


AI-driven pentesting allows organizations to simulate the same types of adaptive, multi-stage attacks that adversaries are now running. It reveals how an AI-enabled attacker would move through a network, identifies which defenses are most likely to fail, and highlights vulnerabilities in the order they are most likely to be exploited.


Armed with this knowledge, defenders can remediate exposures proactively, before the real attacks arrive.
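As a rough illustration of exploitation-order prioritization, here is a minimal sketch in Python. The data and scoring weights are hypothetical (this is not ThreatMate's actual algorithm): it weights raw CVSS severity by whether a host is internet-facing and whether a public exploit exists, then sorts findings by the resulting risk.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cve: str
    cvss: float            # base severity, 0-10
    internet_facing: bool  # reachable from outside the network?
    exploit_available: bool  # public exploit code exists?

def risk_score(f: Finding) -> float:
    """Toy scoring: boost severity for exposure and known exploits."""
    score = f.cvss
    if f.internet_facing:
        score *= 1.5
    if f.exploit_available:
        score *= 2.0
    return score

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Order findings by descending estimated exploitation risk."""
    return sorted(findings, key=risk_score, reverse=True)

findings = [
    Finding("vpn-gw-01", "CVE-2024-0001", 7.5, True, True),
    Finding("db-internal", "CVE-2024-0002", 9.8, False, False),
    Finding("web-01", "CVE-2024-0003", 6.1, True, False),
]

for f in prioritize(findings):
    print(f"{f.host:12} {f.cve} risk={risk_score(f):.1f}")
```

Note how the internet-facing VPN gateway with a known exploit outranks the internal database despite the database's higher raw CVSS score—the same logic by which an AI-enabled attacker would pick its first foothold.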


The Bottom Line


The Anthropic case is a preview of what's to come.


Hackers no longer need elite skills to wreak havoc at scale. With AI, the line between amateur and professional has blurred, and the gap between script kiddie and nation-state is closing fast.


For defenders, the message could not be clearer. The tools of yesterday will not protect against the threats of tomorrow. To meet the challenge of AI-enabled adversaries, IT management must embrace automation and intelligence in their defenses.


That’s where ThreatMate comes in. Our AI-driven pentesting platform is designed to put you ahead of the curve, simulating the same kinds of AI-enabled attacks adversaries are using—before they ever reach your client networks.


By identifying and prioritizing vulnerabilities through automated mission plans, ThreatMate empowers MSPs and IT teams to shore up defenses proactively and protect more clients with less effort.


The attackers are evolving. With ThreatMate, you can evolve faster. Start now by seeing ThreatMate in action.



 
 