
AI Company Warns Hackers Are Weaponizing Its Technology


US artificial intelligence (AI) company Anthropic has revealed that its technology is being “weaponized” by hackers to launch advanced cyber attacks.

Anthropic, the developer behind the chatbot Claude, confirmed that malicious actors have exploited its AI tools to carry out large-scale data theft and extortion schemes.


According to the firm, the technology was misused to generate malicious code for cyber attacks, while in another case, North Korean hackers leveraged Claude to fraudulently secure remote jobs at leading US companies.

The company says it has disrupted the threat actors and reported the cases to authorities, while also upgrading its detection and security tools.

AI-Powered Cybercrime on the Rise

As AI becomes more advanced and accessible, hackers are increasingly using it to write malicious code and automate large-scale cyber operations.

Anthropic reported one alarming case of “vibe hacking”, where Claude was misused to develop code capable of infiltrating at least 17 organizations, including government agencies. The hackers relied on AI not only to generate attack code but also to make tactical and strategic decisions — such as which data to steal, how to craft psychologically targeted extortion demands, and even what ransom amounts to demand from victims.

According to Anthropic, this represents an “unprecedented degree” of AI misuse in cybercrime.

The Rise of Agentic AI

The misuse highlights the risks posed by agentic AI, systems capable of operating autonomously. While agentic AI is touted as the "next big step" in the field, it also dramatically reduces the time needed to exploit cybersecurity vulnerabilities, said Alina Timofeeva, an adviser on cybercrime and AI.

“Detection and mitigation must shift towards being proactive and preventative, not reactive after harm is done,” Timofeeva explained.

North Korean Hackers Exploiting AI for Job Scams

Anthropic also revealed that North Korean operatives used Claude to create fake profiles and write applications for remote jobs at US Fortune 500 companies. Once hired, they used the AI to translate communications and develop code, effectively bypassing the cultural and technical barriers that once limited their reach.

“This is a fundamentally new phase for employment scams,” Anthropic warned.

Geoff White, co-presenter of the BBC podcast The Lazarus Heist, noted that agentic AI allows North Korean workers—often isolated from the outside world—to overcome obstacles and secure jobs, leaving US employers unintentionally violating international sanctions.

AI Is a Tool, Not the Whole Threat

Despite these developments, experts stress that AI is not creating entirely new crime waves. Many ransomware and phishing attacks still rely on traditional methods like malicious emails and exploiting software vulnerabilities.

“Organizations need to understand that AI itself is a repository of sensitive data that requires protection, just like any other critical system,” said Nivedita Murthy, senior security consultant at Black Duck.
