


First AI-Powered Ransomware PromptLocker Revealed as University Experiment, Not Real Cyberattack

The cybersecurity world was shaken when reports emerged of the first-ever ransomware powered by artificial intelligence (AI). Known as PromptLocker, it was initially flagged by security company ESET as a genuine cyberthreat. However, the truth turned out to be far less alarming—PromptLocker was not a real attack but rather an academic experiment conducted by New York University’s Tandon School of Engineering.

Researchers behind the project explained that PromptLocker was part of an initiative called Ransomware 3.0, designed to study how AI could be integrated into traditional ransomware operations. The sample was uploaded to VirusTotal for testing, where it was mistakenly identified by ESET as a real-world ransomware campaign.

From a technical perspective, the experimental code relied on Lua scripts generated at runtime by a large language model from hard-coded prompts. These scripts enabled the malware to scan file systems, analyze data, extract sensitive information, and perform encryption, just like authentic ransomware. Notably, it was capable of completing all four classic stages of a ransomware attack: mapping the system, identifying valuable files, encrypting or exfiltrating data, and generating a ransom note. This functionality extended across different environments, including personal computers, corporate servers, and even industrial control systems.
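To make the staged structure concrete, here is a deliberately benign sketch of the first two stages only. The original experiment generated Lua scripts at runtime; this illustration uses Python instead, the names (`enumerate_files`, `identify_valuable`, `TARGET_EXTENSIONS`) are hypothetical, and the encryption/exfiltration and ransom-note stages are stubbed out as a harmless summary string.

```python
import os
from pathlib import Path

# Hypothetical extension list, used only to illustrate what
# "identifying valuable files" means in stage 2.
TARGET_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".sql"}

def enumerate_files(root: str) -> list[Path]:
    """Stage 1: map the file system under `root`."""
    return [Path(dirpath) / name
            for dirpath, _dirs, names in os.walk(root)
            for name in names]

def identify_valuable(files: list[Path]) -> list[Path]:
    """Stage 2: flag files whose extensions suggest business value."""
    return [f for f in files if f.suffix.lower() in TARGET_EXTENSIONS]

def summarize(valuable: list[Path]) -> str:
    """Stages 3-4 stubbed out: instead of encrypting or exfiltrating
    anything, report what would have been targeted."""
    return f"{len(valuable)} candidate file(s) identified (no action taken)"
```

The point of the sketch is architectural: each stage is a small, composable function, which is exactly what makes the pipeline easy for an AI model to generate piecemeal.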

Yet, the researchers intentionally excluded destructive capabilities. The goal was not to unleash chaos but to provide a controlled proof of concept. Despite that, the experiment demonstrated how AI can automate every stage of a ransomware campaign, raising red flags for future misuse by cybercriminals.

One of the most eye-opening findings was the economic efficiency of AI-driven ransomware. Traditional ransomware campaigns require specialized teams, custom code development, and significant infrastructure. In contrast, a full PromptLocker run consumed only about 23,000 AI tokens, roughly $0.70 at premium commercial API rates. Open-source AI models could eliminate even this minimal cost, making ransomware development effectively free for attackers. That cost-to-impact ratio highlights the potential risks of democratized AI tools.
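The headline figure is simple arithmetic. The per-token rate below is an assumption chosen to be consistent with the reported totals (about $0.03 per 1,000 tokens, in the range of premium commercial APIs at the time):

```python
# Assumed rate (illustrative): ~$0.03 per 1,000 tokens on a premium API.
RATE_PER_1K_TOKENS = 0.03
TOKENS_USED = 23_000  # reported token consumption for the full run

cost = TOKENS_USED / 1000 * RATE_PER_1K_TOKENS
print(f"${cost:.2f}")  # prints $0.69, consistent with the reported ~$0.70
```

A locally hosted open-source model would drive the marginal cost of the same run toward zero, which is the scenario the researchers flag as most concerning.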

Should we be worried? Yes, but with caution. While the experiment shows that AI can be weaponized to create low-cost, scalable ransomware, there is a vast difference between a laboratory proof of concept and an active criminal campaign. The cybersecurity industry has not yet seen widespread adoption of AI-driven ransomware, but the possibility is no longer theoretical. Studies like this serve as a warning to defenders: as AI evolves, so too will the tools of cybercriminals.

Conclusion
PromptLocker was not the beginning of an AI-powered ransomware epidemic, but it has undeniably shifted the conversation. By showing how cheap, fast, and accessible AI-driven attacks can be, the research underscores the need for proactive cybersecurity measures, robust monitoring, and international collaboration. As the line between innovation and exploitation narrows, protecting against the next generation of AI threats becomes a global priority.
