While it’s well-known that threat actors can leverage generative AI (GenAI) to develop malware, a team of Tenable researchers recently proved that it’s possible to build a keylogger and even ransomware using the
DeepSeek R1 tool.
Tenable researchers on March 13 detailed how they successfully used a jailbreak technique to
trick DeepSeek into creating a keylogger that could hide an encrypted log file on disk after DeepSeek R1 refused to build a keylogger.“At its core, DeepSeek can create the basic structure for malware,” wrote Nick Miles, the Tenable researcher who authored Tenable’s blog post. “However, it’s not capable of doing so without additional prompt engineering as well as manual code editing for more advanced features.”Miles said the same was true for Tenable’s attempt to use DeepSeek R1 to develop ransomware. He said while the ransomware samples DeepSeek produced all required manual editing to compile, the Tenable team was able to get a few of them working.“DeepSeek provides a useful compilation of techniques and search terms that can help someone with no prior experience in writing malicious code the ability to quickly familiarize themselves with the relevant concepts,” noted Miles.Itamar Golan, co-founder and CEO at Prompt Security, agreed that today, virtually anyone — even people with minimal coding skills — can become a hacker by leveraging AI-powered tools to generate malicious code. Golan said while some AI models have stronger security guardrails than others, gaps still remain.“For instance, DeepSeek may refuse politically sensitive queries, but can still generate functional ransomware with minimal effort,” said Golan. “This growing accessibility of AI-generated malware highlights the urgent need for stronger cybersecurity defenses. Organizations must invest in continuous security testing, including penetration testing and red teaming, to proactively identify vulnerabilities before attackers exploit them.”Casey Ellis, founder at Bugcrowd, added that the findings from Tenable’s analysis of DeepSeek highlight a growing concern in the intersection of AI and cybersecurity: the dual-use nature of generative AI.Ellis said while the AI-generated malware in this case required manual intervention to function, the fact that these systems can produce even semi-functional malicious code tells us that security teams need to adapt their strategies to account for this emerging threat vector.Here are three strategies Ellis shared to help security teams mitigate
the risks posed by threat actors leveraging AI:
- Focus on behavioral detection over static signatures: Malware generated by AI, especially when iteratively improved, will likely evade traditional signature-based detection methods. Security teams should prioritize behavioral analysis — monitoring for unusual patterns of activity, such as unexpected file encryption, unauthorized persistence mechanisms, or anomalous network traffic. This approach is more resilient to novel or polymorphic threats.
- Invest in AI-augmented defenses: Just as attackers use AI to enhance their capabilities, defenders can leverage AI to detect and respond to threats more effectively. AI-driven tools can analyze vast amounts of data to identify subtle indicators of compromise, automate routine tasks, and even predict potential attack vectors based on emerging trends.
- Strengthen secure development practices and education: Generative AI systems like DeepSeek can be tricked into producing harmful outputs through techniques like jailbreaking. Organizations should implement robust guardrails in their AI systems to prevent misuse, including input validation, ethical use policies, and continuous monitoring for abuse. Additionally, it’s critical to educate developers and users about the risks and limitations of generative AI to reduce the likelihood of accidental or intentional misuse.
“Also keep in mind is that this is a rapidly evolving area,” said Ellis. “Threat actors are experimenting with AI, and while the current outputs may be imperfect, it’s only a matter of time before these tools become more sophisticated. Security teams need to stay ahead of the curve by fostering collaboration between researchers, industry, and policymakers to address these challenges proactively.”
Source link
Post Views: 13