AI agents such as OpenAI’s Operator have gained far more functionality than earlier tools and can now help attackers launch phishing campaigns.

Symantec researchers explained that as recently as a year ago, they would tell security pros that large language models (LLMs) were passive and could only help attackers create phishing materials or write basic code.

“While an agent’s legitimate use case may be the automation of routine tasks, attackers could now potentially leverage them to create infrastructure and mount attacks,” wrote the researchers, who demonstrated their case in a March 13 blog post.

Stephen Kowski, Field CTO at SlashNext Email Security, said the research on OpenAI’s Operator highlights how attackers can manipulate AI systems through simple prompt engineering to bypass ethical guardrails and execute complex attack chains that gather intelligence, create malicious code, and deliver convincing social engineering lures.

“Organizations need to implement robust security controls that assume AI will be used against them, including enhanced email filtering that detects AI-generated content, zero-trust access policies, and continuous security awareness training that specifically addresses AI-generated threats,” said Kowski. “The best defense combines advanced threat detection that can identify behavioral anomalies with proactive security measures that limit what information is accessible to potential attackers in the first place.”

Guy Feinberg, growth product manager at Oasis Security, explained that just as attackers use social engineering to trick people, they can prompt AI agents into taking malicious actions. Feinberg said the real risk isn’t AI itself, but the fact that organizations don’t manage these non-human identities (NHIs) with the same security controls as human users.

“Manipulation is inevitable,” said Feinberg. “Just as we can’t prevent attackers from tricking people, we can’t stop them from manipulating AI agents. We need to limit what these agents can do without oversight. AI agents must be managed like human identities, with least privilege access, monitoring, and clear policies to prevent abuse.”

Feinberg offered three tips for managing AI agents:
- Treat AI agents like human users: Assign them only the permissions they need and continuously monitor their activity.
- Implement strong identity governance: Track which systems and data AI agents can access, and revoke unnecessary privileges.
- Assume attackers will try to manipulate AI agents: Build security controls that detect and prevent unauthorized actions, just as teams would with phishing-resistant authentication for humans.
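To make the first two tips concrete, here is a minimal, hypothetical sketch of least-privilege governance for an AI agent treated as a non-human identity. The agent name, permission scopes, and policy checks are illustrative assumptions, not any specific vendor’s API; the point is simply an explicit allow-list plus an audit trail, mirroring the least-privilege and monitoring guidance above.

```python
# Minimal, hypothetical sketch: govern an AI agent as a non-human identity.
# Names and scopes are illustrative assumptions, not any specific product's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    name: str
    allowed_scopes: set[str]                     # least privilege: explicit allow-list
    audit_log: list[str] = field(default_factory=list)

    def request_action(self, scope: str, detail: str) -> bool:
        """Allow an action only if its scope was explicitly granted; log everything."""
        timestamp = datetime.now(timezone.utc).isoformat()
        allowed = scope in self.allowed_scopes
        verdict = "ALLOWED" if allowed else "DENIED"
        self.audit_log.append(f"{timestamp} {self.name} {verdict} {scope}: {detail}")
        return allowed

    def revoke_scope(self, scope: str) -> None:
        """Identity governance: unnecessary privileges can be revoked at any time."""
        self.allowed_scopes.discard(scope)

# Usage: the agent may read a ticket queue, but an attempt to send outbound
# email (a common phishing step) falls outside its granted scopes and is denied.
agent = AgentIdentity(name="support-triage-agent", allowed_scopes={"tickets:read"})
assert agent.request_action("tickets:read", "summarize open tickets")
assert not agent.request_action("email:send", "send message to external address")
for entry in agent.audit_log:
    print(entry)
```

Even a simple default-deny allow-list like this limits what a manipulated agent can do without oversight, and the audit log gives security teams the behavioral record needed to spot anomalies.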