RSAC 2025 Keynote: Cisco open-sources AI security tools

SAN FRANCISCO — Cisco set the tone at Monday’s RSAC 2025 keynote by announcing a major open-source initiative aimed at securing the future of AI, while other speakers laid out how the industry must adapt to a rapidly changing battlefield.

Jeetu Patel, Cisco’s executive vice president and chief product officer, unveiled Foundation AI — a purpose-built, security-specific AI model trained on cybersecurity data. The model isn’t just open source in concept: Cisco is releasing the actual trained model weights as well, allowing researchers and developers to inspect, adapt, and fine-tune it. Alongside that, the company is open-sourcing its full tooling framework, inviting the global security community to collaborate on safer, more transparent AI systems.

Patel emphasized the urgency: “The true enemy is not our competitors — it’s the adversary.” He warned that fine-tuned models are significantly more vulnerable to jailbreaks and toxic outputs, citing Cisco research showing that fine-tuning can triple jailbreak susceptibility and increase harmful responses by 22x.

Open-Sourcing AI Security to Defend the Future

Jeetu Patel, Cisco’s EVP and CPO


Cisco’s Foundation AI model may not be the biggest, but it’s built like a race car for the cybersecurity track — tuned for precision, speed, and efficiency rather than brute force. Specifics include:
  • It’s powered by 8 billion parameters, which are like the neurons in a digital brain. That’s large enough to perform complex tasks like detecting and responding to threats — but small enough to remain nimble, unlike massive general-purpose models that are trying to be everything to everyone.
  • It was trained on 5 billion carefully chosen tokens — think of these as the words, patterns, and behaviors the model studied to learn cybersecurity. Those 5 billion were handpicked from a haystack of 900 billion. That’s like selecting the most relevant chapters from an entire library — skipping the fluff to teach the model exactly what it needs to know about threat detection, ransomware tactics, and response workflows.
  • And perhaps most critically, it’s light enough to run on just one or two A100 GPUs, powerful chips used to train AI. Most general models need 30 times that. That’s the difference between needing a full data center to run your model — or fitting it in a secure enterprise rack. This isn’t just about cost — it’s about making AI-powered security scalable and accessible.

Patel framed the move as necessary in a world shifting from human-scale security to machine-scale threats. “Security is now the biggest accelerator for AI adoption, not an inhibitor,” he said.

    Cybersecurity’s Greatest Strength? Community

    Setting the broader tone, Hugh Thompson, executive chairman of RSAC, opened the conference with a call to unity and adaptability. “Community — it’s what makes us strong in cybersecurity,” he said, encouraging the 44,000 attendees to embrace change and new connections with a “Bayesian mindset” — being open to updating assumptions as new information arrives.

    Hugh Thompson, executive chairman of RSAC


    Thompson also pointed to two seismic trends for the next 18 months: the transformation of application security into AI-driven defenses, and the surge of adversarial attacks specifically targeting AI models.

    Agentic AI Will Redefine Cybersecurity — If We Secure It First

    Vasu Jakkal, Microsoft’s corporate VP, Microsoft Security, then offered a sweeping look into the rise of Agentic AI — autonomous digital systems that will soon collaborate with each other and with humans to reshape cybersecurity, governance, and daily life.

    Vasu Jakkal, corporate VP, Microsoft Security

    “Today, AI helps us with triage,” Jakkal said. “By 2027, agents will predict attacks, dynamically adjust access permissions, and autonomously enforce security policies.”
    She cautioned that as agentic AI grows more powerful, security models must evolve alongside it. “AI is not static — and security can’t be static either,” she emphasized.

    To underscore just how rapidly this transformation is happening, Jakkal shared a timeline mapping the “evolving stages of autonomous AI for security”:
  • Today, she said, most cybersecurity AI systems operate at Level 0 — mimicking human actions and automating repetitive, rules-based tasks.
  • Within 6 months, many organizations will move into Level 1, where AI agents will reason through tasks and use tools to achieve specific goals.
  • By 12 to 18 months, we will enter Level 2, with agents capable of self-modification or optimization to better meet their objectives.
  • And in 18 to 24 months, Jakkal forecasted the arrival of Level 3 autonomy, where AI agents will dynamically adjust their own goals to respond to evolving threats — with minimal human intervention.
    The implications, she said, are profound: cybersecurity will shift from being a reactive discipline to a dynamic, predictive one. “Security mechanisms must evolve from static verification to dynamic, probabilistic verification,” she noted.

    Microsoft’s internal numbers underline the urgency:
  • Password attacks have jumped from 4,000 per second last year to 11,000 per second today.
  • Tracked threat actors have quintupled from 300 to 1,500 within a year.

    Jakkal emphasized that governance, identity verification, data privacy, and dynamic risk management must be embedded into the design of every AI agent from the start. “It takes a village,” she said, echoing the conference’s broader theme of collective resilience.

    Why Community Intelligence Is Our Greatest Defense

    John Fokker, head of threat intelligence at Trellix, brought the conversation back to the human adversary — spotlighting how ransomware gangs like Black Basta now operate as full-fledged businesses, sometimes with government backing.

    John Fokker, head of threat intelligence, Trellix

    Through leaked internal chats, Fokker’s team confirmed Black Basta’s deep ties to Russian protection networks. “They have HR departments, cafeterias, and powerful friends,” he said. “The connection between nation-states and cybercriminals is clearer than ever.”

    Yet Fokker’s message remained hopeful: collaborative intelligence can outpace even state-backed adversaries. “We are one team, working together,” he said. “And when we work as one community, there is no question we will reach the top.”

    Grammy-winning artist Common

    The day opened with Grammy-winning artist Common, who delivered a stirring reflection on unseen service, resilience, and community. “The creator I see in me is the creator I see in you,” he told the audience, celebrating cybersecurity professionals for protecting strangers they may never meet.

    RSAC 2025 made one thing clear: innovation alone won’t define the future of cybersecurity — community will. As threats evolve and AI accelerates, it’s the strength of the connections forged here that will determine what comes next.

    (For Complete Live RSAC 2025 Coverage by SC Media Visit SCWorld.com/RSAC)
