Cybersecurity

AI Models Uncover 75 Vulnerabilities at Palo Alto Networks

Palo Alto Networks used advanced AI models from Anthropic and OpenAI to find 75 product vulnerabilities, a significant increase from typical monthly findings. The company warns of a coming "vulnpocalypse" as attackers gain AI cyber capabilities.

Joshua Ramos
Joshua Ramos covers cybersecurity for Techawave.
2 min read

Palo Alto Networks announced a significant surge in discovered software vulnerabilities, identifying 75 flaws in its products over the past month. This number is more than seven times the company's usual monthly average of 5-10 vulnerabilities. The increased detection rate is attributed to the company's early use of advanced AI cybersecurity models, including Anthropic's Mythos Preview and OpenAI's GPT-5.5-Cyber.

The cybersecurity giant is among the first organizations to gain access to these cutting-edge AI tools. This provides an early indication of what some in the industry are calling a looming "vulnpocalypse"—a period where AI models could dramatically accelerate the discovery and exploitation of software weaknesses. Palo Alto Networks now estimates that organizations may have as little as three to five months before malicious actors broadly access similar AI-driven cyber capabilities.

Palo Alto Networks deployed the AI models in an intensive, month-long scan of more than 130 of its products. All 75 vulnerabilities were confirmed as legitimate, and none had been actively exploited in the wild; the haul marks a substantial jump in the company's security audit output. Chief Product Officer Lee Klarich highlighted a key advancement: the models' ability to identify multiple, previously minor flaws and chain them together into working exploit paths, a feat earlier AI systems reportedly struggled to achieve.
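
The article does not detail how the chaining works, but conceptually it resembles a path search over privilege states, where each minor flaw is an edge that moves an attacker closer to full compromise. The Python sketch below illustrates that idea with entirely invented findings and state names; it is not Palo Alto Networks' tooling.

```python
# Hypothetical sketch: how several low-severity findings can combine into a
# working exploit path. All finding descriptions and privilege states are
# invented for illustration.
from collections import deque

# Each finding moves an attacker from one privilege state to another.
findings = [
    ("unauthenticated", "low_priv_user", "verbose error leaks session format"),
    ("low_priv_user", "internal_api", "missing authz check on debug endpoint"),
    ("internal_api", "admin", "predictable token in config export"),
]

def exploit_path(start, goal):
    """Breadth-first search for a chain of findings linking two states."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, chain = queue.popleft()
        if state == goal:
            return chain
        for src, dst, desc in findings:
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, chain + [desc]))
    return None

chain = exploit_path("unauthenticated", "admin")
if chain:
    print("High-severity chain found:")
    for step in chain:
        print(" ->", step)
```

Individually, each edge above might rate as low severity; the chain from unauthenticated access to admin is what elevates the combination, which matches Klarich's description of flaws becoming dangerous "in concert."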

AI's Evolving Role in Cybersecurity Audits

Klarich explained that the AI models demonstrated a particular aptitude for understanding the underlying logic of applications, which allowed them to pinpoint more effectively how attackers might leverage combinations of weaknesses. In several instances, vulnerabilities that on their own might not have warranted disclosure became high-severity threats when the AI considered them in combination. Internal testing revealed that the models successfully generated working exploits in over 70% of cases, a significant leap in effectiveness compared with previous AI tools.

However, Klarich emphasized that the process still demands considerable human expertise and careful customization. Palo Alto Networks reported an average false-positive rate of roughly 30%, though the figure varied with how researchers trained the models and what contextual information they provided. The company developed a specialized "AI-scanning harness" to feed the models essential threat intelligence, context, and operational guardrails. "These models aren't magic," Klarich stated, underscoring the extensive effort invested in building this framework to connect the AI effectively with the products being scanned.
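
Klarich does not describe the harness's internals, but one plausible piece is a prompt-assembly step that packages product context, threat intelligence, and guardrails for the model. The sketch below illustrates that idea under those assumptions; every field and function name here is hypothetical, not the company's real interface.

```python
# Minimal sketch of a prompt-assembly step an "AI-scanning harness" might
# perform before querying a model. ScanContext and build_prompt are
# hypothetical; the article does not describe the real design.
from dataclasses import dataclass

@dataclass
class ScanContext:
    product: str
    source_excerpt: str      # code or config under review
    threat_intel: list[str]  # known attacker techniques to prioritize
    guardrails: list[str]    # rules constraining the model's behavior

def build_prompt(ctx: ScanContext) -> str:
    """Combine product context, threat intelligence, and guardrails
    into a single structured prompt for the scanning model."""
    intel = "\n".join(f"- {t}" for t in ctx.threat_intel)
    rules = "\n".join(f"- {r}" for r in ctx.guardrails)
    return (
        f"Product under audit: {ctx.product}\n"
        f"Prioritized threat intelligence:\n{intel}\n"
        f"Operational guardrails:\n{rules}\n"
        f"Analyze the following for exploitable flaws:\n{ctx.source_excerpt}"
    )

ctx = ScanContext(
    product="example-gateway",
    source_excerpt="<redacted source excerpt>",
    threat_intel=["auth bypass via header smuggling"],
    guardrails=["report findings only; never generate live exploit traffic"],
)
print(build_prompt(ctx))
```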

The implications extend across the industry, with companies and governments actively assessing defenses against a future where attackers wield powerful AI tools for vulnerability hunting. While Anthropic's and OpenAI's models show comparable power, they tend to identify different types of vulnerabilities. Consequently, Palo Alto Networks advises organizations to run multiple AI models concurrently to achieve the broadest possible detection of flaws.
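
In practice, that advice amounts to scanning the same target with each model and merging the deduplicated findings. The sketch below illustrates the idea with stub scanners standing in for real model integrations; the finding strings are fabricated examples.

```python
# Illustrative sketch of the multi-model advice: run more than one scanner
# and take the deduplicated union of findings. Both scanner functions are
# stand-ins; real model integrations would replace them.
def scan_with_model_a(target: str) -> set[str]:
    return {"CWE-79 in login form", "CWE-89 in report export"}

def scan_with_model_b(target: str) -> set[str]:
    return {"CWE-89 in report export", "CWE-287 in session refresh"}

def combined_scan(target: str) -> set[str]:
    # Different models tend to surface different flaw classes, so the
    # union covers more ground than either model alone.
    return scan_with_model_a(target) | scan_with_model_b(target)

for finding in sorted(combined_scan("example-product")):
    print(finding)
```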

In response to these advancements, Palo Alto Networks is advocating for a four-pronged defense strategy against AI-assisted cyberattacks. This includes bolstering the capacity to discover and patch vulnerabilities preemptively, minimizing internet-facing exposure to essential systems only, deploying automated detection and prevention tools for real-time attack blocking, and integrating AI and automation into security operations centers (SOCs) to enable responses at machine speed. Meanwhile, the White House is reportedly engaged in discussions regarding proposals for testing and potentially restricting advanced AI models with significant cybersecurity implications before their widespread release.

Source: Axios