AppSec News
AI Model Security Risks Spotlighted by OpenAI
OpenAI published a warning about cybersecurity risks tied to its future AI models, noting they could develop zero‑day exploits or assist in complex intrusions if misused. As part of a broader security strategy, OpenAI is enhancing defensive capabilities such as code auditing and vulnerability patching, and is forming a Frontier Risk Council of experts to address emerging threats. While this is not a traditional vulnerability bulletin, it signals a shift in how AI model risks intersect with offensive security.
OWASP GenAI Project Publishes Top 10 Risks for Agentic AI
The OWASP® Foundation GenAI Security Project released a new Top 10 list for agentic AI applications, the result of a year‑long collaboration among more than 100 researchers and practitioners. The resource highlights threats such as agent‑behavior hijacking, tool misuse, and identity/privilege abuse. Alongside the list, OWASP published guides on agentic security and governance, a solutions landscape for AI security tools, and reference applications to help teams design and deploy autonomous systems securely.
Critical Deserialization Bugs in React and Next.js
Snyk disclosed a class of remote‑code‑execution flaws in React Server Components (RSC) and in Next.js versions 13.4 through 14.1, stemming from an unsafe deserialization mechanism. An attacker can craft input that triggers arbitrary code execution on the server, bypassing typical validation. Next.js is affected because it bundles the same RSC code, even though React 19 itself has not shipped. Developers should update to the patched 14.1.3 “canary” release or later and closely review any use of RSC in production.
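To make the bug class concrete, here is a minimal, hypothetical sketch of unsafe deserialization in TypeScript. It does not reproduce the actual RSC wire format or the Snyk exploit; it only illustrates the general anti-pattern, where a type tag inside attacker-controlled data is turned into live code, versus an allow-list reviver that keeps unknown tags inert.

```typescript
// Anti-pattern (illustrative, NOT the actual RSC flaw): a deserializer that
// "revives" a tagged field by evaluating it lets untrusted input become code.
function unsafeRevive(json: string): unknown {
  return JSON.parse(json, (_key, value) => {
    if (value && typeof value === "object" && "$fn" in value) {
      // Arbitrary code execution: the payload chooses what runs here.
      return eval(`(${(value as { $fn: string }).$fn})`);
    }
    return value;
  });
}

// Safer: revive only an explicit allow-list of known, harmless types.
function safeRevive(json: string): unknown {
  return JSON.parse(json, (_key, value) => {
    if (value && typeof value === "object" && "$date" in value) {
      return new Date((value as { $date: string }).$date);
    }
    return value; // unrecognized tags stay plain data
  });
}

const attackerPayload = '{"run": {"$fn": "() => 1 + 1"}}';
// unsafeRevive(attackerPayload) materializes a live function from the input;
// safeRevive(attackerPayload) returns the same field as an inert object.
```

The fix pattern is the same one that applies to RSC consumers: never let serialized data select executable behavior; only map it onto a closed set of known constructors.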

Security Community Launches Hacklore to Retire Outdated Advice
Secure-by-Design leader Bob Lord published an open letter and launched Hacklore.org, a community effort backed by 80+ seasoned CISOs and practitioners to push back against persistent cybersecurity myths (“hacklore”) that do not reflect how modern attacks actually occur. The initiative argues that well‑intentioned but outdated guidance (like avoiding public Wi‑Fi or clearing cookies) diverts attention from high‑impact basics such as strong MFA, passphrases, and keeping software patched. Hacklore aims to promote fact‑based, actionable security guidance for individuals and organizations, urging a shift toward secure‑by‑design and teaching what truly reduces risk in the real world.

Mixpanel Breach
What happened: Analytics provider Mixpanel disclosed a security incident that occurred on November 8, 2025, but the initial announcement was vague about the nature and scope of the breach. Mixpanel CEO Jen Taylor acknowledged unauthorized access in a brief blog post and said the company had taken steps to contain it, but didn’t share how many customers were affected or what data was accessed. TechCrunch’s requests to Taylor for details went unanswered. Meanwhile, OpenAI, one of Mixpanel’s customers, confirmed that data was exfiltrated from Mixpanel’s systems and terminated its use of the service. The exposed information reportedly included names, email addresses, coarse location, and device metadata tied to developers’ API usage, though sensitive credentials and payment data were not affected.
Why it matters: Analytics platforms like Mixpanel sit deep inside product workflows and collect user behavior data across apps and websites. A breach at such a provider can expose personal and usage information even when the primary service itself isn’t compromised. For AppSec teams, this highlights that your data risk extends across your third-party analytics stack, and that limited breach disclosures can leave customers unsure how to respond or what to protect.
AppSec Takeaway:
As Amir Kavousian noted in a recent LinkedIn post, security incidents are economic events, not just technical failures. A single breach can erase hard-won customer trust, derail sales momentum, slow recruiting, and force investors and prospects to reassess risk almost overnight. With AI accelerating the pace of new code and integrations, AppSec teams must model business impact, not just vulnerabilities.

DeepTrail Brings Zero-Trust Identity to AI Agents
AI agents are changing how software operates by acting autonomously across systems and workflows. This introduces new security risks that traditional AppSec and IAM tools were not built to handle, especially around agent identity and delegated access. DeepTrail is addressing this gap by building an identity and authorization layer purpose-built for AI agents, treating each agent as a cryptographically verifiable identity with clear policies and traceable actions. Led by CEO Mahendra Kutare, the team is tackling a growing blind spot in modern application security as agentic systems move into production.
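The idea of a cryptographically verifiable agent identity can be sketched in a few lines. This is an illustrative toy, not DeepTrail’s actual design or API: the `AgentAction` shape, the `POLICY` table, and the function names are all assumptions. Each agent holds a keypair, signs every action it requests, and a gateway verifies both the signature (identity) and a least-privilege policy (authorization) before anything executes.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Hypothetical action envelope an agent submits to a gateway.
interface AgentAction {
  agentId: string;  // which agent is acting
  tool: string;     // which tool/API it wants to invoke
  args: string;     // serialized arguments
  issuedAt: number; // timestamp for audit trails
}

// Least-privilege policy: each agent may call only an allow-listed set of tools.
const POLICY: Record<string, string[]> = {
  "billing-agent": ["read_invoice"],
};

// In practice each agent would have its own keypair issued at registration.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function signAction(action: AgentAction): Buffer {
  // The agent signs the canonical serialization of its requested action.
  return sign(null, Buffer.from(JSON.stringify(action)), privateKey);
}

function authorize(action: AgentAction, sig: Buffer): boolean {
  // 1) Verify the action really came from the holder of the agent's key.
  const authentic = verify(null, Buffer.from(JSON.stringify(action)), publicKey, sig);
  // 2) Check the policy: even an authentic agent can't exceed its grants.
  const allowed = POLICY[action.agentId]?.includes(action.tool) ?? false;
  return authentic && allowed;
}
```

The design point this sketch captures is the zero-trust one: authentication (who signed this) and authorization (what is this identity allowed to do) are checked on every action, and the signed envelope doubles as a tamper-evident audit record.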
Thanks for reading The AppSec Signal, DevArmor’s newsletter for security professionals. Have feedback or ideas for what we should cover next? Feel free to reach out: hello at devarmor dot com
