AppSec News
Prompt Injection in Google Gemini Exposes Calendar Data
Security researchers found an indirect prompt-injection flaw in Google Gemini’s Google Calendar integration that could bypass privacy controls and expose private meeting details. Attackers embedded natural-language commands inside normal calendar invites, which Gemini later executed when summarizing events. While the issue was mitigated through responsible disclosure, it highlights a broader risk: AI systems treating seemingly harmless text as executable instructions across productivity tools.
The Signal (if you only read one thing)
AppSec teams must treat user-generated text (e.g. calendar descriptions, comments, and notes) as executable input, just like raw HTML or SQL.
Prompt injection risks are not just chatbot issues; they are new attack surfaces for application logic. Audit AI integrations, apply output constraints, and sandbox where possible.
Read more: Miggo’s technical write‑up on the Gemini exploit.
API Security Becomes a Strategic Priority in 2026
SecurityWeek’s Cyber Insights 2026 calls out APIs as the central attack surface for modern apps. With over 80% of internet traffic flowing through APIs, the report highlights the need for security practices that span the full lifecycle, from dev design to runtime detection. It warns that attackers are using AI and recon automation to target unauthenticated or loosely protected endpoints at scale.
The Signal (if you only read one thing)
If your AppSec strategy doesn’t include API-specific testing, rate-limiting, and behavioral anomaly detection, you’re leaving the front door wide open.
App-layer access control, parameter validation, and real-time observability are becoming table stakes.
Prioritize API inventories and make sure your threat modeling reflects today’s hyperconnected systems.
Embed Security Early in the SDLC, from Design Phase
A DevSecOps Imperative piece published January 26, 2026, lays out five practical steps developers should adopt to secure applications in 2026, emphasizing that security must be deeply integrated into the development workflow, not bolted on at the end. The core guidance recommends lightweight threat modeling in sprint planning, integrating SAST into CI/CD pipelines to catch issues early, and enforcing robust authentication and API hardening throughout the lifecycle.
The Signal (if you only read one thing)
Application security is now inseparable from how software is designed, built, tested, and shipped.
Traditional late‑stage security reviews are too slow to keep up with fast CI/CD rhythms and microservices adoption.
AppSec teams need to collaborate more closely with engineering on threat models, tooling automation, secure coding standards, and empowering developers as first responders in the security chain.

2026 State of AI-Era AppSec Survey Highlights AI’s Impact on AppSec
StackHawk released findings from its “2026 State of AI-Era AppSec” survey, capturing how teams are adapting to AI-assisted development . Some key findings from the report:
AI coding assistants are nearly universal (87% adoption) across organizations, and “keeping up with rapid development velocity and AI-generated code” has become a top AppSec challenge.
Opinions on AI’s security risk remain split, about 53% of respondents see coding AI as a moderate or significant risk, while others consider it low risk or even beneficial.
Teams still face a heavy triage workload: half of teams spend 40% or more of their time prioritizing security findings , underscoring the need for better automation and risk focus.
What Cybersecurity Leaders Are Saying About AI and Human Judgment
A week ago, the DevArmor and Formal teams hosted a private dinner with cybersecurity leaders across the Bay Area to discuss when teams should rely on AI and when they should not. Across different conversations, one theme kept surfacing: automation is accelerating, but judgment is not optional.
AI is increasingly trusted for high-volume, repeatable, and evidence-heavy work. Identifying common vulnerabilities, correlating signals across large datasets, generating reports, and flagging anomalies are areas where AI leads on speed and scale, and leaders are comfortable letting it do so.
The line is drawn at high-impact or irreversible decisions. Risk acceptance, architectural tradeoffs, exception handling, and decisions affecting customers or regulatory exposure still require human judgment. Not because AI is incapable, but because accountability, context, and intent matter.
This is not human versus AI. It is AI for execution and humans for judgment, with the strongest security teams in 2026 designing workflows that keep humans firmly in the loop where trust and responsibility matter most.


Crunchbase Breach Report
What happened: Crunchbase confirmed a breach after the ShinyHunters group leaked stolen data online. The attackers claim to have exfiltrated over 2 million records containing personal and corporate information and released ~400 MB of files after Crunchbase refused to pay a ransom. Crunchbase says the incident has been contained and is working with security experts and law enforcement; early analysis shows exposed PII, internal contracts, and other sensitive corporate data.
Why it matters? This breach was part of a broader campaign in which the same actors used voice-phishing to compromise identity providers and breach multiple tech companies. Crunchbase was hit via social engineering rather than a code flaw, underscoring how attackers are shifting tactics and how even strong technical controls can be undermined by the human layer.
AppSec Takeaway:
Sophisticated phishing attacks (even voice-based vishing for SSO credentials) can defeat traditional defenses . Security teams must bolster identity verification and employee training to counter techniques that trick their way past MFA and other controls.
A single breach can expose a trove of both customer and internal data, which attackers might leverage in downstream attacks. In Crunchbase’s case, leaked PII and contracts could fuel targeted scams or corporate espionage even if the primary systems are back under control .
Threat groups often run coordinated campaigns hitting multiple organizations. ShinyHunters didn’t stop at Crunchbase – they hit SoundCloud and even a financial firm in parallel . AppSec teams should actively share threat intelligence and watch for patterns across the industry to avoid being caught off-guard by attacks spreading laterally through the ecosystem.
Next Startup

Sepcular: Proactive Security Testing for Modern AppSec Teams
Specular is an AI-native security platform that modernizes application security and vulnerability management using large language models. It simulates realistic attacker behavior to automatically identify security weaknesses and produce actionable, context-aware remediation guidance. By reducing reliance on slow, manual workflows run by security engineers, MSSPs, or consultants, Specular enables continuous and proactive security testing at scale. Peyton Smith and his team are working to position this approach as a force multiplier for modern AppSec teams, helping organizations find and fix risk faster and earlier as software delivery and attacker sophistication continue to accelerate.
Thanks for reading The AppSec Signal, DevArmor’s newsletter for security professionals. Have feedback or ideas for what we should cover next? Feel free to reach out - [email protected]
