March 23, 2026 · Edition #7
AI Didn’t Create New Vulnerabilities — It Made Old Ones Affordable
Infrastructure was never fully hardened — and for years, it didn’t need to be. Exploiting a misconfigured DNS rule or an over-permissive IAM role required real skill. That complexity was a natural filter. AI removed it. This week, researchers broke out of AWS Bedrock’s “isolated” sandbox using DNS tunneling and escaped Snowflake’s coding agent via process substitution — techniques that once required deep infrastructure expertise. Check Point documented a single developer building 88,000 lines of deployment-ready malware in a week using an AI IDE. The vulnerability surface didn’t change — but the population that can exploit it expanded by orders of magnitude. Every misconfiguration, every “we’ll fix it next quarter” is now in play — because the cost to exploit it dropped to near zero. The vulnerability backlog didn’t grow. The exploitation clock just got 100x faster.
March 16, 2026 · Edition #6
Built-in AI Security Is a Sensor, Not a Solution
The McKinsey breach didn't happen at the model layer. No jailbreak, no prompt injection. The vulnerability was exposed API documentation, 22 unauthenticated endpoints, and SQL injection hidden in JSON key names. An AI agent found it in two hours. The most dangerous finding — 95 writable system prompts — could have silently corrupted every answer the platform delivered. Built-in AI security is one sensor. Securing AI means seeing across the full stack: model, infrastructure, data, identity, integrations. Not just the layer your provider happens to own.
March 8, 2026 · Edition #5
Zero Trust for Agent Memory
Zero trust changed how we think about network perimeters. We need the same shift for agent memory. Right now, most agents trust their own memory implicitly — whatever's in the vector database is treated as ground truth, retrieved and acted on without questioning whether it's been tampered with. CSA's new LPCI research shows payloads encoded into agent memory sitting dormant until triggered — across sessions, across users, 43–49% success rates. Your input/output filters won't catch it. And it's already happening in the wild: 31 companies embedding memory manipulation into 'Summarize with AI' buttons. One click changes how the agent responds forever.
March 1, 2026 · Edition #4
Not All Agents Are Built Equal — Why Posture Management Must Evolve for Non-Deterministic Risk
The agent landscape isn't one thing. Pro-code agents behave like traditional apps — deterministic, scoped, predictable. But low-code and local agents are different: their risk profile only materializes at runtime, when someone assigns a task and the agent decides which tools to pick and what data to pull. That's what makes Agentic SPM different from traditional AI-SPM. AI-SPM tells you what's deployed. Agentic SPM tells you what's actually happening when these agents run. Runtime threat protection catches the SANDWORM_MODE-style attacks in the act — then feeds that signal back to posture, reducing risk across every connected agent in the org. Three layers: AI-SPM, Agentic SPM, runtime protection. Not a replacement — an evolution.