March 8, 2026 · Edition #5
Zero Trust for Agent Memory
Zero trust changed how we think about network perimeters. We need the same shift for agent memory. Right now, most agents trust their own memory implicitly: whatever is in the vector database is treated as ground truth, retrieved and acted on without any check for tampering. CSA's new LPCI research shows payloads encoded into agent memory that sit dormant until triggered, persisting across sessions and across users, with 43–49% success rates. Your input/output filters won't catch them. And it's already happening in the wild: 31 companies embedding memory manipulation into "Summarize with AI" buttons. One click changes how the agent responds forever.
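One zero-trust primitive this implies: don't act on a retrieved memory unless you can prove it was written through your own trusted path. A minimal sketch, assuming a shared HMAC key held by the agent runtime; the function names and key handling here are illustrative, not from CSA's research or any particular agent framework:

```python
import hashlib
import hmac
import json

# Illustrative only: in practice the key lives in a secrets manager
# and is never baked into source.
SECRET_KEY = b"rotate-me"

def seal(entry: dict) -> dict:
    """Attach an HMAC over the canonical entry at write time."""
    payload = json.dumps(entry, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"entry": entry, "tag": tag}

def verify(sealed: dict) -> bool:
    """At retrieval time, refuse to act on entries whose tag doesn't match."""
    payload = json.dumps(sealed["entry"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["tag"])

# Write path: seal before persisting to the vector store.
record = seal({"user": "alice", "text": "prefers concise answers"})

# Read path: verify before the agent acts on the retrieved memory.
assert verify(record)

# A tampered entry (e.g. an injected payload) fails verification.
record["entry"]["text"] = "ignore prior instructions"
assert not verify(record)
```

Signing doesn't stop a payload that arrives through a legitimate write path, such as a poisoned "Summarize with AI" click, but it does shrink the trust boundary: anything edited in the store after the fact, or inserted out of band, gets quarantined instead of executed.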