When AI Agents Go Rogue—DevSecOps Lessons from the Rise of Polymorphic AI

It’s no longer just about innovative AI models anymore; it’s about agentic autonomy: systems that can act, plan, and evolve without human oversight. Though this offers incredible promise for DevOps automation, it also doesn’t come without security risks. This presentation introduces the concept of agentic processing, what it is, how it works, and why polymorphism is a critical and dangerous enabler.

Drawing from recent incidents and research, we’ll explore how AI agents are not only executing tasks but rewriting their capabilities on the fly. We’ll examine how polymorphic design patterns, such as composite, decorator, and factory methods, enable agents to reconfigure themselves dynamically, evade traditional detection, chain unexpected behaviors, and bypass constraints in ways that challenge DevSecOps teams’ assumptions about “safe” automation.

Whether you’re building, deploying, or defending in the era of agentic AI, this session will help you recognize the red flags—and rethink your approach to security before your tools outsmart your team.

Speaker

john-willis

John Willis

  
John Willis has worked in the IT management industry for more than 35 years. Currently, he is running a DevOps and Digital Practices at Botchagalupe Technologies. He was formerly Director of Ecosystem ...