Enterprise IaC teams have spent years building Terraform base modules, naming rules, and approval gates. Most AI agent demos ignore all of that — generating freeform HCL and creating more cleanup than they save.
This talk takes a different approach: your existing modules and conventions are the source of truth, not something the agent bypasses. I’ll walk through a governed workflow (discover → validate → implement → audit) and show how MCP and skills-based architecture keep it portable across models.
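As a taste of the workflow, here is a minimal sketch of the four gated stages. Everything here is illustrative — the stage names come from the talk, but the `RunContext` shape and event tuples are hypothetical, not a real framework:

```python
# Hypothetical sketch of the discover -> validate -> implement -> audit
# pipeline. Each stage is a gate: a False return halts the run.
from dataclasses import dataclass, field

@dataclass
class RunContext:
    request: str
    events: list = field(default_factory=list)

def discover(ctx):
    # Look up approved base modules instead of generating freeform HCL.
    ctx.events.append(("discover", "matched an approved base module"))
    return True

def validate(ctx):
    # Check the proposed change against naming rules and approval gates.
    ctx.events.append(("validate", "naming rules ok"))
    return True

def implement(ctx):
    # Emit a module call, not hand-rolled resources.
    ctx.events.append(("implement", "module block written"))
    return True

def audit(ctx):
    # Record a replayable trail of every decision made above.
    ctx.events.append(("audit", f"{len(ctx.events)} prior events recorded"))
    return True

def run(ctx):
    for stage in (discover, validate, implement, audit):
        if not stage(ctx):  # any failed gate stops the pipeline
            return False
    return True
```

The point of the ordering is that generation (`implement`) only happens after the agent has grounded itself in your existing modules and passed your gates — the inverse of the freeform-HCL demos.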
I’ll also cover why observability — structured events, audit trails, replay — isn’t optional when you need to explain agent behavior to a compliance team. You’ll leave with a reference architecture for aligning agents with your internal Terraform standards.
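To make "structured events" concrete: a minimal sketch of the kind of record that makes agent behavior replayable and explainable. The field names here are assumptions for illustration, not a standard schema:

```python
# Hypothetical structured audit event for an agent-driven Terraform run.
# Field names ("ts", "stage", "actor", "detail") are illustrative.
import datetime
import json

def audit_event(stage, actor, detail):
    return {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "stage": stage,    # discover / validate / implement / audit
        "actor": actor,    # which agent, tool, or check emitted this
        "detail": detail,  # what happened, in replayable form
    }

event = audit_event("validate", "naming-check",
                    "resource name matches org convention")
line = json.dumps(event)  # one JSON object per line: an append-only log
```

An append-only log of events like this is what lets you answer a compliance team's "why did the agent do X?" after the fact, rather than reconstructing it from chat transcripts.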
