AI Slop Wrangling: Regaining Control When Developers Are Armed With AI Slop Cannons

AI coding assistants have handed every developer on your team a firehose. Pull requests are bigger, faster, and more confident than ever — and a growing percentage of what lands in your repo is code that no human has fully read, let alone understood. The productivity gains are real, but so is the slop: plausible-looking code that passes a casual review, slips past weak tests, leaks abstractions, and quietly introduces security issues that won’t surface until production.

This talk is about how engineering leaders and senior ICs can reassert control without killing the velocity their teams have come to love. We’ll cover the four gates that matter most when AI is writing a meaningful share of your code: quality gates that catch slop before it merges, test coverage strategies that actually verify behavior instead of rubber-stamping generated tests, abstractions that force AI output to conform to your architecture rather than warp it, and security gates designed for a world where secrets, injection flaws, and dependency risks can be generated at machine speed. Attendees will leave with a practical framework for auditing their current guardrails, concrete tooling recommendations, and a clearer picture of where human judgment still has to live in an AI-augmented SDLC.

Speaker

Brandon Hedge


B. Hedge is a technologist with nearly 30 years of experience spanning infrastructure, distributed systems, and engineering leadership. He currently works at Neural Payments, where he focuses on

...