They Didn't Stop to Think if They Should: Machine Learning and the Internet of Unpatched Things

Billions of edge computing devices have now been deployed in the so-called Internet of Things. And the combination of massive IoT data and machine learning promises to transform traditional industries like healthcare, manufacturing, and agriculture.

But there is a gap between the lofty promises and the dark side of that transformation. Personal data is funneled into barely-understood machine learning models used to deny health insurance or manipulate elections. Embedded systems engineers are working with yesterday’s threat models as often as yesterday’s build tools.

Even more troubling, when these systems share our public spaces, we lose our ability to meaningfully consent to their use. A web page visit becomes part of an invisible behavioral profile. A pedestrian can be run over by a self-driving car without ever having ridden in one. A patient at a hospital taken over by WannaCry probably doesn’t know their EKG machine has an embedded OS, much less that it still runs Windows XP.

There’s little doubt that if we don’t act, laws will be imposed upon us by the technology experts known as Congress. Waiting for “the industry” to do something is abdicating responsibility. What can we as individuals do? Let’s discuss how we can create a framework of engineering ethics to decide how we implement health and privacy safeguards, how we decide what tools or even which data structures to use, and how we decide what projects and organizations to build.

Tim Gross

Tim Gross is founder of Machinist Labs, an independent consultancy in DevOps mentoring, training, and software architecture. Tim has done everything from designing PKI infrastructure for IoT, to ...