Large language models (LLMs) and generative AI tools are more accessible than ever—but do you really want to send all your data to the cloud? In this talk, we’ll explore how to build your own private AI assistant, running entirely on open-source software and self-hosted hardware.
We’ll cover:
✅ Hardware choices—from budget-friendly setups to high-performance AI rigs.
✅ Hosting LLMs locally with Ollama and picking the right models for your needs.
✅ Connecting AI to external tools—from coding assistance in your IDE to reading and analyzing legal documents.
✅ Generating images with Stable Diffusion.
✅ Ditching Alexa—integrating voice-controlled AI with Home Assistant for fully private home automation.
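To give a flavor of the local-LLM workflow: with Ollama installed, getting a model running is a couple of commands, and the same local server can back IDE plugins and other tools. (The model name below is illustrative; choose one that fits your hardware.)

```shell
# Pull a model and run a one-off prompt locally
ollama pull llama3.2
ollama run llama3.2 "Explain this clause in plain English: ..."

# Ollama also serves a local REST API on port 11434,
# which coding assistants and scripts can target:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Hello",
  "stream": false
}'
```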
Whether you’re looking to enhance your workflow, boost privacy, or just tinker with AI at home, this session will give you everything you need to get started—without breaking the bank or relying on Big Tech.
Paul Czarkowski is a seasoned DevOps practitioner, open-source advocate, and cloud automation expert with over 20 years of experience in infrastructure, Kubernetes, and cloud-native technologies. A
...