Over the past few weeks, I’ve been exploring how to build practical, privacy‑first agentic AI workflows that run entirely on a local machine. In my latest project, I combined the GitHub Copilot SDK with Foundry Local to create a fully offline agent capable of intelligently choosing and executing tools, without relying on any cloud model.
In this demo, I walk through how I built (with a minimal sketch of each piece after the list):
- A Foundry Local LLM tool for on‑device inference
- Three lightweight Python tools
- A router prompt that lets the Copilot SDK decide which tool to invoke
- A clean async loop that ties everything together
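
To make this concrete, here’s a minimal sketch of the on‑device LLM piece, assuming the `foundry-local-sdk` and `openai` Python packages and Foundry Local’s OpenAI‑compatible endpoint. The model alias is just an example, not necessarily the one I use in the video.

```python
# Minimal sketch: wrap Foundry Local's OpenAI-compatible endpoint as a
# callable "LLM tool". Assumes foundry-local-sdk and openai are installed.
import openai
from foundry_local import FoundryLocalManager

ALIAS = "phi-3.5-mini"  # example alias; use any model Foundry Local serves

# Starts the local service and downloads the model if it isn't cached yet.
manager = FoundryLocalManager(ALIAS)
client = openai.OpenAI(base_url=manager.endpoint, api_key=manager.api_key)

def local_llm(prompt: str) -> str:
    """Run a single chat completion against the on-device model."""
    response = client.chat.completions.create(
        model=manager.get_model_info(ALIAS).id,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```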
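
The tools themselves are plain Python functions collected in a registry. The three shown here (current time, word count, arithmetic) are illustrative stand‑ins, not necessarily the ones built in the video; the point is that each tool is a small, single‑purpose function with a uniform signature.

```python
# Sketch of the "lightweight tools" idea: plain functions plus a dispatch dict.
from datetime import datetime

def get_time(_: str = "") -> str:
    """Return the current local time (ignores its input)."""
    return datetime.now().isoformat(timespec="seconds")

def word_count(text: str) -> str:
    """Count whitespace-separated words in the input text."""
    return f"{len(text.split())} words"

def calculator(expression: str) -> str:
    """Evaluate a simple arithmetic expression.
    eval is acceptable for a local demo; a real agent should use a safe parser.
    """
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {
    "get_time": get_time,
    "word_count": word_count,
    "calculator": calculator,
}
```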
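
Routing comes down to a prompt that asks the model to reply with strict JSON naming a tool. This sketch is SDK‑agnostic and reuses `local_llm` and the tool names from the sketches above; in the actual project the decision flows through the Copilot SDK, but the prompt shape illustrates the same idea.

```python
# Sketch of a router prompt: the model must answer with ONLY a JSON object,
# so the agent can dispatch to a tool (or answer directly with "none").
import json

ROUTER_PROMPT = """You are a tool router. Available tools:
- get_time: returns the current local time (no input)
- word_count: counts the words in the input text
- calculator: evaluates an arithmetic expression

Given the user request below, reply with ONLY a JSON object:
{{"tool": "<tool name or 'none'>", "input": "<tool input or empty>"}}

User request: {request}"""

def route(request: str) -> dict:
    """Ask the local model which tool (if any) to invoke."""
    raw = local_llm(ROUTER_PROMPT.format(request=request))
    return json.loads(raw)  # a real agent should retry/repair malformed JSON
```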
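
Finally, an async loop ties it together: route the request, run the selected tool, then have the model compose a polished final answer. This builds on the sketches above and is illustrative of the flow rather than the exact code from the video; `asyncio.to_thread` keeps the blocking HTTP calls off the event loop.

```python
# Sketch of the agent loop: route -> run tool -> compose the final answer.
import asyncio

async def handle(request: str) -> str:
    decision = await asyncio.to_thread(route, request)
    tool = TOOLS.get(decision.get("tool", "none"))
    if tool is None:
        # No tool matched: let the model answer the request directly.
        return await asyncio.to_thread(local_llm, request)
    result = tool(decision.get("input", ""))
    summary_prompt = (
        f"User asked: {request}\n"
        f"Tool result: {result}\n"
        "Write a short, polished answer."
    )
    return await asyncio.to_thread(local_llm, summary_prompt)

async def main() -> None:
    for q in ["What time is it?", "What is 17 * 23?"]:
        print(await handle(q))

if __name__ == "__main__":
    asyncio.run(main())
```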
The result is a flexible, extensible agent that can reason, select tools, and produce polished answers, all running locally.
If you’re interested in agent design, local LLMs, or practical orchestration patterns, this walkthrough will give you a clear, end‑to‑end example you can adapt to your own projects.
🎥 Watch the full video here: