
Posts

How To Use Custom Tools With GitHub Copilot SDK

AI agents are quickly becoming a core part of modern development workflows, and the GitHub Copilot SDK makes it surprisingly straightforward to build your own. Instead of relying on prompt engineering alone, the SDK lets you define structured tools, give your agent explicit capabilities, and execute real code through LLM‑driven reasoning. In my latest demo, I walk through the full process of creating an agent from scratch — setting up the project, defining the agent, building custom tools, and running everything locally. You’ll see how the SDK handles tool invocation, schema validation, and natural‑language responses, all while keeping your logic deterministic and maintainable. If you're exploring agentic workflows or want to understand how Copilot can power real execution paths, this walkthrough will give you a clear, practical starting point. 🎥 Watch the full step‑by‑step video here: 👉  This is just the beginning — once you understand the pattern, you can extend your agent with...
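To make the idea concrete, here is a minimal, framework-agnostic sketch of what a "custom tool" usually boils down to — a name, a description, a JSON schema for the arguments, and a plain function that does the work. The GitHub Copilot SDK has its own registration API (shown in the video), so treat the names below as illustrative only, not as the SDK's actual calls:

```python
# Illustrative only: a generic tool definition of the kind an agent SDK registers.
# The actual GitHub Copilot SDK API differs; see the video for the real calls.
import json
from jsonschema import validate  # pip install jsonschema

# 1. Describe the tool's arguments so the model knows how to call it.
GET_WEATHER_SCHEMA = {
    "type": "object",
    "properties": {"city": {"type": "string", "description": "City name"}},
    "required": ["city"],
}

# 2. The deterministic logic the tool executes (hypothetical example).
def get_weather(city: str) -> dict:
    return {"city": city, "forecast": "sunny", "temp_c": 22}

# 3. When the model emits a tool call, validate the arguments against the
#    schema before executing -- this is the "schema validation" step.
def invoke_tool(raw_arguments: str) -> dict:
    args = json.loads(raw_arguments)
    validate(instance=args, schema=GET_WEATHER_SCHEMA)
    return get_weather(**args)

print(invoke_tool('{"city": "Seattle"}'))
```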

🚀 GitHub Copilot SDK Is Here — Build Your Own AI Developer Tools

The GitHub Copilot SDK just dropped, and it’s a game-changer for developers. You can now build your own Copilot-style AI features directly inside your apps, tools, and workflows — no more waiting for GitHub to do it for you. In my latest video, I break down exactly what the SDK is, how it works, and why it’s the future of developer productivity. 🎥 Watch now: Introducing GitHub Copilot SDK — Step-by-Step Demo. If you’re serious about AI + dev tools, this is the video to start with.

How to Automate Phone Calls using AI Agent

Every once in a while, a new AI tool appears that doesn’t just improve on what already exists — it completely changes your expectations. That’s exactly what happened when I tested Awaz AI, a voice agent designed to handle real phone calls with natural, human‑like conversation. I’ve tried many voice systems before, and most of them sound robotic, interrupt at the wrong time, or fall apart when you ask something unexpected. But Awaz AI surprised me from the very first “hello.” The pacing, the tone, the timing — everything felt unusually natural. It didn’t rush. It didn’t freeze. It didn’t sound scripted. It actually felt like a real conversation. To make sure I wasn’t imagining it, I recorded the entire interaction. No edits. No retakes. Just a raw, real phone call between me and the Awaz AI agent. If you’re curious about how far voice AI has come — or if you simply want to hear an AI that sounds more human than most customer service lines — you should watch this demo. It’s one of the ...

Converting an AI Workflow Into an Agent

AI workflows were a great starting point. They helped us build early prototypes, automate simple tasks, and experiment with LLMs. But the future of AI is not workflows - it's agents. Agents are more flexible, more intelligent, and more aligned with how real‑world tasks work. If you want to understand this shift - and learn exactly how to convert your existing workflows into agents - my video will walk you through the entire process step by step. Watch it. Learn it. Build with it. Your future AI systems will thank you.

Declarative & Hosted Agents - from VS Code to Microsoft Foundry (PREVIEW)

If you’ve been curious about how to take your AI agent workflows from development to deployment, this new tutorial is for you. In my latest video, I walk you through the process of building declarative & hosted agents inside Visual Studio Code and then show you how to publish them directly to Microsoft Foundry (Preview).

🎯 What you’ll learn in the video:
- How to set up declarative agents in VS Code
- How to set up hosted agents in VS Code
- Hosting workflows for scalable deployment
- Publishing agents seamlessly to Microsoft Foundry
- Why Foundry is becoming the go-to platform for enterprise-ready AI agents

Whether you’re a developer experimenting with agent-based systems or an AI enthusiast looking to understand Microsoft’s latest tools, this tutorial will give you a clear, step-by-step guide to get started. 👉 Watch the full video here 📌 Don’t forget to like, comment, and subscribe for more tutorials on building intelligent agents with Microsoft Foundry!

Declarative Agent Workflows Made Easy - VS Code + Microsoft Foundry

If you’ve been exploring Microsoft Foundry and wondering how to actually run those declarative agent workflows you keep hearing about… this video is for you. I break down exactly how to view, run, and test declarative workflows inside VS Code — no fluff, just practical steps. You’ll learn how to open workflow files, understand agent logic, and test everything with real-world examples. Whether you're building multi-agent systems or just curious about how Foundry works, this tutorial will help you get hands-on fast.

🎯 What’s inside the video:
- How to open and explore workflows in VS Code
- Running workflows with Microsoft Foundry
- Testing agent logic and human-in-the-loop steps
- Tips for debugging and refining your setup

👉 Watch now: Happy learning!

Understanding Tools, Agents & Knowledge Bases in Microsoft Foundry

Why Foundry Is a Big Deal
AI is moving fast, and Microsoft Foundry is one of the biggest updates you should know about. It’s not just another platform—it’s a new way to build AI agents that can actually do things. Instead of being limited to answering questions, these agents can plan, reason, connect to apps, and pull knowledge from your company’s data. If you’ve ever wished your chatbot could act more like a teammate, Foundry is where that shift happens.

Agents: Smarter Than Assistants
Traditional assistants were reactive—they waited for you to ask something. Foundry agents are proactive. They can:
- Understand intent beyond keywords.
- Plug into tools and apps.
- Pull info from knowledge bases like Foundry IQ.
- Execute tasks without constant supervision.
Think of them as digital interns who never get tired and don’t need coffee breaks.

Tools: Giving Agents Superpowers
Tools are what make agents useful. They’re the “hands” that let agents interact with the world. With Foundry Tools...

How to build, deploy, and connect an MCP server on Azure — step-by-step!

In this full tutorial, I’ll walk you through the complete workflow of creating an MCP (Model Context Protocol) server, deploying it on Azure App Service, and integrating it with AI agents using the AI Toolkit in Visual Studio Code. Whether you're a developer, AI enthusiast, or cloud architect, this video covers everything you need to:
✅ Build your MCP server
✅ Deploy it to Azure App Service
✅ Secure and scale your server for production
✅ Connect it to AI agents for real-time tool invocation
✅ Troubleshoot, optimize, and monitor your deployment

If you're more interested in reading, then here is my detailed blog on MEDIUM.
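For readers who want a feel for the code before watching, here is a minimal MCP server sketch using the official Python SDK (the mcp package). The get_server_time tool is a made-up example rather than the one from the video, and the Azure App Service wiring is covered in the tutorial itself:

```python
# Minimal MCP server sketch (official Python SDK, package "mcp").
# The tool below is a hypothetical example; the video builds its own tools.
from datetime import datetime, timezone

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def get_server_time() -> str:
    """Return the current UTC time as an ISO-8601 string."""
    return datetime.now(timezone.utc).isoformat()

if __name__ == "__main__":
    # stdio is fine for local testing; for Azure App Service you would switch
    # to an HTTP-based transport, as shown in the video.
    mcp.run()
```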

Azure AI Foundry vs. Microsoft Foundry

Big news in the AI world! Azure AI Foundry has officially been rebranded as Microsoft Foundry. This isn’t just a name change — it’s a shift in how developers and enterprises will build, deploy, and scale AI. In my latest video, I break down:
🔄 What’s new in Microsoft Foundry
❌ What’s been retired or changed
💡 Why these updates matter for your workflows

👉 Watch the full breakdown here: If you’re working with AI agents, cloud workflows, or enterprise integrations, this update is one you don’t want to miss. Alternatively, if you're more interested in reading, then here is my detailed blog on MEDIUM.

How To Generate Architecture Diagrams With Natural Language Using LLMs — No Design Tools Needed

🎬 Watch the Magic Happen — In Just Minutes
Curious how you can turn plain English into a full-blown architecture diagram? In my latest video, I show you exactly how to auto-generate cloud diagrams using natural language and LLMs — no Visio, no manual layout, just smart markdown and AI. You’ll see:
- How to describe your system in instructions.md
- How the LLM interprets and builds the diagram
- How to visualize Azure components, workflows, and tiers instantly

👉 Watch now and see how this technique can save you hours and make your documentation smarter: Auto-Generating Architecture Diagrams Using LLMs
Once you try it, you’ll never diagram the old way again. If you're more interested in reading, then here is my detailed blog: MEDIUM
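If you'd rather script the same idea, here is a rough sketch of the pattern: read the natural-language description from instructions.md and ask an LLM to emit a Mermaid diagram. The model name, prompt wording, and output handling are assumptions for illustration; the video shows the exact setup:

```python
# Rough, scripted sketch of the pattern: natural-language description in,
# Mermaid diagram source out. Model name and prompt are illustrative only.
from pathlib import Path

from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

description = Path("instructions.md").read_text()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model works
    messages=[
        {"role": "system",
         "content": "Convert the following architecture description into a "
                    "Mermaid flowchart. Return only the Mermaid code."},
        {"role": "user", "content": description},
    ],
)

# Save the generated diagram source; render it with any Mermaid viewer.
Path("architecture.mmd").write_text(response.choices[0].message.content)
```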

How To Test AI Agent Tools Without Any Risk

Ever built an AI agent and thought,  “Wait… I don’t want it to actually run that tool yet”?  That’s where dry-run agents come in — and in this video, I show you exactly how to build one using the Microsoft Agent Framework. You’ll learn how to simulate tool usage without executing anything. It’s safe, smart, and perfect for testing workflows, debugging logic, or getting human approval before action. Whether you're a dev, a student, or just curious how AI agents “think before they act,” this tutorial breaks it down step-by-step — with real code, visuals, and a fun twist. 👉 Watch now and see how dry-run agents can transform your AI workflows — safely and brilliantly.
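The core trick is simpler than it sounds: instead of letting the agent execute a tool, you swap in a stand-in that records what would have happened. Below is a framework-agnostic sketch of that idea — the Microsoft Agent Framework has its own way to wire this up, which is what the video demonstrates:

```python
# Framework-agnostic sketch of a "dry-run" tool wrapper: it records the call
# the agent wanted to make instead of executing it. Names are illustrative.
import functools

dry_run_log: list[dict] = []

def dry_run(tool):
    """Wrap a tool so that invoking it only logs the intended call."""
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        dry_run_log.append({"tool": tool.__name__, "args": args, "kwargs": kwargs})
        return f"[dry-run] would call {tool.__name__} with {kwargs or args}"
    return wrapper

@dry_run
def delete_file(path: str) -> str:
    """The real (dangerous) tool -- never reached in dry-run mode."""
    raise RuntimeError("should not execute during a dry run")

# The agent "calls" the tool, but nothing destructive happens.
print(delete_file(path="report.csv"))
print(dry_run_log)  # review or send for human approval before the real run
```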

From Python to AI Agent Tool—In Just Minutes! 🚀

Ever wondered if your plain old Python function could do something smarter? Like… actually respond to prompts, act like a tool, and be part of an AI agent? It can. And I’ll show you how. 🎥 Watch the full video here

In my latest YouTube demo, I take a simple Python function—generate_guid()—and turn it into a fully callable AI tool using the Microsoft Agent Framework. No LLMs. No fluff. Just clean, modular Python wrapped in something powerful.

🧠 What You’ll Learn:
- How to wrap any Python function using FunctionTool
- How to register it with an agent
- How to trigger it with natural language (yes, really!)

⚡ Why You Should Watch:
If you’re a developer, content creator, or just curious about AI agents, this is the fastest way to get started. You’ll see how to:
- Build smarter tools with less code
- Keep full control over logic
- Scale your agent workflows without the LLM overhead

👉 Ready to see it in action? Click here to watch the video now and let me know what tool you’d build next!
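As a taste of what the demo covers: generate_guid() is just ordinary Python, and a tool wrapper adds the metadata (name, description, parameters) an agent needs to expose it to a model. FunctionTool in the Microsoft Agent Framework handles that for you; the dataclass below is only an illustrative stand-in to show what such a wrapper captures, not the framework's API:

```python
# Illustrative stand-in for what a FunctionTool-style wrapper captures.
# The real Microsoft Agent Framework class does this (and more) for you.
import inspect
import uuid
from dataclasses import dataclass
from typing import Callable

def generate_guid() -> str:
    """Return a new GUID as a string."""
    return str(uuid.uuid4())

@dataclass
class SimpleTool:
    func: Callable
    name: str
    description: str

    def spec(self) -> dict:
        # Metadata the agent advertises to the model.
        return {
            "name": self.name,
            "description": self.description,
            "parameters": list(inspect.signature(self.func).parameters),
        }

    def invoke(self, **kwargs):
        return self.func(**kwargs)

guid_tool = SimpleTool(generate_guid, "generate_guid", "Generates a new GUID.")
print(guid_tool.spec())
print(guid_tool.invoke())
```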

What is the difference between a RAG and an Agent

If you’ve ever talked to a chatbot or used an AI assistant to answer a question, there’s a good chance it used something called RAG or was powered by AI agents behind the scenes. But what are these things, and how are they different? Let’s break it down in a way that’s super easy to understand. 🛠️✨

📚 What is RAG?
RAG stands for Retrieval-Augmented Generation. Think of it like this: Imagine you’re doing a school project on volcanoes. You know a little, but instead of guessing answers, you Google it first, grab info from a few trusted websites, and then write your project in your own words. That’s what RAG does:
- It retrieves useful information from a database or search system.
- Then it generates a response based on what it found.
It’s like a super-smart librarian + writer combo! 📖✍️
📌 Perfect for: Answering questions based on a LOT of documents (like customer support FAQs or legal documents).

🕹️ What is an AI Agent?
An AI Agent is like a digital helper that can think, plan, and eve...

Azure Model Router: The Smart AI Traffic Controller

Imagine you're at a busy airport, and planes from different airlines are landing and taking off. To keep everything running smoothly, air traffic controllers decide which runway each plane should use. Now, think of AI models as those planes—each one has different strengths, speeds, and capabilities. The Azure Model Router acts like an air traffic controller for AI models, ensuring that every request gets handled by the best model available.

What is Azure Model Router?
Azure Model Router is a smart AI system that automatically selects the best AI model to respond to a request. Instead of developers manually choosing which AI model to use, the Model Router does it for them, optimizing for speed, cost, and accuracy. It’s part of Azure AI Foundry, a platform that helps businesses and developers deploy AI models efficiently.

Why Do We Need It?
AI models come in different types—some are great at answering questions, others are better at reasoning, and some are super-fast but less detai...
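In practice, calling it looks just like calling any other Azure OpenAI deployment — you point your client at a Model Router deployment and let it pick the underlying model. Here is a rough sketch, assuming an endpoint, API key, and a deployment named "model-router" (your names and API version will differ):

```python
# Rough sketch: calling a Model Router deployment like any other deployment.
# Endpoint, key, API version, and the deployment name are assumptions here.
import os

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",
)

response = client.chat.completions.create(
    model="model-router",  # name of your Model Router deployment
    messages=[{"role": "user", "content": "Summarize why model routing helps."}],
)

print(response.choices[0].message.content)
print(response.model)  # reports which model handled the request
```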

Understanding Model Context Protocol (MCP): What and Why

AI models are powerful, but their utility is often limited by their inability to interact with external systems efficiently. The Model Context Protocol (MCP) is designed to bridge this gap, allowing AI models to integrate seamlessly with external tools, APIs, and real-time data sources. By standardizing this interaction, MCP enables AI assistants to provide more informed, precise, and interactive responses.
(Image generated with Copilot.)

What is MCP?
MCP is an open protocol designed to improve how applications communicate context to Large Language Models (LLMs). It allows AI models to access relevant information from external sources dynamically, reducing reliance on static training data and enhancing responsiveness. MCP supports multiple interaction method...
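To see what "communicating context" looks like from the application side, here is a small client sketch using the official Python SDK: it launches a local MCP server over stdio, lists the tools it exposes, and calls one. The server script and tool name below are placeholders for whatever server you connect to:

```python
# Minimal MCP client sketch (official Python SDK, package "mcp").
# Server script and tool name are placeholders for your own server.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # tools advertised by the server
            result = await session.call_tool("get_server_time", {})
            print(result)

asyncio.run(main())
```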

How to Use Google Gemini with Semantic Kernel

In the ever-evolving world of artificial intelligence, combining powerful tools can open up new avenues for innovation and efficiency. Today, we're diving into how to use Google Gemini with Semantic Kernel — a match made in AI heaven. Whether you're an AI enthusiast, developer, or data scientist, this guide will walk you through the integration process step-by-step, ensuring you harness the full potential of these technologies. If you're more interested in watching the entire process, then here is the video:

What is Google Gemini?
Google Gemini is a suite of generative AI models designed to handle multiple types of data, including text, images, and audio. Its multimodal capabilities make it a versatile tool for a wide range of applications, from natural language processing to creative content generation.

Introduction to Semantic Kernel
Microsoft Semantic Kernel is an open-source development kit designed to help developers integrate AI models into their applications. It s...
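For a flavor of what the wiring looks like in Python, here is a minimal sketch. The connector import path, parameter names, and model id are assumptions that vary between Semantic Kernel versions, so check the video (or your installed version) for the exact code:

```python
# Minimal sketch of registering Gemini with Semantic Kernel (Python).
# Import path, parameter names, and model id are assumptions -- they vary
# by SDK version; the video shows the exact code.
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.google.google_ai import GoogleAIChatCompletion

kernel = Kernel()
kernel.add_service(
    GoogleAIChatCompletion(
        gemini_model_id="gemini-1.5-flash",  # assumed model id
        api_key="YOUR_GOOGLE_AI_API_KEY",    # assumed parameter name
    )
)

async def main() -> None:
    result = await kernel.invoke_prompt("Write a haiku about AI agents.")
    print(result)

asyncio.run(main())
```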

Use Your Phone To Call ChatGPT - FREE!

Are you fascinated by AI and looking for an easier way to interact with it?  Great news!  You can now use your phone to call ChatGPT for free. Yes, you heard that right! Anyone in the USA can simply dial the number 1-800-242-8478 and start talking with ChatGPT instantly. Here is my video on this:

OpenAI Announcement - AI-powered Search Rolled Out For All ChatGPT Users

OpenAI has expanded its AI-powered search capabilities by rolling out ChatGPT Search to all users, both free and paid. This enhancement enables users to access real-time information directly within the ChatGPT interface, streamlining the process of obtaining up-to-date data without navigating to external search engines.

Key Features of ChatGPT Search

Real-Time Information Access
Users can now retrieve current data, including news updates, weather forecasts, sports scores, and stock market trends, all within the ChatGPT environment.

Enhanced User Interface
The search functionality has been integrated with a more traditional search engine appearance, featuring location-based searches that display lists of results, images, ratings, operating hours, and detailed information such as maps and directions directly within the app.

Direct Links to Sources
Responses now include links to relevant web sources, allowing users to delve deeper into topics of interest.

Access and Avail...

How To Run Hugging Face Models On Local CPU Using Ollama

Are you fascinated by the capabilities of Hugging Face models but unsure how to run them locally? Look no further! Here, we will explore the simplest and most effective way to get Hugging Face models up and running on your local machine using Ollama. For a complete walkthrough, check out my latest video on "How to Run Hugging Face Models Locally Using Ollama". This video covers everything from installation to running an example, ensuring you have all the information you need to get started: Happy coding!
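The short version of the trick: Ollama can pull GGUF models straight from the Hugging Face Hub using an hf.co/... model reference. Here is a minimal sketch with the Python client; the repository below is only an example (any GGUF repo works), and Ollama must already be running locally:

```python
# Minimal sketch: pull a GGUF model from the Hugging Face Hub via Ollama and
# chat with it. The repository name is only an example.
from ollama import chat, pull

model = "hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF"

pull(model)  # downloads the model on first use (requires Ollama running locally)

response = chat(
    model=model,
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.message.content)
```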

Generating AI Model Responses in JSON Format Using Ollama and Llama 3.2

In the rapidly evolving field of artificial intelligence, generating accurate and contextually relevant responses is crucial. Ollama, a lightweight and extensible framework, combined with the powerful Llama 3.2 model, provides a robust solution for generating AI model responses in JSON format. This article explores how to leverage these tools to create efficient and effective AI responses. If you are interested in every single detail, here is my video recording:

Setting Up Ollama and Llama 3.2
Before diving into the specifics of generating responses, it's essential to set up Ollama and Llama 3.2 on your local machine. Ollama offers a straightforward installation process, and you can download the necessary models from the Ollama library.

Import required packages
To get started with the code, first we need to import the required packages:

from ollama import chat
from pydantic import BaseModel

Generating Responses in JSON Format
JSON format is a structure...
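Picking up from the imports above, a condensed version of the full pattern looks like this: define a Pydantic model, pass its JSON schema as the format argument, and validate the reply back into a typed object. The Country model and prompt are an illustrative example, not necessarily the one from the video:

```python
# Condensed structured-output pattern with Ollama + Pydantic.
# The Country model is an illustrative example schema.
from ollama import chat
from pydantic import BaseModel

class Country(BaseModel):
    name: str
    capital: str
    languages: list[str]

response = chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Tell me about Canada."}],
    format=Country.model_json_schema(),  # constrain the reply to this JSON schema
)

country = Country.model_validate_json(response.message.content)
print(country)
```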