
Posts

Showing posts with the label AI

Azure Model Router: The Smart AI Traffic Controller

Imagine you're at a busy airport, and planes from different airlines are landing and taking off. To keep everything running smoothly, air traffic controllers decide which runway each plane should use. Now, think of AI models as those planes—each one has different strengths, speeds, and capabilities. The Azure Model Router acts like an air traffic controller for AI models, ensuring that every request gets handled by the best model available. What is Azure Model Router? Azure Model Router is a smart AI system that automatically selects the best AI model to respond to a request. Instead of developers manually choosing which AI model to use, the Model Router does it for them, optimizing for speed, cost, and accuracy. It’s part of Azure AI Foundry, a platform that helps businesses and developers deploy AI models efficiently. Why Do We Need It? AI models come in different types—some are great at answering questions, others are better at reasoning, and some are super-fast but less detai...
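To give a flavour of what calling the router looks like, here is a minimal sketch using the Azure OpenAI Python client. The endpoint, API version, and the deployment name "model-router" are assumptions; substitute the values from your own Azure AI Foundry deployment.

```python
# Minimal sketch: sending a request to a model-router deployment in Azure AI Foundry.
# Endpoint, API version, and the deployment name are placeholders for your own resource.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-12-01-preview",  # assumed; use the version your resource supports
)

response = client.chat.completions.create(
    model="model-router",  # the router deployment name, not an individual model
    messages=[{"role": "user", "content": "Summarize why model routing can save cost."}],
)

# The router picks the underlying model; the response reports which one actually answered.
print(response.model)
print(response.choices[0].message.content)
```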

Understanding Model Context Protocol (MCP): What and Why

AI models are powerful, but their utility is often limited by their inability to interact with external systems efficiently. The Model Context Protocol (MCP) is designed to bridge this gap, allowing AI models to integrate seamlessly with external tools, APIs, and real-time data sources. By standardizing this interaction, MCP enables AI assistants to provide more informed, precise, and interactive responses. (Image generated from Copilot.) What is MCP? MCP is an open protocol designed to improve how applications communicate context to Large Language Models (LLMs). It allows AI models to access relevant information from external sources dynamically, reducing reliance on static training data and enhancing responsiveness. MCP supports multiple interaction method...
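As a taste of what "exposing a tool over MCP" means in practice, here is a minimal server sketch using the FastMCP helper from the official `mcp` Python SDK. The import path and decorator usage reflect the SDK as I recall it, so verify against the SDK documentation; the tool itself is a made-up example.

```python
# Minimal sketch of an MCP server that exposes one tool to an AI assistant.
# Uses FastMCP from the official `mcp` Python SDK (pip install "mcp[cli]").
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")

@mcp.tool()
def get_temperature(city: str) -> str:
    """Return a hard-coded temperature for a city; a real server would call a live API."""
    fake_data = {"Seattle": "12°C", "Hyderabad": "31°C"}
    return fake_data.get(city, "unknown")

if __name__ == "__main__":
    # Runs over stdio so an MCP-capable client can connect to it.
    mcp.run()
```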

How to Use Google Gemini with Semantic Kernel

In the ever-evolving world of artificial intelligence, combining powerful tools can open up new avenues for innovation and efficiency. Today, we're diving into how to use Google Gemini with Semantic Kernel—a match made in AI heaven. Whether you're an AI enthusiast, developer, or data scientist, this guide will walk you through the integration process step-by-step, ensuring you harness the full potential of these technologies. If you're more interested in watching the entire process, then here is the video: What is Google Gemini? Google Gemini is a suite of generative AI models designed to handle multiple types of data, including text, images, and audio. Its multimodal capabilities make it a versatile tool for a wide range of applications, from natural language processing to creative content generation. Introduction to Semantic Kernel Microsoft Semantic Kernel is an open-source development kit designed to help developers integrate AI models into their applications. It s...
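The full Semantic Kernel wiring is covered in the post and video; as a quick standalone sanity check that your Gemini access works before plugging it into the kernel, here is a minimal sketch using the google-generativeai package. The model name is just an example; pick whichever Gemini model your key can access.

```python
# Minimal standalone check of Gemini access before integrating it with Semantic Kernel.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash")  # example model name
response = model.generate_content("Explain Semantic Kernel in one sentence.")
print(response.text)
```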

How To Run Hugging Face Models On Local CPU Using Ollama

Are you fascinated by the capabilities of Hugging Face models but unsure how to run them locally? Look no further! Here, we will explore the simplest and most effective way to get Hugging Face models up and running on your local machine using Ollama. For a complete walkthrough, check out my latest video on "How to Run Hugging Face Models Locally Using Ollama". This video covers everything from installation to running an example, ensuring you have all the information you need to get started: Happy coding!
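For a quick taste of what the video covers, here is a minimal sketch using the `ollama` Python package to chat with a GGUF model referenced straight from Hugging Face. The repository name below is only an example of a public GGUF repo; swap in the one you want, and pull it first so it is available locally.

```python
# Minimal sketch: running a Hugging Face GGUF model locally through Ollama.
# Requires a running Ollama server and the `ollama` Python package.
# Pull the model first, e.g.: ollama pull hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF
from ollama import chat

response = chat(
    model="hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF",  # example hf.co/{user}/{repo} reference
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.message.content)
```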

Generating AI Model Responses in JSON Format Using Ollama and Llama 3.2

In the rapidly evolving field of artificial intelligence, generating accurate and contextually relevant responses is crucial. Ollama, a lightweight and extensible framework, combined with the powerful Llama 3.2 model, provides a robust solution for generating AI model responses in JSON format. This article explores how to leverage these tools to create efficient and effective AI responses. If you are interested in every single detail, here is my video recording: Setting Up Ollama and Llama 3.2 Before diving into the specifics of generating responses, it's essential to set up Ollama and Llama 3.2 on your local machine. Ollama offers a straightforward installation process, and you can download the necessary models from the Ollama library. Import required packages To get started with the code, we first need to import the required packages: from ollama import chat and from pydantic import BaseModel Generating Responses in JSON Format JSON format is a structure...
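Building on those two imports, here is a minimal sketch of the structured-output flow: pass a Pydantic model's JSON schema as the `format` argument and validate the reply back into the model. The schema fields, prompt, and model tag are just examples.

```python
# Minimal sketch: asking Llama 3.2 (via Ollama) to answer in JSON matching a Pydantic model.
from ollama import chat
from pydantic import BaseModel

class Country(BaseModel):
    name: str
    capital: str
    population_millions: float

response = chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Tell me about Canada."}],
    format=Country.model_json_schema(),  # constrain the reply to this JSON schema
)

# The reply content is a JSON string; validate it back into the Pydantic model.
country = Country.model_validate_json(response.message.content)
print(country)
```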

How to Clone Your Voice Using Open-Source

In the age of cutting-edge technology, the ability to clone your voice is no longer a futuristic dream. With advancements in Text-to-Speech (TTS) technology, you can create a digital replica of your voice using open-source tools like SWivid's F5-TTS. Whether you're a tech enthusiast, a content creator, or someone interested in preserving their voice, this guide will walk you through the process step-by-step. If you're interested in watching, then here is the recording: What is SWivid's F5-TTS? SWivid's F5-TTS is an open-source Text-to-Speech system that uses deep learning algorithms to synthesize speech. It leverages a powerful neural network to create highly realistic and natural-sounding voices. The best part? It’s accessible to anyone with a bit of tech know-how and a willingness to experiment. Why Clone Your Voice? Cloning your voice can have numerous applications: Accessibility: Create personalized voice assistants. Content Creation: Enhance your videos, podc...
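The shape of the workflow is: give F5-TTS a short reference clip of your voice plus its transcript, then any new text you want spoken in that voice. Below is a rough sketch that shells out to the inference CLI; the command and flag names are my recollection of the SWivid/F5-TTS README, so confirm them with `f5-tts_infer-cli --help` after installing, and the file names are placeholders.

```python
# Rough sketch: generating speech in your own voice with F5-TTS via its CLI.
# Flag names are assumptions from memory of the README; verify with --help.
import subprocess

subprocess.run(
    [
        "f5-tts_infer-cli",
        "--model", "F5TTS_v1_Base",            # assumed model identifier
        "--ref_audio", "my_voice_sample.wav",  # a few seconds of your own voice
        "--ref_text", "Transcript of the reference clip.",
        "--gen_text", "Hello, this is my cloned voice speaking.",
    ],
    check=True,
)
```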

Run Your OpenAI SWARM Agents Locally With Open Source Model - 100% 🆓

In this article, we will see how to run our agents locally: we will use the OpenAI Swarm framework, but we will not pay anything to OpenAI because we will not be using OpenAI's API key. Using OpenAI's Swarm without using OpenAI's key, confused? Well, we will be achieving this using Ollama :) Now, before we proceed, if you have not watched my earlier video on what OpenAI-Swarm is and how to get started with it, I would recommend you check this one here: Here is the link to the GitHub repository containing Swarm's source code and implementation details. What Are We Trying To Do? We will create our own agent using the OpenAI-Swarm framework with Ollama and the open-source model Llama3.2:1b. This agent will run locally on our machine without the need for any API key from OpenAI. Setting Things Up Install Swarm We need to install Swarm from GitHub as it is still in the experimental stage, which can be done by running the command below: pip...
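The core trick, once Swarm and Ollama are installed, is to point the OpenAI client at Ollama's OpenAI-compatible endpoint and hand that client to Swarm. A minimal sketch follows; the agent name, instructions, and prompt are just examples, and the api_key value is a dummy string the client requires.

```python
# Minimal sketch: running an OpenAI Swarm agent against a local Ollama model, no OpenAI key.
# Assumes Swarm is installed from its GitHub repo, Ollama is running locally, and the model
# has been pulled beforehand with: ollama pull llama3.2:1b
from openai import OpenAI
from swarm import Swarm, Agent

ollama_client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

client = Swarm(client=ollama_client)

agent = Agent(
    name="LocalHelper",
    model="llama3.2:1b",
    instructions="You are a concise, helpful assistant.",
)

response = client.run(
    agent=agent,
    messages=[{"role": "user", "content": "What is OpenAI Swarm in one line?"}],
)
print(response.messages[-1]["content"])
```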

How To Generate SQL Queries Without Giving DB Schema In Prompt - Azure OpenAI

In the ever-evolving landscape of data management, the ability to generate SQL queries without a predefined schema can significantly enhance flexibility and efficiency. Leveraging the power of GPT-3.5 Turbo, we can achieve this by fine-tuning the model to understand and generate schema-free SQL queries. Why Schema-Free SQL Queries? Traditional SQL queries rely on a predefined schema, which can be limiting when there are a huge number of tables or columns. Here are a few benefits of using a schema-free approach: Efficiency: By not requiring schema definitions in every prompt, you save on prompt size and reduce the complexity of your queries, making the process faster and more efficient. User-Friendly: It simplifies the querying process for users who may not be familiar with the database schema, enabling them to retrieve data using natural language descriptions. Adaptability: GPT models can adapt to various database structures and types, making them versatile tools for querying differen...
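Once the fine-tuned model is deployed, querying it is an ordinary chat completion call with no schema in the prompt. Here is a minimal sketch against Azure OpenAI; the endpoint, API version, deployment name, and prompts are all placeholders for your own fine-tuned GPT-3.5 Turbo deployment.

```python
# Minimal sketch: asking a fine-tuned GPT-3.5 Turbo deployment on Azure OpenAI to write SQL
# from natural language, without pasting the schema into the prompt.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # assumed; use your resource's supported version
)

response = client.chat.completions.create(
    model="gpt-35-turbo-sql-finetuned",  # hypothetical fine-tuned deployment name
    messages=[
        {"role": "system", "content": "You translate natural language into T-SQL."},
        {"role": "user", "content": "List the top 5 customers by total order value."},
    ],
)
print(response.choices[0].message.content)
```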

How To Query Azure SQL DB Using FAISS and Open-Source AI Model

In today’s data-driven era, where we deal with massive amounts of data from different sources and in different formats, efficient search and information retrieval become a very big challenge. Diving into this deep ocean and locating the correct data is not at all an easy task. In this article, I’ll show you how you can integrate Azure SQL Database, FAISS, and advanced models from Hugging Face to enhance your search capabilities. Let’s take a quick look at each of these pillars. Understanding Terminology Azure SQL Database Azure SQL Database is a cloud-based service that provides a fully managed database system. The high availability, scalability, and security of this service make it a great option for handling large amounts of data. FAISS FAISS is short for Facebook AI Similarity Search. It is a library written by Facebook AI Research that enables efficient similarity search and clustering of dense vectors. It suppor...
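To show the retrieval piece of that pipeline, here is a minimal sketch that embeds a few text rows with a Hugging Face sentence-transformers model and searches them with FAISS. The rows are hard-coded here for brevity; in the article they would come from an Azure SQL Database query, and the model name and sample data are just examples.

```python
# Minimal sketch: embed text rows with a Hugging Face model and search them with FAISS.
import faiss
from sentence_transformers import SentenceTransformer

rows = [
    "Contoso ordered 500 units of widgets in March.",
    "Fabrikam raised a support ticket about delayed shipping.",
    "Northwind renewed their annual subscription.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")      # example embedding model
vectors = model.encode(rows).astype("float32")       # shape: (num_rows, dim)

index = faiss.IndexFlatL2(vectors.shape[1])          # exact L2 search over dense vectors
index.add(vectors)

query = model.encode(["Which customer had a shipping problem?"]).astype("float32")
distances, ids = index.search(query, 2)              # top-2 nearest rows

for i in ids[0]:
    print(rows[i])
```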