
Posts

Showing posts with the label LLM

Learn to Call Your Custom Function Using Semantic Kernel

In this tutorial, we will dive into the fascinating world of Semantic Kernel and explore how to seamlessly integrate our custom functions into AI applications. Whether you're a developer, a data scientist, or just curious about the intersection of language models and native code, this tutorial is for you!

📌 Key Points Covered:

Why Create Functions for Your AI? Large language models excel at generating text, but there are tasks they can't handle alone. I'll discuss scenarios where native functions shine, such as retrieving data from external sources, performing complex math, and interacting with the real world.

Creating Your Native Functions:
- Set up a new folder for your plugins
- Create your own plugin to demonstrate the required operations
- Implement the functions as required

Running Your Native Functions: I'll also highlight how Semantic Kernel uses KernelFunctions to invoke our custom functions.

🎥 Watch the Full Tutorial: Shweta Lodha
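As a rough illustration of the idea, here is a minimal sketch using the Semantic Kernel Python SDK. The plugin name, function names, and argument values are my own placeholders, and the decorator and invocation APIs vary between SDK versions, so treat this as an outline rather than the tutorial's exact code.

```python
# A minimal sketch of a native plugin with Semantic Kernel (Python SDK).
# Assumes semantic-kernel 1.x; decorator and method names differ in older versions.
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.functions import KernelArguments, kernel_function


class MathPlugin:
    """Hypothetical plugin exposing native functions the LLM cannot do reliably on its own."""

    @kernel_function(name="add", description="Add two numbers.")
    def add(self, a: float, b: float) -> float:
        return a + b

    @kernel_function(name="compound_interest", description="Compute compound interest.")
    def compound_interest(self, principal: float, rate: float, years: int) -> float:
        return principal * (1 + rate) ** years


async def main() -> None:
    kernel = Kernel()
    # Register the plugin so its methods become KernelFunctions the kernel can invoke.
    plugin = kernel.add_plugin(MathPlugin(), plugin_name="MathPlugin")

    # Invoke a native function directly through the kernel.
    result = await kernel.invoke(
        plugin["compound_interest"],
        KernelArguments(principal=1000.0, rate=0.05, years=3),
    )
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```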

Tips To Improve LLM-Based Applications

Large Language Models (LLMs) are powerful AI systems that can understand and generate natural language. They have many applications across domains such as natural language processing, machine translation, and healthcare. However, building LLM-based applications is not a trivial task. It requires careful consideration of several factors, such as the choice of LLM, data quality, evaluation metrics, and ethical implications. In this blog post, I will share some tips to solve the most common problems.

How to extract correct content from LLM
The problem: although the answer is present in the content, the model fails to extract it. Here are some quick tips to resolve this:
- Prompt compression
- Remove irrelevant data
- Rectify typos and grammatical errors
- Remove duplicate data
- Use data cleaning libraries

Problem of missing top-ranked documents
The problem: the correct document was not ranked highly when ranking the documents. Here are a few suggestions which can…
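To make the data-cleaning tips a bit more concrete, here is a minimal sketch in plain Python that removes duplicates, collapses whitespace, and drops obviously irrelevant fragments before the context is packed into a prompt. The function names and the simple heuristics are mine, not from the original post.

```python
# A minimal sketch: clean retrieved context before sending it to an LLM.
# The helper names and the trivial filtering rules are illustrative only.
import re


def clean_chunks(chunks: list[str]) -> list[str]:
    """Normalize whitespace, drop near-empty fragments, and remove exact duplicates."""
    seen: set[str] = set()
    cleaned: list[str] = []
    for chunk in chunks:
        text = re.sub(r"\s+", " ", chunk).strip()   # collapse whitespace
        if len(text) < 20:                          # drop fragments unlikely to carry an answer
            continue
        key = text.lower()
        if key in seen:                             # remove duplicate data
            continue
        seen.add(key)
        cleaned.append(text)
    return cleaned


def build_prompt(question: str, chunks: list[str]) -> str:
    """Compress the prompt by keeping only cleaned, unique context."""
    context = "\n\n".join(clean_chunks(chunks))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"


if __name__ == "__main__":
    docs = ["  The refund window is 30 days. ", "The refund window is 30 days.", "ok"]
    print(build_prompt("How long is the refund window?", docs))
```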

How To Hide Sensitive Data Before Passing To LLM-OpenAI

In my previous article, "Passing An Audio File To LLM", I explained how one can pass an audio file to an LLM. Continuing from that, I'm extending the idea to address the use case of sensitive information. Let's say an audio file contains your bank account number, your secure ID, your PIN, your passcode, your date of birth, or any other information that has to be kept secure. You will come across this kind of information when dealing with customer-facing audio calls, especially in the finance sector. These details, also known as PII (Personally Identifiable Information), are very sensitive, and it is not at all safe to keep them openly on any server. Hence, one should be very careful while dealing with such data. When it comes to using PII in a generative AI based application, we need a way to remove such information from the data before passing it to the LLM, and that's what this article is all about…
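The full article walks through its own approach; as a loose illustration of the general idea, here is a small regex-based redaction sketch in Python that masks obvious PII patterns before the text ever reaches the LLM. The patterns and placeholder tokens are my own simplifications, not the article's method, and a real system would use a proper PII-detection service.

```python
# An illustrative (not production-grade) PII masking step applied to a transcript
# before it is sent to an LLM. The patterns below are deliberately simple examples.
import re

PII_PATTERNS = {
    "ACCOUNT_NUMBER": re.compile(r"\b\d{9,18}\b"),          # long digit runs, e.g. bank accounts
    "PIN": re.compile(r"\b(?:pin|passcode)\s*(?:is)?\s*\d{4,6}\b", re.IGNORECASE),
    "DATE_OF_BIRTH": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
}


def redact_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


if __name__ == "__main__":
    transcript = "My pin is 4321 and my account number is 123456789012, DOB 12/05/1990."
    print(redact_pii(transcript))
    # The redacted string, not the raw transcript, is what gets passed to the LLM.
```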

Get Answers From Audio Without Listening

In this article, I'll explain how we can pass an audio file to an LLM, taking OpenAI as our LLM. Many people, podcast lovers included, prefer audio and video tutorials over reading, as listening seems to be more effective for them than reading a book, an e-book, or an article. It is also quite common that, after a certain period of time, we forget portions of a tutorial. To get those insights back, re-watching or re-listening is the only option, which can be very time-consuming. So, the best solution is to build a small AI-based application, with just a few lines of code, that can analyze the audio and answer any questions the user asks. Utilizing generative AI is the obvious choice here, but the problem is that we can't pass audio to it directly, as the model is text-based. Let's dive into this article to understand how we can make this work in a step-by-step fashion…
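As a rough sketch of the overall flow (not necessarily the exact code from the article), the steps below transcribe the audio first and then ask a question over the transcript. The file name, model choices, and prompt wording are placeholders of mine, and the snippet assumes the current openai Python SDK with an API key in the environment.

```python
# A minimal sketch: transcribe an audio file, then ask the LLM questions about it.
# Model names, file path, and prompt text are placeholders; requires OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Step 1: convert audio to text, since the chat model only accepts text.
with open("tutorial.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# Step 2: answer the user's question using the transcript as context.
question = "What are the key takeaways from this tutorial?"
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer questions using only the provided transcript."},
        {"role": "user", "content": f"Transcript:\n{transcript.text}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```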