
Posts

Showing posts with the label OpenAI

Generate The Streamed Response Using Semantic Kernel

Semantic Kernel, a powerful tool for integrating large language models into your applications, now supports streaming responses. In this blog post, we'll explore how to leverage this feature to obtain streamed results from LLMs such as Azure OpenAI and OpenAI.

Why Streamed Responses Matter

When working with language models, especially in conversational scenarios, streaming responses offer several advantages:

- Real-time interaction: Streaming lets you receive partial responses as they become available, enabling more interactive and dynamic conversations.
- Reduced latency: Instead of waiting for the entire response, you can start processing and displaying content incrementally, reducing overall latency.
- Efficient resource usage: Streaming conserves memory and resources by handling data in smaller chunks.

How to Use Streaming Responses

I've published a complete video on how to do this using Python, which can be found on my YouTube channel, Shweta Lodha. Here, I'm just…
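The consumption pattern described above can be sketched with a toy async generator. This is not the Semantic Kernel API itself, just a minimal illustration of how a client receives and displays chunks as they arrive; `fake_stream` stands in for the real streaming call.

```python
import asyncio

async def fake_stream(text, chunk_size=8):
    # Simulates a streaming LLM endpoint by yielding small chunks
    # (stands in for a Semantic Kernel / OpenAI streaming call).
    for i in range(0, len(text), chunk_size):
        await asyncio.sleep(0)  # yield control, as a real network call would
        yield text[i:i + chunk_size]

async def consume_stream(stream):
    # Display each chunk as it arrives and accumulate the full reply.
    parts = []
    async for chunk in stream:
        print(chunk, end="", flush=True)  # incremental display = lower perceived latency
        parts.append(chunk)
    return "".join(parts)

reply = asyncio.run(consume_stream(fake_stream("Hello from a streamed LLM response!")))
```

The key point is that the caller can act on each chunk immediately instead of blocking until the whole response is ready.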

How To Hide Sensitive Data Before Passing To LLM-OpenAI

In my previous article, "Passing An Audio File To LLM", I explained how to pass an audio file to an LLM. Continuing from that, I'm extending the topic by addressing the use case of sensitive information. Let's say an audio file contains your bank account number, your secure ID, your PIN, your passcode, your date of birth, or any other information that has to be kept secure. You will find this kind of information when dealing with customer-facing audio calls, especially in the finance sector. These details, also known as PII (Personally Identifiable Information), are very sensitive, and it is not at all safe to keep them in the open on any server, so one should be very careful when dealing with such data. Now, when it comes to using PII in a generative-AI-based application, we need a way to remove such information from the data before passing it to the LLM, and that's what this article is all about. In…
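A minimal sketch of the redact-before-sending idea. The regex patterns and function name below are my own illustration; production systems typically use a dedicated PII-detection service rather than hand-written patterns.

```python
import re

# Illustrative patterns only -- real PII detection usually relies on a
# dedicated service, not hand-written regexes.
PII_PATTERNS = {
    "ACCOUNT_NUMBER": re.compile(r"\b\d{10,16}\b"),
    "DATE_OF_BIRTH":  re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
    "PIN":            re.compile(r"(?i)\bpin\s*(?:is|:)?\s*\d{4,6}\b"),
}

def redact_pii(text: str) -> str:
    # Replace each match with a placeholder before the text reaches the LLM.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

transcript = "My account number is 1234567890 and my PIN is 4321, DOB 01/02/1990."
safe = redact_pii(transcript)
# `safe` can now be sent to the LLM instead of the raw transcript.
```

The placeholders preserve enough structure for the LLM to reason about the call while the raw values never leave your side.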

Get Answers From Audio Without Listening

In this article, I'll explain how we can pass an audio file to an LLM, taking OpenAI as our LLM. Many people, along with podcast lovers, prefer audio and video tutorials over reading, as listening seems more effective for them than reading a book, an e-book, or an article. It is also quite common that, after a certain period of time, we forget some portions of a tutorial. To get those insights back, re-watching or re-listening is the only option, which can be very time-consuming. The best solution is to build a small AI-based application, with just a few lines of code, that can analyze the audio and respond to all the questions asked by the user. Utilizing generative AI could be the best option here, but the problem is that we can't pass audio directly, as generative AI is text-based. Let's dive into this article to understand, step by step, how we can make this work. If this is what that int…
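The pipeline has two stages: turn the audio into text, then ask questions against that text. The sketch below shows the shape of it; `transcribe` is a placeholder (the real app would call a speech-to-text model such as OpenAI's Whisper), and the file name and helper names are hypothetical.

```python
def transcribe(audio_path: str) -> str:
    # Placeholder: in the real app this calls a speech-to-text model
    # (e.g. OpenAI's Whisper) to turn the audio file into text.
    # Canned text here so the flow can be followed end to end.
    return "The quarterly revenue grew by eight percent."

def build_prompt(transcript: str, question: str) -> str:
    # The transcript becomes context; the user's question is appended.
    return (
        "Answer the question using only the transcript below.\n\n"
        f"Transcript:\n{transcript}\n\n"
        f"Question: {question}\nAnswer:"
    )

transcript = transcribe("lecture.mp3")  # hypothetical file name
prompt = build_prompt(transcript, "How much did revenue grow?")
# `prompt` would now be sent to a text-based LLM such as an OpenAI chat model.
```

Once the audio is text, every question becomes an ordinary LLM call, so no re-listening is ever needed.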

Integrating ChatGPT With Google Docs

In this article, I'll explain how you can integrate ChatGPT into Google Docs and utilize the capabilities of any text-based OpenAI model of your choice. If you are not aware of what Google Docs is: it is an offering by Google wherein you can create and collaborate on documents online.

Setting Up Google Docs

Go to https://docs.google.com/, log in with your Gmail ID, and you are all set. If you have already created a document, you can open it; otherwise, go ahead and open a new one. In the new document, click the Extensions menu and select Apps Script. Clicking Apps Script will open an editor where we will write our code. If you're looking for the complete source code and explanation, feel free to check out my article on Medium. Alternatively, you can watch the video here.

How To Search Content Which ChatGPT Can’t Find Today — OpenAI | Python

In this article, I'll show you how to get your hands dirty with Langchain agents. If you are not aware of what Langchain is, I'd recommend watching my recording here, where I briefly introduce it. Langchain agents use an LLM to determine what needs to be done and in which order. You can have a quick look at the available agents in the documentation, but I'll list them here too:

- zero-shot-react-description
- react-docstore
- self-ask-with-search
- conversational-react-description

Agents work via tools, and tools are nothing but functions that agents use to interact with the outside world. The tools available today include:

- python_repl
- serpapi
- wolfram-alpha
- requests
- terminal
- pal-math
- pal-colored-objects
- llm-math
- open-meteo-api
- news-api
- tmdb-api
- google-search
- searx-search
- google-serper, etc.

In this article, I'm covering serpapi.

Import Required Packages

To get started, we need to import the packages below: from langchain…
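The agent/tool relationship described above can be illustrated with a toy loop. This is deliberately not the Langchain API: the routing below is a crude keyword check, whereas a real agent asks the LLM which tool to use and why. The tool functions stand in for serpapi and llm-math.

```python
# A toy illustration of the agent/tool idea (not the Langchain API):
# the "agent" picks a tool based on the question, calls it, and
# returns the observation as its answer.

def search_tool(query: str) -> str:
    # Stands in for a web-search tool such as serpapi.
    return f"search results for: {query}"

def math_tool(expr: str) -> str:
    # Stands in for llm-math; only handles simple arithmetic here.
    return str(eval(expr, {"__builtins__": {}}, {}))

TOOLS = {"search": search_tool, "math": math_tool}

def run_agent(question: str) -> str:
    # Crude routing: a real agent delegates this decision to the LLM.
    tool = "math" if any(ch in question for ch in "+-*/") else "search"
    observation = TOOLS[tool](question)
    return f"[{tool}] {observation}"

print(run_agent("2+3"))             # routed to the math tool
print(run_agent("latest AI news"))  # routed to the search tool
```

The point is the loop itself: decide which tool to use, call it, observe the result; Langchain's agents run this same cycle with an LLM doing the deciding.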

Tips To Get Started With Azure OpenAI

If you want to explore Azure OpenAI but are not sure how to get started, you are at the right place. I've created a video that explains everything about getting your journey started. Have a look:

Use Your Own Data To Get Response From GPT like ChatGPT | Python

In this article, I'll show you how you can use your locally stored text files to get responses using GPT-3. You can ask questions and get responses, just like ChatGPT. On the technology front, we will be using:

- OpenAI
- Langchain
- Python

Input Files

You can take a bunch of text files and store them in a directory on your local machine. I've grabbed input data from here and created 5 text files. My files are all about "Cause And Effect Of Homelessness" and are placed in a directory named Store.

Import Required Packages

As we are using Python, let's go ahead and import the required packages. If you do not have the above packages installed on your machine, please install them before importing. Once the required packages are imported, we need to get an OpenAI API key.

Get OpenAI API Key

To get the OpenAI key, go to https://openai.com/, log in, and grab the key as highlighted. Once you have the key, set it in an environment variable (I'm using Windows). Load…
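The file-loading step can be sketched with the standard library alone; the directory name "Store" comes from the article, while the function name is my own. The API key is read from an environment variable as described above.

```python
import os
from pathlib import Path

def load_text_files(directory: str) -> dict:
    # Read every .txt file in the directory into a {filename: content} map.
    docs = {}
    for path in sorted(Path(directory).glob("*.txt")):
        docs[path.name] = path.read_text(encoding="utf-8")
    return docs

# The article keeps its five files in a directory named "Store" and
# reads the API key from the environment variable set earlier:
# docs = load_text_files("Store")
# api_key = os.environ["OPENAI_API_KEY"]
```

From here, the loaded texts are handed to Langchain, which chunks and indexes them so GPT-3 can answer questions against your own data.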

Create Chatbot Based Using GPT-Index/LlamaIndex | OpenAI | Python

In this article, I'll show you how to create a basic chatbot that utilizes the data provided by you. Here we will be using GPT-Index/LlamaIndex, OpenAI, and Python. Let's get started by installing the required Python modules.

Install modules/packages

We need to install two packages, named llama-index and langchain, which can be done using the lines below:

pip install llama-index
pip install langchain

Importing packages

Next, we need to import those packages so that we can use them:

from llama_index import SimpleDirectoryReader, GPTListIndex, GPTVectorStoreIndex, LLMPredictor, PromptHelper, ServiceContext, StorageContext, load_index_from_storage
from langchain import OpenAI
import sys
import os

Please note that we don't need a GPU here, because we are not doing anything locally; all we are doing is using OpenAI's servers.

Grab OpenAI Key

To grab the OpenAI key, go to https://openai.com/, log in, and grab the key as highlighted. Once you have the key…