📝 Table of Contents
1. Introduction
2. What are Embeddings?
3. How do Embeddings Work?
4. Vector Databases
5. Searching with Vector Databases
6. Creating Embeddings with OpenAI
7. Storing Embeddings with SingleStore
8. Creating a Function with JavaScript
9. Conclusion
10. Resources
Introduction
If you're building any type of AI product, embeddings and vector databases are essential. In this article, we'll cover what they are, how they work, and how to use them with OpenAI's APIs. We'll cover this in three parts: theory, use, and integration. After reading this article, you'll be able to create long-term memory for a chatbot or perform semantic searches across a huge database of PDFs connected directly to an AI.
What are Embeddings?
To put it simply, an embedding is a piece of data, such as a word, that has been converted into an array of numbers known as a vector. The numbers that make up the vector act as coordinates on a multi-dimensional map, which lets us measure similarity. For example, the words "dog" and "puppy" are often used in similar contexts, so in a word embedding they would be represented by vectors that are close together. A two-dimensional example is easy to picture, but in reality a vector has hundreds of dimensions that capture the rich, complex relationships between words. Images can also be turned into vectors; this is how Google performs similar-image searches. Sections of an image are broken down into arrays of numbers, allowing images with closely resembling vectors to be matched.
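To make the "dog"/"puppy" intuition concrete, here is a small sketch using made-up two-dimensional vectors (real embeddings have hundreds of dimensions) and cosine similarity, a common way to measure how close two vectors are:

```javascript
// Toy 2D vectors. The values are invented purely for illustration;
// real embeddings come from a model and have hundreds of dimensions.
const vectors = {
  dog:   [0.9, 0.8],
  puppy: [0.85, 0.75],
  car:   [0.1, 0.9],
};

// Cosine similarity: 1 means the vectors point the same way, 0 means unrelated.
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

console.log(cosineSimilarity(vectors.dog, vectors.puppy)); // very close to 1
console.log(cosineSimilarity(vectors.dog, vectors.car));   // noticeably lower
```

"dog" and "puppy" score almost 1 because their vectors point in nearly the same direction, while "dog" and "car" score lower.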
How do Embeddings Work?
Once an embedding is created, it can be stored in a database. A database full of these embeddings is considered a vector database and can be used in several ways, including searching, clustering, recommendations, and classification. For the purposes of this article, we'll focus on searching, since it's the most common use case. There are many ways to create embeddings, but OpenAI provides an AI model built specifically for this. It does not, however, provide a way to store them, so later in the article we'll use a cloud database for that.
Vector Databases
A database full of embeddings is often referred to as a vector database. Vector databases can be used in several ways, including searching, clustering, recommendations, and classification. Searching is the most common: you identify what you want to search for, create an embedding for your search term, and then compare it against the existing embeddings in the database. This returns a list of results ranked by similarity, with the closest match at the top.
Searching with Vector Databases
Searching with vector databases is actually quite simple. The first step is to identify what you want to search for; for example, anything related to OpenAI. Next, we create an embedding for our search term, in this case the word "OpenAI." Finally, we search the database against the existing embeddings, which returns a list ranked by similarity, with the closest match at the top.
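The steps above can be sketched in plain JavaScript. In a real system the query vector would come from an embedding model and the database would be a proper vector store, but this in-memory version (with invented vectors) shows the ranking logic:

```javascript
// A tiny in-memory "vector database": each record pairs text with its vector.
// The vectors are made up for illustration; real ones come from an embedding model.
const records = [
  { text: "OpenAI released a new API", vector: [0.9, 0.1, 0.2] },
  { text: "How to bake sourdough",     vector: [0.1, 0.9, 0.3] },
  { text: "GPT models and embeddings", vector: [0.8, 0.2, 0.25] },
];

function cosineSimilarity(a, b) {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const mag = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (mag(a) * mag(b));
}

// Score every record against the query vector and sort, closest match first.
function search(queryVector, db) {
  return db
    .map((r) => ({ ...r, score: cosineSimilarity(queryVector, r.vector) }))
    .sort((a, b) => b.score - a.score);
}

const queryVector = [0.85, 0.15, 0.2]; // pretend this is the embedding of "OpenAI"
console.log(search(queryVector, records).map((r) => r.text));
```

The two OpenAI-related records come back ahead of the baking one, because their vectors sit closer to the query vector.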
Creating Embeddings with OpenAI
OpenAI provides a great AI model built specifically for creating embeddings. It does not, however, provide a way to store them, which is why we'll use a cloud database later in the article. To create an embedding, we first need access to OpenAI. Head over to the OpenAI website, where you can create a new account or log into an existing one; signing up is free, and you can log in with your Google credentials. Once logged in, you'll see a few options, including ChatGPT, DALL·E, and the APIs. Head over to the API page and start by looking at the documentation on embeddings, which is also linked in the Resources section at the end of this article.
Storing Embeddings with SingleStore
OpenAI doesn't provide databases, so we'll need to create our own. A database full of embeddings is often referred to as a vector database. We'll use a provider called SingleStore, which offers a real-time, unified, distributed SQL database that is easy to use since it runs in the cloud. On top of that, it lets you store and query vector data directly. We'll set up a database, start storing our embeddings, and then search through them.
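As a rough sketch of what this looks like, the SQL below stores each vector in a BLOB column and ranks rows with SingleStore's JSON_ARRAY_PACK and DOT_PRODUCT vector functions. The table and column names are our own choices for illustration; check SingleStore's documentation for the current syntax. Since SingleStore is MySQL wire-compatible, these statements can be run from Node.js with a client such as mysql2:

```javascript
// Sketch of the SQL we'd run against SingleStore. Table and column names are
// illustrative; JSON_ARRAY_PACK and DOT_PRODUCT are SingleStore vector functions.
const createTableSql = `
  CREATE TABLE embeddings (
    id INT AUTO_INCREMENT PRIMARY KEY,
    text TEXT,
    vector BLOB
  )`;

// Store a vector: JSON_ARRAY_PACK turns a JSON array of floats into a packed BLOB.
const insertSql = `
  INSERT INTO embeddings (text, vector)
  VALUES (?, JSON_ARRAY_PACK(?))`;

// Search: score every row against the query vector, highest dot product first.
const searchSql = `
  SELECT text, DOT_PRODUCT(vector, JSON_ARRAY_PACK(?)) AS score
  FROM embeddings
  ORDER BY score DESC
  LIMIT 5`;

console.log(createTableSql, insertSql, searchSql);
```

The `?` placeholders would be filled in by the database client with the text and the JSON-encoded embedding array.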
Creating a Function with JavaScript
In this part of the article, we'll create an actual function using JavaScript on Node.js to interact with embeddings. We'll create a function called createEmbedding, which takes one parameter: the text we want to embed. The function makes a fetch request to the OpenAI API, and once we get a response, we log it to the console and return it.
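A minimal sketch of that function is below, assuming Node.js 18+ (which has fetch built in) and an OPENAI_API_KEY environment variable. The model name is the OpenAI embedding model current at the time of writing; check their documentation for the latest:

```javascript
// Builds the request body; separated out so it can be inspected without a network call.
function buildEmbeddingBody(text) {
  return JSON.stringify({ model: "text-embedding-ada-002", input: text });
}

// Sends the text to OpenAI's embeddings endpoint and returns the parsed response.
async function createEmbedding(text) {
  const response = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: buildEmbeddingBody(text),
  });

  const json = await response.json();
  console.log(json); // the embedding itself is in json.data[0].embedding
  return json;
}
```

Calling `createEmbedding("hello world")` returns a response object whose `data[0].embedding` field is the vector, ready to be inserted into the database.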
Conclusion
Embeddings and vector databases are essential if you're building any type of AI product. In this article, we covered what they are, how they work, and how to use them with OpenAI's APIs. We covered this in three parts: theory, use, and integration. You should now be able to create long-term memory for a chatbot or perform semantic searches across a huge database of PDFs connected directly to an AI.
Resources
- OpenAI API: https://beta.openai.com/docs/api-reference/introduction
- SingleStore: https://www.singlestore.com/
- Teach Me OpenAI and GPT: https://gumroad.com/l/teach-me-openai-and-gpt