Learn how to use Peaka’s RAG Capabilities with a sample project.
Technology | Description |
---|---|
[Peaka](https://www.peaka.com/) | A zero-ETL data integration platform with single-step context generation capability |
[Pinecone](https://www.pinecone.io/) | The serverless vector database that will be used for storing vector embeddings |
[OpenAI](https://openai.com/) | An artificial intelligence research lab focused on developing advanced AI technologies |
[Vercel AI SDK](https://vercel.com/templates/ai) | A library for building AI-powered streaming text and chat UIs |
[NLUX](https://docs.nlkit.com/nlux/) | An open-source JavaScript library for creating elegant and performant conversational user interfaces |
[Next.js](https://nextjs.org/) | The React framework for the web; Next.js will be used for building the chatbot app |
In Peaka, click the **Connect sample data sets** button on the screen, as shown in the image below:
Navigate into the project directory with the `cd` command and install the necessary libraries with `npm install`. Then `npm run dev` will be sufficient to run the project on `localhost:3000`. If you need further clarification, you can refer to the README or the Next.js documentation.
Create a `.env` file in your project and add it to the `.gitignore` file if you are considering adding this project to your GitHub account. In this file, we will store our API keys. You need to create a Pinecone project and API key. After creating your Pinecone project, create an index with dimension `1536` and metric `cosine`, then use the name of the index in the environment variable. Finally, you need to create an OpenAI API key. After completing the necessary actions, copy these values into the environment variables accordingly.
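A minimal sketch of the `.env` file follows. The variable names here are assumptions (match them to whatever names your code reads); the Peaka key is included because the service sketch later in this guide assumes it:

```
# Illustrative variable names — adjust to your own code
PEAKA_API_KEY=<your-peaka-api-key>
PINECONE_API_KEY=<your-pinecone-api-key>
PINECONE_INDEX=<your-pinecone-index-name>
OPENAI_API_KEY=<your-openai-api-key>
```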
Create a `service` folder under the root folder of your project and create a `peaka.service.ts` file inside this folder. We will need to implement two methods in this service class. The first method is `getAllSpacexLaunches`, which will fetch all the launches from the SpaceX API. The SQL query is trivial, like the sketch below:
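Assuming the SpaceX sample data set is exposed under a `spacex` catalog (the exact catalog, schema, and table names depend on your Peaka project), a minimal version is:

```sql
SELECT * FROM "spacex"."public"."launches"
```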
The second method is the `vectorSearch` method, which will query the Pinecone index and join the results with the SpaceX API results to get all of the metadata of the launches. The query will look like the sketch below:
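This is a sketch rather than the definitive query: the Pinecone catalog and index table names are assumptions, and the exact way the query vector and similarity ranking are expressed depends on how the Pinecone connector is configured in your Peaka project:

```sql
-- Join the closest Pinecone matches with the SpaceX launches on the launch id.
-- The query vector is bound by the service method (see below).
SELECT l.*
FROM "pinecone"."public"."spacex-index" AS p
JOIN "spacex"."public"."launches" AS l
  ON l.id = p.id
LIMIT 5
```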
After implementing both methods, `peaka.service.ts` will look like this:
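Here is a minimal sketch; the query endpoint URL, the authorization header, and the response shape are assumptions rather than Peaka's documented API, so check the Peaka documentation for your project's actual Query API:

```typescript
// service/peaka.service.ts — a sketch, not Peaka's documented API.
// The endpoint below is a hypothetical placeholder.
const PEAKA_QUERY_URL = "https://example.invalid/peaka/query";

async function executeQuery<T = unknown>(query: string): Promise<T[]> {
  const res = await fetch(PEAKA_QUERY_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.PEAKA_API_KEY}`,
    },
    body: JSON.stringify({ query }),
  });
  if (!res.ok) throw new Error(`Peaka query failed with status ${res.status}`);
  return res.json();
}

export class PeakaService {
  // Fetches every launch exposed by Peaka's SpaceX sample connector.
  static getAllSpacexLaunches(): Promise<unknown[]> {
    return executeQuery(`SELECT * FROM "spacex"."public"."launches"`);
  }

  // Single-step context generation: query the Pinecone index and join the
  // matches with the SpaceX launches in one statement. How the query vector
  // is passed to the connector is an assumption (illustrative syntax only).
  static vectorSearch(vector: number[]): Promise<unknown[]> {
    return executeQuery(`
      SELECT l.*
      FROM "pinecone"."public"."spacex-index" AS p
      JOIN "spacex"."public"."launches" AS l ON l.id = p.id
      WHERE p.vector = ARRAY[${vector.join(",")}]
      LIMIT 5
    `);
  }
}
```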
Next, create a `route.ts` file under the `app/api/populate-data` folder. We will follow the steps below (a sketch of the route follows the list):

1. Fetch all SpaceX launches through the Peaka service.
2. Load the related web content with the `RecursiveUrlLoader` of the LangChain library.
3. Generate vector embeddings for the loaded content with OpenAI.
4. Upsert the embeddings into the Pinecone index, keyed by launch id so they can be joined back to the launch metadata.
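A minimal sketch of this route, under these assumptions: the launch rows carry a `wikipedia` link field (hypothetical), and each page is embedded as a single chunk, whereas a real implementation would split long pages with a text splitter:

```typescript
// app/api/populate-data/route.ts — a minimal sketch of the ingestion route.
import { NextResponse } from "next/server";
import { RecursiveUrlLoader } from "@langchain/community/document_loaders/web/recursive_url";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Pinecone } from "@pinecone-database/pinecone";
import { PeakaService } from "@/service/peaka.service";

export async function POST() {
  const launches = (await PeakaService.getAllSpacexLaunches()) as any[];

  const embeddings = new OpenAIEmbeddings(); // 1536 dims, matching the index
  const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
  const index = pinecone.index(process.env.PINECONE_INDEX!);

  for (const launch of launches) {
    if (!launch.wikipedia) continue; // hypothetical link column

    // Crawl the linked page (and nothing deeper) with RecursiveUrlLoader.
    const loader = new RecursiveUrlLoader(launch.wikipedia, { maxDepth: 0 });
    const docs = await loader.load();
    const text = docs.map((d) => d.pageContent).join("\n");

    // Embed the page and upsert it keyed by launch id, so vectorSearch can
    // join the match back to the full launch metadata.
    const [vector] = await embeddings.embedDocuments([text]);
    await index.upsert([{ id: String(launch.id), values: vector }]);
  }

  return NextResponse.json({ status: "ok", launches: launches.length });
}
```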
Create a `config` folder in the root directory of the project and create a `config.ts` file under this folder. We will define our system prompt and the OpenAI parameters for our chatbot here with lodash templates and export them, as sketched below:
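The prompt wording and parameter values in this sketch are assumptions for the demo:

```typescript
// config/config.ts — a minimal sketch of the chatbot configuration.
import { template } from "lodash";

// Lodash template: `context` and `question` are interpolated per request.
export const SPACEX_CHATBOT_INSTRUCTION = template(
  `You are a helpful assistant answering questions about SpaceX launches.
Answer using only the context below. If the answer is not in the context, say you don't know.

Context:
<%= context %>

Question:
<%= question %>`
);

// Parameters for the OpenAI chat model (the model choice is an assumption).
export const OPENAI_PARAMS = {
  modelName: "gpt-3.5-turbo",
  temperature: 0,
  streaming: true,
};
```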
With the `SPACEX_CHATBOT_INSTRUCTION` template, we will feed the LLM the Pinecone results as context together with the user query, and we will expect the LLM to answer from the given context.
Next, create a `route.ts` file under `app/api/chat`. This file will expose a POST endpoint at the `/api/chat` URL. We will use Vercel's AI SDK for response streaming and the open-source LangChain library to interact with the LLM. The code is straightforward, with an algorithm like this:

1. Read the chat messages from the request and take the latest user question.
2. Embed the question and retrieve the matching launches with the `vectorSearch` method.
3. Fill the `SPACEX_CHATBOT_INSTRUCTION` template with the retrieved context and the question.
4. Send the prompt to the LLM and stream the answer back to the client.

The `api/chat` route should look like this:
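This is a sketch under the same assumptions as above: `PeakaService` and the config exports are the ones sketched earlier, and `LangChainStream`/`StreamingTextResponse` are the streaming helpers from the `ai` package:

```typescript
// app/api/chat/route.ts — a minimal sketch of the streaming RAG endpoint.
import { LangChainStream, StreamingTextResponse } from "ai";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { SPACEX_CHATBOT_INSTRUCTION, OPENAI_PARAMS } from "@/config/config";
import { PeakaService } from "@/service/peaka.service";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const question: string = messages[messages.length - 1].content;

  // Embed the question (1536 dims) and let Peaka join the Pinecone matches
  // with the SpaceX launch metadata in a single query.
  const vector = await new OpenAIEmbeddings().embedQuery(question);
  const launches = await PeakaService.vectorSearch(vector);

  // Fill the lodash template with the retrieved context and the question.
  const prompt = SPACEX_CHATBOT_INSTRUCTION({
    context: JSON.stringify(launches),
    question,
  });

  // Stream tokens back to the client through the Vercel AI SDK helpers.
  const { stream, handlers } = LangChainStream();
  const llm = new ChatOpenAI(OPENAI_PARAMS);
  llm
    .invoke([new HumanMessage(prompt)], { callbacks: [handlers] })
    .catch(console.error);

  return new StreamingTextResponse(stream);
}
```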
For the frontend of our chatbot, we will use NLUX. We chose NLUX because it provides easy integration with the Vercel AI SDK.
Let’s open the `page.tsx` file to build our chat window. The following code implements a very basic chatbot UI for this demo. We will use the `AiChat` component from NLUX and implement the `ChatAdapter` interface in order to communicate with the backend. Then, we provide `conversationOptions` to our `AiChat` component, which supplies built-in starter prompts for demo purposes.
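A minimal sketch of the page, assuming the `@nlux/react` package with its Nova theme; the conversation starter text is an assumption:

```tsx
// app/page.tsx — a minimal sketch of the chat window.
"use client";

import { AiChat, ChatAdapter, StreamingAdapterObserver } from "@nlux/react";
import "@nlux/themes/nova.css";

export default function Home() {
  // ChatAdapter: forwards the prompt to our /api/chat route and feeds the
  // streamed chunks back to NLUX through the observer.
  const chatAdapter: ChatAdapter = {
    streamText: async (prompt: string, observer: StreamingAdapterObserver) => {
      const response = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
      });
      if (!response.body) {
        observer.error(new Error("No response body"));
        return;
      }
      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      while (true) {
        const { value, done } = await reader.read();
        if (done) break;
        observer.next(decoder.decode(value));
      }
      observer.complete();
    },
  };

  return (
    <AiChat
      adapter={chatAdapter}
      conversationOptions={{
        conversationStarters: [
          { prompt: "What was the latest SpaceX launch?" },
        ],
      }}
    />
  );
}
```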