Prerequisites

Before adding a custom LLM model, make sure you have the following ready:
  • API Key: The authentication key issued by your LLM provider.
  • Base URL (if applicable): The endpoint through which requests to the model will be made.
We currently support the following providers:
  • OpenAI and Azure OpenAI
  • Google Gemini
  • Alibaba Cloud Qwen
👉 If you’re using another provider, let us know — we can add support for additional LLMs.

Model Roles

In Peaka, each model you add is assigned a role based on how it will be used. Currently, we support two types of roles:
Role            | Purpose                              | Example Usages
Agent / Chat    | Text-to-SQL generation               | "Show me the top 10 customers by revenue" → the model generates the SQL query.
Embedding / RAG | Retrieval-Augmented Generation (RAG) | Store embeddings of table names & metadata → the system retrieves context before SQL generation.

How they work together

  • Agent / Chat models handle natural language to SQL conversion.
  • Embedding / RAG models enhance accuracy by retrieving the most relevant tables and metadata before query generation.
By combining these roles, Peaka ensures your text-to-SQL agent is both accurate and context-aware.
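The interplay between the two roles can be sketched as a toy pipeline. This is purely illustrative: the function names, the keyword-matching "retrieval," and the canned SQL template are stand-ins for real embedding search and LLM calls, not Peaka's internal implementation.

```python
# Illustrative sketch only -- not Peaka internals.

def retrieve_context(question: str, metadata_index: dict[str, list[str]]) -> list[str]:
    """Embedding / RAG role: pick the tables whose metadata overlaps
    with words in the question (a stand-in for vector similarity)."""
    words = set(question.lower().split())
    return [
        table
        for table, keywords in metadata_index.items()
        if words & set(keywords)
    ]

def generate_sql(question: str, tables: list[str]) -> str:
    """Agent / Chat role: turn the question plus retrieved context into
    SQL (here a canned template instead of a real LLM call)."""
    return f"-- context tables: {', '.join(tables)}\nSELECT * FROM {tables[0]} LIMIT 10"

index = {
    "customers": ["customers", "revenue", "orders"],
    "products": ["products", "inventory"],
}
context = retrieve_context("top 10 customers by revenue", index)
sql = generate_sql("top 10 customers by revenue", context)
```

The point of the split is visible here: retrieval narrows the schema down to relevant tables first, so the generation step works from focused context rather than the whole catalog.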

Adding a Custom Model

Follow these steps to add a custom LLM model to Peaka:
  1. Navigate to AI Settings
    • Go to Organization Settings → AI Settings.
  2. Click “Add AI Model”
    • Locate and click the Add AI Model button to open the form.
  3. Fill out the Add AI Model form. The form contains the following fields:
    • Gen AI System – Select the provider (e.g., Google Gemini, OpenAI, Alibaba Cloud Qwen).
    • Model Role – Choose the role of the model: Agent/Chat or Embedding/RAG.
    • Gen AI Model – Specify the model name/version to use.
    • Gen AI API Key – Enter the API key issued by your provider.
    • Gen AI Base URL – Enter the base URL/endpoint if required by the provider.
  4. Save and Validate
    • Click Save.
    • Peaka will automatically test your credentials.
    • If the credentials are valid, the model will be added and ready to use.
    • If invalid, an error message will guide you to correct the information.
Once the model has been added successfully, it will appear in your list of AI models, ready to use in your projects.
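As a hedged illustration of what a complete form looks like, the fields map to values like the ones below. The dict, its key names, and the `missing_fields` helper are hypothetical (Peaka is configured through the UI form, not through code), and `gpt-4o` / `sk-...` are example placeholders, not required values.

```python
# Hypothetical mirror of the Add AI Model form -- for illustration only.
model_config = {
    "gen_ai_system": "OpenAI",      # provider (e.g., Google Gemini, Alibaba Cloud Qwen)
    "model_role": "Agent/Chat",     # or "Embedding/RAG"
    "gen_ai_model": "gpt-4o",       # model name/version (example placeholder)
    "gen_ai_api_key": "sk-...",     # the key issued by your provider (placeholder)
    "gen_ai_base_url": None,        # only needed if your provider requires one
}

def missing_fields(cfg: dict) -> list[str]:
    """Fields that must be filled before Save will validate."""
    required = ["gen_ai_system", "model_role", "gen_ai_model", "gen_ai_api_key"]
    return [f for f in required if not cfg.get(f)]
```

Note that the base URL is optional: most hosted providers infer it, while self-hosted or proxied deployments typically require it.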

Configuring Model Usage

Once you’ve added your custom AI models, you can configure Peaka to use them instead of the default models.
  1. Configure the Settings form. The form contains the following fields:
Setting | Description
Agent/Chat Model | Select the model to use for chat and text-to-SQL agent features.
Embedding / RAG Model | Select the model to use for retrieval-augmented generation and metadata retrieval. ⚠️ Changing this model will trigger reindexing of semantic metadata, which may take some time.
AI Response Language | Set the global language for AI responses across the system.
Generate Semantics on Each Query Update with AI | Toggle On/Off. When enabled, Peaka automatically generates semantic metadata whenever a new query is executed.
  2. Save Settings
    • Click Save to apply your configuration.
    • Peaka will start using your selected AI models according to the roles you’ve assigned.
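The reindexing warning on the Embedding / RAG setting exists because vectors produced by different embedding models live in different vector spaces and cannot be compared, so previously stored embeddings must be recomputed. A toy sketch (both `embed_*` functions are fabricated stand-ins, not real embedding models):

```python
# Toy sketch of why switching embedding models forces reindexing.

def embed_v1(text: str) -> list[float]:
    # pretend "old model": a 2-dimensional embedding
    return [float(len(text)), float(text.count(" "))]

def embed_v2(text: str) -> list[float]:
    # pretend "new model": a different space (and dimensionality) entirely
    return [float(ord(c)) for c in text[:3]]

stored = embed_v1("customer revenue table")  # indexed with the old model
query = embed_v2("customer revenue table")   # queried with the new model

# Even the dimensions disagree, so old vectors are unusable as-is and
# everything indexed with the old model must be re-embedded.
assert len(stored) != len(query)
```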

Monitoring Token Usage of Models

Peaka allows you to monitor the usage of your AI models, including tokens consumed during operations. Follow these steps to access the monitoring page:
  1. Navigate to the AI Page
    • Go to Peaka AI in the main navigation.
  2. Open the Token Usage Table
    • Click the Token Usage button.
    • The LLM Monitoring Table will open, displaying detailed usage information.
  3. Understanding the Table Fields
    Field | Description
    Gen AI System | The AI system used (e.g., OpenAI, Google Gemini).
    Gen AI Model | The specific model used for the operation.
    Workflow Name | The main AI workflow associated with the operation.
    Workflow Path | The path of the subtask within the AI workflow.
    Token | Total number of tokens consumed by Peaka for the operation.
    Start Time | Timestamp when the operation started.
    End Time | Timestamp when the operation ended.
    Model Used | Indicates whether the User Model or the Peaka Model was used.
This table helps you track usage, analyze costs, and ensure that your custom models are being utilized as expected.
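As a sketch of how the table's data supports cost analysis, the snippet below aggregates rows shaped like the table above into per-model token totals. The row keys, model names, and token counts are assumptions for illustration, not Peaka's export format.

```python
# Illustrative rows shaped like the LLM Monitoring Table (made-up values).
rows = [
    {"gen_ai_model": "gpt-4o", "token": 1200, "model_used": "User Model"},
    {"gen_ai_model": "gpt-4o", "token": 800,  "model_used": "User Model"},
    {"gen_ai_model": "text-embedding-3-small", "token": 5000, "model_used": "User Model"},
]

def tokens_by_model(rows: list[dict]) -> dict[str, int]:
    """Sum the Token column per Gen AI Model."""
    totals: dict[str, int] = {}
    for r in rows:
        totals[r["gen_ai_model"]] = totals.get(r["gen_ai_model"], 0) + r["token"]
    return totals

totals = tokens_by_model(rows)
```

Grouping by the Model Used column in the same way would show how much traffic goes to your custom models versus the built-in Peaka models.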