# How to use custom AI models with Peaka
Peaka uses custom AI models in two distinct roles:

Role | Purpose | Example Usage |
---|---|---|
Agent / Chat | Text-to-SQL generation | "Show me the top 10 customers by revenue" → the model generates the corresponding SQL query. |
Embedding / RAG | Retrieval-Augmented Generation (RAG) | Embeddings of table names and metadata are stored → the system retrieves relevant context before generating SQL. |
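The interplay of the two roles can be sketched as follows. This is a minimal, self-contained illustration, not Peaka's internal implementation: the trigram-hash embedding is a stand-in for a real embedding model, and the table catalog is invented sample data.

```python
import math

# Toy embedding: character-trigram hashing into a fixed-size vector.
# A real deployment would call the configured Embedding/RAG model instead;
# this stand-in only keeps the example self-contained and runnable.
def embed(text, dim=64):
    vec = [0.0] * dim
    t = text.lower()
    for i in range(len(t) - 2):
        vec[hash(t[i:i + 3]) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

# Step 1 (Embedding / RAG role): index table names and metadata.
catalog = {
    "customers": "customer id, name, country, signup date",
    "orders": "order id, customer id, total revenue, order date",
    "products": "product id, name, unit price, category",
}
index = {name: embed(f"{name} {meta}") for name, meta in catalog.items()}

# Step 2: retrieve the most relevant tables for a user question.
def retrieve(question, k=2):
    q = embed(question)
    ranked = sorted(index, key=lambda name: cosine(q, index[name]), reverse=True)
    return ranked[:k]

question = "Show me the top 10 customers by revenue"
context_tables = retrieve(question)

# Step 3 (Agent / Chat role): the retrieved context grounds the text-to-SQL
# model. Shown here simply as the prompt that model would receive.
prompt = f"Tables: {', '.join(context_tables)}\nQuestion: {question}"
print(prompt)
```

The design point is that retrieval runs before generation: the SQL model only sees the handful of tables the embedding index judged relevant, which keeps prompts small even over large catalogs.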
The following settings control how these models are applied:

Setting | Description |
---|---|
Agent/Chat Model | Select the model to use for chat and text-to-SQL agent features. |
Embedding / RAG Model | Select the model to use for retrieval-augmented generation and metadata retrieval. ⚠️ Changing this model will trigger reindexing of semantic metadata, which may take some time. |
AI Response Language | Set the global language for AI responses across the system. |
Generate Semantics on Each Query Update with AI | Toggle On/Off. When enabled, Peaka will automatically generate semantic metadata whenever a new query is executed. |
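For illustration, the four settings above could be modeled as a configuration payload like the sketch below. All key names and values here are assumptions made up for this example, not Peaka's actual API schema.

```python
# Hypothetical representation of the AI settings described above.
# Key names and model identifiers are illustrative, not Peaka's schema.
ai_settings = {
    "agent_chat_model": "gpt-4o",                     # chat / text-to-SQL model
    "embedding_rag_model": "text-embedding-3-small",  # RAG / metadata retrieval model
    "ai_response_language": "en",                     # global AI response language
    "generate_semantics_on_query_update": True,       # auto-generate semantic metadata
}

def validate_settings(settings):
    """Minimal sanity check: all four settings must be present."""
    required = {
        "agent_chat_model",
        "embedding_rag_model",
        "ai_response_language",
        "generate_semantics_on_query_update",
    }
    missing = required - settings.keys()
    if missing:
        raise ValueError(f"Missing settings: {sorted(missing)}")
    return True

validate_settings(ai_settings)
```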
Each AI operation record includes the following fields:

Field | Description |
---|---|
Gen AI System | The AI system used (e.g., OpenAI, Google Gemini). |
Gen AI Model | The specific model used for the operation. |
Workflow Name | The main AI workflow associated with the operation. |
Workflow Path | The path of the subtask within the AI workflow. |
Token | Total number of tokens consumed by Peaka for the operation. |
Start Time | Timestamp when the operation started. |
End Time | Timestamp when the operation ended. |
Model Used | Indicates whether the User Model or Peaka Model was used. |
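These fields lend themselves to simple usage accounting. The sketch below aggregates token counts and durations per model source; the dict shape and sample values are assumptions for illustration, since the source does not specify Peaka's export format.

```python
from datetime import datetime

# Sample operation records using the fields described above. The dict
# shape and values are invented for illustration.
records = [
    {"gen_ai_system": "OpenAI", "gen_ai_model": "gpt-4o",
     "workflow_name": "text-to-sql", "workflow_path": "generate-sql",
     "token": 1250, "start_time": "2024-05-01T10:00:00",
     "end_time": "2024-05-01T10:00:04", "model_used": "User Model"},
    {"gen_ai_system": "OpenAI", "gen_ai_model": "text-embedding-3-small",
     "workflow_name": "rag-index", "workflow_path": "embed-metadata",
     "token": 400, "start_time": "2024-05-01T10:05:00",
     "end_time": "2024-05-01T10:05:01", "model_used": "Peaka Model"},
]

def summarize(records):
    """Total tokens and total duration (seconds), grouped by Model Used."""
    summary = {}
    for r in records:
        start = datetime.fromisoformat(r["start_time"])
        end = datetime.fromisoformat(r["end_time"])
        entry = summary.setdefault(r["model_used"], {"tokens": 0, "seconds": 0.0})
        entry["tokens"] += r["token"]
        entry["seconds"] += (end - start).total_seconds()
    return summary

print(summarize(records))
# → {'User Model': {'tokens': 1250, 'seconds': 4.0},
#    'Peaka Model': {'tokens': 400, 'seconds': 1.0}}
```

Grouping by Model Used makes it easy to see how much of your consumption runs through your own models versus Peaka-provided ones.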