Overview
Model configurations define which AI language models are available in your Experio deployment. You can configure multiple models from different providers and control which ones are available to end users. Navigate to Admin > Settings > Model Configurations.
Viewing Models
The model configurations page lists all configured models with:
- Name (internal identifier)
- Display Name (shown to users)
- Type (LLM, Embedding, Reranking)
- Provider
- Active status
- User-Facing status (whether end users can select this model)
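The columns above can be thought of as one record per configured model. A minimal sketch, assuming a hypothetical schema (the field names mirror the listing columns but are not Experio's actual data model):

```python
from dataclasses import dataclass

# Hypothetical record mirroring the columns on the listing page;
# names and types are illustrative, not Experio's real schema.
@dataclass
class ModelConfiguration:
    name: str          # internal identifier, used in API calls
    display_name: str  # human-readable name shown to users
    type: str          # "LLM", "Embedding", or "Reranking"
    provider: str      # e.g. "OpenAI", "Anthropic"
    active: bool       # whether the model is available for use
    user_facing: bool  # whether end users can select it in chat

# Example row as it might appear on the listing page:
row = ModelConfiguration("gpt4o-chat", "GPT-4o", "LLM", "OpenAI", True, True)
```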
Creating a Model Configuration
Click Create New and fill in:
| Field | Description |
|---|---|
| Name | Internal identifier (no spaces, used in API calls) |
| Display Name | Human-readable name shown in the UI |
| Type | The model type: LLM, Embedding, or Reranking |
| Provider | The model provider (OpenAI, Anthropic, etc.) |
| API Configuration | JSON configuration with API keys, endpoints, model parameters, and other provider-specific settings |
| Active | Whether this model is available for use |
| User-Facing | Whether end users can select this model in the chat interface |
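The API Configuration field accepts provider-specific JSON. The exact keys depend on the provider and are not specified here, so the following is only an illustrative sketch with made-up key names; it checks that the value is well-formed JSON before pasting it into the field:

```python
import json

# Hypothetical API Configuration payload -- key names are examples only;
# consult your provider's settings for the keys Experio actually expects.
api_configuration = {
    "api_key": "YOUR_API_KEY",                 # provider API key
    "base_url": "https://api.openai.com/v1",   # provider endpoint
    "model": "gpt-4o",                         # provider-side model id
    "temperature": 0.2,                        # example model parameter
}

# The field expects valid JSON, so serialize and verify it round-trips.
serialized = json.dumps(api_configuration, indent=2)
assert json.loads(serialized) == api_configuration
```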
Model Types
| Type | Purpose |
|---|---|
| LLM | Language model for generating chat responses |
| Embedding | Model for creating vector embeddings of documents (used in semantic search) |
| Reranking | Model for re-ranking search results by relevance |
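The three types above typically cooperate in one retrieval pipeline: an Embedding model powers semantic search, a Reranking model refines the shortlist, and an LLM generates the final reply. A toy sketch of that flow, with stand-in functions (nothing here is an Experio API):

```python
def embed(text):
    # Embedding model stand-in: map text to a vector
    # (here just the character count, for illustration).
    return [float(len(text))]

def semantic_search(query, docs, k=2):
    # Retrieve the k docs nearest to the query in "embedding" space.
    qv = embed(query)[0]
    return sorted(docs, key=lambda d: abs(embed(d)[0] - qv))[:k]

def rerank(query, candidates):
    # Reranking model stand-in: re-score the shortlist by relevance
    # (toy version keeps the existing order).
    return candidates

def generate(query, context):
    # LLM stand-in: produce the chat response from query + context.
    return f"answer to {query!r} grounded in {len(context)} documents"

docs = ["alpha", "a much longer document", "beta doc"]
candidates = semantic_search("query", docs, k=2)
reply = generate("query", rerank("query", candidates))
```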
Testing Connections
Before activating a model, test its connection:
- Click Test Connection on any model
- The system sends a real API request to the provider
- Results show:
  - Response time
  - Response preview
  - Embedding dimensions (for embedding models)
  - Success or error status