Documentation Index
Fetch the complete documentation index at: https://docs.experio.cloud/llms.txt
Use this file to discover all available pages before exploring further.
Overview
Assistants are the AI personas users interact with in Experio. Each assistant has its own configuration including model assignments, tools, behavior settings, and agent architecture. You can create multiple assistants tailored to different use cases such as data exploration, conflict analysis, or knowledge transition.
Navigate to Admin Panel > Settings > Agent Configuration.
The interface provides a full-page CRUD experience with a tabbed form for managing all assistant settings. From the list view you can search, create, edit, and delete assistants. Clicking an assistant or the Add Assistant button opens the detail page with the following tabs.
Basic Tab
| Field | Description |
|---|---|
| Title | The assistant’s display name shown to users |
| Subtitle | A brief description shown on the assistant card |
| Icon | Select from a dropdown of available icons (e.g., robot, brain, lightbulb, analytics) |
| Welcome Message | The greeting shown when a user starts a new conversation with this assistant |
| Display Order | Display order on the Agents page (lower numbers appear first) |
| Enabled | Whether the assistant is available to users |
| Staff Only | Restrict this assistant to staff users only |
Gating
Gating requires users to provide specific context (for example: company size, industry, strategic focus) before the assistant will process their queries. When enabled, the first message of every conversation is evaluated against the gating prompt; if the required context is missing, the assistant returns a clarification asking for it instead of running the normal pipeline. Once a conversation provides complete context, gating is satisfied for the rest of that conversation.
| Field | Description |
|---|---|
| Enable Gating | Toggle gating on or off for this assistant. When off, no gating evaluation runs and there is no overhead. |
| Gating Prompt | Free-form text describing what context the user must provide. Shown to users in an amber banner on the welcome page and used by the LLM to decide whether the user’s message is complete. Only visible when Enable Gating is on; clearing the toggle clears the prompt. |
Write the gating prompt as a checklist of required information, for example:

User must provide:
- Company size and industry
- Strategic objectives
- Key challenges they want to address
Users can amend the context they originally provided at any time by clicking the clipboard icon next to the message input — the panel shows the original context plus any updates.
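The gating flow described above can be sketched as follows. This is a minimal illustration, not the platform's actual implementation: `llm_is_complete` stands in for whatever LLM call evaluates the message against the gating prompt, and the class and function names are assumptions.

```python
class Conversation:
    def __init__(self):
        self.gating_satisfied = False

class Assistant:
    def __init__(self, gating_enabled, gating_prompt=""):
        self.gating_enabled = gating_enabled
        self.gating_prompt = gating_prompt

# Stand-ins for the real pipeline and LLM evaluation (illustrative only).
def run_pipeline(conversation, message):
    return f"answer: {message}"

def llm_is_complete(gating_prompt, message):
    # The real system asks an LLM; here we just check for a keyword.
    return "industry" in message.lower()

def clarification_request(gating_prompt):
    return f"please provide: {gating_prompt}"

def handle_message(conversation, message, assistant):
    """Gating runs only until the conversation has supplied the required context."""
    if not assistant.gating_enabled or conversation.gating_satisfied:
        return run_pipeline(conversation, message)
    if llm_is_complete(assistant.gating_prompt, message):
        conversation.gating_satisfied = True  # satisfied for the rest of this conversation
        return run_pipeline(conversation, message)
    # Required context is missing: ask for it instead of running the pipeline.
    return clarification_request(assistant.gating_prompt)
```

Note that once `gating_satisfied` flips to true, later messages in the same conversation skip the evaluation entirely, matching the "satisfied for the rest of that conversation" behavior.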
Configuration Tab
| Field | Description |
|---|---|
| System Prompt | Custom system prompt for this assistant. Leave empty to use the default prompt for the assistant type. |
| Use Document Context | Include document context from vector search in responses |
| Allow Per-Question Model Switch | When enabled, users can select a report-writer model per message; for LangGraph deep agents this applies to the report phase only |
Model Configuration Tab
Each assistant is assigned models that control how it generates responses. All model dropdowns are populated from the configured Model Configurations.
| Field | Description |
|---|---|
| Reasoning model (default pipeline) | Default for LangGraph scope, retrieval, router, and summarization when no per-step override is set. Uses a Chat model config. Also used for the report phase when the report writer model is unset. |
| Report writer model | Used for the LangGraph report phase (the streamed answer users see), and for final responses in native/planning agents. Same Chat model type as other assistant slots. If unset, falls back to the reasoning model. |
For LangGraph deep agents, the reasoning model drives the pipeline; the report writer model is used only for the final report step. Per-question model selection, when enabled, overrides the report writer model for that message.
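The resulting precedence for the report phase can be summarized in a tiny helper; the function and parameter names are illustrative, not the platform's API:

```python
def report_model(per_question_model, report_writer_model, reasoning_model):
    """Model used for the final report/answer, in priority order:
    the user's per-question selection (if the feature is enabled and one
    was picked) beats the assistant's report writer model, which in turn
    beats the reasoning model."""
    return per_question_model or report_writer_model or reasoning_model
```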
The Model Configuration tab also includes per-tool model overrides:
| Field | Description |
|---|---|
| Cypher Reasoning Model | Model used for generating Cypher graph database queries. Falls back to the reasoning model if not set. |
| Resolve Entities Model | Model used for entity resolution in knowledge graph lookups. Falls back to the reasoning model if not set. |
| Intent Resolver Model | Model used for classifying user intent and routing queries. Falls back to the reasoning model if not set. |
| Orchestrator Model | Model used for orchestrator checkpoint nodes (think/write_todos). These nodes only plan tasks and show progress to the user — they do not generate the final answer. Use a fast, inexpensive model here. Falls back to the reasoning model if not set. |
Use a smaller, faster model for Cypher generation, entity resolution, and orchestrator checkpoints to reduce latency and cost, while keeping a more capable model for the final answering step.
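Each per-tool slot follows the same fallback rule, which can be sketched as a simple lookup. The dictionary keys, model names, and function are assumptions for illustration only:

```python
# Hypothetical per-tool overrides; None means the slot is unset in the admin UI.
TOOL_OVERRIDES = {
    "cypher": None,                       # Cypher Reasoning Model (unset)
    "resolve_entities": "small-fast-model",
    "intent_resolver": None,
    "orchestrator": "small-fast-model",
}

def model_for(tool, reasoning_model="capable-model"):
    """Every per-tool slot falls back to the assistant's reasoning model when unset."""
    return TOOL_OVERRIDES.get(tool) or reasoning_model
```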
The tab also provides custom prompt instructions for specific tools. These are appended to the default prompts.
| Field | Description |
|---|---|
| Cypher Instructions | Custom instructions appended to the Cypher generation prompt |
| Resolve Entities Instructions | Custom instructions for the entity resolution step |
Agent Configuration Tab
Control the agent architecture and capabilities available to the assistant.
Agent Mode
Select one agent mode at a time using the card-based selector. New assistants default to Deep Agent.
| Mode | Description |
|---|---|
| Deep Agent | LangGraph-based 3-phase workflow (scope, retrieval, report) with TODO tracking, file offloading, and subagent delegation |
| Planning Agent | Multi-turn planning agent that creates a step-by-step plan before execution |
| Native Agent | Native LLM tool calling (ReAct loop) |
| None | No agent orchestration |
Tool Groups
Select which tool groups the assistant has access to using the pill-based multi-select. Click a pill to toggle it on/off. Available groups include Knowledge Base, Employee Analysis, OCI Analysis, Graph Query, Document Processing, Reference Documents, and more.
If no tool groups are selected, defaults are applied based on the assistant type.
Only one agent mode can be active at a time. The interface enforces this automatically. Deep Agent provides the most thorough analysis but takes longer to respond. For faster, simpler interactions, use Native Agent instead.
When Deep Agent is selected, users see a Deep Research toggle in the chat interface. When activated, this enables parallel graph context queries (GCQ) alongside document search during the scope phase, producing more thorough but slower results. This is a per-message toggle — users can enable it for complex queries and disable it for quick lookups.
Deep Agent Settings Tab
When Deep Agent is selected as the agent mode, a Deep Agent Settings tab appears with additional configuration. This tab is hidden when other agent modes are selected.
Architecture Settings
| Field | Default | Description |
|---|---|---|
| Use LangGraph architecture | false | Enable pure LangGraph 3-phase workflow |
| Enable subagents | true | Enable focused subagents for parallel retrieval tasks |
| Recursion limit | 30 | Max recursion depth for agent execution |
| Stream recursion limit | 100 | Max streaming iterations before stopping |
| Enable checkpointer | true | Enable state persistence across conversation turns |
| Enable store | true | Enable PostgresStore for agent memory and context sharing |
| Checkpoint durability | exit | Controls checkpoint frequency (exit saves on completion) |
| Enable orchestrator | true | Enable orchestrator checkpoint nodes that provide user-visible progress updates (thinking steps, task tracking). When disabled, these nodes become pass-through — the pipeline still runs but skips the orchestrator LLM calls, reducing latency and cost. |
Disabling the orchestrator does not change the agent’s retrieval or answer quality. It only removes the intermediate “thinking” and “task planning” steps that are shown to the user during processing. This is useful for simpler queries where the overhead of orchestrator LLM calls is not needed.
Retrieval Settings
| Field | Default | Description |
|---|---|---|
| Document search K | 3 | Number of similar documents to retrieve per search |
| Document search score threshold | 0.7 | Minimum similarity score for document results (0.0 to 1.0) |
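Together these two settings act as a filter-then-truncate step on vector search results, which can be sketched like this (the hit structure and function name are assumptions, not the actual retrieval code):

```python
def filter_hits(hits, k=3, score_threshold=0.7):
    """Keep only hits at or above the similarity threshold,
    then take the top K by score."""
    eligible = [h for h in hits if h["score"] >= score_threshold]
    eligible.sort(key=lambda h: h["score"], reverse=True)
    return eligible[:k]
```

Raising the threshold trims marginal matches before the top-K cut, so a high threshold can return fewer than K documents.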
Scope Settings
| Field | Default | Description |
|---|---|---|
| Scope max count threshold | 1000 | Max results before the agent asks the user to narrow their query |
| Min path confidence | 0.5 | Minimum confidence score for intent resolver routing paths |
| Use resolved intent | true | Use the router-rephrased intent for all downstream nodes instead of the raw user message |
Report Settings
| Field | Default | Description |
|---|---|---|
| Report data preview limit | 300 | Max rows of retrieved data included in the report writer prompt |
| Report max data tokens | 50000 | Max tokens allocated for retrieved data in the report context. Rows are dropped from the end until the content fits within this budget. |
| Report writer fallback models | (none) | Fallback models for the report writer phase. On 429 rate-limit errors, these models are tried in order. |
Configure fallback models from different providers (e.g., primary: Azure GPT-5.1, fallback: Google Gemini) to ensure the report phase can complete even when one provider hits rate limits. The primary answering model is always tried first.
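The try-in-order behavior on rate limits can be sketched as a simple loop; the exception class and `call` signature are stand-ins for whatever the platform's provider clients actually raise and expose:

```python
class RateLimitError(Exception):
    """Stand-in for a provider 429 (rate limit) response."""

def generate_report(prompt, primary, fallbacks, call):
    """Try the primary report-writer model first, then each fallback in order,
    moving to the next model only when the current one is rate-limited."""
    last_err = None
    for model in [primary, *fallbacks]:
        try:
            return call(model, prompt)
        except RateLimitError as err:
            last_err = err  # 429: try the next configured model
    raise last_err  # every configured model was rate-limited
```

Other error types would not trigger the fallback in this sketch; only rate-limit errors advance the loop, matching the behavior described above.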
Graph Context Settings
| Field | Default | Description |
|---|---|---|
| Auto-include graph context if under | 30000 | Token threshold for auto-including graph context in the report. If the total graph context is under this limit, it is included automatically without requiring a store lookup. |
| Per-entity auto-include threshold | 20000 | Max tokens per individual entity for selective packing. Entities exceeding this limit are summarized instead of included in full. |
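The interaction between the two thresholds can be sketched as follows. Token counting and summarization are placeholders here (the real system presumably uses a tokenizer and an LLM summary), and the function name is an assumption:

```python
def pack_graph_context(entities, total_limit=30000, per_entity_limit=20000,
                       count_tokens=len, summarize=lambda text: text[:10]):
    """If the whole graph context fits under the total budget, include it all.
    Otherwise pack selectively: entities over the per-entity limit are
    summarized, the rest are included in full."""
    total = sum(count_tokens(e) for e in entities)
    if total <= total_limit:
        return list(entities)  # auto-include everything
    return [e if count_tokens(e) <= per_entity_limit else summarize(e)
            for e in entities]
```

The small default `count_tokens`/`summarize` stand-ins make the behavior easy to test with plain strings; real limits operate on token counts.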
LangGraph Prompt Extensions
The Deep Agent Settings tab also includes a LangGraph Prompt Extensions section. These are custom instructions appended to each LangGraph node prompt, allowing fine-tuning of agent behavior at each stage of the pipeline without code changes.
| Prompt Extension | Applied To |
|---|---|
| Retrieval | The retrieval node that searches documents and knowledge graph |
| Report Writer | The final report generation node that synthesizes findings |
| Intent Resolver | The router that classifies user intent and selects the execution path |
| Orchestrator Pre-Scope | The orchestrator before the scoping phase begins |
| Orchestrator Pre-Retrieval | The orchestrator before the retrieval phase begins |
| Orchestrator Pre-Report | The orchestrator before the report generation phase begins |
| Router | The initial routing node that determines the query type |
| Retrieval Fallback | The fallback retrieval strategy when primary retrieval returns insufficient results |
| Summarization | The summarization node used for condensing large result sets |
Use prompt extensions to add domain-specific instructions like “Always cite contract numbers” or “Format financial data as tables” without modifying the underlying agent code.
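The append behavior can be illustrated with a small sketch; the default prompt texts and dictionary shape are invented for the example, not taken from the product:

```python
# Hypothetical default node prompts (illustrative stand-ins).
NODE_PROMPTS = {
    "report_writer": "Write the final report from the retrieved findings.",
    "retrieval": "Search documents and the knowledge graph.",
}

def build_prompt(node, extensions):
    """A configured extension is appended to the node's default prompt;
    an empty or missing extension leaves the default untouched."""
    base = NODE_PROMPTS[node]
    extra = extensions.get(node, "").strip()
    return f"{base}\n\n{extra}" if extra else base
```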