

Overview

Assistants are the AI personas users interact with in Experio. Each assistant has its own configuration including model assignments, tools, behavior settings, and agent architecture. You can create multiple assistants tailored to different use cases such as data exploration, conflict analysis, or knowledge transition.

Navigate to Admin Panel > Settings > Agent Configuration. The interface provides a full-page CRUD experience with a tabbed form for managing all assistant settings. From the list view you can search, create, edit, and delete assistants. Clicking an assistant or the Add Assistant button opens the detail page with the following tabs.

Basic Tab

| Field | Description |
| --- | --- |
| Title | The assistant’s display name shown to users |
| Subtitle | A brief description shown on the assistant card |
| Icon | Select from a dropdown of available icons (e.g., robot, brain, lightbulb, analytics) |
| Welcome Message | The greeting shown when a user starts a new conversation with this assistant |
| Display Order | Display order on the Agents page (lower numbers appear first) |
| Enabled | Whether the assistant is available to users |
| Staff Only | Restrict this assistant to staff users only |

Gating

Gating requires users to provide specific context (for example: company size, industry, strategic focus) before the assistant will process their queries. When enabled, the first message of every conversation is evaluated against the gating prompt; if the required context is missing, the assistant returns a clarification asking for it instead of running the normal pipeline. Once a conversation provides complete context, gating is satisfied for the rest of that conversation.
| Field | Description |
| --- | --- |
| Enable Gating | Toggle gating on or off for this assistant. When off, no gating evaluation runs and there is no overhead. |
| Gating Prompt | Free-form text describing what context the user must provide. Shown to users in an amber banner on the welcome page and used by the LLM to decide whether the user’s message is complete. Only visible when Enable Gating is on; clearing the toggle clears the prompt. |
Write the gating prompt as a checklist of required information, e.g.:
User must provide:
- Company size and industry
- Strategic objectives
- Key challenges they want to address
Users can amend the context they originally provided at any time by clicking the clipboard icon next to the message input — the panel shows the original context plus any updates.
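The gating flow described above can be sketched roughly as follows. This is an illustrative sketch only: the product evaluates the first message with an LLM against the gating prompt, so the keyword check and all function names here are assumptions standing in for that call.

```python
# Hypothetical sketch of gating: the real completeness check is an LLM call
# against the gating prompt; a simple keyword heuristic stands in for it here.
REQUIRED_CONTEXT = ["company size", "industry", "strategic objectives"]

def gating_missing(first_message: str, required=REQUIRED_CONTEXT) -> list[str]:
    """Return the required items not mentioned in the user's first message."""
    text = first_message.lower()
    return [item for item in required if item not in text]

def handle_first_message(msg: str) -> str:
    missing = gating_missing(msg)
    if missing:
        # Gating not satisfied: return a clarification request instead of
        # running the normal pipeline.
        return "Before we begin, please provide: " + ", ".join(missing)
    return "RUN_PIPELINE"  # gating satisfied for the rest of the conversation
```

Once the first message passes, no further gating evaluation runs for that conversation.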

Configuration Tab

| Field | Description |
| --- | --- |
| System Prompt | Custom system prompt for this assistant. Leave empty to use the default prompt for the assistant type. |
| Use Document Context | Include document context from vector search in responses |
| Allow Per-Question Model Switch | When enabled, users can select a report-writer model per message; for LangGraph deep agents this applies to the report phase only |

Model Configuration Tab

Each assistant is assigned models that control how it generates responses. All model dropdowns are populated from the configured Model Configurations.
| Field | Description |
| --- | --- |
| Reasoning model (default pipeline) | Default for LangGraph scope, retrieval, router, and summarization when no per-step override is set. Uses a Chat model config. Also used for the answer when the report writer model is unset. |
| Report writer model | Used for the LangGraph report phase (the streamed answer users see), and for final responses in native/planning agents. Same Chat model type as other assistant slots. If unset, falls back to the reasoning model. |
For LangGraph deep agents, the reasoning model drives the pipeline; the report writer model is used only for the final report step. Per-question model selection, when enabled, overrides the report writer model for that message.
The Model Configuration tab also includes per-tool model overrides:
| Field | Description |
| --- | --- |
| Cypher Reasoning Model | Model used for generating Cypher graph database queries. Falls back to the reasoning model if not set. |
| Resolve Entities Model | Model used for entity resolution in knowledge graph lookups. Falls back to the reasoning model if not set. |
| Intent Resolver Model | Model used for classifying user intent and routing queries. Falls back to the reasoning model if not set. |
| Orchestrator Model | Model used for orchestrator checkpoint nodes (think/write_todos). These nodes only plan tasks and show progress to the user; they do not generate the final answer. Use a fast, inexpensive model here. Falls back to the reasoning model if not set. |
Use a smaller, faster model for Cypher generation, entity resolution, and orchestrator checkpoints to reduce latency and cost, while keeping a more capable model for the final answering step.

Custom Tool Configuration Tab

Custom prompt instructions for specific tools. These are appended to the default prompts.
| Field | Description |
| --- | --- |
| Cypher Instructions | Custom instructions appended to the Cypher generation prompt |
| Resolve Entities Instructions | Custom instructions for the entity resolution step |

Agent Configuration Tab

Control the agent architecture and capabilities available to the assistant.

Agent Mode

Select one agent mode at a time using the card-based selector. New assistants default to Deep Agent.
| Mode | Description |
| --- | --- |
| Deep Agent | LangGraph-based 3-phase workflow (scope, retrieval, report) with TODO tracking, file offloading, and subagent delegation |
| Planning Agent | Multi-turn planning agent that creates a step-by-step plan before execution |
| Native Agent | Native LLM tool calling (ReAct loop) |
| None | No agent orchestration |

Tool Groups

Select which tool groups the assistant has access to using the pill-based multi-select. Click a pill to toggle it on/off. Available groups include Knowledge Base, Employee Analysis, OCI Analysis, Graph Query, Document Processing, Reference Documents, and more. If no tool groups are selected, defaults are applied based on the assistant type.
Only one agent mode can be active at a time. The interface enforces this automatically. Deep Agent provides the most thorough analysis but takes longer to respond. For faster, simpler interactions, use Native Agent instead.
When Deep Agent is selected, users see a Deep Research toggle in the chat interface. When activated, this enables parallel graph context queries (GCQ) alongside document search during the scope phase, producing more thorough but slower results. This is a per-message toggle — users can enable it for complex queries and disable it for quick lookups.
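The Deep Research toggle can be pictured as a per-message flag that decides whether graph context queries run alongside document search in the scope phase. The sketch below is an assumption about the shape of that logic, not the actual implementation; `doc_search` and `graph_context_query` are hypothetical callables.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: when the per-message Deep Research flag is set, graph
# context queries (GCQ) run in parallel with document search during scope.
def scope_phase(query: str, deep_research: bool,
                doc_search, graph_context_query) -> dict:
    if not deep_research:
        # Quick lookup: document search only.
        return {"docs": doc_search(query), "graph": None}
    with ThreadPoolExecutor(max_workers=2) as pool:
        docs = pool.submit(doc_search, query)
        graph = pool.submit(graph_context_query, query)  # runs alongside
        return {"docs": docs.result(), "graph": graph.result()}
```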

Deep Agent Settings Tab

When Deep Agent is selected as the agent mode, a Deep Agent Settings tab appears with additional configuration. This tab is hidden when other agent modes are selected.

Architecture Settings

| Field | Default | Description |
| --- | --- | --- |
| Use LangGraph architecture | false | Enable pure LangGraph 3-phase workflow |
| Enable subagents | true | Enable focused subagents for parallel retrieval tasks |
| Recursion limit | 30 | Max recursion depth for agent execution |
| Stream recursion limit | 100 | Max streaming iterations before stopping |
| Enable checkpointer | true | Enable state persistence across conversation turns |
| Enable store | true | Enable PostgresStore for agent memory and context sharing |
| Checkpoint durability | exit | Controls checkpoint frequency (exit saves on completion) |
| Enable orchestrator | true | Enable orchestrator checkpoint nodes that provide user-visible progress updates (thinking steps, task tracking). When disabled, these nodes become pass-through: the pipeline still runs but skips the orchestrator LLM calls, reducing latency and cost. |
Disabling the orchestrator does not change the agent’s retrieval or answer quality. It only removes the intermediate “thinking” and “task planning” steps that are shown to the user during processing. This is useful for simpler queries where the overhead of orchestrator LLM calls is not needed.
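The pass-through behavior can be sketched as a node wrapper that short-circuits when the orchestrator is disabled. Names here are illustrative; the real nodes live in the LangGraph pipeline.

```python
# Illustrative sketch of the pass-through: when the orchestrator is disabled,
# the node returns state unchanged and no orchestrator LLM call is made.
def orchestrator_node(state: dict, enabled: bool, llm_plan=None) -> dict:
    if not enabled:
        return state  # pass-through: pipeline continues, zero orchestrator cost
    update = llm_plan(state)  # would call the orchestrator model here
    return {**state, **update}
```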

Retrieval Settings

| Field | Default | Description |
| --- | --- | --- |
| Document search K | 3 | Number of similar documents to retrieve per search |
| Document search score threshold | 0.7 | Minimum similarity score for document results (0.0 to 1.0) |
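These two settings combine in the usual vector-search way: drop results below the score threshold, then keep the top K. A minimal sketch, assuming results arrive as `(document, score)` pairs (the actual data structures are not documented here):

```python
# Sketch of combining the two retrieval settings: threshold first, then top-K.
def filter_results(results: list[tuple[str, float]],
                   k: int = 3, threshold: float = 0.7) -> list[tuple[str, float]]:
    passing = [r for r in results if r[1] >= threshold]  # apply score floor
    passing.sort(key=lambda r: r[1], reverse=True)       # best matches first
    return passing[:k]                                   # keep top K
```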

Scope Settings

| Field | Default | Description |
| --- | --- | --- |
| Scope max count threshold | 1000 | Max results before the agent asks the user to narrow their query |
| Min path confidence | 0.5 | Minimum confidence score for intent resolver routing paths |
| Use resolved intent | true | Use the router-rephrased intent for all downstream nodes instead of the raw user message |

Report Settings

| Field | Default | Description |
| --- | --- | --- |
| Report data preview limit | 300 | Max rows of retrieved data included in the report writer prompt |
| Report max data tokens | 50000 | Max tokens allocated for retrieved data in the report context. Rows are dropped from the end until the content fits within this budget. |
| Report writer fallback models | (none) | Fallback models for the report writer phase. On 429 rate-limit errors, these models are tried in order. |
Configure fallback models from different providers (e.g., primary: Azure GPT-5.1, fallback: Google Gemini) to ensure the report phase can complete even when one provider hits rate limits. The primary answering model is always tried first.
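The row-trimming behavior can be sketched as follows. The 4-characters-per-token estimate is an assumption standing in for the product's actual tokenizer, and the function name is illustrative:

```python
# Sketch of the report data budget: cap the row count, then drop rows from
# the end until the serialized data fits the token budget.
def trim_rows(rows: list[str], max_tokens: int = 50_000,
              preview_limit: int = 300) -> list[str]:
    rows = rows[:preview_limit]            # hard cap on row count first
    est = lambda text: len(text) // 4      # rough token estimate (assumption)
    while rows and est("\n".join(rows)) > max_tokens:
        rows.pop()                         # drop from the end, as documented
    return rows
```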

Graph Context Settings

| Field | Default | Description |
| --- | --- | --- |
| Auto-include graph context if under | 30000 | Token threshold for auto-including graph context in the report. If the total graph context is under this limit, it is included automatically without requiring a store lookup. |
| Per-entity auto-include threshold | 20000 | Max tokens per individual entity for selective packing. Entities exceeding this limit are summarized instead of included in full. |
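How the two thresholds interact can be sketched like this. All names, the token estimate, and the summarizer are assumptions; only the two limits and the include/summarize behavior come from the table above:

```python
# Illustrative sketch: include everything when the total is under the global
# limit; otherwise pack selectively, summarizing oversized entities.
def pack_graph_context(entities: dict[str, str],
                       total_limit: int = 30_000,
                       per_entity_limit: int = 20_000,
                       summarize=lambda t: t[:100]) -> dict[str, str]:
    est = lambda text: len(text) // 4  # rough token estimate (assumption)
    if sum(est(t) for t in entities.values()) < total_limit:
        return entities                # auto-include the full context
    return {name: (text if est(text) <= per_entity_limit else summarize(text))
            for name, text in entities.items()}  # selective packing
```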

LangGraph Prompt Extensions

The Deep Agent Settings tab also includes a LangGraph Prompt Extensions section. These are custom instructions appended to each LangGraph node prompt, allowing fine-tuning of agent behavior at each stage of the pipeline without code changes.
| Prompt Extension | Applied To |
| --- | --- |
| Retrieval | The retrieval node that searches documents and knowledge graph |
| Report Writer | The final report generation node that synthesizes findings |
| Intent Resolver | The router that classifies user intent and selects the execution path |
| Orchestrator Pre-Scope | The orchestrator before the scoping phase begins |
| Orchestrator Pre-Retrieval | The orchestrator before the retrieval phase begins |
| Orchestrator Pre-Report | The orchestrator before the report generation phase begins |
| Router | The initial routing node that determines the query type |
| Retrieval Fallback | The fallback retrieval strategy when primary retrieval returns insufficient results |
| Summarization | The summarization node used for condensing large result sets |
Use prompt extensions to add domain-specific instructions like “Always cite contract numbers” or “Format financial data as tables” without modifying the underlying agent code.