Chat Tab
The Chat tab enables RAG (Retrieval-Augmented Generation) conversations with your indexed documents. Ask natural language questions and receive AI-generated answers that are grounded in your document content.
Prerequisites for Chat
Before using the Chat tab, ensure you have:
- Vector store configured: Set up in Settings > Vector Store
- Embedding provider configured: Set up in Settings > Embedding Model
- LLM provider configured: Set up in Settings > LLM Model with API key
- Documents synced: At least one document must be indexed in your vector store
If the LLM is not configured, you'll see a prompt directing you to the LLM settings page.
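The prerequisite checks above can be sketched programmatically. The settings keys and the shape of the settings dictionary below are hypothetical stand-ins for however your deployment actually stores its configuration; this is a sketch of the logic, not the product's internal API.

```python
# Hypothetical settings keys mapped to where each is configured in the UI.
REQUIRED_KEYS = {
    "vector_store": "Settings > Vector Store",
    "embedding_provider": "Settings > Embedding Model",
    "llm_provider": "Settings > LLM Model",
}

def missing_prerequisites(settings: dict, indexed_doc_count: int) -> list[str]:
    """Return human-readable descriptions of unmet Chat prerequisites."""
    problems = [
        f"{key} not configured: set up in {where}"
        for key, where in REQUIRED_KEYS.items()
        if not settings.get(key)
    ]
    # The LLM provider additionally needs an API key.
    llm = settings.get("llm_provider") or {}
    if llm and not llm.get("api_key"):
        problems.append("LLM API key missing: set up in Settings > LLM Model")
    # At least one document must already be indexed in the vector store.
    if indexed_doc_count < 1:
        problems.append("No documents synced: index at least one document")
    return problems
```

An empty result means the Chat tab is ready to use; otherwise each entry tells the user which settings page to visit.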
Using RAG Chat
- Navigate to Playground from the sidebar
- Click the Chat tab
- Enter a question about your documents, for example:
  - "What is the refund policy?"
  - "How do I configure two-factor authentication?"
- Click Send or press Enter
- Watch the response stream in real time
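The streaming behavior in the last step can be mimicked with a plain generator. `stream_answer` and `render` below are illustrative stand-ins, not the product's API: they show why the answer appears incrementally rather than all at once.

```python
from typing import Iterator

def stream_answer(answer: str, chunk_size: int = 8) -> Iterator[str]:
    """Yield the answer a few characters at a time, mimicking token streaming."""
    for i in range(0, len(answer), chunk_size):
        yield answer[i:i + chunk_size]

def render(chunks: Iterator[str]) -> str:
    """A real UI appends each chunk as it arrives; here we just concatenate."""
    shown = ""
    for chunk in chunks:
        shown += chunk  # the Chat tab would repaint after every chunk
    return shown
```

Because the text arrives chunk by chunk, the first words of an answer are visible well before the LLM has finished generating.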
How RAG Chat Works
When you send a question:
- Context Retrieval: The system searches your vector store for relevant document chunks
- Context Display: Retrieved chunks are shown in a collapsible panel with similarity scores
- AI Generation: The LLM receives your question along with the relevant context
- Streaming Response: The answer streams in real time, grounded in your documents
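The four stages above can be sketched end to end. Everything here is illustrative: the bag-of-words "embedding", the cosine scoring, the prompt template, and the `generate` stub stand in for whichever embedding provider and LLM you configured, so the sketch runs standalone.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: lowercase bag-of-words counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Similarity score shown next to each chunk in the context panel.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], top_k: int = 2):
    # Stage 1 (context retrieval): score every indexed chunk against the question.
    q = embed(question)
    scored = sorted(((cosine(q, embed(c)), c) for c in chunks), reverse=True)
    return scored[:top_k]  # (similarity score, chunk) pairs for the context panel

def answer(question: str, chunks: list[str]) -> str:
    # Stages 2-4: display-ready context, grounded prompt, LLM generation.
    context = "\n".join(c for _, c in retrieve(question, chunks))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

def generate(prompt: str) -> str:
    # Stub LLM so the sketch runs without a provider; a real LLM streams tokens.
    return prompt.splitlines()[1]  # echo the top-ranked context line
```

The key design point survives the simplification: the LLM never sees your whole corpus, only the top-scoring chunks, which is what keeps answers grounded in your documents.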