Chat Tab

The Chat tab enables RAG (Retrieval-Augmented Generation) conversations with your indexed documents. Ask questions in natural language and receive AI-generated answers grounded in your document content.

Prerequisites for Chat

Before using the Chat tab, ensure you have:

  1. Vector store configured: Set up in Settings > Vector Store
  2. Embedding provider configured: Set up in Settings > Embedding Model
  3. LLM provider configured: Set up in Settings > LLM Model with API key
  4. Documents synced: At least one document must be indexed in your vector store

If the LLM is not configured, you'll see a prompt directing you to the LLM settings page.
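The sketch below shows how these four prerequisites fit together conceptually. It is illustrative only: the actual settings live in the UI (Settings > Vector Store, Embedding Model, and LLM Model), and the field names here are assumptions, not the product's real configuration schema.

```python
# Hypothetical mirror of the Chat prerequisites; field names are illustrative,
# not the product's actual settings schema.
chat_prerequisites = {
    "vector_store": {"provider": "your-vector-store", "collection": "docs"},
    "embedding_model": {"provider": "your-embedding-provider", "model": "your-embedding-model"},
    "llm_model": {"provider": "your-llm-provider", "model": "your-chat-model", "api_key": "..."},
    "documents_synced": True,  # at least one document indexed in the vector store
}

def ready_for_chat(cfg: dict) -> bool:
    """Chat is usable only once all four prerequisites are satisfied."""
    return all([
        bool(cfg.get("vector_store")),
        bool(cfg.get("embedding_model")),
        bool(cfg.get("llm_model", {}).get("api_key")),  # LLM needs an API key
        bool(cfg.get("documents_synced")),
    ])

print(ready_for_chat(chat_prerequisites))  # True when everything is configured
```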

Using RAG Chat

  1. Navigate to Playground from the sidebar
  2. Click the Chat tab
  3. Enter a question about your documents
    • Example: "What is the refund policy?"
    • Example: "How do I configure two-factor authentication?"
  4. Click Send or press Enter
  5. Watch the response stream in real time

How RAG Chat Works

When you send a question:

  1. Context Retrieval: The system searches your vector store for relevant document chunks
  2. Context Display: Retrieved chunks are shown in a collapsible panel with similarity scores
  3. AI Generation: The LLM receives your question along with the relevant context
  4. Streaming Response: The answer streams in real time, grounded in your documents
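
The following is a minimal, self-contained sketch of that four-step flow, under stated assumptions: the toy `embed` function, the in-memory chunk list, and the stub streaming LLM are all stand-ins for your configured embedding provider, vector store, and LLM model, not the product's implementation.

```python
# Illustrative RAG flow: retrieve, display, generate, stream.
from math import sqrt

def embed(text: str) -> list[float]:
    # Stand-in embedding (bag of letters). Real deployments call the
    # configured embedding provider here.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# 1. Context Retrieval: rank indexed chunks by similarity to the question.
chunks = [
    "Refunds are available within 30 days of purchase.",
    "Two-factor authentication can be enabled under Account > Security.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

question = "What is the refund policy?"
q_vec = embed(question)
scored = sorted(((cosine(q_vec, vec), chunk) for chunk, vec in index), reverse=True)
top_chunks = [chunk for _, chunk in scored[:1]]

# 2. Context Display: the UI lists the retrieved chunks with similarity scores.
for score, chunk in scored[:1]:
    print(f"[{score:.2f}] {chunk}")

# 3. AI Generation: the question is sent to the LLM together with the context.
prompt = f"Answer using only this context:\n{top_chunks}\n\nQuestion: {question}"

# 4. Streaming Response: tokens arrive incrementally; a stub LLM streams words here.
def fake_llm_stream(prompt: str):
    for word in "Refunds are available within 30 days of purchase.".split():
        yield word + " "

for token in fake_llm_stream(prompt):
    print(token, end="", flush=True)
print()
```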