Understanding Chat Responses
Retrieved Context Panel
Before the AI responds, you'll see a collapsible panel showing:
| Element | Description |
|---|---|
| Chunk Count | Number of document chunks retrieved (e.g., "5 chunks") |
| Search Time | How long context retrieval took (in milliseconds) |
| Document Title | Source document for each chunk |
| Match Score | Similarity percentage (color-coded: green ≥ 80%, yellow 50-79%, gray < 50%) |
| Content Preview | First few lines of each chunk |
| Open Link | Direct link to the original source document |
Click the panel header to expand/collapse the context view.
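To make the panel fields concrete, here is a minimal sketch of what the retrieved-context payload might look like, together with the score-to-color rule from the table above. All names (RetrievedChunk, searchTimeMs, scoreColor, and so on) are illustrative assumptions, not the application's actual API.

```typescript
// Hypothetical shape of the retrieved-context payload; field names are
// illustrative and may differ from your application's actual API.
interface RetrievedChunk {
  documentTitle: string; // source document for the chunk
  matchScore: number;    // similarity as a percentage, 0-100
  preview: string;       // first few lines of the chunk's content
  sourceUrl: string;     // direct link to the original document
}

interface RetrievedContext {
  chunks: RetrievedChunk[]; // chunk count = chunks.length
  searchTimeMs: number;     // context retrieval time in milliseconds
}

// Color-coding rule from the table above: green >= 80%, yellow 50-79%, gray below 50%.
function scoreColor(matchScore: number): "green" | "yellow" | "gray" {
  if (matchScore >= 80) return "green";
  if (matchScore >= 50) return "yellow";
  return "gray";
}
```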
Response Display
The AI response appears below the context panel:
- Streaming: Text appears word-by-word as it's generated (a sketch of consuming the stream follows this list)
- Markdown Support: Responses may include formatting, lists, and code blocks
- Source Grounding: Answers are based on the retrieved context rather than the model's general knowledge
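Because the response streams, a client typically reads the body incrementally instead of waiting for the full answer. Below is a minimal sketch, assuming a hypothetical /api/chat endpoint that streams plain-text tokens over HTTP; the real endpoint and wire format depend on your backend.

```typescript
// Minimal sketch of consuming a streamed chat response. The /api/chat
// endpoint and its request body are assumptions, not the app's actual API.
async function streamAnswer(
  question: string,
  onToken: (text: string) => void
): Promise<void> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  });
  if (!res.body) throw new Error("Response has no body to stream");

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  // Read chunks as they arrive so the UI can render text incrementally.
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onToken(decoder.decode(value, { stream: true }));
  }
}
```

For example, `streamAnswer("What is our refund policy?", (t) => appendToChatBubble(t))` would render the answer as it arrives, where `appendToChatBubble` is a placeholder for your own UI update.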
Referenced Sources
After the AI response completes, a Referenced Sources section appears with clickable links to the original documents. This enables you to verify the AI's answer by viewing the source material directly.
| Source | Link Destination |
|---|---|
| Google Drive | Opens file in Google Drive preview |
| Supabase Storage | Opens file in browser (PDFs viewable inline) |
| Confluence | Opens page in Atlassian Confluence |
| Website | Opens original crawled URL |
| S3/S3-Compatible | Opens object URL (if publicly accessible) |
| Notion | Opens page in Notion |
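If you render these links yourself, the mapping can be as simple as a type tag plus a resolved URL. The sketch below assumes the backend already stores a ready-to-open URL per source; the type names and fields are illustrative assumptions.

```typescript
// Hypothetical source record; the actual shape depends on your backend.
type SourceType =
  | "google_drive"
  | "supabase_storage"
  | "confluence"
  | "website"
  | "s3"
  | "notion";

interface ReferencedSource {
  type: SourceType;
  url: string;   // assumed to be resolved by the backend per the table above
  title: string;
}

// Opening sources in a new tab keeps the chat conversation intact.
// In a real UI, escape or sanitize the title before rendering it as HTML.
function renderSourceLink(source: ReferencedSource): string {
  return `<a href="${source.url}" target="_blank" rel="noopener noreferrer">${source.title}</a>`;
}
```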
Tip: Clickable source references allow you to verify AI-generated answers against the original documents, ensuring accuracy and building trust in your RAG application.
Response Metadata
After the response completes, a footer shows:
| Field | Description |
|---|---|
| Response Time | Total time for retrieval + generation (in milliseconds) |
| LLM Model | The language model used (e.g., "GPT-4o", "Claude 3.5 Sonnet") |
| Embedding Model | The embedding model used for context retrieval |
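A footer like this often maps to a small metadata object returned alongside the answer. The shape and formatting below are illustrative assumptions, not the application's actual schema.

```typescript
// Hypothetical shape of the response-metadata footer.
interface ResponseMetadata {
  responseTimeMs: number; // total retrieval + generation time in milliseconds
  llmModel: string;       // e.g., "GPT-4o" or "Claude 3.5 Sonnet"
  embeddingModel: string; // model used to embed the query for retrieval
}

// Renders the footer text, e.g. "1,240 ms · GPT-4o · text-embedding-3-small".
function formatFooter(meta: ResponseMetadata): string {
  return `${meta.responseTimeMs.toLocaleString()} ms · ${meta.llmModel} · ${meta.embeddingModel}`;
}
```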