Understanding Chat Responses

Retrieved Context Panel

Before the AI responds, you'll see a collapsible panel showing:

| Element | Description |
| --- | --- |
| Chunk Count | Number of document chunks retrieved (e.g., "5 chunks") |
| Search Time | How long context retrieval took (in milliseconds) |
| Document Title | Source document for each chunk |
| Match Score | Similarity percentage (color-coded: green ≥80%, yellow 50-79%, gray below 50%) |
| Content Preview | First few lines of each chunk |
| Open Link | Direct link to the original source document |
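As a rough sketch, each entry in this panel can be modeled as a small record, with the color coding derived from the thresholds in the table above. The interface and function names below are illustrative assumptions, not the application's actual API.

```typescript
// Hypothetical shape of one retrieved chunk, mirroring the panel's columns.
interface RetrievedChunk {
  documentTitle: string;   // Source document for this chunk
  matchScore: number;      // Similarity as a percentage, 0-100
  contentPreview: string;  // First few lines of the chunk
  sourceUrl: string;       // Direct link to the original document
}

// Color-code a match score using the thresholds from the table:
// green >= 80%, yellow 50-79%, gray below 50%.
function matchScoreColor(score: number): "green" | "yellow" | "gray" {
  if (score >= 80) return "green";
  if (score >= 50) return "yellow";
  return "gray";
}
```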

Click the panel header to expand/collapse the context view.

Response Display

The AI response appears below the context panel:

  • Streaming: Text appears word-by-word as it's generated (see the sketch after this list)
  • Markdown Support: Responses may include formatting, lists, and code blocks
  • Source Grounding: Answers are based on the retrieved context, not general knowledge
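Streaming UIs like this are commonly fed by a streamed HTTP response. The sketch below shows one way a browser client might consume such a stream; the `/api/chat` endpoint and its plain-text framing are hypothetical, stand-ins for whatever the application actually uses.

```typescript
// Minimal sketch: consume a streamed chat response and render text as it
// arrives. Assumes a hypothetical /api/chat endpoint that streams plain text.
async function streamResponse(
  question: string,
  onToken: (text: string) => void
): Promise<void> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  });
  if (!res.body) throw new Error("Response is not streamable");

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Decode the incoming bytes and hand each fragment to the UI.
    onToken(decoder.decode(value, { stream: true }));
  }
}
```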

Referenced Sources

After the AI response completes, a Referenced Sources section appears with clickable links to the original documents. This enables you to verify the AI's answer by viewing the source material directly.

| Source | Link Destination |
| --- | --- |
| Google Drive | Opens file in Google Drive preview |
| Supabase Storage | Opens file in browser (PDFs viewable inline) |
| Confluence | Opens page in Atlassian Confluence |
| Website | Opens original crawled URL |
| S3/S3-Compatible | Opens object URL (if publicly accessible) |
| Notion | Opens page in Notion |
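Conceptually, each referenced source carries a type and a destination URL following the mapping in the table above. The sketch below illustrates that idea; the type names, interface, and helper are assumptions for illustration, not the application's actual code.

```typescript
// Hypothetical source reference, mirroring the table above.
type SourceType =
  | "google_drive"
  | "supabase_storage"
  | "confluence"
  | "website"
  | "s3"
  | "notion";

interface SourceReference {
  type: SourceType;
  title: string;
  url: string; // Destination from the table (preview, page, object URL, ...)
}

// Render a reference as a clickable link that opens the original document
// in a new tab.
function renderSourceLink(ref: SourceReference): string {
  return `<a href="${ref.url}" target="_blank" rel="noopener noreferrer">${ref.title}</a>`;
}
```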
Tip: Clickable source references allow you to verify AI-generated answers against the original documents, ensuring accuracy and building trust in your RAG application.

Response Metadata

After the response completes, a footer shows:

| Field | Description |
| --- | --- |
| Response Time | Total time for retrieval + generation (in milliseconds) |
| LLM Model | The language model used (e.g., "GPT-4o", "Claude 3.5 Sonnet") |
| Embedding Model | The embedding model used for context retrieval |
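Taken together, the footer fields correspond to a small metadata record like the sketch below; the interface and field names are assumptions chosen to match the table.

```typescript
// Hypothetical shape of the response-metadata footer.
interface ResponseMetadata {
  responseTimeMs: number;  // Total retrieval + generation time, in milliseconds
  llmModel: string;        // e.g., "GPT-4o" or "Claude 3.5 Sonnet"
  embeddingModel: string;  // Embedding model used for context retrieval
}
```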