Your conversation history is the single most valuable source of insights for improving your AI. Every conversation reveals what’s working, what’s not, and where to focus your efforts. This page shows you how to systematically browse, filter, and analyze conversations to identify patterns, discover knowledge gaps, and track customer satisfaction.

Why Conversation Analysis Matters

Raw conversation data only becomes valuable when you can find the conversations that matter most. With conversation analysis, you can:
  • Identify knowledge gaps - Find questions your AI struggles to answer
  • Track satisfaction trends - Monitor how customers rate their interactions
  • Discover common issues - Spot patterns in escalations and unresolved conversations
  • Understand user needs - See what topics and questions dominate your support
  • Validate improvements - Confirm that changes actually improve outcomes
  • Train your team - Review exceptional conversations as examples
The most effective AI teams review 10-20 conversations per week, focusing on low-rated or escalated interactions. This regular review habit drives continuous improvement.

Accessing Conversations

Navigate to Analyze → Conversations to view your conversation history. The conversation list shows:
  • Time - When the conversation started (relative, e.g., “2 hours ago”)
  • Message count - Number of user messages in the conversation
  • Preview - First user question and AI response
  • Status indicator - Color-coded resolution status
  • Rating - Customer satisfaction score if provided
Conversations are sorted by most recent first, with infinite scroll to load older conversations as you scroll down.

Understanding Conversation Status

Every conversation has a status that indicates how it concluded. Status helps you filter conversations by outcome and identify areas for improvement.

Status Types

Resolved (Green)
The user's question was successfully answered, either by the AI directly or by a human operator who stepped in to resolve the issue.
Use cases:
- Tracking successful interactions
- Finding examples of good AI responses
- Measuring resolution rate over time
Unresolved (Red)
The question was inadequately addressed or left unanswered. The user may have abandoned the conversation unsatisfied.
Common causes:
- AI lacked knowledge to answer
- Answer was unclear or incomplete
- User's question was too complex
- Technical issues interrupted conversation
Escalated (Purple)
The conversation was handed off to a human agent, either automatically (based on your escalation rules) or manually (the user requested human help).
Why this matters:
- High escalation rates indicate AI knowledge gaps
- Review escalations to improve autonomous handling
- Understand when humans add value vs. when AI should handle it
You can manually change conversation status by clicking the status tag on the conversation detail page. This is useful for correcting misclassified conversations or marking follow-ups as resolved.

AI Involvement Levels

Understanding how your AI participated in conversations helps you measure automation efficiency and identify opportunities to increase autonomous handling.

Involvement Types

Fully Autonomous (100% AI)
The AI handled the entire conversation without any human operator involvement. This represents complete automation and maximum efficiency.
Why it matters:
- Highest efficiency - no human time required
- Shows AI confidence and capability
- Indicates well-covered topics in knowledge base
- Target: Increase this percentage over time
Public Involvement (AI + Human)
The AI generated customer-visible responses, and a human operator later got involved. This represents partial automation where the AI started the conversation but human expertise was needed.
Common scenarios:
- AI provided initial answer, human added details
- AI handled first questions, human took over for complex follow-ups
- AI maintained the conversation until a human was available
Private Involvement (AI as Copilot)
The AI suggested responses internally to your support team, but all customer-facing messages came from human operators. The AI acts as a copilot, helping agents respond faster.
Use in:
- High-stakes conversations (enterprise, legal, billing)
- Complex technical support
- Training new support agents
- Maintaining human touch while getting AI assistance
Not Involved (Human Only)
Zero AI involvement. These are typically imported historical tickets or conversations where the AI was explicitly disabled.
Common causes:
- Imported from third-party systems (Zendesk, Salesforce)
- Outbound messages from operators
- AI disabled for specific channels or users
- Conversations before AI deployment
Filter conversations by involvement type to understand your automation mix. A healthy distribution might be 60% autonomous, 30% public, 10% private for mature deployments.
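If you export your conversation history (see "Exporting Conversation Data" below), you can compute this mix yourself. A minimal Python/pandas sketch, assuming a CSV export with the involvement_type field listed later on this page:

```python
import pandas as pd

# Load a conversation export (fields are listed under "Exporting Conversation Data").
df = pd.read_csv("conversations.csv")

# Percentage share of each involvement type, to compare against your target mix.
mix = df["involvement_type"].value_counts(normalize=True).mul(100).round(1)
print(mix)
```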

Using Filters to Find Insights

The conversation list includes powerful filtering options to help you drill down to specific segments.

Date Range Filter

Purpose: Focus on specific time periods to track trends or investigate issues.
Common use cases:
  • Compare this week vs. last week to measure improvement
  • Analyze conversations after a deployment
  • Review weekend conversations (Better Monday Score)
  • Investigate specific incident timeframes
Location: Top right of the page
Click the date range selector to choose:
  • Last 7 days
  • Last 30 days
  • Last 90 days
  • Custom range (select start and end dates)

Involvement Filter

Purpose: Segment by AI participation level to analyze automation efficiency.
Options:
  • All Conversations - No filtering
  • Involved - Any AI participation (autonomous + public + private)
    • Fully Autonomous - 100% AI handled
    • Public Involvement - AI + human cooperation
    • Private Involvement - AI as copilot
  • Not Involved - Human only, no AI
How to use:
  1. Select “Fully Autonomous” to find successfully automated conversations
  2. Study these to understand what works well
  3. Select “Public Involvement” to find partial automation
  4. Review why humans needed to intervene
  5. Add knowledge or improve guidance to increase autonomous handling

Channel Filter

Purpose: Analyze performance across different communication channels.
Available channels:
  • Web (website chat widget)
  • Zendesk (support tickets)
  • Salesforce (CRM cases)
  • Slack (workspace messages)
  • WhatsApp (messaging app)
  • Email (support inbox)
Why filter by channel:
  • Different channels have different user expectations
  • Performance may vary (web vs. email response quality)
  • Channel-specific issues (formatting, timing, tone)
  • Optimize guidance per channel
Example: Filter to "Web" to see only website chat conversations, then check whether CSAT is higher or lower than on other channels.

Label Filter

Purpose: Categorize and filter conversations by custom labels.
Labels help you organize conversations by:
  • Product area (Billing, API, Dashboard)
  • Priority level (High, Medium, Low)
  • Customer segment (Enterprise, SMB, Free)
  • Quality markers (Training Example, Bug Report)
Two filter modes:
Include labels (positive filter)
Show only conversations tagged with specific labels.
Example: Include "Billing" to see only billing-related conversations
Exclude labels (negative filter)
Hide conversations with specific labels.
Example: Exclude "Spam" to filter out junk conversations
You can apply both simultaneously for precise filtering.
Labels must be created and applied before they appear in the filter. Learn more about creating labels in the Labels documentation.
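You can simulate the same include/exclude logic on an exported CSV. A minimal sketch, assuming (hypothetically) that the labels field is a single delimited string such as "Billing;Enterprise":

```python
import pandas as pd

df = pd.read_csv("conversations.csv")

# Assumption: `labels` is a delimited string, e.g. "Billing;Enterprise".
labels = df["labels"].fillna("").str.split(";")

include = {"Billing"}  # keep conversations carrying at least one of these
exclude = {"Spam"}     # drop conversations carrying any of these

mask = labels.apply(lambda ls: bool(include & set(ls)) and not (exclude & set(ls)))
filtered = df[mask]
print(len(filtered), "conversations match")
```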

Advanced Filters

Click “Show advanced filters” to access additional filtering options:
Topic Filter
Filter by automatically detected conversation topics. Topics are AI-generated clusters of similar questions.
Use when:
- Analyzing specific subject areas
- Comparing topic performance
- Finding conversations about new product features
Status Filter
Filter by resolution outcome (Resolved, Unresolved, Escalated). Combine multiple statuses.
Common combinations:
- Unresolved + Escalated = Problems to investigate
- Resolved only = Success stories and examples
Rating Filter
Filter by customer satisfaction score (1-5 stars, Abandoned, Unoffered).
Rating scale:
- 😠 Terrible (1)
- 🙁 Bad (2)
- 😐 OK (3)
- 😊 Good (4)
- 🤩 Amazing (5)
- Abandoned (0) - Survey offered but not completed
- Unoffered (-1) - Survey not presented
Priority use: Filter to ratings 1-2 (Terrible, Bad) to find the most problematic conversations requiring immediate attention.
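When working with exported data, keep the sentinel codes in mind: only 1-5 are genuine ratings. A small sketch of mapping the codes and computing CSAT, assuming the rating_score and rating_comment fields listed under "Exporting Conversation Data":

```python
import pandas as pd

df = pd.read_csv("conversations.csv")

RATING_LABELS = {1: "Terrible", 2: "Bad", 3: "OK", 4: "Good", 5: "Amazing",
                 0: "Abandoned", -1: "Unoffered"}
print(df["rating_score"].map(RATING_LABELS).value_counts())

# Only 1-5 are real ratings; 0 and -1 are survey sentinels, so exclude them.
rated = df[df["rating_score"].between(1, 5)]
print("Average CSAT:", round(rated["rating_score"].mean(), 2))

# Most problematic conversations first.
print(rated[rated["rating_score"] <= 2][["conversation_id", "rating_comment"]].head(20))
```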

Filter Combinations

Combine multiple filters to create precise segments:
Example 1: Find knowledge gaps
- Involvement: Fully Autonomous
- Status: Unresolved
- Rating: 1-2 (Terrible, Bad)
- Date: Last 7 days

Result: Recent conversations where AI tried to help but failed
Action: Review for missing knowledge or guidance issues
Example 2: Validate recent improvements
- Topic: "API Authentication" (your recent focus area)
- Date: Last 30 days
- Status: Resolved
- Rating: 4-5 (Good, Amazing)

Result: Successful conversations about your improved topic
Action: Confirm your knowledge additions are working
Example 3: Weekend automation
- Date: Custom range (last Saturday-Sunday)
- Involvement: Fully Autonomous
- Status: Resolved

Result: Conversations successfully handled while team was offline
Action: Measure Better Monday Score impact
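The same segments can be rebuilt offline from an export. A sketch of Example 1 in pandas, assuming the CSV fields listed later on this page; the involvement_type value name is hypothetical:

```python
import pandas as pd

df = pd.read_csv("conversations.csv", parse_dates=["created_at"])

# "Last 7 days" relative to the newest conversation in the export.
cutoff = df["created_at"].max() - pd.Timedelta(days=7)

gaps = df[
    (df["involvement_type"] == "fully_autonomous")  # hypothetical value name
    & (df["status"] == "unresolved")
    & (df["rating_score"].between(1, 2))
    & (df["created_at"] >= cutoff)
]
print(gaps[["conversation_id", "created_at", "rating_score"]])
```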

Viewing Conversation Details

Click any conversation card to open the detailed view. This shows:

Conversation Header

Metadata display:
  • Total message count
  • Topic tag (clickable to change topic)
  • Status tag (clickable to change status)
  • Rating indicator with score
  • Channel information
  • Timestamps
Actions:
  • Export conversation data
  • Navigate to next/previous conversation
  • Give feedback about the conversation
  • Hide conversation (spam or irrelevant)
  • Chat now (open conversation in live chat if ongoing)

Message Timeline

The conversation displays in chronological order with:
User messages (left-aligned, white background)
  • User’s question or statement
  • Timestamp
  • Sender information (if available)
AI messages (right-aligned, blue background)
  • AI’s response
  • Source attribution (which knowledge was used)
  • Tool calls (searches, actions performed)
  • Confidence indicators
Operator messages (right-aligned, green background)
  • Human agent responses
  • Internal notes (marked as private)
  • Handoff indicators

Customer Satisfaction Feedback

If the customer rated the conversation, you’ll see:
Rating display:
  • Star rating (1-5) with emoji
  • Rating label (Terrible to Amazing)
  • Optional text feedback from customer
This appears as a highlighted card at the top of the conversation timeline. Use feedback to:
  • Understand what went wrong in low-rated conversations
  • Identify patterns in dissatisfaction
  • Find examples of excellent experiences (5-star ratings)

User Information Sidebar

The right sidebar shows:
User profile:
  • User ID and alias
  • Email address (if provided)
  • Device information (browser, OS, screen size)
  • Location (if available)
  • Previous conversation count
Channel details:
  • Communication channel used
  • Session information
  • Integration metadata
Conversation labels:
  • Applied labels
  • Add/remove labels
  • Quick label actions
Click the user to see their complete conversation history and user profile.

Keyboard Navigation

Navigate conversations efficiently with keyboard shortcuts. Press ? while viewing a conversation to see all shortcuts.
Essential shortcuts:
  • J - Previous conversation: navigate to the previous conversation in your filtered list.
  • K - Next conversation: navigate to the next conversation in your filtered list.
  • N - Next message: jump to the next message within the current conversation.
  • P - Previous message: jump to the previous message within the current conversation.
  • ESC - Clear selection or close help: exit message highlighting or close the shortcuts dialog.
  • ? - Show keyboard shortcuts: display the complete list of available shortcuts.
Use J/K navigation to quickly review multiple conversations in sequence. This is much faster than clicking back and forth, especially when reviewing 20+ conversations per session.

Improving Answers from Conversations

Every conversation is an opportunity to improve your AI. Use the “Improve Answer” workflow to refine responses in real-time.

How to Improve an Answer

Step 1: Click any AI message
This opens the knowledge sidebar on the right side of the screen.
Step 2: Review used sources
The sidebar shows:
  • Which knowledge documents the AI referenced
  • Exact text excerpts used (highlighted with letters A, B, C)
  • Links to view full source documents
  • Available but unused sources
Step 3: Identify the issue
If information is missing:
  • Click “Add Snippet” in the sidebar
  • Write the correct or missing information
  • Save to your knowledge base
  • Rebuild profile to make it available
If information is wrong:
  • Navigate to the source document
  • Update or remove incorrect information
  • Sync data providers
  • Rebuild and redeploy
If tone or style is off:
  • Click the guidance link in sidebar
  • Adjust AI behavior instructions
  • Add examples of desired responses
  • Rebuild and test
Step 4: Deploy and monitor
After making changes:
  1. Build a new profile version
  2. Deploy to your active deployment
  3. Monitor next batch of conversations
  4. Verify improvement
Changes to snippets and guidance don’t take effect until you rebuild your profile. Knowledge added from conversations requires a full knowledge sync and rebuild cycle.

Source Attribution

Understanding which sources the AI used helps you diagnose answer quality issues.
Used Sources (Green)
These documents were explicitly referenced in the AI’s response. Each source shows:
  • Document name and metadata
  • Highlighted excerpts (lettered A, B, C for reference)
  • Source type (PDF, webpage, snippet, table row)
  • Link to view full document
  • Date added and file size
Available Sources (Gray)
These sources were retrieved from your knowledge base but not directly cited. This indicates:
  • Content is related but not specific enough
  • Better sources took priority
  • AI found the answer elsewhere
No Sources (Red flag)
If “Used Sources (0)” appears, the AI:
  • Answered from general knowledge (not your data)
  • Made assumptions or guessed
  • Couldn’t find relevant information
When you see zero sources, this is a knowledge gap. Create a snippet immediately.

Batch Operations

Select multiple conversations to perform bulk actions.

How to Select Conversations

Method 1: Hover and click
Hover over any conversation card - a selection circle appears in the top-right corner. Click to select.
Method 2: Selection mode
Click the selection circle on one card to enter selection mode. All cards now show selection circles.
Select multiple:
  • Click individual cards to toggle selection
  • Selection count shown in toolbar at bottom
  • Press ESC to clear selection and exit mode

Batch Actions

With conversations selected, the batch toolbar appears at the bottom:
  • Apply labels - Tag multiple conversations at once for categorization or follow-up.
  • Remove labels - Remove specific labels from selected conversations.
  • Export data - Download conversation data for selected items only.
  • Hide conversations - Bulk hide spam or irrelevant conversations.
Batch hide is permanent. Hidden conversations cannot be recovered. Use this carefully for spam or test data only.

Exporting Conversation Data

Export conversations for compliance, analysis, or backup purposes.

Export Options

Click the “Export” button in the top-right to open the export dialog.
Entity type: Conversation
Choose to export full conversations (all messages) or just conversation metadata.
Format:
  • CSV - Tabular format for spreadsheets and databases
  • JSON - Structured format for programmatic analysis
Filters applied: The export uses your current filter settings (date range, status, involvement, etc.). Review the filter summary before exporting.
Fields included:
Conversation CSV includes:
- conversation_id
- created_at
- updated_at
- status (resolved, unresolved, escalated)
- rating_score
- rating_comment
- message_count
- user_id
- channel
- topic_ids
- labels
- involvement_type
- full_message_transcript
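Once exported, these fields support quick health checks. A minimal pandas sketch computing the resolution mix, escalation rate, and CSAT from the columns above (CSAT excludes the 0/Abandoned and -1/Unoffered sentinels):

```python
import pandas as pd

df = pd.read_csv("conversations.csv", parse_dates=["created_at", "updated_at"])

print(df["status"].value_counts(normalize=True))              # resolution mix
print("Escalation rate:", round((df["status"] == "escalated").mean(), 3))

rated = df[df["rating_score"].between(1, 5)]                  # drop 0 / -1 sentinels
print("Average CSAT:", round(rated["rating_score"].mean(), 2))
```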

Scheduling Regular Exports

Navigate to Settings → Data Exports to set up automatic recurring exports:
Frequency options:
  • Daily (every morning at 6 AM)
  • Weekly (every Monday)
  • Monthly (first of the month)
Delivery methods:
  • Email attachment (for small exports)
  • Download link (for large exports)
  • Webhook to external system
Use scheduled exports to:
  • Maintain compliance records
  • Feed external analytics tools
  • Backup conversation history
  • Generate custom reports
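For the webhook delivery method, you need an HTTP endpoint that accepts the export. The payload format isn't documented here, so this stdlib sketch simply assumes a JSON POST body and persists whatever arrives:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class ExportWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Assumption: the scheduled export arrives as a JSON payload.
        payload = json.loads(body)
        with open("latest_export.json", "w") as f:
            json.dump(payload, f)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), ExportWebhook).serve_forever()
```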
Use conversation data to identify systematic issues rather than isolated incidents.

Common Patterns to Look For

Pattern 1: Topic-specific poor ratings
Filter:
- Topic: "Pricing"
- Rating: 1-2
- Date: Last 30 days

If you see many low-rated pricing conversations:
- Review pricing information in knowledge base
- Check if AI is providing outdated or incorrect prices
- Update guidance to handle pricing questions better
- Consider when to escalate pricing questions
Pattern 2: Channel performance differences
Compare:
- Filter 1: Channel = Web, calculate average CSAT
- Filter 2: Channel = Email, calculate average CSAT

If web chat has lower satisfaction:
- Web users expect faster, shorter responses
- Adjust guidance for channel-specific tone
- Check if formatting renders correctly in web widget
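On exported data, this channel comparison is a one-line groupby. A sketch using the channel and rating_score export fields:

```python
import pandas as pd

df = pd.read_csv("conversations.csv")
rated = df[df["rating_score"].between(1, 5)]  # real ratings only

# Average CSAT and volume per channel, lowest-rated channels first.
by_channel = rated.groupby("channel")["rating_score"].agg(["mean", "count"])
print(by_channel.sort_values("mean"))
```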
Pattern 3: Time-based trends
Compare:
- Week 1 (after deployment): Review conversations
- Week 2: Review conversations
- Week 3: Review conversations

Track improvements:
- Are ratings increasing?
- Is resolution rate improving?
- Are escalations decreasing?
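These week-over-week trends can be computed from an export as well. A sketch bucketing conversations by the week of created_at:

```python
import pandas as pd

df = pd.read_csv("conversations.csv", parse_dates=["created_at"])
df["week"] = df["created_at"].dt.to_period("W")

weekly = df.groupby("week").agg(
    resolution_rate=("status", lambda s: (s == "resolved").mean()),
    escalation_rate=("status", lambda s: (s == "escalated").mean()),
    avg_rating=("rating_score", lambda r: r[r.between(1, 5)].mean()),
)
print(weekly)
```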
Pattern 4: User journey issues
Review conversations from same user:
- First conversation: What was their initial question?
- Follow-up conversations: What wasn't resolved?
- Pattern: Are users asking same question multiple times?

This indicates:
- AI isn't fully resolving the root issue
- Need more comprehensive answers
- Missing follow-up questions in knowledge
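Repeat contacts are easy to surface from an export by grouping on user_id. A minimal sketch flagging users with three or more conversations:

```python
import pandas as pd

df = pd.read_csv("conversations.csv")

# Users who keep coming back may not be getting their root issue resolved.
repeats = (
    df.groupby("user_id")
      .agg(conversations=("conversation_id", "count"),
           unresolved=("status", lambda s: (s != "resolved").sum()))
      .query("conversations >= 3")
      .sort_values("conversations", ascending=False)
)
print(repeats.head(10))
```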
For open-ended investigation, use Message Search to find specific phrases or topics. Navigate to Analyze → Message Search to search across all message content, not just conversation metadata.
Example searches:
"I don't have information" - Find knowledge gaps
"speak to a human" - Find escalation requests
"doesn't work" - Find technical issues
"thank you" - Find satisfaction indicators
"frustrated" or "annoyed" - Find negative sentiment
Learn more in the Message Search documentation.

Best Practices

Regular Review Schedule

Establish a consistent review routine:
Daily (5-10 minutes):
  • Check newest conversations
  • Review any 1-2 star ratings
  • Verify AI is performing as expected
Weekly (30-60 minutes):
  • Filter to Unresolved + Escalated conversations
  • Identify top 3 knowledge gaps
  • Create snippets for common missing information
  • Review metrics trends (CSAT, resolution rate)
Monthly (2-3 hours):
  • Comprehensive performance review
  • Topic-by-topic analysis
  • Compare month-over-month metrics
  • Plan major knowledge or guidance updates
  • Review label usage and organization

Prioritizing What to Review

You can’t review every conversation. Focus on high-impact segments:
Priority 1: Recent poor ratings
Filters:
- Rating: 1-2 (Terrible, Bad)
- Date: Last 7 days
- Limit: 10-20 conversations

Why: Most urgent signal of customer dissatisfaction
Priority 2: Escalated conversations
Filters:
- Status: Escalated
- Date: Last 30 days
- Sort: By topic

Why: Shows where AI needs improvement to handle autonomously
Priority 3: High-volume topics with issues
Filters:
- Topic: [Your top 3 topics by volume]
- Status: Unresolved
- Date: Last 30 days

Why: Fixing common topics has biggest impact
Priority 4: Success stories
Filters:
- Rating: 5 (Amazing)
- Status: Resolved
- Involvement: Fully Autonomous

Why: Learn what's working well to replicate success

Collaborating with Your Team

If multiple team members review conversations:
Assign ownership with labels:
  • Label: “Review: Alex” for individual review assignments
  • Label: “Needs Engineering” for technical follow-up
  • Label: “Training Example” for exceptional conversations
Share insights:
  • Create a shared document with weekly findings
  • Link to specific conversations in team discussions
  • Track knowledge gap backlog together
Rotate review focus:
  • Week 1: Person A reviews billing, Person B reviews technical
  • Week 2: Switch focus areas
  • Ensures fresh perspectives on all topics

Avoiding Common Mistakes

Mistake 1: Only reviewing bad conversations
Problem: You miss learning what’s working well.
Solution: Review a mix of ratings - learn from 5-star conversations too.
Mistake 2: Not documenting changes
Problem: You can’t track what you’ve improved or measure impact.
Solution: Keep a changelog of knowledge additions and guidance updates.
Mistake 3: Making too many changes at once
Problem: You can’t identify which change caused an improvement (or regression).
Solution: Make focused changes, deploy, monitor for 3-7 days, then iterate.
Mistake 4: Ignoring conversation context
Problem: Misunderstanding why the AI responded a certain way.
Solution: Always read the full conversation thread, not just isolated messages.
Mistake 5: Not following up on improvements
Problem: You make changes but never verify they worked.
Solution: After each update, filter to that topic and review the next batch.

Troubleshooting

Can’t find specific conversations

Issue: Looking for a conversation but can’t locate it
Solutions:
  • Check your date range filter - expand to “All time”
  • Verify you’re not filtering by status, topic, or rating
  • Use Message Search to find by content
  • Check if conversation was hidden (Settings → Hidden Conversations)
  • Verify project selection if you have multiple projects

Filters not working as expected

Issue: Filter results seem incorrect or incomplete
Solutions:
  • Clear all filters and reapply one at a time
  • Check “Advanced filters” section - may have hidden filters active
  • Verify date range is set correctly (not in future)
  • Refresh page to clear any cached filter state
  • Check if label names have changed or been deleted

Export is empty or incomplete

Issue: Exported file has no data or missing conversations
Solutions:
  • Verify filters are set correctly before export
  • Check date range - may be too narrow
  • Ensure you have conversations matching filter criteria
  • Try smaller date range if export times out
  • Export in batches if you have 10,000+ conversations

Can’t change conversation status

Issue: Status tag won’t update when clicked
Solutions:
  • Ensure you have edit permissions (not viewer role)
  • Check if conversation is imported (some imported conversations lock status)
  • Refresh page and try again
  • Verify project is not archived

Keyboard shortcuts not working

Issue: J/K or other shortcuts don’t respond
Solutions:
  • Click anywhere on page to ensure focus (not in input field)
  • Close knowledge sidebar if open (blocks some shortcuts)
  • Check if you’re in browser’s “find on page” mode (Ctrl+F)
  • Refresh page to clear any event listener issues

Next Steps

Now that you understand conversation analysis, put it into practice. Remember: conversation review is the foundation of AI improvement. Make it a regular habit, focus on high-impact issues, and track your progress over time. Small, consistent improvements compound into dramatically better customer experiences.