Improve Answers

Your AI improves through a continuous cycle of deployment, monitoring, and refinement. Every conversation reveals opportunities to enhance knowledge, adjust guidance, or clarify instructions. This page shows you how to systematically improve your AI’s response quality based on real user interactions.

Why Continuous Improvement Matters

No AI is perfect on the first try. Your initial configuration is a starting point - real customer conversations reveal:
  • Knowledge gaps - Questions your AI can’t answer with existing sources
  • Guidance mismatches - When tone, style, or approach doesn’t fit the situation
  • Tool usage issues - Times when the AI should have used a tool but didn’t (or vice versa)
  • Edge cases - Unusual questions or scenarios you didn’t anticipate
  • User frustration - Low ratings, repeated questions, or escalations indicate problems
Regular review and refinement ensures your AI gets better over time, not worse.
The best AI teams review conversations weekly to identify patterns and make targeted improvements. Small, frequent updates beat large, infrequent overhauls.

The Improvement Workflow

Follow this systematic approach to improve your AI:

1. Review Conversations

Navigate to Analyze → Conversations to examine real interactions:
  • Sort by rating (ascending) to find poorly-rated conversations
  • Filter by status (unresolved, escalated) to see where AI struggled
  • Filter by topic to focus on specific areas
  • Look for patterns - similar questions with similar issues

2. Analyze Individual Conversations

Click any conversation to open the detail view. For each message:
  • Read the user’s question carefully
  • Evaluate the AI’s response quality
  • Check if the tone and style match your guidance
  • Verify factual accuracy

3. Open the Knowledge Sidebar

Click on any AI message to open the Improve Answer sidebar (right panel). This shows:
Customer Question
  • Summary of what the user asked
  • Helps you understand intent
Guidance Link
  • Direct link to edit AI behavior
  • Use when tone, style, or approach needs adjustment
Knowledge Tools
  • Add snippets for missing information
  • View which sources were used
  • Identify knowledge gaps
Used Sources
  • Documents the AI cited
  • Exact text excerpts (highlighted)
  • Links to view full source
Available Sources
  • Other knowledge that was retrieved but not used
  • May indicate relevant but not perfectly matching content

4. Take Action

Based on your analysis:
If the information is wrong or incomplete:
  • Create a snippet with correct information
  • Update existing data providers
  • Add missing documentation
If the tone or style is off:
  • Edit guidance instructions
  • Add examples of desired responses
  • Adjust audience targeting
If tool usage is incorrect:
  • Update tool descriptions in guidance
  • Add explicit instructions about when to use tools
  • Enable or disable specific tools
If guidance doesn’t match:
  • Check audience filters - user might not match criteria
  • Reorder guidance rules for better priority
  • Create more specific guidance for edge cases

5. Deploy and Monitor

After making changes:
  1. Build new profile version
  2. Deploy to production
  3. Monitor next batch of conversations
  4. Repeat the cycle

Using the Knowledge Sidebar

The Knowledge Sidebar is your primary tool for improving individual answers.

Viewing Source Attribution

When you click an AI message, the sidebar shows which knowledge sources were used:
Attributed Sources
These are documents the AI explicitly referenced. For each source:
  • Source name and metadata
  • Highlighted excerpts (lettered A, B, C…)
  • Links to view full document
  • Option to copy resource URL
Example:
Used Sources (2)

📄 Product Documentation - Pricing
   PDF • Added 2 days ago • 45 KB • Source #123

   (A) "Enterprise plans start at dollar 99/month and include
       unlimited users, priority support, and advanced analytics."

   (B) "Annual billing provides a 20% discount on all plan tiers."

🔗 https://docs.example.com/pricing
Available Sources
These sources were retrieved from your knowledge base but not necessarily cited. They may contain relevant information the AI didn’t use, suggesting:
  • Content is related but not specific enough
  • Better sources took priority
  • AI couldn’t find exact answer in these sources

Creating Snippets from Conversations

When you discover missing or incorrect information, add it immediately:
Step 1: Click “Add Snippet”
  • Opens snippet editor in sidebar
  • Pre-populated with question summary as title
Step 2: Write the correct information
  • Use rich text editor
  • Be concise and clear
  • Include all relevant details
  • Format for easy reading (headings, bullet points)
Step 3: Select collection
  • Choose which knowledge collection to add to
  • Defaults to “Snippets” if available
  • Create new collection if needed
Step 4: Save
  • Snippet is created immediately
  • Will be available after next knowledge sync and rebuild
  • Link opens to view snippet in data provider
Snippets created from conversations won’t be available to your AI until you rebuild your profile. This incorporates the latest knowledge snapshot into the active deployment.

Best Practices for Snippet Creation

Focus on the Question
Write snippets that directly answer the specific question:
Good:
"How do I reset my password?

1. Click 'Forgot Password' on the login page
2. Enter your email address
3. Check your email for a reset link (valid for 1 hour)
4. Click the link and create a new password
5. Your new password must be at least 12 characters"

Avoid:
"Our authentication system uses industry-standard password reset
mechanisms with time-limited tokens..."
Use Clear Formatting
<h2>Canceling Your Subscription</h2>

<p>You can cancel anytime from your account settings:</p>

<ol>
  <li>Navigate to Settings → Billing</li>
  <li>Click "Cancel Subscription"</li>
  <li>Confirm cancellation</li>
</ol>

<p><strong>Important:</strong> You retain access until the end of your
billing period. No refunds for partial months.</p>
Include Context
Help the AI understand when to use this snippet:
"For customers asking about API rate limits:

Free tier: 100 requests/hour
Pro tier: 1,000 requests/hour
Enterprise: Custom limits

Rate limit headers are included in every API response."
Keep It Current
Review and update snippets as your product changes:
  • Mark outdated snippets
  • Create new versions for product updates
  • Archive deprecated information

Analyzing Conversation Metrics

Use aggregate metrics to identify systematic issues; a scripted sketch of these calculations follows the metric list below.

Key Metrics to Monitor

Customer Satisfaction (CSAT)
  • Average rating across conversations
  • Low scores indicate widespread issues
  • Filter by topic to find problem areas
Resolution Rate
  • Percentage of conversations marked “resolved”
  • Low rates suggest knowledge gaps or guidance issues
  • Compare across time periods to track improvement
Escalation Rate
  • How often conversations escalate to humans
  • High rates indicate AI can’t handle common scenarios
  • Review escalated conversations for patterns
Conversation Length
  • Average number of messages per conversation
  • Very long conversations suggest AI isn’t resolving issues efficiently
  • Very short conversations might indicate users giving up
Sentiment Analysis
  • Emotional tone of user messages
  • Increasing negativity during conversation indicates frustration
  • Compare sentiment before and after specific changes
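If you prefer to compute these figures yourself, the minimal sketch below shows one way to derive them from an exported conversation report. It assumes a hypothetical CSV export with rating, status, and message_count columns; adjust the field names to whatever export your workspace actually provides.

# Minimal sketch, assuming a hypothetical CSV export of conversations
# with "rating", "status", and "message_count" columns.
import csv
from statistics import mean

def summarize(path):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))

    ratings = [float(r["rating"]) for r in rows if r["rating"]]
    resolved = sum(1 for r in rows if r["status"] == "resolved")
    escalated = sum(1 for r in rows if r["status"] == "escalated")
    lengths = [int(r["message_count"]) for r in rows]

    return {
        "csat": round(mean(ratings), 2) if ratings else None,
        "resolution_rate": round(resolved / len(rows), 2),
        "escalation_rate": round(escalated / len(rows), 2),
        "avg_messages": round(mean(lengths), 1),
    }

print(summarize("conversations.csv"))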

Finding Patterns

Navigate to Analyze → Metrics and:
Filter by Time Range
  • Compare week-over-week or month-over-month
  • Identify trends after deployments
  • Spot seasonal patterns
Segment by Topic
  • Which topics have lowest satisfaction?
  • Which topics escalate most often?
  • Which topics have best resolution rates?
Segment by Audience
  • Do premium customers have different satisfaction?
  • Do certain regions have more issues?
  • Do specific channels perform worse?
Segment by Channel
  • Website vs. Zendesk vs. Slack performance
  • Adjust guidance per channel if needed

Identifying Knowledge Gaps

Knowledge gaps occur when users ask questions your AI can’t answer with existing sources.

Signs of Knowledge Gaps

Look for conversations where:
  1. No sources used - Knowledge sidebar shows “Used Sources (0)”
  2. Vague answers - AI provides general information instead of specifics
  3. Web search used - AI resorted to external search instead of internal knowledge
  4. Frequent “I don’t know” - AI explicitly states it doesn’t have information
  5. Low confidence - AI hedges with “I think…” or “I’m not sure…”

Systematic Gap Analysis

Use Message Search
Navigate to Analyze → Message Search and:
  1. Search for phrases like:
    • “I don’t have information”
    • “I’m not sure”
    • “I don’t know”
    • “I couldn’t find”
  2. Filter to AI messages only
  3. Review matching messages to find common themes
  4. Create snippets or update data providers for each gap (a scripted version of this search is sketched below)
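If your workspace lets you export messages, the same search can be scripted. The sketch below is a minimal example assuming a hypothetical JSON export with role and text fields; it simply counts how often each low-confidence phrase appears in AI messages.

# Minimal sketch, assuming a hypothetical JSON export of messages
# with "role" and "text" fields.
import json
from collections import Counter

GAP_PHRASES = [
    "i don't have information",
    "i'm not sure",
    "i don't know",
    "i couldn't find",
]

def find_gaps(path):
    with open(path) as f:
        messages = json.load(f)

    hits = Counter()
    for message in messages:
        if message.get("role") != "assistant":
            continue  # AI messages only
        text = message.get("text", "").lower()
        for phrase in GAP_PHRASES:
            if phrase in text:
                hits[phrase] += 1
    return hits

print(find_gaps("messages.json"))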
Review Topic Coverage
Navigate to Analyze → Topics to see:
  • Which topics have most conversations
  • Which topics have lowest satisfaction
  • Which topics are growing vs. declining
Topics with high volume but low satisfaction often indicate knowledge gaps.

Filling Gaps Strategically

Prioritize by Impact
Fill gaps that affect the most users first (a small scoring sketch follows the list):
High Priority:
- Common questions (asked 10+ times per week)
- Premium customer questions
- Questions affecting sales conversion

Medium Priority:
- Occasional questions (asked 2-5 times per week)
- Technical edge cases
- New feature questions

Low Priority:
- One-off questions
- Deprecated product questions
- Out-of-scope topics
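If you track candidate gaps in a list or spreadsheet, a simple weighted score makes this prioritization repeatable. The sketch below is illustrative only; the topics, weekly counts, and weights are hypothetical placeholders, not data from your workspace.

# Minimal sketch: rank candidate knowledge gaps by impact.
# All topics, counts, and weights here are hypothetical.
gaps = [
    {"topic": "API authentication", "asks_per_week": 14, "premium": True, "affects_sales": False},
    {"topic": "Invoice history", "asks_per_week": 3, "premium": False, "affects_sales": True},
    {"topic": "Legacy importer", "asks_per_week": 1, "premium": False, "affects_sales": False},
]

def impact(gap):
    score = gap["asks_per_week"]
    if gap["premium"]:
        score *= 1.5  # weight premium customer questions higher
    if gap["affects_sales"]:
        score *= 2.0  # weight conversion-blocking questions highest
    return score

for gap in sorted(gaps, key=impact, reverse=True):
    print(f"{gap['topic']}: impact {impact(gap):.1f}")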
Choose the Right Solution
For one-off questions:
  • Create a snippet
For entire missing sections:
  • Crawl existing documentation you haven’t indexed
  • Write comprehensive documentation if it doesn’t exist
For frequently changing information:
  • Use search tables instead of snippets
  • Set up automatic syncing from source systems

Refining Guidance Based on Feedback

Use conversation feedback to improve how your AI behaves.

Guidance Issues to Watch For

Tone Mismatches
  • AI is too formal when users want casual
  • AI is too casual for enterprise customers
  • Inconsistent personality across conversations
Length Issues
  • Answers are too long and overwhelming
  • Answers are too brief and unhelpful
  • Doesn’t match user’s question complexity
Structure Problems
  • Walls of text instead of formatted lists
  • Missing headers or organization
  • No examples when they’d help
Over/Under-Explaining
  • Too much background information for experts
  • Too little context for beginners
  • Not adapting to user’s demonstrated expertise

Refining Instructions

Edit guidance in Behavior → Guidance to address issues:
Add Tone Guidelines
Before:
"Help users with product questions."

After:
"Help users with product questions using a friendly, conversational tone.
Imagine you're a helpful colleague explaining something to a friend. Use
simple language and avoid corporate jargon. Start responses with a warm
acknowledgment before diving into the answer."
Specify Length
"Keep initial responses under 150 words. If users need more detail, they'll
ask follow-up questions. Use bullet points instead of paragraphs when listing
more than 2 items."
Define Structure
"Format answers using this structure:
1. Brief direct answer to the question (1-2 sentences)
2. Step-by-step instructions if applicable
3. Link to relevant documentation for more details
4. Offer to clarify or elaborate"
Add Audience Adaptation
"Adjust explanation depth based on user expertise. If they use technical
terms, match their level. If they seem new to the product, provide more
context and explain concepts simply."

Testing Guidance Changes

Before deploying revised guidance:
  1. Use the preview frame - Test sample messages
  2. Check multiple scenarios - Try different question types
  3. Verify tool usage - Confirm tools are used appropriately
  4. Review tone - Ensure personality is consistent
Build a new version only when you’re satisfied with the changes.

Common Improvement Patterns

Learn from these frequent scenarios:

Pattern 1: Inconsistent Product Information

Symptoms:
  • AI gives different answers to similar questions
  • Some answers are outdated
  • Contradictory information from different sources
Root Cause:
  • Multiple sources with conflicting information
  • Outdated documentation still in knowledge base
  • Lack of single source of truth
Solution:
  1. Identify canonical source (official docs, product specs)
  2. Remove or archive conflicting sources
  3. Create snippets to override outdated information
  4. Set up automatic syncing from authoritative source
  5. Add “Last Updated” dates to snippets

Pattern 2: Wrong Tone for Audience

Symptoms:
  • Enterprise customers receive casual responses
  • Free users feel responses are too formal
  • Channel-specific tone issues (Slack vs. website)
Root Cause:
  • Single guidance for all users
  • No audience segmentation
  • Channel context ignored
Solution:
  1. Create audience-specific guidance
  2. Add filters: User.plan = "enterprise"
  3. Adjust tone per channel
  4. Order guidance from specific to general (see the example below)
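For example, an ordered set of audience-specific guidance rules might look like the following. Only the User.plan filter above comes from this page; the channel filter and the rule text are hypothetical and depend on the audience fields available in your workspace.

1. Audience: User.plan = "enterprise"
   "Use a formal, concise tone. Reference the customer's account team
   for contract or billing questions."

2. Audience: channel = "slack" (hypothetical filter)
   "Keep responses short and conversational; link to documentation for
   anything longer than a few sentences."

3. Audience: everyone (fallback)
   "Use a friendly, professional tone suitable for all users."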

Pattern 3: Repeated Escalations for Same Issue

Symptoms:
  • Same question type always escalates
  • AI says it can’t help
  • Users frustrated after providing info
Root Cause:
  • Missing knowledge for common scenario
  • Guidance doesn’t cover this use case
  • Tool needed but not enabled
Solution:
  1. Add comprehensive snippet for the scenario
  2. Update guidance with explicit instructions
  3. Enable required tools
  4. Add examples of how to handle this situation

Pattern 4: Good Information, Poor Presentation

Symptoms:
  • AI has correct information
  • High escalation despite accurate answers
  • Users say “that didn’t help”
Root Cause:
  • Information is buried in long responses
  • No clear action steps
  • Missing examples or context
Solution:
  1. Update guidance to structure responses better
  2. Add instruction: “Always include action steps”
  3. Require examples for complex topics
  4. Limit paragraph length

Pattern 5: Tool Overuse or Underuse

Symptoms:
  • AI searches web when internal docs have answer
  • AI doesn’t search when it should
  • Offers handoff too quickly or too slowly
Root Cause:
  • Vague tool descriptions
  • No clear usage criteria in guidance
  • Competing tool options
Solution:
  1. Update tool descriptions to be more specific
  2. Add explicit instructions: “Only use search_web after searching internal docs”
  3. Provide decision criteria: “Offer handoff for: billing disputes, account security issues”
  4. Remove unnecessary tools

Deployment Best Practices

Staging Changes

For major improvements:
  1. Build version without deploying
  2. Test in preview or isolated environment
  3. Get team review if available
  4. Deploy to production when confident

Tracking Changes

Keep a changelog of improvements:
Version v0.5 (2024-03-15)
- Added 12 snippets for API authentication questions
- Updated guidance to be more concise (under 150 words)
- Enabled search_knowledge_base tool for technical support guidance

Impact:
- CSAT increased from 4.2 to 4.6
- Escalation rate decreased from 15% to 8%
- Average conversation length reduced from 6 to 4 messages

Measuring Impact

After each deployment:
  1. Wait 3-7 days for sufficient data
  2. Compare metrics to the previous period (see the comparison sketch after this list)
  3. Review new conversations for improvement
  4. Iterate if results aren’t as expected
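A simple before/after comparison makes step 2 concrete. The sketch below reuses the figures from the changelog example above; the summaries could come from the summarize() sketch in the metrics section or from numbers read off the dashboard.

# Minimal sketch: compare metric summaries before and after a deployment.
def compare(before, after):
    for key in before:
        old, new = before[key], after[key]
        print(f"{key}: {old} -> {new} ({new - old:+.2f})")

compare(
    {"csat": 4.2, "escalation_rate": 0.15, "avg_messages": 6.0},
    {"csat": 4.6, "escalation_rate": 0.08, "avg_messages": 4.0},
)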

Rolling Back

If a deployment makes things worse:
  1. Navigate to Behavior → General
  2. Select previous version
  3. Click “Set as Active”
  4. Review what went wrong before trying again
Always monitor metrics for 24-48 hours after major deployments. Set calendar reminders to check conversation quality and ratings.

Next Steps

Now that you understand how to improve answers, remember that improvement is a continuous process. The most successful AI teams make small, frequent refinements based on real user conversations rather than waiting for major overhauls.