Your AI agent improves through continuous optimization. This guide provides seven systematic strategies to enhance performance, increase resolution rates, and improve customer satisfaction.
Teams reviewing metrics weekly see 15-25% improvement in resolution rates within 90 days. Consistency matters more than intensity.

1. Track Key Performance Indicators

Essential metrics (a computation sketch follows the list):
  • Involvement Rate: % of conversations with AI participation (target: 80%+)
  • Resolution Rate: % resolved without escalation (excellent: 75%+, good: 60-75%)
  • CSAT: % of 4-5 star ratings (leading: 80%+, solid: 70-80%)
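A minimal sketch of how these three rates are derived, assuming a conversation export with hypothetical fields (ai_involved, escalated, rating):

```python
# Compute the three KPIs from a conversation export.
# Field names (ai_involved, escalated, rating) are hypothetical.
conversations = [
    {"ai_involved": True,  "escalated": False, "rating": 5},
    {"ai_involved": True,  "escalated": True,  "rating": 3},
    {"ai_involved": False, "escalated": False, "rating": None},
]

involved = [c for c in conversations if c["ai_involved"]]
rated = [c for c in involved if c["rating"] is not None]

involvement_rate = len(involved) / len(conversations) * 100
resolution_rate = sum(not c["escalated"] for c in involved) / len(involved) * 100
csat = sum(c["rating"] >= 4 for c in rated) / len(rated) * 100

print(f"Involvement {involvement_rate:.0f}% | Resolution {resolution_rate:.0f}% | CSAT {csat:.0f}%")
```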
Navigate to Analyze → Metrics for dashboards tracking these over time.
  1. Establish baseline (Analyze → Metrics, last 30 days)
  2. Set 90-day targets (aim for 10-15pp improvement)
  3. Weekly reviews (15 min): Check trends, spot issues
  4. Monthly deep dives (1 hour): Compare month-over-month, segment by channel
Pattern recognition:
  • High resolution + low CSAT = accuracy issues
  • Low resolution + high “No Answer” = knowledge gaps
  • CSAT 80%+ + Resolution 75%+ = healthy performance

2. Identify Poor Performing Topics

Navigate to Analyze → Topics and review the treemap:
  • Large red boxes: High volume + low resolution = maximum impact
  • Yellow boxes: Moderate performance, improvement potential
  • Green boxes: Success patterns to replicate
Prioritize by Impact Score: Volume × (100 − Resolution Rate). Focus on topics with the highest impact first.
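To make the ranking concrete, here is a minimal sketch applying that formula; the topic names and figures are made up:

```python
# Rank topics by Impact Score = volume × (100 − resolution rate).
# Topic volumes and resolution rates are hypothetical.
topics = {
    "billing": {"volume": 400, "resolution": 55},
    "shipping": {"volume": 250, "resolution": 80},
    "returns": {"volume": 120, "resolution": 40},
}

def impact(t):
    """Impact Score = volume × (100 − resolution rate)."""
    return t["volume"] * (100 - t["resolution"])

for name, t in sorted(topics.items(), key=lambda kv: impact(kv[1]), reverse=True):
    print(name, impact(t))
# billing 18000, returns 7200, shipping 5000 → start with billing
```

For each priority topic: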
  1. Click topic to filter conversations
  2. Review 10-20 conversations to identify root cause:
    • Missing knowledge → Add snippets/documentation
    • Wrong information → Fix sources
    • Poor presentation → Update guidance
    • Too complex → Create escalation rules
  3. Implement fixes and rebuild
  4. Monitor improvement over 2 weeks
Action priorities:
  • This week: Top 1-2 topics by impact
  • This month: Top 5 topics
  • This quarter: All topics below 60% resolution

3. Identify Not Yet Enabled Tickets

Navigate to Analyze → Conversations, filter to “Not Involved” conversations. Common causes:
  • Channels not AI-enabled (80%+ of conversations uninvolved in a specific channel)
  • Brand/ticket type exclusions (patterns in Zendesk brand or Salesforce case type)
  • Historical imports (created before deployment date)
  • Outbound messages (agent-initiated, expected)
Target: 80%+ involvement rate for customer-initiated inquiries.
  1. Analyze uninvolved conversation breakdown
  2. Size expansion opportunity (exclude historical/intentional exclusions)
  3. Phased rollout:
    • Month 1: Enable one excluded segment, monitor quality
    • Month 2: Expand if successful
    • Month 3: Full rollout across appropriate channels
Calculate potential: uninvolved conversations × expected resolution rate = additional autonomous resolutions
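A back-of-envelope example of that calculation, with made-up numbers:

```python
# Hypothetical sizing: 1,200 uninvolved customer-initiated conversations per
# month in a channel you could enable, at a 65% baseline resolution rate.
uninvolved = 1200
expected_resolution_rate = 0.65  # substitute your own baseline

additional_autonomous = uninvolved * expected_resolution_rate
print(f"~{additional_autonomous:.0f} additional autonomous resolutions/month")  # ~780
```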

4. Use Improve Answer to Fix Wrong Information

Find incorrect responses by filtering (a script sketch for offline exports follows the list):
  • 1-2 star ratings with feedback like “wrong,” “incorrect,” “outdated”
  • Escalated conversations where agents corrected AI
  • Message search for “actually,” “incorrect,” “that’s wrong” (operator messages)
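If you also keep offline exports, the same filter is easy to script. A minimal sketch, assuming a hypothetical export format with rating, author, and text fields:

```python
# Flag conversations that show signs of wrong AI answers.
# The export format (rating, messages, author, text) is hypothetical.
CORRECTION_PHRASES = ("actually", "incorrect", "that's wrong", "outdated")

def flag_for_review(conversations):
    """Yield conversations with low ratings or operator corrections."""
    for conv in conversations:
        low_rated = conv.get("rating") in (1, 2)
        corrected = any(
            phrase in msg["text"].lower()
            for msg in conv["messages"]
            if msg["author"] == "operator"
            for phrase in CORRECTION_PHRASES
        )
        if low_rated or corrected:
            yield conv
```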
Click AI message → Improve Answer sidebar shows sources used.
  1. Identify root cause:
    • Outdated documentation → Update source
    • Conflicting sources → Remove duplicates
    • Misinterpreted source → Add explicit snippet
    • Wrong context → Add conditions
  2. Fix source (Train → Data Providers or Snippets)
  3. Rebuild and deploy
Quality loops:
  • Weekly: Review 10-15 low-rated conversations, fix wrong info
  • Monthly: Audit snippets for outdated pricing/features/policies
  • Quarterly: Remove deprecated sources, consolidate duplicates

5. Use Improve Answer to Add Missing Information

Find knowledge gaps via:
  • “No Answer” conversations (Analyze → Metrics, Answer Completeness chart)
  • Message search for “I don’t have information,” “I’m not sure,” “I couldn’t find”
  • Unresolved + Fully Autonomous conversations
Review what questions triggered these responses.
Quick fix: Create a snippet (click AI message → Add Snippet → write the ideal answer).
Long-term: Create comprehensive documentation for topic clusters.
Choose source type:
  • Snippets: Single Q&A, quick gaps, policy clarifications
  • PDFs: Product manuals, process docs
  • Webpage crawls: Help center, living documentation
  • Tables: Pricing, specs, comparisons
  • Database integrations: Order status, account info
Proactive gap filling:
  • New product launch: Document before release, test AI responses
  • Seasonal prep: Holiday shipping, tax season FAQs
  • Track recurring unanswered questions weekly and prioritize by frequency (see the sketch after this list)
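A minimal sketch of that frequency tracking, using a hypothetical weekly log of questions that triggered "no answer" responses:

```python
from collections import Counter

# Hypothetical weekly log of questions the AI could not answer.
unanswered = [
    "how do i change my billing address",
    "can i pause my subscription",
    "how do i change my billing address",
    "can i pause my subscription",
    "do you ship to canada",
    "can i pause my subscription",
]

# Document the most frequent gaps first.
for question, count in Counter(unanswered).most_common(3):
    print(count, question)
# 3 can i pause my subscription
# 2 how do i change my billing address
# 1 do you ship to canada
```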

6. Foster a QA Culture Among Your Team

Label structure (Settings → Labels):
  • Workflow: “QA: Needs Review,” “QA: Reviewed,” “Review: [Name]”
  • Findings: “Knowledge Gap - [Topic],” “QA: Excellent Example,” “Wrong Information”
Weekly workflow:
  • Monday: Team lead assigns 5 conversations/person with labels
  • During week: Team reviews, applies finding labels, documents in shared doc
  • Friday: Team lead reviews findings, creates updates, deploys, removes temporary labels
Keep permanent labels (Knowledge Gap, Excellent Example), remove workflow labels weekly.
Start small:
  • Week 1-2: Review 5 conversations, document only
  • Week 3-4: Add labels, create 1-2 snippets
  • Month 2+: Full workflow with regular deployments
Team practices:
  • Shared doc for weekly findings and actions
  • 15-30 min weekly sync: metrics review, share findings, plan actions, celebrate wins
  • Rotate review topics monthly for cross-training
  • Set boundaries: 10 reviews/week max per person
  • Celebrate excellent responses publicly
Measure impact: Track CSAT, resolution rate, snippets created, weekly reviews completed

7. Review DSAT Conversations

Filter to 1-2 star ratings (Analyze → Conversations or Metrics → click rating chart). Check:
  • Customer feedback text for patterns: “didn’t answer,” “wrong,” “too long,” “misunderstood”
  • User context sidebar: device (mobile/desktop), page visited, previous conversations
  • AI message sources (Improve Answer sidebar): zero sources, wrong sources, or poor presentation
Common DSAT causes (% typical):
  • Knowledge gap (40%) → Missing information
  • Wrong information (20%) → Incorrect sources
  • Misunderstood question (15%) → Guidance issues
  • Poor presentation (15%) → Formatting/tone
  • Inherent complexity (10%) → Needs human
  • Unrealistic expectations (5%) → No action needed
Process 10-20 DSAT conversations weekly:
  1. Read full thread, identify root cause
  2. Batch by category (knowledge gaps, wrong info, presentation); see the sketch after this list
  3. Create action plan with priorities
  4. Implement fixes (snippets, source updates, guidance changes)
  5. Track DSAT reduction by topic month-over-month
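A minimal sketch of the batching step, assuming your weekly review produces (conversation_id, root_cause) pairs using the categories above:

```python
from collections import defaultdict

# Hypothetical output of a weekly DSAT review.
reviewed = [
    ("c101", "knowledge_gap"),
    ("c102", "wrong_information"),
    ("c103", "knowledge_gap"),
    ("c104", "poor_presentation"),
    ("c105", "knowledge_gap"),
]

batches = defaultdict(list)
for conv_id, cause in reviewed:
    batches[cause].append(conv_id)

# Largest batch first: one fix there reaches the most customers.
for cause, ids in sorted(batches.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(f"{cause}: {len(ids)} conversations → one batched fix")
```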
Document patterns: Track recurring DSAT themes and validate that fixes reduced similar complaints.
Optional: For critical customers, have the account manager follow up personally.

Monthly Optimization Routine

Week 1:
  • Monday (30 min): Review metrics dashboard, compare to previous month
  • Wednesday (45 min): Review Topics treemap, prioritize top 3-5 by impact
  • Friday (30 min): Export metrics, share with team
Week 2:
  • Monday (45 min): Review 15-20 conversations in priority topic, create action plan
  • Wednesday (1 hour): Assign QA reviews to team (20-25 conversations)
  • Friday (30 min): Review team findings, prioritize fixes
Week 3:
  • Mon-Wed (2-3 hours): Create snippets, fix sources, update guidance
  • Thursday (1 hour): Review changes, build new profile
  • Friday (30 min): Deploy, announce improvements
Week 4:
  • Monday (30 min): Check deployed changes, look for issues
  • Wednesday (1 hour): Compare metrics to Week 1, measure impact
  • Friday (1 hour): Team retrospective, set next month goals, celebrate wins

Measuring Success

Track these indicators:
  • Lagging (results): CSAT ↑, Resolution ↑, DSAT ↓, Escalations ↓
  • Leading (activities): Conversations reviewed, snippets created, deployments, QA participation
Expected compound growth: 10-15pp improvement per quarter in resolution and CSAT.
Adjust when:
  • Metrics plateau → Focus on guidance refinement vs knowledge additions
  • Metrics regress → Review recent changes, consider rollback
  • Team engagement drops → Reduce review quota, increase recognition
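As a rough illustration of those quarterly gains, here is a sketch with assumed numbers: a 60% starting resolution rate, the 12pp midpoint gain, and a taper toward a 90% practical ceiling (the taper and ceiling are assumptions, since gains come harder as easy fixes are exhausted):

```python
# Rough projection of quarterly resolution-rate gains.
# Starting rate, gain, taper, and ceiling are all assumed for illustration.
rate, gain, ceiling = 60.0, 12.0, 90.0
for quarter in range(1, 5):
    rate = min(rate + gain, ceiling)
    gain *= 0.75  # assume later gains come harder
    print(f"Q{quarter}: {rate:.0f}% resolution")
# Q1: 72%, Q2: 81%, Q3: 88%, Q4: 90%
```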

Next Steps