Seven proven strategies to continuously improve your botBrains AI Agent performance, resolution rates, and customer satisfaction
Your AI agent is never truly “done” - it’s a living system that improves through continuous optimization. The most successful teams treat AI optimization as an ongoing practice, not a one-time project. This guide walks you through seven proven strategies to systematically improve your AI’s performance, increase autonomous resolution rates, and deliver exceptional customer experiences.
Even a well-configured AI agent will face new challenges as your business evolves:
Product changes introduce new questions your AI hasn’t encountered
Knowledge gaps emerge as customers ask novel questions
Customer expectations shift over time
Seasonal patterns create different support loads
Edge cases reveal themselves only through real usage
The teams that excel at AI support don’t just deploy and forget - they establish systematic optimization routines that compound improvements over time.
Teams that review metrics weekly and conversations daily see 15-25% improvement in resolution rates within the first 90 days. Consistency matters more than intensity.
Strategy 1: Track KPIs

Involvement Rate
The percentage of conversations where your AI participates in any way (autonomous, public, or private involvement).
Involvement Rate = (AI-Involved Conversations / Total Conversations) × 100%

Target: 80%+ involvement means the AI is assisting with most customer interactions
Baseline: Track your current rate as your starting point
This metric shows how much of your support workload the AI is touching. Low involvement rates indicate missed opportunities for automation.

Resolution Rate
The percentage of conversations successfully resolved without escalation or abandonment.
High resolution rates indicate your AI has the knowledge and guidance to handle most questions autonomously.

Customer Satisfaction (CSAT)
The percentage of customers who rate their experience positively (4-5 stars).
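If you export conversations for your own reporting, all three KPIs reduce to simple ratios. A minimal Python sketch, assuming a hypothetical export where each conversation has involvement, status, and rating fields (your actual export schema may differ):

```python
# Compute the three core KPIs from an exported conversation list.
# Field names below are assumptions -- adapt them to your export schema.

def compute_kpis(conversations: list[dict]) -> dict:
    total = len(conversations)
    involved = sum(1 for c in conversations
                   if c.get("involvement") in ("autonomous", "public", "private"))
    resolved = sum(1 for c in conversations if c.get("status") == "resolved")
    ratings = [c["rating"] for c in conversations if c.get("rating") is not None]
    satisfied = sum(1 for r in ratings if r >= 4)  # CSAT counts 4-5 star ratings

    return {
        "involvement_rate": 100 * involved / total if total else 0.0,
        "resolution_rate": 100 * resolved / total if total else 0.0,
        "csat": 100 * satisfied / len(ratings) if ratings else 0.0,
    }

# Example: if 80 of 100 conversations had AI involvement,
# involvement_rate = (80 / 100) x 100 = 80%.
```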
Strategy 2: Identify Poor Topics

Navigate to Analyze → Topics to see your conversation landscape.

Topic Resolution Treemap
The treemap shows topic health at a glance:
Size = Conversation volume (larger = more conversations)
Color = Resolution rate (green = high, yellow = moderate, red = low)
Position = Grouped by similarity
Priority Identification Strategy:
Find large red boxes - High volume + low resolution = maximum impact opportunity
Scan yellow boxes - Moderate performance with improvement potential
Note green boxes - Success stories to learn from
Example:
Topic: "API Authentication"
Size: Large (150 conversations/month)
Color: Red (35% resolution rate)
Priority: HIGH - lots of customers struggling

Topic: "Shipping Status"
Size: Medium (60 conversations/month)
Color: Yellow (62% resolution rate)
Priority: MEDIUM - room for improvement

Topic: "Return Policy"
Size: Large (180 conversations/month)
Color: Green (85% resolution rate)
Priority: LOW - performing well, use as template
Step 2: Click Through for Details
Click any topic in the treemap or table to filter conversations to just that topic. Review:
What specific questions are customers asking?
How is the AI responding?
What knowledge sources is it using (or missing)?
Are answers accurate but poorly formatted?
Do certain edge cases cause consistent failures?
Step 3: Compare Against Benchmarks
For each priority topic, ask:
Is the low performance because:
[ ] Missing knowledge - AI doesn't have the information
[ ] Wrong information - AI has outdated or incorrect data
[ ] Poor guidance - AI has info but presents it badly
[ ] Inherent complexity - Topic requires human judgment
[ ] Tool limitations - AI needs capabilities it doesn't have
Once you’ve identified a problem topic:

For Knowledge Gaps:
Filter conversations to that topic
Review 10-20 conversations
List common questions the AI can’t answer
Create snippets or add documentation
Rebuild and deploy
Monitor improvement over next 2 weeks
For Quality Issues:
Review low-rated conversations in the topic
Check if AI has correct information but poor presentation
Update guidance with topic-specific instructions
Add examples of ideal responses
Deploy and verify improvement
Example Action Plan:
Topic: "API Rate Limits"
Current Resolution: 38%
Volume: 85 conversations/month

Actions:
✓ Created 3 snippets covering rate limit tiers
✓ Added table of rate limits by plan type
✓ Updated guidance to format limits as tables
✓ Added examples showing how to interpret 429 errors

Target: 65% resolution within 30 days
Strategy 3: Find Uninvolved Tickets

Uninvolvement Rate = (Not Involved Conversations / Total Conversations) × 100%

If 40% of your conversations have no AI involvement, that's 40% of your workload where AI could potentially help but doesn't.
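Before digging into the causes below, it helps to see where uninvolved conversations concentrate. A minimal pandas sketch, assuming a conversation export with hypothetical channel and involvement columns:

```python
import pandas as pd

# Assumed export: one row per conversation with "channel" and "involvement".
df = pd.read_csv("conversations.csv")

uninvolved = df["involvement"] == "not_involved"
by_channel = (
    uninvolved.groupby(df["channel"])
    .mean()                      # fraction of uninvolved conversations
    .mul(100)
    .round(1)
    .sort_values(ascending=False)
)
print(by_channel)
# A channel sitting at 80%+ uninvolvement is probably not AI-enabled (Cause 1).
```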
Cause 1: Channel Segmentation
Some channels may not have AI enabled.

Detection:
Filter uninvolved conversations by channel. If one channel has 80%+ uninvolvement, it's likely not AI-enabled.

Solution:
Review deployment settings for that channel
Enable AI deployment if appropriate
Consider whether channel should have AI (some channels like phone may be human-only by design)
Cause 2: Brand or Ticket Type Exclusion
In Zendesk or Salesforce, certain brands or ticket types may be excluded from AI.

Detection:
Export uninvolved conversations and check for patterns in:
Zendesk brand
Salesforce case type
Ticket tags or categories
Solution:
Example: The "Premium Support" Zendesk brand has no AI.

Decision tree - should premium customers get AI assistance?
- YES → Enable AI deployment for the Premium Support brand
  → Consider private involvement mode (copilot) if you want human control
- NO → Keep AI disabled, but track if this impacts metrics
Cause 3: Imported Historical Tickets
Tickets created before AI deployment won't have involvement.

Detection:
Check created_at dates for uninvolved tickets. If they're all older than your deployment date, this is expected.

Solution:
Filter your analysis to exclude historical imports:
Filters:
- Date: After [deployment date]
- Involvement: Not Involved
Cause 4: Operator-Initiated Outbound Messages
Human agents proactively reaching out won't trigger AI.

Detection:
Review message flow. If the first message is from an operator (not the customer), it's outbound.

Solution:
This is expected behavior. Consider if AI copilot mode could help agents draft these messages, but don’t force AI involvement on outbound.
Phase 1 (Month 1):
- Enable AI for one previously excluded segment
- Monitor closely for quality issues
- Measure involvement and resolution rates

Phase 2 (Month 2):
- If Phase 1 successful, expand to next segment
- Compare metrics to Phase 1
- Adjust guidance if needed

Phase 3 (Month 3):
- Full rollout across all appropriate channels/brands
- Track overall involvement rate improvement
- Measure impact on team workload
Aim for 80%+ involvement rate as a healthy target. Not every conversation should have AI (outbound, historical, etc.), but most customer-initiated inquiries should at least get AI assistance.
Strategy 4: Fix Wrong Information

Method 1: Low-Rated Conversations

Filters:
- Rating: 1-2 stars (Terrible, Bad)
- Date: Last 7 days
- Sort: By rating (lowest first)
Read customer feedback comments. Look for phrases like:
“That’s not correct”
“Wrong information”
“That’s outdated”
“Actually, it’s…”
Method 2: Escalated Conversations After AI Response
Filters:
- Status: Escalated
- Involvement: Public Involvement
- Date: Last 30 days
Review conversations where the AI provided an answer but a human had to correct it. The correction reveals the wrong information.

Method 3: Message Search for Corrections
Navigate to Analyze → Message Search and search for correction phrases like those listed above.
When you find incorrect information:

Step 1: Open the conversation detail
Click the conversation to see the full message history.

Step 2: Click the incorrect AI message
This opens the Improve Answer sidebar showing:
Used sources
Available sources
Guidance link
Step 3: Review used sources
The sidebar highlights which knowledge documents the AI referenced:
Example:
AI message: "Premium plans start at $79/month"
Customer feedback: "That's wrong, they start at $99"

Used Sources:
📄 Pricing Documentation (outdated)
(A) "Premium plans start at $79/month with annual billing."
Source: pricing_2023.pdf | Added: 8 months ago
Step 4: Identify the root cause
Common scenarios:

Scenario A: Outdated documentation
- Source has old information
- Fix: Update the source document in your system

Scenario B: Conflicting sources
- Multiple sources with different prices
- Fix: Remove outdated source, keep canonical version

Scenario C: Misinterpreted source
- Source is correct but AI read it wrong
- Fix: Rewrite source for clarity or add explicit snippet

Scenario D: Source is correct for wrong context
- Information is right for old plan, wrong for new plan
- Fix: Add context or conditions to source
Establish a systematic process to prevent wrong information from recurring:

Weekly Quality Audit:
Every Monday:
1. Filter to 1-2 star ratings from last week
2. Review 10-15 conversations
3. Identify any incorrect information
4. Create list of corrections needed
5. Update sources by Friday
6. Deploy updated profile
7. Monitor next week for improvement
Source Review Cadence:
Monthly:
- Review all snippets for accuracy
- Check for outdated pricing, features, policies
- Verify external documentation URLs still work
- Update dates on time-sensitive information

Quarterly:
- Audit all data providers
- Remove deprecated sources
- Consolidate duplicate information
- Document source of truth for each topic
Correction Tracking:
Keep a log of corrections made:
Example log:
Date: March 15
Issue: Incorrect pricing for Premium plan
Source: pricing_2023.pdf
Old info: $79/month
Correct info: $99/month
Action: Updated pricing_2024.pdf, removed old file
Impact: Pricing topic resolution improved from 45% to 78%
Never assume an AI response is wrong based on a single customer complaint. Verify against your canonical source of truth before making changes. Customers can also be mistaken.
Strategy 5: Add Missing Information

Method 1: “No Answer” Conversations
Navigate to Analyze → Metrics and review the Answer Completeness chart:
Complete: AI provided full answer
Incomplete: AI answered but indicated uncertainty
No Answer: AI explicitly couldn’t answer
Filter conversations to the “No Answer” category and review what questions triggered these responses.

Method 2: Message Search
Navigate to Analyze → Message Search and search for:
AI message search queries:
- "I don't have information about"
- "I don't have access to"
- "I'm not sure about"
- "I couldn't find"
- "I don't know"
- "I'm unable to"
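To run this search in bulk rather than one query at a time, a small sketch over an exported message list (the ai_messages.json format is an assumption, not a documented botBrains export):

```python
import json

# Phrases that signal the AI couldn't answer (from the list above).
NO_ANSWER_PHRASES = [
    "i don't have information about",
    "i don't have access to",
    "i'm not sure about",
    "i couldn't find",
    "i don't know",
    "i'm unable to",
]

# Assumed format: [{"conversation_id": "...", "text": "..."}, ...]
with open("ai_messages.json") as f:
    messages = json.load(f)

gaps = [m for m in messages
        if any(p in m["text"].lower() for p in NO_ANSWER_PHRASES)]

print(f"{len(gaps)} potential knowledge gaps found")
for m in gaps[:10]:
    print(m["conversation_id"], "-", m["text"][:80])
```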
Each result reveals a knowledge gap.

Method 3: Unresolved + Low Involvement
Filters:
- Status: Unresolved
- Involvement: Fully Autonomous
- Rating: Any
- Date: Last 30 days
These are conversations where AI tried to help but couldn’t resolve the issue - often due to missing knowledge.
When you discover a knowledge gap:

Quick Fix: Create a Snippet
Open the conversation with the knowledge gap
Click the AI message that shows the gap
Click Add Snippet in the sidebar
Write the answer the AI should have provided:
Example:
User question: "Can I integrate with Zapier?"
AI response: "I don't have information about Zapier integration."

Create snippet:
Title: "Zapier Integration"
Content:
Yes, botBrains integrates with Zapier. Here's how to set it up:
1. Navigate to Settings → Integrations
2. Click "Connect Zapier"
3. Authorize the connection in Zapier
4. Choose trigger events (new conversation, rating received, etc.)
5. Map fields to your desired actions

Available triggers:
- New conversation started
- Conversation resolved
- Customer rating received
- Escalation occurred

Available actions:
- Create conversation
- Add message
- Update conversation status
- Apply label

Learn more: https://docs.botbrains.io/integrations/zapier
Save the snippet
Rebuild profile
Monitor next similar question
Long-term Fix: Comprehensive Documentation
For topics with many related questions:
Identify the topic cluster
Create comprehensive documentation covering:
Overview
Common questions
Step-by-step procedures
Edge cases
Troubleshooting
Examples
Add as data provider (PDF or crawl webpage)
Rebuild profile
Example:
Topic: "Mobile App Features"
Gap: 15 questions about features with no answers

Action:
- Wrote comprehensive mobile app documentation
- Created sections for each major feature
- Added screenshots and examples
- Published to help center
- Crawled help center page into botBrains

Result: Resolution rate improved from 32% to 71%
When to use each source type:

Snippets - Quick, specific Q&A
Use for:
- Single questions with clear answers
- Quick gaps discovered in conversation review
- Override wrong information temporarily
- Policy clarifications

Example: "What's your refund policy?"
PDFs - Comprehensive documentation
Use for:
- Product manuals
- Process documentation
- Training materials
- Policy documents

Example: Employee handbook, API documentation
Webpage Crawls - Living documentation
Use for:
- Help center articles
- Public documentation
- Product pages
- Blog posts

Example: Your public FAQ or knowledge base
Tables - Structured data
Use for:
- Pricing tiers
- Product specifications
- Feature comparison
- Status lookups

Example: Rate limits by plan type
Database Integrations - Dynamic data
Use for:
- Order status
- Account information
- Real-time availability
- Custom records

Example: Salesforce cases, Zendesk tickets
Don’t wait for customers to hit every gap. Anticipate missing knowledge:

New Product Launch Checklist:
Before launching a new feature:
[ ] Create product documentation
[ ] Add to help center
[ ] Crawl documentation into botBrains
[ ] Create snippet summary
[ ] Test AI responses to common questions
[ ] Update guidance with feature-specific instructions
[ ] Deploy before launch
[ ] Monitor conversations during launch week
Seasonal Preparation:
Examples:

Holiday season approaching:
- Add shipping deadline documentation
- Create gift card policy snippets
- Update return window for holiday purchases
- Add snippets for common holiday questions

Tax season:
- Add tax form documentation
- Create snippets for tax-related questions
- Update guidance for financial sensitivity

Product update cycle:
- Document new features before release
- Archive deprecated feature docs
- Update getting started guides
- Create migration documentation
Competitive Monitoring:
Track questions you can't answer yet:

Week 1:
- "Do you support SSO?" (asked 3 times) - NO INFO
- "What's your uptime SLA?" (asked 5 times) - NO INFO

Priority: Add these before next sprint
Impact: 8 questions/week could be answered autonomously
The best teams maintain a “Knowledge Backlog” - a prioritized list of missing documentation to create. Review it monthly and tackle high-impact gaps first.
Strategy 6: Foster QA Culture

Weekly QA Workflow Example:

Monday Morning (Team Lead - 15 minutes):
1. Navigate to Analyze → Conversations
2. Filter:
   - Date: Last 7 days
   - Rating: 1-2 stars
   - Status: Unresolved OR Escalated
3. Review conversation list (don't read each one yet)
4. Assign conversations to team members:
   - Select 5 conversations
   - Apply label "Review: Alice"
   - Select 5 different conversations
   - Apply label "Review: Bob"
   - Repeat for all team members
5. Add "QA: Needs Review" to all assigned conversations
Throughout the Week (Team Members - 30 minutes each):
1. Navigate to Analyze → Conversations
2. Filter:
   - Labels: Include "Review: [Your Name]"
   - Labels: Include "QA: Needs Review"
3. For each conversation:
   a. Read full conversation thread
   b. Identify issue:
      - Missing knowledge? Add snippet
      - Wrong info? Fix source
      - Bad tone? Update guidance
      - Good example? No action needed
   c. Apply finding label:
      - "Knowledge Gap - Billing"
      - "QA: Excellent Example"
      - etc.
   d. Remove "QA: Needs Review"
   e. Add "QA: Reviewed"
   f. Add comment to shared doc with:
      - Conversation ID
      - Issue found
      - Action taken or recommended
Friday Review (Team Lead - 30 minutes):
1. Filter to "QA: Reviewed" from this week
2. Review team findings in shared doc
3. Prioritize actions:
   - Quick wins (snippets) - do now
   - Documentation needs - schedule for next week
   - Guidance updates - batch for next deployment
4. Create knowledge updates
5. Deploy improved profile
6. Remove all "QA: Reviewed" labels (cleanup for next week)
Shared Documentation:
Create a shared document (Google Doc, Notion, Confluence) for weekly findings:
Template:

## Week of March 11-17, 2024

### Summary
- Conversations reviewed: 35
- Knowledge gaps found: 8
- Wrong information: 2
- Excellent examples: 5
- Snippets created: 6
- Actions pending: 2

### Detailed Findings

**Alice's Review:**
- Conv #123: Missing info about international shipping
  - Action: Created snippet
  - Status: Deployed
- Conv #456: Customer loved the response format
  - Action: Saved as excellent example
  - Label: "QA: Excellent Example"

**Bob's Review:**
- Conv #789: Wrong pricing for Enterprise plan
  - Action: Updated pricing_2024.pdf
  - Status: Deployed
  - Impact: Critical fix

... (continue for all team members)

### Actions for Next Week
1. Create comprehensive shipping documentation
2. Review all pricing sources for accuracy
3. Update guidance to use more formatting (based on excellent examples)
Team Meeting Agenda:
Hold a brief weekly QA sync (15-30 minutes):
Agenda:
1. Review metrics (5 min)
   - CSAT trend
   - Resolution rate trend
   - Top topics by volume
2. Share findings (10 min)
   - Each person shares most interesting finding
   - Discuss patterns across reviews
   - Identify systemic issues
3. Plan actions (10 min)
   - Prioritize knowledge additions
   - Assign documentation tasks
   - Schedule next deployment
4. Celebrate wins (5 min)
   - Highlight improved metrics
   - Share excellent AI responses
   - Recognize team contributions
Don'ts:
- Don't try to review every conversation
- Don't make QA feel like punishment
- Don't let backlog grow unbounded
- Don't sacrifice action for documentation

Do's:
- Review representative samples
- Celebrate improvements and wins
- Set boundaries (10 reviews/week max per person)
- Prioritize action over perfect documentation
Measure QA Impact:
Track these metrics:

Before QA program:
- CSAT: 72%
- Resolution: 58%
- Snippets: 45
- Weekly reviews: 0

After 3 months of QA:
- CSAT: 81% (+9pp)
- Resolution: 71% (+13pp)
- Snippets: 127 (+82)
- Weekly reviews: 20-25

ROI: 9pp CSAT improvement = fewer escalations, happier customers
Time invested: 30 min/person/week = 2 hours/week for a 4-person team
Make QA visible and celebrated. Share excellent AI responses in team chat. Create a “Response of the Week” highlight. Recognition keeps the team engaged.
Strategy 7: Review DSAT

For each low-rated conversation:

Step 1: Read the full conversation thread
Don’t just read the AI’s response - understand the full context:
What was the customer’s initial question?
Did the AI understand the question correctly?
How did the conversation evolve?
Where did it go wrong?
Step 2: Check if customer provided feedback
Look for the CSAT feedback card showing:
Star rating
Optional text feedback from customer
Common feedback themes:
"Didn't answer my question"
"Wrong information"
"Too complicated"
"Sent me in circles"
"Wanted to speak to a human"
"Response too long"
"Didn't understand what I asked"
Step 3: Click AI messages to review sources
Open the Improve Answer sidebar:
Zero sources used: Knowledge gap - customer asked something not in your knowledge base
Wrong sources used: AI misunderstood question and cited irrelevant info
Right sources, poor presentation: Knowledge exists but poorly formatted or explained
Sources conflict: Multiple sources with contradictory information
Step 4: Identify the failure point
Categorize DSAT by root cause:

A. Knowledge Gap (40% of DSAT)
   - AI didn't have information
   - Solution: Add snippet or documentation

B. Wrong Information (20% of DSAT)
   - AI provided incorrect answer
   - Solution: Fix source data

C. Misunderstood Question (15% of DSAT)
   - AI answered wrong question
   - Solution: Improve guidance, add examples

D. Poor Presentation (15% of DSAT)
   - Right info, wrong format/tone
   - Solution: Update guidance on formatting

E. Inherent Complexity (10% of DSAT)
   - Question requires human judgment
   - Solution: Create escalation rule

F. Unrealistic Expectations (5% of DSAT)
   - Customer wanted different outcome than AI can provide
   - Solution: Set expectations earlier in conversation
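Once you tag each reviewed conversation with one of these root causes, a quick tally shows where to focus first. A minimal sketch (the tags are hypothetical sample data; in practice they might come from your QA labels or shared doc):

```python
from collections import Counter

# One root-cause tag per reviewed DSAT conversation.
reviews = [
    "knowledge_gap", "knowledge_gap", "wrong_information",
    "poor_presentation", "knowledge_gap", "misunderstood_question",
    "inherent_complexity", "knowledge_gap",
]

for cause, count in Counter(reviews).most_common():
    print(f"{cause}: {count} ({100 * count / len(reviews):.0f}%)")
# knowledge_gap: 4 (50%)  -> prioritize snippets/documentation first
```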
User Information Sidebar:
Review the right sidebar for context:

Device and Location:
Browser/device type
Screen size (mobile vs desktop)
Geographic location
Language preference
Session Context:
Which page they were on when asking
How they arrived (referrer)
Previous pages visited
Time spent before asking
User History:
Previous conversation count
Previous ratings
Labels applied to user
Account information (if integrated)
Using Context for Insights:
Example 1:
DSAT Rating: 1 star
Page: /pricing
Device: Mobile
Feedback: "Too long, couldn't read it"
Root cause: AI response optimized for desktop, too verbose for mobile
Solution: Update guidance - "On mobile devices, provide shorter responses with key info first. Offer 'Would you like more details?' for elaboration."

Example 2:
DSAT Rating: 2 stars
Page: /checkout
Previous conversations: 3
Feedback: "Asked this yesterday already"
Root cause: Customer asked the same question multiple times, indicating the previous answer didn't help
Solution: Review previous conversations, identify what was missing, add comprehensive answer

Example 3:
DSAT Rating: 1 star
Page: /enterprise-features
User label: "Free Plan"
Feedback: "Can't help with this"
Root cause: Free user asking about enterprise features, AI couldn't help due to plan restrictions
Solution: Create guidance - "When free users ask about enterprise features, explain plan limitations and offer upgrade path politely."
DSAT Conversation Analysis

Conversation ID: #12345
Date: March 15, 2024
Rating: 1 star ⭐
Topic: Billing

Customer Feedback:
"This didn't answer my question at all. I need to know about pro-rated refunds."

Root Cause Category: Knowledge Gap

What AI Said:
"Our refund policy allows refunds within 30 days of purchase..."

What AI Should Have Said:
"Yes, we provide pro-rated refunds. Here's how it works:
- Calculate unused days: [days remaining] / [billing period days]
- Refund amount: [plan cost] × [unused percentage]
- Processing time: 5-7 business days
- How to request: Contact billing@example.com with your account ID

Example: If you paid $100 for an annual plan and cancel after 3 months:
- Unused: 9 months = 75%
- Refund: $100 × 0.75 = $75"

Action Taken:
[✓] Created snippet "Pro-rated Refund Calculation"
[✓] Added to Billing data provider
[✓] Rebuilt profile v0.12
[ ] Deploy Friday March 17
[ ] Monitor billing topic DSAT next week

Expected Impact:
- Billing topic DSAT: Reduce from 25% to 15%
- Billing topic resolution: Improve from 65% to 75%
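The pro-rated refund arithmetic in that template is straightforward to check before it ships as a snippet. A quick illustrative sketch (the function name and billing terms are hypothetical):

```python
def prorated_refund(plan_cost: float, months_used: int, term_months: int = 12) -> float:
    """Refund = plan cost x unused fraction of the billing term."""
    unused_fraction = (term_months - months_used) / term_months
    return round(plan_cost * unused_fraction, 2)

# $100 annual plan, cancelled after 3 months: unused = 9/12 = 75%
print(prorated_refund(100, 3))  # 75.0
```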
Batch Processing DSAT:
Review 10-20 DSAT conversations, then categorize:
Week of March 11-17 DSAT Review (15 conversations)

By root cause:
- Knowledge Gaps: 6 conversations → Create 4 snippets
- Wrong Information: 2 conversations → Fix 2 sources
- Poor Presentation: 4 conversations → Update guidance
- Inherent Complexity: 2 conversations → Create escalation rules
- Unrealistic Expectations: 1 conversation → No action needed

Priority actions:
1. Fix wrong pricing info (affects multiple topics) - DO TODAY
2. Create snippets for top 3 knowledge gaps - DO THIS WEEK
3. Update guidance for better mobile formatting - NEXT SPRINT
4. Create escalation rule for refund disputes - NEXT SPRINT
Month 1 Baseline:
- Total DSAT (1-2 star): 18%
- Billing DSAT: 25%
- Technical DSAT: 22%
- Product Questions DSAT: 12%

Actions Taken:
- Created 12 snippets addressing common gaps
- Fixed 3 sources with wrong information
- Updated guidance for clearer, shorter responses
- Added escalation rules for refund disputes

Month 2 Results:
- Total DSAT: 12% (↓6pp)
- Billing DSAT: 15% (↓10pp) ✓
- Technical DSAT: 16% (↓6pp) ✓
- Product Questions DSAT: 10% (↓2pp) ✓

Impact: 6 percentage point reduction in overall DSAT
= ~50 fewer unhappy customers per month
Follow-up with Customers (Optional):
For critical DSAT:
Scenario: Enterprise customer gave 1-star rating

Process:
1. Identify in conversation review
2. Check if issue was resolved after rating
3. If not, create internal ticket for account manager
4. Account manager reaches out:
   "Hi [Name], I saw you had trouble with our support. I wanted to personally ensure we address your question about [topic]. Here's the answer: [detailed response]. Is there anything else I can help with?"

Result:
- Customer feels heard
- You learn more about the issue
- Potential to recover the relationship
- Insight into high-priority customer pain points
Document Patterns:
Keep a DSAT pattern log:

Pattern: API authentication questions
Frequency: 8 DSAT conversations over 2 weeks
Common feedback: "Didn't give me the exact steps"
Root cause: Documentation was conceptual, not procedural
Fix: Rewrote API auth documentation with step-by-step instructions, code examples, screenshots
Validation: Next 2 weeks - only 1 DSAT in API auth (88% reduction)
Pattern confirmed: Customers want procedural "how-to", not conceptual "what is"
Applied to: All technical documentation
Not all DSAT is actionable. Some customers will be dissatisfied with your product, policy, or limitations - not the AI’s response. Focus on DSAT where the AI could have genuinely done better.
AI agent optimization is a marathon, not a sprint. The teams that excel follow consistent, systematic routines:
Track KPIs - Measure baseline and progress weekly
Identify poor topics - Use data to prioritize improvements
Find uninvolved tickets - Expand AI coverage systematically
Fix wrong information - Maintain knowledge quality rigorously
Add missing information - Fill gaps proactively and reactively
Foster QA culture - Make optimization a team sport
Review DSAT - Learn from unhappy customers
Start with just 30 minutes per week reviewing conversations. Add structure gradually. Celebrate improvements. Track your progress. Within 90 days, you’ll see measurable gains in resolution rates, customer satisfaction, and team efficiency.

Your AI agent gets better when you commit to continuous improvement. The question isn’t whether to optimize - it’s when you’ll start.