Why Metrics Matter
Performance metrics give you the visibility and confidence to make data-driven decisions about your AI:
- Measure customer satisfaction - Track how users rate their AI interactions and identify dissatisfaction patterns
- Quantify automation impact - See how much workload your AI handles autonomously vs. requiring human help
- Identify improvement opportunities - Spot trends in escalations, unresolved conversations, and low-rated interactions
- Validate changes - Confirm that knowledge updates and guidance refinements actually improve outcomes
- Demonstrate ROI - Show stakeholders concrete evidence of reduced support burden and improved customer experience
- Track weekend coverage - Measure how well your AI handles conversations when your team is offline
Accessing the Metrics Dashboard
Navigate to Analyze → Metrics to view your performance analytics. The dashboard provides two distinct views optimized for different use cases:
- General View - Comprehensive overview of all AI activity, customer satisfaction, and conversation outcomes
- Ticketing View - Specialized metrics for support teams managing ticketed conversations with AI involvement
Switch between views using the tabs at the top of the page.
Understanding Filters
All metrics respect your filter selections, allowing you to analyze specific segments of your data.
Time Range Filter
Location: Top right corner of the page
The date range filter controls which conversations are included in all metrics and charts. You can select:
- Last 7 days (default for quick overviews)
- Last 30 days (recommended for trend analysis)
- Last 90 days (for long-term patterns)
- Custom range (specify exact start and end dates)
Trend comparisons require enough historical data. If you just started using botBrains, trends won’t appear until you have data from two equivalent time periods.
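To make the comparison concrete, here is a minimal sketch of how a period-over-period trend can be computed. The function name, counts, and encoding are illustrative, not part of botBrains:

```python
# Minimal sketch of a period-over-period trend comparison.
# Assumes two equally sized windows (e.g., the last 30 days
# and the 30 days before that). All numbers are made up.

def trend_percent(current, previous):
    """Percent change vs. the previous, equally sized period."""
    if previous == 0:
        return None  # no baseline yet -> no trend can be shown
    return (current - previous) / previous * 100

print(trend_percent(480, 400))  # 20.0 -> shown as +20%
print(trend_percent(480, 0))    # None -> trend indicator stays empty
```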
Channel Filter
Location: Below the view tabs, left side
Filter metrics to specific communication channels to understand performance differences:
- Web - Website chat widget conversations
- Zendesk - Support ticket conversations from Zendesk integration
- Salesforce - Case conversations from Salesforce integration
- Slack - Messages from Slack workspace integration
- WhatsApp - Conversations through WhatsApp Business integration
Label Filter
Location: Below the view tabs, center
Filter metrics to conversations tagged with specific labels. Labels help you segment by:
- Customer tier (Enterprise, Professional, Free)
- Product area (Billing, Technical Support, Sales)
- Issue type (Bug Report, Feature Request, Question)
- Quality markers (Training Example, Needs Review)
General View Metrics
The General view provides a comprehensive overview of your AI's performance across all conversation types.
Overview Cards
The top row displays five key performance indicators with trend comparisons.
Messages - Total number of messages exchanged across all conversations in your selected timeframe. Includes both user messages and AI responses.
Conversation Status Chart
Visual: Stacked area chart showing resolution status over time
This chart breaks down your conversations by their final status across the selected date range:
Resolved (Green) - Conversations where the user's question was successfully answered. This is your target outcome: the AI or support team fully addressed the customer's need.
Unresolved (Yellow) - Conversations that ended without a satisfactory resolution. Common causes include knowledge gaps, unclear questions, or users abandoning conversations midway.
Escalated (Purple) - Conversations that were handed off to human agents, either automatically (based on escalation rules) or manually (user requested human help).
Using this chart:
- Track resolution trends over time - are you improving?
- Identify specific dates with spikes in escalations or unresolved conversations
- Correlate status changes with deployments or knowledge updates
- Hover over any point to see exact counts for that day
A small percentage of escalations is normal and healthy - some questions genuinely require human expertise or judgment. Focus on reducing unnecessary escalations for routine questions.
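If you export conversations (see Exporting Metrics Data below), you can reproduce the daily breakdown yourself. The sketch below uses assumed field names (ended_at, status) and inline sample rows; check your actual export schema:

```python
from collections import Counter, defaultdict

# Sketch: reproduce the daily status breakdown from exported rows.
# Field names ("ended_at", "status") are assumptions about the export
# schema; inline sample data stands in for a real conversations file.
rows = [
    {"ended_at": "2024-05-06T09:12:00Z", "status": "resolved"},
    {"ended_at": "2024-05-06T11:40:00Z", "status": "escalated"},
    {"ended_at": "2024-05-07T08:03:00Z", "status": "resolved"},
    {"ended_at": "2024-05-07T16:25:00Z", "status": "unresolved"},
]

daily = defaultdict(Counter)
for row in rows:
    daily[row["ended_at"][:10]][row["status"]] += 1  # bucket by date

for day in sorted(daily):
    total = sum(daily[day].values())
    shares = {s: f"{n / total:.0%}" for s, n in daily[day].items()}
    print(day, shares)
```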
Conversation Rating Chart
Visual: Histogram showing distribution of customer ratings
This chart displays how customers rated their AI interactions.
Rating scale:
- 1 star (😠 Terrible) - Very dissatisfied
- 2 stars (🙁 Bad) - Dissatisfied
- 3 stars (😐 OK) - Neutral
- 4 stars (😊 Good) - Satisfied
- 5 stars (🤩 Amazing) - Very satisfied
- Abandoned (0) - Survey offered but not completed
- Unoffered - Survey not presented to user
A healthy distribution typically shows:
- Majority of ratings at 4-5 stars
- Small tail at 1-2 stars (under 10%)
- Few abandonments (indicates survey timing is good)
Warning signs:
- Bimodal distribution (peaks at both 1 and 5) - inconsistent experience
- High 1-2 star ratings - serious quality issues
- Many abandonments - survey timing or UX problems
- High “unoffered” percentage - need to increase survey coverage
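To apply these checks to exported data, a rough sketch follows. The rating encoding (1-5 stars, 0 for abandoned, None for unoffered) and the exact thresholds are assumptions for illustration:

```python
from collections import Counter

# Sketch: flag rating-distribution warning signs from exported ratings.
# Assumed encoding: 1-5 = stars, 0 = abandoned, None = never offered.
ratings = [5, 5, 4, 1, 0, None, 5, 3, 1, 4, 5, None]

counts = Counter(ratings)
rated = sum(n for r, n in counts.items() if r and r >= 1)

low_share = (counts[1] + counts[2]) / rated
if low_share > 0.10:
    print(f"Warning: {low_share:.0%} of ratings are 1-2 stars")
if counts[1] > counts[2] and counts[5] > counts[4]:
    print("Possible bimodal distribution (peaks at 1 and 5 stars)")
if counts[0] > 0.2 * rated:
    print("Many abandoned surveys - review survey timing/UX")
if counts[None] > 0.5 * len(ratings):
    print("High unoffered share - consider increasing survey coverage")
```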
Message Volume Chart
Visual: Area chart showing messages and conversations over time
This dual-metric chart helps you understand conversation patterns:
Total Messages (Blue area) - Shows the volume of all messages exchanged. Spikes indicate busy periods or particularly complex issues requiring many back-and-forth exchanges.
Total Conversations (Orange area) - Shows how many conversation threads were active. Helps distinguish between "many conversations" vs. "long conversations."
Key ratio to watch: average messages per conversation, illustrated in the sketch below.
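A trivial sketch of that ratio, with made-up totals:

```python
# Sketch: average conversation depth from the two chart totals.
total_messages = 1840
total_conversations = 460

avg_depth = total_messages / total_conversations
print(f"{avg_depth:.1f} messages per conversation")  # 4.0
# A rising ratio with flat conversation volume suggests longer,
# harder conversations rather than more traffic.
```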
AI Involvement Rate Chart
Visual: Pie chart showing AI participation levels
This chart categorizes conversations by how the AI was involved:
Fully Autonomous (Green) - AI handled the entire conversation without any human operator involvement. This represents complete automation and maximum efficiency.
Handoff Chart
Visual: Chart showing escalation patterns and reasons
Shows when and why conversations were handed off from AI to human agents.
Handoff triggers:
- User explicitly requested human help
- AI detected it couldn’t answer confidently
- Automatic escalation rule triggered (based on topic, sentiment, or other criteria)
- Support agent manually took over conversation
Use this chart to identify:
- Peak handoff times (weekends, after hours, specific days)
- Most common escalation reasons
- Topics that frequently require human intervention
- Opportunities to reduce unnecessary handoffs through better knowledge
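If your export includes a handoff reason, a quick tally surfaces the most common triggers. The handoff_reason field and its values are assumptions, shown with inline sample data:

```python
from collections import Counter

# Sketch: tally handoff reasons from exported conversation rows.
# The "handoff_reason" field and its values are assumed, not the
# actual export schema.
rows = [
    {"handoff_reason": "user_requested"},
    {"handoff_reason": "low_confidence"},
    {"handoff_reason": "escalation_rule"},
    {"handoff_reason": "user_requested"},
    {"handoff_reason": None},  # no handoff occurred
]

reasons = Counter(r["handoff_reason"] for r in rows if r["handoff_reason"])
for reason, n in reasons.most_common():
    print(f"{reason}: {n}")
```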
Answer Completeness Chart
Visual: Pie chart showing response quality distribution
Measures whether the AI provided complete answers or indicated missing information:
Complete (Green) - AI provided a full answer based on your knowledge base without caveats about missing information.
Incomplete (Yellow) - AI answered but indicated uncertainty or missing details ("I don't have complete information about…").
No Answer (Red) - AI explicitly stated it couldn't answer the question due to missing knowledge.
Why this matters:
- Identifies knowledge gaps systematically
- Tracks improvement as you add knowledge
- Helps prioritize which missing information to add first
- Distinguishes between “wrong answers” and “admitted gaps”
An honest “I don’t know” (No Answer) is better than a confident but incorrect response. Monitor the incomplete/no-answer percentages to guide your knowledge improvement efforts.
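One practical way to act on this chart is to rank topics by their "no answer" counts, so you know which knowledge to add first. A sketch, with assumed field names (topic, completeness):

```python
from collections import Counter

# Sketch: prioritize knowledge gaps by counting "no answer" outcomes
# per topic. Field names and values are assumptions for illustration.
rows = [
    {"topic": "Billing", "completeness": "no_answer"},
    {"topic": "Billing", "completeness": "complete"},
    {"topic": "Refunds", "completeness": "no_answer"},
    {"topic": "Refunds", "completeness": "incomplete"},
    {"topic": "Refunds", "completeness": "no_answer"},
]

gaps = Counter(r["topic"] for r in rows if r["completeness"] == "no_answer")
print("Add knowledge first for:", [t for t, _ in gaps.most_common(3)])
# -> ['Refunds', 'Billing']
```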
User Sentiment Chart
Visual: Bar chart showing emotional tone distribution
Analyzes the sentiment of user messages using natural language processing:
Positive (Green) - User messages expressing satisfaction, gratitude, or positive emotions.
Neutral (Gray) - Factual questions or statements without emotional tone.
Negative (Red) - User messages expressing frustration, anger, or dissatisfaction.
Use sentiment data alongside ratings and escalation patterns to spot frustration early.
User Rating Trend Chart
Visual: Line chart showing rating distribution over time
Tracks how customer ratings evolve across your selected time period.
Reading the trend lines:
- Each colored line represents a rating level (1-5 stars)
- Y-axis shows percentage of total rated conversations
- Upward-sloping 4-5 star lines = improving satisfaction
- Downward-sloping 1-2 star lines = fewer bad experiences
What to look for:
- Sustained improvement in 4-5 star percentage
- Reduction in 1-2 star percentage
- Correlation with your deployments or knowledge updates
- Day-of-week patterns (weekends often differ from weekdays)
User Language Chart
Visual: Horizontal bar chart showing language distribution
Shows which languages your users communicate in.
Why this matters:
- Identify need for multilingual knowledge
- Verify your AI handles non-English languages appropriately
- Detect unexpected language patterns (could indicate spam or new markets)
- Plan internationalization priorities
Usage by Page Chart
Visual: Horizontal bar chart showing conversation sources
Displays which pages or entry points generated conversations (for web chat).
Insights from this data:
- High-traffic pages that need better self-service content
- Product areas generating most support questions
- Effectiveness of page-specific AI guidance
- Opportunities for contextual knowledge (page-specific responses)
Knowledge Source Usage Chart
Visual: Horizontal bar chart showing data provider usage
Shows which knowledge sources (data providers) your AI references most frequently.
Knowledge sources:
- PDFs (uploaded documentation)
- Webpages (crawled URLs)
- Snippets (manually created Q&A)
- Tables (structured data)
- Files (other uploaded content)
Use this data to:
- Identify which knowledge types are most valuable
- Detect underutilized knowledge sources (might need better content)
- Validate that important documentation is being referenced
- Prioritize updates to frequently-used sources
Conversation Length Chart
Visual: Histogram showing message count distribution
Displays how many messages typical conversations contain.
Interpreting the distribution:
Short conversations (1-3 messages):
- Quick, simple questions
- “Thank you and goodbye” interactions
- Potentially unresolved issues where user gave up
Medium conversations:
- Normal back-and-forth for moderate complexity
- Follow-up questions after initial answer
- Multi-part inquiries
Long conversations:
- Complex technical issues
- Multiple related questions
- Potential AI confusion (repeating itself or not understanding)
Warning signs:
- Many single-message conversations (users not engaging)
- Very long conversations (AI not resolving efficiently)
- Increasing average length over time (quality degradation)
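To approximate this histogram from exported message counts, bucket each conversation by length. The short bucket (1-3 messages) comes from the text above; the medium and long boundaries here are assumptions:

```python
from collections import Counter

# Sketch: bucket conversations by message count to approximate the
# histogram. The 4-7 and 8+ boundaries are illustrative, not official.
message_counts = [1, 2, 2, 4, 5, 5, 6, 9, 12, 1, 3, 7]

def bucket(n):
    if n <= 3:
        return "short (1-3)"
    if n <= 7:
        return "medium (4-7)"
    return "long (8+)"

hist = Counter(bucket(n) for n in message_counts)
for label in ("short (1-3)", "medium (4-7)", "long (8+)"):
    print(f"{label:12} {hist[label]}")
```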
Activity Heatmaps
Visual: Two calendar heatmaps showing conversation patterns
Weekly Heatmap - Shows conversation volume by day of week and hour of day for the most recent week.
Hidden Conversations Chart
Visual: Pie chart showing spam and blocked conversations
Displays conversations that were hidden from main views:
Spam (Red) - Conversations automatically or manually marked as spam. These are typically bot attacks, gibberish, or irrelevant messages.
Blocked (Orange) - Conversations from blocked users or domains. Used to prevent abusive users from consuming resources.
Visible (Green) - Normal, legitimate conversations that appear in your conversation list.
Why monitor this:
- Ensure spam detection isn't too aggressive (legitimate users blocked)
- Track abuse or attack patterns
- Validate that your spam filters are working
- Clean up spam manually if automatic detection missed it
Ticketing View Metrics
The Ticketing view focuses on AI involvement in support ticket workflows, providing specialized metrics for teams managing traditional ticketing systems with AI assistance.
Key Ticketing Metrics
The top row displays four critical ticketing performance indicators.
Involvement Rate - Percentage of tickets where the AI participated in some way (autonomous, public, or private involvement).
Relative Autonomous Rate - Removes "not involved" tickets from the calculation, giving you a clearer picture of how effective the AI is when it does participate.
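A small worked example of the two rates, with made-up ticket counts:

```python
# Sketch of the two rates described above, using made-up counts.
tickets_total = 1000
tickets_ai_involved = 650      # autonomous + public + private
tickets_autonomous = 400

involvement_rate = tickets_ai_involved / tickets_total
# Relative Autonomous Rate excludes "not involved" tickets, answering:
# "when the AI participates, how often is it fully autonomous?"
relative_autonomous_rate = tickets_autonomous / tickets_ai_involved

print(f"Involvement rate:         {involvement_rate:.0%}")          # 65%
print(f"Relative autonomous rate: {relative_autonomous_rate:.0%}")  # 62%
```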
Involvement Flow (Sankey Diagram)
Visual: Sankey flow diagram showing ticket paths through AI involvement levels
This powerful visualization shows how tickets flow from different involvement categories to resolution outcomes.
Flow structure:
- Left side: AI involvement level (Autonomous, Public, Private, Not Involved)
- Right side: Resolution outcome (Resolved, Escalated, Unresolved)
- Flow width: Number of tickets following that path
How to act on the flows:
- Maximize Autonomous → Resolved: This is pure automation. Focus improvements here.
- Minimize Not Involved → Any: If AI isn’t even attempting many tickets, investigate why.
- Analyze Autonomous → Escalated: These are tickets where AI tried to help but had to give up. Key improvement opportunity.
- Review Public → Unresolved: Even with human help, these weren’t resolved. Product or process issues?
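The flow weights behind such a diagram are simple to derive from ticket rows: count tickets per (involvement, outcome) pair. A sketch with an assumed schema:

```python
from collections import Counter

# Sketch: derive Sankey flow weights (involvement -> outcome) from
# ticket rows. Field names are assumptions about the export schema.
tickets = [
    {"involvement": "autonomous", "outcome": "resolved"},
    {"involvement": "autonomous", "outcome": "escalated"},
    {"involvement": "public", "outcome": "resolved"},
    {"involvement": "not_involved", "outcome": "resolved"},
    {"involvement": "autonomous", "outcome": "resolved"},
]

flows = Counter((t["involvement"], t["outcome"]) for t in tickets)
for (src, dst), width in flows.most_common():
    print(f"{src:12} -> {dst:9} width={width}")
```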
AI Involvement vs Success Pivot Table
Visual: Interactive pivot table showing success rates across involvement and outcome dimensions
This table provides a detailed breakdown of how different AI involvement levels correlate with resolution outcomes.
Rows (AI Involvement):
- Fully Autonomous
- Public Involvement
- Private Involvement
- Not Involved
Columns (Resolution Outcome):
- Resolved
- Escalated
- Unresolved
- Total
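To recompute this pivot outside the dashboard, a minimal sketch reusing the assumed ticket schema from the Sankey sketch above:

```python
from collections import Counter

# Sketch: rebuild the involvement x outcome pivot with row totals.
# Ticket rows reuse the assumed schema from the Sankey sketch.
tickets = [
    {"involvement": "autonomous", "outcome": "resolved"},
    {"involvement": "autonomous", "outcome": "escalated"},
    {"involvement": "public", "outcome": "resolved"},
    {"involvement": "public", "outcome": "unresolved"},
    {"involvement": "not_involved", "outcome": "escalated"},
]

cells = Counter((t["involvement"], t["outcome"]) for t in tickets)
rows = sorted({t["involvement"] for t in tickets})
cols = ("resolved", "escalated", "unresolved")

print(f"{'':14}" + "".join(f"{c:>11}" for c in cols) + f"{'total':>8}")
for r in rows:
    total = sum(cells[(r, c)] for c in cols)
    line = "".join(f"{cells[(r, c)]:>11}" for c in cols)
    print(f"{r:14}{line}{total:>8}")
```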
Involvement Rate Over Time Chart
Visual: Stacked bar chart showing involvement distribution across time periods
Tracks how AI involvement levels evolve over your selected date range.
Bars represent:
- Green: Fully Autonomous tickets
- Blue: Public Involvement tickets
- Purple: Private Involvement tickets
- Gray: Not Involved tickets
Involvement Rate Evolution Chart
Visual: Multi-line chart showing involvement category trends
Similar to the stacked bar chart, but with separate lines for each involvement type, making it easier to see individual trends.
Lines:
- Green line: Autonomous rate trend
- Blue line: Public involvement trend
- Purple line: Private involvement trend
- Gray line: Not involved trend
Interpreting Metrics Together
Individual metrics tell part of the story; combining metrics reveals deeper insights. Watch for recurring combinations such as:
- Healthy Performance Pattern
- Knowledge Gap Pattern
- Quality Problem Pattern
- Adoption Problem Pattern
- Weekend Coverage Success Pattern
Filtering and Segmentation Strategies
Use filters to uncover insights hidden in aggregate numbers:
- Compare Time Periods - Analyze the same metrics for different time ranges.
- Segment by Channel - Analyze each channel separately to optimize channel-specific performance.
- Analyze High-Value Customers - Use labels to focus on important customer segments.
- Identify Problematic Topics - Combine topic filters with low-performing metrics.
- Track Improvement Over Time - Monitor the same segment across multiple time periods.
Exporting Metrics Data
Export your metrics for reporting, external analysis, or compliance purposes.
Quick Export
- Click the Export button in the top right of the metrics page
- Select data type (conversations, messages, users)
- Choose format (CSV for spreadsheets, JSON for programmatic analysis)
- Current filters are automatically applied to the export
- Download begins immediately
Each export includes:
- All visible metrics and their values
- Timestamps for the data range
- Filter parameters applied
- Trend comparisons to previous period
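Once downloaded, a CSV export can be inspected with a few lines of code. The filename and columns below are placeholders, not the actual export layout:

```python
import csv
from pathlib import Path

# Sketch: load a quick export for external analysis. The filename
# and columns are placeholders; match them to your actual download.
path = Path("metrics_export.csv")
if path.exists():
    with path.open(newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    print(f"{len(rows)} rows; columns: {list(rows[0]) if rows else 'none'}")
else:
    print("Download an export first, then point this script at it.")
```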
Scheduled Exports
For regular reporting, set up automatic recurring exports:
- Navigate to Settings → Data Exports
- Click Create Scheduled Export
- Configure:
- Frequency (daily, weekly, monthly)
- Data type (metrics summary, raw conversations, etc.)
- Filters to apply
- Delivery method (email, webhook, cloud storage)
- Save the schedule
Common use cases:
- Weekly executive reports
- Monthly board presentations
- Quarterly business reviews
- Compliance archiving
- External analytics tools (feed to Tableau, PowerBI, etc.)
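For the webhook delivery method, a minimal receiver might look like the sketch below. The payload shape is an assumption; inspect a real delivery before depending on specific fields:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Sketch: a minimal endpoint for the "webhook" delivery method of
# scheduled exports. The JSON payload shape is assumed, not documented.

class ExportWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Persist the raw export for downstream tools (Tableau, PowerBI, ...).
        with open("latest_export.json", "w", encoding="utf-8") as f:
            json.dump(payload, f)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ExportWebhook).serve_forever()
```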
Troubleshooting Metrics
Metrics seem incorrect or incomplete
Issue: Numbers don't match expectations or seem too low
Solutions:
- Verify date range is set correctly (not accidentally in the future)
- Check if filters are applied (channel, label) that limit the data
- Ensure conversations have finished (ongoing conversations may not have final status)
- Confirm data sync has completed for integrated channels (Zendesk, Salesforce)
- Refresh the page to clear any cached stale data
Trends not showing or showing as 0%
Issue: Trend indicators are missing or always show 0% change
Solutions:
- Ensure you have data from at least two equivalent time periods (e.g., 60 days of history for a 30-day trend)
- Remember that trends need historical comparison data; if you just started using botBrains, they cannot appear yet
- Verify filters haven’t changed between periods (comparing “Web” to “All Channels” shows false trends)
- Confirm date range allows trend calculation (custom ranges too short may not have comparison data)
Better Monday Score is 0% or not calculating
Issue: Weekend coverage metric shows 0% or doesn't appear
Solutions:
- Verify your date range includes at least one Saturday or Sunday
- Check that you actually had conversations on weekends (no weekend traffic = no score)
- Ensure conversations are not filtered out by channel or label restrictions
- Confirm AI is deployed and active on weekends (check deployment schedule)
Charts loading slowly or timing out
Issue: Dashboard takes a long time to load or shows errors
Solutions:
- Reduce date range to analyze shorter time periods (30 days instead of 90)
- Remove channel and label filters temporarily to reduce query complexity
- Refresh page to clear any stuck queries
- Try using General view instead of Ticketing view (simpler calculations)
- Contact support if issue persists (may indicate data optimization needed)
Exported data doesn’t match dashboard
Issue: CSV export has different numbers than what's shown on screen
Solutions:
- Verify export was completed after applying filters (don't change filters mid-export)
- Check export timestamp vs. dashboard timestamp (data may have updated between views)
- Ensure export format settings match expected structure
- Confirm you’re comparing the same time range and filters
- Re-export with explicit filter documentation to verify consistency
Best Practices
Establish a Metrics Review Routine
Weekly Review (15 minutes):
- Check CSAT and Resolution Rate trends - are they improving?
- Review any sudden drops or spikes in key metrics
- Filter to 1-2 star ratings, scan recent poor experiences
- Note any growing topics or unusual patterns
- Track Better Monday Score to validate weekend coverage
Monthly Review:
- Compare month-over-month metrics across all categories
- Segment analysis by channel to identify optimization opportunities
- Review involvement rate evolution - is automation increasing?
- Analyze topic-specific metrics for your top 10 topics
- Export data for stakeholder reports
- Document improvements made and their measured impact
Quarterly Review:
- Comprehensive trend analysis across 90-day periods
- Calculate ROI metrics (tickets automated, time saved, costs reduced)
- Validate long-term strategic goals (automation %, CSAT targets)
- Identify seasonal patterns for future planning
- Present findings to leadership with recommendations
Set Realistic Goals and Track Progress
Define specific, measurable targets based on your current baseline.
Correlate Metrics with Actions
Always connect changes to outcomes to understand what works. Keep a change log that records each change alongside the metric it was meant to move, as in the sketch below.
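A minimal sketch of such a change log, appending entries to a CSV; every field name here is illustrative:

```python
import csv
from datetime import date

# Sketch: a simple change log that pairs each change with the metric
# it was meant to move. All fields are illustrative placeholders.
ENTRY = {
    "date": date.today().isoformat(),
    "change": "Added refund-policy snippet to knowledge base",
    "target_metric": "Resolution rate (Topic: Refunds)",
    "baseline": "62%",
    "result_after_2_weeks": "",  # fill in after the next review
}

with open("change_log.csv", "a", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=ENTRY.keys())
    if f.tell() == 0:  # write the header only for a brand-new file
        writer.writeheader()
    writer.writerow(ENTRY)
```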
Don't Chase Perfect Metrics
Some important nuances to remember.
100% CSAT is not realistic or even desirable:
- Some customers are dissatisfied with your product, not your AI
- Honest “I don’t know” responses may get low ratings but are correct behavior
- Controversial topics (pricing, policies) naturally have lower satisfaction
100% automation is not the goal either:
- Complex or sensitive issues legitimately require human judgment
- Maintaining human touch for high-value customers adds strategic value
- Some escalations prevent worse outcomes (wrong automated answers)
Healthy progress looks like:
- A 5 percentage point improvement in resolution rate per month is excellent
- Sustained upward trends matter more than hitting arbitrary targets
- Balance automation efficiency with answer quality and customer satisfaction
Combine Quantitative and Qualitative Analysis
Metrics show you what's happening. Conversations show you why.
Process:
- Metrics identify problem areas (e.g., low CSAT for the "Refund" topic)
- Filter conversations to that segment (Topic: Refund, Rating: 1-2 stars)
- Read 10-20 conversations to understand root causes
- Make targeted improvements based on qualitative insights
- Track metrics to validate improvements worked
Next Steps
Now that you understand your performance metrics:
- Review Conversations - Dive deep into individual conversations to understand the context behind your metrics
- Analyze Topics - Segment metrics by topic to identify specific improvement areas
- Search Messages - Find patterns in user questions and AI responses
- Improve Answers - Use metric insights to guide knowledge and guidance refinement
- Manage Labels - Create custom labels to segment metrics by business-specific categories