Your AI’s performance in ticketing systems differs fundamentally from web chat. Tickets involve structured workflows and specific SLAs, and they often require human-AI collaboration. The Ticketing View in your metrics dashboard provides specialized analytics designed for support teams managing ticket volumes with AI assistance. This page shows you how to measure ticket deflection, optimize AI involvement, and demonstrate ROI through reduced agent workload.

Why Ticketing Metrics Matter

Ticketing systems have unique performance indicators that don’t apply to conversational channels. Understanding these metrics helps you:
  • Reduce agent workload - Measure how many tickets AI handles autonomously vs. requiring human intervention
  • Improve Monday mornings - Track weekend ticket coverage to eliminate Monday backlog spikes
  • Optimize ticket deflection - Identify which ticket types AI can fully automate vs. which need escalation
  • Demonstrate ROI - Calculate exact hours saved and cost reduction from AI automation
  • Track SLA compliance - Ensure AI responses meet your service level agreements
  • Identify training gaps - Find ticket categories where AI needs better knowledge
  • Balance automation with quality - Maintain resolution quality while increasing autonomous handling
The Ticketing View requires integration with Zendesk or Salesforce Service Cloud. If you haven’t connected your ticketing system yet, see the Zendesk Integration or Salesforce Integration guides.

Accessing Ticketing Metrics

Navigate to Analyze → Metrics, then switch to the Ticketing tab at the top of the page. The Ticketing View automatically filters to conversations from your connected ticketing channels (Zendesk, Salesforce). All standard filters apply to ticketing metrics:
  • Date Range - Focus on specific time periods
  • Channel Filter - Compare Zendesk vs. Salesforce performance
  • Label Filter - Segment by customer tier, product area, or priority

Understanding Ticketing-Specific Metrics

The Ticketing View emphasizes AI involvement rates and autonomous resolution - the key indicators of automation efficiency in support workflows.

Involvement Rate

Definition: Percentage of tickets where the AI participated in any way (autonomous, public, or private involvement).
Involvement Rate = (Autonomous + Public + Private) / Total Tickets × 100%
What different rates mean:
  • 80%+ involvement - AI is assisting with most tickets, excellent adoption
  • 60-80% involvement - Good participation, room to expand coverage
  • 40-60% involvement - Moderate adoption, investigate barriers to AI use
  • Below 40% involvement - Low adoption, may indicate integration issues or team resistance
Why this matters: The involvement rate shows what percentage of your ticket volume receives AI assistance. Low involvement rates mean you’re leaving automation opportunities on the table. High involvement rates indicate successful AI adoption across your support team.
How to improve:
  • Reduce exclusion rules that prevent AI from engaging
  • Train support team on when to enable AI assistance
  • Review “not involved” tickets to understand why AI didn’t participate
  • Expand knowledge coverage for common ticket types
  • Adjust escalation rules to give AI first attempt at resolution
Compare involvement rate before and after major changes (new knowledge, updated guidance, integration tweaks) to measure impact. A 10-15 percentage point increase in involvement rate can translate to significant agent time savings.
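As a minimal sketch of the formula above (assuming you pull the four ticket counts from the dashboard or a CSV export), the rate is a single division:

def involvement_rate(autonomous: int, public: int, private: int, total: int) -> float:
    """Percentage of all tickets where the AI participated in any way."""
    if total == 0:
        return 0.0
    return (autonomous + public + private) / total * 100

# Example: 120 autonomous + 45 public + 35 private out of 250 total tickets
print(f"{involvement_rate(120, 45, 35, 250):.1f}%")  # 80.0%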

Involved Tickets (Absolute Count)

Definition: Total number of tickets where AI was involved, with trend comparison to the previous period.
Why absolute numbers matter: Percentage rates are important, but absolute counts reveal the actual workload impact.
Example Calculation:
150 involved tickets × 10 minutes average handling time = 1,500 minutes saved
1,500 minutes = 25 agent hours saved this week

If your average support cost is $30/hour:
25 hours × $30 = $750 in weekly cost savings
$750 × 52 weeks = $39,000 annual savings
Using this metric strategically:
  • Track month-over-month growth in involved ticket counts
  • Correlate involved tickets with support team capacity reports
  • Calculate ROI by multiplying involved tickets by average handling time
  • Present to stakeholders as concrete workload reduction evidence
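A minimal sketch that annualizes the worked example above; the counts, handling time, and hourly cost are the example’s assumptions, so substitute your own figures:

def annual_savings(weekly_involved: int, avg_handle_min: float, hourly_cost: float) -> float:
    """Convert weekly involved-ticket counts into projected annual cost savings."""
    weekly_hours = weekly_involved * avg_handle_min / 60
    return weekly_hours * hourly_cost * 52

# 150 involved tickets/week x 10 min x $30/hour
print(f"${annual_savings(150, 10, 30):,.0f} per year")  # $39,000 per year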

Relative Autonomous Rate

Definition: Percentage of AI-involved tickets that were handled fully autonomously (no human intervention needed).
Relative Autonomous Rate = Autonomous / (Autonomous + Public + Private) × 100%
This differs from the overall autonomous rate because it excludes human-only tickets from the calculation, giving you a clearer picture of how effective the AI is when it does participate.
What different rates mean:
  • 60%+ relative autonomous - Strong AI performance, minimal human intervention needed
  • 40-60% relative autonomous - Moderate autonomy, significant human assistance still required
  • Below 40% relative autonomous - AI mostly acting as copilot, limited full automation
Reading the signal:
  • High involvement rate + low relative autonomous rate - AI is engaged frequently but needs human finishing. This indicates knowledge gaps or confidence issues: the AI tries but can’t complete on its own.
  • Low involvement rate + high relative autonomous rate - AI is highly effective when used but not used enough. This indicates adoption barriers: expand AI usage to more ticket types.
  • High involvement rate + high relative autonomous rate - Ideal state: AI is both widely used and highly effective. Optimize for scale and edge cases.
Relative Autonomous Rate focuses on AI effectiveness, while Involvement Rate measures AI adoption. Both are critical: high involvement without autonomy means AI is busy but not efficient; high autonomy without involvement means AI is effective but underutilized.
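To make the quadrant logic above concrete, here is a hedged sketch; the 60% threshold is illustrative, not product-defined:

def relative_autonomous_rate(autonomous: int, public: int, private: int) -> float:
    """Share of AI-involved tickets handled fully autonomously."""
    involved = autonomous + public + private
    return autonomous / involved * 100 if involved else 0.0

def read_signal(involvement_pct: float, autonomy_pct: float, threshold: float = 60.0) -> str:
    """Classify the involvement/autonomy quadrants described above."""
    high_inv, high_auto = involvement_pct >= threshold, autonomy_pct >= threshold
    if high_inv and not high_auto:
        return "Engaged but needs human finishing - look for knowledge gaps"
    if not high_inv and high_auto:
        return "Effective but underused - expand AI to more ticket types"
    if high_inv and high_auto:
        return "Ideal state - optimize for scale and edge cases"
    return "Low adoption and low autonomy - revisit integration and knowledge basics"

print(read_signal(82.0, relative_autonomous_rate(120, 45, 35)))  # Ideal state - ...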

Better Monday Score

Definition: Percentage of weekend tickets (Saturday and Sunday) where the AI provided at least one customer-visible response.
Better Monday Score = (Autonomous + Public Weekend Tickets) / Total Weekend Tickets × 100%
Why this metric is strategically important: Most support teams operate at reduced capacity on weekends. Tickets pile up, and Monday mornings face a backlog surge. AI doesn’t take weekends off.
What different scores mean:
  • 70%+ Better Monday Score - Excellent weekend coverage, minimal Monday backlog
  • 50-70% Better Monday Score - Good coverage, some tickets still wait until Monday
  • 30-50% Better Monday Score - Moderate coverage, noticeable Monday spike remains
  • Below 30% Better Monday Score - Poor weekend automation, large Monday backlog
Real-world impact example:
Before AI Weekend Coverage:
Saturday + Sunday: 60 tickets received, 5 answered by on-call agent
Monday morning: 55-ticket backlog awaiting triage

With 75% Better Monday Score:
Saturday + Sunday: 60 tickets received, 45 answered by AI, 5 by on-call agent
Monday morning: 10-ticket backlog (only complex escalations)

Result: 45 fewer tickets for Monday morning team
How to improve your Better Monday Score:
  1. Review weekend tickets that weren’t answered - what made them difficult?
  2. Add knowledge for common weekend inquiry types
  3. Adjust escalation rules to be less aggressive on weekends (when immediate escalation isn’t possible anyway)
  4. Enable autonomous mode for straightforward ticket types outside business hours
  5. Test changes by comparing Saturday/Sunday performance week-over-week
Don’t optimize Better Monday Score at the expense of answer quality. A high score with poor responses frustrates customers. Balance automation with correctness by monitoring weekend ticket CSAT alongside the score.
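A minimal sketch of the score, assuming each ticket record carries an ISO created_at timestamp and an involvement field (hypothetical field names - adapt to your export):

from datetime import datetime

def better_monday_score(tickets: list) -> float:
    """Share of weekend tickets with a customer-visible AI response
    (autonomous or public involvement)."""
    weekend = [t for t in tickets
               if datetime.fromisoformat(t["created_at"]).weekday() >= 5]  # Sat=5, Sun=6
    if not weekend:
        return 0.0
    answered = sum(1 for t in weekend if t["involvement"] in ("autonomous", "public"))
    return answered / len(weekend) * 100

tickets = [
    {"created_at": "2024-06-01T09:30:00", "involvement": "autonomous"},    # Saturday
    {"created_at": "2024-06-02T14:00:00", "involvement": "not_involved"},  # Sunday
    {"created_at": "2024-06-03T08:00:00", "involvement": "public"},        # Monday, ignored
]
print(f"{better_monday_score(tickets):.0f}%")  # 50%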

Involvement Flow (Sankey Diagram)

The Involvement Flow visualization reveals how tickets move from different AI involvement levels to resolution outcomes. This is one of the most powerful tools for identifying optimization opportunities in your ticketing workflow.

Understanding the Flow Structure

Visual Layout:
  • Left side: AI involvement level (Autonomous, Public, Private, Not Involved)
  • Right side: Resolution outcome (Resolved, Escalated, Unresolved)
  • Flow width: Number of tickets following that path
  • Flow color: Indicates the involvement type (green for autonomous, blue for public, purple for private, gray for not involved)

Reading the Most Common Flows

Autonomous → Resolved (wide green flow to green outcome)
This is your ideal state. These tickets were handled completely by AI without human intervention and successfully resolved. This flow represents pure automation efficiency - zero agent time required, immediate customer response, and successful resolution. Target: Maximize this flow over time as AI knowledge improves.
Public → Resolved (blue flow to green outcome)
A good outcome showing effective human-AI collaboration. The AI started the conversation and provided initial responses or assistance, a human agent finished the ticket, and the ticket was ultimately resolved successfully. Signal: If this flow is large, investigate whether better AI knowledge could convert some of these tickets to fully autonomous.
Private → Resolved (purple flow to green outcome)
AI suggested responses internally, human agents sent them, and the ticket was resolved. This shows your support team successfully using AI as a copilot. Signal: If agents consistently accept AI suggestions (a high private → resolved flow), consider enabling more public/autonomous modes for straightforward cases.
Any → Escalated (flows to red outcome)
Tickets that required human intervention regardless of AI involvement level. Escalations indicate either intentional human-in-the-loop workflows or AI reaching its knowledge limits. Signal: Review what triggered escalations - are they necessary (complex issues genuinely requiring humans) or preventable (knowledge gaps)?
Any → Unresolved (flows to yellow outcome)
Problematic tickets that weren’t fully resolved by anyone. These represent knowledge gaps, process issues, or abandoned tickets requiring attention. Signal: High unresolved rates in any involvement category indicate systemic problems needing investigation.
Not Involved → Any (gray flows)
Tickets where AI didn’t participate at all. These are typically imported historical tickets, manually excluded tickets, or cases where the integration wasn’t active. Target: Minimize this flow (except for intentional exclusions like VIP customers) to maximize AI assistance across all tickets.

Strategic Analysis Using the Sankey

Optimize Autonomous → Resolved
This flow represents your highest ROI opportunity. Drill into this segment:
  1. Click the flow to filter conversations
  2. Review resolved autonomous tickets to understand what works well
  3. Document patterns in successfully automated ticket types
  4. Use these patterns to identify similar tickets currently requiring human help
Investigate Autonomous → Escalated
These are tickets where AI tried to help but had to give up. This is a critical improvement signal:
  1. Click the flow to view these specific tickets
  2. Identify common themes in what caused escalations
  3. Add missing knowledge for these topics
  4. Refine guidance to handle these scenarios better
  5. Monitor if these tickets shift to autonomous → resolved after changes
Analyze Public → Unresolved
Even with AI + human help, these tickets weren’t resolved. This often indicates:
  • Product bugs or limitations requiring engineering fixes
  • Policy gaps requiring business decisions
  • Extremely complex issues beyond current capabilities
  • Process breakdowns (tickets fell through cracks)
Review these tickets to distinguish between “can’t be resolved by anyone” vs. “needs a better workflow.”
Measure Flow Changes Over Time
Take screenshots of the Sankey diagram monthly. Compare flow widths to track:
  • Is autonomous → resolved growing? (Success)
  • Is not involved → any shrinking? (Better adoption)
  • Are escalations decreasing for specific ticket types? (Knowledge improvement working)

AI Involvement vs Success Pivot Table

This detailed breakdown shows success rates across involvement and outcome dimensions, revealing which combinations work best and which need improvement.

Understanding the Table Structure

Rows (AI Involvement):
  • Fully Autonomous
  • Public Involvement
  • Private Involvement
  • Not Involved
Columns (Outcomes):
  • Resolved
  • Escalated
  • Unresolved
  • Total
Cells: Display both absolute count and percentage for each combination.

How to Read the Data

Example interpretation:
Fully Autonomous: 450 tickets (75% of all tickets)
  Resolved: 360 (80% of autonomous tickets)
  Escalated: 45 (10% of autonomous tickets)
  Unresolved: 45 (10% of autonomous tickets)

This tells you:
- AI attempts 450 tickets fully autonomously
- 80% success rate when it tries (good performance)
- 10% escalation rate (knowledge gaps to address)
- 10% unresolved (may be abandoned tickets or unclear questions)

Strategic Uses of the Pivot Table

Compare Resolution Rates Across Involvement Types
Compare the “Resolved” column across involvement types:
Autonomous: 80% resolved
Public: 90% resolved
Private: 85% resolved
Not Involved: 75% resolved
Insight: If public involvement has higher resolution rates than autonomous, humans are adding significant value. Focus on capturing that knowledge to increase autonomous resolution. If autonomous and public have similar resolution rates, humans aren’t improving outcomes much beyond what AI already achieves. Consider expanding autonomous handling to reduce agent load without sacrificing quality.
Identify Escalation Patterns
High escalation rates in autonomous tickets indicate topics where AI correctly recognizes its limitations. Review these to decide:
  • Should you add knowledge to reduce escalations?
  • Should you create specific escalation rules for these ticket types?
  • Are these inherently complex issues that should always escalate?
Measure Copilot Effectiveness
Look at the Private Involvement row:
  • High resolution rate for private tickets - Your support team successfully uses AI suggestions; copilot mode is effective.
  • Low resolution rate for private tickets - Agents either don’t trust AI suggestions or the suggestions are poor quality. Review actual private involvement tickets to diagnose.
Calculate True Automation Rate
True Automation = (Fully Autonomous Resolved) / (Total Tickets) × 100%

Example:
360 autonomous resolved / 600 total tickets = 60% true automation rate

This is different from involvement rate (which includes all AI participation).
True automation rate measures complete end-to-end automation.
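A sketch of the same table with pandas, assuming a ticket export with involvement and outcome columns (hypothetical names):

import pandas as pd

# Tiny stand-in for an exported ticket list; replace with your real data.
df = pd.DataFrame({
    "involvement": ["autonomous", "autonomous", "public", "private", "not_involved"],
    "outcome":     ["resolved",   "escalated",  "resolved", "resolved", "unresolved"],
})

# Counts per involvement/outcome combination, with totals
pivot = pd.crosstab(df["involvement"], df["outcome"], margins=True, margins_name="Total")
print(pivot)

# True automation rate: fully autonomous AND resolved, over all tickets
auto_resolved = ((df["involvement"] == "autonomous") & (df["outcome"] == "resolved")).sum()
print(f"True automation rate: {auto_resolved / len(df) * 100:.0f}%")  # 20%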

Involvement Rate Over Time

This stacked bar chart shows how AI involvement evolves across your selected date range, revealing adoption trends and the impact of changes.

Understanding the Chart

Bars represent daily or periodic ticket volume, stacked by involvement type:
  • Green section: Fully Autonomous tickets
  • Blue section: Public Involvement tickets
  • Purple section: Private Involvement tickets
  • Gray section: Not Involved tickets

Trend Patterns to Watch

Growing green (autonomous) section
Your AI is successfully automating more tickets over time. This indicates:
  • Improving knowledge coverage
  • Better guidance refinement
  • Increasing team confidence in AI capability
  • Successful expansion of autonomous handling to new ticket types
Action: Document what drove the improvement to replicate success.
Shrinking gray (not involved) section
More tickets are receiving AI assistance. This is a positive signal showing:
  • Better AI adoption across the support team
  • Fewer manual exclusions or barriers
  • Expanded coverage to previously manual ticket types
Action: Continue expanding to remaining ticket categories.
Growing purple (private) section
The support team increasingly uses AI as a copilot. This suggests:
  • Team trusts AI suggestions
  • Effective internal adoption of copilot features
  • May indicate hesitation to enable full public/autonomous mode
Action: Review private involvement tickets - could some shift to public if agents consistently accept suggestions?
Stable or growing blue (public) section
AI is involved but requires human finishing. If this section grows while autonomous shrinks, it:
  • May indicate knowledge degradation
  • Could signal increased ticket complexity
  • Might show more cautious escalation rules
Action: Investigate the cause - is this intentional (more complex tickets) or a regression?
Calculate your automation trajectory:
Week 1: 40% autonomous
Week 4: 55% autonomous
Growth: +15 percentage points in 3 weeks
Projection: Could reach 70% autonomous in ~2 months

This helps you:
- Set realistic automation goals
- Demonstrate progress to stakeholders
- Plan capacity reduction as automation improves
- Forecast when you'll hit ROI targets
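A rough sketch of the projection arithmetic; real growth usually flattens as the easy wins are automated first, so treat this as an upper bound:

def weeks_to_target(current_pct: float, weekly_gain_pp: float, target_pct: float) -> float:
    """Linear projection of when the autonomous rate reaches a target."""
    if weekly_gain_pp <= 0:
        raise ValueError("no growth to project")
    return (target_pct - current_pct) / weekly_gain_pp

# 40% -> 55% over 3 weeks is +5pp/week; projecting to a 70% target
print(f"~{weeks_to_target(55, 5, 70):.0f} more weeks at the current pace")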

Involvement Rate Evolution (Multi-Line Chart)

Similar to the stacked bar chart but with separate lines for each involvement type, making it easier to see individual trends and correlations.

Key Cross-Reference Points

When the autonomous line rises while the public line falls: AI is successfully taking over tickets that previously required human finishing. This is excellent progress - you’re converting “AI + human” tickets to “AI only.”
When the private line rises while not-involved falls: The support team is adopting AI copilot features for tickets they previously handled alone. A good adoption signal, but it may indicate an opportunity to increase autonomous handling.
When all involvement lines rise while not-involved falls: Overall AI adoption is increasing across all use cases. A healthy growth pattern showing AI expanding into previously manual territory.
When the autonomous line plateaus: You may have reached current knowledge limits. Review unresolved autonomous tickets to identify gaps preventing further automation growth.

Ticketing-Specific Metrics Deep Dive

Ticket Deflection Rate

While not displayed as a separate card, you can calculate ticket deflection using the metrics provided:
Ticket Deflection Rate = (Autonomous Resolved) / (Total Tickets Received) × 100%
This shows the percentage of incoming tickets that were completely handled by AI without any human intervention.
Benchmarks:
  • 40%+ deflection = Excellent automation, significant workload reduction
  • 25-40% deflection = Good automation with room for growth
  • 10-25% deflection = Early-stage automation, expand coverage
  • Below 10% deflection = Limited automation, focus on knowledge expansion

First Contact Resolution

First contact resolution measures whether the first response (from AI or human) successfully resolved the ticket:
  • For autonomous tickets - If AI resolves the ticket in the first interaction, this is 100% first contact resolution.
  • For public involvement - If AI’s first response led to resolution without additional human clarification questions, this counts as first contact resolution.
Track this by filtering to resolved tickets and reviewing message counts. Shorter conversations typically indicate better first contact resolution.

Response Time Reduction

Compare average first response time before and after AI deployment:
Pre-AI average first response: 8 hours (during business hours only)
With AI:
- Autonomous tickets: < 1 minute (instant)
- Public tickets: Mix of instant AI + delayed human (average 2 hours)
- Weighted average based on involvement: ~2.5 hours

Improvement: 69% reduction in first response time
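The weighted average above can be sketched as follows; the segment shares and response times mirror the example and are assumptions to replace with your own data:

def weighted_frt(segments: list) -> float:
    """Weighted average first response time in minutes.
    segments: (share of tickets, FRT in minutes) pairs; shares should sum to 1."""
    return sum(share * frt for share, frt in segments)

# ~50% autonomous (instant), 25% public (~2h), 25% human-only (8h)
avg = weighted_frt([(0.50, 1), (0.25, 120), (0.25, 480)])
print(f"{avg:.0f} minutes (~{avg / 60:.1f} hours)")  # ~150 minutes (~2.5 hours)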

Agent Workload Reduction

Calculate the specific workload reduction from AI automation:
Formula:
Workload Reduction = (Autonomous Tickets) × (Average Handling Time)

Example:
- 300 autonomous resolved tickets per week
- 12 minutes average handling time per ticket
- Workload reduction: 300 × 12 = 3,600 minutes = 60 hours per week

This equals 1.5 FTE (full-time equivalent) at 40 hours/week
Monitor whether overall ticket volume decreases as AI improves:
Month 1: 1,000 tickets (baseline)
Month 2: 950 tickets (-5%)
Month 3: 920 tickets (-3% MoM, -8% vs baseline)

Interpretation: AI may be deflecting tickets before they're created
(customers find answers faster, submit fewer tickets)
This is an indirect but powerful indicator of AI effectiveness - better self-service means fewer tickets overall.

Using the Ticketing View in the Metrics Dashboard

The Ticketing View is optimized for support team workflows. Here’s how to use it effectively:

Daily Monitoring

Morning check (2 minutes):
  1. Glance at the four key metric cards
  2. Note any dramatic changes (red/green trend indicators)
  3. Check Better Monday Score if it’s Monday morning
Purpose: Catch integration issues or sudden performance changes quickly.

Weekly Analysis

Wednesday review (15 minutes):
  1. Review Involvement Rate trend - is it growing?
  2. Check Sankey diagram for any unusual flow patterns
  3. Filter to escalated tickets from this week
  4. Identify top 2-3 escalation reasons
Purpose: Identify knowledge gaps and improvement opportunities.

Monthly Planning

First week of month (1 hour):
  1. Compare all metrics month-over-month
  2. Calculate ROI (tickets saved, hours saved, cost reduction)
  3. Review pivot table for outcome distribution changes
  4. Export data for stakeholder reports
Purpose: Demonstrate value and plan next month’s priorities.

Integration with Zendesk and Salesforce Reporting

botBrains ticketing metrics complement your existing support platform analytics. Use both together for complete visibility.

Zendesk Explore Integration

botBrains metrics focus on AI performance, while Zendesk Explore provides broader support metrics. Compare:
In the botBrains Ticketing View:
  • AI involvement rates
  • Autonomous resolution rates
  • Better Monday Score
  • Topic-specific AI performance
In Zendesk Explore:
  • Overall ticket volumes and trends
  • Agent performance metrics
  • SLA compliance rates
  • Customer satisfaction by ticket type
  • First response time and resolution time
Combined Analysis:
Example Insight:
Zendesk shows: Average first response time = 4 hours
botBrains shows: 65% autonomous rate

Calculation:
- 65% of tickets get immediate AI response (< 1 minute)
- 35% wait for a human agent; for the blended average to be 4 hours, those tickets must average ~11 hours
- Without AI, average first response would be ~11 hours
- AI improved first response time by 7 hours (64% reduction)
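A sketch of the back-of-envelope step above - deriving the human-only FRT from the blended average and the autonomous share (both taken from the example):

def implied_human_frt(blended_avg_hours: float, autonomous_share: float,
                      ai_frt_hours: float = 1 / 60) -> float:
    """Back out the average human-only FRT implied by a blended average,
    given the share of tickets the AI answers near-instantly."""
    human_share = 1 - autonomous_share
    return (blended_avg_hours - autonomous_share * ai_frt_hours) / human_share

# Zendesk blended average of 4 hours with 65% autonomous coverage
print(f"Without AI, FRT would be ~{implied_human_frt(4.0, 0.65):.0f} hours")  # ~11 hours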

Salesforce Service Cloud Dashboards

Salesforce has a similar complementary relationship:
In the botBrains Ticketing View:
  • AI effectiveness by case type
  • Involvement and autonomous rates
  • Weekend coverage metrics
In Salesforce Dashboards:
  • Case volume by product/priority
  • Agent productivity metrics
  • Escalation paths and routing
  • Customer survey results
Combined Analysis:
Create custom Salesforce reports to:
  • Include cases with botBrains AI responses (filter by the AI agent)
  • Compare resolution time: AI-only vs. AI-assisted vs. human-only
  • Track which case types have highest AI involvement
  • Measure deflection impact on queue backlogs

Exporting Data for Cross-Platform Analysis

  1. Export botBrains ticketing metrics (CSV format)
  2. Export Zendesk/Salesforce reports for same time period
  3. Join datasets on ticket ID or date
  4. Create combined dashboards in Excel, Tableau, or PowerBI
Useful combined metrics:
  • AI involvement rate by ticket priority
  • First response time reduction from AI
  • Cost savings (tickets handled × average handling time × hourly cost)
  • SLA compliance improvement from AI
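A minimal pandas sketch of steps 3-4 above; the file and column names are hypothetical, so match them to your actual exports:

import pandas as pd

bb = pd.read_csv("botbrains_tickets.csv")  # e.g. ticket_id, involvement, outcome
zd = pd.read_csv("zendesk_export.csv")     # e.g. ticket_id, priority, first_response_min

# Join the two exports on the shared ticket ID
merged = bb.merge(zd, on="ticket_id", how="inner")

# Example combined metric: AI involvement rate by ticket priority
involved = merged["involvement"] != "not_involved"
print((involved.groupby(merged["priority"]).mean() * 100).round(1))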

Identifying Optimization Opportunities

Use ticketing metrics together to find high-impact improvements.

Opportunity 1: Convert Public to Autonomous

Signal: High public involvement rate with a high resolution rate
Analysis:
  • AI starts tickets well but hands off to humans
  • Humans successfully finish most tickets
  • Knowledge gaps prevent AI from completing autonomously
Action:
  1. Filter to public involvement tickets with resolved status
  2. Review the final human responses
  3. Extract knowledge patterns humans are providing
  4. Add this knowledge to AI data providers
  5. Monitor if more tickets shift to autonomous
Expected Impact: 10-20 percentage point increase in autonomous rate

Opportunity 2: Reduce Weekend Backlog

Signal: Low Better Monday Score (below 50%) with high weekend ticket volume
Analysis:
  • Many tickets arrive on weekends
  • AI isn’t answering most of them
  • Monday team faces large backlog
Action:
  1. Review unanswered weekend tickets by topic
  2. Identify common themes in what wasn’t answered
  3. Add knowledge for these specific topics
  4. Adjust escalation rules to be less aggressive outside business hours
  5. Enable autonomous mode for routine weekend inquiries
Expected Impact: 20-30 percentage point Better Monday Score improvement

Opportunity 3: Improve Escalation Efficiency

Signal: High escalation rate in autonomous tickets (Sankey flow)
Analysis:
  • AI attempts many tickets
  • Large percentage escalate to humans
  • Some escalations may be preventable
Action:
  1. Click Autonomous → Escalated flow to filter conversations
  2. Group escalations by reason/topic
  3. For top 3 escalation triggers:
    • Add missing knowledge if information gap
    • Create specific guidance if judgment issue
    • Set explicit escalation rules if intentional
  4. Track escalation rate for these topics specifically
Expected Impact: 5-10 percentage point reduction in escalation rate

Opportunity 4: Increase Involvement Rate

Signal: Low overall involvement rate (below 60%) with good autonomous success when AI does participate
Analysis:
  • AI performs well when used
  • Not being used enough
  • Adoption barrier problem, not capability problem
Action:
  1. Review “Not Involved” tickets to understand why AI didn’t participate
  2. Check for:
    • Overly restrictive exclusion rules
    • Integration disabled for certain queues/categories
    • Team manually disabling AI without clear reason
  3. Expand AI coverage to excluded ticket types
  4. Train support team on when/how to enable AI assistance
Expected Impact: 15-25 percentage point involvement rate increase

Best Practices for Ticketing System Monitoring

Establish a Ticketing Review Routine

Weekly Quick Check (15 minutes):
  1. Review key metrics: Involvement Rate, Autonomous Rate, Better Monday Score
  2. Check for sudden drops or spikes
  3. Compare to previous week
  4. Note any anomalies for investigation
Bi-Weekly Deep Dive (1 hour):
  1. Analyze Sankey diagram - where are tickets flowing?
  2. Review pivot table - which combinations underperform?
  3. Filter to autonomous → escalated tickets
  4. Identify top 3 escalation reasons
  5. Plan knowledge additions to address gaps
Monthly Strategic Review (2-3 hours):
  1. Calculate ROI metrics (tickets automated, hours saved, costs reduced)
  2. Review involvement rate evolution over full month
  3. Analyze Better Monday Score trend - is weekend coverage improving?
  4. Compare performance across ticket types or topics
  5. Present findings to support leadership with recommendations

Set Realistic Automation Goals

Define specific, measurable targets based on your current baseline:
Example goal setting for 3 months:

Current State:
- Involvement Rate: 55%
- Relative Autonomous Rate: 48%
- Better Monday Score: 45%

3-Month Goals:
- Involvement Rate: 70% (+15pp)
- Relative Autonomous Rate: 60% (+12pp)
- Better Monday Score: 65% (+20pp)

Track Weekly Progress:
Week 1: 55% / 48% / 45% (baseline)
Week 4: 59% / 52% / 51% (+4pp / +4pp / +6pp) - on track
Week 8: 64% / 56% / 58% (+5pp / +4pp / +7pp) - on track
Week 12: 71% / 61% / 67% (+7pp / +5pp / +9pp) - exceeded goals ✓

Balance Automation with Quality

Don’t optimize ticketing metrics at the expense of customer experience.
Monitor CSAT alongside automation:
  • High autonomous rate with low CSAT = AI answering quickly but poorly
  • High autonomous rate with high CSAT = Successful automation
  • Track CSAT specifically for autonomous tickets vs. human-assisted
Maintain escalation thresholds:
  • Don’t eliminate all escalations to boost autonomous rate
  • Some tickets genuinely require human expertise
  • Better to escalate appropriately than provide inadequate automated responses
Review edge cases regularly:
  • Sample 10-20 autonomous resolved tickets weekly
  • Verify answers were actually correct and helpful
  • Check if tickets were marked “resolved” prematurely

Segment Analysis by Ticket Attributes

Use labels and filters to analyze performance by:
Customer Tier:
Enterprise: 75% involvement, 55% autonomous (complex needs)
SMB: 80% involvement, 70% autonomous (more routine)
Free: 85% involvement, 75% autonomous (simple questions)

Insight: Enterprise customers need more human touch; lower autonomous rates are appropriate
Ticket Priority:
Critical: 50% involvement, 30% autonomous (intentional human escalation)
High: 70% involvement, 60% autonomous (selective automation)
Medium: 85% involvement, 75% autonomous (high automation)
Low: 90% involvement, 80% autonomous (routine inquiries)

Insight: Automation inversely correlated with priority - as designed
Product Area:
Billing: 90% involvement, 85% autonomous (well-documented)
Technical: 65% involvement, 45% autonomous (complex troubleshooting)
Account: 75% involvement, 65% autonomous (moderate complexity)

Insight: Technical support needs knowledge expansion

Troubleshooting and Common Issues

Metrics Don’t Match Zendesk/Salesforce

Issue: botBrains shows different ticket counts than your support platform
Causes and Solutions:
  • Date range mismatch - botBrains uses the ticket creation date; your platform may use the update date. Ensure you’re comparing the same time period.
  • Ticket status filters - botBrains counts all tickets; your platform report may filter specific statuses. Check whether you’re excluding closed, spam, or deleted tickets.
  • Integration timing - New tickets may take 1-2 minutes to sync to botBrains, so recent tickets might not appear immediately in metrics.
  • Manual imports - If you imported historical tickets, these may not have all metadata correctly mapped.
Solution: Export both datasets for the same period and compare ticket IDs to identify discrepancies.

Involvement Rate Suddenly Dropped

Issue: AI participation decreased significantly without explanation
Common Causes:
  • Integration disabled - Check if the Zendesk/Salesforce integration was accidentally toggled off
  • Deployment paused - Verify your active deployment is running
  • Exclusion rules added - Review whether new queue/category exclusions were configured
  • Queue changes - Check if tickets are routed to queues botBrains doesn’t monitor
  • API credentials expired - Ensure API tokens haven’t been rotated or revoked
Diagnostic Steps:
  1. Navigate to Deploy → Integrations
  2. Verify integration status is “Active”
  3. Check recent webhook events for errors
  4. Review exclusion rules configuration
  5. Test by creating a new ticket manually

Better Monday Score is 0%

Issue: The weekend coverage metric shows 0% or very low
Possible Causes:
  • No weekend tickets - Your date range may not include Saturday/Sunday, or you genuinely had no weekend tickets
  • Deployment schedule - Check if your deployment is disabled on weekends (intentionally or accidentally)
  • Aggressive escalation - AI may be escalating all weekend tickets instead of attempting responses
  • Private mode enabled - If the integration is in private mode, AI isn’t posting public responses (which the Better Monday Score requires)
Solutions:
  • Verify deployment runs 24/7
  • Review weekend ticket escalation patterns
  • Consider enabling public mode for routine ticket types on weekends
  • Adjust guidance to be more confident in autonomous responses outside business hours

High Autonomous Rate but Low Resolution Rate

Issue: AI handles many tickets autonomously but doesn’t resolve them successfully
This indicates quality problems - the AI is marking tickets complete prematurely or providing insufficient answers.
Diagnostic Steps:
  1. Filter to Autonomous + Unresolved in Sankey diagram
  2. Read 20-30 of these tickets
  3. Identify patterns:
    • Is AI misunderstanding questions?
    • Is AI providing correct but incomplete answers?
    • Are tickets marked resolved prematurely?
    • Is information outdated or incorrect?
Solutions:
  • Refine guidance to ensure complete answers before resolving
  • Add validation checks before marking tickets complete
  • Update knowledge with more comprehensive information
  • Adjust AI confidence thresholds for autonomous resolution
  • Consider requiring human confirmation for edge cases

Private Involvement Very High

Issue: Most AI involvement is private (copilot) rather than public or autonomous
This suggests team hesitation to let AI interact directly with customers.
Possible Reasons:
  • Trust-building phase - The team is testing AI before full deployment (expected early on)
  • Sensitive ticket types - Billing, legal, or VIP tickets may intentionally use private mode
  • Quality concerns - The team doesn’t trust AI to respond publicly yet
  • Training opportunity - Agents may not understand when/how to enable public mode
Solutions:
  • Review private involvement tickets that had high-quality AI suggestions
  • Share examples of good AI responses with team
  • Gradually enable public mode for specific straightforward ticket types
  • Create clear guidelines on when private vs. public is appropriate
  • Provide training on reviewing and approving AI suggestions

Calculating ROI from Ticketing Metrics

Demonstrate the value of AI automation with concrete calculations.

Time Savings Calculation

Formula:
Time Saved = (Autonomous Resolved Tickets) × (Average Handling Time per Ticket)

Example:
- 450 autonomous resolved tickets last month
- Average handling time: 12 minutes per ticket
- Time saved: 450 × 12 = 5,400 minutes = 90 hours

Result: AI saved 90 agent hours last month

Cost Savings Calculation

Formula:
Cost Savings = (Time Saved in Hours) × (Average Hourly Support Cost)

Example:
- 90 hours saved per month
- Average support cost: $35/hour (salary + benefits + overhead)
- Monthly savings: 90 × $35 = $3,150

Annual Projection: $3,150 × 12 = $37,800

Ticket Deflection Rate

Formula:
Deflection Rate = (Autonomous Resolved) / (Autonomous Resolved + Public + Private) × 100%

Example:
- 450 autonomous resolved
- 150 public involvement
- 50 private involvement
- Total: 650 tickets involved

Deflection Rate: 450 / 650 = 69.2%

Interpretation: 69% of tickets with AI involvement were fully deflected from agents.
Note: this variant measures deflection among AI-involved tickets; the deflection rate defined earlier in this guide divides by total tickets received instead.

First Response Time Improvement

Formula:
Weighted Average FRT = (Auto % × Auto FRT) + (Human % × Human FRT)

Example:
- 65% autonomous (immediate response, ~1 minute)
- 35% human-only (average 8 hours first response)
- Weighted average: (0.65 × 1 min) + (0.35 × 480 min) = 168 minutes = 2.8 hours

Without AI: 8-hour average FRT
With AI: 2.8-hour average FRT
Improvement: 65% reduction in first response time
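As a closing sketch, the ROI calculations above can be strung together; all inputs below are the example figures, so replace them with your own metrics and costs:

def roi_summary(auto_resolved: int, public: int, private: int,
                handle_min: float, hourly_cost: float) -> dict:
    """Combine the ROI formulas above: hours saved, cost savings, deflection."""
    hours_saved = auto_resolved * handle_min / 60
    involved = auto_resolved + public + private
    return {
        "hours_saved": hours_saved,
        "cost_savings": hours_saved * hourly_cost,
        "deflection_rate_pct": round(auto_resolved / involved * 100, 1) if involved else 0.0,
    }

# 450 autonomous resolved, 150 public, 50 private; 12 min AHT at $35/hour
print(roi_summary(450, 150, 50, 12, 35))
# {'hours_saved': 90.0, 'cost_savings': 3150.0, 'deflection_rate_pct': 69.2}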

Next Steps

Now that you understand ticketing system performance monitoring, remember: ticketing metrics are most valuable when reviewed consistently, compared over time, and translated into concrete actions. Weekly review of involvement rates, autonomous resolution, and the Better Monday Score - combined with systematic knowledge improvements - compounds into dramatic agent workload reduction and faster customer resolutions.
High ticket automation isn’t about replacing human agents. It’s about freeing them from routine inquiries so they can focus on complex problems that genuinely require human expertise, empathy, and judgment. Monitor your ticketing metrics to find that optimal balance.