Insights provide comprehensive analytics and performance metrics to help you understand how your AI agent is performing and where to optimize. By analyzing key metrics, you can make data-driven decisions to improve customer satisfaction, automate more conversations, and identify areas for improvement.

Why Insights Matter

Analytics transform raw data into actionable intelligence. With botBrains Insights, you can:
  • Measure impact: Quantify how your AI agent is reducing support workload
  • Identify gaps: Discover questions your AI struggles to answer
  • Track satisfaction: Monitor customer feedback and sentiment trends
  • Optimize performance: Focus improvements on high-impact areas
  • Demonstrate value: Show stakeholders concrete ROI metrics
All metrics support filtering by date range, channel, and labels to drill down into specific segments of your data.

General Metrics

The General view provides a comprehensive overview of your AI agent’s activity and performance.

Overview Metrics

Messages: Total number of messages exchanged in conversations. Tracks the volume of interactions between users and your AI.
Conversations: Total number of unique conversations started. A conversation is a distinct interaction thread with a user.
Unique Users: Count of distinct users who have interacted with your AI agent during the selected time period.
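For intuition on how the three counts relate, here is a minimal sketch over a flat message log; the field names are illustrative, not an export schema:

```python
# Hypothetical flat message log; field names are illustrative.
messages = [
    {"conversation_id": "c1", "user_id": "u1"},
    {"conversation_id": "c1", "user_id": "u1"},
    {"conversation_id": "c2", "user_id": "u2"},
    {"conversation_id": "c3", "user_id": "u1"},
]

total_messages = len(messages)                                  # every message
conversations = len({m["conversation_id"] for m in messages})   # distinct threads
unique_users = len({m["user_id"] for m in messages})            # distinct users
print(total_messages, conversations, unique_users)  # 4 3 2
```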

Customer Satisfaction (CSAT)

CSAT measures how satisfied customers are with their AI interactions.
Rating Scale:
  • Abandoned (0): User was offered the survey but didn’t respond
  • Terrible (1): Very dissatisfied
  • Bad (2): Dissatisfied
  • OK (3): Neutral
  • Good (4): Satisfied
  • Amazing (5): Very satisfied
  • Unoffered: CSAT survey was not presented to the user
CSAT Score: Percentage of satisfied customers (Good + Amazing ratings) out of all rated conversations (1-5). An industry-standard metric for measuring customer satisfaction.
CSAT Score = (Good + Amazing) / (All 1-5 Ratings) × 100%
DSAT Score: Percentage of dissatisfied customers (Terrible + Bad ratings). Helps identify problematic interactions.
DSAT Score = (Terrible + Bad) / (All 1-5 Ratings) × 100%
Response Rate: Percentage of users who provided a rating (1-5) when offered the survey.
Sample Size: Percentage of conversations where CSAT was offered (0-5 ratings).
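The rating scale and formulas above map directly to code. This sketch tallies survey outcomes and computes all four metrics; RatingCounts is an illustrative structure, not a botBrains API object:

```python
from dataclasses import dataclass

@dataclass
class RatingCounts:
    """Illustrative tally of CSAT outcomes; not a botBrains API object."""
    unoffered: int = 0   # survey never shown
    abandoned: int = 0   # rating 0: offered but not answered
    terrible: int = 0    # 1
    bad: int = 0         # 2
    ok: int = 0          # 3
    good: int = 0        # 4
    amazing: int = 0     # 5

def csat_metrics(c: RatingCounts) -> dict[str, float]:
    rated = c.terrible + c.bad + c.ok + c.good + c.amazing  # all 1-5 ratings
    offered = rated + c.abandoned                           # all 0-5 ratings
    total = offered + c.unoffered                           # every conversation
    pct = lambda part, whole: 100 * part / whole if whole else 0.0
    return {
        "csat": pct(c.good + c.amazing, rated),   # satisfied share of rated
        "dsat": pct(c.terrible + c.bad, rated),   # dissatisfied share of rated
        "response_rate": pct(rated, offered),     # answered when offered
        "sample_size": pct(offered, total),       # surveys offered at all
    }

print(csat_metrics(RatingCounts(unoffered=40, abandoned=10,
                                terrible=2, bad=3, ok=5, good=25, amazing=15)))
# {'csat': 80.0, 'dsat': 10.0, 'response_rate': 83.3..., 'sample_size': 60.0}
```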
A high CSAT score (above 80%) indicates your AI is effectively handling customer inquiries. Low DSAT scores (below 10%) show few negative experiences.

Conversation Status

Tracks the resolution status of conversations.
Resolved: User’s question was fully answered by the AI or support team.
Unresolved: Question was inadequately addressed or still needs clarification.
Escalated: Conversation was handed off to a human agent.
Resolution Rate: Percentage of conversations that were successfully resolved.
Resolution Rate = Resolved / (Resolved + Escalated + Unresolved) × 100%
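As a quick sketch of the formula (the status labels are illustrative, not botBrains API values):

```python
from collections import Counter

def resolution_rate(statuses: list[str]) -> float:
    """Share of conversations resolved; statuses are 'resolved',
    'unresolved', or 'escalated' (illustrative labels)."""
    counts = Counter(statuses)
    total = counts["resolved"] + counts["escalated"] + counts["unresolved"]
    return 100 * counts["resolved"] / total if total else 0.0

print(resolution_rate(["resolved"] * 72 + ["escalated"] * 18 + ["unresolved"] * 10))  # 72.0
```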
A high resolution rate (above 70%) indicates your AI is effectively solving customer problems. Monitor unresolved conversations to identify knowledge gaps.

User Sentiment

Analyzes the emotional tone of user messages.
  • Positive: User expresses satisfaction, happiness, or appreciation
  • Neutral: Factual or emotionally neutral messages
  • Negative: User expresses frustration, anger, or dissatisfaction
Sentiment analysis helps you identify problematic interactions beyond explicit CSAT ratings.

Answer Completeness

Measures how thoroughly the AI addresses user questions.
  • Complete: Full, comprehensive answer provided
  • Partial: Some information provided but incomplete
  • No Answer: Unable to provide relevant information

Conversation Length

Distribution of conversations by number of messages exchanged. Helps identify if conversations are efficiently resolved or require too many back-and-forth exchanges.
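If you export conversation data, the same distribution can be rebuilt in a few lines; the bucket edges below are illustrative, not necessarily the ones the chart uses:

```python
from collections import Counter

# One entry per conversation, e.g. from a CSV export (hypothetical data)
message_counts = [2, 3, 3, 4, 6, 6, 7, 12, 15, 3]

def length_buckets(counts: list[int]) -> Counter:
    """Bucket conversations by message count (bucket edges are illustrative)."""
    def bucket(n: int) -> str:
        if n <= 2:
            return "1-2"
        if n <= 5:
            return "3-5"
        if n <= 10:
            return "6-10"
        return "11+"
    return Counter(bucket(n) for n in counts)

print(length_buckets(message_counts))
# Counter({'3-5': 4, '6-10': 3, '11+': 2, '1-2': 1})
```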

User Language

Shows the distribution of languages used by your customers. Helps you understand your audience’s language preferences and ensure proper localization support.

Usage by Page

Tracks which pages on your website generate the most AI interactions. Useful for understanding where customers need help most.

Message Activity Heatmaps

Weekly Heatmap: Visualizes message volume by day of week and hour. Identifies peak usage times and quiet periods.
Yearly Heatmap: Shows message volume across months and days of the month. Reveals seasonal trends and patterns.
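If you export raw message timestamps, the weekly heatmap reduces to a simple aggregation; this sketch assumes plain datetime values and is not tied to any botBrains export format:

```python
from collections import Counter
from datetime import datetime

def weekly_heatmap(timestamps: list[datetime]) -> Counter:
    """Aggregate message timestamps into (weekday, hour) cells;
    weekday 0 = Monday, matching datetime.weekday()."""
    return Counter((ts.weekday(), ts.hour) for ts in timestamps)

# A few hypothetical message timestamps
msgs = [datetime(2024, 5, 6, 9, 15), datetime(2024, 5, 6, 9, 40),
        datetime(2024, 5, 7, 14, 5)]
grid = weekly_heatmap(msgs)
print(grid[(0, 9)])  # 2 messages on Monday between 09:00 and 10:00
```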
Use heatmaps to optimize your AI’s availability and ensure adequate support coverage during peak hours.

Ticketing Metrics

The Ticketing view focuses on AI involvement in customer support workflows and automation efficiency.

Involvement Rate

Shows how the AI participates in conversations across four categories:
Autonomous Involvement: AI handled the complete conversation without any human operator involvement. This represents full automation and the highest efficiency.
Calculation: Conversations where AI responses were sufficient
Color: Green
Public Involvement: AI generated customer-visible responses, and a human operator later got involved. Represents partial automation with human oversight.
Calculation: Conversations with AI public messages + human operator
Color: Blue
Private Involvement: AI suggested responses internally, but the human operator handled all customer-facing communication. Represents AI as a copilot.
Calculation: Conversations with AI private suggestions only
Color: Orange
Uninvolved: No AI involvement at all. Occurs with imported tickets and third-party synced outbound messages.
Calculation: Conversations with zero AI interaction
Color: Gray
These categories are mutually exclusive: each conversation belongs to exactly one category based on the AI’s level of involvement.
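A minimal sketch of how such mutually exclusive bucketing can be expressed; the Conversation fields are assumptions for illustration, not the botBrains data model:

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    """Illustrative shape; not the botBrains data model."""
    ai_public_messages: int      # AI replies the customer saw
    ai_private_suggestions: int  # internal AI drafts for the operator
    human_messages: int          # operator replies to the customer

def involvement_category(c: Conversation) -> str:
    """Assign exactly one of the four mutually exclusive categories."""
    if c.ai_public_messages and not c.human_messages:
        return "autonomous"
    if c.ai_public_messages:
        return "public"     # AI replied publicly, a human also stepped in
    if c.ai_private_suggestions:
        return "private"    # AI only assisted behind the scenes
    return "uninvolved"

print(involvement_category(Conversation(2, 0, 0)))  # autonomous
print(involvement_category(Conversation(1, 0, 3)))  # public
print(involvement_category(Conversation(0, 2, 3)))  # private
```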

Key Involvement Metrics

Involvement Rate: Percentage of conversations where the AI was involved in any capacity (Autonomous + Public + Private).
Involvement Rate = (Autonomous + Public + Private) / Total Conversations × 100%
Involved Tickets: Absolute count of tickets where AI participated.
Relative Autonomous Rate: Percentage of involved tickets that were fully autonomous (no human intervention).
Relative Autonomous Rate = Autonomous / (Autonomous + Public + Private) × 100%
Better Monday Score: Measures how many weekend tickets (Saturday and Sunday) received at least one AI response. Higher scores mean fewer tickets pile up for Monday morning.
Better Monday Score = (Weekend Autonomous + Weekend Public) / Total Weekend Tickets × 100%
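These formulas translate directly into code. The sketch below derives the metrics from per-category counts; the inputs are assumed tallies (e.g. from a CSV export), not a documented API:

```python
def involvement_metrics(autonomous: int, public: int, private: int,
                        uninvolved: int) -> dict[str, float]:
    """Derive the key involvement metrics from per-category counts."""
    involved = autonomous + public + private
    total = involved + uninvolved
    return {
        "involvement_rate": 100 * involved / total if total else 0.0,
        "involved_tickets": involved,
        "relative_autonomous_rate": 100 * autonomous / involved if involved else 0.0,
    }

def better_monday_score(weekend_autonomous: int, weekend_public: int,
                        total_weekend_tickets: int) -> float:
    """Share of weekend tickets that got at least one AI response."""
    if not total_weekend_tickets:
        return 0.0
    return 100 * (weekend_autonomous + weekend_public) / total_weekend_tickets

print(involvement_metrics(autonomous=50, public=20, private=10, uninvolved=20))
# {'involvement_rate': 80.0, 'involved_tickets': 80, 'relative_autonomous_rate': 62.5}
print(better_monday_score(12, 6, 25))  # 72.0
```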
Target a Better Monday Score above 60% to significantly reduce Monday morning ticket backlogs.

Using Insights for Optimization

Increase Autonomous Rate
  1. Review Public conversations to see where humans intervened
  2. Identify common patterns in these interventions
  3. Add knowledge or improve prompts to handle these cases autonomously
Improve CSAT Score
  1. Filter conversations with Terrible or Bad ratings
  2. Analyze what went wrong in these interactions
  3. Update knowledge base or refine AI responses
Reduce Unresolved Conversations
  1. Examine unresolved conversations for common themes
  2. Add missing information to your knowledge base
  3. Improve AI’s ability to understand user intent
Optimize Response Coverage
  1. Check Response Rate and Sample Size metrics
  2. Ensure CSAT surveys are offered at appropriate times
  3. Adjust survey triggers based on conversation characteristics

Filtering and Date Ranges

All metrics support powerful filtering options:
Date Range: Select any custom date range to analyze specific time periods. Metrics show period-over-period comparisons when viewing recent data.
Channel Filter: Filter by communication channel (web chat, email, etc.) to analyze channel-specific performance.
Label Filter: Use conversation labels to segment data by topic, product, or custom categories.
Combine filters to create precise segments. For example: “Web chat conversations about pricing in the last 30 days.”
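The same segment can be reproduced in code against exported data; the record fields below are assumptions about an export layout, not a documented schema:

```python
from datetime import date, timedelta

# Hypothetical rows from a CSV export; field names are illustrative.
conversations = [
    {"date": date(2024, 5, 20), "channel": "web_chat", "labels": ["pricing"]},
    {"date": date(2024, 5, 21), "channel": "email",    "labels": ["billing"]},
    {"date": date(2024, 2, 1),  "channel": "web_chat", "labels": ["pricing"]},
]

def segment(rows, channel, label, days, today=date(2024, 6, 1)):
    """Combine date-range, channel, and label filters into one segment."""
    cutoff = today - timedelta(days=days)
    return [r for r in rows
            if r["date"] >= cutoff
            and r["channel"] == channel
            and label in r["labels"]]

# "Web chat conversations about pricing in the last 30 days"
print(segment(conversations, channel="web_chat", label="pricing", days=30))
```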

Export and Reporting

Every chart includes an export menu (visible on hover) with options to:
  • Export as PNG: Download chart visualizations
  • Export as CSV: Get raw data for custom analysis
  • Copy Data: Quick access to data for spreadsheets
Use exports to create custom reports, share insights with stakeholders, or perform deeper analysis in your own tools.

Best Practices

  1. Monitor trends over time: Don’t just look at point-in-time metrics. Track how they change week over week or month over month.
  2. Set benchmarks: Establish baseline metrics when you launch, then set improvement targets.
  3. Investigate anomalies: Sudden drops in CSAT or spikes in unresolved conversations warrant investigation.
  4. Focus on actionable metrics: Prioritize metrics that inform specific improvements (e.g., which topics need better answers).
  5. Review regularly: Schedule weekly or monthly reviews to stay on top of performance trends.
  6. Segment your analysis: Use filters to understand performance across different user groups, products, or channels.