# Monitoring & Analytics

Track, measure, and optimize your AI agents' performance with comprehensive monitoring and analytics tools.

## Overview

Understanding how your agents perform is critical to delivering value. Our monitoring and analytics suite provides visibility into:

* **Usage patterns** - Who's using agents and how often
* **Response quality** - How accurate and helpful responses are
* **Performance metrics** - Response times and system health
* **Cost analysis** - Token usage and associated costs
* **User satisfaction** - Feedback and ratings

## Key Tools

### [Analytics Dashboard](/product/monitoring/view-analytics.md)

Your central hub for monitoring agent performance and usage. Get real-time insights with interactive visualizations.

**What You'll See:**

* Total queries and trends over time
* Most active agents and users
* Popular questions and topics
* Geographic usage distribution
* Success rates and error tracking

**Use Cases:**

* Track adoption across your organization
* Identify high-value use cases
* Spot usage anomalies
* Demonstrate ROI to stakeholders

***

### [Inbox & Training](/product/monitoring/inbox-training.md)

Review conversations and improve agent responses through active learning and human feedback.

**Key Features:**

* Conversation review queue
* Thumbs up/down feedback collection
* Annotation and correction tools
* Training data curation
* Quality assurance workflows

**Use Cases:**

* Improve response accuracy
* Identify knowledge gaps
* Curate training examples
* Quality control for customer-facing agents

***

### [Evaluation Framework](/product/monitoring/evals.md)

Systematically measure and improve agent performance with automated evaluations.

**Capabilities:**

* Automated testing of agent responses
* Benchmark datasets for comparison
* A/B testing different configurations
* Regression detection
* Custom evaluation metrics

**Use Cases:**

* Test changes before deployment
* Track improvements over time
* Compare different prompts or models
* Ensure consistent quality

***

### [Performance Tuning](/product/monitoring/performance-tuning.md)

Optimize response speed, accuracy, and cost through systematic tuning of agent parameters.

**What You Can Tune:**

* RAG strategy selection
* Chunking parameters
* Retrieval settings
* Model selection and parameters
* Caching strategies

**Use Cases:**

* Reduce latency for time-sensitive applications
* Improve accuracy for critical use cases
* Balance quality vs. speed trade-offs

***

### [Cost Optimization](/product/monitoring/cost-optimization.md)

Monitor and reduce costs associated with AI operations while maintaining quality.

**Cost Visibility:**

* Token usage by agent, user, and time period
* Model costs (embeddings, completions, reranking)
* Data processing costs
* Total cost of ownership

**Optimization Strategies:**

* Caching frequently requested information
* Choosing cost-effective models
* Optimizing context window usage
* Reducing unnecessary API calls

***

## Monitoring Best Practices

### 1. Set Baseline Metrics

Before optimization, establish baseline performance:

* Current response times
* Typical accuracy rates
* Normal usage patterns
* Baseline costs

### 2. Define Success Metrics

Determine what success looks like for your use case:

* Target response accuracy (e.g., 90%+ thumbs up)
* Acceptable latency (e.g., <3 seconds)
* Cost per query targets
* Adoption rates
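Targets like these can be encoded as a simple automated check. The metric names and thresholds below are illustrative, mirroring the examples above (the `cost_per_query` budget is a made-up figure):

```python
# Illustrative success-metric targets; adjust to your own use case.
TARGETS = {
    "thumbs_up_rate": 0.90,   # target: 90%+ thumbs up
    "p95_latency_s": 3.0,     # target: under 3 seconds
    "cost_per_query": 0.05,   # hypothetical per-query budget in dollars
}

def check_metrics(observed: dict) -> list[str]:
    """Return human-readable violations; empty list means all targets met."""
    violations = []
    if observed["thumbs_up_rate"] < TARGETS["thumbs_up_rate"]:
        violations.append(f"thumbs_up_rate {observed['thumbs_up_rate']:.2f} below target")
    if observed["p95_latency_s"] > TARGETS["p95_latency_s"]:
        violations.append(f"p95_latency_s {observed['p95_latency_s']:.1f}s above target")
    if observed["cost_per_query"] > TARGETS["cost_per_query"]:
        violations.append(f"cost_per_query ${observed['cost_per_query']:.3f} over budget")
    return violations
```

A check like this can run on a schedule and feed the alerting described later on this page.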

### 3. Monitor Continuously

Set up regular monitoring routines:

* Daily: Check for errors or anomalies
* Weekly: Review usage trends and costs
* Monthly: Analyze conversation quality
* Quarterly: Evaluate ROI and strategic impact

### 4. Act on Insights

Use data to drive improvements:

* Add missing knowledge to fill gaps
* Adjust prompts based on feedback
* Optimize performance bottlenecks
* Scale resources based on usage

### 5. Close the Loop

Create feedback cycles:

* User feedback → Training data
* Analytics insights → Configuration changes
* Performance issues → Infrastructure upgrades
* Cost trends → Optimization initiatives

## Key Metrics to Track

### Usage Metrics

* **Total Queries**: Overall volume of requests
* **Active Users**: Unique users engaging with agents
* **Queries per User**: Average engagement level
* **Peak Usage Times**: When demand is highest

### Quality Metrics

* **User Satisfaction**: Thumbs up/down ratios
* **Response Accuracy**: Correct vs. incorrect answers
* **Source Attribution**: Percentage with citations
* **Fallback Rate**: How often "I don't know" is returned
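As a sketch, the satisfaction and fallback ratios are straightforward to compute from exported conversation records; the record fields shown (`feedback`, `answered`) are assumed for illustration and may not match the actual export schema:

```python
# Illustrative record shape; real export fields may differ.
conversations = [
    {"feedback": "up",   "answered": True},
    {"feedback": "down", "answered": True},
    {"feedback": None,   "answered": False},  # "I don't know" fallback, unrated
    {"feedback": "up",   "answered": True},
]

# Satisfaction: share of thumbs-up among conversations that received a rating.
rated = [c for c in conversations if c["feedback"] is not None]
satisfaction = sum(c["feedback"] == "up" for c in rated) / len(rated)

# Fallback rate: share of all conversations where the agent could not answer.
fallback_rate = sum(not c["answered"] for c in conversations) / len(conversations)
```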

### Performance Metrics

* **Response Time**: End-to-end latency
* **Time to First Token**: Perceived responsiveness
* **Retrieval Time**: Knowledge base query speed
* **Error Rate**: Failed requests

### Cost Metrics

* **Cost per Query**: Average spend per request
* **Token Usage**: Input and output tokens
* **Model Costs**: By model type (embeddings, completions)
* **Cost by Agent**: Which agents are most expensive
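For example, cost per query follows directly from token counts and a price table. The per-1K-token prices below are placeholders, not actual rates:

```python
# Hypothetical per-1K-token prices; substitute your provider's actual rates.
PRICE_PER_1K = {"input": 0.0005, "output": 0.0015}

def query_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one query in dollars under the assumed price table."""
    return (input_tokens / 1000) * PRICE_PER_1K["input"] \
         + (output_tokens / 1000) * PRICE_PER_1K["output"]

# Average over a sample of (input_tokens, output_tokens) pairs.
queries = [(1200, 300), (800, 150)]
cost_per_query = sum(query_cost(i, o) for i, o in queries) / len(queries)
```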

## Dashboards & Reports

### Real-Time Dashboard

Monitor current activity:

* Active conversations
* Recent queries
* System health indicators
* Error alerts

### Executive Summary

High-level overview for stakeholders:

* Adoption trends
* ROI metrics
* Cost savings
* Strategic insights

### Operational Reports

Detailed reports for optimization:

* Agent-by-agent performance
* User engagement patterns
* Knowledge base coverage
* Technical performance metrics

### Custom Reports

Build your own reports using:

* [Developer API](/product/developer-api.md)
* Data exports
* Webhook integrations
* Third-party analytics tools

## Alerting & Notifications

Set up proactive alerts for:

* **Error Spikes**: Sudden increases in failures
* **Performance Degradation**: Rising response times
* **Cost Overruns**: Budget thresholds exceeded
* **Quality Issues**: Drops in user satisfaction
* **Usage Anomalies**: Unusual activity patterns

Configure notifications via:

* Email
* Slack ([Slack App](/product/plugins/slack-app.md))
* Webhooks ([Webhooks Guide](/product/developer-api/webhooks.md))
* PagerDuty or other incident management tools

## Optimization Workflow

1. **Identify**: Use analytics to find improvement opportunities
2. **Hypothesize**: Form theories about what might help
3. **Test**: Use evaluation framework to validate changes
4. **Deploy**: Roll out improvements to production
5. **Measure**: Track impact with monitoring tools
6. **Iterate**: Continue the cycle

## Common Monitoring Scenarios

### Scenario 1: Agent Not Performing Well

**Symptoms**: Low satisfaction scores, high fallback rate

**Investigation Steps**:

1. Check [Analytics Dashboard](/product/monitoring/view-analytics.md) for patterns
2. Review conversations in [Inbox](/product/monitoring/inbox-training.md)
3. Run [Evaluations](/product/monitoring/evals.md) to quantify issues
4. Identify missing knowledge or prompt problems

**Resolution**: Update knowledge base or adjust prompts

***

### Scenario 2: High Costs

**Symptoms**: Costs increasing faster than expected

**Investigation Steps**:

1. Check [Cost Optimization](/product/monitoring/cost-optimization.md) dashboard
2. Identify high-cost agents or users
3. Analyze token usage patterns
4. Review model selection

**Resolution**: Implement caching, optimize context windows, or switch models

***

### Scenario 3: Slow Response Times

**Symptoms**: Users complaining about latency

**Investigation Steps**:

1. Check [Performance Tuning](/product/monitoring/performance-tuning.md) metrics
2. Identify bottlenecks (retrieval, model, network)
3. Review system load and resource usage

**Resolution**: Optimize retrieval, enable caching, or scale infrastructure

## Integration with Other Tools

### Export Data

Export analytics data to:

* Business intelligence tools (Tableau, Power BI)
* Data warehouses (Snowflake, BigQuery)
* Spreadsheets for ad-hoc analysis

### API Access

Access metrics programmatically:

* [Developer API](/product/developer-api.md) endpoints
* Custom dashboard integration
* Automated reporting workflows

### Webhooks

Receive real-time events:

* [Webhook configuration](/product/developer-api/webhooks.md)
* Stream data to analytics platforms
* Trigger automated workflows

## Advanced Topics

### Statistical Analysis

* Trend analysis and forecasting
* Cohort analysis for user behavior
* A/B test statistical significance
* Outlier detection
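For example, whether an A/B difference in thumbs-up rates is statistically significant can be checked with a pooled two-proportion z-test, sketched here using only the standard library:

```python
from math import sqrt, erf

def two_proportion_z(successes_a: int, n_a: int,
                     successes_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in rates (pooled z-test)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
```

With small samples or rates near 0 or 1, an exact test (e.g. Fisher's) is a safer choice than this normal approximation.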

### Custom Metrics

* Define domain-specific KPIs
* Create composite scores
* Build custom evaluation criteria

### Machine Learning on Metrics

* Anomaly detection with ML models
* Predictive scaling
* Automated optimization recommendations

## Next Steps

1. **Start Monitoring**: Log in to your [Analytics Dashboard](/product/monitoring/view-analytics.md)
2. **Set Up Inbox**: Configure your [Inbox & Training](/product/monitoring/inbox-training.md) workflow
3. **Define Metrics**: Decide what success looks like with [Evaluation Framework](/product/monitoring/evals.md)
4. **Optimize**: Improve performance with [Performance Tuning](/product/monitoring/performance-tuning.md)
5. **Manage Costs**: Control spending with [Cost Optimization](/product/monitoring/cost-optimization.md)

For more detailed guidance, explore the individual topics listed above.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.twig.so/product/monitoring.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
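For example, the request above can be issued from Python's standard library, URL-encoding the question (the question text below is just an example):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

base = "https://help.twig.so/product/monitoring.md"
question = "How do I export analytics data to BigQuery?"
url = f"{base}?{urlencode({'ask': question})}"

# Uncomment to perform the request:
# answer = urlopen(url).read().decode()
```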
