Performance Test Results and Analysis
Learn how to interpret and analyze performance test results in Maeris to understand system behavior, identify bottlenecks, and make data-driven decisions.
Overview
After a performance test completes, Maeris provides comprehensive results including metrics, charts, and analysis. Understanding these results is crucial for identifying performance issues and optimizing your APIs.
Test Summary
The test summary provides a high-level overview of test execution and its key metrics; the sketch after this list shows how the raw counts relate to the derived rates.
Summary Information
- Test Duration: Total test execution time
- Total Requests: Number of requests executed
- Virtual Users: Maximum and average virtual users
- Success Rate: Percentage of successful requests
- Error Rate: Percentage of failed requests
- Average Response Time: Arithmetic mean of all response times
- Throughput: Requests per second
- Threshold Status: Whether performance thresholds were met
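These values fit naturally into a simple record, and the rate metrics can be derived from the raw counts. A minimal sketch in TypeScript, with an illustrative field layout rather than Maeris's actual export schema:

```typescript
// Hypothetical shape for a test summary record; field names are
// illustrative, not Maeris's actual export schema.
interface TestSummary {
  durationMs: number;      // total test execution time
  totalRequests: number;   // requests executed
  maxVirtualUsers: number;
  failedRequests: number;
  avgResponseTimeMs: number;
  thresholdsMet: boolean;
}

// Derive the rate metrics from the raw counts.
function successRate(s: TestSummary): number {
  return ((s.totalRequests - s.failedRequests) / s.totalRequests) * 100;
}

function throughputRps(s: TestSummary): number {
  return s.totalRequests / (s.durationMs / 1000);
}

const summary: TestSummary = {
  durationMs: 300_000,     // 5-minute test
  totalRequests: 45_000,
  maxVirtualUsers: 100,
  failedRequests: 450,
  avgResponseTimeMs: 220,
  thresholdsMet: true,
};

console.log(`Success rate: ${successRate(summary).toFixed(1)}%`);        // 99.0%
console.log(`Throughput:   ${throughputRps(summary).toFixed(0)} req/s`); // 150 req/s
```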
Response Time Analysis
Response time metrics help you understand how quickly your API responds under load.
Key Metrics
- Min/Max: Fastest and slowest response times
- Average: Mean response time across all requests
- Median (P50): 50% of requests completed within this time
- P95: 95% of requests completed within this time
- P99: 99% of requests completed within this time
- Standard Deviation: Variability in response times
Interpreting Percentiles: If P95 is 1000 ms, 95% of requests completed in one second or less. The remaining 5% took longer, which can signal performance problems for a meaningful share of users.
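If you export raw response times, percentiles are straightforward to compute yourself. A minimal sketch using the nearest-rank method (one common convention; tools differ in how they interpolate, so numbers may vary slightly):

```typescript
// Compute a percentile from raw response times using the
// nearest-rank method.
function percentile(samplesMs: number[], p: number): number {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// Illustrative response times from a small run.
const responseTimesMs = [120, 135, 150, 180, 210, 250, 400, 650, 900, 1400];

console.log(`P50: ${percentile(responseTimesMs, 50)} ms`); // 210 ms
console.log(`P95: ${percentile(responseTimesMs, 95)} ms`); // 1400 ms
console.log(`P99: ${percentile(responseTimesMs, 99)} ms`); // 1400 ms
```

Note how the P95 here is nearly seven times the median: this long tail is exactly what averages hide.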
Throughput Analysis
Throughput metrics show how many requests per second your API handles and whether that rate stays stable over the course of the test (a bucketing sketch follows the list below).
Throughput Insights
- Peak Throughput: Maximum requests per second achieved
- Average Throughput: Average requests per second
- Throughput Stability: Consistency of throughput over time
- Capacity: Maximum sustainable throughput
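To see throughput stability, it helps to bucket request completions into one-second windows. A minimal sketch, assuming illustrative completion timestamps measured relative to test start:

```typescript
// Count completed requests per one-second window.
function throughputPerSecond(timestampsMs: number[]): number[] {
  const lastSecond = Math.floor(Math.max(...timestampsMs) / 1000);
  const buckets = new Array<number>(lastSecond + 1).fill(0);
  for (const t of timestampsMs) {
    buckets[Math.floor(t / 1000)] += 1;
  }
  return buckets;
}

// Illustrative completion times (ms since test start).
const series = throughputPerSecond([100, 450, 900, 1200, 1300, 1800, 2400]);

console.log(series);                               // [3, 3, 1] req/s per window
console.log(`Peak: ${Math.max(...series)} req/s`); // 3 req/s
console.log(
  `Average: ${(series.reduce((a, b) => a + b, 0) / series.length).toFixed(1)} req/s`,
); // 2.3 req/s
```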
Error Analysis
Understanding where and when failures occur helps you isolate issues and improve API reliability; a breakdown sketch follows the list below.
Error Metrics
- Total Errors: Number of failed requests
- Error Rate: Percentage of requests that failed
- Error Types: Breakdown by status code (4xx, 5xx, timeouts)
- Error Distribution: When errors occurred during the test
- Error Trends: Whether errors increased or decreased over time
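A simple way to see where failures concentrate is to group per-request outcomes by error class. A sketch, assuming a hypothetical flat list of results rather than Maeris's actual result format:

```typescript
// Per-request outcome; `status` is null when the request timed out.
type RequestResult = { status: number | null; timedOut: boolean };

// Tally failed requests by error class (4xx, 5xx, timeout).
function errorBreakdown(results: RequestResult[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const r of results) {
    let key: string | null = null;
    if (r.timedOut) key = "timeout";
    else if (r.status !== null && r.status >= 500) key = "5xx";
    else if (r.status !== null && r.status >= 400) key = "4xx";
    if (key) counts[key] = (counts[key] ?? 0) + 1;
  }
  return counts;
}

const sample: RequestResult[] = [
  { status: 200, timedOut: false },
  { status: 404, timedOut: false },
  { status: 503, timedOut: false },
  { status: null, timedOut: true },
  { status: 200, timedOut: false },
];
console.log(errorBreakdown(sample)); // { '4xx': 1, '5xx': 1, timeout: 1 }
```

A spike of 5xx errors or timeouts late in a ramp-up usually points at server-side saturation, while steady 4xx errors more often indicate a problem in the test script itself.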
Identifying Bottlenecks
Performance bottlenecks are the points in your system that cap overall performance; identifying them is key to optimization. A simple plateau check is sketched after the indicator list below.
Common Bottleneck Indicators
- Increasing Response Times: Response times gradually increase as load increases
- Plateauing Throughput: Throughput stops increasing despite more load
- High Error Rates: Errors increase significantly under load
- Resource Saturation: CPU, memory, or network at 100% utilization
- Response Time Spikes: Sudden increases in response times
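One rough way to spot plateauing throughput is to check whether each increase in load still produces a proportional gain in throughput. A sketch, with the load steps and the 10% gain cutoff as illustrative assumptions:

```typescript
// One step of a ramp-up: load applied and throughput observed.
type LoadStep = { virtualUsers: number; throughputRps: number };

// Flag the first step where throughput grows by less than
// `minGain` (default 10%) of the load growth — a plateau signal.
function findPlateau(steps: LoadStep[], minGain = 0.1): LoadStep | null {
  for (let i = 1; i < steps.length; i++) {
    const loadGain = steps[i].virtualUsers / steps[i - 1].virtualUsers - 1;
    const tputGain = steps[i].throughputRps / steps[i - 1].throughputRps - 1;
    if (loadGain > 0 && tputGain < loadGain * minGain) return steps[i];
  }
  return null;
}

const rampUp: LoadStep[] = [
  { virtualUsers: 25, throughputRps: 240 },
  { virtualUsers: 50, throughputRps: 470 },
  { virtualUsers: 100, throughputRps: 905 },
  { virtualUsers: 200, throughputRps: 930 }, // barely moves: saturation
];
console.log(findPlateau(rampUp)); // { virtualUsers: 200, throughputRps: 930 }
```

Once the plateau step is identified, resource metrics (CPU, memory, network) from that window usually reveal which component saturated first.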
Comparing Test Results
Comparing results across test runs helps you spot trends, regressions, and improvements (a baseline-comparison sketch follows the list below).
Comparison Strategies
- Baseline Comparison: Compare against baseline performance
- Historical Trends: Track performance over time
- Before/After: Compare before and after optimizations
- Environment Comparison: Compare dev, staging, and production
- Load Level Comparison: Compare performance at different load levels
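Baseline comparison can be automated by flagging metrics that move past a tolerance. A sketch, with the metric names and the 10% tolerance as assumptions, not Maeris defaults:

```typescript
type Metrics = { p95Ms: number; errorRatePct: number; throughputRps: number };

// Flag metrics that regressed beyond `tolerance` versus baseline.
// Lower is better for latency and errors; higher is better for throughput.
function findRegressions(base: Metrics, current: Metrics, tolerance = 0.1): string[] {
  const regressions: string[] = [];
  if (current.p95Ms > base.p95Ms * (1 + tolerance))
    regressions.push(`P95 rose ${base.p95Ms} -> ${current.p95Ms} ms`);
  if (current.errorRatePct > base.errorRatePct * (1 + tolerance))
    regressions.push(`error rate rose ${base.errorRatePct}% -> ${current.errorRatePct}%`);
  if (current.throughputRps < base.throughputRps * (1 - tolerance))
    regressions.push(`throughput fell ${base.throughputRps} -> ${current.throughputRps} req/s`);
  return regressions;
}

const baseline: Metrics = { p95Ms: 800, errorRatePct: 0.5, throughputRps: 150 };
const afterChange: Metrics = { p95Ms: 950, errorRatePct: 0.4, throughputRps: 148 };

console.log(findRegressions(baseline, afterChange));
// [ 'P95 rose 800 -> 950 ms' ]
```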
Threshold Evaluation
Evaluate whether your API met the performance thresholds (SLAs) defined for the test; a pass/fail evaluation is sketched after the checklist below.
Threshold Status
- Check if response time thresholds were met (P95, P99)
- Verify error rate stayed below maximum allowed
- Confirm throughput met minimum requirements
- Review success rate against targets
- Identify which thresholds were exceeded
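Threshold evaluation reduces to comparing each measured metric against its limit. A sketch, with a hypothetical threshold shape rather than Maeris's threshold syntax:

```typescript
type RunMetrics = {
  p95Ms: number;
  p99Ms: number;
  errorRatePct: number;
  throughputRps: number;
};

type Thresholds = {
  maxP95Ms: number;
  maxP99Ms: number;
  maxErrorRatePct: number;
  minThroughputRps: number;
};

// Compare each measured metric against its declared limit.
function evaluate(m: RunMetrics, t: Thresholds): { name: string; passed: boolean }[] {
  return [
    { name: "P95 response time", passed: m.p95Ms <= t.maxP95Ms },
    { name: "P99 response time", passed: m.p99Ms <= t.maxP99Ms },
    { name: "error rate", passed: m.errorRatePct <= t.maxErrorRatePct },
    { name: "throughput", passed: m.throughputRps >= t.minThroughputRps },
  ];
}

const results = evaluate(
  { p95Ms: 920, p99Ms: 2100, errorRatePct: 0.8, throughputRps: 160 },
  { maxP95Ms: 1000, maxP99Ms: 2000, maxErrorRatePct: 1, minThroughputRps: 100 },
);
for (const r of results) console.log(`${r.passed ? "PASS" : "FAIL"} ${r.name}`);
// FAIL only on P99 (2100 ms measured > 2000 ms allowed)
```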
Creating Reports
Generate reports to share findings with stakeholders and to document test results; a minimal report-rendering sketch follows the component list.
Report Components
- Executive summary with key findings
- Performance metrics and charts
- Threshold evaluation results
- Bottleneck identification
- Recommendations for improvement
- Comparison with previous tests
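A basic report can be assembled directly from metrics like those computed above. A minimal Markdown-rendering sketch with illustrative values:

```typescript
// Render a metrics table as a Markdown report; a real report would
// also include charts, threshold results, and comparisons.
function renderReport(title: string, rows: [string, string][]): string {
  const lines = [
    `# ${title}`,
    "",
    "| Metric | Value |",
    "| --- | --- |",
    ...rows.map(([metric, value]) => `| ${metric} | ${value} |`),
  ];
  return lines.join("\n");
}

console.log(
  renderReport("Checkout API load test", [
    ["P95 response time", "920 ms"],
    ["Error rate", "0.8 %"],
    ["Throughput", "160 req/s"],
    ["Thresholds", "3 of 4 passed"],
  ]),
);
```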
Best Practices
- Review Promptly: Analyze results soon after completion, while test context is still fresh
- Look Beyond Averages: Averages hide tail latency; focus on percentiles (P95, P99)
- Identify Patterns: Look for patterns in response times and errors
- Compare Consistently: Use consistent metrics when comparing tests
- Document Findings: Document important findings and recommendations
- Share Results: Share results with relevant stakeholders
- Take Action: Use findings to drive optimization efforts
Next Steps
- Performance Metrics and Dashboards - Explore dashboards
- Performance Test Graphs and Visualizations - Visualize data
- Getting Started with Performance Testing - Review fundamentals