API Performance Metrics

Understand and monitor API performance metrics in Maeris to ensure your APIs meet performance requirements and identify optimization opportunities.

Overview

Performance metrics provide insights into how your APIs perform under various conditions. Monitoring these metrics helps you ensure APIs meet SLA requirements, identify bottlenecks, and optimize performance.

Key Performance Metrics

Response Time

The total elapsed time for a request, measured from the moment it is sent until the complete response has been received.

  • Measured in milliseconds (ms)
  • Includes all network and processing time
  • Key indicator of API responsiveness
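
Response time is simply wall-clock time around the call. The sketch below is a minimal illustration: `call_api` is a hypothetical stand-in that simulates ~50 ms of work rather than a real Maeris request.

```python
import time

def call_api():
    # Hypothetical stand-in for a real HTTP request;
    # simulate roughly 50 ms of network and server work.
    time.sleep(0.05)
    return 200

start = time.perf_counter()
status = call_api()
response_time_ms = (time.perf_counter() - start) * 1000

print(f"status={status} response_time={response_time_ms:.1f}ms")
```

`time.perf_counter()` is used instead of `time.time()` because it is monotonic and has higher resolution, which matters for short requests.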

Time to First Byte (TTFB)

The time from sending the request until the first byte of the response is received.

  • Indicates server processing time
  • Useful for identifying server-side bottlenecks
  • Typically should be under 200ms for good performance

Connection Time

The time required to establish a connection to the server.

  • Includes DNS lookup and TCP handshake
  • Affected by network latency and server availability
  • Should be minimal; connection reuse (keep-alive) and pooling avoid repeating this cost on subsequent requests
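
To make the TCP-handshake portion of connection time concrete, this sketch times a connect against a throwaway listener on localhost (a stand-in for a real API server; DNS lookup does not apply to a literal IP address):

```python
import socket
import threading
import time

# Throwaway listener on an ephemeral localhost port, standing in for the API server.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=server.accept, daemon=True).start()

# Time only the TCP handshake.
start = time.perf_counter()
conn = socket.create_connection(("127.0.0.1", port))
connection_time_ms = (time.perf_counter() - start) * 1000

conn.close()
server.close()
print(f"connection_time={connection_time_ms:.2f}ms")
```

Against a remote host, the same measurement would also include network round-trip latency, and a separate DNS lookup would precede it.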

Throughput

The number of requests processed per unit of time (requests per second).

  • Measures API capacity
  • Important for high-traffic scenarios
  • Helps with capacity planning
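
Throughput is a straightforward ratio. A quick sketch with hypothetical numbers:

```python
# Throughput = completed requests / elapsed time.
completed_requests = 12_000   # hypothetical count over the window
elapsed_seconds = 60.0        # hypothetical measurement window

throughput_rps = completed_requests / elapsed_seconds
print(f"throughput={throughput_rps:.0f} req/s")  # 200 req/s
```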

Error Rate

The percentage of requests that result in errors (4xx or 5xx status codes, or timeouts).

  • Should be kept as low as possible
  • Indicates API reliability
  • Critical for production monitoring
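
Computing an error rate from a batch of results can be sketched as follows; the status list is invented sample data, with timeouts counted alongside 4xx/5xx responses:

```python
# Hypothetical results: HTTP status codes, with "timeout" marking requests
# that never completed.
status_codes = [200, 200, 404, 200, 500, 200, 200, "timeout", 200, 200]

errors = sum(
    1 for s in status_codes
    if s == "timeout" or (isinstance(s, int) and s >= 400)
)
error_rate = errors / len(status_codes) * 100
print(f"error_rate={error_rate:.1f}%")  # 30.0%
```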

Percentile Metrics

Percentile metrics help you understand the distribution of response times, not just averages. This is crucial because averages can be misleading.

Common Percentiles

  • P50 (Median): 50% of requests complete within this time
  • P95: 95% of requests complete within this time
  • P99: 99% of requests complete within this time
  • P99.9: 99.9% of requests complete within this time

Example: If P95 is 500ms, it means 95% of requests complete in 500ms or less. The remaining 5% take longer, which could indicate performance issues for some users.
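
The percentiles above can be computed with a simple nearest-rank method. This sketch uses synthetic latencies (a fast bulk plus a deliberate 800 ms tail) to show how P99 exposes the slow tail that the median hides:

```python
import random

random.seed(1)
# Synthetic latencies: 950 fast requests plus a slow tail of 50 at 800 ms.
latencies_ms = [random.gauss(200, 30) for _ in range(950)] + [800] * 50

def percentile(sorted_values, p):
    """Nearest-rank percentile: the value at the p% position of the sample."""
    k = max(0, int(round(p / 100 * len(sorted_values))) - 1)
    return sorted_values[k]

data = sorted(latencies_ms)
for p in (50, 95, 99, 99.9):
    print(f"P{p}={percentile(data, p):.0f}ms")
```

Here the median sits near 200 ms while P99 lands in the 800 ms tail, which is exactly why averages alone are misleading.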

Monitoring Performance Metrics

Setting Performance Thresholds

Define acceptable performance thresholds for your APIs:

  • Set maximum response time limits
  • Define acceptable error rates
  • Establish throughput requirements
  • Set percentile targets (e.g., P95 should be under 1 second)
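
The four kinds of threshold above can be codified in a small check. The limits and metric names below are hypothetical examples, not Maeris configuration; tune them to your own SLA:

```python
# Hypothetical thresholds; adjust to your SLA.
THRESHOLDS = {
    "max_response_ms": 1000,
    "max_error_rate_pct": 1.0,
    "min_throughput_rps": 100,
    "max_p95_ms": 1000,
}

def check_thresholds(metrics, thresholds=THRESHOLDS):
    """Return a list of human-readable threshold violations."""
    violations = []
    if metrics["response_ms"] > thresholds["max_response_ms"]:
        violations.append("response time over limit")
    if metrics["error_rate_pct"] > thresholds["max_error_rate_pct"]:
        violations.append("error rate over limit")
    if metrics["throughput_rps"] < thresholds["min_throughput_rps"]:
        violations.append("throughput under requirement")
    if metrics["p95_ms"] > thresholds["max_p95_ms"]:
        violations.append("P95 over target")
    return violations

sample = {"response_ms": 850, "error_rate_pct": 2.5,
          "throughput_rps": 140, "p95_ms": 1200}
print(check_thresholds(sample))  # ['error rate over limit', 'P95 over target']
```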

Track performance metrics over time to identify trends, regressions, and improvements.

What to Look For

  • Gradual Degradation: Response times slowly increasing over time
  • Sudden Spikes: Unexpected performance drops
  • Patterns: Performance issues at specific times or under certain conditions
  • Improvements: Performance gains after optimizations
  • Seasonality: Performance variations based on usage patterns
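
A simple way to flag gradual degradation is to compare the first and second halves of a metric series. The daily P95 values below are an invented series that drifts upward; the 5% drift cutoff is likewise an arbitrary example:

```python
from statistics import mean

# Hypothetical daily P95 response times (ms) over two weeks, drifting upward.
daily_p95 = [510, 505, 520, 515, 530, 525, 540,
             545, 550, 560, 555, 570, 575, 590]

half = len(daily_p95) // 2
first_half = mean(daily_p95[:half])
second_half = mean(daily_p95[half:])
drift_pct = (second_half - first_half) / first_half * 100

print(f"first={first_half:.0f}ms second={second_half:.0f}ms drift={drift_pct:+.1f}%")
if drift_pct > 5:  # arbitrary example cutoff
    print("warning: gradual degradation detected")
```

Production monitoring systems typically use more robust trend tests, but even this split-halves comparison catches slow regressions that day-to-day eyeballing misses.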

Performance Assertions

Add performance assertions to your API tests to automatically fail tests when performance thresholds are exceeded.

Example Assertions

Assert response time is less than 1000ms
Assert response time is less than {{maxResponseTime}}
Assert TTFB is less than 200ms
Assert connection time is less than 100ms
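
The same checks can be mirrored in plain Python for local scripting. The timing values are hypothetical, and `MAX_RESPONSE_MS` plays the role of the `{{maxResponseTime}}` variable above:

```python
# Hypothetical measured timings (ms) for one request.
timings = {"response": 780, "ttfb": 150, "connect": 40}

MAX_RESPONSE_MS = 1000  # stands in for {{maxResponseTime}}

assert timings["response"] < 1000, "response time exceeded 1000ms"
assert timings["response"] < MAX_RESPONSE_MS
assert timings["ttfb"] < 200, "TTFB exceeded 200ms"
assert timings["connect"] < 100, "connection time exceeded 100ms"
print("all performance assertions passed")
```

A failing assertion raises `AssertionError` with the given message, so a test runner reports exactly which threshold was exceeded.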

Performance Optimization

Use performance metrics to identify optimization opportunities and measure the impact of changes.

Optimization Strategies

  • Identify slow endpoints and optimize them
  • Reduce response payload sizes
  • Implement caching where appropriate
  • Optimize database queries
  • Use CDN for static content
  • Implement pagination for large datasets
  • Optimize serialization/deserialization
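
As an illustration of the caching strategy, the sketch below memoizes a hypothetical expensive backend call with `functools.lru_cache` and measures cold versus warm latency:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=128)
def fetch_report(report_id):
    # Hypothetical stand-in for an expensive backend call (~30 ms).
    time.sleep(0.03)
    return {"id": report_id, "rows": 42}

start = time.perf_counter()
fetch_report(7)            # cold: pays the full cost
cold_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
fetch_report(7)            # warm: served from the in-process cache
warm_ms = (time.perf_counter() - start) * 1000

print(f"cold={cold_ms:.1f}ms warm={warm_ms:.3f}ms")
```

In-process caching like this only helps for repeated identical reads; shared caches (e.g. a reverse proxy or Redis) and cache invalidation are separate design decisions.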

Best Practices

  • Monitor Continuously: Track performance metrics regularly, not just during testing
  • Set Realistic Targets: Define performance goals based on business requirements
  • Use Percentiles: Don't rely solely on averages; use P95 and P99 for better insights
  • Compare Environments: Compare performance across dev, staging, and production
  • Track Trends: Monitor performance over time to catch regressions early
  • Set Up Alerts: Configure alerts when performance degrades
  • Document Baselines: Document expected performance for future reference

Next Steps