Performance Test Configuration

Learn how to configure advanced settings for your performance tests in Maeris, including test duration, regions, thresholds, and monitoring options.

Overview

Performance test configuration covers the settings that control how your tests run, which metrics are collected, and how results are reported. Configuring these settings deliberately helps produce accurate, meaningful results.

Test Duration Configuration

Test duration determines how long your performance test will run. The appropriate duration depends on your testing goals.

Duration Guidelines

  • Quick Tests (5-10 minutes): Initial validation, smoke tests, quick checks
  • Standard Tests (15-30 minutes): Load testing, stress testing, most common scenarios
  • Extended Tests (1-4 hours): Soak testing, stability testing, capacity planning
  • Long Tests (8+ hours): Extended soak tests, memory leak detection
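
As a point of reference, the sketch below maps these test types to durations in a small Python table; the preset names and exact values are assumptions for illustration, not Maeris defaults.

  # Illustrative duration presets in minutes; the names and values are
  # assumptions, not Maeris defaults. Adjust them to your own goals.
  DURATION_PRESETS = {
      "smoke":    {"minutes": 5,   "purpose": "initial validation, quick checks"},
      "standard": {"minutes": 30,  "purpose": "load and stress testing"},
      "soak":     {"minutes": 240, "purpose": "stability and capacity planning"},
      "extended": {"minutes": 480, "purpose": "long soak, memory leak detection"},
  }

  def pick_duration(goal: str) -> int:
      """Return the test duration in minutes for a given testing goal."""
      return DURATION_PRESETS[goal]["minutes"]

  # Example: pick_duration("soak") -> 240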

Geographic Regions

Configure which geographic regions your performance tests run from. This helps you understand latency and performance from different locations.

Region Configuration

  • Single Region: Test from one region to establish baseline
  • Multiple Regions: Test from multiple regions to understand global performance
  • User-Based Regions: Distribute load based on your user base locations
  • CDN Testing: Test CDN performance from various edge locations

Common Regions: US East, US West, Europe, Asia Pacific, etc.
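
If you distribute load based on your user base, it helps to state the split explicitly. Below is a hypothetical Python sketch of a weighted split of virtual users across regions; the region names and weights are illustrative assumptions, not Maeris settings.

  # Hypothetical multi-region load split; the region names and weights
  # are illustrative assumptions, not Maeris settings.
  REGION_WEIGHTS = {
      "us-east":      0.40,  # largest share of users
      "us-west":      0.20,
      "europe":       0.25,
      "asia-pacific": 0.15,
  }

  def users_per_region(total_virtual_users: int) -> dict:
      """Split a virtual-user count across regions by weight."""
      assert abs(sum(REGION_WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
      return {region: round(total_virtual_users * weight)
              for region, weight in REGION_WEIGHTS.items()}

  # Example: users_per_region(500)
  # -> {'us-east': 200, 'us-west': 100, 'europe': 125, 'asia-pacific': 75}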

Performance Thresholds (SLAs)

Define performance thresholds that your API must meet. Tests can be configured to fail if thresholds are exceeded.

Common Thresholds

  • Response Time: Maximum acceptable response time (e.g., P95 < 1000ms)
  • Error Rate: Maximum acceptable error rate (e.g., < 1%)
  • Throughput: Minimum required throughput (e.g., > 100 req/s)
  • Percentile Targets: P50, P95, P99 response time targets
  • Success Rate: Minimum success rate (e.g., > 99%)

Example Threshold Configuration:

  P95 Response Time: < 1000ms
  P99 Response Time: < 2000ms
  Error Rate: < 1%
  Success Rate: > 99%
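
To make pass/fail behavior concrete, here is a minimal Python sketch of how the example thresholds above could be evaluated against measured results; the metric names and check logic are assumptions for illustration, not how Maeris evaluates thresholds internally.

  # Minimal threshold check mirroring the example configuration above.
  # The metric names and structure are assumptions for illustration only.
  THRESHOLDS = {
      "p95_ms":       ("<", 1000),
      "p99_ms":       ("<", 2000),
      "error_rate":   ("<", 0.01),  # 1%
      "success_rate": (">", 0.99),  # 99%
  }

  def evaluate(results: dict) -> list:
      """Return human-readable threshold breaches; an empty list means pass."""
      breaches = []
      for metric, (op, limit) in THRESHOLDS.items():
          value = results[metric]
          ok = value < limit if op == "<" else value > limit
          if not ok:
              breaches.append(f"{metric}: {value} violates {op} {limit}")
      return breaches

  # Example: evaluate({"p95_ms": 870, "p99_ms": 2150,
  #                    "error_rate": 0.004, "success_rate": 0.995})
  # -> ['p99_ms: 2150 violates < 2000']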

Monitoring and Metrics

Configure what metrics are collected and how they're monitored during test execution.

Available Metrics

  • Response Times: Track response time percentiles and distributions
  • Throughput: Monitor requests per second
  • Error Rates: Track error percentages and types
  • Resource Metrics: CPU, memory, network (if infrastructure monitoring is enabled)
  • Custom Metrics: Define custom metrics based on response data
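
Percentiles are the most commonly reported of these metrics. The sketch below shows one way to compute P50/P95/P99 from recorded response times using the nearest-rank method in plain Python; it is a generic illustration, not Maeris's internal calculation.

  import math

  def percentile(samples_ms: list, p: float) -> float:
      """Nearest-rank percentile of response times in milliseconds."""
      ordered = sorted(samples_ms)
      rank = max(1, math.ceil(p / 100 * len(ordered)))
      return ordered[rank - 1]

  def summarize(samples_ms: list) -> dict:
      """Summary metrics commonly tracked during a test run."""
      return {
          "count":  len(samples_ms),
          "p50_ms": percentile(samples_ms, 50),
          "p95_ms": percentile(samples_ms, 95),
          "p99_ms": percentile(samples_ms, 99),
      }

  # Example: summarize([120, 135, 150, 180, 240, 900])
  # -> {'count': 6, 'p50_ms': 150, 'p95_ms': 900, 'p99_ms': 900}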

Test Data Configuration

Configure how test data is managed and used during performance tests.

Data Options

  • Data Pools: Create pools of test data for realistic testing
  • Data Rotation: Rotate through test data to avoid conflicts
  • Unique Data: Ensure each virtual user uses unique data
  • Data Cleanup: Configure automatic cleanup of test data
  • Data Validation: Validate test data before use
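
A data pool with rotation is easier to reason about with a small example. The Python sketch below rotates through a shared pool so concurrent virtual users do not pick up the same record at the same time; it is an illustrative stand-in, not the Maeris data-pool feature itself.

  import itertools
  import threading

  class DataPool:
      """Thread-safe round-robin pool of test records (illustrative sketch)."""

      def __init__(self, records: list):
          self._cycle = itertools.cycle(records)
          self._lock = threading.Lock()

      def next_record(self) -> dict:
          # Each virtual user calls this to take the next record in rotation,
          # so two users are not handed the same record at the same time.
          with self._lock:
              return next(self._cycle)

  # Hypothetical usage: a pool of disposable test accounts.
  pool = DataPool([{"user": f"loadtest+{i}@example.com"} for i in range(100)])
  record = pool.next_record()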

Advanced Settings

Additional Configuration Options

  • Timeout Settings: Configure request timeouts and connection timeouts
  • Retry Logic: Configure automatic retries for failed requests (a timeout-and-retry sketch follows this list)
  • Connection Pooling: Configure connection pool settings
  • SSL/TLS: Configure SSL/TLS settings for secure connections
  • Proxy Settings: Configure proxy settings if needed
  • Custom Headers: Add custom headers to all requests
  • Request Prioritization: Prioritize certain requests over others
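
Timeouts and retries are the settings most often left at their defaults. The stand-alone Python sketch below uses the requests library with urllib3's Retry to combine connection and read timeouts, bounded retries with backoff, connection pooling, and a custom header; it illustrates how these settings interact and is not Maeris configuration.

  import requests
  from requests.adapters import HTTPAdapter
  from urllib3.util.retry import Retry

  session = requests.Session()

  # Retry idempotent requests up to 3 times with exponential backoff,
  # only on transient server-side failures.
  retries = Retry(total=3, backoff_factor=0.5,
                  status_forcelist=[502, 503, 504],
                  allowed_methods=["GET", "HEAD"])
  adapter = HTTPAdapter(max_retries=retries, pool_connections=20, pool_maxsize=20)
  session.mount("https://", adapter)

  # Custom header applied to every request made through this session.
  session.headers.update({"X-Load-Test": "true"})

  # (connect timeout, read timeout) in seconds; the URL is a placeholder.
  response = session.get("https://api.example.com/health", timeout=(3, 10))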

Notifications and Alerts

Configure notifications and alerts for test events and threshold breaches.

Alert Configuration

  • Test Completion: Notify when tests complete
  • Threshold Breaches: Alert when performance thresholds are exceeded
  • Test Failures: Notify on test failures or errors
  • Integration: Integrate with Slack, email, PagerDuty, etc.
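
Most chat and paging tools accept a simple webhook for these events. As a hypothetical example, the Python sketch below posts a threshold-breach message to a Slack incoming webhook; the webhook URL, test name, and message format are placeholders.

  import json
  import urllib.request

  # Placeholder URL; substitute your own Slack incoming webhook.
  SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

  def notify_breach(test_name: str, breaches: list) -> None:
      """Post a threshold-breach alert to Slack (illustrative sketch)."""
      message = {"text": f"Performance test '{test_name}' breached thresholds:\n"
                         + "\n".join(f"- {b}" for b in breaches)}
      request = urllib.request.Request(
          SLACK_WEBHOOK_URL,
          data=json.dumps(message).encode("utf-8"),
          headers={"Content-Type": "application/json"},
      )
      urllib.request.urlopen(request)

  # Example: notify_breach("checkout-api-load", ["p99_ms: 2150 violates < 2000"])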

Best Practices

  • Set Realistic Thresholds: Base thresholds on business requirements and SLAs
  • Test from Multiple Regions: Understand global performance characteristics
  • Configure Appropriate Duration: Balance test duration with resource costs
  • Monitor Key Metrics: Focus on metrics that matter for your use case
  • Set Up Alerts: Configure alerts for important events and threshold breaches
  • Document Configuration: Keep records of test configurations and rationale
  • Review and Adjust: Regularly review and adjust configuration based on results

Next Steps