Performance Testing Automation

Overview

This guide focuses on automating performance testing for Java applications. Performance testing ensures your application can handle expected loads while maintaining acceptable response times and resource utilization. We'll cover techniques, tools, and best practices for implementing automated performance testing in your CI/CD pipeline.

Prerequisites

Learning Objectives

  • Understand different types of performance tests for Java applications
  • Set up load testing tools like JMeter, Gatling, and k6
  • Implement performance testing in CI/CD pipelines
  • Analyze performance test results and establish performance baselines
  • Configure threshold-based performance test validation
  • Create performance test reports and dashboards
  • Implement performance testing best practices

Types of Performance Tests

Different performance tests evaluate various aspects of application performance:

┌───────────────────────────────────────────────────────────────────────┐
│                      Performance Test Types                            │
└───────────────────────────────────────────────────────────────────────┘
        │              │               │               │                │
        ▼              ▼               ▼               ▼                ▼
┌──────────────┐ ┌─────────────┐ ┌────────────┐ ┌─────────────┐ ┌──────────────┐
│  Load Test   │ │ Stress Test │ │ Spike Test │ │ Soak Test   │ │ Scalability  │
│              │ │             │ │            │ │             │ │    Test      │
│ • Normal load│ │ • Beyond    │ │ • Sudden   │ │ • Extended  │ │ • Increasing │
│   handling   │ │   capacity  │ │   traffic  │ │   duration  │ │   resources  │
│ • Expected   │ │ • Breaking  │ │   spikes   │ │ • Memory    │ │ • How system │
│   conditions │ │   point     │ │ • Recovery │ │   leaks     │ │   scales     │
└──────────────┘ └─────────────┘ └────────────┘ └─────────────┘ └──────────────┘

Load Testing

Tests the application's performance under expected user loads to ensure it meets performance requirements.

Stress Testing

Evaluates the application's stability under extreme conditions, pushing beyond normal operational capacity.

Spike Testing

Tests how the application performs during sudden, significant increases in user load.

Soak Testing

Validates the application's reliability and performance during extended periods of expected load.

Scalability Testing

Determines how effectively the application scales as resources (hardware, network, etc.) are increased.

Performance Testing Tools for Java

JMeter

Apache JMeter is a popular open-source tool for performance testing Java applications.

Setting Up JMeter

  1. Download JMeter from the Apache JMeter website
  2. Extract the archive and run JMeter:
cd apache-jmeter-5.5/bin
./jmeter

Creating a Basic JMeter Test Plan

<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="5.0" jmeter="5.5">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Java API Load Test" enabled="true">
      <stringProp name="TestPlan.comments"></stringProp>
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <boolProp name="TestPlan.tearDown_on_shutdown">true</boolProp>
      <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      </elementProp>
      <stringProp name="TestPlan.user_define_classpath"></stringProp>
    </TestPlan>
    <hashTree>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Users" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">10</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">100</stringProp>
        <stringProp name="ThreadGroup.ramp_time">30</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
        <boolProp name="ThreadGroup.same_user_on_next_iteration">true</boolProp>
      </ThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="API Request" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain">example.com</stringProp>
          <stringProp name="HTTPSampler.port">443</stringProp>
          <stringProp name="HTTPSampler.protocol">https</stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/api/products</stringProp>
          <stringProp name="HTTPSampler.method">GET</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree>
          <HeaderManager guiclass="HeaderPanel" testclass="HeaderManager" testname="HTTP Headers" enabled="true">
            <collectionProp name="HeaderManager.headers">
              <elementProp name="" elementType="Header">
                <stringProp name="Header.name">Content-Type</stringProp>
                <stringProp name="Header.value">application/json</stringProp>
              </elementProp>
              <elementProp name="" elementType="Header">
                <stringProp name="Header.name">Accept</stringProp>
                <stringProp name="Header.value">application/json</stringProp>
              </elementProp>
            </collectionProp>
          </HeaderManager>
          <hashTree/>
        </hashTree>
        <ResultCollector guiclass="ViewResultsFullVisualizer" testclass="ResultCollector" testname="View Results Tree" enabled="true">
          <boolProp name="ResultCollector.error_logging">false</boolProp>
          <objProp>
            <name>saveConfig</name>
            <value class="SampleSaveConfiguration">
              <time>true</time>
              <latency>true</latency>
              <timestamp>true</timestamp>
              <success>true</success>
              <label>true</label>
              <code>true</code>
              <message>true</message>
              <threadName>true</threadName>
              <dataType>true</dataType>
              <encoding>false</encoding>
              <assertions>true</assertions>
              <subresults>true</subresults>
              <responseData>false</responseData>
              <samplerData>false</samplerData>
              <xml>false</xml>
              <fieldNames>true</fieldNames>
              <responseHeaders>false</responseHeaders>
              <requestHeaders>false</requestHeaders>
              <responseDataOnError>false</responseDataOnError>
              <saveAssertionResultsFailureMessage>true</saveAssertionResultsFailureMessage>
              <assertionsResultsToSave>0</assertionsResultsToSave>
              <bytes>true</bytes>
              <sentBytes>true</sentBytes>
              <url>true</url>
              <threadCounts>true</threadCounts>
              <idleTime>true</idleTime>
              <connectTime>true</connectTime>
            </value>
          </objProp>
          <stringProp name="filename"></stringProp>
        </ResultCollector>
        <hashTree/>
        <ResultCollector guiclass="SummaryReport" testclass="ResultCollector" testname="Summary Report" enabled="true">
          <boolProp name="ResultCollector.error_logging">false</boolProp>
          <objProp>
            <name>saveConfig</name>
            <value class="SampleSaveConfiguration">
              <time>true</time>
              <latency>true</latency>
              <timestamp>true</timestamp>
              <success>true</success>
              <label>true</label>
              <code>true</code>
              <message>true</message>
              <threadName>true</threadName>
              <dataType>true</dataType>
              <encoding>false</encoding>
              <assertions>true</assertions>
              <subresults>true</subresults>
              <responseData>false</responseData>
              <samplerData>false</samplerData>
              <xml>false</xml>
              <fieldNames>true</fieldNames>
              <responseHeaders>false</responseHeaders>
              <requestHeaders>false</requestHeaders>
              <responseDataOnError>false</responseDataOnError>
              <saveAssertionResultsFailureMessage>true</saveAssertionResultsFailureMessage>
              <assertionsResultsToSave>0</assertionsResultsToSave>
              <bytes>true</bytes>
              <sentBytes>true</sentBytes>
              <url>true</url>
              <threadCounts>true</threadCounts>
              <idleTime>true</idleTime>
              <connectTime>true</connectTime>
            </value>
          </objProp>
          <stringProp name="filename"></stringProp>
        </ResultCollector>
        <hashTree/>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>

Running JMeter Tests from Command Line

./jmeter -n -t test-plan.jmx -l results.jtl -e -o report-folder

Options:

  • -n: Run in non-GUI mode
  • -t: Test plan file
  • -l: Results file
  • -e: Generate report
  • -o: Output folder for report

Gatling

Gatling is a modern load testing tool specifically designed for web applications.

Setting Up Gatling

Include Gatling in your Maven project:

<plugin>
    <groupId>io.gatling</groupId>
    <artifactId>gatling-maven-plugin</artifactId>
    <version>4.3.0</version>
    <configuration>
        <simulationsFolder>src/test/scala</simulationsFolder>
        <runMultipleSimulations>true</runMultipleSimulations>
    </configuration>
</plugin>

Creating a Gatling Simulation

// BasicSimulation.scala
package simulations

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class BasicSimulation extends Simulation {

  val httpProtocol = http
    .baseUrl("https://example.com")
    .acceptHeader("application/json")
    .contentTypeHeader("application/json")
    .userAgentHeader("Gatling Performance Test")

  val scn = scenario("Basic Scenario")
    .exec(
      http("Get Products")
        .get("/api/products")
        .check(status.is(200))
    )
    .pause(2)
    .exec(
      http("Get Product Details")
        .get("/api/products/1")
        .check(status.is(200))
        .check(jsonPath("$.name").exists)
    )
    .pause(1, 3)
    .exec(
      http("Add to Cart")
        .post("/api/cart")
        .body(StringBody("""{"productId":1,"quantity":1}"""))
        .check(status.is(201))
    )

  setUp(
    scn.inject(
      rampUsers(50).during(30.seconds),
      constantUsersPerSec(10).during(1.minute)
    )
  ).protocols(httpProtocol)
   .assertions(
     global.responseTime.max.lt(1000),
     global.successfulRequests.percent.gt(95)
   )
}

Running Gatling Tests

mvn gatling:test

k6

k6 is a modern, open-source performance testing tool from Grafana Labs; test scripts are written in JavaScript.

Setting Up k6

Install k6:

# For macOS
brew install k6

# For Linux
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
echo "deb https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
sudo apt-get update
sudo apt-get install k6

Creating a k6 Test Script

// load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '30s', target: 20 },  // Ramp up to 20 users
    { duration: '1m', target: 20 },   // Stay at 20 users
    { duration: '30s', target: 0 },   // Ramp down to 0 users
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'],  // 95% of requests should be below 500ms
    'http_req_duration{name:Product details}': ['p(95)<400'],  // More specific threshold
    http_req_failed: ['rate<0.01'],  // Error rate should be less than 1%
  },
};

export default function() {
  const baseUrl = 'https://example.com';

  // Get product list
  const productList = http.get(`${baseUrl}/api/products`);
  check(productList, {
    'products status 200': (r) => r.status === 200,
    'products body contains items': (r) => r.body.includes('items'),
  });

  sleep(1);

  // Get product details
  const productDetail = http.get(`${baseUrl}/api/products/1`, {
    tags: { name: 'Product details' },
  });
  check(productDetail, {
    'product details status 200': (r) => r.status === 200,
    'product has price': (r) => r.json('price') !== undefined,
  });

  sleep(2);

  // Add to cart
  const cartResponse = http.post(`${baseUrl}/api/cart`, 
    JSON.stringify({ productId: 1, quantity: 1 }), 
    { headers: { 'Content-Type': 'application/json' } }
  );
  check(cartResponse, {
    'cart add status 201': (r) => r.status === 201,
  });

  sleep(3);
}

Running k6 Tests

k6 run load-test.js

Java-Specific Performance Testing Techniques

Testing Spring Boot Applications

For Spring Boot applications, focus on these specific areas:

  1. REST Endpoints Performance:
     • Test all critical API endpoints under load
     • Validate response times for both simple and complex API calls
     • Test pagination and filtering performance

  2. Database Query Performance:
     • Test with realistic data volumes
     • Monitor query execution times
     • Check connection pool behavior under load

  3. Caching Effectiveness:
     • Measure performance with and without caching
     • Test cache hit ratios
     • Validate cache eviction policies

Example JMeter test plan structure for Spring Boot:

- Test Plan
  - Thread Group: API Users
    - HTTP Request: Authentication
    - HTTP Request: Get Products
    - HTTP Request: Get Product Details
    - HTTP Request: Add to Cart
    - HTTP Request: Checkout
  - Thread Group: Admin Users
    - HTTP Request: Authentication
    - HTTP Request: Get Orders
    - HTTP Request: Update Product
  - Backend Listeners
  - Summary Report
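
For a quick latency check of a critical endpoint outside the load testing tools above, a plain Java measurement loop can run as a smoke test in the build. The following is a minimal sketch, assuming a service reachable at http://localhost:8080/api/products; the endpoint, sample count, and 500 ms threshold are placeholders to adapt to your own acceptance criteria.

// EndpointLatencySmokeTest.java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class EndpointLatencySmokeTest {

    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and threshold; adjust to your service and SLA.
        URI uri = URI.create("http://localhost:8080/api/products");
        int samples = 50;
        long p95ThresholdMillis = 500;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        List<Long> durations = new ArrayList<>();

        for (int i = 0; i < samples; i++) {
            long start = System.nanoTime();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            if (response.statusCode() == 200) {
                durations.add(elapsedMillis);
            }
        }

        if (durations.isEmpty()) {
            throw new IllegalStateException("No successful samples recorded");
        }

        // Sort and read the 95th percentile of the successful samples.
        Collections.sort(durations);
        long p95 = durations.get((int) Math.ceil(durations.size() * 0.95) - 1);
        System.out.printf("p95 = %d ms over %d samples%n", p95, durations.size());

        if (p95 > p95ThresholdMillis) {
            throw new AssertionError("p95 latency " + p95 + " ms exceeds threshold");
        }
    }
}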

Testing JVM Performance

Monitor these JVM metrics during performance tests:

  1. Memory Usage:
     • Heap and non-heap memory
     • Memory pool usage
     • GC frequency and duration

  2. Thread Utilization:
     • Thread count
     • Thread states (runnable, blocked, waiting)
     • Thread CPU time

  3. Class Loading:
     • Loaded class count
     • Class loading time

JVM configuration options for testing:

java -Xms2g -Xmx2g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -verbose:gc -Xlog:gc*:file=gc.log -jar myapp.jar
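
In addition to GC logging, the standard java.lang.management MXBeans can sample the heap, thread, and class loading metrics listed above from inside the JVM (or remotely over JMX) while a load test runs. Below is a minimal sampling sketch; the 5-second interval and plain-text output are illustrative, and in practice you would forward these values to your metrics backend.

// JvmMetricsSampler.java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.ThreadMXBean;

public class JvmMetricsSampler {

    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();

        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();

            // Aggregate collection counts and times across all garbage collectors.
            long gcCount = 0;
            long gcTime = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                gcCount += gc.getCollectionCount();
                gcTime += gc.getCollectionTime();
            }

            // One sample line per interval; feed this into your metrics pipeline.
            System.out.printf("heapUsedMB=%d threads=%d loadedClasses=%d gcCount=%d gcTimeMs=%d%n",
                    heap.getUsed() / (1024 * 1024),
                    threads.getThreadCount(),
                    classes.getLoadedClassCount(),
                    gcCount,
                    gcTime);

            Thread.sleep(5_000); // sample every 5 seconds during the test
        }
    }
}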

Testing Microservices

For microservices architecture, consider these testing approaches:

  1. Service-Level Testing:
     • Test individual services in isolation
     • Use service virtualization to mock dependencies (see the sketch after this list)

  2. Integration Testing:
     • Test realistic service interaction paths
     • Include API Gateway in tests
     • Test service discovery and load balancing

  3. End-to-End Testing:
     • Simulate full user journeys across multiple services
     • Test resilience patterns (circuit breakers, retries, etc.)
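
For service-level tests, a stub server can stand in for downstream dependencies so that only the service under test is exercised. Below is a minimal sketch using WireMock, assuming the WireMock library is on the test classpath; the port, endpoint path, and 200 ms delay are illustrative values, not part of any real service.

// InventoryStub.java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

public class InventoryStub {

    public static void main(String[] args) {
        // Start a stub server on a fixed port and point the service under test at it.
        WireMockServer wireMock = new WireMockServer(8089);
        wireMock.start();

        // Simulate a downstream inventory service with realistic latency.
        wireMock.stubFor(get(urlEqualTo("/api/inventory/1"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"productId\":1,\"stock\":42}")
                        .withFixedDelay(200)));

        System.out.println("Stub running on port " + wireMock.port());
    }
}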

Integrating Performance Testing in CI/CD

Jenkins Pipeline Integration

pipeline {
    agent any

    stages {
        // ... other stages ...

        stage('Performance Tests') {
            steps {
                // Run JMeter tests
                sh '''
                    cd performance-tests
                    apache-jmeter-5.5/bin/jmeter -n -t test-plan.jmx \
                        -l results.jtl \
                        -e -o report-folder
                '''

                // Process JMeter results
                perfReport sourceDataFiles: 'performance-tests/results.jtl',
                          errorFailedThreshold: 5,
                          errorUnstableThreshold: 3,
                          errorUnstableResponseTimeThreshold: '500',
                          relativeFailedThresholdPositive: 10,
                          relativeUnstableThresholdPositive: 5
            }
            post {
                always {
                    publishHTML([
                        allowMissing: false,
                        alwaysLinkToLastBuild: true,
                        keepAll: true,
                        reportDir: 'performance-tests/report-folder',
                        reportFiles: 'index.html',
                        reportName: 'Performance Test Report'
                    ])
                }
            }
        }

        // ... other stages ...
    }
}

GitHub Actions Integration

name: Performance Tests

on:
  push:
    branches: [ main ]
  schedule:
    - cron: '0 2 * * 1-5'  # Run at 2 AM on weekdays

jobs:
  performance:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: Set up JDK
        uses: actions/setup-java@v3
        with:
          java-version: '17'
          distribution: 'temurin'

      - name: Run Gatling tests
        run: mvn gatling:test

      - name: Archive test results
        uses: actions/upload-artifact@v3
        with:
          name: performance-test-results
          path: target/gatling/*

      - name: Publish test results
        if: success() || failure()
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');
            // Assumes an earlier step has aggregated the Gatling stats into this JSON file;
            // adjust the path to match your report layout.
            const resultsJson = JSON.parse(fs.readFileSync('target/gatling/results.json', 'utf8'));

            const summary = `
            ## Performance Test Results

            - Mean response time: ${resultsJson.meanResponseTime} ms
            - 95th percentile: ${resultsJson.percentiles.p95} ms
            - Success rate: ${resultsJson.successRate}%
            - Total requests: ${resultsJson.totalRequests}
            `;

            // Commenting requires an associated pull request or issue,
            // so skip it for plain pushes and scheduled runs.
            if (context.issue.number) {
              await github.rest.issues.createComment({
                issue_number: context.issue.number,
                owner: context.repo.owner,
                repo: context.repo.repo,
                body: summary
              });
            }

GitLab CI Integration

performance:
  stage: test
  image: 
    name: justb4/jmeter:5.5
    entrypoint: [""]
  variables:
    JMETER_TESTS_FOLDER: "performance-tests"
  before_script:
    - mkdir -p $JMETER_TESTS_FOLDER/report
  script:
    - /entrypoint.sh -n -t $JMETER_TESTS_FOLDER/test-plan.jmx -l $JMETER_TESTS_FOLDER/results.jtl -e -o $JMETER_TESTS_FOLDER/report
  after_script:
    - cat $JMETER_TESTS_FOLDER/results.jtl | tail -n 100
  artifacts:
    paths:
      - $JMETER_TESTS_FOLDER/report/
    expire_in: 1 week
    when: always
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
    - if: $CI_COMMIT_BRANCH == "main"

Analyzing Performance Test Results

Key Performance Metrics

When analyzing performance test results, focus on these key metrics:

  1. Response Time:
     • Average response time
     • 90th and 95th percentile response times
     • Maximum response time

  2. Throughput:
     • Requests per second
     • Transactions per second
     • Completed iterations

  3. Error Rate:
     • Percentage of failed requests
     • Types of errors
     • Error distribution

  4. Resource Utilization:
     • CPU usage
     • Memory usage
     • Network I/O
     • Disk I/O

Creating Performance Baselines

  1. Establish an Initial Baseline:
     • Run performance tests on a stable version
     • Record all key metrics
     • Document test conditions and environment

  2. Update Baselines Regularly:
     • Re-evaluate baselines after significant changes
     • Consider seasonal variations in traffic
     • Adjust baselines as the application evolves

  3. Compare Against Baselines:
     • Automatically compare new results with baselines (see the sketch after this list)
     • Flag significant deviations
     • Track trends over time
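
Baseline comparison can be automated with a small utility that loads the recorded baseline, compares the latest run, and fails the build when a metric regresses beyond a tolerance. The sketch below is hypothetical: the properties-file format, metric names, and 10% tolerance are assumptions, and it treats every metric as "lower is better", so throughput-style metrics would need the comparison inverted.

// BaselineComparator.java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class BaselineComparator {

    public static void main(String[] args) throws IOException {
        // Hypothetical properties files, e.g. a line such as "p95ResponseTimeMs=480".
        Properties baseline = load(Path.of("baseline.properties"));
        Properties current = load(Path.of("current.properties"));
        double tolerance = 0.10; // allow up to a 10% regression

        boolean failed = false;
        for (String metric : baseline.stringPropertyNames()) {
            double base = Double.parseDouble(baseline.getProperty(metric));
            double now = Double.parseDouble(current.getProperty(metric, String.valueOf(base)));
            double change = (now - base) / base;
            System.out.printf("%s: baseline=%.1f current=%.1f change=%.1f%%%n",
                    metric, base, now, change * 100);
            if (change > tolerance) {
                failed = true;
            }
        }

        if (failed) {
            throw new AssertionError("Performance regressed beyond the allowed tolerance");
        }
    }

    private static Properties load(Path path) throws IOException {
        Properties props = new Properties();
        props.load(Files.newBufferedReader(path));
        return props;
    }
}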

Visualizing Performance Data with Grafana

Grafana dashboard configuration for performance test results. The example panel below reads the jmeter measurement from InfluxDB, which is what JMeter's InfluxDB Backend Listener writes when it streams results during a test run:

{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": "-- Grafana --",
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "type": "dashboard"
      }
    ]
  },
  "editable": true,
  "gnetId": null,
  "graphTooltip": 0,
  "id": 1,
  "links": [],
  "panels": [
    {
      "aliasColors": {},
      "bars": false,
      "dashLength": 10,
      "dashes": false,
      "datasource": "InfluxDB",
      "fill": 1,
      "fillGradient": 0,
      "gridPos": {
        "h": 8,
        "w": 12,
        "x": 0,
        "y": 0
      },
      "hiddenSeries": false,
      "id": 2,
      "legend": {
        "avg": false,
        "current": false,
        "max": false,
        "min": false,
        "show": true,
        "total": false,
        "values": false
      },
      "lines": true,
      "linewidth": 1,
      "nullPointMode": "null",
      "options": {
        "dataLinks": []
      },
      "percentage": false,
      "pointradius": 2,
      "points": false,
      "renderer": "flot",
      "seriesOverrides": [],
      "spaceLength": 10,
      "stack": false,
      "steppedLine": false,
      "targets": [
        {
          "groupBy": [
            {
              "params": [
                "$__interval"
              ],
              "type": "time"
            },
            {
              "params": [
                "transaction"
              ],
              "type": "tag"
            }
          ],
          "measurement": "jmeter",
          "orderByTime": "ASC",
          "policy": "default",
          "refId": "A",
          "resultFormat": "time_series",
          "select": [
            [
              {
                "params": [
                  "mean"
                ],
                "type": "field"
              },
              {
                "params": [],
                "type": "mean"
              }
            ]
          ],
          "tags": []
        }
      ],
      "thresholds": [],
      "timeFrom": null,
      "timeRegions": [],
      "timeShift": null,
      "title": "Response Time by Transaction",
      "tooltip": {
        "shared": true,
        "sort": 0,
        "value_type": "individual"
      },
      "type": "graph",
      "xaxis": {
        "buckets": null,
        "mode": "time",
        "name": null,
        "show": true,
        "values": []
      },
      "yaxes": [
        {
          "format": "ms",
          "label": null,
          "logBase": 1,
          "max": null,
          "min": null,
          "show": true
        },
        {
          "format": "short",
          "label": null,
          "logBase": 1,
          "max": null,
          "min": null,
          "show": true
        }
      ],
      "yaxis": {
        "align": false,
        "alignLevel": null
      }
    }
  ],
  "schemaVersion": 22,
  "style": "dark",
  "tags": [],
  "templating": {
    "list": []
  },
  "time": {
    "from": "now-6h",
    "to": "now"
  },
  "timepicker": {
    "refresh_intervals": [
      "5s",
      "10s",
      "30s",
      "1m",
      "5m",
      "15m",
      "30m",
      "1h",
      "2h",
      "1d"
    ]
  },
  "timezone": "",
  "title": "Performance Test Dashboard",
  "uid": "performance-tests",
  "variables": {
    "list": []
  },
  "version": 1
}

Setting Performance Test Thresholds

Defining Acceptance Criteria

Establish clear performance acceptance criteria:

  1. Response Time Thresholds:
     • 95% of requests must complete within 500 ms
     • No single request should take longer than 2 seconds
     • Average response time should be under 300 ms

  2. Throughput Requirements:
     • System must handle 100 transactions per second
     • Must support 1000 concurrent users
     • Checkout process must handle 50 orders per minute

  3. Error Rate Limits:
     • Error rate must not exceed 1%
     • No critical path should have any errors
     • Authentication errors should be less than 0.1%

  4. Resource Utilization Limits:
     • CPU utilization should not exceed 70%
     • Memory usage should remain below 80%
     • Database connections should not exceed 80% of pool size

JMeter Assertions

Add assertions to your JMeter test plan:

<DurationAssertion guiclass="DurationAssertionGui" testclass="DurationAssertion" testname="Response Time Assertion" enabled="true">
  <stringProp name="DurationAssertion.duration">500</stringProp>
</DurationAssertion>

<ResponseAssertion guiclass="ResponseAssertionGui" testclass="ResponseAssertion" testname="Status Code Assertion" enabled="true">
  <collectionProp name="Asserion.test_strings">
    <stringProp name="49586">200</stringProp>
    <stringProp name="49587">201</stringProp>
  </collectionProp>
  <stringProp name="Assertion.custom_message"></stringProp>
  <stringProp name="Assertion.test_field">Assertion.response_code</stringProp>
  <boolProp name="Assertion.assume_success">false</boolProp>
  <intProp name="Assertion.test_type">40</intProp>
</ResponseAssertion>

Gatling Assertions

Add assertions to your Gatling simulations:

setUp(
  scn.inject(rampUsers(50).during(30.seconds))
).protocols(httpProtocol)
 .assertions(
   global.responseTime.max.lt(1000),    // Max response time less than 1s
   global.responseTime.mean.lt(500),    // Mean response time less than 500ms
   global.responseTime.percentile3.lt(800),  // 95th percentile under 800ms
   global.successfulRequests.percent.gt(95), // Success rate above 95%
   details("Get Products").responseTime.percentile3.lt(400) // Specific endpoint assertion
 )

k6 Thresholds

Define thresholds in your k6 test script:

export const options = {
  thresholds: {
    // HTTP request duration thresholds
    'http_req_duration': ['p(95)<500', 'p(99)<1500', 'avg<400', 'med<300', 'max<2000'],

    // HTTP request rate threshold
    'http_reqs': ['rate>100'],

    // Error rate threshold
    'http_req_failed': ['rate<0.01'],

    // Custom metric thresholds
    'content_size': ['avg<5000'],

    // Endpoint-specific thresholds
    'http_req_duration{name:Get Products}': ['p(95)<300', 'max<1000'],
    'http_req_duration{name:Checkout}': ['p(95)<800', 'max<3000'],

    // Custom group thresholds
    'group_duration{group:::Main page}': ['avg<500', 'med<400'],
  },
};

Performance Testing Best Practices

Test Environment Considerations

  1. Representative Environment:
     • Test environment should closely match production
     • Use similar hardware specifications
     • Configure similar network conditions
     • Use realistic database size and content

  2. Isolation:
     • Isolate the test environment to prevent interference
     • Avoid sharing resources with other systems
     • Control external dependencies

  3. Data Management:
     • Use representative test data
     • Refresh test data between runs
     • Consider data growth during soak tests

Realistic Test Scenarios

  1. Base Scenarios on Real User Behavior:
     • Analyze production logs for common paths
     • Incorporate actual usage patterns
     • Include realistic think times between actions

  2. Include All Critical Paths:
     • Test main user journeys
     • Include administrative functions
     • Test batch processes and scheduled jobs

  3. Vary Test Parameters:
     • Test different user loads
     • Include peak and off-peak scenarios
     • Test with different data volumes

Continuous Performance Testing

  1. Regular Testing Schedule:
     • Run basic tests with every build
     • Schedule comprehensive tests weekly
     • Conduct full load tests before releases

  2. Trending and Analysis:
     • Track performance metrics over time
     • Identify gradual degradations
     • Correlate changes with performance impact

  3. Automated Analysis:
     • Automatically compare against baselines
     • Send alerts for significant deviations
     • Generate trend reports

Java Performance Optimization Techniques

Based on performance test results, consider these optimization techniques:

Database Optimizations

  1. Query Optimization:
     • Analyze slow queries using query profiling
     • Optimize SQL statements
     • Add appropriate indexes

  2. Connection Pooling:
     • Configure an optimal pool size (see the sketch after this list)
     • Monitor connection usage
     • Implement connection validation

  3. JPA and Hibernate Tuning:
     • Configure batch processing
     • Optimize fetch strategies
     • Use query caching
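
For connection pooling, a good starting point is to size the pool explicitly and fail fast on exhaustion so that pool pressure shows up clearly in load test results. Below is a minimal HikariCP sketch, assuming HikariCP is on the classpath; the JDBC URL, credentials, and sizes are placeholders to be validated against your own measurements.

// PooledDataSourceFactory.java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PooledDataSourceFactory {

    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        // Placeholder connection details.
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/shop");
        config.setUsername("shop");
        config.setPassword("secret");

        // Pool sizing: start small and increase only if load tests show connection waits.
        config.setMaximumPoolSize(20);
        config.setMinimumIdle(5);

        // Fail fast when the pool is exhausted so the problem is visible in test results.
        config.setConnectionTimeout(3_000);

        // Bound validation time so stale connections do not distort response times.
        config.setValidationTimeout(1_000);

        return new HikariDataSource(config);
    }
}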

JVM Tuning

  1. Memory Settings:
     • Optimize heap size (-Xms, -Xmx)
     • Configure generation sizes
     • Select an appropriate garbage collector

  2. Garbage Collection Tuning:
     • Choose the GC algorithm based on application needs
     • Set GC parameters to minimize pauses
     • Monitor and adjust GC performance

  3. Thread Pool Optimization:
     • Size thread pools appropriately (see the sketch after this list)
     • Monitor thread utilization
     • Avoid thread starvation
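
For thread pool sizing, an explicitly bounded executor makes saturation visible during load tests instead of letting work queue up without limit. Below is a minimal sketch using only the standard library; the pool and queue sizes are illustrative and should come from measurement.

// BoundedExecutorFactory.java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedExecutorFactory {

    public static ThreadPoolExecutor create() {
        int cores = Runtime.getRuntime().availableProcessors();

        // Core and max pool size tied to available CPUs; tune upward for I/O-bound workloads.
        return new ThreadPoolExecutor(
                cores,
                cores * 2,
                60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(500),              // bounded queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // apply back-pressure when full
    }

    public static void main(String[] args) {
        ThreadPoolExecutor executor = create();
        // During a load test, sample these to spot saturation and starvation.
        System.out.println("active=" + executor.getActiveCount()
                + " queued=" + executor.getQueue().size()
                + " completed=" + executor.getCompletedTaskCount());
        executor.shutdown();
    }
}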

Application-Level Optimizations

  1. Caching Strategies:
     • Implement multi-level caching
     • Cache frequently accessed data
     • Use distributed caching for clustered environments

  2. Asynchronous Processing:
     • Move time-consuming tasks to background threads
     • Implement non-blocking I/O
     • Use CompletableFuture for parallel processing (see the sketch after this list)

  3. Code-Level Optimizations:
     • Profile code to identify bottlenecks
     • Optimize critical paths
     • Reduce object creation and garbage
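
As an illustration of parallelizing independent calls with CompletableFuture, the sketch below aggregates three lookups concurrently so that end-to-end latency approaches the slowest call rather than the sum of all three. The fetch method is a hypothetical placeholder for a real remote client.

// ProductPageAggregator.java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ProductPageAggregator {

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(4);

        // Run the independent lookups in parallel instead of sequentially.
        CompletableFuture<String> details =
                CompletableFuture.supplyAsync(() -> fetch("product details"), executor);
        CompletableFuture<String> reviews =
                CompletableFuture.supplyAsync(() -> fetch("reviews"), executor);
        CompletableFuture<String> inventory =
                CompletableFuture.supplyAsync(() -> fetch("inventory"), executor);

        // Combine once all three complete; total time is roughly the slowest call.
        String page = details
                .thenCombine(reviews, (d, r) -> d + " | " + r)
                .thenCombine(inventory, (dr, i) -> dr + " | " + i)
                .join();

        System.out.println(page);
        executor.shutdown();
    }

    // Placeholder for a remote call; replace with real client code.
    private static String fetch(String what) {
        try {
            Thread.sleep(200); // simulate network latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return what;
    }
}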

Conclusion

Automated performance testing is essential for maintaining the reliability and responsiveness of Java applications. By implementing the techniques and tools described in this guide, you can establish a robust performance testing strategy that integrates smoothly with your CI/CD pipeline.

Regular performance testing helps identify issues early, ensures your application meets performance requirements, and provides valuable data for optimizing your Java codebase. Remember that performance testing is not a one-time activity but a continuous process that should evolve with your application.

References