Introduction
Load testing NestJS microservices using MySQL and RabbitMQ with Grafana K6 provides valuable insights into the performance, scalability, and reliability of your system under stress. NestJS, a highly modular Node.js framework, efficiently handles microservices with event-driven architectures, where RabbitMQ serves as the message broker for asynchronous task queues and inter-service communication, and MySQL manages relational data storage.
Using Grafana K6, a powerful load testing tool, you can simulate real-world traffic, stress-test the microservices, and measure their performance under varying load conditions. By sending concurrent requests to your API endpoints and monitoring message throughput in RabbitMQ, K6 generates detailed performance metrics such as response times, failure rates, and latency. These metrics are visualized in Grafana dashboards, allowing for easy identification of bottlenecks, database query inefficiencies, or RabbitMQ queue backlogs. This load testing process ensures that the NestJS microservices are optimized for handling high traffic and remain resilient during peak loads, providing a smoother user experience and improving the reliability of the system in production.
Key Features
- Database load testing
- High concurrency and performance testing
- Built-in performance metrics and reporting
- Real-time insights with Grafana integration
- Integration with event-driven architectures (RabbitMQ)
Software Used
- Grafana K6 for API load testing.
- Microservices built using NestJS.
- MySQL database for storing products and inventory.
- Docker for local development environment.
- RabbitMQ for message pattern communication between the services.
Environment Setup
Prerequisites
- Docker Desktop
- Node.js v22.1.0
- Git CLI
Application Code
I have set up a NestJS microservice application for demonstration purposes. You can clone the application code from the Git repository below. Follow the instructions below to clone the repository and install the application dependencies.
git clone https://github.com/ReddyPrashanth/product-inventory.git
cd product-inventory
npm install
Build Process
The application code contains a Docker Compose file that starts the RabbitMQ, MySQL, and Grafana K6 containers, plus separate Dockerfiles for the api-gateway, product, and inventory services. After installing the application dependencies, you can start the application using Docker Compose. Follow the instructions below to start the application with Docker.
cd product-inventory
# Add the env variables below to a .env file in the root folder
# App
APP_NAME='Product Inventory'
APP_URL=http://localhost:3000
APP_ENV=local
API_GATEWAY_PORT=3000
# Database
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=laravel
DB_USER=laravel
DB_PASSWORD=secret
# RabbitMQ
RABBITMQ_PASS=secret
RABBITMQ_USER=admin
RABBITMQ_VHOST=default
QUEUE_URI='amqp://admin:secret@rabbitmq:5672/default'
# Copy .env file into src directory
cp .env src/.env
# Start your services using docker compose
docker compose up -d
# Check that all services are started and healthy using the command below
docker ps
# Additionally, if you run into an issue, you can check the logs for each service using the docker compose commands below
docker compose logs api-gateway
docker compose logs products
docker compose logs inventory
docker compose logs mysql
docker compose logs rabbitmq
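Before running any load test, it can help to confirm that the gateway answers a single request. Below is a quick smoke test from the host; it assumes the api-gateway port is published on localhost:3000 as configured by API_GATEWAY_PORT above.
# Smoke test the product list endpoint from the host
curl http://localhost:3000/api/products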
K6 Script Breakdown
I have created a simple k6 test under k6/products-list.js that is used to load test the product list API.
Load Test Configuration (Stages)
export const options = {
  stages: [
    { duration: "15s", target: 20 },
    { duration: "30s", target: 20 },
    { duration: "15s", target: 0 },
  ],
};
The options object defines the load stages that the application will be subjected to during the test. The stages array specifies how the load will increase and decrease over time:
- First stage (15s): Gradually ramps up the number of virtual users (VUs) to 20 over 15 seconds.
- Second stage (30s): Keeps the number of VUs constant at 20 for 30 seconds, maintaining a steady load.
- Final stage (15s): Gradually decreases the number of VUs back to 0 over 15 seconds, simulating a cooldown period.
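If you also want the run itself to fail when performance degrades, k6 lets you declare pass/fail criteria alongside the stages. Below is a minimal sketch; the thresholds block and its values are illustrative and not part of the repository's script:
export const options = {
  stages: [
    { duration: "15s", target: 20 },
    { duration: "30s", target: 20 },
    { duration: "15s", target: 0 },
  ],
  // Illustrative thresholds: fail the run if the 95th percentile response time
  // exceeds 500ms or more than 1% of requests fail.
  thresholds: {
    http_req_duration: ["p(95)<500"],
    http_req_failed: ["rate<0.01"],
  },
};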
Test Execution (Main Function)
// required k6 imports (these sit at the top of the script)
import http from "k6/http";
import { check, sleep } from "k6";

export default function () {
  const res = http.get("http://api-gateway:3000/api/products");
  check(res, { "status was 200": (r) => r.status == 200 });
  sleep(1);
}
- HTTP Request: The script sends a GET request to the /api/products endpoint through the API gateway at http://api-gateway:3000. This mirrors how a client would interact with a microservice in production.
- Response Check: The check function verifies that the response has a 200 status code (successful request). This ensures that the API is responding correctly under load. If the condition fails, it is recorded as a failed check.
- Simulated Delay: The sleep(1) function introduces a 1-second delay between each virtual user's request. This simulates a real-world scenario where users don't immediately send multiple requests without pauses.
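The check can also be extended to validate the payload rather than just the status code. Here is a sketch, assuming the endpoint returns a JSON array of products (this is not part of the original script):
import http from "k6/http";
import { check, sleep } from "k6";

export default function () {
  const res = http.get("http://api-gateway:3000/api/products");
  // Verify the status code and, assuming a JSON array response, that the body
  // parses and is non-empty.
  check(res, {
    "status was 200": (r) => r.status === 200,
    "body is a non-empty JSON array": (r) => {
      const body = r.json();
      return Array.isArray(body) && body.length > 0;
    },
  });
  sleep(1);
}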
Test Run
To execute the load test with k6, use the docker compose command below. By default, k6 writes results to stdout, but you can configure it to write results to a time-series database.
# runs the product list load test
docker compose run --rm -i k6 run products-index.js
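As an example of writing results somewhere other than stdout, k6 ships with a JSON output and an InfluxDB v1 output. The commands below are sketches: the JSON file lands in the k6 container's working directory (so it only persists if that directory is mounted), and the InfluxDB variant assumes you add an InfluxDB service named influxdb to the compose file.
# Write raw data points to a JSON file
docker compose run --rm -i k6 run --out json=results.json products-index.js
# Stream results to a hypothetical InfluxDB service on the compose network
docker compose run --rm -i k6 run --out influxdb=http://influxdb:8086/k6 products-index.js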
Test Results
After executing a K6 load test, the tool provides detailed results that give insight into how your NestJS microservices perform under various traffic conditions. Here's a breakdown of the key metrics and how to interpret them.
execution: local
script: products-index.js
output: -
scenarios: (100.00%) 1 scenario, 20 max VUs, 1m30s max duration (incl. graceful stop):
* default: Up to 20 looping VUs for 1m0s over 3 stages (gracefulRampDown: 30s, gracefulStop: 30s)
✓ status was 200
checks.........................: 100.00% ✓ 908 ✗ 0
data_received..................: 362 kB 6.0 kB/s
data_sent......................: 85 kB 1.4 kB/s
http_req_blocked...............: avg=49.21µs min=1.78µs med=6.38µs max=16.52ms p(90)=9.35µs p(95)=11.33µs
http_req_connecting............: avg=24.53µs min=0s med=0s max=4.62ms p(90)=0s p(95)=0s
http_req_duration..............: avg=10.76ms min=3.08ms med=8.44ms max=847.87ms p(90)=14.19ms p(95)=20.82ms
{ expected_response:true }...: avg=10.76ms min=3.08ms med=8.44ms max=847.87ms p(90)=14.19ms p(95)=20.82ms
http_req_failed................: 0.00% ✓ 0 ✗ 908
http_req_receiving.............: avg=149.86µs min=13.89µs med=127.57µs max=2.76ms p(90)=234.98µs p(95)=308.89µs
http_req_sending...............: avg=41.11µs min=6.7µs med=27.18µs max=2.87ms p(90)=42.22µs p(95)=79.34µs
http_req_tls_handshaking.......: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
http_req_waiting...............: avg=10.57ms min=2.99ms med=8.24ms max=842.22ms p(90)=14.05ms p(95)=20.7ms
http_reqs......................: 908 14.971024/s
iteration_duration.............: avg=1.01s min=1s med=1s max=1.88s p(90)=1.01s p(95)=1.02s
iterations.....................: 908 14.971024/s
vus............................: 1 min=1 max=20
vus_max........................: 20 min=20 max=20
running (1m00.7s), 00/20 VUs, 908 complete and 0 interrupted iterations
default ✓ [======================================] 00/20 VUs 1m0s
HTTP Request Metrics
http_reqs..............: 600 20.00/s
This metric tracks the total number of HTTP requests made during the test. In this case, 600 requests were sent to the API over the test duration at a rate of 20 requests per second. A higher request rate indicates the system’s ability to handle more user interactions per second.
Response Time Metrics
http_req_duration......: avg=80ms min=60ms med=75ms max=150ms p(90)=120ms p(95)=135ms
This metric measures the time it took for the server to respond to each HTTP request. The percentile values are particularly useful: p(95)=135ms means 95% of requests completed within 135ms.
Error Rates
checks..................: 600 100.00%
This metric reflects the success rate of validation checks (e.g., checking that the response status was 200). Here, 100% of the checks passed, meaning all requests returned the expected result.
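If you want a drop in this success rate to fail the run automatically, k6 can enforce a threshold on the checks metric. A small illustrative sketch (the 99% figure is an assumption, not from the original script):
export const options = {
  // Fail the run with a non-zero exit code if fewer than 99% of checks pass.
  thresholds: {
    checks: ["rate>0.99"],
  },
};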
Virtual User (VU) Metrics
vus.....................: 0 min=0 max=20
vus_max.................: 20
This tracks the number of active virtual users at any point in time. The max=20 indicates that the system was tested with 20 concurrent users, while min=0 shows that there were moments with no active users (during ramp-down).
Throughput Metrics
data_received..........: 100 KB 3.33 KB/s
data_sent..............: 50 KB 1.67 KB/s
data_received indicates the total amount of data received by the client during the test (100 KB) and the average rate (3.33 KB/s). data_sent tracks the total data sent from the client to the server (50 KB) and the rate of data transfer (1.67 KB/s).
Conclusion
The K6 load test results provide valuable insights into the performance and resilience of your NestJS microservices. By analyzing metrics like response times, throughput, and error rates, you can pinpoint bottlenecks and optimize your microservices for better scalability, ensuring they remain stable and responsive under real-world traffic conditions.
For questions and queries, please reach out via the email below.