Deploying JMeter on Kubernetes can significantly enhance your performance testing capabilities, enabling you to scale effortlessly and manage your test infrastructure efficiently. In this guide, we’ll walk you through the steps to deploy JMeter on Kubernetes: a JMeter service with one master and two slaves on a Kubernetes v1.30 cluster.
Introduction
Apache JMeter is a popular open-source tool for performance testing, but deploying it in a Kubernetes environment can be challenging. Kubernetes offers a robust platform for scaling and managing containerized applications, making it an ideal choice for running JMeter tests. This guide will show you how to set up JMeter on Kubernetes, from preparing your environment to executing your tests. It is based on the post here; I made some bug fixes to the files in the git repository so that it deploys on recent Kubernetes versions.
First, let’s check the Kubernetes cluster status. This walkthrough is performed on a lab from easycoda, which you can try out yourself.
root@control-plane:/# kubectl get po -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS      AGE
kube-system   calico-kube-controllers-ddf655445-crd7c   1/1     Running   7 (60s ago)   91d
kube-system   calico-node-fp7pv                          1/1     Running   4 (60s ago)   91d
kube-system   coredns-7db6d8ff4d-6bzb8                   1/1     Running   4 (60s ago)   91d
kube-system   coredns-7db6d8ff4d-qrgnf                   1/1     Running   4 (60s ago)   91d
kube-system   etcd-control-plane                         1/1     Running   4 (60s ago)   91d
kube-system   kube-apiserver-control-plane               1/1     Running   1 (60s ago)   91d
kube-system   kube-controller-manager-control-plane      1/1     Running   6 (60s ago)   91d
kube-system   kube-proxy-6jr69                           1/1     Running   4 (60s ago)   91d
kube-system   kube-scheduler-control-plane               1/1     Running   6 (60s ago)   91d
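Before moving on, it is also worth confirming that the worker nodes themselves are Ready, since the slave replica count is normally tied to the number of worker nodes:

kubectl get nodes -o wide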
When all pods are in Running status and ready, clone the git repository from here: https://github.com/sheencloud/jmeter-kubernetes.git
root@control-plane:/# git clone https://github.com/sheencloud/jmeter-kubernetes.git
Cloning into 'jmeter-kubernetes'...
remote: Enumerating objects: 213, done.
remote: Counting objects: 100% (6/6), done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 213 (delta 1), reused 0 (delta 0), pack-reused 207
Receiving objects: 100% (213/213), 2.25 MiB | 3.81 MiB/s, done.
Resolving deltas: 100% (100/100), done.
root@control-plane:/# cd jmeter-kubernetes/
root@control-plane:/jmeter-kubernetes# ls
'Docker files' dashboard.sh jmeter_master_configmap.yaml
Dockerfile-base dockerimages.sh jmeter_master_deploy.yaml
Dockerfile-master ingress jmeter_slaves_deploy.yaml
Dockerfile-reporter jmeter_cluster_create.sh jmeter_slaves_svc.yaml
Dockerfile-slave jmeter_grafana_deploy.yaml jmeter_stop.sh
GrafanaJMeterTemplate.json jmeter_grafana_reporter.yaml openshift
LICENSE jmeter_grafana_svc.yaml start_test.sh
'LTaaS With SSL Enabled.pdf' jmeter_influxdb_configmap.yaml start_test_csv.sh
README.md jmeter_influxdb_deploy.yaml
cloudssky.jmx jmeter_influxdb_svc.yaml
A brief description of each file in the repository, from the original post:
- jmeter_cluster_create.sh — This script asks for a unique tenant name (namespace), then creates the namespace and all the components (JMeter master, slaves, InfluxDB, and Grafana).
- N.B. — Set the number of replicas you want for the slaves in the jmeter_slaves_deploy.yaml file before starting; normally the replica count should match the number of worker nodes you have (see the sketch after this list).
- jmeter_master_configmap.yaml — The config map for the JMeter master deployment.
- jmeter_master_deploy.yaml — The deployment manifest for the JMeter master.
- jmeter_slaves_deploy.yaml — The deployment manifest for the JMeter slaves.
- jmeter_slaves_svc.yaml — The service manifest for the JMeter slaves. It is a headless service, which lets us get the JMeter slave pod IP addresses directly; we don’t need DNS or round robin for this. That makes it easy to feed the slave pod IP addresses straight to the JMeter master, and the advantage of this will be shown later (see the sketch after this list).
- jmeter_influxdb_configmap.yaml — The config map for the InfluxDB deployment. It configures InfluxDB to expose port 2003, in addition to the default InfluxDB port, to support Graphite if you want to use the Graphite storage method. The InfluxDB deployment can therefore support both JMeter backend listener methods (Graphite and InfluxDB).
- jmeter_influxdb_deploy.yaml — The deployment manifest for InfluxDB.
- jmeter_influxdb_svc.yaml — The service manifest for InfluxDB.
- jmeter_grafana_deploy.yaml — The Grafana deployment manifest.
- jmeter_grafana_svc.yaml — The service manifest for the Grafana deployment. It uses NodePort by default; you can change this to LoadBalancer if you are running in a public cloud (and maybe set up a CNAME to shorten the name with an FQDN).
- jmeter_grafana_reporter.yaml — The deployment and service manifest of the reporter module.
- dashboard.sh — This script automatically creates: (1) an InfluxDB database (jmeter) in the InfluxDB pod, and (2) a data source (jmeterdb) in the Grafana pod.
- start_test.sh — This script runs a JMeter test script automatically, without you manually logging into the JMeter master shell. It asks for the location of the JMeter test script, copies it to the JMeter master pod, and initiates the test towards the JMeter slaves.
- GrafanaJMeterTemplate.json — A prebuilt JMeter Grafana dashboard; this can also be found in the JMeter installation folder (extras folder).
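To make the replicas note and the headless-service point above concrete, here is a minimal sketch of how you can inspect or adjust these pieces once the namespace exists (jmeter in this walkthrough). The object names come from the outputs below; everything else is illustrative, and the repository’s manifests remain the source of truth.

# Before deploying, set the slave count in jmeter_slaves_deploy.yaml:
#   spec:
#     replicas: 2        # normally one per worker node
# After deploying, you can also rescale on the fly:
kubectl -n jmeter scale deployment jmeter-slaves --replicas=2

# The headless service (clusterIP: None in jmeter_slaves_svc.yaml) has no virtual IP;
# its endpoints are the slave pod IPs themselves:
kubectl -n jmeter get endpoints jmeter-slaves-svc

# If you are in a public cloud, switching Grafana to a LoadBalancer is a one-line patch:
kubectl -n jmeter patch svc jmeter-grafana -p '{"spec":{"type":"LoadBalancer"}}'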
Run jmeter_cluster_create.sh to deploy the components, and enter a namespace name, such as jmeter:
root@control-plane:/jmeter-kubernetes# ./jmeter_cluster_create.sh
checking if kubectl is present
Client Version: v1.30.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0
Current list of namespaces on the kubernetes cluster:
default
kube-node-lease
kube-public
kube-system
Enter the name of the new tenant unique name, this will be used to create the namespace
jmeter
Creating Namespace: jmeter
namespace/jmeter created
Namspace jmeter has been created
Creating Jmeter slave nodes
Number of worker nodes on this cluster is 1
deployment.apps/jmeter-slaves created
service/jmeter-slaves-svc created
Creating Jmeter Master
configmap/jmeter-load-test created
deployment.apps/jmeter-master created
Creating Influxdb and the service
configmap/influxdb-config created
deployment.apps/influxdb-jmeter created
service/jmeter-influxdb created
Creating Grafana Deployment
deployment.apps/jmeter-grafana created
service/jmeter-grafana created
Printout Of the jmeter Objects
NAME                                   READY   STATUS              RESTARTS   AGE
pod/influxdb-jmeter-65465c5d59-xxb5l   0/1     ContainerCreating   0          1s
pod/jmeter-grafana-676f66476f-hvf5m    0/1     ContainerCreating   0          1s
pod/jmeter-master-5586f744c4-h5bkq     0/1     ContainerCreating   0          1s
pod/jmeter-slaves-bd64d5bcc-hbm7s      0/1     ContainerCreating   0          2s
pod/jmeter-slaves-bd64d5bcc-qbj76      0/1     ContainerCreating   0          2s

NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/jmeter-grafana      NodePort    10.97.134.9     <none>        3000:30132/TCP               0s
service/jmeter-influxdb     ClusterIP   10.98.235.104   <none>        8083/TCP,8086/TCP,2003/TCP   1s
service/jmeter-slaves-svc   ClusterIP   None            <none>        1099/TCP,50000/TCP           1s

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/influxdb-jmeter   0/1     1            0           1s
deployment.apps/jmeter-grafana    0/1     1            0           1s
deployment.apps/jmeter-master     0/1     1            0           1s
deployment.apps/jmeter-slaves     0/2     2            0           2s

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/influxdb-jmeter-65465c5d59   1         1         0       1s
replicaset.apps/jmeter-grafana-676f66476f    1         1         0       1s
replicaset.apps/jmeter-master-5586f744c4     1         1         0       1s
replicaset.apps/jmeter-slaves-bd64d5bcc      2         2         0       2s
When all components are deployed, check the pod status in the namespace created above:
root@control-plane:/jmeter-kubernetes# kubectl get po -n jmeter
NAME                               READY   STATUS    RESTARTS   AGE
influxdb-jmeter-65465c5d59-xxb5l   1/1     Running   0          49s
jmeter-grafana-676f66476f-hvf5m    1/1     Running   0          49s
jmeter-master-5586f744c4-h5bkq     1/1     Running   0          49s
jmeter-slaves-bd64d5bcc-hbm7s      1/1     Running   0          50s
jmeter-slaves-bd64d5bcc-qbj76      1/1     Running   0          50s
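If you would rather not poll by hand, a single kubectl wait covers this check (a convenience, not part of the repository’s scripts):

kubectl -n jmeter wait --for=condition=Ready pod --all --timeout=300s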
When all pods are in Running status, run dashboard.sh to set up the Grafana data source and the InfluxDB database.
root@control-plane:/jmeter-kubernetes# ./dashboard.sh
Creating Influxdb jmeter Database
Creating the Influxdb data source
{"datasource":{"id":1,"orgId":1,"name":"jmeterdb","type":"influxdb","typeLogoUrl":"","access":"proxy","url":"http://jmeter-influxdb:8086","password":"admin","user":"admin","database":"jmeter","basicAuth":false,"basicAuthUser":"","basicAuthPassword":"","withCredentials":false,"isDefault":true,"secureJsonFields":{},"version":1,"readOnly":false},"id":1,"message":"Datasource added","name":"jmeterdb"}root@control-plane:/jmeter-kubernetes#
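For reference, the script essentially boils down to two calls: creating the jmeter database with the influx CLI inside the InfluxDB pod, and registering the data source through Grafana’s HTTP API. The sketch below is an approximation of those steps; the grep-based pod lookup and the admin:admin credentials are assumptions, and the actual script in the repository may differ in its details.

influxdb_pod=$(kubectl -n jmeter get po -o name | grep influxdb-jmeter)
kubectl -n jmeter exec -ti "$influxdb_pod" -- influx -execute 'CREATE DATABASE jmeter'

grafana_pod=$(kubectl -n jmeter get po -o name | grep jmeter-grafana)
kubectl -n jmeter exec -ti "$grafana_pod" -- curl -s 'http://admin:admin@127.0.0.1:3000/api/datasources' \
  -H 'Content-Type: application/json' \
  -d '{"name":"jmeterdb","type":"influxdb","url":"http://jmeter-influxdb:8086","access":"proxy","database":"jmeter","isDefault":true}'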
Add your test JMX file and run start_test.sh, specifying your own JMX file to launch the test:
root@control-plane:/jmeter-kubernetes# ./start_test.sh
Enter path to the jmx file ./cloudssky.jmx
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/jmeter/apache-jmeter-5.0/lib/log4j-slf4j-impl-2.11.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/jmeter/apache-jmeter-5.0/lib/ext/pepper-box-1.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Jul 19, 2024 8:24:38 AM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Creating summariser <summary>
Created the tree successfully using cloudssky.jmx
Configuring remote engine: 10.244.235.144
Configuring remote engine: 10.244.235.145
Starting remote engines
Starting the test @ Fri Jul 19 08:24:38 UTC 2024 (1721377478963)
Remote engines have been started
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
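Under the hood this is ordinary JMeter distributed testing: the JMX file is copied to the master pod, and jmeter -n ... -R <slave IPs> starts the remote engines (the two "Configuring remote engine" lines above are the slave pod IPs). A rough sketch of the equivalent manual steps, assuming jmeter is on the master container’s PATH:

slaves=$(kubectl -n jmeter get endpoints jmeter-slaves-svc -o jsonpath='{.subsets[0].addresses[*].ip}' | tr ' ' ',')
master=$(kubectl -n jmeter get po -o name | grep jmeter-master)
kubectl -n jmeter cp cloudssky.jmx "${master#pod/}:/tmp/cloudssky.jmx"
kubectl -n jmeter exec -ti "$master" -- jmeter -n -t /tmp/cloudssky.jmx -R "$slaves"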
Expose the Grafana service and access it from a URL. Remember to change the port to yours; it is the NodePort from service/jmeter-grafana. In the lab environment, click Expose to generate an access URL.
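If you are on your own cluster instead of the lab, you can read the NodePort straight off the service and build the URL yourself:

kubectl -n jmeter get svc jmeter-grafana -o jsonpath='{.spec.ports[0].nodePort}'
# 30132 in the output above, so Grafana is reachable at http://<node-ip>:30132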
Open the Grafana URL generated above in a browser and import a dashboard from the GrafanaJMeterTemplate.json file located in the git repository; you can get the file content from the file explorer (VS Code) of your lab environment.
Note: the lab environment has a small resource quota and may not be suitable for running large tests; you can run this on your own cluster by following the steps provided.