This article will teach you how to run ActiveMQ on Kubernetes and integrate it with your app through Spring Boot. We will deploy a clustered ActiveMQ broker using a dedicated operator. Then we are going to build and run two Spring Boot apps. The first of them runs in multiple instances and receives messages from the queue, while the second sends messages to that queue. In order to test the ActiveMQ cluster, we will use Kind. The consumer app connects to the cluster using several different modes. We will discuss those modes in detail.
You can find a lot of articles about other message brokers like RabbitMQ or Kafka on my blog. If you would like to read about RabbitMQ on Kubernetes, please refer to that article. In order to find out more about Kafka and Spring Boot integration, you can read the article about Kafka Streams and Spring Cloud Stream available here. Previously I didn’t write much about ActiveMQ, but it is also a very popular message broker. For example, it supports the latest version of the AMQP protocol, while RabbitMQ is based on its own extension of AMQP 0.9.1.
Source Code
If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then go to the messaging directory. There you will find three Spring Boot apps: simple-producer, simple-consumer and simple-counter. After that, you should just follow my instructions. Let’s begin.
Integrate Spring Boot with ActiveMQ
Let’s begin with integration between our Spring Boot apps and the ActiveMQ Artemis broker. In fact, ActiveMQ Artemis is the base of the commercial product provided by Red Hat called AMQ Broker. Red Hat actively develops a Spring Boot starter for ActiveMQ and an operator for running it on Kubernetes. In order to access the starter, you need to include the Red Hat Maven repository in your pom.xml file:
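Here’s a sketch of that entry; the repository id is arbitrary, and the URL points at the public Red Hat GA repository:

```xml
<repositories>
  <repository>
    <id>red-hat-ga</id>
    <url>https://maven.repository.redhat.com/ga</url>
  </repository>
</repositories>
```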
After that, you can include a starter in your Maven pom.xml:
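A sketch of that dependency, assuming the AMQP 1.0 JMS starter from the AMQP Hub project (pick the version matching your Spring Boot release):

```xml
<dependency>
  <groupId>org.amqphub.spring</groupId>
  <artifactId>amqp-10-jms-spring-boot-starter</artifactId>
  <version>2.5.6</version>
</dependency>
```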
Then, we just need to enable JMS for our app with the @EnableJms annotation:
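A minimal sketch of the main class (the class name is an assumption):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.jms.annotation.EnableJms;

@SpringBootApplication
@EnableJms
public class SimpleConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(SimpleConsumerApplication.class, args);
    }
}
```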
Our application is very simple. It just receives and prints an incoming message. The method for receiving messages should be annotated with @JmsListener. The destination field contains the name of a target queue.
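Here’s a minimal sketch of such a listener; the class and logger names are assumptions, while the test-1 queue is the one we create later in this article:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Service;

@Service
public class Listener {

    private static final Logger LOG = LoggerFactory.getLogger(Listener.class);

    // receive messages from the test-1 queue and print them
    @JmsListener(destination = "test-1")
    public void processMessage(SimpleMessage message) {
        LOG.info("Received: {}", message);
    }
}
```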
Here’s the class that represents our message:
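A minimal sketch of that class, assuming two simple fields:

```java
import java.io.Serializable;

public class SimpleMessage implements Serializable {

    private Long id;
    private String content;

    public SimpleMessage() {
    }

    public SimpleMessage(Long id, String content) {
        this.id = id;
        this.content = content;
    }

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getContent() { return content; }
    public void setContent(String content) { this.content = content; }

    @Override
    public String toString() {
        return "SimpleMessage{id=" + id + ", content='" + content + "'}";
    }
}
```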
Finally, we need to set the connection configuration. With the AMQP Spring Boot starter it is very simple. We just need to set the amqphub.amqp10jms.remoteUrl property. For now, we will rely on an environment variable set at the level of the Kubernetes Deployment.
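In application.yml that could look as follows, referencing the ARTEMIS_URL environment variable we will set later in the Deployment manifest:

```yaml
amqphub:
  amqp10jms:
    remoteUrl: ${ARTEMIS_URL}
```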
The producer application is pretty similar. Instead of the annotation for receiving messages, we use Spring JmsTemplate for producing and sending messages to the target queue. The method for sending messages is exposed as an HTTP POST /producer/send endpoint.
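A sketch of such a controller, reusing the SimpleMessage class from above:

```java
import org.springframework.jms.core.JmsTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/producer")
public class ProducerController {

    private final JmsTemplate jmsTemplate;

    public ProducerController(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // send the message from the request body to the test-1 queue
    @PostMapping("/send")
    public SimpleMessage send(@RequestBody SimpleMessage message) {
        jmsTemplate.convertAndSend("test-1", message);
        return message;
    }
}
```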
Create a Kind cluster with Nginx Ingress
Our example apps are ready. Before deploying them, we need to prepare the local Kubernetes cluster. We will deploy there an ActiveMQ cluster consisting of three brokers. Therefore, our Kubernetes cluster will also consist of three worker nodes. We will also run three instances of the consumer app on Kubernetes. They connect to the ActiveMQ brokers over the AMQP protocol. There is also a single instance of the producer app that sends messages on demand. Here’s the diagram of our architecture.
In order to run a multi-node Kubernetes cluster locally, we will use Kind. We will test not only communication over the AMQP protocol but also expose the ActiveMQ management console over HTTP. Because ActiveMQ uses headless services for exposing the web console, we have to create and configure Ingress on Kind to access it. Let’s begin.
In the first step, we are going to create a Kind cluster. It consists of a control plane and three workers. The configuration has to be prepared correctly to run the Nginx Ingress Controller. We should add the ingress-ready label to a single worker node and expose ports 80 and 443. Here’s the final version of a Kind config file:
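Here’s a sketch based on the standard Kind ingress setup:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
    # this node will run the Nginx Ingress Controller
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
  - role: worker
  - role: worker
```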
Now, let’s create a Kind cluster by executing the following command:
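Assuming the configuration above is saved as kind-cluster.yaml:

```shell
kind create cluster --config kind-cluster.yaml
```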
If your cluster has been successfully created, you should see similar information:
After that, let’s install the Nginx Ingress Controller. It is just a single command:
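A sketch of that command, using the Kind-specific manifest from the ingress-nginx repository:

```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
```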
Let’s verify the installation:
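For example, we can wait until the controller pod is ready (a sketch based on the standard Kind ingress instructions):

```shell
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s
```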
Install ActiveMQ Artemis on Kubernetes
Finally, we may proceed to the ActiveMQ Artemis installation. Firstly, let’s install the required CRDs. You may find all the YAML manifests inside the operator repository on GitHub. The manifests with CRDs are located in the deploy/crds directory:
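Assuming you have cloned the operator repository (for example, the ArtemisCloud activemq-artemis-operator project) and are inside it:

```shell
kubectl create -f deploy/crds
```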
After that, we can install the operator:
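A sketch of that step; the exact list of manifests in the deploy directory may differ between operator versions:

```shell
kubectl create -f deploy/service_account.yaml
kubectl create -f deploy/role.yaml
kubectl create -f deploy/role_binding.yaml
kubectl create -f deploy/operator.yaml
```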
In order to create a cluster, we have to create the ActiveMQArtemis object. It defines the number of brokers in the cluster (1). We should also set the acceptor to expose the AMQP port on every single broker pod (2). Of course, we will also expose the management console (3).
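Here’s a sketch of such a manifest; the API version and field names follow the ArtemisCloud CRDs and may vary between operator releases:

```yaml
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: ex-aao
spec:
  deploymentPlan:
    size: 3                # (1) number of brokers in the cluster
  acceptors:
    - name: amqp
      protocols: amqp
      port: 5672
      expose: true         # (2) expose the AMQP port per broker pod
  console:
    expose: true           # (3) expose the management web console
```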
Once the ActiveMQArtemis object is created, the operator starts the deployment process. It creates the StatefulSet object:
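We can display it with kubectl (assuming the ex-aao-ss naming used by the operator):

```shell
kubectl get statefulset ex-aao-ss
```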
It starts all three pods with brokers sequentially:
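We can watch them coming up one by one:

```shell
kubectl get pods -w
```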
Let’s display a list of Services created by the operator. There is a single Service per broker for exposing the AMQP port (ex-aao-amqp-*) and the web console (ex-aao-wsconsj-*):
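```shell
kubectl get svc
```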
The operator automatically creates an Ingress object for each web console Service. We will modify them by adding different hosts. Let’s say it is the one.activemq.com domain for the first broker, two.activemq.com for the second broker, etc.
After creating the Ingresses, we have to add the following line to /etc/hosts:
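Since Kind maps ports 80 and 443 to localhost, the hosts should point at 127.0.0.1:

```
127.0.0.1 one.activemq.com two.activemq.com three.activemq.com
```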
Now, we can access the management console, for example for the third broker under the following URL: http://three.activemq.com/console.
Once the broker is ready, we may define a test queue. The name of that queue is test-1.
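One way to do it declaratively is the ActiveMQArtemisAddress object handled by the same operator; here’s a sketch (field names follow the ArtemisCloud CRD):

```yaml
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemisAddress
metadata:
  name: test-1
spec:
  addressName: test-1
  queueName: test-1
  routingType: anycast
```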
Run the Spring Boot app on Kubernetes and connect to ActiveMQ
Now, let’s deploy the consumer app. In the Deployment manifest, we have to set the ActiveMQ cluster connection URL. But wait… how to connect it? There are three brokers exposed using three separate Kubernetes Services. Fortunately, the AMQP Spring Boot starter supports it. We may set the addresses of all three brokers inside the failover section. Let’s try it and see what happens.
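Here’s a sketch of that environment variable; the per-broker Service names follow the ex-aao-amqp-* pattern shown earlier:

```yaml
env:
  - name: ARTEMIS_URL
    value: failover:(amqp://ex-aao-amqp-0-svc:5672,amqp://ex-aao-amqp-1-svc:5672,amqp://ex-aao-amqp-2-svc:5672)
```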
The application is prepared to be deployed with Skaffold. If you run the skaffold dev command, you will deploy the app and see the logs of all three instances of the consumer app. What’s the result? All the instances connect to the first URL from the list.
Fortunately, there is a failover parameter that helps distribute client connections more evenly across multiple remote peers. With the failover.randomize option, the URIs are randomly shuffled before the client attempts to connect to one of them. Let’s replace the ARTEMIS_URL env in the Deployment manifest with the following line:
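A sketch with the same broker list and the randomize option appended:

```yaml
- name: ARTEMIS_URL
  value: failover:(amqp://ex-aao-amqp-0-svc:5672,amqp://ex-aao-amqp-1-svc:5672,amqp://ex-aao-amqp-2-svc:5672)?failover.randomize=true
```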
The distribution between broker instances looks slightly better. Of course, the result is random, so you may get different results.
Another way to distribute the connections is through a dedicated Kubernetes Service. We don’t have to leverage the services created automatically by the operator. We can create our own Service that load balances between all available pods with brokers.
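Here’s a sketch of such a Service; the pod label used in the selector is an assumption, so check the labels on your broker pods first:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ex-aao-amqp-lb
spec:
  selector:
    application: ex-aao-app   # assumed label set by the operator on broker pods
  ports:
    - port: 5672
      protocol: TCP
      targetPort: 5672
```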
Now, we can drop the failover section on the client side and fully rely on Kubernetes mechanisms.
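The connection URL then points at the single Service created above:

```yaml
- name: ARTEMIS_URL
  value: amqp://ex-aao-amqp-lb:5672
```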
This time we won’t see anything in the application logs, because all the instances connect to the same URL. We can verify the distribution between all the broker instances using, e.g., the management web console. Here’s a list of consumers on the first instance of ActiveMQ:
Below, you will see exactly the same results for the second instance. All the consumer app instances have been distributed equally between all available brokers inside the cluster.
Now, we are going to deploy the producer app. We use the same Kubernetes Service for connecting to the ActiveMQ cluster.
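A sketch of the producer Deployment; the container image name is an assumption:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-producer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-producer
  template:
    metadata:
      labels:
        app: simple-producer
    spec:
      containers:
        - name: simple-producer
          image: piomin/simple-producer   # assumed image name
          ports:
            - containerPort: 8080
          env:
            - name: ARTEMIS_URL
              value: amqp://ex-aao-amqp-lb:5672
```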
Because we have to call the HTTP endpoint, let’s create the Service for the producer app:
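A minimal sketch exposing port 8080:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: simple-producer
spec:
  selector:
    app: simple-producer
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
```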
Let’s deploy the producer app using Skaffold with port-forwarding enabled:
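Skaffold supports this with a single flag:

```shell
skaffold dev --port-forward
```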
Here’s a list of our Deployments:
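We can display them as follows:

```shell
kubectl get deployments
```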
In order to send a test message, just execute the following command:
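A sketch of such a request, assuming Skaffold forwards the producer Service to local port 8080 and the message fields from the SimpleMessage sketch above:

```shell
curl -X POST http://localhost:8080/producer/send \
  -H "Content-Type: application/json" \
  -d '{"id": 1, "content": "Hello ActiveMQ!"}'
```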
Advanced configuration
If you need more advanced traffic distribution between brokers inside the cluster, you can achieve it in several ways. For example, we can dynamically override a configuration property at runtime. Here’s a very simple example. After starting, the application connects to an external service over HTTP. It returns the next instance number.
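A minimal sketch of that idea using an ApplicationContextInitializer; the counter Service name, endpoint, and broker URL pattern are assumptions:

```java
import java.util.Map;

import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.env.MapPropertySource;
import org.springframework.web.client.RestTemplate;

public class BrokerUrlInitializer
        implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    @Override
    public void initialize(ConfigurableApplicationContext context) {
        // ask the counter app for the next broker instance number (assumed endpoint)
        Integer index = new RestTemplate()
                .getForObject("http://simple-counter:8080/counter", Integer.class);
        // point the AMQP client at the matching per-broker Service
        String url = String.format("amqp://ex-aao-amqp-%d-svc:5672", index);
        context.getEnvironment().getPropertySources().addFirst(
                new MapPropertySource("artemis", Map.of("amqphub.amqp10jms.remoteUrl", url)));
    }
}
```

The initializer can then be registered when building the application, e.g. via SpringApplicationBuilder or in META-INF/spring.factories.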
Here’s the implementation of the counter app. It just increments a counter and returns it modulo the number of broker instances. Of course, we may create a more advanced implementation and provide, e.g., a connection to the instance of a broker running on the same Kubernetes node as the app pod.
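A sketch of such a controller, assuming three brokers and a /counter endpoint:

```java
import java.util.concurrent.atomic.AtomicLong;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CounterController {

    // assumed to match the number of brokers in the ActiveMQ cluster
    private static final int BROKERS = 3;

    private final AtomicLong counter = new AtomicLong();

    // return the next broker instance number: 0, 1, 2, 0, 1, 2, ...
    @GetMapping("/counter")
    public Integer next() {
        return (int) (counter.getAndIncrement() % BROKERS);
    }
}
```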
Final Thoughts
ActiveMQ is an interesting alternative to RabbitMQ as a message broker. In this article, you learned how to run, manage, and integrate ActiveMQ with Spring Boot on Kubernetes. It can be declaratively managed on Kubernetes thanks to the ActiveMQ Artemis Operator. You can also easily integrate it with Spring Boot using a dedicated starter. It provides various configuration options and is actively developed by Red Hat and the community.
Reference https://piotrminkowski.com/2022/07/26/activemq-artemis-with-spring-boot-on-kubernetes/