How To Make a Multi-Node RabbitMQ Cluster
1. Introduction
Several days ago, I made a tutorial on how to install Apache Kafka in a multi-node cluster, so what's wrong with that? Well, if you have cheap VPS machines like I do, maybe you don't want to install Kafka because of the high hardware requirements it (and ZooKeeper) has, or because you simply prefer RabbitMQ over Apache Kafka.
Whatever your reason, I'm going to write another tutorial to guide you (and future me) through the installation of a multi-node RabbitMQ cluster.
For this tutorial I'm going to use Ubuntu 18.04 as the OS on each machine, with three machines in the cluster.
2. Preparing the machines
The first thing that you have to do is configure the hostname of each machine. RabbitMQ uses the hostnames of the machines to communicate with each other. To do this, follow the next steps on every machine of the cluster.
2.1 Disable the cloud-init module (Optional)
Assuming that your VPS is in some cloud provider (I'm using OVH), you may have to disable the cloud-init module that overwrites the /etc/hosts file on every system reboot. This can lead to problems if you don't disable it. To do so, you have to:
- Edit the cloud-init configuration file:
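Assuming the default cloud-init config location:

```
sudo nano /etc/cloud/cloud.cfg
```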
- Add or modify the following two lines:
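The two options in question (they may already be present with different values) are the ones that stop cloud-init from touching the hostname and the hosts file:

```
preserve_hostname: true
manage_etc_hosts: false
```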
…save and exit.
2.2 Modify your hostname (Optional)
If you have a domain name pointing to your machine, you probably want to use it as the hostname.
- Modify your hostname:
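For example, editing the hostname file directly:

```
sudo nano /etc/hostname
```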
- Put your domain name without the domain part (just the host):
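In my case, something like:

```
mydomain1
```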
…save and exit.
- Modify the /etc/hosts file:
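For example:

```
sudo nano /etc/hosts
```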
- Put your IP:
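With a placeholder IP (use the machine's real IP and the hostname you just set):

```
192.0.2.11 mydomain1
```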
…save and exit.
- Reboot the system:
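Simply:

```
sudo reboot
```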
- Check if everything is still there:
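By printing the hosts file again:

```
cat /etc/hosts
```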
…you should see your config:
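For example, the line added earlier (with the placeholder IP):

```
192.0.2.11 mydomain1
```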
- Check if the hostname is correct:
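With:

```
hostname
```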
…should print:
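In my case:

```
mydomain1
```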
2.3 Add the other machines' hosts
For each of the machines, edit your /etc/hosts file and add the IP and hostname of the other machines.
- In machine1:
- Add the IP and hostname of the other machines:
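For example (placeholder IPs; use the real IPs of your machines and the hostnames you configured before):

```
192.0.2.12 mydomain2
192.0.2.13 mydomain3
```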
- In machine2:
- Add the IP and hostname of the other machines:
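Again, with placeholder IPs:

```
192.0.2.11 mydomain1
192.0.2.13 mydomain3
```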
- In machine3:
- Add the IP and hostname of the other machines:
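And the same idea here:

```
192.0.2.11 mydomain1
192.0.2.12 mydomain2
```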
3. Installing RabbitMQ
On every machine, you have to install RabbitMQ. I will use the apt packages that RabbitMQ provides.
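At the time of writing, a quick way to set up the repositories and install the server was something along these lines (check the official RabbitMQ installation guide for the current scripts):

```
# Add the RabbitMQ team's Erlang and RabbitMQ apt repositories
# (packagecloud quick-install scripts; see the official install guide)
curl -s https://packagecloud.io/install/repositories/rabbitmq/erlang/script.deb.sh | sudo bash
curl -s https://packagecloud.io/install/repositories/rabbitmq/rabbitmq-server/script.deb.sh | sudo bash

# Install the server (pulls in a recent Erlang from the repository above)
sudo apt-get update
sudo apt-get install -y rabbitmq-server
```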
This script installs the latest version of RabbitMQ along with the latest Erlang version. Now you have to create a config file for RabbitMQ:
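The new-style config file lives at /etc/rabbitmq/rabbitmq.conf and usually doesn't exist yet, so create it:

```
sudo nano /etc/rabbitmq/rabbitmq.conf
```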
With the following content:
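A minimal example that matches the localhost-only setup explained below (your config may need more options):

```
# Accept AMQP connections only on the loopback interface;
# external clients will connect through the Nginx TLS proxy.
listeners.tcp.local = 127.0.0.1:5672
```

After saving it, restart the node so the new listener configuration is applied (sudo systemctl restart rabbitmq-server).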
For security reasons we should keep each RabbitMQ instance with the AMQP port listening only on localhost; we will configure a reverse proxy with Nginx and add SSL termination on top of it. Additionally, RabbitMQ has a port listening on all interfaces for inter-node communication (25672 by default); you could also put it behind the proxy, but I will keep it listening on all interfaces.
If everything went well, you should see the RabbitMQ server running:
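For example:

```
sudo systemctl status rabbitmq-server
```

…and check that it reports active (running).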
If not, you could start it with:
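Like this:

```
sudo systemctl start rabbitmq-server
```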
Also, you could see the logs in:
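By default, the Debian/Ubuntu package writes them under:

```
/var/log/rabbitmq/
```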
4. Creating the cluster
Now we can set up the cluster.
4.1 Setting the Erlang cookie
In order to join the nodes into a single cluster, they all have to share the same Erlang cookie. Usually, it is located at:
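On Debian/Ubuntu installs that is:

```
/var/lib/rabbitmq/.erlang.cookie
```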
…on the main machine (the mydomain1 node in my case). If not, you can use the following command, which tells you information about the cookie:
…the output is:
Then get the cookie:
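Simply read the file:

```
sudo cat /var/lib/rabbitmq/.erlang.cookie
```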
…and copy/paste it into the same file on the mydomain2 and mydomain3 nodes.
4.2 Join mydomain2 and mydomain3 to mydomain1 cluster
For each node execute:
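Since the cookie has just been replaced, each node has to be restarted so it picks it up; for example:

```
sudo systemctl restart rabbitmq-server
```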
…this gives us three RabbitMQ brokers that know nothing about each other. Now, since we are going to use the mydomain1 node as the main one, we have to tell the mydomain2 and mydomain3 nodes to join the cluster. To do that, on each of them we have to stop the RabbitMQ application, join the mydomain1 cluster, and then restart the RabbitMQ application.
On mydomain2 and mydomain3 execute:
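That is, stop the RabbitMQ application (the Erlang node itself keeps running):

```
sudo rabbitmqctl stop_app
```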
…then reset the RabbitMQ application:
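Which clears the node's state so it can join another cluster:

```
sudo rabbitmqctl reset
```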
…tell it to join the cluster:
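Using the node name of the main machine (rabbit@mydomain1, following the hostnames used in this tutorial):

```
sudo rabbitmqctl join_cluster rabbit@mydomain1
```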
…and start the RabbitMQ application again:
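That is:

```
sudo rabbitmqctl start_app
```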
To check that all nodes have joined the cluster, execute on each node:
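The cluster status command will list all members:

```
sudo rabbitmqctl cluster_status
```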
…you should see something like:
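The exact format depends on the RabbitMQ version, but all three nodes should appear both as cluster members and as running nodes; on the 3.7.x line it is an Erlang term roughly like this:

```
Cluster status of node rabbit@mydomain1 ...
[{nodes,[{disc,[rabbit@mydomain1,rabbit@mydomain2,rabbit@mydomain3]}]},
 {running_nodes,[rabbit@mydomain3,rabbit@mydomain2,rabbit@mydomain1]},
 ...]
```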
5. SSL Termination with Let’s Encrypt and Nginx
Assuming you already have a certificate issued by Let's Encrypt, I will add SSL termination to every node. To do that, I'm going to use the stream module of Nginx (I also assume that you have Nginx installed on every machine).
In every machine, copy/paste the following config:
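A minimal version of that config, assuming a hypothetical domain mydomain1.com with its Let's Encrypt certificate in the usual location (note that the stream block goes at the top level of nginx.conf, not inside the http block):

```
stream {
    server {
        # AMQPS port exposed to clients
        listen 5671 ssl;

        # Let's Encrypt certificate for this node (placeholder domain)
        ssl_certificate     /etc/letsencrypt/live/mydomain1.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/mydomain1.com/privkey.pem;

        # Forward the decrypted traffic to the local AMQP listener
        proxy_pass 127.0.0.1:5672;
    }
}
```

Reload Nginx afterwards (sudo nginx -s reload) and repeat this on each node with its own domain and certificate.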
Note that now every client has to connect to the cluster with the following string:
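With placeholder credentials and the node's domain (5671 being the port Nginx listens on):

```
amqps://myuser:mypassword@mydomain1.com:5671
```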
Conclusion
RabbitMQ has some drawbacks compared with Kafka. For example, your client might not automatically connect to another node if one fails (you should make sure your client has a reconnection feature), and message ordering is not always maintained (with a single consumer per queue ordering is guaranteed, but with multiple consumers in parallel it is not).
Apart from that, RabbitMQ fits perfectly on my machines and consumes fewer resources. I will try it and see if it can be used for my needs. Installing it was a little more difficult than the Kafka cluster, but the RabbitMQ documentation is quite good, better than Kafka's.