Automating infrastructure on the cloud is a topic that is both intriguing and challenging at the same time. With a proper arsenal of DevOps tools, however, it becomes pretty straightforward. In this blog post, we will create an EMQ (Erlang MQTT broker) cluster in AWS using popular HashiCorp tools, Packer and Terraform.
For a simple guide to setting up an EMQ cluster using Oracle VirtualBox, please refer to this post.
Additionally, it is assumed that the reader is moderately versed in technologies like Packer and Terraform and has a basic understanding of AWS networking and EC2 instances.
Before we get into the technical nitty-gritty, here's a brief overview of what we will try to achieve.
Crux of the matter
First of all, we will 'bake' an Amazon Machine Image (AMI for short) using Packer. This AMI will have EMQ (emqttd) installed. Subsequently, we'll use this custom AMI in our Terraform script to spin up EC2 instances and create a cluster of EMQ nodes for high availability. Just to ensure that there are no loose ends, we'll use an external MQTT publisher and subscriber to test our cluster. Here's how the AWS infrastructure will look once we are finished.
Let's walk through the diagram above to get a better understanding.
For the external world, there are two entry points: the bastion host and the Internet-facing load balancer. (We could have used an internal load balancer instead, but we want to access the EMQ cluster from the Internet.)
The bastion host (or jump box) will be used to SSH into any of the other instances (including the EMQ nodes) and will have an EIP associated with it. This ensures that those instances are not reachable via SSH from anywhere else. Needless to say, in production the bastion host should be hardened and SSH should be allowed only from trusted networks (like a home/office network). Here, however, we will keep things simple and make the bastion host accessible from anywhere (0.0.0.0/0).
The load balancer, on the other hand, allows MQTT traffic from the entire world. This includes ports 1883 (MQTT), 18083 (web dashboard) and 8080 (management API). For more information on MQTT ports, follow this link. In production, we should ensure that the dashboard and management API ports are accessible only from a trusted network.
The NAT instance, deployed in a public subnet, will allow the EMQ nodes to access the Internet when required. However, no traffic from the Internet can reach the EMQ nodes, which are deployed in private subnets.
The EMQ nodes themselves will allow inbound SSH traffic from the bastion host only, along with MQTT traffic from the load balancer. Two special ports, 4369 and 6369, will also have to be opened, since the EMQ nodes use them to establish cluster communication.
(Note that for production deployments proper security measures need to be taken, including allowing MQTT traffic from the external world over the MQTT/SSL port 8883 only.)
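To make these traffic rules concrete, here's a minimal Terraform sketch of what the EMQ node security group could look like. The resource and attribute names are ours (not taken from the script discussed later), and the bastion, load balancer and VPC resources are assumed to be defined elsewhere:

resource "aws_security_group" "emq_node" {
  name   = "emq-node-sg"
  vpc_id = "${aws_vpc.emq_vpc.id}"

  # SSH only from the bastion host's security group
  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = ["${aws_security_group.bastion.id}"]
  }

  # MQTT traffic from the load balancer (ports 18083 and 8080 would be added the same way)
  ingress {
    from_port       = 1883
    to_port         = 1883
    protocol        = "tcp"
    security_groups = ["${aws_security_group.elb.id}"]
  }

  # Erlang port mapper (4369) and cluster channel (6369) between the EMQ nodes themselves
  ingress {
    from_port = 4369
    to_port   = 4369
    protocol  = "tcp"
    self      = true
  }
  ingress {
    from_port = 6369
    to_port   = 6369
    protocol  = "tcp"
    self      = true
  }

  # allow all outbound traffic (e.g. via the NAT instance)
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}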
Solving the puzzle piece by piece
The first step is to use Packer to bake the custom AMI. Here are the steps involved.
1. Baking AMI using Packer
Install Packer
Download Packer for your specific OS from this link. Packer is packaged as a "zip" file. Unzip it and make sure the installation directory is on the system PATH. To verify the installation, use the command 'packer version', which will print the version of the Packer distribution.
Write Packer template
The next step is to write a Packer template that uses the amazon-ebs builder to build the AMI specifically for AWS, and provisioners to install emqttd.
Here's the template (JSON):
{
  "variables": {
    "aws_access_key": "<YOUR_AWS_ACCESS_KEY_ID>",
    "aws_secret_key": "<YOUR_AWS_SECRET_KEY>"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "ap-south-1",
    "source_ami": "ami-f3e5aa9c",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "emqttd-ami-{{timestamp}}"
  }],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "cd ~",
        "sudo apt-get update",
        "sudo apt-get install -y zip",
        "wget http://emqtt.io/downloads/latest/ubuntu16_04 -O emqttd.zip && unzip emqttd.zip"
      ]
    }
  ]
}
There are a few things to notice in the Packer template above. Firstly, we have used an Ubuntu-based source AMI with AMI ID ami-f3e5aa9c (in the Mumbai region) and are using the shell provisioner to download and unzip emqttd. The unzipped folder appears under the ubuntu user's home directory. Additionally, for better security, we could also use environment variables to populate the values in the 'variables' section (a sketch is shown after the note below).
(Note: on Ubuntu instances the 'zip' package may have to be installed first if it is not already present, which is why the template installs it explicitly.)
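For example, a minimal sketch of the 'variables' section reading the credentials from environment variables (using Packer's env template function; the environment variable names below are the standard AWS ones) would look like this:

"variables": {
  "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
  "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"
},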
Validate and build template
To validate the template file, use 'packer validate <YOUR TEMPLATE FILE>'.
And finally, use 'packer build' to generate the image. Here's a sample output of the command:
$packer build -machine-readable template.json | tee packer_build.log
We have chosen to use the -machine-readable flag as part of the build command and tee'd the output to a log file. That way, it is easier for us to extract the AMI identifier, which will be needed in the Terraform script.
Along with a plethora of output, the build process generates a line with the newly created AMI's identifier (the Packer artifact), underlined in red and shown below.
We may use a simple command (given below) to extract this AMI identifier from the log and use it in the next steps.
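For example, since each artifact line in the machine-readable log has the form <timestamp>,<builder>,artifact,0,id,<region>:<ami-id>, something along these lines (a sketch, assuming that default format) would pull the identifier out:

$ grep 'artifact,0,id' packer_build.log | cut -d, -f6 | cut -d: -f2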
Here, however, we'll keep it simple and just note the AMI identifier manually: ami-84f8b2eb
2. Creating Terraform script
Install Terraform
Installing Terraform is also very simple. All we need to do is download the OS-specific zip file from the downloads page. Upon unzipping, we get the Terraform binary, which then needs to be added to the OS PATH variable so that it is easily accessible. To verify a successful installation, just type the following on the command line: $terraform version
Write Terraform script
Here's the link to download the Terraform script pertaining to this example. Let's touch upon some of the interesting aspects of this script.
- The Ubuntu-based AMI used to create the EMQ servers corresponds to the Packer artifact (AMI ID) generated in the 'Baking AMI using Packer' step above (AMI ID = ami-84f8b2eb)
- The AMIs used for creating the bastion and NAT instances are Amazon Linux based, hence the default user for these instances is 'ec2-user'
- A preexisting key pair has been used (instead of creating a fresh pair). To create a new key pair, visit AWS console > EC2 > Network & Security (sidebar) > Key Pairs and click the 'Create Key Pair' button at the top. Alternatively, a fresh key pair could be created from the Terraform script as well (see this link).
- The same key pair has been used to create the bastion host, the NAT instance and the EMQ nodes.
- A preexisting user with admin privileges and access keys (access key ID and secret access key) has been used
- A special Terraform file, terraform.tfvars, has been used to store pertinent values, which are loaded automatically by the Terraform runtime. One may also use environment variables and/or command-line parameters to specify the values of the variables. For a comprehensive list of the variables used in the script, please refer to the 'variables.tf' file
- An important aspect of this script is how it handles EMQ cluster creation. This is done in two steps:
- It creates the first EMQ node and uses the 'remote-exec' provisioner to give this node a new name, emq@<ipv4 address>, and subsequently starts the emqttd server (a sketch of this provisioner is shown after this list).
- In a similar fashion, it creates the second EMQ node (again with the same naming pattern) and starts the emqttd server. This time, however, it uses emqttd_ctl to join the cluster of the first node. It is important to note that after the first node has been created, we can create an arbitrary number of nodes that join its cluster. A sketch of the second node's provisioner also appears after this list.
- The connection to the EMQ nodes for executing shell commands via the 'remote-exec' provisioner is made through the bastion host, without using an SSH agent (see the connection block in the sketch below).
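Since the original snippets were shown as screenshots, here's a minimal sketch of what the two provisioners and the bastion-based connection could look like. The resource names, variable names and file paths are ours; it assumes the unzipped emqttd folder sits under /home/ubuntu and that etc/emq.conf still carries the default node name emq@127.0.0.1:

# First EMQ node: rename the node to emq@<private IPv4> and start the broker
resource "aws_instance" "emq_node_1" {
  ami           = "${var.emq_ami_id}"   # ami-84f8b2eb, the Packer artifact
  instance_type = "t2.micro"
  # subnet, security groups, key pair etc. omitted for brevity

  # SSH to the private node through the bastion host, without an SSH agent
  connection {
    type                = "ssh"
    user                = "ubuntu"
    private_key         = "${file(var.private_key_path)}"
    agent               = false
    bastion_host        = "${aws_eip.bastion.public_ip}"
    bastion_user        = "ec2-user"
    bastion_private_key = "${file(var.private_key_path)}"
  }

  provisioner "remote-exec" {
    inline = [
      "sed -i 's/emq@127.0.0.1/emq@${self.private_ip}/' /home/ubuntu/emqttd/etc/emq.conf",
      "/home/ubuntu/emqttd/bin/emqttd start"
    ]
  }
}

# Second EMQ node: same renaming and start, followed by joining the first node's cluster
resource "aws_instance" "emq_node_2" {
  ami           = "${var.emq_ami_id}"
  instance_type = "t2.micro"
  # same connection block as above, omitted for brevity

  provisioner "remote-exec" {
    inline = [
      "sed -i 's/emq@127.0.0.1/emq@${self.private_ip}/' /home/ubuntu/emqttd/etc/emq.conf",
      "/home/ubuntu/emqttd/bin/emqttd start",
      "sleep 5",
      "/home/ubuntu/emqttd/bin/emqttd_ctl cluster join emq@${aws_instance.emq_node_1.private_ip}"
    ]
  }
}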
Apply script to AWS
The first step in running this Terraform script is to run 'terraform init'. This downloads all the necessary provider plugins and loads the modules. Next comes 'terraform plan', which takes stock of all the AWS resources the script will create, modify or delete. This plan can optionally be saved to a file and used with the next command, 'terraform apply'. This command actually creates/modifies/deletes the AWS resources; before doing that, however, it asks for our confirmation. Once confirmed, the action begins!
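In terms of commands, the workflow boils down to the following, run from the directory containing the script:

$ terraform init     # download provider plugins and initialise modules
$ terraform plan     # preview the resources that will be created/modified/destroyed
$ terraform apply    # create the resources, after asking for confirmation

Optionally, 'terraform plan -out=<file>' saves the plan to a file, which can then be applied with 'terraform apply <file>'.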
Here's a partial sample output of 'terraform apply' command with the confirmation prompt.
Once the script runs successfully, it will produce output with some interesting details about the AWS resources created. Here's a sample output.
As can be seen above, the output gives us the EIP of the bastion host, the DNS name of the load balancer, the private IPs of the EMQ nodes and the VPC ID.
Also, here's a snapshot of the EC2 instances that get created by the script.
Now let's test this deployment.
3. Test EMQ cluster
To test the EMQ cluster, the first step is to log into one of the EMQ nodes (through the bastion host) and check the cluster status. Here are the steps to do this on Linux:
- $eval `ssh-agent` // This will start the SSH agent in the background
- $ssh-add <PATH TO PRIVATE KEY FILE> // This will add private key file to the agent
- $ssh -A ec2-user@<EIP OF BASTION HOST> // Use SSH agent forwarding and log into the bastion host
- $ssh ubuntu@<PRIVATE IP OF EMQ NODE> // Once inside bastion host, log into one of the EMQ nodes via SSH
- $./emqttd_ctl cluster status // Shows the current status of the EMQ cluster
The cluster status should show two EMQ nodes in running state. Here's a sample snapshot of cluster status.
To test the load-balancing functionality, we will install a simple mobile app, MyMQTT (download here), and configure it as both a subscriber and a publisher to the EMQ cluster. Here are the steps and corresponding screenshots:
- Visit the dashboard and confirm that there are no notifications
- Configure EMQ broker and port (DNS of load balancer and port 1883)
- Subscribe for topic: temp/+/fahrenheit and add the subscriber
- Publish message- "102f" to topic: temp/kitchen/fahrenheit
- Visit notification dashboard to check if the app has received the message (as it is also a subscriber)
- Confirm the message received- "102f"
We could have used any other MQTT client for this testing (including other mobile apps).
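For instance, a quick sanity check with the command-line mosquitto clients (assuming they are installed on your machine) could look like this, using the load balancer DNS name from the Terraform output:

$ mosquitto_sub -h <DNS OF LOAD BALANCER> -p 1883 -t "temp/+/fahrenheit" &    # subscribe in the background
$ mosquitto_pub -h <DNS OF LOAD BALANCER> -p 1883 -t "temp/kitchen/fahrenheit" -m "102f"    # publish a test message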
4. Wrapping up
We have only scratched the surface of what tools like Packer and Terraform can offer; there is much more they can bring to the DevOps picture. We would love to hear your feedback on this post and about the solutions you have built around these tools.