HashiCorp's Terraform is a powerful tool for creating and managing Infrastructure as Code (IaC). IaC is the modern approach to managing infrastructure and a key part of DevOps practice: infrastructure is treated the same way as application software, going through similar cycles of version control, continuous integration, review and testing. In this blog post, we'll use Terraform to create a simple, secure and scalable infrastructure for web servers on AWS. The following diagram shows the landscape we are about to create using Terraform.
Assumptions
Since we are going to use the AWS provider, we need appropriate access credentials. So we must have an IAM user with programmatic access and permissions to create/modify/delete AWS resources. The AdministratorAccess policy is a good start; however, it grants full admin access, which may not be appropriate in all cases.
Download and install Terraform
Downloading and installing Terraform is a very simple process. All we need to do is download the OS-specific ZIP file from the Terraform downloads page. Unzipping it gives us the terraform binary, which then needs to be placed in a directory on your OS PATH so that it is easily accessible. To verify a successful installation, type either of the following on the command line:
terraform version
terraform -v
At the time of this writing, the latest version of Terraform is 0.11.0.
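For example, on Linux the download and install steps might look like this (the URL and version shown correspond to 0.11.0 and are only illustrative; adjust them for your OS and the current release):
wget https://releases.hashicorp.com/terraform/0.11.0/terraform_0.11.0_linux_amd64.zip
unzip terraform_0.11.0_linux_amd64.zip
sudo mv terraform /usr/local/bin/   # any directory on PATH will do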
Create Terraform configuration files
In the current working directory, or a directory of your choice, create a file named 'provider.tf' (any valid file name will work) and start writing the Terraform configuration.
The first step is to specify the provider. In our case it's AWS. For the list of supported providers, visit this page.
# file :: provider.tf
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}
variable "AWS_REGION" { default="ap-south-1"}
provider "aws" {
access_key = "${var.AWS_ACCESS_KEY}"
secret_key = "${var.AWS_SECRET_KEY}"
region = "${var.AWS_REGION}"
}
The values of the provider attributes (access_key, secret_key, region) above are specified using the ${ ... } syntax. This is known as interpolation syntax, and it allows us to reference variables, reference attributes of resources and call functions. Variables are declared with the 'variable' keyword.
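A few interpolation forms that we will use throughout this post (all of them taken from the configuration in this article):
"${var.AWS_REGION}"                # reference a variable
"${aws_vpc.demo_vpc.id}"           # reference an attribute of a resource
"${file(var.PATH_TO_PUBLIC_KEY)}"  # call a built-in function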
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}
variable "AWS_REGION" { default="ap-south-1"}
provider "aws" {
access_key = "${var.AWS_ACCESS_KEY}"
secret_key = "${var.AWS_SECRET_KEY}"
region = "${var.AWS_REGION}"
}
The values of provider attributes (access_key, secret_key, region) above are specified using the following syntax- ${variable}. This is known as the interpolation syntax. This syntax allows us to reference variables, attributes of resources and call functions. The variables are declared with 'variable' keyword.
We have specified a default value for AWS_REGION. However, we have not specified AWS_ACCESS_KEY and AWS_SECRET_KEY, because we want these secrets to be injected into the Terraform runtime through a special file named 'terraform.tfvars'. Here's the content of this file:
# file :: terraform.tfvars
AWS_ACCESS_KEY="<Access Key ID>"
AWS_SECRET_KEY="<Secret Access Key>"
PATH_TO_PUBLIC_KEY="<Path to public key>"
The PATH_TO_PUBLIC_KEY variable holds the path to the public key of the key pair. The corresponding private key will be used to SSH into the EC2 instances.
We can use puttygen or ssh-keygen to generate the private & public key pair.
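For instance, with ssh-keygen (the file name below is just an example; any path works as long as PATH_TO_PUBLIC_KEY in terraform.tfvars points at the matching .pub file):
ssh-keygen -t rsa -b 2048 -f ~/.ssh/terraform_demo_key
# terraform.tfvars would then set PATH_TO_PUBLIC_KEY to ~/.ssh/terraform_demo_key.pub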
Now we can create the required AWS resources, one by one, in the 'main.tf' file. For convenience, we will create the security groups in a separate file ('security_groups.tf').
# file :: main.tf
############################ KEY PAIR
resource "aws_key_pair" "keypair" {
key_name = "demokey"
public_key = "${file(var.PATH_TO_PUBLIC_KEY)}"
}
############################ VPC and subnets
resource "aws_vpc" "demo_vpc" {
cidr_block = "${var.VPC_CIDR}"
instance_tenancy = "default"
enable_dns_support = "true"
enable_dns_hostnames = "true"
enable_classiclink = "false"
tags = "${merge(var.TAGS, map("Name", "demo_vpc"))}"
}
################## Private Subnets
resource "aws_subnet" "demo_private_subnet_1a" {
vpc_id = "${aws_vpc.demo_vpc.id}"
cidr_block = "${var.PRIVATE_SUBNET_CIDRS[0]}"
map_public_ip_on_launch = "false"
availability_zone = "${var.AZs[0]}"
tags = "${merge(var.TAGS, map("Name", "demo_private_subnet_1a"))}"
}
resource "aws_subnet" "demo_private_subnet_1b" {
vpc_id = "${aws_vpc.demo_vpc.id}"
cidr_block = "${var.PRIVATE_SUBNET_CIDRS[1]}"
map_public_ip_on_launch = "false"
availability_zone = "${var.AZs[1]}"
tags = "${merge(var.TAGS, map("Name", "demo_private_subnet_1b"))}"
}
################## Public Subnets
resource "aws_subnet" "demo_public_subnet_1a" {
vpc_id = "${aws_vpc.demo_vpc.id}"
cidr_block = "${var.PUBLIC_SUBNET_CIDRS[0]}"
map_public_ip_on_launch = "true"
availability_zone = "${var.AZs[0]}"
tags = "${merge(var.TAGS, map("Name", "demo_public_subnet_1a"))}"
}
resource "aws_subnet" "demo_public_subnet_1b" {
vpc_id = "${aws_vpc.demo_vpc.id}"
cidr_block = "${var.PUBLIC_SUBNET_CIDRS[1]}"
map_public_ip_on_launch = "true"
availability_zone = "${var.AZs[1]}"
tags = "${merge(var.TAGS, map("Name", "demo_public_subnet_1b"))}"
}
################## Internet Gateway
resource "aws_internet_gateway" "demo_igateway" {
vpc_id = "${aws_vpc.demo_vpc.id}"
tags = "${merge(var.TAGS, map("Name", "demo_igateway"))}"
}
################# Public Route Table and associations
resource "aws_route_table" "demo_public_route" {
vpc_id = "${aws_vpc.demo_vpc.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.demo_igateway.id}"
}
tags = "${merge(var.TAGS, map("Name", "demo_public_route"))}"
}
# Route Associations for Public subnets
resource "aws_route_table_association" "demo_public_route_assoc_1" {
subnet_id = "${aws_subnet.demo_public_subnet_1a.id}"
route_table_id = "${aws_route_table.demo_public_route.id}"
}
resource "aws_route_table_association" "demo_public_route_assoc_2" {
subnet_id = "${aws_subnet.demo_public_subnet_1b.id}"
route_table_id = "${aws_route_table.demo_public_route.id}"
}
################ Private Route Table and associations
resource "aws_route_table" "demo_private_route" {
vpc_id = "${aws_vpc.demo_vpc.id}"
route {
cidr_block = "0.0.0.0/0"
instance_id = "${aws_instance.demo_NAT.id}"
}
tags = "${merge(var.TAGS, map("Name", "demo_private_route"))}"
}
# route associations private
resource "aws_route_table_association" "demo_private_route_assoc_1" {
subnet_id = "${aws_subnet.demo_private_subnet_1a.id}"
route_table_id = "${aws_route_table.demo_private_route.id}"
}
resource "aws_route_table_association" "demo_private_route_assoc_2" {
subnet_id = "${aws_subnet.demo_private_subnet_1b.id}"
route_table_id = "${aws_route_table.demo_private_route.id}"
}
##################### NAT Instance
resource "aws_instance" "demo_NAT" {
ami = "ami-48dcaa27"
instance_type = "t2.micro"
subnet_id = "${aws_subnet.demo_public_subnet_1a.id}"
# source destination check, default is true
source_dest_check = false
# the security group
vpc_security_group_ids = ["${aws_security_group.demo_sg_nat.id}"]
# the public SSH key
key_name = "${aws_key_pair.keypair.key_name}"
# the tags
tags = "${merge(var.TAGS, map("Name", "demo_NAT"))}"
}
#################### Bastion Instance
resource "aws_instance" "demo_Bastion" {
ami = "ami-d5c18eba"
instance_type = "t2.micro"
subnet_id = "${aws_subnet.demo_public_subnet_1b.id}"
# the security group
vpc_security_group_ids = ["${aws_security_group.demo_sg_bastion.id}"]
# the public SSH key
key_name = "${aws_key_pair.keypair.key_name}"
# the tags
tags = "${merge(var.TAGS, map("Name", "demo_Bastion"))}"
lifecycle {
ignore_changes = ["tags"]
}
}
###################### Bastion Elastic IP
resource "aws_eip" "demo_bastion_eip" {
instance = "${aws_instance.demo_Bastion.id}"
vpc = true
}
###################### Web Server instances
resource "aws_instance" "demo_web_servers" {
count = "2"
ami = "ami-d5c18eba"
instance_type = "t2.micro"
# the VPC subnet
subnet_id = "${element(list(aws_subnet.demo_private_subnet_1a.id, aws_subnet.demo_private_subnet_1b.id), count.index)}"
# the security group
vpc_security_group_ids = ["${aws_security_group.demo_sg_web_servers.id}"]
# the public SSH key
key_name = "${aws_key_pair.keypair.key_name}"
user_data="${file("./user_data_web_servers.sh")}"
tags = "${merge(var.TAGS, map("Name", format("demo_web_servers_node_%02d", count.index+1)))}"
}
##################### Web Tier ELB
resource "aws_elb" "demo_webelb" {
name = "demo-web-elb"
subnets = ["${aws_subnet.demo_public_subnet_1a.id}","${aws_subnet.demo_public_subnet_1b.id}"]
security_groups = ["${aws_security_group.demo_sg_web_elb.id}"]
listener {
lb_protocol = "tcp"
lb_port = 80
instance_protocol = "tcp"
instance_port = 80
}
listener {
lb_protocol = "tcp"
lb_port = 443
instance_protocol = "tcp"
instance_port = 443
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 5
timeout = 5
target = "HTTP:80/"
interval = 30
}
instances = ["${aws_instance.demo_web_servers.*.id}"]
cross_zone_load_balancing = true
connection_draining = true
connection_draining_timeout = 400
idle_timeout = 120
tags = "${merge(var.TAGS, map("Name", "demo_web_elb"))}"
}
Above, we have created two web server instances purely to keep the configuration file simple. We could also have used an Auto Scaling group, as sketched below.
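As a rough, untested sketch of that alternative (the resource names and group sizes here are illustrative and not part of the original setup), a launch configuration plus an Auto Scaling group would replace the aws_instance "demo_web_servers" resource and the instances argument of the ELB:

resource "aws_launch_configuration" "demo_web_lc" {
  image_id        = "ami-d5c18eba"
  instance_type   = "t2.micro"
  security_groups = ["${aws_security_group.demo_sg_web_servers.id}"]
  key_name        = "${aws_key_pair.keypair.key_name}"
  user_data       = "${file("./user_data_web_servers.sh")}"
  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "demo_web_asg" {
  launch_configuration = "${aws_launch_configuration.demo_web_lc.name}"
  min_size             = 2
  max_size             = 4
  # launch the web servers in the two private subnets
  vpc_zone_identifier  = ["${aws_subnet.demo_private_subnet_1a.id}", "${aws_subnet.demo_private_subnet_1b.id}"]
  # register instances with the web tier ELB automatically
  load_balancers       = ["${aws_elb.demo_webelb.name}"]
  tag {
    key                 = "Name"
    value               = "demo_web_asg_node"
    propagate_at_launch = true
  }
}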
Here's the content of the 'security_groups.tf' file:
# file :: security_groups.tf
# Security Group for NAT instance
resource "aws_security_group" "demo_sg_nat" {
vpc_id = "${aws_vpc.demo_vpc.id}"
name = "demo_sg_nat"
description = "security group for NAT instance"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
security_groups = ["${aws_security_group.demo_sg_web_servers.id}"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
security_groups = ["${aws_security_group.demo_sg_web_servers.id}"]
}
ingress { # SSH to NAT instance can be done only from bastion hosts
from_port = 22
to_port = 22
protocol = "tcp"
security_groups = ["${aws_security_group.demo_sg_bastion.id}"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = "${merge(var.TAGS, map("Name", "demo_sg_nat"))}"
}
# Security Group for Bastion host
resource "aws_security_group" "demo_sg_bastion" {
vpc_id = "${aws_vpc.demo_vpc.id}"
name = "demo_sg_bastion"
description = "security group for Bastion instance"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = "${merge(var.TAGS, map("Name", "demo_sg_bastion"))}"
}
# Security Group for Web tier ELB
resource "aws_security_group" "demo_sg_web_elb" {
vpc_id = "${aws_vpc.demo_vpc.id}"
name = "demo_sg_web_elb"
description = "security group for Web tier ELB which allows HTTP/(S) traffic from anywhere"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = "${merge(var.TAGS, map("Name", "demo_sg_web_elb"))}"
}
# Security Group for Web servers in private subnet
resource "aws_security_group" "demo_sg_web_servers" {
vpc_id = "${aws_vpc.demo_vpc.id}"
name = "demo_sg_web_servers"
description = "security group for Webservers which allows HTTP/(S) traffic only from instances in demo_sg_web_elb security group"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
security_groups = ["${aws_security_group.demo_sg_web_elb.id}"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
security_groups = ["${aws_security_group.demo_sg_web_elb.id}"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
security_groups = ["${aws_security_group.demo_sg_bastion.id}"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = "${merge(var.TAGS, map("Name", "demo_sg_web_servers"))}"
}
Note that the security group for the Bastion host, demo_sg_bastion, allows SSH traffic from anywhere. In reality, it should allow such traffic only from trusted IP addresses and/or CIDR ranges.
Additionally, we have used user_data to install the Apache HTTP server on the web instances. The relevant script is given below:
#!/bin/bash
yum update -y
yum install httpd -y
service httpd start
chkconfig httpd on
echo "<html><h1>Hello from `hostname`</h1></html>" > /var/www/html/index.html
Each variable that appears in the interpolation syntax needs to be declared. We do so in a separate file, 'variables.tf'. We could have kept everything in a single file, but that would quickly become a maintenance nightmare as the infrastructure grows. At runtime, Terraform automatically combines all the '.tf' files (provider.tf, variables.tf, security_groups.tf, main.tf, etc.) found in the working directory.
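For reference, the working directory for this post ends up with the following files (the layout is flat; no modules are used):
.
├── provider.tf
├── variables.tf
├── main.tf
├── security_groups.tf
├── outputs.tf
├── terraform.tfvars
└── user_data_web_servers.sh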
The content of the 'variables.tf' file is as follows:
# file :: variables.tf
variable "PATH_TO_PUBLIC_KEY" {}
variable "VPC_CIDR" { default = "10.0.0.0/16"}
variable "PRIVATE_SUBNET_CIDRS" {
type = "list"
default = ["10.0.1.0/24", "10.0.2.0/24"]
}
variable "PUBLIC_SUBNET_CIDRS" {
type = "list"
default = ["10.0.3.0/24", "10.0.4.0/24"]
}
variable "AZs" {
default = ["ap-south-1a", "ap-south-1b"]
}
# Custom Tags
variable "TAGS" {
type = "map"
default = {
environment = "demo"
owner = "terraform"
version = "v1.0"
}
}
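These defaults can be overridden without editing the files, either in terraform.tfvars or on the command line with the -var flag; for example (the value here is purely illustrative):
terraform plan -var 'VPC_CIDR=10.1.0.0/16'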
Finally, we will create the 'outputs.tf' file, where we declare the outputs we expect from Terraform once it runs successfully.
# file :: outputs.tf
output "vpc_id" {
description = "The ID of the VPC"
value = "${aws_vpc.demo_vpc.id}"
}
output "public_eip_nat" {
description = "Public elastic IP associated with Bastion host"
value = "${aws_eip.demo_bastion_eip.public_ip}"
}
output "web_elb_dns" {
description = "DNS of the web ELB"
value = "${aws_elb.demo_webelb.dns_name }"
}
output "web_elb_instances" {
description = "Web server instances asociated with Web ELB"
value = "${aws_elb.demo_webelb.instances }"
}
output "web_instances_ip" {
description = "Private IPs of Web server instances"
value = ["${aws_instance.demo_web_servers.*.private_ip}"]
}
Apply configuration
Since we are using the AWS provider, it needs to be downloaded first. We use the 'terraform init' command to download the latest AWS provider plugin.
terraform init
The provider binary will be downloaded, a new directory (.terraform) will be created in the current working directory, and the binary will be placed there under the 'plugins' sub-directory.
A useful Terraform command is 'validate'. It checks the configuration files for errors and reports back.
terraform validate
Once validation succeeds, the next step is to generate an execution plan using the 'terraform plan' command. We could jump directly into creating the resources in AWS, but it is recommended to use the plan command first to check exactly which resources Terraform will create, modify or delete.
terraform plan -out myplan
The above command generates the plan and stores it in the 'myplan' file. The next step is to apply this plan to AWS in order to actually create the resources. For this purpose we'll use the 'terraform apply' command:
terraform apply myplan
Once the resources are created successfully, we can easily verify them in the AWS console. Terraform will also print the outputs defined in the 'outputs.tf' file.
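The outputs can also be queried again at any time after the apply with the 'terraform output' command; for example:
terraform output
terraform output web_elb_dns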
Once terraform apply is executed, Terraform creates a state file, terraform.tfstate (and its backup, terraform.tfstate.backup). This file plays a very important role in keeping the local state in sync with the remote state in AWS.
Additionally, since we have added custom tags, it is easy to find the resources that we created using Terraform.
For example, we can search for any of the tag values, such as 'demo', in the search field of the AWS console to list all the EC2 instances created in this process.
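The same lookup works from the command line as well, assuming the AWS CLI is installed and configured (the filter uses the 'environment' tag from our TAGS variable):
aws ec2 describe-instances --filters "Name=tag:environment,Values=demo" --query "Reservations[].Instances[].InstanceId"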
Conclusion
In this blog post, we have only scratched the surface of the feature set offered by Terraform. Even so, it shows the power of Terraform and how greatly it can simplify infrastructure management.
Terraform modules are a very effective and powerful feature and should be used to write complex configurations. The Terraform Module Registry is also a good source of verified, pre-packaged modules. We will touch upon these in a future blog post.
Here's the link to the ZIP which has all the files described in this blog post.