Immutable Infrastructure CI/CD Using HashiCorp Terraform and Jenkins

Infrastructure-as-Code has gained a great deal of popularity because it is easy to implement and lets you build clean infrastructure with a declarative programming model. This article covers approaches to building and maintaining your infrastructure with Terraform and a Jenkins server.

Introduction

DevOps methodologies and practices have transformed the complexities of IT infrastructure management into code that manages the entire IT infrastructure with little maintenance. There are many configuration management and orchestration tools for tailoring infrastructure-as-code, but selecting the right one depends on numerous factors, such as weighing the pros and cons of each tool and understanding how it fits your use case. The chosen tool should ideally have no vendor lock-in, clear official documentation, good community support, and easy integration with your platform, and it should be agnostic to different cloud providers and third-party software.

Common Scenarios in IT Infrastructure Management

Provisioning and de-provisioning resources in a cloud environment is a common practice for testing and releasing a software product. In conjunction with continuous integration and deployment tools, we may need both orchestration tools and configuration management tools. In any cloud environment, orchestration tools such as Terraform, CloudFormation, Heat, and Azure Resource Manager are responsible for provisioning infrastructure, while configuration management tools such as Chef, Ansible, Puppet, and SaltStack take care of installing software packages on the servers, configuring the services, and deploying applications on top of them. Today, however, configuration management tools support provisioning cloud resources to some extent, and provisioning tools support installing and configuring software on newly created resources. This overlap balances the complexity of provisioning and managing infrastructure, but it is still difficult to achieve everything with a single tool. The recommended approach is to use both a provisioning tool and a configuration management tool when managing infrastructure at scale.
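
As a minimal, illustrative sketch of that overlap, a Terraform resource can invoke a remote-exec provisioner to install a package on the instance it has just created. The AMI ID, key pair, and package below are placeholders, not values from this project:

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI for your region
  instance_type = "t2.micro"
  key_name      = "deployer-key"            # placeholder key pair

  # Light configuration handled by the provisioning tool itself
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
    ]

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = "${file("~/.ssh/deployer-key.pem")}"
    }
  }
}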

Why Do We Need an Immutable Infrastructure?

Even when we manage infrastructure with configuration management tools, there is a chance of configuration drift if frequent configuration changes are applied to the servers. To avoid this, we should not change the configuration of a running server, whether manually or through configuration management tools. Maintaining an immutable infrastructure is the best practice for avoiding configuration drift, and the term is now popular across the DevOps community. It is the practice of provisioning a new server for every configuration change and de-provisioning the old ones. Provisioning tools like Terraform and CloudFormation support creating an immutable infrastructure to a great extent: for every software configuration change, new infrastructure is created, the configuration is deployed, and the old infrastructure is deleted. This avoids confusion when managing a large infrastructure, and we do not need to worry about configuration changes and their accumulated impact over time. In a production environment, DevOps practitioners often follow Blue-Green deployment to avoid unexpected issues that lead to downtime. Rollback is straightforward here, and an application can return to its previous state without difficulty because we did not make any changes to the existing environment. Terraform helps to create such an immutable infrastructure.

Fig. 1: Mutable Infrastructure

Fig. 2: Immutable Infrastructure
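
As a rough sketch of this pattern in Terraform (the resource name and AMI variable below are illustrative), the create_before_destroy lifecycle setting brings up a replacement server before the old one is destroyed whenever the underlying image changes:

resource "aws_instance" "app" {
  ami           = "${var.app_ami_id}"   # a newly baked image triggers replacement
  instance_type = "t2.micro"

  lifecycle {
    # Create the new instance first, then destroy the old one
    create_before_destroy = true
  }
}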

Infrastructure-as-Code

HashiCorp Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open-source tool that codifies APIs into declarative configuration files that can be shared among team members, treated as code, edited, reviewed, and versioned. Terraform stores the state of the infrastructure, which helps prevent configuration drift. The state can be kept in the local environment or in remote key-value or object stores.

Fig. 3: Terraform provider and backend configuration
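
The figure above is roughly equivalent to the sketch below, assuming an AWS provider and the Consul backend used later in this article; the region, Consul address, and key path are placeholders:

terraform {
  backend "consul" {
    address = "consul.example.com:8500"   # placeholder Consul endpoint
    path    = "terraform/state/terraform-ci"
    lock    = true
  }
}

provider "aws" {
  # Credentials come from environment variables or an IAM role,
  # not from the configuration file
  region = "us-east-1"
}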

Terraform configurations are written in the HashiCorp Configuration Language (HCL); JSON configurations are also supported. Terraform supports multiple providers for orchestration, and the tool itself is mostly written in Go. It uses a clear syntax to define resources and supports common data structures such as lists, maps, and strings for defining variables, which makes the code simple to organize. Credentials can be read from environment variables instead of being defined inside the Terraform configuration files.

Fig. 4: Terraform Variables
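
A minimal sketch of such variable definitions, in Terraform 0.11-style syntax to match the pipeline below; every name and value here is illustrative, and secrets such as access keys can be supplied through TF_VAR_ environment variables instead of being committed:

variable "region" {
  type        = "string"
  description = "AWS region to deploy into"
  default     = "us-east-1"
}

variable "availability_zones" {
  type    = "list"
  default = ["us-east-1a", "us-east-1b"]
}

variable "common_tags" {
  type = "map"
  default = {
    project     = "terraform-ci"
    environment = "staging"
  }
}

# No default: provided at runtime, e.g. export TF_VAR_access_key=...
variable "access_key" {}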

Many open-source IDEs support development of Terraform modules. We can extend Terraform's functionality by writing custom plugins and by using provisioners to run scripts with Bash, Ruby, Chef, and so on. Reusable Terraform modules for various providers are available in the Terraform Registry, and Terraform Enterprise offers a web interface to manage Terraform and its state.

Fig. 5: Terraform Project Structure
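
The exact layout varies from team to team, but a small Terraform project is often organized roughly as follows (all file and module names here are illustrative):

terraform-ci/
├── main.tf            # provider, backend, and root resources
├── variables.tf       # input variable declarations
├── outputs.tf         # values exported after apply
├── terraform.tfvars   # non-secret variable values
└── modules/
    ├── network/       # reusable VPC and subnet module
    └── compute/       # reusable instance module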

Benefits of Using Terraform

Fig. 6: Blue Green Deployment with Terraform
  1. Defines infrastructure-as-code to increase operator productivity and transparency.
  2. Terraform configuration can be stored in version control, shared, and collaborated on by teams of operators.
  3. Tracks the complete history of infrastructure versions. Terraform state can be stored on a local disk or in any of the supported remote backends, such as AWS S3, OpenStack Swift, Azure Blob Storage, Consul, etc.
  4. Terraform provides an elegant user experience for operators to safely and predictably make changes to infrastructure.
  5. Terraform builds a dependency graph from the configurations, and walks this graph to generate plans, refresh state, and more.
  6. Separates plans from applies, which reduces mistakes and uncertainty at scale. Plans show operators what would happen; applies execute the changes.
  7. Terraform can be used to create resources across all major infrastructure providers (AWS, GCP, Azure, OpenStack, VMware, and more) and third-party tools such as GitHub, Bitbucket, New Relic, Consul, and Docker.
  8. Terraform lets operators easily use the same configurations in multiple places to reduce mistakes and save time.
  9. We can use the same Terraform configuration to provision identical staging, QA, and production environments.
  10. Common Terraform configurations can be packaged as modules and used across teams and organizations.
Fig. 7: Calling Terraform modules from the workspace
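
As a sketch of what Fig. 7 describes, a root configuration can call both a local module from the repository and a community module from the Terraform Registry; the sources, version, and inputs below are placeholders:

# Local module checked into the same repository
module "network" {
  source     = "./modules/network"
  cidr_block = "10.0.0.0/16"
}

# Community module pulled from the Terraform Registry
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "1.46.0"

  name = "ci-demo-vpc"
  cidr = "10.1.0.0/16"
}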

CI/CD Pipeline Workflow for Applying Changes to the Infrastructure Using Terraform

Fig. 8: CI/CD Pipeline for Terraform Using Jenkins
  1. The developer or operations engineer changes the Terraform configuration file on their local machine and commits the code to BitBucket.
  2. A BitBucket webhook triggers a continuous integration job in Jenkins.
  3. Jenkins will pull the latest code from the configured repo which contains Terraform files to its workspace.
  4. Jenkins reads the Terraform configuration and then initializes the remote Consul backend.
  5. Terraform generates a plan of the changes that have to be applied to the infrastructure.
  6. Jenkins sends a notification to a Slack channel about the changes for manual approval.
  7. Here, the user can approve or disapprove the Terraform plan.
  8. The user input is sent to the Jenkins server to proceed with the next action.
  9. Once the changes are approved by an operator, Jenkins will execute the terraform apply command to apply the changes to the infrastructure.
  10. Terraform will create a report of the resources and their dependencies created while executing the plan.
  11. Terraform will provision the resources in the provider environment.
  12. Jenkins will again send a notification to the Slack channel about the status of the infrastructure after the changes have been applied. Once the job has run, the Jenkins pipeline job is configured to clean up the workspace created by the job.

Jenkinsfile

import groovy.json.JsonOutput

// git env vars
env.git_url = 'https://user@bitbucket.org/user/terraform-ci.git'
env.git_branch = 'master'
env.credentials_id = '1'

// slack env vars
env.slack_url = 'https://hooks.slack.com/services/SDKJSDKS/SDSDJSDK/SDKJSDKDS23434SDSDLCMLC'
env.notification_channel = 'my-slack-channel'

// jenkins env vars
env.jenkins_server_url = 'https://52.79.46.98'
env.jenkins_node_custom_workspace_path = "/opt/bitnami/apps/jenkins/jenkins_home/${JOB_NAME}/workspace"
env.jenkins_node_label = 'master'
env.terraform_version = '0.11.10'

// Post a message to the Slack channel through the incoming-webhook URL
def notifySlack(text, channel, attachments) {
    def payload = JsonOutput.toJson([text       : text,
                                     channel    : channel,
                                     username   : 'Jenkins',
                                     attachments: attachments])
    sh "export PATH=/opt/bitnami/common/bin:$PATH && curl -X POST --data-urlencode \'payload=${payload}\' ${slack_url}"
}

pipeline {
    agent {
        node {
            customWorkspace "$jenkins_node_custom_workspace_path"
            label "$jenkins_node_label"
        }
    }
    stages {
        // Pull the latest Terraform configuration from the repository
        stage('fetch_latest_code') {
            steps {
                git branch: "$git_branch",
                    credentialsId: "$credentials_id",
                    url: "$git_url"
            }
        }
        // Install the Terraform binary on the build node
        stage('install_deps') {
            steps {
                sh "sudo apt install wget zip python-pip -y"
                // Each sh step runs in its own shell, so chain the download commands
                sh "cd /tmp && curl -o terraform.zip https://releases.hashicorp.com/terraform/${terraform_version}/terraform_${terraform_version}_linux_amd64.zip && unzip terraform.zip && sudo mv terraform /usr/bin && rm -rf terraform.zip"
            }
        }
        // Initialize the backend, generate a plan, and notify Slack
        stage('init_and_plan') {
            steps {
                sh "sudo terraform init $jenkins_node_custom_workspace_path/workspace"
                sh "sudo terraform plan $jenkins_node_custom_workspace_path/workspace"
                notifySlack("Build completed! Build logs from jenkins server $jenkins_server_url/jenkins/job/$JOB_NAME/$BUILD_NUMBER/console", notification_channel, [])
            }
        }
        // Wait for manual approval before applying the plan
        stage('approve') {
            steps {
                notifySlack("Do you approve deployment? $jenkins_server_url/jenkins/job/$JOB_NAME", notification_channel, [])
                input 'Do you approve deployment?'
            }
        }
        // Apply the approved changes and notify Slack
        stage('apply_changes') {
            steps {
                sh "echo 'yes' | sudo terraform apply $jenkins_node_custom_workspace_path/workspace"
                notifySlack("Deployment logs from jenkins server $jenkins_server_url/jenkins/job/$JOB_NAME/$BUILD_NUMBER/console", notification_channel, [])
            }
        }
    }
    post {
        always {
            // Clean up the workspace created by this job
            cleanWs()
        }
    }
}


How to Set Up the Deployment Environment

  1. Create a repo in an SCM tool such as GitLab or BitBucket and commit the Terraform configuration and its dependent modules to the repo. If you are using any third-party remote module as a dependency, it will be downloaded automatically during execution.
  2. If you do not have a Jenkins server, pull a Jenkins Docker image and run it on your local machine. If you are setting it up in a cloud environment, look for a Jenkins virtual machine image in the marketplace to set up the environment and configure the required plugins.
  3. Create a webhook in your BitBucket repo settings to invoke an HTTP call to your Jenkins callback URL for triggering the continuous integration job.
  4. If you have an existing Jenkins server, ensure the Pipeline plugin is installed; otherwise, go to “Manage Plugins” and install it.
  5. In this project, we are using Consul as a remote backend for state storage and state locking. Local state is not recommended when multiple people are involved in the project, nor for production deployments. It is better to use a remote backend that provides highly available storage with state-locking functionality, so that multiple users cannot write the state at the same time.
  6. If you do not have a Consul key-value store in your environment, pull the Consul Docker image and set up a single-node cluster. For a production deployment, set up a distributed key-value store.
  7. Create an application in Slack and note down the Slack integration details for configuring them in the Jenkinsfile.
  8. Configure your provider and backend details in the main Terraform configuration file, either through environment variables or by persisting them in the repo. In my case, I am going to provision a resource in AWS and my CI server is hosted in AWS, so I am assigning an IAM role with sufficient privileges to the server (a minimal sketch appears after this list).
  9. Create a new project in Jenkins using the Pipeline plugin.
  10. Add the Jenkinsfile where the pipeline stages are defined. Save the job and trigger it manually for testing. Then change the configuration, commit the changes to BitBucket, and ensure the job is triggered automatically. Check the Jenkins log for more details about the job.
Fig. 9: Jenkins Log
Fig. 10: Build History
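
For reference, the kind of root configuration this pipeline applies can be as small as the sketch below; because the Jenkins server runs in AWS with an IAM role attached, no access keys appear in the file (the AMI ID, region, and tags are placeholders):

provider "aws" {
  # Credentials come from the instance's IAM role; only the region is set here
  region = "us-east-1"
}

resource "aws_instance" "demo" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI
  instance_type = "t2.micro"

  tags {
    Name      = "terraform-ci-demo"
    ManagedBy = "terraform"
  }
}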

It is recommended to use reusable modules in Terraform by writing your own modules and using modules from the Terraform Registry. We can also use a Docker build agent as a Jenkins slave and preserve the workspace by attaching a persistent volume to the Jenkins server from the Docker host. It is also recommended to encrypt the Consul key-value data with HashiCorp Vault, a reliable secrets management service that can be accessed through HTTP calls.

Fig. 11: CI/CD Using HashiCorp Terraform and AWS CodePipeline

Right now, each cloud provider offers its own CI tools. AWS offers CodePipeline, where we can use CodeCommit for SCM, CodeBuild for the build environment in which the Terraform configurations are applied, and SNS to send notifications for manual approval. Azure offers Azure DevOps tools for creating a CI/CD pipeline, where the user can commit code to Azure TFS or any SCM through VSTS and the commit will trigger the CI job. We can set up the pipeline job based on the cloud platform we are using, whereas Jenkins can be used both in the cloud and on on-prem infrastructure.

Enjoy Terraforming!

This article is also published on DZone and OSFY.
