Upload Terraform State files to remote backend – Amazon S3 and Azure Storage Account

As you might have already learned, Terraform stores information about the infrastructure it manages in state files. By default, if we run Terraform code in a directory named /code/tf, state is recorded in a file named /code/tf/terraform.tfstate. This file contains JSON data that maps the resources declared in the configuration files to the real-world infrastructure. Using this file, Terraform knows what has already been deployed, compares that to what the configuration files declare, and comes up with a plan for what needs to change. So it is critical that Terraform refers to the correct state file, which should ideally be a 1:1 mapping of the real-world infrastructure.

However, as we learned in our previous blog post on managing Terraform files in a git repository, we should not check in state files, as they may contain secrets and sensitive information about the infrastructure. If you are working in a team, this creates problems, since you need to find a way to share state files among team members. To solve this, Terraform provides a feature known as remote backends. A Terraform backend determines how Terraform loads and stores state files.

The default is the local backend, which stores state files on the local disk. Remote backends allow us to store the state file in a remote, shared store. A number of remote backends are supported, including Amazon S3, Azure Storage Account, Google Cloud Storage, HashiCorp’s Terraform Cloud, Terraform Pro, and Terraform Enterprise, among others. In this blog post, we’ll learn how to use two of these – Amazon S3 and Azure Storage Account – to store and use Terraform state files.

Using Amazon S3 Bucket to store State files

Create an S3 bucket

To create an S3 bucket, we can use the aws_s3_bucket resource provided by the aws provider. There are a lot of options, but generally providing just the bucket name will do. However, we’ll add a few important attributes – lifecycle, versioning and server_side_encryption_configuration – which will save us some pain in the future. We can use the example code below to create the S3 bucket:

terraform {
  required_version = ">= 0.12, < 0.13"
}

provider "aws" {
  region = "us-east-2"
}

resource "aws_s3_bucket" "terrafrom-state" {
  bucket = "terrafrom-state"

  # Prevent accidental deletion of this bucket
  lifecycle {
    prevent_destroy = true
  }

  # Keep a full history of state files
  versioning {
    enabled = true
  }

  # Encrypt state files at rest
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

In the above code, under lifecycle, we have set prevent_destroy = true. This prevents the S3 bucket from being accidentally deleted. By enabling versioning, one can go through the history of the state files and determine which changes were made when. It is also useful in case one needs to revert to an old state. Also, setting server-side encryption to AES256 ensures the state files are encrypted when stored in S3.
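Since state files can contain secrets, it is also worth explicitly blocking all public access to this bucket. Here is a sketch using the aws provider's aws_s3_bucket_public_access_block resource (this is an addition on top of the original setup):

```hcl
resource "aws_s3_bucket_public_access_block" "terraform-state" {
  bucket = aws_s3_bucket.terrafrom-state.id

  # Deny every form of public access to the state bucket
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

This is cheap insurance: even if a bucket policy or ACL is later misconfigured, the state file can never be exposed publicly.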

Create AWS DynamoDB Table

To solve the issue of locking and unlocking state, we need to create an AWS DynamoDB table to store the lock state. For this, we can use the aws_dynamodb_table resource provided by the aws provider, supplying a table name that is unique within the AWS region. We’ll also set billing_mode to PAY_PER_REQUEST, so that we are only charged per read/write request. Below is our sample code:

terraform {
  required_version = ">= 0.12, < 0.13"
}

provider "aws" {
  region = "us-east-2"
}

resource "aws_dynamodb_table" "terraform-locks" {
  name         = "terraform-locks-mohit-20200929"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
Do note that you do not need to repeat the terraform and provider blocks again. At this point, we can run the terraform init, plan and apply commands to deploy the above resources.

Use Remote backend within Terraform Config files

For this, we need to define a block named backend within the terraform block in the resource configuration file. For Amazon S3, the backend is named s3. Here’s what the backend configuration looks like for an S3 bucket:

# partially complete block of code below – will not work in isolation
terraform {
  required_version = ">= 0.12, < 0.13"

  backend "s3" {
    bucket         = "terrafrom-state-20200929"
    key            = "azure-devops/terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "terraformlocks-mohit-202009292250"
    encrypt        = true
  }
}

Note that I have changed the DynamoDB table name and bucket name a little from the previous code, since I had to create unique names for resources within AWS.

To instruct Terraform to switch to the new backend, we have to run the terraform init command again. This command is idempotent, so it is safe to run again and again. It should give you the below output:

At this point, if we already have any state files locally, they will be copied to S3:

With this backend enabled, Terraform will automatically pull the latest state from this S3 bucket before running a command, and automatically push the latest state to the S3 bucket after running a command.
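As an aside, once the state lives in S3, other Terraform configurations can read its outputs through the terraform_remote_state data source. Here is a sketch, assuming the bucket and key names used above:

```hcl
data "terraform_remote_state" "azure_devops" {
  backend = "s3"

  config = {
    bucket = "terrafrom-state-20200929"
    key    = "azure-devops/terraform.tfstate"
    region = "us-east-2"
  }
}

# Outputs defined in that state are then available as, e.g.:
# data.terraform_remote_state.azure_devops.outputs.azuredevops_project_id
```

This gives read-only access to the outputs, which is handy when splitting infrastructure across multiple state files.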

See it in action

Building on the code from our previous blog posts, we can add a couple of output parameters to the outputs.tf file:

output "azuredevops_project_id" {
  value       = azuredevops_project.tf-example.id
  description = "azure devops project id"
}

output "azuredevops_project_name" {
  value       = azuredevops_project.tf-example.project_name
  description = "azure devops project name"
}

Now we run terraform apply to apply our changes. Since we have not changed any infrastructure, there is no real-world change. However, if we take a look at the output, we can see that Terraform is now acquiring and releasing locks on the state file:

Also, if we move over to S3, we can see the different versions of the state file, since we enabled versioning:
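With versioning enabled, old state versions accumulate indefinitely. If that ever becomes a concern, a lifecycle_rule can be added to the bucket resource from earlier to expire noncurrent versions – a sketch with an illustrative retention period (this is an addition, not part of the original post):

```hcl
resource "aws_s3_bucket" "terrafrom-state" {
  bucket = "terrafrom-state"

  versioning {
    enabled = true
  }

  # Delete state file versions that have been superseded for 90 days
  # (90 is an illustrative value – pick a retention period that suits you)
  lifecycle_rule {
    enabled = true

    noncurrent_version_expiration {
      days = 90
    }
  }
}
```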

Using Azure Storage Account to store State files

Create Azure Storage Account

To create an Azure Storage Account, we can use the azurerm_storage_account resource provided by the azurerm provider. Since every resource in azurerm must be associated with a resource group, we can either use an existing resource group (fetching its properties with a data source) or create a new one. Below is sample code for creating the resource group and storage account:

provider "azurerm" {
  version  = "=2.20.0"
  features {}
}

resource "azurerm_resource_group" "tf-state-resource-group" {
  name     = "tf-state-rg"
  location = "eastus2"
}

resource "azurerm_storage_account" "tf-state-account" {
  name                     = "tfstatemohit20200930"
  resource_group_name      = azurerm_resource_group.tf-state-resource-group.name
  location                 = "eastus2"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    usage = "terraform-state-files"
  }
}
The above code is quite basic and creates the required resources. Let’s run terraform init, plan and apply to create them:
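One thing to note: the backend configuration in the next section references a container named azure-devops, which the code above does not create. If it doesn’t already exist, it can be added with the azurerm_storage_container resource – a sketch (an addition to the code above):

```hcl
resource "azurerm_storage_container" "tf-state-container" {
  name                  = "azure-devops"
  storage_account_name  = azurerm_storage_account.tf-state-account.name
  # Keep the state blobs private
  container_access_type = "private"
}
```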

Use storage account as remote backend

To use the storage account as a remote backend, we first need to authenticate using either a storage access key or a storage SAS token. These can be supplied through environment variables such as ARM_SAS_TOKEN (for a SAS token) or ARM_ACCESS_KEY (for a storage access key). They can also be included in the Terraform configuration code, though this is not recommended for security reasons.

terraform {
  required_version = ">= 0.12, < 0.13"

  backend "azurerm" {
    resource_group_name  = "tf-state-rg"
    storage_account_name = "tfstatemohit20200930"
    container_name       = "azure-devops"
    key                  = "terraform.tfstate"
  }
}
In the above code, resource_group_name is the name of the resource group, storage_account_name is the name of the storage account, container_name is the container that will hold the state file as a blob, and key is the name of the state file.
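If you do go the inline route mentioned earlier (again, not recommended), the azurerm backend block also accepts an access_key argument. A sketch with a placeholder value, not from the original post:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tf-state-rg"
    storage_account_name = "tfstatemohit20200930"
    container_name       = "azure-devops"
    key                  = "terraform.tfstate"
    # Placeholder only – never commit a real key to version control
    access_key           = "<storage-account-access-key>"
  }
}
```

Prefer the ARM_ACCESS_KEY environment variable, which keeps the secret out of the files you check in.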

As we have done previously, we have to run the terraform init command so that it can transfer the state files to the storage account. Once it completes, you can view the state file in the specified container:

State locking and encryption

Note that we have not specified any attributes to enable locking and encryption in our code. This is because Azure Storage blobs are automatically locked before any operation that writes state. This prevents concurrent state operations, which can cause corruption. If the state file is locked, you can see this in the portal, where the file is shown with a lock icon:

Also, encryption of data at rest is enabled by default, so we need not enable it ourselves.

Enable versioning of Blobs

Blob versioning is a relatively new feature of Azure Storage Accounts and is not yet covered by the Terraform provider. We can enable versioning from the Azure portal: go to the storage account -> Blob service -> Data protection -> and select the ‘Turn on versioning’ checkbox:
