An S3 backend for Terraform can be simulated locally using the free Community edition of LocalStack.
Running LocalStack
The following Docker command launches an instance of Localstack:
$ docker run --rm -it -d \
--name localstack-container \
-p 127.0.0.1:4566:4566 \
-p 127.0.0.1:4510-4559:4510-4559 \
-v /var/run/docker.sock:/var/run/docker.sock \
localstack/localstack
All services are exposed through the single edge endpoint http://localhost:4566.
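A quick way to confirm the container is up and the edge port is reachable is to query LocalStack's health endpoint (the path below applies to recent LocalStack releases), which returns a JSON summary of each service and its status:
$ curl http://localhost:4566/_localstack/health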
It’s worth creating a dedicated AWS CLI profile to use when calling LocalStack services. LocalStack doesn’t validate credentials, so any dummy values will do.
Here, we create a profile named localstack.
$ aws configure --profile localstack
AWS Access Key ID [None]: test
AWS Secret Access Key [None]: test
Default region name [None]: us-east-1
Default output format [None]: json
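For reference, the command above simply writes the profile to the standard AWS CLI configuration files; since LocalStack accepts any credentials, the test values are just placeholders:
# ~/.aws/credentials
[localstack]
aws_access_key_id = test
aws_secret_access_key = test

# ~/.aws/config
[profile localstack]
region = us-east-1
output = json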
Now we can run a simple test to create an S3 bucket using the AWS CLI.
$ aws --profile=localstack --endpoint-url=http://localhost:4566 s3 mb s3://my-test-bucket
Check the bucket has been created:
$ aws --profile=localstack --endpoint-url=http://localhost:4566 s3 ls
...
my-test-bucket
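If you like, the throwaway test bucket can be removed again now that the profile and endpoint are confirmed to work:
$ aws --profile=localstack --endpoint-url=http://localhost:4566 s3 rb s3://my-test-bucket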
Configuring Terraform to Use the LocalStack S3 Endpoint
Create a backend.tf to store our S3 backend configuration.
terraform {
  backend "s3" {
    bucket   = "my-local-s3-state-bucket"
    key      = "local/terraform.tfstate"
    region   = "us-east-1"
    profile  = "localstack"
    endpoint = "http://s3.localhost.localstack.cloud:4566"
  }
}
Note: According to the LocalStack documentation, the LocalStack S3 endpoint is http://s3.localhost.localstack.cloud:4566.
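If you are on a newer Terraform release (1.6 or later), the top-level endpoint argument of the S3 backend is deprecated in favor of a nested endpoints map, so an equivalent configuration would look something like this:
terraform {
  backend "s3" {
    bucket  = "my-local-s3-state-bucket"
    key     = "local/terraform.tfstate"
    region  = "us-east-1"
    profile = "localstack"
    endpoints = {
      s3 = "http://s3.localhost.localstack.cloud:4566"
    }
  }
}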
Before continuing, the state bucket referenced in the configuration above, my-local-s3-state-bucket, needs to be created.
$ aws --profile=localstack --endpoint-url=http://localhost:4566 s3 mb s3://my-local-s3-state-bucket
Now we can run terraform init:
[~/iac-sandpit]-> terraform init
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Testing That Terraform Uses the Backend Correctly
We can go a step further and configure the AWS provider to point at our LocalStack instance, then run a test that creates an S3 bucket through Terraform.
Create a main.tf containing the AWS provider configuration, and a resource block for creating our local test bucket, local-s3-test-bucket.
provider "aws" {
region = "us-east-1"
s3_use_path_style = false
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
profile = "localstack"
endpoints {
apigateway = "http://localhost:4566"
dynamodb = "http://localhost:4566"
iam = "http://localhost:4566"
kinesis = "http://localhost:4566"
lambda = "http://localhost:4566"
s3 = "http://s3.localhost.localstack.cloud:4566"
ses = "http://localhost:4566"
sns = "http://localhost:4566"
sqs = "http://localhost:4566"
sts = "http://localhost:4566"
}
}
resource "aws_s3_bucket" "local_s3_test_bucket" {
bucket = "local-s3-test-bucket"
}
Run terraform init.
[~/iac-sandpit]-> terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v6.5.0...
- Installed hashicorp/aws v6.5.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Run terraform plan.
[~/iac-sandpit]-> terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# aws_s3_bucket.local_s3_test_bucket will be created
+ resource "aws_s3_bucket" "local_s3_test_bucket" {
+ acceleration_status = (known after apply)
+ acl = (known after apply)
+ arn = (known after apply)
+ bucket = "local-s3-test-bucket"
+ bucket_domain_name = (known after apply)
+ bucket_prefix = (known after apply)
+ bucket_region = (known after apply)
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = false
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ object_lock_enabled = (known after apply)
+ policy = (known after apply)
+ region = "us-east-1"
+ request_payer = (known after apply)
+ tags_all = (known after apply)
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Finally, apply the plan with terraform apply -auto-approve.
[~/iac-sandpit]-> terraform apply -auto-approve
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# aws_s3_bucket.local_s3_test_bucket will be created
+ resource "aws_s3_bucket" "local_s3_test_bucket" {
+ acceleration_status = (known after apply)
+ acl = (known after apply)
+ arn = (known after apply)
+ bucket = "local-s3-test-bucket"
+ bucket_domain_name = (known after apply)
+ bucket_prefix = (known after apply)
+ bucket_region = (known after apply)
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = false
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ object_lock_enabled = (known after apply)
+ policy = (known after apply)
+ region = "us-east-1"
+ request_payer = (known after apply)
+ tags_all = (known after apply)
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
aws_s3_bucket.local_s3_test_bucket: Creating...
aws_s3_bucket.local_s3_test_bucket: Creation complete after 0s [id=local-s3-test-bucket]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Confirm that the bucket has been created in the LocalStack instance:
$ aws --profile=localstack --endpoint-url=http://localhost:4566 s3 ls
... local-s3-test-bucket
Finally, we can check that our Terraform state file was updated to account for the new resource.
Recalling that the state bucket is my-local-s3-state-bucket and the state file is stored under the key local/terraform.tfstate, we can view its contents by running:
$ aws --profile=localstack --endpoint-url=http://localhost:4566 s3 cp s3://my-local-s3-state-bucket/local/terraform.tfstate -
Sample excerpt from output:
{
"version": 4,
"terraform_version": "1.5.7",
"serial": 1,
"lineage": "*******-****-****-****-************",
"outputs": {},
"resources": [
{
"mode": "managed",
"type": "aws_s3_bucket",
"name": "local_s3_test_bucket",
"provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"acceleration_status": "",
...
...
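As a final sanity check, terraform state list should report aws_s3_bucket.local_s3_test_bucket, confirming that Terraform is reading its state from the LocalStack-backed S3 backend. When you are done experimenting, the test infrastructure and the container can be torn down (the container was started with --rm, so stopping it also removes it):
$ terraform state list
$ terraform destroy -auto-approve
$ docker stop localstack-container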