Bootstrapping your Terraform automation with Amazon CodeCatalyst
A walk-through of how to set up Terraform with Amazon CodeCatalyst from scratch to create a CI/CD pipeline for your infrastructure
Terraform is awesome for managing all your infrastructure, but when you have more than one developer trying to make changes to the infrastructure, things can get messy very quickly if there isn't a mechanism (a CI/CD pipeline) in place to manage it. Without one, making changes to any infrastructure requires coordination and communication, and the challenge scales quickly as more people get involved in making these changes. Imagine having to run around shouting "Hey Bob! Hey Jane! You done yet with that DB change? I need to add a new container build job!". As Jeff Bezos said:
"Good intentions never work, you need good mechanisms to make anything happen."
This tutorial will show you how to set up a CI/CD pipeline using Amazon CodeCatalyst and Terraform. The pipeline will utilize pull requests to submit, test, and review any changes requested to the infrastructure. We will cover the following topics in this tutorial:
- Using S3 as a backend for Terraform state files, with DynamoDB for locking, and encrypting the state file at rest with KMS
- Using CodeCatalyst to run our CI/CD pipelines to create and update all your infrastructure
Chicken and egg problem
Automating your infrastructure is a great idea, but you need infrastructure to automate your infrastructure. There are three approaches to doing this:
- Clicking in the console to set everything up, aka "ClickOps"
- Using a CLI to create the resources for you with scripts, "Procedural"
- Using Terraform with a local state file to bootstrap, then adding a backend configuration to store the state file remotely
We will be using the third option; have a look at the Stack Overflow discussion around approaches for more details on the trade-offs.
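To make the third option concrete, here is an illustrative sketch (not a file from this tutorial) of what changes between the two phases:

```terraform
# Phase 1: no backend block, so Terraform stores state locally in
# terraform.tfstate while it creates the S3 bucket and DynamoDB table.
terraform {
  required_version = "= 1.3.7"
}

# Phase 2: once those resources exist, a backend block is added and the
# local state is migrated into S3 with `terraform init -migrate-state`.
# (Shown commented out here since a file can only declare one backend.)
#
# terraform {
#   backend "s3" {
#     bucket = "your-state-bucket" # hypothetical name
#     key    = "terraform.tfstate"
#     region = "us-east-1"
#   }
# }
```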
Getting started
Let's get started setting this up! Make sure you are logged into your AWS, and CodeCatalyst accounts in the same browser.
Setting up a CodeCatalyst Space, Project, Repo, and Environment
Now, let's set up our CodeCatalyst Space and Project. Create a new space by clicking on `Create Space` on the CodeCatalyst Dashboard, add a name (we will use `Terraform CodeCatalyst`), add the AWS Account ID to link to for billing (`111122223333` is a placeholder; you can find your account ID in the top right of your AWS Console), and follow the prompts to link your AWS Account with CodeCatalyst.

Next, we need to create a new Project. Click on the `Create Project` button, select `Start from scratch`, and give your project a name; we will use `TerraformCodeCatalyst`.

Now we need to create a new repository for our code. Click `Code` in the left-side navigation menu, then on `Source repositories`, `Add repository`, and choose `Create repository`. Set a repository name (we will use `bootstrapping-terraform-automation-for-amazon-codecatalyst` in this tutorial), add a description, and select `Terraform` for the `.gitignore file`:

Lastly, we need to set up the AWS environment we will use for our workflow. In the left-side navigation menu, click on `CI/CD`, then on `Environments`, and then `Create environment`. Add the `Environment name` and `Description`, choose your AWS account from the dropdown under `AWS account connection`, and click `Create environment`.

Setting up a Dev Environment
To start working on our code, we need to set up a development environment; we will be using the built-in ones provided by CodeCatalyst. In the left navigation menu, click on `Dev Environments` under `Code`, then `Create Dev Environment`, and select `Cloud9` (this tutorial will use Cloud9). Select `Clone a repository`, choose `bootstrapping-terraform-automation-for-amazon-codecatalyst` in the dropdown for `Repository`, add an `Alias` of `TerraformBootstrap`, and then click the `Create` button.

It will take 1 - 2 minutes to provision your development environment, and once done, you will be presented with a welcome screen:

The version of Terraform may not be the latest; you can check which version is installed by running `terraform --version`. This tutorial uses version 1.3.7. To ensure you are using that version, run the following commands:
🚨 NB: If you are using a local development environment instead of one managed by CodeCatalyst, the architecture / operating system may be different, please see the downloads page to download the appropriate version of Terraform.
```shell
TF_VERSION=1.3.7
wget -O terraform.zip https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip
unzip terraform.zip
rm terraform.zip
sudo mv terraform /usr/bin/terraform
sudo chmod +x /usr/bin/terraform

# Confirm correct version
terraform --version
```
Lastly, we need to add AWS CLI credentials to our Dev Environment to access resources in our account. It is recommended not to use the root user; if you have not yet set up an IAM user, please do so now by following the instructions, and make sure to copy the `Access key ID` and `Secret access key` values. Then run `aws configure` in the terminal of your dev environment (you can leave the last two default values blank, or enter values you prefer):
```shell
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]:
Default output format [None]:
```
You can verify that access is set up correctly by running `aws sts get-caller-identity` in the terminal:
```shell
$ aws sts get-caller-identity
{
    "UserId": "AIDACKCEVSQ6C2EXAMPLE",
    "Account": "111122223333",
    "Arn": "arn:aws:iam::111122223333:user/JaneDoe"
}
```
Bootstrapping Terraform
Next, we need to add the required infrastructure to our AWS account using Terraform. We will be creating the following resources:
- IAM roles: Provide the roles for our workflow to assume in the account: one for the `main` branch, one for any pull requests (PRs).
- IAM policies: Set the boundaries of what the workflow IAM roles may do in our account: full admin access for the `main` branch allowing creation of infrastructure, `ReadOnly` for the PR branches to allow validating any changes.
- S3 bucket: An S3 bucket to store our Terraform state file in.
- S3 bucket versioning: Allows keeping backup copies of the Terraform state file each time it changes.
- DynamoDB table: Used by Terraform to create a lock while running; this prevents multiple CI jobs making changes when run in parallel.
- KMS encryption key: (Optional) While the state file is stored in S3, we want to encrypt it at rest using a KMS key. For this tutorial, we will use the pre-existing `aws/s3` key; if you prefer to use a different KMS key ($1/month/key), there is a section below describing how to make that change.
To create all of the required files, you can use the following commands to create the directories, and download the files directly from the sample repository. Run the commands in the root of your cloned git repo via the dev environment terminal:
```shell
cd bootstrapping-terraform-automation-for-amazon-codecatalyst
mkdir -p _bootstrap
cd _bootstrap
wget https://raw.githubusercontent.com/build-on-aws/bootstrapping-terraform-automation/main/_bootstrap/codecatalyst/main_branch_iam_role.tf
wget https://raw.githubusercontent.com/build-on-aws/bootstrapping-terraform-automation/main/_bootstrap/codecatalyst/pr_branch_iam_role.tf
wget https://raw.githubusercontent.com/build-on-aws/bootstrapping-terraform-automation/main/_bootstrap/codecatalyst/providers.tf
wget https://raw.githubusercontent.com/build-on-aws/bootstrapping-terraform-automation/main/_bootstrap/codecatalyst/state_file_resources.tf
wget https://raw.githubusercontent.com/build-on-aws/bootstrapping-terraform-automation/main/_bootstrap/codecatalyst/variables.tf
```
The downloaded `variables.tf` file has the following content:
```terraform
variable "aws_region" {
  default = "us-east-1"
}

variable "state_file_bucket_name" {
  default = "tf-state-files"
}

variable "state_file_lock_table_name" {
  default = "TerraformMainStateLock"
}

variable "kms_key_alias" {
  default = "Terraform-Main"
}
```
Once done, edit the `_bootstrap/variables.tf` file and update the `state_file_bucket_name` variable with your S3 bucket name for the state file (S3 bucket names are globally unique), optionally update the `state_file_lock_table_name` variable with your DynamoDB table name for locks, and optionally change `aws_region` if you want to use a different region.
We will now bootstrap our infrastructure (the body of each Terraform resource in the `terraform plan` output has been omitted using `...`):
```shell
terraform init
terraform plan
```
The output should look like:
```shell
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v4.53.0...
- Installed hashicorp/aws v4.53.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```
```shell
$ terraform plan

data.aws_caller_identity.current: Reading...
data.aws_iam_policy_document.pr_branch_assume_role_policy: Reading...
data.aws_iam_policy_document.main_branch_assume_role_policy: Reading...
data.aws_iam_policy_document.pr_branch_assume_role_policy: Read complete after 0s [id=2789987180]
data.aws_iam_policy_document.main_branch_assume_role_policy: Read complete after 0s [id=2789987180]
data.aws_caller_identity.current: Read complete after 0s [id=111122223333]
data.aws_iam_policy_document.pr_branch_lock_table_access: Reading...
data.aws_iam_policy_document.pr_branch_lock_table_access: Read complete after 0s [id=813239658]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_dynamodb_table.state_file_lock will be created
  ...
  # aws_iam_policy.lock_table_policy_pr_branch will be created
  ...
  # aws_iam_role.main_branch will be created
  ...
  # aws_iam_role.pr_branch will be created
  ...
  # aws_iam_role_policy_attachment.admin_policy_main_branch will be created
  ...
  # aws_iam_role_policy_attachment.lock_table_policy_pr_branch will be created
  ...
  # aws_iam_role_policy_attachment.readonly_policy_pr_branch will be created
  ...
  # aws_s3_bucket.state_file will be created
  ...
  # aws_s3_bucket_versioning.state_file will be created
  ...

Plan: 9 to add, 0 to change, 0 to destroy.
```
The `plan` command will output a list of resources to create, so you can see exactly what it will create. Once you are satisfied, run `terraform apply`, and confirm the infrastructure creation.
```shell
$ terraform apply

...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_dynamodb_table.state_file_lock: Creating...
aws_iam_role.pr_branch: Creating...
aws_s3_bucket.state_file: Creating...
aws_iam_role.main_branch: Creating...
aws_iam_policy.lock_table_policy_pr_branch: Creating...
aws_iam_policy.lock_table_policy_pr_branch: Creation complete after 1s [id=arn:aws:iam::111122223333:policy/pr_branch_lock_table_access_policy]
aws_iam_role.pr_branch: Creation complete after 1s [id=PR-Branch-Infrastructure]
aws_iam_role.main_branch: Creation complete after 1s [id=Main-Branch-Infrastructure]
aws_iam_role_policy_attachment.lock_table_policy_pr_branch: Creating...
aws_iam_role_policy_attachment.readonly_policy_pr_branch: Creating...
aws_iam_role_policy_attachment.admin_policy_main_branch: Creating...
aws_iam_role_policy_attachment.admin_policy_main_branch: Creation complete after 1s [id=Main-Branch-Infrastructure-20230203124909419300000001]
aws_iam_role_policy_attachment.lock_table_policy_pr_branch: Creation complete after 1s [id=PR-Branch-Infrastructure-20230203124909423900000002]
aws_iam_role_policy_attachment.readonly_policy_pr_branch: Creation complete after 1s [id=PR-Branch-Infrastructure-20230203124909427700000003]
aws_s3_bucket.state_file: Creation complete after 2s [id=tf-state-files]
aws_s3_bucket_versioning.state_file: Creating...
aws_s3_bucket_versioning.state_file: Creation complete after 1s [id=tf-state-files]
aws_dynamodb_table.state_file_lock: Creation complete after 7s [id=TerraformMainStateLock]

Apply complete! Resources: 9 added, 0 changed, 0 destroyed.
```
Next, we will move the state file we just created, with all the details of our infrastructure, to our S3 bucket. To do this, we need to configure a Terraform backend using S3. Create `_bootstrap/terraform.tf` with the following, and update the `bucket` and `region` values with your values:
```terraform
terraform {
  backend "s3" {
    bucket         = "tf-state-files"
    key            = "terraform-bootstrap-state-files/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "TerraformMainStateLock"
    kms_key_id     = "alias/s3" # Optionally change this to the custom KMS alias you created - "alias/terraform"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.33"
    }
  }

  required_version = "= 1.3.7"
}
```
It would be easier if we could reference the `region` and `state_file_bucket` variables in the Terraform backend configuration, but it does not allow any variable / local interpolation.
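One workaround (not used in this tutorial) is Terraform's partial backend configuration: leave the values out of the `backend "s3" {}` block and pass them in at init time, for example from a hypothetical `backend.hcl` file:

```terraform
# backend.hcl (illustrative): values that would otherwise be hard-coded
# in the backend block. Supply it at init time with:
#   terraform init -backend-config=backend.hcl
bucket         = "tf-state-files"
region         = "us-east-1"
dynamodb_table = "TerraformMainStateLock"
```

This keeps the bucket and region in one place per environment, at the cost of a slightly longer `init` command.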
To migrate the state file to S3, run `terraform init -migrate-state`; you should see the following output:
```shell
$ terraform init -migrate-state

Initializing the backend...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the
  newly configured "s3" backend. No existing state was found in the newly
  configured "s3" backend. Do you want to copy this state to the new "s3"
  backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value: yes

Releasing state lock. This may take a few moments...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v4.53.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```
We are now ready to set up our workflows, but first, let's ensure we commit our changes to our git repo. Run `git add .`, `git commit -m "Terraform bootstrapped"`, and `git push`:
```shell
$ git add .
$ git commit -m "Terraform bootstrapped"
[main b05ffa8] Terraform bootstrapped
 10 files changed, 494 insertions(+)
 create mode 100644 _bootstrap/.terraform.lock.hcl
 create mode 100644 _bootstrap/main_branch_iam_role.tf
 create mode 100644 _bootstrap/pr_branch_iam_role.tf
 create mode 100644 _bootstrap/providers.tf
 create mode 100644 _bootstrap/state_file_resources.tf
 create mode 100644 _bootstrap/terraform.tf
 create mode 100644 _bootstrap/variables.tf
$ git push
Enumerating objects: 14, done.
Counting objects: 100% (14/14), done.
Delta compression using up to 2 threads
Compressing objects: 100% (13/13), done.
Writing objects: 100% (13/13), 4.22 KiB | 2.11 MiB/s, done.
Total 13 (delta 2), reused 0 (delta 0), pack-reused 0
remote: Validating objects: 100%
To https://git.us-west-2.codecatalyst.aws/v1/Cobus-AWS/TerraformCodeCatalyst/bootstrapping-terraform-automation-for-amazon-codecatalyst
   fa1b9aa..b05ffa8  main -> main
```
Setting up workflows
In the previous section, we created two new IAM roles for our workflow: one for the `main` branch with permissions to create resources, and another for all pull requests with read-only permissions. We need to add these to our CodeCatalyst Space. In the top left of the page, click on the Space dropdown, and then click on your Space name. Navigate to the `AWS accounts` tab, click your AWS account number, and then on `Manage roles from the AWS Management Console`. This will open a new tab; select `Add an existing role you have created in IAM`, and select `Main-Branch-Infrastructure` from the dropdown. Click `Add role`:

This will take you to a new page with a green `Successfully added IAM role Main-Branch-Infrastructure.` banner at the top. Click on `Add IAM role`, and follow the same process to add the `PR-Branch-Infrastructure` role. Once done, you can close this window and go back to the CodeCatalyst one.
The base infrastructure is now in place to allow us to start using our workflow for any future changes to our infrastructure. We need to create a similar Terraform backend configuration for all the resources we will create using our workflow; as mentioned, we are intentionally keeping our bootstrapping infrastructure separate from the day-to-day infrastructure. In the root of the repo, create `terraform.tf` with the following content. Take note that the `key` for the bucket is different from what we used for the bootstrapping infrastructure, and as before, replace the `bucket`, `region`, `dynamodb_table`, and `kms_key_id` values with your own:
```terraform
terraform {
  backend "s3" {
    bucket         = "tf-state-files"
    key            = "terraform-state-file/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "TerraformMainStateLock"
    kms_key_id     = "alias/s3" # Optionally change this to the custom KMS alias you created - "alias/terraform"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.33"
    }
  }

  required_version = "= 1.3.7"
}
```
The `region` set in the above block indicates in which region the S3 bucket was created, not where we will create our resources. We also need to configure the `aws` provider, and set the `region` to use. We will use a variable for this; you could also hard-code it, but it is more manageable to keep all the variables in a single `variables.tf` file. Create `providers.tf` with the following content:
```terraform
# Configuring the AWS provider
provider "aws" {
  region = var.aws_region
}
```
And the `variables.tf` file with the following (you can change the region here if you want to create resources in a different one):
```terraform
variable "aws_region" {
  default = "us-east-1"
}
```
Now we are ready to create our workflow file. First, we need to create the workflow directory and file:
```shell
cd .. # go to the root folder of the repo
mkdir -p .codecatalyst/workflows
touch .codecatalyst/workflows/main_branch.yml
```
Open `.codecatalyst/workflows/main_branch.yml` in your IDE, and add the following. Remember to replace the placeholder AWS account ID `111122223333` with the value of your account, and the IAM role names if you changed them (you can choose between the standard CodeCatalyst workflow, or using GitHub Actions with CodeCatalyst):
```yaml
# Adaptation of the https://developer.hashicorp.com/terraform/tutorials/automation/github-actions workflow
Name: TerraformMainBranch
SchemaVersion: "1.0"

Triggers:
  - Type: Push
    Branches:
      - main

Actions:
  Terraform-Main-Branch-Apply:
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Environment:
      Connections:
        - Role: Main-Branch-Infrastructure
          Name: "111122223333"
      Name: TerraformBootstrap
    Configuration:
      Steps:
        - Run: export TF_VERSION=1.3.7 && wget -O terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
        - Run: unzip terraform.zip && rm terraform.zip && mv terraform /usr/bin/terraform && chmod +x /usr/bin/terraform
        - Run: terraform fmt -check -no-color
        - Run: terraform init -no-color
        - Run: terraform validate -no-color
        - Run: terraform plan -no-color -input=false
        - Run: terraform apply -auto-approve -no-color -input=false
    Compute:
      Type: EC2
```
Let's try out our new workflow! First, we need to stage, commit, and push our changes directly to the `main` branch; this is needed as only workflows committed to the repo will be run by CodeCatalyst. Use the following commands:
```shell
git add . -A
git commit -m "Adding main branch workflow"
git push
```
Output:
```shell
$ git add .
$ git commit -m "Adding main branch workflow"
[main 1b88c0f] Adding main branch workflow
 2 files changed, 54 insertions(+)
 create mode 100644 .codecatalyst/workflows/main_branch.yml
 create mode 100644 terraform.tf
$ git push
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 2 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (6/6), 1.12 KiB | 1.12 MiB/s, done.
Total 6 (delta 0), reused 0 (delta 0), pack-reused 0
remote: Validating objects: 100%
To https://git.us-west-2.codecatalyst.aws/v1/Cobus-AWS/TerraformCodeCatalyst/bootstrapping-terraform-automation-for-amazon-codecatalyst
   b05ffa8..1b88c0f  main -> main
```
In your browser, navigate to the `CI/CD` -> `Workflows` page. You should see the workflow running:

If you click on `Recent runs` to expand it, you will see the details of the currently running job. Click on the job ID (`Run-XXXXX`) to view the different stages of the build:

Pull Request Workflow
Now that we have our `main` branch workflow done, it is time to set up the pull request one. The workflow will be very similar to the `main` branch one, with the following differences:

- A different workflow name: `TerraformPRBranch`
- We use the `PR-Branch-Infrastructure` IAM role to ensure we cannot make any infrastructure changes in the PR workflow
- We remove the `terraform apply` step
- The trigger for the build is a PR to the `main` branch being opened or updated (`REVISION`)
Create a new file for the PR workflow as `.codecatalyst/workflows/pr_branch.yml`, and add the following (replacing the placeholder AWS account ID of `111122223333`, and the IAM role name if you changed it). You can choose between the standard CodeCatalyst workflow, or using GitHub Actions with CodeCatalyst:
```yaml
Name: TerraformPRBranch
SchemaVersion: "1.0"

Triggers:
  - Type: PULLREQUEST
    Events:
      - OPEN
      - REVISION

Actions:
  Terraform-PR-Branch-Plan:
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Environment:
      Connections:
        - Role: PR-Branch-Infrastructure
          Name: "111122223333"
      Name: TerraformBootstrap
    Configuration:
      Steps:
        - Run: export TF_VERSION=1.3.7 && wget -O terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
        - Run: unzip terraform.zip && rm terraform.zip && mv terraform /usr/bin/terraform && chmod +x /usr/bin/terraform
        - Run: terraform fmt -check -no-color
        - Run: terraform init -no-color
        - Run: terraform validate -no-color
        - Run: terraform plan -no-color -input=false
    Compute:
      Type: EC2
```
This workflow needs to be added to the `main` branch before it will trigger for a new PR, so let's do that now:
```shell
git add .codecatalyst/workflows/pr_branch.yml
git commit -m "Adding PR branch workflow"
git push
```
This will trigger the `main` branch workflow as we added a change, but without adding any additional Terraform resources, it will not make any changes:

We will now add an AWS resource via Terraform via a PR. First, we need to create a new branch:
```shell
git checkout -b test-pr-workflow
```
Next, create a new file in the root of the project called `vpc.tf`; we will create a VPC that has three public subnets and the required routing tables. Add the following content to the file:
```terraform
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "CodeCatalyst-Terraform"
  cidr = "10.0.0.0/16"

  azs            = ["${var.aws_region}a", "${var.aws_region}b", "${var.aws_region}c"]
  public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = false
  enable_vpn_gateway = false
}
```
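If you also want the workflow logs to surface the created VPC's details, you could optionally add an `outputs.tf` (not part of this tutorial); the `terraform-aws-modules/vpc/aws` module exposes outputs such as `vpc_id` and `public_subnets`:

```terraform
# Optional sketch: expose module outputs so `terraform apply` prints them
output "vpc_id" {
  value = module.vpc.vpc_id
}

output "public_subnet_ids" {
  value = module.vpc.public_subnets
}
```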
We need to commit the change, and push the branch using `--set-upstream origin test-pr-workflow`, as the remote branch does not yet exist:
```shell
git add vpc.tf
git commit -m "Adding a VPC with only public subnets"
git push --set-upstream origin test-pr-workflow
```
The output shows that the remote branch has been created, and we pushed changes from our local branch to that one:
```shell
$ git push --set-upstream origin test-pr-workflow
Enumerating objects: 4, done.
Counting objects: 100% (4/4), done.
Delta compression using up to 2 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 469 bytes | 469.00 KiB/s, done.
Total 3 (delta 1), reused 0 (delta 0), pack-reused 0
remote: Validating objects: 100%
To https://git.us-west-2.codecatalyst.aws/v1/Cobus-AWS/TerraformCodeCatalyst/bootstrapping-terraform-automation-for-amazon-codecatalyst
 * [new branch]      test-pr-workflow -> test-pr-workflow
branch 'test-pr-workflow' set up to track 'origin/test-pr-workflow'.
```
This will not yet trigger a PR branch workflow, as we haven't opened the pull request. In CodeCatalyst, navigate to `Code`, then `Pull requests`, and click on `Create pull request`. Select `test-pr-workflow` as the `Source branch`, `main` as the `Destination branch`, and add a `Pull request title` and `Pull request description`. You can also preview the changes the PR will make at the bottom of the screen:

Click `Create`, then navigate to `CI/CD` -> `Workflows`, and select `All branches` from the dropdown at the top of the `Workflows` menu. After selecting `All branches`, you will see four workflows: the `TerraformMainBranch` and `TerraformPRBranch` ones, with a copy for each of the two branches `main` and `test-pr-workflow`. The `TerraformMainBranch` workflow will show an error of `Workflow is inactive`, which is expected, as we limit that workflow to only run on our `main` branch. Click on `Recent runs` under the `TerraformPRBranch` workflow for the `test-pr-workflow` branch, and then on the `Terraform-PR-Branch-Plan` job to see the details.

By clicking on the `Terraform Plan` step, you will be able to see the proposed infrastructure changes listed in the output, and can inspect exactly which changes this pull request will make to your infrastructure. In your standard day-to-day operations, you would now go back to the pull request to decide what action to take: if the proposed changes have been reviewed and approved, you can merge the pull request, or you can start a conversation on the PR to address any issues or concerns.

We will now merge this request to roll out this infrastructure in our account. Navigate to `Code` -> `Pull requests`, click on the `Title` or `ID` of the PR, and then the `Merge` button. You are presented with a choice between a `Fast forward merge` and a `Squash and merge` option. `Fast forward merge` will take all the commits on the branch and add them sequentially to the `main` branch as if they were done there. `Squash and merge` will combine all the commits on the `test-pr-workflow` branch into a single commit before merging that single commit to `main`. Which one you use will depend on your development approach; for this tutorial, we will use the `Fast forward merge` one. You can also select the option to `Delete the source branch after merging this pull request. Source branch: test-pr-workflow`; this will help keep your repository clean of branches that are no longer used.

Click on `Merge`, and navigate to `CI/CD` -> `Workflows` to see the new VPC being created. Click on the currently running `TerraformMainBranch` workflow's `Recent runs`, then on the job ID, and then on the 2nd step to see the progress in the right-hand pane. Once the job completes, we can verify that the VPC was created by navigating to the VPC section of the AWS Console and clicking on the `VPC ID` for the VPC with the name `CodeCatalyst-Terraform`. You should see something similar:

Clean up
We have now reached the end of this tutorial. You can either keep the current setup and expand on it, or delete all the resources created if you are not planning to continue. If you are planning to manage multiple AWS accounts, we recommend reading the Automating multiple environments with Terraform tutorial; it follows directly from this one, and you can leave the resources created in place.
To remove all the resources we created in this project, follow these steps in your dev environment:
1. Make sure you are on the `main` branch by running `git checkout main` and `git pull` to ensure you have the latest changes, then run `terraform destroy` and type `yes` to confirm; this will remove the VPC we created.
2. To delete all the bootstrapping resources, first change into the directory by running `cd _bootstrap`. Before we can delete everything, we need to update our S3 state file bucket: change the lifecycle policy to allow the deletion, and add `force_destroy = true` to also delete all the objects in the bucket. Edit `_bootstrap/state_file_resources.tf`, and replace the first `aws_s3_bucket` resource with:

```terraform
# Bucket used to store our state file
resource "aws_s3_bucket" "state_file" {
  bucket        = var.state_file_bucket_name
  force_destroy = true

  lifecycle {
    prevent_destroy = false
  }
}
```

3. Run `terraform apply`, and accept the changes.
4. Now run `terraform destroy`, and accept the changes. This will result in two errors, since we are deleting the S3 bucket where Terraform tries to store the updated state file, as well as the DynamoDB table Terraform uses to store the lock that prevents parallel runs. The output will look similar to this:

```shell
╷
│ Error: Failed to save state
│
│ Error saving state: failed to upload state: NoSuchBucket: The specified bucket does not exist
│       status code: 404, request id: VJDXS21J9YFQ2J5J, host id: aG3pXy1Kfx2jncT1js0iDL5d+5j/rf3mNDVNzRp7aYpa3bCkAIKKJDh8HJQymS2prphHrazmjmo=
╵
╷
│ Error: Failed to persist state to backend
│
│ The error shown above has prevented Terraform from writing the updated state to the configured backend. To allow for recovery, the state has been written to the file "errored.tfstate" in the current working directory.
│
│ Running "terraform apply" again at this point will create a forked state, making it harder to recover.
│
│ To retry writing this state, use the following command:
│     terraform state push errored.tfstate
│
╵
╷
│ Error: Error releasing the state lock
│
│ Error message: failed to retrieve lock info: ResourceNotFoundException: Requested resource not found
│
│ Terraform acquires a lock when accessing your state to prevent others
│ running Terraform to potentially modify the state at the same time. An
│ error occurred while releasing this lock. This could mean that the lock
│ did or did not release properly. If the lock didn't release properly,
│ Terraform may not be able to run future commands since it'll appear as if
│ the lock is held.
│
│ In this scenario, please call the "force-unlock" command to unlock the
│ state manually. This is a very dangerous operation since if it is done
│ erroneously it could result in two people modifying state at the same time.
│ Only call this command if you're certain that the unlock above failed and
│ that no one else is holding a lock.
╵
```

5. Lastly, we need to delete the project we created in CodeCatalyst. In the left-hand navigation, go to `Project settings`, click on `Delete project`, and follow the instructions to delete the project.
Conclusion
Congratulations! You've now bootstrapped Terraform with CodeCatalyst, and can deploy any infrastructure changes using a pull request workflow. If you enjoyed this tutorial, found any issues, or have feedback for us, please send it our way!