
First impressions of Terraform IaC

Terraform lets us manage different pieces of infrastructure centrally as infrastructure as code (IaC), such as DNS records and Argo Tunnel on Cloudflare, as well as instances on AWS and GCP, but the import process is quite painful. This post records how I migrated my existing infrastructure to Terraform, covering installation, providers and state management, importing Cloudflare resources, remote state and CI workflow practices, as well as day-to-day operations and lessons learned.

Terraform is an IaC tool. IaC stands for ‘Infrastructure as Code’: we describe our infrastructure as declarative code, then run terraform apply to deploy it. The same configuration always produces the same infrastructure (NixOS users rejoice).

Why use Terraform

Traditional infrastructure management mostly relies on manual work and the dashboards provided by various cloud vendors, which brings the following pain points:

| Pain point | Explanation |
| --- | --- |
| Hard to reproduce | Configuring things by clicking around in a dashboard makes it easy to miss or misconfigure something, and hard to reproduce later |
| Environment drift | Manual changes gradually cause production and test environments to diverge, so in extreme cases testing is fine but production falls over |
| Hard to scale | Adding a new environment requires repeating lots of manual steps, which is time-consuming and error-prone |
| Hard to audit | There is no change history, so when things go wrong it is harder to pass the buck |
| Hard to collaborate | Infrastructure ends up controlled by a small number of ‘people who know’, and anyone who wants to change something has to go through them, which is inefficient |

To solve these problems, the concept of ‘Infrastructure as Code’ 1 was introduced, and Terraform is one of the best-known solutions.

Take a realistic use case: suppose that every year you get a new GCP account with $300 of trial credit. It is cheap, but each year you have to go back into the GCP console and recreate your machines. With Terraform, redeploying the same setup is just a matter of swapping in the new account’s API token and running terraform apply. Within a few minutes, you can recreate exactly the same machines, VPCs, object storage buckets, firewall rules, and so on as in your previous account.

Another example is managing and migrating Cloudflare DNS and Tunnel. You only need to copy the old Tunnel’s Ingress Rule to the new Tunnel’s Ingress Rule. Even if you later migrate to other providers such as AliDNS or Route 53, you can still copy the data across as-is 2.

This becomes especially useful when working with other people. Combined with Git, every change leaves a trace, merge conflicts are far less worrying, PRs can automatically generate previews of changes, and if something goes wrong you can roll back to the previous version immediately. None of this is really possible with the traditional approach of people directly operating a provider’s dashboard by hand.

Installing Terraform

Terraform is written in Go, and the compiled output is naturally a single executable file, so installation is very straightforward. On Windows, you can install it directly with Winget:

winget install Hashicorp.Terraform

If you do not want to use Winget, you can also use Scoop or another package manager, or place the precompiled binary into a directory on your $PATH to complete the installation.

If you are on Linux, just use your distribution’s package manager. If you install by placing the binary into a directory on $PATH, remember to run sudo chmod +x terraform to make the file executable.

Two basic Terraform concepts

Providers

As mentioned earlier, Terraform is an Infrastructure as Code tool and, as such, is not tied to any specific platform. Instead, it connects to different platforms through providers. To see which providers are available, you can browse the Terraform Registry.

Terraform has a rich provider ecosystem

State management

Terraform stores the state information from each infrastructure change operation in a state file. By default, this is saved as the terraform.tfstate file in the current working directory 3, though you can also configure a different backend such as S3 or Postgres. Every time you run terraform apply, Terraform compares the state declared in the current configuration files with the existing state file, calculates the differences, works out the correct order of operations, and then tells the Provider to apply those changes.
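As a minimal illustration of that diffing (record values here are made up): after a successful apply, editing a single attribute in a resource block means the next terraform plan compares the new declaration against the state file and proposes an in-place update for just that attribute.

```hcl
resource "cloudflare_dns_record" "www" {
  zone_id = var.cloudflare_zone_id_example_com
  name    = "www.example.com"
  type    = "A"
  content = "203.0.113.10" # changing only this later yields an in-place update in the plan
  proxied = true
  ttl     = 1
}
```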

Terraform’s resource import problem

Importing existing resources has long been one of Terraform’s most criticised pain points. HashiCorp seems to have stuck to a stubborn and frankly silly idea for years: all your infrastructure should have been created with Terraform from the very beginning, so there is no such thing as a resource import problem.

| Time | Version | Progress | Problem |
| --- | --- | --- | --- |
| 2014–2022 | v0.x–v1.4 | Only terraform import, one resource at a time, and it did not generate any configuration | After importing, you still had to hand-write the HCL 4 |
| 2023.06 | v1.5 | Introduced the import block and the -generate-config-out parameter, so it could generate configuration | But you still had to write the blocks one by one, and still had to provide the existing resource IDs yourself |
| 2024.01 | v1.7 | The import block gained support for for_each | Batch import at last, but you still had to obtain the IDs yourself |
| Second half of 2024 | v1.12 | Introduced terraform query and list blocks, finally enabling automatic resource discovery | But this feature has to be implemented by each Provider, and many Providers simply have not caught up |

The community has been complaining about this for years, yet the question ‘I already have a pile of existing resources — how do I import them into Terraform?’ was never taken seriously. As an Infrastructure as Code tool, Terraform’s design philosophy is declarative configuration and idempotence 5. HashiCorp firmly believes that ‘the state declared in code is the only source of truth, so resources should be created from scratch with Terraform’. But in reality, most companies already have a large amount of legacy infrastructure; having resources first and code later is the norm. HashiCorp acknowledged this contradiction very late; the terraform query in v1.12 was really the first time the official tooling took the issue seriously — though as for Provider ecosystem support… well, it has been rough.

Terraform file structure

Terraform’s file structure is very simple. When it runs, the main program blindly reads all .tf files in the working directory. As long as the required information is present, you can name the files whatever you like. For example, my file structure looks like this:

 tree -a -I .git
.
├── .editorconfig
├── .github
│   ├── dependabot.yml
│   └── workflows
│       ├── terraform-apply.yml
│       ├── terraform-plan.yml
│       └── your-fork.yml
├── .gitignore
├── .terraform.lock.hcl
├── cf_dns_zones.tf
├── cf_tunnel.tf
├── dns_example_com.tf
├── dns_example_net.tf
├── dns_example_top.tf
├── dns_example_cn.tf
├── main.tf                 # Basic configuration (terraform block)
├── moved.tf                # Moved resources
├── provider.tf             # Configuration for each Provider
├── README.md
├── rename_resources.ps1
└── variables.tf            # Custom variables

main.tf is used to store the basic configuration:

terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 5"
    }

    tencentcloud = {
      source  = "tencentcloudstack/tencentcloud"
      version = ">= 1.81.43"
    }
  }
}

provider.tf is used to store the configuration for each Provider:

provider "cloudflare" {}
provider "tencentcloud" {}

They are left empty here because we can pass credentials through environment variables instead of writing them directly into the configuration file. For example, the Cloudflare provider accepts CLOUDFLARE_API_TOKEN as an alternative to the api_token argument.
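If you prefer to be explicit rather than rely on the provider’s own environment-variable lookup, the same can be expressed with a sensitive input variable (the variable name here is my own choice, not from the provider):

```hcl
variable "cloudflare_api_token" {
  type      = string
  sensitive = true # redacted in plan/apply output
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}
```

The token can then be supplied as TF_VAR_cloudflare_api_token, so it still never appears in the .tf files.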

variables.tf is used to declare custom variables:

variable "cloudflare_zone_id_example_com" {
  description = "Cloudflare zone ID for example.com"
  type        = string
}

variable "cloudflare_zone_id_example_top" {
  description = "Cloudflare zone ID for example.top"
  type        = string
}

These variables can be passed in through environment variables prefixed with TF_VAR_. Terraform will also automatically read variables from the terraform.tfvars file. The main purpose of variables is to be referenced from other configuration files, for example:

resource "cloudflare_dns_record" "example_cname" {
  content = "${cloudflare_zero_trust_tunnel_cloudflared.Production_Tunnel.id}.cfargotunnel.com"
  name    = "example.example.com"
  proxied = true
  tags    = []
  ttl     = 1
  type    = "CNAME"
  zone_id = var.cloudflare_zone_id_example_com  # the cloudflare_zone_id_example_com variable is referenced here
  settings = {
    flatten_cname = false
  }
}
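To make this concrete, a terraform.tfvars supplying those zone IDs might look like this (the values below are placeholders):

```hcl
# terraform.tfvars — read automatically by Terraform. Keep it out of Git
# if it contains anything sensitive. Values are placeholders.
cloudflare_zone_id_example_com = "0123456789abcdef0123456789abcdef"
cloudflare_zone_id_example_top = "fedcba9876543210fedcba9876543210"
```

Equivalently, you can export TF_VAR_cloudflare_zone_id_example_com and friends in the shell and skip the file entirely.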

With these basic settings in place, we can run terraform init to initialise the Terraform environment and lock dependency versions. After that, the rest is just declaring our resources.

Importing existing resources into Terraform

As mentioned earlier, Terraform uses state management. This is great, but it also creates a problem during initialisation: a fresh Terraform setup naturally starts with an empty state file.

At this point, what we need to do is import the state of the existing resources from the cloud provider into Terraform, so that Terraform can take over and manage our infrastructure seamlessly. As you have probably noticed from the earlier rant, resource import in Terraform is a sore point. Taking DNS records hosted on Cloudflare as an example, if the Provider supports it, you can use terraform query, introduced in Terraform 1.12. In most cases, though, Providers have not implemented this new feature, and Cloudflare is one of those that does not support it.

Fortunately, even if HashiCorp has not taken the problem seriously, companies that use Terraform heavily to manage infrastructure have come up with their own solutions. Cloudflare maintains an import tool called cf-terraforming, which saves a lot of manual effort. First, install cf-terraforming. The tool is written in Go, so you either need a Go environment to install it, or you can place the official precompiled binary into a directory on $PATH and make it executable.

go install github.com/cloudflare/cf-terraforming/cmd/cf-terraforming@latest

The tool is fairly straightforward to use. First, you need the following environment variables:

# If you use an API Token
export CLOUDFLARE_API_TOKEN='Hzsq3Vub-7Y-hSTlAaLH3Jq_YfTUOCcgf22_Fs-j'

# If you use an API Key
export CLOUDFLARE_EMAIL='user@example.com'
export CLOUDFLARE_API_KEY='1150bed3f45247b99f7db9696fffa17cbx9'

# Specify the zone ID of the domain to import; this is not needed for account-level resources (such as Cloudflare Tunnel)
export CLOUDFLARE_ZONE_ID='81b06ss3228f488fh84e5e993c2dc17'
💡 Tip

The commands here assume you are using Bash. If your shell is not compatible with Bash syntax, you will need to adjust them. For example, in PowerShell on Windows, the syntax for setting an environment variable is:

$env:CLOUDFLARE_API_TOKEN='Hzsq3Vub-7Y-hSTlAaLH3Jq_YfTUOCcgf22_Fs-j'

Usually, you only need to set CLOUDFLARE_API_TOKEN and CLOUDFLARE_ZONE_ID. When creating the API token in the console, remember to grant it the necessary permissions. In this case, we are only importing DNS records, so giving it permission to edit zone DNS is enough.

Grant the permissions required to operate on the resources

Right, all the preparation is done. Now we can start generating the configuration files.

First, import the domain configuration in the account, namely cloudflare_zone:

cf-terraforming generate \
  --key $CLOUDFLARE_API_KEY \
  --resource-type "cloudflare_zone" > zone.tf

This step generates a file called zone.tf in the current directory, containing content in the following format:

resource "cloudflare_zone" "REDACTED" {
  name                = "REDACTED"
  paused              = false
  type                = "full"
  vanity_name_servers = []
  account = {
    id   = "REDACTED"
    name = "REDACTED"
  }
}

resource "cloudflare_zone" "REDACTED" {
  name                = "REDACTED"
  paused              = false
  type                = "full"
  vanity_name_servers = []
  account = {
    id   = "REDACTED"
    name = "REDACTED"
  }
}

At this point, the domain resources have been imported, but their internal configuration has not. Next, import the DNS records under the domain:

cf-terraforming generate \
  --zone $CLOUDFLARE_ZONE_ID \
  --key $CLOUDFLARE_API_KEY \
  --resource-type "cloudflare_dns_record" >> dns.tf

This step generates a configuration file called dns.tf in the current directory, containing content in the following format:

resource "cloudflare_dns_record" "terraform_managed_resource_5deb14xxxxxb629bf123xxxxxxxc8f_0" {
  content  = "67.24.33.108"
  name     = "example.example.com"
  proxied  = true
  tags     = []
  ttl      = 1
  type     = "A"
  zone_id  = "81c7f2de8dfxxxxxx52629xxxxxxfc"
  settings = {}
}

resource "cloudflare_dns_record" "terraform_managed_resource_89xxxxx0bf9cxxxxxx9a_1" {
  content  = "35.27.108.33"
  name     = "terraform.example.com"
  proxied  = true
  tags     = []
  ttl      = 1
  type     = "A"
  zone_id  = "8xxxxxx7644e428526xxxxxx"
  settings = {}
}

If you need to import multiple domains, just set the CLOUDFLARE_ZONE_ID environment variable separately each time and rerun the command.

The generated configuration file can be used directly — it is the Terraform configuration we need from here on. However, at this point we have only generated the configuration; Terraform’s state is still empty. If you run terraform apply now, Terraform will blindly treat all the declarations we just generated as new resources and throw a pile of ‘already exists’ errors. So next, we need to import the existing resources into Terraform’s terraform.tfstate state.

Terraform introduced the import block in version 1.5, which is much more modern than typing import commands one line at a time. The process is to generate a .tf file containing import blocks; the next time you run terraform apply, Terraform will perform the import for you automatically.

Generate the import blocks for cloudflare_zone:

cf-terraforming import \
  --resource-type "cloudflare_zone" \
  --modern-import-block \
  --key $CLOUDFLARE_API_KEY \
  --zone $CLOUDFLARE_ZONE_ID >> import.tf

Generate the import blocks for cloudflare_dns_record:

cf-terraforming import \
  --resource-type "cloudflare_dns_record" \
  --modern-import-block \
  --key $CLOUDFLARE_API_KEY \
  --zone $CLOUDFLARE_ZONE_ID >> import.tf

This step generates import.tf in the current directory. It contains the import information needed to tell Terraform which cloud-provider resource ID corresponds to each resource block generated in the previous step. This ID is the code the cloud provider uses internally to identify a resource. You would not normally see it in the control panel; you only get it by requesting it through the API. Terraform needs this ID during import to confirm that the local definition matches the cloud resource, ensuring strict idempotence.
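For reference, each entry in import.tf simply pairs a resource address with that provider-side ID. A hypothetical entry (the resource name and ID format are illustrative; for cloudflare_dns_record the ID combines the zone ID and the record ID, as the apply output later shows):

```hcl
import {
  # Resource address from the generated configuration file
  to = cloudflare_dns_record.terraform_managed_resource_example_0
  # Provider-side identifier of the existing cloud resource
  id = "<zone_id>/<record_id>"
}
```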

Right, now let us run terraform plan:

$ terraform plan
cloudflare_dns_record.minio_a: Refreshing state... [id=xxxxxxxxxxx53]
cloudflare_zero_trust_tunnel_cloudflared_config.raspberrypi: Refreshing state...
......

Terraform will perform the following actions:

  # cloudflare_dns_record.terraform_managed_resource_0 will be imported
    resource "cloudflare_dns_record" "terraform_managed_resource_REDACTED_0" {
        content     = "67.24.33.108"
        created_on  = "2026-04-08T10:18:12Z"
        id          = "5deb14c21xxxxxxx20f1c8f"
        meta        = jsonencode({})
        modified_on = "2026-04-08T10:18:12Z"
        name        = "example.example.com"
        proxiable   = true
        proxied     = true
        settings    = {}
        tags        = []
        ttl         = 1
        type        = "A"
        zone_id     = "REDACTED"
    }

  # cloudflare_dns_record.terraform_managed_resource_1 will be imported
    resource "cloudflare_dns_record" "terraform_managed_resource_89c149exxxxxxxxxxxba13xxxxxa_1" {
        content     = "35.27.108.33"
        created_on  = "2026-04-08T10:17:54Z"
        id          = "89cxxxxxxxxxxxxxxxxxx09a"
        meta        = jsonencode({})
        modified_on = "2026-04-08T10:17:54Z"
        name        = "terraform.example.com"
        proxiable   = true
        proxied     = true
        settings    = {}
        tags        = []
        ttl         = 1
        type        = "A"
        zone_id     = "81xxxxxxxxxxxxxxxxxxxxxfc"
    }

Plan: 2 to import, 0 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly
these actions if you run "terraform apply" now.

If the numbers for Add, Change, and Destroy are all 0, then the import has gone correctly. Just run terraform apply --auto-approve, and the resources will be imported.

$ terraform apply --auto-approve
cloudflare_dns_record.push_a: Refreshing state... [id=REDACTED]

Terraform will perform the following actions:

  # cloudflare_dns_record.terraform_managed_resource_REDACTED_0 will be imported
    resource "cloudflare_dns_record" "terraform_managed_resource_REDACTED_0" {
        content     = "67.24.33.108"
        created_on  = "2026-04-08T10:18:12Z"
        id          = "REDACTED"
        meta        = jsonencode({})
        modified_on = "2026-04-08T10:18:12Z"
        name        = "example.example.com"
        proxiable   = true
        proxied     = true
        settings    = {}
        tags        = []
        ttl         = 1
        type        = "A"
        zone_id     = "REDACTED"
    }

  # cloudflare_dns_record.terraform_managed_resource_REDACTED_1 will be imported
    resource "cloudflare_dns_record" "terraform_managed_resource_REDACTED_1" {
        content     = "35.27.108.33"
        created_on  = "2026-04-08T10:17:54Z"
        id          = "REDACTED"
        meta        = jsonencode({})
        modified_on = "2026-04-08T10:17:54Z"
        name        = "terraform.example.com"
        proxiable   = true
        proxied     = true
        settings    = {}
        tags        = []
        ttl         = 1
        type        = "A"
        zone_id     = "REDACTED"
    }

Plan: 2 to import, 0 to add, 0 to change, 0 to destroy.
cloudflare_dns_record.terraform_managed_resource_REDACTED_1: Importing... [id=REDACTED/REDACTED]
cloudflare_dns_record.terraform_managed_resource_REDACTED_1: Import complete [id=REDACTED/REDACTED]
cloudflare_dns_record.terraform_managed_resource_REDACTED_0: Importing... [id=REDACTED/REDACTED]
cloudflare_dns_record.terraform_managed_resource_REDACTED_0: Import complete [id=REDACTED/REDACTED]

Apply complete! Resources: 2 imported, 0 added, 0 changed, 0 destroyed.

State storage and continuous integration

One of IaC’s core strengths is that it makes Git-based collaboration and CI easy, but before that there is another problem to solve: where exactly should terraform.tfstate live? Nobody wants to painstakingly import state for each environment only to lose it every time they switch.

Terraform currently supports the following state backends:

  • local
  • remote
  • azurerm
  • consul
  • cos
  • gcs
  • http
  • kubernetes
  • oci
  • oss
  • pg
  • s3

If you do not have any special requirements, you can choose s3 as I do. Cloudflare R2 speaks the S3 API and has a free tier, after all, so you might as well use it.

terraform {
  backend "s3" {
    bucket = "terraform"
    key    = "terraform.tfstate"
    region = "auto"
    endpoints = {
      s3 = "https://REDACTED.r2.cloudflarestorage.com"
    }

    # R2 does not need this AWS validation
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    use_path_style              = true
  }
}

For the S3 backend, it is recommended to store credentials in the two environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY:

export AWS_ACCESS_KEY_ID='REDACTED'
export AWS_SECRET_ACCESS_KEY='REDACTED'

Once configured, run terraform init -migrate-state and the state will be stored successfully in the cloud. After that, no matter where you edit the configuration or run terraform apply, you will not need to worry about Terraform state getting out of sync.

Next comes the GitHub CI configuration. It is actually very simple: on each push to main, just run terraform init and terraform apply. Here is my .github/workflows/terraform-apply.yml:

name: "Terraform Apply"

on:
  push:
    branches:
      - main

env:
  TF_IN_AUTOMATION: "true"
  CLOUDFLARE_API_TOKEN: "${{ secrets.CLOUDFLARE_API_TOKEN }}"
  AWS_ACCESS_KEY_ID: "${{ secrets.AWS_ACCESS_KEY_ID }}"
  AWS_SECRET_ACCESS_KEY: "${{ secrets.AWS_SECRET_ACCESS_KEY }}"
  TF_VAR_cloudflare_zone_id_example_com: "${{ vars.TF_VAR_CLOUDFLARE_ZONE_ID_EXAMPLE_COM }}"
  TF_VAR_cloudflare_zone_id_example_top: ${{ vars.TF_VAR_CLOUDFLARE_ZONE_ID_EXAMPLE_TOP }}

jobs:
  terraform:
    name: "Terraform Apply"
    runs-on: ubuntu-latest
    permissions:
      contents: read
    concurrency:
      group: terraform-apply
      cancel-in-progress: false
    steps:
      - name: Checkout
        uses: actions/checkout@v6

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v4

      - name: Terraform Init
        run: terraform init -input=false

      - name: Terraform Apply
        run: terraform apply -input=false -auto-approve

For PRs, CI should automatically attach the output of terraform plan to each PR:

name: Terraform Plan

on:
  pull_request:
    paths:
      - "**/*.tf"
      - ".github/workflows/terraform-plan.yml"

permissions:
  contents: read
  pull-requests: write

env:
  TF_IN_AUTOMATION: "true"
  CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  TF_VAR_cloudflare_zone_id_example_com: "${{ vars.TF_VAR_CLOUDFLARE_ZONE_ID_EXAMPLE_COM }}"
  TF_VAR_cloudflare_zone_id_example_top: ${{ vars.TF_VAR_CLOUDFLARE_ZONE_ID_EXAMPLE_TOP }}

jobs:
  plan:
    name: Terraform Plan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6

      - uses: hashicorp/setup-terraform@v4

      - name: Terraform fmt
        id: fmt
        run: terraform fmt -check -recursive
        continue-on-error: true

      - name: Terraform Init
        id: init
        run: terraform init -input=false

      - name: Terraform Validate
        id: validate
        run: terraform validate -no-color

      - name: Terraform Plan
        id: plan
        run: terraform plan -input=false -no-color
        continue-on-error: true

      - name: Post Plan to PR
        uses: actions/github-script@v8
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const { data: comments } = await github.rest.issues.listComments({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
            });
            const botComment = comments.find(c =>
              c.user.type === 'Bot' && c.body.includes('<!-- terraform-plan -->')
            );

            const planOutput = `${{ steps.plan.outputs.stdout }}`.substring(0, 65000);

            const body = `<!-- terraform-plan -->
            #### Terraform Plan

            | Step     | Result                            |
            | -------- | --------------------------------- |
            | fmt      | \`${{ steps.fmt.outcome }}\`      |
            | init     | \`${{ steps.init.outcome }}\`     |
            | validate | \`${{ steps.validate.outcome }}\` |
            | plan     | \`${{ steps.plan.outcome }}\`     |

            <details><summary>Expand Plan details</summary>

            \`\`\`terraform
            ${planOutput}
            \`\`\`
            </details>`;

            if (botComment) {
              await github.rest.issues.updateComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                comment_id: botComment.id,
                body
              });
            } else {
              await github.rest.issues.createComment({
                issue_number: context.issue.number,
                owner: context.repo.owner,
                repo: context.repo.repo,
                body
              });
            }

      - name: Fail if plan failed
        if: steps.plan.outcome == 'failure'
        run: exit 1

Every PR will have Plan output

Ongoing workflow

At this point, Terraform’s initial ‘takeover’ is complete, and after that you move into day-to-day maintenance. At this stage there are really only three things to do:

  1. Create new resources
  2. Modify existing resources
  3. Delete resources that are no longer needed

There are only two common operations: terraform plan and terraform apply. If you are working alone, you can usually just commit small changes directly. If you are working in a team, though, each change should follow the principle of using a PR whenever possible rather than committing directly.

Creating infrastructure

Suppose you want to add a new DNS record, create a new Tunnel, or create a new object storage bucket. The process is always the same: declare the resource in a .tf file, run terraform plan to preview the changes, and then terraform apply.

The ideal plan output is:

  • X to add
  • 0 to change
  • 0 to destroy

If you only meant to add something but ‘to destroy’ appears, do not rush to apply. Usually it means a bad reference, a wrong variable, or that you accidentally changed a resource address. Check carefully to see what went wrong.

Modifying and deleting infrastructure

The process for modifying resources is similar to creating them, but there is one extra step: evaluate whether the change will trigger a rebuild.

That is because many Provider fields are ForceNew. You think you are only changing one field, and Terraform replies: ‘Right then, delete and recreate it.’ In a DNS scenario like this, that is not a huge issue, but for something like a cloud instance, deleting and recreating it can obviously cause real damage.
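When a replacement is unavoidable, Terraform’s create_before_destroy lifecycle setting at least flips the order, so the new resource exists before the old one is destroyed (where the provider supports it). A sketch, with a hypothetical instance:

```hcl
resource "tencentcloud_instance" "web" {
  # ... instance arguments as usual ...

  lifecycle {
    # On ForceNew changes, create the replacement first, then destroy
    # the old instance, reducing the window of downtime.
    create_before_destroy = true
  }
}
```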

It is best to follow this order: run terraform plan first, check whether the plan marks any resource for replacement, and only then apply.

For production environments, being a bit slower when creating or modifying things is not a big problem. What matters most is correctness. Go a bit slower; do not make mistakes. IaC is not a speed contest — it is about predictability.

If you are deleting resources instead (for example, retiring a DNS record or cleaning up an abandoned Tunnel), the process is similar: remove the resource block from the code, run terraform plan to confirm that only the intended resources are marked for destruction, and then apply 6.
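If, as footnote 6 describes, you only want Terraform to forget a resource while keeping it alive in the cloud, then besides terraform state rm, Terraform 1.7+ also offers a declarative removed block. A sketch (the resource address is hypothetical):

```hcl
removed {
  from = cloudflare_dns_record.legacy_record

  lifecycle {
    # Drop the resource from state without destroying it in the cloud.
    destroy = false
  }
}
```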

Suggestions for day-to-day collaboration

  • Put credentials in environment variables or CI secrets; do not write them into .tf or the repository
  • Enable protection policies for critical resources to prevent accidental deletion
  • Split directories by resource type
  • Run terraform plan regularly to check for and correct infrastructure drift 7, so that manual dashboard changes do not slip in unnoticed
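For the ‘protection policies’ point above, Terraform itself provides prevent_destroy, which makes any plan that would destroy the resource fail with an error. A sketch, with a hypothetical critical record:

```hcl
resource "cloudflare_dns_record" "mx_critical" {
  # ... record arguments as usual ...

  lifecycle {
    # Any plan that would destroy this record now errors out
    # instead of silently scheduling a delete.
    prevent_destroy = true
  }
}
```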

Although this workflow may look a bit cumbersome, every change is recorded, auditable, and reversible — and most importantly, reproducible. That is where IaC delivers its real value.

References


  1. Infrastructure as Code refers to a method of defining and deploying the required infrastructure using machine-readable configuration files. ↩︎

  2. Each provider uses different field names and formats in its configuration files, but these can be converted fairly easily with a script. ↩︎

  3. One especially important thing to note is that the state file may contain sensitive information stored in plain text, such as database passwords and API keys, so you must never commit the .tfstate file to a public code repository. ↩︎

  4. HashiCorp Configuration Language, a declarative configuration language developed by HashiCorp, designed to balance machine readability with human readability. ↩︎

  5. Idempotence means that when a computer system or interface receives the same request multiple times, the effect is the same as if it had been executed once. No matter how many times it runs, the system’s final state remains consistent. ↩︎

  6. If you only want Terraform to stop managing a resource, rather than actually destroying it in the cloud, you should use the terraform state rm command instead of deleting the resource block from the code and then running apply, otherwise the real resource in the cloud will be destroyed as well. ↩︎

  7. Infrastructure drift refers to a situation where infrastructure is modified in reality through non-IaC means such as clicking around in a console, causing the actual state to differ from the state declared in code. ↩︎
