How to Write Terraform Script
Terraform is an open-source infrastructure as code (IaC) tool developed by HashiCorp that enables engineers to define, provision, and manage cloud and on-premises infrastructure using declarative configuration files. Unlike traditional manual or script-based provisioning methods, Terraform allows teams to codify their infrastructure in a version-controlled, repeatable, and scalable manner. Writing a Terraform script, commonly referred to as a Terraform configuration, is the foundational skill required to leverage this powerful tool effectively.
The importance of learning how to write Terraform script cannot be overstated in modern DevOps and cloud engineering workflows. As organizations migrate to multi-cloud and hybrid environments, consistency, auditability, and automation become critical. Terraform scripts eliminate configuration drift, reduce human error, accelerate deployment cycles, and ensure compliance across environments, from development to production. Whether you're deploying a single virtual machine or orchestrating an entire Kubernetes cluster across AWS, Azure, or Google Cloud, Terraform provides a unified language to describe and manage your infrastructure.
This guide walks you through everything you need to know to write effective, maintainable, and production-ready Terraform scripts. From basic syntax to advanced patterns, best practices, real-world examples, and essential tools, you'll gain the confidence to create infrastructure configurations that are robust, reusable, and scalable.
Step-by-Step Guide
Step 1: Install Terraform
Before writing any Terraform script, ensure Terraform is installed on your local machine or CI/CD environment. Terraform is distributed as a single binary, making installation straightforward.
On macOS, use Homebrew:
brew install terraform
On Ubuntu/Debian:
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update && sudo apt-get install terraform
On Windows, download the .zip file from the official Terraform downloads page, extract it, and add the binary to your system PATH.
Verify the installation by running:
terraform version
You should see output similar to:
Terraform v1.7.5
on linux_amd64
Step 2: Choose a Cloud Provider
Terraform supports over 3,000 providers, including AWS, Azure, Google Cloud, DigitalOcean, and even on-premises solutions like VMware and OpenStack. For this guide, we'll use AWS as the primary example, but the principles apply universally.
To interact with AWS, you need:
- An AWS account
- An IAM user with programmatic access
- Access Key ID and Secret Access Key
Configure AWS credentials using the AWS CLI:
aws configure
Or set environment variables:
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-east-1"
Step 3: Initialize a Terraform Project Directory
Create a new directory for your Terraform project:
mkdir my-terraform-project
cd my-terraform-project
Inside this directory, create a file named main.tf. This is where you'll write your infrastructure definitions. Terraform automatically loads all files ending in .tf in the current directory.
Step 4: Define the Provider
Every Terraform script must declare which cloud provider it will interact with. In main.tf, add:
provider "aws" {
  region = "us-east-1"
}
This tells Terraform to use the AWS provider and deploy resources in the us-east-1 region. Terraform will automatically use the credentials you configured earlier.
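Though not required for this minimal example, it is good practice to also pin the Terraform and provider versions so terraform init resolves the same plugins everywhere. A typical pinning block (the version constraints here are illustrative) looks like:

```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # any 5.x release
    }
  }
}
```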
Step 5: Declare Resources
Resources are the core building blocks of Terraform. Each resource represents a component of your infrastructure, like a virtual machine, network, bucket, or security group.
Let's create a simple EC2 instance. Add the following to main.tf:
resource "aws_instance" "web_server" {
  ami           = "ami-0c55b159cbfafe1f0" # Amazon Linux 2 AMI (us-east-1)
  instance_type = "t2.micro"

  tags = {
    Name = "Web-Server-01"
  }
}
Here's what each line means:
- resource: declares a new infrastructure component.
- aws_instance: the resource type (an EC2 instance in AWS).
- web_server: the local name you assign to reference this resource elsewhere in your code.
- ami: the Amazon Machine Image identifier, which determines the OS.
- instance_type: the hardware profile (t2.micro is free-tier eligible).
- tags: metadata for identification and cost allocation.
Step 6: Initialize and Plan
Before applying any changes, initialize your Terraform project to download the required provider plugins:
terraform init
This creates a .terraform directory and downloads the AWS provider.
Next, generate an execution plan to preview what Terraform will do:
terraform plan
You'll see output similar to:
Terraform will perform the following actions:

  # aws_instance.web_server will be created
  + resource "aws_instance" "web_server" {
      + ami           = "ami-0c55b159cbfafe1f0"
      + instance_type = "t2.micro"
      + tags          = {
          + "Name" = "Web-Server-01"
        }
      ...
    }

Plan: 1 to add, 0 to change, 0 to destroy.
This step is critical. Always review the plan before applying changes to avoid unintended modifications.
Step 7: Apply the Configuration
If the plan looks correct, apply the configuration:
terraform apply
Terraform will prompt you to confirm. Type yes and press Enter.
Within a minute or two, Terraform will create the EC2 instance. You can verify this in the AWS Console under EC2 > Instances.
Step 8: Manage State
Terraform maintains a state file (terraform.tfstate) that tracks the current state of your infrastructure. This file maps real-world resources to your configuration.
By default, the state file is stored locally. For team collaboration or production use, store state remotely using Terraform Cloud, AWS S3, or Azure Blob Storage.
To use S3 for remote state, create a backend configuration in main.tf:
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}
Then re-run terraform init to migrate state to S3.
Step 9: Destroy Resources
To clean up and delete all resources created by Terraform:
terraform destroy
This ensures you're not incurring unnecessary cloud costs. Always confirm before proceeding.
Step 10: Modularize Your Code
As your infrastructure grows, keeping everything in one file becomes unmanageable. Terraform supports modules: reusable, encapsulated configurations.
Create a directory called modules, then inside it, create web-server:
mkdir -p modules/web-server
In modules/web-server/main.tf:
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}

variable "ami" {
  description = "AMI ID"
  type        = string
}

resource "aws_instance" "server" {
  ami           = var.ami
  instance_type = var.instance_type

  tags = {
    Name = "Web-Server"
  }
}
In your root main.tf, call the module:
module "web_server" {
  source = "./modules/web-server"
  ami    = "ami-0c55b159cbfafe1f0"
}
Run terraform apply again. Terraform will now use your module to create the instance. Modularization improves readability, reusability, and team collaboration.
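To surface values from inside a module, declare outputs in the module and reference them through the module block's name. A minimal sketch (the file name and output names here are illustrative):

```hcl
# modules/web-server/outputs.tf
output "instance_id" {
  value = aws_instance.server.id
}

# Root main.tf: re-export the module's value
output "web_server_id" {
  value = module.web_server.instance_id
}
```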
Best Practices
Use Version Control
Always store your Terraform configurations in a version control system like Git. This allows you to track changes, review pull requests, and roll back to previous states if something breaks. Include a .gitignore file to exclude sensitive or auto-generated files:
.terraform/
terraform.tfstate
terraform.tfstate.backup
*.tfstate
Separate Environments
Never use the same Terraform configuration for development, staging, and production. Instead, use separate directories or workspaces:
environments/dev/
environments/staging/
environments/prod/
Each directory contains its own main.tf, variables.tf, and backend configuration. This prevents accidental changes to production infrastructure.
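As an illustration, a hypothetical environments/dev/main.tf might pin its own state key and call the shared module with development-sized settings, while prod uses its own values and backend:

```hcl
# environments/dev/main.tf (illustrative layout)
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "dev/terraform.tfstate" # separate state per environment
    region = "us-east-1"
  }
}

module "web_server" {
  source        = "../../modules/web-server"
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro" # dev stays on the free tier
}
```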
Use Variables and Outputs
Hardcoding values like AMI IDs, instance types, or region names makes your code inflexible. Define variables in a separate variables.tf file:
variable "aws_region" {
  description = "AWS region to deploy resources"
  type        = string
  default     = "us-east-1"
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}
Reference them in your resources:
provider "aws" {
  region = var.aws_region
}

resource "aws_instance" "web_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = var.instance_type
}
Use outputs to expose important values after deployment:
output "instance_public_ip" {
  value = aws_instance.web_server.public_ip
}
After applying, run terraform output to see the public IP of your instance.
Use Remote State for Team Collaboration
Local state files are not suitable for teams. Use remote backends like S3, Azure Storage, or Terraform Cloud to store state securely and enable concurrent access. Enable state locking to prevent race conditions:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
Ensure the DynamoDB table exists for locking:
aws dynamodb create-table \
--table-name terraform-locks \
--attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH \
--billing-mode PAY_PER_REQUEST
Validate and Lint Your Code
Use terraform validate to check syntax and configuration validity before applying:
terraform validate
Install the tfsec or checkov tools to scan for security misconfigurations:
tfsec .
These tools detect common issues like open security groups, unencrypted S3 buckets, or overly permissive IAM policies.
Follow the Principle of Least Privilege
When configuring IAM credentials for Terraform, grant only the permissions necessary to perform the required actions. Avoid using root or admin-level credentials. For experimentation, a dedicated IAM user with AWS managed policies such as the following is a reasonable starting point:
- AmazonEC2FullAccess
- AmazonS3FullAccess
- AmazonVPCFullAccess
For production, replace these broad managed policies with custom policies scoped to the specific actions and resources Terraform actually manages.
Use AWS IAM Roles for Service Accounts (IRSA) in Kubernetes or assume roles for temporary credentials.
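As an illustration of scoping down, a custom policy can restrict Terraform to specific EC2 actions in a single region. The policy name, action list, and region below are hypothetical and should be adapted to what your configuration manages:

```hcl
resource "aws_iam_policy" "terraform_ec2_scoped" {
  name = "terraform-ec2-scoped" # hypothetical name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "ec2:Describe*",
        "ec2:RunInstances",
        "ec2:TerminateInstances",
        "ec2:CreateTags",
      ]
      Resource = "*"
      Condition = {
        StringEquals = { "aws:RequestedRegion" = "us-east-1" }
      }
    }]
  })
}
```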
Document Your Code
Use comments liberally to explain complex logic, resource dependencies, or environment-specific configurations:
# This EC2 instance runs a web server for the public-facing application
# Uses the Amazon Linux 2 AMI for long-term support
resource "aws_instance" "web_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}
Also maintain a README.md file in your project root explaining:
- How to set up credentials
- How to deploy each environment
- Expected outputs
- Dependencies
Use Terraform Modules from the Registry
Instead of reinventing the wheel, leverage community-tested modules from the Terraform Registry. For example, to deploy a VPC:
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true
}
This reduces maintenance overhead and ensures best practices are followed.
Tools and Resources
Essential Tools
- Terraform CLI: The core tool for writing, planning, and applying configurations.
- Terraform Cloud: HashiCorp's hosted platform for remote state, collaboration, policy enforcement, and run automation.
- tfsec: Static analysis tool for detecting security issues in Terraform code.
- checkov: Scans Terraform, CloudFormation, and Kubernetes files for misconfigurations.
- terragrunt: A thin wrapper that helps manage multiple Terraform modules and environments with DRY principles.
- VS Code with the Terraform extension: Provides syntax highlighting, auto-completion, and linting.
- Atlantis: Automates Terraform plans and applies via GitHub/GitLab pull requests.
Learning Resources
- HashiCorp Learn: Free, interactive tutorials covering everything from basics to advanced topics.
- Terraform Registry: Official repository of verified modules and providers.
- Terraform provider GitHub repos: Source code and issue tracking for all official providers.
- Terraform: Up & Running by Yevgeniy Brikman: A comprehensive book for beginners and advanced users.
- HashiCorp YouTube channel: Tutorials, webinars, and product updates.
Testing and Validation Tools
Use the following to ensure your Terraform scripts are reliable:
- Terratest: Go-based testing framework for writing automated tests against real infrastructure.
- InSpec: Compliance testing tool to validate real infrastructure state against expected configurations.
- conftest: Tests structured configuration data, including JSON Terraform plans, against Open Policy Agent policies.
CI/CD Integration
Integrate Terraform into your CI/CD pipeline using GitHub Actions, GitLab CI, or Jenkins. Example GitHub Actions workflow:
name: Terraform Plan & Apply

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  terraform:
    name: Terraform
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2

      - name: Terraform Init
        run: terraform init

      - name: Terraform Plan
        run: terraform plan
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main'
        run: terraform apply -auto-approve
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
This ensures every change is reviewed, tested, and deployed consistently.
Real Examples
Example 1: Deploy a Secure Web Server with Security Group and Elastic IP
Let's create a more realistic example: an EC2 instance with a custom security group that allows only HTTP and SSH traffic from specific IPs.
Create main.tf:
provider "aws" {
  region = "us-east-1"
}

resource "aws_security_group" "web_sg" {
  name        = "web-security-group"
  description = "Allow HTTP and SSH from specific IPs"

  ingress {
    description = "HTTP from anywhere"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH from corporate IP"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.0/24"] # Replace with your IP range
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "web-sg"
  }
}

resource "aws_eip" "web_eip" {
  instance = aws_instance.web_server.id
  domain   = "vpc" # use vpc = true on AWS provider versions before 5.0
}

resource "aws_instance" "web_server" {
  ami                    = "ami-0c55b159cbfafe1f0"
  instance_type          = "t3.small"
  vpc_security_group_ids = [aws_security_group.web_sg.id]

  tags = {
    Name = "Secure-Web-Server"
  }
}

output "public_ip" {
  value = aws_eip.web_eip.public_ip
}
Run terraform apply. The resulting instance will be reachable over HTTP from anywhere and over SSH only from your corporate IP range.
Example 2: Provision an S3 Bucket with Versioning and Encryption
Storage is a common requirement. Here's how to create a secure, versioned S3 bucket:
resource "aws_s3_bucket" "backup_bucket" {
  bucket = "my-company-backups-2024"

  tags = {
    Environment = "production"
    Owner       = "devops-team"
  }
}

resource "aws_s3_bucket_versioning" "versioning" {
  bucket = aws_s3_bucket.backup_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "encryption" {
  bucket = aws_s3_bucket.backup_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "block_public" {
  bucket = aws_s3_bucket.backup_bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

output "bucket_arn" {
  value = aws_s3_bucket.backup_bucket.arn
}
This configuration ensures the bucket is private, encrypted, and prevents accidental exposure.
Example 3: Multi-Environment Setup with Workspaces
Use Terraform workspaces to manage multiple environments from the same codebase:
# Initialize workspaces
terraform workspace new dev
terraform workspace new prod

# Switch to dev
terraform workspace select dev

# Apply dev config
terraform apply

# Switch to prod
terraform workspace select prod

# Apply prod config (with different variables)
terraform apply
Create variables.tf with environment-specific defaults:
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}

variable "env" {
  description = "Environment name"
  type        = string
  default     = "dev"
}
In main.tf, use conditional logic:
resource "aws_instance" "server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = var.env == "prod" ? "t3.large" : var.instance_type

  tags = {
    Name = "Server-${var.env}"
  }
}
This approach reduces duplication while allowing environment-specific tuning.
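An alternative to a hand-maintained env variable is the built-in terraform.workspace value, which always reflects the active workspace. A sketch of the same resource using it:

```hcl
locals {
  env = terraform.workspace # "dev" or "prod", set via terraform workspace select
}

resource "aws_instance" "server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = local.env == "prod" ? "t3.large" : "t2.micro"

  tags = {
    Name = "Server-${local.env}"
  }
}
```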
FAQs
What is the difference between Terraform and CloudFormation?
Terraform is cloud-agnostic and supports multiple providers (AWS, Azure, GCP, etc.) with a single syntax, while AWS CloudFormation is specific to AWS and uses YAML or JSON. Terraform's HCL (HashiCorp Configuration Language) is generally more concise than CloudFormation templates. Terraform also maintains state externally, whereas CloudFormation state is managed internally by AWS.
Can Terraform manage on-premises infrastructure?
Yes. Terraform supports providers for VMware vSphere, OpenStack, Nutanix, and even custom APIs via the HTTP provider. You can use Terraform to automate physical server provisioning via IPMI or integrate with Puppet/Ansible for configuration management.
How do I handle secrets in Terraform?
Never hardcode secrets like passwords or API keys in Terraform files. Use environment variables, HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault. Reference them using data "aws_secretsmanager_secret_version" or var.secret passed via CLI or CI/CD.
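For example, a secret stored in AWS Secrets Manager can be read at plan time with a data source (the secret name below is hypothetical). Note that values resolved this way still end up in the state file, so the backend must be encrypted and access-controlled:

```hcl
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "prod/db/password" # hypothetical secret name
}

# Reference the value where needed, e.g.:
#   password = data.aws_secretsmanager_secret_version.db_password.secret_string
```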
Is Terraform state encrypted?
By default, local state is not encrypted. Always use remote backends like S3 with server-side encryption (SSE) enabled. For enhanced security, integrate with AWS KMS or HashiCorp Vault to encrypt state at rest.
What happens if I delete a resource manually in the cloud?
Terraform detects drift during the next terraform plan. It will show that the resource is missing and plan to recreate it. To avoid this, always manage infrastructure through Terraform. Use terraform import to bring manually created resources under Terraform control.
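The classic form is a CLI command, terraform import ADDRESS ID; since Terraform 1.5 you can also declare the import in configuration so it appears in the plan. Both forms below use a hypothetical instance ID:

```hcl
# Terraform 1.5+ import block: adopt a manually created instance into state
import {
  to = aws_instance.web_server
  id = "i-0123456789abcdef0" # hypothetical instance ID
}

# Equivalent CLI form (works on older Terraform versions too):
#   terraform import aws_instance.web_server i-0123456789abcdef0
```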
How do I upgrade Terraform versions?
Always test upgrades in a non-production environment first. Terraform maintains backward compatibility, but provider versions may break. Use terraform init -upgrade to update providers, and review release notes for breaking changes.
Can I use Terraform with Kubernetes?
Yes. Use the Kubernetes provider to deploy namespaces, deployments, and services (and the Helm provider for Helm charts). Combine it with the AWS provider's EKS resources, or the terraform-aws-modules/eks registry module, to create managed Kubernetes clusters.
How do I debug Terraform errors?
Set the TF_LOG environment variable (for example, TF_LOG=DEBUG terraform apply) to enable verbose logging, and TF_LOG_PATH to write the log to a file. Use terraform state list to inspect what resources are tracked. Use terraform console to evaluate expressions interactively.
Conclusion
Writing a Terraform script is more than just defining infrastructure; it's about adopting a disciplined, automated, and repeatable approach to managing your cloud environments. By following the step-by-step guide in this tutorial, you've learned how to initialize a project, define resources, modularize configurations, and apply best practices for security and scalability.
The real power of Terraform lies not in its syntax, but in its ability to transform infrastructure from a chaotic, manual process into a version-controlled, auditable, and collaborative engineering discipline. Whether you're managing a single server or orchestrating thousands of microservices across hybrid clouds, Terraform provides the foundation for reliable, predictable, and efficient operations.
As you continue your journey, focus on mastering modules, remote state management, and integration with CI/CD pipelines. Explore the Terraform Registry for production-ready modules. Contribute to open-source configurations. And most importantly: always plan before you apply.
Terraform is not just a tool. It's a mindset. And with the knowledge you've gained here, you're now equipped to lead your team toward infrastructure that is not just functional but exceptional.