Terraform Native Service enables you to deploy and manage Terraform or OpenTofu infrastructure code directly within Qovery. This service type allows you to provision cloud resources, configure external services, and manage infrastructure as code (IaC) using the same environment structure as your applications. Terraform executes within Kubernetes pods on your cluster, with automatic state management, variable injection, and integrated deployment workflows.
1
Navigate to Your Environment
Navigate to the environment where you want to deploy your Terraform infrastructure.
2
Create New Service
Click New Service and select Terraform from the service type options.
3
Configure Service Name
Provide a service name that identifies this Terraform service (e.g., aws-infrastructure, cloudflare-config).
4
Select Git Repository
Choose the Git repository containing your Terraform code. This repository must include your .tf configuration files. Specify the branch and root path if your Terraform code is in a subdirectory.
5
Select Engine
Choose the execution engine:
Terraform - Official HashiCorp Terraform
OpenTofu - Open-source Terraform fork
6
Select Terraform Version
Choose the Terraform version to use for execution. Supported versions depend on your selected engine.
7
State Management
Configure how Terraform stores and manages state files. Qovery supports two state management modes.
Default (Cluster-Managed)
Custom Backend (AWS S3)
By default, Terraform state is managed inside the Kubernetes cluster. State files are stored securely within the cluster and managed automatically by Qovery. This default configuration requires no additional setup and is ideal for getting started quickly.
Benefits:
Zero configuration required
Automatic state management
Secure storage within your cluster
No external dependencies
If you prefer to manage your Terraform state in your own backend, you can configure a custom backend. The standard for AWS is to use an S3 backend.
Prerequisites
Before configuring your S3 backend, you need to:
Create an S3 bucket in your AWS account to store the Terraform state
Enable bucket versioning on your S3 bucket to allow state recovery in case of accidental deletions
Configure S3 state locking using the use_lockfile option (this avoids the need for a DynamoDB table)
Prepare AWS credentials with appropriate permissions to access the S3 bucket
Backend Configuration
Add a backend configuration block to your main.tf file in your Git repository:
bucket - Name of your S3 bucket for storing state files
key - Path to the state file within the bucket (typically terraform.tfstate)
region - AWS region where your S3 bucket is located
use_lockfile - Set to true to use S3’s native locking mechanism instead of DynamoDB (recommended)
encrypt - Set to true to enable server-side encryption for the state file (recommended)
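Putting the parameters above together, the backend block looks like the following sketch (the bucket name and region are placeholders for your own values):

```hcl
terraform {
  backend "s3" {
    bucket       = "my-terraform-state-bucket" # placeholder: your S3 bucket name
    key          = "terraform.tfstate"         # path to the state file in the bucket
    region       = "eu-west-3"                 # placeholder: region of your bucket
    use_lockfile = true                        # S3-native state locking (no DynamoDB table)
    encrypt      = true                        # server-side encryption for the state file
  }
}
```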
Using use_lockfile = true eliminates the need to set up a DynamoDB table for state locking, simplifying your infrastructure. This is the recommended approach as DynamoDB-based locking is deprecated and will be removed in a future version of Terraform.
Best Practices:
Enable bucket versioning on your S3 bucket to allow state recovery in case of accidental deletions or corruption
Use encrypt = true to enable server-side encryption for your state files
Never hardcode sensitive values in your backend configuration
Configuring AWS Credentials
To allow Terraform to access your S3 backend, you must provide AWS credentials as environment variables in the service configuration:
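The standard AWS SDK variables cover this; values below are placeholders, and the secret key should be stored as a Qovery Secret:

```shell
AWS_ACCESS_KEY_ID=<your-access-key-id>
AWS_SECRET_ACCESS_KEY=<your-secret-access-key>   # store as a Qovery Secret
AWS_DEFAULT_REGION=<region-of-your-s3-bucket>
```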
The s3:DeleteObject permission is required for managing the lock file when using use_lockfile = true.
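As a sketch, a minimal IAM policy for the state bucket could look like the following (the bucket name is a placeholder; adjust to your setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-terraform-state-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-terraform-state-bucket/*"
    }
  ]
}
```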
Using Cluster Credentials with Custom Backend
If you choose to use the default cluster credentials (instead of providing custom AWS credentials), be aware that the cluster’s IAM policy only allows access to S3 buckets with names prefixed with qovery-. You have two options:
Name your bucket with the qovery- prefix (e.g., qovery-my-terraform-backend)
Provide custom AWS credentials (recommended) with appropriate permissions for your bucket name
This restriction exists to prevent accidental access to unrelated S3 buckets in your AWS account.
Migration from Default State
If you’re migrating from the default cluster-managed state to a custom S3 backend:
Add the backend configuration to your main.tf
Configure the AWS credentials as environment variables
Use the Migrate State action from the service’s Action Toolbar to transfer the existing state to your S3 bucket
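For reference, this is the same operation Terraform itself performs when re-initializing with a new backend block in place; outside Qovery, the equivalent manual step would be:

```shell
# With the backend "s3" block added to main.tf, re-initialize and
# let Terraform copy the existing state to the new backend.
terraform init -migrate-state
```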
8
Execution Timeout (Default)
The default timeout is set to 1 hour. This can be customized if your Terraform operations require more time.
9
Cloud Credentials (Default Behavior)
By default, Terraform uses cluster credentials when provisioning resources on the same cloud provider as your cluster. If you need to use custom credentials (e.g., a different AWS account, GCP project, or Azure subscription), you will configure them in the Environment Variables step.
10
Compute Resources (Default)
Terraform execution uses the following default compute resources:
CPU: 500 millicores (500 mCPU, i.e., 0.5 vCPU)
Memory: 512 MB
Storage: 1 GB
These resources can be updated later in the Service Settings if your Terraform operations require more capacity.
11
Configure Terraform Variables
Qovery provides comprehensive variable management for Terraform, automatically detecting variables from your code and allowing flexible configuration.
Automatic Variable Detection
Qovery automatically loads variables from:
main.tf
variables.tf
Variables detected by Qovery will appear prefixed with tf_var_.
Example: If your variables.tf contains:
```hcl
variable "bucket_name" {
  type = string
}

variable "environment" {
  type    = string
  default = "production"
}
```
Qovery will create:
tf_var_bucket_name
tf_var_environment
Importing TFVAR Files
You can import .tfvars files to configure multiple variables at once:
Click Import TFVAR
Select the TFVAR file(s) to import
Choose which TFVAR files to apply
Reorder the TFVAR files as needed
The last TFVAR file applied wins. If multiple TFVAR files define the same variable, the value from the last file in the order takes precedence.
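For example (file names are illustrative): if two imported files both set the same variable, only the value from the file applied last is used:

```hcl
# base.tfvars (applied first)
environment = "staging"

# prod.tfvars (applied last, so its value wins)
environment = "production"
# tf_var_environment resolves to "production"
```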
Manual Variable Override
You can manually override any variable value in two ways:
Direct Value Entry - Enter a value directly:
tf_var_bucket_name = "my-custom-bucket"
Reference Environment Variable - Reference another environment variable:
tf_var_bucket_name = ${MY_BUCKET_NAME}
Variables Not in main.tf or variables.tf
If you have variables defined in other .tf files that are not in main.tf or variables.tf, Qovery will not automatically detect them. You can create these variables manually using the tf_var_ prefix:
tf_var_my_custom_variable = "value"
12
Configure Environment Variables
Add standard environment variables that will be available during Terraform execution. This is where you configure custom cloud provider credentials, provider-specific settings, and other configuration.
Common use cases:
Custom Cloud Credentials
Override the default cluster credentials with your own:
AWS:
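For AWS, the standard SDK environment variables apply (values are placeholders; store secrets as Qovery Secrets):

```shell
AWS_ACCESS_KEY_ID=<your-access-key-id>
AWS_SECRET_ACCESS_KEY=<your-secret-access-key>   # store as a Qovery Secret
AWS_DEFAULT_REGION=<your-region>
```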
Use Qovery Secrets for sensitive credentials to ensure they are encrypted and never exposed in logs.
Terraform Environment Variables
You can also set Terraform-specific environment variables:
```shell
TF_LOG=DEBUG
TF_WORKSPACE=production
```
13
Review and Create
Review your configuration and click Create to provision the Terraform service. Optionally, you can select Create & Run Plan to execute terraform plan immediately and preview the execution plan before applying changes.
The Terraform Arguments section allows you to specify additional CLI arguments for each Terraform command. These arguments override default behaviors and enable advanced customization.
Terraform Arguments provide fine-grained control over Terraform execution. Use these to customize init, validate, plan, apply, and destroy operations.
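For illustration, arguments like the following could be attached to individual commands (these are standard Terraform CLI flags with example values, not Qovery defaults):

```shell
# init:    refresh provider plugins to the latest allowed versions
#          -upgrade
# plan:    limit concurrent operations and condense warning output
#          -parallelism=10 -compact-warnings
# apply:   limit concurrent operations
#          -parallelism=10
# destroy: skip refreshing state before destroying
#          -refresh=false
```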
By default, Terraform services run using a base Docker image (Debian-based) containing Terraform (or OpenTofu), dumb-init, rsync, bash, and ca-certificates. If your Terraform code requires additional binaries or tools (e.g., AWS CLI, kubectl, jq, custom scripts), you can customize the build image using a Dockerfile fragment.
Qovery provides two ways to inject custom Dockerfile commands during the build:
File-based: Reference a Dockerfile fragment file stored in your Git repository
Inline: Provide Dockerfile commands directly in the service configuration
File-based Fragment
Inline Fragment
Reference a Dockerfile fragment file from your repository.
1
Create Fragment File
In your Git repository, create a Dockerfile fragment file in your Terraform code directory or a custom location.
Example directory structure:
```
my-repo/
├── terraform/
│   ├── main.tf
│   ├── variables.tf
│   └── custom-build.dockerfile   # Your fragment file
└── README.md
```
2
Add Dockerfile Instructions
Add valid Dockerfile instructions to install the tools you need. The fragment is injected into the build after your Terraform files are copied and before the final user switch. See the Fragment Examples section below for common use cases.
3
Configure Service
In the Terraform service settings, configure the Dockerfile fragment:
Navigate to Service Settings → Dockerfile Fragment
Select Custom file path
Enter the absolute path to your fragment file (e.g., /terraform/custom-build.dockerfile)
The path must be absolute (starting with /) and located within your service’s root_path.
4
Deploy
Save your changes and deploy. Qovery will inject the fragment contents during the Docker build.
Define Dockerfile commands directly in the service configuration without creating a file in your repository.
1
Configure Service
In the Terraform service settings:
Navigate to Service Settings → Dockerfile Fragment
Fragment Examples
Installing kubectl:
```dockerfile
# Install kubectl (curl is already available in the base image)
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && \
    chmod +x kubectl && \
    mv kubectl /usr/local/bin/
```
Installing a custom binary from your repository:
```dockerfile
# Install a custom tool already present in the repository
# (repository files are already copied to /data)
RUN cp /data/bin/my-custom-tool /usr/local/bin/my-custom-tool && \
    chmod +x /usr/local/bin/my-custom-tool
```
Downloading and extracting a remote archive:
```dockerfile
# Download and extract a tool from the internet
ADD https://releases.example.com/tool-v1.2.3.tar.gz /tmp/tool.tar.gz
RUN tar -xzf /tmp/tool.tar.gz -C /usr/local/bin/ && \
    rm /tmp/tool.tar.gz
```
The fragment supports the following Dockerfile instructions:
RUN - Execute commands during build (install packages, configure tools, etc.)
ADD - Add files, including from remote URLs and local archives (note that remote archives are not auto-extracted; extract them with RUN, as in the example above)
Repository files are already present in /data before the fragment executes, so you can reference them in RUN commands without needing COPY.
The fragment runs as root during the build. The container switches to a non-root user (app) after your fragment executes. Make sure any binaries you install are accessible to all users.
This feature is specific to Terraform and OpenTofu services. Applications and Jobs use different build mechanisms.