IOupdate | IT News and Selfhosting
Selfhosting

Instant VMs and LXCs on Proxmox: My Go-To Terraform Templates for Quick Deployments

By Andy · August 7, 2025 · 16 min read


Dive into the power of Infrastructure as Code (IaC) for your self-hosting journey with Proxmox! This article unveils how Terraform, the industry-standard tool, can revolutionize the deployment of virtual machines (VMs) and LXC containers in your home lab. Learn to automate resource provisioning, achieve unparalleled repeatability, and streamline your virtualization management – turning hours of manual setup into mere seconds. Discover our battle-tested Terraform templates, gain production-grade skills, and unlock new levels of efficiency for your self-hosted projects.

Why Terraform is Essential for Proxmox Self-Hosting

Terraform stands out as a leading tool for rapidly deploying virtual machines and LXC containers on Proxmox, or indeed on any other infrastructure. I've used it extensively with VMware vSphere on-premises and with AWS and Azure in the cloud, and its adaptability is unmatched. Now that I've largely moved my **home lab** to Proxmox, I'm excited to share my preferred Terraform templates for swift, efficient deployments.

Put simply, Terraform is the go-to utility for infrastructure deployment. While OpenTofu is gaining traction, Terraform retains a significant market share and is expected to do so for the foreseeable future. Its core strength lies in enabling “Infrastructure as Code” (IaC). You define your desired infrastructure using HashiCorp Configuration Language (HCL), and Terraform takes care of the provisioning.

This approach offers substantial advantages over manual infrastructure deployment efforts:

  • Repeatability: Your VM/LXC configurations are version-controlled, ensuring consistent and reusable deployments.
  • Speed: Launch complete systems in mere seconds, eliminating tedious UI clicks.
  • Scalability: Effortlessly provision multiple VMs or containers using loops, variables, and modules.
  • Automation: Seamlessly integrate into CI/CD pipelines (e.g., GitLab, Jenkins) for dynamic and automated environments.
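As a sketch of the scalability point, a single resource block can stamp out several VMs with `for_each` (the map of VM names and IPs below is hypothetical, and the template VM ID is an assumption):

```hcl
# Hypothetical map of VMs to create: one resource block, many instances.
variable "vms" {
  type = map(object({ ip = string }))
  default = {
    web01 = { ip = "192.168.1.101" }
    web02 = { ip = "192.168.1.102" }
  }
}

resource "proxmox_virtual_environment_vm" "fleet" {
  for_each  = var.vms
  name      = each.key
  node_name = "pve01" # adjust for your cluster

  clone {
    vm_id = 9000 # ID of your cloud-init template (assumed)
  }

  initialization {
    ip_config {
      ipv4 {
        address = "${each.value.ip}/24"
        gateway = "192.168.1.1"
      }
    }
  }
}
```

Adding a third VM is then a one-line change to the map rather than a new block of code.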

For any serious Proxmox user or **home lab** enthusiast aiming to streamline deployments and enhance their **virtualization management**, Terraform is an intuitive and powerful fit. Moreover, adopting Terraform teaches you valuable, production-ready skills applicable across diverse IT environments.

Terraform Workflow for Proxmox: From Template to Deployment

Before diving into the specifics of Terraform code, let’s establish a general overview of the workflow for creating VMs and LXC containers. Generally, you’ll need a pre-configured VM or LXC template. I typically leverage HashiCorp Packer to build these base templates, and then use Terraform to automate the cloning and provisioning of these templates into new, fully functional VMs or containers.

[Image: Overview of creating VMs and LXCs with Proxmox Terraform IaC code]

Prerequisites for Automating Proxmox with Terraform

Terraform interacts with various environments via “providers.” For Proxmox, popular options include `telmate/proxmox` and `bpg/proxmox`. These providers allow you to define and manage both virtual machines and LXC containers. I’ll demonstrate the `bpg/proxmox` provider for VMs and the `telmate/proxmox` provider for LXCs.

Choosing Your Proxmox Terraform Provider

At the top of your `main.tf` file, you'll declare the required provider. Lately I've found the `bpg/proxmox` provider excellent; it sees more active development than `telmate/proxmox`.


terraform {
  required_version = ">= 1.0"
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "~> 0.66"
    }
  }
}

Securing API Access for Terraform

To enable Terraform to interact with your Proxmox environment, you need to set up API access:

  1. Create a Dedicated API User: In Proxmox, create a user specifically for Terraform, such as `terraform@pve`, and assign it appropriate permissions (PVEVMAdmin is usually sufficient for VM/LXC management).
  2. Generate an API Token: Generate an API token for this user. Crucially, copy the secret immediately after creation, as it will only be displayed once. Store this token securely.

[Image: Adding an API token for a user]

[Image: Getting your API token for Proxmox Terraform]

Here’s an example provider block configured to use your Proxmox API token:


provider "proxmox" {
  endpoint  = var.proxmox_api_url
  api_token = "${var.proxmox_token_id}=${var.proxmox_token_secret}"
  insecure  = var.proxmox_tls_insecure
}

Regardless of the provider chosen, execute `terraform init` to download and prepare the provider for use.

[Image: Running a terraform init]

As a best practice for security and maintainability, use a `terraform.tfvars` or `.env` file to manage sensitive values.
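As a sketch, keep secrets out of version control by putting them in an untracked `terraform.tfvars`, or supply them as `TF_VAR_`-prefixed environment variables, which Terraform reads automatically (the values below are placeholders):

```hcl
# terraform.tfvars -- add this file to .gitignore
proxmox_api_url      = "https://pve01.example.lan:8006/api2/json"
proxmox_token_id     = "terraform@pve!terraform"
proxmox_token_secret = "REPLACE-WITH-YOUR-TOKEN-SECRET"
```

The same values can be exported as, for example, `TF_VAR_proxmox_token_secret` in the shell, which works well in CI pipelines where you don't want a tfvars file on disk.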

Essential Terraform Templates for Proxmox Virtual Machines

Below is the scaffolding for cloning an Ubuntu 24.04 cloud-init template, typically stored on `local-lvm` storage. This is a streamlined version of my go-to VM template:

main.tf

If you intend for Terraform to interact with the QEMU agent within the VM, ensure `agent { enabled = true }`. In the example below, it is disabled.


terraform {
  required_version = ">= 1.0"
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "~> 0.66"
    }
  }
}

provider "proxmox" {
  endpoint  = var.proxmox_api_url
  api_token = "${var.proxmox_token_id}=${var.proxmox_token_secret}"
  insecure  = var.proxmox_tls_insecure
}

data "proxmox_virtual_environment_vms" "all_vms" {
  node_name = var.proxmox_node
}

locals {
  template_vms = [
    for vm in data.proxmox_virtual_environment_vms.all_vms.vms : vm
    if vm.name == var.vm_template
  ]
  template_vm_id = length(local.template_vms) > 0 ? local.template_vms[0].vm_id : null
}

resource "proxmox_virtual_environment_vm" "ubuntu_vm" {
  name      = var.vm_name
  node_name = var.proxmox_node

  clone {
    vm_id = local.template_vm_id
    full  = true
  }

  # VM Hardware Configuration
  cpu {
    cores   = var.vm_cores
    sockets = var.vm_sockets
  }

  memory {
    dedicated = var.vm_memory
  }

  agent {
    enabled = false
  }

  # Network Configuration
  network_device {
    bridge = var.vm_network_bridge
    model  = var.vm_network_model
  }

  # Cloud-Init Configuration
  initialization {
    user_account {
      username = var.vm_ci_user
      password = var.vm_ci_password
      keys     = var.vm_ssh_keys != "" ? [file(var.vm_ssh_keys)] : []
    }

    ip_config {
      ipv4 {
        # Strip the leading "ip=" from e.g. "ip=192.168.1.110/24,gw=192.168.1.1"
        address = var.vm_ip_config != "" && var.vm_ip_config != "ip=dhcp" ? split("=", split(",", var.vm_ip_config)[0])[1] : "dhcp"
        gateway = var.vm_ip_config != "" && var.vm_ip_config != "ip=dhcp" ? split("=", split(",", var.vm_ip_config)[1])[1] : null
      }
    }
  }

  # Tags
  tags = split(",", var.vm_tags)
}

variables.tf


# Proxmox Provider Configuration
variable "proxmox_api_url" {
description = "Proxmox API URL"
type = string
default = ""
}

variable "proxmox_user" {
description = "Proxmox user for API access"
type = string
default = "terraform@pam"
}

variable "proxmox_token_id" {
description = "Proxmox API token ID"
type = string
sensitive = true
}

variable "proxmox_token_secret" {
description = "Proxmox API token secret"
type = string
sensitive = true
}

variable "proxmox_tls_insecure" {
description = "Skip TLS verification for Proxmox API"
type = bool
default = true
}

variable "proxmox_node" {
description = "Proxmox node name where VM will be created"
type = string
default = "pve01"
}

# VM Template Configuration

variable "vm_template" {
description = "Name of the Proxmox template to clone"
type = string
default = "ubuntu-24.04-template"
}

# VM Basic Configuration

variable "vm_name" {
description = "Name of the virtual machine"
type = string
default = "ubuntu-cloudinit"

validation {
condition = length(var.vm_name) > 0
error_message = "VM name cannot be empty."
}
}

variable "vm_cores" {
description = "Number of CPU cores for the VM"
type = number
default = 2

validation {
condition = var.vm_cores > 0 && var.vm_cores <= 32
error_message = "VM cores must be between 1 and 32."
}
}

variable "vm_memory" {
description = "Amount of memory in MB for the VM"
type = number
default = 2048

validation {
condition = var.vm_memory >= 512
error_message = "VM memory must be at least 512 MB."
}
}

variable "vm_sockets" {
description = "Number of CPU sockets for the VM"
type = number
default = 1

validation {
condition = var.vm_sockets > 0 && var.vm_sockets <= 4
error_message = "VM sockets must be between 1 and 4."
}
}

# VM Hardware Configuration

variable "vm_boot_order" {
description = "Boot order for the VM"
type = string
default = "order=ide0;net0"
}

variable "vm_scsihw" {
description = "SCSI hardware type"
type = string
default = "virtio-scsi-pci"
}

variable "vm_qemu_agent" {
description = "Enable QEMU guest agent"
type = number
default = 1

validation {
condition = contains([0, 1], var.vm_qemu_agent)
error_message = "QEMU agent must be 0 (disabled) or 1 (enabled)."
}
}

# Network Configuration

variable "vm_network_model" {
description = "Network model for the VM"
type = string
default = "virtio"
}

variable "vm_network_bridge" {
description = "Network bridge for the VM"
type = string
default = "vmbr0"
}

# Disk Configuration

variable "vm_disk_type" {
description = "Disk type for the VM"
type = string
default = "scsi"
}

variable "vm_disk_storage" {
description = "Storage location for the VM disk"
type = string
default = "local-lvm"
}

variable "vm_disk_size" {
description = "Size of the VM disk"
type = string
default = "10G"
}

# Cloud-Init Configuration

variable "vm_ci_user" {
description = "Cloud-init username"
type = string
default = "ubuntu"
}

variable "vm_ci_password" {
description = "Cloud-init password"
type = string
sensitive = true
default = ""
}

variable "vm_ip_config" {
description = "IP configuration for the VM (e.g., 'ip=192.168.1.110/24,gw=192.168.1.1')"
type = string
default = ""
}

variable "vm_ssh_keys" {
description = "Path to SSH public key file"
type = string
default = ""
}

variable "vm_tags" {
description = "Tags for the VM"
type = string
default = "terraform,ubuntu"
}

# Legacy variables (kept for backward compatibility)

variable "ssh_key_path" {
description = "Path to SSH public key file (deprecated, use vm_ssh_keys)"
type = string
default = "~/.ssh/id_rsa.pub"
}

variable "vm_ip" {
description = "VM IP address (deprecated, use vm_ip_config)"
type = string
default = ""
}

variable "gateway" {
description = "Gateway IP address (deprecated, use vm_ip_config)"
type = string
default = ""
}

# LXC variables (if needed for future use)

variable "lxc_name" {
description = "Name of the LXC container"
type = string
default = "alpine-lxc"
}

variable "lxc_template" {
description = "LXC template path"
type = string
default = "local:vztmpl/alpine-3.19-default_20240110_amd64.tar.xz"
}

variable "lxc_ip" {
description = "LXC container IP address"
type = string
default = ""
}

terraform.tfvars

For transparency, this is how `terraform.tfvars` is formatted in a test environment.


# Proxmox API Configuration
proxmox_api_url = ""
proxmox_user = "terraform@pam"
proxmox_token_id = "terraform@pam!terraform"
proxmox_token_secret = "REDACTED-API-TOKEN"
proxmox_tls_insecure = true
proxmox_node = "pve01"

# VM Template Configuration

vm_template = "ubuntu-24.04-template"

# VM Basic Configuration

vm_name = "ubuntutest01"
vm_cores = 2
vm_memory = 2048
vm_sockets = 1

# VM Hardware Configuration

vm_boot_order = "order=ide0;net0"
vm_scsihw = "virtio-scsi-pci"
vm_qemu_agent = 1

# Network Configuration

vm_network_model = "virtio"
vm_network_bridge = "vmbr0"

# Disk Configuration

vm_disk_type = "scsi"
vm_disk_storage = "local-lvm"
vm_disk_size = "100G"

# Cloud-Init Configuration

vm_ci_user = "ubuntu"
vm_ci_password = "your-secure-password"
vm_ip_config = "ip=dhcp"
vm_ssh_keys = ""

# VM Tags

vm_tags = "terraform,ubuntu,production"

# Example configurations for different scenarios:

# DHCP configuration (no static IP)
# vm_ip_config = "ip=dhcp"

# Static IP configuration
# vm_ip_config = "ip=192.168.1.100/24,gw=192.168.1.1"

# Larger VM configuration
# vm_cores     = 4
# vm_memory    = 8192
# vm_disk_size = "50G"

# Development environment tags
# vm_tags = "terraform,ubuntu,development"

outputs.tf


# VM Information Outputs
output "vm_id" {
description = "The ID of the created VM"
value = proxmox_virtual_environment_vm.ubuntu_vm.vm_id
}

output "vm_name" {
description = "The name of the created VM"
value = proxmox_virtual_environment_vm.ubuntu_vm.name
}

output "vm_node" {
description = "The Proxmox node where the VM is running"
value = proxmox_virtual_environment_vm.ubuntu_vm.node_name
}

output "vm_template_name" {
description = "The template name used to create the VM"
value = var.vm_template
}

output "vm_template_id" {
description = "The template ID used to create the VM"
value = local.template_vm_id
}

# VM Configuration Outputs

output "vm_cores" {
description = "Number of CPU cores assigned to the VM"
value = var.vm_cores
}

output "vm_memory" {
description = "Amount of memory (MB) assigned to the VM"
value = var.vm_memory
}

output "vm_sockets" {
description = "Number of CPU sockets assigned to the VM"
value = var.vm_sockets
}

# Network Information

output "vm_ip_config" {
description = "IP configuration of the VM"
value = var.vm_ip_config
sensitive = false
}

output "vm_network_bridge" {
description = "Network bridge used by the VM"
value = var.vm_network_bridge
}

# Cloud-Init Information

output "vm_ci_user" {
description = "Cloud-init username"
value = var.vm_ci_user
}

# VM Status and Connection Info

output "vm_tags" {
description = "Tags assigned to the VM"
value = var.vm_tags
}

# SSH Connection Information

output "ssh_connection_info" {
description = "SSH connection information for the VM"
value = var.vm_ip_config != "" && var.vm_ip_config != "ip=dhcp" && var.vm_ci_user != "" ? {
user = var.vm_ci_user
host = split("/", split("=", var.vm_ip_config)[1])[0]
command = "ssh ${var.vm_ci_user}@${split("/", split("=", var.vm_ip_config)[1])[0]}"
} : {
user = var.vm_ci_user
host = "DHCP - check Proxmox console for IP"
command = "Check Proxmox console for assigned IP address"
}
}

# Summary Output

output "vm_summary" {
description = "Summary of the created VM"
value = {
id = proxmox_virtual_environment_vm.ubuntu_vm.vm_id
name = proxmox_virtual_environment_vm.ubuntu_vm.name
node = proxmox_virtual_environment_vm.ubuntu_vm.node_name
template_id = local.template_vm_id
template_name = var.vm_template
cores = var.vm_cores
memory = var.vm_memory
ip_config = var.vm_ip_config
tags = var.vm_tags
}
}

Once these files are in place, execute a `terraform plan` to review the proposed changes, followed by `terraform apply` to provision your new VM.

[Image: Running a terraform plan]

[Image: Running a terraform apply]

This configuration can provision a ready-to-use VM in approximately 20 seconds, significantly boosting your **home lab** efficiency.

Streamlining LXC Container Deployment with Terraform

LXC container provisioning with Terraform is equally straightforward. I commonly use Alpine or Ubuntu containers for their lightweight nature and efficiency in various workloads.


terraform {
  required_providers {
    proxmox = {
      source = "telmate/proxmox"
    }
  }
}

provider "proxmox" {
  pm_api_url          = var.pm_api_url
  pm_api_token_id     = var.pm_api_token_id
  pm_api_token_secret = var.pm_api_token_secret
  pm_tls_insecure     = true
}

resource "proxmox_lxc" "test-container" {
  count        = var.lxc_count
  hostname     = "LXC-test-${count.index + 1}"
  target_node  = var.node_name
  vmid         = 1000 + count.index # Ensure unique VMIDs
  ostemplate   = "local:vztmpl/${var.lxc_template}"
  cores        = var.lxc_cores
  memory       = var.lxc_memory
  password     = var.lxc_password
  unprivileged = true
  onboot       = true
  start        = true

  rootfs {
    storage = "local-lvm"
    size    = "2G"
  }

  features {
    nesting = true
  }

  network {
    name   = "eth0"
    bridge = "vmbr0"
    ip     = "dhcp"
    type   = "veth"
  }
}

terraform.tfvars


pm_api_url = ""
pm_api_token_id = "REDACTED-TOKEN-ID"
pm_api_token_secret = "REDACTED-API-TOKEN"
node_name = "REDACTED-NODE-NAME" # replace with your Proxmox node name
lxc_password = "REDACTED-PASSWORD"

variables.tf

Notice the `lxc_count` variable at the bottom, which determines how many containers will be provisioned. It is currently set to 3, but you can adjust this as needed.


variable "pm_api_url" {
  type = string
}

variable "pm_api_token_id" {
  type = string
}

variable "pm_api_token_secret" {
  type      = string
  sensitive = true
}

variable "node_name" {
  type    = string
  default = "REDACTED-NODE-NAME"
}

variable "lxc_template" {
  type    = string
  default = "ubuntu-22.04-standard_22.04-1_amd64.tar.zst"
}

variable "lxc_storage" {
  type    = string
  default = "local-lvm"
}

variable "lxc_password" {
  type      = string
  default   = "REDACTED-PASSWORD"
  sensitive = true
}

variable "lxc_memory" {
  type    = number
  default = 384
}

variable "lxc_cores" {
  type    = number
  default = 1
}

variable "lxc_count" {
  type    = number
  default = 3
}

Enhancing Reusability with Terraform Modules for Scale

When provisioning multiple VMs or containers, leveraging Terraform modules is crucial to prevent code duplication and promote maintainability. Modules encapsulate reusable configurations, making your **Infrastructure as Code** more organized and scalable.

In a `modules/vm/` folder, you might have files like this:


variable "name" {}
variable "ip" {}
variable "clone_template" {}

resource "proxmox_vm_qemu" "vm" {
  name      = var.name
  clone     = var.clone_template
  ipconfig0 = "ip=${var.ip}/24,gw=192.168.1.1"
  # ...
}

Then, you can call this module in your root `main.tf` like so:


module "vm1" {
  source         = "./modules/vm"
  name           = "webserver01"
  ip             = "192.168.1.101"
  clone_template = "ubuntu-24.04-template"
}
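To avoid writing one module block per VM, you can also drive the same module with `for_each` (the map of names to IPs here is illustrative):

```hcl
# One module invocation per entry in the map; add a VM by adding a line.
module "vms" {
  source   = "./modules/vm"
  for_each = {
    webserver01 = "192.168.1.101"
    webserver02 = "192.168.1.102"
  }

  name           = each.key
  ip             = each.value
  clone_template = "ubuntu-24.04-template"
}
```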

Pro Tips for Optimizing Cloud-Init Templates in Proxmox

If you’re creating your own cloud-init templates for automated deployments, consider these key steps for optimal results:

  • Start Fresh: Begin with a clean Ubuntu VM. Install cloud-init, update all packages, then shut it down. Convert this VM into a template within Proxmox.
  • Configure CD Drive: Ensure you select “Cloud-Init” under the Hardware > CD Drive settings for the template.
  • SSH Key Integration: For seamless access, add your SSH public key to `/home/ubuntu/.ssh/authorized_keys` within the template, then test the first boot to confirm access.
  • QEMU Guest Agent: Implement the `qemu-guest-agent` for enhanced integration and reporting between your VMs and Proxmox.

Unique Tip: Consider leveraging Terraform's `remote-exec` provisioner (or triggering an Ansible playbook post-deploy) for initial configuration, e.g. installing Docker, setting up a specific service, or deploying a **self-hosted** application, right after your VM/LXC is spun up. This creates a truly ready-to-use instance without manual intervention, saving even more time in your **home lab**.
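A minimal sketch of that idea, assuming SSH key access and the bpg VM resource from the earlier template (the host IP and user below are placeholders):

```hcl
resource "proxmox_virtual_environment_vm" "docker_host" {
  # ... clone/cpu/memory/initialization blocks as in the earlier template ...

  # Runs once, right after the VM is created.
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y docker.io",
    ]

    connection {
      type        = "ssh"
      host        = "192.168.1.110" # assumed static IP from cloud-init
      user        = "ubuntu"
      private_key = file("~/.ssh/id_rsa")
    }
  }
}
```

Note that `remote-exec` needs the guest reachable over SSH before the apply finishes, so a static IP (or a working DHCP reservation) makes this much more reliable.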

Integrating Terraform with CI/CD for Advanced Automation

One of the most powerful aspects of having your infrastructure defined as code is the ability to integrate it with Continuous Integration/Continuous Deployment (CI/CD) pipelines. Once your Terraform templates are refined and tested, you can plug them into tools like GitLab CI or Gitea pipelines for unparalleled automation in your **home lab**.

A basic GitLab CI configuration for Terraform might look like this:


terraform:
  script:
    - terraform init
    - terraform plan -out=tfplan
    - terraform apply -auto-approve tfplan
For scenarios where you need to tear down and then provision entirely fresh lab resources, you can execute a `terraform destroy` followed by a `terraform apply` within your pipeline, enabling rapid iteration and environment resets.

Conclusion: Elevate Your Home Lab with Terraform & Proxmox

The power of automation, particularly with Terraform and Proxmox, cannot be overstated in a **self-hosting** or **home lab** context. The advantages are numerous: spinning up new VMs or containers in mere seconds, ensuring predictable and repeatable configurations, and the ability to integrate with advanced automation tools like CI/CD pipelines. This mastery of **Infrastructure as Code** not only streamlines your personal projects but also hones skills highly valued in professional environments.

Are you currently using Terraform with your Proxmox environment? If so, what aspects of your **home lab** are you automating? Share your experiences in the comments below!



FAQ

Why is Infrastructure as Code (IaC) crucial for my home lab?

IaC, specifically with Terraform, is vital for your home lab because it transforms manual setup into reproducible code. This means consistent deployments, faster recovery from failures, easy scaling of resources, and version control for your entire infrastructure. It’s the difference between building with blueprints versus trial-and-error.

What self-hosted applications can I easily deploy with Terraform on Proxmox?

With Terraform and pre-configured templates, you can rapidly deploy a wide array of self-hosted applications. Common examples include personal cloud solutions like Nextcloud, media servers such as Plex or Jellyfin, home automation hubs like Home Assistant, network services like Pi-hole or AdGuard Home, or even complex setups like a small Kubernetes cluster for container orchestration. The speed of deployment allows for quick experimentation and robust, repeatable service provisioning.

How does Terraform manage state in Proxmox, and why is it important?

Terraform manages the state of your infrastructure in a state file (e.g., `terraform.tfstate`). This file acts as a record of what Terraform has deployed and configured in your Proxmox environment. It’s crucial because it allows Terraform to understand the current state of your resources, compare it with your desired configuration (your HCL code), and determine what changes need to be applied. For collaborative environments or preventing resource drift, storing this state remotely (e.g., on a network share or in an S3-compatible bucket) is a best practice for consistency and safety.
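As a sketch of remote state, a `backend` block pointing at an S3-compatible store (such as a self-hosted MinIO instance; the endpoint and bucket names here are assumptions, and exact flag names vary slightly across Terraform versions) keeps `terraform.tfstate` off your workstation:

```hcl
terraform {
  backend "s3" {
    bucket                      = "terraform-state"          # assumed bucket name
    key                         = "proxmox/homelab.tfstate"  # path within the bucket
    region                      = "us-east-1"                # required, but unused by MinIO
    endpoint                    = "https://minio.example.lan:9000"
    skip_credentials_validation = true
    force_path_style            = true # older Terraform; newer releases use use_path_style
  }
}
```

After adding a backend block, re-run `terraform init` so Terraform can migrate the existing local state.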



