For a while I managed Proxmox firewall rules either directly in the web UI or mixed into the same Terraform configuration that creates virtual machines.
Both approaches worked, but they had drawbacks:
- Changing a single rule meant clicking through multiple screens, or
- Refactoring VM resources in Terraform risked touching firewall policies at the same time.
Eventually I settled on a pattern that keeps firewall configuration completely separate from VM definitions, while still using Terraform for everything and driving it via the Proxmox API.
This article documents that approach and shows the real Terraform code I use today.
Goals
The design goals were:
- Firewall configuration should be managed as code and versioned.
- Firewall rules should live in a separate Terraform project from VM lifecycle.
- The firewall project should work even if VMs are created elsewhere (another repo, another tool, or manually).
- Everything should be driven through the Proxmox API, not shell scripts on the nodes.
To achieve this, I use the bpg/proxmox Terraform provider.
Project layout
I keep two independent Terraform projects (and repositories):
- infra-proxmox-vms/ – VM templates, storage, networks, and VM lifecycle.
- infra-proxmox-firewall/ – all firewall configuration: cluster defaults, aliases, IP sets, security groups, and per-VM rules.
The firewall project does not depend on Terraform resources from the VM project. It only talks to the Proxmox cluster via the API and discovers VMs via data sources.
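As a rough illustration (file names here are only an example, not the actual repositories), the firewall project might be organised like this:
infra-proxmox-firewall/
├── providers.tf        # bpg/proxmox provider configuration
├── cluster.tf          # cluster-wide firewall baseline
├── aliases_ipsets.tf   # aliases and IP sets
├── security_groups.tf  # reusable security groups
└── dns.tf              # VM discovery plus per-VM options and rules for DNS VMs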
Provider configuration
The firewall project starts with a standard bpg/proxmox provider configuration:
terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "~> 0.89.1"
    }
  }
}
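The provider block itself is omitted above; a minimal sketch, assuming API-token authentication against a hypothetical endpoint (both values are placeholders, not taken from the original project):
variable "proxmox_api_token" {
  type      = string
  sensitive = true # e.g. "terraform@pve!firewall=<token-secret>"
}

provider "proxmox" {
  endpoint  = "https://pve.example.internal:8006/"
  api_token = var.proxmox_api_token
  insecure  = false # only set to true for self-signed certificates in a lab
}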
Cluster-wide firewall baseline
First, I ensure that the cluster firewall is globally enabled and has a predictable baseline:
resource "proxmox_virtual_environment_cluster_firewall" "this" {
enabled = true
ebtables = true
input_policy = "ACCEPT"
output_policy = "ACCEPT"
forward_policy = "ACCEPT"
log_ratelimit {
enabled = true
burst = 5
rate = "1/second"
}
}
Aliases and IP sets
Next, I define a few aliases and IP sets that are reused across security groups.
Service alias (Zabbix)
resource "proxmox_virtual_environment_firewall_alias" "zabbix" {
name = "zabbix_server"
cidr = "192.0.2.10" # example Zabbix server address (documentation range)
}
Administrative IP sets
Two IP sets for administrative access (e.g. jump hosts or admin workstations):
resource "proxmox_virtual_environment_firewall_ipset" "ipset_ssh" {
name = "ipset_ssh"
cidr { name = "198.51.100.10" } # admin workstation A
cidr { name = "198.51.100.20" } # admin workstation B
}
resource "proxmox_virtual_environment_firewall_ipset" "ipset_admin" {
name = "ipset_admin"
cidr { name = "198.51.100.10" } # admin workstation A
cidr { name = "198.51.100.20" } # admin workstation B
}
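Both IP sets currently contain the same two addresses. If that duplication grows, the member list can be pulled into a local value and expanded with a dynamic block; a minimal sketch of this variation (the local name is hypothetical):
locals {
  admin_workstations = ["198.51.100.10", "198.51.100.20"]
}

resource "proxmox_virtual_environment_firewall_ipset" "ipset_admin" {
  name = "ipset_admin"

  # One cidr block is generated per address in the local list.
  dynamic "cidr" {
    for_each = toset(local.admin_workstations)
    content {
      name = cidr.value
    }
  }
}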
Reusable security groups
With aliases and IP sets in place, I define a set of cluster-level security groups that model common policies (Zabbix, SSH, RDP, DNS, generic egress, and a default drop).
Zabbix monitoring
This group allows Zabbix server ↔ agent traffic:
resource "proxmox_virtual_environment_cluster_firewall_security_group" "allow_zabbix" {
name = "allow_zabbix"
rule {
type = "in"
source = proxmox_virtual_environment_firewall_alias.zabbix.name
action = "ACCEPT"
proto = "tcp"
dport = "10050"
comment = "Allow Zabbix Server to Zabbix Agent"
log = "nolog"
}
rule {
type = "out"
dest = proxmox_virtual_environment_firewall_alias.zabbix.name
action = "ACCEPT"
proto = "tcp"
dport = "10051"
comment = "Allow Zabbix Agent to Zabbix Server"
log = "nolog"
}
}
Generic egress
A simple group to allow all outbound traffic from the VM:
resource "proxmox_virtual_environment_cluster_firewall_security_group" "allow_egress" {
name = "allow_egress"
rule {
type = "out"
action = "ACCEPT"
log = "nolog"
}
}
In combination with a DROP input policy this gives a clear, asymmetric model: outbound is open by default, inbound is explicit.
SSH access
SSH from a dedicated IP set:
resource "proxmox_virtual_environment_cluster_firewall_security_group" "allow_ssh" {
depends_on = [proxmox_virtual_environment_firewall_ipset.ipset_ssh]
name = "allow_ssh"
rule {
type = "in"
source = "+${proxmox_virtual_environment_firewall_ipset.ipset_ssh.name}"
action = "ACCEPT"
proto = "tcp"
dport = "22"
comment = "Allow SSH from IP set"
log = "info"
}
}
Note the +ipset_name syntax in source: the leading + tells the Proxmox firewall to match against an IP set rather than a literal address or alias.
RDP access
For Windows or other GUI-based workloads I use a separate admin IP set:
resource "proxmox_virtual_environment_cluster_firewall_security_group" "allow_rdp" {
depends_on = [proxmox_virtual_environment_firewall_ipset.ipset_admin]
name = "allow_rdp"
rule {
type = "in"
source = "+${proxmox_virtual_environment_firewall_ipset.ipset_admin.name}"
action = "ACCEPT"
proto = "tcp"
dport = "3389"
comment = "Allow RDP from admin IP set"
log = "info"
}
}
DNS server
A group for VMs that provide DNS:
resource "proxmox_virtual_environment_cluster_firewall_security_group" "serve_dns" {
name = "serve_dns"
rule {
type = "in"
action = "ACCEPT"
proto = "tcp"
dport = "53"
log = "nolog"
}
rule {
type = "in"
action = "ACCEPT"
proto = "udp"
dport = "53"
log = "nolog"
}
}
Default drop
Finally, a catch-all group to drop everything else inbound:
resource "proxmox_virtual_environment_cluster_firewall_security_group" "drop_in_other" {
name = "drop_in_other"
rule {
type = "in"
action = "DROP"
comment = "Drop all other inbound traffic"
log = "info"
}
}
Discovering DNS VMs via data source
The interesting part is that firewall rules are not tied to VM resources. Instead, I discover VMs that should receive a given policy, based on tags.
In the Terraform firewall project, I find all VMs tagged dns:
data "proxmox_virtual_environment_vms" "dns" {
tags = ["dns"]
}
This data source talks directly to Proxmox and returns information about all matching VMs, regardless of how they were created.
From that result I build a map keyed by VM ID:
locals {
  dns_vms = {
    for vm in data.proxmox_virtual_environment_vms.dns.vms :
    tostring(vm.vm_id) => vm
  }
}
I then use local.dns_vms in for_each loops for both the per-VM firewall options and the per-VM firewall rules.
Per-VM firewall options for DNS
First, I enable and configure the VM-level firewall:
resource "proxmox_virtual_environment_firewall_options" "dns" {
for_each = local.dns_vms
node_name = each.value.node_name
vm_id = each.value.vm_id
enabled = true
input_policy = "DROP"
output_policy = "ACCEPT"
dhcp = false # keep true if the VM uses DHCP; false if IP is static
ipfilter = false # IP spoofing protection (filter per assigned IPs)
macfilter = false # MAC-level filtering
ndp = false # useful for IPv6 environments
radv = false # IPv6 router advertisements
log_level_in = "info"
log_level_out = "nolog"
}
Each DNS VM receives the same baseline policy:
- Firewall enabled,
- Inbound default DROP,
- Outbound default ACCEPT,
- Logging configured consistently.
Per-VM firewall rules for DNS servers
Finally, I attach the reusable security groups to each DNS VM:
resource "proxmox_virtual_environment_firewall_rules" "dns" {
for_each = local.dns_vms
node_name = each.value.node_name
vm_id = each.value.vm_id
rule {
security_group = proxmox_virtual_environment_cluster_firewall_security_group.allow_egress.name
}
rule {
security_group = proxmox_virtual_environment_cluster_firewall_security_group.allow_zabbix.name
}
rule {
security_group = proxmox_virtual_environment_cluster_firewall_security_group.allow_ssh.name
}
rule {
security_group = proxmox_virtual_environment_cluster_firewall_security_group.serve_dns.name
}
rule {
security_group = proxmox_virtual_environment_cluster_firewall_security_group.drop_in_other.name
}
}
Key points:
- The rules are attached per VM, but expressed entirely in terms of cluster security groups.
- Adding a new DNS VM is as simple as setting the dns tag on it in Proxmox or in the VM Terraform project (see the sketch after this list).
- The next terraform apply in the firewall project automatically picks up the new VM and applies the standard DNS policy.
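For reference, in the VM project the only requirement is that the VM carries the tag; a minimal sketch using the same provider, with hypothetical VM and node names:
resource "proxmox_virtual_environment_vm" "dns01" {
  name      = "dns01"
  node_name = "pve-node-01"
  tags      = ["dns"] # picked up by the firewall project's data source

  # ... clone, disk, and network configuration as usual ...
}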
Summary
This pattern turned out to be a simple and robust way to manage Proxmox firewall configuration with Terraform:
- Use the bpg/proxmox provider in a dedicated firewall project.
- Define cluster-wide aliases, IP sets, and security groups once.
- Discover VMs by tags or naming conventions via data sources.
- Apply firewall options and rules using for_each over those discovered VMs.
The result is a firewall configuration that:
- Is fully reproducible and versioned,
- Does not depend on how VMs are defined or where their code lives,
- And can be applied to both Terraform-managed and manually created VMs, as long as they follow the tagging conventions.