
Proxmox Kiosk Desktop Appliance: Windows VM, GPU Passthrough, and Fast Recovery

Stanislav Cherkasov · {DevOps,DevSecOps,Platform} Engineer
homelab - This article is part of a series.
Part : This Article

A Windows desktop is easy to love until it stops booting and the machine is in someone else’s house.

At that point, the interesting question is not how elegant your troubleshooting is. It is how fast you can get the user back to a working desktop.

That is why I use Proxmox VE as the base for certain Windows systems used by relatives and other non-technical users. The goal is simple: keep the desktop experience completely normal for the person sitting in front of it, while making the machine behave like an appliance from an operational point of view.

The user sees a regular local Windows desktop with a monitor, keyboard, mouse, browser, printer, webcam, and audio. I get host-level control, daily backups, rollback options, and a recovery path that is often faster than debugging Windows live.

I have been running this pattern since 2020. Over time it grew into four similar setups and proved stable enough for real day-to-day use.

By “kiosk” here I do not mean a locked-down public terminal. I mean an appliance-style Windows desktop that behaves predictably, is easy to recover remotely, and does not turn every bad update or user mistake into a long support session.

TL;DR
#

  • Install Proxmox VE on bare metal on an ordinary desktop-class machine.
  • Run the desktop as a Windows VM that autostarts on boot.
  • Use PCIe GPU passthrough so the local monitor is driven by the guest.
  • Keep peripheral handling simple and boring. A cheap USB audio dongle is still one of my favorite defaults.
  • Keep user data outside the desktop VM.
  • Use VPN-first admin access and treat the hypervisor as the external control plane.
  • Back up the desktop VM to Proxmox Backup Server and recover by restoring known-good state instead of improvising live repair.

Why I built it this way
#

This design did not come from a desire to virtualize everything.

It came from a simpler problem: sometimes a Windows desktop used by relatives just needs to be dependable, and I need to be able to recover it remotely without turning every incident into a long repair session.

That changed the design goal. I stopped thinking in terms of “how do I fix every possible Windows problem in place?” and started thinking in terms of “how do I get the machine back to a known-good state quickly and safely?”

That shift matters. It changes what “good desktop architecture” means.

What this is not
#

This is not a universal replacement for a normal Windows PC.

It is not the simplest option for a single local user who never needs remote support, never breaks anything, and is happy to reinstall Windows when needed.

It is also not VDI, not a thin-client design, and not a locked-down browser-only terminal.

This pattern is specifically useful when local desktop UX matters, but predictable remote recovery matters even more.

Where this model makes sense
#

This pattern is most useful when a desktop must feel local and normal for the user, but support and recovery need to be predictable for someone else.

Typical examples include:

  • relatives or non-technical users you support remotely
  • front desk or reception systems
  • training or lab PCs that need a known-good reset path
  • small office workstations where downtime costs more than virtualization overhead
  • single-purpose or low-change desktops that still need full Windows compatibility

The common requirement is not peak benchmark numbers. It is recoverability.

What the user sees vs what I control
#

What the user sees:

  • a power button
  • a normal Windows login on a local monitor
  • a normal browser, printer, webcam, office apps, and audio devices
  • a desktop that behaves like a desktop, not like a remote session

What I control remotely:

  • the Proxmox host state before the guest OS even boots
  • VM power, boot, and device assignment
  • scheduled backups and restores
  • firewall policy outside the guest OS
  • recovery from a clean external control plane

That split is the whole point. Familiar UX for the user, predictable operations for the person supporting the machine.

Why not just run Windows on bare metal?
#

A bare-metal Windows PC gives you one control plane.

If Windows is unstable, the thing you are relying on to fix it is the same thing that is failing. You are debugging inside the failure domain.

Proxmox changes that. I can power-cycle the VM, check hardware assignment, review backups, enforce firewall policy, and restore known-good state before the guest OS becomes trustworthy again.

That is the real benefit here. Not novelty. Not homelab points. A cleaner operational boundary.
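The host-side operations above can be sketched with the standard Proxmox CLI. This is illustrative, not a runbook; `[VMID]` and `[PBS_STORAGE]` are placeholders for the real identifiers.

```shell
# Out-of-band VM operations from the Proxmox host shell.

qm status [VMID]     # power state as seen by the hypervisor, not by Windows
qm stop [VMID]       # hard power-cycle when the guest is unresponsive
qm start [VMID]

qm config [VMID]     # review device assignment (GPU, USB, disks) before boot

# Review available restore points on a PBS-backed storage:
pvesm list [PBS_STORAGE] --vmid [VMID]
```

All of this works whether or not the guest OS is in a trustworthy state, which is exactly the point of the external control plane.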

Architecture overview
#

flowchart LR
user[User at desk]
monitor[Monitor]
peripherals[Keyboard mouse audio USB]

subgraph host[Proxmox host]
pve[Proxmox VE]
fw[Proxmox firewall]
vm[Windows VM autostart]
gpu[Passed-through GPU]
usb[USB passthrough]
end

subgraph services[Supporting services]
pbs[Proxmox Backup Server]
fs[File server VM]
nc[Nextcloud]
ups[UPS and NUT]
end

admin[Admin laptop]
vpn[VPN access]

user --> monitor
user --> peripherals

monitor --> gpu --> vm
peripherals --> usb --> vm

pve --> vm
fw --> vm

admin --> vpn --> pve

pve --> pbs
vm --> fs
vm --> nc
ups --> pve

What I have actually validated
#

This is not a theoretical layout.

I have been running this pattern since 2020, and over time it grew into four similar setups. The validated baseline is:

  • Proxmox VE on bare metal
  • Windows running as an autostart VM
  • a dedicated PCIe GPU passed through for local monitor output
  • USB passthrough for normal workstation peripherals
  • daily backups through Proxmox Backup Server
  • restore-based recovery when updates or Windows drift caused trouble

In day-to-day use, these machines behave like normal desktops for the people using them. Operationally, they are much easier to support than a bare-metal Windows install because recovery happens outside the guest OS.

The result is not that Windows never breaks.

The result is that the desktop remains usable as a normal workstation for the user, while recovery becomes much faster and more predictable for the person supporting it.

Hardware context
#

The validated baseline described here runs on ordinary consumer desktop hardware rather than server gear.

One representative host in my setup uses:

  • Proxmox VE 9.1 on Debian 13 with a 6.17 PVE kernel
  • an 8-core / 16-thread AMD Ryzen-class CPU with AMD-V
  • 64 GiB RAM
  • an older AM4 consumer motherboard with working IOMMU isolation
  • a dedicated AMD GPU passed through to the Windows VM
  • wired Ethernet
  • SSD-backed storage for the desktop VM
  • remote Proxmox Backup Server targets for daily backups
  • a USB-connected UPS integrated with NUT

The exact hostnames, PCI addresses, bridge names, storage identifiers, backup datastore names, MAC addresses, UUIDs, serial numbers, and internal network layout are intentionally omitted. For this design, what matters is not the exact label on the hardware but clean IOMMU grouping, enough RAM, predictable storage, and a reliable restore path.

On the representative host shown here, the passed-through GPU and its HDMI audio function sit together in their own IOMMU group, which keeps GPU passthrough straightforward.
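Clean grouping can be checked from the host shell before committing to a passthrough design. This is a small sketch that walks sysfs; empty output simply means IOMMU is disabled in firmware or on the kernel command line.

```shell
# List every IOMMU group and the PCI devices inside it.
for group in /sys/kernel/iommu_groups/*/; do
  [ -d "$group" ] || continue          # skip the literal glob when no groups exist
  printf 'IOMMU group %s:\n' "$(basename "$group")"
  for dev in "$group"devices/*; do
    # lspci -nns gives readable vendor/device names; fall back to the raw address
    lspci -nns "$(basename "$dev")" 2>/dev/null || printf '  %s\n' "$(basename "$dev")"
  done
done
```

What you want to see is the GPU and its HDMI audio function alone in one group, with nothing else sharing it.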

At the moment, my validated baseline uses full PCIe passthrough for the desktop GPU. I am testing iGPU-based variants separately, but they are not part of the proven setup described in this article.

Representative desktop VM profile
#

One baseline desktop VM in this model uses:

  • Windows 11
  • 8 vCPU
  • 10 GiB RAM
  • cpu: host
  • OVMF + Q35
  • TPM 2.0
  • QEMU guest agent
  • virtio storage and networking
  • one dedicated passed-through GPU
  • per-device USB passthrough for workstation peripherals
  • autostart on host boot
  • daily backups via Proxmox Backup Server

These exact numbers are not mandatory. They simply describe one stable desktop profile from my current environment.
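For reference, roughly the same profile can be expressed as a single `qm create` call. Values are illustrative and `[VMID]`, `[LAN_BRIDGE]`, and `[FAST_SSD_STORAGE]` are placeholders; GPU and USB passthrough are added afterwards with `qm set`.

```shell
qm create [VMID] \
  --name win11-desktop \
  --ostype win11 \
  --machine q35 \
  --bios ovmf \
  --efidisk0 [FAST_SSD_STORAGE]:1,efitype=4m,pre-enrolled-keys=1 \
  --tpmstate0 [FAST_SSD_STORAGE]:1,version=v2.0 \
  --cores 8 --cpu host --memory 10240 --balloon 0 \
  --scsihw virtio-scsi-single \
  --virtio0 [FAST_SSD_STORAGE]:100,discard=on,iothread=1 \
  --net0 virtio,bridge=[LAN_BRIDGE] \
  --agent 1 \
  --onboot 1
```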

Fast restore only works if user data is handled separately
#

A restore-first desktop model only makes sense if user data is not tightly coupled to the Windows system disk.

In my setups, I do not rely on OneDrive for this. I explicitly disable that path because I want storage and sync to be under my control, not tied to a third-party desktop sync model.

Instead, I keep data in services designed for that purpose:

  • a separate file server VM for shared and persistent storage
  • Nextcloud for the parts that benefit from sync and remote access

That gives me a cleaner operational boundary. The Windows desktop VM is primarily workspace and application state. The important user data lives outside it.

This matters because restoring the desktop should not mean rolling back everything the user worked on that day.

flowchart TB
problem[Windows update issue, driver regression, or user mistake]
decision[Restore desktop VM]

subgraph desktop[Desktop VM]
os[Windows OS]
apps[Applications]
state[Local desktop state]
end

subgraph data[Data outside the desktop VM]
files[File server VM]
cloud[Nextcloud]
end

problem --> decision --> desktop
desktop -. restore affects .-> os
desktop -. restore affects .-> apps
desktop -. restore affects .-> state
decision -. does not replace .-> files
decision -. does not replace .-> cloud

A real failure this setup is designed for
#

One of the most useful real-world cases was a bad Windows update.

The machine was no longer in a state I wanted to troubleshoot live. Because I keep daily backups at 03:00 in Proxmox Backup Server, I restored the last known-good state and had the desktop usable again in about five minutes on local SSD-backed storage.

That number is specific to this kind of setup. Measure your own recovery time if your storage, network, or backup placement differ.

That is exactly the kind of incident this design is meant for. The point is not that Windows never breaks. The point is that recovery is faster and more predictable when the desktop is treated like a VM-backed appliance instead of a one-off snowflake machine.

What I test during a restore drill
#

I do not treat “the VM booted” as success.

After a restore, I want to see a usable desktop, not just a green status light in Proxmox. My practical acceptance check is:

  • the local monitor comes up on the passed-through GPU
  • keyboard and mouse work without replugging
  • audio works
  • the user can log in normally
  • network access is up
  • the desktop can reach the file server and Nextcloud
  • the machine is usable without local admin intervention

That is the standard that matters for a supportable desktop appliance.
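Several of these checks can be made from the host before asking the user to sit down, via the QEMU guest agent. A minimal sketch, with `[VMID]` as a placeholder:

```shell
# Host-side sanity checks after a restore.
qm status [VMID]                          # expect: status: running
qm agent [VMID] ping                      # agent answers once Windows is up
qm agent [VMID] network-get-interfaces    # guest networking has an address
```

The monitor, keyboard, and audio checks still need eyes and ears at the machine, but the agent checks filter out the obvious failures remotely.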

Implementation overview
#

This is not a full step-by-step passthrough tutorial. The point of this article is the operating model.

At a high level, the implementation looks like this:

  1. Install Proxmox VE on bare metal and keep the host minimal.
  2. Enable virtualization and IOMMU in BIOS, then validate device isolation.
  3. Create a Windows VM with OVMF, Q35, virtio drivers, and a sane baseline hardware profile.
  4. Pass through one dedicated GPU for local monitor output.
  5. Add the required USB devices for keyboard, mouse, audio, webcam, printer, or other peripherals.
  6. Enable autostart so the system behaves like an appliance after power-on.
  7. Keep user data outside the desktop VM.
  8. Back up the desktop VM regularly to Proxmox Backup Server.
  9. Recover by restoring known-good state when the desktop becomes messy enough to stop trusting it.

The less custom magic on the host, the better long-term support becomes.
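Step 2 above is the only part that touches the host configuration in a non-default way. On a stock GRUB-based Debian/Proxmox install with an AMD CPU, the prep is roughly this (paths are the stock ones; systemd-boot hosts differ):

```shell
# 1. Keep the IOMMU in passthrough mode. amd_iommu is enabled by default on
#    recent kernels, so only iommu=pt is usually needed.
#    In /etc/default/grub:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"
update-grub

# 2. Load the VFIO modules at boot.
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
EOF

update-initramfs -u -k all
reboot
```

Intel hosts additionally need `intel_iommu=on` on the kernel command line.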

Sanitized VM config example
#

This is a representative working pattern from my setup, with sensitive values masked.

# /etc/pve/qemu-server/[VMID].conf (sanitized)
agent: 1
balloon: 0
bios: ovmf
boot: order=virtio0
cores: 8
cpu: host
efidisk0: [FAST_SSD_STORAGE]:vm-[VMID]-disk-0,efitype=4m,format=raw,pre-enrolled-keys=1,size=528K
hostpci0: [GPU_PCI_SLOT],pcie=1,x-vga=1
machine: pc-q35-10.1
memory: 10240
name: [WORKSTATION_VM]
net0: virtio=[MASKED_MAC],bridge=[LAN_BRIDGE],mtu=1,queues=8
numa: 1
onboot: 1
ostype: win11
rng0: source=/dev/urandom,max_bytes=1024,period=1000
scsihw: virtio-scsi-single
serial0: socket
sockets: 1
startup: order=1,up=200,down=200
tags: windows
tpmstate0: [FAST_SSD_STORAGE]:vm-[VMID]-disk-1,size=4M,version=v2.0
usb0: host=[USB_PORT_A],usb3=1
usb1: host=[USB_PORT_B],usb3=1
usb2: host=[USB_PORT_C],usb3=1
usb3: host=[USB_PORT_D],usb3=1
usb4: host=[USB_PORT_E],usb3=1
vga: none
virtio0: [FAST_SSD_STORAGE]:vm-[VMID]-disk-2,aio=io_uring,backup=1,cache=none,discard=on,format=raw,iothread=1,size=100G

Notes:

  • Masked fields include PCI addresses, MAC addresses, storage names, VM IDs, USB paths, UUIDs, and internal bridge names.
  • machine: pc-q35-10.1 is what I run on this host today. The exact suffix varies by Proxmox and QEMU version.
  • mtu=1 is intentional: it is Proxmox shorthand for inheriting the MTU from the underlying Linux bridge.

Power events and remote recovery
#

Appliance-like behavior matters during power events too.

In my setups, Wake-on-LAN is part of the recovery model, UPS protection is in place, and NUT is used where appropriate to handle shutdown and recovery more cleanly.

The goal is simple:

  • after a power event, the box should either come back automatically or be easy to wake remotely
  • the Windows VM should autostart without user intervention
  • recovery should not depend on somebody standing next to the machine and guessing what to do

That is a small detail, but it is part of what makes the system feel supportable rather than fragile.
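The Wake-on-LAN piece has two halves: the NIC on the kiosk host must advertise magic-packet wake, and the admin side must be able to send one. A sketch with standard tools (`etherwake` and `wakeonlan` are common Debian packages; `[HOST_MAC]` is a placeholder):

```shell
# On the kiosk host: check and enable magic-packet wake ("g" flag).
ethtool eth0 | grep Wake-on
ethtool -s eth0 wol g

# From another machine on the same LAN:
wakeonlan [HOST_MAC]
```

Setting it via `ethtool -s` lasts until the next boot, which is why the persistent `ethernet-wol g` line lives in the interfaces file shown below.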

A sanitized example of the host-side network configuration looks like this:

# /etc/network/interfaces (sanitized)
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
  ethernet-wol g

auto vmbr0
iface vmbr0 inet static
  address 198.51.100.10/24
  gateway 198.51.100.1
  bridge-ports eth0
  bridge-stp off
  bridge-fd 0
  bridge-vlan-aware yes
  bridge-vids 2-4094

The IP addresses above are documentation ranges from RFC 5737.

Backups and why PBS changes the support model
#

Backups are the strongest argument for this design.

My baseline is simple:

  • daily VM backups
  • retention that keeps multiple recent restore points
  • restore procedures treated as part of normal operations, not as a theoretical safety net

Using Proxmox Backup Server improves this model because it gives efficient incremental backups, better retention handling, and a cleaner workflow for restore-based recovery.

Important constraints still apply:

  • backups help only if you can restore quickly and confidently
  • if PBS runs on the same physical host, it does not protect you from full host failure
  • PBS solves VM-state recovery, not user-data placement strategy
  • restore time depends on storage, network, and backup target placement

For the desktop VM itself, I plan around a practical RPO <= 24h because backups run daily. That is acceptable here because important user data lives outside the desktop VM.
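In my setups the daily job is configured in the Proxmox UI, but the same backup can be triggered manually from the host shell, which is useful right before a risky change. `[VMID]` and `[PBS_STORAGE]` are placeholders:

```shell
# One-off backup of the desktop VM to a PBS-backed storage.
vzdump [VMID] --storage [PBS_STORAGE] --mode snapshot

# List the restore points that exist for this VM:
pvesm list [PBS_STORAGE] --vmid [VMID]
```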

Incident workflow: restore first, debug later
#

When something breaks, I follow a simple flow:

  1. collect concrete symptoms from the user
  2. check host and VM status in Proxmox
  3. if damage is shallow, fix in place
  4. if damage is messy, restore the last clean backup and move on
  5. if needed, debug the broken state later, without pressure

sequenceDiagram
autonumber
participant U as User
participant P as Proxmox Host
participant B as PBS
participant W as Windows VM

U->>P: Reports issue
P->>W: Check current VM state
P->>B: Review restore points
B-->>P: Last known good backup available
P->>W: Stop VM if needed
P->>B: Restore VM
B-->>P: Restore completed
P->>W: Start VM
W-->>U: Working desktop again

In practice, this changed the support model from “repair whatever state the machine drifted into” to “restore a known-good state and move on”.
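The restore step in that sequence maps to a handful of host commands. A sketch, where `[VMID]` is the desktop VM and `[BACKUP_VOLID]` is the volume ID of the chosen restore point as shown by `pvesm list`:

```shell
qm stop [VMID]                            # stop the broken guest if still running
qmrestore [BACKUP_VOLID] [VMID] --force 1 # overwrite the VM with known-good state
qm start [VMID]
```

With PBS and local SSD storage, this whole sequence is what took about five minutes in the bad-update incident described earlier.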

Security boundary outside the guest OS
#

For this model, Proxmox firewall is my primary network boundary. Windows firewall is still useful, but it is a secondary layer inside the guest.

That matters because:

  • policy is enforced outside the guest OS
  • rules remain consistent even if Windows settings drift
  • logging is available at the hypervisor layer
  • if you run multiple desktops, policy stays repeatable across the fleet

My practical baseline is:

  • keep firewalling enabled on VM NICs
  • default deny inbound where possible
  • use VPN-first administrative access
  • keep RDP off the public Internet
  • expose remote access only through VPN or a strict allowlist

This is not a substitute for patching, updates, endpoint hygiene, or sane user behavior. It is an external control plane the guest OS cannot silently rewrite.
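That baseline translates into a short per-VM firewall file. A sanitized sketch of what such a policy can look like, with `[VMID]` and `[VPN_SUBNET]` as placeholders:

```
# /etc/pve/firewall/[VMID].fw (sketch)
[OPTIONS]
enable: 1
policy_in: DROP
policy_out: ACCEPT

[RULES]
# allow RDP only from the VPN subnet
IN ACCEPT -source [VPN_SUBNET] -p tcp -dport 3389
```

Because this file lives under /etc/pve, it is enforced by the host and nothing inside Windows can change it.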

Trade-offs
#

This model buys recoverability, but it is not free.

You take on:

  • more setup complexity than bare-metal Windows
  • dependence on IOMMU quality and passthrough behavior
  • backup storage planning
  • a clearer need for disciplined data placement outside the desktop VM
  • some hardware-specific tuning during the initial build

That trade makes sense to me because the support model becomes much cleaner afterwards.

Failure modes and caveats
#

Things that actually matter in real life:

  • messy IOMMU groups make passthrough harder and increase the temptation to use riskier workarounds
  • GPU reset behavior can still vary between cards and generations
  • USB behavior is not always equally clean across all devices
  • storage planning matters more than people expect once the desktop VM, backups, and retained restore points all start growing
  • untested backups are a confidence trap

I would not oversell this as a zero-maintenance design. It is lower-stress to support, but only after the initial build is done carefully.

A note on performance tuning
#

This design is primarily about recoverability and supportability.

That said, the same pattern can also be tuned for better local performance when the hardware allows it. With enough RAM, a dedicated passed-through GPU, and careful VM tuning, it is possible to push the setup well beyond a basic kiosk-style desktop.

In my case, that also made local gaming practical on one of the hosts. I treat that as a bonus, not the main reason to build the system.

For the AMD RX 7900 XTX passthrough specifics I used elsewhere, see:

Proxmox RX 7900 XTX Passthrough: Fix VFIO_MAP_DMA failed

FAQ
#

Do I need two GPUs? Not strictly. The Windows VM needs a GPU that drives the physical monitor. A second GPU or iGPU just makes troubleshooting easier.

Why use a passed-through GPU at all? Because the goal here is a normal local desktop experience, not a remote-only session.

Why use a USB audio dongle instead of HDMI audio? Because it is predictable, cheap, easy to pass through, and easy to replace.

USB controller passthrough or per-device passthrough? Controller passthrough often feels more native for mixed peripherals. In the baseline shown here, I use per-device USB passthrough.

Do I need server hardware? No. This pattern works on consumer hardware as long as virtualization support, IOMMU isolation, RAM capacity, and storage planning are good enough.

Why not keep everything on the Windows VM and just use OneDrive? Because I want a clean boundary between desktop state and user data, and I want that boundary under my control.

Is this overkill for home use? If nobody needs remote support and reinstalling Windows is cheap, maybe. If downtime and support friction matter, the trade can make sense surprisingly quickly.

Practical takeaway
#

For me, the value of this design is not that it turns Windows into something magical.

It changes the support model.

Instead of inheriting whatever state the machine drifted into and trying to repair it live, I can keep the user experience familiar while making recovery faster, calmer, and far more predictable.

That is why I keep coming back to this pattern.

