Forwarding the GPU through IOMMU with Proxmox

Finally, I have access to a device that is capable of IOMMU, which is needed to forward PCI devices like a GPU to a VM. So I could not resist trying this out and creating a VM in Proxmox that can be accessed like a “normal” desktop.

First of all, one needs to enable IOMMU in the UEFI/BIOS - mine did not have any related switch; it was just always enabled. But adding IOMMU support to the Linux kernel params in /etc/default/grub is required:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt video=efifb:off"

intel_iommu=on is of course only required on Intel CPUs, while iommu=pt is needed for Nvidia GPUs. video=efifb:off disables the EFI/VESA framebuffer, which prevents Proxmox from accessing the GPU on boot; it might be needed, so I left the framebuffer off.
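
After editing /etc/default/grub, the change has to be applied and the host rebooted; afterwards one can check that IOMMU is really active. The commands below are a sketch of the usual procedure (assuming the host boots via GRUB), not part of my original notes:

update-grub
reboot
# after the reboot, check the kernel log for IOMMU/DMAR messages
dmesg | grep -e DMAR -e IOMMU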

If an external graphics card is installed, it’s just plug’n’play with Proxmox - install a VM, add the PCI device, select it as Primary GPU, and you are ready - the VM boots to the connected monitor. One can force this behavior by setting Display to none in the Hardware tab.
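
For reference, this typically ends up as entries like the following in the VM config - the PCI address 0000:01:00 is just a placeholder for whatever lspci reports for the external card:

hostpci0: 0000:01:00,x-vga=1
vga: none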

Alternatively, if one wants to access the display on the go from a different device, one can choose SPICE as the display to view the screen inside noVNC or via the SPICE protocol, which gives a native-like experience.

If one wants to switch from the connected monitor to the virtual monitor, one has to change this setting in Proxmox and reboot the VM.

Internal GPU

The internal GPU is only available if no external GPU is connected, so one cannot use both at the same time.

To use the internal GPU, one cannot just add the PCI device to the VM. The following steps are needed instead:

Make sure that IOMMU support is added to the Linux kernel params in /etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

Disable loading of the integrated graphics related modules in /etc/modprobe.d/blacklist.conf

blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915
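
After the reboot later in this guide, one can double-check that these modules are really no longer loaded - just a sanity check I would add, not part of the original steps:

lsmod | grep -e i915 -e snd_hda_intel
# no output means the blacklist took effect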

Add the INTEL IGD ID to vfio

Run lspci to find the VGA compatible controller and use the first part of its PCI address (e.g. 00:02) to look up the INTEL IGD ID by running lspci -n -s 00:02.

Or use this command to find the INTEL IGD ID:

lspci -n -s `lspci | grep VGA | cut -c-5`

This returns

00:02.0 0300: 8086:46d1
              ^^^^^^^^^- INTEL IGD ID

Then edit the following file accordingly

/etc/modprobe.d/vfio.conf

options vfio-pci ids=<YOUR INTEL IGD ID>
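
With the example ID from the lspci output above, the line would read:

options vfio-pci ids=8086:46d1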

To fix issues that some Windows applications have with “Model Specific Registers” (MSRs), you might need the following (I did not need it):

/etc/modprobe.d/kvm.conf

options kvm ignore_msrs=1
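
Whether the option is active after a reboot can be checked via sysfs (my addition, not from the original guide):

cat /sys/module/kvm/parameters/ignore_msrs
# prints Y once the option is applied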

Then do:

update-initramfs -u
reboot
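
After the reboot it is worth checking that vfio-pci (and not i915) has claimed the iGPU - a sanity check I would suggest, assuming the device sits at 00:02.0 as above:

lspci -nnk -s 00:02.0
# look for: Kernel driver in use: vfio-pci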

Adjust VM settings

Finally, add the required args and hostpci0 entries to your VM config (/etc/pve/qemu-server/<YOUR VM ID>.conf). Some tutorials suggest adding the following:

args: -device 'vfio-pci,host=00:02.0,addr=0x02,x-igd-gms=1'

to pass the GPU to the VM. At least on Proxmox 8, I could just add the PCI device through the UI as a Raw Device, with All Functions and Primary GPU set, resulting in the entry hostpci0: 0000:00:02,x-vga=1 in the conf.
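
Equivalently, this can be done from the Proxmox shell with qm instead of the UI - a sketch assuming VM ID 115 as in the config below:

qm set 115 -hostpci0 0000:00:02,x-vga=1
qm set 115 -vga none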

Adding Audio

To pass through the HDMI audio to the VM, one needs to add the corresponding device. If a mapped HDMI audio device is already available in the UI, just use that. In my case, I did not see it among the mapped or audio-related devices (only the audio jack, which I did not want to use). Therefore, I ran lspci on the Proxmox host and found 00:1f.3 Audio device: Intel Corporation Device 54c8, which is the HDMI audio controller. I added it through the Proxmox UI and had the HDMI audio output available in the VM.
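
To locate the controller quickly, filtering the lspci output is enough; the device shown is the one from my box:

lspci | grep -i audio
# 00:1f.3 Audio device: Intel Corporation Device 54c8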

Final VM config

/etc/pve/qemu-server/<YOUR VM ID>.conf

boot: order=scsi0;net0
cores: 1
cpu: x86-64-v2-AES
hostpci0: 0000:00:02,x-vga=1
hostpci1: 0000:00:1f.3
memory: 2048
meta: creation-qemu=8.0.2,ctime=1693742794
name: libreelec
net0: virtio=8A:30:E2:28:C3:BB,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-115-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=fcdd357b-fa7b-4fce-9bcf-abc4c5ffaaf8
sockets: 1
vga: none
vmgenid: 3ce11ab5-1bca-4c3e-a615-d913d33a64ac

After verifying that this setup works fine, one can add onboot: 1 to automatically start the VM on the attached monitor and have the Proxmox server behave like a “normal” desktop. Of course, you need to pass the mouse and keyboard through to the VM via USB too.
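
As a sketch, the corresponding additions to the VM config could look like this - the USB vendor:device ID is a placeholder for the actual keyboard/mouse receiver:

onboot: 1
usb0: host=046d:c52b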

Source for internal GPU forwarding: https://forum.proxmox.com/threads/guide-intel-intergrated-graphic-passthrough.30451/

Closing Remarks

Currently, I have an external GPU running, which is much simpler to set up. It is an Nvidia GTX 750 Ti, a very old model, but it still works quite well for my use cases. I am wondering how often I will actually use this setup, as I am used to working on my laptop most of the time. But it was very good to learn about IOMMU in practice and to know how to build my own cloud gaming setup, now that Stadia is currently shutting down ;D

Yet it’s a shame that RDP, SPICE and VNC are not nearly as capable as Steam’s Moonlight protocol or Google Stadia’s protocol, which is based on QUIC instead of plain TCP/UDP. So playing through SPICE still has quite noticeable lag, while using the real monitor works very well. I will look into Nvidia Moonlight and Steam PlayOn in the future if I need to play on the go.

Update 04.09.2023: I now have a T-Bao T8 Plus, which uses its integrated graphics to run a LibreELEC VM, and this works fine.