
Running a high-performance Windows VM on Linux

As a developer, Linux is my main OS: I find it easier to use for a lot of my work, for example Docker (we use it a lot at KissMy). Additionally, I am a gamer and like to play after a good day of work. Let me just say that Linux has made a lot of progress in the gaming space: with Proton integrated into Steam and Lutris for non-Steam games, it’s easier than ever to run games on Linux, most of the time without much performance loss.

The only games that pose problems are usually online games with anti-cheat, or games with DRM. This is why I still have a dual-boot setup on my machine and usually reboot into Windows when I want to play. The thing I don’t like about this setup is that I can’t have the best of both worlds. Since I spend most of my time on Linux, even outside of work hours with my personal projects, most of my files live on Linux. And the fact that Windows has become more and more bloated over the years, with a lot of tracking baked into the OS, doesn’t inspire me to go the other way and install WSL (Windows Subsystem for Linux).

So I had an idea: run a Windows VM.

« Wait… you want to play on Windows and you want a VM?! Are you fucking braindead or what? You’re going to get the worst performance ever, it’s a total waste of time! How are you even going to use your GPU in a VM? »

Enter OVMF and QEMU

With OVMF and QEMU you can pass your GPU directly through to the VM and even pin CPU cores to it, getting near-native performance.

« OVMF ? QEMU ? »

Wait, I’ll explain. OVMF (Open Virtual Machine Firmware) basically enables you to run UEFI firmware in a VM, and QEMU describes itself as « a generic and open source machine emulator and virtualizer ».

« Okay then. What do I need for this ? »

Well, the list of hardware requirements is pretty simple:

  • A computer
  • Lots of RAM (since you’ll effectively be running two machines at the same time; I have 32 GB but you can try different setups and see what works for you)
  • A CPU with a lot of cores (we’ll be reserving some cores for the VM; I’m using a Ryzen 3600, so with 6 cores I’ll give 3 to the host machine and 3 to the VM)
  • Two GPUs in your system (since we’ll pass a GPU through to the VM, it becomes unavailable to the host, so if you want access to your Linux desktop while the VM is running you’ll need either two GPUs, or one GPU plus an iGPU (integrated graphics) if you are on a laptop)

Some special notes about the hardware: you’ll need recent hardware that supports hardware virtualization for the VM and IOMMU for the GPU passthrough.
For Intel here’s a list of compatible CPUs : https://ark.intel.com/content/www/us/en/ark/search/featurefilter.html?productType=873&0_VTD=True
For AMD, every CPU from the Bulldozer generation onwards should be compatible, so every Ryzen CPU should be okay.

Your motherboard should also support IOMMU so that the PCIe device to be passed through to the VM (the GPU in this case) can be isolated.
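If you want a quick sanity check before going further, the virtualization flags show up in /proc/cpuinfo (vmx for Intel VT-x, svm for AMD-V). This is just a convenience check, not something the rest of the setup depends on:

grep -E -c '(vmx|svm)' /proc/cpuinfo
# a non-zero count means your CPU exposes the virtualization extensions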

For the software side:

  • Linux (I’m using Manjaro which is based on Arch Linux but already comes with a desktop environment and drivers)
  • QEMU
  • Libvirt
  • virt-manager (A GUI manager for our VM(s))
  • VFIO (Kernel module for PCIe device isolation)

So let’s start.

Enable IOMMU support in the BIOS

Boot into the BIOS and enable IOMMU. The setting can have different names, so look for something named IOMMU, Intel VT-d or AMD-Vi. If you don’t find it, anything with virtualization in the name is a good bet.

Enable IOMMU support in the Linux kernel

After booting back into Linux you can enable IOMMU support by editing your GRUB configuration. I’m running Arch Linux but the path should be the same on most distros.

sudo nano /etc/default/grub

Find the line starting with GRUB_CMDLINE_LINUX_DEFAULT= and append intel_iommu=on for an Intel CPU or amd_iommu=on for an AMD one. Also add iommu=pt.
Then regenerate your GRUB config with the following command:

sudo grub-mkconfig -o /boot/grub/grub.cfg

and reboot to apply the changes to your system.
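Once you’re back up, you can check in the kernel log that IOMMU actually got enabled. The exact messages vary between vendors and kernel versions, but something along these lines usually does the trick:

sudo dmesg | grep -i -e DMAR -e IOMMU -e AMD-Vi
# on AMD look for AMD-Vi lines, on Intel for DMAR / Intel-IOMMU lines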

Isolate the GPU at boot time

Since we want our VM to have exclusive access to the GPU, we have to tell Linux not to touch it, and that is done by isolating the card very early in the boot process.

First, we need to identify the IOMMU group our GPU is in.

#!/bin/bash
# List every IOMMU group and the PCI devices it contains
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done

This little script will print all of our IOMMU groups and the devices inside them.

IOMMU Group 14:
	07:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU106 [GeForce RTX 2070 Rev. A] [10de:1f07] (rev a1)
	07:00.1 Audio device [0403]: NVIDIA Corporation TU106 High Definition Audio Controller [10de:10f9] (rev a1)
	07:00.2 USB controller [0c03]: NVIDIA Corporation TU106 USB 3.1 Host Controller [10de:1ada] (rev a1)
	07:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU106 USB Type-C UCSI Controller [10de:1adb] (rev a1)

Here you should see your GPU in an IOMMU group. Make sure that only your GPU’s devices show up in that group though, as the whole group needs to be isolated for the passthrough to work.

Depending on your graphics card there may be multiple devices. For example, an Nvidia GTX 960 has two entries (the VGA compatible controller and the audio device), while an RTX 2070 has four (one VGA, one audio, and two USB-related controllers).

Note the vendor:device IDs in square brackets on your GPU’s lines (10de:1f07 and so on in the example above) and add them to the kernel options like we did before.
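If you just want the vendor:device pairs without reading through the whole group listing, lspci can filter by name; for an Nvidia card something like this works (adapt the grep pattern to your vendor):

lspci -nn | grep -i nvidia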

sudo nano /etc/default/grub

and add the following to GRUB_CMDLINE_LINUX_DEFAULT= (use your own IDs; these are the ones for the RTX 2070 from the example above):

vfio-pci.ids=10de:1f07,10de:10f9,10de:1ada,10de:1adb

Keep in mind that once the isolation is in place the GPU will no longer be visible to Linux and you won’t be able to use it on the host. If something goes wrong, you can always edit the GRUB entry at boot time and remove this option to temporarily revert the change.
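Putting both kernel parameter edits together, the line in /etc/default/grub ends up looking roughly like this (use your own IDs, and keep whatever other options were already on the line; quiet here is just an example):

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt vfio-pci.ids=10de:1f07,10de:10f9,10de:1ada,10de:1adb"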

We also need to tell the kernel that it needs to load the VFIO modules early during boot. We do this by editing

/etc/mkinitcpio.conf

Find the line that starts with

MODULES

and add the following modules (the three dots stand for any modules already listed there, if any):

MODULES=(vfio_pci vfio vfio_iommu_type1 vfio_virqfd ...)

Then regenerate your GRUB config again and, since we changed mkinitcpio.conf, rebuild the initramfs so the VFIO modules are included early:

sudo grub-mkconfig -o /boot/grub/grub.cfg
sudo mkinitcpio -P

Reboot once more for the isolation to take effect.
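After that reboot, you can check that the GPU is now claimed by the vfio-pci driver rather than the normal graphics driver; lspci can show the kernel driver in use for a given device (adjust 07:00.0 to your own address):

lspci -nnk -s 07:00.0
# look for a line like: Kernel driver in use: vfio-pci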

Setting Up The VM

Okay, now that we have set up the system for PCI passthrough of our GPU, it’s time to set up the VM. For this we’ll need libvirt and QEMU. Let’s start by installing them.

trizen -S qemu libvirt edk2-ovmf virt-manager
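If you don’t use an AUR helper, these packages should all be available in the official repositories too, so plain pacman ought to work just as well:

sudo pacman -S qemu libvirt edk2-ovmf virt-manager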

Again, this is using an Arch Linux package manager. Depending on your distro, you may also need to enable and start the libvirt service.

sudo systemctl enable libvirtd.service && sudo systemctl start libvirtd.service
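Depending on your setup, you may also want to add your user to the libvirt group so virt-manager can talk to the libvirt daemon without asking for the root password every time (this assumes your distro ships such a group, which Arch’s libvirt package does); log out and back in afterwards:

sudo usermod -aG libvirt $USER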

Start Virtual Machine Manager and create a VM.

Select Local install media and go to the next step. Click on Browse to select your Windows install media ISO. You may need to move the ISO to your libvirt images folder (usually it’s /var/lib/libvirt/images).

On the next step, input the amount of RAM and the number of CPU cores you want to allocate to the VM. Keep in mind that the whole amount of RAM will be claimed by the VM as soon as it starts, so leave enough for your host system to keep running correctly.

On the next step you’ll be given the choice to add a hard disk to the VM. The GUI setup process can only handle disk images, so that’s what I’ll cover for now; I’ll go over passing through a whole physical disk in the second part of this blog post.

The size of the disk doesn’t really matter; just make it at least big enough for a fresh Windows install plus drivers, so I’d say 40 GiB is my minimum.
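As a side note, if you’d rather create the disk image yourself and simply point virt-manager at it, qemu-img does the job; the path and file name below are just an example:

sudo qemu-img create -f qcow2 /var/lib/libvirt/images/win10.qcow2 40G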

On the final step, make sure you check the « Edit before installing » checkbox, because we’re going to make some changes to the VM.

First of all, switch the chipset to Q35 under the Overview menu, and check that the firmware is set to UEFI (that’s the OVMF firmware we installed earlier).

Then we’ll need to remove some devices: the Tablet, Display Spice, Sound ich9, Console, Channel spice, Video QXL, and both USB redirectors need to go.

We’ll then add the GPU we want to pass through. For this, click on « Add Hardware », then under the PCI Host Device menu add all of your GPU’s PCI devices (all the ones we isolated before).

You’ll have to do this for every device of your graphics card. So in my case, I need to add all 4 PCI devices (0000:07:00.0, 0000:07:00.1, 0000:07:00.2, 0000:07:00.3).
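For reference, each PCI host device added through the GUI ends up as a <hostdev> entry in the VM’s XML; for the first function of the card it typically looks something like this (the bus/slot/function values will match your own addresses):

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
  </source>
</hostdev>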

We’ll then add a second mouse and keyboard to be passed to the VM. You can use the same mouse and keyboard for Linux and the VM, but it requires a bit more setup, so I’ll cover it in the second part of this post.

So again, « Add Hardware », and under the « USB Host Device » menu select your second mouse and keyboard.

Once that’s done we should be ready to start the Windows setup.

One last pro tip: if you don’t want Windows to force you to log in to a Microsoft account (like it’s been doing recently), you can disable the network and re-enable it after the setup is complete.

One last note: if you’re running an Nvidia card, the driver installation will not succeed unless you spoof the hypervisor’s vendor ID (Nvidia’s consumer drivers refuse to initialize when they detect they’re running inside a VM). To do this, edit the VM’s XML under the Overview section: find the <hyperv> ... </hyperv> section and add the following line inside it.

<vendor_id state='on' value='whatever'/>

« whatever » can be changed to any value up to 12 characters.
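For context, with that line added the <hyperv> section typically ends up looking something like this; the relaxed/vapic/spinlocks entries are the defaults virt-manager usually generates and may differ on your setup:

<hyperv>
  <relaxed state='on'/>
  <vapic state='on'/>
  <spinlocks state='on' retries='8191'/>
  <vendor_id state='on' value='whatever'/>
</hyperv>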

You can now begin the installation. Remember to plug a monitor into the graphics card being passed to the VM, since the video output will come out of that GPU.

Once booted into the VM, it’s a usual Windows install. After Windows has finished installing, you can install your graphics driver and see if it works (it should).

In part two of this post, I’ll cover:

  • Using only one keyboard and mouse and dynamically switching between the VM and Linux.
  • Storage performance
  • CPU pinning & governor for better performance
  • CPU core isolation
  • Decreasing memory latency by using huge pages
  • Passing audio directly to Linux with no latency using shared memory