@krzys-h
Last active November 26, 2024 08:54

Ubuntu 21.04 VM with GPU acceleration under Hyper-V...?
Modern versions of Windows support GPU paravirtualization in Hyper-V with normal consumer graphics cards. This is used e.g. for graphics acceleration in Windows Sandbox, as well as WSLg. In some cases, it may be useful to create a normal VM with GPU acceleration using this feature, but this is not officially supported. People already figured out how to do it with Windows guests though, so why not do the same with Linux? It should be easy given that WSLg is open source and reasonably well documented, right?

Well... not quite. I managed to get it to run... but not well.

How to do it?

  1. Verify driver support

Run Get-VMHostPartitionableGpu in PowerShell. You should see your graphics card listed; if you get nothing, update your graphics drivers and try again.

  2. Create a new VM in Hyper-V Manager.

Make sure to:

  • Use Generation 2
  • DISABLE dynamic memory (it interferes with vGPU on Windows guests, so it probably won't work with Linux either; I haven't verified this yet though)
  • DISABLE automatic snapshots (they are not supported with vGPU and will only cause problems)
  • DISABLE secure boot (we'll need custom kernel drivers, and I never tried to make this work with secure boot)
  • Don't forget to add more CPU cores because the stupid wizard still adds only one vCPU...
  3. Add GPU-PV adapter

From PowerShell running as administrator:

Set-VM -VMName <vmname> -GuestControlledCacheTypes $true -LowMemoryMappedIoSpace 1GB -HighMemoryMappedIoSpace 32GB
Add-VMGpuPartitionAdapter -VMName <vmname>
  4. Install Ubuntu 21.04 in the VM as you usually would

  5. Build the dxgkrnl driver

Until Microsoft upstreams this driver to the mainline Linux kernel, you will have to build it manually. Use the following script I made to fetch the driver from the WSL2-Linux-Kernel tree, patch it for an out-of-tree build, and add it to DKMS:

#!/bin/bash -e
BRANCH=linux-msft-wsl-5.10.y

if [ "$EUID" -ne 0 ]; then
    echo "Switching to root..."
    exec sudo "$0" "$@"
fi

apt-get install -y git dkms

git clone -b $BRANCH --depth=1 https://github.com/microsoft/WSL2-Linux-Kernel
cd WSL2-Linux-Kernel
VERSION=$(git rev-parse --short HEAD)

cp -r drivers/hv/dxgkrnl /usr/src/dxgkrnl-$VERSION
mkdir -p /usr/src/dxgkrnl-$VERSION/inc/{uapi/misc,linux}
cp include/uapi/misc/d3dkmthk.h /usr/src/dxgkrnl-$VERSION/inc/uapi/misc/d3dkmthk.h
cp include/linux/hyperv.h /usr/src/dxgkrnl-$VERSION/inc/linux/hyperv_dxgkrnl.h
sed -i 's/\$(CONFIG_DXGKRNL)/m/' /usr/src/dxgkrnl-$VERSION/Makefile
sed -i 's#linux/hyperv.h#linux/hyperv_dxgkrnl.h#' /usr/src/dxgkrnl-$VERSION/dxgmodule.c
echo "EXTRA_CFLAGS=-I\$(PWD)/inc" >> /usr/src/dxgkrnl-$VERSION/Makefile

cat > /usr/src/dxgkrnl-$VERSION/dkms.conf <<EOF
PACKAGE_NAME="dxgkrnl"
PACKAGE_VERSION="$VERSION"
BUILT_MODULE_NAME="dxgkrnl"
DEST_MODULE_LOCATION="/kernel/drivers/hv/dxgkrnl/"
AUTOINSTALL="yes"
EOF

dkms add dxgkrnl/$VERSION
dkms build dxgkrnl/$VERSION
dkms install dxgkrnl/$VERSION
  6. Copy GPU drivers from your host system

Now you will also need to copy some files from the host machine: the closed-source D3D12 implementation provided by Microsoft, as well as the Linux parts of the graphics driver provided by your GPU vendor. If you've ever tried to run GPU-PV with a Windows guest, this part should look familiar. Figuring out how to transfer the files into the VM is left as an exercise for the reader; I'll just assume that your Windows host volume is available at /mnt for simplicity:

mkdir -p /usr/lib/wsl/{lib,drivers}
cp -r /mnt/Windows/system32/lxss/lib/* /usr/lib/wsl/lib/
cp -r /mnt/Windows/system32/DriverStore/FileRepository/nv_dispi.inf_amd64_* /usr/lib/wsl/drivers/   # this may be different for different GPU vendors, refer to tutorials for Windows guests if needed
chmod -R 0555 /usr/lib/wsl

Note: You will need to repeat this step every time you update Windows or your graphics drivers.
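Since this copy has to be repeated after every driver update, it can help to script the lookup of the newest driver package. This is only a sketch: find_driver_dir is a hypothetical helper, and the nv_dispi.inf_amd64_* pattern is NVIDIA-specific; other vendors use different folder prefixes (one commenter below found theirs under nvmdi.inf_amd64_*), so adjust the glob for your GPU.

```shell
#!/bin/bash
# Hypothetical helper: locate the most recently modified GPU driver package
# folder in the Windows DriverStore. The folder prefix varies by vendor
# (nv_dispi for NVIDIA here); pass the pattern that matches your driver.
find_driver_dir() {
    local repo="$1" pattern="$2"
    # ls -t sorts by modification time (newest first); -d keeps directories
    # themselves instead of listing their contents.
    ls -td "$repo"/$pattern 2>/dev/null | head -n 1
}

# Example usage (assumes the Windows volume is mounted at /mnt):
# find_driver_dir /mnt/Windows/System32/DriverStore/FileRepository 'nv_dispi.inf_amd64_*'
```

The newest-mtime heuristic matches what a commenter below did by hand: picking the FileRepository folder whose date lines up with the last driver update.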

  7. Set up the system to be able to load libraries from /usr/lib/wsl/lib/:
echo "/usr/lib/wsl/lib" > /etc/ld.so.conf.d/ld.wsl.conf
ldconfig  # (if you get 'libcuda.so.1 is not a symbolic link', just ignore it)
  8. Work around a bug in the D3D12 implementation (it assumes that the /usr/lib/wsl/lib/ mount is case-insensitive... just Windows things...)
ln -s /usr/lib/wsl/lib/libd3d12core.so /usr/lib/wsl/lib/libD3D12Core.so
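If you re-run this step after a driver update, the plain ln -s fails because the link already exists. A small idempotent variant, sketched here with a hypothetical make_case_link helper using the same paths as above:

```shell
#!/bin/bash
# Sketch: create the case-variant symlink only when the real library exists
# and the link is not already in place, so the step is safe to repeat.
make_case_link() {
    local dir="$1" lower="$2" mixed="$3"
    if [ -e "$dir/$lower" ] && [ ! -e "$dir/$mixed" ]; then
        ln -s "$dir/$lower" "$dir/$mixed"
    fi
}

# make_case_link /usr/lib/wsl/lib libd3d12core.so libD3D12Core.so
```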
  9. Reboot the VM

If you've done everything correctly, glxinfo | grep "OpenGL renderer string" should display D3D12 (Your GPU Name). If it does not, here are some useful commands for debugging:

sudo lspci -v  # should list the vGPU and the dxgkrnl driver
ls -l /dev/dxg  # should exist if the dxgkrnl driver loaded
/usr/lib/wsl/lib/nvidia-smi  # should be able to not fail :P
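To script the renderer check, here is a small sketch (check_renderer is a made-up helper) that classifies the renderer line from glxinfo; llvmpipe means you fell back to software rendering:

```shell
#!/bin/bash
# Sketch: classify the OpenGL renderer reported by glxinfo. With working
# GPU-PV the renderer shows up as "D3D12 (<GPU name>)"; llvmpipe means
# software rendering, i.e. the vGPU is not actually being used.
check_renderer() {
    local line="$1"
    case "$line" in
        *D3D12*)    echo "gpu" ;;
        *llvmpipe*) echo "software" ;;
        *)          echo "unknown" ;;
    esac
}

# Usage: check_renderer "$(glxinfo | grep 'OpenGL renderer string')"
```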

The problems

  1. The thing is UNSTABLE. Just running glxgears crashes GNOME, spectacularly. I'd recommend switching to a simple window manager like i3 for testing.
  2. GPU acceleration doesn't seem to be picked up everywhere; sometimes it falls back to software rendering with llvmpipe for no apparent reason.
  3. While when it works you can clearly see that the GPU is working from the FPS counter... I didn't figure out a good way to get these frames from the VM at a reasonable rate yet! The Hyper-V virtual display is slow, and even if you get enhanced session to work it's just RDP under the hood which is not really designed for high FPS output either. On Windows, you can simply use something like Parsec to connect to the VM, but all streaming solutions I know of don't work on a Linux host at all.
@mattenz

mattenz commented Nov 27, 2023

> I applied what you stated but to no avail, any insight? When loading in, my screen is stuck after the splash screen.

As in, you're stuck booting? Are you running a UI or a server? My Ubuntu install was just a server install without a UI.

@ColbyHF

ColbyHF commented Nov 27, 2023

> I applied what you stated but to no avail, any insight? When loading in, my screen is stuck after the splash screen.
>
> As in, you're stuck booting? Are you running a UI or a server? My Ubuntu install was just a server install without a UI.

That would probably cause it, I'm running Ubuntu Desktop.

@LaZoRBear

Thank you for the guide, it has been very helpful in implementing this with Manjaro. However, I have some issues now that everything is installed. If I start my VM with the VMGpuPartitionAdapter already added to the VM, I get a black screen after seeing the initial loading splash screen. If I activate it only after logging into the VM, my VM freezes pretty much after only a few interactions or commands entered in the terminal.

glxinfo | grep "OpenGL renderer string" -> freezes instantly or locks up with the terminal printing white lines indefinitely.
sudo lspci -v -> lists the proper info and everything seems normal
ls -l /dev/dxg -> I get "directory or file doesn't exist"
/usr/lib/wsl/lib/nvidia-smi -> freezes instantly or locks up with the terminal printing white lines indefinitely.

I'm not sure what steps I need to take to fix this. Also, for copying the drivers over: the path on Windows mentioned wasn't on my system; I ended up having to go into C:\Windows\System32\DriverStore\FileRepository\nvmdi.inf_amd64_509c7440ad905b9c. It was the folder in there whose created date was in line with my last driver update.

@D4rkGambit

> Thanks to this guide I was able to get GPU-PV working on Server 2022 running Hyper-V with an Ubuntu 22.04.3 VM. I did have to do a few things differently though.
>
> First, I changed the WSL branch to "linux-msft-wsl-5.15.y" since this Ubuntu version uses the 5.15 kernel. I then also had to add the following before the dkms steps, otherwise dkms would fail to build:
>
> apt install dwarves
> cp /sys/kernel/btf/vmlinux /usr/lib/modules/`uname -r`/build/
>
> With those changes, everything started working and my GPU is working in my Ubuntu VM now.

Did you have to rebuild the Ubuntu kernel for Server 2022?
lspci only shows the vGPU adapter on BrokeDudes custom kernel, but nvidia-smi is still broken when using the one provided by 2022.

@residentcode

sudo chmod a+rwx /usr/lib/wsl/{lib,drivers}

chmod a+rwx /usr/lib/wsl/drivers
chmod: changing permissions of '/usr/lib/wsl/drivers': Read-only file system

@Marietto2008

But instead of using Linux inside WSL2, isn't it better to virtualize Linux with QEMU + KVM + Hyper-V? I did it with FreeBSD, but it will also work with Linux:

https://www.reddit.com/r/freebsd/comments/1c71mjn/how_to_virtualize_freebsd_14_as_a_vm_on_top_of/

@Heodel

Heodel commented Apr 25, 2024

I'm getting DKMS errors when actually trying to build the module. I don't think the warning about the different compiler subversion is the cause of it; at least it shouldn't be. ;) This is Ubuntu 22.04.3 in a VM (Hyper-V), 8 cores, 32 GB RAM and an AMD 6900XT.

DKMS make.log for dxgkrnl-d489414c2 for kernel 6.2.0-26-generic (x86_64)
Do 24. Aug 18:16:22 CEST 2023
make: Entering directory '/usr/src/linux-headers-6.2.0-26-generic'
warning: the compiler differs from the one used to build the kernel
  The kernel was built by: x86_64-linux-gnu-gcc-11 (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
  You are using:           gcc-11 (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/dxgmodule.o
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/hmgr.o
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/misc.o
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/dxgadapter.o
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/ioctl.o
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/dxgvmbus.o
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/dxgprocess.o
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/dxgsyncfile.o
/var/lib/dkms/dxgkrnl/d489414c2/build/dxgadapter.c: In function ‘dxgallocation_destroy’:
/var/lib/dkms/dxgkrnl/d489414c2/build/dxgadapter.c:934:66: warning: passing argument 2 of ‘vmbus_teardown_gpadl’ makes pointer from integer without a cast [-Wint-conversion]
  934 |         vmbus_teardown_gpadl(dxgglobal_get_vmbus(), alloc->gpadl);
./include/linux/hyperv.h:1226:58: note: expected ‘struct vmbus_gpadl *’ but argument is of type ‘u32’ {aka ‘unsigned int’}
/var/lib/dkms/dxgkrnl/d489414c2/build/dxgvmbus.c: In function ‘create_existing_sysmem’:
/var/lib/dkms/dxgkrnl/d489414c2/build/dxgvmbus.c:1425:53: error: passing argument 4 of ‘vmbus_establish_gpadl’ from incompatible pointer type [-Werror=incompatible-pointer-types]
 1425 |                               alloc_size, &dxgalloc->gpadl);
./include/linux/hyperv.h:1223:59: note: expected ‘struct vmbus_gpadl *’ but argument is of type ‘u32 *’ {aka ‘unsigned int *’}
cc1: some warnings being treated as errors
make[1]: *** [scripts/Makefile.build:260: /var/lib/dkms/dxgkrnl/d489414c2/build/dxgvmbus.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make: *** [Makefile:2026: /var/lib/dkms/dxgkrnl/d489414c2/build] Error 2
make: Leaving directory '/usr/src/linux-headers-6.2.0-26-generic'

I have the same error when building kernel, what should we do?

@Raruu

Raruu commented May 3, 2024

> I have the same error when building kernel, what should we do?

Maybe try using the right kernel branch?
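Following up on the branch question: the gist's script pins BRANCH=linux-msft-wsl-5.10.y, which targets a 5.10 guest kernel. A sketch of picking a branch to match the running kernel, using only the branches mentioned in this thread (pick_branch is a hypothetical helper; check the microsoft/WSL2-Linux-Kernel repo for the current branch list):

```shell
#!/bin/bash
# Sketch: pick the WSL2-Linux-Kernel branch closest to the running kernel.
# Only branches mentioned in this thread are listed (5.10.y, 5.15.y, 6.6.y);
# verify against the repo before relying on this mapping.
pick_branch() {
    local kver="$1"  # e.g. the output of: uname -r
    case "$kver" in
        5.10.*) echo "linux-msft-wsl-5.10.y" ;;
        5.15.*) echo "linux-msft-wsl-5.15.y" ;;
        6.*)    echo "linux-msft-wsl-6.6.y" ;;
        *)      echo "linux-msft-wsl-5.15.y" ;;  # fallback guess
    esac
}

# BRANCH=$(pick_branch "$(uname -r)")
```

Note that even a close branch may still need small source patches on newer kernels, as several commenters here found.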

@sikhness

sikhness commented May 21, 2024

> Hello. I can get this working passing through a 3070 to an Ubuntu 22.04 VM, but if I also try to pass through a PCI device (Coral TPU) using DDA, the video card is very unstable and causes FFMPEG to segfault. Either GPU-PV alone or DDA alone works fine, but they don't play well together on the VM. It was really cool to be able to use GPU-PV in Hyper-V.
> Anyway, I was wondering if you saw the D3D12/VAAPI capability that was added to WSL2 and if you had any thoughts about how to enable VAAPI for Hyper-V VMs.
> Thanks.

> Up vote! I've tried so hard but failed to enable VAAPI in Hyper-V VMs. Seems like it goes beyond the dxgkrnl module and needs WSL2-Linux-Kernel specific drivers.
>
> I've found this PR and this PR which might be helpful.

@seflerZ, were you able to get any closer to getting VAAPI working on Hyper-V? I wonder if the Linux Azure kernel includes whatever additional kernel-specific drivers are needed to get it to work, because you can simply install that in an Ubuntu VM via apt. I haven't tested it myself yet, but was wondering if anyone was able to get closer to getting VAAPI working in Hyper-V as well.

@seflerZ

seflerZ commented May 21, 2024

@sikhness Yes, managed to make it work. See my project here

@Weroxig

Weroxig commented Jun 14, 2024

Is everything GPU-related working even though it says the device is DX12? Does Vulkan work?

@seflerZ

seflerZ commented Jun 15, 2024 via email

@Alihkhawaher

> @sikhness Yes, managed to make it work. See my project here

Thank you, I have managed to install the kernel using your script, but what I am confused about are the following lines:

export pkgver=5.6.rc2.r77380.g918dbaa9fa4a
export pkgdir=""
dkms install dxgkrnl/5.6.rc2.r77380.g918dbaa9fa4a

Why are you hard-coding the versions?

@thexperiments

Thanks! Got it working on Arch with kernel 6.9; it needed 3 small additional patches, but not much.
Now I'm thinking whether I should make this my first AUR package. The problem is all the proprietary closed-source stuff from Microsoft and NVIDIA, but I could build one just for building the dxgkrnl driver...

I copied all the files via SSH from WSL, as you already have a /mnt/c there.

@darklenre

@thexperiments that would be great! I'm really interested in making it work on Arch and an AUR package would be the best

@thexperiments

I created one (hope it works, it's my first one; I just installed Arch two days ago):
https://github.com/thexperiments/dxgkrnl-dkms-git

However, when trying to get it into the AUR I saw that there are already two versions. I guess I should have checked before; however, it looks like none of them would work for the current kernel with the latest branch from WSL2, which mine does (at least for me).

@seflerZ

seflerZ commented Jul 1, 2024 via email

@jansaltenberger

@kittykernel On Arch, I ran into this problem when trying to build the AUR package from @thexperiments on the newer 6.10.2 kernel. So I downgraded the kernel and headers to linux-6.9.arch1-1-x86_64.pkg.tar.zst and linux-headers-6.9.arch1-1-x86_64.pkg.tar.zst. After that, I was able to build and install the dxgkrnl package. With this and steps 6 through 9, nvidia-smi now shows me the GPU.

@valinet

valinet commented Oct 5, 2024

Thank you, I'm now running Immich under official Docker on Ubuntu 24.04 in a Hyper-V VM with working ML acceleration thanks to your guide. I compiled dxgkrnl based on the linux-msft-wsl-6.6.y branch, which is closer to kernel 6.8 used in Ubuntu 24.04. The only change required in the source is on line 178 of dxgmodule.c: change eventfd_signal(event->cpu_event, 1); to eventfd_signal(event->cpu_event); since eventfd_signal takes only one parameter starting with kernel version 6.8.

This is awesome. I guess Microsoft doesn't popularize it enough, since it would just cannibalize NVIDIA's paid vGPU offering. Same reason GPU partitioning, Microsoft's generic vGPU, is only available for Tesla cards. Or how, even with those cards, live migration is still not available: again, so that vGPU, which offers that, still makes sense.

Anyway, great write-up, VERY useful, thanks again.
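The eventfd_signal() change valinet describes can be applied mechanically with sed; this is just a sketch (patch_eventfd is a made-up name) matching the exact call mentioned above:

```shell
#!/bin/bash
# Sketch: apply the one-argument eventfd_signal() fix for kernels >= 6.8,
# rewriting the two-argument call described in the comment above.
patch_eventfd() {
    sed -i 's/eventfd_signal(event->cpu_event, 1);/eventfd_signal(event->cpu_event);/' "$1"
}

# patch_eventfd /usr/src/dxgkrnl-<version>/dxgmodule.c
```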

@mioiox

mioiox commented Nov 14, 2024

@valinet, what GPU do you have on your HV host? I can't get it working with the Intel iGPU on an i5-12500T CPU... I wonder if this only works with NVIDIA GPUs?

@valinet

valinet commented Nov 18, 2024

@mioiox Maybe; I haven't tested with Intel GPUs myself. I run a regular, consumer-grade RTX 3060, nothing fancy. Microsoft's blog posts mention NVIDIA a lot, so it being NVIDIA-only would not surprise me. As is usual these days with anything Microsoft, it's half-baked/not yet ready/not that user-friendly, when it is in fact a pretty useful tech. Shame...

@JohnHolmesTW

JohnHolmesTW commented Nov 23, 2024

> @sikhness Yes, managed to make it work. See my project here

@seflerZ Whenever I run the win_gpu_pv.ps1 script against an Ubuntu VM, I get an error when it tries to create the New-PSSession to the Ubuntu VM, as follows:

New-PSSession : [Ubuntu] An error has occurred which Windows PowerShell cannot handle. A remote session might have ended.
At C:\Users\john.holmes\Downloads\oneclick-gpu-pv-main\oneclick-gpu-pv-main\win-gpu-pv.ps1:27 char:11
+ $vmsess = New-PSSession -VMId $vmid
+           ~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [New-PSSession], PSRemotingDataStructureException
    + FullyQualifiedErrorId : PSSessionOpenFailed

I've tried Ubuntu 22.04 and 24.04 but neither will connect. Anyone have any ideas?

Just prior to this it asks for credentials which I give the username and password for the Ubuntu VM which is what I assume is required.

Incidentally, if I try the same command against a Windows VM it connects without issue.
