Alex, thanks for the quick answer, but sadly I still do not fully understand the implications, even after reading the PDF paper on the RH website you mention, as well as the vendor advisory at https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c04781229
When you say "qemu has no support", do you actually mean "the qemu people are unable to help you if you break things by bypassing the restrictions in place", or "qemu is designed not to work when the restrictions are bypassed"?

Do I understand correctly that the BIOS can modify portions of the system's usable RAM so that vendor-specific software tools can read those addresses, and if so, is there a risk of data corruption if the RMRR restrictions are bypassed?

I have eventually managed to pass through an NVIDIA card in the MicroServer Gen8 to a Windows VM using a patched 5.3 kernel, together with the vendor instructions to exclude the PCIe slot (the conrep solution). For it to work it still needed the "RMRR patch", i.e. removing the "return -EPERM" line below the "Device is ineligible [...]" message in drivers/iommu/intel-iommu.c.

However, applying the same modification to kernel 5.4 leads to the "VFIO_MAP_DMA: -22" error. Is there another place in the 5.4 kernel source that must be modified to bring back the 5.3 behaviour? (That is, despite everything, I do have a stable home Windows VM with GPU passthrough.) A small diagnostic sketch for checking reserved IOVA regions is appended at the end of this message.

-- 
You received this bug notification because you are a member of
qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1869006

Title:
  PCIe cards passthrough to TCG guest works on 2GB of guest memory but
  fails on 4GB (vfio_dma_map invalid arg)

Status in QEMU:
  New

Bug description:
  During one meeting a coworker asked "did anyone try to pass through a
  PCIe card to a guest of another architecture?" and I decided to check
  it. I plugged SATA and USB3 controllers into spare slots on the
  mainboard and started playing.

  On a 1 GB VM instance it worked (both cold- and hot-plugged). On a
  4 GB one it did not:

  Error while starting the domain: internal error: process exited while connecting to monitor:
  2020-03-25T13:43:39.107524Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: VFIO_MAP_DMA: -22
  2020-03-25T13:43:39.107560Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: vfio 0000:29:00.0: failed to setup container for group 28: memory listener initialization failed: Region mach-virt.ram: vfio_dma_map(0x563169753c80, 0x40000000, 0x100000000, 0x7fb2a3e00000) = -22 (Invalid argument)

  Traceback (most recent call last):
    File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
      callback(asyncjob, *args, **kwargs)
    File "/usr/share/virt-manager/virtManager/asyncjob.py", line 111, in tmpcb
      callback(*args, **kwargs)
    File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 66, in newfn
      ret = fn(self, *args, **kwargs)
    File "/usr/share/virt-manager/virtManager/object/domain.py", line 1279, in startup
      self._backend.create()
    File "/usr/lib64/python3.8/site-packages/libvirt.py", line 1234, in create
      if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
  libvirt.libvirtError: internal error: process exited while connecting to monitor:
  2020-03-25T13:43:39.107524Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: VFIO_MAP_DMA: -22
  2020-03-25T13:43:39.107560Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: vfio 0000:29:00.0: failed to setup container for group 28: memory listener initialization failed: Region mach-virt.ram: vfio_dma_map(0x563169753c80, 0x40000000, 0x100000000, 0x7fb2a3e00000) = -22 (Invalid argument)

  I played with the memory size and 3054 MB is the maximum value with
  which the VM boots with cold-plugged host PCIe cards.
To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1869006/+subscriptions
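
P.S. Appended below is a small diagnostic sketch I put together; it is not from the bug thread or the kernel tree, and the file name check-reserved.c and the program name are just illustrative. It assumes the reserved_regions attribute under /sys/kernel/iommu_groups/<group>/ that reasonably recent kernels expose. It lists the reserved IOVA regions of an IOMMU group and reports whether a proposed guest RAM window overlaps any of them; one possible way to end up with "VFIO_MAP_DMA: -22" (EINVAL) is asking VFIO to map an IOVA range the kernel considers unusable, so this can help confirm whether the guest RAM layout collides with a reserved (e.g. RMRR) region. For the failure quoted above, the corresponding invocation would be "./check-reserved 28 0x40000000 0x100000000" (group 28, mach-virt RAM starting at 1 GiB, 4 GiB of RAM).

/*
 * check-reserved.c - sketch only, not from the bug thread.
 *
 * List an IOMMU group's reserved IOVA regions and check whether a
 * proposed guest RAM window overlaps any of them.
 *
 * Build: gcc -o check-reserved check-reserved.c
 * Usage: ./check-reserved <iommu-group> <guest-ram-base> <guest-ram-size>
 *        e.g. ./check-reserved 28 0x40000000 0x100000000
 */
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    char path[128], type[64];
    uint64_t base, size, end, rstart, rend;
    FILE *f;
    int overlap = 0;

    if (argc != 4) {
        fprintf(stderr, "usage: %s <iommu-group> <base> <size>\n", argv[0]);
        return 1;
    }

    base = strtoull(argv[2], NULL, 0);
    size = strtoull(argv[3], NULL, 0);
    end  = base + size - 1;

    snprintf(path, sizeof(path),
             "/sys/kernel/iommu_groups/%s/reserved_regions", argv[1]);
    f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }

    /* Each line is "<start> <end> <type>", e.g. "0xfee00000 0xfeefffff msi". */
    while (fscanf(f, "%" SCNx64 " %" SCNx64 " %63s", &rstart, &rend, type) == 3) {
        int hit = rstart <= end && rend >= base;

        printf("reserved %#" PRIx64 "-%#" PRIx64 " (%s)%s\n",
               rstart, rend, type, hit ? "  <-- overlaps guest RAM" : "");
        overlap |= hit;
    }
    fclose(f);

    printf(overlap ? "Guest RAM window overlaps a reserved region.\n"
                   : "No overlap with the guest RAM window.\n");
    return 0;
}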