On 02/09/2017 07:23 PM, Alex Williamson wrote:
On Thu, Feb 9, 2017 at 5:09 PM, David Reed <[email protected]> wrote:
I've been able to get two VMs set up with GPU/USB passthrough, and
individually they both work, but I can't run both of them at the same
time. virt-manager complains that the other VM's PCI device (GPU) is
already in use, even though the two VMs are assigned different GPUs.
I suspect this is because both GPUs are in the same IOMMU group, which
is bound to the vfio driver. I was hoping there would be some way to
make this work, since both devices are controlled by vfio.
Sorry, this is working as expected for your hardware. Your PCIe root
ports do not guarantee upstream routing, which allows the possibility of
peer-to-peer DMA between downstream devices that is never translated by
the IOMMU, so the devices below those ports cannot be considered
isolated from one another. See here for further info:
http://vfio.blogspot.com/2014/08/iommu-groups-inside-and-out.html
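A quick way to double-check which devices actually share a group is to
walk /sys/kernel/iommu_groups. Here is a minimal Python sketch; it
assumes nothing beyond the standard sysfs layout:

#!/usr/bin/env python3
# List every IOMMU group and the PCI devices it contains, so you can
# see whether the two GPUs (and anything else) share a group.
import os

GROUPS = "/sys/kernel/iommu_groups"

for group in sorted(os.listdir(GROUPS), key=int):
    for dev in sorted(os.listdir(os.path.join(GROUPS, group, "devices"))):
        print(f"IOMMU group {group}: {dev}")

Every device in a group has to be bound to vfio-pci (or otherwise safely
stubbed) before any one of them can be assigned, and two running VMs
cannot split a group between them, which is exactly the error you're
seeing.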
Hacks to bypass this isolation (e.g. the ACS override patch) are not
supported upstream. You can find information about processors whose
root ports do support isolation via ACS here:
http://vfio.blogspot.com/2015/10/intel-processors-with-acs-support.html
(it's a bit dated, but you can extrapolate from the trend). Thanks,
Alex
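For reference, whether a given port exposes ACS can be read out of its
PCIe extended capability list (capability ID 0x000d). Below is a
minimal Python sketch of that walk; the device address at the bottom is
only a placeholder, and reading past the first 64 bytes of config space
requires root:

#!/usr/bin/env python3
# Walk a device's PCIe extended capability list and report whether it
# exposes the ACS capability (ID 0x000d).
import struct
import sys

def has_acs(bdf):
    with open(f"/sys/bus/pci/devices/{bdf}/config", "rb") as f:
        cfg = f.read()
    if len(cfg) <= 0x100:
        return False  # conventional PCI: no extended config space
    off, seen = 0x100, set()
    while off and off not in seen:
        seen.add(off)  # guard against malformed capability loops
        (header,) = struct.unpack_from("<I", cfg, off)
        if header in (0, 0xFFFFFFFF):
            break  # empty list, or truncated read without root
        if header & 0xFFFF == 0x000D:  # ACS capability ID
            return True
        off = (header >> 20) & 0xFFC  # next-capability offset
    return False

if __name__ == "__main__":
    # Placeholder address; pass your own root port's address instead.
    bdf = sys.argv[1] if len(sys.argv) > 1 else "0000:00:01.0"
    print(bdf, "exposes ACS" if has_acs(bdf) else "does not expose ACS")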
If the OP wants hardware recommendations, I like AMD's C32/G34 Opterons
on a coreboot motherboard; they are a great option for a cheap
virtualization setup with *proper* IOMMU support ($20 per 16 cores).
On my blob-free coreboot KGPE-D16 system, every device gets its own
IOMMU group.
Before that I bought two different computers that "supported" IOMMU and
wasted too much money. Even a lot of new Intel "server" motherboards
don't implement it properly for one reason or another, so it is a good
idea to go with a motherboard/system that runs free firmware, so that
problems like this are actually fixable.