On 2016-04-12 18:32, Alex Williamson wrote:
On 2016-04-12 17:24, Alex Williamson wrote:
On Tue, Apr 12, 2016 at 2:30 PM, Bronek Kozicki <[email protected]> wrote:
2. does PCI bridge have to be in a separate IOMMU group than
passed-through device?
No. Blank is mostly correct on this; newer kernels removed the
pcieport driver test and presume that any driver attached to a bridge
device is ok.
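For reference, a quick way to see which driver is actually bound to a
bridge is through lspci (the 00:01.0 address below is just a
placeholder; substitute your bridge's address):

    # show the kernel driver bound to the bridge device;
    # the "Kernel driver in use:" line names it, e.g. pcieport
    lspci -ks 00:01.0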
Really? From what I understood reading your IOMMU article, plus
from the issues I had getting my own GPU to work on the CPU-based
PCIe slot on my E3-1200, I thought having a PCIe root port grouped
with a PCI device made the GPU unsuited for passthrough. What
recommendations should I give here
<https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Plugging_your_guest_GPU_in_an_unisolated_CPU-based_PCIe_slot>,
then?
The statement "(there's generally only one)" is completely incorrect
regarding processor-based root port slots. That $30k PC that
LinuxTechTips did has 7 processor-based root ports between the 2 sockets.
You're right, I shouldn't have extrapolated from the fact that most of
the consumer hardware I have access to works that way; I'll remove that
line in my next edit.
IOMMU group isolation requires that a group is never shared between
host and guest or between different guests. However, we assume that
bridge devices only do DMA on behalf of the devices downstream of
them, so we allow the bridge to be managed by a host driver. So in
your example, it's possible that the bridge could do redirections, but
the only affected party would be the VM itself. The same is true for
a multi-function device like the GPU itself, internal routing may
allow the devices to perform peer-to-peer internally. So it's not
ideal when the bridge is part of the group, but it generally works and
is allowed because it can't interfere with anyone else.
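To make that concrete, here is a small sketch (standard sysfs paths;
output obviously varies by machine) that lists each IOMMU group and the
devices inside it, so you can see when a bridge lands in the same group
as the GPU:

    # walk every IOMMU group and print the devices it contains
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            printf '\t%s\n' "$(lspci -nns "${d##*/}")"
        done
    done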
Ah, I see. I suppose the issues I was having with my 970 were due to
something else, then. Now that I look back at it, it's probably because
my CPU-based PCIe slot was the only one that could be set as a boot GPU
<https://www.redhat.com/archives/vfio-users/2015-October/msg00005.html>.
I'll try to rework that part and mention that it addresses a much more
specific case than what I initially thought.
I have the identical setup on my E3-1245v2 and haven't had any problems.
The line is actually copy-pasted from your IOMMU blog articles, since my
own machine no longer follows that configuration and I needed a snippet
for that specific example.
On 2016-04-12 18:57, Alex Williamson wrote:
Skimming...
Most of those AMD CPUs in the amd.com link do not support AMD-Vi
I should have double-checked; I was under the impression that RVI and
AMD-Vi were the same thing. The fact that AMD doesn't really maintain
any sort of public centralized database like Intel ARK makes it really
complicated to give advice on this.
User-level access to devices... No, don't do this. System-mode
libvirt manages device permissions. If you want unprivileged,
session-mode libvirt, you need a whole other wiki page.
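To illustrate what "libvirt manages device permissions" means in
practice (the PCI address below is a placeholder), a managed hostdev
entry in the domain XML is all it takes; with managed='yes', libvirt
detaches the device, sets up permissions, and reattaches it to the host
on shutdown, all by itself:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
    </hostdev>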
Binding to VFIO... Gosh I wish those vfio-bind scripts would die.
Just use libvirt, virsh nodedev-detach
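For example (the device name is a placeholder; take the real one from
virsh nodedev-list):

    # detach the device from its host driver so the VFIO stub can claim it
    virsh nodedev-detach pci_0000_01_00_0
    # and hand it back to the host afterwards
    virsh nodedev-reattach pci_0000_01_00_0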
QEMU permissions... WRONG! Don't touch any of this.
Complete example for QEMU with libvirtd... No, qemu:args are the
worst. This hides the assigned device from libvirt and is what causes
you to need to do the QEMU permissions hacks that are completely
wrong. Use a wrapper script!
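A minimal sketch of the wrapper-script idea, assuming
/usr/bin/qemu-system-x86_64 is the real binary; point the domain's
<emulator> element at the wrapper's path instead of at QEMU directly:

    #!/bin/sh
    # libvirt execs this instead of QEMU itself, so any extra
    # arguments can be appended here rather than via qemu:args,
    # and libvirt still knows about the assigned device
    exec /usr/bin/qemu-system-x86_64 "$@"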
As others have said, ignore_msrs makes lots of things work, not just
GeForce Experience
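For anyone wondering, that's the kvm module parameter, settable at
runtime or persistently:

    # at runtime
    echo 1 > /sys/module/kvm/parameters/ignore_msrs
    # or persistently, e.g. in /etc/modprobe.d/kvm.conf:
    # options kvm ignore_msrs=1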
Yeah, I think you're starting to see why a rewrite is in order here. ;)