Well, it looks to me like your GeForce GTX 970 is correctly claimed by vfio-pci, so if you pass it to a VM, the VM should be able to see it. I'd suggest removing <timer name='hypervclock' present='yes'/> from your XML file and accessing the VM via VNC. You should then be able to open the Windows Device Manager and see the video card there (where I actually suspect you'll currently see an Error 43, because of the hypervclock line).
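For reference, that timer normally sits inside the <clock> element of the domain XML. A minimal sketch of what to look for (the offset value and the other timer lines here are assumptions for illustration, not taken from your config):

```xml
<clock offset='localtime'>
  <timer name='rtc' tickpolicy='catchup'/>
  <timer name='pit' tickpolicy='delay'/>
  <timer name='hpet' present='no'/>
  <!-- delete this line for the test: -->
  <timer name='hypervclock' present='yes'/>
</clock>
```

You can edit it with `virsh edit <domain>`; note the change only takes effect on the next cold start of the domain, not on a guest reboot.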
On Sun, Jan 17, 2016 at 1:36 AM, Nicolas Roy-Renaud <[email protected]> wrote:

> Here's the output of some of the more common diagnostic commands. I'm
> also attaching my libvirt XML and the ROM I'm using on my guest GPU.
>
> [root@OCCAM user]# cat /proc/cmdline
> initrd=\intel-ucode.img initrd=\initramfs-linux.img
> root=PARTUUID=facab1af-8406-4245-881d-3bfca920f0cd rw intel_iommu=on
> iommu=pt rd.driver.pre=vfio-pci video=efifb:off vfio-pci.disable_vga=1
>
> [root@OCCAM user]# cat /etc/modprobe.d/*
> #options kvm ignore_msrs=1
> options vfio-pci ids=10de:13c2,10de:0fbb disable_vga=1
>
> [root@OCCAM user]# find /sys/kernel/iommu_groups/ -type l
> /sys/kernel/iommu_groups/0/devices/0000:00:00.0
> /sys/kernel/iommu_groups/1/devices/0000:00:01.0
> /sys/kernel/iommu_groups/1/devices/0000:01:00.0
> /sys/kernel/iommu_groups/1/devices/0000:01:00.1
> /sys/kernel/iommu_groups/2/devices/0000:00:14.0
> /sys/kernel/iommu_groups/3/devices/0000:00:16.0
> /sys/kernel/iommu_groups/4/devices/0000:00:1a.0
> /sys/kernel/iommu_groups/5/devices/0000:00:1b.0
> /sys/kernel/iommu_groups/6/devices/0000:00:1c.0
> /sys/kernel/iommu_groups/7/devices/0000:00:1c.1
> /sys/kernel/iommu_groups/8/devices/0000:00:1c.3
> /sys/kernel/iommu_groups/8/devices/0000:04:00.0
> /sys/kernel/iommu_groups/9/devices/0000:00:1c.4
> /sys/kernel/iommu_groups/10/devices/0000:00:1d.0
> /sys/kernel/iommu_groups/11/devices/0000:00:1f.0
> /sys/kernel/iommu_groups/11/devices/0000:00:1f.2
> /sys/kernel/iommu_groups/11/devices/0000:00:1f.3
> /sys/kernel/iommu_groups/12/devices/0000:03:00.0
> /sys/kernel/iommu_groups/13/devices/0000:06:00.0
> /sys/kernel/iommu_groups/13/devices/0000:06:00.1
>
> [root@OCCAM user]# lspci -nnk
> 00:00.0 Host bridge [0600]: Intel Corporation Xeon E3-1200 v2/Ivy Bridge DRAM Controller [8086:0158] (rev 09)
>     Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:7758]
>     Kernel modules: ie31200_edac
> 00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port [8086:0151] (rev 09)
>     Kernel driver in use: pcieport
>     Kernel modules: shpchp
> ==================snip========================
> 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
>     Subsystem: ASUSTeK Computer Inc. Device [1043:8508]
>     Kernel driver in use: vfio-pci
>     Kernel modules: nouveau
> 01:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)
>     Subsystem: ASUSTeK Computer Inc. Device [1043:8508]
>     Kernel driver in use: vfio-pci
>     Kernel modules: snd_hda_intel
> 03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 06)
>     Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:7758]
>     Kernel driver in use: r8169
>     Kernel modules: r8169
> 04:00.0 PCI bridge [0604]: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge [1b21:1080] (rev 01)
>     Kernel modules: shpchp
> 06:00.0 VGA compatible controller [0300]: NVIDIA Corporation GT218 [GeForce G210] [10de:0a60] (rev a2)
>     Subsystem: PC Partner Limited / Sapphire Technology Device [174b:2180]
>     Kernel driver in use: nouveau
>     Kernel modules: nouveau
> 06:00.1 Audio device [0403]: NVIDIA Corporation High Definition Audio Controller [10de:0be3] (rev a1)
>     Subsystem: PC Partner Limited / Sapphire Technology Device [174b:2180]
>     Kernel driver in use: snd_hda_intel
>     Kernel modules: snd_hda_intel
>
> [root@OCCAM user]# dmesg -w  # when starting a VM
> [ 4378.349041] device vnet0 entered promiscuous mode
> [ 4378.362333] virbr0: port 2(vnet0) entered listening state
> [ 4378.362343] virbr0: port 2(vnet0) entered listening state
> [ 4379.134931] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x1e@0x258
> [ 4379.134938] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x900
> [ 4380.367958] virbr0: port 2(vnet0) entered learning state
> [ 4382.371677] virbr0: topology change detected, propagating
> [ 4382.371685] virbr0: port 2(vnet0) entered forwarding state
> [ 4384.215276] kvm: zapping shadow pages for mmio generation wraparound
> [ 4384.219678] kvm: zapping shadow pages for mmio generation wraparound
> [ 4396.767174] kvm [1661]: vcpu2 unhandled rdmsr: 0x641
>
> ________________________________________
> From: [email protected] [[email protected]] on behalf of Nicolas Roy-Renaud [[email protected]]
> Sent: January 17, 2016 03:03
> To: [email protected]
> Subject: [vfio-users] "No signal" on dual Nvidia setup
>
> For the last few days now, I've been trying to get GPU passthrough to
> work on my computer, but I haven't been able to get the VM to output
> anything on my passthrough monitor at all (I've had to either rely on
> a QXL adapter or just boot the drive bare metal). Here's my situation:
>
> I'm using two dedicated NVIDIA GPUs. One is an Asus GTX 970, which I
> want to pass through (PCI 01:00.0; IOMMU group 1), and the other is an
> old OEM G210, which runs the host (PCI 06:00.0; IOMMU group 13). Since
> the 970 is set as my primary GPU, it displays my BIOS and bootloader
> until Linux boots, at which point its framebuffer is disabled and
> vfio-pci latches onto it. The G210, however, is still managed by the
> nouveau driver. Note that from the moment Linux starts up until I run
> a VM that uses the passthrough card, the guest card's framebuffer
> remains untouched and keeps showing my bootloader (systemd-boot). It
> gets flushed as soon as I start my Windows VM, and from then on the
> monitor receives no signal.
>
> My CPU (Xeon E3-1230 v2) and motherboard (MSI Z77-G43) both seem to
> support IOMMU. I've looked into the GPU ROM, which does appear to
> support EFI according to rom_parser, even though TechPowerUp says it
> shouldn't, and whether I inject a ROM or use the embedded one doesn't
> change the final result (although I do get "Invalid ROM content"
> warnings if I do the latter). Booting the guest with an extra QXL
> video adapter forces Windows to disable the guest card, and
> re-enabling it causes an immediate blue screen (and nothing on the
> monitor plugged into the guest card). No Error 43 there (yet),
> although I am using QEMU 2.5 with the CPU's hv_vendor_id blanked out.
> x-vga flat out refuses to work, as my guest GPU doesn't support it
> according to QEMU.
>
> I'm not really sure where to go from here, so I thought I'd at least
> try my luck here before giving up. Actual logs will follow.
>
> _______________________________________________
> vfio-users mailing list
> [email protected]
> https://www.redhat.com/mailman/listinfo/vfio-users
_______________________________________________
vfio-users mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/vfio-users
