I get the feeling this has more to do with a certain quirk in some
recent Asus cards. Now that we've got Ruben's passthrough to work, maybe we
could try to gather some info to pinpoint that specific bug.
Ruben, do you think you could try to dump your GPU's firmware from
within your Windows VM and then send it over? Just fire up GPU-Z and
click here <http://img.techpowerup.org/150524/SavingBios.png>. Maybe
there's something we can uncover by analyzing those, provided it's not
a hardware issue of some sort. Don't worry about the VM altering the
ROM in any way either; I tried booting my VM disk natively and dumping
the image from there, and ended up with a completely identical dump.
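For what it's worth, comparing two dumps is just a checksum away; something like this (the filenames here are made up, use whatever GPU-Z saved):

sha256sum strix_gpuz_vm.rom strix_gpuz_native.rom
# or byte-by-byte, which also prints the first differing offsets if there are any:
cmp -l strix_gpuz_vm.rom strix_gpuz_native.rom | head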
On 2016-01-31 02:44, Will Marler wrote:
Hey Ruben,
I think this might be problematic in your XML:
<graphics type='spice' autoport='yes'>
  <image compression='off'/>
</graphics>
[...]
<video>
  <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
[...]
I wouldn't play with the XML directly to remove them; rather, do it
through virt-manager. As a reference, here's what my virt-manager
window looks like: <http://imgur.com/9722s0c>. My guess is you still
have a Spice & QXL device attached.
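A quick way to double-check that they're really gone afterwards is to grep the domain XML (I'm assuming your domain is named Win10Full like in Nicolas' command; substitute yours):

virsh dumpxml Win10Full | grep -iE 'spice|qxl'
# no output = no leftover Spice graphics or QXL video device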
I have an IGD and was completely uninterested in patching the kernel,
so I only ever tried UEFI/OVMF. I tried both Fedora and Arch, and
ultimately preferred Arch, so I installed the ovmf-git package to get
Windows booting. At the time I was working on getting my VM installed,
there was a bug in the released OVMF code that would cause Windows to
reboot during install. I'm not sure whether that's been fixed yet (but
you can search the forums and find me asking about it and getting
pointed in the right direction).
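For reference, the OVMF part of my XML looks roughly like this. Treat the firmware paths and machine type as placeholders (they're my guesses at the Arch ovmf-git locations and will differ per distro); virt-manager normally fills this in for you when you pick UEFI at guest creation:

<os>
  <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/ovmf/x64/ovmf_code_x64.bin</loader>
  <nvram>/var/lib/libvirt/qemu/nvram/win10_VARS.fd</nvram>
</os>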
Removing the Spice & QXL device might be all you need. But if you're
feeling adventurous, here's a fuller suggested solution:
-- Create a new guest using an LVM partition. This allows you to
create snapshots (useful in one particular situation, see below).
-- Spin up the VM guest using UEFI/OVMF. Get Windows 10 to boot &
install. (This might take a few tries; it took me 5 or 6 iirc)
-- Install a VNC server in the guest, and test that it works.
-- (Snapshot & dump the XML and ...) Then, using virt-manager, delete the
non-essential devices from the VM. Make the necessary tweaks to the
XML using virsh (deleting the hyperv stuff, kvm hidden state = on; see
the sketch after this list).
-- Restart the guest and connect with VNC to make sure the guest
still boots. If it doesn't, import the earlier XML & roll back the LVM
snapshot & try again (after creating a new LVM snapshot ofc!).
-- Shut off the guest and configure the VGA devices in virt-manager
to be passed through.
-- Restart the guest and connect over VNC; the Windows Device Manager
should now see the graphics hardware. Install the device drivers.
-- Reboot the guest and switch your monitor input and hold your breath!
-- Once it works, don't forget to delete the LVM snapshot! (filling
it up would be a Bad Time).
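For the "hyperv stuff / kvm hidden" tweak mentioned in the list, the <features> block of the XML ends up looking roughly like this (a sketch rather than my exact config; the point is that the <hyperv> block is gone and kvm hidden is on, which is the usual way to keep the NVIDIA driver from throwing Code 43 inside a VM):

<features>
  <acpi/>
  <apic/>
  <!-- the <hyperv> block is removed entirely -->
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>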
Hope to hear of your success!
Will
On Sat, Jan 30, 2016 at 3:10 AM, Ruben Felgenhauer
<[email protected]> wrote:
Hi,
I tried your method and it definitely did something.
When attaching the device as is, virsh tells me:
error: internal error: Attempted double use of PCI slot
0000:00:02.0 (may need "multifunction='on'" for device on function 0)
If I remove the second <address ... /> tag completely, virsh
attaches the device successfully.
Windows tells me that the GPU has error code 14 (that's new)
and that I should restart.
After the restart, the GPU gives me Code 43 again.
Best regards,
Ruben
On 29.01.2016 at 01:56, Nicolas Roy-Renaud wrote:
Ok, try to remove your passthrough from your guest configuration
(either using virsh or virt-manager). That is: remove the actual
GPU (PCI:1:0.0) but keep the associated sound card (PCI:1:0.1) in
there so virsh knows it needs to bind to this VFIO group.
From there, create a file (let's say ./GPU_DEVICE.xml) containing
just the following:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</hostdev>
You'll be able to use this file to tell libvirt to append your
GPU to the guest's config at runtime, which somehow gets around the
invalid ROM issue. Just run something like this:
virsh start Win10Full && sleep 60 && virsh attach-device Win10Full --live --file
./GPU_DEVICE.xml
If I guessed right, Windows should detect a new GPU and get the
drivers in place once virsh is done attaching it. If that does
work, you'll have to run this same command every time you start
your VM too, or at least until that specific bug is fixed.
Hopefully that should get you some results so you can work your
way from there.
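If retyping that gets old, you can drop it into a small wrapper script and start the VM through that instead (the 60-second sleep is just a guess at how long the guest needs to get far enough into boot):

#!/bin/sh
# start-win10.sh: boot the guest, wait, then hot-attach the GPU
virsh start Win10Full || exit 1
sleep 60
virsh attach-device Win10Full --live --file ./GPU_DEVICE.xml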
On 2016-01-28 17:03, Ryan Flagler wrote:
I was going to recommend you use UEFI, which is why I was
asking. I've personally had better luck getting things to pass
through properly.
Is your VM down when you try to cat the rom? The GPU can't be
in use by anything else.
I had the exact same symptoms on my Asus Strix 970; it looks like a
recurring issue with Asus cards. This happened both when trying
to start a VM with a managed passthrough and when attempting to
dump the ROM from sysfs. I figure it's probably an issue with
vfio-pci itself, and I still haven't fixed it yet, but the
solution I posted above is my current workaround.
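For reference, this is the sysfs procedure that fails on my card (run as root with the VM shut down and nothing else touching the GPU; the PCI address is the card's, same as in the XML above, and the output path is just an example):

cd /sys/bus/pci/devices/0000:01:00.0
echo 1 > rom               # expose the ROM for reading
cat rom > /tmp/gpu_sysfs.rom
echo 0 > rom               # hide it again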
_______________________________________________
vfio-users mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/vfio-users