Re: [ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM
Hi all,

We are pleased to announce another update of Intel GVT-g for KVM.

Intel GVT-g is a full GPU virtualization solution with mediated pass-through, starting from 4th generation Intel Core(TM) processors with Intel Graphics. A virtual GPU instance is maintained for each VM, with part of the performance-critical resources directly assigned. The capability of running a native graphics driver inside a VM, without hypervisor intervention in performance-critical paths, achieves a good balance among performance, features, and sharing capability. KVM is supported by Intel GVT-g (a.k.a. KVMGT).

Repositories:
    Kernel: https://github.com/01org/igvtg-kernel (2015q3-3.18.0 branch)
    Qemu:   https://github.com/01org/igvtg-qemu (kvmgt_public2015q3 branch)

This update consists of:
- KVMGT is now merged with XenGT in unified repositories (kernel and qemu), though currently in different branches for qemu. KVMGT and XenGT share the same iGVT-g core logic.
- PPGTT is supported, hence the Windows guest support.
- KVMGT now supports both 4th generation (Haswell) and 5th generation (Broadwell) Intel Core(TM) processors.
- 2D/3D/media decoding have been validated on Ubuntu 14.04 and Windows 7 / Windows 8.1.

The next update will be around early January 2016.

Known issues:
- At least 2GB of memory is suggested for the VM to run most 3D workloads.
- 3DMark06 running in a Windows VM may have some stability issues.
- Using VLC to play .ogg files may cause mosaic or slow response.

Please subscribe to the mailing list to report bugs, discuss, and/or contribute:
    https://lists.01.org/mailman/listinfo/igvt-g

More information about Intel GVT-g background, architecture, etc. can be found at (may not be up to date):
    https://01.org/igvt-g
    http://www.linux-kvm.org/images/f/f3/01x08b-KVMGT-a.pdf
    https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian

Note: The KVMGT project should be considered a work in progress. As such it is not a complete product, nor should it be considered one. Extra care should be taken when testing and configuring a system to use the KVMGT project.

--
Thanks,
Jike

On 12/04/2014 10:24 AM, Jike Song wrote:
> Hi all,
>
> We are pleased to announce the first release of the KVMGT project. KVMGT is the implementation of Intel GVT-g technology, a full GPU virtualization solution. Under Intel GVT-g, a virtual GPU instance is maintained for each VM, with part of the performance-critical resources directly assigned. The capability of running a native graphics driver inside a VM, without hypervisor intervention in performance-critical paths, achieves a good balance of performance, features, and sharing capability.
>
> KVMGT is still in an early stage:
>
> - Basic functions of full GPU virtualization work; the guest can see a full-featured vGPU. We ran several 3D workloads such as lightsmark, nexuiz, urbanterror and warsow.
> - Only Linux guests are supported so far, and PPGTT must be disabled in the guest through a kernel parameter (see README.kvmgt in QEMU).
> - This drop also includes some Xen-specific changes, which will be cleaned up later.
> - Our end goal is to upstream both XenGT and KVMGT, which share ~90% of the logic for the vGPU device model (which will be part of the i915 driver), with the only difference being the hypervisor-specific services.
> - Test coverage is insufficient, so please bear with stability issues :)
>
> There are things that need to be improved, especially the KVM interfacing part:
>
> 1. A domid was added to each KVMGT guest.
>    An ID is needed for foreground OS switching, e.g.
>
>        # echo <domid> > /sys/kernel/vgt/control/foreground_vm
>
>    domid 0 is reserved for the host OS.
>
> 2. SRCU workarounds.
>    Some KVM functions, such as:
>
>        kvm_io_bus_register_dev
>        install_new_memslots
>
>    must be called *without* kvm->srcu read-locked; otherwise they hang. In KVMGT, we need to register an iodev only *after* BAR registers are written by the guest. That means we already hold kvm->srcu: trapping/emulating PIO (BAR registers) puts us in exactly that condition, so kvm_io_bus_register_dev hangs. Currently we have to disable rcu_assign_pointer() in such functions. These are dirty workarounds; your suggestions are highly welcome!
>
> 3. Syscalls were called to access "/dev/mem" from the kernel.
>    An in-kernel memslot was added for the aperture, but it uses syscalls like open and mmap to open and access the character device "/dev/mem" for pass-through.
>
> The source code (kernel, qemu, as well as seabios) is available at github:
>
>    git://github.com/01org/KVMGT-kernel
>    git://github.com/01org/KVMGT-qemu
>    git://github.com/01org/KVMGT-seabios
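As background for item 2 in the announcement above: kvm_io_bus_register_dev() publishes the new bus array and then waits on kvm->srcu, so calling it from a path that still holds an srcu read lock deadlocks. The sketch below only illustrates that constraint and one conceivable workaround (dropping and re-taking the read lock around the registration); it is not the KVMGT code, and the helper and its surrounding context are hypothetical.

/*
 * Illustrative sketch only, not the actual KVMGT code.
 * kvm_io_bus_register_dev() publishes the new bus with
 * rcu_assign_pointer() and then waits in synchronize_srcu_expedited()
 * on kvm->srcu, so calling it while the current thread still holds an
 * srcu read lock (as MMIO/PIO emulation does) deadlocks.  One
 * conceivable workaround is to drop the read-side lock around the
 * registration; the helper below is hypothetical.
 */
static int kvmgt_register_bar_iodev(struct kvm_vcpu *vcpu,
                                    struct kvm_io_device *dev,
                                    gpa_t bar_base, int len)
{
        struct kvm *kvm = vcpu->kvm;
        int ret;

        /* We arrive here from BAR emulation, i.e. with kvm->srcu held. */
        srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);

        mutex_lock(&kvm->slots_lock);
        ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, bar_base, len, dev);
        mutex_unlock(&kvm->slots_lock);

        /* Re-acquire before returning to the emulation path. */
        vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
        return ret;
}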
Re: [ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM
On 11/12/2014 01:33, Tian, Kevin wrote:
>>> My point is that KVMGT doesn't introduce new requirements beyond what's required in the IGD passthrough case, because all the hacks you see now are there to satisfy the guest graphics driver's expectations.
>>>
>>> I haven't followed up on the KVM IGD passthrough progress, but if it doesn't require ISA bridge hacking, the same trick can be adopted by KVMGT too.
>>
>> Right now it did require ISA bridge hacking.
>
> You may know Allen is working on driver changes to avoid causing those hacks on the Qemu side. That effort will benefit us too.

That's good to know, thanks!

Paolo
Re: [ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM
On 2014-12-04 03:24, Jike Song wrote:
> Hi all,
>
> We are pleased to announce the first release of the KVMGT project. KVMGT is the implementation of Intel GVT-g technology, a full GPU virtualization solution. Under Intel GVT-g, a virtual GPU instance is maintained for each VM, with part of the performance-critical resources directly assigned. The capability of running a native graphics driver inside a VM, without hypervisor intervention in performance-critical paths, achieves a good balance of performance, features, and sharing capability.
>
> KVMGT is still in an early stage:
>
> - Basic functions of full GPU virtualization work; the guest can see a full-featured vGPU. We ran several 3D workloads such as lightsmark, nexuiz, urbanterror and warsow.
> - Only Linux guests are supported so far, and PPGTT must be disabled in the guest through a kernel parameter (see README.kvmgt in QEMU).
> - This drop also includes some Xen-specific changes, which will be cleaned up later.
> - Our end goal is to upstream both XenGT and KVMGT, which share ~90% of the logic for the vGPU device model (which will be part of the i915 driver), with the only difference being the hypervisor-specific services.
> - Test coverage is insufficient, so please bear with stability issues :)
>
> There are things that need to be improved, especially the KVM interfacing part:
>
> 1. A domid was added to each KVMGT guest.
>    An ID is needed for foreground OS switching, e.g.
>
>        # echo <domid> > /sys/kernel/vgt/control/foreground_vm
>
>    domid 0 is reserved for the host OS.
>
> 2. SRCU workarounds.
>    Some KVM functions, such as:
>
>        kvm_io_bus_register_dev
>        install_new_memslots
>
>    must be called *without* kvm->srcu read-locked; otherwise they hang. In KVMGT, we need to register an iodev only *after* BAR registers are written by the guest. That means we already hold kvm->srcu: trapping/emulating PIO (BAR registers) puts us in exactly that condition, so kvm_io_bus_register_dev hangs. Currently we have to disable rcu_assign_pointer() in such functions. These are dirty workarounds; your suggestions are highly welcome!
>
> 3. Syscalls were called to access "/dev/mem" from the kernel.
>    An in-kernel memslot was added for the aperture, but it uses syscalls like open and mmap to open and access the character device "/dev/mem" for pass-through.
>
> The source code (kernel, qemu, as well as seabios) is available at github:
>
>    git://github.com/01org/KVMGT-kernel
>    git://github.com/01org/KVMGT-qemu
>    git://github.com/01org/KVMGT-seabios
>
> In the KVMGT-qemu repository, there is a README.kvmgt to refer to.
>
> More information about Intel GVT-g and KVMGT can be found at:
>
>    https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian
>    http://events.linuxfoundation.org/sites/events/files/slides/KVMGT-a%20Full%20GPU%20Virtualization%20Solution_1.pdf
>
> Appreciate your comments, BUG reports, and contributions!

There is an ever increasing interest in keeping KVM's in-kernel guest interface as small as possible, specifically for security reasons. I'm sure there are some good performance reasons for creating a new in-kernel device model, but I suppose those will need good evidence of why things are done the way they finally should be, and not via a user-space device model. This is likely not a binary decision (all userspace vs. no userspace); it is more about the size and robustness of the in-kernel model vs. its performance.

One aspect could also be important: are there hardware improvements in sight that will eventually help to reduce the in-kernel device model and make the overall design even more robust? How will those changes fit best into a proposed user/kernel split?
Jan

--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
Re: [ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM
CC Kevin.

On 12/09/2014 05:54 PM, Jan Kiszka wrote:
> On 2014-12-04 03:24, Jike Song wrote:
>> [...]
>
> There is an ever increasing interest in keeping KVM's in-kernel guest interface as small as possible, specifically for security reasons. I'm sure there are some good performance reasons for creating a new in-kernel device model, but I suppose those will need good evidence of why things are done the way they finally should be, and not via a user-space device model. This is likely not a binary decision (all userspace vs. no userspace); it is more about the size and robustness of the in-kernel model vs. its performance.
>
> One aspect could also be important: are there hardware improvements in sight that will eventually help to reduce the in-kernel device model and make the overall design even more robust? How will those changes fit best into a proposed user/kernel split?
>
> Jan

--
Thanks,
Jike
Re: [ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM
On Sa, 2014-12-06 at 12:17 +0800, Jike Song wrote:
> On 12/05/2014 04:50 PM, Gerd Hoffmann wrote:
>> A few comments on the kernel stuff (brief look so far, also compile-tested only, intel gfx on my test machine is too old):
>>
>> * Noticed the kernel bits don't even compile when configured as a module. Everything (vgt, i915, kvm) must be compiled into the kernel.
>
> Yes, that's planned to be done along with separating the hypervisor-related code from vgt.

Good.

>> What are the exact requirements for the device? Must it match the host exactly, to not confuse the guest intel graphics driver? Or would something more recent -- such as the q35 emulation qemu has -- be good enough to make things work (assuming we add support for the graphics-related pci config space registers there)?
>
> I don't know what exactly is needed; we also need to have the Windows driver considered. However, I'm quite confident that if things work for IGD passthrough, they will work for GVT-g.

I'd suggest focusing on q35 emulation. q35 is new enough that a version with integrated graphics exists, so the gap we have to close is *much* smaller.

In case guests expect a northbridge matching the chipset generation of the graphics device (which I'd expect is the case, after digging a bit in the igd and agpgart linux driver code), I think we should add proper device emulation for them, i.e. complement q35-pcihost with sandybridge-pcihost + ivybridge-pcihost + haswell-pcihost instead of just copying over the pci ids from the host. Most likely all those variants can share most of the emulation code.

SeaBIOS then can just get support for these three northbridge variants, so we don't need magic pci id switching hacks at all.

>> The patch also adds a dummy isa bridge at 0x1f. Similar question here: What exactly is needed here? Would things work if we simply use the q35 lpc device here?
>
> Ditto.

Ok. Let's try to just use the q35 emulation + q35 lpc device then, instead of adding a second dummy lpc device.

cheers,
  Gerd
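To make the sandybridge-pcihost / ivybridge-pcihost / haswell-pcihost idea concrete, one way such a variant could be expressed in QEMU is as a QOM subtype of the existing Q35 MCH device that only overrides the PCI IDs. The sketch below is purely illustrative: the type name, the "mch" parent type, and the 0x0100 Sandy Bridge DRAM controller ID are assumptions on my part, not code from this series.

/* Hypothetical sketch of the suggestion above: a "sandybridge-pcihost"-style
 * device that reuses the existing Q35 MCH emulation and only overrides the
 * PCI IDs the guest sees.  Type/parent names and the device ID are
 * assumptions for illustration. */
#include "hw/pci/pci.h"
#include "qom/object.h"

#define TYPE_SNB_HOST_DEVICE "sandybridge-mch"

static void snb_host_class_init(ObjectClass *klass, void *data)
{
    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);

    k->vendor_id = PCI_VENDOR_ID_INTEL;
    k->device_id = 0x0100;              /* Sandy Bridge DRAM controller */
    k->revision  = 0;
}

static const TypeInfo snb_host_info = {
    .name       = TYPE_SNB_HOST_DEVICE,
    .parent     = "mch",                /* reuse the Q35 MCH emulation code */
    .class_init = snb_host_class_init,
};

static void snb_host_register_types(void)
{
    type_register_static(&snb_host_info);
}

type_init(snb_host_register_types)

An ivybridge or haswell variant would differ only in the device ID, which is presumably why the variants could share most of the emulation code; wiring such a device into the machine type and teaching SeaBIOS about it would be the larger part of the work.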
Re: [ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM
Hi,

> In KVMGT, we need to register an iodev only *after* BAR registers are written by the guest.

Oh, the guest can write the BAR register at any time. Typically it happens at boot only, but it can also happen at runtime, for example on reboot. I've also seen the kernel redo the pci mappings created by the bios, due to buggy _CRS declarations in the qemu acpi tables.

> https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian

/me goes read this.

A few comments on the kernel stuff (brief look so far, also compile-tested only, intel gfx on my test machine is too old):

* Noticed the kernel bits don't even compile when configured as a module. Everything (vgt, i915, kvm) must be compiled into the kernel.

* Design approach still seems to be i915 on vgt, not the other way around.

Qemu/SeaBIOS bits:

I've seen the host bridge change identity from i440fx to copy-pci-ids-from-host. Guess the reason for this is that seabios uses this device to figure out whether it is running on i440fx or q35. Correct?

What are the exact requirements for the device? Must it match the host exactly, to not confuse the guest intel graphics driver? Or would something more recent -- such as the q35 emulation qemu has -- be good enough to make things work (assuming we add support for the graphics-related pci config space registers there)?

The patch also adds a dummy isa bridge at 0x1f. Similar question here: What exactly is needed here? Would things work if we simply use the q35 lpc device here?

more to come after I've read the paper linked above ...

cheers,
  Gerd
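For reference, the IDs that a "copy-pci-ids-from-host" mode mirrors are simply the host bridge's vendor/device IDs, readable from sysfs at the conventional 0000:00:00.0 address. A minimal standalone sketch (not part of the series) that prints them:

/* Minimal sketch: read the vendor/device IDs of the host's host bridge
 * (conventionally at 0000:00:00.0) from sysfs, i.e. the values a
 * "copy-pci-ids-from-host" mode would mirror into the emulated bridge.
 * The sysfs paths are standard; everything else is illustrative. */
#include <stdio.h>
#include <stdlib.h>

static unsigned read_hex_attr(const char *path)
{
    unsigned val = 0;
    FILE *f = fopen(path, "r");

    if (!f) {
        perror(path);
        exit(1);
    }
    if (fscanf(f, "%x", &val) != 1)
        val = 0;
    fclose(f);
    return val;
}

int main(void)
{
    unsigned vendor = read_hex_attr("/sys/bus/pci/devices/0000:00:00.0/vendor");
    unsigned device = read_hex_attr("/sys/bus/pci/devices/0000:00:00.0/device");

    printf("host bridge: %04x:%04x\n", vendor, device);
    return 0;
}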
Re: [ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM
On 05/12/2014 09:50, Gerd Hoffmann wrote:
> A few comments on the kernel stuff (brief look so far, also compile-tested only, intel gfx on my test machine is too old):
>
> * Noticed the kernel bits don't even compile when configured as a module. Everything (vgt, i915, kvm) must be compiled into the kernel.

I'll add that the patch is basically impossible to review with all the XenGT bits still in. For example, the x86 emulator seems to be unnecessary for KVMGT, but I am not 100% sure.

I would like a clear understanding of why/how Andrew Barnes was able to do i915 passthrough (GVT-d) without hacking the ISA bridge, and why this does not apply to GVT-g.

Paolo

> * Design approach still seems to be i915 on vgt, not the other way around.
>
> Qemu/SeaBIOS bits:
>
> I've seen the host bridge change identity from i440fx to copy-pci-ids-from-host. Guess the reason for this is that seabios uses this device to figure out whether it is running on i440fx or q35. Correct?
>
> What are the exact requirements for the device? Must it match the host exactly, to not confuse the guest intel graphics driver? Or would something more recent -- such as the q35 emulation qemu has -- be good enough to make things work (assuming we add support for the graphics-related pci config space registers there)?
>
> The patch also adds a dummy isa bridge at 0x1f. Similar question here: What exactly is needed here? Would things work if we simply use the q35 lpc device here?
>
> more to come after I've read the paper linked above ...
>
> cheers,
>   Gerd
Re: [ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM
On 12/05/2014 04:50 PM, Gerd Hoffmann wrote:
> A few comments on the kernel stuff (brief look so far, also compile-tested only, intel gfx on my test machine is too old):
>
> * Noticed the kernel bits don't even compile when configured as a module. Everything (vgt, i915, kvm) must be compiled into the kernel.

Yes, that's planned to be done along with separating the hypervisor-related code from vgt.

> * Design approach still seems to be i915 on vgt, not the other way around.

So far yes.

> Qemu/SeaBIOS bits:
>
> I've seen the host bridge change identity from i440fx to copy-pci-ids-from-host. Guess the reason for this is that seabios uses this device to figure out whether it is running on i440fx or q35. Correct?

I did a trick in seabios/qemu. The purpose is to make qemu:

    - provide the IDs of an old host bridge to SeaBIOS
    - provide the IDs of the new host bridge (the physical ones) to the guest OS

So I made seabios tell qemu that POST is done before jumping to the guest OS context. This may be the simplest method to make things work, but yes, the q35 emulation in qemu may make this unnecessary; see below.

> What are the exact requirements for the device? Must it match the host exactly, to not confuse the guest intel graphics driver? Or would something more recent -- such as the q35 emulation qemu has -- be good enough to make things work (assuming we add support for the graphics-related pci config space registers there)?

I don't know what exactly is needed; we also need to have the Windows driver considered. However, I'm quite confident that if things work for IGD passthrough, they will work for GVT-g.

> The patch also adds a dummy isa bridge at 0x1f. Similar question here: What exactly is needed here? Would things work if we simply use the q35 lpc device here?

Ditto.

> more to come after I've read the paper linked above ...

Thanks for the review :)

> cheers,
>   Gerd

--
Thanks,
Jike
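A minimal sketch of the ID-switching idea described above, for illustration only: the emulated host bridge answers with legacy i440FX IDs until firmware signals end-of-POST, then answers with the physical host bridge IDs. The structure, field names, and notification mechanism are all hypothetical; this is not the actual seabios/qemu patch.

/* Hypothetical sketch of the trick described above: the emulated host
 * bridge reports legacy i440FX IDs while SeaBIOS runs, then reports the
 * physical host bridge IDs once firmware has signalled end-of-POST.
 * Field names, the notification mechanism, and helper names are all
 * assumptions made for illustration.
 */
#include <stdbool.h>
#include <stdint.h>

#define PCI_VENDOR_ID_INTEL        0x8086
#define PCI_DEVICE_ID_INTEL_82441  0x1237   /* i440FX host bridge */

struct igd_host_bridge {
    bool     post_done;      /* set when firmware signals end of POST */
    uint16_t host_vendor_id; /* copied from the physical 00:00.0      */
    uint16_t host_device_id;
};

/* Called by the (hypothetical) end-of-POST notification from SeaBIOS,
 * e.g. a write to a well-known I/O port or fw_cfg entry. */
static void igd_post_done_notify(struct igd_host_bridge *hb)
{
    hb->post_done = true;
}

/* Config-space read hook for the vendor/device ID dword: switch identity
 * after POST. */
static uint32_t igd_host_bridge_read_id(struct igd_host_bridge *hb)
{
    if (!hb->post_done)
        return PCI_VENDOR_ID_INTEL | ((uint32_t)PCI_DEVICE_ID_INTEL_82441 << 16);

    return hb->host_vendor_id | ((uint32_t)hb->host_device_id << 16);
}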
Re: [ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM
CC Andy :)

On 12/05/2014 09:03 PM, Paolo Bonzini wrote:
> On 05/12/2014 09:50, Gerd Hoffmann wrote:
>> A few comments on the kernel stuff (brief look so far, also compile-tested only, intel gfx on my test machine is too old):
>>
>> * Noticed the kernel bits don't even compile when configured as a module. Everything (vgt, i915, kvm) must be compiled into the kernel.
>
> I'll add that the patch is basically impossible to review with all the XenGT bits still in. For example, the x86 emulator seems to be unnecessary for KVMGT, but I am not 100% sure.

This is not ready for merging yet, please wait for a while; we'll have the Xen/KVM specific code separated. BTW, you are definitely right: the emulator is unnecessary for KVMGT, and ... unnecessary for XenGT as well :)

> I would like a clear understanding of why/how Andrew Barnes was able to do i915 passthrough (GVT-d) without hacking the ISA bridge, and why this does not apply to GVT-g.

AFAIK, the graphics driver needs to figure out the offsets of some MMIO registers from the IDs of this ISA bridge. It simply won't work without this information. I talked with Andy about the pass-through, but I don't have his implementation; CC Andy for his advice :)

> Paolo

Thanks for the review. Would you please also have a look at the issues I mentioned in the original email? They are mostly KVM-related: the SRCU trickiness, the domid, and the memslot created in the kernel. Thank you!

--
Thanks,
Jike
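To illustrate why those ISA bridge IDs matter to the guest driver: i915 determines the PCH generation (and with it, details of the display register layout) by scanning for an Intel ISA-bridge-class device and masking its device ID. The sketch below mirrors the general pattern of intel_detect_pch() in the i915 driver; the constant names are abbreviations made up here, though the device-ID prefixes correspond to real PCH families.

/* Simplified sketch of the kind of PCH detection the i915 driver does
 * (see intel_detect_pch() in drivers/gpu/drm/i915): it looks up the
 * Intel ISA/LPC bridge and uses its device ID to decide which PCH
 * generation, and therefore which display register layout, it is
 * talking to.  This is why a faked ISA bridge with plausible IDs has
 * to exist in the guest.  Constant names are abbreviated here.
 */
#include <linux/pci.h>

#define PCH_DEVICE_ID_MASK      0xff00
#define PCH_IBX_DEVICE_ID_TYPE  0x3b00  /* Ibex Peak (Ironlake)        */
#define PCH_CPT_DEVICE_ID_TYPE  0x1c00  /* Cougar Point (Sandy Bridge) */
#define PCH_LPT_DEVICE_ID_TYPE  0x8c00  /* Lynx Point (Haswell)        */

enum pch_type { PCH_NONE, PCH_IBX, PCH_CPT, PCH_LPT };

static enum pch_type detect_pch_type(void)
{
    struct pci_dev *pch = NULL;
    enum pch_type type = PCH_NONE;

    /* Walk all ISA-bridge-class devices; the Intel LPC bridge at 00:1f.0
     * is what we are really after. */
    while ((pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, pch)) != NULL) {
        if (pch->vendor != PCI_VENDOR_ID_INTEL)
            continue;

        switch (pch->device & PCH_DEVICE_ID_MASK) {
        case PCH_IBX_DEVICE_ID_TYPE:
            type = PCH_IBX;
            break;
        case PCH_CPT_DEVICE_ID_TYPE:
            type = PCH_CPT;
            break;
        case PCH_LPT_DEVICE_ID_TYPE:
            type = PCH_LPT;
            break;
        }
        if (type != PCH_NONE) {
            pci_dev_put(pch);
            break;
        }
    }
    return type;
}

A guest without a suitably-identified ISA bridge would fall through this detection and the driver would misjudge the display engine it is driving, which matches Jike's explanation of why the dummy bridge at 0x1f is currently needed.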