Re: [Xen-devel] [Intel-gfx] [Announcement] 2016-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2016-11-06 Thread Jike Song
Hi all,

We are pleased to announce another update of Intel GVT-g for Xen.

Intel GVT-g is a full GPU virtualization solution with mediated pass-through, 
starting from 4th generation Intel Core(TM) processors with Intel Graphics 
processors. A virtual GPU instance is maintained for each VM, with part of 
performance critical resources directly assigned. The capability of running 
native graphics driver inside a VM, without hypervisor intervention in 
performance critical paths, achieves a good balance among performance, feature, 
and sharing capability. Xen is currently supported on Intel Processor Graphics 
(a.k.a. XenGT).


Repositories

-Xen: https://github.com/01org/igvtg-xen (2016q3-4.6 branch)
-Kernel: https://github.com/01org/igvtg-kernel (2016q3-4.3.0 branch)
-Qemu: https://github.com/01org/igvtg-qemu (2016q3-2.3.0 branch)


This update consists of:

-Preliminary support for a new platform: 7th generation Intel® Core™ processors. 
For Windows guests, only Windows 10 RedStone 64-bit is currently supported.

-Windows 10 RedStone guest support

-Windows guest QoS preliminary support: administrators can now cap the 
maximum amount of vGPU resource consumed by each VM, with values from 1% to 99%

-Display virtualization preliminary support: besides tracking display 
register accesses in the guest VM, display pipeline information that is 
irrelevant to the guest is now removed between host and guest VM

-Live migration and savevm/restorevm preliminary support on BDW with 2D/3D 
workloads running



Known issues:

-   At least 2GB of memory is suggested for a guest virtual machine (win7-32/64, 
win8.1-64, win10-64) to run most 3D workloads

-   Fast boot is not supported on Windows 8 and later; the workaround is to 
disable the S3/S4 power states in the HVM config file by adding "acpi_S3=0, acpi_S4=0"

-   Sometimes when dom0 and the guest both have heavy workloads, i915 in dom0 will 
trigger a spurious TDR (a false alarm). The workaround is to disable dom0 hangcheck 
by adding "i915.enable_hangcheck=0" to the dom0 grub file

-   Stability: when the QoS feature is enabled, a Windows guest full GPU reset is 
often triggered during MTBF testing.  This bug will be fixed in the next release

-   Running OpenCL allocations in a Windows guest may cause a host crash; the 
workaround is to disable logd by adding "i915.logd_enable=0" to the dom0 grub file
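For reference, here is where the workarounds above go, sketched as config fragments (the grub file path and surrounding options are illustrative, not part of this announcement):

```text
# dom0 kernel command line (e.g. /etc/default/grub), for the
# hangcheck and logd workarounds:
GRUB_CMDLINE_LINUX="... i915.enable_hangcheck=0 i915.logd_enable=0"

# guest HVM config file, for the Windows fast-boot workaround:
acpi_S3 = 0
acpi_S4 = 0
```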


Next update will be around early Jan, 2017.


GVT-g project portal: https://01.org/igvt-g
Please subscribe mailing list: https://lists.01.org/mailman/listinfo/igvt-g


More information about background, architecture and others about Intel GVT-g, 
can be found at:

https://01.org/igvt-g
https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-v7_0.pdf

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-REWRITE%203RD%20v4.pdf
https://01.org/xen/blogs/srclarkx/2013/graphics-virtualization-xengt


Note: The XenGT project should be considered a work in progress. As such it is 
not a complete product nor should it be considered one. Extra care should be 
taken when testing and configuring a system to use the XenGT project.

--
Thanks,
Jike

On 07/22/2016 01:42 PM, Jike Song wrote:
> Hi all,
> 
> We are pleased to announce another update of Intel GVT-g for Xen.
> 
> Intel GVT-g is a full GPU virtualization solution with mediated pass-through, 
> starting from 4th generation Intel Core(TM) processors with Intel Graphics 
> processors. A virtual GPU instance is maintained for each VM, with part of 
> performance critical resources directly assigned. The capability of running 
> native graphics driver inside a VM, without hypervisor intervention in 
> performance critical paths, achieves a good balance among performance, 
> feature, and sharing capability. Xen is currently supported on Intel 
> Processor Graphics (a.k.a. XenGT).
> 
> Repositories
> -Xen: https://github.com/01org/igvtg-xen (2016q2-4.6 branch)
> -Kernel: https://github.com/01org/igvtg-kernel (2016q2-4.3.0 branch)
> -Qemu: https://github.com/01org/igvtg-qemu (2016q2-2.3.0 branch)
> 
> This update consists of:
> -Support Windows 10 guest
> -Support Windows Graphics driver installation on both Windows Normal mode 
> and Safe mode
> 
> Known issues:
> -   At least 2GB memory is suggested for Guest Virtual Machine (VM) to run 
> most 3D workloads
> -   Dom0 S3 related feature is not supported
> -   Windows 8 and later versions: fast boot is not supported, the workaround 
> is to disable power S3/S4 in HVM file by adding "acpi_S3=0, acpi_S4=0"
> -   Using Windows Media Player play videos may cause host crash. Using VLC to 
> play .ogg file may cause mosaic or slow response.
> -   Sometimes when both dom0 and guest have heavy workloads, i915 in dom0 
> will trigger a false graphics reset,
> the workaround is to disable dom0 hangcheck in grub file by adding 
> "i915.enable_hangcheck=0".

Re: [Xen-devel] [Intel-gfx] [Announcement] 2016-Q2 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2016-07-21 Thread Jike Song
Hi all,

We are pleased to announce another update of Intel GVT-g for Xen.

Intel GVT-g is a full GPU virtualization solution with mediated pass-through, 
starting from 4th generation Intel Core(TM) processors with Intel Graphics 
processors. A virtual GPU instance is maintained for each VM, with part of 
performance critical resources directly assigned. The capability of running 
native graphics driver inside a VM, without hypervisor intervention in 
performance critical paths, achieves a good balance among performance, feature, 
and sharing capability. Xen is currently supported on Intel Processor Graphics 
(a.k.a. XenGT).

Repositories
-Xen: https://github.com/01org/igvtg-xen (2016q2-4.6 branch)
-Kernel: https://github.com/01org/igvtg-kernel (2016q2-4.3.0 branch)
-Qemu: https://github.com/01org/igvtg-qemu (2016q2-2.3.0 branch)

This update consists of:
-Support Windows 10 guest
-Support Windows Graphics driver installation in both Windows Normal mode 
and Safe mode

Known issues:
-   At least 2GB of memory is suggested for a guest virtual machine (VM) to run most 
3D workloads
-   Dom0 S3 related features are not supported
-   Fast boot is not supported on Windows 8 and later; the workaround is to disable 
the S3/S4 power states in the HVM config file by adding "acpi_S3=0, acpi_S4=0"
-   Using Windows Media Player to play videos may cause a host crash. Using VLC to 
play .ogg files may cause mosaic artifacts or slow response.
-   Sometimes when both dom0 and the guest have heavy workloads, i915 in dom0 will 
trigger a false graphics reset; the workaround is to disable dom0 hangcheck in the 
grub file by adding "i915.enable_hangcheck=0".

Next update will be around early Oct, 2016.

GVT-g project portal: https://01.org/igvt-g
Please subscribe mailing list: https://lists.01.org/mailman/listinfo/igvt-g

More information about background, architecture and others about Intel GVT-g, 
can be found at:
https://01.org/igvt-g
https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-v7_0.pdf

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-REWRITE%203RD%20v4.pdf
https://01.org/xen/blogs/srclarkx/2013/graphics-virtualization-xengt


Note: The XenGT project should be considered a work in progress. As such it is 
not a complete product nor should it be considered one. Extra care should be 
taken when testing and configuring a system to use the XenGT project.

--
Thanks,
Jike

On 04/28/2016 01:29 PM, Jike Song wrote:
> Hi all,
> 
> We are pleased to announce another update of Intel GVT-g for Xen.
> 
> Intel GVT-g is a full GPU virtualization solution with mediated pass-through, 
> starting from 4th generation Intel Core(TM) processors with Intel Graphics 
> processors. A virtual GPU instance is maintained for each VM, with part of 
> performance critical resources directly assigned. The capability of running 
> native graphics driver inside a VM, without hypervisor intervention in 
> performance critical paths, achieves a good balance among performance, 
> feature, and sharing capability. Xen is currently supported on Intel 
> Processor Graphics (a.k.a. XenGT).
> 
> 
> Repositories
> -
> 
> Kernel: https://github.com/01org/igvtg-kernel (2016q1-4.3.0 branch)
> Xen: https://github.com/01org/igvtg-xen (2016q1-4.6 branch)
> Qemu: https://github.com/01org/igvtg-qemu (2016q1-2.3.0 branch)
> 
> This update consists of:
> -Windows 10 guest is preliminarily supported in this release. 
> -Implemented vgtbuffer(Indirect display) feature on SKL platform.
> -Backward compatibility support 5th generation (Broadwell)
> -Increased VGT stability on SKL platform
> -Kernel updated from drm-intel 4.2.0 to drm-intel 4.3.0
> -Xen updated from Xen 4.5.0 to Xen 4.6.0
> -Qemu updated from 1.6 to 2.3
> 
> Known issues:
> -At least 2GB memory is suggested for VM(win7-32/64, win8.1 64) to run 
> most 3D workloads.
> -Windows 7 GFX driver upgrading only works on Safe mode.
> -Some media decode can't work well (will be resolved in the next version 
> Windows GFX driver). 
> -Windows8 and later Windows fast boot is not supported, whose workaround 
> is to disable power S3/S4 in HVM file by adding "acpi_s3=0, acpi_s4=0"
> -Sometimes when dom0 and guest have heavy workload, i915 in dom0 will 
> trigger a false graphics reset. The workaround is to disable dom0 hangcheck 
> in dom0 grub file by adding "i915.enable_hangcheck=0"
> 
> Next update will be around early July, 2016.
> 
> GVT-g project portal:
>   https://01.org/igvt-g
> 
> Please subscribe the mailing list:
>   https://lists.01.org/mailman/listinfo/igvt-g
> 
> 
> More information about background, architecture and others about Intel GVT-g, 
> can be found at: https://01.org/igvt-g

Re: [Xen-devel] [Intel-gfx] [Announcement] 2016-Q1 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2016-04-27 Thread Jike Song
Hi all,

We are pleased to announce another update of Intel GVT-g for Xen.

Intel GVT-g is a full GPU virtualization solution with mediated pass-through, 
starting from 4th generation Intel Core(TM) processors with Intel Graphics 
processors. A virtual GPU instance is maintained for each VM, with part of 
performance critical resources directly assigned. The capability of running 
native graphics driver inside a VM, without hypervisor intervention in 
performance critical paths, achieves a good balance among performance, feature, 
and sharing capability. Xen is currently supported on Intel Processor Graphics 
(a.k.a. XenGT).


Repositories
-

Kernel: https://github.com/01org/igvtg-kernel (2016q1-4.3.0 branch)
Xen: https://github.com/01org/igvtg-xen (2016q1-4.6 branch)
Qemu: https://github.com/01org/igvtg-qemu (2016q1-2.3.0 branch)

This update consists of:
-Windows 10 guest is preliminarily supported in this release.
-Implemented the vgtbuffer (indirect display) feature on the SKL platform.
-Backward-compatible support for 5th generation Intel Core processors (Broadwell)
-Increased VGT stability on the SKL platform
-Kernel updated from drm-intel 4.2.0 to drm-intel 4.3.0
-Xen updated from Xen 4.5.0 to Xen 4.6.0
-Qemu updated from 1.6 to 2.3

Known issues:
-At least 2GB of memory is suggested for a VM (win7-32/64, win8.1-64) to run most 
3D workloads.
-Windows 7 GFX driver upgrading only works in Safe mode.
-Some media decoding doesn't work well (this will be resolved in the next version 
of the Windows GFX driver).
-Fast boot is not supported on Windows 8 and later; the workaround is to disable 
the S3/S4 power states in the HVM config file by adding "acpi_s3=0, acpi_s4=0"
-Sometimes when dom0 and the guest have heavy workloads, i915 in dom0 will 
trigger a false graphics reset. The workaround is to disable dom0 hangcheck by 
adding "i915.enable_hangcheck=0" to the dom0 grub file

Next update will be around early July, 2016.

GVT-g project portal:
https://01.org/igvt-g

Please subscribe the mailing list:
https://lists.01.org/mailman/listinfo/igvt-g


More information about background, architecture and others about Intel GVT-g, 
can be found at:

https://01.org/igvt-g
https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-v7_0.pdf

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-REWRITE%203RD%20v4.pdf
https://01.org/xen/blogs/srclarkx/2013/graphics-virtualization-xengt


Note: The XenGT project should be considered a work in progress. As such it is 
not a complete product nor should it be considered one. Extra care should be 
taken when testing and configuring a system to use the XenGT project.


--
Thanks,
Jike

On 01/27/2016 02:21 PM, Jike Song wrote:
> Hi all,
> 
> We are pleased to announce another update of Intel GVT-g for Xen.
> 
> Intel GVT-g is a full GPU virtualization solution with mediated pass-through, 
> starting from 4th generation Intel Core(TM) processors with Intel Graphics 
> processors. A virtual GPU instance is maintained for each VM, with part of 
> performance critical resources directly assigned. The capability of running 
> native graphics driver inside a VM, without hypervisor intervention in 
> performance critical paths, achieves a good balance among performance, 
> feature, and sharing capability. Xen is currently supported on Intel 
> Processor Graphics (a.k.a. XenGT).
> 
> Repositories
> -
> 
> Kernel: https://github.com/01org/igvtg-kernel (2015q4-4.2.0 branch)
> Xen: https://github.com/01org/igvtg-xen (2015q4-4.5 branch)
> Qemu: https://github.com/01org/igvtg-qemu (xengt_public2015q4 branch)
> 
> This update consists of:
> 
>   - 6th generation Intel Core Processor (code name: Skylake) is 
> preliminarily supported in this release. Users could start run multiple 
> Windows / Linux virtual machines simultaneously, and switch display among 
> them.
>   - Backward compatibility support 4th generation Intel Core Processor 
> (code name: Haswell) and 5th generation Intel Core Processor (code name: 
> Broadwell).
>   - Kernel update from drm-intel 3.18.0 to drm-intel 4.2.0.
> 
> Known issues:
>- At least 2GB memory is suggested for a VM to run most 3D workloads.
>- Keymap might be incorrect in guest. Config file may need to explicitly 
> specify "keymap='en-us'". Although it looks like the default value, earlier 
> we saw the problem of wrong keymap code if it is not explicitly set.
>- Cannot move mouse pointer smoothly in guest by default launched by VNC 
> mode. Configuration file need to explicitly specify "usb=1" to enable a USB 
> bus, and "usbdevice='tablet'" to add pointer device using absolute coordinates.

Re: [Xen-devel] [iGVT-g] [vfio-users] [PATCH v3 00/11] igd passthrough chipset tweaks

2016-01-28 Thread Jike Song
On 01/29/2016 10:54 AM, Alex Williamson wrote:
> On Fri, 2016-01-29 at 02:22 +, Kay, Allen M wrote:
>>  
>>> -Original Message-
>>> From: iGVT-g [mailto:igvt-g-boun...@lists.01.org] On Behalf Of Alex
>>> Williamson
>>> Sent: Thursday, January 28, 2016 11:36 AM
>>> To: Gerd Hoffmann; qemu-de...@nongnu.org
>>> Cc: igv...@ml01.01.org; xen-de...@lists.xensource.com; Eduardo Habkost;
>>> Stefano Stabellini; Cao jin; vfio-us...@redhat.com
>>> Subject: Re: [iGVT-g] [vfio-users] [PATCH v3 00/11] igd passthrough chipset
>>> tweaks
>>>  
>>>  
>>> 1) The OpRegion MemoryRegion is mapped into system_memory through
>>> programming of the 0xFC config space register.
>>>  a) vfio-pci could pick an address to do this as it is realized.
>>>  b) SeaBIOS/OVMF could program this.
>>>  
>>> Discussion: 1.a) Avoids any BIOS dependency, but vfio-pci would need to pick
>>> an address and mark it as e820 reserved.  I'm not sure how to pick that
>>> address.  We'd probably want to make the 0xFC config register read-
>>> only.  1.b) has the issue you mentioned where in most cases the OpRegion
>>> will be 8k, but the BIOS won't know how much address space it's mapping
>>> into system memory when it writes the 0xFC register.  I don't know how
>>> much of a problem this is since the BIOS can easily determine the size once
>>> mapped and re-map it somewhere there's sufficient space.
>>> Practically, it seems like it's always going to be 8K.  This of course 
>>> requires
>>> modification to every BIOS.  It also leaves the 0xFC register as a mapping
>>> control rather than a pointer to the OpRegion in RAM, which doesn't really
>>> match real hardware.  The BIOS would need to pick an address in this case.
>>>  
>>> 2) Read-only mappings version of 1)
>>>  
>>> Discussion: Really nothing changes from the issues above, just prevents any
>>> possibility of the guest modifying anything in the host.  Xen apparently 
>>> allows
>>> write access to the host page already.
>>>  
>>> 3) Copy OpRegion contents into buffer and do either 1) or 2) above.
>>>  
>>> Discussion: No benefit that I can see over above other than maybe allowing
>>> write access that doesn't affect the host.
>>>  
>>> 4) Copy contents into a guest RAM location, mark it reserved, point to it 
>>> via
>>> 0xFC config as scratch register.
>>>  a) Done by QEMU (vfio-pci)
>>>  b) Done by SeaBIOS/OVMF
>>>  
>>> Discussion: This is the most like real hardware.  4.a) has the usual issue 
>>> of
>>> how to pick an address, but the benefit of not requiring BIOS changes 
>>> (simply
>>> mark the RAM reserved via existing methods).  4.b) would require passing a
>>> buffer containing the contents of the OpRegion via fw_cfg and letting the
>>> BIOS do the setup.  The latter of course requires modifying each BIOS for 
>>> this
>>> support.
>>>  
>>> Of course none of these support hotplug nor really can they since reserved
>>> memory regions are not dynamic in the architecture.
>>>  
>>> In all cases, some piece of software needs to know where it can place the
>>> OpRegion in guest memory.  It seems like there are advantages or
>>> disadvantages whether that's done by QEMU or the BIOS, but we only need
>>> to do it once if it's QEMU.  Suggestions, comments, preferences?
>>>  
>>  
>> Hi Alex, another thing to consider is how to communicate to the guest driver 
>> the address at 0xFC contains a valid GPA address that can be accessed by the 
>> driver without causing a EPT fault - since
>> the same driver will be used on other hypervisors and they may not EPT map 
>> OpRegion memory.  On idea proposed by display driver team is to set bit0 of 
>> the address to 1 for indicating OpRegion memory
>> can be safely accessed by the guest driver.
> 
> Hi Allen,
> 
> Why is that any different than a guest accessing any other memory area
> that it shouldn't?  The OpRegion starts with a 16-byte ID string, so if
> the guest finds that it should feel fairly confident the OpRegion data
> is valid.  The published spec also seems to define all bits of 0xfc as
> valid, not implying any sort of alignment requirements, and the i915
> driver does a memremap directly on the value read from 0xfc.  So I'm not
> sure whether there's really a need to or ability to define any of those
> bits in an adhoc way to indicate mapping.  If we do things right,
> shouldn't the guest driver not even know it's running in a VM, at least
> for the KVMGT-d case, so we need to be compatible with physical
> hardware.  Thanks,
> 

I agree. An EPT page fault on a guest OpRegion access is fine, as long as
KVM finds a proper PFN for that GPA during page-fault handling.
That is exactly what is expected for 'normal' memory.

> Alex
> 

--
Thanks,
Jike

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [Intel-gfx] [Announcement] 2015-Q4 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2016-01-26 Thread Jike Song
Hi all,

We are pleased to announce another update of Intel GVT-g for Xen.

Intel GVT-g is a full GPU virtualization solution with mediated pass-through, 
starting from 4th generation Intel Core(TM) processors with Intel Graphics 
processors. A virtual GPU instance is maintained for each VM, with part of 
performance critical resources directly assigned. The capability of running 
native graphics driver inside a VM, without hypervisor intervention in 
performance critical paths, achieves a good balance among performance, feature, 
and sharing capability. Xen is currently supported on Intel Processor Graphics 
(a.k.a. XenGT).

Repositories
-

Kernel: https://github.com/01org/igvtg-kernel (2015q4-4.2.0 branch)
Xen: https://github.com/01org/igvtg-xen (2015q4-4.5 branch)
Qemu: https://github.com/01org/igvtg-qemu (xengt_public2015q4 branch)

This update consists of:

- 6th generation Intel Core Processors (code name: Skylake) are 
preliminarily supported in this release. Users can run multiple Windows / Linux 
virtual machines simultaneously, and switch the display among them.
- Backward-compatible support for 4th generation Intel Core Processors 
(code name: Haswell) and 5th generation Intel Core Processors (code name: 
Broadwell).
- Kernel updated from drm-intel 3.18.0 to drm-intel 4.2.0.

Known issues:
- At least 2GB of memory is suggested for a VM to run most 3D workloads.
- The keymap might be incorrect in the guest. The config file may need to 
explicitly specify "keymap='en-us'". Although this looks like the default value, 
we have previously seen wrong keymap codes when it is not explicitly set.
- The mouse pointer cannot be moved smoothly in a guest launched in VNC mode by 
default. The configuration file needs to explicitly specify "usb=1" to enable a 
USB bus, and "usbdevice='tablet'" to add a pointer device using absolute coordinates.
- Running heavy 3D workloads in multiple guests for a couple of hours may 
cause stability issues.
- There are still stability issues on Skylake
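The three config-file workarounds above, collected into one illustrative HVM config fragment (option names are as quoted in this note; all other options omitted):

```text
# guest HVM config fragment (illustrative)
keymap = 'en-us'         # avoid wrong keymap codes in the guest
usb = 1                  # enable a USB bus
usbdevice = 'tablet'     # absolute-coordinate pointer for smooth mouse over VNC
```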


Next update will be around early April, 2016.

GVT-g project portal: https://01.org/igvt-g
Please subscribe mailing list: https://lists.01.org/mailman/listinfo/igvt-g


More information about background, architecture and others about Intel GVT-g, 
can be found at:

https://01.org/igvt-g
https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-v7_0.pdf

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-REWRITE%203RD%20v4.pdf
https://01.org/xen/blogs/srclarkx/2013/graphics-virtualization-xengt


Note: The XenGT project should be considered a work in progress. As such it is 
not a complete product nor should it be considered one. Extra care should be 
taken when testing and configuring a system to use the XenGT project.


--
Thanks,
Jike

On 10/27/2015 05:25 PM, Jike Song wrote:
> Hi all,
> 
> We are pleased to announce another update of Intel GVT-g for Xen.
> 
> Intel GVT-g is a full GPU virtualization solution with mediated pass-through, 
> starting from 4th generation Intel Core(TM) processors with Intel Graphics 
> processors. A virtual GPU instance is maintained for each VM, with part of 
> performance critical resources directly assigned. The capability of running 
> native graphics driver inside a VM, without hypervisor intervention in 
> performance critical paths, achieves a good balance among performance, 
> feature, and sharing capability. Xen is currently supported on Intel 
> Processor Graphics (a.k.a. XenGT); and the core logic can be easily ported to 
> other hypervisors.
> 
> 
> Repositories
> 
>  Kernel: https://github.com/01org/igvtg-kernel (2015q3-3.18.0 branch)
>  Xen: https://github.com/01org/igvtg-xen (2015q3-4.5 branch)
>  Qemu: https://github.com/01org/igvtg-qemu (xengt_public2015q3 branch)
> 
> 
> This update consists of:
> 
>  - XenGT is now merged with KVMGT in unified repositories(kernel and 
> qemu), but currently
>different branches for qemu.  XenGT and KVMGT share same iGVT-g core 
> logic.
>  - fix sysfs/debugfs access seldom crash issue
>  - fix a BUG in XenGT I/O emulation logic
>  - improve 3d workload stability
> 
> Next update will be around early Jan, 2016.
> 
> 
> Known issues:
> 
>  - At least 2GB memory is suggested for VM to run most 3D workloads.
>  - Keymap might be incorrect in guest. Config file may need to explicitly 
> specify "keymap='en-us'". Although it looks like the default value, earlier 
> we saw the problem of wrong keymap code if it is not explicitly set.
>  - When using three monitors, doing hotplug between Guest pause/unpause 
> may not be able to lightup all monitors automatically

Re: [Xen-devel] [iGVT-g] XenGT for PV guest

2015-11-26 Thread Jike Song

On 11/27/2015 01:10 AM, Oleksii Kurochko wrote:

Hello all,

Do you have any ideas about previously mentioned question?

With best regards,
  Oleksii

On Tue, Nov 24, 2015 at 6:48 PM, Oleksii Kurochko wrote:

Hi all,

I am trying to enable XenGT for Android on board vtc1010 in PV mode.
Used:
- Intel® Atom™ processor E3827
- Xen 4.3.1 on branch "byt_experiment".
- dom0: ubuntu 14-04 with linux vgt 3.11.6-vgt+ kernel version
- domU: Android-IA 5.1 with 3.11.6-vgt+ (added Android configs)  kernel 
version

vgt was successfully started in dom0.
vgt does not start in domU: after the pci device is registered in i915_init(), 
i915_pci_driver.probe() is never called. Intel HD Graphics is on the pci bus, 
but it is not passed through to domU. When I tried to pass it through to domU, 
dom0 crashed in drm_framebuffer_remove(). Moreover, passthrough is not an option 
in my case, because Intel HD Graphics needs to work in both dom0 and domU.

So could you give advice on how to probe the i915 driver in domU?


The difficult part may not be how to probe the i915 driver, but how
to implement all the necessary MPT (Mediated Pass-Through) ops for PV guests.

What comes to mind right now is: how do you trap guest GTT accesses
without EPT support? I'm not familiar with PV; however, my gut feeling
is that PV ops would need to be added, which should not be trivial.



With best,
  Oleksii


--
Thanks,
Jike



Re: [Xen-devel] [Qemu-devel] [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2015-11-22 Thread Jike Song

On 11/21/2015 12:40 AM, Alex Williamson wrote:


Thanks for the confirmation. For QEMU/KVM, I totally agree with your point. However,
if we take XenGT into consideration, it is a bit more complex: with the Xen
hypervisor and the Dom0 kernel running at different levels, it is not
straightforward for QEMU to do something like mapping a portion of an MMIO BAR
via VFIO in the Dom0 kernel, instead of calling hypercalls directly.


This would need to be part of the support added for Xen.  To directly
map a device MMIO space to the VM, VFIO provides an mmap, QEMU registers
that mmap with KVM, or Xen.  It's all just MemoryRegions in QEMU.
Perhaps it's even already supported by Xen.



AFAICT, things are different here for Xen. To establish mappings between
Dom0 pfns and DomU gfns, one has to call Xen hypercalls. In the scenario
above, either QEMU calls the hypercall directly, or it asks VFIO in the
dom0 kernel to do it.

I'm not saying that VFIO is not applicable for XenGT. I just want to
say that given the VFIO based kernel/QEMU split model, additional effort
is needed for XenGT.


I don't know if there is a better way to handle this. But I do agree that
a channel between the kernel and QEMU via VFIO is a good idea, even though we
may have to split KVMGT/XenGT in QEMU a bit.  We are currently working on
moving all of the PCI CFG emulation from the kernel to QEMU; hopefully we can
release it by the end of this year and work with you guys to adjust it to
the agreed method.


Well, moving PCI config space emulation from kernel to QEMU is exactly
the wrong direction to take for this proposal.  Config space access to
the vGPU would occur through the VFIO API.  So if you already have
config space emulation in the kernel, that's already one less piece of
work for a VFIO model, it just needs to be "wired up" through the VFIO
API.  Thanks,


If I understand correctly, the idea of moving PCI CFG emulation to QEMU is
actually very similar to your VFIO design:

a) The VM accesses a CFG register
b) KVM hands the access over to QEMU
c) QEMU may emulate it, and when necessary, ioctl into the kernel (i915/vgt)




Alex



--
Thanks,
Jike



Re: [Xen-devel] [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2015-11-22 Thread Jike Song

On 11/21/2015 01:25 AM, Alex Williamson wrote:

On Fri, 2015-11-20 at 08:10 +, Tian, Kevin wrote:


Here is a more concrete example:

KVMGT doesn't require IOMMU. All DMA targets are already replaced with
HPA thru shadow GTT. So DMA requests from GPU all contain HPAs.

When IOMMU is enabled, one simple approach is to have vGPU IOMMU
driver configure system IOMMU with identity mapping (HPA->HPA). We
can't use (GPA->HPA) since GPAs from multiple VMs are conflicting.

However, we still have host gfx driver running. When IOMMU is enabled,
dma_alloc_*** will return IOVA (drvers/iommu/iova.c) in host gfx driver,
which will have IOVA->HPA programmed to system IOMMU.

One IOMMU device entry can only translate one address space, so here
comes a conflict (HPA->HPA vs. IOVA->HPA). To solve this, vGPU IOMMU
driver needs to allocate IOVA from iova.c for each VM w/ vGPU assigned,
and then KVMGT will program IOVA in shadow GTT accordingly. It adds
one additional mapping layer (GPA->IOVA->HPA). In this way two
requirements can be unified together since only IOVA->HPA mapping
needs to be built.

So unlike existing type1 IOMMU driver which controls IOMMU alone, vGPU
IOMMU driver needs to cooperate with other agent (iova.c here) to
co-manage system IOMMU. This may not impact existing VFIO framework.
Just want to highlight additional work here when implementing the vGPU
IOMMU driver.


Right, so the existing i915 driver needs to use the DMA API and calls
like dma_map_page() to enable translations through the IOMMU.  With
dma_map_page(), the caller provides a page address (~HPA) and is
returned an IOVA.  So unfortunately you don't get to take the shortcut
of having an identity mapping through the IOMMU unless you want to
convert i915 entirely to using the IOMMU API, because we also can't have
the conflict that an HPA could overlap an IOVA for a previously mapped
page.

The double translation, once through the GPU MMU and once through the
system IOMMU is going to happen regardless of whether we can identity
map through the IOMMU.  The only solution to this would be for the GPU
to participate in ATS and provide pre-translated transactions from the
GPU.  All of this is internal to the i915 driver (or vfio extension of
that driver) and needs to be done regardless of what sort of interface
we're using to expose the vGPU to QEMU.  It just seems like VFIO
provides a convenient way of doing this since you'll have ready access
to the HVA-GPA mappings for the user.

I think the key points though are:

   * the VFIO type1 IOMMU stores GPA to HVA translations
   * get_user_pages() on the HVA will pin the page and give you a
 page
   * dma_map_page() receives that page, programs the system IOMMU and
 provides an IOVA
   * the GPU MMU can then be programmed with the GPA to IOVA
 translations


Thanks for such a nice example! I'll do my home work and get back to you
shortly :)



Thanks,
Alex



--
Thanks,
Jike



Re: [Xen-devel] [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2015-11-19 Thread Jike Song

On 11/19/2015 11:52 PM, Alex Williamson wrote:

On Thu, 2015-11-19 at 15:32 +, Stefano Stabellini wrote:

On Thu, 19 Nov 2015, Jike Song wrote:

Hi Alex, thanks for the discussion.

In addition to Kevin's replies, I have a high-level question: can VFIO
be used by QEMU for both KVM and Xen?


No. VFIO cannot be used with Xen today. When running on Xen, the IOMMU
is owned by Xen.


Right, but in this case we're talking about device MMUs, which are owned
by the device driver which I think is running in dom0, right?  This
proposal doesn't require support of the system IOMMU, the dom0 driver
maps IOVA translations just as it would for itself.  We're largely
proposing use of the VFIO API to provide a common interface to expose a
PCI(e) device to QEMU, but what happens in the vGPU vendor device and
IOMMU backends is specific to the device and perhaps even specific to
the hypervisor.  Thanks,


Let me conclude this, and please correct me if I've misread anything: the
vGPU interface between the kernel and QEMU will be through VFIO, with a new
VFIO backend (instead of the existing type1), for both KVMGT and XenGT?




Alex



--
Thanks,
Jike



Re: [Xen-devel] [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2015-11-19 Thread Jike Song

On 11/19/2015 07:09 PM, Paolo Bonzini wrote:

On 19/11/2015 09:40, Gerd Hoffmann wrote:

But this code should be minor enough to be maintained in libvirt.

As far as I know libvirt only needs to discover those devices.  If they
look like SR-IOV devices in sysfs this might work without any changes to
libvirt.


I don't think they will look like SR/IOV devices.

The interface may look a little like the sysfs interface that GVT-g is
already using.  However, it should at least be extended to support
multiple vGPUs in a single VM.  This might not be possible for Intel
integrated graphics, but it should definitely be possible for discrete
graphics cards.


I hadn't heard about multiple vGPUs for a single VM before. Yes, if we
expect the same vGPU interface across different vendors, an abstraction
over the vendor-specific stuff should be implemented.



Another nit is that the VM id should probably be replaced by a UUID
(because it's too easy to stumble on an existing VM id), assuming a VM
id is needed at all.


On the last assumption: yes, a VM id is not necessary for GVT-g; it's
only a temporary implementation.

As long as libvirt is used, a UUID should be enough for GVT-g. However,
isn't a UUID optional? What should we do if the user doesn't specify a
UUID on the QEMU command line?



Paolo



--
Thanks,
Jike



Re: [Xen-devel] [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2015-11-19 Thread Jike Song

On 11/20/2015 12:22 PM, Alex Williamson wrote:

On Fri, 2015-11-20 at 10:58 +0800, Jike Song wrote:

On 11/19/2015 11:52 PM, Alex Williamson wrote:

On Thu, 2015-11-19 at 15:32 +, Stefano Stabellini wrote:

On Thu, 19 Nov 2015, Jike Song wrote:

Hi Alex, thanks for the discussion.

In addition to Kevin's replies, I have a high-level question: can VFIO
be used by QEMU for both KVM and Xen?


No. VFIO cannot be used with Xen today. When running on Xen, the IOMMU
is owned by Xen.


Right, but in this case we're talking about device MMUs, which are owned
by the device driver which I think is running in dom0, right?  This
proposal doesn't require support of the system IOMMU, the dom0 driver
maps IOVA translations just as it would for itself.  We're largely
proposing use of the VFIO API to provide a common interface to expose a
PCI(e) device to QEMU, but what happens in the vGPU vendor device and
IOMMU backends is specific to the device and perhaps even specific to
the hypervisor.  Thanks,


Let me conclude this, and please correct me if I've misread anything: the
vGPU interface between the kernel and QEMU will be through VFIO, with a new
VFIO backend (instead of the existing type1), for both KVMGT and XenGT?


My primary concern is KVM and QEMU upstream; the proposal is not
specifically directed at XenGT, but does not exclude it either.  Xen is
welcome to adopt this proposal as well; it simply defines the channel
through which vGPUs are exposed to QEMU as the VFIO API.  The core VFIO
code in the Linux kernel is just as available for use in Xen dom0 as it
is for a KVM host.  VFIO in QEMU certainly knows about some
accelerations for KVM, but these are almost entirely around allowing
eventfd based interrupts to be injected through KVM, which is something
I'm sure Xen could provide as well.  These accelerations are also not
required; VFIO based device assignment in QEMU works with or without
KVM.  Likewise, the VFIO kernel interface knows nothing about KVM and
has no dependencies on it.

There are two components to the VFIO API: one is the type1-compliant
IOMMU interface, which for this proposal is really doing nothing more
than tracking the HVA to GPA mappings for the VM.  This much seems
entirely common regardless of the hypervisor.  The other part is the
device interface.  The lifecycle of the virtual device seems like it
would be entirely shared, as does much of the emulation component of
the device.  When we get to pinning pages, providing direct access to
memory ranges for a VM, and accelerating interrupts, the vGPU drivers
will likely need some per-hypervisor branches, but these are areas where
that's true no matter what the interface.  I'm probably oversimplifying,
but hopefully not too much; correct me if I'm wrong.



Thanks for the confirmation. For QEMU/KVM, I totally agree with your point.
However, if we take XenGT into consideration, it gets a bit more complex:
with the Xen hypervisor and the Dom0 kernel running at different levels,
it's not straightforward for QEMU to do something like mapping a portion of
an MMIO BAR via VFIO in the Dom0 kernel, instead of calling hypercalls
directly.

I don't know if there is a better way to handle this. But I do agree that a
channel between the kernel and QEMU via VFIO is a good idea, even though we
may have to split KVMGT/XenGT in QEMU a bit.  We are currently working on
moving all of the PCI CFG emulation from the kernel to QEMU; hopefully we
can release it by the end of this year and work with you to adjust it to
the agreed method.



The benefit of course is that aside from some extensions to the API, the
QEMU components are already in place and there's a lot more leverage for
getting both QEMU and libvirt support upstream in being able to support
multiple vendors, perhaps multiple hypervisors, with the same code.
Also, I'm not sure how useful it is, but VFIO is a userspace driver
interface, where here we're predominantly talking about that userspace
driver being QEMU.  It's not limited to that though.  A userspace
compute application could have direct access to a vGPU through this
model.  Thanks,





Alex


--
Thanks,
Jike



Re: [Xen-devel] [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2015-11-18 Thread Jike Song

Hi Alex,
On 11/19/2015 12:06 PM, Tian, Kevin wrote:

From: Alex Williamson [mailto:alex.william...@redhat.com]
Sent: Thursday, November 19, 2015 2:12 AM

[cc +qemu-devel, +paolo, +gerd]

On Tue, 2015-10-27 at 17:25 +0800, Jike Song wrote:

{snip}


Hi!

At Red Hat we've been thinking about how to support vGPUs from multiple
vendors in a common way within QEMU.  We want to enable code sharing
between vendors and give new vendors an easy path to add their own
support.  We also have the complication that not all vGPU vendors are as
open source friendly as Intel, so being able to abstract the device
mediation and access outside of QEMU is a big advantage.

The proposal I'd like to make is that a vGPU, whether it is from Intel
or another vendor, is predominantly a PCI(e) device.  We have an
interface in QEMU already for exposing arbitrary PCI devices, vfio-pci.
Currently vfio-pci uses the VFIO API to interact with "physical" devices
and system IOMMUs.  I highlight /physical/ there because some of these
physical devices are SR-IOV VFs, which is somewhat of a fuzzy concept,
somewhere between fixed hardware and a virtual device implemented in
software.  That software just happens to be running on the physical
endpoint.


Agree.

One clarification for the rest of the discussion: we're talking about the
GVT-g vGPU here, which is a pure software GPU virtualization technique.
GVT-d (note its use in some places in the text) refers to passing through
the whole GPU or a specific VF. GVT-d already fits the existing VFIO APIs
nicely (though there is ongoing effort to remove Intel-specific platform
stickiness from the gfx driver). :-)



Hi Alex, thanks for the discussion.

In addition to Kevin's replies, I have a high-level question: can VFIO
be used by QEMU for both KVM and Xen?

--
Thanks,
Jike

 


vGPUs are similar, with the virtual device created at a different point,
host software.  They also rely on different IOMMU constructs, making use
of the MMU capabilities of the GPU (GTTs and such), but really having
similar requirements.


There is one important difference between the system IOMMU and the GPU MMU
here. The system IOMMU is very much about translation from a DMA target
(an IOVA on native, or a GPA in the virtualization case) to an HPA. The
GPU's internal MMU, however, translates from a Graphics Memory Address
(GMA) to a DMA target (an HPA if the system IOMMU is disabled, or an
IOVA/GPA if the system IOMMU is enabled). GMA is an internal address space
within the GPU, not exposed to QEMU and fully managed by the GVT-g device
model. Since it's not a standard PCI-defined resource, we don't need to
abstract this capability in the VFIO interface.
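This distinction can be condensed into a minimal model. The translation functions and offsets below are invented placeholders for page-table walks; the point is only that the GPU MMU produces a DMA target, and the system IOMMU (when enabled) is what turns that target into an HPA.

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy translations; real ones walk GPU and IOMMU page tables. */
static uint64_t gpu_mmu_xlate(uint64_t gma)   { return gma + 0x10000; }  /* GMA -> DMA target */
static uint64_t sys_iommu_xlate(uint64_t dma) { return dma + 0x200000; } /* IOVA/GPA -> HPA   */

/* Where does a GPU access to 'gma' finally land in host physical memory? */
static uint64_t gma_to_hpa(uint64_t gma, bool sys_iommu_enabled)
{
    uint64_t dma_target = gpu_mmu_xlate(gma);
    /* With the system IOMMU disabled, the DMA target already is an HPA. */
    return sys_iommu_enabled ? sys_iommu_xlate(dma_target) : dma_target;
}
```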



The proposal is therefore that GPU vendors can expose vGPUs to
userspace, and thus to QEMU, using the VFIO API.  For instance, vfio
supports modular bus drivers and IOMMU drivers.  An intel-vfio-gvt-d
module (or extension of i915) can register as a vfio bus driver, create
a struct device per vGPU, create an IOMMU group for that device, and
register that device with the vfio-core.  Since we don't rely on the
system IOMMU for GVT-d vGPU assignment, another vGPU vendor driver (or
extension of the same module) can register a "type1" compliant IOMMU
driver into vfio-core.  From the perspective of QEMU then, all of the
existing vfio-pci code is re-used, QEMU remains largely unaware of any
specifics of the vGPU being assigned, and the only necessary change so
far is how QEMU traverses sysfs to find the device and thus the IOMMU
group leading to the vfio group.


GVT-g needs to pin guest memory and query GPA->HPA information, upon which
shadow GTTs will be updated accordingly from (GMA->GPA) to (GMA->HPA). So
yes, a dummy or simple "type1"-compliant IOMMU can be introduced here just
for this requirement.

However, there's one tricky point where I'm not sure whether the overall
VFIO concept would be violated. GVT-g doesn't require the system IOMMU to
function, but the host may enable the system IOMMU purely for hardening
purposes. That means two levels of translation exist (GMA->IOVA->HPA), so
the dummy IOMMU driver has to ask the system IOMMU driver to allocate IOVAs
for VMs and then set up the IOVA->HPA mappings in the IOMMU page table. In
this case, multiple VMs' translations are multiplexed in one IOMMU page
table.

We might need to create some group/sub-group or parent/child concepts
among those IOMMUs for thorough permission control.
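The shadow-GTT update described above can be sketched as follows. The GPA->HPA translation is an invented placeholder; in real code the device model pins the page and queries the hypervisor (or, with the system IOMMU enabled, installs an IOVA instead of an HPA).

```c
#include <stdint.h>

#define GTT_ENTRIES 16
static uint64_t guest_gtt[GTT_ENTRIES];  /* guest-visible: GMA -> GPA */
static uint64_t shadow_gtt[GTT_ENTRIES]; /* HW-visible:    GMA -> HPA */

/* Toy GPA -> HPA translation standing in for pin + hypervisor query. */
static uint64_t gpa_to_hpa(uint64_t gpa) { return gpa + 0x40000000ULL; }

/* Called when a trapped guest write lands in the virtual GTT. */
static void vgtt_write(unsigned idx, uint64_t gpa)
{
    guest_gtt[idx]  = gpa;             /* what the guest believes    */
    shadow_gtt[idx] = gpa_to_hpa(gpa); /* what the GPU actually uses */
}
```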



There are a few areas where we know we'll need to extend the VFIO API to
make this work, but it seems like they can all be done generically.  One
is that PCI BARs are described through the VFIO API as regions and each
region has a single flag describing whether mmap (ie. direct mapping) of
that region is possible.  We expect that vGPUs likely need finer
granularity, enabling some areas within a BAR to be trapped and forwarded 
as a read or write access for the vGPU-vfio-device module to emulate,
while other regions, like framebuffers or texture regions, are directly
mapped.  I have prototype code to enable this
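A hedged sketch of what such finer-grained region info could look like: the region reports a list of mmap-able sub-ranges, and accesses outside them are trapped and forwarded for emulation. The struct and function names here are illustrative only, not an actual VFIO uapi.

```c
#include <stdint.h>

struct sparse_area {
    uint64_t offset; /* start of a directly mmap-able chunk in the BAR */
    uint64_t size;
};

struct vgpu_region_info {
    uint64_t size;               /* total BAR size                    */
    uint32_t nr_areas;           /* 0 => region is fully trapped      */
    struct sparse_area areas[8]; /* e.g. framebuffer, texture regions */
};

/* Can this offset be mapped directly, or must the access be trapped
 * and forwarded to the vGPU emulation path? */
static int offset_is_mmapable(const struct vgpu_region_info *r, uint64_t off)
{
    for (uint32_t i = 0; i < r->nr_areas; i++)
        if (off >= r->areas[i].offset &&
            off <  r->areas[i].offset + r->areas[i].size)
            return 1;
    return 0;
}
```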

Re: [Xen-devel] [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2015-10-27 Thread Jike Song

Hi all,

We are pleased to announce another update of Intel GVT-g for Xen.

Intel GVT-g is a full GPU virtualization solution with mediated pass-through, 
starting from 4th generation Intel Core(TM) processors with Intel Graphics 
processors. A virtual GPU instance is maintained for each VM, with part of 
performance critical resources directly assigned. The capability of running 
native graphics driver inside a VM, without hypervisor intervention in 
performance critical paths, achieves a good balance among performance, feature, 
and sharing capability. Xen is currently supported on Intel Processor Graphics 
(a.k.a. XenGT); and the core logic can be easily ported to other hypervisors.


Repositories

Kernel: https://github.com/01org/igvtg-kernel (2015q3-3.18.0 branch)
Xen: https://github.com/01org/igvtg-xen (2015q3-4.5 branch)
Qemu: https://github.com/01org/igvtg-qemu (xengt_public2015q3 branch)


This update consists of:

- XenGT is now merged with KVMGT in unified repositories (kernel and qemu), 
though currently with different branches for qemu.  XenGT and KVMGT share 
the same iGVT-g core logic.
- Fix a rare crash on sysfs/debugfs access
- Fix a BUG in the XenGT I/O emulation logic
- Improve 3D workload stability

Next update will be around early Jan, 2016.


Known issues:

- At least 2GB of memory is suggested for a VM to run most 3D workloads.
- The keymap might be incorrect in the guest. The config file may need to explicitly 
specify "keymap='en-us'". Although this looks like the default value, we earlier saw 
wrong keymap codes when it was not set explicitly.
- When using three monitors, hotplugging between guest pause/unpause may 
not light up all monitors automatically. There are also some monitor-specific issues.
- The mouse pointer cannot be moved smoothly in a guest launched in VNC mode by 
default. The configuration file needs to explicitly specify "usb=1" to enable a USB 
bus, and "usbdevice='tablet'" to add a pointer device using absolute coordinates.
- Resuming dom0 from S3 may produce some error messages.
- i915 unload/reload does not work well with fewer than 3 vCPUs when the upowerd 
service is running.
- Unigine Tropics running in multiple guests will cause TDRs in dom0 and the guests.
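As a sketch of the config-file workarounds above (exact placement depends on your guest's HVM config file; the values shown are the ones named in the known issues):

```
# in the guest's HVM config file
usb = 1                  # enable a USB bus in the guest
usbdevice = 'tablet'     # absolute-coordinate pointer device for VNC
keymap = 'en-us'         # set explicitly per the keymap issue above
```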


Please subscribe to the mailing list to report BUGs, discuss, and/or contribute:

https://lists.01.org/mailman/listinfo/igvt-g


More information about Intel GVT-g's background, architecture, etc. can be found 
at (it may not be up to date):

https://01.org/igvt-g
https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-v7_0.pdf

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-REWRITE%203RD%20v4.pdf
https://01.org/xen/blogs/srclarkx/2013/graphics-virtualization-xengt


Note:

   The XenGT project should be considered a work in progress. As such it is not 
a complete product nor should it be considered one. Extra care should be taken 
when testing and configuring a system to use the XenGT project.


--
Thanks,
Jike

On 07/07/2015 10:49 AM, Jike Song wrote:

Hi all,

We're pleased to announce a public update to Intel Graphics Virtualization 
Technology(Intel GVT-g, formerly known as XenGT).

Intel GVT-g is a full GPU virtualization solution with mediated pass-through, 
starting from 4th generation Intel Core(TM) processors with Intel Graphics 
processors. A virtual GPU instance is maintained for each VM, with part of 
performance critical resources directly assigned. The capability of running 
native graphics driver inside a VM, without hypervisor intervention in 
performance critical paths, achieves a good balance among performance, feature, 
and sharing capability. Xen is currently supported on Intel Processor Graphics 
(a.k.a. XenGT); and the core logic can be easily ported to other hypervisors, 
for example, the experimental code has been released to support GVT-g running 
on a KVM hypervisor (a.k.a KVMGT).

Tip of repositories
-

  Kernel: 5b73653d5ca, Branch: master-2015Q2-3.18.0
  Qemu: 2a75bbff62c1, Branch: master
  Xen: 38c36f0f511b1, Branch: master-2015Q2-4.5

This update consists of:
  - Change the time-based scheduler timer to be configurable, to enhance 
stability
  - Fix stability issues where a VM/Dom0 got a TDR when hung at specific 
instructions on BDW
  - Optimize the emulation of the el_status register to enhance stability
  - 2D/3D performance in Linux VMs has been improved by about 50% on BDW
  - Fix an abnormal idle power consumption issue due to a wrong forcewake policy
  - Fix a TDR issue when running 2D/3D/Media workloads in Windows VMs 
simultaneously
  - KVM support is still in a separate branch as prototype work. We plan to 
integrate KVM/Xen support together in future releases
  - Next update will be around early Oct, 2015

Notice that this rele

Re: [Xen-devel] [Intel-gfx] [Announcement] 2015-Q2 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2015-07-06 Thread Jike Song

Hi all,

We're pleased to announce a public update to Intel Graphics Virtualization 
Technology(Intel GVT-g, formerly known as XenGT).

Intel GVT-g is a full GPU virtualization solution with mediated pass-through, 
starting from 4th generation Intel Core(TM) processors with Intel Graphics 
processors. A virtual GPU instance is maintained for each VM, with part of 
performance critical resources directly assigned. The capability of running 
native graphics driver inside a VM, without hypervisor intervention in 
performance critical paths, achieves a good balance among performance, feature, 
and sharing capability. Xen is currently supported on Intel Processor Graphics 
(a.k.a. XenGT); and the core logic can be easily ported to other hypervisors, 
for example, the experimental code has been released to support GVT-g running 
on a KVM hypervisor (a.k.a KVMGT).

Tip of repositories
-

Kernel: 5b73653d5ca, Branch: master-2015Q2-3.18.0
Qemu: 2a75bbff62c1, Branch: master
Xen: 38c36f0f511b1, Branch: master-2015Q2-4.5

This update consists of:
- Change the time-based scheduler timer to be configurable, to enhance stability
- Fix stability issues where a VM/Dom0 got a TDR when hung at specific 
instructions on BDW
- Optimize the emulation of the el_status register to enhance stability
- 2D/3D performance in Linux VMs has been improved by about 50% on BDW
- Fix an abnormal idle power consumption issue due to a wrong forcewake policy
- Fix a TDR issue when running 2D/3D/Media workloads in Windows VMs 
simultaneously
- KVM support is still in a separate branch as prototype work. We plan to 
integrate KVM/Xen support together in future releases
- Next update will be around early Oct, 2015

Notice that this release can support both the Intel 4th generation Core CPU (code 
name: Haswell) and the Intel 5th generation Core CPU (code name: Broadwell), while 
the limitations of the latter include:
* 3D conformance may have some failures
* Under high-demand 3D workloads, stability issues have been detected
* The multi-monitor scenario is not fully tested, while a single monitor on 
VGA/HDMI/DP/eDP should just work
* Hotplugging DP may cause a black screen even in a native environment

Where to get

kernel: https://github.com/01org/XenGT-Preview-kernel.git
xen: https://github.com/01org/XenGT-Preview-xen.git
qemu: https://github.com/01org/XenGT-Preview-qemu.git


We have a mailing list for GVT-g development, bug report and technical 
discussion:

https://lists.01.org/mailman/listinfo/igvt-g

More information about Intel GVT-g background, architecture, etc can be found 
at:

https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-v7_0.pdf
https://01.org/xen/blogs/srclarkx/2013/graphics-virtualization-xengt


Note: The XenGT project should be considered a work in progress. As such it is 
not a complete product nor should it be considered one. Extra care should be 
taken when testing and configuring a system to use the XenGT project.


--
Thanks,
Jike

On 04/10/2015 09:23 PM, Jike Song wrote:

Hi all,

We're pleased to announce a public update to Intel Graphics Virtualization 
Technology (Intel GVT-g, formerly known as XenGT). Intel GVT-g is a complete 
vGPU solution with mediated pass-through, supported today on 4th generation 
Intel Core(TM) processors with Intel Graphics processors. A virtual GPU 
instance is maintained for each VM, with part of performance critical resources 
directly assigned. The capability of running native graphics driver inside a 
VM, without hypervisor intervention in performance critical paths, achieves a 
good balance among performance, feature, and sharing capability. Though we only 
support Xen on Intel Processor Graphics so far, the core logic can be easily 
ported to other hypervisors.

Tip of repositories
-

   Kernel: a011c9f953e, Branch: master-2015Q1-3.18.0
   Qemu: 2a75bbff62c1, Branch: master
   Xen: 38c36f0f511b1, Branch: master-2015Q1-4.5

Summary this update
-
- Preliminary Broadwell support.
- kernel update from drm-intel 3.17.0 to drm-intel 3.18.0 (tag: 
drm-intel-next-fixes-2014-12-17; note that the i915 driver code is much newer 
than the stable kernel version).
- Next update will be around early July, 2015.
- KVM support is still in a separate branch as prototype work. We plan 
to integrate KVM/Xen support together in future releases.

This update consists of:
- gvt-g core logic code was moved into i915 driver directory.
- Host mediation is used for dom0 i915 driver access, instead of 
de-privileged dom0.
- The Xen-specific code was separated from vgt core logic into a new file 
driver/xen/xengt.c.
- Broadwell is preliminarily supported in this release. Users could 
start multiple linux/windows 64-bit

Re: [Xen-devel] [Intel-gfx] [Announcement] 2014-Q4 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2015-01-11 Thread Jike Song

Whoops. Changed the title from 2015-Q1 to 2014-Q4 :)

--
Thanks,
Jike


On 01/09/2015 04:51 PM, Jike Song wrote:

Hi all,

We're pleased to announce a public update to Intel Graphics Virtualization 
Technology (Intel GVT-g, formerly known as XenGT). Intel GVT-g is a complete 
vGPU solution with mediated pass-through, supported today on 4th generation 
Intel Core(TM) processors with Intel Graphics processors. A virtual GPU 
instance is maintained for each VM, with part of performance critical resources 
directly assigned. The capability of running native graphics driver inside a 
VM, without hypervisor intervention in performance critical paths, achieves a 
good balance among performance, feature, and sharing capability. Though we only 
support Xen on Intel Processor Graphics so far, the core logic can be easily 
ported to other hypervisors. The XenGT project should be considered a work in 
progress. As such it is not a complete product, nor should it be considered 
one. Extra care should be taken when testing and configuring a system to use 
the XenGT project.

The news of this update:

- kernel update from 3.14.1 to drm-intel 3.17.0.
- We plan to integrate Intel GVT-g as a feature in i915 driver. That 
effort is still under review, not included in this update yet.
- Next update will be around early Apr, 2015.

This update consists of:

- Some bug fixes and stability enhancements.
- Make the XenGT device model aware of Broadwell. In this version 
BDW is not yet functional.
- The number of available fence registers is changed from 16 to 32 to align 
with HSW hardware.
- New cascade interrupt framework for supporting interrupt 
virtualization on both Haswell and Broadwell.
- Add back the gem_vgtbuffer. The previous release did not build that 
module for the 3.14 kernel. In this release, the module is back and rebased to 3.17.
- Enable the IRQ-based context switch in the vgt driver, which helps 
reduce CPU utilization during context switches. It is enabled by 
default, and can be turned off with the kernel flag irq_based_ctx_switch.


Please refer to the new setup guide, which provides step-by-step details about 
building/configuring/running Intel GVT-g:


https://github.com/01org/XenGT-Preview-kernel/blob/master/XenGT_Setup_Guide.pdf

The new source codes are available at the updated github repos:

Linux: https://github.com/01org/XenGT-Preview-kernel.git
Xen: https://github.com/01org/XenGT-Preview-xen.git
Qemu: https://github.com/01org/XenGT-Preview-qemu.git


More information about Intel GVT-g background, architecture, etc can be found 
at:



https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-v7_0.pdf
https://01.org/xen/blogs/srclarkx/2013/graphics-virtualization-xengt



The previous update can be found here:


http://lists.xen.org/archives/html/xen-devel/2014-12/msg00474.html



Appreciate your comments!



--
Thanks,
Jike


On 12/04/2014 10:45 AM, Jike Song wrote:

Hi all,

We're pleased to announce a public release to Intel Graphics Virtualization 
Technology (Intel GVT-g, formerly known as XenGT). Intel GVT-g is a complete 
vGPU solution with mediated pass-through, supported today on 4th generation 
Intel Core(TM) processors with Intel Graphics processors. A virtual GPU 
instance is maintained for each VM, with part of performance critical resources 
directly assigned. The capability of running native graphics driver inside a 
VM, without hypervisor intervention in performance critical paths, achieves a 
good balance among performance, feature, and sharing capability. Though we only 
support Xen on Intel Processor Graphics so far, the core logic can be easily 
ported to other hypervisors.


The news of this update:


- kernel update from 3.11.6 to 3.14.1

- We plan to integrate Intel GVT-g as a feature in i915 driver. That 
effort is still under review, not included in this update yet

- Next update will be around early Jan, 2015


This update consists of:

- Windows HVM support with driver version 15.33.3910

- Stability fixes, e.g. GPU stabilization; GPU hangs now occur only 
rarely

- Hardware media acceleration for decoding/encoding/transcoding, supporting 
VC1, H.264, and other formats

- Display enhancements, e.g. the DP type is supported for a virtual PORT

- Display port capability virtualization: with this feature, the dom0 
manager can freely assign virtual DDI ports to a VM, with no need to check 
whether the corresponding physical DDI ports are available



Please refer to the new setup guide, which provides step-to-step details about 
building/configuring/running Intel GVT-g:



https://github.com/01org/XenGT-Preview-kernel/blob/master/XenGT_Setup_Guide.pdf



The new source

Re: [Xen-devel] [Intel-gfx] [Announcement] 2015-Q1 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2015-01-09 Thread Jike Song

Hi all,

  We're pleased to announce a public update to Intel Graphics Virtualization 
Technology (Intel GVT-g, formerly known as XenGT). Intel GVT-g is a complete 
vGPU solution with mediated pass-through, supported today on 4th generation 
Intel Core(TM) processors with Intel Graphics processors. A virtual GPU 
instance is maintained for each VM, with part of performance critical resources 
directly assigned. The capability of running native graphics driver inside a 
VM, without hypervisor intervention in performance critical paths, achieves a 
good balance among performance, feature, and sharing capability. Though we only 
support Xen on Intel Processor Graphics so far, the core logic can be easily 
ported to other hypervisors. The XenGT project should be considered a work in 
progress. As such it is not a complete product, nor should it be considered 
one. Extra care should be taken when testing and configuring a system to use 
the XenGT project.

The news of this update:

- kernel update from 3.14.1 to drm-intel 3.17.0.
- We plan to integrate Intel GVT-g as a feature in i915 driver. That 
effort is still under review, not included in this update yet.
- Next update will be around early Apr, 2015.

This update consists of:

- Some bug fixes and stability enhancements.
- Make the XenGT device model aware of Broadwell. In this version 
BDW is not yet functional.
- The number of available fence registers is changed from 16 to 32 to align 
with HSW hardware.
- New cascade interrupt framework for supporting interrupt 
virtualization on both Haswell and Broadwell.
- Add back the gem_vgtbuffer. The previous release did not build that 
module for the 3.14 kernel. In this release, the module is back and rebased to 3.17.
- Enable the IRQ-based context switch in the vgt driver, which helps 
reduce CPU utilization during context switches. It is enabled by 
default, and can be turned off with the kernel flag irq_based_ctx_switch.


Please refer to the new setup guide, which provides step-by-step details about 
building/configuring/running Intel GVT-g:


https://github.com/01org/XenGT-Preview-kernel/blob/master/XenGT_Setup_Guide.pdf

The new source codes are available at the updated github repos:

Linux: https://github.com/01org/XenGT-Preview-kernel.git
Xen: https://github.com/01org/XenGT-Preview-xen.git
Qemu: https://github.com/01org/XenGT-Preview-qemu.git


More information about Intel GVT-g background, architecture, etc can be found 
at:



https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-v7_0.pdf
https://01.org/xen/blogs/srclarkx/2013/graphics-virtualization-xengt



The previous update can be found here:


http://lists.xen.org/archives/html/xen-devel/2014-12/msg00474.html



Appreciate your comments!



--
Thanks,
Jike


On 12/04/2014 10:45 AM, Jike Song wrote:

Hi all,

We're pleased to announce a public release to Intel Graphics Virtualization 
Technology (Intel GVT-g, formerly known as XenGT). Intel GVT-g is a complete 
vGPU solution with mediated pass-through, supported today on 4th generation 
Intel Core(TM) processors with Intel Graphics processors. A virtual GPU 
instance is maintained for each VM, with part of performance critical resources 
directly assigned. The capability of running native graphics driver inside a 
VM, without hypervisor intervention in performance critical paths, achieves a 
good balance among performance, feature, and sharing capability. Though we only 
support Xen on Intel Processor Graphics so far, the core logic can be easily 
ported to other hypervisors.


The news of this update:


- kernel update from 3.11.6 to 3.14.1

- We plan to integrate Intel GVT-g as a feature in i915 driver. That 
effort is still under review, not included in this update yet

- Next update will be around early Jan, 2015


This update consists of:

- Windows HVM support with driver version 15.33.3910

- Stability fixes, e.g. GPU stabilization; GPU hangs now occur only 
rarely

- Hardware media acceleration for decoding/encoding/transcoding, supporting 
VC1, H.264, and other formats

- Display enhancements, e.g. the DP type is supported for a virtual PORT

- Display port capability virtualization: with this feature, the dom0 
manager can freely assign virtual DDI ports to a VM, with no need to check 
whether the corresponding physical DDI ports are available



Please refer to the new setup guide, which provides step-to-step details about 
building/configuring/running Intel GVT-g:



https://github.com/01org/XenGT-Preview-kernel/blob/master/XenGT_Setup_Guide.pdf



The new source codes are available at the updated github repos:


Linux: https://github.com/01org/XenGT-Preview-kernel.git