Re: [Qemu-devel] [libvirt] [RFC] libvirt vGPU QEMU integration

2016-08-25 Thread Laine Stump

On 08/24/2016 06:29 PM, Daniel P. Berrange wrote:

On Thu, Aug 18, 2016 at 09:41:59AM -0700, Neo Jia wrote:

Hi libvirt experts,

I am starting this email thread to discuss the potential solution / proposal of
integrating vGPU support into libvirt for QEMU.

Some quick background: NVIDIA is implementing a VFIO-based mediated device
framework to allow people to virtualize their devices without SR-IOV, for
example NVIDIA vGPU and Intel KVMGT. Within this framework, we reuse the VFIO
API to handle memory and interrupts the same way QEMU does today with a
passthrough device.

The difference here is that we are introducing a set of new sysfs files for
virtual device discovery and life-cycle management, made necessary by the
devices' virtual nature.

Here is a summary of the sysfs files, when they are created and how they
should be used:

1. Discover mediated device

As part of the physical device initialization process, the vendor driver
registers its physical devices, which will be used to create virtual devices
(mediated devices, aka mdevs), with the mediated device framework.

Then the sysfs file "mdev_supported_types" becomes available under the
physical device's sysfs directory. It indicates the supported mdev types and
their configuration for this particular physical device. The content may
change dynamically based on the system's current configuration, so libvirt
needs to query this file every time before creating an mdev.

Note: different vendors might have their own vendor-specific configuration
sysfs files as well, if they don't have pre-defined types.

For example, we have an NVIDIA Tesla M60 at 86:00.0 registered here, and the
following is the NVIDIA-specific configuration on an idle system.

To query "mdev_supported_types" on this Tesla M60:

cat /sys/bus/pci/devices/0000:86:00.0/mdev_supported_types
# vgpu_type_id, vgpu_type, max_instance, num_heads, frl_config, framebuffer, max_resolution
11  ,"GRID M60-0B",  16,   2,  45, 512M,2560x1600
12  ,"GRID M60-0Q",  16,   2,  60, 512M,2560x1600
13  ,"GRID M60-1B",   8,   2,  45,1024M,2560x1600
14  ,"GRID M60-1Q",   8,   2,  60,1024M,2560x1600
15  ,"GRID M60-2B",   4,   2,  45,2048M,2560x1600
16  ,"GRID M60-2Q",   4,   4,  60,2048M,2560x1600
17  ,"GRID M60-4Q",   2,   4,  60,4096M,3840x2160
18  ,"GRID M60-8Q",   1,   4,  60,8192M,3840x2160

I'm unclear on the requirements about data format for this file.
Looking at the docs:

   http://www.spinics.net/lists/kvm/msg136476.html

the format is completely unspecified.


2. Create/destroy mediated device

Two sysfs files are available under the physical device sysfs path:
mdev_create and mdev_destroy.

The syntax for creating an mdev is:

 echo "$mdev_UUID:vendor_specific_argument_list" >
/sys/bus/pci/devices/.../mdev_create

I'm not really a fan of the idea of having to provide arbitrary vendor
specific arguments to the mdev_create call, as I don't really want to
have to create vendor specific code for each vendor's vGPU hardware in
libvirt.

What is the relationship between the mdev_supported_types data and
the vendor_specific_argument_list requirements ?



The syntax for destroying an mdev is:

 echo "$mdev_UUID:vendor_specific_argument_list" >
/sys/bus/pci/devices/.../mdev_destroy

The $mdev_UUID is a unique identifier for the mdev device to be created; it
must be unique per system.

For NVIDIA vGPU, we require a vGPU type identifier (shown as vgpu_type_id in
above Tesla M60 output), and a VM UUID to be passed as
"vendor_specific_argument_list".

If no vendor-specific arguments are required, either "$mdev_UUID" or
"$mdev_UUID:" is acceptable as input syntax for the above two commands.

This raises the question of how an application discovers what
vendor specific arguments are required or not, and what they
might mean.


To create an M60-4Q device, libvirt needs to do:

 echo "$mdev_UUID:vgpu_type_id=20,vm_uuid=$VM_UUID" >
/sys/bus/pci/devices/0000\:86\:00.0/mdev_create
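
(Purely illustrative, end to end: the UUIDs below are generated on the spot,
vgpu_type_id=20 is just the value from the example above, and the destroy
step assumes that a plain "$mdev_UUID" with no vendor arguments is accepted.)

dev=/sys/bus/pci/devices/0000:86:00.0
mdev_uuid=$(uuidgen)
vm_uuid=$(uuidgen)

# create the vGPU instance
echo "$mdev_uuid:vgpu_type_id=20,vm_uuid=$vm_uuid" > "$dev/mdev_create"

# the new virtual device should now be visible on the mdev bus
ls /sys/bus/mdev/devices/$mdev_uuid/

# tear it down again
echo "$mdev_uuid" > "$dev/mdev_destroy"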

Overall it doesn't seem like the proposed kernel interfaces provide
enough vendor abstraction to be able to use this functionality without
having to create vendor specific code in libvirt, which is something
I want to avoid us doing.



Ignoring the details though, in terms of libvirt integration, I think I'd
see us primarily doing work in the node device APIs / XML. Specifically
for physical devices, we'd have to report whether they support the
mediated device feature and provide some way to enumerate the valid device
types that can be created. The node device creation API would have to
support creation/deletion of the devices (mapping to mdev_create/destroy).


When configuring a guest VM, we'd use the <hostdev> XML to point to one
or more mediated devices that have been created via the node device APIs
previously.
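
(A rough sketch of what that flow might look like from virsh -- nothing here
exists today, the node device XML for an mdev would still have to be
designed, and the file name and device name are placeholders.)

# hypothetical workflow only
virsh nodedev-list --cap pci               # locate the physical GPU
virsh nodedev-dumpxml pci_0000_86_00_0     # would report the mdev capability and types
virsh nodedev-create mdev-m60-4q.xml       # would wrap mdev_create
virsh nodedev-destroy $NEW_MDEV_DEVICE     # would wrap mdev_destroy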


I'd originally thought of this as having two separate points of support 
in libvirt as well:

Re: [Qemu-devel] [libvirt] [RFC] libvirt vGPU QEMU integration

2016-08-21 Thread Neo Jia
On Fri, Aug 19, 2016 at 03:22:48PM -0400, Laine Stump wrote:
> On 08/18/2016 12:41 PM, Neo Jia wrote:
> > Hi libvirt experts,
> > 
> > I am starting this email thread to discuss the potential solution / 
> > proposal of
> > integrating vGPU support into libvirt for QEMU.
> 
> Thanks for the detailed description. This is very helpful.
> 
> 
> > 
> > Some quick background, NVIDIA is implementing a VFIO based mediated device
> > framework to allow people to virtualize their devices without SR-IOV, for
> > example NVIDIA vGPU, and Intel KVMGT. Within this framework, we are reusing 
> > the
> > VFIO API to process the memory / interrupt as what QEMU does today with 
> > passthru
> > device.
> > 
> > The difference here is that we are introducing a set of new sysfs file for
> > virtual device discovery and life cycle management due to its virtual 
> > nature.
> > 
> > Here is the summary of the sysfs, when they will be created and how they 
> > should
> > be used:
> > 
> > 1. Discover mediated device
> > 
> > As part of physical device initialization process, vendor driver will 
> > register
> > their physical devices, which will be used to create virtual device 
> > (mediated
> > device, aka mdev) to the mediated framework.
> 
> 
> We've discussed this question offline, but I just want to make sure I
> understood correctly - all initialization of the physical device on the host
> is already handled "elsewhere", so libvirt doesn't need to be concerned with
> any physical device lifecycle or configuration (setting up the number or
> types of vGPUs), correct? 

Hi Laine,

Yes, that is right, at least for NVIDIA vGPU.

> Do you think this would also be the case for other
> vendors using the same APIs? I guess this all comes down to whether or not
> the setup of the physical device is defined within the bounds of the common
> infrastructure/API, or if it's something that's assumed to have just
> magically happened somewhere else.

I would assume that is the case for other vendors as well, although this
common infrastructure doesn't impose any restrictions on physical device
setup or initialization, so a vendor does have the option to defer some of
that work until the point when the virtual device gets created.

But if we just look at the API level that gets exposed to libvirt, it is the
vendor driver's responsibility to ensure that the virtual device becomes
available within a reasonable amount of time after the "online" sysfs file is
set to 1. Where the HW setup happens is not dictated by this common API.
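
(A minimal sketch of that contract from the caller's side; the location of
the "online" attribute and the read-back loop are illustrative assumptions,
not something the sysfs ABI above defines.)

mdev=/sys/bus/mdev/devices/$mdev_uuid
echo 1 > "$mdev/online"
# allow the vendor driver a bounded amount of time to bring the device up
for i in $(seq 1 50); do
    [ "$(cat "$mdev/online" 2>/dev/null)" = "1" ] && break
    sleep 0.1
done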

In NVIDIA's case, once our kernel driver registers the physical devices it
owns with the "common infrastructure", all the physical devices are already
fully initialized and ready for virtual device creation.

> 
> 
> > 
> > Then, the sysfs file "mdev_supported_types" will be available under the 
> > physical
> > device sysfs, and it will indicate the supported mdev and configuration for 
> > this
> > particular physical device, and the content may change dynamically based on 
> > the
> > system's current configurations, so libvirt needs to query this file every 
> > time
> > before create a mdev.
> 
> I had originally thought that libvirt would be setting up and managing a
> pool of virtual devices, similar to what we currently do with SRIOV VFs. But
> from this it sounds like the management of this pool is completely handled
> by your drivers (especially since the contents of the pool can apparently
> completely change at any instant). In one way that makes life easier for
> libvirt, because it doesn't need to manage anything.

The pool (vGPU type availability) is only subject to change when virtual
devices get created or destroyed, since for now we don't support heterogeneous
vGPU types on the same physical GPU. Even if we add such support in the
future, the point of change will still be the same.

> 
> On the other hand, it makes thing less predictable. For example, when
> libvirt defines a domain, it queries the host system to see what types of
> devices are legal in guests on this host, and expects those devices to be
> available at a later time. As I understand it (and I may be completely
> wrong), when no vGPUs are running on the hardware, there is a choice of
> several different models of vGPU (like the example you give below), but when
> the first vGPU is started up, that triggers the host driver to restrict the
> available models. If that's the case, then a particular vGPU could be
> "available" when a domain is defined, but not an option by the time the
> domain is started. That's not a show stopper, but I want to make sure I am
> understanding everything properly.

Yes, your understanding is correct; as I said, there is no heterogeneous vGPU
support yet. But this opens up another interesting question of vGPU placement
policy that libvirt might need to consider.

> 
> Also, is there any information about the maximum number of vGPUs that can be
> handled by a particular physical device (I think that c

Re: [Qemu-devel] [libvirt] [RFC] libvirt vGPU QEMU integration

2016-08-21 Thread Neo Jia
On Fri, Aug 19, 2016 at 02:42:27PM +0200, Michal Privoznik wrote:
> On 18.08.2016 18:41, Neo Jia wrote:
> > Hi libvirt experts,
> 
> Hi, welcome to the list.
> 
> > 
> > I am starting this email thread to discuss the potential solution / 
> > proposal of
> > integrating vGPU support into libvirt for QEMU.
> > 
> > Some quick background, NVIDIA is implementing a VFIO based mediated device
> > framework to allow people to virtualize their devices without SR-IOV, for
> > example NVIDIA vGPU, and Intel KVMGT. Within this framework, we are reusing 
> > the
> > VFIO API to process the memory / interrupt as what QEMU does today with 
> > passthru
> > device.
> 
> So as far as I understand, this is solely NVIDIA's API and other vendors
> (e.g. Intel) will use their own or is this a standard that others will
> comply to?

Hi Michal,

Based on the initial vGPU VFIO design discussion thread on the QEMU mailing
list, I believe this is what NVIDIA, Intel, and other companies will comply
with.

(People from related parties are CC'ed in this email, such as Intel and IBM.)

As you know, I can't speak for Intel, so I would like to defer this question
to them, but the above is my understanding based on the QEMU/KVM community
discussions.

> 
> > 
> > The difference here is that we are introducing a set of new sysfs file for
> > virtual device discovery and life cycle management due to its virtual 
> > nature.
> > 
> > Here is the summary of the sysfs, when they will be created and how they 
> > should
> > be used:
> > 
> > 1. Discover mediated device
> > 
> > As part of physical device initialization process, vendor driver will 
> > register
> > their physical devices, which will be used to create virtual device 
> > (mediated
> > device, aka mdev) to the mediated framework.
> > 
> > Then, the sysfs file "mdev_supported_types" will be available under the 
> > physical
> > device sysfs, and it will indicate the supported mdev and configuration for 
> > this 
> > particular physical device, and the content may change dynamically based on 
> > the
> > system's current configurations, so libvirt needs to query this file every 
> > time
> > before create a mdev.
> 
> Ah, that was gonna be my question. Because in the example below, you
> used "echo '...vgpu_type_id=20...' > /sys/bus/.../mdev_create". And I
> was wondering where the number 20 comes from. Now what I am
> wondering about is how libvirt should expose these to users. Moreover,
> how it should let users choose.
> We have a node device driver where I guess we could expose possible
> options and then require some explicit value in the domain XML (but what
> value would that be? I don't think taking vgpu_type_id-s as they are
> would be a great idea).

Right, the vgpu_type_id is just a handle for a given type of vGPU device in
the NVIDIA case.  How about exposing the "vgpu_type", which is a meaningful
name for vGPU end users?
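
(A sketch of resolving such a name back to the id that the kernel interface
wants, written against the example mdev_supported_types layout earlier in
the thread; the device path and type name are placeholders.)

dev=/sys/bus/pci/devices/0000:86:00.0
wanted="GRID M60-4Q"
type_id=$(awk -F, -v n="$wanted" '
    !/^#/ { name = $2; gsub(/^[ "]+|[ "]+$/, "", name)
            if (name == n) { id = $1; gsub(/ /, "", id); print id } }' \
    "$dev/mdev_supported_types")
echo "$wanted -> vgpu_type_id=$type_id"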

Also, when you say "let users choose", does this mean exposing some virsh
command to allow users to dump their potential virtual devices and pick one?

> 
> > 
> > Note: different vendors might have their own specific configuration sysfs as
> > well, if they don't have pre-defined types.
> > 
> > For example, we have a NVIDIA Tesla M60 on 86:00.0 here registered, and 
> > here is
> > NVIDIA specific configuration on an idle system.
> > 
> > For example, to query the "mdev_supported_types" on this Tesla M60:
> > 
> > cat /sys/bus/pci/devices/0000:86:00.0/mdev_supported_types
> > # vgpu_type_id, vgpu_type, max_instance, num_heads, frl_config, framebuffer, max_resolution
> > 11  ,"GRID M60-0B",  16,   2,  45, 512M,2560x1600
> > 12  ,"GRID M60-0Q",  16,   2,  60, 512M,2560x1600
> > 13  ,"GRID M60-1B",   8,   2,  45,1024M,2560x1600
> > 14  ,"GRID M60-1Q",   8,   2,  60,1024M,2560x1600
> > 15  ,"GRID M60-2B",   4,   2,  45,2048M,2560x1600
> > 16  ,"GRID M60-2Q",   4,   4,  60,2048M,2560x1600
> > 17  ,"GRID M60-4Q",   2,   4,  60,4096M,3840x2160
> > 18  ,"GRID M60-8Q",   1,   4,  60,8192M,3840x2160
> > 
> > 2. Create/destroy mediated device
> > 
> > Two sysfs files are available under the physical device sysfs path : 
> > mdev_create
> > and mdev_destroy
> > 
> > The syntax of creating a mdev is:
> > 
> > echo "$mdev_UUID:vendor_specific_argument_list" >
> > /sys/bus/pci/devices/.../mdev_create
> > 
> > The syntax of destroying a mdev is:
> > 
> > echo "$mdev_UUID:vendor_specific_argument_list" >
> > /sys/bus/pci/devices/.../mdev_destroy
> > 
> > The $mdev_UUID is a unique identifier for this mdev device to be created, 
> > and it
> > is unique per system.
> 
> Ah, so a caller (the one doing the echo - e.g. libvirt) can generate
> their own UUID under which the mdev will be known? I'm asking because of
> migration - we might want to preserve UUIDs w

Re: [Qemu-devel] [libvirt] [RFC] libvirt vGPU QEMU integration

2016-08-19 Thread Laine Stump

On 08/18/2016 12:41 PM, Neo Jia wrote:

Hi libvirt experts,

I am starting this email thread to discuss the potential solution / proposal of
integrating vGPU support into libvirt for QEMU.


Thanks for the detailed description. This is very helpful.




Some quick background: NVIDIA is implementing a VFIO-based mediated device
framework to allow people to virtualize their devices without SR-IOV, for
example NVIDIA vGPU and Intel KVMGT. Within this framework, we reuse the VFIO
API to handle memory and interrupts the same way QEMU does today with a
passthrough device.

The difference here is that we are introducing a set of new sysfs files for
virtual device discovery and life-cycle management, made necessary by the
devices' virtual nature.

Here is a summary of the sysfs files, when they are created and how they
should be used:

1. Discover mediated device

As part of the physical device initialization process, the vendor driver
registers its physical devices, which will be used to create virtual devices
(mediated devices, aka mdevs), with the mediated device framework.



We've discussed this question offline, but I just want to make sure I 
understood correctly - all initialization of the physical device on the 
host is already handled "elsewhere", so libvirt doesn't need to be 
concerned with any physical device lifecycle or configuration (setting 
up the number or types of vGPUs), correct? Do you think this would also 
be the case for other vendors using the same APIs? I guess this all 
comes down to whether or not the setup of the physical device is defined 
within the bounds of the common infrastructure/API, or if it's something 
that's assumed to have just magically happened somewhere else.





Then the sysfs file "mdev_supported_types" becomes available under the
physical device's sysfs directory. It indicates the supported mdev types and
their configuration for this particular physical device. The content may
change dynamically based on the system's current configuration, so libvirt
needs to query this file every time before creating an mdev.


I had originally thought that libvirt would be setting up and managing a 
pool of virtual devices, similar to what we currently do with SRIOV VFs. 
But from this it sounds like the management of this pool is completely 
handled by your drivers (especially since the contents of the pool can 
apparently completely change at any instant). In one way that makes life 
easier for libvirt, because it doesn't need to manage anything.


On the other hand, it makes things less predictable. For example, when
libvirt defines a domain, it queries the host system to see what types 
of devices are legal in guests on this host, and expects those devices 
to be available at a later time. As I understand it (and I may be 
completely wrong), when no vGPUs are running on the hardware, there is a 
choice of several different models of vGPU (like the example you give 
below), but when the first vGPU is started up, that triggers the host 
driver to restrict the available models. If that's the case, then a 
particular vGPU could be "available" when a domain is defined, but not 
an option by the time the domain is started. That's not a show stopper, 
but I want to make sure I am understanding everything properly.


Also, is there any information about the maximum number of vGPUs that 
can be handled by a particular physical device (I think that changes 
based on which model of vGPU is being used, right?) Or maybe what is the 
current "load" on a physical device, in case there is more than one and 
libvirt (or management) wants to make a decision about which one to use?




Note: different vendors might have their own vendor-specific configuration
sysfs files as well, if they don't have pre-defined types.

For example, we have an NVIDIA Tesla M60 at 86:00.0 registered here, and the
following is the NVIDIA-specific configuration on an idle system.

To query "mdev_supported_types" on this Tesla M60:

cat /sys/bus/pci/devices/0000:86:00.0/mdev_supported_types
# vgpu_type_id, vgpu_type, max_instance, num_heads, frl_config, framebuffer, max_resolution
11  ,"GRID M60-0B",  16,   2,  45, 512M,2560x1600
12  ,"GRID M60-0Q",  16,   2,  60, 512M,2560x1600
13  ,"GRID M60-1B",   8,   2,  45,1024M,2560x1600
14  ,"GRID M60-1Q",   8,   2,  60,1024M,2560x1600
15  ,"GRID M60-2B",   4,   2,  45,2048M,2560x1600
16  ,"GRID M60-2Q",   4,   4,  60,2048M,2560x1600
17  ,"GRID M60-4Q",   2,   4,  60,4096M,3840x2160
18  ,"GRID M60-8Q",   1,   4,  60,8192M,3840x2160

2. Create/destroy mediated device

Two sysfs files are available under the physical device sysfs path:
mdev_create and mdev_destroy.

The syntax for creating an mdev is:

 echo "$mdev_UUID:vendor_specific_argument_list" >
/sys/bus/pci/devices/.../mdev_create

The syntax for destroying an mdev is:

 echo "$mdev_UUI

Re: [Qemu-devel] [libvirt] [RFC] libvirt vGPU QEMU integration

2016-08-19 Thread Michal Privoznik
On 18.08.2016 18:41, Neo Jia wrote:
> Hi libvirt experts,

Hi, welcome to the list.

> 
> I am starting this email thread to discuss the potential solution / proposal 
> of
> integrating vGPU support into libvirt for QEMU.
> 
> Some quick background, NVIDIA is implementing a VFIO based mediated device
> framework to allow people to virtualize their devices without SR-IOV, for
> example NVIDIA vGPU, and Intel KVMGT. Within this framework, we are reusing 
> the
> VFIO API to process the memory / interrupt as what QEMU does today with 
> passthru
> device.

So as far as I understand, this is solely NVIDIA's API and other vendors
(e.g. Intel) will use their own or is this a standard that others will
comply to?

> 
> The difference here is that we are introducing a set of new sysfs file for
> virtual device discovery and life cycle management due to its virtual nature.
> 
> Here is the summary of the sysfs, when they will be created and how they 
> should
> be used:
> 
> 1. Discover mediated device
> 
> As part of physical device initialization process, vendor driver will register
> their physical devices, which will be used to create virtual device (mediated
> device, aka mdev) to the mediated framework.
> 
> Then, the sysfs file "mdev_supported_types" will be available under the 
> physical
> device sysfs, and it will indicate the supported mdev and configuration for 
> this 
> particular physical device, and the content may change dynamically based on 
> the
> system's current configurations, so libvirt needs to query this file every 
> time
> before create a mdev.

Ah, that was gonna be my question. Because in the example below, you
used "echo '...vgpu_type_id=20...' > /sys/bus/.../mdev_create". And I
was wondering where the number 20 comes from. Now what I am
wondering about is how libvirt should expose these to users. Moreover,
how it should let users choose.
We have a node device driver where I guess we could expose possible
options and then require some explicit value in the domain XML (but what
value would that be? I don't think taking vgpu_type_id-s as they are
would be a great idea).

> 
> Note: different vendors might have their own specific configuration sysfs as
> well, if they don't have pre-defined types.
> 
> For example, we have a NVIDIA Tesla M60 on 86:00.0 here registered, and here 
> is
> NVIDIA specific configuration on an idle system.
> 
> For example, to query the "mdev_supported_types" on this Tesla M60:
> 
> cat /sys/bus/pci/devices/0000:86:00.0/mdev_supported_types
> # vgpu_type_id, vgpu_type, max_instance, num_heads, frl_config, framebuffer, max_resolution
> 11  ,"GRID M60-0B",  16,   2,  45, 512M,2560x1600
> 12  ,"GRID M60-0Q",  16,   2,  60, 512M,2560x1600
> 13  ,"GRID M60-1B",   8,   2,  45,1024M,2560x1600
> 14  ,"GRID M60-1Q",   8,   2,  60,1024M,2560x1600
> 15  ,"GRID M60-2B",   4,   2,  45,2048M,2560x1600
> 16  ,"GRID M60-2Q",   4,   4,  60,2048M,2560x1600
> 17  ,"GRID M60-4Q",   2,   4,  60,4096M,3840x2160
> 18  ,"GRID M60-8Q",   1,   4,  60,8192M,3840x2160
> 
> 2. Create/destroy mediated device
> 
> Two sysfs files are available under the physical device sysfs path : 
> mdev_create
> and mdev_destroy
> 
> The syntax of creating a mdev is:
> 
> echo "$mdev_UUID:vendor_specific_argument_list" >
> /sys/bus/pci/devices/.../mdev_create
> 
> The syntax of destroying a mdev is:
> 
> echo "$mdev_UUID:vendor_specific_argument_list" >
> /sys/bus/pci/devices/.../mdev_destroy
> 
> The $mdev_UUID is a unique identifier for this mdev device to be created, and 
> it
> is unique per system.

Ah, so a caller (the one doing the echo - e.g. libvirt) can generate
their own UUID under which the mdev will be known? I'm asking because of
migration - we might want to preserve UUIDs when a domain is migrated to
the other side. Speaking of which, is there such a limitation, or will the
guest be able to migrate even if the UUIDs change?

> 
> For NVIDIA vGPU, we require a vGPU type identifier (shown as vgpu_type_id in
> above Tesla M60 output), and a VM UUID to be passed as
> "vendor_specific_argument_list".

I understand the need for vgpu_type_id, but can you shed more light on
the VM UUID? Why is that required?

> 
> If there is no vendor specific arguments required, either "$mdev_UUID" or
> "$mdev_UUID:" will be acceptable as input syntax for the above two commands.
> 
> To create a M60-4Q device, libvirt needs to do:
> 
> echo "$mdev_UUID:vgpu_type_id=20,vm_uuid=$VM_UUID" >
> /sys/bus/pci/devices/0000\:86\:00.0/mdev_create
> 
> Then, you will see a virtual device shows up at:
> 
> /sys/bus/mdev/devices/$mdev_UUID/
> 
> For NVIDIA, to create multiple virtual devices per VM, it has to be created
> upfront before bringing any of them online.
> 
> Regarding error reporting and detection, on failure,