Re: [PATCH 07/11] KVM: page track: add notifier support

2015-12-15 Thread Jike Song

On 12/01/2015 02:26 AM, Xiao Guangrong wrote:

A notifier list is introduced so that any node that wants to receive track
events can register on the list.

Two APIs are introduced here:
- kvm_page_track_register_notifier(): register the notifier to receive
   track events

- kvm_page_track_unregister_notifier(): stop receiving track events by
   unregistering the notifier

The callback, node->track_write(), is called when a write access to a
write-tracked page happens.

Signed-off-by: Xiao Guangrong 
---
  arch/x86/include/asm/kvm_host.h   |  1 +
  arch/x86/include/asm/kvm_page_track.h | 39 
  arch/x86/kvm/page_track.c | 67 +++
  arch/x86/kvm/x86.c|  4 +++
  4 files changed, 111 insertions(+)
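
(For readers following along: a minimal consumer of the new API might look
like the sketch below. The kvmgt_* names are hypothetical; only the types
and functions declared in this patch are assumed.)

#include <linux/kvm_host.h>
#include <asm/kvm_page_track.h>

/* Hypothetical consumer of the page-track notifier API, illustration only. */
static void kvmgt_track_write(struct kvm_vcpu *vcpu, gpa_t gpa,
			      const u8 *new, int bytes)
{
	/*
	 * Called after write emulation finishes on a write-tracked page.
	 * A real consumer would first check whether @gpa falls in a page
	 * it actually tracks before acting on @new.
	 */
	pr_debug("tracked write: gpa=0x%llx bytes=%d\n",
		 (unsigned long long)gpa, bytes);
}

static struct kvm_page_track_notifier_node kvmgt_track_node = {
	.track_write = kvmgt_track_write,
};

static void kvmgt_attach(struct kvm *kvm)
{
	kvm_page_track_register_notifier(kvm, &kvmgt_track_node);
}

static void kvmgt_detach(struct kvm *kvm)
{
	kvm_page_track_unregister_notifier(kvm, &kvmgt_track_node);
}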

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index afff1f1..0f7b940 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -658,6 +658,7 @@ struct kvm_arch {
 */
struct list_head active_mmu_pages;
struct list_head zapped_obsolete_pages;
+   struct kvm_page_track_notifier_head track_notifier_head;

struct list_head assigned_dev_head;
struct iommu_domain *iommu_domain;
diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
index f223201..6744234 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -6,6 +6,36 @@ enum kvm_page_track_mode {
KVM_PAGE_TRACK_MAX,
  };

+/*
+ * The notifier represented by @kvm_page_track_notifier_node is linked into
+ * the head which will be notified when guest is triggering the track event.
+ *
+ * Write access on the head is protected by kvm->mmu_lock, read access
+ * is protected by track_srcu.
+ */
+struct kvm_page_track_notifier_head {
+   struct srcu_struct track_srcu;
+   struct hlist_head track_notifier_list;
+};
+
+struct kvm_page_track_notifier_node {
+   struct hlist_node node;
+
+   /*
+* It is called when guest is writing the write-tracked page
+* and write emulation is finished at that time.
+*
+* @vcpu: the vcpu where the write access happened.
+* @gpa: the physical address written by guest.
+* @new: the data was written to the address.
+* @bytes: the written length.
+*/
+   void (*track_write)(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
+   int bytes);


Sir, is it possible to make this non-void? As you described below, the
callback may find that this gpa isn't the page being tracked, so it probably
wants to return something to indicate: not my business, continue :)
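
For illustration, the kind of signature change I have in mind (purely
hypothetical, not part of this patch) would be something like:

	/*
	 * Return true if this notifier claimed the gpa, false to tell the
	 * caller "not my business, continue".
	 */
	bool (*track_write)(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
			    int bytes);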


+};
+
+void kvm_page_track_init(struct kvm *kvm);
+
  int kvm_page_track_create_memslot(struct kvm_memory_slot *slot,
  unsigned long npages);
  void kvm_page_track_free_memslot(struct kvm_memory_slot *free,
@@ -17,4 +47,13 @@ void kvm_page_track_remove_page(struct kvm *kvm, gfn_t gfn,
enum kvm_page_track_mode mode);
  bool kvm_page_track_check_mode(struct kvm_vcpu *vcpu, gfn_t gfn,
   enum kvm_page_track_mode mode);
+
+void
+kvm_page_track_register_notifier(struct kvm *kvm,
+struct kvm_page_track_notifier_node *n);
+void
+kvm_page_track_unregister_notifier(struct kvm *kvm,
+  struct kvm_page_track_notifier_node *n);
+void kvm_page_track_write(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
+ int bytes);
  #endif
diff --git a/arch/x86/kvm/page_track.c b/arch/x86/kvm/page_track.c
index dc2da12..84420df 100644
--- a/arch/x86/kvm/page_track.c
+++ b/arch/x86/kvm/page_track.c
@@ -165,3 +165,70 @@ bool kvm_page_track_check_mode(struct kvm_vcpu *vcpu, gfn_t gfn,

return !!ACCESS_ONCE(slot->arch.gfn_track[mode][index]);
  }
+
+void kvm_page_track_init(struct kvm *kvm)
+{
+   struct kvm_page_track_notifier_head *head;
+
+   head = &kvm->arch.track_notifier_head;
+   init_srcu_struct(&head->track_srcu);
+   INIT_HLIST_HEAD(&head->track_notifier_list);
+}
+
+/*
+ * register the notifier so that event interception for the tracked guest
+ * pages can be received.
+ */
+void
+kvm_page_track_register_notifier(struct kvm *kvm,
+struct kvm_page_track_notifier_node *n)
+{
+   struct kvm_page_track_notifier_head *head;
+
+   head = &kvm->arch.track_notifier_head;
+
+   spin_lock(&kvm->mmu_lock);
+   hlist_add_head_rcu(&n->node, &head->track_notifier_list);
+   spin_unlock(&kvm->mmu_lock);
+}
+
+/*
+ * stop receiving the event interception. It is the opposed operation of
+ * kvm_page_track_register_notifier().
+ */
+void
+kvm_page_track_unregister_notifier(struct kvm *kvm,
+  struct kvm_page_track_notifier_node *n)
+{
+   struct kvm_page_track_notifier_head *head;
+

Re: [ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM

2015-10-27 Thread Jike Song

Hi all,

We are pleased to announce another update of Intel GVT-g for KVM.

Intel GVT-g is a full GPU virtualization solution with mediated pass-through,
starting from 4th generation Intel Core(TM) processors with Intel Graphics
processors. A virtual GPU instance is maintained for each VM, with part of the
performance-critical resources directly assigned. The capability of running a
native graphics driver inside a VM, without hypervisor intervention in
performance-critical paths, achieves a good balance among performance,
features, and sharing capability. KVM is supported by Intel GVT-g (a.k.a. KVMGT).


Repositories

Kernel: https://github.com/01org/igvtg-kernel (2015q3-3.18.0 branch)
Qemu: https://github.com/01org/igvtg-qemu (kvmgt_public2015q3 branch)


This update consists of:

- KVMGT is now merged with XenGT in unified repositories (kernel and qemu),
  though currently on different branches for qemu. KVMGT and XenGT share the
  same iGVT-g core logic.
- PPGTT is supported, hence the Windows guest support
- KVMGT now supports both 4th generation (Haswell) and 5th generation
  (Broadwell) Intel Core(TM) processors
- 2D/3D/Media decoding have been validated on Ubuntu 14.04 and
  Windows 7/Windows 8.1

Next update will be around early Jan, 2016.

Known issues:

- At least 2GB of memory is suggested for a VM to run most 3D workloads.
- 3Dmark06 running in a Windows VM may have some stability issues.
- Using VLC to play an .ogg file may cause mosaics or slow response.


Please subscribe to the mailing list to report BUGs, discuss, and/or contribute:

https://lists.01.org/mailman/listinfo/igvt-g

More information about Intel GVT-g background, architecture, etc. can be found
at (may not be up to date):

https://01.org/igvt-g
http://www.linux-kvm.org/images/f/f3/01x08b-KVMGT-a.pdf
https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian


Note:

The KVMGT project should be considered a work in progress. As such it is not a 
complete product nor should it be considered one. Extra care should be taken 
when testing and configuring a system to use the KVMGT project.


--
Thanks,
Jike

On 12/04/2014 10:24 AM, Jike Song wrote:

Hi all,

   We are pleased to announce the first release of the KVMGT project. KVMGT is
the implementation of Intel GVT-g technology, a full GPU virtualization
solution. Under Intel GVT-g, a virtual GPU instance is maintained for each VM,
with part of the performance-critical resources directly assigned. The
capability of running a native graphics driver inside a VM, without hypervisor
intervention in performance-critical paths, achieves a good balance of
performance, features, and sharing capability.


   KVMGT is still at an early stage:

- Basic functions of full GPU virtualization work; the guest can see a
  full-featured vGPU.
  We ran several 3D workloads such as lightsmark, nexuiz, urbanterror and
  warsow.

- Only Linux guests are supported so far, and PPGTT must be disabled in the
  guest through a kernel parameter (see README.kvmgt in QEMU).

- This drop also includes some Xen-specific changes, which will be cleaned
  up later.

- Our end goal is to upstream both XenGT and KVMGT, which share ~90% of the
  logic for the vGPU device model (which will be part of the i915 driver),
  with the only difference being in hypervisor-specific services.

- Insufficient test coverage, so please bear with stability issues :)



   There are things that need to be improved, especially the KVM interfacing part:

1   a domid was added to each KVMGT guest

        An ID is needed for foreground OS switching, e.g.

        # echo > /sys/kernel/vgt/control/foreground_vm

        domid 0 is reserved for the host OS.


2   SRCU workarounds.

        Some KVM functions, such as:

                kvm_io_bus_register_dev
                install_new_memslots

        must be called *without* &kvm->srcu read-locked. Otherwise it hangs.

        In KVMGT, we need to register an iodev only *after* the BAR registers
        are written by the guest. That means we already hold &kvm->srcu -
        trapping/emulating PIO (BAR registers) puts us in such a condition.
        That makes kvm_io_bus_register_dev hang.

        Currently we have to disable rcu_assign_pointer() in such functions.

        These are dirty workarounds; your suggestions are highly welcome!


3   syscalls were called to access "/dev/mem" from the kernel

        An in-kernel memslot was added for the aperture, but syscalls like
        open and mmap are used to open and access the character device
        "/dev/mem" for pass-through.




The source code (kernel, qemu, as well as seabios) is available on github:

git://github.com/01org/KVMGT-kernel
git://github.com/01org/KVMGT-qemu

Re: [Intel-gfx] [ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM

2014-12-09 Thread Jike Song

CC Kevin.


On 12/09/2014 05:54 PM, Jan Kiszka wrote:

On 2014-12-04 03:24, Jike Song wrote:

Hi all,

  We are pleased to announce the first release of the KVMGT project. KVMGT is
the implementation of Intel GVT-g technology, a full GPU virtualization
solution. Under Intel GVT-g, a virtual GPU instance is maintained for
each VM, with part of the performance-critical resources directly assigned.
The capability of running a native graphics driver inside a VM, without
hypervisor intervention in performance-critical paths, achieves a good
balance of performance, features, and sharing capability.


  KVMGT is still at an early stage:

   - Basic functions of full GPU virtualization work; the guest can see a
     full-featured vGPU.
     We ran several 3D workloads such as lightsmark, nexuiz, urbanterror
     and warsow.

   - Only Linux guests are supported so far, and PPGTT must be disabled in
     the guest through a kernel parameter (see README.kvmgt in QEMU).

   - This drop also includes some Xen-specific changes, which will be
     cleaned up later.

   - Our end goal is to upstream both XenGT and KVMGT, which share ~90% of
     the logic for the vGPU device model (which will be part of the i915
     driver), with the only difference being in hypervisor-specific services.

   - Insufficient test coverage, so please bear with stability issues :)



  There are things that need to be improved, especially the KVM interfacing part:

 1   a domid was added to each KVMGT guest

     An ID is needed for foreground OS switching, e.g.

     # echo >/sys/kernel/vgt/control/foreground_vm

     domid 0 is reserved for the host OS.


 2   SRCU workarounds.

     Some KVM functions, such as:

         kvm_io_bus_register_dev
         install_new_memslots

     must be called *without* &kvm->srcu read-locked. Otherwise it hangs.

     In KVMGT, we need to register an iodev only *after* the BAR registers
     are written by the guest. That means we already hold &kvm->srcu -
     trapping/emulating PIO (BAR registers) puts us in such a condition.
     That makes kvm_io_bus_register_dev hang.

     Currently we have to disable rcu_assign_pointer() in such functions.

     These are dirty workarounds; your suggestions are highly welcome!


 3   syscalls were called to access "/dev/mem" from the kernel

     An in-kernel memslot was added for the aperture, but syscalls like
     open and mmap are used to open and access the character device
     "/dev/mem" for pass-through.




The source code (kernel, qemu, as well as seabios) is available on github:

 git://github.com/01org/KVMGT-kernel
 git://github.com/01org/KVMGT-qemu
 git://github.com/01org/KVMGT-seabios

In the KVMGT-qemu repository, there is a "README.kvmgt" to refer to.



More information about Intel GVT-g and KVMGT can be found at:

 
https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian

 
http://events.linuxfoundation.org/sites/events/files/slides/KVMGT-a%20Full%20GPU%20Virtualization%20Solution_1.pdf



Appreciate your comments, BUG reports, and contributions!



There is an ever-increasing interest in keeping KVM's in-kernel guest
interface as small as possible, specifically for security reasons. I'm
sure there are some good performance reasons to create a new in-kernel
device model, but I suppose those will need good evidence for why things
are done the way they finally should be - and not via a user-space
device model. This is likely not a binary decision (all userspace vs. no
userspace); it is more about the size and robustness of the in-kernel
model vs. its performance.

One aspect could also be important: Are there hardware improvements in
sight that will eventually help to reduce the in-kernel device model and
make the overall design even more robust? How will those changes fit
best into a proposed user/kernel split?

Jan



--
Thanks,
Jike


Re: [Intel-gfx] [ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM

2014-12-05 Thread Jike Song

On 12/05/2014 09:54 PM, Daniel Vetter wrote:

Yeah done a quick read-through of just the i915 bits too, same comment. I
guess this is just the first RFC and the redesign we've discussed about
already with xengt is in progress somewhere?


Yes, it's marching on with Xen now. The KVM implementation is
currently not even feature complete - we still have PPGTT missing.




Thanks, Daniel



--
Thanks,
Jike


Re: [ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM

2014-12-05 Thread Jike Song

CC Andy :)

On 12/05/2014 09:03 PM, Paolo Bonzini wrote:


On 05/12/2014 09:50, Gerd Hoffmann wrote:

A few comments on the kernel stuff (brief look so far, also
compile-tested only, intel gfx on my test machine is too old).

  * Noticed the kernel bits don't even compile when configured as
module.  Everything (vgt, i915, kvm) must be compiled into the
kernel.


I'll add that the patch is basically impossible to review with all the
XenGT bits still in.  For example, the x86 emulator seems to be
unnecessary for KVMGT, but I am not 100% sure.



This is not ready for merging yet, please wait for a while; we'll have the
Xen/KVM-specific code separated.

BTW, you are definitely right, the emulator is unnecessary for KVMGT,
and ... unnecessary for XenGT :)


I would like a clear understanding of why/how Andrew Barnes was able to
do i915 passthrough (GVT-d) without hacking the ISA bridge, and why this
does not apply to GVT-g.


AFAIK, the graphics drivers need to figure out the offsets of
some MMIO registers from the IDs of this ISA bridge. It simply won't work
without this information.

I talked with Andy about the pass-through, but I don't have his
implementation; CC'ing Andy for his advice :)



Paolo



Thanks for the review. Would you please also have a look at the issues I
mentioned in the original email? They are mostly KVM-related: the SRCU
trickiness, the domid, and the memslot created in the kernel.

Thank you!

--
Thanks,
Jike


Re: [ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM

2014-12-05 Thread Jike Song

On 12/05/2014 04:50 PM, Gerd Hoffmann wrote:

A few comments on the kernel stuff (brief look so far, also
compile-tested only, intel gfx on my test machine is too old).

  * Noticed the kernel bits don't even compile when configured as
module.  Everything (vgt, i915, kvm) must be compiled into the
kernel.


Yes, that's planned to be done along with separating hypervisor-related
code from vgt.


  * Design approach still seems to be i915 on vgt not the other way
around.


So far yes.



Qemu/SeaBIOS bits:

I've seen the host bridge change identity from i440fx to
copy-pci-ids-from-host. Guess the reason for this is that seabios uses
this device to figure out whether it is running on i440fx or q35. Correct?



I did some tricks in seabios/qemu. The purpose is to make qemu:

- provide the IDs of an old host bridge to SeaBIOS
- provide the IDs of the new host bridge (the physical one) to the guest OS

So I made seabios tell qemu that POST is done, before jumping to the guest
OS context.

This may be the simplest method to make things work, but yes, the q35
emulation in qemu may make this unnecessary, see below.
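
To make the idea a bit more concrete, here is a conceptual sketch (made-up
names, not the actual qemu/seabios code) of how the host bridge could switch
the IDs it exposes once SeaBIOS signals that POST is done:

#include <stdbool.h>
#include <stdint.h>

#define I440FX_IDS	0x12378086u	/* device 0x1237, vendor 0x8086 */

static bool post_done;			/* set once SeaBIOS signals end of POST */
static uint32_t host_bridge_ids;	/* vendor/device IDs copied from the host */

/*
 * Hypothetical handler for reads of the host bridge's vendor/device ID
 * register (config-space offset 0).
 */
static uint32_t host_bridge_id_read(void)
{
	/*
	 * Before POST is done, present the old i440FX IDs so SeaBIOS takes
	 * its usual i440fx path; afterwards, present the physical host
	 * bridge IDs so the guest graphics driver sees what it expects.
	 */
	return post_done ? host_bridge_ids : I440FX_IDS;
}

/*
 * Hypothetical hook for an I/O port write issued by SeaBIOS right before
 * jumping to the guest OS, telling qemu that POST is finished.
 */
static void post_done_notify(void)
{
	post_done = true;
}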


What are the exact requirements for the device?  Must it match the host
exactly, to not confuse the guest intel graphics driver?  Or would
something more recent -- such as the q35 emulation qemu has -- be good
enough to make things work (assuming we add support for the
graphic-related pci config space registers there)?



I don't know exactly what is needed; we also need to take the Windows
driver into consideration. However, I'm quite confident that if things
work for IGD passthrough, they will work for GVT-g.


The patch also adds a dummy isa bridge at 0x1f.  Similar question here:
What exactly is needed here?  Would things work if we simply use the q35
lpc device here?



Ditto.


more to come after I've read the paper linked above ...


Thanks for the review :)



cheers,
   Gerd



--
Thanks,
Jike


[ANNOUNCE][RFC] KVMGT - the implementation of Intel GVT-g(full GPU virtualization) for KVM

2014-12-03 Thread Jike Song

Hi all,

 We are pleased to announce the first release of the KVMGT project. KVMGT is
the implementation of Intel GVT-g technology, a full GPU virtualization
solution. Under Intel GVT-g, a virtual GPU instance is maintained for each VM,
with part of the performance-critical resources directly assigned. The
capability of running a native graphics driver inside a VM, without hypervisor
intervention in performance-critical paths, achieves a good balance of
performance, features, and sharing capability.


 KVMGT is still at an early stage:

  - Basic functions of full GPU virtualization work; the guest can see a
    full-featured vGPU.
    We ran several 3D workloads such as lightsmark, nexuiz, urbanterror and
    warsow.

  - Only Linux guests are supported so far, and PPGTT must be disabled in the
    guest through a kernel parameter (see README.kvmgt in QEMU).

  - This drop also includes some Xen-specific changes, which will be cleaned
    up later.

  - Our end goal is to upstream both XenGT and KVMGT, which share ~90% of the
    logic for the vGPU device model (which will be part of the i915 driver),
    with the only difference being in hypervisor-specific services.

  - Insufficient test coverage, so please bear with stability issues :)



 There are things that need to be improved, especially the KVM interfacing part:

1   a domid was added to each KVMGT guest

        An ID is needed for foreground OS switching, e.g.

        # echo > /sys/kernel/vgt/control/foreground_vm

        domid 0 is reserved for the host OS.


2   SRCU workarounds.

        Some KVM functions, such as:

                kvm_io_bus_register_dev
                install_new_memslots

        must be called *without* &kvm->srcu read-locked. Otherwise it hangs.

        In KVMGT, we need to register an iodev only *after* the BAR registers
        are written by the guest. That means we already hold &kvm->srcu -
        trapping/emulating PIO (BAR registers) puts us in such a condition.
        That makes kvm_io_bus_register_dev hang.

        Currently we have to disable rcu_assign_pointer() in such functions.

        These are dirty workarounds; your suggestions are highly welcome!
        (One possible alternative for the SRCU issue is sketched after this
        list.)


3   syscalls were called to access "/dev/mem" from the kernel

        An in-kernel memslot was added for the aperture, but syscalls like
        open and mmap are used to open and access the character device
        "/dev/mem" for pass-through.
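
Regarding the SRCU issue in item 2: as far as we can tell, the hang comes from
kvm_io_bus_register_dev() doing a synchronize_srcu_expedited() on &kvm->srcu
internally, which deadlocks when the caller already holds the SRCU read lock.
One possible alternative to patching rcu_assign_pointer() would be to leave
the vcpu's read-side critical section around the registration - a rough,
untested sketch (it assumes vcpu context, and that vcpu->srcu_idx holds the
current read-side index):

#include <linux/kvm_host.h>

/*
 * Untested sketch: register an iodev from the BAR-write emulation path
 * without holding the SRCU read lock across the registration.
 */
static int kvmgt_register_bar_iodev(struct kvm_vcpu *vcpu, gpa_t addr,
				    int len, struct kvm_io_device *dev)
{
	struct kvm *kvm = vcpu->kvm;
	int ret;

	/*
	 * Leave the read-side critical section entered on VM exit, so the
	 * synchronize_srcu_expedited() inside the io-bus update can make
	 * progress.
	 */
	srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);

	mutex_lock(&kvm->slots_lock);
	ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, addr, len, dev);
	mutex_unlock(&kvm->slots_lock);

	/* Re-enter before returning to the emulation path. */
	vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
	return ret;
}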

 



The source code (kernel, qemu, as well as seabios) is available on github:

git://github.com/01org/KVMGT-kernel
git://github.com/01org/KVMGT-qemu
git://github.com/01org/KVMGT-seabios

In the KVMGT-qemu repository, there is a "README.kvmgt" to refer to.



More information about Intel GVT-g and KVMGT can be found at:


https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian

http://events.linuxfoundation.org/sites/events/files/slides/KVMGT-a%20Full%20GPU%20Virtualization%20Solution_1.pdf


Appreciate your comments, BUG reports, and contributions!




--
Thanks,
Jike


Re: [PATCH 0/13 v7] PCI: Linux kernel SR-IOV support

2008-12-16 Thread Jike Song
Jesse Barnes wrote:
> Given a respin of 10-13 I think it's reasonable to merge this into 2.6.29,
> but I'd be much happier about it if we got some driver code along with it,
> so as not to have an unused interface sitting around for who knows how many
> releases.  Is that reasonable?  Do you know if any of the corresponding
> PF/VF driver bits are ready yet?

Hi Jesse, 

Yu Zhao posted a patch set with the subject "SR-IOV driver example"
on November 26, which illustrates the usage of the SR-IOV API in the Intel
82576 VF/PF drivers ;-)

--
Thanks,
Jike