From: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
x86-run |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/x86-run b/x86-run
index 14ff331..646c577 100755
--- a/x86-run
+++ b/x86-run
@@ -33,7 +33,7 @@ else
On Wed, Jun 26, 2013 at 07:49:37AM +0200, Jan Kiszka wrote:
On 2013-06-24 14:19, Gleb Natapov wrote:
This reverts most of the f1ed0450a5fac7067590317cbf027f566b6ccbca. After
the commit kvm_apic_set_irq() no longer returns accurate information
about interrupt injection status if injection is
On 2013-06-26 08:15, Gleb Natapov wrote:
On Wed, Jun 26, 2013 at 07:49:37AM +0200, Jan Kiszka wrote:
On 2013-06-24 14:19, Gleb Natapov wrote:
This reverts most of the f1ed0450a5fac7067590317cbf027f566b6ccbca. After
the commit kvm_apic_set_irq() no longer returns accurate information
about
On 2013-06-05 11:06, Kashyap Chamarthy wrote:
Adding Jan, Jun, to see if they have any inputs here.
Thanks for the note, it's very helpful! This test actually fails on
older CPUs as well, and I can finally reproduce the issue that Jay also
reported. I'm not able to cure it by going back to
On 06/26/2013 01:42 PM, Bharat Bhushan wrote:
ehpriv instruction is used for setting software breakpoints
by user space. This patch adds support to exit to user space
with run->debug have relevant information.
As this is the first point we are using run->debug, also defined
the run->debug
Il 26/06/2013 08:06, Jan Kiszka ha scritto:
From: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
x86-run |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/x86-run b/x86-run
index 14ff331..646c577 100755
--- a/x86-run
+++
Il 25/06/2013 22:30, Srivatsa S. Bhat ha scritto:
- cpu = get_cpu();
+ cpu = get_online_cpus_atomic();
vmx_vcpu_load(&vmx->vcpu, cpu);
vmx->vcpu.cpu = cpu;
err = vmx_vcpu_setup(vmx);
vmx_vcpu_put(&vmx->vcpu);
- put_cpu();
+ put_online_cpus_atomic();
The
Thanks for the note, it's very helpful! This test actually fails on
older CPUs as well, and I can finally reproduce the issue that Jay also
reported. I'm not able to cure it by going back to 3b656cf764^,
Ok, you tried w/o this commit..
commit
It is a pleasure to welcome the following GSoC 2013 students to the
QEMU, KVM, and libvirt communities:
Libvirt Wireshark Dissector - Yuto KAWAMURA (kawamuray)
http://qemu-project.org/Features/LibvirtWiresharkDissector
Libvirt Introduce API to query IP addresses for given domain - Nehal
J. Wani
On 06/26/2013 01:16 PM, Paolo Bonzini wrote:
Il 25/06/2013 22:30, Srivatsa S. Bhat ha scritto:
-cpu = get_cpu();
+cpu = get_online_cpus_atomic();
vmx_vcpu_load(&vmx->vcpu, cpu);
vmx->vcpu.cpu = cpu;
err = vmx_vcpu_setup(vmx);
vmx_vcpu_put(&vmx->vcpu);
-put_cpu();
Il 26/06/2013 00:34, Paul Gortmaker ha scritto:
In commit e935b8372cf8 (KVM: Convert kvm_lock to raw_spinlock),
the kvm_lock was made a raw lock. However, the kvm mmu_shrink()
function tries to grab the (non-raw) mmu_lock within the scope of
the raw locked kvm_lock being held. This leads to
Il 25/06/2013 22:30, Srivatsa S. Bhat ha scritto:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while
Il 26/06/2013 10:06, Srivatsa S. Bhat ha scritto:
On 06/26/2013 01:16 PM, Paolo Bonzini wrote:
Il 25/06/2013 22:30, Srivatsa S. Bhat ha scritto:
- cpu = get_cpu();
+ cpu = get_online_cpus_atomic();
vmx_vcpu_load(&vmx->vcpu, cpu);
vmx->vcpu.cpu = cpu;
err = vmx_vcpu_setup(vmx);
On 06/24/2013 06:47 PM, Andrew Jones wrote:
On Mon, Jun 24, 2013 at 06:10:14PM +0530, Raghavendra K T wrote:
Results:
===
base = 3.10-rc2 kernel
patched = base + this series
The test was on 32 core (model: Intel(R) Xeon(R) CPU X7560) HT disabled
with 32 KVM guest vcpu 8GB RAM.
Have you
On 06/25/2013 08:20 PM, Andrew Theurer wrote:
On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism. The series provides
implementation for both Xen and KVM.
Changes in V9:
-
On 06/26/2013 01:53 PM, Paolo Bonzini wrote:
Il 26/06/2013 10:06, Srivatsa S. Bhat ha scritto:
On 06/26/2013 01:16 PM, Paolo Bonzini wrote:
Il 25/06/2013 22:30, Srivatsa S. Bhat ha scritto:
- cpu = get_cpu();
+ cpu = get_online_cpus_atomic();
vmx_vcpu_load(&vmx->vcpu, cpu);
On Wed, Jun 26, 2013 at 09:08:12AM +0200, Paolo Bonzini wrote:
Il 26/06/2013 08:06, Jan Kiszka ha scritto:
From: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Applied, thanks.
---
x86-run |2 +-
1 files changed, 1 insertions(+), 1
Il 26/06/2013 10:41, Srivatsa S. Bhat ha scritto:
On 06/26/2013 01:53 PM, Paolo Bonzini wrote:
Il 26/06/2013 10:06, Srivatsa S. Bhat ha scritto:
On 06/26/2013 01:16 PM, Paolo Bonzini wrote:
Il 25/06/2013 22:30, Srivatsa S. Bhat ha scritto:
- cpu = get_cpu();
+ cpu = get_online_cpus_atomic();
On 06/26/2013 04:44 PM, Bhushan Bharat-R65777 wrote:
-Original Message-
From: tiejun.chen [mailto:tiejun.c...@windriver.com]
Sent: Wednesday, June 26, 2013 12:25 PM
To: Bhushan Bharat-R65777
Cc: kvm-...@vger.kernel.org; kvm@vger.kernel.org; ag...@suse.de; Wood Scott-
B07421;
On Mon, Jun 24, 2013 at 10:42:57PM +0200, Stefan Pietsch wrote:
On 24.06.2013 14:30, Gleb Natapov wrote:
On Mon, Jun 24, 2013 at 01:59:34PM +0200, Stefan Pietsch wrote:
As soon as I remove kvmvapic.bin the virtual machine boots with
qemu-kvm 1.5.0. I just verified this with Linux kernel
On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
On 06/25/2013 08:20 PM, Andrew Theurer wrote:
On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism. The series
On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
On 06/25/2013 08:20 PM, Andrew Theurer wrote:
On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
This series replaces the existing paravirtualized spinlock
On 06/26/2013 06:22 PM, Gleb Natapov wrote:
On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
On 06/25/2013 08:20 PM, Andrew Theurer wrote:
On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
This series
Hi,
I noticed that on my 3 VMs running server, that there are 10-20 threads
doing i/o. As the VMs are running on HDDs and not SSDs I think that is
counterproductive: won't these threads make the HDDs seek back and forth
constantly?
Folkert van Heusden
--
Always wondered what the latency of
On Tue, Jun 25, 2013 at 02:10:20PM +0300, Gleb Natapov wrote:
- if (!(ctxt->d & VendorSpecific) && ctxt->only_vendor_specific_insn)
+ if (!(ctxt->d & EmulateOnUD) && ctxt->only_vendor_specific_insn)
Let's rename only_vendor_specific_insn to something like ->ud too.
So this thing is set only when
On Wed, Jun 26, 2013 at 03:52:40PM +0300, Gleb Natapov wrote:
On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
On 06/25/2013 08:20 PM, Andrew Theurer wrote:
On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T
On Wed, Jun 26, 2013 at 04:11:59PM +0200, Borislav Petkov wrote:
On Tue, Jun 25, 2013 at 02:10:20PM +0300, Gleb Natapov wrote:
- if (!(ctxt->d & VendorSpecific) && ctxt->only_vendor_specific_insn)
+ if (!(ctxt->d & EmulateOnUD) && ctxt->only_vendor_specific_insn)
Let's rename
On 06/26/2013 08:09 PM, Chegu Vinod wrote:
On 6/26/2013 6:40 AM, Raghavendra K T wrote:
On 06/26/2013 06:22 PM, Gleb Natapov wrote:
On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
On 06/25/2013 08:20 PM, Andrew
Bjorn Helgaas wrote:
[fix Joerg's email address]
On Tue, Jun 25, 2013 at 10:15 PM, Bjorn Helgaas bhelg...@google.com wrote:
On Wed, Jul 11, 2012 at 11:18 PM, Alex Williamson
alex.william...@redhat.com wrote:
We've confirmed that peer-to-peer between these devices is
not possible. We can
On Wed, 2013-06-26 at 15:52 +0300, Gleb Natapov wrote:
On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
On 06/25/2013 08:20 PM, Andrew Theurer wrote:
On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
On Wed, 2013-06-26 at 17:14 +0200, Andreas Hartmann wrote:
Bjorn Helgaas wrote:
[fix Joerg's email address]
On Tue, Jun 25, 2013 at 10:15 PM, Bjorn Helgaas bhelg...@google.com wrote:
On Wed, Jul 11, 2012 at 11:18 PM, Alex Williamson
alex.william...@redhat.com wrote:
We've confirmed
On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
On 06/26/2013 06:22 PM, Gleb Natapov wrote:
On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
On 06/25/2013 08:20 PM, Andrew Theurer wrote:
On Sun,
Alex Williamson wrote:
On Wed, 2013-06-26 at 17:14 +0200, Andreas Hartmann wrote:
Bjorn Helgaas wrote:
[fix Joerg's email address]
On Tue, Jun 25, 2013 at 10:15 PM, Bjorn Helgaas bhelg...@google.com wrote:
On Wed, Jul 11, 2012 at 11:18 PM, Alex Williamson
alex.william...@redhat.com wrote:
On Wed, 2013-06-26 at 18:24 +0200, Andreas Hartmann wrote:
Alex Williamson wrote:
On Wed, 2013-06-26 at 17:14 +0200, Andreas Hartmann wrote:
Bjorn Helgaas wrote:
[fix Joerg's email address]
On Tue, Jun 25, 2013 at 10:15 PM, Bjorn Helgaas bhelg...@google.com
wrote:
On Wed, Jul 11,
On 2013-06-26 10:03, Kashyap Chamarthy wrote:
Thanks for the note, it's very helpful! This test actually fails on
older CPUs as well, and I can finally reproduce the issue that Jay also
reported. I'm not able to cure it by going back to 3b656cf764^,
Ok, you tried w/o this commit..
Since CPU loops are done as last step in kvm_{insert,remove}_breakpoint()
and kvm_remove_all_breakpoints(), we do not need to distinguish between
invoking CPU and iterated CPUs and can thereby free the identifier for
use as a global variable.
Acked-by: Paolo Bonzini pbonz...@redhat.com
Commit 3474b679486caa8f6448bae974e131370f360c13 (Utilize selective
runtime reg sync for hot code paths) introduced two uses of
ENV_GET_CPU() inside target-s390x/ KVM code. In one case we can use a
direct CPU() cast instead.
Cc: Jason J. Herne jjhe...@us.ibm.com
Signed-off-by: Andreas Färber
This allows to get rid of the last remaining ENV_GET_CPU() in
target-s390x/ by using CPU() cast directly on the argument.
Cc: Jason J. Herne jjhe...@us.ibm.com
Signed-off-by: Andreas Färber afaer...@suse.de
---
target-s390x/kvm.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
Acked-by: Paolo Bonzini pbonz...@redhat.com
Reviewed-by: Richard Henderson r...@twiddle.net
Signed-off-by: Andreas Färber afaer...@suse.de
---
gdbstub.c| 2 +-
include/sysemu/kvm.h | 2 +-
kvm-all.c| 6 +++---
kvm-stub.c | 2 +-
4 files changed, 6 insertions(+),
On 06/26/2013 09:41 PM, Gleb Natapov wrote:
On Wed, Jun 26, 2013 at 07:10:21PM +0530, Raghavendra K T wrote:
On 06/26/2013 06:22 PM, Gleb Natapov wrote:
On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote:
On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote:
On
In commit e935b8372cf8 (KVM: Convert kvm_lock to raw_spinlock),
the kvm_lock was made a raw lock. However, the kvm mmu_shrink()
function tries to grab the (non-raw) mmu_lock within the scope of
the raw locked kvm_lock being held. This leads to the following:
BUG: sleeping function called from
Hi,
this small series contains a few type and style cleanups. It has no
impact on the generated code but removes a few small nits from the
code.
Please apply!
Thanks,
Mathias Krause (3):
KVM: VMX: Use proper types to access const arrays
KVM: VMX: Use size_t to store sizeof() values
KVM:
The type for storing values of the sizeof operator should be size_t.
No semantical changes, only type correctness.
Signed-off-by: Mathias Krause mini...@googlemail.com
---
arch/x86/kvm/vmx.c |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx.c
Void pointers don't need no casting, drop it.
Signed-off-by: Mathias Krause mini...@googlemail.com
---
arch/x86/kvm/x86.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e8ba99c..472350c 100644
--- a/arch/x86/kvm/x86.c
+++
Use a const pointer type instead of casting away the const qualifier
from const arrays. Keep the pointer array on the stack, nonetheless.
Making it static just increases the object size.
Signed-off-by: Mathias Krause mini...@googlemail.com
---
arch/x86/kvm/vmx.c | 15 +++
1 file
Hi all,
I messed up my workflow earlier on, so I had to rebase kvm-arm-next onto
kvm/next. I will do everything in my powers to avoid this in the
future.
Sorry for any troubles.
-Christoffer
Hi Gleb and Paolo,
The following changes since commit 87d41fb4da6467622b7a87fd6afe8071abab6dae:
KVM: s390: Fixed priority of execution in STSI (2013-06-20 23:33:01 +0200)
are available in the git repository at:
git://git.linaro.org/people/cdall/linux-kvm-arm.git tags/kvm-arm-3.11
for you
On Mon, 2013-06-24 at 11:43 -0600, Bjorn Helgaas wrote:
On Wed, Jun 19, 2013 at 6:43 AM, Don Dutile ddut...@redhat.com wrote:
On 06/18/2013 10:52 PM, Bjorn Helgaas wrote:
On Tue, Jun 18, 2013 at 5:03 PM, Don Dutileddut...@redhat.com wrote:
On 06/18/2013 06:22 PM, Alex Williamson wrote:
Il 26/06/2013 20:11, Paul Gortmaker ha scritto:
spin_unlock(&kvm->mmu_lock);
+ kvm_put_kvm(kvm);
srcu_read_unlock(&kvm->srcu, idx);
kvm_put_kvm needs to go last. I can fix when applying, but I'll wait
for Gleb to take a look too.
Paolo
Enable support for MSI interrupts if the device supports it.
Since MSI interrupts are edge triggered, it is no longer necessary to
disable interrupts in the kernel and re-enable them from user-space.
Instead, clearing the interrupt condition in the user space application
automatically re-enables
Sorry for the user query but I'm not finding expertise on the Linux mailing
lists I belong to. The web site says one-off user questions are OK.
I have a few VM images on Parallels 8 for Mac. I want them to be on KVM/Linux.
Some of the images are Linux, but the critical ones are a few types of
[Re: [PATCH-next v2] kvm: don't try to take mmu_lock while holding the main raw
kvm_lock] On 26/06/2013 (Wed 23:59) Paolo Bonzini wrote:
Il 26/06/2013 20:11, Paul Gortmaker ha scritto:
spin_unlock(&kvm->mmu_lock);
+ kvm_put_kvm(kvm);
Hi,
On Wed, 26 Jun 2013 11:12:23 +0530 Bharat Bhushan r65...@freescale.com wrote:
diff --git a/arch/powerpc/include/asm/switch_to.h
b/arch/powerpc/include/asm/switch_to.h
index 200d763..50b357f 100644
--- a/arch/powerpc/include/asm/switch_to.h
+++ b/arch/powerpc/include/asm/switch_to.h
@@
The changes are:
1. rebased on v3.10-rc7
2. removed spinlocks from real mode
3. added security checks between KVM and VFIO
More details in the individual patch comments.
Alexey Kardashevskiy (8):
KVM: PPC: reserve a capability number for multitce support
KVM: PPC: reserve a capability and
The current VFIO-on-POWER implementation supports only user mode
driven mapping, i.e. QEMU is sending requests to map/unmap pages.
However this approach is really slow, so we want to move that to KVM.
Since H_PUT_TCE can be extremely performance sensitive (especially with
network adapters where
This allows the host kernel to handle H_PUT_TCE, H_PUT_TCE_INDIRECT
and H_STUFF_TCE requests without passing them to QEMU, which saves time
on switching to QEMU and back.
Both real and virtual modes are supported. First the kernel tries to
handle a TCE request in the real mode, if failed it
This adds special support for huge pages (16MB). The reference
counting cannot be easily done for such pages in real mode (when
MMU is off) so we added a list of huge pages. It is populated in
virtual mode and get_page is called just once per a huge page.
Real mode handlers check if the
This adds real mode handlers for the H_PUT_TCE_INDIRECT and
H_STUFF_TCE hypercalls for QEMU emulated devices such as IBMVIO
devices or emulated PCI. These calls allow adding multiple entries
(up to 512) into the TCE table in one call which saves time on
transition to/from real mode.
This adds a
This adds hash_for_each_possible_rcu_notrace() which is basically
a notrace clone of hash_for_each_possible_rcu() which cannot be
used in real mode due to its tracing/debugging capability.
Signed-off-by: Alexey Kardashevskiy a...@ozlabs.ru
---
include/linux/hashtable.h | 15 +++
1
VFIO is designed to be used via ioctls on file descriptors
returned by VFIO.
However in some situations support for an external user is required.
The first user is KVM on PPC64 (SPAPR TCE protocol) which is going to
use the existing VFIO groups for exclusive access in real/virtual mode
in the
Signed-off-by: Alexey Kardashevskiy a...@ozlabs.ru
---
include/uapi/linux/kvm.h |1 +
1 file changed, 1 insertion(+)
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index d88c8ee..970b1f5 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -666,6 +666,7
Signed-off-by: Alexey Kardashevskiy a...@ozlabs.ru
---
include/uapi/linux/kvm.h |2 ++
1 file changed, 2 insertions(+)
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 970b1f5..0865c01 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -667,6 +667,7