On Wed, Feb 13, 2013 at 10:53:14AM +0100, Sylvain Bauza wrote:
As per the documentation, Nova (the OpenStack Compute layer) is doing a
'qemu-img convert -s' against a running instance.
http://docs.openstack.org/trunk/openstack-compute/admin/content/creating-images-from-running-instances.html
That
On Tue, Feb 12, 2013 at 03:30:37PM +0100, Sylvain Bauza wrote:
We currently run OpenStack Essex hosts with KVM-1.0 (Ubuntu 12.04)
instances with qcow2,virtio,cache=none
For Linux VMs, no trouble at all, but we do observe filesystem
corruption and inconsistency (missing DLLs, CHKDSK asked by
Hi,
Latest updates: I tried using:
- cache=writethrough / kvm-1.0: errors in qcow2
- cache=none / kvm-1.3: no errors using 'qemu-img check', but
Event Viewer is complaining
I have to admit I'm lost. I cannot understand what is causing this
corruption, which only appears on some Windows
On Mon, Feb 11, 2013 at 12:19:28PM +0100, Jan Kiszka wrote:
We already pass vmcs12 as argument.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Thanks, applied.
---
arch/x86/kvm/vmx.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/vmx.c
On 14/02/2013 07:00, Rusty Russell wrote:
Paolo Bonzini pbonz...@redhat.com writes:
This series adds a different set of APIs for adding a buffer to a
virtqueue. The new API lets you pass the buffers piecewise, wrapping
multiple calls to virtqueue_add_sg between virtqueue_start_buf and
On Mon, Feb 11, 2013 at 12:19:17PM +0100, Jan Kiszka wrote:
This prevents trapping L2 I/O exits if L1 has neither unconditional nor
bitmap-based exiting enabled. Furthermore, it implements basic I/O
bitmap handling. Repeated string accesses are still reported to L1
unconditionally for now.
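For readers following along, the check this implies can be sketched as follows. This is a sketch of the VMX-architecture semantics of the two 4 KiB I/O bitmaps (bitmap A covers ports 0x0000-0x7FFF, bitmap B covers 0x8000-0xFFFF; a set bit forces an exit), not KVM's actual code; the function name is illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative I/O-bitmap lookup: pick bitmap A or B by port range,
 * then test the port's bit. A set bit means the access must exit to
 * the (L1) hypervisor.
 */
static bool nested_io_intercepted(const uint8_t *bitmap_a,
                                  const uint8_t *bitmap_b,
                                  uint16_t port)
{
    const uint8_t *b = (port < 0x8000) ? bitmap_a : bitmap_b;
    uint16_t bit = port & 0x7fff;   /* offset within the chosen bitmap */

    return b[bit / 8] & (1u << (bit % 8));
}
```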
Interesting point you mention. Even if qcow2 is read only, the image is
changing (especially, I'm running IIS with ASP support and VB DLLs)
while the snapshot is taken.
As asked in a second post, I'm running with latest Windows virtio
drivers, but I only apply a virtio driver update *after*
On 14.02.2013 08:55, Gleb Natapov wrote:
On Wed, Feb 13, 2013 at 12:43:10PM +0100, Jan Kiszka wrote:
Python may otherwise decide to read larger chunks, applying the seek
only on the software buffer. This will return results from the wrong
MSRs.
Signed-off-by: Jan Kiszka
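The underlying contract is that the MSR index is the file offset into /dev/cpu/<n>/msr, so the seek must reach the driver rather than be satisfied from an already-read userspace buffer. A minimal sketch of that access pattern in C (illustrative helper, not the patched script; pread(2) on a raw fd bypasses any stdio-style buffering):

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/*
 * Read one 8-byte MSR value: open the device unbuffered and use
 * pread() so the offset (the MSR index) is seen by the driver,
 * never by a software buffer.
 */
static int read_msr(const char *dev, uint32_t msr, uint64_t *value)
{
    int fd = open(dev, O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = pread(fd, value, sizeof(*value), msr);
    close(fd);
    return n == (ssize_t)sizeof(*value) ? 0 : -1;
}
```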
On Tue, Feb 12, 2013 at 5:21 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
On Thu, Feb 7, 2013 at 4:19 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
I believe Google will announce GSoC again this year (there is
no guarantee though) and I have created the wiki page so we can begin
organizing
Add support for error containment when a VFIO device assigned to a KVM
guest encounters an error. This is for PCIe devices/drivers that support AER
functionality. When the host OS is notified of an error in a device either
through the firmware first approach or through an interrupt handled by the
- New VFIO_SET_IRQ ioctl option to pass the eventfd that is signaled
  when an error occurs in the vfio_pci_device
- Register pci_error_handler for the vfio_pci driver
- When the device encounters an error, the error handler registered by
the vfio_pci
- Create eventfd per vfio device assigned to a guest and register an
event handler
- This fd is passed to the vfio_pci driver through the SET_IRQ ioctl
- When the device encounters an error, the eventfd is signalled
and the qemu eventfd handler gets
- Added vfio_device_get_from_dev() as wrapper to get
reference to vfio_device from struct device.
- Added vfio_device_data() as a wrapper to get device_data from
vfio_device.
Signed-off-by: Vijay Mohan Pandarathil vijaymohan.pandarat...@hp.com
---
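The eventfd signal/consume cycle described in these notes can be sketched from userspace as follows. This is illustrative only: the real flow hands the fd to vfio_pci via the SET_IRQ ioctl mentioned above, which is omitted here; helper names are invented for the sketch.

```c
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Stand-in for the kernel side: signal one error event on the fd. */
static int signal_device_error(int efd)
{
    uint64_t one = 1;
    return write(efd, &one, sizeof(one)) == sizeof(one) ? 0 : -1;
}

/*
 * Stand-in for the QEMU handler: a single 8-byte read returns the
 * accumulated event count and resets the eventfd counter.
 */
static uint64_t consume_error_events(int efd)
{
    uint64_t count = 0;
    if (read(efd, &count, sizeof(count)) != sizeof(count))
        return 0;
    return count;
}
```

Because the eventfd counter accumulates, multiple errors signalled before the handler runs are delivered as one wakeup with a count.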
On 2013-02-14 10:32, Gleb Natapov wrote:
On Mon, Feb 11, 2013 at 12:19:17PM +0100, Jan Kiszka wrote:
This prevents trapping L2 I/O exits if L1 has neither unconditional nor
bitmap-based exiting enabled. Furthermore, it implements basic I/O
bitmap handling. Repeated string accesses are still
On Thu, Feb 14, 2013 at 11:25:05AM +0100, Andreas Färber wrote:
On 14.02.2013 08:55, Gleb Natapov wrote:
On Wed, Feb 13, 2013 at 12:43:10PM +0100, Jan Kiszka wrote:
Python may otherwise decide to read larger chunks, applying the seek
only on the software buffer. This will return
On Thu, Feb 14, 2013 at 12:19:26PM +0100, Jan Kiszka wrote:
On 2013-02-14 10:32, Gleb Natapov wrote:
On Mon, Feb 11, 2013 at 12:19:17PM +0100, Jan Kiszka wrote:
This prevents trapping L2 I/O exits if L1 has neither unconditional nor
bitmap-based exiting enabled. Furthermore, it implements
On 2013-02-14 13:11, Gleb Natapov wrote:
On Thu, Feb 14, 2013 at 12:19:26PM +0100, Jan Kiszka wrote:
On 2013-02-14 10:32, Gleb Natapov wrote:
On Mon, Feb 11, 2013 at 12:19:17PM +0100, Jan Kiszka wrote:
This prevents trapping L2 I/O exits if L1 has neither unconditional nor
bitmap-based
On Thu, Feb 14, 2013 at 01:22:01PM +0100, Jan Kiszka wrote:
On 2013-02-14 13:11, Gleb Natapov wrote:
On Thu, Feb 14, 2013 at 12:19:26PM +0100, Jan Kiszka wrote:
On 2013-02-14 10:32, Gleb Natapov wrote:
On Mon, Feb 11, 2013 at 12:19:17PM +0100, Jan Kiszka wrote:
This prevents trapping L2
Ciao,
I have trouble getting the full list of the output of
the qemu 'help' command
inside kvm when I switch to the second console (CTRL-ALT-2).
I can't find the full list even inside the source code (apt-get source
qemu-kvm), nor inside the binary file (grep blockarg qemu-*).
Is it possible to redirect the output of
On Thu, Feb 14, 2013, Gleb Natapov wrote about Re: [PATCH v2] KVM: nVMX:
Improve I/O exit handling:
Not sure how to map a failure on real HW behaviour. I guess it's best to
Exit to L1 with nested_vmx_failValid() maybe?
To my understanding, nested_vmx_failValid/Invalid are related to
https://bugzilla.kernel.org/show_bug.cgi?id=53851
Summary: nVMX: Support live migration of whole L1 guest
Product: Virtualization
Version: unspecified
Platform: All
OS/Version: Linux
Tree: Mainline
Status: NEW
https://bugzilla.kernel.org/show_bug.cgi?id=53601
Nadav Har'El n...@math.technion.ac.il changed:
What       |Removed |Added
Depends on |        |53851
--
https://bugzilla.kernel.org/show_bug.cgi?id=53851
Nadav Har'El n...@math.technion.ac.il changed:
What       |Removed |Added
Blocks     |        |53601
--
On Thu, Feb 14, 2013 at 03:54:23PM +0200, Nadav Har'El wrote:
On Thu, Feb 14, 2013, Gleb Natapov wrote about Re: [PATCH v2] KVM: nVMX:
Improve I/O exit handling:
Not sure how to map a failure on real HW behaviour. I guess it's best
to
Exit to L1 with nested_vmx_failValid() maybe?
https://bugzilla.kernel.org/show_bug.cgi?id=53861
Summary: nVMX: inaccuracy in emulation of entry failure
Product: Virtualization
Version: unspecified
Platform: All
OS/Version: Linux
Tree: Mainline
Status: NEW
https://bugzilla.kernel.org/show_bug.cgi?id=53601
Nadav Har'El n...@math.technion.ac.il changed:
What       |Removed |Added
Depends on |        |53861
--
https://bugzilla.kernel.org/show_bug.cgi?id=53861
Nadav Har'El n...@math.technion.ac.il changed:
What       |Removed |Added
Blocks     |        |53601
--
On Thu, Feb 14, 2013 at 11:39 AM, harryxiyou harryxi...@gmail.com wrote:
On Tue, Feb 12, 2013 at 5:21 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
On Thu, Feb 7, 2013 at 4:19 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
I believe Google will announce GSoC again this year (there is
no
https://bugzilla.kernel.org/show_bug.cgi?id=53871
Summary: nVMX: Can malicious L2 kill L1?
Product: Virtualization
Version: unspecified
Platform: All
OS/Version: Linux
Tree: Mainline
Status: NEW
Severity: low
https://bugzilla.kernel.org/show_bug.cgi?id=53601
Nadav Har'El n...@math.technion.ac.il changed:
What       |Removed |Added
Depends on |        |53871
--
https://bugzilla.kernel.org/show_bug.cgi?id=53871
Nadav Har'El n...@math.technion.ac.il changed:
What       |Removed |Added
Blocks     |        |53601
--
On Thu, Feb 14, 2013 at 11:15 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
[...]
Hi Harry,
Hi Stefan,
Thanks for your interest. You can begin thinking about ideas but
please keep in mind that we are still in the very early stages of GSoC
preparation.
Google will publish the list of
On 01.02.2013 13:38, Andreas Färber wrote:
Hello,
This series moves more fields from CPU_COMMON / CPU*State to CPUState,
allowing access from target-independent code.
The final patch in this series will help solve some issues (in particular
avoid a dependency on CPU_COMMON TLB
This prevents trapping L2 I/O exits if L1 has neither unconditional nor
bitmap-based exiting enabled. Furthermore, it implements basic I/O
bitmap handling. Repeated string accesses are still reported to L1
unconditionally for now.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
Changes in
This avoids basing decisions on uninitialized variables, potentially
leaking kernel data to the L1 guest.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
arch/x86/kvm/vmx.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
On Thu, 2013-02-14 at 04:41 -0600, Vijay Mohan Pandarathil wrote:
- New VFIO_SET_IRQ ioctl option to pass the eventfd that is signaled
  when an error occurs in the vfio_pci_device
- Register pci_error_handler for the vfio_pci driver
- When the device encounters
On Thu, 2013-02-14 at 04:41 -0600, Vijay Mohan Pandarathil wrote:
- Create eventfd per vfio device assigned to a guest and register an
event handler
- This fd is passed to the vfio_pci driver through the SET_IRQ ioctl
- When the device encounters an error, the
This adds support for the ibm,int-on and ibm,int-off RTAS calls to the
in-kernel XICS emulation and corrects the handling of the saved
priority by the ibm,set-xive RTAS call. With this, ibm,int-off sets
the specified interrupt's priority in its saved_priority field and
sets the priority to 0xff
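A minimal model of that saved-priority handling, with illustrative names (not the kernel's actual state layout); in XICS, priority 0xff is the least-favored level, i.e. the source is effectively masked:

```c
#include <stdint.h>

/* Illustrative per-interrupt state for the sketch below. */
struct irq_prio_state {
    uint8_t priority;        /* current priority; 0xff == masked */
    uint8_t saved_priority;  /* what ibm,int-on restores */
};

/* ibm,int-off: remember the current priority, then mask the source. */
static void rtas_int_off(struct irq_prio_state *s)
{
    s->saved_priority = s->priority;
    s->priority = 0xff;
}

/* ibm,int-on: unmask at the previously saved priority. */
static void rtas_int_on(struct irq_prio_state *s)
{
    s->priority = s->saved_priority;
}
```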
This patch series implements in-kernel emulation of the XICS interrupt
controller architecture defined in PAPR (Power Architecture Platform
Requirements, the document that defines IBM's pSeries platform
architecture).
One of the things I have done in this version is to provide a way for
this to
From: Michael Ellerman mich...@ellerman.id.au
For pseries machine emulation, in order to move the interrupt
controller code to the kernel, we need to intercept some RTAS
calls in the kernel itself. This adds an infrastructure to allow
in-kernel handlers to be registered for RTAS services by
This makes the XICS interrupt controller emulation code export a struct
containing function pointers for the various calls into the XICS code.
The generic book3s code then uses these function pointers instead of
calling directly into the XICS code (except for the XICS instantiation
function).
This adds the ability for userspace to save and restore the state
of the XICS interrupt presentation controllers (ICPs) via the
KVM_GET/SET_ONE_REG interface. Since there is one ICP per vcpu, we
simply define a new 64-bit register in the ONE_REG space for the ICP
state. The state includes the
This streamlines our handling of external interrupts that come in
while we're in the guest. First, when waking up a hardware thread
that was napping, we split off the napping due to H_CEDE case
earlier, and use the code that handles an external interrupt (0x500)
in the guest to handle that too.
Currently kvmppc_core_dequeue_external() takes a struct kvm_interrupt *
argument and does nothing with it, in any of its implementations.
This removes it in order to make things easier for forthcoming
in-kernel interrupt controller emulation code.
Signed-off-by: Paul Mackerras pau...@samba.org
From: Benjamin Herrenschmidt b...@kernel.crashing.org
This adds in-kernel emulation of the XICS (eXternal Interrupt
Controller Specification) interrupt controller specified by PAPR, for
both HV and PR KVM guests.
This adds a new KVM_CREATE_IRQCHIP_ARGS ioctl, which is like
KVM_CREATE_IRQCHIP in
From: Benjamin Herrenschmidt b...@kernel.crashing.org
This adds an implementation of the XICS hypercalls in real mode for HV
KVM, which allows us to avoid exiting the guest MMU context on all
threads for a variety of operations such as fetching a pending
interrupt, EOI of messages, IPIs, etc.
From: Benjamin Herrenschmidt b...@kernel.crashing.org
Currently, we wake up a CPU by sending a host IPI with
smp_send_reschedule() to thread 0 of that core, which will take all
threads out of the guest, and cause them to re-evaluate their
interrupt status on the way back in.
This adds a
Hi Marcelo / Gleb,
This is my current patch queue for ppc. Please pull.
Highlights of this queue drop are:
- BookE: Fast mapping support for 4k backed memory
- BookE: Handle alignment interrupts
Alex
The following changes since commit cbd29cb6e38af6119df2cdac0c58acf0e85c177e:
Jan
When we invalidate shadow TLB maps on the host, we don't mark them
as not valid. But we should.
Fix this by removing the E500_TLB_VALID from their flags when
invalidating.
Signed-off-by: Alexander Graf ag...@suse.de
---
arch/powerpc/kvm/e500_tlb.c | 13 ++---
1 files changed, 10
When shadow mapping a page, mapping this page can fail. In that case we
don't have a shadow map.
Take this case into account, otherwise we might end up writing bogus TLB
entries into the host TLB.
While at it, also move the write_stlbe() calls into the respective TLBn
handlers.
Signed-off-by:
When the guest triggers an alignment interrupt, we don't handle it properly
today and instead BUG_ON(). This really shouldn't happen.
Instead, we should just pass the interrupt back into the guest so it can deal
with it.
Reported-by: Gao Guanhua-B22826 b22...@freescale.com
Tested-by: Gao
From: Bharat Bhushan bharat.bhus...@freescale.com
Signed-off-by: Bharat Bhushan bharat.bhus...@freescale.com
Signed-off-by: Alexander Graf ag...@suse.de
---
arch/powerpc/include/asm/reg_booke.h |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git
When emulating tlbwe, we want to automatically map the entry that just got
written in our shadow TLB map, because chances are quite high that it's
going to be used very soon.
Today this happens explicitly, duplicating all the logic that is in
kvmppc_mmu_map() already. Just call that one instead.
Later patches want to call the function and it doesn't have
dependencies on anything below write_host_tlbe.
Move it higher up in the file.
Signed-off-by: Alexander Graf ag...@suse.de
---
arch/powerpc/kvm/e500_tlb.c | 32
1 files changed, 16 insertions(+), 16
From: Bharat Bhushan bharat.bhus...@freescale.com
Current kvmppc_booke_handlers uses the same macro (KVM_HANDLER) and
all handlers are considered to be the same size. This will not be
the case if we want to use different macros for different handlers.
This patch improves the kvmppc_booke_handler
From: Bharat Bhushan bharat.bhus...@freescale.com
Like other places, use thread_struct to get vcpu reference.
Signed-off-by: Bharat Bhushan bharat.bhus...@freescale.com
Signed-off-by: Alexander Graf ag...@suse.de
---
arch/powerpc/include/asm/reg.h |2 --
The guest TLB handling code should not have any insight into how the host
TLB shadow code works.
kvmppc_e500_tlbil_all() is a function that is used for distinction between
e500v2 and e500mc (E.HV) on how to flush shadow entries. This function really
is private between the e500.c/e500mc.c file and
When a host mapping fault happens in a guest TLB1 entry today, we
map the translated guest entry into the host's TLB1.
This isn't particularly clever when the guest is mapped by normal 4k
pages, since these would be a lot better to put into TLB0 instead.
This patch adds the required logic to map
Host shadow TLB flushing is logic that the guest TLB code should have
no insight about. Declare the internal clear_tlb_refs and clear_tlb1_bitmap
functions static to the host TLB handling file.
Instead of these, we can use the already exported kvmppc_core_flush_tlb().
This gives us a common API
On Tue, Feb 12, 2013 at 11:43 PM, Jan Kiszka jan.kis...@siemens.com wrote:
On 2013-02-12 20:13, Nakajima, Jun wrote:
I looked at your (old) patches, and they seem to be very useful
although some of them require rebasing or rewriting. We are interested
in completing the nested-VMX features.
On Mon, Feb 11, 2013 at 11:12:42PM +1100, a...@ozlabs.ru wrote:
From: Alexey Kardashevskiy a...@ozlabs.ru
The current VFIO-on-POWER implementation supports only user mode
driven mapping, i.e. QEMU is sending requests to map/unmap pages.
However this approach is really slow in really fast
On Mon, Feb 11, 2013 at 11:12:41PM +1100, a...@ozlabs.ru wrote:
+static long emulated_h_put_tce(struct kvmppc_spapr_tce_table *stt,
+ unsigned long ioba, unsigned long tce)
+{
+ unsigned long idx = ioba >> SPAPR_TCE_SHIFT;
+ struct page *page;
+ u64 *tbl;
+
+ /*
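The snippet computes the TCE table slot by shifting the I/O bus address (ioba) down by the TCE page-size shift. A standalone sketch of that bounds-checked lookup, assuming 4 KiB TCE pages (SPAPR_TCE_SHIFT == 12 is an assumption of this sketch, not taken from the patch):

```c
#include <stdint.h>

#define SPAPR_TCE_SHIFT 12   /* assumed: 4 KiB TCE pages */

/*
 * Map an I/O bus address to its slot in the TCE table, rejecting
 * addresses outside the DMA window.
 */
static long tce_index(uint64_t ioba, uint64_t window_size)
{
    uint64_t idx = ioba >> SPAPR_TCE_SHIFT;

    if (idx >= (window_size >> SPAPR_TCE_SHIFT))
        return -1;          /* outside the DMA window */
    return (long)idx;
}
```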
On Mon, Feb 11, 2013 at 11:12:43PM +1100, a...@ozlabs.ru wrote:
From: Alexey Kardashevskiy a...@ozlabs.ru
The patch allows the host kernel to handle H_PUT_TCE requests
without involving QEMU, which should save time on switching
from the kernel to QEMU and back.
The patch adds an IOMMU
On Mon, Feb 11, 2013 at 11:12:40PM +1100, a...@ozlabs.ru wrote:
From: Alexey Kardashevskiy a...@ozlabs.ru
The lookup_linux_pte() function returns a linux PTE which
is required to convert KVM guest physical address into host real
address in real mode.
This conversion will be used by