On Tue, 21 Apr 2015 10:41:51 +1000
David Gibson da...@gibson.dropbear.id.au wrote:
On POWER, storage caching is usually configured via the MMU - attributes
such as cache-inhibited are stored in the TLB and the hashed page table.
This makes correctly performing cache inhibited IO accesses
On Tue, Apr 21, 2015 at 08:37:02AM +0200, Thomas Huth wrote:
On Tue, 21 Apr 2015 10:41:51 +1000
David Gibson da...@gibson.dropbear.id.au wrote:
On POWER, storage caching is usually configured via the MMU - attributes
such as cache-inhibited are stored in the TLB and the hashed page
From: Paul Mackerras pau...@samba.org
This reads the timebase at various points in the real-mode guest
entry/exit code and uses that to accumulate total, minimum and
maximum time spent in those parts of the code. Currently these
times are accumulated per vcpu in 5 parts of the code:
* rm_entry
From: Paul Mackerras pau...@samba.org
This uses msgsnd where possible for signalling other threads within
the same core on POWER8 systems, rather than IPIs through the XICS
interrupt controller. This includes waking secondary threads to run
the guest, the interrupts generated by the virtual
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This adds helper routines for locking and unlocking HPTEs, and uses
them in the rest of the code. We don't change any locking rules in
this patch.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Paul Mackerras
From: Paul Mackerras pau...@samba.org
This creates a debugfs directory for each HV guest (assuming debugfs
is enabled in the kernel config), and within that directory, a file
by which the contents of the guest's HPT (hashed page table) can be
read. The directory is named vm<pid>, where <pid> is
From: Paul Mackerras pau...@samba.org
We can tell when a secondary thread has finished running a guest by
the fact that it clears its kvm_hstate.kvm_vcpu pointer, so there
is no real need for the nap_count field in the kvmppc_vcore struct.
This changes kvmppc_wait_for_nap to poll the
Hi Paolo / Marcelo,
This is my current patch queue for ppc. Please pull.
Alex
The following changes since commit b79013b2449c23f1f505bdf39c5a6c330338b244:
Merge tag 'staging-4.1-rc1' of
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging (2015-04-13
17:37:33 -0700)
are
From: Paul Mackerras pau...@samba.org
When running a multi-threaded guest and vcpu 0 in a virtual core
is not running in the guest (i.e. it is busy elsewhere in the host),
thread 0 of the physical core will switch the MMU to the guest and
then go to nap mode in the code at kvm_do_nap. If the
From: David Gibson da...@gibson.dropbear.id.au
On POWER, storage caching is usually configured via the MMU - attributes
such as cache-inhibited are stored in the TLB and the hashed page table.
This makes correctly performing cache inhibited IO accesses awkward when
the MMU is turned off (real
From: Paul Mackerras pau...@samba.org
Rather than calling cond_resched() in kvmppc_run_core() before doing
the post-processing for the vcpus that we have just run (that is,
calling kvmppc_handle_exit_hv(), kvmppc_set_timer(), etc.), we now do
that post-processing before calling cond_resched(),
From: Paul Mackerras pau...@samba.org
This replaces the assembler code for kvmhv_commence_exit() with C code
in book3s_hv_builtin.c. It also moves the IPI sending code that was
in book3s_hv_rm_xics.c into a new kvmhv_rm_send_ipi() function so it
can be used by kvmhv_commence_exit() as well as
From: Paul Mackerras pau...@samba.org
* Remove unused kvmppc_vcore::n_busy field.
* Remove setting of RMOR, since it was only used on PPC970 and the
PPC970 KVM support has been removed.
* Don't use r1 or r2 in setting the runlatch since they are
conventionally reserved for other things; use
From: Paul Mackerras pau...@samba.org
On entry to the guest, secondary threads now wait for the primary to
switch the MMU after loading up most of their state, rather than before.
This means that the secondary threads get into the guest sooner, in the
common case where the secondary threads get
From: Suresh Warrier warr...@linux.vnet.ibm.com
Add two counters to count how often we generate real-mode ICS resend
and reject events. The counters provide some performance statistics
that could be used in the future to consider if the real mode functions
need further optimizing. The counters
From: Suresh Warrier warr...@linux.vnet.ibm.com
Replaces the ICS mutex lock with a spin lock since we will be porting
these routines to real mode. Note that we need to disable interrupts
before we take the lock in anticipation of the fact that on the guest
side, we are running in the context of a
From: Suresh E. Warrier warr...@linux.vnet.ibm.com
Export __spin_yield so that the arch_spin_unlock() function can
be invoked from a module. This will be required for modules where
we want to take a lock that is also acquired in hypervisor
real mode. Because we want to avoid running any
From: Michael Ellerman mich...@ellerman.id.au
Some PowerNV systems include a hardware random-number generator.
This HWRNG is present on POWER7+ and POWER8 chips and is capable of
generating one 64-bit random number every microsecond. The random
numbers are produced by sampling a set of 64
From: Suresh E. Warrier warr...@linux.vnet.ibm.com
Add counters to track the number of times we switch from guest real mode
to host virtual mode during an interrupt-related hypercall because the
hypercall requires actions that cannot be completed in real mode. This
will help when making
From: Paul Mackerras pau...@samba.org
Previously, if kvmppc_run_core() was running a VCPU that needed a VPA
update (i.e. one of its 3 virtual processor areas needed to be pinned
in memory so the host real mode code can update it on guest entry and
exit), we would drop the vcore lock and do the
From: Paul Mackerras pau...@samba.org
Currently, the entry_exit_count field in the kvmppc_vcore struct
contains two 8-bit counts, one of the threads that have started entering
the guest, and one of the threads that have started exiting the guest.
This changes it to an entry_exit_map field which
From: Paul Mackerras pau...@samba.org
This arranges for threads that are napping due to their vcpu having
ceded or due to not having a vcpu to wake up at the end of the guest's
timeslice without having to be poked with an IPI. We do that by
arranging for the decrementer to contain a value no
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We don't support real-mode areas now that 970 support is removed.
Remove the remaining details of rma from the code. Also rename
rma_setup_done to hpte_setup_done to better reflect the changes.
Signed-off-by: Aneesh Kumar K.V
On Tue, Apr 21, 2015 at 05:48:20PM +0200, Greg Kurz wrote:
On Tue, 21 Apr 2015 16:04:23 +0200
Michael S. Tsirkin m...@redhat.com wrote:
On Fri, Apr 10, 2015 at 12:19:16PM +0200, Greg Kurz wrote:
This patch brings cross-endian support to vhost when used to implement
legacy virtio
From: Suresh Warrier warr...@linux.vnet.ibm.com
Interrupt-based hypercalls return H_TOO_HARD to inform KVM that it needs
to switch to the host to complete the rest of the hypercall function in
virtual mode. This patch ports the virtual mode ICS/ICP reject and resend
functions to be runnable in
On Tue, Apr 21, 2015 at 06:22:20PM +0200, Greg Kurz wrote:
On Tue, 21 Apr 2015 16:06:33 +0200
Michael S. Tsirkin m...@redhat.com wrote:
On Fri, Apr 10, 2015 at 12:20:21PM +0200, Greg Kurz wrote:
The VNET_LE flag was introduced to fix accesses to virtio 1.0 headers
that are always
On Fri, 17 Apr 2015 11:18:13 +0200
Greg Kurz gk...@linux.vnet.ibm.com wrote:
On Fri, 10 Apr 2015 12:15:00 +0200
Greg Kurz gk...@linux.vnet.ibm.com wrote:
Hi,
This patchset allows vhost to be used with legacy virtio when guest and host
have a different endianness.
Patch 7 got
On 21/04/2015 09:52, Paolo Bonzini wrote:
From: Nadav Amit na...@cs.technion.ac.il
[ upstream commit f210f7572bedf3320599e8b2d8e8ec2d96270d0b ]
apic_find_highest_irr assumes irr_pending is set if any vector in APIC_IRR is
set. If this assumption is broken and apicv is disabled, the
2015-04-20 13:33-0500, Wei Huang:
snip
+/* check if msr_idx is a valid index to access PMU */
+inline int kvm_pmu_check_msr_idx(struct kvm_vcpu *vcpu, unsigned msr_idx)
If we really want it inline, it's better done in header.
(I think GCC would inline this in-module anyway, but other
Peter Maydell peter.mayd...@linaro.org writes:
On 31 March 2015 at 16:40, Alex Bennée alex.ben...@linaro.org wrote:
This adds support for single-step. There isn't much to do on the QEMU
side as after we set-up the request for single step via the debug ioctl
it is all handled within the
On 21 April 2015 at 13:56, Alex Bennée alex.ben...@linaro.org wrote:
Peter Maydell peter.mayd...@linaro.org writes:
switch (hsr_ec) {
+case HSR_EC_SOFT_STEP:
+if (cs->singlestep_enabled) {
+return true;
+} else {
+error_report("Came out of
Peter Maydell peter.mayd...@linaro.org writes:
On 31 March 2015 at 16:40, Alex Bennée alex.ben...@linaro.org wrote:
From: Alex Bennée a...@bennee.com
This adds basic support for HW assisted debug. The ioctl interface to
KVM allows us to pass an implementation defined number of break and
On Tue, 21 Apr 2015 16:51:21 +1000
David Gibson da...@gibson.dropbear.id.au wrote:
On Tue, Apr 21, 2015 at 08:37:02AM +0200, Thomas Huth wrote:
On Tue, 21 Apr 2015 10:41:51 +1000
David Gibson da...@gibson.dropbear.id.au wrote:
On POWER, storage caching is usually configured via the
On Tue, 7 Apr 2015 17:56:25 +0200
Michael S. Tsirkin m...@redhat.com wrote:
On Tue, Apr 07, 2015 at 02:15:52PM +0200, Greg Kurz wrote:
The current memory accessors logic is:
- little endian if little_endian
- native endian (i.e. no byteswap) if !little_endian
If we want to fully
On 2015-04-21 15:52, Jonas Jelten wrote:
Hai *!
We [0] are developing x-tier [1], a VMI system that injects code into a
kvm guest from the hypervisor.
Currently we're using kernel modules to be executed in the context of
the VM. The execution is carefully separated from the target VM so
On Fri, Apr 10, 2015 at 12:16:20PM +0200, Greg Kurz wrote:
The current memory accessors logic is:
- little endian if little_endian
- native endian (i.e. no byteswap) if !little_endian
If we want to fully support cross-endian vhost, we also need to be
able to convert to big endian.
I don't get the part with getting cryptodev upstream.
I don't know what getting cryptodev upstream actually implies.
From what I know, cryptodev is done (it is a functional project) that was
rejected by the Linux kernel,
and there isn't actually a way to get it upstream.
On Tue, Mar 31, 2015 at 8:14 PM,
Can you give me more details on GnuTLS?
I'm going through some documentation and code and I see that it
doesn't actually have separate encryption and authentication
primitives.
P.S. I have excluded Kim Phillips from this mail because the mailing
list doesn't allow me to send e-mails to users not
On Mon, Apr 20, 2015 at 05:47:23PM -0700, Andy Lutomirski wrote:
I just wrote a little perf self-monitoring tool that uses rdpmc to
count cycles. Performance sucks under KVM (VMX).
How hard would it be to avoid rdpmc exits in cases where the host and
guest pmu configurations are compatible
On Fri, Apr 10, 2015 at 12:19:16PM +0200, Greg Kurz wrote:
This patch brings cross-endian support to vhost when used to implement
legacy virtio devices. Since it is a relatively rare situation, the
feature availability is controlled by a kernel config option (not set
by default).
The
Hai *!
We [0] are developing x-tier [1], a VMI system that injects code into a
kvm guest from the hypervisor.
Currently we're using kernel modules to be executed in the context of
the VM. The execution is carefully separated from the target VM so the
injection remains stealthy (as always, except
On Fri, Apr 10, 2015 at 12:20:21PM +0200, Greg Kurz wrote:
The VNET_LE flag was introduced to fix accesses to virtio 1.0 headers
that are always little-endian. It can also be used to handle the special
case of a legacy little-endian device implemented by a big-endian host.
Let's add a flag
On 21/04/2015 16:07, Catalin Vasile wrote:
I don't get the part with getting cryptodev upstream.
I don't know what getting cryptodev upstream actually implies.
From what I know cryptodev is done (is a functional project) that was
rejected in the Linux Kernel
and there isn't actually a way to
On Fri, Apr 10, 2015 at 12:15:00PM +0200, Greg Kurz wrote:
Hi,
This patchset allows vhost to be used with legacy virtio when guest and host
have a different endianness.
Patch 7 got rewritten according to Cornelia's and Michael's comments. I have
also introduced patch 8 that brings BE vnet
On 19/03/2015 23:51, James Sullivan wrote:
I played around with native_compose_msi_msg and discovered the following:
* dm=0, rh=0 => Physical Destination Mode
* dm=0, rh=1 => Failed delivery
* dm=1, rh=0 => Logical Destination Mode, No Redirection
* dm=1, rh=1 => Logical Destination Mode,
On 2015-04-21 13:09, Paolo Bonzini wrote:
On 20/04/2015 19:25, Jan Kiszka wrote:
When hardware supports the g_pat VMCB field, we can use it for emulating
the PAT configuration that the guest configures by writing to the
corresponding MSR.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
On 20/04/2015 20:41, Jan Kiszka wrote:
If the guest pushes data for DMA into RAM, it may assume that it lands
there directly, without the need for explicit flushes, because it has
caching disabled - no?
Yes, but Intel IOMMUs can have snooping control and in this case you can
just set memory
2015-04-20 20:41+0200, Jan Kiszka:
On 2015-04-20 20:33, Radim Krčmář wrote:
2015-04-20 19:45+0200, Jan Kiszka:
On 2015-04-20 19:37, Jan Kiszka wrote:
On 2015-04-20 19:33, Radim Krčmář wrote:
2015-04-20 19:21+0200, Jan Kiszka:
On 2015-04-20 19:16, Radim Krčmář wrote:
2015-04-20
On 21/04/2015 13:56, Jan Kiszka wrote:
Basically it's an optimization. The guest can set the UC memory type on
PCI BARs that are actually backed by RAM in QEMU, and then accesses to
these BARs will be unnecessarily slow. It would be particularly bad if,
for example, access to ivshmem
2015-04-21 13:09+0200, Paolo Bonzini:
On 20/04/2015 19:25, Jan Kiszka wrote:
When hardware supports the g_pat VMCB field, we can use it for emulating
the PAT configuration that the guest configures by writing to the
corresponding MSR.
Signed-off-by: Jan Kiszka
2015-04-21 14:18+0200, Paolo Bonzini:
On 19/03/2015 23:51, James Sullivan wrote:
I played around with native_compose_msi_msg and discovered the following:
* dm=0, rh=0 => Physical Destination Mode
* dm=0, rh=1 => Failed delivery
* dm=1, rh=0 => Logical Destination Mode, No Redirection
*
On Tue, 21 Apr 2015 16:04:23 +0200
Michael S. Tsirkin m...@redhat.com wrote:
On Fri, Apr 10, 2015 at 12:19:16PM +0200, Greg Kurz wrote:
This patch brings cross-endian support to vhost when used to implement
legacy virtio devices. Since it is a relatively rare situation, the
feature
On Tue, Apr 21, 2015 at 06:32:54PM +0200, Paolo Bonzini wrote:
However, if you take into account that RDPMC can also be used
to read an inactive counter, and that multiple guests fight for the
same host counters, it's even harder to ensure that the guest counter
indices match those on the
On Tue, Apr 21, 2015 at 1:51 PM, Peter Zijlstra pet...@infradead.org wrote:
On Tue, Apr 21, 2015 at 06:32:54PM +0200, Paolo Bonzini wrote:
However, if you take into account that RDPMC can also be used
to read an inactive counter, and that multiple guests fight for the
same host counters, it's
On 04/21/2015 02:41 AM, David Gibson wrote:
On POWER, storage caching is usually configured via the MMU - attributes
such as cache-inhibited are stored in the TLB and the hashed page table.
This makes correctly performing cache inhibited IO accesses awkward when
the MMU is turned off (real
On 20/04/2015 19:25, Jan Kiszka wrote:
When hardware supports the g_pat VMCB field, we can use it for emulating
the PAT configuration that the guest configures by writing to the
corresponding MSR.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
I'm not sure about this. The problem is
On 21/04/2015 13:25, Jan Kiszka wrote:
On 2015-04-21 13:09, Paolo Bonzini wrote:
On 20/04/2015 19:25, Jan Kiszka wrote:
When hardware supports the g_pat VMCB field, we can use it for emulating
the PAT configuration that the guest configures by writing to the
corresponding MSR.
On 2015-04-21 13:32, Paolo Bonzini wrote:
Basically it's an optimization. The guest can set the UC memory type on
PCI BARs that are actually backed by RAM in QEMU, and then accesses to
these BARs will be unnecessarily slow. It would be particularly bad if,
for example, access to ivshmem were
Change to u16 if they only contain data in the low 16 bits.
Change the level field to bool, since we assign 1 sometimes, but
just mask icr_low with APIC_INT_ASSERT in apic_send_ipi.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
arch/x86/include/asm/kvm_host.h | 8
On 19/03/2015 02:26, James Sullivan wrote:
Changes Since v1:
* Reworked patches into two commits:
1) [Patch v2 1/2] Extended struct kvm_lapic_irq with bool
msi_redir_hint
* Initialize msi_redir_hint = true in kvm_set_msi_irq when RH=1
*
On Tue, 21 Apr 2015 16:09:44 +0200
Michael S. Tsirkin m...@redhat.com wrote:
On Fri, Apr 10, 2015 at 12:16:20PM +0200, Greg Kurz wrote:
The current memory accessors logic is:
- little endian if little_endian
- native endian (i.e. no byteswap) if !little_endian
If we want to fully
Hello Fellows,
We are experiencing some kernel panics, probably caused by APIC
handling, in our host servers when using kernel-3.12.38 and qemu-1.2.
The issues (most of them similar) happen when executing VMs with
CentOS-6, Ubuntu-14.* and Windows 2012 Server as the guest OS and
different
On Tue, 21 Apr 2015 16:06:33 +0200
Michael S. Tsirkin m...@redhat.com wrote:
On Fri, Apr 10, 2015 at 12:20:21PM +0200, Greg Kurz wrote:
The VNET_LE flag was introduced to fix accesses to virtio 1.0 headers
that are always little-endian. It can also be used to handle the special
case of a
On Tue, 21 Apr 2015 16:10:18 +0200
Michael S. Tsirkin m...@redhat.com wrote:
On Fri, Apr 10, 2015 at 12:15:00PM +0200, Greg Kurz wrote:
Hi,
This patchset allows vhost to be used with legacy virtio when guest and host
have a different endianness.
Patch 7 got rewritten according to
2015-04-18 02:23-0400, Wei Huang:
Currently KVM only supports vPMU for Intel CPUs. This patchset enables
KVM vPMU support for AMD platform by creating a common PMU interface for
x86. By refactoring, PMU-related MSR accesses from guest VMs are dispatched
to corresponding functions defined in
On 21/04/2015 17:05, Peter Zijlstra wrote:
On Mon, Apr 20, 2015 at 05:47:23PM -0700, Andy Lutomirski wrote:
I just wrote a little perf self-monitoring tool that uses rdpmc to
count cycles. Performance sucks under KVM (VMX).
How hard would it be to avoid rdpmc exits in cases where the host