On 20/01/2015 08:54, Wincy Van wrote:
On Tue, Jan 20, 2015 at 3:34 PM, Paolo Bonzini pbonz...@redhat.com wrote:
Hence, we can disable local interrupts while delivering nested posted
interrupts to make sure
we are faster than the destination vcpu. This is a bit tricky, but it
can avoid that
If a vcpu has an interrupt pending in VMX non-root mode, we
kick that vcpu to inject the interrupt in a timely manner. With posted
interrupt processing, the kick is not needed, and
interrupts are fully taken care of by hardware.
In nested VMX, this feature avoids many more vmexits
than in non-nested VMX.
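The delivery choice described above can be sketched as follows. This is a toy model for illustration only; the struct and helper names are ours, not KVM's internal API (KVM's real type is struct kvm_vcpu and the posted path writes a posted-interrupt descriptor and sends the notification vector):

```c
#include <stdbool.h>

/* Toy vcpu model for illustration only (not KVM's struct kvm_vcpu). */
struct toy_vcpu {
	bool in_guest_mode;   /* running in VMX non-root mode */
	int  posted_vector;   /* vector recorded in the PI descriptor, 0 = none */
	int  kicks;           /* forced vmexits ("kicks") delivered */
};

static void deliver_interrupt(struct toy_vcpu *v, int vector,
			      bool posted_intr_enabled)
{
	if (posted_intr_enabled && v->in_guest_mode) {
		/* Posted path: record the vector and notify; the CPU
		 * injects it with no vmexit on the target vcpu. */
		v->posted_vector = vector;
	} else {
		/* Legacy path: kick the vcpu out of guest mode so that
		 * KVM injects the interrupt on the next vmentry. */
		v->kicks++;
	}
}
```

The point of the patch series is exactly that the first branch is taken in the nested case too, so the kick (and its vmexit) disappears.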
This patch
On 20/01/2015 09:48, Wincy Van wrote:
+static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
+					       int vector)
+{
+	int r = 0;
+	struct vmcs12 *vmcs12;
+
+	/*
+	 * Since posted intr delivery is async,
+
On Tue, Jan 20, 2015 at 5:54 PM, Paolo Bonzini pbonz...@redhat.com wrote:
On 20/01/2015 09:48, Wincy Van wrote:
+static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
+					       int vector)
+{
+	int r = 0;
+	struct vmcs12
On 20/01/2015 11:34, Li Kaihang wrote:
Li Kaihang: I think I made a mistake here: the IDT-vectoring information
field is not written by the vectored event but by Event Delivery.
A VM exit during Event Delivery is not triggered by external
interrupt delivery, only a VM exit due
From: Paolo Bonzini pbonz...@redhat.com
To: Li Kaihang li.kaih...@zte.com.cn, g...@kernel.org,
Cc: t...@linutronix.de, mi...@redhat.com, h...@zytor.com, x...@kernel.org,
kvm@vger.kernel.org, linux-ker...@vger.kernel.org
Date: 2015-01-19 11:29 PM
Subject:Re: [PATCH 1/1]
x86/unittests.cfg uses the variant with underscore.
Rename tscdeadline-latency.c instead of fixing x86/unittests.cfg because
we use only underscores elsewhere.
Signed-off-by: Radim Krčmář rkrc...@redhat.com
---
config/config-x86-common.mak | 2 +-
config/config-x86_64.mak
There is no point in executing it through run_tests.sh.
Signed-off-by: Radim Krčmář rkrc...@redhat.com
---
x86/unittests.cfg | 5 -----
1 file changed, 5 deletions(-)
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 75b959535c01..badb08ad138e 100644
--- a/x86/unittests.cfg
+++
unittest.cfg expects tscdeadline_latency.flat, while the existing
filename contains a dash => we don't call it from ./run_tests.sh.
(Which is considered a PASS :)
tscdeadline_latency isn't a unit test so we can omit it.
(Feel free to drop [1/2], I just loathe inconsistencies.)
Radim Krčmář (2):
Juan Quintela quint...@redhat.com wrote:
Hi
Please, send any topic that you are interested in covering.
Thanks, Juan.
Call details:
By popular demand, a google calendar public entry with it
As there is no agenda, the call gets cancelled.
Sorry for the late notification.
We failed smptest [1/2], but didn't notice [2/2].
APICv returns 0 on APIC MMIO reads under x2APIC,
KVM returns real APIC ID instead.
Do we want to test this weirdness in another unit test?
(I'll post a patch for KVM after checking that there aren't quirks.)
Radim Krčmář (2):
lib/x86: fix
On 20/01/2015 17:35, Radim Krčmář wrote:
We failed smptest [1/2], but didn't notice [2/2].
APICv returns 0 on APIC MMIO reads under x2APIC,
KVM returns real APIC ID instead.
Do we want to test this weirdness in another unit test?
(I'll post a patch for KVM after checking that there
smptest was failing and we didn't notice; turn it into a unit test.
Signed-off-by: Radim Krčmář rkrc...@redhat.com
---
x86/smptest.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/x86/smptest.c b/x86/smptest.c
index 37805999b3b0..acda22e18314 100644
--- a/x86/smptest.c
We used MMIO (xAPIC) for x2APIC.
This is the same as using xAPIC in globally disabled mode.
Signed-off-by: Radim Krčmář rkrc...@redhat.com
---
lib/x86/apic.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/x86/apic.c b/lib/x86/apic.c
index 6876d85fac95..9c42c4d0a4fc
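The bug the diff above fixes comes down to how the two APIC modes are accessed: xAPIC registers live in an MMIO page, while in x2APIC mode the same registers are reached through MSRs in the 0x800 range, computed from the 16-byte-aligned MMIO offset. A minimal sketch of that mapping (the helper name is ours, not the test suite's):

```c
#include <stdint.h>

/* In x2APIC mode each 16-byte-aligned xAPIC MMIO register offset maps
 * to an MSR at 0x800 + (offset >> 4); e.g. the APIC ID register at
 * MMIO offset 0x20 becomes MSR 0x802 and must be read with rdmsr, not
 * with a load from the MMIO page. */
static uint32_t x2apic_msr(uint32_t xapic_mmio_offset)
{
	return 0x800u + (xapic_mmio_offset >> 4);
}
```

Reading the MMIO page while the APIC is in x2APIC mode behaves like a globally disabled xAPIC, which is why the old code got wrong answers.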
On 20/01/2015 15:27, Radim Krčmář wrote:
unittest.cfg expects tscdeadline_latency.flat, while the existing
filename contains a dash => we don't call it from ./run_tests.sh.
(Which is considered a PASS :)
tscdeadline_latency isn't a unit test so we can omit it.
(Feel free to drop [1/2], I
On Tue, 20 Jan 2015 16:46:53 +1100
Paul Mackerras pau...@samba.org wrote:
On Mon, Jan 19, 2015 at 12:41:00PM -0200, Marcelo Tosatti wrote:
On Fri, Jan 16, 2015 at 11:48:46AM -0500, Steven Rostedt wrote:
static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
{
-
SuSE's 2.6.16 kernel fails to boot if the delta between tsc_timestamp
and rdtsc is larger than a given threshold:
* If we get more than the below threshold into the future, we rerequest
* the real time from the host again, which has only a little offset that
* we then need to adjust using the
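The guest-side check quoted above can be sketched like this. All names and the threshold value are illustrative stand-ins for the 2.6.16 kvmclock code, not the actual SuSE source:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative threshold; the real kernel picks its own value. */
#define PVCLOCK_DELTA_THRESHOLD (1ULL << 30)

struct pvclock_sample {
	uint64_t tsc_timestamp;   /* TSC value when the host updated the page */
	uint64_t system_time;     /* host time at tsc_timestamp, in ns */
};

/* Return true when the raw TSC has run so far ahead of the recorded
 * tsc_timestamp that the guest should rerequest time from the host. */
static bool pvclock_needs_refresh(const struct pvclock_sample *s,
				  uint64_t rdtsc_now)
{
	return rdtsc_now - s->tsc_timestamp > PVCLOCK_DELTA_THRESHOLD;
}
```

The boot failure follows directly: if the host hands the guest a tsc_timestamp that trails rdtsc by more than the threshold, this check fires on every read and the guest keeps rerequesting time instead of making progress.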
This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.
Signed-off-by: Waiman Long waiman.l...@hp.com
Signed-off-by: Peter Zijlstra pet...@infradead.org
---
arch/x86/include/asm/spinlock.h |4 ++--
arch/x86/kernel/kvm.c
From: Peter Zijlstra pet...@infradead.org
When we detect a hypervisor (!paravirt, see qspinlock paravirt support
patches), revert to a simple test-and-set lock to avoid the horrors
of queue preemption.
Signed-off-by: Peter Zijlstra pet...@infradead.org
Signed-off-by: Waiman Long
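The fallback described above is just a classic test-and-set spinlock: one atomic flag, no queue, so a preempted waiter can never wedge a queue behind it. A minimal sketch in C11 atomics (illustrative, not the kernel's code, which uses its own arch primitives):

```c
#include <stdatomic.h>
#include <stdbool.h>

struct tas_lock {
	atomic_bool locked;
};

static void tas_lock_acquire(struct tas_lock *l)
{
	/* Spin until the exchange observes the lock previously free. */
	while (atomic_exchange_explicit(&l->locked, true,
					memory_order_acquire))
		;  /* the kernel would insert a cpu_relax() here */
}

static void tas_lock_release(struct tas_lock *l)
{
	atomic_store_explicit(&l->locked, false, memory_order_release);
}
```

It is unfair under contention, but that is the accepted trade-off when the hypervisor can preempt a vCPU that holds a queue slot.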
This patch adds para-virtualization support to the queue spinlock
code base with minimal impact to the native case. There are some
minor code changes in the generic qspinlock.c file which should be
usable in other architectures. The other code changes are specific
to x86 processors and so are all
This patch adds the necessary XEN specific code to allow XEN to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Signed-off-by: Waiman Long waiman.l...@hp.com
---
arch/x86/xen/spinlock.c | 149 +--
This patch adds the necessary KVM specific code to allow KVM to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Two KVM guests of 20 CPU cores (2 nodes) were created for performance
testing in one of the following three configurations:
1) Only 1 VM is active
Currently, atomic_cmpxchg() is used to get the lock. However, this
is not really necessary if there is more than one task in the queue
and the queue head doesn't need to reset the tail code. In that case,
a simple write to set the lock bit is enough, as the queue head will
be the only one eligible
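The two paths can be sketched as below, with a simplified lock-word layout (the encoding and names are illustrative, not the kernel's exact _Q_* constants):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define Q_LOCKED_VAL  1u
#define Q_TAIL_MASK   0xffff0000u

/* Queue head tries to take the lock.  'my_tail' is this CPU's encoded
 * tail value.  Returns true once the lock is held; false means a new
 * waiter raced in and the caller retries. */
static bool queue_head_take_lock(_Atomic uint32_t *lock, uint32_t my_tail)
{
	uint32_t val = atomic_load_explicit(lock, memory_order_relaxed);

	if ((val & Q_TAIL_MASK) == my_tail) {
		/* We are the last queued task: the tail code must be
		 * cleared together with setting the locked bit, so a
		 * full cmpxchg is unavoidable here. */
		uint32_t expected = my_tail;
		return atomic_compare_exchange_strong(lock, &expected,
						      Q_LOCKED_VAL);
	}

	/* Other tasks are queued behind us: the tail need not change,
	 * so setting the lock bit with a plain store suffices.  (The
	 * kernel does a byte-sized store of the locked byte; a word
	 * store is shown here for simplicity.) */
	atomic_store_explicit(lock, val | Q_LOCKED_VAL,
			      memory_order_release);
	return true;
}
```

Only the head of the queue ever reaches this code, which is why the unconditional store in the second branch is safe.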
From: Peter Zijlstra pet...@infradead.org
When we allow for a max NR_CPUS < 2^14 we can optimize the pending
wait-acquire and the xchg_tail() operations.
By growing the pending bit to a byte, we reduce the tail to 16 bits.
This means we can use xchg16 for the tail part and do away with all
the
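The resulting lock-word layout can be sketched as follows, assuming NR_CPUS < 2^14 so that 14 CPU bits plus 2 MCS-node-index bits fit in a 16-bit tail (constant names are illustrative):

```c
#include <stdint.h>

/* 32-bit lock word: 8 bits locked | 8 bits pending | 16 bits tail.
 * Because the tail is a byte-aligned halfword, swapping it can be a
 * single 16-bit xchg instead of a cmpxchg loop on the whole word. */
#define Q_TAIL_IDX_OFFSET  16
#define Q_TAIL_IDX_BITS    2
#define Q_TAIL_CPU_OFFSET  (Q_TAIL_IDX_OFFSET + Q_TAIL_IDX_BITS)  /* 18 */
#define Q_TAIL_MASK        0xffff0000u

/* Encode a CPU number and its per-CPU MCS node index into the tail. */
static uint32_t encode_tail(unsigned int cpu, unsigned int idx)
{
	/* cpu + 1 so that a tail of 0 means "no queue". */
	return ((cpu + 1) << Q_TAIL_CPU_OFFSET) |
	       (idx << Q_TAIL_IDX_OFFSET);
}
```

Since the tail never overlaps the locked or pending bytes, the three fields can be operated on independently with sub-word accesses.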
This is a preparatory patch that extracts out the following 2 code
snippets to prepare for the next performance optimization patch.
1) the logic for the exchange of new and previous tail code words
into a new xchg_tail() function.
2) the logic for clearing the pending bit and setting the
This patch makes the necessary changes at the x86 architecture-specific
layer to enable the use of queue spinlock for x86-64. As
x86-32 machines are typically not multi-socket, the benefit of queue
spinlock may not be apparent, so queue spinlock is not enabled there.
Currently, there is some
From: Peter Zijlstra pet...@infradead.org
Because the qspinlock needs to touch a second cacheline (the per-cpu
mcs_nodes[]), add a pending bit and allow a single in-word spinner
before we punt to the second cacheline.
It is possible to observe the pending bit without the locked bit when
the last
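The pending-bit fast path can be sketched as below: one contender spins in the lock word itself, and only later arrivals fall back to the MCS queue. This is a simplified model (no tail bits yet), with illustrative constant names:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define Q_LOCKED_VAL   0x001u
#define Q_PENDING_VAL  0x100u

/* One in-word spinner: the first contender claims the pending bit and
 * spins on the lock word, never touching the per-CPU MCS cacheline.
 * Returns true if the lock was taken this way; false means pending is
 * already claimed and the caller must queue. */
static bool qspinlock_pending_path(_Atomic uint32_t *lock)
{
	uint32_t old = atomic_fetch_or_explicit(lock, Q_PENDING_VAL,
						memory_order_acquire);
	if (old & Q_PENDING_VAL)
		return false;  /* an in-word spinner already waits */

	/* Note we may observe pending without locked here: the holder
	 * can release between our fetch_or and this load. */
	while (atomic_load_explicit(lock, memory_order_acquire) &
	       Q_LOCKED_VAL)
		;

	/* Take the lock and drop our pending bit in one store (safe in
	 * this simplified model because no tail bits exist). */
	atomic_store_explicit(lock, Q_LOCKED_VAL, memory_order_release);
	return true;
}
```

This is what saves the second-cacheline touch in the common lightly contended case.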
This patch introduces a new generic queue spinlock implementation that
can serve as an alternative to the default ticket spinlock. Compared
with the ticket spinlock, this queue spinlock should be almost as
fair. It has about the same speed in the single-thread case,
and it can be much
v13-v14:
- Patches 1 & 2: Add queue_spin_unlock_wait() to accommodate commit
78bff1c86 from Oleg Nesterov.
- Fix the system hang problem when using PV qspinlock in an
over-committed guest due to a race condition in the
pv_set_head_in_tail() function.
- Increase the MAYHALT_THRESHOLD
Radim Krčmář rkrc...@redhat.com wrote:
2015-01-14 01:27+, Wu, Feng:
the new
hardware doesn't even consider the TPR for lowest-priority interrupt
delivery.
A bold move ... what hardware was the first to do so?
I think it was starting with Nehalem.
Thanks, (Could be that QPI
-Original Message-
From: Wu, Feng
Sent: Friday, December 12, 2014 11:15 PM
To: t...@linutronix.de; mi...@redhat.com; h...@zytor.com; x...@kernel.org;
g...@kernel.org; pbonz...@redhat.com; dw...@infradead.org;
j...@8bytes.org; alex.william...@redhat.com; jiang@linux.intel.com
On 20/01/2015 18:54, Marcelo Tosatti wrote:
SuSE's 2.6.16 kernel fails to boot if the delta between tsc_timestamp
and rdtsc is larger than a given threshold:
* If we get more than the below threshold into the future, we rerequest
* the real time from the host again which has only