>
> Pull the check up to those functions, by making them simple
> wrappers around the user_enter and user_exit inline functions.
>
> Cc: Andy Lutomirski <l...@kernel.org>
> Cc: Frederic Weisbecker <fweis...@gmail.com>
> Cc: Rik van Riel <r...@redhat.com>
ext tracking functions are
> called by guest_enter and guest_exit.
>
> Split the body of context_tracking_entry and context_tracking_exit
> out to __-prefixed functions, and use them from KVM.
>
> Rik van Riel has measured this to speed up a tight vmentry/vmexit
> loop by about 2%.
functions are
called by guest_enter and guest_exit.
Split the body of context_tracking_entry and context_tracking_exit
out to __-prefixed functions, and use them from KVM.
Rik van Riel has measured this to speed up a tight vmentry/vmexit
loop by about 2%.
Cc: Frederic Weisbecker <fweis...@gmail.com>
On 04/28/2015 07:36 AM, Paolo Bonzini wrote:
All calls to context_tracking_enter and context_tracking_exit
are already checking context_tracking_is_enabled, except the
context_tracking_user_enter and context_tracking_user_exit
functions left in for the benefit of assembly calls.
Pull the check up to those functions, by making them simple
wrappers around the user_enter and user_exit inline functions.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
Reviewed-by: Rik van Riel r...@redhat.com
--
All rights reversed
--
To unsubscribe from this list: send the line unsubscribe kvm
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
--
All rights reversed
the x86 task switch policy, which seems to work.
Signed-off-by: Rik van Riel r...@redhat.com
---
I still hope to put the larger FPU changes in at some point, but with
all the current changes to the FPU code I am somewhat uncomfortable
causing even more churn. After 4.1 I may send in the changes to defer
On 04/23/2015 06:57 PM, Wanpeng Li wrote:
Cc Rik, who is doing the similar work. :)
Hi Liang,
I posted this patch earlier, which should have the same effect as
your patch on more modern systems, while not loading the FPU context
for guests that barely use it on older systems:
On 04/09/2015 10:13 AM, Peter Zijlstra wrote:
On Thu, Apr 09, 2015 at 09:16:24AM -0400, Rik van Riel wrote:
On 04/09/2015 03:01 AM, Peter Zijlstra wrote:
On Wed, Apr 08, 2015 at 02:32:19PM -0400, Waiman Long wrote:
For a virtual guest with the qspinlock patch, a simple unfair byte lock
On 04/09/2015 03:01 AM, Peter Zijlstra wrote:
On Wed, Apr 08, 2015 at 02:32:19PM -0400, Waiman Long wrote:
For a virtual guest with the qspinlock patch, a simple unfair byte lock
will be used if PV spinlock is not configured in or the hypervisor
isn't either KVM or Xen. The byte lock works
On 02/12/2015 12:00 PM, Frederic Weisbecker wrote:
On Thu, Feb 12, 2015 at 10:47:10AM -0500, Rik van Riel wrote:
On 02/12/2015 10:42 AM, Frederic Weisbecker wrote:
On Wed, Feb 11, 2015 at 02:43:19PM
On 02/12/2015 10:42 AM, Frederic Weisbecker wrote:
On Wed, Feb 11, 2015 at 02:43:19PM -0500, Rik van Riel wrote:
If exception_enter happens when already in IN_KERNEL state, the
code still calls context_tracking_exit, which ends up doing work
that should be skipped. Only switch to
IN_KERNEL state if the current state is not already IN_KERNEL.
Signed-off-by: Rik van Riel r...@redhat.com
Reported-by: Luiz Capitulino lcapitul...@redhat.com
---
Frederic, you will want this bonus patch, too :)
Thanks to Luiz for finding this one. Whatever I was running did not
trigger
From: Rik van Riel r...@redhat.com
With code elsewhere doing something conditional on whether or not
context tracking is enabled, we want a stub function that tells us
context tracking is not enabled, when CONFIG_CONTEXT_TRACKING is
not set.
Signed-off-by: Rik van Riel r...@redhat.com
From: Rik van Riel r...@redhat.com
The host kernel is not doing anything while the CPU is executing
a KVM guest VCPU, so it can be marked as being in an extended
quiescent state, identical to that used when running user space
code.
The only exception to that rule is when the host handles
From: Rik van Riel r...@redhat.com
Add the expected ctx_state as a parameter to context_tracking_enter and
context_tracking_exit, allowing the same functions to not just track
kernel <-> user space switching, but also kernel <-> guest transitions.
Signed-off-by: Rik van Riel r...@redhat.com
From: Rik van Riel r...@redhat.com
Only run vtime_user_enter, vtime_user_exit, and the user enter exit
trace points when we are entering or exiting user state, respectively.
The KVM code in guest_enter and guest_exit already take care of calling
vtime_guest_enter and vtime_guest_exit
When running a KVM guest on a system with NOHZ_FULL enabled, and the
KVM guest running with idle=poll mode, we still get wakeups of the
rcuos/N threads.
This problem has already been solved for user space by telling the
RCU subsystem that the CPU is in an extended quiescent state while
running
From: Rik van Riel r...@redhat.com
These wrapper functions allow architecture code (eg. ARM) to keep
calling context_tracking_user_enter and context_tracking_user_exit
the same way it always has, without error prone tricks like duplicate
defines of argument values in assembly code.
Signed-off
On 02/10/2015 10:28 AM, Frederic Weisbecker wrote:
On Tue, Feb 10, 2015 at 09:41:45AM -0500, r...@redhat.com wrote:
From: Rik van Riel r...@redhat.com
These wrapper functions allow architecture code (eg. ARM) to keep
calling context_tracking_user_enter and context_tracking_user_exit
the same
When running a KVM guest on a system with NOHZ_FULL enabled, and the
KVM guest running with idle=poll mode, we still get wakeups of the
rcuos/N threads.
This problem has already been solved for user space by telling the
RCU subsystem that the CPU is in an extended quiescent state while
running
From: Rik van Riel r...@redhat.com
With code elsewhere doing something conditional on whether or not
context tracking is enabled, we want a stub function that tells us
context tracking is not enabled, when CONFIG_CONTEXT_TRACKING is
not set.
Signed-off-by: Rik van Riel r...@redhat.com
From: Rik van Riel r...@redhat.com
Export context_tracking_user_enter/exit so it can be used by KVM.
Signed-off-by: Rik van Riel r...@redhat.com
---
kernel/context_tracking.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index
From: Rik van Riel r...@redhat.com
The host kernel is not doing anything while the CPU is executing
a KVM guest VCPU, so it can be marked as being in an extended
quiescent state, identical to that used when running user space
code.
The only exception to that rule is when the host handles
From: Rik van Riel r...@redhat.com
Split out the mechanism from context_tracking_user_enter and
context_tracking_user_exit into context_tracking_enter and
context_tracking_exit. Leave the old functions in order to avoid
breaking ARM, which calls these functions from assembler code,
and cannot
On 02/10/2015 02:59 PM, Andy Lutomirski wrote:
On 02/10/2015 06:41 AM, r...@redhat.com wrote:
From: Rik van Riel r...@redhat.com
The host kernel is not doing anything while the CPU is executing
a KVM guest VCPU, so it can be marked as being in an extended
quiescent state, identical
From: Rik van Riel r...@redhat.com
Only run vtime_user_enter, vtime_user_exit, and the user enter exit
trace points when we are entering or exiting user state, respectively.
The KVM code in guest_enter and guest_exit already take care of calling
vtime_guest_enter and vtime_guest_exit
From: Rik van Riel r...@redhat.com
Export context_tracking_user_enter/exit so it can be used by KVM.
Signed-off-by: Rik van Riel r...@redhat.com
---
kernel/context_tracking.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index
From: Rik van Riel r...@redhat.com
Only run vtime_user_enter, vtime_user_exit, and the user enter exit
trace points when we are entering or exiting user state, respectively.
The RCU code only distinguishes between idle and not idle or kernel.
There should be no need to add an additional (unused
From: Rik van Riel r...@redhat.com
Export context_tracking_user_enter/exit so it can be used by KVM.
Signed-off-by: Rik van Riel r...@redhat.com
---
kernel/context_tracking.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index
From: Rik van Riel r...@redhat.com
Rename context_tracking_user_enter and context_tracking_user_exit
to just context_tracking_enter and context_tracking_exit, since it
will be used to track guest state, too.
This also breaks ARM. The rest of the series does not look like
it impacts ARM.
Cc: will.dea
From: Rik van Riel r...@redhat.com
Add the expected ctx_state as a parameter to context_tracking_user_enter
and context_tracking_user_exit, allowing the same functions to not just
track kernel <-> user space switching, but also kernel <-> guest transitions.
Catalin, Will: this patch and the next one
From: Rik van Riel r...@redhat.com
With code elsewhere doing something conditional on whether or not
context tracking is enabled, we want a stub function that tells us
context tracking is not enabled, when CONFIG_CONTEXT_TRACKING is
not set.
Signed-off-by: Rik van Riel r...@redhat.com
On Fri, Feb 06, 2015 at 10:53:34PM -0500, Rik van Riel
wrote:
On 02/06/2015 06:15 PM, Frederic Weisbecker wrote:
Just a few things then:
1) In this case rename context_tracking_user_enter/exit()
to context_tracking_enter
From: Rik van Riel r...@redhat.com
The host kernel is not doing anything while the CPU is executing
a KVM guest VCPU, so it can be marked as being in an extended
quiescent state, identical to that used when running user space
code.
The only exception to that rule is when the host handles
On 02/09/2015 08:15 PM, Will Deacon wrote:
Hi Rik,
On Mon, Feb 09, 2015 at 04:04:38PM +0000, r...@redhat.com wrote:
Apologies to Catalin and Will for not fixing up ARM. I am not
familiar with ARM assembly, and not sure how to pass a constant
On 02/09/2015 10:01 PM, Paul E. McKenney wrote:
On Tue, Feb 10, 2015 at 02:44:17AM +0100, Frederic Weisbecker
wrote:
On Mon, Feb 09, 2015 at 08:22:59PM -0500, Rik van Riel wrote:
On 02/09/2015 08:15
When running a KVM guest on a system with NOHZ_FULL enabled, and the
KVM guest running with idle=poll mode, we still get wakeups of the
rcuos/N threads.
This problem has already been solved for user space by telling the
RCU subsystem that the CPU is in an extended quiescent state while
running
On 02/09/2015 12:05 PM, Paolo Bonzini wrote:
On 09/02/2015 17:04, r...@redhat.com wrote:
From: Rik van Riel r...@redhat.com
Export context_tracking_user_enter/exit so it can be used by
KVM.
Wrong function name in the commit message
-by: Paolo Bonzini pbonz...@redhat.com
Reviewed-by: Rik van Riel r...@redhat.com
--
All rights reversed
On 02/07/2015 03:30 AM, Frederic Weisbecker wrote:
I prefer a new series too. Now whether you or me take the patches,
I don't mind either way :-)
I'll make it, no problem.
Also I wonder how this feature is going to be enabled. Will it be
On 02/06/2015 01:23 PM, Frederic Weisbecker wrote:
On Fri, Feb 06, 2015 at 01:20:21PM -0500, Rik van Riel wrote: On
02/06/2015 12:22 PM, Frederic Weisbecker wrote:
On Thu, Feb 05, 2015 at 03:23:48PM -0500, r...@redhat.com
wrote:
From: Rik van
On 02/06/2015 06:15 PM, Frederic Weisbecker wrote:
Just a few things then:
1) In this case rename context_tracking_user_enter/exit() to
context_tracking_enter() and context_tracking_exit(), since it's
not anymore about user only but about any
On 02/06/2015 08:46 AM, Frederic Weisbecker wrote:
On Thu, Feb 05, 2015 at 03:23:47PM -0500, r...@redhat.com wrote:
When running a KVM guest on a system with NOHZ_FULL enabled
I just need to clarify the motivation first, does the above
situation
On 02/06/2015 12:22 PM, Frederic Weisbecker wrote:
On Thu, Feb 05, 2015 at 03:23:48PM -0500, r...@redhat.com wrote:
From: Rik van Riel r...@redhat.com
Add the expected ctx_state as a parameter to
context_tracking_user_enter
On 02/05/2015 01:56 PM, Paul E. McKenney wrote:
The real danger is doing neither.
On tick_nohz_full_cpu() CPUs, the exit-to-userspace code should invoke
rcu_user_enter(), which sets some per-CPU state telling RCU to ignore
that CPU, since it cannot possibly do host RCU read-side critical
On 02/05/2015 02:20 PM, Paolo Bonzini wrote:
On 05/02/2015 19:55, Jan Kiszka wrote:
This patch introduces a new module parameter for the KVM module; when it
is present, KVM attempts a bit of polling on every HLT before scheduling
itself out via kvm_vcpu_block.
Wouldn't it be better to
From: Rik van Riel r...@redhat.com
The host kernel is not doing anything while the CPU is executing
a KVM guest VCPU, so it can be marked as being in an extended
quiescent state, identical to that used when running user space
code.
The only exception to that rule is when the host handles
From: Rik van Riel r...@redhat.com
Export context_tracking_user_enter/exit so it can be used by KVM.
Signed-off-by: Rik van Riel r...@redhat.com
---
kernel/context_tracking.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index
When running a KVM guest on a system with NOHZ_FULL enabled, and the
KVM guest running with idle=poll mode, we still get wakeups of the
rcuos/N threads.
This problem has already been solved for user space by telling the
RCU subsystem that the CPU is in an extended quiescent state while
running
From: Rik van Riel r...@redhat.com
Add the expected ctx_state as a parameter to context_tracking_user_enter
and context_tracking_user_exit, allowing the same functions to not just
track kernel <-> user space switching, but also kernel <-> guest transitions.
Signed-off-by: Rik van Riel r...@redhat.com
From: Rik van Riel r...@redhat.com
Only run vtime_user_enter and vtime_user_exit when we are entering
or exiting user state, respectively.
The RCU code only distinguishes between idle and not idle or kernel.
There should be no need to add an additional (unused) state there.
Signed-off-by: Rik
On 02/05/2015 11:44 AM, Christian Borntraeger wrote:
Am 05.02.2015 um 17:35 schrieb r...@redhat.com:
From: Rik van Riel r...@redhat.com
The host kernel is not doing anything while the CPU is executing
a KVM guest VCPU, so it can be marked as being in an extended
quiescent state, identical
a smaller average latency and
a smaller variance.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
On 02/05/2015 01:56 PM, Paul E. McKenney wrote:
On Thu, Feb 05, 2015 at 01:09:19PM -0500, Rik van Riel wrote:
On 02/05/2015 12:50 PM, Paul E. McKenney wrote:
On Thu, Feb 05, 2015 at 11:52:37AM -0500, Rik van Riel wrote:
On 02/05/2015 11:44 AM, Christian Borntraeger wrote:
Am 05.02.2015 um 17
On 02/05/2015 12:50 PM, Paul E. McKenney wrote:
On Thu, Feb 05, 2015 at 11:52:37AM -0500, Rik van Riel wrote:
On 02/05/2015 11:44 AM, Christian Borntraeger wrote:
Am 05.02.2015 um 17:35 schrieb r...@redhat.com:
From: Rik van Riel r...@redhat.com
The host kernel is not doing anything while
On 02/05/2015 02:27 PM, Paul E. McKenney wrote:
On Thu, Feb 05, 2015 at 02:02:35PM -0500, Rik van Riel wrote:
Looking at context_tracking.h, I see the
function context_tracking_cpu_is_enabled().
That looks like it should do the right thing
in this case.
Right you are -- that same check
When running a KVM guest on a system with NOHZ_FULL enabled, and the
KVM guest running with idle=poll mode, we still get wakeups of the
rcuos/N threads.
This problem has already been solved for user space by telling the
RCU subsystem that the CPU is in an extended quiescent state while
running
From: Rik van Riel r...@redhat.com
With code elsewhere doing something conditional on whether or not
context tracking is enabled, we want a stub function that tells us
context tracking is not enabled, when CONFIG_CONTEXT_TRACKING is
not set.
Signed-off-by: Rik van Riel r...@redhat.com
From: Rik van Riel r...@redhat.com
Add the expected ctx_state as a parameter to context_tracking_user_enter
and context_tracking_user_exit, allowing the same functions to not just
track kernel <-> user space switching, but also kernel <-> guest transitions.
Signed-off-by: Rik van Riel r...@redhat.com
From: Rik van Riel r...@redhat.com
The host kernel is not doing anything while the CPU is executing
a KVM guest VCPU, so it can be marked as being in an extended
quiescent state, identical to that used when running user space
code.
The only exception to that rule is when the host handles
From: Rik van Riel r...@redhat.com
Only run vtime_user_enter and vtime_user_exit when we are entering
or exiting user state, respectively.
The RCU code only distinguishes between idle and not idle or kernel.
There should be no need to add an additional (unused) state there.
Signed-off-by: Rik
From: Rik van Riel r...@redhat.com
Export context_tracking_user_enter/exit so it can be used by KVM.
Signed-off-by: Rik van Riel r...@redhat.com
---
kernel/context_tracking.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index
latency in the
LAPIC path for a KVM guest.
This patch reduces the average latency in my tests from 14us to 11us.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
On 01/14/2015 12:12 PM, Marcelo Tosatti wrote:
Since lapic timer handler only wakes up a simple waitqueue,
it can be executed from hardirq context.
Reduces average cyclictest latency by 3us.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
On 12/11/2014 04:07 PM, Andy Lutomirski wrote:
On Thu, Dec 11, 2014 at 12:58 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
On Thu, Dec 11, 2014 at 12:48:36PM -0800, Andy Lutomirski wrote:
On 12/10/2014 07:07 PM, Marcelo Tosatti wrote:
On Thu, Dec 11, 2014 at 12:37:57AM +0100, Paolo Bonzini
On 12/10/2014 12:06 PM, Marcelo Tosatti wrote:
For the hrtimer which emulates the tscdeadline timer in the guest,
add an option to advance expiration, and busy spin on VM-entry waiting
for the actual expiration time to elapse.
This allows achieving low latencies in cyclictest (or any
On 12/10/2014 12:06 PM, Marcelo Tosatti wrote:
kvm_x86_ops->test_posted_interrupt() returns true/false depending
whether 'vector' is set.
Is that good? Bad? How does this patch address the issue?
On 12/10/2014 12:27 PM, Marcelo Tosatti wrote:
On Wed, Dec 10, 2014 at 12:10:04PM -0500, Rik van Riel wrote:
On 12/10/2014 12:06 PM, Marcelo Tosatti wrote:
kvm_x86_ops->test_posted_interrupt() returns true/false depending
whether 'vector' is set.
Is that good? Bad? How does this patch address
On 12/05/2014 03:17 AM, Thomas Lau wrote:
virsh cpu-models x86_64 | sort
...
Penryn SandyBridge Westmere
...
interesting that it doesn't have Ivy Bridge, any reason?
The reason would be that you are running an older version.
--
All rights
On 11/25/2014 12:21 PM, Marcelo Tosatti wrote:
The problem:
On -RT, an emulated LAPIC timer instance has the following path:
1) hard interrupt
2) ksoftirqd is scheduled
3) ksoftirqd wakes up vcpu thread
4) vcpu thread is scheduled
This extra context switch introduces unnecessary
On 11/25/2014 01:57 PM, Rik van Riel wrote:
On 11/25/2014 12:21 PM, Marcelo Tosatti wrote:
The problem:
On -RT, an emulated LAPIC timer instance has the following path:
1) hard interrupt
2) ksoftirqd is scheduled
3) ksoftirqd wakes up vcpu thread
4) vcpu thread is scheduled
This extra
mtosa...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
notifier, so a lot of code has been trivially
touched.
3. In the absence of shadow_accessed_mask (e.g. EPT A bit), we
emulate the access bit by blowing the spte. This requires proper
synchronizing with MMU notifier consumers, like every other removal
of spte's does.
Acked-by: Rik van Riel r
On 06/21/2014 04:55 AM, Xuekun Hu wrote:
Hi, Rik
I saw your presentation at last year 2013 kvm submit.
(http://www.linux-kvm.org/wiki/images/2/27/Kvm-forum-2013-idle-latency.pdf).
You said there will have some patches later, but I didn't find
On 05/28/2014 08:16 AM, Raghavendra K T wrote:
This patch looks very promising.
TODO:
- we need an intelligent way to nullify the effect of batching for baremetal
(because extra cmpxchg is not required).
On (larger?) NUMA systems, the unfairness may be a nice performance
benefit, reducing
On 05/28/2014 06:19 PM, Linus Torvalds wrote:
If somebody has a P4 still, that's likely the worst case by far.
I'm sure cmpxchg isn't the only thing making P4 the worst case :)
--
All rights reversed
On 04/22/2014 07:57 AM, Christian Borntraeger wrote:
On 22/04/14 12:55, Christian Borntraeger wrote:
While preparing/testing some KVM on s390 patches for the next merge window
(target is kvm/next which is based on 3.15-rc1) I faced a very severe
performance hickup on guest paging (all
From: Rik van Riel r...@redhat.com
Normally task_numa_work scans over a fairly small amount of memory,
but it is possible to run into a large unpopulated part of virtual
memory, with no pages mapped. In that case, task_numa_work can run
for a while, and it may make sense to reschedule as required
The NUMA scanning code can end up iterating over many gigabytes
of unpopulated memory, especially in the case of a freshly started
KVM guest with lots of memory.
This results in the mmu notifier code being called even when
there are no mapped pages in a virtual address range. The amount
of time
From: Rik van Riel r...@redhat.com
The NUMA scanning code can end up iterating over many gigabytes
of unpopulated memory, especially in the case of a freshly started
KVM guest with lots of memory.
This results in the mmu notifier code being called even when
there are no mapped pages in a virtual
From: Rik van Riel r...@redhat.com
Reorganize the order of ifs in change_pmd_range a little, in
preparation for the next patch.
Signed-off-by: Rik van Riel r...@redhat.com
Cc: Peter Zijlstra pet...@infradead.org
Cc: Andrea Arcangeli aarca...@redhat.com
Reported-by: Xing Gang gang.x...@hp.com
On 02/18/2014 09:24 PM, David Rientjes wrote:
On Tue, 18 Feb 2014, r...@redhat.com wrote:
From: Rik van Riel r...@redhat.com
The NUMA scanning code can end up iterating over many gigabytes
of unpopulated memory, especially in the case of a freshly started
KVM guest with lots of memory
with.
Signed-off-by: Rik van Riel r...@redhat.com
Acked-by: David Rientjes rient...@google.com
---
mm/hugetlb.c | 2 ++
mm/mprotect.c | 15 ++++++++++++---
2 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c0d930f..f0c5dfb 100644
--- a/mm/hugetlb.c
+++ b
in
11feeb498086a3a5907b8148bdf1786a9b18fc55. The cacheline was already
modified in order to set PG_tail so this won't affect the boot time of
large memory systems.
Reported-by: andy123 ajs124.ajs...@gmail.com
Signed-off-by: Andrea Arcangeli aarca...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
On 05/13/2013 11:16 AM, Michael S. Tsirkin wrote:
However, there's a big question mark: host specifies
inflate, guest says deflate, who wins?
If we're dealing with a NUMA guest, they could both win :)
The host could see reduced memory use of the guest in one
place, while the guest could see
On 05/13/2013 11:35 AM, Michael S. Tsirkin wrote:
On Mon, May 13, 2013 at 11:22:58AM -0400, Rik van Riel wrote:
I believe the Google patches still included some way for the
host to initiate balloon inflation on the guest side, because
the guest internal state alone is not enough to see when
On 05/12/2013 10:30 AM, Michael S. Tsirkin wrote:
On Thu, May 09, 2013 at 10:53:49AM -0400, Luiz Capitulino wrote:
Automatic ballooning consists of dynamically adjusting the guest's
balloon according to memory pressure in the host and in the guest.
This commit implements the guest side of
On 04/22/2013 07:51 AM, Peter Zijlstra wrote:
On Sun, 2013-04-21 at 17:12 -0400, Rik van Riel wrote:
If we always incremented the ticket number by 2 (instead of 1), then
we could use the lower bit of the ticket number as the spinlock.
ISTR that paravirt ticket locks already do that and use
On 04/22/2013 03:49 PM, Peter Zijlstra wrote:
On Mon, 2013-04-22 at 08:52 -0400, Rik van Riel wrote:
If the native spin_lock code has been called already at
that time, the native code would still need to be modified
to increment the ticket number by 2, so we end up with a
compatible value
On 04/22/2013 04:08 PM, Peter Zijlstra wrote:
On Mon, 2013-04-22 at 15:56 -0400, Rik van Riel wrote:
On 04/22/2013 03:49 PM, Peter Zijlstra wrote:
On Mon, 2013-04-22 at 08:52 -0400, Rik van Riel wrote:
If the native spin_lock code has been called already at
that time, the native code would
On 04/22/2013 04:46 PM, Jiannan Ouyang wrote:
It would still be very interesting to conduct more experiments to
compare these two, to see if the fairness enforced by pv_lock is
mandatory, and if preemptable-lock outperforms pv_lock in most cases,
and how do they work with PLE.
Given the
On 04/22/2013 04:48 PM, Peter Zijlstra wrote:
Hmm.. it looked like under light overcommit the paravirt ticket lock
still had some gain (~10%) and of course it brings the fairness thing
which is always good.
I can only imagine the mess unfair + vcpu preemption can bring to guest
tasks.
If you
On 04/22/2013 04:55 PM, Peter Zijlstra wrote:
On Mon, 2013-04-22 at 16:46 -0400, Jiannan Ouyang wrote:
- pv-preemptable-lock has much less performance variance compare to
pv_lock, because it adapts to preemption within VM,
other than using rescheduling that increase VM interference
I
On 04/22/2013 05:56 PM, Andi Kleen wrote:
Rik van Riel r...@redhat.com writes:
If we always incremented the ticket number by 2 (instead of 1), then
we could use the lower bit of the ticket number as the spinlock.
Spinning on a single bit is very inefficient, as you need to do
try lock
On 04/20/2013 06:12 PM, Jiannan Ouyang wrote:
Hello Everyone,
I recently came up with a spinlock algorithm that can adapt to
preemption, which you may be interested in. The intuition is to
downgrade a fair lock to an unfair lock automatically upon preemption,
and preserve the fairness
On 02/05/2013 04:49 PM, Michael Wolf wrote:
Expand the steal time msr to also contain the consigned time.
Signed-off-by: Michael Wolf m...@linux.vnet.ibm.com
---
arch/x86/include/asm/paravirt.h |4 ++--
arch/x86/include/asm/paravirt_types.h |2 +-
arch/x86/kernel/kvm.c
On 02/05/2013 04:49 PM, Michael Wolf wrote:
Change the paravirt calls that retrieve the steal-time information
from the host. Add to it getting the consigned value as well as
the steal time.
Signed-off-by: Michael Wolf m...@linux.vnet.ibm.com
diff --git
On 01/14/2013 01:24 PM, Andrew Clayton wrote:
On Mon, 14 Jan 2013 15:27:36 +0200, Gleb Natapov wrote:
On Sun, Jan 13, 2013 at 10:29:58PM +, Andrew Clayton wrote:
When running qemu-kvm under 64bit Fedora 16 under current 3.8, it
just hangs at start up. Doing a ps -ef hangs the process at
On 10/16/2012 10:23 PM, Michael Wolf wrote:
In the case of where you have a system that is running in a
capped or overcommitted environment the user may see steal time
being reported in accounting tools such as top or vmstat. This can
cause confusion for the end user.
How do s390 and Power