...@mgebm.net
Acked-by: Rik van Riel r...@redhat.com
--
All rights reversed
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
CC: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
CC: Peter Zijlstra pet...@infradead.org
CC: Avi Kivity a...@redhat.com
CC: Anthony Liguori aligu...@us.ibm.com
CC: Eric B Munson emun...@mgebm.net
Acked-by: Rik van Riel r...@redhat.com
Costa glom...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
On 06/22/2011 07:19 AM, Izik Eidus wrote:
So what we say here is: it is better to have a little junk in the unstable
tree that gets flushed eventually anyway, instead of making the guest
slower.
This race is something that does not reflect ksm accurately anyway, due
to the full memcmp that we will
On 06/22/2011 07:13 PM, Nai Xia wrote:
On Wed, Jun 22, 2011 at 11:39 PM, Rik van Rielr...@redhat.com wrote:
On 06/22/2011 07:19 AM, Izik Eidus wrote:
So what we say here is: it is better to have a little junk in the unstable
tree that gets flushed eventually anyway, instead of making the guest
On 06/22/2011 07:37 PM, Nai Xia wrote:
On 2MB pages, I'd like to remind you and Rik that ksmd currently splits
huge pages before their subpages really get merged into the stable tree.
Your proposal appears to add a condition that causes ksmd to skip
doing that, which can cause the system to start
On 02/15/2011 10:17 AM, Avi Kivity wrote:
Ah, so we're all set. Do you know if any user tools process this
information?
Top and vmstat have been displaying steal time for
maybe 4 or 5 years now.
On 02/01/2011 05:53 AM, Peter Zijlstra wrote:
On Mon, 2011-01-31 at 16:40 -0500, Rik van Riel wrote:
v8:
- some more changes and cleanups suggested by Peter
Did you, by accident, send out the -v7 patches again? I don't think I've
spotted a difference..
Arghhh. Yeah, I did :(
With CONFIG_FAIR_GROUP_SCHED, each task_group has its own cfs_rq.
Yielding to a task from another cfs_rq may be worthwhile, since
a process calling yield typically cannot use the CPU right now.
Therefore, we want to check the per-cpu nr_running, not the
cgroup-local one.
Signed-off-by: Rik van
just
ignoring the hint.
Signed-off-by: Rik van Riel r...@redhat.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Signed-off-by: Mike Galbraith efa...@gmx.de
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2c79e92..6c43fc4 100644
--- a/include/linux/sched.h
+++ b/include
Export the symbols required for a race-free kvm_vcpu_on_spin.
Signed-off-by: Rik van Riel r...@redhat.com
diff --git a/kernel/fork.c b/kernel/fork.c
index 3b159c5..adc8f47 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -191,6 +191,7 @@ void __put_task_struct(struct task_struct *tsk
When running SMP virtual machines, it is possible for one VCPU to be
spinning on a spinlock, while the VCPU that holds the spinlock is not
currently running, because the host scheduler preempted it to run
something else.
Both Intel and AMD CPUs have a feature that detects when a virtual
CPU is
-by: Rik van Riel r...@redhat.com
---
kernel/sched_fair.c | 30 +++++++++++++++++++++++-------
1 files changed, 23 insertions(+), 7 deletions(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index f4ee445..0321473 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -784,19 +784,35
.
Signed-off-by: Rik van Riel r...@redhat.com
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index a055742..9d56ed5 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -81,6 +81,7 @@ struct kvm_vcpu {
#endif
int vcpu_id;
struct mutex mutex
Instead of sleeping in kvm_vcpu_on_spin, which can cause gigantic
slowdowns of certain workloads, we instead use yield_to to get
another VCPU in the same KVM guest to run sooner.
This seems to give a 10-15% speedup in certain workloads.
Signed-off-by: Rik van Riel r...@redhat.com
Signed-off
to the right level.
Signed-off-by: Rik van Riel r...@redhat.com
diff --git a/kernel/sched.c b/kernel/sched.c
index dc91a4d..7ff53e2 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -327,7 +327,7 @@ struct cfs_rq {
* 'curr' points to currently running entity on this cfs_rq
On 01/31/2011 06:47 AM, Peter Zijlstra wrote:
On Wed, 2011-01-26 at 17:21 -0500, Rik van Riel wrote:
+static struct sched_entity *__pick_second_entity(struct cfs_rq *cfs_rq)
+{
+ struct rb_node *left = cfs_rq->rb_leftmost;
+ struct rb_node *second;
+
+ if (!left
On 01/31/2011 06:49 AM, Peter Zijlstra wrote:
On Wed, 2011-01-26 at 17:21 -0500, Rik van Riel wrote:
+ if (yielded)
+ yield();
+
+ return yielded;
+}
+EXPORT_SYMBOL_GPL(yield_to);
yield() will again acquire rq->lock.. why not simply have
->yield_to_task() do
but not the hypervisor, or the other way around.
Signed-off-by: Glauber Costa glom...@redhat.com
CC: Rik van Riel r...@redhat.com
CC: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
CC: Peter Zijlstra pet...@infradead.org
CC: Avi Kivity a...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
Fitzhardinge jeremy.fitzhardi...@citrix.com
CC: Peter Zijlstra pet...@infradead.org
CC: Avi Kivity a...@redhat.com
Not the traditional way of doing steal time, but a lot cleaner
than the legacy code that's left over from when each clocksource
had its own interrupt function.
I like it.
Acked-by: Rik van Riel r
On 01/26/2011 08:01 AM, Avi Kivity wrote:
Suggest moving the code to vcpu_load(), where it can execute under the
protection of vcpu->mutex.
I've made the suggested changes by you and Peter, and
will re-post the patch series in a bit...
On 01/24/2011 12:57 PM, Peter Zijlstra wrote:
On Thu, 2011-01-20 at 16:33 -0500, Rik van Riel wrote:
The clear_buddies function does not seem to play well with the concept
of hierarchical runqueues. In the following tree, task groups are
represented by 'G', tasks by 'T', next by 'n' and last
On 01/24/2011 01:04 PM, Peter Zijlstra wrote:
diff --git a/kernel/sched.c b/kernel/sched.c
index dc91a4d..e4e57ff 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -327,7 +327,7 @@ struct cfs_rq {
* 'curr' points to currently running entity on this cfs_rq.
* It is set to
On 01/24/2011 01:12 PM, Peter Zijlstra wrote:
On Thu, 2011-01-20 at 16:34 -0500, Rik van Riel wrote:
From: Mike Galbraith efa...@gmx.de
Currently only implemented for fair class tasks.
Add a yield_to_task() method to the fair scheduling class, allowing the
caller of yield_to() to accelerate
...@redhat.com
Reviewed-by: Rik van Riel r...@redhat.com
On 01/24/2011 01:06 PM, Glauber Costa wrote:
Register steal time within KVM. Every time we sample the steal time
information, we update a local variable that records the last value
read. We then account the difference.
Signed-off-by: Glauber Costa glom...@redhat.com
CC: Rik van
On 01/24/2011 08:25 PM, Glauber Costa wrote:
On Mon, 2011-01-24 at 18:31 -0500, Rik van Riel wrote:
On 01/24/2011 01:06 PM, Glauber Costa wrote:
Register steal time within KVM. Every time we sample the steal time
information, we update a local variable that records the last
value read. We
On 01/22/2011 01:14 AM, Srivatsa Vaddagiri wrote:
Also it may be possible for the pv-ticketlocks to track owning vcpu and make use
of a yield-to interface as further optimization to avoid the
others-get-more-time problem, but Peterz rightly pointed out that PI would be
a better solution there than
On 01/21/2011 09:02 AM, Srivatsa Vaddagiri wrote:
On Thu, Jan 20, 2011 at 09:56:27AM -0800, Jeremy Fitzhardinge wrote:
The key here is not to
sleep when waiting for locks (as implemented by current patch-series, which can
put other VMs at an advantage by giving them more time than they are
Instead of sleeping in kvm_vcpu_on_spin, which can cause gigantic
slowdowns of certain workloads, we instead use yield_to to get
another VCPU in the same KVM guest to run sooner.
This seems to give a 10-15% speedup in certain workloads, versus
not having PLE at all.
Signed-off-by: Rik van Riel r
Fairness is enforced by pick_next_entity, so we can drop some
superfluous tests from yield_to.
Signed-off-by: Rik van Riel r...@redhat.com
---
kernel/sched.c | 8 --------
1 files changed, 0 insertions(+), 8 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 1f38ed2..398eedf
Instead of sleeping in kvm_vcpu_on_spin, which can cause gigantic
slowdowns of certain workloads, we instead use yield_to to hand
the rest of our timeslice to another vcpu in the same KVM guest.
Signed-off-by: Rik van Riel r...@redhat.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
diff
On 01/14/2011 03:02 AM, Rik van Riel wrote:
Benchmark results:
Two 4-CPU KVM guests are pinned to the same 4 physical CPUs.
I just discovered that I had in fact pinned the 4-CPU KVM
guests to 4 HT threads across 2 cores, and the scheduler
has all kinds of special magic for dealing with HT
On 01/14/2011 12:47 PM, Srivatsa Vaddagiri wrote:
If I recall correctly, one of the motivations for yield_to_task (rather than
a simple yield) was to avoid leaking bandwidth to other guests i.e we don't want
the remaining timeslice of spinning vcpu to be given away to other guests but
rather
On 01/14/2011 03:02 AM, Rik van Riel wrote:
Benchmark results:
Two 4-CPU KVM guests are pinned to the same 4 physical CPUs.
Unfortunately, it turned out I was running my benchmark on
only two CPU cores, using two HT threads of each core.
I have re-run the benchmark with the guests bound
On 01/13/2011 08:16 AM, Avi Kivity wrote:
+ for (pass = 0; pass < 2 && !yielded; pass++) {
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ struct task_struct *task = vcpu->task;
+ if (!pass && i <= last_boosted_vcpu) {
+ i = last_boosted_vcpu;
+ continue;
+ } else if (pass && i > last_boosted_vcpu)
+ break;
+ if (vcpu ==
On 01/11/2011 04:25 AM, Avi Kivity wrote:
On 01/10/2011 09:31 PM, Linus Torvalds wrote:
Why wasn't I notified
before-hand? Was Andrew cc'd?
Andrew and linux-mm were copied. Rik was the only one who reviewed (and
ack'ed) it. I guess I should have explicitly asked for Nick's review.
Last
On 01/07/2011 12:29 AM, Mike Galbraith wrote:
+#ifdef CONFIG_SMP
+ /*
+* If this yield is important enough to want to preempt instead
+* of only dropping a ->next hint, we're alone, and the target
+* is not alone, pull the target to this cpu.
+*
+*
On 01/12/2011 10:26 PM, Mike Galbraith wrote:
On Wed, 2011-01-12 at 22:02 -0500, Rik van Riel wrote:
Cgroups only makes the matter worse - libvirt places
each KVM guest into its own cgroup, so a VCPU will
generally always be alone on its own per-cgroup, per-cpu
runqueue! That can lead
36616
Increase the ple_gap to 128 to be on the safe side. Is this enough
for a CPU with HT that has a busy sibling thread, or should it be
even larger? On the X5670, loading up the sibling thread with an
infinite loop does not seem to increase the required ple_gap.
Signed-off-by: Rik van Riel r
.
Signed-off-by: Rik van Riel r...@redhat.com
---
- move vcpu->task manipulation as suggested by Chris Wright
include/linux/kvm_host.h | 1 +
virt/kvm/kvm_main.c | 2 ++
2 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index
to encourage the
target being selected. We can rely on pick_next_entity to keep things
fair, so no one can accelerate a thread that has already used its fair
share of CPU time.
Signed-off-by: Rik van Riel r...@redhat.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Signed-off-by: Mike Galbraith
On 01/03/2011 10:21 PM, Zhai, Edwin wrote:
Riel,
Thanks for your patch. I have changed the ple_gap to 128 on xen side,
but forgot the patch for KVM :(
A little bit big is no harm, but more perf data is better.
So should I resend the patch with the ple_gap default
changed to 128, or are you
On 01/04/2011 11:41 AM, Hillf Danton wrote:
/* !curr->sched_class->yield_to_task ||*/
+ curr->sched_class != p->sched_class) {
+ goto out;
+ }
+
/*
* ask scheduler to compute the next for successfully
On 01/04/2011 11:51 AM, Hillf Danton wrote:
Wouldn't that break for FIFO and RR tasks?
There's a reason all the scheduler folks wanted a
per-class yield_to_task function :)
Where is the yield_to callback in the patch for RT schedule class?
If @p is RT, what could you do?
If the user
On 01/04/2011 12:08 PM, Peter Zijlstra wrote:
On Wed, 2011-01-05 at 00:51 +0800, Hillf Danton wrote:
Where is the yield_to callback in the patch for RT schedule class?
If @p is RT, what could you do?
RT guests are a pipe dream, you first need to get the hypervisor (kvm in
this case) to be RT,
36616
Increase the ple_gap to 64 to be on the safe side.
Is this enough for a CPU with HT that has a busy sibling thread, or
should it be even larger? On the X5670, loading up the sibling thread
with an infinite loop does not seem to increase the required ple_gap.
Signed-off-by: Rik van Riel r
-off-by: Rik van Riel r...@redhat.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Not-signed-off-by: Mike Galbraith efa...@gmx.de
---
Mike, want to change the above into a Signed-off-by: ? :)
This code seems to work well.
diff --git a/include/linux/sched.h b/include/linux/sched.h
index
On 12/28/2010 12:54 AM, Mike Galbraith wrote:
On Mon, 2010-12-20 at 17:04 +0100, Mike Galbraith wrote:
On Mon, 2010-12-20 at 10:40 -0500, Rik van Riel wrote:
On 12/17/2010 02:15 AM, Mike Galbraith wrote:
BTW, with this vruntime donation thingy, what prevents a task from
forking off
On 12/17/2010 02:15 AM, Mike Galbraith wrote:
BTW, with this vruntime donation thingy, what prevents a task from
forking off accomplices who do nothing but wait for a wakeup and
yield_to(exploit)?
Even swapping vruntimes in the same cfs_rq is dangerous as hell, because
one party is going
On 12/14/2010 07:22 AM, Peter Zijlstra wrote:
... fixed all the obvious stuff. No idea what the hell I was
thinking while doing that cleanup - probably too busy looking
at the tests that I was running on a previous codebase :(
For the next version of the patches, I have switched to your
On 12/14/2010 01:08 AM, Mike Galbraith wrote:
On Mon, 2010-12-13 at 22:46 -0500, Rik van Riel wrote:
diff --git a/kernel/sched.c b/kernel/sched.c
index dc91a4d..6399641 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5166,6 +5166,46 @@ SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid
On 12/11/2010 08:57 AM, Balbir Singh wrote:
If the vcpu holding the lock runs more and is capped, the timeslice
transfer is a heuristic that will not help.
That indicates you really need the cap to be per guest, and
not per VCPU.
Having one VCPU spin on a lock (and achieve nothing), because
the