I have been tracking down what I thought was a KVM-related network
issue for a while; however, it now appears it could be a hardware issue.
The symptom is that data in network packets gets corrupted before
the checksum is calculated. This means the remote host can get
corrupted data, with no way to
On 11/01/2009 06:56 AM, Gleb Natapov wrote:
Add a hypercall that allows guest and host to set up per-cpu shared
memory.
While it is pretty obvious that we should implement
asynchronous page faults for KVM, so that a swap-in
of a page the host swapped out does not stall the
entire virtual CPU, I
On 11/02/2009 04:22 AM, Ingo Molnar wrote:
* Gleb Natapov g...@redhat.com wrote:
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index f4cee90..14707dc 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -952,6 +952,9 @@ do_page_fault(struct pt_regs *regs, unsigned long
On 11/01/2009 06:56 AM, Gleb Natapov wrote:
This patch adds a get_user_pages() variant that only succeeds if getting
a reference to a page doesn't require a major fault.
Signed-off-by: Gleb Natapov g...@redhat.com
Reviewed-by: Rik van Riel r...@redhat.com
--
All rights reversed.
--
To unsubscribe
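The "no major fault" lookup described above can be sketched in plain C with a mock resident-page table; all names below are illustrative mock-ups, not the actual kernel API:

```c
#include <stddef.h>

/* Mock of a fast-only page lookup: return the page if it is resident,
 * and NULL instead of blocking on I/O when it has been swapped out. */
struct page { int resident; char data[64]; };

static struct page *get_page_nowait(struct page *pages, size_t n, size_t idx)
{
    if (idx >= n || !pages[idx].resident)
        return NULL;   /* caller falls back to the slow, sleeping path */
    return &pages[idx];
}
```

The point is the calling convention: failure here is cheap and immediate, so the caller (e.g. an async page fault path) can schedule the slow path without stalling.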
On 11/02/2009 02:33 PM, Avi Kivity wrote:
On 11/02/2009 09:03 PM, Rik van Riel wrote:
This patch is not acceptable unless it's done cleaner. Currently we
already have 3 callbacks in do_page_fault() (kmemcheck, mmiotrace,
notifier), and this adds a fourth one.
There's another alternative
On 12/27/2009 11:03 AM, Avi Kivity wrote:
On 12/27/2009 05:51 PM, Daniel Bareiro wrote:
Hi, all!
I installed qemu-kvm-0.12.1.1 on one of my machines at home yesterday to
test it with Linux 2.6.32 that I compiled myself from the
kernel.org sources.
Since last night I have been
On 12/27/2009 11:38 AM, Avi Kivity wrote:
On 12/27/2009 06:32 PM, Rik van Riel wrote:
Probably a regression in Linux swapping. Rik, Hugh, are you aware of
any? Hugh posted something but it appears to be performance related, not
causing early swap.
Yes, it is a small bug in the VM.
A fix has
On 12/27/2009 12:12 PM, Avi Kivity wrote:
On 12/27/2009 06:45 PM, Rik van Riel wrote:
If so, it doesn't copy sta...@kernel.org. Is it queued for -stable?
I do not believe that it is queued for -stable.
Do performance fixes fit with -stable policy?
If it is a serious regression, I believe
With CONFIG_FAIR_GROUP_SCHED, each task_group has its own cfs_rq.
Yielding to a task from another cfs_rq may be worthwhile, since
a process calling yield typically cannot use the CPU right now.
Therefore, we want to check the per-cpu nr_running, not the
cgroup local one.
Signed-off-by: Rik van
-by: Rik van Riel r...@redhat.com
---
kernel/sched_fair.c | 30 +++---
1 files changed, 23 insertions(+), 7 deletions(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index f4ee445..0321473 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -784,19 +784,35
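The check the patch description argues for can be illustrated with a small mock in plain C; the structure and function names below mirror the kernel's but are illustrative only, not kernel code:

```c
/* With CONFIG_FAIR_GROUP_SCHED each task_group has its own cfs_rq, so the
 * cgroup-local nr_running can be 1 even when other tasks on this CPU could
 * profit from a yield. Check the per-cpu count instead. */
struct cfs_rq { int nr_running; };
struct rq     { int nr_running; struct cfs_rq cfs; };

static int yield_is_worthwhile(const struct rq *rq)
{
    /* per-cpu count, not the cgroup-local one */
    return rq->nr_running > 1;
}
```

With the cgroup-local count, a lone task in its own cgroup would wrongly conclude there is nobody to yield to even though another cgroup's task is runnable on the same CPU.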
to the right level.
Signed-off-by: Rik van Riel r...@redhat.com
diff --git a/kernel/sched.c b/kernel/sched.c
index dc91a4d..e4e57ff 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -327,7 +327,7 @@ struct cfs_rq {
* 'curr' points to currently running entity on this cfs_rq
just
ignoring the hint.
Signed-off-by: Rik van Riel r...@redhat.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Signed-off-by: Mike Galbraith efa...@gmx.de
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2c79e92..6c43fc4 100644
--- a/include/linux/sched.h
+++ b/include
Instead of sleeping in kvm_vcpu_on_spin, which can cause gigantic
slowdowns of certain workloads, we instead use yield_to to get
another VCPU in the same KVM guest to run sooner.
This seems to give a 10-15% speedup in certain workloads, versus
not having PLE at all.
Signed-off-by: Rik van Riel r
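The directed-yield idea can be sketched as a round-robin candidate search; this is an illustrative mock-up of the selection logic, not the actual kvm_vcpu_on_spin implementation:

```c
/* Instead of sleeping, pick another runnable VCPU of the same guest to
 * yield_to(). Round-robin, starting after the spinning VCPU. */
#define NR_VCPUS 4

struct vcpu { int runnable; };

static int pick_yield_target(const struct vcpu *v, int me)
{
    for (int i = 1; i < NR_VCPUS; i++) {
        int c = (me + i) % NR_VCPUS;
        if (v[c].runnable)
            return c;          /* candidate for yield_to() */
    }
    return -1;                 /* nobody to help; keep spinning */
}
```

Skipping ourselves and scanning the whole guest means the yield is directed at a VCPU that might actually hold the contended lock, rather than donating time to an unrelated host task.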
.
Signed-off-by: Rik van Riel r...@redhat.com
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index a055742..9d56ed5 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -81,6 +81,7 @@ struct kvm_vcpu {
#endif
int vcpu_id;
struct mutex mutex
When running SMP virtual machines, it is possible for one VCPU to be
spinning on a spinlock, while the VCPU that holds the spinlock is not
currently running, because the host scheduler preempted it to run
something else.
Both Intel and AMD CPUs have a feature that detects when a virtual
CPU is
Export the symbols required for a race-free kvm_vcpu_on_spin.
Signed-off-by: Rik van Riel r...@redhat.com
diff --git a/kernel/fork.c b/kernel/fork.c
index 3b159c5..adc8f47 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -191,6 +191,7 @@ void __put_task_struct(struct task_struct *tsk
Fairness is enforced by pick_next_entity, so we can drop some
superfluous tests from yield_to.
Signed-off-by: Rik van Riel r...@redhat.com
---
kernel/sched.c |8
1 files changed, 0 insertions(+), 8 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 1f38ed2..398eedf
On 01/21/2011 09:02 AM, Srivatsa Vaddagiri wrote:
On Thu, Jan 20, 2011 at 09:56:27AM -0800, Jeremy Fitzhardinge wrote:
The key here is not to
sleep when waiting for locks (as implemented by current patch-series, which can
put other VMs at an advantage by giving them more time than they are
On 01/22/2011 01:14 AM, Srivatsa Vaddagiri wrote:
Also it may be possible for the pv-ticketlocks to track owning vcpu and make use
of a yield-to interface as further optimization to avoid the
others-get-more-time problem, but PeterZ rightly pointed out that PI would be a
better solution there than
On 01/24/2011 12:57 PM, Peter Zijlstra wrote:
On Thu, 2011-01-20 at 16:33 -0500, Rik van Riel wrote:
The clear_buddies function does not seem to play well with the concept
of hierarchical runqueues. In the following tree, task groups are
represented by 'G', tasks by 'T', next by 'n' and last
On 01/24/2011 01:04 PM, Peter Zijlstra wrote:
diff --git a/kernel/sched.c b/kernel/sched.c
index dc91a4d..e4e57ff 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -327,7 +327,7 @@ struct cfs_rq {
* 'curr' points to currently running entity on this cfs_rq.
* It is set to
On 01/24/2011 01:12 PM, Peter Zijlstra wrote:
On Thu, 2011-01-20 at 16:34 -0500, Rik van Riel wrote:
From: Mike Galbraith efa...@gmx.de
Currently only implemented for fair class tasks.
Add a yield_to_task() method to the fair scheduling class, allowing the
caller of yield_to() to accelerate
...@citrix.com
CC: Peter Zijlstra pet...@infradead.org
CC: Avi Kivity a...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo
Fitzhardinge jeremy.fitzhardi...@citrix.com
CC: Peter Zijlstra pet...@infradead.org
CC: Avi Kivity a...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
...@redhat.com
Reviewed-by: Rik van Riel r...@redhat.com
...@redhat.com
CC: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
CC: Peter Zijlstra pet...@infradead.org
CC: Avi Kivity a...@redhat.com
Reviewed-by: Rik van Riel r...@redhat.com
On 01/24/2011 01:06 PM, Glauber Costa wrote:
Register steal time within KVM. Every time we sample the steal time
information, we update a local variable that records the last value
read. We then account the difference.
Signed-off-by: Glauber Costa glom...@redhat.com
CC: Rik van
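The delta-based accounting described in that patch can be sketched as follows; the struct and function names are illustrative, not the KVM code:

```c
/* Remember the last steal value read from the host and account only the
 * difference on each sample. */
struct steal_clock {
    unsigned long long last_steal;   /* last value read */
    unsigned long long accounted;    /* total steal time accounted */
};

static void account_steal_sample(struct steal_clock *s,
                                 unsigned long long now_steal)
{
    s->accounted += now_steal - s->last_steal;
    s->last_steal = now_steal;
}
```

Keeping the last-read value local means the host-side counter can be a simple free-running total; the guest turns it into per-interval accounting.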
Fitzhardinge jeremy.fitzhardi...@citrix.com
CC: Peter Zijlstra pet...@infradead.org
CC: Avi Kivity a...@redhat.com
Reviewed-by: Rik van Riel r...@redhat.com
On 01/24/2011 08:25 PM, Glauber Costa wrote:
On Mon, 2011-01-24 at 18:31 -0500, Rik van Riel wrote:
On 01/24/2011 01:06 PM, Glauber Costa wrote:
Register steal time within KVM. Every time we sample the steal time
information, we update a local variable that records the last value
read. We
On 01/26/2011 08:01 AM, Avi Kivity wrote:
Suggest moving the code to vcpu_load(), where it can execute under the
protection of vcpu->mutex.
I've made the changes suggested by you and Peter, and
will re-post the patch series in a bit...
to the right level.
Signed-off-by: Rik van Riel r...@redhat.com
diff --git a/kernel/sched.c b/kernel/sched.c
index dc91a4d..7ff53e2 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -327,7 +327,7 @@ struct cfs_rq {
* 'curr' points to currently running entity on this cfs_rq
, or the other way around.
Signed-off-by: Glauber Costa glom...@redhat.com
CC: Rik van Riel r...@redhat.com
CC: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
CC: Peter Zijlstra pet...@infradead.org
CC: Avi Kivity a...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
but not the hypervisor, or the other way around.
Signed-off-by: Glauber Costa glom...@redhat.com
CC: Rik van Riel r...@redhat.com
CC: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
CC: Peter Zijlstra pet...@infradead.org
CC: Avi Kivity a...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
Fitzhardinge jeremy.fitzhardi...@citrix.com
CC: Peter Zijlstra pet...@infradead.org
CC: Avi Kivity a...@redhat.com
Not the traditional way of doing steal time, but a lot cleaner
than the legacy code that's left over from when each clocksource
had its own interrupt function.
I like it.
Acked-by: Rik van Riel r
...@redhat.com
CC: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
CC: Peter Zijlstra pet...@infradead.org
CC: Avi Kivity a...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
On 01/31/2011 06:47 AM, Peter Zijlstra wrote:
On Wed, 2011-01-26 at 17:21 -0500, Rik van Riel wrote:
+static struct sched_entity *__pick_second_entity(struct cfs_rq *cfs_rq)
+{
+ struct rb_node *left = cfs_rq->rb_leftmost;
+ struct rb_node *second;
+
+ if (!left
On 01/31/2011 06:49 AM, Peter Zijlstra wrote:
On Wed, 2011-01-26 at 17:21 -0500, Rik van Riel wrote:
+ if (yielded)
+ yield();
+
+ return yielded;
+}
+EXPORT_SYMBOL_GPL(yield_to);
yield() will again acquire rq->lock.. why not simply have
->yield_to_task() do
On 02/01/2011 05:53 AM, Peter Zijlstra wrote:
On Mon, 2011-01-31 at 16:40 -0500, Rik van Riel wrote:
v8:
- some more changes and cleanups suggested by Peter
Did you, by accident, send out the -v7 patches again? I don't think I've
spotted a difference..
Arghhh. Yeah, I did :(
--
All
On 02/15/2011 10:17 AM, Avi Kivity wrote:
Ah, so we're all set. Do you know if any user tools process this
information?
Top and vmstat have been displaying steal time for
maybe 4 or 5 years now.
On 07/06/2010 12:24 PM, Gleb Natapov wrote:
Async PF also needs to hook into smp_prepare_boot_cpu so move the hook
into generic code.
Signed-off-by: Gleb Natapov g...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
On 07/06/2010 12:24 PM, Gleb Natapov wrote:
... a commit message would be useful when you submit these
patches for inclusion upstream.
Signed-off-by: Gleb Natapov g...@redhat.com
Reviewed-by: Rik van Riel r...@redhat.com
with this patch, but it looks like patch
10/12 addresses all of those, so ...
Acked-by: Rik van Riel r...@redhat.com
On 07/06/2010 12:24 PM, Gleb Natapov wrote:
KVM will use it to try and find a page without falling back to slow
gup. That is why get_user_pages_fast() is not enough.
Signed-off-by: Gleb Natapov g...@redhat.com
Reviewed-by: Rik van Riel r...@redhat.com
On 07/06/2010 12:24 PM, Gleb Natapov wrote:
Code that depends on particular memslot layout can track changes and
adjust to new layout.
Signed-off-by: Gleb Natapov g...@redhat.com
Reviewed-by: Rik van Riel r...@redhat.com
-sleepable context and will not be able to
reschedule.
Signed-off-by: Gleb Natapov g...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
Natapov g...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
On 07/06/2010 12:25 PM, Gleb Natapov wrote:
Signed-off-by: Gleb Natapov g...@redhat.com
This patch needs a commit message on the next submission.
Other than that:
Reviewed-by: Rik van Riel r...@redhat.com
On 07/11/2010 03:12 PM, Daniel Bareiro wrote:
On Sunday, 11 July 2010 12:12:57 -0300,
Daniel Bareiro wrote:
I have an installation of Debian GNU/Linux 5.0.4 amd64 with qemu-kvm
0.12.3 compiled from the source code obtained from the official KVM
site, and Linux 2.6.32.12 compiled from
van Riel r...@redhat.com
On 07/12/2010 10:25 PM, Zachary Amsden wrote:
On reset, VMCB TSC should be set to zero. Instead, code was setting
tsc_offset to zero, which passes through the underlying TSC.
Signed-off-by: Zachary Amsden zams...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
-by: Zachary Amsden zams...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
against CPU hotplug or
frequency updates, which will issue IPIs to the local CPU to perform
this very same task).
Signed-off-by: Zachary Amsden zams...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
On 07/12/2010 10:25 PM, Zachary Amsden wrote:
If creating an SMP guest with an unstable host TSC, issue a warning.
Signed-off-by: Zachary Amsden zams...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
.
Signed-off-by: Zachary Amsden zams...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
task is descheduled.
Signed-off-by: Zachary Amsden zams...@redhat.com
Reviewed-by: Rik van Riel r...@redhat.com
-atomic operation.
Also, convert the KVM_SET_CLOCK / KVM_GET_CLOCK ioctls to use the kernel
time helper; these should be boot-based as well.
Signed-off-by: Zachary Amsden zams...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
alternative, so ...
Reviewed-by: Rik van Riel r...@redhat.com
-off-by: Zachary Amsden zams...@redhat.com
Reviewed-by: Rik van Riel r...@redhat.com
to boot after a suspend event.
This covers both cases.
Note that it is acceptable to take the spinlock, as either
no other tasks will be running and no locks held (BSP after
resume), or other tasks will be guaranteed to drop the lock
relatively quickly (AP on CPU_STARTING).
Acked-by: Rik van
On 07/12/2010 10:25 PM, Zachary Amsden wrote:
The scale_delta function for shift / multiply with 31-bit
precision moves to a common header so it can be used by both
the kernel and the kvm module.
Signed-off-by: Zachary Amsden zams...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
On 07/12/2010 10:25 PM, Zachary Amsden wrote:
Add a kernel call to get the number of nanoseconds since boot. This
is generally useful enough to make it a generic call.
Signed-off-by: Zachary Amsden zams...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
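A userspace analogue of such a "nanoseconds since boot" helper can be written with the Linux-specific CLOCK_BOOTTIME clock, which keeps counting across suspend; the kernel-internal helper discussed here is a different interface, this only illustrates the concept:

```c
#define _DEFAULT_SOURCE
#include <time.h>

/* Nanoseconds since boot, including time spent suspended. */
static long long nsec_since_boot(void)
{
    struct timespec ts;
    if (clock_gettime(CLOCK_BOOTTIME, &ts) != 0)
        return -1;
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}
```

CLOCK_MONOTONIC would give a similar monotonic count, but stops during suspend, which is exactly the distinction that matters for boot-based guest clocks.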
On 07/12/2010 10:25 PM, Zachary Amsden wrote:
Signed-off-by: Zachary Amsden zams...@redhat.com
Would be nice to have a commit message the next time you
submit this :)
arch/x86/kvm/x86.c | 22 ++
1 files changed, 6 insertions(+), 16 deletions(-)
Reviewed-by: Rik van
...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
Arcangeli aarca...@redhat.com
Reviewed-by: Rik van Riel r...@redhat.com
On 07/19/2010 11:30 AM, Gleb Natapov wrote:
Enable async PF in a guest if async PF capability is discovered.
Signed-off-by: Gleb Natapov g...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
On 08/02/2010 02:57 PM, Daniel Bareiro wrote:
Hi, Rik.
On Sunday, 11 July 2010 17:49:43 -0400,
Rik van Riel wrote:
I have an installation with Debian GNU/Linux 5.0.4 amd64 with
qemu-kvm 0.12.3 compiled with the source code obtained from the
official site of KVM and Linux 2.6.32.12 compiled
On 08/02/2010 03:52 PM, Daniel Bareiro wrote:
Are there any estimates of when this patch will be in Linux -stable?
It should be there already in 2.6.33-stable and 2.6.34-stable.
On 12/31/2009 12:02 PM, Hugh Dickins wrote:
On Thu, 31 Dec 2009, Daniel Bareiro wrote:
What tests would be recommendable to make to reproduce the problem?
Oh, I thought you were the one seeing the problem! If you cannot
easily reproduce it, then please don't spend too long over it.
I've
On 01/05/2010 10:05 AM, Jun Koi wrote:
On Tue, Jan 5, 2010 at 11:12 PM, Gleb Natapov g...@redhat.com wrote:
KVM virtualizes guest memory by means of shadow pages or HW assistance
like NPT/EPT. Not all memory used by a guest is mapped into the guest
address space or even present in a host memory
On 01/08/2010 11:18 AM, Marcelo Tosatti wrote:
- Limit the number of queued async pf's per guest ?
This is automatically limited to the number of processes
running in a guest :)
On 01/08/2010 02:30 PM, Bryan Donlan wrote:
On Fri, Jan 8, 2010 at 2:24 PM, Rik van Riel r...@redhat.com wrote:
On 01/08/2010 11:18 AM, Marcelo Tosatti wrote:
- Limit the number of queued async pf's per guest ?
This is automatically limited to the number of processes
running in a guest :)
On 01/20/2010 07:00 AM, Avi Kivity wrote:
On 01/20/2010 12:02 PM, Gleb Natapov wrote:
I can inject the event as a HW interrupt on a vector greater than 32 but not
go through the APIC, so EOI will not be required. This sounds
non-architectural,
and I am not sure the kernel has entry point code for this kind
On 02/03/2010 11:12 PM, Balbir Singh wrote:
* Rik van Riel r...@redhat.com [2010-02-03 16:11:03]:
Currently KVM pretends that pages with EPT mappings never got
accessed. This has some side effects in the VM, like swapping
out actively used guest pages and needlessly breaking up actively
used
Balbir Singh wrote:
* Rik van Riel r...@redhat.com [2010-02-04 08:40:43]:
On 02/03/2010 11:12 PM, Balbir Singh wrote:
* Rik van Riel r...@redhat.com [2010-02-03 16:11:03]:
Currently KVM pretends that pages with EPT mappings never got
accessed. This has some side effects in the VM, like
On 03/09/2010 04:30 PM, Marcelo Tosatti wrote:
On Tue, Mar 09, 2010 at 09:47:38PM +0100, Thomas Treutner wrote:
Hi,
I'm referring to this patchset
http://www.mail-archive.com/kvm@vger.kernel.org/msg23810.html
of Marcelo Tosatti. It seems it was never included or even discussed, although
it's
On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
On Wed, 2010-12-01 at 09:17 -0800, Chris Wright wrote:
Directed yield and fairness don't mix well either. You can end up
feeding the other tasks more time than you'll ever get back.
If the directed yield is always to another task in your cgroup