On Wed, Jan 12, 2011, Xiao Guangrong wrote about [PATCH v3 2/3] KVM: send IPI
to vcpu only when it's in guest mode:
We can interrupt the vcpu only when it's running in guest mode
to reduce IPI
Hi,
I am afraid there's a risk of confusion between the new
vcpu->mode = IN_GUEST_MODE;
and
On 01/11/2011 03:54 PM, Anthony Liguori wrote:
Right, we should introduce a KVMBus that KVM devices are created on.
The devices can get at KVMState through the BusState.
There is no kvm bus in a PC (I looked). We're bending the device model
here because a device is implemented in the
A KVM guest always pauses on an ENOSPC error; this test
repeatedly extends the guest's disk space and resumes the guest
from the paused state.
Signed-off-by: Amos Kong ak...@redhat.com
---
client/tests/kvm/scripts/check_image.py |  0
client/tests/kvm/tests/enospc.py        | 56
Linus, please find an amended 2.6.38 KVM queue in
git://git.kernel.org/pub/scm/virt/kvm/kvm.git kvm-updates/2.6.38
Changes from v1:
- dropped the patch introducing FAULT_FLAG_MINOR
- changed the following patch not to use the new get_user_pages_noio()
- tacked on a late sleeping-while-atomic
Hello,
libvirt implements a managed save, which suspends a VM to a file from which it
can be resumed later. This uses Qemu's/KVM's migrate exec:file feature.
This doesn't work reliably for me: in many cases the resumed VM seems to be
stuck: its VNC console is restored, but no key presses or
On Mon, Jan 10, 2011 at 10:02:50PM +0100, Jiri Slaby wrote:
Yup, this works for me. If you point me to the other 2, I will test them
too...
Sure, and they're already included in -mm.
http://marc.info/?l=linux-mm&m=129442647907831&q=raw
http://marc.info/?l=linux-mm&m=129442718808733&q=raw
In order to avoid namespace collisions, turn the kvm test dir into
a module and, whenever importing full tests, use the complete
namespace. The full namespace might not be pretty, but it is
certainly safer, especially when executing from autoserv.
Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
Hi,
Trying to stir up a year old conversation [1] about mac filtering.
The patch below is Alex Williamson's work updated for the current qemu
taking into account some comments. An extra check for multiple nics on
the same vlan has been added as well. Now, I know it's not ideal but
I'm looking
On 01/11/2011 04:25 AM, Avi Kivity wrote:
On 01/10/2011 09:31 PM, Linus Torvalds wrote:
Why wasn't I notified
before-hand? Was Andrew cc'd?
Andrew and linux-mm were copied. Rik was the only one who reviewed (and
ack'ed) it. I guess I should have explicitly asked for Nick's review.
Last
On Wed, Jan 12, 2011 at 12:33 PM, Rik van Riel r...@redhat.com wrote:
Now that we have FAULT_FLAG_ALLOW_RETRY, the async
pagefault patches can be a little smaller.
I suspect you do still want a new page flag, to say that
FAULT_FLAG_ALLOW_RETRY shouldn't actually wait for the page that it
On 01/07/2011 12:29 AM, Mike Galbraith wrote:
+#ifdef CONFIG_SMP
+	/*
+	 * If this yield is important enough to want to preempt instead
+	 * of only dropping a ->next hint, we're alone, and the target
+	 * is not alone, pull the target to this cpu.
+	 *
+	 *
On Wed, 2011-01-12 at 22:02 -0500, Rik van Riel wrote:
Cgroups only make matters worse - libvirt places
each KVM guest into its own cgroup, so a VCPU will
generally always be alone on its own per-cgroup, per-cpu
runqueue! That can lead to pulling a VCPU onto our local
CPU because we
On some CPUs, a ple_gap of 41 is simply insufficient to ever trigger
PLE exits, even with the minimalistic PLE test from kvm-unit-tests.
http://git.kernel.org/?p=virt/kvm/kvm-unit-tests.git;a=commitdiff;h=eda71b28fa122203e316483b35f37aaacd42f545
For example, the Xeon X5670 CPU needs a ple_gap of
When running SMP virtual machines, it is possible for one VCPU to be
spinning on a spinlock, while the VCPU that holds the spinlock is not
currently running, because the host scheduler preempted it to run
something else.
Both Intel and AMD CPUs have a feature that detects when a virtual
CPU is
Keep track of which task is running a KVM vcpu. This helps us
figure out later what task to wake up if we want to boost a
vcpu that got preempted.
Unfortunately there are no guarantees that the same task
always keeps the same vcpu, so we can only track the task
across a single run of the vcpu.
Instead of sleeping in kvm_vcpu_on_spin, which can cause gigantic
slowdowns of certain workloads, we use yield_to to hand
the rest of our timeslice to another vcpu in the same KVM guest.
Signed-off-by: Rik van Riel r...@redhat.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
diff
From: Mike Galbraith efa...@gmx.de
Currently only implemented for fair class tasks.
Add a yield_to_task() method to the fair scheduling class, allowing the
caller of yield_to() to accelerate another thread in its thread group /
task group.
Implemented via a scheduler hint, using cfs_rq->next to
On Mon, Jan 10, 2011 at 06:31:55PM +0900, Simon Horman wrote:
On Fri, Jan 07, 2011 at 10:23:58AM +0900, Simon Horman wrote:
On Thu, Jan 06, 2011 at 05:38:01PM -0500, Jesse Gross wrote:
[ snip ]
I know that everyone likes a nice netperf result but I agree with
Michael that this