Instead of sleeping in kvm_vcpu_on_spin, which can cause gigantic
slowdowns for certain workloads, we use yield_to to hand
the rest of our timeslice to another vcpu in the same KVM guest.
Signed-off-by: Rik van Riel r...@redhat.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
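The handoff described above can be pictured with a toy model (plain Python, not the kernel implementation; the `Vcpu` class and its bookkeeping are purely illustrative):

```python
class Vcpu:
    """Toy stand-in for a vcpu thread with a remaining timeslice."""
    def __init__(self, name, timeslice):
        self.name = name
        self.timeslice = timeslice

def yield_to(src, dst):
    """Hand the rest of src's timeslice to dst instead of sleeping.

    This mirrors the idea of the scheduler's yield_to(): the spinning
    vcpu donates its remaining time so the lock holder can run sooner.
    """
    dst.timeslice += src.timeslice
    src.timeslice = 0

spinner = Vcpu("vcpu0", timeslice=3)  # spinning on a held lock
holder = Vcpu("vcpu1", timeslice=1)   # preempted lock holder
yield_to(spinner, holder)
print(holder.timeslice)  # 4
```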
diff
From: Mike Galbraith efa...@gmx.de
Currently only implemented for fair class tasks.
Add a yield_to_task() method to the fair scheduling class, allowing the
caller of yield_to() to accelerate another thread in its thread group /
task group.
Implemented via a scheduler hint, using cfs_rq->next to
Keep track of which task is running a KVM vcpu. This helps us
figure out later what task to wake up if we want to boost a
vcpu that got preempted.
Unfortunately there are no guarantees that the same task
always keeps the same vcpu, so we can only track the task
across a single run of the vcpu.
Export the symbols required for a race-free kvm_vcpu_on_spin.
Signed-off-by: Rik van Riel r...@redhat.com
diff --git a/kernel/fork.c b/kernel/fork.c
index 3b159c5..adc8f47 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -191,6 +191,7 @@ void __put_task_struct(struct task_struct *tsk)
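The single-run task tracking described above can be sketched like this (a toy Python model, not the actual cfs_rq->next mechanism; all names are illustrative):

```python
# Toy model: remember which host task last loaded each vcpu; the
# association is dropped again when the vcpu stops running, since
# there is no guarantee the same task keeps the same vcpu.
vcpu_task = {}

def vcpu_load(vcpu, task):
    vcpu_task[vcpu] = task        # vcpu begins a run on this task

def vcpu_put(vcpu):
    vcpu_task.pop(vcpu, None)     # valid only across a single run

def task_to_boost(vcpu):
    return vcpu_task.get(vcpu)    # None if the vcpu is not running

vcpu_load("vcpu1", "qemu-task-1234")
print(task_to_boost("vcpu1"))  # qemu-task-1234
vcpu_put("vcpu1")
print(task_to_boost("vcpu1"))  # None
```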
When running SMP virtual machines, it is possible for one VCPU to be
spinning on a spinlock, while the VCPU that holds the spinlock is not
currently running, because the host scheduler preempted it to run
something else.
Both Intel and AMD CPUs have a feature that detects when a virtual CPU is
On 14.01.2011 06:28, Amos Kong wrote:
- Original Message -
From: Amos Kong ak...@redhat.com
A KVM guest always pauses on a NOSPACE error; this test
just repeatedly extends the guest disk space and resumes the guest
from the paused state.
Changes from v2:
- Oops! Forgot to update
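The extend-and-resume loop described above might look roughly like the following (a sketch only; the `FakeImage`/`FakeVM` stand-ins and helper names are invented, not the enospc.py code):

```python
class FakeImage:
    """Stand-in for a qcow2 image that can be grown (qemu-img resize)."""
    def __init__(self):
        self.size_gb = 10
    def resize(self, step_gb):
        self.size_gb += step_gb

class FakeVM:
    """Stand-in guest: paused with NOSPACE until the disk is big enough."""
    def __init__(self, image, needed_gb):
        self.image, self.needed_gb = image, needed_gb
    def is_paused(self):
        return self.image.size_gb < self.needed_gb
    def monitor_cmd(self, cmd):
        pass  # 'cont' would resume a real guest via the monitor

def grow_until_running(vm, image, step_gb=1, max_rounds=10):
    """Repeatedly extend the guest disk and resume the paused guest."""
    rounds = 0
    while vm.is_paused() and rounds < max_rounds:
        image.resize(step_gb)      # e.g. qemu-img resize +1G
        vm.monitor_cmd("cont")     # resume from the paused state
        rounds += 1
    return rounds

img = FakeImage()
vm = FakeVM(img, needed_gb=13)
print(grow_until_running(vm, img))  # 3
```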
On 14.01.2011 02:51, Huang Ying wrote:
On Thu, 2011-01-13 at 17:01 +0800, Jan Kiszka wrote:
On 13.01.2011 09:34, Huang Ying wrote:
In the Linux kernel HWPoison processing implementation, the virtual
address in processes mapping the faulty physical memory page is marked
as HWPoison, so that the
For latest kernel, the device name of ide disk is 'sd*',
not 'hd*'.
Signed-off-by: Amos Kong ak...@redhat.com
---
client/tests/kvm/tests/enospc.py |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/client/tests/kvm/tests/enospc.py b/client/tests/kvm/tests/enospc.py
index
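The device-name change can be handled by simply matching both prefixes; a minimal sketch (the pattern is illustrative, not the test's actual code):

```python
import re

# Recent kernels expose IDE disks through libata as 'sd*', not 'hd*',
# so match either prefix when locating the guest disk device.
disk_re = re.compile(r"^/dev/[sh]d[a-z]+$")

print(bool(disk_re.match("/dev/sda")))  # True
print(bool(disk_re.match("/dev/hda")))  # True
print(bool(disk_re.match("/dev/sr0")))  # False
```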
Please pull the following for 2.6.38.
Thanks!
The following changes since commit 0c21e3aaf6ae85bee804a325aa29c325209180fd:
Merge branch 'for-next' of
git://git.kernel.org/pub/scm/linux/kernel/git/hch/hfsplus (2011-01-07 17:16:27
-0800)
are available in the git repository at:
Marcelo Tosatti mtosa...@redhat.com wrote:
On Fri, Jan 07, 2011 at 10:44:20AM -1000, Zachary Amsden wrote:
On 01/07/2011 12:48 AM, Marcelo Tosatti wrote:
On Thu, Jan 06, 2011 at 12:10:45AM -1000, Zachary Amsden wrote:
Use an MSR to allow soft migration to hosts which do not support
TSC
From: Jason Wang jasow...@redhat.com
The patch tries to make using kvm-autotest much easier by
allowing test parameters to be passed from the command line
directly through --args=key1=value1 key2=value2 ... keyN=valueN.
The idea is simple: the autotest test passes the additional
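The key=value parsing this implies could look roughly like the following (a sketch only; the real kvm-autotest plumbing differs):

```python
def parse_extra_args(argstr):
    """Turn 'key1=value1 key2=value2 ... keyN=valueN' into a dict of
    test parameters, as passed via --args on the command line."""
    params = {}
    for pair in argstr.split():
        key, _, value = pair.partition("=")
        params[key] = value
    return params

print(parse_extra_args("key1=value1 key2=value2"))
# {'key1': 'value1', 'key2': 'value2'}
```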
On Thu, Jan 13, 2011 at 02:27:00PM -0500, Avi Kivity wrote:
On 01/13/2011 05:51 PM, Roedel, Joerg wrote:
I also had a look at entry_64.S. The save_paranoid could not be the
cause because MSR_GS_BASE is already negative at this point. But the
re-schedule condition check at the end of the NMI
Hi,
here is the reworked version of the patch-set. Only patch 1/2 has
changed and now contains the real fix for the crashes that were seen and
has an updated log message.
Regards,
Joerg
On Fri, 2011-01-14 at 03:03 -0500, Rik van Riel wrote:
From: Mike Galbraith efa...@gmx.de
Currently only implemented for fair class tasks.
Add a yield_to_task() method to the fair scheduling class, allowing the
caller of yield_to() to accelerate another thread in its thread group,
task
On 01/14/2011 03:02 AM, Rik van Riel wrote:
Benchmark results:
Two 4-CPU KVM guests are pinned to the same 4 physical CPUs.
I just discovered that I had in fact pinned the 4-CPU KVM
guests to 4 HT threads across 2 cores, and the scheduler
has all kinds of special magic for dealing with HT
The event-tap function is called only when it is on, and only for
requests sent from device emulators.
Signed-off-by: Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp
---
block.c | 11 +++
1 files changed, 11 insertions(+), 0 deletions(-)
diff --git a/block.c b/block.c
index ff2795b..85bd8b8 100644
Introduce skip_header parameter to qemu_loadvm_state() so that it can
be called iteratively without reading the header.
Signed-off-by: Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp
---
migration.c |2 +-
savevm.c| 24 +---
sysemu.h|2 +-
3 files changed, 15
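The skip_header idea amounts to reading the file header only once across iterative calls; a rough sketch (the "QEVM" magic and the fixed-size framing here are invented for illustration, not QEMU's actual savevm format):

```python
import io

def loadvm_state(f, skip_header=False):
    """Load one chunk of VM state; read the header only when asked.

    With skip_header=True the function can be called iteratively on the
    same stream without re-reading (and failing on) the header.
    """
    if not skip_header:
        magic = f.read(4)
        if magic != b"QEVM":
            raise ValueError("bad savevm header")
    return f.read(6)

stream = io.BytesIO(b"QEVM" + b"state1" + b"state2")
print(loadvm_state(stream))                    # b'state1'
print(loadvm_state(stream, skip_header=True))  # b'state2'
```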
For regular migration, inuse == 0 always holds, as requests are flushed
before save. However, event-tap, when enabled, introduces an extra queue
for requests which is not flushed, so the last inuse requests
are left in the event-tap queue. Move the last_avail_idx value sent
to the remote back
The event-tap function is called only when it is on.
Signed-off-by: Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp
---
net.c |9 +
1 files changed, 9 insertions(+), 0 deletions(-)
diff --git a/net.c b/net.c
index 9ba5be2..1176124 100644
--- a/net.c
+++ b/net.c
@@ -36,6 +36,7 @@
#include
Introduce migrate_ft_trans_put_ready() which kicks the FT transaction
cycle. When ft_mode is on, migrate_fd_put_ready() would open
ft_trans_file and turn on event_tap. To end or cancel an FT transaction,
ft_mode and event_tap are turned off. migrate_ft_trans_get_ready() is
called to receive ack
Hi,
This patch series is a revised version of Kemari for KVM, which
incorporates review comments on the previous post. The current code is based on
qemu.git d03d11260ee2d55579e8b76116e35ccdf5031833.
The changes from v0.2.4 to v0.2.5 are:
- fixed braces and trailing spaces by using Blue's checkpatch.pl
event-tap controls when to start an FT transaction, and provides proxy
functions to be called from net/block devices. During an FT transaction, it
queues up net/block requests and flushes them when the transaction
completes.
Signed-off-by: Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp
Signed-off-by:
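The queue-then-flush behaviour described above can be modelled in a few lines (a toy Python sketch, not the event-tap implementation; the class and method names are invented):

```python
class EventTap:
    """Toy model of event-tap: queue net/block requests during an FT
    transaction and flush them when the transaction completes."""
    def __init__(self):
        self.on = False
        self.queue = []
        self.sent = []
    def request(self, req):
        if self.on:
            self.queue.append(req)    # hold back while the transaction runs
        else:
            self.sent.append(req)     # pass through when event-tap is off
    def commit(self):
        self.sent.extend(self.queue)  # transaction completed: flush
        self.queue.clear()
        self.on = False

tap = EventTap()
tap.on = True
tap.request("net-pkt-1")
tap.request("blk-write-1")
tap.commit()
print(tap.sent)  # ['net-pkt-1', 'blk-write-1']
```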
Record ioport event to replay it upon failover.
Signed-off-by: Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp
---
ioport.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/ioport.c b/ioport.c
index aa4188a..74aebf5 100644
--- a/ioport.c
+++ b/ioport.c
@@ -27,6 +27,7 @@
This code implements the VM transaction protocol. Like buffered_file, it
sits between savevm and the migration layer. With this architecture, the VM
transaction protocol is implemented mostly independently of other
existing code.
Signed-off-by: Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp
Signed-off-by:
When the -k option is set on the migrate command, it will turn on ft_mode to
start FT migration mode (Kemari).
Signed-off-by: Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp
---
hmp-commands.hx |7 ---
migration.c |4
qmp-commands.hx |7 ---
3 files changed, 12 insertions(+),
Signed-off-by: Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp
---
vl.c |3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/vl.c b/vl.c
index 8bbb785..9faeb27 100644
--- a/vl.c
+++ b/vl.c
@@ -162,6 +162,7 @@ int main(int argc, char **argv)
#include qemu-queue.h
#include
Currently the buf size is fixed at 32KB. It would be useful if it could
be flexible.
Signed-off-by: Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp
---
hw/hw.h |2 ++
savevm.c | 20 +++-
2 files changed, 21 insertions(+), 1 deletions(-)
diff --git a/hw/hw.h b/hw/hw.h
index
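The flexible-size idea can be sketched as a buffer whose threshold is a constructor argument instead of a 32KB constant (illustrative Python, not QEMU's API):

```python
class BufferedFile:
    """Accumulate writes and report when the configurable limit is hit."""
    def __init__(self, buf_size=32 * 1024):    # 32KB was the fixed default
        self.buf = bytearray()
        self.buf_size = buf_size
    def put_buffer(self, data):
        self.buf += data
        return len(self.buf) >= self.buf_size  # True => caller should flush

f = BufferedFile(buf_size=4)
print(f.put_buffer(b"ab"))  # False
print(f.put_buffer(b"cd"))  # True
```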
Currently FdMigrationState doesn't support read(), and this patch
introduces it to get a response from the other side.
Signed-off-by: Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp
---
migration-tcp.c | 15 +++
migration.c | 13 +
migration.h |3 +++
3 files
When ft_mode is set in the header, tcp_accept_incoming_migration()
sets ft_trans_incoming() as a callback, and calls
qemu_file_get_notify() to receive the FT transaction iteratively. We also
need a hack not to close the fd before moving to ft_transaction mode, so
that we can reuse the fd for it.
Introduce qemu_savevm_state_{begin,commit} to send the memory and
device info together, while avoiding cancelling memory state tracking.
Signed-off-by: Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp
---
savevm.c | 93 ++
sysemu.h |
On Fri, Jan 14, 2011 at 03:03:57AM -0500, Rik van Riel wrote:
From: Mike Galbraith efa...@gmx.de
Currently only implemented for fair class tasks.
Add a yield_to_task() method to the fair scheduling class, allowing the
caller of yield_to() to accelerate another thread in its thread group,
On 01/14/2011 12:47 PM, Srivatsa Vaddagiri wrote:
If I recall correctly, one of the motivations for yield_to_task (rather than
a simple yield) was to avoid leaking bandwidth to other guests, i.e. we don't want
the remaining timeslice of a spinning vcpu to be given away to other guests but
rather
Use a remote session instead of serial, because we also run
this test under Windows.
Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
---
client/tests/kvm/tests/physical_resources_check.py |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git
On 01/13/2011 05:22 AM, Avi Kivity wrote:
On 01/11/2011 06:19 PM, Stefan Berger wrote:
What puzzles me is that the read operation may be run twice while
others aren't.
Reads have split execution: kvm emulates the mmio instruction, notices
that it cannot satisfy the read request, exits to
Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
---
client/tests/kvm/autotest_control/iozone.control | 18 ++
client/tests/kvm/tests_base.cfg.sample |3 +++
2 files changed, 21 insertions(+), 0 deletions(-)
create mode 100644
From: Michael S. Tsirkin m...@redhat.com
Date: Fri, 14 Jan 2011 11:33:02 +0200
Please pull the following for 2.6.38.
Thanks!
The following changes since commit 0c21e3aaf6ae85bee804a325aa29c325209180fd:
Merge branch 'for-next' of
session.cmd_output() now takes a 'cmd' parameter, rather
than a 'command' parameter.
Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
---
client/tests/kvm/tests/iozone_windows.py |3 +--
1 files changed, 1 insertions(+), 2 deletions(-)
diff --git
On 01/14/2011 03:02 AM, Rik van Riel wrote:
Benchmark results:
Two 4-CPU KVM guests are pinned to the same 4 physical CPUs.
Unfortunately, it turned out I was running my benchmark on
only two CPU cores, using two HT threads of each core.
I have re-run the benchmark with the guests bound to