On Fri, Oct 10, 2014 at 09:33:04AM +0200, Marcin Gibuła wrote:
Does anybody know why the APIC state loaded by the first call to
kvm_arch_get_registers() is wrong, in the first place? What exactly is
different in the APIC state in the second kvm_arch_get_registers() call,
and when/why does it change?
If cpu_synchronize_state() does the wrong thing if it is
On 24/08/2014 22:14, Andrey Korolyov wrote:
Forgot to mention, the _actual_ patch is the one above. Adding
cpu_synchronize_all_states() brings the old bug with lost interrupts
back.
Are you adding it before or after cpu_clean_all_dirty?
Paolo
Sorry, I was a bit inaccurate on Friday about the necessary amount of
work; the patch applies cleanly on 3.10 with a bit of monkey
rewriting. The attached one fixed the problem for me - it represents
0b10a1c87a2b0fb459baaefba9cb163dbb8d3344,
0bc830b05c667218d703f2026ec866c49df974fc,
I'm running 3.10, so the patches are not here; I will try 3.16 soon. Even if
the problem is fixed there, it will still be specific to 2.1 - earlier
releases work well, and I'll bisect in time.
Thanks, using 3.16 helped indeed. Though the bug remains as is at 2.1
on LTS 3.10, should I find the
Andrey,
Can you give instructions on how to reproduce please?
Please find answers inline:
- qemu.git codebase (if you have any patches relative to a
given commit id, please provide the patches).
Rolled back to the bare 2.1 release to reproduce; on 3.10 I am hitting the issue
with and without
On 21/08/2014 18:41, Andrey Korolyov wrote:
Sorry, the test series revealed that the problem is still here, but
with a lower hit ratio on modified 2.1-HEAD using the selected argument
set. The actual root of the issue is in '-cpu
qemu64,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1000'
Yeah, I need to sit down and look at the code more closely... Perhaps a
cpu_mark_all_dirty() is enough.
Hi Paolo,
cpu_clean_all_dirty, you mean? Has the same effect.
Marcin's patch to add cpu_synchronize_state_always() has the same
effect.
What do you prefer?
I'd prefer
On 04/08/2014 18:30, Marcin Gibuła wrote:
is this analysis deep enough for you? I don't know if that can be fixed
with the existing API, as cpu_synchronize_all_states() is an all-or-nothing
kind of thing.
Kvmclock needs it only to read the current cpu registers, so syncing
everything is not
Can you dump *env before and after the call to kvm_arch_get_registers?
Yes, but it seems they are equal - I used memcmp() to compare them. Is
there any other side effect that cpu_synchronize_all_states() may have?
I think I found it.
The reason for the hang is that when the second call to
On 30/07/2014 14:02, Marcin Gibuła wrote:
without it:
called do_kvm_cpu_synchronize_state_always
called do_kvm_cpu_synchronize_state_always
called do_kvm_cpu_synchronize_state: vcpu not dirty, getting registers
called do_kvm_cpu_synchronize_state: vcpu not dirty, getting registers
On 2014-07-30 15:38, Paolo Bonzini wrote:
On 30/07/2014 14:02, Marcin Gibuła wrote:
without it:
s/without/with/ of course...
On 18/07/2014 10:48, Paolo Bonzini wrote:
It is easy to find out if the fix is related to 1 or 2/3: just write
if (cpu->kvm_vcpu_dirty) {
    printf("do_kvm_cpu_synchronize_state_always: look at 2/3\n");
    kvm_arch_get_registers(cpu);
} else {
    printf
could you try the attached patch? It's an incredibly ugly workaround that calls
cpu_synchronize_all_states() in a way that bypasses the lazy execution logic.
But it works for me. If it works for you as well, it's somehow related to
the lazy execution of cpu_synchronize_all_states.
--
mg
Yes, it is working
Does it fix the problem with libvirt migration timing out for you as well?
Oh, forgot to mention - yes, all migration-related problems are fixed.
Though the release is right now in a freeze phase, I'd like to ask the
maintainers to consider the possibility of fixing the problem on top of
the current tree instead
On 17/07/2014 15:25, Marcin Gibuła wrote:
+static void do_kvm_cpu_synchronize_state_always(void *arg)
+{
+    CPUState *cpu = arg;
+
+    kvm_arch_get_registers(cpu);
+}
+
The name of the hack^Wfunction is tricky, because compared to
do_kvm_cpu_synchronize_state there are three things
On 18/07/2014 10:44, Marcin Gibuła wrote:
Paolo,
if the patch in its current form is not acceptable for you for inclusion,
I'll try to rewrite it according to your comments.
The problem is that we don't know _why_ the patch is fixing things.
Considering that your kvmclock bug has been there
The name of the hack^Wfunction is tricky, because compared to
do_kvm_cpu_synchronize_state there are three things you change:
1) you always synchronize the state
2) the next call to do_kvm_cpu_synchronize_state will do
kvm_arch_get_registers
Yes.
3) the next CPU entry will call
On 2014-07-18 11:37, Paolo Bonzini wrote:
On 18/07/2014 11:32, Marcin Gibuła wrote:
3) the next CPU entry will call kvm_arch_put_registers:
if (cpu->kvm_vcpu_dirty) {
    kvm_arch_put_registers(cpu, KVM_PUT_RUNTIME_STATE);
    cpu->kvm_vcpu_dirty = false;
Yes, exactly. An iSCSI-based setup can take some minutes to deploy, given
a prepared image, and I have a one hundred percent hit rate for the
original issue with it.
I've reproduced your IO hang with 2.0 and both
9b1786829aefb83f37a8f3135e3ea91c56001b56 and
a096b3a6732f846ec57dc28b47ee9435aa0609bf
2.1-rc2 behaves exactly the same.
Interestingly enough, resetting the guest system causes I/O to work again. So
it's not qemu that hangs on I/O; rather, it fails to notify the guest about
completed operations that were issued during migration.
And it's somehow caused by calling cpu_synchronize_all_states()
I don't know if this is the same case, but Gerd showed me a migration failure
that might be related. 2.0 seems OK, 2.1-rc0 is broken (and I've not found
another working point in between yet).
The test case involves booting a fedora livecd (using an IDE CDROM device)
and after the migration we're
Andrey,
Can you please provide instructions on how to create a reproducible
environment?
The following patch is equivalent to the original patch, for
the purposes of fixing the kvmclock problem.
Perhaps it becomes easier to spot the reason for the hang you are
experiencing.
Marcelo,
the
Tested on an iscsi pool, though there is a no-cache requirement; rbd
with disabled cache may survive one migration, but the iscsi backend
always hangs. As it was before, just rolling back the problematic commit
fixes the problem, and adding cpu_synchronize_all_states to migration.c
makes no difference at a
On 13/07/2014 17:29, Andrey Korolyov wrote:
Small follow-up: the issue has a probabilistic nature, as it looks - over a
limited number of runs, it is reproducible within three cases:
1) live migration went well, I/O locked up,
2) live migration failed by timeout, I/O locked up,
3) live migration
On (Sun) 13 Jul 2014 [16:28:56], Andrey Korolyov wrote:
Hello,
the issue is not specific to the iothread code because generic
virtio-blk also hangs up:
Do you know which version works well? If you could bisect, that'll
help a lot.
Thanks,
Amit
Hello,
the issue is not specific to the iothread code because generic
virtio-blk also hangs up:
Given a code set like in
http://www.mail-archive.com/qemu-devel@nongnu.org/msg246164.html,
launch a VM with a virtio-blk disk and a writeback rbd backend, fire up
fio, and migrate once with libvirt:
time