On 23.05.2013 12:09, Paolo Bonzini wrote:
On 22/05/2013 14:24, Stefan Priebe - Profihost AG wrote:
On 22.05.2013 at 10:41, Paolo Bonzini pbonz...@redhat.com wrote:
On 22/05/2013 08:26, Stefan Priebe - Profihost AG wrote:
Hi,
as I can't reproduce it, no ;-( I just saw the kernel
Hello list,
since upgrading from qemu 1.4.1 to 1.5.0 I've had problems with QMP commands.
With Qemu 1.5 I see the following socket communication:
'{"execute":"qmp_capabilities","id":"12125:1","arguments":{}}'
'{"return": {}, "id": "12125:1"}'
{"id": "12125:1", "error": {"class": "CommandNotFound", "desc": "The
command qmp_capabilities has not been found"}}
Stefan
On 24.05.2013 17:21, Luiz Capitulino wrote:
On Fri, 24 May 2013 16:36:26 +0200
Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote:
On 24.05.2013 at 16:02, Luiz Capitulino lcapitul...@redhat.com wrote:
On Fri, 24 May 2013 15:57:59 +0200
Stefan Priebe - Profihost AG s.pri
On 24.05.2013 23:37, Stefan Priebe wrote:
On 24.05.2013 17:21, Luiz Capitulino wrote:
On Fri, 24 May 2013 16:36:26 +0200
Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote:
On 24.05.2013 at 16:02, Luiz Capitulino
lcapitul...@redhat.com wrote:
On Fri, 24 May 2013 15:57:59 +0200
On 25.05.2013 00:09, mdroth wrote:
I would try to create a small example script.
I use qmp-shell and other little scripts very often.
Can this be due to the fact that I don't wait for the welcome banner
right now?
If you're not reading from the socket, then you'll get the banner back
when
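For reference, a minimal sketch of the handshake order being discussed: read the greeting banner first, then negotiate capabilities, then issue commands. The socket path /tmp/qmp.sock and the id value are illustrative, not taken from the thread.

import json
import socket

# Hypothetical QMP socket path; adjust to your -qmp unix:...,server setting.
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/tmp/qmp.sock")
f = s.makefile("rw")

# 1. Read the greeting banner BEFORE sending anything.
greeting = json.loads(f.readline())
print("greeting:", greeting["QMP"]["version"])

# 2. Only then negotiate capabilities.
f.write(json.dumps({"execute": "qmp_capabilities", "id": "12125:1"}) + "\n")
f.flush()
print("reply:", json.loads(f.readline()))

If the banner is left unread, as described above, it sits in the receive buffer and shifts every later read by one message, which can make replies appear mismatched.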
On 25.05.2013 00:32, mdroth wrote:
On Sat, May 25, 2013 at 12:12:22AM +0200, Stefan Priebe wrote:
On 25.05.2013 00:09, mdroth wrote:
I would try to create a small example script.
I use qmp-shell and other little scripts very often.
Can this be due to the fact that I don't wait
On 26.05.2013 03:23, mdroth wrote:
On Sat, May 25, 2013 at 01:09:50PM +0200, Stefan Priebe wrote:
On 25.05.2013 00:32, mdroth wrote:
On Sat, May 25, 2013 at 12:12:22AM +0200, Stefan Priebe wrote:
On 25.05.2013 00:09, mdroth wrote:
I would try to create a small example script.
I use
On 26.05.2013 17:36, mdroth wrote:
On Sun, May 26, 2013 at 05:13:36PM +0200, Stefan Priebe wrote:
On 26.05.2013 03:23, mdroth wrote:
On Sat, May 25, 2013 at 01:09:50PM +0200, Stefan Priebe wrote:
On 25.05.2013 00:32, mdroth wrote:
On Sat, May 25, 2013 at 12:12:22AM +0200, Stefan Priebe
On 10.05.2013 13:09, Stefan Hajnoczi wrote:
On Fri, May 10, 2013 at 11:07 AM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
On 10.05.2013 09:42, Stefan Hajnoczi wrote:
On Fri, May 10, 2013 at 08:12:39AM +0200, Stefan Priebe - Profihost AG wrote:
3. Either use gdb
On 30.05.2013 15:13, Amos Kong wrote:
On Thu, May 30, 2013 at 02:09:25PM +0200, Stefan Priebe - Profihost AG wrote:
On 29.05.2013 09:56, Amos Kong wrote:
Recent virtio refactoring in QEMU made virtio-bus become the parent bus
of scsi-bus, and virtio-bus doesn't have get_fw_dev_path
Public bug reported:
qemu-kvm 1.1.1 stable is running fine for me with a RHEL 6 2.6.32-based
kernel.
But with a 3.5.0 kernel, qemu-system-x86_64 segfaults reproducibly while
I'm trying to install Ubuntu 12.04 server.
You can find three backtraces here:
http://pastebin.com/raw.php?i=xCy2pEcP
Stefan
ah OK - thanks. Will there be a fixed 1.1.2 as well?
Stefan
On 08.08.2012 10:06, Stefan Hajnoczi wrote:
On Wed, Aug 08, 2012 at 07:51:07AM +0200, Stefan Priebe wrote:
Any news? Was this applied upstream?
Kevin is ill. He has asked me to review and test patches in his
absence. When he
Hello list,
I wanted to start using virtio-scsi instead of virtio-blk, because it
offers the possibility to use discard / trim support.
Kernel: 3.5.0 on host and guest
Qemu-kvm: 1.1.1 stable
But I'm not seeing the same, or nearly the same, speed:
virtio-scsi:
rand. 4k:
write: io=677628KB,
Yes, cache=none. Is there a bugfix for 1.1.1?
Stefan
On 08.08.2012 at 18:17, Paolo Bonzini pbonz...@redhat.com wrote:
On 08/08/2012 17:21, Stefan Priebe wrote:
Hello list,
I wanted to start using virtio-scsi instead of virtio-blk, because it
offers the possibility to use discard / trim
, Stefan Priebe wrote:
Hello list,
I wanted to start using virtio-scsi instead of virtio-blk, because it
offers the possibility to use discard / trim support.
Kernel: 3.5.0 on host and guest
Qemu-kvm: 1.1.1 stable
But I'm not seeing the same, or nearly the same, speed:
1) How did you start
Yes, that should be possible. The guest is Debian or Ubuntu. I couldn't find a tag for
v1.1.1, which I ran from source. So where should I start the bisect?
Stefan
On 09.08.2012 at 09:01, Paolo Bonzini pbonz...@redhat.com wrote:
On 09/08/2012 08:13, Stefan Priebe wrote:
I really would like to test
On 09/08/2012 09:07, Stefan Priebe wrote:
Yes, that should be possible. The guest is Debian or Ubuntu. I couldn't find a
tag for v1.1.1, which I ran from source. So where should I start the bisect?
You can start from the v1.1.0 tag.
Can you give the command line, perhaps it is enough to reproduce?
Paolo
Stefan
@writethrough: why not?
@libiscsi: Same speed problem with cache=none and with plain local LVM disks.
Stefan
On 09.08.2012 at 09:53, Paolo Bonzini pbonz...@redhat.com wrote:
On 09/08/2012 09:41, Stefan Priebe wrote:
-drive
file=iscsi://10.0.255.100/iqn.1986-03.com.sun:02:8a9019a4-4aa3
Hello list,
I tried to find out how to use trim / discard, so my storage
can free unused blocks.
But I wasn't able to find out which virtio block devices support trim /
discard and what else is needed.
Thanks and Greets,
Stefan
From: spriebe g...@profihost.ag
---
block/iscsi.c | 36
1 files changed, 20 insertions(+), 16 deletions(-)
diff --git a/block/iscsi.c b/block/iscsi.c
index 12ca76d..257f97f 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -76,6 +76,10 @@ static void
On 14.08.2012 16:08, Kevin Wolf wrote:
On 14.08.2012 14:11, Stefan Hajnoczi wrote:
On Tue, Aug 14, 2012 at 1:09 PM, ronnie sahlberg
ronniesahlb...@gmail.com wrote:
Is a reply with the text
Acked-by: Ronnie Sahlberg ronniesahlb...@gmail.com
sufficient?
Yes
But is this only meant as a
This patch fixes a race and some segfaults which I discovered while testing
scsi-generic
and unmapping with libiscsi.
The first problem is that in iscsi_aio_cancel both iscsi_scsi_task_cancel and
iscsi_task_mgmt_abort_task_async got called, but
iscsi_task_mgmt_abort_task_async already
calls
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
block/iscsi.c | 55 +++
1 files changed, 23 insertions(+), 32 deletions(-)
diff --git a/block/iscsi.c b/block/iscsi.c
index 12ca76d..1c8b049 100644
--- a/block/iscsi.c
+++ b/block
---
block/iscsi.c | 55 +++
1 files changed, 23 insertions(+), 32 deletions(-)
diff --git a/block/iscsi.c b/block/iscsi.c
index 12ca76d..1c8b049 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -76,6 +76,10 @@ static void
This patch fixes two main issues with block/iscsi.c:
1.) iscsi_task_mgmt_abort_task_async calls iscsi_scsi_task_cancel, which was
also directly
called in iscsi_aio_cancel
2.) a race between task completion and task abortion could happen because the
scsi_free_scsi_task
was done before
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
block/iscsi.c | 55 +++
1 files changed, 23 insertions(+), 32 deletions(-)
diff --git a/block/iscsi.c b/block/iscsi.c
index 12ca76d..1c8b049 100644
--- a/block/iscsi.c
+++ b/block
Hi Paolo,
On 18.08.2012 23:49, Paolo Bonzini wrote:
Hi Stefan,
this is my version of your patch. I think the flow of the code is a
bit simpler (or at least matches other implementations of cancellation).
Can you test it on your test case?
I'm really sorry but your patch doesn't work at all.
Hello list,
what is the status of CPU hotplug support?
I tried the latest 1.2rc1 qemu-kvm with a vanilla v3.5.2 kernel, but the VM
just crashes when sending cpu_set X online through the qm monitor.
Greets,
Stefan
On 30.08.2012 at 17:41, Andreas Färber afaer...@suse.de wrote:
Hello,
On 30.08.2012 11:06, Stefan Priebe wrote:
I tried the latest 1.2rc1 qemu-kvm with a vanilla v3.5.2 kernel, but the VM
just crashes when sending cpu_set X online through the qm monitor.
For SLES we're carrying a patch
On 30.08.2012 18:43, Andreas Färber wrote:
On 30.08.2012 18:35, Stefan Priebe wrote:
On 30.08.2012 at 17:41, Andreas Färber afaer...@suse.de wrote:
On 30.08.2012 11:06, Stefan Priebe wrote:
I tried the latest 1.2rc1 qemu-kvm with a vanilla v3.5.2 kernel, but the VM
just crashes when sending
On 30.08.2012 20:40, Igor Mammedov wrote:
On 30.08.2012 at 17:41, Andreas Färber afaer...@suse.de wrote:
On 30.08.2012 11:06, Stefan Priebe wrote:
I tried the latest 1.2rc1 qemu-kvm with a vanilla v3.5.2 kernel, but the VM
just crashes when sending cpu_set X online through the qm monitor
On 30.08.2012 20:56, Igor Mammedov wrote:
On Thu, 30 Aug 2012 20:45:10 +0200
Stefan Priebe s.pri...@profihost.ag wrote:
On 30.08.2012 20:40, Igor Mammedov wrote:
On 30.08.2012 at 17:41, Andreas Färber afaer...@suse.de wrote:
On 30.08.2012 11:06, Stefan Priebe wrote:
I tried the latest
On 25.10.2012 15:15, Orit Wasserman wrote:
Looks like a lot of cache misses; you can try increasing the cache size
(migrate_set_cache_size).
But you should remember that for an idle guest XBZRLE is wasteful;
it is useful for workloads that change the same memory pages frequently.
sure here
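To make Orit's suggestion concrete, here is a sketch of enabling XBZRLE and growing its cache over QMP (migrate_set_cache_size is the HMP spelling; the QMP counterpart of this era is migrate-set-cache-size). The socket path and the 512 MB value are illustrative assumptions, not values from the thread.

import json
import socket

def qmp(f, cmd, **args):
    # Send one QMP command and return the reply line.
    f.write(json.dumps({"execute": cmd, "arguments": args}) + "\n")
    f.flush()
    return json.loads(f.readline())

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/tmp/qmp.sock")   # hypothetical path
f = s.makefile("rw")
json.loads(f.readline())     # consume the greeting banner
qmp(f, "qmp_capabilities")
# XBZRLE only pays off for workloads that rewrite the same pages.
qmp(f, "migrate-set-capabilities",
    capabilities=[{"capability": "xbzrle", "state": True}])
# Cache size is given in bytes; 512 MB here.
qmp(f, "migrate-set-cache-size", value=512 * 1024 * 1024)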
On 06.11.2012 23:42, Paolo Bonzini wrote:
I wanted to use SCSI UNMAP with rbd. The rbd documentation says you need to
set discard_granularity=512 for the device. I'm using qemu 1.2.
If I set this and send an UNMAP command I get this kernel output:
The discard request is failing. Please check
Hi Paolo,
On 06.11.2012 23:42, Paolo Bonzini wrote:
I wanted to use SCSI UNMAP with rbd. The rbd documentation says you need to
set discard_granularity=512 for the device. I'm using qemu 1.2.
If I set this and send an UNMAP command I get this kernel output:
The discard request is failing.
From: Stefan Priebe s.pri...@profhost.ag
This one fixes a race qemu also had in the iscsi block driver between
cancellation and I/O completion.
qemu_rbd_aio_cancel was not synchronously waiting for the end of
the command.
It also removes the useless cancelled flag and instead introduces
a status
From Stefan Priebe s.pri...@profihost.ag # This line is ignored.
From: Stefan Priebe s.pri...@profihost.ag
Cc: pve-de...@pve.proxmox.com
Cc: pbonz...@redhat.com
Cc: ceph-de...@vger.kernel.org
Subject: QEMU/PATCH: rbd block driver: fix race between completion and cancel
In-Reply-To:
ve-de
rbd / rados quite often returns the length of writes
or discarded blocks. These values might be bigger than int.
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
block/rbd.c |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index
Hi Stefan,
On 20.11.2012 17:29, Stefan Hajnoczi wrote:
On Tue, Nov 20, 2012 at 01:44:55PM +0100, Stefan Priebe wrote:
rbd / rados quite often returns the length of writes
or discarded blocks. These values might be bigger than int.
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
When acb->cmd is WRITE or DISCARD, block/rbd stores rcb->size into acb->ret.
Look here:
if (acb->cmd == RBD_AIO_WRITE ||
    acb->cmd == RBD_AIO_DISCARD) {
    if (r < 0) {
        acb->ret = r;
        acb->error = 1;
    } else if (!acb->error) {
        acb->ret = rcb->size;
This one fixes a race which qemu also had in the iscsi block driver
between cancellation and I/O completion.
qemu_rbd_aio_cancel was not synchronously waiting for the end of
the command.
To achieve this it introduces a new status flag which uses
-EINPROGRESS.
Signed-off-by: Stefan Priebe s.pri
On 24.11.2012 20:54, Blue Swirl wrote:
On Thu, Nov 22, 2012 at 10:00 AM, Stefan Priebe s.pri...@profihost.ag wrote:
This one fixes a race which qemu also had in the iscsi block driver
between cancellation and I/O completion.
qemu_rbd_aio_cancel was not synchronously waiting for the end
Hi,
I tried to reproduce it... but wasn't able to...
Stefan
On 08.02.2013 11:43, Paolo Bonzini wrote:
On 08/02/2013 11:29, Stefan Priebe - Profihost AG wrote:
Hello list,
while testing current git master
(ecd8d4715ea33aa2c146a5047bacb031e86af599) I've seen a VM endlessly
looping
Hi,
On 13.02.2013 16:24, Paolo Bonzini wrote:
On 13/02/2013 15:30, Stefan Priebe - Profihost AG wrote:
I added this:
-trace events=/tmp/events,file=/root/qemu.123.trace
and put the events in the events file, as I couldn't handle \n in my app
starting the kvm process. But even when doing
Hello list,
I'm using qemu v1.3.0. Since upgrading from qemu-kvm 1.2 my 64bit Linux
guests hang while online migrating.
Does anybody have an idea / can anybody help?
Things happening while this occurs:
- the kvm process uses 100% CPU
- the human monitor does not work, as the process does not respond on the socket
-
While testing this again I also had 32bit guests where migration didn't
work - the kvm source process just eats all CPU.
Stefan
On 24.12.2012 00:18, Stefan Priebe wrote:
Hello list,
I'm using qemu v1.3.0. Since upgrading from qemu-kvm 1.2 my 64bit Linux
guests hang while online migrating.
Has
Hello list,
I'm using qemu 1.3, and migration works fine if I do not set
migrate_downtime.
If I set migrate_downtime to 1s, 0.5s, or 0.3s, the VM halts immediately;
I cannot even connect to the QMP socket anymore, and migration takes 5-10
minutes or never finishes.
I see high CPU usage on
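For reference, a sketch of how the downtime discussed here is set over QMP; migrate_set_downtime takes the value in seconds as a floating-point number. The socket path is an illustrative assumption.

import json
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/tmp/qmp.sock")   # hypothetical path
f = s.makefile("rw")
json.loads(f.readline())     # consume the greeting banner
for cmd, args in [("qmp_capabilities", {}),
                  ("migrate_set_downtime", {"value": 0.3})]:
    f.write(json.dumps({"execute": cmd, "arguments": args}) + "\n")
    f.flush()
    print(cmd, "->", json.loads(f.readline()))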
Hi Paolo,
On 28.12.2012 18:53, Paolo Bonzini wrote:
On 28/12/2012 08:05, Alexandre DERUMIER wrote:
Hi list,
After discussing with Stefan yesterday, here is some more info:
(this is for stable qemu 1.3; it was working fine with qemu 1.2)
The problem seems to be that when setting a
Hi Paolo,
On 29.12.2012 15:00, Paolo Bonzini wrote:
I cherry-picked that one on top of 1.3; sadly it does not help. The VM halts,
the monitor socket is no longer available, and the kvm process is running with 100%
CPU on the source side.
Can you please test master and, if it works, bisect it in reverse?
(That is,
On 29.12.2012 15:58, Paolo Bonzini wrote:
On 29/12/2012 15:05, Stefan Priebe wrote:
It starts working for me after the first 22 patches (after introducing
the new mutex and threading for writes).
And when does it break in 1.3?
I suppose it will be between
) at vl.c:4047
Stefan
On 29.12.2012 16:25, Paolo Bonzini wrote:
On 29/12/2012 16:19, Stefan Priebe wrote:
I suppose it will be between 05e72dc5812a9f461fc2c606dff2572909eafc39
and aa723c23147e93fef8475bd80fd29e633378c34d.
Probably at 2dddf6f4133975af62e64cb6406ec1239491fa89, which
missing braces
- added vfree for bounce
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
block/rbd.c | 27 ++-
1 file changed, 18 insertions(+), 9 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index 0aaacaf..917c64c 100644
--- a/block/rbd.c
+++ b/block/rbd.c
unnecessary if condition in rbd_start_aio, as we
haven't started I/O yet
- moved acb->status = 0 to rbd_aio_bh_cb so qemu_aio_wait always
waits until the BH was executed
Changes since PATCHv2:
- fixed missing braces
- added vfree for bounce
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
block
braces
- added vfree for bounce
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
block/rbd.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index f3becc7..3bc9c7a 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -77,6 +77,7 @@ typedef
Hi Paolo,
On 29.11.2012 16:23, Paolo Bonzini wrote:
+qemu_vfree(acb->bounce);
This vfree is not needed, since the BH will run and do the free.
new patch v5 sent.
Greets,
Stefan
to rbd_aio_bh_cb so qemu_aio_wait always
waits until the BH was executed
Changes since PATCHv2:
- fixed missing braces
- added vfree for bounce
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
block/rbd.c | 20
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/block
On 13.02.2014 21:06, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
On 10.02.2014 17:07, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
I could fix it by explicitly disabling xbzrle - it seems it's
automatically on if I do not set
got it here:
http://lists.nongnu.org/archive/html/qemu-devel/2014-02/msg02341.html
will try asap
On 13.02.2014 21:06, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
On 10.02.2014 17:07, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag
On 13.02.2014 21:06, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
On 10.02.2014 17:07, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
I could fix it by explicitly disabling xbzrle - it seems it's
automatically on if I do not set
On 14.02.2014 15:59, Stefan Hajnoczi wrote:
On Tue, Feb 11, 2014 at 07:32:46PM +0100, Stefan Priebe wrote:
On 11.02.2014 17:22, Peter Lieven wrote:
On 11.02.2014 at 16:44, Stefan Hajnoczi stefa...@gmail.com wrote:
On Tue, Feb 11, 2014 at 3:54 PM, Stefan Priebe - Profihost AG
s.pri
On 14.02.2014 16:03, Stefan Hajnoczi wrote:
On Tue, Feb 11, 2014 at 07:30:54PM +0100, Stefan Priebe wrote:
On 11.02.2014 16:44, Stefan Hajnoczi wrote:
On Tue, Feb 11, 2014 at 3:54 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
in the past (Qemu 1.5) a migration failed
Hello,
after live migrating machines with a lot of memory (32GB, 48GB, ...) I
pretty often see services crashing after migration, and the guest kernel
prints:
[1707620.031806] swap_free: Bad swap file entry 00377410
[1707620.031806] swap_free: Bad swap file entry 00593c48
[1707620.031807]
Hi,
On 06.02.2014 20:51, Dr. David Alan Gilbert wrote:
* Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
some more things which happen during migration:
php5.2[20258]: segfault at a0 ip 00740656 sp 7fff53b694a0
error 4 in php-cgi[40+6d7000]
php5.2[20249
On 07.02.2014 21:02, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
Is there anything I could try or debug to help find the problem?
I think the most useful would be to see if the problem is
a new problem in the 1.7 you're using or has existed
for a while; depending
Is there anything I could try or debug to help find the problem?
Stefan
On 07.02.2014 14:45, Stefan Priebe - Profihost AG wrote:
it's always the same pattern: there are too many 0s instead of Xs.
Only seen:
read:0x ... expected:0x
or
read:0x
I could fix it by explicitly disabling xbzrle - it seems it's automatically
on if I do not set the migration caps to false.
So it seems to be an xbzrle bug.
Stefan
On 07.02.2014 21:10, Stefan Priebe wrote:
On 07.02.2014 21:02, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri
On 10.02.2014 17:07, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
I could fix it by explicitly disabling xbzrle - it seems it's
automatically on if I do not set the migration caps to false.
So it seems to be an xbzrle bug.
Stefan, can you give me some more info
On 11.02.2014 16:44, Stefan Hajnoczi wrote:
On Tue, Feb 11, 2014 at 3:54 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
in the past (Qemu 1.5) a migration failed if there was not enough memory
available on the target host directly at the beginning.
Now with Qemu 1.7 I've seen
On 11.02.2014 17:22, Peter Lieven wrote:
On 11.02.2014 at 16:44, Stefan Hajnoczi stefa...@gmail.com wrote:
On Tue, Feb 11, 2014 at 3:54 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
in the past (Qemu 1.5) a migration failed if there was not enough memory
on the target host
On 09.05.2014 18:29, Dr. David Alan Gilbert wrote:
* Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
On 09.05.2014 at 15:41, Dr. David Alan Gilbert dgilb...@redhat.com wrote:
* Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
Hello list,
I was trying to migrate
On 09.05.2014 19:05, Paolo Bonzini wrote:
On 09/05/2014 15:13, Stefan Priebe - Profihost AG wrote:
I see no output at the monitor of Qemu 2.0.
# migrate -d tcp:a.b.c.d:6000
# info migrate
capabilities: xbzrle: off x-rdma-pin-all: off auto-converge: on
zero-blocks: off
Migration
Hello,
I mean, since using qemu 2.0 I've now seen the following
segfault several times:
(gdb) bt
#0 0x7f2af1196433 in event_notifier_set (e=0x124) at
util/event_notifier-posix.c:97
#1 0x7f2af0db1afc in aio_notify (ctx=0x0) at async.c:246
#2 0x7f2af0db1697 in qemu_bh_schedule
is this:
commit 271c0f68b4eae72691721243a1c37f46a3232d61
Author: Fam Zheng f...@redhat.com
Date: Wed May 21 10:42:13 2014 +0800
aio: Fix use-after-free in cancellation path
Stefan
On 28.05.2014 21:40, Stefan Priebe wrote:
Hello,
I mean, since using qemu 2.0 I've now seen several times
0x7f9dcdba513d in clone () from /lib/x86_64-linux-gnu/libc.so.6
#12 0x in ?? ()
On 28.05.2014 21:44, Stefan Priebe wrote:
is this:
commit 271c0f68b4eae72691721243a1c37f46a3232d61
Author: Fam Zheng f...@redhat.com
Date: Wed May 21 10:42:13 2014 +0800
aio: Fix use-after-free
in clone () from /lib/x86_64-linux-gnu/libc.so.6
#12 0x in ?? ()
On 28.05.2014 21:44, Stefan Priebe wrote:
is this:
commit 271c0f68b4eae72691721243a1c37f46a3232d61
Author: Fam Zheng f...@redhat.com
Date: Wed May 21 10:42:13 2014 +0800
aio: Fix use-after-free
On 02.06.2014 15:40, Stefan Hajnoczi wrote:
On Fri, May 30, 2014 at 04:10:39PM +0200, Stefan Priebe wrote:
even with
+From 271c0f68b4eae72691721243a1c37f46a3232d61 Mon Sep 17 00:00:00 2001
+From: Fam Zheng f...@redhat.com
+Date: Wed, 21 May 2014 10:42:13 +0800
+Subject: [PATCH] aio: Fix use
On 02.06.2014 22:45, Paolo Bonzini wrote:
On 02/06/2014 21:32, Stefan Priebe wrote:
#0 0x7f69e421c43f in event_notifier_set (e=0x124) at
util/event_notifier-posix.c:97
#1 0x7f69e3e37afc in aio_notify (ctx=0x0) at async.c:246
#2 0x7f69e3e37697 in qemu_bh_schedule (bh
On 24.02.2014 17:13, Eric Blake wrote:
On 02/24/2014 08:00 AM, Stefan Hajnoczi wrote:
What is the right way to check for enough free memory and memory
usage of a specific VM?
I would approach it in terms of guest RAM allocation plus QEMU overhead:
host_ram = num_guests *
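To make the truncated formula concrete, here is a back-of-the-envelope sketch with assumed numbers (the per-guest overhead and host headroom figures are guesses to be measured on the actual host, not values from the thread):

# Rough host-memory sizing: guest RAM allocation plus QEMU overhead.
num_guests = 10
guest_ram_mb = 4096        # -m 4096 per guest (assumed)
qemu_overhead_mb = 512     # per-process QEMU overhead: a guess, measure it
host_reserved_mb = 2048    # host OS / page cache headroom (assumed)

host_ram_mb = num_guests * (guest_ram_mb + qemu_overhead_mb) + host_reserved_mb
print(f"plan for at least {host_ram_mb / 1024:.1f} GB of host RAM")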
Hi,
while migrating a bunch of VMs I saw segfaults multiple times with qemu
2.1.2.
Is this a known bug?
Full backtrace:
Program terminated with signal 11, Segmentation fault.
#0 0x7ff9c73bca90 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) bt
#0 0x7ff9c73bca90 in ?? () from
On 16.02.2015 at 16:50, Andreas Färber wrote:
On 16.02.2015 at 16:41, Stefan Priebe - Profihost AG wrote:
On 16.02.2015 at 15:49, Paolo Bonzini pbonz...@redhat.com wrote:
On 16/02/2015 15:47, Stefan Priebe - Profihost AG wrote:
Could it be that this is a result of compiling qemu
On 16.02.2015 at 16:49, Paolo Bonzini wrote:
On 16/02/2015 16:41, Stefan Priebe - Profihost AG wrote:
Yes, just do nothing (--enable-debug-info is the default;
--enable-debug enables debug info _and_ turns off optimization).
If I do not use --enable-debug, dh_strip does not extract any debugging
Hi,
while I get a constant random 4k I/O write speed of 20,000 IOPS with
qemu 2.1.0 or 2.1.3, I get fluctuating speeds with qemu 2.2 (jumping between
500 IOPS and 15,000 IOPS).
If I use virtio instead of virtio-scsi, the speed is the same between 2.2 and
2.1.
Is there a known regression?
Greets,
Hi,
after upgrading the host kernel from 3.12 to 3.18, live migration fails
with the following qemu output (guest migrating from a host with 3.12 to a
host with 3.18):
kvm: Features 0x30afffe3 unsupported. Allowed features: 0x79bfbbe7
qemu: warning: error while loading state for instance 0x0 of
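The two masks in that error can be diffed directly to see which virtio feature bits the 3.18 target refuses; a small sketch (translating bit numbers to feature names requires the virtio headers, so only raw bit positions are printed):

# Feature bits requested by the incoming guest state but not allowed
# by the target QEMU/kernel.
requested = 0x30afffe3
allowed = 0x79bfbbe7

missing = requested & ~allowed
print(f"missing mask: {missing:#x}")                                # 0x4400
print("missing bits:", [b for b in range(64) if missing >> b & 1])  # [10, 14]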
Hi,
it started to work again with virtio 100 instead of 94. No idea why it
works with qemu 2.2.0.
Stefan
On 24.03.2015 at 12:15, Stefan Priebe - Profihost AG wrote:
On 24.03.2015 at 11:45, Paolo Bonzini wrote:
On 24/03/2015 11:39, Stefan Priebe - Profihost AG wrote:
after upgrading
On 13.05.2015 at 21:05, Stefan Weil wrote:
On 13.05.2015 at 20:59, Stefan Priebe wrote:
On 13.05.2015 at 20:51, Stefan Weil wrote:
Hi,
I just noticed this patch because my provider told me that my KVM-based
server
needs a reboot because of a CVE (see this German news:
http://www.heise.de
On 13.05.2015 at 20:51, Stefan Weil wrote:
Hi,
I just noticed this patch because my provider told me that my KVM-based
server
needs a reboot because of a CVE (see this German news:
On 13.05.2015 at 21:04, John Snow wrote:
On 05/13/2015 02:59 PM, Stefan Priebe wrote:
On 13.05.2015 at 20:51, Stefan Weil wrote:
Hi,
I just noticed this patch because my provider told me that my KVM-based
server
needs a reboot because of a CVE (see this German news:
http://www.heise.de
] kernel_init+0xe/0xf0
[0.195715] [816347a2] ret_from_fork+0x42/0x70
[0.195719] [8161f6a0] ? rest_init+0x80/0x80
[0.195729] ---[ end trace cf665146248feec1 ]---
Stefan
On 15.08.2015 at 20:44, Stefan Priebe wrote:
Hi,
while switching to a full tickless kernel I
Hi,
while switching to a full tickless kernel I detected that all our VMs
produce the following stack trace while running under qemu 2.3.0.
[0.195160] HPET: 3 timers in total, 0 timers will be used for
per-cpu timer
[0.195181] hpet0: at MMIO 0xfed0, IRQs 2, 8, 0
[0.195188]
On 15.07.2015 at 13:32, Andrey Korolyov wrote:
On Wed, Jul 15, 2015 at 2:20 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
Hi,
Is there a way to query the current cpu model / type of a running qemu
machine?
I mean host, kvm64, qemu64, ...
Stefan
I believe that the most
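One possible approach (a sketch, not necessarily what the truncated reply goes on to suggest): CPU objects in the QOM tree carry the model in their type name, so listing /machine/unattached over QMP and filtering for CPU-typed children can reveal it. The socket path is illustrative, and the /machine/unattached location can vary with the machine type.

import json
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/tmp/qmp.sock")   # hypothetical path
f = s.makefile("rw")
json.loads(f.readline())     # consume the greeting banner
for cmd, args in [("qmp_capabilities", {}),
                  ("qom-list", {"path": "/machine/unattached"})]:
    f.write(json.dumps({"execute": cmd, "arguments": args}) + "\n")
    f.flush()
    reply = json.loads(f.readline())

# CPU children show up with types like "child<qemu64-x86_64-cpu>".
for child in reply["return"]:
    if "-cpu" in child["type"]:
        print(child["name"], child["type"])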
On 15.07.2015 at 22:15, Andrey Korolyov wrote:
On Wed, Jul 15, 2015 at 11:07 PM, Stefan Priebe s.pri...@profihost.ag wrote:
On 15.07.2015 at 13:32, Andrey Korolyov wrote:
On Wed, Jul 15, 2015 at 2:20 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
Hi,
Is there a way
Hello,
Is there any chance or hack to work with a bigger cluster size for the
drive backup job?
See:
http://git.qemu.org/?p=qemu.git;a=blob;f=block/backup.c;h=16105d40b193be9bb40346027bdf58e62b956a96;hb=98d2c6f2cd80afaa2dc10091f5e35a97c181e4f5
This is very slow with ceph - maybe due to the
On 22.02.2016 at 18:36, Paolo Bonzini wrote:
On 20/02/2016 11:44, Stefan Priebe wrote:
Hi,
while testing kernel 4.4.2 and starting 20 Qemu 2.4.1 virtual machines,
I got those traces and a load of 500 on those systems. I was only able
to recover by sysrq-trigger.
It seems like something
On 25.02.2016 at 20:53, John Snow wrote:
On 02/25/2016 02:49 AM, Stefan Priebe - Profihost AG wrote:
On 22.02.2016 at 23:08, John Snow wrote:
On 02/22/2016 03:21 PM, Stefan Priebe wrote:
Hello,
Is there any chance or hack to work with a bigger cluster size for the
drive backup job
Hi,
while testing kernel 4.4.2 and starting 20 Qemu 2.4.1 virtual machines,
I got those traces and a load of 500 on those systems. I was only able
to recover by sysrq-trigger.
All traces:
INFO: task pvedaemon worke:7470 blocked for more than 120 seconds.
Not tainted 4.4.2+1-ph #1
"echo 0 >
Hi Josh, hi Stefan,
On 14.05.2013 17:05, Stefan Hajnoczi wrote:
On Tue, May 14, 2013 at 4:29 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
On 10.05.2013 13:09, Stefan Hajnoczi wrote:
On Fri, May 10, 2013 at 11:07 AM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag
On 22.05.2013 at 10:41, Paolo Bonzini pbonz...@redhat.com wrote:
On 22/05/2013 08:26, Stefan Priebe - Profihost AG wrote:
Hi,
as I can't reproduce it, no ;-( I just saw the kernel segfault message and
used addr2line and a qemu dbg package to get the code line.
I've now seen this again
On 24.05.2013 at 15:23, Luiz Capitulino lcapitul...@redhat.com wrote:
On Fri, 24 May 2013 07:50:33 +0200
Stefan Priebe s.pri...@profihost.ag wrote:
Hello list,
since upgrading from qemu 1.4.1 to 1.5.0 I've had problems with QMP commands.
With Qemu 1.5 I see the following socket