got it here:
http://lists.nongnu.org/archive/html/qemu-devel/2014-02/msg02341.html
Will try it ASAP.
On 13.02.2014 21:06, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
On 10.02.2014 17:07, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag
On 13.02.2014 21:06, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
On 10.02.2014 17:07, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
I could fix it by explicitly disabling xbzrle - it seems it's
automatically on if I do not set
On 11.02.2014 14:32, Orit Wasserman wrote:
On 02/08/2014 09:23 PM, Stefan Priebe wrote:
I could fix it by explicitly disabling xbzrle - it seems it's
automatically on if I do not set the migration caps to false.
So it seems to be an xbzrle bug.
XBZRLE is disabled by default (actually all
On 11.02.2014 14:45, Orit Wasserman wrote:
On 02/11/2014 03:33 PM, Stefan Priebe - Profihost AG wrote:
On 11.02.2014 14:32, Orit Wasserman wrote:
On 02/08/2014 09:23 PM, Stefan Priebe wrote:
I could fix it by explicitly disabling xbzrle - it seems it's
automatically on if I do not set
Hello,
in the past (Qemu 1.5) a migration failed if there was not enough memory
available on the target host right at the beginning.
Now with Qemu 1.7 I've seen migrations succeed but the kernel OOM
killer killing qemu processes. So the migration seems to take
place without having
On 11.02.2014 16:44, Stefan Hajnoczi wrote:
On Tue, Feb 11, 2014 at 3:54 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
in the past (Qemu 1.5) a migration failed if there was not enough memory
on the target host available right at the beginning.
Now with Qemu 1.7 I've seen
On 11.02.2014 17:22, Peter Lieven wrote:
On 11.02.2014 at 16:44, Stefan Hajnoczi stefa...@gmail.com wrote:
On Tue, Feb 11, 2014 at 3:54 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
in the past (Qemu 1.5) a migration failed if there was not enough memory
on the target host
On 10.02.2014 17:07, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
I could fix it by explicitly disabling xbzrle - it seems it's
automatically on if I do not set the migration caps to false.
So it seems to be an xbzrle bug.
Stefan, can you give me some more info
I could fix it by explicitly disabling xbzrle - it seems it's automatically
on if I do not set the migration caps to false.
So it seems to be an xbzrle bug.
Stefan
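The workaround described here, explicitly turning the capability off, can be issued over QMP; a sketch using the `migrate-set-capabilities` command (the capability name `xbzrle` is the one QEMU's QMP interface uses):

```json
{"execute": "migrate-set-capabilities",
 "arguments": {"capabilities": [{"capability": "xbzrle", "state": false}]}}
```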
On 07.02.2014 21:10, Stefan Priebe wrote:
On 07.02.2014 21:02, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri
On 07.02.2014 09:15, Alexandre DERUMIER wrote:
do you use xbzrle for live migration?
no - I'm really stuck right now with this. Biggest problem: I can't
reproduce with test machines ;-(
Stefan
- Original message -
From: Stefan Priebe s.pri...@profihost.ag
To: Dr. David Alan
On 07.02.2014 10:15, Dr. David Alan Gilbert wrote:
* Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
On 07.02.2014 09:15, Alexandre DERUMIER wrote:
do you use xbzrle for live migration?
no - I'm really stuck right now with this. Biggest problem: I can't
reproduce
Hi,
On 07.02.2014 10:29, Marcin Gibuła wrote:
do you use xbzrle for live migration?
no - I'm really stuck right now with this. Biggest problem: I can't
reproduce with test machines ;-(
Only being able to test on your production VMs isn't fun;
is it possible for you to run an extra
On 07.02.2014 10:31, Dr. David Alan Gilbert wrote:
* Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
On 07.02.2014 10:15, Dr. David Alan Gilbert wrote:
* Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
On 07.02.2014 09:15, Alexandre DERUMIER wrote:
do you use
at 8242.44MB/s
Stats: Disk: 0.00M at 0.00MB/s
Status: FAIL - test discovered HW problems
---
Stefan
On 07.02.2014 10:37, Stefan Priebe - Profihost AG wrote:
On 07.02.2014 10:31, Dr. David Alan Gilbert wrote
Hi,
On 07.02.2014 13:21, Dr. David Alan Gilbert wrote:
* Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
Hi,
I was able to reproduce with a longer-running test VM running the Google
stress test.
Hmm that's quite a fun set of differences; I think I'd like
to understand
Hi,
On 07.02.2014 13:44, Paolo Bonzini wrote:
On 07/02/2014 13:30, Stefan Priebe - Profihost AG wrote:
I was able to reproduce with a longer-running test VM running the
Google
stress test.
Hmm that's quite a fun set of differences; I think I'd like
to understand whether
Hi,
On 07.02.2014 14:08, Dr. David Alan Gilbert wrote:
* Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
Hi,
On 07.02.2014 13:44, Paolo Bonzini wrote:
On 07/02/2014 13:30, Stefan Priebe - Profihost AG wrote:
I was able to reproduce with a longer-running test VM running
Hi,
On 07.02.2014 14:15, Dr. David Alan Gilbert wrote:
* Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
Hi,
On 07.02.2014 14:08, Dr. David Alan Gilbert wrote:
* Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
first of all I've now a memory image of a VM where
Hi,
On 07.02.2014 14:19, Paolo Bonzini wrote:
On 07/02/2014 14:04, Stefan Priebe - Profihost AG wrote:
first of all I've now a memory image of a VM where I can reproduce it.
You mean you start that VM with -incoming 'exec:cat /path/to/vm.img'?
But the Google stress test doesn't report any
:0xb5b5b5b5b5b5b5b5
no idea if this helps.
Stefan
On 07.02.2014 14:39, Stefan Priebe - Profihost AG wrote:
Hi,
On 07.02.2014 14:19, Paolo Bonzini wrote:
On 07/02/2014 14:04, Stefan Priebe - Profihost AG wrote:
first of all I've now a memory image of a VM where I can reproduce it.
You
On 07.02.2014 21:02, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
Anything I could try or debug to help find the problem?
I think the most useful would be to see if the problem is
a new problem in the 1.7 you're using or has existed
for a while; depending
Anything I could try or debug to help find the problem?
Stefan
On 07.02.2014 14:45, Stefan Priebe - Profihost AG wrote:
It's always the same pattern: there are too many 0s instead of Xs.
only seen:
read:0x ... expected:0x
or
read:0x
On 06.02.2014 11:22, Orit Wasserman wrote:
On 02/06/2014 09:20 AM, Stefan Priebe - Profihost AG wrote:
On 05.02.2014 21:15, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
Hello,
after live migrating machines with a lot of memory (32GB, 48GB, ...)
I see
- Original message -
From: Stefan Priebe s.pri...@profihost.ag
To: pve-de...@pve.proxmox.com, qemu-devel qemu-devel@nongnu.org
Sent: Wednesday, 5 February 2014 18:51:15
Subject: [pve-devel] QEMU Live Migration - swap_free: Bad swap file entry
Hello,
after live migrating
://lkml.indiana.edu/hypermail/linux/kernel/1106.3/01340.html
Maybe it is a guest kernel bug?
- Original message -
From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: pve-de...@pve.proxmox.com, qemu-devel qemu-devel@nongnu.org
Sent: Thursday, 6
at 7f0008a70ed4 ip 7fc890b9d440 sp
7fff08a6f9b0 error 4 in libc-2.13.so[7fc890b67000+182000]
Stefan
On 06.02.2014 13:10, Stefan Priebe - Profihost AG wrote:
Maybe,
sadly I've no idea. Only using a 3.10 kernel with XFS.
Stefan
On 06.02.2014 12:40, Alexandre DERUMIER wrote:
PS: all
Hi,
On 06.02.2014 20:51, Dr. David Alan Gilbert wrote:
* Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
some more things which happen during migration:
php5.2[20258]: segfault at a0 ip 00740656 sp 7fff53b694a0
error 4 in php-cgi[40+6d7000]
php5.2[20249
Hello,
after live migrating machines with a lot of memory (32GB, 48GB, ...) I
pretty often see crashing services after migration and the guest kernel
prints:
[1707620.031806] swap_free: Bad swap file entry 00377410
[1707620.031806] swap_free: Bad swap file entry 00593c48
[1707620.031807]
On 05.02.2014 21:15, Dr. David Alan Gilbert wrote:
* Stefan Priebe (s.pri...@profihost.ag) wrote:
Hello,
after live migrating machines with a lot of memory (32GB, 48GB, ...)
I pretty often see crashing services after migration and the guest
kernel prints:
[1707620.031806] swap_free: Bad
On 31.05.2013 13:02, Amos Kong wrote:
...
thanks for this great explanation. I've done what you said but it still
does not work.
Here is the output of the seabios debug log where you see the loop:
http://pastebin.com/raw.php?i=e53rdW2b
| found virtio-scsi at 0:5
| Searching bootorder
On 31.05.2013 00:51, Amos Kong wrote:
On Thu, May 30, 2013 at 10:30:21PM +0200, Stefan Priebe wrote:
On 30.05.2013 15:13, Amos Kong wrote:
On Thu, May 30, 2013 at 02:09:25PM +0200, Stefan Priebe - Profihost AG
wrote:
On 29.05.2013 09:56, Amos Kong wrote:
Recent virtio refactoring
moving
CHR_EVENT_OPENED
in-band with connection establishment as a general solution, but fixes
QMP for the time being.
Reported-by: Stefan Priebe s.pri...@profihost.ag
Cc: qemu-sta...@nongnu.org
Signed-off-by: Michael Roth mdr...@linux.vnet.ibm.com
Thanks for debugging this, Michael. I'm
On 29.05.2013 09:56, Amos Kong wrote:
Recent virtio refactoring in QEMU made virtio-bus become the parent bus
of scsi-bus, and virtio-bus doesn't have get_fw_dev_path implementation,
typename will be added to fw_dev_path by default, the new fw_dev_path
could not be identified by seabios. It
On 30.05.2013 15:13, Amos Kong wrote:
On Thu, May 30, 2013 at 02:09:25PM +0200, Stefan Priebe - Profihost AG wrote:
On 29.05.2013 09:56, Amos Kong wrote:
Recent virtio refactoring in QEMU made virtio-bus become the parent bus
of scsi-bus, and virtio-bus doesn't have get_fw_dev_path
On 10.05.2013 13:09, Stefan Hajnoczi wrote:
On Fri, May 10, 2013 at 11:07 AM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
On 10.05.2013 09:42, Stefan Hajnoczi wrote:
On Fri, May 10, 2013 at 08:12:39AM +0200, Stefan Priebe - Profihost AG wrote:
3. Either use gdb
On 26.05.2013 03:23, mdroth wrote:
On Sat, May 25, 2013 at 01:09:50PM +0200, Stefan Priebe wrote:
On 25.05.2013 00:32, mdroth wrote:
On Sat, May 25, 2013 at 12:12:22AM +0200, Stefan Priebe wrote:
On 25.05.2013 00:09, mdroth wrote:
I would try to create a small example script.
I use
On 26.05.2013 17:36, mdroth wrote:
On Sun, May 26, 2013 at 05:13:36PM +0200, Stefan Priebe wrote:
On 26.05.2013 03:23, mdroth wrote:
On Sat, May 25, 2013 at 01:09:50PM +0200, Stefan Priebe wrote:
On 25.05.2013 00:32, mdroth wrote:
On Sat, May 25, 2013 at 12:12:22AM +0200, Stefan Priebe
On 25.05.2013 00:32, mdroth wrote:
On Sat, May 25, 2013 at 12:12:22AM +0200, Stefan Priebe wrote:
On 25.05.2013 00:09, mdroth wrote:
I would try to create a small example script.
I use qmp-shell and other little scripts very often.
Can this be due to the fact that I don't wait
On 24.05.2013 at 15:23, Luiz Capitulino lcapitul...@redhat.com wrote:
On Fri, 24 May 2013 07:50:33 +0200
Stefan Priebe s.pri...@profihost.ag wrote:
Hello list,
since upgrading from qemu 1.4.1 to 1.5.0 I've had problems with qmp commands.
With Qemu 1.5 I have the following socket
On 24.05.2013 at 16:02, Luiz Capitulino lcapitul...@redhat.com wrote:
On Fri, 24 May 2013 15:57:59 +0200
Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote:
On 24.05.2013 at 15:23, Luiz Capitulino lcapitul...@redhat.com wrote:
On Fri, 24 May 2013 07:50:33 +0200
Stefan Priebe
found}}
{"id": "12125:1", "error": {"class": "CommandNotFound", "desc": "The
command qmp_capabilities has not been found"}}
Stefan
On 24.05.2013 17:21, Luiz Capitulino wrote:
On Fri, 24 May 2013 16:36:26 +0200
Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote:
On 24.05.2013 at 16:02, Luiz
On 24.05.2013 17:21, Luiz Capitulino wrote:
On Fri, 24 May 2013 16:36:26 +0200
Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote:
On 24.05.2013 at 16:02, Luiz Capitulino lcapitul...@redhat.com wrote:
On Fri, 24 May 2013 15:57:59 +0200
Stefan Priebe - Profihost AG s.pri
Kind regards
Stefan Priebe
Bachelor of Science in Computer Science (BSCS)
Vorstand (CTO)
---
Profihost AG
Am Mittelfelde 29
30519 Hannover
Deutschland
Tel.: +49 (511) 5151 8181 | Fax.: +49 (511) 5151 8282
URL: http://www.profihost.com | E-Mail: i
On 24.05.2013 23:37, Stefan Priebe wrote:
On 24.05.2013 17:21, Luiz Capitulino wrote:
On Fri, 24 May 2013 16:36:26 +0200
Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote:
On 24.05.2013 at 16:02, Luiz Capitulino
lcapitul...@redhat.com wrote:
On Fri, 24 May 2013 15:57:59 +0200
On 25.05.2013 00:09, mdroth wrote:
I would try to create a small example script.
I use qmp-shell and other little scripts very often.
Can this be due to the fact that I don't wait for the welcome banner
right now?
If you're not reading from the socket, then you'll get the banner back
when
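For context: a QMP client is expected to read the server's greeting and negotiate capabilities before sending other commands. A minimal exchange might look like this (greeting fields abbreviated; S = server, C = client):

```
S: {"QMP": {"version": {...}, "capabilities": []}}
C: {"execute": "qmp_capabilities"}
S: {"return": {}}
```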
On 23.05.2013 12:09, Paolo Bonzini wrote:
On 22/05/2013 14:24, Stefan Priebe - Profihost AG wrote:
On 22.05.2013 at 10:41, Paolo Bonzini pbonz...@redhat.com wrote:
On 22/05/2013 08:26, Stefan Priebe - Profihost AG wrote:
Hi,
as I can't reproduce, no ;-( I just saw the kernel
Hello list,
since upgrading from qemu 1.4.1 to 1.5.0 I've had problems with qmp commands.
With Qemu 1.5 I have the following socket communication:
'{"execute":"qmp_capabilities","id":"12125:1","arguments":{}}'
'{"return": {}, "id": "12125:1"}'
Hi Josh, hi Stefan,
On 14.05.2013 17:05, Stefan Hajnoczi wrote:
On Tue, May 14, 2013 at 4:29 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
On 10.05.2013 13:09, Stefan Hajnoczi wrote:
On Fri, May 10, 2013 at 11:07 AM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag
On 22.05.2013 at 10:41, Paolo Bonzini pbonz...@redhat.com wrote:
On 22/05/2013 08:26, Stefan Priebe - Profihost AG wrote:
Hi,
as I can't reproduce, no ;-( I just saw the kernel segfault message and
used addr2line and a qemu dbg package to get the code line.
I've now seen this again
On 10.05.2013 13:09, Stefan Hajnoczi wrote:
On Fri, May 10, 2013 at 11:07 AM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
On 10.05.2013 09:42, Stefan Hajnoczi wrote:
On Fri, May 10, 2013 at 08:12:39AM +0200, Stefan Priebe - Profihost AG
wrote:
3. Either use gdb
On 14.05.2013 17:05, Stefan Hajnoczi wrote:
On Tue, May 14, 2013 at 4:29 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
On 10.05.2013 13:09, Stefan Hajnoczi wrote:
On Fri, May 10, 2013 at 11:07 AM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
On 10.05.2013 09
Hello list,
I've now seen this several times. A VM is suddenly down, no segfault,
nothing, the kvm process just disappears...
Anybody any idea how to debug this?
Sadly I can't reproduce. Qemu version is 1.4.1.
Greets Stefan
On 10.05.2013 09:20, Alexandre DERUMIER wrote:
Just an idea, maybe are you out of memory and process are killed ?
nothing in logs ?
140GB free mem also nothing in dmesg... which logs did you mean?
Stefan
- Original message -
From: Stefan Priebe - Profihost AG s.pri
On 10.05.2013 09:28, Alexandre DERUMIER wrote:
140GB free mem also nothing in dmesg... which logs did you mean?
I thought of /var/log/messages, logs with the OOM killer. But that seems not to be your
case ;)
Nothing...
Do you use HA ?
No
Stefan
On 10.05.2013 09:42, Stefan Hajnoczi wrote:
On Fri, May 10, 2013 at 08:12:39AM +0200, Stefan Priebe - Profihost AG wrote:
I've now seen this several times. A VM is suddenly down, no segfault,
nothing, the kvm process just disappears...
Anybody any idea how to debug this?
Sadly I can't
Another hint: I've never seen this using qemu 1.3.1
Stefan
On 13.02.2013 08:49, Stefan Priebe - Profihost AG wrote:
Hi Paolo,
sadly no luck. A VM crashed again.
[ ~]# addr2line -e /usr/lib/debug/usr/bin/kvm -f 24040c
virtio_scsi_command_complete
hw/virtio-scsi.c:429
Same point
that in-flight I/O is cancelled.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
Signed-off-by: Anthony Liguori aligu...@us.ibm.com
Greets,
Stefan
On 13.02.2013 09:01, Stefan Priebe - Profihost AG wrote:
Another hint: I've never seen this using qemu 1.3.1
Stefan
On 13.02.2013 08:49
Hi,
On 13.02.2013 09:57, Paolo Bonzini wrote:
On 13/02/2013 09:19, Stefan Priebe - Profihost AG wrote:
Hi,
could this be this one?
commit 47a150a4bbb06e45ef439a8222e9f46a7c4cca3f
...
You can certainly try reverting it, but this patch is fixing a real bug.
Will try that. Yes but even
Hi,
On 13.02.2013 12:36, Paolo Bonzini wrote:
On 13/02/2013 10:07, Stefan Priebe - Profihost AG wrote:
commit 47a150a4bbb06e45ef439a8222e9f46a7c4cca3f
...
You can certainly try reverting it, but this patch is fixing a real bug.
Will try that. Yes but even if it fixes a bug and raises
Hi Paolo,
thanks for your work. Should I still apply your old patch to scsi-disk
or should I remove it?
Stefan
On 13.02.2013 14:39, Paolo Bonzini wrote:
On 13/02/2013 13:55, Stefan Priebe - Profihost AG wrote:
Hi,
On 13.02.2013 12:36, Paolo Bonzini wrote:
On 13/02/2013 10:07, Stefan
Output of cat:
[: ~]# cat /sys/block/*/device/scsi_disk/*/provisioning_mode
writesame_16
Stefan
On 13.02.2013 14:39, Paolo Bonzini wrote:
On 13/02/2013 13:55, Stefan Priebe - Profihost AG wrote:
Hi,
On 13.02.2013 12:36, Paolo Bonzini wrote:
On 13/02/2013 10:07, Stefan Priebe
Bonzini wrote:
On 13/02/2013 13:55, Stefan Priebe - Profihost AG wrote:
Hi,
On 13.02.2013 12:36, Paolo Bonzini wrote:
On 13/02/2013 10:07, Stefan Priebe - Profihost AG wrote:
commit 47a150a4bbb06e45ef439a8222e9f46a7c4cca3f
...
You can certainly try reverting it, but this patch is fixing
Hi,
On 13.02.2013 16:24, Paolo Bonzini wrote:
On 13/02/2013 15:30, Stefan Priebe - Profihost AG wrote:
I added this:
-trace events=/tmp/events,file=/root/qemu.123.trace
and put the events in the events file as I couldn't handle \n in my app
starting the kvm process. But even when doing
Hi,
no VM crashed this morning.
Stefan
On 13.02.2013 16:24, Paolo Bonzini wrote:
On 13/02/2013 15:30, Stefan Priebe - Profihost AG wrote:
I added this:
-trace events=/tmp/events,file=/root/qemu.123.trace
and put the events in the events file as I couldn't handle \n in my app
Hi,
thanks - I applied the patch to the latest master. I hope that this will
solve my issue. Will this one get integrated in 1.4 final?
Greets,
Stefan
On 11.02.2013 15:42, Paolo Bonzini wrote:
On 11/02/2013 15:18, Stefan Priebe - Profihost AG wrote:
Some trace that a request
, Stefan Priebe - Profihost AG wrote:
Hi,
thanks - I applied the patch to the latest master. I hope that this will
solve my issue. Will this one get integrated in 1.4 final?
Hello list,
I've seen segfaults of the kvm process. Sadly I've no core dumps, just
the line from dmesg:
kvm[26268]: segfault at c050 ip 7fcfc3465eac sp 7fffe85a0d00
error 4 in kvm[7fcfc3223000+3ba000]
Is it possible to get the function and some more details out of this
line? I've symbol
Hi Stefan,
On 11.02.2013 10:40, Stefan Hajnoczi wrote:
On Mon, Feb 11, 2013 at 08:46:03AM +0100, Stefan Priebe - Profihost AG wrote:
I've seen segfaults of the kvm process. Sadly I've no core dumps, just
the line from dmesg:
kvm[26268]: segfault at c050 ip 7fcfc3465eac sp 7fffe85a0d00
So it looks a bit like a race condition in the virtio-scsi driver.
Command got cancelled and then completed, or something like this.
Stefan
On 11.02.2013 10:40, Stefan Hajnoczi wrote:
On Mon, Feb 11, 2013 at 08:46:03AM +0100, Stefan Priebe - Profihost AG wrote:
I've seen segfaults of the kvm
Hi,
On 11.02.2013 13:48, Paolo Bonzini wrote:
On 11/02/2013 10:48, Stefan Priebe - Profihost AG wrote:
req->resp.cmd->status = status;
if (req->resp.cmd->status == GOOD) {
    req->resp.cmd->resid = tswap32(resid);
} else {
    req->resp.cmd->resid = 0;
    sense_len
Hi Stefan,
yes, I use virtio-scsi-pci in all my guests, as it is the only one where
I can use fstrim from guest to storage with rbd ;-)
Stefan
On 11.02.2013 14:21, Stefan Hajnoczi wrote:
On Mon, Feb 11, 2013 at 2:08 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
Hi,
On
Hi Paolo,
as the guest crashes I can't check the guest. On the host I just have
the segmentation fault line. Anything else is from the boot process or
enabling the tap device. So nothing suspicious.
Greets,
Stefan
On 11.02.2013 14:56, Paolo Bonzini wrote:
On 11/02/2013 14:35, Stefan Priebe
Hi,
nothing. What are you searching for?
Stefan
On 11.02.2013 14:59, Paolo Bonzini wrote:
On 11/02/2013 14:58, Stefan Priebe - Profihost AG wrote:
Hi Paolo,
as the guest crashes I can't check the guest. On the host I just have
the segmentation fault line. Anything else is from
)
Stefan
On 11.02.2013 15:12, Paolo Bonzini wrote:
On 11/02/2013 15:02, Stefan Priebe - Profihost AG wrote:
Hi,
nothing. What are you searching for?
Some trace that a request was actually cancelled, but I think I believe
that. This seems to be the same issue as commits
Hello list,
until now all my bridges were on top of a raw ethernet device. My tap
devices worked fine.
Now I moved my bridge on top of a bond, but then no tap device is working.
My old Ubuntu/Debian style interfaces file looked like this:
auto vmbr0
iface vmbr0 inet manual
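For comparison, a bond-backed variant in the same Ubuntu/Debian interfaces style might look like this (a sketch only; the interface names, bond mode, and address are illustrative assumptions, not taken from the original setup):

```
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
```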
Hi,
tried to reproduce... but wasn't able to...
Stefan
On 08.02.2013 11:43, Paolo Bonzini wrote:
On 08/02/2013 11:29, Stefan Priebe - Profihost AG wrote:
Hello list,
while testing current git master
(ecd8d4715ea33aa2c146a5047bacb031e86af599) I've seen a VM endlessly
looping
Hello list,
while testing current git master
(ecd8d4715ea33aa2c146a5047bacb031e86af599) I've seen a VM endlessly
looping - the monitor socket also did not answer anymore while
migrating. The VM was under very low load, but the migration never
finishes and the VM was not accessible anymore. Is this a
Hi,
uh, might be pretty difficult to provide reproduction instructions as I
don't know how to reproduce ;-)
Might attaching gdb and providing a backtrace of all threads help, while
having debugging symbols?
Stefan
On 08.02.2013 11:43, Paolo Bonzini wrote:
On 08/02/2013 11:29, Stefan Priebe
On 08.02.2013 12:53, Paolo Bonzini wrote:
On 08/02/2013 12:25, Stefan Priebe - Profihost AG wrote:
uh, might be pretty difficult to provide reproduction instructions as I
don't know how to reproduce ;-)
Might attaching gdb and providing a backtrace of all threads help, while
having
) at vl.c:4047
Stefan
On 29.12.2012 16:25, Paolo Bonzini wrote:
On 29/12/2012 16:19, Stefan Priebe wrote:
I suppose it will be between 05e72dc5812a9f461fc2c606dff2572909eafc39
and aa723c23147e93fef8475bd80fd29e633378c34d.
Probably at 2dddf6f4133975af62e64cb6406ec1239491fa89, which
Hi Paolo,
On 29.12.2012 15:00, Paolo Bonzini wrote:
I cherry-picked that one on top of 1.3; sadly it does not help. The VM halts,
the monitor socket is no longer available, the kvm process is running with 100%
CPU on the source side.
Can you please test master and, if it works, bisect it in reverse?
(That is,
On 29.12.2012 15:58, Paolo Bonzini wrote:
On 29/12/2012 15:05, Stefan Priebe wrote:
It starts working for me after the first 22 patches (after introducing
the new mutex and threading for writes).
And when does it break in 1.3?
I suppose it will be between
Hi Paolo,
On 28.12.2012 18:53, Paolo Bonzini wrote:
On 28/12/2012 08:05, Alexandre DERUMIER wrote:
Hi list,
After discuss with Stefan Yesterday here some more info:
(this is for stable qemu 1.3, it was working fine with qemu 1.2)
The problem seems to be that when setting a
Hello list,
I'm using qemu 1.3 and migration works fine if I do not set
migrate_downtime.
If I set migrate_downtime to 1s or 0.5s or 0.3s, the VM halts immediately,
I cannot even connect to the qmp socket anymore, and migration takes 5-10
minutes or never finishes.
I see high cpu usage on
While testing this again I also had 32-bit guests where migrate didn't
work - the kvm source process just eats all CPU.
Stefan
On 24.12.2012 00:18, Stefan Priebe wrote:
Hello list,
I'm using qemu v1.3.0. Since upgrading from kvm qemu 1.2, my 64-bit Linux
guests hang while online migrating.
Has
Hello list,
I'm using qemu v1.3.0. Since upgrading from kvm qemu 1.2, my 64-bit Linux
guests hang while online migrating.
Has anybody an idea / can help?
Things happening while this happens:
- kvm process uses 100% CPU
- the human monitor does not work, as the process does not respond on the socket
-
to rbd_aio_bh_cb so qemu_aio_wait always
waits until BH was executed
Changes since PATCHv2:
- fixed missing braces
- added vfree for bounce
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
block/rbd.c | 20
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/block
fixed in V6
On 30.11.2012 09:26, Stefan Hajnoczi wrote:
On Thu, Nov 29, 2012 at 10:37 PM, Stefan Priebe s.pri...@profihost.ag wrote:
@@ -568,6 +562,10 @@ static void qemu_rbd_aio_cancel(BlockDriverAIOCB *blockacb)
{
RBDAIOCB *acb = (RBDAIOCB *) blockacb;
acb->cancelled = 1
missing braces
- added vfree for bounce
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
block/rbd.c | 27 ++-
1 file changed, 18 insertions(+), 9 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index 0aaacaf..917c64c 100644
--- a/block/rbd.c
+++ b/block/rbd.c
unnecessary if condition in rbd_start_aio as we
haven't started io yet
- moved acb->status = 0 to rbd_aio_bh_cb so qemu_aio_wait always
waits until BH was executed
Changes since PATCHv2:
- fixed missing braces
- added vfree for bounce
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
block
Hi,
I hope I've done everything correctly. I've sent a new v4 patch.
On 29.11.2012 14:58, Stefan Hajnoczi wrote:
On Thu, Nov 22, 2012 at 11:00:19AM +0100, Stefan Priebe wrote:
@@ -406,10 +401,11 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
acb->ret = r
braces
- added vfree for bounce
Signed-off-by: Stefan Priebe s.pri...@profihost.ag
---
block/rbd.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index f3becc7..3bc9c7a 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -77,6 +77,7 @@ typedef
Hi Paolo,
On 29.11.2012 16:23, Paolo Bonzini wrote:
+qemu_vfree(acb->bounce);
This vfree is not needed, since the BH will run and do the free.
new patch v5 sent.
Greets,
Stefan
On 24.11.2012 20:54, Blue Swirl wrote:
On Thu, Nov 22, 2012 at 10:00 AM, Stefan Priebe s.pri...@profihost.ag wrote:
This one fixes a race which qemu also had in the iscsi block driver
between cancellation and io completion.
qemu_rbd_aio_cancel was not synchronously waiting for the end
to patch.
Thanks!
Greets,
Stefan
On 23.11.2012 15:15, Peter Maydell wrote:
On 23 November 2012 14:11, Stefan Hajnoczi stefa...@gmail.com wrote:
On Thu, Nov 22, 2012 at 10:07 AM, Stefan Priebe s.pri...@profihost.ag wrote:
diff --git a/block/rbd.c b/block/rbd.c
index 5a0f79f..0384c6c 100644
On 21.11.2012 23:32, Peter Maydell wrote:
On 21 November 2012 17:03, Stefan Weil s...@weilnetz.de wrote:
Why do you use int64_t instead of off_t?
If the value is related to file sizes, off_t would be a good choice.
Looking at the librbd API (which is what the size and ret
values come from),
When acb->cmd is WRITE or DISCARD, block/rbd stores rcb->size into acb->ret.
Look here:
if (acb->cmd == RBD_AIO_WRITE ||
    acb->cmd == RBD_AIO_DISCARD) {
    if (r < 0) {
        acb->ret = r;
        acb->error = 1;
    } else if (!acb->error) {
        acb->ret = rcb->size;
Hello,
I sent a new patch using ssize_t. (Subject [PATCH] overflow of int ret:
use ssize_t for ret)
Stefan
On 22.11.2012 09:40, Peter Maydell wrote:
On 22 November 2012 08:23, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
On 21.11.2012 23:32, Peter Maydell wrote:
Looking
This one fixes a race which qemu also had in the iscsi block driver
between cancellation and io completion.
qemu_rbd_aio_cancel was not synchronously waiting for the end of
the command.
To achieve this it introduces a new status flag which uses
-EINPROGRESS.
Signed-off-by: Stefan Priebe s.pri
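The cancellation pattern this patch summary describes can be sketched as follows (a simplified model, not the actual block/rbd.c code; the `AIOCB` struct and helper names are illustrative, and the real qemu_aio_wait() loop is replaced by a simulated completion):

```c
#include <assert.h>
#include <errno.h>

/* Simplified model of the patch's idea: while a request is in flight its
 * result is -EINPROGRESS, and cancel waits until the completion callback
 * has stored the real result, so completion never races with teardown. */
typedef struct {
    int ret;        /* -EINPROGRESS while the request is in flight */
    int cancelled;
} AIOCB;

static void aio_start(AIOCB *acb)
{
    acb->ret = -EINPROGRESS;
    acb->cancelled = 0;
}

static void aio_complete(AIOCB *acb, int result)
{
    acb->ret = result;  /* callback stores the final result */
}

static void aio_cancel(AIOCB *acb)
{
    acb->cancelled = 1;
    /* In QEMU this loop would call qemu_aio_wait(); here we simulate the
     * completion arriving while we wait. */
    while (acb->ret == -EINPROGRESS) {
        aio_complete(acb, 0);
    }
}
```

The key design point is that cancel does not free the request itself; it only waits for the status to leave -EINPROGRESS, which guarantees the completion path has finished.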