Re: [Qemu-devel] [PATCH] rbd block driver fix race between aio completion and aio cancel

2012-11-22 Thread Stefan Priebe - Profihost AG
Hello, I've sent a new patch which hopefully addresses all your comments. [PATCH] rbd block driver fix race between aio completion and aio cancel Greets Stefan On 21.11.2012 10:07, Stefan Hajnoczi wrote: On Mon, Nov 19, 2012 at 09:39:45PM +0100, Stefan Priebe wrote: @@ -376,9 +376,7

Re: [Qemu-devel] [PATCH] overflow of int ret: use ssize_t for ret

2012-11-22 Thread Stefan Priebe - Profihost AG
Hi Andreas, thanks for your comment. Do I have to resend this patch? -- Greets, Stefan On 22.11.2012 17:40, Andreas Färber wrote: On 22.11.2012 10:07, Stefan Priebe wrote: When acb->cmd is WRITE or DISCARD, block/rbd stores rcb->size into acb->ret. Look here: if (acb->cmd == RBD_AIO_WRITE

Re: [Qemu-devel] [PATCH] overflow of int ret: use ssize_t for ret

2012-11-22 Thread Stefan Priebe - Profihost AG
Signed-off-by: Stefan Priebe s.pri...@profihost.ag On 22.11.2012 10:07, Stefan Priebe wrote: When acb->cmd is WRITE or DISCARD, block/rbd stores rcb->size into acb->ret. Look here: if (acb->cmd == RBD_AIO_WRITE || acb->cmd == RBD_AIO_DISCARD) { if (r < 0) { acb

Re: [Qemu-devel] [PATCH] use int64_t for return values from rbd instead of int

2012-11-21 Thread Stefan Priebe - Profihost AG
On 21.11.2012 09:26, Stefan Hajnoczi wrote: On Wed, Nov 21, 2012 at 08:47:16AM +0100, Stefan Priebe - Profihost AG wrote: On 21.11.2012 07:41, Stefan Hajnoczi wrote: We're going in circles here. I know the types are wrong in the code and your patch fixes it; that's why I said it looks

Re: [Qemu-devel] [PATCH] rbd block driver fix race between aio completion and aio cancel

2012-11-21 Thread Stefan Priebe - Profihost AG
Hello Stefan, hello Paolo, most of the ideas and the removal of the whole cancellation stuff came from Paolo. Maybe he can comment as well? I would then make a new patch. Greets, Stefan On 21.11.2012 10:07, Stefan Hajnoczi wrote: On Mon, Nov 19, 2012 at 09:39:45PM +0100, Stefan Priebe wrote

Re: [Qemu-devel] [PATCH] use int64_t for return values from rbd instead of int

2012-11-21 Thread Stefan Priebe - Profihost AG
Not sure about off_t. What are its min and max sizes? Stefan On 21.11.2012 at 18:03, Stefan Weil s...@weilnetz.de wrote: On 20.11.2012 13:44, Stefan Priebe wrote: rbd / rados quite often returns the length of writes or discarded blocks. These values might be bigger than int. Signed-off

[Qemu-devel] [PATCH] use int64_t for return values from rbd instead of int

2012-11-20 Thread Stefan Priebe
rbd / rados quite often returns the length of writes or discarded blocks. These values might be bigger than int. Signed-off-by: Stefan Priebe s.pri...@profihost.ag --- block/rbd.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/block/rbd.c b/block/rbd.c index

Re: [Qemu-devel] [PATCH] use int64_t for return values from rbd instead of int

2012-11-20 Thread Stefan Priebe
Hi Stefan, On 20.11.2012 17:29, Stefan Hajnoczi wrote: On Tue, Nov 20, 2012 at 01:44:55PM +0100, Stefan Priebe wrote: rbd / rados quite often returns the length of writes or discarded blocks. These values might be bigger than int. Signed-off-by: Stefan Priebe s.pri...@profihost.ag

Re: [Qemu-devel] [PATCH] use int64_t for return values from rbd instead of int

2012-11-20 Thread Stefan Priebe - Profihost AG
On 21.11.2012 07:41, Stefan Hajnoczi wrote: On Tue, Nov 20, 2012 at 8:16 PM, Stefan Priebe s.pri...@profihost.ag wrote: Hi Stefan, On 20.11.2012 17:29, Stefan Hajnoczi wrote: On Tue, Nov 20, 2012 at 01:44:55PM +0100, Stefan Priebe wrote: rbd / rados quite often returns the length

Re: [Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-19 Thread Stefan Priebe - Profihost AG
Hi Paolo, On 19.11.2012 09:10, Paolo Bonzini wrote: I'm sorry, the discard requests aren't failing. Qemu / the block driver starts to cancel a bunch of requests. That is being done in the kernel (the guest, I think) because the UNMAPs are taking too long. That makes sense. RBD handles discards

Re: [Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-19 Thread Stefan Priebe - Profihost AG
On 19.11.2012 10:54, Paolo Bonzini wrote: On 19/11/2012 10:36, Stefan Priebe - Profihost AG wrote: Hi Paolo, On 19.11.2012 09:10, Paolo Bonzini wrote: I'm sorry, the discard requests aren't failing. Qemu / the block driver starts to cancel a bunch of requests. That is being done

Re: [Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-19 Thread Stefan Priebe - Profihost AG
On 19.11.2012 11:06, Paolo Bonzini wrote: On 19/11/2012 10:59, Stefan Priebe - Profihost AG wrote: Do you know what is the correct way? I think the correct fix is to serialize them in the kernel. So you mean this is not a bug in rbd or qemu, but a general bug in the Linux kernel

Re: [Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-19 Thread Stefan Priebe - Profihost AG
On 19.11.2012 11:23, Paolo Bonzini wrote: On 19/11/2012 11:13, Stefan Priebe - Profihost AG wrote: So you mean this is not a bug in rbd or qemu, but a general bug in the Linux kernel since they implemented discard? Yes. As you're well known in the Linux dev community ;-) Might you

Re: [Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-19 Thread Stefan Priebe - Profihost AG
Yeah, that's my old thread regarding iscsi and unmap, but this works fine now since you patched qemu. Stefan On 19.11.2012 11:36, Paolo Bonzini wrote: On 19/11/2012 11:30, Stefan Priebe - Profihost AG wrote: But do you have any idea why it works with an iscsi / libiscsi backend

Re: [Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-19 Thread Stefan Priebe - Profihost AG
On 19.11.2012 12:16, Paolo Bonzini wrote: On 19/11/2012 11:57, Stefan Priebe - Profihost AG wrote: Yeah, that's my old thread regarding iscsi and unmap, but this works fine now since you patched qemu. It still causes hangs, no? Though it works apart from that. iscsi/libiscsi

Re: [Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-19 Thread Stefan Priebe - Profihost AG
On 19.11.2012 13:24, Paolo Bonzini wrote: On 19/11/2012 12:49, Stefan Priebe - Profihost AG wrote: It still causes hangs, no? Though it works apart from that. iscsi/libiscsi and discards work fine since your latest patches: 1bd075f29ea6d11853475c7c42734595720c3ac6

Re: [Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-19 Thread Stefan Priebe - Profihost AG
Greets, Stefan On 19.11.2012 14:06, Paolo Bonzini wrote: On 19/11/2012 14:01, Stefan Priebe - Profihost AG wrote: The right behavior is to return only after the target says whether the cancellation was done or not. For libiscsi, it was implemented by the commits you mention. So the whole bunch

Re: [Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-19 Thread Stefan Priebe - Profihost AG
Hi Paolo, this is my current work status on porting these fixes to rbd. Right now the discards still get canceled by the client kernel. Could you have a look at what I have forgotten? Thanks! Stefan On 19.11.2012 14:06, Paolo Bonzini wrote: On 19/11/2012 14:01, Stefan Priebe - Profihost AG

Re: [Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-19 Thread Stefan Priebe - Profihost AG
On 19.11.2012 15:41, Paolo Bonzini wrote: On 19/11/2012 15:28, Stefan Priebe - Profihost AG wrote: typedef struct RADOSCB { @@ -376,6 +377,10 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb) RBDAIOCB *acb = rcb->acb; int64_t r; + if (acb->bh) { + return

Re: [Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-19 Thread Stefan Priebe - Profihost AG
Mon Sep 17 00:00:00 2001 From: Stefan Priebe s.pri...@profhost.ag Date: Mon, 19 Nov 2012 15:54:05 +0100 Subject: [PATCH] fix cancel rbd race Signed-off-by: Stefan Priebe s.pri...@profhost.ag --- block/rbd.c | 19 --- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git

Re: [Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-19 Thread Stefan Priebe - Profihost AG
On 19.11.2012 16:22, Paolo Bonzini wrote: On 19/11/2012 16:04, Stefan Priebe - Profihost AG wrote: [ 49.183366] sd 2:0:0:1: [sdb] [ 49.183366] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [ 49.183366] sd 2:0:0:1: [sdb] [ 49.183366] Sense Key : Aborted Command [current

[Qemu-devel] [PATCH] rbd block driver fix race between aio completion and aio cancel

2012-11-19 Thread Stefan Priebe
From: Stefan Priebe s.pri...@profhost.ag This one fixes a race qemu also had in the iscsi block driver between cancellation and I/O completion. qemu_rbd_aio_cancel was not synchronously waiting for the end of the command. It also removes the useless cancelled flag and instead introduces a status

[Qemu-devel] (no subject)

2012-11-19 Thread Stefan Priebe
From Stefan Priebe s.pri...@profihost.ag # This line is ignored. From: Stefan Priebe s.pri...@profihost.ag Cc: pve-de...@pve.proxmox.com Cc: pbonz...@redhat.com Cc: ceph-de...@vger.kernel.org Subject: QEMU/PATCH: rbd block driver: fix race between completion and cancel In-Reply-To: ve-de

Re: [Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-18 Thread Stefan Priebe
Hi Paolo, On 06.11.2012 23:42, Paolo Bonzini wrote: I wanted to use scsi unmap with rbd. The rbd documentation says you need to set discard_granularity=512 for the device. I'm using qemu 1.2. If I set this and send an UNMAP command I get this kernel output: The discard request is failing.

Re: [Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-07 Thread Stefan Priebe
On 06.11.2012 23:42, Paolo Bonzini wrote: I wanted to use scsi unmap with rbd. The rbd documentation says you need to set discard_granularity=512 for the device. I'm using qemu 1.2. If I set this and send an UNMAP command I get this kernel output: The discard request is failing. Please check

[Qemu-devel] scsi-hd with discard_granularity and unmap results in Aborted Commands

2012-11-06 Thread Stefan Priebe - Profihost AG
Hello list, I wanted to use scsi unmap with rbd. The rbd documentation says you need to set discard_granularity=512 for the device. I'm using qemu 1.2. If I set this and send an UNMAP command I get this kernel output: Sense Key : Aborted Command [current] sd 2:0:0:1: [sdb] Add. Sense: I/O process
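For reference, the setup under discussion looks roughly like the following command-line fragment. The pool/image path and IDs are placeholders; discard_granularity=512 is the setting the rbd documentation recommends in the report above:

```shell
# Hypothetical fragment: attach an rbd image as a scsi-hd device
# with UNMAP (discard) enabled. Names and paths are illustrative.
qemu-system-x86_64 \
  -drive file=rbd:pool/image,if=none,id=drive0,cache=none \
  -device virtio-scsi-pci,id=scsi0 \
  -device scsi-hd,drive=drive0,bus=scsi0.0,discard_granularity=512
```

With this in place the guest sees a SCSI disk that advertises unmap support, which is what triggers the UNMAP commands (and, in this thread, the aborts) in the first place.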

Re: [Qemu-devel] RBD trim / unmap support?

2012-11-02 Thread Stefan Priebe - Profihost AG
Sense: I/O process terminated [ 75.500374] sd 2:0:0:4: [sdc] CDB: [ 75.500374] Write same(16): 93 08 00 00 00 00 03 7f ff f9 00 7f ff ff 00 00 [ 75.500374] end_request: I/O error, dev sdc, sector 58720249 Stefan On 02.11.2012 09:20, Stefan Priebe - Profihost AG wrote: On 02.11.2012 00:36

[Qemu-devel] slow migration speed / strange memory usage

2012-10-29 Thread Stefan Priebe - Profihost AG
Hello list, I'm running kvm 1.2 on a vanilla 3.6.3 kernel. I'm trying to understand the memory usage and the migration speed. I have a VM which does nothing else than run OpenSSH and a cron job every minute that writes a small json file. When the VM is freshly started the host shows 300MB memory

[Qemu-devel] slow xbzrle

2012-10-25 Thread Stefan Priebe - Profihost AG
Hello list, I'm using 1.2 stable and wanted to use XBZRLE, but it is extremely slow. Transferring a simple VM with 4GB memory through a 10GbE NIC while running MySQL (with NO LOAD) takes up to 10-15 minutes. The remaining amount often jumps around or only decreases pretty slowly. Is

Re: [Qemu-devel] slow xbzrle

2012-10-25 Thread Stefan Priebe - Profihost AG
On 25.10.2012 11:44, Orit Wasserman wrote: Is this known or is something wrong? My guess is this workload migrates fine without XBZRLE, so it is not the speed or downtime :). It could be that the cache size is too small, resulting in a lot of cache misses, which means XBZRLE makes things worse

Re: [Qemu-devel] slow xbzrle

2012-10-25 Thread Stefan Priebe - Profihost AG
On 25.10.2012 13:39, Orit Wasserman wrote: On 10/25/2012 12:35 PM, Stefan Priebe - Profihost AG wrote: On 25.10.2012 11:44, Orit Wasserman wrote: Is this known or is something wrong? My guess is this workload migrates fine without XBZRLE, so it is not the speed or downtime :). It could

Re: [Qemu-devel] slow xbzrle

2012-10-25 Thread Stefan Priebe
On 25.10.2012 15:15, Orit Wasserman wrote: Looks like a lot of cache misses; you can try increasing the cache size (migrate_set_cache_size). But you should remember that for an idle guest XBZRLE is wasteful; it is useful for workloads that change the same memory pages frequently. Sure, here
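The cache tuning suggested above is done from the QEMU monitor. A hedged sketch — the cache size and destination address are illustrative, and the exact commands available depend on the QEMU version:

```shell
# In the HMP monitor of the source QEMU (illustrative values):
(qemu) migrate_set_capability xbzrle on
(qemu) migrate_set_cache_size 256m     # raise this if cache misses dominate
(qemu) migrate -d tcp:destination:4444
(qemu) info migrate                    # watch the xbzrle cache-miss counters
```

If `info migrate` shows the miss count growing nearly as fast as the page count, the cache is too small for the working set and XBZRLE is adding overhead rather than saving bandwidth.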

[Qemu-devel] CPU hotplug

2012-08-30 Thread Stefan Priebe
Hello list, what is the status of CPU hotplug support? I tried the latest 1.2rc1 kvm-qemu with vanilla kernel v3.5.2, but the VM just crashes when sending cpu_set X online through the qm monitor. Greets, Stefan

Re: [Qemu-devel] CPU hotplug

2012-08-30 Thread Stefan Priebe
On 30.08.2012 at 17:41, Andreas Färber afaer...@suse.de wrote: Hello, On 30.08.2012 11:06, Stefan Priebe wrote: I tried the latest 1.2rc1 kvm-qemu with vanilla kernel v3.5.2, but the VM just crashes when sending cpu_set X online through the qm monitor. For SLES we're carrying a patch

Re: [Qemu-devel] CPU hotplug

2012-08-30 Thread Stefan Priebe
On 30.08.2012 18:43, Andreas Färber wrote: On 30.08.2012 18:35, Stefan Priebe wrote: On 30.08.2012 at 17:41, Andreas Färber afaer...@suse.de wrote: On 30.08.2012 11:06, Stefan Priebe wrote: I tried the latest 1.2rc1 kvm-qemu with vanilla kernel v3.5.2, but the VM just crashes when sending

Re: [Qemu-devel] CPU hotplug

2012-08-30 Thread Stefan Priebe
On 30.08.2012 20:40, Igor Mammedov wrote: On 30.08.2012 at 17:41, Andreas Färber afaer...@suse.de wrote: On 30.08.2012 11:06, Stefan Priebe wrote: I tried the latest 1.2rc1 kvm-qemu with vanilla kernel v3.5.2, but the VM just crashes when sending cpu_set X online through the qm monitor

Re: [Qemu-devel] CPU hotplug

2012-08-30 Thread Stefan Priebe
On 30.08.2012 20:56, Igor Mammedov wrote: On Thu, 30 Aug 2012 20:45:10 +0200 Stefan Priebe s.pri...@profihost.ag wrote: On 30.08.2012 20:40, Igor Mammedov wrote: On 30.08.2012 at 17:41, Andreas Färber afaer...@suse.de wrote: On 30.08.2012 11:06, Stefan Priebe wrote: I tried the latest

Re: [Qemu-devel] [PATCH RFT 0/3] iscsi: fix NULL dereferences / races between task completion and abort

2012-08-21 Thread Stefan Priebe - Profihost AG
On 21.08.2012 00:36, ronnie sahlberg wrote: On Mon, Aug 20, 2012 at 6:12 PM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: Hi Ronnie, On 20.08.2012 10:08, Paolo Bonzini wrote: That's because the big QEMU lock is held by the thread that called qemu_aio_cancel. And I also see

Re: [Qemu-devel] [PATCH RFT 0/3] iscsi: fix NULL dereferences / races between task completion and abort

2012-08-20 Thread Stefan Priebe - Profihost AG
On 20.08.2012 09:22, Paolo Bonzini wrote: On 19/08/2012 21:22, Stefan Priebe - Profihost AG wrote: No problem, my fault---I'm just back and I haven't really started again all my stuff, so the patch was not tested. This should fix it, though. Booting works fine now. But the VM starts

Re: [Qemu-devel] [PATCH RFT 0/3] iscsi: fix NULL dereferences / races between task completion and abort

2012-08-20 Thread Stefan Priebe - Profihost AG
Hi Ronnie, On 20.08.2012 10:08, Paolo Bonzini wrote: That's because the big QEMU lock is held by the thread that called qemu_aio_cancel. And I also see no cancellation message in the kernel log. And that's because the UNMAP actually ultimately succeeds. You'll probably see soft lockup

Re: [Qemu-devel] [PATCH RFT 0/3] iscsi: fix NULL dereferences / races between task completion and abort

2012-08-19 Thread Stefan Priebe
Hi Paolo, On 18.08.2012 23:49, Paolo Bonzini wrote: Hi Stefan, this is my version of your patch. I think the flow of the code is a bit simpler (or at least matches other implementations of cancellation). Can you test it on your test case? I'm really sorry, but your patch doesn't work at all.

Re: [Qemu-devel] [PATCH RFT 0/3] iscsi: fix NULL dereferences / races between task completion and abort

2012-08-19 Thread Stefan Priebe - Profihost AG
On 19.08.2012 15:11, Paolo Bonzini wrote: No problem, my fault---I'm just back and I haven't really started again all my stuff, so the patch was not tested. This should fix it, though. Booting works fine now. But the VM starts to hang after trying to unmap large regions. No segfault or so

[Qemu-devel] [PATCH] PATCH V2: fix NULL dereferences / races between task completion and abort

2012-08-15 Thread Stefan Priebe
--- block/iscsi.c | 55 +++ 1 files changed, 23 insertions(+), 32 deletions(-) diff --git a/block/iscsi.c b/block/iscsi.c index 12ca76d..1c8b049 100644 --- a/block/iscsi.c +++ b/block/iscsi.c @@ -76,6 +76,10 @@ static void

[Qemu-devel] PATCH V2: fix NULL dereferences / races between task completion and abort

2012-08-15 Thread Stefan Priebe
This patch fixes two main issues with block/iscsi.c: 1.) iscsi_task_mgmt_abort_task_async calls iscsi_scsi_task_cancel, which was also directly called in iscsi_aio_cancel 2.) a race between task completion and task abortion could happen because scsi_free_scsi_task was done before

[Qemu-devel] [PATCH] PATCH V2: fix NULL dereferences / races between task completion and abort

2012-08-15 Thread Stefan Priebe
Signed-off-by: Stefan Priebe s.pri...@profihost.ag --- block/iscsi.c | 55 +++ 1 files changed, 23 insertions(+), 32 deletions(-) diff --git a/block/iscsi.c b/block/iscsi.c index 12ca76d..1c8b049 100644 --- a/block/iscsi.c +++ b/block

Re: [Qemu-devel] [PATCH] PATCH V2: fix NULL dereferences / races between task completion and abort

2012-08-15 Thread Stefan Priebe - Profihost AG
iscsi_schedule_bh has finished GIT: [PATCH] PATCH V2: fix NULL dereferences / races between task completion and abort On 15.08.2012 09:09, Stefan Priebe wrote: Signed-off-by: Stefan Priebe s.pri...@profihost.ag --- block/iscsi.c | 55

[Qemu-devel] [PATCH] iscsi: fix race between task completion and task abortion

2012-08-14 Thread Stefan Priebe
From: spriebe g...@profihost.ag --- block/iscsi.c | 36 1 files changed, 20 insertions(+), 16 deletions(-) diff --git a/block/iscsi.c b/block/iscsi.c index 12ca76d..257f97f 100644 --- a/block/iscsi.c +++ b/block/iscsi.c @@ -76,6 +76,10 @@ static void

Re: [Qemu-devel] [PATCH] iscsi: fix race between task completion and task abortion

2012-08-14 Thread Stefan Priebe
On 14.08.2012 16:08, Kevin Wolf wrote: On 14.08.2012 14:11, Stefan Hajnoczi wrote: On Tue, Aug 14, 2012 at 1:09 PM, ronnie sahlberg ronniesahlb...@gmail.com wrote: Is a reply with the text Acked-by: Ronnie Sahlberg ronniesahlb...@gmail.com sufficient? Yes But is this only meant as a

[Qemu-devel] PATCH V2: fix NULL dereferences / races between task completion and abort

2012-08-14 Thread Stefan Priebe
This patch fixes a race and some segfaults which I discovered while testing scsi-generic and unmapping with libiscsi. The first problem is that in iscsi_aio_cancel both iscsi_scsi_task_cancel and iscsi_task_mgmt_abort_task_async got called, but iscsi_task_mgmt_abort_task_async already calls

[Qemu-devel] [PATCH] PATCH V2: fix NULL dereferences / races between task completion and abort

2012-08-14 Thread Stefan Priebe
Signed-off-by: Stefan Priebe s.pri...@profihost.ag --- block/iscsi.c | 55 +++ 1 files changed, 23 insertions(+), 32 deletions(-) diff --git a/block/iscsi.c b/block/iscsi.c index 12ca76d..1c8b049 100644 --- a/block/iscsi.c +++ b/block

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-10 Thread Stefan Priebe - Profihost AG
virtio-scsi is now working fine. Could you please help me get discard / trim running? I can't find any information on what is needed to get discard / trim working. Thanks, Stefan On 09.08.2012 12:17, Stefan Priebe - Profihost AG wrote: That looks better - thanks for the hint. But now

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-10 Thread Stefan Priebe - Profihost AG
I'm using iscsi. So no raw or qcow2. XFS as the FS. Thanks, Stefan On 10.08.2012 12:20, Paolo Bonzini wrote: On 10/08/2012 11:22, Stefan Priebe - Profihost AG wrote: virtio-scsi is now working fine. Could you please help me get discard / trim running? I can't find any information on what

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-10 Thread Stefan Priebe - Profihost AG
The VM start command was: http://pastebin.com/raw.php?i=6WNLPemy Stefan On 10.08.2012 12:30, Paolo Bonzini wrote: On 10/08/2012 12:28, Stefan Priebe - Profihost AG wrote: I'm using iscsi. So no raw or qcow2. Ok, then you need to use scsi-block as your device instead of scsi-disk or scsi-hd

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-10 Thread Stefan Priebe - Profihost AG
it. But I'm using virtio-scsi-pci? I'm really sorry to ask so many questions. Stefan On 10.08.2012 13:20, ronnie sahlberg wrote: On Fri, Aug 10, 2012 at 8:30 PM, Paolo Bonzini pbonz...@redhat.com wrote: On 10/08/2012 12:28, Stefan Priebe - Profihost AG wrote: I'm using iscsi. So no raw or qcow2

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-10 Thread Stefan Priebe - Profihost AG
On 10.08.2012 13:12, ronnie sahlberg wrote: You want discard to work? Yes You are using qemu 1.0? Current qemu-kvm git So you don't have the qemu support for scsi-generic passthrough to iscsi devices. Why? I think you need to run the target on Linux 3.2 or later kernels using ext4/xfs

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-10 Thread Stefan Priebe - Profihost AG
On 10.08.2012 14:04, ronnie sahlberg wrote: On Fri, Aug 10, 2012 at 9:57 PM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: On 10.08.2012 13:12, ronnie sahlberg wrote: You want discard to work? Yes You are using qemu 1.0? Current qemu-kvm git So you don't have the qemu

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-10 Thread Stefan Priebe - Profihost AG
On 10.08.2012 14:24, ronnie sahlberg wrote: On Fri, Aug 10, 2012 at 10:14 PM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: I don't know the kvm version numbers. They're the same as qemu. But you can check the file block/iscsi.c for the version you use

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-10 Thread Stefan Priebe - Profihost AG
it iscsi-aware, then use the commands sg_unmap to try to unmap regions and sg_get_lba_status to check that the regions are now unmapped. On Fri, Aug 10, 2012 at 9:54 PM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: http://www.nexenta.com/corp/products/what-is-openstorage

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-10 Thread Stefan Priebe - Profihost AG
Hi Paolo, On 10.08.2012 14:39, Paolo Bonzini wrote: On 10/08/2012 14:35, Stefan Priebe - Profihost AG wrote: One way to activate passthrough is via scsi-generic: Example: -device lsi -device scsi-generic,drive=MyISCSI \ -drive file=iscsi://10.1.1.125

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-09 Thread Stefan Priebe
, Stefan Priebe wrote: Hello list, I wanted to start using virtio-scsi instead of virtio-blk, because it offers the possibility to use discard / trim support. Kernel: 3.5.0 on host and guest Qemu-kvm: 1.1.1 stable But I'm not seeing the same or nearly the same speed: 1) How did you start

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-09 Thread Stefan Priebe
Yes, should be possible. The guest is Debian or Ubuntu. I couldn't find a tag for V1.1.1, which I ran from source. So where to start the bisect? Stefan On 09.08.2012 at 09:01, Paolo Bonzini pbonz...@redhat.com wrote: On 09/08/2012 08:13, Stefan Priebe wrote: I really would like to test

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-09 Thread Stefan Priebe
09/08/2012 09:07, Stefan Priebe wrote: Yes, should be possible. The guest is Debian or Ubuntu. I couldn't find a tag for V1.1.1, which I ran from source. So where to start the bisect? You can start from the v1.1.0 tag. Can you give the command line, perhaps it is enough to reproduce? Paolo Stefan

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-09 Thread Stefan Priebe
@writethrough: why not? @libiscsi Same speed problem with cache=none and with just local lvm disks. Stefan On 09.08.2012 at 09:53, Paolo Bonzini pbonz...@redhat.com wrote: On 09/08/2012 09:41, Stefan Priebe wrote: -drive file=iscsi://10.0.255.100/iqn.1986-03.com.sun:02:8a9019a4-4aa3

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-09 Thread Stefan Priebe - Profihost AG
That looks better - thanks for the hint. But now the network isn't working at all ;-( Stefan On 09.08.2012 11:18, Stefan Hajnoczi wrote: On Thu, Aug 9, 2012 at 8:41 AM, Stefan Priebe s.pri...@profihost.ag wrote: starting line: /usr/bin/qemu-x86_64 -chardev socket,id=qmp,path=/var/run/qemu

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-09 Thread Stefan Priebe - Profihost AG
On 09.08.2012 13:04, Stefan Hajnoczi wrote: On Thu, Aug 9, 2012 at 11:17 AM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: That looks better - thanks for the hint. But now the network isn't working at all ;-( You need to have commit 26b9b5fe17cc1b6be2e8bf8b9d16094f420bb8ad (virtio

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-09 Thread Stefan Priebe - Profihost AG
read : io=5748MB, bw=570178KB/s, iops=139, runt= 10323msec Stefan On 09.08.2012 13:04, Stefan Hajnoczi wrote: On Thu, Aug 9, 2012 at 11:17 AM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: That looks better - thanks for the hint. But now the network isn't working at all ;-( You need

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-09 Thread Stefan Priebe - Profihost AG
On 09.08.2012 14:19, Paolo Bonzini wrote: On 09/08/2012 14:08, Stefan Priebe - Profihost AG wrote: virtio-scsi: rand 4k: write: io=822448KB, bw=82228KB/s, iops=20557, runt= 10002msec read : io=950920KB, bw=94694KB/s, iops=23673, runt= 10042msec seq: write: io=2436MB, bw

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-09 Thread Stefan Priebe - Profihost AG
On 09.08.2012 15:42, Paolo Bonzini wrote: On 09/08/2012 15:39, Stefan Priebe - Profihost AG wrote: scsi-generic would indeed incur some overhead because it does not do scatter/gather I/O directly, but scsi-hd/scsi-block do not have this overhead. In any case, that should be visible

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-09 Thread Stefan Priebe - Profihost AG
On 09.08.2012 15:15, Paolo Bonzini wrote: On 09/08/2012 14:52, ronnie sahlberg wrote: The guest uses noop right now. The disk host is NexentaStor running OpenSolaris. I use libiscsi right now, so the disks are not visible in both cases (virtio-blk and virtio-scsi) to the host right now. And

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-09 Thread Stefan Priebe - Profihost AG
OK, the VMs work fine now. Sorry for missing the patch after switching to qemu-kvm. On 09.08.2012 14:44, Paolo Bonzini wrote: Ok, try deadline in the guest then. Using noop amplifies bad performance, because you lose request merging. With no host scheduler, as is the case with libiscsi, noop

[Qemu-devel] discard / trim support

2012-08-09 Thread Stefan Priebe
Hello list, I tried to find out how to use trim / discard, so my storage can free unused blocks. But I wasn't able to find out which virtio block devices support trim / discard and what else is needed. Thanks and Greets, Stefan

Re: [Qemu-devel] KVM segfaults with 3.5 while installing ubuntu 12.04

2012-08-08 Thread Stefan Priebe
Ah OK - thanks. Will there be a fixed 1.1.2 as well? Stefan On 08.08.2012 10:06, Stefan Hajnoczi wrote: On Wed, Aug 08, 2012 at 07:51:07AM +0200, Stefan Priebe wrote: Any news? Was this applied upstream? Kevin is ill. He has asked me to review and test patches in his absence. When he

[Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-08 Thread Stefan Priebe
Hello list, I wanted to start using virtio-scsi instead of virtio-blk, because it offers the possibility to use discard / trim support. Kernel: 3.5.0 on host and guest Qemu-kvm: 1.1.1 stable But I'm not seeing the same or nearly the same speed: virtio-scsi: rand. 4k: write: io=677628KB,

Re: [Qemu-devel] virtio-scsi vs. virtio-blk

2012-08-08 Thread Stefan Priebe
Yes, cache=none. Is there a bugfix for 1.1.1? Stefan On 08.08.2012 at 18:17, Paolo Bonzini pbonz...@redhat.com wrote: On 08/08/2012 17:21, Stefan Priebe wrote: Hello list, I wanted to start using virtio-scsi instead of virtio-blk, because it offers the possibility to use discard / trim

[Qemu-devel] [Bug 1033494] [NEW] qemu-system-x86_64 segfaults with kernel 3.5.0

2012-08-06 Thread Stefan Priebe
Public bug reported: qemu-kvm 1.1.1 stable is running fine for me with the RHEL 6 2.6.32-based kernel. But with a 3.5.0 kernel, qemu-system-x86_64 segfaults reproducibly while I'm trying to install Ubuntu 12.04 server. You find three backtraces here: http://pastebin.com/raw.php?i=xCy2pEcP Stefan **

Re: [Qemu-devel] KVM segfaults with 3.5 while installing ubuntu 12.04

2012-08-06 Thread Stefan Priebe - Profihost AG
Can confirm - this fixed it! On 06.08.2012 14:37, Avi Kivity wrote: On 08/06/2012 03:12 PM, Avi Kivity wrote: On 08/06/2012 11:46 AM, Stefan Priebe - Profihost AG wrote: But I still get the segfault and core dump - this is my main problem. I mean qemu-kvm master isn't declared as stable. So
