Re: [PATCH 7/7] block/rbd: change request alignment to 1 byte

2021-01-20 Thread Peter Lieven
 > Am 19.01.2021 um 15:20 schrieb Jason Dillaman : > > On Tue, Jan 19, 2021 at 4:36 AM Peter Lieven wrote: >>> Am 18.01.21 um 23:33 schrieb Jason Dillaman: >>> On Fri, Jan 15, 2021 at 10:39 AM Peter Lieven wrote: >>>> Am 15.01.21 um 16:27 schrieb Jas

Re: [PATCH 7/7] block/rbd: change request alignment to 1 byte

2021-01-19 Thread Peter Lieven
Am 18.01.21 um 23:33 schrieb Jason Dillaman: > On Fri, Jan 15, 2021 at 10:39 AM Peter Lieven wrote: >> Am 15.01.21 um 16:27 schrieb Jason Dillaman: >>> On Thu, Jan 14, 2021 at 2:59 PM Peter Lieven wrote: >>>> Am 14.01.21 um 20:19 schrieb Jason Dillaman: >>

Re: [PATCH 7/7] block/rbd: change request alignment to 1 byte

2021-01-15 Thread Peter Lieven
Am 15.01.21 um 16:27 schrieb Jason Dillaman: > On Thu, Jan 14, 2021 at 2:59 PM Peter Lieven wrote: >> Am 14.01.21 um 20:19 schrieb Jason Dillaman: >>> On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote: >>>> since we implement byte interfaces and librbd supports ai

Re: [PATCH 7/7] block/rbd: change request alignment to 1 byte

2021-01-14 Thread Peter Lieven
Am 14.01.21 um 20:19 schrieb Jason Dillaman: > On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote: >> since we implement byte interfaces and librbd supports aio on byte >> granularity we can lift >> the 512 byte alignment. >> >> Signed-off-by: Peter Lieven

Re: [PATCH 4/7] block/rbd: add bdrv_{attach,detach}_aio_context

2021-01-14 Thread Peter Lieven
Am 14.01.21 um 20:18 schrieb Jason Dillaman: > On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote: >> Signed-off-by: Peter Lieven >> --- >> block/rbd.c | 21 +++-- >> 1 file changed, 19 insertions(+), 2 deletions(-) >> >> diff --g

Re: [PATCH 6/7] block/rbd: add write zeroes support

2021-01-14 Thread Peter Lieven
Am 14.01.21 um 20:19 schrieb Jason Dillaman: > On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote: >> Signed-off-by: Peter Lieven >> --- >> block/rbd.c | 31 ++- >> 1 file changed, 30 insertions(+), 1 deletion(-) >> >> diff --g

Re: [PATCH 5/7] block/rbd: migrate from aio to coroutines

2021-01-14 Thread Peter Lieven
Am 14.01.21 um 20:19 schrieb Jason Dillaman: > On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote: >> Signed-off-by: Peter Lieven >> --- >> block/rbd.c | 247 ++-- >> 1 file changed, 84 insertions(+), 163 deletions(

Re: [PATCH 3/7] block/rbd: use stored image_size in qemu_rbd_getlength

2021-01-14 Thread Peter Lieven
Am 14.01.21 um 20:18 schrieb Jason Dillaman: > On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote: >> Signed-off-by: Peter Lieven >> --- >> block/rbd.c | 10 +- >> 1 file changed, 1 insertion(+), 9 deletions(-) >> >> diff --git a/block/

[PATCH 6/7] block/rbd: add write zeroes support

2020-12-27 Thread Peter Lieven
Signed-off-by: Peter Lieven --- block/rbd.c | 31 ++- 1 file changed, 30 insertions(+), 1 deletion(-) diff --git a/block/rbd.c b/block/rbd.c index 2d77d0007f..27b4404adf 100644 --- a/block/rbd.c +++ b/block/rbd.c @@ -63,7 +63,8 @@ typedef enum { RBD_AIO_READ

[PATCH 4/7] block/rbd: add bdrv_{attach,detach}_aio_context

2020-12-27 Thread Peter Lieven
Signed-off-by: Peter Lieven --- block/rbd.c | 21 +++-- 1 file changed, 19 insertions(+), 2 deletions(-) diff --git a/block/rbd.c b/block/rbd.c index a2da70e37f..27b232f4d8 100644 --- a/block/rbd.c +++ b/block/rbd.c @@ -91,6 +91,7 @@ typedef struct BDRVRBDState { char

[PATCH 7/7] block/rbd: change request alignment to 1 byte

2020-12-27 Thread Peter Lieven
since we implement byte interfaces and librbd supports aio on byte granularity we can lift the 512 byte alignment. Signed-off-by: Peter Lieven --- block/rbd.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/block/rbd.c b/block/rbd.c index 27b4404adf..8673e8f553 100644 --- a/block/rbd.c

[PATCH 3/7] block/rbd: use stored image_size in qemu_rbd_getlength

2020-12-27 Thread Peter Lieven
Signed-off-by: Peter Lieven --- block/rbd.c | 10 +- 1 file changed, 1 insertion(+), 9 deletions(-) diff --git a/block/rbd.c b/block/rbd.c index bc8cf8af9b..a2da70e37f 100644 --- a/block/rbd.c +++ b/block/rbd.c @@ -956,15 +956,7 @@ static int qemu_rbd_getinfo(BlockDriverState *bs

[PATCH 2/7] block/rbd: store object_size in BDRVRBDState

2020-12-27 Thread Peter Lieven
Signed-off-by: Peter Lieven --- block/rbd.c | 18 +++--- 1 file changed, 7 insertions(+), 11 deletions(-) diff --git a/block/rbd.c b/block/rbd.c index 650e27c351..bc8cf8af9b 100644 --- a/block/rbd.c +++ b/block/rbd.c @@ -90,6 +90,7 @@ typedef struct BDRVRBDState { char *snap

[PATCH 1/7] block/rbd: bump librbd requirement to luminous release

2020-12-27 Thread Peter Lieven
even luminous (version 12.2) is unmaintained for over 3 years now. Bump the requirement to get rid of the ifdef'ry in the code. Signed-off-by: Peter Lieven --- block/rbd.c | 120 configure | 7 +-- 2 files changed, 12 insertions(+), 115

[PATCH 5/7] block/rbd: migrate from aio to coroutines

2020-12-27 Thread Peter Lieven
Signed-off-by: Peter Lieven --- block/rbd.c | 247 ++-- 1 file changed, 84 insertions(+), 163 deletions(-) diff --git a/block/rbd.c b/block/rbd.c index 27b232f4d8..2d77d0007f 100644 --- a/block/rbd.c +++ b/block/rbd.c @@ -66,22 +66,6 @@ typedef

[PATCH 0/7] block/rbd: migrate to coroutines and add write zeroes support

2020-12-27 Thread Peter Lieven
and ifdef'ry in the code. Peter Lieven (7): block/rbd: bump librbd requirement to luminous release block/rbd: store object_size in BDRVRBDState block/rbd: use stored image_size in qemu_rbd_getlength block/rbd: add bdrv_{attach,detach}_aio_context block/rbd: migrate from aio to coroutines block

Re: qemu 6.0 rbd driver rewrite

2020-12-09 Thread Peter Lieven
Am 01.12.20 um 13:40 schrieb Peter Lieven: > Hi, > > > i would like to submit a series for 6.0 which will convert the aio hooks to > native coroutine hooks and add write zeroes support. > > The aio routines are nowadays just an emulation on top of coroutines which >

[PATCH] block/nfs: fix int overflow in nfs_client_open_qdict

2020-12-09 Thread Peter Lieven
nfs_client_open returns the file size in sectors. This effectively makes it impossible to open files larger than 1TB. Fixes: a1a42af422d46812f1f0cebe6b230c20409a3731 Cc: qemu-sta...@nongnu.org Signed-off-by: Peter Lieven --- block/nfs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion

qemu 6.0 rbd driver rewrite

2020-12-01 Thread Peter Lieven
Hi, i would like to submit a series for 6.0 which will convert the aio hooks to native coroutine hooks and add write zeroes support. The aio routines are nowadays just an emulation on top of coroutines which add additional overhead. For this I would like to lift the minimum librbd

Re: [PATCH 2/2] qemu-img: align next status sector on destination alignment.

2020-11-12 Thread Peter Lieven
Am 11.11.20 um 16:39 schrieb Maxim Levitsky: > This helps avoid unneeded writes and discards. > > Signed-off-by: Maxim Levitsky > --- > qemu-img.c | 13 - > 1 file changed, 8 insertions(+), 5 deletions(-) > > diff --git a/qemu-img.c b/qemu-img.c > index c2c56fc797..7e9b0f659f 100644

Re: [PATCH v10 25/26] block: Fixes nfs compiling error on msys2/mingw

2020-09-20 Thread Peter Lieven
Am 15.09.20 um 19:12 schrieb Yonggang Luo: > These compiling errors are fixed: > ../block/nfs.c:27:10: fatal error: poll.h: No such file or directory >27 | #include > | ^~~~ > compilation terminated. > > ../block/nfs.c:63:5: error: unknown type name 'blkcnt_t' >63 |

Re: [PATCH v7 03/25] block: Fixes nfs compiling error on msys2/mingw

2020-09-13 Thread Peter Lieven
> Am 10.09.2020 um 22:36 schrieb 罗勇刚(Yonggang Luo) : > >  > > >> On Fri, Sep 11, 2020 at 4:16 AM Peter Lieven wrote: >> >> >>> Am 10.09.2020 um 12:30 schrieb Yonggang Luo : >>> >>> These compiling errors are fixed: >>>

Re: [PATCH v7 03/25] block: Fixes nfs compiling error on msys2/mingw

2020-09-10 Thread Peter Lieven
> Am 10.09.2020 um 12:30 schrieb Yonggang Luo : > > These compiling errors are fixed: > ../block/nfs.c:27:10: fatal error: poll.h: No such file or directory > 27 | #include > | ^~~~ > compilation terminated. > > ../block/nfs.c:63:5: error: unknown type name 'blkcnt_t' >

Re: [PATCH] qemu-img: avoid unaligned read requests during convert

2020-09-10 Thread Peter Lieven
> Am 10.09.2020 um 18:58 schrieb Max Reitz : > > On 01.09.20 14:51, Peter Lieven wrote: >> in case of large continous areas that share the same allocation status >> it happens that the value of s->sector_next_status is unaligned to the >> cluster size or even r

Re: [PATCH v2 01/21] block: Fixes nfs compiling error on msys2/mingw

2020-09-10 Thread Peter Lieven
> Am 10.09.2020 um 09:14 schrieb 罗勇刚(Yonggang Luo) : > > > > On Thu, Sep 10, 2020 at 3:01 PM Peter Lieven wrote: > > > > Am 09.09.2020 um 11:45 schrieb Yonggang Luo : > > > > These compiling errors are fixed: > > ../block/nfs.c:27:10: f

Re: [PATCH v2 01/21] block: Fixes nfs compiling error on msys2/mingw

2020-09-10 Thread Peter Lieven
> Am 09.09.2020 um 11:45 schrieb Yonggang Luo : > > These compiling errors are fixed: > ../block/nfs.c:27:10: fatal error: poll.h: No such file or directory > 27 | #include > | ^~~~ > compilation terminated. > > ../block/nfs.c:63:5: error: unknown type name 'blkcnt_t' >

[PATCH] qemu-img: avoid unaligned read requests during convert

2020-09-01 Thread Peter Lieven
Signed-off-by: Peter Lieven --- qemu-img.c | 22 ++ 1 file changed, 22 insertions(+) diff --git a/qemu-img.c b/qemu-img.c index 5308773811..ed17238c36 100644 --- a/qemu-img.c +++ b/qemu-img.c @@ -1665,6 +1665,7 @@ enum ImgConvertBlockStatus { typedef struct ImgConver

Re: Choice of BDRV_REQUEST_MAX_SECTORS

2020-08-17 Thread Peter Lieven
Am 17.08.20 um 15:44 schrieb Eric Blake: On 8/17/20 7:32 AM, Peter Lieven wrote: Hi, I am currently debugging a performance issue in qemu-img convert. I think I have found the cause and will send a patch later. But is there any reason why BDRV_REQUEST_MAX_SECTORS is not at least aligned

Choice of BDRV_REQUEST_MAX_SECTORS

2020-08-17 Thread Peter Lieven
Hi, I am currently debugging a performance issue in qemu-img convert. I think I have found the cause and will send a patch later. But is there any reason why BDRV_REQUEST_MAX_SECTORS is not at least aligned down to 8 (4k sectors)? Any operation that is not able to determine an optimal or

Re: [PATCH] iscsi: Cap block count from GET LBA STATUS (CVE-2020-1711)

2020-01-23 Thread Peter Lieven
> Am 23.01.2020 um 22:29 schrieb Felipe Franciosi : > > Hi, > >> On Jan 23, 2020, at 5:46 PM, Philippe Mathieu-Daudé >> wrote: >> >>> On 1/23/20 1:44 PM, Felipe Franciosi wrote: >>> When querying an iSCSI server for the provisioning status of blocks (via >>> GET LBA STATUS), Qemu only

Re: [PATCH] iscsi: Don't access non-existent scsi_lba_status_descriptor

2020-01-23 Thread Peter Lieven
goto out_unlock; >> } >> > > Naive question: Does the specification allow for such a response? Is > this inherently an error? The spec says the answer SHALL contain at least one lbasd. So I think treating zero as an error is okay. Anyway, Reviewed-by: Peter Lieven Peter

Re: bdrv_co_pwritev: Assertion `!waited || !use_local_qiov' failed.

2019-12-17 Thread Peter Lieven
Am 17.12.19 um 16:52 schrieb Kevin Wolf: Am 17.12.2019 um 15:14 hat Peter Lieven geschrieben: I have a vserver running Qemu 4.0 that seems to reproducibly hit the following assertion:  bdrv_co_pwritev: Assertion `!waited || !use_local_qiov' failed. I noticed that the padding code

bdrv_co_pwritev: Assertion `!waited || !use_local_qiov' failed.

2019-12-17 Thread Peter Lieven
Hi all, I have a vserver running Qemu 4.0 that seems to reproducibly hit the following assertion:  bdrv_co_pwritev: Assertion `!waited || !use_local_qiov' failed. I noticed that the padding code was recently reworked in commit 2e2ad02f2c. In the new code I cannot find a similar assertion.

[RESEND PATCH V4] block/vhdx: add check for truncated image files

2019-10-10 Thread Peter Lieven
qemu is currently not able to detect truncated vhdx image files. Add a basic check if all allocated blocks are reachable at open and report all errors during bdrv_co_check. Signed-off-by: Peter Lieven --- V4: - allow partial last blocks [Kevin] - report offsets in error messages [Kevin

Re: [Qemu-block] [PATCH V2 1/2] block/nfs: tear down aio before nfs_close

2019-09-13 Thread Peter Lieven
Am 13.09.19 um 11:51 schrieb Max Reitz: > On 10.09.19 17:41, Peter Lieven wrote: >> nfs_close is a sync call from libnfs and has its own event >> handler polling on the nfs FD. Avoid that both QEMU and libnfs >> are intefering here. >> >> CC: qemu-sta...@nongnu.o

Re: [Qemu-block] [PATCH V2 2/2] block/nfs: add support for nfs_umount

2019-09-11 Thread Peter Lieven
Am 11.09.19 um 09:48 schrieb Max Reitz: > On 10.09.19 17:41, Peter Lieven wrote: >> libnfs recently added support for unmounting. Add support >> in Qemu too. >> >> Signed-off-by: Peter Lieven >> --- >> block/nfs.c | 3 +++ >> 1 file changed, 3 inserti

[Qemu-block] [PATCH V2 1/2] block/nfs: tear down aio before nfs_close

2019-09-10 Thread Peter Lieven
nfs_close is a sync call from libnfs and has its own event handler polling on the nfs FD. Avoid that both QEMU and libnfs are intefering here. CC: qemu-sta...@nongnu.org Signed-off-by: Peter Lieven --- block/nfs.c | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/block

[Qemu-block] [PATCH V2 0/2] add support for nfs_umount

2019-09-10 Thread Peter Lieven
add support for NFSv3 umount call. V2 adds a patch that fixes the order of the aio teardown. The addition of the NFS umount call unmasked that bug. Peter Lieven (2): block/nfs: tear down aio before nfs_close block/nfs: add support for nfs_umount block/nfs.c | 9 +++-- 1 file changed, 7

[Qemu-block] [PATCH V2 2/2] block/nfs: add support for nfs_umount

2019-09-10 Thread Peter Lieven
libnfs recently added support for unmounting. Add support in Qemu too. Signed-off-by: Peter Lieven --- block/nfs.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/block/nfs.c b/block/nfs.c index 2c98508275..f39acfdb28 100644 --- a/block/nfs.c +++ b/block/nfs.c @@ -398,6 +398,9 @@ static

[Qemu-block] [PATCH V4] block/vhdx: add check for truncated image files

2019-09-10 Thread Peter Lieven
qemu is currently not able to detect truncated vhdx image files. Add a basic check if all allocated blocks are reachable at open and report all errors during bdrv_co_check. Signed-off-by: Peter Lieven --- V4: - allow partial last blocks [Kevin] - report offsets in error messages [Kevin

Re: [Qemu-block] [PATCH V3] block/vhdx: add check for truncated image files

2019-09-10 Thread Peter Lieven
Am 10.09.19 um 13:15 schrieb Kevin Wolf: Am 05.09.2019 um 12:02 hat Peter Lieven geschrieben: Am 04.09.19 um 16:09 schrieb Kevin Wolf: Am 03.09.2019 um 15:35 hat Peter Lieven geschrieben: qemu is currently not able to detect truncated vhdx image files. Add a basic check if all allocated

Re: [Qemu-block] [PATCH] block/nfs: add support for nfs_umount

2019-09-05 Thread Peter Lieven
Am 05.09.19 um 12:28 schrieb ronnie sahlberg: On Thu, Sep 5, 2019 at 8:16 PM Peter Lieven wrote: Am 05.09.19 um 12:05 schrieb ronnie sahlberg: On Thu, Sep 5, 2019 at 7:43 PM Peter Lieven wrote: Am 04.09.19 um 11:34 schrieb Kevin Wolf: Am 03.09.2019 um 21:52 hat Peter Lieven geschrieben

Re: [Qemu-block] [PATCH] block/nfs: add support for nfs_umount

2019-09-05 Thread Peter Lieven
Am 05.09.19 um 12:05 schrieb ronnie sahlberg: On Thu, Sep 5, 2019 at 7:43 PM Peter Lieven wrote: Am 04.09.19 um 11:34 schrieb Kevin Wolf: Am 03.09.2019 um 21:52 hat Peter Lieven geschrieben: Am 03.09.2019 um 16:56 schrieb Kevin Wolf : Am 03.09.2019 um 15:44 hat Peter Lieven geschrieben

Re: [Qemu-block] [PATCH V3] block/vhdx: add check for truncated image files

2019-09-05 Thread Peter Lieven
Am 04.09.19 um 16:09 schrieb Kevin Wolf: Am 03.09.2019 um 15:35 hat Peter Lieven geschrieben: qemu is currently not able to detect truncated vhdx image files. Add a basic check if all allocated blocks are reachable at open and report all errors during bdrv_co_check. Signed-off-by: Peter Lieven

Re: [Qemu-block] [PATCH] block/nfs: add support for nfs_umount

2019-09-05 Thread Peter Lieven
Am 04.09.19 um 11:34 schrieb Kevin Wolf: Am 03.09.2019 um 21:52 hat Peter Lieven geschrieben: Am 03.09.2019 um 16:56 schrieb Kevin Wolf : Am 03.09.2019 um 15:44 hat Peter Lieven geschrieben: libnfs recently added support for unmounting. Add support in Qemu too. Signed-off-by: Peter Lieven

Re: [Qemu-block] [PATCH] block/nfs: add support for nfs_umount

2019-09-03 Thread Peter Lieven
> Am 03.09.2019 um 16:56 schrieb Kevin Wolf : > > Am 03.09.2019 um 15:44 hat Peter Lieven geschrieben: >> libnfs recently added support for unmounting. Add support >> in Qemu too. >> >> Signed-off-by: Peter Lieven > > Looks trivial enough to revie

[Qemu-block] [PATCH] block/nfs: add support for nfs_umount

2019-09-03 Thread Peter Lieven
libnfs recently added support for unmounting. Add support in Qemu too. Signed-off-by: Peter Lieven --- block/nfs.c | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/block/nfs.c b/block/nfs.c index 0ec50953e4..9d30963fd8 100644 --- a/block/nfs.c +++ b/block/nfs.c @@ -1,7

[Qemu-block] [PATCH V3] block/vhdx: add check for truncated image files

2019-09-03 Thread Peter Lieven
qemu is currently not able to detect truncated vhdx image files. Add a basic check if all allocated blocks are reachable at open and report all errors during bdrv_co_check. Signed-off-by: Peter Lieven --- V3: - check for bdrv_getlength failure [Kevin] - use uint32_t for i [Kevin] - check

Re: [Qemu-block] [PATCH V2] block/vhdx: add check for truncated image files

2019-09-03 Thread Peter Lieven
Am 03.09.19 um 15:02 schrieb Kevin Wolf: Am 02.09.2019 um 17:24 hat Peter Lieven geschrieben: qemu is currently not able to detect truncated vhdx image files. Add a basic check if all allocated blocks are reachable at open and report all errors during bdrv_co_check. Signed-off-by: Peter Lieven

Re: [Qemu-block] [PATCH V2] block/vhdx: add check for truncated image files

2019-09-03 Thread Peter Lieven
Am 02.09.19 um 17:24 schrieb Peter Lieven: qemu is currently not able to detect truncated vhdx image files. Add a basic check if all allocated blocks are reachable at open and report all errors during bdrv_co_check. Signed-off-by: Peter Lieven --- V2: - add error reporting [Kevin] - use

[Qemu-block] [PATCH V2] block/vhdx: add check for truncated image files

2019-09-02 Thread Peter Lieven
qemu is currently not able to detect truncated vhdx image files. Add a basic check if all allocated blocks are reachable at open and report all errors during bdrv_co_check. Signed-off-by: Peter Lieven --- V2: - add error reporting [Kevin] - use bdrv_getlength instead

Re: [Qemu-block] [PATCH] block/vhdx: add check for truncated image files

2019-09-02 Thread Peter Lieven
Am 02.09.19 um 15:46 schrieb Kevin Wolf: Am 02.09.2019 um 15:15 hat Peter Lieven geschrieben: Am 02.09.19 um 15:07 schrieb Kevin Wolf: Am 29.08.2019 um 15:36 hat Peter Lieven geschrieben: qemu is currently not able to detect truncated vhdx image files. Add a basic check if all allocated

Re: [Qemu-block] [PATCH] block/vhdx: add check for truncated image files

2019-09-02 Thread Peter Lieven
Am 02.09.19 um 15:07 schrieb Kevin Wolf: Am 29.08.2019 um 15:36 hat Peter Lieven geschrieben: qemu is currently not able to detect truncated vhdx image files. Add a basic check if all allocated blocks are reachable to vhdx_co_check. Signed-off-by: Jan-Hendrik Frintrop Signed-off-by: Peter

[Qemu-block] [PATCH] block/vhdx: add check for truncated image files

2019-08-29 Thread Peter Lieven
qemu is currently not able to detect truncated vhdx image files. Add a basic check if all allocated blocks are reachable to vhdx_co_check. Signed-off-by: Jan-Hendrik Frintrop Signed-off-by: Peter Lieven --- block/vhdx.c | 19 +++ 1 file changed, 19 insertions(+) diff --git

Re: [Qemu-block] Virtio-BLK/SCSI write requests and data payload checksums

2019-01-11 Thread Peter Lieven
> Am 11.01.2019 um 08:14 schrieb Vadim Rozenfeld : > >> On Thu, 2019-01-10 at 14:57 +0100, Peter Lieven wrote: >>> Am 18.12.18 um 15:45 schrieb Peter Lieven: >>>> Am 18.12.18 um 14:15 schrieb Vadim Rozenfeld: >>>> Peter, I must be missing some

Re: [Qemu-block] Virtio-BLK/SCSI write requests and data payload checksums

2019-01-10 Thread Peter Lieven
Am 18.12.18 um 15:45 schrieb Peter Lieven: Am 18.12.18 um 14:15 schrieb Vadim Rozenfeld: Peter, I must be missing something here, but what exactly the problem is? The issue is that I see concurrent read requests coming in from Windows Guest with vioscsi as driver that have the same buffer

Re: [Qemu-block] Virtio-BLK/SCSI write requests and data payload checksums

2018-12-18 Thread Peter Lieven
Am 18.12.18 um 14:15 schrieb Vadim Rozenfeld: > Peter, I must be missing something here, but what exactly the problem > is? The issue is that I see concurrent read requests coming in from Windows Guest with vioscsi as driver that have the same buffer address from guest memory space. I noticed

Re: [Qemu-block] Virtio-BLK/SCSI write requests and data payload checksums

2018-12-18 Thread Peter Lieven
Am 18.12.18 um 10:34 schrieb Stefan Hajnoczi: > On Mon, Dec 17, 2018 at 04:19:53PM +0100, Peter Lieven wrote: >> Actually I don't know for sure that the address comes from the guest. In >> theory it could be that >> the request from the guest was less than 4096 byte a

Re: [Qemu-block] Virtio-BLK/SCSI write requests and data payload checksums

2018-12-17 Thread Peter Lieven
Am 17.12.18 um 15:48 schrieb Stefan Hajnoczi: On Sun, Dec 16, 2018 at 06:53:44PM +0100, Peter Lieven wrote: It turned out that for writes a bounce buffer is indeed always necessary. But what I found out is that it seems that even for reads it happens that the OS (Windows in this case) issues

Re: [Qemu-block] Virtio-BLK/SCSI write requests and data payload checksums

2018-12-16 Thread Peter Lieven
Von: Paolo Bonzini > > Yes, it's ugly but it's legal. It probably doesn't happen on real hardware > > that computes the checksum after or during DMA and has some kind of buffer > > inside the board. But on virt there is only one copy until we reach the > > actual physical hardware.

[Qemu-block] [PATCH V2] qemu-img: avoid overflow of min_sparse parameter

2018-07-13 Thread Peter Lieven
forbid values that are non multiple of 512 to avoid undesired behaviour. For instance, values between 1 and 511 were legal, but resulted in full allocation. Cc: qemu-sta...@nongnu.org Signed-off-by: Peter Lieven --- V1->V2: - use correct check for sval mod 512 == 0 - use BDRV_SECTOR_SIZE ma

[Qemu-block] [PATCH] qemu-img: avoid overflow of min_sparse parameter

2018-07-12 Thread Peter Lieven
forbid values that are non multiple of 512 to avoid undesired behaviour. Values between 1 and 511 were legal, but resulted in full allocation. Cc: qemu-sta...@nongnu.org Signed-off-by: Peter Lieven --- qemu-img.c | 16 +++- 1 file changed, 11 insertions(+), 5 deletions(-) diff --git

[Qemu-block] [PATCH V6] qemu-img: align result of is_allocated_sectors

2018-07-12 Thread Peter Lieven
. [1] https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk Signed-off-by: Peter Lieven --- V5->V6: - fix output of iotest 122 [Kevin] V4->V5: - is_zero is a bool [Kevin] - treat zero areas as allocated if i <= tail to avoid *pnum

Re: [Qemu-block] [PATCH V5] qemu-img: align result of is_allocated_sectors

2018-07-12 Thread Peter Lieven
Am 11.07.2018 um 10:25 schrieb Kevin Wolf: Am 10.07.2018 um 22:16 hat Peter Lieven geschrieben: Am 10.07.2018 um 17:31 schrieb Kevin Wolf : Am 10.07.2018 um 17:05 hat Peter Lieven geschrieben: We currently don't enforce that the sparse segments we detect during convert are aligned

Re: [Qemu-block] [PATCH V5] qemu-img: align result of is_allocated_sectors

2018-07-10 Thread Peter Lieven
> Am 10.07.2018 um 17:31 schrieb Kevin Wolf : > > Am 10.07.2018 um 17:05 hat Peter Lieven geschrieben: >> We currently don't enforce that the sparse segments we detect during convert >> are >> aligned. This leads to unnecessary and costly read-modify-write cycles e

[Qemu-block] [PATCH V5] qemu-img: align result of is_allocated_sectors

2018-07-10 Thread Peter Lieven
. [1] https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk Signed-off-by: Peter Lieven --- V4->V5: - is_zero is a bool [Kevin] - treat zero areas as allocated if i <= tail to avoid *pnum underflow [Kevin] V3->V4: - only focus o

Re: [Qemu-block] [PATCH V4] qemu-img: align result of is_allocated_sectors

2018-07-10 Thread Peter Lieven
Am 10.07.2018 um 14:28 schrieb Kevin Wolf: Am 07.07.2018 um 13:42 hat Peter Lieven geschrieben: We currently don't enforce that the sparse segments we detect during convert are aligned. This leads to unnecessary and costly read-modify-write cycles either internally in Qemu or in the background

[Qemu-block] [PATCH V4] qemu-img: align result of is_allocated_sectors

2018-07-07 Thread Peter Lieven
. [1] https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk Signed-off-by: Peter Lieven --- V3->V4: - only focus on the end offset in is_allocated_sectors [Kevin] V2->V3: - ensure that s.alignment is a power of 2 - correctly ha

Re: [Qemu-block] [PATCH V3] qemu-img: align result of is_allocated_sectors

2018-07-05 Thread Peter Lieven
> Am 05.07.2018 um 17:15 schrieb Kevin Wolf : > > Am 05.07.2018 um 12:52 hat Peter Lieven geschrieben: >> We currently don't enforce that the sparse segments we detect during convert >> are >> aligned. This leads to unnecessary and costly read-modify-write cycles e

[Qemu-block] [PATCH V3] qemu-img: align result of is_allocated_sectors

2018-07-05 Thread Peter Lieven
/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk Signed-off-by: Peter Lieven --- V2->V3: - ensure that s.alignment is a power of 2 - correctly handle n < alignment in is_allocated_sectors if sector_num % alignment > 0. V1->V2: - take the current s

[Qemu-block] [PATCH V2] qemu-img: align result of is_allocated_sectors

2018-07-03 Thread Peter Lieven
/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk Signed-off-by: Peter Lieven --- V1->V2: - take the current sector offset into account [Max] - try to figure out the target alignment [Max] qemu-img.c | 44 ++-- 1 file changed, 34 inserti

Re: [Qemu-block] [PATCH] qemu-img: align is_allocated_sectors to 4k

2018-06-25 Thread Peter Lieven
Am 11.06.2018 um 16:04 schrieb Max Reitz: > On 2018-06-11 15:59, Peter Lieven wrote: >> Am 11.06.2018 um 15:30 schrieb Max Reitz: >>> On 2018-06-07 14:46, Peter Lieven wrote: >>>> We currently don't enforce that the sparse segments we detect during >>>

Re: [Qemu-block] [PATCH] qemu-img: align is_allocated_sectors to 4k

2018-06-11 Thread Peter Lieven
Am 11.06.2018 um 16:04 schrieb Max Reitz: On 2018-06-11 15:59, Peter Lieven wrote: Am 11.06.2018 um 15:30 schrieb Max Reitz: On 2018-06-07 14:46, Peter Lieven wrote: We currently don't enforce that the sparse segments we detect during convert are aligned. This leads to unnecessary and costly

Re: [Qemu-block] [PATCH] qemu-img: align is_allocated_sectors to 4k

2018-06-11 Thread Peter Lieven
Am 11.06.2018 um 15:30 schrieb Max Reitz: On 2018-06-07 14:46, Peter Lieven wrote: We currently don't enforce that the sparse segments we detect during convert are aligned. This leads to unnecessary and costly read-modify-write cycles either internally in Qemu or in the background

[Qemu-block] [PATCH] qemu-img: align is_allocated_sectors to 4k

2018-06-07 Thread Peter Lieven
a total of about 15000 write requests. With this patch the 4600 additional read requests are eliminated. [1] https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk Signed-off-by: Peter Lieven --- qemu-img.c | 21 +++-- 1 file changed, 15

Re: [Qemu-block] [PATCH 4/5] migration/block: limit the number of parallel I/O requests

2018-03-20 Thread Peter Lieven
Am 08.03.2018 um 14:30 schrieb Peter Lieven: Am 08.03.2018 um 13:50 schrieb Juan Quintela: Peter Lieven <p...@kamp.de> wrote: the current implementation submits up to 512 I/O requests in parallel which is much too high especially for a background task. This patch adds a maximum limit of

Re: [Qemu-block] [Qemu-devel] [PATCH 3/5] migration/block: rename MAX_INFLIGHT_IO to MAX_IO_BUFFERS

2018-03-09 Thread Peter Lieven
Am 09.03.2018 um 15:58 schrieb Dr. David Alan Gilbert: > * Peter Lieven (p...@kamp.de) wrote: >> this actually limits (as the original commit message suggests) the >> number of I/O buffers that can be allocated and not the number >> of parallel (inflight) I/O requests. >

Re: [Qemu-block] [PATCH 4/5] migration/block: limit the number of parallel I/O requests

2018-03-08 Thread Peter Lieven
Am 08.03.2018 um 13:50 schrieb Juan Quintela: Peter Lieven <p...@kamp.de> wrote: the current implementation submits up to 512 I/O requests in parallel which is much too high especially for a background task. This patch adds a maximum limit of 16 I/O requests that can be submitted in pa

[Qemu-block] [PATCH 0/5] block migration fixes

2018-03-08 Thread Peter Lieven
Peter Lieven (5): migration: do not transfer ram during bulk storage migration migration/block: reset dirty bitmap before read in bulk phase migration/block: rename MAX_INFLIGHT_IO to MAX_IO_BUFFERS migration/block: limit the number of parallel I/O requests migration/block: compare only

[Qemu-block] [PATCH 1/5] migration: do not transfer ram during bulk storage migration

2018-03-08 Thread Peter Lieven
this patch makes the bulk phase of a block migration take place before we start transferring ram. As the bulk block migration can take a long time it's pointless to transfer ram during that phase. Signed-off-by: Peter Lieven <p...@kamp.de> Reviewed-by: Stefan Hajnoczi <stefa...@r

[Qemu-block] [PATCH 4/5] migration/block: limit the number of parallel I/O requests

2018-03-08 Thread Peter Lieven
the current implementation submits up to 512 I/O requests in parallel which is much too high especially for a background task. This patch adds a maximum limit of 16 I/O requests that can be submitted in parallel to avoid monopolizing the I/O device. Signed-off-by: Peter Lieven <p...@kamp

[Qemu-block] [PATCH 2/5] migration/block: reset dirty bitmap before read in bulk phase

2018-03-08 Thread Peter Lieven
Reset the dirty bitmap before reading to make sure we don't miss any new data. Cc: qemu-sta...@nongnu.org Signed-off-by: Peter Lieven <p...@kamp.de> --- migration/block.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/migration/block.c b/migration/block.c index 1

[Qemu-block] [PATCH 5/5] migration/block: compare only read blocks against the rate limiter

2018-03-08 Thread Peter Lieven
only read_done blocks are queued to be flushed to the migration stream. Submitted blocks are still in flight. Signed-off-by: Peter Lieven <p...@kamp.de> --- migration/block.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/migration/block.c b/migration/block.c

Re: [Qemu-block] block migration and dirty bitmap reset

2018-03-08 Thread Peter Lieven
Am 08.03.2018 um 10:01 schrieb Fam Zheng: On Thu, Mar 8, 2018 at 4:57 PM, Peter Lieven <p...@kamp.de> wrote: Am 08.03.2018 um 02:28 schrieb Fam Zheng <f...@redhat.com>: On Wed, 03/07 09:06, Peter Lieven wrote: Hi, while looking at the code I wonder if the bl

Re: [Qemu-block] block migration and dirty bitmap reset

2018-03-08 Thread Peter Lieven
> Am 08.03.2018 um 02:28 schrieb Fam Zheng <f...@redhat.com>: > >> On Wed, 03/07 09:06, Peter Lieven wrote: >> Hi, >> >> while looking at the code I wonder if the blk_aio_preadv and the >> bdrv_reset_dirty_bitmap order

Re: [Qemu-block] Limiting coroutine stack usage

2018-03-07 Thread Peter Lieven
Am 06.03.2018 um 12:51 schrieb Stefan Hajnoczi: > On Tue, Feb 20, 2018 at 06:04:02PM +0100, Peter Lieven wrote: >> I remember we discussed a long time ago to limit the stack usage of all >> functions that are executed in a coroutine >> context to a very low value to be

Re: [Qemu-block] block migration and MAX_IN_FLIGHT_IO

2018-03-07 Thread Peter Lieven
Am 07.03.2018 um 10:47 schrieb Stefan Hajnoczi: > On Wed, Mar 7, 2018 at 7:55 AM, Peter Lieven <p...@kamp.de> wrote: >> Am 06.03.2018 um 17:35 schrieb Peter Lieven: >>> Am 06.03.2018 um 17:07 schrieb Stefan Hajnoczi: >>>> On Mon, Mar 05, 2018 at 02:52:16PM

[Qemu-block] block migration and dirty bitmap reset

2018-03-07 Thread Peter Lieven
Hi, while looking at the code I wonder if the blk_aio_preadv and the bdrv_reset_dirty_bitmap order must be swapped in mig_save_device_bulk: qemu_mutex_lock_iothread(); aio_context_acquire(blk_get_aio_context(bmds->blk)); blk->aiocb = blk_aio_preadv(bb, cur_sector * BDRV_SECTOR_SIZE,
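The race behind this ordering question can be modeled with a single dirty bit. If the bitmap is reset *after* the read completes, a guest write that lands while the read is in flight has its dirty bit cleared and the block is never re-sent; resetting *before* the read preserves it. This toy model is only an illustration of the argument in the thread, not QEMU code.

```c
#include <assert.h>
#include <stdbool.h>

/* One block, one dirty bit. A guest write arrives while the bulk read
 * is "in flight". Returns true if the write will be re-transmitted
 * later (its dirty bit survives), false if it is silently lost. */
static bool write_survives(bool reset_before_read)
{
    bool dirty = false;

    if (reset_before_read) {
        dirty = false;   /* reset dirty bitmap, then start the read */
    }
    /* read in flight: a concurrent guest write dirties the block */
    dirty = true;
    if (!reset_before_read) {
        dirty = false;   /* reset after read clears the new write too */
    }
    return dirty;        /* dirty bit still set => block is re-sent */
}
```

Under this model, reset-before-read keeps the concurrent write visible to the dirty phase, while reset-after-read loses it, which is why the ordering in mig_save_device_bulk matters.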

Re: [Qemu-block] block migration and MAX_IN_FLIGHT_IO

2018-03-06 Thread Peter Lieven
Am 06.03.2018 um 17:35 schrieb Peter Lieven: > Am 06.03.2018 um 17:07 schrieb Stefan Hajnoczi: >> On Mon, Mar 05, 2018 at 02:52:16PM +, Dr. David Alan Gilbert wrote: >>> * Peter Lieven (p...@kamp.de) wrote: >>>> Am 05.03.2018 um 12:45 schrieb Stefan Hajnoczi: &g

Re: [Qemu-block] block migration and MAX_IN_FLIGHT_IO

2018-03-06 Thread Peter Lieven
Am 06.03.2018 um 17:07 schrieb Stefan Hajnoczi: > On Mon, Mar 05, 2018 at 02:52:16PM +, Dr. David Alan Gilbert wrote: >> * Peter Lieven (p...@kamp.de) wrote: >>> Am 05.03.2018 um 12:45 schrieb Stefan Hajnoczi: >>>> On Thu, Feb 22, 2018 at 12:13:50PM +0100, Peter

Re: [Qemu-block] block migration and MAX_IN_FLIGHT_IO

2018-03-06 Thread Peter Lieven
Am 05.03.2018 um 15:52 schrieb Dr. David Alan Gilbert: > * Peter Lieven (p...@kamp.de) wrote: >> Am 05.03.2018 um 12:45 schrieb Stefan Hajnoczi: >>> On Thu, Feb 22, 2018 at 12:13:50PM +0100, Peter Lieven wrote: >>>> I stumbled across the MAX_INFLIGHT_IO fie

Re: [Qemu-block] block migration and MAX_IN_FLIGHT_IO

2018-03-05 Thread Peter Lieven
Am 05.03.2018 um 12:45 schrieb Stefan Hajnoczi: > On Thu, Feb 22, 2018 at 12:13:50PM +0100, Peter Lieven wrote: >> I stumbled across the MAX_INFLIGHT_IO field that was introduced in 2015 and >> was curious what was the reason >> to choose 512MB as readahead? The que

Re: [Qemu-block] [Qemu-devel] Limiting coroutine stack usage

2018-02-22 Thread Peter Lieven
Am 22.02.2018 um 13:03 schrieb Daniel P. Berrangé: > On Thu, Feb 22, 2018 at 01:02:05PM +0100, Peter Lieven wrote: >> Am 22.02.2018 um 13:00 schrieb Daniel P. Berrangé: >>> On Thu, Feb 22, 2018 at 12:51:58PM +0100, Peter Lieven wrote: >>>> Am 22.02.2018 um 12

Re: [Qemu-block] [Qemu-devel] Limiting coroutine stack usage

2018-02-22 Thread Peter Lieven
Am 22.02.2018 um 13:00 schrieb Daniel P. Berrangé: > On Thu, Feb 22, 2018 at 12:51:58PM +0100, Peter Lieven wrote: >> Am 22.02.2018 um 12:40 schrieb Daniel P. Berrangé: >>> On Thu, Feb 22, 2018 at 12:32:04PM +0100, Kevin Wolf wrote: >>>> Am 22.02.2018 um 12:0

Re: [Qemu-block] [Qemu-devel] Limiting coroutine stack usage

2018-02-22 Thread Peter Lieven
Am 22.02.2018 um 12:40 schrieb Daniel P. Berrangé: > On Thu, Feb 22, 2018 at 12:32:04PM +0100, Kevin Wolf wrote: >> Am 22.02.2018 um 12:01 hat Peter Lieven geschrieben: >>> Am 22.02.2018 um 11:57 schrieb Kevin Wolf: >>>> Am 20.02.2018 um 22:54 hat Paolo Bonzini gesch

Re: [Qemu-block] Limiting coroutine stack usage

2018-02-22 Thread Peter Lieven
Am 22.02.2018 um 12:32 schrieb Kevin Wolf: > Am 22.02.2018 um 12:01 hat Peter Lieven geschrieben: >> Am 22.02.2018 um 11:57 schrieb Kevin Wolf: >>> Am 20.02.2018 um 22:54 hat Paolo Bonzini geschrieben: >>>> On 20/02/2018 18:04, Peter Lieven wrote: >>>>

[Qemu-block] block migration and MAX_IN_FLIGHT_IO

2018-02-22 Thread Peter Lieven
Hi, I stumbled across the MAX_INFLIGHT_IO field that was introduced in 2015 and was curious why 512MB was chosen as readahead. I found that the source VM gets very unresponsive I/O-wise while the initial 512MB are read, and furthermore seems to stay
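The 512MB figure follows from the numbers mentioned in these threads: up to 512 buffered requests (the old MAX_INFLIGHT_IO) of 1 MiB each. The constants below are inferred from the discussion, not copied from the QEMU tree.

```c
#include <assert.h>

/* Worst-case memory buffered for bulk-phase readahead. */
static long bulk_readahead_bytes(long max_requests, long block_size)
{
    return max_requests * block_size;
}
```

With the old limit this comes out to 512 requests x 1 MiB = 512 MiB buffered ahead of the rate limiter, which matches the unresponsiveness reported here; the later rename to MAX_IO_BUFFERS and the 16-request cap shrink that footprint considerably.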

Re: [Qemu-block] Limiting coroutine stack usage

2018-02-22 Thread Peter Lieven
Am 22.02.2018 um 11:57 schrieb Kevin Wolf: > Am 20.02.2018 um 22:54 hat Paolo Bonzini geschrieben: >> On 20/02/2018 18:04, Peter Lieven wrote: >>> Hi, >>> >>> I remember we discussed a long time ago to limit the stack usage of all >>> functions
