> On 19.01.2021 at 15:20, Jason Dillaman wrote:
>
> On Tue, Jan 19, 2021 at 4:36 AM Peter Lieven wrote:
>>> On 18.01.21 at 23:33, Jason Dillaman wrote:
>>> On Fri, Jan 15, 2021 at 10:39 AM Peter Lieven wrote:
>>>> On 15.01.21 at 16:27, Jas
On 18.01.21 at 23:33, Jason Dillaman wrote:
> On Fri, Jan 15, 2021 at 10:39 AM Peter Lieven wrote:
>> On 15.01.21 at 16:27, Jason Dillaman wrote:
>>> On Thu, Jan 14, 2021 at 2:59 PM Peter Lieven wrote:
>>>> On 14.01.21 at 20:19, Jason Dillaman wrote:
>>
On 15.01.21 at 16:27, Jason Dillaman wrote:
> On Thu, Jan 14, 2021 at 2:59 PM Peter Lieven wrote:
>> On 14.01.21 at 20:19, Jason Dillaman wrote:
>>> On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>>>> since we implement byte interfaces and librbd supports ai
On 14.01.21 at 20:19, Jason Dillaman wrote:
> On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>> since we implement byte interfaces and librbd supports aio on byte
>> granularity we can lift
>> the 512 byte alignment.
>>
>> Signed-off-by: Peter Lieven
>
On 14.01.21 at 20:18, Jason Dillaman wrote:
> On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>> Signed-off-by: Peter Lieven
>> ---
>> block/rbd.c | 21 +++--
>> 1 file changed, 19 insertions(+), 2 deletions(-)
>>
>> diff --g
On 14.01.21 at 20:19, Jason Dillaman wrote:
> On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>> Signed-off-by: Peter Lieven
>> ---
>> block/rbd.c | 31 ++-
>> 1 file changed, 30 insertions(+), 1 deletion(-)
>>
>> diff --g
On 14.01.21 at 20:19, Jason Dillaman wrote:
> On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>> Signed-off-by: Peter Lieven
>> ---
>> block/rbd.c | 247 ++--
>> 1 file changed, 84 insertions(+), 163 deletions(
On 14.01.21 at 20:18, Jason Dillaman wrote:
> On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>> Signed-off-by: Peter Lieven
>> ---
>> block/rbd.c | 10 +-
>> 1 file changed, 1 insertion(+), 9 deletions(-)
>>
>> diff --git a/block/
Signed-off-by: Peter Lieven
---
block/rbd.c | 31 ++-
1 file changed, 30 insertions(+), 1 deletion(-)
diff --git a/block/rbd.c b/block/rbd.c
index 2d77d0007f..27b4404adf 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -63,7 +63,8 @@ typedef enum {
RBD_AIO_READ
Signed-off-by: Peter Lieven
---
block/rbd.c | 21 +++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index a2da70e37f..27b232f4d8 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -91,6 +91,7 @@ typedef struct BDRVRBDState {
char
since we implement byte interfaces and librbd supports aio on byte granularity,
we can lift the 512 byte alignment.
Signed-off-by: Peter Lieven
---
block/rbd.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index 27b4404adf..8673e8f553 100644
--- a/block/rbd.c
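The hunk above is cut off by the archive; as a hedged sketch only, lifting the
512 byte alignment in a QEMU block driver amounts to no longer forcing a
512-byte request alignment in the driver's .bdrv_refresh_limits hook. The body
below is illustrative, not the exact patch:

/* Hedged sketch: let the block layer pass byte-granularity requests through
 * instead of padding them to 512-byte boundaries. */
static void qemu_rbd_refresh_limits(BlockDriverState *bs, Error **errp)
{
    /* previously: bs->bl.request_alignment = 512; */
    bs->bl.request_alignment = 1;   /* librbd accepts byte-granular aio */
}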
Signed-off-by: Peter Lieven
---
block/rbd.c | 10 +-
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index bc8cf8af9b..a2da70e37f 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -956,15 +956,7 @@ static int qemu_rbd_getinfo(BlockDriverState *bs
Signed-off-by: Peter Lieven
---
block/rbd.c | 18 +++---
1 file changed, 7 insertions(+), 11 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index 650e27c351..bc8cf8af9b 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -90,6 +90,7 @@ typedef struct BDRVRBDState {
char *snap
even luminous (version 12.2) has been unmaintained for over 3 years now.
Bump the requirement to get rid of the ifdef'ry in the code.
Signed-off-by: Peter Lieven
---
block/rbd.c | 120
configure | 7 +--
2 files changed, 12 insertions(+), 115
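As a hedged illustration of what such a requirement bump boils down to, a
compile-time gate along the following lines could be used from configure. The
librbd version tied to luminous (1.12.0 here) is an assumption, not taken from
the patch itself:

#include <rbd/librbd.h>

#if LIBRBD_VERSION_CODE < LIBRBD_VERSION(1, 12, 0)
#error "librbd older than the luminous release is not supported"
#endif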
Signed-off-by: Peter Lieven
---
block/rbd.c | 247 ++--
1 file changed, 84 insertions(+), 163 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index 27b232f4d8..2d77d0007f 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -66,22 +66,6 @@ typedef
and
ifdef'ry in the code.
Peter Lieven (7):
block/rbd: bump librbd requirement to luminous release
block/rbd: store object_size in BDRVRBDState
block/rbd: use stored image_size in qemu_rbd_getlength
block/rbd: add bdrv_{attach,detach}_aio_context
block/rbd: migrate from aio to coroutines
block
On 01.12.20 at 13:40, Peter Lieven wrote:
> Hi,
>
>
> i would like to submit a series for 6.0 which will convert the aio hooks to
> native coroutine hooks and add write zeroes support.
>
> The aio routines are nowadays just an emulation on top of coroutines which
>
nfs_client_open returns the file size in sectors. This effectively
makes it impossible to open files larger than 1TB.
Fixes: a1a42af422d46812f1f0cebe6b230c20409a3731
Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Lieven
---
block/nfs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion
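A hedged, standalone illustration of the limit described above: a file size
reported as a count of 512-byte sectors in a plain int tops out at INT_MAX
sectors, i.e. roughly 1 TiB. This demo is not block/nfs.c code:

#include <inttypes.h>
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* INT_MAX sectors of 512 bytes each is the largest representable size */
    int64_t max_bytes = (int64_t)INT_MAX * 512;
    printf("max size as int sector count: %" PRId64 " bytes (~%" PRId64 " GiB)\n",
           max_bytes, max_bytes >> 30);
    return 0;
}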
Hi,
I would like to submit a series for 6.0 which will convert the aio hooks to
native coroutine hooks and add write zeroes support.
The aio routines are nowadays just an emulation on top of coroutines which add
additional overhead.
For this I would like to lift the minimum librbd
On 11.11.20 at 16:39, Maxim Levitsky wrote:
> This helps avoid unneeded writes and discards.
>
> Signed-off-by: Maxim Levitsky
> ---
> qemu-img.c | 13 -
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/qemu-img.c b/qemu-img.c
> index c2c56fc797..7e9b0f659f 100644
On 15.09.20 at 19:12, Yonggang Luo wrote:
> These compiling errors are fixed:
> ../block/nfs.c:27:10: fatal error: poll.h: No such file or directory
> 27 | #include <poll.h>
> | ^~~~
> compilation terminated.
>
> ../block/nfs.c:63:5: error: unknown type name 'blkcnt_t'
>63 |
> On 10.09.2020 at 22:36, 罗勇刚(Yonggang Luo) wrote:
>
>
>
>
>> On Fri, Sep 11, 2020 at 4:16 AM Peter Lieven wrote:
>>
>>
>>> On 10.09.2020 at 12:30, Yonggang Luo wrote:
>>>
>>> These compiling errors are fixed:
>>>
> On 10.09.2020 at 12:30, Yonggang Luo wrote:
>
> These compiling errors are fixed:
> ../block/nfs.c:27:10: fatal error: poll.h: No such file or directory
> 27 | #include <poll.h>
> | ^~~~
> compilation terminated.
>
> ../block/nfs.c:63:5: error: unknown type name 'blkcnt_t'
>
> On 10.09.2020 at 18:58, Max Reitz wrote:
>
> On 01.09.20 14:51, Peter Lieven wrote:
>> in case of large continuous areas that share the same allocation status
>> it happens that the value of s->sector_next_status is unaligned to the
>> cluster size or even r
> On 10.09.2020 at 09:14, 罗勇刚(Yonggang Luo) wrote:
>
>
>
> On Thu, Sep 10, 2020 at 3:01 PM Peter Lieven wrote:
>
>
> > On 09.09.2020 at 11:45, Yonggang Luo wrote:
> >
> > These compiling errors are fixed:
> > ../block/nfs.c:27:10: f
> On 09.09.2020 at 11:45, Yonggang Luo wrote:
>
> These compiling errors are fixed:
> ../block/nfs.c:27:10: fatal error: poll.h: No such file or directory
> 27 | #include <poll.h>
> | ^~~~
> compilation terminated.
>
> ../block/nfs.c:63:5: error: unknown type name 'blkcnt_t'
>
Signed-off-by: Peter Lieven
---
qemu-img.c | 22 ++
1 file changed, 22 insertions(+)
diff --git a/qemu-img.c b/qemu-img.c
index 5308773811..ed17238c36 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -1665,6 +1665,7 @@ enum ImgConvertBlockStatus {
typedef struct ImgConver
On 17.08.20 at 15:44, Eric Blake wrote:
On 8/17/20 7:32 AM, Peter Lieven wrote:
Hi,
I am currently debugging a performance issue in qemu-img convert. I think I
have found the cause and will send a patch later.
But is there any reason why BDRV_REQUEST_MAX_SECTORS is not at least aligned
Hi,
I am currently debugging a performance issue in qemu-img convert. I think I
have found the cause and will send a patch later.
But is there any reason why BDRV_REQUEST_MAX_SECTORS is not at least aligned
down to 8 (4k sectors)?
Any operation that is not able to determine an optimal or
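A hedged, standalone illustration of the alignment question raised above; the
two macros are defined locally as a simplified stand-in for what QEMU derives
from INT_MAX, not copied from the headers:

#include <limits.h>
#include <stdio.h>

#define BDRV_SECTOR_BITS 9
/* simplified local stand-in for QEMU's definition */
#define BDRV_REQUEST_MAX_SECTORS (INT_MAX >> BDRV_SECTOR_BITS)

int main(void)
{
    printf("BDRV_REQUEST_MAX_SECTORS = %d (%% 8 = %d)\n",
           BDRV_REQUEST_MAX_SECTORS, BDRV_REQUEST_MAX_SECTORS % 8);
    printf("aligned down to 8 sectors (4k): %d\n",
           BDRV_REQUEST_MAX_SECTORS & ~7);
    return 0;
}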
> On 23.01.2020 at 22:29, Felipe Franciosi wrote:
>
> Hi,
>
>> On Jan 23, 2020, at 5:46 PM, Philippe Mathieu-Daudé
>> wrote:
>>
>>> On 1/23/20 1:44 PM, Felipe Franciosi wrote:
>>> When querying an iSCSI server for the provisioning status of blocks (via
>>> GET LBA STATUS), Qemu only
goto out_unlock;
>> }
>>
>
> Naive question: Does the specification allow for such a response? Is
> this inherently an error?
The spec says the answer SHALL contain at least one LBA status descriptor
(lbasd), so I think treating zero as an error is okay.
Anyway,
Reviewed-by: Peter Lieven
Peter
On 17.12.19 at 16:52, Kevin Wolf wrote:
On 17.12.2019 at 15:14, Peter Lieven wrote:
I have a vserver running Qemu 4.0 that seems to reproducibly hit the
following assertion:
bdrv_co_pwritev: Assertion `!waited || !use_local_qiov' failed.
I noticed that the padding code
Hi all,
I have a vserver running Qemu 4.0 that seems to reproducibly hit the following
assertion:
bdrv_co_pwritev: Assertion `!waited || !use_local_qiov' failed.
I noticed that the padding code was recently reworked in commit 2e2ad02f2c.
In the new code I cannot find a similar assertion.
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable at open and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
---
V4: - allow partial last blocks [Kevin]
- report offsets in error messages [Kevin
On 13.09.19 at 11:51, Max Reitz wrote:
> On 10.09.19 17:41, Peter Lieven wrote:
>> nfs_close is a sync call from libnfs and has its own event
>> handler polling on the nfs FD. Avoid having both QEMU and libnfs
>> interfere here.
>>
>> CC: qemu-sta...@nongnu.o
On 11.09.19 at 09:48, Max Reitz wrote:
> On 10.09.19 17:41, Peter Lieven wrote:
>> libnfs recently added support for unmounting. Add support
>> in Qemu too.
>>
>> Signed-off-by: Peter Lieven
>> ---
>> block/nfs.c | 3 +++
>> 1 file changed, 3 inserti
nfs_close is a sync call from libnfs and has its own event
handler polling on the nfs FD. Avoid having both QEMU and libnfs
interfere here.
CC: qemu-sta...@nongnu.org
Signed-off-by: Peter Lieven
---
block/nfs.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/block
add support for the NFSv3 umount call. V2 adds a patch that fixes
the order of the aio teardown. The addition of the NFS umount
call unmasked that bug.
Peter Lieven (2):
block/nfs: tear down aio before nfs_close
block/nfs: add support for nfs_umount
block/nfs.c | 9 +++--
1 file changed, 7
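A hedged sketch of the teardown order this series is about: stop QEMU's own fd
handling first, then close the handle, unmount, and destroy the libnfs context,
so QEMU's event loop and libnfs' internal synchronous polling do not interfere.
The NFSClient subset below is illustrative, not copied from block/nfs.c:

#include <nfsc/libnfs.h>

typedef struct NFSClient {          /* minimal illustrative subset */
    struct nfs_context *context;
    struct nfsfh *fh;
} NFSClient;

static void nfs_client_teardown(NFSClient *client)
{
    if (client->context) {
        /* (1) detach the nfs fd from QEMU's aio handlers first (omitted) */
        if (client->fh) {
            nfs_close(client->context, client->fh);  /* sync libnfs call */
        }
        nfs_umount(client->context);                 /* NFSv3 umount call */
        nfs_destroy_context(client->context);
        client->context = NULL;
    }
}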
libnfs recently added support for unmounting. Add support
in Qemu too.
Signed-off-by: Peter Lieven
---
block/nfs.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/block/nfs.c b/block/nfs.c
index 2c98508275..f39acfdb28 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -398,6 +398,9 @@ static
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable at open and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
---
V4: - allow partial last blocks [Kevin]
- report offsets in error messages [Kevin
On 10.09.19 at 13:15, Kevin Wolf wrote:
On 05.09.2019 at 12:02, Peter Lieven wrote:
On 04.09.19 at 16:09, Kevin Wolf wrote:
On 03.09.2019 at 15:35, Peter Lieven wrote:
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated
On 05.09.19 at 12:28, ronnie sahlberg wrote:
On Thu, Sep 5, 2019 at 8:16 PM Peter Lieven wrote:
On 05.09.19 at 12:05, ronnie sahlberg wrote:
On Thu, Sep 5, 2019 at 7:43 PM Peter Lieven wrote:
On 04.09.19 at 11:34, Kevin Wolf wrote:
On 03.09.2019 at 21:52, Peter Lieven wrote
On 05.09.19 at 12:05, ronnie sahlberg wrote:
On Thu, Sep 5, 2019 at 7:43 PM Peter Lieven wrote:
On 04.09.19 at 11:34, Kevin Wolf wrote:
On 03.09.2019 at 21:52, Peter Lieven wrote:
On 03.09.2019 at 16:56, Kevin Wolf wrote:
On 03.09.2019 at 15:44, Peter Lieven wrote
On 04.09.19 at 16:09, Kevin Wolf wrote:
On 03.09.2019 at 15:35, Peter Lieven wrote:
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable at open and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
On 04.09.19 at 16:09, Kevin Wolf wrote:
On 03.09.2019 at 15:35, Peter Lieven wrote:
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable at open and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
On 04.09.19 at 11:34, Kevin Wolf wrote:
On 03.09.2019 at 21:52, Peter Lieven wrote:
On 03.09.2019 at 16:56, Kevin Wolf wrote:
On 03.09.2019 at 15:44, Peter Lieven wrote:
libnfs recently added support for unmounting. Add support
in Qemu too.
Signed-off-by: Peter Lieven
> On 03.09.2019 at 16:56, Kevin Wolf wrote:
>
> On 03.09.2019 at 15:44, Peter Lieven wrote:
>> libnfs recently added support for unmounting. Add support
>> in Qemu too.
>>
>> Signed-off-by: Peter Lieven
>
> Looks trivial enough to revie
libnfs recently added support for unmounting. Add support
in Qemu too.
Signed-off-by: Peter Lieven
---
block/nfs.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/block/nfs.c b/block/nfs.c
index 0ec50953e4..9d30963fd8 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -1,7
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable at open and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
---
V3: - check for bdrv_getlength failure [Kevin]
- use uint32_t for i [Kevin]
- check
On 03.09.19 at 15:02, Kevin Wolf wrote:
On 02.09.2019 at 17:24, Peter Lieven wrote:
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable at open and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
On 02.09.19 at 17:24, Peter Lieven wrote:
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable at open and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
---
V2: - add error reporting [Kevin]
- use
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable at open and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
---
V2: - add error reporting [Kevin]
- use bdrv_getlength instead
On 02.09.19 at 15:46, Kevin Wolf wrote:
On 02.09.2019 at 15:15, Peter Lieven wrote:
On 02.09.19 at 15:07, Kevin Wolf wrote:
On 29.08.2019 at 15:36, Peter Lieven wrote:
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated
On 02.09.19 at 15:07, Kevin Wolf wrote:
On 29.08.2019 at 15:36, Peter Lieven wrote:
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable to vhdx_co_check.
Signed-off-by: Jan-Hendrik Frintrop
Signed-off-by: Peter
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable to vhdx_co_check.
Signed-off-by: Jan-Hendrik Frintrop
Signed-off-by: Peter Lieven
---
block/vhdx.c | 19 +++
1 file changed, 19 insertions(+)
diff --git
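A hedged sketch of the reachability check described above: a truncated image is
detected because an allocated payload block would extend past the end of the
file. Names are illustrative, not the actual block/vhdx.c code, and the real
check additionally tolerates a partially present last block (the "allow partial
last blocks" item from V4):

#include <stdbool.h>
#include <stdint.h>

static bool vhdx_block_reachable(uint64_t block_file_offset,
                                 uint32_t block_size,
                                 int64_t image_file_length)
{
    /* an allocated block must lie entirely inside the image file */
    return block_file_offset + block_size <= (uint64_t)image_file_length;
}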
> On 11.01.2019 at 08:14, Vadim Rozenfeld wrote:
>
>> On Thu, 2019-01-10 at 14:57 +0100, Peter Lieven wrote:
>>> On 18.12.18 at 15:45, Peter Lieven wrote:
>>>> On 18.12.18 at 14:15, Vadim Rozenfeld wrote:
>>>> Peter, I must be missing some
On 18.12.18 at 15:45, Peter Lieven wrote:
On 18.12.18 at 14:15, Vadim Rozenfeld wrote:
Peter, I must be missing something here, but what exactly is the problem?
The issue is that I see concurrent read requests coming in from a Windows guest
with vioscsi as the driver that
have the same buffer
On 18.12.18 at 14:15, Vadim Rozenfeld wrote:
> Peter, I must be missing something here, but what exactly is the problem?
The issue is that I see concurrent read requests coming in from a Windows guest
with vioscsi as the driver that
have the same buffer address from guest memory space. I noticed
On 18.12.18 at 10:34, Stefan Hajnoczi wrote:
> On Mon, Dec 17, 2018 at 04:19:53PM +0100, Peter Lieven wrote:
>> Actually I don't know for sure that the address comes from the guest. In
>> theory it could be that
>> the request from the guest was less than 4096 byte a
On 17.12.18 at 15:48, Stefan Hajnoczi wrote:
On Sun, Dec 16, 2018 at 06:53:44PM +0100, Peter Lieven wrote:
It turned out that for writes a bounce buffer is indeed always necessary. But
I found that even for reads it happens that the OS (Windows in this case)
issues
From: Paolo Bonzini
> > Yes, it's ugly but it's legal. It probably doesn't happen on real hardware
> > that computes the checksum after or during DMA and has some kind of buffer
> > inside the board. But on virt there is only one copy until we reach the
> > actual physical hardware.
forbid values that are not a multiple of 512 to avoid undesired behaviour.
For instance, values between 1 and 511 were legal, but resulted in full
allocation.
Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Lieven
---
V1->V2: - use correct check for sval mod 512 == 0
- use BDRV_SECTOR_SIZE ma
forbid values that are not a multiple of 512 to avoid undesired behaviour.
Values between 1 and 511 were legal, but resulted in full allocation.
Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Lieven
---
qemu-img.c | 16 +++-
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git
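A hedged sketch of the validation described above for the sparse-size option;
the helper below is a standalone stand-in, not the actual qemu-img.c hunk:

#include <stdbool.h>
#include <stdint.h>

#define SECTOR_SIZE 512

static bool sparse_size_valid(int64_t sval)
{
    /* reject negative values and values that are not sector aligned */
    return sval >= 0 && sval % SECTOR_SIZE == 0;
}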
.
[1]
https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk
Signed-off-by: Peter Lieven
---
V5->V6: - fix output of iotest 122 [Kevin]
V4->V5: - is_zero is a bool [Kevin]
- treat zero areas as allocated if i <= tail to avoid *pnum
On 11.07.2018 at 10:25, Kevin Wolf wrote:
On 10.07.2018 at 22:16, Peter Lieven wrote:
On 10.07.2018 at 17:31, Kevin Wolf wrote:
On 10.07.2018 at 17:05, Peter Lieven wrote:
We currently don't enforce that the sparse segments we detect during convert are
aligned
On 11.07.2018 at 10:25, Kevin Wolf wrote:
On 10.07.2018 at 22:16, Peter Lieven wrote:
On 10.07.2018 at 17:31, Kevin Wolf wrote:
On 10.07.2018 at 17:05, Peter Lieven wrote:
We currently don't enforce that the sparse segments we detect during convert are
aligned
> On 10.07.2018 at 17:31, Kevin Wolf wrote:
>
> On 10.07.2018 at 17:05, Peter Lieven wrote:
>> We currently don't enforce that the sparse segments we detect during convert
>> are
>> aligned. This leads to unnecessary and costly read-modify-write cycles e
.
[1]
https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk
Signed-off-by: Peter Lieven
---
V4->V5: - is_zero is a bool [Kevin]
- treat zero areas as allocated if i <= tail to avoid *pnum underflow
[Kevin]
V3->V4: - only focus o
On 10.07.2018 at 14:28, Kevin Wolf wrote:
On 07.07.2018 at 13:42, Peter Lieven wrote:
We currently don't enforce that the sparse segments we detect during convert are
aligned. This leads to unnecessary and costly read-modify-write cycles either
internally in Qemu or in the background
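A hedged sketch of the idea behind this series: shrink a detected zero run so
it only covers whole target-alignment units, leaving the unaligned head and
tail to the neighbouring data writes and thereby avoiding read-modify-write on
the target. Names are illustrative, not the actual qemu-img helpers:

#include <stdint.h>

static void clamp_sparse_run(int64_t *start, int64_t *length, int64_t alignment)
{
    int64_t aligned_start = (*start + alignment - 1) / alignment * alignment;
    int64_t aligned_end = (*start + *length) / alignment * alignment;

    if (aligned_end <= aligned_start) {
        *length = 0;                           /* run too small, keep it as data */
    } else {
        *start = aligned_start;                /* drop the unaligned head ... */
        *length = aligned_end - aligned_start; /* ... and tail */
    }
}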
.
[1]
https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk
Signed-off-by: Peter Lieven
---
V3->V4: - only focus on the end offset in is_allocated_sectors [Kevin]
V2->V3: - ensure that s.alignment is a power of 2
- correctly ha
> On 05.07.2018 at 17:15, Kevin Wolf wrote:
>
> On 05.07.2018 at 12:52, Peter Lieven wrote:
>> We currently don't enforce that the sparse segments we detect during convert
>> are
>> aligned. This leads to unnecessary and costly read-modify-write cycles e
/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk
Signed-off-by: Peter Lieven
---
V2->V3: - ensure that s.alignment is a power of 2
- correctly handle n < alignment in is_allocated_sectors if
sector_num % alignment > 0.
V1->V2: - take the current s
/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk
Signed-off-by: Peter Lieven
---
V1->V2: - take the current sector offset into account [Max]
- try to figure out the target alignment [Max]
qemu-img.c | 44 ++--
1 file changed, 34 inserti
On 11.06.2018 at 16:04, Max Reitz wrote:
> On 2018-06-11 15:59, Peter Lieven wrote:
>> On 11.06.2018 at 15:30, Max Reitz wrote:
>>> On 2018-06-07 14:46, Peter Lieven wrote:
>>>> We currently don't enforce that the sparse segments we detect during
>>>
On 11.06.2018 at 16:04, Max Reitz wrote:
On 2018-06-11 15:59, Peter Lieven wrote:
On 11.06.2018 at 15:30, Max Reitz wrote:
On 2018-06-07 14:46, Peter Lieven wrote:
We currently don't enforce that the sparse segments we detect during
convert are
aligned. This leads to unnecessary and costly
On 11.06.2018 at 15:30, Max Reitz wrote:
On 2018-06-07 14:46, Peter Lieven wrote:
We currently don't enforce that the sparse segments we detect during convert are
aligned. This leads to unnecessary and costly read-modify-write cycles either
internally in Qemu or in the background
a total of about 15000 write requests. With this patch the 4600 additional
read requests are eliminated.
[1]
https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk
Signed-off-by: Peter Lieven
---
qemu-img.c | 21 +++--
1 file changed, 15
On 08.03.2018 at 14:30, Peter Lieven wrote:
On 08.03.2018 at 13:50, Juan Quintela wrote:
Peter Lieven <p...@kamp.de> wrote:
the current implementation submits up to 512 I/O requests in parallel
which is much too high, especially for a background task.
This patch adds a maximum limit of
On 09.03.2018 at 15:58, Dr. David Alan Gilbert wrote:
> * Peter Lieven (p...@kamp.de) wrote:
>> this actually limits (as the original commit mesage suggests) the
>> number of I/O buffers that can be allocated and not the number
>> of parallel (inflight) I/O requests.
>&
On 08.03.2018 at 13:50, Juan Quintela wrote:
Peter Lieven <p...@kamp.de> wrote:
the current implementation submits up to 512 I/O requests in parallel
which is much too high, especially for a background task.
This patch adds a maximum limit of 16 I/O requests that can
be submitted in pa
Peter Lieven (5):
migration: do not transfer ram during bulk storage migration
migration/block: reset dirty bitmap before read in bulk phase
migration/block: rename MAX_INFLIGHT_IO to MAX_IO_BUFFERS
migration/block: limit the number of parallel I/O requests
migration/block: compare only
this patch makes the bulk phase of a block migration take place before we
start transferring ram. As the bulk block migration can take a long time,
it's pointless to transfer ram during that phase.
Signed-off-by: Peter Lieven <p...@kamp.de>
Reviewed-by: Stefan Hajnoczi <stefa...@r
the current implementation submits up to 512 I/O requests in parallel,
which is much too high, especially for a background task.
This patch adds a maximum limit of 16 I/O requests that can
be submitted in parallel to avoid monopolizing the I/O device.
Signed-off-by: Peter Lieven <p...@kamp
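A hedged sketch of the cap described above: never have more than a small, fixed
number of requests in flight. The two extern helpers are hypothetical stand-ins
for the submit and bookkeeping paths in migration/block.c:

#include <stdbool.h>

#define MAX_PARALLEL_IO 16

extern bool blocks_left_to_send(void);    /* hypothetical */
extern void submit_one_block_read(void);  /* hypothetical async submit */

static int inflight;   /* submitted but not yet completed requests */

static void fill_request_queue(void)
{
    while (blocks_left_to_send() && inflight < MAX_PARALLEL_IO) {
        submit_one_block_read();
        inflight++;        /* decremented from the completion callback */
    }
}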
Reset the dirty bitmap before reading to make sure we don't miss
any new data.
Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Lieven <p...@kamp.de>
---
migration/block.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/migration/block.c b/migration/block.c
index 1
only read_done blocks are queued to be flushed to the migration
stream. Submitted blocks are still in flight.
Signed-off-by: Peter Lieven <p...@kamp.de>
---
migration/block.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/migration/block.c b/migration/block.c
On 08.03.2018 at 10:01, Fam Zheng wrote:
On Thu, Mar 8, 2018 at 4:57 PM, Peter Lieven <p...@kamp.de> wrote:
On 08.03.2018 at 02:28, Fam Zheng <f...@redhat.com> wrote:
On Wed, 03/07 09:06, Peter Lieven wrote:
Hi,
while looking at the code I wonder if the bl
> On 08.03.2018 at 02:28, Fam Zheng <f...@redhat.com> wrote:
>
>> On Wed, 03/07 09:06, Peter Lieven wrote:
>> Hi,
>>
>> while looking at the code I wonder if the blk_aio_preadv and the
>> bdrv_reset_dirty_bitmap order
On 06.03.2018 at 12:51, Stefan Hajnoczi wrote:
> On Tue, Feb 20, 2018 at 06:04:02PM +0100, Peter Lieven wrote:
>> I remember we discussed a long time ago to limit the stack usage of all
>> functions that are executed in a coroutine
>> context to a very low value to be
On 07.03.2018 at 10:47, Stefan Hajnoczi wrote:
> On Wed, Mar 7, 2018 at 7:55 AM, Peter Lieven <p...@kamp.de> wrote:
>> On 06.03.2018 at 17:35, Peter Lieven wrote:
>>> On 06.03.2018 at 17:07, Stefan Hajnoczi wrote:
>>>> On Mon, Mar 05, 2018 at 02:52:16PM
Hi,
while looking at the code I wonder if the blk_aio_preadv and the
bdrv_reset_dirty_bitmap order must
be swapped in mig_save_device_bulk:
qemu_mutex_lock_iothread();
aio_context_acquire(blk_get_aio_context(bmds->blk));
blk->aiocb = blk_aio_preadv(bb, cur_sector * BDRV_SECTOR_SIZE,
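A hedged sketch of the ordering in question, continuing the snippet above:
reset the dirty bitmap for the range before submitting the read, so a guest
write racing with the read re-dirties the range and gets migrated again later.
Variable names loosely follow mig_save_device_bulk(), not exactly:

    /* reset first ... */
    bdrv_reset_dirty_bitmap(bmds->dirty_bitmap,
                            cur_sector * BDRV_SECTOR_SIZE,
                            nr_sectors * BDRV_SECTOR_SIZE);
    /* ... then submit the read */
    blk->aiocb = blk_aio_preadv(bb, cur_sector * BDRV_SECTOR_SIZE,
                                &blk->qiov, 0, blk_mig_read_cb, blk);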
On 06.03.2018 at 17:35, Peter Lieven wrote:
> On 06.03.2018 at 17:07, Stefan Hajnoczi wrote:
>> On Mon, Mar 05, 2018 at 02:52:16PM +, Dr. David Alan Gilbert wrote:
>>> * Peter Lieven (p...@kamp.de) wrote:
>>>> On 05.03.2018 at 12:45, Stefan Hajnoczi wrote:
>
On 06.03.2018 at 17:07, Stefan Hajnoczi wrote:
> On Mon, Mar 05, 2018 at 02:52:16PM +, Dr. David Alan Gilbert wrote:
>> * Peter Lieven (p...@kamp.de) wrote:
>>> On 05.03.2018 at 12:45, Stefan Hajnoczi wrote:
>>>> On Thu, Feb 22, 2018 at 12:13:50PM +0100, Peter
On 05.03.2018 at 15:52, Dr. David Alan Gilbert wrote:
> * Peter Lieven (p...@kamp.de) wrote:
>> On 05.03.2018 at 12:45, Stefan Hajnoczi wrote:
>>> On Thu, Feb 22, 2018 at 12:13:50PM +0100, Peter Lieven wrote:
>>>> I stumbled across the MAX_INFLIGHT_IO fie
On 05.03.2018 at 12:45, Stefan Hajnoczi wrote:
> On Thu, Feb 22, 2018 at 12:13:50PM +0100, Peter Lieven wrote:
>> I stumbled across the MAX_INFLIGHT_IO field that was introduced in 2015 and
>> was curious what was the reason
>> to choose 512MB as readahead? The que
On 22.02.2018 at 13:03, Daniel P. Berrangé wrote:
> On Thu, Feb 22, 2018 at 01:02:05PM +0100, Peter Lieven wrote:
>> On 22.02.2018 at 13:00, Daniel P. Berrangé wrote:
>>> On Thu, Feb 22, 2018 at 12:51:58PM +0100, Peter Lieven wrote:
>>>> On 22.02.2018 at 12
On 22.02.2018 at 13:00, Daniel P. Berrangé wrote:
> On Thu, Feb 22, 2018 at 12:51:58PM +0100, Peter Lieven wrote:
>> On 22.02.2018 at 12:40, Daniel P. Berrangé wrote:
>>> On Thu, Feb 22, 2018 at 12:32:04PM +0100, Kevin Wolf wrote:
>>>> On 22.02.2018 at 12:0
On 22.02.2018 at 12:40, Daniel P. Berrangé wrote:
> On Thu, Feb 22, 2018 at 12:32:04PM +0100, Kevin Wolf wrote:
>> On 22.02.2018 at 12:01, Peter Lieven wrote:
>>> On 22.02.2018 at 11:57, Kevin Wolf wrote:
>>>> On 20.02.2018 at 22:54, Paolo Bonzini wrote
On 22.02.2018 at 12:32, Kevin Wolf wrote:
> On 22.02.2018 at 12:01, Peter Lieven wrote:
>> On 22.02.2018 at 11:57, Kevin Wolf wrote:
>>> On 20.02.2018 at 22:54, Paolo Bonzini wrote:
>>>> On 20/02/2018 18:04, Peter Lieven wrote:
>>>>
Hi,
I stumbled across the MAX_INFLIGHT_IO field that was introduced in 2015 and was
curious about the reason for choosing 512MB as readahead. The issue is that the
source VM gets very unresponsive I/O-wise while the initial 512MB are read, and
furthermore seems to stay
On 22.02.2018 at 11:57, Kevin Wolf wrote:
> On 20.02.2018 at 22:54, Paolo Bonzini wrote:
>> On 20/02/2018 18:04, Peter Lieven wrote:
>>> Hi,
>>>
>>> I remember we discussed a long time ago to limit the stack usage of all
>>> functions