On 18.01.21 at 23:33, Jason Dillaman wrote:
> On Fri, Jan 15, 2021 at 10:39 AM Peter Lieven wrote:
>> On 15.01.21 at 16:27, Jason Dillaman wrote:
>>> On Thu, Jan 14, 2021 at 2:59 PM Peter Lieven wrote:
>>>> On 14.01.21 at 20:19, Jason Dillaman wrote:
>>
On 15.01.21 at 16:27, Jason Dillaman wrote:
> On Thu, Jan 14, 2021 at 2:59 PM Peter Lieven wrote:
>> On 14.01.21 at 20:19, Jason Dillaman wrote:
>>> On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>>>> since we implement byte interfaces and librbd supports ai
On 14.01.21 at 20:19, Jason Dillaman wrote:
> On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>> since we implement byte interfaces and librbd supports aio on byte
>> granularity we can lift
>> the 512 byte alignment.
>>
>> Signed-off-by: Peter Lieven
>
On 14.01.21 at 20:18, Jason Dillaman wrote:
> On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>> Signed-off-by: Peter Lieven
>> ---
>> block/rbd.c | 21 +++--
>> 1 file changed, 19 insertions(+), 2 deletions(-)
>>
>> diff --g
On 14.01.21 at 20:19, Jason Dillaman wrote:
> On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>> Signed-off-by: Peter Lieven
>> ---
>> block/rbd.c | 31 ++-
>> 1 file changed, 30 insertions(+), 1 deletion(-)
>>
>> diff --g
On 14.01.21 at 20:19, Jason Dillaman wrote:
> On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>> Signed-off-by: Peter Lieven
>> ---
>> block/rbd.c | 247 ++--
>> 1 file changed, 84 insertions(+), 163 deletions(
On 14.01.21 at 20:18, Jason Dillaman wrote:
> On Sun, Dec 27, 2020 at 11:42 AM Peter Lieven wrote:
>> Signed-off-by: Peter Lieven
>> ---
>> block/rbd.c | 10 +-
>> 1 file changed, 1 insertion(+), 9 deletions(-)
>>
>> diff --git a/block/
even luminous (version 12.2) is unmaintained for over 3 years now.
Bump the requirement to get rid of the ifdef'ry in the code.
Signed-off-by: Peter Lieven
---
block/rbd.c | 120
configure | 7 +--
2 files changed, 12 insertions(+), 115
Signed-off-by: Peter Lieven
---
block/rbd.c | 21 +++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index a2da70e37f..27b232f4d8 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -91,6 +91,7 @@ typedef struct BDRVRBDState {
char
Signed-off-by: Peter Lieven
---
block/rbd.c | 247 ++--
1 file changed, 84 insertions(+), 163 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index 27b232f4d8..2d77d0007f 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -66,22 +66,6 @@ typedef
Signed-off-by: Peter Lieven
---
block/rbd.c | 18 +++---
1 file changed, 7 insertions(+), 11 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index 650e27c351..bc8cf8af9b 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -90,6 +90,7 @@ typedef struct BDRVRBDState {
char *snap
Signed-off-by: Peter Lieven
---
block/rbd.c | 31 ++-
1 file changed, 30 insertions(+), 1 deletion(-)
diff --git a/block/rbd.c b/block/rbd.c
index 2d77d0007f..27b4404adf 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -63,7 +63,8 @@ typedef enum {
RBD_AIO_READ
Signed-off-by: Peter Lieven
---
block/rbd.c | 10 +-
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index bc8cf8af9b..a2da70e37f 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -956,15 +956,7 @@ static int qemu_rbd_getinfo(BlockDriverState *bs
and
ifdef'ry in the code.
Peter Lieven (7):
block/rbd: bump librbd requirement to luminous release
block/rbd: store object_size in BDRVRBDState
block/rbd: use stored image_size in qemu_rbd_getlength
block/rbd: add bdrv_{attach,detach}_aio_context
block/rbd: migrate from aio to coroutines
block
since we implement byte interfaces and librbd supports aio on byte granularity
we can lift
the 512 byte alignment.
Signed-off-by: Peter Lieven
---
block/rbd.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
index 27b4404adf..8673e8f553 100644
--- a/block/rbd.c
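For context, lifting the driver's 512-byte request alignment matters because any request that violates the alignment gets padded out to full sectors, turning a tiny write into a read-modify-write of the surrounding sector(s). A minimal sketch of that padding arithmetic, with illustrative names rather than the actual QEMU block-layer API:

```c
#include <assert.h>
#include <stdint.h>

#define REQUEST_ALIGNMENT 512LL

/* Round a byte-granularity request down/up to the driver's alignment,
 * as the block layer must do when a driver cannot take unaligned I/O. */
static void pad_request(int64_t offset, int64_t bytes,
                        int64_t *pad_offset, int64_t *pad_bytes)
{
    int64_t start = offset & ~(REQUEST_ALIGNMENT - 1);   /* round down */
    int64_t end = (offset + bytes + REQUEST_ALIGNMENT - 1)
                  & ~(REQUEST_ALIGNMENT - 1);            /* round up */
    *pad_offset = start;
    *pad_bytes = end - start;
}
```

With the alignment lifted to byte granularity, a 1-byte write stays a 1-byte write instead of being widened to a 512-byte cycle.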
On 01.12.20 at 13:40, Peter Lieven wrote:
> Hi,
>
>
> I would like to submit a series for 6.0 which will convert the aio hooks to
> native coroutine hooks and add write zeroes support.
>
> The aio routines are nowadays just an emulation on top of coroutines which
>
nfs_client_open returns the file size in sectors. This effectively
makes it impossible to open files larger than 1TB.
Fixes: a1a42af422d46812f1f0cebe6b230c20409a3731
Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Lieven
---
block/nfs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion
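The 1 TB ceiling follows from the arithmetic: a sector count carried in a signed 32-bit value reaches 2^31 at exactly 2^31 × 512 bytes = 1 TiB. A sketch of that failure mode, illustrative only and not the actual block/nfs.c code:

```c
#include <assert.h>
#include <stdint.h>

#define SECTOR_SIZE 512LL

/* Buggy pattern: squeezing a sector count through a 32-bit 'int'.
 * At 1 TiB the count no longer fits in a signed 32-bit value. */
static int size_in_sectors_narrow(int64_t size_bytes)
{
    return (int)(size_bytes / SECTOR_SIZE);
}

/* Fixed pattern: keep the value in 64 bits (or better, return bytes). */
static int64_t size_in_sectors_wide(int64_t size_bytes)
{
    return size_bytes / SECTOR_SIZE;
}
```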
On 11.11.20 at 16:39, Maxim Levitsky wrote:
> This helps avoid unneeded writes and discards.
>
> Signed-off-by: Maxim Levitsky
> ---
> qemu-img.c | 13 -
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/qemu-img.c b/qemu-img.c
> index c2c56fc797..7e9b0f659f 100644
On 21.09.20 at 10:29, Daniel P. Berrangé wrote:
On Sun, Sep 20, 2020 at 10:24:41PM +0200, Peter Lieven wrote:
Hi Qemu folks,
is there a BCP to limit just the maximum usage of a virtual (KVM) cpu?
I know that there are many approaches, but as far as I know they all limit the
complete qemu
Hi Qemu folks,
is there a BCP to limit just the maximum usage of a virtual (KVM) cpu?
I know that there are many approaches, but as far as I know they all limit the
complete qemu process which is far more
than just the virtual CPUs.
Is it possible to limit just the vCPU threads and leave
On 15.09.20 at 19:12, Yonggang Luo wrote:
> These compiling errors are fixed:
> ../block/nfs.c:27:10: fatal error: poll.h: No such file or directory
>27 | #include
> | ^~~~
> compilation terminated.
>
> ../block/nfs.c:63:5: error: unknown type name 'blkcnt_t'
>63 |
> On 10.09.2020 at 22:36, 罗勇刚 (Yonggang Luo) wrote:
>
>
>
>
>> On Fri, Sep 11, 2020 at 4:16 AM Peter Lieven wrote:
>>
>>
>>> On 10.09.2020 at 12:30, Yonggang Luo wrote:
>>>
>>> These compiling errors are fixed:
>>>
> On 10.09.2020 at 12:30, Yonggang Luo wrote:
>
> These compiling errors are fixed:
> ../block/nfs.c:27:10: fatal error: poll.h: No such file or directory
> 27 | #include
> | ^~~~
> compilation terminated.
>
> ../block/nfs.c:63:5: error: unknown type name 'blkcnt_t'
>
> On 10.09.2020 at 18:58, Max Reitz wrote:
>
> On 01.09.20 14:51, Peter Lieven wrote:
>> in case of large continuous areas that share the same allocation status
>> it happens that the value of s->sector_next_status is unaligned to the
>> cluster size or even r
> On 10.09.2020 at 09:14, 罗勇刚 (Yonggang Luo) wrote:
>
>
>
> On Thu, Sep 10, 2020 at 3:01 PM Peter Lieven wrote:
>
>
> > On 09.09.2020 at 11:45, Yonggang Luo wrote:
> >
> > These compiling errors are fixed:
> > ../block/nfs.c:27:10: f
> On 09.09.2020 at 11:45, Yonggang Luo wrote:
>
> These compiling errors are fixed:
> ../block/nfs.c:27:10: fatal error: poll.h: No such file or directory
> 27 | #include
> | ^~~~
> compilation terminated.
>
> ../block/nfs.c:63:5: error: unknown type name 'blkcnt_t'
>
Signed-off-by: Peter Lieven
---
qemu-img.c | 22 ++
1 file changed, 22 insertions(+)
diff --git a/qemu-img.c b/qemu-img.c
index 5308773811..ed17238c36 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -1665,6 +1665,7 @@ enum ImgConvertBlockStatus {
typedef struct ImgConver
> On 23.01.2020 at 22:29, Felipe Franciosi wrote:
>
> Hi,
>
>> On Jan 23, 2020, at 5:46 PM, Philippe Mathieu-Daudé
>> wrote:
>>
>>> On 1/23/20 1:44 PM, Felipe Franciosi wrote:
>>> When querying an iSCSI server for the provisioning status of blocks (via
>>> GET LBA STATUS), Qemu only
goto out_unlock;
>> }
>>
>
> Naive question: Does the specification allow for such a response? Is
> this inherently an error?
The spec says the answer SHALL contain at least one lbasd. So I think treating
zero as an error is okay.
Anyway,
Reviewed-by: Peter Lieven
Peter
On 17.01.20 at 16:59, Dr. David Alan Gilbert wrote:
* Peter Lieven (p...@kamp.de) wrote:
On 16.01.20 at 21:26, Dr. David Alan Gilbert wrote:
* Peter Lieven (p...@kamp.de) wrote:
On 16.01.20 at 13:47, Peter Lieven wrote:
On 13.01.20 at 17:25, Peter Lieven wrote:
On 09.01.20 at 19:44
On 16.01.20 at 21:26, Dr. David Alan Gilbert wrote:
> * Peter Lieven (p...@kamp.de) wrote:
>> On 16.01.20 at 13:47, Peter Lieven wrote:
>>> On 13.01.20 at 17:25, Peter Lieven wrote:
>>>> On 09.01.20 at 19:44, Dr. David Alan Gilbert wrote:
>>>>> *
> On 16.01.2020 at 21:26, Dr. David Alan Gilbert wrote:
>
> * Peter Lieven (p...@kamp.de) wrote:
>> On 16.01.20 at 13:47, Peter Lieven wrote:
>>> On 13.01.20 at 17:25, Peter Lieven wrote:
>>>> On 09.01.20 at 19:44, Dr. David Alan Gilbert wrote:
>
On 16.01.20 at 13:47, Peter Lieven wrote:
On 13.01.20 at 17:25, Peter Lieven wrote:
On 09.01.20 at 19:44, Dr. David Alan Gilbert wrote:
* Peter Lieven (p...@kamp.de) wrote:
On 08.01.20 at 16:04, Dr. David Alan Gilbert wrote:
* Peter Lieven (p...@kamp.de) wrote:
Hi,
I have a Qemu 4.0.1
On 13.01.20 at 17:25, Peter Lieven wrote:
On 09.01.20 at 19:44, Dr. David Alan Gilbert wrote:
* Peter Lieven (p...@kamp.de) wrote:
On 08.01.20 at 16:04, Dr. David Alan Gilbert wrote:
* Peter Lieven (p...@kamp.de) wrote:
Hi,
I have a Qemu 4.0.1 machine with vhost-net network adapter
On 09.01.20 at 19:44, Dr. David Alan Gilbert wrote:
* Peter Lieven (p...@kamp.de) wrote:
On 08.01.20 at 16:04, Dr. David Alan Gilbert wrote:
* Peter Lieven (p...@kamp.de) wrote:
Hi,
I have a Qemu 4.0.1 machine with a vhost-net network adapter that's polluting the
log with the above message
On 08.01.20 at 16:04, Dr. David Alan Gilbert wrote:
* Peter Lieven (p...@kamp.de) wrote:
Hi,
I have a Qemu 4.0.1 machine with a vhost-net network adapter that's polluting the
log with the above message.
Is this something known? Googling revealed the following patch in Nemu (which
seems
Hi,
I have a Qemu 4.0.1 machine with a vhost-net network adapter that's polluting the
log with the above message.
Is this something known? Googling revealed the following patch in Nemu (which
seems to be a Qemu fork from Intel):
On 17.12.19 at 16:52, Kevin Wolf wrote:
On 17.12.2019 at 15:14, Peter Lieven wrote:
I have a vserver running Qemu 4.0 that seems to reproducibly hit the
following assertion:
bdrv_co_pwritev: Assertion `!waited || !use_local_qiov' failed.
I noticed that the padding code
Hi all,
I have a vserver running Qemu 4.0 that seems to reproducibly hit the following
assertion:
bdrv_co_pwritev: Assertion `!waited || !use_local_qiov' failed.
I noticed that the padding code was recently reworked in commit 2e2ad02f2c.
In the new code I cannot find a similar assertion.
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable at open and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
---
V4: - allow partial last blocks [Kevin]
- report offsets in error messages [Kevin
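The check described above can be pictured as a walk over the block-allocation table, flagging any allocated block that extends past the end of the file, with the last block allowed to be partial per the V4 note. A hedged sketch; `check_bat` and its parameters are illustrative names, not the vhdx.c structures:

```c
#include <assert.h>
#include <stdint.h>

/* Return the number of unreachable (truncated) blocks. An offset of 0
 * marks an unallocated entry; only the final block may be partial. */
static int check_bat(const int64_t *bat, int entries,
                     int64_t block_size, int64_t file_length)
{
    int errors = 0;

    for (int i = 0; i < entries; i++) {
        int64_t off = bat[i];
        if (off == 0) {
            continue;                       /* unallocated block */
        }
        if (off >= file_length) {
            errors++;                       /* starts beyond EOF */
        } else if (i < entries - 1 && off + block_size > file_length) {
            errors++;                       /* interior block truncated */
        }
    }
    return errors;
}
```

Running this at open time catches a truncated image early; running it in a check callback lets every offending block be reported rather than just the first.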
On 13.09.19 at 11:51, Max Reitz wrote:
> On 10.09.19 17:41, Peter Lieven wrote:
>> nfs_close is a sync call from libnfs and has its own event
>> handler polling on the nfs FD. Avoid that both QEMU and libnfs
>> are interfering here.
>>
>> CC: qemu-sta...@nongnu.o
On 11.09.19 at 09:48, Max Reitz wrote:
> On 10.09.19 17:41, Peter Lieven wrote:
>> libnfs recently added support for unmounting. Add support
>> in Qemu too.
>>
>> Signed-off-by: Peter Lieven
>> ---
>> block/nfs.c | 3 +++
>> 1 file changed, 3 inserti
nfs_close is a sync call from libnfs and has its own event
handler polling on the nfs FD. Avoid having both QEMU and libnfs
interfere here.
CC: qemu-sta...@nongnu.org
Signed-off-by: Peter Lieven
---
block/nfs.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/block
libnfs recently added support for unmounting. Add support
in Qemu too.
Signed-off-by: Peter Lieven
---
block/nfs.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/block/nfs.c b/block/nfs.c
index 2c98508275..f39acfdb28 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -398,6 +398,9 @@ static
add support for NFSv3 umount call. V2 adds a patch that fixes
the order of the aio teardown. The addition of the NFS umount
call unmasked that bug.
Peter Lieven (2):
block/nfs: tear down aio before nfs_close
block/nfs: add support for nfs_umount
block/nfs.c | 9 +++--
1 file changed, 7
On 10.09.19 at 13:15, Kevin Wolf wrote:
On 05.09.2019 at 12:02, Peter Lieven wrote:
On 04.09.19 at 16:09, Kevin Wolf wrote:
On 03.09.2019 at 15:35, Peter Lieven wrote:
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated
On 05.09.19 at 12:28, ronnie sahlberg wrote:
On Thu, Sep 5, 2019 at 8:16 PM Peter Lieven wrote:
On 05.09.19 at 12:05, ronnie sahlberg wrote:
On Thu, Sep 5, 2019 at 7:43 PM Peter Lieven wrote:
On 04.09.19 at 11:34, Kevin Wolf wrote:
On 03.09.2019 at 21:52, Peter Lieven wrote
On 05.09.19 at 12:05, ronnie sahlberg wrote:
On Thu, Sep 5, 2019 at 7:43 PM Peter Lieven wrote:
On 04.09.19 at 11:34, Kevin Wolf wrote:
On 03.09.2019 at 21:52, Peter Lieven wrote:
On 03.09.2019 at 16:56, Kevin Wolf wrote:
On 03.09.2019 at 15:44, Peter Lieven wrote
On 04.09.19 at 16:09, Kevin Wolf wrote:
On 03.09.2019 at 15:35, Peter Lieven wrote:
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable at open and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
On 04.09.19 at 11:34, Kevin Wolf wrote:
On 03.09.2019 at 21:52, Peter Lieven wrote:
On 03.09.2019 at 16:56, Kevin Wolf wrote:
On 03.09.2019 at 15:44, Peter Lieven wrote:
libnfs recently added support for unmounting. Add support
in Qemu too.
Signed-off-by: Peter Lieven
> On 03.09.2019 at 16:56, Kevin Wolf wrote:
>
> On 03.09.2019 at 15:44, Peter Lieven wrote:
>> libnfs recently added support for unmounting. Add support
>> in Qemu too.
>>
>> Signed-off-by: Peter Lieven
>
> Looks trivial enough to revie
libnfs recently added support for unmounting. Add support
in Qemu too.
Signed-off-by: Peter Lieven
---
block/nfs.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/block/nfs.c b/block/nfs.c
index 0ec50953e4..9d30963fd8 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -1,7
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable at open and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
---
V3: - check for bdrv_getlength failure [Kevin]
- use uint32_t for i [Kevin]
- check
On 03.09.19 at 15:02, Kevin Wolf wrote:
On 02.09.2019 at 17:24, Peter Lieven wrote:
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable at open and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
On 02.09.19 at 17:24, Peter Lieven wrote:
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable at open and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
---
V2: - add error reporting [Kevin]
- use
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable at open and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
---
V2: - add error reporting [Kevin]
- use bdrv_getlength instead
On 02.09.19 at 15:46, Kevin Wolf wrote:
On 02.09.2019 at 15:15, Peter Lieven wrote:
On 02.09.19 at 15:07, Kevin Wolf wrote:
On 29.08.2019 at 15:36, Peter Lieven wrote:
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated
On 02.09.19 at 15:07, Kevin Wolf wrote:
On 29.08.2019 at 15:36, Peter Lieven wrote:
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable to vhdx_co_check.
Signed-off-by: Jan-Hendrik Frintrop
Signed-off-by: Peter
qemu is currently not able to detect truncated vhdx image files.
Add a basic check if all allocated blocks are reachable to vhdx_co_check.
Signed-off-by: Jan-Hendrik Frintrop
Signed-off-by: Peter Lieven
---
block/vhdx.c | 19 +++
1 file changed, 19 insertions(+)
diff --git
Signed-off-by: Peter Lieven
Reviewed-by: Hannes Reinecke
---
hw/scsi/megasas.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/scsi/megasas.c b/hw/scsi/megasas.c
index a56317e026..5ad762de23 100644
--- a/hw/scsi/megasas.c
+++ b/hw/scsi/megasas.c
@@ -477,7 +477,7 @@ static MegasasCmd
Signed-off-by: Peter Lieven
---
hw/scsi/megasas.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/scsi/megasas.c b/hw/scsi/megasas.c
index a56317e026..5ad762de23 100644
--- a/hw/scsi/megasas.c
+++ b/hw/scsi/megasas.c
@@ -477,7 +477,7 @@ static MegasasCmd *megasas_enqueue_frame(MegasasState *s
forbid values that are not a
multiple of 512 to avoid undesired behaviour. For instance, values
between 1 and 511 were legal, but resulted in full allocation.
Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Lieven
---
V1->V2: - use correct check for sval mod 512 == 0
- use BDRV_SECTOR_SIZE ma
forbid values that are not a
multiple of 512 to avoid undesired behaviour. Values between 1 and
511 were legal, but resulted in full allocation.
Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Lieven
---
qemu-img.c | 16 +++-
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git
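The validation described above amounts to a single modulus check. A sketch of which values pass under that rule; `sparse_size_valid` is an illustrative helper, not the actual qemu-img option parser:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SECTOR_SIZE 512

/* A sparse size of 0 disables sparse detection and stays legal;
 * any other value must be a positive multiple of 512 so that a
 * value like 1..511 cannot silently degrade into full allocation. */
static bool sparse_size_valid(int64_t sval)
{
    return sval >= 0 && sval % SECTOR_SIZE == 0;
}
```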
[1]
https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk
Signed-off-by: Peter Lieven
---
V5->V6: - fix output of iotest 122 [Kevin]
V4->V5: - is_zero is a bool [Kevin]
- treat zero areas as allocated if i <= tail to avoid *pnum
On 11.07.2018 at 10:25, Kevin Wolf wrote:
On 10.07.2018 at 22:16, Peter Lieven wrote:
On 10.07.2018 at 17:31, Kevin Wolf wrote:
On 10.07.2018 at 17:05, Peter Lieven wrote:
We currently don't enforce that the sparse segments we detect during convert are
aligned
> On 10.07.2018 at 17:31, Kevin Wolf wrote:
>
> On 10.07.2018 at 17:05, Peter Lieven wrote:
>> We currently don't enforce that the sparse segments we detect during convert
>> are
>> aligned. This leads to unnecessary and costly read-modify-write cycles e
[1]
https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk
Signed-off-by: Peter Lieven
---
V4->V5: - is_zero is a bool [Kevin]
- treat zero areas as allocated if i <= tail to avoid *pnum underflow
[Kevin]
V3->V4: - only focus o
On 10.07.2018 at 14:28, Kevin Wolf wrote:
On 07.07.2018 at 13:42, Peter Lieven wrote:
We currently don't enforce that the sparse segments we detect during convert are
aligned. This leads to unnecessary and costly read-modify-write cycles either
internally in Qemu or in the background
[1]
https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk
Signed-off-by: Peter Lieven
---
V3->V4: - only focus on the end offset in is_allocated_sectors [Kevin]
V2->V3: - ensure that s.alignment is a power of 2
- correctly ha
> On 05.07.2018 at 17:15, Kevin Wolf wrote:
>
> On 05.07.2018 at 12:52, Peter Lieven wrote:
>> We currently don't enforce that the sparse segments we detect during convert
>> are
>> aligned. This leads to unnecessary and costly read-modify-write cycles e
/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk
Signed-off-by: Peter Lieven
---
V2->V3: - ensure that s.alignment is a power of 2
- correctly handle n < alignment in is_allocated_sectors if
sector_num % alignment > 0.
V1->V2: - take the current s
/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk
Signed-off-by: Peter Lieven
---
V1->V2: - take the current sector offset into account [Max]
- try to figure out the target alignment [Max]
qemu-img.c | 44 ++--
1 file changed, 34 inserti
On 11.06.2018 at 16:04, Max Reitz wrote:
> On 2018-06-11 15:59, Peter Lieven wrote:
>> On 11.06.2018 at 15:30, Max Reitz wrote:
>>> On 2018-06-07 14:46, Peter Lieven wrote:
>>>> We currently don't enforce that the sparse segments we detect during
>>>
Hi,
I have some hosts running rather old Qemu versions which sometimes show the
following behaviour:
1) Live Migration
2) One Network Interface (virtio-net) stops forwarding traffic.
3) If I reboot the server I hit the following assertion
qemu-2.9.0: hw/pci/pci.c:311: pcibus_reset:
On 11.06.2018 at 16:04, Max Reitz wrote:
On 2018-06-11 15:59, Peter Lieven wrote:
On 11.06.2018 at 15:30, Max Reitz wrote:
On 2018-06-07 14:46, Peter Lieven wrote:
We currently don't enforce that the sparse segments we detect during
convert are
aligned. This leads to unnecessary and costly
On 11.06.2018 at 15:30, Max Reitz wrote:
On 2018-06-07 14:46, Peter Lieven wrote:
We currently don't enforce that the sparse segments we detect during convert are
aligned. This leads to unnecessary and costly read-modify-write cycles either
internally in Qemu or in the background
a total of about 15000
write requests. With this patch the 4600 additional read requests are eliminated.
[1]
https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk
Signed-off-by: Peter Lieven
---
qemu-img.c | 21 +++--
1 file changed, 15
On 08.03.2018 at 14:30, Peter Lieven wrote:
On 08.03.2018 at 13:50, Juan Quintela wrote:
Peter Lieven <p...@kamp.de> wrote:
the current implementation submits up to 512 I/O requests in parallel
which is much too high, especially for a background task.
This patch adds a maximum limit of
On 09.03.2018 at 15:58, Dr. David Alan Gilbert wrote:
> * Peter Lieven (p...@kamp.de) wrote:
>> this actually limits (as the original commit message suggests) the
>> number of I/O buffers that can be allocated and not the number
>> of parallel (inflight) I/O requests.
>
On 08.03.2018 at 13:50, Juan Quintela wrote:
Peter Lieven <p...@kamp.de> wrote:
the current implementation submits up to 512 I/O requests in parallel
which is much too high, especially for a background task.
This patch adds a maximum limit of 16 I/O requests that can
be submitted in pa
Peter Lieven (5):
migration: do not transfer ram during bulk storage migration
migration/block: reset dirty bitmap before read in bulk phase
migration/block: rename MAX_INFLIGHT_IO to MAX_IO_BUFFERS
migration/block: limit the number of parallel I/O requests
migration/block: compare only
this actually limits (as the original commit message suggests) the
number of I/O buffers that can be allocated and not the number
of parallel (inflight) I/O requests.
Signed-off-by: Peter Lieven <p...@kamp.de>
---
migration/block.c | 7 +++
1 file changed, 3 insertions(+), 4 del
this patch makes the bulk phase of a block migration take
place before we start transferring ram. As the bulk block migration
can take a long time, it's pointless to transfer ram during that phase.
Signed-off-by: Peter Lieven <p...@kamp.de>
Reviewed-by: Stefan Hajnoczi <stefa...@r
only read_done blocks are queued to be flushed to the migration
stream. Submitted blocks are still in flight.
Signed-off-by: Peter Lieven <p...@kamp.de>
---
migration/block.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/migration/block.c b/migration/block.c
the current implementation submits up to 512 I/O requests in parallel
which is much too high, especially for a background task.
This patch adds a maximum limit of 16 I/O requests that can
be submitted in parallel to avoid monopolizing the I/O device.
Signed-off-by: Peter Lieven <p...@kamp
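The limiting scheme can be modeled as a simple inflight counter: submission stalls once 16 requests are outstanding and resumes as completions arrive. A toy model of the bookkeeping; the names are illustrative, not the migration/block.c code:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_PARALLEL_IO 16

static int inflight;   /* requests submitted but not yet completed */
static int peak;       /* highest inflight value ever observed     */

/* Try to submit one request; refuse once the limit is reached. */
static bool submit_one(void)
{
    if (inflight >= MAX_PARALLEL_IO) {
        return false;
    }
    inflight++;
    if (inflight > peak) {
        peak = inflight;
    }
    return true;
}

/* A completion frees one slot for the next submission. */
static void complete_one(void)
{
    assert(inflight > 0);
    inflight--;
}
```

Driving 512 submissions through this gate never puts more than 16 requests in flight at once, which is the point of the patch: the device is no longer monopolized by the background task.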
Reset the dirty bitmap before reading to make sure we don't miss
any new data.
Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Lieven <p...@kamp.de>
---
migration/block.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/migration/block.c b/migration/block.c
index 1
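The race this fix closes can be modeled sequentially: interleave one guest write between the migration read and the bitmap reset, and compare both orderings. A toy model, not the migration/block.c code:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of one migrated block: its payload and its dirty bit. */
typedef struct {
    int data;
    bool dirty;
} Block;

/* Buggy order: read first, reset the dirty bit afterwards. A guest
 * write landing in between is wiped from the bitmap, so the new data
 * is never re-read in the dirty phase. */
static int copy_read_then_reset(Block *b, int guest_write)
{
    int copied = b->data;        /* 1. bulk phase reads old data   */
    b->data = guest_write;       /* 2. guest writes, block dirtied */
    b->dirty = true;
    b->dirty = false;            /* 3. reset discards the write    */
    return copied;
}

/* Fixed order: reset before reading, so a concurrent write re-dirties
 * the block and the dirty phase will copy it again. */
static int copy_reset_then_read(Block *b, int guest_write)
{
    b->dirty = false;            /* 1. reset first                 */
    int copied = b->data;        /* 2. read                        */
    b->data = guest_write;       /* 3. guest write survives in the */
    b->dirty = true;             /*    bitmap for the dirty phase  */
    return copied;
}
```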
On 08.03.2018 at 10:01, Fam Zheng wrote:
On Thu, Mar 8, 2018 at 4:57 PM, Peter Lieven <p...@kamp.de> wrote:
On 08.03.2018 at 02:28, Fam Zheng <f...@redhat.com> wrote:
On Wed, 03/07 09:06, Peter Lieven wrote:
Hi,
while looking at the code I wonder if the bl
> On 08.03.2018 at 02:28, Fam Zheng <f...@redhat.com> wrote:
>
>> On Wed, 03/07 09:06, Peter Lieven wrote:
>> Hi,
>>
>> while looking at the code I wonder if the blk_aio_preadv and the
>> bdrv_reset_dirty_bitmap order
On 06.03.2018 at 12:51, Stefan Hajnoczi wrote:
> On Tue, Feb 20, 2018 at 06:04:02PM +0100, Peter Lieven wrote:
>> I remember we discussed a long time ago to limit the stack usage of all
>> functions that are executed in a coroutine
>> context to a very low value to be
On 07.03.2018 at 10:47, Stefan Hajnoczi wrote:
> On Wed, Mar 7, 2018 at 7:55 AM, Peter Lieven <p...@kamp.de> wrote:
>> On 06.03.2018 at 17:35, Peter Lieven wrote:
>>> On 06.03.2018 at 17:07, Stefan Hajnoczi wrote:
>>>> On Mon, Mar 05, 2018 at 02:52:16PM
Hi,
while looking at the code I wonder if the blk_aio_preadv and the
bdrv_reset_dirty_bitmap order must
be swapped in mig_save_device_bulk:
qemu_mutex_lock_iothread();
aio_context_acquire(blk_get_aio_context(bmds->blk));
blk->aiocb = blk_aio_preadv(bb, cur_sector * BDRV_SECTOR_SIZE,
On 06.03.2018 at 17:35, Peter Lieven wrote:
> On 06.03.2018 at 17:07, Stefan Hajnoczi wrote:
>> On Mon, Mar 05, 2018 at 02:52:16PM +, Dr. David Alan Gilbert wrote:
>>> * Peter Lieven (p...@kamp.de) wrote:
>>>> On 05.03.2018 at 12:45, Stefan Hajnoczi wrote:
>
On 06.03.2018 at 17:07, Stefan Hajnoczi wrote:
> On Mon, Mar 05, 2018 at 02:52:16PM +, Dr. David Alan Gilbert wrote:
>> * Peter Lieven (p...@kamp.de) wrote:
>>> On 05.03.2018 at 12:45, Stefan Hajnoczi wrote:
>>>> On Thu, Feb 22, 2018 at 12:13:50PM +0100, Peter
On 05.03.2018 at 15:52, Dr. David Alan Gilbert wrote:
> * Peter Lieven (p...@kamp.de) wrote:
>> On 05.03.2018 at 12:45, Stefan Hajnoczi wrote:
>>> On Thu, Feb 22, 2018 at 12:13:50PM +0100, Peter Lieven wrote:
>>>> I stumbled across the MAX_INFLIGHT_IO fie
On 05.03.2018 at 12:45, Stefan Hajnoczi wrote:
> On Thu, Feb 22, 2018 at 12:13:50PM +0100, Peter Lieven wrote:
>> I stumbled across the MAX_INFLIGHT_IO field that was introduced in 2015 and
>> was curious what was the reason
>> to choose 512MB as readahead? The que
On 22.02.2018 at 13:03, Daniel P. Berrangé wrote:
> On Thu, Feb 22, 2018 at 01:02:05PM +0100, Peter Lieven wrote:
>> On 22.02.2018 at 13:00, Daniel P. Berrangé wrote:
>>> On Thu, Feb 22, 2018 at 12:51:58PM +0100, Peter Lieven wrote:
>>>> On 22.02.2018 at 12
On 22.02.2018 at 13:00, Daniel P. Berrangé wrote:
> On Thu, Feb 22, 2018 at 12:51:58PM +0100, Peter Lieven wrote:
>> On 22.02.2018 at 12:40, Daniel P. Berrangé wrote:
>>> On Thu, Feb 22, 2018 at 12:32:04PM +0100, Kevin Wolf wrote:
>>>> On 22.02.2018 at 12:0