When creating a read-only image, we are still advertising support for
TRIM and WRITE_ZEROES to the client, even though the client should not
be issuing those commands. But seeing this requires looking across
multiple functions:
All callers to nbd_export_new() passed a single flag based solely on
A server may have a reason to reject a request for structured replies,
beyond just not recognizing them as a valid request; similarly, it may
have a reason for rejecting a request for a meta context. It doesn't
hurt us to continue talking to such a server; otherwise 'qemu-nbd
--list' of such a server…
The server side is fairly straightforward: we can always advertise
support for detection of fast zero, and implement it by mapping the
request to the block layer BDRV_REQ_NO_FALLBACK.
Signed-off-by: Eric Blake
Message-Id: <20190823143726.27062-5-ebl...@redhat.com>
Reviewed-by: Vladimir Sementsov-
The NBD specification defines NBD_FLAG_CAN_MULTI_CONN, which can be
advertised when the server promises cache consistency between
simultaneous clients (basically, rules that determine what FUA and
flush from one client are able to guarantee for reads from another
client). When we don't permit simultaneous…
From: Andrey Shinkevich
Revert the commit 118f99442d 'block/io.c: fix for the allocation failure'
and use better error handling for file systems that do not support
fallocate() for an unaligned byte range. Allow falling back to pwrite
in case fallocate() returns EINVAL.
Suggested-by: Kevin Wolf
The client side is fairly straightforward: if the server advertised
fast zero support, then we can map that to BDRV_REQ_NO_FALLBACK
support. A server that advertises FAST_ZERO but not WRITE_ZEROES
is technically broken, but we can ignore that situation as it does
not change our behavior.
Signed-o
Commit fe0480d6 and friends added BDRV_REQ_NO_FALLBACK as a way to
avoid wasting time on a preliminary write-zero request that will later
be rewritten by actual data, if it is known that the write-zero
request will use a slow fallback; but in doing so, could not optimize
for NBD. The NBD specification…
Thanks to our recent move to use glib's g_autofree, I can join the
bandwagon. Getting rid of gotos is fun ;)
There are probably more places where we could register cleanup
functions and get rid of more gotos; this patch just focuses on the
labels that existed merely to call g_free.
Signed-off-by
On Thu, 2019-09-05 at 13:27 -0400, John Snow wrote:
>
> On 9/5/19 9:24 AM, Maxim Levitsky wrote:
> > On Wed, 2019-08-28 at 12:03 +0300, Maxim Levitsky wrote:
> > > On Tue, 2019-08-27 at 18:29 -0400, John Snow wrote:
> > > >
> > > > On 8/25/19 3:15 AM, Maxim Levitsky wrote:
> > > > > Signed-off-by
On 9/5/19 9:24 AM, Maxim Levitsky wrote:
> On Wed, 2019-08-28 at 12:03 +0300, Maxim Levitsky wrote:
>> On Tue, 2019-08-27 at 18:29 -0400, John Snow wrote:
>>>
>>> On 8/25/19 3:15 AM, Maxim Levitsky wrote:
Signed-off-by: Maxim Levitsky
---
block/nvme.c | 83
Am 12.08.2019 um 14:58 hat Max Reitz geschrieben:
> On 10.08.19 17:36, Vladimir Sementsov-Ogievskiy wrote:
> > 09.08.2019 19:13, Max Reitz wrote:
> >> If the driver does not support .bdrv_co_flush() so bdrv_co_flush()
> >> itself has to flush the children of the given node, it should not flush
> >>
On Wed, Sep 04, 2019 at 05:00:56PM -0400, Dmitry Fomichev wrote:
> Currently, attaching zoned block devices (i.e., storage devices
> compliant to ZAC/ZBC standards) using several virtio methods doesn't
> work properly as zoned devices appear as regular block devices at the
> guest. This may cause u
On 05.09.2019 17:31, Eric Blake wrote:
> On 9/5/19 2:44 AM, Denis Plotnikov wrote:
>
>
>>>> +
>>>> +s_size = be32_to_cpu(*(const uint32_t *) src);
>>> As written, this looks like you may be dereferencing an unaligned
>>> pointer. It so happens that be32_to_cpu() applies & to your * to get
>>>
On 9/5/19 2:44 AM, Denis Plotnikov wrote:
>>> +
>>> +s_size = be32_to_cpu(*(const uint32_t *) src);
>> As written, this looks like you may be dereferencing an unaligned
>> pointer. It so happens that be32_to_cpu() applies & to your * to get
>> back at the raw pointer, and then is careful to
Am 09.08.2019 um 18:13 hat Max Reitz geschrieben:
> Use child access functions when iterating through backing chains so
> filters do not break the chain.
>
> Signed-off-by: Max Reitz
> ---
> block.c | 40
> 1 file changed, 28 insertions(+), 12 deletions(-)
05.09.2019 12:31, Denis Plotnikov wrote:
> The patch adds some preparation parts for incompatible compression type
> feature to QCOW2 header that indicates that *all* compressed clusters
> must be (de)compressed using a certain compression type.
>
> It is implied that the compression type is set on the image creation…
On Fri, 2019-07-12 at 19:35 +0200, Max Reitz wrote:
> Hi,
>
> Kevin commented on my RFC, so I got what an RFC wants, and he didn’t
> object to the creation fallback part. So I suppose I can go down that
> route at least. (Which was actually the more important part of the
> series.)
>
> So as in
On Wed, 2019-08-28 at 12:03 +0300, Maxim Levitsky wrote:
> On Tue, 2019-08-27 at 18:29 -0400, John Snow wrote:
> >
> > On 8/25/19 3:15 AM, Maxim Levitsky wrote:
> > > Signed-off-by: Maxim Levitsky
> > > ---
> > > block/nvme.c | 83 ++
> > > block
Am 09.08.2019 um 18:13 hat Max Reitz geschrieben:
> Filters cannot compress data themselves but they have to implement
> .bdrv_co_pwritev_compressed() still (or they cannot forward compressed
> writes). Therefore, checking whether
> bs->drv->bdrv_co_pwritev_compressed is non-NULL is not sufficient
Am 09.08.2019 um 18:13 hat Max Reitz geschrieben:
> In order to make filters work in backing chains, the associated
> functions must be able to deal with them and freeze all filter links, be
> they COW or R/W filter links.
>
> In the process, rename these functions to reflect that they now act on
Am 05.09.19 um 12:28 schrieb ronnie sahlberg:
On Thu, Sep 5, 2019 at 8:16 PM Peter Lieven wrote:
Am 05.09.19 um 12:05 schrieb ronnie sahlberg:
On Thu, Sep 5, 2019 at 7:43 PM Peter Lieven wrote:
Am 04.09.19 um 11:34 schrieb Kevin Wolf:
Am 03.09.2019 um 21:52 hat Peter Lieven geschrieben:
Am
Am 05.09.19 um 12:05 schrieb ronnie sahlberg:
On Thu, Sep 5, 2019 at 7:43 PM Peter Lieven wrote:
Am 04.09.19 um 11:34 schrieb Kevin Wolf:
Am 03.09.2019 um 21:52 hat Peter Lieven geschrieben:
Am 03.09.2019 um 16:56 schrieb Kevin Wolf :
Am 03.09.2019 um 15:44 hat Peter Lieven geschrieben:
libnfs…
Am 04.09.19 um 16:09 schrieb Kevin Wolf:
Am 03.09.2019 um 15:35 hat Peter Lieven geschrieben:
qemu is currently not able to detect truncated vhdx image files.
Add a basic check that all allocated blocks are reachable at open, and
report all errors during bdrv_co_check.
Signed-off-by: Peter Lieven
ping
07.08.2019 17:12, Vladimir Sementsov-Ogievskiy wrote:
> Hi all!
>
> Bitmaps reopening is buggy, reopening-rw just not working at all and
> reopening-ro may lead to producing broken incremental
> backup if we do temporary snapshot in a meantime.
>
> v4: Drop complicated solution around reope
Am 04.09.19 um 11:34 schrieb Kevin Wolf:
Am 03.09.2019 um 21:52 hat Peter Lieven geschrieben:
Am 03.09.2019 um 16:56 schrieb Kevin Wolf :
Am 03.09.2019 um 15:44 hat Peter Lieven geschrieben:
libnfs recently added support for unmounting. Add support
in Qemu too.
Signed-off-by: Peter Lieven
v6:
* fixed zstd compressed length storing/loading [Eric]
* fixed wording, spec section placement [Eric]
v5:
* type changed for compression_type at BDRVQcow2State [Kevin]
* fixed typos, grammar [Kevin]
* fixed default config zstd setting [Kevin]
v4:
* remove not feasible switch case [Vladimir]
*
zstd significantly reduces cluster compression time.
It provides better compression performance while maintaining
the same compression ratio as zlib, which, at the moment,
has been the only compression method available.
The performance test results:
Test compresses and decompresses…
The patch adds preparatory parts for the incompatible compression type
feature to the QCOW2 header, indicating that *all* compressed clusters
must be (de)compressed using a certain compression type.
It is implied that the compression type is set on the image creation and
can be changed only later…
The patch allows processing the image compression type defined
in the image header and choosing an appropriate method for
(de)compressing image clusters.
Signed-off-by: Denis Plotnikov
Reviewed-by: Eric Blake
---
block/qcow2-threads.c | 77 +++
1 file changed…
ping
21.08.2019 19:52, Vladimir Sementsov-Ogievskiy wrote:
> Hi all!
> Here is NBD reconnect. Previously, if connection failed all current
> and future requests will fail. After the series, nbd-client driver
> will try to reconnect unlimited times. During first @reconnect-delay
> seconds of reconn
On 04.09.2019 19:07, Eric Blake wrote:
> On 9/4/19 10:29 AM, Denis Plotnikov wrote:
>> zstd significantly reduces cluster compression time.
>> It provides better compression performance maintaining
>> the same level of compression ratio in comparison with
>> zlib, which, at the moment, has been the
Max, can you review again?
On Fri, Aug 30, 2019 at 11:25 PM Nir Soffer wrote:
> On Wed, Aug 28, 2019 at 11:14 PM John Snow wrote:
>
>>
>>
>> On 8/27/19 2:59 PM, Nir Soffer wrote:
>> > While working on 4k support, I noticed that there is lot of code using
>> > BDRV_SECTOR_SIZE (512) for checking