Re: [Qemu-block] KVM Forum block no[td]es

2018-11-15 Thread Denis V. Lunev
On 11/12/18 1:25 AM, Max Reitz wrote:
> This is what I’ve taken from two or three BoF-like get-togethers on
> blocky things.  Amendments are more than welcome, of course.
>
>
>
> Permission system
> =================
>
> GRAPH_MOD
> ---------
>
> We need some way for the commit job to prevent graph changes on its
> chain while it is running.  Our current blocker doesn’t do the job,
> however.  What to do?
>
> - We have no idea how to make a *permission* work.  Maybe the biggest
>   problem is that it just doesn’t work as a permission, because the
>   commit job doesn’t own the BdrvChildren that would need to be
>   blocked (namely the @backing BdrvChild).
>
> - A property of BdrvChild that can be set by a non-parent seems more
>   feasible, e.g. a counter where changing the child is possible only
>   if the counter is 0.  This also actually makes sense in what it
>   means.
>   (We never quite knew what “taking the GRAPH_MOD permission” or
>   “unsharing the GRAPH_MOD permission” was supposed to mean.  Figuring
>   that out always took like half an hour in any face-to-face meeting,
>   and then we decided it was pretty much useless for any case we had
>   at hand.)
>
>
> Reopen
> ------
>
> How should permissions be handled while the reopen is under way?
> Maybe we should take the union of @perm before and after, and the
> intersection of @shared before and after?
>
> - Taking permissions is a transaction that can fail.  Reopen, too, is
>   a transaction, and we want to go from the intermediate to the final
>   permissions in reopen’s commit part, so that transition is not
>   allowed to fail.
>   Since with the above model we would only relax things during that
>   transition (relinquishing bits from @perm and adding bits to
>   @shared), this transition should in theory be possible without any
>   failure.  However, in practice things are different, as permission
>   changes with file-posix nodes imply lock changes on the filesystem
>   -- which may always fail.  Arguably failures from changing the
>   file-posix locks can be ignored, because that just means that the
>   file claims more permissions to be taken and less to be shared than
>   is actually the case.  Which means you may not be able to open the
>   file in some other application, while you should be, but that’s the
>   benign kind of error.  You won’t be able to access data in a way
>   you shouldn’t be able to.
>   - Note that we already have this issue: in general, dropping
> permissions sometimes aborts because code assumes that dropping
> permissions is always safe and can never result in an error.  It
> seems best to ignore such protocol layer errors in the generic
> block layer rather than handling this in every protocol driver
> itself.
> (The block layer should discard errors from dropping permissions
> on the protocol layer.)
>
> - Is it possible that changing an option may require taking an
>   intermediate permission that is required neither before nor after
>   the reopen process?
>   Changing a child link comes to mind (like changing a child from one
>   BDS to another, where the visible data changes, which would mean we
>   may want to e.g. unshare CONSISTENT_READ during the reopen).
>   However:
>   1. It is unfeasible to unshare that for all child changes.
>  Effectively everything requires CONSISTENT_READ, and for good
>  reason.
>   2. Why would a user even change a BDS to something of a different
>  content?
>   3. Anything that currently allows you to change a child node assumes
>  that the user always changes it to something of the same content
>  (some take extra care to verify this, like mirror, which makes
>  sure that @replaces and the target are connected, and there are
>  only filter nodes in between).
>   Always using the same enforcing model as mirror does (no. 3 above)
>   does not really work, though, because one use case is to copy a
>   backing file offline to some different storage and then replace the
>   files via QMP.  To qemu, both files are completely unrelated.
>
>
> Block jobs, including blockdev-copy
> ===================================
>
> Example for use of the fleecing filter:
> - The real target is on slow storage.  Put an overlay on fast storage
>   on top of it.  Then use that overlay as the target of the fleecing
>   filter (and commit the data later or on the side), so that the
>   backup job does not slow down the guest.
>
> For a unified copy job, having a backup/fleecing filter along the way
> is not a problem.  One thing we definitely have to and can do is to
> factor common functionality out into a shared file so that the
> different jobs can at least share that.
>
> COR/Stream:
> - There should be a way to discard ranges that have been copied into
>   the overlay from the backing files to save space
> - Also, the COR filter should be integrated with the stream job (at some
>   point, as always)
>
> Hole punching with active commit:
> - Putting d

Re: [Qemu-block] [PATCH] migration/block-dirty-bitmap: fix Coverity CID1390625

2018-11-15 Thread John Snow



On 11/15/18 6:48 AM, Peter Maydell wrote:
> On 17 October 2018 at 10:51, Stefan Hajnoczi  wrote:
>> On Tue, Oct 16, 2018 at 04:20:18PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>>> Theoretically possible that we finish the skipping loop with bs = NULL
>>> and the following code will crash trying to dereference it. Fix that.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy 
>>> ---
>>>  migration/block-dirty-bitmap.c | 4 ++++
>>>  1 file changed, 4 insertions(+)
>>>
>>> diff --git a/migration/block-dirty-bitmap.c b/migration/block-dirty-bitmap.c
>>> index 477826330c..6de808f95f 100644
>>> --- a/migration/block-dirty-bitmap.c
>>> +++ b/migration/block-dirty-bitmap.c
>>> @@ -288,6 +288,10 @@ static int init_dirty_bitmap_migration(void)
>>>  bs = backing_bs(bs);
>>>  }
>>>
>>> +if (!bs || bs->implicit) {
>>> +continue;
>>> +}
>>> +
>>>  for (bitmap = bdrv_dirty_bitmap_next(bs, NULL); bitmap;
>>>   bitmap = bdrv_dirty_bitmap_next(bs, bitmap))
>>>  {
>>
>> Previous discussion:
>> http://qemu.11.n7.nabble.com/PATCH-migration-Appease-coverity-skip-empty-block-trees-td582504.html
>>
>> I've CCed John so he can take a look.
> 
> So have you block-layer folks figured out how you want to address
> this Coverity issue yet?
> 
> thanks
> -- PMM
> 

I looked again. I think Vladimir's patch will shut up Coverity for sure,
feel free to apply it if you want this out of your hair.

Stefan suggests the following, however:


diff --git a/migration/block-dirty-bitmap.c b/migration/block-dirty-bitmap.c
index 5e90f44c2f..00c068fda3 100644
--- a/migration/block-dirty-bitmap.c
+++ b/migration/block-dirty-bitmap.c
@@ -284,7 +284,7 @@ static int init_dirty_bitmap_migration(void)
 const char *drive_name = bdrv_get_device_or_node_name(bs);

 /* skip automatically inserted nodes */
-while (bs && bs->drv && bs->implicit) {
+while (bs->drv && bs->implicit) {
 bs = backing_bs(bs);
 }


The idea is that by removing the assumption that bs could ever be null
here (it shouldn't be), we'll coax Coverity into not warning anymore. I
don't know if that will work, because backing_bs() can theoretically
return NULL, which might still convince Coverity there's a problem. In
practice it never will be NULL.

I don't know how to check whether Stefan's suggestion is sufficient.

For such a small, trivial issue though, just merge this and be done with
it, in my opinion. If you want to take this fix directly as a "build
fix" I wouldn't object.

I'm sorry for the fuss.



Re: [Qemu-block] Change in qemu 2.12 causes qemu-img convert to NBD to write more data

2018-11-15 Thread Nir Soffer
On Sun, Nov 11, 2018 at 6:11 PM Nir Soffer  wrote:

> On Wed, Nov 7, 2018 at 7:55 PM Nir Soffer  wrote:
>
>> On Wed, Nov 7, 2018 at 7:27 PM Kevin Wolf  wrote:
>>
>>> On 07.11.2018 at 15:56, Nir Soffer wrote:
>>> > On Wed, Nov 7, 2018 at 4:36 PM Richard W.M. Jones  wrote:
>>> >
>>> > > Another thing I tried was to change the NBD server (nbdkit) so
>>> > > that it doesn't advertise zero support to the client:
>>> > >
>>> > >   $ nbdkit --filter=log --filter=nozero memory size=6G logfile=/tmp/log \
>>> > >   --run './qemu-img convert ./fedora-28.img -n $nbd'
>>> > >   $ grep '\.\.\.$' /tmp/log | sed 's/.*\([A-Z][a-z]*\).*/\1/' | uniq -c
>>> > >2154 Write
>>> > >
>>> > > Not surprisingly, no zero commands are issued.  The size of the
>>> > > write commands is very uneven -- it appears to send one command
>>> > > per block of zeroes or data.
>>> > >
>>> > > Nir: If we could get information from imageio about whether
>>> > > zeroing is implemented efficiently or not by the backend, we
>>> > > could change virt-v2v / nbdkit to advertise this back to qemu.
>>> >
>>> > There is no way to detect the capability; ioctl(BLKZEROOUT) always
>>> > succeeds, silently falling back to manual zeroing in the kernel.
>>> >
>>> > Even if we could, sending zeroes on the wire from qemu may be even
>>> > slower, and it looks like qemu sends even more requests in this
>>> > case (2154 vs ~1300).
>>> >
>>> > It looks like this optimization on the qemu side leads to worse
>>> > performance, so it should not be enabled by default.
>>>
>>> Well, that's overgeneralising your case a bit. If the backend does
>>> support efficient zero writes (which file systems, the most common case,
>>> generally do), doing one big write_zeroes request at the start can
>>> improve performance quite a bit.
>>>
>>> It seems the problem is that we can't really know whether the operation
>>> will be efficient because the backends generally don't tell us. Maybe
>>> NBD could introduce a flag for this, but in the general case it appears
>>> to me that we'll have to have a command line option.
>>>
>>> However, I'm curious what your exact use case and the backend used in it
>>> is? Can something be improved there to actually get efficient zero
>>> writes and get even better performance than by just disabling the big
>>> zero write?
>>
>>
>> The backend is some NetApp storage connected via FC. I don't have
>> more info on this. We get a zero rate of about 1 GB/s on this
>> storage, which is quite slow compared with other storage we tested.
>>
>> One option we are checking now is whether this is the kernel's silent
>> fallback to manual zeroing when the server advertises a wrong value
>> of write_same_max_bytes.
>>
>
> We eliminated this using blkdiscard. This is what we get with this
> storage when zeroing a 100G LV:
>
> for i in 1 2 4 8 16 32; do time blkdiscard -z -p ${i}m
> /dev/6e1d84f9-f939-46e9-b108-0427a08c280c/2d5c06ce-6536-4b3c-a7b6-13c6d8e55ade;
> done
>
> real 4m50.851s
> user 0m0.065s
> sys 0m1.482s
>
> real 4m30.504s
> user 0m0.047s
> sys 0m0.870s
>
> real 4m19.443s
> user 0m0.029s
> sys 0m0.508s
>
> real 4m13.016s
> user 0m0.020s
> sys 0m0.284s
>
> real 2m45.888s
> user 0m0.011s
> sys 0m0.162s
>
> real 2m10.153s
> user 0m0.003s
> sys 0m0.100s
>
> We are investigating why we get low throughput on this server, and
> will also check several other servers.
>
>> Having a command line option to control this behavior sounds good. I
>> don't have enough data to tell what the default should be, but I
>> think the safe way would be to keep the old behavior.
>>
>
> We filed this bug:
> https://bugzilla.redhat.com/1648622
>

More data from even slower storage - zeroing a 10G LV on a Kaminario K2:

# time blkdiscard -z -p 32m /dev/test_vg/test_lv2

real50m12.425s
user0m0.018s
sys 2m6.785s

Maybe something is wrong with this storage, since we see this:

# grep -s "" /sys/block/dm-29/queue/* | grep write_same_max_bytes
/sys/block/dm-29/queue/write_same_max_bytes:512

Since BLKZEROOUT always falls back to slow manual zeroing silently,
maybe we can disable the aggressive pre-zeroing of the entire device
for block devices, and keep this optimization for files when
fallocate() is supported?

Nir


Re: [Qemu-block] [Qemu-devel] [PULL 29/36] qemu-iotests: Test auto-read-only with -drive and -blockdev

2018-11-15 Thread Eric Blake

On 11/5/18 10:37 AM, Kevin Wolf wrote:

Signed-off-by: Kevin Wolf 
Reviewed-by: Eric Blake 
---
  tests/qemu-iotests/232 | 147 +
  tests/qemu-iotests/232.out |  59 +++
  tests/qemu-iotests/group   |   1 +
  3 files changed, 207 insertions(+)
  create mode 100755 tests/qemu-iotests/232
  create mode 100644 tests/qemu-iotests/232.out

diff --git a/tests/qemu-iotests/232 b/tests/qemu-iotests/232
new file mode 100755
index 00..bc2972d124
--- /dev/null
+++ b/tests/qemu-iotests/232
@@ -0,0 +1,147 @@
+#!/bin/bash
+#
+# Test for auto-read-only


This breaks './check -luks 232':

@@ -3,57 +3,60 @@

 === -drive with read-write image: read-only/auto-read-only 
combinations ===


-NODE_NAME: TEST_DIR/t.IMGFMT (file, read-only)
-NODE_NAME: TEST_DIR/t.IMGFMT (file, read-only)
-NODE_NAME: TEST_DIR/t.IMGFMT (file, read-only)
-
-NODE_NAME: TEST_DIR/t.IMGFMT (file)
-NODE_NAME: TEST_DIR/t.IMGFMT (file)
-NODE_NAME: TEST_DIR/t.IMGFMT (file)
-
-NODE_NAME: TEST_DIR/t.IMGFMT (file)
-NODE_NAME: TEST_DIR/t.IMGFMT (file)
-NODE_NAME: TEST_DIR/t.IMGFMT (file)
+QEMU_PROG: -drive 
driver=file,file=driver=IMGFMT,key-secret=keysec0,file.filename=TEST_DIR/t.IMGFMT,if=none,read-only=on,auto-read-only=off: 
Could not open 'driver=IMGFMT': No such file or directory
+QEMU_PROG: -drive 
driver=file,file=driver=IMGFMT,key-secret=keysec0,file.filename=TEST_DIR/t.IMGFMT,if=none,read-only=on,auto-read-only=on: 
Could not open 'driver=IMGFMT': No such file or directory


...

I know that for LUKS, you have to use --image-opts, which would require
some rewrites of the commands you are testing, so maybe the best thing
is to just blacklist this test from running on LUKS?


--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



Re: [Qemu-block] [Qemu-devel] [PATCH for-3.1?] file-posix: Better checks of 64-bit copy_range

2018-11-15 Thread Eric Blake

On 11/14/18 3:05 PM, Eric Blake wrote:

file-posix.c was taking a 64-bit bytes in raw_co_copy_range_to(),
passing it through a 32-bit parameter of paio_submit_co_full(),
then widening it back to size_t when assigning into acb->aio_nbytes.

Looking at io.c, I can't quickly tell if bdrv_co_copy_range_internal()
is fragmenting things to honor bs->bl.max_transfer, or if it can
accidentally send requests larger than 2G down to the driver. If
the former, then this is a no-op; if the latter, then someone needs
to find a way to trigger this assertion and patch the block layer
to properly fragment copy_range requests.  Either way, we're better
off with an assertion than the risk of silent data corruption.

Signed-off-by: Eric Blake 
---
  block/file-posix.c | 5 +++--
  1 file changed, 3 insertions(+), 2 deletions(-)


This patch is not needed after Kevin's file-posix cleanups for post-release:
https://lists.gnu.org/archive/html/qemu-devel/2018-11/msg02985.html

The question remains, though, if we still want this in 3.1.



diff --git a/block/file-posix.c b/block/file-posix.c
index 58c86a01eaa..48ad3bb372a 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1821,7 +1821,7 @@ static int aio_worker(void *arg)
  static int paio_submit_co_full(BlockDriverState *bs, int fd,
 int64_t offset, int fd2, int64_t offset2,
 QEMUIOVector *qiov,
-   int bytes, int type)
+   uint64_t bytes, int type)
  {
  RawPosixAIOData *acb = g_new(RawPosixAIOData, 1);
  ThreadPool *pool;
@@ -1832,6 +1832,7 @@ static int paio_submit_co_full(BlockDriverState *bs, int 
fd,
  acb->aio_fd2 = fd2;
  acb->aio_offset2 = offset2;

+assert(bytes <= SIZE_MAX);
  acb->aio_nbytes = bytes;
  acb->aio_offset = offset;

@@ -1848,7 +1849,7 @@ static int paio_submit_co_full(BlockDriverState *bs, int 
fd,

  static inline int paio_submit_co(BlockDriverState *bs, int fd,
   int64_t offset, QEMUIOVector *qiov,
- int bytes, int type)
+ uint64_t bytes, int type)
  {
  return paio_submit_co_full(bs, fd, offset, -1, 0, qiov, bytes, type);
  }



--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



Re: [Qemu-block] [PATCH for 3.1 v3 0/3] minor qcow2 compression improvements

2018-11-15 Thread Eric Blake

On 11/14/18 8:35 AM, Kevin Wolf wrote:

On 14.11.2018 at 00:03, Eric Blake wrote:

As the added iotests shows, we have a (corner case) data corruption
that is user triggerable, therefore, this is still appropriate for
inclusion in 3.1.


Thanks, applied to the block branch.


Patch 2 and 3 are fine; but Berto raised questions about patch 1 so I 
posted a v9 of that patch, in case you want to replace it in your queue 
and/or defer it out of 3.1 and instead save it for 4.0.


--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



[Qemu-block] [PATCH for 3.1 v9] qcow2: Document some maximum size constraints

2018-11-15 Thread Eric Blake
Although off_t permits up to 63 bits (8EB) of file offsets, in
practice, we're going to hit other limits first.  Document some
of those limits in the qcow2 spec (some are inherent, others are
implementation choices of qemu), and how choice of cluster size
can influence some of the limits.

While we cannot map any uncompressed virtual cluster to any
address higher than 64 PB (56 bits) (due to the current L1/L2
field encoding stopping at bit 55), qemu's cap of 8M for the
refcount table can still access larger host addresses for some
combinations of large clusters and small refcount_order.  For
comparison, ext4 with 4k blocks caps files at 16PB.

Another interesting limit: for compressed clusters, the L2 layout
requires an ever-smaller maximum host offset as cluster size gets
larger, down to a 512 TB maximum with 2M clusters.  In particular,
note that with a cluster size of 8k or smaller, the L2 entry for
a compressed cluster could technically point beyond the 64PB mark,
but when you consider that with 8k clusters and refcount_order = 0,
you cannot access beyond 512T without exceeding qemu's limit of an
8M cap on the refcount table, it is unlikely that any image in the
wild has attempted to do so.  To be safe, let's document that bits
beyond 55 in a compressed cluster must be 0.

Signed-off-by: Eric Blake 

---
v9: Yet more wording tweaks, to call out the difference between
inherent (L2 reserved bit) and implementation (qemu's 32M L1 and
8M refcount) limits.

Designed to replace commit 1da4dc02 on Kevin's block branch.
---
 docs/interop/qcow2.txt | 38 --
 1 file changed, 36 insertions(+), 2 deletions(-)

diff --git a/docs/interop/qcow2.txt b/docs/interop/qcow2.txt
index 845d40a086d..fb5cb47245c 100644
--- a/docs/interop/qcow2.txt
+++ b/docs/interop/qcow2.txt
@@ -40,7 +40,18 @@ The first cluster of a qcow2 image contains the file header:
 with larger cluster sizes.

  24 - 31:   size
-Virtual disk size in bytes
+Virtual disk size in bytes.
+
+Note: qemu has an implementation limit of 32 MB as
+the maximum L1 table size.  With a 2 MB cluster
+size, it is unable to populate a virtual cluster
+beyond 2 EB (61 bits); with a 512 byte cluster
+size, it is unable to populate a virtual size
+larger than 128 GB (37 bits).  Meanwhile, L1/L2
+table layouts limit an image to no more than 64 PB
+(56 bits) of populated clusters, and an image may
+hit other limits first (such as a file system's
+maximum size).

  32 - 35:   crypt_method
 0 for no encryption
@@ -326,6 +337,17 @@ in the image file.
 It contains pointers to the second level structures which are called refcount
 blocks and are exactly one cluster in size.

+Although a large enough refcount table can reserve clusters past 64 PB
+(56 bits) (assuming the underlying protocol can even be sized that
+large), note that some qcow2 metadata such as L1/L2 tables must point
+to clusters prior to that point.
+
+Note: qemu has an implementation limit of 8 MB as the maximum refcount
+table size.  With a 2 MB cluster size and a default refcount_order of
+4, it is unable to reference host resources beyond 2 EB (61 bits); in
+the worst case, with a 512 cluster size and refcount_order of 6, it is
+unable to access beyond 32 GB (35 bits).
+
 Given an offset into the image file, the refcount of its cluster can be
 obtained as follows:

@@ -365,6 +387,16 @@ The L1 table has a variable size (stored in the header) 
and may use multiple
 clusters, however it must be contiguous in the image file. L2 tables are
 exactly one cluster in size.

+The L1 and L2 tables have implications on the maximum virtual file
+size; for a given L1 table size, a larger cluster size is required for
+the guest to have access to more space.  Furthermore, a virtual
+cluster must currently map to a host offset below 64 PB (56 bits)
+(although this limit could be relaxed by putting reserved bits into
+use).  Additionally, as cluster size increases, the maximum host
+offset for a compressed cluster is reduced (a 2M cluster size requires
+compressed clusters to reside below 512 TB (49 bits), and this limit
+cannot be relaxed without an incompatible layout change).
+
 Given an offset into the virtual disk, the offset into the image file can be
 obtained as follows:

@@ -427,7 +459,9 @@ Standard Cluster Descriptor:
 Compressed Clusters Descriptor (x = 62 - (cluster_bits - 8)):

 Bit  0 - x-1:   Host cluster offset. This is usually _not_ aligned to a
-cluster or sector boundary!
+cluster or sector boundary!  If cluster_bits is
+small enough that this field includes bits beyond
+55, those upper bits must be se

Re: [Qemu-block] [Qemu-devel] [PATCH v2 09/13] RFC: crypto: Rely on block layer for fragmentation

2018-11-15 Thread Daniel P. Berrangé
On Wed, Nov 14, 2018 at 08:03:30PM -0600, Eric Blake wrote:
> No need to reimplement fragmentation to BLOCK_CRYPTO_MAX_IO_SIZE
> ourselves when we can ask the block layer to do it for us.
> 
> Signed-off-by: Eric Blake 
> 
> ---
> Question - is this patch for 'crypto' acceptable, or should we stick
> with just the previous one that marks things as 64-bit clean?

Unless I'm missing something, this is functionally equivalent to
the existing code, and is simpler to read, so I don't see any
obvious downside  to this patch.

Assuming that you've run 'qemu-iotests/check -luks' on it to
validate it then ...

   Reviewed-by: Daniel P. Berrangé 



> ---
>  block/crypto.c | 80 +++---
>  1 file changed, 30 insertions(+), 50 deletions(-)
> 
> diff --git a/block/crypto.c b/block/crypto.c
> index 259ef2649e1..b004cef22c2 100644
> --- a/block/crypto.c
> +++ b/block/crypto.c
> @@ -328,8 +328,6 @@ block_crypto_co_preadv(BlockDriverState *bs, uint64_t 
> offset, uint64_t bytes,
> QEMUIOVector *qiov, int flags)
>  {
>  BlockCrypto *crypto = bs->opaque;
> -uint64_t cur_bytes; /* number of bytes in current iteration */
> -uint64_t bytes_done = 0;
>  uint8_t *cipher_data = NULL;
>  QEMUIOVector hd_qiov;
>  int ret = 0;
> @@ -346,38 +344,30 @@ block_crypto_co_preadv(BlockDriverState *bs, uint64_t 
> offset, uint64_t bytes,
>  /* Bounce buffer because we don't wish to expose cipher text
>   * in qiov which points to guest memory.
>   */
> -cipher_data =
> -qemu_try_blockalign(bs->file->bs, MIN(BLOCK_CRYPTO_MAX_IO_SIZE,
> -  qiov->size));
> +assert(qiov->size <= BLOCK_CRYPTO_MAX_IO_SIZE);
> +cipher_data = qemu_try_blockalign(bs->file->bs, qiov->size);
>  if (cipher_data == NULL) {
>  ret = -ENOMEM;
>  goto cleanup;
>  }
> 
> -while (bytes) {
> -cur_bytes = MIN(bytes, BLOCK_CRYPTO_MAX_IO_SIZE);
> +qemu_iovec_reset(&hd_qiov);
> +qemu_iovec_add(&hd_qiov, cipher_data, bytes);
> 
> -qemu_iovec_reset(&hd_qiov);
> -qemu_iovec_add(&hd_qiov, cipher_data, cur_bytes);
> -
> -ret = bdrv_co_preadv(bs->file, payload_offset + offset + bytes_done,
> - cur_bytes, &hd_qiov, 0);
> -if (ret < 0) {
> -goto cleanup;
> -}
> -
> -if (qcrypto_block_decrypt(crypto->block, offset + bytes_done,
> -  cipher_data, cur_bytes, NULL) < 0) {
> -ret = -EIO;
> -goto cleanup;
> -}
> -
> -qemu_iovec_from_buf(qiov, bytes_done, cipher_data, cur_bytes);
> +ret = bdrv_co_preadv(bs->file, payload_offset + offset, bytes,
> + &hd_qiov, 0);
> +if (ret < 0) {
> +goto cleanup;
> +}
> 
> -bytes -= cur_bytes;
> -bytes_done += cur_bytes;
> +if (qcrypto_block_decrypt(crypto->block, offset,
> +  cipher_data, bytes, NULL) < 0) {
> +ret = -EIO;
> +goto cleanup;
>  }
> 
> +qemu_iovec_from_buf(qiov, 0, cipher_data, bytes);
> +
>   cleanup:
>  qemu_iovec_destroy(&hd_qiov);
>  qemu_vfree(cipher_data);
> @@ -391,8 +381,6 @@ block_crypto_co_pwritev(BlockDriverState *bs, uint64_t 
> offset, uint64_t bytes,
>  QEMUIOVector *qiov, int flags)
>  {
>  BlockCrypto *crypto = bs->opaque;
> -uint64_t cur_bytes; /* number of bytes in current iteration */
> -uint64_t bytes_done = 0;
>  uint8_t *cipher_data = NULL;
>  QEMUIOVector hd_qiov;
>  int ret = 0;
> @@ -409,36 +397,28 @@ block_crypto_co_pwritev(BlockDriverState *bs, uint64_t 
> offset, uint64_t bytes,
>  /* Bounce buffer because we're not permitted to touch
>   * contents of qiov - it points to guest memory.
>   */
> -cipher_data =
> -qemu_try_blockalign(bs->file->bs, MIN(BLOCK_CRYPTO_MAX_IO_SIZE,
> -  qiov->size));
> +assert(qiov->size <= BLOCK_CRYPTO_MAX_IO_SIZE);
> +cipher_data = qemu_try_blockalign(bs->file->bs, qiov->size);
>  if (cipher_data == NULL) {
>  ret = -ENOMEM;
>  goto cleanup;
>  }
> 
> -while (bytes) {
> -cur_bytes = MIN(bytes, BLOCK_CRYPTO_MAX_IO_SIZE);
> +qemu_iovec_to_buf(qiov, 0, cipher_data, bytes);
> 
> -qemu_iovec_to_buf(qiov, bytes_done, cipher_data, cur_bytes);
> +if (qcrypto_block_encrypt(crypto->block, offset,
> +  cipher_data, bytes, NULL) < 0) {
> +ret = -EIO;
> +goto cleanup;
> +}
> 
> -if (qcrypto_block_encrypt(crypto->block, offset + bytes_done,
> -  cipher_data, cur_bytes, NULL) < 0) {
> -ret = -EIO;
> -goto cleanup;
> -}
> +qemu_iovec_reset(&hd_qiov);
> +qemu_iovec_add(&hd_qiov, cipher_data, bytes);
> 
> -qemu_iovec_

Re: [Qemu-block] [PATCH] nvme: fix oob access issue(CVE-2018-16847)

2018-11-15 Thread Paolo Bonzini
On 15/11/2018 04:14, Li Qiang wrote:
> 
> 
> Paolo Bonzini <pbonz...@redhat.com> wrote on Wed, Nov 14, 2018 at 11:44 PM:
> 
> On 14/11/2018 02:38, Li Qiang wrote:
> >
> >
> > Paolo Bonzini <pbonz...@redhat.com> wrote on Wed, Nov 14, 2018 at 2:27 AM:
> >
> >     On 13/11/2018 11:17, Kevin Wolf wrote:
> >     > On 13.11.2018 at 02:45, Li Qiang wrote:
> >     >> Ping, what's the status of this patch?
> >     >>
> >     >> I see Kevin's new pr doesn't contain this patch.
> >     >
> >     > Oh, I thought you said that you wanted to fix this at a
> >     > higher level so that the problem is caught before even
> >     > getting into nvme code? If you don't, I can apply the patch
> >     > for my next pull request.
> >
> >     As far as I know the bug doesn't exist.  Li Qiang, if you have a
> >     reproducer please send it.
> >
> >
> > Hello Paolo,
> > Though I've sent the debug information and ASAN output in the mail
> > to secal...@redhat.com, I'm glad to provide it here.
> > This is for the read path; I think the write path is the same, but
> > as the PoC is in userspace, mmap() can only map the exact size of
> > the MMIO region, so we can only write within that area. But if we
> > used a kernel module, I think we could also write outside of the
> > MMIO region.
> > The nvme device's parameter should be set as 'cmb_size_mb=2', and
> > the PCI address may differ on your system.
> 
> Ok, thanks.  I've created a reproducer using qtest (though I have to run
> now and cannot post it properly).
> 
> The patch for the fix is simply:
> 
> 
> So do you send this or me?

Me, together with the test.

Paolo



Re: [Qemu-block] [PATCH] migration/block-dirty-bitmap: fix Coverity CID1390625

2018-11-15 Thread John Snow



On 11/15/18 6:48 AM, Peter Maydell wrote:
> On 17 October 2018 at 10:51, Stefan Hajnoczi  wrote:
>> On Tue, Oct 16, 2018 at 04:20:18PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>>> Theoretically possible that we finish the skipping loop with bs = NULL
>>> and the following code will crash trying to dereference it. Fix that.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy 
>>> ---
>>>  migration/block-dirty-bitmap.c | 4 ++++
>>>  1 file changed, 4 insertions(+)
>>>
>>> diff --git a/migration/block-dirty-bitmap.c b/migration/block-dirty-bitmap.c
>>> index 477826330c..6de808f95f 100644
>>> --- a/migration/block-dirty-bitmap.c
>>> +++ b/migration/block-dirty-bitmap.c
>>> @@ -288,6 +288,10 @@ static int init_dirty_bitmap_migration(void)
>>>  bs = backing_bs(bs);
>>>  }
>>>
>>> +if (!bs || bs->implicit) {
>>> +continue;
>>> +}
>>> +
>>>  for (bitmap = bdrv_dirty_bitmap_next(bs, NULL); bitmap;
>>>   bitmap = bdrv_dirty_bitmap_next(bs, bitmap))
>>>  {
>>
>> Previous discussion:
>> http://qemu.11.n7.nabble.com/PATCH-migration-Appease-coverity-skip-empty-block-trees-td582504.html
>>
>> I've CCed John so he can take a look.
> 
> So have you block-layer folks figured out how you want to address
> this Coverity issue yet?
> 
> thanks
> -- PMM
> 

I'm sorry, Peter.

I've let this one slip through the cracks many times and I thought it
had been handled.

I'll look now.



Re: [Qemu-block] [PATCH v2 13/13] block: Enforce non-zero bl.max_transfer

2018-11-15 Thread Eric Blake

On 11/15/18 10:24 AM, Kevin Wolf wrote:

On 15.11.2018 at 03:03, Eric Blake wrote:

The raw format driver and the filter drivers default to picking
up the same limits as what they wrap, and I've audited that they
are otherwise simple enough in their passthrough to be 64-bit
clean; it's not worth changing their .bdrv_refresh_limits to
advertise anything different.  Previous patches changed all
remaining byte-based format and protocol drivers to supply an
explicit max_transfer consistent with an audit (or lack thereof)
of their code.  And for drivers still using sector-based
callbacks (gluster, iscsi, parallels, qed, replication, sheepdog,
ssh, vhdx), we can easily supply a 2G default limit while waiting
for a later per-driver conversion and auditing while moving to
byte-based callbacks.

With that, we can assert that max_transfer is now always set (either
by sector-based default, by merge with a backing layer, or by
explicit settings), and enforce that it be non-zero by the end
of bdrv_refresh_limits().  This in turn lets us simplify portions
of the block layer to use MIN() instead of MIN_NON_ZERO(), while
still fragmenting things to no larger than 2G chunks.  Also, we no
longer need to rewrite the driver's bl.max_transfer to a value below
2G.  There should be no semantic change from this patch, although
it does open the door if a future patch wants to let the block layer
choose a larger value than 2G while still honoring bl.max_transfer
for fragmentation.

Signed-off-by: Eric Blake 
---

[hmm - in sending this email, I realize that I didn't specifically
check whether quorum always picks up a sane limit from its children,
since it is byte-based but lacks a .bdrv_refresh_limits]

  include/block/block_int.h | 10 +-
  block/io.c| 25 -
  2 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/include/block/block_int.h b/include/block/block_int.h
index b19d94dac5f..410553bb0cc 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -548,11 +548,11 @@ typedef struct BlockLimits {
  uint32_t opt_transfer;

  /* Maximal transfer length in bytes.  Need not be power of 2, but
- * must be multiple of opt_transfer and bl.request_alignment, or 0
- * for no 32-bit limit.  The block layer fragments all actions
- * below 2G, so setting this value to anything larger than INT_MAX
- * implies that the driver has been audited for 64-bit
- * cleanness. */
+ * must be larger than opt_transfer and request_alignment (the
+ * block layer rounds it down as needed).  The block layer
+ * fragments all actions below 2G, so setting this value to
+ * anything larger than INT_MAX implies that the driver has been
+ * audited for 64-bit cleanness. */
  uint64_t max_transfer;

  /* memory alignment, in bytes so that no bounce buffer is needed */
diff --git a/block/io.c b/block/io.c
index 4911a1123eb..0024be0bf31 100644
--- a/block/io.c
+++ b/block/io.c
@@ -129,6 +129,9 @@ void bdrv_refresh_limits(BlockDriverState *bs, Error **errp)
  /* Default alignment based on whether driver has byte interface */
  bs->bl.request_alignment = (drv->bdrv_co_preadv ||
  drv->bdrv_aio_preadv) ? 1 : 512;
+/* Default cap at 2G for drivers without byte interface */
+if (bs->bl.request_alignment == 1)
+bs->bl.max_transfer = BDRV_REQUEST_MAX_BYTES;


Missing braces, and comment and code don't match (the comment suggests
that the condition should be != 1).


Indeed. I shouldn't send patches late at night ;)  In fact, it proves I 
didn't run iotests on any of the protocol drivers that would be impacted 
by this code (gluster, iscsi, sheepdog, ssh), and thus, would probably 
fail the added assertion when they leave bl.max_transfer at 0 because I 
didn't initialize a default for them.  The format drivers (parallels, 
qed, replication, vhdx) would probably still work because we merge in 
the max_transfer of bs->file before the assert that bl.max_transfer is 
non-zero.


Lesson learned - don't submit v3 of this patch until I've at least done 
better due diligence at running iotests on some of the impacted drivers.


--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



Re: [Qemu-block] [PATCH v2 05/13] block: Switch to 64-bit bl.max_transfer

2018-11-15 Thread Eric Blake

On 11/15/18 9:45 AM, Kevin Wolf wrote:

Am 15.11.2018 um 03:03 hat Eric Blake geschrieben:

This change has no semantic impact: all drivers either leave the
value at 0 (no inherent 32-bit limit is still translated into
fragmentation below 2G; see the previous patch for that audit), or
set it to a value less than 2G.  However, switching to a larger
type and enforcing the 2G cap at the block layer makes it easier
to audit specific drivers for their robustness to larger sizing,
by letting them specify a value larger than INT_MAX if they have
been audited to be 64-bit clean.




+++ b/block/io.c
@@ -159,6 +159,13 @@ void bdrv_refresh_limits(BlockDriverState *bs, Error 
**errp)
  if (drv->bdrv_refresh_limits) {
  drv->bdrv_refresh_limits(bs, errp);
  }
+
+/* Clamp max_transfer to 2G */
+if (bs->bl.max_transfer > UINT32_MAX) {


UINT32_MAX is 4G, not 2G.

Would it make more sense to make BDRV_REQUEST_MAX_BYTES the maximum
anyway?


D'oh.  Yes, that's what I intended, possibly by spelling it INT_MAX (the 
fact that the 'if' goes away in patch 13 is not an excuse for sloppy 
coding in the meantime).



Allowing higher (but not too high) explicit values than what we
clamp to feels a bit odd.

BDRV_REQUEST_MAX_BYTES is probably also what drivers really expect
today.


Correct.  Well, at least the unaudited drivers (the rest of this series 
audits a few drivers that can handle larger byte values).


--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



Re: [Qemu-block] [PATCH v2 13/13] block: Enforce non-zero bl.max_transfer

2018-11-15 Thread Kevin Wolf
Am 15.11.2018 um 03:03 hat Eric Blake geschrieben:
> The raw format driver and the filter drivers default to picking
> up the same limits as what they wrap, and I've audited that they
> are otherwise simple enough in their passthrough to be 64-bit
> clean; it's not worth changing their .bdrv_refresh_limits to
> advertise anything different.  Previous patches changed all
> remaining byte-based format and protocol drivers to supply an
> explicit max_transfer consistent with an audit (or lack thereof)
> of their code.  And for drivers still using sector-based
> callbacks (gluster, iscsi, parallels, qed, replication, sheepdog,
> ssh, vhdx), we can easily supply a 2G default limit while waiting
> for a later per-driver conversion and auditing while moving to
> byte-based callbacks.
> 
> With that, we can assert that max_transfer is now always set (either
> by sector-based default, by merge with a backing layer, or by
> explicit settings), and enforce that it be non-zero by the end
> of bdrv_refresh_limits().  This in turn lets us simplify portions
> of the block layer to use MIN() instead of MIN_NON_ZERO(), while
> still fragmenting things to no larger than 2G chunks.  Also, we no
> longer need to rewrite the driver's bl.max_transfer to a value below
> 2G.  There should be no semantic change from this patch, although
> it does open the door if a future patch wants to let the block layer
> choose a larger value than 2G while still honoring bl.max_transfer
> for fragmentation.
> 
> Signed-off-by: Eric Blake 
> ---
> 
> [hmm - in sending this email, I realize that I didn't specifically
> check whether quorum always picks up a sane limit from its children,
> since it is byte-based but lacks a .bdrv_refresh_limits]
> 
>  include/block/block_int.h | 10 +-
>  block/io.c| 25 -
>  2 files changed, 17 insertions(+), 18 deletions(-)
> 
> diff --git a/include/block/block_int.h b/include/block/block_int.h
> index b19d94dac5f..410553bb0cc 100644
> --- a/include/block/block_int.h
> +++ b/include/block/block_int.h
> @@ -548,11 +548,11 @@ typedef struct BlockLimits {
>  uint32_t opt_transfer;
> 
>  /* Maximal transfer length in bytes.  Need not be power of 2, but
> - * must be multiple of opt_transfer and bl.request_alignment, or 0
> - * for no 32-bit limit.  The block layer fragments all actions
> - * below 2G, so setting this value to anything larger than INT_MAX
> - * implies that the driver has been audited for 64-bit
> - * cleanness. */
> + * must be larger than opt_transfer and request_alignment (the
> + * block layer rounds it down as needed).  The block layer
> + * fragments all actions below 2G, so setting this value to
> + * anything larger than INT_MAX implies that the driver has been
> + * audited for 64-bit cleanness. */
>  uint64_t max_transfer;
> 
>  /* memory alignment, in bytes so that no bounce buffer is needed */
> diff --git a/block/io.c b/block/io.c
> index 4911a1123eb..0024be0bf31 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -129,6 +129,9 @@ void bdrv_refresh_limits(BlockDriverState *bs, Error 
> **errp)
>  /* Default alignment based on whether driver has byte interface */
>  bs->bl.request_alignment = (drv->bdrv_co_preadv ||
>  drv->bdrv_aio_preadv) ? 1 : 512;
> +/* Default cap at 2G for drivers without byte interface */
> +if (bs->bl.request_alignment == 1)
> +bs->bl.max_transfer = BDRV_REQUEST_MAX_BYTES;

Missing braces, and comment and code don't match (the comment suggests
that the condition should be != 1).

Kevin
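A hedged sketch of what the corrected hunk might look like, with braces added and the condition inverted as the review suggests. The struct and constant here are simplified stand-ins for QEMU's BlockLimits and BDRV_REQUEST_MAX_BYTES, not the real definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for QEMU's 2G request cap (INT_MAX rounded down to 512). */
#define BDRV_REQUEST_MAX_BYTES ((uint64_t)(INT32_MAX & ~511))

/* Simplified stand-in for the relevant BlockLimits fields. */
struct limits {
    uint32_t request_alignment;
    uint64_t max_transfer;
};

static void apply_default_cap(struct limits *bl)
{
    /* Sector-based drivers (alignment 512, i.e. != 1) get the 2G default
     * cap; byte-based drivers (alignment 1) are left alone, matching the
     * comment's intent.  Note the braces and the inverted condition. */
    if (bl->request_alignment != 1) {
        bl->max_transfer = BDRV_REQUEST_MAX_BYTES;
    }
}
```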



Re: [Qemu-block] [PATCH v3 1/3] qcow2: Document some maximum size constraints

2018-11-15 Thread Eric Blake

On 11/15/18 9:17 AM, Alberto Garcia wrote:

On Wed 14 Nov 2018 12:03:17 AM CET, Eric Blake wrote:

@@ -427,7 +451,9 @@ Standard Cluster Descriptor:
  Compressed Clusters Descriptor (x = 62 - (cluster_bits - 8)):

  Bit  0 - x-1:   Host cluster offset. This is usually _not_ aligned to a
-cluster or sector boundary!
+cluster or sector boundary!  If cluster_bits is
+small enough that this field includes bits beyond
+55, those upper bits must be set to 0.


While I think that it's good to have a 56 bits upper limit for both
compressed and uncompressed clusters, I'm wondering: is it theoretically
possible to have data clusters above 64PB if they're all compressed?


The question is only applicable for cluster sizes of 8k and smaller. 
With an 8k cluster image and the qcow2.h limit of a 32MiB L1 table (4096 
clusters, each of which holds 1024 L2 entries, and each L2 table holds 
1024 cluster entries), you can have up to 4k * 1k * 1k * 8k == 32T of 
guest size.  You'd need a LOT of metadata (for example, over 2000 
internal snapshots) before the host file would reach 64PB to even need 
to store compressed clusters at a host offset that large.  At the same 
time, qemu would limit you to an 8MiB refcount table (1024 clusters, 
each of which holds 1024 refblocks, which in turn hold a default of 4096 
refcounts, but with a refcount_order of 0 could hold 64k refcounts), 
which results in qemu allowing your maximum host offset to be 1k * 1k * 
64k * 8k == 512T, which means qemu will refuse to generate or open such 
an image in the first place.  So if you have an image that tries to 
store a compressed data cluster above host offset 64PB, qemu is unable 
to process that image.
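The two size computations above can be checked mechanically. This sketch only reproduces the arithmetic with the figures from the discussion (it is not QEMU's actual qcow2 code):

```c
#include <assert.h>
#include <stdint.h>

/* Maximum guest size for an 8 KiB-cluster image under qemu's 32 MiB
 * L1-table limit: 4k L1-table clusters * 1k entries each * 1k L2 entries
 * per table * 8 KiB per data cluster. */
static uint64_t guest_max(void)
{
    uint64_t l1_clusters = 4096;    /* 32 MiB L1 table / 8 KiB clusters */
    uint64_t entries = 1024;        /* 8 KiB cluster / 8-byte entries */
    uint64_t cluster = 8192;
    return l1_clusters * entries * entries * cluster;
}

/* Maximum host offset under qemu's 8 MiB refcount-table limit, with
 * refcount_order == 0 (64k refcounts per 8 KiB refblock). */
static uint64_t host_max(void)
{
    uint64_t reftable_clusters = 1024;  /* 8 MiB / 8 KiB clusters */
    uint64_t refblocks = 1024;
    uint64_t refcounts = 65536;         /* 1-bit refcounts */
    uint64_t cluster = 8192;
    return reftable_clusters * refblocks * refcounts * cluster;
}
```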


But your question does mean this other part of my patch:

>
>24 - 31:   size
> -Virtual disk size in bytes
> +Virtual disk size in bytes.
> +
> +Note: with a 2 MB cluster size, the maximum
> +virtual size is 2 EB (61 bits) for a fully sparse
> +file; however, L1/L2 table layouts limit an image
> +to no more than 64 PB (56 bits) of populated
> +clusters, and an image may hit other limits first
> +(such as a file system's maximum size).  With a
> +512 byte cluster size, the maximum virtual size
> +drops to 128 GB (37 bits).

is misleading.  Elsewhere, we mention for cluster_bits:

Note: qemu as of today has an implementation limit 
of 2 MB
as the maximum cluster size and won't be able to 
open images

with larger cluster sizes.

and looking at the code in qcow2.h, the 2 EB limit on maximum virtual 
size is NOT an inherent limit in the qcow2 file format, but rather a 
result of qemu's implementation refusing to size the L1 table larger 
than 32MiB.  If you allow a larger L1 table, you can get to larger 
virtual addresses.  So I need to fix this patch [again] to add in 
wording about this being a qemu limit.
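The same reasoning reduced to arithmetic, as a sketch assuming qemu's 32 MiB L1-table cap (the hard limit lives in qcow2.h; this helper is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Virtual-size cap implied by qemu's 32 MiB L1-table limit: number of
 * L1 entries * L2 entries per table * bytes per cluster. */
static uint64_t max_virtual_size(uint64_t cluster_size)
{
    uint64_t l1_entries = (32ULL << 20) / 8;   /* qemu's 32 MiB L1 cap */
    uint64_t l2_entries = cluster_size / 8;    /* 8-byte L2 entries */
    return l1_entries * l2_entries * cluster_size;
}
```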


--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



Re: [Qemu-block] [PATCH 08/12] file-posix: Move read/write operation logic out of aio_worker()

2018-11-15 Thread Kevin Wolf
Am 31.10.2018 um 22:56 hat Kevin Wolf geschrieben:
> aio_worker() for reads and writes isn't boring enough yet. It still does
> some postprocessing for handling short reads and turning the result into
> the right return value.
> 
> However, there is no reason why handle_aiocb_rw() couldn't do the same,
> and even without duplicating code between the read and write path. So
> move the code there.
> 
> Signed-off-by: Kevin Wolf 

> @@ -1354,7 +1356,21 @@ static ssize_t handle_aiocb_rw(RawPosixAIOData *aiocb)
>  }
>  qemu_vfree(buf);
>  
> -return nbytes;
> +out:
> +if (nbytes == aiocb->aio_nbytes) {
> +return 0;
> +} else if (nbytes >= 0 && nbytes < aiocb->aio_nbytes) {
> +if (aiocb->aio_type & QEMU_AIO_WRITE) {
> +return -EINVAL;
> +} else {
> +iov_memset(aiocb->io.iov, aiocb->io.niov, nbytes,
> +  0, aiocb->aio_nbytes - nbytes);
> +return aiocb->aio_nbytes;

While reviewing Eric's patches, I noticed that this should be return 0.
Fixing it in my queue.

Kevin
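A minimal sketch of the post-processing with that correction applied: a short read is padded with zeroes and reported as full success (return 0), while a short write is an error. Names are simplified stand-ins for the file-posix code, and -EINVAL is hard-coded for self-containment:

```c
#include <assert.h>
#include <string.h>

static int finish_rw(int is_write, long nbytes, long requested,
                     unsigned char *buf)
{
    if (nbytes == requested) {
        return 0;                   /* full transfer */
    }
    if (nbytes >= 0 && nbytes < requested) {
        if (is_write) {
            return -22;             /* -EINVAL: short writes are errors */
        }
        /* Short read: zero the unread tail and report success... */
        memset(buf + nbytes, 0, requested - nbytes);
        return 0;                   /* ...as 0, not as `requested` */
    }
    return (int)nbytes;             /* negative errno from the syscall */
}
```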



Re: [Qemu-block] [PATCH v2 09/13] RFC: crypto: Rely on block layer for fragmentation

2018-11-15 Thread Kevin Wolf
Am 15.11.2018 um 03:03 hat Eric Blake geschrieben:
> No need to reimplement fragmentation to BLOCK_CRYPTO_MAX_IO_SIZE
> ourselves when we can ask the block layer to do it for us.
> 
> Signed-off-by: Eric Blake 
> 
> ---
> Question - is this patch for 'crypto' acceptable, or should we stick
> with just the previous one that marks things as 64-bit clean?

I don't know what Dan thinks, but I like it.

Kevin

>  block/crypto.c | 80 +++---
>  1 file changed, 30 insertions(+), 50 deletions(-)
> 
> diff --git a/block/crypto.c b/block/crypto.c
> index 259ef2649e1..b004cef22c2 100644
> --- a/block/crypto.c
> +++ b/block/crypto.c
> @@ -328,8 +328,6 @@ block_crypto_co_preadv(BlockDriverState *bs, uint64_t 
> offset, uint64_t bytes,
> QEMUIOVector *qiov, int flags)
>  {
>  BlockCrypto *crypto = bs->opaque;
> -uint64_t cur_bytes; /* number of bytes in current iteration */
> -uint64_t bytes_done = 0;
>  uint8_t *cipher_data = NULL;
>  QEMUIOVector hd_qiov;
>  int ret = 0;
> @@ -346,38 +344,30 @@ block_crypto_co_preadv(BlockDriverState *bs, uint64_t 
> offset, uint64_t bytes,
>  /* Bounce buffer because we don't wish to expose cipher text
>   * in qiov which points to guest memory.
>   */
> -cipher_data =
> -qemu_try_blockalign(bs->file->bs, MIN(BLOCK_CRYPTO_MAX_IO_SIZE,
> -  qiov->size));
> +assert(qiov->size <= BLOCK_CRYPTO_MAX_IO_SIZE);
> +cipher_data = qemu_try_blockalign(bs->file->bs, qiov->size);
>  if (cipher_data == NULL) {
>  ret = -ENOMEM;
>  goto cleanup;
>  }
> 
> -while (bytes) {
> -cur_bytes = MIN(bytes, BLOCK_CRYPTO_MAX_IO_SIZE);
> +qemu_iovec_reset(&hd_qiov);
> +qemu_iovec_add(&hd_qiov, cipher_data, bytes);
> 
> -qemu_iovec_reset(&hd_qiov);
> -qemu_iovec_add(&hd_qiov, cipher_data, cur_bytes);
> -
> -ret = bdrv_co_preadv(bs->file, payload_offset + offset + bytes_done,
> - cur_bytes, &hd_qiov, 0);
> -if (ret < 0) {
> -goto cleanup;
> -}
> -
> -if (qcrypto_block_decrypt(crypto->block, offset + bytes_done,
> -  cipher_data, cur_bytes, NULL) < 0) {
> -ret = -EIO;
> -goto cleanup;
> -}
> -
> -qemu_iovec_from_buf(qiov, bytes_done, cipher_data, cur_bytes);
> +ret = bdrv_co_preadv(bs->file, payload_offset + offset, bytes,
> + &hd_qiov, 0);
> +if (ret < 0) {
> +goto cleanup;
> +}
> 
> -bytes -= cur_bytes;
> -bytes_done += cur_bytes;
> +if (qcrypto_block_decrypt(crypto->block, offset,
> +  cipher_data, bytes, NULL) < 0) {
> +ret = -EIO;
> +goto cleanup;
>  }
> 
> +qemu_iovec_from_buf(qiov, 0, cipher_data, bytes);
> +
>   cleanup:
>  qemu_iovec_destroy(&hd_qiov);
>  qemu_vfree(cipher_data);
> @@ -391,8 +381,6 @@ block_crypto_co_pwritev(BlockDriverState *bs, uint64_t 
> offset, uint64_t bytes,
>  QEMUIOVector *qiov, int flags)
>  {
>  BlockCrypto *crypto = bs->opaque;
> -uint64_t cur_bytes; /* number of bytes in current iteration */
> -uint64_t bytes_done = 0;
>  uint8_t *cipher_data = NULL;
>  QEMUIOVector hd_qiov;
>  int ret = 0;
> @@ -409,36 +397,28 @@ block_crypto_co_pwritev(BlockDriverState *bs, uint64_t 
> offset, uint64_t bytes,
>  /* Bounce buffer because we're not permitted to touch
>   * contents of qiov - it points to guest memory.
>   */
> -cipher_data =
> -qemu_try_blockalign(bs->file->bs, MIN(BLOCK_CRYPTO_MAX_IO_SIZE,
> -  qiov->size));
> +assert(qiov->size <= BLOCK_CRYPTO_MAX_IO_SIZE);
> +cipher_data = qemu_try_blockalign(bs->file->bs, qiov->size);
>  if (cipher_data == NULL) {
>  ret = -ENOMEM;
>  goto cleanup;
>  }
> 
> -while (bytes) {
> -cur_bytes = MIN(bytes, BLOCK_CRYPTO_MAX_IO_SIZE);
> +qemu_iovec_to_buf(qiov, 0, cipher_data, bytes);
> 
> -qemu_iovec_to_buf(qiov, bytes_done, cipher_data, cur_bytes);
> +if (qcrypto_block_encrypt(crypto->block, offset,
> +  cipher_data, bytes, NULL) < 0) {
> +ret = -EIO;
> +goto cleanup;
> +}
> 
> -if (qcrypto_block_encrypt(crypto->block, offset + bytes_done,
> -  cipher_data, cur_bytes, NULL) < 0) {
> -ret = -EIO;
> -goto cleanup;
> -}
> +qemu_iovec_reset(&hd_qiov);
> +qemu_iovec_add(&hd_qiov, cipher_data, bytes);
> 
> -qemu_iovec_reset(&hd_qiov);
> -qemu_iovec_add(&hd_qiov, cipher_data, cur_bytes);
> -
> -ret = bdrv_co_pwritev(bs->file, payload_offset + offset + bytes_done,
> -  cur_bytes, &hd_qiov, flags);
> -if (ret <

Re: [Qemu-block] [PATCH v2 05/13] block: Switch to 64-bit bl.max_transfer

2018-11-15 Thread Kevin Wolf
Am 15.11.2018 um 03:03 hat Eric Blake geschrieben:
> This change has no semantic impact: all drivers either leave the
> value at 0 (no inherent 32-bit limit is still translated into
> fragmentation below 2G; see the previous patch for that audit), or
> set it to a value less than 2G.  However, switching to a larger
> type and enforcing the 2G cap at the block layer makes it easier
> to audit specific drivers for their robustness to larger sizing,
> by letting them specify a value larger than INT_MAX if they have
> been audited to be 64-bit clean.
> 
> Signed-off-by: Eric Blake 
> ---
>  include/block/block_int.h | 8 +---
>  block/io.c| 7 +++
>  2 files changed, 12 insertions(+), 3 deletions(-)
> 
> diff --git a/include/block/block_int.h b/include/block/block_int.h
> index f605622216d..b19d94dac5f 100644
> --- a/include/block/block_int.h
> +++ b/include/block/block_int.h
> @@ -549,9 +549,11 @@ typedef struct BlockLimits {
> 
>  /* Maximal transfer length in bytes.  Need not be power of 2, but
>   * must be multiple of opt_transfer and bl.request_alignment, or 0
> - * for no 32-bit limit.  For now, anything larger than INT_MAX is
> - * clamped down. */
> -uint32_t max_transfer;
> + * for no 32-bit limit.  The block layer fragments all actions
> + * below 2G, so setting this value to anything larger than INT_MAX
> + * implies that the driver has been audited for 64-bit
> + * cleanness. */
> +uint64_t max_transfer;
> 
>  /* memory alignment, in bytes so that no bounce buffer is needed */
>  size_t min_mem_alignment;
> diff --git a/block/io.c b/block/io.c
> index 39d4e7a3ae1..4911a1123eb 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -159,6 +159,13 @@ void bdrv_refresh_limits(BlockDriverState *bs, Error 
> **errp)
>  if (drv->bdrv_refresh_limits) {
>  drv->bdrv_refresh_limits(bs, errp);
>  }
> +
> +/* Clamp max_transfer to 2G */
> +if (bs->bl.max_transfer > UINT32_MAX) {

UINT32_MAX is 4G, not 2G.

Would it make more sense to make BDRV_REQUEST_MAX_BYTES the maximum
anyway? Allowing higher (but not too high) explicit values than what we
clamp to feels a bit odd.

BDRV_REQUEST_MAX_BYTES is probably also what drivers really expect
today.

> +bs->bl.max_transfer = QEMU_ALIGN_DOWN(BDRV_REQUEST_MAX_BYTES,
> +  MAX(bs->bl.opt_transfer,
> +  bs->bl.request_alignment));
> +}
>  }

Kevin
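With that suggestion applied, the clamp might look like this sketch, using the 2G BDRV_REQUEST_MAX_BYTES rather than UINT32_MAX (which is 4G-1) as the threshold. The macros here are simplified stand-ins for QEMU's:

```c
#include <assert.h>
#include <stdint.h>
#include <limits.h>

#define QEMU_ALIGN_DOWN(n, m) ((n) / (m) * (m))
#define MAX(a, b) ((a) > (b) ? (a) : (b))
/* Stand-in: INT_MAX rounded down to a 512 multiple, i.e. just under 2G. */
#define BDRV_REQUEST_MAX_BYTES ((uint64_t)(INT_MAX & ~511))

static uint64_t clamp_max_transfer(uint64_t max_transfer,
                                   uint32_t opt_transfer,
                                   uint32_t request_alignment)
{
    if (max_transfer > BDRV_REQUEST_MAX_BYTES) {
        /* Keep the clamped value a multiple of the stricter granularity. */
        max_transfer = QEMU_ALIGN_DOWN(BDRV_REQUEST_MAX_BYTES,
                                       MAX(opt_transfer, request_alignment));
    }
    return max_transfer;
}
```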



Re: [Qemu-block] [PATCH 00/12] file-posix: Simplify delegation to worker thread

2018-11-15 Thread Kevin Wolf
Am 12.11.2018 um 17:50 hat Kevin Wolf geschrieben:
> Am 31.10.2018 um 22:56 hat Kevin Wolf geschrieben:
> > This series cleans up and simplifies the code that calls worker thread
> > functions for the various operations in the file-posix driver. This
> > results in less indirection and better readability as well as reduced
> > heap allocations because we can store ACBs on the coroutine stack now.
> 
> ping?

Seems nobody has objections, so applied to block-next.

Kevin



Re: [Qemu-block] [PATCH v3 1/3] qcow2: Document some maximum size constraints

2018-11-15 Thread Alberto Garcia
On Wed 14 Nov 2018 12:03:17 AM CET, Eric Blake wrote:
> @@ -427,7 +451,9 @@ Standard Cluster Descriptor:
>  Compressed Clusters Descriptor (x = 62 - (cluster_bits - 8)):
>
>  Bit  0 - x-1:   Host cluster offset. This is usually _not_ aligned to a
> -cluster or sector boundary!
> +cluster or sector boundary!  If cluster_bits is
> +small enough that this field includes bits beyond
> +55, those upper bits must be set to 0.

While I think that it's good to have a 56 bits upper limit for both
compressed and uncompressed clusters, I'm wondering: is it theoretically
possible to have data clusters above 64PB if they're all compressed?

Berto



Re: [Qemu-block] KVM Forum block no[td]es

2018-11-15 Thread Alberto Garcia
On Wed 14 Nov 2018 06:24:10 PM CET, Max Reitz wrote:
>>> Permission system
>>> =
>>>
>>> GRAPH_MOD
>>> -
>>>
>>> We need some way for the commit job to prevent graph changes on its
>>> chain while it is running.  Our current blocker doesn’t do the job,
>>> however.  What to do?
>>>
>>> - We have no idea how to make a *permission* work.  Maybe the biggest
>>>   problem is that it just doesn’t work as a permission, because the
>>>   commit job doesn’t own the BdrvChildren that would need to be
>>>   blocked (namely the @backing BdrvChild).
>> 
>> What I'm doing at the moment in my blockdev-reopen series is check
>> whether all parents of the node I want to change share the GRAPH_MOD
>> permission. If any of them doesn't then the operation fails.
>> 
>> This can be checked by calling bdrv_get_cumulative_perm() or simply
>> bdrv_check_update_perm(..., BLK_PERM_GRAPH_MOD, ...).
>
> Sure.
>
>> It solves the problem because e.g. the stream block job only allows
>> BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED, so no graph
>> modifications allowed.
>
> But the problem is that the commit job needs to unshare the permission
> on all backing links between base and top, so none of the backing
> links can be broken.  But it cannot do that because those backing
> links do not belong to it (but to the respective overlay nodes).

I'm not sure if I'm following you. The commit block job has references
to all intermediate nodes (with block_job_add_bdrv()) and only shares
BLK_PERM_WRITE_UNCHANGED | BLK_PERM_WRITE. Anyone trying to obtain
BLK_PERM_GRAPH_MOD on an intermediate node would see that the block job
doesn't allow it.

>>>   (We never quite knew what “taking the GRAPH_PERMISSION” or
>>>   “unsharing the GRAPH_MOD permission” was supposed to mean.
>>>   Figuring that out always took like half an hour in any face-to-face
>>>   meeting, and then we decided it was pretty much useless for any
>>>   case we had at hand.)
>> 
>> Yeah it's a bit unclear. It can mean "none of this node's children
>> can be replaced / removed", which is what I'm using it for at the
>> moment.
>
> Yes, but that's just not useful.  I don't think we have any case where
> something would be annoyed by having its immediate child replaced
> (other than the visible data changing, potentially).  Usually when we
> say something (e.g. a block job) wants to prevent graph modification,
> it's about changing edges that exist independently of the job (such as
> a backing chain).

Isn't that what I just said? You cannot change other edges <=> you
cannot replace a node's children (or parents).

>>> Reopen
>>> --
>>>
>>> How should permissions be handled while the reopen is under way?
>>> Maybe we should take the union of @perm before and after, and the
>>> intersection of @shared before and after?
>> 
>> Do you have an example of a case in which you're reopening a node and
>> the change of permissions causes a problem?
>
> I do not.  So currently we switch from the permissions before to the
> ones after, right?  Which should be safe because that switch is a
> transaction that is integrated into reopen.
>
> However, I suppose it's possible the protocol layer gives us some
> issues again.  It cannot switch the locks before commit, but
> committing may fail.  What to do then?
>
> It would be safer if we took the union/intersection in
> bdrv_reopen_prepare() and then released the unnecessary flags in
> bdrv_reopen_commit().  We can still get errors (as discussed), but
> those can be safely ignored.  (They're just annoying, but not
> harmful.)

Ok, I suppose you're right.

Berto
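The union/intersection scheme discussed above can be sketched as follows (a hypothetical helper, not QEMU code: @perm takes the union so nothing needed is dropped mid-reopen, @shared takes the intersection so nothing is over-shared, and commit then only relaxes):

```c
#include <assert.h>
#include <stdint.h>

typedef struct Perms {
    uint64_t perm;      /* permissions this user needs */
    uint64_t shared;    /* permissions this user allows others */
} Perms;

static Perms reopen_intermediate(Perms before, Perms after)
{
    Perms p;
    p.perm = before.perm | after.perm;        /* union: everything needed */
    p.shared = before.shared & after.shared;  /* intersection: what both allow */
    return p;
}
```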



Re: [Qemu-block] [Qemu-devel] [PATCH v2 for-4.0 00/13] block: byte-based blocking read/write

2018-11-15 Thread Eric Blake

On 11/15/18 3:02 AM, no-re...@patchew.org wrote:

Hi,

This series failed docker-mingw@fedora build test. Please find the testing 
commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

Message-id: 20181115020334.1189829-1-ebl...@redhat.com
Type: series
Subject: [Qemu-devel] [PATCH v2 for-4.0 00/13] block: byte-based blocking 
read/write




/tmp/qemu-test/src/block/qcow2-cluster.c: In function 
'qcow2_decompress_cluster':
/tmp/qemu-test/src/block/qcow2-cluster.c:1630:15: error: implicit declaration 
of function 'bdrv_read'; did you mean 'bdrv_pread'? 
[-Werror=implicit-function-declaration]
  ret = bdrv_read(bs->file, coffset >> 9, s->cluster_data,
^
bdrv_pread
/tmp/qemu-test/src/block/qcow2-cluster.c:1630:15: error: nested extern 
declaration of 'bdrv_read' [-Werror=nested-externs]


False positive - for some reason, patchew didn't properly follow my 
Based-on tags that this depends on Kevin's block-next branch:



Based-on: <20181114210548.1098207-1-ebl...@redhat.com>
[file-posix: Better checks of 64-bit copy_range]
Based-on: <20181101182738.70462-1-vsement...@virtuozzo.com>
[0/7 qcow2 decompress in threads] - more specifically, on Kevin's block-next 
branch



--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



Re: [Qemu-block] [PATCH] migration/block-dirty-bitmap: fix Coverity CID1390625

2018-11-15 Thread Peter Maydell
On 17 October 2018 at 10:51, Stefan Hajnoczi  wrote:
> On Tue, Oct 16, 2018 at 04:20:18PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Theoretically possible that we finish the skipping loop with bs = NULL
>> and the following code will crash trying to dereference it. Fix that.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy 
>> ---
>>  migration/block-dirty-bitmap.c | 4 
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/migration/block-dirty-bitmap.c b/migration/block-dirty-bitmap.c
>> index 477826330c..6de808f95f 100644
>> --- a/migration/block-dirty-bitmap.c
>> +++ b/migration/block-dirty-bitmap.c
>> @@ -288,6 +288,10 @@ static int init_dirty_bitmap_migration(void)
>>  bs = backing_bs(bs);
>>  }
>>
>> +if (!bs || bs->implicit) {
>> +continue;
>> +}
>> +
>>  for (bitmap = bdrv_dirty_bitmap_next(bs, NULL); bitmap;
>>   bitmap = bdrv_dirty_bitmap_next(bs, bitmap))
>>  {
>
> Previous discussion:
> http://qemu.11.n7.nabble.com/PATCH-migration-Appease-coverity-skip-empty-block-trees-td582504.html
>
> I've CCed John so he can take a look.

So have you block-layer folks figured out how you want to address
this Coverity issue yet?

thanks
-- PMM



Re: [Qemu-block] [PATCH v3 1/6] qemu-nbd: add support for authorization of TLS clients

2018-11-15 Thread Daniel P . Berrangé
On Mon, Nov 05, 2018 at 04:41:09PM -0600, Eric Blake wrote:
> On 10/9/18 8:23 AM, Daniel P. Berrangé wrote:
> > From: "Daniel P. Berrange" 
> > 
> > Currently any client which can complete the TLS handshake is able to use
> > the NBD server. The server admin can turn on the 'verify-peer' option
> > for the x509 creds to require the client to provide a x509 certificate.
> > This means the client will have to acquire a certificate from the CA
> > before they are permitted to use the NBD server. This is still a fairly
> > low bar to cross.
> > 
> > This adds a '--tls-authz OBJECT-ID' option to the qemu-nbd command which
> > takes the ID of a previously added 'QAuthZ' object instance. This will
> > be used to validate the client's x509 distinguished name. Clients
> > failing the authorization check will not be permitted to use the NBD
> > server.
> > 
> > For example to setup authorization that only allows connection from a client
> > whose x509 certificate distinguished name is
> > 
> > CN=laptop.example.com,O=Example Org,L=London,ST=London,C=GB
> > 
> > use:
> > 
> >qemu-nbd --object tls-creds-x509,id=tls0,dir=/home/berrange/qemutls,\
> >  endpoint=server,verify-peer=yes \
> > --object authz-simple,id=auth0,identity=CN=laptop.example.com,,\
> >  O=Example Org,,L=London,,ST=London,,C=GB \
> 
> Missing shell quoting around the space in 'Example Org'. It's also fairly
> obvious that actual shell commands can't have leading space between
> \-newline line continuations.
> 
> > --tls-creds tls0 \
> > --tls-authz authz0
> >other qemu-nbd args...
> > 
> > Signed-off-by: Daniel P. Berrange 
> > ---
> >   include/block/nbd.h |  2 +-
> >   nbd/server.c| 10 +-
> >   qemu-nbd.c  | 13 -
> >   qemu-nbd.texi   |  4 
> >   4 files changed, 22 insertions(+), 7 deletions(-)
> > 
> 
> > +++ b/qemu-nbd.c
> > @@ -52,6 +52,7 @@
> >   #define QEMU_NBD_OPT_TLSCREDS  261
> >   #define QEMU_NBD_OPT_IMAGE_OPTS262
> >   #define QEMU_NBD_OPT_FORK  263
> > +#define QEMU_NBD_OPT_TLSAUTHZ  264
> 
> > @@ -532,6 +534,7 @@ int main(int argc, char **argv)
> >   { "image-opts", no_argument, NULL, QEMU_NBD_OPT_IMAGE_OPTS },
> >   { "trace", required_argument, NULL, 'T' },
> >   { "fork", no_argument, NULL, QEMU_NBD_OPT_FORK },
> > +{ "tls-authz", no_argument, NULL, QEMU_NBD_OPT_TLSAUTHZ },
> >   { NULL, 0, NULL, 0 }
> >   };
> 
> Missing a change to qemu-nbd --help to describe the new option.

Yes, and it should be 'required_argument' too, not 'no_argument'.


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [Qemu-block] [PATCH v3 1/6] qemu-nbd: add support for authorization of TLS clients

2018-11-15 Thread Daniel P . Berrangé
On Mon, Nov 05, 2018 at 04:41:09PM -0600, Eric Blake wrote:
> On 10/9/18 8:23 AM, Daniel P. Berrangé wrote:
> > From: "Daniel P. Berrange" 
> > 
> > Currently any client which can complete the TLS handshake is able to use
> > the NBD server. The server admin can turn on the 'verify-peer' option
> > for the x509 creds to require the client to provide a x509 certificate.
> > This means the client will have to acquire a certificate from the CA
> > before they are permitted to use the NBD server. This is still a fairly
> > low bar to cross.
> > 
> > This adds a '--tls-authz OBJECT-ID' option to the qemu-nbd command which
> > takes the ID of a previously added 'QAuthZ' object instance. This will
> > be used to validate the client's x509 distinguished name. Clients
> > failing the authorization check will not be permitted to use the NBD
> > server.
> > 
> > For example to setup authorization that only allows connection from a client
> > whose x509 certificate distinguished name is
> > 
> > CN=laptop.example.com,O=Example Org,L=London,ST=London,C=GB
> > 
> > use:
> > 
> >qemu-nbd --object tls-creds-x509,id=tls0,dir=/home/berrange/qemutls,\
> >  endpoint=server,verify-peer=yes \
> > --object authz-simple,id=auth0,identity=CN=laptop.example.com,,\
> >  O=Example Org,,L=London,,ST=London,,C=GB \
> 
> Missing shell quoting around the space in 'Example Org'. It's also fairly
> obvious that actual shell commands can't have leading space between
> \-newline line continuations.

Yep, leading space is the tradeoff of sticking to sensible line length
while maintaining clarity.

> 
> > --tls-creds tls0 \
> > --tls-authz authz0
> >other qemu-nbd args...
> > 
> > Signed-off-by: Daniel P. Berrange 
> > ---
> >   include/block/nbd.h |  2 +-
> >   nbd/server.c| 10 +-
> >   qemu-nbd.c  | 13 -
> >   qemu-nbd.texi   |  4 
> >   4 files changed, 22 insertions(+), 7 deletions(-)
> > 
> 
> > +++ b/qemu-nbd.c
> > @@ -52,6 +52,7 @@
> >   #define QEMU_NBD_OPT_TLSCREDS  261
> >   #define QEMU_NBD_OPT_IMAGE_OPTS262
> >   #define QEMU_NBD_OPT_FORK  263
> > +#define QEMU_NBD_OPT_TLSAUTHZ  264
> 
> > @@ -532,6 +534,7 @@ int main(int argc, char **argv)
> >   { "image-opts", no_argument, NULL, QEMU_NBD_OPT_IMAGE_OPTS },
> >   { "trace", required_argument, NULL, 'T' },
> >   { "fork", no_argument, NULL, QEMU_NBD_OPT_FORK },
> > +{ "tls-authz", no_argument, NULL, QEMU_NBD_OPT_TLSAUTHZ },
> >   { NULL, 0, NULL, 0 }
> >   };
> 
> Missing a change to qemu-nbd --help to describe the new option.

Oops, yes.


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [Qemu-block] [Qemu-devel] [PATCH v2 for-4.0 00/13] block: byte-based blocking read/write

2018-11-15 Thread no-reply
Hi,

This series failed docker-mingw@fedora build test. Please find the testing 
commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

Message-id: 20181115020334.1189829-1-ebl...@redhat.com
Type: series
Subject: [Qemu-devel] [PATCH v2 for-4.0 00/13] block: byte-based blocking 
read/write

=== TEST SCRIPT BEGIN ===
#!/bin/bash
time make docker-test-mingw@fedora SHOW_ENV=1 J=8
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
2e66e20 block: Enforce non-zero bl.max_transfer
8b272d1 block: Document need for audit of read/write 64-bit cleanness
261da46 qcow2: Audit for read/write 64-bit cleanness
1ba6c76 file-posix: Audit for read/write 64-bit cleanness
00e8a09 RFC: crypto: Rely on block layer for fragmentation
be37a5a crypto: Audit for read/write 64-bit cleanness
b7af666 blklogwrites: Audit for read/write 64-bit cleanness
222d7a4 blkdebug: Audit for read/write 64-bit cleanness
21175b1 block: Switch to 64-bit bl.max_transfer
b472b9d block: Removed unused sector-based blocking I/O
dfb2b66 vvfat: Switch to byte-based calls
e47df95 vdi: Switch to byte-based calls
d7ab2c2 qcow2: Prefer byte-based calls into bs->file

=== OUTPUT BEGIN ===
  BUILD   fedora
make[1]: Entering directory `/var/tmp/patchew-tester-tmp-hw0lvnzz/src'
  GEN 
/var/tmp/patchew-tester-tmp-hw0lvnzz/src/docker-src.2018-11-15-04.01.25.3134/qemu.tar
Cloning into 
'/var/tmp/patchew-tester-tmp-hw0lvnzz/src/docker-src.2018-11-15-04.01.25.3134/qemu.tar.vroot'...
done.
Checking out files: 100% (6455/6455), done.
Submodule 'dtc' (https://git.qemu.org/git/dtc.git) registered for path 'dtc'
Cloning into 'dtc'...
Submodule path 'dtc': checked out '88f18909db731a627456f26d779445f84e449536'
Submodule 'ui/keycodemapdb' (https://git.qemu.org/git/keycodemapdb.git) 
registered for path 'ui/keycodemapdb'
Cloning into 'ui/keycodemapdb'...
Submodule path 'ui/keycodemapdb': checked out 
'6b3d716e2b6472eb7189d3220552280ef3d832ce'
  COPYRUNNER
RUN test-mingw in qemu:fedora 
Packages installed:
SDL2-devel-2.0.9-1.fc28.x86_64
bc-1.07.1-5.fc28.x86_64
bison-3.0.4-9.fc28.x86_64
bluez-libs-devel-5.50-1.fc28.x86_64
brlapi-devel-0.6.7-19.fc28.x86_64
bzip2-1.0.6-26.fc28.x86_64
bzip2-devel-1.0.6-26.fc28.x86_64
ccache-3.4.2-2.fc28.x86_64
clang-6.0.1-2.fc28.x86_64
device-mapper-multipath-devel-0.7.4-3.git07e7bd5.fc28.x86_64
findutils-4.6.0-19.fc28.x86_64
flex-2.6.1-7.fc28.x86_64
gcc-8.2.1-5.fc28.x86_64
gcc-c++-8.2.1-5.fc28.x86_64
gettext-0.19.8.1-14.fc28.x86_64
git-2.17.2-1.fc28.x86_64
glib2-devel-2.56.3-2.fc28.x86_64
glusterfs-api-devel-4.1.5-1.fc28.x86_64
gnutls-devel-3.6.4-1.fc28.x86_64
gtk3-devel-3.22.30-1.fc28.x86_64
hostname-3.20-3.fc28.x86_64
libaio-devel-0.3.110-11.fc28.x86_64
libasan-8.2.1-5.fc28.x86

Re: [Qemu-block] [Qemu-devel] [PATCH v2 for-4.0 00/13] block: byte-based blocking read/write

2018-11-15 Thread no-reply
Hi,

This series failed the docker-quick@centos7 build test. Please find the testing
commands and their output below. If you have Docker installed, you can probably
reproduce it locally.

Message-id: 20181115020334.1189829-1-ebl...@redhat.com
Type: series
Subject: [Qemu-devel] [PATCH v2 for-4.0 00/13] block: byte-based blocking read/write

=== TEST SCRIPT BEGIN ===
#!/bin/bash
time make docker-test-quick@centos7 SHOW_ENV=1 J=8
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
2e66e20 block: Enforce non-zero bl.max_transfer
8b272d1 block: Document need for audit of read/write 64-bit cleanness
261da46 qcow2: Audit for read/write 64-bit cleanness
1ba6c76 file-posix: Audit for read/write 64-bit cleanness
00e8a09 RFC: crypto: Rely on block layer for fragmentation
be37a5a crypto: Audit for read/write 64-bit cleanness
b7af666 blklogwrites: Audit for read/write 64-bit cleanness
222d7a4 blkdebug: Audit for read/write 64-bit cleanness
21175b1 block: Switch to 64-bit bl.max_transfer
b472b9d block: Removed unused sector-based blocking I/O
dfb2b66 vvfat: Switch to byte-based calls
e47df95 vdi: Switch to byte-based calls
d7ab2c2 qcow2: Prefer byte-based calls into bs->file

=== OUTPUT BEGIN ===
  BUILD   centos7
make[1]: Entering directory `/var/tmp/patchew-tester-tmp-9bah4vqa/src'
  GEN 
/var/tmp/patchew-tester-tmp-9bah4vqa/src/docker-src.2018-11-15-04.03.19.7513/qemu.tar
Cloning into 
'/var/tmp/patchew-tester-tmp-9bah4vqa/src/docker-src.2018-11-15-04.03.19.7513/qemu.tar.vroot'...
done.
Checking out files: 100% (6455/6455), done.
Submodule 'dtc' (https://git.qemu.org/git/dtc.git) registered for path 'dtc'
Cloning into 'dtc'...
Submodule path 'dtc': checked out '88f18909db731a627456f26d779445f84e449536'
Submodule 'ui/keycodemapdb' (https://git.qemu.org/git/keycodemapdb.git) 
registered for path 'ui/keycodemapdb'
Cloning into 'ui/keycodemapdb'...
Submodule path 'ui/keycodemapdb': checked out 
'6b3d716e2b6472eb7189d3220552280ef3d832ce'
  COPYRUNNER
RUN test-quick in qemu:centos7 
Packages installed:
SDL-devel-1.2.15-14.el7.x86_64
bison-3.0.4-1.el7.x86_64
bzip2-1.0.6-13.el7.x86_64
bzip2-devel-1.0.6-13.el7.x86_64
ccache-3.3.4-1.el7.x86_64
csnappy-devel-0-6.20150729gitd7bc683.el7.x86_64
flex-2.5.37-3.el7.x86_64
gcc-4.8.5-28.el7_5.1.x86_64
gettext-0.19.8.1-2.el7.x86_64
git-1.8.3.1-14.el7_5.x86_64
glib2-devel-2.54.2-2.el7.x86_64
libaio-devel-0.3.109-13.el7.x86_64
libepoxy-devel-1.3.1-2.el7_5.x86_64
libfdt-devel-1.4.6-1.el7.x86_64
lzo-devel-2.06-8.el7.x86_64
make-3.82-23.el7.x86_64
mesa-libEGL-devel-17.2.3-8.20171019.el7.x86_64
mesa-libgbm-devel-17.2.3-8.20171019.el7.x86_64
nettle-devel-2.7.1-8.el7.x86_64
package g++ is not installed
package librdmacm-devel is not installed
pixman-devel-0.34.0-1.el7.x86_6