Re: [PATCH-for-5.1? v2 1/2] qemu/osdep: Make QEMU_VMALLOC_ALIGN unsigned long

2020-07-30 Thread David Gibson
On Thu, Jul 30, 2020 at 04:12:44PM +0200, Philippe Mathieu-Daudé wrote:
> QEMU_VMALLOC_ALIGN is sometimes expanded to signed type,
> other times to unsigned. Unify using unsigned.
> 
> Signed-off-by: Philippe Mathieu-Daudé 

Reviewed-by: David Gibson 

> ---
>  include/qemu/osdep.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
> index 20872e793e..085df8d508 100644
> --- a/include/qemu/osdep.h
> +++ b/include/qemu/osdep.h
> @@ -454,10 +454,10 @@ void qemu_anon_ram_free(void *ptr, size_t size);
> /* Use 2 MiB alignment so transparent hugepages can be used by KVM.
>Valgrind does not support alignments larger than 1 MiB,
>therefore we need special code which handles running on Valgrind. */
> -#  define QEMU_VMALLOC_ALIGN (512 * 4096)
> +#  define QEMU_VMALLOC_ALIGN (512 * 4096UL)
>  #elif defined(__linux__) && defined(__s390x__)
> /* Use 1 MiB (segment size) alignment so gmap can be used by KVM. */
> -#  define QEMU_VMALLOC_ALIGN (256 * 4096)
> +#  define QEMU_VMALLOC_ALIGN (256 * 4096UL)
>  #elif defined(__linux__) && defined(__sparc__)
>  #include <sys/shm.h>
>  #  define QEMU_VMALLOC_ALIGN MAX(qemu_real_host_page_size, SHMLBA)
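For reference (an editor's sketch, not part of the patch), the two constants expand to exactly the 2 MiB and 1 MiB alignments the comments promise:

```python
# Sanity-checking the macro arithmetic against the comments above:
# ppc wants 2 MiB (transparent hugepage) alignment, s390x wants the
# 1 MiB segment size used by gmap.
PPC_VMALLOC_ALIGN = 512 * 4096    # 2 MiB
S390X_VMALLOC_ALIGN = 256 * 4096  # 1 MiB

assert PPC_VMALLOC_ALIGN == 2 * 1024 * 1024
assert S390X_VMALLOC_ALIGN == 1024 * 1024
```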

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson




Re: [PATCH-for-5.1? v2 2/2] util/pagesize: Make qemu_real_host_page_size of type size_t

2020-07-30 Thread David Gibson
On Thu, Jul 30, 2020 at 04:12:45PM +0200, Philippe Mathieu-Daudé wrote:
> We use different types to hold 'qemu_real_host_page_size'.
> Unify picking 'size_t' which seems the best candidate.
> 
> Doing so fixes a format string issue in hw/virtio/virtio-mem.c
> reported when building with GCC 4.9.4:
> 
>   hw/virtio/virtio-mem.c: In function ‘virtio_mem_set_block_size’:
>   hw/virtio/virtio-mem.c:756:9: error: format ‘%x’ expects argument of type ‘unsigned int’, but argument 7 has type ‘uintptr_t’ [-Werror=format=]
>  error_setg(errp, "'%s' property has to be at least 0x%" PRIx32, name,
>  ^
> 
> Fixes: 910b25766b ("virtio-mem: Paravirtualized memory hot(un)plug")
> Reported-by: Bruce Rogers 
> Signed-off-by: Philippe Mathieu-Daudé 

Reviewed-by: David Gibson 

and ppc parts
Acked-by: David Gibson 

> ---
>  include/exec/ram_addr.h  | 4 ++--
>  include/qemu/osdep.h | 2 +-
>  accel/kvm/kvm-all.c  | 3 ++-
>  block/qcow2-cache.c  | 2 +-
>  exec.c   | 8 
>  hw/ppc/spapr_pci.c   | 2 +-
>  hw/virtio/virtio-mem.c   | 2 +-
>  migration/migration.c| 2 +-
>  migration/postcopy-ram.c | 2 +-
>  monitor/misc.c   | 2 +-
>  util/pagesize.c  | 2 +-
>  11 files changed, 16 insertions(+), 15 deletions(-)
> 
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index 3ef729a23c..e07532266e 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -93,8 +93,8 @@ static inline unsigned long int ramblock_recv_bitmap_offset(void *host_addr,
>  
>  bool ramblock_is_pmem(RAMBlock *rb);
>  
> -long qemu_minrampagesize(void);
> -long qemu_maxrampagesize(void);
> +size_t qemu_minrampagesize(void);
> +size_t qemu_maxrampagesize(void);
>  
>  /**
>   * qemu_ram_alloc_from_file,
> diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
> index 085df8d508..77115a8270 100644
> --- a/include/qemu/osdep.h
> +++ b/include/qemu/osdep.h
> @@ -635,10 +635,10 @@ char *qemu_get_pid_name(pid_t pid);
>   */
>  pid_t qemu_fork(Error **errp);
>  
> +extern size_t qemu_real_host_page_size;
>  /* Using intptr_t ensures that qemu_*_page_mask is sign-extended even
>   * when intptr_t is 32-bit and we are aligning a long long.
>   */
> -extern uintptr_t qemu_real_host_page_size;
>  extern intptr_t qemu_real_host_page_mask;
>  
>  extern int qemu_icache_linesize;
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index 63ef6af9a1..59becfbd6c 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -674,7 +674,8 @@ static int kvm_log_clear_one_slot(KVMSlot *mem, int as_id, uint64_t start,
>  KVMState *s = kvm_state;
>  uint64_t end, bmap_start, start_delta, bmap_npages;
>  struct kvm_clear_dirty_log d;
> -unsigned long *bmap_clear = NULL, psize = qemu_real_host_page_size;
> +unsigned long *bmap_clear = NULL;
> +size_t psize = qemu_real_host_page_size;
>  int ret;
>  
>  /*
> diff --git a/block/qcow2-cache.c b/block/qcow2-cache.c
> index 7444b9c4ab..4ad9f5929f 100644
> --- a/block/qcow2-cache.c
> +++ b/block/qcow2-cache.c
> @@ -74,7 +74,7 @@ static void qcow2_cache_table_release(Qcow2Cache *c, int i, int num_tables)
>  /* Using MADV_DONTNEED to discard memory is a Linux-specific feature */
>  #ifdef CONFIG_LINUX
>  void *t = qcow2_cache_get_table_addr(c, i);
> -int align = qemu_real_host_page_size;
> +size_t align = qemu_real_host_page_size;
>  size_t mem_size = (size_t) c->table_size * num_tables;
>  size_t offset = QEMU_ALIGN_UP((uintptr_t) t, align) - (uintptr_t) t;
>  size_t length = QEMU_ALIGN_DOWN(mem_size - offset, align);
> diff --git a/exec.c b/exec.c
> index 6f381f98e2..4b6d52e01f 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -1657,7 +1657,7 @@ static int find_max_backend_pagesize(Object *obj, void *opaque)
>   * TODO: We assume right now that all mapped host memory backends are
>   * used as RAM, however some might be used for different purposes.
>   */
> -long qemu_minrampagesize(void)
> +size_t qemu_minrampagesize(void)
>  {
>  long hpsize = LONG_MAX;
>  Object *memdev_root = object_resolve_path("/objects", NULL);
> @@ -1666,7 +1666,7 @@ long qemu_minrampagesize(void)
>  return hpsize;
>  }
>  
> -long qemu_maxrampagesize(void)
> +size_t qemu_maxrampagesize(void)
>  {
>  long pagesize = 0;
>  Object *memdev_root = object_resolve_path("/objects", NULL);
> @@ -1675,11 +1675,11 @@ long qemu_maxrampagesize(void)
>  return pagesize;
>  }
>  #else
> -long qemu_minrampagesize(void)
> +size_t qemu_minrampagesize(void)
>  {
>  return qemu_real_host_page_size;
>  }
> -long qemu_maxrampagesize(void)
> +size_t qemu_maxrampagesize(void)
>  {
>  return qemu_real_host_page_size;
>  }
> diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
> index 363cdb3f7b..a9da84fe30 100644
> --- a/hw/ppc/spapr_pci.c
> +++ b/hw/ppc/spapr_pci.c
> @@ -1810,7 +1810,7 @@ static void spapr_phb_realize(DeviceState *dev, Error **errp)
>  char *namebuf;
>

Re: [PATCH] schemas: Add vim modeline

2020-07-30 Thread John Snow

On 7/30/20 11:11 AM, Eric Blake wrote:
JSON5 would also let us get rid of some quotes, if that is considered a 
desirable goal of the representation (although I'm not sure that quote 
avoidance should be driving our decision, so much as automated conversion).


There's no JSON5 parser built in to Python.

OK, there's no built-in YAML parser either, but I am not keen on 
fighting the built-in json module.





Re: [PATCH] schemas: Add vim modeline

2020-07-30 Thread John Snow

On 7/30/20 11:11 AM, Eric Blake wrote:


Agreed on that front.


Thirded:

Reviewed-by: John Snow 




Re: [PATCH v12 01/11] iotests: add test for QCOW2 header dump

2020-07-30 Thread Eric Blake

On 7/30/20 9:15 AM, Andrey Shinkevich wrote:

The simple script creates a QCOW2 image and fills it with some data.
Two bitmaps are created as well. Then the script reads the image header
with extensions from the disk by running the script qcow2.py and dumps
the information to the output. Other entities, such as snapshots, may
be added to the test later.

Suggested-by: Eric Blake 
Signed-off-by: Andrey Shinkevich 
---
  tests/qemu-iotests/303 | 59 ++
  tests/qemu-iotests/303.out | 64 ++
  tests/qemu-iotests/group   |  1 +
  3 files changed, 124 insertions(+)
  create mode 100755 tests/qemu-iotests/303
  create mode 100644 tests/qemu-iotests/303.out



+import iotests
+import subprocess
+from iotests import qemu_img_create, qemu_io, file_path, log, filter_qemu_io
+
+iotests.script_initialize(supported_fmts=['qcow2'])
+
+disk = file_path('disk')
+chunk = 1024 * 1024
+
+
+def create_bitmap(bitmap_number, disabled):
+granularity = 1 << (14 + bitmap_number)
+bitmap_name = 'bitmap-' + str(bitmap_number)
+vm = iotests.VM().add_drive(disk)
+vm.launch()
+vm.qmp_log('block-dirty-bitmap-add', node='drive0', name=bitmap_name,
+   granularity=granularity, persistent=True, disabled=disabled)
+vm.shutdown()


Would it be any easier to use qemu-img bitmap here instead of firing up 
a full VM?


At any rate, this is a nice starting point for tracking what the rest of 
your series improves.


Reviewed-by: Eric Blake 

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3226
Virtualization:  qemu.org | libvirt.org
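As an aside, the granularity formula in the quoted `create_bitmap()` helper produces exactly the values that show up in the test's expected output:

```python
# The granularity used by test 303's create_bitmap() helper:
# granularity = 1 << (14 + bitmap_number)
def granularity(bitmap_number):
    return 1 << (14 + bitmap_number)

assert granularity(1) == 32768   # bitmap-1, as logged in 303.out
assert granularity(2) == 65536   # bitmap-2
```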




Re: dma_blk helpers and infinite dma_memory_map retries

2020-07-30 Thread John Snow

On 7/29/20 7:43 AM, Paolo Bonzini wrote:

This is the question: what is special about this address_space_map, that
makes it never succeed?  The usual case where address_space_map fails is
simply two or more concurrent users of the bounce buffer, and these
solve by themselves when address_space_unmap calls
cpu_notify_map_clients.  Do we never call address_space_unmap?


TLDR: the dma-helpers handle alignment ... poorly, and it's not clear 
what the right way to handle this might be for a few reasons.



Read on for my usual thinking-out-loud analysis tracing this problem, 
though I do have a question for Kevin/Paolo at the end. (See the 
numbered list 1-4 at the bottom.)



First, the (partially bogus, fuzzer-generated) IDE command wants to:

1. dma write 259 sectors starting at sector 1

2. Provides a PRDT at addr 0x00 whose first PRD entry describes a data 
buffer at 0x of length 0x1. [a]


3. The remaining 8,191 PRD entries are uninitialized memory that all 
wind up describing the same data buffer at 0x00 of length 0x1.


Generally, the entire PRDT is going to be read, but truncated into an 
SGList that's exactly as long as the IDE command. Here, that's 0x20600 
bytes.


Yadda, yadda, yadda, that winds up turning into these map requests:

addr 0x; len 0x1
  -- mapped length: 0x01 (normal map return)

addr 0x1; len 0x
  -- mapped length: 0x1000 (bounce buffer return)

addr 0x11000; len 0xefff
  -- bounce buffer is busy, cannot map

Then it proceeds and calls the iofunc. We return to dma_blk_cb and then:

unmap 0x; len 0x01; access_len 0x01;

... That unmaps the "direct" one, but we seemingly fail to unmap the 
indirect one.


Uh, cool. When we build the IOV, we build it with two entries; but 
qemu_iovec_discard_back discards the second entry entirely without 
unmapping it.


IDE asks for an alignment of BDRV_SECTOR_SIZE (512 bytes). The IDE state 
machine transfers an entire sector or nothing at all. The total IOV size 
we have built thus far is 0x1001 bytes, which is not aligned as you 
might have noticed.


So, we try to align it:

qemu_iovec_discard_back(&dbs->iov, QEMU_ALIGN_DOWN(4097, 512))

... I think we probably wanted to ask to shave off one byte instead of 
asking to shave off 4096 bytes.
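To make the mismatch concrete, here is a hypothetical Python sketch of the two interpretations (names invented for illustration; `qemu_iovec_discard_back()` takes the number of bytes to discard, not the aligned size to keep):

```python
SECTOR_SIZE = 512
iov_len = 0x1001  # 4097 bytes mapped so far

# Buggy call from dma_blk_cb: passes QEMU_ALIGN_DOWN(4097, 512),
# i.e. the aligned *target* size, as the number of bytes to discard.
buggy_discard = (iov_len // SECTOR_SIZE) * SECTOR_SIZE

# What was presumably intended: shave off only the unaligned tail.
correct_discard = iov_len % SECTOR_SIZE

assert buggy_discard == 4096     # throws away an entire mapped vector
assert correct_discard == 1      # shaves just the odd byte
assert (iov_len - correct_discard) % SECTOR_SIZE == 0
```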



So, a few perceived problems with dma_blk_cb:

1. Our alignment math is wrong. discard_back takes as an argument the 
number of bytes to discard, not the number of bytes you want to have 
afterwards.


2. qemu_iovec_discard_back will happily unwind entire IO vectors that we 
would need to unmap and have now lost. Worse, whenever we do any kind of 
truncating at all, those bytes are not re-added to the source SGList, so 
subsequent transfers will have skipped some bytes in the guest SGList.


3. the dma_blk_io interfaces don't ever check to see if the sg list is 
an even multiple of the alignment. They don't return a synchronous error 
and no callers check for an error case. (Though BMDMA does carefully 
prepare the SGList such that it is aligned in this way. AHCI does too, 
IIRC.) This means we might have an unaligned tail that we will just drop 
or ignore, leading to another partial DMA.


4. There's no guarantee that any given individual IO vector will have an 
entire sector's worth of data in it. It is theoretically valid to 
describe a series of vectors of two bytes each. If we can only map 1-2 
vectors at a time, depending, we're never going to be able to scrounge 
up enough buffer real estate to transfer an entire sector.



Hm, Ow.


Before I go on some quest to ascend to a higher plane of being, I wanted 
to solicit feedback on the problem as described thus far and see if 
anyone has a suggestion for a fix that's not too involved.


--js


[a] This is against the BMDMA spec. The address must be aligned to 0x02 
and cannot cross a 64K boundary. bit0 is documented as always being 
zero, but it's not clear what should happen if the boundary constraint 
is violated. Absent other concerns, it might just be easiest to fulfill 
the transfer if it's possible.
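The two BMDMA constraints mentioned in the footnote can be sketched as predicates (helper names invented for illustration, not QEMU code):

```python
# BMDMA PRD constraints from the footnote: the buffer address must be
# 2-byte aligned (bit 0 always zero), and the described buffer must
# not cross a 64 KiB boundary.
def prd_addr_valid(addr):
    return (addr & 1) == 0

def prd_crosses_64k(addr, length):
    return (addr >> 16) != ((addr + length - 1) >> 16)

assert not prd_addr_valid(0xFFFF)        # odd address violates alignment
assert prd_crosses_64k(0xFFFF, 0x10000)  # spills into the next 64 KiB
assert not prd_crosses_64k(0x10000, 0x10000)  # exactly one 64 KiB window
```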





Re: Disk cache defaults

2020-07-30 Thread Markus Armbruster
Stefan Hajnoczi  writes:

> On Wed, Jul 22, 2020 at 07:14:59AM +0100, anthony smith wrote:
>> Appreciate any assistance even if the answer is just "Not possible" so I
>> can at least drop this search for answers.
>
> You can create an /etc/qemu/qemu.conf file but I don't remember if there
> is syntax to set -drive cache=none.
>
> CCing Kevin Wolf and Markus Armbruster for ideas.

I'm not aware of a way to modify defaults (except for device properties
with -global, which doesn't help you, and is usually a bad idea anyway).

[...]




Re: [PATCH] schemas: Add vim modeline

2020-07-30 Thread Eric Blake

On 7/30/20 6:51 AM, Markus Armbruster wrote:


Given that we already have emacs mode-lines, I see no reason to
not also have vim mode-lines, so regardless of the deeper discussion
I think this patch is fine to merge in the short term.

   Reviewed-by: Daniel P. Berrangé 


Agreed on that front.





Naming QAPI schema files .json even though their contents isn't JSON was a
mistake.  Correcting it would be a pain.  If we correct it, then the
sooner the better.

Renaming them to .py gives decent editor support out of the box.  Their
contents isn't quite Python, though: true vs. True, false vs. False.  Do
we care?  Only a few dozen occurrences; they could be adjusted.

Renaming them to .qapi would perhaps be less confusing, for the price of
"out of the box".


I've tried that patch in the past, but it went nowhere at the time.
https://lists.gnu.org/archive/html/qemu-devel/2018-04/msg03173.html

Rebasing it to the present would be a complete rewrite, but I'm still 
willing to do it if we think it will go somewhere this time.





IMHO, the critical rule is that if you a pick a particular file extension
associated with an existing language, you absolutely MUST BE compliant
with that language.


Can't argue with that :)


Also in violent agreement.




We fail at compliance with both JSON and Python because we're actually
using our own DSL (domain specific language).

IOW if we're going to stick with our current file format, then we should
be naming them .qapi. We can still use an editor mode line if we want to
claim we're approximately equiv to another language, but we can't be
surprised if editors get upset.


The bigger question is whether having our own DSL is justified?

I'm *really* sceptical that it is.


To be precise: our own DSL *syntax*.  Semantics is a separate question
to be skeptical about.


We can't use JSON because it lacks comments. So we invented our own
pseudo-JSON parser that supported comments, and used ' instead of "
for some reason. We also needed to be able to parse a sequence of
multiple JSON documents in one file. We should have just picked a
different language because JSON clearly didn't do what we needed.


JSON is an exceptionally poor choice for a DSL, or even a configuration
language.

Correcting our mistake involves a flag day and a re-learn.  We need to
weigh costs against benefits.


How much of that can be automated with tooling?  Yes, a re-learn is 
needed, but if a tool can convert between the two syntaxes with minimal 
pain, the re-learn (in both directions: rebasing patches written 
pre-change for merge after the change lands upstream, and backporting 
patches post-change to trees that are pre-change) is not as painful.




The QAPI schema language has two layers:

* JSON, with a lexical and a syntactical sub-layer (both in parser.py)

* QAPI, with a context-free and a context-dependent sub-layer (in
   expr.py and schema.py, respectively)

Replacing the JSON layer is possible as long as the replacement is
sufficiently expressive (not a tall order).


I'm open to the idea, if we want to attempt it, and agree with the 
assessment that it is not a tall order.  In fact, if we were to go with 
the JSON5 language instead of JSON, we are already that much closer to 
having sufficiently expressive (although JSON5 does not seem to be as 
popular in terms of pre-written libraries).





You suggest naming them .py. If we do that, we must declare that they
are literally Python code


Agree.


I'm not necessarily a fan of .py for QAPI files; it makes me think of 
executable python code, even if we chose it merely for something that 
python could parse natively instead of rolling our own parser.





   modify them so that we can load the
files straight into the python intepretor as code, and not parse
them as data. I feel unhappy about treating data as code though.


Stress on *can* load.  Doesn't mean we should.

Ancient prior art: Lisp programs routinely use s-expressions as
configuration file syntax.  They don't load them as code, they read them
as data.

With Python, it's ast.parse(), I think.


While JSON doesn't do what we need, its second-cousin YAML is a more
flexible format. Taking one example




StrictYAML insists on quotes.

I hate having to quote identifiers.  There's a reason we don't write

 'int'
 'main'('int', 'argc', 'char' *'argv'[])
 {
 'printf'("hello world\n");
 return 0;
 }



JSON5 would also let us get rid of some quotes, if that is considered a 
desirable goal of the representation (although I'm not sure that quote 
avoidance should be driving our decision, so much as automated conversion).






Re: [PATCH 1/2] qcow2: Release read-only bitmaps when inactivated

2020-07-30 Thread Eric Blake

On 7/30/20 7:02 AM, Max Reitz wrote:

During migration, we release all bitmaps after storing them on disk, as
long as they are (1) stored on disk, (2) not read-only, and (3)
consistent.

(2) seems arbitrary, though.  The reason we do not release them is
because we do not write them, as there is no need to; and then we just
forget about all bitmaps that we have not written to the file.  However,
read-only persistent bitmaps are still in the file and in sync with
their in-memory representation, so we may as well release them just like
any R/W bitmap that we have updated.

It leads to actual problems, too: After migration, letting the source
continue may result in an error if there were any bitmaps on read-only
nodes (such as backing images), because those have not been released by
bdrv_inactive_all(), but bdrv_invalidate_cache_all() attempts to reload
them (which fails, because they are still present in memory).


I think our alternatives are ensuring no bitmaps are in memory so that 
reloading the RO bitmap from the file succeeds (which then hits the 
earlier question about whether releasing ALL bitmaps affects libvirt's 
ability to query which bitmaps were on the source, but makes reloading 
on the destination easy), or teaching the reload to special-case a RO 
bitmap from the disk that is already in memory (either to make the 
reload a graceful no-op instead of an error that it was already loaded, 
or to go one step further and validate whether the contents in memory 
match the contents reloaded from disk).  If I understand your patch, you 
went with the first of these alternatives.  And since Peter was able to 
test that it fixed the libvirt scenario, I'm okay with the approach you 
took, although I would love a second opinion from Virtuozzo folks.




Signed-off-by: Max Reitz 
---
  block/qcow2-bitmap.c | 23 +++
  1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/block/qcow2-bitmap.c b/block/qcow2-bitmap.c
index 1f38806ca6..8c34b2aef7 100644
--- a/block/qcow2-bitmap.c



@@ -1641,6 +1654,7 @@ void qcow2_store_persistent_dirty_bitmaps(BlockDriverState *bs,
  g_free(tb);
  }
  
+success:

  if (release_stored) {
  QSIMPLEQ_FOREACH(bm, bm_list, entry) {
  if (bm->dirty_bitmap == NULL) {
@@ -1651,13 +1665,14 @@ void qcow2_store_persistent_dirty_bitmaps(BlockDriverState *bs,
  }
  }
  
-success:


Moving the label was an interesting change; I had to look at the file in 
context to see the real effect: basically, you now reach the line:


bdrv_release_dirty_bitmap(bm->dirty_bitmap);

for the set of persistent RO bitmaps that were previously ignored.

Reviewed-by: Eric Blake 





Re: [PATCH 2/2] iotests/169: Test source cont with backing bmap

2020-07-30 Thread Eric Blake

On 7/30/20 7:02 AM, Max Reitz wrote:

Test migrating from a VM with a persistent bitmap in the backing chain,
and then continuing that VM after the migration.


Indeed, the more our iotest is like what libvirt actually does, the 
better we protect ourselves from future bugs in this area.




Signed-off-by: Max Reitz 
---
  tests/qemu-iotests/169 | 64 +-
  tests/qemu-iotests/169.out |  4 +--
  2 files changed, 65 insertions(+), 3 deletions(-)




+class TestDirtyBitmapBackingMigration(iotests.QMPTestCase):
+def setUp(self):
+qemu_img_create('-f', iotests.imgfmt, base_a, size)
+qemu_img_create('-f', iotests.imgfmt, '-F', iotests.imgfmt,
+'-b', base_a, disk_a, size)
+
+for f in (disk_a, base_a):
+qemu_img('bitmap', '--add', f, 'bmap0')


I'm so glad to see 'qemu-img bitmap' getting some use :)


+
+blockdev = {
+'node-name': 'node0',
+'driver': iotests.imgfmt,
+'file': {
+'driver': 'file',
+'filename': disk_a
+},
+'backing': {
+'node-name': 'node0-base',
+'driver': iotests.imgfmt,
+'file': {
+'driver': 'file',
+'filename': base_a
+}
+}
+}
+
+self.vm = iotests.VM()
+self.vm.launch()
+
+result = self.vm.qmp('blockdev-add', **blockdev)
+self.assert_qmp(result, 'return', {})
+
+# Check that the bitmaps are there
+for node in self.vm.qmp('query-named-block-nodes', flat=True)['return']:
+if 'node0' in node['node-name']:


This caught me on the first read, before I realized it was a clever way 
to test both 'node0' and 'node0-base'.



+self.assert_qmp(node, 'dirty-bitmaps[0]/name', 'bmap0')
+
+caps = [{'capability': 'events', 'state': True}]
+result = self.vm.qmp('migrate-set-capabilities', capabilities=caps)
+self.assert_qmp(result, 'return', {})
+
+def tearDown(self):
+self.vm.shutdown()
+for f in (disk_a, base_a):
+os.remove(f)
+
+def test_cont_on_source(self):
+"""
+Continue the source after migration.
+"""
+result = self.vm.qmp('migrate', uri=f'exec: cat > /dev/null')
+self.assert_qmp(result, 'return', {})
+
+with Timeout(10, 'Migration timeout'):
+self.vm.wait_migration('postmigrate')
+
+result = self.vm.qmp('cont')
+self.assert_qmp(result, 'return', {})
+


Reviewed-by: Eric Blake 





Re: [PATCH-for-5.1? v2 0/2] util/pagesize: Make qemu_real_host_page_size of type size_t

2020-07-30 Thread Philippe Mathieu-Daudé
On 7/30/20 4:22 PM, Dr. David Alan Gilbert wrote:
> * Philippe Mathieu-Daudé (phi...@redhat.com) wrote:
>> Since v1:
>> Make QEMU_VMALLOC_ALIGN unsigned in a previous patch
> 
> Nah, not for 5.1 - it feels like the type of thing that might on a
> really bad day create a really subtle bug.

Ack :)




[PATCH v12 05/11] qcow2_format.py: Dump bitmap directory information

2020-07-30 Thread Andrey Shinkevich
Read and dump entries from the bitmap directory of a QCOW2 image.

Header extension:
magic 0x23852875 (Bitmaps)
...
Bitmap name   bitmap-1
bitmap_table_offset   0xf
bitmap_table_size 1
flags 0x2 (['auto'])
type  1
granularity_bits  16
name_size 8
extra_data_size   0

Suggested-by: Kevin Wolf 
Signed-off-by: Andrey Shinkevich 
Reviewed-by: Vladimir Sementsov-Ogievskiy 
---
 tests/qemu-iotests/303.out | 18 +++
 tests/qemu-iotests/qcow2_format.py | 47 ++
 2 files changed, 65 insertions(+)

diff --git a/tests/qemu-iotests/303.out b/tests/qemu-iotests/303.out
index 8b11169..dc3739b 100644
--- a/tests/qemu-iotests/303.out
+++ b/tests/qemu-iotests/303.out
@@ -62,3 +62,21 @@ reserved32                0
 bitmap_directory_size 0x40
 bitmap_directory_offset   0x9d
 
+Bitmap name   bitmap-1
+bitmap_table_offset   0x9b
+bitmap_table_size 1
+flags 0x2 (['auto'])
+type  1
+granularity_bits  15
+name_size 8
+extra_data_size   0
+
+Bitmap name   bitmap-2
+bitmap_table_offset   0x9c
+bitmap_table_size 1
+flags 0x0 ([])
+type  1
+granularity_bits  16
+name_size 8
+extra_data_size   0
+
diff --git a/tests/qemu-iotests/qcow2_format.py b/tests/qemu-iotests/qcow2_format.py
index b447344..05a8aa9 100644
--- a/tests/qemu-iotests/qcow2_format.py
+++ b/tests/qemu-iotests/qcow2_format.py
@@ -134,6 +134,53 @@ class Qcow2BitmapExt(Qcow2Struct):
 tail = struct.calcsize(self.fmt) % 8
 if tail:
 fd.seek(8 - tail, 1)
+position = fd.tell()
+self.read_bitmap_directory(fd)
+fd.seek(position)
+
+def read_bitmap_directory(self, fd):
+fd.seek(self.bitmap_directory_offset)
+self.bitmap_directory = \
+[Qcow2BitmapDirEntry(fd) for _ in range(self.nb_bitmaps)]
+
+def dump(self):
+super().dump()
+for entry in self.bitmap_directory:
+print()
+entry.dump()
+
+
+class Qcow2BitmapDirEntry(Qcow2Struct):
+
+fields = (
+('u64', '{:#x}', 'bitmap_table_offset'),
+('u32', '{}', 'bitmap_table_size'),
+('u32', BitmapFlags, 'flags'),
+('u8',  '{}', 'type'),
+('u8',  '{}', 'granularity_bits'),
+('u16', '{}', 'name_size'),
+('u32', '{}', 'extra_data_size')
+)
+
+def __init__(self, fd):
+super().__init__(fd=fd)
+# Seek relative to the current position in the file
+fd.seek(self.extra_data_size, 1)
+bitmap_name = fd.read(self.name_size)
+self.name = bitmap_name.decode('ascii')
+# Move position to the end of the entry in the directory
+entry_raw_size = self.bitmap_dir_entry_raw_size()
+padding = ((entry_raw_size + 7) & ~7) - entry_raw_size
+fd.seek(padding, 1)
+
+def bitmap_dir_entry_raw_size(self):
+return struct.calcsize(self.fmt) + self.name_size + \
+self.extra_data_size
+
+def dump(self):
+print(f'{"Bitmap name":<25} {self.name}')
+super(Qcow2BitmapDirEntry, self).dump()
+
 
 QCOW2_EXT_MAGIC_BITMAPS = 0x23852875
 
-- 
1.8.3.1
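The padding computation in `Qcow2BitmapDirEntry.__init__()` rounds the raw entry size up to the next 8-byte boundary; extracted as a standalone sketch:

```python
# The 8-byte alignment padding from Qcow2BitmapDirEntry.__init__:
# directory entries are padded up to the next multiple of 8.
def dir_entry_padding(entry_raw_size):
    return ((entry_raw_size + 7) & ~7) - entry_raw_size

assert dir_entry_padding(24) == 0   # already 8-byte aligned
assert dir_entry_padding(25) == 7   # pad up to the next boundary
assert all((n + dir_entry_padding(n)) % 8 == 0 for n in range(1, 64))
```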




[PATCH v12 02/11] qcow2_format.py: make printable data an extension class member

2020-07-30 Thread Andrey Shinkevich
Let us distinguish the binary data type from the string one for the extension
data variable, and keep the string as a QcowHeaderExtension class member.

Signed-off-by: Andrey Shinkevich 
Reviewed-by: Eric Blake 
Reviewed-by: Vladimir Sementsov-Ogievskiy 
---
 tests/qemu-iotests/qcow2_format.py | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/tests/qemu-iotests/qcow2_format.py b/tests/qemu-iotests/qcow2_format.py
index cc432e7..2f3681b 100644
--- a/tests/qemu-iotests/qcow2_format.py
+++ b/tests/qemu-iotests/qcow2_format.py
@@ -165,6 +165,13 @@ class QcowHeaderExtension(Qcow2Struct):
 self.data = fd.read(padded)
 assert self.data is not None
 
+data_str = self.data[:self.length]
+if all(c in string.printable.encode('ascii') for c in data_str):
+data_str = f"'{ data_str.decode('ascii') }'"
+else:
+data_str = '<binary>'
+self.data_str = data_str
+
 if self.magic == QCOW2_EXT_MAGIC_BITMAPS:
 self.obj = Qcow2BitmapExt(data=self.data)
 else:
@@ -174,12 +181,7 @@ class QcowHeaderExtension(Qcow2Struct):
 super().dump()
 
 if self.obj is None:
-data = self.data[:self.length]
-if all(c in string.printable.encode('ascii') for c in data):
-data = f"'{ data.decode('ascii') }'"
-else:
-data = '<binary>'
-print(f'{"data":<25} {data}')
+print(f'{"data":<25} {self.data_str}')
 else:
 self.obj.dump()
 
-- 
1.8.3.1
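The printable-data check the patch moves into `__init__` can be sketched standalone (the `<binary>` fallback string is assumed here; the archive appears to have stripped it as an HTML tag):

```python
import string

PRINTABLE = string.printable.encode('ascii')

def data_repr(data, length):
    # Mirrors the check the patch hoists into __init__: quote fully
    # printable payloads, elide binary ones.
    data = data[:length]
    if all(c in PRINTABLE for c in data):
        return f"'{data.decode('ascii')}'"
    return '<binary>'

assert data_repr(b'hello\x00\x00\x00', 5) == "'hello'"
assert data_repr(b'\xff\xfe', 2) == '<binary>'
```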




[PATCH v12 01/11] iotests: add test for QCOW2 header dump

2020-07-30 Thread Andrey Shinkevich
The simple script creates a QCOW2 image and fills it with some data.
Two bitmaps are created as well. Then the script reads the image header
with extensions from the disk by running the script qcow2.py and dumps
the information to the output. Other entities, such as snapshots, may
be added to the test later.

Suggested-by: Eric Blake 
Signed-off-by: Andrey Shinkevich 
---
 tests/qemu-iotests/303 | 59 ++
 tests/qemu-iotests/303.out | 64 ++
 tests/qemu-iotests/group   |  1 +
 3 files changed, 124 insertions(+)
 create mode 100755 tests/qemu-iotests/303
 create mode 100644 tests/qemu-iotests/303.out

diff --git a/tests/qemu-iotests/303 b/tests/qemu-iotests/303
new file mode 100755
index 000..3c7a611
--- /dev/null
+++ b/tests/qemu-iotests/303
@@ -0,0 +1,59 @@
+#!/usr/bin/env python3
+#
+# Test for dumping of qcow2 image metadata
+#
+# Copyright (c) 2020 Virtuozzo International GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+import iotests
+import subprocess
+from iotests import qemu_img_create, qemu_io, file_path, log, filter_qemu_io
+
+iotests.script_initialize(supported_fmts=['qcow2'])
+
+disk = file_path('disk')
+chunk = 1024 * 1024
+
+
+def create_bitmap(bitmap_number, disabled):
+granularity = 1 << (14 + bitmap_number)
+bitmap_name = 'bitmap-' + str(bitmap_number)
+vm = iotests.VM().add_drive(disk)
+vm.launch()
+vm.qmp_log('block-dirty-bitmap-add', node='drive0', name=bitmap_name,
+   granularity=granularity, persistent=True, disabled=disabled)
+vm.shutdown()
+
+
+def write_to_disk(offset, size):
+write = f'write {offset} {size}'
+log(qemu_io('-c', write, disk), filters=[filter_qemu_io])
+
+
+def add_bitmap(num, begin, end, disabled):
+log(f'Add bitmap {num}')
+create_bitmap(num, disabled)
+for i in range(begin, end):
+write_to_disk((i-1) * chunk, chunk)
+log('')
+
+
+qemu_img_create('-f', iotests.imgfmt, disk, '10M')
+
+add_bitmap(1, 1, 7, False)
+add_bitmap(2, 7, 9, True)
+dump = ['qcow2.py', f'{disk}', 'dump-header']
+subprocess.run(dump)
diff --git a/tests/qemu-iotests/303.out b/tests/qemu-iotests/303.out
new file mode 100644
index 000..8b11169
--- /dev/null
+++ b/tests/qemu-iotests/303.out
@@ -0,0 +1,64 @@
+Add bitmap 1
+{"execute": "block-dirty-bitmap-add", "arguments": {"disabled": false, 
"granularity": 32768, "name": "bitmap-1", "node": "drive0", "persistent": true}}
+{"return": {}}
+wrote 1048576/1048576 bytes at offset 0
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+wrote 1048576/1048576 bytes at offset 1048576
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+wrote 1048576/1048576 bytes at offset 2097152
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+wrote 1048576/1048576 bytes at offset 3145728
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+wrote 1048576/1048576 bytes at offset 4194304
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+wrote 1048576/1048576 bytes at offset 5242880
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+
+Add bitmap 2
+{"execute": "block-dirty-bitmap-add", "arguments": {"disabled": true, 
"granularity": 65536, "name": "bitmap-2", "node": "drive0", "persistent": true}}
+{"return": {}}
+wrote 1048576/1048576 bytes at offset 6291456
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+wrote 1048576/1048576 bytes at offset 7340032
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+
+magic                     0x514649fb
+version                   3
+backing_file_offset       0x0
+backing_file_size         0x0
+cluster_bits              16
+size                      10485760
+crypt_method              0
+l1_size                   1
+l1_table_offset           0x30000
+refcount_table_offset     0x10000
+refcount_table_clusters   1
+nb_snapshots              0
+snapshot_offset           0x0
+incompatible_features     []
+compatible_features       []
+autoclear_features        [0]
+refcount_order            4
+header_length             112
+
+Header extension:
+magic                     0x6803f857 (Feature table)
+length                    336
+data                      <binary>
+
+Header extension:
+magic                     0x23852875 (Bitmaps)
+length                    24
+nb_bitmaps                2
+reserved32                0
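
The granularity values recorded in the QMP log above (32768 for bitmap-1, 65536 for bitmap-2) follow directly from the test's `1 << (14 + bitmap_number)` formula; a quick standalone sketch:

```python
def granularity(bitmap_number):
    # Test 303 picks a different granularity per bitmap:
    # bitmap N gets 2^(14 + N) bytes per dirty bit.
    return 1 << (14 + bitmap_number)

print(granularity(1))  # bitmap-1 -> 32768, i.e. granularity_bits 15
print(granularity(2))  # bitmap-2 -> 65536, i.e. granularity_bits 16
```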

[PATCH v12 00/11] iotests: Dump QCOW2 dirty bitmaps metadata

2020-07-30 Thread Andrey Shinkevich
Add dirty bitmap information to the QCOW2 metadata dump in qcow2_format.py.

v10:
  01: New. The test case #303 added to visualize the QCOW2 metadata dumping.
  Suggested by Eric. The patch "Fix capitalization of header extension
  constant" has been pulled separately.
  07: The class Qcow2BitmapTableEntry is now derived from the Qcow2Struct
  one. The constructors and methods of Qcow2BitmapTable and of
  Qcow2BitmapTableEntry were modified.
  09: The python dict.update method was replaced with assignment operator.
  The interleaving dict 'entries' was removed from bitmap table dump.
  10: The class Qcow2HeaderExtensionsDoc was removed.
  11: New. The #303 was extended by dumping QCOW2 metadata in JSON format.


Andrey Shinkevich (11):
  iotests: add test for QCOW2 header dump
  qcow2_format.py: make printable data an extension class member
  qcow2_format.py: change Qcow2BitmapExt initialization method
  qcow2_format.py: dump bitmap flags in human readable way.
  qcow2_format.py: Dump bitmap directory information
  qcow2_format.py: pass cluster size to substructures
  qcow2_format.py: Dump bitmap table serialized entries
  qcow2.py: Introduce '-j' key to dump in JSON format
  qcow2_format.py: collect fields to dump in JSON format
  qcow2_format.py: support dumping metadata in JSON format
  iotests: dump QCOW2 header in JSON in #303

 tests/qemu-iotests/303 |  62 +++
 tests/qemu-iotests/303.out | 162 
 tests/qemu-iotests/group   |   1 +
 tests/qemu-iotests/qcow2.py|  18 +++-
 tests/qemu-iotests/qcow2_format.py | 210 ++---
 5 files changed, 432 insertions(+), 21 deletions(-)
 create mode 100755 tests/qemu-iotests/303
 create mode 100644 tests/qemu-iotests/303.out

-- 
1.8.3.1




Re: [PATCH 0/2] qcow2: Release read-only bitmaps when inactivated

2020-07-30 Thread Eric Blake

On 7/30/20 7:02 AM, Max Reitz wrote:

Hi,

When beginning migration, the qcow2 driver syncs all persistent bitmaps
to disk and then releases them.  If the user decides to continue on the
source after migration, those bitmaps are re-loaded from the qcow2
image.

However, we only do this for bitmaps that were actively synced, i.e. R/W
bitmaps.  RO bitmaps (those on backing images) are not written and thus
not released.  However, we still try to re-load them when continuing,
and that will then fail.

To fix this problem, I think we should just consider RO bitmaps to be in
sync from the beginning, so we can release them just like bitmaps that
we have actively written back to the image.  This is done by patch 1.

However, there’s a catch: Peter Krempa noted that it isn’t in libvirt’s
interest for the bitmaps to be released before migration at all, because
this makes them disappear from query-named-block-node’s dirty-bitmaps
list, but libvirt needs the bitmaps to be there:

https://bugzilla.redhat.com/show_bug.cgi?id=1858739#c3


And that is enough to make me think this series is -rc3 material. 
Although I'm not yet sure whether the solution is this series as 
written, or to patch libvirt to look elsewhere for bitmap information, 
or to patch qemu on incoming migration to not complain when reloading a 
RO bitmap, or something else.




If it’s really not feasible to keep the bitmaps around, then I suppose
what might work for libvirt is to query
image/format-specific/data/bitmaps in addition to dirty-bitmaps (every
bitmap that we released before migration must be a persistent bitmap).

What are your thoughts on this?


I'd really like to hear from Virtuozzo on the topic before committing to 
this series, but I will at least review it in the meantime.
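
Max's fallback suggestion above — reading image/format-specific/data/bitmaps in addition to dirty-bitmaps — could look roughly like this on the client side. The response snippet is a hypothetical sketch of the query-named-block-nodes shape, not captured output:

```python
# Hypothetical excerpt of one node from a query-named-block-nodes reply.
node = {
    'node-name': 'drive0',
    'dirty-bitmaps': [],  # released before migration, so empty here
    'image': {
        'format-specific': {
            'type': 'qcow2',
            'data': {'bitmaps': [{'name': 'bitmap-1', 'granularity': 32768}]},
        },
    },
}

def persistent_bitmap_names(node):
    """Collect bitmap names from the runtime list and the qcow2 metadata."""
    names = {b['name'] for b in node.get('dirty-bitmaps', [])}
    fmt = node.get('image', {}).get('format-specific', {})
    if fmt.get('type') == 'qcow2':
        names |= {b['name'] for b in fmt.get('data', {}).get('bitmaps') or []}
    return sorted(names)

print(persistent_bitmap_names(node))  # ['bitmap-1']
```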





Max Reitz (2):
   qcow2: Release read-only bitmaps when inactivated
   iotests/169: Test source cont with backing bmap

  block/qcow2-bitmap.c   | 23 +++---
  tests/qemu-iotests/169 | 64 +-
  tests/qemu-iotests/169.out |  4 +--
  3 files changed, 84 insertions(+), 7 deletions(-)



--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3226
Virtualization:  qemu.org | libvirt.org




Re: [PATCH v11 10/11] qcow2_format.py: introduce Qcow2HeaderExtensionsDoc class

2020-07-30 Thread Andrey Shinkevich

On 28.07.2020 14:36, Vladimir Sementsov-Ogievskiy wrote:

17.07.2020 11:14, Andrey Shinkevich wrote:

Per original script design, QcowHeader class may dump the QCOW2 header
info separately from the QCOW2 extensions info. To implement the
to_dict() method for dumping extensions, let us introduce the class
Qcow2HeaderExtensionsDoc.


I think, when dumping to qcow2, there is no need to omit extensions; let's 
just always dump them.




Signed-off-by: Andrey Shinkevich 
---
  tests/qemu-iotests/qcow2_format.py | 9 +
  1 file changed, 9 insertions(+)

diff --git a/tests/qemu-iotests/qcow2_format.py 
b/tests/qemu-iotests/qcow2_format.py

index 19d29b8..d2a8659 100644
--- a/tests/qemu-iotests/qcow2_format.py
+++ b/tests/qemu-iotests/qcow2_format.py
@@ -248,6 +248,15 @@ class Qcow2BitmapTable:
  return dict(entries=self.entries)
    +class Qcow2HeaderExtensionsDoc:
+
+    def __init__(self, extensions):
+    self.extensions = extensions
+
+    def to_dict(self):
+    return dict(Header_extensions=self.extensions)


s/H/h/



Sorry. I meant the non-JSON format dump as the original capitalized 'H'. 
The class Qcow2HeaderExtensionsDoc has been removed in the next v12.


Andrey



+
+
  QCOW2_EXT_MAGIC_BITMAPS = 0x23852875









[PATCH v12 09/11] qcow2_format.py: collect fields to dump in JSON format

2020-07-30 Thread Andrey Shinkevich
As __dict__ is being extended with class members we do not want to
print, add a to_dict() method that returns a dictionary with the
desired fields and their values. Extend it in a subclass when
necessary to build the final dictionary for the JSON output which
follows.

Suggested-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Andrey Shinkevich 
---
 tests/qemu-iotests/qcow2_format.py | 34 ++
 1 file changed, 34 insertions(+)

diff --git a/tests/qemu-iotests/qcow2_format.py 
b/tests/qemu-iotests/qcow2_format.py
index 2000de3..a4114cb 100644
--- a/tests/qemu-iotests/qcow2_format.py
+++ b/tests/qemu-iotests/qcow2_format.py
@@ -119,6 +119,9 @@ class Qcow2Struct(metaclass=Qcow2StructMeta):
 
 print('{:<25} {}'.format(f[2], value_str))
 
+def to_dict(self):
+return dict((f[2], self.__dict__[f[2]]) for f in self.fields)
+
 
 class Qcow2BitmapExt(Qcow2Struct):
 
@@ -151,6 +154,11 @@ class Qcow2BitmapExt(Qcow2Struct):
 print()
 entry.dump()
 
+def to_dict(self):
+fields_dict = super().to_dict()
+fields_dict['bitmap_directory'] = self.bitmap_directory
+return fields_dict
+
 
 class Qcow2BitmapDirEntry(Qcow2Struct):
 
@@ -189,6 +197,14 @@ class Qcow2BitmapDirEntry(Qcow2Struct):
 super(Qcow2BitmapDirEntry, self).dump()
 self.bitmap_table.dump()
 
+def to_dict(self):
+fields_dict = super().to_dict()
+fields_dict['bitmap_table'] = self.bitmap_table.entries
+bmp_name = dict(name=self.name)
+# Put the name ahead of the dict
+bme_dict = {**bmp_name, **fields_dict}
+return bme_dict
+
 
 class Qcow2BitmapTableEntry(Qcow2Struct):
 
@@ -214,6 +230,9 @@ class Qcow2BitmapTableEntry(Qcow2Struct):
 else:
 self.type = 'all-zeroes'
 
+def to_dict(self):
+return dict(type=self.type, offset=self.offset, reserved=self.reserved)
+
 
 class Qcow2BitmapTable:
 
@@ -246,6 +265,9 @@ class QcowHeaderExtension(Qcow2Struct):
 0x44415441: 'Data file'
 }
 
+def to_dict(self):
+return self.mapping.get(self.value, "<unknown>")
+
 fields = (
 ('u32', Magic, 'magic'),
 ('u32', '{}', 'length')
@@ -308,6 +330,18 @@ class QcowHeaderExtension(Qcow2Struct):
 else:
 self.obj.dump()
 
+def to_dict(self):
+fields_dict = super().to_dict()
+ext_name = dict(name=self.Magic(self.magic))
+# Put the name ahead of the dict
+he_dict = {**ext_name, **fields_dict}
+if self.obj is not None:
+he_dict['data'] = self.obj
+else:
+he_dict['data_str'] = self.data_str
+
+return he_dict
+
 @classmethod
 def create(cls, magic, data):
 return QcowHeaderExtension(magic, len(data), data)
-- 
1.8.3.1
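
The pattern in this patch — the base class emits only its declared fields, and subclasses extend the resulting dict — can be reduced to a standalone sketch (simplified stand-in classes, not the patch's actual ones):

```python
class Struct:
    fields = ()  # (format, printer, name) triples, as in qcow2_format.py

    def to_dict(self):
        # Emit only the declared fields, not every attribute in __dict__.
        return {f[2]: self.__dict__[f[2]] for f in self.fields}


class BitmapExt(Struct):
    fields = (('u32', '{}', 'nb_bitmaps'),)

    def __init__(self):
        self.nb_bitmaps = 2
        self.helper_state = object()  # would pollute a raw __dict__ dump

    def to_dict(self):
        d = super().to_dict()
        d['bitmap_directory'] = ['bitmap-1', 'bitmap-2']  # subclass extension
        return d


print(BitmapExt().to_dict())
# {'nb_bitmaps': 2, 'bitmap_directory': ['bitmap-1', 'bitmap-2']}
```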




Re: [PATCH-for-5.1? v2 0/2] util/pagesize: Make qemu_real_host_page_size of type size_t

2020-07-30 Thread Dr. David Alan Gilbert
* Philippe Mathieu-Daudé (phi...@redhat.com) wrote:
> Since v1:
> Make QEMU_VMALLOC_ALIGN unsigned in a previous patch

Nah, not for 5.1 - it feels like the type of thing that might on a
really bad day create a really subtle bug.

Dave

> Philippe Mathieu-Daudé (2):
>   qemu/osdep: Make QEMU_VMALLOC_ALIGN unsigned long
>   util/pagesize: Make qemu_real_host_page_size of type size_t
> 
>  include/exec/ram_addr.h  | 4 ++--
>  include/qemu/osdep.h | 6 +++---
>  accel/kvm/kvm-all.c  | 3 ++-
>  block/qcow2-cache.c  | 2 +-
>  exec.c   | 8 
>  hw/ppc/spapr_pci.c   | 2 +-
>  hw/virtio/virtio-mem.c   | 2 +-
>  migration/migration.c| 2 +-
>  migration/postcopy-ram.c | 2 +-
>  monitor/misc.c   | 2 +-
>  util/pagesize.c  | 2 +-
>  11 files changed, 18 insertions(+), 17 deletions(-)
> 
> -- 
> 2.21.3
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [PATCH 1/2] qcow2: Release read-only bitmaps when inactivated

2020-07-30 Thread Peter Krempa
On Thu, Jul 30, 2020 at 14:02:33 +0200, Max Reitz wrote:
> During migration, we release all bitmaps after storing them on disk, as
> long as they are (1) stored on disk, (2) not read-only, and (3)
> consistent.
> 
> (2) seems arbitrary, though.  The reason we do not release them is
> because we do not write them, as there is no need to; and then we just
> forget about all bitmaps that we have not written to the file.  However,
> read-only persistent bitmaps are still in the file and in sync with
> their in-memory representation, so we may as well release them just like
> any R/W bitmap that we have updated.
> 
> It leads to actual problems, too: After migration, letting the source
> continue may result in an error if there were any bitmaps on read-only
> nodes (such as backing images), because those have not been released by
> bdrv_inactive_all(), but bdrv_invalidate_cache_all() attempts to reload
> them (which fails, because they are still present in memory).
> 

I've tested it with the same commands as I used before; now the 'cont'
succeeds, and the bitmaps after the cont call are loaded and active,
at least according to 'query-named-block-nodes'.

Tested-by: Peter Krempa 




[PATCH v12 07/11] qcow2_format.py: Dump bitmap table serialized entries

2020-07-30 Thread Andrey Shinkevich
Add bitmap table information to the QCOW2 metadata dump.

Bitmap name               bitmap-1
...
Bitmap table   type            size         offset
0              serialized      65536        10092544
1              all-zeroes      65536        0
2              all-zeroes      65536        0

Signed-off-by: Andrey Shinkevich 
---
 tests/qemu-iotests/303.out |  4 
 tests/qemu-iotests/qcow2_format.py | 47 ++
 2 files changed, 51 insertions(+)

diff --git a/tests/qemu-iotests/303.out b/tests/qemu-iotests/303.out
index dc3739b..d581fb4 100644
--- a/tests/qemu-iotests/303.out
+++ b/tests/qemu-iotests/303.out
@@ -70,6 +70,8 @@ type                      1
 granularity_bits          15
 name_size                 8
 extra_data_size           0
+Bitmap table   type            size         offset
+0              serialized      65536        10092544
 
 Bitmap name               bitmap-2
 bitmap_table_offset       0x9c0000
@@ -79,4 +81,6 @@ type                      1
 granularity_bits          16
 name_size                 8
 extra_data_size           0
+Bitmap table   type            size         offset
+0              all-zeroes      65536        0
 
diff --git a/tests/qemu-iotests/qcow2_format.py 
b/tests/qemu-iotests/qcow2_format.py
index ca0d350..1f033d4 100644
--- a/tests/qemu-iotests/qcow2_format.py
+++ b/tests/qemu-iotests/qcow2_format.py
@@ -175,6 +175,10 @@ class Qcow2BitmapDirEntry(Qcow2Struct):
 entry_raw_size = self.bitmap_dir_entry_raw_size()
 padding = ((entry_raw_size + 7) & ~7) - entry_raw_size
 fd.seek(padding, 1)
+self.bitmap_table = Qcow2BitmapTable(fd=fd,
+ offset=self.bitmap_table_offset,
+ nb_entries=self.bitmap_table_size,
+ cluster_size=self.cluster_size)
 
 def bitmap_dir_entry_raw_size(self):
 return struct.calcsize(self.fmt) + self.name_size + \
@@ -183,6 +187,49 @@ class Qcow2BitmapDirEntry(Qcow2Struct):
 def dump(self):
 print(f'{"Bitmap name":<25} {self.name}')
 super(Qcow2BitmapDirEntry, self).dump()
+self.bitmap_table.dump()
+
+
+class Qcow2BitmapTableEntry(Qcow2Struct):
+
+fields = (
+('u64',  '{}', 'entry'),
+)
+
+BME_TABLE_ENTRY_RESERVED_MASK = 0xff000000000001fe
+BME_TABLE_ENTRY_OFFSET_MASK = 0x00fffffffffffe00
+BME_TABLE_ENTRY_FLAG_ALL_ONES = 1
+
+def __init__(self, fd):
+super().__init__(fd=fd)
+self.reserved = self.entry & self.BME_TABLE_ENTRY_RESERVED_MASK
+self.offset = self.entry & self.BME_TABLE_ENTRY_OFFSET_MASK
+if self.offset:
+if self.entry & self.BME_TABLE_ENTRY_FLAG_ALL_ONES:
+self.type = 'invalid'
+else:
+self.type = 'serialized'
+elif self.entry & self.BME_TABLE_ENTRY_FLAG_ALL_ONES:
+self.type = 'all-ones'
+else:
+self.type = 'all-zeroes'
+
+
+class Qcow2BitmapTable:
+
+def __init__(self, fd, offset, nb_entries, cluster_size):
+self.cluster_size = cluster_size
+position = fd.tell()
+fd.seek(offset)
+self.entries = [Qcow2BitmapTableEntry(fd) for _ in range(nb_entries)]
+fd.seek(position)
+
+def dump(self):
+size = self.cluster_size
+bitmap_table = enumerate(self.entries)
+print(f'{"Bitmap table":<14} {"type":<15} {"size":<12} {"offset"}')
+for i, entry in bitmap_table:
+print(f'{i:<14} {entry.type:<15} {size:<12} {entry.offset}')
 
 
 QCOW2_EXT_MAGIC_BITMAPS = 0x23852875
-- 
1.8.3.1
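
The entry classification above can be exercised on its own. This sketch uses the qcow2 bitmap-table-entry layout (bit 0: all-ones flag, bits 9-55: host cluster offset) on hand-built 64-bit entries:

```python
BME_TABLE_ENTRY_OFFSET_MASK = 0x00fffffffffffe00   # bits 9-55
BME_TABLE_ENTRY_FLAG_ALL_ONES = 1                  # bit 0

def classify(entry):
    # Mirrors the Qcow2BitmapTableEntry type decision:
    # an offset plus the all-ones flag is an invalid combination.
    offset = entry & BME_TABLE_ENTRY_OFFSET_MASK
    all_ones = entry & BME_TABLE_ENTRY_FLAG_ALL_ONES
    if offset:
        return 'invalid' if all_ones else 'serialized'
    return 'all-ones' if all_ones else 'all-zeroes'

print(classify(10092544))  # offset set, flag clear -> 'serialized'
print(classify(0))         # -> 'all-zeroes'
print(classify(1))         # -> 'all-ones'
```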




[PATCH v12 04/11] qcow2_format.py: dump bitmap flags in human readable way.

2020-07-30 Thread Andrey Shinkevich
Introduce the class BitmapFlags that parses a bitmap flags mask.

Suggested-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Andrey Shinkevich 
Reviewed-by: Vladimir Sementsov-Ogievskiy 
---
 tests/qemu-iotests/qcow2_format.py | 16 
 1 file changed, 16 insertions(+)

diff --git a/tests/qemu-iotests/qcow2_format.py 
b/tests/qemu-iotests/qcow2_format.py
index d4a9974..b447344 100644
--- a/tests/qemu-iotests/qcow2_format.py
+++ b/tests/qemu-iotests/qcow2_format.py
@@ -40,6 +40,22 @@ class Flags64(Qcow2Field):
 return str(bits)
 
 
+class BitmapFlags(Qcow2Field):
+
+flags = {
+0x1: 'in-use',
+0x2: 'auto'
+}
+
+def __str__(self):
+bits = []
+for bit in range(64):
+flag = self.value & (1 << bit)
+if flag:
+bits.append(self.flags.get(flag, f'bit-{bit}'))
+return f'{self.value:#x} ({bits})'
+
+
 class Enum(Qcow2Field):
 
 def __str__(self):
-- 
1.8.3.1
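
Applied to the flag values seen in this series' output (2 for an enabled bitmap's 'auto' flag, 0 for a disabled one), the parsing above behaves like this standalone sketch:

```python
FLAGS = {0x1: 'in-use', 0x2: 'auto'}

def bitmap_flags(value):
    # Same walk as BitmapFlags.__str__: name known bits, fall back to bit-N.
    bits = []
    for bit in range(64):
        flag = value & (1 << bit)
        if flag:
            bits.append(FLAGS.get(flag, f'bit-{bit}'))
    return f'{value:#x} ({bits})'

print(bitmap_flags(0x2))  # "0x2 (['auto'])"
print(bitmap_flags(0x5))  # "0x5 (['in-use', 'bit-2'])" -- unknown bits named bit-N
```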




[PATCH v12 08/11] qcow2.py: Introduce '-j' key to dump in JSON format

2020-07-30 Thread Andrey Shinkevich
Add the command key to the qcow2.py arguments list to dump QCOW2
metadata in JSON format. Here is the suggested way to do that. The
implementation of the dump in JSON format is in the patch that follows.

Signed-off-by: Andrey Shinkevich 
---
 tests/qemu-iotests/qcow2.py| 18 ++
 tests/qemu-iotests/qcow2_format.py |  4 ++--
 2 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/tests/qemu-iotests/qcow2.py b/tests/qemu-iotests/qcow2.py
index 0910e6a..77ca59c 100755
--- a/tests/qemu-iotests/qcow2.py
+++ b/tests/qemu-iotests/qcow2.py
@@ -26,16 +26,19 @@ from qcow2_format import (
 )
 
 
+is_json = False
+
+
 def cmd_dump_header(fd):
 h = QcowHeader(fd)
-h.dump()
+h.dump(is_json)
 print()
-h.dump_extensions()
+h.dump_extensions(is_json)
 
 
 def cmd_dump_header_exts(fd):
 h = QcowHeader(fd)
-h.dump_extensions()
+h.dump_extensions(is_json)
 
 
 def cmd_set_header(fd, name, value):
@@ -151,11 +154,14 @@ def main(filename, cmd, args):
 
 
 def usage():
-print("Usage: %s <file> <cmd> [<arg>, ...]" % sys.argv[0])
+print("Usage: %s <file> <cmd> [<arg>, ...] [<key>, ...]" % sys.argv[0])
 print("")
 print("Supported commands:")
 for name, handler, num_args, desc in cmds:
 print("%-20s - %s" % (name, desc))
+print("")
+print("Supported keys:")
+print("%-20s - %s" % ('-j', 'Dump in JSON format'))
 
 
 if __name__ == '__main__':
@@ -163,4 +169,8 @@ if __name__ == '__main__':
 usage()
 sys.exit(1)
 
+is_json = '-j' in sys.argv
+if is_json:
+sys.argv.remove('-j')
+
 main(sys.argv[1], sys.argv[2], sys.argv[3:])
diff --git a/tests/qemu-iotests/qcow2_format.py 
b/tests/qemu-iotests/qcow2_format.py
index 1f033d4..2000de3 100644
--- a/tests/qemu-iotests/qcow2_format.py
+++ b/tests/qemu-iotests/qcow2_format.py
@@ -109,7 +109,7 @@ class Qcow2Struct(metaclass=Qcow2StructMeta):
 self.__dict__ = dict((field[2], values[i])
  for i, field in enumerate(self.fields))
 
-def dump(self):
+def dump(self, is_json=False):
 for f in self.fields:
 value = self.__dict__[f[2]]
 if isinstance(f[1], str):
@@ -405,7 +405,7 @@ class QcowHeader(Qcow2Struct):
 buf = buf[0:header_bytes-1]
 fd.write(buf)
 
-def dump_extensions(self):
+def dump_extensions(self, is_json=False):
 for ex in self.extensions:
 print('Header extension:')
 ex.dump()
-- 
1.8.3.1




[PATCH v12 10/11] qcow2_format.py: support dumping metadata in JSON format

2020-07-30 Thread Andrey Shinkevich
Implementation of dumping QCOW2 image metadata.
The sample output:
{
"Header_extensions": [
{
"name": "Feature table",
"magic": 1745090647,
"length": 192,
"data_str": "<binary>"
},
{
"name": "Bitmaps",
"magic": 595929205,
"length": 24,
"data": {
"nb_bitmaps": 2,
"reserved32": 0,
"bitmap_directory_size": 64,
"bitmap_directory_offset": 1048576,
"bitmap_directory": [
{
"name": "bitmap-1",
"bitmap_table_offset": 589824,
"bitmap_table_size": 1,
"flags": 2,
"type": 1,
"granularity_bits": 15,
"name_size": 8,
"extra_data_size": 0,
"bitmap_table": [
{
"type": "serialized",
"offset": 655360
},
...

Suggested-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Andrey Shinkevich 
---
 tests/qemu-iotests/qcow2_format.py | 17 +
 1 file changed, 17 insertions(+)

diff --git a/tests/qemu-iotests/qcow2_format.py 
b/tests/qemu-iotests/qcow2_format.py
index a4114cb..7487720 100644
--- a/tests/qemu-iotests/qcow2_format.py
+++ b/tests/qemu-iotests/qcow2_format.py
@@ -19,6 +19,15 @@
 
 import struct
 import string
+import json
+
+
+class ComplexEncoder(json.JSONEncoder):
+def default(self, obj):
+if hasattr(obj, 'to_dict'):
+return obj.to_dict()
+else:
+return json.JSONEncoder.default(self, obj)
 
 
 class Qcow2Field:
@@ -110,6 +119,10 @@ class Qcow2Struct(metaclass=Qcow2StructMeta):
  for i, field in enumerate(self.fields))
 
 def dump(self, is_json=False):
+if is_json:
+print(json.dumps(self.to_dict(), indent=4, cls=ComplexEncoder))
+return
+
 for f in self.fields:
 value = self.__dict__[f[2]]
 if isinstance(f[1], str):
@@ -440,6 +453,10 @@ class QcowHeader(Qcow2Struct):
 fd.write(buf)
 
 def dump_extensions(self, is_json=False):
+if is_json:
+print(json.dumps(self.extensions, indent=4, cls=ComplexEncoder))
+return
+
 for ex in self.extensions:
 print('Header extension:')
 ex.dump()
-- 
1.8.3.1
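
The encoder hook above uses the standard json.JSONEncoder.default extension point; a self-contained sketch of the same idea (illustrative classes, not the patch's):

```python
import json

class ToDictEncoder(json.JSONEncoder):
    def default(self, obj):
        # Delegate to the object's own to_dict() if it has one,
        # otherwise fall back to the base class (which raises TypeError).
        if hasattr(obj, 'to_dict'):
            return obj.to_dict()
        return super().default(obj)

class Ext:
    def __init__(self, name, length):
        self.name, self.length = name, length

    def to_dict(self):
        return {'name': self.name, 'length': self.length}

print(json.dumps([Ext('Feature table', 336)], indent=4, cls=ToDictEncoder))
```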




[PATCH v12 03/11] qcow2_format.py: change Qcow2BitmapExt initialization method

2020-07-30 Thread Andrey Shinkevich
There are two ways to initialize a class derived from Qcow2Struct:
1. Pass a block of binary data to the constructor.
2. Pass the file descriptor to allow reading the file from the constructor.
Let's change the Qcow2BitmapExt initialization method from 1 to 2 to
support scattered reading in the initialization chain.
The implementation comes with the patch that follows.

Suggested-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Andrey Shinkevich 
Reviewed-by: Vladimir Sementsov-Ogievskiy 
---
 tests/qemu-iotests/qcow2_format.py | 36 ++--
 1 file changed, 22 insertions(+), 14 deletions(-)

diff --git a/tests/qemu-iotests/qcow2_format.py 
b/tests/qemu-iotests/qcow2_format.py
index 2f3681b..d4a9974 100644
--- a/tests/qemu-iotests/qcow2_format.py
+++ b/tests/qemu-iotests/qcow2_format.py
@@ -113,6 +113,11 @@ class Qcow2BitmapExt(Qcow2Struct):
 ('u64', '{:#x}', 'bitmap_directory_offset')
 )
 
+def __init__(self, fd):
+super().__init__(fd=fd)
+tail = struct.calcsize(self.fmt) % 8
+if tail:
+fd.seek(8 - tail, 1)
 
 QCOW2_EXT_MAGIC_BITMAPS = 0x23852875
 
@@ -161,21 +166,24 @@ class QcowHeaderExtension(Qcow2Struct):
 else:
 assert all(v is None for v in (magic, length, data))
 super().__init__(fd=fd)
-padded = (self.length + 7) & ~7
-self.data = fd.read(padded)
-assert self.data is not None
-
-data_str = self.data[:self.length]
-if all(c in string.printable.encode('ascii') for c in data_str):
-data_str = f"'{ data_str.decode('ascii') }'"
-else:
+data_str = '<binary>'
-self.data_str = data_str
+if self.magic == QCOW2_EXT_MAGIC_BITMAPS:
+self.obj = Qcow2BitmapExt(fd=fd)
+self.data = None
+else:
+padded = (self.length + 7) & ~7
+self.data = fd.read(padded)
+assert self.data is not None
+self.obj = None
+
+if self.data is not None:
+data_str = self.data[:self.length]
+if all(c in string.printable.encode(
+'ascii') for c in data_str):
+data_str = f"'{ data_str.decode('ascii') }'"
+else:
+data_str = '<binary>'
+self.data_str = data_str
 
-if self.magic == QCOW2_EXT_MAGIC_BITMAPS:
-self.obj = Qcow2BitmapExt(data=self.data)
-else:
-self.obj = None
 
 def dump(self):
 super().dump()
-- 
1.8.3.1
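
The two construction paths the commit message describes — a raw data block vs. an open file descriptor — can be sketched with io.BytesIO (a simplified one-field struct, not the real header layout):

```python
import io
import struct

class U32Struct:
    """Minimal stand-in for Qcow2Struct with one big-endian u32 field."""
    fmt = '>I'

    def __init__(self, data=None, fd=None):
        # Exactly one of the two sources must be given.
        assert (data is None) != (fd is None)
        if fd is not None:
            # Mode 2: read from the current file position, which is what
            # allows scattered reading along the initialization chain.
            data = fd.read(struct.calcsize(self.fmt))
        self.value, = struct.unpack(self.fmt, data)

blob = struct.pack('>I', 0x23852875)
print(hex(U32Struct(data=blob).value))            # mode 1: from a data block
print(hex(U32Struct(fd=io.BytesIO(blob)).value))  # mode 2: from a file object
```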




[PATCH v12 11/11] iotests: dump QCOW2 header in JSON in #303

2020-07-30 Thread Andrey Shinkevich
Extend the test case #303 by dumping QCOW2 image metadata in JSON
format.

Signed-off-by: Andrey Shinkevich 
---
 tests/qemu-iotests/303 |  3 ++
 tests/qemu-iotests/303.out | 76 ++
 2 files changed, 79 insertions(+)

diff --git a/tests/qemu-iotests/303 b/tests/qemu-iotests/303
index 3c7a611..6821bd3 100755
--- a/tests/qemu-iotests/303
+++ b/tests/qemu-iotests/303
@@ -57,3 +57,6 @@ add_bitmap(1, 1, 7, False)
 add_bitmap(2, 7, 9, True)
 dump = ['qcow2.py', f'{disk}', 'dump-header']
 subprocess.run(dump)
+# Dump the metadata in JSON format
+dump.append('-j')
+subprocess.run(dump)
diff --git a/tests/qemu-iotests/303.out b/tests/qemu-iotests/303.out
index d581fb4..ead3b63 100644
--- a/tests/qemu-iotests/303.out
+++ b/tests/qemu-iotests/303.out
@@ -84,3 +84,79 @@ extra_data_size           0
 Bitmap table   type            size         offset
 0              all-zeroes      65536        0
 
+{
+"magic": 1363560955,
+"version": 3,
+"backing_file_offset": 0,
+"backing_file_size": 0,
+"cluster_bits": 16,
+"size": 10485760,
+"crypt_method": 0,
+"l1_size": 1,
+"l1_table_offset": 196608,
+"refcount_table_offset": 65536,
+"refcount_table_clusters": 1,
+"nb_snapshots": 0,
+"snapshot_offset": 0,
+"incompatible_features": 0,
+"compatible_features": 0,
+"autoclear_features": 1,
+"refcount_order": 4,
+"header_length": 112
+}
+
+[
+{
+"name": "Feature table",
+"magic": 1745090647,
+"length": 336,
+"data_str": "<binary>"
+},
+{
+"name": "Bitmaps",
+"magic": 595929205,
+"length": 24,
+"data": {
+"nb_bitmaps": 2,
+"reserved32": 0,
+"bitmap_directory_size": 64,
+"bitmap_directory_offset": 10289152,
+"bitmap_directory": [
+{
+"name": "bitmap-1",
+"bitmap_table_offset": 10158080,
+"bitmap_table_size": 1,
+"flags": 2,
+"type": 1,
+"granularity_bits": 15,
+"name_size": 8,
+"extra_data_size": 0,
+"bitmap_table": [
+{
+"type": "serialized",
+"offset": 10092544,
+"reserved": 0
+}
+]
+},
+{
+"name": "bitmap-2",
+"bitmap_table_offset": 10223616,
+"bitmap_table_size": 1,
+"flags": 0,
+"type": 1,
+"granularity_bits": 16,
+"name_size": 8,
+"extra_data_size": 0,
+"bitmap_table": [
+{
+"type": "all-zeroes",
+"offset": 0,
+"reserved": 0
+}
+]
+}
+]
+}
+}
+]
-- 
1.8.3.1




[PATCH v12 06/11] qcow2_format.py: pass cluster size to substructures

2020-07-30 Thread Andrey Shinkevich
The cluster size of an image is a QcowHeader class member and may be
needed by dependent extension structures such as Qcow2BitmapExt to
print further bitmap table details.

Signed-off-by: Andrey Shinkevich 
Reviewed-by: Vladimir Sementsov-Ogievskiy 
---
 tests/qemu-iotests/qcow2_format.py | 17 +++--
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/tests/qemu-iotests/qcow2_format.py 
b/tests/qemu-iotests/qcow2_format.py
index 05a8aa9..ca0d350 100644
--- a/tests/qemu-iotests/qcow2_format.py
+++ b/tests/qemu-iotests/qcow2_format.py
@@ -129,19 +129,21 @@ class Qcow2BitmapExt(Qcow2Struct):
 ('u64', '{:#x}', 'bitmap_directory_offset')
 )
 
-def __init__(self, fd):
+def __init__(self, fd, cluster_size):
 super().__init__(fd=fd)
 tail = struct.calcsize(self.fmt) % 8
 if tail:
 fd.seek(8 - tail, 1)
 position = fd.tell()
+self.cluster_size = cluster_size
 self.read_bitmap_directory(fd)
 fd.seek(position)
 
 def read_bitmap_directory(self, fd):
 fd.seek(self.bitmap_directory_offset)
 self.bitmap_directory = \
-[Qcow2BitmapDirEntry(fd) for _ in range(self.nb_bitmaps)]
+[Qcow2BitmapDirEntry(fd, cluster_size=self.cluster_size)
+ for _ in range(self.nb_bitmaps)]
 
 def dump(self):
 super().dump()
@@ -162,8 +164,9 @@ class Qcow2BitmapDirEntry(Qcow2Struct):
 ('u32', '{}', 'extra_data_size')
 )
 
-def __init__(self, fd):
+def __init__(self, fd, cluster_size):
 super().__init__(fd=fd)
+self.cluster_size = cluster_size
 # Seek relative to the current position in the file
 fd.seek(self.extra_data_size, 1)
 bitmap_name = fd.read(self.name_size)
@@ -203,11 +206,13 @@ class QcowHeaderExtension(Qcow2Struct):
 # then padding to next multiply of 8
 )
 
-def __init__(self, magic=None, length=None, data=None, fd=None):
+def __init__(self, magic=None, length=None, data=None, fd=None,
+ cluster_size=None):
 """
 Support both loading from fd and creation from user data.
 For fd-based creation current position in a file will be used to read
 the data.
+The cluster_size value may be obtained by dependent structures.
 
 This should be somehow refactored and functionality should be moved to
 superclass (to allow creation of any qcow2 struct), but then, fields
@@ -230,7 +235,7 @@ class QcowHeaderExtension(Qcow2Struct):
 assert all(v is None for v in (magic, length, data))
 super().__init__(fd=fd)
 if self.magic == QCOW2_EXT_MAGIC_BITMAPS:
-self.obj = Qcow2BitmapExt(fd=fd)
+self.obj = Qcow2BitmapExt(fd=fd, cluster_size=cluster_size)
 self.data = None
 else:
 padded = (self.length + 7) & ~7
@@ -319,7 +324,7 @@ class QcowHeader(Qcow2Struct):
 end = self.cluster_size
 
 while fd.tell() < end:
-ext = QcowHeaderExtension(fd=fd)
+ext = QcowHeaderExtension(fd=fd, cluster_size=self.cluster_size)
 if ext.magic == 0:
 break
 else:
-- 
1.8.3.1
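
The cluster_size being threaded through the constructors here is just the header's cluster_bits expanded; for the images in this series:

```python
def cluster_size(cluster_bits):
    # A qcow2 cluster is 2^cluster_bits bytes.
    return 1 << cluster_bits

print(cluster_size(16))  # 65536 -- the "size" column in the bitmap table dump
```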




Re: [PATCH-for-5.1?] util/pagesize: Make qemu_real_host_page_size of type size_t

2020-07-30 Thread Stefano Garzarella
On Thu, Jul 30, 2020 at 03:59:35PM +0200, Philippe Mathieu-Daudé wrote:
> We use different types to hold 'qemu_real_host_page_size'.
> Unify picking 'size_t' which seems the best candidate.

I agree, it sounds better!

Should we change even "qemu_host_page_size"?

Thanks,
Stefano

> 
> Doing so fixes a format string issue in hw/virtio/virtio-mem.c
> reported when building with GCC 4.9.4:
> 
>   hw/virtio/virtio-mem.c: In function ‘virtio_mem_set_block_size’:
>   hw/virtio/virtio-mem.c:756:9: error: format ‘%x’ expects argument of type 
> ‘unsigned int’, but argument 7 has type ‘uintptr_t’ [-Werror=format=]
>  error_setg(errp, "'%s' property has to be at least 0x%" PRIx32, name,
>  ^
> 
> Fixes: 910b25766b ("virtio-mem: Paravirtualized memory hot(un)plug")
> Reported-by: Bruce Rogers 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  include/exec/ram_addr.h  | 4 ++--
>  include/qemu/osdep.h | 2 +-
>  accel/kvm/kvm-all.c  | 3 ++-
>  block/qcow2-cache.c  | 2 +-
>  exec.c   | 8 
>  hw/ppc/spapr_pci.c   | 2 +-
>  hw/virtio/virtio-mem.c   | 2 +-
>  migration/migration.c| 2 +-
>  migration/postcopy-ram.c | 2 +-
>  monitor/misc.c   | 2 +-
>  util/pagesize.c  | 2 +-
>  11 files changed, 16 insertions(+), 15 deletions(-)
> 
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index 3ef729a23c..e07532266e 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -93,8 +93,8 @@ static inline unsigned long int 
> ramblock_recv_bitmap_offset(void *host_addr,
>  
>  bool ramblock_is_pmem(RAMBlock *rb);
>  
> -long qemu_minrampagesize(void);
> -long qemu_maxrampagesize(void);
> +size_t qemu_minrampagesize(void);
> +size_t qemu_maxrampagesize(void);
>  
>  /**
>   * qemu_ram_alloc_from_file,
> diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
> index 20872e793e..619b8a7a8c 100644
> --- a/include/qemu/osdep.h
> +++ b/include/qemu/osdep.h
> @@ -635,10 +635,10 @@ char *qemu_get_pid_name(pid_t pid);
>   */
>  pid_t qemu_fork(Error **errp);
>  
> +extern size_t qemu_real_host_page_size;
>  /* Using intptr_t ensures that qemu_*_page_mask is sign-extended even
>   * when intptr_t is 32-bit and we are aligning a long long.
>   */
> -extern uintptr_t qemu_real_host_page_size;
>  extern intptr_t qemu_real_host_page_mask;
>  
>  extern int qemu_icache_linesize;
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index 63ef6af9a1..59becfbd6c 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -674,7 +674,8 @@ static int kvm_log_clear_one_slot(KVMSlot *mem, int 
> as_id, uint64_t start,
>  KVMState *s = kvm_state;
>  uint64_t end, bmap_start, start_delta, bmap_npages;
>  struct kvm_clear_dirty_log d;
> -unsigned long *bmap_clear = NULL, psize = qemu_real_host_page_size;
> +unsigned long *bmap_clear = NULL;
> +size_t psize = qemu_real_host_page_size;
>  int ret;
>  
>  /*
> diff --git a/block/qcow2-cache.c b/block/qcow2-cache.c
> index 7444b9c4ab..4ad9f5929f 100644
> --- a/block/qcow2-cache.c
> +++ b/block/qcow2-cache.c
> @@ -74,7 +74,7 @@ static void qcow2_cache_table_release(Qcow2Cache *c, int i, 
> int num_tables)
>  /* Using MADV_DONTNEED to discard memory is a Linux-specific feature */
>  #ifdef CONFIG_LINUX
>  void *t = qcow2_cache_get_table_addr(c, i);
> -int align = qemu_real_host_page_size;
> +size_t align = qemu_real_host_page_size;
>  size_t mem_size = (size_t) c->table_size * num_tables;
>  size_t offset = QEMU_ALIGN_UP((uintptr_t) t, align) - (uintptr_t) t;
>  size_t length = QEMU_ALIGN_DOWN(mem_size - offset, align);
> diff --git a/exec.c b/exec.c
> index 6f381f98e2..4b6d52e01f 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -1657,7 +1657,7 @@ static int find_max_backend_pagesize(Object *obj, void 
> *opaque)
>   * TODO: We assume right now that all mapped host memory backends are
>   * used as RAM, however some might be used for different purposes.
>   */
> -long qemu_minrampagesize(void)
> +size_t qemu_minrampagesize(void)
>  {
>  long hpsize = LONG_MAX;
>  Object *memdev_root = object_resolve_path("/objects", NULL);
> @@ -1666,7 +1666,7 @@ long qemu_minrampagesize(void)
>  return hpsize;
>  }
>  
> -long qemu_maxrampagesize(void)
> +size_t qemu_maxrampagesize(void)
>  {
>  long pagesize = 0;
>  Object *memdev_root = object_resolve_path("/objects", NULL);
> @@ -1675,11 +1675,11 @@ long qemu_maxrampagesize(void)
>  return pagesize;
>  }
>  #else
> -long qemu_minrampagesize(void)
> +size_t qemu_minrampagesize(void)
>  {
>  return qemu_real_host_page_size;
>  }
> -long qemu_maxrampagesize(void)
> +size_t qemu_maxrampagesize(void)
>  {
>  return qemu_real_host_page_size;
>  }
> diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
> index 363cdb3f7b..a9da84fe30 100644
> --- a/hw/ppc/spapr_pci.c
> +++ b/hw/ppc/spapr_pci.c
> @@ -1810,7 +1810,7 @@ static void spapr_phb_realize(DeviceState *dev, Error 
> **errp)
>  

[PATCH-for-5.1? v2 0/2] util/pagesize: Make qemu_real_host_page_size of type size_t

2020-07-30 Thread Philippe Mathieu-Daudé
Since v1:
Make QEMU_VMALLOC_ALIGN unsigned in a previous patch

Philippe Mathieu-Daudé (2):
  qemu/osdep: Make QEMU_VMALLOC_ALIGN unsigned long
  util/pagesize: Make qemu_real_host_page_size of type size_t

 include/exec/ram_addr.h  | 4 ++--
 include/qemu/osdep.h | 6 +++---
 accel/kvm/kvm-all.c  | 3 ++-
 block/qcow2-cache.c  | 2 +-
 exec.c   | 8 
 hw/ppc/spapr_pci.c   | 2 +-
 hw/virtio/virtio-mem.c   | 2 +-
 migration/migration.c| 2 +-
 migration/postcopy-ram.c | 2 +-
 monitor/misc.c   | 2 +-
 util/pagesize.c  | 2 +-
 11 files changed, 18 insertions(+), 17 deletions(-)

-- 
2.21.3




[PATCH-for-5.1? v2 1/2] qemu/osdep: Make QEMU_VMALLOC_ALIGN unsigned long

2020-07-30 Thread Philippe Mathieu-Daudé
QEMU_VMALLOC_ALIGN sometimes expands to a signed type,
other times to an unsigned one. Unify on unsigned.

Signed-off-by: Philippe Mathieu-Daudé 
---
 include/qemu/osdep.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
index 20872e793e..085df8d508 100644
--- a/include/qemu/osdep.h
+++ b/include/qemu/osdep.h
@@ -454,10 +454,10 @@ void qemu_anon_ram_free(void *ptr, size_t size);
/* Use 2 MiB alignment so transparent hugepages can be used by KVM.
   Valgrind does not support alignments larger than 1 MiB,
   therefore we need special code which handles running on Valgrind. */
-#  define QEMU_VMALLOC_ALIGN (512 * 4096)
+#  define QEMU_VMALLOC_ALIGN (512 * 4096UL)
 #elif defined(__linux__) && defined(__s390x__)
/* Use 1 MiB (segment size) alignment so gmap can be used by KVM. */
-#  define QEMU_VMALLOC_ALIGN (256 * 4096)
+#  define QEMU_VMALLOC_ALIGN (256 * 4096UL)
 #elif defined(__linux__) && defined(__sparc__)
 #include 
 #  define QEMU_VMALLOC_ALIGN MAX(qemu_real_host_page_size, SHMLBA)
-- 
2.21.3




[PATCH-for-5.1? v2 2/2] util/pagesize: Make qemu_real_host_page_size of type size_t

2020-07-30 Thread Philippe Mathieu-Daudé
We use different types to hold 'qemu_real_host_page_size'.
Unify on 'size_t', which seems the best candidate.

Doing so fixes a format string issue in hw/virtio/virtio-mem.c
reported when building with GCC 4.9.4:

  hw/virtio/virtio-mem.c: In function ‘virtio_mem_set_block_size’:
  hw/virtio/virtio-mem.c:756:9: error: format ‘%x’ expects argument of type 
‘unsigned int’, but argument 7 has type ‘uintptr_t’ [-Werror=format=]
 error_setg(errp, "'%s' property has to be at least 0x%" PRIx32, name,
 ^

Fixes: 910b25766b ("virtio-mem: Paravirtualized memory hot(un)plug")
Reported-by: Bruce Rogers 
Signed-off-by: Philippe Mathieu-Daudé 
---
 include/exec/ram_addr.h  | 4 ++--
 include/qemu/osdep.h | 2 +-
 accel/kvm/kvm-all.c  | 3 ++-
 block/qcow2-cache.c  | 2 +-
 exec.c   | 8 
 hw/ppc/spapr_pci.c   | 2 +-
 hw/virtio/virtio-mem.c   | 2 +-
 migration/migration.c| 2 +-
 migration/postcopy-ram.c | 2 +-
 monitor/misc.c   | 2 +-
 util/pagesize.c  | 2 +-
 11 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 3ef729a23c..e07532266e 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -93,8 +93,8 @@ static inline unsigned long int 
ramblock_recv_bitmap_offset(void *host_addr,
 
 bool ramblock_is_pmem(RAMBlock *rb);
 
-long qemu_minrampagesize(void);
-long qemu_maxrampagesize(void);
+size_t qemu_minrampagesize(void);
+size_t qemu_maxrampagesize(void);
 
 /**
  * qemu_ram_alloc_from_file,
diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
index 085df8d508..77115a8270 100644
--- a/include/qemu/osdep.h
+++ b/include/qemu/osdep.h
@@ -635,10 +635,10 @@ char *qemu_get_pid_name(pid_t pid);
  */
 pid_t qemu_fork(Error **errp);
 
+extern size_t qemu_real_host_page_size;
 /* Using intptr_t ensures that qemu_*_page_mask is sign-extended even
  * when intptr_t is 32-bit and we are aligning a long long.
  */
-extern uintptr_t qemu_real_host_page_size;
 extern intptr_t qemu_real_host_page_mask;
 
 extern int qemu_icache_linesize;
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 63ef6af9a1..59becfbd6c 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -674,7 +674,8 @@ static int kvm_log_clear_one_slot(KVMSlot *mem, int as_id, 
uint64_t start,
 KVMState *s = kvm_state;
 uint64_t end, bmap_start, start_delta, bmap_npages;
 struct kvm_clear_dirty_log d;
-unsigned long *bmap_clear = NULL, psize = qemu_real_host_page_size;
+unsigned long *bmap_clear = NULL;
+size_t psize = qemu_real_host_page_size;
 int ret;
 
 /*
diff --git a/block/qcow2-cache.c b/block/qcow2-cache.c
index 7444b9c4ab..4ad9f5929f 100644
--- a/block/qcow2-cache.c
+++ b/block/qcow2-cache.c
@@ -74,7 +74,7 @@ static void qcow2_cache_table_release(Qcow2Cache *c, int i, 
int num_tables)
 /* Using MADV_DONTNEED to discard memory is a Linux-specific feature */
 #ifdef CONFIG_LINUX
 void *t = qcow2_cache_get_table_addr(c, i);
-int align = qemu_real_host_page_size;
+size_t align = qemu_real_host_page_size;
 size_t mem_size = (size_t) c->table_size * num_tables;
 size_t offset = QEMU_ALIGN_UP((uintptr_t) t, align) - (uintptr_t) t;
 size_t length = QEMU_ALIGN_DOWN(mem_size - offset, align);
diff --git a/exec.c b/exec.c
index 6f381f98e2..4b6d52e01f 100644
--- a/exec.c
+++ b/exec.c
@@ -1657,7 +1657,7 @@ static int find_max_backend_pagesize(Object *obj, void 
*opaque)
  * TODO: We assume right now that all mapped host memory backends are
  * used as RAM, however some might be used for different purposes.
  */
-long qemu_minrampagesize(void)
+size_t qemu_minrampagesize(void)
 {
 long hpsize = LONG_MAX;
 Object *memdev_root = object_resolve_path("/objects", NULL);
@@ -1666,7 +1666,7 @@ long qemu_minrampagesize(void)
 return hpsize;
 }
 
-long qemu_maxrampagesize(void)
+size_t qemu_maxrampagesize(void)
 {
 long pagesize = 0;
 Object *memdev_root = object_resolve_path("/objects", NULL);
@@ -1675,11 +1675,11 @@ long qemu_maxrampagesize(void)
 return pagesize;
 }
 #else
-long qemu_minrampagesize(void)
+size_t qemu_minrampagesize(void)
 {
 return qemu_real_host_page_size;
 }
-long qemu_maxrampagesize(void)
+size_t qemu_maxrampagesize(void)
 {
 return qemu_real_host_page_size;
 }
diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
index 363cdb3f7b..a9da84fe30 100644
--- a/hw/ppc/spapr_pci.c
+++ b/hw/ppc/spapr_pci.c
@@ -1810,7 +1810,7 @@ static void spapr_phb_realize(DeviceState *dev, Error 
**errp)
 char *namebuf;
 int i;
 PCIBus *bus;
-uint64_t msi_window_size = 4096;
+size_t msi_window_size = 4096;
 SpaprTceTable *tcet;
 const unsigned windows_supported = spapr_phb_windows_supported(sphb);
 Error *local_err = NULL;
diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index c12e9f79b0..34344cec39 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -753,7 +753,7 @@ 

Re: [PATCH-for-5.1?] util/pagesize: Make qemu_real_host_page_size of type size_t

2020-07-30 Thread Philippe Mathieu-Daudé
On 7/30/20 3:59 PM, Philippe Mathieu-Daudé wrote:
> We use different types to hold 'qemu_real_host_page_size'.
> Unify on 'size_t', which seems the best candidate.
> 
> Doing so fixes a format string issue in hw/virtio/virtio-mem.c
> reported when building with GCC 4.9.4:
> 
>   hw/virtio/virtio-mem.c: In function ‘virtio_mem_set_block_size’:
>   hw/virtio/virtio-mem.c:756:9: error: format ‘%x’ expects argument of type 
> ‘unsigned int’, but argument 7 has type ‘uintptr_t’ [-Werror=format=]
>  error_setg(errp, "'%s' property has to be at least 0x%" PRIx32, name,
>  ^
> 
> Fixes: 910b25766b ("virtio-mem: Paravirtualized memory hot(un)plug")
> Reported-by: Bruce Rogers 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  include/exec/ram_addr.h  | 4 ++--
>  include/qemu/osdep.h | 2 +-
>  accel/kvm/kvm-all.c  | 3 ++-
>  block/qcow2-cache.c  | 2 +-
>  exec.c   | 8 
>  hw/ppc/spapr_pci.c   | 2 +-
>  hw/virtio/virtio-mem.c   | 2 +-
>  migration/migration.c| 2 +-
>  migration/postcopy-ram.c | 2 +-
>  monitor/misc.c   | 2 +-
>  util/pagesize.c  | 2 +-
>  11 files changed, 16 insertions(+), 15 deletions(-)
> 
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index 3ef729a23c..e07532266e 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -93,8 +93,8 @@ static inline unsigned long int 
> ramblock_recv_bitmap_offset(void *host_addr,
>  
>  bool ramblock_is_pmem(RAMBlock *rb);
>  
> -long qemu_minrampagesize(void);
> -long qemu_maxrampagesize(void);
> +size_t qemu_minrampagesize(void);
> +size_t qemu_maxrampagesize(void);
>  
>  /**
>   * qemu_ram_alloc_from_file,
> diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
> index 20872e793e..619b8a7a8c 100644
> --- a/include/qemu/osdep.h
> +++ b/include/qemu/osdep.h
> @@ -635,10 +635,10 @@ char *qemu_get_pid_name(pid_t pid);
>   */
>  pid_t qemu_fork(Error **errp);
>  
> +extern size_t qemu_real_host_page_size;
>  /* Using intptr_t ensures that qemu_*_page_mask is sign-extended even
>   * when intptr_t is 32-bit and we are aligning a long long.
>   */
> -extern uintptr_t qemu_real_host_page_size;
>  extern intptr_t qemu_real_host_page_mask;
>  

This is incomplete, as I forgot to make QEMU_VMALLOC_ALIGN unsigned...

I'll respin.




Re: [PATCH v11 10/11] qcow2_format.py: introduce Qcow2HeaderExtensionsDoc class

2020-07-30 Thread Andrey Shinkevich

On 28.07.2020 14:36, Vladimir Sementsov-Ogievskiy wrote:

17.07.2020 11:14, Andrey Shinkevich wrote:

Per the original script design, the QcowHeader class may dump the QCOW2 header
info separately from the QCOW2 extensions info. To implement the
to_dict() method for dumping extensions, let us introduce the class
Qcow2HeaderExtensionsDoc.


I think, when dumping to qcow2, there is no need to omit extensions; let's 
just always dump them.




Signed-off-by: Andrey Shinkevich 
---
  tests/qemu-iotests/qcow2_format.py | 9 +
  1 file changed, 9 insertions(+)

diff --git a/tests/qemu-iotests/qcow2_format.py 
b/tests/qemu-iotests/qcow2_format.py

index 19d29b8..d2a8659 100644
--- a/tests/qemu-iotests/qcow2_format.py
+++ b/tests/qemu-iotests/qcow2_format.py
@@ -248,6 +248,15 @@ class Qcow2BitmapTable:
  return dict(entries=self.entries)
 
 
+class Qcow2HeaderExtensionsDoc:
+
+    def __init__(self, extensions):
+    self.extensions = extensions
+
+    def to_dict(self):
+    return dict(Header_extensions=self.extensions)


s/H/h/



It is the original code and the change would be unrelated to my patch.

Should I make a separate patch for this change?

Andrey



+
+
  QCOW2_EXT_MAGIC_BITMAPS = 0x23852875









[PATCH-for-5.1?] util/pagesize: Make qemu_real_host_page_size of type size_t

2020-07-30 Thread Philippe Mathieu-Daudé
We use different types to hold 'qemu_real_host_page_size'.
Unify on 'size_t', which seems the best candidate.

Doing so fixes a format string issue in hw/virtio/virtio-mem.c
reported when building with GCC 4.9.4:

  hw/virtio/virtio-mem.c: In function ‘virtio_mem_set_block_size’:
  hw/virtio/virtio-mem.c:756:9: error: format ‘%x’ expects argument of type 
‘unsigned int’, but argument 7 has type ‘uintptr_t’ [-Werror=format=]
 error_setg(errp, "'%s' property has to be at least 0x%" PRIx32, name,
 ^

Fixes: 910b25766b ("virtio-mem: Paravirtualized memory hot(un)plug")
Reported-by: Bruce Rogers 
Signed-off-by: Philippe Mathieu-Daudé 
---
 include/exec/ram_addr.h  | 4 ++--
 include/qemu/osdep.h | 2 +-
 accel/kvm/kvm-all.c  | 3 ++-
 block/qcow2-cache.c  | 2 +-
 exec.c   | 8 
 hw/ppc/spapr_pci.c   | 2 +-
 hw/virtio/virtio-mem.c   | 2 +-
 migration/migration.c| 2 +-
 migration/postcopy-ram.c | 2 +-
 monitor/misc.c   | 2 +-
 util/pagesize.c  | 2 +-
 11 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 3ef729a23c..e07532266e 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -93,8 +93,8 @@ static inline unsigned long int 
ramblock_recv_bitmap_offset(void *host_addr,
 
 bool ramblock_is_pmem(RAMBlock *rb);
 
-long qemu_minrampagesize(void);
-long qemu_maxrampagesize(void);
+size_t qemu_minrampagesize(void);
+size_t qemu_maxrampagesize(void);
 
 /**
  * qemu_ram_alloc_from_file,
diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
index 20872e793e..619b8a7a8c 100644
--- a/include/qemu/osdep.h
+++ b/include/qemu/osdep.h
@@ -635,10 +635,10 @@ char *qemu_get_pid_name(pid_t pid);
  */
 pid_t qemu_fork(Error **errp);
 
+extern size_t qemu_real_host_page_size;
 /* Using intptr_t ensures that qemu_*_page_mask is sign-extended even
  * when intptr_t is 32-bit and we are aligning a long long.
  */
-extern uintptr_t qemu_real_host_page_size;
 extern intptr_t qemu_real_host_page_mask;
 
 extern int qemu_icache_linesize;
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 63ef6af9a1..59becfbd6c 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -674,7 +674,8 @@ static int kvm_log_clear_one_slot(KVMSlot *mem, int as_id, 
uint64_t start,
 KVMState *s = kvm_state;
 uint64_t end, bmap_start, start_delta, bmap_npages;
 struct kvm_clear_dirty_log d;
-unsigned long *bmap_clear = NULL, psize = qemu_real_host_page_size;
+unsigned long *bmap_clear = NULL;
+size_t psize = qemu_real_host_page_size;
 int ret;
 
 /*
diff --git a/block/qcow2-cache.c b/block/qcow2-cache.c
index 7444b9c4ab..4ad9f5929f 100644
--- a/block/qcow2-cache.c
+++ b/block/qcow2-cache.c
@@ -74,7 +74,7 @@ static void qcow2_cache_table_release(Qcow2Cache *c, int i, 
int num_tables)
 /* Using MADV_DONTNEED to discard memory is a Linux-specific feature */
 #ifdef CONFIG_LINUX
 void *t = qcow2_cache_get_table_addr(c, i);
-int align = qemu_real_host_page_size;
+size_t align = qemu_real_host_page_size;
 size_t mem_size = (size_t) c->table_size * num_tables;
 size_t offset = QEMU_ALIGN_UP((uintptr_t) t, align) - (uintptr_t) t;
 size_t length = QEMU_ALIGN_DOWN(mem_size - offset, align);
diff --git a/exec.c b/exec.c
index 6f381f98e2..4b6d52e01f 100644
--- a/exec.c
+++ b/exec.c
@@ -1657,7 +1657,7 @@ static int find_max_backend_pagesize(Object *obj, void 
*opaque)
  * TODO: We assume right now that all mapped host memory backends are
  * used as RAM, however some might be used for different purposes.
  */
-long qemu_minrampagesize(void)
+size_t qemu_minrampagesize(void)
 {
 long hpsize = LONG_MAX;
 Object *memdev_root = object_resolve_path("/objects", NULL);
@@ -1666,7 +1666,7 @@ long qemu_minrampagesize(void)
 return hpsize;
 }
 
-long qemu_maxrampagesize(void)
+size_t qemu_maxrampagesize(void)
 {
 long pagesize = 0;
 Object *memdev_root = object_resolve_path("/objects", NULL);
@@ -1675,11 +1675,11 @@ long qemu_maxrampagesize(void)
 return pagesize;
 }
 #else
-long qemu_minrampagesize(void)
+size_t qemu_minrampagesize(void)
 {
 return qemu_real_host_page_size;
 }
-long qemu_maxrampagesize(void)
+size_t qemu_maxrampagesize(void)
 {
 return qemu_real_host_page_size;
 }
diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
index 363cdb3f7b..a9da84fe30 100644
--- a/hw/ppc/spapr_pci.c
+++ b/hw/ppc/spapr_pci.c
@@ -1810,7 +1810,7 @@ static void spapr_phb_realize(DeviceState *dev, Error 
**errp)
 char *namebuf;
 int i;
 PCIBus *bus;
-uint64_t msi_window_size = 4096;
+size_t msi_window_size = 4096;
 SpaprTceTable *tcet;
 const unsigned windows_supported = spapr_phb_windows_supported(sphb);
 Error *local_err = NULL;
diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index c12e9f79b0..34344cec39 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -753,7 +753,7 @@ 

Re: [PATCH] schemas: Add vim modeline

2020-07-30 Thread Daniel P . Berrangé
On Thu, Jul 30, 2020 at 01:51:10PM +0200, Markus Armbruster wrote:
> Daniel P. Berrangé  writes:
> 
> >   modify them so that we can load the 
> > files straight into the python interpreter as code, and not parse 
> > them as data. I feel unhappy about treating data as code though.
> 
> Stress on *can* load.  Doesn't mean we should.
> 
> Ancient prior art: Lisp programs routinely use s-expressions as
> configuration file syntax.  They don't load them as code, they read them
> as data.
> 
> With Python, it's ast.parse(), I think.

Yes, that could work


> > struct: ImageInfoSpecificQCow2
> > data:
> >   compat: str
> >   "*data-file": str
> >   "*data-file-raw": bool
> >   "*lazy-refcounts": bool
> >   "*corrupt": bool
> >   refcount-bits: int
> >   "*encrypt": ImageInfoSpecificQCow2Encryption
> >   "*bitmaps":
> > - Qcow2BitmapInfo
> >   compression-type: Qcow2CompressionType
> >
> >
> > Then we could use a regular off the shelf YAML parser in python.
> >
> > The uglyiness with quotes is due to the use of "*". Slightly less ugly
> > if we simply declare that quotes are always used, even where they're
> > not strictly required.
> 
> StrictYAML insists on quotes.

I wouldn't suggest StrictYAML, just normal YAML is what pretty much
everyone uses.

If we came up with a different way to mark a field as optional
instead of using the magic "*" then we wouldn't need to quote
anything

> I hate having to quote identifiers.  There's a reason we don't write
> 
> 'int'
> 'main'('int', 'argc', 'char' *'argv'[])
> {
> 'printf'("hello world\n");
> return 0;
> }
> 
> > struct: ImageInfoSpecificQCow2
> > data:
> >   "compat": "str"
> >   "*data-file": "str"
> >   "*data-file-raw": "bool"
> >   "*lazy-refcounts": "bool"
> >   "*corrupt": "bool"
> >   "refcount-bits": "int"
> >   "*encrypt": "ImageInfoSpecificQCow2Encryption"
> >   "*bitmaps":
> > - "Qcow2BitmapInfo"
> >   "compression-type": "Qcow2CompressionType"
> >
> > With the use of "---" to denote the start of document, we have no trouble 
> > parsing our files which would actually be a concatenation of multiple 
> > documents. The python YAML library provides the easy yaml.load_all()
> > method.
> 
> Required reading on YAML:
> https://www.arp242.net/yaml-config.html

I don't think this is especially helpful to our evaluation. You can write
such blog posts about pretty much anything if you want to pick holes in a
proposal. Certainly there's plenty of awful stuff you can write about
JSON, and Python.

> Some of the criticism there doesn't matter for our use case.

Yeah, what matters is whether it can do the job we need in a way that is
better than what we have today, and whether there are any further options
to consider that might be viable alternatives.

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH] schemas: Add vim modeline

2020-07-30 Thread Markus Armbruster
Andrea Bolognani  writes:

> The various schemas included in QEMU use a JSON-based format which
> is, however, strictly speaking not valid JSON.
>
> As a consequence, when vim tries to apply syntax highlight rules
> for JSON (as guessed from the file name), the result is an unreadable
> mess which mostly consist of red markers pointing out supposed errors
> in, well, pretty much everything.
>
> Using Python syntax highlighting produces much better results, and
> in fact these files already start with specially-formatted comments
> that instruct Emacs to process them as if they were Python files.
>
> This commit adds the equivalent special comments for vim.
>
> Signed-off-by: Andrea Bolognani 

Fixing the real mistake would be better, but until then, mitigating it
helps, like the existing Emacs modelines do.

Queued, thanks!




Re: [PATCH v2 14/16] hw/block/nvme: consolidate qsg/iov clearing

2020-07-30 Thread Minwoo Im
> Hi Minwoo,
>
> The idea is that on 'exit' we release request resources (like the qsg and
> iov). On 'clear' we initialize the request by clearing the struct. I
> guess I could call it nvme_req_init instead maybe, but really - it is
> clearing it.

Yeah, I just wanted to make it clear to understand myself here.

Reviewed-by: Minwoo Im 



[PATCH 2/2] iotests/169: Test source cont with backing bmap

2020-07-30 Thread Max Reitz
Test migrating from a VM with a persistent bitmap in the backing chain,
and then continuing that VM after the migration.

Signed-off-by: Max Reitz 
---
 tests/qemu-iotests/169 | 64 +-
 tests/qemu-iotests/169.out |  4 +--
 2 files changed, 65 insertions(+), 3 deletions(-)

diff --git a/tests/qemu-iotests/169 b/tests/qemu-iotests/169
index 2c5a132aa3..40afb15299 100755
--- a/tests/qemu-iotests/169
+++ b/tests/qemu-iotests/169
@@ -24,11 +24,12 @@ import time
 import itertools
 import operator
 import re
-from iotests import qemu_img
+from iotests import qemu_img, qemu_img_create, Timeout
 
 
 disk_a = os.path.join(iotests.test_dir, 'disk_a')
 disk_b = os.path.join(iotests.test_dir, 'disk_b')
+base_a = os.path.join(iotests.test_dir, 'base_a')
 size = '1M'
 mig_file = os.path.join(iotests.test_dir, 'mig_file')
 mig_cmd = 'exec: cat > ' + mig_file
@@ -234,6 +235,67 @@ for cmb in list(itertools.product((True, False), 
repeat=2)):
 inject_test_case(TestDirtyBitmapMigration, name,
  'do_test_migration_resume_source', *list(cmb))
 
+
+class TestDirtyBitmapBackingMigration(iotests.QMPTestCase):
+def setUp(self):
+qemu_img_create('-f', iotests.imgfmt, base_a, size)
+qemu_img_create('-f', iotests.imgfmt, '-F', iotests.imgfmt,
+'-b', base_a, disk_a, size)
+
+for f in (disk_a, base_a):
+qemu_img('bitmap', '--add', f, 'bmap0')
+
+blockdev = {
+'node-name': 'node0',
+'driver': iotests.imgfmt,
+'file': {
+'driver': 'file',
+'filename': disk_a
+},
+'backing': {
+'node-name': 'node0-base',
+'driver': iotests.imgfmt,
+'file': {
+'driver': 'file',
+'filename': base_a
+}
+}
+}
+
+self.vm = iotests.VM()
+self.vm.launch()
+
+result = self.vm.qmp('blockdev-add', **blockdev)
+self.assert_qmp(result, 'return', {})
+
+# Check that the bitmaps are there
+for node in self.vm.qmp('query-named-block-nodes', 
flat=True)['return']:
+if 'node0' in node['node-name']:
+self.assert_qmp(node, 'dirty-bitmaps[0]/name', 'bmap0')
+
+caps = [{'capability': 'events', 'state': True}]
+result = self.vm.qmp('migrate-set-capabilities', capabilities=caps)
+self.assert_qmp(result, 'return', {})
+
+def tearDown(self):
+self.vm.shutdown()
+for f in (disk_a, base_a):
+os.remove(f)
+
+def test_cont_on_source(self):
+"""
+Continue the source after migration.
+"""
+result = self.vm.qmp('migrate', uri=f'exec: cat > /dev/null')
+self.assert_qmp(result, 'return', {})
+
+with Timeout(10, 'Migration timeout'):
+self.vm.wait_migration('postmigrate')
+
+result = self.vm.qmp('cont')
+self.assert_qmp(result, 'return', {})
+
+
 if __name__ == '__main__':
 iotests.main(supported_fmts=['qcow2'],
  supported_protocols=['file'])
diff --git a/tests/qemu-iotests/169.out b/tests/qemu-iotests/169.out
index 5c26d15c0d..cafb8161f7 100644
--- a/tests/qemu-iotests/169.out
+++ b/tests/qemu-iotests/169.out
@@ -1,5 +1,5 @@
-....................................
+.....................................
 ----------------------------------------------------------------------
-Ran 36 tests
+Ran 37 tests
 
 OK
-- 
2.26.2




[PATCH 1/2] qcow2: Release read-only bitmaps when inactivated

2020-07-30 Thread Max Reitz
During migration, we release all bitmaps after storing them on disk, as
long as they are (1) stored on disk, (2) not read-only, and (3)
consistent.

(2) seems arbitrary, though.  The reason we do not release them is
because we do not write them, as there is no need to; and then we just
forget about all bitmaps that we have not written to the file.  However,
read-only persistent bitmaps are still in the file and in sync with
their in-memory representation, so we may as well release them just like
any R/W bitmap that we have updated.

It leads to actual problems, too: After migration, letting the source
continue may result in an error if there were any bitmaps on read-only
nodes (such as backing images), because those have not been released by
bdrv_inactivate_all(), but bdrv_invalidate_cache_all() attempts to reload
them (which fails, because they are still present in memory).

Signed-off-by: Max Reitz 
---
 block/qcow2-bitmap.c | 23 +++
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/block/qcow2-bitmap.c b/block/qcow2-bitmap.c
index 1f38806ca6..8c34b2aef7 100644
--- a/block/qcow2-bitmap.c
+++ b/block/qcow2-bitmap.c
@@ -1562,11 +1562,22 @@ void 
qcow2_store_persistent_dirty_bitmaps(BlockDriverState *bs,
 Qcow2Bitmap *bm;
 
 if (!bdrv_dirty_bitmap_get_persistence(bitmap) ||
-bdrv_dirty_bitmap_readonly(bitmap) ||
 bdrv_dirty_bitmap_inconsistent(bitmap)) {
 continue;
 }
 
+if (bdrv_dirty_bitmap_readonly(bitmap)) {
+/*
+ * Store the bitmap in the associated Qcow2Bitmap so it
+ * can be released later
+ */
+bm = find_bitmap_by_name(bm_list, name);
+if (bm) {
+bm->dirty_bitmap = bitmap;
+}
+continue;
+}
+
 need_write = true;
 
 if (check_constraints_on_bitmap(bs, name, granularity, errp) < 0) {
@@ -1618,7 +1629,9 @@ void 
qcow2_store_persistent_dirty_bitmaps(BlockDriverState *bs,
 
 /* allocate clusters and store bitmaps */
 QSIMPLEQ_FOREACH(bm, bm_list, entry) {
-if (bm->dirty_bitmap == NULL) {
+BdrvDirtyBitmap *bitmap = bm->dirty_bitmap;
+
+if (bitmap == NULL || bdrv_dirty_bitmap_readonly(bitmap)) {
 continue;
 }
 
@@ -1641,6 +1654,7 @@ void 
qcow2_store_persistent_dirty_bitmaps(BlockDriverState *bs,
 g_free(tb);
 }
 
+success:
 if (release_stored) {
 QSIMPLEQ_FOREACH(bm, bm_list, entry) {
 if (bm->dirty_bitmap == NULL) {
@@ -1651,13 +1665,14 @@ void 
qcow2_store_persistent_dirty_bitmaps(BlockDriverState *bs,
 }
 }
 
-success:
 bitmap_list_free(bm_list);
 return;
 
 fail:
 QSIMPLEQ_FOREACH(bm, bm_list, entry) {
-if (bm->dirty_bitmap == NULL || bm->table.offset == 0) {
+if (bm->dirty_bitmap == NULL || bm->table.offset == 0 ||
+bdrv_dirty_bitmap_readonly(bm->dirty_bitmap))
+{
 continue;
 }
 
-- 
2.26.2




[PATCH 0/2] qcow2: Release read-only bitmaps when inactivated

2020-07-30 Thread Max Reitz
Hi,

When beginning migration, the qcow2 driver syncs all persistent bitmaps
to disk and then releases them.  If the user decides to continue on the
source after migration, those bitmaps are re-loaded from the qcow2
image.

However, we only do this for bitmaps that were actively synced, i.e. R/W
bitmaps.  RO bitmaps (those on backing images) are not written and thus
not released.  However, we still try to re-load them when continuing,
and that will then fail.

To fix this problem, I think we should just consider RO bitmaps to be in
sync from the beginning, so we can release them just like bitmaps that
we have actively written back to the image.  This is done by patch 1.

However, there’s a catch: Peter Krempa noted that it isn’t in libvirt’s
interest for the bitmaps to be released before migration at all, because
this makes them disappear from query-named-block-nodes' dirty-bitmaps
list, but libvirt needs the bitmaps to be there:

https://bugzilla.redhat.com/show_bug.cgi?id=1858739#c3

If it’s really not feasible to keep the bitmaps around, then I suppose
what might work for libvirt is to query
image/format-specific/data/bitmaps in addition to dirty-bitmaps (every
bitmap that we released before migration must be a persistent bitmap).

What are your thoughts on this?


Max Reitz (2):
  qcow2: Release read-only bitmaps when inactivated
  iotests/169: Test source cont with backing bmap

 block/qcow2-bitmap.c   | 23 +++---
 tests/qemu-iotests/169 | 64 +-
 tests/qemu-iotests/169.out |  4 +--
 3 files changed, 84 insertions(+), 7 deletions(-)

-- 
2.26.2




Re: [PATCH] schemas: Add vim modeline

2020-07-30 Thread Markus Armbruster
Daniel P. Berrangé  writes:

> On Thu, Jul 30, 2020 at 11:07:26AM +0200, Markus Armbruster wrote:
>> Andrea Bolognani  writes:
>> 
>> > The various schemas included in QEMU use a JSON-based format which
>> > is, however, strictly speaking not valid JSON.
>> >
>> > As a consequence, when vim tries to apply syntax highlight rules
>> > for JSON (as guessed from the file name), the result is an unreadable
>> > mess which mostly consist of red markers pointing out supposed errors
>> > in, well, pretty much everything.
>> >
>> > Using Python syntax highlighting produces much better results, and
>> > in fact these files already start with specially-formatted comments
>> > that instruct Emacs to process them as if they were Python files.
>> >
>> > This commit adds the equivalent special comments for vim.
>> >
>> > Signed-off-by: Andrea Bolognani 
>
> Given that we already have emacs mode-lines, I see no reason to
> not also have vim mode-lines, so regardless of the deeper discussion
> I think this is patch is fine to merge in the short term
>
>   Reviewed-by: Daniel P. Berrangé 
>
>
>> Naming QAPI schema files .json even though their contents isn't was a
>> mistake.  Correcting it would be a pain.  If we correct it, then the
>> sooner the better.
>> 
>> Renaming them to .py gives decent editor support out of the box.  Their
>> contents isn't quite Python, though: true vs. True, false vs. False.  Do
>> we care?  Only a few dozen occurences; they could be adjusted.
>> 
>> Renaming them to .qapi would perhaps be less confusing, for the price of
>> "out of the box".
>
> IMHO, the critical rule is that if you pick a particular file extension
> associated with an existing language, you absolutely MUST BE compliant
> with that language.

Can't argue with that :)

> We fail at compliance with both JSON and Python because we're actually
> using our own DSL (domain specific language).
>
> IOW if we're going to stick with our current file format, then we should
> be naming them .qapi. We can still use an editor mode line if we want to
> claim we're approximately equiv to another language, but we can't be
> surprised if editors get upset.
>
>
> The bigger question is whether having our own DSL is justified ?
>
> I'm *really* sceptical that it is.

To be precise: our own DSL *syntax*.  Semantics is a separate question
to be skeptical about.

> We can't use JSON because it lacks comments. So we invented our own
> pseudo-JSON parser that supported comments, and used ' instead of "
> for some reason. We also needed to be able to parse a sequence of
> multiple JSON documents in one file. We should have just picked a 
> different language because JSON clearly didn't do what we needed.

JSON is an exceptionally poor choice for a DSL, or even a configuration
language.

Correcting our mistake involves a flag day and a re-learn.  We need to
weigh costs against benefits.

The QAPI schema language has two layers:

* JSON, with a lexical and a syntactical sub-layer (both in parser.py)

* QAPI, with a context-free and a context-dependent sub-layer (in
  expr.py and schema.py, respectively)

Replacing the JSON layer is possible as long as the replacement is
sufficiently expressive (not a tall order).

> You suggest naming them .py. If we do that, we must declare that they
> are literally Python code

Agree.

>   modify them so that we can load the 
> files straight into the python interpreter as code, and not parse 
> them as data. I feel unhappy about treating data as code though.

Stress on *can* load.  Doesn't mean we should.

Ancient prior art: Lisp programs routinely use s-expressions as
configuration file syntax.  They don't load them as code, they read them
as data.

With Python, it's ast.parse(), I think.

> While JSON doesn't do what we need, its second-cousin YAML is a more
> flexible format. Taking one example
>
> ---
> ##
> # @ImageInfoSpecificQCow2:
> #
> # @compat: compatibility level
> #
> # ...snip...
> #
> # Since: 1.7
> ##
> struct: ImageInfoSpecificQCow2
> data:
>   compat: str
>   "*data-file": str
>   "*data-file-raw": bool
>   "*lazy-refcounts": bool
>   "*corrupt": bool
>   refcount-bits: int
>   "*encrypt": ImageInfoSpecificQCow2Encryption
>   "*bitmaps":
> - Qcow2BitmapInfo
>   compression-type: Qcow2CompressionType
>
>
> Then we could use a regular off the shelf YAML parser in python.
>
> The uglyiness with quotes is due to the use of "*". Slightly less ugly
> if we simply declare that quotes are always used, even where they're
> not strictly required.

StrictYAML insists on quotes.

I hate having to quote identifiers.  There's a reason we don't write

'int'
'main'('int', 'argc', 'char' *'argv'[])
{
'printf'("hello world\n");
return 0;
}

> struct: ImageInfoSpecificQCow2
> data:
>   "compat": "str"
>   "*data-file": "str"
>   "*data-file-raw": "bool"
>   "*lazy-refcounts": "bool"
>   "*corrupt": "bool"
>   "refcount-bits": "int"
>   "*encrypt": 

Re: [PATCH v4 1/2] nvme: indicate CMB support through controller capabilities register

2020-07-30 Thread Maxim Levitsky
On Tue, 2020-07-07 at 21:15 +0200, Klaus Jensen wrote:
> On Jul  7 19:27, Maxim Levitsky wrote:
> > On Wed, 2020-07-01 at 14:48 -0700, Andrzej Jakowski wrote:
> > > This patch sets CMBS bit in controller capabilities register when user
> > > configures NVMe driver with CMB support, so capabilities are correctly
> > > reported to guest OS.
> > > 
> > > Signed-off-by: Andrzej Jakowski 
> > > Reviewed-by: Klaus Jensen 
> > > ---
> > >  hw/block/nvme.c  | 2 +-
> > >  include/block/nvme.h | 6 +-
> > >  2 files changed, 6 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > > index 1aee042d4c..9f11f3e9da 100644
> > > --- a/hw/block/nvme.c
> > > +++ b/hw/block/nvme.c
> > > @@ -1582,6 +1582,7 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice 
> > > *pci_dev)
> > >  NVME_CAP_SET_TO(n->bar.cap, 0xf);
> > >  NVME_CAP_SET_CSS(n->bar.cap, 1);
> > >  NVME_CAP_SET_MPSMAX(n->bar.cap, 4);
> > > +NVME_CAP_SET_CMBS(n->bar.cap, n->params.cmb_size_mb ? 1 : 0);
> > >  
> > >  n->bar.vs = 0x00010200;
> > >  n->bar.intmc = n->bar.intms = 0;
> > > @@ -1591,7 +1592,6 @@ static void nvme_realize(PCIDevice *pci_dev, Error 
> > > **errp)
> > >  {
> > >  NvmeCtrl *n = NVME(pci_dev);
> > >  Error *local_err = NULL;
> > > -
> > >  int i;
> > >  
> > >  nvme_check_constraints(n, &local_err);
> > > diff --git a/include/block/nvme.h b/include/block/nvme.h
> > > index 1720ee1d51..14cf398dfa 100644
> > > --- a/include/block/nvme.h
> > > +++ b/include/block/nvme.h
> > > @@ -35,6 +35,7 @@ enum NvmeCapShift {
> > >  CAP_MPSMIN_SHIFT   = 48,
> > >  CAP_MPSMAX_SHIFT   = 52,
> > >  CAP_PMR_SHIFT  = 56,
> > > +CAP_CMB_SHIFT  = 57,
> > >  };
> > >  
> > >  enum NvmeCapMask {
> > > @@ -48,6 +49,7 @@ enum NvmeCapMask {
> > >  CAP_MPSMIN_MASK= 0xf,
> > >  CAP_MPSMAX_MASK= 0xf,
> > >  CAP_PMR_MASK   = 0x1,
> > > +CAP_CMB_MASK   = 0x1,
> > >  };
> > >  
> > >  #define NVME_CAP_MQES(cap)  (((cap) >> CAP_MQES_SHIFT)   & CAP_MQES_MASK)
> > > @@ -78,8 +80,10 @@ enum NvmeCapMask {
> > > << 
> > > CAP_MPSMIN_SHIFT)
> > >  #define NVME_CAP_SET_MPSMAX(cap, val) (cap |= (uint64_t)(val & 
> > > CAP_MPSMAX_MASK)\
> > >  << 
> > > CAP_MPSMAX_SHIFT)
> > > -#define NVME_CAP_SET_PMRS(cap, val) (cap |= (uint64_t)(val & 
> > > CAP_PMR_MASK)\
> > > +#define NVME_CAP_SET_PMRS(cap, val)   (cap |= (uint64_t)(val & 
> > > CAP_PMR_MASK)   \
> > >  << 
> > > CAP_PMR_SHIFT)
> > > +#define NVME_CAP_SET_CMBS(cap, val)   (cap |= (uint64_t)(val & 
> > > CAP_CMB_MASK)   \
> > > +   << 
> > > CAP_CMB_SHIFT)
> > >  
> > >  enum NvmeCcShift {
> > >  CC_EN_SHIFT = 0,
> > 
> > I wonder how this could have been forgotten. Hmm.
> > I see that the Linux kernel uses CMBSZ != 0 for that.
> > I guess this explains it.
> > 
> > Reviewed-by: Maxim Levitsky 
> > 
> 
> It is a v1.4 field. The CMB support was added when NVMe was at v1.2.
> And the Linux kernel is also basically adhering to v1.3 wrt. CMB
> support. In v1.4 the host actually needs to specifically enable the CMB
> - and that is not something the kernel does currently IIRC.
> 
Ah, makes sense!
By now I have specs for each NVMe revision, but I sometimes get lazy about
cross-checking them.

Best regards,
Maxim Levitsky
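As an aside, the shift/mask macro pattern from include/block/nvme.h that the patch above extends can be sketched like this (transcribed to Python purely for illustration; the real macros are C preprocessor definitions):

```python
# Bit positions and masks for the NVMe CAP register, mirroring the
# NvmeCapShift/NvmeCapMask enums quoted in the patch above.
CAP_PMR_SHIFT = 56
CAP_CMB_SHIFT = 57  # CMBS is new in NVMe v1.4
CAP_PMR_MASK = 0x1
CAP_CMB_MASK = 0x1

def cap_set_cmbs(cap: int, val: int) -> int:
    """Mirror of NVME_CAP_SET_CMBS: OR the masked value into bit 57."""
    return cap | ((val & CAP_CMB_MASK) << CAP_CMB_SHIFT)

def cap_cmbs(cap: int) -> int:
    """Extract bit 57 again, the way a host driver would read it."""
    return (cap >> CAP_CMB_SHIFT) & CAP_CMB_MASK

cap = 0
cap = cap_set_cmbs(cap, 1)  # cmb_size_mb configured -> advertise CMBS
assert cap_cmbs(cap) == 1
```

Each field of the 64-bit register gets a shift and a single-bit (or multi-bit) mask, so setting and extracting fields stays mechanical.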




Re: [PATCH v2 14/16] hw/block/nvme: consolidate qsg/iov clearing

2020-07-30 Thread Klaus Jensen
On Jul 30 19:31, Minwoo Im wrote:
> On 20-07-30 00:06:36, Klaus Jensen wrote:
> > From: Klaus Jensen 
> > 
> > Always destroy the request qsg/iov at the end of request use.
> > 
> > Signed-off-by: Klaus Jensen 
> > ---
> >  hw/block/nvme.c | 52 -
> >  1 file changed, 21 insertions(+), 31 deletions(-)
> > 
> > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > index 3d7275eae369..045dd55376a5 100644
> > --- a/hw/block/nvme.c
> > +++ b/hw/block/nvme.c
> > @@ -217,6 +217,17 @@ static void nvme_req_clear(NvmeRequest *req)
> >  memset(&req->cqe, 0x0, sizeof(req->cqe));
> >  }
> >  
> > +static void nvme_req_exit(NvmeRequest *req)
> > +{
> > +if (req->qsg.sg) {
> > +qemu_sglist_destroy(&req->qsg);
> > +}
> > +
> > +if (req->iov.iov) {
> > +qemu_iovec_destroy(&req->iov);
> > +}
> > +}
> > +
> 
> Klaus,
> 
> What is differences between 'clear' and 'exit' from the request
> perspective?
> 

Hi Minwoo,

The difference is that on 'exit' we release request resources (like the qsg
and iov). On 'clear' we initialize the request by clearing the struct. I
guess I could call it nvme_req_init instead maybe, but really - it is
clearing it.



Re: [PATCH v2 16/16] hw/block/nvme: remove explicit qsg/iov parameters

2020-07-30 Thread Maxim Levitsky
On Thu, 2020-07-30 at 00:06 +0200, Klaus Jensen wrote:
> From: Klaus Jensen 
> 
> Since nvme_map_prp always operate on the request-scoped qsg/iovs, just
> pass a single pointer to the NvmeRequest instead of two for each of the
> qsg and iov.
> 
> Suggested-by: Minwoo Im 
> Signed-off-by: Klaus Jensen 
> ---
>  hw/block/nvme.c | 11 +++
>  1 file changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index 55b1a68ced8c..aea8a8b6946c 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -284,8 +284,8 @@ static uint16_t nvme_map_addr(NvmeCtrl *n, QEMUSGList 
> *qsg, QEMUIOVector *iov,
>  return NVME_SUCCESS;
>  }
>  
> -static uint16_t nvme_map_prp(QEMUSGList *qsg, QEMUIOVector *iov, uint64_t 
> prp1,
> - uint64_t prp2, uint32_t len, NvmeCtrl *n)
> +static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1, uint64_t prp2,
> + uint32_t len, NvmeRequest *req)
>  {
>  hwaddr trans_len = n->page_size - (prp1 % n->page_size);
>  trans_len = MIN(len, trans_len);
> @@ -293,6 +293,9 @@ static uint16_t nvme_map_prp(QEMUSGList *qsg, 
> QEMUIOVector *iov, uint64_t prp1,
>  uint16_t status;
>  bool prp_list_in_cmb = false;
>  
> +QEMUSGList *qsg = &req->qsg;
> +QEMUIOVector *iov = &req->iov;
> +
>  trace_pci_nvme_map_prp(trans_len, len, prp1, prp2, num_prps);
>  
>  if (unlikely(!prp1)) {
> @@ -386,7 +389,7 @@ static uint16_t nvme_dma_prp(NvmeCtrl *n, uint8_t *ptr, 
> uint32_t len,
>  {
>  uint16_t status = NVME_SUCCESS;
>  
> -status = nvme_map_prp(&req->qsg, &req->iov, prp1, prp2, len, n);
> +status = nvme_map_prp(n, prp1, prp2, len, req);
>  if (status) {
>  return status;
>  }
> @@ -431,7 +434,7 @@ static uint16_t nvme_map_dptr(NvmeCtrl *n, size_t len, 
> NvmeRequest *req)
>  uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1);
>  uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2);
>  
> -return nvme_map_prp(&req->qsg, &req->iov, prp1, prp2, len, n);
> +return nvme_map_prp(n, prp1, prp2, len, req);
>  }
>  
>  static void nvme_post_cqes(void *opaque)
Reviewed-by: Maxim Levitsky 

Best regards,
Maxim Levitsky




Re: [PATCH v2 14/16] hw/block/nvme: consolidate qsg/iov clearing

2020-07-30 Thread Maxim Levitsky
On Thu, 2020-07-30 at 19:31 +0900, Minwoo Im wrote:
> On 20-07-30 00:06:36, Klaus Jensen wrote:
> > From: Klaus Jensen 
> > 
> > Always destroy the request qsg/iov at the end of request use.
> > 
> > Signed-off-by: Klaus Jensen 
> > ---
> >  hw/block/nvme.c | 52 -
> >  1 file changed, 21 insertions(+), 31 deletions(-)
> > 
> > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > index 3d7275eae369..045dd55376a5 100644
> > --- a/hw/block/nvme.c
> > +++ b/hw/block/nvme.c
> > @@ -217,6 +217,17 @@ static void nvme_req_clear(NvmeRequest *req)
> >  memset(&req->cqe, 0x0, sizeof(req->cqe));
> >  }
> >  
> > +static void nvme_req_exit(NvmeRequest *req)
> > +{
> > +if (req->qsg.sg) {
> > +qemu_sglist_destroy(&req->qsg);
> > +}
> > +
> > +if (req->iov.iov) {
> > +qemu_iovec_destroy(&req->iov);
> > +}
> > +}
> > +
> 
> Klaus,
> 
> What is differences between 'clear' and 'exit' from the request
> perspective?
> 
> Thanks,
> 
In my personal opinion, I don't think the name matters that much here.

Reviewed-by: Maxim Levitsky 

Best regards,
Maxim Levitsky




Re: [PATCH v2 07/16] hw/block/nvme: add tracing to nvme_map_prp

2020-07-30 Thread Maxim Levitsky
On Thu, 2020-07-30 at 00:06 +0200, Klaus Jensen wrote:
> From: Klaus Jensen 
> 
> Add tracing to nvme_map_prp.
> 
> Signed-off-by: Klaus Jensen 
> ---
>  hw/block/nvme.c   | 2 ++
>  hw/block/trace-events | 1 +
>  2 files changed, 3 insertions(+)
> 
> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index 571635ebe9f9..952afbb05175 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -274,6 +274,8 @@ static uint16_t nvme_map_prp(QEMUSGList *qsg, 
> QEMUIOVector *iov, uint64_t prp1,
>  int num_prps = (len >> n->page_bits) + 1;
>  uint16_t status;
>  
> +trace_pci_nvme_map_prp(trans_len, len, prp1, prp2, num_prps);
> +
>  if (unlikely(!prp1)) {
>  trace_pci_nvme_err_invalid_prp();
>  return NVME_INVALID_FIELD | NVME_DNR;
> diff --git a/hw/block/trace-events b/hw/block/trace-events
> index f3b2d004e078..f20c59a4b542 100644
> --- a/hw/block/trace-events
> +++ b/hw/block/trace-events
> @@ -35,6 +35,7 @@ pci_nvme_irq_masked(void) "IRQ is masked"
>  pci_nvme_dma_read(uint64_t prp1, uint64_t prp2) "DMA read, prp1=0x%"PRIx64" 
> prp2=0x%"PRIx64""
>  pci_nvme_map_addr(uint64_t addr, uint64_t len) "addr 0x%"PRIx64" len 
> %"PRIu64""
>  pci_nvme_map_addr_cmb(uint64_t addr, uint64_t len) "addr 0x%"PRIx64" len 
> %"PRIu64""
> +pci_nvme_map_prp(uint64_t trans_len, uint32_t len, uint64_t prp1, uint64_t 
> prp2, int num_prps) "trans_len %"PRIu64" len %"PRIu32" prp1 0x%"PRIx64" prp2 
> 0x%"PRIx64" num_prps %d"
>  pci_nvme_io_cmd(uint16_t cid, uint32_t nsid, uint16_t sqid, uint8_t opcode) 
> "cid %"PRIu16" nsid %"PRIu32" sqid %"PRIu16" opc 0x%"PRIx8""
>  pci_nvme_admin_cmd(uint16_t cid, uint16_t sqid, uint8_t opcode) "cid 
> %"PRIu16" sqid %"PRIu16" opc 0x%"PRIx8""
>  pci_nvme_rw(const char *verb, uint32_t blk_count, uint64_t byte_count, 
> uint64_t lba) "%s %"PRIu32" blocks (%"PRIu64" bytes) from LBA %"PRIu64""

Reviewed-by: Maxim Levitsky 

Best regards,
Maxim Levitsky




Re: [PATCH v2 05/16] hw/block/nvme: destroy request iov before reuse

2020-07-30 Thread Maxim Levitsky
On Thu, 2020-07-30 at 00:06 +0200, Klaus Jensen wrote:
> From: Klaus Jensen 
> 
> Make sure the request iov is destroyed before reuse; fixing a memory
> leak.
> 
> Signed-off-by: Klaus Jensen 
> ---
>  hw/block/nvme.c | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index a9d9a2912655..8f8257e06eed 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -554,6 +554,10 @@ static void nvme_rw_cb(void *opaque, int ret)
>  if (req->qsg.nalloc) {
>  qemu_sglist_destroy(&req->qsg);
>  }
> +if (req->iov.nalloc) {
> +qemu_iovec_destroy(&req->iov);
> +}
> +
>  nvme_enqueue_req_completion(cq, req);
>  }
>  

Reviewed-by: Maxim Levitsky 

Best regards,
Maxim Levitsky




Re: [PATCH v2 04/16] hw/block/nvme: remove redundant has_sg member

2020-07-30 Thread Maxim Levitsky
On Thu, 2020-07-30 at 00:06 +0200, Klaus Jensen wrote:
> From: Klaus Jensen 
> 
> Remove the has_sg member from NvmeRequest since it's redundant.
> 
> Signed-off-by: Klaus Jensen 


Reviewed-by: Maxim Levitsky 

Best regards,
Maxim Levitsky

> ---
>  hw/block/nvme.c | 7 ++-
>  hw/block/nvme.h | 1 -
>  2 files changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index d60b19e1840f..a9d9a2912655 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -550,7 +550,8 @@ static void nvme_rw_cb(void *opaque, int ret)
> >  block_acct_failed(blk_get_stats(n->conf.blk), &req->acct);
>  req->status = NVME_INTERNAL_DEV_ERROR;
>  }
> -if (req->has_sg) {
> +
> +if (req->qsg.nalloc) {
> >  qemu_sglist_destroy(&req->qsg);
>  }
>  nvme_enqueue_req_completion(cq, req);
> @@ -559,7 +560,6 @@ static void nvme_rw_cb(void *opaque, int ret)
>  static uint16_t nvme_flush(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
>  NvmeRequest *req)
>  {
> -req->has_sg = false;
> >  block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 0,
>   BLOCK_ACCT_FLUSH);
>  req->aiocb = blk_aio_flush(n->conf.blk, nvme_rw_cb, req);
> @@ -585,7 +585,6 @@ static uint16_t nvme_write_zeros(NvmeCtrl *n, 
> NvmeNamespace *ns, NvmeCmd *cmd,
>  return NVME_LBA_RANGE | NVME_DNR;
>  }
>  
> -req->has_sg = false;
> >  block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 0,
>   BLOCK_ACCT_WRITE);
>  req->aiocb = blk_aio_pwrite_zeroes(n->conf.blk, offset, count,
> @@ -623,7 +622,6 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, 
> NvmeCmd *cmd,
>  }
>  
>  if (req->qsg.nsg > 0) {
> -req->has_sg = true;
> > +block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 
> req->qsg.size,
>   acct);
>  req->aiocb = is_write ?
> @@ -632,7 +630,6 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, 
> NvmeCmd *cmd,
> > dma_blk_read(n->conf.blk, &req->qsg, data_offset, 
> BDRV_SECTOR_SIZE,
>   nvme_rw_cb, req);
>  } else {
> -req->has_sg = false;
> > +block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 
> req->iov.size,
>   acct);
>  req->aiocb = is_write ?
> diff --git a/hw/block/nvme.h b/hw/block/nvme.h
> index 0b6a8ae66559..5519b5cc7686 100644
> --- a/hw/block/nvme.h
> +++ b/hw/block/nvme.h
> @@ -22,7 +22,6 @@ typedef struct NvmeRequest {
>  struct NvmeSQueue   *sq;
>  BlockAIOCB  *aiocb;
>  uint16_tstatus;
> -boolhas_sg;
>  NvmeCqe cqe;
>  BlockAcctCookie acct;
>  QEMUSGList  qsg;





Re: [PATCH v2 16/16] hw/block/nvme: remove explicit qsg/iov parameters

2020-07-30 Thread Minwoo Im
On 20-07-30 00:06:38, Klaus Jensen wrote:
> From: Klaus Jensen 
> 
> Since nvme_map_prp always operate on the request-scoped qsg/iovs, just
> pass a single pointer to the NvmeRequest instead of two for each of the
> qsg and iov.
> 
> Suggested-by: Minwoo Im 
> Signed-off-by: Klaus Jensen 
> ---
>  hw/block/nvme.c | 11 +++
>  1 file changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index 55b1a68ced8c..aea8a8b6946c 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -284,8 +284,8 @@ static uint16_t nvme_map_addr(NvmeCtrl *n, QEMUSGList 
> *qsg, QEMUIOVector *iov,
>  return NVME_SUCCESS;
>  }
>  
> -static uint16_t nvme_map_prp(QEMUSGList *qsg, QEMUIOVector *iov, uint64_t 
> prp1,
> - uint64_t prp2, uint32_t len, NvmeCtrl *n)
> +static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1, uint64_t prp2,
> + uint32_t len, NvmeRequest *req)
>  {
>  hwaddr trans_len = n->page_size - (prp1 % n->page_size);
>  trans_len = MIN(len, trans_len);
> @@ -293,6 +293,9 @@ static uint16_t nvme_map_prp(QEMUSGList *qsg, 
> QEMUIOVector *iov, uint64_t prp1,
>  uint16_t status;
>  bool prp_list_in_cmb = false;
>  
> > +QEMUSGList *qsg = &req->qsg;
> > +QEMUIOVector *iov = &req->iov;
> +
>  trace_pci_nvme_map_prp(trans_len, len, prp1, prp2, num_prps);
>  
>  if (unlikely(!prp1)) {
> @@ -386,7 +389,7 @@ static uint16_t nvme_dma_prp(NvmeCtrl *n, uint8_t *ptr, 
> uint32_t len,
>  {
>  uint16_t status = NVME_SUCCESS;
>  
> > -status = nvme_map_prp(&req->qsg, &req->iov, prp1, prp2, len, n);
> +status = nvme_map_prp(n, prp1, prp2, len, req);
>  if (status) {
>  return status;
>  }
> @@ -431,7 +434,7 @@ static uint16_t nvme_map_dptr(NvmeCtrl *n, size_t len, 
> NvmeRequest *req)
>  uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1);
>  uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2);
>  
> > -return nvme_map_prp(&req->qsg, &req->iov, prp1, prp2, len, n);
> +return nvme_map_prp(n, prp1, prp2, len, req);
>  }
>  
>  static void nvme_post_cqes(void *opaque)


It looks better to have the qsg and iov in the NvmeRequest.

Reviewed-by: Minwoo Im 



Re: [PATCH v2 14/16] hw/block/nvme: consolidate qsg/iov clearing

2020-07-30 Thread Minwoo Im
On 20-07-30 00:06:36, Klaus Jensen wrote:
> From: Klaus Jensen 
> 
> Always destroy the request qsg/iov at the end of request use.
> 
> Signed-off-by: Klaus Jensen 
> ---
>  hw/block/nvme.c | 52 -
>  1 file changed, 21 insertions(+), 31 deletions(-)
> 
> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index 3d7275eae369..045dd55376a5 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -217,6 +217,17 @@ static void nvme_req_clear(NvmeRequest *req)
> >  memset(&req->cqe, 0x0, sizeof(req->cqe));
>  }
>  
> +static void nvme_req_exit(NvmeRequest *req)
> +{
> +if (req->qsg.sg) {
> > +qemu_sglist_destroy(&req->qsg);
> +}
> +
> +if (req->iov.iov) {
> > +qemu_iovec_destroy(&req->iov);
> +}
> +}
> +

Klaus,

What is differences between 'clear' and 'exit' from the request
perspective?

Thanks,



Re: [PATCH v2 15/16] hw/block/nvme: use preallocated qsg/iov in nvme_dma_prp

2020-07-30 Thread Minwoo Im
On 20-07-30 00:06:37, Klaus Jensen wrote:
> From: Klaus Jensen 
> 
> Since clean up of the request qsg/iov is now always done post-use, there
> is no need to use a stack-allocated qsg/iov in nvme_dma_prp.
> 
> Signed-off-by: Klaus Jensen 
> Acked-by: Keith Busch 
> Reviewed-by: Maxim Levitsky 

Reviewed-by: Minwoo Im 



Re: [PATCH v2 08/16] hw/block/nvme: add request mapping helper

2020-07-30 Thread Minwoo Im
On Thu, Jul 30, 2020 at 7:06 AM Klaus Jensen  wrote:
>
> From: Klaus Jensen 
>
> Introduce the nvme_map helper to remove some noise in the main nvme_rw
> function.
>
> Signed-off-by: Klaus Jensen 
> Reviewed-by: Maxim Levitsky 
> ---
>  hw/block/nvme.c | 13 ++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index 952afbb05175..198a26890e0c 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -415,6 +415,15 @@ static uint16_t nvme_dma_prp(NvmeCtrl *n, uint8_t *ptr, 
> uint32_t len,
>  return status;
>  }
>
> +static uint16_t nvme_map_dptr(NvmeCtrl *n, NvmeCmd *cmd, size_t len,
> + NvmeRequest *req)
> +{
> +uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1);
> +uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2);
> +
> +return nvme_map_prp(&req->qsg, &req->iov, prp1, prp2, len, n);
> +}

Let's also do something for MPTR later, when we are right in front of it.

Looks good to me.

Reviewed-by: Minwoo Im 

> +
>  static void nvme_post_cqes(void *opaque)
>  {
>  NvmeCQueue *cq = opaque;
> @@ -602,8 +611,6 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, 
> NvmeCmd *cmd,
>  NvmeRwCmd *rw = (NvmeRwCmd *)cmd;
>  uint32_t nlb  = le32_to_cpu(rw->nlb) + 1;
>  uint64_t slba = le64_to_cpu(rw->slba);
> -uint64_t prp1 = le64_to_cpu(rw->dptr.prp1);
> -uint64_t prp2 = le64_to_cpu(rw->dptr.prp2);
>
>  uint8_t lba_index  = NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas);
>  uint8_t data_shift = ns->id_ns.lbaf[lba_index].ds;
> @@ -620,7 +627,7 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, 
> NvmeCmd *cmd,
>  return NVME_LBA_RANGE | NVME_DNR;
>  }
>
> -if (nvme_map_prp(&req->qsg, &req->iov, prp1, prp2, data_size, n)) {
> +if (nvme_map_dptr(n, cmd, data_size, req)) {
>  block_acct_invalid(blk_get_stats(n->conf.blk), acct);
>  return NVME_INVALID_FIELD | NVME_DNR;
>  }
> --
> 2.27.0
>



Re: [PATCH v2 07/16] hw/block/nvme: add tracing to nvme_map_prp

2020-07-30 Thread Minwoo Im
On Thu, Jul 30, 2020 at 7:06 AM Klaus Jensen  wrote:
>
> From: Klaus Jensen 
>
> Add tracing to nvme_map_prp.
>
> Signed-off-by: Klaus Jensen 
> ---
>  hw/block/nvme.c   | 2 ++
>  hw/block/trace-events | 1 +
>  2 files changed, 3 insertions(+)
>
> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index 571635ebe9f9..952afbb05175 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -274,6 +274,8 @@ static uint16_t nvme_map_prp(QEMUSGList *qsg, 
> QEMUIOVector *iov, uint64_t prp1,
>  int num_prps = (len >> n->page_bits) + 1;
>  uint16_t status;
>
> +trace_pci_nvme_map_prp(trans_len, len, prp1, prp2, num_prps);

Hmm.. Okay with this.  But once QEMUSGList and QEMUIOVector instances are coming
into the NvmeRequest, we can just pass the NvmeRequest instance here and
print the cid as well later :)

Reviewed-by: Minwoo Im 

Thanks!

> +
>  if (unlikely(!prp1)) {
>  trace_pci_nvme_err_invalid_prp();
>  return NVME_INVALID_FIELD | NVME_DNR;
> diff --git a/hw/block/trace-events b/hw/block/trace-events
> index f3b2d004e078..f20c59a4b542 100644
> --- a/hw/block/trace-events
> +++ b/hw/block/trace-events
> @@ -35,6 +35,7 @@ pci_nvme_irq_masked(void) "IRQ is masked"
>  pci_nvme_dma_read(uint64_t prp1, uint64_t prp2) "DMA read, prp1=0x%"PRIx64" 
> prp2=0x%"PRIx64""
>  pci_nvme_map_addr(uint64_t addr, uint64_t len) "addr 0x%"PRIx64" len 
> %"PRIu64""
>  pci_nvme_map_addr_cmb(uint64_t addr, uint64_t len) "addr 0x%"PRIx64" len 
> %"PRIu64""
> +pci_nvme_map_prp(uint64_t trans_len, uint32_t len, uint64_t prp1, uint64_t 
> prp2, int num_prps) "trans_len %"PRIu64" len %"PRIu32" prp1 0x%"PRIx64" prp2 
> 0x%"PRIx64" num_prps %d"
>  pci_nvme_io_cmd(uint16_t cid, uint32_t nsid, uint16_t sqid, uint8_t opcode) 
> "cid %"PRIu16" nsid %"PRIu32" sqid %"PRIu16" opc 0x%"PRIx8""
>  pci_nvme_admin_cmd(uint16_t cid, uint16_t sqid, uint8_t opcode) "cid 
> %"PRIu16" sqid %"PRIu16" opc 0x%"PRIx8""
>  pci_nvme_rw(const char *verb, uint32_t blk_count, uint64_t byte_count, 
> uint64_t lba) "%s %"PRIu32" blocks (%"PRIu64" bytes) from LBA %"PRIu64""
> --
> 2.27.0
>



Re: [PATCH v2 05/16] hw/block/nvme: destroy request iov before reuse

2020-07-30 Thread Minwoo Im
On Thu, Jul 30, 2020 at 7:06 AM Klaus Jensen  wrote:
>
> From: Klaus Jensen 
>
> Make sure the request iov is destroyed before reuse; fixing a memory
> leak.
>
> Signed-off-by: Klaus Jensen 

Looks good to me, and thanks for splitting this up.

Reviewed-by: Minwoo Im 



Re: [PATCH v2 01/16] hw/block/nvme: memset preallocated requests structures

2020-07-30 Thread Minwoo Im
On Thu, Jul 30, 2020 at 7:06 AM Klaus Jensen  wrote:
>
> From: Klaus Jensen 
>
> This is preparatory to subsequent patches that change how QSGs/IOVs are
> handled. It is important that the qsg and iov members of the NvmeRequest
> are initially zeroed.
>
> Signed-off-by: Klaus Jensen 
> Reviewed-by: Maxim Levitsky 

Reviewed-by: Minwoo Im 



Re: [PATCH v2 04/16] hw/block/nvme: remove redundant has_sg member

2020-07-30 Thread Minwoo Im
On Thu, Jul 30, 2020 at 7:06 AM Klaus Jensen  wrote:
>
> From: Klaus Jensen 
>
> Remove the has_sg member from NvmeRequest since it's redundant.
>
> Signed-off-by: Klaus Jensen 

Looks better than the previous one to me.

Reviewed-by: Minwoo Im 



Re: [PATCH] schemas: Add vim modeline

2020-07-30 Thread Daniel P . Berrangé
On Thu, Jul 30, 2020 at 11:07:26AM +0200, Markus Armbruster wrote:
> Andrea Bolognani  writes:
> 
> > The various schemas included in QEMU use a JSON-based format which
> > is, however, strictly speaking not valid JSON.
> >
> > As a consequence, when vim tries to apply syntax highlight rules
> > for JSON (as guessed from the file name), the result is an unreadable
> > mess which mostly consists of red markers pointing out supposed errors
> > in, well, pretty much everything.
> >
> > Using Python syntax highlighting produces much better results, and
> > in fact these files already start with specially-formatted comments
> > that instruct Emacs to process them as if they were Python files.
> >
> > This commit adds the equivalent special comments for vim.
> >
> > Signed-off-by: Andrea Bolognani 

Given that we already have emacs mode-lines, I see no reason to
not also have vim mode-lines, so regardless of the deeper discussion
I think this patch is fine to merge in the short term.

  Reviewed-by: Daniel P. Berrangé 
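(For reference, the pair of editor mode lines under discussion would sit at the top of each schema file, roughly like this - the exact vim spelling here is an assumption of the usual form:)

```
# -*- Mode: Python -*-
# vim: filetype=python
```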


> Naming QAPI schema files .json even though their contents isn't JSON was a
> mistake.  Correcting it would be a pain.  If we correct it, then the
> sooner the better.
> 
> Renaming them to .py gives decent editor support out of the box.  Their
> contents isn't quite Python, though: true vs. True, false vs. False.  Do
> we care?  Only a few dozen occurrences; they could be adjusted.
> 
> Renaming them to .qapi would perhaps be less confusing, for the price of
> "out of the box".

IMHO, the critical rule is that if you pick a particular file extension
associated with an existing language, you absolutely MUST BE compliant
with that language.

We fail at compliance with both JSON and Python because we're actually
using our own DSL (domain specific language).

IOW if we're going to stick with our current file format, then we should
be naming them .qapi. We can still use an editor mode line if we want to
claim we're approximately equiv to another language, but we can't be
surprised if editors get upset.


The bigger question is whether having our own DSL is justified ?

I'm *really* sceptical that it is.


We can't use JSON because it lacks comments. So we invented our own
pseudo-JSON parser that supported comments, and used ' instead of "
for some reason. We also needed to be able to parse a sequence of
multiple JSON documents in one file. We should have just picked a 
different language because JSON clearly didn't do what we needed.

You suggest naming them .py. If we do that, we must declare that they
are literally Python code and modify them so that we can load the 
files straight into the python interpreter as code, and not parse 
them as data. I feel unhappy about treating data as code though.


While JSON doesn't do what we need, its second-cousin YAML is a more
flexible format. Taking one example

---
##
# @ImageInfoSpecificQCow2:
#
# @compat: compatibility level
#
# ...snip...
#
# Since: 1.7
##
struct: ImageInfoSpecificQCow2
data:
  compat: str
  "*data-file": str
  "*data-file-raw": bool
  "*lazy-refcounts": bool
  "*corrupt": bool
  refcount-bits: int
  "*encrypt": ImageInfoSpecificQCow2Encryption
  "*bitmaps":
- Qcow2BitmapInfo
  compression-type: Qcow2CompressionType


Then we could use a regular off the shelf YAML parser in python.

The ugliness with quotes is due to the use of "*". Slightly less ugly
if we simply declare that quotes are always used, even where they're
not strictly required.

struct: ImageInfoSpecificQCow2
data:
  "compat": "str"
  "*data-file": "str"
  "*data-file-raw": "bool"
  "*lazy-refcounts": "bool"
  "*corrupt": "bool"
  "refcount-bits": "int"
  "*encrypt": "ImageInfoSpecificQCow2Encryption"
  "*bitmaps":
- "Qcow2BitmapInfo"
  "compression-type": "Qcow2CompressionType"

With the use of "---" to denote the start of document, we have no trouble 
parsing our files which would actually be a concatenation of multiple 
documents. The python YAML library provides the easy yaml.load_all()
method.
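A minimal sketch of that parsing flow, assuming PyYAML is available (the schema documents below are hypothetical, and safe_load_all() is used here as the safe variant of the load_all() mentioned above):

```python
import yaml  # PyYAML, assumed available

# Two hypothetical schema documents concatenated in one file,
# separated by the YAML document-start marker "---".
schema_text = """\
---
struct: ImageInfoSpecificQCow2
data:
  "compat": "str"
  "*corrupt": "bool"
---
enum: Qcow2CompressionType
data: ["zlib", "zstd"]
"""

# safe_load_all() iterates over the concatenated documents,
# yielding one plain dict per document.
docs = list(yaml.safe_load_all(schema_text))

assert docs[0]["struct"] == "ImageInfoSpecificQCow2"
assert docs[1]["enum"] == "Qcow2CompressionType"
```

Each document comes back as ordinary Python data, so the existing QAPI generator machinery could consume it with no custom parser.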


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH] schemas: Add vim modeline

2020-07-30 Thread Markus Armbruster
Andrea Bolognani  writes:

> The various schemas included in QEMU use a JSON-based format which
> is, however, strictly speaking not valid JSON.
>
> As a consequence, when vim tries to apply syntax highlight rules
> for JSON (as guessed from the file name), the result is an unreadable
> mess which mostly consists of red markers pointing out supposed errors
> in, well, pretty much everything.
>
> Using Python syntax highlighting produces much better results, and
> in fact these files already start with specially-formatted comments
> that instruct Emacs to process them as if they were Python files.
>
> This commit adds the equivalent special comments for vim.
>
> Signed-off-by: Andrea Bolognani 

Naming QAPI schema files .json even though their contents isn't JSON was a
mistake.  Correcting it would be a pain.  If we correct it, then the
sooner the better.

Renaming them to .py gives decent editor support out of the box.  Their
contents isn't quite Python, though: true vs. True, false vs. False.  Do
we care?  Only a few dozen occurrences; they could be adjusted.

Renaming them to .qapi would perhaps be less confusing, for the price of
"out of the box".

Thoughts?




Re: [PATCH v3 08/18] hw/block/nvme: add support for the asynchronous event request command

2020-07-30 Thread Maxim Levitsky
On Wed, 2020-07-29 at 22:08 +0200, Klaus Jensen wrote:
> On Jul 29 21:45, Maxim Levitsky wrote:
> > On Wed, 2020-07-29 at 15:37 +0200, Klaus Jensen wrote:
> > > On Jul 29 13:43, Maxim Levitsky wrote:
> > > > On Mon, 2020-07-06 at 08:12 +0200, Klaus Jensen wrote:
> > > > > +DEFINE_PROP_UINT8("aerl", NvmeCtrl, params.aerl, 3),
> > > > So this is the number of AERs that we allow the user to have outstanding
> > > 
> > > Yeah, and per the spec, 0's based.
> > > 
> > > > > +DEFINE_PROP_UINT32("aer_max_queued", NvmeCtrl, 
> > > > > params.aer_max_queued, 64),
> > > > And this is the number of AERs that we keep in our internal AER queue 
> > > > untill user posts and AER so that we
> > > > can complete it.
> > > > 
> > > 
> > > Correct.
> > 
> > Yep - this is what I understood after examining all of the patch, but from 
> > the names themselves it is hard to understand this.
> > Maybe a comment next to the property to at least make it easier for an
> > advanced user (e.g. a user that reads the code) to understand?
> > 
> > (I often end up reading the source to understand various qemu device 
> > parameters).
> > 
> 
> I should add this in docs/specs/nvme.txt (which shows up in one of my
> next series when I add a new PCI id for the device). For now, I will add
> it to the top of the file like the rest of the parameters.
This is a good idea!
> 
> Subsequent series contains a lot more additions of new parameters that
> is directly from the spec and to me it really only makes sense that they
> share the names if they can.
> 
> We could consider having them under a "spec namespace"? So, say, we do
> DEFINE_PROP_UINT("spec.aerl", ...)?
I personally tend to think that it won't make it much more readable.

Best regards,
Maxim Levitsky
>