[RFC PATCH] gitlab: disable accelerated zlib for s390x

2022-03-21 Thread Alex Bennée
Apparently the accelerated zlib (DFLTCC) causes problems with migration,
so disable it in these CI jobs by setting DFLTCC=0.

Signed-off-by: Alex Bennée 
Cc: Peter Maydell 
---
 .gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml | 12 
 1 file changed, 12 insertions(+)

diff --git a/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml 
b/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
index 0333872113..4f292a8a5b 100644
--- a/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
+++ b/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
@@ -8,6 +8,8 @@ ubuntu-20.04-s390x-all-linux-static:
  tags:
  - ubuntu_20.04
  - s390x
+ variables:
+DFLTCC: 0
  rules:
  - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
  - if: "$S390X_RUNNER_AVAILABLE"
@@ -27,6 +29,8 @@ ubuntu-20.04-s390x-all:
  tags:
  - ubuntu_20.04
  - s390x
+ variables:
+DFLTCC: 0
  rules:
  - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
  - if: "$S390X_RUNNER_AVAILABLE"
@@ -43,6 +47,8 @@ ubuntu-20.04-s390x-alldbg:
  tags:
  - ubuntu_20.04
  - s390x
+ variables:
+DFLTCC: 0
  rules:
  - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
when: manual
@@ -64,6 +70,8 @@ ubuntu-20.04-s390x-clang:
  tags:
  - ubuntu_20.04
  - s390x
+ variables:
+DFLTCC: 0
  rules:
  - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
when: manual
@@ -84,6 +92,8 @@ ubuntu-20.04-s390x-tci:
  tags:
  - ubuntu_20.04
  - s390x
+ variables:
+DFLTCC: 0
  rules:
  - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
when: manual
@@ -103,6 +113,8 @@ ubuntu-20.04-s390x-notcg:
  tags:
  - ubuntu_20.04
  - s390x
+ variables:
+DFLTCC: 0
  rules:
  - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
when: manual
-- 
2.30.2




[PATCH v2] hw/i386/amd_iommu: Fix maybe-uninitialized error with GCC 12

2022-03-21 Thread Paolo Bonzini
Be more explicit that the loop must run at least once.  This avoids the
following warning:

  FAILED: libqemu-x86_64-softmmu.fa.p/hw_i386_amd_iommu.c.o
  In function 'pte_get_page_mask',
  inlined from 'amdvi_page_walk' at hw/i386/amd_iommu.c:945:25,
  inlined from 'amdvi_do_translate' at hw/i386/amd_iommu.c:989:5,
  inlined from 'amdvi_translate' at hw/i386/amd_iommu.c:1038:5:
  hw/i386/amd_iommu.c:877:38: error: 'oldlevel' may be used uninitialized 
[-Werror=maybe-uninitialized]
877 | return ~((1UL << ((oldlevel * 9) + 3)) - 1);
|  ^~~~
  hw/i386/amd_iommu.c: In function 'amdvi_translate':
  hw/i386/amd_iommu.c:906:41: note: 'oldlevel' was declared here
906 | unsigned level, present, pte_perms, oldlevel;
| ^~~~
  cc1: all warnings being treated as errors

Having:

  $ gcc --version
  gcc (Debian 12-20220313-1) 12.0.1 20220314 (experimental)
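
As a standalone illustration (hypothetical code, not the QEMU sources):
with a plain while loop GCC cannot prove the body ever executes, so a
variable assigned only inside it looks possibly uninitialized afterwards;
a do/while guarantees one iteration:

    unsigned f(unsigned level)
    {
        unsigned oldlevel;

        do {
            oldlevel = level;
            level /= 2;        /* stand-in for get_pte_translation_mode(pte) */
        } while (level > 0 && level < 7);

        return oldlevel;       /* provably initialized: the body ran at least once */
    }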

Reported-by: Philippe Mathieu-Daudé 
Signed-off-by: Paolo Bonzini 
---
 hw/i386/amd_iommu.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/hw/i386/amd_iommu.c b/hw/i386/amd_iommu.c
index 4d13d8e697..6986ad3b87 100644
--- a/hw/i386/amd_iommu.c
+++ b/hw/i386/amd_iommu.c
@@ -913,7 +913,7 @@ static void amdvi_page_walk(AMDVIAddressSpace *as, uint64_t 
*dte,
 }
 
 /* we are at the leaf page table or page table encodes a huge page */
-while (level > 0) {
+do {
 pte_perms = amdvi_get_perms(pte);
 present = pte & 1;
 if (!present || perms != (perms & pte_perms)) {
@@ -932,10 +932,7 @@ static void amdvi_page_walk(AMDVIAddressSpace *as, 
uint64_t *dte,
 }
 oldlevel = level;
 level = get_pte_translation_mode(pte);
-if (level == 0x7) {
-break;
-}
-}
+} while (level > 0 && level < 7);
 
 if (level == 0x7) {
 page_mask = pte_override_page_mask(pte);
-- 
2.35.1




Re: [PULL for-7.0 0/2] Block patches

2022-03-21 Thread Stefan Hajnoczi
On Thu, Mar 17, 2022 at 06:36:36PM +, Peter Maydell wrote:
> On Thu, 17 Mar 2022 at 16:57, Stefan Hajnoczi  wrote:
> >
> > The following changes since commit 1d60bb4b14601e38ed17384277aa4c30c57925d3:
> >
> >   Merge tag 'pull-request-2022-03-15v2' of https://gitlab.com/thuth/qemu 
> > into staging (2022-03-16 10:43:58 +)
> >
> > are available in the Git repository at:
> >
> >   https://gitlab.com/stefanha/qemu.git tags/block-pull-request
> >
> > for you to fetch changes up to fc8796465c6cd4091efe6a2f8b353f07324f49c7:
> >
> >   aio-posix: fix spurious ->poll_ready() callbacks in main loop (2022-03-17 
> > 11:23:18 +)
> >
> > 
> > Pull request
> >
> > Bug fixes for 7.0.
> 
> msys2-32bit CI job fails on test-aio:
> 
> | 14/85 ERROR:../tests/unit/test-aio.c:501:test_timer_schedule:
> assertion failed: (aio_poll(ctx, true)) ERROR
> 14/85 qemu:unit / test-aio ERROR 2.40s (exit status 2147483651 or
> signal 2147483523 SIGinvalid)
> 
> https://gitlab.com/qemu-project/qemu/-/jobs/2217696361

Looks like a random failure. The commits touch Linux/POSIX code so I
don't know how this pull request could affect Windows.

I reran and the test passed:
https://gitlab.com/qemu-project/qemu/-/jobs/2229158826

Stefan




[PULL 1/4] block-qdict: Fix -Werror=maybe-uninitialized build failure

2022-03-21 Thread Markus Armbruster
From: Murilo Opsfelder Araujo 

Building QEMU on Fedora 37 (Rawhide Prerelease) ppc64le failed with the
following error:

$ ../configure --prefix=/usr/local/qemu-disabletcg 
--target-list=ppc-softmmu,ppc64-softmmu --disable-tcg --disable-linux-user
...
$ make -j$(nproc)
...
In file included from /root/qemu/include/qapi/qmp/qdict.h:16,
 from /root/qemu/include/block/qdict.h:13,
 from ../qobject/block-qdict.c:11:
/root/qemu/include/qapi/qmp/qobject.h: In function ‘qdict_array_split’:
/root/qemu/include/qapi/qmp/qobject.h:49:17: error: ‘subqdict’ may be used 
uninitialized [-Werror=maybe-uninitialized]
   49 | typeof(obj) _obj = (obj);   \
  | ^~~~
../qobject/block-qdict.c:227:16: note: ‘subqdict’ declared here
  227 | QDict *subqdict;
  |^~~~
cc1: all warnings being treated as errors

Fix build failure by expanding the ternary operation.
Tested with `make check-unit` (the check-block-qdict test passed).
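
A minimal standalone sketch of the same class of warning (hypothetical
code, not the QEMU sources): each branch initializes only one local, and
the combined use at the end is what GCC 12 may flag:

    #include <stdlib.h>

    void use(void *p);

    void split(int is_dict)
    {
        void *obj = NULL, *dict;

        if (is_dict) {
            dict = malloc(16);      /* only this branch sets 'dict' */
        } else {
            obj = malloc(16);       /* only this branch sets 'obj' */
        }
        use(obj ? obj : dict);      /* GCC 12 may report 'dict' as maybe-uninitialized */
    }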

Signed-off-by: Murilo Opsfelder Araujo 
Cc: Kevin Wolf 
Cc: Hanna Reitz 
Cc: Markus Armbruster 
Message-Id: <20220311221634.58288-1-muri...@linux.ibm.com>
Reviewed-by: Markus Armbruster 
Signed-off-by: Markus Armbruster 
Tested-by: Philippe Mathieu-Daudé 
---
 qobject/block-qdict.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/qobject/block-qdict.c b/qobject/block-qdict.c
index 1487cc5dd8..4a83bda2c3 100644
--- a/qobject/block-qdict.c
+++ b/qobject/block-qdict.c
@@ -251,12 +251,12 @@ void qdict_array_split(QDict *src, QList **dst)
 if (is_subqdict) {
 qdict_extract_subqdict(src, &subqdict, prefix);
 assert(qdict_size(subqdict) > 0);
+qlist_append_obj(*dst, QOBJECT(subqdict));
 } else {
 qobject_ref(subqobj);
 qdict_del(src, indexstr);
+qlist_append_obj(*dst, subqobj);
 }
-
-qlist_append_obj(*dst, subqobj ?: QOBJECT(subqdict));
 }
 }
 
-- 
2.35.1




Re: [PULL 0/3] ppc queue

2022-03-21 Thread Peter Maydell
On Mon, 21 Mar 2022 at 06:45, Cédric Le Goater  wrote:
>
> The following changes since commit 2058fdbe81e2985c226a026851dd26b146d3395c:
>
>   Merge tag 'fixes-20220318-pull-request' of git://git.kraxel.org/qemu into 
> staging (2022-03-19 11:28:54 +)
>
> are available in the Git repository at:
>
>   https://github.com/legoater/qemu/ tags/pull-ppc-20220321
>
> for you to fetch changes up to 3515553bf625ad48aa90210379c4f387c2596093:
>
>   target/ppc: Replicate Double->Single-Precision result (2022-03-20 23:35:27 
> +0100)
>
> 
> ppc-7.0 queue :
>
> * ISA v3.1 vector instruction fixes
> * Compilation fix regarding 'struct pt_regs' definition
>
> 


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/7.0
for any user-visible changes.

-- PMM



[PATCH v1 08/13] libvhost-user: expose vu_request_to_string

2022-03-21 Thread Alex Bennée
This is useful for more human readable debug messages in vhost-user
programs.
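
A hypothetical debug helper in a back-end might use it like this (sketch
only, not part of this patch):

    #include <stdio.h>
    #include "libvhost-user.h"

    static void log_request(unsigned int req)
    {
        /* the returned string is static; do not free it */
        fprintf(stderr, "vhost-user: handling %s (%u)\n",
                vu_request_to_string(req), req);
    }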

Signed-off-by: Alex Bennée 
---
 subprojects/libvhost-user/libvhost-user.h | 9 +
 subprojects/libvhost-user/libvhost-user.c | 2 +-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/subprojects/libvhost-user/libvhost-user.h 
b/subprojects/libvhost-user/libvhost-user.h
index cde9f07bb3..aea7ec5061 100644
--- a/subprojects/libvhost-user/libvhost-user.h
+++ b/subprojects/libvhost-user/libvhost-user.h
@@ -473,6 +473,15 @@ bool vu_init(VuDev *dev,
  */
 void vu_deinit(VuDev *dev);
 
+
+/**
+ * vu_request_to_string: return string for vhost message request
+ * @req: VhostUserMsg request
+ *
+ * Returns a const string, do not free.
+ */
+const char *vu_request_to_string(unsigned int req);
+
 /**
  * vu_dispatch:
  * @dev: a VuDev context
diff --git a/subprojects/libvhost-user/libvhost-user.c 
b/subprojects/libvhost-user/libvhost-user.c
index 47d2efc60f..c218f911e7 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -99,7 +99,7 @@ static inline bool vu_has_protocol_feature(VuDev *dev, 
unsigned int fbit)
 return has_feature(dev->protocol_features, fbit);
 }
 
-static const char *
+const char *
 vu_request_to_string(unsigned int req)
 {
 #define REQ(req) [req] = #req
-- 
2.30.2




[PATCH v1 05/13] docs: vhost-user: rewrite section on ring state machine

2022-03-21 Thread Alex Bennée
From: Paolo Bonzini 

This section is using the word "back-end" to refer to the
"slave's back-end", and talking about the "client" for
what the rest of the document calls the "slave".

Rework it to free the use of the term "back-end", which in
the next patch will replace "slave".
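
As a rough illustration of the ring lifecycle described in the rewritten
section below (a sketch with made-up identifiers, not part of the patch
or of any QEMU source):

    enum ring_state { RING_STOPPED, RING_STARTED_DISABLED, RING_STARTED_ENABLED };

    enum ring_event { EV_KICK, EV_GET_VRING_BASE, EV_ENABLE, EV_DISABLE };

    static enum ring_state step(enum ring_state s, enum ring_event e,
                                int f_protocol_features)
    {
        switch (e) {
        case EV_KICK:           /* kick fd readable, or in-band VHOST_USER_VRING_KICK */
            if (s == RING_STOPPED) {
                s = f_protocol_features ? RING_STARTED_DISABLED
                                        : RING_STARTED_ENABLED;
            }
            return s;
        case EV_GET_VRING_BASE: /* stops the ring */
            return RING_STOPPED;
        case EV_ENABLE:         /* VHOST_USER_SET_VRING_ENABLE, parameter 1 */
            return s == RING_STOPPED ? s : RING_STARTED_ENABLED;
        case EV_DISABLE:        /* VHOST_USER_SET_VRING_ENABLE, parameter 0 */
            return s == RING_STOPPED ? s : RING_STARTED_DISABLED;
        }
        return s;
    }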

Signed-off-by: Paolo Bonzini 
Message-Id: <20210226143413.188046-3-pbonz...@redhat.com>
---
 docs/interop/vhost-user.rst | 46 +
 1 file changed, 21 insertions(+), 25 deletions(-)

diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index bb588c66fc..694a113e59 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -331,40 +331,36 @@ bit was dedicated for this purpose::
 
   #define VHOST_USER_F_PROTOCOL_FEATURES 30
 
-Starting and stopping rings
----------------------------
+Ring states
+-----------
 
-Client must only process each ring when it is started.
+Rings can be in one of three states:
 
-Client must only pass data between the ring and the backend, when the
-ring is enabled.
+* stopped: the slave must not process the ring at all.
 
-If ring is started but disabled, client must process the ring without
-talking to the backend.
+* started but disabled: the slave must process the ring without
+  causing any side effects.  For example, for a networking device,
+  in the disabled state the slave must not supply any new RX packets,
+  but must process and discard any TX packets.
 
-For example, for a networking device, in the disabled state client
-must not supply any new RX packets, but must process and discard any
-TX packets.
+* started and enabled.
 
-If ``VHOST_USER_F_PROTOCOL_FEATURES`` has not been negotiated, the
-ring is initialized in an enabled state.
+Each ring is initialized in a stopped state.  The slave must start
+ring upon receiving a kick (that is, detecting that file descriptor is
+readable) on the descriptor specified by ``VHOST_USER_SET_VRING_KICK``
+or receiving the in-band message ``VHOST_USER_VRING_KICK`` if negotiated,
+and stop ring upon receiving ``VHOST_USER_GET_VRING_BASE``.
 
-If ``VHOST_USER_F_PROTOCOL_FEATURES`` has been negotiated, the ring is
-initialized in a disabled state. Client must not pass data to/from the
-backend until ring is enabled by ``VHOST_USER_SET_VRING_ENABLE`` with
-parameter 1, or after it has been disabled by
-``VHOST_USER_SET_VRING_ENABLE`` with parameter 0.
+Rings can be enabled or disabled by ``VHOST_USER_SET_VRING_ENABLE``.
 
-Each ring is initialized in a stopped state, client must not process
-it until ring is started, or after it has been stopped.
+If ``VHOST_USER_F_PROTOCOL_FEATURES`` has not been negotiated, the
+ring starts directly in the enabled state.
 
-Client must start ring upon receiving a kick (that is, detecting that
-file descriptor is readable) on the descriptor specified by
-``VHOST_USER_SET_VRING_KICK`` or receiving the in-band message
-``VHOST_USER_VRING_KICK`` if negotiated, and stop ring upon receiving
-``VHOST_USER_GET_VRING_BASE``.
+If ``VHOST_USER_F_PROTOCOL_FEATURES`` has been negotiated, the ring is
+initialized in a disabled state and is enabled by
+``VHOST_USER_SET_VRING_ENABLE`` with parameter 1.
 
-While processing the rings (whether they are enabled or not), client
+While processing the rings (whether they are enabled or not), the slave
 must support changing some configuration aspects on the fly.
 
 Multiple queue support
-- 
2.30.2




Re: [PATCH v3 06/11] target/s390x: vxeh2: vector {load, store} elements reversed

2022-03-21 Thread Richard Henderson

On 3/21/22 04:35, David Hildenbrand wrote:
>> +/* Probe write access before actually modifying memory */
>> +gen_helper_probe_write_access(cpu_env, o->addr1, tcg_constant_i64(16));
>
> We have to free the tcg_constant_i64() IIRC.

We do not.


r~



Re: Memory leak in via_isa_realize()

2022-03-21 Thread BALATON Zoltan

On Mon, 21 Mar 2022, Peter Maydell wrote:

> On Mon, 21 Mar 2022 at 10:31, Thomas Huth  wrote:
>
>> FYI, I'm seeing a memory leak in via_isa_realize() when building
>> QEMU with sanitizers enabled or when running QEMU through valgrind:
>> Same problem happens with qemu-system-ppc64 and the pegasos2 machine.
>>
>> No clue how to properly fix this... is it safe to free the pointer
>> at the end of the function?
>
> This is because the code is still using the old function
> qemu_allocate_irqs(), which is almost always going to involve
> it leaking memory. The fix is usually to rewrite the code to not use
> that function at all, i.e. to manage its irq/gpio lines differently.
> Probably the i8259 code should have a named GPIO output line
> rather than wanting to be passed a qemu_irq in an init function,
> and the via code should have an input GPIO line which it connects
> up to the i8259. It looks from a quick glance like the i8259 and
> its callers have perhaps not been completely QOMified.


Everything involving ISA emulation in QEMU is not completely QOMified, and 
this has caused some problems before, but I did not want to try to fix it, 
both because it's too much unrelated work and because it's used by too 
many things that could break which I can't even test. So I'd rather 
somebody more comfortable with this looked at ISA QOMification.



> In this specific case, though, it seems like the only thing that
> the via_isa_request_i8259_irq() function does is pass the interrupt
> signal through to its own s->cpu_intr, so I think a relatively
> self-contained way to deal with the leak is to pass s->cpu_intr
> into i8259_init() and drop the isa_irq allocated irq and its
> associated helper function entirely. (There might be some subtlety
> I'm missing that means that wouldn't work, of course.)


I think I tried to do that first and it did not work for some reason; 
then I took this approach from some other device model where it works, but 
I forgot the details. You can test it by booting MorphOS or Debian Linux 
8.11 PPC on pegasos2, which support this machine, or maybe I can have a 
look later this week, if it's not urgent, and try something, but I don't 
mind if somebody comes up with a fix before that.


Regards,
BALATON Zoltan



[PATCH] hw/pvrdma: Protect against buggy or malicious guest driver

2022-03-21 Thread Yuval Shaia
The guest driver might execute HW commands when the shared buffers are not
yet allocated.
This might happen on purpose (malicious guest) or because of some other
guest/host address mapping issue.
We need to protect against such cases.

Reported-by: Mauro Matteo Cascella 
Signed-off-by: Yuval Shaia 
---
 hw/rdma/vmw/pvrdma_cmd.c  | 6 ++
 hw/rdma/vmw/pvrdma_main.c | 9 +
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/hw/rdma/vmw/pvrdma_cmd.c b/hw/rdma/vmw/pvrdma_cmd.c
index da7ddfa548..89db963c46 100644
--- a/hw/rdma/vmw/pvrdma_cmd.c
+++ b/hw/rdma/vmw/pvrdma_cmd.c
@@ -796,6 +796,12 @@ int pvrdma_exec_cmd(PVRDMADev *dev)
 
 dsr_info = &dev->dsr_info;
 
+if (!dsr_info->dsr) {
+/* Buggy or malicious guest driver */
+rdma_error_report("Exec command without dsr, req or rsp buffers");
+goto out;
+}
+
 if (dsr_info->req->hdr.cmd >= sizeof(cmd_handlers) /
   sizeof(struct cmd_handler)) {
 rdma_error_report("Unsupported command");
diff --git a/hw/rdma/vmw/pvrdma_main.c b/hw/rdma/vmw/pvrdma_main.c
index 91206dbb8e..aae382af59 100644
--- a/hw/rdma/vmw/pvrdma_main.c
+++ b/hw/rdma/vmw/pvrdma_main.c
@@ -159,13 +159,13 @@ static void free_dsr(PVRDMADev *dev)
 free_dev_ring(pci_dev, &dev->dsr_info.cq, dev->dsr_info.cq_ring_state);
 
 rdma_pci_dma_unmap(pci_dev, dev->dsr_info.req,
- sizeof(union pvrdma_cmd_req));
+   sizeof(union pvrdma_cmd_req));
 
 rdma_pci_dma_unmap(pci_dev, dev->dsr_info.rsp,
- sizeof(union pvrdma_cmd_resp));
+   sizeof(union pvrdma_cmd_resp));
 
 rdma_pci_dma_unmap(pci_dev, dev->dsr_info.dsr,
- sizeof(struct pvrdma_device_shared_region));
+   sizeof(struct pvrdma_device_shared_region));
 
 dev->dsr_info.dsr = NULL;
 }
@@ -249,7 +249,8 @@ static void init_dsr_dev_caps(PVRDMADev *dev)
 {
 struct pvrdma_device_shared_region *dsr;
 
-if (dev->dsr_info.dsr == NULL) {
+if (!dev->dsr_info.dsr) {
+/* Buggy or malicious guest driver */
 rdma_error_report("Can't initialized DSR");
 return;
 }
-- 
2.20.1




Re: [PATCH v4] tests: Do not treat the iotests as separate meson test target anymore

2022-03-21 Thread Hanna Reitz

On 21.03.22 10:17, Thomas Huth wrote:

On 21/03/2022 10.06, Hanna Reitz wrote:

On 18.03.22 18:36, Thomas Huth wrote:

On 18/03/2022 18.04, Hanna Reitz wrote:

On 10.03.22 08:50, Thomas Huth wrote:

If there is a failing iotest, the output is currently not logged to
the console anymore. To get this working again, we need to run the
meson test runner with "--print-errorlogs" (and without "--verbose"
due to a current meson bug that will be fixed here:
https://github.com/mesonbuild/meson/commit/c3f145ca2b9f5.patch ).
We could update the "meson test" call in tests/Makefile.include,
but actually it's nicer and easier if we simply do not treat the
iotests as separate test target anymore and integrate them along
with the other test suites. This has the disadvantage of not getting
the detailed progress indication there anymore, but since that was
only working right in single-threaded "make -j1" mode anyway, it's
not a huge loss right now.

Signed-off-by: Thomas Huth 
---
  v4: updated commit description

  meson.build    | 6 +++---
  scripts/mtest2make.py  | 4 
  tests/Makefile.include | 9 +
  3 files changed, 4 insertions(+), 15 deletions(-)


I can’t really say I understand what’s going on in this patch and 
around it, but I can confirm that before this patch, fail diffs 
aren’t printed; but afterwards, they are


It's a bug in Meson. It will be fixed in 0.61.3 and later (so this 
patch won't be needed there anymore), but the update to meson 0.61.3 
caused other problems so we also can't do that right now... so I'm 
not sure whether we now want to have this patch here included, wait 
for a better version of meson, or even rather want to revert the TAP 
support / meson integration again for 7.0 ... ?


I don’t have anything against this patch, I just don’t fully 
understand what it does, and how it works.


So as far as I understand, check-block was its own target and used 
--verbose so that the progress indication would work (with -j1). Now 
that causes problems because of a bug in meson, and so this patch 
drops that special-casing again.  The only disadvantage is that the 
progress indication (which only worked with -j1) no longer ever works.


(Is that right?)


Right!

I personally don’t mind that disadvantage, because on CI systems it 
doesn’t really matter anyway; and on developers’ systems, I would 
assume `make check` to always be run with -jX anyway.


Right again. So currently the only question is: Do we want to see a 
nice progress output with -j1 and do not care about the error logs, or 
do we rather want to see the error logs with -j1 and do not care about 
the nice progress output? For -jX with X > 1, the patch does not 
change much, and we'd need a newer version of meson to fix that.


OK, to me the answer sounds obvious.  We absolutely need error logs, 
nice output is secondary to it.


Waiting for a new usable version of meson is not really an option, 
because when it comes around, we can just revert this patch (or take any 
other course of action that seems best then).


I guess we could revert TAP and/or the meson integration, I suppose 
that’d mean we’d get some progress output again, but it’s just the plain 
one from the iotests’ `check` script, right?  I’m hard-pressed to find 
good arguments against that, but I don’t really like that idea either.


Having this patch as a workaround until the functionality can be 
restored (which seems in sight) seems absolutely fine to me.  I guess 
I’ll just take it to my tree, then.  Won’t stop others from being able 
to protest, after all. :)


(I.e.: Thanks, applied to my block branch: 
https://gitlab.com/hreitz/qemu/-/commits/block)


Hanna




[PATCH] hw/sd/sdhci: Block Size Register bits [14:12] is lost

2022-03-21 Thread Lu Gao
Block Size Register bits [14:12] are the SDMA Buffer Boundary; they are
dropped on register write, but they are needed for SDMA transfers, e.g.
sdhci_sdma_transfer_multi_blocks uses them to calculate the boundary_ variables.

Losing this field causes wrong operation for any non-default SDMA Buffer
Boundary setting.
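
For reference, the boundary field is typically decoded like this (a sketch
based on the SD Host Controller spec, not code from this patch; 000b selects
a 4 KiB buffer, each increment doubles it up to 512 KiB):

    #include <stdint.h>

    static unsigned sdma_buffer_boundary_bytes(uint16_t blksize)
    {
        unsigned boundary = (blksize >> 12) & 0x7;   /* bits [14:12] */
        return 4096u << boundary;                    /* 4 KiB ... 512 KiB */
    }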

Signed-off-by: Lu Gao 
Signed-off-by: Jianxian Wen 
---
 hw/sd/sdhci.c | 15 +++
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/hw/sd/sdhci.c b/hw/sd/sdhci.c
index e0bbc90344..350ceb487d 100644
--- a/hw/sd/sdhci.c
+++ b/hw/sd/sdhci.c
@@ -321,6 +321,8 @@ static void sdhci_poweron_reset(DeviceState *dev)
 
 static void sdhci_data_transfer(void *opaque);
 
+#define BLOCK_SIZE_MASK (4 * KiB - 1)
+
 static void sdhci_send_command(SDHCIState *s)
 {
 SDRequest request;
@@ -371,7 +373,8 @@ static void sdhci_send_command(SDHCIState *s)
 
 sdhci_update_irq(s);
 
-if (!timeout && s->blksize && (s->cmdreg & SDHC_CMD_DATA_PRESENT)) {
+if (!timeout && (s->blksize & BLOCK_SIZE_MASK) &&
+(s->cmdreg & SDHC_CMD_DATA_PRESENT)) {
 s->data_count = 0;
 sdhci_data_transfer(s);
 }
@@ -406,7 +409,6 @@ static void sdhci_end_transfer(SDHCIState *s)
 /*
  * Programmed i/o data transfer
  */
-#define BLOCK_SIZE_MASK (4 * KiB - 1)
 
 /* Fill host controller's read buffer with BLKSIZE bytes of data from card */
 static void sdhci_read_block_from_card(SDHCIState *s)
@@ -1137,7 +1139,8 @@ sdhci_write(void *opaque, hwaddr offset, uint64_t val, 
unsigned size)
 s->sdmasysad = (s->sdmasysad & mask) | value;
 MASKED_WRITE(s->sdmasysad, mask, value);
 /* Writing to last byte of sdmasysad might trigger transfer */
-if (!(mask & 0xFF00) && s->blkcnt && s->blksize &&
+if (!(mask & 0xFF00) && s->blkcnt &&
+(s->blksize & BLOCK_SIZE_MASK) &&
 SDHC_DMA_TYPE(s->hostctl1) == SDHC_CTRL_SDMA) {
 if (s->trnmod & SDHC_TRNS_MULTI) {
 sdhci_sdma_transfer_multi_blocks(s);
@@ -1151,7 +1154,11 @@ sdhci_write(void *opaque, hwaddr offset, uint64_t val, 
unsigned size)
 if (!TRANSFERRING_DATA(s->prnsts)) {
 uint16_t blksize = s->blksize;
 
-MASKED_WRITE(s->blksize, mask, extract32(value, 0, 12));
+/*
+ * [14:12] SDMA Buffer Boundary
+ * [11:00] Transfer Block Size
+ */
+MASKED_WRITE(s->blksize, mask, extract32(value, 0, 15));
 MASKED_WRITE(s->blkcnt, mask >> 16, value >> 16);
 
 /* Limit block size to the maximum buffer size */
-- 
2.17.1




Re: [PATCH v2 2/3] qapi: nbd-export: allow select bitmaps by node/name pair

2022-03-21 Thread Eric Blake
On Mon, Mar 21, 2022 at 02:50:25PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > > +++ b/qapi/block-export.json
> > > @@ -6,6 +6,7 @@
> > >   ##
> > >   { 'include': 'sockets.json' }
> > > +{ 'include': 'block-core.json' }
> > 
> > Hmm.  Does this extra inclusion negatively impact qemu-storage-daemon,
> > since that is why we created block-export.json in the first place (to
> > minimize the stuff that qsd pulled in without needing all of
> > block-core.json)?  In other words, would it be better to move
> > BlockDirtyBitmapOrStr to this file?
> 
> And include block-export in block-core?

Right now, we have:

qapi/block-core.json "Block core (VM unrelated)" - includes
{common,crypto,job,sockets}.json

qapi/block-export.json "Block device exports" - includes sockets.json

qapi/block.json "Additional block stuff (VM related)" - includes block-core.json

Kevin, you forked off qapi/block-export.json.  What do you propose here?

> 
> Another alternative is to move BlockDirtyBitmapOrStr to a separate file 
> included from both block-export and block-core but that seems to be too much.

Indeed, that feels like a step too far; we already have confusion on
which file to stick new stuff in, and adding another file won't help
that.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Re: [PULL for-7.0 1/2] aio-posix: fix build failure io_uring 2.2

2022-03-21 Thread Stefan Hajnoczi
On Thu, Mar 17, 2022 at 05:14:20PM +, Daniel P. Berrangé wrote:
> On Thu, Mar 17, 2022 at 04:57:42PM +, Stefan Hajnoczi wrote:
> > From: Haiyue Wang 
> > 
> > The io_uring library fixed "Don't truncate addr fields to 32-bit on 32-bit":
> > https://git.kernel.dk/cgit/liburing/commit/?id=d84c29b19ed0b13619cff40141bb1fc3615b
> 
> Ewww, that changes the public ABI of the library on 32-bit
> platforms, but failed to bump the soname version, except
> 
> ...investigating this I noticed a further change that happened
> a few weeks earlier in liburing that actually dropped the
> version from the soname entirely making it an unversioned
> library.
> 
> This is the current shipping 2.1 version:
> 
> $ eu-readelf -a liburing.so.2.0.0  | grep SONAME
>   SONAMELibrary soname: [liburing.so.2]
> 
> and in git master:
> 
> $ eu-readelf -a src/liburing.so.2.2 | grep SONA
>   SONAMELibrary soname: [liburing.so]
> 
> Surely that's a mistake.
> 
> After the ABI incompatibility above, I would have expected
> it to bump to liburing.so.3 

Thanks, I have sent a liburing patch to fix the soname.

Stefan




[PATCH v3 5/5] i386/cpu: Free env->xsave_buf in KVM and HVF destroy_vcpu_thread routines

2022-03-21 Thread Mark Kanda
Create KVM and HVF specific destroy_vcpu_thread() routines to free
env->xsave_buf.

vCPU hotunplug related leak reported by Valgrind:

==132362== 4,096 bytes in 1 blocks are definitely lost in loss record 8,440 of 
8,549
==132362==at 0x4C3B15F: memalign (vg_replace_malloc.c:1265)
==132362==by 0x4C3B288: posix_memalign (vg_replace_malloc.c:1429)
==132362==by 0xB41195: qemu_try_memalign (memalign.c:53)
==132362==by 0xB41204: qemu_memalign (memalign.c:73)
==132362==by 0x7131CB: kvm_init_xsave (kvm.c:1601)
==132362==by 0x7148ED: kvm_arch_init_vcpu (kvm.c:2031)
==132362==by 0x91D224: kvm_init_vcpu (kvm-all.c:516)
==132362==by 0x9242C9: kvm_vcpu_thread_fn (kvm-accel-ops.c:40)
==132362==by 0xB2EB26: qemu_thread_start (qemu-thread-posix.c:556)
==132362==by 0x7EB2159: start_thread (in /usr/lib64/libpthread-2.28.so)
==132362==by 0x9D45DD2: clone (in /usr/lib64/libc-2.28.so)

Signed-off-by: Mark Kanda 
---
 accel/hvf/hvf-accel-ops.c | 11 ++-
 accel/kvm/kvm-accel-ops.c | 11 ++-
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index b23a67881c..bc53890352 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -462,12 +462,21 @@ static void hvf_start_vcpu_thread(CPUState *cpu)
cpu, QEMU_THREAD_JOINABLE);
 }
 
+static void hvf_destroy_vcpu_thread(CPUState *cpu)
+{
+X86CPU *x86_cpu = X86_CPU(cpu);
+CPUX86State *env = &x86_cpu->env;
+
+g_free(env->xsave_buf);
+generic_destroy_vcpu_thread(cpu);
+}
+
 static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
 {
 AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
 
 ops->create_vcpu_thread = hvf_start_vcpu_thread;
-ops->destroy_vcpu_thread = generic_destroy_vcpu_thread;
+ops->destroy_vcpu_thread = hvf_destroy_vcpu_thread;
 ops->kick_vcpu_thread = hvf_kick_vcpu_thread;
 
 ops->synchronize_post_reset = hvf_cpu_synchronize_post_reset;
diff --git a/accel/kvm/kvm-accel-ops.c b/accel/kvm/kvm-accel-ops.c
index 5a7a9ae79c..0345a30139 100644
--- a/accel/kvm/kvm-accel-ops.c
+++ b/accel/kvm/kvm-accel-ops.c
@@ -74,6 +74,15 @@ static void kvm_start_vcpu_thread(CPUState *cpu)
cpu, QEMU_THREAD_JOINABLE);
 }
 
+static void kvm_destroy_vcpu_thread(CPUState *cpu)
+{
+X86CPU *x86_cpu = X86_CPU(cpu);
+CPUX86State *env = &x86_cpu->env;
+
+g_free(env->xsave_buf);
+generic_destroy_vcpu_thread(cpu);
+}
+
 static bool kvm_vcpu_thread_is_idle(CPUState *cpu)
 {
 return !kvm_halt_in_kernel();
@@ -89,7 +98,7 @@ static void kvm_accel_ops_class_init(ObjectClass *oc, void 
*data)
 AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
 
 ops->create_vcpu_thread = kvm_start_vcpu_thread;
-ops->destroy_vcpu_thread = generic_destroy_vcpu_thread;
+ops->destroy_vcpu_thread = kvm_destroy_vcpu_thread;
 ops->cpu_thread_is_idle = kvm_vcpu_thread_is_idle;
 ops->cpus_are_resettable = kvm_cpus_are_resettable;
 ops->synchronize_post_reset = kvm_cpu_synchronize_post_reset;
-- 
2.27.0




Re: [RFC PATCH 0/5] Removal of AioContext lock, bs->parents and ->children: proof of concept

2022-03-21 Thread Vladimir Sementsov-Ogievskiy

09.03.2022 16:26, Emanuele Giuseppe Esposito wrote:



Am 02/03/2022 um 12:07 schrieb Vladimir Sementsov-Ogievskiy:

01.03.2022 17:21, Emanuele Giuseppe Esposito wrote:

This series tries to provide a proof of concept and a clear explanation
on why we need to use drains (and more precisely subtree_drains)
to replace the aiocontext lock, especially to protect BlockDriverState
->children and ->parent lists.

Just a small recap on the key concepts:
* We split block layer APIs in "global state" (GS), "I/O", and
"global state or I/O".
    GS are running in the main loop, under BQL, and are the only
    one allowed to modify the BlockDriverState graph.

    I/O APIs are thread safe and can run in any thread

    "global state or I/O" are essentially all APIs that use
    BDRV_POLL_WHILE. This is because there can be only 2 threads
    that can use BDRV_POLL_WHILE: main loop and the iothread that
    runs the aiocontext.

* Drains allow the caller (either main loop or iothread running
the context) to wait for all in-flight requests and operations
of a BDS: normal drains target a given node and its parents, while
subtree ones also include the subgraph of the node. Siblings are
not affected by any of these two kind of drains.
After bdrv_drained_begin, no more request is allowed to come
from the affected nodes. Therefore the only actor left working
on a drained part of the graph should be the main loop.

What do we intend to do
---
We want to remove the AioContext lock. It is not 100% clear on how
many things we are protecting with it, and why.
As a starter, we want to protect BlockDriverState ->parents and
->children lists, since they are read by main loop and I/O but
only written by main loop under BQL. The function that modifies
these lists is bdrv_replace_child_common().

How do we want to do it
---
We identified the subtree_drain API as the ideal substitute for the
AioContext lock. The reason is simple: draining prevents the
iothread from reading or writing the nodes, so once the main loop finishes


I'm not sure it's ideal. Unfortunately I'm not really good in all that
BQL, AioContext locks and drains. So, I can't give good advice. But here
are my doubts:

Draining is very restrictive measure: even if drained section is very
short, at least on bdrv_drained_begin() we have to wait for all current
requests, and don't start new ones. That slows down the guest.


I don't think we are in a critical path where performance is important here.

At the same time there are operations that don't require stopping guest IO
requests. For example, manipulation of dirty bitmaps - the QMP commands
block-dirty-bitmap-add and block-dirty-bitmap-remove. Or different query
requests..



Maybe you misunderstood or I was not 100% clear, but I am talking about replacing the 
AioContext lock for the ->parents and ->children instance. Not everywhere. This 
is the first step, and then we will see if the additional things that it protects can 
use drain or something else


Ok, if we are only talking about graph modification, that's not a critical performance 
path.



  

I see only two real cases, where we do need drain:

1. When we need a consistent "point-in-time". For example, when we start
backup in transaction with some dirty-bitmap manipulation commands.

2. When we need to modify block-graph: if we are going to break relation
A -> B, there must not be any in-flight request that want to use this
relation.


That's the use case I am considering.


All other operations, for which we want some kind of lock (like
AioContext lock or something) we actually don't want to stop guest IO.


Yes, they have to be analyzed case by case.



Next, I have a problem in mind that in the past led to a lot of iotest 30
failures. There have since been different fixes and improvements, but the core
problem (as far as I understand) is still here: nothing protects us when
some graph modification process (for example block-job finalization) does
yield, switch to another coroutine and enter another graph modification
process (for example, another block-job finalization)..


That's another point to consider. I don't really have a solution for this.


(for details look at my old "[PATCH RFC 0/5] Fix accidental crash in
iotest 30"
https://lists.nongnu.org/archive/html/qemu-devel/2020-11/msg05290.html ,
where I suggested to add a global graph_modify_mutex CoMutex, to be held
during graph-modifying process that may yield)..
Does your proposal solve this problem?



executing bdrv_drained_begin() on the interested graph, we are sure that
the iothread is not going to look or even interfere with that part of
the graph.
We are also sure that the only two actors that can look at a specific
BlockDriverState in any given context are the main loop and the
iothread running the AioContext (ensured by "global state or IO" logic).

Why use _subtree_ instead of normal drain
-
A simple drain "blocks" a given node and all its parents.

Re: [PATCH 06/15] iotests: rebase qemu_io() on top of qemu_tool()

2022-03-21 Thread Eric Blake
On Fri, Mar 18, 2022 at 04:36:46PM -0400, John Snow wrote:
> Rework qemu_io() to be analogous to qemu_img(); a function that requires
> a return code of zero by default unless disabled explicitly.
> 
> Tests that use qemu_io():
> 030 040 041 044 055 056 093 124 129 132 136 148 149 151 152 163 165 205
> 209 219 236 245 248 254 255 257 260 264 280 298 300 302 304
> image-fleecing migrate-bitmaps-postcopy-test migrate-bitmaps-test
> migrate-during-backup migration-permissions
> 
> Test that use qemu_io_log():
> 242 245 255 274 303 307 nbd-reconnect-on-open
> 
> Signed-off-by: John Snow 
> 
> ---
> 
> Note: This breaks several tests at this point. I'll be fixing each
> broken test one by one in the subsequent commits. We can squash them all
> on merge to avoid test regressions.
> 
> (Seems like a way to have your cake and eat it too with regards to
> maintaining bisectability while also having nice mailing list patches.)

Interesting approach; it does appear to have made reviewing a bit
easier, so thanks for trying it.

I'll withhold actual R-b until the last squashed patch, but so far, I
haven't seen anything that causes me grief other than the lack of
bisectability that you already have documented how it will be
addressed.  [less wordy - this patch is incomplete, as advertised, but
looks good]

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




[PATCH v1 12/13] hw/virtio/vhost-user: don't suppress F_CONFIG when supported

2022-03-21 Thread Alex Bennée
Previously we would silently suppress VHOST_USER_PROTOCOL_F_CONFIG
during the protocol negotiation if the QEMU stub hadn't implemented
the vhost_dev_config_notifier. However this isn't the only way we can
handle config messages; the existing vdc->get/set_config can do this
as well.

Lightly re-factor the code to check for both potential methods and,
instead of silently squashing the feature, error out. It is unlikely
that a vhost-user backend expecting to handle CONFIG messages will
behave correctly if they never get sent.
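
Condensed, the new negotiation logic amounts to the following (a
paraphrased sketch, not the literal hunks below):

    bool offered = virtio_has_feature(protocol_features,
                                      VHOST_USER_PROTOCOL_F_CONFIG);
    if (supports_f_config && !offered) {
        /* QEMU expects to exchange config messages: fail the init */
    } else if (!supports_f_config && offered) {
        /* backend offers F_CONFIG but this device cannot use it: warn */
    }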

Fixes: 1c3e5a2617 ("vhost-user: back SET/GET_CONFIG requests with a protocol 
feature")
Cc: Maxime Coquelin 
Cc: Michael S. Tsirkin 
Cc: Stefan Hajnoczi 
Signed-off-by: Alex Bennée 

---
  - we can't check for get_config/set_config as the stack squashed vdev
  - use vhost-user-state to transmit this
---
 include/hw/virtio/vhost-user.h |  1 +
 hw/scsi/vhost-user-scsi.c  |  1 +
 hw/virtio/vhost-user.c | 46 --
 3 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h
index e44a41bb70..6e0e8a71a3 100644
--- a/include/hw/virtio/vhost-user.h
+++ b/include/hw/virtio/vhost-user.h
@@ -22,6 +22,7 @@ typedef struct VhostUserState {
 CharBackend *chr;
 VhostUserHostNotifier notifier[VIRTIO_QUEUE_MAX];
 int memory_slots;
+bool supports_config;
 } VhostUserState;
 
 bool vhost_user_init(VhostUserState *user, CharBackend *chr, Error **errp);
diff --git a/hw/scsi/vhost-user-scsi.c b/hw/scsi/vhost-user-scsi.c
index 1b2f7eed98..9be21d07ee 100644
--- a/hw/scsi/vhost-user-scsi.c
+++ b/hw/scsi/vhost-user-scsi.c
@@ -121,6 +121,7 @@ static void vhost_user_scsi_realize(DeviceState *dev, Error 
**errp)
 vsc->dev.backend_features = 0;
 vqs = vsc->dev.vqs;
 
+s->vhost_user.supports_config = true;
 ret = vhost_dev_init(&vsc->dev, &s->vhost_user,
  VHOST_BACKEND_TYPE_USER, 0, errp);
 if (ret < 0) {
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index b27b8c56e2..6ce082861b 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -1949,14 +1949,15 @@ static int 
vhost_user_postcopy_notifier(NotifierWithReturn *notifier,
 static int vhost_user_backend_init(struct vhost_dev *dev, void *opaque,
Error **errp)
 {
-uint64_t features, protocol_features, ram_slots;
+uint64_t features, ram_slots;
 struct vhost_user *u;
+VhostUserState *vus = (VhostUserState *) opaque;
 int err;
 
 assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER);
 
 u = g_new0(struct vhost_user, 1);
-u->user = opaque;
+u->user = vus;
 u->dev = dev;
 dev->opaque = u;
 
@@ -1967,6 +1968,10 @@ static int vhost_user_backend_init(struct vhost_dev 
*dev, void *opaque,
 }
 
 if (virtio_has_feature(features, VHOST_USER_F_PROTOCOL_FEATURES)) {
+bool supports_f_config = vus->supports_config ||
+(dev->config_ops && dev->config_ops->vhost_dev_config_notifier);
+uint64_t protocol_features;
+
 dev->backend_features |= 1ULL << VHOST_USER_F_PROTOCOL_FEATURES;
 
 err = vhost_user_get_u64(dev, VHOST_USER_GET_PROTOCOL_FEATURES,
@@ -1976,19 +1981,34 @@ static int vhost_user_backend_init(struct vhost_dev 
*dev, void *opaque,
 return -EPROTO;
 }
 
-dev->protocol_features =
-protocol_features & VHOST_USER_PROTOCOL_FEATURE_MASK;
-
-if (!dev->config_ops || !dev->config_ops->vhost_dev_config_notifier) {
-/* Don't acknowledge CONFIG feature if device doesn't support it */
-dev->protocol_features &= ~(1ULL << VHOST_USER_PROTOCOL_F_CONFIG);
-} else if (!(protocol_features &
-(1ULL << VHOST_USER_PROTOCOL_F_CONFIG))) {
-error_setg(errp, "Device expects VHOST_USER_PROTOCOL_F_CONFIG "
-   "but backend does not support it.");
-return -EINVAL;
+/*
+ * We will use all the protocol features we support - although
+ * we suppress F_CONFIG if we know QEMUs internal code can not support
+ * it.
+ */
+protocol_features &= VHOST_USER_PROTOCOL_FEATURE_MASK;
+
+if (supports_f_config) {
+if (!virtio_has_feature(protocol_features,
+VHOST_USER_PROTOCOL_F_CONFIG)) {
+error_setg(errp, "vhost-user device %s expecting "
+   "VHOST_USER_PROTOCOL_F_CONFIG but the vhost-user 
backend does "
+   "not support it.", dev->vdev->name);
+return -EPROTO;
+}
+} else {
+if (virtio_has_feature(protocol_features,
+   VHOST_USER_PROTOCOL_F_CONFIG)) {
+warn_reportf_err(*errp, "vhost-user backend supports "
+ "VHOST_USER_PROTOCOL_F_CONFIG for "
+ 

Re: [PATCH qemu 00/13] Add tail agnostic behavior for rvv instructions

2022-03-21 Thread eop Chen

Hi WeiWei,

Thanks for reviewing this PR.

===

Regarding the possible behaviors of agnostic elements for mask instructions, I
want to ask for your and others' opinions on this proposed PR before sending the
next version.

I understand that there are multiple possibilities for agnostic elements
according to the v-spec. The main intent of this patch-set is to add an option
that can distinguish between tail policies. Setting agnostic elements to all 1s
makes things simple and allows qemu to express that the element is agnostic.
Therefore I want to unify **all** agnostic elements to be set to 1s when
this option is enabled.

To avoid affecting the current behavior, the option defaults to “disabled".
This option is an extra feature, so users that care about it can enable it at
will.
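
To make the intent concrete, a hypothetical helper (not the actual patch
code) applying this policy would simply fill every tail element with ones:

    #include <stdint.h>
    #include <string.h>

    /* fill tail elements [vl, vlmax) of a vector register with all 1s */
    static void vext_set_tail_ones(void *vd, uint32_t vl, uint32_t vlmax,
                                   uint32_t esz)
    {
        for (uint32_t i = vl; i < vlmax; i++) {
            memset((uint8_t *)vd + i * esz, 0xff, esz);
        }
    }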

===

Here are some replies to your review comments.

Under [00/13]
> Another question: when rvv_ta_all_1s for vta  is enabled, How about vma? 
> Is it necessary to set the inactive elements to all 1s?

This PR will add the tail agnostic feature. I am planning on adding the mask
policy in another PR to keep the size of the change more reasonable for review.

Under [01/13]
> ESZ can be used in the later patches. Maybe it's better to move this 
> patch to  last  and prune redundant DSZ parameter.

ESZ and DSZ are redundant code that wasn't cleaned up in past development.
I prefer to clean this up first and add it back incrementally in the following
commits to make them more readable. I do agree with you that `ETYPE` is
not a straightforward name and I will change them to `ESZ`.

Under [03/13]
> Maybe miss a space here.

Nice catch here, thank you.

Under [04/13]
> ETYPE seems have no other meaning here. Why not use ESZ directly  as 
original code.

Yes I agree with you. I will update it in the next version.

Under [05/13]
> Similar to last patch, can use ESZ directly here.

I will update it in the next version.

Under [06/13]
> Use vlmax here and in the previous patches may not contains all the tail 
> elements:
> "When LMUL < 1, the tail includes the elements past VLMAX that are held 
> in the same vector register"

Nice catch for this. I will cover LMUL < 1 cases for all functions in the next
version.

Under [07/13]
> Why comment 'clear tail element' here?
> "In addition, except for mask load instructions, any element in the tail 
> of a mask result can also be written with the value the
> mask-producing operation would have calculated with vl=VLMAX.
> Furthermore, for mask-logical instructions and vmsbf.m,
> vmsif.m, vmsof.m mask-manipulation instructions, any element in the tail 
> of the result can be written with the value the
> mask-producing operation would have calculated with vl=VLEN, SEW=8, and 
> LMUL=8 (i.e., all bits of the mask register can
> be overwritten)."

I will wait for your and others' replies to my comment on this.

===

Thanks again for your time.

Best,

Yueh-Ting (eop) Chen



Re: [PATCH 02/15] iotests/163: Fix broken qemu-io invocation

2022-03-21 Thread Eric Blake
On Fri, Mar 18, 2022 at 04:36:42PM -0400, John Snow wrote:
> The 'read' commands to qemu-io were malformed, and this invocation only
> worked by coincidence because the error messages were identical. Oops.
> 
> There's no point in checking the patterning of the reference image, so
> just check the empty image by itself instead.
> 
> (Note: as of this commit, nothing actually enforces that this command
> completes successfully, but a forthcoming commit in this series will
> enforce that qemu_io() must have a zero status code.)
> 
> Signed-off-by: John Snow 
> ---
>  tests/qemu-iotests/163 | 5 +
>  1 file changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/tests/qemu-iotests/163 b/tests/qemu-iotests/163
> index e4cd4b230f..c94ad16f4a 100755
> --- a/tests/qemu-iotests/163
> +++ b/tests/qemu-iotests/163
> @@ -113,10 +113,7 @@ class ShrinkBaseClass(iotests.QMPTestCase):
>  qemu_img('resize',  '-f', iotests.imgfmt, '--shrink', test_img,
>   self.shrink_size)
>  
> -self.assertEqual(
> -qemu_io('-c', 'read -P 0x00 %s'%self.shrink_size, test_img),
> -qemu_io('-c', 'read -P 0x00 %s'%self.shrink_size, check_img),
> -"Verifying image content")
> +qemu_io('-c', f"read -P 0x00 0 {self.shrink_size}", test_img)

Reviewed-by: Eric Blake 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Re: [PATCH v4 10/18] iotests: add qemu_img_map() function

2022-03-21 Thread Eric Blake
On Thu, Mar 17, 2022 at 07:49:29PM -0400, John Snow wrote:
> Add a qemu_img_map() function by analogy with qemu_img_measure(),
> qemu_img_check(), and qemu_img_info() that all return JSON information.
> 
> Replace calls to qemu_img_pipe('map', '--output=json', ...) with this
> new function, which provides better diagnostic information on failure.
> 
> Note: The output for iotest 211 changes, because logging JSON after it
> was deserialized by Python behaves a little differently than logging the
> raw JSON document string itself.
> (iotests.log() sorts the keys for Python 3.6 support.)
> 
> Signed-off-by: John Snow 
> ---

> +++ b/tests/qemu-iotests/211.out

> @@ -55,9 +53,7 @@ file format: IMGFMT
>  virtual size: 32 MiB (33554432 bytes)
>  cluster_size: 1048576
>  
> -[{ "start": 0, "length": 3072, "depth": 0, "present": true, "zero": false, 
> "data": true, "offset": 1024},
> -{ "start": 3072, "length": 33551360, "depth": 0, "present": true, "zero": 
> true, "data": true, "offset": 4096}]
> -
> +[{"data": true, "depth": 0, "length": 3072, "offset": 1024, "present": true, 
> "start": 0, "zero": false}, {"data": true, "depth": 0, "length": 33551360, 
> "offset": 4096, "present": true, "start": 3072, "zero": true}]

The change in format can produce really long lines for a more complex
map, which can introduce its own problems in legibility. But I can
live with it.

Reviewed-by: Eric Blake 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




[PULL 0/4] Miscellaneous patches for 2022-03-21

2022-03-21 Thread Markus Armbruster
If it's too late for trivial cleanup, I'll respin this with the last
patch dropped.

The following changes since commit 2058fdbe81e2985c226a026851dd26b146d3395c:

  Merge tag 'fixes-20220318-pull-request' of git://git.kraxel.org/qemu into 
staging (2022-03-19 11:28:54 +)

are available in the Git repository at:

  git://repo.or.cz/qemu/armbru.git tags/pull-misc-2022-03-21

for you to fetch changes up to b21e2380376c470900fcadf47507f4d5ade75e85:

  Use g_new() & friends where that makes obvious sense (2022-03-21 15:44:44 
+0100)


Miscellaneous patches for 2022-03-21


Markus Armbruster (3):
  scripts/coccinelle: New use-g_new-etc.cocci
  9pfs: Use g_new() & friends where that makes obvious sense
  Use g_new() & friends where that makes obvious sense

Murilo Opsfelder Araujo (1):
  block-qdict: Fix -Werror=maybe-uninitialized build failure

 scripts/coccinelle/use-g_new-etc.cocci   | 75 
 include/qemu/timer.h |  2 +-
 accel/kvm/kvm-all.c  |  6 +--
 accel/tcg/tcg-accel-ops-mttcg.c  |  2 +-
 accel/tcg/tcg-accel-ops-rr.c |  4 +-
 audio/audio.c|  4 +-
 audio/audio_legacy.c |  6 +--
 audio/dsoundaudio.c  |  2 +-
 audio/jackaudio.c|  6 +--
 audio/paaudio.c  |  4 +-
 backends/cryptodev.c |  2 +-
 contrib/vhost-user-gpu/vhost-user-gpu.c  |  2 +-
 cpus-common.c|  4 +-
 dump/dump.c  |  2 +-
 hw/9pfs/9p-proxy.c   |  2 +-
 hw/9pfs/9p-synth.c   |  4 +-
 hw/9pfs/9p.c |  8 ++--
 hw/9pfs/codir.c  |  6 +--
 hw/acpi/hmat.c   |  2 +-
 hw/audio/intel-hda.c |  2 +-
 hw/char/parallel.c   |  2 +-
 hw/char/riscv_htif.c |  2 +-
 hw/char/virtio-serial-bus.c  |  6 +--
 hw/core/irq.c|  2 +-
 hw/core/reset.c  |  2 +-
 hw/display/pxa2xx_lcd.c  |  2 +-
 hw/display/tc6393xb.c|  2 +-
 hw/display/virtio-gpu.c  |  4 +-
 hw/display/xenfb.c   |  4 +-
 hw/dma/rc4030.c  |  4 +-
 hw/i2c/core.c|  4 +-
 hw/i2c/i2c_mux_pca954x.c |  2 +-
 hw/i386/amd_iommu.c  |  4 +-
 hw/i386/intel_iommu.c|  2 +-
 hw/i386/xen/xen-hvm.c| 10 ++---
 hw/i386/xen/xen-mapcache.c   | 14 +++---
 hw/input/lasips2.c   |  2 +-
 hw/input/pckbd.c |  2 +-
 hw/input/ps2.c   |  4 +-
 hw/input/pxa2xx_keypad.c |  2 +-
 hw/input/tsc2005.c   |  3 +-
 hw/intc/riscv_aclint.c   |  6 +--
 hw/intc/xics.c   |  2 +-
 hw/m68k/virt.c   |  2 +-
 hw/mips/mipssim.c|  2 +-
 hw/misc/applesmc.c   |  2 +-
 hw/misc/imx6_src.c   |  2 +-
 hw/misc/ivshmem.c|  4 +-
 hw/net/virtio-net.c  |  4 +-
 hw/nvme/ns.c |  2 +-
 hw/pci-host/pnv_phb3.c   |  2 +-
 hw/pci-host/pnv_phb4.c   |  2 +-
 hw/pci/pcie_sriov.c  |  2 +-
 hw/ppc/e500.c|  2 +-
 hw/ppc/ppc.c |  8 ++--
 hw/ppc/ppc405_boards.c   |  4 +-
 hw/ppc/ppc405_uc.c   | 18 
 hw/ppc/ppc4xx_devs.c |  2 +-
 hw/ppc/ppc_booke.c   |  4 +-
 hw/ppc/spapr.c   |  2 +-
 hw/ppc/spapr_events.c|  2 +-
 hw/ppc/spapr_hcall.c |  2 +-
 hw/ppc/spapr_numa.c  |  3 +-
 hw/rdma/vmw/pvrdma_dev_ring.c|  2 +-
 hw/rdma/vmw/pvrdma_qp_ops.c  |  6 +--
 hw/sh4/r2d.c |  4 +-
 hw/sh4/sh7750.c  |  2 +-
 hw/sparc/leon3.c |  2 +-
 hw/sparc64/sparc64.c |  4 +-
 hw/timer/arm_timer.c |  2 +-
 hw/timer/slavio_timer.c  |  2 +-
 hw/vfio/pci.c|  4 +-
 hw/vfio/platform.c   |  4 +-
 hw/virtio/virtio-crypto.c|  2 +-
 hw/virtio/virtio-iommu.c |  2 +-
 hw/virtio/virtio.c   |  5 +--
 hw/xtensa/xtfpga.c   |  2 +-
 linux-user/syscall.c |  2 +-
 migration/dirtyrate.c  

[PULL 2/4] scripts/coccinelle: New use-g_new-etc.cocci

2022-03-21 Thread Markus Armbruster
This is the semantic patch from commit b45c03f585 "arm: Use g_new() &
friends where that makes obvious sense".

Signed-off-by: Markus Armbruster 
Reviewed-by: Philippe Mathieu-Daudé 
Reviewed-by: Richard Henderson 
Reviewed-by: Alex Bennée 
Message-Id: <20220315144156.1595462-2-arm...@redhat.com>
---
 scripts/coccinelle/use-g_new-etc.cocci | 75 ++
 1 file changed, 75 insertions(+)
 create mode 100644 scripts/coccinelle/use-g_new-etc.cocci

diff --git a/scripts/coccinelle/use-g_new-etc.cocci 
b/scripts/coccinelle/use-g_new-etc.cocci
new file mode 100644
index 00..e2280e93b3
--- /dev/null
+++ b/scripts/coccinelle/use-g_new-etc.cocci
@@ -0,0 +1,75 @@
+// Use g_new() & friends where that makes obvious sense
+@@
+type T;
+@@
+-g_malloc(sizeof(T))
++g_new(T, 1)
+@@
+type T;
+@@
+-g_try_malloc(sizeof(T))
++g_try_new(T, 1)
+@@
+type T;
+@@
+-g_malloc0(sizeof(T))
++g_new0(T, 1)
+@@
+type T;
+@@
+-g_try_malloc0(sizeof(T))
++g_try_new0(T, 1)
+@@
+type T;
+expression n;
+@@
+-g_malloc(sizeof(T) * (n))
++g_new(T, n)
+@@
+type T;
+expression n;
+@@
+-g_try_malloc(sizeof(T) * (n))
++g_try_new(T, n)
+@@
+type T;
+expression n;
+@@
+-g_malloc0(sizeof(T) * (n))
++g_new0(T, n)
+@@
+type T;
+expression n;
+@@
+-g_try_malloc0(sizeof(T) * (n))
++g_try_new0(T, n)
+@@
+type T;
+expression p, n;
+@@
+-g_realloc(p, sizeof(T) * (n))
++g_renew(T, p, n)
+@@
+type T;
+expression p, n;
+@@
+-g_try_realloc(p, sizeof(T) * (n))
++g_try_renew(T, p, n)
+@@
+type T;
+expression n;
+@@
+-(T *)g_new(T, n)
++g_new(T, n)
+@@
+type T;
+expression n;
+@@
+-(T *)g_new0(T, n)
++g_new0(T, n)
+@@
+type T;
+expression p, n;
+@@
+-(T *)g_renew(T, p, n)
++g_renew(T, p, n)
-- 
2.35.1




[PATCH v1 04/13] docs: vhost-user: clean up request/reply description

2022-03-21 Thread Alex Bennée
From: Paolo Bonzini 

It is not necessary to mention which side is sending/receiving
each payload; it is more interesting to say which is the request
and which is the reply.  This also matches what vhost-user-gpu.rst
already does.

While at it, ensure that all messages list both the request and
the reply payload.

Signed-off-by: Paolo Bonzini 
Message-Id: <20210226143413.188046-2-pbonz...@redhat.com>
---
 docs/interop/vhost-user.rst | 163 +---
 1 file changed, 95 insertions(+), 68 deletions(-)

diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 4dbc84fd00..bb588c66fc 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -865,8 +865,8 @@ Master message types
 ``VHOST_USER_GET_FEATURES``
   :id: 1
   :equivalent ioctl: ``VHOST_GET_FEATURES``
-  :master payload: N/A
-  :slave payload: ``u64``
+  :request payload: N/A
+  :reply payload: ``u64``
 
   Get from the underlying vhost implementation the features bitmask.
   Feature bit ``VHOST_USER_F_PROTOCOL_FEATURES`` signals slave support
@@ -876,7 +876,8 @@ Master message types
 ``VHOST_USER_SET_FEATURES``
   :id: 2
   :equivalent ioctl: ``VHOST_SET_FEATURES``
-  :master payload: ``u64``
+  :request payload: ``u64``
+  :reply payload: N/A
 
   Enable features in the underlying vhost implementation using a
   bitmask.  Feature bit ``VHOST_USER_F_PROTOCOL_FEATURES`` signals
@@ -886,8 +887,8 @@ Master message types
 ``VHOST_USER_GET_PROTOCOL_FEATURES``
   :id: 15
   :equivalent ioctl: ``VHOST_GET_FEATURES``
-  :master payload: N/A
-  :slave payload: ``u64``
+  :request payload: N/A
+  :reply payload: ``u64``
 
   Get the protocol feature bitmask from the underlying vhost
   implementation.  Only legal if feature bit
@@ -902,7 +903,8 @@ Master message types
 ``VHOST_USER_SET_PROTOCOL_FEATURES``
   :id: 16
   :equivalent ioctl: ``VHOST_SET_FEATURES``
-  :master payload: ``u64``
+  :request payload: ``u64``
+  :reply payload: N/A
 
   Enable protocol features in the underlying vhost implementation.
 
@@ -916,7 +918,8 @@ Master message types
 ``VHOST_USER_SET_OWNER``
   :id: 3
   :equivalent ioctl: ``VHOST_SET_OWNER``
-  :master payload: N/A
+  :request payload: N/A
+  :reply payload: N/A
 
   Issued when a new connection is established. It sets the current
   *master* as an owner of the session. This can be used on the *slave*
@@ -924,7 +927,8 @@ Master message types
 
 ``VHOST_USER_RESET_OWNER``
   :id: 4
-  :master payload: N/A
+  :request payload: N/A
+  :reply payload: N/A
 
 .. admonition:: Deprecated
 
@@ -937,8 +941,8 @@ Master message types
 ``VHOST_USER_SET_MEM_TABLE``
   :id: 5
   :equivalent ioctl: ``VHOST_SET_MEM_TABLE``
-  :master payload: memory regions description
-  :slave payload: (postcopy only) memory regions description
+  :request payload: memory regions description
+  :reply payload: (postcopy only) memory regions description
 
   Sets the memory map regions on the slave so it can translate the
   vring addresses. In the ancillary data there is an array of file
@@ -961,8 +965,8 @@ Master message types
 ``VHOST_USER_SET_LOG_BASE``
   :id: 6
   :equivalent ioctl: ``VHOST_SET_LOG_BASE``
-  :master payload: u64
-  :slave payload: N/A
+  :request payload: u64
+  :reply payload: N/A
 
   Sets logging shared memory space.
 
@@ -974,44 +978,48 @@ Master message types
 ``VHOST_USER_SET_LOG_FD``
   :id: 7
   :equivalent ioctl: ``VHOST_SET_LOG_FD``
-  :master payload: N/A
+  :request payload: N/A
+  :reply payload: N/A
 
   Sets the logging file descriptor, which is passed as ancillary data.
 
 ``VHOST_USER_SET_VRING_NUM``
   :id: 8
   :equivalent ioctl: ``VHOST_SET_VRING_NUM``
-  :master payload: vring state description
+  :request payload: vring state description
+  :reply payload: N/A
 
   Set the size of the queue.
 
 ``VHOST_USER_SET_VRING_ADDR``
   :id: 9
   :equivalent ioctl: ``VHOST_SET_VRING_ADDR``
-  :master payload: vring address description
-  :slave payload: N/A
+  :request payload: vring address description
+  :reply payload: N/A
 
   Sets the addresses of the different aspects of the vring.
 
 ``VHOST_USER_SET_VRING_BASE``
   :id: 10
   :equivalent ioctl: ``VHOST_SET_VRING_BASE``
-  :master payload: vring state description
+  :request payload: vring state description
+  :reply payload: N/A
 
   Sets the base offset in the available vring.
 
 ``VHOST_USER_GET_VRING_BASE``
   :id: 11
   :equivalent ioctl: ``VHOST_USER_GET_VRING_BASE``
-  :master payload: vring state description
-  :slave payload: vring state description
+  :request payload: vring state description
+  :reply payload: vring state description
 
   Get the available vring base offset.
 
 ``VHOST_USER_SET_VRING_KICK``
   :id: 12
   :equivalent ioctl: ``VHOST_SET_VRING_KICK``
-  :master payload: ``u64``
+  :request payload: ``u64``
+  :reply payload: N/A
 
   Set the event file descriptor for adding buffers to the vring. It is
   passed in the ancillary data.
@@ -1029,7 +1037,8 @@ Master message types
 

[PATCH v1 06/13] docs: vhost-user: replace master/slave with front-end/back-end

2022-03-21 Thread Alex Bennée
From: Paolo Bonzini 

This matches the nomenclature that is generally used.  Also commonly used
is client/server, but it is not as clear because sometimes the front-end
exposes a passive (server) socket that the back-end connects to.

Signed-off-by: Paolo Bonzini 
Message-Id: <20210226143413.188046-4-pbonz...@redhat.com>
---
 docs/interop/vhost-user-gpu.rst |  10 +-
 docs/interop/vhost-user.rst | 342 
 2 files changed, 176 insertions(+), 176 deletions(-)

diff --git a/docs/interop/vhost-user-gpu.rst b/docs/interop/vhost-user-gpu.rst
index 71a2c52b31..1640553729 100644
--- a/docs/interop/vhost-user-gpu.rst
+++ b/docs/interop/vhost-user-gpu.rst
@@ -13,10 +13,10 @@ Introduction
 
 
 The vhost-user-gpu protocol is aiming at sharing the rendering result
-of a virtio-gpu, done from a vhost-user slave process to a vhost-user
-master process (such as QEMU). It bears a resemblance to a display
+of a virtio-gpu, done from a vhost-user back-end process to a vhost-user
+front-end process (such as QEMU). It bears a resemblance to a display
 server protocol, if you consider QEMU as the display server and the
-slave as the client, but in a very limited way. Typically, it will
+back-end as the client, but in a very limited way. Typically, it will
 work by setting a scanout/display configuration, before sending flush
 events for the display updates. It will also update the cursor shape
 and position.
@@ -26,8 +26,8 @@ socket ancillary data to share opened file descriptors 
(DMABUF fds or
 shared memory). The socket is usually obtained via
 ``VHOST_USER_GPU_SET_SOCKET``.
 
-Requests are sent by the *slave*, and the optional replies by the
-*master*.
+Requests are sent by the *back-end*, and the optional replies by the
+*front-end*.
 
 Wire format
 ===
diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 694a113e59..08c4bf2ef7 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -23,19 +23,19 @@ space process on the same host. It uses communication over 
a Unix
 domain socket to share file descriptors in the ancillary data of the
 message.
 
-The protocol defines 2 sides of the communication, *master* and
-*slave*. *Master* is the application that shares its virtqueues, in
-our case QEMU. *Slave* is the consumer of the virtqueues.
+The protocol defines 2 sides of the communication, *front-end* and
+*back-end*. The *front-end* is the application that shares its virtqueues, in
+our case QEMU. The *back-end* is the consumer of the virtqueues.
 
-In the current implementation QEMU is the *master*, and the *slave* is
-the external process consuming the virtio queues, for example a
+In the current implementation QEMU is the *front-end*, and the *back-end*
+is the external process consuming the virtio queues, for example a
 software Ethernet switch running in user space, such as Snabbswitch,
-or a block device backend processing read & write to a virtual
-disk. In order to facilitate interoperability between various backend
+or a block device back-end processing read & write to a virtual
+disk. In order to facilitate interoperability between various back-end
 implementations, it is recommended to follow the :ref:`Backend program
 conventions `.
 
-*Master* and *slave* can be either a client (i.e. connecting) or
+The *front-end* and *back-end* can be either a client (i.e. connecting) or
 server (listening) in the socket communication.
 
 Support for platforms other than Linux
@@ -77,7 +77,7 @@ Header
 :flags: 32-bit bit field
 
 - Lower 2 bits are the version (currently 0x01)
-- Bit 2 is the reply flag - needs to be sent on each reply from the slave
+- Bit 2 is the reply flag - needs to be sent on each reply from the back-end
 - Bit 3 is the need_reply flag - see :ref:`REPLY_ACK ` for
   details.
 
@@ -222,8 +222,8 @@ Virtio device config space
 :size: a 32-bit configuration space access size in bytes
 
 :flags: a 32-bit value:
-  - 0: Vhost master messages used for writeable fields
-  - 1: Vhost master messages used for live migration
+  - 0: Vhost front-end messages used for writeable fields
+  - 1: Vhost front-end messages used for live migration
 
 :payload: Size bytes array holding the contents of the virtio
   device's configuration space
@@ -290,8 +290,8 @@ vhost for the Linux Kernel. Most messages that can be sent 
via the
 Unix domain socket implementing vhost-user have an equivalent ioctl to
 the kernel implementation.
 
-The communication consists of *master* sending message requests and
-*slave* sending message replies. Most of the requests don't require
+The communication consists of the *front-end* sending message requests and
+the *back-end* sending message replies. Most of the requests don't require
 replies. Here is a list of the ones that do:
 
 * ``VHOST_USER_GET_FEATURES``
@@ -305,7 +305,7 @@ replies. Here is a list of the ones that do:
:ref:`REPLY_ACK `
The section on ``REPLY_ACK`` 

[PATCH v1 00/13] various virtio docs, fixes and tweaks

2022-03-21 Thread Alex Bennée
Hi,

This series is a sub-set of patches collected while I was trying to re-rev my
virtio-rpmb patches. It attempts to address a few things:

  - improve documentation for virtio/vhost/vhost-user
  - document some of the API
  - a hacky fix for F_CONFIG handling
  - putting VhostUserState on a diet, make VhostUserHostNotifier dynamic

In particular I've been trying to better understand how vhost-user
interactions are meant to work and why there are two different methods
for instantiating them. If my supposition is correct perhaps a number
of devices that don't have in-kernel vhost equivalents could be converted?

While working on the VhostUserHostNotifier changes I found it quite
hard to trigger the code. Is this rarely used code, or does it just require
backends we don't exercise in our testing?

Alex Bennée (10):
  hw/virtio: move virtio-pci.h into shared include space
  virtio-pci: add notification trace points
  hw/virtio: add vhost_user_[read|write] trace points
  vhost-user.rst: add clarifying language about protocol negotiation
  libvhost-user: expose vu_request_to_string
  docs/devel: start documenting writing VirtIO devices
  include/hw: start documenting the vhost API
  contrib/vhost-user-blk: fix 32 bit build and enable
  hw/virtio/vhost-user: don't suppress F_CONFIG when supported
  virtio/vhost-user: dynamically assign VhostUserHostNotifiers

Paolo Bonzini (3):
  docs: vhost-user: clean up request/reply description
  docs: vhost-user: rewrite section on ring state machine
  docs: vhost-user: replace master/slave with front-end/back-end

 docs/devel/index-internals.rst|   1 +
 docs/devel/virtio-backends.rst| 214 +
 docs/interop/vhost-user-gpu.rst   |  10 +-
 docs/interop/vhost-user.rst   | 555 --
 meson.build   |   2 +-
 include/hw/virtio/vhost-user.h|  43 +-
 include/hw/virtio/vhost.h | 132 -
 {hw => include/hw}/virtio/virtio-pci.h|   0
 subprojects/libvhost-user/libvhost-user.h |   9 +
 contrib/vhost-user-blk/vhost-user-blk.c   |   6 +-
 hw/scsi/vhost-user-scsi.c |   1 +
 hw/virtio/vhost-scsi-pci.c|   2 +-
 hw/virtio/vhost-user-blk-pci.c|   2 +-
 hw/virtio/vhost-user-fs-pci.c |   2 +-
 hw/virtio/vhost-user-i2c-pci.c|   2 +-
 hw/virtio/vhost-user-input-pci.c  |   2 +-
 hw/virtio/vhost-user-rng-pci.c|   2 +-
 hw/virtio/vhost-user-scsi-pci.c   |   2 +-
 hw/virtio/vhost-user-vsock-pci.c  |   2 +-
 hw/virtio/vhost-user.c| 133 --
 hw/virtio/vhost-vsock-pci.c   |   2 +-
 hw/virtio/virtio-9p-pci.c |   2 +-
 hw/virtio/virtio-balloon-pci.c|   2 +-
 hw/virtio/virtio-blk-pci.c|   2 +-
 hw/virtio/virtio-input-host-pci.c |   2 +-
 hw/virtio/virtio-input-pci.c  |   2 +-
 hw/virtio/virtio-iommu-pci.c  |   2 +-
 hw/virtio/virtio-net-pci.c|   2 +-
 hw/virtio/virtio-pci.c|   5 +-
 hw/virtio/virtio-rng-pci.c|   2 +-
 hw/virtio/virtio-scsi-pci.c   |   2 +-
 hw/virtio/virtio-serial-pci.c |   2 +-
 subprojects/libvhost-user/libvhost-user.c |   2 +-
 contrib/vhost-user-blk/meson.build|   3 +-
 hw/virtio/trace-events|  10 +-
 35 files changed, 831 insertions(+), 333 deletions(-)
 create mode 100644 docs/devel/virtio-backends.rst
 rename {hw => include/hw}/virtio/virtio-pci.h (100%)

-- 
2.30.2




[PATCH v2] gitlab: disable accelerated zlib for s390x

2022-03-21 Thread Alex Bennée
There appears to be a bug in the s390 hardware-accelerated version of
zlib distributed with Ubuntu 20.04, which makes our test
/i386/migration/multifd/tcp/zlib hit an assertion perhaps one time in
10. Fortunately zlib provides an escape hatch where we can disable the
hardware-acceleration entirely by setting the environment variable
DFLTCC to 0. Do this on all our CI which runs on s390 hosts, both our
custom gitlab runner and also the Travis hosts.

Signed-off-by: Alex Bennée 
Cc: Peter Maydell 

---
v2
  - more complete commit wording from Peter
  - also tweak travis rules
---
 .gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml | 12 
 .travis.yml|  6 --
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml 
b/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
index 0333872113..4f292a8a5b 100644
--- a/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
+++ b/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
@@ -8,6 +8,8 @@ ubuntu-20.04-s390x-all-linux-static:
  tags:
  - ubuntu_20.04
  - s390x
+ variables:
+DFLTCC: 0
  rules:
  - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
  - if: "$S390X_RUNNER_AVAILABLE"
@@ -27,6 +29,8 @@ ubuntu-20.04-s390x-all:
  tags:
  - ubuntu_20.04
  - s390x
+ variables:
+DFLTCC: 0
  rules:
  - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
  - if: "$S390X_RUNNER_AVAILABLE"
@@ -43,6 +47,8 @@ ubuntu-20.04-s390x-alldbg:
  tags:
  - ubuntu_20.04
  - s390x
+ variables:
+DFLTCC: 0
  rules:
  - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
when: manual
@@ -64,6 +70,8 @@ ubuntu-20.04-s390x-clang:
  tags:
  - ubuntu_20.04
  - s390x
+ variables:
+DFLTCC: 0
  rules:
  - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
when: manual
@@ -84,6 +92,8 @@ ubuntu-20.04-s390x-tci:
  tags:
  - ubuntu_20.04
  - s390x
+ variables:
+DFLTCC: 0
  rules:
  - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
when: manual
@@ -103,6 +113,8 @@ ubuntu-20.04-s390x-notcg:
  tags:
  - ubuntu_20.04
  - s390x
+ variables:
+DFLTCC: 0
  rules:
  - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
when: manual
diff --git a/.travis.yml b/.travis.yml
index c3c8048842..9afc4a54b8 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -218,6 +218,7 @@ jobs:
 - TEST_CMD="make check check-tcg V=1"
 - CONFIG="--disable-containers 
--target-list=${MAIN_SOFTMMU_TARGETS},s390x-linux-user"
 - UNRELIABLE=true
+- DFLTCC=0
   script:
 - BUILD_RC=0 && make -j${JOBS} || BUILD_RC=$?
 - |
@@ -257,7 +258,7 @@ jobs:
   env:
 - CONFIG="--disable-containers --audio-drv-list=sdl --disable-user
   --target-list-exclude=${MAIN_SOFTMMU_TARGETS}"
-
+- DFLTCC=0
 - name: "[s390x] GCC (user)"
   arch: s390x
   dist: focal
@@ -269,7 +270,7 @@ jobs:
   - ninja-build
   env:
 - CONFIG="--disable-containers --disable-system"
-
+- DFLTCC=0
 - name: "[s390x] Clang (disable-tcg)"
   arch: s390x
   dist: focal
@@ -303,3 +304,4 @@ jobs:
 - CONFIG="--disable-containers --disable-tcg --enable-kvm
   --disable-tools --host-cc=clang --cxx=clang++"
 - UNRELIABLE=true
+- DFLTCC=0
-- 
2.30.2




[PULL 4/4] Use g_new() & friends where that makes obvious sense

2022-03-21 Thread Markus Armbruster
g_new(T, n) is neater than g_malloc(sizeof(T) * n).  It's also safer,
for two reasons.  One, it catches multiplication overflowing size_t.
Two, it returns T * rather than void *, which lets the compiler catch
more type errors.

This commit only touches allocations with size arguments of the form
sizeof(T).

Patch created mechanically with:

$ spatch --in-place --sp-file scripts/coccinelle/use-g_new-etc.cocci \
 --macro-file scripts/cocci-macro-file.h FILES...
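
To illustrate the difference (hypothetical Foo and Bar types, not taken
from this patch):

    #include <glib.h>

    typedef struct Foo { int a; } Foo;
    typedef struct Bar { int b; } Bar;

    void example(gsize n)
    {
        /* Compiles silently: the void * return hides the type mix-up,
         * and sizeof(Foo) * n can overflow size_t. */
        Bar *bad = g_malloc(sizeof(Foo) * n);

        /* g_new() checks the multiplication and returns Foo *, so a
         * line like "Bar *b = g_new(Foo, n);" is rejected by the
         * compiler. */
        Foo *good = g_new(Foo, n);

        g_free(bad);
        g_free(good);
    }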

Signed-off-by: Markus Armbruster 
Reviewed-by: Philippe Mathieu-Daudé 
Reviewed-by: Cédric Le Goater 
Reviewed-by: Alex Bennée 
Acked-by: Dr. David Alan Gilbert 
Message-Id: <20220315144156.1595462-4-arm...@redhat.com>
Reviewed-by: Pavel Dovgalyuk 
---
 include/qemu/timer.h |  2 +-
 accel/kvm/kvm-all.c  |  6 ++--
 accel/tcg/tcg-accel-ops-mttcg.c  |  2 +-
 accel/tcg/tcg-accel-ops-rr.c |  4 +--
 audio/audio.c|  4 +--
 audio/audio_legacy.c |  6 ++--
 audio/dsoundaudio.c  |  2 +-
 audio/jackaudio.c|  6 ++--
 audio/paaudio.c  |  4 +--
 backends/cryptodev.c |  2 +-
 contrib/vhost-user-gpu/vhost-user-gpu.c  |  2 +-
 cpus-common.c|  4 +--
 dump/dump.c  |  2 +-
 hw/acpi/hmat.c   |  2 +-
 hw/audio/intel-hda.c |  2 +-
 hw/char/parallel.c   |  2 +-
 hw/char/riscv_htif.c |  2 +-
 hw/char/virtio-serial-bus.c  |  6 ++--
 hw/core/irq.c|  2 +-
 hw/core/reset.c  |  2 +-
 hw/display/pxa2xx_lcd.c  |  2 +-
 hw/display/tc6393xb.c|  2 +-
 hw/display/virtio-gpu.c  |  4 +--
 hw/display/xenfb.c   |  4 +--
 hw/dma/rc4030.c  |  4 +--
 hw/i2c/core.c|  4 +--
 hw/i2c/i2c_mux_pca954x.c |  2 +-
 hw/i386/amd_iommu.c  |  4 +--
 hw/i386/intel_iommu.c|  2 +-
 hw/i386/xen/xen-hvm.c| 10 +++---
 hw/i386/xen/xen-mapcache.c   | 14 
 hw/input/lasips2.c   |  2 +-
 hw/input/pckbd.c |  2 +-
 hw/input/ps2.c   |  4 +--
 hw/input/pxa2xx_keypad.c |  2 +-
 hw/input/tsc2005.c   |  3 +-
 hw/intc/riscv_aclint.c   |  6 ++--
 hw/intc/xics.c   |  2 +-
 hw/m68k/virt.c   |  2 +-
 hw/mips/mipssim.c|  2 +-
 hw/misc/applesmc.c   |  2 +-
 hw/misc/imx6_src.c   |  2 +-
 hw/misc/ivshmem.c|  4 +--
 hw/net/virtio-net.c  |  4 +--
 hw/nvme/ns.c |  2 +-
 hw/pci-host/pnv_phb3.c   |  2 +-
 hw/pci-host/pnv_phb4.c   |  2 +-
 hw/pci/pcie_sriov.c  |  2 +-
 hw/ppc/e500.c|  2 +-
 hw/ppc/ppc.c |  8 ++---
 hw/ppc/ppc405_boards.c   |  4 +--
 hw/ppc/ppc405_uc.c   | 18 +-
 hw/ppc/ppc4xx_devs.c |  2 +-
 hw/ppc/ppc_booke.c   |  4 +--
 hw/ppc/spapr.c   |  2 +-
 hw/ppc/spapr_events.c|  2 +-
 hw/ppc/spapr_hcall.c |  2 +-
 hw/ppc/spapr_numa.c  |  3 +-
 hw/rdma/vmw/pvrdma_dev_ring.c|  2 +-
 hw/rdma/vmw/pvrdma_qp_ops.c  |  6 ++--
 hw/sh4/r2d.c |  4 +--
 hw/sh4/sh7750.c  |  2 +-
 hw/sparc/leon3.c |  2 +-
 hw/sparc64/sparc64.c |  4 +--
 hw/timer/arm_timer.c |  2 +-
 hw/timer/slavio_timer.c  |  2 +-
 hw/vfio/pci.c|  4 +--
 hw/vfio/platform.c   |  4 +--
 hw/virtio/virtio-crypto.c|  2 +-
 hw/virtio/virtio-iommu.c |  2 +-
 hw/virtio/virtio.c   |  5 ++-
 hw/xtensa/xtfpga.c   |  2 +-
 linux-user/syscall.c |  2 +-
 migration/dirtyrate.c|  4 +--
 migration/multifd-zlib.c |  4 +--
 migration/ram.c  |  2 +-
 monitor/misc.c   |  2 +-
 monitor/qmp-cmds.c   |  2 +-
 qga/commands-win32.c |  8 ++---
 qga/commands.c   |  2 +-
 qom/qom-qmp-cmds.c   |  2 +-
 replay/replay-char.c |  4 +--
 replay/replay-events.c   | 10 +++---
 softmmu/bootdevice.c |  4 +--
 

Re: [PATCH v21 0/9] support dirty restraint on vCPU

2022-03-21 Thread Hyman Huang

Ping

Hi!
I think this patchset is worth merging, not just
because it provides interfaces for limiting the dirty page
rate, but also because it lays the foundation for the dirtylimit
capability of live migration, which is implemented in the
following repo:
https://github.com/newfriday/qemu/tree/migration_dirtylimit_v1

I compared the auto-converge and dirtylimit capabilities in
live migration from 3 aspects: total time, guest unixbench score,
and guest memory performance, and the results are satisfactory.

To explain the results in plain terms:

For migration total time:
Dirtylimit reduces it by 30% - 50% and can achieve convergence
more easily than auto-converge in large vCPU count scenarios.

For the unixbench test, I run commands in a VM like this:
taskset -c 0-1 --vm {N}
taskset -c 8-15 ./Run {CASE} -i 2 -c 8

And almost all the test case scores (dhry2reg, pipe, context1, ...)
improve in dirtylimit migration.

For guest performance, I use the qemu guestperf tool to test.
The memory update rate curve of dirtylimit
is similar to auto-converge. The difference is that dirtylimit
memory performance drops earlier and faster than auto-converge.

What do you think of this?

On 2022/3/16 21:07, huang...@chinatelecom.cn wrote:

From: Hyman Huang(黄勇) 

v21
- remove the tmpfs declarations in header file and
   test case should use tmpfs as internal var respectively.

v20
- fix the style problems and let QEMU test pass
- change the dirty limit case logic:
   test fail if dirtyrate measurement 200ms timeout

v19
- rebase on master and fix conflicts
- add test case for dirty page rate limit

Ping.

Adding an test case and hope it can be merged along with previous
patchset by the way.

Please review. Thanks,

Regards
Yong

v18
- squash commit "Ignore query-vcpu-dirty-limit test" into
   "Implement dirty page rate limit" in  [PATCH v17] to make
   the modification logic self-contained.

Please review. Thanks,

Regards
Yong

v17
- rebase on master
- fix qmp-cmd-test

v16
- rebase on master
- drop the unused typedef syntax in [PATCH v15 6/7]
- add the Reviewed-by and Acked-by tags by the way

v15
- rebase on master
- drop the 'init_time_ms' parameter in function vcpu_calculate_dirtyrate
- drop the 'setup' field in dirtylimit_state and call dirtylimit_process
   directly, which makes code cleaner.
- code clean in dirtylimit_adjust_throttle
- fix miss dirtylimit_state_unlock() in dirtylimit_process and
   dirtylimit_query_all
- add some comment

Please review. Thanks,

Regards
Yong

v14
- v13 sent by accident, resend patchset.

v13
- rebase on master
- passing NULL to kvm_dirty_ring_reap in commit
   "refactor per-vcpu dirty ring reaping" to keep the logic unchanged.
   In other word, we still try the best to reap as much PFNs as possible
   if dirtylimit not in service.
- move the cpu list gen id changes into a separate patch.
- release the lock before sleep during dirty page rate calculation.
- move the dirty ring size fetch logic into a separate patch.
- drop the DIRTYLIMIT_LINEAR_ADJUSTMENT_WATERMARK MACRO .
- substitute bh with function pointer when implement dirtylimit.
- merge the dirtylimit_start/stop into dirtylimit_change.
- fix "cpu-index" parameter type with "int" to keep consistency.
- fix some syntax error in documents.

Please review. Thanks,

Yong

v12
- rebase on master
- add a new commit to refactor per-vcpu dirty ring reaping, which can resolve
   the "vcpu miss the chances to sleep" problem
- remove the dirtylimit_thread and implemtment throttle in bottom half instead.
- let the dirty ring reaper thread keep sleeping when dirtylimit is in service
- introduce cpu_list_generation_id to identify cpu_list changing.
- keep taking the cpu_list_lock during dirty_stat_wait to prevent vcpu 
plug/unplug
   when calculating the dirty page rate
- move the dirtylimit global initializations out of dirtylimit_set_vcpu and do
   some code clean
- add DIRTYLIMIT_LINEAR_ADJUSTMENT_WATERMARK in case of oscillation when 
throttling
- remove the unmatched count field in dirtylimit_state
- add stub to fix build on non-x86
- refactor the documents

Thanks Peter and Markus for reviewing the previous versions, please review.

Thanks,
Yong

v11
- rebase on master
- add a commit " refactor dirty page rate calculation"  so that dirty page rate 
limit
   can reuse the calculation logic.
- handle the cpu hotplug/unplug case in the dirty page rate calculation logic.
- modify the qmp commands according to Markus's advice.
- introduce a standalone file dirtylimit.c to implement dirty page rate limit
- check if dirty limit in service by dirtylimit_state pointer instead of global 
variable
- introduce dirtylimit_mutex to protect dirtylimit_state
- do some code clean and docs

See the commit for more detail, thanks Markus and Peter very mush for the code
review and give the experienced and insightful advices, most modifications are
based on these advices.

v10:
- rebase on master
- make the following modifications on patch [1/3]:
   1. Make "dirtylimit-calc" thread joinable 

[PATCH v1 01/13] hw/virtio: move virtio-pci.h into shared include space

2022-03-21 Thread Alex Bennée
This allows other device classes that will be exposed via PCI to be
able to do so in the appropriate hw/ directory. I resisted the
temptation to re-order headers to be more aesthetically pleasing.

Signed-off-by: Alex Bennée 
Message-Id: <20200925125147.26943-4-alex.ben...@linaro.org>

---
v2
  - add i2c/rng device to changes
---
 {hw => include/hw}/virtio/virtio-pci.h | 0
 hw/virtio/vhost-scsi-pci.c | 2 +-
 hw/virtio/vhost-user-blk-pci.c | 2 +-
 hw/virtio/vhost-user-fs-pci.c  | 2 +-
 hw/virtio/vhost-user-i2c-pci.c | 2 +-
 hw/virtio/vhost-user-input-pci.c   | 2 +-
 hw/virtio/vhost-user-rng-pci.c | 2 +-
 hw/virtio/vhost-user-scsi-pci.c| 2 +-
 hw/virtio/vhost-user-vsock-pci.c   | 2 +-
 hw/virtio/vhost-vsock-pci.c| 2 +-
 hw/virtio/virtio-9p-pci.c  | 2 +-
 hw/virtio/virtio-balloon-pci.c | 2 +-
 hw/virtio/virtio-blk-pci.c | 2 +-
 hw/virtio/virtio-input-host-pci.c  | 2 +-
 hw/virtio/virtio-input-pci.c   | 2 +-
 hw/virtio/virtio-iommu-pci.c   | 2 +-
 hw/virtio/virtio-net-pci.c | 2 +-
 hw/virtio/virtio-pci.c | 2 +-
 hw/virtio/virtio-rng-pci.c | 2 +-
 hw/virtio/virtio-scsi-pci.c| 2 +-
 hw/virtio/virtio-serial-pci.c  | 2 +-
 21 files changed, 20 insertions(+), 20 deletions(-)
 rename {hw => include/hw}/virtio/virtio-pci.h (100%)

diff --git a/hw/virtio/virtio-pci.h b/include/hw/virtio/virtio-pci.h
similarity index 100%
rename from hw/virtio/virtio-pci.h
rename to include/hw/virtio/virtio-pci.h
diff --git a/hw/virtio/vhost-scsi-pci.c b/hw/virtio/vhost-scsi-pci.c
index cb71a294fa..08980bc23b 100644
--- a/hw/virtio/vhost-scsi-pci.c
+++ b/hw/virtio/vhost-scsi-pci.c
@@ -21,7 +21,7 @@
 #include "hw/virtio/vhost-scsi.h"
 #include "qapi/error.h"
 #include "qemu/module.h"
-#include "virtio-pci.h"
+#include "hw/virtio/virtio-pci.h"
 #include "qom/object.h"
 
 typedef struct VHostSCSIPCI VHostSCSIPCI;
diff --git a/hw/virtio/vhost-user-blk-pci.c b/hw/virtio/vhost-user-blk-pci.c
index 33b404d8a2..eef8641a98 100644
--- a/hw/virtio/vhost-user-blk-pci.c
+++ b/hw/virtio/vhost-user-blk-pci.c
@@ -26,7 +26,7 @@
 #include "qapi/error.h"
 #include "qemu/error-report.h"
 #include "qemu/module.h"
-#include "virtio-pci.h"
+#include "hw/virtio/virtio-pci.h"
 #include "qom/object.h"
 
 typedef struct VHostUserBlkPCI VHostUserBlkPCI;
diff --git a/hw/virtio/vhost-user-fs-pci.c b/hw/virtio/vhost-user-fs-pci.c
index 2ed8492b3f..6829b8b743 100644
--- a/hw/virtio/vhost-user-fs-pci.c
+++ b/hw/virtio/vhost-user-fs-pci.c
@@ -14,7 +14,7 @@
 #include "qemu/osdep.h"
 #include "hw/qdev-properties.h"
 #include "hw/virtio/vhost-user-fs.h"
-#include "virtio-pci.h"
+#include "hw/virtio/virtio-pci.h"
 #include "qom/object.h"
 
 struct VHostUserFSPCI {
diff --git a/hw/virtio/vhost-user-i2c-pci.c b/hw/virtio/vhost-user-i2c-pci.c
index 70b7b65fd9..00ac10941f 100644
--- a/hw/virtio/vhost-user-i2c-pci.c
+++ b/hw/virtio/vhost-user-i2c-pci.c
@@ -9,7 +9,7 @@
 #include "qemu/osdep.h"
 #include "hw/qdev-properties.h"
 #include "hw/virtio/vhost-user-i2c.h"
-#include "virtio-pci.h"
+#include "hw/virtio/virtio-pci.h"
 
 struct VHostUserI2CPCI {
 VirtIOPCIProxy parent_obj;
diff --git a/hw/virtio/vhost-user-input-pci.c b/hw/virtio/vhost-user-input-pci.c
index c9d3e9113a..b858898a36 100644
--- a/hw/virtio/vhost-user-input-pci.c
+++ b/hw/virtio/vhost-user-input-pci.c
@@ -9,7 +9,7 @@
 #include "hw/virtio/virtio-input.h"
 #include "qapi/error.h"
 #include "qemu/error-report.h"
-#include "virtio-pci.h"
+#include "hw/virtio/virtio-pci.h"
 #include "qom/object.h"
 
 typedef struct VHostUserInputPCI VHostUserInputPCI;
diff --git a/hw/virtio/vhost-user-rng-pci.c b/hw/virtio/vhost-user-rng-pci.c
index c83dc86813..f64935453b 100644
--- a/hw/virtio/vhost-user-rng-pci.c
+++ b/hw/virtio/vhost-user-rng-pci.c
@@ -9,7 +9,7 @@
 #include "qemu/osdep.h"
 #include "hw/qdev-properties.h"
 #include "hw/virtio/vhost-user-rng.h"
-#include "virtio-pci.h"
+#include "hw/virtio/virtio-pci.h"
 
 struct VHostUserRNGPCI {
 VirtIOPCIProxy parent_obj;
diff --git a/hw/virtio/vhost-user-scsi-pci.c b/hw/virtio/vhost-user-scsi-pci.c
index d5343412a1..75882e3cf9 100644
--- a/hw/virtio/vhost-user-scsi-pci.c
+++ b/hw/virtio/vhost-user-scsi-pci.c
@@ -30,7 +30,7 @@
 #include "hw/pci/msix.h"
 #include "hw/loader.h"
 #include "sysemu/kvm.h"
-#include "virtio-pci.h"
+#include "hw/virtio/virtio-pci.h"
 #include "qom/object.h"
 
 typedef struct VHostUserSCSIPCI VHostUserSCSIPCI;
diff --git a/hw/virtio/vhost-user-vsock-pci.c b/hw/virtio/vhost-user-vsock-pci.c
index 72a96199cd..e5a86e8013 100644
--- a/hw/virtio/vhost-user-vsock-pci.c
+++ b/hw/virtio/vhost-user-vsock-pci.c
@@ -10,7 +10,7 @@
 
 #include "qemu/osdep.h"
 
-#include "virtio-pci.h"
+#include "hw/virtio/virtio-pci.h"
 #include "hw/qdev-properties.h"
 #include "hw/virtio/vhost-user-vsock.h"
 #include "qom/object.h"
diff --git a/hw/virtio/vhost-vsock-pci.c 

[PATCH v1 03/13] hw/virtio: add vhost_user_[read|write] trace points

2022-03-21 Thread Alex Bennée
These are useful when trying to debug the initial vhost-user
negotiation, especially when it is hard to get logging from the low-level
library on the other side.

Signed-off-by: Alex Bennée 

---
v2
  - fixed arguments
---
 hw/virtio/vhost-user.c | 4 
 hw/virtio/trace-events | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 6abbc9da32..b27b8c56e2 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -489,6 +489,8 @@ static int vhost_user_write(struct vhost_dev *dev, 
VhostUserMsg *msg,
 return ret < 0 ? -saved_errno : -EIO;
 }
 
+trace_vhost_user_write(msg->hdr.request, msg->hdr.flags);
+
 return 0;
 }
 
@@ -542,6 +544,8 @@ static int vhost_user_set_log_base(struct vhost_dev *dev, 
uint64_t base,
 }
 }
 
+trace_vhost_user_read(msg.hdr.request, msg.hdr.flags);
+
 return 0;
 }
 
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 46851a7cd1..fd213e2a27 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -21,6 +21,8 @@ vhost_user_set_mem_table_withfd(int index, const char *name, 
uint64_t memory_siz
 vhost_user_postcopy_waker(const char *rb, uint64_t rb_offset) "%s + 0x%"PRIx64
 vhost_user_postcopy_waker_found(uint64_t client_addr) "0x%"PRIx64
 vhost_user_postcopy_waker_nomatch(const char *rb, uint64_t rb_offset) "%s + 
0x%"PRIx64
+vhost_user_read(uint32_t req, uint32_t flags) "req:%d flags:0x%"PRIx32""
+vhost_user_write(uint32_t req, uint32_t flags) "req:%d flags:0x%"PRIx32""
 
 # vhost-vdpa.c
 vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint64_t iova, 
uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa:%p fd: %d 
msg_type: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" 
perm: 0x%"PRIx8" type: %"PRIu8
-- 
2.30.2




Re: [RFC PATCH 0/5] Removal of AioContext lock, bs->parents and ->children: proof of concept

2022-03-21 Thread Vladimir Sementsov-Ogievskiy

09.03.2022 16:26, Emanuele Giuseppe Esposito wrote:


On 02/03/2022 at 12:07, Vladimir Sementsov-Ogievskiy wrote:

01.03.2022 17:21, Emanuele Giuseppe Esposito wrote:

This series tries to provide a proof of concept and a clear explanation
of why we need to use drains (and more precisely subtree_drains)
to replace the aiocontext lock, especially to protect BlockDriverState
->children and ->parent lists.

Just a small recap on the key concepts:
* We split block layer APIs into "global state" (GS), "I/O", and
"global state or I/O".
    GS APIs run in the main loop, under the BQL, and are the only
    ones allowed to modify the BlockDriverState graph.

    I/O APIs are thread safe and can run in any thread

    "global state or I/O" are essentially all APIs that use
    BDRV_POLL_WHILE. This is because there can be only 2 threads
    that can use BDRV_POLL_WHILE: main loop and the iothread that
    runs the aiocontext.

* Drains allow the caller (either the main loop or the iothread running
the context) to wait for all in-flight requests and operations
of a BDS: normal drains target a given node and its parents, while
subtree ones also include the subgraph of the node. Siblings are
not affected by either of these two kinds of drains.
After bdrv_drained_begin, no more requests are allowed to come
from the affected nodes. Therefore the only actor left working
on a drained part of the graph should be the main loop.
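
(For illustration only, not code from this series: a minimal sketch of a
drained section as seen from the main loop, using the existing
bdrv_subtree_drained_begin/end helpers and assuming the BQL is held.)

    #include "qemu/osdep.h"
    #include "block/block.h"

    static void change_graph_under_drain(BlockDriverState *bs)
    {
        /* Wait for all in-flight requests on bs and its subtree and
         * block new ones from being submitted. */
        bdrv_subtree_drained_begin(bs);

        /* ... graph modification (e.g. re-parenting a child) happens
         * here with no I/O racing against it ... */

        /* Allow request processing to resume. */
        bdrv_subtree_drained_end(bs);
    }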

What do we intend to do
---
We want to remove the AioContext lock. It is not 100% clear on how
many things we are protecting with it, and why.
As a starter, we want to protect BlockDriverState ->parents and
->children lists, since they are read by main loop and I/O but
only written by main loop under BQL. The function that modifies
these lists is bdrv_replace_child_common().

How do we want to do it
---
We identified the subtree_drain API as the ideal substitute for the
AioContext lock. The reason is simple: draining prevents the
iothread from reading or writing the nodes, so once the main loop finishes

I'm not sure it's ideal. Unfortunately I'm not really good with all that
BQL, AioContext locks and drains. So, I can't give good advice. But here
are my doubts:

Draining is a very restrictive measure: even if the drained section is very
short, at least on bdrv_drained_begin() we have to wait for all current
requests and not start new ones. That slows down the guest.

I don't think we are in a critical path where performance is important here.


At the same time there are operations that don't require stopping guest IO
requests. For example manipulation of dirty bitmaps - the QMP commands
block-dirty-bitmap-add and block-dirty-bitmap-remove. Or different query
requests..


Maybe you misunderstood or I was not 100% clear, but I am talking about replacing the 
AioContext lock for the ->parents and ->children instance. Not everywhere. This 
is the first step, and then we will see if the additional things that it protects can 
use drain or something else

  

I see only two real cases, where we do need drain:

1. When we need a consistent "point-in-time". For example, when we start
backup in transaction with some dirty-bitmap manipulation commands.

2. When we need to modify the block graph: if we are going to break a relation
A -> B, there must not be any in-flight request that wants to use this
relation.

That's the use case I am considering.

For all other operations, for which we want some kind of lock (like
the AioContext lock or something), we actually don't want to stop guest IO.

Yes, they have to be analyzed case by case.


Next, I have a problem in mind that in the past led to a lot of iotest 30
failures. There were then different fixes and improvements, but the core
problem (as far as I understand) is still here: nothing protects us when
we are in some graph modification process (for example block-job
finalization), yield, switch to another coroutine and enter another
graph modification process (for example, another block-job finalization)...

That's another point to consider. I don't really have a solution for this.


(for details look at my old "[PATCH RFC 0/5] Fix accidental crash in
iotest 30"
https://lists.nongnu.org/archive/html/qemu-devel/2020-11/msg05290.html  ,
where I suggested to add a global graph_modify_mutex CoMutex, to be held
during graph-modifying process that may yield)..
Does your proposal solve this problem?



executing bdrv_drained_begin() on the graph of interest, we are sure that
the iothread is not going to look at or interfere with that part of
the graph.
We are also sure that the only two actors that can look at a specific
BlockDriverState in any given context are the main loop and the
iothread running the AioContext (ensured by "global state or IO" logic).

Why use subtree_drain instead of normal drain
---
A simple drain "blocks" a given node and all its parents.
But it doesn't touch the child.
This means that if we use a simple drain, a child can always

[PATCH v1 13/13] virtio/vhost-user: dynamically assign VhostUserHostNotifiers

2022-03-21 Thread Alex Bennée
At a couple of hundred bytes per notifier, allocating one for every
potential queue is very wasteful as most devices only have a few
queues. Instead of handling this statically, dynamically assign the
notifiers and track them in a GPtrArray.

[AJB: it's hard to trigger the vhost notifiers code, I assume as it
requires a KVM guest with appropriate backend]
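
The rough idea, as a simplified sketch (not the patch itself; error
handling and locking omitted), is to grow the GPtrArray on demand and
allocate a notifier only the first time a queue index needs one:

    /* Types as declared in include/hw/virtio/vhost-user.h above;
     * assumes u->notifiers was created in vhost_user_init(). */
    static VhostUserHostNotifier *get_notifier(VhostUserState *u, int idx)
    {
        VhostUserHostNotifier *n;

        if (idx >= u->notifiers->len) {
            /* grow the array; new slots are initialised to NULL */
            g_ptr_array_set_size(u->notifiers, idx + 1);
        }

        n = g_ptr_array_index(u->notifiers, idx);
        if (!n) {
            n = g_new0(VhostUserHostNotifier, 1);
            n->idx = idx;
            g_ptr_array_index(u->notifiers, idx) = n;
        }
        return n;
    }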

Signed-off-by: Alex Bennée 
---
 include/hw/virtio/vhost-user.h | 42 -
 hw/virtio/vhost-user.c | 83 +++---
 hw/virtio/trace-events |  1 +
 3 files changed, 108 insertions(+), 18 deletions(-)

diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h
index 6e0e8a71a3..c6e693cd3f 100644
--- a/include/hw/virtio/vhost-user.h
+++ b/include/hw/virtio/vhost-user.h
@@ -11,21 +11,61 @@
 #include "chardev/char-fe.h"
 #include "hw/virtio/virtio.h"
 
+/**
+ * VhostUserHostNotifier - notifier information for one queue
+ * @rcu: rcu_head for cleanup
+ * @mr: memory region of notifier
+ * @addr: current mapped address
+ * @unmap_addr: address to be un-mapped
+ * @idx: virtioqueue index
+ *
+ * The VhostUserHostNotifier entries are re-used. When an old mapping
+ * is to be released it is moved to @unmap_addr and @addr is replaced.
+ * Once the RCU process has completed the unmap @unmap_addr is
+ * cleared.
+ */
 typedef struct VhostUserHostNotifier {
 struct rcu_head rcu;
 MemoryRegion mr;
 void *addr;
 void *unmap_addr;
+int idx;
 } VhostUserHostNotifier;
 
+/**
+ * VhostUserState - shared state for all vhost-user devices
+ * @chr: the character backend for the socket
+ * @notifiers: GPtrArray of @VhostUserHostnotifier
+ * @memory_slots:
+ */
 typedef struct VhostUserState {
 CharBackend *chr;
-VhostUserHostNotifier notifier[VIRTIO_QUEUE_MAX];
+GPtrArray *notifiers;
 int memory_slots;
 bool supports_config;
 } VhostUserState;
 
+/**
+ * vhost_user_init() - initialise shared vhost_user state
+ * @user: allocated area for storing shared state
+ * @chr: the chardev for the vhost socket
+ * @errp: error handle
+ *
+ * User can either directly g_new() space for the state or embed
+ * VhostUserState in their larger device structure and just point to
+ * it.
+ *
+ * Return: true on success, false on error while setting errp.
+ */
 bool vhost_user_init(VhostUserState *user, CharBackend *chr, Error **errp);
+
+/**
+ * vhost_user_cleanup() - cleanup state
+ * @user: ptr to use state
+ *
+ * Cleans up shared state and notifiers, callee is responsible for
+ * freeing the @VhostUserState memory itself.
+ */
 void vhost_user_cleanup(VhostUserState *user);
 
 #endif
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 6ce082861b..4c0423de55 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -1174,14 +1174,16 @@ static void 
vhost_user_host_notifier_free(VhostUserHostNotifier *n)
 n->unmap_addr = NULL;
 }
 
-static void vhost_user_host_notifier_remove(VhostUserState *user,
-VirtIODevice *vdev, int queue_idx)
+/*
+ * clean-up function for notifier, will finally free the structure
+ * under rcu.
+ */
+static void vhost_user_host_notifier_remove(VhostUserHostNotifier *n,
+VirtIODevice *vdev)
 {
-VhostUserHostNotifier *n = &user->notifier[queue_idx];
-
 if (n->addr) {
 if (vdev) {
-virtio_queue_set_host_notifier_mr(vdev, queue_idx, &n->mr, false);
+virtio_queue_set_host_notifier_mr(vdev, n->idx, &n->mr, false);
 }
 assert(!n->unmap_addr);
 n->unmap_addr = n->addr;
@@ -1225,6 +1227,15 @@ static int vhost_user_set_vring_enable(struct vhost_dev 
*dev, int enable)
 return 0;
 }
 
+static VhostUserHostNotifier *fetch_notifier(VhostUserState *u,
+ int idx)
+{
+if (idx >= u->notifiers->len) {
+return NULL;
+}
+return g_ptr_array_index(u->notifiers, idx);
+}
+
 static int vhost_user_get_vring_base(struct vhost_dev *dev,
  struct vhost_vring_state *ring)
 {
@@ -1237,7 +1248,10 @@ static int vhost_user_get_vring_base(struct vhost_dev 
*dev,
 };
 struct vhost_user *u = dev->opaque;
 
-vhost_user_host_notifier_remove(u->user, dev->vdev, ring->index);
+VhostUserHostNotifier *n = fetch_notifier(u->user, ring->index);
+if (n) {
+vhost_user_host_notifier_remove(n, dev->vdev);
+}
 
ret = vhost_user_write(dev, &msg, NULL, 0);
 if (ret < 0) {
@@ -1502,6 +1516,29 @@ static int vhost_user_slave_handle_config_change(struct 
vhost_dev *dev)
 return dev->config_ops->vhost_dev_config_notifier(dev);
 }
 
+/*
+ * Fetch or create the notifier for a given idx. Newly created
+ * notifiers are added to the pointer array that tracks them.
+ */
+static VhostUserHostNotifier *fetch_or_create_notifier(VhostUserState *u,
+   int idx)
+{
+

Re: [PATCH v3 06/11] target/s390x: vxeh2: vector {load, store} elements reversed

2022-03-21 Thread David Hildenbrand
On 21.03.22 16:35, Richard Henderson wrote:
> On 3/21/22 04:35, David Hildenbrand wrote:
>>> +/* Probe write access before actually modifying memory */
>>> +gen_helper_probe_write_access(cpu_env, o->addr1, tcg_constant_i64(16));
>>
>> We have to free the tcg_constant_i64() IIRC.
> 
> We do not.

Ah, then my memory is playing tricks on me :)

-- 
Thanks,

David / dhildenb




Re: [PATCH v4 00/18] iotests: add enhanced debugging info to qemu-img failures

2022-03-21 Thread Hanna Reitz

On 21.03.22 14:14, Hanna Reitz wrote:

On 18.03.22 22:14, John Snow wrote:

On Fri, Mar 18, 2022 at 9:36 AM Hanna Reitz  wrote:

On 18.03.22 00:49, John Snow wrote:

Hiya!

This series effectively replaces qemu_img_pipe_and_status() with a
rewritten function named qemu_img() that raises an exception on non-zero
return code by default. By the end of the series, every last invocation
of the qemu-img binary ultimately goes through qemu_img().

The exception that this function raises includes stdout/stderr output
when the traceback is printed in a little decorated text box so that
it stands out from the jargony Python traceback readout.

(You can test what this looks like for yourself, or at least you could,
by disabling zstd support and then running qcow2 iotest 065.)

Negative tests are still possible in two ways:

- Passing check=False to qemu_img, qemu_img_log, or img_info_log
- Catching and handling the CalledProcessError exception at the 
callsite.

Thanks!  Applied to my block branch:

https://gitlab.com/hreitz/qemu/-/commits/block

Hanna


Actually, hold it -- this looks like it is causing problems with the
Gitlab CI. I need to investigate these.
https://gitlab.com/jsnow/qemu/-/pipelines/495155073/failures

... and, ugh, naturally the nice error diagnostics are suppressed here
so I can't see them. Well, there's one more thing to try and fix
somehow.


I hope this patch by Thomas fixes the logging at least:

https://lists.nongnu.org/archive/html/qemu-devel/2022-03/msg02946.html


So I found three issues:

1. check-patch wrongfully complains about the comment added in
“python/utils: add add_visual_margin() text decoration utility” that
shows an example of how the output looks.  It complains that the lines
consisting mostly of “” were too long.  I believe that’s because
it counts bytes, not characters.


Not fatal, i.e. doesn’t break the pipeline.  We should ignore that.

2. riscv64-debian-cross-container breaks, but that looks pre-existing.  
apt complains about some dependencies.


Also marked as allowed-to-fail, so I believe we should also just ignore 
that.  (Seems to fail on `master`, too.)


3. The rest are runs complaining about 
`subprocess.CompletedProcess[str]`.  Looks like the same issue I was 
facing for ec88eed8d14088b36a3495710368b8d1a3c33420, where I had to 
specify the type as a string.


Indeed this is fixed by something like 
https://gitlab.com/hreitz/qemu/-/commit/87615eb536bdca7babe8eb4a35fd4ea810d1da24 
.  Maybe squash that in?  (If it’s the correct way to go about this?)


Hanna




Re: [PATCH v2] gitlab: disable accelerated zlib for s390x

2022-03-21 Thread Thomas Huth

On 21/03/2022 17.11, Alex Bennée wrote:

There appears to be a bug in the s390 hardware-accelerated version of
zlib distributed with Ubuntu 20.04, which makes our test
/i386/migration/multifd/tcp/zlib hit an assertion perhaps one time in
10. Fortunately zlib provides an escape hatch where we can disable the
hardware-acceleration entirely by setting the environment variable
DFLTCC to 0. Do this on all our CI which runs on s390 hosts, both our
custom gitlab runner and also the Travis hosts.

Signed-off-by: Alex Bennée 
Cc: Peter Maydell 

---
v2
   - more complete commit wording from Peter
   - also tweak travis rules
---
  .gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml | 12 
  .travis.yml|  6 --
  2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml 
b/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
index 0333872113..4f292a8a5b 100644
--- a/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
+++ b/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
@@ -8,6 +8,8 @@ ubuntu-20.04-s390x-all-linux-static:
   tags:
   - ubuntu_20.04
   - s390x
+ variables:
+DFLTCC: 0
   rules:
   - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
   - if: "$S390X_RUNNER_AVAILABLE"
@@ -27,6 +29,8 @@ ubuntu-20.04-s390x-all:
   tags:
   - ubuntu_20.04
   - s390x
+ variables:
+DFLTCC: 0
   rules:
   - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
   - if: "$S390X_RUNNER_AVAILABLE"
@@ -43,6 +47,8 @@ ubuntu-20.04-s390x-alldbg:
   tags:
   - ubuntu_20.04
   - s390x
+ variables:
+DFLTCC: 0
   rules:
   - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
 when: manual
@@ -64,6 +70,8 @@ ubuntu-20.04-s390x-clang:
   tags:
   - ubuntu_20.04
   - s390x
+ variables:
+DFLTCC: 0
   rules:
   - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
 when: manual
@@ -84,6 +92,8 @@ ubuntu-20.04-s390x-tci:
   tags:
   - ubuntu_20.04
   - s390x
+ variables:
+DFLTCC: 0
   rules:
   - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
 when: manual
@@ -103,6 +113,8 @@ ubuntu-20.04-s390x-notcg:
   tags:
   - ubuntu_20.04
   - s390x
+ variables:
+DFLTCC: 0
   rules:
   - if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ 
/^staging/'
 when: manual
diff --git a/.travis.yml b/.travis.yml
index c3c8048842..9afc4a54b8 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -218,6 +218,7 @@ jobs:
  - TEST_CMD="make check check-tcg V=1"
  - CONFIG="--disable-containers 
--target-list=${MAIN_SOFTMMU_TARGETS},s390x-linux-user"
  - UNRELIABLE=true
+- DFLTCC=0
script:
  - BUILD_RC=0 && make -j${JOBS} || BUILD_RC=$?
  - |
@@ -257,7 +258,7 @@ jobs:
env:
  - CONFIG="--disable-containers --audio-drv-list=sdl --disable-user
--target-list-exclude=${MAIN_SOFTMMU_TARGETS}"
-
+- DFLTCC=0
  - name: "[s390x] GCC (user)"
arch: s390x
dist: focal
@@ -269,7 +270,7 @@ jobs:
- ninja-build
env:
  - CONFIG="--disable-containers --disable-system"
-
+- DFLTCC=0
  - name: "[s390x] Clang (disable-tcg)"
arch: s390x
dist: focal
@@ -303,3 +304,4 @@ jobs:
  - CONFIG="--disable-containers --disable-tcg --enable-kvm
--disable-tools --host-cc=clang --cxx=clang++"
  - UNRELIABLE=true
+- DFLTCC=0


Reviewed-by: Thomas Huth 




Re: [RFC PATCH 0/5] Removal of AioContext lock, bs->parents and ->children: proof of concept

2022-03-21 Thread Vladimir Sementsov-Ogievskiy

17.03.2022 00:55, Emanuele Giuseppe Esposito wrote:



On 09/03/2022 at 14:26, Emanuele Giuseppe Esposito wrote:

Next, I have a problem in mind that in the past led to a lot of iotest 30
failures. There were then different fixes and improvements, but the core
problem (as far as I understand) is still here: nothing protects us when
we are in some graph modification process (for example block-job
finalization), yield, switch to another coroutine and enter another
graph modification process (for example, another block-job finalization)...

That's another point to consider. I don't really have a solution for this.


On a side note, that might not be a real problem.
If I understand correctly, your fear is that we are doing something like
parent->children[x] = new_node // partial graph operation
/* yield to another coroutine */
coroutine reads/writes parent->children[x] and/or new_node->parents[y]
/* yield back */
new_node->parents[y] = parent // end of the initial graph operation

Is that what you are pointing out here?
If so, is there a concrete example for this? Because yields and drains
(that eventually can poll) seem to be put either before or after the
whole graph modification section. In other words, even if a coroutine
enters, it will always be before or after the _whole_ graph modification
is performed.



The old example was here: 
https://lists.gnu.org/archive/html/qemu-devel/2020-11/msg05212.html - not sure
how applicable it is now.

Another example - look at bdrv_drop_intermediate() in block.c and at TODO 
comments in it.

In both cases the problem is that we want to update some metadata in qcow2 (the
backing file name) as part of a block-graph modification. But this update writes
to the qcow2 header, which may yield and switch to some other block-graph
modification code.


--
Best regards,
Vladimir



[PATCH v1 1/1] test/avocado/machine_aspeed.py: Add ast1030 test case

2022-03-21 Thread Jamin Lin
Add a test case for the "ast1030-evb" machine running the Zephyr OS

Signed-off-by: Jamin Lin 
---
 tests/avocado/machine_aspeed.py | 36 +
 1 file changed, 36 insertions(+)
 create mode 100644 tests/avocado/machine_aspeed.py

diff --git a/tests/avocado/machine_aspeed.py b/tests/avocado/machine_aspeed.py
new file mode 100644
index 00..33090af199
--- /dev/null
+++ b/tests/avocado/machine_aspeed.py
@@ -0,0 +1,36 @@
+# Functional test that boots the ASPEED SoCs with firmware
+#
+# Copyright (C) 2022 ASPEED Technology Inc
+#
+# This work is licensed under the terms of the GNU GPL, version 2 or
+# later.  See the COPYING file in the top-level directory.
+
+from avocado_qemu import QemuSystemTest
+from avocado_qemu import wait_for_console_pattern
+from avocado_qemu import exec_command_and_wait_for_pattern
+from avocado.utils import archive
+
+
+class AST1030Machine(QemuSystemTest):
+"""Boots the zephyr os and checks that the console is operational"""
+
+timeout = 10
+
+def test_ast1030_zephyros(self):
+"""
+:avocado: tags=arch:arm
+:avocado: tags=machine:ast1030-evb
+"""
+tar_url = ('https://github.com/AspeedTech-BMC'
+   '/zephyr/releases/download/v00.01.04/ast1030-evb-demo.zip')
+tar_hash = '4c6a8ce3a8ba76ef1a65dae419ae3409343c4b20'
+tar_path = self.fetch_asset(tar_url, asset_hash=tar_hash)
+archive.extract(tar_path, self.workdir)
+kernel_file = self.workdir + "/ast1030-evb-demo/zephyr.elf"
+self.vm.set_console()
+self.vm.add_args('-kernel', kernel_file,
+ '-nographic')
+self.vm.launch()
+wait_for_console_pattern(self, "Booting Zephyr OS")
+exec_command_and_wait_for_pattern(self, "help",
+  "Available commands")
-- 
2.17.1




Re: [PATCH v4 06/18] iotests: add qemu_img_json()

2022-03-21 Thread Eric Blake
On Thu, Mar 17, 2022 at 07:49:25PM -0400, John Snow wrote:
> qemu_img_json() is a new helper built on top of qemu_img() that tries to
> pull a valid JSON document out of the stdout stream.
> 
> In the event that the return code is negative (the program crashed), or
> the code is greater than zero and did not produce valid JSON output, the
> VerboseProcessError raised by qemu_img() is re-raised.
> 
> In the event that the return code is zero but we can't parse valid JSON,
> allow the JSON deserialization error to be raised.
> 
> Signed-off-by: John Snow 
> ---
>  tests/qemu-iotests/iotests.py | 32 
>  1 file changed, 32 insertions(+)
> 
> diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
> index 7057db0686..9d23066701 100644
> --- a/tests/qemu-iotests/iotests.py
> +++ b/tests/qemu-iotests/iotests.py
> @@ -276,6 +276,38 @@ def ordered_qmp(qmsg, conv_keys=True):
>  def qemu_img_create(*args: str) -> subprocess.CompletedProcess[str]:
>  return qemu_img('create', *args)
>  
> +def qemu_img_json(*args: str) -> Any:
> +"""
> +Run qemu-img and return its output as deserialized JSON.
> +
> +:raise CalledProcessError:
> +When qemu-img crashes, or returns a non-zero exit code without
> +producing a valid JSON document to stdout.
> +:raise JSONDecoderError:
> +When qemu-img returns 0, but failed to produce a valid JSON document.
> +
> +:return: A deserialized JSON object; probably a dict[str, Any].

Interesting choice to type the function as '-> Any', but document that
we expect a more specific '-> dict[str, Any]' for our known usage of
qemu-img.  But it makes sense to me (in case a future qemu-img
--output=json produces something that is JSON but not a dict).

> +"""
> +try:
> +res = qemu_img(*args, combine_stdio=False)
> +except subprocess.CalledProcessError as exc:
> +# Terminated due to signal. Don't bother.
> +if exc.returncode < 0:
> +raise
> +
> +# Commands like 'check' can return failure (exit codes 2 and 3)
> +# to indicate command completion, but with errors found. For
> +# multi-command flexibility, ignore the exact error codes and
> +# *try* to load JSON.
> +try:
> +return json.loads(exc.stdout)
> +except json.JSONDecodeError:
> +# Nope. This thing is toast. Raise the /process/ error.
> +pass
> +raise
> +
> +return json.loads(res.stdout)

The comments were very helpful.

Reviewed-by: Eric Blake 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Re: [RFC PATCH] gitlab: disable accelerated zlib for s390x

2022-03-21 Thread Peter Maydell
On Mon, 21 Mar 2022 at 13:39, Alex Bennée  wrote:
>
> Apparently this causes problems with migration.

More specifically:

# There appears to be a bug in the s390 hardware-accelerated version
# of zlib distributed with Ubuntu 20.04, which makes our test
# /i386/migration/multifd/tcp/zlib hit an assertion perhaps one
# time in 10. Fortunately zlib provides an escape hatch
# where we can disable the hardware-acceleration entirely by
# setting the environment variable DFLTCC to 0. Do this on all
# our CI which runs on s390 hosts, both our custom gitlab runner
# and also the Travis hosts.


...speaking of which, this patch seems to only be
touching the gitlab CI and not the travis config.

thanks
-- PMM



[PATCH v3 3/5] softmmu/cpus: Free cpu->halt_cond in generic_destroy_vcpu_thread()

2022-03-21 Thread Mark Kanda
vCPU hotunplug related leak reported by Valgrind:

==102631== 56 bytes in 1 blocks are definitely lost in loss record 5,089 of 
8,555
==102631==at 0x4C3ADBB: calloc (vg_replace_malloc.c:1117)
==102631==by 0x69EE4CD: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.5600.4)
==102631==by 0x924452: kvm_start_vcpu_thread (kvm-accel-ops.c:69)
==102631==by 0x4505C2: qemu_init_vcpu (cpus.c:643)
==102631==by 0x76B4D1: x86_cpu_realizefn (cpu.c:6520)
==102631==by 0x9344A7: device_set_realized (qdev.c:531)
==102631==by 0x93E329: property_set_bool (object.c:2273)
==102631==by 0x93C2F8: object_property_set (object.c:1408)
==102631==by 0x940796: object_property_set_qobject (qom-qobject.c:28)
==102631==by 0x93C663: object_property_set_bool (object.c:1477)
==102631==by 0x933D3B: qdev_realize (qdev.c:333)
==102631==by 0x455EC4: qdev_device_add_from_qdict (qdev-monitor.c:713)

Signed-off-by: Mark Kanda 
---
 accel/accel-common.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/accel/accel-common.c b/accel/accel-common.c
index 623df43cc3..297d4e4ef1 100644
--- a/accel/accel-common.c
+++ b/accel/accel-common.c
@@ -140,4 +140,5 @@ type_init(register_accel_types);
 void generic_destroy_vcpu_thread(CPUState *cpu)
 {
 g_free(cpu->thread);
+g_free(cpu->halt_cond);
 }
-- 
2.27.0




[PULL 3/4] 9pfs: Use g_new() & friends where that makes obvious sense

2022-03-21 Thread Markus Armbruster
g_new(T, n) is neater than g_malloc(sizeof(T) * n).  It's also safer,
for two reasons.  One, it catches multiplication overflowing size_t.
Two, it returns T * rather than void *, which lets the compiler catch
more type errors.

This commit only touches allocations with size arguments of the form
sizeof(T).

Initial patch created mechanically with:

$ spatch --in-place --sp-file scripts/coccinelle/use-g_new-etc.cocci \
 --macro-file scripts/cocci-macro-file.h FILES...

This uncovers a typing error:

../hw/9pfs/9p.c: In function ‘qid_path_fullmap’:
../hw/9pfs/9p.c:855:13: error: assignment to ‘QpfEntry *’ from incompatible 
pointer type ‘QppEntry *’ [-Werror=incompatible-pointer-types]
  855 | val = g_new0(QppEntry, 1);
  | ^

Harmless, because QppEntry is larger than QpfEntry.  Manually fixed to
allocate a QpfEntry instead.

Cc: Greg Kurz 
Cc: Christian Schoenebeck 
Signed-off-by: Markus Armbruster 
Reviewed-by: Philippe Mathieu-Daudé 
Reviewed-by: Christian Schoenebeck 
Reviewed-by: Alex Bennée 
Reviewed-by: Greg Kurz 
Message-Id: <20220315144156.1595462-3-arm...@redhat.com>
---
 hw/9pfs/9p-proxy.c   | 2 +-
 hw/9pfs/9p-synth.c   | 4 ++--
 hw/9pfs/9p.c | 8 
 hw/9pfs/codir.c  | 6 +++---
 tests/qtest/virtio-9p-test.c | 4 ++--
 5 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/hw/9pfs/9p-proxy.c b/hw/9pfs/9p-proxy.c
index 8b4b5cf7dc..4c5e0fc217 100644
--- a/hw/9pfs/9p-proxy.c
+++ b/hw/9pfs/9p-proxy.c
@@ -1187,7 +1187,7 @@ static int proxy_parse_opts(QemuOpts *opts, FsDriverEntry 
*fs, Error **errp)
 
 static int proxy_init(FsContext *ctx, Error **errp)
 {
-V9fsProxy *proxy = g_malloc(sizeof(V9fsProxy));
+V9fsProxy *proxy = g_new(V9fsProxy, 1);
 int sock_id;
 
 if (ctx->export_flags & V9FS_PROXY_SOCK_NAME) {
diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c
index b3080e415b..d99d263985 100644
--- a/hw/9pfs/9p-synth.c
+++ b/hw/9pfs/9p-synth.c
@@ -49,7 +49,7 @@ static V9fsSynthNode *v9fs_add_dir_node(V9fsSynthNode 
*parent, int mode,
 
 /* Add directory type and remove write bits */
 mode = ((mode & 0777) | S_IFDIR) & ~(S_IWUSR | S_IWGRP | S_IWOTH);
-node = g_malloc0(sizeof(V9fsSynthNode));
+node = g_new0(V9fsSynthNode, 1);
 if (attr) {
 /* We are adding .. or . entries */
 node->attr = attr;
@@ -128,7 +128,7 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent, int 
mode,
 }
 /* Add file type and remove write bits */
 mode = ((mode & 0777) | S_IFREG);
-node = g_malloc0(sizeof(V9fsSynthNode));
+node = g_new0(V9fsSynthNode, 1);
 node->attr = >actual_attr;
 node->attr->inode  = synth_node_count++;
 node->attr->nlink  = 1;
diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
index a6d6b3f835..8e9d4aea73 100644
--- a/hw/9pfs/9p.c
+++ b/hw/9pfs/9p.c
@@ -324,7 +324,7 @@ static V9fsFidState *alloc_fid(V9fsState *s, int32_t fid)
 return NULL;
 }
 }
-f = g_malloc0(sizeof(V9fsFidState));
+f = g_new0(V9fsFidState, 1);
 f->fid = fid;
 f->fid_type = P9_FID_NONE;
 f->ref = 1;
@@ -804,7 +804,7 @@ static int qid_inode_prefix_hash_bits(V9fsPDU *pdu, dev_t 
dev)
 
val = qht_lookup(&pdu->s->qpd_table, &lookup, hash);
 if (!val) {
-val = g_malloc0(sizeof(QpdEntry));
+val = g_new0(QpdEntry, 1);
 *val = lookup;
 affix = affixForIndex(pdu->s->qp_affix_next);
 val->prefix_bits = affix.bits;
@@ -852,7 +852,7 @@ static int qid_path_fullmap(V9fsPDU *pdu, const struct stat 
*stbuf,
 return -ENFILE;
 }
 
-val = g_malloc0(sizeof(QppEntry));
+val = g_new0(QpfEntry, 1);
 *val = lookup;
 
 /* new unique inode and device combo */
@@ -928,7 +928,7 @@ static int qid_path_suffixmap(V9fsPDU *pdu, const struct 
stat *stbuf,
 return -ENFILE;
 }
 
-val = g_malloc0(sizeof(QppEntry));
+val = g_new0(QppEntry, 1);
 *val = lookup;
 
 /* new unique inode affix and device combo */
diff --git a/hw/9pfs/codir.c b/hw/9pfs/codir.c
index 75148bc985..93ba44fb75 100644
--- a/hw/9pfs/codir.c
+++ b/hw/9pfs/codir.c
@@ -141,9 +141,9 @@ static int do_readdir_many(V9fsPDU *pdu, V9fsFidState *fidp,
 
 /* append next node to result chain */
 if (!e) {
-*entries = e = g_malloc0(sizeof(V9fsDirEnt));
+*entries = e = g_new0(V9fsDirEnt, 1);
 } else {
-e = e->next = g_malloc0(sizeof(V9fsDirEnt));
+e = e->next = g_new0(V9fsDirEnt, 1);
 }
 e->dent = qemu_dirent_dup(dent);
 
@@ -163,7 +163,7 @@ static int do_readdir_many(V9fsPDU *pdu, V9fsFidState *fidp,
 break;
 }
 
-e->st = g_malloc0(sizeof(struct stat));
+e->st = g_new0(struct stat, 1);
memcpy(e->st, &stbuf, sizeof(struct stat));
 }
 
diff --git a/tests/qtest/virtio-9p-test.c 

Re: [PATCH v4 1/3] qmp: Support for querying stats

2022-03-21 Thread Mark Kanda

On 3/21/2022 9:55 AM, Paolo Bonzini wrote:

On 3/21/22 14:50, Markus Armbruster wrote:

Mark Kanda  writes:

Thank you Markus.
On 3/11/2022 7:06 AM, Markus Armbruster wrote:

Are the stats bulky enough to justify the extra complexity of
filtering?


If this was only for KVM, the complexity probably isn't worth it. However, the
framework is intended to support future stats with new providers and targets
(there has also been mention of moving existing stats to this framework).
Without some sort of filtering, I think the payload could become unmanageable.


I'm deeply wary of "may need $complexity in the future" when $complexity
could be added when we actually need it :)


I think it's better to have the filtering already.  There are several uses for 
it.


Regarding filtering by provider, consider that a command like "info jit" 
should be a wrapper over


{ "execute": "query-stats", "arguments" : { "target": "vm",
  "filters": [ { "provider": "tcg" } ] } }

So we have an example of the intended use already within QEMU. Yes, the 
usefulness depends on actually having >1 provider but I think it's pretty 
central to the idea of having a statistics *subsystem*.


Regarding filtering by name, query-stats mostly has two usecases. The first is 
retrieving all stats and publishing them up to the user, for example once per 
minute per VM.  The second is monitoring a small number and building a 
relatively continuous plot (e.g. 1-10 times per second per vCPU).  For the 
latter, not having to return hundreds of values unnecessarily (KVM has almost 
60 stats, multiply by the number of vCPUs and the frequency) is worth having 
even just with the KVM provider.
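
For illustration only (the exact filter field names are still up to the
series; "names" below is just a placeholder), the second use case could then
be served by a request such as:

{ "execute": "query-stats", "arguments" : { "target": "vcpu",
  "filters": [ { "provider": "kvm", "names": [ "exits", "halt_wait_ns" ] } ] } }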



Can you give a use case for query-stats-schemas?


'query-stats-schemas' provides the type details about each stat, such as the
unit, base, etc. These details are not reported by 'query-stats' (only the stat
name and raw values are returned).


Yes, but what is going to use these type details, and for what purpose?


QEMU does not know in advance which stats are provided.  The types, etc. are 
provided by the kernel and can change by architecture and kernel version.  In 
the case of KVM, introspection is done through a file descriptor.  QEMU passes 
these up as QMP and in the future it could/should extend this to other 
providers (such as TCG) and devices (such as block devices).


See the "info stats" implementation for how it uses the schema:

vcpu (qom path: /machine/unattached/device[2])
  provider: kvm
    exits (cumulative): 52369
    halt_wait_ns (cumulative nanoseconds): 416092704390

Information such as "cumulative nanoseconds" is provided by the schema.


Have you considered splitting this up into three parts: unfiltered
query-stats, filtering, and query-stats-schemas?


Splitting could be an idea, but I think only filtering would be a separate 
step.  The stats are not really usable without a schema that tells you the 
units, or whether a number can go down or only up.  (Well, a human expert 
could use them through their intuition, but an HMP-level command could not be 
provided).



We could perhaps merge with the current schema, then clean it up on top,
both in 7.1, if that's easier for you.


The serialized JSON would change, so that would be a bit worrisome (but it 
makes me feel a little less bad about this missing 7.0). It seems to be as 
easy as this, as far as alternates go:


diff --git a/scripts/qapi/expr.py b/scripts/qapi/expr.py
index 3cb389e875..48578e1698 100644
--- a/scripts/qapi/expr.py
+++ b/scripts/qapi/expr.py
@@ -554,7 +554,7 @@ def check_alternate(expr: _JSONObject, info: 
QAPISourceInfo) -> None:

 check_name_lower(key, info, source)
 check_keys(value, info, source, ['type'], ['if'])
 check_if(value, info, source)
-    check_type(value['type'], info, source)
+    check_type(value['type'], info, source, allow_array=True)


 def check_command(expr: _JSONObject, info: QAPISourceInfo) -> None:

diff --git a/scripts/qapi/schema.py b/scripts/qapi/schema.py
index b7b3fc0ce4..3728340c37 100644
--- a/scripts/qapi/schema.py
+++ b/scripts/qapi/schema.py
@@ -243,6 +243,7 @@ def alternate_qtype(self):
 'number':  'QTYPE_QNUM',
 'int': 'QTYPE_QNUM',
 'boolean': 'QTYPE_QBOOL',
+    'array':   'QTYPE_QLIST',
 'object':  'QTYPE_QDICT'
 }
 return json2qtype.get(self.json_type())
@@ -1069,6 +1070,9 @@ def _def_struct_type(self, expr, info, doc):
 None))

 def _make_variant(self, case, typ, ifcond, info):
+    if isinstance(typ, list):
+    assert len(typ) == 1
+    typ = self._make_array_type(typ[0], info)
 return QAPISchemaVariant(case, info, typ, ifcond)

 def _def_union_type(self, expr, info, doc):


I'll try to write some testcases and also cover other uses of
_make_variant, which will undoubtedly find some issue.



Hi Paolo,

FWIW, the attached patch adjusts some tests for alternates 

[PATCH v1 10/13] include/hw: start documenting the vhost API

2022-03-21 Thread Alex Bennée
While trying to get my head around the nest of interactions for vhost
devices I thought I could start by documenting the key API functions.
This patch documents the main API hooks for creating and starting a
vhost device as well as how the configuration changes are handled.
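
As a rough sketch of the ordering the doc comments below describe
(illustrative only; error handling and the backend-specific opaque and
configuration details are elided):

  Error *err = NULL;

  /* bring the device up */
  vhost_dev_init(&hdev, opaque, VHOST_BACKEND_TYPE_USER, 0, &err);
  vhost_dev_set_config_notifier(&hdev, &config_ops); /* if config can change */
  vhost_dev_enable_notifiers(&hdev, vdev);
  vhost_dev_start(&hdev, vdev);

  /* ... the device processes VirtIO transactions ... */

  /* and tear it down again */
  vhost_dev_stop(&hdev, vdev);
  vhost_dev_disable_notifiers(&hdev, vdev);
  vhost_dev_cleanup(&hdev);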

Signed-off-by: Alex Bennée 
Cc: Michael S. Tsirkin 
Cc: Stefan Hajnoczi 
Cc: Marc-André Lureau 
---
 include/hw/virtio/vhost.h | 132 +++---
 1 file changed, 122 insertions(+), 10 deletions(-)

diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 58a73e7b7a..b291fe4e24 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -61,6 +61,12 @@ typedef struct VhostDevConfigOps {
 } VhostDevConfigOps;
 
 struct vhost_memory;
+
+/**
+ * struct vhost_dev - common vhost_dev structure
+ * @vhost_ops: backend specific ops
+ * @config_ops: ops for config changes (see @vhost_dev_set_config_notifier)
+ */
 struct vhost_dev {
 VirtIODevice *vdev;
 MemoryListener memory_listener;
@@ -108,15 +114,129 @@ struct vhost_net {
 NetClientState *nc;
 };
 
+/**
+ * vhost_dev_init() - initialise the vhost interface
+ * @hdev: the common vhost_dev structure
+ * @opaque: opaque ptr passed to backend (vhost/vhost-user/vdpa)
+ * @backend_type: type of backend
+ * @busyloop_timeout: timeout for polling virtqueue
+ * @errp: error handle
+ *
+ * The initialisation of the vhost device will trigger the
+ * initialisation of the backend and potentially capability
+ * negotiation of backend interface. Configuration of the VirtIO
+ * itself won't happen until the interface is started.
+ *
+ * Return: 0 on success, non-zero on error while setting errp.
+ */
 int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
VhostBackendType backend_type,
uint32_t busyloop_timeout, Error **errp);
+
+/**
+ * vhost_dev_cleanup() - tear down and cleanup vhost interface
+ * @hdev: the common vhost_dev structure
+ */
 void vhost_dev_cleanup(struct vhost_dev *hdev);
-int vhost_dev_start(struct vhost_dev *hdev, VirtIODevice *vdev);
-void vhost_dev_stop(struct vhost_dev *hdev, VirtIODevice *vdev);
+
+/**
+ * vhost_dev_enable_notifiers() - enable event notifiers
+ * @hdev: common vhost_dev structure
+ * @vdev: the VirtIODevice structure
+ *
+ * Enable notifications directly to the vhost device rather than being
+ * triggered by QEMU itself. Notifications should be enabled before
+ * the vhost device is started via @vhost_dev_start.
+ *
+ * Return: 0 on success, < 0 on error.
+ */
 int vhost_dev_enable_notifiers(struct vhost_dev *hdev, VirtIODevice *vdev);
+
+/**
+ * vhost_dev_disable_notifiers() - disable event notifications
+ * @hdev: common vhost_dev structure
+ * @vdev: the VirtIODevice structure
+ *
+ * Disable direct notifications to vhost device.
+ */
 void vhost_dev_disable_notifiers(struct vhost_dev *hdev, VirtIODevice *vdev);
 
+/**
+ * vhost_dev_start() - start the vhost device
+ * @hdev: common vhost_dev structure
+ * @vdev: the VirtIODevice structure
+ *
+ * Starts the vhost device. From this point VirtIO feature negotiation
+ * can start and the device can start processing VirtIO transactions.
+ *
+ * Return: 0 on success, < 0 on error.
+ */
+int vhost_dev_start(struct vhost_dev *hdev, VirtIODevice *vdev);
+
+/**
+ * vhost_dev_stop() - stop the vhost device
+ * @hdev: common vhost_dev structure
+ * @vdev: the VirtIODevice structure
+ *
+ * Stop the vhost device. After the device is stopped the notifiers
+ * can be disabled (@vhost_dev_disable_notifiers) and the device can
+ * be torn down (@vhost_dev_cleanup).
+ */
+void vhost_dev_stop(struct vhost_dev *hdev, VirtIODevice *vdev);
+
+/**
+ * DOC: vhost device configuration handling
+ *
+ * The VirtIO device configuration space is used for rarely changing
+ * or initialisation time parameters. The configuration can be updated
+ * by either the guest driver or the device itself. If the device can
+ * change the configuration over time the vhost handler should
+ * register a @VhostDevConfigOps structure with
+ * @vhost_dev_set_config_notifier so the guest can be notified. Some
+ * devices register a handler anyway and will signal an error if an
+ * unexpected config change happens.
+ */
+
+/**
+ * vhost_dev_get_config() - fetch device configuration
+ * @hdev: common vhost_dev structure
+ * @config: pointer to device appropriate config structure
+ * @config_len: size of device appropriate config structure
+ *
+ * Return: 0 on success, < 0 on error while setting errp
+ */
+int vhost_dev_get_config(struct vhost_dev *hdev, uint8_t *config,
+ uint32_t config_len, Error **errp);
+
+/**
+ * vhost_dev_set_config() - set device configuration
+ * @hdev: common vhost_dev structure
+ * @data: pointer to data to set
+ * @offset: offset into configuration space
+ * @size: length of set
+ * @flags: @VhostSetConfigType flags
+ *
+ * By use of @offset/@size a subset of the configuration space can be
+ * written 

[PATCH v1 11/13] contrib/vhost-user-blk: fix 32 bit build and enable

2022-03-21 Thread Alex Bennée
We were not building the vhost-user-blk server due to 32 bit
compilation problems. The problems were due to format string types, so
fix those and then enable the build. Tweak the rule to follow the same
rules as other vhost-user daemons.

Signed-off-by: Alex Bennée 
---
 meson.build | 2 +-
 contrib/vhost-user-blk/vhost-user-blk.c | 6 +++---
 contrib/vhost-user-blk/meson.build  | 3 +--
 3 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/meson.build b/meson.build
index 282e7c4650..0435419307 100644
--- a/meson.build
+++ b/meson.build
@@ -1326,7 +1326,7 @@ have_vhost_user_blk_server = 
get_option('vhost_user_blk_server') \
error_message: 'vhost_user_blk_server requires linux') \
   .require('CONFIG_VHOST_USER' in config_host,
error_message: 'vhost_user_blk_server requires vhost-user support') 
\
-  .disable_auto_if(not have_system) \
+  .disable_auto_if(not have_tools and not have_system) \
   .allowed()
 
 if get_option('fuse').disabled() and get_option('fuse_lseek').enabled()
diff --git a/contrib/vhost-user-blk/vhost-user-blk.c 
b/contrib/vhost-user-blk/vhost-user-blk.c
index d14b2896bf..0bee79360f 100644
--- a/contrib/vhost-user-blk/vhost-user-blk.c
+++ b/contrib/vhost-user-blk/vhost-user-blk.c
@@ -146,7 +146,7 @@ vub_readv(VubReq *req, struct iovec *iov, uint32_t iovcnt)
 req->size = vub_iov_size(iov, iovcnt);
 rc = preadv(vdev_blk->blk_fd, iov, iovcnt, req->sector_num * 512);
 if (rc < 0) {
-fprintf(stderr, "%s, Sector %"PRIu64", Size %lu failed with %s\n",
+fprintf(stderr, "%s, Sector %"PRIu64", Size %zu failed with %s\n",
 vdev_blk->blk_name, req->sector_num, req->size,
 strerror(errno));
 return -1;
@@ -169,7 +169,7 @@ vub_writev(VubReq *req, struct iovec *iov, uint32_t iovcnt)
 req->size = vub_iov_size(iov, iovcnt);
 rc = pwritev(vdev_blk->blk_fd, iov, iovcnt, req->sector_num * 512);
 if (rc < 0) {
-fprintf(stderr, "%s, Sector %"PRIu64", Size %lu failed with %s\n",
+fprintf(stderr, "%s, Sector %"PRIu64", Size %zu failed with %s\n",
 vdev_blk->blk_name, req->sector_num, req->size,
 strerror(errno));
 return -1;
@@ -188,7 +188,7 @@ vub_discard_write_zeroes(VubReq *req, struct iovec *iov, 
uint32_t iovcnt,
 
 size = vub_iov_size(iov, iovcnt);
 if (size != sizeof(*desc)) {
-fprintf(stderr, "Invalid size %ld, expect %ld\n", size, sizeof(*desc));
+fprintf(stderr, "Invalid size %zd, expect %zd\n", size, sizeof(*desc));
 return -1;
 }
 buf = g_new0(char, size);
diff --git a/contrib/vhost-user-blk/meson.build 
b/contrib/vhost-user-blk/meson.build
index 601ea15ef5..dcb9e2ffcd 100644
--- a/contrib/vhost-user-blk/meson.build
+++ b/contrib/vhost-user-blk/meson.build
@@ -1,5 +1,4 @@
-# FIXME: broken on 32-bit architectures
 executable('vhost-user-blk', files('vhost-user-blk.c'),
dependencies: [qemuutil, vhost_user],
-   build_by_default: false,
+   build_by_default: targetos == 'linux',
install: false)
-- 
2.30.2




[PATCH v1 07/13] vhost-user.rst: add clarifying language about protocol negotiation

2022-03-21 Thread Alex Bennée
Make the language about feature negotiation explicitly clear about the
handling of the VHOST_USER_F_PROTOCOL_FEATURES feature bit. Try to
avoid the sort of bug introduced in vhost.rs REPLY_ACK processing:

  https://github.com/rust-vmm/vhost/pull/24
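
For reference, the distinction being spelled out below is roughly this
(a sketch only, not lifted from any particular implementation):

  /* bit 30 is only ever offered by a vhost-user back-end, never by a
   * VIRTIO device, so a guest driver will never see or negotiate it */
  if (backend_features & (1ULL << VHOST_USER_F_PROTOCOL_FEATURES)) {
      /* protocol feature negotiation is available: GET/SET_PROTOCOL_FEATURES
       * may be used without first acknowledging bit 30 via SET_FEATURES */
  }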

Signed-off-by: Alex Bennée 
Cc: Jiang Liu 
Message-Id: <20210226111619.21178-1-alex.ben...@linaro.org>

---
v2
  - use Stefan's suggested wording
  - Be super explicit in the message descriptions
---
 docs/interop/vhost-user.rst | 18 --
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 08c4bf2ef7..948d69c9ad 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -331,6 +331,18 @@ bit was dedicated for this purpose::
 
   #define VHOST_USER_F_PROTOCOL_FEATURES 30
 
+Note that VHOST_USER_F_PROTOCOL_FEATURES is the UNUSED (30) feature
+bit defined in `VIRTIO 1.1 6.3 Legacy Interface: Reserved Feature Bits
+`_.
+VIRTIO devices do not advertise this feature bit and therefore VIRTIO
+drivers cannot negotiate it.
+
+This reserved feature bit was reused by the vhost-user protocol to add
+vhost-user protocol feature negotiation in a backwards compatible
+fashion. Old vhost-user master and slave implementations continue to
+work even though they are not aware of vhost-user protocol feature
+negotiation.
+
 Ring states
 ---
 
@@ -889,7 +901,8 @@ Front-end message types
   Get the protocol feature bitmask from the underlying vhost
   implementation.  Only legal if feature bit
   ``VHOST_USER_F_PROTOCOL_FEATURES`` is present in
-  ``VHOST_USER_GET_FEATURES``.
+  ``VHOST_USER_GET_FEATURES``.  It does not need to be acknowledged by
+  ``VHOST_USER_SET_FEATURES``.
 
 .. Note::
Back-ends that report ``VHOST_USER_F_PROTOCOL_FEATURES`` must
@@ -905,7 +918,8 @@ Front-end message types
   Enable protocol features in the underlying vhost implementation.
 
   Only legal if feature bit ``VHOST_USER_F_PROTOCOL_FEATURES`` is present in
-  ``VHOST_USER_GET_FEATURES``.
+  ``VHOST_USER_GET_FEATURES``.  It does not need to be acknowledged by
+  ``VHOST_USER_SET_FEATURES``.
 
 .. Note::
Back-ends that report ``VHOST_USER_F_PROTOCOL_FEATURES`` must support
-- 
2.30.2




Re: [PATCH v4 10/18] iotests: add qemu_img_map() function

2022-03-21 Thread John Snow
On Mon, Mar 21, 2022, 10:24 AM Eric Blake  wrote:

> On Thu, Mar 17, 2022 at 07:49:29PM -0400, John Snow wrote:
> > Add a qemu_img_map() function by analogy with qemu_img_measure(),
> > qemu_img_check(), and qemu_img_info() that all return JSON information.
> >
> > Replace calls to qemu_img_pipe('map', '--output=json', ...) with this
> > new function, which provides better diagnostic information on failure.
> >
> > Note: The output for iotest 211 changes, because logging JSON after it
> > was deserialized by Python behaves a little differently than logging the
> > raw JSON document string itself.
> > (iotests.log() sorts the keys for Python 3.6 support.)
> >
> > Signed-off-by: John Snow 
> > ---
>
> > +++ b/tests/qemu-iotests/211.out
>
> > @@ -55,9 +53,7 @@ file format: IMGFMT
> >  virtual size: 32 MiB (33554432 bytes)
> >  cluster_size: 1048576
> >
> > -[{ "start": 0, "length": 3072, "depth": 0, "present": true, "zero":
> false, "data": true, "offset": 1024},
> > -{ "start": 3072, "length": 33551360, "depth": 0, "present": true,
> "zero": true, "data": true, "offset": 4096}]
> > -
> > +[{"data": true, "depth": 0, "length": 3072, "offset": 1024, "present":
> true, "start": 0, "zero": false}, {"data": true, "depth": 0, "length":
> 33551360, "offset": 4096, "present": true, "start": 3072, "zero": true}]
>
> The change in format can produce really long lines for a more complex
> map, which can introduce its own problems in legibility. But I can
> live with it.
>
> Reviewed-by: Eric Blake 
>
> --
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.   +1-919-301-3266
> Virtualization:  qemu.org | libvirt.org


Yeah, we don't have to print out the entire thing, either. We could also
pretty-print it if we want to.

(Once we drop 3.6 (which I know is contested as to when we can do it) we
can remove a lot of our special QMP sorting code and just start printing
the raw JSON objects, which makes dealing with qmp a lot easier in
diff-based tests.)

The point was more just to remove any copy-pastables using the JSON and
provide only the "one good way". This patch in and of itself is otherwise
pretty lateral.


>


Re: [PATCH v3 08/11] target/s390x: vxeh2: vector {load, store} byte reversed element

2022-03-21 Thread David Hildenbrand
On 08.03.22 02:53, Richard Henderson wrote:
> From: David Miller 
> 
> This includes VLEBR* and VSTEBR* (single element);
> VLBRREP (load single element and replicate); and
> VLLEBRZ (load single element and zero).

"load byte reversed element and ..."

> 
> Signed-off-by: David Miller 
> Message-Id: <20220307020327.3003-6-dmiller...@gmail.com>
> [rth: Split out elements (plural) from element (scalar),
>   Use tcg little-endian memory operations.]
> Signed-off-by: Richard Henderson 

[...]

> diff --git a/target/s390x/tcg/insn-data.def b/target/s390x/tcg/insn-data.def
> index ee6e1dc9e5..b80f989002 100644
> --- a/target/s390x/tcg/insn-data.def
> +++ b/target/s390x/tcg/insn-data.def
> @@ -1027,6 +1027,14 @@
>  F(0xe756, VLR, VRR_a, V,   0, 0, 0, 0, vlr, 0, IF_VEC)
>  /* VECTOR LOAD AND REPLICATE */
>  F(0xe705, VLREP,   VRX,   V,   la2, 0, 0, 0, vlrep, 0, IF_VEC)
> +/* VECTOR LOAD BYTE REVERSED ELEMENT */
> +E(0xe601, VLEBRH,  VRX,   VE2, la2, 0, 0, 0, vlebr, 0, ES_16, IF_VEC)
> +E(0xe603, VLEBRF,  VRX,   VE2, la2, 0, 0, 0, vlebr, 0, ES_32, IF_VEC)
> +E(0xe602, VLEBRG,  VRX,   VE2, la2, 0, 0, 0, vlebr, 0, ES_64, IF_VEC)
> +/* VECTOR LOAD BYTE REVERSED ELEMENT AND REPLOCATE */

s/REPLOCATE/REPLICATE/

> +F(0xe605, VLBRREP, VRX,   VE2, la2, 0, 0, 0, vlbrrep, 0, IF_VEC)
> +/* VECTOR LOAD BYTE REVERSED ELEMENT AND ZERO */
> +F(0xe604, VLLEBRZ, VRX,   VE2, la2, 0, 0, 0, vllebrz, 0, IF_VEC)
>  /* VECTOR LOAD BYTE REVERSED ELEMENTS */
>  F(0xe606, VLBR,VRX,   VE2, la2, 0, 0, 0, vlbr, 0, IF_VEC)
>  /* VECTOR LOAD ELEMENT */
> @@ -1081,6 +1089,10 @@
>  F(0xe75f, VSEG,VRR_a, V,   0, 0, 0, 0, vseg, 0, IF_VEC)
>  /* VECTOR STORE */
>  F(0xe70e, VST, VRX,   V,   la2, 0, 0, 0, vst, 0, IF_VEC)
> +/* VECTOR STORE BYTE REVERSED ELEMENT */
> +E(0xe609, VSTEBRH,  VRX,   VE2, la2, 0, 0, 0, vstebr, 0, ES_16, IF_VEC)
> +E(0xe60b, VSTEBRF,  VRX,   VE2, la2, 0, 0, 0, vstebr, 0, ES_32, IF_VEC)
> +E(0xe60a, VSTEBRG,  VRX,   VE2, la2, 0, 0, 0, vstebr, 0, ES_64, IF_VEC)
>  /* VECTOR STORE BYTE REVERSED ELEMENTS */
>  F(0xe60e, VSTBR,VRX,   VE2, la2, 0, 0, 0, vstbr, 0, IF_VEC)
>  /* VECTOR STORE ELEMENT */

Reviewed-by: David Hildenbrand 

-- 
Thanks,

David / dhildenb




Re: [libvirt RFC] virFile: new VIR_FILE_WRAPPER_BIG_PIPE to improve performance

2022-03-21 Thread Andrea Righi
On Fri, Mar 18, 2022 at 02:34:29PM +0100, Claudio Fontana wrote:
...
> I have lots of questions here, and I tried to involve Jiri and Andrea Righi 
> here, who a long time ago proposed a POSIX_FADV_NOREUSE implementation.
> 
> 1) What is the reason iohelper was introduced?
> 
> 2) Was Jiri's comment about the missing linux implementation of 
> POSIX_FADV_NOREUSE?
> 
> 3) if using O_DIRECT is the only reason for iohelper to exist (...?), would 
> replacing it with posix_fadvise remove the need for iohelper?
> 
> 4) What has stopped Andreas' or another POSIX_FADV_NOREUSE implementation in 
> the kernel?

From what I remember (it was a long time ago, sorry) I stopped pursuing
the POSIX_FADV_NOREUSE idea, because we thought that moving to a
memcg-based solution was a better and more flexible approach, assuming
memcg would have given some form of specific page cache control. As of
today I think we still don't have any specific page cache control
feature in memcg, so maybe we could reconsider the FADV_NOREUSE idea (or
something similar)?

Maybe even introduce a separate FADV_ flag if we don't want
to bind a specific implementation of this feature to a standard POSIX
flag (even if FADV_NOREUSE is still implemented as a no-op in the
kernel).

The thing that I liked about the fadvise approach is its simplicity from
an application perspective, because it's just a syscall and that's it,
without having to deal with any other subsystems (cgroups, sysfs, and
similar).
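
For illustration (and assuming the kernel actually honoured the hint), the
application side really is little more than:

  #include <fcntl.h>

  int fd = open(path, O_RDONLY);
  /* advise the kernel we won't reuse this data, so don't keep it cached */
  posix_fadvise(fd, 0, 0, POSIX_FADV_NOREUSE);

which is hard to beat for simplicity.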

-Andrea

> 
> Lots of questions..
> 
> Thanks for all your insight,
> 
> Claudio
> 
> > 
> > Dave
> > 
> >> Ciao,
> >>
> >> C
> >>
> 
>  In the above tests with libvirt, were you using the
>  --bypass-cache flag or not ?
> >>>
> >>> No, I do not. Tests with ramdisk did not show a notable difference for me,
> >>>
> >>> but tests with /dev/null were not possible, since the command line is not 
> >>> accepted:
> >>>
> >>> # virsh save centos7 /dev/null
> >>> Domain 'centos7' saved to /dev/null
> >>> [OK]
> >>>
> >>> # virsh save centos7 /dev/null --bypass-cache
> >>> error: Failed to save domain 'centos7' to /dev/null
> >>> error: Failed to create file '/dev/null': Invalid argument
> >>>
> >>>
> 
>  Hopefully use of O_DIRECT doesn't make a difference for
>  /dev/null, since the I/O is being immediately thrown
>  away and so ought to never go into I/O cache. 
> 
>  In terms of the comparison, we still have libvirt iohelper
>  giving QEMU a pipe, while your test above gives QEMU a
>  UNIX socket.
> 
>  So I still wonder if the delta is caused by the pipe vs socket
>  difference, as opposed to netcat vs libvirt iohelper code.
> >>>
> >>> I'll look into this aspect, thanks!
> >>



[PATCH v1 0/1] hw/gpio Add ASPEED GPIO model for AST1030

2022-03-21 Thread Jamin Lin
1. Add GPIO read/write trace event.
2. Support GPIO index mode for write operations.
Index mode is not supported for read operations.
3. AST1030 integrates one set of Parallel GPIO Controller
with a maximum of 151 control pins, arranged in 21 groups
(A~U, excluding pins: M6 M7 Q5 Q6 Q7 R0 R1 R4 R5 R6 R7 S0 S3 S4
S5 S6 S7); groups T and U are input only.

Test Steps:
1. Download image from
https://github.com/AspeedTech-BMC/zephyr/releases/download/v00.01.04/ast1030-evb-demo.zip
2. Extract the zip file to obtain zephyr.elf
3. Run ./qemu-system-arm -M ast1030-evb -kernel $PATH/zephyr.elf -nographic
4. Test GPIO D6 Pin
uart:~$ gpio conf GPIO0_A_D 30 out
uart:~$ gpio get GPIO0_A_D 30
[Result]
Reading GPIO0_A_D pin 30
Value 0
uart:~$ gpio set GPIO0_A_D 30 1
uart:~$ gpio get GPIO0_A_D 30
[Result]
Reading GPIO0_A_D pin 30
Value 1
uart:~$ gpio set GPIO0_A_D 30 0
uart:~$ gpio get GPIO0_A_D 30
[Result]
Reading GPIO0_A_D pin 30
Value 0

Jamin Lin (1):
  hw/gpio: Add ASPEED GPIO model for AST1030

 hw/gpio/aspeed_gpio.c | 250 --
 hw/gpio/trace-events  |   5 +
 include/hw/gpio/aspeed_gpio.h |  16 ++-
 3 files changed, 255 insertions(+), 16 deletions(-)

-- 
2.17.1




Re: [PATCH v4 00/18] iotests: add enhanced debugging info to qemu-img failures

2022-03-21 Thread Hanna Reitz

On 18.03.22 22:14, John Snow wrote:

On Fri, Mar 18, 2022 at 9:36 AM Hanna Reitz  wrote:

On 18.03.22 00:49, John Snow wrote:

Hiya!

This series effectively replaces qemu_img_pipe_and_status() with a
rewritten function named qemu_img() that raises an exception on non-zero
return code by default. By the end of the series, every last invocation
of the qemu-img binary ultimately goes through qemu_img().

The exception that this function raises includes stdout/stderr output
when the traceback is printed in a a little decorated text box so that
it stands out from the jargony Python traceback readout.

(You can test what this looks like for yourself, or at least you could,
by disabling zstd support and then running qcow2 iotest 065.)

Negative tests are still possible in two ways:

- Passing check=False to qemu_img, qemu_img_log, or img_info_log
- Catching and handling the CalledProcessError exception at the callsite.

Thanks!  Applied to my block branch:

https://gitlab.com/hreitz/qemu/-/commits/block

Hanna


Actually, hold it -- this looks like it is causing problems with the
Gitlab CI. I need to investigate these.
https://gitlab.com/jsnow/qemu/-/pipelines/495155073/failures

... and, ugh, naturally the nice error diagnostics are suppressed here
so I can't see them. Well, there's one more thing to try and fix
somehow.


I hope this patch by Thomas fixes the logging at least:

https://lists.nongnu.org/archive/html/qemu-devel/2022-03/msg02946.html

Let’s see.

Hanna




Re: [PATCH 01/15] iotests: replace calls to log(qemu_io(...)) with qemu_io_log()

2022-03-21 Thread Eric Blake
On Fri, Mar 18, 2022 at 04:36:41PM -0400, John Snow wrote:
> This makes these callsites a little simpler, but the real motivation is
> a forthcoming commit will change the return type of qemu_io(), so removing
> users of the return value now is helpful.
> 
> Signed-off-by: John Snow 
> ---

Reviewed-by: Eric Blake 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




[PATCH-for-7.0] qemu/main-loop: Disable block backend global state assertion on Darwin

2022-03-21 Thread Philippe Mathieu-Daudé
From: Philippe Mathieu-Daudé 

Since commit 0439c5a462 ("block/block-backend.c: assertions for
block-backend") QEMU crashes on Darwin hosts, example on macOS:

  $ qemu-system-i386
  Assertion failed: (qemu_in_main_thread()), function blk_all_next, file 
block-backend.c, line 552.
  Abort trap: 6

Looking with lldb:

  Assertion failed: (qemu_in_main_thread()), function blk_all_next, file 
block-backend.c, line 552.
  Process 76914 stopped
  * thread #1, queue = 'com.apple.main-thread', stop reason = hit program assert
 frame #4: 0x00010057c2d4 qemu-system-i386`blk_all_next.cold.1
  at block-backend.c:552:5 [opt]
  549*/
  550   BlockBackend *blk_all_next(BlockBackend *blk)
  551   {
  --> 552   GLOBAL_STATE_CODE();
  553   return blk ? QTAILQ_NEXT(blk, link)
  554  : QTAILQ_FIRST(_backends);
  555   }
  Target 1: (qemu-system-i386) stopped.

  (lldb) bt
  * thread #1, queue = 'com.apple.main-thread', stop reason = hit program assert
 frame #0: 0x0001908c99b8 libsystem_kernel.dylib`__pthread_kill + 8
 frame #1: 0x0001908fceb0 libsystem_pthread.dylib`pthread_kill + 288
 frame #2: 0x00019083a314 libsystem_c.dylib`abort + 164
 frame #3: 0x00019083972c libsystem_c.dylib`__assert_rtn + 300
   * frame #4: 0x00010057c2d4 qemu-system-i386`blk_all_next.cold.1 at 
block-backend.c:552:5 [opt]
 frame #5: 0x0001003c00b4 
qemu-system-i386`blk_all_next(blk=) at block-backend.c:552:5 [opt]
 frame #6: 0x0001003d8f04 
qemu-system-i386`qmp_query_block(errp=0x) at qapi.c:591:16 [opt]
 frame #7: 0x00010003ab0c qemu-system-i386`main [inlined] 
addRemovableDevicesMenuItems at cocoa.m:1756:21 [opt]
 frame #8: 0x00010003ab04 qemu-system-i386`main(argc=, 
argv=) at cocoa.m:1980:5 [opt]
 frame #9: 0x0001012690f4 dyld`start + 520

As we are past the release 7.0 hard freeze, disable the block
backend assertion which, while being valuable during development,
is not helpful to users. We'll restore this assertion immediately
once 7.0 is released and work on a fix.

Cc: Kevin Wolf 
Cc: Paolo Bonzini 
Cc: Peter Maydell 
Cc: Emanuele Giuseppe Esposito 
Suggested-by: Akihiko Odaki 
Signed-off-by: Philippe Mathieu-Daudé 
---
 include/qemu/main-loop.h | 4 
 1 file changed, 4 insertions(+)

diff --git a/include/qemu/main-loop.h b/include/qemu/main-loop.h
index 7a4d6a0920..c27968ce33 100644
--- a/include/qemu/main-loop.h
+++ b/include/qemu/main-loop.h
@@ -270,10 +270,14 @@ bool qemu_mutex_iothread_locked(void);
 bool qemu_in_main_thread(void);
 
 /* Mark and check that the function is part of the global state API. */
+#ifdef CONFIG_DARWIN
+#define GLOBAL_STATE_CODE()
+#else
 #define GLOBAL_STATE_CODE() \
 do {\
 assert(qemu_in_main_thread());  \
 } while (0)
+#endif /* CONFIG_DARWIN */
 
 /* Mark and check that the function is part of the I/O API. */
 #define IO_CODE()   \
-- 
2.35.1




Re: [PATCH 05/15] iotests: create generic qemu_tool() function

2022-03-21 Thread Eric Blake
On Fri, Mar 18, 2022 at 04:36:45PM -0400, John Snow wrote:
> reimplement qemu_img() in terms of qemu_tool() in preparation for doing
> the same with qemu_io().
> 
> Signed-off-by: John Snow 
> ---
>  tests/qemu-iotests/iotests.py | 37 +++
>  1 file changed, 24 insertions(+), 13 deletions(-)
> 
> diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
> index 6cd8374c81..974a2b0c8d 100644
> --- a/tests/qemu-iotests/iotests.py
> +++ b/tests/qemu-iotests/iotests.py
> @@ -207,15 +207,13 @@ def qemu_img_create_prepare_args(args: List[str]) -> 
> List[str]:
>  
>  return result
>  
> -def qemu_img(*args: str, check: bool = True, combine_stdio: bool = True
> +
> +def qemu_tool(*args: str, check: bool = True, combine_stdio: bool = True
>   ) -> subprocess.CompletedProcess[str]:

Does this line need reindentation?

> @@ -227,14 +225,13 @@ def qemu_img(*args: str, check: bool = True, 
> combine_stdio: bool = True
>  handled, the command-line, return code, and all console output
>  will be included at the bottom of the stack trace.
>  
> -:return: a CompletedProcess. This object has args, returncode, and
> -stdout properties. If streams are not combined, it will also
> -have a stderr property.
> +:return:
> +A CompletedProcess. This object has args, returncode, and stdout
> +properties. If streams are not combined, it will also have a
> +stderr property.

Should this reflow be squashed in some earlier patch?

As those are both cosmetic only,

Reviewed-by: Eric Blake 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




[PATCH 1/2] pcie: Don't try triggering a LSI when not defined

2022-03-21 Thread Frederic Barrat
This patch skips [de]asserting a LSI interrupt if the device doesn't
have any LSI defined. Doing so would trigger an assert in
pci_irq_handler().

The PCIE root port implementation in qemu requests a LSI (INTA), but a
subclass may want to change that behavior since it's a valid
configuration. For example on the POWER8/POWER9/POWER10 systems, the
root bridge doesn't request any LSI.

Signed-off-by: Frederic Barrat 
---
 hw/pci/pcie.c | 8 ++--
 hw/pci/pcie_aer.c | 4 +++-
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/hw/pci/pcie.c b/hw/pci/pcie.c
index 67a5d67372..71c5194b80 100644
--- a/hw/pci/pcie.c
+++ b/hw/pci/pcie.c
@@ -354,7 +354,9 @@ static void hotplug_event_notify(PCIDevice *dev)
 } else if (msi_enabled(dev)) {
 msi_notify(dev, pcie_cap_flags_get_vector(dev));
 } else {
-pci_set_irq(dev, dev->exp.hpev_notified);
+if (pci_intx(dev) != -1) {
+pci_set_irq(dev, dev->exp.hpev_notified);
+}
 }
 }
 
@@ -362,7 +364,9 @@ static void hotplug_event_clear(PCIDevice *dev)
 {
 hotplug_event_update_event_status(dev);
 if (!msix_enabled(dev) && !msi_enabled(dev) && !dev->exp.hpev_notified) {
-pci_irq_deassert(dev);
+if (pci_intx(dev) != -1) {
+pci_irq_deassert(dev);
+}
 }
 }
 
diff --git a/hw/pci/pcie_aer.c b/hw/pci/pcie_aer.c
index e1a8a88c8c..d936bfca20 100644
--- a/hw/pci/pcie_aer.c
+++ b/hw/pci/pcie_aer.c
@@ -291,7 +291,9 @@ static void pcie_aer_root_notify(PCIDevice *dev)
 } else if (msi_enabled(dev)) {
 msi_notify(dev, pcie_aer_root_get_vector(dev));
 } else {
-pci_irq_assert(dev);
+if (pci_intx(dev) != -1) {
+pci_irq_assert(dev);
+}
 }
 }
 
-- 
2.35.1




[PATCH 2/2] ppc/pnv: Remove LSI on the PCIE host bridge

2022-03-21 Thread Frederic Barrat
The phb3/phb4/phb5 root ports inherit from the default PCIE root port
implementation, which requests a LSI interrupt (#INTA). On real
hardware (POWER8/POWER9/POWER10), there is no such LSI. This patch
corrects it so that it matches the hardware.

As a consequence, the device tree previously generated was bogus, as
the root bridge LSI was not properly mapped. On some
implementation (powernv9), it was leading to inconsistent interrupt
controller (xive) data. With this patch, it is now clean.

Signed-off-by: Frederic Barrat 
---
 hw/pci-host/pnv_phb3.c | 1 +
 hw/pci-host/pnv_phb4.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/hw/pci-host/pnv_phb3.c b/hw/pci-host/pnv_phb3.c
index ac801ac835..0d18c96117 100644
--- a/hw/pci-host/pnv_phb3.c
+++ b/hw/pci-host/pnv_phb3.c
@@ -1162,6 +1162,7 @@ static void pnv_phb3_root_port_realize(DeviceState *dev, 
Error **errp)
 error_propagate(errp, local_err);
 return;
 }
+pci_config_set_interrupt_pin(pci->config, 0);
 }
 
 static void pnv_phb3_root_port_class_init(ObjectClass *klass, void *data)
diff --git a/hw/pci-host/pnv_phb4.c b/hw/pci-host/pnv_phb4.c
index b301762093..b66b75d4d7 100644
--- a/hw/pci-host/pnv_phb4.c
+++ b/hw/pci-host/pnv_phb4.c
@@ -1772,6 +1772,7 @@ static void pnv_phb4_root_port_reset(DeviceState *dev)
 pci_set_word(conf + PCI_PREF_MEMORY_LIMIT, 0xfff1);
 pci_set_long(conf + PCI_PREF_BASE_UPPER32, 0x1); /* Hack */
 pci_set_long(conf + PCI_PREF_LIMIT_UPPER32, 0x);
+pci_config_set_interrupt_pin(conf, 0);
 }
 
 static void pnv_phb4_root_port_realize(DeviceState *dev, Error **errp)
-- 
2.35.1




Re: Memory leak in via_isa_realize()

2022-03-21 Thread Peter Maydell
On Mon, 21 Mar 2022 at 12:11, BALATON Zoltan  wrote:
>
> On Mon, 21 Mar 2022, Peter Maydell wrote:
> > On Mon, 21 Mar 2022 at 10:31, Thomas Huth  wrote:
> >> FYI, I'm seeing a memory leak in via_isa_realize() when building
> >> QEMU with sanitizers enabled or when running QEMU through valgrind:
> >> Same problem happens with qemu-system-ppc64 and the pegasos2 machine.
> >>
> >> No clue how to properly fix this... is it safe to free the pointer
> >> at the end of the function?
> >
> > This is because the code is still using the old function
> > qemu_allocate_irqs(), which is almost always going to involve
> > it leaking memory. The fix is usually to rewrite the code to not use
> > that function at all, i.e. to manage its irq/gpio lines differently.
> > Probably the i8259 code should have a named GPIO output line
> > rather than wanting to be passed a qemu_irq in an init function,
> > and the via code should have an input GPIO line which it connects
> > up to the i8259. It looks from a quick glance like the i8259 and
> > its callers have perhaps not been completely QOMified.
>
> Everything involving ISA emulation in QEMU is not completely QOMified and
> this has caused some problems before but I did not want to try to fix it
> both because it's too much unrelated work and because it's used by too
> many things that could break that I can't even test. So I'd rather
> somebody more comfortable with this would look at ISA QOMification.

Yeah, there's usually a reason that these things haven't been more
thoroughly QOMified, and that reason is often because it's a pile of
work for not very clear benefit.

In this particular case, although there is a "leak", it happens exactly
once at QEMU startup and in practice we need that memory to hang around
until QEMU exits anyway. The only real reason to fix this kind of leak in
my opinion is because it clutters up the output of valgrind or clang/gcc
address sanitizer runs and prevents us from having our CI do a
leak-sanitizer test run that would guard against new leaks being added
to the codebase. We still have a fair number of this sort of one-off
startup leak in various arm boards/devices, for instance -- I occasionally
have a look through and fix some of the more tractable ones.
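
For anyone who does pick this up, the named-GPIO conversion sketched above
usually ends up looking something like this (illustrative names only, not
the actual i8259/VIA code):

  /* i8259: expose its output as a named GPIO instead of taking a qemu_irq */
  qdev_init_gpio_out_named(DEVICE(i8259), &s->out_irq, "int-out", 1);

  /* VIA ISA bridge: expose an input line and wire the two together */
  qdev_init_gpio_in_named(DEVICE(via), via_isa_irq_handler, "isa-irq", 1);
  qdev_connect_gpio_out_named(DEVICE(i8259), "int-out", 0,
                              qdev_get_gpio_in_named(DEVICE(via), "isa-irq", 0));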

thanks
-- PMM



Re: [PATCH RESEND 1/2] hw/vfio/pci-quirks: Resolve redundant property getters

2022-03-21 Thread Philippe Mathieu-Daudé

On 21/3/22 11:57, Bernhard Beschow wrote:

Am 1. März 2022 22:52:19 UTC schrieb Bernhard Beschow :

The QOM API already provides getters for uint64 and uint32 values, so reuse
them.

Signed-off-by: Bernhard Beschow 
Reviewed-by: Philippe Mathieu-Daudé 
---
hw/vfio/pci-quirks.c | 34 +-
1 file changed, 9 insertions(+), 25 deletions(-)

diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index 0cf69a8c6d..f0147a050a 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1565,22 +1565,6 @@ static int vfio_add_nv_gpudirect_cap(VFIOPCIDevice 
*vdev, Error **errp)
 return 0;
}

-static void vfio_pci_nvlink2_get_tgt(Object *obj, Visitor *v,
- const char *name,
- void *opaque, Error **errp)
-{
-uint64_t tgt = (uintptr_t) opaque;
-visit_type_uint64(v, name, , errp);
-}
-
-static void vfio_pci_nvlink2_get_link_speed(Object *obj, Visitor *v,
- const char *name,
- void *opaque, Error **errp)
-{
-uint32_t link_speed = (uint32_t)(uintptr_t) opaque;
-visit_type_uint32(v, name, _speed, errp);
-}
-
int vfio_pci_nvidia_v100_ram_init(VFIOPCIDevice *vdev, Error **errp)
{
 int ret;
@@ -1618,9 +1602,9 @@ int vfio_pci_nvidia_v100_ram_init(VFIOPCIDevice *vdev, 
Error **errp)
nv2reg->size, p);
 QLIST_INSERT_HEAD(>bars[0].quirks, quirk, next);

-object_property_add(OBJECT(vdev), "nvlink2-tgt", "uint64",
-vfio_pci_nvlink2_get_tgt, NULL, NULL,
-(void *) (uintptr_t) cap->tgt);
+object_property_add_uint64_ptr(OBJECT(vdev), "nvlink2-tgt",
+   (uint64_t *) >tgt,
+   OBJ_PROP_FLAG_READ);
 trace_vfio_pci_nvidia_gpu_setup_quirk(vdev->vbasedev.name, cap->tgt,
   nv2reg->size);
free_exit:
@@ -1679,15 +1663,15 @@ int vfio_pci_nvlink2_init(VFIOPCIDevice *vdev, Error 
**errp)
 QLIST_INSERT_HEAD(>bars[0].quirks, quirk, next);
 }

-object_property_add(OBJECT(vdev), "nvlink2-tgt", "uint64",
-vfio_pci_nvlink2_get_tgt, NULL, NULL,
-(void *) (uintptr_t) captgt->tgt);
+object_property_add_uint64_ptr(OBJECT(vdev), "nvlink2-tgt",
+   (uint64_t *) >tgt,
+   OBJ_PROP_FLAG_READ);
 trace_vfio_pci_nvlink2_setup_quirk_ssatgt(vdev->vbasedev.name, captgt->tgt,
   atsdreg->size);

-object_property_add(OBJECT(vdev), "nvlink2-link-speed", "uint32",
-vfio_pci_nvlink2_get_link_speed, NULL, NULL,
-(void *) (uintptr_t) capspeed->link_speed);
+object_property_add_uint32_ptr(OBJECT(vdev), "nvlink2-link-speed",
+   >link_speed,
+   OBJ_PROP_FLAG_READ);
 trace_vfio_pci_nvlink2_setup_quirk_lnkspd(vdev->vbasedev.name,
   capspeed->link_speed);
free_exit:


Ping

@Alistair: When resending, I accidentally added a Reviewed-by with your name here 
which I asked to be ignored *after* you re-acked patch 2/2. In case you 
intended to ack this patch as well, your voice would be needed again.


FWIW I expect these patches to get merged via the qemu-trivial@ tree
once the 7.1 development window opens.



Re: [PATCH 04/15] iotests/040: Don't check image pattern on zero-length image

2022-03-21 Thread Eric Blake
On Fri, Mar 18, 2022 at 04:36:44PM -0400, John Snow wrote:
> qemu-io fails on read/write with zero-length raw images, so skip these
> when running the zero-length image tests.

On my first read, I wondered what we accomplish by rejecting
zero-length reads on a zero-length image, and whether entering the
rabbit hole of trying to make that corner case "work" differently
makes more sense...

> 
> Signed-off-by: John Snow 
> ---
>  tests/qemu-iotests/040 | 14 --
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/tests/qemu-iotests/040 b/tests/qemu-iotests/040
> index adf5815781..c4a90937dc 100755
> --- a/tests/qemu-iotests/040
> +++ b/tests/qemu-iotests/040
> @@ -86,8 +86,10 @@ class TestSingleDrive(ImageCommitTestCase):
>  qemu_img('create', '-f', iotests.imgfmt,
>   '-o', 'backing_file=%s' % mid_img,
>   '-F', iotests.imgfmt, test_img)
> -qemu_io('-f', 'raw', '-c', 'write -P 0xab 0 524288', backing_img)
> -qemu_io('-f', iotests.imgfmt, '-c', 'write -P 0xef 524288 524288', 
> mid_img)
> +if self.image_len:
> +qemu_io('-f', 'raw', '-c', 'write -P 0xab 0 524288', backing_img)
> +qemu_io('-f', iotests.imgfmt, '-c', 'write -P 0xef 524288 
> 524288',
> +mid_img)

...but now it is obvious - one of our test cases is attempting a
non-zero-length modification to a zero-length file, and it does make
sense for that modification attempt to fail, in which case, making the
test special case the zero-length file is the right thing to do.

Reviewed-by: Eric Blake 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Re: [PATCH v4 1/3] qmp: Support for querying stats

2022-03-21 Thread Paolo Bonzini

On 3/21/22 14:50, Markus Armbruster wrote:

Mark Kanda  writes:

Thank you Markus.
On 3/11/2022 7:06 AM, Markus Armbruster wrote:

Are the stats bulky enough to justify the extra complexity of
filtering?


If this was only for KVM, the complexity probably isn't worth it. However, the
framework is intended to support future stats with new providers and targets
(there has also been mention of moving existing stats to this framework).
Without some sort of filtering, I think the payload could become unmanageable.


I'm deeply wary of "may need $complexity in the future" when $complexity
could be added when we actually need it :)


I think it's better to have the filtering already.  There are several 
uses for it.


Regarding filtering by provider, consider that a command like "info jit" 
should be a wrapper over


{ "execute": "query-stats", "arguments" : { "target": "vm",
  "filters": [ { "provider": "tcg" } ] } }

So we have an example of the intended use already within QEMU.  Yes, the 
usefulness depends on actually having >1 provider but I think it's 
pretty central to the idea of having a statistics *subsystem*.


Regarding filtering by name, query-stats mostly has two usecases.  The 
first is retrieving all stats and publishing them up to the user, for 
example once per minute per VM.  The second is monitoring a small number 
and building a relatively continuous plot (e.g. 1-10 times per second 
per vCPU).  For the latter, not having to return hundreds of values 
unnecessarily (KVM has almost 60 stats, multiply by the number of vCPUs 
and the frequency) is worth having even just with the KVM provider.



Can you give a use case for query-stats-schemas?


'query-stats-schemas' provides the type details about each stat, such as the
unit, base, etc. These details are not reported by 'query-stats' (only the stat
name and raw values are returned).


Yes, but what is going to use these type details, and for what purpose?


QEMU does not know in advance which stats are provided.  The types, etc. 
are provided by the kernel and can change by architecture and kernel 
version.  In the case of KVM, introspection is done through a file 
descriptor.  QEMU passes these up as QMP and in the future it 
could/should extend this to other providers (such as TCG) and devices 
(such as block devices).


See the "info stats" implementation for how it uses the schema:

vcpu (qom path: /machine/unattached/device[2])
  provider: kvm
exits (cumulative): 52369
halt_wait_ns (cumulative nanoseconds): 416092704390

Information such as "cumulative nanoseconds" is provided by the schema.


Have you considered splitting this up into three parts: unfiltered
query-stats, filtering, and query-stats-schemas?


Splitting could be an idea, but I think only filtering would be a 
separate step.  The stats are not really usable without a schema that 
tells you the units, or whether a number can go down or only up.  (Well, 
a human expert could use them through their intuition, but an HMP-level 
command could not be provided).



We could perhaps merge with the current schema, then clean it up on top,
both in 7.1, if that's easier for you.


The serialized JSON would change, so that would be a bit worrisome (but 
it makes me feel a little less bad about this missing 7.0).  It seems to 
be as easy as this, as far as alternates go:


diff --git a/scripts/qapi/expr.py b/scripts/qapi/expr.py
index 3cb389e875..48578e1698 100644
--- a/scripts/qapi/expr.py
+++ b/scripts/qapi/expr.py
@@ -554,7 +554,7 @@ def check_alternate(expr: _JSONObject, info: 
QAPISourceInfo) -> None:

 check_name_lower(key, info, source)
 check_keys(value, info, source, ['type'], ['if'])
 check_if(value, info, source)
-check_type(value['type'], info, source)
+check_type(value['type'], info, source, allow_array=True)


 def check_command(expr: _JSONObject, info: QAPISourceInfo) -> None:

diff --git a/scripts/qapi/schema.py b/scripts/qapi/schema.py
index b7b3fc0ce4..3728340c37 100644
--- a/scripts/qapi/schema.py
+++ b/scripts/qapi/schema.py
@@ -243,6 +243,7 @@ def alternate_qtype(self):
 'number':  'QTYPE_QNUM',
 'int': 'QTYPE_QNUM',
 'boolean': 'QTYPE_QBOOL',
+'array':   'QTYPE_QLIST',
 'object':  'QTYPE_QDICT'
 }
 return json2qtype.get(self.json_type())
@@ -1069,6 +1070,9 @@ def _def_struct_type(self, expr, info, doc):
 None))

 def _make_variant(self, case, typ, ifcond, info):
+if isinstance(typ, list):
+assert len(typ) == 1
+typ = self._make_array_type(typ[0], info)
 return QAPISchemaVariant(case, info, typ, ifcond)

 def _def_union_type(self, expr, info, doc):


I'll try to write some testcases and also cover other uses of
_make_variant, which will undoubtedly find some issue.
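
For instance (purely illustrative, not something from the series), that
would make an alternate branch with a list type expressible:

  { 'alternate': 'StatsNameFilter',
    'data': { 'all': 'bool',
              'names': ['str'] } }

i.e. either a boolean or a list of strings, distinguishable on the wire as
QTYPE_QBOOL vs QTYPE_QLIST.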

Paolo



[PATCH v1 02/13] virtio-pci: add notification trace points

2022-03-21 Thread Alex Bennée
Signed-off-by: Alex Bennée 
Reviewed-by: Philippe Mathieu-Daudé 
Message-Id: <20200925125147.26943-6-alex.ben...@linaro.org>
Signed-off-by: Alex Bennée 
---
 hw/virtio/virtio-pci.c | 3 +++
 hw/virtio/trace-events | 7 ++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 602be7f83d..0566ad7d00 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -38,6 +38,7 @@
 #include "hw/virtio/virtio-bus.h"
 #include "qapi/visitor.h"
 #include "sysemu/replay.h"
+#include "trace.h"
 
 #define VIRTIO_PCI_REGION_SIZE(dev) 
VIRTIO_PCI_CONFIG_OFF(msix_present(dev))
 
@@ -1380,6 +1381,7 @@ static void virtio_pci_notify_write(void *opaque, hwaddr 
addr,
 unsigned queue = addr / virtio_pci_queue_mem_mult(proxy);
 
 if (vdev != NULL && queue < VIRTIO_QUEUE_MAX) {
+trace_virtio_pci_notify_write(addr, val, size);
 virtio_queue_notify(vdev, queue);
 }
 }
@@ -1393,6 +1395,7 @@ static void virtio_pci_notify_write_pio(void *opaque, 
hwaddr addr,
 unsigned queue = val;
 
 if (vdev != NULL && queue < VIRTIO_QUEUE_MAX) {
+trace_virtio_pci_notify_write_pio(addr, val, size);
 virtio_queue_notify(vdev, queue);
 }
 }
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index a5102eac9e..46851a7cd1 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -87,7 +87,12 @@ virtio_mmio_guest_page(uint64_t size, int shift) "guest page 
size 0x%" PRIx64 "
 virtio_mmio_queue_write(uint64_t value, int max_size) "mmio_queue write 0x%" 
PRIx64 " max %d"
 virtio_mmio_setting_irq(int level) "virtio_mmio setting IRQ %d"
 
-# virtio-iommu.c
+# virtio-pci.c
+virtio_pci_notify(uint16_t vector) "virtio_pci_notify vec 0x%x"
+virtio_pci_notify_write(uint64_t addr, uint64_t val, unsigned int size) "0x%" 
PRIx64" = 0x%" PRIx64 " (%d)"
+virtio_pci_notify_write_pio(uint64_t addr, uint64_t val, unsigned int size) 
"0x%" PRIx64" = 0x%" PRIx64 " (%d)"
+
+# hw/virtio/virtio-iommu.c
 virtio_iommu_device_reset(void) "reset!"
 virtio_iommu_system_reset(void) "system reset!"
 virtio_iommu_get_features(uint64_t features) "device supports 
features=0x%"PRIx64
-- 
2.30.2




[PATCH v1 09/13] docs/devel: start documenting writing VirtIO devices

2022-03-21 Thread Alex Bennée
While writing my own VirtIO devices I've gotten confused with how
things are structured and what sort of shared infrastructure there is.
If we can document how everything is supposed to work we can then
maybe start cleaning up inconsistencies in the code.

Signed-off-by: Alex Bennée 
Cc: Stefan Hajnoczi 
Cc: "Michael S. Tsirkin" 
Cc: Gerd Hoffmann 
Cc: Marc-André Lureau 
Cc: Viresh Kumar 
Cc: Mathieu Poirier 
Cc: Dr. David Alan Gilbert 
Message-Id: <20220309164929.19395-1-alex.ben...@linaro.org>

---
v2
  - move more description to the leader text
  - try not to confuse backend and frontend terms
  - more explicit description of objects
  - try and tease apart vhost_dev_init vs QOM-ifed vhost-user-backend
---
 docs/devel/index-internals.rst |   1 +
 docs/devel/virtio-backends.rst | 214 +
 2 files changed, 215 insertions(+)
 create mode 100644 docs/devel/virtio-backends.rst

diff --git a/docs/devel/index-internals.rst b/docs/devel/index-internals.rst
index bb118b8eaf..5d9f95dd93 100644
--- a/docs/devel/index-internals.rst
+++ b/docs/devel/index-internals.rst
@@ -19,3 +19,4 @@ Details about QEMU's various subsystems including how to add 
features to them.
tracing
vfio-migration
writing-monitor-commands
+   virtio-backends
diff --git a/docs/devel/virtio-backends.rst b/docs/devel/virtio-backends.rst
new file mode 100644
index 00..9ff092e7a0
--- /dev/null
+++ b/docs/devel/virtio-backends.rst
@@ -0,0 +1,214 @@
+..
+   Copyright (c) 2022, Linaro Limited
+   Written by Alex Bennée
+
+Writing VirtIO backends for QEMU
+
+
+This document attempts to outline the information a developer needs to
+know to write device emulations in QEMU. It is specifically focused on
+implementing VirtIO devices. For VirtIO the frontend is the driver
+running on the guest. The backend is everything that QEMU needs to
+do to handle the emulation of the VirtIO device. This can be done
+entirely in QEMU, divided between QEMU and the kernel (vhost) or
+handled by a separate process which is configured by QEMU
+(vhost-user).
+
+VirtIO Transports
+-
+
+VirtIO supports a number of different transports. While the details of
+the configuration and operation of the device will generally be the
+same, QEMU represents them as different devices depending on the
+transport they use. For example, -device virtio-foo represents the foo
+device using mmio, and -device virtio-foo-pci is the same class of
+device using the PCI transport.
+
+Using the QEMU Object Model (QOM)
+-
+
+Generally all devices in QEMU are subclasses of ``TYPE_DEVICE``;
+however VirtIO devices should be based on ``TYPE_VIRTIO_DEVICE`` which
+itself is derived from the base class. For example:
+
+.. code:: c
+
+  static const TypeInfo virtio_blk_info = {
+  .name = TYPE_VIRTIO_BLK,
+  .parent = TYPE_VIRTIO_DEVICE,
+  .instance_size = sizeof(VirtIOBlock),
+  .instance_init = virtio_blk_instance_init,
+  .class_init = virtio_blk_class_init,
+  };
+
+The author may decide to have a more expansive class hierarchy to
+support multiple device types. For example the Virtio GPU device:
+
+.. code:: c
+
+  static const TypeInfo virtio_gpu_base_info = {
+  .name = TYPE_VIRTIO_GPU_BASE,
+  .parent = TYPE_VIRTIO_DEVICE,
+  .instance_size = sizeof(VirtIOGPUBase),
+  .class_size = sizeof(VirtIOGPUBaseClass),
+  .class_init = virtio_gpu_base_class_init,
+  .abstract = true
+  };
+
+  static const TypeInfo vhost_user_gpu_info = {
+  .name = TYPE_VHOST_USER_GPU,
+  .parent = TYPE_VIRTIO_GPU_BASE,
+  .instance_size = sizeof(VhostUserGPU),
+  .instance_init = vhost_user_gpu_instance_init,
+  .instance_finalize = vhost_user_gpu_instance_finalize,
+  .class_init = vhost_user_gpu_class_init,
+  };
+
+  static const TypeInfo virtio_gpu_info = {
+  .name = TYPE_VIRTIO_GPU,
+  .parent = TYPE_VIRTIO_GPU_BASE,
+  .instance_size = sizeof(VirtIOGPU),
+  .class_size = sizeof(VirtIOGPUClass),
+  .class_init = virtio_gpu_class_init,
+  };
+
+defines a base class for the VirtIO GPU and then specialises two
+versions, one for the internal implementation and the other for the
+vhost-user version.
+
+VirtIOPCIProxy
+^^
+
+[AJB: the following is supposition and welcomes more informed
+opinions]
+
+Probably due to legacy from the pre-QOM days, PCI VirtIO devices don't
+follow the normal hierarchy. Instead a standalone object is based
+on the VirtIOPCIProxy class and the specific VirtIO instance is
+manually instantiated:
+
+.. code:: c
+
+  /*
+   * virtio-blk-pci: This extends VirtioPCIProxy.
+   */
+  #define TYPE_VIRTIO_BLK_PCI "virtio-blk-pci-base"
+  DECLARE_INSTANCE_CHECKER(VirtIOBlkPCI, VIRTIO_BLK_PCI,
+   TYPE_VIRTIO_BLK_PCI)
+
+  struct VirtIOBlkPCI {
+  VirtIOPCIProxy parent_obj;
+  VirtIOBlock vdev;
+  };
+
+  static Property 

Re: [PATCH v3 0/4] Improve integration of iotests in the meson test harness

2022-03-21 Thread Hanna Reitz

On 23.02.22 10:38, Thomas Huth wrote:

Though "make check-block" is currently already run via the meson test
runner, it still looks like an oddball in the output of "make check". It
would be nicer if the iotests would show up like the other tests suites.

My original plan was to add each iotest individually from meson.build,
but I did not get that done reliably yet [*], so here's now a cut-down
version to improve the situation at least a little bit: The first three
patches are preparation for the clean-up (long-term goal is to get rid
of check-block.sh, though we're not quite there yet), and the final
patch adds the iotests not as separate test target in the meson test
harness anymore. This way, we can now finally get the output of failed
tests on the console again (unless you're running meson test in verbose
mode, where meson only puts this to the log file - for incomprehensible
reasons), so this should hopefully help to diagnose problems with the
iotests in most cases more easily.

[*] See v2 here:
 https://lists.gnu.org/archive/html/qemu-devel/2022-02/msg01942.html

Thomas Huth (4):
   tests/qemu-iotests: Rework the checks and spots using GNU sed
   tests/qemu-iotests/meson.build: Improve the indentation
   tests/qemu-iotests: Move the bash and sanitizer checks to meson.build
   tests: Do not treat the iotests as separate meson test target anymore


What’s the status of this series?  I wonder why you split it apart, mainly.

Patch 1 was already merged, and I took patch 4 today.  So what about 
patches 2 and 3?  They look sensible to me, but is this series still 
relevant and fresh, considering you sent new versions of patches 1 and 4?


(And are there any other iotests patches from you that flew under my radar?)

Hanna




Re: [PATCH 05/15] iotests: create generic qemu_tool() function

2022-03-21 Thread John Snow
On Mon, Mar 21, 2022, 11:13 AM Eric Blake  wrote:

> On Fri, Mar 18, 2022 at 04:36:45PM -0400, John Snow wrote:
> > reimplement qemu_img() in terms of qemu_tool() in preparation for doing
> > the same with qemu_io().
> >
> > Signed-off-by: John Snow 
> > ---
> >  tests/qemu-iotests/iotests.py | 37 +++
> >  1 file changed, 24 insertions(+), 13 deletions(-)
> >
> > diff --git a/tests/qemu-iotests/iotests.py
> b/tests/qemu-iotests/iotests.py
> > index 6cd8374c81..974a2b0c8d 100644
> > --- a/tests/qemu-iotests/iotests.py
> > +++ b/tests/qemu-iotests/iotests.py
> > @@ -207,15 +207,13 @@ def qemu_img_create_prepare_args(args: List[str])
> -> List[str]:
> >
> >  return result
> >
> > -def qemu_img(*args: str, check: bool = True, combine_stdio: bool = True
> > +
> > +def qemu_tool(*args: str, check: bool = True, combine_stdio: bool = True
> >   ) -> subprocess.CompletedProcess[str]:
>
> Does this line need reindentation?
>

Huh, I'll check. Maybe I fixed this by accident in a later patch and didn't
notice. Or maybe git diff is playing tricks on me.


> > @@ -227,14 +225,13 @@ def qemu_img(*args: str, check: bool = True,
> combine_stdio: bool = True
> >  handled, the command-line, return code, and all console output
> >  will be included at the bottom of the stack trace.
> >
> > -:return: a CompletedProcess. This object has args, returncode, and
> > -stdout properties. If streams are not combined, it will also
> > -have a stderr property.
> > +:return:
> > +A CompletedProcess. This object has args, returncode, and stdout
> > +properties. If streams are not combined, it will also have a
> > +stderr property.
>
> Should this reflow be squashed in some earlier patch?
>

Aw, you caught me. 

I need to respin the qemu-img stuff anyway due to CI failures, so I can fix
it where it appears first.

(When I wrote this, I didn't realize that the qemu-img series was failing
CI yet.)


> As those are both cosemetic only,
>
> Reviewed-by: Eric Blake 
>
> --
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.   +1-919-301-3266
> Virtualization:  qemu.org | libvirt.org
>
>


[PATCH v1 1/1] hw/gpio: Add ASPEED GPIO model for AST1030

2022-03-21 Thread Jamin Lin
1. Add GPIO read/write trace event.
2. Support GPIO index mode for write operations.
Index mode is not supported for read operations.
3. AST1030 integrates one set of Parallel GPIO Controller
with a maximum of 151 control pins, arranged in 21 groups
(A~U, excluding pins: M6 M7 Q5 Q6 Q7 R0 R1 R4 R5 R6 R7 S0 S3 S4
S5 S6 S7); groups T and U are input only.

Signed-off-by: Jamin Lin 
---
 hw/gpio/aspeed_gpio.c | 250 --
 hw/gpio/trace-events  |   5 +
 include/hw/gpio/aspeed_gpio.h |  16 ++-
 3 files changed, 255 insertions(+), 16 deletions(-)

diff --git a/hw/gpio/aspeed_gpio.c b/hw/gpio/aspeed_gpio.c
index c63634d3d3..3f0bd036b7 100644
--- a/hw/gpio/aspeed_gpio.c
+++ b/hw/gpio/aspeed_gpio.c
@@ -15,6 +15,8 @@
 #include "qapi/visitor.h"
 #include "hw/irq.h"
 #include "migration/vmstate.h"
+#include "trace.h"
+#include "hw/registerfields.h"
 
 #define GPIOS_PER_GROUP 8
 
@@ -203,6 +205,28 @@
 #define GPIO_1_8V_MEM_SIZE0x1D8
 #define GPIO_1_8V_REG_ARRAY_SIZE  (GPIO_1_8V_MEM_SIZE >> 2)
 
+/*
+ * GPIO index mode support
+ * It only supports write operation
+ */
+REG32(GPIO_INDEX_REG, 0x2AC)
+FIELD(GPIO_INDEX_REG, NUMBER, 0, 8)
+FIELD(GPIO_INDEX_REG, COMMAND, 12, 1)
+FIELD(GPIO_INDEX_REG, TYPE, 16, 4)
+FIELD(GPIO_INDEX_REG, DATA_VALUE, 20, 1)
+FIELD(GPIO_INDEX_REG, DIRECTION, 20, 1)
+FIELD(GPIO_INDEX_REG, INT_ENABLE, 20, 1)
+FIELD(GPIO_INDEX_REG, INT_SENS_0, 21, 1)
+FIELD(GPIO_INDEX_REG, INT_SENS_1, 22, 1)
+FIELD(GPIO_INDEX_REG, INT_SENS_2, 23, 1)
+FIELD(GPIO_INDEX_REG, INT_STATUS, 24, 1)
+FIELD(GPIO_INDEX_REG, DEBOUNCE_1, 20, 1)
+FIELD(GPIO_INDEX_REG, DEBOUNCE_2, 21, 1)
+FIELD(GPIO_INDEX_REG, RESET_TOLERANT, 20, 1)
+FIELD(GPIO_INDEX_REG, COMMAND_SRC_0, 20, 1)
+FIELD(GPIO_INDEX_REG, COMMAND_SRC_1, 21, 1)
+FIELD(GPIO_INDEX_REG, INPUT_MASK, 20, 1)
+
 static int aspeed_evaluate_irq(GPIOSets *regs, int gpio_prev_high, int gpio)
 {
 uint32_t falling_edge = 0, rising_edge = 0;
@@ -523,11 +547,16 @@ static uint64_t aspeed_gpio_read(void *opaque, hwaddr 
offset, uint32_t size)
 uint64_t idx = -1;
 const AspeedGPIOReg *reg;
 GPIOSets *set;
+uint32_t value = 0;
+uint64_t debounce_value;
 
 idx = offset >> 2;
 if (idx >= GPIO_DEBOUNCE_TIME_1 && idx <= GPIO_DEBOUNCE_TIME_3) {
 idx -= GPIO_DEBOUNCE_TIME_1;
-return (uint64_t) s->debounce_regs[idx];
+debounce_value = (uint64_t) s->debounce_regs[idx];
+trace_aspeed_gpio_read(DEVICE(s)->canonical_path,
+   offset, debounce_value);
+return debounce_value;
 }
 
 reg = &agc->reg_table[idx];
@@ -540,38 +569,193 @@ static uint64_t aspeed_gpio_read(void *opaque, hwaddr 
offset, uint32_t size)
 set = &s->sets[reg->set_idx];
 switch (reg->type) {
 case gpio_reg_data_value:
-return set->data_value;
+ value = set->data_value;
+ break;
 case gpio_reg_direction:
-return set->direction;
+value = set->direction;
+break;
 case gpio_reg_int_enable:
-return set->int_enable;
+value = set->int_enable;
+break;
 case gpio_reg_int_sens_0:
-return set->int_sens_0;
+value = set->int_sens_0;
+break;
 case gpio_reg_int_sens_1:
-return set->int_sens_1;
+value = set->int_sens_1;
+break;
 case gpio_reg_int_sens_2:
-return set->int_sens_2;
+value = set->int_sens_2;
+break;
 case gpio_reg_int_status:
-return set->int_status;
+value = set->int_status;
+break;
 case gpio_reg_reset_tolerant:
-return set->reset_tol;
+value = set->reset_tol;
+break;
 case gpio_reg_debounce_1:
-return set->debounce_1;
+value = set->debounce_1;
+break;
 case gpio_reg_debounce_2:
-return set->debounce_2;
+value = set->debounce_2;
+break;
 case gpio_reg_cmd_source_0:
-return set->cmd_source_0;
+value = set->cmd_source_0;
+break;
 case gpio_reg_cmd_source_1:
-return set->cmd_source_1;
+value = set->cmd_source_1;
+break;
 case gpio_reg_data_read:
-return set->data_read;
+value = set->data_read;
+break;
 case gpio_reg_input_mask:
-return set->input_mask;
+value = set->input_mask;
+break;
 default:
 qemu_log_mask(LOG_GUEST_ERROR, "%s: no getter for offset 0x%"
   HWADDR_PRIx"\n", __func__, offset);
 return 0;
 }
+
+trace_aspeed_gpio_read(DEVICE(s)->canonical_path, offset, value);
+return value;
+}
+
+static void aspeed_gpio_write_index_mode(void *opaque, hwaddr offset,
+uint64_t data, uint32_t size)
+{
+
+AspeedGPIOState *s = ASPEED_GPIO(opaque);
+AspeedGPIOClass *agc = ASPEED_GPIO_GET_CLASS(s);
+const 
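
A rough sketch of what the index-mode write decode described above can look like, built only from the GPIO_INDEX_REG fields defined earlier; the function name, the set/pin arithmetic, and the handled cases are illustrative assumptions rather than the actual patch code:

static void aspeed_gpio_index_write_sketch(AspeedGPIOState *s, uint64_t data)
{
    /* Field extraction uses the GPIO_INDEX_REG layout defined above. */
    uint32_t number = FIELD_EX32(data, GPIO_INDEX_REG, NUMBER);
    uint32_t type = FIELD_EX32(data, GPIO_INDEX_REG, TYPE);
    uint32_t value = FIELD_EX32(data, GPIO_INDEX_REG, DATA_VALUE);
    /* Illustrative mapping of a pin number to a register set and bit. */
    GPIOSets *set = &s->sets[number / GPIOS_PER_GROUP];
    uint32_t pin = number % GPIOS_PER_GROUP;

    switch (type) {
    case 0: /* assumed: data value */
        set->data_read = deposit32(set->data_read, pin, 1, value);
        break;
    case 1: /* assumed: direction */
        set->direction = deposit32(set->direction, pin, 1, value);
        break;
    default:
        qemu_log_mask(LOG_GUEST_ERROR, "%s: unhandled index type %u\n",
                      __func__, type);
        break;
    }
}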

Re: [PATCH 03/15] iotests: Don't check qemu_io() output for specific error strings

2022-03-21 Thread Eric Blake
On Fri, Mar 18, 2022 at 04:36:43PM -0400, John Snow wrote:
> A forthcoming commit updates qemu_io() to raise an exception on non-zero
> return by default, and changes its return type.
> 
> In preparation, simplify some calls to qemu_io() that assert that
> specific error message strings do not appear in qemu-io's
> output. Asserting that all of these calls return a status code of zero
> will be a more robust way to guard against failure.
> 
> Signed-off-by: John Snow 
> ---
>  tests/qemu-iotests/040 | 33 -
>  tests/qemu-iotests/056 |  2 +-
>  2 files changed, 17 insertions(+), 18 deletions(-)
>

Reviewed-by: Eric Blake 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Re: [PATCH v3 4/6] vduse-blk: implements vduse-blk export

2022-03-21 Thread Yongji Xie
On Mon, Mar 21, 2022 at 9:25 PM Eric Blake  wrote:
>
> On Mon, Mar 21, 2022 at 03:14:37PM +0800, Xie Yongji wrote:
> > This implements a VDUSE block backend based on
> > the libvduse library. We can use it to export the BDSs
> > for both VM and container (host) usage.
> >
> > The new command-line syntax is:
> >
> > $ qemu-storage-daemon \
> > --blockdev file,node-name=drive0,filename=test.img \
> > --export vduse-blk,node-name=drive0,id=vduse-export0,writable=on
> >
> > After the qemu-storage-daemon started, we need to use
> > the "vdpa" command to attach the device to vDPA bus:
> >
> > $ vdpa dev add name vduse-export0 mgmtdev vduse
> >
> > Also the device must be removed via the "vdpa" command
> > before we stop the qemu-storage-daemon.
> >
> > Signed-off-by: Xie Yongji 
> > ---
>
> Looking at just the QAPI:
>
> > +++ b/qapi/block-export.json
> > @@ -170,6 +170,22 @@
> >  '*allow-other': 'FuseExportAllowOther' },
> >'if': 'CONFIG_FUSE' }
> >
> > +##
> > +# @BlockExportOptionsVduseBlk:
> > +#
> > +# A vduse-blk block export.
> > +#
> > +# @num-queues: the number of virtqueues. Defaults to 1.
> > +# @queue-size: the size of virtqueue. Defaults to 128.
> > +# @logical-block-size: Logical block size in bytes. Defaults to 512 bytes.
>
> Any restrictions on this not being allowed to be smaller than 512, or
> that it must be a power of 2, or that it has a maximum size?  If so,
> they should be documented.
>

Yes, it must be [512, PAGE_SIZE]. I will document it in v4.
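
For illustration only, a tiny sketch of the kind of check implied by that range; the helper name is made up, and the power-of-two condition is an assumption taken from Eric's question rather than something confirmed above:

#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>

/* Hypothetical check: logical-block-size in [512, PAGE_SIZE], power of two. */
static bool vduse_blk_logical_block_size_ok(uint64_t lbs)
{
    uint64_t page_size = (uint64_t)sysconf(_SC_PAGESIZE);

    return lbs >= 512 && lbs <= page_size && (lbs & (lbs - 1)) == 0;
}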

> > +#
> > +# Since: 7.0
>
> This is a new feature, and is too late for 7.0, so this line should
> mention 7.1.
>

Oh, right. I will fix it.

> > +##
> > +{ 'struct': 'BlockExportOptionsVduseBlk',
> > +  'data': { '*num-queues': 'uint16',
> > +'*queue-size': 'uint16',
> > +'*logical-block-size': 'size'} }
> > +
> >  ##
> >  # @NbdServerAddOptions:
> >  #
> > @@ -273,6 +289,7 @@
> >  # @nbd: NBD export
> >  # @vhost-user-blk: vhost-user-blk export (since 5.2)
> >  # @fuse: FUSE export (since: 6.0)
> > +# @vduse-blk: vduse-blk export (since 7.0)
>
> Another spot for 7.1.
>

Will fix it.

Thanks,
Yongji



[PATCH v3 4/5] cpu: Free cpu->cpu_ases in cpu_address_space_destroy()

2022-03-21 Thread Mark Kanda
Create cpu_address_space_destroy() to free a CPU's cpu_ases list.

vCPU hotunplug related leak reported by Valgrind:

==132362== 216 bytes in 1 blocks are definitely lost in loss record 7,119 of 
8,549
==132362==at 0x4C3ADBB: calloc (vg_replace_malloc.c:1117)
==132362==by 0x69EE4CD: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.5600.4)
==132362==by 0x7E34AF: cpu_address_space_init (physmem.c:751)
==132362==by 0x45053E: qemu_init_vcpu (cpus.c:635)
==132362==by 0x76B4A7: x86_cpu_realizefn (cpu.c:6520)
==132362==by 0x9343ED: device_set_realized (qdev.c:531)
==132362==by 0x93E26F: property_set_bool (object.c:2273)
==132362==by 0x93C23E: object_property_set (object.c:1408)
==132362==by 0x9406DC: object_property_set_qobject (qom-qobject.c:28)
==132362==by 0x93C5A9: object_property_set_bool (object.c:1477)
==132362==by 0x933C81: qdev_realize (qdev.c:333)
==132362==by 0x455E9A: qdev_device_add_from_qdict (qdev-monitor.c:713)

Signed-off-by: Mark Kanda 
---
 cpu.c | 1 +
 include/exec/cpu-common.h | 7 +++
 softmmu/physmem.c | 5 +
 3 files changed, 13 insertions(+)

diff --git a/cpu.c b/cpu.c
index be1f8b074c..59352a1487 100644
--- a/cpu.c
+++ b/cpu.c
@@ -174,6 +174,7 @@ void cpu_exec_unrealizefn(CPUState *cpu)
 tcg_exec_unrealizefn(cpu);
 }
 
+cpu_address_space_destroy(cpu);
 cpu_list_remove(cpu);
 }
 
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 50a7d2912e..b17ad61ae4 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -111,6 +111,13 @@ size_t qemu_ram_pagesize_largest(void);
  */
 void cpu_address_space_init(CPUState *cpu, int asidx,
 const char *prefix, MemoryRegion *mr);
+/**
+ * cpu_address_space_destroy:
+ * @cpu: CPU for this address space
+ *
+ * Cleanup CPU's cpu_ases list.
+ */
+void cpu_address_space_destroy(CPUState *cpu);
 
 void cpu_physical_memory_rw(hwaddr addr, void *buf,
 hwaddr len, bool is_write);
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 43ae70fbe2..aec61ca07a 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -762,6 +762,11 @@ void cpu_address_space_init(CPUState *cpu, int asidx,
 }
 }
 
+void cpu_address_space_destroy(CPUState *cpu)
+{
+g_free(cpu->cpu_ases);
+}
+
 AddressSpace *cpu_get_address_space(CPUState *cpu, int asidx)
 {
 /* Return the AddressSpace corresponding to the specified index */
-- 
2.27.0




[PATCH v3 0/5] vCPU hotunplug related memory leaks

2022-03-21 Thread Mark Kanda
This series addresses a few vCPU hotunplug related leaks (found with Valgrind).

v3:
- patch 4: create cpu_address_space_destroy() to free cpu_ases (Philippe)
- patch 5: create _destroy_vcpu_thread() to free xsave_buf (Philippe)

v2: Create AccelOpsClass::destroy_vcpu_thread() for vcpu thread related cleanup
(Philippe)

Mark Kanda (5):
  accel: Introduce AccelOpsClass::destroy_vcpu_thread()
  softmmu/cpus: Free cpu->thread in generic_destroy_vcpu_thread()
  softmmu/cpus: Free cpu->halt_cond in generic_destroy_vcpu_thread()
  cpu: Free cpu->cpu_ases in cpu_address_space_destroy()
  i386/cpu: Free env->xsave_buf in KVM and HVF destroy_vcpu_thread
routines

 accel/accel-common.c  |  7 +++
 accel/hvf/hvf-accel-ops.c | 10 ++
 accel/kvm/kvm-accel-ops.c | 10 ++
 accel/qtest/qtest.c   |  1 +
 accel/tcg/tcg-accel-ops.c |  1 +
 accel/xen/xen-all.c   |  1 +
 cpu.c |  1 +
 include/exec/cpu-common.h |  7 +++
 include/sysemu/accel-ops.h|  3 +++
 softmmu/cpus.c|  3 +++
 softmmu/physmem.c |  5 +
 target/i386/hax/hax-accel-ops.c   |  1 +
 target/i386/nvmm/nvmm-accel-ops.c |  1 +
 target/i386/whpx/whpx-accel-ops.c |  1 +
 14 files changed, 52 insertions(+)

-- 
2.27.0




[PATCH v3 2/5] softmmu/cpus: Free cpu->thread in generic_destroy_vcpu_thread()

2022-03-21 Thread Mark Kanda
Free cpu->thread in a new AccelOpsClass::destroy_vcpu_thread() handler
generic_destroy_vcpu_thread().

vCPU hotunplug related leak reported by Valgrind:

==102631== 8 bytes in 1 blocks are definitely lost in loss record 1,037 of 8,555
==102631==at 0x4C3ADBB: calloc (vg_replace_malloc.c:1117)
==102631==by 0x69EE4CD: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.5600.4)
==102631==by 0x92443A: kvm_start_vcpu_thread (kvm-accel-ops.c:68)
==102631==by 0x4505C2: qemu_init_vcpu (cpus.c:643)
==102631==by 0x76B4D1: x86_cpu_realizefn (cpu.c:6520)
==102631==by 0x9344A7: device_set_realized (qdev.c:531)
==102631==by 0x93E329: property_set_bool (object.c:2273)
==102631==by 0x93C2F8: object_property_set (object.c:1408)
==102631==by 0x940796: object_property_set_qobject (qom-qobject.c:28)
==102631==by 0x93C663: object_property_set_bool (object.c:1477)
==102631==by 0x933D3B: qdev_realize (qdev.c:333)
==102631==by 0x455EC4: qdev_device_add_from_qdict (qdev-monitor.c:713)

Signed-off-by: Mark Kanda 
---
 accel/accel-common.c  | 6 ++
 accel/hvf/hvf-accel-ops.c | 1 +
 accel/kvm/kvm-accel-ops.c | 1 +
 accel/qtest/qtest.c   | 1 +
 accel/tcg/tcg-accel-ops.c | 1 +
 accel/xen/xen-all.c   | 1 +
 include/sysemu/accel-ops.h| 2 ++
 target/i386/hax/hax-accel-ops.c   | 1 +
 target/i386/nvmm/nvmm-accel-ops.c | 1 +
 target/i386/whpx/whpx-accel-ops.c | 1 +
 10 files changed, 16 insertions(+)

diff --git a/accel/accel-common.c b/accel/accel-common.c
index 7b8ec7e0f7..623df43cc3 100644
--- a/accel/accel-common.c
+++ b/accel/accel-common.c
@@ -28,6 +28,7 @@
 
 #include "cpu.h"
 #include "hw/core/accel-cpu.h"
+#include "sysemu/accel-ops.h"
 
 #ifndef CONFIG_USER_ONLY
 #include "accel-softmmu.h"
@@ -135,3 +136,8 @@ static void register_accel_types(void)
 }
 
 type_init(register_accel_types);
+
+void generic_destroy_vcpu_thread(CPUState *cpu)
+{
+g_free(cpu->thread);
+}
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 54457c76c2..b23a67881c 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -467,6 +467,7 @@ static void hvf_accel_ops_class_init(ObjectClass *oc, void 
*data)
 AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
 
 ops->create_vcpu_thread = hvf_start_vcpu_thread;
+ops->destroy_vcpu_thread = generic_destroy_vcpu_thread;
 ops->kick_vcpu_thread = hvf_kick_vcpu_thread;
 
 ops->synchronize_post_reset = hvf_cpu_synchronize_post_reset;
diff --git a/accel/kvm/kvm-accel-ops.c b/accel/kvm/kvm-accel-ops.c
index c4244a23c6..5a7a9ae79c 100644
--- a/accel/kvm/kvm-accel-ops.c
+++ b/accel/kvm/kvm-accel-ops.c
@@ -89,6 +89,7 @@ static void kvm_accel_ops_class_init(ObjectClass *oc, void 
*data)
 AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
 
 ops->create_vcpu_thread = kvm_start_vcpu_thread;
+ops->destroy_vcpu_thread = generic_destroy_vcpu_thread;
 ops->cpu_thread_is_idle = kvm_vcpu_thread_is_idle;
 ops->cpus_are_resettable = kvm_cpus_are_resettable;
 ops->synchronize_post_reset = kvm_cpu_synchronize_post_reset;
diff --git a/accel/qtest/qtest.c b/accel/qtest/qtest.c
index f6056ac836..ba8573fc2c 100644
--- a/accel/qtest/qtest.c
+++ b/accel/qtest/qtest.c
@@ -51,6 +51,7 @@ static void qtest_accel_ops_class_init(ObjectClass *oc, void 
*data)
 AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
 
 ops->create_vcpu_thread = dummy_start_vcpu_thread;
+ops->destroy_vcpu_thread = generic_destroy_vcpu_thread;
 ops->get_virtual_clock = qtest_get_virtual_clock;
 };
 
diff --git a/accel/tcg/tcg-accel-ops.c b/accel/tcg/tcg-accel-ops.c
index ea7dcad674..527592c4d7 100644
--- a/accel/tcg/tcg-accel-ops.c
+++ b/accel/tcg/tcg-accel-ops.c
@@ -94,6 +94,7 @@ void tcg_handle_interrupt(CPUState *cpu, int mask)
 
 static void tcg_accel_ops_init(AccelOpsClass *ops)
 {
+ops->destroy_vcpu_thread = generic_destroy_vcpu_thread;
 if (qemu_tcg_mttcg_enabled()) {
 ops->create_vcpu_thread = mttcg_start_vcpu_thread;
 ops->kick_vcpu_thread = mttcg_kick_vcpu_thread;
diff --git a/accel/xen/xen-all.c b/accel/xen/xen-all.c
index 69aa7d018b..0efda554cc 100644
--- a/accel/xen/xen-all.c
+++ b/accel/xen/xen-all.c
@@ -220,6 +220,7 @@ static void xen_accel_ops_class_init(ObjectClass *oc, void 
*data)
 AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
 
 ops->create_vcpu_thread = dummy_start_vcpu_thread;
+ops->destroy_vcpu_thread = generic_destroy_vcpu_thread;
 }
 
 static const TypeInfo xen_accel_ops_type = {
diff --git a/include/sysemu/accel-ops.h b/include/sysemu/accel-ops.h
index e296b27b82..fac7d6b34e 100644
--- a/include/sysemu/accel-ops.h
+++ b/include/sysemu/accel-ops.h
@@ -46,4 +46,6 @@ struct AccelOpsClass {
 int64_t (*get_elapsed_ticks)(void);
 };
 
+/* free vcpu thread structures */
+void generic_destroy_vcpu_thread(CPUState *cpu);
 #endif /* ACCEL_OPS_H */
diff --git a/target/i386/hax/hax-accel-ops.c b/target/i386/hax/hax-accel-ops.c
index 

[PATCH v3 1/5] accel: Introduce AccelOpsClass::destroy_vcpu_thread()

2022-03-21 Thread Mark Kanda
Add destroy_vcpu_thread() to AccelOps as a method for vcpu thread cleanup.
This will be used in subsequent patches.

Suggested-by: Philippe Mathieu-Daudé  
Signed-off-by: Mark Kanda 
Reviewed-by: Philippe Mathieu-Daudé 
---
 include/sysemu/accel-ops.h | 1 +
 softmmu/cpus.c | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/include/sysemu/accel-ops.h b/include/sysemu/accel-ops.h
index 6013c9444c..e296b27b82 100644
--- a/include/sysemu/accel-ops.h
+++ b/include/sysemu/accel-ops.h
@@ -31,6 +31,7 @@ struct AccelOpsClass {
 bool (*cpus_are_resettable)(void);
 
 void (*create_vcpu_thread)(CPUState *cpu); /* MANDATORY NON-NULL */
+void (*destroy_vcpu_thread)(CPUState *cpu);
 void (*kick_vcpu_thread)(CPUState *cpu);
 bool (*cpu_thread_is_idle)(CPUState *cpu);
 
diff --git a/softmmu/cpus.c b/softmmu/cpus.c
index 7b75bb66d5..622f8b4608 100644
--- a/softmmu/cpus.c
+++ b/softmmu/cpus.c
@@ -609,6 +609,9 @@ void cpu_remove_sync(CPUState *cpu)
 qemu_mutex_unlock_iothread();
 qemu_thread_join(cpu->thread);
 qemu_mutex_lock_iothread();
+if (cpus_accel->destroy_vcpu_thread) {
+cpus_accel->destroy_vcpu_thread(cpu);
+}
 }
 
 void cpus_register_accel(const AccelOpsClass *ops)
-- 
2.27.0




Re: [PATCH] i386: Set MCG_STATUS_RIPV bit for mce SRAR error

2022-03-21 Thread Paolo Bonzini
Queued, thanks.

Paolo





Re: [PATCH v3 3/4] tests/qemu-iotests: Move the bash and sanitizer checks to meson.build

2022-03-21 Thread Hanna Reitz

On 23.02.22 10:38, Thomas Huth wrote:

We want to get rid of check-block.sh in the long run, so let's move
the checks for the bash version and sanitizers from check-block.sh
into the meson.build file instead.

Signed-off-by: Thomas Huth 
---
  tests/check-block.sh   | 26 --
  tests/qemu-iotests/meson.build | 14 ++
  2 files changed, 14 insertions(+), 26 deletions(-)


FWIW

Reviewed-by: Hanna Reitz 




Re: [PATCH v5 14/15] docs: Add documentation for SR-IOV and Virtualization Enhancements

2022-03-21 Thread Lukasz Maniak
On Tue, Mar 01, 2022 at 01:23:18PM +0100, Klaus Jensen wrote:
> On Feb 17 18:45, Lukasz Maniak wrote:
> > Signed-off-by: Lukasz Maniak 
> 
> Please add a short commit description as well. Otherwise,

Klaus,

Sorry I forgot to add the description in v6 aka v7, been really busy
recently.
I am going to add the description for v8.

Regards,
Lukasz
> 
> Reviewed-by: Klaus Jensen 
> 
> > ---
> >  docs/system/devices/nvme.rst | 82 
> >  1 file changed, 82 insertions(+)
> > 
> > diff --git a/docs/system/devices/nvme.rst b/docs/system/devices/nvme.rst
> > index b5acb2a9c19..aba253304e4 100644
> > --- a/docs/system/devices/nvme.rst
> > +++ b/docs/system/devices/nvme.rst
> > @@ -239,3 +239,85 @@ The virtual namespace device supports DIF- and 
> > DIX-based protection information
> >to ``1`` to transfer protection information as the first eight bytes of
> >metadata. Otherwise, the protection information is transferred as the 
> > last
> >eight bytes.
> > +
> > +Virtualization Enhancements and SR-IOV (Experimental Support)
> > +-
> > +
> > +The ``nvme`` device supports Single Root I/O Virtualization and Sharing
> > +along with Virtualization Enhancements. The controller has to be linked to
> > +an NVM Subsystem device (``nvme-subsys``) for use with SR-IOV.
> > +
> > +A number of parameters are present (**please note, that they may be
> > +subject to change**):
> > +
> > +``sriov_max_vfs`` (default: ``0``)
> > +  Indicates the maximum number of PCIe virtual functions supported
> > +  by the controller. Specifying a non-zero value enables reporting of both
> > +  SR-IOV and ARI (Alternative Routing-ID Interpretation) capabilities
> > +  by the NVMe device. Virtual function controllers will not report SR-IOV.
> > +
> > +``sriov_vq_flexible``
> > +  Indicates the total number of flexible queue resources assignable to all
> > +  the secondary controllers. Implicitly sets the number of primary
> > +  controller's private resources to ``(max_ioqpairs - sriov_vq_flexible)``.
> > +
> > +``sriov_vi_flexible``
> > +  Indicates the total number of flexible interrupt resources assignable to
> > +  all the secondary controllers. Implicitly sets the number of primary
> > +  controller's private resources to ``(msix_qsize - sriov_vi_flexible)``.
> > +
> > +``sriov_max_vi_per_vf`` (default: ``0``)
> > +  Indicates the maximum number of virtual interrupt resources assignable
> > +  to a secondary controller. The default ``0`` resolves to
> > +  ``(sriov_vi_flexible / sriov_max_vfs)``
> > +
> > +``sriov_max_vq_per_vf`` (default: ``0``)
> > +  Indicates the maximum number of virtual queue resources assignable to
> > +  a secondary controller. The default ``0`` resolves to
> > +  ``(sriov_vq_flexible / sriov_max_vfs)``
> > +
> > +The simplest possible invocation enables the capability to set up one VF
> > +controller and assign an admin queue, an IO queue, and a MSI-X interrupt.
> > +
> > +.. code-block:: console
> > +
> > +   -device nvme-subsys,id=subsys0
> > +   -device nvme,serial=deadbeef,subsys=subsys0,sriov_max_vfs=1,
> > +sriov_vq_flexible=2,sriov_vi_flexible=1
> > +
> > +The minimum steps required to configure a functional NVMe secondary
> > +controller are:
> > +
> > +  * unbind flexible resources from the primary controller
> > +
> > +.. code-block:: console
> > +
> > +   nvme virt-mgmt /dev/nvme0 -c 0 -r 1 -a 1 -n 0
> > +   nvme virt-mgmt /dev/nvme0 -c 0 -r 0 -a 1 -n 0
> > +
> > +  * perform a Function Level Reset on the primary controller to actually
> > +release the resources
> > +
> > +.. code-block:: console
> > +
> > +   echo 1 > /sys/bus/pci/devices/:01:00.0/reset
> > +
> > +  * enable VF
> > +
> > +.. code-block:: console
> > +
> > +   echo 1 > /sys/bus/pci/devices/:01:00.0/sriov_numvfs
> > +
> > +  * assign the flexible resources to the VF and set it ONLINE
> > +
> > +.. code-block:: console
> > +
> > +   nvme virt-mgmt /dev/nvme0 -c 1 -r 1 -a 8 -n 1
> > +   nvme virt-mgmt /dev/nvme0 -c 1 -r 0 -a 8 -n 2
> > +   nvme virt-mgmt /dev/nvme0 -c 1 -r 0 -a 9 -n 0
> > +
> > +  * bind the NVMe driver to the VF
> > +
> > +.. code-block:: console
> > +
> > +   echo :01:00.1 > /sys/bus/pci/drivers/nvme/bind
> > \ No newline at end of file
> > -- 
> > 2.25.1
> > 
> 
> -- 
> One of us - No more doubt, silence or taboo about mental illness.





Re: Memory leak in via_isa_realize()

2022-03-21 Thread Philippe Mathieu-Daudé

Cc'ing Bernhard who did a similar cleanup recently.

On 21/3/22 11:31, Thomas Huth wrote:


  Hi!

FYI, I'm seeing a memory leak in via_isa_realize() when building
QEMU with sanitizers enabled or when running QEMU through valgrind:

$ valgrind --leak-check=full --show-leak-kinds=definite 
./qemu-system-mips64el --nographic -M fuloong2e

==210405== Memcheck, a memory error detector
==210405== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==210405== Using Valgrind-3.17.0 and LibVEX; rerun with -h for copyright 
info

==210405== Command: ./qemu-system-mips64el --nographic -M fuloong2e
==210405==
==210405== Warning: set address range perms: large range [0x15c9f000, 
0x55c9f000) (defined)
==210405== Warning: set address range perms: large range [0x59ea4000, 
0x99ea4000) (defined)
==210405== Warning: set address range perms: large range [0x99ea4000, 
0xaa0a4000) (noaccess)

QEMU 6.2.90 monitor - type 'help' for more information
(qemu) q
==210405==
==210405== HEAP SUMMARY:
==210405== in use at exit: 8,409,442 bytes in 23,516 blocks
==210405==   total heap usage: 37,073 allocs, 13,557 frees, 32,674,469 
bytes allocated

==210405==
==210405== 8 bytes in 1 blocks are definitely lost in loss record 715 of 
6,085

==210405==    at 0x4C360A5: malloc (vg_replace_malloc.c:380)
==210405==    by 0x7059475: g_malloc (in 
/usr/lib64/libglib-2.0.so.0.5600.4)

==210405==    by 0x96C52C: qemu_extend_irqs (irq.c:57)
==210405==    by 0x96C5B8: qemu_allocate_irqs (irq.c:66)
==210405==    by 0x5FFA47: via_isa_realize (vt82c686.c:591)
==210405==    by 0x5FFCDA: vt82c686b_realize (vt82c686.c:646)
==210405==    by 0x681502: pci_qdev_realize (pci.c:2192)
==210405==    by 0x969A5D: device_set_realized (qdev.c:531)
==210405==    by 0x97354A: property_set_bool (object.c:2273)
==210405==    by 0x9715A0: object_property_set (object.c:1408)
==210405==    by 0x975938: object_property_set_qobject (qom-qobject.c:28)
==210405==    by 0x971907: object_property_set_bool (object.c:1477)
==210405==
==210405== LEAK SUMMARY:
==210405==    definitely lost: 8 bytes in 1 blocks
==210405==    indirectly lost: 0 bytes in 0 blocks
==210405==  possibly lost: 3,794 bytes in 45 blocks
==210405==    still reachable: 8,405,640 bytes in 23,470 blocks
==210405==   of which reachable via heuristic:
==210405== newarray   : 1,536 bytes in 
16 blocks

==210405== suppressed: 0 bytes in 0 blocks
==210405== Reachable blocks (those to which a pointer was found) are not 
shown.

==210405== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==210405==
==210405== For lists of detected and suppressed errors, rerun with: -s
==210405== ERROR SUMMARY: 46 errors from 46 contexts (suppressed: 0 from 0)

Same problem happens with qemu-system-ppc64 and the pegasos2 machine.

No clue how to properly fix this... is it safe to free the pointer
at the end of the function?

  Thomas
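
Not a verified fix, but two common QEMU idioms that avoid leaking the container array returned by qemu_allocate_irqs(); the handler name below is a stand-in for whatever via_isa_realize() actually passes:

/* (a) allocate the single IRQ directly; there is no array to free */
qemu_irq irq = qemu_allocate_irq(via_isa_irq_handler, s, 0);

/* (b) keep the array variant, hand out the individual handles,
 *     then free only the array (not the IRQ objects themselves) */
qemu_irq *irqs = qemu_allocate_irqs(via_isa_irq_handler, s, 1);
irq = irqs[0];
g_free(irqs);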






Re: [PATCH v3 4/6] vduse-blk: implements vduse-blk export

2022-03-21 Thread Eric Blake
On Mon, Mar 21, 2022 at 03:14:37PM +0800, Xie Yongji wrote:
> This implements a VDUSE block backend based on
> the libvduse library. We can use it to export the BDSs
> for both VM and container (host) usage.
> 
> The new command-line syntax is:
> 
> $ qemu-storage-daemon \
> --blockdev file,node-name=drive0,filename=test.img \
> --export vduse-blk,node-name=drive0,id=vduse-export0,writable=on
> 
> After the qemu-storage-daemon started, we need to use
> the "vdpa" command to attach the device to vDPA bus:
> 
> $ vdpa dev add name vduse-export0 mgmtdev vduse
> 
> Also the device must be removed via the "vdpa" command
> before we stop the qemu-storage-daemon.
> 
> Signed-off-by: Xie Yongji 
> ---

Looking at just the QAPI:

> +++ b/qapi/block-export.json
> @@ -170,6 +170,22 @@
>  '*allow-other': 'FuseExportAllowOther' },
>'if': 'CONFIG_FUSE' }
>  
> +##
> +# @BlockExportOptionsVduseBlk:
> +#
> +# A vduse-blk block export.
> +#
> +# @num-queues: the number of virtqueues. Defaults to 1.
> +# @queue-size: the size of virtqueue. Defaults to 128.
> +# @logical-block-size: Logical block size in bytes. Defaults to 512 bytes.

Any restrictions on this not being allowed to be smaller than 512, or
that it must be a power of 2, or that it has a maximum size?  If so,
they should be documented.

> +#
> +# Since: 7.0

This is a new feature, and is too late for 7.0, so this line should
mention 7.1.

> +##
> +{ 'struct': 'BlockExportOptionsVduseBlk',
> +  'data': { '*num-queues': 'uint16',
> +'*queue-size': 'uint16',
> +'*logical-block-size': 'size'} }
> +
>  ##
>  # @NbdServerAddOptions:
>  #
> @@ -273,6 +289,7 @@
>  # @nbd: NBD export
>  # @vhost-user-blk: vhost-user-blk export (since 5.2)
>  # @fuse: FUSE export (since: 6.0)
> +# @vduse-blk: vduse-blk export (since 7.0)

Another spot for 7.1.

>  #
>  # Since: 4.2
>  ##
> @@ -280,7 +297,8 @@
>'data': [ 'nbd',
>  { 'name': 'vhost-user-blk',
>'if': 'CONFIG_VHOST_USER_BLK_SERVER' },
> -{ 'name': 'fuse', 'if': 'CONFIG_FUSE' } ] }
> +{ 'name': 'fuse', 'if': 'CONFIG_FUSE' },
> +{ 'name': 'vduse-blk', 'if': 'CONFIG_VDUSE_BLK_EXPORT' } ] }
>  
>  ##
>  # @BlockExportOptions:
> @@ -324,7 +342,9 @@
>'vhost-user-blk': { 'type': 'BlockExportOptionsVhostUserBlk',
>'if': 'CONFIG_VHOST_USER_BLK_SERVER' },
>'fuse': { 'type': 'BlockExportOptionsFuse',
> -'if': 'CONFIG_FUSE' }
> +'if': 'CONFIG_FUSE' },
> +  'vduse-blk': { 'type': 'BlockExportOptionsVduseBlk',
> + 'if': 'CONFIG_VDUSE_BLK_EXPORT' }
> } }
>  

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




[PATCH v1 0/1] tests/avocado: Add ast1030 test case

2022-03-21 Thread Jamin Lin
1. Add tests/avocado/machine_aspeed.py to test ASPEED SoCs with the
 avocado framework
2. Add a test case for the "ast1030-evb" machine running the Zephyr OS

Jamin Lin (1):
  test/avocado/machine_aspeed.py: Add ast1030 test case

 tests/avocado/machine_aspeed.py | 36 +
 1 file changed, 36 insertions(+)
 create mode 100644 tests/avocado/machine_aspeed.py

-- 
2.17.1




Re: [PATCH v4 1/3] qmp: Support for querying stats

2022-03-21 Thread Markus Armbruster
First: sorry for my slow response.

Mark Kanda  writes:

> Thank you Markus.
>
> On 3/11/2022 7:06 AM, Markus Armbruster wrote:
>> Mark Kanda  writes:
>>
>>> Introduce QMP support for querying stats. Provide a framework for adding new
>>> stats and support for the following commands:
>>>
>>> - query-stats
>>> Returns a list of all stats per target type (only VM and vCPU to start), 
>>> with
>>> additional options for specifying stat names, vCPU qom paths, and providers.
>>>
>>> - query-stats-schemas
>>> Returns a list of stats included in each target type, with an option for
>>> specifying the provider.
>>>
>>> The framework provides a method to register callbacks for these QMP 
>>> commands.
>>>
>>> The first use-case will be for fd-based KVM stats (in an upcoming patch).
>>>
>>> Examples (with fd-based KVM stats):
>>>
>>> - Query all VM stats:
>>>
>>> { "execute": "query-stats", "arguments" : { "target": "vm" } }
>>>
>>> { "return": {
>>>"vm": [
>>>   { "provider": "kvm",
>>> "stats": [
>>>{ "name": "max_mmu_page_hash_collisions", "value": 0 },
>>>{ "name": "max_mmu_rmap_size", "value": 0 },
>>>{ "name": "nx_lpage_splits", "value": 148 },
>>>...
>>>   { "provider": "xyz",
>>> "stats": [ ...
>>>   ...
>>> ] } }
>>>
>>> - Query all vCPU stats:
>>>
>>> { "execute": "query-stats", "arguments" : { "target": "vcpu" } }
>>>
>>> { "return": {
>>>  "vcpus": [
>>>{ "path": "/machine/unattached/device[0]"
>>>  "providers": [
>>>{ "provider": "kvm",
>>>  "stats": [
>>>{ "name": "guest_mode", "value": 0 },
>>>{ "name": "directed_yield_successful", "value": 0 },
>>>{ "name": "directed_yield_attempted", "value": 106 },
>>>...
>>>{ "provider": "xyz",
>>>  "stats": [ ...
>>> ...
>>>{ "path": "/machine/unattached/device[1]"
>>>  "providers": [
>>>{ "provider": "kvm",
>>>  "stats": [...
>>>...
>>> } ] } }
>>>
>>> - Query 'exits' and 'l1d_flush' KVM stats, and 'somestat' from provider 
>>> 'xyz'
>>> for vCPUs '/machine/unattached/device[2]' and 
>>> '/machine/unattached/device[4]':
>>>
>>> { "execute": "query-stats",
>>>"arguments": {
>>>  "target": "vcpu",
>>>  "vcpus": [ "/machine/unattached/device[2]",
>>> "/machine/unattached/device[4]" ],
>>>  "filters": [
>>>{ "provider": "kvm",
>>>  "fields": [ "l1d_flush", "exits" ] },
>>>{ "provider": "xyz",
>>>  "fields": [ "somestat" ] } ] } }
>> Are the stats bulky enough to justfify the extra complexity of
>> filtering?
>
> If this was only for KVM, the complexity probably isn't worth it. However, 
> the 
> framework is intended to support future stats with new providers and targets 
> (there has also been mention of moving existing stats to this framework). 
> Without some sort of filtering, I think the payload could become unmanageable.

I'm deeply wary of "may need $complexity in the future" when $complexity
could be added when we actually need it :)

>>> { "return": {
>>>  "vcpus": [
>>>{ "path": "/machine/unattached/device[2]"
>>>  "providers": [
>>>{ "provider": "kvm",
>>>  "stats": [ { "name": "l1d_flush", "value": 41213 },
>>> { "name": "exits", "value": 74291 } ] },
>>>{ "provider": "xyz",
>>>  "stats": [ ... ] } ] },
>>>{ "path": "/machine/unattached/device[4]"
>>>  "providers": [
>>>{ "provider": "kvm",
>>>  "stats": [ { "name": "l1d_flush", "value": 16132 },
>>> { "name": "exits", "value": 57922 } ] },
>>>{ "provider": "xyz",
>>>  "stats": [ ... ] } ] } ] } }
>>>
>>> - Query stats schemas:
>>>
>>> { "execute": "query-stats-schemas" }
>>>
>>> { "return": {
>>>  "vcpu": [
>>>{ "provider": "kvm",
>>>  "stats": [
>>> { "name": "guest_mode",
>>>   "unit": "none",
>>>   "base": 10,
>>>   "exponent": 0,
>>>   "type": "instant" },
>>>{ "name": "directed_yield_successful",
>>>   "unit": "none",
>>>   "base": 10,
>>>   "exponent": 0,
>>>   "type": "cumulative" },
>>>   ...
>>>{ "provider": "xyz",
>>>  ...
>>> "vm": [
>>>{ "provider": "kvm",
>>>  "stats": [
>>> { "name": "max_mmu_page_hash_collisions",
>>>   "unit": "none",
>>>   "base": 10,
>>>   "exponent": 0,
>>>   "type": "peak" },
>>>{ "provider": "xyz",
>>>...
>> Can you give a use case for query-stats-schemas?
>
> 'query-stats-schemas' provides the type details about each stat, such as 
> the 
> unit, base, etc. These details are not reported by 

Re: [PATCH v2] hw/i386/amd_iommu: Fix maybe-uninitialized error with GCC 12

2022-03-21 Thread Philippe Mathieu-Daudé

On 21/3/22 15:33, Paolo Bonzini wrote:

Be more explicit that the loop must roll at least once.  Avoids the
following warning:

   FAILED: libqemu-x86_64-softmmu.fa.p/hw_i386_amd_iommu.c.o
   In function 'pte_get_page_mask',
   inlined from 'amdvi_page_walk' at hw/i386/amd_iommu.c:945:25,
   inlined from 'amdvi_do_translate' at hw/i386/amd_iommu.c:989:5,
   inlined from 'amdvi_translate' at hw/i386/amd_iommu.c:1038:5:
   hw/i386/amd_iommu.c:877:38: error: 'oldlevel' may be used uninitialized 
[-Werror=maybe-uninitialized]
 877 | return ~((1UL << ((oldlevel * 9) + 3)) - 1);
 |  ^~~~
   hw/i386/amd_iommu.c: In function 'amdvi_translate':
   hw/i386/amd_iommu.c:906:41: note: 'oldlevel' was declared here
 906 | unsigned level, present, pte_perms, oldlevel;
 | ^~~~
   cc1: all warnings being treated as errors

Having:

   $ gcc --version
   gcc (Debian 12-20220313-1) 12.0.1 20220314 (experimental)

Reported-by: Philippe Mathieu-Daudé 
Signed-off-by: Paolo Bonzini 
---
  hw/i386/amd_iommu.c | 7 ++-
  1 file changed, 2 insertions(+), 5 deletions(-)


Reviewed-by: Philippe Mathieu-Daudé 

Thanks!
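
As a side note for readers unfamiliar with this class of warning, here is a tiny standalone example of the pattern (unrelated to the IOMMU code itself): when a variable is written only inside a loop and read after it, GCC cannot always prove the body ran, and converting the while loop to do/while makes the initialization unconditional.

#include <stdio.h>

/* Minimal illustration of the -Wmaybe-uninitialized pattern. */
static unsigned last_level(unsigned level)
{
    unsigned oldlevel;      /* written only inside the loop */

    do {                    /* a plain while (level > 0) loop may trigger the warning */
        oldlevel = level;
        level /= 2;
    } while (level > 0);

    return oldlevel;        /* provably initialized with do/while */
}

int main(void)
{
    printf("%u\n", last_level(9));  /* prints 1 */
    return 0;
}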



Re: [PATCH 1/1] MAINTAINERS: Update maintainers for Guest x86 HAXM CPUs

2022-03-21 Thread Markus Armbruster
Perhaps this can go via qemu-trivial (cc'ed).

"Wang, Wenchao"  writes:

> diff --git a/MAINTAINERS b/MAINTAINERS
> index f2e9ce1da2..36f877cf74 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -492,7 +492,6 @@ Guest CPU Cores (HAXM)
> -
> X86 HAXM CPUs
> M: Wenchao Wang 
> -M: Colin Xu 
> L: haxm-t...@intel.com
> W: https://github.com/intel/haxm/issues
> S: Maintained
> --
> 2.17.1

Patch git-am fails with "error: corrupt patch at line 7".

For a trivial patch like this one, the maintainer may be willing to work
around.  To post real work, you'll have to fix your mail sending, I'm
afraid.

Patch makes sense; mail to colin...@intel.com bounces.

Reviewed-by: Markus Armbruster 




[PATCH 0/2] Remove PCIE root bridge LSI on powernv

2022-03-21 Thread Frederic Barrat
The powernv8/powernv9/powernv10 machines allocate an LSI for their root
port bridge, which is not the case on real hardware. The default root
port implementation in QEMU requests an LSI. Since the powernv
implementation derives from it, that's where the LSI comes
from. This series fixes it, so that the model matches the hardware.

However, the code in hw/pci that handles AER and hotplug events assumes an
LSI is defined. It tends to assert/deassert an LSI if MSI or MSI-X is
not enabled. Since we have hardware where that is not true, this series
also fixes a few code paths to check whether an LSI is configured before
trying to trigger it.


Frederic Barrat (2):
  pcie: Don't try triggering a LSI when not defined
  ppc/pnv: Remove LSI on the PCIE host bridge

 hw/pci-host/pnv_phb3.c | 1 +
 hw/pci-host/pnv_phb4.c | 1 +
 hw/pci/pcie.c  | 8 ++--
 hw/pci/pcie_aer.c  | 4 +++-
 4 files changed, 11 insertions(+), 3 deletions(-)

-- 
2.35.1
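
A hedged sketch of the guard described in the second paragraph of the cover letter; the function name and vector arguments are placeholders, only the order of the msix/msi/interrupt-pin checks is the point:

/* Placeholder sketch; not the actual hw/pci/pcie.c change. */
static void hotplug_event_notify_sketch(PCIDevice *dev)
{
    if (msix_enabled(dev)) {
        msix_notify(dev, 0);            /* placeholder vector */
    } else if (msi_enabled(dev)) {
        msi_notify(dev, 0);             /* placeholder vector */
    } else if (pci_get_byte(dev->config + PCI_INTERRUPT_PIN)) {
        /* only drive the LSI when the device actually reports one */
        pci_irq_assert(dev);
    }
}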




Re: [PATCH v4 00/18] iotests: add enhanced debugging info to qemu-img failures

2022-03-21 Thread John Snow
On Mon, Mar 21, 2022, 11:50 AM Hanna Reitz  wrote:

> On 21.03.22 14:14, Hanna Reitz wrote:
> > On 18.03.22 22:14, John Snow wrote:
> >> On Fri, Mar 18, 2022 at 9:36 AM Hanna Reitz  wrote:
> >>> On 18.03.22 00:49, John Snow wrote:
>  Hiya!
> 
>  This series effectively replaces qemu_img_pipe_and_status() with a
>  rewritten function named qemu_img() that raises an exception on
>  non-zero
>  return code by default. By the end of the series, every last
>  invocation
>  of the qemu-img binary ultimately goes through qemu_img().
> 
>  The exception that this function raises includes stdout/stderr output
>  when the traceback is printed in a a little decorated text box so that
>  it stands out from the jargony Python traceback readout.
> 
>  (You can test what this looks like for yourself, or at least you
>  could,
>  by disabling ztsd support and then running qcow2 iotest 065.)
> 
>  Negative tests are still possible in two ways:
> 
>  - Passing check=False to qemu_img, qemu_img_log, or img_info_log
>  - Catching and handling the CalledProcessError exception at the
>  callsite.
> >>> Thanks!  Applied to my block branch:
> >>>
> >>> https://gitlab.com/hreitz/qemu/-/commits/block
> >>>
> >>> Hanna
> >>>
> >> Actually, hold it -- this looks like it is causing problems with the
> >> Gitlab CI. I need to investigate these.
> >> https://gitlab.com/jsnow/qemu/-/pipelines/495155073/failures
> >>
> >> ... and, ugh, naturally the nice error diagnostics are suppressed here
> >> so I can't see them. Well, there's one more thing to try and fix
> >> somehow.
> >
> > I hope this patch by Thomas fixes the logging at least:
> >
> > https://lists.nongnu.org/archive/html/qemu-devel/2022-03/msg02946.html
>
> So I found three issues:
>
> 1. check-patch wrongfully complains about the comment added in in
> “python/utils: add add_visual_margin() text decoration utility” that
> shows an example for how the output looks.  It complains the lines
> consisting mostly of “” were too long.  I believe that’s because
> it counts bytes, not characters.
>
> Not fatal, i.e. doesn’t break the pipeline.  We should ignore that.
>

Agree. (Though I did shorten the lines in my re-spin to see if I could make
it shut up, but it didn't work. Ignoring it is.)


> 2. riscv64-debian-cross-container breaks, but that looks pre-existing.
> apt complains about some dependencies.
>
> Also marked as allowed-to-fail, so I believe we should also just ignore
> that.  (Seems to fail on `master`, too.)
>

Yeah, I don't think this is me.


> 3. The rest are runs complaining about
> `subprocess.CompletedProcess[str]`.  Looks like the same issue I was
> facing for ec88eed8d14088b36a3495710368b8d1a3c33420, where I had to
> specify the type as a string.
>
> Indeed this is fixed by something like
>
> https://gitlab.com/hreitz/qemu/-/commit/87615eb536bdca7babe8eb4a35fd4ea810d1da24
> .  Maybe squash that in?  (If it’s the correct way to go about this?)
>
> Hanna
>

Yep, sorry for not replying. I respun the series and tested it, but it
became "way too Saturday" for me to hit send on the respin. Will do so
today.

(Annoying: I test under python 3.6, but I didn't *run the iotests with
3.6*, which is where this problem shows up. Meh.)


[PULL 0/2] Bugfixes for QEMU 7.0-rc1

2022-03-21 Thread Paolo Bonzini
The following changes since commit e2fb7d8aa218256793df99571d16f92074258447:

  Merge tag 'dbus-pull-request' of gitlab.com:marcandre.lureau/qemu into 
staging (2022-03-15 16:28:50 +)

are available in the Git repository at:

  https://gitlab.com/bonzini/qemu.git tags/for-upstream

for you to fetch changes up to 17e6ffa6a5d2674cb2ebfd967d28b1048261d977:

  hw/i386/amd_iommu: Fix maybe-uninitialized error with GCC 12 (2022-03-21 
15:57:47 +0100)


Bugfixes.


Paolo Bonzini (2):
  target/i386: kvm: do not access uninitialized variable on older kernels
  hw/i386/amd_iommu: Fix maybe-uninitialized error with GCC 12

 hw/i386/amd_iommu.c   |  7 ++-
 target/i386/kvm/kvm.c | 17 +
 2 files changed, 15 insertions(+), 9 deletions(-)
-- 
2.35.1




[PATCH 3/3] qapi-schema: test: add a unit test for parsing array alternates

2022-03-21 Thread Paolo Bonzini
Signed-off-by: Paolo Bonzini 
---
 tests/qapi-schema/qapi-schema-test.json |  1 +
 tests/qapi-schema/qapi-schema-test.out  |  4 +++
 tests/unit/test-qobject-input-visitor.c | 43 +
 3 files changed, 48 insertions(+)

diff --git a/tests/qapi-schema/qapi-schema-test.json 
b/tests/qapi-schema/qapi-schema-test.json
index 43b8697002..ba7302f42b 100644
--- a/tests/qapi-schema/qapi-schema-test.json
+++ b/tests/qapi-schema/qapi-schema-test.json
@@ -119,6 +119,7 @@
 { 'alternate': 'AltEnumNum', 'data': { 'e': 'EnumOne', 'n': 'number' } }
 { 'alternate': 'AltNumEnum', 'data': { 'n': 'number', 'e': 'EnumOne' } }
 { 'alternate': 'AltEnumInt', 'data': { 'e': 'EnumOne', 'i': 'int' } }
+{ 'alternate': 'AltListInt', 'data': { 'l': ['int'], 'i': 'int' } }
 
 # for testing use of 'str' within alternates
 { 'alternate': 'AltStrObj', 'data': { 's': 'str', 'o': 'TestStruct' } }
diff --git a/tests/qapi-schema/qapi-schema-test.out 
b/tests/qapi-schema/qapi-schema-test.out
index 1f9585fa9b..043d75c655 100644
--- a/tests/qapi-schema/qapi-schema-test.out
+++ b/tests/qapi-schema/qapi-schema-test.out
@@ -121,6 +121,10 @@ alternate AltEnumInt
 tag type
 case e: EnumOne
 case i: int
+alternate AltListInt
+tag type
+case l: intList
+case i: int
 alternate AltStrObj
 tag type
 case s: str
diff --git a/tests/unit/test-qobject-input-visitor.c 
b/tests/unit/test-qobject-input-visitor.c
index 6f59a7f432..2af002dd82 100644
--- a/tests/unit/test-qobject-input-visitor.c
+++ b/tests/unit/test-qobject-input-visitor.c
@@ -776,6 +776,7 @@ static void 
test_visitor_in_alternate_number(TestInputVisitorData *data,
 AltEnumNum *aen;
 AltNumEnum *ans;
 AltEnumInt *asi;
+AltListInt *ali;
 
 /* Parsing an int */
 
@@ -802,6 +803,12 @@ static void 
test_visitor_in_alternate_number(TestInputVisitorData *data,
 g_assert_cmpint(asi->u.i, ==, 42);
 qapi_free_AltEnumInt(asi);
 
+v = visitor_input_test_init(data, "42");
+visit_type_AltListInt(v, NULL, &ali, &error_abort);
+g_assert_cmpint(ali->type, ==, QTYPE_QNUM);
+g_assert_cmpint(ali->u.i, ==, 42);
+qapi_free_AltListInt(ali);
+
 /* Parsing a double */
 
 v = visitor_input_test_init(data, "42.5");
@@ -827,6 +834,40 @@ static void 
test_visitor_in_alternate_number(TestInputVisitorData *data,
 qapi_free_AltEnumInt(asi);
 }
 
+static void test_visitor_in_alternate_list(TestInputVisitorData *data,
+ const void *unused)
+{
+intList *item;
+Visitor *v;
+AltListInt *ali;
+int i;
+
+v = visitor_input_test_init(data, "[ 42, 43, 44 ]");
+visit_type_AltListInt(v, NULL, &ali, &error_abort);
+g_assert(ali != NULL);
+
+g_assert_cmpint(ali->type, ==, QTYPE_QLIST);
+for (i = 0, item = ali->u.l; item; item = item->next, i++) {
+char string[12];
+
+snprintf(string, sizeof(string), "string%d", i);
+g_assert_cmpint(item->value, ==, 42 + i);
+}
+
+qapi_free_AltListInt(ali);
+ali = NULL;
+
+/* An empty list is valid */
+v = visitor_input_test_init(data, "[]");
+visit_type_AltListInt(v, NULL, &ali, &error_abort);
+g_assert(ali != NULL);
+
+g_assert_cmpint(ali->type, ==, QTYPE_QLIST);
+g_assert(!ali->u.l);
+qapi_free_AltListInt(ali);
+ali = NULL;
+}
+
 static void input_visitor_test_add(const char *testpath,
const void *user_data,
void (*test_func)(TestInputVisitorData 
*data,
@@ -1188,6 +1229,8 @@ int main(int argc, char **argv)
NULL, test_visitor_in_wrong_type);
 input_visitor_test_add("/visitor/input/alternate-number",
NULL, test_visitor_in_alternate_number);
+input_visitor_test_add("/visitor/input/alternate-list",
+   NULL, test_visitor_in_alternate_list);
 input_visitor_test_add("/visitor/input/fail/struct",
NULL, test_visitor_in_fail_struct);
 input_visitor_test_add("/visitor/input/fail/struct-nested",
-- 
2.35.1




[PATCH] Define MAP_SYNC and MAP_SHARED_VALIDATE on needed linux systems

2022-03-21 Thread Khem Raj
Linux only wires up MAP_SYNC and MAP_SHARED_VALIDATE for architectures
which include asm-generic/mman.h; mips and powerpc do not include this
file in linux/mman.h, therefore these macros should be defined for such
architectures on Linux as well. This fixes the build on mips/musl/linux.

Signed-off-by: Khem Raj 
Cc: Zhang Yi 
Cc: Michael S. Tsirkin 
---
 util/mmap-alloc.c | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
index 893d864354..86d3cda248 100644
--- a/util/mmap-alloc.c
+++ b/util/mmap-alloc.c
@@ -10,14 +10,18 @@
  * later.  See the COPYING file in the top-level directory.
  */
 
+#include "qemu/osdep.h"
 #ifdef CONFIG_LINUX
 #include <linux/mman.h>
-#else  /* !CONFIG_LINUX */
+#endif  /* CONFIG_LINUX */
+
+#ifndef MAP_SYNC
 #define MAP_SYNC  0x0
+#endif /* MAP_SYNC */
+#ifndef MAP_SHARED_VALIDATE
 #define MAP_SHARED_VALIDATE   0x0
-#endif /* CONFIG_LINUX */
+#endif /* MAP_SHARED_VALIDATE */
 
-#include "qemu/osdep.h"
 #include "qemu/mmap-alloc.h"
 #include "qemu/host-utils.h"
 #include "qemu/cutils.h"
-- 
2.35.1




RE: [PATCH v8 12/12] target/hexagon: import additional tests

2022-03-21 Thread Taylor Simpson


> -Original Message-
> From: Anton Johansson 
> Sent: Wednesday, February 9, 2022 11:03 AM
> To: qemu-devel@nongnu.org
> Cc: a...@rev.ng; Taylor Simpson ; Brian Cain
> ; Michael Lambert ;
> bab...@rev.ng; ni...@rev.ng; richard.hender...@linaro.org
> Subject: [PATCH v8 12/12] target/hexagon: import additional tests
> 
> From: Niccolò Izzo 
> 
> Signed-off-by: Alessandro Di Federico 
> Signed-off-by: Niccolò Izzo 
> Signed-off-by: Anton Johansson 
> ---
>  tests/tcg/hexagon/Makefile.target  | 28 -
>  tests/tcg/hexagon/crt.S| 14 +++
>  tests/tcg/hexagon/test_abs.S   | 17 
>  tests/tcg/hexagon/test_bitcnt.S| 40 +++
>  tests/tcg/hexagon/test_bitsplit.S  | 22 ++
>  tests/tcg/hexagon/test_call.S  | 64 ++
>  tests/tcg/hexagon/test_clobber.S   | 29 ++
>  tests/tcg/hexagon/test_cmp.S   | 31 +++
>  tests/tcg/hexagon/test_dotnew.S| 38 ++
>  tests/tcg/hexagon/test_ext.S   | 13 ++
>  tests/tcg/hexagon/test_fibonacci.S | 30 ++
>  tests/tcg/hexagon/test_hl.S| 16 
>  tests/tcg/hexagon/test_hwloops.S   | 19 +
>  tests/tcg/hexagon/test_jmp.S   | 22 ++
>  tests/tcg/hexagon/test_lsr.S   | 36 +
>  tests/tcg/hexagon/test_mpyi.S  | 17 
>  tests/tcg/hexagon/test_packet.S| 29 ++
>  tests/tcg/hexagon/test_reorder.S   | 33 +++
>  tests/tcg/hexagon/test_round.S | 29 ++
>  tests/tcg/hexagon/test_vavgw.S | 31 +++
>  tests/tcg/hexagon/test_vcmpb.S | 30 ++
>  tests/tcg/hexagon/test_vcmpw.S | 30 ++
>  tests/tcg/hexagon/test_vlsrw.S | 20 ++
>  tests/tcg/hexagon/test_vmaxh.S | 35 
>  tests/tcg/hexagon/test_vminh.S | 35 
>  tests/tcg/hexagon/test_vpmpyh.S| 28 +
>  tests/tcg/hexagon/test_vspliceb.S  | 31 +++
>  27 files changed, 766 insertions(+), 1 deletion(-)  create mode 100644
> tests/tcg/hexagon/crt.S  create mode 100644 tests/tcg/hexagon/test_abs.S
> create mode 100644 tests/tcg/hexagon/test_bitcnt.S  create mode 100644
> tests/tcg/hexagon/test_bitsplit.S  create mode 100644
> tests/tcg/hexagon/test_call.S  create mode 100644
> tests/tcg/hexagon/test_clobber.S  create mode 100644
> tests/tcg/hexagon/test_cmp.S  create mode 100644
> tests/tcg/hexagon/test_dotnew.S  create mode 100644
> tests/tcg/hexagon/test_ext.S  create mode 100644
> tests/tcg/hexagon/test_fibonacci.S
>  create mode 100644 tests/tcg/hexagon/test_hl.S  create mode 100644
> tests/tcg/hexagon/test_hwloops.S  create mode 100644
> tests/tcg/hexagon/test_jmp.S  create mode 100644
> tests/tcg/hexagon/test_lsr.S  create mode 100644
> tests/tcg/hexagon/test_mpyi.S  create mode 100644
> tests/tcg/hexagon/test_packet.S  create mode 100644
> tests/tcg/hexagon/test_reorder.S  create mode 100644
> tests/tcg/hexagon/test_round.S  create mode 100644
> tests/tcg/hexagon/test_vavgw.S  create mode 100644
> tests/tcg/hexagon/test_vcmpb.S  create mode 100644
> tests/tcg/hexagon/test_vcmpw.S  create mode 100644
> tests/tcg/hexagon/test_vlsrw.S  create mode 100644
> tests/tcg/hexagon/test_vmaxh.S  create mode 100644
> tests/tcg/hexagon/test_vminh.S  create mode 100644
> tests/tcg/hexagon/test_vpmpyh.S  create mode 100644
> tests/tcg/hexagon/test_vspliceb.S

Reviewed-by: Taylor Simpson 



[PATCH v5 14/18] iotests: remove remaining calls to qemu_img_pipe()

2022-03-21 Thread John Snow
As part of moving all python iotest invocations of qemu-img onto a
single qemu_img() implementation, remove a few lingering uses of
qemu_img_pipe() from outside of iotests.py itself.

Several cases here rely on the knowledge that qemu_img_pipe() suppresses
*all* output on a successful case when the command being issued is
'create'.

065: This call's output is inspected, but it appears as if it's expected
 to succeed. Replace this call with the checked qemu_img() variant
 instead to get better diagnostics if/when qemu-img itself fails.

237: "create" call output isn't actually logged. Use qemu_img_create()
 instead, which checks the return code. Remove the empty lines from
 the test output.

296: Two calls;
 -create: Expected to succeed. Like other create calls, the output
  isn't actually logged.  Switch to a checked variant
  (qemu_img_create) instead. The output for this test is
  a mixture of both test styles, so actually replace the
  blank line for readability.
 -amend:  This is expected to fail. Log the output.

After this patch, the only uses of qemu_img_pipe are internal to
iotests.py and will be removed in subsequent patches.

Signed-off-by: John Snow 
Reviewed-by: Hanna Reitz 
---
 tests/qemu-iotests/065 |  4 ++--
 tests/qemu-iotests/237 |  3 +--
 tests/qemu-iotests/237.out |  3 ---
 tests/qemu-iotests/296 | 12 ++--
 4 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/tests/qemu-iotests/065 b/tests/qemu-iotests/065
index 9466ce7df4..ba94e19349 100755
--- a/tests/qemu-iotests/065
+++ b/tests/qemu-iotests/065
@@ -24,7 +24,7 @@ import os
 import re
 import json
 import iotests
-from iotests import qemu_img, qemu_img_info, qemu_img_pipe
+from iotests import qemu_img, qemu_img_info
 import unittest
 
 test_img = os.path.join(iotests.test_dir, 'test.img')
@@ -54,7 +54,7 @@ class TestQemuImgInfo(TestImageInfoSpecific):
 self.assertEqual(data['data'], self.json_compare)
 
 def test_human(self):
-data = qemu_img_pipe('info', '--output=human', test_img).split('\n')
+data = qemu_img('info', '--output=human', test_img).stdout.split('\n')
 data = data[(data.index('Format specific information:') + 1)
 :data.index('')]
 for field in data:
diff --git a/tests/qemu-iotests/237 b/tests/qemu-iotests/237
index 43dfd3bd40..5ea13eb01f 100755
--- a/tests/qemu-iotests/237
+++ b/tests/qemu-iotests/237
@@ -165,8 +165,7 @@ with iotests.FilePath('t.vmdk') as disk_path, \
 iotests.log("")
 
 for path in [ extent1_path, extent2_path, extent3_path ]:
-msg = iotests.qemu_img_pipe('create', '-f', imgfmt, path, '0')
-iotests.log(msg, [iotests.filter_testfiles])
+iotests.qemu_img_create('-f', imgfmt, path, '0')
 
 vm.add_blockdev('driver=file,filename=%s,node-name=ext1' % (extent1_path))
 vm.add_blockdev('driver=file,filename=%s,node-name=ext2' % (extent2_path))
diff --git a/tests/qemu-iotests/237.out b/tests/qemu-iotests/237.out
index aeb9724492..62b8865677 100644
--- a/tests/qemu-iotests/237.out
+++ b/tests/qemu-iotests/237.out
@@ -129,9 +129,6 @@ Job failed: Cannot find device='this doesn't exist' nor 
node-name='this doesn't
 
 === Other subformats ===
 
-
-
-
 == Missing extent ==
 
 {"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": 
{"driver": "vmdk", "file": "node0", "size": 33554432, "subformat": 
"monolithicFlat"}}}
diff --git a/tests/qemu-iotests/296 b/tests/qemu-iotests/296
index f80ef3434a..0d21b740a7 100755
--- a/tests/qemu-iotests/296
+++ b/tests/qemu-iotests/296
@@ -76,7 +76,7 @@ class EncryptionSetupTestCase(iotests.QMPTestCase):
 # create the encrypted block device using qemu-img
 def createImg(self, file, secret):
 
-output = iotests.qemu_img_pipe(
+iotests.qemu_img(
 'create',
 '--object', *secret.to_cmdline_object(),
 '-f', iotests.imgfmt,
@@ -84,8 +84,7 @@ class EncryptionSetupTestCase(iotests.QMPTestCase):
 '-o', 'iter-time=10',
 file,
 '1M')
-
-iotests.log(output, filters=[iotests.filter_test_dir])
+iotests.log('')
 
 # attempts to add a key using qemu-img
 def addKey(self, file, secret, new_secret):
@@ -99,7 +98,7 @@ class EncryptionSetupTestCase(iotests.QMPTestCase):
 }
 }
 
-output = iotests.qemu_img_pipe(
+output = iotests.qemu_img(
 'amend',
 '--object', *secret.to_cmdline_object(),
 '--object', *new_secret.to_cmdline_object(),
@@ -108,8 +107,9 @@ class EncryptionSetupTestCase(iotests.QMPTestCase):
 '-o', 'new-secret=' + new_secret.id(),
 '-o', 'iter-time=10',
 
-"json:" + json.dumps(image_options)
-)
+"json:" + json.dumps(image_options),
+check=False  # Expected to fail. Log output.
+).stdout
 
  

[PATCH v5 16/18] iotests: replace qemu_img_log('create', ...) calls

2022-03-21 Thread John Snow
qemu_img_log() calls into qemu_img_pipe(), which always removes output
for 'create' commands on success anyway. Replace all of these calls to
the simpler qemu_img_create(...) which doesn't log, but raises a
detailed exception object on failure instead.

Blank lines are removed from output files where appropriate.

Signed-off-by: John Snow 
Reviewed-by: Hanna Reitz 
---
 tests/qemu-iotests/255 |  8 
 tests/qemu-iotests/255.out |  4 
 tests/qemu-iotests/274 | 17 -
 tests/qemu-iotests/274.out | 29 -
 tests/qemu-iotests/280 |  2 +-
 tests/qemu-iotests/280.out |  1 -
 6 files changed, 13 insertions(+), 48 deletions(-)

diff --git a/tests/qemu-iotests/255 b/tests/qemu-iotests/255
index 3d6d0e80cb..f86fa851b6 100755
--- a/tests/qemu-iotests/255
+++ b/tests/qemu-iotests/255
@@ -42,8 +42,8 @@ with iotests.FilePath('t.qcow2') as disk_path, \
 size_str = str(size)
 
 iotests.create_image(base_path, size)
-iotests.qemu_img_log('create', '-f', iotests.imgfmt, mid_path, size_str)
-iotests.qemu_img_log('create', '-f', iotests.imgfmt, disk_path, size_str)
+iotests.qemu_img_create('-f', iotests.imgfmt, mid_path, size_str)
+iotests.qemu_img_create('-f', iotests.imgfmt, disk_path, size_str)
 
 # Create a backing chain like this:
 # base <- [throttled: bps-read=4096] <- mid <- overlay
@@ -92,8 +92,8 @@ with iotests.FilePath('src.qcow2') as src_path, \
 size = 128 * 1024 * 1024
 size_str = str(size)
 
-iotests.qemu_img_log('create', '-f', iotests.imgfmt, src_path, size_str)
-iotests.qemu_img_log('create', '-f', iotests.imgfmt, dst_path, size_str)
+iotests.qemu_img_create('-f', iotests.imgfmt, src_path, size_str)
+iotests.qemu_img_create('-f', iotests.imgfmt, dst_path, size_str)
 
 iotests.log(iotests.qemu_io('-f', iotests.imgfmt, '-c', 'write 0 1M',
 src_path),
diff --git a/tests/qemu-iotests/255.out b/tests/qemu-iotests/255.out
index 11a05a5213..2e837cbb5f 100644
--- a/tests/qemu-iotests/255.out
+++ b/tests/qemu-iotests/255.out
@@ -3,8 +3,6 @@ Finishing a commit job with background reads
 
 === Create backing chain and start VM ===
 
-
-
 === Start background read requests ===
 
 === Run a commit job ===
@@ -21,8 +19,6 @@ Closing the VM while a job is being cancelled
 
 === Create images and start VM ===
 
-
-
 wrote 1048576/1048576 bytes at offset 0
 1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 
diff --git a/tests/qemu-iotests/274 b/tests/qemu-iotests/274
index 080a90f10f..2495e051a2 100755
--- a/tests/qemu-iotests/274
+++ b/tests/qemu-iotests/274
@@ -31,12 +31,11 @@ size_long = 2 * 1024 * 1024
 size_diff = size_long - size_short
 
 def create_chain() -> None:
-iotests.qemu_img_log('create', '-f', iotests.imgfmt, base,
- str(size_long))
-iotests.qemu_img_log('create', '-f', iotests.imgfmt, '-b', base,
- '-F', iotests.imgfmt, mid, str(size_short))
-iotests.qemu_img_log('create', '-f', iotests.imgfmt, '-b', mid,
- '-F', iotests.imgfmt, top, str(size_long))
+iotests.qemu_img_create('-f', iotests.imgfmt, base, str(size_long))
+iotests.qemu_img_create('-f', iotests.imgfmt, '-b', base,
+'-F', iotests.imgfmt, mid, str(size_short))
+iotests.qemu_img_create('-f', iotests.imgfmt, '-b', mid,
+'-F', iotests.imgfmt, top, str(size_long))
 
 iotests.qemu_io_log('-c', 'write -P 1 0 %d' % size_long, base)
 
@@ -160,9 +159,9 @@ with iotests.FilePath('base') as base, \
 ('off',  '512k', '256k', '500k', '436k')]:
 
 iotests.log('=== preallocation=%s ===' % prealloc)
-iotests.qemu_img_log('create', '-f', iotests.imgfmt, base, base_size)
-iotests.qemu_img_log('create', '-f', iotests.imgfmt, '-b', base,
- '-F', iotests.imgfmt, top, top_size_old)
+iotests.qemu_img_create('-f', iotests.imgfmt, base, base_size)
+iotests.qemu_img_create('-f', iotests.imgfmt, '-b', base,
+'-F', iotests.imgfmt, top, top_size_old)
 iotests.qemu_io_log('-c', 'write -P 1 %s 64k' % off, base)
 
 # After this, top_size_old to base_size should be allocated/zeroed.
diff --git a/tests/qemu-iotests/274.out b/tests/qemu-iotests/274.out
index 1ce40d839a..acd8b166a6 100644
--- a/tests/qemu-iotests/274.out
+++ b/tests/qemu-iotests/274.out
@@ -1,7 +1,4 @@
 == Commit tests ==
-
-
-
 wrote 2097152/2097152 bytes at offset 0
 2 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 
@@ -63,9 +60,6 @@ read 1048576/1048576 bytes at offset 1048576
 1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 
 === Testing HMP commit (top -> mid) ===
-
-
-
 wrote 2097152/2097152 bytes at offset 0
 2 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 
@@ -92,9 +86,6 @@ read 1048576/1048576 bytes at offset 1048576
 1 MiB, X ops; XX:XX:XX.X (XXX 

Re: [PULL 0/4] Miscellaneous patches patches for 2022-03-21

2022-03-21 Thread Peter Maydell
On Mon, 21 Mar 2022 at 14:59, Markus Armbruster  wrote:
>
> If it's too late for trivial cleanup, I'll respin this with the last
> patch dropped.
>
> The following changes since commit 2058fdbe81e2985c226a026851dd26b146d3395c:
>
>   Merge tag 'fixes-20220318-pull-request' of git://git.kraxel.org/qemu into 
> staging (2022-03-19 11:28:54 +)
>
> are available in the Git repository at:
>
>   git://repo.or.cz/qemu/armbru.git tags/pull-misc-2022-03-21
>
> for you to fetch changes up to b21e2380376c470900fcadf47507f4d5ade75e85:
>
>   Use g_new() & friends where that makes obvious sense (2022-03-21 15:44:44 
> +0100)
>
> 
> Miscellaneous patches patches for 2022-03-21
>


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/7.0
for any user-visible changes.

-- PMM



Re: [PATCH v5 17/18] iotests: remove qemu_img_pipe_and_status()

2022-03-21 Thread Eric Blake
On Mon, Mar 21, 2022 at 04:16:17PM -0400, John Snow wrote:
> With the exceptional 'create' calls removed in the prior commit, change
> qemu_img_log() and img_info_log() to call qemu_img() directly
> instead.
> 
> For now, allow these calls to qemu-img to return non-zero on the basis
> that any unusual output will be logged anyway. The very next commit
> begins to enforce a successful exit code by default even for the logged
> functions.
> 
> Signed-off-by: John Snow 
> ---
>  tests/qemu-iotests/iotests.py | 26 +++---
>  1 file changed, 7 insertions(+), 19 deletions(-)
>

Reviewed-by: Eric Blake 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Re: [PATCH v3 4/5] cpu: Free cpu->cpu_ases in cpu_address_space_destroy()

2022-03-21 Thread Philippe Mathieu-Daudé

On 21/3/22 15:14, Mark Kanda wrote:

Create cpu_address_space_destroy() to free a CPU's cpu_ases list.

vCPU hotunplug related leak reported by Valgrind:

==132362== 216 bytes in 1 blocks are definitely lost in loss record 7,119 of 8,549
==132362==    at 0x4C3ADBB: calloc (vg_replace_malloc.c:1117)
==132362==    by 0x69EE4CD: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.5600.4)
==132362==    by 0x7E34AF: cpu_address_space_init (physmem.c:751)
==132362==    by 0x45053E: qemu_init_vcpu (cpus.c:635)
==132362==    by 0x76B4A7: x86_cpu_realizefn (cpu.c:6520)
==132362==    by 0x9343ED: device_set_realized (qdev.c:531)
==132362==    by 0x93E26F: property_set_bool (object.c:2273)
==132362==    by 0x93C23E: object_property_set (object.c:1408)
==132362==    by 0x9406DC: object_property_set_qobject (qom-qobject.c:28)
==132362==    by 0x93C5A9: object_property_set_bool (object.c:1477)
==132362==    by 0x933C81: qdev_realize (qdev.c:333)
==132362==    by 0x455E9A: qdev_device_add_from_qdict (qdev-monitor.c:713)

Signed-off-by: Mark Kanda 
---
  cpu.c | 1 +
  include/exec/cpu-common.h | 7 +++
  softmmu/physmem.c | 5 +
  3 files changed, 13 insertions(+)
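
For readers without the patch body in front of them, a minimal sketch of
what such a helper could look like, assuming it only releases the array
reported as leaked above (hypothetical, not the actual patch contents):

    /* Hypothetical sketch only -- the real patch may differ. */
    void cpu_address_space_destroy(CPUState *cpu)
    {
        /* Release the per-CPU address space array allocated by
         * cpu_address_space_init() in the backtrace above. */
        g_free(cpu->cpu_ases);
        cpu->cpu_ases = NULL;
    }

It would presumably be called from the vCPU unrealize/hot-unplug path, so
that the allocation made during qemu_init_vcpu() does not outlive the CPU.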


Tested-by: Philippe Mathieu-Daudé 



Re: [PATCH-for-7.0] qemu/main-loop: Disable block backend global state assertion on Darwin

2022-03-21 Thread Akihiko Odaki

On 2022/03/21 23:55, Philippe Mathieu-Daudé wrote:

From: Philippe Mathieu-Daudé 

Since commit 0439c5a462 ("block/block-backend.c: assertions for
block-backend") QEMU crashes on Darwin hosts, example on macOS:

   $ qemu-system-i386
   Assertion failed: (qemu_in_main_thread()), function blk_all_next, file 
block-backend.c, line 552.
   Abort trap: 6

Looking with lldb:

   Assertion failed: (qemu_in_main_thread()), function blk_all_next, file 
block-backend.c, line 552.
   Process 76914 stopped
   * thread #1, queue = 'com.apple.main-thread', stop reason = hit program 
assert
  frame #4: 0x00010057c2d4 qemu-system-i386`blk_all_next.cold.1
   at block-backend.c:552:5 [opt]
   549*/
   550   BlockBackend *blk_all_next(BlockBackend *blk)
   551   {
   --> 552   GLOBAL_STATE_CODE();
   553   return blk ? QTAILQ_NEXT(blk, link)
   554  : QTAILQ_FIRST(&block_backends);
   555   }
   Target 1: (qemu-system-i386) stopped.

   (lldb) bt
   * thread #1, queue = 'com.apple.main-thread', stop reason = hit program 
assert
  frame #0: 0x0001908c99b8 libsystem_kernel.dylib`__pthread_kill + 8
  frame #1: 0x0001908fceb0 libsystem_pthread.dylib`pthread_kill + 288
  frame #2: 0x00019083a314 libsystem_c.dylib`abort + 164
  frame #3: 0x00019083972c libsystem_c.dylib`__assert_rtn + 300
* frame #4: 0x00010057c2d4 qemu-system-i386`blk_all_next.cold.1 at 
block-backend.c:552:5 [opt]
  frame #5: 0x0001003c00b4 
qemu-system-i386`blk_all_next(blk=) at block-backend.c:552:5 [opt]
  frame #6: 0x0001003d8f04 
qemu-system-i386`qmp_query_block(errp=0x) at qapi.c:591:16 [opt]
  frame #7: 0x00010003ab0c qemu-system-i386`main [inlined] 
addRemovableDevicesMenuItems at cocoa.m:1756:21 [opt]
  frame #8: 0x00010003ab04 qemu-system-i386`main(argc=, 
argv=) at cocoa.m:1980:5 [opt]
  frame #9: 0x0001012690f4 dyld`start + 520

As we are now past the 7.0 release hard freeze, disable the block
backend assertion which, while being valuable during development,
is not helpful to users. We'll restore this assertion immediately
once 7.0 is released and work on a fix.

Cc: Kevin Wolf 
Cc: Paolo Bonzini 
Cc: Peter Maydell 
Cc: Emanuele Giuseppe Esposito 
Suggested-by: Akihiko Odaki 
Signed-off-by: Philippe Mathieu-Daudé 
---
  include/qemu/main-loop.h | 4 
  1 file changed, 4 insertions(+)

diff --git a/include/qemu/main-loop.h b/include/qemu/main-loop.h
index 7a4d6a0920..c27968ce33 100644
--- a/include/qemu/main-loop.h
+++ b/include/qemu/main-loop.h
@@ -270,10 +270,14 @@ bool qemu_mutex_iothread_locked(void);
  bool qemu_in_main_thread(void);
  
  /* Mark and check that the function is part of the global state API. */

+#ifdef CONFIG_DARWIN


You may use CONFIG_COCOA instead. The assertion can still do its job on 
Darwin if ui/cocoa is not in use.


Also, a code comment would be nice to have, since the intention is rather 
unclear from the code alone, even though this is temporary and few people 
would stumble upon it.
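
A rough sketch of what that could look like (illustrative only, untested,
and not the actual follow-up patch):

    /* The Cocoa UI currently calls block-layer helpers from the Apple main
     * thread rather than QEMU's main thread, so relax the assertion only
     * for that build.  TODO: restore it once the Cocoa code is fixed after
     * the 7.0 release. */
    #ifdef CONFIG_COCOA
    #define GLOBAL_STATE_CODE()
    #else
    #define GLOBAL_STATE_CODE()                 \
        do {                                    \
            assert(qemu_in_main_thread());      \
        } while (0)
    #endif /* CONFIG_COCOA */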


Regards,
Akihiko Odaki


+#define GLOBAL_STATE_CODE()
+#else
  #define GLOBAL_STATE_CODE() \
  do {\
  assert(qemu_in_main_thread());  \
  } while (0)
+#endif /* CONFIG_DARWIN */
  
  /* Mark and check that the function is part of the I/O API. */

  #define IO_CODE()   \





Re: [PATCH v3 4/5] cpu: Free cpu->cpu_ases in cpu_address_space_destroy()

2022-03-21 Thread Philippe Mathieu-Daudé

On 21/3/22 23:03, Philippe Mathieu-Daudé wrote:

On 21/3/22 15:14, Mark Kanda wrote:

Create cpu_address_space_destroy() to free a CPU's cpu_ases list.

vCPU hotunplug related leak reported by Valgrind:

==132362== 216 bytes in 1 blocks are definitely lost in loss record 
7,119 of 8,549

==132362==    at 0x4C3ADBB: calloc (vg_replace_malloc.c:1117)
==132362==    by 0x69EE4CD: g_malloc0 (in 
/usr/lib64/libglib-2.0.so.0.5600.4)

==132362==    by 0x7E34AF: cpu_address_space_init (physmem.c:751)
==132362==    by 0x45053E: qemu_init_vcpu (cpus.c:635)
==132362==    by 0x76B4A7: x86_cpu_realizefn (cpu.c:6520)
==132362==    by 0x9343ED: device_set_realized (qdev.c:531)
==132362==    by 0x93E26F: property_set_bool (object.c:2273)
==132362==    by 0x93C23E: object_property_set (object.c:1408)
==132362==    by 0x9406DC: object_property_set_qobject (qom-qobject.c:28)
==132362==    by 0x93C5A9: object_property_set_bool (object.c:1477)
==132362==    by 0x933C81: qdev_realize (qdev.c:333)
==132362==    by 0x455E9A: qdev_device_add_from_qdict 
(qdev-monitor.c:713)


Signed-off-by: Mark Kanda 
---
  cpu.c | 1 +
  include/exec/cpu-common.h | 7 +++
  softmmu/physmem.c | 5 +
  3 files changed, 13 insertions(+)


Tested-by: Philippe Mathieu-Daudé 


Err I meant:
Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v3 2/5] softmmu/cpus: Free cpu->thread in generic_destroy_vcpu_thread()

2022-03-21 Thread Philippe Mathieu-Daudé

On 21/3/22 15:14, Mark Kanda wrote:

Free cpu->thread in a new AccelOpsClass::destroy_vcpu_thread() handler
generic_destroy_vcpu_thread().

vCPU hotunplug related leak reported by Valgrind:

==102631== 8 bytes in 1 blocks are definitely lost in loss record 1,037 of 8,555
==102631==    at 0x4C3ADBB: calloc (vg_replace_malloc.c:1117)
==102631==    by 0x69EE4CD: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.5600.4)
==102631==    by 0x92443A: kvm_start_vcpu_thread (kvm-accel-ops.c:68)
==102631==    by 0x4505C2: qemu_init_vcpu (cpus.c:643)
==102631==    by 0x76B4D1: x86_cpu_realizefn (cpu.c:6520)
==102631==    by 0x9344A7: device_set_realized (qdev.c:531)
==102631==    by 0x93E329: property_set_bool (object.c:2273)
==102631==    by 0x93C2F8: object_property_set (object.c:1408)
==102631==    by 0x940796: object_property_set_qobject (qom-qobject.c:28)
==102631==    by 0x93C663: object_property_set_bool (object.c:1477)
==102631==    by 0x933D3B: qdev_realize (qdev.c:333)
==102631==    by 0x455EC4: qdev_device_add_from_qdict (qdev-monitor.c:713)

Signed-off-by: Mark Kanda 
---
  accel/accel-common.c  | 6 ++
  accel/hvf/hvf-accel-ops.c | 1 +
  accel/kvm/kvm-accel-ops.c | 1 +
  accel/qtest/qtest.c   | 1 +
  accel/tcg/tcg-accel-ops.c | 1 +
  accel/xen/xen-all.c   | 1 +
  include/sysemu/accel-ops.h| 2 ++
  target/i386/hax/hax-accel-ops.c   | 1 +
  target/i386/nvmm/nvmm-accel-ops.c | 1 +
  target/i386/whpx/whpx-accel-ops.c | 1 +
  10 files changed, 16 insertions(+)
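
For context, a minimal sketch of such a generic handler, assuming it only
frees the allocation reported by Valgrind (hypothetical; the patch body is
not quoted in this reply):

    /* Hypothetical sketch only. */
    static void generic_destroy_vcpu_thread(CPUState *cpu)
    {
        g_free(cpu->thread);
        cpu->thread = NULL;
    }

Each accelerator's AccelOpsClass would then point .destroy_vcpu_thread at
it, which is presumably why a single line is added to every *-accel-ops.c
file in the diffstat.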


Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v1 01/13] hw/virtio: move virtio-pci.h into shared include space

2022-03-21 Thread Philippe Mathieu-Daudé

On 21/3/22 16:30, Alex Bennée wrote:

This allows other device classes that will be exposed via PCI to do so
from the appropriate hw/ directory. I resisted the
temptation to re-order headers to be more aesthetically pleasing.

Signed-off-by: Alex Bennée 
Message-Id: <20200925125147.26943-4-alex.ben...@linaro.org>

---
v2
   - add i2c/rng device to changes
---
  {hw => include/hw}/virtio/virtio-pci.h | 0
  hw/virtio/vhost-scsi-pci.c | 2 +-
  hw/virtio/vhost-user-blk-pci.c | 2 +-
  hw/virtio/vhost-user-fs-pci.c  | 2 +-
  hw/virtio/vhost-user-i2c-pci.c | 2 +-
  hw/virtio/vhost-user-input-pci.c   | 2 +-
  hw/virtio/vhost-user-rng-pci.c | 2 +-
  hw/virtio/vhost-user-scsi-pci.c| 2 +-
  hw/virtio/vhost-user-vsock-pci.c   | 2 +-
  hw/virtio/vhost-vsock-pci.c| 2 +-
  hw/virtio/virtio-9p-pci.c  | 2 +-
  hw/virtio/virtio-balloon-pci.c | 2 +-
  hw/virtio/virtio-blk-pci.c | 2 +-
  hw/virtio/virtio-input-host-pci.c  | 2 +-
  hw/virtio/virtio-input-pci.c   | 2 +-
  hw/virtio/virtio-iommu-pci.c   | 2 +-
  hw/virtio/virtio-net-pci.c | 2 +-
  hw/virtio/virtio-pci.c | 2 +-
  hw/virtio/virtio-rng-pci.c | 2 +-
  hw/virtio/virtio-scsi-pci.c| 2 +-
  hw/virtio/virtio-serial-pci.c  | 2 +-
  21 files changed, 20 insertions(+), 20 deletions(-)
  rename {hw => include/hw}/virtio/virtio-pci.h (100%)


Reviewed-by: Philippe Mathieu-Daudé 



[PATCH v4 00/11] s390x/tcg: Implement Vector-Enhancements Facility 2

2022-03-21 Thread David Miller
Implement Vector-Enhancements Facility 2 for s390x

resolves: https://gitlab.com/qemu-project/qemu/-/issues/738

implements:
VECTOR LOAD ELEMENTS REVERSED   (VLER)
VECTOR LOAD BYTE REVERSED ELEMENTS  (VLBR)
VECTOR LOAD BYTE REVERSED ELEMENT   (VLEBRH, VLEBRF, VLEBRG)
VECTOR LOAD BYTE REVERSED ELEMENT AND ZERO  (VLLEBRZ)
VECTOR LOAD BYTE REVERSED ELEMENT AND REPLICATE (VLBRREP)
VECTOR STORE ELEMENTS REVERSED  (VSTER)
VECTOR STORE BYTE REVERSED ELEMENTS (VSTBR)
VECTOR STORE BYTE REVERSED ELEMENTS (VSTEBRH, VSTEBRF, VSTEBRG)
VECTOR SHIFT LEFT DOUBLE BY BIT (VSLD)
VECTOR SHIFT RIGHT DOUBLE BY BIT(VSRD)
VECTOR STRING SEARCH(VSTRS)

modifies:
VECTOR FP CONVERT FROM FIXED(VCFPS)
VECTOR FP CONVERT FROM LOGICAL  (VCFPL)
VECTOR FP CONVERT TO FIXED  (VCSFP)
VECTOR FP CONVERT TO LOGICAL(VCLFP)
VECTOR SHIFT LEFT   (VSL)
VECTOR SHIFT RIGHT ARITHMETIC   (VSRA)
VECTOR SHIFT RIGHT LOGICAL  (VSRL)


David Miller (9):
  tcg: Implement tcg_gen_{h,w}swap_{i32,i64}
  target/s390x: vxeh2: vector convert short/32b
  target/s390x: vxeh2: vector string search
  target/s390x: vxeh2: Update for changes to vector shifts
  target/s390x: vxeh2: vector shift double by bit
  target/s390x: vxeh2: vector {load, store} elements reversed
  target/s390x: vxeh2: vector {load, store} byte reversed elements
  target/s390x: vxeh2: vector {load, store} byte reversed element
  target/s390x: add S390_FEAT_VECTOR_ENH2 to qemu CPU model
  tests/tcg/s390x: Tests for Vector Enhancements Facility 2
  target/s390x: Fix writeback to v1 in helper_vstl

Richard Henderson (2):
  tcg: Implement tcg_gen_{h,w}swap_{i32,i64}
  target/s390x: Fix writeback to v1 in helper_vstl

 include/tcg/tcg-op.h |   6 +
 target/s390x/gen-features.c  |   2 +
 target/s390x/helper.h|  13 +
 target/s390x/tcg/insn-data.def   |  40 ++-
 target/s390x/tcg/translate.c |   3 +-
 target/s390x/tcg/translate_vx.c.inc  | 463 ---
 target/s390x/tcg/vec_fpu_helper.c|  31 ++
 target/s390x/tcg/vec_helper.c|   2 -
 target/s390x/tcg/vec_int_helper.c|  55 
 target/s390x/tcg/vec_string_helper.c |  99 ++
 tcg/tcg-op.c |  30 ++
 tests/tcg/s390x/Makefile.target  |   8 +
 tests/tcg/s390x/vxeh2_vcvt.c |  97 ++
 tests/tcg/s390x/vxeh2_vlstr.c| 146 +
 tests/tcg/s390x/vxeh2_vs.c   |  91 ++
 15 files changed, 1031 insertions(+), 55 deletions(-)
 create mode 100644 tests/tcg/s390x/vxeh2_vcvt.c
 create mode 100644 tests/tcg/s390x/vxeh2_vlstr.c
 create mode 100644 tests/tcg/s390x/vxeh2_vs.c

-- 
2.34.1




[PATCH v4 05/11] target/s390x: vxeh2: vector shift double by bit

2022-03-21 Thread David Miller
Signed-off-by: David Miller 
Signed-off-by: Richard Henderson 
---
 target/s390x/tcg/insn-data.def  |  6 +++-
 target/s390x/tcg/translate_vx.c.inc | 55 +
 2 files changed, 53 insertions(+), 8 deletions(-)

diff --git a/target/s390x/tcg/insn-data.def b/target/s390x/tcg/insn-data.def
index f487a64abf..98a31a557d 100644
--- a/target/s390x/tcg/insn-data.def
+++ b/target/s390x/tcg/insn-data.def
@@ -1207,12 +1207,16 @@
 E(0xe774, VSL, VRR_c, V,   0, 0, 0, 0, vsl, 0, 0, IF_VEC)
 /* VECTOR SHIFT LEFT BY BYTE */
 E(0xe775, VSLB,VRR_c, V,   0, 0, 0, 0, vsl, 0, 1, IF_VEC)
+/* VECTOR SHIFT LEFT DOUBLE BY BIT */
+E(0xe786, VSLD,VRI_d, VE2, 0, 0, 0, 0, vsld, 0, 0, IF_VEC)
 /* VECTOR SHIFT LEFT DOUBLE BY BYTE */
-F(0xe777, VSLDB,   VRI_d, V,   0, 0, 0, 0, vsldb, 0, IF_VEC)
+E(0xe777, VSLDB,   VRI_d, V,   0, 0, 0, 0, vsld, 0, 1, IF_VEC)
 /* VECTOR SHIFT RIGHT ARITHMETIC */
 E(0xe77e, VSRA,VRR_c, V,   0, 0, 0, 0, vsra, 0, 0, IF_VEC)
 /* VECTOR SHIFT RIGHT ARITHMETIC BY BYTE */
 E(0xe77f, VSRAB,   VRR_c, V,   0, 0, 0, 0, vsra, 0, 1, IF_VEC)
+/* VECTOR SHIFT RIGHT DOUBLE BY BIT */
+F(0xe787, VSRD,VRI_d, VE2, 0, 0, 0, 0, vsrd, 0, IF_VEC)
 /* VECTOR SHIFT RIGHT LOGICAL */
 E(0xe77c, VSRL,VRR_c, V,   0, 0, 0, 0, vsrl, 0, 0, IF_VEC)
 /* VECTOR SHIFT RIGHT LOGICAL BY BYTE */
diff --git a/target/s390x/tcg/translate_vx.c.inc b/target/s390x/tcg/translate_vx.c.inc
index fd53ddafef..bb997de794 100644
--- a/target/s390x/tcg/translate_vx.c.inc
+++ b/target/s390x/tcg/translate_vx.c.inc
@@ -2056,14 +2056,23 @@ static DisasJumpType op_vsrl(DisasContext *s, DisasOps *o)
 gen_helper_gvec_vsrl_ve2);
 }
 
-static DisasJumpType op_vsldb(DisasContext *s, DisasOps *o)
+static DisasJumpType op_vsld(DisasContext *s, DisasOps *o)
 {
-const uint8_t i4 = get_field(s, i4) & 0xf;
-const int left_shift = (i4 & 7) * 8;
-const int right_shift = 64 - left_shift;
-TCGv_i64 t0 = tcg_temp_new_i64();
-TCGv_i64 t1 = tcg_temp_new_i64();
-TCGv_i64 t2 = tcg_temp_new_i64();
+const bool byte = s->insn->data;
+const uint8_t mask = byte ? 15 : 7;
+const uint8_t mul  = byte ?  8 : 1;
+const uint8_t i4   = get_field(s, i4);
+const int right_shift = 64 - (i4 & 7) * mul;
+TCGv_i64 t0, t1, t2;
+
+if (i4 & ~mask) {
+gen_program_exception(s, PGM_SPECIFICATION);
+return DISAS_NORETURN;
+}
+
+t0 = tcg_temp_new_i64();
+t1 = tcg_temp_new_i64();
+t2 = tcg_temp_new_i64();
 
 if ((i4 & 8) == 0) {
 read_vec_element_i64(t0, get_field(s, v2), 0, ES_64);
@@ -2074,8 +2083,40 @@ static DisasJumpType op_vsldb(DisasContext *s, DisasOps *o)
 read_vec_element_i64(t1, get_field(s, v3), 0, ES_64);
 read_vec_element_i64(t2, get_field(s, v3), 1, ES_64);
 }
+
 tcg_gen_extract2_i64(t0, t1, t0, right_shift);
 tcg_gen_extract2_i64(t1, t2, t1, right_shift);
+
+write_vec_element_i64(t0, get_field(s, v1), 0, ES_64);
+write_vec_element_i64(t1, get_field(s, v1), 1, ES_64);
+
+tcg_temp_free(t0);
+tcg_temp_free(t1);
+tcg_temp_free(t2);
+return DISAS_NEXT;
+}
+
+static DisasJumpType op_vsrd(DisasContext *s, DisasOps *o)
+{
+const uint8_t i4 = get_field(s, i4);
+TCGv_i64 t0, t1, t2;
+
+if (i4 & ~7) {
+gen_program_exception(s, PGM_SPECIFICATION);
+return DISAS_NORETURN;
+}
+
+t0 = tcg_temp_new_i64();
+t1 = tcg_temp_new_i64();
+t2 = tcg_temp_new_i64();
+
+read_vec_element_i64(t0, get_field(s, v2), 1, ES_64);
+read_vec_element_i64(t1, get_field(s, v3), 0, ES_64);
+read_vec_element_i64(t2, get_field(s, v3), 1, ES_64);
+
+tcg_gen_extract2_i64(t0, t1, t0, i4);
+tcg_gen_extract2_i64(t1, t2, t1, i4);
+
 write_vec_element_i64(t0, get_field(s, v1), 0, ES_64);
 write_vec_element_i64(t1, get_field(s, v1), 1, ES_64);
 
-- 
2.34.1




[PATCH v4 01/11] tcg: Implement tcg_gen_{h,w}swap_{i32,i64}

2022-03-21 Thread David Miller
From: Richard Henderson 

Swap half-words (16-bit) and words (32-bit) within a larger value.
Mirrors functions of the same names within include/qemu/bitops.h.
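
As a concrete illustration of the intended semantics (the values below are
picked purely for exposition, mirroring the hswap64/wswap64 helpers the
patch refers to):

    uint64_t v = 0x0011223344556677ULL;
    /* hswap reverses the order of the four 16-bit halfwords. */
    assert(hswap64(v) == 0x6677445522330011ULL);
    /* wswap reverses the order of the two 32-bit words (a rotate by 32). */
    assert(wswap64(v) == 0x4455667700112233ULL);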

Signed-off-by: Richard Henderson 
Reviewed-by: David Miller 
Reviewed-by: David Hildenbrand 
---
 include/tcg/tcg-op.h |  6 ++
 tcg/tcg-op.c | 30 ++
 2 files changed, 36 insertions(+)

diff --git a/include/tcg/tcg-op.h b/include/tcg/tcg-op.h
index caa0a63612..b09b8b4a05 100644
--- a/include/tcg/tcg-op.h
+++ b/include/tcg/tcg-op.h
@@ -332,6 +332,7 @@ void tcg_gen_ext8u_i32(TCGv_i32 ret, TCGv_i32 arg);
 void tcg_gen_ext16u_i32(TCGv_i32 ret, TCGv_i32 arg);
 void tcg_gen_bswap16_i32(TCGv_i32 ret, TCGv_i32 arg, int flags);
 void tcg_gen_bswap32_i32(TCGv_i32 ret, TCGv_i32 arg);
+void tcg_gen_hswap_i32(TCGv_i32 ret, TCGv_i32 arg);
 void tcg_gen_smin_i32(TCGv_i32, TCGv_i32 arg1, TCGv_i32 arg2);
 void tcg_gen_smax_i32(TCGv_i32, TCGv_i32 arg1, TCGv_i32 arg2);
 void tcg_gen_umin_i32(TCGv_i32, TCGv_i32 arg1, TCGv_i32 arg2);
@@ -531,6 +532,8 @@ void tcg_gen_ext32u_i64(TCGv_i64 ret, TCGv_i64 arg);
 void tcg_gen_bswap16_i64(TCGv_i64 ret, TCGv_i64 arg, int flags);
 void tcg_gen_bswap32_i64(TCGv_i64 ret, TCGv_i64 arg, int flags);
 void tcg_gen_bswap64_i64(TCGv_i64 ret, TCGv_i64 arg);
+void tcg_gen_hswap_i64(TCGv_i64 ret, TCGv_i64 arg);
+void tcg_gen_wswap_i64(TCGv_i64 ret, TCGv_i64 arg);
 void tcg_gen_smin_i64(TCGv_i64, TCGv_i64 arg1, TCGv_i64 arg2);
 void tcg_gen_smax_i64(TCGv_i64, TCGv_i64 arg1, TCGv_i64 arg2);
 void tcg_gen_umin_i64(TCGv_i64, TCGv_i64 arg1, TCGv_i64 arg2);
@@ -1077,6 +1080,8 @@ void tcg_gen_stl_vec(TCGv_vec r, TCGv_ptr base, TCGArg offset, TCGType t);
 #define tcg_gen_bswap32_tl tcg_gen_bswap32_i64
 #define tcg_gen_bswap64_tl tcg_gen_bswap64_i64
 #define tcg_gen_bswap_tl tcg_gen_bswap64_i64
+#define tcg_gen_hswap_tl tcg_gen_hswap_i64
+#define tcg_gen_wswap_tl tcg_gen_wswap_i64
 #define tcg_gen_concat_tl_i64 tcg_gen_concat32_i64
 #define tcg_gen_extr_i64_tl tcg_gen_extr32_i64
 #define tcg_gen_andc_tl tcg_gen_andc_i64
@@ -1192,6 +1197,7 @@ void tcg_gen_stl_vec(TCGv_vec r, TCGv_ptr base, TCGArg offset, TCGType t);
 #define tcg_gen_bswap16_tl tcg_gen_bswap16_i32
 #define tcg_gen_bswap32_tl(D, S, F) tcg_gen_bswap32_i32(D, S)
 #define tcg_gen_bswap_tl tcg_gen_bswap32_i32
+#define tcg_gen_hswap_tl tcg_gen_hswap_i32
 #define tcg_gen_concat_tl_i64 tcg_gen_concat_i32_i64
 #define tcg_gen_extr_i64_tl tcg_gen_extr_i64_i32
 #define tcg_gen_andc_tl tcg_gen_andc_i32
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 65e1c94c2d..ae336ff6c2 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -1056,6 +1056,12 @@ void tcg_gen_bswap32_i32(TCGv_i32 ret, TCGv_i32 arg)
 }
 }
 
+void tcg_gen_hswap_i32(TCGv_i32 ret, TCGv_i32 arg)
+{
+/* Swapping 2 16-bit elements is a rotate. */
+tcg_gen_rotli_i32(ret, arg, 16);
+}
+
 void tcg_gen_smin_i32(TCGv_i32 ret, TCGv_i32 a, TCGv_i32 b)
 {
 tcg_gen_movcond_i32(TCG_COND_LT, ret, a, b, a, b);
@@ -1792,6 +1798,30 @@ void tcg_gen_bswap64_i64(TCGv_i64 ret, TCGv_i64 arg)
 }
 }
 
+void tcg_gen_hswap_i64(TCGv_i64 ret, TCGv_i64 arg)
+{
+uint64_t m = 0x0000ffff0000ffffull;
+TCGv_i64 t0 = tcg_temp_new_i64();
+TCGv_i64 t1 = tcg_temp_new_i64();
+
+/* See include/qemu/bitops.h, hswap64. */
+tcg_gen_rotli_i64(t1, arg, 32);
+tcg_gen_andi_i64(t0, t1, m);
+tcg_gen_shli_i64(t0, t0, 16);
+tcg_gen_shri_i64(t1, t1, 16);
+tcg_gen_andi_i64(t1, t1, m);
+tcg_gen_or_i64(ret, t0, t1);
+
+tcg_temp_free_i64(t0);
+tcg_temp_free_i64(t1);
+}
+
+void tcg_gen_wswap_i64(TCGv_i64 ret, TCGv_i64 arg)
+{
+/* Swapping 2 32-bit elements is a rotate. */
+tcg_gen_rotli_i64(ret, arg, 32);
+}
+
 void tcg_gen_not_i64(TCGv_i64 ret, TCGv_i64 arg)
 {
 if (TCG_TARGET_REG_BITS == 32) {
-- 
2.34.1




[PATCH v2 1/4] python/machine: permanently switch to AQMP

2022-03-21 Thread John Snow
Remove the QEMU_PYTHON_LEGACY_QMP environment variable, making the
switch from sync qmp to async qmp permanent. Update exceptions and
import paths as necessary.

Signed-off-by: John Snow 
Reviewed-by: Vladimir Sementsov-Ogievskiy 
Reviewed-by: Beraldo Leal 
Acked-by: Hanna Reitz 
---
 python/qemu/machine/machine.py | 18 +++---
 python/qemu/machine/qtest.py   |  2 +-
 2 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/python/qemu/machine/machine.py b/python/qemu/machine/machine.py
index a5972fab4d..41be025ac7 100644
--- a/python/qemu/machine/machine.py
+++ b/python/qemu/machine/machine.py
@@ -40,21 +40,16 @@
 TypeVar,
 )
 
-from qemu.qmp import (  # pylint: disable=import-error
+from qemu.aqmp import SocketAddrT
+from qemu.aqmp.legacy import (
+QEMUMonitorProtocol,
 QMPMessage,
 QMPReturnValue,
-SocketAddrT,
 )
 
 from . import console_socket
 
 
-if os.environ.get('QEMU_PYTHON_LEGACY_QMP'):
-from qemu.qmp import QEMUMonitorProtocol
-else:
-from qemu.aqmp.legacy import QEMUMonitorProtocol
-
-
 LOG = logging.getLogger(__name__)
 
 
@@ -743,8 +738,9 @@ def events_wait(self,
 :param timeout: Optional timeout, in seconds.
 See QEMUMonitorProtocol.pull_event.
 
-:raise QMPTimeoutError: If timeout was non-zero and no matching events
-were found.
+:raise asyncio.TimeoutError:
+If timeout was non-zero and no matching events were found.
+
 :return: A QMP event matching the filter criteria.
  If timeout was 0 and no event matched, None.
 """
@@ -767,7 +763,7 @@ def _match(event: QMPMessage) -> bool:
 event = self._qmp.pull_event(wait=timeout)
 if event is None:
 # NB: None is only returned when timeout is false-ish.
-# Timeouts raise QMPTimeoutError instead!
+# Timeouts raise asyncio.TimeoutError instead!
 break
 if _match(event):
 return event
diff --git a/python/qemu/machine/qtest.py b/python/qemu/machine/qtest.py
index f2f9aaa5e5..13e0aaff84 100644
--- a/python/qemu/machine/qtest.py
+++ b/python/qemu/machine/qtest.py
@@ -26,7 +26,7 @@
 TextIO,
 )
 
-from qemu.qmp import SocketAddrT  # pylint: disable=import-error
+from qemu.aqmp import SocketAddrT
 
 from .machine import QEMUMachine
 
-- 
2.34.1




Re: Memory leak in via_isa_realize()

2022-03-21 Thread Peter Maydell
On Mon, 21 Mar 2022 at 18:55, Cédric Le Goater  wrote:
> I introduced quite a few of these calls,
>
>hw/ppc/pnv_lpc.c:irqs = qemu_allocate_irqs(handler, lpc, ISA_NUM_IRQS);
>hw/ppc/pnv_psi.c:psi->qirqs = qemu_allocate_irqs(ics_set_irq, ics, 
> ics->nr_irqs);
>hw/ppc/pnv_psi.c:psi->qirqs = qemu_allocate_irqs(xive_source_set_irq, 
> xsrc, xsrc->nr_irqs);
>hw/ppc/ppc.c:env->irq_inputs = (void 
> **)qemu_allocate_irqs(_set_irq, cpu,
>hw/ppc/ppc.c:env->irq_inputs = (void 
> **)qemu_allocate_irqs(_set_irq, cpu,
>hw/ppc/ppc.c:env->irq_inputs = (void 
> **)qemu_allocate_irqs(_set_irq, cpu,
>hw/ppc/ppc.c:env->irq_inputs = (void 
> **)qemu_allocate_irqs(_set_irq, cpu,
>hw/ppc/ppc.c:env->irq_inputs = (void 
> **)qemu_allocate_irqs(_set_irq,
>hw/ppc/ppc.c:env->irq_inputs = (void 
> **)qemu_allocate_irqs(_set_irq,
>hw/ppc/spapr_irq.c:spapr->qirqs = qemu_allocate_irqs(spapr_set_irq, 
> spapr,
>
> and maybe I can remove some. What's the best practice?

The 'best practice' is that if you have an irq line it should be
because it is the input (gpio or sysbus irq) or output (gpio) of
some device, ie something that is a subtype of TYPE_DEVICE.

For the ones in hw/ppc/ppc.c: we used to need to write code like that
because CPU objects weren't TYPE_DEVICE; now they are, and so you
can give them inbound gpio lines using qdev_init_gpio_in(), typically
in the cpu initfn. (See target/riscv for an example, or grep for
that function name in target/ for others.) Then the board code
needs to wire up to those IRQs in the usual way for GPIO lines,
ie using qdev_get_gpio_in(cpudev, ...), instead of directly
reaching into the CPU struct env->irq_inputs. (There's probably
a way to structure this change to avoid having to change the CPU
and all the board code at once, but I haven't thought it through.)
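
A minimal sketch of that pattern (every name here is illustrative, not
taken from an existing target):

    /* In the CPU's instance_init: expose the interrupt inputs as qdev GPIOs. */
    static void foo_cpu_init(Object *obj)
    {
        FooCPU *cpu = FOO_CPU(obj);
        qdev_init_gpio_in(DEVICE(cpu), foo_cpu_set_irq, FOO_CPU_NUM_IRQS);
    }

    /* In the board/SoC code: wire an interrupt controller output to the
     * CPU's input instead of reaching into env->irq_inputs directly. */
    qdev_connect_gpio_out(DEVICE(intc), 0,
                          qdev_get_gpio_in(DEVICE(cpu), FOO_CPU_IRQ_INT));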

For the spapr one: this is in machine model code, and currently machines
aren't subtypes of TYPE_DEVICE. I'd leave this one alone for now;
we can come back and think about it later.

For pnv_psi.c: these appear to be because the PnvPsi device is
allocating irq lines which really should belong to the ICSState
object (and as a result the ICSState code is having to expose
ics->nr_irqs and the ics_set_irq function when they could be
internal to the ICSState code). The ICSState's init function
should be creating these as qdev gpio lines.

pnv_lpc.c seems to be ISA related. hw/isa/lpc_ich9.c is an
example of setting up IRQs for isa_bus_irqs() without using
qemu_allocate_irqs(), but there may be some more generalised
ISA cleanup possible here.

thanks
-- PMM


