Re: [PULL 29/31] qemu-option: clean up id vs. list->merge_lists

2021-01-25 Thread Paolo Bonzini
Too late but I will point it out in the commit that cleans up the iteration.

Paolo

On Mon, Jan 25, 2021, 08:42 Markus Armbruster  wrote:

> Paolo Bonzini  writes:
>
> > Looking at all merge-lists QemuOptsList, here is how they access their
> > QemuOpts:
> >
> > reopen_opts in qemu-io-cmds.c ("qemu-img reopen -o")
> >   qemu_opts_find(&reopen_opts, NULL)
> >
> > empty_opts in qemu-io.c ("qemu-io open -o")
> >   qemu_opts_find(&empty_opts, NULL)
> >
> > qemu_rtc_opts ("-rtc")
> >   qemu_find_opts_singleton("rtc")
> >
> > qemu_machine_opts ("-M")
> >   qemu_find_opts_singleton("machine")
> >
> > qemu_action_opts ("-name")
>
> Pasto: it's "-action".
>
> >   qemu_opts_foreach->process_runstate_actions
> >
> > qemu_boot_opts ("-boot")
> >   in hw/nvram/fw_cfg.c and hw/s390x/ipl.c:
> > QTAILQ_FIRST(&qemu_find_opts("bootopts")->head)
> >   in softmmu/vl.c:
> > qemu_opts_find(qemu_find_opts("boot-opts"), NULL)
> >
> > qemu_name_opts ("-name")
> >   qemu_opts_foreach->parse_name
> >   parse_name does not use id
> >
> > qemu_mem_opts ("-m")
> >   qemu_find_opts_singleton("memory")
> >
> > qemu_icount_opts ("-icount")
> >   qemu_opts_foreach->do_configure_icount
> >   do_configure_icount->icount_configure
> >   icount_configure does not use id
> >
> > qemu_smp_opts ("-smp")
> >   qemu_opts_find(qemu_find_opts("smp-opts"), NULL)
> >
> > qemu_spice_opts ("-spice")
> >   QTAILQ_FIRST(&qemu_spice_opts.head)
> >
> > i.e. they don't need an id.  Sometimes its presence is ignored
> > (e.g. when using qemu_opts_foreach), sometimes all the options
> > with the id are skipped, sometimes only the first option on the
>
> Let's insert
>
> (when using qemu_find_opts_singleton() or qemu_opts_find(list, NULL))
>
> right after skipped, and
>
> > command line is considered.  -boot does two different things
>
> (when using QTAILQ_FIRST)
>
> right after considered.
>
> > depending on who's looking at the options.
> >
> > With this patch we just forbid id on merge-lists QemuOptsLists; if the
> > command line still works, it has the same semantics as before.
> >
> > qemu_opts_create's fail_if_exists parameter is now unnecessary:
> >
> > - it is unused if id is NULL
> >
> > - opts_parse only passes false if reached from qemu_opts_set_defaults,
> > in which case this patch enforces that id must be NULL
> >
> > - other callers that can pass a non-NULL id always set it to true
> >
> > Assert that it is true in the only case where "fail_if_exists" matters,
> > i.e. "id && !lists->merge_lists".  This means that if an id is present,
> > duplicates are always forbidden, which was already the status quo.
> >
> > Discounting the case that aborts as it's not user-controlled (it's
> > "just" a matter of inspecting qemu_opts_create callers), the paths
> > through qemu_opts_create can be summarized as:
> >
> > - merge_lists = true: singleton opts with NULL id; non-NULL id fails
> >
> > - merge_lists = false: always return new opts; non-NULL id fails if dup
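For illustration, here is a minimal sketch of the two paths summarized above
(hypothetical helper names and error messages, not the actual
util/qemu-option.c code; it assumes the QemuOpts API from
include/qemu/option.h and error_setg() from qapi/error.h):

/* Illustrative sketch only -- opts_alloc() is a made-up helper. */
static QemuOpts *opts_create_sketch(QemuOptsList *list, const char *id,
                                    Error **errp)
{
    if (list->merge_lists) {
        if (id) {
            /* merge-lists groups are singletons: an explicit id is rejected */
            error_setg(errp, "id is not allowed for this option group");
            return NULL;
        }
        QemuOpts *opts = qemu_opts_find(list, NULL);
        if (opts) {
            return opts;               /* reuse the single existing QemuOpts */
        }
        return opts_alloc(list, NULL); /* hypothetical allocation helper */
    }

    /* non-merge-lists: always allocate a new QemuOpts, but a duplicate id
     * always fails, matching the status quo described above. */
    if (id && qemu_opts_find(list, id)) {
        error_setg(errp, "Duplicate id '%s'", id);
        return NULL;
    }
    return opts_alloc(list, id);
}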
> >
> > Reviewed-by: Kevin Wolf 
> > Signed-off-by: Paolo Bonzini 
>
> Reviewed-by: Markus Armbruster 
>
>


[Bug 1913012] Re: Installed firmware descriptor files contain (invalid) relative paths

2021-01-25 Thread Alain Kalker
** Description changed:

  After building and installing QEMU, the resulting installed firmware
  descriptor files contain relative paths for their `mapping.filename`
  properties. These relative paths will not be accepted as valid by tools
  like `virt-install`, resulting in the inability to configure new VMs
  which reference these firmware descriptors.
  
  # QEMU version
  $ qemu-system-x86_64 -version
  QEMU emulator version 5.2.0
  
  (I've also reproduced the issue with QEMU built from Git master @
  v5.2.0-1300-g0e32462630, see next comment.)
  
  # OS version
  Void Linux x86_64 (glibc)
  
  Steps to reproduce (with results on my system):
  
  # Verify the symptom
  
  $ virt-install --boot firmware=efi --disk none --memory 2048
  Using default --name vm4
  WARNING  No operating system detected, VM performance may suffer. Specify an 
OS with --os-variant for optimal results.
  
  Starting install...
  ERROR    Failed to open file 'share/qemu/edk2-i386-vars.fd': No such file or
directory
  Domain installation does not appear to have been successful.
  If it was, you can restart your domain by running:
    virsh --connect qemu:///session start vm4
  otherwise, please restart your installation.
  
  # Verify most likely cause
  
  $ grep filename /usr/share/qemu/firmware/*i386*.json
  /usr/share/qemu/firmware/50-edk2-i386-secure.json:"filename": 
"share/qemu/edk2-i386-secure-code.fd",
  /usr/share/qemu/firmware/50-edk2-i386-secure.json:"filename": 
"share/qemu/edk2-i386-vars.fd",
  /usr/share/qemu/firmware/60-edk2-i386.json:"filename": 
"share/qemu/edk2-i386-code.fd",
  /usr/share/qemu/firmware/60-edk2-i386.json:"filename": 
"share/qemu/edk2-i386-vars.fd",
+ 
+ # Workaround
+ 
+ Manually editing the firmware descriptor files in
+ `/usr/share/qemu/firmware` to contain full absolute paths to the
+ firmware blobs resolves the issue:
+ 
+ $ sudo sed -i.bak -e 's,"share/qemu/,"/usr/share/qemu/,' 
/usr/share/qemu/firmware/*.json
+ $ virt-install --boot firmware=efi --disk none --memory 2048
+ [...VM boots normally...]

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1913012

Title:
  Installed firmware descriptor files contain (invalid) relative paths

Status in QEMU:
  New

Bug description:
  After building and installing QEMU, the resulting installed firmware
  descriptor files contain relative paths for their `mapping.filename`
  properties. These relative paths will not be accepted as valid by
  tools like `virt-install`, resulting in the inability to configure new
  VMs which reference these firmware descriptors.

  # QEMU version
  $ qemu-system-x86_64 -version
  QEMU emulator version 5.2.0

  (I've also reproduced the issue with QEMU built from Git master @
  v5.2.0-1300-g0e32462630, see next comment.)

  # OS version
  Void Linux x86_64 (glibc)

  Steps to reproduce (with results on my system):

  # Verify the symptom

  $ virt-install --boot firmware=efi --disk none --memory 2048
  Using default --name vm4
  WARNING  No operating system detected, VM performance may suffer. Specify an 
OS with --os-variant for optimal results.

  Starting install...
  ERROR    Failed to open file 'share/qemu/edk2-i386-vars.fd': No such file or
directory
  Domain installation does not appear to have been successful.
  If it was, you can restart your domain by running:
    virsh --connect qemu:///session start vm4
  otherwise, please restart your installation.

  # Verify most likely cause

  $ grep filename /usr/share/qemu/firmware/*i386*.json
  /usr/share/qemu/firmware/50-edk2-i386-secure.json:"filename": 
"share/qemu/edk2-i386-secure-code.fd",
  /usr/share/qemu/firmware/50-edk2-i386-secure.json:"filename": 
"share/qemu/edk2-i386-vars.fd",
  /usr/share/qemu/firmware/60-edk2-i386.json:"filename": 
"share/qemu/edk2-i386-code.fd",
  /usr/share/qemu/firmware/60-edk2-i386.json:"filename": 
"share/qemu/edk2-i386-vars.fd",

  # Workaround

  Manually editing the firmware descriptor files in
  `/usr/share/qemu/firmware` to contain full absolute paths to the
  firmware blobs resolves the issue:

  $ sudo sed -i.bak -e 's,"share/qemu/,"/usr/share/qemu/,' 
/usr/share/qemu/firmware/*.json
  $ virt-install --boot firmware=efi --disk none --memory 2048
  [...VM boots normally...]

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1913012/+subscriptions



Re: [PATCH v7 05/11] osdep: build with non-working system() function

2021-01-25 Thread Paolo Bonzini

On 23/01/21 14:45, Peter Maydell wrote:

On Sat, 23 Jan 2021 at 03:18, Joelle van Dyne  wrote:
On Fri, Jan 22, 2021 at 3:17 PM Peter Maydell  wrote:

Can we do the "does system() exist?" check in meson.build ?



config_host_data.set('HAVE_SYSTEM_FUNCTION', cc.has_function('system'))



Unfortunately, this doesn't work for iOS, which defines system() but
throws a compile time error if you try to call it.


That's odd -- as far as I can tell the meson implementation
of has_function() does what I expected it to do, ie
"try to compile and link a little program that uses the
function, and see if it successfully links":
https://github.com/mesonbuild/meson/blob/39ede12aa5b27376341df85bc9ec254913f044bd/mesonbuild/compilers/mixins/clike.py#L791
There's some initial cleverness there too, so I guess some
part of that must be what's tripping us up.

In any case, I think we should be doing new checks in
meson.build, not configure.  Paolo, what's the right
way to do a meson "really compile this program and
check it built" test?


One possibility is that you have to specify the #include in the "prefix" 
argument of cc.has_function for the test to behave the same way the QEMU code does?


If cc.has_function doesn't work, there's cc.compiles() and cc.links().
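For illustration, a probe program along these lines (hypothetical, not the
actual meson.build check) includes the proper header and really calls
system(), so that a system() that is declared but marked unavailable (as on
iOS) fails at compile time instead of slipping through a link-only test:

#include <stdlib.h>

int main(void)
{
    /* On iOS, system() is declared in stdlib.h but calling it is a
     * compile-time error, which is what this probe is meant to detect. */
    return system("true");
}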

Paolo




[Bug 1913012] Re: Installed firmware descriptor files contain (invalid) relative paths

2021-01-25 Thread Alain Kalker
** Description changed:

  After building and installing QEMU, the resulting installed firmware
  descriptor files contain relative paths for their `mapping.filename`
- properties. These relative paths will not be accepted as valid by tools
- like `virt-install`, resulting in the inability to configure new VMs
- which reference these firmware descriptors.
+ properties. These relative paths are causing errors when using tools
+ based on `libvirt` like `virt-install`, resulting in the inability to
+ configure new VMs which reference these firmware descriptors.
  
  # QEMU version
  $ qemu-system-x86_64 -version
  QEMU emulator version 5.2.0
  
  (I've also reproduced the issue with QEMU built from Git master @
  v5.2.0-1300-g0e32462630, see next comment.)
  
  # OS version
  Void Linux x86_64 (glibc)
  
  Steps to reproduce (with results on my system):
  
  # Verify the symptom
  
  $ virt-install --boot firmware=efi --disk none --memory 2048
  Using default --name vm4
  WARNING  No operating system detected, VM performance may suffer. Specify an 
OS with --os-variant for optimal results.
  
  Starting install...
  ERROR    Failed to open file 'share/qemu/edk2-i386-vars.fd': No such file or
directory
  Domain installation does not appear to have been successful.
  If it was, you can restart your domain by running:
    virsh --connect qemu:///session start vm4
  otherwise, please restart your installation.
  
  # Verify most likely cause
  
  $ grep filename /usr/share/qemu/firmware/*i386*.json
  /usr/share/qemu/firmware/50-edk2-i386-secure.json:"filename": 
"share/qemu/edk2-i386-secure-code.fd",
  /usr/share/qemu/firmware/50-edk2-i386-secure.json:"filename": 
"share/qemu/edk2-i386-vars.fd",
  /usr/share/qemu/firmware/60-edk2-i386.json:"filename": 
"share/qemu/edk2-i386-code.fd",
  /usr/share/qemu/firmware/60-edk2-i386.json:"filename": 
"share/qemu/edk2-i386-vars.fd",
  
  # Workaround
  
  Manually editing the firmware descriptor files in
  `/usr/share/qemu/firmware` to contain full absolute paths to the
  firmware blobs resolves the issue:
  
  $ sudo sed -i.bak -e 's,"share/qemu/,"/usr/share/qemu/,' 
/usr/share/qemu/firmware/*.json
  $ virt-install --boot firmware=efi --disk none --memory 2048
  [...VM boots normally...]

** Description changed:

  After building and installing QEMU, the resulting installed firmware
  descriptor files contain relative paths for their `mapping.filename`
  properties. These relative paths are causing errors when using tools
  based on `libvirt` like `virt-install`, resulting in the inability to
  configure new VMs which reference these firmware descriptors.
  
  # QEMU version
  $ qemu-system-x86_64 -version
  QEMU emulator version 5.2.0
  
  (I've also reproduced the issue with QEMU built from Git master @
  v5.2.0-1300-g0e32462630, see next comment.)
  
  # OS version
  Void Linux x86_64 (glibc)
  
  Steps to reproduce (with results on my system):
  
  # Verify the symptom
  
  $ virt-install --boot firmware=efi --disk none --memory 2048
  Using default --name vm4
  WARNING  No operating system detected, VM performance may suffer. Specify an 
OS with --os-variant for optimal results.
  
  Starting install...
  ERROR    Failed to open file 'share/qemu/edk2-i386-vars.fd': No such file or
directory
  Domain installation does not appear to have been successful.
  If it was, you can restart your domain by running:
    virsh --connect qemu:///session start vm4
  otherwise, please restart your installation.
  
+ # Verify that the file does exist on the system and is accessible
+ 
+ $ ls -l /usr/share/qemu/edk2-i386-vars.fd 
+ -rw-r--r-- 1 root root 540672 12 dec 18:47 /usr/share/qemu/edk2-i386-vars.fd
+ 
  # Verify most likely cause
  
  $ grep filename /usr/share/qemu/firmware/*i386*.json
  /usr/share/qemu/firmware/50-edk2-i386-secure.json:"filename": 
"share/qemu/edk2-i386-secure-code.fd",
  /usr/share/qemu/firmware/50-edk2-i386-secure.json:"filename": 
"share/qemu/edk2-i386-vars.fd",
  /usr/share/qemu/firmware/60-edk2-i386.json:"filename": 
"share/qemu/edk2-i386-code.fd",
  /usr/share/qemu/firmware/60-edk2-i386.json:"filename": 
"share/qemu/edk2-i386-vars.fd",
  
  # Workaround
  
  Manually editing the firmware descriptor files in
  `/usr/share/qemu/firmware` to contain full absolute paths to the
  firmware blobs resolves the issue:
  
  $ sudo sed -i.bak -e 's,"share/qemu/,"/usr/share/qemu/,' 
/usr/share/qemu/firmware/*.json
  $ virt-install --boot firmware=efi --disk none --memory 2048
  [...VM boots normally...]

** Description changed:

  After building and installing QEMU, the resulting installed firmware
  descriptor files contain relative paths for their `mapping.filename`
  properties. These relative paths are causing errors when using tools
  based on `libvirt` like `virt-install`, resulting in the inability to
  configure new VMs which reference these firmware descriptors.

[PATCH 0/3] hw/block/nvme: misc fixes

2021-01-25 Thread Klaus Jensen
From: Klaus Jensen 

Misc fixes from Gollu.

Gollu Appalanaidu (3):
  hw/block/nvme: fix set feature for error recovery
  hw/block/nvme: fix set feature save field check
  hw/block/nvme: align with existing style

 hw/block/nvme.c | 26 ++
 1 file changed, 14 insertions(+), 12 deletions(-)

-- 
2.30.0




[PATCH 1/3] hw/block/nvme: fix set feature for error recovery

2021-01-25 Thread Klaus Jensen
From: Gollu Appalanaidu 

Only enable DULBE if the namespace supports it.

Signed-off-by: Gollu Appalanaidu 
Reviewed-by: Klaus Jensen 
---
 hw/block/nvme.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 21aec90637fa..e7983ff422f2 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -3396,7 +3396,9 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest 
*req)
 }
 
 assert(ns);
-ns->features.err_rec = dw11;
+if (NVME_ID_NS_NSFEAT_DULBE(ns->id_ns.nsfeat))  {
+ns->features.err_rec = dw11;
+}
 break;
 case NVME_VOLATILE_WRITE_CACHE:
 for (i = 1; i <= n->num_namespaces; i++) {
-- 
2.30.0




[PATCH 2/3] hw/block/nvme: fix set feature save field check

2021-01-25 Thread Klaus Jensen
From: Gollu Appalanaidu 

Currently, no features are saveable, so the current check is not wrong,
but add a check against the feature capabilities to make sure this will
not regress if saveable features are added later.

Signed-off-by: Gollu Appalanaidu 
Reviewed-by: Klaus Jensen 
---
 hw/block/nvme.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index e7983ff422f2..1be5b54e0fed 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -3324,7 +3324,7 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest 
*req)
 
 trace_pci_nvme_setfeat(nvme_cid(req), nsid, fid, save, dw11);
 
-if (save) {
+if (save && !(nvme_feature_cap[fid] & NVME_FEAT_CAP_SAVE)) {
 return NVME_FID_NOT_SAVEABLE | NVME_DNR;
 }
 
-- 
2.30.0




[PATCH 3/3] hw/block/nvme: align with existing style

2021-01-25 Thread Klaus Jensen
From: Gollu Appalanaidu 

Change status checks to align with the existing style and remove the
explicit check against NVME_SUCCESS.

Cc: Dmitry Fomichev 
Signed-off-by: Gollu Appalanaidu 
Reviewed-by: Klaus Jensen 
---
 hw/block/nvme.c | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 1be5b54e0fed..98d84fe26644 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -1198,7 +1198,7 @@ static uint16_t nvme_check_zone_write(NvmeCtrl *n, 
NvmeNamespace *ns,
 status = nvme_check_zone_state_for_write(zone);
 }
 
-if (status != NVME_SUCCESS) {
+if (status) {
 trace_pci_nvme_err_zone_write_not_ok(slba, nlb, status);
 } else {
 assert(nvme_wp_is_valid(zone));
@@ -1253,7 +1253,7 @@ static uint16_t nvme_check_zone_read(NvmeNamespace *ns, 
uint64_t slba,
 uint16_t status;
 
 status = nvme_check_zone_state_for_read(zone);
-if (status != NVME_SUCCESS) {
+if (status) {
 ;
 } else if (unlikely(end > bndry)) {
 if (!ns->params.cross_zone_read) {
@@ -1266,7 +1266,7 @@ static uint16_t nvme_check_zone_read(NvmeNamespace *ns, 
uint64_t slba,
 do {
 zone++;
 status = nvme_check_zone_state_for_read(zone);
-if (status != NVME_SUCCESS) {
+if (status) {
 break;
 }
 } while (end > nvme_zone_rd_boundary(ns, zone));
@@ -1677,7 +1677,7 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *req)
 
 if (ns->params.zoned) {
 status = nvme_check_zone_read(ns, slba, nlb);
-if (status != NVME_SUCCESS) {
+if (status) {
 trace_pci_nvme_err_zone_read_not_ok(slba, nlb, status);
 goto invalid;
 }
@@ -1748,12 +1748,12 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest 
*req, bool append,
 zone = nvme_get_zone_by_slba(ns, slba);
 
 status = nvme_check_zone_write(n, ns, zone, slba, nlb, append);
-if (status != NVME_SUCCESS) {
+if (status) {
 goto invalid;
 }
 
 status = nvme_auto_open_zone(ns, zone);
-if (status != NVME_SUCCESS) {
+if (status) {
 goto invalid;
 }
 
@@ -1852,14 +1852,14 @@ static uint16_t nvme_open_zone(NvmeNamespace *ns, 
NvmeZone *zone,
 switch (state) {
 case NVME_ZONE_STATE_EMPTY:
 status = nvme_aor_check(ns, 1, 0);
-if (status != NVME_SUCCESS) {
+if (status) {
 return status;
 }
 nvme_aor_inc_active(ns);
 /* fall through */
 case NVME_ZONE_STATE_CLOSED:
 status = nvme_aor_check(ns, 0, 1);
-if (status != NVME_SUCCESS) {
+if (status) {
 if (state == NVME_ZONE_STATE_EMPTY) {
 nvme_aor_dec_active(ns);
 }
@@ -1972,7 +1972,7 @@ static uint16_t nvme_set_zd_ext(NvmeNamespace *ns, 
NvmeZone *zone)
 
 if (state == NVME_ZONE_STATE_EMPTY) {
 status = nvme_aor_check(ns, 1, 0);
-if (status != NVME_SUCCESS) {
+if (status) {
 return status;
 }
 nvme_aor_inc_active(ns);
@@ -3301,7 +3301,7 @@ static uint16_t nvme_set_feature_timestamp(NvmeCtrl *n, 
NvmeRequest *req)
 
 ret = nvme_dma(n, (uint8_t *)&timestamp, sizeof(timestamp),
DMA_DIRECTION_TO_DEVICE, req);
-if (ret != NVME_SUCCESS) {
+if (ret) {
 return ret;
 }
 
-- 
2.30.0




[Bug 1913012] Re: Installed firmware descriptor files contain (invalid) relative paths

2021-01-25 Thread Alain Kalker
Please disregard the `# Test a coworkaround` stuff in comment #1, bad
copypasta, too late at night ;-)

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1913012

Title:
  Installed firmware descriptor files contain (invalid) relative paths

Status in QEMU:
  New

Bug description:
  After building and installing QEMU, the resulting installed firmware
  descriptor files contain relative paths for their `mapping.filename`
  properties. These relative paths are causing errors when using tools
  based on `libvirt` like `virt-install`, resulting in the inability to
  configure new VMs which reference these firmware descriptors.

  # QEMU version
  $ qemu-system-x86_64 -version
  QEMU emulator version 5.2.0

  (I've also reproduced the issue with QEMU built from Git master @
  v5.2.0-1300-g0e32462630, see next comment.)

  # OS version
  Void Linux x86_64 (glibc)

  Steps to reproduce (with results on my system):

  # Verify the symptom

  $ virt-install --boot firmware=efi --disk none --memory 2048
  Using default --name vm4
  WARNING  No operating system detected, VM performance may suffer. Specify an 
OS with --os-variant for optimal results.

  Starting install...
  ERROR    Failed to open file 'share/qemu/edk2-i386-vars.fd': No such file or
directory
  Domain installation does not appear to have been successful.
  If it was, you can restart your domain by running:
    virsh --connect qemu:///session start vm4
  otherwise, please restart your installation.

  # Verify that the file does exist on the system and is accessible

  $ ls -l /usr/share/qemu/edk2-i386-vars.fd
  -rw-r--r-- 1 root root 540672 12 dec 18:47 /usr/share/qemu/edk2-i386-vars.fd

  # Verify most likely cause

  $ grep filename /usr/share/qemu/firmware/*i386*.json
  /usr/share/qemu/firmware/50-edk2-i386-secure.json:"filename": 
"share/qemu/edk2-i386-secure-code.fd",
  /usr/share/qemu/firmware/50-edk2-i386-secure.json:"filename": 
"share/qemu/edk2-i386-vars.fd",
  /usr/share/qemu/firmware/60-edk2-i386.json:"filename": 
"share/qemu/edk2-i386-code.fd",
  /usr/share/qemu/firmware/60-edk2-i386.json:"filename": 
"share/qemu/edk2-i386-vars.fd",

  Note that all the paths are relative and are missing the prefix, i.e.
  `/usr`.

  # Workaround

  Manually editing the firmware descriptor files in
  `/usr/share/qemu/firmware` to contain full absolute paths to the
  firmware blobs resolves the issue:

  $ sudo sed -i.bak -e 's,"share/qemu/,"/usr/share/qemu/,' 
/usr/share/qemu/firmware/*.json
  $ virt-install --boot firmware=efi --disk none --memory 2048
  [...VM boots normally...]

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1913012/+subscriptions



Re: [PATCH v3 0/6] hw/sd: Support block write in SPI mode

2021-01-25 Thread Philippe Mathieu-Daudé
On 1/24/21 9:28 PM, Philippe Mathieu-Daudé wrote:
> These are Bin's SD patches from his v2, rebased on sd-next.
> 
> v2:
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg775712.html
> 
> Based-on: <20210124201106.2602238-1-f4...@amsat.org>
> 
> Bin Meng (6):
>   hw/sd: ssi-sd: Support multiple block read
>   hw/sd: sd: Remove duplicated codes in single/multiple block read/write
>   hw/sd: sd: Allow single/multiple block write for SPI mode
>   hw/sd: Introduce receive_ready() callback
>   hw/sd: ssi-sd: Support single block write
>   hw/sd: ssi-sd: Support multiple block write
> 
>  include/hw/sd/sd.h |   2 +
>  hw/sd/core.c   |  13 +
>  hw/sd/sd.c |  56 +++---
>  hw/sd/ssi-sd.c | 117 -
>  4 files changed, 125 insertions(+), 63 deletions(-)

Tested-by: Philippe Mathieu-Daudé 



Re: [PATCH] spapr: Adjust firmware path of PCI devices

2021-01-25 Thread David Gibson
On Fri, Jan 22, 2021 at 06:01:57PM +0100, Greg Kurz wrote:
> It is currently not possible to perform a strict boot from USB storage:
> 
> $ qemu-system-ppc64 -accel kvm -nodefaults -nographic -serial stdio \
>   -boot strict=on \
>   -device qemu-xhci \
>   -device usb-storage,drive=disk,bootindex=0 \
>   -blockdev driver=file,node-name=disk,filename=fedora-ppc64le.qcow2
> 
> 
> SLOF **
> QEMU Starting
>  Build Date = Jul 17 2020 11:15:24
>  FW Version = git-e18ddad8516ff2cf
>  Press "s" to enter Open Firmware.
> 
> Populating /vdevice methods
> Populating /vdevice/vty@7100
> Populating /vdevice/nvram@7101
> Populating /pci@8002000
>  00  (D) : 1b36 000dserial bus [ usb-xhci ]
> No NVRAM common partition, re-initializing...
> Scanning USB
>   XHCI: Initializing
> USB Storage
>SCSI: Looking for devices
>   101 DISK : "QEMU QEMU HARDDISK2.5+"
> Using default console: /vdevice/vty@7100
> 
>   Welcome to Open Firmware
> 
>   Copyright (c) 2004, 2017 IBM Corporation All rights reserved.
>   This program and the accompanying materials are made available
>   under the terms of the BSD License available at
>   http://www.opensource.org/licenses/bsd-license.php
> 
> 
> Trying to load:  from: 
> /pci@8002000/usb@0/storage@1/disk@101 ...
> E3405: No such device
> 
> E3407: Load failed
> 
>   Type 'boot' and press return to continue booting the system.
>   Type 'reset-all' and press return to reboot the system.
> 
> 
> Ready!
> 0 >
> 
> The device tree handed over by QEMU to SLOF indeed contains:
> 
> qemu,boot-list =
>   "/pci@8002000/usb@0/storage@1/disk@101 HALT";
> 
> but the device node is named usb-xhci@0, not usb@0.
> 
> This happens because the firmware names of PCI devices returned
> by get_boot_devices_list() come from pcibus_get_fw_dev_path(),
> while the sPAPR PHB code uses a different naming scheme for
> device nodes. This inconsistency has always been there but it was
> hidden for a long time because SLOF used to rename USB device
> nodes, until this commit, merged in QEMU 4.2.0 :
> 
> commit 85164ad4ed9960cac842fa4cc067c6b6699b0994
> Author: Alexey Kardashevskiy 
> Date:   Wed Sep 11 16:24:32 2019 +1000
> 
> pseries: Update SLOF firmware image
> 
> This fixes USB host bus adapter name in the device tree to match QEMU's
> one.
> 
> Signed-off-by: Alexey Kardashevskiy 
> Signed-off-by: David Gibson 
> 
> Fortunately, sPAPR implements the firmware path provider interface.
> This provides a way to override the default firmware paths.
> 
> Just factor out the sPAPR PHB naming logic from spapr_dt_pci_device()
> to a helper, and use it in the sPAPR firmware path provider hook.
> 
> Fixes: 85164ad4ed99 ("pseries: Update SLOF firmware image")
> Signed-off-by: Greg Kurz 

Nice work.  Applied to ppc-for-6.0.

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson




Re: [RFC PATCH v3 5/6] virtio-net: Added eBPF RSS to virtio-net.

2021-01-25 Thread Jason Wang



On 2021/1/24 4:24 PM, Yuri Benditovich wrote:

Hi Jason,

I've prepared a POC of graceful switch to 'vhost off' if respective
features are acked by the guest.
Such a way we do not need to silently clear RSS and hash report
features in case of 'vhost on'.
Can you please review it and provide your feedback?

I think the only open question is what to do with cases of vhost-user
and vhost-vdpa.

https://github.com/qemu/qemu/pull/105
This pull request is for reviews only.

Thanks in advance



Will review it sometime this week.

Thanks

On Mon, Jan 18, 2021 at 5:16 AM Jason Wang  wrote:


On 2021/1/17 5:04 PM, Yuri Benditovich wrote:

On Fri, Jan 15, 2021 at 9:20 AM Jason Wang  wrote:

On 2021/1/15 5:16 AM, Andrew Melnychenko wrote:

From: Andrew

When RSS is enabled the device tries to load the eBPF program
to select RX virtqueue in the TUN. If eBPF can be loaded
the RSS will function also with vhost (works with kernel 5.8 and later).
Software RSS is used as a fallback with vhost=off when eBPF can't be loaded
or when hash population requested by the guest.

Signed-off-by: Yuri Benditovich
Signed-off-by: Andrew Melnychenko
---
hw/net/vhost_net.c |   2 +
hw/net/virtio-net.c| 125 +++--
include/hw/virtio/virtio-net.h |   4 ++
net/vhost-vdpa.c   |   2 +
4 files changed, 129 insertions(+), 4 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 24d555e764..16124f99c3 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -71,6 +71,8 @@ static const int user_feature_bits[] = {
VIRTIO_NET_F_MTU,
VIRTIO_F_IOMMU_PLATFORM,
VIRTIO_F_RING_PACKED,
+VIRTIO_NET_F_RSS,
+VIRTIO_NET_F_HASH_REPORT,

/* This bit implies RARP isn't sent by QEMU out of band */
VIRTIO_NET_F_GUEST_ANNOUNCE,
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 09ceb02c9d..37016fc73a 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -691,6 +691,19 @@ static void virtio_net_set_queues(VirtIONet *n)

static void virtio_net_set_multiqueue(VirtIONet *n, int multiqueue);

+static uint64_t fix_ebpf_vhost_features(uint64_t features)
+{
+/* If vhost=on & CONFIG_EBPF doesn't set - disable RSS feature */

I still think we should not clear the feature silently. This may break
migration if the feature is cleared on the destination.

Do I understand it correctly that if we do not clear features silently
and implement a graceful drop to vhost=off when we can't do what we
need with vhost - then we do not need to add any migration blocker?


Yes. I think we won't go with a migration blocker since we need to support
migration in the end.

Thanks







Re: [PATCH 08/25] hmp: replace "O" parser with keyval

2021-01-25 Thread Markus Armbruster
Paolo Bonzini  writes:

> HMP is using QemuOpts to parse free-form commands device_add,
> netdev_add and object_add.  However, none of these need QemuOpts
> for validation (these three QemuOptsLists are all of the catch-all
> kind), and keyval is already able to parse into QDict.  So use
> keyval directly, avoiding the detour from
> string to QemuOpts to QDict.
>
> The args_type now stores the implied key.  This arguably makes more
> sense than storing the QemuOptsList name; at least, it _is_ a key
> that might end up in the arguments QDict.
>
> Signed-off-by: Paolo Bonzini 

Switching from QemuOpts to keyval changes the accepted language.  We may
change it, because HMP is not a stable interface.  The commit message
should point out the change, though.  Maybe even release notes, not
sure.

Let's recap the differences briefly:

* Boolean sugar: deprecated in QemuOpts, nonexistent in keyval

* QemuOpts accepts a number of more or less crazy corner cases keyval
  rejects: invalid key, long key (silently truncated), first rather than
  last id= wins (unlike other keys), implied key with empty value.

* QemuOpts rejects anti-social ID such as id=666, keyval leaves this to
  the caller, because key "id" is not special in keyval.

  Are these still rejected with your patch?

> ---
>  hmp-commands.hx |  6 +++---
>  monitor/hmp.c   | 20 +---
>  2 files changed, 12 insertions(+), 14 deletions(-)
>
> diff --git a/hmp-commands.hx b/hmp-commands.hx
> index 73e0832ea1..6ee746b53e 100644
> --- a/hmp-commands.hx
> +++ b/hmp-commands.hx
> @@ -669,7 +669,7 @@ ERST
>  
>  {
>  .name   = "device_add",
> -.args_type  = "device:O",
> +.args_type  = "driver:O",
>  .params = "driver[,prop=value][,...]",
>  .help   = "add device, like -device on the command line",
>  .cmd= hmp_device_add,
> @@ -1315,7 +1315,7 @@ ERST
>  
>  {
>  .name   = "netdev_add",
> -.args_type  = "netdev:O",
> +.args_type  = "type:O",
>  .params = 
> "[user|tap|socket|vde|bridge|hubport|netmap|vhost-user],id=str[,prop=value][,...]",
>  .help   = "add host network device",
>  .cmd= hmp_netdev_add,
> @@ -1343,7 +1343,7 @@ ERST
>  
>  {
>  .name   = "object_add",
> -.args_type  = "object:O",
> +.args_type  = "qom-type:O",
>  .params = "[qom-type=]type,id=str[,prop=value][,...]",
>  .help   = "create QOM object",
>  .cmd= hmp_object_add,
> diff --git a/monitor/hmp.c b/monitor/hmp.c
> index 6c0b33a0b1..d2cb886da5 100644
> --- a/monitor/hmp.c
> +++ b/monitor/hmp.c
> @@ -744,13 +744,9 @@ static QDict *monitor_parse_arguments(Monitor *mon,
>  break;
>  case 'O':
>  {
> -QemuOptsList *opts_list;
> -QemuOpts *opts;
> +Error *errp;
> +bool help;
>  
> -opts_list = qemu_find_opts(key);
> -if (!opts_list || opts_list->desc->name) {
> -goto bad_type;
> -}
>  while (qemu_isspace(*p)) {
>  p++;
>  }
> @@ -760,12 +756,14 @@ static QDict *monitor_parse_arguments(Monitor *mon,
>  if (get_str(buf, sizeof(buf), &p) < 0) {
>  goto fail;
>  }
> -opts = qemu_opts_parse_noisily(opts_list, buf, true);
> -if (!opts) {
> -goto fail;
> +keyval_parse_into(qdict, buf, key, &help, &errp);
> +if (help) {

Uh...

> +if (qdict_haskey(qdict, key)) {

If we parsed a value for the implied key (sugared or not),

> +qdict_put_bool(qdict, "help", true);

then encode the help request by mapping key "help" to true,

> +} else {
> +qdict_put_str(qdict, key, "help");

else by mapping the implied key to "help".

> +}

Test cases:

* device_add help

  @qdict before the patch:

{
"driver": "help"
}

  No change.

* device_add e1000,help

  @qdict before the patch:

{
"driver": "e1000",
"help": "on"
}

  Afterwards:

{
"driver": "e1000",
"help": true
}

  If this is okay, the commit message should explain it.

* device_add help,e1000

{
"e1000": "on",
"driver": "help"
}

  Afterwards:
  upstream-qemu: ../util/error.c:59: error_setv: Assertion `*errp == NULL' 
failed.

>  }
> -qemu_opts_to_qdict(opts, qdict);
> -qemu_opts_del(opts);
>  }
>  break;
>  case '/':




[Bug 1811862] Re: microcode version stays 0x1 even if -cpu host is used

2021-01-25 Thread Paolo Bonzini
This is fixed in 5.2.

** Changed in: qemu
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1811862

Title:
  microcode version stays 0x1 even if -cpu host is used

Status in QEMU:
  Fix Released

Bug description:
  The microcode of my host CPU has the following version:

  grep microcode /proc/cpuinfo | head -1
  microcode   : 0x3d

  while trying to run ESXi in a nested VM, the boot bailed out with an
  error message that at least microcode version 0x19 is needed. It
  seems they have introduced such a check on certain CPU types.

  The VM in question is using the "host-passthrough" option in libvirt
  and the qemu command line reads as this:

  21172 ?Sl 0:09 /usr/libexec/qemu-kvm -name guest=hpe-env-
  client1,debug-threads=on -S -object
  secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-33
  -hpe-env-client1/master-key.aes -machine pc-i440fx-
  rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu host 

  Running a regular Linux VM with `host-passthrough` shows that the
  microcode version is still reported as 0x1.

  Within the VM:

  [root@hpe-env-client1 ~]# cat /proc/cpuinfo 
  processor   : 0
  vendor_id   : GenuineIntel
  cpu family  : 6
  model   : 63
  model name  : Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
  stepping: 2
  microcode   : 0x1
  cpu MHz : 2397.222

  
  My impression is qemu should copy the host's microcode version in this case?

  Running Qemu von RHEl8 beta here.

  [root@3parserver ~]# /usr/libexec/qemu-kvm --version
  QEMU emulator version 2.12.0 (qemu-kvm-2.12.0-41.el8+2104+3e32e6f8)

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1811862/+subscriptions



Re: [RFC PATCH v3 4/6] ebpf: Added eBPF RSS loader.

2021-01-25 Thread Jason Wang



On 2021/1/19 10:53 PM, Yuri Benditovich wrote:

On Fri, Jan 15, 2021 at 9:02 AM Jason Wang  wrote:


On 2021/1/15 5:16 AM, Andrew Melnychenko wrote:

From: Andrew 

Added function that loads RSS eBPF program.
Added stub functions for RSS eBPF loader.
Added meson and configuration options.

By default, eBPF feature enabled if libbpf is present in the build system.
libbpf checked in configuration shell script and meson script.

Signed-off-by: Yuri Benditovich 
Signed-off-by: Andrew Melnychenko 
---
   configure   |  33 
   ebpf/ebpf_rss-stub.c|  40 
   ebpf/ebpf_rss.c | 165 +
   ebpf/ebpf_rss.h |  44 +
   ebpf/meson.build|   1 +
   ebpf/rss.bpf.skeleton.h | 397 
   ebpf/trace-events   |   4 +
   ebpf/trace.h|   2 +
   meson.build |  13 ++
   9 files changed, 699 insertions(+)
   create mode 100644 ebpf/ebpf_rss-stub.c
   create mode 100644 ebpf/ebpf_rss.c
   create mode 100644 ebpf/ebpf_rss.h
   create mode 100644 ebpf/meson.build
   create mode 100644 ebpf/rss.bpf.skeleton.h
   create mode 100644 ebpf/trace-events
   create mode 100644 ebpf/trace.h

diff --git a/configure b/configure
index 5860bdb77b..9d18e941f5 100755
--- a/configure
+++ b/configure
@@ -342,6 +342,7 @@ vhost_vsock="$default_feature"
   vhost_user="no"
   vhost_user_blk_server="auto"
   vhost_user_fs="$default_feature"
+bpf=""
   kvm="auto"
   hax="auto"
   hvf="auto"
@@ -1236,6 +1237,10 @@ for opt do
 ;;
 --enable-membarrier) membarrier="yes"
 ;;
+  --disable-bpf) bpf="no"
+  ;;
+  --enable-bpf) bpf="yes"
+  ;;
 --disable-blobs) blobs="false"
 ;;
 --with-pkgversion=*) pkgversion="$optarg"
@@ -1845,6 +1850,7 @@ disabled with --disable-FEATURE, default is enabled if 
available
 vhost-user  vhost-user backend support
 vhost-user-blk-servervhost-user-blk server support
 vhost-vdpa  vhost-vdpa kernel backend support
+  bpf BPF kernel support
 spice   spice
 rbd rados block device (rbd)
 libiscsiiscsi support
@@ -5057,6 +5063,30 @@ else
   membarrier=no
   fi

+##
+# check for usable bpf system call
+if test "$bpf" = ""; then


This implies the bpf is enabled by default?

Yes, assuming libbpf-devel present and bpf system call defined.

Any problem with it?



It means the configure will fail if libbpf is not installed. Considering 
libbpf is not very common at the current stage, I think it's better to make 
it auto or disabled by default.

+have_bpf=no
+if test "$linux" = "yes" -a "$bigendian" != "yes"; then
+cat > $TMPC << EOF
+#include 
+#include 
+int main(void) {
+struct bpf_object *obj = NULL;
+bpf_object__load(obj);
+exit(0);
+}
+EOF
+if compile_prog "" "-lbpf" ; then
+have_bpf=yes
+bpf=yes
+fi
+fi
+if test "$have_bpf" = "no"; then
+  feature_not_found "bpf" "the libbpf is not available"
+fi
+fi
+
   ##
   # check if rtnetlink.h exists and is useful
   have_rtnetlink=no
@@ -5905,6 +5935,9 @@ fi
   if test "$membarrier" = "yes" ; then
 echo "CONFIG_MEMBARRIER=y" >> $config_host_mak
   fi
+if test "$bpf" = "yes" -a "$bigendian" != "yes" -a "$linux" = "yes" ; then
+  echo "CONFIG_EBPF=y" >> $config_host_mak
+fi
   if test "$signalfd" = "yes" ; then
 echo "CONFIG_SIGNALFD=y" >> $config_host_mak
   fi
diff --git a/ebpf/ebpf_rss-stub.c b/ebpf/ebpf_rss-stub.c
new file mode 100644
index 00..e71e229190
--- /dev/null
+++ b/ebpf/ebpf_rss-stub.c
@@ -0,0 +1,40 @@
+/*
+ * eBPF RSS stub file
+ *
+ * Developed by Daynix Computing LTD (http://www.daynix.com)
+ *
+ * Authors:
+ *  Yuri Benditovich 
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "ebpf/ebpf_rss.h"


I wonder why not simply use #ifdef #else to exclude the rss functions.

Just to make the reading easier.


If I read code correctly, this stub requires rss_is_loaded called before
ebpf_rss_set_all(), so actually ebpf_rss_is_loaded serve the same as

Can you please get into details?
I think the stub does not require it, it just converts all the
ebpf_rss calls to nop at link time.
If I miss something please let us know, we'll fix it in v4.



I mean the current code cannot be used without checking rss_is_loaded(). So 
rss_is_loaded() effectively acts as a guard, much like a macro would.


I personally prefer #ifdef but it's not a must.
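For comparison, a generic single-file sketch of the pattern being discussed,
with made-up names (not the actual ebpf_rss API):

/* The stub variant below is what a feature-stub.c would provide when the
 * feature is compiled out, so call sites need no #ifdef guards and the
 * "is loaded" check itself acts as the guard.  The alternative is to wrap
 * each call site in #ifdef CONFIG_FEATURE instead.
 */
#include <stdbool.h>
#include <stdio.h>

static bool feature_is_loaded(void)
{
    return false;   /* stub: always reports "not available" */
}

int main(void)
{
    if (feature_is_loaded()) {
        puts("feature path");
    } else {
        puts("software fallback");  /* what callers do when the stub is linked */
    }
    return 0;
}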

Thanks




[PATCH v1] gitlab-ci.yml: Speed up CI by using "meson test --no-rebuild"

2021-01-25 Thread Thomas Huth
Currently, our check-system-* jobs are recompiling the whole sources
again. This happens due to the fact that the jobs are checking out
the whole source tree and required submodules again, and only try
to use the "build" directory with the binaries and object files
as an artifact from the previous stage - which simply does not work
anymore (with the current version of meson). Due to some changed
time stamps, meson is always trying to rebuild the whole tree.

To fix this problem, we can use "meson test --no-rebuild" instead of
"make check" to avoid rebuilding all binaries every time. However, the
iotests ("make check-block") are not run by "meson test", so we have
to execute these manually now. But instead of adding them to the same
job as "meson test", it's better to put them into a separate new job
instead, to keep things nicely running in parallel in the CI.
This saves ca. 15 - 20 minutes of precious CI cycles in each run.

Signed-off-by: Thomas Huth 
---
 RFC -> v1: Added iotests

 .gitlab-ci.yml | 113 -
 1 file changed, 94 insertions(+), 19 deletions(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index de3a3d25b5..0834267a37 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -34,6 +34,26 @@ include:
 make -j"$JOBS" $MAKE_CHECK_ARGS ;
   fi
 
+.native_meson_test_job:
+  stage: test
+  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
+  script:
+- cd build
+- touch *
+- make git-submodule-update
+- if [ -x ../meson/meson.py ]; then
+  ../meson/meson.py test --no-rebuild -t 5 $MESON_TEST_ARGS ;
+  else
+  meson test --no-rebuild -t 5 $MESON_TEST_ARGS ;
+  fi
+
+.native_iotest_job:
+  stage: test
+  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
+  script:
+- cd build/tests/qemu-iotests/
+- ./check -g auto -qcow2
+
 .native_test_job_template: &native_test_job_definition
   stage: test
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
@@ -83,17 +103,15 @@ build-system-alpine:
   artifacts:
 expire_in: 2 days
 paths:
-  - .git-submodule-status
   - build
 
 check-system-alpine:
-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
   needs:
 - job: build-system-alpine
   artifacts: true
   variables:
 IMAGE: alpine
-MAKE_CHECK_ARGS: check
 
 acceptance-system-alpine:
   <<: *native_test_job_definition
@@ -118,13 +136,20 @@ build-system-ubuntu:
   - build
 
 check-system-ubuntu:
-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
+  needs:
+- job: build-system-ubuntu
+  artifacts: true
+  variables:
+IMAGE: ubuntu2004
+
+iotest-ubuntu:
+  extends: .native_iotest_job
   needs:
 - job: build-system-ubuntu
   artifacts: true
   variables:
 IMAGE: ubuntu2004
-MAKE_CHECK_ARGS: check
 
 acceptance-system-ubuntu:
   <<: *native_test_job_definition
@@ -149,13 +174,20 @@ build-system-debian:
   - build
 
 check-system-debian:
-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
+  needs:
+- job: build-system-debian
+  artifacts: true
+  variables:
+IMAGE: debian-amd64
+
+iotest-debian:
+  extends: .native_iotest_job
   needs:
 - job: build-system-debian
   artifacts: true
   variables:
 IMAGE: debian-amd64
-MAKE_CHECK_ARGS: check
 
 # No targets are built here, just tools, docs, and unit tests. This
 # also feeds into the eventual documentation deployment steps later
@@ -194,13 +226,20 @@ build-system-fedora:
   - build
 
 check-system-fedora:
-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
+  needs:
+- job: build-system-fedora
+  artifacts: true
+  variables:
+IMAGE: fedora
+
+iotest-fedora:
+  extends: .native_iotest_job
   needs:
 - job: build-system-fedora
   artifacts: true
   variables:
 IMAGE: fedora
-MAKE_CHECK_ARGS: check
 
 acceptance-system-fedora:
   <<: *native_test_job_definition
@@ -226,13 +265,20 @@ build-system-centos:
   - build
 
 check-system-centos:
-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
+  needs:
+- job: build-system-centos
+  artifacts: true
+  variables:
+IMAGE: centos8
+
+iotest-centos:
+  extends: .native_iotest_job
   needs:
 - job: build-system-centos
   artifacts: true
   variables:
 IMAGE: centos8
-MAKE_CHECK_ARGS: check
 
 acceptance-system-centos:
   <<: *native_test_job_definition
@@ -256,13 +302,20 @@ build-system-opensuse:
   - build
 
 check-system-opensuse:
-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
+  needs:
+- job: build-system-opensuse
+  artifacts: true
+  variables:
+IMAGE: opensuse-leap
+
+iotest-opensuse:
+  extends: .native_iotest_job
   needs:
 - job: build-system-opensuse
   artifacts: true
   variables:
 IMAGE: opensuse-leap
-MAKE_CHECK_ARGS: check
 
 acceptance-system-opensuse:
<<: *native_test_job_definition
@@ -525,13 +578,20 @@ build-crypto-old-n

Re: [RFC PATCH] gitlab-ci.yml: Speed up CI by using "meson test --no-rebuild"

2021-01-25 Thread Thomas Huth

On 24/01/2021 12.32, Paolo Bonzini wrote:

check-block is not run by "meson test".


Thanks! I've just send a v1 now which also runs the iotests again.

 Thomas




On Sun, Jan 24, 2021, 08:58 Thomas Huth wrote:


Currently, our check-system-* jobs are recompiling the whole sources
again. This happens due to the fact that the jobs are checking out
the whole source tree and required submodules again, and only try
to use the "build" directory with the binaries and object files
as an artifact from the previous stage - which simply does not work
anymore (with the current version of meson). Due to some changed
time stamps, meson is always trying to rebuild the whole tree.

To fix this problem, use "meson test --no-rebuild" instead of
"make check" to avoid rebuilding all binaries every time. This
saves ca. 15 - 20 minutes of precious CI cycles in each run.

Signed-off-by: Thomas Huth <th...@redhat.com>
---
  Marked as "RFC" since I'm not quite sure whether "meson test" has
  the same test coverage as "make check"... Paolo?

  .gitlab-ci.yml | 41 ++---
  1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index de3a3d25b5..c9fb11c325 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -34,6 +34,19 @@ include:
          make -j"$JOBS" $MAKE_CHECK_ARGS ;
        fi

+.native_meson_test_job:
+  stage: test
+  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
+  script:
+    - cd build
+    - touch *
+    - make git-submodule-update
+    - if [ -x ../meson/meson.py ]; then
+          ../meson/meson.py test --no-rebuild -t 5 $MESON_TEST_ARGS ;
+      else
+          meson test --no-rebuild -t 5 $MESON_TEST_ARGS ;
+      fi
+
  .native_test_job_template: &native_test_job_definition
    stage: test
    image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
@@ -83,17 +96,15 @@ build-system-alpine:
    artifacts:
      expire_in: 2 days
      paths:
-      - .git-submodule-status
        - build

  check-system-alpine:
-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
    needs:
      - job: build-system-alpine
        artifacts: true
    variables:
      IMAGE: alpine
-    MAKE_CHECK_ARGS: check

  acceptance-system-alpine:
    <<: *native_test_job_definition
@@ -118,13 +129,12 @@ build-system-ubuntu:
        - build

  check-system-ubuntu:
-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
    needs:
      - job: build-system-ubuntu
        artifacts: true
    variables:
      IMAGE: ubuntu2004
-    MAKE_CHECK_ARGS: check

  acceptance-system-ubuntu:
    <<: *native_test_job_definition
@@ -149,13 +159,12 @@ build-system-debian:
        - build

  check-system-debian:
-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
    needs:
      - job: build-system-debian
        artifacts: true
    variables:
      IMAGE: debian-amd64
-    MAKE_CHECK_ARGS: check

  # No targets are built here, just tools, docs, and unit tests. This
  # also feeds into the eventual documentation deployment steps later
@@ -194,13 +203,12 @@ build-system-fedora:
        - build

  check-system-fedora:
-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
    needs:
      - job: build-system-fedora
        artifacts: true
    variables:
      IMAGE: fedora
-    MAKE_CHECK_ARGS: check

  acceptance-system-fedora:
    <<: *native_test_job_definition
@@ -226,13 +234,12 @@ build-system-centos:
        - build

  check-system-centos:
-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
    needs:
      - job: build-system-centos
        artifacts: true
    variables:
      IMAGE: centos8
-    MAKE_CHECK_ARGS: check

  acceptance-system-centos:
    <<: *native_test_job_definition
@@ -256,13 +263,12 @@ build-system-opensuse:
        - build

  check-system-opensuse:
-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
    needs:
      - job: build-system-opensuse
        artifacts: true
    variables:
      IMAGE: opensuse-leap
-    MAKE_CHECK_ARGS: check

  acceptance-system-opensuse:
     <<: *native_test_job_definition
@@ -525,13 +531,12 @@ build-crypto-old-nettle:
        - build

  check-crypto-old-nettle:
-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
    needs:
      - job: build-crypto-old-nettle
        artifacts: true
    variables:
      IMAGE: centos7
-    MAKE_CHECK_ARGS: check


  build-crypto

Re: [PATCH] tcg: Increase the static number of temporaries

2021-01-25 Thread Alex Bennée


Laurent Vivier  writes:

> Le 23/01/2021 à 17:19, Laurent Vivier a écrit :
>> Le 21/01/2021 à 03:54, Richard Henderson a écrit :
>>> This isn't a total or permanent solution to the problem of running
>>> out of temporaries, but it puts off the issue for a bit.
>>>
>>> Make the assert in tcg_temp_alloc unconditional.  If we do run out
>>> of temps, this can fail much later as a weird SIGSEGV, due to the
>>> buffer overrun of the temp array.
>>>
>>> Remove the inlines from tcg_temp_alloc and tcg_global_alloc.
>>>
>>> Buglink: https://bugs.launchpad.net/bugs/1912065
>>> Signed-off-by: Richard Henderson 
>>> ---
>>>
>>> There are more bugs that need fixing in order to actually make
>>> the dynamic allocation scheme work.  Rather than keep this bug
>>> pending longer, hack around it and make the SEGV an ABRT.
>>>
>>> r~
>>>
>>> ---
>>>  include/tcg/tcg.h | 2 +-
>>>  tcg/tcg.c | 6 +++---
>>>  2 files changed, 4 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
>>> index c5a9d65d5f..0187de1352 100644
>>> --- a/include/tcg/tcg.h
>>> +++ b/include/tcg/tcg.h
>>> @@ -275,7 +275,7 @@ typedef struct TCGPool {
>>>  
>>>  #define TCG_POOL_CHUNK_SIZE 32768
>>>  
>>> -#define TCG_MAX_TEMPS 512
>>> +#define TCG_MAX_TEMPS 1024
>> 
>> This seems not enough, I have:
>> 
>> ERROR:.../tcg/tcg.c:1210:tcg_temp_alloc: assertion failed: (n < 
>> TCG_MAX_TEMPS)
>> Bail out! ERROR:.../tcg/tcg.c:1210:tcg_temp_alloc: assertion failed: (n < 
>> TCG_MAX_TEMPS)
>> 
>> With my branch m68k-virt (68040 processor with virtio-mmio devices) booting 
>> debian sid.
>
> The cause of the overflow is this sequence:
>
> 
> IN:
> 0xc0f0520c:  movel %a5,%sp@-
> 0xc0f0520e:  lea %pc@(0xc1805000),%a5
> 0xc0f05216:  moveal %sp@(12),%a0
> 0xc0f0521a:  movel %a5@(61a0),%a0@
> 0xc0f05222:  movel %a5@(4fd4),%a0@(180)
> 0xc0f0522c:  movel %a5@(8af4),%a0@(184)
> 0xc0f05236:  movel %a5@(6328),%a0@(188)
> 0xc0f05240:  movel %a5@(2530),%a0@(172)
> 0xc0f0524a:  movel %a5@(61fc),%a0@(176)
> 0xc0f05254:  movel %a5@(5848),%a0@(24)
> 0xc0f0525e:  movel %a5@(d398),%a0@(28)
> 0xc0f05268:  movel %a5@(53ec),%a0@(32)
> 0xc0f05272:  movel %a5@(17774),%a0@(36)
> 0xc0f0527c:  movel %a5@(f748),%a0@(40)
> 0xc0f05286:  movel %a5@(551c),%a0@(44)
> 0xc0f05290:  movel %a5@(36ac),%a0@(4)
> 0xc0f0529a:  movel %a5@(68a0),%a0@(8)
> 0xc0f052a4:  movel %a5@(54c0),%a0@(12)
> 0xc0f052ae:  movel %a5@(4354),%a0@(16)
> 0xc0f052b8:  movel %a5@(5eb0),%a0@(48)
> 0xc0f052c2:  movel %a5@(5ee4),%a0@(52)
> 0xc0f052cc:  movel %a5@(5894),%a0@(68)
> 0xc0f052d6:  movel %a5@(5924),%a0@(72)
> 0xc0f052e0:  movel %a5@(c8fc),%a0@(76)
> 0xc0f052ea:  movel %a5@(3248),%a0@(80)
> 0xc0f052f4:  movel %a5@(bcd0),%a0@(84)
> 0xc0f052fe:  movel %a5@(9a38),%a0@(88)
> 0xc0f05308:  movel %a5@(e2e8),%a0@(92)
> 0xc0f05312:  movel %a5@(dd50),%a0@(96)
> 0xc0f0531c:  movel %a5@(62b0),%a0@(100)
> 0xc0f05326:  movel %a5@(20a0),%a0@(104)
> 0xc0f05330:  movel %a5@(527c),%a0@(108)
> 0xc0f0533a:  movel %a5@(41ec),%a0@(112)
> 0xc0f05344:  movel %a5@(33dc),%a0@(116)
> 0xc0f0534e:  movel %a5@(423c),%a0@(120)
> 0xc0f05358:  movel %a5@(9728),%a0@(124)
> 0xc0f05362:  movel %a5@(3fe4),%a0@(128)
> 0xc0f0536c:  movel %a5@(1018c),%a0@(132)
> 0xc0f05376:  movel %a5@(7b54),%a0@(136)
> 0xc0f05380:  movel %a5@(121e8),%a0@(140)
> 0xc0f0538a:  movel %a5@(550c),%a0@(144)
> 0xc0f05394:  movel %a5@(b4a8),%a0@(148)
> 0xc0f0539e:  movel %a5@(6a20),%a0@(152)
> 0xc0f053a8:  movel %a5@(56e0),%a0@(156)
> 0xc0f053b2:  movel %a5@(10c9c),%a0@(160)
> 0xc0f053bc:  movel %a5@(a4e8),%a0@(164)
> 0xc0f053c6:  movel %a5@(9d58),%a0@(168)
> 0xc0f053d0:  movel %a5@(6e2c),%a0@(224)
> 0xc0f053da:  movel %a5@(4e28),%a0@(228)
> 0xc0f053e4:  movel %a5@(152e0),%a0@(232)
> 0xc0f053ee:  movel %a5@(7e3c),%a0@(236)
> 0xc0f053f8:  movel %a5@(15b10),%a0@(240)
> 0xc0f05402:  movel %a5@(4578),%a0@(244)
> 0xc0f0540c:  movel %a5@(11e98),%a0@(248)
> 0xc0f05416:  movel %a5@(44b8),%a0@(252)
> 0xc0f05420:  movel %a5@(72a0),%a0@(504)
> 0xc0f0542a:  movel %a5@(308c),%a0@(508)
> 0xc0f05434:  movel %a5@(4f40),%a0@(512)
> 0xc0f0543e:  movel %a5@(8c04),%a0@(516)
> 0xc0f05448:  movel %a5@(b328),%a0@(520)
> 0xc0f05452:  movel %a5@(4e40),%a0@(524)
> 0xc0f0545c:  movel %a5@(4954),%a0@(528)
> 0xc0f05466:  movel %a5@(14f48),%a0@(532)
> 0xc0f05470:  movel %a5@(15c2c),%a0@(536)
> 0xc0f0547a:  movel %a5@(4bf0),%a0@(540)
> 0xc0f05484:  movel %a5@(66b4),%a0@(544)
> 0xc0f0548e:  movel %a5@(3768),%a0@(548)
> 0xc0f05498:  movel %a5@(111e4),%a0@(552)
> 0xc0f054a2:  movel %a5@(414c),%a0@(556)
> 0xc0f054ac:  movel %a5@(14eb8),%a0@(560)
> 0xc0f054b6:  movel %a5@(6fec),%a0@(564)
> 0xc0f054c0:  movel %a5@(48c0),%a0@(568)
> 0xc0f054ca:  movel %a5@(4494),%a0@(572)
> 0xc0f054d4:  movel %a5@(7534),%a0@(576)
> 0xc0f054de:  movel %a5@(c1ec),%a0@(580)
> 0xc0f054e8:  movel %a5@(636c),%a0@(584)
> 0xc0f054f2:  movel %a5@(a5a0),%a0@(588)
> 0xc0f054fc:  movel %a5@(8734),%a0@(592)
> 0xc0f05506:  movel %a5@(3f94),%a0@(596)
> 0xc0f05510:  movel %a5@(11910),%a0@(600)

[Bug 1912224] Re: qemu may freeze during drive-mirroring on fragmented FS

2021-01-25 Thread Alexandre arents
Another test that confirms the perf drop after the revert:

live migration of a 51GB disk (raw written data, without holes)
between two hosts with NVMe disks, 10Gb/s connectivity:

master qemu 1m14s  (737 MB/s)
master qemu with reverted patch 2m30s  (353 MB/s)

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1912224

Title:
  qemu may freeze during drive-mirroring on fragmented FS

Status in QEMU:
  New

Bug description:
  
  We have odd behavior in operation where qemu freezes for many
  seconds. We started a thread about that issue here:
  https://lists.gnu.org/archive/html/qemu-devel/2020-11/msg05623.html

  It happens at least during an openstack nova snapshot (qemu blockdev-mirror)
  or live block migration (which includes a network copy of the disk).

  After further troubleshooting, it seems related to FS fragmentation on
  the host.

  reproducible at least on:
  Ubuntu 18.04.3/4.18.0-25-generic/qemu-4.0
  Ubuntu 16.04.6/5.10.6/qemu-5.2.0-rc2

  # Lets create a dedicated file system on a SSD/Nvme 60GB disk in my case:
  $sudo mkfs.ext4 /dev/sda3
  $sudo mount /dev/sda3 /mnt
  $df -h /mnt
  Filesystem  Size  Used Avail Use% Mounted on
  /dev/sda3 59G   53M   56G   1% /mnt

  #Create a fragmented disk on it using 2MB Chunks (about 30min):
  $sudo python3 create_fragged_disk.py /mnt 2
  Filling up FS by Creating chunks files in:  /mnt/chunks
  We are probably full as expected!!:  [Errno 28] No space left on device
  Creating fragged disk file:  /mnt/disk

  $ls -lhs 
  59G -rw-r--r-- 1 root root 59G Jan 15 14:08 /mnt/disk

  $ sudo e4defrag -c /mnt/disk
   Total/best extents 41971/30
   Average size per extent1466 KB
   Fragmentation score2
   [0-30 no problem: 31-55 a little bit fragmented: 56- needs defrag]
   This file (/mnt/disk) does not need defragmentation.
   Done.

  # the tool ^^^ says it is not fragmented enough to be able to defrag.

  #Inject an image on fragmented disk
  sudo chown ubuntu /mnt/disk
  wget 
https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
  qemu-img convert -O raw  bionic-server-cloudimg-amd64.img \
   bionic-server-cloudimg-amd64.img.raw
  dd conv=notrunc iflag=fullblock if=bionic-server-cloudimg-amd64.img.raw \
  of=/mnt/disk bs=1M
  virt-customize -a /mnt/disk --root-password password:

  # log on and run console activity, e.g.: ping -i 0.3 127.0.0.1
  $qemu-system-x86_64 -m 2G -enable-kvm  -nographic \
  -chardev socket,id=test,path=/tmp/qmp-monitor,server,nowait \
  -mon chardev=test,mode=control \
  -drive 
file=/mnt/disk,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard\
  -device 
virtio-blk-pci,scsi=off,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on

  $sync
  $echo 3 | sudo tee -a /proc/sys/vm/drop_caches

  #start drive-mirror via qmp on another SSD/nvme partition
  nc -U /tmp/qmp-monitor
  {"execute":"qmp_capabilities"}
  
{"execute":"drive-mirror","arguments":{"device":"drive-virtio-disk0","target":"/home/ubuntu/mirror","sync":"full","format":"qcow2"}}
  ^^^ qemu console may start to freeze at this step.

  NOTE:
   - the smaller the chunk size and the bigger the disk size, the worse it is.
 In operation we also have the issue on a 400GB disk with an average of 13MB/extent
   - Reproducible also on xfs

  
  Expected behavior:
  ---
  QEMU should remain steady, possibly only with decreased storage or mirroring
  performance, because of the fragmented FS.

  Observed behavior:
  ---
  Perf of mirroring is still quite good even on fragmented FS,
  but it breaks qemu.

  
  ##  create_fragged_disk.py 
  import sys
  import os
  import tempfile
  import glob
  import errno

  MNT_DIR = sys.argv[1]
  CHUNK_SZ_MB = int(sys.argv[2])
  CHUNKS_DIR = MNT_DIR + '/chunks'
  DISK_FILE = MNT_DIR + '/disk'

  if not os.path.exists(CHUNKS_DIR):
      os.makedirs(CHUNKS_DIR)

  with open("/dev/urandom", "rb") as f_rand:
      mb_rand = f_rand.read(1024 * 1024)

  print("Filling up FS by Creating chunks files in: ", CHUNKS_DIR)
  try:
      while True:
          tp = tempfile.NamedTemporaryFile(dir=CHUNKS_DIR, delete=False)
          for x in range(CHUNK_SZ_MB):
              tp.write(mb_rand)
          os.fsync(tp)
          tp.close()
  except Exception as ex:
      print("We are probably full as expected!!: ", ex)

  chunks = glob.glob(CHUNKS_DIR + '/*')

  print("Creating fragged disk file: ", DISK_FILE)
  with open(DISK_FILE, "w+b") as f_disk:
      for chunk in chunks:
          try:
              os.unlink(chunk)
              for x in range(CHUNK_SZ_MB):
                  f_disk.write(mb_rand)
              os.fsync(f_disk)
          except IOError as ex:
              if ex.errno != errno.ENOSPC:
                  raise
  ##

Re: [PATCH v3 2/4] meson: Warn when TCI is selected but TCG backend is available

2021-01-25 Thread Daniel P . Berrangé
On Fri, Jan 22, 2021 at 02:30:02PM +0100, Philippe Mathieu-Daudé wrote:
> Some new users get confused with 'TCG' and 'TCI', and enable TCI
> support expecting to enable TCG.
> 
> Emit a warning when native TCG backend is available on the
> host architecture, mentioning this is a suboptimal configuration.
> 
> Reviewed-by: Stefan Weil 
> Reviewed-by: Thomas Huth 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  meson.build | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/meson.build b/meson.build
> index d5b76150e49..d3df5fa3516 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -234,6 +234,9 @@
>error('Unsupported CPU @0@, try --enable-tcg-interpreter'.format(cpu))
>  endif
>endif
> +  if 'CONFIG_TCG_INTERPRETER' in config_host and cpu in supported_cpus
> +warning('Experimental TCI requested while native TCG is available on 
> @0@, suboptimal performance expected'.format(cpu))
> +  endif

warning('Use of the TCG interpreter is not recommended on this host
 architecture. There is a native TCG execution backend available
 which provides substantially better performance and reliability. 
 It is strongly recommended to remove the --enable-tcg-interpreter 
 configuration option on this architecture to use the native
 backend.')


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH v1] gitlab-ci.yml: Speed up CI by using "meson test --no-rebuild"

2021-01-25 Thread Paolo Bonzini

On 25/01/21 10:03, Thomas Huth wrote:

Currently, our check-system-* jobs are recompiling the whole sources
again. This happens due to the fact that the jobs are checking out
the whole source tree and required submodules again, and only try
to use the "build" directory with the binaries and object files
as an artifact from the previous stage - which simply does not work
anymore (with the current version of meson). Due to some changed
time stamps, meson is always trying to rebuild the whole tree.

To fix this problem, we can use "meson test --no-rebuild" instead of
make check" to avoid rebuilding all binaries every time. However, the
iotests ("make check-block") are not run by "meson test", so we have
to execute these manually now. But instead of adding them to the same
job as "meson test", it's better to put them into a separate new job
instead, to keep things nicely running in parallel in the CI.
This saves ca. 15 - 20 minutes of precious CI cycles in each run.


The reason why we're not using "meson test" is that the reporting is 
(still) inferior to what you get from "make check", especially with 
respect to which tests are failing.  This is being tracked at 
https://github.com/mesonbuild/meson/issues/7830 and the last missing 
bits are at https://github.com/mesonbuild/meson/issues/8200 (after which 
we'll change the "meson test" command line to also include "meson test 
--verbose").


However, for CI this is a minor issue because we can let GitLab parse 
the XML testing logs.  Can you add something like this to the test jobs 
for v2?


+  artifacts:
+when: always
+paths:
+  - build/meson-logs/
+reports:
+  junit: build/meson-logs/testlog.junit.xml

Another possibility could be to use "make check NINJA=:".  I am not sure 
if that works, but if it does it would be the smallest possible workaround.


Paolo


Signed-off-by: Thomas Huth 
---
  RFC -> v1: Added iotests

  .gitlab-ci.yml | 113 -
  1 file changed, 94 insertions(+), 19 deletions(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index de3a3d25b5..0834267a37 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -34,6 +34,26 @@ include:
  make -j"$JOBS" $MAKE_CHECK_ARGS ;
fi
  
+.native_meson_test_job:

+  stage: test
+  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
+  script:
+- cd build
+- touch *
+- make git-submodule-update
+- if [ -x ../meson/meson.py ]; then
+  ../meson/meson.py test --no-rebuild -t 5 $MESON_TEST_ARGS ;
+  else
+  meson test --no-rebuild -t 5 $MESON_TEST_ARGS ;
+  fi
+
+.native_iotest_job:
+  stage: test
+  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
+  script:
+- cd build/tests/qemu-iotests/
+- ./check -g auto -qcow2
+
  .native_test_job_template: &native_test_job_definition
stage: test
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
@@ -83,17 +103,15 @@ build-system-alpine:
artifacts:
  expire_in: 2 days
  paths:
-  - .git-submodule-status
- build
  
  check-system-alpine:

-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
needs:
  - job: build-system-alpine
artifacts: true
variables:
  IMAGE: alpine
-MAKE_CHECK_ARGS: check
  
  acceptance-system-alpine:

<<: *native_test_job_definition
@@ -118,13 +136,20 @@ build-system-ubuntu:
- build
  
  check-system-ubuntu:

-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
+  needs:
+- job: build-system-ubuntu
+  artifacts: true
+  variables:
+IMAGE: ubuntu2004
+
+iotest-ubuntu:
+  extends: .native_iotest_job
needs:
  - job: build-system-ubuntu
artifacts: true
variables:
  IMAGE: ubuntu2004
-MAKE_CHECK_ARGS: check
  
  acceptance-system-ubuntu:

<<: *native_test_job_definition
@@ -149,13 +174,20 @@ build-system-debian:
- build
  
  check-system-debian:

-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
+  needs:
+- job: build-system-debian
+  artifacts: true
+  variables:
+IMAGE: debian-amd64
+
+iotest-debian:
+  extends: .native_iotest_job
needs:
  - job: build-system-debian
artifacts: true
variables:
  IMAGE: debian-amd64
-MAKE_CHECK_ARGS: check
  
  # No targets are built here, just tools, docs, and unit tests. This

  # also feeds into the eventual documentation deployment steps later
@@ -194,13 +226,20 @@ build-system-fedora:
- build
  
  check-system-fedora:

-  <<: *native_test_job_definition
+  extends: .native_meson_test_job
+  needs:
+- job: build-system-fedora
+  artifacts: true
+  variables:
+IMAGE: fedora
+
+iotest-fedora:
+  extends: .native_iotest_job
needs:
  - job: build-system-fedora
artifacts: true
variables:
  IMAGE: fedora
-MAKE_CHECK_ARGS: check
  
  acceptance-system-fedora:

<<: *native_test_job_definition
@@ -226,13 +265,20 @@ build-system-centos:
- build

Re: [PULL v2 10/10] tcg: Restart code generation when we run out of temps

2021-01-25 Thread Roman Bolshakov
On Sun, Jan 24, 2021 at 08:11:22AM -1000, Richard Henderson wrote:
> Some large translation blocks can generate so many unique
> constants that we run out of temps to hold them.  In this
> case, longjmp back to the start of code generation and
> restart with a smaller translation block.
> 
> Buglink: https://bugs.launchpad.net/bugs/1912065
> Tested-by: BALATON Zoltan 
> Reviewed-by: Philippe Mathieu-Daudé 
> Signed-off-by: Richard Henderson 
> ---
>  include/tcg/tcg.h |  3 +++
>  accel/tcg/translate-all.c | 15 ++-
>  tcg/tcg.c | 11 ---
>  3 files changed, 25 insertions(+), 4 deletions(-)
> 
> diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
> index c5a9d65d5f..0f0695e90d 100644
> --- a/include/tcg/tcg.h
> +++ b/include/tcg/tcg.h
> @@ -680,6 +680,9 @@ struct TCGContext {
>  
>  uint16_t gen_insn_end_off[TCG_MAX_INSNS];
>  target_ulong gen_insn_data[TCG_MAX_INSNS][TARGET_INSN_START_WORDS];
> +
> +/* Exit to translator on overflow. */
> +sigjmp_buf jmp_trans;
>  };
>  
>  static inline bool temp_readonly(TCGTemp *ts)
> diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
> index d09c187e0f..81d4c83f22 100644
> --- a/accel/tcg/translate-all.c
> +++ b/accel/tcg/translate-all.c
> @@ -1926,11 +1926,17 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
>  ti = profile_getclock();
>  #endif
>  
> +gen_code_size = sigsetjmp(tcg_ctx->jmp_trans, 0);
> +if (unlikely(gen_code_size != 0)) {
> +goto error_return;
> +}
> +
>  tcg_func_start(tcg_ctx);
>  
>  tcg_ctx->cpu = env_cpu(env);
>  gen_intermediate_code(cpu, tb, max_insns);
>  tcg_ctx->cpu = NULL;
> +max_insns = tb->icount;
>  
>  trace_translate_block(tb, tb->pc, tb->tc.ptr);
>  
> @@ -1955,6 +1961,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
>  
>  gen_code_size = tcg_gen_code(tcg_ctx, tb);
>  if (unlikely(gen_code_size < 0)) {
> + error_return:
>  switch (gen_code_size) {
>  case -1:
>  /*
> @@ -1966,6 +1973,9 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
>   * flush the TBs, allocate a new TB, re-initialize it per
>   * above, and re-do the actual code generation.
>   */
> +qemu_log_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT,
> +  "Restarting code generation for "
> +  "code_gen_buffer overflow\n");
>  goto buffer_overflow;
>  
>  case -2:
> @@ -1978,9 +1988,12 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
>   * Try again with half as many insns as we attempted this time.
>   * If a single insn overflows, there's a bug somewhere...
>   */
> -max_insns = tb->icount;
>  assert(max_insns > 1);
>  max_insns /= 2;
> +qemu_log_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT,
> +  "Restarting code generation with "
> +  "smaller translation block (max %d insns)\n",
> +  max_insns);
>  goto tb_overflow;
>  
>  default:
> diff --git a/tcg/tcg.c b/tcg/tcg.c
> index 67b08f708d..9e1b0d73c7 100644
> --- a/tcg/tcg.c
> +++ b/tcg/tcg.c
> @@ -1205,18 +1205,23 @@ void tcg_func_start(TCGContext *s)
>  QSIMPLEQ_INIT(&s->labels);
>  }
>  
> -static inline TCGTemp *tcg_temp_alloc(TCGContext *s)
> +static TCGTemp *tcg_temp_alloc(TCGContext *s)
>  {
>  int n = s->nb_temps++;
> -tcg_debug_assert(n < TCG_MAX_TEMPS);
> +
> +if (n >= TCG_MAX_TEMPS) {
> +/* Signal overflow, starting over with fewer guest insns. */
> +siglongjmp(s->jmp_trans, -2);
> +}
>  return memset(&s->temps[n], 0, sizeof(TCGTemp));
>  }
>  
> -static inline TCGTemp *tcg_global_alloc(TCGContext *s)
> +static TCGTemp *tcg_global_alloc(TCGContext *s)
>  {
>  TCGTemp *ts;
>  
>  tcg_debug_assert(s->nb_globals == s->nb_temps);
> +tcg_debug_assert(s->nb_globals < TCG_MAX_TEMPS);
>  s->nb_globals++;
>  ts = tcg_temp_alloc(s);
>  ts->kind = TEMP_GLOBAL;
> -- 
> 2.25.1
> 
> 

Hi Richard,

Thanks for providing the fix.

Tested-by: Roman Bolshakov 

Regards,
Roman
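
For readers who have not used this pattern before, the restart logic in the
patch boils down to a sigsetjmp() at the top of code generation and a
siglongjmp() from the allocator.  A minimal standalone sketch of the same
idea, with an illustrative budget and halving policy rather than the real
TCG data structures:

    #include <setjmp.h>
    #include <stdio.h>

    static sigjmp_buf restart_point;
    static int budget;                       /* stands in for the temp budget */

    static void alloc_one(void)
    {
        if (budget == 0) {
            siglongjmp(restart_point, -2);   /* signal overflow to the caller */
        }
        budget--;
    }

    static void generate(int max_insns)
    {
        if (sigsetjmp(restart_point, 0) != 0) {
            /* Jumped back from alloc_one(): retry with a smaller block. */
            printf("overflow, retrying with %d insns\n", max_insns / 2);
            generate(max_insns / 2);
            return;
        }
        budget = 16;                         /* illustrative fixed budget */
        for (int i = 0; i < max_insns; i++) {
            alloc_one();                     /* may siglongjmp() on overflow */
        }
        printf("generated %d insns\n", max_insns);
    }

    int main(void)
    {
        generate(64);
        return 0;
    }

The real code additionally distinguishes -1 (code_gen_buffer overflow) from
-2 (too many temps), as the switch in the patch shows.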



Re: [PATCH v1] gitlab-ci.yml: Speed up CI by using "meson test --no-rebuild"

2021-01-25 Thread Daniel P . Berrangé
On Mon, Jan 25, 2021 at 10:51:04AM +0100, Paolo Bonzini wrote:
> On 25/01/21 10:03, Thomas Huth wrote:
> > Currently, our check-system-* jobs are recompiling the whole sources
> > again. This happens due to the fact that the jobs are checking out
> > the whole source tree and required submodules again, and only try
> > to use the "build" directory with the binaries and object files
> > as an artifact from the previous stage - which simply does not work
> > anymore (with the current version of meson). Due to some changed
> > time stamps, meson is always trying to rebuild the whole tree.
> > 
> > To fix this problem, we can use "meson test --no-rebuild" instead of
> > make check" to avoid rebuilding all binaries every time. However, the
> > iotests ("make check-block") are not run by "meson test", so we have
> > to execute these manually now. But instead of adding them to the same
> > job as "meson test", it's better to put them into a separate new job
> > instead, to keep things nicely running in parallel in the CI.
> > This saves ca. 15 - 20 minutes of precious CI cycles in each run.
> 
> The reason why we're not using "meson test" is that the reporting is (still)
> inferior to what you get from "make check", especially with respect to which
> tests are failing.  This is being tracked at
> https://github.com/mesonbuild/meson/issues/7830 and the last missing bits
> are at https://github.com/mesonbuild/meson/issues/8200 (after which we'll
> change the "meson test" command line to also include "meson test
> --verbose").
> 
> However, for CI this is a minor issue because we can let GitLab parse the
> XML testing logs.  Can you add something like this to the test jobs for v2?
> 
> +  artifacts:
> +when: always
> +paths:
> +  - build/meson-logs/
> +reports:
> +  junit: build/meson-logs/testlog.junit.xml
> 
> Another possibility could be to use "make check NINJA=:".  I am not sure if
> that works, but if it does it would be the smallest possible workaround.

When I suggested use of --no-rebuild, I was actually thinking that
we would change the Makefile(s) to enable passing the --no-rebuild
arg to meson, e.g.

  make check MESON_ARGS=--no-rebuild

is that, or something similar, possible?


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH] target/ppc: Fix truncation of env->hflags

2021-01-25 Thread Alex Bennée


Richard Henderson  writes:

> On 1/23/21 6:46 PM, David Gibson wrote:
>> On Sat, Jan 23, 2021 at 05:24:22PM -1000, Richard Henderson wrote:
>>> Use the cs_base field, because it happens to be the same
>>> size as hflags (and MSR, from which hflags is derived).
>>>
>>> In translate, extract most bits from a local hflags variable.
>>> Mark several cases where code generation is *not* derived from
>>> data stored within the hashed elements of the TranslationBlock.
>> 
>> My knowledge of TCG isn't great, so I'm pretty much prepared to accept
>> this is correct on your say so.
>> 
>> But that commit message feels like it's following on from a
>> conversation that's not here, nor linked.  It'd be great if it
>> explained how said hflags truncation is happening, because it's
>> certainly not obvious to someone with only a fair to middling
>> understanding of TCG.
>
> Mm, fair.
>
> How about:
>
> The assignment from env->hflags to tb->flags truncates
> target_ulong to uint32_t.  This loses important bits from
> the top of hflags, which results in incorrect tb selection.

We are just putting off the day we declare tb->flags to be 64 bit or end
up renaming cs_base to
tb->cs_base_or_extra_flag_bits_we_could_not_fit_in_flags. The fact that
cs_base is expressed in terms of target_ulong worries me if there is
ever any hflag state above bit 32 for the ppc32 targets.

>
> Use the cs_base field instead, because it happens to be the
> same size as hflags (and MSR from which hflags is derived).
>
> In translate, extract most bits from a local hflags variable.
> All of the checks vs env->flags are redundant with env->msr_mask
> in that msr bits cannot be set when the feature is not available.
> Mark several cases where code generation is *not* derived from
> data stored within hashed elements of the tb.
>
>
> r~


-- 
Alex Bennée
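
A tiny standalone illustration of the truncation being described, using a
made-up value rather than a real PPC hflags bit:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Pretend some hflags state lives above bit 31 (illustrative value). */
        uint64_t hflags   = 0x0000000100000000ULL;
        uint32_t tb_flags = (uint32_t)hflags;   /* what the tb->flags assignment did */

        /* Two hflags values differing only above bit 31 now look identical
         * when TBs are looked up by flags. */
        printf("hflags=%#llx tb_flags=%#x\n",
               (unsigned long long)hflags, tb_flags);   /* tb_flags prints as 0 */
        return 0;
    }

Moving the extra bits into cs_base (a target_ulong, 64-bit on ppc64)
sidesteps the loss, at the cost Alex notes of tying the trick to the target
word size.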



Re: [PATCH] spapr: Adjust firmware path of PCI devices

2021-01-25 Thread Greg Kurz
On Sat, 23 Jan 2021 13:36:34 +1100
Alexey Kardashevskiy  wrote:

> 
> 
> On 23/01/2021 04:01, Greg Kurz wrote:
> > It is currently not possible to perform a strict boot from USB storage:
> > 
> > $ qemu-system-ppc64 -accel kvm -nodefaults -nographic -serial stdio \
> > -boot strict=on \
> > -device qemu-xhci \
> > -device usb-storage,drive=disk,bootindex=0 \
> > -blockdev driver=file,node-name=disk,filename=fedora-ppc64le.qcow2
> > 
> > 
> > SLOF **
> > QEMU Starting
> >   Build Date = Jul 17 2020 11:15:24
> >   FW Version = git-e18ddad8516ff2cf
> >   Press "s" to enter Open Firmware.
> > 
> > Populating /vdevice methods
> > Populating /vdevice/vty@7100
> > Populating /vdevice/nvram@7101
> > Populating /pci@8002000
> >   00  (D) : 1b36 000d   serial bus [ usb-xhci ]
> > No NVRAM common partition, re-initializing...
> > Scanning USB
> >XHCI: Initializing
> >  USB Storage
> > SCSI: Looking for devices
> >101 DISK : "QEMU QEMU HARDDISK2.5+"
> > Using default console: /vdevice/vty@7100
> > 
> >Welcome to Open Firmware
> > 
> >Copyright (c) 2004, 2017 IBM Corporation All rights reserved.
> >This program and the accompanying materials are made available
> >under the terms of the BSD License available at
> >http://www.opensource.org/licenses/bsd-license.php
> > 
> > 
> > Trying to load:  from: 
> > /pci@8002000/usb@0/storage@1/disk@101 ...
> > E3405: No such device
> > 
> > E3407: Load failed
> > 
> >Type 'boot' and press return to continue booting the system.
> >Type 'reset-all' and press return to reboot the system.
> > 
> > 
> > Ready!
> > 0 >
> > 
> > The device tree handed over by QEMU to SLOF indeed contains:
> > 
> > qemu,boot-list =
> > "/pci@8002000/usb@0/storage@1/disk@101 HALT";
> > 
> > but the device node is named usb-xhci@0, not usb@0.
> 
> 
> I'd expect it to be a return of qdev_fw_name() so in this case something 
> like "nec-usb-xhci" (which would still be broken) but seeing a plain 
> "usb" is a bit strange.
> 

The logic under get_boot_devices_list() is a bit hard to follow
because of the multiple indirections, but AFAICT it doesn't seem
to rely on qdev_fw_name() to get the names.

None of the XHCI devices seem to be setting DeviceClass::fw_name anyway:

$ git grep fw_name hw/usb/
hw/usb/bus.c: qdev_fw_name(qdev), nr);
hw/usb/dev-hub.c:dc->fw_name = "hub";
hw/usb/dev-mtp.c:dc->fw_name = "mtp";
hw/usb/dev-network.c:dc->fw_name = "network";
hw/usb/dev-storage.c:dc->fw_name = "storage";
hw/usb/dev-uas.c:dc->fw_name = "storage";

The plain "usb" naming comes from PCI, which has its own naming
logic for PCI devices (which qemu-xhci happens to be) :

#0  0x000100319474 in pci_dev_fw_name (len=33, buf=0x7fffd4c8 "\020", 
dev=0x7fffc3320010) at ../../hw/pci/pci.c:2533
#1  0x000100319474 in pcibus_get_fw_dev_path (dev=0x7fffc3320010) at 
../../hw/pci/pci.c:2550
#2  0x00010053118c in bus_get_fw_dev_path (dev=0x7fffc3320010, 
bus=) at ../../hw/core/qdev-fw.c:38
#3  0x00010053118c in qdev_get_fw_dev_path_helper (dev=0x7fffc3320010, 
p=0x7fffd728 "/pci@8002000/", size=128) at 
../../hw/core/qdev-fw.c:72
#4  0x000100531064 in qdev_get_fw_dev_path_helper (dev=0x101c864a0, 
p=0x7fffd728 "/pci@8002000/", size=128) at 
../../hw/core/qdev-fw.c:69
#5  0x000100531064 in qdev_get_fw_dev_path_helper (dev=0x1019f3560, 
p=0x7fffd728 "/pci@8002000/", size=128) at 
../../hw/core/qdev-fw.c:69
#6  0x0001005312f0 in qdev_get_fw_dev_path (dev=) at 
../../hw/core/qdev-fw.c:91
#7  0x000100588a68 in get_boot_device_path (dev=, 
ignore_suffixes=, ignore_suffixes@entry=true, suffix=) at ../../softmmu/bootdevice.c:211
#8  0x000100589540 in get_boot_devices_list (size=0x7fffd990) at 
../../softmmu/bootdevice.c:257
#9  0x000100606764 in spapr_dt_chosen (reset=true, fdt=0x7fffc26f0010, 
spapr=0x10149aef0) at ../../hw/ppc/spapr.c:1019


> While your patch works, I wonder if we should assign fw_name to all pci 
> nodes to avoid similar problems in the future? Thanks,
> 

Not sure to understand "assign fw_name to all pci nodes" ...

> 
> 
> 
> > 
> > This happens because the firmware names of PCI devices returned
> > by get_boot_devices_list() come from pcibus_get_fw_dev_path(),
> > while the sPAPR PHB code uses a different naming scheme for
> > device nodes. This inconsistency has always been there but it was
> > hidden for a long time because SLOF used to rename USB device
> > nodes, until this commit, merged in QEMU 4.2.0 :
> > 
> > commit 85164ad4ed9960cac842fa4cc067c6b6699b0994
> > Author: Alexey Kardashevskiy 
> > Date:   Wed Sep 11 16:24:32 2019 +1000
> > 
> >  pseries: Update SLOF firmware image
> > 
> >  This fixes USB host bus adapter name in the de

Re: [PULL v2 00/10] tcg patch queue

2021-01-25 Thread Peter Maydell
On Sun, 24 Jan 2021 at 18:11, Richard Henderson
 wrote:
>
>
> V2 replaces the tcg const temp overflow patch.
>
>
> r~
>
>
> The following changes since commit 0e32462630687a18039464511bd0447ada5709c3:
>
>   Merge remote-tracking branch 
> 'remotes/vivier2/tags/linux-user-for-6.0-pull-request' into staging 
> (2021-01-22 10:35:55 +)
>
> are available in the Git repository at:
>
>   https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20210124
>
> for you to fetch changes up to ae30e86661b0f48562cd95918d37cbeec5d02262:
>
>   tcg: Restart code generation when we run out of temps (2021-01-24 08:03:27 
> -1000)
>
> 
> Fix tcg constant temp overflow.
> Fix running during atomic single-step.
> Partial support for apple silicon.
> Cleanups for accel/tcg.


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/6.0
for any user-visible changes.

-- PMM



Re: [PATCH v1] gitlab-ci.yml: Speed up CI by using "meson test --no-rebuild"

2021-01-25 Thread Paolo Bonzini

On 25/01/21 10:56, Daniel P. Berrangé wrote:

When I suggested use of --no-rebuild, I was actually thinking that
we would change the Makefile(s) to enable passing the --no-rebuild
arg to meson, e.g.

   make check MESON_ARGS=--no-rebuild

is that, or something similar, possible?


It would, but "make check" is not using Meson yet.  Upon switching, you
will indeed be able to do something like that ("make check 
MTESTARGS=--no-rebuild" in the patch I am using).


Paolo




[PATCH v2 1/2] Revert "vnc: move initialization to framebuffer_update_request"

2021-01-25 Thread Gerd Hoffmann
This reverts commit 9e1632ad07ca49de99da4bb231e9e2f22f2d8df5.

Older gtk-vnc versions can't deal with non-incremental update
requests sending pseudo-encodings, so trying to send full server
state (including desktop size, cursor etc., which is done using
pseudo-encodings) doesn't fly.  Return to the old behavior of sending
those only for new connects and when changes happen.

Signed-off-by: Gerd Hoffmann 
---
 ui/vnc.c | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/ui/vnc.c b/ui/vnc.c
index 66f7c1b9361e..2622f82a5a9f 100644
--- a/ui/vnc.c
+++ b/ui/vnc.c
@@ -687,6 +687,10 @@ static void vnc_desktop_resize(VncState *vs)
 !vnc_has_feature(vs, VNC_FEATURE_RESIZE_EXT))) {
 return;
 }
+if (vs->client_width == pixman_image_get_width(vs->vd->server) &&
+vs->client_height == pixman_image_get_height(vs->vd->server)) {
+return;
+}
 
 assert(pixman_image_get_width(vs->vd->server) < 65536 &&
pixman_image_get_width(vs->vd->server) >= 0);
@@ -2042,10 +2046,6 @@ static void framebuffer_update_request(VncState *vs, int 
incremental,
 } else {
 vs->update = VNC_STATE_UPDATE_FORCE;
 vnc_set_area_dirty(vs->dirty, vs->vd, x, y, w, h);
-vnc_colordepth(vs);
-vnc_desktop_resize(vs);
-vnc_led_state_change(vs);
-vnc_cursor_define(vs);
 }
 }
 
@@ -2189,7 +2189,10 @@ static void set_encodings(VncState *vs, int32_t 
*encodings, size_t n_encodings)
 break;
 }
 }
+vnc_desktop_resize(vs);
 check_pointer_type_change(&vs->mouse_mode_notifier, NULL);
+vnc_led_state_change(vs);
+vnc_cursor_define(vs);
 }
 
 static void set_pixel_conversion(VncState *vs)
-- 
2.29.2




[PATCH v2 2/2] vnc: send extended desktop resize on update requests

2021-01-25 Thread Gerd Hoffmann
Unlike other pseudo-encodings these don't break gtk-vnc
because older versions don't support the extended desktop
resize extension in the first place.

Signed-off-by: Gerd Hoffmann 
---
 ui/vnc.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/ui/vnc.c b/ui/vnc.c
index 2622f82a5a9f..16bb3be770b2 100644
--- a/ui/vnc.c
+++ b/ui/vnc.c
@@ -2046,6 +2046,9 @@ static void framebuffer_update_request(VncState *vs, int 
incremental,
 } else {
 vs->update = VNC_STATE_UPDATE_FORCE;
 vnc_set_area_dirty(vs->dirty, vs->vd, x, y, w, h);
+if (vnc_has_feature(vs, VNC_FEATURE_RESIZE_EXT)) {
+vnc_desktop_resize_ext(vs, 0);
+}
 }
 }
 
-- 
2.29.2




Re: [Qemu-devel][PATCH] x86/cpu: Use max host physical address if -cpu max option is applied

2021-01-25 Thread Paolo Bonzini

On 25/01/21 08:10, Nathan Chancellor wrote:

This patch as commit 5a140b255d ("x86/cpu: Use max host physical address
if -cpu max option is applied") prevents me from using '-cpu host' while
booting an i386_defconfig kernel.

$ qemu-system-i386 \
-append console=ttyS0 \
-cpu host \
-display none \
-enable-kvm \
-initrd rootfs.cpio \
-kernel bzImage \
-serial mon:stdio
qemu-system-i386: phys-bits should be between 32 and 36  (but is 48)

Am I expected to pass "-cpu host,host-phys-bits=false" now or did this
do something unexpected?


Yes, it's setting the LM bit for a 32-bit guest.

Does this work for you?

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 72a79e6019..70df57337f 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -5081,6 +5081,11 @@ static uint64_t x86_cpu_get_supported_feature_word(FeatureWord w,
     } else {
         return ~0;
     }
+#ifdef TARGET_I386
+    if (wi->cpuid.eax == 0x80000001) {
+        r &= ~CPUID_EXT2_LM;
+    }
+#endif
     if (migratable_only) {
         r &= x86_cpu_get_migratable_flags(w);
     }


Paolo


Hi, Nathan,
Could you try Paolo's latest patch?

[PULL 03/31] x86/cpu: Use max host physical address if -cpu max option is 
applied


Hi Yang,

That is the version of the patch I tried, which has been pulled into the
master branch.

Cheers,
Nathan






Re: [PATCH v2 18/25] hw/sd: ssi-sd: Bump up version ids of VMStateDescription

2021-01-25 Thread Dr. David Alan Gilbert
* Bin Meng (bmeng...@gmail.com) wrote:
> On Mon, Jan 25, 2021 at 12:59 AM Philippe Mathieu-Daudé  
> wrote:
> >
> > On 1/23/21 11:40 AM, Bin Meng wrote:
> > > From: Bin Meng 
> > >
> > > With all these fixes and improvements, there is no way for the
> > > VMStateDescription to keep backward compatibility. We will have
> > > to bump up version ids.
> >
> > Unfortunately this breaks bisectability (think about downstream
> > distributions cherry-picking patches individually).
> >
> > I don't think there is a problem increasing 2 -> 3 -> 4 -> 5
> > (Cc'ed David in case). Could you respin increasing the version
> > on each VMState change?
> >
> 
> I definitely could be wrong, but the reason I posted a single patch to
> bump the version is that I was under the impression that in each big
> release (like here 5.2.0 -> 6.0.0), the version id for incompatible
> changes should be bumped up once.
> It does not look correct to me that in a big release we bump up the
> version id 10 times.

I think I agree; I don't think we've ever done it incrementally like
that before.

It would only break bisectability if you were cross-version migrating
during the bisect, which is rare.

> Since this is a series to fix issues in the ssi-sd, I don't think it's
> practical for downstream to just cherry-pick some commits while
> leaving some other commits there.

Never underestimate downstream :-)
However, please add a comment when you're doing incrementals like this -
e.g. a TODO or something showing that it's unfinished and you need the
remaining patches so people don't do it accidentally.

Dave

> Regards,
> Bin
> 
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




[PATCH v2 0/2] vnc: unbreak older gtk-vnc

2021-01-25 Thread Gerd Hoffmann


Gerd Hoffmann (2):
  Revert "vnc: move initialization to framebuffer_update_request"
  vnc: send extended desktop resize on update requests

 ui/vnc.c | 14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)

-- 
2.29.2





[Bug 1886793] Re: "go install" command fails while running inside s390x docker container on x86_64 host using qemu

2021-01-25 Thread Nirman Narang
I used the command "docker run --rm --privileged multiarch/qemu-user-static 
--reset -p yes".
This pulls the latest one automatically.

https://bugs.launchpad.net/bugs/1886793

Title:
  "go install" command fails while running inside s390x docker container
  on x86_64 host using qemu

Status in QEMU:
  Incomplete

Bug description:
  Steps to reproduce the issue:

  Register x86_64 host with the latest qemu-user-static.
  docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

  Build the following Docker Image using following Dockerfile.s390x
  using command docker build -t test/crossbuild:latest-s390x -f
  Dockerfile.s390x .

  Dockerfile.s390x

  ##
  FROM alpine:3.11 as qemu
  ARG QEMU_VERSION=5.0.0-2
  ARG QEMU_ARCHS="s390x"
  RUN apk --update add curl
  #Enable non-native runs on amd64 architecture hosts
  RUN for i in ${QEMU_ARCHS}; do curl -L 
https://github.com/multiarch/qemu-user-static/releases/download/v${QEMU_VERSION}/qemu-${i}-static.tar.gz
 | tar zxvf - -C /usr/bin; done
  RUN chmod +x /usr/bin/qemu-*

  FROM s390x/golang:1.14.2-alpine3.11
  MAINTAINER LoZ Open Source Ecosystem 
(https://www.ibm.com/developerworks/community/groups/community/lozopensource)

  ARG MANIFEST_TOOL_VERSION=v1.0.2

  #Enable non-native builds of this image on an amd64 hosts.
  #This must be the first RUN command in this file!
  COPY --from=qemu /usr/bin/qemu-*-static /usr/bin/

  #Install su-exec for use in the entrypoint.sh (so processes run as the right 
user)
  #Install bash for the entry script (and because it's generally useful)
  #Install curl to download glide
  #Install git for fetching Go dependencies
  #Install ssh for fetching Go dependencies
  #Install mercurial for fetching go dependencies
  #Install wget since it's useful for fetching
  #Install make for building things
  #Install util-linux for column command (used for output formatting).
  #Install grep and sed for use in some Makefiles (e.g. pulling versions out of 
glide.yaml)
  #Install shadow for useradd (it allows to use big UID)
  RUN apk update && apk add --no-cache su-exec curl bash git openssh mercurial 
make wget util-linux tini file grep sed shadow
  RUN apk upgrade --no-cache

  #Disable ssh host key checking
  RUN echo 'Host *' >> /etc/ssh/ssh_config \
    && echo 'StrictHostKeyChecking no' >> /etc/ssh/ssh_config

  #Disable cgo so that binaries we build will be fully static.
  ENV CGO_ENABLED=0

  #Recompile the standard library with cgo disabled.  This prevents the 
standard library from being
  #marked stale, causing full rebuilds every time.
  RUN go install -v std

  #Install glide
  RUN go get github.com/Masterminds/glide
  ENV GLIDE_HOME /home/user/.glide

  #Install dep
  RUN go get github.com/golang/dep/cmd/dep

  #Install ginkgo CLI tool for running tests
  RUN go get github.com/onsi/ginkgo/ginkgo

  #Install linting tools.
  RUN wget -O - -q 
https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s 
v1.20.0
  RUN golangci-lint --version

  #Install license checking tool.
  RUN go get github.com/pmezard/licenses

  #Install tool to merge coverage reports.
  RUN go get github.com/wadey/gocovmerge

  #Install CLI tool for working with yaml files
  RUN go get github.com/mikefarah/yaml

  #Delete all the Go sources that were downloaded, we only rely on the binaries
  RUN rm -rf /go/src/*

  #Install vgo (should be removed once we take Go 1.11)
  RUN go get -u golang.org/x/vgo

  #Ensure that everything under the GOPATH is writable by everyone
  RUN chmod -R 777 $GOPATH

  RUN curl -sSL 
https://github.com/estesp/manifest-tool/releases/download/${MANIFEST_TOOL_VERSION}/manifest-tool-linux-s390x
 > manifest-tool && \
  chmod +x manifest-tool && \
  mv manifest-tool /usr/bin/

  COPY entrypoint.sh /usr/local/bin/entrypoint.sh
  ENTRYPOINT ["/sbin/tini", "--", "/usr/local/bin/entrypoint.sh"]
  ##

  
  The build just hangs at RUN go install -v std


  Also, while running the same command inside s390x container on x86_64
  host, error Illegal instruction (core dumped) is thrown.

  Register x86_64 host with the latest qemu-user-static.
  docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

  docker run -it -v /home/test/qemu-s390x-static:/usr/bin/qemu-s390x-
  static s390x/golang:1.14.2-alpine3.11

  Inside s390x container:

  apk update && apk add --no-cache su-exec curl bash git openssh mercurial make 
wget util-linux tini file grep sed shadow
  apk upgrade --no-cache

  #Disable cgo so that binaries we build will be fully static.
  export CGO_ENABLED=0
  go install -v std

  
  This gives the following error:
  Illegal instruction (core dumped)

  
  Environment:
  x86_64 Ub18.04 4.15.0-101-generic Ubuntu SMP x86_64 GNU/Linux

  QEMU user static version

Re: [PATCH v2 18/25] hw/sd: ssi-sd: Bump up version ids of VMStateDescription

2021-01-25 Thread Bin Meng
On Mon, Jan 25, 2021 at 6:41 PM Dr. David Alan Gilbert
 wrote:
>
> * Bin Meng (bmeng...@gmail.com) wrote:
> > On Mon, Jan 25, 2021 at 12:59 AM Philippe Mathieu-Daudé  
> > wrote:
> > >
> > > On 1/23/21 11:40 AM, Bin Meng wrote:
> > > > From: Bin Meng 
> > > >
> > > > With all these fixes and improvements, there is no way for the
> > > > VMStateDescription to keep backward compatibility. We will have
> > > > to bump up version ids.
> > >
> > > Unfortunately this breaks bisectability (think about downstream
> > > distributions cherry-picking patches individually).
> > >
> > > I don't think there is a problem increasing 2 -> 3 -> 4 -> 5
> > > (Cc'ed David in case). Could you respin increasing the version
> > > on each VMState change?
> > >
> >
> > I definitely could be wrong, but the reason I posted a single patch to
> > bump the version is that I was under the impression that in each big
> > release (like here 5.2.0 -> 6.0.0), the version id for incompatible
> > changes should be bumped up once.
> > It does not look correct to me that in a big release we bump up the
> > version id 10 times.
>
> I think I agree; I don't think we've ever done it incrementally like
> that before.
>

Thanks David.

> It would only break bisectability if you were cross-version migrating
> during the bisect, which is rare.
>
> > Since this is a series to fix issues in the ssi-sd, I don't think it's
> > practical for downstream to just cherry-pick some commits while
> > leaving some other commits there.
>
> Never underestimate downstream :-)
> However, please add a comment when you're doing incrementals like this -
> e.g. a TODO or something showing that it's unfinished and you need the
> remaining patches so people don't do it accidentally.
>

Sure, next time :)

Philippe, I guess we will need to hold off on your PR?
http://patchwork.ozlabs.org/project/qemu-devel/list/?series=226135

Regards,
Bin
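
For reference, a minimal sketch of how the two version fields of a
VMStateDescription interact; the device state below is made up and is not
the actual ssi-sd layout:

    #include "qemu/osdep.h"
    #include "migration/vmstate.h"

    /* Outgoing streams are tagged with .version_id; incoming streams older
     * than .minimum_version_id are rejected on load.  Setting both to the
     * new value is what "no backward compatibility" means in this thread. */
    typedef struct ExampleState {
        uint32_t mode;
        uint32_t arg;
    } ExampleState;

    static const VMStateDescription vmstate_example = {
        .name = "example-device",
        .version_id = 5,             /* bump per set of incompatible changes */
        .minimum_version_id = 5,     /* refuse streams produced before the bump */
        .fields = (VMStateField[]) {
            VMSTATE_UINT32(mode, ExampleState),
            VMSTATE_UINT32(arg, ExampleState),
            VMSTATE_END_OF_LIST()
        }
    };

Whether that bump happens once per release or once per patch is exactly the
question being discussed here.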



Re: [RFC PATCH v3 4/6] ebpf: Added eBPF RSS loader.

2021-01-25 Thread Yuri Benditovich
On Mon, Jan 25, 2021 at 11:03 AM Jason Wang  wrote:
>
>
> On 2021/1/19 10:53 PM, Yuri Benditovich wrote:
> > On Fri, Jan 15, 2021 at 9:02 AM Jason Wang  wrote:
> >>
> >> On 2021/1/15 5:16 AM, Andrew Melnychenko wrote:
> >>> From: Andrew 
> >>>
> >>> Added function that loads RSS eBPF program.
> >>> Added stub functions for RSS eBPF loader.
> >>> Added meson and configuration options.
> >>>
> >>> By default, eBPF feature enabled if libbpf is present in the build system.
> >>> libbpf checked in configuration shell script and meson script.
> >>>
> >>> Signed-off-by: Yuri Benditovich 
> >>> Signed-off-by: Andrew Melnychenko 
> >>> ---
> >>>configure   |  33 
> >>>ebpf/ebpf_rss-stub.c|  40 
> >>>ebpf/ebpf_rss.c | 165 +
> >>>ebpf/ebpf_rss.h |  44 +
> >>>ebpf/meson.build|   1 +
> >>>ebpf/rss.bpf.skeleton.h | 397 
> >>>ebpf/trace-events   |   4 +
> >>>ebpf/trace.h|   2 +
> >>>meson.build |  13 ++
> >>>9 files changed, 699 insertions(+)
> >>>create mode 100644 ebpf/ebpf_rss-stub.c
> >>>create mode 100644 ebpf/ebpf_rss.c
> >>>create mode 100644 ebpf/ebpf_rss.h
> >>>create mode 100644 ebpf/meson.build
> >>>create mode 100644 ebpf/rss.bpf.skeleton.h
> >>>create mode 100644 ebpf/trace-events
> >>>create mode 100644 ebpf/trace.h
> >>>
> >>> diff --git a/configure b/configure
> >>> index 5860bdb77b..9d18e941f5 100755
> >>> --- a/configure
> >>> +++ b/configure
> >>> @@ -342,6 +342,7 @@ vhost_vsock="$default_feature"
> >>>vhost_user="no"
> >>>vhost_user_blk_server="auto"
> >>>vhost_user_fs="$default_feature"
> >>> +bpf=""
> >>>kvm="auto"
> >>>hax="auto"
> >>>hvf="auto"
> >>> @@ -1236,6 +1237,10 @@ for opt do
> >>>  ;;
> >>>  --enable-membarrier) membarrier="yes"
> >>>  ;;
> >>> +  --disable-bpf) bpf="no"
> >>> +  ;;
> >>> +  --enable-bpf) bpf="yes"
> >>> +  ;;
> >>>  --disable-blobs) blobs="false"
> >>>  ;;
> >>>  --with-pkgversion=*) pkgversion="$optarg"
> >>> @@ -1845,6 +1850,7 @@ disabled with --disable-FEATURE, default is enabled 
> >>> if available
> >>>  vhost-user  vhost-user backend support
> >>>  vhost-user-blk-servervhost-user-blk server support
> >>>  vhost-vdpa  vhost-vdpa kernel backend support
> >>> +  bpf BPF kernel support
> >>>  spice   spice
> >>>  rbd rados block device (rbd)
> >>>  libiscsiiscsi support
> >>> @@ -5057,6 +5063,30 @@ else
> >>>membarrier=no
> >>>fi
> >>>
> >>> +##
> >>> +# check for usable bpf system call
> >>> +if test "$bpf" = ""; then
> >>
> >> This implies the bpf is enabled by default?
> > Yes, assuming libbpf-devel present and bpf system call defined.
> >
> > Any problem with it?
>
>
> It means the configure will fail if libbpf is not installed. Considering
> libbpf is not very common at the current stage, I think it's better to
> make it auto or disabled by default.
>
>
> >
> >>
> >>> +have_bpf=no
> >>> +if test "$linux" = "yes" -a "$bigendian" != "yes"; then
> >>> +cat > $TMPC << EOF
> >>> +#include 
> >>> +#include 
> >>> +int main(void) {
> >>> +struct bpf_object *obj = NULL;
> >>> +bpf_object__load(obj);
> >>> +exit(0);
> >>> +}
> >>> +EOF
> >>> +if compile_prog "" "-lbpf" ; then
> >>> +have_bpf=yes
> >>> +bpf=yes
> >>> +fi
> >>> +fi
> >>> +if test "$have_bpf" = "no"; then
> >>> +  feature_not_found "bpf" "the libbpf is not available"
> >>> +fi

Yes, this is a mistake. These 3 lines are to be removed.
If libbpf is not installed, this should not fail the build.

> >>> +fi
> >>> +
> >>>##
> >>># check if rtnetlink.h exists and is useful
> >>>have_rtnetlink=no
> >>> @@ -5905,6 +5935,9 @@ fi
> >>>if test "$membarrier" = "yes" ; then
> >>>  echo "CONFIG_MEMBARRIER=y" >> $config_host_mak
> >>>fi
> >>> +if test "$bpf" = "yes" -a "$bigendian" != "yes" -a "$linux" = "yes" ; 
> >>> then
> >>> +  echo "CONFIG_EBPF=y" >> $config_host_mak
> >>> +fi
> >>>if test "$signalfd" = "yes" ; then
> >>>  echo "CONFIG_SIGNALFD=y" >> $config_host_mak
> >>>fi
> >>> diff --git a/ebpf/ebpf_rss-stub.c b/ebpf/ebpf_rss-stub.c
> >>> new file mode 100644
> >>> index 00..e71e229190
> >>> --- /dev/null
> >>> +++ b/ebpf/ebpf_rss-stub.c
> >>> @@ -0,0 +1,40 @@
> >>> +/*
> >>> + * eBPF RSS stub file
> >>> + *
> >>> + * Developed by Daynix Computing LTD (http://www.daynix.com)
> >>> + *
> >>> + * Authors:
> >>> + *  Yuri Benditovich 
> >>> + *
> >>> + * This work is licensed under the terms of the GNU GPL, version 2.  See
> >>> + * the COPYING file in the top-level directory.
> >>> + */
> >>> +
> >>> +#include "qemu/osdep.h"
> >>> +#include "ebpf/ebpf_rss.h"
>

Re: [PATCH] coroutine-sigaltstack: Keep SIGUSR2 handler up

2021-01-25 Thread Max Reitz

On 23.01.21 01:41, Laszlo Ersek wrote:

On 01/22/21 22:26, Laszlo Ersek wrote:


I'm drifting towards an overhaul of coroutine-sigaltstack, based on my
personal understanding of POSIX, but given that I can absolutely not
*test* coroutine-sigaltstack on the platforms where it actually matters,
an "overhaul" by me would be reckless.

I didn't expect these skeletons when I first read Max's "Thread safety
of coroutine-sigaltstack" email :/

Max, after having worked on top of your patch for a few hours, I
officially endorse your mutex approach. I can't encourage you or myself
to touch this code, in good conscience. It's not that it's "bad"; it's
inexplicable and (to me) untestable.


On one hand, that’s too bad; on the other perhaps it’s just for the 
better to get all of this out of our minds again (for now)... O:)



I'm attaching a patch (based on 0e3246263068). I'm not convinced that I
should take responsibility for this, given the lack of testability on my
end. So I'm not posting it stand-alone even as an RFC. I've built it and
have booted one of my existent domains with it, but that's all.


FWIW, it looks good to me.  We should keep it in mind if in the future 
for some reason sigaltstack becomes more important, but for now I’m not 
too sad to abort any improvement efforts.


Max




[PATCH v2] target/mips: fetch code with translator_ld

2021-01-25 Thread Philippe Mathieu-Daudé
Similarly to commits ae82adc8e29..7f93879e444, use the
translator_ld*() API introduced in commit 409c1a0bf0f
to fetch the code on the MIPS target.

Reviewed-by: Jiaxun Yang  
Signed-off-by: Philippe Mathieu-Daudé 
---
v2: Restrict to translator path =)
---
 target/mips/translate.c | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/target/mips/translate.c b/target/mips/translate.c
index a5cf1742a8b..a6e835809aa 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -26,7 +26,7 @@
 #include "cpu.h"
 #include "internal.h"
 #include "tcg/tcg-op.h"
-#include "exec/cpu_ldst.h"
+#include "exec/translator.h"
 #include "exec/helper-proto.h"
 #include "exec/helper-gen.h"
 #include "hw/semihosting/semihost.h"
@@ -13911,7 +13911,7 @@ static void decode_i64_mips16(DisasContext *ctx,
 
 static int decode_extended_mips16_opc(CPUMIPSState *env, DisasContext *ctx)
 {
-int extend = cpu_lduw_code(env, ctx->base.pc_next + 2);
+int extend = translator_lduw(env, ctx->base.pc_next + 2);
 int op, rx, ry, funct, sa;
 int16_t imm, offset;
 
@@ -14161,7 +14161,7 @@ static int decode_mips16_opc(CPUMIPSState *env, 
DisasContext *ctx)
 /* No delay slot, so just process as a normal instruction */
 break;
 case M16_OPC_JAL:
-offset = cpu_lduw_code(env, ctx->base.pc_next + 2);
+offset = translator_lduw(env, ctx->base.pc_next + 2);
 offset = (((ctx->opcode & 0x1f) << 21)
   | ((ctx->opcode >> 5) & 0x1f) << 16
   | offset) << 2;
@@ -16295,7 +16295,7 @@ static void decode_micromips32_opc(CPUMIPSState *env, 
DisasContext *ctx)
 uint32_t op, minor, minor2, mips32_op;
 uint32_t cond, fmt, cc;
 
-insn = cpu_lduw_code(env, ctx->base.pc_next + 2);
+insn = translator_lduw(env, ctx->base.pc_next + 2);
 ctx->opcode = (ctx->opcode << 16) | insn;
 
 rt = (ctx->opcode >> 21) & 0x1f;
@@ -21350,7 +21350,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, 
DisasContext *ctx)
 int offset;
 int imm;
 
-insn = cpu_lduw_code(env, ctx->base.pc_next + 2);
+insn = translator_lduw(env, ctx->base.pc_next + 2);
 ctx->opcode = (ctx->opcode << 16) | insn;
 
 rt = extract32(ctx->opcode, 21, 5);
@@ -21469,7 +21469,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, 
DisasContext *ctx)
 break;
 case NM_P48I:
 {
-insn = cpu_lduw_code(env, ctx->base.pc_next + 4);
+insn = translator_lduw(env, ctx->base.pc_next + 4);
 target_long addr_off = extract32(ctx->opcode, 0, 16) | insn << 16;
 switch (extract32(ctx->opcode, 16, 5)) {
 case NM_LI48:
@@ -29087,17 +29087,17 @@ static void mips_tr_translate_insn(DisasContextBase 
*dcbase, CPUState *cs)
 
 is_slot = ctx->hflags & MIPS_HFLAG_BMASK;
 if (ctx->insn_flags & ISA_NANOMIPS32) {
-ctx->opcode = cpu_lduw_code(env, ctx->base.pc_next);
+ctx->opcode = translator_lduw(env, ctx->base.pc_next);
 insn_bytes = decode_nanomips_opc(env, ctx);
 } else if (!(ctx->hflags & MIPS_HFLAG_M16)) {
-ctx->opcode = cpu_ldl_code(env, ctx->base.pc_next);
+ctx->opcode = translator_ldl(env, ctx->base.pc_next);
 insn_bytes = 4;
 decode_opc(env, ctx);
 } else if (ctx->insn_flags & ASE_MICROMIPS) {
-ctx->opcode = cpu_lduw_code(env, ctx->base.pc_next);
+ctx->opcode = translator_lduw(env, ctx->base.pc_next);
 insn_bytes = decode_micromips_opc(env, ctx);
 } else if (ctx->insn_flags & ASE_MIPS16) {
-ctx->opcode = cpu_lduw_code(env, ctx->base.pc_next);
+ctx->opcode = translator_lduw(env, ctx->base.pc_next);
 insn_bytes = decode_mips16_opc(env, ctx);
 } else {
 gen_reserved_instruction(ctx);
-- 
2.26.2




Re: [PATCH] tcg: Increase the static number of temporaries

2021-01-25 Thread Laurent Vivier
On 25/01/2021 at 10:31, Alex Bennée wrote:
> 
> Laurent Vivier  writes:
> 
>> On 23/01/2021 at 17:19, Laurent Vivier wrote:
>>> On 21/01/2021 at 03:54, Richard Henderson wrote:
 This isn't a total or permanent solution to the problem of running
 out of temporaries, but it puts off the issue for a bit.

 Make the assert in tcg_temp_alloc unconditional.  If we do run out
 of temps, this can fail much later as a weird SIGSEGV, due to the
 buffer overrun of the temp array.

 Remove the inlines from tcg_temp_alloc and tcg_global_alloc.

 Buglink: https://bugs.launchpad.net/bugs/1912065
 Signed-off-by: Richard Henderson 
 ---

 There are more bugs that need fixing in order to actually make
 the dynamic allocation scheme work.  Rather than keep this bug
 pending longer, hack around it and make the SEGV an ABRT.

 r~

 ---
  include/tcg/tcg.h | 2 +-
  tcg/tcg.c | 6 +++---
  2 files changed, 4 insertions(+), 4 deletions(-)

 diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
 index c5a9d65d5f..0187de1352 100644
 --- a/include/tcg/tcg.h
 +++ b/include/tcg/tcg.h
 @@ -275,7 +275,7 @@ typedef struct TCGPool {
  
  #define TCG_POOL_CHUNK_SIZE 32768
  
 -#define TCG_MAX_TEMPS 512
 +#define TCG_MAX_TEMPS 1024
>>>
>>> This seems not enough, I have:
>>>
>>> ERROR:.../tcg/tcg.c:1210:tcg_temp_alloc: assertion failed: (n < 
>>> TCG_MAX_TEMPS)
>>> Bail out! ERROR:.../tcg/tcg.c:1210:tcg_temp_alloc: assertion failed: (n < 
>>> TCG_MAX_TEMPS)
>>>
>>> With my branch m68k-virt (68040 processor with virtio-mmio devices) booting 
>>> debian sid.
>>
>> The cause of the overflow is this sequence:
>>
>> 
>> IN:
>> 0xc0f0520c:  movel %a5,%sp@-
>> 0xc0f0520e:  lea %pc@(0xc1805000),%a5
>> 0xc0f05216:  moveal %sp@(12),%a0
>> 0xc0f0521a:  movel %a5@(61a0),%a0@
>> 0xc0f05222:  movel %a5@(4fd4),%a0@(180)
>> 0xc0f0522c:  movel %a5@(8af4),%a0@(184)
>> 0xc0f05236:  movel %a5@(6328),%a0@(188)
>> 0xc0f05240:  movel %a5@(2530),%a0@(172)
>> 0xc0f0524a:  movel %a5@(61fc),%a0@(176)
>> 0xc0f05254:  movel %a5@(5848),%a0@(24)
>> 0xc0f0525e:  movel %a5@(d398),%a0@(28)
>> 0xc0f05268:  movel %a5@(53ec),%a0@(32)
>> 0xc0f05272:  movel %a5@(17774),%a0@(36)
>> 0xc0f0527c:  movel %a5@(f748),%a0@(40)
>> 0xc0f05286:  movel %a5@(551c),%a0@(44)
>> 0xc0f05290:  movel %a5@(36ac),%a0@(4)
>> 0xc0f0529a:  movel %a5@(68a0),%a0@(8)
>> 0xc0f052a4:  movel %a5@(54c0),%a0@(12)
>> 0xc0f052ae:  movel %a5@(4354),%a0@(16)
>> 0xc0f052b8:  movel %a5@(5eb0),%a0@(48)
>> 0xc0f052c2:  movel %a5@(5ee4),%a0@(52)
>> 0xc0f052cc:  movel %a5@(5894),%a0@(68)
>> 0xc0f052d6:  movel %a5@(5924),%a0@(72)
>> 0xc0f052e0:  movel %a5@(c8fc),%a0@(76)
>> 0xc0f052ea:  movel %a5@(3248),%a0@(80)
>> 0xc0f052f4:  movel %a5@(bcd0),%a0@(84)
>> 0xc0f052fe:  movel %a5@(9a38),%a0@(88)
>> 0xc0f05308:  movel %a5@(e2e8),%a0@(92)
>> 0xc0f05312:  movel %a5@(dd50),%a0@(96)
>> 0xc0f0531c:  movel %a5@(62b0),%a0@(100)
>> 0xc0f05326:  movel %a5@(20a0),%a0@(104)
>> 0xc0f05330:  movel %a5@(527c),%a0@(108)
>> 0xc0f0533a:  movel %a5@(41ec),%a0@(112)
>> 0xc0f05344:  movel %a5@(33dc),%a0@(116)
>> 0xc0f0534e:  movel %a5@(423c),%a0@(120)
>> 0xc0f05358:  movel %a5@(9728),%a0@(124)
>> 0xc0f05362:  movel %a5@(3fe4),%a0@(128)
>> 0xc0f0536c:  movel %a5@(1018c),%a0@(132)
>> 0xc0f05376:  movel %a5@(7b54),%a0@(136)
>> 0xc0f05380:  movel %a5@(121e8),%a0@(140)
>> 0xc0f0538a:  movel %a5@(550c),%a0@(144)
>> 0xc0f05394:  movel %a5@(b4a8),%a0@(148)
>> 0xc0f0539e:  movel %a5@(6a20),%a0@(152)
>> 0xc0f053a8:  movel %a5@(56e0),%a0@(156)
>> 0xc0f053b2:  movel %a5@(10c9c),%a0@(160)
>> 0xc0f053bc:  movel %a5@(a4e8),%a0@(164)
>> 0xc0f053c6:  movel %a5@(9d58),%a0@(168)
>> 0xc0f053d0:  movel %a5@(6e2c),%a0@(224)
>> 0xc0f053da:  movel %a5@(4e28),%a0@(228)
>> 0xc0f053e4:  movel %a5@(152e0),%a0@(232)
>> 0xc0f053ee:  movel %a5@(7e3c),%a0@(236)
>> 0xc0f053f8:  movel %a5@(15b10),%a0@(240)
>> 0xc0f05402:  movel %a5@(4578),%a0@(244)
>> 0xc0f0540c:  movel %a5@(11e98),%a0@(248)
>> 0xc0f05416:  movel %a5@(44b8),%a0@(252)
>> 0xc0f05420:  movel %a5@(72a0),%a0@(504)
>> 0xc0f0542a:  movel %a5@(308c),%a0@(508)
>> 0xc0f05434:  movel %a5@(4f40),%a0@(512)
>> 0xc0f0543e:  movel %a5@(8c04),%a0@(516)
>> 0xc0f05448:  movel %a5@(b328),%a0@(520)
>> 0xc0f05452:  movel %a5@(4e40),%a0@(524)
>> 0xc0f0545c:  movel %a5@(4954),%a0@(528)
>> 0xc0f05466:  movel %a5@(14f48),%a0@(532)
>> 0xc0f05470:  movel %a5@(15c2c),%a0@(536)
>> 0xc0f0547a:  movel %a5@(4bf0),%a0@(540)
>> 0xc0f05484:  movel %a5@(66b4),%a0@(544)
>> 0xc0f0548e:  movel %a5@(3768),%a0@(548)
>> 0xc0f05498:  movel %a5@(111e4),%a0@(552)
>> 0xc0f054a2:  movel %a5@(414c),%a0@(556)
>> 0xc0f054ac:  movel %a5@(14eb8),%a0@(560)
>> 0xc0f054b6:  movel %a5@(6fec),%a0@(564)
>> 0xc0f054c0:  movel %a5@(48c0),%a0@(568)
>> 0xc0f054ca:  movel %a5@(4494),%a0@(572)
>> 0xc0f054d4:  movel %a5@(7534),%a0@(576)
>> 0xc0f054de:  movel %a5@(c1ec),%a0@(580)
>> 0xc0f054e8:  movel %a5@(636c),%a0@

[PATCH] trace: add meson custom_target() depend_files for tracetool

2021-01-25 Thread Stefan Hajnoczi
Re-generate tracetool output when the tracetool source code changes. Use
the same approach as qapi_gen_depends and introduce a tracetool_depends
files list so meson is aware of the dependencies.

Signed-off-by: Stefan Hajnoczi 
---
 meson.build   | 28 +++-
 trace/meson.build | 21 ++---
 2 files changed, 41 insertions(+), 8 deletions(-)

diff --git a/meson.build b/meson.build
index 35a9eddf5c..3909d6b4c1 100644
--- a/meson.build
+++ b/meson.build
@@ -1626,6 +1626,31 @@ tracetool = [
   python, files('scripts/tracetool.py'),
'--backend=' + config_host['TRACE_BACKENDS']
 ]
+tracetool_depends = files(
+  'scripts/tracetool/backend/log.py',
+  'scripts/tracetool/backend/__init__.py',
+  'scripts/tracetool/backend/dtrace.py',
+  'scripts/tracetool/backend/ftrace.py',
+  'scripts/tracetool/backend/simple.py',
+  'scripts/tracetool/backend/syslog.py',
+  'scripts/tracetool/backend/ust.py',
+  'scripts/tracetool/format/tcg_h.py',
+  'scripts/tracetool/format/ust_events_c.py',
+  'scripts/tracetool/format/ust_events_h.py',
+  'scripts/tracetool/format/__init__.py',
+  'scripts/tracetool/format/d.py',
+  'scripts/tracetool/format/tcg_helper_c.py',
+  'scripts/tracetool/format/simpletrace_stap.py',
+  'scripts/tracetool/format/c.py',
+  'scripts/tracetool/format/h.py',
+  'scripts/tracetool/format/tcg_helper_h.py',
+  'scripts/tracetool/format/log_stap.py',
+  'scripts/tracetool/format/stap.py',
+  'scripts/tracetool/format/tcg_helper_wrapper_h.py',
+  'scripts/tracetool/__init__.py',
+  'scripts/tracetool/transform.py',
+  'scripts/tracetool/vcpu.py'
+)
 
 qemu_version_cmd = [find_program('scripts/qemu-version.sh'),
 meson.current_source_dir(),
@@ -2192,7 +2217,8 @@ foreach target : target_dirs
 '--target-type=' + target_type,
 '--probe-prefix=qemu.' + target_type + '.' + 
target_name,
 '@INPUT@', '@OUTPUT@'
-  ])
+  ],
+  depend_files: tracetool_depends)
   endforeach
 endif
   endforeach
diff --git a/trace/meson.build b/trace/meson.build
index a0be8f9b0d..08f83a15c3 100644
--- a/trace/meson.build
+++ b/trace/meson.build
@@ -12,17 +12,20 @@ foreach dir : [ '.' ] + trace_events_subdirs
   trace_h = custom_target(fmt.format('trace', 'h'),
   output: fmt.format('trace', 'h'),
   input: trace_events_file,
-  command: [ tracetool, group, '--format=h', 
'@INPUT@', '@OUTPUT@' ])
+  command: [ tracetool, group, '--format=h', 
'@INPUT@', '@OUTPUT@' ],
+  depend_files: tracetool_depends)
   genh += trace_h
   trace_c = custom_target(fmt.format('trace', 'c'),
   output: fmt.format('trace', 'c'),
   input: trace_events_file,
-  command: [ tracetool, group, '--format=c', 
'@INPUT@', '@OUTPUT@' ])
+  command: [ tracetool, group, '--format=c', 
'@INPUT@', '@OUTPUT@' ],
+  depend_files: tracetool_depends)
   if 'CONFIG_TRACE_UST' in config_host
 trace_ust_h = custom_target(fmt.format('trace-ust', 'h'),
 output: fmt.format('trace-ust', 'h'),
 input: trace_events_file,
-command: [ tracetool, group, 
'--format=ust-events-h', '@INPUT@', '@OUTPUT@' ])
+command: [ tracetool, group, 
'--format=ust-events-h', '@INPUT@', '@OUTPUT@' ],
+depend_files: tracetool_depends)
 trace_ss.add(trace_ust_h, lttng, urcubp)
 genh += trace_ust_h
   endif
@@ -31,7 +34,8 @@ foreach dir : [ '.' ] + trace_events_subdirs
 trace_dtrace = custom_target(fmt.format('trace-dtrace', 'dtrace'),
  output: fmt.format('trace-dtrace', 'dtrace'),
  input: trace_events_file,
- command: [ tracetool, group, '--format=d', 
'@INPUT@', '@OUTPUT@' ])
+ command: [ tracetool, group, '--format=d', 
'@INPUT@', '@OUTPUT@' ],
+ depend_files: tracetool_depends)
 trace_dtrace_h = custom_target(fmt.format('trace-dtrace', 'h'),
output: fmt.format('trace-dtrace', 'h'),
input: trace_dtrace,
@@ -66,7 +70,8 @@ foreach d : [
   gen = custom_target(d[0],
 output: d[0],
 input: meson.source_root() / 'trace-events',
-command: [ tracetool, '--group=root', 
'--format=@0@'.format(d[1]), '@INPUT@', '@OUTPUT@' ])
+command: [ tracetool, '--group=root', 
'--format=@0@'.format(d[1]), '@INPUT@', '@OUTPUT@' ],
+depend_files: tracetool_depends)
   specific_ss.add(when: 'CONFIG_TCG', if_

Re: [PATCH] util/log: flush TB cache when log level changes

2021-01-25 Thread Alex Bennée


Pavel Dovgalyuk  writes:

> On 22.01.2021 14:42, Alex Bennée wrote:
>> 
>> Pavel Dovgalyuk  writes:
>> 
>>> Sometimes we need to collect the translation logs starting
>>> from some point of the execution. Some TB listings may
>>> be missed in this case, when blocks were translated before.
>>> This patch clears TB cache to allow re-translation of such
>>> code blocks.
>>>
>>> Signed-off-by: Pavel Dovgalyuk 
>>> ---
>>>   accel/tcg/translate-all.c |8 
>>>   include/sysemu/tcg.h  |1 +
>>>   stubs/meson.build |1 +
>>>   stubs/tcg.c   |   12 
>>>   util/log.c|3 +++
>>>   5 files changed, 25 insertions(+)
>>>   create mode 100644 stubs/tcg.c
>>>
>>> diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
>>> index e9de6ff9dd..3acb227c57 100644
>>> --- a/accel/tcg/translate-all.c
>>> +++ b/accel/tcg/translate-all.c
>>> @@ -1461,6 +1461,14 @@ void tb_flush(CPUState *cpu)
>>>   }
>>>   }
>>>   
>>> +void tb_flush_all(void)
>>> +{
>>> +CPUState *cpu;
>>> +CPU_FOREACH(cpu) {
>>> +tb_flush(cpu);
>>> +}
>>> +}
>>> +
>> 
>> This isn't needed - tb_flush flushes all translations although it does
>> need to be executed in a CPU context to do so.
>> 
>>>   /*
>>>* Formerly ifdef DEBUG_TB_CHECK. These debug functions are 
>>> user-mode-only,
>>>* so in order to prevent bit rot we compile them unconditionally in 
>>> user-mode,
>>> diff --git a/include/sysemu/tcg.h b/include/sysemu/tcg.h
>>> index 00349fb18a..7415f11022 100644
>>> --- a/include/sysemu/tcg.h
>>> +++ b/include/sysemu/tcg.h
>>> @@ -9,6 +9,7 @@
>>>   #define SYSEMU_TCG_H
>>>   
>>>   void tcg_exec_init(unsigned long tb_size, int splitwx);
>>> +void tb_flush_all(void);
>>>   
>>>   #ifdef CONFIG_TCG
>>>   extern bool tcg_allowed;
>>> diff --git a/stubs/meson.build b/stubs/meson.build
>>> index 80b1d81a31..95e70f8542 100644
>>> --- a/stubs/meson.build
>>> +++ b/stubs/meson.build
>>> @@ -38,6 +38,7 @@ stub_ss.add(files('set-fd-handler.c'))
>>>   stub_ss.add(files('sysbus.c'))
>>>   stub_ss.add(files('target-get-monitor-def.c'))
>>>   stub_ss.add(files('target-monitor-defs.c'))
>>> +stub_ss.add(files('tcg.c'))
>>>   stub_ss.add(files('tpm.c'))
>>>   stub_ss.add(files('trace-control.c'))
>>>   stub_ss.add(files('uuid.c'))
>>> diff --git a/stubs/tcg.c b/stubs/tcg.c
>>> new file mode 100644
>>> index 00..775a748c77
>>> --- /dev/null
>>> +++ b/stubs/tcg.c
>>> @@ -0,0 +1,12 @@
>>> +/*
>>> + * TCG stubs
>>> + *
>>> + * This work is licensed under the terms of the GNU GPL, version 2 or 
>>> later.
>>> + * See the COPYING file in the top-level directory.
>>> + */
>>> +
>>> +#include "sysemu/tcg.h"
>>> +
>>> +void tb_flush_all(void)
>>> +{
>>> +}
>>> diff --git a/util/log.c b/util/log.c
>>> index 2ee1500bee..2ff342a91b 100644
>>> --- a/util/log.c
>>> +++ b/util/log.c
>>> @@ -26,6 +26,7 @@
>>>   #include "trace/control.h"
>>>   #include "qemu/thread.h"
>>>   #include "qemu/lockable.h"
>>> +#include "sysemu/tcg.h"
>>>   
>>>   static char *logfilename;
>>>   static QemuMutex qemu_logfile_mutex;
>>> @@ -84,6 +85,8 @@ void qemu_set_log(int log_flags)
>>>   #ifdef CONFIG_TRACE_LOG
>>>   qemu_loglevel |= LOG_TRACE;
>>>   #endif
>>> +tb_flush_all();
>>> +
>> 
>> I would call tb_flush(current_cpu) or first_cpu here. But two things:
>> 
>>   - I'm not sure you have a CPU at all times qemu_set_log is called
>>   - It seems overly aggressive to throw away all translations every time
>> the log level is changed. I would define a mask in log.h and have
>> something like:
>
> Do you propose removing the parameter from tb_flush or omitting the loop
> from tb_flush_all?

No, tb_flush should keep the CPU interface. In normal usage from the
emulation we always have a CPU to call. However for qemu_set_log you
will need to find a CPU to call, or bail out if you can't. Maybe
something like:

  CPUState *cpu = current_cpu ? current_cpu : first_cpu;
  if (cpu) {
      tb_flush(cpu);
  }

my only worry is if qemu_set_log is called from outside a CPU context
(current_cpu will always be NULL) while first_cpu is in an exclusive
region. We could extend cpu_in_exclusive_context to be:

  cpu == current_cpu && cpu->in_exclusive_context

but that seems a little icky to me. Paolo, any thoughts?

>
>>if (log_flags & LOG_TRANSLATION) {
>>tb_flush();
>>}
>> 
>>>   /*
>>>* In all cases we only log if qemu_loglevel is set.
>>>* Also:
>> 
>> 


-- 
Alex Bennée



Re: [PATCH v2 1/1] spapr_caps.c: check user input before warning about TCG only caps

2021-01-25 Thread Daniel Henrique Barboza




On 1/22/21 10:46 PM, David Gibson wrote:

On Wed, Jan 20, 2021 at 07:54:06AM -0300, Daniel Henrique Barboza wrote:

Commit 006e9d361869 added warning messages for cap-cfpc, cap-ibs and
cap-sbbc when enabled under TCG. Commit 8ff43ee404d3 did the same thing
when introducing cap-ccf-assist.

These warning messages, although benign to the machine launch, can make
users a bit confused. E.g:

$ sudo ./ppc64-softmmu/qemu-system-ppc64
qemu-system-ppc64: warning: TCG doesn't support requested feature, 
cap-cfpc=workaround
qemu-system-ppc64: warning: TCG doesn't support requested feature, 
cap-sbbc=workaround
qemu-system-ppc64: warning: TCG doesn't support requested feature, 
cap-ibs=workaround
qemu-system-ppc64: warning: TCG doesn't support requested feature, 
cap-ccf-assist=on

We're complaining about "TCG doesn't support requested feature" when the
user didn't request any of those caps in the command line.

Check if these caps were set in the command line before sending a user
warning.

Signed-off-by: Daniel Henrique Barboza 


Oof.  I have real mixed feelings about this.

So, yes, the warnings are annoying, but they're not meaningless.  They
are really indicating that the guest environment is different from the
one you requested (implicitly, via the machine version). The fact that
they are only warnings, not hard errors, is already a compromise
because otherwise there would be no real way to use TCG at all with
current machines.

In short, the warnings are scary because they're *meant* to be scary.
TCG will not, and cannot, supply the Spectre mitigations that are
expected on a current machine type.


Quick story: I'm involved in helping folks in a local Brazilian college working
with QEMU in Power ([1] for more info). One user is trying to run a pseries TCG
guest in Windows 10, and he is having problems with the current pseries machine,
while he is still able to do it with the pseries-2.8 one (I'm actually surprised
that it works at all in Windows 10 TBH).

So this user asked me for help with this scenario because he didn't know how to
fix these TCG warnings, thinking that they had something to do
with the problem he is having with his use case. I said that these warnings
could be safely ignored for TCG.

These warnings are indeed scary, as you said. But not in a helpful way. Consider
that most QEMU warnings are a call to action for the user to fix something, e.g.
an option that's about to be deprecated or no longer supported. In this case
we're warning the user about something that is not the user's fault and that,
more importantly, they can do nothing about except ignore. And this user
interaction made me realize that it's not trivial to ignore warnings when your
use case is not working as intended. You will attempt to fix the warnings before
trying to open a developer bug and so on.

What we're doing here I would call a 'developer warning': something to remind
us, developers, that TCG does not implement the Spectre caps that are default
in the pseries machine. Well, I'd rather document somewhere (in tcg/README, or
perhaps create a hw/ppc/README, since this is a pseries-exclusive behavior)
that TCG ignores the default Spectre caps of the pseries machine than issue
warnings about it.




I agree that the current behaviour is pretty irritating, but I don't
know that silently pretending TCG can do what's normally expected of
that command line is a great option either.



I can send a patch to change the messages to say something like "this can be
safely ignored. Use -machine cap-X=broken to hide it". At least we will inform
TCG users that these warnings are not their fault and they shouldn't spend
their time trying to figure them out. But then, why issue a warning and tell
the user "this is a warning, please ignore me"?



Thanks,


DHB



[1] https://openpower.ic.unicamp.br/minicloud/





---
  hw/ppc/spapr_caps.c | 47 ++---
  1 file changed, 36 insertions(+), 11 deletions(-)

diff --git a/hw/ppc/spapr_caps.c b/hw/ppc/spapr_caps.c
index 9341e9782a..629c24a96d 100644
--- a/hw/ppc/spapr_caps.c
+++ b/hw/ppc/spapr_caps.c
@@ -244,9 +244,15 @@ static void cap_safe_cache_apply(SpaprMachineState *spapr, 
uint8_t val,
  uint8_t kvm_val =  kvmppc_get_cap_safe_cache();
  
  if (tcg_enabled() && val) {

-/* TCG only supports broken, allow other values and print a warning */
-warn_report("TCG doesn't support requested feature, cap-cfpc=%s",
-cap_cfpc_possible.vals[val]);
+/*
+ * TCG only supports broken, allow other values and print a warning
+ * in case the user attempted to set a different value in the command
+ * line.
+ */
+if (spapr->cmd_line_caps[SPAPR_CAP_CFPC] != SPAPR_CAP_BROKEN) {
+warn_report("TCG doesn't support requested feature, cap-cfpc=%s",
+cap_cfpc_possible.vals[val]);
+}
  } else if (kvm_enabled() && (val > kvm_val)) {

Re: [PATCH] trace: add meson custom_target() depend_files for tracetool

2021-01-25 Thread Paolo Bonzini

On 25/01/21 12:09, Stefan Hajnoczi wrote:

Re-generate tracetool output when the tracetool source code changes. Use
the same approach as qapi_gen_depends and introduce a tracetool_depends
files list so meson is aware of the dependencies.

Signed-off-by: Stefan Hajnoczi 
---
  meson.build   | 28 +++-
  trace/meson.build | 21 ++---
  2 files changed, 41 insertions(+), 8 deletions(-)

diff --git a/meson.build b/meson.build
index 35a9eddf5c..3909d6b4c1 100644
--- a/meson.build
+++ b/meson.build
@@ -1626,6 +1626,31 @@ tracetool = [
python, files('scripts/tracetool.py'),
 '--backend=' + config_host['TRACE_BACKENDS']
  ]
+tracetool_depends = files(
+  'scripts/tracetool/backend/log.py',
+  'scripts/tracetool/backend/__init__.py',
+  'scripts/tracetool/backend/dtrace.py',
+  'scripts/tracetool/backend/ftrace.py',
+  'scripts/tracetool/backend/simple.py',
+  'scripts/tracetool/backend/syslog.py',
+  'scripts/tracetool/backend/ust.py',
+  'scripts/tracetool/format/tcg_h.py',
+  'scripts/tracetool/format/ust_events_c.py',
+  'scripts/tracetool/format/ust_events_h.py',
+  'scripts/tracetool/format/__init__.py',
+  'scripts/tracetool/format/d.py',
+  'scripts/tracetool/format/tcg_helper_c.py',
+  'scripts/tracetool/format/simpletrace_stap.py',
+  'scripts/tracetool/format/c.py',
+  'scripts/tracetool/format/h.py',
+  'scripts/tracetool/format/tcg_helper_h.py',
+  'scripts/tracetool/format/log_stap.py',
+  'scripts/tracetool/format/stap.py',
+  'scripts/tracetool/format/tcg_helper_wrapper_h.py',
+  'scripts/tracetool/__init__.py',
+  'scripts/tracetool/transform.py',
+  'scripts/tracetool/vcpu.py'
+)
  
  qemu_version_cmd = [find_program('scripts/qemu-version.sh'),

  meson.current_source_dir(),
@@ -2192,7 +2217,8 @@ foreach target : target_dirs
  '--target-type=' + target_type,
  '--probe-prefix=qemu.' + target_type + '.' + 
target_name,
  '@INPUT@', '@OUTPUT@'
-  ])
+  ],
+  depend_files: tracetool_depends)
endforeach
  endif
endforeach
diff --git a/trace/meson.build b/trace/meson.build
index a0be8f9b0d..08f83a15c3 100644
--- a/trace/meson.build
+++ b/trace/meson.build
@@ -12,17 +12,20 @@ foreach dir : [ '.' ] + trace_events_subdirs
trace_h = custom_target(fmt.format('trace', 'h'),
output: fmt.format('trace', 'h'),
input: trace_events_file,
-  command: [ tracetool, group, '--format=h', 
'@INPUT@', '@OUTPUT@' ])
+  command: [ tracetool, group, '--format=h', 
'@INPUT@', '@OUTPUT@' ],
+  depend_files: tracetool_depends)
genh += trace_h
trace_c = custom_target(fmt.format('trace', 'c'),
output: fmt.format('trace', 'c'),
input: trace_events_file,
-  command: [ tracetool, group, '--format=c', 
'@INPUT@', '@OUTPUT@' ])
+  command: [ tracetool, group, '--format=c', 
'@INPUT@', '@OUTPUT@' ],
+  depend_files: tracetool_depends)
if 'CONFIG_TRACE_UST' in config_host
  trace_ust_h = custom_target(fmt.format('trace-ust', 'h'),
  output: fmt.format('trace-ust', 'h'),
  input: trace_events_file,
-command: [ tracetool, group, 
'--format=ust-events-h', '@INPUT@', '@OUTPUT@' ])
+command: [ tracetool, group, 
'--format=ust-events-h', '@INPUT@', '@OUTPUT@' ],
+depend_files: tracetool_depends)
  trace_ss.add(trace_ust_h, lttng, urcubp)
  genh += trace_ust_h
endif
@@ -31,7 +34,8 @@ foreach dir : [ '.' ] + trace_events_subdirs
  trace_dtrace = custom_target(fmt.format('trace-dtrace', 'dtrace'),
   output: fmt.format('trace-dtrace', 'dtrace'),
   input: trace_events_file,
- command: [ tracetool, group, '--format=d', 
'@INPUT@', '@OUTPUT@' ])
+ command: [ tracetool, group, '--format=d', 
'@INPUT@', '@OUTPUT@' ],
+ depend_files: tracetool_depends)
  trace_dtrace_h = custom_target(fmt.format('trace-dtrace', 'h'),
 output: fmt.format('trace-dtrace', 'h'),
 input: trace_dtrace,
@@ -66,7 +70,8 @@ foreach d : [
gen = custom_target(d[0],
  output: d[0],
  input: meson.source_root() / 'trace-events',
-command: [ tracetool, '--group=root', 
'--format=@0@'.format(d[1]), '@INPUT@', '@OUTPUT@' ])
+command: [ tracetool, '--group=root', 
'--format=@0@'.format(d[1]), '@INPUT@', '@OUTPUT@' ],
+

Re: Fwd: VirtioSound device emulation implementation

2021-01-25 Thread Alex Bennée


Shreyansh Chouhan  writes:

> Just a follow up.
> Still working on device initialization. I am trying to understand the
> relation between virtio-net and net device in qemu
> in hopes of recreating something similar with the audio device.

All QEMU devices have two parts, a frontend (which the guest sees) and a
backend (which is how the data gets to somewhere in the host). Some of
the command line options in QEMU elide the details for convenience (-nic
and -drive are examples). The -netdev device is all about configuring
the backend of the network part for a paired -device front end. There is
a similar setup for audio (-audiodev) although I'll defer to Gerd's
expertise on how the two interact.
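
For instance, the backend/frontend pairing looks roughly like this on the
command line (illustrative fragments only; the concrete backends and device
models depend on what your host and board support):

  -netdev user,id=net0 -device virtio-net-pci,netdev=net0
  -audiodev none,id=audio0 -device intel-hda -device hda-duplex,audiodev=audio0

i.e. -netdev/-audiodev create the host-facing backend and the paired -device
option instantiates the guest-visible frontend, which refers to it by id.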

> Once I have
> the device initialization part
> ready I will send in a couple of patches for review.

Good stuff.

-- 
Alex Bennée



[PATCH 0/2] trace: make the 'log' backend timestamp configurable

2021-01-25 Thread Stefan Hajnoczi
Zoltan reminded me that the 'log' backend prints tids/timestamps and this can
be unwanted in some cases. It's easier to look at trace output without them and
in some cases parsing is also more convenient with them.

Extend -msg timestamp=on|off to control the 'log' backend's tid/timestamp 
output.
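
For example (illustrative invocation, assuming a build with the 'log' trace
backend and picking an arbitrary trace event), the prefix becomes opt-in:

  $ qemu-system-x86_64 -trace enable=qemu_vfree ...
  qemu_vfree ptr 0x7f...                          <- new default, no prefix
  $ qemu-system-x86_64 -trace enable=qemu_vfree -msg timestamp=on ...
  12345@1611571200.123456:qemu_vfree ptr 0x7f...  <- tid/timestamp prefix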

Stefan Hajnoczi (2):
  error: rename error_with_timestamp to message_with_timestamp
  trace: make the 'log' backend timestamp configurable

 docs/devel/tracing.txt   |  3 +++
 include/qemu/error-report.h  |  2 +-
 softmmu/vl.c |  2 +-
 util/qemu-error.c|  4 ++--
 scripts/tracetool/backend/log.py | 19 +--
 5 files changed, 20 insertions(+), 10 deletions(-)

-- 
2.29.2



[PATCH 2/2] trace: make the 'log' backend timestamp configurable

2021-01-25 Thread Stefan Hajnoczi
Timestamps in tracing output can be distracting. Make it possible to
control tid/timestamp printing with -msg timestamp=on|off. The default
is no tid/timestamps. Previously they were always printed.

Suggested-by: BALATON Zoltan 
Signed-off-by: Stefan Hajnoczi 
---
 docs/devel/tracing.txt   |  3 +++
 scripts/tracetool/backend/log.py | 19 +--
 2 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/docs/devel/tracing.txt b/docs/devel/tracing.txt
index dba43fc7a4..cd744c0429 100644
--- a/docs/devel/tracing.txt
+++ b/docs/devel/tracing.txt
@@ -211,6 +211,9 @@ effectively turns trace events into debug printfs.
 This is the simplest backend and can be used together with existing code that
 uses DPRINTF().
 
+The -msg timestamp=on|off command-line option controls whether or not to print
+the tid/timestamp prefix for each trace event.
+
 === Simpletrace ===
 
 The "simple" backend supports common use cases and comes as part of the QEMU
diff --git a/scripts/tracetool/backend/log.py b/scripts/tracetool/backend/log.py
index bc43dbb4f4..17ba1cd90e 100644
--- a/scripts/tracetool/backend/log.py
+++ b/scripts/tracetool/backend/log.py
@@ -20,6 +20,7 @@ PUBLIC = True
 
 def generate_h_begin(events, group):
 out('#include "qemu/log-for-trace.h"',
+'#include "qemu/error-report.h"',
 '')
 
 
@@ -35,14 +36,20 @@ def generate_h(event, group):
 cond = "trace_event_get_state(%s)" % ("TRACE_" + event.name.upper())
 
 out('if (%(cond)s && qemu_loglevel_mask(LOG_TRACE)) {',
-'struct timeval _now;',
-'gettimeofday(&_now, NULL);',
+'if (message_with_timestamp) {',
+'struct timeval _now;',
+'gettimeofday(&_now, NULL);',
 '#line %(event_lineno)d "%(event_filename)s"',
-'qemu_log("%%d@%%zu.%%06zu:%(name)s " %(fmt)s "\\n",',
-' qemu_get_thread_id(),',
-' (size_t)_now.tv_sec, (size_t)_now.tv_usec',
-' %(argnames)s);',
+'qemu_log("%%d@%%zu.%%06zu:%(name)s " %(fmt)s "\\n",',
+' qemu_get_thread_id(),',
+' (size_t)_now.tv_sec, (size_t)_now.tv_usec',
+' %(argnames)s);',
 '#line %(out_next_lineno)d "%(out_filename)s"',
+'} else {',
+'#line %(event_lineno)d "%(event_filename)s"',
+'qemu_log("%(name)s " %(fmt)s "\\n"%(argnames)s);',
+'#line %(out_next_lineno)d "%(out_filename)s"',
+'}',
 '}',
 cond=cond,
 event_lineno=event.lineno,
-- 
2.29.2



[PATCH 1/2] error: rename error_with_timestamp to message_with_timestamp

2021-01-25 Thread Stefan Hajnoczi
The -msg timestamp=on|off option controls whether a timestamp is printed
with error_report() messages. The "-msg" name suggests that this option
has a wider effect than just error_report(). The next patch extends it
to the 'log' trace backend, so rename the variable from
error_with_timestamp to message_with_timestamp.

Signed-off-by: Stefan Hajnoczi 
---
 include/qemu/error-report.h | 2 +-
 softmmu/vl.c| 2 +-
 util/qemu-error.c   | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/qemu/error-report.h b/include/qemu/error-report.h
index a5ad95ff1b..9d197daca3 100644
--- a/include/qemu/error-report.h
+++ b/include/qemu/error-report.h
@@ -74,7 +74,7 @@ void error_init(const char *argv0);
 
 const char *error_get_progname(void);
 
-extern bool error_with_timestamp;
+extern bool message_with_timestamp;
 extern bool error_with_guestname;
 extern const char *error_guest_name;
 
diff --git a/softmmu/vl.c b/softmmu/vl.c
index a8876b8965..bd55468669 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -737,7 +737,7 @@ static void realtime_init(void)
 
 static void configure_msg(QemuOpts *opts)
 {
-error_with_timestamp = qemu_opt_get_bool(opts, "timestamp", false);
+message_with_timestamp = qemu_opt_get_bool(opts, "timestamp", false);
 error_with_guestname = qemu_opt_get_bool(opts, "guest-name", false);
 }
 
diff --git a/util/qemu-error.c b/util/qemu-error.c
index aa30f03564..52a9e013c4 100644
--- a/util/qemu-error.c
+++ b/util/qemu-error.c
@@ -25,7 +25,7 @@ typedef enum {
 } report_type;
 
 /* Prepend timestamp to messages */
-bool error_with_timestamp;
+bool message_with_timestamp;
 bool error_with_guestname;
 const char *error_guest_name;
 
@@ -208,7 +208,7 @@ static void vreport(report_type type, const char *fmt, 
va_list ap)
 GTimeVal tv;
 gchar *timestr;
 
-if (error_with_timestamp && !monitor_cur()) {
+if (message_with_timestamp && !monitor_cur()) {
 g_get_current_time(&tv);
 timestr = g_time_val_to_iso8601(&tv);
 error_printf("%s ", timestr);
-- 
2.29.2



[PULL 0/5] 9p next patches

2021-01-25 Thread Greg Kurz
The following changes since commit fef80ea073c4862bc9eaddb6ddb0ed970b8ad7c4:

  Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2021-01-20' into 
staging (2021-01-21 10:44:28 +)

are available in the Git repository at:

  https://gitlab.com/gkurz/qemu.git tags/9p-next-pull-request

for you to fetch changes up to 81f9766b7a16ccfcfd19e0b4525a5eeba862c851:

  9pfs: Convert reclaim list to QSLIST (2021-01-22 18:26:40 +0100)


This fixes a Coverity report and improves the fid reclaim logic.



Greg Kurz (5):
  9pfs/proxy: Check return value of proxy_marshal()
  9pfs: Convert V9fsFidState::clunked to bool
  9pfs: Convert V9fsFidState::fid_list to QSIMPLEQ
  9pfs: Improve unreclaim loop
  9pfs: Convert reclaim list to QSLIST

 hw/9pfs/9p-proxy.c |   3 +-
 hw/9pfs/9p.c   | 102 +
 hw/9pfs/9p.h   |   8 ++--
 3 files changed, 63 insertions(+), 50 deletions(-)

-- 
2.26.2





[PULL 5/5] 9pfs: Convert reclaim list to QSLIST

2021-01-25 Thread Greg Kurz
Use QSLIST instead of open-coding for slightly improved readability.

No behavioral change.

Reviewed-by: Christian Schoenebeck 
Message-Id: <20210122143514.215780-1-gr...@kaod.org>
Signed-off-by: Greg Kurz 
---
 hw/9pfs/9p.c | 17 -
 hw/9pfs/9p.h |  2 +-
 2 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
index 3864d014b43c..5a6e2c9d3d7f 100644
--- a/hw/9pfs/9p.c
+++ b/hw/9pfs/9p.c
@@ -416,7 +416,9 @@ void coroutine_fn v9fs_reclaim_fd(V9fsPDU *pdu)
 {
 int reclaim_count = 0;
 V9fsState *s = pdu->s;
-V9fsFidState *f, *reclaim_list = NULL;
+V9fsFidState *f;
+QSLIST_HEAD(, V9fsFidState) reclaim_list =
+QSLIST_HEAD_INITIALIZER(reclaim_list);
 
 QSIMPLEQ_FOREACH(f, &s->fid_list, next) {
 /*
@@ -448,8 +450,7 @@ void coroutine_fn v9fs_reclaim_fd(V9fsPDU *pdu)
  * a clunk request won't free this fid
  */
 f->ref++;
-f->rclm_lst = reclaim_list;
-reclaim_list = f;
+QSLIST_INSERT_HEAD(&reclaim_list, f, reclaim_next);
 f->fs_reclaim.fd = f->fs.fd;
 f->fs.fd = -1;
 reclaim_count++;
@@ -461,8 +462,7 @@ void coroutine_fn v9fs_reclaim_fd(V9fsPDU *pdu)
  * a clunk request won't free this fid
  */
 f->ref++;
-f->rclm_lst = reclaim_list;
-reclaim_list = f;
+QSLIST_INSERT_HEAD(&reclaim_list, f, reclaim_next);
 f->fs_reclaim.dir.stream = f->fs.dir.stream;
 f->fs.dir.stream = NULL;
 reclaim_count++;
@@ -476,15 +476,14 @@ void coroutine_fn v9fs_reclaim_fd(V9fsPDU *pdu)
  * Now close the fid in reclaim list. Free them if they
  * are already clunked.
  */
-while (reclaim_list) {
-f = reclaim_list;
-reclaim_list = f->rclm_lst;
+while (!QSLIST_EMPTY(&reclaim_list)) {
+f = QSLIST_FIRST(&reclaim_list);
+QSLIST_REMOVE(&reclaim_list, f, V9fsFidState, reclaim_next);
 if (f->fid_type == P9_FID_FILE) {
 v9fs_co_close(pdu, &f->fs_reclaim);
 } else if (f->fid_type == P9_FID_DIR) {
 v9fs_co_closedir(pdu, &f->fs_reclaim);
 }
-f->rclm_lst = NULL;
 /*
  * Now drop the fid reference, free it
  * if clunked.
diff --git a/hw/9pfs/9p.h b/hw/9pfs/9p.h
index 85fb6930b0ca..00381591ffa2 100644
--- a/hw/9pfs/9p.h
+++ b/hw/9pfs/9p.h
@@ -281,7 +281,7 @@ struct V9fsFidState {
 int ref;
 bool clunked;
 QSIMPLEQ_ENTRY(V9fsFidState) next;
-V9fsFidState *rclm_lst;
+QSLIST_ENTRY(V9fsFidState) reclaim_next;
 };
 
 typedef enum AffixType_t {
-- 
2.26.2




[PULL 1/5] 9pfs/proxy: Check return value of proxy_marshal()

2021-01-25 Thread Greg Kurz
This should always successfully write exactly two 32-bit integers.
Make it clear with an assert(), like v9fs_receive_status() and
v9fs_receive_response() already do when unmarshalling the same
header.

Fixes: Coverity CID 1438968
Reviewed-by: Christian Schoenebeck 
Message-Id: <161035859647.1221144.4691749806675653934.st...@bahia.lan>
Signed-off-by: Greg Kurz 
---
 hw/9pfs/9p-proxy.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/9pfs/9p-proxy.c b/hw/9pfs/9p-proxy.c
index 6f598a0f111c..4aa4e0a3baa0 100644
--- a/hw/9pfs/9p-proxy.c
+++ b/hw/9pfs/9p-proxy.c
@@ -537,7 +537,8 @@ static int v9fs_request(V9fsProxy *proxy, int type, void 
*response, ...)
 }
 
 /* marshal the header details */
-proxy_marshal(iovec, 0, "dd", header.type, header.size);
+retval = proxy_marshal(iovec, 0, "dd", header.type, header.size);
+assert(retval == 4 * 2);
 header.size += PROXY_HDR_SZ;
 
 retval = qemu_write_full(proxy->sockfd, iovec->iov_base, header.size);
-- 
2.26.2




[PULL 4/5] 9pfs: Improve unreclaim loop

2021-01-25 Thread Greg Kurz
If a fid was actually re-opened by v9fs_reopen_fid(), we re-traverse the
fid list from the head in case some other request created a fid that
needs to be marked unreclaimable as well (i.e. the client opened a new
handle on the path that is being unlinked). This is suboptimal since
most if not all fids that require it have likely been taken care of
already.

This is mostly the result of new fids being added to the head of the
list. Since the list is now a QSIMPLEQ, add new fids at the end instead
to avoid the need to rewind. Take a reference on the fid to ensure it
doesn't go away during v9fs_reopen_fid() and that it can be safely
passed to QSIMPLEQ_NEXT() afterwards. Since the associated put_fid()
can also yield, the same is done with the next fid. So the logic here is
to get a reference on a fid and only put it back during the next
iteration after we could get a reference on the next fid.

Reviewed-by: Christian Schoenebeck 
Message-Id: <20210121181510.1459390-1-gr...@kaod.org>
Signed-off-by: Greg Kurz 
---
 hw/9pfs/9p.c | 46 --
 1 file changed, 32 insertions(+), 14 deletions(-)

diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
index b65f320e6518..3864d014b43c 100644
--- a/hw/9pfs/9p.c
+++ b/hw/9pfs/9p.c
@@ -311,7 +311,7 @@ static V9fsFidState *alloc_fid(V9fsState *s, int32_t fid)
  * reclaim won't close the file descriptor
  */
 f->flags |= FID_REFERENCED;
-QSIMPLEQ_INSERT_HEAD(&s->fid_list, f, next);
+QSIMPLEQ_INSERT_TAIL(&s->fid_list, f, next);
 
 v9fs_readdir_init(s->proto_version, &f->fs.dir);
 v9fs_readdir_init(s->proto_version, &f->fs_reclaim.dir);
@@ -497,32 +497,50 @@ static int coroutine_fn v9fs_mark_fids_unreclaim(V9fsPDU 
*pdu, V9fsPath *path)
 {
 int err;
 V9fsState *s = pdu->s;
-V9fsFidState *fidp;
+V9fsFidState *fidp, *fidp_next;
 
-again:
-QSIMPLEQ_FOREACH(fidp, &s->fid_list, next) {
-if (fidp->path.size != path->size) {
-continue;
-}
-if (!memcmp(fidp->path.data, path->data, path->size)) {
+fidp = QSIMPLEQ_FIRST(&s->fid_list);
+if (!fidp) {
+return 0;
+}
+
+/*
+ * v9fs_reopen_fid() can yield : a reference on the fid must be held
+ * to ensure its pointer remains valid and we can safely pass it to
+ * QSIMPLEQ_NEXT(). The corresponding put_fid() can also yield so
+ * we must keep a reference on the next fid as well. So the logic here
+ * is to get a reference on a fid and only put it back during the next
+ * iteration after we could get a reference on the next fid. Start with
+ * the first one.
+ */
+for (fidp->ref++; fidp; fidp = fidp_next) {
+if (fidp->path.size == path->size &&
+!memcmp(fidp->path.data, path->data, path->size)) {
 /* Mark the fid non reclaimable. */
 fidp->flags |= FID_NON_RECLAIMABLE;
 
 /* reopen the file/dir if already closed */
 err = v9fs_reopen_fid(pdu, fidp);
 if (err < 0) {
+put_fid(pdu, fidp);
 return err;
 }
+}
+
+fidp_next = QSIMPLEQ_NEXT(fidp, next);
+
+if (fidp_next) {
 /*
- * Go back to head of fid list because
- * the list could have got updated when
- * switched to the worker thread
+ * Ensure the next fid survives a potential clunk request during
+ * put_fid() below and v9fs_reopen_fid() in the next iteration.
  */
-if (err == 0) {
-goto again;
-}
+fidp_next->ref++;
 }
+
+/* We're done with this fid */
+put_fid(pdu, fidp);
 }
+
 return 0;
 }
 
-- 
2.26.2




Re: [PATCH 15/25] vl: plumb keyval-based options into -set and -readconfig

2021-01-25 Thread Markus Armbruster
Paolo Bonzini  writes:

> Add generic machinery to support parsing command line options with
> keyval in -set and -readconfig, choosing between QDict and
> QemuOpts as the underlying data structure.
>
> The keyval_merge function is slightly heavyweight as a way to
> do qemu_set_option for QDict-based options, but it will be put
> to further use later to merge entire -readconfig sections together
> with their command-line equivalents.
>
> Signed-off-by: Paolo Bonzini 

Watch this:

$ cat test.cfg
[machine]
  accel = "kvm"
  usb = "on"
$ qemu-system-x86_64 -S -display none -readconfig test.cfg
Aborted (core dumped)

Backtrace:

#0  0x752f19e5 in raise () at /lib64/libc.so.6
#1  0x752da895 in abort () at /lib64/libc.so.6
#2  0x55c44a77 in qemu_record_config_group
(group=0x7fffd1b0 "machine", dict=0x565fb740, 
errp=0x564ffca0 ) at ../softmmu/vl.c:2103
#3  0x55c44bd8 in qemu_parse_config_group
(group=0x7fffd1b0 "machine", qdict=0x565f9640, 
opaque=0x564ff9e0 , errp=0x564ffca0 ) at 
../softmmu/vl.c:2135
#4  0x55eda3e6 in qemu_config_foreach
(fp=0x565f3e00, cb=0x55c44af5 , 
opaque=0x564ff9e0 , fname=0x7fffe0dd "test.cfg", 
errp=0x564ffca0 ) at ../util/qemu-config.c:378
#5  0x55eda5d5 in qemu_read_config_file
(filename=0x7fffe0dd "test.cfg", cb=0x55c44af5 
, errp=0x564ffca0 ) at 
../util/qemu-config.c:421
#6  0x55c47d3f in qemu_init
(argc=6, argv=0x7fffdcc8, envp=0x7fffdd00) at 
../softmmu/vl.c:3405
#7  0x558234e1 in main
(argc=6, argv=0x7fffdcc8, envp=0x7fffdd00) at 
../softmmu/main.c:49

Similar result for

[memory]
  size = "1024"

and

[chardev "mon0"]
  backend = "stdio"

I didn't look for more.




[PULL 2/5] 9pfs: Convert V9fsFidState::clunked to bool

2021-01-25 Thread Greg Kurz
This can only be 0 or 1.

Reviewed-by: Christian Schoenebeck 
Message-Id: <20210118142300.801516-2-gr...@kaod.org>
Signed-off-by: Greg Kurz 
---
 hw/9pfs/9p.c | 4 ++--
 hw/9pfs/9p.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
index 6026b51a1c04..37c3379b7462 100644
--- a/hw/9pfs/9p.c
+++ b/hw/9pfs/9p.c
@@ -413,7 +413,7 @@ static V9fsFidState *clunk_fid(V9fsState *s, int32_t fid)
 }
 fidp = *fidpp;
 *fidpp = fidp->next;
-fidp->clunked = 1;
+fidp->clunked = true;
 return fidp;
 }
 
@@ -544,7 +544,7 @@ static void coroutine_fn virtfs_reset(V9fsPDU *pdu)
 
 /* Clunk fid */
 s->fid_list = fidp->next;
-fidp->clunked = 1;
+fidp->clunked = true;
 
 put_fid(pdu, fidp);
 }
diff --git a/hw/9pfs/9p.h b/hw/9pfs/9p.h
index 32df81f360ea..93656323d1d7 100644
--- a/hw/9pfs/9p.h
+++ b/hw/9pfs/9p.h
@@ -279,7 +279,7 @@ struct V9fsFidState {
 int open_flags;
 uid_t uid;
 int ref;
-int clunked;
+bool clunked;
 V9fsFidState *next;
 V9fsFidState *rclm_lst;
 };
-- 
2.26.2




Re: [PATCH 16/25] hw/arm/stellaris: Convert SSYS to QOM device

2021-01-25 Thread Peter Maydell
On Thu, 21 Jan 2021 at 22:13, Philippe Mathieu-Daudé  wrote:
>
> Hi Peter,
>
> On 1/21/21 8:06 PM, Peter Maydell wrote:
> > Convert the SSYS code in the Stellaris boards (which encapsulates the
> > system registers) to a proper QOM device.  This will provide us with
> > somewhere to put the output Clock whose frequency depends on the
> > setting of the PLL configuration registers.
> >
> > This is a migration compatibility break for lm3s811evb, lm3s6965evb.
> >
> > We use 3-phase reset here because the Clock will need to propagate
> > its value in the hold phase.
> >
> > For the moment we reset the device during the board creation so that
> > the system_clock_scale global gets set; this will be removed in a
> > subsequent commit.

> > +
> > +struct ssys_state {
> > +SysBusDevice parent_obj;
> > +
> >  MemoryRegion iomem;
> >  uint32_t pborctl;
> >  uint32_t ldopctl;
> > @@ -371,11 +376,18 @@ typedef struct {
> >  uint32_t dcgc[3];
> >  uint32_t clkvclr;
> >  uint32_t ldoarst;
> > +qemu_irq irq;
> > +/* Properties (all read-only registers) */
> >  uint32_t user0;
> >  uint32_t user1;
> > -qemu_irq irq;
> > -stellaris_board_info *board;
> > -} ssys_state;
> > +uint32_t did0;
> > +uint32_t did1;
> > +uint32_t dc0;
> > +uint32_t dc1;
> > +uint32_t dc2;
> > +uint32_t dc3;
> > +uint32_t dc4;
>
> Shouldn't these be class properties?

Could you elaborate on what you think the code ought to look like?
I just used the usual thing of defining uint32 qdev properties so we
can set these values when we create the device, as a replacement
for the existing code which either reaches directly into the
state struct to set the user0/user1 values or sets the
stellaris_board_info pointer in the state struct.

thanks
-- PMM



[PULL 3/5] 9pfs: Convert V9fsFidState::fid_list to QSIMPLEQ

2021-01-25 Thread Greg Kurz
The fid_list is currently open-coded. This doesn't seem to serve any
purpose that cannot be met with QEMU's generic lists. Let's go for a
QSIMPLEQ: this will allow adding new fids at the end of the list and
improving the logic in v9fs_mark_fids_unreclaim().

Reviewed-by: Christian Schoenebeck 
Message-Id: <20210118142300.801516-3-gr...@kaod.org>
Signed-off-by: Greg Kurz 
---
 hw/9pfs/9p.c | 41 ++---
 hw/9pfs/9p.h |  4 ++--
 2 files changed, 20 insertions(+), 25 deletions(-)

diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
index 37c3379b7462..b65f320e6518 100644
--- a/hw/9pfs/9p.c
+++ b/hw/9pfs/9p.c
@@ -260,7 +260,7 @@ static V9fsFidState *coroutine_fn get_fid(V9fsPDU *pdu, 
int32_t fid)
 V9fsFidState *f;
 V9fsState *s = pdu->s;
 
-for (f = s->fid_list; f; f = f->next) {
+QSIMPLEQ_FOREACH(f, &s->fid_list, next) {
 BUG_ON(f->clunked);
 if (f->fid == fid) {
 /*
@@ -295,7 +295,7 @@ static V9fsFidState *alloc_fid(V9fsState *s, int32_t fid)
 {
 V9fsFidState *f;
 
-for (f = s->fid_list; f; f = f->next) {
+QSIMPLEQ_FOREACH(f, &s->fid_list, next) {
 /* If fid is already there return NULL */
 BUG_ON(f->clunked);
 if (f->fid == fid) {
@@ -311,8 +311,7 @@ static V9fsFidState *alloc_fid(V9fsState *s, int32_t fid)
  * reclaim won't close the file descriptor
  */
 f->flags |= FID_REFERENCED;
-f->next = s->fid_list;
-s->fid_list = f;
+QSIMPLEQ_INSERT_HEAD(&s->fid_list, f, next);
 
 v9fs_readdir_init(s->proto_version, &f->fs.dir);
 v9fs_readdir_init(s->proto_version, &f->fs_reclaim.dir);
@@ -401,20 +400,16 @@ static int coroutine_fn put_fid(V9fsPDU *pdu, 
V9fsFidState *fidp)
 
 static V9fsFidState *clunk_fid(V9fsState *s, int32_t fid)
 {
-V9fsFidState **fidpp, *fidp;
+V9fsFidState *fidp;
 
-for (fidpp = &s->fid_list; *fidpp; fidpp = &(*fidpp)->next) {
-if ((*fidpp)->fid == fid) {
-break;
+QSIMPLEQ_FOREACH(fidp, &s->fid_list, next) {
+if (fidp->fid == fid) {
+QSIMPLEQ_REMOVE(&s->fid_list, fidp, V9fsFidState, next);
+fidp->clunked = true;
+return fidp;
 }
 }
-if (*fidpp == NULL) {
-return NULL;
-}
-fidp = *fidpp;
-*fidpp = fidp->next;
-fidp->clunked = true;
-return fidp;
+return NULL;
 }
 
 void coroutine_fn v9fs_reclaim_fd(V9fsPDU *pdu)
@@ -423,7 +418,7 @@ void coroutine_fn v9fs_reclaim_fd(V9fsPDU *pdu)
 V9fsState *s = pdu->s;
 V9fsFidState *f, *reclaim_list = NULL;
 
-for (f = s->fid_list; f; f = f->next) {
+QSIMPLEQ_FOREACH(f, &s->fid_list, next) {
 /*
  * Unlink fids cannot be reclaimed. Check
  * for them and skip them. Also skip fids
@@ -505,7 +500,7 @@ static int coroutine_fn v9fs_mark_fids_unreclaim(V9fsPDU 
*pdu, V9fsPath *path)
 V9fsFidState *fidp;
 
 again:
-for (fidp = s->fid_list; fidp; fidp = fidp->next) {
+QSIMPLEQ_FOREACH(fidp, &s->fid_list, next) {
 if (fidp->path.size != path->size) {
 continue;
 }
@@ -537,13 +532,13 @@ static void coroutine_fn virtfs_reset(V9fsPDU *pdu)
 V9fsFidState *fidp;
 
 /* Free all fids */
-while (s->fid_list) {
+while (!QSIMPLEQ_EMPTY(&s->fid_list)) {
 /* Get fid */
-fidp = s->fid_list;
+fidp = QSIMPLEQ_FIRST(&s->fid_list);
 fidp->ref++;
 
 /* Clunk fid */
-s->fid_list = fidp->next;
+QSIMPLEQ_REMOVE(&s->fid_list, fidp, V9fsFidState, next);
 fidp->clunked = true;
 
 put_fid(pdu, fidp);
@@ -3121,7 +3116,7 @@ static int coroutine_fn v9fs_complete_rename(V9fsPDU 
*pdu, V9fsFidState *fidp,
  * Fixup fid's pointing to the old name to
  * start pointing to the new name
  */
-for (tfidp = s->fid_list; tfidp; tfidp = tfidp->next) {
+QSIMPLEQ_FOREACH(tfidp, &s->fid_list, next) {
 if (v9fs_path_is_ancestor(&fidp->path, &tfidp->path)) {
 /* replace the name */
 v9fs_fix_path(&tfidp->path, &new_path, strlen(fidp->path.data));
@@ -3215,7 +3210,7 @@ static int coroutine_fn v9fs_fix_fid_paths(V9fsPDU *pdu, 
V9fsPath *olddir,
  * Fixup fid's pointing to the old name to
  * start pointing to the new name
  */
-for (tfidp = s->fid_list; tfidp; tfidp = tfidp->next) {
+QSIMPLEQ_FOREACH(tfidp, &s->fid_list, next) {
 if (v9fs_path_is_ancestor(&oldpath, &tfidp->path)) {
 /* replace the name */
 v9fs_fix_path(&tfidp->path, &newpath, strlen(oldpath.data));
@@ -4081,7 +4076,7 @@ int v9fs_device_realize_common(V9fsState *s, const 
V9fsTransport *t,
 s->ctx.fmode = fse->fmode;
 s->ctx.dmode = fse->dmode;
 
-s->fid_list = NULL;
+QSIMPLEQ_INIT(&s->fid_list);
 qemu_co_rwlock_init(&s->rename_lock);
 
 if (s->ops->init(&s->ctx, errp) < 0) {
diff --git a/hw/9pfs/9p.h b/hw/9pfs/9p.h
index 93656323d1d7..85fb6930b0ca 100644
--- a/hw/9pfs/9p.h
+++ b/hw/9pfs/9p.h
@

Re: [PATCH v2 4/7] m68k: add an interrupt controller

2021-01-25 Thread Laurent Vivier
On 24/01/2021 at 17:46, Philippe Mathieu-Daudé wrote:
> On 12/20/20 12:26 PM, Laurent Vivier wrote:
>> Signed-off-by: Laurent Vivier 
>> ---
>>  include/hw/intc/m68k_irqc.h |  28 +
>>  hw/intc/m68k_irqc.c | 120 
> 
> Missing MAINTAINERS entries?

New devices are added in patch 7 (where I add the machine and the section in
MAINTAINERS), but I've missed this one. Thanks.

> 
>>  hw/intc/Kconfig |   3 +
>>  hw/intc/meson.build |   1 +
>>  4 files changed, 152 insertions(+)
>>  create mode 100644 include/hw/intc/m68k_irqc.h
>>  create mode 100644 hw/intc/m68k_irqc.c
>>
>> diff --git a/include/hw/intc/m68k_irqc.h b/include/hw/intc/m68k_irqc.h
>> new file mode 100644
>> index ..c40b1b4df8fa
>> --- /dev/null
>> +++ b/include/hw/intc/m68k_irqc.h
>> @@ -0,0 +1,28 @@
>> +/*
>> + * SPDX-License-Identifer: GPL-2.0-or-later
>> + *
>> + * QEMU Motorla 680x0 IRQ Controller
> 
> Typo "Motorola".

Thank you (there is another typo: "q800_irq_register_types" that must be 
renamed)

> Are there any specs to refer at?
> 

There are no specs. It's a (generic) copy of the GLUE device we already have
for q800.

I don't update Q800 because the GLUE device may disappear: Q800 actually uses
djMEMC, which is also a memory controller (I have that in my q800-dev branch,
but for the moment it's only needed for the MacROM and Mark is working on
that).

Thanks,
Laurent





Re: [PATCH v7 10/11] iotests: rewrite check into python

2021-01-25 Thread Kevin Wolf
On 23.01.2021 at 16:08, Vladimir Sementsov-Ogievskiy wrote:
> 22.01.2021 19:08, Kevin Wolf wrote:
> > On 16.01.2021 at 14:44, Vladimir Sementsov-Ogievskiy wrote:
> > > Just use classes introduced in previous three commits. Behavior
> > > difference is described in these three commits.
> > > 
> > > Drop group file, as it becomes unused.
> > > 
> > > Drop common.env: now check is in python, and for tests we use same
> > > python interpreter that runs the check itself. Use build environment
> > > PYTHON in check-block instead, to keep "make check" use the same
> > > python.
> > > 
> > > Signed-off-by: Vladimir Sementsov-Ogievskiy 
> > 
> > > diff --git a/tests/check-block.sh b/tests/check-block.sh
> > > index fb4c1baae9..26eb1c0a9b 100755
> > > --- a/tests/check-block.sh
> > > +++ b/tests/check-block.sh
> > > @@ -69,7 +69,7 @@ export QEMU_CHECK_BLOCK_AUTO=1
> > >   ret=0
> > >   for fmt in $format_list ; do
> > > -./check -makecheck -$fmt $group || ret=1
> > > +${PYTHON} ./check -makecheck -$fmt $group || ret=1
> > >   done
> > 
> > When I add an echo to print that command line, it seems that ${PYTHON}
> > is empty for me. Is this expected?
> 
> It seems to be defined when called from make check. Did you
> just call check-block directly?
>
> It's not intentional, but I think it's OK: if PYTHON is not defined
> let's just execute check as self-executable. And for make-check PYTHON
> is defined and correct python is used.

Hm, where does that happen in 'make check'? It seems the old makefiles
were quite readable in comparison to what we have now...

Anyway, I think 'make check-block' should run just the block-specific
subset of 'make check', without changing the behaviour of the remaining
tests. Anything that can be started through make should respect the
configured Python interpreter.

> > >   exit $ret
> > > diff --git a/tests/qemu-iotests/check b/tests/qemu-iotests/check
> > > index 952762d5ed..914321806a 100755
> > > --- a/tests/qemu-iotests/check
> > > +++ b/tests/qemu-iotests/check
> 
> [..]
> 
> > > -if [ -x "$binary" ]
> > > -then
> > > -export QEMU_PROG="$build_root/$binary"
> > > -break
> > > -fi
> > > -done
> > > -popd > /dev/null
> > > -[ "$QEMU_PROG" = "" ] && _init_error "qemu not found"
> > > -fi
> > 
> > I think this else branch is kind of important (if there is no system
> > emulator binary for the host architecture, find _any_ system emulator
> > binary that was built). I can't find its equivalent in the new code.
> 
> Hmm, I decided that testing the "first found" emulator is strange... It seems
> like we have several emulators and the user doesn't care which would be
> tested?

Remember that we're mainly testing the block layer, which is the same in
all qemu-system-* binaries anyway. So yes, any system emulator binary is
good enough for many test cases, and certainly better than having no
system emulator. Differences are only in the supported guest devices,
which may cause some tests to be skipped.

If there are multiple binaries that we could use, we could change the
way to select one instead of just the first one, e.g. by trying x86_64
first because this is what enables the largest set of tests.

But anything is better than failing with "qemu not found".

> Probably we should instead use the qemu-system-* binary only if there is
> only one matching binary. And fail if there are many.

No, 'make check' shouldn't fail because I built arm and ppc emulators on
my x86_64 machine without also building a x86_64 emulator. (And I think
this is a case that fails both with the actual patch under review and
with your suggested change.)

Kevin




[PATCH] coroutine-sigaltstack: Add SIGUSR2 mutex

2021-01-25 Thread Max Reitz
Disposition (action) for any given signal is global for the process.
When two threads run coroutine-sigaltstack's qemu_coroutine_new()
concurrently, they may interfere with each other: One of them may revert
the SIGUSR2 handler to SIG_DFL, between the other thread (a) setting up
coroutine_trampoline() as the handler and (b) raising SIGUSR2.  That
SIGUSR2 will then terminate the QEMU process abnormally.

We have to ensure that only one thread at a time can modify the
process-global SIGUSR2 handler.  To do so, wrap the whole section where
that is done in a mutex.

Alternatively, we could for example have the SIGUSR2 handler always be
coroutine_trampoline(), so there would be no need to invoke sigaction()
in qemu_coroutine_new().  Laszlo has posted a patch to do so here:

  https://lists.nongnu.org/archive/html/qemu-devel/2021-01/msg05962.html

However, given that coroutine-sigaltstack is more of a fallback
implementation for platforms that do not support ucontext, that change
may be a bit too invasive to be comfortable with it.  The mutex proposed
here may negatively impact performance, but the change is much simpler.

Signed-off-by: Max Reitz 
---
 util/coroutine-sigaltstack.c | 9 +
 1 file changed, 9 insertions(+)

diff --git a/util/coroutine-sigaltstack.c b/util/coroutine-sigaltstack.c
index aade82afb8..e99b8a4f9c 100644
--- a/util/coroutine-sigaltstack.c
+++ b/util/coroutine-sigaltstack.c
@@ -157,6 +157,7 @@ Coroutine *qemu_coroutine_new(void)
 sigset_t sigs;
 sigset_t osigs;
 sigjmp_buf old_env;
+static pthread_mutex_t sigusr2_mutex = PTHREAD_MUTEX_INITIALIZER;
 
 /* The way to manipulate stack is with the sigaltstack function. We
  * prepare a stack, with it delivering a signal to ourselves and then
@@ -186,6 +187,12 @@ Coroutine *qemu_coroutine_new(void)
 sa.sa_handler = coroutine_trampoline;
 sigfillset(&sa.sa_mask);
 sa.sa_flags = SA_ONSTACK;
+
+/*
+ * sigaction() is a process-global operation.  We must not run
+ * this code in multiple threads at once.
+ */
+pthread_mutex_lock(&sigusr2_mutex);
 if (sigaction(SIGUSR2, &sa, &osa) != 0) {
 abort();
 }
@@ -234,6 +241,8 @@ Coroutine *qemu_coroutine_new(void)
  * Restore the old SIGUSR2 signal handler and mask
  */
 sigaction(SIGUSR2, &osa, NULL);
+pthread_mutex_unlock(&sigusr2_mutex);
+
 pthread_sigmask(SIG_SETMASK, &osigs, NULL);
 
 /*
-- 
2.29.2




Re: [PULL 18/21] hw/misc: Add a PWM module for NPCM7XX

2021-01-25 Thread Peter Maydell
On Wed, 13 Jan 2021 at 17:13, Hao Wu  wrote:
> On Wed, Jan 13, 2021 at 8:03 AM Peter Maydell  
> wrote:
>> Hi; Coverity reports a possibly-overflowing arithmetic operation here
>> (CID 1442342):
>>
>> > +static uint32_t npcm7xx_pwm_calculate_duty(NPCM7xxPWM *p)
>> > +{
>> > +uint64_t duty;
>> > +
>> > +if (p->running) {
>> > +if (p->cnr == 0) {
>> > +duty = 0;
>> > +} else if (p->cmr >= p->cnr) {
>> > +duty = NPCM7XX_PWM_MAX_DUTY;
>> > +} else {
>> > +duty = NPCM7XX_PWM_MAX_DUTY * (p->cmr + 1) / (p->cnr + 1);
>>
>> Here all of p->cmr, p->cnr and NPCM7XX_PWM_MAX_DUTY are 32-bits,
>> so we calculate the whole expression using 32-bit arithmetic
>> before assigning it to a 64-bit variable. This could be
>> fixed using eg a cast of NPCM7XX_PWM_MAX_DUTY to uint64_t.
>>
>> Incidentally, we don't actually do any 64-bit
>> arithmetic calculations on 'duty' and we return
>> a uint32_t from this function, so 'duty' itself could
>> be a uint32_t, I think...
>
> Since NPCM7XX_PWM_MAX_DUTY = 1,000,000 and p->cmr can be up to 65535, the
> overflow is possible. We might want to cast NPCM7XX_PWM_MAX_DUTY to uint64_t
> or #define NPCM7XX_PWM_MAX_DUTY 1000000ULL.
> duty itself could be a uint32_t as you point out. Since p->cmr is less than
> p->cnr in this line, duty cannot exceed NPCM7XX_PWM_MAX_DUTY, so there's no
> overflow after this computation.
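
For reference, the cast-based variant being discussed would look roughly like
this (sketch only, not a tested patch):

    duty = (uint64_t)NPCM7XX_PWM_MAX_DUTY * (p->cmr + 1) / (p->cnr + 1);

so the multiplication is carried out in 64-bit arithmetic before the division.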

Hi; were you planning to write a patch for this?

thanks
-- PMM



Re: [PATCH v2 05/12] meson: Restrict block subsystem processing

2021-01-25 Thread Kevin Wolf
On 22.01.2021 at 21:44, Philippe Mathieu-Daudé wrote:
> Avoid generating module_block.h and block-gen.c if we are
> not going to use them.
> 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
> Cc: qemu-bl...@nongnu.org
> ---
>  meson.build | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/meson.build b/meson.build
> index 61cbb89cd44..181f8795f5a 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -1829,7 +1829,9 @@
>  
>  subdir('nbd')
>  subdir('scsi')
> -subdir('block')
> +if have_block
> +  subdir('block')
> +endif
>  
>  blockdev_ss.add(files(
>'blockdev.c',

It feels odd to have a random have_block check in the middle of a whole
bunch of lines that all deal with block layer functionality, especially
when unconditional ones depend on the conditional one. (nbd, scsi and
blockdev.c present in the context here certainly all can't work without
block)

So should this if block become a bit longer and include all block layer
related things nearby?

If not, at least a comment explaining why we're doing this would be
nice.

Kevin




Re: [PATCH] replay: fix replay of the interrupts

2021-01-25 Thread Alex Bennée


Paolo Bonzini  writes:

> In general I agree, but != means that rr disabled returns true. In general
> it seems to me that rr disabled should work more or less the same as record
> mode, because there is no replay log to provide the checkpoints.

Is this not an argument to combine the mode and check into replay.h
inline helpers with some clear semantic documentation, so the call sites
become self-documenting?

if (deadline == 0 && replay_recording_or_checkpoint())

which also makes things easier to compile away if replay isn't there?
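
A minimal sketch of what I mean (hypothetical helper, name and exact condition
still to be agreed on):

  /* replay.h */
  static inline bool replay_recording_or_checkpoint(void)
  {
      /* or "replay_mode != REPLAY_MODE_PLAY", per Paolo's point above */
      return replay_mode == REPLAY_MODE_RECORD || replay_has_checkpoint();
  }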

>
> Paolo
>
> On Mon, 25 Jan 2021, 06:38 Pavel Dovgalyuk  wrote:
>
>> On 23.01.2021 21:15, Paolo Bonzini wrote:
>> > On 19/01/21 13:39, Pavel Dovgalyuk wrote:
>> >> Sometimes interrupt event comes at the same time with
>> >> the virtual timers. In this case replay tries to proceed
>> >> the timers, because deadline for them is zero.
>> >> This patch allows processing interrupts and exceptions
>> >> by entering the vCPU execution loop, when deadline is zero,
>> >> but checkpoint associated with virtual timers is not ready
>> >> to be replayed.
>> >>
>> >> Signed-off-by: Pavel Dovgalyuk 
>> >> ---
>> >>   accel/tcg/tcg-cpus-icount.c |8 +++-
>> >>   1 file changed, 7 insertions(+), 1 deletion(-)
>> >>
>> >> diff --git a/accel/tcg/tcg-cpus-icount.c b/accel/tcg/tcg-cpus-icount.c
>> >> index 9f45432275..a6d2bb8a88 100644
>> >> --- a/accel/tcg/tcg-cpus-icount.c
>> >> +++ b/accel/tcg/tcg-cpus-icount.c
>> >> @@ -81,7 +81,13 @@ void icount_handle_deadline(void)
>> >>   int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
>> >>
>> QEMU_TIMER_ATTR_ALL);
>> >> -if (deadline == 0) {
>> >> +/*
>> >> + * Instructions, interrupts, and exceptions are processed in
>> >> cpu-exec.
>> >> + * Don't interrupt cpu thread, when these events are waiting
>> >> + * (i.e., there is no checkpoint)
>> >> + */
>> >> +if (deadline == 0
>> >> +&& (replay_mode == REPLAY_MODE_RECORD ||
>> >> replay_has_checkpoint())) {
>> >
>> > Should this be replay_mode != REPLAY_MODE_PLAY ||
>> replay_has_checkpoint()?
>>
>> It was the first idea, but I thought, that == is more straightforward
>> to understand than !=.
>>
>> Pavel Dovgalyuk
>>
>>


-- 
Alex Bennée



[PATCH] Fix crash with IOthread when block commit after snapshot

2021-01-25 Thread 08005325
From: Michael Qiu 

Currently, if the guest has workloads, the IO thread will acquire the
aio_context lock before doing io_submit, which leads to a segmentation fault
when doing a block commit after a snapshot. Just like below:
[Switching to thread 2 (Thread 0x7f046c312700 (LWP 108791))]
#0  0x5573f57930db in bdrv_mirror_top_pwritev ... at block/mirror.c:1420
1420        in block/mirror.c
(gdb) p s->job
$17 = (MirrorBlockJob *) 0x0
(gdb) p s->stop
$18 = false
(gdb)
(gdb) bt
#0  0x5573f57930db in bdrv_mirror_top_pwritev ... at block/mirror.c:1420
#1  0x5573f5798ceb in bdrv_driver_pwritev ... at block/io.c:1183
#2  0x5573f579ae7a in bdrv_aligned_pwritev ... at block/io.c:1980
#3  0x5573f579b667 in bdrv_co_pwritev_part ... at block/io.c:2137
#4  0x5573f57886c8 in blk_do_pwritev_part ... at block/block-backend.c:1231
#5  0x5573f578879d in blk_aio_write_entry ... at block/block-backend.c:1439
#6  0x5573f58317cb in coroutine_trampoline ... at 
util/coroutine-ucontext.c:115
#7  0x7f047414a0d0 in __start_context () at /lib64/libc.so.6
#8  0x7f046c310e60 in  ()
#9  0x in  ()

Switch to qmp:
#0  0x7f04744dd4ed in __lll_lock_wait () at /lib64/libpthread.so.0
#1  0x7f04744d8de6 in _L_lock_941 () at /lib64/libpthread.so.0
#2  0x7f04744d8cdf in pthread_mutex_lock () at /lib64/libpthread.so.0
#3  0x5573f581de89 in qemu_mutex_lock_impl ... at 
util/qemu-thread-posix.c:78
#4  0x5573f575789e in block_job_add_bdrv ... at blockjob.c:223
#5  0x5573f5757ebd in block_job_create ... at blockjob.c:441
#6  0x5573f5792430 in mirror_start_job ... at block/mirror.c:1604
#7  0x5573f5794b6f in commit_active_start ... at block/mirror.c:1789

In the IO thread, when doing bdrv_mirror_top_pwritev, the job is NULL and the
stop field is false. This means the s object has not been initialized yet; it
is initialized by block_job_create(), but the initialization is stuck acquiring
the lock.

The root cause is that QEMU releases and re-acquires the lock while holding
it; at the same time, the IO thread gets the lock right after the release, and
the crash occurs.

Actually, in this situation, job->job.aio_context will not be equal to
qemu_get_aio_context(), and will be the same as bs->aio_context;
thus, there is no need to release the lock, because bdrv_root_attach_child()
will not change the context.

This patch fixes the issue.

Signed-off-by: Michael Qiu 
---
 blockjob.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index c6e20e2f..e1d41db9 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -214,12 +214,14 @@ int block_job_add_bdrv(BlockJob *job, const char *name, 
BlockDriverState *bs,
 BdrvChild *c;
 
 bdrv_ref(bs);
-if (job->job.aio_context != qemu_get_aio_context()) {
+if (bdrv_get_aio_context(bs) != job->job.aio_context &&
+job->job.aio_context != qemu_get_aio_context()) {
 aio_context_release(job->job.aio_context);
 }
 c = bdrv_root_attach_child(bs, name, &child_job, job->job.aio_context,
perm, shared_perm, job, errp);
-if (job->job.aio_context != qemu_get_aio_context()) {
+if (bdrv_get_aio_context(bs) != job->job.aio_context &&
+job->job.aio_context != qemu_get_aio_context()) {
 aio_context_acquire(job->job.aio_context);
 }
 if (c == NULL) {
-- 
2.22.0




[RESEND PULL 1/4] net: Fix handling of id in netdev_add and netdev_del

2021-01-25 Thread Jason Wang
From: Markus Armbruster 

CLI -netdev accumulates in option group "netdev".

Before commit 08712fcb85 "net: Track netdevs in NetClientState rather
than QemuOpt", netdev_add added to the option group, and netdev_del
removed from it, both HMP and QMP.  Thus, every netdev had a
corresponding QemuOpts in this option group.

Commit 08712fcb85 dropped this for QMP netdev_add and both netdev_del.
Now a netdev has a corresponding QemuOpts only when it was created
with CLI or HMP.  Two issues:

* QMP and HMP netdev_del can leave QemuOpts behind, breaking HMP
  netdev_add.  Reproducer:

$ qemu-system-x86_64 -S -display none -nodefaults -monitor stdio
QEMU 5.1.92 monitor - type 'help' for more information
(qemu) netdev_add user,id=net0
(qemu) info network
net0: index=0,type=user,net=10.0.2.0,restrict=off
(qemu) netdev_del net0
(qemu) info network
(qemu) netdev_add user,id=net0
upstream-qemu: Duplicate ID 'net0' for netdev
Try "help netdev_add" for more information

  Fix by restoring the QemuOpts deletion in qmp_netdev_del(), but with
  a guard, because the QemuOpts need not exist.

* QMP netdev_add loses its "no duplicate ID" check.  Reproducer:

$ qemu-system-x86_64 -S -display none -qmp stdio
{"QMP": {"version": {"qemu": {"micro": 92, "minor": 1, "major": 5}, 
"package": "v5.2.0-rc2-1-g02c1f0142c"}, "capabilities": ["oob"]}}
{"execute": "qmp_capabilities"}
{"return": {}}
{"execute": "netdev_add", "arguments": {"type": "user", "id":"net0"}}
{"return": {}}
{"execute": "netdev_add", "arguments": {"type": "user", "id":"net0"}}
{"return": {}}

  Fix by adding a duplicate ID check to net_client_init1() to replace
  the lost one.  The check is redundant for callers where QemuOpts
  still checks, i.e. for CLI and HMP.

Reported-by: Andrew Melnichenko 
Fixes: 08712fcb851034228b61f75bd922863a984a4f60
Cc: qemu-sta...@nongnu.org
Signed-off-by: Markus Armbruster 
Reviewed-by: Eric Blake 
Signed-off-by: Jason Wang 
---
 net/net.c | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/net/net.c b/net/net.c
index e1035f2..c1cd9c7 100644
--- a/net/net.c
+++ b/net/net.c
@@ -983,6 +983,7 @@ static int (* const 
net_client_init_fun[NET_CLIENT_DRIVER__MAX])(
 static int net_client_init1(const Netdev *netdev, bool is_netdev, Error **errp)
 {
 NetClientState *peer = NULL;
+NetClientState *nc;
 
 if (is_netdev) {
 if (netdev->type == NET_CLIENT_DRIVER_NIC ||
@@ -1010,6 +1011,12 @@ static int net_client_init1(const Netdev *netdev, bool 
is_netdev, Error **errp)
 }
 }
 
+nc = qemu_find_netdev(netdev->id);
+if (nc) {
+error_setg(errp, "Duplicate ID '%s'", netdev->id);
+return -1;
+}
+
 if (net_client_init_fun[netdev->type](netdev, netdev->id, peer, errp) < 0) 
{
 /* FIXME drop when all init functions store an Error */
 if (errp && !*errp) {
@@ -1020,8 +1027,6 @@ static int net_client_init1(const Netdev *netdev, bool 
is_netdev, Error **errp)
 }
 
 if (is_netdev) {
-NetClientState *nc;
-
 nc = qemu_find_netdev(netdev->id);
 assert(nc);
 nc->is_netdev = true;
@@ -1135,6 +1140,7 @@ void qmp_netdev_add(Netdev *netdev, Error **errp)
 void qmp_netdev_del(const char *id, Error **errp)
 {
 NetClientState *nc;
+QemuOpts *opts;
 
 nc = qemu_find_netdev(id);
 if (!nc) {
@@ -1149,6 +1155,16 @@ void qmp_netdev_del(const char *id, Error **errp)
 }
 
 qemu_del_net_client(nc);
+
+/*
+ * Wart: we need to delete the QemuOpts associated with netdevs
+ * created via CLI or HMP, to avoid bogus "Duplicate ID" errors in
+ * HMP netdev_add.
+ */
+opts = qemu_opts_find(qemu_find_opts("netdev"), id);
+if (opts) {
+qemu_opts_del(opts);
+}
 }
 
 static void netfilter_print_info(Monitor *mon, NetFilterState *nf)
-- 
2.7.4




[RESEND PULL 0/4] Net patches

2021-01-25 Thread Jason Wang
The following changes since commit e81eb5e6d108008445821e4f891fb9563016c71b:

  Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into 
staging (2021-01-23 22:34:21 +)

are available in the git repository at:

  https://github.com/jasowang/qemu.git tags/net-pull-request

for you to fetch changes up to f574633529926697ced51b6865e5c50bbb78bf1b:

  net: checksum: Introduce fine control over checksum type (2021-01-25 17:04:56 
+0800)




Bin Meng (1):
  net: checksum: Introduce fine control over checksum type

Guishan Qin (1):
  net: checksum: Add IP header checksum calculation

Markus Armbruster (1):
  net: Fix handling of id in netdev_add and netdev_del

Markus Carlstedt (1):
  net: checksum: Skip fragmented IP packets

 hw/net/allwinner-sun8i-emac.c |  2 +-
 hw/net/cadence_gem.c  |  2 +-
 hw/net/fsl_etsec/rings.c  | 18 +-
 hw/net/ftgmac100.c| 13 -
 hw/net/imx_fec.c  | 20 
 hw/net/virtio-net.c   |  2 +-
 hw/net/xen_nic.c  |  2 +-
 include/net/checksum.h|  7 ++-
 net/checksum.c| 24 +---
 net/filter-rewriter.c |  4 ++--
 net/net.c | 20 ++--
 11 files changed, 80 insertions(+), 34 deletions(-)




[RESEND PULL 3/4] net: checksum: Add IP header checksum calculation

2021-01-25 Thread Jason Wang
From: Guishan Qin 

At present net_checksum_calculate() only calculates TCP/UDP checksum
in an IP packet, but assumes the IP header checksum to be provided
by the software, e.g.: Linux kernel always calculates the IP header
checksum. However this might not always be the case, e.g.: for an IP
checksum offload enabled stack like VxWorks, the IP header checksum
can be zero.

This adds the checksum calculation of the IP header.

Signed-off-by: Guishan Qin 
Signed-off-by: Yabing Liu 
Signed-off-by: Bin Meng 
Reviewed-by: Cédric Le Goater 
Signed-off-by: Jason Wang 
---
 net/checksum.c | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/net/checksum.c b/net/checksum.c
index 5cb8b2c..dabd290 100644
--- a/net/checksum.c
+++ b/net/checksum.c
@@ -61,6 +61,7 @@ void net_checksum_calculate(uint8_t *data, int length)
 {
 int mac_hdr_len, ip_len;
 struct ip_header *ip;
+uint16_t csum;
 
 /*
  * Note: We cannot assume "data" is aligned, so the all code uses
@@ -106,6 +107,11 @@ void net_checksum_calculate(uint8_t *data, int length)
 return; /* not IPv4 */
 }
 
+/* Calculate IP checksum */
+stw_he_p(&ip->ip_sum, 0);
+csum = net_raw_checksum((uint8_t *)ip, IP_HDR_GET_LEN(ip));
+stw_be_p(&ip->ip_sum, csum);
+
 if (IP4_IS_FRAGMENT(ip)) {
 return; /* a fragmented IP packet */
 }
@@ -122,7 +128,6 @@ void net_checksum_calculate(uint8_t *data, int length)
 switch (ip->ip_p) {
 case IP_PROTO_TCP:
 {
-uint16_t csum;
 tcp_header *tcp = (tcp_header *)(ip + 1);
 
 if (ip_len < sizeof(tcp_header)) {
@@ -143,7 +148,6 @@ void net_checksum_calculate(uint8_t *data, int length)
 }
 case IP_PROTO_UDP:
 {
-uint16_t csum;
 udp_header *udp = (udp_header *)(ip + 1);
 
 if (ip_len < sizeof(udp_header)) {
-- 
2.7.4




[PATCH] Fix crash with IOthread when block commit after snapshot

2021-01-25 Thread 08005325
From: Michael Qiu 

Currently, if guest has workloads, IO thread will acquire aio_context
lock before do io_submit, it leads to segmentfault when do block commit
after snapshot. Just like below:
[Switching to thread 2 (Thread 0x7f046c312700 (LWP 108791))]
#0  0x5573f57930db in bdrv_mirror_top_pwritev ... at block/mirror.c:1420
1420in block/mirror.c
(gdb) p s->job
$17 = (MirrorBlockJob *) 0x0
(gdb) p s->stop
$18 = false
(gdb)
(gdb) bt
#0  0x5573f57930db in bdrv_mirror_top_pwritev ... at block/mirror.c:1420
#1  0x5573f5798ceb in bdrv_driver_pwritev ... at block/io.c:1183
#2  0x5573f579ae7a in bdrv_aligned_pwritev ... at block/io.c:1980
#3  0x5573f579b667 in bdrv_co_pwritev_part ... at block/io.c:2137
#4  0x5573f57886c8 in blk_do_pwritev_part ... at block/block-backend.c:1231
#5  0x5573f578879d in blk_aio_write_entry ... at block/block-backend.c:1439
#6  0x5573f58317cb in coroutine_trampoline ... at 
util/coroutine-ucontext.c:115
#7  0x7f047414a0d0 in __start_context () at /lib64/libc.so.6
#8  0x7f046c310e60 in  ()
#9  0x in  ()

Switch to qmp:
#0  0x7f04744dd4ed in __lll_lock_wait () at /lib64/libpthread.so.0
#1  0x7f04744d8de6 in _L_lock_941 () at /lib64/libpthread.so.0
#2  0x7f04744d8cdf in pthread_mutex_lock () at /lib64/libpthread.so.0
#3  0x5573f581de89 in qemu_mutex_lock_impl ... at 
util/qemu-thread-posix.c:78
#4  0x5573f575789e in block_job_add_bdrv ... at blockjob.c:223
#5  0x5573f5757ebd in block_job_create ... at blockjob.c:441
#6  0x5573f5792430 in mirror_start_job ... at block/mirror.c:1604
#7  0x5573f5794b6f in commit_active_start ... at block/mirror.c:1789

In the IO thread, when bdrv_mirror_top_pwritev runs, the job is NULL and the
stop field is false; this means the s object has not been initialized yet.
That object is initialized by block_job_create(), but the initialization is
stuck acquiring the lock.

The root cause is that QEMU releases and re-acquires the AioContext lock it
is holding; during the released window the IO thread takes the lock, and the
crash occurs.

Actually, in this situation job->job.aio_context will not equal
qemu_get_aio_context(), and will be the same as bs->aio_context, so there is
no need to release the lock, because bdrv_root_attach_child() will not
change the context.

This patch fixes the issue.

Signed-off-by: Michael Qiu 
---
 blockjob.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index c6e20e2f..e1d41db9 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -214,12 +214,14 @@ int block_job_add_bdrv(BlockJob *job, const char *name, 
BlockDriverState *bs,
 BdrvChild *c;
 
 bdrv_ref(bs);
-if (job->job.aio_context != qemu_get_aio_context()) {
+if (bdrv_get_aio_context(bs) != job->job.aio_context &&
+job->job.aio_context != qemu_get_aio_context()) {
 aio_context_release(job->job.aio_context);
 }
 c = bdrv_root_attach_child(bs, name, &child_job, job->job.aio_context,
perm, shared_perm, job, errp);
-if (job->job.aio_context != qemu_get_aio_context()) {
+if (bdrv_get_aio_context(bs) != job->job.aio_context &&
+job->job.aio_context != qemu_get_aio_context()) {
 aio_context_acquire(job->job.aio_context);
 }
 if (c == NULL) {
-- 
2.22.0





[RESEND PULL 2/4] net: checksum: Skip fragmented IP packets

2021-01-25 Thread Jason Wang
From: Markus Carlstedt 

To calculate the TCP/UDP checksum we need the whole datagram. Unless
the hardware has some logic to collect all fragments before sending
the whole datagram first, it can only be done by the network stack,
which is normally the case for the NICs we have seen so far.

Skip these fragmented IP packets to avoid checksum corruption.
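
For context, an IPv4 packet belongs to a fragmented datagram when its "more
fragments" flag is set or its fragment offset is non-zero, which is what the
IP4_IS_FRAGMENT() macro used below tests. A minimal sketch of such a check;
the constants and helper name here are illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Sketch: fragment test on the 16-bit flags/fragment-offset field of the
     * IPv4 header (host byte order).  Bit 0x2000 is "more fragments" (MF),
     * the low 13 bits are the fragment offset; a packet is part of a
     * fragmented datagram if either is non-zero.
     */
    #define IP_MF      0x2000
    #define IP_OFFMASK 0x1fff

    static bool ip4_is_fragment(uint16_t ip_off)
    {
        return (ip_off & (IP_MF | IP_OFFMASK)) != 0;
    }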

Signed-off-by: Markus Carlstedt 
Signed-off-by: Bin Meng 
Signed-off-by: Jason Wang 
---
 net/checksum.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/net/checksum.c b/net/checksum.c
index aaa4000..5cb8b2c 100644
--- a/net/checksum.c
+++ b/net/checksum.c
@@ -106,6 +106,10 @@ void net_checksum_calculate(uint8_t *data, int length)
 return; /* not IPv4 */
 }
 
+if (IP4_IS_FRAGMENT(ip)) {
+return; /* a fragmented IP packet */
+}
+
 ip_len = lduw_be_p(&ip->ip_len);
 
 /* Last, check that we have enough data for the all IP frame */
-- 
2.7.4




[RESEND PULL 4/4] net: checksum: Introduce fine control over checksum type

2021-01-25 Thread Jason Wang
From: Bin Meng 

At present net_checksum_calculate() blindly calculates all types of
checksums (IP, TCP, UDP). Some NICs may have a per type setting in
their BDs to control what checksum should be offloaded. To support
such hardware behavior, introduce a 'csum_flag' parameter to the
net_checksum_calculate() API to allow fine control over what type
checksum is calculated.

Existing users of this API are updated accordingly.
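
The flag names visible in the hunks below (CSUM_IP, CSUM_TCP, CSUM_UDP,
CSUM_ALL) combine as a bitmask, so a device model can request exactly the
checksums its descriptor flags ask for. A usage sketch, with made-up
parameter names standing in for the device-specific bits:

    #include <stdbool.h>
    #include <stdint.h>
    #include "net/checksum.h"  /* CSUM_IP, CSUM_TCP, CSUM_UDP, net_checksum_calculate() */

    /*
     * Sketch: how a device model might translate its per-descriptor offload
     * bits into the new csum_flag bitmask.  The bool parameters stand in for
     * whatever flags the hardware's buffer descriptors define.
     */
    static void tx_offload_checksums(uint8_t *buf, int len,
                                     bool want_ip, bool want_tcp, bool want_udp)
    {
        int csum = 0;

        if (want_ip) {
            csum |= CSUM_IP;
        }
        if (want_tcp) {
            csum |= CSUM_TCP;
        }
        if (want_udp) {
            csum |= CSUM_UDP;
        }
        if (csum) {
            net_checksum_calculate(buf, len, csum);
        }
    }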

Signed-off-by: Bin Meng 
Signed-off-by: Jason Wang 
---
 hw/net/allwinner-sun8i-emac.c |  2 +-
 hw/net/cadence_gem.c  |  2 +-
 hw/net/fsl_etsec/rings.c  | 18 +-
 hw/net/ftgmac100.c| 13 -
 hw/net/imx_fec.c  | 20 
 hw/net/virtio-net.c   |  2 +-
 hw/net/xen_nic.c  |  2 +-
 include/net/checksum.h|  7 ++-
 net/checksum.c| 18 ++
 net/filter-rewriter.c |  4 ++--
 10 files changed, 55 insertions(+), 33 deletions(-)

diff --git a/hw/net/allwinner-sun8i-emac.c b/hw/net/allwinner-sun8i-emac.c
index 38d3285..0427689 100644
--- a/hw/net/allwinner-sun8i-emac.c
+++ b/hw/net/allwinner-sun8i-emac.c
@@ -514,7 +514,7 @@ static void allwinner_sun8i_emac_transmit(AwSun8iEmacState 
*s)
 /* After the last descriptor, send the packet */
 if (desc.status2 & TX_DESC_STATUS2_LAST_DESC) {
 if (desc.status2 & TX_DESC_STATUS2_CHECKSUM_MASK) {
-net_checksum_calculate(packet_buf, packet_bytes);
+net_checksum_calculate(packet_buf, packet_bytes, CSUM_ALL);
 }
 
 qemu_send_packet(nc, packet_buf, packet_bytes);
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index 7a53469..9a4474a 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -1266,7 +1266,7 @@ static void gem_transmit(CadenceGEMState *s)
 
 /* Is checksum offload enabled? */
 if (s->regs[GEM_DMACFG] & GEM_DMACFG_TXCSUM_OFFL) {
-net_checksum_calculate(s->tx_packet, total_bytes);
+net_checksum_calculate(s->tx_packet, total_bytes, 
CSUM_ALL);
 }
 
 /* Update MAC statistics */
diff --git a/hw/net/fsl_etsec/rings.c b/hw/net/fsl_etsec/rings.c
index 628648a..121415a 100644
--- a/hw/net/fsl_etsec/rings.c
+++ b/hw/net/fsl_etsec/rings.c
@@ -183,13 +183,11 @@ static void process_tx_fcb(eTSEC *etsec)
 uint8_t *l3_header = etsec->tx_buffer + 8 + l3_header_offset;
 /* L4 header */
 uint8_t *l4_header = l3_header + l4_header_offset;
+int csum = 0;
 
 /* if packet is IP4 and IP checksum is requested */
 if (flags & FCB_TX_IP && flags & FCB_TX_CIP) {
-/* do IP4 checksum (TODO This function does TCP/UDP checksum
- * but not sure if it also does IP4 checksum.) */
-net_checksum_calculate(etsec->tx_buffer + 8,
-etsec->tx_buffer_len - 8);
+csum |= CSUM_IP;
 }
 /* TODO Check the correct usage of the PHCS field of the FCB in case the 
NPH
  * flag is on */
@@ -201,9 +199,7 @@ static void process_tx_fcb(eTSEC *etsec)
 /* if checksum is requested */
 if (flags & FCB_TX_CTU) {
 /* do UDP checksum */
-
-net_checksum_calculate(etsec->tx_buffer + 8,
-etsec->tx_buffer_len - 8);
+csum |= CSUM_UDP;
 } else {
 /* set checksum field to 0 */
 l4_header[6] = 0;
@@ -211,10 +207,14 @@ static void process_tx_fcb(eTSEC *etsec)
 }
 } else if (flags & FCB_TX_CTU) { /* if TCP and checksum is requested */
 /* do TCP checksum */
-net_checksum_calculate(etsec->tx_buffer + 8,
-   etsec->tx_buffer_len - 8);
+csum |= CSUM_TCP;
 }
 }
+
+if (csum) {
+net_checksum_calculate(etsec->tx_buffer + 8,
+   etsec->tx_buffer_len - 8, csum);
+}
 }
 
 static void process_tx_bd(eTSEC *etsec,
diff --git a/hw/net/ftgmac100.c b/hw/net/ftgmac100.c
index 782ff19..25685ba 100644
--- a/hw/net/ftgmac100.c
+++ b/hw/net/ftgmac100.c
@@ -564,6 +564,7 @@ static void ftgmac100_do_tx(FTGMAC100State *s, uint32_t 
tx_ring,
 ptr += len;
 frame_size += len;
 if (bd.des0 & FTGMAC100_TXDES0_LTS) {
+int csum = 0;
 
 /* Check for VLAN */
 if (flags & FTGMAC100_TXDES1_INS_VLANTAG &&
@@ -573,8 +574,18 @@ static void ftgmac100_do_tx(FTGMAC100State *s, uint32_t 
tx_ring,
 }
 
 if (flags & FTGMAC100_TXDES1_IP_CHKSUM) {
-net_checksum_calculate(s->frame, frame_size);
+csum |= CSUM_IP;
 }
+if (flags & FTGMAC100_TXDES1_TCP_CHKSUM) {
+csum |= CSUM_TCP;
+}
+if (flags & FTGMAC100_TXDES1_UDP_CHKSUM) {
+csum |= CSUM_UDP;
+}
+ 

Re: [PATCH v7 10/11] iotests: rewrite check into python

2021-01-25 Thread Vladimir Sementsov-Ogievskiy

25.01.2021 15:02, Kevin Wolf wrote:

Am 23.01.2021 um 16:08 hat Vladimir Sementsov-Ogievskiy geschrieben:

22.01.2021 19:08, Kevin Wolf wrote:

Am 16.01.2021 um 14:44 hat Vladimir Sementsov-Ogievskiy geschrieben:

Just use classes introduced in previous three commits. Behavior
difference is described in these three commits.

Drop group file, as it becomes unused.

Drop common.env: now check is in python, and for tests we use same
python interpreter that runs the check itself. Use build environment
PYTHON in check-block instead, to keep "make check" use the same
python.

Signed-off-by: Vladimir Sementsov-Ogievskiy 



diff --git a/tests/check-block.sh b/tests/check-block.sh
index fb4c1baae9..26eb1c0a9b 100755
--- a/tests/check-block.sh
+++ b/tests/check-block.sh
@@ -69,7 +69,7 @@ export QEMU_CHECK_BLOCK_AUTO=1
   ret=0
   for fmt in $format_list ; do
-./check -makecheck -$fmt $group || ret=1
+${PYTHON} ./check -makecheck -$fmt $group || ret=1
   done


When I add an echo to print that command line, it seems that ${PYTHON}
is empty for me. Is this expected?


It seems to be defined when called from make check. Did you
just call check-block directly?


It's not intentional, but I think it's OK: if PYTHON is not defined
let's just execute check as self-executable. And for make-check PYTHON
is defined and correct python is used.


Hm, where does that happen in 'make check'? It seems the old makefiles
were quite readable in comparison to what we have now...

Anyway, I think 'make check-block' should run just the block-specific
subset of 'make check', without changing the behaviour of the remaining
tests. Anything that can be started through make should respect the
configured Python interpreter.


   exit $ret
diff --git a/tests/qemu-iotests/check b/tests/qemu-iotests/check
index 952762d5ed..914321806a 100755
--- a/tests/qemu-iotests/check
+++ b/tests/qemu-iotests/check


[..]


-if [ -x "$binary" ]
-then
-export QEMU_PROG="$build_root/$binary"
-break
-fi
-done
-popd > /dev/null
-[ "$QEMU_PROG" = "" ] && _init_error "qemu not found"
-fi


I think this else branch is kind of important (if there is no system
emulator binary for the host architecture, find _any_ system emulator
binary that was built). I can't find its equivalent in the new code.


Hmm, I decided that testing the "first found" emulator is strange... It seems
like we have several emulators and the user doesn't care which one would be
tested?


Remember that we're mainly testing the block layer, which is the same in
all qemu-system-* binaries anyway. So yes, any system emulator binary is
good enough for many test cases, and certainly better than having no
system emulator. Differences are only in the supported guest devices,
which may cause some tests to be skipped.

If there are multiple binaries that we could use, we could change the
way to select one instead of just the first one, e.g. by trying x86_64
first because this is what enables the largest set of tests.

But anything is better than failing with "qemu not found".


Probably we should instead use a qemu-system-* binary only if there is
exactly one matching binary, and fail if there are many.


No, 'make check' shouldn't fail because I built arm and ppc emulators on
my x86_64 machine without also building a x86_64 emulator. (And I think
this is a case that fails both with the actual patch under review and
with your suggested change.)



OK, I'll send a squash-in



--
Best regards,
Vladimir



Re: [PATCH v8 2/5] iotests: add testenv.py

2021-01-25 Thread Vladimir Sementsov-Ogievskiy

24.01.2021 00:04, Vladimir Sementsov-Ogievskiy wrote:

Add TestEnv class, which will handle test environment in a new python
iotests running framework.

Don't add compat=1.1 for qcow2 IMGOPTS, as v3 is default anyway.

Signed-off-by: Vladimir Sementsov-Ogievskiy
---
  tests/qemu-iotests/testenv.py | 278 ++
  1 file changed, 278 insertions(+)
  create mode 100644 tests/qemu-iotests/testenv.py

diff --git a/tests/qemu-iotests/testenv.py b/tests/qemu-iotests/testenv.py
new file mode 100644
index 00..348af593e9


[..]


+def init_binaries(self):
+"""Init binary path variables:
+ PYTHON (for bash tests)
+ QEMU_PROG, QEMU_IMG_PROG, QEMU_IO_PROG, QEMU_NBD_PROG, QSD_PROG
+ SOCKET_SCM_HELPER
+"""
+self.python = sys.executable
+
+def root(*names):
+return os.path.join(self.build_root, *names)
+
+arch = os.uname().machine
+if 'ppc64' in arch:
+arch = 'ppc64'
+
+self.qemu_prog = os.getenv('QEMU_PROG', root(f'qemu-system-{arch}'))
+if not os.path.exists(self.qemu_prog):
+pattern = root('qemu-system-*')
+progs = glob.glob(pattern)
+if not progs:
+sys.exit(f"Not found any Qemu binary by pattern '{pattern}'")
+if len(progs) > 1:
+progs_list = ', '.join(progs)
+sys.exit(f"Several non '{arch}' qemu binaries found: "
+ f"{progs_list}, please set QEMU_PROG environment "
+ "variable")
+self.qemu_prog = progs[0]



squash-in:

diff --git a/tests/qemu-iotests/testenv.py b/tests/qemu-iotests/testenv.py
index 348af593e9..1633510caf 100644
--- a/tests/qemu-iotests/testenv.py
+++ b/tests/qemu-iotests/testenv.py
@@ -129,14 +129,14 @@ class TestEnv(AbstractContextManager['TestEnv']):
 if not os.path.exists(self.qemu_prog):
 pattern = root('qemu-system-*')
 progs = glob.glob(pattern)
-if not progs:
-sys.exit(f"Not found any Qemu binary by pattern '{pattern}'")
-if len(progs) > 1:
-progs_list = ', '.join(progs)
-sys.exit(f"Several non '{arch}' qemu binaries found: "
- f"{progs_list}, please set QEMU_PROG environment "
- "variable")
-self.qemu_prog = progs[0]
+found = False
+for p in progs:
+if os.access(p, os.X_OK):
+self.qemu_prog = p
+found = True
+if not found:
+sys.exit("Not found any Qemu executable binary by pattern "
+ f"'{pattern}'")


--
Best regards,
Vladimir



Re: [PATCH] replay: fix replay of the interrupts

2021-01-25 Thread Claudio Fontana
On 1/25/21 1:12 PM, Alex Bennée wrote:
> 
> Paolo Bonzini  writes:
> 
>> In general I agree, but != means that rr disabled returns true. In general
>> it seems to me that rr disabled should work more or less the same as record
>> mode, because there is no replay log to provide the checkpoints.
> 
> Is this not an argument to combine the mode and check into replay.h
> inline helpers with some clear semantic documentation and the call sites
> become self documenting?
> 
> if (deadline == 0 && replay_recording_or_checkpoint())
> 
> which also makes things easier to compile away if replay isn't there?


Seems that the TCG build faces a similar issue to the one I was facing with
the non-TCG build, before the non-TCG build got functional again (for x86).

We solved the non-TCG build problem, by not compiling replay at all for 
non-TCG, plus closing our nose and stubbing away what couldn't be completely 
removed (yet).

But the CONFIG_TCG build has the same legitimate requirement towards a 
non-CONFIG_REPLAY build.

ie, like we have tcg_enabled(), should we have replay_enabled()? Maybe it could 
be reworked starting from replay_events_enabled()?

And then when things are refactored properly for replay_enabled(), a non-REPLAY 
TCG build can basically ignore all the inner workings of replay.
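
(For illustration, a minimal sketch of the kind of helper Alex suggests,
built only from the replay_mode / replay_has_checkpoint() symbols already
used in the patch below; the helper name and the !CONFIG_REPLAY stub are
assumptions, not existing QEMU API.)

    /*
     * Sketch for a replay.h-style inline helper; the name and the
     * !CONFIG_REPLAY stub are hypothetical, the symbols it uses exist.
     */
    #ifdef CONFIG_REPLAY
    static inline bool replay_recording_or_checkpoint(void)
    {
        /* true while recording, or when the PLAY-mode checkpoint is ready */
        return replay_mode != REPLAY_MODE_PLAY || replay_has_checkpoint();
    }
    #else
    static inline bool replay_recording_or_checkpoint(void)
    {
        return true;   /* no replay log: behave like record mode */
    }
    #endif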

Ciao,

Claudio

> 
>>
>> Paolo
>>
>> Il lun 25 gen 2021, 06:38 Pavel Dovgalyuk  ha
>> scritto:
>>
>>> On 23.01.2021 21:15, Paolo Bonzini wrote:
 On 19/01/21 13:39, Pavel Dovgalyuk wrote:
> Sometimes interrupt event comes at the same time with
> the virtual timers. In this case replay tries to proceed
> the timers, because deadline for them is zero.
> This patch allows processing interrupts and exceptions
> by entering the vCPU execution loop, when deadline is zero,
> but checkpoint associated with virtual timers is not ready
> to be replayed.
>
> Signed-off-by: Pavel Dovgalyuk 
> ---
>   accel/tcg/tcg-cpus-icount.c |8 +++-
>   1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/accel/tcg/tcg-cpus-icount.c b/accel/tcg/tcg-cpus-icount.c
> index 9f45432275..a6d2bb8a88 100644
> --- a/accel/tcg/tcg-cpus-icount.c
> +++ b/accel/tcg/tcg-cpus-icount.c
> @@ -81,7 +81,13 @@ void icount_handle_deadline(void)
>   int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
>
>>> QEMU_TIMER_ATTR_ALL);
> -if (deadline == 0) {
> +/*
> + * Instructions, interrupts, and exceptions are processed in
> cpu-exec.
> + * Don't interrupt cpu thread, when these events are waiting
> + * (i.e., there is no checkpoint)
> + */
> +if (deadline == 0
> +&& (replay_mode == REPLAY_MODE_RECORD ||
> replay_has_checkpoint())) {

 Should this be replay_mode != REPLAY_MODE_PLAY ||
>>> replay_has_checkpoint()?
>>>
>>> It was the first idea, but I thought, that == is more straightforward
>>> to understand than !=.
>>>
>>> Pavel Dovgalyuk
>>>
>>>
> 
> 




Re: [PATCH 09/25] qom: use qemu_printf to print help for user-creatable objects

2021-01-25 Thread Markus Armbruster
Paolo Bonzini  writes:

> This is needed when we add help support for object_add.
>
> Signed-off-by: Paolo Bonzini 

Reviewed-by: Markus Armbruster 




Re: [PATCH 10/25] hmp: special case help options for object_add

2021-01-25 Thread Markus Armbruster
Paolo Bonzini  writes:

> Fix "object_add help" and "object_add TYPE,help".
>
> Signed-off-by: Paolo Bonzini 

Standard question when I read "Fix $interface" in a commit message: how
exactly is it broken?




Re: [PATCH 10/25] hmp: special case help options for object_add

2021-01-25 Thread Paolo Bonzini

On 25/01/21 13:48, Markus Armbruster wrote:

Paolo Bonzini  writes:


Fix "object_add help" and "object_add TYPE,help".

Signed-off-by: Paolo Bonzini 


Standard question when I read "Fix $interface" in a commit message: how
exactly is it broken?


It doesn't work at all ("Error: parameter 'id' is missing").

Paolo




[PATCH] s390x/cpu_model: disallow unpack for --only-migratable

2021-01-25 Thread Christian Borntraeger
secure execution (aka protected virtualization) guests cannot be
migrated at the moment. Disallow the unpack facility if the user
specifies --only-migratable.

Signed-off-by: Christian Borntraeger 
---
 target/s390x/cpu_models.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
index 35179f9dc7ba..0fa082ae2546 100644
--- a/target/s390x/cpu_models.c
+++ b/target/s390x/cpu_models.c
@@ -26,6 +26,7 @@
 #include "qapi/qmp/qdict.h"
 #ifndef CONFIG_USER_ONLY
 #include "sysemu/arch_init.h"
+#include "sysemu/sysemu.h"
 #include "hw/pci/pci.h"
 #endif
 #include "qapi/qapi-commands-machine-target.h"
@@ -878,6 +879,11 @@ static void check_compatibility(const S390CPUModel 
*max_model,
 return;
 }
 
+if (only_migratable && test_bit(S390_FEAT_UNPACK, model->features)) {
+error_setg(errp, "The unpack facility is not compatible with "
+   "the --only-migratable option");
+}
+
 /* detect the missing features to properly report them */
 bitmap_andnot(missing, model->features, max_model->features, 
S390_FEAT_MAX);
 if (bitmap_empty(missing, S390_FEAT_MAX)) {
-- 
2.28.0




Re: [PATCH 11/25] remove -writeconfig

2021-01-25 Thread Markus Armbruster
Paolo Bonzini  writes:

> Like -set and -readconfig, it would not really be too hard to
> extend -writeconfig to parsing mechanisms other than QemuOpts.
> However, the uses of -writeconfig are substantially more
> limited, as it is generally easier to write the configuration
> by hand in the first place.  In addition, -writeconfig does
> not even try to detect cases where it prints incorrect
> syntax (for example if values have a quote in them, since
> qemu_config_parse does not support any kind of escaping).
> Just remove it.
>
> Signed-off-by: Paolo Bonzini 

I love the "and how give me a config file for all that" idea, but I
agree our -writeconfig is flawed.  I hope we can bring it back in more
useful shape.

No deprecation grace period?

> ---
>  include/qemu/config-file.h |  1 -
>  qemu-options.hx| 13 ++--
>  softmmu/vl.c   | 19 -
>  util/qemu-config.c | 42 --
>  4 files changed, 2 insertions(+), 73 deletions(-)
>
> diff --git a/include/qemu/config-file.h b/include/qemu/config-file.h
> index 29226107bd..7d26fe3816 100644
> --- a/include/qemu/config-file.h
> +++ b/include/qemu/config-file.h
> @@ -10,7 +10,6 @@ void qemu_add_opts(QemuOptsList *list);
>  void qemu_add_drive_opts(QemuOptsList *list);
>  int qemu_global_option(const char *str);
>  
> -void qemu_config_write(FILE *fp);
>  int qemu_config_parse(FILE *fp, QemuOptsList **lists, const char *fname);
>  
>  int qemu_read_config_file(const char *filename);
> diff --git a/qemu-options.hx b/qemu-options.hx
> index 62791f56d8..7480b6a03f 100644
> --- a/qemu-options.hx
> +++ b/qemu-options.hx
> @@ -4301,23 +4301,14 @@ SRST
>  ERST
>  
>  DEF("readconfig", HAS_ARG, QEMU_OPTION_readconfig,
> -"-readconfig \n", QEMU_ARCH_ALL)
> +"-readconfig \n"
> +"read config file\n", QEMU_ARCH_ALL)

Sure this belongs?

>  SRST
>  ``-readconfig file``
>  Read device configuration from file. This approach is useful when
>  you want to spawn QEMU process with many command line options but
>  you don't want to exceed the command line character limit.
>  ERST
> -DEF("writeconfig", HAS_ARG, QEMU_OPTION_writeconfig,
> -"-writeconfig \n"
> -"read/write config file\n", QEMU_ARCH_ALL)
> -SRST
> -``-writeconfig file``
> -Write device configuration to file. The file can be either filename
> -to save command line and device configuration into file or dash
> -``-``) character to print the output to stdout. This can be later
> -used as input file for ``-readconfig`` option.
> -ERST
>  
>  DEF("no-user-config", 0, QEMU_OPTION_nouserconfig,
>  "-no-user-config\n"
> diff --git a/softmmu/vl.c b/softmmu/vl.c
> index 7ddf405d76..d34307bf11 100644
> --- a/softmmu/vl.c
> +++ b/softmmu/vl.c
> @@ -3337,25 +3337,6 @@ void qemu_init(int argc, char **argv, char **envp)
>  }
>  display_remote++;
>  break;
> -case QEMU_OPTION_writeconfig:
> -{
> -FILE *fp;
> -if (strcmp(optarg, "-") == 0) {
> -fp = stdout;
> -} else {
> -fp = fopen(optarg, "w");
> -if (fp == NULL) {
> -error_report("open %s: %s", optarg,
> - strerror(errno));
> -exit(1);
> -}
> -}
> -qemu_config_write(fp);
> -if (fp != stdout) {
> -fclose(fp);
> -}
> -break;
> -}
>  case QEMU_OPTION_qtest:
>  qtest_chrdev = optarg;
>  break;
> diff --git a/util/qemu-config.c b/util/qemu-config.c
> index e2a700b284..a4a1324c68 100644
> --- a/util/qemu-config.c
> +++ b/util/qemu-config.c
> @@ -307,48 +307,6 @@ void qemu_add_opts(QemuOptsList *list)
>  abort();
>  }
>  
> -struct ConfigWriteData {
> -QemuOptsList *list;
> -FILE *fp;
> -};
> -
> -static int config_write_opt(void *opaque, const char *name, const char 
> *value,
> -Error **errp)
> -{
> -struct ConfigWriteData *data = opaque;
> -
> -fprintf(data->fp, "  %s = \"%s\"\n", name, value);
> -return 0;
> -}
> -
> -static int config_write_opts(void *opaque, QemuOpts *opts, Error **errp)
> -{
> -struct ConfigWriteData *data = opaque;
> -const char *id = qemu_opts_id(opts);
> -
> -if (id) {
> -fprintf(data->fp, "[%s \"%s\"]\n", data->list->name, id);
> -} else {
> -fprintf(data->fp, "[%s]\n", data->list->name);
> -}
> -qemu_opt_foreach(opts, config_write_opt, data, NULL);
> -fprintf(data->fp, "\n");
> -return 0;
> -}
> -
> -void qemu_config_write(FILE *fp)
> -{
> -struct ConfigWriteData data = { .fp = fp };
> -QemuOpt

Re: [PATCH] s390x/cpu_model: disallow unpack for --only-migratable

2021-01-25 Thread David Hildenbrand
On 25.01.21 13:53, Christian Borntraeger wrote:
> secure execution (aka protected virtualization) guests cannot be
> migrated at the moment. Disallow the unpack facility if the user
> specifies --only-migratable.
> 
> Signed-off-by: Christian Borntraeger 
> ---
>  target/s390x/cpu_models.c | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
> index 35179f9dc7ba..0fa082ae2546 100644
> --- a/target/s390x/cpu_models.c
> +++ b/target/s390x/cpu_models.c
> @@ -26,6 +26,7 @@
>  #include "qapi/qmp/qdict.h"
>  #ifndef CONFIG_USER_ONLY
>  #include "sysemu/arch_init.h"
> +#include "sysemu/sysemu.h"
>  #include "hw/pci/pci.h"
>  #endif
>  #include "qapi/qapi-commands-machine-target.h"
> @@ -878,6 +879,11 @@ static void check_compatibility(const S390CPUModel 
> *max_model,
>  return;
>  }
>  
> +if (only_migratable && test_bit(S390_FEAT_UNPACK, model->features)) {
> +error_setg(errp, "The unpack facility is not compatible with "
> +   "the --only-migratable option");

return; ?


This implies that a VM with "-cpu host" might not start anymore, right?


-- 
Thanks,

David / dhildenb




Re: [PATCH] s390x/cpu_model: disallow unpack for --only-migratable

2021-01-25 Thread Christian Borntraeger



On 25.01.21 13:57, David Hildenbrand wrote:
> On 25.01.21 13:53, Christian Borntraeger wrote:
>> secure execution (aka protected virtualization) guests cannot be
>> migrated at the moment. Disallow the unpack facility if the user
>> specifies --only-migratable.
>>
>> Signed-off-by: Christian Borntraeger 
>> ---
>>  target/s390x/cpu_models.c | 6 ++
>>  1 file changed, 6 insertions(+)
>>
>> diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
>> index 35179f9dc7ba..0fa082ae2546 100644
>> --- a/target/s390x/cpu_models.c
>> +++ b/target/s390x/cpu_models.c
>> @@ -26,6 +26,7 @@
>>  #include "qapi/qmp/qdict.h"
>>  #ifndef CONFIG_USER_ONLY
>>  #include "sysemu/arch_init.h"
>> +#include "sysemu/sysemu.h"
>>  #include "hw/pci/pci.h"
>>  #endif
>>  #include "qapi/qapi-commands-machine-target.h"
>> @@ -878,6 +879,11 @@ static void check_compatibility(const S390CPUModel 
>> *max_model,
>>  return;
>>  }
>>  
>> +if (only_migratable && test_bit(S390_FEAT_UNPACK, model->features)) {
>> +error_setg(errp, "The unpack facility is not compatible with "
>> +   "the --only-migratable option");
> 
> return; ?

of course.

> 
> 
> This implies that a VM with "-cpu host" might not start anymore, right?

Only if --only-migratable is set.



Re: [PATCH] replay: fix replay of the interrupts

2021-01-25 Thread Claudio Fontana
On 1/25/21 1:43 PM, Claudio Fontana wrote:
> On 1/25/21 1:12 PM, Alex Bennée wrote:
>>
>> Paolo Bonzini  writes:
>>
>>> In general I agree, but != means that rr disabled returns true. In general
>>> it seems to me that rr disabled should work more or less the same as record
>>> mode, because there is no replay log to provide the checkpoints.
>>
>> Is this not an argument to combine the mode and check into replay.h
>> inline helpers with some clear semantic documentation and the call sites
>> become self documenting?
>>
>> if (deadline == 0 && replay_recording_or_checkpoint())
>>
>> which also makes things easier to compile away if replay isn't there?
> 
> 
> Seems that the TCG build faces a similar issue to the one I was facing with
> the non-TCG build, before the non-TCG build got functional again (for x86).
> 
> We solved the non-TCG build problem, by not compiling replay at all for 
> non-TCG, plus closing our nose and stubbing away what couldn't be completely 
> removed (yet).
> 
> But the CONFIG_TCG build has the same legitimate requirement towards a 
> non-CONFIG_REPLAY build.
> 
> ie, like we have tcg_enabled(), should we have replay_enabled()? Maybe it 
> could be reworked starting from replay_events_enabled()?
> 
> And then when things are refactored properly for replay_enabled(), a 
> non-REPLAY TCG build can basically ignore all the inner workings of replay.
> 

I guess to summarize the above, should there be a CONFIG_REPLAY, dependent on 
CONFIG_TCG, by default on,
but which could be disabled with

--disable-replay

?

Thanks,

Claudio

> 
>>
>>>
>>> Paolo
>>>
>>> Il lun 25 gen 2021, 06:38 Pavel Dovgalyuk  ha
>>> scritto:
>>>
 On 23.01.2021 21:15, Paolo Bonzini wrote:
> On 19/01/21 13:39, Pavel Dovgalyuk wrote:
>> Sometimes interrupt event comes at the same time with
>> the virtual timers. In this case replay tries to proceed
>> the timers, because deadline for them is zero.
>> This patch allows processing interrupts and exceptions
>> by entering the vCPU execution loop, when deadline is zero,
>> but checkpoint associated with virtual timers is not ready
>> to be replayed.
>>
>> Signed-off-by: Pavel Dovgalyuk 
>> ---
>>   accel/tcg/tcg-cpus-icount.c |8 +++-
>>   1 file changed, 7 insertions(+), 1 deletion(-)
>>
>> diff --git a/accel/tcg/tcg-cpus-icount.c b/accel/tcg/tcg-cpus-icount.c
>> index 9f45432275..a6d2bb8a88 100644
>> --- a/accel/tcg/tcg-cpus-icount.c
>> +++ b/accel/tcg/tcg-cpus-icount.c
>> @@ -81,7 +81,13 @@ void icount_handle_deadline(void)
>>   int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
>>
 QEMU_TIMER_ATTR_ALL);
>> -if (deadline == 0) {
>> +/*
>> + * Instructions, interrupts, and exceptions are processed in
>> cpu-exec.
>> + * Don't interrupt cpu thread, when these events are waiting
>> + * (i.e., there is no checkpoint)
>> + */
>> +if (deadline == 0
>> +&& (replay_mode == REPLAY_MODE_RECORD ||
>> replay_has_checkpoint())) {
>
> Should this be replay_mode != REPLAY_MODE_PLAY ||
 replay_has_checkpoint()?

 It was the first idea, but I thought, that == is more straightforward
 to understand than !=.

 Pavel Dovgalyuk


>>
>>
> 




Re: [PATCH] s390x/cpu_model: disallow unpack for --only-migratable

2021-01-25 Thread David Hildenbrand
On 25.01.21 13:59, Christian Borntraeger wrote:
> 
> 
> On 25.01.21 13:57, David Hildenbrand wrote:
>> On 25.01.21 13:53, Christian Borntraeger wrote:
>>> secure execution (aka protected virtualization) guests cannot be
>>> migrated at the moment. Disallow the unpack facility if the user
>>> specifies --only-migratable.
>>>
>>> Signed-off-by: Christian Borntraeger 
>>> ---
>>>  target/s390x/cpu_models.c | 6 ++
>>>  1 file changed, 6 insertions(+)
>>>
>>> diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
>>> index 35179f9dc7ba..0fa082ae2546 100644
>>> --- a/target/s390x/cpu_models.c
>>> +++ b/target/s390x/cpu_models.c
>>> @@ -26,6 +26,7 @@
>>>  #include "qapi/qmp/qdict.h"
>>>  #ifndef CONFIG_USER_ONLY
>>>  #include "sysemu/arch_init.h"
>>> +#include "sysemu/sysemu.h"
>>>  #include "hw/pci/pci.h"
>>>  #endif
>>>  #include "qapi/qapi-commands-machine-target.h"
>>> @@ -878,6 +879,11 @@ static void check_compatibility(const S390CPUModel 
>>> *max_model,
>>>  return;
>>>  }
>>>  
>>> +if (only_migratable && test_bit(S390_FEAT_UNPACK, model->features)) {
>>> +error_setg(errp, "The unpack facility is not compatible with "
>>> +   "the --only-migratable option");
>>
>> return; ?
> 
> of course.
> 
>>
>>
>> This implies that a VM with "-cpu host" might not start anymore, right?
> 
> Only if --only-migratable is set.
> 

Right, that's what I meant

Reviewed-by: David Hildenbrand 

-- 
Thanks,

David / dhildenb




[PATCH v2] s390x/cpu_model: disallow unpack for --only-migratable

2021-01-25 Thread Christian Borntraeger
secure execution (aka protected virtualization) guests cannot be
migrated at the moment. Disallow the unpack facility if the user
specifies --only-migratable.

Signed-off-by: Christian Borntraeger 
---
v1->v2:
- add missing return
- protect check with CONFIG_USER_ONLY for building non softmmu binaries

 target/s390x/cpu_models.c | 9 +
 1 file changed, 9 insertions(+)

diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
index 35179f9dc7ba..e844a4007210 100644
--- a/target/s390x/cpu_models.c
+++ b/target/s390x/cpu_models.c
@@ -26,6 +26,7 @@
 #include "qapi/qmp/qdict.h"
 #ifndef CONFIG_USER_ONLY
 #include "sysemu/arch_init.h"
+#include "sysemu/sysemu.h"
 #include "hw/pci/pci.h"
 #endif
 #include "qapi/qapi-commands-machine-target.h"
@@ -878,6 +879,14 @@ static void check_compatibility(const S390CPUModel 
*max_model,
 return;
 }
 
+#ifndef CONFIG_USER_ONLY
+if (only_migratable && test_bit(S390_FEAT_UNPACK, model->features)) {
+error_setg(errp, "The unpack facility is not compatible with "
+   "the --only-migratable option");
+   return;
+}
+#endif
+
 /* detect the missing features to properly report them */
 bitmap_andnot(missing, model->features, max_model->features, 
S390_FEAT_MAX);
 if (bitmap_empty(missing, S390_FEAT_MAX)) {
-- 
2.28.0




[PATCH] error: Fix "Converting to ERRP_GUARD()" doc on "valid at return"

2021-01-25 Thread Markus Armbruster
Setting errp = NULL is wrong: the automatic error propagation still
propagates the dangling pointer _auto_errp_prop.local_err.  We need to
set *errp = NULL to clear the dangling pointer.
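
For instance, a minimal sketch of the corrected pattern in a function
converted to ERRP_GUARD(); foo() and do_something() are placeholder names:

    #include "qapi/error.h"

    /*
     * Sketch: recovering from an error locally in a function converted to
     * ERRP_GUARD(); foo() and do_something() are placeholders.
     */
    void foo(Error **errp)
    {
        ERRP_GUARD();

        do_something(errp);
        if (*errp) {
            /* handle the failure locally ... */
            error_free(*errp);
            *errp = NULL;   /* clear the pointer; plain "errp = NULL" would
                             * leave *errp dangling for the auto-propagation */
        }
    }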

Signed-off-by: Markus Armbruster 
---
 include/qapi/error.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/qapi/error.h b/include/qapi/error.h
index eaa05c4837..4a9260b0cc 100644
--- a/include/qapi/error.h
+++ b/include/qapi/error.h
@@ -235,7 +235,7 @@
  *error_propagate_prepend(errp, *errp, ...) by error_prepend(errp, ...)
  *
  * 4. Ensure @errp is valid at return: when you destroy *errp, set
- *errp = NULL.
+ **errp = NULL.
  *
  * Example:
  *
-- 
2.26.2




Re: [PATCH] migration: Provide a test for migratability

2021-01-25 Thread Dr. David Alan Gilbert
* Eric Blake (ebl...@redhat.com) wrote:
> On 1/21/21 12:51 PM, Dr. David Alan Gilbert (git) wrote:
> > From: "Dr. David Alan Gilbert" 
> > 
> > Provide a simple way to see if there's currently a migration blocker in
> > operation:
> > 
> > $ ./x86_64-softmmu/qemu-system-x86_64 -nographic -M pc,usb=on -chardev 
> > null,id=n -device usb-serial,chardev=n
> > 
> > (qemu) info migratable
> > Error: State blocked by non-migratable device ':00:01.2/1/usb-serial'
> > 
> > $ ./x86_64-softmmu/qemu-system-x86_64 -nographic
> > 
> > (qemu) info migratable
> > Migratable
> > 
> > Signed-off-by: Dr. David Alan Gilbert 
> > ---
> 
> 
> > +++ b/qapi/migration.json
> > @@ -366,6 +366,20 @@
> >  ##
> >  { 'command': 'query-migrate', 'returns': 'MigrationInfo' }
> >  
> > +##
> > +# @query-migratable:
> > +# Tests whether it will be possible to migrate the VM in the current state.
> > +#
> > +# Returns: nothing on success (i.e. if the VM is migratable)
> 
> Do we really need a new command?  Or can we get away with enhancing the
> existing 'query-migrate' to add another bool field to 'MigrationInfo'?

In the case where the VM is not migratable this command errors with the
message set in a migration-blocker (Our migration blocker mechanism
always takes an Error* as the reason).

It didn't seem right to make that change for the existing commands;
so how would you like the existing MigrationInfo to change: a boolean to
say yes/no, and an error/list of errors for the reason?

Dave

> -- 
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.   +1-919-301-3226
> Virtualization:  qemu.org | libvirt.org
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [PATCH v2] s390x/cpu_model: disallow unpack for --only-migratable

2021-01-25 Thread David Hildenbrand
On 25.01.21 14:22, Christian Borntraeger wrote:
> secure execution (aka protected virtualization) guests cannot be
> migrated at the moment. Disallow the unpack facility if the user
> specifies --only-migratable.
> 
> Signed-off-by: Christian Borntraeger 
> ---
> v1->v2:
> - add missing return
> - protect check with CONFIG_USER_ONLY for building non softmmu binaries
> 
>  target/s390x/cpu_models.c | 9 +
>  1 file changed, 9 insertions(+)
> 
> diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
> index 35179f9dc7ba..e844a4007210 100644
> --- a/target/s390x/cpu_models.c
> +++ b/target/s390x/cpu_models.c
> @@ -26,6 +26,7 @@
>  #include "qapi/qmp/qdict.h"
>  #ifndef CONFIG_USER_ONLY
>  #include "sysemu/arch_init.h"
> +#include "sysemu/sysemu.h"
>  #include "hw/pci/pci.h"
>  #endif
>  #include "qapi/qapi-commands-machine-target.h"
> @@ -878,6 +879,14 @@ static void check_compatibility(const S390CPUModel 
> *max_model,
>  return;
>  }
>  
> +#ifndef CONFIG_USER_ONLY
> +if (only_migratable && test_bit(S390_FEAT_UNPACK, model->features)) {
> +error_setg(errp, "The unpack facility is not compatible with "
> +   "the --only-migratable option");
> + return;
> +}
> +#endif
> +
>  /* detect the missing features to properly report them */
>  bitmap_andnot(missing, model->features, max_model->features, 
> S390_FEAT_MAX);
>  if (bitmap_empty(missing, S390_FEAT_MAX)) {
> 

BTW, I wonder if the right place for this would be apply_cpu_model().

Anyhow

Reviewed-by: David Hildenbrand 

-- 
Thanks,

David / dhildenb




Re: [PATCH v2] s390x/cpu_model: disallow unpack for --only-migratable

2021-01-25 Thread no-reply
Patchew URL: 
https://patchew.org/QEMU/20210125132238.179972-1-borntrae...@de.ibm.com/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Message-id: 20210125132238.179972-1-borntrae...@de.ibm.com
Subject: [PATCH v2] s390x/cpu_model: disallow unpack for --only-migratable

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 - [tag update]  patchew/20210125125312.138491-1-borntrae...@de.ibm.com -> 
patchew/20210125125312.138491-1-borntrae...@de.ibm.com
 * [new tag] patchew/20210125132238.179972-1-borntrae...@de.ibm.com -> 
patchew/20210125132238.179972-1-borntrae...@de.ibm.com
Switched to a new branch 'test'
9d55ad2 s390x/cpu_model: disallow unpack for --only-migratable

=== OUTPUT BEGIN ===
ERROR: code indent should never use tabs
#37: FILE: target/s390x/cpu_models.c:886:
+^Ireturn;$

total: 1 errors, 0 warnings, 21 lines checked

Commit 9d55ad23e018 (s390x/cpu_model: disallow unpack for --only-migratable) 
has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
=== OUTPUT END ===

Test command exited with code: 1


The full log is available at
http://patchew.org/logs/20210125132238.179972-1-borntrae...@de.ibm.com/testing.checkpatch/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-de...@redhat.com

Re: [PATCH v2] s390x/cpu_model: disallow unpack for --only-migratable

2021-01-25 Thread Cornelia Huck
On Mon, 25 Jan 2021 14:22:38 +0100
Christian Borntraeger  wrote:

> secure execution (aka protected virtualization) guests cannot be
> migrated at the moment. Disallow the unpack facility if the user
> specifies --only-migratable.

Maybe make the explanation a bit more verbose?

"Secure execution (aka protected virtualization) guests cannot be
migrated at the moment. If the unpack facility is provided in the cpu
model, a guest may choose to transition to secure mode, making the
guest unmigratable at that point in time. If the machine was explicitly
started with --only-migratable, we would get a failure only when the
guest actually tries to transition; instead, explicitly disallow the
unpack facility if --only-migratable was specified to avoid late
surprises."

> 
> Signed-off-by: Christian Borntraeger 
> ---
> v1->v2:
> - add missing return
> - protect check with CONFIG_USER_ONLY for building non softmmu binaries
> 
>  target/s390x/cpu_models.c | 9 +
>  1 file changed, 9 insertions(+)
> 
> diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
> index 35179f9dc7ba..e844a4007210 100644
> --- a/target/s390x/cpu_models.c
> +++ b/target/s390x/cpu_models.c
> @@ -26,6 +26,7 @@
>  #include "qapi/qmp/qdict.h"
>  #ifndef CONFIG_USER_ONLY
>  #include "sysemu/arch_init.h"
> +#include "sysemu/sysemu.h"
>  #include "hw/pci/pci.h"
>  #endif
>  #include "qapi/qapi-commands-machine-target.h"
> @@ -878,6 +879,14 @@ static void check_compatibility(const S390CPUModel 
> *max_model,
>  return;
>  }
>  
> +#ifndef CONFIG_USER_ONLY
> +if (only_migratable && test_bit(S390_FEAT_UNPACK, model->features)) {
> +error_setg(errp, "The unpack facility is not compatible with "
> +   "the --only-migratable option");

Might be a bit surprising if the host model had been specified... is
there a way to add a hint how to get rid of the unpack bit?
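
(One possible way, as a sketch: error_append_hint() is the existing Error
API for attaching hints, though the exact property spelling 'unpack=off' is
an assumption here, and the function would also want ERRP_GUARD() for the
hint to be handled reliably.)

    /* Sketch: the same check with a hint attached to the error. */
    if (only_migratable && test_bit(S390_FEAT_UNPACK, model->features)) {
        error_setg(errp, "The unpack facility is not compatible with "
                   "the --only-migratable option");
        error_append_hint(errp, "consider e.g. '-cpu host,unpack=off' or "
                          "dropping --only-migratable\n");
        return;
    }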

> + return;
> +}
> +#endif
> +
>  /* detect the missing features to properly report them */
>  bitmap_andnot(missing, model->features, max_model->features, 
> S390_FEAT_MAX);
>  if (bitmap_empty(missing, S390_FEAT_MAX)) {




Re: [PATCH v3 18/19] i386: provide simple 'hv-default=on' option

2021-01-25 Thread David Edmondson
On Thursday, 2021-01-21 at 12:08:02 -05, Eduardo Habkost wrote:

> On Thu, Jan 21, 2021 at 02:27:04PM +0100, Igor Mammedov wrote:
>> On Wed, 20 Jan 2021 15:49:09 -0500
>> Eduardo Habkost  wrote:
>> 
>> > On Wed, Jan 20, 2021 at 08:08:32PM +0100, Igor Mammedov wrote:
>> > > On Wed, 20 Jan 2021 15:38:33 +0100
>> > > Vitaly Kuznetsov  wrote:
>> > >   
>> > > > Igor Mammedov  writes:
>> > > >   
>> > > > > On Fri, 15 Jan 2021 10:20:23 +0100
>> > > > > Vitaly Kuznetsov  wrote:
>> > > > >
>> > > > >> Igor Mammedov  writes:
>> > > > >> 
>> > > > >> > On Thu,  7 Jan 2021 16:14:49 +0100
>> > > > >> > Vitaly Kuznetsov  wrote:
>> > > > >> >  
>> > > > >> >> Enabling Hyper-V emulation for a Windows VM is a tiring 
>> > > > >> >> experience as it
>> > > > >> >> requires listing all currently supported enlightenments ("hv-*" 
>> > > > >> >> CPU
>> > > > >> >> features) explicitly. We do have 'hv-passthrough' mode enabling
>> > > > >> >> everything but it can't be used in production as it prevents 
>> > > > >> >> migration.
>> > > > >> >> 
>> > > > >> >> Introduce a simple 'hv-default=on' CPU flag enabling all 
>> > > > >> >> currently supported
>> > > > >> >> Hyper-V enlightenments. Later, when new enlightenments get 
>> > > > >> >> implemented,
>> > > > >> >> compat_props mechanism will be used to disable them for legacy 
>> > > > >> >> machine types,
>> > > > >> >> this will keep 'hv-default=on' configurations migratable.
>> > > > >> >> 
>> > > > >> >> Signed-off-by: Vitaly Kuznetsov 
>> > > > >> >> ---
>> > > > >> >>  docs/hyperv.txt   | 16 +---
>> > > > >> >>  target/i386/cpu.c | 38 ++
>> > > > >> >>  target/i386/cpu.h |  5 +
>> > > > >> >>  3 files changed, 56 insertions(+), 3 deletions(-)
>> > > > >> >> 
>> > > > >> >> diff --git a/docs/hyperv.txt b/docs/hyperv.txt
>> > > > >> >> index 5df00da54fc4..a54c066cab09 100644
>> > > > >> >> --- a/docs/hyperv.txt
>> > > > >> >> +++ b/docs/hyperv.txt
>> > > > >> >> @@ -17,10 +17,20 @@ compatible hypervisor and use Hyper-V 
>> > > > >> >> specific features.
>> > > > >> >>  
>> > > > >> >>  2. Setup
>> > > > >> >>  =
>> > > > >> >> -No Hyper-V enlightenments are enabled by default by either KVM 
>> > > > >> >> or QEMU. In
>> > > > >> >> -QEMU, individual enlightenments can be enabled through CPU 
>> > > > >> >> flags, e.g:
>> > > > >> >> +All currently supported Hyper-V enlightenments can be enabled 
>> > > > >> >> by specifying
>> > > > >> >> +'hv-default=on' CPU flag:
>> > > > >> >>  
>> > > > >> >> -  qemu-system-x86_64 --enable-kvm --cpu 
>> > > > >> >> host,hv_relaxed,hv_vpindex,hv_time, ...
>> > > > >> >> +  qemu-system-x86_64 --enable-kvm --cpu host,hv-default ...
>> > > > >> >> +
>> > > > >> >> +Alternatively, it is possible to do fine-grained enablement 
>> > > > >> >> through CPU flags,
>> > > > >> >> +e.g:
>> > > > >> >> +
>> > > > >> >> +  qemu-system-x86_64 --enable-kvm --cpu 
>> > > > >> >> host,hv-relaxed,hv-vpindex,hv-time ...  
>> > > > >> >
>> > > > >> > I'd put here not '...' but rather recommended list of flags, and 
>> > > > >> > update
>> > > > >> > it every time when new feature added if necessary.
>> > > > >> >  
>> > > > >
>> > > > > 1)
>> > > > >  
>> > > > >> This is an example of fine-grained enablement, there is no point to 
>> > > > >> put
>> > > > >> all the existing flags there (hv-default is the only recommended way
>> > > > >> now, the rest is 'expert'/'debugging').
>> > > > > so users are kept in dark what hv-default disables/enables (and it 
>> > > > > might depend
>> > > > > on machine version on top that). Doesn't look like a good 
>> > > > > documentation to me
>> > > > > (sure everyone can go and read source code for it and try to figure 
>> > > > > out how
>> > > > > it's supposed to work)
>> > > > 
>> > > > 'hv-default' enables *all* currently supported enlightenments. When
>> > > > using with an old machine type, it will enable *all* Hyper-V
>> > > > enlightenmnets which were supported when the corresponding machine type
>> > > > was released. I don't think we document all other cases when a machine
>> > > > type is modified (i.e. where can I read how pc-q35-5.1 is different 
>> > > > from
>> > > > pc-q35-5.0 if I refuse to read the source code?)
>> > > >   
>> > > > >
>> > > > >>
>> > > > >> > (not to mention that if we had it to begin with, then new 
>> > > > >> > 'hv-default' won't
>> > > > >> > be necessary, I still see it as functionality duplication but I 
>> > > > >> > will not oppose it)
>> > > > >> >  
>> > > > >> 
>> > > > >> Unfortunately, upper layer tools don't read this doc and update
>> > > > >> themselves to enable new features when they appear.
>> > > > > rant: (just merge all libvirt into QEMU, and make VM configuration 
>> > > > > less low-level.
>> > > > > why stop there, just merge with yet another upper layer, it would 
>> > > > > save us a lot
>> > > > > on communication protocols and simplify VM creatio

Re: [PATCH v2] s390x/cpu_model: disallow unpack for --only-migratable

2021-01-25 Thread Christian Borntraeger



On 25.01.21 14:38, Cornelia Huck wrote:
> On Mon, 25 Jan 2021 14:22:38 +0100
> Christian Borntraeger  wrote:
> 
>> secure execution (aka protected virtualization) guests cannot be
>> migrated at the moment. Disallow the unpack facility if the user
>> specifies --only-migratable.
> 
> Maybe make the explanation a bit more verbose?
> 
> "Secure execution (aka protected virtualization) guests cannot be
> migrated at the moment. If the unpack facility is provided in the cpu
> model, a guest may choose to transition to secure mode, making the
> guest unmigratable at that point in time. If the machine was explicitly
> started with --only-migratable, we would get a failure only when the
> guest actually tries to transition; instead, explicitly disallow the
> unpack facility if --only-migratable was specified to avoid late
> surprises."
> 
>>
>> Signed-off-by: Christian Borntraeger 
>> ---
>> v1->v2:
>> - add missing return
>> - protect check with CONFIG_USER_ONLY for building non softmmu binaries
>>
>>  target/s390x/cpu_models.c | 9 +
>>  1 file changed, 9 insertions(+)
>>
>> diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
>> index 35179f9dc7ba..e844a4007210 100644
>> --- a/target/s390x/cpu_models.c
>> +++ b/target/s390x/cpu_models.c
>> @@ -26,6 +26,7 @@
>>  #include "qapi/qmp/qdict.h"
>>  #ifndef CONFIG_USER_ONLY
>>  #include "sysemu/arch_init.h"
>> +#include "sysemu/sysemu.h"
>>  #include "hw/pci/pci.h"
>>  #endif
>>  #include "qapi/qapi-commands-machine-target.h"
>> @@ -878,6 +879,14 @@ static void check_compatibility(const S390CPUModel 
>> *max_model,
>>  return;
>>  }
>>  
>> +#ifndef CONFIG_USER_ONLY
>> +if (only_migratable && test_bit(S390_FEAT_UNPACK, model->features)) {
>> +error_setg(errp, "The unpack facility is not compatible with "
>> +   "the --only-migratable option");
> 
> Might be a bit surprising if the host model had been specified... is
> there a way to add a hint how to get rid of the unpack bit?

I can try to make the error message a bit more verbose. 
> 
>> +return;
>> +}
>> +#endif
>> +
>>  /* detect the missing features to properly report them */
>>  bitmap_andnot(missing, model->features, max_model->features, 
>> S390_FEAT_MAX);
>>  if (bitmap_empty(missing, S390_FEAT_MAX)) {
> 



Re: [PULL 0/9] SD/MMC patches for 2021-01-24

2021-01-25 Thread Peter Maydell
On Sun, 24 Jan 2021 at 20:13, Philippe Mathieu-Daudé  wrote:
>
> The following changes since commit e93c65a6c64fa18b0c61fb9338d364cbea32b6ef:
>
>   Merge remote-tracking branch 
> 'remotes/kraxel/tags/usb-20210122-pull-request' into staging (2021-01-23 
> 14:40:45 +)
>
> are available in the Git repository at:
>
>   https://gitlab.com/philmd/qemu.git tags/sdmmc-20210124
>
> for you to fetch changes up to 3f20ccd359913013723f64e2443dd513786039f6:
>
>   hw/sd: sd.h: Cosmetic change of using spaces (2021-01-24 20:11:05 +0100)
>
> 
> SD/MMC patches
>
> - Various improvements for SD cards in SPI mode (Bin Meng)
>
> 


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/6.0
for any user-visible changes.

-- PMM



[PATCH v3] s390x/cpu_model: disallow unpack for --only-migratable

2021-01-25 Thread Christian Borntraeger
Secure execution (aka protected virtualization) guests cannot be
migrated at the moment. If the unpack facility is provided in the cpu
model, a guest may choose to transition to secure mode, making the
guest unmigratable at that point in time. If the machine was explicitly
started with --only-migratable, we would get a failure only when the
guest actually tries to transition; instead, explicitly disallow the
unpack facility if --only-migratable was specified to avoid late
surprises.

Signed-off-by: Christian Borntraeger 
Reviewed-by: David Hildenbrand 
---
 target/s390x/cpu_models.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
index 35179f9dc7ba..dd474c5e9ad1 100644
--- a/target/s390x/cpu_models.c
+++ b/target/s390x/cpu_models.c
@@ -26,6 +26,7 @@
 #include "qapi/qmp/qdict.h"
 #ifndef CONFIG_USER_ONLY
 #include "sysemu/arch_init.h"
+#include "sysemu/sysemu.h"
 #include "hw/pci/pci.h"
 #endif
 #include "qapi/qapi-commands-machine-target.h"
@@ -878,6 +879,15 @@ static void check_compatibility(const S390CPUModel 
*max_model,
 return;
 }
 
+#ifndef CONFIG_USER_ONLY
+if (only_migratable && test_bit(S390_FEAT_UNPACK, model->features)) {
+error_setg(errp, "The unpack facility is not compatible with "
+   "the --only-migratable option. You must remove either "
+   "the 'unpack' facility or the --only-migratable option");
+return;
+}
+#endif
+
 /* detect the missing features to properly report them */
 bitmap_andnot(missing, model->features, max_model->features, 
S390_FEAT_MAX);
 if (bitmap_empty(missing, S390_FEAT_MAX)) {
-- 
2.28.0




Re: [PATCH] error: Fix "Converting to ERRP_GUARD()" doc on "valid at return"

2021-01-25 Thread Vladimir Sementsov-Ogievskiy

25.01.2021 16:26, Markus Armbruster wrote:

Setting errp = NULL is wrong: the automatic error propagation still
propagates the dangling pointer _auto_errp_prop.local_err.  We need to
set *errp = NULL to clear the dangling pointer.

Signed-off-by: Markus Armbruster 


Reviewed-by: Vladimir Sementsov-Ogievskiy 


---
  include/qapi/error.h | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/qapi/error.h b/include/qapi/error.h
index eaa05c4837..4a9260b0cc 100644
--- a/include/qapi/error.h
+++ b/include/qapi/error.h
@@ -235,7 +235,7 @@
   *error_propagate_prepend(errp, *errp, ...) by error_prepend(errp, ...)
   *
   * 4. Ensure @errp is valid at return: when you destroy *errp, set
- *errp = NULL.
+ **errp = NULL.
   *
   * Example:
   *




--
Best regards,
Vladimir



Re: [PATCH 12/25] qemu-config: add error propagation to qemu_config_parse

2021-01-25 Thread Markus Armbruster
Paolo Bonzini  writes:

> This enables some simplification of vl.c via error_fatal.
>
> Signed-off-by: Paolo Bonzini 
> ---
>  block/blkdebug.c   |  3 +--
>  include/qemu/config-file.h |  4 ++--
>  softmmu/vl.c   | 30 --
>  util/qemu-config.c | 20 ++--
>  4 files changed, 25 insertions(+), 32 deletions(-)
>
> diff --git a/block/blkdebug.c b/block/blkdebug.c
> index 5fe6172da9..7eaa8a28bf 100644
> --- a/block/blkdebug.c
> +++ b/block/blkdebug.c
> @@ -279,9 +279,8 @@ static int read_config(BDRVBlkdebugState *s, const char 
> *filename,
>  return -errno;
>  }
>  
> -ret = qemu_config_parse(f, config_groups, filename);
> +ret = qemu_config_parse(f, config_groups, filename, errp);
>  if (ret < 0) {
> -error_setg(errp, "Could not parse blkdebug config file");
>  goto fail;
>  }
>  }
> diff --git a/include/qemu/config-file.h b/include/qemu/config-file.h
> index 7d26fe3816..da6f4690b7 100644
> --- a/include/qemu/config-file.h
> +++ b/include/qemu/config-file.h
> @@ -10,9 +10,9 @@ void qemu_add_opts(QemuOptsList *list);
>  void qemu_add_drive_opts(QemuOptsList *list);
>  int qemu_global_option(const char *str);
>  
> -int qemu_config_parse(FILE *fp, QemuOptsList **lists, const char *fname);
> +int qemu_config_parse(FILE *fp, QemuOptsList **lists, const char *fname, 
> Error **errp);

Long line.

>  
> -int qemu_read_config_file(const char *filename);
> +int qemu_read_config_file(const char *filename, Error **errp);
>  
>  /* Parse QDict options as a replacement for a config file (allowing multiple
> enumerated (0..(n-1)) configuration "sections") */
> diff --git a/softmmu/vl.c b/softmmu/vl.c
> index d34307bf11..d991919155 100644
> --- a/softmmu/vl.c
> +++ b/softmmu/vl.c
> @@ -2056,17 +2056,20 @@ static int global_init_func(void *opaque, QemuOpts 
> *opts, Error **errp)
>  return 0;
>  }
>  
> -static int qemu_read_default_config_file(void)
> +static void qemu_read_default_config_file(Error **errp)
>  {
>  int ret;
> +Error *local_err = NULL;
>  g_autofree char *file = get_relocated_path(CONFIG_QEMU_CONFDIR 
> "/qemu.conf");
>  
> -ret = qemu_read_config_file(file);
> -if (ret < 0 && ret != -ENOENT) {
> -return ret;
> +ret = qemu_read_config_file(file, &local_err);
> +if (ret < 0) {
> +if (ret == -ENOENT) {
> +error_free(local_err);
> +} else {
> +error_propagate(errp, local_err);
> +}
>  }
> -
> -return 0;
>  }

Please use ERRP_GUARD() in new code:

   static void qemu_read_default_config_file(Error **errp)
   {
   ERRP_GUARD();
   int ret;
   g_autofree char *file = get_relocated_path(CONFIG_QEMU_CONFDIR
  "/qemu.conf");

   ret = qemu_read_config_file(file, errp);
   if (ret == -ENOENT) {
   error_free(*errp);
   *errp = NULL;
   }
   }

>  
>  static int qemu_set_option(const char *str)
> @@ -2622,9 +2625,7 @@ void qemu_init(int argc, char **argv, char **envp)
>  }
>  
>  if (userconfig) {
> -if (qemu_read_default_config_file() < 0) {
> -exit(1);
> -}
> +qemu_read_default_config_file(&error_fatal);
>  }
>  
>  /* second pass of option parsing */
> @@ -3312,15 +3313,8 @@ void qemu_init(int argc, char **argv, char **envp)
>  qemu_plugin_opt_parse(optarg, &plugin_list);
>  break;
>  case QEMU_OPTION_readconfig:
> -{
> -int ret = qemu_read_config_file(optarg);
> -if (ret < 0) {
> -error_report("read config %s: %s", optarg,
> - strerror(-ret));
> -exit(1);
> -}
> -break;
> -}
> +qemu_read_config_file(optarg, &error_fatal);
> +break;

More than just code simplification: you're deleting an extra error
message.  Test case:

$ qemu-system-x86_64 -readconfig .
qemu: error reading file
qemu: -readconfig .: read config .: Invalid argument

Pretty bad.  With your patch applied:

qemu-system-x86_64: error reading file

Worse :)

I actually expected this to come out as

qemu-system-x86_64: -readconfig .: error reading file

That would be an improvement.

The reason for the bad location information is here:

int qemu_config_parse(FILE *fp, QemuOptsList **lists, const char *fname, 
Error **errp)
{
char line[1024], group[64], id[64], arg[64], value[1024];
Location loc;
QemuOptsList *list = NULL;
Error *local_err = NULL;
QemuOpts *opts = NULL;
int res = -EINVAL, lno = 0;
int count = 0;

loc_push_none(&loc);
while (fgets(line, sizeof(line), fp) != NULL) {

If the very first fgets() f

Re: [PATCH 11/25] remove -writeconfig

2021-01-25 Thread Paolo Bonzini

On 25/01/21 13:53, Markus Armbruster wrote:

I love the "and how give me a config file for all that" idea, but I
agree our -writeconfig is flawed.  I hope we can bring it back in more
useful shape.

No deprecation grace period?



That's a decision that we have to take overall once the whole series is 
reviewed, I think.  I have no problem having a grace period:


- the patches aren't 101% ready

- the real conflict magnets have been merged already

- I have a large KVM backlog so I don't mind leaving this aside for a 
few months


Paolo




Re: [PATCH 15/25] vl: plumb keyval-based options into -set and -readconfig

2021-01-25 Thread Paolo Bonzini

On 25/01/21 12:48, Markus Armbruster wrote:

Paolo Bonzini  writes:


Add generic machinery to support parsing command line options with
keyval in -set and -readconfig, choosing between QDict and
QemuOpts as the underlying data structure.

The keyval_merge function is slightly heavyweight as a way to
do qemu_set_option for QDict-based options, but it will be put
to further use later to merge entire -readconfig sections together
with their command-line equivalents.

Signed-off-by: Paolo Bonzini 


Watch this:

 $ cat test.cfg
 [machine]
   accel = "kvm"
   usb = "on"
 $ qemu-system-x86_64 -S -display none -readconfig test.cfg
 Aborted (core dumped)


Probably something that fixes itself later in the series.  I'll check it 
out.


Paolo


Backtrace:

 #0  0x752f19e5 in raise () at /lib64/libc.so.6
 #1  0x752da895 in abort () at /lib64/libc.so.6
 #2  0x55c44a77 in qemu_record_config_group
 (group=0x7fffd1b0 "machine", dict=0x565fb740, errp=0x564ffca0 
) at ../softmmu/vl.c:2103
 #3  0x55c44bd8 in qemu_parse_config_group
 (group=0x7fffd1b0 "machine", qdict=0x565f9640, opaque=0x564ff9e0 
, errp=0x564ffca0 ) at ../softmmu/vl.c:2135
 #4  0x55eda3e6 in qemu_config_foreach
 (fp=0x565f3e00, cb=0x55c44af5 , opaque=0x564ff9e0 
, fname=0x7fffe0dd "test.cfg", errp=0x564ffca0 
) at ../util/qemu-config.c:378
 #5  0x55eda5d5 in qemu_read_config_file
 (filename=0x7fffe0dd "test.cfg", cb=0x55c44af5 
, errp=0x564ffca0 ) at ../util/qemu-config.c:421
 #6  0x55c47d3f in qemu_init
 (argc=6, argv=0x7fffdcc8, envp=0x7fffdd00) at 
../softmmu/vl.c:3405
 #7  0x558234e1 in main
 (argc=6, argv=0x7fffdcc8, envp=0x7fffdd00) at 
../softmmu/main.c:49

Similar result for

 [memory]
   size = "1024"

and

 [chardev "mon0"]
   backend = "stdio"

I didn't look for more.






MIPS support for PIC32 and GPIOs

2021-01-25 Thread Peter Lambrechtsen
Hi

I am using this fork of qemu that has pic32 support in it.

https://github.com/sergev/qemu/

When I boot a firmware image I get these errors as I don’t think the gpios
have been defined.

/usr/local/qemu-mips/bin/qemu-system-mipsel -machine pic32mx7-max32
-nographic -serial file:serial1.log -serial file:serial2.log -bios boot.hex
-kernel pfm.hex
QEMU 2.3.50 monitor - type 'help' for more information
(qemu) Board: chipKIT Max32
Processor: M4K
RAM size: 128 kbytes
Load file: 'boot.hex', 2344 bytes
Load file: 'pfm.hex', 287440 bytes
--- Read 1f800810: peripheral register not supported
--- Write 0001 to 1f88: peripheral register not supported
--- Write 0001 to ???: readonly register
--- Read 1f800600: peripheral register not supported
--- Write 0030 to 1f800600: peripheral register not supported
--- Write 0030 to ???: readonly register
--- Write  to 1f800620: peripheral register not supported
--- Write  to ???: readonly register
--- Write  to 1f800610: peripheral register not supported
--- Write  to ???: readonly register
--- Read 1f800c00: peripheral register not supported
--- Write 0070 to 1f800c00: peripheral register not supported
--- Write 0070 to ???: readonly register
--- Write 003f to 1f800c20: peripheral register not supported
--- Write 003f to ???: readonly register
--- Write  to 1f800c10: peripheral register not supported
--- Write  to ???: readonly register
--- Read 1f800600: peripheral register not supported
--- Write 8000 to 1f800600: peripheral register not supported
--- Write 8000 to ???: readonly register
--- Read 1f800c00: peripheral register not supported
--- Write 8000 to 1f800c00: peripheral register not supported
--- Write 8000 to ???: readonly register
--- Write  to 1f800800: peripheral register not supported
--- Write  to ???: readonly register
--- Write  to 1f800810: peripheral register not supported
--- Write  to ???: readonly register
--- Write  to 1f800a10: peripheral register not supported
--- Write  to ???: readonly register
--- Write  to 1f800a20: peripheral register not supported
--- Write  to ???: readonly register
--- Write 0078 to 1f800800: peripheral register not supported
--- Write 0078 to ???: readonly register
--- Write  to 1f800810: peripheral register not supported
--- Write  to ???: readonly register
--- Write  to 1f800820: peripheral register not supported
--- Write  to ???: readonly register
--- Write 8000 to 1f800808: peripheral register not supported
--- Write 8000 to ???: readonly register
--- Read 1f800810: peripheral register not supported
--- Write 1d07e000 to 1f80f420: peripheral register not supported
--- Write 1d07e000 to ???: readonly register
--- Write 4004 to 1f80f400: peripheral register not supported
--- Write 4004 to ???: readonly register
--- Write aa996655 to 1f80f410: peripheral register not supported
--- Write aa996655 to ???: readonly register
--- Write 556699aa to 1f80f410: peripheral register not supported
--- Write 556699aa to ???: readonly register
--- Write 8000 to 1f80f408: peripheral register not supported
--- Write 8000 to ???: readonly register
--- Read 1f80f400: peripheral register not supported
--- Write 4000 to 1f80f404: peripheral register not supported
--- Write 4000 to ???: readonly register
--- Read 1f80f400: peripheral register not supported
--- Write 1d07e000 to 1f80f420: peripheral register not supported
--- Write 1d07e000 to ???: readonly register
--- Write c608 to 1f80f440: peripheral register not supported
--- Write c608 to ???: readonly register
--- Write 4003 to 1f80f400: peripheral register not supported
--- Write 4003 to ???: readonly register
--- Write aa996655 to 1f80f410: peripheral register not supported
--- Write aa996655 to ???: readonly register
--- Write 556699aa to 1f80f410: peripheral register not supported
--- Write 556699aa to ???: readonly register
--- Write 8000 to 1f80f408: peripheral register not supported
--- Write 8000 to ???: readonly register
--- Read 1f80f400: peripheral register not supported
--- Write 4000 to 1f80f404: peripheral register not supported
--- Write 4000 to ???: readonly register
--- Read 1f80f400: peripheral register not supported
--- Write 1d07e200 to 1f80f420: peripheral register not supported
--- Write 1d07e200 to ???: readonly register
--- Write c808 to 1f80f440: peripheral register not supported
--- Write c808 to ???: readonly register
--- Write 4003 to 1f80f400: peripheral register not supported
--- Write 4003 to ???: readonly register
--- Write aa996655 to 1f80f410: peripheral register not supported
--- Write aa996655 to ???: readonly register
--- Write 556699aa to 1f80f410: peripheral register not supported
--- Write 556699aa to ???: readonly register
--- Write 8000 to 1f80f408: peripheral r

Re: [PATCH 10/25] hmp: special case help options for object_add

2021-01-25 Thread Markus Armbruster
Paolo Bonzini  writes:

> On 25/01/21 13:48, Markus Armbruster wrote:
>> Paolo Bonzini  writes:
>> 
>>> Fix "object_add help" and "object_add TYPE,help".
>>>
>>> Signed-off-by: Paolo Bonzini 
>> Standard question when I read "Fix $interface" in a commit message:
>> how
>> exactly is it broken?
>
> It doesn't work at all ("Error: parameter 'id' is missing").

Adding simple reproducers to commit messages won't hurt a bit, I
promise!

;-P




Re: [PATCH 0/2] trace: make the 'log' backend timestamp configurable

2021-01-25 Thread Philippe Mathieu-Daudé
Hi Stefan,

On 1/25/21 12:35 PM, Stefan Hajnoczi wrote:
> Zoltan reminded me that the 'log' backend prints tids/timestamps and this can
> be unwanted in some cases. It's easier to look at trace output without them 
> and
> in some cases parsing is also more convenient with them.
> 
> Extend -msg timestamp=on|off to control the 'log' backend's tid/timestamp 
> output.
> 
> Stefan Hajnoczi (2):
>   error: rename error_with_timestamp to message_with_timestamp
>   trace: make the 'log' backend timestamp configurable
> 
>  docs/devel/tracing.txt   |  3 +++
>  include/qemu/error-report.h  |  2 +-
>  softmmu/vl.c |  2 +-
>  util/qemu-error.c|  4 ++--
>  scripts/tracetool/backend/log.py | 19 +--
>  5 files changed, 20 insertions(+), 10 deletions(-)

I applied your series, rebuilt, but nothing changed.

Apparently there is some buildsys rule missing: the trace files weren't
regenerated.

After full tree 'make clean' I could successfully test:
Tested-by: Philippe Mathieu-Daudé 

Ignoring the buildsys issue:
Reviewed-by: Philippe Mathieu-Daudé 

Regards,

Phil.




Re: [PATCH 11/25] remove -writeconfig

2021-01-25 Thread Daniel P . Berrangé
On Mon, Jan 25, 2021 at 03:01:01PM +0100, Paolo Bonzini wrote:
> On 25/01/21 13:53, Markus Armbruster wrote:
> > I love the "and now give me a config file for all that" idea, but I
> > agree our -writeconfig is flawed.  I hope we can bring it back in more
> > useful shape.
> > 
> > No deprecation grace period?
> > 
> 
> That's a decision that we have to take overall once the whole series is
> reviewed, I think.  I have no problem having a grace period:

I'm normally strongly pushing for honouring our deprecation policy,
but in almost all past cases we're changing/removing something that is
genuinely used by people in the real world.

I think it is possible to argue that -writeconfig is a special case
because its functionality is so limited in scope that its real-world
use cases are fairly niche, and it is majorly buggy in what it writes
in some cases. IOW we could argue it is too broken + useless to justify
going through the deprecation process.

So overall I'm ambivalent on whether we use deprecation for -writeconfig
or not.

> 
> - the patches aren't 101% ready
> 
> - the real conflict magnets have been merged already
> 
> - I have a large KVM backlog so I don't mind leaving this aside for a few
> months



Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|




configure does not detect librados or librbd since the switch to meson

2021-01-25 Thread Peter Lieven
Hi,


on Debian / Ubuntu, configure no longer detects librbd / librados since the
switch to meson.

I need to add dirs: ['/usr/lib'] to the cc.find_library calls for librados and
librbd, but I am not familiar with meson and can't say if that's the
appropriate fix.
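
For illustration, a minimal sketch of the kind of change being described,
assuming the libraries are probed with cc.find_library in the meson build
(the variable names and the '/usr/lib' path are illustrative, not the actual
QEMU build rules):

  # Sketch only: add an extra search directory for the Ceph libraries.
  # Whether hard-coding '/usr/lib' is the right fix is exactly the question.
  librbd   = cc.find_library('rbd',   dirs: ['/usr/lib'])
  librados = cc.find_library('rados', dirs: ['/usr/lib'])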


I would be thankful for a hint. I would create a patch to fix this and include
it ahead of my rbd driver rewrite, which I would like to respin asap.


Peter





Re: [PATCH] hw/scsi/megasas: check for NULL frame in megasas_command_cancelled()

2021-01-25 Thread Mauro Matteo Cascella
Hello,

Any updates on this little patch? Please find below a reproducer for
this bug (thanks Alexander):
https://lists.nongnu.org/archive/html/qemu-devel/2021-01/msg02567.html

Thank you,

On Thu, Dec 24, 2020 at 6:55 PM Mauro Matteo Cascella
 wrote:
>
> Ensure that 'cmd->frame' is not NULL before accessing the 'header' field.
> This check prevents a potential NULL pointer dereference issue.
>
> RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1910346
> Signed-off-by: Mauro Matteo Cascella 
> Reported-by: Cheolwoo Myung 
> ---
>  hw/scsi/megasas.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/hw/scsi/megasas.c b/hw/scsi/megasas.c
> index 1a5fc5857d..77510e120c 100644
> --- a/hw/scsi/megasas.c
> +++ b/hw/scsi/megasas.c
> @@ -1893,7 +1893,7 @@ static void megasas_command_cancelled(SCSIRequest *req)
>  {
>  MegasasCmd *cmd = req->hba_private;
>
> -if (!cmd) {
> +if (!cmd || !cmd->frame) {
>  return;
>  }
>  cmd->frame->header.cmd_status = MFI_STAT_SCSI_IO_FAILED;
> --
> 2.29.2
>


-- 
Mauro Matteo Cascella
Red Hat Product Security
PGP-Key ID: BB3410B0




Re: configure does not detect librados or librbd since the switch to meson

2021-01-25 Thread Peter Lieven
On 25.01.21 at 15:13, Peter Lieven wrote:
> Hi,
>
>
> on Debian / Ubuntu, configure no longer detects librbd / librados since
> the switch to meson.
>
> I need to add dirs: ['/usr/lib'] to the cc.find_library calls for librados
> and librbd, but I am not familiar with meson and can't say if that's the
> appropriate fix.
>
> I would be thankful for a hint. I would create a patch to fix this and
> include it ahead of my rbd driver rewrite, which I would like to respin asap.


Further issue: if I specify configure --enable-rbd and the cc.links test fails,
the configure command still succeeds and rbd support is silently disabled.


This seems to be an issue with all cc.links calls in the meson.build script.
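
A minimal sketch of how a requested-but-unlinkable feature could abort
configure instead of being silently disabled, assuming a feature option named
'rbd' and the link-test source held in a variable rbd_link_test (both names
are illustrative, as are librbd/librados from the sketch above):

  # Sketch only: abort when the feature was explicitly requested but the
  # link test does not pass, instead of quietly dropping the feature.
  if get_option('rbd').enabled() and not cc.links(rbd_link_test,
                                                  dependencies: [librbd, librados])
    error('rbd support requested, but librbd/librados do not link')
  endif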


Peter




