Re: [PATCH v11 09/11] qcow2_format.py: collect fields to dump in JSON format

2020-07-28 Thread Andrey Shinkevich

On 28.07.2020 14:09, Vladimir Sementsov-Ogievskiy wrote:

17.07.2020 11:14, Andrey Shinkevich wrote:

As __dict__ is being extended with class members we do not want to
print, add a to_dict() method to the classes; it returns a dictionary
with the desired fields and their values. Extend it in a subclass when
necessary to print the final dictionary in the JSON output that
follows.

Suggested-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Andrey Shinkevich 
---
  tests/qemu-iotests/qcow2_format.py | 38 ++

  1 file changed, 38 insertions(+)

diff --git a/tests/qemu-iotests/qcow2_format.py b/tests/qemu-iotests/qcow2_format.py

index 2921a27..19d29b8 100644
--- a/tests/qemu-iotests/qcow2_format.py
+++ b/tests/qemu-iotests/qcow2_format.py

...

    class Qcow2BitmapDirEntry(Qcow2Struct):
  @@ -190,6 +198,13 @@ class Qcow2BitmapDirEntry(Qcow2Struct):
  super(Qcow2BitmapDirEntry, self).dump()
  self.bitmap_table.dump()
  +    def to_dict(self):
+    fields_dict = super().to_dict()
+    fields_dict.update(bitmap_table=self.bitmap_table)
+    bmp_name = dict(name=self.name)
+    bme_dict = {**bmp_name, **fields_dict}


hmmm... I don't follow, why not simply

   fields_dict = super().to_dict()
   fields_dict['name'] = self.name
   fields_dict['bitmap_table'] = self.bitmap_table
   ?



The idea is to put the name ahead of the dict. It is the same as in
QcowHeaderExtension::to_dict(). The relevant comment will be supplied in
the code.
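The ordering point can be seen in a minimal sketch (illustrative only, not the actual qcow2_format.py code; the field names are hypothetical, and it relies on Python 3.7+ insertion-ordered dicts):

```python
import json

# Stand-in for what super().to_dict() might return (hypothetical fields).
fields_dict = {'bitmap_table_offset': 0x80000, 'bitmap_table_size': 1}

# Plain assignment appends the key at the end...
appended = dict(fields_dict)
appended['name'] = 'bitmap0'

# ...while merging builds a fresh dict with 'name' ahead of the
# inherited fields, which controls the field order in the JSON dump.
bme_dict = {**{'name': 'bitmap0'}, **fields_dict}

print(json.dumps(appended))   # "name" comes last
print(json.dumps(bme_dict))   # "name" comes first
```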


The .update() will be replaced with the assignment operator.

Andrey





+    return bme_dict
+

...

@@ -303,6 +327,17 @@ class QcowHeaderExtension(Qcow2Struct):
  else:
  self.obj.dump()
  +    def to_dict(self):
+    fields_dict = super().to_dict()
+    ext_name = dict(name=self.Magic(self.magic))
+    he_dict = {**ext_name, **fields_dict}


again, why not just add a field to fields_dict?


+    if self.obj is not None:
+    he_dict.update(data=self.obj)
+    else:
+    he_dict.update(data_str=self.data_str)
+
+    return he_dict
+

...



Re: [PULL for-5.1 3/3] tcg/cpu-exec: precise single-stepping after an interrupt

2020-07-28 Thread Pavel Dovgalyuk

On 17.07.2020 21:16, Richard Henderson wrote:

When single-stepping with a debugger attached to QEMU, and when an
interrupt is raised, the debugger misses the first instruction after
the interrupt.

Tested-by: Luc Michel 
Reviewed-by: Luc Michel 
Buglink: https://bugs.launchpad.net/qemu/+bug/757702
Message-Id: <20200717163029.2737546-1-richard.hender...@linaro.org>
Signed-off-by: Richard Henderson 
---
  accel/tcg/cpu-exec.c | 8 +++-
  1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 6a3d3a3cfc..66d38f9d85 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -588,7 +588,13 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
  else {
  if (cc->cpu_exec_interrupt(cpu, interrupt_request)) {
  replay_interrupt();
-cpu->exception_index = -1;
+/*
+ * After processing the interrupt, ensure an EXCP_DEBUG is
+ * raised when single-stepping so that GDB doesn't miss the
+ * next instruction.
+ */
+cpu->exception_index =
+(cpu->singlestep_enabled ? EXCP_DEBUG : -1);
  *last_tb = NULL;


I just rebased my reverse debugging patches and noticed that this breaks 
the debugging in record/replay icount mode for i386.

At some points "si" in remote gdb does nothing.

This happens because of CPU_INTERRUPT_POLL. These are not real
interrupts; they are converted into HW interrupt_request flags later.


Therefore we shouldn't stop when there is a CPU_INTERRUPT_POLL request.


  }
  /* The target hook may have updated the 'cpu->interrupt_request';




Pavel Dovgalyuk



Re: [PATCH] hw: add compat machines for 5.2

2020-07-28 Thread David Gibson
On Tue, Jul 28, 2020 at 11:46:45AM +0200, Cornelia Huck wrote:
> Add 5.2 machine types for arm/i440fx/q35/s390x/spapr.
> 
> Signed-off-by: Cornelia Huck 

ppc parts

Acked-by: David Gibson 

> ---
>  hw/arm/virt.c  |  9 -
>  hw/core/machine.c  |  3 +++
>  hw/i386/pc.c   |  3 +++
>  hw/i386/pc_piix.c  | 14 +-
>  hw/i386/pc_q35.c   | 13 -
>  hw/ppc/spapr.c | 15 +--
>  hw/s390x/s390-virtio-ccw.c | 14 +-
>  include/hw/boards.h|  3 +++
>  include/hw/i386/pc.h   |  3 +++
>  9 files changed, 71 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index ecfee362a182..acf9bfbeceaf 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -2546,10 +2546,17 @@ static void machvirt_machine_init(void)
>  }
>  type_init(machvirt_machine_init);
>  
> +static void virt_machine_5_2_options(MachineClass *mc)
> +{
> +}
> +DEFINE_VIRT_MACHINE_AS_LATEST(5, 2)
> +
>  static void virt_machine_5_1_options(MachineClass *mc)
>  {
> +virt_machine_5_2_options(mc);
> +compat_props_add(mc->compat_props, hw_compat_5_1, hw_compat_5_1_len);
>  }
> -DEFINE_VIRT_MACHINE_AS_LATEST(5, 1)
> +DEFINE_VIRT_MACHINE(5, 1)
>  
>  static void virt_machine_5_0_options(MachineClass *mc)
>  {
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index 2f881d6d75b8..a24fe18ab6a6 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -28,6 +28,9 @@
>  #include "hw/mem/nvdimm.h"
>  #include "migration/vmstate.h"
>  
> +GlobalProperty hw_compat_5_1[] = {};
> +const size_t hw_compat_5_1_len = G_N_ELEMENTS(hw_compat_5_1);
> +
>  GlobalProperty hw_compat_5_0[] = {
>  { "virtio-balloon-device", "page-poison", "false" },
>  { "vmport", "x-read-set-eax", "off" },
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index 3d419d599127..1733b5341a62 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -97,6 +97,9 @@
>  #include "fw_cfg.h"
>  #include "trace.h"
>  
> +GlobalProperty pc_compat_5_1[] = {};
> +const size_t pc_compat_5_1_len = G_N_ELEMENTS(pc_compat_5_1);
> +
>  GlobalProperty pc_compat_5_0[] = {};
>  const size_t pc_compat_5_0_len = G_N_ELEMENTS(pc_compat_5_0);
>  
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index b789e83f9acb..c5ba70ca17cb 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -426,7 +426,7 @@ static void pc_i440fx_machine_options(MachineClass *m)
>  machine_class_allow_dynamic_sysbus_dev(m, TYPE_VMBUS_BRIDGE);
>  }
>  
> -static void pc_i440fx_5_1_machine_options(MachineClass *m)
> +static void pc_i440fx_5_2_machine_options(MachineClass *m)
>  {
>  PCMachineClass *pcmc = PC_MACHINE_CLASS(m);
>  pc_i440fx_machine_options(m);
> @@ -435,6 +435,18 @@ static void pc_i440fx_5_1_machine_options(MachineClass 
> *m)
>  pcmc->default_cpu_version = 1;
>  }
>  
> +DEFINE_I440FX_MACHINE(v5_2, "pc-i440fx-5.2", NULL,
> +  pc_i440fx_5_2_machine_options);
> +
> +static void pc_i440fx_5_1_machine_options(MachineClass *m)
> +{
> +pc_i440fx_5_2_machine_options(m);
> +m->alias = NULL;
> +m->is_default = false;
> +compat_props_add(m->compat_props, hw_compat_5_1, hw_compat_5_1_len);
> +compat_props_add(m->compat_props, pc_compat_5_1, pc_compat_5_1_len);
> +}
> +
>  DEFINE_I440FX_MACHINE(v5_1, "pc-i440fx-5.1", NULL,
>pc_i440fx_5_1_machine_options);
>  
> diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
> index a3e607a544a5..0cb9c18cd44d 100644
> --- a/hw/i386/pc_q35.c
> +++ b/hw/i386/pc_q35.c
> @@ -353,7 +353,7 @@ static void pc_q35_machine_options(MachineClass *m)
>  m->max_cpus = 288;
>  }
>  
> -static void pc_q35_5_1_machine_options(MachineClass *m)
> +static void pc_q35_5_2_machine_options(MachineClass *m)
>  {
>  PCMachineClass *pcmc = PC_MACHINE_CLASS(m);
>  pc_q35_machine_options(m);
> @@ -361,6 +361,17 @@ static void pc_q35_5_1_machine_options(MachineClass *m)
>  pcmc->default_cpu_version = 1;
>  }
>  
> +DEFINE_Q35_MACHINE(v5_2, "pc-q35-5.2", NULL,
> +   pc_q35_5_2_machine_options);
> +
> +static void pc_q35_5_1_machine_options(MachineClass *m)
> +{
> +pc_q35_5_2_machine_options(m);
> +m->alias = NULL;
> +compat_props_add(m->compat_props, hw_compat_5_1, hw_compat_5_1_len);
> +compat_props_add(m->compat_props, pc_compat_5_1, pc_compat_5_1_len);
> +}
> +
>  DEFINE_Q35_MACHINE(v5_1, "pc-q35-5.1", NULL,
> pc_q35_5_1_machine_options);
>  
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 0ae293ec9431..1c8d0981b382 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -4579,15 +4579,26 @@ static void 
> spapr_machine_latest_class_options(MachineClass *mc)
>  }\
>  type_init(spapr_machine_register_##suffix)
>  
> +/*
> + * pseries-5.2
> + */
> +static void spapr_machine_5_2_class_options(MachineClass *mc)
> +{
> +/* Defaults for the 

Re: [PATCH for-5.2] spapr: Avoid some integer conversions in spapr_phb_realize()

2020-07-28 Thread David Gibson
On Tue, Jul 28, 2020 at 11:14:13AM +0200, Greg Kurz wrote:
> Without this patch, the irq number gets converted uselessly from int
> to int32_t, back and forth.
> 
> This doesn't fix an actual issue, it's just to make the code neater.
> 
> Suggested-by: Markus Armbruster 
> Signed-off-by: Greg Kurz 

Applied to ppc-for-5.2, thanks.

> ---
> 
> This is a follow-up to my previous "spapr: Simplify error handling in
> spapr_phb_realize()" patch. Maybe it's worth squashing it there?
> ---
>  hw/ppc/spapr_pci.c |2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
> index 59441e2117f3..0a418f1e6711 100644
> --- a/hw/ppc/spapr_pci.c
> +++ b/hw/ppc/spapr_pci.c
> @@ -1964,7 +1964,7 @@ static void spapr_phb_realize(DeviceState *dev, Error 
> **errp)
>  
>  /* Initialize the LSI table */
>  for (i = 0; i < PCI_NUM_PINS; i++) {
> -int32_t irq = SPAPR_IRQ_PCI_LSI + sphb->index * PCI_NUM_PINS + i;
> +int irq = SPAPR_IRQ_PCI_LSI + sphb->index * PCI_NUM_PINS + i;
>  
>  if (smc->legacy_irq_allocation) {
>  irq = spapr_irq_findone(spapr, errp);
> 
> 

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson


signature.asc
Description: PGP signature


Re: [PATCH] qcow2: cleanup created file when qcow2_co_create

2020-07-28 Thread Maxim Levitsky
On Thu, 2020-07-16 at 14:33 +0300, Maxim Levitsky wrote:
> This is basically the same thing as commit
> 'crypto.c: cleanup created file when block_crypto_co_create_opts_luks fails'
> does, but for qcow2 files, to ensure that we don't leave stale qcow2 files
> behind when creation fails.
> 
> Signed-off-by: Maxim Levitsky 
> ---
>  block/qcow2.c | 11 +++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/block/qcow2.c b/block/qcow2.c
> index fadf3422f8..8b848924b5 100644
> --- a/block/qcow2.c
> +++ b/block/qcow2.c
> @@ -3794,6 +3794,17 @@ static int coroutine_fn 
> qcow2_co_create_opts(BlockDriver *drv,
>  /* Create the qcow2 image (format layer) */
>  ret = qcow2_co_create(create_options, errp);
>  if (ret < 0) {
> +
> +Error *local_delete_err = NULL;
> +int r_del = bdrv_co_delete_file(bs, &local_delete_err);
> +/*
> + * ENOTSUP will happen if the block driver doesn't support
> + * the 'bdrv_co_delete_file' interface. This is a predictable
> + * scenario and shouldn't be reported back to the user.
> + */
> +if ((r_del < 0) && (r_del != -ENOTSUP)) {
> +error_report_err(local_delete_err);
> +}
>  goto finish;
>  }
>  
Any update on this? Do you think we can add this to 5.1 or is it too late?
Best regards,
Maxim Levitsky




Re: [PATCH v4 4/7] hw/riscv: Use pre-built bios image of generic platform for virt & sifive_u

2020-07-28 Thread Bin Meng
Hi Alistair,

On Wed, Jul 29, 2020 at 1:05 PM Alistair Francis  wrote:
>
> On Tue, Jul 28, 2020 at 9:51 PM Bin Meng  wrote:
> >
> > Hi Alistair,
> >
> > On Wed, Jul 29, 2020 at 2:26 AM Alistair Francis  
> > wrote:
> > >
> > > On Tue, Jul 28, 2020 at 8:46 AM Bin Meng  wrote:
> > > >
> > > > Hi Alistair,
> > > >
> > > > On Tue, Jul 28, 2020 at 11:39 PM Alistair Francis 
> > > >  wrote:
> > > > >
> > > > > On Wed, Jul 15, 2020 at 9:55 PM Bin Meng  wrote:
> > > > > >
> > > > > > Hi Alistair,
> > > > > >
> > > > > > On Mon, Jul 13, 2020 at 9:53 AM Bin Meng  wrote:
> > > > > > >
> > > > > > > On Sun, Jul 12, 2020 at 1:34 AM Alistair Francis 
> > > > > > >  wrote:
> > > > > > > >
> > > > > > > > On Thu, Jul 9, 2020 at 10:07 PM Bin Meng  
> > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > From: Bin Meng 
> > > > > > > > >
> > > > > > > > > Update virt and sifive_u machines to use the opensbi 
> > > > > > > > > fw_dynamic bios
> > > > > > > > > image built for the generic FDT platform.
> > > > > > > > >
> > > > > > > > > Remove the out-of-date no longer used bios images.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Bin Meng 
> > > > > > > > > Reviewed-by: Anup Patel 
> > > > > > > > > Reviewed-by: Alistair Francis 
> > > > > > > >
> > > > > > > > This patch seems to break 32-bit Linux boots on the sifive_u 
> > > > > > > > and virt machines.
> > > > > > > >
> > > > > > >
> > > > > > > It looks like only the Linux boot on sifive_u is broken. On our
> > > > > > > side, we have been using VxWorks to test 32-bit OpenSBI on
> > > > > > > sifive_u, so this issue went unnoticed. I will take a look.
> > > > > >
> > > > > > I've figured out the issue of 32-bit Linux booting failure on
> > > > > > sifive_u. A patch has been sent to Linux upstream:
> > > > > > http://lists.infradead.org/pipermail/linux-riscv/2020-July/001213.html
> > > > >
> > > > > Thanks for that. What change in QEMU causes this failure though?
> > > > >
> > > >
> > > > There is nothing wrong in QEMU.
> > >
> > > There is. This patch causes a regression for 32-bit Linux boot on the
> > > sifive_u. Your v5 has not addressed this.
> >
> > The 32-bit Linux boot failure was fixed by:
> > http://lists.infradead.org/pipermail/linux-riscv/2020-July/001213.html
> >
> > What additional issue did you see?
> >
> > >
> > > With this patch, the Linux boot stops here:
> > >
> > > OpenSBI v0.8
> > >_  _
> > >   / __ \  / |  _ \_   _|
> > >  | |  | |_ __   ___ _ __ | (___ | |_) || |
> > >  | |  | | '_ \ / _ \ '_ \ \___ \|  _ < | |
> > >  | |__| | |_) |  __/ | | |) | |_) || |_
> > >   \/| .__/ \___|_| |_|_/|/_|
> > > | |
> > > |_|
> > >
> > > Platform Name   : SiFive HiFive Unleashed A00
> > > Platform Features   : timer,mfdeleg
> > > Platform HART Count : 4
> > > Boot HART ID: 3
> > > Boot HART ISA   : rv64imafdcsu
> >
> > This is a 64-bit hardware.
>
> You are right. It's not 32-bit, that was my mistake. I'm used to my
> first test being 32-bit, but in this case it's not.
>
> It looks like this commit instead breaks the sifive_u for 64-bit with
> the 5.3 kernel.
>
> >
> > > BOOT HART Features  : pmp,scounteren,mcounteren
> > > BOOT HART PMP Count : 16
> > > Firmware Base   : 0x8000
> > > Firmware Size   : 116 KB
> > > Runtime SBI Version : 0.2
> > >
> > > MIDELEG : 0x0222
> > > MEDELEG : 0xb109
> > > PMP0: 0x8000-0x8001 (A)
> > > PMP1: 0x-0x (A,R,W,X)
> > > [0.00] OF: fdt: Ignoring memory range 0x8000 - 0x8020
> > > [0.00] Linux version 5.3.0 (oe-user@oe-host) (gcc version
> >
> > It seems that you are using quite an old kernel. Can you please try
> > the latest version?
>
> It is an old kernel, but old kernels should still keep working (or we
> should at least know why they don't)
>
> >
> > > 9.2.0 (GCC)) #1 SMP Thu Sep 19 18:34:52 UTC 2019
> > > [0.00] earlycon: sbi0 at I/O port 0x0 (options '')
> > > [0.00] printk: bootconsole [sbi0] enabled
> > > [0.00] initrd not found or empty - disabling initrd
> > > [0.00] Zone ranges:
> > > [0.00]   DMA32[mem 0x8020-0xbfff]
> > > [0.00]   Normal   empty
> > > [0.00] Movable zone start for each node
> > > [0.00] Early memory node ranges
> > > [0.00]   node   0: [mem 0x8020-0xbfff]
> > > [0.00] Initmem setup node 0 [mem 
> > > 0x8020-0xbfff]
> > > [0.00] OF: fdt: Invalid device tree blob header
> > > [0.00] software IO TLB: mapped [mem 0xbb1fe000-0xbf1fe000] (64MB)
> > >
> > > Without this patch I can boot all the way to looking for a rootFS.
> > >
> > > Please don't send new versions of patches without addressing regressions.
> >
> > The patches were sent after addressing all regressions you 

Re: [PATCH 1/1] pci/pcie: refuse another hotplug/unplug event if attention button is pending

2020-07-28 Thread Maxim Levitsky
On Wed, 2020-07-22 at 19:19 +0300, Maxim Levitsky wrote:
> On Wed, 2020-07-22 at 19:17 +0300, Maxim Levitsky wrote:
> > Currently it is possible to hotplug a device and then immediately
> > hot-unplug it before the OS notices, which results in a missed unplug
> > event, since we can only send one attention button event.
> > 
> > Moreover, the device will be stuck in the unplugging state forever.
> > 
> > Error out in such cases and rely on the caller (e.g. libvirt) to retry
> > the unplug a bit later
> > 
> > Signed-off-by: Maxim Levitsky 
> > ---
> >  hw/pci/pcie.c | 11 +++
> >  1 file changed, 11 insertions(+)
> > 
> > diff --git a/hw/pci/pcie.c b/hw/pci/pcie.c
> > index 5b48bae0f6..9e836cf2f4 100644
> > --- a/hw/pci/pcie.c
> > +++ b/hw/pci/pcie.c
> > @@ -402,6 +402,17 @@ static void pcie_cap_slot_plug_common(PCIDevice 
> > *hotplug_dev, DeviceState *dev,
> >   */
> >  error_setg_errno(errp, EBUSY, "slot is electromechanically 
> > locked");
> >  }
> > +
> > +if (sltsta & PCI_EXP_SLTSTA_ABP) {
> > +/*
> > + * Attention button is pressed, thus we can't send another
> > + * hotpplug event
> Typo here, forgot to refresh the commit.
> > + */
> > +error_setg_errno(errp, EBUSY,
> > + "attention button is already pressed, can't "
> > + "send another hotplug event");
> > +}
> > +
> >  }
> >  
> >  void pcie_cap_slot_pre_plug_cb(HotplugHandler *hotplug_dev, DeviceState 
> > *dev,
ping.


Best regards,
Maxim Levitsky




Re: [PATCH v4 4/7] hw/riscv: Use pre-built bios image of generic platform for virt & sifive_u

2020-07-28 Thread Alistair Francis
On Tue, Jul 28, 2020 at 9:51 PM Bin Meng  wrote:
>
> Hi Alistair,
>
> On Wed, Jul 29, 2020 at 2:26 AM Alistair Francis  wrote:
> >
> > On Tue, Jul 28, 2020 at 8:46 AM Bin Meng  wrote:
> > >
> > > Hi Alistair,
> > >
> > > On Tue, Jul 28, 2020 at 11:39 PM Alistair Francis  
> > > wrote:
> > > >
> > > > On Wed, Jul 15, 2020 at 9:55 PM Bin Meng  wrote:
> > > > >
> > > > > Hi Alistair,
> > > > >
> > > > > On Mon, Jul 13, 2020 at 9:53 AM Bin Meng  wrote:
> > > > > >
> > > > > > On Sun, Jul 12, 2020 at 1:34 AM Alistair Francis 
> > > > > >  wrote:
> > > > > > >
> > > > > > > On Thu, Jul 9, 2020 at 10:07 PM Bin Meng  
> > > > > > > wrote:
> > > > > > > >
> > > > > > > > From: Bin Meng 
> > > > > > > >
> > > > > > > > Update virt and sifive_u machines to use the opensbi fw_dynamic 
> > > > > > > > bios
> > > > > > > > image built for the generic FDT platform.
> > > > > > > >
> > > > > > > > Remove the out-of-date no longer used bios images.
> > > > > > > >
> > > > > > > > Signed-off-by: Bin Meng 
> > > > > > > > Reviewed-by: Anup Patel 
> > > > > > > > Reviewed-by: Alistair Francis 
> > > > > > >
> > > > > > > This patch seems to break 32-bit Linux boots on the sifive_u and 
> > > > > > > virt machines.
> > > > > > >
> > > > > >
> > > > > > It looks like only the Linux boot on sifive_u is broken. On our side,
> > > > > > we have been using VxWorks to test 32-bit OpenSBI on sifive_u, so this
> > > > > > issue went unnoticed. I will take a look.
> > > > >
> > > > > I've figured out the issue of 32-bit Linux booting failure on
> > > > > sifive_u. A patch has been sent to Linux upstream:
> > > > > http://lists.infradead.org/pipermail/linux-riscv/2020-July/001213.html
> > > >
> > > > Thanks for that. What change in QEMU causes this failure though?
> > > >
> > >
> > > There is nothing wrong in QEMU.
> >
> > There is. This patch causes a regression for 32-bit Linux boot on the
> > sifive_u. Your v5 has not addressed this.
>
> The 32-bit Linux boot failure was fixed by:
> http://lists.infradead.org/pipermail/linux-riscv/2020-July/001213.html
>
> What additional issue did you see?
>
> >
> > With this patch, the Linux boot stops here:
> >
> > OpenSBI v0.8
> >_  _
> >   / __ \  / |  _ \_   _|
> >  | |  | |_ __   ___ _ __ | (___ | |_) || |
> >  | |  | | '_ \ / _ \ '_ \ \___ \|  _ < | |
> >  | |__| | |_) |  __/ | | |) | |_) || |_
> >   \/| .__/ \___|_| |_|_/|/_|
> > | |
> > |_|
> >
> > Platform Name   : SiFive HiFive Unleashed A00
> > Platform Features   : timer,mfdeleg
> > Platform HART Count : 4
> > Boot HART ID: 3
> > Boot HART ISA   : rv64imafdcsu
>
> This is a 64-bit hardware.

You are right. It's not 32-bit, that was my mistake. I'm used to my
first test being 32-bit, but in this case it's not.

It looks like this commit instead breaks the sifive_u for 64-bit with
the 5.3 kernel.

>
> > BOOT HART Features  : pmp,scounteren,mcounteren
> > BOOT HART PMP Count : 16
> > Firmware Base   : 0x8000
> > Firmware Size   : 116 KB
> > Runtime SBI Version : 0.2
> >
> > MIDELEG : 0x0222
> > MEDELEG : 0xb109
> > PMP0: 0x8000-0x8001 (A)
> > PMP1: 0x-0x (A,R,W,X)
> > [0.00] OF: fdt: Ignoring memory range 0x8000 - 0x8020
> > [0.00] Linux version 5.3.0 (oe-user@oe-host) (gcc version
>
> It seems that you are using quite an old kernel. Can you please try
> the latest version?

It is an old kernel, but old kernels should still keep working (or we
should at least know why they don't)

>
> > 9.2.0 (GCC)) #1 SMP Thu Sep 19 18:34:52 UTC 2019
> > [0.00] earlycon: sbi0 at I/O port 0x0 (options '')
> > [0.00] printk: bootconsole [sbi0] enabled
> > [0.00] initrd not found or empty - disabling initrd
> > [0.00] Zone ranges:
> > [0.00]   DMA32[mem 0x8020-0xbfff]
> > [0.00]   Normal   empty
> > [0.00] Movable zone start for each node
> > [0.00] Early memory node ranges
> > [0.00]   node   0: [mem 0x8020-0xbfff]
> > [0.00] Initmem setup node 0 [mem 
> > 0x8020-0xbfff]
> > [0.00] OF: fdt: Invalid device tree blob header
> > [0.00] software IO TLB: mapped [mem 0xbb1fe000-0xbf1fe000] (64MB)
> >
> > Without this patch I can boot all the way to looking for a rootFS.
> >
> > Please don't send new versions of patches without addressing regressions.
>
> The patches were sent after addressing all the regressions you reported
> (well, the 32-bit Linux booting issue is actually not a QEMU
> regression, but one that has existed on the Linux kernel side for a long
> time).

Yep, that is my mistake. Sorry about the confusion.

>
> I just tested 64-bit Linux boot on both virt and sifive_u, and they
> both can boot all the way to looking for a root fs.

Can you test with older kernels?

If 

Re: [PATCH v4 4/7] hw/riscv: Use pre-built bios image of generic platform for virt & sifive_u

2020-07-28 Thread Bin Meng
Hi Alistair,

On Wed, Jul 29, 2020 at 2:26 AM Alistair Francis  wrote:
>
> On Tue, Jul 28, 2020 at 8:46 AM Bin Meng  wrote:
> >
> > Hi Alistair,
> >
> > On Tue, Jul 28, 2020 at 11:39 PM Alistair Francis  
> > wrote:
> > >
> > > On Wed, Jul 15, 2020 at 9:55 PM Bin Meng  wrote:
> > > >
> > > > Hi Alistair,
> > > >
> > > > On Mon, Jul 13, 2020 at 9:53 AM Bin Meng  wrote:
> > > > >
> > > > > On Sun, Jul 12, 2020 at 1:34 AM Alistair Francis 
> > > > >  wrote:
> > > > > >
> > > > > > On Thu, Jul 9, 2020 at 10:07 PM Bin Meng  wrote:
> > > > > > >
> > > > > > > From: Bin Meng 
> > > > > > >
> > > > > > > Update virt and sifive_u machines to use the opensbi fw_dynamic 
> > > > > > > bios
> > > > > > > image built for the generic FDT platform.
> > > > > > >
> > > > > > > Remove the out-of-date no longer used bios images.
> > > > > > >
> > > > > > > Signed-off-by: Bin Meng 
> > > > > > > Reviewed-by: Anup Patel 
> > > > > > > Reviewed-by: Alistair Francis 
> > > > > >
> > > > > > This patch seems to break 32-bit Linux boots on the sifive_u and 
> > > > > > virt machines.
> > > > > >
> > > > >
> > > > > It looks like only the Linux boot on sifive_u is broken. On our side,
> > > > > we have been using VxWorks to test 32-bit OpenSBI on sifive_u, so this
> > > > > issue went unnoticed. I will take a look.
> > > >
> > > > I've figured out the issue of 32-bit Linux booting failure on
> > > > sifive_u. A patch has been sent to Linux upstream:
> > > > http://lists.infradead.org/pipermail/linux-riscv/2020-July/001213.html
> > >
> > > Thanks for that. What change in QEMU causes this failure though?
> > >
> >
> > There is nothing wrong in QEMU.
>
> There is. This patch causes a regression for 32-bit Linux boot on the
> sifive_u. Your v5 has not addressed this.

The 32-bit Linux boot failure was fixed by:
http://lists.infradead.org/pipermail/linux-riscv/2020-July/001213.html

What additional issue did you see?

>
> With this patch, the Linux boot stops here:
>
> OpenSBI v0.8
>_  _
>   / __ \  / |  _ \_   _|
>  | |  | |_ __   ___ _ __ | (___ | |_) || |
>  | |  | | '_ \ / _ \ '_ \ \___ \|  _ < | |
>  | |__| | |_) |  __/ | | |) | |_) || |_
>   \/| .__/ \___|_| |_|_/|/_|
> | |
> |_|
>
> Platform Name   : SiFive HiFive Unleashed A00
> Platform Features   : timer,mfdeleg
> Platform HART Count : 4
> Boot HART ID: 3
> Boot HART ISA   : rv64imafdcsu

This is a 64-bit hardware.

> BOOT HART Features  : pmp,scounteren,mcounteren
> BOOT HART PMP Count : 16
> Firmware Base   : 0x8000
> Firmware Size   : 116 KB
> Runtime SBI Version : 0.2
>
> MIDELEG : 0x0222
> MEDELEG : 0xb109
> PMP0: 0x8000-0x8001 (A)
> PMP1: 0x-0x (A,R,W,X)
> [0.00] OF: fdt: Ignoring memory range 0x8000 - 0x8020
> [0.00] Linux version 5.3.0 (oe-user@oe-host) (gcc version

It seems that you are using quite an old kernel. Can you please try
the latest version?

> 9.2.0 (GCC)) #1 SMP Thu Sep 19 18:34:52 UTC 2019
> [0.00] earlycon: sbi0 at I/O port 0x0 (options '')
> [0.00] printk: bootconsole [sbi0] enabled
> [0.00] initrd not found or empty - disabling initrd
> [0.00] Zone ranges:
> [0.00]   DMA32[mem 0x8020-0xbfff]
> [0.00]   Normal   empty
> [0.00] Movable zone start for each node
> [0.00] Early memory node ranges
> [0.00]   node   0: [mem 0x8020-0xbfff]
> [0.00] Initmem setup node 0 [mem 
> 0x8020-0xbfff]
> [0.00] OF: fdt: Invalid device tree blob header
> [0.00] software IO TLB: mapped [mem 0xbb1fe000-0xbf1fe000] (64MB)
>
> Without this patch I can boot all the way to looking for a rootFS.
>
> Please don't send new versions of patches without addressing regressions.

The patches were sent after addressing all the regressions you reported
(well, the 32-bit Linux booting issue is actually not a QEMU
regression, but one that has existed on the Linux kernel side for a long
time).

I just tested 64-bit Linux boot on both virt and sifive_u, and they
both can boot all the way to looking for a root fs.

Regards,
Bin



[Bug 1872790] Re: empty qcow2

2020-07-28 Thread Launchpad Bug Tracker
[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1872790

Title:
  empty qcow2

Status in QEMU:
  Expired

Bug description:
  I attached multiple qcow2 disks to a Windows guest. In the Windows disk
  manager all disks are listed perfectly, with their data and their real
  size; I can even explore all the files in Explorer.

  In third-party disk managers (all of them), only the C:\ HDD behaves
  normally; all the other attached qcow2 disks are seen as fully
  unallocated, so I can't manipulate them.

  I want to move some partitions and create others, but in the Windows disk
  manager I can't extend or create a partition, and in the third-party tools
  I don't see the partitions at all.

  Even guestfs doesn't recognize any partition table: `libguestfs: error:
  inspect_os: /dev/sda: not a partitioned device`

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1872790/+subscriptions



Adding VHOST_USER_PROTOCOL_F_CONFIG_MEM_SLOTS to 5.1 release notes

2020-07-28 Thread Raphael Norwitz
Hi mst,

Looking at the current changelog
https://wiki.qemu.org/ChangeLog/5.1#virtio, I don't see any mention of
the VHOST_USER_PROTOCOL_F_CONFIG_MEM_SLOTS protocol feature. It is a
user-visible change, so shouldn't we add a note?

Thanks,
Raphael



RE: qemu icount to run guest SMP code

2020-07-28 Thread Wu, Wentong

Thanks for the reply.

> > We are trying to run guest SMP code with qemu icount mode, but based on my
> > current understanding I don't think we can do that, because with icount
> > enabled, the multiple CPUs will be simulated in a round-robin way (via the
> > tcg kick-vcpu timer, or the current cpu exiting in order to handle an
> > interrupt or the end of the current TranslationBlock) on a single vCPU
> > thread, so qemu is not running guest code in parallel as real hardware
> > does; if the guest code assumes that cores run in parallel, this will
> > cause unexpected behavior.
>
> Timing of the emulated CPUs will always vary from that of real hardware to 
> some extent.

I understand that, but at least we can get a deterministic result on a heavily
loaded PC for an emulated single-core system if we can adjust the shift value
to the level of the hardware frequency. And it will not work if the icount
qemu needs to communicate with other real hardware, for example via TCP.

> In general you can't expect QEMU's simulation to be accurate to the level 
> that it will correctly run guest code that's looking carefully at the level 
> of parallelism between multiple cores (whether using -icount or not.)

Not sure about the case without icount (maybe it will be accurate if only QEMU
is running on a powerful PC), but I can understand that it's not accurate with
icount enabled; the reason is, as you mentioned below, that there is the
possibility that we have to spin to wait for another core, so the icount timer
will not be accurate.

>
> SMP mode with icount (ie without MTTCG) will run all vCPUs on one thread, but 
> since we always round-robin between them well-behaved guest code will make 
> forward progress and will not notice any major differences between this and 
> real parallel execution. (Sometimes it might spin a little more if it has a 
> busy-loop waiting for another core, but only until the round-robin kicks in 
> and runs the other core.)

Yes, I agree with "well-behaved guest code will make forward progress".

Please correct me if anything above is wrong.
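The round-robin scheduling point above can be sketched as a toy model (hedged: run_round_robin, producer and spinner are made-up names; this only models the scheduling idea, it is not QEMU's implementation):

```python
# Toy model of icount's round-robin vCPU scheduling: one thread runs
# each vCPU for a slice in turn, so a vCPU busy-waiting on a flag set
# by another vCPU still makes forward progress.
def run_round_robin(vcpus, max_slices=100):
    done = set()
    slices = 0
    while len(done) < len(vcpus) and slices < max_slices:
        for i, vcpu in enumerate(vcpus):
            if i not in done and vcpu():  # vcpu() returns True when finished
                done.add(i)
            slices += 1
    return len(done) == len(vcpus)

shared = {'flag': False}

def producer():
    shared['flag'] = True       # "core 1" sets the flag
    return True

def spinner():
    return shared['flag']       # "core 0" spins until the flag is set

# The spinner alone would loop forever on a single thread; interleaving
# slices lets the producer run, so both vCPUs finish.
print(run_round_robin([spinner, producer]))  # -> True
```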

BR


[Bug 1856335] Re: Cache Layout wrong on many Zen Arch CPUs

2020-07-28 Thread Heiko Sieger
Sanjay,

You can just increase the number of vcpus (e.g. to 64) and

then continue to define the vcpus:

(6x enabled=yes, then 2x enabled=no.)

You will get more vcpu ids than you have threads, but since you disable
16 out of 64, you will have 48 active.

vcpupin should continue as follows:

This is if you pin all vcpus to the VM, which may not be the best thing
to do. The maximum number of vcpus you can pin on a Threadripper 3960X
is 48.
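The vcpu XML that should appear above was eaten by the bug-tracker archive; a rough illustrative sketch of what such a libvirt domain config looks like (hypothetical ids and host cpusets, not the poster's actual XML):

```xml
<!-- Illustrative sketch: 64 vcpu slots, 48 enabled, per-CCX pattern of
     6 enabled / 2 disabled as described above. -->
<vcpu placement='static' current='48'>64</vcpu>
<vcpus>
  <vcpu id='0' enabled='yes' hotpluggable='no'/>
  <vcpu id='1' enabled='yes' hotpluggable='yes'/>
  <vcpu id='2' enabled='yes' hotpluggable='yes'/>
  <vcpu id='3' enabled='yes' hotpluggable='yes'/>
  <vcpu id='4' enabled='yes' hotpluggable='yes'/>
  <vcpu id='5' enabled='yes' hotpluggable='yes'/>
  <vcpu id='6' enabled='no' hotpluggable='yes'/>
  <vcpu id='7' enabled='no' hotpluggable='yes'/>
  <!-- same 6-enabled/2-disabled pattern continues for ids 8-63 -->
</vcpus>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <!-- pin each enabled vcpu to a host thread, skipping disabled ids -->
</cputune>
```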

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1856335

Title:
  Cache Layout wrong on many Zen Arch CPUs

Status in QEMU:
  New

Bug description:
  AMD CPUs have L3 cache per 2, 3 or 4 cores. Currently, TOPOEXT seems
  to always map the cache as if it were a 4-core-per-CCX CPU, which is
  incorrect, and costs upwards of 30% performance (more realistically 10%)
  in L3-cache-layout-aware applications.

  Example on a 4-CCX CPU (1950X /w 8 Cores and no SMT):

    
  EPYC-IBPB
  AMD
  

  In windows, coreinfo reports correctly:

    Unified Cache 1, Level 3,8 MB, Assoc  16, LineSize  64
    Unified Cache 6, Level 3,8 MB, Assoc  16, LineSize  64

  On a 3-CCX CPU (3960X /w 6 cores and no SMT):

   
  EPYC-IBPB
  AMD
  

  in windows, coreinfo reports incorrectly:

  --  Unified Cache  1, Level 3,8 MB, Assoc  16, LineSize  64
  **  Unified Cache  6, Level 3,8 MB, Assoc  16, LineSize  64

  Validated against 3.0, 3.1, 4.1 and 4.2 versions of qemu-kvm.

  With newer Qemu there is a fix (that does behave correctly) using the dies
  parameter:
   

  The problem is that the dies are exposed differently than how AMD does
  it natively; they are exposed to Windows as sockets, which means that
  if you are not a business user, you can't ever have a machine with
  more than two CCXs (6 cores), as consumer versions of Windows only
  support two sockets. (Should this be reported as a separate bug?)

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1856335/+subscriptions



Re: [PATCH v3 0/8] Generalize start-powered-off property from ARM

2020-07-28 Thread Thiago Jung Bauermann


Thiago Jung Bauermann  writes:

> The ARM code has a start-powered-off property in ARMCPU, which is a
> subclass of CPUState. This property causes arm_cpu_reset() to set
> CPUState::halted to 1, signalling that the CPU should start in a halted
> state. Other architectures also have code which aim to achieve the same
> effect, but without using a property.
>
> The ppc/spapr version has a bug where QEMU does a KVM_RUN on the vcpu
> before cs->halted is set to 1, causing the vcpu to run while it's still in
> an uninitialized state (more details in patch 3).

Since this series fixes a bug, is it eligible for 5.1, at least the
patches that were already approved by the appropriate maintainers?

-- 
Thiago Jung Bauermann
IBM Linux Technology Center



Re: [RFC PATCH v3 8/8] target/s390x: Use start-powered-off CPUState property

2020-07-28 Thread Thiago Jung Bauermann


Hi,

Cornelia Huck  writes:

> On Wed, 22 Jul 2020 23:56:57 -0300
> Thiago Jung Bauermann  wrote:
>
>> Instead of setting CPUState::halted to 1 in s390_cpu_initfn(), use the
>> start-powered-off property which makes cpu_common_reset() initialize it
>> to 1 in common code.
>> 
>> Note that this changes behavior by setting cs->halted to 1 on reset, which
>> didn't happen before.
>
> I think that should be fine, as we change the cpu state to STOPPED in
> the reset function, which sets halted to 1.

Nice, thanks for checking.

>> 
>> Signed-off-by: Thiago Jung Bauermann 
>> ---
>>  target/s390x/cpu.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>> 
>> NB: I was only able to test that this patch builds. I wasn't able to
>> run it.
>
> No noticeable difference under kvm, but running under tcg seems a bit
> more sluggish than usual, and I saw some pausing on reboot (after the
> bios handover to the kernel). Not sure if it were just flukes on my
> laptop, would appreciate if someone else could give it a go.

I tried today setting up a TCG guest, but didn't have success yet.
Will try some more tomorrow.

-- 
Thiago Jung Bauermann
IBM Linux Technology Center



Re: [PATCH v6 3/4] target/riscv: Fix the translation of physical address

2020-07-28 Thread Alistair Francis
On Tue, Jul 28, 2020 at 1:26 AM Zong Li  wrote:
>
> The real physical address should include the 12-bit page offset. The
> missing offset also causes wrong PMP checking: the minimum granularity
> of PMP is 4 bytes, but we always get a physical address that is 4KB
> aligned, which means we always use the start address of the page to
> check PMP for all addresses within the same page.
>
> Signed-off-by: Zong Li 

Reviewed-by: Alistair Francis 

Alistair

> ---
>  target/riscv/cpu_helper.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index 75d2ae3434..2f337e418c 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -543,7 +543,8 @@ restart:
>  /* for superpage mappings, make a fake leaf PTE for the TLB's
> benefit. */
>  target_ulong vpn = addr >> PGSHIFT;
> -*physical = (ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT;
> +*physical = ((ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT) |
> +(addr & ~TARGET_PAGE_MASK);
>
>  /* set permissions on the TLB entry */
>  if ((pte & PTE_R) || ((pte & PTE_X) && mxr)) {
> @@ -630,7 +631,7 @@ hwaddr riscv_cpu_get_phys_page_debug(CPUState *cs, vaddr 
> addr)
>  }
>  }
>
> -return phys_addr;
> +return phys_addr & TARGET_PAGE_MASK;
>  }
>
>  void riscv_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
> --
> 2.27.0
>
>



Re: [PATCH for-5.1? 0/3] Fix AIRCR.SYSRESETREQ for most M-profile boards

2020-07-28 Thread Alistair Francis
On Tue, Jul 28, 2020 at 3:38 AM Peter Maydell  wrote:
>
> QEMU's NVIC device provides an outbound qemu_irq "SYSRESETREQ" which
> it signals when the guest sets the SYSRESETREQ bit in the AIRCR
> register.  This matches the hardware design (where the CPU has a
> signal of this name and it is up to the SoC to connect that up to an
> actual reset mechanism), but in QEMU it mostly results in duplicated
> code in SoC objects and bugs where SoC model implementors forget to
> wire up the SYSRESETREQ line.
>
> This patchseries provides a default behaviour for the case where
> SYSRESETREQ is not actually connected to anything: use
> qemu_system_reset_request() to perform a system reset. This is a
> much more plausible default than "do nothing". It allows us to
> remove the implementations of SYSRESETREQ handling from
> the boards which currently hook up a reset function that just
> does that (stellaris, emcraft-sf2), and also fixes the bugs
> in these board models which forgot to wire up the signal:
>
>  * microbit
>  * mps2-an385
>  * mps2-an505
>  * mps2-an511
>  * mps2-an521
>  * musca-a
>  * musca-b1
>  * netduino
>  * netduinoplus2
>
> [I admit that I have not specifically checked for datasheets
> and errata notes for all these boards to confirm that AIRCR.SYSRESETREQ
> really does reset the system or to look for more complex
> reset-control logic... As an example, though, the SSE-200 used in
> the mps2-an521 and the musca boards has a RESET_MASK register in the
> system control block that allows a guest to suppress reset requests from
> one or both CPUs. Since we don't model that register, "always reset"
> is still closer to reasonable behaviour than "do nothing".]
>
> We still allow the board to wire up the signal if it needs to, in case
> we need to model more complicated reset controller logic (eg if we
> wanted to get that SSE-200 corner case right in future) or to model
> buggy SoC hardware which forgot to wire up the line itself. But
> defaulting to "reset the system" is going to be correct much more
> often than "do nothing".
>
> Patch 1 introduces a new API for "check whether my qemu_irq is
> actually connected to anything" (the test is trivial but the
> encapsulation seems worthwhile); patch 2 provides the new default
> and patch 3 removes the now-unnecessary SYSRESETREQ handlers in
> board/SoC code.

I checked the STM Reference Manual and this matches what they expect.

Reviewed-by: Alistair Francis 

Alistair

>
> thanks
> -- PMM
>
> Peter Maydell (3):
>   include/hw/irq.h: New function qemu_irq_is_connected()
>   hw/intc/armv7m_nvic: Provide default "reset the system" behaviour for
> SYSRESETREQ
>   msf2-soc, stellaris: Don't wire up SYSRESETREQ
>
>  include/hw/arm/armv7m.h |  4 +++-
>  include/hw/irq.h| 18 ++
>  hw/arm/msf2-soc.c   | 11 ---
>  hw/arm/stellaris.c  | 12 
>  hw/intc/armv7m_nvic.c   | 17 -
>  5 files changed, 37 insertions(+), 25 deletions(-)
>
> --
> 2.20.1
>
>



[ANNOUNCE] QEMU 5.1.0-rc2 is now available

2020-07-28 Thread Michael Roth
Hello,

On behalf of the QEMU Team, I'd like to announce the availability of the
third release candidate for the QEMU 5.1 release.  This release is meant
for testing purposes and should not be used in a production environment.

  http://download.qemu-project.org/qemu-5.1.0-rc2.tar.xz
  http://download.qemu-project.org/qemu-5.1.0-rc2.tar.xz.sig

You can help improve the quality of the QEMU 5.1 release by testing this
release and reporting bugs on Launchpad:

  https://bugs.launchpad.net/qemu/

The release plan, as well as documented known issues for release
candidates, are available at:

  http://wiki.qemu.org/Planning/5.1

Please add entries to the ChangeLog for the 5.1 release below:

  http://wiki.qemu.org/ChangeLog/5.1

Thank you to everyone involved!

Changes since rc1:

5772f2b1fc: Update version for v5.1.0-rc2 release (Peter Maydell)
12c75e20a2: block/nbd: nbd_co_reconnect_loop(): don't sleep if drained 
(Vladimir Sementsov-Ogievskiy)
fbeb3e63b3: block/nbd: on shutdown terminate connection attempt (Vladimir 
Sementsov-Ogievskiy)
dd1ec1a4af: block/nbd: allow drain during reconnect attempt (Vladimir 
Sementsov-Ogievskiy)
fa35591b9c: block/nbd: split nbd_establish_connection out of nbd_client_connect 
(Vladimir Sementsov-Ogievskiy)
03a970bb6f: iotests: Test convert to qcow2 compressed to NBD (Nir Soffer)
4b914b01cd: iotests: Add more qemu_img helpers (Nir Soffer)
b7719bcad2: iotests: Make qemu_nbd_popen() a contextmanager (Nir Soffer)
a2b333c018: block: nbd: Fix convert qcow2 compressed to nbd (Nir Soffer)
9c15f57891: slirp: update to latest stable-4.2 branch (Marc-André Lureau)
297641d43c: test-char: abort on serial test error (Marc-André Lureau)
890cbccb08: nbd: Fix large trim/zero requests (Eric Blake)
afac471b71: iotests/197: Fix for non-qcow2 formats (Max Reitz)
ae159450e1: iotests/028: Add test for cross-base-EOF reads (Max Reitz)
134b7dec6e: block: Fix bdrv_aligned_p*v() for qiov_offset != 0 (Max Reitz)
22dc8663d9: net: forbid the reentrant RX (Jason Wang)
c546ecf27d: virtio-net: check the existence of peer before accessing vDPA 
config (Jason Wang)
a48aaf882b: virtio-pci: fix wrong index in virtio_pci_queue_enabled (Yuri 
Benditovich)
ba620541d0: qga/qapi-schema: Document -1 for invalid PCI address fields (Thomas 
Huth)
3aaebc0cce: qga-win: fix "guest-get-fsinfo" wrong filesystem type (Basil Salman)
37931e006f: migration: Fix typos in bitmap migration comments (Eric Blake)
fbd1c1b642: iotests: Adjust which migration tests are quick (Eric Blake)
058a08a658: qemu-iotests/199: add source-killed case to bitmaps postcopy 
(Vladimir Sementsov-Ogievskiy)
845b2204c9: qemu-iotests/199: add early shutdown case to bitmaps postcopy 
(Vladimir Sementsov-Ogievskiy)
d4c6fcc01b: qemu-iotests/199: check persistent bitmaps (Vladimir 
Sementsov-Ogievskiy)
48f43820cd: qemu-iotests/199: prepare for new test-cases addition (Vladimir 
Sementsov-Ogievskiy)
ee64722514: migration/savevm: don't worry if bitmap migration postcopy failed 
(Vladimir Sementsov-Ogievskiy)
1499ab0969: migration/block-dirty-bitmap: cancel migration on shutdown 
(Vladimir Sementsov-Ogievskiy)
b91f33b81d: migration/block-dirty-bitmap: relax error handling in incoming part 
(Vladimir Sementsov-Ogievskiy)
0a47190a00: migration/block-dirty-bitmap: keep bitmap state for all bitmaps 
(Vladimir Sementsov-Ogievskiy)
f3045b9a82: migration/block-dirty-bitmap: simplify dirty_bitmap_load_complete 
(Vladimir Sementsov-Ogievskiy)
8949121644: migration/block-dirty-bitmap: rename finish_lock to just lock 
(Vladimir Sementsov-Ogievskiy)
3b52726ec0: migration/block-dirty-bitmap: refactor state global variables 
(Vladimir Sementsov-Ogievskiy)
d0cccbd118: migration/block-dirty-bitmap: move mutex init to 
dirty_bitmap_mig_init (Vladimir Sementsov-Ogievskiy)
b25d364102: migration/block-dirty-bitmap: rename dirty_bitmap_mig_cleanup 
(Vladimir Sementsov-Ogievskiy)
fbbc6b1470: migration/block-dirty-bitmap: rename state structure types 
(Vladimir Sementsov-Ogievskiy)
e6ce5e9224: migration/block-dirty-bitmap: fix dirty_bitmap_mig_before_vm_start 
(Vladimir Sementsov-Ogievskiy)
e80a4150a5: qemu-iotests/199: increase postcopy period (Vladimir 
Sementsov-Ogievskiy)
31e3827913: qemu-iotests/199: change discard patterns (Vladimir 
Sementsov-Ogievskiy)
edb90bbdf3: qemu-iotests/199: improve performance: set bitmap by discard 
(Vladimir Sementsov-Ogievskiy)
09feea6cf5: qemu-iotests/199: better catch postcopy time (Vladimir 
Sementsov-Ogievskiy)
f3f483ac63: qemu-iotests/199: drop extra constraints (Vladimir 
Sementsov-Ogievskiy)
8243219fa5: qemu-iotests/199: fix style (Vladimir Sementsov-Ogievskiy)
8098969cf2: qcow2: Fix capitalization of header extension constant. (Andrey 
Shinkevich)
0f6bb1958f: linux-user: Use getcwd syscall directly (Andreas Schwab)
4d213001b3: linux-user: Fix syscall rt_sigtimedwait() implementation (Filip 
Bozuta)
c9f8066697: linux-user: Ensure mmap_min_addr is non-zero (Richard Henderson)
0c9753ebda: virtio-pci: fix virtio_pci_queue_enabled() (Laurent 

Re: [PATCH for 5.1] docs: fix trace docs build with sphinx 3.1.1

2020-07-28 Thread John Snow

On 7/27/20 4:14 PM, Peter Maydell wrote:

On Mon, 27 Jul 2020 at 20:52, John Snow  wrote:

... Should we say goodbye to Sphinx 1.7.x, or is there a workaround that
keeps support from 1.6.1 through to 3.1.1?


I think we need to keep 1.7.x because it's the Sphinx shipped
by some LTS distros we support, don't we?



Only if you feel it's important that doc building works using packages 
from the system repo. `pip3 install --user sphinx` works on all of these 
distros, too.


We already somewhat break our promise where Sphinx is concerned for the 
oldest distros on our support list.



I do feel we probably need to defend our Sphinx-version-support
more actively by having oldest-supported and bleeding-edge
both tested in the CI setup...



We could either:

A) Start using a "build venv" that uses specific sets of python packages 
for the build process to pin everyone's build at version 1.6.1


B) Add some sort of pytest/tox/whatever to add a test that does a sphinx 
doc build across multiple different versions, 1.6.1, 1.7.x, 2.0.x, etc.


(I don't think you liked the idea of a Python environment in QEMU taking 
responsibility of Sphinx, though ...)





Re: [PATCH v3 3/8] ppc/spapr: Use start-powered-off CPUState property

2020-07-28 Thread Thiago Jung Bauermann


Greg Kurz  writes:

> On Wed, 22 Jul 2020 23:56:52 -0300
> Thiago Jung Bauermann  wrote:
>
>> PowerPC sPAPR CPUs start in the halted state, and spapr_reset_vcpu()
>> attempts to implement this by setting CPUState::halted to 1. But that's too
>> late for the case of hotplugged CPUs in a machine configure with 2 or more
>> threads per core.
>> 
>> By then, other parts of QEMU have already caused the vCPU to run in an
>> uninitialized state a couple of times. For example, ppc_cpu_reset() calls
>> ppc_tlb_invalidate_all(), which ends up calling async_run_on_cpu(). This
>> kicks the new vCPU while it has CPUState::halted = 0, causing QEMU to issue
>> a KVM_RUN ioctl on the new vCPU before the guest is able to make the
>> start-cpu RTAS call to initialize its register state.
>> 
>> This problem doesn't seem to cause visible issues for regular guests, but
>> on a secure guest running under the Ultravisor it does. The Ultravisor
>> relies on being able to snoop on the start-cpu RTAS call to map vCPUs to
>> guests, and this issue causes it to see a stray vCPU that doesn't belong to
>> any guest.
>> 
>> Fix by setting the start-powered-off CPUState property in
>> spapr_create_vcpu(), which makes cpu_common_reset() initialize
>> CPUState::halted to 1 at an earlier moment.
>> 
>> Suggested-by: Eduardo Habkost 
>> Signed-off-by: Thiago Jung Bauermann 
>> ---
>
> Thanks for doing this ! I remember seeing partly initialized CPUs being
> kicked in the past, which is clearly wrong but I never got time to find
> a proper fix (especially since it didn't seem to break anything).
>
> Reviewed-by: Greg Kurz 

Thanks for confirming the issue, and for reviewing the code.

-- 
Thiago Jung Bauermann
IBM Linux Technology Center



[PATCH] linux-user: Correctly start brk after executable

2020-07-28 Thread Timothy E Baldwin
info->brk was erroneously set to the end of the highest addressed
writable segment, which could result in it overlapping the executable.

As per load_elf_binary in fs/binfmt_elf.c in Linux, it should be
set to the end of the highest addressed segment.

Signed-off-by: Timothy E Baldwin 
---
 linux-user/elfload.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 7e7f642332..d5d444f698 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -2564,9 +2564,9 @@ static void load_elf_image(const char *image_name, int 
image_fd,
 if (vaddr_ef > info->end_data) {
 info->end_data = vaddr_ef;
 }
-if (vaddr_em > info->brk) {
-info->brk = vaddr_em;
-}
+}
+if (vaddr_em > info->brk) {
+info->brk = vaddr_em;
 }
 } else if (eppnt->p_type == PT_INTERP && pinterp_name) {
 char *interp_name;
@@ -2621,7 +2621,6 @@ static void load_elf_image(const char *image_name, int 
image_fd,
 if (info->end_data == 0) {
 info->start_data = info->end_code;
 info->end_data = info->end_code;
-info->brk = info->end_code;
 }
 
 if (qemu_log_enabled()) {
-- 
2.25.1

Given an executable with a read-write segment between 2 executable segments,
qemu was unmapping most of the executable instead of the area reserved for
brk at the end of the executable.



Re: [PATCH v3 1/8] target/arm: Move start-powered-off property to generic CPUState

2020-07-28 Thread Thiago Jung Bauermann


Greg Kurz  writes:

> On Thu, 23 Jul 2020 13:06:09 +1000
> David Gibson  wrote:
>
>> On Wed, Jul 22, 2020 at 11:56:50PM -0300, Thiago Jung Bauermann wrote:
>> > There are other platforms which also have CPUs that start powered off, so
>> > generalize the start-powered-off property so that it can be used by them.
>> > 
>> > Note that ARMv7MState also has a property of the same name but this patch
>> > doesn't change it because that class isn't a subclass of CPUState so it
>> > wouldn't be a trivial change.
>> > 
>> > This change should not cause any change in behavior.
>> > 
>> > Suggested-by: Eduardo Habkost 
>> > Reviewed-by: Philippe Mathieu-Daudé 
>> > Signed-off-by: Thiago Jung Bauermann 
>> 
>> Reviewed-by: David Gibson 
>> 
>
> Reviewed-by: Greg Kurz 

Thanks!

-- 
Thiago Jung Bauermann
IBM Linux Technology Center



Re: sysbus_create_simple Vs qdev_create

2020-07-28 Thread Eduardo Habkost
On Tue, Jul 28, 2020 at 07:38:27PM +0200, Paolo Bonzini wrote:
> On 28/07/20 09:19, Markus Armbruster wrote:
> >> the composition tree generally mirrors things that are born and die
> >> at the same time, and creating children is generally reserved to the
> >> object itself.
> >
> > Yes.  Notable exceptions: containers /machine/peripheral,
> > /machine/peripheral-anon, /machine/unattached.
> 
> And /objects too.  Apart from /machine/unattached, all these dynamic
> objects are created by the monitor or the command line.
> 
> >> Children are usually embedded directly in a struct, for
> >> example.
> > 
> > We sometimes use object_new() + object_property_add_child() instead.
> > Extra indirection.  I guess we'd be better off without the extra
> > indirection most of the time.  Implementation detail.
> > 
> > We sometimes use object_new() without object_property_add_child(), and
> > have qdev_realize() put the device in the /machine/unattached orphanage.
> > Meh.  I guess the orphanage feature exists to make conversion to QOM
> > slightly easier.  Could we ban its use for new boards at least?
> 
> Banning perhaps is too strong, but yes /machine/unattached is an
> anti-pattern.
> 
> >> 3) accessing the QOM graph is slow (it requires hash table lookups,
> >> string comparisons and all that), so the pointers that cache the
> >> parent-child links are needed for use in hot paths.
> > 
> > True, but only because QOM's design opts for generality, efficiency be
> > damned :)
> 
> Remember that QOM's essential feature is the visitors: unlike GObject,
> QOM is not targeted at programming languages but rather at CLI and RPC.

This is surprising to me.  I never thought QOM was targeted at
the CLI or RPC.  (Every single property mentioned in this message
doesn't seem to be related to the CLI or RPC.)

About the visitors: I always had the impression that usage of
visitors inside QOM is unnecessary and avoidable (compared to
QAPI, where the visitors are an essential feature).


> 
> > I never quite understood why we need to build properties at run time.
> 
> I think it was (for once) Anthony reasoning that good is better than
> perfect.  You do need run-time properties for /machine/unattached,
> /machine/peripheral, etc., so he decided to only have run-time
> properties.  Introspection wasn't considered a primary design goal.
> 
> Also, even though child properties are quite often the same for all
> objects after initialization (and possibly realization) is complete,
> this does not cover in two cases:
> 
> 1) before the corresponding objects are created---so static child
> properties would only be possible if creation of all children is moved
> to instance_init, which is guaranteed not to fail.
> 
> 2) there are cases in which a property (e.g. an int) governs how many of
> those children exist, so you cannot create all children in instance_init.

Do we really need QOM children to be accessible using the QOM
property API?

Using the same code for both user-configurable properties and for
the list of children of a QOM object might have saved some time
years ago, but I'm not sure this is still a necessary or useful
abstraction.

-- 
Eduardo




Re: [PATCH v2 3/3] virtiofsd: probe unshare(CLONE_FS) and print an error

2020-07-28 Thread Vivek Goyal
On Tue, Jul 28, 2020 at 03:12:54PM -0400, Daniel Walsh wrote:
> On 7/28/20 09:12, Vivek Goyal wrote:
> > On Tue, Jul 28, 2020 at 12:00:20PM +0200, Roman Mohr wrote:
> >> On Tue, Jul 28, 2020 at 3:07 AM misono.tomoh...@fujitsu.com <
> >> misono.tomoh...@fujitsu.com> wrote:
> >>
>  Subject: [PATCH v2 3/3] virtiofsd: probe unshare(CLONE_FS) and print an
> >>> error
>  An assertion failure is raised during request processing if
>  unshare(CLONE_FS) fails. Implement a probe at startup so the problem can
>  be detected right away.
> 
>  Unfortunately Docker/Moby does not include unshare in the seccomp.json
>  list unless CAP_SYS_ADMIN is given. Other seccomp.json lists always
>  include unshare (e.g. podman is unaffected):
> 
> >>> https://raw.githubusercontent.com/seccomp/containers-golang/master/seccomp.json
>  Use "docker run --security-opt seccomp=path/to/seccomp.json ..." if the
>  default seccomp.json is missing unshare.
> >>> Hi, sorry for a bit late.
> >>>
> >>> unshare() was added to fix xattr problem:
> >>>
> >>> https://github.com/qemu/qemu/commit/bdfd66788349acc43cd3f1298718ad491663cfcc#
> >>> In theory we don't need to call unshare if xattr is disabled, but it is
> >>> hard to get to know
> >>> if xattr is enabled or disabled in fv_queue_worker(), right?
> >>>
> >>>
> >> In kubevirt we want to run virtiofsd in containers. We would already not
> >> have xattr support for e.g. overlayfs in the VM after this patch series (an
> >> acceptable con at least for us right now).
> >> If we can get rid of the unshare (and potentially of needing root) that
> >> would be great. We always assume that everything which we run in containers
> >> should work for cri-o and docker.
> > But cri-o and docker containers run as root, don't they? (or at least have
> > the capability to run as root). Having said that, it will be nice to be able
> > to run virtiofsd without root.
> >
> > There are few hurdles though.
> >
> > - For file creation, we switch uid/gid (seteuid/setegid) and that seems
> >   to require root. If we were to run unprivileged, probably all files
> >   on host will have to be owned by an unprivileged user and guest-visible
> >   uid/gid will have to be stored in xattrs. I think virtfs supports
> >   something similar.
> >
> > I am sure there are other restrictions but this probably is the biggest
> > one to overcome.
> >
> >  > 
> You should be able to run it within a user namespace with Namespaces
> capabilities.

User namespaces have always been a TODO item. A few challenges are
how to manage uid/gid mappings for existing directories which will be
shared. We will have to pick a mapping range and then chown the
files accordingly. And chowning itself will require privileges.

Now Stefan is adding support to run virtiofsd inside a container. So
podman should be able to run virtiofsd inside a user namespace (and even
give it CAP_SYS_ADMIN). That way we don't have to worry about setting
up a user namespace and selecting a mapping etc. But persistent data
volumes will continue to be an issue with user namespaces, right?

How are you dealing with this in podman, for containers running in a
user namespace and with volume mounts?

Vivek

> >> "Just" pointing docker to a different seccomp.json file is something which
> >> k8s users/admin in many cases can't do.
> > Or may be issue is that standard seccomp.json does not allow unshare()
> > and hence you are forced to use a non-standar seccomp.json.
> >
> > Vivek
> >
> >> Best Regards,
> >> Roman
> >>
> >>
> >>> So, it looks good to me.
> >>> Reviewed-by: Misono Tomohiro 
> >>>
> >>> Regards,
> >>> Misono
> >>>
>  Cc: Misono Tomohiro 
>  Signed-off-by: Stefan Hajnoczi 
>  ---
>   tools/virtiofsd/fuse_virtio.c | 16 
>   1 file changed, 16 insertions(+)
> 
>  diff --git a/tools/virtiofsd/fuse_virtio.c
> >>> b/tools/virtiofsd/fuse_virtio.c
>  index 3b6d16a041..9e5537506c 100644
>  --- a/tools/virtiofsd/fuse_virtio.c
>  +++ b/tools/virtiofsd/fuse_virtio.c
>  @@ -949,6 +949,22 @@ int virtio_session_mount(struct fuse_session *se)
>   {
>   int ret;
> 
>  +/*
>  + * Test that unshare(CLONE_FS) works. fv_queue_worker() will need
> >>> it. It's
>  + * an unprivileged system call but some Docker/Moby versions are
> >>> known to
>  + * reject it via seccomp when CAP_SYS_ADMIN is not given.
>  + *
>  + * Note that the program is single-threaded here so this syscall
> >>> has no
>  + * visible effect and is safe to make.
>  + */
>  +ret = unshare(CLONE_FS);
>  +if (ret == -1 && errno == EPERM) {
>  +fuse_log(FUSE_LOG_ERR, "unshare(CLONE_FS) failed with EPERM. If
> >>> "
>  +"running in a container please check that the container
> >>> "
>  +"runtime seccomp policy allows unshare.\n");
>  +return -1;
>  +}
>  +
>   ret = 

Re: [PATCH v2 3/3] virtiofsd: probe unshare(CLONE_FS) and print an error

2020-07-28 Thread Vivek Goyal
On Tue, Jul 28, 2020 at 04:52:33PM +0100, Daniel P. Berrangé wrote:
> On Tue, Jul 28, 2020 at 09:12:50AM -0400, Vivek Goyal wrote:
> > On Tue, Jul 28, 2020 at 12:00:20PM +0200, Roman Mohr wrote:
> > > On Tue, Jul 28, 2020 at 3:07 AM misono.tomoh...@fujitsu.com <
> > > misono.tomoh...@fujitsu.com> wrote:
> > > 
> > > > > Subject: [PATCH v2 3/3] virtiofsd: probe unshare(CLONE_FS) and print 
> > > > > an
> > > > error
> > > > >
> > > > > An assertion failure is raised during request processing if
> > > > > unshare(CLONE_FS) fails. Implement a probe at startup so the problem 
> > > > > can
> > > > > be detected right away.
> > > > >
> > > > > Unfortunately Docker/Moby does not include unshare in the seccomp.json
> > > > > list unless CAP_SYS_ADMIN is given. Other seccomp.json lists always
> > > > > include unshare (e.g. podman is unaffected):
> > > > >
> > > > https://raw.githubusercontent.com/seccomp/containers-golang/master/seccomp.json
> > > > >
> > > > > Use "docker run --security-opt seccomp=path/to/seccomp.json ..." if 
> > > > > the
> > > > > default seccomp.json is missing unshare.
> > > >
> > > > Hi, sorry for a bit late.
> > > >
> > > > unshare() was added to fix xattr problem:
> > > >
> > > > https://github.com/qemu/qemu/commit/bdfd66788349acc43cd3f1298718ad491663cfcc#
> > > > In theory we don't need to call unshare if xattr is disabled, but it is
> > > > hard to get to know
> > > > if xattr is enabled or disabled in fv_queue_worker(), right?
> > > >
> > > >
> > > In kubevirt we want to run virtiofsd in containers. We would already not
> > > have xattr support for e.g. overlayfs in the VM after this patch series 
> > > (an
> > > acceptable con at least for us right now).
> > > If we can get rid of the unshare (and potentially of needing root) that
> > > would be great. We always assume that everything which we run in 
> > > containers
> > > should work for cri-o and docker.
> > 
> > But cri-o and docker containers run as root, don't they? (or at least have
> > the capability to run as root). Having said that, it will be nice to be able
> > to run virtiofsd without root.
> > 
> > There are few hurdles though.
> > 
> > - For file creation, we switch uid/gid (seteuid/setegid) and that seems
> >   to require root. If we were to run unprivileged, probably all files
> >   on host will have to be owned by an unprivileged user and guest-visible
> >   uid/gid will have to be stored in xattrs. I think virtfs supports
> >   something similar.
> 
> I think I've mentioned before, 9p virtfs supports different modes,
> passthrough, squashed or remapped.
> 
> passthrough should be reasonably straightforward to support in virtiofs.
> The guest sees all the host UID/GIDs ownership as normal, and can read
> any files the host user can read, but are obviously restricted to write
> to only the files that host user can write too. No DAC-OVERRIDE facility
> in essence. You'll just get EPERM, which is fine. This simple passthrough
> scenario would be just what's desired for a typical desktop virt use
> cases, where you want to share part/all of your home dir with a guest for
> easy file access. Personally this is the mode I'd be most interested in
> seeing provided for unprivileged virtiofsd usage.

Interesting. So passthrough will have two sub-modes: privileged and
unprivileged. As of now we support privileged passthrough.

I guess it does make sense to look into unprivileged passthrough
and see what other operations will not be allowed.

Thanks
Vivek

> 
> squash is similar to passthrough, except the guest sees everything
> as owned by the same user. This can be surprising as the guest might
> see a file owned by them, but not be able to write to it, as on the
> host its actually owned by some other user. Fairly niche use case
> I think.
> 
> remapping would be needed for a more general purpose use cases
> allowing the guest to do arbitrary UID/GID changes, but on the host
> everything is still stored as one user and remapped somehow.
> 
> The main challenge for all the unprivileged scenarios is safety of
> the sandbox, to avoid risk of guests escaping to access files outside
> of the exported dir via symlink attacks or similar.
> 
> 
> 
> Regards,
> Daniel
> -- 
> |: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o-https://fstop138.berrange.com :|
> |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH v2 3/3] virtiofsd: probe unshare(CLONE_FS) and print an error

2020-07-28 Thread Daniel Walsh
On 7/28/20 11:32, Stefan Hajnoczi wrote:
> On Tue, Jul 28, 2020 at 12:00:20PM +0200, Roman Mohr wrote:
>> On Tue, Jul 28, 2020 at 3:07 AM misono.tomoh...@fujitsu.com <
>> misono.tomoh...@fujitsu.com> wrote:
>>
 Subject: [PATCH v2 3/3] virtiofsd: probe unshare(CLONE_FS) and print an
>>> error
 An assertion failure is raised during request processing if
 unshare(CLONE_FS) fails. Implement a probe at startup so the problem can
 be detected right away.

 Unfortunately Docker/Moby does not include unshare in the seccomp.json
 list unless CAP_SYS_ADMIN is given. Other seccomp.json lists always
 include unshare (e.g. podman is unaffected):

>>> https://raw.githubusercontent.com/seccomp/containers-golang/master/seccomp.json
 Use "docker run --security-opt seccomp=path/to/seccomp.json ..." if the
 default seccomp.json is missing unshare.
>>> Hi, sorry for a bit late.
>>>
>>> unshare() was added to fix xattr problem:
>>>
>>> https://github.com/qemu/qemu/commit/bdfd66788349acc43cd3f1298718ad491663cfcc#
>>> In theory we don't need to call unshare if xattr is disabled, but it is
>>> hard to get to know
>>> if xattr is enabled or disabled in fv_queue_worker(), right?
>>>
>>>
>> In kubevirt we want to run virtiofsd in containers. We would already not
>> have xattr support for e.g. overlayfs in the VM after this patch series (an
>> acceptable con at least for us right now).
>> If we can get rid of the unshare (and potentially of needing root) that
>> would be great. We always assume that everything which we run in containers
>> should work for cri-o and docker.
> Root is required to access files with any uid/gid.
>
> Dave Gilbert is working on xattr support without CAP_SYS_ADMIN. He may
> be able to find a way to drop unshare (at least in containers).
>
>> "Just" pointing docker to a different seccomp.json file is something which
>> k8s users/admin in many cases can't do.
> There is a Moby PR to change the default seccomp.json file here but it's
> unclear if it will be merged:
> https://github.com/moby/moby/pull/41244
>
> Stefan

Why not try Podman?






Re: [PATCH v2 3/3] virtiofsd: probe unshare(CLONE_FS) and print an error

2020-07-28 Thread Daniel Walsh
On 7/28/20 09:12, Vivek Goyal wrote:
> On Tue, Jul 28, 2020 at 12:00:20PM +0200, Roman Mohr wrote:
>> On Tue, Jul 28, 2020 at 3:07 AM misono.tomoh...@fujitsu.com <
>> misono.tomoh...@fujitsu.com> wrote:
>>
 Subject: [PATCH v2 3/3] virtiofsd: probe unshare(CLONE_FS) and print an
>>> error
 An assertion failure is raised during request processing if
 unshare(CLONE_FS) fails. Implement a probe at startup so the problem can
 be detected right away.

 Unfortunately Docker/Moby does not include unshare in the seccomp.json
 list unless CAP_SYS_ADMIN is given. Other seccomp.json lists always
 include unshare (e.g. podman is unaffected):

>>> https://raw.githubusercontent.com/seccomp/containers-golang/master/seccomp.json
 Use "docker run --security-opt seccomp=path/to/seccomp.json ..." if the
 default seccomp.json is missing unshare.
>>> Hi, sorry for being a bit late.
>>>
>>> unshare() was added to fix an xattr problem:
>>>
>>> https://github.com/qemu/qemu/commit/bdfd66788349acc43cd3f1298718ad491663cfcc#
>>> In theory we don't need to call unshare if xattr is disabled, but it is
>>> hard to know whether xattr is enabled or disabled in fv_queue_worker(),
>>> right?
>>>
>>>
>> In kubevirt we want to run virtiofsd in containers. We would already not
>> have xattr support for e.g. overlayfs in the VM after this patch series (an
>> acceptable con at least for us right now).
>> If we can get rid of the unshare (and potentially of needing root) that
>> would be great. We always assume that everything which we run in containers
>> should work for cri-o and docker.
> But cri-o and docker containers run as root, don't they? (or at least have
> the capability to run as root). Having said that, it would be nice to be able
> to run virtiofsd without root.
>
> There are few hurdles though.
>
> - For file creation, we switch uid/gid (seteuid/setegid) and that seems
>   to require root. If we were to run unprivileged, probably all files
>   on host would have to be owned by the unprivileged user and the guest-visible
>   uid/gid would have to be stored in xattrs. I think virtfs supports
>   something similar.
>
> I am sure there are other restrictions but this probably is the biggest
> one to overcome.
>
You should be able to run it within a user namespace with namespaced
capabilities.
>> "Just" pointing docker to a different seccomp.json file is something which
>> k8s users/admin in many cases can't do.
> Or maybe the issue is that the standard seccomp.json does not allow unshare()
> and hence you are forced to use a non-standard seccomp.json.
>
> Vivek
>
>> Best Regards,
>> Roman
>>
>>
>>> So, it looks good to me.
>>> Reviewed-by: Misono Tomohiro 
>>>
>>> Regards,
>>> Misono
>>>
 Cc: Misono Tomohiro 
 Signed-off-by: Stefan Hajnoczi 
 ---
  tools/virtiofsd/fuse_virtio.c | 16 
  1 file changed, 16 insertions(+)

 diff --git a/tools/virtiofsd/fuse_virtio.c
>>> b/tools/virtiofsd/fuse_virtio.c
 index 3b6d16a041..9e5537506c 100644
 --- a/tools/virtiofsd/fuse_virtio.c
 +++ b/tools/virtiofsd/fuse_virtio.c
 @@ -949,6 +949,22 @@ int virtio_session_mount(struct fuse_session *se)
  {
  int ret;

 +/*
 + * Test that unshare(CLONE_FS) works. fv_queue_worker() will need
>>> it. It's
 + * an unprivileged system call but some Docker/Moby versions are
>>> known to
 + * reject it via seccomp when CAP_SYS_ADMIN is not given.
 + *
 + * Note that the program is single-threaded here so this syscall
>>> has no
 + * visible effect and is safe to make.
 + */
 +ret = unshare(CLONE_FS);
 +if (ret == -1 && errno == EPERM) {
 +fuse_log(FUSE_LOG_ERR, "unshare(CLONE_FS) failed with EPERM. If
>>> "
 +"running in a container please check that the container
>>> "
 +"runtime seccomp policy allows unshare.\n");
 +return -1;
 +}
 +
  ret = fv_create_listen_socket(se);
  if (ret < 0) {
  return ret;
 --
 2.26.2
>>>
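As a concrete illustration of the workaround discussed above, a profile based on Docker's default can be extended with a rule that allows unshare unconditionally. The fragment below is illustrative only (it is not part of the patch): the field names follow the schema used by the containers-golang seccomp.json referenced earlier, and the object belongs inside the profile's "syscalls" array.

```json
{
  "comment": "allow virtiofsd's unshare(CLONE_FS) without CAP_SYS_ADMIN",
  "names": ["unshare"],
  "action": "SCMP_ACT_ALLOW",
  "args": [],
  "includes": {},
  "excludes": {}
}
```

The modified profile can then be passed with "docker run --security-opt seccomp=path/to/seccomp.json ..." as the patch description suggests.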




[Bug 1889288] Re: aarch64 BICS instruction doesn't set flags

2020-07-28 Thread Robert
Oh yes I see. Sorry for the false report.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1889288

Title:
  aarch64 BICS instruction doesn't set flags

Status in QEMU:
  Invalid

Bug description:
  When reading the source for translate-a64.c here:

  
https://github.com/qemu/qemu/blob/a466dd084f51cdc9da2e99361f674f98d7218559/target/arm/translate-a64.c#L4783

  I noticed that it does not appear to call gen_logic_CC for the BICS
  instruction so is not setting the flags as required. I haven't tried
  to produce a test case for it but it seems like it might be a bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1889288/+subscriptions



Re: [PULL 0/9] nbd patches for -rc2, 2020-07-28

2020-07-28 Thread Peter Maydell
On Tue, 28 Jul 2020 at 16:06, Eric Blake  wrote:
>
> The following changes since commit 1b242c3b1ec7c6011901b4f3b4b0876e31746afb:
>
>   Merge remote-tracking branch 'remotes/mcayland/tags/qemu-openbios-20200727' 
> into staging (2020-07-28 13:46:31 +0100)
>
> are available in the Git repository at:
>
>   https://repo.or.cz/qemu/ericb.git tags/pull-nbd-2020-07-28
>
> for you to fetch changes up to 12c75e20a269ac917f4a76936d7142264e522233:
>
>   block/nbd: nbd_co_reconnect_loop(): don't sleep if drained (2020-07-28 
> 09:54:43 -0500)
>
> Sorry this is down to the wire, Nir's patches were a pretty recent
> discovery, but still counts as a bug fix worthy of having in -rc2.
>
> 
> nbd patches for 2020-07-28
>
> - fix NBD handling of trim/zero requests larger than 2G
> - allow no-op resizes on NBD (in turn fixing qemu-img convert -c into NBD)
> - several deadlock fixes when using NBD reconnect


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/5.1
for any user-visible changes.

-- PMM



Re: [PULL 2/2] s390x/s390-virtio-ccw: fix loadparm property getter

2020-07-28 Thread Halil Pasic
On Tue, 28 Jul 2020 17:14:38 +0200
Cornelia Huck  wrote:

> On Tue, 28 Jul 2020 14:52:36 +0100
> Peter Maydell  wrote:
> 
> > On Mon, 27 Jul 2020 at 15:05, Cornelia Huck  wrote:
> > >
> > > From: Halil Pasic 
> > >
> > > The function machine_get_loadparm() is supposed to produce a C-string,
> > > that is a NUL-terminated one, but it does not. ElectricFence can detect
> > > this problem if the loadparm machine property is used.
> > >
> > > Let us make the returned string a NUL-terminated one.
> > >
> > > Fixes: 7104bae9de ("hw/s390x: provide loadparm property for the machine")
> > > Signed-off-by: Halil Pasic 
> > > Reviewed-by: Thomas Huth 
> > > Message-Id: <20200723162717.88485-1-pa...@linux.ibm.com>
> > > Signed-off-by: Cornelia Huck 
> > > ---
> > >  hw/s390x/s390-virtio-ccw.c | 6 +-
> > >  1 file changed, 5 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
> > > index 8cc2f25d8a6a..403d30e13bca 100644
> > > --- a/hw/s390x/s390-virtio-ccw.c
> > > +++ b/hw/s390x/s390-virtio-ccw.c
> > > @@ -701,8 +701,12 @@ bool hpage_1m_allowed(void)
> > >  static char *machine_get_loadparm(Object *obj, Error **errp)
> > >  {
> > >  S390CcwMachineState *ms = S390_CCW_MACHINE(obj);
> > > +char *loadparm_str;
> > >
> > > -return g_memdup(ms->loadparm, sizeof(ms->loadparm));
> > > +/* make a NUL-terminated string */
> > > +loadparm_str = g_memdup(ms->loadparm, sizeof(ms->loadparm) + 1);
> > > +loadparm_str[sizeof(ms->loadparm)] = 0;
> > > +return loadparm_str;  
> > 
> > Hi. Coverity points out (CID 1431058) that this code now
> > reads off the end of the ms->loadparm buffer, because
> > g_memdup() is going to read and copy 9 bytes (size + 1)
> > and the array itself is only 8 bytes.
> > 
> > I don't think you can use g_memdup() here -- you need to
> > allocate the memory with g_malloc() and then fill it with
> > memcpy(), something like:
> > 
> > loadparm_str = g_malloc(sizeof(ms->loadparm) + 1);
> > memcpy(loadparm_str, ms->loadparm, sizeof(ms->loadparm));
> > loadparm_str[sizeof(ms->loadparm)] = 0;
> 
> Sigh.
> 
> Halil, do you have time to cook up a patch?
> 
> 

Will do!

Halil




[Bug 1880424] Re: I/O write make imx_epit_reset() crash

2020-07-28 Thread Peter Maydell
Patch on list:
https://patchew.org/QEMU/20200727154550.3409-1-peter.mayd...@linaro.org/

** Changed in: qemu
   Status: New => In Progress

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1880424

Title:
  I/O write make imx_epit_reset() crash

Status in QEMU:
  In Progress

Bug description:
  libFuzzer found:

  qemu-fuzz-arm: hw/core/ptimer.c:377: void 
ptimer_transaction_begin(ptimer_state *): Assertion `!s->in_transaction' failed.
  ==6041== ERROR: libFuzzer: deadly signal
  #8 0x7fcaba320565 in __GI___assert_fail (/lib64/libc.so.6+0x30565)
  #9 0x563b46f91637 in ptimer_transaction_begin (qemu-fuzz-arm+0x1af1637)
  #10 0x563b476cc4c6 in imx_epit_reset (qemu-fuzz-arm+0x222c4c6)
  #11 0x563b476cd004 in imx_epit_write (qemu-fuzz-arm+0x222d004)
  #12 0x563b46582377 in memory_region_write_accessor 
(qemu-fuzz-arm+0x10e2377)
  #13 0x563b46581ee3 in access_with_adjusted_size (qemu-fuzz-arm+0x10e1ee3)
  #14 0x563b46580a83 in memory_region_dispatch_write 
(qemu-fuzz-arm+0x10e0a83)
  #15 0x563b463c5022 in flatview_write_continue (qemu-fuzz-arm+0xf25022)
  #16 0x563b463b4ea2 in flatview_write (qemu-fuzz-arm+0xf14ea2)
  #17 0x563b463b49d4 in address_space_write (qemu-fuzz-arm+0xf149d4)

  Reproducer:

  qemu-system-arm -M kzm -display none -S -qtest stdio << 'EOF'
  writel 0x53f94000 0x11
  EOF

  qemu-system-arm: hw/core/ptimer.c:377: ptimer_transaction_begin: Assertion 
`!s->in_transaction' failed.
  Aborted (core dumped)

  (gdb) bt
  #1  0x7f4aa4daa895 in abort () at /lib64/libc.so.6
  #2  0x7f4aa4daa769 in _nl_load_domain.cold () at /lib64/libc.so.6
  #3  0x7f4aa4db8566 in annobin_assert.c_end () at /lib64/libc.so.6
  #4  0x55ee85400164 in ptimer_transaction_begin (s=0x55ee873bc4c0) at 
hw/core/ptimer.c:377
  #5  0x55ee855c7936 in imx_epit_reset (dev=0x55ee871725c0) at 
hw/timer/imx_epit.c:111
  #6  0x55ee855c7d1b in imx_epit_write (opaque=0x55ee871725c0, offset=0, 
value=1114112, size=4) at hw/timer/imx_epit.c:209
  #7  0x55ee8513db85 in memory_region_write_accessor (mr=0x55ee871728f0, 
addr=0, value=0x7fff3012d6f8, size=4, shift=0, mask=4294967295, attrs=...) at 
memory.c:483
  #8  0x55ee8513dd96 in access_with_adjusted_size (addr=0, 
value=0x7fff3012d6f8, size=4, access_size_min=1, access_size_max=4, access_fn=
  0x55ee8513daa2 , mr=0x55ee871728f0, 
attrs=...) at memory.c:545
  #9  0x55ee85140cbd in memory_region_dispatch_write (mr=0x55ee871728f0, 
addr=0, data=1114112, op=MO_32, attrs=...) at memory.c:1477
  #10 0x55ee850deba5 in flatview_write_continue (fv=0x55ee87181bd0, 
addr=1408843776, attrs=..., ptr=0x7fff3012d900, len=4, addr1=0, l=4, 
mr=0x55ee871728f0) at exec.c:3147
  #11 0x55ee850decf3 in flatview_write (fv=0x55ee87181bd0, addr=1408843776, 
attrs=..., buf=0x7fff3012d900, len=4) at exec.c:3190
  #12 0x55ee850df05d in address_space_write (as=0x55ee8730a560, 
addr=1408843776, attrs=..., buf=0x7fff3012d900, len=4) at exec.c:3289

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1880424/+subscriptions



[PATCH] target/arm: Fix AddPAC error indication

2020-07-28 Thread Richard Henderson
The definition of top_bit used in this function is one higher
than that used in the Arm ARM pseudo-code, which meant the error
indication written at top_bit - 1 landed in the wrong bit and so
wasn't visible to Auth.

Fixing the definition of top_bit requires more changes, because
its most common use is for the count of bits in top_bit:bot_bit,
which would then need to be computed as top_bit - bot_bit + 1.

For now, prefer the minimal fix to the error indication alone.

Fixes: 63ff0ca94cb
Reported-by: Derrick McKee 
Signed-off-by: Richard Henderson 
---
 target/arm/pauth_helper.c |  2 +-
 tests/tcg/aarch64/pauth-5.c   | 33 +++
 tests/tcg/aarch64/Makefile.target |  2 +-
 3 files changed, 35 insertions(+), 2 deletions(-)
 create mode 100644 tests/tcg/aarch64/pauth-5.c

diff --git a/target/arm/pauth_helper.c b/target/arm/pauth_helper.c
index b909630317..d00cd97332 100644
--- a/target/arm/pauth_helper.c
+++ b/target/arm/pauth_helper.c
@@ -300,7 +300,7 @@ static uint64_t pauth_addpac(CPUARMState *env, uint64_t 
ptr, uint64_t modifier,
  */
 test = sextract64(ptr, bot_bit, top_bit - bot_bit);
 if (test != 0 && test != -1) {
-pac ^= MAKE_64BIT_MASK(top_bit - 1, 1);
+pac ^= MAKE_64BIT_MASK(top_bit - 2, 1);
 }
 
 /*
diff --git a/tests/tcg/aarch64/pauth-5.c b/tests/tcg/aarch64/pauth-5.c
new file mode 100644
index 00..67c257918b
--- /dev/null
+++ b/tests/tcg/aarch64/pauth-5.c
@@ -0,0 +1,33 @@
+#include <assert.h>
+
+static int x;
+
+int main()
+{
+int *p0 = &x, *p1, *p2, *p3;
+unsigned long salt = 0;
+
+/*
+ * With TBI enabled and a 48-bit VA, there are 7 bits of auth, and so
+ * a 1/128 chance of auth = pac(ptr,key,salt) producing zero.
+ * Find a salt that creates auth != 0.
+ */
+do {
+salt++;
+asm("pacda %0, %1" : "=r"(p1) : "r"(salt), "0"(p0));
+} while (p0 == p1);
+
+/*
+ * This pac must fail, because the input pointer bears an encryption,
+ * and so is not properly extended within bits [55:47].  This will
+ * toggle bit 54 in the output...
+ */
+asm("pacda %0, %1" : "=r"(p2) : "r"(salt), "0"(p1));
+
+/* ... so that the aut must fail, setting bit 53 in the output ... */
+asm("autda %0, %1" : "=r"(p3) : "r"(salt), "0"(p2));
+
+/* ... which means this equality must not hold. */
+assert(p3 != p0);
+return 0;
+}
diff --git a/tests/tcg/aarch64/Makefile.target 
b/tests/tcg/aarch64/Makefile.target
index b617f2ac7e..e7249915e7 100644
--- a/tests/tcg/aarch64/Makefile.target
+++ b/tests/tcg/aarch64/Makefile.target
@@ -19,7 +19,7 @@ run-fcvt: fcvt
 
 # Pauth Tests
 ifneq ($(DOCKER_IMAGE)$(CROSS_CC_HAS_ARMV8_3),)
-AARCH64_TESTS += pauth-1 pauth-2 pauth-4
+AARCH64_TESTS += pauth-1 pauth-2 pauth-4 pauth-5
 pauth-%: CFLAGS += -march=armv8.3-a
 run-pauth-%: QEMU_OPTS += -cpu max
 run-plugin-pauth-%: QEMU_OPTS += -cpu max
-- 
2.25.1




Re: [PULL 0/2] Update slirp (+ debug test-serial)

2020-07-28 Thread Peter Maydell
On Tue, 28 Jul 2020 at 15:31, Marc-André Lureau
 wrote:
>
> The following changes since commit 264991512193ee50e27d43e66f832d5041cf3b28:
>
>   Merge remote-tracking branch 'remotes/ericb/tags/pull-bitmaps-2020-07-27' 
> into staging (2020-07-28 14:38:17 +0100)
>
> are available in the Git repository at:
>
>   https://github.com/elmarco/qemu.git tags/slirp-pull-request
>
> for you to fetch changes up to 9c15f57891af7c2cb3baf2d66a1b1f3f87a665ba:
>
>   slirp: update to latest stable-4.2 branch (2020-07-28 18:27:59 +0400)
>
> 
> slirp: update to latest stable-4.2 branch
>
> 


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/5.1
for any user-visible changes.

-- PMM



[PULL 4/4] hw/display/artist.c: fix out of bounds check

2020-07-28 Thread Helge Deller
From: Sven Schnelle 

Signed-off-by: Sven Schnelle 
Signed-off-by: Helge Deller 
---
 hw/display/artist.c | 18 ++
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/hw/display/artist.c b/hw/display/artist.c
index 6261bfe65b..46043ec895 100644
--- a/hw/display/artist.c
+++ b/hw/display/artist.c
@@ -340,14 +340,13 @@ static void vram_bit_write(ARTISTState *s, int posx, int 
posy, bool incr_x,
 {
 struct vram_buffer *buf;
 uint32_t vram_bitmask = s->vram_bitmask;
-int mask, i, pix_count, pix_length, offset, height, width;
+int mask, i, pix_count, pix_length, offset, width;
 uint8_t *data8, *p;

 pix_count = vram_write_pix_per_transfer(s);
 pix_length = vram_pixel_length(s);

 buf = vram_write_buffer(s);
-height = buf->height;
 width = buf->width;

 if (s->cmap_bm_access) {
@@ -367,20 +366,13 @@ static void vram_bit_write(ARTISTState *s, int posx, int 
posy, bool incr_x,
 pix_count = size * 8;
 }

-if (posy * width + posx + pix_count > buf->size) {
-qemu_log("write outside bounds: wants %dx%d, max size %dx%d\n",
- posx, posy, width, height);
-return;
-}
-
-
 switch (pix_length) {
 case 0:
 if (s->image_bitmap_op & 0x2000) {
 data &= vram_bitmask;
 }

-for (i = 0; i < pix_count; i++) {
+for (i = 0; i < pix_count && offset + i < buf->size; i++) {
 artist_rop8(s, p + offset + pix_count - 1 - i,
 (data & 1) ? (s->plane_mask >> 24) : 0);
 data >>= 1;
@@ -398,7 +390,9 @@ static void vram_bit_write(ARTISTState *s, int posx, int 
posy, bool incr_x,
 for (i = 3; i >= 0; i--) {
 if (!(s->image_bitmap_op & 0x2000) ||
 s->vram_bitmask & (1 << (28 + i))) {
-artist_rop8(s, p + offset + 3 - i, data8[ROP8OFF(i)]);
+if (offset + 3 - i < buf->size) {
+artist_rop8(s, p + offset + 3 - i, data8[ROP8OFF(i)]);
+}
 }
 }
 memory_region_set_dirty(&buf->mr, offset, 3);
@@ -420,7 +414,7 @@ static void vram_bit_write(ARTISTState *s, int posx, int 
posy, bool incr_x,
 break;
 }

-for (i = 0; i < pix_count; i++) {
+for (i = 0; i < pix_count && offset + i < buf->size; i++) {
 mask = 1 << (pix_count - 1 - i);

 if (!(s->image_bitmap_op & 0x2000) ||
--
2.21.3




[PULL 3/4] hw/hppa: Implement proper SeaBIOS version check

2020-07-28 Thread Helge Deller
It's important that the SeaBIOS hppa firmware is at least at a certain
minimal version to ensure proper interaction between qemu and the firmware.

Implement a proper firmware version check by telling SeaBIOS via the
fw_cfg interface which minimal SeaBIOS version is required by this
running qemu instance. If the firmware detects that it's too old, it
will stop.

Signed-off-by: Helge Deller 
---
 hw/hppa/machine.c | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/hw/hppa/machine.c b/hw/hppa/machine.c
index 49155537cd..90aeefe2a4 100644
--- a/hw/hppa/machine.c
+++ b/hw/hppa/machine.c
@@ -25,6 +25,8 @@

 #define MAX_IDE_BUS 2

+#define MIN_SEABIOS_HPPA_VERSION 1 /* require at least this fw version */
+
 static ISABus *hppa_isa_bus(void)
 {
 ISABus *isa_bus;
@@ -56,6 +58,23 @@ static uint64_t cpu_hppa_to_phys(void *opaque, uint64_t addr)
 static HPPACPU *cpu[HPPA_MAX_CPUS];
 static uint64_t firmware_entry;

+static FWCfgState *create_fw_cfg(MachineState *ms)
+{
+FWCfgState *fw_cfg;
+uint64_t val;
+
+fw_cfg = fw_cfg_init_mem(QEMU_FW_CFG_IO_BASE, QEMU_FW_CFG_IO_BASE + 4);
+fw_cfg_add_i16(fw_cfg, FW_CFG_NB_CPUS, ms->smp.cpus);
+fw_cfg_add_i16(fw_cfg, FW_CFG_MAX_CPUS, HPPA_MAX_CPUS);
+fw_cfg_add_i64(fw_cfg, FW_CFG_RAM_SIZE, ram_size);
+
+val = cpu_to_le64(MIN_SEABIOS_HPPA_VERSION);
+fw_cfg_add_file(fw_cfg, "/etc/firmware-min-version",
+g_memdup(&val, sizeof(val)), sizeof(val));
+
+return fw_cfg;
+}
+
 static void machine_hppa_init(MachineState *machine)
 {
 const char *kernel_filename = machine->kernel_filename;
@@ -118,6 +137,9 @@ static void machine_hppa_init(MachineState *machine)
115200, serial_hd(0), DEVICE_BIG_ENDIAN);
 }

+/* fw_cfg configuration interface */
+create_fw_cfg(machine);
+
 /* SCSI disk setup. */
 dev = DEVICE(pci_create_simple(pci_bus, -1, "lsi53c895a"));
 lsi53c8xx_handle_legacy_cmdline(dev);
--
2.21.3




[PULL 1/4] hw/hppa: Sync hppa_hardware.h file with SeaBIOS sources

2020-07-28 Thread Helge Deller
The hppa_hardware.h file is shared with SeaBIOS. Sync it.

Signed-off-by: Helge Deller 
---
 hw/hppa/hppa_hardware.h | 6 ++
 hw/hppa/lasi.c  | 2 --
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/hw/hppa/hppa_hardware.h b/hw/hppa/hppa_hardware.h
index 4a2fe2df60..cdb7fa6240 100644
--- a/hw/hppa/hppa_hardware.h
+++ b/hw/hppa/hppa_hardware.h
@@ -17,6 +17,7 @@
 #define LASI_UART_HPA   0xffd05000
 #define LASI_SCSI_HPA   0xffd06000
 #define LASI_LAN_HPA0xffd07000
+#define LASI_RTC_HPA0xffd09000
 #define LASI_LPT_HPA0xffd02000
 #define LASI_AUDIO_HPA  0xffd04000
 #define LASI_PS2KBD_HPA 0xffd08000
@@ -37,10 +38,15 @@
 #define PORT_PCI_CMD(PCI_HPA + DINO_PCI_ADDR)
 #define PORT_PCI_DATA   (PCI_HPA + DINO_CONFIG_DATA)

+/* QEMU fw_cfg interface port */
+#define QEMU_FW_CFG_IO_BASE (MEMORY_HPA + 0x80)
+
 #define PORT_SERIAL1(DINO_UART_HPA + 0x800)
 #define PORT_SERIAL2(LASI_UART_HPA + 0x800)

 #define HPPA_MAX_CPUS   8   /* max. number of SMP CPUs */
 #define CPU_CLOCK_MHZ   250 /* emulate a 250 MHz CPU */

+#define CPU_HPA_CR_REG  7   /* store CPU HPA in cr7 (SeaBIOS internal) */
+
 #endif
diff --git a/hw/hppa/lasi.c b/hw/hppa/lasi.c
index 19974034f3..ffcbb988b8 100644
--- a/hw/hppa/lasi.c
+++ b/hw/hppa/lasi.c
@@ -54,8 +54,6 @@
 #define LASI_CHIP(obj) \
 OBJECT_CHECK(LasiState, (obj), TYPE_LASI_CHIP)

-#define LASI_RTC_HPA(LASI_HPA + 0x9000)
-
 typedef struct LasiState {
 PCIHostState parent_obj;

--
2.21.3




[PULL 0/4] target-hppa fixes

2020-07-28 Thread Helge Deller
Please pull to fix these two bugs in target-hppa:

* Fix the SeaBIOS-hppa firmware build with gcc-10 on Debian

* Fix the following runtime warning with artist framebuffer:
  "write outside bounds: wants 1256x1023, max size 1280x1024"

In addition, the SeaBIOS-hppa firmware now includes a version check to prevent
it from starting when it's incompatible with the emulated qemu hardware.

Helge



The following changes since commit 3461487523b897d324e8d91f3fd20ed55f849544:

  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20200727' 
into staging (2020-07-28 18:43:48 +0100)

are available in the Git repository at:

  https://github.com/hdeller/qemu-hppa.git target-hppa

for you to fetch changes up to 9aa10e5543566facf328e76d3b5a4aa9d2b79756:

  hw/display/artist.c: fix out of bounds check (2020-07-28 21:17:44 +0200)



Helge Deller (3):
  hw/hppa: Sync hppa_hardware.h file with SeaBIOS sources
  seabios-hppa: Update to SeaBIOS hppa version 1
  hw/hppa: Implement proper SeaBIOS version check

Sven Schnelle (1):
  hw/display/artist.c: fix out of bounds check

 hw/display/artist.c   |  18 ++
 hw/hppa/hppa_hardware.h   |   6 ++
 hw/hppa/lasi.c|   2 --
 hw/hppa/machine.c |  22 ++
 pc-bios/hppa-firmware.img | Bin 766136 -> 783144 bytes
 roms/seabios-hppa |   2 +-
 6 files changed, 35 insertions(+), 15 deletions(-)
--
2.21.3





qemu repo lockdown message for a WHPX PR

2020-07-28 Thread Sunil Muthuswamy
Hey Paolo,

Following your suggestion of creating PRs for WHPX changes, I tried creating a 
PR https://github.com/qemu/qemu/pull/95

But I am getting a repo-lockdown message. What do I need to do differently?

Thanks,
Sunil



Re: [PULL 0/7] target-arm queue

2020-07-28 Thread Peter Maydell
On Mon, 27 Jul 2020 at 16:19, Peter Maydell  wrote:
>
> Just some bugfixes this time around.
>
> -- PMM
>
> The following changes since commit 4215d3413272ad6d1c6c9d0234450b602e46a74c:
>
>   Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-5.1-20200727' 
> into staging (2020-07-27 09:33:04 +0100)
>
> are available in the Git repository at:
>
>   https://git.linaro.org/people/pmaydell/qemu-arm.git 
> tags/pull-target-arm-20200727
>
> for you to fetch changes up to d4f6dda182e19afa75706936805e18397cb95f07:
>
>   target/arm: Improve IMPDEF algorithm for IRG (2020-07-27 16:12:11 +0100)
>
> 
> target-arm queue:
>  * ACPI: Assert that we don't run out of the preallocated memory
>  * hw/misc/aspeed_sdmc: Fix incorrect memory size
>  * target/arm: Always pass cacheattr in S1_ptw_translate
>  * docs/system/arm/virt: Document 'mte' machine option
>  * hw/arm/boot: Fix PAUTH, MTE for EL3 direct kernel boot
>  * target/arm: Improve IMPDEF algorithm for IRG
>

Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/5.1
for any user-visible changes.

-- PMM



[PATCH for-5.2 5/6] pc-bios/s390-ccw: Scan through all boot devices if none has been specified

2020-07-28 Thread Thomas Huth
If no boot device has been specified (via "bootindex=..."), the s390-ccw
bios scans through all devices to find a bootable device. But so far, it
stops at the very first block device (including virtio-scsi controllers
without attached devices) that it finds, no matter whether it is bootable
or not. That leads to a weird situation where it is e.g. possible
to boot via:

 qemu-system-s390x -hda /path/to/disk.qcow2

but not if there is e.g. a virtio-scsi controller specified before:

 qemu-system-s390x -device virtio-scsi -hda /path/to/disk.qcow2

While using "bootindex=..." is clearly the preferred way of booting
on s390x, we still can make the life for the users at least a little
bit easier if we look at all available devices to find a bootable one.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1846975
Signed-off-by: Thomas Huth 
---
 pc-bios/s390-ccw/main.c | 46 +++--
 1 file changed, 31 insertions(+), 15 deletions(-)

diff --git a/pc-bios/s390-ccw/main.c b/pc-bios/s390-ccw/main.c
index 3cd01cd80f..0af872f9e3 100644
--- a/pc-bios/s390-ccw/main.c
+++ b/pc-bios/s390-ccw/main.c
@@ -182,20 +182,8 @@ static void boot_setup(void)
 static void find_boot_device(void)
 {
 VDev *vdev = virtio_get_device();
-int ssid;
 bool found;
 
-if (!have_iplb) {
-for (ssid = 0; ssid < 0x3; ssid++) {
-blk_schid.ssid = ssid;
-found = find_subch(-1);
-if (found) {
-return;
-}
-}
-panic("Could not find a suitable boot device (none specified)\n");
-}
-
 switch (iplb.pbt) {
 case S390_IPL_TYPE_CCW:
 debug_print_int("device no. ", iplb.ccw.devno);
@@ -260,14 +248,42 @@ static void ipl_boot_device(void)
 }
 }
 
+/*
+ * No boot device has been specified, so we have to scan through the
+ * channels to find one.
+ */
+static void probe_boot_device(void)
+{
+int ssid, sch_no, ret;
+
+for (ssid = 0; ssid < 0x3; ssid++) {
+blk_schid.ssid = ssid;
+for (sch_no = 0; sch_no < 0x1; sch_no++) {
+ret = check_sch_no(-1, sch_no);
+if (ret < 0) {
+break;
+}
+if (ret == true) {
+ipl_boot_device();  /* Only returns if unsuccessful */
+}
+}
+}
+
+sclp_print("Could not find a suitable boot device (none specified)\n");
+}
+
 int main(void)
 {
 sclp_setup();
 css_setup();
 boot_setup();
-find_boot_device();
-enable_subchannel(blk_schid);
-ipl_boot_device();
+if (have_iplb) {
+find_boot_device();
+enable_subchannel(blk_schid);
+ipl_boot_device();
+} else {
+probe_boot_device();
+}
 
 panic("Failed to load OS from hard disk\n");
 return 0; /* make compiler happy */
-- 
2.18.1




[PATCH for-5.2 6/6] pc-bios/s390-ccw: Allow booting in case the first virtio-blk disk is bad

2020-07-28 Thread Thomas Huth
If you try to boot with two virtio-blk disks (without bootindex), and
only the second one is bootable, the s390-ccw bios currently stops at
the first disk and does not continue booting from the second one. This
is annoying - and all other major QEMU firmwares manage to boot from
the second disk in this case, so we should do the same in the s390-ccw
bios, too.

Signed-off-by: Thomas Huth 
---
 pc-bios/s390-ccw/bootmap.c | 34 +++---
 pc-bios/s390-ccw/main.c|  2 +-
 2 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/pc-bios/s390-ccw/bootmap.c b/pc-bios/s390-ccw/bootmap.c
index 97205674e5..0ef6b851f3 100644
--- a/pc-bios/s390-ccw/bootmap.c
+++ b/pc-bios/s390-ccw/bootmap.c
@@ -289,11 +289,18 @@ static void ipl_eckd_cdl(void)
 read_block(1, ipl2, "Cannot read IPL2 record at block 1");
 
mbr = &ipl2->mbr;
-IPL_assert(magic_match(mbr, ZIPL_MAGIC), "No zIPL section in IPL2 
record.");
-IPL_assert(block_size_ok(mbr->blockptr.xeckd.bptr.size),
-   "Bad block size in zIPL section of IPL2 record.");
-IPL_assert(mbr->dev_type == DEV_TYPE_ECKD,
-   "Non-ECKD device type in zIPL section of IPL2 record.");
+if (!magic_match(mbr, ZIPL_MAGIC)) {
+sclp_print("No zIPL section in IPL2 record.\n");
+return;
+}
+if (!block_size_ok(mbr->blockptr.xeckd.bptr.size)) {
+sclp_print("Bad block size in zIPL section of IPL2 record.\n");
+return;
+}
+if (mbr->dev_type != DEV_TYPE_ECKD) {
+sclp_print("Non-ECKD device type in zIPL section of IPL2 record.\n");
+return;
+}
 
 /* save pointer to Boot Map Table */
bmt_block_nr = eckd_block_num(&mbr->blockptr.xeckd.bptr.chs);
@@ -303,10 +310,14 @@ static void ipl_eckd_cdl(void)
 
 memset(sec, FREE_SPACE_FILLER, sizeof(sec));
 read_block(2, vlbl, "Cannot read Volume Label at block 2");
-IPL_assert(magic_match(vlbl->key, VOL1_MAGIC),
-   "Invalid magic of volume label block");
-IPL_assert(magic_match(vlbl->f.key, VOL1_MAGIC),
-   "Invalid magic of volser block");
+if (!magic_match(vlbl->key, VOL1_MAGIC)) {
+sclp_print("Invalid magic of volume label block.\n");
+return;
+}
+if (!magic_match(vlbl->f.key, VOL1_MAGIC)) {
+sclp_print("Invalid magic of volser block.\n");
+return;
+}
 print_volser(vlbl->f.volser);
 
 run_eckd_boot_script(bmt_block_nr, s1b_block_nr);
@@ -398,7 +409,8 @@ static void ipl_eckd(void)
 read_block(0, mbr, "Cannot read block 0 on DASD");
 
 if (magic_match(mbr->magic, IPL1_MAGIC)) {
-ipl_eckd_cdl(); /* no return */
+ipl_eckd_cdl(); /* only returns in case of error */
+return;
 }
 
 /* LDL/CMS? */
@@ -825,5 +837,5 @@ void zipl_load(void)
 panic("\n! Unknown IPL device type !\n");
 }
 
-panic("\n* this can never happen *\n");
+sclp_print("zIPL load failed.\n");
 }
diff --git a/pc-bios/s390-ccw/main.c b/pc-bios/s390-ccw/main.c
index 0af872f9e3..4026e0ef20 100644
--- a/pc-bios/s390-ccw/main.c
+++ b/pc-bios/s390-ccw/main.c
@@ -239,7 +239,7 @@ static void ipl_boot_device(void)
 break;
 case CU_TYPE_VIRTIO:
 if (virtio_setup()) {
-zipl_load(); /* no return */
+zipl_load(); /* Only returns in case of errors */
 }
 break;
 default:
-- 
2.18.1




[PATCH for-5.2 1/6] pc-bios/s390-ccw/Makefile: Compile with -std=gnu99, -fwrapv and -fno-common

2020-07-28 Thread Thomas Huth
The main QEMU code is compiled with -std=gnu99, -fwrapv and -fno-common.
We should use the same flags for the s390-ccw bios, too, to avoid getting
different behavior from compiler versions that changed their default
settings over time (this happened with -std=... and -fno-common in the
past already).

While we're at it, also group the other flags here in a slightly nicer
fashion: move the two "-m" flags out of the "-f" area and specify them on
a separate line.

Signed-off-by: Thomas Huth 
---
 pc-bios/s390-ccw/Makefile | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/pc-bios/s390-ccw/Makefile b/pc-bios/s390-ccw/Makefile
index 50bc880272..9abb0ea4c0 100644
--- a/pc-bios/s390-ccw/Makefile
+++ b/pc-bios/s390-ccw/Makefile
@@ -13,10 +13,11 @@ OBJECTS = start.o main.o bootmap.o jump2ipl.o sclp.o menu.o 
\
  virtio.o virtio-scsi.o virtio-blkdev.o libc.o cio.o dasd-ipl.o
 
 QEMU_CFLAGS := $(filter -W%, $(QEMU_CFLAGS))
-QEMU_CFLAGS += -ffreestanding -fno-delete-null-pointer-checks -msoft-float
-QEMU_CFLAGS += -march=z900 -fPIE -fno-strict-aliasing
-QEMU_CFLAGS += -fno-asynchronous-unwind-tables
+QEMU_CFLAGS += -ffreestanding -fno-delete-null-pointer-checks -fno-common -fPIE
+QEMU_CFLAGS += -fwrapv -fno-strict-aliasing -fno-asynchronous-unwind-tables
 QEMU_CFLAGS += $(call cc-option, $(QEMU_CFLAGS), -fno-stack-protector)
+QEMU_CFLAGS += -msoft-float -march=z900
+QEMU_CFLAGS += -std=gnu99
 LDFLAGS += -Wl,-pie -nostdlib
 
 build-all: s390-ccw.img s390-netboot.img
-- 
2.18.1




[PATCH for-5.2 4/6] pc-bios/s390-ccw: Do not bail out early if not finding a SCSI disk

2020-07-28 Thread Thomas Huth
In case the user did not specify a boot device, we want to continue
looking for other devices if there are no valid SCSI disks on a virtio-
scsi controller. As a first step, do not panic in this case and let
the control flow carry the error to the upper functions instead.

Signed-off-by: Thomas Huth 
---
 pc-bios/s390-ccw/main.c  | 13 +
 pc-bios/s390-ccw/s390-ccw.h  |  2 +-
 pc-bios/s390-ccw/virtio-blkdev.c |  7 ---
 pc-bios/s390-ccw/virtio-scsi.c   | 25 ++---
 pc-bios/s390-ccw/virtio-scsi.h   |  2 +-
 5 files changed, 33 insertions(+), 16 deletions(-)

diff --git a/pc-bios/s390-ccw/main.c b/pc-bios/s390-ccw/main.c
index 9477313188..3cd01cd80f 100644
--- a/pc-bios/s390-ccw/main.c
+++ b/pc-bios/s390-ccw/main.c
@@ -218,7 +218,7 @@ static void find_boot_device(void)
 IPL_assert(found, "Boot device not found\n");
 }
 
-static void virtio_setup(void)
+static bool virtio_setup(void)
 {
 VDev *vdev = virtio_get_device();
 QemuIplParameters *early_qipl = (QemuIplParameters *)QIPL_ADDRESS;
@@ -233,9 +233,13 @@ static void virtio_setup(void)
 sclp_print("Network boot device detected\n");
 vdev->netboot_start_addr = qipl.netboot_start_addr;
 } else {
-virtio_blk_setup_device(blk_schid);
+if (!virtio_blk_setup_device(blk_schid)) {
+return false;
+}
 IPL_assert(virtio_ipl_disk_is_valid(), "No valid IPL device detected");
 }
+
+return true;
 }
 
 static void ipl_boot_device(void)
@@ -246,8 +250,9 @@ static void ipl_boot_device(void)
 dasd_ipl(blk_schid, cutype); /* no return */
 break;
 case CU_TYPE_VIRTIO:
-virtio_setup();
-zipl_load(); /* no return */
+if (virtio_setup()) {
+zipl_load(); /* no return */
+}
 break;
 default:
 print_int("Attempting to boot from unexpected device type", cutype);
diff --git a/pc-bios/s390-ccw/s390-ccw.h b/pc-bios/s390-ccw/s390-ccw.h
index 36b884cced..a46d15431e 100644
--- a/pc-bios/s390-ccw/s390-ccw.h
+++ b/pc-bios/s390-ccw/s390-ccw.h
@@ -71,7 +71,7 @@ int sclp_read(char *str, size_t count);
 unsigned long virtio_load_direct(ulong rec_list1, ulong rec_list2,
  ulong subchan_id, void *load_addr);
 bool virtio_is_supported(SubChannelId schid);
-void virtio_blk_setup_device(SubChannelId schid);
+bool virtio_blk_setup_device(SubChannelId schid);
 int virtio_read(ulong sector, void *load_addr);
 
 /* bootmap.c */
diff --git a/pc-bios/s390-ccw/virtio-blkdev.c b/pc-bios/s390-ccw/virtio-blkdev.c
index 11c56261ca..0746627b1e 100644
--- a/pc-bios/s390-ccw/virtio-blkdev.c
+++ b/pc-bios/s390-ccw/virtio-blkdev.c
@@ -263,7 +263,7 @@ uint64_t virtio_get_blocks(void)
 return 0;
 }
 
-void virtio_blk_setup_device(SubChannelId schid)
+bool virtio_blk_setup_device(SubChannelId schid)
 {
 VDev *vdev = virtio_get_device();
 
@@ -288,9 +288,10 @@ void virtio_blk_setup_device(SubChannelId schid)
 "Config: CDB size mismatch");
 
 sclp_print("Using virtio-scsi.\n");
-virtio_scsi_setup(vdev);
-break;
+return virtio_scsi_setup(vdev);
 default:
 panic("\n! No IPL device available !\n");
 }
+
+return true;
 }
diff --git a/pc-bios/s390-ccw/virtio-scsi.c b/pc-bios/s390-ccw/virtio-scsi.c
index eddfb8a7ad..4d05b02ed0 100644
--- a/pc-bios/s390-ccw/virtio-scsi.c
+++ b/pc-bios/s390-ccw/virtio-scsi.c
@@ -194,7 +194,12 @@ static bool scsi_read_capacity(VDev *vdev,
 
 /* virtio-scsi routines */
 
-static void virtio_scsi_locate_device(VDev *vdev)
+/*
+ * Tries to locate a SCSI device and adds that information to the
+ * vdev->scsi_device structure.
+ * Returns true if SCSI device could be located, false otherwise
+ */
+static bool virtio_scsi_locate_device(VDev *vdev)
 {
 const uint16_t channel = 0; /* again, it's what QEMU does */
 uint16_t target;
@@ -220,7 +225,7 @@ static void virtio_scsi_locate_device(VDev *vdev)
 IPL_check(sdev->channel == 0, "non-zero channel requested");
IPL_check(sdev->target <= vdev->config.scsi.max_target, "target# high");
 IPL_check(sdev->lun <= vdev->config.scsi.max_lun, "LUN# high");
-return;
+return true;
 }
 
 for (target = 0; target <= vdev->config.scsi.max_target; target++) {
@@ -247,18 +252,20 @@ static void virtio_scsi_locate_device(VDev *vdev)
  */
 sdev->lun = r->lun[0].v16[0]; /* it's returned this way */
 debug_print_int("Have to use LUN", sdev->lun);
-return; /* we have to use this device */
+return true; /* we have to use this device */
 }
 for (i = 0; i < luns; i++) {
 if (r->lun[i].v64) {
 /* Look for non-zero LUN - we have where to choose from */
 sdev->lun = r->lun[i].v16[0];
 debug_print_int("Will use LUN", sdev->lun);
-return; /* we have found a device */
+return true; /* we have found a device */

[PATCH for-5.2 3/6] pc-bios/s390-ccw: Move the inner logic of find_subch() to a separate function

2020-07-28 Thread Thomas Huth
Move the code to a separate function to be able to re-use it from a
different spot later.

Signed-off-by: Thomas Huth 
---
 pc-bios/s390-ccw/main.c | 99 -
 1 file changed, 57 insertions(+), 42 deletions(-)

diff --git a/pc-bios/s390-ccw/main.c b/pc-bios/s390-ccw/main.c
index 9b64eb0c24..9477313188 100644
--- a/pc-bios/s390-ccw/main.c
+++ b/pc-bios/s390-ccw/main.c
@@ -51,6 +51,60 @@ unsigned int get_loadparm_index(void)
 return atoui(loadparm_str);
 }
 
+static int check_sch_no(int dev_no, int sch_no)
+{
+bool is_virtio;
+Schib schib;
+int r;
+
+blk_schid.sch_no = sch_no;
+r = stsch_err(blk_schid, );
+if (r == 3 || r == -EIO) {
+return -EIO;
+}
+if (!schib.pmcw.dnv) {
+return false;
+}
+
+enable_subchannel(blk_schid);
+cutype = cu_type(blk_schid);
+
+/*
+ * Note: we always have to run virtio_is_supported() here to make
+ * sure that the vdev.senseid data gets pre-initialized correctly
+ */
+is_virtio = virtio_is_supported(blk_schid);
+
+/* No specific devno given, just return 1st possibly bootable device */
+if (dev_no < 0) {
+switch (cutype) {
+case CU_TYPE_VIRTIO:
+if (is_virtio) {
+/*
+ * Skip net devices since no IPLB is created and therefore
+ * no network bootloader has been loaded
+ */
+if (virtio_get_device_type() != VIRTIO_ID_NET) {
+return true;
+}
+}
+return false;
+case CU_TYPE_DASD_3990:
+case CU_TYPE_DASD_2107:
+return true;
+default:
+return false;
+}
+}
+
+/* Caller asked for a specific devno */
+if (schib.pmcw.dev == dev_no) {
+return true;
+}
+
+return false;
+}
+
 /*
  * Find the subchannel connected to the given device (dev_no) and fill in the
  * subchannel information block (schib) with the connected subchannel's info.
@@ -62,53 +116,14 @@ unsigned int get_loadparm_index(void)
  */
 static bool find_subch(int dev_no)
 {
-Schib schib;
 int i, r;
-bool is_virtio;
 
 for (i = 0; i < 0x1; i++) {
-blk_schid.sch_no = i;
-r = stsch_err(blk_schid, );
-if ((r == 3) || (r == -EIO)) {
+r = check_sch_no(dev_no, i);
+if (r < 0) {
 break;
 }
-if (!schib.pmcw.dnv) {
-continue;
-}
-
-enable_subchannel(blk_schid);
-cutype = cu_type(blk_schid);
-
-/*
- * Note: we always have to run virtio_is_supported() here to make
- * sure that the vdev.senseid data gets pre-initialized correctly
- */
-is_virtio = virtio_is_supported(blk_schid);
-
-/* No specific devno given, just return 1st possibly bootable device */
-if (dev_no < 0) {
-switch (cutype) {
-case CU_TYPE_VIRTIO:
-if (is_virtio) {
-/*
- * Skip net devices since no IPLB is created and therefore
- * no network bootloader has been loaded
- */
-if (virtio_get_device_type() != VIRTIO_ID_NET) {
-return true;
-}
-}
-continue;
-case CU_TYPE_DASD_3990:
-case CU_TYPE_DASD_2107:
-return true;
-default:
-continue;
-}
-}
-
-/* Caller asked for a specific devno */
-if (schib.pmcw.dev == dev_no) {
+if (r == true) {
 return true;
 }
 }
-- 
2.18.1
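The extracted check_sch_no() mixes an errno-style int with boolean results: a negative value aborts the whole scan, 0 means "skip this subchannel", and a true value means "use it". A self-contained sketch of that scan-callback convention (names and data here are hypothetical, not the bios code):

```c
#include <errno.h>
#include <stdbool.h>

/* Per-item check: negative errno aborts the scan, 0 = keep looking,
 * 1 = found a match -- mirroring check_sch_no()'s contract. */
static int check_one(int want, int have)
{
    if (have < 0) {
        return -EIO;        /* unrecoverable: stop scanning */
    }
    return have == want;    /* 1: match, 0: not this one */
}

/* Loop consuming the tri-state result, like find_subch() does. */
bool scan_devices(int want, const int *devs, int ndevs)
{
    for (int i = 0; i < ndevs; i++) {
        int r = check_one(want, devs[i]);
        if (r < 0) {
            break;          /* hard error: give up entirely */
        }
        if (r) {
            return true;
        }
    }
    return false;
}
```

Keeping the tri-state in one int avoids an extra out-parameter, at the cost of the slightly unusual `if (r == true)` style seen in the patch.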




[PATCH for-5.2 0/6] Continue booting in case the first device is not bootable

2020-07-28 Thread Thomas Huth
If the user did not specify a "bootindex" property, the s390-ccw bios
tries to find a bootable device on its own. Unfortunately, it always
stops at the very first device that it can find, no matter whether it's
bootable or not. That causes some weird behavior, for example while

 qemu-system-s390x -hda bootable.qcow2

boots perfectly fine, the bios refuses to work if you just specify
a virtio-scsi controller in front of it:

 qemu-system-s390x -device virtio-scsi -hda bootable.qcow2

Since this is quite inconvenient and confusing for users, and all
major firmware implementations on other architectures boot correctly
in such cases, let's try to teach the s390-ccw bios to do the same.

For this, we have to get rid of the various panic()s and IPL_assert()
statements at the "low-level" function and let the main code handle
the decision instead whether a boot from a device should fail or not,
so that the main code can continue searching in case it wants to.

 Thomas

Thomas Huth (6):
  pc-bios/s390-ccw/Makefile: Compile with -std=gnu99, -fwrapv and
-fno-common
  pc-bios/s390-ccw: Move ipl-related code from main() into a separate
function
  pc-bios/s390-ccw: Move the inner logic of find_subch() to a separate
function
  pc-bios/s390-ccw: Do not bail out early if not finding a SCSI disk
  pc-bios/s390-ccw: Scan through all boot devices if none has been
specified
  pc-bios/s390-ccw: Allow booting in case the first virtio-blk disk is
bad

 pc-bios/s390-ccw/Makefile|   7 +-
 pc-bios/s390-ccw/bootmap.c   |  34 --
 pc-bios/s390-ccw/main.c  | 172 +++
 pc-bios/s390-ccw/s390-ccw.h  |   2 +-
 pc-bios/s390-ccw/virtio-blkdev.c |   7 +-
 pc-bios/s390-ccw/virtio-scsi.c   |  25 +++--
 pc-bios/s390-ccw/virtio-scsi.h   |   2 +-
 7 files changed, 157 insertions(+), 92 deletions(-)

-- 
2.18.1
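The refactoring strategy the cover letter describes — replace panic()s and IPL_assert()s in low-level helpers with failure returns, and let the caller decide whether a failed device is fatal — can be sketched generically (hypothetical code, not the actual bios):

```c
#include <stdbool.h>

/* Low-level helper: reports failure instead of panicking.
 * (Here we simply pretend only device 2 carries a bootable image.) */
static bool try_boot_device(int id)
{
    return id == 2;
}

/* The caller now owns the policy: keep scanning on failure and only
 * give up (e.g. panic) after every candidate has been tried. */
int boot_first_bootable(int ndevs)
{
    for (int id = 0; id < ndevs; id++) {
        if (try_boot_device(id)) {
            return id;  /* booted from this device */
        }
        /* failure is no longer fatal: move on to the next device */
    }
    return -1;          /* nothing bootable: caller decides to panic */
}
```

This inversion — error propagation up, policy at the top — is what lets patch 5 scan through all boot devices instead of dying on the first unbootable one.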




[PATCH for-5.2 2/6] pc-bios/s390-ccw: Move ipl-related code from main() into a separate function

2020-07-28 Thread Thomas Huth
Let's move this part of the code into a separate function to be able
to use it from multiple spots later.

Signed-off-by: Thomas Huth 
---
 pc-bios/s390-ccw/main.c | 20 
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/pc-bios/s390-ccw/main.c b/pc-bios/s390-ccw/main.c
index 146a50760b..9b64eb0c24 100644
--- a/pc-bios/s390-ccw/main.c
+++ b/pc-bios/s390-ccw/main.c
@@ -223,14 +223,8 @@ static void virtio_setup(void)
 }
 }
 
-int main(void)
+static void ipl_boot_device(void)
 {
-sclp_setup();
-css_setup();
-boot_setup();
-find_boot_device();
-enable_subchannel(blk_schid);
-
 switch (cutype) {
 case CU_TYPE_DASD_3990:
 case CU_TYPE_DASD_2107:
@@ -242,8 +236,18 @@ int main(void)
 break;
 default:
 print_int("Attempting to boot from unexpected device type", cutype);
-panic("");
+panic("\nBoot failed.\n");
 }
+}
+
+int main(void)
+{
+sclp_setup();
+css_setup();
+boot_setup();
+find_boot_device();
+enable_subchannel(blk_schid);
+ipl_boot_device();
 
 panic("Failed to load OS from hard disk\n");
 return 0; /* make compiler happy */
-- 
2.18.1




Re: [PATCH v5 7/7] Makefile: Ship the generic platform bios ELF images for RISC-V

2020-07-28 Thread Alistair Francis
On Tue, Jul 28, 2020 at 8:52 AM Bin Meng  wrote:
>
> Hi Alistair,
>
> On Tue, Jul 28, 2020 at 11:37 PM Alistair Francis  
> wrote:
> >
> > On Wed, Jul 15, 2020 at 11:01 PM Bin Meng  wrote:
> > >
> > > From: Bin Meng 
> > >
> > > At present only the generic platform fw_dynamic bios BIN images
> > > are included in the 'make install' target for 'virt' and 'sifive_u'
> > > machines. This updates the install blob list to include ELF images
> > > which are needed by the 'spike' machine.
> > >
> > > Signed-off-by: Bin Meng 
> >
> > This commit should be squashed into patch 5.
>
> I actually considered this before, and I was thinking that patch 5 is
> only for "fixing" the missing binaries for Spike machine, and this
> patch addresses the "make install" issue, hence separate patch.

Patch 5 creates the issue though, so I think it is worth fixing there.

>
> >
> > Do you want me to do that when applying?
>
> Sure, please do then if you feel this is the correct way.

This series still has regressions, so when you send a new version
please squash this patch.

Alistair

>
> Regards,
> Bin



QEMU | Pipeline #171749063 has failed for master | 0a58e39f

2020-07-28 Thread GitLab via


Your pipeline has failed.

Project: QEMU ( https://gitlab.com/qemu-project/qemu )
Branch: master ( https://gitlab.com/qemu-project/qemu/-/commits/master )

Commit: 0a58e39f ( 
https://gitlab.com/qemu-project/qemu/-/commit/0a58e39fe90c50b313a5148c095f8dbb6111a6d6
 )
Commit Message: Merge remote-tracking branch 'remotes/vivier2/t...
Commit Author: Peter Maydell ( https://gitlab.com/pm215 )

Pipeline #171749063 ( 
https://gitlab.com/qemu-project/qemu/-/pipelines/171749063 ) triggered by Alex 
Bennée ( https://gitlab.com/stsquad )
had 1 failed build.

Job #660213052 ( https://gitlab.com/qemu-project/qemu/-/jobs/660213052/raw )

Stage: build
Name: build-tcg-disabled
Trace: 192  ...[17:38:47] ...  
192  pass   [17:38:47] [17:38:47]   0s   
194  ...[17:38:47] ...  
194  pass   [17:38:47] [17:38:47]   0s   
197  ...[17:38:47] ...  
197  fail   [17:38:47] [17:40:29]
output mismatch (see 197.out.bad)
--- /builds/qemu-project/qemu/tests/qemu-iotests/197.out2020-07-28 
17:28:21.0 +
+++ /builds/qemu-project/qemu/build/tests/qemu-iotests/197.out.bad  
2020-07-28 17:40:28.0 +
@@ -26,9 +26,9 @@
 
 === Partial final cluster ===
 
-Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1024
+qemu-img: TEST_DIR/t.IMGFMT: Invalid parameter 'compat'
 read 1024/1024 bytes at offset 0
 1 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-1 KiB (0x400) bytes allocated at offset 0 bytes (0x0)
+4 GiB (0x1) bytes allocated at offset 0 bytes (0x0)
 No errors were found on the image.
 *** done
208  ...[17:40:29] ...  
208  pass   [17:40:29] [17:40:33]   4s   
215  ...[17:40:33] ...  
215  pass   [17:40:33] [17:42:12]  98s   
221  ...[17:42:12] ...  
221  pass   [17:42:12] [17:42:12]   0s   
222  ...[17:42:12] ...  
222  pass   [17:42:12] [17:42:16]   4s   
226  ...[17:42:16] ...  
226  pass   [17:42:16] [17:42:17]   1s   
227  ...[17:42:17] ...  
227  pass   [17:42:17] [17:42:17]   0s   
236  ...[17:42:17] ...  
236  pass   [17:42:17] [17:42:17]   0s   
253  ...[17:42:17] ...  
253  pass   [17:42:17] [17:42:17]   0s   
277  ...[17:42:17] ...  
277  pass   [17:42:17] [17:42:19]   2s   
Failures: 197
Failed 1 of 47 iotests
section_end:1595958160:step_script
ERROR: Job failed: exit code 1



-- 
You're receiving this email because of your account on gitlab.com.





Re: [PATCH v4 4/7] hw/riscv: Use pre-built bios image of generic platform for virt & sifive_u

2020-07-28 Thread Alistair Francis
On Tue, Jul 28, 2020 at 8:46 AM Bin Meng  wrote:
>
> Hi Alistair,
>
> On Tue, Jul 28, 2020 at 11:39 PM Alistair Francis  
> wrote:
> >
> > On Wed, Jul 15, 2020 at 9:55 PM Bin Meng  wrote:
> > >
> > > Hi Alistair,
> > >
> > > On Mon, Jul 13, 2020 at 9:53 AM Bin Meng  wrote:
> > > >
> > > > On Sun, Jul 12, 2020 at 1:34 AM Alistair Francis  
> > > > wrote:
> > > > >
> > > > > On Thu, Jul 9, 2020 at 10:07 PM Bin Meng  wrote:
> > > > > >
> > > > > > From: Bin Meng 
> > > > > >
> > > > > > Update virt and sifive_u machines to use the opensbi fw_dynamic bios
> > > > > > image built for the generic FDT platform.
> > > > > >
> > > > > > Remove the out-of-date no longer used bios images.
> > > > > >
> > > > > > Signed-off-by: Bin Meng 
> > > > > > Reviewed-by: Anup Patel 
> > > > > > Reviewed-by: Alistair Francis 
> > > > >
> > > > > This patch seems to break 32-bit Linux boots on the sifive_u and virt 
> > > > > machines.
> > > > >
> > > >
> > > > It looks only Linux boot on sifive_u is broken. On our side, we have
> > > > been using VxWorks to test 32-bit OpenSBI on sifive_u so this issue
> > > > gets unnoticed. I will take a look.
> > >
> > > I've figured out the issue of 32-bit Linux booting failure on
> > > sifive_u. A patch has been sent to Linux upstream:
> > > http://lists.infradead.org/pipermail/linux-riscv/2020-July/001213.html
> >
> > Thanks for that. What change in QEMU causes this failure though?
> >
>
> There is nothing wrong in QEMU.

There is. This patch causes a regression for 32-bit Linux boot on the
sifive_u. Your v5 has not addressed this.

With this patch, the Linux boot stops here:

OpenSBI v0.8
   ____                    _____ ____ _____
  / __ \                  / ____|  _ \_   _|
 | |  | |_ __   ___ _ __ | (___ | |_) || |
 | |  | | '_ \ / _ \ '_ \ \___ \|  _ < | |
 | |__| | |_) |  __/ | | |____) | |_) || |_
  \____/| .__/ \___|_| |_|_____/|____/_____|
        | |
        |_|

Platform Name   : SiFive HiFive Unleashed A00
Platform Features   : timer,mfdeleg
Platform HART Count : 4
Boot HART ID: 3
Boot HART ISA   : rv64imafdcsu
BOOT HART Features  : pmp,scounteren,mcounteren
BOOT HART PMP Count : 16
Firmware Base   : 0x8000
Firmware Size   : 116 KB
Runtime SBI Version : 0.2

MIDELEG : 0x0222
MEDELEG : 0xb109
PMP0: 0x8000-0x8001 (A)
PMP1: 0x-0x (A,R,W,X)
[0.00] OF: fdt: Ignoring memory range 0x8000 - 0x8020
[0.00] Linux version 5.3.0 (oe-user@oe-host) (gcc version
9.2.0 (GCC)) #1 SMP Thu Sep 19 18:34:52 UTC 2019
[0.00] earlycon: sbi0 at I/O port 0x0 (options '')
[0.00] printk: bootconsole [sbi0] enabled
[0.00] initrd not found or empty - disabling initrd
[0.00] Zone ranges:
[0.00]   DMA32[mem 0x8020-0xbfff]
[0.00]   Normal   empty
[0.00] Movable zone start for each node
[0.00] Early memory node ranges
[0.00]   node   0: [mem 0x8020-0xbfff]
[0.00] Initmem setup node 0 [mem 0x8020-0xbfff]
[0.00] OF: fdt: Invalid device tree blob header
[0.00] software IO TLB: mapped [mem 0xbb1fe000-0xbf1fe000] (64MB)

Without this patch I can boot all the way to looking for a rootFS.

Please don't send new versions of patches without addressing regressions.

Alistair

>
> > There are lots of people not running the latest Linux from master that
> > this will cause breakages for.
>
> It's just that the 32-bit Linux defconfig has never been validated by
> people with 'sifive_u' machine. I bet people only validated the 32-bit
> kernel with the 'virt' machine.
>
> Regards,
> Bin



[Bug 1889288] Re: aarch64 BICS instruction doesn't set flags

2020-07-28 Thread Peter Maydell
The code is correct (though it is admittedly not entirely obvious at
first glance). The switch statement at line 4753 is on "(opc | (invert
<< 2))" (where opc is a 2 bit field and invert a 1 bit field). Both ANDS
and BICS have opc==3 and so will cause a call to gen_logic_CC(). The
difference between the two insns is that ANDS has invert==0 and BICS has
invert==1.
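The decode Peter describes can be rendered as a tiny standalone sketch (simplified, not the actual translate-a64.c code): opc is a 2-bit field, invert a 1-bit field, and the switch runs on their combination, so both ANDS (opc==3, invert==0) and BICS (opc==3, invert==1) land on flag-setting cases.

```c
/* Returns 1 when the (opc, invert) combination sets flags, i.e.
 * reaches the gen_logic_CC() call in the real code. */
static int sets_flags(unsigned opc, unsigned invert)
{
    switch (opc | (invert << 2)) {
    case 3:         /* ANDS: opc == 3, invert == 0 */
    case 7:         /* BICS: opc == 3, invert == 1 */
        return 1;
    default:        /* AND, ORR, EOR, BIC, ORN, EON */
        return 0;
    }
}
```

This is why the original source reads correctly despite not listing BICS explicitly near the gen_logic_CC() call the reporter was looking at.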


** Changed in: qemu
   Status: New => Invalid

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1889288

Title:
  aarch64 BICS instruction doesn't set flags

Status in QEMU:
  Invalid

Bug description:
  When reading the source for translate-a64.c here:

  
https://github.com/qemu/qemu/blob/a466dd084f51cdc9da2e99361f674f98d7218559/target/arm/translate-a64.c#L4783

  I noticed that it does not appear to call gen_logic_CC for the BICS
  instruction so is not setting the flags as required. I haven't tried
  to produce a test case for it but it seems like it might be a bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1889288/+subscriptions



Re: [PULL 0/3] Block patches for 5.1.0-rc2?

2020-07-28 Thread Peter Maydell
On Tue, 28 Jul 2020 at 14:48, Max Reitz  wrote:
>
> Hi,
>
> Sorry for the very late pull request.  The iotest issue only appeared
> today, and the I/O path issue was only tracked down today.  We need the
> fixes for the latter in 5.1, so if they do not make it into rc2, we will
> need them in rc3.
>
>
> The following changes since commit 23ae28783f4674e98f7539d1c05d793166c2fc12:
>
>   Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2020-07-27' 
> into staging (2020-07-28 09:15:44 +0100)
>
> are available in the Git repository at:
>
>   https://github.com/XanClic/qemu.git tags/pull-block-2020-07-28
>
> for you to fetch changes up to afac471b71da92d91cc56fb64c0719b8a4a2d96b:
>
>   iotests/197: Fix for non-qcow2 formats (2020-07-28 15:28:56 +0200)
>
> 
> Block patches for 5.1.0:
> - Fix block I/O for split transfers
> - Fix iotest 197 for non-qcow2 formats


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/5.1
for any user-visible changes.

-- PMM



Re: sysbus_create_simple Vs qdev_create

2020-07-28 Thread Paolo Bonzini
On 28/07/20 09:19, Markus Armbruster wrote:
>> the composition tree generally mirrors things that are born and die
>> at the same time, and creating children is generally reserved to the
>> object itself.
>
> Yes.  Notable exceptions: containers /machine/peripheral,
> /machine/peripheral-anon, /machine/unattached.

And /objects too.  Apart from /machine/unattached, all these dynamic
objects are created by the monitor or the command line.

>> Children are usually embedded directly in a struct, for
>> example.
> 
> We sometimes use object_new() + object_property_add_child() instead.
> Extra indirection.  I guess we'd be better off without the extra
> indirection most of the time.  Implementation detail.
> 
> We sometimes use object_new() without object_property_add_child(), and
> have qdev_realize() put the device in the /machine/unattached orphanage.
> Meh.  I guess the orphanage feature exists to make conversion to QOM
> slightly easier.  Could we ban its use for new boards at least?

Banning perhaps is too strong, but yes /machine/unattached is an
anti-pattern.

>> 3) accessing the QOM graph is slow (it requires hash table lookups,
>> string comparisons and all that), so the pointers that cache the
>> parent-child links are needed for use in hot paths.
> 
> True, but only because QOM's design opts for generality, efficiency be
> damned :)

Remember that QOM's essential feature is the visitors: unlike GObject,
QOM is not targeted at programming languages but rather at CLI and RPC.

> I never quite understood why we need to build properties at run time.

I think it was (for once) Anthony reasoning that good is better than
perfect.  You do need run-time properties for /machine/unattached,
/machine/peripheral, etc., so he decided to only have run-time
properties.  Introspection wasn't considered a primary design goal.

Also, even though child properties are quite often the same for all
objects after initialization (and possibly realization) is complete,
this does not cover in two cases:

1) before the corresponding objects are created---so static child
properties would only be possible if creation of all children is moved
to instance_init, which is guaranteed not to fail.

2) there are cases in which a property (e.g. an int) governs how many of
those children exist, so you cannot create all children in instance_init.

Paolo

> It makes reasoning about properties harder, and introspection brittle.
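Point 3 above — name-based graph access is slow, so hot paths cache direct parent/child pointers — can be illustrated generically (deliberately not QOM code; structures and names are invented):

```c
#include <stddef.h>
#include <string.h>

struct node {
    const char *name;
    struct node *parent;    /* cached link: O(1) on hot paths */
    struct node **children;
    int nchildren;
};

/* Slow path: resolving a child by name costs one string comparison
 * per sibling, which is why it is kept off performance-critical code. */
struct node *lookup_child(struct node *o, const char *name)
{
    for (int i = 0; i < o->nchildren; i++) {
        if (strcmp(o->children[i]->name, name) == 0) {
            return o->children[i];
        }
    }
    return NULL;
}
```

In the real object model the name lookup additionally goes through per-object property tables, so the gap between the cached pointer and the string path is even wider than this sketch suggests.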




[Bug 1888728] Re: Bare chroot in linux-user fails with pgb_reserved_va: Assertion `guest_base != 0' failed.

2020-07-28 Thread Laurent Vivier
Fixed here:
https://github.com/qemu/qemu/commit/c9f8066697e0d3e77b97f6df423e9d6540b693be

** Changed in: qemu
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1888728

Title:
  Bare chroot in linux-user fails with pgb_reserved_va: Assertion
  `guest_base != 0' failed.

Status in QEMU:
  Fix Committed

Bug description:
  Trying to run a bare chroot with no additional bind mounts fails on
  git master (8ffa52c20d5693d454f65f2024a1494edfea65d4) with:

  root@nofan:~/qemu> chroot /local_scratch/sid-m68k-sbuild/
  qemu-m68k-static: /root/qemu/linux-user/elfload.c:2315: pgb_reserved_va: 
Assertion `guest_base != 0' failed.
  Aborted
  root@nofan:~/qemu>

  The problem can be worked around by bind-mounting /proc from the host
  system into the target chroot:

  root@nofan:~/qemu> mount -o bind /proc/ /local_scratch/sid-m68k-sbuild/proc/
  root@nofan:~/qemu> chroot /local_scratch/sid-m68k-sbuild/
  bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
  (sid-m68k-sbuild)root@nofan:/#

  Host system is an up-to-date Debian unstable (2020-07-23).

  I have not been able to bisect the issue yet since there is another
  annoying linux-user bug (virtual memory exhaustion) that was introduced
  and fixed somewhere between v5.0.0 and HEAD and overshadows the
  original Assertion failure bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1888728/+subscriptions



QEMU | Pipeline #171722122 has failed for master | a466dd08

2020-07-28 Thread GitLab via


Your pipeline has failed.

Project: QEMU ( https://gitlab.com/qemu-project/qemu )
Branch: master ( https://gitlab.com/qemu-project/qemu/-/commits/master )

Commit: a466dd08 ( 
https://gitlab.com/qemu-project/qemu/-/commit/a466dd084f51cdc9da2e99361f674f98d7218559
 )
Commit Message: Merge remote-tracking branch 'remotes/jasowang/...
Commit Author: Peter Maydell ( https://gitlab.com/pm215 )

Pipeline #171722122 ( 
https://gitlab.com/qemu-project/qemu/-/pipelines/171722122 ) triggered by Alex 
Bennée ( https://gitlab.com/stsquad )
had 1 failed build.

Job #660111763 ( https://gitlab.com/qemu-project/qemu/-/jobs/660111763/raw )

Stage: build
Name: build-tcg-disabled
Trace: 192  ...[16:36:31] ...  
192  pass   [16:36:31] [16:36:31]   0s   
194  ...[16:36:31] ...  
194  pass   [16:36:31] [16:36:31]   0s   
197  ...[16:36:31] ...  
197  fail   [16:36:31] [16:38:23]
output mismatch (see 197.out.bad)
--- /builds/qemu-project/qemu/tests/qemu-iotests/197.out2020-07-28 
16:26:40.0 +
+++ /builds/qemu-project/qemu/build/tests/qemu-iotests/197.out.bad  
2020-07-28 16:38:22.0 +
@@ -26,9 +26,9 @@
 
 === Partial final cluster ===
 
-Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1024
+qemu-img: TEST_DIR/t.IMGFMT: Invalid parameter 'compat'
 read 1024/1024 bytes at offset 0
 1 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-1 KiB (0x400) bytes allocated at offset 0 bytes (0x0)
+4 GiB (0x1) bytes allocated at offset 0 bytes (0x0)
 No errors were found on the image.
 *** done
208  ...[16:38:23] ...  
208  pass   [16:38:23] [16:38:27]   4s   
215  ...[16:38:27] ...  
215  pass   [16:38:27] [16:40:04]  96s   
221  ...[16:40:04] ...  
221  pass   [16:40:04] [16:40:04]   0s   
222  ...[16:40:04] ...  
222  pass   [16:40:04] [16:40:08]   4s   
226  ...[16:40:08] ...  
226  pass   [16:40:08] [16:40:08]   0s   
227  ...[16:40:08] ...  
227  pass   [16:40:08] [16:40:08]   0s   
236  ...[16:40:08] ...  
236  pass   [16:40:08] [16:40:08]   0s   
253  ...[16:40:08] ...  
253  pass   [16:40:08] [16:40:09]   1s   
277  ...[16:40:09] ...  
277  pass   [16:40:09] [16:40:10]   1s   
Failures: 197
Failed 1 of 47 iotests
section_end:1595954427:step_script
ERROR: Job failed: exit code 1



-- 
You're receiving this email because of your account on gitlab.com.





[Bug 1889288] [NEW] aarch64 BICS instruction doesn't set flags

2020-07-28 Thread Robert
Public bug reported:

When reading the source for translate-a64.c here:

https://github.com/qemu/qemu/blob/a466dd084f51cdc9da2e99361f674f98d7218559/target/arm/translate-a64.c#L4783

I noticed that it does not appear to call gen_logic_CC for the BICS
instruction so is not setting the flags as required. I haven't tried to
produce a test case for it but it seems like it might be a bug.

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1889288


To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1889288/+subscriptions



Re: [PATCH 0/2] assertion failure in net_tx_pkt_add_raw_fragment() in hw/net/net_tx_pkt.c

2020-07-28 Thread Mauro Matteo Cascella
Thank you Alexander for testing the patch and providing the
reproducer. I think you should be credited, along with Ziming, for
independently reporting the same issue.

On Mon, Jul 27, 2020 at 7:40 PM Alexander Bulekov  wrote:
>
> I sent a reproducer for the to the list some time ago, but never created
> a Launchpad bug...
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg701930.html
>
> Anyways.. I can confirm that I can't reproduce the issue with these
> patches.
>
> Minimized Reproducer:
> cat << EOF | ./i386-softmmu/qemu-system-i386 -M pc-q35-5.0 -nographic \
> -display none -serial none -monitor none -qtest stdio
> outl 0xcf8 0x80001010
> outl 0xcfc 0xe102
> outl 0xcf8 0x80001004
> outw 0xcfc 0x7
> write 0xe10207e8 0x4 0x25ff13ff
> write 0xe10200b8 0x7 0xe3055e411b0202
> write 0xe1020100 0x5 0x5e411b0202
> write 0xe1020110 0x4 0x1b0202e1
> write 0xe1020118 0x4 0x06fff105
> write 0xe1020128 0x7 0xf3055e411b0202
> write 0xe1020402 0x2 0x5e41
> write 0xe1020420 0x4 0x1b0202e1
> write 0xe1020428 0x4 0x06ff6105
> write 0xe1020438 0x1 0x63
> write 0xe1020439 0x1 0x05
> EOF
>
> -Alex
>
> On 200727 1908, Mauro Matteo Cascella wrote:
> > An assertion failure issue was reported by Mr. Ziming Zhang (CC'd).
> > It occurs in the code that processes network packets while adding data
> > fragments into packet context. This flaw could potentially be abused by
> > a malicious guest to abort the QEMU process on the host. This two patch
> > series does a couple of things:
> >
> > - introduces a new function in net_tx_pkt.{c,h} to check the maximum number
> >   of data fragments
> > - adds a check in both e1000e and vmxnet3 devices to skip the packet if the
> >   current data fragment exceeds max_raw_frags, preventing
> >   net_tx_pkt_add_raw_fragment() to be called with an invalid raw_frags
> >
> > Mauro Matteo Cascella (2):
> >   hw/net/net_tx_pkt: add function to check pkt->max_raw_frags
> >   hw/net: check max_raw_frags in e1000e and vmxnet3 devices
> >
> >  hw/net/e1000e_core.c | 3 ++-
> >  hw/net/net_tx_pkt.c  | 5 +
> >  hw/net/net_tx_pkt.h  | 8 
> >  hw/net/vmxnet3.c | 3 ++-
> >  4 files changed, 17 insertions(+), 2 deletions(-)
> >
> > --
> > 2.26.2
> >
> >
>


-- 
Mauro Matteo Cascella, Red Hat Product Security
6F78 E20B 5935 928C F0A8  1A9D 4E55 23B8 BB34 10B0
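The guard the series adds — checking the fragment count against max_raw_frags before queuing more guest data, and skipping the packet instead of asserting — can be sketched as follows (struct layout and names here are hypothetical, not QEMU's net_tx_pkt internals):

```c
#include <stdbool.h>
#include <stddef.h>

struct tx_pkt {
    size_t raw_frags;       /* fragments queued so far */
    size_t max_raw_frags;   /* capacity of the raw fragment array */
};

/* Returns false when the guest has supplied more fragments than the
 * packet context can hold; the caller then drops the packet instead
 * of overrunning the array / tripping an assertion. */
bool tx_pkt_add_fragment(struct tx_pkt *pkt)
{
    if (pkt->raw_frags >= pkt->max_raw_frags) {
        return false;
    }
    pkt->raw_frags++;
    return true;
}
```

The key point matches the cover letter: guest-controlled counts must be validated at the device boundary, since an assertion failure here aborts the whole QEMU process on the host.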




Re: [PULL 0/3] Linux user for 5.1 patches

2020-07-28 Thread Peter Maydell
On Tue, 28 Jul 2020 at 13:36, Laurent Vivier  wrote:
>
> The following changes since commit 9303ecb658a0194560d1eecde165a1511223c2d8:
>
>   Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20200727' into 
> staging (2020-07-27 17:25:06 +0100)
>
> are available in the Git repository at:
>
>   git://github.com/vivier/qemu.git tags/linux-user-for-5.1-pull-request
>
> for you to fetch changes up to 0f6bb1958f3aae0171996941df7fb7ea7536bb12:
>
>   linux-user: Use getcwd syscall directly (2020-07-27 22:05:34 +0200)
>
> ----
> linux-user 20200728
>
> Fix "pgb_reserved_va: Assertion `guest_base != 0' failed." error
> Fix rt_sigtimedwait() errno
> Fix getcwd() errno


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/5.1
for any user-visible changes.

-- PMM



Re: [PATCH v1 1/2] qemu-timer: gracefully handle the end of time

2020-07-28 Thread Paolo Bonzini
On 28/07/20 18:08, Alex Bennée wrote:
> 
> Paolo Bonzini  writes:
> 
>> On 28/07/20 16:10, Alex Bennée wrote:
>>> +/*
>>> + * Check to see if we have run out of time. Most of our time
>>> + * sources are nanoseconds since epoch (some time around the fall
>>> + * of Babylon 5, the start of the Enterprises five year mission
>>> + * and just before the arrival of the great evil ~ 2262CE).
>>> + * Although icount based time is ns since the start of emulation
>>> + * it is able to skip forward if the device is sleeping (think IoT
>>> + * device with a very long heartbeat). Either way we don't really
>>> + * handle running out of time so lets catch it and report it here.
>>> + */
>>> +if (current_time == INT64_MAX) {
>>> +qemu_handle_outa_time();
>>> +goto out;
>>> +}
>>> +
>>
>> Doing this here is a bit dangerous, I'd rather do nothing here and
>> detect the situation in cpus.c where we can do
>> qemu_system_shutdown_request() (and also do nothing).
> 
> You mean in notify_aio_contexts()? Sure we can do that.

Yes, that would work.  I think qemu_clock_deadline_ns_all would already
return -1 so that you'd have a zero-instruction budget from
tcg_get_icount_limit.

Paolo




Re: [PATCH v2 2/2] scripts/performance: Add list_helpers.py script

2020-07-28 Thread Ahmed Karaman
On Tue, Jul 28, 2020 at 12:30 PM Aleksandar Markovic
 wrote:
>
>
>
> On Thursday, July 16, 2020, Ahmed Karaman  
> wrote:
>>
>> Python script that prints executed helpers of a QEMU invocation.
>>
>
> Hi, Ahmed.
>
> You outlined the envisioned user workflow regarding this script in your 
> report. As I understand it, it generally goes like this:
>
> 1) The user first discovers helpers, and their performance data.
> 2) The user examines the callees of a particular helper of choice (usually, 
> the most instruction-consuming helper).
> 3) The user perhaps further examines a callee of a particular callee of the 
> particular helper.
> 4) The user continues this way until the conclusion can be drawn, or maximal 
> depth is reached.
>
> The procedure might be time consuming since each step requires running an 
> emulation of the test program.
>
> This makes me think that a faster and easier tool for the user (but, to 
> some not-that-great extent, harder for you) would be an improved 
> list_helpers.py (and list_fn_callees.py) that provides a list of all callees 
> for all helpers, in tree form (so, callees of callees, callees of callees of 
> callees, etc.), rather than just a list of immediate callees, as it 
> currently does.
>
> I think you can provide such functionality relatively easily using recursion. 
> See, let's say:
>
> https://realpython.com/python-thinking-recursively/
>
> Perhaps you can have a switch (let's say, --tree ) that specifies 
> whether the script outputs just immediate callee list, or entire callee tree.

I have to say, this is a very nice suggestion. I will start working on it!
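A minimal sketch of the suggested recursive traversal, assuming a hypothetical get_callees(fn) lookup that returns the immediate callees already parsed from the callgrind data (the helper name, data shape, and depth limit are illustrative, not the script's actual API):

```python
def callee_tree(fn, get_callees, depth=0, max_depth=4):
    """Return fn and its transitive callees as indented tree lines,
    stopping at max_depth to bound recursion on deep call chains."""
    lines = ["  " * depth + fn]
    if depth < max_depth:
        for callee in get_callees(fn):
            lines.extend(callee_tree(callee, get_callees,
                                     depth + 1, max_depth))
    return lines

# Example with made-up call data:
calls = {
    "helper_float_madd_d": ["float64_mul", "float64_add"],
    "float64_mul": ["round_canonical"],
}
for line in callee_tree("helper_float_madd_d",
                        lambda fn: calls.get(fn, [])):
    print(line)
```

A --tree switch would then just select this printer instead of the flat immediate-callee listing.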

>
> Thanks,
> Aleksandar
>
>>
>> Syntax:
>> list_helpers.py [-h] -- \
>> [] \
>> []
>>
>> [-h] - Print the script arguments help message.
>>
>> Example of usage:
>> list_helpers.py -- qemu-mips coulomb_double-mips -n10
>>
>> Example output:
>>  Total number of instructions: 108,933,695
>>
>>  Executed QEMU Helpers:
>>
>>  No. Ins Percent  Calls Ins/Call Helper Name Source File
>>  --- --- --- --  ---
>>1 183,021  0.168%  1,305  140 helper_float_sub_d  
>> /target/mips/fpu_helper.c
>>2 177,111  0.163%770  230 helper_float_madd_d 
>> /target/mips/fpu_helper.c
>>3 171,537  0.157%  1,014  169 helper_float_mul_d  
>> /target/mips/fpu_helper.c
>>4 157,298  0.144%  2,443   64 helper_lookup_tb_ptr
>> /accel/tcg/tcg-runtime.c
>>5 138,123  0.127%897  153 helper_float_add_d  
>> /target/mips/fpu_helper.c
>>6  47,083  0.043%207  227 helper_float_msub_d 
>> /target/mips/fpu_helper.c
>>7  24,062  0.022%487   49 helper_cmp_d_lt 
>> /target/mips/fpu_helper.c
>>8  22,910  0.021%150  152 helper_float_div_d  
>> /target/mips/fpu_helper.c
>>9  15,497  0.014%321   48 helper_cmp_d_eq 
>> /target/mips/fpu_helper.c
>>   10   9,100  0.008% 52  175 helper_float_trunc_w_d  
>> /target/mips/fpu_helper.c
>>   11   7,059  0.006% 10  705 helper_float_sqrt_d 
>> /target/mips/fpu_helper.c
>>   12   3,000  0.003% 40   75 helper_cmp_d_ule
>> /target/mips/fpu_helper.c
>>   13   2,720  0.002% 20  136 helper_float_cvtd_w 
>> /target/mips/fpu_helper.c
>>   14   2,477  0.002% 27   91 helper_swl  
>> /target/mips/op_helper.c
>>   15   2,000  0.002% 40   50 helper_cmp_d_le 
>> /target/mips/fpu_helper.c
>>   16   1,800  0.002% 40   45 helper_cmp_d_un 
>> /target/mips/fpu_helper.c
>>   17   1,164  0.001% 12   97 helper_raise_exception_ 
>> /target/mips/op_helper.c
>>   18 720  0.001% 10   72 helper_cmp_d_ult
>> /target/mips/fpu_helper.c
>>   19 560  0.001%1404 helper_cfc1 
>> /target/mips/fpu_helper.c
>>
>> Signed-off-by: Ahmed Karaman 
>> ---
>>  scripts/performance/list_helpers.py | 207 
>>  1 file changed, 207 insertions(+)
>>  create mode 100755 scripts/performance/list_helpers.py
>>
>> diff --git a/scripts/performance/list_helpers.py 
>> b/scripts/performance/list_helpers.py
>> new file mode 100755
>> index 00..a97c7ed4fe
>> --- /dev/null
>> +++ b/scripts/performance/list_helpers.py
>> @@ -0,0 +1,207 @@
>> +#!/usr/bin/env python3
>> +
>> +#  Print the executed helpers of a QEMU invocation.
>> +#
>> +#  Syntax:
>> +#  list_helpers.py [-h] -- \
>> +#  [] \
>> +#  []
>> +#
>> +#  [-h] - Print the script arguments help message.
>> +#
>> +#  Example of usage:
>> +#  list_helpers.py -- qemu-mips coulomb_double-mips
>> +#
>> +#  This file is a part of the project "TCG Continuous Benchmarking".
>> +#
>> +#  Copyright (C) 2020  Ahmed Karaman 
>> +#  Copyright (C) 2020  Aleksandar Markovic 
>> +#
>> +#  This program is free software: you can redistribute it and/or modify
>> +#  it 

Re: [PATCH v2 0/2] QEMU Gating CI

2020-07-28 Thread Peter Maydell
On Tue, 28 Jul 2020 at 17:34, Cleber Rosa  wrote:
> So, I think the reason for the skip (there's an open issue on GitLab
> itself about not communicating to users the reason) is that GitLab
> does a late evaluation of the job condition.  For those jobs the
> condition is:
>
>rules:
>- if: '$CI_COMMIT_REF_NAME == "staging"'
>
> Which by the time the job was evaluated it was no longer true (there
> was new content on the staging branch).

What caused the new content on the staging branch? I only
pushed the once (to go to commit 6e7c2dcb50907aa) -- is some
other mechanism updating it? Looking at what it now seems to
point to it looks like it's being mirrored from the git.qemu.org
repo... Either we should turn that off, or tests need to be
pushed to git.qemu.org:staging (which might introduce a delay
if the mirror isn't immediate)...

thanks
-- PMM



qemu-img convert asserts while converting from vhdx to raw

2020-07-28 Thread Swapnil Ingle
Hey Guys,

We are seeing the following assert when trying to convert a disk image from 
vhdx to raw.
This issue is seen only for disk with 4k logical sector size.

$ qemu-img convert -f vhdx -O raw 4KTest1.vhdx test.raw
qemu-img: util/iov.c:388: qiov_slice: Assertion `offset + len <= qiov->size' 
failed.
Aborted

$ qemu-img --version
qemu-img version 5.0.91 (v5.1.0-rc1-2-g3cbc897-dirty)
Copyright (c) 2003-2020 Fabrice Bellard and the QEMU Project developers

 $ qemu-img check -r all 4KTest1.vhdx
No errors were found on the image.

$ qemu-img info 4KTest1.vhdx
image: 4KTest1.vhdx
file format: vhdx
virtual size: 10 GiB (10737418240 bytes)
disk size: 35.7 GiB
cluster_size: 33554432

The vhdx disk metadata is following,

VhdFormat : VHDX
VhdType : Dynamic
LogicalSectorSize : 4096
PhysicalSectorSize : 4096
BlockSize : 33554432

Following is the backtrace of the assert,

#0  0x764cf387 in raise () from /lib64/libc.so.6
#1  0x764d0a78 in abort () from /lib64/libc.so.6
#2  0x764c81a6 in __assert_fail_base () from /lib64/libc.so.6
#3  0x764c8252 in __assert_fail () from /lib64/libc.so.6
#4  0x556abf5a in qiov_slice (qiov=0x74122a20, offset=0, 
len=2096640, head=0x74122648, tail=0x74122650,
niov=0x74122640) at util/iov.c:388
#5  0x556ac0f6 in qemu_iovec_init_extended (qiov=0x74122730, 
head_buf=0x0, head_len=0, mid_qiov=0x74122a20, mid_offset=0,
mid_len=2096640, tail_buf=0x0, tail_len=0) at util/iov.c:429
#6  0x556ac438 in qemu_iovec_init_slice (qiov=0x74122730, 
source=0x74122a20, offset=0, len=2096640) at util/iov.c:495
#7  0x55609bd6 in bdrv_driver_preadv (bs=0x55982a80, 
offset=15841886208, bytes=2096640, qiov=0x74122a20, qiov_offset=0,
flags=0) at block/io.c:1134
#8  0x5560ad55 in bdrv_aligned_preadv (child=0x559891f0, 
req=0x74122900, offset=15841886208, bytes=2096640, align=1,
qiov=0x74122a20, qiov_offset=0, flags=0) at block/io.c:1515
#9  0x5560b67b in bdrv_co_preadv_part (child=0x559891f0, 
offset=15841886208, bytes=2096640, qiov=0x74122a20, qiov_offset=0,
flags=0) at block/io.c:1756
#10 0x5560b4b4 in bdrv_co_preadv (child=0x559891f0, 
offset=15841886208, bytes=2096640, qiov=0x74122a20, flags=0)
at block/io.c:1714
#11 0x555e3266 in vhdx_co_readv (bs=0x5597b370, sector_num=4194304, 
nb_sectors=4095, qiov=0x74122e10) at block/vhdx.c:1208
#12 0x55609da1 in bdrv_driver_preadv (bs=0x5597b370, 
offset=2147483136, bytes=2097152, qiov=0x74122e10, qiov_offset=0,
flags=0) at block/io.c:1169
#13 0x5560ad55 in bdrv_aligned_preadv (child=0x55989150, 
req=0x74122cb0, offset=2147483136, bytes=2097152, align=512,
qiov=0x74122e10, qiov_offset=0, flags=0) at block/io.c:1515
#14 0x5560b67b in bdrv_co_preadv_part (child=0x55989150, 
offset=2147483136, bytes=2097152, qiov=0x74122e10, qiov_offset=0,
flags=0) at block/io.c:1756
#15 0x5560b4b4 in bdrv_co_preadv (child=0x55989150, 
offset=2147483136, bytes=2097152, qiov=0x74122e10, flags=0)
at block/io.c:1714
#16 0x555f34c3 in blk_do_preadv (blk=0x5597b010, offset=2147483136, 
bytes=2097152, qiov=0x74122e10, flags=0)
at block/block-backend.c:1211
#17 0x555f351b in blk_co_preadv (blk=0x5597b010, offset=2147483136, 
bytes=2097152, qiov=0x74122e10, flags=0)
at block/block-backend.c:1223
#18 0x5557347b in blk_co_pread (blk=0x5597b010, offset=2147483136, 
bytes=2097152, buf=0x7fffefdff000, flags=0)
at /home/swapnil/dev/github/qemu/include/sysemu/block-backend.h:140
#19 0x555771aa in convert_co_read (s=0x7fffdc30, 
sector_num=4194303, nb_sectors=4096, buf=0x7fffefdff000 "") at qemu-img.c:1830
#20 0x5557785c in convert_co_do_copy (opaque=0x7fffdc30) at 
qemu-img.c:2007
#21 0x556a9e4e in coroutine_trampoline (i0=1436133568, i1=21845) at 
util/coroutine-ucontext.c:173
#22 0x764e1190 in ?? () from /lib64/libc.so.6
#23 0x7fffd2e0 in ?? ()
#24 0x in ?? ()

Thanks and Regards,
-Swapnil



QEMU | Pipeline #171708785 has failed for master | 1e0e0917

2020-07-28 Thread GitLab via


Your pipeline has failed.

Project: QEMU ( https://gitlab.com/qemu-project/qemu )
Branch: master ( https://gitlab.com/qemu-project/qemu/-/commits/master )

Commit: 1e0e0917 ( 
https://gitlab.com/qemu-project/qemu/-/commit/1e0e0917e5df765575a72afd35a7183e65f505ac
 )
Commit Message: Merge remote-tracking branch 'remotes/mdroth/ta...
Commit Author: Peter Maydell ( https://gitlab.com/pm215 )

Pipeline #171708785 ( 
https://gitlab.com/qemu-project/qemu/-/pipelines/171708785 ) triggered by Alex 
Bennée ( https://gitlab.com/stsquad )
had 1 failed build.

Job #660048436 ( https://gitlab.com/qemu-project/qemu/-/jobs/660048436/raw )

Stage: build
Name: build-tcg-disabled
Trace: 192  ...[16:06:11] ...  
192  pass   [16:06:11] [16:06:11]   0s   
194  ...[16:06:11] ...  
194  pass   [16:06:11] [16:06:12]   1s   
197  ...[16:06:12] ...  
197  fail   [16:06:12] [16:07:54]
output mismatch (see 197.out.bad)
--- /builds/qemu-project/qemu/tests/qemu-iotests/197.out2020-07-28 
15:56:14.0 +
+++ /builds/qemu-project/qemu/build/tests/qemu-iotests/197.out.bad  
2020-07-28 16:07:53.0 +
@@ -26,9 +26,9 @@
 
 === Partial final cluster ===
 
-Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1024
+qemu-img: TEST_DIR/t.IMGFMT: Invalid parameter 'compat'
 read 1024/1024 bytes at offset 0
 1 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-1 KiB (0x400) bytes allocated at offset 0 bytes (0x0)
+4 GiB (0x1) bytes allocated at offset 0 bytes (0x0)
 No errors were found on the image.
 *** done
208  ...[16:07:54] ...  
208  pass   [16:07:54] [16:07:56]   2s   
215  ...[16:07:56] ...  
215  pass   [16:07:56] [16:09:30]  93s   
221  ...[16:09:30] ...  
221  pass   [16:09:30] [16:09:30]   0s   
222  ...[16:09:30] ...  
222  pass   [16:09:30] [16:09:33]   3s   
226  ...[16:09:33] ...  
226  pass   [16:09:33] [16:09:33]   0s   
227  ...[16:09:33] ...  
227  pass   [16:09:33] [16:09:34]   1s   
236  ...[16:09:34] ...  
236  pass   [16:09:34] [16:09:34]   0s   
253  ...[16:09:34] ...  
253  pass   [16:09:34] [16:09:34]   0s   
277  ...[16:09:34] ...  
277  pass   [16:09:34] [16:09:35]   1s   
Failures: 197
Failed 1 of 47 iotests
section_end:1595952589:step_script
ERROR: Job failed: exit code 1



-- 
You're receiving this email because of your account on gitlab.com.





Re: [PATCH v2 0/2] QEMU Gating CI

2020-07-28 Thread Philippe Mathieu-Daudé
On 7/28/20 6:33 PM, Cleber Rosa wrote:
> On Tue, Jul 28, 2020 at 05:08:50PM +0100, Peter Maydell wrote:
>> On Tue, 28 Jul 2020 at 16:51, Cleber Rosa  wrote:
>>>
...
 OTOH I can't see anything on that web page that suggests that
 it's submitting jobs to the s390 or aarch64 boxes -- is it
 intended to?

>>>
>>> All the jobs for that pipeline have been created as expected, for
>>> instance:
>>>
>>>https://gitlab.com/qemu-project/qemu/-/jobs/659874849
>>>
>>> But given the recent changes to the GitLab YAML adding other phases,
>>> it's waiting for the previous phases.
>>
>> The page now says "This job has been skipped"...
>>
> 
> I saw that, and I was very disappointed... I double checked the
> machines, the runners status, tag names and they all seem to be OK.
> 
> So, I think the reason for the skip (there's an open issue on GitLab
> itself about not communicating to users the reason) is that GitLab
> does a late evaluation of the job condition.  For those jobs the
> condition is:
> 
>rules:
>- if: '$CI_COMMIT_REF_NAME == "staging"'
> 
> Which by the time the job was evaluated it was no longer true (there
> was new content on the staging branch).  There are multiple ways to
> solve the problem, including (and in my order of preference):
> 
>  1. using '$CI_COMMIT_BRANCH' instead of '$CI_COMMIT_REF_NAME', given
> that the pushed branch name should be kept stable even if the content
> (thus reference name) changes
> 
>  2. not changing anything if you believe that under normal
> circumstances only one pipeline for the staging branch will be running at a
> time.

For mainstream, usually one at a time is enough, because if you test
several branches and one is merged, then you need to rerun on top of the
merged one, so it is not very useful.

For other users, it is useful. I'd certainly finish a patch, run the
script, switch branch in another console, do some other work, run
another instance of the script concurrently with the 1st one, etc...

> 
> I'll prepare a new version with #1, unless you have a strong feeling
> against it.
> 
> - Cleber.
> 
>> thanks
>> -- PMM
>>




Re: [PATCH 2/7] build: fix device module builds

2020-07-28 Thread Philippe Mathieu-Daudé
On 7/23/20 7:46 PM, Christophe de Dinechin wrote:
> From: Gerd Hoffmann 
> 
> See comment.  Feels quite hackish.  Better ideas anyone?

I don't understand this patch, how is it related to the rest of
your series?

> 
> Signed-off-by: Gerd Hoffmann 
> Signed-off-by: Christophe de Dinechin 
> ---
>  dtc   | 2 +-
>  roms/SLOF | 2 +-
>  roms/openbios | 2 +-
>  roms/opensbi  | 2 +-
>  roms/seabios  | 2 +-
>  slirp | 2 +-
>  6 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/dtc b/dtc
> index 85e5d83984..88f18909db 16
> --- a/dtc
> +++ b/dtc
> @@ -1 +1 @@
> -Subproject commit 85e5d839847af54efab170f2b1331b2a6421e647
> +Subproject commit 88f18909db731a627456f26d779445f84e449536
> diff --git a/roms/SLOF b/roms/SLOF
> index e18ddad851..9546892a80 16
> --- a/roms/SLOF
> +++ b/roms/SLOF
> @@ -1 +1 @@
> -Subproject commit e18ddad8516ff2cfe36ec130200318f7251aa78c
> +Subproject commit 9546892a80d5a4c73deea6719de46372f007f4a6
> diff --git a/roms/openbios b/roms/openbios
> index 75fbb41d28..7e5b89e429 16
> --- a/roms/openbios
> +++ b/roms/openbios
> @@ -1 +1 @@
> -Subproject commit 75fbb41d2857d93208c74a8e0228b29fd7bf04c0
> +Subproject commit 7e5b89e4295063d8eba55b9c8ce8bc681c2d129a
> diff --git a/roms/opensbi b/roms/opensbi
> index 9f1b72ce66..be92da280d 16
> --- a/roms/opensbi
> +++ b/roms/opensbi
> @@ -1 +1 @@
> -Subproject commit 9f1b72ce66d659e91013b358939e832fb27223f5
> +Subproject commit be92da280d87c38a2e0adc5d3f43bab7b5468f09
> diff --git a/roms/seabios b/roms/seabios
> index 88ab0c1552..f21b5a4aeb 16
> --- a/roms/seabios
> +++ b/roms/seabios
> @@ -1 +1 @@
> -Subproject commit 88ab0c15525ced2eefe39220742efe4769089ad8
> +Subproject commit f21b5a4aeb020f2a5e2c6503f906a9349dd2f069
> diff --git a/slirp b/slirp
> index 2faae0f778..126c04acba 16
> --- a/slirp
> +++ b/slirp
> @@ -1 +1 @@
> -Subproject commit 2faae0f778f818fadc873308f983289df697eb93
> +Subproject commit 126c04acbabd7ad32c2b018fe10dfac2a3bc1210
> 




Re: [PATCH v2 3/4] iotests: Add more qemu_img helpers

2020-07-28 Thread Nir Soffer
On Tue, Jul 28, 2020 at 4:50 PM Vladimir Sementsov-Ogievskiy
 wrote:
>
> 28.07.2020 00:58, Nir Soffer wrote:
> > Add 2 helpers for measuring and checking images:
> > - qemu_img_measure()
> > - qemu_img_check()
> >
> > Both use --output json and parse the returned JSON to make it easy to use
> > in other tests. I'm going to use them in a new test, and I hope they
> > will be useful in many other tests.
> >
> > Signed-off-by: Nir Soffer 
> > ---
> >   tests/qemu-iotests/iotests.py | 6 ++
> >   1 file changed, 6 insertions(+)
> >
> > diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
> > index 8f79668435..717b5b652c 100644
> > --- a/tests/qemu-iotests/iotests.py
> > +++ b/tests/qemu-iotests/iotests.py
> > @@ -141,6 +141,12 @@ def qemu_img_create(*args):
> >
> >   return qemu_img(*args)
> >
> > +def qemu_img_measure(*args):
> > +return json.loads(qemu_img_pipe("measure", "--output", "json", *args))
> > +
> > +def qemu_img_check(*args):
> > +return json.loads(qemu_img_pipe("check", "--output", "json", *args))
> > +
>
> qemu_img_pipe has type hints, so I assume we should add them here too.

True, but type hints are not used consistently in this module (e.g.
qemu_img_verbose).

>
> Also, qemu-img don't report errors in json format, so in case of error, this 
> will raise a problem about something that json can't parse. Probably we need 
> better error handling.

Yes, this fails in an ugly and unhelpful way now.

Ideally a failing command will raise a detailed error with the command,
exit code, output, and error. Code that wants to check for a specific
return code would do:

try:
iotests.qemu_img_check(disk)
except iotest.Error as e:
if e.rc == 2:
   ...

But most callers do not need this, so they will fail loudly with all the details.

What do you think?
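A minimal sketch of that idea, with the type hints mentioned above (QemuImgError and qemu_img_json are illustrative names, not the existing iotests API; the exact fields are an assumption):

```python
import json
import subprocess
from typing import Any, List


class QemuImgError(Exception):
    """Carries the failed command, exit code, stdout and stderr."""
    def __init__(self, cmd: List[str], returncode: int,
                 stdout: str, stderr: str) -> None:
        self.cmd = cmd
        self.returncode = returncode
        self.stdout = stdout
        self.stderr = stderr
        super().__init__(f"{' '.join(cmd)} failed with rc={returncode}: "
                         f"{stderr.strip()}")


def qemu_img_json(subcmd: str, *args: str) -> Any:
    """Run a qemu-img subcommand with JSON output and parse the result,
    raising QemuImgError with full details on failure."""
    cmd = ["qemu-img", subcmd, "--output", "json", *args]
    res = subprocess.run(cmd, capture_output=True, text=True)
    if res.returncode != 0:
        raise QemuImgError(cmd, res.returncode, res.stdout, res.stderr)
    return json.loads(res.stdout)
```

A caller that cares about a particular exit code can catch QemuImgError and inspect returncode; everyone else fails loudly with the full command line and stderr in the traceback.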

> Still, for 5.1 it's OK as is I think, so if we are in a hurry:
> Reviewed-by: Vladimir Sementsov-Ogievskiy 
>
> >   def qemu_img_verbose(*args):
> >   '''Run qemu-img without suppressing its output and return the exit 
> > code'''
> >   exitcode = subprocess.call(qemu_img_args + list(args))
> >
>
>
> --
> Best regards,
> Vladimir
>




Re: [PATCH v2 0/2] QEMU Gating CI

2020-07-28 Thread Cleber Rosa
On Tue, Jul 28, 2020 at 05:08:50PM +0100, Peter Maydell wrote:
> On Tue, 28 Jul 2020 at 16:51, Cleber Rosa  wrote:
> >
> > On Tue, Jul 28, 2020 at 03:48:38PM +0100, Peter Maydell wrote:
> > > On Mon, 20 Jul 2020 at 18:22, Cleber Rosa  wrote:
> > > > Sure.  It's important that PATCH 2/2 in this series is included in a
> > > > branch that you need to push to the "staging" branch on the
> > > > https://gitlab.com/qemu-project/qemu repo (it could be just that one
> > > > patch).  Then, you can run:
> > > >
> > > >   ./scripts/ci/gitlab-pipeline-status --verbose -w
> > > >
> > > > And that should be it.  You can drop '--verbose' if you just want the
> > > > final outcome as the result.
> > >
> > > I tried this (local branch named "staging", pushed to gitlab
> > > remote "staging" branch), but it said:
> > >
> > > e104462:bionic:qemu$ ./scripts/ci/gitlab-pipeline-status --verbose -w
> > > ERROR: No pipeline found
> > > failure
> > >
> >
> > Hi Peter,
> >
> > I think this may just have been a timing issue.  GitLab usually does
> > take a few seconds after it receives a branch push to create a
> > pipeline.  Let me know if you'd like to see this within the script, or
> > if you'd rather put a sleep between your push and the
> > "gitlab-pipeline-status" execution.
> 
> Ah, right. I ran the command again and it does (eventually)
> print "running...". I think the ideal behaviour would be for
> the script to have some kind of "waiting for pipeline to start..."
> phase where it sits and polls for the pipeline to appear,
> with a pretty long timeout (minutes?).
>

Fair enough.  I'll send a patch to change the script behavior.

> > > It does seem to have kicked off the pipeline on gitlab though:
> > > https://gitlab.com/qemu-project/qemu/-/pipelines/171671136/builds
> >
> > There's already new content on the staging branch, but supposing my local
> > staging branch contained commit 6e7c2dcb50907aa6be0cbc37f81801d2fa67f7b4
> > (https://gitlab.com/qemu-project/qemu/-/commit/6e7c2dcb50907aa6be0cbc37f81801d2fa67f7b4),
> > the command you ran:
> >
> >   ./scripts/ci/gitlab-pipeline-status --verbose -w
> >
> > Should have behaved as this (output from my machine):
> >
> >   /scripts/ci/gitlab-pipeline-status --verbose -w -c 
> > 6e7c2dcb50907aa6be0cbc37f81801d2fa67f7b4
> >   running...
> >
> > > OTOH I can't see anything on that web page that suggests that
> > > it's submitting jobs to the s390 or aarch64 boxes -- is it
> > > intended to?
> > >
> >
> > All the jobs for that pipeline have been created as expected, for
> > instance:
> >
> >https://gitlab.com/qemu-project/qemu/-/jobs/659874849
> >
> > But given the recent changes to the GitLab YAML adding other phases,
> > it's waiting for the previous phases.
> 
> The page now says "This job has been skipped"...
>

I saw that, and I was very disappointed... I double checked the
machines, the runners status, tag names and they all seem to be OK.

So, I think the reason for the skip (there's an open issue on GitLab
itself about not communicating to users the reason) is that GitLab
does a late evaluation of the job condition.  For those jobs the
condition is:

   rules:
   - if: '$CI_COMMIT_REF_NAME == "staging"'

Which by the time the job was evaluated it was no longer true (there
was new content on the staging branch).  There are multiple ways to
solve the problem, including (and in my order of preference):

 1. using '$CI_COMMIT_BRANCH' instead of '$CI_COMMIT_REF_NAME', given
that the pushed branch name should be kept stable even if the content
(thus reference name) changes

 2. not changing anything if you believe that under normal
circumstances only one pipeline for the staging branch will be running at a
time.

I'll prepare a new version with #1, unless you have a strong feeling
against it.

- Cleber.

> thanks
> -- PMM
> 




Re: [PATCH 1/2] hw/net/net_tx_pkt: add function to check pkt->max_raw_frags

2020-07-28 Thread Mauro Matteo Cascella
On Tue, Jul 28, 2020 at 6:06 AM Jason Wang  wrote:
>
>
> On 2020/7/28 上午1:08, Mauro Matteo Cascella wrote:
> > This patch introduces a new function in hw/net/net_tx_pkt.{c,h} to check the
> > current data fragment against the maximum number of data fragments.
>
>
> I wonder whether it's better to do the check in
> net_tx_pkt_add_raw_fragment() and fail there.

Given the assertion, I assumed the caller is responsible for the
check, but moving the check into net_tx_pkt_add_raw_fragment() totally
makes sense to me.

> Btw, I find net_tx_pkt_add_raw_fragment() does not unmap dma when
> returning true, is this a bug?

Isn't it unmapped in net_tx_pkt_reset()?

> Thanks
>
>
> >
> > Reported-by: Ziming Zhang 
> > Signed-off-by: Mauro Matteo Cascella 
> > ---
> >   hw/net/net_tx_pkt.c | 5 +
> >   hw/net/net_tx_pkt.h | 8 
> >   2 files changed, 13 insertions(+)
> >
> > diff --git a/hw/net/net_tx_pkt.c b/hw/net/net_tx_pkt.c
> > index 9560e4a49e..d035618f2c 100644
> > --- a/hw/net/net_tx_pkt.c
> > +++ b/hw/net/net_tx_pkt.c
> > @@ -400,6 +400,11 @@ bool net_tx_pkt_add_raw_fragment(struct NetTxPkt *pkt, 
> > hwaddr pa,
> >   }
> >   }
> >
> > +bool net_tx_pkt_exceed_max_fragments(struct NetTxPkt *pkt)
> > +{
> > +return pkt->raw_frags >= pkt->max_raw_frags;
> > +}
> > +
> >   bool net_tx_pkt_has_fragments(struct NetTxPkt *pkt)
> >   {
> >   return pkt->raw_frags > 0;
> > diff --git a/hw/net/net_tx_pkt.h b/hw/net/net_tx_pkt.h
> > index 4ec8bbe9bd..e2ee46ae03 100644
> > --- a/hw/net/net_tx_pkt.h
> > +++ b/hw/net/net_tx_pkt.h
> > @@ -179,6 +179,14 @@ bool net_tx_pkt_send_loopback(struct NetTxPkt *pkt, 
> > NetClientState *nc);
> >*/
> >   bool net_tx_pkt_parse(struct NetTxPkt *pkt);
> >
> > +/**
> > +* indicates if the current data fragment exceeds max_raw_frags
> > +*
> > +* @pkt:packet
> > +*
> > +*/
> > +bool net_tx_pkt_exceed_max_fragments(struct NetTxPkt *pkt);
> > +
> >   /**
> >   * indicates if there are data fragments held by this packet object.
> >   *
>

-- 
Mauro Matteo Cascella, Red Hat Product Security
6F78 E20B 5935 928C F0A8  1A9D 4E55 23B8 BB34 10B0




Re: [PATCH v2 0/2] QEMU Gating CI

2020-07-28 Thread Cleber Rosa
On Tue, Jul 28, 2020 at 05:15:17PM +0100, Daniel P. Berrangé wrote:
> On Tue, Jul 28, 2020 at 12:13:06PM -0400, Cleber Rosa wrote:
> > On Tue, Jul 28, 2020 at 03:51:34PM +0100, Daniel P. Berrangé wrote:
> > > On Tue, Jul 28, 2020 at 03:48:38PM +0100, Peter Maydell wrote:
> > > > On Mon, 20 Jul 2020 at 18:22, Cleber Rosa  wrote:
> > > > > Sure.  It's important that PATCH 2/2 in this series is included in a
> > > > > branch that you need to push to the "staging" branch on the
> > > > > https://gitlab.com/qemu-project/qemu repo (it could be just that one
> > > > > patch).  Then, you can run:
> > > > >
> > > > >   ./scripts/ci/gitlab-pipeline-status --verbose -w
> > > > >
> > > > > And that should be it.  You can drop '--verbose' if you just want the
> > > > > final outcome as the result.
> > > > 
> > > > I tried this (local branch named "staging", pushed to gitlab
> > > > remote "staging" branch), but it said:
> > > > 
> > > > e104462:bionic:qemu$ ./scripts/ci/gitlab-pipeline-status --verbose -w
> > > > ERROR: No pipeline found
> > > > failure
> > > > 
> > > > It does seem to have kicked off the pipeline on gitlab though:
> > > > https://gitlab.com/qemu-project/qemu/-/pipelines/171671136/builds
> > > > OTOH I can't see anything on that web page that suggests that
> > > > it's submitting jobs to the s390 or aarch64 boxes -- is it
> > > > intended to?
> > > 
> > > It looks like those jobs are all in the "test" stage of the pipeline, so
> > > it is waiting for the earlier stages to complete before running the jobs.
> > >
> > 
> > Hi Daniel,
> > 
> > Right.  IIUC the criteria for putting jobs in the test stage at the
> > moment is "if they include running tests (in addition to builds) they
> > should be in test".  Looking at that from this perspective, they're in
> > the right place.
> > 
> > But, these jobs don't depend on anything else, including container
> > builds, so there's no reason to have them wait for so long to run.
> > The solution may be to rename the first stage (containers) to something
> > more generic (and unfortunately less descriptive) so that all of them
> > will run concurrently and earlier.
> 
> Just add 'needs: []'  to any jobs that don't depend on earlier jobs.
>

Great, thanks!

Long version: I remember something like this was possible (which I
mentioned in the other reply), but the documentation about stages
(https://docs.gitlab.com/ee/ci/yaml/#stages) made me lose hope when
writing this reply.  Thanks again!

- Cleber.

> Regards,
> Daniel
> -- 
> |: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o-https://fstop138.berrange.com :|
> |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH v1 2/2] target/arm: only set the nexttick timer if !ISTATUS

2020-07-28 Thread Peter Maydell
On Tue, 28 Jul 2020 at 17:11, Alex Bennée  wrote:
>
>
> Peter Maydell  writes:
>
> > On Tue, 28 Jul 2020 at 15:10, Alex Bennée  wrote:
> >>
> >> Otherwise we have an unfortunate interaction with -count sleep=off
> >> which means we fast forward time when we don't need to. The easiest
> >> way to trigger it was to attach to the gdbstub and place a break point
> >> at the timers IRQ routine. Once the timer fired setting the next event
> >> at INT_MAX then qemu_start_warp_timer would skip to the end.
> >>
> >> Signed-off-by: Alex Bennée 
> >> ---

> >>  if (istatus) {
> >> -/* Next transition is when count rolls back over to zero */
> >> -nexttick = UINT64_MAX;
> >> +/*
> >> + * The IRQ status of the timer will persist until:
> >> + *   - CVAL is changed or
> >> + *   - ENABLE is changed
> >> + *
> >> + * There is no point re-arming the timer for some far
> >> + * flung future - currently it just is.
> >> + */
> >> +timer_del(cpu->gt_timer[timeridx]);
> >
> > Why do we delete the timer for this case of "next time we need to
> > know is massively in the future"...
>
> It's not really - it's happening now and it will continue to happen
> until the IRQ is serviced or we change the CVAL at which point we can
> calculate the next time we need it.

It is far-future: the counter can roll all the way round and back over
to zero, as the old comment states. (Other reasons for things to change
get handled via other code paths: it's only the "at some defined time
in the future a change happens" cases that we need to set a timer for).
It's the same situation as below, I think (though I don't know why we
used UINT64_MAX for one and INT64_MAX for the other).

-- PMM



Re: [PULL 0/3] Block patches for 5.1

2020-07-28 Thread Thomas Huth
On 28/07/2020 18.12, Peter Maydell wrote:
> On Tue, 28 Jul 2020 at 11:19, Peter Maydell  wrote:
>>
>> On Mon, 27 Jul 2020 at 15:38, Max Reitz  wrote:
>>> 
>>> Block patches for 5.1:
>>> - Coverity fix
>>> - iotests fix for rx and avr
>>> - iotests fix for qcow2 -o compat=0.10
>>>
>>
>> Applied, thanks.
> 
> This seems to have broken the "tcg disabled" build on gitlab:
> https://gitlab.com/qemu-project/qemu/-/jobs/659352096

Max already sent another pull request that contains the fix for this
issue, look for "[PULL 0/3] Block patches for 5.1.0-rc2?"

 Thomas




Re: [PULL 08/16] linux-user: don't use MAP_FIXED in pgd_find_hole_fallback

2020-07-28 Thread Peter Maydell
On Tue, 28 Jul 2020 at 17:04, Alex Bennée  wrote:
> Peter Maydell  writes:
> > Hi; Coverity thinks this conditional expression is suspicious
> > (CID 1431059):
> >
> >>  if (mmap_start != MAP_FAILED) {
> >>  munmap((void *) align_start, guest_size);
> >> -return (uintptr_t) mmap_start + offset;
> >> +if (MAP_FIXED_NOREPLACE || mmap_start == (void *) 
> >> align_start) {
> >
> > because it's performing a logical OR operation where the left
> > operand is an integer constant that's neither 0 nor 1
> > (it's 1048576). What was this intended to be?
>
> It's 0 if the header doesn't provide it. If it's !0 we don't need to
> check the address because it should have been in the correct place.

OK. "if (MAP_FIXED_NOREPLACE != 0 || ...)" will probably satisfy
Coverity then.

-- PMM



Re: [PATCH v2 0/2] QEMU Gating CI

2020-07-28 Thread Daniel P . Berrangé
On Tue, Jul 28, 2020 at 12:13:06PM -0400, Cleber Rosa wrote:
> On Tue, Jul 28, 2020 at 03:51:34PM +0100, Daniel P. Berrangé wrote:
> > On Tue, Jul 28, 2020 at 03:48:38PM +0100, Peter Maydell wrote:
> > > On Mon, 20 Jul 2020 at 18:22, Cleber Rosa  wrote:
> > > > Sure.  It's important that PATCH 2/2 in this series is included in a
> > > > branch that you need to push to the "staging" branch on the
> > > > https://gitlab.com/qemu-project/qemu repo (it could be just that one
> > > > patch).  Then, you can run:
> > > >
> > > >   ./scripts/ci/gitlab-pipeline-status --verbose -w
> > > >
> > > > And that should be it.  You can drop '--verbose' if you just want the
> > > > final outcome as the result.
> > > 
> > > I tried this (local branch named "staging", pushed to gitlab
> > > remote "staging" branch), but it said:
> > > 
> > > e104462:bionic:qemu$ ./scripts/ci/gitlab-pipeline-status --verbose -w
> > > ERROR: No pipeline found
> > > failure
> > > 
> > > It does seem to have kicked off the pipeline on gitlab though:
> > > https://gitlab.com/qemu-project/qemu/-/pipelines/171671136/builds
> > > OTOH I can't see anything on that web page that suggests that
> > > it's submitting jobs to the s390 or aarch64 boxes -- is it
> > > intended to?
> > 
> > It looks like those jobs are all in the "test" stage of the pipeline, so
> > it is waiting for the earlier stages to complete before running the jobs.
> >
> 
> Hi Daniel,
> 
> Right.  IIUC the criteria for putting jobs in the test stage at the
> moment is "if they include running tests (in addition to builds) they
> should be in test".  Looking at that from this perspective, they're in
> the right place.
> 
> But, these jobs don't depend on anything else, including container
> builds, so there's no reason to have them wait for so long to run.
> The solution may be to rename the first stage (containers) to something
> more generic (and unfortunately less descriptive) so that all of them
> will run concurrently and earlier.

Just add 'needs: []'  to any jobs that don't depend on earlier jobs.
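A minimal sketch of that in the GitLab CI YAML (the job name and script here are hypothetical, not QEMU's actual configuration):

```yaml
# A job with an empty "needs" list starts immediately, regardless of
# stage ordering, instead of waiting for earlier stages to finish.
build-test-custom-runner:
  stage: test
  needs: []        # do not wait for the containers/build stages
  script:
    - make check
```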

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PULL V2 0/3] Net patches

2020-07-28 Thread Peter Maydell
On Tue, 28 Jul 2020 at 10:10, Jason Wang  wrote:
>
> The following changes since commit 93ea484375ab473379dd9c836261ef484bd71ab1:
>
>   Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging 
> (2020-07-27 21:00:01 +0100)
>
> are available in the git repository at:
>
>   https://github.com/jasowang/qemu.git tags/net-pull-request
>
> for you to fetch changes up to 22dc8663d9fc7baa22100544c600b6285a63c7a3:
>
>   net: forbid the reentrant RX (2020-07-28 16:57:58 +0800)
>
> 
> Want to send earlier but most patches just come.
>
> - fix vhost-vdpa issues when no peer
> - fix virtio-pci queue enabling index value
> - forbid reentrant RX
>
> Changes from V1:
>
> - drop the patch that has been merged
>
> 
> Jason Wang (2):
>   virtio-net: check the existence of peer before accessing vDPA config
>   net: forbid the reentrant RX
>
> Yuri Benditovich (1):
>   virtio-pci: fix wrong index in virtio_pci_queue_enabled


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/5.1
for any user-visible changes.

-- PMM



Re: [PATCH v2 0/2] QEMU Gating CI

2020-07-28 Thread Cleber Rosa
On Tue, Jul 28, 2020 at 03:51:34PM +0100, Daniel P. Berrangé wrote:
> On Tue, Jul 28, 2020 at 03:48:38PM +0100, Peter Maydell wrote:
> > On Mon, 20 Jul 2020 at 18:22, Cleber Rosa  wrote:
> > > Sure.  It's important that PATCH 2/2 in this series is included in a
> > > branch that you need to push to the "staging" branch on the
> > > https://gitlab.com/qemu-project/qemu repo (it could be just that one
> > > patch).  Then, you can run:
> > >
> > >   ./scripts/ci/gitlab-pipeline-status --verbose -w
> > >
> > > And that should be it.  You can drop '--verbose' if you just want the
> > > final outcome as the result.
> > 
> > I tried this (local branch named "staging", pushed to gitlab
> > remote "staging" branch), but it said:
> > 
> > e104462:bionic:qemu$ ./scripts/ci/gitlab-pipeline-status --verbose -w
> > ERROR: No pipeline found
> > failure
> > 
> > It does seem to have kicked off the pipeline on gitlab though:
> > https://gitlab.com/qemu-project/qemu/-/pipelines/171671136/builds
> > OTOH I can't see anything on that web page that suggests that
> > it's submitting jobs to the s390 or aarch64 boxes -- is it
> > intended to?
> 
> It looks like those jobs are all in the "test" stage of the pipeline, so
> it is waiting for the earlier stages to complete before running the jobs.
>

Hi Daniel,

Right.  IIUC the criteria for putting jobs in the test stage at the
moment is "if they include running tests (in addition to builds) they
should be in test".  Looking at that from this perspective, they're in
the right place.

But, these jobs don't depend on anything else, including container
builds, so there's no reason to have them wait for so long to run.
The solution may be to rename the first stage (containers) to something
more generic (and unfortunately less descriptive) so that all of them
will run concurrently and earlier.

Thoughts?
- Cleber.

> Regards,
> Daniel
> -- 
> |: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o-https://fstop138.berrange.com :|
> |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PULL 0/3] Block patches for 5.1

2020-07-28 Thread Peter Maydell
On Tue, 28 Jul 2020 at 11:19, Peter Maydell  wrote:
>
> On Mon, 27 Jul 2020 at 15:38, Max Reitz  wrote:
> > 
> > Block patches for 5.1:
> > - Coverity fix
> > - iotests fix for rx and avr
> > - iotests fix for qcow2 -o compat=0.10
> >
>
> Applied, thanks.

This seems to have broken the "tcg disabled" build on gitlab:
https://gitlab.com/qemu-project/qemu/-/jobs/659352096


197  fail   [10:57:48] [10:59:34]
   output mismatch (see 197.out.bad)
--- /builds/qemu-project/qemu/tests/qemu-iotests/197.out 2020-07-28
10:47:16.0 +
+++ /builds/qemu-project/qemu/build/tests/qemu-iotests/197.out.bad
2020-07-28 10:59:33.0 +
@@ -26,9 +26,9 @@

 === Partial final cluster ===

-Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1024
+qemu-img: TEST_DIR/t.IMGFMT: Invalid parameter 'compat'
 read 1024/1024 bytes at offset 0
 1 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-1 KiB (0x400) bytes allocated at offset 0 bytes (0x0)
+4 GiB (0x100000000) bytes allocated at offset 0 bytes (0x0)
 No errors were found on the image.
 *** done


thanks
-- PMM



Re: [PATCH v1 2/2] target/arm: only set the nexttick timer if !ISTATUS

2020-07-28 Thread Alex Bennée


Peter Maydell  writes:

> On Tue, 28 Jul 2020 at 15:10, Alex Bennée  wrote:
>>
>> Otherwise we have an unfortunate interaction with -count sleep=off
>> which means we fast forward time when we don't need to. The easiest
>> way to trigger it was to attach to the gdbstub and place a break point
>> at the timers IRQ routine. Once the timer fired setting the next event
>> at INT_MAX then qemu_start_warp_timer would skip to the end.
>>
>> Signed-off-by: Alex Bennée 
>> ---
>>  target/arm/helper.c | 35 ++-
>>  1 file changed, 22 insertions(+), 13 deletions(-)
>>
>> diff --git a/target/arm/helper.c b/target/arm/helper.c
>> index c69a2baf1d3..ec1b84cf0fd 100644
>> --- a/target/arm/helper.c
>> +++ b/target/arm/helper.c
>> @@ -2683,7 +2683,7 @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
>>  uint64_t count = gt_get_countervalue(>env);
>>  /* Note that this must be unsigned 64 bit arithmetic: */
>>  int istatus = count - offset >= gt->cval;
>> -uint64_t nexttick;
>> +uint64_t nexttick = 0;
>>  int irqstate;
>>
>>  gt->ctl = deposit32(gt->ctl, 2, 1, istatus);
>> @@ -2692,21 +2692,30 @@ static void gt_recalc_timer(ARMCPU *cpu, int 
>> timeridx)
>>  qemu_set_irq(cpu->gt_timer_outputs[timeridx], irqstate);
>>
>>  if (istatus) {
>> -/* Next transition is when count rolls back over to zero */
>> -nexttick = UINT64_MAX;
>> +/*
>> + * The IRQ status of the timer will persist until:
>> + *   - CVAL is changed or
>> + *   - ENABLE is changed
>> + *
>> + * There is no point re-arming the timer for some far
>> + * flung future - currently it just is.
>> + */
>> +timer_del(cpu->gt_timer[timeridx]);
>
> Why do we delete the timer for this case of "next time we need to
> know is massively in the future"...

It's not really - it's happening now and it will continue to happen
until the IRQ is serviced or we change the CVAL at which point we can
calculate the next time we need it.

>
>>  } else {
>>  /* Next transition is when we hit cval */
>>  nexttick = gt->cval + offset;
>> -}
>> -/* Note that the desired next expiry time might be beyond the
>> - * signed-64-bit range of a QEMUTimer -- in this case we just
>> - * set the timer for as far in the future as possible. When the
>> - * timer expires we will reset the timer for any remaining period.
>> - */
>> -if (nexttick > INT64_MAX / gt_cntfrq_period_ns(cpu)) {
>> -timer_mod_ns(cpu->gt_timer[timeridx], INT64_MAX);
>> -} else {
>> -timer_mod(cpu->gt_timer[timeridx], nexttick);
>> +
>> +/*
>> + * It is possible the next tick is beyond the
>> + * signed-64-bit range of a QEMUTimer but currently the
>> + * timer system doesn't support a run time of more the 292
>> + * odd years so we set it to INT_MAX in this case.
>> + */
>> +if (nexttick > INT64_MAX / gt_cntfrq_period_ns(cpu)) {
>> +timer_mod_ns(cpu->gt_timer[timeridx], INT64_MAX);
>
> ...but here we handle the similar case by "set a timeout for
> INT64_MAX" ?

Yeah we could just swallow it up and report something to say it's not
going to happen because it's beyond the horizon of what QEMUTimer can
deal with.

>
>> +} else {
>> +timer_mod(cpu->gt_timer[timeridx], nexttick);
>> +}
>>  }
>>  trace_arm_gt_recalc(timeridx, irqstate, nexttick);
>>  } else {
>> --
>
> thanks
> -- PMM


-- 
Alex Bennée



Re: [PATCH v7 01/47] block: Add child access functions

2020-07-28 Thread Christophe de Dinechin


On 2020-07-13 at 11:06 CEST, Vladimir Sementsov-Ogievskiy wrote...
> 25.06.2020 18:21, Max Reitz wrote:
>> There are BDS children that the general block layer code can access,
>> namely bs->file and bs->backing.  Since the introduction of filters and
>> external data files, their meaning is not quite clear.  bs->backing can
>> be a COW source, or it can be a filtered child; bs->file can be a
>> filtered child, it can be data and metadata storage, or it can be just
>> metadata storage.
>>
>> This overloading really is not helpful.  This patch adds functions that
>> retrieve the correct child for each exact purpose.  Later patches in
>> this series will make use of them.  Doing so will allow us to handle
>> filter nodes in a meaningful way.
>>
>> Signed-off-by: Max Reitz 
>> ---
>
> [..]
>
>> +/*
>> + * Return the primary child of this node: For filters, that is the
>> + * filtered child.  For other nodes, that is usually the child storing
>> + * metadata.
>> + * (A generally more helpful description is that this is (usually) the
>> + * child that has the same filename as @bs.)
>> + *
>> + * Drivers do not necessarily have a primary child; for example quorum
>> + * does not.
>> + */
>> +BdrvChild *bdrv_primary_child(BlockDriverState *bs)
>> +{
>> +BdrvChild *c;
>> +
>> +QLIST_FOREACH(c, >children, next) {
>> +if (c->role & BDRV_CHILD_PRIMARY) {
>> +return c;
>> +}
>> +}
>> +
>> +return NULL;
>> +}
>>
>
> Suggest squash-in to also assert that not more than one primary child:
> --- a/block.c
> +++ b/block.c
> @@ -6998,13 +6998,14 @@ BdrvChild *bdrv_filter_or_cow_child(BlockDriverState 
> *bs)
>*/
>   BdrvChild *bdrv_primary_child(BlockDriverState *bs)
>   {
> -BdrvChild *c;
> +BdrvChild *c, *found = NULL;
>
>   QLIST_FOREACH(c, >children, next) {
>   if (c->role & BDRV_CHILD_PRIMARY) {
> -return c;
> +assert(!found);
> +found = c;
>   }
>   }
>
> -return NULL;
> +return c;

Shouldn't that be "return found"?
>   }
>
>
> with or without:
> Reviewed-by: Vladimir Sementsov-Ogievskiy 


--
Cheers,
Christophe de Dinechin (IRC c3d)




Re: [PATCH v2 0/2] QEMU Gating CI

2020-07-28 Thread Peter Maydell
On Tue, 28 Jul 2020 at 16:51, Cleber Rosa  wrote:
>
> On Tue, Jul 28, 2020 at 03:48:38PM +0100, Peter Maydell wrote:
> > On Mon, 20 Jul 2020 at 18:22, Cleber Rosa  wrote:
> > > Sure.  It's important that PATCH 2/2 in this series is included in a
> > > branch that you need to push to the "staging" branch on the
> > > https://gitlab.com/qemu-project/qemu repo (it could be just that one
> > > patch).  Then, you can run:
> > >
> > >   ./scripts/ci/gitlab-pipeline-status --verbose -w
> > >
> > > And that should be it.  You can drop '--verbose' if you just want the
> > > final outcome as the result.
> >
> > I tried this (local branch named "staging", pushed to gitlab
> > remote "staging" branch), but it said:
> >
> > e104462:bionic:qemu$ ./scripts/ci/gitlab-pipeline-status --verbose -w
> > ERROR: No pipeline found
> > failure
> >
>
> Hi Peter,
>
> I think this may just have been a timing issue.  GitLab usually does
> take a few seconds after it receives a branch push to create a
> pipeline.  Let me know if you'd like to see this within the script, or
> if you'd rather put a sleep between your push and the
> "gitlab-pipeline-status" execution.

Ah, right. I ran the command again and it does (eventually)
print "running...". I think the ideal behaviour would be for
the script to have some kind of "waiting for pipeline to start..."
phase where it sits and polls for the pipeline to appear,
with a pretty long timeout (minutes?).

> > It does seem to have kicked off the pipeline on gitlab though:
> > https://gitlab.com/qemu-project/qemu/-/pipelines/171671136/builds
>
> There's already new content on the staging branch, but supposing my local
> staging branch contained commit 6e7c2dcb50907aa6be0cbc37f81801d2fa67f7b4
> (https://gitlab.com/qemu-project/qemu/-/commit/6e7c2dcb50907aa6be0cbc37f81801d2fa67f7b4),
> the command you ran:
>
>   ./scripts/ci/gitlab-pipeline-status --verbose -w
>
> Should have behaved as this (output from my machine):
>
>   /scripts/ci/gitlab-pipeline-status --verbose -w -c 
> 6e7c2dcb50907aa6be0cbc37f81801d2fa67f7b4
>   running...
>
> > OTOH I can't see anything on that web page that suggests that
> > it's submitting jobs to the s390 or aarch64 boxes -- is it
> > intended to?
> >
>
> All the jobs for that pipeline have been created as expected, for
> instance:
>
>https://gitlab.com/qemu-project/qemu/-/jobs/659874849
>
> But given the recent changes to the GitLab YAML adding other phases,
> it's waiting for the previous phases.

The page now says "This job has been skipped"...

thanks
-- PMM



Re: [PATCH v1 1/2] qemu-timer: gracefully handle the end of time

2020-07-28 Thread Alex Bennée


Paolo Bonzini  writes:

> On 28/07/20 16:10, Alex Bennée wrote:
>> +/*
>> + * Check to see if we have run out of time. Most of our time
>> + * sources are nanoseconds since epoch (some time around the fall
>> + * of Babylon 5, the start of the Enterprises five year mission
>> + * and just before the arrival of the great evil ~ 2262CE).
>> + * Although icount based time is ns since the start of emulation
>> + * it is able to skip forward if the device is sleeping (think IoT
>> + * device with a very long heartbeat). Either way we don't really
>> + * handle running out of time so lets catch it and report it here.
>> + */
>> +if (current_time == INT64_MAX) {
>> +qemu_handle_outa_time();
>> +goto out;
>> +}
>> +
>
> Doing this here is a bit dangerous, I'd rather do nothing here and
> detect the situation in cpus.c where we can do
> qemu_system_shutdown_request() (and also do nothing).

You mean in notify_aio_contexts()? Sure we can do that.

I also figured it might be worth cleaning up the return progress stuff
because AFAICT no one seems to care.

>
> Paolo


-- 
Alex Bennée



Re: [PATCH v2 2/4] iotests: Make qemu_nbd_popen() a contextmanager

2020-07-28 Thread Nir Soffer
On Tue, Jul 28, 2020 at 4:43 PM Vladimir Sementsov-Ogievskiy
 wrote:
>
> 28.07.2020 00:58, Nir Soffer wrote:
> > Instead of duplicating the code to wait until the server is ready and
> > remember to terminate the server and wait for it, make it possible to
> > use like this:
> >
> >  with qemu_nbd_popen('-k', sock, image):
> >  # Access image via qemu-nbd socket...
> >
> > Only test 264 used this helper, but I had to modify the output since it
> > did not log consistently when starting and stopping qemu-nbd.
> >
> > Signed-off-by: Nir Soffer 
> > ---
> >   tests/qemu-iotests/264| 76 +--
> >   tests/qemu-iotests/264.out|  2 +
> >   tests/qemu-iotests/iotests.py | 28 -
> >   3 files changed, 56 insertions(+), 50 deletions(-)
> >
> > diff --git a/tests/qemu-iotests/264 b/tests/qemu-iotests/264
> > index 304a7443d7..666f164ed8 100755
> > --- a/tests/qemu-iotests/264
> > +++ b/tests/qemu-iotests/264
> > @@ -36,48 +36,32 @@ wait_step = 0.2
> >
> >   qemu_img_create('-f', iotests.imgfmt, disk_a, str(size))
> >   qemu_img_create('-f', iotests.imgfmt, disk_b, str(size))
> > -srv = qemu_nbd_popen('-k', nbd_sock, '-f', iotests.imgfmt, disk_b)
> >
> > -# Wait for NBD server availability
> > -t = 0
> > -ok = False
> > -while t < wait_limit:
> > -ok = qemu_io_silent_check('-f', 'raw', '-c', 'read 0 512', nbd_uri)
> > -if ok:
> > -break
> > -time.sleep(wait_step)
> > -t += wait_step
> > +with qemu_nbd_popen('-k', nbd_sock, '-f', iotests.imgfmt, disk_b):
> > +vm = iotests.VM().add_drive(disk_a)
> > +vm.launch()
> > +vm.hmp_qemu_io('drive0', 'write 0 {}'.format(size))
> > +
> > +vm.qmp_log('blockdev-add', filters=[iotests.filter_qmp_testfiles],
> > +   **{'node_name': 'backup0',
> > +  'driver': 'raw',
> > +  'file': {'driver': 'nbd',
> > +   'server': {'type': 'unix', 'path': nbd_sock},
> > +   'reconnect-delay': 10}})
> > +vm.qmp_log('blockdev-backup', device='drive0', sync='full', 
> > target='backup0',
> > +   speed=(1 * 1024 * 1024))
> > +
> > +# Wait for some progress
> > +t = 0
> > +while t < wait_limit:
> > +jobs = vm.qmp('query-block-jobs')['return']
> > +if jobs and jobs[0]['offset'] > 0:
> > +break
> > +time.sleep(wait_step)
> > +t += wait_step
> >
> > -assert ok
> > -
> > -vm = iotests.VM().add_drive(disk_a)
> > -vm.launch()
> > -vm.hmp_qemu_io('drive0', 'write 0 {}'.format(size))
> > -
> > -vm.qmp_log('blockdev-add', filters=[iotests.filter_qmp_testfiles],
> > -   **{'node_name': 'backup0',
> > -  'driver': 'raw',
> > -  'file': {'driver': 'nbd',
> > -   'server': {'type': 'unix', 'path': nbd_sock},
> > -   'reconnect-delay': 10}})
> > -vm.qmp_log('blockdev-backup', device='drive0', sync='full', 
> > target='backup0',
> > -   speed=(1 * 1024 * 1024))
> > -
> > -# Wait for some progress
> > -t = 0
> > -while t < wait_limit:
> > -jobs = vm.qmp('query-block-jobs')['return']
> >   if jobs and jobs[0]['offset'] > 0:
> > -break
> > -time.sleep(wait_step)
> > -t += wait_step
> > -
> > -if jobs and jobs[0]['offset'] > 0:
> > -log('Backup job is started')
> > -
> > -log('Kill NBD server')
> > -srv.kill()
> > -srv.wait()
> > +log('Backup job is started')
> >
> >   jobs = vm.qmp('query-block-jobs')['return']
> >   if jobs and jobs[0]['offset'] < jobs[0]['len']:
> > @@ -88,12 +72,8 @@ vm.qmp_log('block-job-set-speed', device='drive0', 
> > speed=0)
> >   # Emulate server down time for 1 second
> >   time.sleep(1)
> >
> > -log('Start NBD server')
> > -srv = qemu_nbd_popen('-k', nbd_sock, '-f', iotests.imgfmt, disk_b)
> > -
> > -e = vm.event_wait('BLOCK_JOB_COMPLETED')
> > -log('Backup completed: {}'.format(e['data']['offset']))
> > -
> > -vm.qmp_log('blockdev-del', node_name='backup0')
> > -srv.kill()
> > -vm.shutdown()
> > +with qemu_nbd_popen('-k', nbd_sock, '-f', iotests.imgfmt, disk_b):
> > +e = vm.event_wait('BLOCK_JOB_COMPLETED')
> > +log('Backup completed: {}'.format(e['data']['offset']))
> > +vm.qmp_log('blockdev-del', node_name='backup0')
> > +vm.shutdown()
> > diff --git a/tests/qemu-iotests/264.out b/tests/qemu-iotests/264.out
> > index 3000944b09..c45b1e81ef 100644
> > --- a/tests/qemu-iotests/264.out
> > +++ b/tests/qemu-iotests/264.out
> > @@ -1,3 +1,4 @@
> > +Start NBD server
> >   {"execute": "blockdev-add", "arguments": {"driver": "raw", "file": 
> > {"driver": "nbd", "reconnect-delay": 10, "server": {"path": 
> > "TEST_DIR/PID-nbd-sock", "type": "unix"}}, "node-name": "backup0"}}
> >   {"return": {}}
> >   {"execute": "blockdev-backup", "arguments": {"device": "drive0", "speed": 
> > 1048576, "sync": "full", "target": "backup0"}}
> > @@ -11,3 +12,4 @@ Start NBD server
> >   Backup completed: 5242880
> >   

Re: [PULL 08/16] linux-user: don't use MAP_FIXED in pgd_find_hole_fallback

2020-07-28 Thread Alex Bennée


Peter Maydell  writes:

> On Mon, 27 Jul 2020 at 13:24, Alex Bennée  wrote:
>>
> >> Plain MAP_FIXED has the undesirable behaviour of splatting existing
>> maps so we don't actually achieve what we want when looking for gaps.
>> We should be using MAP_FIXED_NOREPLACE. As this isn't always available
>> we need to potentially check the returned address to see if the kernel
>> gave us what we asked for.
>>
>> Fixes: ad592e37dfc ("linux-user: provide fallback pgd_find_hole for bare 
>> chroots")
>> Signed-off-by: Alex Bennée 
>> Reviewed-by: Richard Henderson 
>> Message-Id: <20200724064509.331-9-alex.ben...@linaro.org>
>
> Hi; Coverity thinks this conditional expression is suspicious
> (CID 1431059):
>
>>  if (mmap_start != MAP_FAILED) {
>>  munmap((void *) align_start, guest_size);
>> -return (uintptr_t) mmap_start + offset;
> >> +if (MAP_FIXED_NOREPLACE || mmap_start == (void *) align_start) {
>
> because it's performing a logical OR operation where the left
> operand is an integer constant that's neither 0 nor 1
> (it's 1048576). What was this intended to be?

It's 0 if the header doesn't provide it. If it's !0 we don't need to
check the address because it should have been in the correct place.

>
>> +return (uintptr_t) mmap_start + offset;
>> +}
>>  }
>
> thanks
> -- PMM


-- 
Alex Bennée



Re: [PATCH v2 3/3] virtiofsd: probe unshare(CLONE_FS) and print an error

2020-07-28 Thread Daniel P . Berrangé
On Tue, Jul 28, 2020 at 09:12:50AM -0400, Vivek Goyal wrote:
> On Tue, Jul 28, 2020 at 12:00:20PM +0200, Roman Mohr wrote:
> > On Tue, Jul 28, 2020 at 3:07 AM misono.tomoh...@fujitsu.com <
> > misono.tomoh...@fujitsu.com> wrote:
> > 
> > > > Subject: [PATCH v2 3/3] virtiofsd: probe unshare(CLONE_FS) and print an
> > > error
> > > >
> > > > An assertion failure is raised during request processing if
> > > > unshare(CLONE_FS) fails. Implement a probe at startup so the problem can
> > > > be detected right away.
> > > >
> > > > Unfortunately Docker/Moby does not include unshare in the seccomp.json
> > > > list unless CAP_SYS_ADMIN is given. Other seccomp.json lists always
> > > > include unshare (e.g. podman is unaffected):
> > > >
> > > https://raw.githubusercontent.com/seccomp/containers-golang/master/seccomp.json
> > > >
> > > > Use "docker run --security-opt seccomp=path/to/seccomp.json ..." if the
> > > > default seccomp.json is missing unshare.
> > >
> > > Hi, sorry for a bit late.
> > >
> > > unshare() was added to fix xattr problem:
> > >
> > > https://github.com/qemu/qemu/commit/bdfd66788349acc43cd3f1298718ad491663cfcc#
> > > In theory we don't need to call unshare if xattr is disabled, but it is
> > > hard to get to know
> > > if xattr is enabled or disabled in fv_queue_worker(), right?
> > >
> > >
> > In kubevirt we want to run virtiofsd in containers. We would already not
> > have xattr support for e.g. overlayfs in the VM after this patch series (an
> > acceptable con at least for us right now).
> > If we can get rid of the unshare (and potentially of needing root) that
> > would be great. We always assume that everything which we run in containers
> > should work for cri-o and docker.
> 
> But cri-o and docker containers run as root, don't they? (or at least have
> the capability to run as root). Having said that, it would be nice to be able
> to run virtiofsd without root.
> 
> There are few hurdles though.
> 
> - For file creation, we switch uid/gid (seteuid/setegid) and that seems
>   to require root. If we were to run unprivileged, probably all files
>   on host will have to be owned by an unprivileged user and guest-visible
>   uid/gid will have to be stored in xattrs. I think virtfs supports
>   something similar.

I think I've mentioned before, 9p virtfs supports different modes,
passthrough, squashed or remapped.

passthrough should be reasonably straightforward to support in virtiofs.
The guest sees all the host UID/GIDs ownership as normal, and can read
any files the host user can read, but are obviously restricted to write
to only the files that host user can write too. No DAC-OVERRIDE facility
in essence. You'll just get EPERM, which is fine. This simple passthrough
scenario would be just what's desired for a typical desktop virt use
cases, where you want to share part/all of your home dir with a guest for
easy file access. Personally this is the mode I'd be most interested in
seeing provided for unprivileged virtiofsd usage.

squash is similar to passthrough, except the guest sees everything
as owned by the same user. This can be surprising as the guest might
see a file owned by them, but not be able to write to it, as on the
host its actually owned by some other user. Fairly niche use case
I think.

remapping would be needed for a more general purpose use cases
allowing the guest to do arbitrary UID/GID changes, but on the host
everything is still stored as one user and remapped somehow.

The main challenge for all the unprivileged scenarios is safety of
the sandbox, to avoid risk of guests escaping to access files outside
of the exported dir via symlink attacks or similar.



Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




QEMU | Pipeline #171670637 has failed for master | 26499151

2020-07-28 Thread GitLab via


Your pipeline has failed.

Project: QEMU ( https://gitlab.com/qemu-project/qemu )
Branch: master ( https://gitlab.com/qemu-project/qemu/-/commits/master )

Commit: 26499151 ( 
https://gitlab.com/qemu-project/qemu/-/commit/264991512193ee50e27d43e66f832d5041cf3b28
 )
Commit Message: Merge remote-tracking branch 'remotes/ericb/tag...
Commit Author: Peter Maydell ( https://gitlab.com/pm215 )

Pipeline #171670637 ( 
https://gitlab.com/qemu-project/qemu/-/pipelines/171670637 ) triggered by Alex 
Bennée ( https://gitlab.com/stsquad )
had 1 failed build.

Job #659872438 ( https://gitlab.com/qemu-project/qemu/-/jobs/659872438/raw )

Stage: build
Name: build-tcg-disabled
Trace: 192  ...[15:01:15] ...  
192  pass   [15:01:15] [15:01:16]   0s   
194  ...[15:01:16] ...  
194  pass   [15:01:16] [15:01:16]   0s   
197  ...[15:01:16] ...  
197  fail   [15:01:16] [15:02:58]
output mismatch (see 197.out.bad)
--- /builds/qemu-project/qemu/tests/qemu-iotests/197.out2020-07-28 
14:51:47.0 +
+++ /builds/qemu-project/qemu/build/tests/qemu-iotests/197.out.bad  
2020-07-28 15:02:57.0 +
@@ -26,9 +26,9 @@
 
 === Partial final cluster ===
 
-Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1024
+qemu-img: TEST_DIR/t.IMGFMT: Invalid parameter 'compat'
 read 1024/1024 bytes at offset 0
 1 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-1 KiB (0x400) bytes allocated at offset 0 bytes (0x0)
+4 GiB (0x100000000) bytes allocated at offset 0 bytes (0x0)
 No errors were found on the image.
 *** done
208  ...[15:02:58] ...  
208  pass   [15:02:58] [15:03:01]   3s   
215  ...[15:03:01] ...  
215  pass   [15:03:01] [15:04:30]  88s   
221  ...[15:04:30] ...  
221  pass   [15:04:30] [15:04:30]   0s   
222  ...[15:04:30] ...  
222  pass   [15:04:30] [15:04:34]   4s   
226  ...[15:04:34] ...  
226  pass   [15:04:34] [15:04:35]   1s   
227  ...[15:04:35] ...  
227  pass   [15:04:35] [15:04:35]   0s   
236  ...[15:04:35] ...  
236  pass   [15:04:35] [15:04:35]   0s   
253  ...[15:04:35] ...  
253  pass   [15:04:35] [15:04:35]   0s   
277  ...[15:04:35] ...  
277  pass   [15:04:35] [15:04:36]   1s   
Failures: 197
Failed 1 of 47 iotests
section_end:1595948698:step_script
ERROR: Job failed: exit code 1



-- 
You're receiving this email because of your account on gitlab.com.





Re: [PATCH v5 7/7] Makefile: Ship the generic platform bios ELF images for RISC-V

2020-07-28 Thread Bin Meng
Hi Alistair,

On Tue, Jul 28, 2020 at 11:37 PM Alistair Francis  wrote:
>
> On Wed, Jul 15, 2020 at 11:01 PM Bin Meng  wrote:
> >
> > From: Bin Meng 
> >
> > At present only the generic platform fw_dynamic bios BIN images
> > are included in the 'make install' target for 'virt' and 'sifive_u'
> > machines. This updates the install blob list to include ELF images
> > which are needed by the 'spike' machine.
> >
> > Signed-off-by: Bin Meng 
>
> This commit should be squashed into patch 5.

I actually considered this before, and I was thinking that patch 5 is
only for "fixing" the missing binaries for Spike machine, and this
patch addresses the "make install" issue, hence separate patch.

>
> Do you want me to do that when applying?

Sure, please do then if you feel this is the correct way.

Regards,
Bin



Re: [PATCH v2 0/2] QEMU Gating CI

2020-07-28 Thread Cleber Rosa
On Tue, Jul 28, 2020 at 03:48:38PM +0100, Peter Maydell wrote:
> On Mon, 20 Jul 2020 at 18:22, Cleber Rosa  wrote:
> > Sure.  It's important that PATCH 2/2 in this series is included in a
> > branch that you need to push to the "staging" branch on the
> > https://gitlab.com/qemu-project/qemu repo (it could be just that one
> > patch).  Then, you can run:
> >
> >   ./scripts/ci/gitlab-pipeline-status --verbose -w
> >
> > And that should be it.  You can drop '--verbose' if you just want the
> > final outcome as the result.
> 
> I tried this (local branch named "staging", pushed to gitlab
> remote "staging" branch), but it said:
> 
> e104462:bionic:qemu$ ./scripts/ci/gitlab-pipeline-status --verbose -w
> ERROR: No pipeline found
> failure
>

Hi Peter,

I think this may just have been a timing issue.  GitLab usually does
take a few seconds after it receives a branch push to create a
pipeline.  Let me know if you'd like to see this within the script, or
if you'd rather put a sleep between your push and the
"gitlab-pipeline-status" execution.

> It does seem to have kicked off the pipeline on gitlab though:
> https://gitlab.com/qemu-project/qemu/-/pipelines/171671136/builds

There's already new content on the staging branch, but supposing my local
staging branch contained commit 6e7c2dcb50907aa6be0cbc37f81801d2fa67f7b4
(https://gitlab.com/qemu-project/qemu/-/commit/6e7c2dcb50907aa6be0cbc37f81801d2fa67f7b4),
the command you ran:

  ./scripts/ci/gitlab-pipeline-status --verbose -w

Should have behaved as this (output from my machine):

  /scripts/ci/gitlab-pipeline-status --verbose -w -c 
6e7c2dcb50907aa6be0cbc37f81801d2fa67f7b4
  running...

> OTOH I can't see anything on that web page that suggests that
> it's submitting jobs to the s390 or aarch64 boxes -- is it
> intended to?
>

All the jobs for that pipeline have been created as expected, for
instance:

   https://gitlab.com/qemu-project/qemu/-/jobs/659874849

But given the recent changes to the GitLab YAML adding other phases,
it's waiting for the previous phases.

The jobs introduced with this patch are in the test phase because they
do builds and tests.  But IIRC there's a way to tell jobs to avoid
being blocked by previous phases.  I'll look into that.

Until then, I hope to see the job results soon from the s390 and aarch64
boxes.

Thanks,
- Cleber.

> thanks
> -- PMM
> 




Re: [PATCH v4 4/7] hw/riscv: Use pre-built bios image of generic platform for virt & sifive_u

2020-07-28 Thread Bin Meng
Hi Alistair,

On Tue, Jul 28, 2020 at 11:39 PM Alistair Francis  wrote:
>
> On Wed, Jul 15, 2020 at 9:55 PM Bin Meng  wrote:
> >
> > Hi Alistair,
> >
> > On Mon, Jul 13, 2020 at 9:53 AM Bin Meng  wrote:
> > >
> > > On Sun, Jul 12, 2020 at 1:34 AM Alistair Francis  
> > > wrote:
> > > >
> > > > On Thu, Jul 9, 2020 at 10:07 PM Bin Meng  wrote:
> > > > >
> > > > > From: Bin Meng 
> > > > >
> > > > > Update virt and sifive_u machines to use the opensbi fw_dynamic bios
> > > > > image built for the generic FDT platform.
> > > > >
> > > > > Remove the out-of-date no longer used bios images.
> > > > >
> > > > > Signed-off-by: Bin Meng 
> > > > > Reviewed-by: Anup Patel 
> > > > > Reviewed-by: Alistair Francis 
> > > >
> > > > This patch seems to break 32-bit Linux boots on the sifive_u and virt 
> > > > machines.
> > > >
> > >
> > > It looks only Linux boot on sifive_u is broken. On our side, we have
> > > been using VxWorks to test 32-bit OpenSBI on sifive_u so this issue
> > > gets unnoticed. I will take a look.
> >
> > I've figured out the issue of 32-bit Linux booting failure on
> > sifive_u. A patch has been sent to Linux upstream:
> > http://lists.infradead.org/pipermail/linux-riscv/2020-July/001213.html
>
> Thanks for that. What change in QEMU causes this failure though?
>

There is nothing wrong in QEMU.

> There are lots of people not running the latest Linux from master that
> this will cause breakages for.

It's just that the 32-bit Linux defconfig has never been validated by
people with the 'sifive_u' machine. I bet people only validated the 32-bit
kernel with the 'virt' machine.

Regards,
Bin



Re: Missing qapi_free_Type in error case for qapi generated code?

2020-07-28 Thread Eric Blake

On 7/28/20 10:26 AM, Christophe de Dinechin wrote:

The qapi generated code for qmp_marshal_query_spice seems to be missing a
resource deallocation for "retval". For example, for SpiceInfo:




 retval = qmp_query_spice();
 error_propagate(errp, err);
 if (err) {
/* retval not freed here */


Because it should be NULL here.  Returning an error AND an object is 
frowned on.



/* Missing: qapi_free_SpiceInfo(retval); */
 goto out;
 }

 qmp_marshal_output_SpiceInfo(retval, ret, errp);


And here, retval was non-NULL, but is cleaned as a side-effect of 
qmp_marshal_output_SpiceInfo.




out:


So no matter how you get to the label, retval is no longer valid memory 
that can be leaked.



 visit_free(v);
 v = qapi_dealloc_visitor_new();
 visit_start_struct(v, NULL, NULL, 0, NULL);
 visit_end_struct(v, NULL);
 visit_free(v);
}
#endif /* defined(CONFIG_SPICE) */

Questions:

- Is the query code supposed to always return NULL in case of error?


Yes.  If not, that is a bug in qmp_query_spice.


In the
   case of hmp_info_spice, there is no check for info==NULL, so on the
   contrary, it seems to indicate that a non-null result is always expected,
   and that function does call qapi_free_SpiceInfo


Calling qapi_free_SpiceInfo(NULL) is a safe no-op.  Or if you expect the 
function to always succeed, you could pass &error_abort as the errp 
parameter.




- If not, is there an existing shortcut to generate the correct deallocation
   code for return types that need it? You can't just use
   qapi_free_%(c_type)s because that would generate an extra * character,
   i.e. I get "SpiceInfo *" and not "SpiceInfo".


Ah, you're debating about editing scripts/qapi/commands.py.  If 
anything, an edit to add 'assert(!retval)' if qmp_COMMAND failed might 
be smarter than trying to add code to free retval.




- If not, is there any good way to know if the type is a pointer type?
   (A quick look in scripts/qapi/types.py does not show anything obvious)


Look at scripts/qapi/schema.py; each QAPI metatype has implementations 
of .c_name and .c_type that determine how to represent that QAPI object 
in C.  You probably want c_name instead of c_type when constructing the 
name of a qapi_free_FOO function, but that goes back to my question of 
whether such a call is even needed.


--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3226
Virtualization:  qemu.org | libvirt.org




Re: [PATCH v4 4/7] hw/riscv: Use pre-built bios image of generic platform for virt & sifive_u

2020-07-28 Thread Alistair Francis
On Wed, Jul 15, 2020 at 9:55 PM Bin Meng  wrote:
>
> Hi Alistair,
>
> On Mon, Jul 13, 2020 at 9:53 AM Bin Meng  wrote:
> >
> > On Sun, Jul 12, 2020 at 1:34 AM Alistair Francis  
> > wrote:
> > >
> > > On Thu, Jul 9, 2020 at 10:07 PM Bin Meng  wrote:
> > > >
> > > > From: Bin Meng 
> > > >
> > > > Update virt and sifive_u machines to use the opensbi fw_dynamic bios
> > > > image built for the generic FDT platform.
> > > >
> > > > Remove the out-of-date no longer used bios images.
> > > >
> > > > Signed-off-by: Bin Meng 
> > > > Reviewed-by: Anup Patel 
> > > > Reviewed-by: Alistair Francis 
> > >
> > > This patch seems to break 32-bit Linux boots on the sifive_u and virt 
> > > machines.
> > >
> >
> > It looks only Linux boot on sifive_u is broken. On our side, we have
> > been using VxWorks to test 32-bit OpenSBI on sifive_u so this issue
> > gets unnoticed. I will take a look.
>
> I've figured out the issue of 32-bit Linux booting failure on
> sifive_u. A patch has been sent to Linux upstream:
> http://lists.infradead.org/pipermail/linux-riscv/2020-July/001213.html

Thanks for that. What change in QEMU causes this failure though?

There are lots of people not running the latest Linux from master that
this will cause breakages for.

Alistair

>
> Regards,
> Bin



Re: [PATCH v5 7/7] Makefile: Ship the generic platform bios ELF images for RISC-V

2020-07-28 Thread Alistair Francis
On Wed, Jul 15, 2020 at 11:01 PM Bin Meng  wrote:
>
> From: Bin Meng 
>
> At present only the generic platform fw_dynamic bios BIN images
> are included in the 'make install' target for 'virt' and 'sifive_u'
> machines. This updates the install blob list to include ELF images
> which are needed by the 'spike' machine.
>
> Signed-off-by: Bin Meng 

This commit should be squashed into patch 5.

Do you want me to do that when applying?

Alistair

>
> ---
>
> Changes in v5:
> - Ship generic fw_dynamic.elf images in the Makefile
>
> Changes in v3:
> - change fw_jump to fw_dynamic in the Makefile
>
> Changes in v2:
> - new patch: Makefile: Ship the generic platform bios images for RISC-V
>
>  Makefile | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/Makefile b/Makefile
> index a6d6234..142c545 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -841,7 +841,8 @@ u-boot.e500 u-boot-sam460-20100605.bin \
>  qemu_vga.ndrv \
>  edk2-licenses.txt \
>  hppa-firmware.img \
> -opensbi-riscv32-generic-fw_dynamic.bin opensbi-riscv64-generic-fw_dynamic.bin
> +opensbi-riscv32-generic-fw_dynamic.bin 
> opensbi-riscv64-generic-fw_dynamic.bin \
> +opensbi-riscv32-generic-fw_dynamic.elf opensbi-riscv64-generic-fw_dynamic.elf
>
>
>  DESCS=50-edk2-i386-secure.json 50-edk2-x86_64-secure.json \
> --
> 2.7.4
>
>



Re: [PULL 06/16] accel/tcg: better handle memory constrained systems

2020-07-28 Thread Christian Ehrhardt
On Mon, Jul 27, 2020 at 2:24 PM Alex Bennée  wrote:

> It turns out there are some 64 bit systems that have relatively low
> amounts of physical memory available to them (typically CI system).
> Even with swapping available a 1GB translation buffer that fills up
> can put the machine under increased memory pressure. Detect these low
> memory situations and reduce tb_size appropriately.
>
> Fixes: 600e17b2615 ("accel/tcg: increase default code gen buffer size for
> 64 bit")
> Signed-off-by: Alex Bennée 
> Reviewed-by: Richard Henderson 
> Reviewed-by: Robert Foley 
> Cc: BALATON Zoltan 
> Cc: Christian Ehrhardt 
> Message-Id: <20200724064509.331-7-alex.ben...@linaro.org>
>

I beg your pardon for the late reply, but I was out a week.
I see this is already the pull request and my former feedback was included
- thanks.

Nevertheless I took the chance to test it in the context that I found and
reported the initial bug.
If only to show that I didn't fire this case :-)

We know there is quite some noise/deviation, but I only ran single tests as
the problem was easily visible despite the noise. Amount of memory qemu
settles on:

Host 32G, Guest 512M
4.2633M
5.0   1672M
5.0+ Fix  1670M

Host 1.5G, Guest 512M
4.2692M
5.0   16xxM (OOM)
5.0+ Fix   766M

So we seem to have achieved that small environments no longer break (a very
small number of very densely sized systems might still), while any
normal/large system still gets the bigger cache.
Tested-by: Christian Ehrhardt 
Reviewed-by: Christian Ehrhardt 


> diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
> index 2afa46bd2b1..2d83013633b 100644
> --- a/accel/tcg/translate-all.c
> +++ b/accel/tcg/translate-all.c
> @@ -976,7 +976,12 @@ static inline size_t size_code_gen_buffer(size_t
> tb_size)
>  {
>  /* Size the buffer.  */
>  if (tb_size == 0) {
> -tb_size = DEFAULT_CODE_GEN_BUFFER_SIZE;
> +size_t phys_mem = qemu_get_host_physmem();
> +if (phys_mem == 0) {
> +tb_size = DEFAULT_CODE_GEN_BUFFER_SIZE;
> +} else {
> +tb_size = MIN(DEFAULT_CODE_GEN_BUFFER_SIZE, phys_mem / 8);
> +}
>  }
>  if (tb_size < MIN_CODE_GEN_BUFFER_SIZE) {
>  tb_size = MIN_CODE_GEN_BUFFER_SIZE;
> --
> 2.20.1
>
>

-- 
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd


Re: [PATCH] hw/riscv: sifive_u: Add a dummy L2 cache controller device

2020-07-28 Thread Alistair Francis
On Sun, Jul 19, 2020 at 11:50 PM Bin Meng  wrote:
>
> From: Bin Meng 
>
> It is enough to simply map the SiFive FU540 L2 cache controller
> into the MMIO space using create_unimplemented_device(), with an
> FDT fragment generated, to make the latest upstream U-Boot happy.
>
> Signed-off-by: Bin Meng 

Reviewed-by: Alistair Francis 

Applied to riscv-to-apply.next tree for 5.2.

Alistair

> ---
>
>  hw/riscv/sifive_u.c | 22 ++
>  include/hw/riscv/sifive_u.h |  4 
>  2 files changed, 26 insertions(+)
>
> diff --git a/hw/riscv/sifive_u.c b/hw/riscv/sifive_u.c
> index 6487d5e..f771cb0 100644
> --- a/hw/riscv/sifive_u.c
> +++ b/hw/riscv/sifive_u.c
> @@ -72,6 +72,7 @@ static const struct MemmapEntry {
>  [SIFIVE_U_DEBUG] ={0x0,  0x100 },
>  [SIFIVE_U_MROM] = { 0x1000, 0xf000 },
>  [SIFIVE_U_CLINT] ={  0x200,0x1 },
> +[SIFIVE_U_L2CC] = {  0x201, 0x1000 },
>  [SIFIVE_U_L2LIM] ={  0x800,  0x200 },
>  [SIFIVE_U_PLIC] = {  0xc00,  0x400 },
>  [SIFIVE_U_PRCI] = { 0x1000, 0x1000 },
> @@ -302,6 +303,24 @@ static void create_fdt(SiFiveUState *s, const struct 
> MemmapEntry *memmap,
>  qemu_fdt_setprop_string(fdt, nodename, "compatible", "gpio-restart");
>  g_free(nodename);
>
> +nodename = g_strdup_printf("/soc/cache-controller@%lx",
> +(long)memmap[SIFIVE_U_L2CC].base);
> +qemu_fdt_add_subnode(fdt, nodename);
> +qemu_fdt_setprop_cells(fdt, nodename, "reg",
> +0x0, memmap[SIFIVE_U_L2CC].base,
> +0x0, memmap[SIFIVE_U_L2CC].size);
> +qemu_fdt_setprop_cells(fdt, nodename, "interrupts",
> +SIFIVE_U_L2CC_IRQ0, SIFIVE_U_L2CC_IRQ1, SIFIVE_U_L2CC_IRQ2);
> +qemu_fdt_setprop_cell(fdt, nodename, "interrupt-parent", plic_phandle);
> +qemu_fdt_setprop(fdt, nodename, "cache-unified", NULL, 0);
> +qemu_fdt_setprop_cell(fdt, nodename, "cache-size", 2097152);
> +qemu_fdt_setprop_cell(fdt, nodename, "cache-sets", 1024);
> +qemu_fdt_setprop_cell(fdt, nodename, "cache-level", 2);
> +qemu_fdt_setprop_cell(fdt, nodename, "cache-block-size", 64);
> +qemu_fdt_setprop_string(fdt, nodename, "compatible",
> +"sifive,fu540-c000-ccache");
> +g_free(nodename);
> +
>  phy_phandle = phandle++;
>  nodename = g_strdup_printf("/soc/ethernet@%lx",
>  (long)memmap[SIFIVE_U_GEM].base);
> @@ -732,6 +751,9 @@ static void sifive_u_soc_realize(DeviceState *dev, Error 
> **errp)
>
>  create_unimplemented_device("riscv.sifive.u.dmc",
>  memmap[SIFIVE_U_DMC].base, memmap[SIFIVE_U_DMC].size);
> +
> +create_unimplemented_device("riscv.sifive.u.l2cc",
> +memmap[SIFIVE_U_L2CC].base, memmap[SIFIVE_U_L2CC].size);
>  }
>
>  static Property sifive_u_soc_props[] = {
> diff --git a/include/hw/riscv/sifive_u.h b/include/hw/riscv/sifive_u.h
> index aba4d01..d3c0c00 100644
> --- a/include/hw/riscv/sifive_u.h
> +++ b/include/hw/riscv/sifive_u.h
> @@ -71,6 +71,7 @@ enum {
>  SIFIVE_U_DEBUG,
>  SIFIVE_U_MROM,
>  SIFIVE_U_CLINT,
> +SIFIVE_U_L2CC,
>  SIFIVE_U_L2LIM,
>  SIFIVE_U_PLIC,
>  SIFIVE_U_PRCI,
> @@ -86,6 +87,9 @@ enum {
>  };
>
>  enum {
> +SIFIVE_U_L2CC_IRQ0 = 1,
> +SIFIVE_U_L2CC_IRQ1 = 2,
> +SIFIVE_U_L2CC_IRQ2 = 3,
>  SIFIVE_U_UART0_IRQ = 4,
>  SIFIVE_U_UART1_IRQ = 5,
>  SIFIVE_U_GPIO_IRQ0 = 7,
> --
> 2.7.4
>
>



Re: [PATCH v2 3/3] virtiofsd: probe unshare(CLONE_FS) and print an error

2020-07-28 Thread Stefan Hajnoczi
On Tue, Jul 28, 2020 at 12:00:20PM +0200, Roman Mohr wrote:
> On Tue, Jul 28, 2020 at 3:07 AM misono.tomoh...@fujitsu.com <
> misono.tomoh...@fujitsu.com> wrote:
> 
> > > Subject: [PATCH v2 3/3] virtiofsd: probe unshare(CLONE_FS) and print an
> > error
> > >
> > > An assertion failure is raised during request processing if
> > > unshare(CLONE_FS) fails. Implement a probe at startup so the problem can
> > > be detected right away.
> > >
> > > Unfortunately Docker/Moby does not include unshare in the seccomp.json
> > > list unless CAP_SYS_ADMIN is given. Other seccomp.json lists always
> > > include unshare (e.g. podman is unaffected):
> > >
> > https://raw.githubusercontent.com/seccomp/containers-golang/master/seccomp.json
> > >
> > > Use "docker run --security-opt seccomp=path/to/seccomp.json ..." if the
> > > default seccomp.json is missing unshare.
> >
> > Hi, sorry for a bit late.
> >
> > unshare() was added to fix xattr problem:
> >
> > https://github.com/qemu/qemu/commit/bdfd66788349acc43cd3f1298718ad491663cfcc#
> > In theory we don't need to call unshare if xattr is disabled, but it is
> > hard to get to know
> > if xattr is enabled or disabled in fv_queue_worker(), right?
> >
> >
> In kubevirt we want to run virtiofsd in containers. We would already not
> have xattr support for e.g. overlayfs in the VM after this patch series (an
> acceptable con at least for us right now).
> If we can get rid of the unshare (and potentially of needing root) that
> would be great. We always assume that everything which we run in containers
> should work for cri-o and docker.

Root is required to access files with any uid/gid.

Dave Gilbert is working on xattr support without CAP_SYS_ADMIN. He may
be able to find a way to drop unshare (at least in containers).

> "Just" pointing docker to a different seccomp.json file is something which
> k8s users/admin in many cases can't do.

There is a Moby PR to change the default seccomp.json file here but it's
unclear if it will be merged:
https://github.com/moby/moby/pull/41244

Stefan




Re: [PULL for-5.1 0/2] qemu-ga patch queue for hard-freeze

2020-07-28 Thread Peter Maydell
On Tue, 28 Jul 2020 at 00:23, Michael Roth  wrote:
>
> The following changes since commit 9303ecb658a0194560d1eecde165a1511223c2d8:
>
>   Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20200727' into 
> staging (2020-07-27 17:25:06 +0100)
>
> are available in the Git repository at:
>
>   git://github.com/mdroth/qemu.git tags/qga-pull-2020-07-27-tag
>
> for you to fetch changes up to ba620541d0db7e3433babbd97c0413a371e6fb4a:
>
>   qga/qapi-schema: Document -1 for invalid PCI address fields (2020-07-27 
> 18:03:55 -0500)
>
> 
> qemu-ga patch queue for hard-freeze
>
> * document use of -1 when pci_controller field can't be retrieved for
>   guest-get-fsinfo
> * fix incorrect filesystem type reporting on w32 for guest-get-fsinfo
>   when a volume is not mounted
>


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/5.1
for any user-visible changes.

-- PMM



Missing qapi_free_Type in error case for qapi generated code?

2020-07-28 Thread Christophe de Dinechin
The qapi generated code for qmp_marshal_query_spice seems to be missing a
resource deallocation for "retval". For example, for SpiceInfo:

#if defined(CONFIG_SPICE)
void qmp_marshal_query_spice(QDict *args, QObject **ret, Error **errp)
{
Error *err = NULL;
bool ok = false;
Visitor *v;
SpiceInfo *retval;

v = qobject_input_visitor_new(QOBJECT(args));
if (!visit_start_struct(v, NULL, NULL, 0, errp)) {
goto out;
}
ok = visit_check_struct(v, errp);
visit_end_struct(v, NULL);
if (!ok) {
goto out;
}

retval = qmp_query_spice();
error_propagate(errp, err);
if (err) {
/* retval not freed here */
/* Missing: qapi_free_SpiceInfo(retval); */
goto out;
}

qmp_marshal_output_SpiceInfo(retval, ret, errp);

out:
visit_free(v);
v = qapi_dealloc_visitor_new();
visit_start_struct(v, NULL, NULL, 0, NULL);
visit_end_struct(v, NULL);
visit_free(v);
}
#endif /* defined(CONFIG_SPICE) */

Questions:

- Is the query code supposed to always return NULL in case of error? In the
  case of hmp_info_spice, there is no check for info==NULL, so on the
  contrary, it seems to indicate that a non-null result is always expected,
  and that function does call qapi_free_SpiceInfo

- If not, is there an existing shortcut to generate the correct deallocation
  code for return types that need it? You can't just use
  qapi_free_%(c_type)s because that would generate an extra * character,
  i.e. I get "SpiceInfo *" and not "SpiceInfo".

- If not, is there any good way to know if the type is a pointer type?
  (A quick look in scripts/qapi/types.py does not show anything obvious)

--
Cheers,
Christophe de Dinechin (IRC c3d)




Re: [Virtio-fs] virtio-fs performance

2020-07-28 Thread Vivek Goyal
On Tue, Jul 28, 2020 at 02:49:36PM +0100, Stefan Hajnoczi wrote:
> > I'm trying and testing the virtio-fs feature in QEMU v5.0.0.
> > My host and guest OS are both ubuntu 18.04 with kernel 5.4, and the
> > underlying storage is one single SSD.
> > 
> > The configuations are:
> > (1) virtiofsd
> > ./virtiofsd -o 
> > source=/mnt/ssd/virtiofs,cache=auto,flock,posix_lock,writeback,xattr
> > --thread-pool-size=1 --socket-path=/tmp/vhostqemu
> > 
> > (2) qemu
> > qemu-system-x86_64 \
> > -enable-kvm \
> > -name ubuntu \
> > -cpu Westmere \
> > -m 4096 \
> > -global kvm-apic.vapic=false \
> > -netdev 
> > tap,id=hn0,vhost=off,br=br0,helper=/usr/local/libexec/qemu-bridge-helper
> > \
> > -device e1000,id=e0,netdev=hn0 \
> > -blockdev '{"node-name": "disk0", "driver": "qcow2",
> > "refcount-cache-size": 1638400, "l2-cache-size": 6553600, "file": {
> > "driver": "file", "filename": "'${imagefolder}\/ubuntu.qcow2'"}}' \
> > -device virtio-blk,drive=disk0,id=disk0 \
> > -chardev socket,id=ch0,path=/tmp/vhostqemu \
> > -device vhost-user-fs-pci,chardev=ch0,tag=myfs \
> > -object memory-backend-memfd,id=mem,size=4G,share=on \
> > -numa node,memdev=mem \
> > -qmp stdio \
> > -vnc :0
> > 
> > (3) guest
> > mount -t virtiofs myfs /mnt/virtiofs
> > 
> > I tried to change virtiofsd's --thread-pool-size value and test the
> > storage performance by fio.
> > Before each read/write/randread/randwrite test, the pagecaches of
> > guest and host are dropped.
> > 
> > ```
> > RW="read" # or write/randread/randwrite
> > fio --name=test --rw=$RW --bs=4k --numjobs=1 --ioengine=libaio
> > --runtime=60 --direct=0 --iodepth=64 --size=10g
> > --filename=/mnt/virtiofs/testfile
> > done

Couple of things.

- Can you try cache=none option in virtiofsd. That will bypass page
  cache in guest. It also gets rid of latencies related to
  file_remove_privs() as of now. 

- Also with direct=0, are we really driving an iodepth of 64? With direct=0
  it is cached I/O. Is it still asynchronous at this point, or have we
  fallen back to synchronous I/O, driving a queue depth of 1?

- With cache=auto/always, I am seeing performance issues with small writes
  and am trying to address them.

https://lore.kernel.org/linux-fsdevel/20200716144032.gc422...@redhat.com/
https://lore.kernel.org/linux-fsdevel/20200724183812.19573-1-vgo...@redhat.com/

Thanks
Vivek

> > ```
> > 
> > --thread-pool-size=64 (default)
> > seq read: 305 MB/s
> > seq write: 118 MB/s
> > rand 4KB read:  IOPS
> > rand 4KB write: 21100 IOPS
> > 
> > --thread-pool-size=1
> > seq read: 387 MB/s
> > seq write: 160 MB/s
> > rand 4KB read: 2622 IOPS
> > rand 4KB write: 30400 IOPS
> > 
> > The results show the performance using default-pool-size (64) is
> > poorer than using single thread.
> > Is it due to the lock contention of the multiple threads?
> > When can virtio-fs get better performance using multiple threads?
> > 
> > 
> > I also tested the performance that guest accesses host's files via
> > NFSv4/CIFS network filesystem.
> > The "seq read" and "randread" performance of virtio-fs are also worse
> > than the NFSv4 and CIFS.
> > 
> > NFSv4:
> >   seq write: 244 MB/s
> >   rand 4K read: 4086 IOPS
> > 
> > I cannot figure out why the perf of NFSv4/CIFS with the network stack
> > is better than virtio-fs.
> > Is it expected? Or, do I have an incorrect configuration?
> 
> No, I remember benchmarking the thread pool and did not see such a big
> difference.
> 
> Please use direct=1 so that each I/O results in a virtio-fs request.
> Otherwise the I/O pattern is not directly controlled by the benchmark
> but by the page cache (readahead, etc).
> 
> Using numactl(8) or taskset(1) to launch virtiofsd allows you to control
> NUMA and CPU scheduling properties. For example, you could force all 64
> threads to run on the same host CPU using taskset to see if that helps
> this I/O bound workload.
> 
> fio can collect detailed statistics on queue depths and a latency
> histogram. It would be interesting to compare the --thread-pool-size=64
> and --thread-pool-size=1 numbers.
> 
> Comparing the "perf record -e kvm:kvm_exit" counts between the two might
> also be interesting.
> 
> Stefan



> ___
> Virtio-fs mailing list
> virtio...@redhat.com
> https://www.redhat.com/mailman/listinfo/virtio-fs




Re: [PATCH] osdep.h: Add doc comment for qemu_get_thread_id()

2020-07-28 Thread Peter Maydell
On Tue, 28 Jul 2020 at 16:17, Eric Blake  wrote:
>
> On 7/16/20 10:41 AM, Peter Maydell wrote:
> > Add a documentation comment for qemu_get_thread_id(): since this
> > is rather host-OS-specific it's useful if people writing the
> > implementation and people thinking of using the function know
> > what the purpose and limitations are.
> >
> > Signed-off-by: Peter Maydell 
> > ---
> > Based on conversation with Dan on IRC, and prompted by the recent
> > patch to add OpenBSD support.
> >
> > Q: should we document exactly what the thread-id value is for
> > each host platform in the QMP documentation ? Somebody writing
> > a management layer app should ideally not have to grovel through
> > the application to figure out what they should do with the
> > integer value they get back from query-cpus...
> >
> >   include/qemu/osdep.h | 14 ++
> >   1 file changed, 14 insertions(+)
>
> Do we need a counterpart change...
>
> >
> > diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
> > index 4841b5c6b5f..8279f72e5ed 100644
> > --- a/include/qemu/osdep.h
> > +++ b/include/qemu/osdep.h
> > @@ -515,6 +515,20 @@ bool qemu_has_ofd_lock(void);
> >
> >   bool qemu_write_pidfile(const char *pidfile, Error **errp);
> >
> > +/**
> > + * qemu_get_thread_id: Return OS-specific ID of current thread
> > + *
> > + * This function returns an OS-specific identifier of the
> > + * current thread. This will be used for the "thread-id" field in
> > + * the response to the QMP query-cpus and query-iothreads commands.
>
> ...to the qapi definition of query-cpus and query-iothreads?

Well, that was my question above. Currently the QAPI documentation
says absolutely nothing about what the thread-id values mean
for any host OS (beyond "ID of the underlying host thread"), which
means that any management layer application needs to look in the
implementation to find out what they actually are...

Improving the QAPI docs would probably be something like:
 * add a list of host OSes and semantics to the doc comment
   for CpuInfoFast
 * add cross-references to that definition from everywhere
   else in QAPI that uses a thread-id/thread_id
 * add a comment in the C file to say "if you're adding another
   OS ifdef here please update the QAPI doc comment"

thanks
-- PMM



Re: [PULL 2/2] s390x/s390-virtio-ccw: fix loadparm property getter

2020-07-28 Thread Cornelia Huck
On Tue, 28 Jul 2020 14:52:36 +0100
Peter Maydell  wrote:

> On Mon, 27 Jul 2020 at 15:05, Cornelia Huck  wrote:
> >
> > From: Halil Pasic 
> >
> > The function machine_get_loadparm() is supposed to produce a C-string,
> > that is a NUL-terminated one, but it does not. ElectricFence can detect
> > this problem if the loadparm machine property is used.
> >
> > Let us make the returned string a NUL-terminated one.
> >
> > Fixes: 7104bae9de ("hw/s390x: provide loadparm property for the machine")
> > Signed-off-by: Halil Pasic 
> > Reviewed-by: Thomas Huth 
> > Message-Id: <20200723162717.88485-1-pa...@linux.ibm.com>
> > Signed-off-by: Cornelia Huck 
> > ---
> >  hw/s390x/s390-virtio-ccw.c | 6 +-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
> > index 8cc2f25d8a6a..403d30e13bca 100644
> > --- a/hw/s390x/s390-virtio-ccw.c
> > +++ b/hw/s390x/s390-virtio-ccw.c
> > @@ -701,8 +701,12 @@ bool hpage_1m_allowed(void)
> >  static char *machine_get_loadparm(Object *obj, Error **errp)
> >  {
> >  S390CcwMachineState *ms = S390_CCW_MACHINE(obj);
> > +char *loadparm_str;
> >
> > -return g_memdup(ms->loadparm, sizeof(ms->loadparm));
> > +/* make a NUL-terminated string */
> > +loadparm_str = g_memdup(ms->loadparm, sizeof(ms->loadparm) + 1);
> > +loadparm_str[sizeof(ms->loadparm)] = 0;
> > +return loadparm_str;  
> 
> Hi. Coverity points out (CID 1431058) that this code now
> reads off the end of the ms->loadparm buffer, because
> g_memdup() is going to read and copy 9 bytes (size + 1)
> and the array itself is only 8 bytes.
> 
> I don't think you can use g_memdup() here -- you need to
> allocate the memory with g_malloc() and then fill it with
> memcpy(), something like:
> 
> loadparm_str = g_malloc(sizeof(ms->loadparm) + 1);
> memcpy(loadparm_str, ms->loadparm, sizeof(ms->loadparm));
> loadparm_str[sizeof(ms->loadparm)] = 0;

Sigh.

Halil, do you have time to cook up a patch?




[PATCH for-5.2 v4 11/11] hw/arm/smmuv3: Advertise SMMUv3.2 range invalidation

2020-07-28 Thread Eric Auger
Expose the RIL bit so that the guest driver uses range
invalidation. Although RIL is a 3.2 feature, we let
the AIDR advertise SMMUv3.1 support, as a v3.x implementation
is allowed to implement features from v3.(x+1).

Signed-off-by: Eric Auger 
Reviewed-by: Peter Maydell 

---

v2 -> v3:
- keep the AIDR equal to 2. BBML support, which is 3.2
  mandatory feature will be addressed in a separate
  patch.
- added Peter's R-b (same version as in v1)
---
 hw/arm/smmuv3-internal.h | 1 +
 hw/arm/smmuv3.c  | 1 +
 2 files changed, 2 insertions(+)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index 9ae7d97faf..fa3c088972 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -55,6 +55,7 @@ REG32(IDR1,0x4)
 REG32(IDR2,0x8)
 REG32(IDR3,0xc)
  FIELD(IDR3, HAD, 2, 1);
+ FIELD(IDR3, RIL,10, 1);
 REG32(IDR4,0x10)
 REG32(IDR5,0x14)
  FIELD(IDR5, OAS, 0, 3);
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index b262f0e4a7..0122700e72 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -254,6 +254,7 @@ static void smmuv3_init_regs(SMMUv3State *s)
 s->idr[1] = FIELD_DP32(s->idr[1], IDR1, EVENTQS, SMMU_EVENTQS);
 s->idr[1] = FIELD_DP32(s->idr[1], IDR1, CMDQS,   SMMU_CMDQS);
 
+s->idr[3] = FIELD_DP32(s->idr[3], IDR3, RIL, 1);
 s->idr[3] = FIELD_DP32(s->idr[3], IDR3, HAD, 1);
 
/* 4K and 64K granule support */
-- 
2.21.3




[PATCH for-5.2 v4 06/11] hw/arm/smmuv3: Introduce smmuv3_s1_range_inval() helper

2020-07-28 Thread Eric Auger
Let's introduce a helper for S1 IOVA range invalidation.
This will be used for the NH_VA and NH_VAA commands. It decodes
the same fields, traces, calls the UNMAP notifiers and
invalidates the corresponding IOTLB entries.

At the moment, we do not support 3.2 range invalidation yet.
So it reduces to a single IOVA invalidation.

Note the leaf bit is now also decoded for the CMD_TLBI_NH_VAA
command. At the moment it is only used for tracing.

Signed-off-by: Eric Auger 
Reviewed-by: Peter Maydell 

---

v1 -> v2:
- added comment about leaf bit and added Peter's R-b
---
 hw/arm/smmuv3.c | 36 +---
 hw/arm/trace-events |  3 +--
 2 files changed, 18 insertions(+), 21 deletions(-)

diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index b717bde832..e4a2cea7ad 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -836,6 +836,22 @@ static void smmuv3_inv_notifiers_iova(SMMUState *s, int 
asid, dma_addr_t iova)
 }
 }
 
+static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
+{
+dma_addr_t addr = CMD_ADDR(cmd);
+uint8_t type = CMD_TYPE(cmd);
+uint16_t vmid = CMD_VMID(cmd);
+bool leaf = CMD_LEAF(cmd);
+int asid = -1;
+
+if (type == SMMU_CMD_TLBI_NH_VA) {
+asid = CMD_ASID(cmd);
+}
+trace_smmuv3_s1_range_inval(vmid, asid, addr, leaf);
+smmuv3_inv_notifiers_iova(s, asid, addr);
+smmu_iotlb_inv_iova(s, asid, addr);
+}
+
 static int smmuv3_cmdq_consume(SMMUv3State *s)
 {
 SMMUState *bs = ARM_SMMU(s);
@@ -966,27 +982,9 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
 smmu_iotlb_inv_all(bs);
 break;
 case SMMU_CMD_TLBI_NH_VAA:
-{
-dma_addr_t addr = CMD_ADDR(&cmd);
-uint16_t vmid = CMD_VMID(&cmd);
-
-trace_smmuv3_cmdq_tlbi_nh_vaa(vmid, addr);
-smmuv3_inv_notifiers_iova(bs, -1, addr);
-smmu_iotlb_inv_iova(bs, -1, addr);
-break;
-}
 case SMMU_CMD_TLBI_NH_VA:
-{
-uint16_t asid = CMD_ASID(&cmd);
-uint16_t vmid = CMD_VMID(&cmd);
-dma_addr_t addr = CMD_ADDR(&cmd);
-bool leaf = CMD_LEAF(&cmd);
-
-trace_smmuv3_cmdq_tlbi_nh_va(vmid, asid, addr, leaf);
-smmuv3_inv_notifiers_iova(bs, asid, addr);
-smmu_iotlb_inv_iova(bs, asid, addr);
+smmuv3_s1_range_inval(bs, &cmd);
 break;
-}
 case SMMU_CMD_TLBI_EL3_ALL:
 case SMMU_CMD_TLBI_EL3_VA:
 case SMMU_CMD_TLBI_EL2_ALL:
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index f74d3e920f..c219fe9e82 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -45,8 +45,7 @@ smmuv3_cmdq_cfgi_ste_range(int start, int end) "start=0x%d - 
end=0x%d"
 smmuv3_cmdq_cfgi_cd(uint32_t sid) "streamid = %d"
 smmuv3_config_cache_hit(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t 
perc) "Config cache HIT for sid %d (hits=%d, misses=%d, hit rate=%d)"
 smmuv3_config_cache_miss(uint32_t sid, uint32_t hits, uint32_t misses, 
uint32_t perc) "Config cache MISS for sid %d (hits=%d, misses=%d, hit rate=%d)"
-smmuv3_cmdq_tlbi_nh_va(int vmid, int asid, uint64_t addr, bool leaf) "vmid =%d 
asid =%d addr=0x%"PRIx64" leaf=%d"
-smmuv3_cmdq_tlbi_nh_vaa(int vmid, uint64_t addr) "vmid =%d addr=0x%"PRIx64
+smmuv3_s1_range_inval(int vmid, int asid, uint64_t addr, bool leaf) "vmid =%d 
asid =%d addr=0x%"PRIx64" leaf=%d"
 smmuv3_cmdq_tlbi_nh(void) ""
 smmuv3_cmdq_tlbi_nh_asid(uint16_t asid) "asid=%d"
 smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid %d"
-- 
2.21.3




Re: [PATCH] osdep.h: Add doc comment for qemu_get_thread_id()

2020-07-28 Thread Eric Blake

On 7/16/20 10:41 AM, Peter Maydell wrote:

Add a documentation comment for qemu_get_thread_id(): since this
is rather host-OS-specific it's useful if people writing the
implementation and people thinking of using the function know
what the purpose and limitations are.

Signed-off-by: Peter Maydell 
---
Based on conversation with Dan on IRC, and prompted by the recent
patch to add OpenBSD support.

Q: should we document exactly what the thread-id value is for
each host platform in the QMP documentation ? Somebody writing
a management layer app should ideally not have to grovel through
the application to figure out what they should do with the
integer value they get back from query-cpus...

  include/qemu/osdep.h | 14 ++
  1 file changed, 14 insertions(+)


Do we need a counterpart change...



diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
index 4841b5c6b5f..8279f72e5ed 100644
--- a/include/qemu/osdep.h
+++ b/include/qemu/osdep.h
@@ -515,6 +515,20 @@ bool qemu_has_ofd_lock(void);
  
  bool qemu_write_pidfile(const char *pidfile, Error **errp);
  
+/**

+ * qemu_get_thread_id: Return OS-specific ID of current thread
+ *
+ * This function returns an OS-specific identifier of the
+ * current thread. This will be used for the "thread-id" field in
+ * the response to the QMP query-cpus and query-iothreads commands.


...to the qapi definition of query-cpus and query-iothreads?


+ * The intention is that a VM management layer application can then
+ * use it to tie specific QEMU vCPU and IO threads to specific host
+ * CPUs using whatever the host OS's CPU affinity setting API is.
+ * New implementations of this function for new host OSes should
+ * return the most sensible integer ID that works for that purpose.
+ *
+ * This function should not be used for anything else inside QEMU.
+ */
  int qemu_get_thread_id(void);


Otherwise this change looks sensible to me.

Reviewed-by: Eric Blake 

  
  #ifndef CONFIG_IOVEC




--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



