Re: [PATCH v3 1/7] target/i386: allow versioned CPUs to specify new cache_info

2023-04-24 Thread Robert Hoo
Babu Moger wrote on Tue, Apr 25, 2023 at 00:42:
>
> From: Michael Roth 
>
> New EPYC CPU versions require small changes to their cache_info.

Do you mean that for the real EPYC HW, each given model, e.g. Rome,
gets periodic HW version updates?

> Because current QEMU x86 CPU definition does not support cache
> versions,

cache version --> versioned cache info

> we would have to declare a new CPU type for each such case.

My understanding was that for a new HW CPU model, we should define a new
vCPU model mapping to it. But if the answer to my question above is yes,
i.e. a new HW version of the same CPU model, this makes sense to some
extent.

> To avoid this duplication, the patch allows new cache_info pointers
> to be specified for a new CPU version.

"To avoid the dup work, the patch adds "cache_info" in X86CPUVersionDefinition"
>
> Co-developed-by: Wei Huang 
> Signed-off-by: Wei Huang 
> Signed-off-by: Michael Roth 
> Signed-off-by: Babu Moger 
> Acked-by: Michael S. Tsirkin 
> ---
>  target/i386/cpu.c | 36 +---
>  1 file changed, 33 insertions(+), 3 deletions(-)
>
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index 6576287e5b..e3d9eaa307 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -1598,6 +1598,7 @@ typedef struct X86CPUVersionDefinition {
>  const char *alias;
>  const char *note;
>  PropValue *props;
> +const CPUCaches *const cache_info;
>  } X86CPUVersionDefinition;
>
>  /* Base definition for a CPU model */
> @@ -5192,6 +5193,32 @@ static void x86_cpu_apply_version_props(X86CPU *cpu, X86CPUModel *model)
>  assert(vdef->version == version);
>  }
>
> +/* Apply properties for the CPU model version specified in model */

I don't think this comment matches below function.

> +static const CPUCaches *x86_cpu_get_version_cache_info(X86CPU *cpu,
> +   X86CPUModel *model)

Would "version" --> "versioned" be better?

> +{
> +const X86CPUVersionDefinition *vdef;
> +X86CPUVersion version = x86_cpu_model_resolve_version(model);
> +const CPUCaches *cache_info = model->cpudef->cache_info;
> +
> +if (version == CPU_VERSION_LEGACY) {
> +return cache_info;
> +}
> +
> +for (vdef = x86_cpu_def_get_versions(model->cpudef); vdef->version; vdef++) {
> +if (vdef->cache_info) {
> +cache_info = vdef->cache_info;
> +}

There is no need to assign "cache_info" while traversing the vdef list;
the assignment could instead be done in the version-matching block
below. Or do you mean to return the last valid cache info seen during
the traversal, e.g. when v2 has valid cache info but v3 doesn't?
> +
> +if (vdef->version == version) {
> +break;
> +}
> +}
> +
> +assert(vdef->version == version);
> +return cache_info;
> +}
> +



Re: [PATCH 16/16] qapi storage-daemon/qapi: Fix documentation section structure

2023-04-24 Thread Markus Armbruster
Eric Blake  writes:

> On Tue, Apr 04, 2023 at 01:59:12PM +0200, Markus Armbruster wrote:
>> In the QEMU QMP Reference Manual, subsection "Block core (VM
>> unrelated)" is empty.  Its contents is at the end of subsection
>> "Background jobs" instead.  That's because qapi/job.json is includeded
>
> included

Fixing...

>> first from qapi/block-core.json, which makes qapi/job.json's
>> documentation go between qapi/block-core.json's subsection heading and
>> contents.
>> 
>> In the QEMU Storage Daemon QMP Reference Manual, section "Block
>> Devices" contains nothing but an empty subsection "Block core (VM
>> unrelated)".  The latter's contents is at the end of section "Socket data
>> types", along with subsection "Block device exports".  Subsection
>> "Background jobs" is at the end of section "Cryptography".  All this
>> is because storage-daemon/qapi/qapi-schema.json includes modules in a
>> confused order.
>> 
>> Fix both as follows.
>> 
>> Turn subsection "Background jobs" into a section.
>> 
>> Move it before section "Block devices" in the QEMU QMP Reference
>> Manual, by including qapi/jobs.json right before qapi/block.json.
>> 
>> Reorder include directives in storage-daemon/qapi/qapi-schema.json to
>> match the order in qapi/qapi-schema.json, so that the QEMU Storage
>> Daemon QMP Reference Manual's section structure the QEMU QMP Reference
>> Manual's.
>> 
>> In the QEMU QMP Reference Manual, qapi/cryptodev.json's documentation
>> is at the end of section "Virtio devices".  That's because it lacks a
>> section heading, and therefore gets squashed into whatever section
>> happens to precede it.
>> 
>> Add section heading so it's in section "Cryptography devices".
>> 
>> Signed-off-by: Markus Armbruster 
>> ---
>>  qapi/cryptodev.json  |  4 
>>  qapi/job.json|  2 +-
>>  qapi/qapi-schema.json|  2 +-
>>  storage-daemon/qapi/qapi-schema.json | 22 +++---
>>  4 files changed, 21 insertions(+), 9 deletions(-)
>
> Reviewed-by: Eric Blake 

Thanks!




[PATCH v2 1/1] migration: Disable postcopy + multifd migration

2023-04-24 Thread Leonardo Bras
Since the introduction of multifd, it's possible to perform a multifd
migration and finish it using postcopy.

A bug introduced by yank (fixed in cfc3bcf373) was previously preventing
successful use of this migration scenario, and now things should be
working in most scenarios.

But since there is not enough testing/support for this scenario, nor any
reported users, we should disable this combination before it causes any
problems for users.

Suggested-by: Dr. David Alan Gilbert 
Signed-off-by: Leonardo Bras 
Acked-by: Peter Xu 
Reviewed-by: Dr. David Alan Gilbert 
---
Changes since RFC:
- Updated to latest master branch
- Included Acks and Reviews

 migration/options.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/migration/options.c b/migration/options.c
index 8e8753d9be..b0fc0aa60c 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -322,6 +322,11 @@ bool migrate_caps_check(bool *old_caps, bool *new_caps, Error **errp)
 error_setg(errp, "Postcopy is not compatible with ignore-shared");
 return false;
 }
+
+if (cap_list[MIGRATION_CAPABILITY_MULTIFD]) {
+error_setg(errp, "Postcopy is not yet compatible with multifd");
+return false;
+}
 }
 
 if (new_caps[MIGRATION_CAPABILITY_BACKGROUND_SNAPSHOT]) {
-- 
2.40.0




Re: [RFC PATCH 00/13] gfxstream + rutabaga_gfx: a surprising delight or startling epiphany?

2023-04-24 Thread Gurchetan Singh
On Sat, Apr 22, 2023 at 9:41 AM Akihiko Odaki  wrote:
>
> On 2023/04/22 8:54, Gurchetan Singh wrote:
> > On Fri, Apr 21, 2023 at 9:02 AM Stefan Hajnoczi  wrote:
> >>
> >> On Thu, 20 Apr 2023 at 21:13, Gurchetan Singh
> >>  wrote:
> >>>
> >>> From: Gurchetan Singh 
> >>>
> >>> Rationale:
> >>>
> >>> - gfxstream [a] is good for the Android Emulator/upstream QEMU
> >>>alignment
> >>> - Wayland passthrough [b] via the cross-domain context type is good
> >>>for Linux on Linux display virtualization
> >>> - rutabaga_gfx [c] sits on top of gfxstream, cross-domain and even
> >>>virglrenderer
> >>> - This series ports rutabaga_gfx to QEMU
> >>
> >> What are rutabaga_gfx and gfxstream? Can you explain where they sit in the
> >> stack and how they build on or complement virtio-gpu and
> >> virglrenderer?
> >
> > rutabaga_gfx and gfxstream are both libraries that implement the
> > virtio-gpu protocol.  There's a document available in the Gitlab issue
> > to see where they fit in the stack [a].
> >
> > gfxstream grew out of the Android Emulator's need to virtualize
> > graphics for app developers.  There's a short history of gfxstream in
> > patch 10.  It complements virglrenderer in that it's a bit more
> > cross-platform and targets different use cases -- more detail here
> > [b].  The ultimate goal is to ditch out-of-tree kernel drivers in the
> > Android Emulator and adopt virtio, and porting gfxstream to QEMU would
> > speed up that transition.
>
> I wonder what the motivation is for maintaining gfxstream instead of
> switching to virglrenderer/venus.

gfxstream GLES has features that would require significant redesign to
implement in virgl: multi-threading, live migration, widespread CTS
conformance (virgl only works well on FOSS Linux due to TGSI issues),
memory management to name a few.

Re: gfxstream VK and venus, it's a question of minimizing technical
risk.  Going from upstream to a shipping product that works on
MacOS/Windows/Linux means there's always going to be a long tail of
bugs.

The Android Emulator is still on QEMU 2.12 and the update won't be
easy (there are other things that need to be upstreamed besides GPU),
cross-platform API layering over Vulkan is expected to take 1+ year,
Vulkan doesn't work on many GPUs due to KVM issues [a], and no Vulkan
support at all has landed in upstream QEMU.

Probably the most pragmatic way to do this is to take it step by step,
and align over time by sharing components.  There might be a few
proposals to mesa-dev on that front, but getting upstream QEMU working
is a higher priority right now.  A bulk transition from one stack or
the other would be more difficult to pull off.

The great thing about context types/rutabaga_gfx is that
gfxstream/virglrenderer details are largely hidden from QEMU and
present little maintenance burden.  Yes, a dependency on a new Rust
library is added, but moving towards Rust makes a ton of sense
security-wise long-term anyways.

[a] https://lore.kernel.org/all/20230330085802.2414466-1-steve...@google.com/
-- even if this patch lands today, users will still need 1-2 years to
update

>
> >
> > rutabaga_gfx is a much smaller Rust library that sits on top of
> > gfxstream and even virglrenderer, but does a few extra things.  It
> > implements the cross-domain context type, which provides Wayland
> > passthrough.  This helps virtio-gpu by providing more modern display
> > virtualization.  For example, Microsoft for WSL2 also uses a similar
> > technique [c], but I believe it is not virtio-based nor open-source.
>
> The guest side components of WSLg are open-source, but the host side is
> not: https://github.com/microsoft/wslg
> It also uses DirectX for acceleration so it's not really portable for
> outside Windows.
>
> > With this, we can have the same open-source Wayland passthrough
> > solution on crosvm, QEMU and even Fuchsia [d].  Also, there might be
> > an additional small Rust context type for security-sensitive use cases
> > in the future -- rutabaga_gfx wouldn't compile its gfxstream bindings
> > (since it's C++ based) in such cases.
> >
> > Both gfxstream and rutabaga_gfx are a part of the virtio spec [e] now too.
> >
> > [a] https://gitlab.com/qemu-project/qemu/-/issues/1611
> > [b] https://lists.gnu.org/archive/html/qemu-devel/2023-03/msg04271.html
> > [c] https://www.youtube.com/watch?v=EkNBsBx501Q
> > [d] https://fuchsia-review.googlesource.com/c/fuchsia/+/778764
> > [e] 
> > https://github.com/oasis-tcs/virtio-spec/blob/master/device-types/gpu/description.tex#L533
> >
> >>
> >> Stefan



Re: [RFC PATCH 12/13] HACK: use memory region API to inject memory to guest

2023-04-24 Thread Gurchetan Singh
On Sat, Apr 22, 2023 at 8:46 AM Akihiko Odaki  wrote:
>
> On 2023/04/21 10:12, Gurchetan Singh wrote:
> > I just copied the patches that have been floating around that do
> > this, but it doesn't seem to robustly work.  This current
> > implementation is probably good enough to run vkcube or simple
> > apps, but whenever a test starts to aggressively map/unmap memory,
> > things do explode on the QEMU side.
> >
> > A simple way to reproduce is run:
> >
> > ./deqp-vk --deqp-case=deqp-vk --deqp-case=dEQP-VK.memory.mapping.suballocation.*
> >
> > You should get stack traces that sometimes look like this:
> >
> > 0  __pthread_kill_implementation (no_tid=0, signo=6, 
> > threadid=140737316304448) at ./nptl/pthread_kill.c:44
> > 1  __pthread_kill_internal (signo=6, threadid=140737316304448) at 
> > ./nptl/pthread_kill.c:78
> > 2  __GI___pthread_kill (threadid=140737316304448, signo=signo@entry=6) at 
> > ./nptl/pthread_kill.c:89
> > 3  0x77042476 in __GI_raise (sig=sig@entry=6) at 
> > ../sysdeps/posix/raise.c:26
> > 4  0x770287f3 in __GI_abort () at ./stdlib/abort.c:79
> > 5  0x770896f6 in __libc_message (action=action@entry=do_abort, 
> > fmt=fmt@entry=0x771dbb8c "%s\n") at ../sysdeps/posix/libc_fatal.c:155
> > 6  0x770a0d7c in malloc_printerr (str=str@entry=0x771de7b0 
> > "double free or corruption (out)") at ./malloc/malloc.c:5664
> > 7  0x770a2ef0 in _int_free (av=0x77219c80 , 
> > p=0x57793e00, have_lock=) at ./malloc/malloc.c:4588
> > 8  0x770a54d3 in __GI___libc_free (mem=) at 
> > ./malloc/malloc.c:3391
> > 9  0x55d65e7e in phys_section_destroy (mr=0x57793e10) at 
> > ../softmmu/physmem.c:1003
> > 10 0x55d65ed0 in phys_sections_free (map=0x56d4b410) at 
> > ../softmmu/physmem.c:1011
> > 11 0x55d69578 in address_space_dispatch_free (d=0x56d4b400) at 
> > ../softmmu/physmem.c:2430
> > 12 0x55d58412 in flatview_destroy (view=0x572bb090) at 
> > ../softmmu/memory.c:292
> > 13 0x5600fd23 in call_rcu_thread (opaque=0x0) at ../util/rcu.c:284
> > 14 0x560026d4 in qemu_thread_start (args=0x569cafa0) at 
> > ../util/qemu-thread-posix.c:541
> > 15 0x77094b43 in start_thread (arg=) at 
> > ./nptl/pthread_create.c:442
> > 16 0x77126a00 in clone3 () at 
> > ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
> >
> > or this:
> >
> > 0x55e1dc80 in object_unref (objptr=0x6d656d3c6b6e696c) at 
> > ../qom/object.c:1198
> > 1198g_assert(obj->ref > 0);
> > (gdb) bt
> > 0  0x55e1dc80 in object_unref (objptr=0x6d656d3c6b6e696c) at 
> > ../qom/object.c:1198
> > 1  0x55d5cca5 in memory_region_unref (mr=0x572b9e20) at 
> > ../softmmu/memory.c:1799
> > 2  0x55d65e47 in phys_section_destroy (mr=0x572b9e20) at 
> > ../softmmu/physmem.c:998
> > 3  0x55d65ec7 in phys_sections_free (map=0x588365c0) at 
> > ../softmmu/physmem.c:1011
> > 4  0x55d6956f in address_space_dispatch_free (d=0x588365b0) at 
> > ../softmmu/physmem.c:2430
> > 5  0x55d58409 in flatview_destroy (view=0x58836570) at 
> > ../softmmu/memory.c:292
> > 6  0x5600fd1a in call_rcu_thread (opaque=0x0) at ../util/rcu.c:284
> > 7  0x560026cb in qemu_thread_start (args=0x569cafa0) at 
> > ../util/qemu-thread-posix.c:541
> > 8  0x77094b43 in start_thread (arg=) at 
> > ./nptl/pthread_create.c:442
> > 9  0x77126a00 in clone3 () at 
> > ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
> >
> > The reason seems to be memory regions are handled on a different
> > thread than the virtio-gpu thread, and that inevitably leads to
> > raciness.  The memory region docs[a] generally seem to dissuade this:
> >
> > "In order to do this, as a general rule do not create or destroy
> >   memory regions dynamically during a device’s lifetime, and only
> >   call object_unparent() in the memory region owner’s instance_finalize
> >   callback. The dynamically allocated data structure that contains
> >   the memory region then should obviously be freed in the
> >   instance_finalize callback as well."
> >
> > Though instance_finalize is called before device destruction, so
> > storing the memory until then is unlikely to be an option.  The
> > tests do pass when virtio-gpu doesn't free the memory, but
> > progressively the guest becomes slower and then OOMs.
> >
> > Though the api does make an exception:
> >
> > "There is an exception to the above rule: it is okay to call
> > object_unparent at any time for an alias or a container region. It is
> > therefore also okay to create or destroy alias and container regions
> > dynamically during a device’s lifetime."
> >
> > I believe we are trying to create a container subregion, but that's
> > still failing?  Are we doing it right?  Can any memory region experts
> > here help out?  The other relevant patch in this series
> > is "virtio-gpu: hostmem".
>
> Perhaps dma_memory_map() is what 
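Reading the exception quoted above, the pattern it seems to permit would look roughly like the following. This is a sketch only — it uses QEMU-internal APIs, is not compilable standalone, and the container/field names are made up for illustration; it reflects my reading of the docs, not a verified fix:

```c
/* Sketch only: QEMU-internal APIs, hypothetical names, not standalone code. */

/* At realize time: create a long-lived pure container for host-visible blobs. */
memory_region_init(&g->hostmem, OBJECT(g), "virtio-gpu-hostmem", HOSTMEM_SIZE);

/* Per blob map: a dynamically created RAM subregion inside the container. */
MemoryRegion *mr = g_new0(MemoryRegion, 1);
memory_region_init_ram_ptr(mr, OBJECT(g), "blob", blob_size, host_ptr);
memory_region_add_subregion(&g->hostmem, offset, mr);

/* Per blob unmap: detach, then unparent.  Flatviews referencing the region
 * are torn down only after an RCU grace period, so the MemoryRegion struct
 * and its backing storage must stay alive at least that long. */
memory_region_del_subregion(&g->hostmem, mr);
object_unparent(OBJECT(mr));
```

The backtraces above (double free / object_unref on garbage in flatview_destroy's RCU callback) hint that the region or its backing memory is freed before the deferred flatview teardown runs; deferring that free past the RCU grace period may be the missing piece, but that is an assumption on my part.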

Multiple vIOMMU instance support in QEMU?

2023-04-24 Thread Nicolin Chen
Hi all,

(Please feel free to include related folks into this thread.)

In light of an ongoing nested-IOMMU support effort via IOMMUFD, we
would likely need multi-vIOMMU support in QEMU, or more
specifically multi-vSMMU support for underlying HW that has multiple
physical SMMUs. This would be used in the following use cases.
 1) Multiple physical SMMUs with different feature bits, which a single
vSMMU enabling a nesting configuration cannot reflect properly.
 2) NVIDIA Grace CPU has a VCMDQ HW extension for SMMU CMDQ. Every
VCMDQ HW has an MMIO region (CONS and PROD indexes) that should
be exposed to a VM, so that a hypervisor can avoid trapping by
using this HW accelerator for performance. However, one single
vSMMU cannot mmap multiple MMIO regions from multiple pSMMUs.
 3) With the latest iommufd design, a single vIOMMU model shares the
same stage-2 HW pagetable across all physical SMMUs with a shared
VMID. Then a stage-1 pagetable invalidation (for one device) at
the vSMMU would have to be broadcast to all the SMMU instances,
which would hurt the overall performance.

I previously discussed this topic with Eric in a private email. Eric
noted the difficulty of implementing this in the current QEMU system,
as it would touch different subsystems like IORT and platform device,
since the passthrough devices would be attached to different vIOMMUs.

Yet, given the situations above, it's likely best to duplicate the
vIOMMU instance to match the number of physical SMMU
instances.

So, I am sending this email to collect opinions on this and see what
would be a potential TODO list if we decide to go on this path.

Thanks
Nicolin



Re: [RFC] hw/arm/virt: Provide DT binding generation for PCI eXpander Bridges

2023-04-24 Thread Jonathan Cameron via
On Mon, 24 Apr 2023 16:45:48 +0100
Peter Maydell  wrote:

> On Mon, 24 Apr 2023 at 16:41, Jonathan Cameron
>  wrote:
> >
> > On Mon, 24 Apr 2023 10:28:30 +0100
> > Peter Maydell  wrote:  
> > > So, not knowing anything about CXL, my naive question is:
> > > this is PCI, why can't we handle it the way we handle everything
> > > else PCI, i.e. present the PCI controller in the DTB and it's
> > > the guest kernel's job to probe, enumerate, etc whatever it wants ?  
> >
> > Absolutely the kernel will still do the enumeration.  But there's a problem
> > - it won't always work, unless there is 'enough room'.
> >
> > If the aim is to do it in a similar fashion to how PCI Expander Bridges (PXB)
> > are handled today with ACPI (we could look at doing something different of course)
> >
> > We have one set of memory windows for assigning PCI bars into etc. In the model
> > used for PXB, that set of resources is shared between different host bridges
> > including the main one (each one has separate DT description).
> >
> > So for virt, VIRT_PCIE_MMIO, VIRT_PCIE_IO, VIRT_PCIE_ECAM, VIRT_HIGH_PCIE_ECAM
> > and VIRT_HIGH_PCIE_MMIO are split up between the host bridges.
> > Each host bridge has its own DT description.
> >
> > For ACPI, how to split up the available memory windows depends on the requirements
> > of the PCIe devices below the host bridges.  For ACPI we 'know' the answer
> > as to what is required at the point of creating the description because EDK2
> > did the work for us.  For DT we currently don't.  What to do about that is the
> > question this RFC tries to address.
> >
> > In theory we could change the kernel to support enumerating PXB instances, but
> > that's a very different model from today where they are just 'standard'
> > host bridges, using the generic bindings for ACPI (and after this patch for DT).
> 
> On the other hand, having QEMU enumerate PCI devices is *also* a
> very different model from today, where we assume that the guest
> code is the one that is going to deal with enumerating PCI devices.
> To my mind one of the major advantages of PCI is exactly that it
> is entirely probeable and discoverable, so that there is no need
> for the dtb to include a lot of information that the kernel can
> find out for itself by directly asking the hardware...

I absolutely agree that QEMU enumerating PCI seems a silly level of complexity
to introduce. So the easy route is to just use the bus numbers to partition
resources. We have those available without any complexity. It's not the
same as how it's done with ACPI, but neither are the alternatives
(though maybe they are closer).  Note the current proposed algorithm may be
too simplistic (perhaps some alignment forcing should adjust the division
of the resources to start on round-number addresses)

A more complex route would be to push the whole problem to the kernel.
I doubt we'll get any agreement to add this to the generic host
bridge DT bindings given it's just to support QEMU-specific host bridges
of which the kernel has previously had no knowledge beyond as generic
PCIe host bridges. So we'd need QEMU to detect the presence of PXB
instances, then use a new DT compatible for the main host bridge
and kick off a new type of host bridge discovery.
We'd probably also need to modify EDK2 to recognize this, which will give
us compatibility problems with existing ACPI usage (EDK2 reads
the DT bindings to kick off its PCIe enumeration, so this gets fiddly).
We may also have problems with the kernel having to do late discovery of
host bridges (I haven't tried it, but it sounds 'interesting').

Anyhow, I promised some illustrative examples.

For reference, here's one of my test configurations
(slightly pathological as it exercises some of the heuristics in enumeration)

-+-[:00]-+-00.0  Red Hat, Inc. QEMU PCIe Host bridge
 |   +-01.0-[01]--
 |   +-02.0  Red Hat, Inc. Virtio block device
 |   +-03.0  Red Hat, Inc. Virtio network device
 |   +-04.0  Red Hat, Inc. QEMU PCIe Expander bridge
 |   \-05.0  Red Hat, Inc. QEMU PCIe Expander bridge
 +-[:0c]---00.0-[0d-0f]--+-00.0-[0e-0f]00.0-[0f]00.0  Red Hat, Inc. Virtio RNG
 |   \-00.1  Red Hat, Inc. Virtio RNG
 \-[:30]---00.0-[31]00.0  Red Hat, Inc. Virtio RNG

It's somewhat easier to see the current ACPI results vs what this
patch proposes for DT by looking at /proc/iomem than at the
firmware descriptions themselves. /proc/iomem includes
the firmware described host bridge windows and the PCI buses and devices
enumerated below them (EDK2 enumerated for ACPI, Linux kernel for DT)
There are some other subtle differences of how the memory is mapped, but
hopefully you can see the relationship between what happens with ACPI
(requiring enumeration) and the proposal in this RFC for DT which doesn't.

Relevant chunks of /proc/iomem for ACPI for 32 bit region (64 bit one similar.)
//FW described main HB

Re: [PATCH] pci: make ROM memory resizable

2023-04-24 Thread Vladimir Sementsov-Ogievskiy

On 25.04.23 00:06, Michael S. Tsirkin wrote:

On Tue, Apr 25, 2023 at 12:02:49AM +0300, Vladimir Sementsov-Ogievskiy wrote:

On 24.04.23 23:45, Michael S. Tsirkin wrote:

On Mon, Apr 24, 2023 at 04:42:00PM -0400, Michael S. Tsirkin wrote:

On Mon, Apr 24, 2023 at 11:36:47PM +0300, Vladimir Sementsov-Ogievskiy wrote:

On migration, on target we load local ROM file. But actual ROM content
migrates through migration channel. Original ROM content from local
file doesn't matter. But when sizes mismatch, we get an error like

   Size mismatch: :00:03.0/virtio-net-pci.rom: 0x4 != 0x8: Invalid 
argument

Let's just allow resizing of ROM memory. This way migration does not
rely on the local ROM file on the target node, which is loaded by default but
is not actually needed.

Signed-off-by: Vladimir Sementsov-Ogievskiy

Also isn't ROM size reflected in config space etc?
I don't remember code that would update that on migration.




Thanks a lot for fast answers!

Hmm. I'm a newbie in these things.

But yes, I noted that my patch helps if, for example, I jump from a 200K to a
500K zero-filled ROM file. But if I jump to 600K, migration fails with

(qemu) qemu: get_pci_config_device: Bad config data: i=0x32 read: b8 device: 0 
cmask: ff wmask: f0 w1cmask:0
qemu: Failed to load PCIDevice:config
qemu: Failed to load virtio-net:virtio
qemu: error while loading state for instance 0x0 of device 
':00:03.0/virtio-net'
qemu: load of migration failed: Invalid argument


I thought that maybe the romfile for this device just mustn't be more than
512K, where the config starts. But now I think it's exactly the problem you
are talking about.


I also know that there was another step around this problem: 08b1df8ff463e72b087
"pci: add romsize property". But it doesn't help when you already have a
running instance with a small ROM and want to migrate it to a node where the
corresponding local ROM file has been updated to a new package with a bigger size.



set romsize on destination?


This does not work if you try to set a smaller size; it says:

romfile "b" (409600 bytes) is too large for ROM size 262144

so we need an additional option like romalloc=on, which says: don't load any
file, just allocate the ROM according to the romsize option. Or just handle
romfile="",romsize=SOME_SIZE that way.

But I'm still interested in the possibility of avoiding any additional option
on the target.




Hmm. So simply reusing "resizable" memory blocks doesn't help, and I need a
more precise reinitialization of the device when loading the incoming
migration.

--
Best regards,
Vladimir








Re: [RFC PATCH v3 14/20] configure: use 'mkvenv ensure meson' to bootstrap meson

2023-04-24 Thread Warner Losh
On Mon, Apr 24, 2023 at 2:41 PM John Snow  wrote:

> On Mon, Apr 24, 2023 at 4:36 PM Warner Losh  wrote:
> >
> >
> >
> > On Mon, Apr 24, 2023, 2:03 PM John Snow  wrote:
> >>
> >> This commit changes how we detect and install meson. It notably removes
> >> '--meson='.
> >>
> >> The previous patch creates a lightweight Python virtual environment
> >> unconditionally using the user's configured $python that inherits system
> >> packages. If Meson is installed there and meets our minimum version
> >> requirements, we will use that Meson.
> >>
> >> In the event that Meson is installed but *not for the chosen Python
> >> interpreter*, not found, or of insufficient version, we will attempt to
> >> install Meson from vendored source into the newly created Python virtual
> >> environment. This vendored installation is considered to replace the
> >> mechanism from prior tarball distributions.
> >>
> >> This commit restores the ability to use a system meson, but in turn
> >> temporarily removes the ability to use a meson as obtained from the
> >> internet at configure-time (git submodules, as it stood prior to this
> >> patch); that ability will be restored in the next commit.
> >>
> >> As a result of this patch, the Python interpreter we use for both our
> >> own build scripts *and* Meson extensions is always known to be the
> >> exact same Python. As a further benefit, there will also be a symlink
> >> available in the build directory that points to the correct, configured
> >> python and can be used by e.g. manual tests to invoke the correct,
> >> configured Python unambiguously.
> >>
> >> Signed-off-by: John Snow 
> >> ---
> >>  configure   | 72 -
> >>  .gitlab-ci.d/buildtest-template.yml |  4 +-
> >>  2 files changed, 21 insertions(+), 55 deletions(-)
> >>
> >> diff --git a/configure b/configure
> >> index 462fe604d6..e9947369b2 100755
> >> --- a/configure
> >> +++ b/configure
> >> @@ -731,8 +731,6 @@ for opt do
> >>;;
> >>--skip-meson) skip_meson=yes
> >>;;
> >> -  --meson=*) meson="$optarg"
> >> -  ;;
> >>--ninja=*) ninja="$optarg"
> >>;;
> >>--smbd=*) smbd="$optarg"
> >> @@ -1016,7 +1014,6 @@ Advanced options (experts only):
> >>--cross-prefix-ARCH=PREFIX cross compiler prefix when building ARCH guest test cases
> >>--make=MAKE  use specified make [$make]
> >>--python=PYTHON  use specified python [$python]
> >> -  --meson=MESONuse specified meson [$meson]
> >>--ninja=NINJAuse specified ninja [$ninja]
> >>--smbd=SMBD  use specified smbd [$smbd]
> >>--with-git=GIT   use specified git [$git]
> >> @@ -1089,7 +1086,8 @@ fi
> >>
> >>  # Resolve PATH
> >>  python="$(command -v "$python")"
> >> -explicit_python=yes
> >> +# This variable is intended to be used only for error messages:
> >> +target_python=$python
> >>
> >>  # Create a Python virtual environment using our configured python.
> >>  # The stdout of this script will be the location of a symlink that
> >> @@ -1101,7 +1099,6 @@ explicit_python=yes
> >>  # - venv is cleared if it exists already;
> >>  # - venv is allowed to use system packages;
> >>  # - all setup is performed **offline**;
> >> -# - No packages are installed by default;
> >>  # - pip is not installed into the venv when possible,
> >>  #   but ensurepip is called as a fallback when necessary.
> >>
> >> @@ -1116,58 +1113,27 @@ fi
> >>  # Suppress writing compiled files
> >>  python="$python -B"
> >>
> >> -has_meson() {
> >> -  local python_dir=$(dirname "$python")
> >> -  # PEP405: pyvenv.cfg is either adjacent to the Python executable
> >> -  # or one directory above
> >> -  if test -f $python_dir/pyvenv.cfg || test -f $python_dir/../pyvenv.cfg; then
> >> -# Ensure that Meson and Python come from the same virtual environment
> >> -test -x "$python_dir/meson" &&
> >> -  test "$(command -v meson)" -ef "$python_dir/meson"
> >> -  else
> >> -has meson
> >> -  fi
> >> -}
> >>
> >> -if test -z "$meson"; then
> >> -if test "$explicit_python" = no && has_meson && version_ge "$(meson --version)" 0.61.5; then
> >> -meson=meson
> >> -elif test "$git_submodules_action" != 'ignore' ; then
> >> -meson=git
> >> -elif test -e "${source_path}/meson/meson.py" ; then
> >> -meson=internal
> >> -else
> >> -if test "$explicit_python" = yes; then
> >> -error_exit "--python requires using QEMU's embedded Meson distribution, but it was not found."
> >> -else
> >> -error_exit "Meson not found.  Use --meson=/path/to/meson"
> >> -fi
> >> +if ! $python "${source_path}/python/scripts/mkvenv.py" ensure \
> >> + --dir "${source_path}/python/wheels" \
> >> + "meson>=0.61.5" ;
> >> +then
> >> +# We're very out of luck. Try to give a good diagnostic.
> >> +if test -e pyvenv/bin/meson; then
> >> +echo "Meson is too old:
> >
> >
> > 

Re: [PATCH] pci: make ROM memory resizable

2023-04-24 Thread Michael S. Tsirkin
On Tue, Apr 25, 2023 at 12:02:49AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> On 24.04.23 23:45, Michael S. Tsirkin wrote:
> > On Mon, Apr 24, 2023 at 04:42:00PM -0400, Michael S. Tsirkin wrote:
> > > On Mon, Apr 24, 2023 at 11:36:47PM +0300, Vladimir Sementsov-Ogievskiy 
> > > wrote:
> > > > On migration, on target we load local ROM file. But actual ROM content
> > > > migrates through migration channel. Original ROM content from local
> > > > file doesn't matter. But when sizes mismatch, we get an error like
> > > > 
> > > >   Size mismatch: :00:03.0/virtio-net-pci.rom: 0x4 != 0x8: 
> > > > Invalid argument
> > > > 
> > > > Let's just allow resizing of ROM memory. This way migration does not
> > > > rely on the local ROM file on the target node, which is loaded by default but
> > > > is not actually needed.
> > > > 
> > > > Signed-off-by: Vladimir Sementsov-Ogievskiy
> > Also isn't ROM size reflected in config space etc?
> > I don't remember code that would update that on migration.
> > 
> > 
> 
> Thanks a lot for fast answers!
> 
> Hmm. I'm a newbie in these things.
> 
> But yes, I noted that my patch helps if, for example, I jump from a 200K to a
> 500K zero-filled ROM file. But if I jump to 600K, migration fails with
> 
> (qemu) qemu: get_pci_config_device: Bad config data: i=0x32 read: b8 device: 
> 0 cmask: ff wmask: f0 w1cmask:0
> qemu: Failed to load PCIDevice:config
> qemu: Failed to load virtio-net:virtio
> qemu: error while loading state for instance 0x0 of device 
> ':00:03.0/virtio-net'
> qemu: load of migration failed: Invalid argument
> 
> 
> I thought that maybe the romfile for this device just mustn't be more than
> 512K, where the config starts. But now I think it's exactly the problem you
> are talking about.
> 
> 
> I also know that there was another step around this problem:
> 08b1df8ff463e72b087 "pci: add romsize property". But it doesn't help when
> you already have a running instance with a small ROM and want to migrate it to
> a node where the corresponding local ROM file has been updated to a new
> package with a bigger size.
> 

set romsize on destination?

> Hmm. So simply reusing "resizable" memory blocks doesn't help, and I need 
> more precise reinitialization of the device when loading an incoming migration.
> 
> -- 
> Best regards,
> Vladimir
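Michael's point that the ROM size is reflected in config space can be made concrete: a PCI expansion ROM BAR advertises its size through which address bits are writable. The following is an illustrative model of that sizing rule, not QEMU's actual code; the function names are hypothetical.

```python
# Illustrative model of PCI expansion ROM BAR sizing -- not QEMU code.
# The BAR hardwires its low address bits to zero, so the guest learns
# the size by writing all-ones and reading back the mask. A ROM image
# that grows past the next power of two therefore changes the
# guest-visible config space, which is what a config-space compare on
# migration would catch.

def rom_bar_size(rom_len: int) -> int:
    """Round a ROM image length up to the next power of two."""
    size = 1
    while size < rom_len:
        size <<= 1
    return size

def rom_bar_address_mask(rom_len: int) -> int:
    """Writable address bits of the 32-bit ROM BAR for this image size."""
    return (~(rom_bar_size(rom_len) - 1)) & 0xFFFFFFFF
```

Under this model a 500K image fits in a 512 KiB BAR while a 600K image needs a 1 MiB BAR, so the two expose different masks.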




Re: [PATCH] pci: make ROM memory resizable

2023-04-24 Thread Vladimir Sementsov-Ogievskiy

On 24.04.23 23:45, Michael S. Tsirkin wrote:

On Mon, Apr 24, 2023 at 04:42:00PM -0400, Michael S. Tsirkin wrote:

On Mon, Apr 24, 2023 at 11:36:47PM +0300, Vladimir Sementsov-Ogievskiy wrote:

On migration, the target loads its local ROM file, but the actual ROM
content migrates through the migration channel, so the original ROM
content from the local file doesn't matter. When the sizes mismatch,
we get an error like

  Size mismatch: :00:03.0/virtio-net-pci.rom: 0x4 != 0x8: Invalid 
argument

Let's just allow resizing of ROM memory. This way migration does not
rely on the local ROM file on the target node, which is loaded by
default but is not actually needed.

Signed-off-by: Vladimir Sementsov-Ogievskiy

Also isn't ROM size reflected in config space etc?
I don't remember code that would update that on migration.




Thanks a lot for the fast answers!

Hmm. I'm a newbie in these things.

But yes, I noticed that my patch helps if, for example, I jump from a 200K 
to a 500K zero-filled ROM file. But if I jump to 600K, migration fails with

(qemu) qemu: get_pci_config_device: Bad config data: i=0x32 read: b8 device: 0 
cmask: ff wmask: f0 w1cmask:0
qemu: Failed to load PCIDevice:config
qemu: Failed to load virtio-net:virtio
qemu: error while loading state for instance 0x0 of device 
':00:03.0/virtio-net'
qemu: load of migration failed: Invalid argument


I thought that maybe the romfile for this device simply mustn't be larger than 
512K, where the config starts. But now I think this is exactly the problem you 
are talking about.


I know also that there was another step around this problem: 08b1df8ff463e72b087 
"pci: add romsize property". But it doesn't help when you already have a 
running instance with a small ROM and want to migrate it to a node where the 
corresponding local ROM file has been updated to a new package with a bigger size.


Hmm. So simply reusing "resizable" memory blocks doesn't help, and I need more 
precise reinitialization of the device when loading an incoming migration.

--
Best regards,
Vladimir




Re: [PATCH v2 4/6] tests/qtest: make more migration pre-copy scenarios run non-live

2023-04-24 Thread Fabiano Rosas
Daniel P. Berrangé  writes:

> There are 27 pre-copy live migration scenarios being tested. In all of
> these we force non-convergence and run for one iteration, then let it
> converge and wait for completion during the second (or following)
> iterations. At a 3 mbps bandwidth limit the first iteration takes a very
> long time (~30 seconds).
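The ~30 second figure is easy to sanity-check with back-of-the-envelope arithmetic. The 11.25 MB figure below is an assumption chosen to illustrate the 3 mbps limit, not the actual amount of memory migration-test transfers in its first pass:

```python
# Rough timing model: seconds to push a payload through a rate-limited
# migration channel. mbps is megabits per second.
def transfer_seconds(num_bytes: int, mbps: float) -> float:
    """Seconds needed to send num_bytes over an mbps-limited link."""
    return num_bytes * 8 / (mbps * 1_000_000)

# transfer_seconds(11_250_000, 3) -> 30.0 seconds
```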
>
> While it is important to test the migration passes and convergence
> logic, it is overkill to do this for all 27 pre-copy scenarios. The
> TLS migration scenarios in particular are merely exercising different
> code paths during connection establishment.
>
> To optimize time taken, switch most of the test scenarios to run
> non-live (ie guest CPUs paused) with no bandwidth limits. This gives
> a massive speed up for most of the test scenarios.
>
> For test coverage the following scenarios are unchanged
>
>  * Precopy with UNIX sockets
>  * Precopy with UNIX sockets and dirty ring tracking
>  * Precopy with XBZRLE
>  * Precopy with multifd
>
> Signed-off-by: Daniel P. Berrangé 
> ---
>  tests/qtest/migration-test.c | 60 ++--
>  1 file changed, 50 insertions(+), 10 deletions(-)
>
> diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
> index 6492ffa7fe..40d0f75480 100644
> --- a/tests/qtest/migration-test.c
> +++ b/tests/qtest/migration-test.c
> @@ -568,6 +568,9 @@ typedef struct {
>  MIG_TEST_FAIL_DEST_QUIT_ERR,
>  } result;
>  
> +/* Whether the guest CPUs should be running during migration */
> +bool live;
> +
>  /* Postcopy specific fields */
>  void *postcopy_data;
>  bool postcopy_preempt;
> @@ -1324,8 +1327,6 @@ static void test_precopy_common(MigrateCommon *args)
>  return;
>  }
>  
> -migrate_ensure_non_converge(from);
> -
>  if (args->start_hook) {
>  data_hook = args->start_hook(from, to);
>  }
> @@ -1335,6 +1336,31 @@ static void test_precopy_common(MigrateCommon *args)
>  wait_for_serial("src_serial");
>  }
>  
> +if (args->live) {
> +/*
> + * Testing live migration, we want to ensure that some
> + * memory is re-dirtied after being transferred, so that
> + * we exercise logic for dirty page handling. We achieve
> + * this with a ridiculously low bandwidth that guarantees
> + * non-convergence.
> + */
> +migrate_ensure_non_converge(from);
> +} else {
> +/*
> + * Testing non-live migration, we allow it to run at
> + * full speed to ensure short test case duration.
> + * For tests expected to fail, we don't need to
> + * change anything.
> + */
> +if (args->result == MIG_TEST_SUCCEED) {
> +qtest_qmp_assert_success(from, "{ 'execute' : 'stop'}");
> +if (!got_stop) {
> +qtest_qmp_eventwait(from, "STOP");
> +}
> +migrate_ensure_converge(from);
> +}
> +}
> +
>  if (!args->connect_uri) {
>  g_autofree char *local_connect_uri =
>  migrate_get_socket_address(to, "socket-address");
> @@ -1352,19 +1378,29 @@ static void test_precopy_common(MigrateCommon *args)
>  qtest_set_expected_status(to, EXIT_FAILURE);
>  }
>  } else {
> -wait_for_migration_pass(from);
> +if (args->live) {
> +wait_for_migration_pass(from);
>  
> -migrate_ensure_converge(from);
> +migrate_ensure_converge(from);
>  
> -/* We do this first, as it has a timeout to stop us
> - * hanging forever if migration didn't converge */
> -wait_for_migration_complete(from);
> +/*
> + * We do this first, as it has a timeout to stop us
> + * hanging forever if migration didn't converge
> + */
> +wait_for_migration_complete(from);
> +
> +if (!got_stop) {
> +qtest_qmp_eventwait(from, "STOP");
> +}
> +} else {
> +wait_for_migration_complete(from);
>  
> -if (!got_stop) {
> -qtest_qmp_eventwait(from, "STOP");
> +qtest_qmp_assert_success(to, "{ 'execute' : 'cont'}");

I retested and the problem still persists. The issue is with this wait +
cont sequence:

wait_for_migration_complete(from);
qtest_qmp_assert_success(to, "{ 'execute' : 'cont'}");

We wait for the source to finish, but by the time qmp_cont executes the
dst is still INMIGRATE; autostart gets set and I never see the RESUME
event.

When the dst migration finishes the VM gets put in RUN_STATE_PAUSED (at
process_incoming_migration_bh):

if (!global_state_received() ||
global_state_get_runstate() == RUN_STATE_RUNNING) {
if (autostart) {
vm_start();
} else {
runstate_set(RUN_STATE_PAUSED);
}
} else if (migration_incoming_colo_enabled()) {
migration_incoming_disable_colo();
vm_start();
} 
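The race described above -- 'cont' arriving while the destination is still in the 'inmigrate' runstate -- suggests waiting for the destination runstate to change before resuming. A minimal sketch of such a wait loop follows; the predicate callback standing in for a QMP 'query-status' wrapper is an assumption, not migration-test API:

```python
import time

def wait_until(predicate, timeout=30.0, poll=0.05):
    """Poll predicate() until it returns True or the timeout expires.

    Intended use: only issue 'cont' once a runstate query on the
    destination stops reporting 'inmigrate'. Returns True on success,
    False if the timeout expired first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll)
    return False
```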

Re: [PATCH] pci: make ROM memory resizable

2023-04-24 Thread Vladimir Sementsov-Ogievskiy

On 24.04.23 23:41, Michael S. Tsirkin wrote:

@@ -2366,7 +2368,8 @@ static void pci_add_option_rom(PCIDevice *pdev, bool 
is_default_rom,
  snprintf(name, sizeof(name), "%s.rom", 
object_get_typename(OBJECT(pdev)));
  }
  pdev->has_rom = true;
-memory_region_init_rom(&pdev->rom, OBJECT(pdev), name, pdev->romsize, 
&error_fatal);
+memory_region_init_rom_resizable(&pdev->rom, OBJECT(pdev), name,
+ pdev->romsize, MAX_ROM_SIZE, 
&error_fatal);
  ptr = memory_region_get_ram_ptr(&pdev->rom);
  if (load_image_size(path, ptr, size) < 0) {
  error_setg(errp, "failed to load romfile \"%s\"", pdev->romfile);

You know this steals 2GB from address space, yes? This is quite a lot


Oops no, I didn't.

--
Best regards,
Vladimir




Re: [PATCH] pci: make ROM memory resizable

2023-04-24 Thread Michael S. Tsirkin
On Mon, Apr 24, 2023 at 04:42:00PM -0400, Michael S. Tsirkin wrote:
> On Mon, Apr 24, 2023 at 11:36:47PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > On migration, the target loads its local ROM file, but the actual ROM
> > content migrates through the migration channel, so the original ROM
> > content from the local file doesn't matter. When the sizes mismatch,
> > we get an error like
> > 
> >  Size mismatch: :00:03.0/virtio-net-pci.rom: 0x4 != 0x8: 
> > Invalid argument
> > 
> > Let's just allow resizing of ROM memory. This way migration does not
> > rely on the local ROM file on the target node, which is loaded by
> > default but is not actually needed.
> > 
> > Signed-off-by: Vladimir Sementsov-Ogievskiy 

Also isn't ROM size reflected in config space etc?
I don't remember code that would update that on migration.


> > ---
> >  hw/pci/pci.c  |  7 +--
> >  include/exec/memory.h | 26 ++
> >  softmmu/memory.c  | 39 +++
> >  3 files changed, 70 insertions(+), 2 deletions(-)
> > 
> > diff --git a/hw/pci/pci.c b/hw/pci/pci.c
> > index def5000e7b..72ee8f6aea 100644
> > --- a/hw/pci/pci.c
> > +++ b/hw/pci/pci.c
> > @@ -59,6 +59,8 @@
> >  # define PCI_DPRINTF(format, ...)   do { } while (0)
> >  #endif
> >  
> > +#define MAX_ROM_SIZE (2 * GiB)
> > +
> >  bool pci_available = true;
> >  
> >  static char *pcibus_get_dev_path(DeviceState *dev);
> > @@ -2341,7 +2343,7 @@ static void pci_add_option_rom(PCIDevice *pdev, bool 
> > is_default_rom,
> >  error_setg(errp, "romfile \"%s\" is empty", pdev->romfile);
> >  g_free(path);
> >  return;
> > -} else if (size > 2 * GiB) {
> > +} else if (size > MAX_ROM_SIZE) {
> >  error_setg(errp, "romfile \"%s\" too large (size cannot exceed 2 
> > GiB)",
> > pdev->romfile);
> >  g_free(path);
> > @@ -2366,7 +2368,8 @@ static void pci_add_option_rom(PCIDevice *pdev, bool 
> > is_default_rom,
> >  snprintf(name, sizeof(name), "%s.rom", 
> > object_get_typename(OBJECT(pdev)));
> >  }
> >  pdev->has_rom = true;
> > -memory_region_init_rom(&pdev->rom, OBJECT(pdev), name, pdev->romsize, 
> > &error_fatal);
> > +memory_region_init_rom_resizable(&pdev->rom, OBJECT(pdev), name,
> > + pdev->romsize, MAX_ROM_SIZE, 
> > &error_fatal);
> >  ptr = memory_region_get_ram_ptr(&pdev->rom);
> >  if (load_image_size(path, ptr, size) < 0) {
> >  error_setg(errp, "failed to load romfile \"%s\"", pdev->romfile);
> 
> You know this steals 2GB from address space, yes? This is quite a lot
> ...
> 
> > diff --git a/include/exec/memory.h b/include/exec/memory.h
> > index 15ade918ba..ed1e5d9126 100644
> > --- a/include/exec/memory.h
> > +++ b/include/exec/memory.h
> > @@ -1453,6 +1453,19 @@ void memory_region_init_rom_nomigrate(MemoryRegion 
> > *mr,
> >uint64_t size,
> >Error **errp);
> >  
> > +/*
> > + * memory_region_init_rom_nomigrate_resizable: same as
> > + * memory_region_init_rom_nomigrate(), but initialize resizable memory 
> > region.
> > + *
> > + * @max_size maximum allowed size.
> > + */
> > +void memory_region_init_rom_nomigrate_resizable(MemoryRegion *mr,
> > +struct Object *owner,
> > +const char *name,
> > +uint64_t size,
> > +uint64_t max_size,
> > +Error **errp);
> > +
> >  /**
> >   * memory_region_init_rom_device_nomigrate:  Initialize a ROM memory 
> > region.
> >   * Writes are handled via callbacks.
> > @@ -1562,6 +1575,19 @@ void memory_region_init_rom(MemoryRegion *mr,
> >  uint64_t size,
> >  Error **errp);
> >  
> > +/*
> > + * memory_region_init_rom_resizable: same as memory_region_init_rom(),
> > + * but initialize resizable memory region.
> > + *
> > + * @max_size maximum allowed size.
> > + */
> > +void memory_region_init_rom_resizable(MemoryRegion *mr,
> > +  struct Object *owner,
> > +  const char *name,
> > +  uint64_t size,
> > +  uint64_t max_size,
> > +  Error **errp);
> > +
> >  /**
> >   * memory_region_init_rom_device:  Initialize a ROM memory region.
> >   * Writes are handled via callbacks.
> > diff --git a/softmmu/memory.c b/softmmu/memory.c
> > index b1a6cae6f5..744d03bc02 100644
> > --- a/softmmu/memory.c
> > +++ b/softmmu/memory.c
> > @@ -1701,6 +1701,18 @@ void memory_region_init_rom_nomigrate(MemoryRegion 
> > *mr,
> >  mr->readonly = true;
> >  }
> >  
> > +void 

Re: [PATCH v3 02/13] migration: Move qmp_migrate_set_parameters() to options.c

2023-04-24 Thread Vladimir Sementsov-Ogievskiy

On 24.04.23 21:32, Juan Quintela wrote:

Signed-off-by: Juan Quintela


Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir




Re: [PATCH] pci: make ROM memory resizable

2023-04-24 Thread Michael S. Tsirkin
On Mon, Apr 24, 2023 at 11:36:47PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> On migration, the target loads its local ROM file, but the actual ROM
> content migrates through the migration channel, so the original ROM
> content from the local file doesn't matter. When the sizes mismatch,
> we get an error like
> 
>  Size mismatch: :00:03.0/virtio-net-pci.rom: 0x4 != 0x8: Invalid 
> argument
> 
> Let's just allow resizing of ROM memory. This way migration does not
> rely on the local ROM file on the target node, which is loaded by
> default but is not actually needed.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy 
> ---
>  hw/pci/pci.c  |  7 +--
>  include/exec/memory.h | 26 ++
>  softmmu/memory.c  | 39 +++
>  3 files changed, 70 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/pci/pci.c b/hw/pci/pci.c
> index def5000e7b..72ee8f6aea 100644
> --- a/hw/pci/pci.c
> +++ b/hw/pci/pci.c
> @@ -59,6 +59,8 @@
>  # define PCI_DPRINTF(format, ...)   do { } while (0)
>  #endif
>  
> +#define MAX_ROM_SIZE (2 * GiB)
> +
>  bool pci_available = true;
>  
>  static char *pcibus_get_dev_path(DeviceState *dev);
> @@ -2341,7 +2343,7 @@ static void pci_add_option_rom(PCIDevice *pdev, bool 
> is_default_rom,
>  error_setg(errp, "romfile \"%s\" is empty", pdev->romfile);
>  g_free(path);
>  return;
> -} else if (size > 2 * GiB) {
> +} else if (size > MAX_ROM_SIZE) {
>  error_setg(errp, "romfile \"%s\" too large (size cannot exceed 2 
> GiB)",
> pdev->romfile);
>  g_free(path);
> @@ -2366,7 +2368,8 @@ static void pci_add_option_rom(PCIDevice *pdev, bool 
> is_default_rom,
>  snprintf(name, sizeof(name), "%s.rom", 
> object_get_typename(OBJECT(pdev)));
>  }
>  pdev->has_rom = true;
> -memory_region_init_rom(&pdev->rom, OBJECT(pdev), name, pdev->romsize, 
> &error_fatal);
> +memory_region_init_rom_resizable(&pdev->rom, OBJECT(pdev), name,
> + pdev->romsize, MAX_ROM_SIZE, 
> &error_fatal);
>  ptr = memory_region_get_ram_ptr(&pdev->rom);
>  if (load_image_size(path, ptr, size) < 0) {
>  error_setg(errp, "failed to load romfile \"%s\"", pdev->romfile);

You know this steals 2GB from address space, yes? This is quite a lot
...

> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index 15ade918ba..ed1e5d9126 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -1453,6 +1453,19 @@ void memory_region_init_rom_nomigrate(MemoryRegion *mr,
>uint64_t size,
>Error **errp);
>  
> +/*
> + * memory_region_init_rom_nomigrate_resizable: same as
> + * memory_region_init_rom_nomigrate(), but initialize resizable memory 
> region.
> + *
> + * @max_size maximum allowed size.
> + */
> +void memory_region_init_rom_nomigrate_resizable(MemoryRegion *mr,
> +struct Object *owner,
> +const char *name,
> +uint64_t size,
> +uint64_t max_size,
> +Error **errp);
> +
>  /**
>   * memory_region_init_rom_device_nomigrate:  Initialize a ROM memory region.
>   * Writes are handled via callbacks.
> @@ -1562,6 +1575,19 @@ void memory_region_init_rom(MemoryRegion *mr,
>  uint64_t size,
>  Error **errp);
>  
> +/*
> + * memory_region_init_rom_resizable: same as memory_region_init_rom(),
> + * but initialize resizable memory region.
> + *
> + * @max_size maximum allowed size.
> + */
> +void memory_region_init_rom_resizable(MemoryRegion *mr,
> +  struct Object *owner,
> +  const char *name,
> +  uint64_t size,
> +  uint64_t max_size,
> +  Error **errp);
> +
>  /**
>   * memory_region_init_rom_device:  Initialize a ROM memory region.
>   * Writes are handled via callbacks.
> diff --git a/softmmu/memory.c b/softmmu/memory.c
> index b1a6cae6f5..744d03bc02 100644
> --- a/softmmu/memory.c
> +++ b/softmmu/memory.c
> @@ -1701,6 +1701,18 @@ void memory_region_init_rom_nomigrate(MemoryRegion *mr,
>  mr->readonly = true;
>  }
>  
> +void memory_region_init_rom_nomigrate_resizable(MemoryRegion *mr,
> +struct Object *owner,
> +const char *name,
> +uint64_t size,
> +uint64_t max_size,
> +Error **errp)
> +{
> +

Re: [RFC PATCH v3 14/20] configure: use 'mkvenv ensure meson' to bootstrap meson

2023-04-24 Thread John Snow
On Mon, Apr 24, 2023 at 4:36 PM Warner Losh  wrote:
>
>
>
> On Mon, Apr 24, 2023, 2:03 PM John Snow  wrote:
>>
>> This commit changes how we detect and install meson. It notably removes
>> '--meson='.
>>
>> The previous patch creates a lightweight Python virtual environment
>> unconditionally using the user's configured $python that inherits system
>> packages. If Meson is installed there and meets our minimum version
>> requirements, we will use that Meson.
>>
>> In the event that Meson is installed but *not for the chosen Python
>> interpreter*, not found, or of insufficient version, we will attempt to
>> install Meson from vendored source into the newly created Python virtual
>> environment. This vendored installation is considered to replace the
>> mechanism from prior tarball distributions.
>>
>> This commit restores the ability to use a system meson, but in turn
>> temporarily removes the ability to use a meson as obtained from the
>> internet at configure-time (git submodules, as it stood prior to this
>> patch); that ability will be restored in the next commit.
>>
>> As a result of this patch, the Python interpreter we use for both our
>> own build scripts *and* Meson extensions are always known to be the
>> exact same Python. As a further benefit, there will also be a symlink
>> available in the build directory that points to the correct, configured
>> python and can be used by e.g. manual tests to invoke the correct,
>> configured Python unambiguously.
>>
>> Signed-off-by: John Snow 
>> ---
>>  configure   | 72 -
>>  .gitlab-ci.d/buildtest-template.yml |  4 +-
>>  2 files changed, 21 insertions(+), 55 deletions(-)
>>
>> diff --git a/configure b/configure
>> index 462fe604d6..e9947369b2 100755
>> --- a/configure
>> +++ b/configure
>> @@ -731,8 +731,6 @@ for opt do
>>;;
>>--skip-meson) skip_meson=yes
>>;;
>> -  --meson=*) meson="$optarg"
>> -  ;;
>>--ninja=*) ninja="$optarg"
>>;;
>>--smbd=*) smbd="$optarg"
>> @@ -1016,7 +1014,6 @@ Advanced options (experts only):
>>--cross-prefix-ARCH=PREFIX cross compiler prefix when building ARCH guest 
>> test cases
>>--make=MAKE  use specified make [$make]
>>--python=PYTHON  use specified python [$python]
>> -  --meson=MESONuse specified meson [$meson]
>>--ninja=NINJAuse specified ninja [$ninja]
>>--smbd=SMBD  use specified smbd [$smbd]
>>--with-git=GIT   use specified git [$git]
>> @@ -1089,7 +1086,8 @@ fi
>>
>>  # Resolve PATH
>>  python="$(command -v "$python")"
>> -explicit_python=yes
>> +# This variable is intended to be used only for error messages:
>> +target_python=$python
>>
>>  # Create a Python virtual environment using our configured python.
>>  # The stdout of this script will be the location of a symlink that
>> @@ -1101,7 +1099,6 @@ explicit_python=yes
>>  # - venv is cleared if it exists already;
>>  # - venv is allowed to use system packages;
>>  # - all setup is performed **offline**;
>> -# - No packages are installed by default;
>>  # - pip is not installed into the venv when possible,
>>  #   but ensurepip is called as a fallback when necessary.
>>
>> @@ -1116,58 +1113,27 @@ fi
>>  # Suppress writing compiled files
>>  python="$python -B"
>>
>> -has_meson() {
>> -  local python_dir=$(dirname "$python")
>> -  # PEP405: pyvenv.cfg is either adjacent to the Python executable
>> -  # or one directory above
>> -  if test -f $python_dir/pyvenv.cfg || test -f $python_dir/../pyvenv.cfg; 
>> then
>> -# Ensure that Meson and Python come from the same virtual environment
>> -test -x "$python_dir/meson" &&
>> -  test "$(command -v meson)" -ef "$python_dir/meson"
>> -  else
>> -has meson
>> -  fi
>> -}
>>
>> -if test -z "$meson"; then
>> -if test "$explicit_python" = no && has_meson && version_ge "$(meson 
>> --version)" 0.61.5; then
>> -meson=meson
>> -elif test "$git_submodules_action" != 'ignore' ; then
>> -meson=git
>> -elif test -e "${source_path}/meson/meson.py" ; then
>> -meson=internal
>> -else
>> -if test "$explicit_python" = yes; then
>> -error_exit "--python requires using QEMU's embedded Meson 
>> distribution, but it was not found."
>> -else
>> -error_exit "Meson not found.  Use --meson=/path/to/meson"
>> -fi
>> +if ! $python "${source_path}/python/scripts/mkvenv.py" ensure \
>> + --dir "${source_path}/python/wheels" \
>> + "meson>=0.61.5" ;
>> +then
>> +# We're very out of luck. Try to give a good diagnostic.
>> +if test -e pyvenv/bin/meson; then
>> +echo "Meson is too old:
>
>
> Does a minimum version still get printed? I've needed to know that in the 
> past when I got the error...
>
> Warner
>  $(pyvenv/bin/meson --version)"

At the end of the series, here's what happens if I change the meson
requirement to a fictionally too-high version that would 

[RFC PATCH v3 04/20] mkvenv: Add better error message for missing pyexpat module

2023-04-24 Thread John Snow
NetBSD debundles pyexpat from python, but ensurepip needs pyexpat. Try
our best to offer a helpful error message instead of just failing
catastrophically.

Signed-off-by: John Snow 
---
 python/scripts/mkvenv.py | 18 +-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/python/scripts/mkvenv.py b/python/scripts/mkvenv.py
index f355cb54fb..2172774403 100644
--- a/python/scripts/mkvenv.py
+++ b/python/scripts/mkvenv.py
@@ -126,7 +126,10 @@ def check_ensurepip(with_pip: bool) -> None:
 
 Raise a fatal exception with a helpful hint if it isn't available.
 """
-if with_pip and not find_spec("ensurepip"):
+if not with_pip:
+return
+
+if not find_spec("ensurepip"):
 msg = (
 "Python's ensurepip module is not found.\n"
 "It's normally part of the Python standard library, "
@@ -138,6 +141,19 @@ def check_ensurepip(with_pip: bool) -> None:
 )
 raise Ouch(msg)
 
+# ensurepip uses pyexpat, which can also go missing on us:
+if not find_spec("pyexpat"):
+msg = (
+"Python's pyexpat module is not found.\n"
+"It's normally part of the Python standard library, "
+"maybe your distribution packages it separately?\n"
+"Either install pyexpat, or alleviate the need for it in the "
+"first place by installing pip and setuptools for "
+f"'{sys.executable}'.\n\n"
+"(Hint: NetBSD's pkgsrc debundles this to e.g. 'py310-expat'.)"
+)
+raise Ouch(msg)
+
 
 def make_venv(  # pylint: disable=too-many-arguments
 env_dir: Union[str, Path],
-- 
2.39.2
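The probe pattern used above -- checking `find_spec()` before relying on a stdlib module that distributions may debundle -- generalizes to any list of modules. A small sketch; the helper name is illustrative, not mkvenv.py API:

```python
from importlib.util import find_spec

def missing_modules(names):
    """Return the subset of names that cannot be located for this
    interpreter -- the same probe mkvenv.py applies to ensurepip
    and pyexpat before attempting to use them."""
    return [name for name in names if find_spec(name) is None]
```

A caller can then build one combined error message instead of failing on the first missing module.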




Re: [PATCH v3 01/13] migration: Move migrate_use_tls() to options.c

2023-04-24 Thread Vladimir Sementsov-Ogievskiy

On 24.04.23 21:32, Juan Quintela wrote:

Once there, rename it to migrate_tls() and make it return bool for
consistency.

Signed-off-by: Juan Quintela


Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir




[PATCH] pci: make ROM memory resizable

2023-04-24 Thread Vladimir Sementsov-Ogievskiy
On migration, the target loads its local ROM file, but the actual ROM
content migrates through the migration channel, so the original ROM
content from the local file doesn't matter. When the sizes mismatch,
we get an error like

 Size mismatch: :00:03.0/virtio-net-pci.rom: 0x4 != 0x8: Invalid 
argument

Let's just allow resizing of ROM memory. This way migration does not
rely on the local ROM file on the target node, which is loaded by
default but is not actually needed.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---
 hw/pci/pci.c  |  7 +--
 include/exec/memory.h | 26 ++
 softmmu/memory.c  | 39 +++
 3 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index def5000e7b..72ee8f6aea 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -59,6 +59,8 @@
 # define PCI_DPRINTF(format, ...)   do { } while (0)
 #endif
 
+#define MAX_ROM_SIZE (2 * GiB)
+
 bool pci_available = true;
 
 static char *pcibus_get_dev_path(DeviceState *dev);
@@ -2341,7 +2343,7 @@ static void pci_add_option_rom(PCIDevice *pdev, bool 
is_default_rom,
 error_setg(errp, "romfile \"%s\" is empty", pdev->romfile);
 g_free(path);
 return;
-} else if (size > 2 * GiB) {
+} else if (size > MAX_ROM_SIZE) {
 error_setg(errp, "romfile \"%s\" too large (size cannot exceed 2 GiB)",
pdev->romfile);
 g_free(path);
@@ -2366,7 +2368,8 @@ static void pci_add_option_rom(PCIDevice *pdev, bool 
is_default_rom,
 snprintf(name, sizeof(name), "%s.rom", 
object_get_typename(OBJECT(pdev)));
 }
 pdev->has_rom = true;
-memory_region_init_rom(&pdev->rom, OBJECT(pdev), name, pdev->romsize, 
&error_fatal);
+memory_region_init_rom_resizable(&pdev->rom, OBJECT(pdev), name,
+ pdev->romsize, MAX_ROM_SIZE, 
&error_fatal);
  ptr = memory_region_get_ram_ptr(&pdev->rom);
 if (load_image_size(path, ptr, size) < 0) {
 error_setg(errp, "failed to load romfile \"%s\"", pdev->romfile);
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 15ade918ba..ed1e5d9126 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -1453,6 +1453,19 @@ void memory_region_init_rom_nomigrate(MemoryRegion *mr,
   uint64_t size,
   Error **errp);
 
+/*
+ * memory_region_init_rom_nomigrate_resizable: same as
+ * memory_region_init_rom_nomigrate(), but initialize resizable memory region.
+ *
+ * @max_size maximum allowed size.
+ */
+void memory_region_init_rom_nomigrate_resizable(MemoryRegion *mr,
+struct Object *owner,
+const char *name,
+uint64_t size,
+uint64_t max_size,
+Error **errp);
+
 /**
  * memory_region_init_rom_device_nomigrate:  Initialize a ROM memory region.
  * Writes are handled via callbacks.
@@ -1562,6 +1575,19 @@ void memory_region_init_rom(MemoryRegion *mr,
 uint64_t size,
 Error **errp);
 
+/*
+ * memory_region_init_rom_resizable: same as memory_region_init_rom(),
+ * but initialize resizable memory region.
+ *
+ * @max_size maximum allowed size.
+ */
+void memory_region_init_rom_resizable(MemoryRegion *mr,
+  struct Object *owner,
+  const char *name,
+  uint64_t size,
+  uint64_t max_size,
+  Error **errp);
+
 /**
  * memory_region_init_rom_device:  Initialize a ROM memory region.
  * Writes are handled via callbacks.
diff --git a/softmmu/memory.c b/softmmu/memory.c
index b1a6cae6f5..744d03bc02 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -1701,6 +1701,18 @@ void memory_region_init_rom_nomigrate(MemoryRegion *mr,
 mr->readonly = true;
 }
 
+void memory_region_init_rom_nomigrate_resizable(MemoryRegion *mr,
+struct Object *owner,
+const char *name,
+uint64_t size,
+uint64_t max_size,
+Error **errp)
+{
+memory_region_init_resizeable_ram(mr, owner, name, size, max_size, NULL,
+  errp);
+mr->readonly = true;
+}
+
 void memory_region_init_rom_device_nomigrate(MemoryRegion *mr,
  Object *owner,
  const MemoryRegionOps *ops,
@@ -3580,6 +3592,33 @@ void 

Re: [RFC PATCH v3 14/20] configure: use 'mkvenv ensure meson' to bootstrap meson

2023-04-24 Thread Warner Losh
On Mon, Apr 24, 2023, 2:03 PM John Snow  wrote:

> This commit changes how we detect and install meson. It notably removes
> '--meson='.
>
> The previous patch creates a lightweight Python virtual environment
> unconditionally using the user's configured $python that inherits system
> packages. If Meson is installed there and meets our minimum version
> requirements, we will use that Meson.
>
> In the event that Meson is installed but *not for the chosen Python
> interpreter*, not found, or of insufficient version, we will attempt to
> install Meson from vendored source into the newly created Python virtual
> environment. This vendored installation is considered to replace the
> mechanism from prior tarball distributions.
>
> This commit restores the ability to use a system meson, but in turn
> temporarily removes the ability to use a meson as obtained from the
> internet at configure-time (git submodules, as it stood prior to this
> patch); that ability will be restored in the next commit.
>
> As a result of this patch, the Python interpreter we use for both our
> own build scripts *and* Meson extensions are always known to be the
> exact same Python. As a further benefit, there will also be a symlink
> available in the build directory that points to the correct, configured
> python and can be used by e.g. manual tests to invoke the correct,
> configured Python unambiguously.
>
> Signed-off-by: John Snow 
> ---
>  configure   | 72 -
>  .gitlab-ci.d/buildtest-template.yml |  4 +-
>  2 files changed, 21 insertions(+), 55 deletions(-)
>
> diff --git a/configure b/configure
> index 462fe604d6..e9947369b2 100755
> --- a/configure
> +++ b/configure
> @@ -731,8 +731,6 @@ for opt do
>;;
>--skip-meson) skip_meson=yes
>;;
> -  --meson=*) meson="$optarg"
> -  ;;
>--ninja=*) ninja="$optarg"
>;;
>--smbd=*) smbd="$optarg"
> @@ -1016,7 +1014,6 @@ Advanced options (experts only):
>--cross-prefix-ARCH=PREFIX cross compiler prefix when building ARCH
> guest test cases
>--make=MAKE  use specified make [$make]
>--python=PYTHON  use specified python [$python]
> -  --meson=MESONuse specified meson [$meson]
>--ninja=NINJAuse specified ninja [$ninja]
>--smbd=SMBD  use specified smbd [$smbd]
>--with-git=GIT   use specified git [$git]
> @@ -1089,7 +1086,8 @@ fi
>
>  # Resolve PATH
>  python="$(command -v "$python")"
> -explicit_python=yes
> +# This variable is intended to be used only for error messages:
> +target_python=$python
>
>  # Create a Python virtual environment using our configured python.
>  # The stdout of this script will be the location of a symlink that
> @@ -1101,7 +1099,6 @@ explicit_python=yes
>  # - venv is cleared if it exists already;
>  # - venv is allowed to use system packages;
>  # - all setup is performed **offline**;
> -# - No packages are installed by default;
>  # - pip is not installed into the venv when possible,
>  #   but ensurepip is called as a fallback when necessary.
>
> @@ -1116,58 +1113,27 @@ fi
>  # Suppress writing compiled files
>  python="$python -B"
>
> -has_meson() {
> -  local python_dir=$(dirname "$python")
> -  # PEP405: pyvenv.cfg is either adjacent to the Python executable
> -  # or one directory above
> -  if test -f $python_dir/pyvenv.cfg || test -f $python_dir/../pyvenv.cfg;
> then
> -# Ensure that Meson and Python come from the same virtual environment
> -test -x "$python_dir/meson" &&
> -  test "$(command -v meson)" -ef "$python_dir/meson"
> -  else
> -has meson
> -  fi
> -}
>
> -if test -z "$meson"; then
> -if test "$explicit_python" = no && has_meson && version_ge "$(meson
> --version)" 0.61.5; then
> -meson=meson
> -elif test "$git_submodules_action" != 'ignore' ; then
> -meson=git
> -elif test -e "${source_path}/meson/meson.py" ; then
> -meson=internal
> -else
> -if test "$explicit_python" = yes; then
> -error_exit "--python requires using QEMU's embedded Meson
> distribution, but it was not found."
> -else
> -error_exit "Meson not found.  Use --meson=/path/to/meson"
> -fi
> +if ! $python "${source_path}/python/scripts/mkvenv.py" ensure \
> + --dir "${source_path}/python/wheels" \
> + "meson>=0.61.5" ;
> +then
> +# We're very out of luck. Try to give a good diagnostic.
> +if test -e pyvenv/bin/meson; then
> +echo "Meson is too old:


Does a minimum version still get printed? I've needed to know that in the
past when I got the error...

Warner
 $(pyvenv/bin/meson --version)"

> +elif has meson ; then
> +echo "Meson was found installed on your system," \
> + "but not for the configured Python interpreter
> ($target_python)."
> +echo "(Hint: check '$(which meson)' to see which interpreter its
> shebang uses.)"
>  fi
> -else
> -# Meson uses its own 

Ubuntu 22.04 - ARM Rock4SE

2023-04-24 Thread Victoria Stråberg
Hi!

I'm working with an ARM machine (Rock 4 SE, Radxa 
https://wiki.radxa.com/Rock4/se) and it has Ubuntu 22.04, Linux 4.4. I want to 
install QEMU on my machine to emulate an Intel processor.

I'm trying to follow your Linux start-up guide: 
https://wiki.qemu.org/Hosts/Linux#Running_QEMU_on_Linux. How should I solve my 
problem? I'm new to this.

Thanks in advance!

Regards, Victoria



[RFC PATCH v3 07/20] mkvenv: add nested venv workaround

2023-04-24 Thread John Snow
Python virtual environments do not typically nest; they may inherit from
the top-level system packages or not at all.
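
The detection this relies on can be shown in isolation; it is the same
two-line check the diff below adds as `inside_a_venv()`:

```python
import sys

def inside_a_venv() -> bool:
    # Under a venv, sys.prefix points at the venv while sys.base_prefix
    # still points at the base interpreter; outside one, they are equal.
    return sys.prefix != sys.base_prefix

flag = inside_a_venv()
```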

For our purposes, it would be convenient to emulate "nested" virtual
environments to allow callers of the configure script to install
specific versions of python utilities in order to test build system
features, utility version compatibility, etc.

While it is possible to install packages into the system environment
(say, by using the --user flag), it's nicer to install test packages
into a totally isolated environment instead.

As detailed in https://www.qemu.org/2023/03/24/python/, emulate a nested
venv environment by installing .pth files into the child venv's
site-packages folder that point to the parent environment when appropriate.
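
A minimal sketch of the .pth mechanism (the file name `parent-venv.pth`
and the paths here are illustrative; the patch computes the real paths via
sysconfig):

```python
import tempfile
from pathlib import Path

def write_parent_pth(child_site_packages: str, parent_purelib: str) -> Path:
    # Python evaluates .pth files found in site-packages at interpreter
    # startup; a file containing a directory path appends that directory
    # to sys.path, which is how the child venv "sees" parent packages.
    pth = Path(child_site_packages) / "parent-venv.pth"
    pth.write_text(parent_purelib + "\n")
    return pth

with tempfile.TemporaryDirectory() as tmp:
    pth = write_parent_pth(tmp, "/parent/lib/python3.11/site-packages")
    content = pth.read_text()
```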

Signed-off-by: Paolo Bonzini 
Signed-off-by: John Snow 
---
 python/scripts/mkvenv.py | 72 ++--
 1 file changed, 69 insertions(+), 3 deletions(-)

diff --git a/python/scripts/mkvenv.py b/python/scripts/mkvenv.py
index 445f4eb092..45d1b772e5 100644
--- a/python/scripts/mkvenv.py
+++ b/python/scripts/mkvenv.py
@@ -51,9 +51,11 @@
 import os
 from pathlib import Path
 import re
+import site
 import stat
 import subprocess
 import sys
+import sysconfig
 import traceback
 from types import SimpleNamespace
 from typing import (
@@ -74,6 +76,11 @@
 logger = logging.getLogger("mkvenv")
 
 
+def inside_a_venv() -> bool:
+"""Returns True if it is executed inside of a virtual environment."""
+return sys.prefix != sys.base_prefix
+
+
 class Ouch(RuntimeError):
 """An Exception class we can't confuse with a builtin."""
 
@@ -82,9 +89,15 @@ class QemuEnvBuilder(venv.EnvBuilder):
 """
 An extension of venv.EnvBuilder for building QEMU's configure-time venv.
 
-The only functional change is that it adds the ability to regenerate
-console_script shims for packages available via system_site
-packages.
+The primary differences are:
+
+(1) It adds the ability to regenerate console_script shims for
+packages available via system_site_packages for any packages
+specified by the 'script_packages' argument
+
+(2) It emulates a "nested" virtual environment when invoked from
+inside of an existing virtual environment by including packages from
+the parent.
 
 Parameters for base class init:
   - system_site_packages: bool = False
@@ -99,11 +112,51 @@ class QemuEnvBuilder(venv.EnvBuilder):
 def __init__(self, *args: Any, **kwargs: Any) -> None:
 logger.debug("QemuEnvBuilder.__init__(...)")
 self.script_packages = kwargs.pop("script_packages", ())
+
+# For nested venv emulation:
+self.use_parent_packages = False
+if inside_a_venv():
+# Include parent packages only if we're in a venv and
+# system_site_packages was True.
+self.use_parent_packages = kwargs.pop(
+"system_site_packages", False
+)
+# Include system_site_packages only when the parent,
+# i.e. the venv we are currently in, also does so.
+kwargs["system_site_packages"] = sys.base_prefix in site.PREFIXES
+
 super().__init__(*args, **kwargs)
 
 # Make the context available post-creation:
 self._context: Optional[SimpleNamespace] = None
 
+def compute_venv_libpath(self, context: SimpleNamespace) -> str:
+"""
+Compatibility wrapper for context.lib_path for Python < 3.12
+"""
+# Python 3.12+, not strictly necessary because it's documented
+# to be the same as 3.10 code below:
+if sys.version_info >= (3, 12):
+return context.lib_path
+
+# Python 3.10+
+if "venv" in sysconfig.get_scheme_names():
+return sysconfig.get_path(
+"purelib", scheme="venv", vars={"base": context.env_dir}
+)
+
+# For Python <= 3.9 we need to hardcode this. Fortunately the
+# code below was the same in Python 3.6-3.10, so there is only
+# one case.
+if sys.platform == "win32":
+return os.path.join(context.env_dir, "Lib", "site-packages")
+return os.path.join(
+context.env_dir,
+"lib",
+"python%d.%d" % sys.version_info[:2],
+"site-packages",
+)
+
 def ensure_directories(self, env_dir: DirType) -> SimpleNamespace:
 logger.debug("ensure_directories(env_dir=%s)", env_dir)
 self._context = super().ensure_directories(env_dir)
@@ -124,6 +177,19 @@ def post_post_setup(self, context: SimpleNamespace) -> 
None:
 """
 The final, final hook. Enter the venv and run commands inside of it.
 """
+if self.use_parent_packages:
+# We're inside of a venv and we want to include the parent
+# venv's packages.
+parent_libpath = sysconfig.get_path("purelib")
+logger.debug("parent_libpath: %s", 

[RFC PATCH v3 08/20] mkvenv: add ensure subcommand

2023-04-24 Thread John Snow
This command is to be used to add various packages (or ensure they're
already present) into the configure-provided venv in a modular fashion.

Examples:

mkvenv ensure --online --dir "${source_dir}/python/wheels/" "meson>=0.61.5"
mkvenv ensure --online "sphinx>=1.6.0"
mkvenv ensure "qemu.qmp==0.0.2"

It's designed to look for packages in three places, in order:

(1) In system packages, if the version installed is already good
enough. This way your distribution-provided meson, sphinx, etc are
always used as first preference.

(2) In a vendored packages directory. Here I am suggesting
qemu.git/python/wheels/ as that directory. This is intended to serve as
a replacement for vendoring the meson source for QEMU tarballs. It is
also highly likely to be extremely useful for packaging the "qemu.qmp"
package in source distributions for platforms that do not yet package
qemu.qmp separately.

(3) Online, via PyPI, ***only when "--online" is passed***. This is only
ever used as a fallback if the first two sources do not have an
appropriate package that meets the requirement. The ability to build
QEMU and run tests *completely offline* is unaffected.
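
The preference order falls out of pip's own flags; here is a condensed
sketch of the command line built for one install attempt (it mirrors the
patch's `pip_install()`, simplified — `ensure()` first tries offline, then
retries online only if "--online" was given):

```python
from typing import List, Optional

def pip_args(dep_spec: str, online: bool = False,
             wheels_dir: Optional[str] = None) -> List[str]:
    # --no-index keeps pip away from PyPI; --find-links points it at the
    # vendored wheels directory instead.
    args = ["-m", "pip", "install", "--disable-pip-version-check", "-q"]
    if not online:
        args.append("--no-index")
    if wheels_dir:
        args += ["--find-links", f"file://{wheels_dir}"]
    args.append(dep_spec)
    return args
```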

Signed-off-by: John Snow 
---
 python/scripts/mkvenv.py | 116 ++-
 1 file changed, 114 insertions(+), 2 deletions(-)

diff --git a/python/scripts/mkvenv.py b/python/scripts/mkvenv.py
index 45d1b772e5..937664ea9c 100644
--- a/python/scripts/mkvenv.py
+++ b/python/scripts/mkvenv.py
@@ -13,6 +13,7 @@
 createcreate a venv
 post_init
   post-venv initialization
+ensureEnsure that the specified package is installed.
 
 --
 
@@ -34,6 +35,18 @@
   --gen GEN  Regenerate console_scripts for given packages, if found.
   --binpath BINPATH  Path where console script shims should be generated
 
+--
+
+usage: mkvenv ensure [-h] [--online] [--dir DIR] dep_spec
+
+positional arguments:
+  dep_specPEP 508 Dependency specification, e.g. 'meson>=0.61.5'
+
+options:
+  -h, --help  show this help message and exit
+  --onlineInstall packages from PyPI, if necessary.
+  --dir DIR   Path to vendored packages where we may install from.
+
 """
 
 # Copyright (C) 2022-2023 Red Hat, Inc.
@@ -530,6 +543,71 @@ def checkpip() -> None:
 logging.debug("Pip is now (hopefully) repaired!")
 
 
+def pip_install(
+args: Sequence[str],
+online: bool = False,
+wheels_dir: Optional[Union[str, Path]] = None,
+) -> None:
+"""
+Use pip to install a package or package(s) as specified in @args.
+"""
+loud = bool(
+os.environ.get("DEBUG")
+or os.environ.get("GITLAB_CI")
+or os.environ.get("V")
+)
+
+full_args = [
+sys.executable,
+"-m",
+"pip",
+"install",
+"--disable-pip-version-check",
+"-v" if loud else "-q",
+]
+if not online:
+full_args += ["--no-index"]
+if wheels_dir:
+full_args += ["--find-links", f"file://{str(wheels_dir)}"]
+full_args += list(args)
+subprocess.run(full_args, check=True)
+
+
+def ensure(
+dep_spec: str,
+online: bool = False,
+wheels_dir: Optional[Union[str, Path]] = None,
+) -> None:
+"""
+Use pip to ensure we have the package specified by @dep_spec.
+
+If the package is already installed, do nothing. If online and
+wheels_dir are both provided, prefer packages found in wheels_dir
+first before connecting to PyPI.
+
+:param dep_spec:
+A PEP 508 dependency specification. e.g. 'meson>=0.61.5'.
+:param online: If True, fall back to PyPI.
+:param wheels_dir: If specified, search this path for packages.
+"""
+# This first install command will:
+# (A) Do nothing, if we already have a suitable package.
+# (B) Install the package from vendored source, if possible.
+# (C) Fail if neither A nor B.
+try:
+pip_install([dep_spec], online=False, wheels_dir=wheels_dir)
+# (A) or (B) happened. Success.
+return
+except subprocess.CalledProcessError:
+# (C) Happened.
+# The package is missing or isn't a suitable version,
+# and we weren't able to install a suitable vendored package.
+if online:
+pip_install([dep_spec], online=True)
+else:
+raise
+
+
 def post_venv_setup(bin_path: str, packages: Sequence[str] = ()) -> None:
 """
 This is intended to be run *inside the venv* after it is created.
@@ -578,6 +656,29 @@ def _add_post_init_subcommand(subparsers: Any) -> None:
 )
 
 
+def _add_ensure_subcommand(subparsers: Any) -> None:
+subparser = subparsers.add_parser(
+"ensure", help="Ensure that the specified package is installed."
+)
+subparser.add_argument(
+"--online",
+action="store_true",
+help="Install packages from PyPI, if necessary.",
+)
+

[RFC PATCH v3 18/20] mkvenv: add diagnose() method for ensure() failures

2023-04-24 Thread John Snow
This is a routine that is designed to print some usable info for human
beings back out to the terminal if/when "mkvenv ensure" fails to locate
or install a package during configure time, such as meson or sphinx.

Since we are requiring that "meson" and "sphinx" are installed to the
same Python environment as QEMU is configured to build with, this can
produce some surprising failures when things are mismatched. This method
is here to try and ease that sting by offering some actionable
diagnosis.
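
The name extraction the patch performs can be exercised on its own; the
regex below is the PEP 508 name pattern the diff uses:

```python
import re

def pkg_name(dep_spec: str) -> str:
    # See https://peps.python.org/pep-0508/#names: a name starts and ends
    # with an alphanumeric and may contain '.', '_' and '-' in between.
    match = re.match(r"^([A-Z0-9]([A-Z0-9._-]*[A-Z0-9])?)",
                     dep_spec, re.IGNORECASE)
    if not match:
        raise ValueError(f"no package name in {dep_spec!r}")
    return match.group(0)
```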

Signed-off-by: John Snow 
---
 python/scripts/mkvenv.py | 153 +++
 1 file changed, 140 insertions(+), 13 deletions(-)

diff --git a/python/scripts/mkvenv.py b/python/scripts/mkvenv.py
index 937664ea9c..1bc4bc01b1 100644
--- a/python/scripts/mkvenv.py
+++ b/python/scripts/mkvenv.py
@@ -64,6 +64,7 @@
 import os
 from pathlib import Path
 import re
+import shutil
 import site
 import stat
 import subprocess
@@ -543,6 +544,103 @@ def checkpip() -> None:
 logging.debug("Pip is now (hopefully) repaired!")
 
 
+def diagnose(
+dep_spec: str,
+online: bool,
+wheels_dir: Optional[Union[str, Path]],
+prog: Optional[str],
+) -> str:
+"""
+Offer a summary to the user as to why a package failed to be installed.
+
+:param dep_spec: The package we tried to ensure, e.g. 'meson>=0.61.5'
+:param online: Did we allow PyPI access?
+:param prog:
+Optionally, a shell program name that can be used as a
+bellwether to detect if this program is installed elsewhere on
+the system. This is used to offer advice when a program is
+detected for a different python version.
+:param wheels_dir:
+Optionally, a directory that was searched for vendored packages.
+"""
+# pylint: disable=too-many-branches
+
+# Parse name out of PEP-508 depspec.
+# See https://peps.python.org/pep-0508/#names
+match = re.match(
+r"^([A-Z0-9]([A-Z0-9._-]*[A-Z0-9])?)", dep_spec, re.IGNORECASE
+)
+if not match:
+raise ValueError(
+f"dep_spec '{dep_spec}'"
+" does not appear to contain a valid package name"
+)
+pkg_name = match.group(0)
+pkg_version = None
+
+has_importlib = False
+try:
+# Python 3.8+ stdlib
+# pylint: disable=import-outside-toplevel
+from importlib.metadata import PackageNotFoundError, version
+
+has_importlib = True
+try:
+pkg_version = version(pkg_name)
+except PackageNotFoundError:
+pass
+except ModuleNotFoundError:
+pass
+
+lines = []
+
+if pkg_version:
+lines.append(
+f"Python package '{pkg_name}' version '{pkg_version}' was found,"
+" but isn't suitable."
+)
+elif has_importlib:
+lines.append(
+f"Python package '{pkg_name}' was not found nor installed."
+)
+else:
+lines.append(
+f"Python package '{pkg_name}' is either not found or"
+" not a suitable version."
+)
+
+if wheels_dir:
+lines.append(
+"No suitable version found in, or failed to install from"
+f" '{wheels_dir}'."
+)
+else:
+lines.append("No local package directory was searched.")
+
+if online:
+lines.append("A suitable version could not be obtained from PyPI.")
+else:
+lines.append(
+"mkvenv was configured to operate offline and did not check PyPI."
+)
+
+if prog and not pkg_version:
+which = shutil.which(prog)
+if which:
+pypath = Path(sys.executable).resolve()
+lines.append(
+f"'{prog}' was detected on your system at '{which}', "
+f"but the Python package '{pkg_name}' was not found by this "
+f"Python interpreter ('{pypath}'). "
+f"Typically this means that '{prog}' has been installed "
+"against a different Python interpreter on your system."
+)
+
+lines = [f" • {line}" for line in lines]
+lines.insert(0, f"Could not ensure availability of '{dep_spec}':")
+return os.linesep.join(lines)
+
+
 def pip_install(
 args: Sequence[str],
 online: bool = False,
@@ -573,23 +671,11 @@ def pip_install(
 subprocess.run(full_args, check=True)
 
 
-def ensure(
+def _do_ensure(
 dep_spec: str,
 online: bool = False,
 wheels_dir: Optional[Union[str, Path]] = None,
 ) -> None:
-"""
-Use pip to ensure we have the package specified by @dep_spec.
-
-If the package is already installed, do nothing. If online and
-wheels_dir are both provided, prefer packages found in wheels_dir
-first before connecting to PyPI.
-
-:param dep_spec:
-A PEP 508 dependency specification. e.g. 'meson>=0.61.5'.
-:param online: If True, fall back to PyPI.
-:param wheels_dir: If specified, search this path for 

[RFC PATCH v3 00/20] configure: create a python venv and ensure meson, sphinx

2023-04-24 Thread John Snow
GitLab CI: https://gitlab.com/jsnow/qemu/-/pipelines/846869409
   (All green, except Python self-tests, see below)

This patch series creates a mandatory python virtual environment
("venv") during configure time and uses it to ensure the availability of
meson and sphinx.

See https://www.qemu.org/2023/03/24/python/ for details. The summary is
that the goal of this series is to ensure that the `python` used to run
meson is the same `python` used to run Sphinx, tests, and any build-time
python scripting we have. As it stands, meson and sphinx (and their
extensions) *may* run in a different python environment than the one
configured and chosen by the user at configure/build time.

The effective change of this series is that QEMU will now
unconditionally create a venv at configure-time and will ensure that
meson (and sphinx, if docs are enabled) are available through that venv.

Some important points as a pre-emptive "FAQ":

- This venv is unconditionally created and lives at {build_dir}/pyvenv.

- The python interpreter used by this venv is always the one identified
  by configure. (Which in turn is always the one specified by --python
  or $PYTHON)

- *almost* all python scripts in qemu.git executed as part of the build
  system, meson, sphinx, avocado tests, vm tests or CI are always
  executed within this venv.

  (iotests are not yet integrated; I plan to tackle this separately as a
  follow-up in order to have a more tightly focused scope on that
  series.)

- It remains possible to build and test fully offline.
  (In most cases, you just need meson and sphinx from your distro's repo.)

- Distribution packaged 'meson' and 'sphinx' are still utilized whenever
  possible as the highest preference.

- Vendored versions of e.g. 'meson' are always preferred to PyPI
  versions for speed, repeatability and ensuring tarball builds work
  as-is offline.

  (Sphinx will not be vendored, just like it already isn't.)

- Missing dependencies, when possible, are fetched and installed
  on-demand automatically to make developer environments "just work".

- Works for Python 3.7 and up, on Fedora, OpenSuSE, Red Hat, CentOS,
  Alpine, Debian, Ubuntu, NetBSD, OpenBSD, and hopefully everywhere

- No new dependencies (...for most platforms. Debian and NetBSD get an
  asterisk.)

- The meson git submodule is unused after this series and can be removed.

For reviewers, here's how the series is broken up:

Patch 1 is a testing pre-req. Note that even with this patch,
'check-python-minreqs' and 'check-python-tox' CI jobs will both still
fail on origin/master because this series requires 3.7+, but
origin/master is currently still 3.6+.

- python: update pylint configuration

Patches 2-8 add the mkvenv script. The first patch checks in the barest
essentials, and each subsequent patch adds a workaround or feature one
at a time.

- python: add mkvenv.py
- mkvenv: add console script entry point generation
- mkvenv: Add better error message for missing pyexpat module
- mkvenv: generate console entry shims from inside the venv
- mkvenv: work around broken pip installations on Debian 10
- mkvenv: add nested venv workaround
- mkvenv: add ensure subcommand

Patches 9-11 modify our testing configuration to add new dependencies as
needed.

- tests/docker: add python3-venv dependency
- tests/vm: Configure netbsd to use Python 3.10
- tests/vm: add py310-expat to NetBSD

Patch 12 changes how we package release tarballs.

- scripts/make-release: download meson==0.61.5 .whl

Patches 13-16 wire mkvenv into configure and tests.

- configure: create a python venv unconditionally
- configure: use 'mkvenv ensure meson' to bootstrap meson
- configure: add --enable-pypi and --disable-pypi
- tests: Use configure-provided pyvenv for tests

Patches 17-20 delegate Sphinx bootstrapping to mkvenv. Some of these
changes could be folded earlier in the series (like the diagnose()
patch), but I'm keeping it separate for review for now.

- configure: move --enable-docs and --disable-docs back to configure
- mkvenv: add diagnose() method for ensure() failures
- configure: use --diagnose option with meson ensure
- configure: bootstrap sphinx with mkvenv

That's all for now, seeya!
--js

John Snow (20):
  python: update pylint configuration
  python: add mkvenv.py
  mkvenv: add console script entry point generation
  mkvenv: Add better error message for missing pyexpat module
  mkvenv: generate console entry shims from inside the venv
  mkvenv: work around broken pip installations on Debian 10
  mkvenv: add nested venv workaround
  mkvenv: add ensure subcommand
  tests/docker: add python3-venv dependency
  tests/vm: Configure netbsd to use Python 3.10
  tests/vm: add py310-expat to NetBSD
  scripts/make-release: download meson==0.61.5 .whl
  configure: create a python venv unconditionally
  configure: use 'mkvenv ensure meson' to bootstrap meson
  configure: add --enable-pypi and --disable-pypi
  tests: Use configure-provided pyvenv for tests
  configure: 

[RFC PATCH v3 06/20] mkvenv: work around broken pip installations on Debian 10

2023-04-24 Thread John Snow
This is a workaround intended for Debian 10, where the debian-patched
pip does not function correctly if accessed from within a virtual
environment.

We don't support Debian 10 any longer, but it's possible that this bug
might appear on other derivative platforms and this workaround may prove
useful.

RFC, a note from Paolo:

"BTW, another way to repair Debian 10's pip is to create a symbolic link
to sys.base_prefix + '/share/python-wheels' in sys.prefix +
'/share/python-wheels'. Since this is much faster, perhaps it can be
done unconditionally [...] ?"

I was slightly apprehensive about this as it felt "hackier", but it is
indeed a lot less code and much faster. It's probably low-risk. Should
we do that instead, or should we just scrap any fix at all under the
premise that Debian 10 support is dropped anyway?
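
For reference, Paolo's symlink suggestion might look like this (a sketch
under assumed Debian-style paths; `link_wheels` is a hypothetical helper,
not what the patch implements):

```python
import tempfile
from pathlib import Path

def link_wheels(prefix: Path, base_prefix: Path) -> Path:
    # Expose the base interpreter's bundled wheels inside the venv so
    # Debian's patched pip can find them from within the environment.
    src = base_prefix / "share" / "python-wheels"
    dst = prefix / "share" / "python-wheels"
    if src.is_dir() and not dst.exists():
        dst.parent.mkdir(parents=True, exist_ok=True)
        dst.symlink_to(src)
    return dst

with tempfile.TemporaryDirectory() as tmp:
    base, venv = Path(tmp) / "base", Path(tmp) / "venv"
    (base / "share" / "python-wheels").mkdir(parents=True)
    linked = link_wheels(venv, base).is_symlink()
```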

Signed-off-by: John Snow 
---
 python/scripts/mkvenv.py | 67 +++-
 1 file changed, 52 insertions(+), 15 deletions(-)

diff --git a/python/scripts/mkvenv.py b/python/scripts/mkvenv.py
index 4daa652f12..445f4eb092 100644
--- a/python/scripts/mkvenv.py
+++ b/python/scripts/mkvenv.py
@@ -151,26 +151,26 @@ def need_ensurepip() -> bool:
 return True
 
 
-def check_ensurepip(with_pip: bool) -> None:
+def check_ensurepip(prefix: str = "", suggest_remedy: bool = False) -> None:
 """
 Check that we have ensurepip.
 
 Raise a fatal exception with a helpful hint if it isn't available.
 """
-if not with_pip:
-return
-
 if not find_spec("ensurepip"):
 msg = (
 "Python's ensurepip module is not found.\n"
 "It's normally part of the Python standard library, "
 "maybe your distribution packages it separately?\n"
-"Either install ensurepip, or alleviate the need for it in the "
-"first place by installing pip and setuptools for "
-f"'{sys.executable}'.\n"
-"(Hint: Debian puts ensurepip in its python3-venv package.)"
+"(Debian puts ensurepip in its python3-venv package.)\n"
 )
-raise Ouch(msg)
+if suggest_remedy:
+msg += (
+"Either install ensurepip, or alleviate the need for it in the"
+" first place by installing pip and setuptools for "
+f"'{sys.executable}'.\n"
+)
+raise Ouch(prefix + msg)
 
 # ensurepip uses pyexpat, which can also go missing on us:
 if not find_spec("pyexpat"):
@@ -178,12 +178,15 @@ def check_ensurepip(with_pip: bool) -> None:
 "Python's pyexpat module is not found.\n"
 "It's normally part of the Python standard library, "
 "maybe your distribution packages it separately?\n"
-"Either install pyexpat, or alleviate the need for it in the "
-"first place by installing pip and setuptools for "
-f"'{sys.executable}'.\n\n"
-"(Hint: NetBSD's pkgsrc debundles this to e.g. 'py310-expat'.)"
+"(NetBSD's pkgsrc debundles this to e.g. 'py310-expat'.)\n"
 )
-raise Ouch(msg)
+if suggest_remedy:
+msg += (
+"Either install pyexpat, or alleviate the need for it in the "
+"first place by installing pip and setuptools for "
+f"'{sys.executable}'.\n"
+)
+raise Ouch(prefix + msg)
 
 
 def make_venv(  # pylint: disable=too-many-arguments
@@ -238,7 +241,8 @@ def make_venv(  # pylint: disable=too-many-arguments
 with_pip = True if not system_site_packages else need_ensurepip()
 logger.debug("with_pip unset, choosing %s", with_pip)
 
-check_ensurepip(with_pip)
+if with_pip:
+check_ensurepip(suggest_remedy=True)
 
 if symlinks is None:
 # Default behavior of standard venv CLI
@@ -430,6 +434,36 @@ def _get_entry_points() -> Iterator[Dict[str, str]]:
 logger.debug("wrote '%s'", script_path)
 
 
+def checkpip() -> None:
+"""
+Debian 10 has a pip that's broken when used inside a virtual environment.
+
+We try to detect and correct that case here.
+"""
+try:
+# pylint: disable=import-outside-toplevel, unused-import
+import pip._internal  # noqa: F401
+
+logger.debug("pip appears to be working correctly.")
+return
+except ModuleNotFoundError as exc:
+if exc.name == "pip._internal":
+# Uh, fair enough. They did say "internal".
+# Let's just assume it's fine.
+return
+logger.warning("pip appears to be malfunctioning: %s", str(exc))
+
+check_ensurepip("pip appears to be non-functional, and ")
+
+logging.debug("Attempting to repair pip ...")
+subprocess.run(
+(sys.executable, "-m", "ensurepip"),
+stdout=subprocess.DEVNULL,
+check=True,
+)
+logging.debug("Pip is now (hopefully) repaired!")
+
+
 def post_venv_setup(bin_path: str, packages: 

[RFC PATCH v3 20/20] configure: bootstrap sphinx with mkvenv

2023-04-24 Thread John Snow
When docs are explicitly requested, require Sphinx>=1.6.0. When docs are
explicitly disabled, don't bother to check for Sphinx at all. If docs
are set to "auto", attempt to locate Sphinx, but continue onward if it
wasn't located.

For the case that --enable-pypi is set to 'enabled' (the default) but
docs are set to 'auto' (also the default), do not actually consult PyPI
to install Sphinx. Only perform this action when docs are requested
explicitly.
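
The resulting decision table can be condensed as follows (a sketch of the
shell logic in the diff below; the tuple names are illustrative):

```python
def sphinx_policy(docs: str, pypi: str):
    # check: probe for Sphinx at all; online: may consult PyPI;
    # fatal: a missing Sphinx aborts configure instead of disabling docs.
    check = docs != "disabled"
    online = pypi == "enabled" and docs == "enabled"
    fatal = docs == "enabled"
    return check, online, fatal
```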

Signed-off-by: John Snow 
---
 configure | 27 +--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/configure b/configure
index 0881fffc14..a247b9491c 100755
--- a/configure
+++ b/configure
@@ -1122,14 +1122,14 @@ fi
 
 # Suppress writing compiled files
 python="$python -B"
-
+mkvenv="$python ${source_path}/python/scripts/mkvenv.py"
 
 mkvenv_flags=""
 if test "$pypi" = "enabled" ; then
 mkvenv_flags="--online"
 fi
 
-if ! $python "${source_path}/python/scripts/mkvenv.py" ensure \
+if ! $mkvenv ensure \
  $mkvenv_flags \
  --dir "${source_path}/python/wheels" \
  --diagnose "meson" \
@@ -1144,6 +1144,29 @@ fi
 # *exclusively*.
 meson="$(cd pyvenv/bin; pwd)/meson"
 
+# Conditionally ensure Sphinx is installed.
+
+mkvenv_flags=""
+if test "$pypi" = "enabled" -a "$docs" = "enabled" ; then
+mkvenv_flags="--online"
+fi
+
+if test "$docs" != "disabled" ; then
+if ! $mkvenv ensure \
+ $mkvenv_flags \
+ --diagnose "sphinx-build" \
+ "sphinx>=1.6.0" ;
+then
+if test "$docs" = "enabled" ; then
+exit 1
+fi
+echo "Sphinx not found/usable, disabling docs."
+docs=disabled
+else
+docs=enabled
+fi
+fi
+
 echo "MKVENV ok!"
 
 # Probe for ninja
-- 
2.39.2




[RFC PATCH v3 11/20] tests/vm: add py310-expat to NetBSD

2023-04-24 Thread John Snow
NetBSD cannot successfully run "ensurepip" without access to the pyexpat
module, which NetBSD debundles. Like the Debian patch, it would be
strictly faster long term to install pip/setuptools, and I recommend
developers at their workstations take that approach instead.

For the purposes of a throwaway VM, there's not really a speed
difference for who is responsible for installing pip; us (needs
py310-pip) or Python (needs py310-expat).

Signed-off-by: John Snow 
---
 tests/vm/netbsd | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/vm/netbsd b/tests/vm/netbsd
index 13eae109c0..c7e3f1e735 100755
--- a/tests/vm/netbsd
+++ b/tests/vm/netbsd
@@ -31,6 +31,7 @@ class NetBSDVM(basevm.BaseVM):
 "pkgconf",
 "xz",
 "python310",
+"py310-expat",
 "ninja-build",
 
 # gnu tools
-- 
2.39.2




[RFC PATCH v3 13/20] configure: create a python venv unconditionally

2023-04-24 Thread John Snow
This patch changes the configure script so that it always creates and
uses a python virtual environment unconditionally.

Meson bootstrapping is temporarily altered to force the use of meson
from git or vendored source. The next commit restores the use of
distribution-vendored Meson.

Signed-off-by: John Snow 
---
 configure | 36 +++-
 1 file changed, 31 insertions(+), 5 deletions(-)

diff --git a/configure b/configure
index 77c03315f8..462fe604d6 100755
--- a/configure
+++ b/configure
@@ -625,7 +625,6 @@ check_py_version() {
 python=
 first_python=
 if test -z "${PYTHON}"; then
-explicit_python=no
 # A bare 'python' is traditionally python 2.x, but some distros
 # have it as python 3.x, so check in both places.
 for binary in python3 python python3.11 python3.10 python3.9 python3.8 
python3.7 python3.6; do
@@ -644,7 +643,6 @@ else
 # Same as above, but only check the environment variable.
 has "${PYTHON}" || error_exit "The PYTHON environment variable does not 
point to an executable"
 python=$(command -v "$PYTHON")
-explicit_python=yes
 if check_py_version "$python"; then
 # This one is good.
 first_python=
@@ -729,7 +727,7 @@ for opt do
   ;;
   --install=*)
   ;;
-  --python=*) python="$optarg" ; explicit_python=yes
+  --python=*) python="$optarg"
   ;;
   --skip-meson) skip_meson=yes
   ;;
@@ -1089,8 +1087,34 @@ if ! check_py_version "$python"; then
   "Use --python=/path/to/python to specify a supported Python."
 fi
 
-# Resolve PATH + suppress writing compiled files
-python="$(command -v "$python") -B"
+# Resolve PATH
+python="$(command -v "$python")"
+explicit_python=yes
+
+# Create a Python virtual environment using our configured python.
+# The stdout of this script will be the location of a symlink that
+# points to the configured Python.
+# Entry point scripts for pip, meson, and sphinx are generated if those
+# packages are present.
+
+# Defaults assumed for now:
+# - venv is cleared if it exists already;
+# - venv is allowed to use system packages;
+# - all setup is performed **offline**;
+# - No packages are installed by default;
+# - pip is not installed into the venv when possible,
+#   but ensurepip is called as a fallback when necessary.
+
+echo "python determined to be '$python'"
+echo "python version: $($python --version)"
+
+python="$($python -B "${source_path}/python/scripts/mkvenv.py" create --gen 
pip,meson,sphinx pyvenv)"
+if test "$?" -ne 0 ; then
+error_exit "python venv creation failed"
+fi
+
+# Suppress writing compiled files
+python="$python -B"
 
 has_meson() {
   local python_dir=$(dirname "$python")
@@ -1145,6 +1169,8 @@ case "$meson" in
 *) meson=$(command -v "$meson") ;;
 esac
 
+echo "MKVENV ok!"
+
 # Probe for ninja
 
 if test -z "$ninja"; then
-- 
2.39.2




[RFC PATCH v3 16/20] tests: Use configure-provided pyvenv for tests

2023-04-24 Thread John Snow
This patch changes how the avocado tests are provided, ever so
slightly. Instead of creating a new testing venv, use the
configure-provided 'pyvenv' instead, and install optional packages into
that.

Note 1: This doesn't respect the "disable-pypi" setting; the avocado
tests remain an unconditional PyPI hit when you run "make
check-avocado". Future series may endeavor to offer tighter integration,
for now, this is a cheap win that doesn't regress anything.

Note 2: At the time of writing, avocado tests require
avocado-framework<90 whereas the qemu.qmp self-tests rely on
avocado-framework>=90. This collision is avoided for now because the
qemu.git/python/qemu/ code does not need avocado at *runtime*; it does
not install avocado-framework as a necessary dependency and is skipped
in this circumstance.

Nevertheless, we do want to address that discrepancy in the future so
that it will be possible to re-use the same venv for
qemu.git/python/qemu self-tests to introduce them to make check as "make
check-python".

Signed-off-by: John Snow 
---
 docs/devel/acpi-bits.rst   |  6 +++---
 docs/devel/testing.rst | 14 +++---
 .gitlab-ci.d/buildtest.yml |  6 +++---
 scripts/ci/org.centos/stream/8/x86_64/test-avocado |  4 ++--
 scripts/device-crash-test  |  2 +-
 tests/Makefile.include | 10 +-
 tests/requirements.txt |  7 +--
 7 files changed, 26 insertions(+), 23 deletions(-)

diff --git a/docs/devel/acpi-bits.rst b/docs/devel/acpi-bits.rst
index 9eb4b9e666..0c40359109 100644
--- a/docs/devel/acpi-bits.rst
+++ b/docs/devel/acpi-bits.rst
@@ -61,19 +61,19 @@ Under ``tests/avocado/`` as the root we have:
::
 
  $ make check-venv (needed only the first time to create the venv)
- $ ./tests/venv/bin/avocado run -t acpi tests/avocado
+ $ ./pyvenv/bin/avocado run -t acpi tests/avocado
 
The above will run all acpi avocado tests including this one.
In order to run the individual tests, perform the following:
::
 
- $ ./tests/venv/bin/avocado run tests/avocado/acpi-bits.py --tap -
+ $ ./pyvenv/bin/avocado run tests/avocado/acpi-bits.py --tap -
 
The above will produce output in tap format. You can omit "--tap -" in the
end and it will produce output like the following:
::
 
-  $ ./tests/venv/bin/avocado run tests/avocado/acpi-bits.py
+  $ ./pyvenv/bin/avocado run tests/avocado/acpi-bits.py
   Fetching asset from 
tests/avocado/acpi-bits.py:AcpiBitsTest.test_acpi_smbios_bits
   JOB ID : eab225724da7b64c012c65705dc2fa14ab1defef
   JOB LOG: 
/home/anisinha/avocado/job-results/job-2022-10-10T17.58-eab2257/job.log
diff --git a/docs/devel/testing.rst b/docs/devel/testing.rst
index 4071e72710..50664d9eb9 100644
--- a/docs/devel/testing.rst
+++ b/docs/devel/testing.rst
@@ -882,9 +882,9 @@ You can run the avocado tests simply by executing:
 
   make check-avocado
 
-This involves the automatic creation of Python virtual environment
-within the build tree (at ``tests/venv``) which will have all the
-right dependencies, and will save tests results also within the
+This involves the automatic installation, from PyPI, of all the
+necessary avocado-framework dependencies into the QEMU venv within the
+build tree (at ``./pyvenv``). Test results are also saved within the
 build tree (at ``tests/results``).
 
 Note: the build environment must be using a Python 3 stack, and have
@@ -941,7 +941,7 @@ may be invoked by running:
 
  .. code::
 
-  tests/venv/bin/avocado run $OPTION1 $OPTION2 tests/avocado/
+  pyvenv/bin/avocado run $OPTION1 $OPTION2 tests/avocado/
 
 Note that if ``make check-avocado`` was not executed before, it is
 possible to create the Python virtual environment with the dependencies
@@ -956,20 +956,20 @@ a test file. To run tests from a single file within the 
build tree, use:
 
  .. code::
 
-  tests/venv/bin/avocado run tests/avocado/$TESTFILE
+  pyvenv/bin/avocado run tests/avocado/$TESTFILE
 
 To run a single test within a test file, use:
 
  .. code::
 
-  tests/venv/bin/avocado run tests/avocado/$TESTFILE:$TESTCLASS.$TESTNAME
+  pyvenv/bin/avocado run tests/avocado/$TESTFILE:$TESTCLASS.$TESTNAME
 
 Valid test names are visible in the output from any previous execution
 of Avocado or ``make check-avocado``, and can also be queried using:
 
  .. code::
 
-  tests/venv/bin/avocado list tests/avocado
+  pyvenv/bin/avocado list tests/avocado
 
 Manual Installation
~~~~~~~~~~~~~~~~~~~
diff --git a/.gitlab-ci.d/buildtest.yml b/.gitlab-ci.d/buildtest.yml
index ba6f551752..53de9f23c4 100644
--- a/.gitlab-ci.d/buildtest.yml
+++ b/.gitlab-ci.d/buildtest.yml
@@ -103,7 +103,7 @@ crash-test-debian:
   script:
 - cd build
 - make check-venv
-- tests/venv/bin/python3 scripts/device-crash-test -q ./qemu-system-i386
+- pyvenv/bin/python3 scripts/device-crash-test -q ./qemu-system-i386

[RFC PATCH v3 01/20] python: update pylint configuration

2023-04-24 Thread John Snow
Pylint 2.17.x decided that SocketAddrT was a bad name for a Type Alias for some
reason. Sure, fine, whatever.
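For context, the alias in question looks roughly like the following sketch. Only the name ``SocketAddrT`` is taken from the patch; the definition and helper here are illustrative assumptions, not the in-tree code:

```python
from typing import Tuple, Union

# Hypothetical alias mirroring the one pylint 2.17 flags as invalid-name;
# the exact in-tree definition may differ.
SocketAddrT = Union[Tuple[str, int], str]

def describe(addr: SocketAddrT) -> str:
    """Render a socket address for logging."""
    if isinstance(addr, tuple):
        host, port = addr
        return f"{host}:{port}"
    return addr  # a UNIX socket path

print(describe(("localhost", 4444)))  # prints: localhost:4444
```

With the ``good-names`` entry added below, pylint accepts the alias without needing a per-site ``# pylint: disable=invalid-name`` comment.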

Signed-off-by: John Snow 
---
 python/setup.cfg | 1 +
 1 file changed, 1 insertion(+)

diff --git a/python/setup.cfg b/python/setup.cfg
index 9e923d9762..3f36502954 100644
--- a/python/setup.cfg
+++ b/python/setup.cfg
@@ -132,6 +132,7 @@ good-names=i,
fd,  # fd = os.open(...)
c,   # for c in string: ...
T,   # for TypeVars. See pylint#3401
+   SocketAddrT,  # Not sure why this is invalid.
 
 [pylint.similarities]
 # Ignore imports when computing similarities.
-- 
2.39.2




[RFC PATCH v3 12/20] scripts/make-release: download meson==0.61.5 .whl

2023-04-24 Thread John Snow
In preference to vendoring meson source, vendor a built
distributable. This has two benefits:

(1) We can get rid of a git submodule,
(2) Installing built meson into a venv doesn't require any extra dependencies.

RFC:

The alternative approach is to just check in the .whl file into the git
tree directly, and have it available for both git and tarball
installations. That approach slightly changes the necessity of some
subsequent patches, but otherwise either way will work.

Owing to how "mkvenv ensure" prefers vendored files before connecting
to PyPI, checking in a vendored meson file in this manner means we will
generally never use PyPI to install meson.
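The digest check the script performs with ``sha256sum -c`` can be sketched in Python as follows. The function names and the wheel contents are illustrative assumptions, not make-release's actual code:

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    """Hex digest of a blob, as sha256sum would print it."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_hex: str) -> bool:
    """Compare a downloaded wheel's digest against a pinned value."""
    return hmac.compare_digest(sha256_hex(data), expected_hex)

wheel = b"pretend this is meson-0.61.5-py3-none-any.whl"
pinned = sha256_hex(wheel)  # in make-release the expected value is pinned
print(verify(wheel, pinned))  # prints: True
```

Pinning the digest in the release script means a tampered or corrupted download from PyPI fails the release build rather than being silently vendored.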

("Vote now on your phones.")

Signed-off-by: John Snow 
---
 scripts/make-release | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/scripts/make-release b/scripts/make-release
index 44a9d86a04..a59bad11f9 100755
--- a/scripts/make-release
+++ b/scripts/make-release
@@ -41,6 +41,17 @@ git submodule update --init --single-branch
 BaseTools/Source/C/BrotliCompress/brotli \
 CryptoPkg/Library/OpensslLib/openssl \
 MdeModulePkg/Library/BrotliCustomDecompressLib/brotli)
+
+# Handle vendoring Python dependencies:
+mkdir python/wheels
+pushd python/wheels
+pip download meson==0.61.5
+sha256sum -c <

[RFC PATCH v3 15/20] configure: add --enable-pypi and --disable-pypi

2023-04-24 Thread John Snow
In the event that there's no vendored source present and no sufficient
version of meson is found, we will attempt to connect to PyPI to install
the package ... only if '--disable-pypi' was not passed. This mechanism
is intended to replace the online functionality of the meson git submodule.

While --enable-pypi is the default, vendored source will always be
preferred when found, making PyPI a fallback. This should ensure that
configure-time venv building "just works" for almost everyone in almost
every circumstance.
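The flag handling added by the diff follows configure's usual option-parsing pattern; as a standalone sketch (variable names taken from the patch, everything else simplified for illustration):

```shell
#!/bin/sh
# Standalone sketch of the --enable/--disable-pypi handling; in the real
# configure this sits inside the main option-parsing loop.
pypi="enabled"
for opt in "$@"; do
    case "$opt" in
        --disable-pypi) pypi="disabled" ;;
        --enable-pypi)  pypi="enabled"  ;;
    esac
done

# Only pass --online to mkvenv when PyPI fallback is allowed:
mkvenv_flags=""
if test "$pypi" = "enabled"; then
    mkvenv_flags="--online"
fi
echo "pypi=$pypi mkvenv_flags=$mkvenv_flags"
```

Run with ``--disable-pypi`` and ``--online`` is never passed to mkvenv, so a missing package becomes a hard configure error instead of a download.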

Signed-off-by: John Snow 
---
 configure | 15 ++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/configure b/configure
index e9947369b2..7421bb8364 100755
--- a/configure
+++ b/configure
@@ -623,6 +623,7 @@ check_py_version() {
 }
 
 python=
+pypi="enabled"
 first_python=
 if test -z "${PYTHON}"; then
 # A bare 'python' is traditionally python 2.x, but some distros
@@ -883,6 +884,10 @@ for opt do
   --with-git-submodules=*)
   git_submodules_action="$optarg"
   ;;
+  --disable-pypi) pypi="disabled"
+  ;;
+  --enable-pypi) pypi="enabled"
+  ;;
   --enable-plugins) if test "$mingw32" = "yes"; then
 error_exit "TCG plugins not currently supported on 
Windows platforms"
 else
@@ -1098,7 +1103,9 @@ target_python=$python
 # Defaults assumed for now:
 # - venv is cleared if it exists already;
 # - venv is allowed to use system packages;
-# - all setup is performed **offline**;
+# - all setup can be performed offline;
+# - missing packages may be fetched from PyPI,
+#   unless --disable-pypi is passed.
 # - pip is not installed into the venv when possible,
 #   but ensurepip is called as a fallback when necessary.
 
@@ -1114,7 +1121,13 @@ fi
 python="$python -B"
 
 
+mkvenv_flags=""
+if test "$pypi" = "enabled" ; then
+mkvenv_flags="--online"
+fi
+
 if ! $python "${source_path}/python/scripts/mkvenv.py" ensure \
+ $mkvenv_flags \
  --dir "${source_path}/python/wheels" \
  "meson>=0.61.5" ;
 then
-- 
2.39.2




[RFC PATCH v3 17/20] configure: move --enable-docs and --disable-docs back to configure

2023-04-24 Thread John Snow
Move this option back from meson into configure so that the
configuration value can be used to bootstrap Sphinx in different ways.

Signed-off-by: John Snow 
---
 configure | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/configure b/configure
index 7421bb8364..ae55de1408 100755
--- a/configure
+++ b/configure
@@ -279,6 +279,7 @@ debug_tcg="no"
 sanitizers="no"
 tsan="no"
 fortify_source="yes"
+docs="auto"
 EXESUF=""
 modules="no"
 prefix="/usr/local"
@@ -752,6 +753,10 @@ for opt do
   ;;
   --disable-debug-info) meson_option_add -Ddebug=false
   ;;
+  --enable-docs) docs=enabled
+  ;;
+  --disable-docs) docs=disabled
+  ;;
   --enable-modules)
   modules="yes"
   ;;
@@ -2638,6 +2643,7 @@ if test "$skip_meson" = no; then
 
   # QEMU options
   test "$cfi" != false && meson_option_add "-Dcfi=$cfi"
+  test "$docs" != auto && meson_option_add "-Ddocs=$docs"
   test "$fdt" != auto && meson_option_add "-Dfdt=$fdt"
   test -n "${LIB_FUZZING_ENGINE+xxx}" && meson_option_add 
"-Dfuzzing_engine=$LIB_FUZZING_ENGINE"
   test "$qemu_suffix" != qemu && meson_option_add "-Dqemu_suffix=$qemu_suffix"
-- 
2.39.2




[RFC PATCH v3 05/20] mkvenv: generate console entry shims from inside the venv

2023-04-24 Thread John Snow
This patch is meant to ensure that console entry point scripts are
always generated on Python 3.7 installations, where we may not have
access to importlib.metadata. By running the generation step from a
separate process *inside* the venv, we are assured access to setuptools
and, by extension, pkg_resources as a fallback.

It isn't strictly necessary to do this for Python 3.8+, which will
always have importlib.metadata available.
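The fallback order described above can be sketched like this; the helper name is an assumption for illustration, not mkvenv.py's actual function:

```python
def entry_point_backend() -> str:
    """Pick a package-metadata backend, preferring the stdlib.

    Python 3.8+ always has importlib.metadata; on 3.7 we fall back to
    pkg_resources, which is available inside the venv via setuptools.
    """
    try:
        import importlib.metadata  # noqa: F401  (stdlib, Python 3.8+)
        return "importlib.metadata"
    except ImportError:
        import pkg_resources  # noqa: F401  (setuptools fallback for 3.7)
        return "pkg_resources"

print(entry_point_backend())
```

Running this *inside* the venv is what guarantees the fallback import can succeed, since the venv is known to contain setuptools when pkg_resources is needed.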

Signed-off-by: John Snow 
---
 python/scripts/mkvenv.py | 99 ++--
 1 file changed, 85 insertions(+), 14 deletions(-)

diff --git a/python/scripts/mkvenv.py b/python/scripts/mkvenv.py
index 2172774403..4daa652f12 100644
--- a/python/scripts/mkvenv.py
+++ b/python/scripts/mkvenv.py
@@ -11,6 +11,8 @@
 Commands:
   command Description
 createcreate a venv
+post_init
+  post-venv initialization
 
 --
 
@@ -23,6 +25,15 @@
   -h, --help  show this help message and exit
   --gen GEN   Regenerate console_scripts for given packages, if found.
 
+--
+
+usage: mkvenv post_init [-h] [--gen GEN] [--binpath BINPATH]
+
+options:
+  -h, --help show this help message and exit
+  --gen GEN  Regenerate console_scripts for given packages, if found.
+  --binpath BINPATH  Path where console script shims should be generated
+
 """
 
 # Copyright (C) 2022-2023 Red Hat, Inc.
@@ -59,6 +70,7 @@
 # Do not add any mandatory dependencies from outside the stdlib:
 # This script *must* be usable standalone!
 
+DirType = Union[str, bytes, "os.PathLike[str]", "os.PathLike[bytes]"]
 logger = logging.getLogger("mkvenv")
 
 
@@ -89,23 +101,42 @@ def __init__(self, *args: Any, **kwargs: Any) -> None:
 self.script_packages = kwargs.pop("script_packages", ())
 super().__init__(*args, **kwargs)
 
-# The EnvBuilder class is cute and toggles this setting off
-# before post_setup, but we need it to decide if we want to
-# generate shims or not.
-self._system_site_packages = self.system_site_packages
+# Make the context available post-creation:
+self._context: Optional[SimpleNamespace] = None
+
+def ensure_directories(self, env_dir: DirType) -> SimpleNamespace:
+logger.debug("ensure_directories(env_dir=%s)", env_dir)
+self._context = super().ensure_directories(env_dir)
+return self._context
+
+def create(self, env_dir: DirType) -> None:
+logger.debug("create(env_dir=%s)", env_dir)
+super().create(env_dir)
+assert self._context is not None
+self.post_post_setup(self._context)
 
 def post_setup(self, context: SimpleNamespace) -> None:
 logger.debug("post_setup(...)")
-
-# Generate console_script entry points for system packages:
-if self._system_site_packages:
-generate_console_scripts(
-context.env_exe, context.bin_path, self.script_packages
-)
-
 # print the python executable to stdout for configure.
 print(context.env_exe)
 
+def post_post_setup(self, context: SimpleNamespace) -> None:
+"""
+The final, final hook. Enter the venv and run commands inside of it.
+"""
+args = [
+context.env_exe,
+__file__,
+"post_init",
+"--binpath",
+context.bin_path,
+]
+if self.system_site_packages:
+scripts = ",".join(self.script_packages)
+if scripts:
+args += ["--gen", scripts]
+subprocess.run(args, check=True)
+
 
 def need_ensurepip() -> bool:
 """
@@ -359,6 +390,13 @@ def generate_console_scripts(
 """
 Generate script shims for console_script entry points in @packages.
 """
+logger.debug(
+"generate_console_scripts(python_path=%s, bin_path=%s, packages=%s)",
+python_path,
+bin_path,
+packages,
+)
+
 if not packages:
 return
 
@@ -392,6 +430,17 @@ def _get_entry_points() -> Iterator[Dict[str, str]]:
 logger.debug("wrote '%s'", script_path)
 
 
+def post_venv_setup(bin_path: str, packages: Sequence[str] = ()) -> None:
+"""
+This is intended to be run *inside the venv* after it is created.
+"""
+python_path = sys.executable
+logger.debug(
+"post_venv_setup(bin_path=%s, packages=%s)", bin_path, packages
+)
+generate_console_scripts(python_path, bin_path, packages)
+
+
 def _add_create_subcommand(subparsers: Any) -> None:
 subparser = subparsers.add_parser("create", help="create a venv")
 subparser.add_argument(
@@ -408,6 +457,24 @@ def _add_create_subcommand(subparsers: Any) -> None:
 )
 
 
+def _add_post_init_subcommand(subparsers: Any) -> None:
+subparser = subparsers.add_parser(
+"post_init", help="post-venv initialization"
+)
+subparser.add_argument(
+

[RFC PATCH v3 03/20] mkvenv: add console script entry point generation

2023-04-24 Thread John Snow
When creating a virtual environment that inherits system packages,
script entry points (like "meson", "sphinx-build", etc) are not
re-generated with the correct shebang. When you are *inside* of the
venv, this is not a problem, but if you are *outside* of it, you will
not have a script that engages the virtual environment appropriately.

Add a mechanism that generates new entry points for pre-existing
packages so that we can use these scripts to run "meson",
"sphinx-build", "pip", unambiguously inside the venv.

Signed-off-by: John Snow 
---
 python/scripts/mkvenv.py | 179 +--
 1 file changed, 172 insertions(+), 7 deletions(-)

diff --git a/python/scripts/mkvenv.py b/python/scripts/mkvenv.py
index 1dfcc0198a..f355cb54fb 100644
--- a/python/scripts/mkvenv.py
+++ b/python/scripts/mkvenv.py
@@ -14,13 +14,14 @@
 
 --
 
-usage: mkvenv create [-h] target
+usage: mkvenv create [-h] [--gen GEN] target
 
 positional arguments:
   target  Target directory to install virtual environment into.
 
 options:
   -h, --help  show this help message and exit
+  --gen GEN   Regenerate console_scripts for given packages, if found.
 
 """
 
@@ -38,11 +39,20 @@
 import logging
 import os
 from pathlib import Path
+import re
+import stat
 import subprocess
 import sys
 import traceback
 from types import SimpleNamespace
-from typing import Any, Optional, Union
+from typing import (
+Any,
+Dict,
+Iterator,
+Optional,
+Sequence,
+Union,
+)
 import venv
 
 
@@ -60,10 +70,9 @@ class QemuEnvBuilder(venv.EnvBuilder):
 """
 An extension of venv.EnvBuilder for building QEMU's configure-time venv.
 
-As of this commit, it does not yet do anything particularly
-different than the standard venv-creation utility. The next several
-commits will gradually change that in small commits that highlight
-each feature individually.
+The only functional change is that it adds the ability to regenerate
+console_script shims for packages available via system_site
+packages.
 
 Parameters for base class init:
   - system_site_packages: bool = False
@@ -77,6 +86,7 @@ class QemuEnvBuilder(venv.EnvBuilder):
 
 def __init__(self, *args: Any, **kwargs: Any) -> None:
 logger.debug("QemuEnvBuilder.__init__(...)")
+self.script_packages = kwargs.pop("script_packages", ())
 super().__init__(*args, **kwargs)
 
 # The EnvBuilder class is cute and toggles this setting off
@@ -87,6 +97,12 @@ def __init__(self, *args: Any, **kwargs: Any) -> None:
 def post_setup(self, context: SimpleNamespace) -> None:
 logger.debug("post_setup(...)")
 
+# Generate console_script entry points for system packages:
+if self._system_site_packages:
+generate_console_scripts(
+context.env_exe, context.bin_path, self.script_packages
+)
+
 # print the python executable to stdout for configure.
 print(context.env_exe)
 
@@ -129,6 +145,7 @@ def make_venv(  # pylint: disable=too-many-arguments
 clear: bool = True,
 symlinks: Optional[bool] = None,
 with_pip: Optional[bool] = None,
+script_packages: Sequence[str] = (),
 ) -> None:
 """
 Create a venv using `QemuEnvBuilder`.
@@ -149,16 +166,20 @@ def make_venv(  # pylint: disable=too-many-arguments
 Whether to run "ensurepip" or not. If unspecified, this will
 default to False if system_site_packages is True and a usable
 version of pip is found.
+:param script_packages:
+A sequence of package names to generate console entry point
+shims for, when system_site_packages is True.
 """
 logging.debug(
 "%s: make_venv(env_dir=%s, system_site_packages=%s, "
-"clear=%s, symlinks=%s, with_pip=%s)",
+"clear=%s, symlinks=%s, with_pip=%s, script_packages=%s)",
 __file__,
 str(env_dir),
 system_site_packages,
 clear,
 symlinks,
 with_pip,
+script_packages,
 )
 
 print(f"MKVENV {str(env_dir)}", file=sys.stderr)
@@ -181,6 +202,7 @@ def make_venv(  # pylint: disable=too-many-arguments
 clear=clear,
 symlinks=symlinks,
 with_pip=with_pip,
+script_packages=script_packages,
 )
 try:
 logger.debug("Invoking builder.create()")
@@ -221,8 +243,147 @@ def _stringify(data: Optional[Union[str, bytes]]) -> 
Optional[str]:
 raise Ouch("VENV creation subprocess failed.") from exc
 
 
+def _gen_importlib(packages: Sequence[str]) -> Iterator[Dict[str, str]]:
+# pylint: disable=import-outside-toplevel
+try:
+# First preference: Python 3.8+ stdlib
+from importlib.metadata import (
+PackageNotFoundError,
+distribution,
+)
+except ImportError as exc:
+logger.debug("%s", str(exc))
+# Second preference: Commonly 

[RFC PATCH v3 09/20] tests/docker: add python3-venv dependency

2023-04-24 Thread John Snow
Several debian-based tests need the python3-venv dependency as a
consequence of Debian debundling the "ensurepip" module normally
included with Python.

As mkvenv.py stands as of this commit, Debian requires EITHER:

(A) setuptools and pip, or
(B) ensurepip

mkvenv is a few seconds faster if you have setuptools and pip, so
developers should prefer the first requirement. For the purposes of CI,
the time saved is a wash; it is only a matter of who installs pip and
when, and the timing is about the same.

Arbitrarily, I chose to add ensurepip to the test configuration because
it is normally part of the Python stdlib, and always having it allows us
a more consistent cross-platform environment.
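The either/or requirement comes down to a check equivalent to the following, which mirrors the ``need_ensurepip()`` helper that appears in mkvenv.py later in this series:

```python
from importlib.util import find_spec

def need_ensurepip() -> bool:
    """Return True when neither setuptools nor pip is importable.

    find_spec() probes for the packages without importing them, which
    sidesteps known import-time hazards (see pypa/setuptools#2993).
    """
    if find_spec("setuptools") and find_spec("pip"):
        return False
    return True

print(need_ensurepip())
```

On Debian, requirement (A) is satisfied by ``python3-pip``/``python3-setuptools`` and requirement (B) by ``python3-venv``, which is why the dockerfiles below add the latter.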

Signed-off-by: John Snow 
---
 tests/docker/dockerfiles/debian-all-test-cross.docker | 3 ++-
 tests/docker/dockerfiles/debian-hexagon-cross.docker  | 3 ++-
 tests/docker/dockerfiles/debian-riscv64-cross.docker  | 3 ++-
 tests/docker/dockerfiles/debian-tricore-cross.docker  | 3 ++-
 4 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/tests/docker/dockerfiles/debian-all-test-cross.docker 
b/tests/docker/dockerfiles/debian-all-test-cross.docker
index 981e9bdc7b..f9f401544a 100644
--- a/tests/docker/dockerfiles/debian-all-test-cross.docker
+++ b/tests/docker/dockerfiles/debian-all-test-cross.docker
@@ -57,7 +57,8 @@ RUN DEBIAN_FRONTEND=noninteractive eatmydata \
 gcc-sh4-linux-gnu \
 libc6-dev-sh4-cross \
 gcc-sparc64-linux-gnu \
-libc6-dev-sparc64-cross
+libc6-dev-sparc64-cross \
+python3-venv
 
 ENV QEMU_CONFIGURE_OPTS --disable-system --disable-docs --disable-tools
 ENV DEF_TARGET_LIST 
aarch64-linux-user,alpha-linux-user,arm-linux-user,hppa-linux-user,i386-linux-user,m68k-linux-user,mips-linux-user,mips64-linux-user,mips64el-linux-user,mipsel-linux-user,ppc-linux-user,ppc64-linux-user,ppc64le-linux-user,riscv64-linux-user,s390x-linux-user,sh4-linux-user,sparc64-linux-user
diff --git a/tests/docker/dockerfiles/debian-hexagon-cross.docker 
b/tests/docker/dockerfiles/debian-hexagon-cross.docker
index b99d99f943..c2cfb6a5d0 100644
--- a/tests/docker/dockerfiles/debian-hexagon-cross.docker
+++ b/tests/docker/dockerfiles/debian-hexagon-cross.docker
@@ -20,7 +20,8 @@ RUN apt-get update && \
 bison \
 flex \
 git \
-ninja-build && \
+ninja-build \
+python3-venv && \
 # Install QEMU build deps for use in CI
 DEBIAN_FRONTEND=noninteractive eatmydata \
 apt build-dep -yy --arch-only qemu
diff --git a/tests/docker/dockerfiles/debian-riscv64-cross.docker 
b/tests/docker/dockerfiles/debian-riscv64-cross.docker
index 803afb9573..081404e014 100644
--- a/tests/docker/dockerfiles/debian-riscv64-cross.docker
+++ b/tests/docker/dockerfiles/debian-riscv64-cross.docker
@@ -28,7 +28,8 @@ RUN DEBIAN_FRONTEND=noninteractive eatmydata apt install -yy \
 libglib2.0-dev \
 ninja-build \
 pkg-config \
-python3
+python3 \
+python3-venv
 
 # Add ports and riscv64 architecture
 RUN echo "deb http://ftp.ports.debian.org/debian-ports/ sid main" >> 
/etc/apt/sources.list
diff --git a/tests/docker/dockerfiles/debian-tricore-cross.docker 
b/tests/docker/dockerfiles/debian-tricore-cross.docker
index cfd2faf9a8..269bfa8d42 100644
--- a/tests/docker/dockerfiles/debian-tricore-cross.docker
+++ b/tests/docker/dockerfiles/debian-tricore-cross.docker
@@ -33,7 +33,8 @@ RUN apt update && \
pkgconf \
python3-pip \
python3-setuptools \
-   python3-wheel
+   python3-wheel \
+   python3-venv
 
 RUN curl -#SL 
https://github.com/bkoppelmann/package_940/releases/download/tricore-toolchain-9.40/tricore-toolchain-9.4.0.tar.gz
 \
 | tar -xzC /usr/local/
-- 
2.39.2




[RFC PATCH v3 02/20] python: add mkvenv.py

2023-04-24 Thread John Snow
This script will be responsible for building a lightweight Python
virtual environment at configure time. It works with Python 3.7 or
newer.

It has been designed to:
- work *offline*, no PyPI required.
- work *quickly*, the fast path is only ~65ms on my machine.
- work *robustly*, with multiple fallbacks to keep things working.
- work *cooperatively*, using system packages where possible.
  (You can use your distro's meson, no problem.)

Due to its unique position in the build chain, it exists outside of the
installable python packages in-tree and *must* be runnable without any
third party dependencies.

Under normal circumstances, the only dependency required to execute this
script is Python 3.7+ itself. The script is *faster* by several seconds
when setuptools and pip are installed in the host environment, which is
probably the case for a typical multi-purpose developer workstation.

In the event that pip/setuptools are missing or not usable, additional
dependencies may be required on some distributions which remove certain
Python stdlib modules to package them separately:

- Debian may require python3-venv to provide "ensurepip"
- NetBSD may require py310-expat to provide "pyexpat" *
  (* Or whichever version is current for NetBSD.)

Signed-off-by: John Snow 
---
 python/scripts/mkvenv.py | 277 +++
 python/setup.cfg |   9 ++
 python/tests/flake8.sh   |   1 +
 python/tests/isort.sh|   1 +
 python/tests/mypy.sh |   1 +
 python/tests/pylint.sh   |   1 +
 6 files changed, 290 insertions(+)
 create mode 100644 python/scripts/mkvenv.py

diff --git a/python/scripts/mkvenv.py b/python/scripts/mkvenv.py
new file mode 100644
index 00..1dfcc0198a
--- /dev/null
+++ b/python/scripts/mkvenv.py
@@ -0,0 +1,277 @@
+"""
+mkvenv - QEMU pyvenv bootstrapping utility
+
+usage: mkvenv [-h] command ...
+
+QEMU pyvenv bootstrapping utility
+
+options:
+  -h, --help  show this help message and exit
+
+Commands:
+  command Description
+createcreate a venv
+
+--
+
+usage: mkvenv create [-h] target
+
+positional arguments:
+  target  Target directory to install virtual environment into.
+
+options:
+  -h, --help  show this help message and exit
+
+"""
+
+# Copyright (C) 2022-2023 Red Hat, Inc.
+#
+# Authors:
+#  John Snow 
+#  Paolo Bonzini 
+#
+# This work is licensed under the terms of the GNU GPL, version 2 or
+# later. See the COPYING file in the top-level directory.
+
+import argparse
+from importlib.util import find_spec
+import logging
+import os
+from pathlib import Path
+import subprocess
+import sys
+import traceback
+from types import SimpleNamespace
+from typing import Any, Optional, Union
+import venv
+
+
+# Do not add any mandatory dependencies from outside the stdlib:
+# This script *must* be usable standalone!
+
+logger = logging.getLogger("mkvenv")
+
+
+class Ouch(RuntimeError):
+"""An Exception class we can't confuse with a builtin."""
+
+
+class QemuEnvBuilder(venv.EnvBuilder):
+"""
+An extension of venv.EnvBuilder for building QEMU's configure-time venv.
+
+As of this commit, it does not yet do anything particularly
+different than the standard venv-creation utility. The next several
+commits will gradually change that in small commits that highlight
+each feature individually.
+
+Parameters for base class init:
+  - system_site_packages: bool = False
+  - clear: bool = False
+  - symlinks: bool = False
+  - upgrade: bool = False
+  - with_pip: bool = False
+  - prompt: Optional[str] = None
+  - upgrade_deps: bool = False (Since 3.9)
+"""
+
+def __init__(self, *args: Any, **kwargs: Any) -> None:
+logger.debug("QemuEnvBuilder.__init__(...)")
+super().__init__(*args, **kwargs)
+
+# The EnvBuilder class is cute and toggles this setting off
+# before post_setup, but we need it to decide if we want to
+# generate shims or not.
+self._system_site_packages = self.system_site_packages
+
+def post_setup(self, context: SimpleNamespace) -> None:
+logger.debug("post_setup(...)")
+
+# print the python executable to stdout for configure.
+print(context.env_exe)
+
+
+def need_ensurepip() -> bool:
+"""
+Tests for the presence of setuptools and pip.
+
+:return: `True` if we do not detect both packages.
+"""
+# Don't try to actually import them, it's fraught with danger:
+# https://github.com/pypa/setuptools/issues/2993
+if find_spec("setuptools") and find_spec("pip"):
+return False
+return True
+
+
+def check_ensurepip(with_pip: bool) -> None:
+"""
+Check that we have ensurepip.
+
+Raise a fatal exception with a helpful hint if it isn't available.
+"""
+if with_pip and not find_spec("ensurepip"):
+msg = (
+"Python's ensurepip module is not found.\n"
+"It's 

[RFC PATCH v3 14/20] configure: use 'mkvenv ensure meson' to bootstrap meson

2023-04-24 Thread John Snow
This commit changes how we detect and install meson. It notably removes
'--meson='.

The previous patch creates a lightweight Python virtual environment
unconditionally using the user's configured $python that inherits system
packages. If Meson is installed there and meets our minimum version
requirements, we will use that Meson.

In the event that Meson is installed but *not for the chosen Python
interpreter*, not found, or of insufficient version, we will attempt to
install Meson from vendored source into the newly created Python virtual
environment. This vendored installation is considered to replace the
mechanism from prior tarball distributions.

This commit restores the ability to use a system meson, but in turn
temporarily removes the ability to use a meson as obtained from the
internet at configure-time (git submodules, as it stood prior to this
patch); that ability will be restored in the next commit.

As a result of this patch, the Python interpreter we use for both our
own build scripts *and* Meson extensions is always known to be the
exact same Python. As a further benefit, there will also be a symlink
available in the build directory that points to the correct, configured
python and can be used by e.g. manual tests to invoke the correct,
configured Python unambiguously.

Signed-off-by: John Snow 
---
 configure   | 72 -
 .gitlab-ci.d/buildtest-template.yml |  4 +-
 2 files changed, 21 insertions(+), 55 deletions(-)

diff --git a/configure b/configure
index 462fe604d6..e9947369b2 100755
--- a/configure
+++ b/configure
@@ -731,8 +731,6 @@ for opt do
   ;;
   --skip-meson) skip_meson=yes
   ;;
-  --meson=*) meson="$optarg"
-  ;;
   --ninja=*) ninja="$optarg"
   ;;
   --smbd=*) smbd="$optarg"
@@ -1016,7 +1014,6 @@ Advanced options (experts only):
   --cross-prefix-ARCH=PREFIX cross compiler prefix when building ARCH guest 
test cases
   --make=MAKE  use specified make [$make]
   --python=PYTHON  use specified python [$python]
-  --meson=MESONuse specified meson [$meson]
   --ninja=NINJAuse specified ninja [$ninja]
   --smbd=SMBD  use specified smbd [$smbd]
   --with-git=GIT   use specified git [$git]
@@ -1089,7 +1086,8 @@ fi
 
 # Resolve PATH
 python="$(command -v "$python")"
-explicit_python=yes
+# This variable is intended to be used only for error messages:
+target_python=$python
 
 # Create a Python virtual environment using our configured python.
 # The stdout of this script will be the location of a symlink that
@@ -1101,7 +1099,6 @@ explicit_python=yes
 # - venv is cleared if it exists already;
 # - venv is allowed to use system packages;
 # - all setup is performed **offline**;
-# - No packages are installed by default;
 # - pip is not installed into the venv when possible,
 #   but ensurepip is called as a fallback when necessary.
 
@@ -1116,58 +1113,27 @@ fi
 # Suppress writing compiled files
 python="$python -B"
 
-has_meson() {
-  local python_dir=$(dirname "$python")
-  # PEP405: pyvenv.cfg is either adjacent to the Python executable
-  # or one directory above
-  if test -f $python_dir/pyvenv.cfg || test -f $python_dir/../pyvenv.cfg; then
-# Ensure that Meson and Python come from the same virtual environment
-test -x "$python_dir/meson" &&
-  test "$(command -v meson)" -ef "$python_dir/meson"
-  else
-has meson
-  fi
-}
 
-if test -z "$meson"; then
-if test "$explicit_python" = no && has_meson && version_ge "$(meson 
--version)" 0.61.5; then
-meson=meson
-elif test "$git_submodules_action" != 'ignore' ; then
-meson=git
-elif test -e "${source_path}/meson/meson.py" ; then
-meson=internal
-else
-if test "$explicit_python" = yes; then
-error_exit "--python requires using QEMU's embedded Meson 
distribution, but it was not found."
-else
-error_exit "Meson not found.  Use --meson=/path/to/meson"
-fi
+if ! $python "${source_path}/python/scripts/mkvenv.py" ensure \
+ --dir "${source_path}/python/wheels" \
+ "meson>=0.61.5" ;
+then
+# We're very out of luck. Try to give a good diagnostic.
+if test -e pyvenv/bin/meson; then
+echo "Meson is too old: $(pyvenv/bin/meson --version)"
+elif has meson ; then
+echo "Meson was found installed on your system," \
+ "but not for the configured Python interpreter ($target_python)."
+echo "(Hint: check '$(which meson)' to see which interpreter its 
shebang uses.)"
 fi
-else
-# Meson uses its own Python interpreter to invoke other Python scripts,
-# but the user wants to use the one they specified with --python.
-#
-# We do not want to override the distro Python interpreter (and sometimes
-# cannot: for example in Homebrew /usr/bin/meson is a bash script), so
-# just require --meson=git|internal together with --python.
-if test "$explicit_python" = yes; 

[RFC PATCH v3 19/20] configure: use --diagnose option with meson ensure

2023-04-24 Thread John Snow
Rely on the diagnosis information from mkvenv so it can be removed from
the configure script.

Signed-off-by: John Snow 
---
 configure | 11 +--
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/configure b/configure
index ae55de1408..0881fffc14 100755
--- a/configure
+++ b/configure
@@ -1096,8 +1096,6 @@ fi
 
 # Resolve PATH
 python="$(command -v "$python")"
-# This variable is intended to be used only for error messages:
-target_python=$python
 
 # Create a Python virtual environment using our configured python.
 # The stdout of this script will be the location of a symlink that
@@ -1134,16 +1132,9 @@ fi
 if ! $python "${source_path}/python/scripts/mkvenv.py" ensure \
  $mkvenv_flags \
  --dir "${source_path}/python/wheels" \
+ --diagnose "meson" \
  "meson>=0.61.5" ;
 then
-# We're very out of luck. Try to give a good diagnostic.
-if test -e pyvenv/bin/meson; then
-echo "Meson is too old: $(pyvenv/bin/meson --version)"
-elif has meson ; then
-echo "Meson was found installed on your system," \
- "but not for the configured Python interpreter ($target_python)."
-echo "(Hint: check '$(which meson)' to see which interpreter its 
shebang uses.)"
-fi
 exit 1
 fi
 
-- 
2.39.2




[RFC PATCH v3 10/20] tests/vm: Configure netbsd to use Python 3.10

2023-04-24 Thread John Snow
NetBSD removes some packages from the Python stdlib, but only
re-packages them for Python 3.10. Switch to using Python 3.10.

Signed-off-by: John Snow 
---
 tests/vm/netbsd | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/vm/netbsd b/tests/vm/netbsd
index 0b9536ca17..13eae109c0 100755
--- a/tests/vm/netbsd
+++ b/tests/vm/netbsd
@@ -30,6 +30,7 @@ class NetBSDVM(basevm.BaseVM):
 "git-base",
 "pkgconf",
 "xz",
+"python310",
 "ninja-build",
 
 # gnu tools
-- 
2.39.2




Re: [PATCH] MAINTAINERS: Add Leonardo and Peter as reviewers

2023-04-24 Thread Leonardo Bras Soares Passos
On Fri, Apr 21, 2023 at 12:23 PM Peter Xu  wrote:
>
> On Wed, Apr 19, 2023 at 06:29:57PM +0200, Juan Quintela wrote:
> > Now that David has stepped down from Migration maintainership,
> > Leonardo and Peter have volunteered to review the migration patches.
> > This way they get CC'd on every migration patch.
> >
> > Signed-off-by: Juan Quintela 
>
> Acked-by: Peter Xu 
>
> --
> Peter Xu
>

Acked-by: Leonardo Bras 

Thanks,
Leonardo Bras




[PATCH v3 02/13] migration: Move qmp_migrate_set_parameters() to options.c

2023-04-24 Thread Juan Quintela
Signed-off-by: Juan Quintela 
---
 migration/migration.c | 420 --
 migration/options.c   | 418 +
 migration/options.h   |  11 ++
 3 files changed, 429 insertions(+), 420 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 02c2355d0d..22e8586623 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -67,19 +67,10 @@
 
 #define MAX_THROTTLE  (128 << 20)  /* Migration transfer speed throttling 
*/
 
-/* Amount of time to allocate to each "chunk" of bandwidth-throttled
- * data. */
-#define BUFFER_DELAY 100
-#define XFER_LIMIT_RATIO (1000 / BUFFER_DELAY)
-
 /* Time in milliseconds we are allowed to stop the source,
  * for sending the last part */
 #define DEFAULT_MIGRATE_SET_DOWNTIME 300
 
-/* Maximum migrate downtime set to 2000 seconds */
-#define MAX_MIGRATE_DOWNTIME_SECONDS 2000
-#define MAX_MIGRATE_DOWNTIME (MAX_MIGRATE_DOWNTIME_SECONDS * 1000)
-
 /* Default compression thread count */
 #define DEFAULT_MIGRATE_COMPRESS_THREAD_COUNT 8
 /* Default decompression thread count, usually decompression is at
@@ -1140,417 +1131,6 @@ MigrationInfo *qmp_query_migrate(Error **errp)
 return info;
 }
 
-/*
- * Check whether the parameters are valid. Error will be put into errp
- * (if provided). Return true if valid, otherwise false.
- */
-static bool migrate_params_check(MigrationParameters *params, Error **errp)
-{
-if (params->has_compress_level &&
-(params->compress_level > 9)) {
-error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "compress_level",
-   "a value between 0 and 9");
-return false;
-}
-
-if (params->has_compress_threads && (params->compress_threads < 1)) {
-error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
-   "compress_threads",
-   "a value between 1 and 255");
-return false;
-}
-
-if (params->has_decompress_threads && (params->decompress_threads < 1)) {
-error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
-   "decompress_threads",
-   "a value between 1 and 255");
-return false;
-}
-
-if (params->has_throttle_trigger_threshold &&
-(params->throttle_trigger_threshold < 1 ||
- params->throttle_trigger_threshold > 100)) {
-error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
-   "throttle_trigger_threshold",
-   "an integer in the range of 1 to 100");
-return false;
-}
-
-if (params->has_cpu_throttle_initial &&
-(params->cpu_throttle_initial < 1 ||
- params->cpu_throttle_initial > 99)) {
-error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
-   "cpu_throttle_initial",
-   "an integer in the range of 1 to 99");
-return false;
-}
-
-if (params->has_cpu_throttle_increment &&
-(params->cpu_throttle_increment < 1 ||
- params->cpu_throttle_increment > 99)) {
-error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
-   "cpu_throttle_increment",
-   "an integer in the range of 1 to 99");
-return false;
-}
-
-if (params->has_max_bandwidth && (params->max_bandwidth > SIZE_MAX)) {
-error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
-   "max_bandwidth",
-   "an integer in the range of 0 to "stringify(SIZE_MAX)
-   " bytes/second");
-return false;
-}
-
-if (params->has_downtime_limit &&
-(params->downtime_limit > MAX_MIGRATE_DOWNTIME)) {
-error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
-   "downtime_limit",
-   "an integer in the range of 0 to "
-stringify(MAX_MIGRATE_DOWNTIME)" ms");
-return false;
-}
-
-/* x_checkpoint_delay is now always positive */
-
-if (params->has_multifd_channels && (params->multifd_channels < 1)) {
-error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
-   "multifd_channels",
-   "a value between 1 and 255");
-return false;
-}
-
-if (params->has_multifd_zlib_level &&
-(params->multifd_zlib_level > 9)) {
-error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "multifd_zlib_level",
-   "a value between 0 and 9");
-return false;
-}
-
-if (params->has_multifd_zstd_level &&
-(params->multifd_zstd_level > 20)) {
-error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "multifd_zstd_level",
-   "a value between 0 and 20");
-return false;
-}
-
-if (params->has_xbzrle_cache_size &&
-(params->xbzrle_cache_size < qemu_target_page_size() ||
- !is_power_of_2(params->xbzrle_cache_size))) {
-error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
-   "xbzrle_cache_size",
-   "a power of two no less than the 

[PATCH v3 03/13] migration: Create migrate_params_init() function

2023-04-24 Thread Juan Quintela
Signed-off-by: Juan Quintela 
---
 migration/migration.c | 29 +
 migration/options.c   | 31 +++
 migration/options.h   |  1 +
 3 files changed, 33 insertions(+), 28 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 22e8586623..45fc5be93a 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3470,7 +3470,6 @@ static void migration_instance_finalize(Object *obj)
 static void migration_instance_init(Object *obj)
 {
 MigrationState *ms = MIGRATION_OBJ(obj);
-MigrationParameters *params = &s->parameters;
 
 ms->state = MIGRATION_STATUS_NONE;
 ms->mbps = -1;
@@ -3478,33 +3477,7 @@ static void migration_instance_init(Object *obj)
 qemu_sem_init(&ms->pause_sem, 0);
 qemu_mutex_init(&ms->error_mutex);
 
-params->tls_hostname = g_strdup("");
-params->tls_creds = g_strdup("");
-
-/* Set has_* up only for parameter checks */
-params->has_compress_level = true;
-params->has_compress_threads = true;
-params->has_compress_wait_thread = true;
-params->has_decompress_threads = true;
-params->has_throttle_trigger_threshold = true;
-params->has_cpu_throttle_initial = true;
-params->has_cpu_throttle_increment = true;
-params->has_cpu_throttle_tailslow = true;
-params->has_max_bandwidth = true;
-params->has_downtime_limit = true;
-params->has_x_checkpoint_delay = true;
-params->has_block_incremental = true;
-params->has_multifd_channels = true;
-params->has_multifd_compression = true;
-params->has_multifd_zlib_level = true;
-params->has_multifd_zstd_level = true;
-params->has_xbzrle_cache_size = true;
-params->has_max_postcopy_bandwidth = true;
-params->has_max_cpu_throttle = true;
-params->has_announce_initial = true;
-params->has_announce_max = true;
-params->has_announce_rounds = true;
-params->has_announce_step = true;
+migrate_params_init(&ms->parameters);
 
 qemu_sem_init(&ms->postcopy_pause_sem, 0);
 qemu_sem_init(&ms->postcopy_pause_rp_sem, 0);
diff --git a/migration/options.c b/migration/options.c
index 4701c75a4d..201f9ff58f 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -738,6 +738,37 @@ MigrationParameters *qmp_query_migrate_parameters(Error 
**errp)
 return params;
 }
 
+void migrate_params_init(MigrationParameters *params)
+{
+params->tls_hostname = g_strdup("");
+params->tls_creds = g_strdup("");
+
+/* Set has_* up only for parameter checks */
+params->has_compress_level = true;
+params->has_compress_threads = true;
+params->has_compress_wait_thread = true;
+params->has_decompress_threads = true;
+params->has_throttle_trigger_threshold = true;
+params->has_cpu_throttle_initial = true;
+params->has_cpu_throttle_increment = true;
+params->has_cpu_throttle_tailslow = true;
+params->has_max_bandwidth = true;
+params->has_downtime_limit = true;
+params->has_x_checkpoint_delay = true;
+params->has_block_incremental = true;
+params->has_multifd_channels = true;
+params->has_multifd_compression = true;
+params->has_multifd_zlib_level = true;
+params->has_multifd_zstd_level = true;
+params->has_xbzrle_cache_size = true;
+params->has_max_postcopy_bandwidth = true;
+params->has_max_cpu_throttle = true;
+params->has_announce_initial = true;
+params->has_announce_max = true;
+params->has_announce_rounds = true;
+params->has_announce_step = true;
+}
+
 /*
  * Check whether the parameters are valid. Error will be put into errp
  * (if provided). Return true if valid, otherwise false.
diff --git a/migration/options.h b/migration/options.h
index 89067e59a0..86bcbb738c 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -84,5 +84,6 @@ uint64_t migrate_xbzrle_cache_size(void);
 /* parameters helpers */
 
 bool migrate_params_check(MigrationParameters *params, Error **errp);
+void migrate_params_init(MigrationParameters *params);
 
 #endif
-- 
2.39.2




[PATCH v3 05/13] migration: Create migrate_downtime_limit() function

2023-04-24 Thread Juan Quintela
Signed-off-by: Juan Quintela 
---
 migration/migration.c | 4 ++--
 migration/options.c   | 7 +++
 migration/options.h   | 1 +
 3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 45fc5be93a..ee8e9416ce 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2737,7 +2737,7 @@ static void migration_update_counters(MigrationState *s,
 transferred = current_bytes - s->iteration_initial_bytes;
 time_spent = current_time - s->iteration_start_time;
 bandwidth = (double)transferred / time_spent;
-s->threshold_size = bandwidth * s->parameters.downtime_limit;
+s->threshold_size = bandwidth * migrate_downtime_limit();
 
 s->mbps = (((double) transferred * 8.0) /
((double) time_spent / 1000.0)) / 1000.0 / 1000.0;
@@ -3244,7 +3244,7 @@ void migrate_fd_connect(MigrationState *s, Error 
*error_in)
  */
 migrate_error_free(s);
 
-s->expected_downtime = s->parameters.downtime_limit;
+s->expected_downtime = migrate_downtime_limit();
 if (resume) {
 assert(s->cleanup_bh);
 } else {
diff --git a/migration/options.c b/migration/options.c
index bf4efd6ad4..ba854f613f 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -515,6 +515,13 @@ int migrate_decompress_threads(void)
 return s->parameters.decompress_threads;
 }
 
+uint64_t migrate_downtime_limit(void)
+{
+MigrationState *s = migrate_get_current();
+
+return s->parameters.downtime_limit;
+}
+
 uint8_t migrate_max_cpu_throttle(void)
 {
 MigrationState *s = migrate_get_current();
diff --git a/migration/options.h b/migration/options.h
index 86bcbb738c..e982103c0d 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -71,6 +71,7 @@ uint8_t migrate_cpu_throttle_increment(void);
 uint8_t migrate_cpu_throttle_initial(void);
 bool migrate_cpu_throttle_tailslow(void);
 int migrate_decompress_threads(void);
+uint64_t migrate_downtime_limit(void);
 uint8_t migrate_max_cpu_throttle(void);
 uint64_t migrate_max_bandwidth(void);
 int64_t migrate_max_postcopy_bandwidth(void);
-- 
2.39.2




[PATCH v3 04/13] migration: Make all functions check have the same format

2023-04-24 Thread Juan Quintela
Signed-off-by: Juan Quintela 
---
 migration/options.c | 153 +++-
 1 file changed, 39 insertions(+), 114 deletions(-)

diff --git a/migration/options.c b/migration/options.c
index 201f9ff58f..bf4efd6ad4 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -33,27 +33,21 @@
 
 bool migrate_auto_converge(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_AUTO_CONVERGE];
 }
 
 bool migrate_background_snapshot(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_BACKGROUND_SNAPSHOT];
 }
 
 bool migrate_block(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_BLOCK];
 }
@@ -61,95 +55,76 @@ bool migrate_block(void)
 bool migrate_colo(void)
 {
 MigrationState *s = migrate_get_current();
+
 return s->capabilities[MIGRATION_CAPABILITY_X_COLO];
 }
 
 bool migrate_compress(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_COMPRESS];
 }
 
 bool migrate_dirty_bitmaps(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_DIRTY_BITMAPS];
 }
 
 bool migrate_events(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_EVENTS];
 }
 
 bool migrate_ignore_shared(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_X_IGNORE_SHARED];
 }
 
 bool migrate_late_block_activate(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_LATE_BLOCK_ACTIVATE];
 }
 
 bool migrate_multifd(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_MULTIFD];
 }
 
 bool migrate_pause_before_switchover(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_PAUSE_BEFORE_SWITCHOVER];
 }
 
 bool migrate_postcopy_blocktime(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_POSTCOPY_BLOCKTIME];
 }
 
 bool migrate_postcopy_preempt(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_POSTCOPY_PREEMPT];
 }
 
 bool migrate_postcopy_ram(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_POSTCOPY_RAM];
 }
@@ -163,54 +138,42 @@ bool migrate_rdma_pin_all(void)
 
 bool migrate_release_ram(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_RELEASE_RAM];
 }
 
 bool migrate_return_path(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_RETURN_PATH];
 }
 
 bool migrate_validate_uuid(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_VALIDATE_UUID];
 }
 
 bool migrate_xbzrle(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_XBZRLE];
 }
 
 bool migrate_zero_blocks(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_ZERO_BLOCKS];
 }
 
 bool migrate_zero_copy_send(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->capabilities[MIGRATION_CAPABILITY_ZERO_COPY_SEND];
 }
@@ -224,9 +187,7 @@ bool migrate_postcopy(void)
 
 bool migrate_tls(void)
 {
-MigrationState *s;
-
-s = migrate_get_current();
+MigrationState *s = migrate_get_current();
 
 return s->parameters.tls_creds && *s->parameters.tls_creds;
 }
@@ -493,126 +454,98 @@ void 
qmp_migrate_set_capabilities(MigrationCapabilityStatusList *params,
 
 bool 

[PATCH v3 01/13] migration: Move migrate_use_tls() to options.c

2023-04-24 Thread Juan Quintela
Once there, rename it to migrate_tls() and make it return bool for
consistency.

Signed-off-by: Juan Quintela 

---

Fix typos found by fabiano
---
 migration/migration.c |  9 -
 migration/migration.h |  2 --
 migration/options.c   | 11 ++-
 migration/options.h   |  1 +
 migration/tls.c   |  3 ++-
 5 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 53dd59f6f6..02c2355d0d 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2177,15 +2177,6 @@ void qmp_migrate_continue(MigrationStatus state, Error 
**errp)
 qemu_sem_post(&s->pause_sem);
 }
 
-int migrate_use_tls(void)
-{
-MigrationState *s;
-
-s = migrate_get_current();
-
-return s->parameters.tls_creds && *s->parameters.tls_creds;
-}
-
 /* migration thread support */
 /*
  * Something bad happened to the RP stream, mark an error
diff --git a/migration/migration.h b/migration/migration.h
index dcf906868d..2b71df8617 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -447,8 +447,6 @@ bool migration_is_blocked(Error **errp);
 bool migration_in_postcopy(void);
 MigrationState *migrate_get_current(void);
 
-int migrate_use_tls(void);
-
 uint64_t ram_get_total_transferred_pages(void);
 
 /* Sending on the return path - generic and then for each message type */
diff --git a/migration/options.c b/migration/options.c
index 8e8753d9be..d4c0714683 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -214,6 +214,15 @@ bool migrate_postcopy(void)
 return migrate_postcopy_ram() || migrate_dirty_bitmaps();
 }
 
+bool migrate_tls(void)
+{
+MigrationState *s;
+
+s = migrate_get_current();
+
+return s->parameters.tls_creds && *s->parameters.tls_creds;
+}
+
 typedef enum WriteTrackingSupport {
 WT_SUPPORT_UNKNOWN = 0,
 WT_SUPPORT_ABSENT,
@@ -363,7 +372,7 @@ bool migrate_caps_check(bool *old_caps, bool *new_caps, 
Error **errp)
  new_caps[MIGRATION_CAPABILITY_COMPRESS] ||
  new_caps[MIGRATION_CAPABILITY_XBZRLE] ||
  migrate_multifd_compression() ||
- migrate_use_tls())) {
+ migrate_tls())) {
 error_setg(errp,
"Zero copy only available for non-compressed non-TLS 
multifd migration");
 return false;
diff --git a/migration/options.h b/migration/options.h
index 1b78fa9f3d..13318a16c7 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -46,6 +46,7 @@ bool migrate_zero_copy_send(void);
  */
 
 bool migrate_postcopy(void);
+bool migrate_tls(void);
 
 /* capabilities helpers */
 
diff --git a/migration/tls.c b/migration/tls.c
index 4d2166a209..acd38e0b62 100644
--- a/migration/tls.c
+++ b/migration/tls.c
@@ -22,6 +22,7 @@
 #include "channel.h"
 #include "migration.h"
 #include "tls.h"
+#include "options.h"
 #include "crypto/tlscreds.h"
 #include "qemu/error-report.h"
 #include "qapi/error.h"
@@ -165,7 +166,7 @@ void migration_tls_channel_connect(MigrationState *s,
 
 bool migrate_channel_requires_tls_upgrade(QIOChannel *ioc)
 {
-if (!migrate_use_tls()) {
+if (!migrate_tls()) {
 return false;
 }
 
-- 
2.39.2




[PATCH v3 09/13] migration: Create migrate_tls_creds() function

2023-04-24 Thread Juan Quintela
Signed-off-by: Juan Quintela 
---
 migration/options.c | 7 +++
 migration/options.h | 1 +
 migration/tls.c | 9 -
 3 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/migration/options.c b/migration/options.c
index f65b7babef..9eabb4c25d 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -579,6 +579,13 @@ uint8_t migrate_throttle_trigger_threshold(void)
 return s->parameters.throttle_trigger_threshold;
 }
 
+char *migrate_tls_creds(void)
+{
+MigrationState *s = migrate_get_current();
+
+return s->parameters.tls_creds;
+}
+
 uint64_t migrate_xbzrle_cache_size(void)
 {
 MigrationState *s = migrate_get_current();
diff --git a/migration/options.h b/migration/options.h
index 3948218dbe..47cc24585b 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -80,6 +80,7 @@ MultiFDCompression migrate_multifd_compression(void);
 int migrate_multifd_zlib_level(void);
 int migrate_multifd_zstd_level(void);
 uint8_t migrate_throttle_trigger_threshold(void);
+char *migrate_tls_creds(void);
 uint64_t migrate_xbzrle_cache_size(void);
 
 /* parameters setters */
diff --git a/migration/tls.c b/migration/tls.c
index acd38e0b62..0d318516de 100644
--- a/migration/tls.c
+++ b/migration/tls.c
@@ -34,20 +34,19 @@ migration_tls_get_creds(MigrationState *s,
 Error **errp)
 {
 Object *creds;
+char *tls_creds = migrate_tls_creds();
 QCryptoTLSCreds *ret;
 
-creds = object_resolve_path_component(
-object_get_objects_root(), s->parameters.tls_creds);
+creds = object_resolve_path_component(object_get_objects_root(), 
tls_creds);
 if (!creds) {
-error_setg(errp, "No TLS credentials with id '%s'",
-   s->parameters.tls_creds);
+error_setg(errp, "No TLS credentials with id '%s'", tls_creds);
 return NULL;
 }
 ret = (QCryptoTLSCreds *)object_dynamic_cast(
 creds, TYPE_QCRYPTO_TLS_CREDS);
 if (!ret) {
 error_setg(errp, "Object with id '%s' is not TLS credentials",
-   s->parameters.tls_creds);
+   tls_creds);
 return NULL;
 }
 if (!qcrypto_tls_creds_check_endpoint(ret, endpoint, errp)) {
-- 
2.39.2




[PATCH v3 08/13] migration: Remove MigrationState from block_cleanup_parameters()

2023-04-24 Thread Juan Quintela
This makes the function more regular with everything else.

Signed-off-by: Juan Quintela 
---
 migration/migration.c | 4 ++--
 migration/options.c   | 4 +++-
 migration/options.h   | 2 +-
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index cefe6da2b8..ef8caa79b9 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1218,7 +1218,7 @@ static void migrate_fd_cleanup(MigrationState *s)
 error_report_err(error_copy(s->error));
 }
 notifier_list_notify(&migration_state_notifiers, s);
-block_cleanup_parameters(s);
+block_cleanup_parameters();
 yank_unregister_instance(MIGRATION_YANK_INSTANCE);
 }
 
@@ -1712,7 +1712,7 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
"a valid migration protocol");
 migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
   MIGRATION_STATUS_FAILED);
-block_cleanup_parameters(s);
+block_cleanup_parameters();
 return;
 }
 
diff --git a/migration/options.c b/migration/options.c
index 26fe00799b..f65b7babef 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -597,8 +597,10 @@ void migrate_set_block_incremental(bool value)
 
 /* parameters helpers */
 
-void block_cleanup_parameters(MigrationState *s)
+void block_cleanup_parameters(void)
 {
+MigrationState *s = migrate_get_current();
+
 if (s->must_remove_block_options) {
 /* setting to false can never fail */
 migrate_cap_set(MIGRATION_CAPABILITY_BLOCK, false, &error_abort);
diff --git a/migration/options.h b/migration/options.h
index 1fc8d341dd..3948218dbe 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -90,6 +90,6 @@ void migrate_set_block_incremental(bool value);
 
 bool migrate_params_check(MigrationParameters *params, Error **errp);
 void migrate_params_init(MigrationParameters *params);
-void block_cleanup_parameters(MigrationState *s);
+void block_cleanup_parameters(void);
 
 #endif
-- 
2.39.2




[PATCH v3 06/13] migration: Move migrate_set_block_incremental() to options.c

2023-04-24 Thread Juan Quintela
Once there, make it more regular and remove the need for the
MigrationState parameter.

Signed-off-by: Juan Quintela 
---
 migration/migration.c | 9 ++---
 migration/options.c   | 9 +
 migration/options.h   | 4 
 3 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index ee8e9416ce..9a42f73aeb 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1164,17 +1164,12 @@ void migrate_set_state(int *state, int old_state, int 
new_state)
 }
 }
 
-static void migrate_set_block_incremental(MigrationState *s, bool value)
-{
-s->parameters.block_incremental = value;
-}
-
 static void block_cleanup_parameters(MigrationState *s)
 {
 if (s->must_remove_block_options) {
 /* setting to false can never fail */
 migrate_cap_set(MIGRATION_CAPABILITY_BLOCK, false, &error_abort);
-migrate_set_block_incremental(s, false);
+migrate_set_block_incremental(false);
 s->must_remove_block_options = false;
 }
 }
@@ -1668,7 +1663,7 @@ static bool migrate_prepare(MigrationState *s, bool blk, 
bool blk_inc,
 }
 
 if (blk_inc) {
-migrate_set_block_incremental(s, true);
+migrate_set_block_incremental(true);
 }
 
 migrate_init(s);
diff --git a/migration/options.c b/migration/options.c
index ba854f613f..37f7051d9d 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -586,6 +586,15 @@ uint64_t migrate_xbzrle_cache_size(void)
 return s->parameters.xbzrle_cache_size;
 }
 
+/* parameter setters */
+
+void migrate_set_block_incremental(bool value)
+{
+MigrationState *s = migrate_get_current();
+
+s->parameters.block_incremental = value;
+}
+
 /* parameters helpers */
 
 AnnounceParameters *migrate_announce_params(void)
diff --git a/migration/options.h b/migration/options.h
index e982103c0d..d261a25441 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -82,6 +82,10 @@ int migrate_multifd_zstd_level(void);
 uint8_t migrate_throttle_trigger_threshold(void);
 uint64_t migrate_xbzrle_cache_size(void);
 
+/* parameters setters */
+
+void migrate_set_block_incremental(bool value);
+
 /* parameters helpers */
 
 bool migrate_params_check(MigrationParameters *params, Error **errp);
-- 
2.39.2




[PATCH v3 13/13] migration: Move migration_properties to options.c

2023-04-24 Thread Juan Quintela
Signed-off-by: Juan Quintela 
---
 migration/migration.c | 157 --
 migration/options.c   | 155 +
 migration/options.h   |   7 ++
 3 files changed, 162 insertions(+), 157 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index ef8caa79b9..3adcdfe286 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -52,8 +52,6 @@
 #include "io/channel-tls.h"
 #include "migration/colo.h"
 #include "hw/boards.h"
-#include "hw/qdev-properties.h"
-#include "hw/qdev-properties-system.h"
 #include "monitor/monitor.h"
 #include "net/announce.h"
 #include "qemu/queue.h"
@@ -65,51 +63,6 @@
 #include "sysemu/qtest.h"
 #include "options.h"
 
-#define MAX_THROTTLE  (128 << 20)  /* Migration transfer speed throttling 
*/
-
-/* Time in milliseconds we are allowed to stop the source,
- * for sending the last part */
-#define DEFAULT_MIGRATE_SET_DOWNTIME 300
-
-/* Default compression thread count */
-#define DEFAULT_MIGRATE_COMPRESS_THREAD_COUNT 8
-/* Default decompression thread count, usually decompression is at
- * least 4 times as fast as compression.*/
-#define DEFAULT_MIGRATE_DECOMPRESS_THREAD_COUNT 2
-/*0: means nocompress, 1: best speed, ... 9: best compress ratio */
-#define DEFAULT_MIGRATE_COMPRESS_LEVEL 1
-/* Define default autoconverge cpu throttle migration parameters */
-#define DEFAULT_MIGRATE_THROTTLE_TRIGGER_THRESHOLD 50
-#define DEFAULT_MIGRATE_CPU_THROTTLE_INITIAL 20
-#define DEFAULT_MIGRATE_CPU_THROTTLE_INCREMENT 10
-#define DEFAULT_MIGRATE_MAX_CPU_THROTTLE 99
-
-/* Migration XBZRLE default cache size */
-#define DEFAULT_MIGRATE_XBZRLE_CACHE_SIZE (64 * 1024 * 1024)
-
-/* The delay time (in ms) between two COLO checkpoints */
-#define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY (200 * 100)
-#define DEFAULT_MIGRATE_MULTIFD_CHANNELS 2
-#define DEFAULT_MIGRATE_MULTIFD_COMPRESSION MULTIFD_COMPRESSION_NONE
-/* 0: means nocompress, 1: best speed, ... 9: best compress ratio */
-#define DEFAULT_MIGRATE_MULTIFD_ZLIB_LEVEL 1
-/* 0: means nocompress, 1: best speed, ... 20: best compress ratio */
-#define DEFAULT_MIGRATE_MULTIFD_ZSTD_LEVEL 1
-
-/* Background transfer rate for postcopy, 0 means unlimited, note
- * that page requests can still exceed this limit.
- */
-#define DEFAULT_MIGRATE_MAX_POSTCOPY_BANDWIDTH 0
-
-/*
- * Parameters for self_announce_delay giving a stream of RARP/ARP
- * packets after migration.
- */
-#define DEFAULT_MIGRATE_ANNOUNCE_INITIAL  50
-#define DEFAULT_MIGRATE_ANNOUNCE_MAX 550
-#define DEFAULT_MIGRATE_ANNOUNCE_ROUNDS5
-#define DEFAULT_MIGRATE_ANNOUNCE_STEP100
-
 static NotifierList migration_state_notifiers =
 NOTIFIER_LIST_INITIALIZER(migration_state_notifiers);
 
@@ -3317,116 +3270,6 @@ void migrate_fd_connect(MigrationState *s, Error 
*error_in)
 s->migration_thread_running = true;
 }
 
-#define DEFINE_PROP_MIG_CAP(name, x) \
-DEFINE_PROP_BOOL(name, MigrationState, capabilities[x], false)
-
-static Property migration_properties[] = {
-DEFINE_PROP_BOOL("store-global-state", MigrationState,
- store_global_state, true),
-DEFINE_PROP_BOOL("send-configuration", MigrationState,
- send_configuration, true),
-DEFINE_PROP_BOOL("send-section-footer", MigrationState,
- send_section_footer, true),
-DEFINE_PROP_BOOL("decompress-error-check", MigrationState,
-  decompress_error_check, true),
-DEFINE_PROP_UINT8("x-clear-bitmap-shift", MigrationState,
-  clear_bitmap_shift, CLEAR_BITMAP_SHIFT_DEFAULT),
-DEFINE_PROP_BOOL("x-preempt-pre-7-2", MigrationState,
- preempt_pre_7_2, false),
-
-/* Migration parameters */
-DEFINE_PROP_UINT8("x-compress-level", MigrationState,
-  parameters.compress_level,
-  DEFAULT_MIGRATE_COMPRESS_LEVEL),
-DEFINE_PROP_UINT8("x-compress-threads", MigrationState,
-  parameters.compress_threads,
-  DEFAULT_MIGRATE_COMPRESS_THREAD_COUNT),
-DEFINE_PROP_BOOL("x-compress-wait-thread", MigrationState,
-  parameters.compress_wait_thread, true),
-DEFINE_PROP_UINT8("x-decompress-threads", MigrationState,
-  parameters.decompress_threads,
-  DEFAULT_MIGRATE_DECOMPRESS_THREAD_COUNT),
-DEFINE_PROP_UINT8("x-throttle-trigger-threshold", MigrationState,
-  parameters.throttle_trigger_threshold,
-  DEFAULT_MIGRATE_THROTTLE_TRIGGER_THRESHOLD),
-DEFINE_PROP_UINT8("x-cpu-throttle-initial", MigrationState,
-  parameters.cpu_throttle_initial,
-  DEFAULT_MIGRATE_CPU_THROTTLE_INITIAL),
-DEFINE_PROP_UINT8("x-cpu-throttle-increment", MigrationState,
-  parameters.cpu_throttle_increment,
-  DEFAULT_MIGRATE_CPU_THROTTLE_INCREMENT),
-

[PATCH v3 00/13] Migration: Create options.c for capabilities/params/properties

2023-04-24 Thread Juan Quintela
Hi

In this v3:
- Rebase on top of latest.
- Address review comments on migrate_use_tls().
- The rest of the patches are not yet reviewed.

Please review.

[v2]
- the first two patches are included in the last pull request.
- Changed copyright from Anthony to Orit (thanks David)
  Some archeology required.
- Got all the Reviewed-by tags from Vladimir.
- Rebased on top of my last pull request.

The first two patches don't belong in this series, but without them you
get lots of conflicts if you try to use the series.  Those two patches
are posted independently on the list.

Please review.

[v1]
This series move to options.c:
- all migration capabilities code
- all migration parameters code
- all properties code
- all qmp commands that only touch the previous

And once there:
- sort the functions
- make all function naming/typing consistent and coherent
- create the accessors for parameters/capabilities that don't have them yet
- more cleanups here and there.

Todo:

- There is still capabilities code in savevm.c, but I want this series
  in before moving that code to options.c; it still needs more thought
  on my part, i.e. should I put the vmstate sections in options.c, or
  should I create new functions to access the capabilities from savevm.c?

Please review.

Juan Quintela (13):
  migration: Move migrate_use_tls() to options.c
  migration: Move qmp_migrate_set_parameters() to options.c
  migration: Create migrate_params_init() function
  migration: Make all functions check have the same format
  migration: Create migrate_downtime_limit() function
  migration: Move migrate_set_block_incremental() to options.c
  migration: Move block_cleanup_parameters() to options.c
  migration: Remove MigrationState from block_cleanup_parameters()
  migration: Create migrate_tls_creds() function
  migration: Create migrate_tls_authz() function
  migration: Create migrate_tls_hostname() function
  migration: Create migrate_block_bitmap_mapping() function
  migration: Move migration_properties to options.c

 migration/block-dirty-bitmap.c |  14 +-
 migration/migration.c  | 640 +-
 migration/migration.h  |   2 -
 migration/options.c| 818 -
 migration/options.h|  30 ++
 migration/tls.c|  23 +-
 6 files changed, 761 insertions(+), 766 deletions(-)

-- 
2.39.2




[PATCH v3 10/13] migration: Create migrate_tls_authz() function

2023-04-24 Thread Juan Quintela
Signed-off-by: Juan Quintela 
---
 migration/options.c | 7 +++
 migration/options.h | 1 +
 migration/tls.c | 5 +
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/migration/options.c b/migration/options.c
index 9eabb4c25d..9e19e4ade1 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -579,6 +579,13 @@ uint8_t migrate_throttle_trigger_threshold(void)
 return s->parameters.throttle_trigger_threshold;
 }
 
+char *migrate_tls_authz(void)
+{
+MigrationState *s = migrate_get_current();
+
+return s->parameters.tls_authz;
+}
+
 char *migrate_tls_creds(void)
 {
 MigrationState *s = migrate_get_current();
diff --git a/migration/options.h b/migration/options.h
index 47cc24585b..0438c6e36e 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -80,6 +80,7 @@ MultiFDCompression migrate_multifd_compression(void);
 int migrate_multifd_zlib_level(void);
 int migrate_multifd_zstd_level(void);
 uint8_t migrate_throttle_trigger_threshold(void);
+char *migrate_tls_authz(void);
 char *migrate_tls_creds(void);
 uint64_t migrate_xbzrle_cache_size(void);
 
diff --git a/migration/tls.c b/migration/tls.c
index 0d318516de..4c229326fd 100644
--- a/migration/tls.c
+++ b/migration/tls.c
@@ -86,10 +86,7 @@ void migration_tls_channel_process_incoming(MigrationState 
*s,
 return;
 }
 
-tioc = qio_channel_tls_new_server(
-ioc, creds,
-s->parameters.tls_authz,
-errp);
+tioc = qio_channel_tls_new_server(ioc, creds, migrate_tls_authz(), errp);
 if (!tioc) {
 return;
 }
-- 
2.39.2




[PATCH v3 11/13] migration: Create migrate_tls_hostname() function

2023-04-24 Thread Juan Quintela
Signed-off-by: Juan Quintela 
---
 migration/options.c | 7 +++
 migration/options.h | 1 +
 migration/tls.c | 6 --
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/migration/options.c b/migration/options.c
index 9e19e4ade1..9fbba84b9a 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -593,6 +593,13 @@ char *migrate_tls_creds(void)
 return s->parameters.tls_creds;
 }
 
+char *migrate_tls_hostname(void)
+{
+MigrationState *s = migrate_get_current();
+
+return s->parameters.tls_hostname;
+}
+
 uint64_t migrate_xbzrle_cache_size(void)
 {
 MigrationState *s = migrate_get_current();
diff --git a/migration/options.h b/migration/options.h
index 0438c6e36e..9123fdb5f4 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -82,6 +82,7 @@ int migrate_multifd_zstd_level(void);
 uint8_t migrate_throttle_trigger_threshold(void);
 char *migrate_tls_authz(void);
 char *migrate_tls_creds(void);
+char *migrate_tls_hostname(void);
 uint64_t migrate_xbzrle_cache_size(void);
 
 /* parameters setters */
diff --git a/migration/tls.c b/migration/tls.c
index 4c229326fd..3cae1a06e7 100644
--- a/migration/tls.c
+++ b/migration/tls.c
@@ -123,6 +123,7 @@ QIOChannelTLS *migration_tls_client_create(MigrationState 
*s,
Error **errp)
 {
 QCryptoTLSCreds *creds;
+char *tls_hostname;
 
 creds = migration_tls_get_creds(
 s, QCRYPTO_TLS_CREDS_ENDPOINT_CLIENT, errp);
@@ -130,8 +131,9 @@ QIOChannelTLS *migration_tls_client_create(MigrationState 
*s,
 return NULL;
 }
 
-if (s->parameters.tls_hostname && *s->parameters.tls_hostname) {
-hostname = s->parameters.tls_hostname;
+tls_hostname = migrate_tls_hostname();
+if (tls_hostname && *tls_hostname) {
+hostname = tls_hostname;
 }
 
 return qio_channel_tls_new_client(ioc, creds, hostname, errp);
-- 
2.39.2




[PATCH v3 12/13] migration: Create migrate_block_bitmap_mapping() function

2023-04-24 Thread Juan Quintela
Notice that we replaced the test of ->has_block_bitmap_mapping
with a test that block_bitmap_mapping is not NULL.

Signed-off-by: Juan Quintela 
---
 migration/block-dirty-bitmap.c | 14 --
 migration/options.c|  7 +++
 migration/options.h|  1 +
 3 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/migration/block-dirty-bitmap.c b/migration/block-dirty-bitmap.c
index a6ffae0002..62b2352bbb 100644
--- a/migration/block-dirty-bitmap.c
+++ b/migration/block-dirty-bitmap.c
@@ -605,11 +605,12 @@ static int init_dirty_bitmap_migration(DBMSaveState *s)
 SaveBitmapState *dbms;
 GHashTable *handled_by_blk = g_hash_table_new(NULL, NULL);
 BlockBackend *blk;
-const MigrationParameters *mig_params = &migrate_get_current()->parameters;
 GHashTable *alias_map = NULL;
+BitmapMigrationNodeAliasList *block_bitmap_mapping =
+migrate_block_bitmap_mapping();
 
-if (mig_params->has_block_bitmap_mapping) {
-alias_map = construct_alias_map(mig_params->block_bitmap_mapping, true,
+if (block_bitmap_mapping) {
+alias_map = construct_alias_map(block_bitmap_mapping, true,
&error_abort);
 }
 
@@ -1158,7 +1159,8 @@ static int dirty_bitmap_load_header(QEMUFile *f, 
DBMLoadState *s,
 static int dirty_bitmap_load(QEMUFile *f, void *opaque, int version_id)
 {
 GHashTable *alias_map = NULL;
-const MigrationParameters *mig_params = &migrate_get_current()->parameters;
+BitmapMigrationNodeAliasList *block_bitmap_mapping =
+migrate_block_bitmap_mapping();
 DBMLoadState *s = &((DBMState *)opaque)->load;
 int ret = 0;
 
@@ -1170,8 +1172,8 @@ static int dirty_bitmap_load(QEMUFile *f, void *opaque, 
int version_id)
 return -EINVAL;
 }
 
-if (mig_params->has_block_bitmap_mapping) {
-alias_map = construct_alias_map(mig_params->block_bitmap_mapping,
+if (block_bitmap_mapping) {
+alias_map = construct_alias_map(block_bitmap_mapping,
false, &error_abort);
 }
 
diff --git a/migration/options.c b/migration/options.c
index 9fbba84b9a..ec234bf3ff 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -452,6 +452,13 @@ void 
qmp_migrate_set_capabilities(MigrationCapabilityStatusList *params,
 
 /* parameters */
 
+BitmapMigrationNodeAliasList *migrate_block_bitmap_mapping(void)
+{
+MigrationState *s = migrate_get_current();
+
+return s->parameters.block_bitmap_mapping;
+}
+
 bool migrate_block_incremental(void)
 {
 MigrationState *s = migrate_get_current();
diff --git a/migration/options.h b/migration/options.h
index 9123fdb5f4..43e8e9cd8f 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -62,6 +62,7 @@ bool migrate_cap_set(int cap, bool value, Error **errp);
 
 /* parameters */
 
+BitmapMigrationNodeAliasList *migrate_block_bitmap_mapping(void);
 bool migrate_block_incremental(void);
 uint32_t migrate_checkpoint_delay(void);
 int migrate_compress_level(void);
-- 
2.39.2




[PATCH v3 07/13] migration: Move block_cleanup_parameters() to options.c

2023-04-24 Thread Juan Quintela
Signed-off-by: Juan Quintela 
---
 migration/migration.c | 10 --
 migration/options.c   | 10 ++
 migration/options.h   |  1 +
 3 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 9a42f73aeb..cefe6da2b8 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1164,16 +1164,6 @@ void migrate_set_state(int *state, int old_state, int 
new_state)
 }
 }
 
-static void block_cleanup_parameters(MigrationState *s)
-{
-if (s->must_remove_block_options) {
-/* setting to false can never fail */
-migrate_cap_set(MIGRATION_CAPABILITY_BLOCK, false, &error_abort);
-migrate_set_block_incremental(false);
-s->must_remove_block_options = false;
-}
-}
-
 static void migrate_fd_cleanup(MigrationState *s)
 {
 qemu_bh_delete(s->cleanup_bh);
diff --git a/migration/options.c b/migration/options.c
index 37f7051d9d..26fe00799b 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -597,6 +597,16 @@ void migrate_set_block_incremental(bool value)
 
 /* parameters helpers */
 
+void block_cleanup_parameters(MigrationState *s)
+{
+if (s->must_remove_block_options) {
+/* setting to false can never fail */
+migrate_cap_set(MIGRATION_CAPABILITY_BLOCK, false, &error_abort);
+migrate_set_block_incremental(false);
+s->must_remove_block_options = false;
+}
+}
+
 AnnounceParameters *migrate_announce_params(void)
 {
 static AnnounceParameters ap;
diff --git a/migration/options.h b/migration/options.h
index d261a25441..1fc8d341dd 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -90,5 +90,6 @@ void migrate_set_block_incremental(bool value);
 
 bool migrate_params_check(MigrationParameters *params, Error **errp);
 void migrate_params_init(MigrationParameters *params);
+void block_cleanup_parameters(MigrationState *s);
 
 #endif
-- 
2.39.2




Re: test-blockjob: intermittent CI failures in msys2-64bit job

2023-04-24 Thread Vladimir Sementsov-Ogievskiy

On 24.04.23 16:36, Emanuele Giuseppe Esposito wrote:



Am 21/04/2023 um 12:13 schrieb Vladimir Sementsov-Ogievskiy:

On 17.03.23 15:35, Thomas Huth wrote:

On 17/03/2023 11.17, Peter Maydell wrote:

On Mon, 6 Mar 2023 at 11:16, Peter Maydell 
wrote:


On Fri, 3 Mar 2023 at 18:36, Peter Maydell
 wrote:


I've noticed that test-blockjob seems to fail intermittently
on the msys2-64bit job:

https://gitlab.com/qemu-project/qemu/-/jobs/3872508803
https://gitlab.com/qemu-project/qemu/-/jobs/3871061024
https://gitlab.com/qemu-project/qemu/-/jobs/3865312440

Sample output:
| 53/89
ERROR:../tests/unit/test-blockjob.c:499:test_complete_in_standby:
assertion failed: (job->status == JOB_STATUS_STANDBY) ERROR
53/89 qemu:unit / test-blockjob ERROR 0.08s exit status 3



Here's an intermittent failure from my macos x86 machine:

172/621 qemu:unit / test-blockjob
     ERROR   0.26s   killed by signal 6 SIGABRT


And an intermittent on the freebsd 13 CI job:
https://gitlab.com/qemu-project/qemu/-/jobs/3950667240


MALLOC_PERTURB_=197
G_TEST_BUILDDIR=/tmp/cirrus-ci-build/build/tests/unit
G_TEST_SRCDIR=/tmp/cirrus-ci-build/tests/unit
/tmp/cirrus-ci-build/build/tests/unit/test-blockjob --tap -k

▶ 178/650 /blockjob/ids
     OK
178/650 qemu:unit / test-blockjob
     ERROR   0.31s   killed by signal 6 SIGABRT
― ✀
―
stderr:
Assertion failed: (job->status == JOB_STATUS_STANDBY), function
test_complete_in_standby, file ../tests/unit/test-blockjob.c, line
499.


TAP parsing error: Too few tests run (expected 9, got 1)
(test program exited with status code -6)
――

Anybody in the block team looking at these, or shall we just
disable this test entirely ?


I ran into this issue today, too:

   https://gitlab.com/thuth/qemu/-/jobs/3954367101

... if nobody is interested in fixing this test, I think we should
disable it...

   Thomas



I'm looking at this now, and it seems it has been broken since
6f592e5aca1a27fe1c1f6 "job.c: enable job lock/unlock and remove
Aiocontext locks", which ends the critical section with a new
aio_context_release() call exactly after bdrv_drain_all_and(), so it's
no surprise that the job may start at that moment and the following
assertion fires.

Emanuele, could you please look at it?


Well, if I understood correctly, the only thing preventing the
job from continuing was the AioContext lock being held.

The failing assertion even mentions that:
/* Lock the IO thread to prevent the job from being run */
[...]
/* But the job cannot run, so it will remain on standby */
assert(job->status == JOB_STATUS_STANDBY);

Essentially bdrv_drain_all_end() would wake up the job coroutine, but it
would block somewhere afterwards anyway, since the AioContext lock was
taken by the test.

Now that we got rid of the AioContext lock in the job code, nothing
prevents the test from resuming.
Mixing the job lock and AioContext acquire/release is not a good idea,
and it would anyway block job_resume() called by bdrv_drain_all_end(),
so the test itself would deadlock.

So unless @Kevin has a better idea, this seems to be just a "hack" to
test stuff that is not possible to do now anymore. What I would suggest
is to get rid of that test, or at least of that assert function. I
unfortunately cannot reproduce the failure, but I think the remaining
functions called by the test should run as expected.



Thanks! I agree. Alternatively, we could probably just expand the drained
section, like:

@@ -488,12 +488,6 @@ static void test_complete_in_standby(void)
 bdrv_drain_all_begin();
 assert_job_status_is(job, JOB_STATUS_STANDBY);
 
-/* Lock the IO thread to prevent the job from being run */

-aio_context_acquire(ctx);
-/* This will schedule the job to resume it */
-bdrv_drain_all_end();
-aio_context_release(ctx);
-
 WITH_JOB_LOCK_GUARD() {
 /* But the job cannot run, so it will remain on standby */
 assert(job->status == JOB_STATUS_STANDBY);
@@ -511,6 +505,7 @@ static void test_complete_in_standby(void)
job_dismiss_locked(&job, &error_abort);
 }
 
+bdrv_drain_all_end();

 aio_context_acquire(ctx);
 destroy_blk(blk);
 aio_context_release(ctx);


But it seems the test specifically wanted to exercise a job that is still in
STANDBY exactly after the drained section...

[cc: Hanna]

Hanna, it was your test (added in c2c731a4d35062295cd3260e66b3754588a2fad4, two 
years ago). Do you remember whether it was important to catch the STANDBY job 
*after* a drained section?

--
Best regards,
Vladimir




Re: [PULL 00/20] Block patches

2023-04-24 Thread Sam Li
Stefan Hajnoczi wrote on Tue, Apr 25, 2023 at 01:52:
>
> On Fri, Apr 21, 2023 at 08:01:56PM +0100, Richard Henderson wrote:
> > On 4/20/23 13:09, Stefan Hajnoczi wrote:
> > > The following changes since commit 
> > > c1eb2ddf0f8075faddc5f7c3d39feae3e8e9d6b4:
> > >
> > >Update version for v8.0.0 release (2023-04-19 17:27:13 +0100)
> > >
> > > are available in the Git repository at:
> > >
> > >https://gitlab.com/stefanha/qemu.git tags/block-pull-request
> > >
> > > for you to fetch changes up to 36e5e9b22abe56aa00ca067851555ad8127a7966:
> > >
> > >tracing: install trace events file only if necessary (2023-04-20 
> > > 07:39:43 -0400)
> > >
> > > 
> > > Pull request
> > >
> > > Sam Li's zoned storage work and fixes I collected during the 8.0 freeze.
> > >
> > > 
> > >
> > > Carlos Santos (1):
> > >tracing: install trace events file only if necessary
> > >
> > > Philippe Mathieu-Daudé (1):
> > >block/dmg: Declare a type definition for DMG uncompress function
> > >
> > > Sam Li (17):
> > >block/block-common: add zoned device structs
> > >block/file-posix: introduce helper functions for sysfs attributes
> > >block/block-backend: add block layer APIs resembling Linux
> > >  ZonedBlockDevice ioctls
> > >block/raw-format: add zone operations to pass through requests
> > >block: add zoned BlockDriver check to block layer
> > >iotests: test new zone operations
> > >block: add some trace events for new block layer APIs
> > >docs/zoned-storage: add zoned device documentation
> > >file-posix: add tracking of the zone write pointers
> > >block: introduce zone append write for zoned devices
> > >qemu-iotests: test zone append operation
> > >block: add some trace events for zone append
> > >include: update virtio_blk headers to v6.3-rc1
> > >virtio-blk: add zoned storage emulation for zoned devices
> > >block: add accounting for zone append operation
> > >virtio-blk: add some trace events for zoned emulation
> > >docs/zoned-storage:add zoned emulation use case
> > >
> > > Thomas De Schampheleire (1):
> > >tracetool: use relative paths for '#line' preprocessor directives
> >
> > 32 failed CI jobs:
> > https://gitlab.com/qemu-project/qemu/-/pipelines/844927626/failures
>
> Hi Sam,
> I have dropped the zoned storage patches from the block pull request.
> Please take a look at the diff below and squash the fixes into your
> original commits.
>
> Once you have reworked your patch series, please retest them and then
> resend so we can merge them in the next block pull request.

Thanks! I will do that ASAP.


Sam



[PULL 1/2] block/dmg: Declare a type definition for DMG uncompress function

2023-04-24 Thread Stefan Hajnoczi
From: Philippe Mathieu-Daudé 

Introduce the BdrvDmgUncompressFunc type definition. To emphasize that
dmg_uncompress_bz2 and dmg_uncompress_lzfse are pointers to functions,
declare them using this new typedef.

Signed-off-by: Philippe Mathieu-Daudé 
Message-id: 20230320152610.32052-1-phi...@linaro.org
Signed-off-by: Stefan Hajnoczi 
---
 block/dmg.h | 8 
 block/dmg.c | 7 ++-
 2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/block/dmg.h b/block/dmg.h
index e488601b62..dcd6165e63 100644
--- a/block/dmg.h
+++ b/block/dmg.h
@@ -51,10 +51,10 @@ typedef struct BDRVDMGState {
 z_stream zstream;
 } BDRVDMGState;
 
-extern int (*dmg_uncompress_bz2)(char *next_in, unsigned int avail_in,
- char *next_out, unsigned int avail_out);
+typedef int BdrvDmgUncompressFunc(char *next_in, unsigned int avail_in,
+  char *next_out, unsigned int avail_out);
 
-extern int (*dmg_uncompress_lzfse)(char *next_in, unsigned int avail_in,
-   char *next_out, unsigned int avail_out);
+extern BdrvDmgUncompressFunc *dmg_uncompress_bz2;
+extern BdrvDmgUncompressFunc *dmg_uncompress_lzfse;
 
 #endif
diff --git a/block/dmg.c b/block/dmg.c
index e10b9a2ba5..2769900359 100644
--- a/block/dmg.c
+++ b/block/dmg.c
@@ -31,11 +31,8 @@
 #include "qemu/memalign.h"
 #include "dmg.h"
 
-int (*dmg_uncompress_bz2)(char *next_in, unsigned int avail_in,
-  char *next_out, unsigned int avail_out);
-
-int (*dmg_uncompress_lzfse)(char *next_in, unsigned int avail_in,
-char *next_out, unsigned int avail_out);
+BdrvDmgUncompressFunc *dmg_uncompress_bz2;
+BdrvDmgUncompressFunc *dmg_uncompress_lzfse;
 
 enum {
 /* Limit chunk sizes to prevent unreasonable amounts of memory being used
-- 
2.39.2




[PULL 2/2] tracetool: use relative paths for '#line' preprocessor directives

2023-04-24 Thread Stefan Hajnoczi
From: Thomas De Schampheleire 

The event filename is an absolute path. Convert it to a relative path when
writing '#line' directives, to preserve reproducibility of the generated
output when different base paths are used.

Signed-off-by: Thomas De Schampheleire 
Signed-off-by: Stefan Hajnoczi 
Message-Id: <20230406080045.21696-1-thomas.de_schamphele...@nokia.com>
---
 scripts/tracetool/backend/ftrace.py | 4 +++-
 scripts/tracetool/backend/log.py| 4 +++-
 scripts/tracetool/backend/syslog.py | 4 +++-
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/scripts/tracetool/backend/ftrace.py 
b/scripts/tracetool/backend/ftrace.py
index 5fa30ccc08..baed2ae61c 100644
--- a/scripts/tracetool/backend/ftrace.py
+++ b/scripts/tracetool/backend/ftrace.py
@@ -12,6 +12,8 @@
 __email__  = "stefa...@redhat.com"
 
 
+import os.path
+
 from tracetool import out
 
 
@@ -45,7 +47,7 @@ def generate_h(event, group):
 args=event.args,
 event_id="TRACE_" + event.name.upper(),
 event_lineno=event.lineno,
-event_filename=event.filename,
+event_filename=os.path.relpath(event.filename),
 fmt=event.fmt.rstrip("\n"),
 argnames=argnames)
 
diff --git a/scripts/tracetool/backend/log.py b/scripts/tracetool/backend/log.py
index 17ba1cd90e..de27b7e62e 100644
--- a/scripts/tracetool/backend/log.py
+++ b/scripts/tracetool/backend/log.py
@@ -12,6 +12,8 @@
 __email__  = "stefa...@redhat.com"
 
 
+import os.path
+
 from tracetool import out
 
 
@@ -53,7 +55,7 @@ def generate_h(event, group):
 '}',
 cond=cond,
 event_lineno=event.lineno,
-event_filename=event.filename,
+event_filename=os.path.relpath(event.filename),
 name=event.name,
 fmt=event.fmt.rstrip("\n"),
 argnames=argnames)
diff --git a/scripts/tracetool/backend/syslog.py 
b/scripts/tracetool/backend/syslog.py
index 5a3a00fe31..012970f6cc 100644
--- a/scripts/tracetool/backend/syslog.py
+++ b/scripts/tracetool/backend/syslog.py
@@ -12,6 +12,8 @@
 __email__  = "stefa...@redhat.com"
 
 
+import os.path
+
 from tracetool import out
 
 
@@ -41,7 +43,7 @@ def generate_h(event, group):
 '}',
 cond=cond,
 event_lineno=event.lineno,
-event_filename=event.filename,
+event_filename=os.path.relpath(event.filename),
 name=event.name,
 fmt=event.fmt.rstrip("\n"),
 argnames=argnames)
-- 
2.39.2




[PULL 0/2] Block patches

2023-04-24 Thread Stefan Hajnoczi
The following changes since commit ac5f7bf8e208cd7893dbb1a9520559e569a4677c:

  Merge tag 'migration-20230424-pull-request' of 
https://gitlab.com/juan.quintela/qemu into staging (2023-04-24 15:00:39 +0100)

are available in the Git repository at:

  https://gitlab.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to 9d672e290475001fcecdcc9dc79ad088ff89d17f:

  tracetool: use relative paths for '#line' preprocessor directives (2023-04-24 
13:53:44 -0400)


Pull request (v2)

I dropped the zoned storage patches that had CI failures. This pull request
only contains fixes now.



Philippe Mathieu-Daudé (1):
  block/dmg: Declare a type definition for DMG uncompress function

Thomas De Schampheleire (1):
  tracetool: use relative paths for '#line' preprocessor directives

 block/dmg.h | 8 
 block/dmg.c | 7 ++-
 scripts/tracetool/backend/ftrace.py | 4 +++-
 scripts/tracetool/backend/log.py| 4 +++-
 scripts/tracetool/backend/syslog.py | 4 +++-
 5 files changed, 15 insertions(+), 12 deletions(-)

-- 
2.39.2




Re: [PULL 00/20] Block patches

2023-04-24 Thread Stefan Hajnoczi
On Fri, Apr 21, 2023 at 08:01:56PM +0100, Richard Henderson wrote:
> On 4/20/23 13:09, Stefan Hajnoczi wrote:
> > The following changes since commit c1eb2ddf0f8075faddc5f7c3d39feae3e8e9d6b4:
> > 
> >Update version for v8.0.0 release (2023-04-19 17:27:13 +0100)
> > 
> > are available in the Git repository at:
> > 
> >https://gitlab.com/stefanha/qemu.git tags/block-pull-request
> > 
> > for you to fetch changes up to 36e5e9b22abe56aa00ca067851555ad8127a7966:
> > 
> >tracing: install trace events file only if necessary (2023-04-20 
> > 07:39:43 -0400)
> > 
> > 
> > Pull request
> > 
> > Sam Li's zoned storage work and fixes I collected during the 8.0 freeze.
> > 
> > 
> > 
> > Carlos Santos (1):
> >tracing: install trace events file only if necessary
> > 
> > Philippe Mathieu-Daudé (1):
> >block/dmg: Declare a type definition for DMG uncompress function
> > 
> > Sam Li (17):
> >block/block-common: add zoned device structs
> >block/file-posix: introduce helper functions for sysfs attributes
> >block/block-backend: add block layer APIs resembling Linux
> >  ZonedBlockDevice ioctls
> >block/raw-format: add zone operations to pass through requests
> >block: add zoned BlockDriver check to block layer
> >iotests: test new zone operations
> >block: add some trace events for new block layer APIs
> >docs/zoned-storage: add zoned device documentation
> >file-posix: add tracking of the zone write pointers
> >block: introduce zone append write for zoned devices
> >qemu-iotests: test zone append operation
> >block: add some trace events for zone append
> >include: update virtio_blk headers to v6.3-rc1
> >virtio-blk: add zoned storage emulation for zoned devices
> >block: add accounting for zone append operation
> >virtio-blk: add some trace events for zoned emulation
> >docs/zoned-storage:add zoned emulation use case
> > 
> > Thomas De Schampheleire (1):
> >tracetool: use relative paths for '#line' preprocessor directives
> 
> 32 failed CI jobs:
> https://gitlab.com/qemu-project/qemu/-/pipelines/844927626/failures

Hi Sam,
I have dropped the zoned storage patches from the block pull request.
Please take a look at the diff below and squash the fixes into your
original commits.

Once you have reworked your patch series, please retest them and then
resend so we can merge them in the next block pull request.

Thank you!

Stefan

---
From 08dd0db534d2dbc3aa41d6147ae99bf589bbed57 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi 
Date: Mon, 24 Apr 2023 13:46:46 -0400
Subject: [PATCH 1/4] Fix mingw32 32-bit compilation

Add uintptr_t casts where necessary so 32-bit mingw32 builds succeed.

Signed-off-by: Stefan Hajnoczi 
---
 block/block-backend.c | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index 6b2b92b7ff..b9274661fc 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -1839,7 +1839,8 @@ static void coroutine_fn blk_aio_zone_report_entry(void 
*opaque)
BlkRwCo *rwco = &acb->rwco;
 
 rwco->ret = blk_co_zone_report(rwco->blk, rwco->offset,
-   (unsigned int*)acb->bytes,rwco->iobuf);
+   (unsigned int *)(uintptr_t)acb->bytes,
+   rwco->iobuf);
 blk_aio_complete(acb);
 }
 
@@ -1860,7 +1861,7 @@ BlockAIOCB *blk_aio_zone_report(BlockBackend *blk, 
int64_t offset,
 .iobuf  = zones,
 .ret= NOT_DONE,
 };
-acb->bytes = (int64_t)nr_zones,
+acb->bytes = (int64_t)(uintptr_t)nr_zones,
 acb->has_returned = false;
 
 co = qemu_coroutine_create(blk_aio_zone_report_entry, acb);
@@ -1880,7 +1881,8 @@ static void coroutine_fn blk_aio_zone_mgmt_entry(void 
*opaque)
 BlkAioEmAIOCB *acb = opaque;
BlkRwCo *rwco = &acb->rwco;
 
-rwco->ret = blk_co_zone_mgmt(rwco->blk, (BlockZoneOp)rwco->iobuf,
+rwco->ret = blk_co_zone_mgmt(rwco->blk,
+ (BlockZoneOp)(uintptr_t)rwco->iobuf,
  rwco->offset, acb->bytes);
 blk_aio_complete(acb);
 }
@@ -1897,7 +1899,7 @@ BlockAIOCB *blk_aio_zone_mgmt(BlockBackend *blk, 
BlockZoneOp op,
 acb->rwco = (BlkRwCo) {
 .blk= blk,
 .offset = offset,
-.iobuf  = (void *)op,
+.iobuf  = (void *)(uintptr_t)op,
 .ret= NOT_DONE,
 };
 acb->bytes = len;
@@ -1920,7 +1922,7 @@ static void coroutine_fn blk_aio_zone_append_entry(void 
*opaque)
 BlkAioEmAIOCB *acb = opaque;
BlkRwCo *rwco = &acb->rwco;
 
-rwco->ret = blk_co_zone_append(rwco->blk, (int64_t *)acb->bytes,
+rwco->ret = blk_co_zone_append(rwco->blk, (int64_t *)(uintptr_t)acb->bytes,
rwco->iobuf, rwco->flags);
 blk_aio_complete(acb);
 }
@@ 

Re: [PATCH 2/2] hw/net/allwinner-sun8i-emac: Correctly byteswap descriptor fields

2023-04-24 Thread Alex Bennée


Peter Maydell  writes:

> In allwinner-sun8i-emac we just read directly from guest memory into
> a host FrameDescriptor struct and back.  This only works on
> little-endian hosts.  Reading and writing of descriptors is already
> abstracted into functions; make those functions also handle the
> byte-swapping so that TransferDescriptor structs as seen by the rest
> of the code are always in host-order, and fix two places that were
> doing ad-hoc descriptor reading without using the functions.
>
> Cc: qemu-sta...@nongnu.org
> Signed-off-by: Peter Maydell 

Reviewed-by: Alex Bennée 

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



Re: [PATCH 1/2] hw/sd/allwinner-sdhost: Correctly byteswap descriptor fields

2023-04-24 Thread Alex Bennée


Peter Maydell  writes:

> In allwinner_sdhost_process_desc() we just read directly from
> guest memory into a host TransferDescriptor struct and back.
> This only works on little-endian hosts. Abstract the reading
> and writing of descriptors into functions that handle the
> byte-swapping so that TransferDescriptor structs as seen by
> the rest of the code are always in host-order.

I couldn't find any datasheets on the sdhost hardware but the kernel
certainly explicitly sets the endianness of the descriptors so:

Reviewed-by: Alex Bennée 

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



Re: [PATCH 2/2] hw/net/allwinner-sun8i-emac: Correctly byteswap descriptor fields

2023-04-24 Thread Thomas Huth

On 24/04/2023 18.50, Peter Maydell wrote:

In allwinner-sun8i-emac we just read directly from guest memory into
a host FrameDescriptor struct and back.  This only works on
little-endian hosts.  Reading and writing of descriptors is already
abstracted into functions; make those functions also handle the
byte-swapping so that TransferDescriptor structs as seen by the rest
of the code are always in host-order, and fix two places that were
doing ad-hoc descriptor reading without using the functions.

Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Maydell 
---
  hw/net/allwinner-sun8i-emac.c | 22 +++---
  1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/hw/net/allwinner-sun8i-emac.c b/hw/net/allwinner-sun8i-emac.c
index b861d8ff352..fac4405f452 100644
--- a/hw/net/allwinner-sun8i-emac.c
+++ b/hw/net/allwinner-sun8i-emac.c
@@ -350,8 +350,13 @@ static void allwinner_sun8i_emac_get_desc(AwSun8iEmacState 
*s,
FrameDescriptor *desc,
uint32_t phys_addr)
  {
-dma_memory_read(&s->dma_as, phys_addr, desc, sizeof(*desc),
+uint32_t desc_words[4];
+dma_memory_read(&s->dma_as, phys_addr, &desc_words, sizeof(desc_words),
  MEMTXATTRS_UNSPECIFIED);
+desc->status = le32_to_cpu(desc_words[0]);
+desc->status2 = le32_to_cpu(desc_words[1]);
+desc->addr = le32_to_cpu(desc_words[2]);
+desc->next = le32_to_cpu(desc_words[3]);
  }
  
  static uint32_t allwinner_sun8i_emac_next_desc(AwSun8iEmacState *s,

@@ -400,10 +405,15 @@ static uint32_t 
allwinner_sun8i_emac_tx_desc(AwSun8iEmacState *s,
  }
  
  static void allwinner_sun8i_emac_flush_desc(AwSun8iEmacState *s,

-FrameDescriptor *desc,
+const FrameDescriptor *desc,
  uint32_t phys_addr)
  {
-dma_memory_write(&s->dma_as, phys_addr, desc, sizeof(*desc),
+uint32_t desc_words[4];
+desc_words[0] = cpu_to_le32(desc->status);
+desc_words[1] = cpu_to_le32(desc->status2);
+desc_words[2] = cpu_to_le32(desc->addr);
+desc_words[3] = cpu_to_le32(desc->next);
+dma_memory_write(&s->dma_as, phys_addr, &desc_words, sizeof(desc_words),
   MEMTXATTRS_UNSPECIFIED);
  }
  
@@ -638,8 +648,7 @@ static uint64_t allwinner_sun8i_emac_read(void *opaque, hwaddr offset,

  break;
  case REG_TX_CUR_BUF:/* Transmit Current Buffer */
  if (s->tx_desc_curr != 0) {
-dma_memory_read(&s->dma_as, s->tx_desc_curr, &desc, sizeof(desc),
-MEMTXATTRS_UNSPECIFIED);
+allwinner_sun8i_emac_get_desc(s, &desc, s->tx_desc_curr);
  value = desc.addr;
  } else {
  value = 0;
@@ -652,8 +661,7 @@ static uint64_t allwinner_sun8i_emac_read(void *opaque, 
hwaddr offset,
  break;
  case REG_RX_CUR_BUF:/* Receive Current Buffer */
  if (s->rx_desc_curr != 0) {
-dma_memory_read(&s->dma_as, s->rx_desc_curr, &desc, sizeof(desc),
-MEMTXATTRS_UNSPECIFIED);
+allwinner_sun8i_emac_get_desc(s, &desc, s->rx_desc_curr);
  value = desc.addr;
  } else {
  value = 0;


Reviewed-by: Thomas Huth 




Re: [PATCH 1/2] hw/sd/allwinner-sdhost: Correctly byteswap descriptor fields

2023-04-24 Thread Thomas Huth

On 24/04/2023 18.50, Peter Maydell wrote:

In allwinner_sdhost_process_desc() we just read directly from
guest memory into a host TransferDescriptor struct and back.
This only works on little-endian hosts. Abstract the reading
and writing of descriptors into functions that handle the
byte-swapping so that TransferDescriptor structs as seen by
the rest of the code are always in host-order.

This fixes a failure of one of the avocado tests on s390.

Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Maydell 
---
  hw/sd/allwinner-sdhost.c | 31 ++-
  1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/hw/sd/allwinner-sdhost.c b/hw/sd/allwinner-sdhost.c
index 51e5e908307..92a0f42708d 100644
--- a/hw/sd/allwinner-sdhost.c
+++ b/hw/sd/allwinner-sdhost.c
@@ -302,6 +302,30 @@ static void allwinner_sdhost_auto_stop(AwSdHostState *s)
  }
  }
  
+static void read_descriptor(AwSdHostState *s, hwaddr desc_addr,

+TransferDescriptor *desc)
+{
+uint32_t desc_words[4];
dma_memory_read(&s->dma_as, desc_addr, &desc_words, sizeof(desc_words),
+MEMTXATTRS_UNSPECIFIED);
+desc->status = le32_to_cpu(desc_words[0]);
+desc->size = le32_to_cpu(desc_words[1]);
+desc->addr = le32_to_cpu(desc_words[2]);
+desc->next = le32_to_cpu(desc_words[3]);
+}
+
+static void write_descriptor(AwSdHostState *s, hwaddr desc_addr,
+ const TransferDescriptor *desc)
+{
+uint32_t desc_words[4];
+desc_words[0] = cpu_to_le32(desc->status);
+desc_words[1] = cpu_to_le32(desc->size);
+desc_words[2] = cpu_to_le32(desc->addr);
+desc_words[3] = cpu_to_le32(desc->next);
dma_memory_write(&s->dma_as, desc_addr, &desc_words, sizeof(desc_words),
+ MEMTXATTRS_UNSPECIFIED);
+}
+
  static uint32_t allwinner_sdhost_process_desc(AwSdHostState *s,
hwaddr desc_addr,
TransferDescriptor *desc,
@@ -312,9 +336,7 @@ static uint32_t allwinner_sdhost_process_desc(AwSdHostState 
*s,
  uint32_t num_bytes = max_bytes;
  uint8_t buf[1024];
  
-/* Read descriptor */

-dma_memory_read(&s->dma_as, desc_addr, desc, sizeof(*desc),
-MEMTXATTRS_UNSPECIFIED);
+read_descriptor(s, desc_addr, desc);
  if (desc->size == 0) {
  desc->size = klass->max_desc_size;
  } else if (desc->size > klass->max_desc_size) {
@@ -356,8 +378,7 @@ static uint32_t allwinner_sdhost_process_desc(AwSdHostState 
*s,
  
  /* Clear hold flag and flush descriptor */

  desc->status &= ~DESC_STATUS_HOLD;
-dma_memory_write(&s->dma_as, desc_addr, desc, sizeof(*desc),
- MEMTXATTRS_UNSPECIFIED);
+write_descriptor(s, desc_addr, desc);
  
  return num_done;

  }


Reviewed-by: Thomas Huth 




[PATCH 0/2] arm: allwinner: fix endianness bugs in sdhost and sun8i-emac

2023-04-24 Thread Peter Maydell
This patchset fixes bugs in the sd controller and ethernet controller
devices used in the orangepi-pc board model. The bug is the same in
both cases: we read and write a descriptor struct from guest memory
without byte-swapping it, so the code only does the right thing on
a little-endian host.

These fixes (together with some of the others I've sent out earlier
today) are enough to get the BootLinuxConsole.test_arm_orangepi_sd
avocado test passing on an s390x host.

thanks
-- PMM

Peter Maydell (2):
  hw/sd/allwinner-sdhost: Correctly byteswap descriptor fields
  hw/net/allwinner-sun8i-emac: Correctly byteswap descriptor fields

 hw/net/allwinner-sun8i-emac.c | 22 +++---
 hw/sd/allwinner-sdhost.c  | 31 ++-
 2 files changed, 41 insertions(+), 12 deletions(-)

-- 
2.34.1




[PATCH 1/2] hw/sd/allwinner-sdhost: Correctly byteswap descriptor fields

2023-04-24 Thread Peter Maydell
In allwinner_sdhost_process_desc() we just read directly from
guest memory into a host TransferDescriptor struct and back.
This only works on little-endian hosts. Abstract the reading
and writing of descriptors into functions that handle the
byte-swapping so that TransferDescriptor structs as seen by
the rest of the code are always in host-order.

This fixes a failure of one of the avocado tests on s390.

Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Maydell 
---
 hw/sd/allwinner-sdhost.c | 31 ++-
 1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/hw/sd/allwinner-sdhost.c b/hw/sd/allwinner-sdhost.c
index 51e5e908307..92a0f42708d 100644
--- a/hw/sd/allwinner-sdhost.c
+++ b/hw/sd/allwinner-sdhost.c
@@ -302,6 +302,30 @@ static void allwinner_sdhost_auto_stop(AwSdHostState *s)
 }
 }
 
+static void read_descriptor(AwSdHostState *s, hwaddr desc_addr,
+TransferDescriptor *desc)
+{
+uint32_t desc_words[4];
+dma_memory_read(&s->dma_as, desc_addr, &desc_words, sizeof(desc_words),
+MEMTXATTRS_UNSPECIFIED);
+desc->status = le32_to_cpu(desc_words[0]);
+desc->size = le32_to_cpu(desc_words[1]);
+desc->addr = le32_to_cpu(desc_words[2]);
+desc->next = le32_to_cpu(desc_words[3]);
+}
+
+static void write_descriptor(AwSdHostState *s, hwaddr desc_addr,
+ const TransferDescriptor *desc)
+{
+uint32_t desc_words[4];
+desc_words[0] = cpu_to_le32(desc->status);
+desc_words[1] = cpu_to_le32(desc->size);
+desc_words[2] = cpu_to_le32(desc->addr);
+desc_words[3] = cpu_to_le32(desc->next);
+dma_memory_write(&s->dma_as, desc_addr, &desc_words, sizeof(desc_words),
+ MEMTXATTRS_UNSPECIFIED);
+}
+
 static uint32_t allwinner_sdhost_process_desc(AwSdHostState *s,
   hwaddr desc_addr,
   TransferDescriptor *desc,
@@ -312,9 +336,7 @@ static uint32_t allwinner_sdhost_process_desc(AwSdHostState 
*s,
 uint32_t num_bytes = max_bytes;
 uint8_t buf[1024];
 
-/* Read descriptor */
-dma_memory_read(&s->dma_as, desc_addr, desc, sizeof(*desc),
-MEMTXATTRS_UNSPECIFIED);
+read_descriptor(s, desc_addr, desc);
 if (desc->size == 0) {
 desc->size = klass->max_desc_size;
 } else if (desc->size > klass->max_desc_size) {
@@ -356,8 +378,7 @@ static uint32_t allwinner_sdhost_process_desc(AwSdHostState 
*s,
 
 /* Clear hold flag and flush descriptor */
 desc->status &= ~DESC_STATUS_HOLD;
-dma_memory_write(&s->dma_as, desc_addr, desc, sizeof(*desc),
- MEMTXATTRS_UNSPECIFIED);
+write_descriptor(s, desc_addr, desc);
 
 return num_done;
 }
-- 
2.34.1




[PATCH 2/2] hw/net/allwinner-sun8i-emac: Correctly byteswap descriptor fields

2023-04-24 Thread Peter Maydell
In allwinner-sun8i-emac we just read directly from guest memory into
a host FrameDescriptor struct and back.  This only works on
little-endian hosts.  Reading and writing of descriptors is already
abstracted into functions; make those functions also handle the
byte-swapping so that FrameDescriptor structs as seen by the rest
of the code are always in host-order, and fix two places that were
doing ad-hoc descriptor reading without using the functions.

Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Maydell 
---
 hw/net/allwinner-sun8i-emac.c | 22 +++---
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/hw/net/allwinner-sun8i-emac.c b/hw/net/allwinner-sun8i-emac.c
index b861d8ff352..fac4405f452 100644
--- a/hw/net/allwinner-sun8i-emac.c
+++ b/hw/net/allwinner-sun8i-emac.c
@@ -350,8 +350,13 @@ static void allwinner_sun8i_emac_get_desc(AwSun8iEmacState 
*s,
   FrameDescriptor *desc,
   uint32_t phys_addr)
 {
-dma_memory_read(&s->dma_as, phys_addr, desc, sizeof(*desc),
+uint32_t desc_words[4];
+dma_memory_read(&s->dma_as, phys_addr, &desc_words, sizeof(desc_words),
 MEMTXATTRS_UNSPECIFIED);
+desc->status = le32_to_cpu(desc_words[0]);
+desc->status2 = le32_to_cpu(desc_words[1]);
+desc->addr = le32_to_cpu(desc_words[2]);
+desc->next = le32_to_cpu(desc_words[3]);
 }
 
 static uint32_t allwinner_sun8i_emac_next_desc(AwSun8iEmacState *s,
@@ -400,10 +405,15 @@ static uint32_t 
allwinner_sun8i_emac_tx_desc(AwSun8iEmacState *s,
 }
 
 static void allwinner_sun8i_emac_flush_desc(AwSun8iEmacState *s,
-FrameDescriptor *desc,
+const FrameDescriptor *desc,
 uint32_t phys_addr)
 {
-dma_memory_write(&s->dma_as, phys_addr, desc, sizeof(*desc),
+uint32_t desc_words[4];
+desc_words[0] = cpu_to_le32(desc->status);
+desc_words[1] = cpu_to_le32(desc->status2);
+desc_words[2] = cpu_to_le32(desc->addr);
+desc_words[3] = cpu_to_le32(desc->next);
+dma_memory_write(&s->dma_as, phys_addr, &desc_words, sizeof(desc_words),
  MEMTXATTRS_UNSPECIFIED);
 }
 
@@ -638,8 +648,7 @@ static uint64_t allwinner_sun8i_emac_read(void *opaque, 
hwaddr offset,
 break;
 case REG_TX_CUR_BUF:/* Transmit Current Buffer */
 if (s->tx_desc_curr != 0) {
-dma_memory_read(&s->dma_as, s->tx_desc_curr, &desc, sizeof(desc),
-MEMTXATTRS_UNSPECIFIED);
+allwinner_sun8i_emac_get_desc(s, &desc, s->tx_desc_curr);
 value = desc.addr;
 } else {
 value = 0;
@@ -652,8 +661,7 @@ static uint64_t allwinner_sun8i_emac_read(void *opaque, 
hwaddr offset,
 break;
 case REG_RX_CUR_BUF:/* Receive Current Buffer */
 if (s->rx_desc_curr != 0) {
-dma_memory_read(&s->dma_as, s->rx_desc_curr, &desc, sizeof(desc),
-MEMTXATTRS_UNSPECIFIED);
+allwinner_sun8i_emac_get_desc(s, &desc, s->rx_desc_curr);
 value = desc.addr;
 } else {
 value = 0;
-- 
2.34.1




Re: [PULL 00/20] Block patches

2023-04-24 Thread Stefan Hajnoczi
On Fri, Apr 21, 2023 at 08:01:56PM +0100, Richard Henderson wrote:
> On 4/20/23 13:09, Stefan Hajnoczi wrote:
> > The following changes since commit c1eb2ddf0f8075faddc5f7c3d39feae3e8e9d6b4:
> > 
> >Update version for v8.0.0 release (2023-04-19 17:27:13 +0100)
> > 
> > are available in the Git repository at:
> > 
> >https://gitlab.com/stefanha/qemu.git tags/block-pull-request
> > 
> > for you to fetch changes up to 36e5e9b22abe56aa00ca067851555ad8127a7966:
> > 
> >tracing: install trace events file only if necessary (2023-04-20 
> > 07:39:43 -0400)
> > 
> > 
> > Pull request
> > 
> > Sam Li's zoned storage work and fixes I collected during the 8.0 freeze.
> > 
> > 
> > 
> > Carlos Santos (1):
> >tracing: install trace events file only if necessary
> > 
> > Philippe Mathieu-Daudé (1):
> >block/dmg: Declare a type definition for DMG uncompress function
> > 
> > Sam Li (17):
> >block/block-common: add zoned device structs
> >block/file-posix: introduce helper functions for sysfs attributes
> >block/block-backend: add block layer APIs resembling Linux
> >  ZonedBlockDevice ioctls
> >block/raw-format: add zone operations to pass through requests
> >block: add zoned BlockDriver check to block layer
> >iotests: test new zone operations
> >block: add some trace events for new block layer APIs
> >docs/zoned-storage: add zoned device documentation
> >file-posix: add tracking of the zone write pointers
> >block: introduce zone append write for zoned devices
> >qemu-iotests: test zone append operation
> >block: add some trace events for zone append
> >include: update virtio_blk headers to v6.3-rc1
> >virtio-blk: add zoned storage emulation for zoned devices
> >block: add accounting for zone append operation
> >virtio-blk: add some trace events for zoned emulation
> >docs/zoned-storage:add zoned emulation use case
> > 
> > Thomas De Schampheleire (1):
> >tracetool: use relative paths for '#line' preprocessor directives
> 
> 32 failed CI jobs:
> https://gitlab.com/qemu-project/qemu/-/pipelines/844927626/failures

Thanks for letting me know. I will resend without the zoned storage
patches that are failing CI.

Stefan


signature.asc
Description: PGP signature


Re: [PATCH] hw/intc/allwinner-a10-pic: Don't use set_bit()/clear_bit()

2023-04-24 Thread Thomas Huth

On 24/04/2023 17.28, Peter Maydell wrote:

The Allwinner PIC model uses set_bit() and clear_bit() to update the
values in its irq_pending[] array when an interrupt arrives.  However
it is using these functions wrongly: they work on an array of type
'long', and it is passing an array of type 'uint32_t'.  Because the
code manually figures out the right array element, this works on
little-endian hosts and on 32-bit big-endian hosts, where bits 0..31
in a 'long' are in the same place as they are in a 'uint32_t'.
However it breaks on 64-bit big-endian hosts.

Remove the use of set_bit() and clear_bit() in favour of using
deposit32() on the array element.  This fixes a bug where on
big-endian 64-bit hosts the guest kernel would hang early on in
bootup.

Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Maydell 
---
  hw/intc/allwinner-a10-pic.c | 7 ++-
  1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/hw/intc/allwinner-a10-pic.c b/hw/intc/allwinner-a10-pic.c
index 8cca1248073..4875e68ba6a 100644
--- a/hw/intc/allwinner-a10-pic.c
+++ b/hw/intc/allwinner-a10-pic.c
@@ -49,12 +49,9 @@ static void aw_a10_pic_update(AwA10PICState *s)
  static void aw_a10_pic_set_irq(void *opaque, int irq, int level)
  {
  AwA10PICState *s = opaque;
+uint32_t *pending_reg = &s->irq_pending[irq / 32];
  
-if (level) {
-set_bit(irq % 32, (void *)&s->irq_pending[irq / 32]);
-} else {
-clear_bit(irq % 32, (void *)&s->irq_pending[irq / 32]);
-}
+*pending_reg = deposit32(*pending_reg, irq % 32, 1, level);
  aw_a10_pic_update(s);
  }
  


Reviewed-by: Thomas Huth 




[PATCH v3 0/7] Add EPYC-Genoa model and update previous EPYC Models

2023-04-24 Thread Babu Moger
This series updates the AMD EPYC models and adds new EPYC-Genoa model.
Sent out v1 and v2 earlier, but those changes are not merged yet. Now adding
EPYC-Genoa on top of that.

Here are the features.
a. Allow versioned CPUs to specify new cache_info pointers.
b. Add EPYC-v4, EPYC-Rome-v3 and EPYC-Milan-v2 fixing the
   cache_info.complex_indexing.
c. Introduce EPYC-Milan-v2 by adding a few missing feature bits.
d. Add CPU model for AMD EPYC Genoa processor series

This series depends on the following recent kernel commits:
8c19b6f257fa ("KVM: x86: Propagate the AMD Automatic IBRS feature to the guest")
e7862eda309e ("x86/cpu: Support AMD Automatic IBRS")
5b909d4ae59a ("x86/cpu, kvm: Add the Null Selector Clears Base feature")
a9dc9ec5a1fa ("x86/cpu, kvm: Add the NO_NESTED_DATA_BP feature")
0977cfac6e76 ("KVM: nSVM: Implement support for nested VNMI")
fa4c027a7956 ("KVM: x86: Add support for SVM's Virtual NMI")
---
v3:
  Refreshed the patches on top of latest master.
  Add CPU model for AMD EPYC Genoa processor series (zen4)
  
v2:
  Refreshed the patches on top of latest master.
  Changed the feature NULL_SELECT_CLEARS_BASE to NULL_SEL_CLR_BASE to
  match the kernel name.
  https://lore.kernel.org/kvm/20221205233235.622491-3-kim.phill...@amd.com/

v1: 
https://lore.kernel.org/kvm/167001034454.62456.7111414518087569436.stgit@bmoger-ubuntu/
v2: https://lore.kernel.org/kvm/20230106185700.28744-1-babu.mo...@amd.com/

Babu Moger (5):
  target/i386: Add a couple of feature bits in  8000_0008_EBX
  target/i386: Add feature bits for CPUID_Fn8021_EAX
  target/i386: Add missing feature bits in EPYC-Milan model
  target/i386: Add VNMI and automatic IBRS feature bits
  target/i386: Add EPYC-Genoa model to support Zen 4 processor series

Michael Roth (2):
  target/i386: allow versioned CPUs to specify new  cache_info
  target/i386: Add new EPYC CPU versions with updated  cache_info

 target/i386/cpu.c | 376 +-
 target/i386/cpu.h |  15 ++
 2 files changed, 385 insertions(+), 6 deletions(-)

-- 
2.34.1




Re: [PULL 00/30] Migration 20230424 patches

2023-04-24 Thread Richard Henderson

On 4/24/23 14:27, Juan Quintela wrote:

The following changes since commit 81072abf1575b11226b3779af76dc71dfa85ee5d:

   Merge tag 'migration-20230420-pull-request' of
https://gitlab.com/juan.quintela/qemu into staging (2023-04-24 12:06:17
+0100)

are available in the Git repository at:

   https://gitlab.com/juan.quintela/qemu.git  
tags/migration-20230424-pull-request

for you to fetch changes up to 9c894df3a37d675652390f7dbbe2f65b7bad7efa:

   migration: Create migrate_max_bandwidth() function (2023-04-24 15:01:47 
+0200)


Migration Pull request

Everything that was reviewed since last PULL request:
- fix to control flow (eric)
- rearrange of hmp commands (juan)
- Make capabilities more consistent and coherent (juan)
   Not all of them reviewed yet, so only the ones reviewed.

Later, Juan.

PD.  I am waiting to finish review of the compression fixes to send
them.




Applied, thanks.  Please update https://wiki.qemu.org/ChangeLog/8.1 as 
appropriate.


r~




[PATCH v3 4/7] target/i386: Add feature bits for CPUID_Fn80000021_EAX

2023-04-24 Thread Babu Moger
Add the following feature bits.
no-nested-data-bp : Processor ignores nested data breakpoints.
lfence-always-serializing : LFENCE instruction is always serializing.
null-sel-clr-base : Null Selector Clears Base. When this bit is
set, a null segment load clears the segment base.

The documentation for the features are available in the links below.
a. Processor Programming Reference (PPR) for AMD Family 19h Model 01h,
   Revision B1 Processors
b. AMD64 Architecture Programmer’s Manual, Volumes 1–5, Publication No. 40332,
   Revision 4.05, October 2022

Signed-off-by: Babu Moger 
Acked-by: Michael S. Tsirkin 
Link: https://www.amd.com/system/files/TechDocs/55898_B1_pub_0.50.zip
Link: https://www.amd.com/system/files/TechDocs/40332_4.05.pdf
---
 target/i386/cpu.c | 24 
 target/i386/cpu.h |  8 
 2 files changed, 32 insertions(+)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 64a1fdd6ca..d584a9488b 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -920,6 +920,22 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
 .tcg_features = 0,
 .unmigratable_flags = 0,
 },
+[FEAT_8000_0021_EAX] = {
+.type = CPUID_FEATURE_WORD,
+.feat_names = {
+"no-nested-data-bp", NULL, "lfence-always-serializing", NULL,
+NULL, NULL, "null-sel-clr-base", NULL,
+NULL, NULL, NULL, NULL,
+NULL, NULL, NULL, NULL,
+NULL, NULL, NULL, NULL,
+NULL, NULL, NULL, NULL,
+NULL, NULL, NULL, NULL,
+NULL, NULL, NULL, NULL,
+},
+.cpuid = { .eax = 0x80000021, .reg = R_EAX, },
+.tcg_features = 0,
+.unmigratable_flags = 0,
+},
 [FEAT_XSAVE] = {
 .type = CPUID_FEATURE_WORD,
 .feat_names = {
@@ -6136,6 +6152,10 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, 
uint32_t count,
 *ebx |= sev_get_reduced_phys_bits() << 6;
 }
 break;
+case 0x80000021:
+*eax = env->features[FEAT_8000_0021_EAX];
+*ebx = *ecx = *edx = 0;
+break;
 default:
 /* reserved values: zero */
 *eax = 0;
@@ -6565,6 +6585,10 @@ void x86_cpu_expand_features(X86CPU *cpu, Error **errp)
x86_cpu_adjust_level(cpu, &env->cpuid_min_xlevel, 0x8000001F);
 }
 
+if (env->features[FEAT_8000_0021_EAX]) {
+x86_cpu_adjust_level(cpu, &env->cpuid_min_xlevel, 0x80000021);
+}
+
 /* SGX requires CPUID[0x12] for EPC enumeration */
 if (env->features[FEAT_7_0_EBX] & CPUID_7_0_EBX_SGX) {
x86_cpu_adjust_level(cpu, &env->cpuid_min_level, 0x12);
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 14645e3cb8..7cf811d8fe 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -600,6 +600,7 @@ typedef enum FeatureWord {
 FEAT_8000_0001_ECX, /* CPUID[8000_0001].ECX */
 FEAT_8000_0007_EDX, /* CPUID[8000_0007].EDX */
 FEAT_8000_0008_EBX, /* CPUID[8000_0008].EBX */
+FEAT_8000_0021_EAX, /* CPUID[8000_0021].EAX */
 FEAT_C000_0001_EDX, /* CPUID[C000_0001].EDX */
 FEAT_KVM,   /* CPUID[4000_0001].EAX (KVM_CPUID_FEATURES) */
 FEAT_KVM_HINTS, /* CPUID[4000_0001].EDX */
@@ -939,6 +940,13 @@ uint64_t x86_cpu_get_supported_feature_word(FeatureWord w,
 /* Predictive Store Forwarding Disable */
#define CPUID_8000_0008_EBX_AMD_PSFD  (1U << 28)
 
+/* Processor ignores nested data breakpoints */
+#define CPUID_8000_0021_EAX_No_NESTED_DATA_BP  (1U << 0)
+/* LFENCE is always serializing */
+#define CPUID_8000_0021_EAX_LFENCE_ALWAYS_SERIALIZING  (1U << 2)
+/* Null Selector Clears Base */
+#define CPUID_8000_0021_EAX_NULL_SEL_CLR_BASE  (1U << 6)
+
 #define CPUID_XSAVE_XSAVEOPT   (1U << 0)
 #define CPUID_XSAVE_XSAVEC (1U << 1)
 #define CPUID_XSAVE_XGETBV1(1U << 2)
-- 
2.34.1




[PATCH v3 6/7] target/i386: Add VNMI and automatic IBRS feature bits

2023-04-24 Thread Babu Moger
Add the following feature bits.

vnmi: Virtual NMI (VNMI) allows the hypervisor to inject the NMI into the
  guest without using the Event Injection mechanism, so it is not
  required to track the guest NMI or intercept the IRET.
  The presence of this feature is indicated via the CPUID function
  0x8000000A_EDX[25].


automatic-ibrs :
  The AMD Zen4 core supports a new feature called Automatic IBRS.
  It is a "set-and-forget" feature: unlike, e.g., software-toggled
  SPEC_CTRL.IBRS, hardware manages its IBRS mitigation resources
  automatically across CPL transitions.
  The presence of this feature is indicated via the CPUID function
  0x80000021_EAX[8].

The documentation for the features is available in the links below.
a. Processor Programming Reference (PPR) for AMD Family 19h Model 01h,
   Revision B1 Processors
b. AMD64 Architecture Programmer’s Manual, Volumes 1–5, Publication No. 40332,
   Revision 4.05, October 2022

Signed-off-by: Santosh Shukla 
Signed-off-by: Kim Phillips 
Signed-off-by: Babu Moger 
Link: https://www.amd.com/system/files/TechDocs/55898_B1_pub_0.50.zip
Link: https://www.amd.com/system/files/TechDocs/40332_4.05.pdf
---
 target/i386/cpu.c | 4 ++--
 target/i386/cpu.h | 3 +++
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 7fcdd33467..ce26e955d8 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -806,7 +806,7 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
 "pfthreshold", "avic", NULL, "v-vmsave-vmload",
 "vgif", NULL, NULL, NULL,
 NULL, NULL, NULL, NULL,
-NULL, NULL, NULL, NULL,
+NULL, "vnmi", NULL, NULL,
 "svme-addr-chk", NULL, NULL, NULL,
 },
.cpuid = { .eax = 0x8000000A, .reg = R_EDX, },
@@ -925,7 +925,7 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
 .feat_names = {
 "no-nested-data-bp", NULL, "lfence-always-serializing", NULL,
 NULL, NULL, "null-sel-clr-base", NULL,
-NULL, NULL, NULL, NULL,
+"auto-ibrs", NULL, NULL, NULL,
 NULL, NULL, NULL, NULL,
 NULL, NULL, NULL, NULL,
 NULL, NULL, NULL, NULL,
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 7cf811d8fe..f6575f1f01 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -773,6 +773,7 @@ uint64_t x86_cpu_get_supported_feature_word(FeatureWord w,
#define CPUID_SVM_AVIC  (1U << 13)
#define CPUID_SVM_V_VMSAVE_VMLOAD (1U << 15)
#define CPUID_SVM_VGIF  (1U << 16)
+#define CPUID_SVM_VNMI  (1U << 25)
 #define CPUID_SVM_SVME_ADDR_CHK   (1U << 28)
 
 /* Support RDFSBASE/RDGSBASE/WRFSBASE/WRGSBASE */
@@ -946,6 +947,8 @@ uint64_t x86_cpu_get_supported_feature_word(FeatureWord w,
#define CPUID_8000_0021_EAX_LFENCE_ALWAYS_SERIALIZING  (1U << 2)
/* Null Selector Clears Base */
#define CPUID_8000_0021_EAX_NULL_SEL_CLR_BASE  (1U << 6)
+/* Automatic IBRS */
+#define CPUID_8000_0021_EAX_AUTO_IBRS   (1U << 8)
 
 #define CPUID_XSAVE_XSAVEOPT   (1U << 0)
 #define CPUID_XSAVE_XSAVEC (1U << 1)
-- 
2.34.1




[PATCH v3 5/7] target/i386: Add missing feature bits in EPYC-Milan model

2023-04-24 Thread Babu Moger
Add the following feature bits for the EPYC-Milan model and bump the version.
vaes: Vector VAES(ENC|DEC), VAES(ENC|DEC)LAST instruction support
vpclmulqdq  : Vector VPCLMULQDQ instruction support
stibp-always-on : Single Thread Indirect Branch Prediction Mode has enhanced
  performance and may be left Always on
amd-psfd: Predictive Store Forward Disable
no-nested-data-bp : Processor ignores nested data breakpoints
lfence-always-serializing : LFENCE instruction is always serializing
null-sel-clr-base : Null Selector Clears Base. When this bit is
set, a null segment load clears the segment base

These new features will be added in EPYC-Milan-v2. The -cpu help output
after the change.

x86 EPYC-Milan (alias configured by machine type)
x86 EPYC-Milan-v1  AMD EPYC-Milan Processor
x86 EPYC-Milan-v2  AMD EPYC-Milan Processor

The documentation for the features are available in the links below.
a. Processor Programming Reference (PPR) for AMD Family 19h Model 01h,
   Revision B1 Processors
b. SECURITY ANALYSIS OF AMD PREDICTIVE STORE FORWARDING
c. AMD64 Architecture Programmer’s Manual, Volumes 1–5, Publication No. 40332,
   Revision 4.05, October 2022

Signed-off-by: Babu Moger 
Acked-by: Michael S. Tsirkin 
Link: https://www.amd.com/system/files/TechDocs/55898_B1_pub_0.50.zip
Link: 
https://www.amd.com/system/files/documents/security-analysis-predictive-store-forwarding.pdf
Link: https://www.amd.com/system/files/TechDocs/40332_4.05.pdf
---
 target/i386/cpu.c | 70 +++
 1 file changed, 70 insertions(+)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index d584a9488b..7fcdd33467 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1923,6 +1923,56 @@ static const CPUCaches epyc_milan_cache_info = {
 },
 };
 
+static const CPUCaches epyc_milan_v2_cache_info = {
+.l1d_cache = &(CPUCacheInfo) {
+.type = DATA_CACHE,
+.level = 1,
+.size = 32 * KiB,
+.line_size = 64,
+.associativity = 8,
+.partitions = 1,
+.sets = 64,
+.lines_per_tag = 1,
+.self_init = 1,
+.no_invd_sharing = true,
+},
+.l1i_cache = &(CPUCacheInfo) {
+.type = INSTRUCTION_CACHE,
+.level = 1,
+.size = 32 * KiB,
+.line_size = 64,
+.associativity = 8,
+.partitions = 1,
+.sets = 64,
+.lines_per_tag = 1,
+.self_init = 1,
+.no_invd_sharing = true,
+},
+.l2_cache = &(CPUCacheInfo) {
+.type = UNIFIED_CACHE,
+.level = 2,
+.size = 512 * KiB,
+.line_size = 64,
+.associativity = 8,
+.partitions = 1,
+.sets = 1024,
+.lines_per_tag = 1,
+},
+.l3_cache = &(CPUCacheInfo) {
+.type = UNIFIED_CACHE,
+.level = 3,
+.size = 32 * MiB,
+.line_size = 64,
+.associativity = 16,
+.partitions = 1,
+.sets = 32768,
+.lines_per_tag = 1,
+.self_init = true,
+.inclusive = true,
+.complex_indexing = false,
+},
+};
+
 /* The following VMX features are not supported by KVM and are left out in the
  * CPU definitions:
  *
@@ -4401,6 +4451,26 @@ static const X86CPUDefinition builtin_x86_defs[] = {
.xlevel = 0x8000001E,
 .model_id = "AMD EPYC-Milan Processor",
.cache_info = &epyc_milan_cache_info,
+.versions = (X86CPUVersionDefinition[]) {
+{ .version = 1 },
+{
+.version = 2,
+.props = (PropValue[]) {
+{ "model-id",
+  "AMD EPYC-Milan-v2 Processor" },
+{ "vaes", "on" },
+{ "vpclmulqdq", "on" },
+{ "stibp-always-on", "on" },
+{ "amd-psfd", "on" },
+{ "no-nested-data-bp", "on" },
+{ "lfence-always-serializing", "on" },
+{ "null-sel-clr-base", "on" },
+{ /* end of list */ }
+},
.cache_info = &epyc_milan_v2_cache_info
+},
+{ /* end of list */ }
+}
 },
 };
 
-- 
2.34.1




[PATCH v3 2/7] target/i386: Add new EPYC CPU versions with updated cache_info

2023-04-24 Thread Babu Moger
From: Michael Roth 

Introduce new EPYC CPU versions: EPYC-v4 and EPYC-Rome-v3.
The only difference vs. older models is an updated cache_info with
the 'complex_indexing' bit unset, since this bit is not currently
defined for AMD and may cause problems should it be used for
something else in the future. Setting this bit will also cause
CPUID validation failures when running SEV-SNP guests.

Signed-off-by: Michael Roth 
Signed-off-by: Babu Moger 
Acked-by: Michael S. Tsirkin 
---
 target/i386/cpu.c | 118 ++
 1 file changed, 118 insertions(+)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index e3d9eaa307..c1bc47661d 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1707,6 +1707,56 @@ static const CPUCaches epyc_cache_info = {
 },
 };
 
+static CPUCaches epyc_v4_cache_info = {
+.l1d_cache = &(CPUCacheInfo) {
+.type = DATA_CACHE,
+.level = 1,
+.size = 32 * KiB,
+.line_size = 64,
+.associativity = 8,
+.partitions = 1,
+.sets = 64,
+.lines_per_tag = 1,
+.self_init = 1,
+.no_invd_sharing = true,
+},
+.l1i_cache = &(CPUCacheInfo) {
+.type = INSTRUCTION_CACHE,
+.level = 1,
+.size = 64 * KiB,
+.line_size = 64,
+.associativity = 4,
+.partitions = 1,
+.sets = 256,
+.lines_per_tag = 1,
+.self_init = 1,
+.no_invd_sharing = true,
+},
+.l2_cache = &(CPUCacheInfo) {
+.type = UNIFIED_CACHE,
+.level = 2,
+.size = 512 * KiB,
+.line_size = 64,
+.associativity = 8,
+.partitions = 1,
+.sets = 1024,
+.lines_per_tag = 1,
+},
+.l3_cache = &(CPUCacheInfo) {
+.type = UNIFIED_CACHE,
+.level = 3,
+.size = 8 * MiB,
+.line_size = 64,
+.associativity = 16,
+.partitions = 1,
+.sets = 8192,
+.lines_per_tag = 1,
+.self_init = true,
+.inclusive = true,
+.complex_indexing = false,
+},
+};
+
 static const CPUCaches epyc_rome_cache_info = {
 .l1d_cache = &(CPUCacheInfo) {
 .type = DATA_CACHE,
@@ -1757,6 +1807,56 @@ static const CPUCaches epyc_rome_cache_info = {
 },
 };
 
+static const CPUCaches epyc_rome_v3_cache_info = {
+.l1d_cache = &(CPUCacheInfo) {
+.type = DATA_CACHE,
+.level = 1,
+.size = 32 * KiB,
+.line_size = 64,
+.associativity = 8,
+.partitions = 1,
+.sets = 64,
+.lines_per_tag = 1,
+.self_init = 1,
+.no_invd_sharing = true,
+},
+.l1i_cache = &(CPUCacheInfo) {
+.type = INSTRUCTION_CACHE,
+.level = 1,
+.size = 32 * KiB,
+.line_size = 64,
+.associativity = 8,
+.partitions = 1,
+.sets = 64,
+.lines_per_tag = 1,
+.self_init = 1,
+.no_invd_sharing = true,
+},
+.l2_cache = &(CPUCacheInfo) {
+.type = UNIFIED_CACHE,
+.level = 2,
+.size = 512 * KiB,
+.line_size = 64,
+.associativity = 8,
+.partitions = 1,
+.sets = 1024,
+.lines_per_tag = 1,
+},
+.l3_cache = &(CPUCacheInfo) {
+.type = UNIFIED_CACHE,
+.level = 3,
+.size = 16 * MiB,
+.line_size = 64,
+.associativity = 16,
+.partitions = 1,
+.sets = 16384,
+.lines_per_tag = 1,
+.self_init = true,
+.inclusive = true,
+.complex_indexing = false,
+},
+};
+
 static const CPUCaches epyc_milan_cache_info = {
 .l1d_cache = &(CPUCacheInfo) {
 .type = DATA_CACHE,
@@ -4091,6 +4191,15 @@ static const X86CPUDefinition builtin_x86_defs[] = {
 { /* end of list */ }
 }
 },
+{
+.version = 4,
+.props = (PropValue[]) {
+{ "model-id",
+  "AMD EPYC-v4 Processor" },
+{ /* end of list */ }
+},
.cache_info = &epyc_v4_cache_info
+},
 { /* end of list */ }
 }
 },
@@ -4210,6 +4319,15 @@ static const X86CPUDefinition builtin_x86_defs[] = {
 { /* end of list */ }
 }
 },
+{
+.version = 3,
+.props = (PropValue[]) {
+{ "model-id",
+  "AMD EPYC-Rome-v3 Processor" },
+{ /* end of list */ }
+},
.cache_info = &epyc_rome_v3_cache_info
+},
 { /* end of list */ }
 }
 },
-- 
2.34.1




[PATCH v3 7/7] target/i386: Add EPYC-Genoa model to support Zen 4 processor series

2023-04-24 Thread Babu Moger
Adds support for AMD EPYC Genoa generation processors. The model
display for the new processor will be EPYC-Genoa.

Adds the following new feature bits on top of the feature bits from
the previous generation EPYC models.

avx512f : AVX-512 Foundation instruction
avx512dq: AVX-512 Doubleword & Quadword Instruction
avx512ifma  : AVX-512 Integer Fused Multiply Add instruction
avx512cd: AVX-512 Conflict Detection instruction
avx512bw: AVX-512 Byte and Word Instructions
avx512vl: AVX-512 Vector Length Extension Instructions
avx512vbmi  : AVX-512 Vector Byte Manipulation Instruction
avx512_vbmi2: AVX-512 Additional Vector Byte Manipulation Instruction
gfni: AVX-512 Galois Field New Instructions
avx512_vnni : AVX-512 Vector Neural Network Instructions
avx512_bitalg   : AVX-512 Bit Algorithms, add bit algorithms Instructions
avx512_vpopcntdq: AVX-512 Vector Population Count Doubleword and
  Quadword Instructions
avx512_bf16 : AVX-512 BFLOAT16 instructions
la57: 57-bit virtual address support (5-level Page Tables)
vnmi: Virtual NMI (VNMI) allows the hypervisor to inject the NMI
  into the guest without using the Event Injection mechanism,
  so it is not required to track the guest NMI or intercept
  the IRET.
auto-ibrs   : The AMD Zen4 core supports a new feature called Automatic IBRS.
  It is a "set-and-forget" feature: unlike, e.g., software-toggled
  SPEC_CTRL.IBRS, hardware manages its IBRS mitigation resources
  automatically across CPL transitions.

Signed-off-by: Babu Moger 
---
 target/i386/cpu.c | 122 ++
 1 file changed, 122 insertions(+)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index ce26e955d8..0b0ff4a0c0 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1973,6 +1973,56 @@ static const CPUCaches epyc_milan_v2_cache_info = {
 },
 };
 
+static const CPUCaches epyc_genoa_cache_info = {
+.l1d_cache = &(CPUCacheInfo) {
+.type = DATA_CACHE,
+.level = 1,
+.size = 32 * KiB,
+.line_size = 64,
+.associativity = 8,
+.partitions = 1,
+.sets = 64,
+.lines_per_tag = 1,
+.self_init = 1,
+.no_invd_sharing = true,
+},
+.l1i_cache = &(CPUCacheInfo) {
+.type = INSTRUCTION_CACHE,
+.level = 1,
+.size = 32 * KiB,
+.line_size = 64,
+.associativity = 8,
+.partitions = 1,
+.sets = 64,
+.lines_per_tag = 1,
+.self_init = 1,
+.no_invd_sharing = true,
+},
+.l2_cache = &(CPUCacheInfo) {
+.type = UNIFIED_CACHE,
+.level = 2,
+.size = 1 * MiB,
+.line_size = 64,
+.associativity = 8,
+.partitions = 1,
+.sets = 2048,
+.lines_per_tag = 1,
+},
+.l3_cache = &(CPUCacheInfo) {
+.type = UNIFIED_CACHE,
+.level = 3,
+.size = 32 * MiB,
+.line_size = 64,
+.associativity = 16,
+.partitions = 1,
+.sets = 32768,
+.lines_per_tag = 1,
+.self_init = true,
+.inclusive = true,
+.complex_indexing = false,
+},
+};
+
 /* The following VMX features are not supported by KVM and are left out in the
  * CPU definitions:
  *
@@ -4472,6 +4522,78 @@ static const X86CPUDefinition builtin_x86_defs[] = {
 { /* end of list */ }
 }
 },
+{
+.name = "EPYC-Genoa",
+.level = 0xd,
+.vendor = CPUID_VENDOR_AMD,
+.family = 25,
+.model = 17,
+.stepping = 0,
+.features[FEAT_1_EDX] =
+CPUID_SSE2 | CPUID_SSE | CPUID_FXSR | CPUID_MMX | CPUID_CLFLUSH |
+CPUID_PSE36 | CPUID_PAT | CPUID_CMOV | CPUID_MCA | CPUID_PGE |
+CPUID_MTRR | CPUID_SEP | CPUID_APIC | CPUID_CX8 | CPUID_MCE |
+CPUID_PAE | CPUID_MSR | CPUID_TSC | CPUID_PSE | CPUID_DE |
+CPUID_VME | CPUID_FP87,
+.features[FEAT_1_ECX] =
+CPUID_EXT_RDRAND | CPUID_EXT_F16C | CPUID_EXT_AVX |
+CPUID_EXT_XSAVE | CPUID_EXT_AES |  CPUID_EXT_POPCNT |
+CPUID_EXT_MOVBE | CPUID_EXT_SSE42 | CPUID_EXT_SSE41 |
+CPUID_EXT_PCID | CPUID_EXT_CX16 | CPUID_EXT_FMA |
+CPUID_EXT_SSSE3 | CPUID_EXT_MONITOR | CPUID_EXT_PCLMULQDQ |
+CPUID_EXT_SSE3,
+.features[FEAT_8000_0001_EDX] =
+CPUID_EXT2_LM | CPUID_EXT2_RDTSCP | CPUID_EXT2_PDPE1GB |
+CPUID_EXT2_FFXSR | CPUID_EXT2_MMXEXT | CPUID_EXT2_NX |
+CPUID_EXT2_SYSCALL,
+.features[FEAT_8000_0001_ECX] =
+CPUID_EXT3_OSVW | CPUID_EXT3_3DNOWPREFETCH |
+CPUID_EXT3_MISALIGNSSE | CPUID_EXT3_SSE4A | CPUID_EXT3_ABM |
+CPUID_EXT3_CR8LEG | CPUID_EXT3_SVM | CPUID_EXT3_LAHF_LM |
+

[PATCH v3 1/7] target/i386: allow versioned CPUs to specify new cache_info

2023-04-24 Thread Babu Moger
From: Michael Roth 

New EPYC CPUs versions require small changes to their cache_info's.
Because current QEMU x86 CPU definition does not support cache
versions, we would have to declare a new CPU type for each such case.
To avoid this duplication, the patch allows new cache_info pointers
to be specified for a new CPU version.

Co-developed-by: Wei Huang 
Signed-off-by: Wei Huang 
Signed-off-by: Michael Roth 
Signed-off-by: Babu Moger 
Acked-by: Michael S. Tsirkin 
---
 target/i386/cpu.c | 36 +---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 6576287e5b..e3d9eaa307 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1598,6 +1598,7 @@ typedef struct X86CPUVersionDefinition {
 const char *alias;
 const char *note;
 PropValue *props;
+const CPUCaches *const cache_info;
 } X86CPUVersionDefinition;
 
 /* Base definition for a CPU model */
@@ -5192,6 +5193,32 @@ static void x86_cpu_apply_version_props(X86CPU *cpu, 
X86CPUModel *model)
 assert(vdef->version == version);
 }
 
+/* Look up the cache info for the CPU model version specified in model */
+static const CPUCaches *x86_cpu_get_version_cache_info(X86CPU *cpu,
+   X86CPUModel *model)
+{
+const X86CPUVersionDefinition *vdef;
+X86CPUVersion version = x86_cpu_model_resolve_version(model);
+const CPUCaches *cache_info = model->cpudef->cache_info;
+
+if (version == CPU_VERSION_LEGACY) {
+return cache_info;
+}
+
+for (vdef = x86_cpu_def_get_versions(model->cpudef); vdef->version; 
vdef++) {
+if (vdef->cache_info) {
+cache_info = vdef->cache_info;
+}
+
+if (vdef->version == version) {
+break;
+}
+}
+
+assert(vdef->version == version);
+return cache_info;
+}
+
 /*
  * Load data from X86CPUDefinition into a X86CPU object.
  * Only for builtin_x86_defs models initialized with x86_register_cpudef_types.
@@ -5224,7 +5251,7 @@ static void x86_cpu_load_model(X86CPU *cpu, X86CPUModel 
*model)
 }
 
 /* legacy-cache defaults to 'off' if CPU model provides cache info */
-cpu->legacy_cache = !def->cache_info;
+cpu->legacy_cache = !x86_cpu_get_version_cache_info(cpu, model);
 
 env->features[FEAT_1_ECX] |= CPUID_EXT_HYPERVISOR;
 
@@ -6703,14 +6730,17 @@ static void x86_cpu_realizefn(DeviceState *dev, Error 
**errp)
 
 /* Cache information initialization */
 if (!cpu->legacy_cache) {
-if (!xcc->model || !xcc->model->cpudef->cache_info) {
+const CPUCaches *cache_info =
+x86_cpu_get_version_cache_info(cpu, xcc->model);
+
+if (!xcc->model || !cache_info) {
 g_autofree char *name = x86_cpu_class_get_model_name(xcc);
 error_setg(errp,
"CPU model '%s' doesn't support legacy-cache=off", name);
 return;
 }
 env->cache_info_cpuid2 = env->cache_info_cpuid4 = env->cache_info_amd =
-*xcc->model->cpudef->cache_info;
+*cache_info;
 } else {
 /* Build legacy cache information */
 env->cache_info_cpuid2.l1d_cache = _l1d_cache;
-- 
2.34.1




[PATCH v3 3/7] target/i386: Add a couple of feature bits in 8000_0008_EBX

2023-04-24 Thread Babu Moger
Add the following feature bits.

amd-psfd : Predictive Store Forwarding Disable:
   PSF is a hardware-based micro-architectural optimization
   designed to improve the performance of code execution by
   predicting address dependencies between loads and stores.
   While SSBD (Speculative Store Bypass Disable) disables both
   PSF and speculative store bypass, PSFD only disables PSF.
   PSFD may be desirable for software that is concerned with the
   speculative behavior of PSF but wants a smaller performance
   impact than setting SSBD.
   Depends on the following kernel commit:
   b73a54321ad8 ("KVM: x86: Expose Predictive Store Forwarding Disable")

stibp-always-on :
   Single Thread Indirect Branch Prediction mode has enhanced
   performance and may be left always on.

The documentation for these features is available at the links below.
a. Processor Programming Reference (PPR) for AMD Family 19h Model 01h,
   Revision B1 Processors
b. SECURITY ANALYSIS OF AMD PREDICTIVE STORE FORWARDING

Signed-off-by: Babu Moger 
Acked-by: Michael S. Tsirkin 
Link: https://www.amd.com/system/files/documents/security-analysis-predictive-store-forwarding.pdf
Link: https://www.amd.com/system/files/TechDocs/55898_B1_pub_0.50.zip
---
 target/i386/cpu.c | 4 ++--
 target/i386/cpu.h | 4 
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index c1bc47661d..64a1fdd6ca 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -911,10 +911,10 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
 NULL, NULL, NULL, NULL,
 NULL, "wbnoinvd", NULL, NULL,
 "ibpb", NULL, "ibrs", "amd-stibp",
-NULL, NULL, NULL, NULL,
+NULL, "stibp-always-on", NULL, NULL,
 NULL, NULL, NULL, NULL,
 "amd-ssbd", "virt-ssbd", "amd-no-ssb", NULL,
-NULL, NULL, NULL, NULL,
+"amd-psfd", NULL, NULL, NULL,
 },
 .cpuid = { .eax = 0x80000008, .reg = R_EBX, },
 .tcg_features = 0,
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index d243e290d3..14645e3cb8 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -932,8 +932,12 @@ uint64_t x86_cpu_get_supported_feature_word(FeatureWord w,
 #define CPUID_8000_0008_EBX_IBRS    (1U << 14)
 /* Single Thread Indirect Branch Predictors */
 #define CPUID_8000_0008_EBX_STIBP   (1U << 15)
+/* STIBP mode has enhanced performance and may be left always on */
+#define CPUID_8000_0008_EBX_STIBP_ALWAYS_ON    (1U << 17)
 /* Speculative Store Bypass Disable */
 #define CPUID_8000_0008_EBX_AMD_SSBD    (1U << 24)
+/* Predictive Store Forwarding Disable */
+#define CPUID_8000_0008_EBX_AMD_PSFD    (1U << 28)
 
 #define CPUID_XSAVE_XSAVEOPT   (1U << 0)
 #define CPUID_XSAVE_XSAVEC (1U << 1)
-- 
2.34.1




Re: [PATCH] hw/riscv/virt: Add a second UART for secure world

2023-04-24 Thread Philippe Mathieu-Daudé

On 24/4/23 03:01, Yong Li wrote:

The virt machine can have two UARTs and the second UART
can be used when host secure-mode support is enabled.

Signed-off-by: Yong Li 
Cc: "Zhiwei Liu" 
---
  hw/riscv/virt.c | 4 
  include/hw/riscv/virt.h | 2 ++
  2 files changed, 6 insertions(+)


Reviewed-by: Philippe Mathieu-Daudé 




Re: [PATCH] hw/net/msf2-emac: Don't modify descriptor in-place in emac_store_desc()

2023-04-24 Thread Thomas Huth

On 24/04/2023 17.19, Peter Maydell wrote:

The msf2-emac ethernet controller has functions emac_load_desc() and
emac_store_desc() which read and write the in-memory descriptor
blocks and handle conversion between guest and host endianness.

As currently written, emac_store_desc() does the endianness
conversion in-place; this means that it effectively consumes the
input EmacDesc struct, because on a big-endian host the fields will
be overwritten with the little-endian versions of their values.
Unfortunately, in all the callsites the code continues to access
fields in the EmacDesc struct after it has called emac_store_desc()
-- specifically, it looks at the d.next field.

The effect of this is that on a big-endian host networking doesn't
work because the address of the next descriptor is corrupted.

We could fix this by making the callsite avoid using the struct; but
it's more robust to have emac_store_desc() leave its input alone.

(emac_load_desc() also does an in-place conversion, but here this is
fine, because the function is supposed to be initializing the
struct.)

Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Maydell 
---
This is one of a number of issues that prevent 'make check-avocado'
working for arm targets on a big-endian host...

  hw/net/msf2-emac.c | 14 +-
  1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/hw/net/msf2-emac.c b/hw/net/msf2-emac.c
index 7ccd3e51427..34c1f768db0 100644
--- a/hw/net/msf2-emac.c
+++ b/hw/net/msf2-emac.c
@@ -120,12 +120,16 @@ static void emac_load_desc(MSF2EmacState *s, EmacDesc *d, hwaddr desc)
  
  static void emac_store_desc(MSF2EmacState *s, EmacDesc *d, hwaddr desc)


You could likely also add a "const" to "EmacDesc *d" now.


  {
-/* Convert from host endianness into LE. */
-d->pktaddr = cpu_to_le32(d->pktaddr);
-d->pktsize = cpu_to_le32(d->pktsize);
-d->next = cpu_to_le32(d->next);
+EmacDesc outd;
+/*
+ * Convert from host endianness into LE. We use a local struct because
+ * calling code may still want to look at the fields afterwards.
+ */
+outd.pktaddr = cpu_to_le32(d->pktaddr);
+outd.pktsize = cpu_to_le32(d->pktsize);
+outd.next = cpu_to_le32(d->next);
  
-address_space_write(&s->dma_as, desc, MEMTXATTRS_UNSPECIFIED, d, sizeof *d);

+address_space_write(&s->dma_as, desc, MEMTXATTRS_UNSPECIFIED, &outd, sizeof outd);
  }


Reviewed-by: Thomas Huth 




Re: [PATCH 1/2] target/arm: Define and use new load_cpu_field_low32()

2023-04-24 Thread Richard Henderson

On 4/24/23 16:39, Peter Maydell wrote:

+QEMU_BUILD_BUG_ON(sizeof(typeof_field(CPUARMState, name)) != 8); \


Surely just sizeof_field().

Otherwise,
Reviewed-by: Richard Henderson 


r~



Re: [PATCH 2/2] target/arm: Add compile time asserts to load/store_cpu_field macros

2023-04-24 Thread Richard Henderson

On 4/24/23 16:39, Peter Maydell wrote:

+QEMU_BUILD_BUG_ON(sizeof(typeof_field(CPUARMState, name)) != 4); \


Similarly.

Reviewed-by: Richard Henderson 

r~



Re: [PATCH 1/6] update-linux-headers: sync-up header with Linux for KVM AIA support

2023-04-24 Thread Cornelia Huck
On Mon, Apr 24 2023, Thomas Huth  wrote:

> On 24/04/2023 11.07, Yong-Xuan Wang wrote:
>> Sync-up Linux header to get latest KVM RISC-V headers having AIA support.
>> 
>> Signed-off-by: Yong-Xuan Wang 
>> Reviewed-by: Jim Shu 
>> ---
>>   linux-headers/linux/kvm.h |  2 ++
>>   target/riscv/kvm_riscv.h  | 33 +
>
>   Hi!
>
> Please don't mix updates to linux-headers/ with updates to other files. 
> linux-headers/ should only by updated via the 
> scripts/update-linux-headers.sh script, and then the whole update should be 
> included in the patch, not only selected files.

...and in the cases where you cannot run a normal headers update because
the code has not been accepted into Linux yet, just create a placeholder
patch containing only the linux-headers changes, which can be replaced
with a proper update later.

[I didn't check whether the code is already included in Linux.]




Re: [RFC PATCH v1] hw/misc: add i2c slave device that passes i2c ops outside

2023-04-24 Thread Corey Minyard
On Mon, Apr 24, 2023 at 11:02:08AM -0500, Corey Minyard wrote:
> On Mon, Apr 24, 2023 at 02:09:50PM +, Karol Nowak wrote:
> > Hi Corey,
> > 
> > 
> > 
> > Have you got a chance to look at the I2C code?
> 
> No, I have not.  I've been pretty busy with work stuff.
> 
> > 
> > 
> > I imagine that the I2C code has to stop the emulation and keep the main 
> > thread running so that it can receive events. Is there something like that?
> 
> No, not really.
> 
> Right now an I2C host device will submit its transactions on the virtual
> bus and get an immediate response.  Basically, they call send() for all
> the bytes and then call recv() to get the response.  Your additions
> would require blocking on each send() and recv() until the other end of
> the connection responded.
> 
> The fundamental thing that must happen is that the individual I2C host
> devices need to be modified to submit their transactions asynchronously,
> they do a send_async(), to write bytes, and have some asyncronous way to
> receive the bytes (which is going to be a little tricky, I think).
> 
> There is already a send_async() function available.  This is added so
> that external devices can bus master, but would be usable for a host
> device to talk to a slave.

I should also add that I'm not 100% sure that a blocking interface is
just a bad idea.  I am fairly sure it would be a bad idea for
production, but for test systems it might be ok.

The trouble is that once you put something in for test systems, someone
will want to use it for production systems.  So I would really rather do
it right the first time.  But other QEMU developers that are more
experienced than me can convince me otherwise.

-corey

> 
> -corey
> 
> > 
> > 
> > 
> > Karol
> > 
> > 
> > 
> > From: Corey Minyard  on behalf of Corey Minyard 
> > 
> > Sent: Wednesday, April 19, 2023 4:35 PM
> > To: Karol Nowak 
> > Cc: qemu-devel@nongnu.org ; phi...@linaro.org 
> > ; c...@kaod.org 
> > Subject: Re: [RFC PATCH v1] hw/misc: add i2c slave device that passes i2c 
> > ops outside
> > 
> > ⚠ This email originates from outside the organization or the sender could 
> > not be verified.
> > 
> > On Wed, Apr 19, 2023 at 12:40:36PM +, Karol Nowak wrote:
> > > Hi Corey,
> > >
> > > I looked at hw/ipmi/ipmi_bmc_extern.c and I conclude that it is a bit 
> > > different case than in my one. The function 
> > > ipmi_bmc_extern_handle_command() does not wait for a response; the 
> > > response comes in a chardev-handler. If I am not mistaken, in my case I 
> > > have to arm a timer to avoid hanging of QEMU, somehow stop execution of 
> > > i2c-handler(recv/send/event), wait for a response in a chardev handler 
> > > and then resume an execution of i2c-handler when data arrive.
> > 
> > Yes, something like that.  Hopefully a timer isn't necessary (well, it's
> > necessary to make sure you don't sit there forever, but it shouldn't be
> > the main way to do it), you can use the response from the other end to
> > resume execution.
> > 
> > You don't stop execution of the i2c handler, either.  You aren't blocked
> > waiting for a response, that's the big thing you cannot do.  You send
> > the message and return.  When the response comes in, you do what the
> > hardware would do in that case.
> > 
> > I need to spend a little time looking at the I2C code.  I assume it
> > would need some adjustment to accommodate this.
> > 
> > -corey
> > 
> > >
> > > Best regards,
> > > Karol
> > >
> > >
> > >
> > > 
> > > From: Corey Minyard  on behalf of Corey Minyard 
> > > 
> > > Sent: Monday, April 17, 2023 4:34 PM
> > > To: Karol Nowak 
> > > Cc: qemu-devel@nongnu.org ; phi...@linaro.org 
> > > ; c...@kaod.org 
> > > Subject: Re: [RFC PATCH v1] hw/misc: add i2c slave device that passes i2c 
> > > ops outside
> > >
> > > [You don't often get email from miny...@acm.org. Learn why this is 
> > > important at https://aka.ms/LearnAboutSenderIdentification ]
> > >
> > > ⚠ This email originates from outside the organization or the sender could 
> > > not be verified.
> > >
> > > On Mon, Apr 17, 2023 at 10:18:08AM +, Karol Nowak wrote:
> > > > Hi Corey,
> > > >
> > > >
> > > > thank you for your response.
> > > >
> > > >
> > > > Could you give me some hints how to make IO operations non-blocking in 
> > > > QEMU? Is there a code reference in the source code of QEMU I could use?
> > > >
> > >
> > > You can look at hw/ipmi/ipmi_bmc_extern.c for an example.
> > >
> > > -corey
> > >
> > > >
> > > > Karol
> > > >
> > > >
> > > > 
> > > > From: Corey Minyard  on behalf of Corey Minyard 
> > > > 
> > > > Sent: Thursday, March 23, 2023 5:03 PM
> > > > To: Karol Nowak 
> > > > Cc: qemu-devel@nongnu.org ; phi...@linaro.org 
> > > > ; c...@kaod.org 
> > > > Subject: Re: [RFC PATCH v1] hw/misc: add i2c slave device that passes 
> > > > i2c ops outside
> > > >
> > > > [You don't often get email from miny...@acm.org. 

[PATCH 0/3] hw/core: Make machine-qmp-cmds.c target independent

2023-04-24 Thread Thomas Huth
For being able to create a universal QEMU binary one day, core files
like machine-qmp-cmds.c must not contain any target-specific macros.
This series reworks the related spots in this file, so we can move
it to the common softmmu_ss source set. This also has the advantage
that we only have to compile this file once, rather than multiple
times (once for each target).

Thomas Huth (3):
  hw/core: Use a callback for target specific query-cpus-fast
information
  cpu: Introduce a wrapper for being able to use TARGET_NAME in common
code
  hw/core: Move machine-qmp-cmds.c into the target independent source
set

 include/hw/core/cpu.h  |  6 ++
 include/qemu/typedefs.h|  1 +
 cpu.c  |  5 +
 hw/core/machine-qmp-cmds.c | 20 
 target/s390x/cpu.c |  8 
 hw/core/meson.build|  5 +
 6 files changed, 25 insertions(+), 20 deletions(-)

-- 
2.31.1




[PATCH 2/3] cpu: Introduce a wrapper for being able to use TARGET_NAME in common code

2023-04-24 Thread Thomas Huth
In some spots, it would be helpful to be able to use TARGET_NAME
in common (target independent) code, too. Thus introduce a wrapper
that can be called from common code, just like the one we already
have for target_words_bigendian().

Signed-off-by: Thomas Huth 
---
 include/hw/core/cpu.h | 2 ++
 cpu.c | 5 +
 2 files changed, 7 insertions(+)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 5a019a27bc..39150cf8f8 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -1013,6 +1013,8 @@ void cpu_exec_unrealizefn(CPUState *cpu);
  */
 bool target_words_bigendian(void);
 
+const char *target_name(void);
+
 void page_size_init(void);
 
 #ifdef NEED_CPU_H
diff --git a/cpu.c b/cpu.c
index 9105c85404..65ebaf8159 100644
--- a/cpu.c
+++ b/cpu.c
@@ -427,6 +427,11 @@ bool target_words_bigendian(void)
 #endif
 }
 
+const char *target_name(void)
+{
+return TARGET_NAME;
+}
+
 void page_size_init(void)
 {
 /* NOTE: we can always suppose that qemu_host_page_size >=
-- 
2.31.1




[PATCH 1/3] hw/core: Use a callback for target specific query-cpus-fast information

2023-04-24 Thread Thomas Huth
For being able to create a universal QEMU binary one day, core
files like machine-qmp-cmds.c must not contain any "#ifdef TARGET_..."
parts. Thus let's provide the target specific function via a
function pointer in CPUClass instead, as a first step towards
making this file target independent.

Signed-off-by: Thomas Huth 
---
 include/hw/core/cpu.h  |  4 
 include/qemu/typedefs.h|  1 +
 hw/core/machine-qmp-cmds.c | 16 ++--
 target/s390x/cpu.c |  8 
 4 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 397fd3ac68..5a019a27bc 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -106,6 +106,9 @@ struct SysemuCPUOps;
  * @has_work: Callback for checking if there is work to do.
  * @memory_rw_debug: Callback for GDB memory access.
  * @dump_state: Callback for dumping state.
+ * @query_cpu_fast:
+ *   Fill in target specific information for the "query-cpus-fast"
+ *   QAPI call.
  * @get_arch_id: Callback for getting architecture-dependent CPU ID.
  * @set_pc: Callback for setting the Program Counter register. This
  *   should have the semantics used by the target architecture when
@@ -151,6 +154,7 @@ struct CPUClass {
 int (*memory_rw_debug)(CPUState *cpu, vaddr addr,
uint8_t *buf, int len, bool is_write);
 void (*dump_state)(CPUState *cpu, FILE *, int flags);
+void (*query_cpu_fast)(CPUState *cpu, CpuInfoFast *value);
 int64_t (*get_arch_id)(CPUState *cpu);
 void (*set_pc)(CPUState *cpu, vaddr value);
 vaddr (*get_pc)(CPUState *cpu);
diff --git a/include/qemu/typedefs.h b/include/qemu/typedefs.h
index df4b55ac65..8e9ef252f5 100644
--- a/include/qemu/typedefs.h
+++ b/include/qemu/typedefs.h
@@ -41,6 +41,7 @@ typedef struct CompatProperty CompatProperty;
 typedef struct ConfidentialGuestSupport ConfidentialGuestSupport;
 typedef struct CPUAddressSpace CPUAddressSpace;
 typedef struct CPUArchState CPUArchState;
+typedef struct CpuInfoFast CpuInfoFast;
 typedef struct CPUJumpCache CPUJumpCache;
 typedef struct CPUState CPUState;
 typedef struct CPUTLBEntryFull CPUTLBEntryFull;
diff --git a/hw/core/machine-qmp-cmds.c b/hw/core/machine-qmp-cmds.c
index b98ff15089..c158c02aa3 100644
--- a/hw/core/machine-qmp-cmds.c
+++ b/hw/core/machine-qmp-cmds.c
@@ -28,18 +28,6 @@
 #include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
 
-static void cpustate_to_cpuinfo_s390(CpuInfoS390 *info, const CPUState *cpu)
-{
-#ifdef TARGET_S390X
-S390CPU *s390_cpu = S390_CPU(cpu);
-CPUS390XState *env = &s390_cpu->env;
-
-info->cpu_state = env->cpu_state;
-#else
-abort();
-#endif
-}
-
 /*
  * fast means: we NEVER interrupt vCPU threads to retrieve
  * information from KVM.
@@ -68,8 +56,8 @@ CpuInfoFastList *qmp_query_cpus_fast(Error **errp)
 }
 
 value->target = target;
-if (target == SYS_EMU_TARGET_S390X) {
-cpustate_to_cpuinfo_s390(&value->u.s390x, cpu);
+if (cpu->cc->query_cpu_fast) {
+cpu->cc->query_cpu_fast(cpu, value);
 }
 
 QAPI_LIST_APPEND(tail, value);
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index 40fdeaa905..df167493c3 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -140,6 +140,13 @@ static bool s390_cpu_has_work(CPUState *cs)
 return s390_cpu_has_int(cpu);
 }
 
+static void s390_query_cpu_fast(CPUState *cpu, CpuInfoFast *value)
+{
+S390CPU *s390_cpu = S390_CPU(cpu);
+
+value->u.s390x.cpu_state = s390_cpu->env.cpu_state;
+}
+
 /* S390CPUClass::reset() */
 static void s390_cpu_reset(CPUState *s, cpu_reset_type type)
 {
@@ -332,6 +339,7 @@ static void s390_cpu_class_init(ObjectClass *oc, void *data)
 cc->class_by_name = s390_cpu_class_by_name,
 cc->has_work = s390_cpu_has_work;
 cc->dump_state = s390_cpu_dump_state;
+cc->query_cpu_fast = s390_query_cpu_fast;
 cc->set_pc = s390_cpu_set_pc;
 cc->get_pc = s390_cpu_get_pc;
 cc->gdb_read_register = s390_cpu_gdb_read_register;
-- 
2.31.1




[PATCH 3/3] hw/core: Move machine-qmp-cmds.c into the target independent source set

2023-04-24 Thread Thomas Huth
The only target-specific code left in this file is in two spots that
use TARGET_NAME. Change them to use the new target_name() wrapper
function instead, so we can move the file into the common softmmu_ss
source set. That way we only have to compile this file once, and not
for each target anymore.

Signed-off-by: Thomas Huth 
---
 hw/core/machine-qmp-cmds.c | 4 ++--
 hw/core/meson.build| 5 +
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/hw/core/machine-qmp-cmds.c b/hw/core/machine-qmp-cmds.c
index c158c02aa3..3860a50c3b 100644
--- a/hw/core/machine-qmp-cmds.c
+++ b/hw/core/machine-qmp-cmds.c
@@ -37,7 +37,7 @@ CpuInfoFastList *qmp_query_cpus_fast(Error **errp)
 MachineState *ms = MACHINE(qdev_get_machine());
 MachineClass *mc = MACHINE_GET_CLASS(ms);
 CpuInfoFastList *head = NULL, **tail = &head;
-SysEmuTarget target = qapi_enum_parse(&SysEmuTarget_lookup, TARGET_NAME,
+SysEmuTarget target = qapi_enum_parse(&SysEmuTarget_lookup, target_name(),
   -1, &error_abort);
 CPUState *cpu;
 
@@ -117,7 +117,7 @@ TargetInfo *qmp_query_target(Error **errp)
 {
 TargetInfo *info = g_malloc0(sizeof(*info));
 
-info->arch = qapi_enum_parse(&SysEmuTarget_lookup, TARGET_NAME, -1,
+info->arch = qapi_enum_parse(&SysEmuTarget_lookup, target_name(), -1,
 &error_abort);
 
 return info;
diff --git a/hw/core/meson.build b/hw/core/meson.build
index ae977c9396..959bc924d4 100644
--- a/hw/core/meson.build
+++ b/hw/core/meson.build
@@ -41,6 +41,7 @@ softmmu_ss.add(files(
   'gpio.c',
   'loader.c',
   'machine-hmp-cmds.c',
+  'machine-qmp-cmds.c',
   'machine.c',
   'nmi.c',
   'null-machine.c',
@@ -51,7 +52,3 @@ softmmu_ss.add(files(
   'vm-change-state-handler.c',
   'clock-vmstate.c',
 ))
-
-specific_ss.add(when: 'CONFIG_SOFTMMU', if_true: files(
-  'machine-qmp-cmds.c',
-))
-- 
2.31.1




Re: [RFC PATCH v3 43/44] target/loongarch: Use {set/get}_gpr replace to cpu_fpr

2023-04-24 Thread Richard Henderson

On 4/20/23 09:07, Song Gao wrote:

Introduce set_fpr() and get_fpr() and remove cpu_fpr.

Signed-off-by: Song Gao
---
  .../loongarch/insn_trans/trans_farith.c.inc   | 72 +++
  target/loongarch/insn_trans/trans_fcmp.c.inc  | 12 ++--
  .../loongarch/insn_trans/trans_fmemory.c.inc  | 37 ++
  target/loongarch/insn_trans/trans_fmov.c.inc  | 31 +---
  target/loongarch/translate.c  | 20 --
  5 files changed, 129 insertions(+), 43 deletions(-)


Reviewed-by: Richard Henderson 

r~



Re: [RFC PATCH v3 42/44] target/loongarch: Implement vldi

2023-04-24 Thread Richard Henderson

On 4/20/23 09:07, Song Gao wrote:

This patch includes:
- VLDI.

Signed-off-by: Song Gao
---
  target/loongarch/disas.c|   7 +
  target/loongarch/insn_trans/trans_lsx.c.inc | 137 
  target/loongarch/insns.decode   |   4 +
  3 files changed, 148 insertions(+)


Reviewed-by: Richard Henderson 

r~



Re: [RFC PATCH v1] hw/misc: add i2c slave device that passes i2c ops outside

2023-04-24 Thread Corey Minyard
On Mon, Apr 24, 2023 at 02:09:50PM +, Karol Nowak wrote:
> Hi Corey,
> 
> 
> 
> Have you got a chance to look at the I2C code?

No, I have not.  I've been pretty busy with work stuff.

> 
> 
> I imagine that the I2C code has to stop the emulation and keep the main 
> thread running so that it can receive events. Is there something like that?

No, not really.

Right now an I2C host device will submit its transactions on the virtual
bus and get an immediate response.  Basically, they call send() for all
the bytes and then call recv() to get the response.  Your additions
would require blocking on each send() and recv() until the other end of
the connection responded.

The fundamental thing that must happen is that the individual I2C host
devices need to be modified to submit their transactions asynchronously,
they do a send_async(), to write bytes, and have some asynchronous way to
receive the bytes (which is going to be a little tricky, I think).

There is already a send_async() function available.  This is added so
that external devices can bus master, but would be usable for a host
device to talk to a slave.

-corey

> 
> 
> 
> Karol
> 
> 
> 
> From: Corey Minyard  on behalf of Corey Minyard 
> 
> Sent: Wednesday, April 19, 2023 4:35 PM
> To: Karol Nowak 
> Cc: qemu-devel@nongnu.org ; phi...@linaro.org 
> ; c...@kaod.org 
> Subject: Re: [RFC PATCH v1] hw/misc: add i2c slave device that passes i2c ops 
> outside
> 
> ⚠ This email originates from outside the organization or the sender could not 
> be verified.
> 
> On Wed, Apr 19, 2023 at 12:40:36PM +, Karol Nowak wrote:
> > Hi Corey,
> >
> > I looked at hw/ipmi/ipmi_bmc_extern.c and I conclude that it is a bit 
> > different case than in my one. The function 
> > ipmi_bmc_extern_handle_command() does not wait for a response; the response 
> > comes in a chardev-handler. If I am not mistaken, in my case I have to arm 
> > a timer to avoid hanging of QEMU, somehow stop execution of 
> > i2c-handler(recv/send/event), wait for a response in a chardev handler and 
> > then resume an execution of i2c-handler when data arrive.
> 
> Yes, something like that.  Hopefully a timer isn't necessary (well, it's
> necessary to make sure you don't sit there forever, but it shouldn't be
> the main way to do it), you can use the response from the other end to
> resume execution.
> 
> You don't stop execution of the i2c handler, either.  You aren't blocked
> waiting for a response, that's the big thing you cannot do.  You send
> the message and return.  When the response comes in, you do what the
> hardware would do in that case.
> 
> I need to spend a little time looking at the I2C code.  I assume it
> would need some adjustment to accommodate this.
> 
> -corey
> 
> >
> > Best regards,
> > Karol
> >
> >
> >
> > 
> > From: Corey Minyard  on behalf of Corey Minyard 
> > 
> > Sent: Monday, April 17, 2023 4:34 PM
> > To: Karol Nowak 
> > Cc: qemu-devel@nongnu.org ; phi...@linaro.org 
> > ; c...@kaod.org 
> > Subject: Re: [RFC PATCH v1] hw/misc: add i2c slave device that passes i2c 
> > ops outside
> >
> > [You don't often get email from miny...@acm.org. Learn why this is 
> > important at https://aka.ms/LearnAboutSenderIdentification ]
> >
> > ⚠ This email originates from outside the organization or the sender could 
> > not be verified.
> >
> > On Mon, Apr 17, 2023 at 10:18:08AM +, Karol Nowak wrote:
> > > Hi Corey,
> > >
> > >
> > > thank you for your response.
> > >
> > >
> > > Could you give me some hints how to make IO operations non-blocking in 
> > > QEMU? Is there a code reference in the source code of QEMU I could use?
> > >
> >
> > You can look at hw/ipmi/ipmi_bmc_extern.c for an example.
> >
> > -corey
> >
> > >
> > > Karol
> > >
> > >
> > > 
> > > From: Corey Minyard  on behalf of Corey Minyard 
> > > 
> > > Sent: Thursday, March 23, 2023 5:03 PM
> > > To: Karol Nowak 
> > > Cc: qemu-devel@nongnu.org ; phi...@linaro.org 
> > > ; c...@kaod.org 
> > > Subject: Re: [RFC PATCH v1] hw/misc: add i2c slave device that passes i2c 
> > > ops outside
> > >
> > > [You don't often get email from miny...@acm.org. Learn why this is 
> > > important at https://aka.ms/LearnAboutSenderIdentification ]
> > >
> > > ⚠ This email originates from outside the organization or the sender could 
> > > not be verified.
> > >
> > > On Thu, Mar 23, 2023 at 10:09:02AM +, Karol Nowak wrote:
> > > > Hi,
> > > >
> > > > There is a feature I prepared which may be practical for some QEMU 
> > > > users.
> > > >
> > > > The feature provides a new I2C slave device
> > > > that prepares a message depending what i2c-slave callback was called
> > > > and sends it outside of QEMU through the character device to a client
> > > > that receives that message, processes it and send back a response.
> > > > Thanks to that feature,
> > > > a user can emulate a logic of I2C device outside of QEMU.
> > > > 

Re: [RFC PATCH v3 41/44] target/loongarch: Implement vld vst

2023-04-24 Thread Richard Henderson

On 4/20/23 09:07, Song Gao wrote:

+#include "tcg/tcg-internal.h"


This is internal to tcg.  Do not use.


+tcg_gen_qemu_ld_i128(val, addr, ctx->mem_idx, MO_128);
+set_vreg64(TCGV128_HIGH(val), a->vd, 1);
+set_vreg64(TCGV128_LOW(val), a->vd, 0);


You want tcg_gen_extr_i128_i64().


+tcg_gen_mov_i64(TCGV128_LOW(val), al);
+tcg_gen_mov_i64(TCGV128_HIGH(val), ah);
+tcg_gen_qemu_st_i128(val, addr, ctx->mem_idx, MO_128);


tcg_gen_concat_i64_i128().


+++ b/target/loongarch/lsx_helper.c
@@ -12,6 +12,7 @@
 #include "fpu/softfloat.h"
 #include "internals.h"
 #include "tcg/tcg.h"
+#include "tcg/tcg-ldst.h"


Do not use.  Use "exec/cpu_ldst.h".


r~



Re: [PATCH 1/6] update-linux-headers: sync-up header with Linux for KVM AIA support

2023-04-24 Thread Thomas Huth

On 24/04/2023 11.07, Yong-Xuan Wang wrote:

Sync-up Linux header to get latest KVM RISC-V headers having AIA support.

Signed-off-by: Yong-Xuan Wang 
Reviewed-by: Jim Shu 
---
  linux-headers/linux/kvm.h |  2 ++
  target/riscv/kvm_riscv.h  | 33 +


 Hi!

Please don't mix updates to linux-headers/ with updates to other files. 
linux-headers/ should only by updated via the 
scripts/update-linux-headers.sh script, and then the whole update should be 
included in the patch, not only selected files.


Thanks,
  Thomas




Re: [RFC PATCH v3 40/44] target/loongarch: Implement vilvl vilvh vextrins vshuf

2023-04-24 Thread Richard Henderson

On 4/20/23 09:07, Song Gao wrote:

This patch includes:
- VILV{L/H}.{B/H/W/D};
- VSHUF.{B/H/W/D};
- VSHUF4I.{B/H/W/D};
- VPERMI.W;
- VEXTRINS.{B/H/W/D}.

Signed-off-by: Song Gao
---
  target/loongarch/disas.c|  25 
  target/loongarch/helper.h   |  25 
  target/loongarch/insn_trans/trans_lsx.c.inc |  25 
  target/loongarch/insns.decode   |  25 
  target/loongarch/lsx_helper.c   | 148 
  5 files changed, 248 insertions(+)


Reviewed-by: Richard Henderson 

r~



Re: [RFC PATCH v3 37/44] target/loongarch: Implement vbitsel vset

2023-04-24 Thread Richard Henderson

On 4/20/23 09:07, Song Gao wrote:

+#define SETANYEQZ(NAME, MO) \
+void HELPER(NAME)(CPULoongArchState *env, uint32_t cd, uint32_t vj) \
+{   \
+bool ret = false;   \
+VReg *Vj = &(env->fpr[vj].vreg);\
+\
+ret = do_match2(0, Vj->D(0), Vj->D(1), MO); \
+env->cf[cd & 0x7] = ret;\
+}


Good.


+
+#define SETALLNEZ(NAME, BIT, E) \
+void HELPER(NAME)(CPULoongArchState *env, uint32_t cd, uint32_t vj) \
+{   \
+int i;  \
+bool ret = true;\
+VReg *Vj = &(env->fpr[vj].vreg);\
+\
+for (i = 0; i < LSX_LEN/BIT; i++) { \
+ret &= (Vj->E(i) != 0); \
+}   \
+env->cf[cd & 0x7] = ret;\
+}


setallnez = !setanyeqz, so use !do_match2().



r~



Re: [RFC PATCH v3 35/44] target/loongarch: Implement vseq vsle vslt

2023-04-24 Thread Richard Henderson

On 4/20/23 09:07, Song Gao wrote:

This patch includes:
- VSEQ[I].{B/H/W/D};
- VSLE[I].{B/H/W/D}[U];
- VSLT[I].{B/H/W/D/}[U].

Signed-off-by: Song Gao
---
  target/loongarch/disas.c|  43 +
  target/loongarch/helper.h   |  23 +++
  target/loongarch/insn_trans/trans_lsx.c.inc | 185 
  target/loongarch/insns.decode   |  43 +
  target/loongarch/lsx_helper.c   |  38 
  5 files changed, 332 insertions(+)


Reviewed-by: Richard Henderson 

r~



Re: [RFC] hw/arm/virt: Provide DT binding generation for PCI eXpander Bridges

2023-04-24 Thread Peter Maydell
On Mon, 24 Apr 2023 at 16:41, Jonathan Cameron
 wrote:
>
> On Mon, 24 Apr 2023 10:28:30 +0100
> Peter Maydell  wrote:
> > So, not knowing anything about CXL, my naive question is:
> > this is PCI, why can't we handle it the way we handle everything
> > else PCI, i.e. present the PCI controller in the DTB and it's
> > the guest kernel's job to probe, enumerate, etc whatever it wants ?
>
> Absolutely the kernel will still do the enumeration.  But there's a problem
> - it won't always work, unless there is 'enough room'.
>
> If the aim is to do it in a similar fashion to how PCI Expander Bridges (PXB)
> are handled today with ACPI (we could look at doing something different of 
> course)
>
> We have one set of memory windows for assigning PCI bars into etc. In the 
> model
> used for PXB, that set of resources is shared between different host bridges
> including the main one (each one has separate DT description).
>
> So for virt, VIRT_PCIE_MMIO, VIRT_PCIE_IO, VIRT_PCIE_ECAM, VIRT_HIGH_PCIE_ECAM
> and VIRT_HIGH_PCIE_MMIO are split up between the host bridges.
> Each host bridge has its own DT description.
>
> For ACPI, how to split up the available memory windows depends on the 
> requirements
> of the PCIe devices below the host bridges.  For ACPI we 'know' the answer
> as to what is required at the point of creating the description because EDK2
> did the work for us.  For DT we currently don't.  What to do about that is the
> question this RFC tries to address.
>
> In theory we could change the kernel to support enumerating PXB instances, but
> that's a very different model from today where they are just 'standard'
> host bridges, using the generic bindings for ACPI (and after this patch for 
> DT).

On the other hand, having QEMU enumerate PCI devices is *also* a
very different model from today, where we assume that the guest
code is the one that is going to deal with enumerating PCI devices.
To my mind one of the major advantages of PCI is exactly that it
is entirely probeable and discoverable, so that there is no need
for the dtb to include a lot of information that the kernel can
find out for itself by directly asking the hardware...

thanks
-- PMM


