On Wed, May 07, 2025 at 08:44:05PM +0000, Matt Ochs wrote:
> Hi Daniel,
> 
> Thanks for your feedback!
> 
> > On May 7, 2025, at 11:51 AM, Daniel P. Berrangé <berra...@redhat.com> wrote:
> > On Fri, Apr 11, 2025 at 08:40:54AM -0700, Matthew R. Ochs via Devel wrote:
> >> Resending: Series has been re-based over latest upstream.
> >> 
> >> This patch series adds support for configuring the PCI high memory MMIO
> >> window size for aarch64 virt machine types. This feature has been merged
> >> into the QEMU upstream master branch [1] and will be available in QEMU 
> >> 10.0.
> >> It allows users to configure the size of the high memory MMIO window above
> >> 4GB, which is particularly useful for systems with large PCI memory
> >> requirements.
> >> 
> >> The feature is exposed through the domain XML as a new PCI feature:
> >> <features>
> >>  <pci>
> >>    <highmem-mmio-size unit='G'>512</highmem-mmio-size>
> >>  </pci>
> >> </features>
> >> 
> >> When enabled, this configures the size of the PCI high memory MMIO window
> >> via QEMU's highmem-mmio-size machine property. The feature is only
> >> available for aarch64 virt machine types and requires QEMU support.
> > 
> > This isn't my area of expertise, but could you give any more background
> > on why we need to /manually/ set such a property on Arm only? Is there
> > something that prevents us making QEMU "do the right thing"?
> 
> The highmem-mmio-size property is only available for the arm64 “virt”
> machine. It is only needed when a VM configuration will exceed the 512G
> default for the PCI highmem region. There are some GPU devices that exist
> today that have very large BARs and require more than 512G when
> multiple devices are passed through to a VM.
> 
> Regarding making QEMU “do the right thing”, we could add logic to
> libvirt to detect when these known devices are present in the VM
> configuration and automatically set an appropriate size for the
> parameter. However, I was under the impression that this type of solution
> was preferred to be handled at the management-app layer.

I wasn't suggesting to put logic in libvirt, actually. I'm querying why
QEMU's memory map is set up such that this PCI assignment can't work by
default with a standard QEMU configuration.

Can you confirm this works correctly on x86 QEMU with the q35 machine type
by default? If so, what prevents the QEMU 'virt' machine for aarch64 from
being changed to also work?

Libvirt can't detect when the devices are present in the VM config
because this mmio setting is a cold boot option, while PCI devices
are often hot-plugged to an existing VM.
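
For reference, since the property is a cold-boot machine option, it has to
be present on the initial command line; a sketch of what that looks like
(disk path, memory size, and the 1T value are placeholder values, not a
complete working configuration):

```shell
# Sketch only: boot an aarch64 'virt' guest with a 1 TiB PCI high-MMIO
# window instead of the 512 GiB default (requires QEMU 10.0+).
qemu-system-aarch64 \
    -machine virt,highmem-mmio-size=1T \
    -cpu host -accel kvm \
    -m 8G \
    -drive file=/path/to/disk.qcow2,if=virtio
```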

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
