On Sun, 20 Nov 2022 16:36:58 -0800
Bryan Angelo <bang...@gmail.com> wrote:

> When passing-through via vfio-pci using QEMU 7.1.0 and OVMF, it appears
> that qemu preallocates all guest system memory.
> 
> qemu-system-x86_64 \
>     -no-user-config \
>     -nodefaults \
>     -nographic \
>     -rtc base=utc \
>     -boot strict=on \
>     -machine pc,accel=kvm,dump-guest-core=off \
>     -cpu host,migratable=off \
>     -smp 8 \
>     -m size=8G \
>     -overcommit mem-lock=off \
>     -device vfio-pci,host=03:00.0 \
>     ...
> 
>   PID USER      PR  NI     VIRT      RES  %CPU  %MEM     TIME+ S COMMAND
>  4151 root      20   0 13560.8m  *8310.8m* 100.0  52.6   0:25.06 S qemu-system-x86_64
> 
> 
> If I remove just the vfio-pci device argument, it appears that qemu no
> longer preallocates all guest system memory.
> 
>   PID USER      PR  NI     VIRT      RES  %CPU  %MEM     TIME+ S COMMAND
>  5049 root      20   0 13414.0m   *762.4m*   0.0   4.8   0:27.06 S qemu-system-x86_64
> 
> 
> I am curious if anyone has any context on or experience with this
> functionality.  Does anyone know if preallocation is a requirement for VFIO
> with QEMU or if preallocation can be disabled?
> 
> I am speculating that QEMU is actually preallocating as opposed to the
> guest touching every page of system memory.


This is currently a necessary artifact of device assignment.  Any
memory that can potentially be a DMA target for the assigned device
needs to be pinned in the host.  By default, all guest memory is
potentially a DMA target, therefore all of guest memory is pinned.  A
vIOMMU in the guest can reduce the memory footprint, but the guest
will still pin all memory initially because the vIOMMU is disabled at
guest boot/reboot, and this trades VM memory footprint for latency,
as dynamic mappings through the vIOMMU to the host IOMMU are a long
path.
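
For reference, enabling a vIOMMU might look something like the sketch
below (illustrative only, not a recommendation: intel-iommu requires
the q35 machine type, caching-mode=on is needed so QEMU sees the
guest's map/unmap requests for the assigned device, and intremap=on
needs a split irqchip):

qemu-system-x86_64 \
    -machine q35,accel=kvm,kernel-irqchip=split \
    -device intel-iommu,intremap=on,caching-mode=on \
    -m size=8G \
    -device vfio-pci,host=03:00.0 \
    ...

A Linux guest also needs to enable its IOMMU driver (for example
intel_iommu=on on the guest kernel command line) and actually use
dynamic DMA mappings before the resident footprint can shrink.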

Eventually, devices supporting Page Request Interface capabilities
can help alleviate this by essentially faulting in DMA pages, much
like the processor does for memory.  Support for this likely requires
new hardware and software though.  Thanks,

Alex
