On 14.01.21 00:34, Alex Williamson wrote:
> On Thu, 7 Jan 2021 14:34:18 +0100
> David Hildenbrand <da...@redhat.com> wrote:
>
>> Although RamDiscardMgr can handle running into the maximum number of
>> DMA mappings by propagating errors when creating a DMA mapping, we want
>> to sanity check and warn the user early that there is a theoretical setup
>> issue and that virtio-mem might not be able to provide as much memory
>> towards a VM as desired.
>>
>> As suggested by Alex, let's use the number of KVM memory slots to guess
>> how many other mappings we might see over time.
>>
>> Cc: Paolo Bonzini <pbonz...@redhat.com>
>> Cc: "Michael S. Tsirkin" <m...@redhat.com>
>> Cc: Alex Williamson <alex.william...@redhat.com>
>> Cc: Dr. David Alan Gilbert <dgilb...@redhat.com>
>> Cc: Igor Mammedov <imamm...@redhat.com>
>> Cc: Pankaj Gupta <pankaj.gupta.li...@gmail.com>
>> Cc: Peter Xu <pet...@redhat.com>
>> Cc: Auger Eric <eric.au...@redhat.com>
>> Cc: Wei Yang <richard.weiy...@linux.alibaba.com>
>> Cc: teawater <teawat...@linux.alibaba.com>
>> Cc: Marek Kedzierski <mkedz...@redhat.com>
>> Signed-off-by: David Hildenbrand <da...@redhat.com>
>> ---
>>  hw/vfio/common.c | 43 +++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 43 insertions(+)
>>
>> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
>> index 1babb6bb99..bc20f738ce 100644
>> --- a/hw/vfio/common.c
>> +++ b/hw/vfio/common.c
>> @@ -758,6 +758,49 @@ static void vfio_register_ram_discard_notifier(VFIOContainer *container,
>>                                                 vfio_ram_discard_notify_discard_all);
>>      rdmc->register_listener(rdm, section->mr, &vrdl->listener);
>>      QLIST_INSERT_HEAD(&container->vrdl_list, vrdl, next);
>> +
>> +    /*
>> +     * Sanity-check if we have a theoretically problematic setup where we could
>> +     * exceed the maximum number of possible DMA mappings over time. We assume
>> +     * that each mapped section in the same address space as a RamDiscardMgr
>> +     * section consumes exactly one DMA mapping, with the exception of
>> +     * RamDiscardMgr sections; i.e., we don't expect to have gIOMMU sections in
>> +     * the same address space as RamDiscardMgr sections.
>> +     *
>> +     * We assume that each section in the address space consumes one memslot.
>> +     * We take the number of KVM memory slots as a best guess for the maximum
>> +     * number of sections in the address space we could have over time,
>> +     * also consuming DMA mappings.
>> +     */
>> +    if (container->dma_max_mappings) {
>> +        unsigned int vrdl_count = 0, vrdl_mappings = 0, max_memslots = 512;
>> +
>> +#ifdef CONFIG_KVM
>> +        if (kvm_enabled()) {
>> +            max_memslots = kvm_get_max_memslots();
>> +        }
>> +#endif
>> +
>> +        QLIST_FOREACH(vrdl, &container->vrdl_list, next) {
>> +            hwaddr start, end;
>> +
>> +            start = QEMU_ALIGN_DOWN(vrdl->offset_within_address_space,
>> +                                    vrdl->granularity);
>> +            end = ROUND_UP(vrdl->offset_within_address_space + vrdl->size,
>> +                           vrdl->granularity);
>> +            vrdl_mappings = (end - start) / vrdl->granularity;
>
> ---> += ?
Ah, yes, thanks. That's the result of testing only with a single virtio-mem device :)

-- 
Thanks,

David / dhildenb