On Thu, Jun 26, 2025 at 9:57 PM Sairaj Kodilkar <sarun...@amd.com> wrote:
>
>
>
> On 6/26/2025 9:59 PM, David Matlack wrote:
> > On Thu, Jun 26, 2025 at 4:44 AM Sairaj Kodilkar <sarun...@amd.com> wrote:
> >> On 6/26/2025 4:57 PM, Sairaj Kodilkar wrote:
> >>> On 6/21/2025 4:50 AM, David Matlack wrote:
> >>>> +/*
> >>>> + * Limit the number of MSIs enabled/disabled by the test regardless of
> >>>> + * the number of MSIs the device itself supports, e.g. to avoid hitting
> >>>> + * IRTE limits.
> >>>> + */
> >>>> +#define MAX_TEST_MSI 16U
> >>>> +
> >>>
> >>> Now that the AMD IOMMU supports up to 2048 IRTEs per device, I wonder
> >>> if we can include a test with max MSIs set to 2048.
> >
> > That sounds worth doing. I originally added this because I was hitting
> > IRTE limits on an Intel host and a ~6.6 kernel.
> >
> > Is there some way the test can detect from userspace that the IOMMU
> > supports 2048 IRTEs that we could key off to decide what value of
> > MAX_TEST_MSI to use?
> >
>
> The feature is published to userspace through
>
> $ cat /sys/class/iommu/ivhd0/amd-iommu/features
> 25bf732fa2295afe:53d
>
> The output is in the format "efr1:efr2". Bits 9-8 of efr2 indicate
> support for 2048 interrupts (efr2 & 0x300).
>
> Please refer to Section 3.4.13 "Extended Feature 2 Register" of the
> IOMMU spec [1] for more details.
>
> [1]
> https://www.amd.com/content/dam/amd/en/documents/processor-tech-docs/specifications/48882_IOMMU.pdf
>
> Note that when a device is behind a PCIe-to-PCI bridge, the IOMMU may
> hit the IRTE limit early, as multiple devices share the same IRTE
> table. (But this is a corner case, and I doubt a 2K-capable device
> would be placed behind such a bridge.)

Thanks. We could definitely read that and allow up to 2048 MSIs in
this test. Would you be ok if we defer that to a future commit though?
This series is already quite big :)

>
> >>>> +
> >>>> +    vfio_pci_dma_map(self->device, iova, size, mem);
> >>>> +    printf("Mapped HVA %p (size 0x%lx) at IOVA 0x%lx\n", mem, size, iova);
> >>>> +    vfio_pci_dma_unmap(self->device, iova, size);
> >>>
> >>>
> >>> I am slightly confused here, because you assert on munmap() and not
> >>> on any of the vfio_pci_dma_(map/unmap) calls. This test case is not
> >>> testing VFIO.
> >>
> >> I missed the ioctl_assert. Please ignore this :) Sorry about that.
> >
> > No worries, it's not very obvious :)
> >
> > vfio_pci_dma_map() and vfio_pci_dma_unmap() both return void right now
> > and perform internal asserts since all current users of those
> > functions want to assert success.
> >
> > If and when we have a use-case to assert that map or unmap fails
> > (which I think we'll definitely have) we can add __vfio_pci_dma_map()
> > and __vfio_pci_dma_unmap() variants that return int instead of void.
>
> Yep, we can. Another question: why do we need an assert on munmap()? If
> munmap() fails then it's not really VFIO's fault.

You're right, it's very unlikely (almost impossible) to be VFIO's
fault if munmap() fails. But it would be a sign of a bug in the test,
so it is still worth detecting so we can fix it.
