On 06/08/2019 14:11, Jan Beulich wrote:
> There's no point setting up tables with more space than a PCI device can
> use. For both MSI and MSI-X we can determine how many interrupts could
> be set up at most. Tables allocated during ACPI table parsing, however,
> will (for now at least) continue to be set up to have maximum size.
>
> Note that until we would want to use sub-page allocations here there's
> no point checking whether MSI is supported by a device - 1 or up to 32
> (or actually 128, due to the change effectively using a reserved
> encoding) IRTEs always mean an order-0 allocation anyway.
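
For reference, the arithmetic behind the order-0 claim, assuming the usual
4-byte / 16-byte sizes for union irte32 / union irte128 (these sizes are my
assumption, not quoted from the tree):

    /* Standalone sanity check of "up to 128 IRTEs is always order 0". */
    #include <assert.h>
    int main(void)
    {
        const unsigned int page_size = 4096, max_irtes = 128;
        assert(max_irtes * 4  <= page_size);  /* 32-bit IRTE format:  512 bytes */
        assert(max_irtes * 16 <= page_size);  /* 128-bit IRTE format: 2048 bytes */
        return 0;                             /* both fit one page => order 0 */
    }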

Devices which are not MSI-capable don't need an interrupt remapping
table at all.

Per my calculations, the Rome SDP has 62 devices with MSI/MSI-X support,
and 98 devices which are CPU internals with no interrupt support at all.

In comparison, for a production Cascade Lake system I have to hand, the
stats are 92 non-MSI devices and 18 MSI-capable devices (not a valid
direct comparison, given how VT-d's remapping tables work, but a
datapoint on "similar-looking systems").

I'm happy to leave "no IRTs for non-MSI-capable devices" for future work
(rough sketch of what I mean below), but at the very least, I don't think
the commit message wants to be phrased in exactly this way.
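
Something along these lines is what I have in mind.  It's only a sketch
against amd_iommu_add_device(); pdev->msi_maxvec is an invented field
standing in for "maximum vectors advertised by the MSI capability", not
something we have today:

    /* Hypothetical: only set up an IRT when the device can actually
     * generate MSI/MSI-X interrupts. */
    unsigned int nr = 0;

    if ( pdev->msix )
        nr = pdev->msix->nr_entries;
    else if ( pdev->msi_maxvec )            /* plain MSI capability present */
        nr = pdev->msi_maxvec;

    if ( nr )                               /* otherwise skip the IRT entirely */
    {
        ivrs_mappings[bdf].intremap_table =
            amd_iommu_alloc_intremap_table(
                iommu, &ivrs_mappings[bdf].intremap_inuse, nr);
        if ( !ivrs_mappings[bdf].intremap_table )
            return -ENOMEM;
    }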

>
> --- a/xen/drivers/passthrough/amd/iommu_init.c
> +++ b/xen/drivers/passthrough/amd/iommu_init.c
> @@ -1315,11 +1317,8 @@ static int __init amd_iommu_setup_device
>              }
>  
>              amd_iommu_set_intremap_table(
> -                dte,
> -                ivrs_mappings[bdf].intremap_table
> -                ? virt_to_maddr(ivrs_mappings[bdf].intremap_table)
> -                : 0,
> -                iommu_intremap);
> +                dte, ivrs_mappings[bdf].intremap_table,
> +                ivrs_mappings[bdf].iommu, iommu_intremap);

Ah - half of this looks like it wants to be in patch 6, rather than here.

>          }
>      }
>  
> --- a/xen/drivers/passthrough/amd/iommu_intr.c
> +++ b/xen/drivers/passthrough/amd/iommu_intr.c
> @@ -69,7 +69,8 @@ union irte_cptr {
>      const union irte128 *ptr128;
>  } __transparent__;
>  
> -#define INTREMAP_MAX_ENTRIES (1 << IOMMU_INTREMAP_ORDER)
> +#define INTREMAP_MAX_ORDER   0xB
> +#define INTREMAP_MAX_ENTRIES (1 << INTREMAP_MAX_ORDER)
>  
>  struct ioapic_sbdf ioapic_sbdf[MAX_IO_APICS];
>  struct hpet_sbdf hpet_sbdf;
> @@ -80,17 +81,13 @@ unsigned int nr_ioapic_sbdf;
>  
>  static void dump_intremap_tables(unsigned char key);
>  
> -static unsigned int __init intremap_table_order(const struct amd_iommu *iommu)
> -{
> -    return iommu->ctrl.ga_en
> -           ? get_order_from_bytes(INTREMAP_MAX_ENTRIES * sizeof(union irte128))
> -           : get_order_from_bytes(INTREMAP_MAX_ENTRIES * sizeof(union irte32));
> -}
> +#define intremap_page_order(irt) PFN_ORDER(virt_to_page(irt))

What makes the frame's order field safe to use?  It reaches into
(pg)->v.free.order, which fairly obviously isn't safe for allocated pages.

virt_to_page() is a non-trivial calculation, which is now used in a
large number of places.  I can't easily judge whether any of them are
hot paths, but surely it would be easier to just store another
unsigned int per device.

Furthermore, it would work around a preexisting issue where we can
allocate beyond the number of interrupts for the device, up to the next
order boundary.
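
Something like this is all I mean - sketch only, with intremap_nr being a
made-up field name rather than anything existing:

    /* Record the usable IRTE count per device at allocation time, instead
     * of re-deriving an order from the frame table via virt_to_page(). */
    struct ivrs_mappings {
        /* ... existing fields ... */
        void *intremap_table;
        unsigned long *intremap_inuse;
        unsigned int intremap_nr;   /* exact entry count, not rounded up to
                                     * the next order boundary */
    };

    /* Allocation side: */
    ivrs_mappings[bdf].intremap_nr = nr_entries;

    /* Users then bound-check against the exact count: */
    if ( index >= ivrs_mappings[bdf].intremap_nr )
        return -EINVAL;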

> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -471,16 +471,15 @@ static int amd_iommu_add_device(u8 devfn
>      {
>          ivrs_mappings[bdf].intremap_table =
>              amd_iommu_alloc_intremap_table(
> -                iommu, &ivrs_mappings[bdf].intremap_inuse);
> +                iommu, &ivrs_mappings[bdf].intremap_inuse,
> +                pdev->msix ? pdev->msix->nr_entries
> +                           : multi_msi_capable(~0u));
>          if ( !ivrs_mappings[bdf].intremap_table )
>              return -ENOMEM;
>  
>          amd_iommu_set_intremap_table(
>              iommu->dev_table.buffer + (bdf * IOMMU_DEV_TABLE_ENTRY_SIZE),
> -            ivrs_mappings[bdf].intremap_table
> -            ? virt_to_maddr(ivrs_mappings[bdf].intremap_table)
> -            : 0,
> -            iommu_intremap);
> +            ivrs_mappings[bdf].intremap_table, iommu, iommu_intremap);
>  

Similarly for patch 6 here.

~Andrew
