On Mon, 14 Apr 2025 01:23:56 +0800 Tomita Moeko <tomitamo...@gmail.com> wrote:
> On 4/10/25 15:34, Cédric Le Goater wrote:
> > + Corvin
> >
> > On 4/9/25 19:18, Alex Williamson wrote:
> >> On Wed, 26 Mar 2025 01:22:39 +0800
> >> Tomita Moeko <tomitamo...@gmail.com> wrote:
> >>
> >>> So far, all Intel VGA adapters, including discrete GPUs like A770 and
> >>> B580, were treated as IGD devices. While this had no functional
> >>> impact, an error about "unsupported IGD device" will be printed when
> >>> passing through Intel discrete GPUs.
> >>>
> >>> Since IGD devices must be at "0000:00:02.0", let's check the host PCI
> >>> address when probing.
> >>>
> >>> Signed-off-by: Tomita Moeko <tomitamo...@gmail.com>
> >>> ---
> >>>  hw/vfio/igd.c | 23 +++++++++--------------
> >>>  1 file changed, 9 insertions(+), 14 deletions(-)
> >>>
> >>> diff --git a/hw/vfio/igd.c b/hw/vfio/igd.c
> >>> index 265fffc2aa..ff250017b0 100644
> >>> --- a/hw/vfio/igd.c
> >>> +++ b/hw/vfio/igd.c
> >>> @@ -53,6 +53,13 @@
> >>>   * headless setup is desired, the OpRegion gets in the way of that.
> >>>   */
> >>>
> >>> +static bool vfio_is_igd(VFIOPCIDevice *vdev)
> >>> +{
> >>> +    return vfio_pci_is(vdev, PCI_VENDOR_ID_INTEL, PCI_ANY_ID) &&
> >>> +           vfio_is_vga(vdev) &&
> >>> +           vfio_pci_host_match(&vdev->host, "0000:00:02.0");
> >>> +}
> >>
> >> vfio-pci devices can also be specified via sysfsdev= rather than host=,
> >> so at a minimum I think we'd need to test against vdev->vbasedev.name,
> >> as other callers of vfio_pci_host_match do. For example, building a
> >> local PCIHostDeviceAddress and comparing it to name. This is also not
> >> foolproof, though, if we start taking advantage of devices passed by fd.
> >>
> >> Could we instead rely on PCIe capabilities? A discrete GPU should
> >> identify as either an endpoint or a legacy endpoint and IGD should
> >> identify as a root complex integrated endpoint, or maybe older
> >> versions would lack the PCIe capability altogether.
> >
> > Is maintaining a list of PCI IDs for Intel GPU devices, as Corvin was
> > proposing in [1], not a viable solution?
> >
> > Thanks,
> >
> > C.
> >
> > [1]
> > https://lore.kernel.org/qemu-devel/20250206121341.118337-1-corvin.koe...@gmail.com/
>
> I checked the Intel docs; maintaining a device ID list is probably the
> only possible way. But given that Intel is moving to the xe driver and
> the generation becomes unclear, I'd like to propose a list with quirk
> flags for IGD:
>
> static const struct igd_device igd_devices[] = {
>     INTEL_SNB_IDS(IGD_DEVICE, OPREGION_QUIRK | BDSM_QUIRK),
>     INTEL_TGL_IDS(IGD_DEVICE, OPREGION_QUIRK | BDSM64_QUIRK),
> };
>
> Matching against the list is more time-consuming than the current
> switch-case, so it's better to add a new field to cache the result.
>
> I will go with Corvin's first two patches, with the reordering
> suggested by Cornelia.

If I recall the discussion correctly, Corvin's series was mapping device
IDs to generation, where I had the concern that it creates an ongoing
overhead of syncing with the i915 driver to create new mappings.

There was a suggestion that newer hardware has a register that reports
the generation, so maybe we only need to maintain the mapping table up
to the point where we can rely on getting the generation information
from hardware (with the massive caveat that Intel could drop that
generation register in the future, or maybe already has).

The above table, however, suggests yet another use of the table: a
mapping of quirks to specific devices. It seems this once again
introduces the maintenance issue. Why would it not be sufficient to
determine the quirks based on the generation alone?

Thanks,

Alex
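
For illustration only, here is a rough sketch of the PCIe capability
check suggested above. It assumes the device's emulated config space
(vdev->pdev.config) already mirrors the host's express capability; real
code may need to read the host config space instead. pci_find_capability(),
pci_get_word() and the PCI_EXP_* constants are the standard QEMU/Linux
definitions, and the fallback for devices without a PCIe capability is
a guess:

    static bool vfio_is_igd(VFIOPCIDevice *vdev)
    {
        uint8_t pos;
        uint16_t flags;

        if (!vfio_pci_is(vdev, PCI_VENDOR_ID_INTEL, PCI_ANY_ID) ||
            !vfio_is_vga(vdev)) {
            return false;
        }

        pos = pci_find_capability(&vdev->pdev, PCI_CAP_ID_EXP);
        if (!pos) {
            /* Older IGD may lack a PCIe capability altogether */
            return true;
        }

        flags = pci_get_word(vdev->pdev.config + pos + PCI_EXP_FLAGS);

        /* IGD: root complex integrated endpoint; dGPU: (legacy) endpoint */
        return ((flags & PCI_EXP_FLAGS_TYPE) >> 4) == PCI_EXP_TYPE_RC_END;
    }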
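
And a hypothetical sketch of deriving the quirks from the generation
alone, rather than tracking per-device quirk flags. The IGD_QUIRK_*
names, the igd_gen_to_quirks() helper and the generation thresholds are
illustrative only (igd.c currently switches to a 64-bit BDSM register
from gen 11 on):

    /* Hypothetical quirk flags, not existing QEMU definitions */
    #define IGD_QUIRK_OPREGION  (1 << 0)
    #define IGD_QUIRK_BDSM32    (1 << 1)    /* 32-bit BDSM register */
    #define IGD_QUIRK_BDSM64    (1 << 2)    /* 64-bit BDSM register */

    static unsigned int igd_gen_to_quirks(int gen)
    {
        unsigned int quirks = IGD_QUIRK_OPREGION;

        if (gen >= 11) {
            quirks |= IGD_QUIRK_BDSM64;
        } else if (gen >= 6) {
            quirks |= IGD_QUIRK_BDSM32;
        }

        return quirks;
    }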