Re: [PATCH 1/1] IOMMU: Save pci device id instead of pci_dev* pointer for DMAR devices
On 2013/11/20 23:59, David Woodhouse wrote:
> On Fri, 2013-11-08 at 08:46 -0700, Bjorn Helgaas wrote:
>> I don't know the IOMMU drivers well either, but it seems like they
>> rely on notifications of device addition and removal (see
>> iommu_bus_notifier()). It doesn't seem right for them to also use the
>> generic PCI interfaces like pci_get_domain_bus_and_slot() because the
>> IOMMU driver should already know what devices exist and their
>> lifetimes. It seems like confusion to mix the two. But I don't have
>> a concrete suggestion.

Hi David, thanks for your review and comments!

> The generic IOMMU code has a notifier, and calls through to an
> ->add_device() method in the specific IOMMU driver's iommu_ops.
>
> The Intel IOMMU driver predates that, and its scheme for mapping devices
> to the correct DMAR unit is different. It happens entirely within the
> get_domain_for_dev() function, which happens when we're first asked to
> set up a mapping for a given device (when we don't already have the
> answer stashed in dev->archdata).
>
> I think we should add an ->add_device() method to the Intel IOMMU
> driver, and make it do much of what's in get_domain_for_dev() right now
> — finding the "proxy" device (the upstream PCIe bridge or whatever), and
> then looking through the ACPI DMAR table to find which DMAR unit that's
> attached to. Then we stash that information (dmar, devfn) in
> dev->archdata, and get_domain_for_dev() still has *some* work to do,
> actually allocating a logical domain on the IOMMU in question, but not
> as much. And refcount the damn domain instead of playing the horrid
> tricks we currently do to hang it off the upstream proxy device *too*.

The Intel IOMMU driver already has an ->add_device() method:

	.add_device = intel_iommu_add_device,

It is currently used to update the IOMMU group info. Since the Intel
IOMMU driver also has its own notifier, it may be a good candidate for
hooking this in.
Currently, the dmar driver parses the DMAR table and finds the PCI
device IDs under a specific DRHD, but it only saves the pci_dev *
pointers in the devices array. So if a PCI device is removed, that
info becomes stale. In the last version of the patch, I replaced the
pci_dev * pointer array entirely with PCI device IDs. That may
introduce some unsafe issues, because a PCI device may be destroyed
while a DMA mapping for it is being processed, etc. So I have reworked
the patch to save the PCI device ID as well as the pci_dev * pointer,
like:

struct dmar_device {
	u16 segment;		/* these three are used only on PCI device add/remove; */
	u8 bus;			/* the Intel IOMMU driver notifier uses them to update */
	u8 devfn;		/* the pci_dev * pointer */
	struct list_head list;	/* link into the DRHD device list */
	struct pci_dev *pdev;	/* holds the PCI device */
};

What do you think about that? In this new patch, we won't change the
Intel IOMMU driver much, just enhance the Intel IOMMU driver notifier
so that the DRHD device list is always current, with no stale info. I
will send out this new patch soon.

> My main concern here is that the DMAR table contains the PCI bus numbers
> at boot time. Doing the lookup later will only work if we don't renumber
> busses. Or if we have a way to look things up based on the *original*
> bus number.

If we don't remove the PCI device, the occupied buses won't change, I
think. And because the new patch still uses the pci_dev * pointer to
find the matching DRHD, this is not a regression. Since DMAR also uses
the PCI device ID to identify the devices it covers, I have not found
anything to use instead of the device ID. The AMD IOMMU driver seems to
use PCI device IDs to identify its units too, although I only took a
quick glance at it, so I may be wrong.

> The Intel IOMMU also has a bus notifier of its own which it only uses to
> know when a driver is *detached*, so it can tear down the logical domain
> for the corresponding device. Would be nice to have the generic IOMMU
> notifier call a callback for us then too, perhaps.
Updating the device info in the Intel IOMMU driver is a good point. Thanks!

Yijing

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH 1/1] IOMMU: Save pci device id instead of pci_dev* pointer for DMAR devices
On Fri, 2013-11-08 at 08:46 -0700, Bjorn Helgaas wrote:
> I don't know the IOMMU drivers well either, but it seems like they
> rely on notifications of device addition and removal (see
> iommu_bus_notifier()). It doesn't seem right for them to also use the
> generic PCI interfaces like pci_get_domain_bus_and_slot() because the
> IOMMU driver should already know what devices exist and their
> lifetimes. It seems like confusion to mix the two. But I don't have
> a concrete suggestion.

The generic IOMMU code has a notifier, and calls through to an
->add_device() method in the specific IOMMU driver's iommu_ops.

The Intel IOMMU driver predates that, and its scheme for mapping devices
to the correct DMAR unit is different. It happens entirely within the
get_domain_for_dev() function, which happens when we're first asked to
set up a mapping for a given device (when we don't already have the
answer stashed in dev->archdata).

I think we should add an ->add_device() method to the Intel IOMMU
driver, and make it do much of what's in get_domain_for_dev() right now
— finding the "proxy" device (the upstream PCIe bridge or whatever), and
then looking through the ACPI DMAR table to find which DMAR unit that's
attached to. Then we stash that information (dmar, devfn) in
dev->archdata, and get_domain_for_dev() still has *some* work to do,
actually allocating a logical domain on the IOMMU in question, but not
as much. And refcount the damn domain instead of playing the horrid
tricks we currently do to hang it off the upstream proxy device *too*.

My main concern here is that the DMAR table contains the PCI bus numbers
at boot time. Doing the lookup later will only work if we don't renumber
busses. Or if we have a way to look things up based on the *original*
bus number.

The Intel IOMMU also has a bus notifier of its own which it only uses to
know when a driver is *detached*, so it can tear down the logical domain
for the corresponding device.
Would be nice to have the generic IOMMU notifier call a callback for us
then too, perhaps.

--
dwmw2
Re: [PATCH 1/1] IOMMU: Save pci device id instead of pci_dev* pointer for DMAR devices
>> Hmmm, this is the thing I am most worried about. If we only use the
>> (pci_dev *) pointer in the drhd->devices array as an identification,
>> is replacing the (pci_dev *) pointer with the PCI device ID
>> segment:bus:devfn safe? Or is this a wrong way to fix this issue? I
>> don't know the IOMMU driver much yet, so any comments from the IOMMU
>> guys on this issue are welcome.
>>
>> If this is not safe, what about saving both the PCI device ID and the
>> (pci_dev *) pointer in the drhd? Then we can put the pci_dev ref and
>> set the pci_dev * to NULL when the device is removed (via the bus
>> notifier), and update the (pci_dev *) pointer when a device is added.
>
> I don't know the IOMMU drivers well either, but it seems like they
> rely on notifications of device addition and removal (see
> iommu_bus_notifier()). It doesn't seem right for them to also use the
> generic PCI interfaces like pci_get_domain_bus_and_slot() because the
> IOMMU driver should already know what devices exist and their
> lifetimes. It seems like confusion to mix the two. But I don't have
> a concrete suggestion.

Maybe you are right. I will try to rework the patch and resend it soon.

Thanks!
Yijing
Re: [PATCH 1/1] IOMMU: Save pci device id instead of pci_dev* pointer for DMAR devices
On Thu, Nov 7, 2013 at 8:40 PM, Yijing Wang wrote:
> HI Bjorn,
>    Thanks for your review and comments very much!
>
>>> +	list_for_each_entry(dmar_dev, head, list)
>>> +		if (dmar_dev->segment == pci_domain_nr(dev->bus)
>>> +			&& dmar_dev->bus == dev->bus->number
>>> +			&& dmar_dev->devfn == dev->devfn)
>>> +			return 1;
>>> +
>>>  	/* Check our parent */
>>>  	dev = dev->bus->self;
>>
>> You didn't change this, but it looks like this may have the same problem
>> we've been talking about here:
>>
>> http://lkml.kernel.org/r/20131105232903.3790.8738.st...@bhelgaas-glaptop.roam.corp.google.com
>>
>> Namely, if "dev" is a VF on a virtual bus, "dev->bus->self == NULL", so
>> we won't search for any of the bridges leading to the VF. I proposed a
>> pci_upstream_bridge() interface that could be used like this:
>>
>>	/* Check our parent */
>>	dev = pci_upstream_bridge(dev);
>
> It looks good to me. Because pci_upstream_bridge() is still in your next
> branch, I think maybe I can split this change into a separate patch after
> 3.13-rc1.

Yep, that would be a fix for a separate issue and should be a separate patch.
>>>  static struct intel_iommu *device_to_iommu(int segment, u8 bus, u8 devfn)
>>>  {
>>>  	struct dmar_drhd_unit *drhd = NULL;
>>> -	int i;
>>> +	struct dmar_device *dmar_dev;
>>> +	struct pci_dev *pdev;
>>>
>>>  	for_each_drhd_unit(drhd) {
>>>  		if (drhd->ignored)
>>> @@ -658,16 +659,22 @@ static struct intel_iommu *device_to_iommu(int segment, u8 bus, u8 devfn)
>>>  		if (segment != drhd->segment)
>>>  			continue;
>>>
>>> -		for (i = 0; i < drhd->devices_cnt; i++) {
>>> -			if (drhd->devices[i] &&
>>> -				drhd->devices[i]->bus->number == bus &&
>>> -				drhd->devices[i]->devfn == devfn)
>>> -				return drhd->iommu;
>>> -			if (drhd->devices[i] &&
>>> -				drhd->devices[i]->subordinate &&
>>> -				drhd->devices[i]->subordinate->number <= bus &&
>>> -				drhd->devices[i]->subordinate->busn_res.end >= bus)
>>> -				return drhd->iommu;
>>> +		list_for_each_entry(dmar_dev, &drhd->head, list) {
>>> +			if (dmar_dev->bus == bus &&
>>> +				dmar_dev->devfn == devfn)
>>> +				return drhd->iommu;
>>> +
>>> +			pdev = pci_get_domain_bus_and_slot(dmar_dev->segment,
>>> +				dmar_dev->bus, dmar_dev->devfn);
>>> +			if (pdev->subordinate &&
>>> +				pdev->subordinate->number <= bus &&
>>> +				pdev->subordinate->busn_res.end >= bus) {
>>> +				pci_dev_put(pdev);
>>> +				return drhd->iommu;
>>
>> I don't know the details of how device_to_iommu() is used, but this
>> style (acquire ref to pci_dev, match it to some other object, drop
>> pci_dev ref, return object) makes me nervous. How do we know the
>> caller isn't depending on pci_dev to remain attached to the object?
>> What happens if the pci_dev disappears when we do the pci_dev_put()
>> here?
>
> Hmmm, this is the thing I am most worried about. If we only use the
> (pci_dev *) pointer in the drhd->devices array as an identification,
> is replacing the (pci_dev *) pointer with the PCI device ID
> segment:bus:devfn safe? Or is this a wrong way to fix this issue? I
> don't know the IOMMU driver much yet, so any comments from the IOMMU
> guys on this issue are welcome.
> If this is not safe, what about saving both the PCI device ID and the
> (pci_dev *) pointer in the drhd? Then we can put the pci_dev ref and
> set the pci_dev * to NULL when the device is removed (via the bus
> notifier), and update the (pci_dev *) pointer when a device is added.

I don't know the IOMMU drivers well either, but it seems like they
rely on notifications of device addition and removal (see
iommu_bus_notifier()). It doesn't seem right for them to also use the
generic PCI interfaces like pci_get_domain_bus_and_slot() because the
IOMMU driver should already know what devices exist and their
lifetimes. It seems like confusion to mix the two. But I don't have
a concrete suggestion.

Bjorn
Re: [PATCH 1/1] IOMMU: Save pci device id instead of pci_dev* pointer for DMAR devices
HI Bjorn,
   Thanks for your review and comments very much!

>> +	list_for_each_entry(dmar_dev, head, list)
>> +		if (dmar_dev->segment == pci_domain_nr(dev->bus)
>> +			&& dmar_dev->bus == dev->bus->number
>> +			&& dmar_dev->devfn == dev->devfn)
>> +			return 1;
>> +
>>  	/* Check our parent */
>>  	dev = dev->bus->self;
>
> You didn't change this, but it looks like this may have the same problem
> we've been talking about here:
>
> http://lkml.kernel.org/r/20131105232903.3790.8738.st...@bhelgaas-glaptop.roam.corp.google.com
>
> Namely, if "dev" is a VF on a virtual bus, "dev->bus->self == NULL", so
> we won't search for any of the bridges leading to the VF. I proposed a
> pci_upstream_bridge() interface that could be used like this:
>
>	/* Check our parent */
>	dev = pci_upstream_bridge(dev);

It looks good to me. Because pci_upstream_bridge() is still in your next
branch, I think maybe I can split this change into a separate patch after
3.13-rc1.

>>  static struct intel_iommu *device_to_iommu(int segment, u8 bus, u8 devfn)
>>  {
>>  	struct dmar_drhd_unit *drhd = NULL;
>> -	int i;
>> +	struct dmar_device *dmar_dev;
>> +	struct pci_dev *pdev;
>>
>>  	for_each_drhd_unit(drhd) {
>>  		if (drhd->ignored)
>> @@ -658,16 +659,22 @@ static struct intel_iommu *device_to_iommu(int segment, u8 bus, u8 devfn)
>>  		if (segment != drhd->segment)
>>  			continue;
>>
>> -		for (i = 0; i < drhd->devices_cnt; i++) {
>> -			if (drhd->devices[i] &&
>> -				drhd->devices[i]->bus->number == bus &&
>> -				drhd->devices[i]->devfn == devfn)
>> -				return drhd->iommu;
>> -			if (drhd->devices[i] &&
>> -				drhd->devices[i]->subordinate &&
>> -				drhd->devices[i]->subordinate->number <= bus &&
>> -				drhd->devices[i]->subordinate->busn_res.end >= bus)
>> -				return drhd->iommu;
>> +		list_for_each_entry(dmar_dev, &drhd->head, list) {
>> +			if (dmar_dev->bus == bus &&
>> +				dmar_dev->devfn == devfn)
>> +				return drhd->iommu;
>> +
>> +			pdev = pci_get_domain_bus_and_slot(dmar_dev->segment,
>> +				dmar_dev->bus, dmar_dev->devfn);
>> +			if (pdev->subordinate &&
>> +				pdev->subordinate->number <= bus &&
>> +				pdev->subordinate->busn_res.end >= bus) {
>> +				pci_dev_put(pdev);
>> +				return drhd->iommu;
>
> I don't know the details of how device_to_iommu() is used, but this
> style (acquire ref to pci_dev, match it to some other object, drop
> pci_dev ref, return object) makes me nervous. How do we know the
> caller isn't depending on pci_dev to remain attached to the object?
> What happens if the pci_dev disappears when we do the pci_dev_put()
> here?

Hmmm, this is the thing I am most worried about. If we only use the
(pci_dev *) pointer in the drhd->devices array as an identification,
is replacing the (pci_dev *) pointer with the PCI device ID
segment:bus:devfn safe? Or is this a wrong way to fix this issue? I
don't know the IOMMU driver much yet, so any comments from the IOMMU
guys on this issue are welcome.

If this is not safe, what about saving both the PCI device ID and the
(pci_dev *) pointer in the drhd? Then we can put the pci_dev ref and
set the pci_dev * to NULL when the device is removed (via the bus
notifier), and update the (pci_dev *) pointer when a device is added.
Like this:

struct dmar_device {
	struct list_head list;
	u16 segment;
	u8 bus;
	u8 devfn;
	struct pci_dev *dev;
};

>>  	for_each_drhd_unit(drhd) {
>> -		int i;
>>  		if (drhd->ignored || drhd->include_all)
>>  			continue;
>>
>> -		for (i = 0; i < drhd->devices_cnt; i++)
>> -			if (drhd->devices[i] &&
>> -				!IS_GFX_DEVICE(drhd->devices[i]))
>> +		list_for_each_entry(dmar_dev, &drhd->head, list) {
>> +			pdev = pci_get_domain_bus_and_slot(dmar_dev->segment,
>> +				dmar_dev->bus, dmar_dev->devfn);
>> +			if (!IS_GFX_DEVICE(pdev)) {
>> +				pci_dev_put(pdev);
>> 				break;
>> +			}
>> +			pci_dev_put(pdev);
>> +		}
>>
>> -		if (i < drhd->devices_cnt)
>> +		if (!IS_GFX_DEVICE(pdev))
>
> I think this is clearly wrong. You acquire a pdev reference, drop the
> reference, then look at pdev again after dropping the reference. But
> as soon as you do the pci_dev_put(), you have to assume pdev is no
> longer valid.

You are right, the pci_dev_put() should move to after the if.
Re: [PATCH 1/1] IOMMU: Save pci device id instead of pci_dev* pointer for DMAR devices
On Tue, Nov 05, 2013 at 04:24:58PM +0800, Yijing Wang wrote:
> Currently, the DMAR driver saves the target PCI device pointers for
> drhd/rmrr/atsr in a (pci_dev *) array. This is not safe, because PCI
> devices may be hot added or removed while the system is running, and
> after a rescan they will have new pci_dev * pointers. So if there are
> two or more IOMMUs in the system, these devices will find a wrong drhd
> during DMA mapping, and DMAR faults will occur. This patch saves the
> PCI device ID instead of the (pci_dev *) to fix this issue, because
> the DMAR table only provides PCI device IDs under a specific IOMMU,
> so there is no reason to bind the IOMMU to the (pci_dev *). Also, a
> list is used to manage the device IDs for an IOMMU, so we can easily
> use the list helpers.
>
> After removing and rescanning a PCI device:
>
> [  611.857095] dmar: DRHD: handling fault status reg 2
> [  611.857109] dmar: DMAR:[DMA Read] Request device [86:00.3] fault addr 7000
> [  611.857109] DMAR:[fault reason 02] Present bit in context entry is clear
> [  611.857524] dmar: DRHD: handling fault status reg 102
> [  611.857534] dmar: DMAR:[DMA Read] Request device [86:00.3] fault addr 6000
> [  611.857534] DMAR:[fault reason 02] Present bit in context entry is clear
> [  611.857936] dmar: DRHD: handling fault status reg 202
> [  611.857947] dmar: DMAR:[DMA Read] Request device [86:00.3] fault addr 5000
> [  611.857947] DMAR:[fault reason 02] Present bit in context entry is clear
> [  611.858351] dmar: DRHD: handling fault status reg 302
> [  611.858362] dmar: DMAR:[DMA Read] Request device [86:00.3] fault addr 4000
> [  611.858362] DMAR:[fault reason 02] Present bit in context entry is clear
> [  611.860819] IPv6: ADDRCONF(NETDEV_UP): eth3: link is not ready
> [  611.860983] dmar: DRHD: handling fault status reg 402
> [  611.860995] dmar: INTR-REMAP: Request device [[86:00.3] fault index a4
> [  611.860995] INTR-REMAP:[fault reason 34] Present field in the IRTE entry is clear
>
> Signed-off-by: Yijing Wang
> ---
>  drivers/iommu/dmar.c        |   93 +-
>  drivers/iommu/intel-iommu.c |  155 ---
>  include/linux/dmar.h        |   20 --
>  3 files changed, 159 insertions(+), 109 deletions(-)
>
> diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
> index 785675a..9aa65a3 100644
> --- a/drivers/iommu/dmar.c
> +++ b/drivers/iommu/dmar.c
> @@ -65,12 +65,13 @@ static void __init dmar_register_drhd_unit(struct dmar_drhd_unit *drhd)
>  }
>
>  static int __init dmar_parse_one_dev_scope(struct acpi_dmar_device_scope *scope,
> -			struct pci_dev **dev, u16 segment)
> +			u16 segment, struct list_head *head)
>  {
>  	struct pci_bus *bus;
>  	struct pci_dev *pdev = NULL;
>  	struct acpi_dmar_pci_path *path;
>  	int count;
> +	struct dmar_device *dmar_dev;
>
>  	bus = pci_find_bus(segment, scope->bus);
>  	path = (struct acpi_dmar_pci_path *)(scope + 1);
> @@ -100,7 +101,6 @@ static int __init dmar_parse_one_dev_scope(struct acpi_dmar_device_scope *scope,
>  	if (!pdev) {
>  		pr_warn("Device scope device [%04x:%02x:%02x.%02x] not found\n",
>  			segment, scope->bus, path->dev, path->fn);
> -		*dev = NULL;
>  		return 0;
>  	}
>  	if ((scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT && \
> @@ -111,54 +111,39 @@ static int __init dmar_parse_one_dev_scope(struct acpi_dmar_device_scope *scope,
>  			pci_name(pdev));
>  		return -EINVAL;
>  	}
> -	*dev = pdev;
> +
> +	dmar_dev = kzalloc(sizeof(struct dmar_device), GFP_KERNEL);
> +	if (!dmar_dev) {
> +		pci_dev_put(pdev);
> +		return -ENOMEM;
> +	}
> +
> +	dmar_dev->segment = segment;
> +	dmar_dev->bus = pdev->bus->number;
> +	dmar_dev->devfn = pdev->devfn;
> +	list_add_tail(&dmar_dev->list, head);
> +
> +	pci_dev_put(pdev);
>  	return 0;
>  }
>
> -int __init dmar_parse_dev_scope(void *start, void *end, int *cnt,
> -			struct pci_dev ***devices, u16 segment)
> +int __init dmar_parse_dev_scope(void *start, void *end, u16 segment,
> +			struct list_head *head)
>  {
>  	struct acpi_dmar_device_scope *scope;
> -	void * tmp = start;
> -	int index;
>  	int ret;
>
> -	*cnt = 0;
> -	while (start < end) {
> -		scope = start;
> -		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT ||
> -			scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE)
> -			(*cnt)++;
> -		else if (scope->entry_type != ACPI_DMAR_SCOPE_TYPE_IOAPIC &&
> -			scope->entry_type != ACPI_DMAR_SCOPE_TYPE_HPET) {
> -			pr_warn("Unsupported device scope\n");
> -		}
> -		start += scope->length;
> -	}
> -	if (*cnt == 0)
> -
Re: [PATCH 1/1] IOMMU: Save pci device id instead of pci_dev* pointer for DMAR devices
On Tue, Nov 05, 2013 at 04:24:58PM +0800, Yijing Wang wrote:

[snip]
Re: [PATCH 1/1] IOMMU: Save pci device id instead of pci_dev* pointer for DMAR devices
Hi Bjorn, thanks for your review and comments very much!

>> +	list_for_each_entry(dmar_dev, head, list)
>> +		if (dmar_dev->segment == pci_domain_nr(dev->bus) &&
>> +		    dmar_dev->bus == dev->bus->number &&
>> +		    dmar_dev->devfn == dev->devfn)
>> +			return 1;
>> +
>>  	/* Check our parent */
>>  	dev = dev->bus->self;
>
> You didn't change this, but it looks like this may have the same
> problem we've been talking about here:
>
> http://lkml.kernel.org/r/20131105232903.3790.8738.st...@bhelgaas-glaptop.roam.corp.google.com
>
> Namely, if dev is a VF on a virtual bus, dev->bus->self == NULL, so we
> won't search for any of the bridges leading to the VF.  I proposed a
> pci_upstream_bridge() interface that could be used like this:
>
> 	/* Check our parent */
> 	dev = pci_upstream_bridge(dev);

It looks good to me. Since pci_upstream_bridge() is still in your next
branch, I think I can split this change into a separate patch after
3.13-rc1.

>>  static struct intel_iommu *device_to_iommu(int segment, u8 bus, u8 devfn)
>>  {
>>  	struct dmar_drhd_unit *drhd = NULL;
>> -	int i;
>> +	struct dmar_device *dmar_dev;
>> +	struct pci_dev *pdev;
>>
>>  	for_each_drhd_unit(drhd) {
>>  		if (drhd->ignored)
>> @@ -658,16 +659,22 @@ static struct intel_iommu *device_to_iommu(int segment, u8 bus, u8 devfn)
>>  		if (segment != drhd->segment)
>>  			continue;
>>
>> -		for (i = 0; i < drhd->devices_cnt; i++) {
>> -			if (drhd->devices[i] &&
>> -			    drhd->devices[i]->bus->number == bus &&
>> -			    drhd->devices[i]->devfn == devfn)
>> -				return drhd->iommu;
>> -			if (drhd->devices[i] &&
>> -			    drhd->devices[i]->subordinate &&
>> -			    drhd->devices[i]->subordinate->number <= bus &&
>> -			    drhd->devices[i]->subordinate->busn_res.end >= bus)
>> -				return drhd->iommu;
>> +		list_for_each_entry(dmar_dev, &drhd->head, list) {
>> +			if (dmar_dev->bus == bus &&
>> +			    dmar_dev->devfn == devfn)
>> +				return drhd->iommu;
>> +
>> +			pdev = pci_get_domain_bus_and_slot(dmar_dev->segment,
>> +					dmar_dev->bus, dmar_dev->devfn);
>> +			if (pdev->subordinate &&
>> +			    pdev->subordinate->number <= bus &&
>> +			    pdev->subordinate->busn_res.end >= bus) {
>> +				pci_dev_put(pdev);
>> +				return drhd->iommu;
>
> I don't know the details of how device_to_iommu() is used, but this
> style (acquire ref to pci_dev, match it to some other object, drop
> pci_dev ref, return object) makes me nervous.
>
> How do we know the caller isn't depending on pci_dev to remain
> attached to the object?  What happens if the pci_dev disappears when
> we do the pci_dev_put() here?

Hmmm, this is the thing I am most worried about. If we only use the
(pci_dev *) pointer in the drhd->devices array as an identification,
then replacing the (pci_dev *) pointer with the PCI device id
segment:bus:devfn is safe. Otherwise, this is the wrong way to fix this
issue. I don't know the IOMMU driver much yet, so any comments from the
IOMMU folks on this issue are welcome.

If this is not safe, what about saving both the PCI device id and the
(pci_dev *) pointer in the drhd? Then we can drop the pci_dev ref and
set the pci_dev pointer to NULL when the device is removed via the bus
notifier, and update the pointer when the device is added, like this:

struct dmar_device {
	struct list_head list;
	u16 segment;
	u8 bus;
	u8 devfn;
	struct pci_dev *dev;
};

>>  	for_each_drhd_unit(drhd) {
>> -		int i;
>>  		if (drhd->ignored || drhd->include_all)
>>  			continue;
>> -		for (i = 0; i < drhd->devices_cnt; i++)
>> -			if (drhd->devices[i] &&
>> -			    !IS_GFX_DEVICE(drhd->devices[i]))
>> +		list_for_each_entry(dmar_dev, &drhd->head, list) {
>> +			pdev = pci_get_domain_bus_and_slot(dmar_dev->segment,
>> +					dmar_dev->bus, dmar_dev->devfn);
>> +			if (!IS_GFX_DEVICE(pdev)) {
>> +				pci_dev_put(pdev);
>>  				break;
>> +			}
>> +			pci_dev_put(pdev);
>> +		}
>
>> -		if (i < drhd->devices_cnt)
>> +		if (!IS_GFX_DEVICE(pdev))
>
> I think this is clearly wrong.  You acquire a pdev reference, drop the
> reference, then look at pdev again after dropping the reference.  But
> as soon as you do the pci_dev_put(), you have to assume pdev is no
> longer valid.

You are right, pci_dev_put() should be moved to after
if (!IS_GFX_DEVICE(pdev)).

>> +struct dmar_device {
>> +	struct list_head list;
>> +	u8 segment;
>
> I think this should be u16.  I didn't chase down how you're using it,
> but Table 8.3 in the Intel VT-d spec shows Segment Number in
[PATCH 1/1] IOMMU: Save pci device id instead of pci_dev* pointer for DMAR devices
Currently, the DMAR driver saves the target pci device (pci_dev *)
pointers for drhd/rmrr/atsr in an array. This is not safe, because pci
devices may be hot-added or removed while the system is running, after
which they have new pci_dev * pointers. So if there are two or more
IOMMUs in the system, these devices will find a wrong drhd during DMA
mapping, and DMAR faults will occur. This patch saves the pci device id
instead of the (pci_dev *) to fix this issue, because the DMAR table
just provides pci device ids under a specific IOMMU, so there is no
reason to bind the IOMMU to the (pci_dev *). Also, a list is used here
to manage the device ids for an IOMMU, so we can easily use the list
helpers.

DMAR faults after removing and rescanning a pci device:

[  611.857095] dmar: DRHD: handling fault status reg 2
[  611.857109] dmar: DMAR:[DMA Read] Request device [86:00.3] fault addr 7000
[  611.857109] DMAR:[fault reason 02] Present bit in context entry is clear
[  611.857524] dmar: DRHD: handling fault status reg 102
[  611.857534] dmar: DMAR:[DMA Read] Request device [86:00.3] fault addr 6000
[  611.857534] DMAR:[fault reason 02] Present bit in context entry is clear
[  611.857936] dmar: DRHD: handling fault status reg 202
[  611.857947] dmar: DMAR:[DMA Read] Request device [86:00.3] fault addr 5000
[  611.857947] DMAR:[fault reason 02] Present bit in context entry is clear
[  611.858351] dmar: DRHD: handling fault status reg 302
[  611.858362] dmar: DMAR:[DMA Read] Request device [86:00.3] fault addr 4000
[  611.858362] DMAR:[fault reason 02] Present bit in context entry is clear
[  611.860819] IPv6: ADDRCONF(NETDEV_UP): eth3: link is not ready
[  611.860983] dmar: DRHD: handling fault status reg 402
[  611.860995] dmar: INTR-REMAP: Request device [[86:00.3] fault index a4
[  611.860995] INTR-REMAP:[fault reason 34] Present field in the IRTE entry is clear

Signed-off-by: Yijing Wang
---
 drivers/iommu/dmar.c        |   93 +-
 drivers/iommu/intel-iommu.c |  155 ---
 include/linux/dmar.h        |   20 --
 3 files changed, 159 insertions(+), 109 deletions(-)

diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index 785675a..9aa65a3 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -65,12 +65,13 @@ static void __init dmar_register_drhd_unit(struct dmar_drhd_unit *drhd)
 }

 static int __init dmar_parse_one_dev_scope(struct acpi_dmar_device_scope *scope,
-					   struct pci_dev **dev, u16 segment)
+					   u16 segment, struct list_head *head)
 {
 	struct pci_bus *bus;
 	struct pci_dev *pdev = NULL;
 	struct acpi_dmar_pci_path *path;
 	int count;
+	struct dmar_device *dmar_dev;

 	bus = pci_find_bus(segment, scope->bus);
 	path = (struct acpi_dmar_pci_path *)(scope + 1);
@@ -100,7 +101,6 @@ static int __init dmar_parse_one_dev_scope(struct acpi_dmar_device_scope *scope,
 	if (!pdev) {
 		pr_warn("Device scope device [%04x:%02x:%02x.%02x] not found\n",
 			segment, scope->bus, path->dev, path->fn);
-		*dev = NULL;
 		return 0;
 	}
 	if ((scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT && \
@@ -111,54 +111,39 @@ static int __init dmar_parse_one_dev_scope(struct acpi_dmar_device_scope *scope,
 			pci_name(pdev));
 		return -EINVAL;
 	}
-	*dev = pdev;
+
+	dmar_dev = kzalloc(sizeof(struct dmar_device), GFP_KERNEL);
+	if (!dmar_dev) {
+		pci_dev_put(pdev);
+		return -ENOMEM;
+	}
+
+	dmar_dev->segment = segment;
+	dmar_dev->bus = pdev->bus->number;
+	dmar_dev->devfn = pdev->devfn;
+	list_add_tail(&dmar_dev->list, head);
+
+	pci_dev_put(pdev);
 	return 0;
 }

-int __init dmar_parse_dev_scope(void *start, void *end, int *cnt,
-				struct pci_dev ***devices, u16 segment)
+int __init dmar_parse_dev_scope(void *start, void *end, u16 segment,
+				struct list_head *head)
 {
 	struct acpi_dmar_device_scope *scope;
-	void * tmp = start;
-	int index;
 	int ret;

-	*cnt = 0;
-	while (start < end) {
-		scope = start;
-		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT ||
-		    scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE)
-			(*cnt)++;
-		else if (scope->entry_type != ACPI_DMAR_SCOPE_TYPE_IOAPIC &&
-			scope->entry_type != ACPI_DMAR_SCOPE_TYPE_HPET) {
-			pr_warn("Unsupported device scope\n");
-		}
-		start += scope->length;
-	}
-	if (*cnt == 0)
-		return 0;
-
-	*devices = kcalloc(*cnt, sizeof(struct pci_dev *), GFP_KERNEL);
-	if (!*devices)
-		return -ENOMEM;
-
-	start = tmp;
-	index = 0;