Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Fri, May 13, 2022 at 10:25:48AM -0600, Alex Williamson wrote:
> On Fri, 13 May 2022 17:49:44 +0200
> Joerg Roedel wrote:
>
> > Hi Alex,
> >
> > On Wed, May 04, 2022 at 10:29:56AM -0600, Alex Williamson wrote:
> > > Done, and thanks for the heads-up. Please try to cc me when the
> > > vfio-notifier-fix branch is merged back into your next branch. Thanks,
> >
> > This has happened now, the vfio-notifier-fix branch got the fix and is
> > merged back into my next branch.
>
> Thanks, Joerg!
>
> Jason, I'll push a merge of this with
>
> Subject: [PATCH] vfio: Delete container_q
> 0-v1-a1e8791d795b+6b-vfio_container_q_...@nvidia.com
>
> and
>
> Subject: [PATCH v3 0/8] Remove vfio_group from the struct file facing VFIO API
> 0-v3-f7729924a7ea+25e33-vfio_kvm_no_group_...@nvidia.com
>
> as soon as my sanity build finishes. Thanks,

Thanks, I'll rebase and repost the remaining vfio series.

Jason

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Fri, 13 May 2022 17:49:44 +0200
Joerg Roedel wrote:

> Hi Alex,
>
> On Wed, May 04, 2022 at 10:29:56AM -0600, Alex Williamson wrote:
> > Done, and thanks for the heads-up. Please try to cc me when the
> > vfio-notifier-fix branch is merged back into your next branch. Thanks,
>
> This has happened now, the vfio-notifier-fix branch got the fix and is
> merged back into my next branch.

Thanks, Joerg!

Jason, I'll push a merge of this with

Subject: [PATCH] vfio: Delete container_q
0-v1-a1e8791d795b+6b-vfio_container_q_...@nvidia.com

and

Subject: [PATCH v3 0/8] Remove vfio_group from the struct file facing VFIO API
0-v3-f7729924a7ea+25e33-vfio_kvm_no_group_...@nvidia.com

as soon as my sanity build finishes. Thanks,

Alex
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
Hi Alex,

On Wed, May 04, 2022 at 10:29:56AM -0600, Alex Williamson wrote:
> Done, and thanks for the heads-up. Please try to cc me when the
> vfio-notifier-fix branch is merged back into your next branch. Thanks,

This has happened now, the vfio-notifier-fix branch got the fix and is
merged back into my next branch.

Regards,

	Joerg
RE: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
> From: Jason Gunthorpe
> Sent: Tuesday, May 10, 2022 2:33 AM
>
> On Wed, May 04, 2022 at 01:57:05PM +0200, Joerg Roedel wrote:
> > On Wed, May 04, 2022 at 08:51:35AM -0300, Jason Gunthorpe wrote:
> > > Nicolin and Eric have been testing with this series on ARM for a long
> > > time now, it is not like it is completely broken.
> >
> > Yeah, I am also optimistic this can be fixed soon. But the rule is that
> > the next branch should only contain patches which I would send to Linus.
> > And with a known issue in it I wouldn't, so it is excluded at least from
> > my next branch for now. The topic branch is still alive and I will merge
> > it again when the fix is in.
>
> The fix is out, let's merge it back in so we can have some more time to
> discover any additional issues. People seem to test when it is in your
> branch.

Joerg, any chance you could give it priority? This is the first step of a
long refactoring effort, and it has been gating quite a few well-reviewed
improvements down the road. Having it tested earlier in your branch is
definitely appreciated.

Thanks,
Kevin
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Wed, May 04, 2022 at 01:57:05PM +0200, Joerg Roedel wrote:
> On Wed, May 04, 2022 at 08:51:35AM -0300, Jason Gunthorpe wrote:
> > Nicolin and Eric have been testing with this series on ARM for a long
> > time now, it is not like it is completely broken.
>
> Yeah, I am also optimistic this can be fixed soon. But the rule is that
> the next branch should only contain patches which I would send to Linus.
> And with a known issue in it I wouldn't, so it is excluded at least from
> my next branch for now. The topic branch is still alive and I will merge
> it again when the fix is in.

The fix is out, let's merge it back in so we can have some more time to
discover any additional issues. People seem to test when it is in your
branch.

Thanks,
Jason
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Wed, 4 May 2022 10:42:07 +0200
Joerg Roedel wrote:

> On Mon, May 02, 2022 at 12:12:04PM -0400, Qian Cai wrote:
> > Reverting this series fixed a use-after-free while doing SR-IOV.
> >
> > BUG: KASAN: use-after-free in __lock_acquire
>
> Hrm, okay. I am going to exclude this series from my next branch for now
> until this has been sorted out.
>
> Alex, I suggest you do the same.

Done, and thanks for the heads-up. Please try to cc me when the
vfio-notifier-fix branch is merged back into your next branch. Thanks,

Alex
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Wed, May 04, 2022 at 08:51:35AM -0300, Jason Gunthorpe wrote:
> Nicolin and Eric have been testing with this series on ARM for a long
> time now, it is not like it is completely broken.

Yeah, I am also optimistic this can be fixed soon. But the rule is that
the next branch should only contain patches which I would send to Linus.
And with a known issue in it I wouldn't, so it is excluded at least from
my next branch for now. The topic branch is still alive and I will merge
it again when the fix is in.

Regards,

	Joerg
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Wed, May 04, 2022 at 10:42:07AM +0200, Joerg Roedel wrote:
> On Mon, May 02, 2022 at 12:12:04PM -0400, Qian Cai wrote:
> > Reverting this series fixed a use-after-free while doing SR-IOV.
> >
> > BUG: KASAN: use-after-free in __lock_acquire
>
> Hrm, okay. I am going to exclude this series from my next branch for now
> until this has been sorted out.

This is going to blow up everything going on in vfio right now; let's not
do something so drastic, please. There is already a patch to fix it, let's
wait for it to get sorted out.

Nicolin and Eric have been testing with this series on ARM for a long
time now, it is not like it is completely broken.

Thanks,
Jason
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Mon, May 02, 2022 at 12:12:04PM -0400, Qian Cai wrote:
> Reverting this series fixed a use-after-free while doing SR-IOV.
>
> BUG: KASAN: use-after-free in __lock_acquire

Hrm, okay. I am going to exclude this series from my next branch for now
until this has been sorted out.

Alex, I suggest you do the same.

Regards,

	Joerg
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On 2022-05-03 16:23, Jason Gunthorpe wrote:
> On Tue, May 03, 2022 at 02:04:37PM +0100, Robin Murphy wrote:
> > > I'm guessing SMMUv3 needs to call its arm_smmu_detach_dev(master) from
> > > the detach_dev op and null its cached copy of the domain, but I don't
> > > know this driver.. Robin?
> >
> > The original intent was that .detach_dev is deprecated in favour of
> > default domains, and when the latter are in use, a device is always
> > attached *somewhere* once probed (i.e. group->domain is never NULL). At
> > face value, the neatest fix IMO would probably be for SMMUv3's
> > .domain_free to handle smmu_domain->devices being non-empty and detach
> > them at that point. However that wouldn't be viable for virtio-iommu or
> > anyone else keeping an internal one-way association of devices to their
> > current domains.
>
> Oh wow, that is not obvious.
>
> Actually, I think it is much worse than this, because
> iommu_group_claim_dma_owner() does a __iommu_detach_group() with the
> expectation that this would actually result in DMA being blocked,
> immediately. The idea that __iommu_detach_group() is a NOP is kind of
> scary.

Scarier than the fact that even where it *is* implemented, .detach_dev has
never had a well-defined behaviour either, and plenty of drivers treat it
as a "remove the IOMMU from the picture altogether" operation which ends
up with the device in bypass rather than blocked?

> Leaving the group attached to the kernel DMA domain will allow
> userspace to DMA to all kernel memory :\

Note that a fair amount of IOMMU hardware only has two states, thus could
only actually achieve a blocking behaviour by enabling translation with an
empty pagetable anyway. (Trivia: and technically some of them aren't even
capable of blocking invalid accesses *when* translating - they can only
apply a "default" translation targeting some scratch page.)

> So one approach could be to block use of iommu_group_claim_dma_owner()
> if no detach_dev op is present and then go through and put them back or
> do something else. This could be short-term OK if we add an op to
> SMMUv3, but long term everything would have to be fixed.
>
> Or we can allocate a dummy empty/blocked domain during
> iommu_group_claim_dma_owner() and attach it whenever.

How does the compile-tested diff below seem? There's a fair chance it's
still broken, but I don't have the bandwidth to give it much more thought
right now.

Cheers,
Robin.

->8-
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 29906bc16371..597d70ed7007 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -45,6 +45,7 @@ struct iommu_group {
 	int id;
 	struct iommu_domain *default_domain;
 	struct iommu_domain *domain;
+	struct iommu_domain *purgatory;
 	struct list_head entry;
 	unsigned int owner_cnt;
 	void *owner;
@@ -596,6 +597,8 @@ static void iommu_group_release(struct kobject *kobj)
 
 	if (group->default_domain)
 		iommu_domain_free(group->default_domain);
+	if (group->purgatory)
+		iommu_domain_free(group->purgatory);
 
 	kfree(group->name);
 	kfree(group);
@@ -2041,6 +2044,12 @@ struct iommu_domain *iommu_get_dma_domain(struct device *dev)
 	return dev->iommu_group->default_domain;
 }
 
+static bool iommu_group_user_attached(struct iommu_group *group)
+{
+	return group->domain && group->domain != group->default_domain &&
+	       group->domain != group->purgatory;
+}
+
 /*
  * IOMMU groups are really the natural working unit of the IOMMU, but
  * the IOMMU API works on domains and devices. Bridge that gap by
@@ -2063,7 +2072,7 @@ static int __iommu_attach_group(struct iommu_domain *domain,
 {
 	int ret;
 
-	if (group->domain && group->domain != group->default_domain)
+	if (iommu_group_user_attached(group))
 		return -EBUSY;
 
 	ret = __iommu_group_for_each_dev(group, domain,
@@ -2104,7 +2113,12 @@ static void __iommu_detach_group(struct iommu_domain *domain,
 	 * If the group has been claimed already, do not re-attach the default
 	 * domain.
 	 */
-	if (!group->default_domain || group->owner) {
+	if (group->owner) {
+		WARN_ON(__iommu_attach_group(group->purgatory, group));
+		return;
+	}
+
+	if (!group->default_domain) {
 		__iommu_group_for_each_dev(group, domain,
 					   iommu_group_do_detach_device);
 		group->domain = NULL;
@@ -3111,6 +3125,25 @@ void iommu_device_unuse_default_domain(struct device *dev)
 	iommu_group_put(group);
 }
 
+static struct iommu_domain *iommu_group_get_purgatory(struct iommu_group *group)
+{
+	struct group_device *dev;
+
+	mutex_lock(&group->mutex);
+	if (group->purgatory)
+		goto out;
+
+	dev = list_first_entry(&group->devices, struct group_device, list);
+	group->purgatory = __iommu_domain_alloc(dev->dev->bus,
+						IOMMU_DOMAIN_BLOCKED);
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Tue, May 03, 2022 at 02:04:37PM +0100, Robin Murphy wrote:
> > I'm guessing SMMUv3 needs to call its arm_smmu_detach_dev(master) from
> > the detach_dev op and null its cached copy of the domain, but I don't
> > know this driver.. Robin?
>
> The original intent was that .detach_dev is deprecated in favour of
> default domains, and when the latter are in use, a device is always
> attached *somewhere* once probed (i.e. group->domain is never NULL). At
> face value, the neatest fix IMO would probably be for SMMUv3's
> .domain_free to handle smmu_domain->devices being non-empty and detach
> them at that point. However that wouldn't be viable for virtio-iommu or
> anyone else keeping an internal one-way association of devices to their
> current domains.

Oh wow, that is not obvious.

Actually, I think it is much worse than this, because
iommu_group_claim_dma_owner() does a __iommu_detach_group() with the
expectation that this would actually result in DMA being blocked,
immediately. The idea that __iommu_detach_group() is a NOP is kind of
scary.

Leaving the group attached to the kernel DMA domain will allow
userspace to DMA to all kernel memory :\

So one approach could be to block use of iommu_group_claim_dma_owner()
if no detach_dev op is present and then go through and put them back or
do something else. This could be short-term OK if we add an op to
SMMUv3, but long term everything would have to be fixed.

Or we can allocate a dummy empty/blocked domain during
iommu_group_claim_dma_owner() and attach it whenever.

The really ugly trick is that detach cannot fail, so attach to this
blocking domain must also not fail - IMHO this is a very complicated API
to expect the driver to implement correctly... I see there is already a
WARN_ON that attaching to the default domain cannot fail. Maybe this
warrants an actual no-fail attach op so the driver can be more aware of
this..

And some of these internal APIs could stand some adjusting if we really
never want a true "detach" - it is always some kind of replace/swap type
operation, either to the default domain or to the blocking domain.

> We *could* stay true to the original paradigm by introducing some real
> usage of IOMMU_DOMAIN_BLOCKED, such that we could keep one or more of
> those around to actively attach to instead of having groups in this
> unattached limbo state, but that's a bigger job involving adding support
> to drivers as well; too much for a quick fix now...

I suspect for the short term we can get by with an empty mapping domain -
using DOMAIN_BLOCKED is a bit of a refinement.

Thanks,
Jason
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On 2022-05-02 17:42, Jason Gunthorpe wrote:
> On Mon, May 02, 2022 at 12:12:04PM -0400, Qian Cai wrote:
> > On Mon, Apr 18, 2022 at 08:49:49AM +0800, Lu Baolu wrote:
> > > Hi Joerg,
> > >
> > > This is a resend version of v8 posted here:
> > > https://lore.kernel.org/linux-iommu/20220308054421.847385-1-baolu...@linux.intel.com/
> > > as we discussed in this thread:
> > > https://lore.kernel.org/linux-iommu/yk%2fq1bgn8pc5h...@8bytes.org/
> > >
> > > All patches can be applied perfectly except this one:
> > > - [PATCH v8 02/11] driver core: Add dma_cleanup callback in bus_type
> > > It conflicts with the refactoring commit below:
> > > - 4b775aaf1ea99 "driver core: Refactor sysfs and drv/bus remove hooks"
> > > The conflict has been fixed in this post.
> > >
> > > No functional changes in this series. I suppress cc-ing this series to
> > > all v8 reviewers in order to avoid spam.
> > >
> > > Please consider it for your iommu tree.
> >
> > Reverting this series fixed a use-after-free while doing SR-IOV.
> >
> > BUG: KASAN: use-after-free in __lock_acquire
> > Read of size 8 at addr 080279825d78 by task qemu-system-aar/22429
> > CPU: 24 PID: 22429 Comm: qemu-system-aar Not tainted 5.18.0-rc5-next-20220502 #69
> > Call trace:
> >  dump_backtrace
> >  show_stack
> >  dump_stack_lvl
> >  print_address_description.constprop.0
> >  print_report
> >  kasan_report
> >  __asan_report_load8_noabort
> >  __lock_acquire
> >  lock_acquire.part.0
> >  lock_acquire
> >  _raw_spin_lock_irqsave
> >  arm_smmu_detach_dev
> >  arm_smmu_detach_dev at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:2377
> >  arm_smmu_attach_dev
>
> Hum. So what has happened is that VFIO does this sequence:
>
>  iommu_detach_group()
>  iommu_domain_free()
>  iommu_group_release_dma_owner()
>
> Which, I think, should be valid, API-wise.
>
> From what I can see reading the code, SMMUv3 blows up above because it
> doesn't have a detach_dev op:
>
>  .default_domain_ops = &(const struct iommu_domain_ops) {
>  	.attach_dev		= arm_smmu_attach_dev,
>  	.map_pages		= arm_smmu_map_pages,
>  	.unmap_pages		= arm_smmu_unmap_pages,
>  	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
>  	.iotlb_sync		= arm_smmu_iotlb_sync,
>  	.iova_to_phys		= arm_smmu_iova_to_phys,
>  	.enable_nesting		= arm_smmu_enable_nesting,
>  	.free			= arm_smmu_domain_free,
>  }
>
> But it is internally tracking the domain inside the master - so when the
> next domain is attached it does this:
>
>  static void arm_smmu_detach_dev(struct arm_smmu_master *master)
>  {
>  	struct arm_smmu_domain *smmu_domain = master->domain;
>
>  	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
>
> And explodes as the domain has been freed but master->domain was not
> NULL'd.
>
> It worked before because iommu_detach_group() used to attach the default
> domain, and that happened before the domain was freed in the above
> sequence.

Oof, I totally overlooked the significance of that little subtlety in
review :(

> I'm guessing SMMUv3 needs to call its arm_smmu_detach_dev(master) from
> the detach_dev op and null its cached copy of the domain, but I don't
> know this driver.. Robin?

The original intent was that .detach_dev is deprecated in favour of
default domains, and when the latter are in use, a device is always
attached *somewhere* once probed (i.e. group->domain is never NULL). At
face value, the neatest fix IMO would probably be for SMMUv3's
.domain_free to handle smmu_domain->devices being non-empty and detach
them at that point. However that wouldn't be viable for virtio-iommu or
anyone else keeping an internal one-way association of devices to their
current domains.

If we're giving up entirely on that notion of .detach_dev going away then
all default-domain-supporting drivers probably want checking to make sure
that path hasn't bitrotted; both Arm SMMU drivers had it proactively
removed 6 years ago; virtio-iommu never had it at all; newer drivers like
apple-dart have some code there, but it won't have ever run until now.

We *could* stay true to the original paradigm by introducing some real
usage of IOMMU_DOMAIN_BLOCKED, such that we could keep one or more of
those around to actively attach to instead of having groups in this
unattached limbo state, but that's a bigger job involving adding support
to drivers as well; too much for a quick fix now...

Robin.
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Mon, May 02, 2022 at 12:12:04PM -0400, Qian Cai wrote:
> On Mon, Apr 18, 2022 at 08:49:49AM +0800, Lu Baolu wrote:
> > Hi Joerg,
> >
> > This is a resend version of v8 posted here:
> > https://lore.kernel.org/linux-iommu/20220308054421.847385-1-baolu...@linux.intel.com/
> > as we discussed in this thread:
> > https://lore.kernel.org/linux-iommu/yk%2fq1bgn8pc5h...@8bytes.org/
> >
> > All patches can be applied perfectly except this one:
> > - [PATCH v8 02/11] driver core: Add dma_cleanup callback in bus_type
> > It conflicts with the refactoring commit below:
> > - 4b775aaf1ea99 "driver core: Refactor sysfs and drv/bus remove hooks"
> > The conflict has been fixed in this post.
> >
> > No functional changes in this series. I suppress cc-ing this series to
> > all v8 reviewers in order to avoid spam.
> >
> > Please consider it for your iommu tree.
>
> Reverting this series fixed a use-after-free while doing SR-IOV.
>
> BUG: KASAN: use-after-free in __lock_acquire
> Read of size 8 at addr 080279825d78 by task qemu-system-aar/22429
> CPU: 24 PID: 22429 Comm: qemu-system-aar Not tainted 5.18.0-rc5-next-20220502 #69
> Call trace:
>  dump_backtrace
>  show_stack
>  dump_stack_lvl
>  print_address_description.constprop.0
>  print_report
>  kasan_report
>  __asan_report_load8_noabort
>  __lock_acquire
>  lock_acquire.part.0
>  lock_acquire
>  _raw_spin_lock_irqsave
>  arm_smmu_detach_dev
>  arm_smmu_detach_dev at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:2377
>  arm_smmu_attach_dev

Hum. So what has happened is that VFIO does this sequence:

 iommu_detach_group()
 iommu_domain_free()
 iommu_group_release_dma_owner()

Which, I think, should be valid, API-wise.

From what I can see reading the code, SMMUv3 blows up above because it
doesn't have a detach_dev op:

 .default_domain_ops = &(const struct iommu_domain_ops) {
 	.attach_dev		= arm_smmu_attach_dev,
 	.map_pages		= arm_smmu_map_pages,
 	.unmap_pages		= arm_smmu_unmap_pages,
 	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
 	.iotlb_sync		= arm_smmu_iotlb_sync,
 	.iova_to_phys		= arm_smmu_iova_to_phys,
 	.enable_nesting		= arm_smmu_enable_nesting,
 	.free			= arm_smmu_domain_free,
 }

But it is internally tracking the domain inside the master - so when the
next domain is attached it does this:

 static void arm_smmu_detach_dev(struct arm_smmu_master *master)
 {
 	struct arm_smmu_domain *smmu_domain = master->domain;

 	spin_lock_irqsave(&smmu_domain->devices_lock, flags);

And explodes as the domain has been freed but master->domain was not
NULL'd.

It worked before because iommu_detach_group() used to attach the default
domain, and that happened before the domain was freed in the above
sequence.

I'm guessing SMMUv3 needs to call its arm_smmu_detach_dev(master) from
the detach_dev op and null its cached copy of the domain, but I don't
know this driver.. Robin?

Thanks,
Jason
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Mon, Apr 18, 2022 at 08:49:49AM +0800, Lu Baolu wrote:
> Hi Joerg,
>
> This is a resend version of v8 posted here:
> https://lore.kernel.org/linux-iommu/20220308054421.847385-1-baolu...@linux.intel.com/
> as we discussed in this thread:
> https://lore.kernel.org/linux-iommu/yk%2fq1bgn8pc5h...@8bytes.org/
>
> All patches can be applied perfectly except this one:
> - [PATCH v8 02/11] driver core: Add dma_cleanup callback in bus_type
> It conflicts with the refactoring commit below:
> - 4b775aaf1ea99 "driver core: Refactor sysfs and drv/bus remove hooks"
> The conflict has been fixed in this post.
>
> No functional changes in this series. I suppress cc-ing this series to
> all v8 reviewers in order to avoid spam.
>
> Please consider it for your iommu tree.

Reverting this series fixed a use-after-free while doing SR-IOV.

BUG: KASAN: use-after-free in __lock_acquire
Read of size 8 at addr 080279825d78 by task qemu-system-aar/22429
CPU: 24 PID: 22429 Comm: qemu-system-aar Not tainted 5.18.0-rc5-next-20220502 #69
Call trace:
 dump_backtrace
 show_stack
 dump_stack_lvl
 print_address_description.constprop.0
 print_report
 kasan_report
 __asan_report_load8_noabort
 __lock_acquire
 lock_acquire.part.0
 lock_acquire
 _raw_spin_lock_irqsave
 arm_smmu_detach_dev
 arm_smmu_detach_dev at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:2377
 arm_smmu_attach_dev
 __iommu_attach_group
 __iommu_attach_device at drivers/iommu/iommu.c:1942
 (inlined by) iommu_group_do_attach_device at drivers/iommu/iommu.c:2058
 (inlined by) __iommu_group_for_each_dev at drivers/iommu/iommu.c:989
 (inlined by) __iommu_attach_group at drivers/iommu/iommu.c:2069
 iommu_group_release_dma_owner
 __vfio_group_unset_container
 vfio_group_try_dissolve_container
 vfio_group_put_external_user
 kvm_vfio_destroy
 kvm_destroy_vm
 kvm_vm_release
 __fput
 fput
 task_work_run
 do_exit
 do_group_exit
 get_signal
 do_signal
 do_notify_resume
 el0_svc
 el0t_64_sync_handler
 el0t_64_sync

Allocated by task 22427:
 kasan_save_stack
 __kasan_kmalloc
 kmem_cache_alloc_trace
 arm_smmu_domain_alloc
 iommu_domain_alloc
 vfio_iommu_type1_attach_group
 vfio_ioctl_set_iommu
 vfio_fops_unl_ioctl
 __arm64_sys_ioctl
 invoke_syscall
 el0_svc_common.constprop.0
 do_el0_svc
 el0_svc
 el0t_64_sync_handler
 el0t_64_sync

Freed by task 22429:
 kasan_save_stack
 kasan_set_track
 kasan_set_free_info
 kasan_slab_free
 __kasan_slab_free
 slab_free_freelist_hook
 kfree
 arm_smmu_domain_free
 arm_smmu_domain_free at iommu/arm/arm-smmu-v3/arm-smmu-v3.c:2067
 iommu_domain_free
 vfio_iommu_type1_detach_group
 __vfio_group_unset_container
 vfio_group_try_dissolve_container
 vfio_group_put_external_user
 kvm_vfio_destroy
 kvm_destroy_vm
 kvm_vm_release
 __fput
 fput
 task_work_run
 do_exit
 do_group_exit
 get_signal
 do_signal
 do_notify_resume
 el0_svc
 el0t_64_sync_handler
 el0t_64_sync
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Thu, Apr 28, 2022 at 08:54:11AM -0300, Jason Gunthorpe wrote:
> Can we get this on a topic branch so Alex can pull it? There are
> conflicts with other VFIO patches.

Right, we already discussed this. I have moved the patches to a separate
topic branch. It will appear as 'vfio-notifier-fix' once I have pushed
the changes.

Regards,

	Joerg
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Thu, Apr 28, 2022 at 11:32:04AM +0200, Joerg Roedel wrote:
> On Mon, Apr 18, 2022 at 08:49:49AM +0800, Lu Baolu wrote:
> > Lu Baolu (10):
> >   iommu: Add DMA ownership management interfaces
> >   driver core: Add dma_cleanup callback in bus_type
> >   amba: Stop sharing platform_dma_configure()
> >   bus: platform,amba,fsl-mc,PCI: Add device DMA ownership management
> >   PCI: pci_stub: Set driver_managed_dma
> >   PCI: portdrv: Set driver_managed_dma
> >   vfio: Set DMA ownership for VFIO devices
> >   vfio: Remove use of vfio_group_viable()
> >   vfio: Remove iommu group notifier
> >   iommu: Remove iommu group changes notifier
>
> Applied to core branch, thanks Baolu.

Can we get this on a topic branch so Alex can pull it? There are
conflicts with other VFIO patches.

Thanks!
Jason
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Mon, Apr 18, 2022 at 08:49:49AM +0800, Lu Baolu wrote:
> Lu Baolu (10):
>   iommu: Add DMA ownership management interfaces
>   driver core: Add dma_cleanup callback in bus_type
>   amba: Stop sharing platform_dma_configure()
>   bus: platform,amba,fsl-mc,PCI: Add device DMA ownership management
>   PCI: pci_stub: Set driver_managed_dma
>   PCI: portdrv: Set driver_managed_dma
>   vfio: Set DMA ownership for VFIO devices
>   vfio: Remove use of vfio_group_viable()
>   vfio: Remove iommu group notifier
>   iommu: Remove iommu group changes notifier

Applied to core branch, thanks Baolu.
[RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
Hi Joerg,

This is a resend version of v8 posted here:
https://lore.kernel.org/linux-iommu/20220308054421.847385-1-baolu...@linux.intel.com/
as we discussed in this thread:
https://lore.kernel.org/linux-iommu/yk%2fq1bgn8pc5h...@8bytes.org/

All patches can be applied perfectly except this one:
- [PATCH v8 02/11] driver core: Add dma_cleanup callback in bus_type
It conflicts with the refactoring commit below:
- 4b775aaf1ea99 "driver core: Refactor sysfs and drv/bus remove hooks"
The conflict has been fixed in this post.

No functional changes in this series. I suppress cc-ing this series to
all v8 reviewers in order to avoid spam.

Please consider it for your iommu tree.

Best regards,
baolu

Change log:
- v8 and before:
  - Please refer to the v8 post for all the change history.
- v8-resend:
  - Rebase the series on top of v5.18-rc3.
  - Add Reviewed-by's granted by Robin.
  - Add a Tested-by granted by Eric.

Jason Gunthorpe (1):
  vfio: Delete the unbound_list

Lu Baolu (10):
  iommu: Add DMA ownership management interfaces
  driver core: Add dma_cleanup callback in bus_type
  amba: Stop sharing platform_dma_configure()
  bus: platform,amba,fsl-mc,PCI: Add device DMA ownership management
  PCI: pci_stub: Set driver_managed_dma
  PCI: portdrv: Set driver_managed_dma
  vfio: Set DMA ownership for VFIO devices
  vfio: Remove use of vfio_group_viable()
  vfio: Remove iommu group notifier
  iommu: Remove iommu group changes notifier

 include/linux/amba/bus.h              |   8 +
 include/linux/device/bus.h            |   3 +
 include/linux/fsl/mc.h                |   8 +
 include/linux/iommu.h                 |  54 +++---
 include/linux/pci.h                   |   8 +
 include/linux/platform_device.h       |  10 +-
 drivers/amba/bus.c                    |  37 +++-
 drivers/base/dd.c                     |   5 +
 drivers/base/platform.c               |  21 ++-
 drivers/bus/fsl-mc/fsl-mc-bus.c       |  24 ++-
 drivers/iommu/iommu.c                 | 228 
 drivers/pci/pci-driver.c              |  18 ++
 drivers/pci/pci-stub.c                |   1 +
 drivers/pci/pcie/portdrv_pci.c        |   2 +
 drivers/vfio/fsl-mc/vfio_fsl_mc.c     |   1 +
 drivers/vfio/pci/vfio_pci.c           |   1 +
 drivers/vfio/platform/vfio_amba.c     |   1 +
 drivers/vfio/platform/vfio_platform.c |   1 +
 drivers/vfio/vfio.c                   | 245 ++
 19 files changed, 338 insertions(+), 338 deletions(-)

-- 
2.25.1
Re: [PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Fri, Apr 08, 2022 at 10:07:50AM -0600, Alex Williamson wrote:
> On Fri, 8 Apr 2022 10:59:22 -0500
> Bjorn Helgaas wrote:
>
> > On Fri, Apr 08, 2022 at 05:37:16PM +0200, Joerg Roedel wrote:
> > > On Fri, Apr 08, 2022 at 11:17:47AM -0300, Jason Gunthorpe wrote:
> > > > You might consider using a linear tree instead of the topic branches,
> > > > topics are tricky and I'm not sure it helps a small subsystem so much.
> > > > Conflicts between topics are a PITA for everyone, and it makes
> > > > handling conflicts with rc much harder than it needs to be.
> > >
> > > I like the concept of a branch per driver, because with that I can just
> > > exclude that branch from my next-merge when there are issues with it.
> > > Conflicts between branches happen too, but they are quite manageable
> > > when the branches have the same base.
> >
> > FWIW, I use the same topic branch approach for PCI. I like the
> > ability to squash in fixes or drop things without having to clutter
> > the history with trivial commits and reverts. I haven't found
> > conflicts to be a problem.
>
> Same. I think I've generally modeled my branch handling after Bjorn
> and Joerg. I don't always use topic branches, but will for larger
> contributions, and I don't generally find conflicts to be a problem.
> I'm always open to adopting best practices though. Thanks,

I don't know about best practices, but I see most maintainers fall
somewhere on a continuum between how Andrew Morton works and how David
Miller/Linus work.

Andrew's model is to try and send patches that are perfect, and he
manipulates his queue continually. It is never quite clear what will get
merged until Linus actually merges it.

The David/Linus model is that git is immutable and they only move
forward. Never rebase, never manipulate an applied patch.

Andrew has significantly reined in how much he manipulates his queue -
mostly due to pressure from Linus. Some of the remarks on this topic over
the last year are pretty informative. So I would say changing patches
once applied, or any rebasing, is now being seen as not best practice.
(Indeed, if Linus catches it and a mistake was made, you are likely to
get a sharp email.)

Why I made the note is that, at least in my flow, I would not trade two
weeks of accepting patches for topics. I'll probably have 20-30 patches
merged already before rc3 comes out. I never have problems merging rcs,
because when an rc conflicts with the next branch I have only one branch
and can just merge the rc and resolve the conflict, or merge the rc
before applying a patch that would create a conflict in the first place.
Linus has given some guidance on when/how he prefers to see those merges
done.

Though I tend to advocate for a philosophy more like DaveM's: the
maintainer role is to hurry up and accept good patches - to emphasize
flow as a way to build involvement and community. That is not necessarily
an entirely appropriate approach in some of the more critical subsystems
like mm/pci - if they are broken in a way that impacts a large number of
people at rc1 then it can cause a lot of impact. For instance, our QA
team is always panicked if rc1 doesn't work on our test environments.

Cheers,
Jason
Re: [PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Fri, 8 Apr 2022 10:59:22 -0500 Bjorn Helgaas wrote: > On Fri, Apr 08, 2022 at 05:37:16PM +0200, Joerg Roedel wrote: > > On Fri, Apr 08, 2022 at 11:17:47AM -0300, Jason Gunthorpe wrote: > > > You might consider using a linear tree instead of the topic branches, > > > topics are tricky and I'm not sure it helps a small subsystem so much. > > > Conflicts between topics are a PITA for everyone, and it makes > > > handling conflicts with rc much harder than it needs to be. > > > > I like the concept of a branch per driver, because with that I can just > > exclude that branch from my next-merge when there are issues with it. > > Conflicts between branches happen too, but they are quite manageable > > when the branches have the same base. > > FWIW, I use the same topic branch approach for PCI. I like the > ability to squash in fixes or drop things without having to clutter > the history with trivial commits and reverts. I haven't found > conflicts to be a problem. Same. I think I've generally modeled my branch handling after Bjorn and Joerg, I don't always use topic branches, but will for larger contributions and I don't generally find conflicts to be a problem. I'm always open to adopting best practices though. Thanks, Alex
Re: [PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Fri, Apr 08, 2022 at 05:37:16PM +0200, Joerg Roedel wrote: > On Fri, Apr 08, 2022 at 11:17:47AM -0300, Jason Gunthorpe wrote: > > You might consider using a linear tree instead of the topic branches, > > topics are tricky and I'm not sure it helps a small subsystem so much. > > Conflicts between topics are a PITA for everyone, and it makes > > handling conflicts with rc much harder than it needs to be. > I like the concept of a branch per driver, because with that I can just > exclude that branch from my next-merge when there are issues with it. > Conflicts between branches happen too, but they are quite manageable > when the branches have the same base. FWIW, I use the same topic branch approach for PCI. I like the ability to squash in fixes or drop things without having to clutter the history with trivial commits and reverts. I haven't found conflicts to be a problem. Bjorn
Re: [PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Fri, Apr 08, 2022 at 11:17:47AM -0300, Jason Gunthorpe wrote: > You might consider using a linear tree instead of the topic branches, > topics are tricky and I'm not sure it helps a small subsystem so much. > Conflicts between topics are a PITA for everyone, and it makes > handling conflicts with rc much harder than it needs to be. I like the concept of a branch per driver, because with that I can just exclude that branch from my next-merge when there are issues with it. Conflicts between branches happen too, but they are quite manageable when the branches have the same base. Overall I am thinking of reorganizing the IOMMU tree, but it will likely not end up being a single-branch tree, although the number of patches per cycle _could_ just be carried in a single branch. > At least I haven't felt a need for topics while running larger trees, > and would find it stressful to try and squeeze the entire patch flow > into only 3 weeks out of the 7 week cycle. Yeah, so it is 4 weeks in a 9 week cycle :) The merge window is 2 weeks and not a lot happens. The 2 weeks after are for stabilization and I usually only pick up fixes. Then come the 4 weeks where new code gets into the tree. In the last week everything gets tested in linux-next to be ready for the merge window. I will pick up fixes in that week, of course. > In any event, I'd like this on a branch so Alex can pull it too, I > guess it means Alex has to merge rc3 to VFIO as well? Sure, I can put these patches in a separate branch for Alex to pull into the VFIO tree. Regards, Joerg
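[Editorial note: the topic-branch model Joerg describes - every topic cut from the same -rc, with a problem branch simply excluded from the next-merge - can be sketched the same way. Again, a throwaway repository with made-up branch and tag names, not Joerg's actual tree.]

```shell
#!/bin/sh
# Sketch of the topic-branch model: per-driver branches share one -rc
# base, and a problematic branch is simply left out of the next-merge.
# All repository, branch and tag names are illustrative.
set -e
work=$(mktemp -d) && cd "$work"
git init -q -b mainline tree && cd tree
git config user.email demo@example.com && git config user.name Demo

echo 'base' > core.c
git add core.c && git commit -qm 'demo -rc3 snapshot' && git tag demo-rc3

# One branch per driver, all cut from the same -rc tag, so cross-merges
# later share a common base and conflict rarely.
for topic in amd intel arm; do
    git checkout -qb "$topic/next" demo-rc3
    echo "$topic work" > "$topic.c"
    git add "$topic.c" && git commit -qm "$topic: queue feature"
done

# Rebuild 'next' from the healthy topics; 'arm/next' has issues this
# cycle, so it is excluded and the rest still reach linux-next.
git checkout -qb next demo-rc3
git merge -q --no-edit amd/next intel/next
git log --oneline
```

Because the merge is rebuilt each cycle, dropping a topic costs nothing: the excluded branch keeps its commits and can be merged next time.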
Re: [PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Fri, Apr 08, 2022 at 04:00:31PM +0200, Joerg Roedel wrote: > On Fri, Apr 08, 2022 at 09:23:52AM -0300, Jason Gunthorpe wrote: > > Why rc3? It has been 4 weeks now with no further comments. > > Because I start applying new code to branches based on -rc3. In the past > I used different -rc's for the topic branches (usually the latest -rc > available when I started applying to that branch), but that caused silly > merge conflicts from time to time. So I am now basing every topic branch > on the same -rc, which is usually -rc3. Rationale is that by -rc3 time > the kernel should have reasonably stabilized after the merge window. You might consider using a linear tree instead of the topic branches, topics are tricky and I'm not sure it helps a small subsystem so much. Conflicts between topics are a PITA for everyone, and it makes handling conflicts with rc much harder than it needs to be. At least I haven't felt a need for topics while running larger trees, and would find it stressful to try and squeeze the entire patch flow into only 3 weeks out of the 7 week cycle. In any event, I'd like this on a branch so Alex can pull it too, I guess it means Alex has to merge rc3 to VFIO as well? Thanks for explaining, Jason
Re: [PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Fri, Apr 08, 2022 at 09:23:52AM -0300, Jason Gunthorpe wrote: > Why rc3? It has been 4 weeks now with no further comments. Because I start applying new code to branches based on -rc3. In the past I used different -rc's for the topic branches (usually the latest -rc available when I started applying to that branch), but that caused silly merge conflicts from time to time. So I am now basing every topic branch on the same -rc, which is usually -rc3. Rationale is that by -rc3 time the kernel should have reasonably stabilized after the merge window. Regards, Joerg
Re: [PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Fri, Apr 08, 2022 at 08:22:35PM +0800, Lu Baolu wrote: > Hi Joerg, > > On 2022/4/8 15:57, Joerg Roedel wrote: > > On Mon, Mar 14, 2022 at 09:21:25PM -0300, Jason Gunthorpe wrote: > > > Joerg, are we good for the coming v5.18 merge window now? There are > > > several things backed up behind this series. > > > > I usually don't apply bigger changes like this after -rc7, so it didn't > > make it. Please re-send after -rc3 is out and I will consider it. > > Sure. I will do. Why rc3? It has been 4 weeks now with no further comments. Jason
Re: [PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
Hi Joerg, On 2022/4/8 15:57, Joerg Roedel wrote: > On Mon, Mar 14, 2022 at 09:21:25PM -0300, Jason Gunthorpe wrote: > > Joerg, are we good for the coming v5.18 merge window now? There are > > several things backed up behind this series. > I usually don't apply bigger changes like this after -rc7, so it didn't > make it. Please re-send after -rc3 is out and I will consider it. Sure. I will do. Best regards, baolu
Re: [PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Mon, Mar 14, 2022 at 09:21:25PM -0300, Jason Gunthorpe wrote: > Joerg, are we good for the coming v5.18 merge window now? There are > several things backed up behind this series. I usually don't apply bigger changes like this after -rc7, so it didn't make it. Please re-send after -rc3 is out and I will consider it. Thanks, Joerg
Re: [PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
On Tue, Mar 08, 2022 at 01:44:10PM +0800, Lu Baolu wrote: > Hi folks, > > The iommu group is the minimal isolation boundary for DMA. Devices in > a group can access each other's MMIO registers via peer to peer DMA > and also need to share the same I/O address space. Joerg, are we good for the coming v5.18 merge window now? There are several things backed up behind this series. Thanks, Jason
Re: [PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
Hi Lu, On 3/8/22 6:44 AM, Lu Baolu wrote: > Hi folks, > > The iommu group is the minimal isolation boundary for DMA. Devices in > a group can access each other's MMIO registers via peer to peer DMA > and also need to share the same I/O address space. > > Once the I/O address space is assigned to user control it is no longer > available to the dma_map* API, which effectively makes the DMA API > non-working. > > Second, userspace can use DMA initiated by a device that it controls > to access the MMIO spaces of other devices in the group. This allows > userspace to indirectly attack any kernel owned device and its driver. > > Therefore groups must either be entirely under kernel control or > userspace control, never a mixture. Unfortunately some systems have > problems with the granularity of groups and there are a couple of > important exceptions: > > - pci_stub allows the admin to block driver binding on a device and > make it permanently shared with userspace. Since PCI stub does not > do DMA it is safe, however the admin must understand that using > pci_stub allows userspace to attack whatever device it was bound > to. > > - PCI bridges are sometimes included in groups. Typically PCI bridges > do not use DMA, and generally do not have MMIO regions. > > Generally any device that does not have any MMIO registers is a > possible candidate for an exception. > > Currently vfio adopts a workaround to detect violations of the above > restrictions by monitoring the driver core BOUND event, and hardwiring > the above exceptions. Since there is no way for vfio to reject driver > binding at this point, BUG_ON() is triggered if a violation is > captured (kernel driver BOUND event on a group which already has some > devices assigned to userspace). Aside from the bad user experience > this opens a way for root userspace to crash the kernel, even in high > integrity configurations, by manipulating the module binding and > triggering the BUG_ON. 
> > This series solves this problem by making the user/kernel ownership a > core concept at the IOMMU layer. The driver core enforces kernel > ownership while drivers are bound and violations now result in error > codes during probe, not BUG_ON failures. > > Patch partitions: > [PATCH 1-4]: Detect DMA ownership conflicts during driver binding; > [PATCH 5-7]: Add security context management for assigned devices; > [PATCH 8-11]: Various cleanups. > > This is also part one of three initial series for IOMMUFD: > * Move IOMMU Group security into the iommu layer > - Generic IOMMUFD implementation > - VFIO ability to consume IOMMUFD > > Change log: > v1: initial post > - > https://lore.kernel.org/linux-iommu/2025020552.2378167-1-baolu...@linux.intel.com/ > > v2: > - > https://lore.kernel.org/linux-iommu/20211128025051.355578-1-baolu...@linux.intel.com/ > > - Move kernel dma ownership auto-claiming from driver core to bus > callback. [Greg/Christoph/Robin/Jason] > > https://lore.kernel.org/linux-iommu/2025020552.2378167-1-baolu...@linux.intel.com/T/#m153706912b770682cb12e3c28f57e171aa1f9d0c > > - Code and interface refactoring for iommu_set/release_dma_owner() > interfaces. [Jason] > > https://lore.kernel.org/linux-iommu/2025020552.2378167-1-baolu...@linux.intel.com/T/#mea70ed8e4e3665aedf32a5a0a7db095bf680325e > > - [NEW] Add new iommu_attach/detach_device_shared() interfaces for > multiple devices group. [Robin/Jason] > > https://lore.kernel.org/linux-iommu/2025020552.2378167-1-baolu...@linux.intel.com/T/#mea70ed8e4e3665aedf32a5a0a7db095bf680325e > > - [NEW] Use iommu_attach/detach_device_shared() in drm/tegra drivers. > > - Refactoring and description refinement. > > v3: > - > https://lore.kernel.org/linux-iommu/20211206015903.88687-1-baolu...@linux.intel.com/ > > - Rename bus_type::dma_unconfigure to bus_type::dma_cleanup. 
[Greg] > > https://lore.kernel.org/linux-iommu/c3230ace-c878-39db-1663-2b752ff53...@linux.intel.com/T/#m6711e041e47cb0cbe3964fad0a3466f5ae4b3b9b > > - Avoid _platform_dma_configure for platform_bus_type::dma_configure. > [Greg] > > https://lore.kernel.org/linux-iommu/c3230ace-c878-39db-1663-2b752ff53...@linux.intel.com/T/#m43fc46286611aa56a5c0eeaad99d539e5519f3f6 > > - Patch "0012-iommu-Add-iommu_at-de-tach_device_shared-for-mult.patch" > and "0018-drm-tegra-Use-the-iommu-dma_owner-mechanism.patch" have > been tested by Dmitry Osipenko. > > v4: > - > https://lore.kernel.org/linux-iommu/20211217063708.1740334-1-baolu...@linux.intel.com/ > - Remove unnecessary tegra->domain check in the tegra patch. (Jason) > - Remove DMA_OWNER_NONE. (Joerg) > - Change refcount to unsigned int. (Christoph) > - Move mutex lock into group set_dma_owner functions. (Christoph) > - Add kernel doc for iommu_attach/detach_domain_shared(). (Christoph) > - Move dma auto-claim into driver core. (Jason/Christoph) > > v5: > -
[PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()
Hi folks, The iommu group is the minimal isolation boundary for DMA. Devices in a group can access each other's MMIO registers via peer to peer DMA and also need to share the same I/O address space. Once the I/O address space is assigned to user control it is no longer available to the dma_map* API, which effectively makes the DMA API non-working. Second, userspace can use DMA initiated by a device that it controls to access the MMIO spaces of other devices in the group. This allows userspace to indirectly attack any kernel owned device and its driver. Therefore groups must either be entirely under kernel control or userspace control, never a mixture. Unfortunately some systems have problems with the granularity of groups and there are a couple of important exceptions: - pci_stub allows the admin to block driver binding on a device and make it permanently shared with userspace. Since PCI stub does not do DMA it is safe, however the admin must understand that using pci_stub allows userspace to attack whatever device it was bound to. - PCI bridges are sometimes included in groups. Typically PCI bridges do not use DMA, and generally do not have MMIO regions. Generally any device that does not have any MMIO registers is a possible candidate for an exception. Currently vfio adopts a workaround to detect violations of the above restrictions by monitoring the driver core BOUND event, and hardwiring the above exceptions. Since there is no way for vfio to reject driver binding at this point, BUG_ON() is triggered if a violation is captured (kernel driver BOUND event on a group which already has some devices assigned to userspace). Aside from the bad user experience this opens a way for root userspace to crash the kernel, even in high integrity configurations, by manipulating the module binding and triggering the BUG_ON. This series solves this problem by making the user/kernel ownership a core concept at the IOMMU layer. 
The driver core enforces kernel ownership while drivers are bound and violations now result in error codes during probe, not BUG_ON failures. Patch partitions: [PATCH 1-4]: Detect DMA ownership conflicts during driver binding; [PATCH 5-7]: Add security context management for assigned devices; [PATCH 8-11]: Various cleanups. This is also part one of three initial series for IOMMUFD: * Move IOMMU Group security into the iommu layer - Generic IOMMUFD implementation - VFIO ability to consume IOMMUFD Change log: v1: initial post - https://lore.kernel.org/linux-iommu/2025020552.2378167-1-baolu...@linux.intel.com/ v2: - https://lore.kernel.org/linux-iommu/20211128025051.355578-1-baolu...@linux.intel.com/ - Move kernel dma ownership auto-claiming from driver core to bus callback. [Greg/Christoph/Robin/Jason] https://lore.kernel.org/linux-iommu/2025020552.2378167-1-baolu...@linux.intel.com/T/#m153706912b770682cb12e3c28f57e171aa1f9d0c - Code and interface refactoring for iommu_set/release_dma_owner() interfaces. [Jason] https://lore.kernel.org/linux-iommu/2025020552.2378167-1-baolu...@linux.intel.com/T/#mea70ed8e4e3665aedf32a5a0a7db095bf680325e - [NEW] Add new iommu_attach/detach_device_shared() interfaces for multiple devices group. [Robin/Jason] https://lore.kernel.org/linux-iommu/2025020552.2378167-1-baolu...@linux.intel.com/T/#mea70ed8e4e3665aedf32a5a0a7db095bf680325e - [NEW] Use iommu_attach/detach_device_shared() in drm/tegra drivers. - Refactoring and description refinement. v3: - https://lore.kernel.org/linux-iommu/20211206015903.88687-1-baolu...@linux.intel.com/ - Rename bus_type::dma_unconfigure to bus_type::dma_cleanup. [Greg] https://lore.kernel.org/linux-iommu/c3230ace-c878-39db-1663-2b752ff53...@linux.intel.com/T/#m6711e041e47cb0cbe3964fad0a3466f5ae4b3b9b - Avoid _platform_dma_configure for platform_bus_type::dma_configure. 
[Greg] https://lore.kernel.org/linux-iommu/c3230ace-c878-39db-1663-2b752ff53...@linux.intel.com/T/#m43fc46286611aa56a5c0eeaad99d539e5519f3f6 - Patch "0012-iommu-Add-iommu_at-de-tach_device_shared-for-mult.patch" and "0018-drm-tegra-Use-the-iommu-dma_owner-mechanism.patch" have been tested by Dmitry Osipenko. v4: - https://lore.kernel.org/linux-iommu/20211217063708.1740334-1-baolu...@linux.intel.com/ - Remove unnecessary tegra->domain check in the tegra patch. (Jason) - Remove DMA_OWNER_NONE. (Joerg) - Change refcount to unsigned int. (Christoph) - Move mutex lock into group set_dma_owner functions. (Christoph) - Add kernel doc for iommu_attach/detach_domain_shared(). (Christoph) - Move dma auto-claim into driver core. (Jason/Christoph) v5: - https://lore.kernel.org/linux-iommu/20220104015644.2294354-1-baolu...@linux.intel.com/ - Move kernel dma ownership auto-claiming from driver core to bus callback. (Greg) - Refactor the iommu interfaces to make them more specific. (Jason/Robin) - Simplify the dma