Un-inline the domain specific logic from the attach/detach_group ops into
two paired functions vfio_iommu_alloc_attach_domain() and
vfio_iommu_detach_destroy_domain() that strictly deal with creating and
destroying struct vfio_domains.
Add the logic to check for EMEDIUMTYPE return code of iommu_at
All devices in emulated_iommu_groups have pinned_page_dirty_scope
set, so the update_dirty_scope in the first list_for_each_entry
is always false. Clean it up, and move the "if update_dirty_scope"
part from the detach_group_done routine to the domain_list part.
Suggested-by: Jason Gunthorpe
Revie
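The claim above can be checked with a tiny model (illustrative userspace C, not the VFIO code): update_dirty_scope ORs in !pinned_page_dirty_scope for each detached device, so over a list where every device has the flag set, the accumulator never becomes true and the branch is dead in that loop.

```c
#include <stdbool.h>

/*
 * Model of the invariant above, not the driver code: accumulate
 * update_dirty_scope over a detach list the way the loop in
 * vfio_iommu_type1 does. If every entry has pinned_page_dirty_scope
 * set, the result is always false.
 */
static bool update_dirty_scope_over(const bool *pinned, int n)
{
	bool update = false;

	for (int i = 0; i < n; i++)
		update |= !pinned[i];
	return update;
}
```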
The domain->ops validation was added, as a precaution, for mixed-driver
systems.
Per Robin's remarks,
* While bus_set_iommu() still exists, the core code prevents multiple
drivers from registering, so we can't really run into a situation of
having a mixed-driver system:
https://lore.kernel.o
From: Jason Gunthorpe
The KVM mechanism for controlling wbinvd is based on OR of the coherency
property of all devices attached to a guest, no matter whether those
devices are attached to a single domain or multiple domains.
On the other hand, the benefit to using separate domains was that those
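The OR rule above can be sketched as follows (illustrative userspace C, not KVM code; the function name is made up): the guest needs wbinvd emulation if at least one attached device lacks enforced cache coherency, regardless of how the devices are spread across domains.

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Illustrative model of the rule described above: wbinvd must be
 * exposed to the guest if any attached device does not have enforced
 * cache coherency, no matter how devices are grouped into domains.
 */
static bool guest_needs_wbinvd(const bool *enforced_coherent, size_t ndevs)
{
	for (size_t i = 0; i < ndevs; i++)
		if (!enforced_coherent[i])
			return true;
	return false;
}
```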
Cases like VFIO wish to attach a device to an existing domain that was
not allocated specifically from the device. This raises a condition
where the IOMMU driver can fail the domain attach because the domain and
device are incompatible with each other.
This is a soft failure that can be resolved b
This is a preparatory series for IOMMUFD v2 patches. It enforces error
code -EMEDIUMTYPE in iommu_attach_device() and iommu_attach_group() when
an IOMMU domain and a device/group are incompatible. It also drops the
useless domain->ops check since it won't fail in current environment.
These allow V
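The soft-failure flow this enables can be sketched like so (hypothetical names and a userspace model, not the kernel API): the caller tries the existing domain first and only allocates a new one when attach reports -EMEDIUMTYPE.

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-ins for iommu_domain/device; fmt models compatibility. */
struct model_domain { int fmt; };
struct model_device { int fmt; };

/* Stub attach: -EMEDIUMTYPE flags an incompatible-but-retriable pairing. */
static int model_attach(struct model_domain *d, struct model_device *dev)
{
	if (!d)
		return -EINVAL;
	if (d->fmt != dev->fmt)
		return -EMEDIUMTYPE;
	return 0;
}

/*
 * Caller-side fallback: reuse *cur when compatible, otherwise switch to
 * a freshly "allocated" domain (passed in here to stay heap-free).
 */
static int attach_or_alloc(struct model_domain **cur, struct model_device *dev,
			   struct model_domain *fresh)
{
	int ret = model_attach(*cur, dev);

	if (ret != -EMEDIUMTYPE)
		return ret;		/* success or a hard failure */

	fresh->fmt = dev->fmt;		/* make a compatible domain */
	*cur = fresh;
	return model_attach(*cur, dev);
}
```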
On Fri, Jul 01, 2022 at 07:17:38PM +0100, Robin Murphy wrote:
> External email: Use caution opening links or attachments
>
>
> On 01/07/2022 5:43 pm, Nicolin Chen wrote:
> > On Fri, Jul 01, 2022 at 11:21:48AM +0100, Robin Murphy wrote:
> >
> > > > diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu
On Fri, Jul 01, 2022 at 11:21:48AM +0100, Robin Murphy wrote:
> > diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c
> > b/drivers/iommu/arm/arm-smmu/arm-smmu.c
> > index 2ed3594f384e..072cac5ab5a4 100644
> > --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
> > +++ b/drivers/iommu/arm/arm-smmu/arm-smm
On Thu, Jun 30, 2022 at 09:21:42AM +0100, Robin Murphy wrote:
> On 2022-06-29 20:47, Nicolin Chen wrote:
> > On Fri, Jun 24, 2022 at 03:19:43PM -0300, Jason Gunthorpe wrote:
> > > On Fri, Jun 24, 2022 at 06:35:49PM +0800, Yong Wu wrot
On Thu, Jun 30, 2022 at 05:33:16PM +0800, Yong Wu wrote:
> On Wed, 2022-06-29 at 12:47 -0700, Nicolin Chen wrote:
> > On Fri, Jun 24, 2022 at 03:19:43PM -0300, Jason Gunthorpe wrote:
> > > On Fri, Jun 24, 2022 at 06:35:49PM +0800, Yon
On Fri, Jun 24, 2022 at 03:19:43PM -0300, Jason Gunthorpe wrote:
> On Fri, Jun 24, 2022 at 06:35:49PM +0800, Yong Wu wrote:
>
> > > > It's not used in VFIO context. "return 0" just satisfy the iommu
> > > > framework to go ahead. and yes, here we only allow the shared
> > > > "mapping-domain" (All
On Fri, Jun 24, 2022 at 01:38:58PM +0800, Yong Wu wrote:
> > > > diff --git a/drivers/iommu/mtk_iommu_v1.c
> > > > b/drivers/iommu/mtk_iommu_v1.c
> > > > index e1cb51b9866c..5386d889429d 100644
> > > > --- a/drivers/iommu/mtk_iommu_v1.c
> > > > +++ b/drivers/iommu/mtk_iommu_v1.c
> > > > @@ -304,7
On Fri, Jun 24, 2022 at 09:35:49AM +0800, Baolu Lu wrote:
> On 2022/6/24 04:00, Nicolin Chen wrote:
> > diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
> > index e1cb51b9866c..5386d889429d 100644
> > --- a/dri
On Thu, Jun 23, 2022 at 03:50:22AM +, Tian, Kevin wrote:
> > From: Robin Murphy
> > Sent: Wednesday, June 22, 2022 3:55 PM
> >
> > On 2022-06-16 23:23, Nicolin Chen wrote:
> > > On Thu, Jun 16, 2022 at 06:40:14AM +, Tian, Kev
On Tue, Jun 21, 2022 at 04:46:02PM -0600, Alex Williamson wrote:
> On Wed, 15 Jun 2022 17:03:01 -0700
> Nicolin Chen wrote:
>
> > From: Jason Gunthorpe
> >
> > The KVM mechanism for controlling wbinvd is based on OR of the coherenc
On Mon, Jun 20, 2022 at 11:11:01AM +0100, Robin Murphy wrote:
> On 2022-06-17 03:53, Tian, Kevin wrote:
> > > From: Nicolin Chen
> > > Sent: Friday, June 17, 2022 6:41 AM
> > >
> > > > ...
> > > > > - if (resv_msi) {
> > > > > +
On Mon, Jun 20, 2022 at 01:03:17AM -0300, Jason Gunthorpe wrote:
> On Fri, Jun 17, 2022 at 04:07:20PM -0700, Nicolin Chen wrote:
>
> > > > > > + vfio_iommu_aper_expand(iommu, &iova_copy);
> > > > >
> > > > > but now it's done for every group detach. The aperture is decided
> > > > > by domain
On Fri, Jun 17, 2022 at 02:53:13AM +, Tian, Kevin wrote:
> > > ...
> > > > - if (resv_msi) {
> > > > + if (resv_msi && !domain->msi_cookie) {
> > > > ret = iommu_get_msi_cookie(domain->domain,
> > > > resv_msi_base);
> > > > if (ret && ret != -ENODEV)
> > > >
On Thu, Jun 16, 2022 at 07:08:10AM +, Tian, Kevin wrote:
> ...
> > +static struct vfio_domain *
> > +vfio_iommu_alloc_attach_domain(struct bus_type *bus, struct vfio_iommu
> > *iommu,
> > +struct vfio_iommu_group *group)
> > +{
> > + struct iommu_domain *new_doma
On Thu, Jun 16, 2022 at 06:45:09AM +, Tian, Kevin wrote:
> > +out_unlock:
> > mutex_unlock(&iommu->lock);
> > }
> >
>
> I'd just replace the goto with a direct unlock and then return there.
> the readability is slightly better.
OK. Will do that.
___
On Thu, Jun 16, 2022 at 06:40:14AM +, Tian, Kevin wrote:
> > The domain->ops validation was added, as a precaution, for mixed-driver
> > systems. However, at this moment only one iommu driver is possible. So
> > remove it.
>
> It's true on a physical platform. But I'm not sure whether a virtu
On Thu, Jun 16, 2022 at 10:09:49AM +0800, Baolu Lu wrote:
> On 2022/6/16 08:03, Nicolin Chen wrote:
> > diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> > index 44016594831d..0dd13330fe12 100644
> > --- a/drive
The domain->ops validation was added, as a precaution, for mixed-driver
systems. However, at this moment only one iommu driver is possible. So
remove it.
Per discussion with Robin, in future when many can be permitted we will
rely on the IOMMU core code to check the domain->ops:
https://lore.kerne
All devices in emulated_iommu_groups have pinned_page_dirty_scope
set, so the update_dirty_scope in the first list_for_each_entry
is always false. Clean it up, and move the "if update_dirty_scope"
part from the detach_group_done routine to the domain_list part.
Rename the "detach_group_done" goto
From: Jason Gunthorpe
The KVM mechanism for controlling wbinvd is based on OR of the coherency
property of all devices attached to a guest, no matter whether those
devices are attached to a single domain or multiple domains.
So, there is no value in trying to push a device that could do enforced
cache c
On Wed, Jun 15, 2022 at 07:35:00AM +, Tian, Kevin wrote:
> > From: Nicolin Chen
> > Sent: Wednesday, June 15, 2022 4:45 AM
> >
> > Hi Kevin,
> >
> > On Wed, Jun 08, 2022 at 11:48:27PM +, Tian, Kevin wrote:
> > > > > > The KVM
Hi Kevin,
On Wed, Jun 08, 2022 at 11:48:27PM +, Tian, Kevin wrote:
> > > > The KVM mechanism for controlling wbinvd is only triggered during
> > > > kvm_vfio_group_add(), meaning it is a one-shot test done once the
> > devices
> > > > are setup.
> > >
> > > It's not one-shot. kvm_vfio_update_c
On Wed, Jun 08, 2022 at 08:35:47AM +, Tian, Kevin wrote:
> > @@ -2519,7 +2515,17 @@ static void vfio_iommu_type1_detach_group(void
> > *iommu_data,
> > kfree(domain);
> > vfio_iommu_aper_expand(iommu, &iova_copy);
> > vfio_updat
Hi Kevin,
On Wed, Jun 08, 2022 at 07:49:10AM +, Tian, Kevin wrote:
> > From: Nicolin Chen
> > Sent: Monday, June 6, 2022 2:19 PM
> >
> > Cases like VFIO wish to attach a device to an existing domain that was
> > not allocated spe
On Tue, Jun 07, 2022 at 11:23:27AM +0800, Baolu Lu wrote:
> On 2022/6/6 14:19, Nicolin Chen wrote:
> > +/**
> > + * iommu_attach_group - Attach an IOMMU group to an IOMMU domain
> > + * @domain: IOMMU domain to attach
> > + * @dev: IO
On Mon, Jun 06, 2022 at 06:50:33PM +0100, Robin Murphy wrote:
> On 2022-06-06 17:51, Nicolin Chen wrote:
> > Hi Robin,
> >
> > On Mon, Jun 06, 2022 at 03:33:42PM +0100, Robin Murphy wrote:
> > > On 2022-06-06 07:19, Nicolin Chen wrot
Hi Robin,
On Mon, Jun 06, 2022 at 03:33:42PM +0100, Robin Murphy wrote:
> On 2022-06-06 07:19, Nicolin Chen wrote:
> > The core code should not call an iommu driver op with a struct device
> > parameter unless it knows that the dev_iommu_priv_get() for that struct
> > device was setup by the same
From: Jason Gunthorpe
The KVM mechanism for controlling wbinvd is only triggered during
kvm_vfio_group_add(), meaning it is a one-shot test done once the devices
are setup.
So, there is no value in trying to push a device that could do enforced
cache coherency to a dedicated domain vs re-using a
The core code should not call an iommu driver op with a struct device
parameter unless it knows that the dev_iommu_priv_get() for that struct
device was setup by the same driver. Otherwise in a mixed driver system
the iommu_priv could be casted to the wrong type.
Store the iommu_ops pointer in the
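The guard described can be sketched as follows (illustrative types, not the kernel structs; the error code is an arbitrary pick for the sketch): record which driver's ops produced the domain, and compare pointers at attach time so a domain is never handed to a device probed by a different driver.

```c
#include <errno.h>

/* Illustrative stand-ins for iommu_ops / iommu_domain / device. */
struct model_ops { const char *name; };
struct model_domain { const struct model_ops *ops; };
struct model_device { const struct model_ops *ops; };

/*
 * Core-level guard: in a mixed-driver system, attaching across drivers
 * would let one driver cast another driver's iommu_priv to the wrong
 * type, so reject the attach before calling into the driver at all.
 */
static int core_check_attach(const struct model_domain *d,
			     const struct model_device *dev)
{
	if (d->ops != dev->ops)
		return -EBUSY;	/* arbitrary code for this sketch */
	return 0;
}
```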
This is a preparatory series for IOMMUFD v2 patches. It enforces error
code -EMEDIUMTYPE in iommu_attach_device() and iommu_attach_group() when
an IOMMU domain and a device/group are incompatible. It also moves the
domain->ops check into __iommu_attach_device(). These allow VFIO iommu
code to simpl
On Fri, May 13, 2022 at 08:50:32AM -0300, Jason Gunthorpe wrote:
> > Perhaps, we can make device_to_iommu() only for probe_device() where the
> > per-device info data is not initialized yet. After probe_device(), iommu
> > and sid are retrieved through other helpers by looking up the device
> > in
On Fri, May 13, 2022 at 11:32:11AM +0800, Baolu Lu wrote:
> On 2022/5/13 08:32, Nicolin Chen wrote:
> > Local boot test and VFIO sanity test show that info->iommu can be
> > used in device_to_iommu() as a fast path. So this patch adds
Local boot test and VFIO sanity test show that info->iommu can be
used in device_to_iommu() as a fast path. So this patch adds it.
Signed-off-by: Nicolin Chen
---
drivers/iommu/intel/iommu.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/inte
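The fast path described can be sketched like this (hypothetical shapes; the real patch consults per-device info populated at probe time): if the cached iommu pointer is already set, return it and skip the slow table walk.

```c
#include <stddef.h>

struct model_iommu { int id; };
struct model_info { struct model_iommu *iommu; };

static int slow_walks;				/* counts fallback lookups */
static struct model_iommu global_iommu = { .id = 7 };

/* Stand-in for the slow lookup path (e.g. a DMAR table walk). */
static struct model_iommu *slow_lookup(void)
{
	slow_walks++;
	return &global_iommu;
}

/* Fast path: use the cached pointer when the device was already probed. */
static struct model_iommu *device_to_iommu_sketch(struct model_info *info)
{
	if (info && info->iommu)
		return info->iommu;
	return slow_lookup();
}
```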
On Tue, May 10, 2022 at 01:55:24PM -0300, Jason Gunthorpe wrote:
> This control causes the ARM SMMU drivers to choose a stage 2
> implementation for the IO pagetable (vs the stage 1 usual default),
> however this choice has no visible impact to the VFIO user. Further qemu
> never implemented this a
On Tue, May 03, 2022 at 09:11:02PM -0300, Jason Gunthorpe wrote:
> This is based on Robins draft here:
>
> https://lore.kernel.org/linux-iommu/18831161-473f-e04f-4a81-1c7062ad1...@arm.com/
>
> With some rework. I re-organized the call chains instead of introducing
> iommu_group_user_attached(),
On Tue, Apr 19, 2022 at 08:10:34PM -0300, Jason Gunthorpe wrote:
> > - size_t size = end - start + 1;
> > + size_t size;
> > +
> > + /*
> > + * The mm_types defines vm_end as the first byte after the end
> > address,
> > + * different from IOMMU subsystem using the last addr
The arm_smmu_mm_invalidate_range function is designed to be called
by the mm core for Shared Virtual Addressing purposes between the
IOMMU and the CPU MMU. However, the two subsystems define their
"end" addresses slightly differently. The IOMMU defines its "end"
address using the last address of an addr
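The mismatch reduces to two size formulas (a sketch of the conventions described above; the helpers are illustrative): mm's vm_end is exclusive (first byte after the range), while an inclusive "end" is the last byte, so applying the inclusive formula to an exclusive end over-counts by one byte.

```c
#include <stddef.h>

/* mm convention: end is the first byte *after* the range (exclusive). */
static size_t size_exclusive_end(unsigned long start, unsigned long end)
{
	return end - start;
}

/* IOMMU convention described above: end is the *last* byte (inclusive). */
static size_t size_inclusive_end(unsigned long start, unsigned long end)
{
	return end - start + 1;
}
```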
On Tue, Apr 19, 2022 at 05:02:33PM -0300, Jason Gunthorpe wrote:
> > diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> > b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> > index d816759a6bcf..e280568bb513 100644
> > +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> > @@ -183
On Thu, Apr 14, 2022 at 11:32:38AM +0100, Robin Murphy wrote:
> > By looking at the call trace within arm_smmu_* functions:
> >__arm_smmu_tlb_inv_range
> >arm_smmu_tlb_inv_range_asid
> >arm_smmu_mm_invalidate_range
> >{from mm_notifier_* functions}
> >
> > There's no address alignm
On Thu, Apr 14, 2022 at 11:32:38AM +0100, Robin Murphy wrote:
> On 2022-04-13 21:19, Nicolin Chen wrote:
> > Hi Robin,
> >
> > On Wed, Apr 13, 2022 at 02:40:31PM +0100, Robin Murphy wrote:
> > > On 2022-04-13 05:17, Nicolin Chen wrot
Hi Robin,
On Wed, Apr 13, 2022 at 02:40:31PM +0100, Robin Murphy wrote:
> On 2022-04-13 05:17, Nicolin Chen wrote:
> > To calculate num_pages, the size should be aligned with
> > "page size", determined by the tg value. Otherwise, its
> > following "while (iova < end)" might become an infinite
> >
To calculate num_pages, the size should be aligned with the
"page size" determined by the tg value. Otherwise, the
following "while (iova < end)" might become an infinite
loop if the unaligned size is slightly greater than 1 << tg.
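The fix amounts to rounding the size up to the granule before dividing (illustrative helper, not the driver code; tg selects the translation granule as in the commit text):

```c
/*
 * Illustrative helper: with an unaligned size, truncating division can
 * under-count pages and leave the "while (iova < end)" walk short of
 * end indefinitely; rounding up to the 1 << tg granule guarantees the
 * loop's forward progress covers the whole range.
 */
static unsigned long num_pages_aligned(unsigned long size, unsigned int tg)
{
	unsigned long granule = 1UL << tg;

	return (size + granule - 1) >> tg;	/* round up, then divide */
}
```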
Signed-off-by: Nicolin Chen
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-
On Fri, Dec 24, 2021 at 12:13:57PM +, Robin Murphy wrote:
> > > > > > @@ -176,6 +177,24 @@ struct arm_smmu_cmdq
> > > > > > *nvidia_grace_cmdqv_get_cmdq(struct arm_smmu_device *smmu)
> > > > > > if (!FIELD_GET(VINTF_STATUS, vintf0->status))
> > > > > > return &smmu->cm
> > > On 2021-11-19 07:19, Nicolin Chen via iommu wrote:
> > > > When VCMDQs are assigned to a VINTF that is owned by a guest, not
> > > > hypervisor (HYP_OWN bit is unset), only TLB invalidation comm
On Wed, Dec 22, 2021 at 12:32:29PM +, Robin Murphy wrote:
> On 2021-11-19 07:19, Nicolin Chen via iommu wrote:
> > When VCMDQs are assigned to a VINTF that is owned by a guest, not
> > hypervisor (HYP_OWN
On Tue, Dec 21, 2021 at 06:55:20PM +, Robin Murphy wrote:
> On 2021-12-20 19:27, Nicolin Chen wrote:
> > Hi Robin,
> >
> > Thank you for the reply!
> >
> > On Mon, Dec 20, 2021 at 06:42:26PM +, Robin Murphy wrote:
> > > On 2
Hi Robin,
Thank you for the reply!
On Mon, Dec 20, 2021 at 06:42:26PM +, Robin Murphy wrote:
> On 2021-11-19 07:19, Nicolin Chen wrote:
> > From: Nate Watterson
> >
> > NVIDIA's Grace Soc has a CMDQ-Virtualization (CMDQV) hardware,
> > which extends the standard ARM SMMU v3 IP to support mu
On Thu, Dec 09, 2021 at 10:58:15PM +0300, Dmitry Osipenko wrote:
> 09.12.2021 22:51, Nicolin Chen wrote:
> > On Thu, Dec 09, 2021 at 10:40:42PM +0300, Dmitry Osipenko wrote:
On Thu, Dec 09, 2021 at 10:58:32PM +0300, Dmitry Osipenko wrote:
> 09.12.2021 22:54, Nicolin Chen wrote:
> > On Thu, Dec 09, 2021 at 10:44:25PM +0300, Dmitry Osipenko wrote:
On Thu, Dec 09, 2021 at 10:44:25PM +0300, Dmitry Osipenko wrote:
> 09.12.2021 22:24, Nicolin Chen wrote:
> > On Thu, Dec 09, 2021 at 05:49:09PM +0300, Dmitry Osipenko wrote:
On Thu, Dec 09, 2021 at 10:40:42PM +0300, Dmitry Osipenko wrote:
> 09.12.2021 22:32, Nicolin Chen wrote:
> > On Thu, Dec 09, 2021 at 05:47:18PM +0300, Dmitry Osipenko wrote:
On Thu, Dec 09, 2021 at 05:47:18PM +0300, Dmitry Osipenko wrote:
> 09.12.2021 10:38, Nicolin Chen wrote:
> > @@ -545,6 +719,15 @@ static void tegra_smmu_detach_as(struct tegra_smmu
> > *smmu,
> > if (group->swgrp != swg
On Thu, Dec 09, 2021 at 05:49:09PM +0300, Dmitry Osipenko wrote:
> 09.12.2021 10:38, Nicolin Chen wrote:
> > +static unsigned long pd_pt_index_iova(unsigned int pd_index, unsigned int
> > pt_index)
> > +{
> > + return (pd_index &
This patch dumps all active mapping entries from the pagetable to a
debugfs directory named "mappings".
The part of this patch that lists all swgroup names in a group_soc
was provided by Dmitry Osipenko
Attaching an example:
[SWGROUP: xusb_host] [as: (id: 5), (attr: R|W|-), (pd_dma: 0x80005000)
This patch changes struct tegra_smmu_group to use a swgrp
pointer instead of swgroup, as a preparatory change for
the "mappings" debugfs feature.
Acked-by: Thierry Reding
Signed-off-by: Nicolin Chen
---
drivers/iommu/tegra-smmu.c | 12
1 file changed, 8 insertions(+), 4 deletion
The existing function tegra_smmu_find_group really finds the
group->soc pointer, so the name "find_group" might not be clear
on its own. This patch renames it to tegra_smmu_group_soc in
order to disambiguate the use of "group" in this driver.
Signed-off-by: Nicolin Chen
---
drivers/iomm
There are both tegra_smmu_swgroup and tegra_smmu_group structs
using "group" for their pointer instances, which sometimes makes
the driver confusing to read.
So this patch renames "group" of struct tegra_smmu_swgroup to
"swgrp" as a cleanup. Also renames its "find" function.
Note that we already ha
This eases the driver's access to the corresponding as pointer
when it only has a tegra_smmu_group pointer, which helps the
new "mappings" debugfs nodes.
Also moving tegra_smmu_find_group_soc() upward, for using
it in new tegra_smmu_attach_as(); and it's better to have
all tegra_smmu_find_* functions togeth
This series of patches adds a new mappings node to debugfs for
tegra-smmu driver. The first five patches are all preparational
changes for PATCH-6, based on Thierry's review feedback against
v5.
Changelog
v8:
* No changes for PATCH 1-4
* PATCH-5:
* * bypassed "group->as == as" to fix KMSG bug r
There are a few structs using "group" for their pointer instances.
This gets confusing sometimes. The instance of struct iommu_group
is used in local functions with the alias "grp", which separates
it from the others.
So this patch simply renames "group" to "grp" as a cleanup.
Acked-by: Thierry Redi
On Wed, Dec 08, 2021 at 07:09:37PM +0300, Dmitry Osipenko wrote:
> 08.12.2021 11:47, Nicolin Chen wrote:
> > static void tegra_smmu_attach_as(struct tegra_smmu *smmu,
> >struct tegra_smmu_as *as,
> >
This series of patches adds a new mappings node to debugfs for
tegra-smmu driver. The first five patches are all preparational
changes for PATCH-6, based on Thierry's review feedback against
v5.
Changelog
v7:
* Added "Acked-by" from Thierry to PATCH1,4,5
* No other changes for PATCH1,3,4,5
* PA
This patch dumps all active mapping entries from the pagetable to a
debugfs directory named "mappings".
Attaching an example:
[SWGROUP: xusb_host] [as: (id: 5), (attr: R|W|-), (pd_dma: 0x80005000)]
{
[index: 1023] 0xf0080040 (count: 52)
{
PTE RANGE | ATTR
From: Nate Watterson
NVIDIA's Grace SoC has CMDQ-Virtualization (CMDQV) hardware,
which extends the standard ARM SMMUv3 IP to support multiple
VCMDQs with virtualization capabilities. In the host kernel,
they're used to reduce contention on a single queue. In terms
of command queue, they are
The CMDQV extension in NVIDIA Grace SoC reuses the arm_smmu_cmdq
structure while the queue location isn't the same as smmu->cmdq. So,
this patch adds a cmdq argument to the arm_smmu_cmdq_init() function
and shares its definition in the header for the CMDQV driver to use.
Signed-off-by: Nicolin Chen
---
drivers/
When VCMDQs are assigned to a VINTF that is owned by a guest, not
the hypervisor (HYP_OWN bit is unset), only TLB invalidation commands
are supported. This requires the get_cmd() function to scan the input
cmd before selecting a cmdq between smmu->cmdq and vintf->vcmdq, so
unsupported commands can still go t
The driver currently calls arm_smmu_get_cmdq() helper internally in
different places, though they are all actually called from the same
source -- arm_smmu_cmdq_issue_cmdlist() function.
This patch changes this to pass the cmdq pointer to these functions
instead of calling arm_smmu_get_cmdq() every
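The refactor pattern reads roughly like this sketch (hypothetical helpers, not the SMMU code): resolve the queue once at the top of the call chain and thread the pointer through, rather than having each callee re-derive it.

```c
struct model_cmdq { int issued; };

/* Callee uses the queue it was handed instead of re-looking it up. */
static void model_issue_one(struct model_cmdq *q)
{
	q->issued++;
}

/* Single resolution point at the top of the call chain. */
static int model_issue_cmdlist(struct model_cmdq *q, int ncmds)
{
	for (int i = 0; i < ncmds; i++)
		model_issue_one(q);
	return q->issued;
}
```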
The CMDQV extension in NVIDIA Grace SoC only supports CS_NONE in the
CS field of CMD_SYNC. So this patch adds a quirk flag to accommodate
that.
Signed-off-by: Nicolin Chen
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 7 ++-
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 4
2 files c
From: Nicolin Chen
NVIDIA's Grace SoC has CMDQ-Virtualization (CMDQV) hardware that
extends standard ARM SMMUv3 to support multiple command queues with
virtualization capabilities. Though this is similar to the ECMDQ in
SMMUv3.3, CMDQV provides additional V-Interfaces that allow VMs to
have the
Hi Kevin,
On Thu, Sep 02, 2021 at 10:27:06PM +, Tian, Kevin wrote:
> > Indeed, this looks like a flavour of the accelerated invalidation
> > stuff we've talked about already.
> >
> > I would see it probably exposed as some HW specific IOCTL on the iommu
> > fd to get access to the accelerated
From: Nate Watterson
NVIDIA's Grace SoC has CMDQ-Virtualization (CMDQV) hardware,
which adds multiple VCMDQ interfaces (VINTFs) to supplement the
architected SMMU_CMDQ in an effort to reduce contention.
To make use of these supplemental CMDQs in arm-smmu-v3 driver,
this patch borrows the "impl