Hi Robin,
On 24.09.2020 13:06, Robin Murphy wrote:
> On 2020-09-24 11:47, Marek Szyprowski wrote:
>> On 24.09.2020 12:40, Robin Murphy wrote:
>>> On 2020-09-24 11:16, Thierry Reding wrote:
On Thu, Sep 24, 2020 at 10:46:46AM +0200, Marek Szyprowski wrote:
> On 24.09.2020 10:28, Joerg
In arm_smmu_evtq_thread, the event queue is read via the consumer pointer,
which has no address dependency on the producer pointer, so the prod_reg
read (MMIO) and the event queue read (Normal memory) can be reordered. The
load from the event queue can thus be done before the load of prod_reg, and
then perhaps a wrong event entry value
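A minimal sketch of the hazard and the usual cure, assuming simplified
queue fields (q->cons, q->prod_reg and queue_read() are illustrative, not
the driver's actual names):

	/* Read the producer index from MMIO, then consume entries. */
	u32 prod = readl_relaxed(q->prod_reg);

	while (q->cons != prod) {
		/*
		 * Without a barrier, the Normal-memory load of the entry
		 * below may be satisfied before the prod_reg load above,
		 * yielding a stale event. A read barrier (or a non-relaxed
		 * readl() for prod) orders the two loads.
		 */
		rmb();
		queue_read(&evt, q, q->cons++);
	}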
On 2020-09-23 20:54, Robin Murphy wrote:
On 2020-09-22 07:18, Sai Prakash Ranjan wrote:
Use a table and of_match_node() to match the qcom implementation
instead of multiple of_device_is_compatible() calls for each
QCOM SMMU implementation.
Signed-off-by: Sai Prakash Ranjan
---
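A sketch of the idiom being suggested, with illustrative compatible
strings (the actual table and helper names in the patch may differ):

static const struct of_device_id qcom_smmu_impl_of_match[] = {
	{ .compatible = "qcom,sc7180-smmu-500" },
	{ .compatible = "qcom,sdm845-smmu-500" },
	{ }
};

struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu)
{
	/* One table lookup replaces a chain of of_device_is_compatible() calls */
	if (of_match_node(qcom_smmu_impl_of_match, smmu->dev->of_node))
		return qcom_smmu_create(smmu, &qcom_smmu_impl);

	return smmu;
}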
Hi Robin,
On 20.08.2020 17:08, Robin Murphy wrote:
> With the IOMMU ops now looking much the same shape as iommu_dma_ops,
> switch them out in favour of the iommu-dma library, currently enhanced
> with temporary workarounds that allow it to also sit underneath the
> arch-specific API. With that
From: Vijayanand Jitta
Whenever a new IOVA alloc request comes in, the IOVA is always searched
from the cached node and the nodes previous to the cached
node. So even if there is free IOVA space available in the nodes
next to the cached node, IOVA allocation can still fail
because of
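A conceptual sketch of the retry being proposed (scan_down_from() is a
hypothetical helper standing in for the rbtree walk; field names follow
drivers/iommu/iova.c but details may differ):

static unsigned long iova_scan(struct iova_domain *iovad, unsigned long size,
			       unsigned long limit_pfn)
{
	struct rb_node *curr = iovad->cached_node;
	bool retried = false;
	unsigned long pfn;

retry:
	pfn = scan_down_from(curr, size, limit_pfn);
	if (pfn)
		return pfn;

	if (!retried) {
		/* Start over from the rightmost node so that free space
		 * above the cached node is considered too. */
		curr = rb_last(&iovad->rbroot);
		retried = true;
		goto retry;
	}
	return 0;	/* allocation failure */
}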
From: Vijayanand Jitta
Whenever an IOVA alloc request fails, we free the IOVA
ranges present in the percpu IOVA rcaches and then retry,
but the global IOVA rcache is not freed. As a result we could
still see IOVA alloc failures even after the retry, since the global
rcache is holding the IOVAs, which can cause
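A sketch along the lines of the proposed fix: flush the global depot of
each rcache as well, so the retry can find the returned space (structure
follows drivers/iommu/iova.c; exact field names may differ by version):

static void free_global_cached_iovas(struct iova_domain *iovad)
{
	struct iova_rcache *rcache;
	unsigned long flags;
	int i, j;

	for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; i++) {
		rcache = &iovad->rcaches[i];
		spin_lock_irqsave(&rcache->lock, flags);
		for (j = 0; j < rcache->depot_size; ++j) {
			/* Return the magazine's PFNs to the rbtree, then
			 * free the magazine itself. */
			iova_magazine_free_pfns(rcache->depot[j], iovad);
			iova_magazine_free(rcache->depot[j]);
		}
		rcache->depot_size = 0;
		spin_unlock_irqrestore(&rcache->lock, flags);
	}
}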
Hi Jordan,
On 2020-09-23 20:33, Jordan Crouse wrote:
On Tue, Sep 22, 2020 at 11:48:17AM +0530, Sai Prakash Ranjan wrote:
From: Sharat Masetty
The last level system cache can be partitioned into 32 different
slices, of which the GPU has two slices preallocated. One slice is
used for caching GPU
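A sketch of how a GPU driver might claim its preallocated slices through
the Qualcomm LLCC API (error handling trimmed; the actual usage in the
series may differ):

#include <linux/soc/qcom/llcc-qcom.h>

struct llcc_slice_desc *gpu_llc, *gpuhtw_llc;

gpu_llc = llcc_slice_getd(LLCC_GPU);        /* slice for GPU buffers */
gpuhtw_llc = llcc_slice_getd(LLCC_GPUHTW);  /* slice for SMMU pagetable walks */

if (!IS_ERR(gpu_llc))
	llcc_slice_activate(gpu_llc);
if (!IS_ERR(gpuhtw_llc))
	llcc_slice_activate(gpuhtw_llc);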
On 27/09/2020 07:34, Lu Baolu wrote:
Hi,
The previous post of this series could be found here.
https://lore.kernel.org/linux-iommu/20200912032200.11489-1-baolu...@linux.intel.com/
This version introduces a new patch [4/7] to fix an issue reported here.
On Sat, Sep 26 2020 at 14:38, Vasily Gorbik wrote:
> On Fri, Sep 25, 2020 at 09:54:52AM -0400, Qian Cai wrote:
> Yes, as well as on mips and sparc, which also don't select FORCE_PCI.
> This seems to work for s390:
>
> diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
> index b0b7acf07eb8..41136fbe909b
On 9/18/2020 8:11 PM, Robin Murphy wrote:
> On 2020-08-20 13:49, vji...@codeaurora.org wrote:
>> From: Vijayanand Jitta
>>
>> Whenever an iova alloc request fails we free the iova
>> ranges present in the percpu iova rcaches and then retry,
>> but the global iova rcache is not freed; as a result
On Mon, Sep 28, 2020 at 05:56:55PM +0530, Sai Prakash Ranjan wrote:
> Hi Jordan,
>
> On 2020-09-23 20:33, Jordan Crouse wrote:
> >On Tue, Sep 22, 2020 at 11:48:17AM +0530, Sai Prakash Ranjan wrote:
> >>From: Sharat Masetty
> >>
>>The last level system cache can be partitioned into 32 different
>
On Tue, Sep 22, 2020 at 03:16:48PM +0100, Robin Murphy wrote:
> Midgard GPUs have ACE-Lite master interfaces which allow systems to
> integrate them in an I/O-coherent manner. It seems that from the GPU's
> viewpoint, the rest of the system is its outer shareable domain, and so
> even when snoop
On 2020-09-28 21:41, Jordan Crouse wrote:
On Mon, Sep 28, 2020 at 05:56:55PM +0530, Sai Prakash Ranjan wrote:
Hi Jordan,
On 2020-09-23 20:33, Jordan Crouse wrote:
>On Tue, Sep 22, 2020 at 11:48:17AM +0530, Sai Prakash Ranjan wrote:
>>From: Sharat Masetty
>>
>>The last level system cache can
On Sat, Sep 26, 2020 at 01:07:15AM -0700, Nicolin Chen wrote:
> tegra_smmu_group_get() was added to group devices in different
> SWGROUPs, and it'd return a NULL group pointer upon a mismatch at
> tegra_smmu_find_group(); so for most clients/devices it would very
> likely mismatch and need a
On Sat, Sep 26, 2020 at 05:48:17PM +0300, Dmitry Osipenko wrote:
> 26.09.2020 11:07, Nicolin Chen wrote:
> ...
> > + /* NULL smmu pointer means that SMMU driver is not probed yet */
> > + if (unlikely(!smmu))
> > + return ERR_PTR(-EPROBE_DEFER);
>
> Hello, Nicolin!
>
> Please don't
On Sat, Sep 26, 2020 at 01:07:18AM -0700, Nicolin Chen wrote:
> This patch simply adds support for PCI devices.
>
> Signed-off-by: Nicolin Chen
> ---
> drivers/iommu/tegra-smmu.c | 17 ++++++++++++++++-
> 1 file changed, 16 insertions(+), 1 deletion(-)
>
> diff --git
On Sat, Sep 26, 2020 at 01:07:17AM -0700, Nicolin Chen wrote:
> The tegra_smmu_probe_device() function searches in DT for the iommu
> phandle to get the "smmu" pointer. This works for most SMMU clients
> that exist in the DTB. But a PCI device will not be added to iommu,
> since it doesn't have a
Hi Thierry,
Thanks for the review.
On Mon, Sep 28, 2020 at 09:13:56AM +0200, Thierry Reding wrote:
> > -static struct iommu_group *tegra_smmu_group_get(struct tegra_smmu *smmu,
> > - unsigned int swgroup)
> > +static struct iommu_group
Hi Will,
On Fri, Sep 18, 2020 at 12:18:40PM +0200, Jean-Philippe Brucker wrote:
> This is version 10 of the page table sharing support for Arm SMMUv3.
> Patch 1 still needs an Ack from mm maintainers. However patches 4-11 do
> not depend on it, and could get merged for v5.10 regardless.
Are you
Hi Joerg,
Just wondering if you will be able to take this for v5.10? There haven't
been any material changes since we last discussed this at LPC. We have VFIO
and other vSVA patches depending on it.
Thanks!
Jacob
On Fri, 25 Sep 2020 09:32:41 -0700, Jacob Pan
wrote:
> IOMMU user API header was
Hi Jean-Philippe,
On Mon, Sep 28, 2020 at 06:47:31PM +0200, Jean-Philippe Brucker wrote:
> On Fri, Sep 18, 2020 at 12:18:40PM +0200, Jean-Philippe Brucker wrote:
> > This is version 10 of the page table sharing support for Arm SMMUv3.
> > Patch 1 still needs an Ack from mm maintainers. However
Relations among IOASID users largely follow a publisher-subscriber
pattern. E.g. to support guest SVA on Intel Scalable I/O Virtualization
(SIOV) enabled platforms, VFIO, IOMMU, device drivers, and KVM are all
users of IOASIDs. When a state change occurs, VFIO publishes the change
event that needs to
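A sketch of the publisher-subscriber flow using a standard notifier
chain; the event code and helper names here are illustrative, not the
actual ioasid API:

#include <linux/notifier.h>

static BLOCKING_NOTIFIER_HEAD(ioasid_chain);

enum { IOASID_NOTIFY_FREE = 1 };	/* hypothetical event code */

/* Subscriber side: e.g. KVM registers interest in IOASID state changes */
int ioasid_register_notifier(struct notifier_block *nb)
{
	return blocking_notifier_chain_register(&ioasid_chain, nb);
}

/* Publisher side: e.g. VFIO announces that an IOASID is being freed */
void ioasid_notify_free(unsigned int ioasid)
{
	blocking_notifier_call_chain(&ioasid_chain, IOASID_NOTIFY_FREE,
				     (void *)(unsigned long)ioasid);
}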
IOASID is used to identify address spaces that can be targeted by device
DMA. It is a system-wide resource that is essential to its many users.
This document is an attempt to help developers from all vendors navigate
the APIs. At this time, ARM SMMU and Intel’s Scalable IO Virtualization
(SIOV)
Each ioasid_set is given a quota during allocation. As system
administrators balance resources among VMs, we shall support the
adjustment of quota at runtime. The new quota cannot be less than the
outstanding IOASIDs already allocated within the set. The extra quota
will be returned to the
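A minimal sketch of the runtime rule described above (helper name,
locking and fields are all illustrative):

int ioasid_adjust_quota(struct ioasid_set *set, unsigned int new_quota)
{
	int ret = 0;

	spin_lock(&ioasid_allocator_lock);
	if (new_quota < set->nr_ioasids)	/* cannot go below what's allocated */
		ret = -EINVAL;
	else
		set->quota = new_quota;
	spin_unlock(&ioasid_allocator_lock);

	return ret;
}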
When an IOASID set is used for guest SVA, each VM will acquire its
ioasid_set for IOASID allocations. IOASIDs within the VM must have a
host/physical IOASID backing, and the mapping between guest and host
IOASIDs can be non-identical. The IOASID set private ID (SPID) is
introduced in this patch to be used as
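A sketch of the mapping that SPID enables; the function names follow the
cover letter but the exact signatures are assumptions:

/* Bind: record the guest PASID as the set-private ID of the host PASID */
ioasid_attach_spid(host_pasid, guest_pasid);

/* Lookup: translate a guest PASID back to a host PASID within the VM's set */
host_pasid = ioasid_find_by_spid(vm_set, guest_pasid);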
Now that the IOASID core keeps track of the IOASID to mm_struct ownership
in the form of an ioasid_set with the IOASID_SET_TYPE_MM token type, there
is no need to keep the same mapping in VT-d driver-specific data. Native
SVM usage is not affected by the change.
Signed-off-by: Jacob Pan
---
Rename ioasid_set_data() to ioasid_attach_data() to avoid confusion with
struct ioasid_set. ioasid_set is a group of IOASIDs that share a common
token.
Reviewed-by: Jean-Philippe Brucker
Signed-off-by: Jacob Pan
---
drivers/iommu/intel/svm.c | 6 +++---
drivers/iommu/ioasid.c | 6 +++---
IOASID private data can be cleared by ioasid_attach_data() with a NULL
data pointer. A common use case is for a caller to free the data
afterward. ioasid_attach_data() calls synchronize_rcu() before returning,
so that the data can be freed safely with no outstanding readers.
However, since
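A sketch of the RCU pattern at play (conceptual, using standard RCU
primitives):

/* Inside ioasid_attach_data(), conceptually: */
rcu_assign_pointer(ioasid_data->private, data);
synchronize_rcu();	/* wait out all current rcu_read_lock() readers */

/* Caller side: after clearing, the old data can be freed safely */
ioasid_attach_data(ioasid, NULL);
kfree(old_data);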
ioasid_set was introduced as an arbitrary token that is shared by a
group of IOASIDs. For example, two IOASIDs allocated via the same
ioasid_set pointer belong to the same set.
For guest SVA usages, system-wide IOASID resources need to be
partitioned such that each VM can have its own quota and
Users of an ioasid_set may not keep track of all the IOASIDs allocated
under the set. When a collective action is needed on each IOASID, it
is useful to iterate over all the IOASIDs within the set. For example,
when the ioasid_set is freed, the user might perform the same cleanup
operation on
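A sketch of such an iterator, assuming an xarray-backed set (the real
helper and storage layout may differ):

void ioasid_set_for_each_ioasid(struct ioasid_set *set,
				void (*fn)(ioasid_t id, void *data),
				void *data)
{
	void *entry;
	unsigned long index;

	xa_for_each(&set->xa, index, entry)
		fn(index, data);
}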
IOASID core maintains the guest-host mapping in the form of SPID and
IOASID. This patch assigns the guest PASID (if valid) as SPID while
binding guest page table with a host PASID. This mapping will be used
for lookup and notifications.
Signed-off-by: Jacob Pan
---
drivers/iommu/intel/svm.c | 2
IOASID was introduced in v5.5 as a generic kernel allocator service for
both PCIe Process Address Space ID (PASID) and ARM SMMU's Sub Stream
ID. In addition to basic ID allocation, ioasid_set was defined as a
token that is shared by a group of IOASIDs. This set token can be used
for permission
As a system-wide resource, IOASID is often shared by multiple kernel
subsystems that are independent of each other. However, at the
ioasid_set level, these kernel subsystems must communicate with each
other for ownership checking, event notifications, etc. For example, on
Intel Scalable IO
On Mon, 28 Sep 2020 21:50:34 +0200
Eric Auger wrote:
> VFIO currently exposes the usable IOVA regions through the
> VFIO_IOMMU_GET_INFO ioctl / VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE
> capability. However it fails to take into account the dma_mask
> of the devices within the container. The top
On Mon, 28 Sep 2020 16:32:02 +0800, Zhou Wang wrote:
> In arm_smmu_evtq_thread, the event queue is read via the consumer pointer,
> which has no address dependency on the producer pointer, so the prod_reg
> read (MMIO) and the event queue read (Normal memory) can be reordered. So
> the load for the event queue can be done before
Hi, Will and Jean,
On Mon, Sep 28, 2020 at 11:22:51PM +0100, Will Deacon wrote:
> On Fri, Sep 18, 2020 at 12:18:41PM +0200, Jean-Philippe Brucker wrote:
> > From: Fenghua Yu
> >
> > PASID is shared by all threads in a process. So the logical place to keep
> > track of it is in the "mm". Both
On Mon, Sep 21, 2020 at 09:45:57PM +0100, Will Deacon wrote:
> On Tue, Sep 22, 2020 at 03:13:53AM +0800, kernel test robot wrote:
> > Thank you for the patch! Perhaps something to improve:
> >
> > [auto build test WARNING on iommu/next]
> > [also build test WARNING on linus/master v5.9-rc6
Now that the IOVA regions beyond the dma_mask and the VFIO aperture are
removed from the usable IOVA ranges, the API becomes reliable for
computing the max IOVA. Let's advertise this by using a new version
of the capability.
Signed-off-by: Eric Auger
---
drivers/vfio/vfio_iommu_type1.c | 2 +-
1 file
VFIO currently exposes the usable IOVA regions through the
VFIO_IOMMU_GET_INFO ioctl / VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE
capability. However it fails to take into account the dma_mask
of the devices within the container. The top limit currently is
defined by the iommu aperture.
So, for
VFIO currently exposes the usable IOVA regions through the
VFIO_IOMMU_GET_INFO ioctl. However it fails to take into account
the dma_mask of the devices within the container. The top limit
currently is defined by the iommu aperture.
So, for instance, if the IOMMU supports up to 48 bits, it may give
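A sketch of the clamping idea: the usable top end is bounded by both the
IOMMU aperture and the narrowest dma_mask in the container (illustrative
fragment):

u64 top = aperture_end;		/* from the iommu domain geometry */

if (dev->dma_mask)
	top = min_t(u64, top, *dev->dma_mask);	/* e.g. a 39-bit device
						   clamps a 48-bit aperture */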
On Wed, Sep 23, 2020 at 08:32:43AM +0200, Auger Eric wrote:
> On 9/21/20 10:45 PM, Will Deacon wrote:
> > On Mon, Sep 14, 2020 at 11:13:07AM -0700, Vennila Megavannan wrote:
> >> From: Srinath Mannam
> >>
> >> Add provision to change the default value of the MSI IOVA base to the
> >> platform's suitable IOVA
On Mon, Sep 28, 2020 at 09:52:12AM +0200, Thierry Reding wrote:
> On Sat, Sep 26, 2020 at 01:07:17AM -0700, Nicolin Chen wrote:
> > @@ -13,6 +13,7 @@
> > #include
> > #include
> > #include
> > +#include
>
> Why is this needed? I don't see any of the symbols declared in that file
> used
On Fri, Sep 18, 2020 at 12:18:41PM +0200, Jean-Philippe Brucker wrote:
> From: Fenghua Yu
>
> PASID is shared by all threads in a process. So the logical place to keep
> track of it is in the "mm". Both ARM and X86 need to use the PASID in the
> "mm".
>
> Suggested-by: Christoph Hellwig
>
On Sun, Sep 27, 2020 at 01:18:15AM +0300, Dmitry Osipenko wrote:
> 26.09.2020 11:07, Nicolin Chen wrote:
> ...
> > +#ifdef CONFIG_PCI
> > + if (!iommu_present(&pci_bus_type)) {
>
> Is this iommu_present() check really needed?
>
> > + pci_request_acs();
>
> Shouldn't pci_request_acs() be
On Mon, Sep 28, 2020 at 09:55:45AM +0200, Thierry Reding wrote:
> On Sat, Sep 26, 2020 at 01:07:18AM -0700, Nicolin Chen wrote:
> > +#ifdef CONFIG_PCI
> > + if (!iommu_present(&pci_bus_type)) {
> > + pci_request_acs();
> > + err = bus_set_iommu(&pci_bus_type, &tegra_smmu_ops);
> > +
On Mon, Sep 28, 2020 at 06:23:15PM +0100, Will Deacon wrote:
> On Mon, Sep 28, 2020 at 06:47:31PM +0200, Jean-Philippe Brucker wrote:
> > On Fri, Sep 18, 2020 at 12:18:40PM +0200, Jean-Philippe Brucker wrote:
> > > This is version 10 of the page table sharing support for Arm SMMUv3.
> > > Patch 1
This is used to protect a potential race condition at use_count,
since probes of client drivers, which call attach_dev(), may run
concurrently.
Signed-off-by: Nicolin Chen
---
Changelog
v1->v2:
* N/A
drivers/iommu/tegra-smmu.c | 34 +-
1 file changed, 21
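A sketch of the serialization in PATCH-2, mirroring the hunks quoted
elsewhere in the thread (assumes a 'lock' mutex added to struct
tegra_smmu; label naming follows a later revision):

static int tegra_smmu_as_prepare(struct tegra_smmu *smmu,
				 struct tegra_smmu_as *as)
{
	int err = 0;

	mutex_lock(&smmu->lock);

	if (as->use_count > 0) {
		as->use_count++;
		goto unlock;
	}

	/* ... map the page directory and program the ASID ... */

	as->smmu = smmu;
	as->use_count++;

unlock:
	mutex_unlock(&smmu->lock);
	return err;
}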
Two followup patches for tegra-smmu:
PATCH-1 is a clean-up patch for the recently applied SWGROUP change.
PATCH-2 fixes a potential race condition
Changelog
v1->v2:
* Separated first two changes of V1 so they may get applied first,
since the other three changes need some extra time to
tegra_smmu_group_get() was added to group devices in different
SWGROUPs, and it'd return a NULL group pointer upon a mismatch at
tegra_smmu_find_group(); so for most clients/devices it would very
likely mismatch and need a fallback generic_device_group().
But now tegra_smmu_group_get handles
If of_find_device_by_node() succeeds, qcom_iommu_of_xlate() doesn't have
a corresponding put_device(). Thus add put_device() to fix the exception
handling in this function.
Fixes: 0ae349a0f33fb ("iommu/qcom: Add qcom_iommu")
Signed-off-by: Yu Kuai
---
Changes in V2:
- Fix wrong
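A sketch of the fix pattern (simplified from drivers/iommu/qcom_iommu.c;
surrounding error paths trimmed):

static int qcom_iommu_of_xlate(struct device *dev,
			       struct of_phandle_args *args)
{
	struct platform_device *iommu_pdev = of_find_device_by_node(args->np);
	struct qcom_iommu_dev *qcom_iommu;

	if (!iommu_pdev)
		return -EINVAL;

	qcom_iommu = platform_get_drvdata(iommu_pdev);
	if (!qcom_iommu) {
		/* of_find_device_by_node() took a reference: drop it */
		put_device(&iommu_pdev->dev);
		return -EPROBE_DEFER;
	}

	return iommu_fwspec_add_ids(dev, &args->args[0], 1);
}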
Hi Tvrtko,
On 9/28/20 5:44 PM, Tvrtko Ursulin wrote:
On 27/09/2020 07:34, Lu Baolu wrote:
Hi,
The previous post of this series could be found here.
https://lore.kernel.org/linux-iommu/20200912032200.11489-1-baolu...@linux.intel.com/
This version introduces a new patch [4/7] to fix an
...
> static bool tegra_smmu_capable(enum iommu_cap cap)
> @@ -420,17 +413,21 @@ static int tegra_smmu_as_prepare(struct tegra_smmu
> *smmu,
>struct tegra_smmu_as *as)
> {
> u32 value;
> - int err;
> + int err = 0;
> +
> + mutex_lock(&smmu->lock);
>
On Tue, Sep 29, 2020 at 03:17:58AM +0300, Dmitry Osipenko wrote:
> ...
> > static bool tegra_smmu_capable(enum iommu_cap cap)
> > @@ -420,17 +413,21 @@ static int tegra_smmu_as_prepare(struct tegra_smmu
> > *smmu,
> > struct tegra_smmu_as *as)
> > {
> > u32
On 2020/09/29 7:08, Will Deacon wrote:
On Mon, Sep 21, 2020 at 09:45:57PM +0100, Will Deacon wrote:
On Tue, Sep 22, 2020 at 03:13:53AM +0800, kernel test robot wrote:
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on iommu/next]
[also build test WARNING on
Hi Joerg
On Fri, Sep 25, 2020 at 09:34:23AM +0200, Joerg Roedel wrote:
> Hi Ashok,
>
> On Thu, Sep 24, 2020 at 10:21:48AM -0700, Raj, Ashok wrote:
> > Just trying to followup on this series.
> >
> > Sai has moved out of Intel, hence I'm trying to followup on his behalf.
> >
> > Let me know if
...
>> As I mentioned in another reply, I think tegra_smmu_find() should be all
>> you need in this case.
>
> This function is used by .probe_device() where its dev pointer is
> an SMMU client. IIUC, tegra_smmu_find() needs the np pointer of "mc".
> For a PCI device that doesn't have a DT node with
This is used to protect a potential race condition at use_count,
since probes of client drivers, which call attach_dev(), may run
concurrently.
Signed-off-by: Nicolin Chen
---
Changelog
v2->v3:
* Renamed label "err_unlock" to "unlock"
v1->v2:
* N/A
drivers/iommu/tegra-smmu.c | 34
Two followup patches for tegra-smmu:
PATCH-1 is a clean-up patch for the recently applied SWGROUP change.
PATCH-2 fixes a potential race condition
Changelog
v2->v3:
* PATCH-2: renamed "err_unlock" to "unlock"
v1->v2:
* Separated first two changes of V1 so they may get applied first,
since the
tegra_smmu_group_get() was added to group devices in different
SWGROUPs, and it'd return a NULL group pointer upon a mismatch at
tegra_smmu_find_group(); so for most clients/devices it would very
likely mismatch and need a fallback generic_device_group().
But now tegra_smmu_group_get handles
On Tue, Sep 29, 2020 at 07:06:37AM +0300, Dmitry Osipenko wrote:
> ...
> >> As I mentioned in another reply, I think tegra_smmu_find() should be all
> >> you need in this case.
> >
> > This function is used by .probe_device() where its dev pointer is
> > an SMMU client. IIUC, tegra_smmu_find()