On 16/07/2021 09:34, Shameer Kolothum wrote:
> From: Jon Nettleton
>
> Check if there is any RMR info associated with the devices behind
> the SMMU and if any, install bypass SMRs for them. This is to
> keep any ongoing traffic associated with these devices alive
> when we enable/reset SMMU
On 2021-07-16 07:24, Christoph Hellwig wrote:
On Wed, Jul 14, 2021 at 07:19:50PM +0100, Robin Murphy wrote:
Even at the DMA API level you could hide *some* of it (at the cost of
effectively only having 1/4 of the usable address space), but there are
still cases like where v4l2 has a hard
> Technically this looks good. But given that exposing a helper
> that does either vmalloc_to_page or virt_to_page is one of the
> never ending MM discussions I don't want to get into that discussion
> and just keep it local in the DMA code.
>
> Are you fine with me applying this version?
Looks
On 2021-07-16 12:33 a.m., Christoph Hellwig wrote:
> On Thu, Jul 15, 2021 at 10:45:44AM -0600, Logan Gunthorpe wrote:
>> @@ -194,6 +194,8 @@ static int __dma_map_sg_attrs(struct device *dev, struct
>> scatterlist *sg,
>> else
>> ents = ops->map_sg(dev, sg, nents, dir, attrs);
On Thu, 2021-07-15 at 10:45 -0600, Logan Gunthorpe wrote:
> From: Martin Oliveira
>
> The .map_sg() op now expects an error code instead of zero on failure.
>
> So propagate the error from __s390_dma_map_sg() up.
>
> Signed-off-by: Martin Oliveira
> Signed-off-by: Logan Gunthorpe
> Cc:
On Fri, Jul 2, 2021 at 8:05 AM Dmitry Osipenko wrote:
>
> 23.04.2021 19:32, Thierry Reding wrote:
> > +void of_iommu_get_resv_regions(struct device *dev, struct list_head *list)
> > +{
> > + struct of_phandle_iterator it;
> > + int err;
> > +
> > + of_for_each_phandle(&it, err,
On Fri, 16 Jul 2021, Roman Skakun wrote:
> > Technically this looks good. But given that exposing a helper
> > that does either vmalloc_to_page or virt_to_page is one of the
> > never ending MM discussions I don't want to get into that discussion
> > and just keep it local in the DMA code.
> >
>
Add support for parsing RMR node information from ACPI.
Find the associated streamid and smmu node info from the
RMR node and populate a linked list with RMR memory
descriptors.
Signed-off-by: Shameer Kolothum
---
drivers/acpi/arm64/iort.c | 134 +-
1 file
Reserved Memory Regions (RMR) associated with an IOMMU can be
described through ACPI IORT tables in systems with devices
that require a unity mapping or bypass for those
regions.
Introduce a generic interface so that IOMMU drivers can retrieve
and set up necessary mappings.
Signed-off-by: Shameer
Introduce a helper to check the sid range and to init the l2 strtab
entries (bypass). This will be useful when we have to initialize the
l2 strtab with bypass for RMR SIDs.
Signed-off-by: Shameer Kolothum
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 28 +++--
1 file changed,
By default, the disable_bypass flag is set and any device without
an iommu domain installs an STE with CFG_ABORT during
arm_smmu_init_bypass_stes(). Introduce a "force" flag and
move the STE update logic into arm_smmu_init_bypass_stes()
so that we can force it to install CFG_BYPASS STEs for specific
SIDs.
A union is introduced to struct iommu_resv_region to hold
any firmware specific data. This is in preparation to add
support for IORT RMR reserve regions and the union now holds
the RMR specific information.
Signed-off-by: Shameer Kolothum
---
include/linux/iommu.h | 11 +++
1 file
Add a helper function (iort_iommu_get_rmrs()) that retrieves RMR
memory descriptors associated with a given IOMMU. This will be used
by IOMMU drivers to set up necessary mappings.
Invoke it from the generic helper iommu_dma_get_rmrs().
Signed-off-by: Shameer Kolothum
---
Hi,
Major Changes from v5:
- Addressed comments from Robin & Lorenzo.
: Moved iort_parse_rmr() to acpi_iort_init() from
iort_init_platform_devices().
: Removed use of struct iort_rmr_entry during the initial
parse. Using struct iommu_resv_region instead.
: Report RMR address
From: Jon Nettleton
Check if there is any RMR info associated with the devices behind
the SMMU and if any, install bypass SMRs for them. This is to
keep any ongoing traffic associated with these devices alive
when we enable/reset SMMU during probe().
Signed-off-by: Jon Nettleton
Signed-off-by:
Check if there is any RMR info associated with the devices behind
the SMMUv3 and if any, install bypass STEs for them. This is to
keep any ongoing traffic associated with these devices alive
when we enable/reset SMMUv3 during probe().
Signed-off-by: Shameer Kolothum
---
Get the ACPI IORT RMR regions associated with a device reserved
so that there is a unity mapping for them in the SMMU.
Signed-off-by: Shameer Kolothum
---
drivers/iommu/dma-iommu.c | 56 +++
1 file changed, 51 insertions(+), 5 deletions(-)
diff --git
From: Roman Skakun
This commit fixes an incorrect conversion from cpu_addr to a
page address in cases where we get a virtual address that was
allocated in the vmalloc range. As a result, virt_to_page()
cannot convert such an address properly and returns an incorrect
page address. We need to detect
Technically this looks good. But given that exposing a helper
that does either vmalloc_to_page or virt_to_page is one of the
never ending MM discussions I don't want to get into that discussion
and just keep it local in the DMA code.
Are you fine with me applying this version?
---
From
On 2021-07-16 07:19, Christoph Hellwig wrote:
On Thu, Jul 15, 2021 at 03:16:08PM +0100, Robin Murphy wrote:
On 2021-07-15 15:07, Christoph Hellwig wrote:
On Thu, Jul 15, 2021 at 02:04:24PM +0100, Robin Murphy wrote:
If people are going to insist on calling iommu_iova_to_phys()
pointlessly and
On 2021/7/16 9:20, Tian, Kevin wrote:
> To summarize, for vIOMMU we can work with the spec owner to
> define a proper interface to feedback such restriction into the guest
> if necessary. For the kernel part, it's clear that IOMMU fd should
> disallow two devices attached to a single [RID] or
On Fri, Jul 16, 2021 at 01:24:31PM +0900, David Stevens wrote:
> From: David Stevens
>
> Fix RW protection check when making a pte, so that it properly checks
> that both R and W flags are set, instead of either R or W.
>
> Signed-off-by: David Stevens
Acked-by: Maxime Ripard
Thanks!
Maxime
On 2021-07-16 07:32, Christoph Hellwig wrote:
On Thu, Jul 15, 2021 at 10:45:42AM -0600, Logan Gunthorpe wrote:
@@ -458,7 +460,7 @@ static int gart_map_sg(struct device *dev, struct
scatterlist *sg, int nents,
iommu_full(dev, pages << PAGE_SHIFT, dir);
for_each_sg(sg, s, nents,
On 2021-07-16 07:33, Christoph Hellwig wrote:
On Thu, Jul 15, 2021 at 10:45:44AM -0600, Logan Gunthorpe wrote:
@@ -194,6 +194,8 @@ static int __dma_map_sg_attrs(struct device *dev, struct
scatterlist *sg,
else
ents = ops->map_sg(dev, sg, nents, dir, attrs);
+
On Fri, Jul 16, 2021 at 3:52 PM Steven Price wrote:
>
> On 16/07/2021 09:34, Shameer Kolothum wrote:
> > From: Jon Nettleton
> >
> > Check if there is any RMR info associated with the devices behind
> > the SMMU and if any, install bypass SMRs for them. This is to
> > keep any ongoing traffic
On Fri, Jul 16, 2021 at 01:20:15AM +, Tian, Kevin wrote:
> One thought is to have vfio device driver deal with it. In this proposal
> it is the vfio device driver to define the PASID virtualization policy and
> report it to userspace via VFIO_DEVICE_GET_INFO. The driver understands
> the
On Tue 06 Jul 23:53 CDT 2021, John Stultz wrote:
> Allow the qcom_scm driver to be loadable as a permanent module.
>
> This still uses the "depends on QCOM_SCM || !QCOM_SCM" bit to
> ensure that drivers that call into the qcom_scm driver are
> also built as modules. While not ideal in some cases
On Thu, Jul 15, 2021 at 03:16:08PM +0100, Robin Murphy wrote:
> On 2021-07-15 15:07, Christoph Hellwig wrote:
> > On Thu, Jul 15, 2021 at 02:04:24PM +0100, Robin Murphy wrote:
> > > If people are going to insist on calling iommu_iova_to_phys()
> > > pointlessly and expecting it to work,
> >
> >
On Wed, Jul 14, 2021 at 07:19:50PM +0100, Robin Murphy wrote:
> Even at the DMA API level you could hide *some* of it (at the cost of
> effectively only having 1/4 of the usable address space), but there are
> still cases like where v4l2 has a hard requirement that a page-aligned
> scatterlist can
On Thu, Jul 15, 2021 at 10:45:29AM -0600, Logan Gunthorpe wrote:
> + * dma_map_sgtable() will return the error code returned and convert
> + * a zero return (for legacy implementations) into -EINVAL.
> + *
> + * dma_map_sg() will always return zero on any negative or zero
> +
Careful here. What do all these errors from the low-level code mean
here? I think we need to clearly standardize on what we actually
return from ->map_sg and possibly document what the callers expect and
can do, and enforce that only those errors are reported.
On Thu, Jul 15, 2021 at 10:45:44AM -0600, Logan Gunthorpe wrote:
> @@ -194,6 +194,8 @@ static int __dma_map_sg_attrs(struct device *dev, struct
> scatterlist *sg,
> else
> ents = ops->map_sg(dev, sg, nents, dir, attrs);
>
> + WARN_ON_ONCE(ents == 0);
Turns this into a
On Thu, Jul 15, 2021 at 10:45:42AM -0600, Logan Gunthorpe wrote:
> @@ -458,7 +460,7 @@ static int gart_map_sg(struct device *dev, struct
> scatterlist *sg, int nents,
> iommu_full(dev, pages << PAGE_SHIFT, dir);
> for_each_sg(sg, s, nents, i)
> s->dma_address =
33 matches