Re: [RFC 0/7] Support in-kernel DMA with PASID and SVA

2021-10-06 Thread Barry Song
On Tue, Oct 5, 2021 at 7:21 AM Jason Gunthorpe  wrote:
>
> On Mon, Oct 04, 2021 at 09:40:03AM -0700, Jacob Pan wrote:
> > Hi Barry,
> >
> > On Sat, 2 Oct 2021 01:45:59 +1300, Barry Song <21cn...@gmail.com> wrote:
> >
> > > >
> > > > > I assume KVA mode can avoid this iotlb flush as the device is using
> > > > > the page table of the kernel and sharing the whole kernel space. But
> > > > > will users be glad to accept this mode?
> > > >
> > > > You can avoid the lock by identity mapping the physical address space
> > > > of the kernel and making map/unmap a NOP.
> > > >
> > > > KVA is just a different way to achieve this identity map with slightly
> > > > different security properties than the normal way, but it doesn't
> > > > reach the same security level as proper map/unmap.
> > > >
> > > > I'm not sure anyone who cares about DMA security would see value in
> > > > the slight difference between KVA and a normal identity map.
> > >
> > > Yes, this is an important question. If users want a high security level,
> > > KVA might not be their choice; if users don't want the security, they are
> > > already using iommu passthrough. So when will users choose KVA?
> > Right, KVA sits in the middle in terms of performance and security.
> > Performance is better than IOVA because it avoids the IOTLB flush, as you
> > mentioned, and is also not too far behind pass-through.
>
> The IOTLB flush is not on a DMA path but on a vmap path, so it is very
> hard to compare the two things. Maybe vmap can be made to do lazy
> IOTLB flush or something, and then it could be closer.
>
> > Security-wise, KVA respects kernel mapping. So permissions are better
> > enforced than pass-through and identity mapping.
>
> Is this meaningful? Isn't the entire physical map still in the KVA, and
> isn't it entirely RW?

Some areas are RX; for example, ARM64 supports KERNEL_TEXT_RDONLY.
But the difference is really minor.

So do we have a case where devices can directly access the kernel's data
structures, such as a list/graph/tree with pointers to kernel virtual addresses?
Then devices wouldn't need to translate the addresses of pointers within a
structure. I assume this is one of the most useful features userspace SVA can
provide.

But do we have a case where accelerators/GPUs want to use the complex data
structures of kernel drivers?
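
To make the question concrete, here is a rough sketch (hypothetical, not taken
from any existing driver) of the kind of pointer-chasing structure meant above,
and what conventional streaming DMA would force instead:

#include <linux/types.h>
#include <linux/dma-mapping.h>

/*
 * Hypothetical descriptor chain handed to an accelerator.  With a shared
 * kernel address space (KVA/SVA), the device can dereference 'payload'
 * and 'next' directly, since they are plain kernel virtual addresses.
 */
struct accel_desc {
	u64			opcode;
	void			*payload;	/* kernel virtual address */
	struct accel_desc	*next;		/* kernel virtual address */
};

/*
 * With conventional map/unmap, every pointer the device follows would
 * first have to be translated (e.g. via dma_map_single()) and the chain
 * rebuilt in IOVA space:
 */
struct accel_desc_iova {
	u64		opcode;
	dma_addr_t	payload;	/* IOVA from dma_map_single() */
	dma_addr_t	next;		/* IOVA of the next descriptor */
};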

>
> Jason

Thanks
barry


Re: [PATCH v2 11/29] iommu/mediatek: Always pm_runtime_get while tlb flush

2021-10-06 Thread Yong Wu
On Thu, 2021-09-30 at 13:26 +0200, Dafna Hirschfeld wrote:
> 
> On 13.08.21 08:53, Yong Wu wrote:
> > Prepare for 2 HWs that share a pgtable in different power-domains.
> > 
> > The previous SoCs don't have PM. Only mt8192 has a power-domain, and
> > it is the display's power-domain, which is nearly always enabled.
> 
> Hi, I see that in mt1873.dtsi, many devices that use the iommu have
> the 'power-domains' property.

Sorry, I didn't make this clear. I mean the iommu device doesn't have
this property, unlike the other devices.

> 
> > 
> > When there are 2 M4U HWs, there may be a problem.
> > In this function, we get the pm status via the m4u dev, but it
> > doesn't reflect the real power-domain status of the HW, since other
> > HW may also use that power-domain.
> > 
> > Currently we cannot get the real power-domain status, thus always
> > pm_runtime_get here.
> > 
> > This prepares for mt8195; thus, no need for Fixes tags here.
> > 
> > This patch may reduce performance; we expect the user to call
> > pm_runtime_get_sync before dma_alloc_attrs, which needs tlb ops.
> > 
> 
> Could you explain this sentence a bit? Should the user call
> pm_runtime_get_sync before calling dma_alloc_attrs?

In v3, I have removed this patch. Use [1] instead.

[1] 
https://lore.kernel.org/linux-mediatek/20210923115840.17813-13-yong...@mediatek.com/

Thanks.

> 
> Thanks,
> Dafna
> 
> > Signed-off-by: Yong Wu 
> > ---
> >   drivers/iommu/mtk_iommu.c | 5 -
> >   1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
> > index add23a36a5e2..abc721a1da21 100644
> > --- a/drivers/iommu/mtk_iommu.c
> > +++ b/drivers/iommu/mtk_iommu.c
> > @@ -238,8 +238,11 @@ static void mtk_iommu_tlb_flush_range_sync(unsigned long iova, size_t size,
> >  
> >  	for_each_m4u(data, head) {
> >  		if (has_pm) {
> > -			if (pm_runtime_get_if_in_use(data->dev) <= 0)
> > +			ret = pm_runtime_resume_and_get(data->dev);
> > +			if (ret < 0) {
> > +				dev_err(data->dev, "tlb flush: pm get fail %d.\n", ret);
> >  				continue;
> > +			}
> >  		}
> >  
> >  		spin_lock_irqsave(&data->tlb_lock, flags);
> > 
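
For reference, a minimal sketch of the caller pattern the commit message above
expects (a generic consumer driver; names are illustrative, not actual MediaTek
code): the consumer holds a runtime-PM reference across the allocation that may
trigger TLB maintenance.

#include <linux/dma-mapping.h>
#include <linux/pm_runtime.h>

/* Hypothetical consumer of the IOMMU: resume the device before
 * dma_alloc_attrs() so the shared power-domain is up when the IOMMU
 * performs its TLB ops. */
static void *demo_alloc_dma_buffer(struct device *dev, size_t size,
				   dma_addr_t *dma)
{
	void *cpu_addr;
	int ret;

	ret = pm_runtime_get_sync(dev);
	if (ret < 0) {
		pm_runtime_put_noidle(dev);
		return NULL;
	}

	cpu_addr = dma_alloc_attrs(dev, size, dma, GFP_KERNEL, 0);

	pm_runtime_put(dev);
	return cpu_addr;
}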


Re: [PATCH v8 09/12] media: mtk-vcodec: Get rid of mtk_smi_larb_get/put

2021-10-06 Thread Yong Wu
On Thu, 2021-09-30 at 12:57 +0200, Dafna Hirschfeld wrote:
> 
> On 30.09.21 05:28, Yong Wu wrote:
> > Hi Dafna,
> > 
> > Thanks very much for the review.
> > 
> > On Wed, 2021-09-29 at 14:13 +0200, Dafna Hirschfeld wrote:
> > > 
> > > On 29.09.21 03:37, Yong Wu wrote:
> > > > MediaTek IOMMU has already added the device_link between the consumer
> > > > and smi-larb device. If the vcodec device calls pm_runtime_get_sync,
> > > > the smi-larb's pm_runtime_get_sync will also be called automatically.
> > > > 
> > > > CC: Tiffany Lin 
> > > > CC: Irui Wang 
> > > > Signed-off-by: Yong Wu 
> > > > Reviewed-by: Evan Green 
> > > > Acked-by: Tiffany Lin 
> > > > Reviewed-by: Dafna Hirschfeld 
> > > > ---
> > > >    .../platform/mtk-vcodec/mtk_vcodec_dec_pm.c   | 37 +++-----
> > > >    .../platform/mtk-vcodec/mtk_vcodec_drv.h      |  3 --
> > > >    .../platform/mtk-vcodec/mtk_vcodec_enc.c      |  1 -
> > > >    .../platform/mtk-vcodec/mtk_vcodec_enc_pm.c   | 44 +++---

[snip]

> > > >void mtk_vcodec_release_dec_pm(struct mtk_vcodec_dev *dev)
> > > >{
> > > > pm_runtime_disable(dev->pm.dev);
> > > > -   put_device(dev->pm.larbvdec);
> > > >}
> > > 
> > > Now that this function only does 'pm_runtime_disable(dev->pm.dev);',
> > > it would be more readable to remove the function mtk_vcodec_release_dec_pm
> > > and replace it with pm_runtime_disable(dev->pm.dev) directly.
> > > Same for the 'enc' equivalent.
> > 
> > Makes sense. But it may not be proper to use pm_runtime_disable
> > directly, to keep the symmetry with mtk_vcodec_init_dec_pm in
> > mtk_vcodec_probe.
> > 
> > Maybe we should move pm_runtime_enable out of mtk_vcodec_init_dec_pm
> > into mtk_vcodec_probe. I could do a new patch for this. Is this OK
> > for you?
> 
> Yes, there is also asymmetry in the pm_runtime* calls in general:
> I see that in the decoder they are made from mtk_vcodec_dec_pm.c,
> but in the encoder from mtk_vcodec_enc.c.
> 
> I think all calls to pm_runtime* should be moved out of the *_pm.c files

OK. I will try this.
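
As a rough sketch of what that rearrangement could look like (hypothetical
code, not the actual mtk-vcodec driver):

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

/* Hypothetical probe/remove pair: runtime PM is enabled and disabled
 * here, rather than inside the init/release *_pm.c helpers, so the
 * helpers stay symmetric. */
static int demo_vcodec_probe(struct platform_device *pdev)
{
	/* clock/regulator lookup would stay in the init_pm helper */
	pm_runtime_enable(&pdev->dev);
	return 0;
}

static int demo_vcodec_remove(struct platform_device *pdev)
{
	pm_runtime_disable(&pdev->dev);
	return 0;
}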

> since, for example, 'mtk_vcodec_dec_pw_on' also does just one call to
> pm_runtime_resume_and_get, so that function can also be removed.

I guess this one should be left to the vcodec folks. I see this function
is changed in [1]. Let's keep this patchset clean.

[1] 
https://patchwork.kernel.org/project/linux-mediatek/patch/20210901083215.25984-10-yunfei.d...@mediatek.com/

> 
> thanks,
> Dafna
> 
> > 
> > > 
> > > Thanks,
> > > Dafna
> > 
> > [snip]


Re: [RFC 07/20] iommu/iommufd: Add iommufd_[un]bind_device()

2021-10-06 Thread David Gibson
On Fri, Oct 01, 2021 at 09:43:22AM -0300, Jason Gunthorpe wrote:
> On Thu, Sep 30, 2021 at 01:10:29PM +1000, David Gibson wrote:
> > On Wed, Sep 29, 2021 at 09:24:57AM -0300, Jason Gunthorpe wrote:
> > > On Wed, Sep 29, 2021 at 03:25:54PM +1000, David Gibson wrote:
> > > 
> > > > > +struct iommufd_device {
> > > > > + unsigned int id;
> > > > > + struct iommufd_ctx *ictx;
> > > > > + struct device *dev; /* always be the physical device */
> > > > > + u64 dev_cookie;
> > > > 
> > > > Why do you need both an 'id' and a 'dev_cookie'?  Since they're both
> > > > unique, couldn't you just use the cookie directly as the index into
> > > > the xarray?
> > > 
> > > ID is the kernel value in the xarray - xarray is much more efficient &
> > > safe with small kernel controlled values.
> > > 
> > > dev_cookie is a user-assigned value that may not be unique. Its
> > > purpose is to allow userspace to receive an event and go back to its
> > > own structure. Most likely userspace will store a pointer here, but it
> > > may also choose not to use it.
> > > 
> > > It is a pretty normal pattern
> > 
> > Hm, ok.  Could you point me at an example?
> 
> For instance user_data vs fd in io_uring

Ok, but one of those is an fd, which is an existing type of handle.
Here we're introducing two different unique handles that aren't an
existing kernel concept.

> RDMA has many similar examples.
> 
> More or less any time you want to allow the kernel to asynchronously
> return some information, providing a 64-bit user_data lets userspace
> have an easier time dealing with it.

I absolutely see the need for user_data.  What I'm questioning is
having two different, user-visible unique handles, neither of which is
an fd.


That said... is there any strong reason why user_data needs to be
unique?  I can imagine userspace applications where you don't care
which device the notification is coming from - or at least don't care
down to the same granularity that /dev/iommu is using.  In which case
having the kernel provided unique handle and the
not-necessarily-unique user_data would make perfect sense.
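
For illustration, a minimal sketch of the pattern being discussed (names are
hypothetical, not the actual iommufd structures): the kernel allocates a small,
dense xarray index as the handle, while user_data is an opaque 64-bit cookie
that is only echoed back to userspace in events.

#include <linux/slab.h>
#include <linux/xarray.h>

struct demo_object {
	u32 id;		/* kernel-chosen handle: small and dense */
	u64 user_data;	/* userspace-chosen cookie, need not be unique */
};

static DEFINE_XARRAY_ALLOC(demo_objects);

static int demo_object_create(u64 user_data, u32 *out_id)
{
	struct demo_object *obj = kzalloc(sizeof(*obj), GFP_KERNEL);
	int ret;

	if (!obj)
		return -ENOMEM;

	obj->user_data = user_data;
	/* xa_alloc() hands back the lowest free index, which keeps the
	 * xarray compact and lookups cheap */
	ret = xa_alloc(&demo_objects, &obj->id, obj, xa_limit_32b, GFP_KERNEL);
	if (ret) {
		kfree(obj);
		return ret;
	}

	*out_id = obj->id;
	return 0;
}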

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson



[PATCH] dma-debug: fix sg checks in debug_dma_map_sg()

2021-10-06 Thread Gerald Schaefer
The following warning occurred sporadically on s390:
DMA-API: nvme 0006:00:00.0: device driver maps memory from kernel text or rodata [addr=48cc5e2f] [len=131072]
WARNING: CPU: 4 PID: 825 at kernel/dma/debug.c:1083 check_for_illegal_area+0xa8/0x138

It is a false-positive warning, due to broken logic in debug_dma_map_sg().
check_for_illegal_area() checks for overlay of sg elements with kernel text
or rodata. It is called with sg_dma_len(s) instead of s->length as
parameter. After the call to ->map_sg(), sg_dma_len() will contain the
length of possibly combined sg elements in the DMA address space, and not
the individual sg element length, which would be s->length.

The check will then use the physical start address of an sg element, and
add the DMA length for the overlap check, which could result in the false
warning, because the DMA length can be larger than the actual single sg
element length.

In addition, the call to check_for_illegal_area() happens in the iteration
over mapped_ents, which will not include all individual sg elements if
any of them were combined in ->map_sg().

Fix this by using s->length instead of sg_dma_len(s). Also put the call to
check_for_illegal_area() in a separate loop, iterating over all the
individual sg elements ("nents" instead of "mapped_ents").

While at it, as suggested by Robin Murphy, also move check_for_stack()
inside the new loop, as it is similarly concerned with validating the
individual sg elements.

Link: https://lore.kernel.org/lkml/20210705185252.4074653-1-gerald.schae...@linux.ibm.com
Fixes: 884d05970bfb ("dma-debug: use sg_dma_len accessor")
Signed-off-by: Gerald Schaefer 
---
 kernel/dma/debug.c | 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index 95445bd6eb72..d968a429f0d1 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -1289,6 +1289,13 @@ void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	if (unlikely(dma_debug_disabled()))
 		return;
 
+	for_each_sg(sg, s, nents, i) {
+		check_for_stack(dev, sg_page(s), s->offset);
+		if (!PageHighMem(sg_page(s))) {
+			check_for_illegal_area(dev, sg_virt(s), s->length);
+		}
+	}
+
 	for_each_sg(sg, s, mapped_ents, i) {
 		entry = dma_entry_alloc();
 		if (!entry)
@@ -1304,12 +1311,6 @@ void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
 		entry->sg_call_ents   = nents;
 		entry->sg_mapped_ents = mapped_ents;
 
-		check_for_stack(dev, sg_page(s), s->offset);
-
-		if (!PageHighMem(sg_page(s))) {
-			check_for_illegal_area(dev, sg_virt(s), sg_dma_len(s));
-		}
-
 		check_sg_segment(dev, s);
 
 		add_dma_entry(entry);
-- 
2.25.1



Re: [PATCH v1 1/2] iommu/vt-d: Move intel_iommu_ops to header file

2021-10-06 Thread Andy Shevchenko
On Fri, Jul 30, 2021 at 10:20:08AM +0800, Lu Baolu wrote:
> Hi Andy,
> 
> On 7/30/21 12:35 AM, Andy Shevchenko wrote:
> > Compiler is not happy about hidden declaration of intel_iommu_ops.
> > 
> > .../drivers/iommu/intel/iommu.c:414:24: warning: symbol 'intel_iommu_ops' 
> > was not declared. Should it be static?
> > 
> > Move declaration to header file to make compiler happy.
> 
> Thanks for the cleanup. Sharing data structures between different files
> doesn't seem to be a good design. How about adding a helper so that the
> intel_iommu_ops could be a static one?

I don't see any change upstream. What's the plan?
Can we take my patch as a quick fix?

-- 
With Best Regards,
Andy Shevchenko




Re: DPAA2 triggers, [PATCH] dma debug: report -EEXIST errors in add_dma_entry

2021-10-06 Thread Gerald Schaefer
On Wed, 6 Oct 2021 15:23:36 +0100
Robin Murphy  wrote:

> On 2021-10-06 14:10, Gerald Schaefer wrote:
> > On Fri, 1 Oct 2021 14:52:56 +0200
> > Gerald Schaefer  wrote:
> > 
> >> On Thu, 30 Sep 2021 15:37:33 +0200
> >> Karsten Graul  wrote:
> >>
> >>> On 14/09/2021 17:45, Ioana Ciornei wrote:
>  On Wed, Sep 08, 2021 at 10:33:26PM -0500, Jeremy Linton wrote:
> > +DPAA2, netdev maintainers
> > Hi,
> >
> > On 5/18/21 7:54 AM, Hamza Mahfooz wrote:
> >> Since, overlapping mappings are not supported by the DMA API we should
> >> report an error if active_cacheline_insert returns -EEXIST.
> >
> > It seems this patch found a victim. I was trying to run iperf3 on a
> > honeycomb (5.14.0, fedora 35) and the console is blasting this error 
> > message
> > at 100% cpu. So, I changed it to a WARN_ONCE() to get the call trace, 
> > which
> > is attached below.
> >
> 
>  These frags are allocated by the stack, transformed into a scatterlist
>  by skb_to_sgvec and then DMA mapped with dma_map_sg. It was not the
>  dpaa2-eth's decision to use two fragments from the same page (that will
>  also end un in the same cacheline) in two different in-flight skbs.
> 
>  Is this behavior normal?
> 
> >>>
> >>> We see the same problem here and it started with 5.15-rc2 in our nightly 
> >>> CI runs.
> >>> The CI has panic_on_warn enabled so we see the panic every day now.
> >>
> >> Adding a WARN for a case that can be a false positive seems not
> >> acceptable, exactly for this reason (kernel panic on unaffected
> >> systems).
> >>
> >> So I guess it boils down to the question of whether the behavior that
> >> Ioana described is legitimate behavior on a system that is dma coherent.
> >> We are apparently hitting the same scenario, although it could not yet
> >> be reproduced with debug printks for some reason.
> >>
> >> If the answer is yes, then please remove at least the WARN, so that
> >> it will not crash systems that behave validly and have panic_on_warn
> >> set. Even a normal printk feels wrong to me in that case; it really
> >> sounds rather like you want to fix / better refine the overlap check,
> >> if you want to report anything here.
> > 
> > Dan, Christoph, any opinion?
> > 
> > So far it all looks a lot like a false positive, so could you please
> > see that those patches get reverted? I do wonder a bit why this is
> > not an issue for others, we surely cannot be the only ones running
> > CI with panic_on_warn.
> 
> What convinces you it's a false-positive? I'm hardly familiar with most 
> of that callstack, but it appears to be related to mlx5, and I know that 
> exists on expansion cards which could be plugged into a system with 
> non-coherent PCIe where partial cacheline overlap *would* be a real 
> issue. Of course it's dubious that there are many real use-cases for 
> plugging a NIC with a 4-figure price tag into a little i.MX8 or 
> whatever, but the point is that it *should* still work correctly.

I would assume that a *proper* warning would check whether we are in the
"non-coherent" case, e.g. by using dev_is_dma_coherent(), and only report
with a potentially fatal WARN on systems where it is appropriate.

However, I am certainly even less familiar with all that, and might
just have gotten the wrong impression here.
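
Something along these lines, as a rough sketch only (not existing dma-debug
code; the function and message names are taken from this thread):

#include <linux/dma-map-ops.h>	/* dev_is_dma_coherent() */

/* Rough sketch: only escalate the overlap report on devices that are
 * actually non-coherent, where two mappings sharing a cacheline can
 * really corrupt data. */
static void demo_report_overlap(struct dma_debug_entry *entry)
{
	if (!dev_is_dma_coherent(entry->dev))
		err_printk(entry->dev, entry,
			   "cacheline tracking EEXIST, overlapping mappings aren't supported\n");
	else
		pr_debug("benign cacheline overlap on a coherent device\n");
}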

I am also not sure about the mlx5 relation here; it does not really show
in the call trace, only in the err_printk() output, probably from
dev_driver_string(dev) or dev_name(dev). But I do not see where mlx5
code would be involved here.

[...]
> According to the streaming DMA API documentation, it is *not* valid:
> 
> ".. warning::
> 
>Memory coherency operates at a granularity called the cache
>line width.  In order for memory mapped by this API to operate
>correctly, the mapped region must begin exactly on a cache line
>boundary and end exactly on one (to prevent two separately mapped
>regions from sharing a single cache line).  Since the cache line size
>may not be known at compile time, the API will not enforce this
>requirement.  Therefore, it is recommended that driver writers who
>don't take special care to determine the cache line size at run time
>only map virtual regions that begin and end on page boundaries (which
>are guaranteed also to be cache line boundaries)."

Thanks, but I cannot really make a lot of sense out of this. Which
driver exactly would be the one that needs to take care of the
cache line alignment for sg elements? If this WARN is really reporting
a bug, could you please help point to where it would need to be
addressed?

And does this really say that it is illegal to have multiple sg elements
within the same cache line, regardless of cache coherence?

Adding linux-r...@vger.kernel.org, sorry for the noise, but maybe somebody
on that list can make more sense of this.
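
For what it's worth, a minimal sketch of what the quoted recommendation would
amount to for a generic driver (illustrative only, not aimed at any driver in
this thread): make sure the mapped region starts and ends on cache line
boundaries.

#include <linux/cache.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>

/* Illustrative only: pad the mapped length to whole cache lines so no
 * separately mapped buffer can share a cache line with this one. */
static void *demo_map_buffer(struct device *dev, size_t payload_len,
			     dma_addr_t *dma, size_t *mapped_len)
{
	size_t len = ALIGN(payload_len, cache_line_size());
	void *buf = kmalloc(len, GFP_KERNEL);	/* kmalloc() buffers are DMA-safe */

	if (!buf)
		return NULL;

	*dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, *dma)) {
		kfree(buf);
		return NULL;
	}

	*mapped_len = len;
	return buf;
}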

For reference, the link to the start of this thread:

Re: DPAA2 triggers, [PATCH] dma debug: report -EEXIST errors in add_dma_entry

2021-10-06 Thread Robin Murphy

On 2021-10-06 14:10, Gerald Schaefer wrote:

On Fri, 1 Oct 2021 14:52:56 +0200
Gerald Schaefer  wrote:


On Thu, 30 Sep 2021 15:37:33 +0200
Karsten Graul  wrote:


On 14/09/2021 17:45, Ioana Ciornei wrote:

On Wed, Sep 08, 2021 at 10:33:26PM -0500, Jeremy Linton wrote:

+DPAA2, netdev maintainers
Hi,

On 5/18/21 7:54 AM, Hamza Mahfooz wrote:

Since overlapping mappings are not supported by the DMA API, we should
report an error if active_cacheline_insert returns -EEXIST.


It seems this patch found a victim. I was trying to run iperf3 on a
honeycomb (5.14.0, fedora 35) and the console is blasting this error message
at 100% cpu. So, I changed it to a WARN_ONCE() to get the call trace, which
is attached below.



These frags are allocated by the stack, transformed into a scatterlist
by skb_to_sgvec and then DMA mapped with dma_map_sg. It was not the
dpaa2-eth's decision to use two fragments from the same page (that will
also end up in the same cacheline) in two different in-flight skbs.

Is this behavior normal?



We see the same problem here and it started with 5.15-rc2 in our nightly CI 
runs.
The CI has panic_on_warn enabled so we see the panic every day now.


Adding a WARN for a case that can be a false positive seems not
acceptable, exactly for this reason (kernel panic on unaffected
systems).

So I guess it boils down to the question of whether the behavior that
Ioana described is legitimate behavior on a system that is dma coherent.
We are apparently hitting the same scenario, although it could not yet
be reproduced with debug printks for some reason.

If the answer is yes, then please remove at least the WARN, so that
it will not crash systems that behave validly and have panic_on_warn
set. Even a normal printk feels wrong to me in that case; it really
sounds rather like you want to fix / better refine the overlap check,
if you want to report anything here.


Dan, Christoph, any opinion?

So far it all looks a lot like a false positive, so could you please
see that those patches get reverted? I do wonder a bit why this is
not an issue for others, we surely cannot be the only ones running
CI with panic_on_warn.


What convinces you it's a false-positive? I'm hardly familiar with most 
of that callstack, but it appears to be related to mlx5, and I know that 
exists on expansion cards which could be plugged into a system with 
non-coherent PCIe where partial cacheline overlap *would* be a real 
issue. Of course it's dubious that there are many real use-cases for 
plugging a NIC with a 4-figure price tag into a little i.MX8 or 
whatever, but the point is that it *should* still work correctly.



We would need to disable DEBUG_DMA if this WARN stays in, which
would be a shame. Of course, in theory, this might also indicate
some real bug, but there really is no sign of that so far.


The whole point of DMA debug is to flag up things that you *do* get away 
with on the vast majority of systems, precisely because most testing 
happens on those systems rather than more esoteric embedded setups. Say 
your system only uses dma-direct and a driver starts triggering the 
warning for not calling dma_mapping_error(), would you argue for 
removing that warning as well since dma_map_single() can't fail on your 
machine so it's "not a bug"?
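
(For reference, the dma_mapping_error() pattern referred to here, as a minimal
sketch:)

#include <linux/dma-mapping.h>

/* Minimal sketch: every streaming mapping must be checked before the
 * dma_addr_t is handed to the device, even on configurations where
 * dma_map_single() happens never to fail. */
static int demo_map_and_check(struct device *dev, void *buf, size_t len,
			      dma_addr_t *dma)
{
	*dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, *dma))
		return -ENOMEM;	/* never program this address into the HW */
	return 0;
}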



Having multiple sg elements in the same page (or cacheline) is
valid, correct? And this is also not a decision of the driver
IIUC, so if it was a bug, it should be addressed in common code,
correct?


According to the streaming DMA API documentation, it is *not* valid:

".. warning::

  Memory coherency operates at a granularity called the cache
  line width.  In order for memory mapped by this API to operate
  correctly, the mapped region must begin exactly on a cache line
  boundary and end exactly on one (to prevent two separately mapped
  regions from sharing a single cache line).  Since the cache line size
  may not be known at compile time, the API will not enforce this
  requirement.  Therefore, it is recommended that driver writers who
  don't take special care to determine the cache line size at run time
  only map virtual regions that begin and end on page boundaries (which
  are guaranteed also to be cache line boundaries)."


BTW, there is already a WARN in the add_dma_entry() path, related
to cacheline overlap and -EEXIST:

add_dma_entry() -> active_cacheline_insert() -> -EEXIST ->
active_cacheline_inc_overlap()

That will only trigger when "overlap > ACTIVE_CACHELINE_MAX_OVERLAP".
Not familiar with that code, but it seems that there are now two
warnings for more or less the same, and the new warning is much more
prone to false-positives.

How do these 2 warnings relate, are they both really necessary?
I think the new warning was only introduced because of some old
TODO comment in add_dma_entry(), see commit 2b4bbc6231d78
("dma-debug: report -EEXIST errors in add_dma_entry").


AFAICS they are different things. I believe the new warning is supposed 
to be for the 

Re: DPAA2 triggers, [PATCH] dma debug: report -EEXIST errors in add_dma_entry

2021-10-06 Thread Gerald Schaefer
On Wed, 6 Oct 2021 15:10:43 +0200
Gerald Schaefer  wrote:

> On Fri, 1 Oct 2021 14:52:56 +0200
> Gerald Schaefer  wrote:
> 
> > On Thu, 30 Sep 2021 15:37:33 +0200
> > Karsten Graul  wrote:
> > 
> > > On 14/09/2021 17:45, Ioana Ciornei wrote:
> > > > On Wed, Sep 08, 2021 at 10:33:26PM -0500, Jeremy Linton wrote:
> > > >> +DPAA2, netdev maintainers
> > > >> Hi,
> > > >>
> > > >> On 5/18/21 7:54 AM, Hamza Mahfooz wrote:
> > > >>> Since, overlapping mappings are not supported by the DMA API we should
> > > >>> report an error if active_cacheline_insert returns -EEXIST.
> > > >>
> > > >> It seems this patch found a victim. I was trying to run iperf3 on a
> > > >> honeycomb (5.14.0, fedora 35) and the console is blasting this error 
> > > >> message
> > > >> at 100% cpu. So, I changed it to a WARN_ONCE() to get the call trace, 
> > > >> which
> > > >> is attached below.
> > > >>
> > > > 
> > > > These frags are allocated by the stack, transformed into a scatterlist
> > > > by skb_to_sgvec and then DMA mapped with dma_map_sg. It was not the
> > > > dpaa2-eth's decision to use two fragments from the same page (that will
> > > > also end un in the same cacheline) in two different in-flight skbs.
> > > > 
> > > > Is this behavior normal?
> > > > 
> > > 
> > > We see the same problem here and it started with 5.15-rc2 in our nightly 
> > > CI runs.
> > > The CI has panic_on_warn enabled so we see the panic every day now.
> > 
> > Adding a WARN for a case that can be a false positive seems not
> > acceptable, exactly for this reason (kernel panic on unaffected
> > systems).
> > 
> > So I guess it boils down to the question of whether the behavior that
> > Ioana described is legitimate behavior on a system that is dma coherent.
> > We are apparently hitting the same scenario, although it could not yet
> > be reproduced with debug printks for some reason.
> > 
> > If the answer is yes, then please remove at least the WARN, so that
> > it will not crash systems that behave validly and have panic_on_warn
> > set. Even a normal printk feels wrong to me in that case; it really
> > sounds rather like you want to fix / better refine the overlap check,
> > if you want to report anything here.
> 
> Dan, Christoph, any opinion?
> 
> So far it all looks a lot like a false positive, so could you please
> see that those patches get reverted? I do wonder a bit why this is
> not an issue for others, we surely cannot be the only ones running
> CI with panic_on_warn.

For reference, we are talking about these commits:

2b4bbc6231d7 ("dma-debug: report -EEXIST errors in add_dma_entry")
510e1a724ab1 ("dma-debug: prevent an error message from causing runtime problems")

The latter introduced the WARN (through err_printk usage), and should
be reverted if it can produce false positives, but both seem wrong in
that case.


Re: DPAA2 triggers, [PATCH] dma debug: report -EEXIST errors in add_dma_entry

2021-10-06 Thread Gerald Schaefer
On Fri, 1 Oct 2021 14:52:56 +0200
Gerald Schaefer  wrote:

> On Thu, 30 Sep 2021 15:37:33 +0200
> Karsten Graul  wrote:
> 
> > On 14/09/2021 17:45, Ioana Ciornei wrote:
> > > On Wed, Sep 08, 2021 at 10:33:26PM -0500, Jeremy Linton wrote:
> > >> +DPAA2, netdev maintainers
> > >> Hi,
> > >>
> > >> On 5/18/21 7:54 AM, Hamza Mahfooz wrote:
> > >>> Since, overlapping mappings are not supported by the DMA API we should
> > >>> report an error if active_cacheline_insert returns -EEXIST.
> > >>
> > >> It seems this patch found a victim. I was trying to run iperf3 on a
> > >> honeycomb (5.14.0, fedora 35) and the console is blasting this error 
> > >> message
> > >> at 100% cpu. So, I changed it to a WARN_ONCE() to get the call trace, 
> > >> which
> > >> is attached below.
> > >>
> > > 
> > > These frags are allocated by the stack, transformed into a scatterlist
> > > by skb_to_sgvec and then DMA mapped with dma_map_sg. It was not the
> > > dpaa2-eth's decision to use two fragments from the same page (that will
> > > also end un in the same cacheline) in two different in-flight skbs.
> > > 
> > > Is this behavior normal?
> > > 
> > 
> > We see the same problem here and it started with 5.15-rc2 in our nightly CI 
> > runs.
> > The CI has panic_on_warn enabled so we see the panic every day now.
> 
> Adding a WARN for a case that can be a false positive seems not
> acceptable, exactly for this reason (kernel panic on unaffected
> systems).
> 
> So I guess it boils down to the question of whether the behavior that
> Ioana described is legitimate behavior on a system that is dma coherent.
> We are apparently hitting the same scenario, although it could not yet
> be reproduced with debug printks for some reason.
> 
> If the answer is yes, then please remove at least the WARN, so that
> it will not crash systems that behave validly and have panic_on_warn
> set. Even a normal printk feels wrong to me in that case; it really
> sounds rather like you want to fix / better refine the overlap check,
> if you want to report anything here.

Dan, Christoph, any opinion?

So far it all looks a lot like a false positive, so could you please
see that those patches get reverted? I do wonder a bit why this is
not an issue for others, we surely cannot be the only ones running
CI with panic_on_warn.

We would need to disable DEBUG_DMA if this WARN stays in, which
would be a shame. Of course, in theory, this might also indicate
some real bug, but there really is no sign of that so far.

Having multiple sg elements in the same page (or cacheline) is
valid, correct? And this is also not a decision of the driver
IIUC, so if it was a bug, it should be addressed in common code,
correct?

> 
> BTW, there is already a WARN in the add_dma_entry() path, related
> to cacheline overlap and -EEXIST:
> 
> add_dma_entry() -> active_cacheline_insert() -> -EEXIST ->
> active_cacheline_inc_overlap()
> 
> That will only trigger when "overlap > ACTIVE_CACHELINE_MAX_OVERLAP".
> Not familiar with that code, but it seems that there are now two
> warnings for more or less the same, and the new warning is much more
> prone to false-positives.
> 
> How do these 2 warnings relate, are they both really necessary?
> I think the new warning was only introduced because of some old
> TODO comment in add_dma_entry(), see commit 2b4bbc6231d78
> ("dma-debug: report -EEXIST errors in add_dma_entry").
> 
> That comment was initially added by Dan a long time ago, and he
> added several fix-ups for overlap detection after that, including
> the "overlap > ACTIVE_CACHELINE_MAX_OVERLAP" stuff in
> active_cacheline_inc_overlap(). So could it be that the TODO
> comment is simply no longer valid, and would better be removed
> instead of adding new / duplicate warnings that also generate
> false positives and kernel crashes?



Re: [PATCH RFC v1 10/11] uapi/virtio-iommu: Add a new request type to send page response

2021-10-06 Thread Jean-Philippe Brucker
On Thu, Sep 30, 2021 at 02:54:05PM +0530, Vivek Kumar Gautam wrote:
> > > +struct virtio_iommu_req_page_resp {
> > > + struct virtio_iommu_req_head    head;
> > > + __le32  domain;
> > 
> > I don't think we need this field, since the fault report doesn't come with
> > a domain.
> 
> But here we are sending the response, which would ultimately be consumed
> by vfio. In kvmtool, I am consuming this "virtio_iommu_req_page_resp"
> request in the virtio/iommu driver, extracting the domain from it, and using
> that to call the respective "page_response" ops from "vfio_iommu_ops" in the
> vfio/core driver.
> 
> Is this an incorrect way of passing the page response back to the host
> kernel?

That works for the host userspace-kernel interface because the device is
always attached to a VFIO container.

For virtio-iommu the domain info is redundant. The endpoint information
needs to be kept through the whole response path in order to target the
right endpoint in the end. In addition the guest could enable PRI without
attaching the endpoint to a domain, or fail to disable PRI before
detaching the endpoint. Sure it's weird, but the host can still inject the
recoverable page fault in this case, and the guest answers with "invalid"
status but no domain. We could mandate domains for recoverable faults but
that forces a synchronization against attach/detach and I think it
needlessly deviates from other IOMMUs.

Thanks,
Jean