Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-05 Thread Rob Clark
On Wed, Jun 5, 2019 at 6:18 AM Marek Szyprowski
 wrote:
>
> Hi Rob,
>
> On 2019-06-05 14:57, Rob Clark wrote:
> > On Tue, Jun 4, 2019 at 11:58 PM Tomasz Figa  wrote:
> >> But first of all, I remember Marek already submitted some patches long
> >> ago that extended struct driver with some flag that means that the
> >> driver doesn't want the IOMMU to be attached before probe. Why
> >> wouldn't that work? Sounds like a perfect opt-out solution.
> > Actually, yeah.. we should do that.  That is the simplest solution.
>
> Tomasz has a very good memory. It took me a while to find those old patches:
>
> https://patchwork.kernel.org/patch/4677251/
> https://patchwork.kernel.org/patch/4677941/
> https://patchwork.kernel.org/patch/4677401/
>
> It looks like my idea was a bit ahead of its time ;)
>

If I re-spin this, was there a reason not to just use bitfields, i.e.:

-bool suppress_bind_attrs;/* disables bind/unbind via sysfs */
+bool suppress_bind_attrs : 1;/* disables bind/unbind via sysfs */
+bool has_own_iommu_manager : 1;  /* driver explicitly manages IOMMU */

That seems like it would have been a bit less churn and a bit nicer
looking (IMO at least)
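For reference, here is a standalone sketch of what the proposed bitfield layout boils down to. The struct is stripped down to just the two flags from the diff above (it is not the real `struct device_driver`), and the helper and example driver names are made up for illustration:

```c
#include <stdbool.h>

/* Stripped-down stand-in for struct device_driver -- just the two flags
 * from the diff above, packed as 1-bit bitfields in a single byte. */
struct device_driver {
	bool suppress_bind_attrs : 1;   /* disables bind/unbind via sysfs */
	bool has_own_iommu_manager : 1; /* driver explicitly manages IOMMU */
};

/* A driver that attaches and manages its own IOMMU domain would set the
 * flag so the core skips the automatic dma_ops/IOMMU hookup. */
static const struct device_driver example_drv = {
	.has_own_iommu_manager = true,
};

/* The core would consult the flag before doing the automatic hookup. */
static bool want_auto_iommu_hookup(const struct device_driver *drv)
{
	return !drv->has_own_iommu_manager;
}
```

Compared with separate `bool` members, the `: 1` bitfields let the existing flag and the new one share a single storage unit, so adding the opt-out does not grow the struct.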

BR,
-R
___
Freedreno mailing list
Freedreno@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/freedreno

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-05 Thread Marek Szyprowski
Hi Rob,

On 2019-06-05 14:57, Rob Clark wrote:
> On Tue, Jun 4, 2019 at 11:58 PM Tomasz Figa  wrote:
>> But first of all, I remember Marek already submitted some patches long
>> ago that extended struct driver with some flag that means that the
>> driver doesn't want the IOMMU to be attached before probe. Why
>> wouldn't that work? Sounds like a perfect opt-out solution.
> Actually, yeah.. we should do that.  That is the simplest solution.

Tomasz has a very good memory. It took me a while to find those old patches:

https://patchwork.kernel.org/patch/4677251/
https://patchwork.kernel.org/patch/4677941/
https://patchwork.kernel.org/patch/4677401/

It looks like my idea was a bit ahead of its time ;)

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland


Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-05 Thread Rob Clark
On Tue, Jun 4, 2019 at 11:58 PM Tomasz Figa  wrote:
>
> But first of all, I remember Marek already submitted some patches long
> ago that extended struct driver with some flag that means that the
> driver doesn't want the IOMMU to be attached before probe. Why
> wouldn't that work? Sounds like a perfect opt-out solution.

Actually, yeah.. we should do that.  That is the simplest solution.

BR,
-R

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-05 Thread Tomasz Figa
On Mon, Jun 3, 2019 at 7:48 PM Rob Clark  wrote:
>
> On Sun, Jun 2, 2019 at 11:25 PM Tomasz Figa  wrote:
> >
> > On Mon, Jun 3, 2019 at 4:40 AM Rob Clark  wrote:
> > >
> > > On Fri, May 10, 2019 at 7:35 AM Rob Clark  wrote:
> > > >
> > > > On Tue, Dec 4, 2018 at 2:29 PM Rob Herring  wrote:
> > > > >
> > > > > On Sat, Dec 1, 2018 at 10:54 AM Rob Clark  wrote:
> > > > > >
> > > > > > This solves a problem we see with drm/msm, caused by getting
> > > > > > iommu_dma_ops while we attach our own domain and manage it directly 
> > > > > > at
> > > > > > the iommu API level:
> > > > > >
> > > > > >   [0038] user address but active_mm is swapper
> > > > > >   Internal error: Oops: 9605 [#1] PREEMPT SMP
> > > > > >   Modules linked in:
> > > > > >   CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 
> > > > > > 4.19.3 #90
> > > > > >   Hardware name: xxx (DT)
> > > > > >   Workqueue: events deferred_probe_work_func
> > > > > >   pstate: 80c9 (Nzcv daif +PAN +UAO)
> > > > > >   pc : iommu_dma_map_sg+0x7c/0x2c8
> > > > > >   lr : iommu_dma_map_sg+0x40/0x2c8
> > > > > >   sp : ff80095eb4f0
> > > > > >   x29: ff80095eb4f0 x28: 
> > > > > >   x27: ffc0f9431578 x26: 
> > > > > >   x25:  x24: 0003
> > > > > >   x23: 0001 x22: ffc0fa9ac010
> > > > > >   x21:  x20: ffc0fab40980
> > > > > >   x19: ffc0fab40980 x18: 0003
> > > > > >   x17: 01c4 x16: 0007
> > > > > >   x15: 000e x14: 
> > > > > >   x13:  x12: 0028
> > > > > >   x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
> > > > > >   x9 :  x8 : ffc0fab409a0
> > > > > >   x7 :  x6 : 0002
> > > > > >   x5 : 0001 x4 : 
> > > > > >   x3 : 0001 x2 : 0002
> > > > > >   x1 : ffc0f9431578 x0 : 
> > > > > >   Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
> > > > > >   Call trace:
> > > > > >iommu_dma_map_sg+0x7c/0x2c8
> > > > > >__iommu_map_sg_attrs+0x70/0x84
> > > > > >get_pages+0x170/0x1e8
> > > > > >msm_gem_get_iova+0x8c/0x128
> > > > > >_msm_gem_kernel_new+0x6c/0xc8
> > > > > >msm_gem_kernel_new+0x4c/0x58
> > > > > >dsi_tx_buf_alloc_6g+0x4c/0x8c
> > > > > >msm_dsi_host_modeset_init+0xc8/0x108
> > > > > >msm_dsi_modeset_init+0x54/0x18c
> > > > > >_dpu_kms_drm_obj_init+0x430/0x474
> > > > > >dpu_kms_hw_init+0x5f8/0x6b4
> > > > > >msm_drm_bind+0x360/0x6c8
> > > > > >try_to_bring_up_master.part.7+0x28/0x70
> > > > > >component_master_add_with_match+0xe8/0x124
> > > > > >msm_pdev_probe+0x294/0x2b4
> > > > > >platform_drv_probe+0x58/0xa4
> > > > > >really_probe+0x150/0x294
> > > > > >driver_probe_device+0xac/0xe8
> > > > > >__device_attach_driver+0xa4/0xb4
> > > > > >bus_for_each_drv+0x98/0xc8
> > > > > >__device_attach+0xac/0x12c
> > > > > >device_initial_probe+0x24/0x30
> > > > > >bus_probe_device+0x38/0x98
> > > > > >deferred_probe_work_func+0x78/0xa4
> > > > > >process_one_work+0x24c/0x3dc
> > > > > >worker_thread+0x280/0x360
> > > > > >kthread+0x134/0x13c
> > > > > >ret_from_fork+0x10/0x18
> > > > > >   Code: d284 91000725 6b17039f 5400048a (f9401f40)
> > > > > >   ---[ end trace f22dda57f3648e2c ]---
> > > > > >   Kernel panic - not syncing: Fatal exception
> > > > > >   SMP: stopping secondary CPUs
> > > > > >   Kernel Offset: disabled
> > > > > >   CPU features: 0x0,22802a18
> > > > > >   Memory Limit: none
> > > > > >
> > > > > > The problem is that when drm/msm does its own 
> > > > > > iommu_attach_device(),
> > > > > > now the domain returned by iommu_get_domain_for_dev() is drm/msm's
> > > > > > domain, and it doesn't have domain->iova_cookie.
> > > > > >
> > > > > > We kind of avoided this problem prior to sdm845/dpu because the 
> > > > > > iommu
> > > > > > was attached to the mdp node in dt, which is a child of the toplevel
> > > > > > mdss node (which corresponds to the dev passed in dma_map_sg()).  
> > > > > > But
> > > > > > with sdm845, now the iommu is attached at the mdss level so we hit 
> > > > > > the
> > > > > > iommu_dma_ops in dma_map_sg().
> > > > > >
> > > > > > But auto allocating/attaching a domain before the driver is probed 
> > > > > > was
> > > > > > already a blocking problem for enabling per-context pagetables for 
> > > > > > the
> > > > > > GPU.  This problem is also now solved with this patch.
> > > > > >
> > > > > > Fixes: 97890ba9289c ("dma-mapping: detect and configure IOMMU in
> > > > > > of_dma_configure")
> > > > > > Tested-by: Douglas Anderson 
> > > > > > Signed-off-by: Rob Clark 
> > > > > > ---
> > > > > > This is an alternative/replacement for [1].  What it lacks in 
> > > > > > elegance
> > > > > > it makes up for in practicality ;-)
> > > > > >
> > > > 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Jordan Crouse
> It shouldn't be a problem to hook something else up to the IOMMU
> subsystem. Hopefully it's something that people are going to standardize
> on.
> 
> > 3) The automatic attach of DMA domain is also causing a different
> >problem for us on the GPU side, preventing us from supporting per-
> >context pagetables (since we end up with a disagreement about
> >which context bank is used between arm-smmu and the firmware).
> 
> I'm not sure I understand this issue. Is the context bank hard-coded in
> the firmware somehow? Or is it possible to rewrite which one is going to
> be used at runtime? Do you switch out the actual page tables rather than
> the IOMMU domains for context switching?
 
We have a rather long history on this but the tl;dr is that the GPU microcode
switches the pagetables by rewriting TTBR0 on the fly (since this is
arm-smmu-v2 we have no better option) and yes, unfortunately it is hard coded
to use context bank 0. [1] is the current patchset to support all this,
including my own take on avoiding the dma-domain (all the cool kids have one).

Jordan

[1] https://patchwork.freedesktop.org/series/57441/

-- 
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Thierry Reding
On Mon, Jun 03, 2019 at 07:20:14AM -0700, Rob Clark wrote:
> On Mon, Jun 3, 2019 at 6:54 AM Thierry Reding  
> wrote:
> >
> > On Mon, Jun 03, 2019 at 06:20:57AM -0700, Rob Clark wrote:
> > > On Mon, Jun 3, 2019 at 4:14 AM Robin Murphy  wrote:
> > > >
> > > > On 03/06/2019 11:47, Rob Clark wrote:
> > > > > On Sun, Jun 2, 2019 at 11:25 PM Tomasz Figa  
> > > > > wrote:
> > > > >>
> > > > >> On Mon, Jun 3, 2019 at 4:40 AM Rob Clark  wrote:
> > > > >>>
> > > > >>> So, another case I've come across, on the display side.. I'm working
> > > > >>> on handling the case where bootloader enables display (and takes 
> > > > >>> iommu
> > > > >>> out of reset).. as soon as DMA domain gets attached we get iommu
> > > > >>> faults, because bootloader has already configured display for 
> > > > >>> scanout.
> > > > >>> Unfortunately this all happens before actual driver is probed and 
> > > > >>> has
> > > > >>> a chance to intervene.
> > > > >>>
> > > > >>> It's rather unfortunate that we tried to be clever rather than just
> > > > >>> making drivers call some function to opt-in to the hookup of dma 
> > > > >>> iommu
> > > > >>> ops :-(
> > > > >>
> > > > >> I think it still works for the 90% of cases and if 10% needs some
> > > > >> explicit work in the drivers, that's better than requiring 100% of 
> > > > >> the
> > > > >> drivers to do things manually.
> > > >
> > > > Right, it's not about "being clever", it's about not adding opt-in code
> > > > to the hundreds and hundreds and hundreds of drivers which *might* ever
> > > > find themselves used on a system where they would need the IOMMU's help
> > > > for DMA.
> > >
> > > Well, I mean, at one point we didn't do the automatic iommu hookup, we
> > > could have just stuck with that and added a helper so drivers could
> > > request the hookup.  Things wouldn't have been any more broken than
> > > before, and when people bring up systems where iommu is required, they
> > > could have added the necessary dma_iommu_configure() call.  But that
> > > is water under the bridge now.
> > >
> > > > >> Adding Marek who had the same problem on Exynos.
> > > > >
> > > > > I do wonder how many drivers need to iommu_map in their ->probe()?
> > > > > I'm thinking moving the auto-hookup to after a successful probe(),
> > > > > with some function a driver could call if they need mapping in probe,
> > > > > might be a way to eventually get rid of the blacklist.  But I've no
> > > > > idea how to find the subset of drivers that would be broken without a
> > > > > dma_setup_iommu_stuff() call in their probe.
> > > >
> > > > Wouldn't help much. That particular issue is nothing to do with DMA ops
> > > > really, it's about IOMMU initialisation. On something like SMMUv3 there
> > > > is literally no way to turn the thing on without blocking unknown
> > > > traffic - it *has* to have stream table entries programmed before it can
> > > > even allow stuff to bypass.
> > >
> > > fwiw, on these sdm850 laptops (and I think sdm845 boards like mtp and
> > > db845c) the SMMU (v2) is taken out of bypass by the bootloader.  Bjorn
> > > has some patches for arm-smmu to read-back the state at boot.
> > >
> > > (Although older devices were booting with display enabled but SMMU in 
> > > bypass.)
> > >
> > > > The answer is either to pay attention to the "Quiesce all DMA
> > > > capable devices" part of the boot protocol (which has been there since
> > > > pretty much forever), or to come up with some robust way of
> > > > communicating "live" boot-time mappings to IOMMU drivers so that they
> > > > can program themselves appropriately during probe.
> > >
> > > Unfortunately display lit up by bootloader is basically ubiquitous.
> > > Every single android phone does it.  All of the windows-arm laptops do
> > > it.  Basically 99.9% of things that have a display do it.  It's a
> > > tough problem to solve involving clks, gdsc, regulators, etc, in
> > > addition to the display driver.. and sadly now smmu.  And devices
> > > where we can make changes to and update the firmware are rather rare.
> > > So there is really no option to ignore this problem.
> >
> > I think this is going to require at least some degree of cooperation
> > from the bootloader. See my other thread on that. Unfortunately I think
> > this is an area where everyone has kind of been doing their own thing
> > even if standard bindings for this have been around for quite a while
> > (actually 5 years by now). I suspect that most bootloaders that run
> > today are not that old, but as always downstream doesn't follow closely
> > where upstream is guiding.
> >
> > > I guess if we had some early-quirks mechanism like x86 does, we could
> > > mash the display off early in boot.  That would be an easy solution.
> > > Although I'd prefer a proper solution so that android phones aren't
> > > carrying around enormous stacks of hack patches to achieve a smooth
> > > flicker-free boot.
> >
> > The proper solution, I think, is for bootloader and 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Rob Clark
On Mon, Jun 3, 2019 at 6:54 AM Thierry Reding  wrote:
>
> On Mon, Jun 03, 2019 at 06:20:57AM -0700, Rob Clark wrote:
> > On Mon, Jun 3, 2019 at 4:14 AM Robin Murphy  wrote:
> > >
> > > On 03/06/2019 11:47, Rob Clark wrote:
> > > > On Sun, Jun 2, 2019 at 11:25 PM Tomasz Figa  wrote:
> > > >>
> > > >> On Mon, Jun 3, 2019 at 4:40 AM Rob Clark  wrote:
> > > >>>
> > > >>> So, another case I've come across, on the display side.. I'm working
> > > >>> on handling the case where bootloader enables display (and takes iommu
> > > >>> out of reset).. as soon as DMA domain gets attached we get iommu
> > > >>> faults, because bootloader has already configured display for scanout.
> > > >>> Unfortunately this all happens before actual driver is probed and has
> > > >>> a chance to intervene.
> > > >>>
> > > >>> It's rather unfortunate that we tried to be clever rather than just
> > > >>> making drivers call some function to opt-in to the hookup of dma iommu
> > > >>> ops :-(
> > > >>
> > > >> I think it still works for the 90% of cases and if 10% needs some
> > > >> explicit work in the drivers, that's better than requiring 100% of the
> > > >> drivers to do things manually.
> > >
> > > Right, it's not about "being clever", it's about not adding opt-in code
> > > to the hundreds and hundreds and hundreds of drivers which *might* ever
> > > find themselves used on a system where they would need the IOMMU's help
> > > for DMA.
> >
> > Well, I mean, at one point we didn't do the automatic iommu hookup, we
> > could have just stuck with that and added a helper so drivers could
> > request the hookup.  Things wouldn't have been any more broken than
> > before, and when people bring up systems where iommu is required, they
> > could have added the necessary dma_iommu_configure() call.  But that
> > is water under the bridge now.
> >
> > > >> Adding Marek who had the same problem on Exynos.
> > > >
> > > > I do wonder how many drivers need to iommu_map in their ->probe()?
> > > > I'm thinking moving the auto-hookup to after a successful probe(),
> > > > with some function a driver could call if they need mapping in probe,
> > > > might be a way to eventually get rid of the blacklist.  But I've no
> > > > idea how to find the subset of drivers that would be broken without a
> > > > dma_setup_iommu_stuff() call in their probe.
> > >
> > > Wouldn't help much. That particular issue is nothing to do with DMA ops
> > > really, it's about IOMMU initialisation. On something like SMMUv3 there
> > > is literally no way to turn the thing on without blocking unknown
> > > traffic - it *has* to have stream table entries programmed before it can
> > > even allow stuff to bypass.
> >
> > fwiw, on these sdm850 laptops (and I think sdm845 boards like mtp and
> > db845c) the SMMU (v2) is taken out of bypass by the bootloader.  Bjorn
> > has some patches for arm-smmu to read-back the state at boot.
> >
> > (Although older devices were booting with display enabled but SMMU in 
> > bypass.)
> >
> > > The answer is either to pay attention to the "Quiesce all DMA
> > > capable devices" part of the boot protocol (which has been there since
> > > pretty much forever), or to come up with some robust way of
> > > communicating "live" boot-time mappings to IOMMU drivers so that they
> > > can program themselves appropriately during probe.
> >
> > Unfortunately display lit up by bootloader is basically ubiquitous.
> > Every single android phone does it.  All of the windows-arm laptops do
> > it.  Basically 99.9% of things that have a display do it.  It's a
> > tough problem to solve involving clks, gdsc, regulators, etc, in
> > addition to the display driver.. and sadly now smmu.  And devices
> > where we can make changes to and update the firmware are rather rare.
> > So there is really no option to ignore this problem.
>
> I think this is going to require at least some degree of cooperation
> from the bootloader. See my other thread on that. Unfortunately I think
> this is an area where everyone has kind of been doing their own thing
> even if standard bindings for this have been around for quite a while
> (actually 5 years by now). I suspect that most bootloaders that run
> today are not that old, but as always downstream doesn't follow closely
> where upstream is guiding.
>
> > I guess if we had some early-quirks mechanism like x86 does, we could
> > mash the display off early in boot.  That would be an easy solution.
> > Although I'd prefer a proper solution so that android phones aren't
> > carrying around enormous stacks of hack patches to achieve a smooth
> > flicker-free boot.
>
> The proper solution, I think, is for bootloader and kernel to work
> together. Unfortunately that means we'll just have to bite the bullet
> and get things fixed across the stack. I think this is just the latest
> manifestation of the catch-up that upstream has been playing. Only now
> that we're starting to enable all of these features upstream are we
> running 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Thierry Reding
On Mon, Jun 03, 2019 at 06:20:57AM -0700, Rob Clark wrote:
> On Mon, Jun 3, 2019 at 4:14 AM Robin Murphy  wrote:
> >
> > On 03/06/2019 11:47, Rob Clark wrote:
> > > On Sun, Jun 2, 2019 at 11:25 PM Tomasz Figa  wrote:
> > >>
> > >> On Mon, Jun 3, 2019 at 4:40 AM Rob Clark  wrote:
> > >>>
> > >>> So, another case I've come across, on the display side.. I'm working
> > >>> on handling the case where bootloader enables display (and takes iommu
> > >>> out of reset).. as soon as DMA domain gets attached we get iommu
> > >>> faults, because bootloader has already configured display for scanout.
> > >>> Unfortunately this all happens before actual driver is probed and has
> > >>> a chance to intervene.
> > >>>
> > >>> It's rather unfortunate that we tried to be clever rather than just
> > >>> making drivers call some function to opt-in to the hookup of dma iommu
> > >>> ops :-(
> > >>
> > >> I think it still works for the 90% of cases and if 10% needs some
> > >> explicit work in the drivers, that's better than requiring 100% of the
> > >> drivers to do things manually.
> >
> > Right, it's not about "being clever", it's about not adding opt-in code
> > to the hundreds and hundreds and hundreds of drivers which *might* ever
> > find themselves used on a system where they would need the IOMMU's help
> > for DMA.
> 
> Well, I mean, at one point we didn't do the automatic iommu hookup, we
> could have just stuck with that and added a helper so drivers could
> request the hookup.  Things wouldn't have been any more broken than
> before, and when people bring up systems where iommu is required, they
> could have added the necessary dma_iommu_configure() call.  But that
> is water under the bridge now.
> 
> > >> Adding Marek who had the same problem on Exynos.
> > >
> > > I do wonder how many drivers need to iommu_map in their ->probe()?
> > > I'm thinking moving the auto-hookup to after a successful probe(),
> > > with some function a driver could call if they need mapping in probe,
> > > might be a way to eventually get rid of the blacklist.  But I've no
> > > idea how to find the subset of drivers that would be broken without a
> > > dma_setup_iommu_stuff() call in their probe.
> >
> > Wouldn't help much. That particular issue is nothing to do with DMA ops
> > really, it's about IOMMU initialisation. On something like SMMUv3 there
> > is literally no way to turn the thing on without blocking unknown
> > traffic - it *has* to have stream table entries programmed before it can
> > even allow stuff to bypass.
> 
> fwiw, on these sdm850 laptops (and I think sdm845 boards like mtp and
> db845c) the SMMU (v2) is taken out of bypass by the bootloader.  Bjorn
> has some patches for arm-smmu to read-back the state at boot.
> 
> (Although older devices were booting with display enabled but SMMU in bypass.)
> 
> > The answer is either to pay attention to the "Quiesce all DMA
> > capable devices" part of the boot protocol (which has been there since
> > pretty much forever), or to come up with some robust way of
> > communicating "live" boot-time mappings to IOMMU drivers so that they
> > can program themselves appropriately during probe.
> 
> Unfortunately display lit up by bootloader is basically ubiquitous.
> Every single android phone does it.  All of the windows-arm laptops do
> it.  Basically 99.9% of things that have a display do it.  It's a
> tough problem to solve involving clks, gdsc, regulators, etc, in
> addition to the display driver.. and sadly now smmu.  And devices
> where we can make changes to and update the firmware are rather rare.
> So there is really no option to ignore this problem.

I think this is going to require at least some degree of cooperation
from the bootloader. See my other thread on that. Unfortunately I think
this is an area where everyone has kind of been doing their own thing
even if standard bindings for this have been around for quite a while
(actually 5 years by now). I suspect that most bootloaders that run
today are not that old, but as always downstream doesn't follow closely
where upstream is guiding.

> I guess if we had some early-quirks mechanism like x86 does, we could
> mash the display off early in boot.  That would be an easy solution.
> Although I'd prefer a proper solution so that android phones aren't
> carrying around enormous stacks of hack patches to achieve a smooth
> flicker-free boot.

The proper solution, I think, is for bootloader and kernel to work
together. Unfortunately that means we'll just have to bite the bullet
and get things fixed across the stack. I think this is just the latest
manifestation of the catch-up that upstream has been playing. Only now
that we're starting to enable all of these features upstream are we
running into interoperability issues.

If upstream had been further along we would've caught these issues way
ahead of time and could've influenced the designs of bootloader much
earlier. Now, unless we get all the vendors to go back and 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Thierry Reding
On Mon, Jun 03, 2019 at 12:14:27PM +0100, Robin Murphy wrote:
> On 03/06/2019 11:47, Rob Clark wrote:
> > On Sun, Jun 2, 2019 at 11:25 PM Tomasz Figa  wrote:
> > > 
> > > On Mon, Jun 3, 2019 at 4:40 AM Rob Clark  wrote:
> > > > 
> > > > On Fri, May 10, 2019 at 7:35 AM Rob Clark  wrote:
> > > > > 
> > > > > On Tue, Dec 4, 2018 at 2:29 PM Rob Herring  wrote:
> > > > > > 
> > > > > > On Sat, Dec 1, 2018 at 10:54 AM Rob Clark  
> > > > > > wrote:
> > > > > > > 
> > > > > > > This solves a problem we see with drm/msm, caused by getting
> > > > > > > iommu_dma_ops while we attach our own domain and manage it 
> > > > > > > directly at
> > > > > > > the iommu API level:
> > > > > > > 
> > > > > > >[0038] user address but active_mm is swapper
> > > > > > >Internal error: Oops: 9605 [#1] PREEMPT SMP
> > > > > > >Modules linked in:
> > > > > > >CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 
> > > > > > > 4.19.3 #90
> > > > > > >Hardware name: xxx (DT)
> > > > > > >Workqueue: events deferred_probe_work_func
> > > > > > >pstate: 80c9 (Nzcv daif +PAN +UAO)
> > > > > > >pc : iommu_dma_map_sg+0x7c/0x2c8
> > > > > > >lr : iommu_dma_map_sg+0x40/0x2c8
> > > > > > >sp : ff80095eb4f0
> > > > > > >x29: ff80095eb4f0 x28: 
> > > > > > >x27: ffc0f9431578 x26: 
> > > > > > >x25:  x24: 0003
> > > > > > >x23: 0001 x22: ffc0fa9ac010
> > > > > > >x21:  x20: ffc0fab40980
> > > > > > >x19: ffc0fab40980 x18: 0003
> > > > > > >x17: 01c4 x16: 0007
> > > > > > >x15: 000e x14: 
> > > > > > >x13:  x12: 0028
> > > > > > >x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
> > > > > > >x9 :  x8 : ffc0fab409a0
> > > > > > >x7 :  x6 : 0002
> > > > > > >x5 : 0001 x4 : 
> > > > > > >x3 : 0001 x2 : 0002
> > > > > > >x1 : ffc0f9431578 x0 : 
> > > > > > >Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
> > > > > > >Call trace:
> > > > > > > iommu_dma_map_sg+0x7c/0x2c8
> > > > > > > __iommu_map_sg_attrs+0x70/0x84
> > > > > > > get_pages+0x170/0x1e8
> > > > > > > msm_gem_get_iova+0x8c/0x128
> > > > > > > _msm_gem_kernel_new+0x6c/0xc8
> > > > > > > msm_gem_kernel_new+0x4c/0x58
> > > > > > > dsi_tx_buf_alloc_6g+0x4c/0x8c
> > > > > > > msm_dsi_host_modeset_init+0xc8/0x108
> > > > > > > msm_dsi_modeset_init+0x54/0x18c
> > > > > > > _dpu_kms_drm_obj_init+0x430/0x474
> > > > > > > dpu_kms_hw_init+0x5f8/0x6b4
> > > > > > > msm_drm_bind+0x360/0x6c8
> > > > > > > try_to_bring_up_master.part.7+0x28/0x70
> > > > > > > component_master_add_with_match+0xe8/0x124
> > > > > > > msm_pdev_probe+0x294/0x2b4
> > > > > > > platform_drv_probe+0x58/0xa4
> > > > > > > really_probe+0x150/0x294
> > > > > > > driver_probe_device+0xac/0xe8
> > > > > > > __device_attach_driver+0xa4/0xb4
> > > > > > > bus_for_each_drv+0x98/0xc8
> > > > > > > __device_attach+0xac/0x12c
> > > > > > > device_initial_probe+0x24/0x30
> > > > > > > bus_probe_device+0x38/0x98
> > > > > > > deferred_probe_work_func+0x78/0xa4
> > > > > > > process_one_work+0x24c/0x3dc
> > > > > > > worker_thread+0x280/0x360
> > > > > > > kthread+0x134/0x13c
> > > > > > > ret_from_fork+0x10/0x18
> > > > > > >Code: d284 91000725 6b17039f 5400048a (f9401f40)
> > > > > > >---[ end trace f22dda57f3648e2c ]---
> > > > > > >Kernel panic - not syncing: Fatal exception
> > > > > > >SMP: stopping secondary CPUs
> > > > > > >Kernel Offset: disabled
> > > > > > >CPU features: 0x0,22802a18
> > > > > > >Memory Limit: none
> > > > > > > 
> > > > > > > The problem is that when drm/msm does its own 
> > > > > > > iommu_attach_device(),
> > > > > > > now the domain returned by iommu_get_domain_for_dev() is drm/msm's
> > > > > > > domain, and it doesn't have domain->iova_cookie.
> > > > > > > 
> > > > > > > We kind of avoided this problem prior to sdm845/dpu because the 
> > > > > > > iommu
> > > > > > > was attached to the mdp node in dt, which is a child of the 
> > > > > > > toplevel
> > > > > > > mdss node (which corresponds to the dev passed in dma_map_sg()).  
> > > > > > > But
> > > > > > > with sdm845, now the iommu is attached at the mdss level so we 
> > > > > > > hit the
> > > > > > > iommu_dma_ops in dma_map_sg().
> > > > > > > 
> > > > > > > But auto allocating/attaching a domain before the driver is 
> > > > > > > probed was
> > > > > > > already a blocking problem for enabling per-context pagetables 
> > > > > > > for the
> > > > > > > GPU.  This problem is also now solved with this patch.
> > > > > > > 
> > > > > > > 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Rob Clark
On Mon, Jun 3, 2019 at 4:14 AM Robin Murphy  wrote:
>
> On 03/06/2019 11:47, Rob Clark wrote:
> > On Sun, Jun 2, 2019 at 11:25 PM Tomasz Figa  wrote:
> >>
> >> On Mon, Jun 3, 2019 at 4:40 AM Rob Clark  wrote:
> >>>
> >>> So, another case I've come across, on the display side.. I'm working
> >>> on handling the case where bootloader enables display (and takes iommu
> >>> out of reset).. as soon as DMA domain gets attached we get iommu
> >>> faults, because bootloader has already configured display for scanout.
> >>> Unfortunately this all happens before actual driver is probed and has
> >>> a chance to intervene.
> >>>
> >>> It's rather unfortunate that we tried to be clever rather than just
> >>> making drivers call some function to opt-in to the hookup of dma iommu
> >>> ops :-(
> >>
> >> I think it still works for the 90% of cases and if 10% needs some
> >> explicit work in the drivers, that's better than requiring 100% of the
> >> drivers to do things manually.
>
> Right, it's not about "being clever", it's about not adding opt-in code
> to the hundreds and hundreds and hundreds of drivers which *might* ever
> find themselves used on a system where they would need the IOMMU's help
> for DMA.

Well, I mean, at one point we didn't do the automatic iommu hookup, we
could have just stuck with that and added a helper so drivers could
request the hookup.  Things wouldn't have been any more broken than
before, and when people bring up systems where iommu is required, they
could have added the necessary dma_iommu_configure() call.  But that
is water under the bridge now.

> >> Adding Marek who had the same problem on Exynos.
> >
> > I do wonder how many drivers need to iommu_map in their ->probe()?
> > I'm thinking moving the auto-hookup to after a successful probe(),
> > with some function a driver could call if they need mapping in probe,
> > might be a way to eventually get rid of the blacklist.  But I've no
> > idea how to find the subset of drivers that would be broken without a
> > dma_setup_iommu_stuff() call in their probe.
>
> Wouldn't help much. That particular issue is nothing to do with DMA ops
> really, it's about IOMMU initialisation. On something like SMMUv3 there
> is literally no way to turn the thing on without blocking unknown
> traffic - it *has* to have stream table entries programmed before it can
> even allow stuff to bypass.

fwiw, on these sdm850 laptops (and I think sdm845 boards like mtp and
db845c) the SMMU (v2) is taken out of bypass by the bootloader.  Bjorn
has some patches for arm-smmu to read-back the state at boot.

(Although older devices were booting with display enabled but SMMU in bypass.)

> The answer is either to pay attention to the "Quiesce all DMA
> capable devices" part of the boot protocol (which has been there since
> pretty much forever), or to come up with some robust way of
> communicating "live" boot-time mappings to IOMMU drivers so that they
> can program themselves appropriately during probe.

Unfortunately display lit up by bootloader is basically ubiquitous.
Every single android phone does it.  All of the windows-arm laptops do
it.  Basically 99.9% of things that have a display do it.  It's a
tough problem to solve involving clks, gdsc, regulators, etc, in
addition to the display driver.. and sadly now smmu.  And devices
where we can make changes to and update the firmware are rather rare.
So there is really no option to ignore this problem.

I guess if we had some early-quirks mechanism like x86 does, we could
mash the display off early in boot.  That would be an easy solution.
Although I'd prefer a proper solution so that android phones aren't
carrying around enormous stacks of hack patches to achieve a smooth
flicker-free boot.

I suppose arm-smmu could realize that the context bank is already
taken out of bypass..  although I'm not entirely sure if we can assume
that the CPU would be able to read back the pagetable setup by the
bootloader.  Maybe Vivek has an idea about that?
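
The read-back idea can be modeled simply: if the bootloader already enabled translation in a context bank, inherit its pagetable base instead of resetting it. This is a toy sketch under stated assumptions; the struct layout is illustrative and not the real arm-smmu register map, though the SCTLR.M translation-enable bit mirrors SMMU_CBn_SCTLR in the SMMUv2 architecture:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model of a context bank's state; layout is illustrative only. */
struct context_bank {
    uint32_t sctlr;   /* bit 0 ~ translation enable, as in SMMU_CBn_SCTLR.M */
    uint64_t ttbr0;   /* pagetable base left behind by the bootloader */
};

#define SCTLR_M (1u << 0)

/* On probe, detect that the bootloader already took the context bank
 * out of bypass, and preserve its pagetable base rather than tearing
 * down the live scanout mappings (roughly what boot-state read-back
 * patches would aim for). */
static bool inherit_boot_mapping(const struct context_bank *cb,
                                 uint64_t *ttbr_out)
{
    if (!(cb->sctlr & SCTLR_M))
        return false;           /* still in bypass: nothing to inherit */
    *ttbr_out = cb->ttbr0;      /* keep bootloader mappings live */
    return true;
}
```

Whether the CPU can actually walk the bootloader's pagetables (cacheability, memory attributes) is exactly the open question Rob raises above; the sketch only shows the detect-and-preserve step.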

BR,
-R
___
Freedreno mailing list
Freedreno@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/freedreno

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Vivek Gautam
On Mon, Jun 3, 2019 at 4:47 PM Christoph Hellwig  wrote:
>
> If you (and a few other actors in the thread) want people to actually
> read what you wrote, please follow proper mailing list etiquette.  I've
> given up on reading all the recent mails after scrolling through two
> pages of full quotes.

Apologies for not cutting down the quoted text. I will be more careful
next time onwards.

Regards
Vivek

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Christoph Hellwig
If you (and a few other actors in the thread) want people to actually
read what you wrote, please follow proper mailing list etiquette.  I've
given up on reading all the recent mails after scrolling through two
pages of full quotes.

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Robin Murphy

On 03/06/2019 11:47, Rob Clark wrote:

On Sun, Jun 2, 2019 at 11:25 PM Tomasz Figa  wrote:


On Mon, Jun 3, 2019 at 4:40 AM Rob Clark  wrote:


On Fri, May 10, 2019 at 7:35 AM Rob Clark  wrote:


On Tue, Dec 4, 2018 at 2:29 PM Rob Herring  wrote:


On Sat, Dec 1, 2018 at 10:54 AM Rob Clark  wrote:


This solves a problem we see with drm/msm, caused by getting
iommu_dma_ops while we attach our own domain and manage it directly at
the iommu API level:

   [0038] user address but active_mm is swapper
   Internal error: Oops: 9605 [#1] PREEMPT SMP
   Modules linked in:
   CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 4.19.3 #90
   Hardware name: xxx (DT)
   Workqueue: events deferred_probe_work_func
   pstate: 80c9 (Nzcv daif +PAN +UAO)
   pc : iommu_dma_map_sg+0x7c/0x2c8
   lr : iommu_dma_map_sg+0x40/0x2c8
   sp : ff80095eb4f0
   x29: ff80095eb4f0 x28: 
   x27: ffc0f9431578 x26: 
   x25:  x24: 0003
   x23: 0001 x22: ffc0fa9ac010
   x21:  x20: ffc0fab40980
   x19: ffc0fab40980 x18: 0003
   x17: 01c4 x16: 0007
   x15: 000e x14: 
   x13:  x12: 0028
   x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
   x9 :  x8 : ffc0fab409a0
   x7 :  x6 : 0002
   x5 : 0001 x4 : 
   x3 : 0001 x2 : 0002
   x1 : ffc0f9431578 x0 : 
   Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
   Call trace:
iommu_dma_map_sg+0x7c/0x2c8
__iommu_map_sg_attrs+0x70/0x84
get_pages+0x170/0x1e8
msm_gem_get_iova+0x8c/0x128
_msm_gem_kernel_new+0x6c/0xc8
msm_gem_kernel_new+0x4c/0x58
dsi_tx_buf_alloc_6g+0x4c/0x8c
msm_dsi_host_modeset_init+0xc8/0x108
msm_dsi_modeset_init+0x54/0x18c
_dpu_kms_drm_obj_init+0x430/0x474
dpu_kms_hw_init+0x5f8/0x6b4
msm_drm_bind+0x360/0x6c8
try_to_bring_up_master.part.7+0x28/0x70
component_master_add_with_match+0xe8/0x124
msm_pdev_probe+0x294/0x2b4
platform_drv_probe+0x58/0xa4
really_probe+0x150/0x294
driver_probe_device+0xac/0xe8
__device_attach_driver+0xa4/0xb4
bus_for_each_drv+0x98/0xc8
__device_attach+0xac/0x12c
device_initial_probe+0x24/0x30
bus_probe_device+0x38/0x98
deferred_probe_work_func+0x78/0xa4
process_one_work+0x24c/0x3dc
worker_thread+0x280/0x360
kthread+0x134/0x13c
ret_from_fork+0x10/0x18
   Code: d284 91000725 6b17039f 5400048a (f9401f40)
   ---[ end trace f22dda57f3648e2c ]---
   Kernel panic - not syncing: Fatal exception
   SMP: stopping secondary CPUs
   Kernel Offset: disabled
   CPU features: 0x0,22802a18
   Memory Limit: none

The problem is that when drm/msm does its own iommu_attach_device(),
now the domain returned by iommu_get_domain_for_dev() is drm/msm's
domain, and it doesn't have domain->iova_cookie.

We kind of avoided this problem prior to sdm845/dpu because the iommu
was attached to the mdp node in dt, which is a child of the toplevel
mdss node (which corresponds to the dev passed in dma_map_sg()).  But
with sdm845, now the iommu is attached at the mdss level so we hit the
iommu_dma_ops in dma_map_sg().

But auto allocating/attaching a domain before the driver is probed was
already a blocking problem for enabling per-context pagetables for the
GPU.  This problem is also now solved with this patch.

Fixes: 97890ba9289c ("dma-mapping: detect and configure IOMMU in of_dma_configure")
Tested-by: Douglas Anderson 
Signed-off-by: Rob Clark 
---
This is an alternative/replacement for [1].  What it lacks in elegance
it makes up for in practicality ;-)

[1] https://patchwork.freedesktop.org/patch/264930/

  drivers/of/device.c | 22 ++
  1 file changed, 22 insertions(+)

diff --git a/drivers/of/device.c b/drivers/of/device.c
index 5957cd4fa262..15ffee00fb22 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -72,6 +72,14 @@ int of_device_add(struct platform_device *ofdev)
        return device_add(&ofdev->dev);
  }

+static const struct of_device_id iommu_blacklist[] = {
+   { .compatible = "qcom,mdp4" },
+   { .compatible = "qcom,mdss" },
+   { .compatible = "qcom,sdm845-mdss" },
+   { .compatible = "qcom,adreno" },
+   {}
+};


Not completely clear whether this is still needed or not, but this
really won't scale. Why can't the driver for these devices override
whatever has been setup by default?



fwiw, at the moment it is not needed, but it will become needed again
to implement per-context pagetables (although I suppose for this we
only need to blacklist qcom,adreno and not also the display nodes).


So, another case I've come across, on the display side.. I'm working
on handling the case where bootloader enables display (and takes iommu
out of reset).. as soon as DMA domain 

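The rest of the patch is cut off above, but the blacklist table itself is plain compatible-string matching, presumably consulted (e.g. via something like of_match_node()) before installing iommu dma_ops. A minimal self-contained model of that lookup, outside the kernel:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for struct of_device_id: the patch's blacklist
 * is just a NULL-terminated table of compatible strings. */
struct of_device_id {
    const char *compatible;
};

static const struct of_device_id iommu_blacklist[] = {
    { .compatible = "qcom,mdp4" },
    { .compatible = "qcom,mdss" },
    { .compatible = "qcom,sdm845-mdss" },
    { .compatible = "qcom,adreno" },
    { NULL }
};

/* Model of the match the (truncated) patch presumably performs in
 * of_dma_configure() before hooking up iommu dma_ops: if the device's
 * compatible is in the table, skip the automatic IOMMU DMA setup. */
static int blacklisted(const char *compatible)
{
    for (const struct of_device_id *id = iommu_blacklist;
         id->compatible; id++)
        if (strcmp(id->compatible, compatible) == 0)
            return 1;
    return 0;
}
```

This also makes the scaling objection concrete: every new device that manages its own IOMMU would need another row in the table, which is exactly why a per-driver flag was preferred later in the thread.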
Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Vivek Gautam
On Mon, Jun 3, 2019 at 4:14 PM Rob Clark  wrote:
>
> On Mon, Jun 3, 2019 at 12:57 AM Vivek Gautam
>  wrote:
> >
> >
> >
> > On 6/3/2019 11:50 AM, Tomasz Figa wrote:
> > > On Mon, Jun 3, 2019 at 4:40 AM Rob Clark  wrote:
> > >> On Fri, May 10, 2019 at 7:35 AM Rob Clark  wrote:
> > >>> On Tue, Dec 4, 2018 at 2:29 PM Rob Herring  wrote:
> >  On Sat, Dec 1, 2018 at 10:54 AM Rob Clark  wrote:
> > > This solves a problem we see with drm/msm, caused by getting
> > > iommu_dma_ops while we attach our own domain and manage it directly at
> > > the iommu API level:
> > >
> > >[0038] user address but active_mm is swapper
> > >Internal error: Oops: 9605 [#1] PREEMPT SMP
> > >Modules linked in:
> > >CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 
> > > 4.19.3 #90
> > >Hardware name: xxx (DT)
> > >Workqueue: events deferred_probe_work_func
> > >pstate: 80c9 (Nzcv daif +PAN +UAO)
> > >pc : iommu_dma_map_sg+0x7c/0x2c8
> > >lr : iommu_dma_map_sg+0x40/0x2c8
> > >sp : ff80095eb4f0
> > >x29: ff80095eb4f0 x28: 
> > >x27: ffc0f9431578 x26: 
> > >x25:  x24: 0003
> > >x23: 0001 x22: ffc0fa9ac010
> > >x21:  x20: ffc0fab40980
> > >x19: ffc0fab40980 x18: 0003
> > >x17: 01c4 x16: 0007
> > >x15: 000e x14: 
> > >x13:  x12: 0028
> > >x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
> > >x9 :  x8 : ffc0fab409a0
> > >x7 :  x6 : 0002
> > >x5 : 0001 x4 : 
> > >x3 : 0001 x2 : 0002
> > >x1 : ffc0f9431578 x0 : 
> > >Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
> > >Call trace:
> > > iommu_dma_map_sg+0x7c/0x2c8
> > > __iommu_map_sg_attrs+0x70/0x84
> > > get_pages+0x170/0x1e8
> > > msm_gem_get_iova+0x8c/0x128
> > > _msm_gem_kernel_new+0x6c/0xc8
> > > msm_gem_kernel_new+0x4c/0x58
> > > dsi_tx_buf_alloc_6g+0x4c/0x8c
> > > msm_dsi_host_modeset_init+0xc8/0x108
> > > msm_dsi_modeset_init+0x54/0x18c
> > > _dpu_kms_drm_obj_init+0x430/0x474
> > > dpu_kms_hw_init+0x5f8/0x6b4
> > > msm_drm_bind+0x360/0x6c8
> > > try_to_bring_up_master.part.7+0x28/0x70
> > > component_master_add_with_match+0xe8/0x124
> > > msm_pdev_probe+0x294/0x2b4
> > > platform_drv_probe+0x58/0xa4
> > > really_probe+0x150/0x294
> > > driver_probe_device+0xac/0xe8
> > > __device_attach_driver+0xa4/0xb4
> > > bus_for_each_drv+0x98/0xc8
> > > __device_attach+0xac/0x12c
> > > device_initial_probe+0x24/0x30
> > > bus_probe_device+0x38/0x98
> > > deferred_probe_work_func+0x78/0xa4
> > > process_one_work+0x24c/0x3dc
> > > worker_thread+0x280/0x360
> > > kthread+0x134/0x13c
> > > ret_from_fork+0x10/0x18
> > >Code: d284 91000725 6b17039f 5400048a (f9401f40)
> > >---[ end trace f22dda57f3648e2c ]---
> > >Kernel panic - not syncing: Fatal exception
> > >SMP: stopping secondary CPUs
> > >Kernel Offset: disabled
> > >CPU features: 0x0,22802a18
> > >Memory Limit: none
> > >
> > > The problem is that when drm/msm does its own iommu_attach_device(),
> > > now the domain returned by iommu_get_domain_for_dev() is drm/msm's
> > > domain, and it doesn't have domain->iova_cookie.
> > >
> > > We kind of avoided this problem prior to sdm845/dpu because the iommu
> > > was attached to the mdp node in dt, which is a child of the toplevel
> > > mdss node (which corresponds to the dev passed in dma_map_sg()).  But
> > > with sdm845, now the iommu is attached at the mdss level so we hit the
> > > iommu_dma_ops in dma_map_sg().
> > >
> > > But auto allocating/attaching a domain before the driver is probed was
> > > already a blocking problem for enabling per-context pagetables for the
> > > GPU.  This problem is also now solved with this patch.
> > >
> > > Fixes: 97890ba9289c dma-mapping: detect and configure IOMMU in 
> > > of_dma_configure
> > > Tested-by: Douglas Anderson 
> > > Signed-off-by: Rob Clark 
> > > ---
> > > This is an alternative/replacement for [1].  What it lacks in elegance
> > > it makes up for in practicality ;-)
> > >
> > > [1] https://patchwork.freedesktop.org/patch/264930/
> > >
> > >   drivers/of/device.c | 22 ++
> > >   1 file changed, 22 insertions(+)
> > >
> > > diff --git 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Rob Clark
On Sun, Jun 2, 2019 at 11:25 PM Tomasz Figa  wrote:
>
> On Mon, Jun 3, 2019 at 4:40 AM Rob Clark  wrote:
> >
> > On Fri, May 10, 2019 at 7:35 AM Rob Clark  wrote:
> > >
> > > On Tue, Dec 4, 2018 at 2:29 PM Rob Herring  wrote:
> > > >
> > > > On Sat, Dec 1, 2018 at 10:54 AM Rob Clark  wrote:
> > > > >
> > > > > This solves a problem we see with drm/msm, caused by getting
> > > > > iommu_dma_ops while we attach our own domain and manage it directly at
> > > > > the iommu API level:
> > > > >
> > > > >   [0038] user address but active_mm is swapper
> > > > >   Internal error: Oops: 9605 [#1] PREEMPT SMP
> > > > >   Modules linked in:
> > > > >   CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 4.19.3 
> > > > > #90
> > > > >   Hardware name: xxx (DT)
> > > > >   Workqueue: events deferred_probe_work_func
> > > > >   pstate: 80c9 (Nzcv daif +PAN +UAO)
> > > > >   pc : iommu_dma_map_sg+0x7c/0x2c8
> > > > >   lr : iommu_dma_map_sg+0x40/0x2c8
> > > > >   sp : ff80095eb4f0
> > > > >   x29: ff80095eb4f0 x28: 
> > > > >   x27: ffc0f9431578 x26: 
> > > > >   x25:  x24: 0003
> > > > >   x23: 0001 x22: ffc0fa9ac010
> > > > >   x21:  x20: ffc0fab40980
> > > > >   x19: ffc0fab40980 x18: 0003
> > > > >   x17: 01c4 x16: 0007
> > > > >   x15: 000e x14: 
> > > > >   x13:  x12: 0028
> > > > >   x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
> > > > >   x9 :  x8 : ffc0fab409a0
> > > > >   x7 :  x6 : 0002
> > > > >   x5 : 0001 x4 : 
> > > > >   x3 : 0001 x2 : 0002
> > > > >   x1 : ffc0f9431578 x0 : 
> > > > >   Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
> > > > >   Call trace:
> > > > >iommu_dma_map_sg+0x7c/0x2c8
> > > > >__iommu_map_sg_attrs+0x70/0x84
> > > > >get_pages+0x170/0x1e8
> > > > >msm_gem_get_iova+0x8c/0x128
> > > > >_msm_gem_kernel_new+0x6c/0xc8
> > > > >msm_gem_kernel_new+0x4c/0x58
> > > > >dsi_tx_buf_alloc_6g+0x4c/0x8c
> > > > >msm_dsi_host_modeset_init+0xc8/0x108
> > > > >msm_dsi_modeset_init+0x54/0x18c
> > > > >_dpu_kms_drm_obj_init+0x430/0x474
> > > > >dpu_kms_hw_init+0x5f8/0x6b4
> > > > >msm_drm_bind+0x360/0x6c8
> > > > >try_to_bring_up_master.part.7+0x28/0x70
> > > > >component_master_add_with_match+0xe8/0x124
> > > > >msm_pdev_probe+0x294/0x2b4
> > > > >platform_drv_probe+0x58/0xa4
> > > > >really_probe+0x150/0x294
> > > > >driver_probe_device+0xac/0xe8
> > > > >__device_attach_driver+0xa4/0xb4
> > > > >bus_for_each_drv+0x98/0xc8
> > > > >__device_attach+0xac/0x12c
> > > > >device_initial_probe+0x24/0x30
> > > > >bus_probe_device+0x38/0x98
> > > > >deferred_probe_work_func+0x78/0xa4
> > > > >process_one_work+0x24c/0x3dc
> > > > >worker_thread+0x280/0x360
> > > > >kthread+0x134/0x13c
> > > > >ret_from_fork+0x10/0x18
> > > > >   Code: d284 91000725 6b17039f 5400048a (f9401f40)
> > > > >   ---[ end trace f22dda57f3648e2c ]---
> > > > >   Kernel panic - not syncing: Fatal exception
> > > > >   SMP: stopping secondary CPUs
> > > > >   Kernel Offset: disabled
> > > > >   CPU features: 0x0,22802a18
> > > > >   Memory Limit: none
> > > > >
> > > > > The problem is that when drm/msm does its own iommu_attach_device(),
> > > > > now the domain returned by iommu_get_domain_for_dev() is drm/msm's
> > > > > domain, and it doesn't have domain->iova_cookie.
> > > > >
> > > > > We kind of avoided this problem prior to sdm845/dpu because the iommu
> > > > > was attached to the mdp node in dt, which is a child of the toplevel
> > > > > mdss node (which corresponds to the dev passed in dma_map_sg()).  But
> > > > > with sdm845, now the iommu is attached at the mdss level so we hit the
> > > > > iommu_dma_ops in dma_map_sg().
> > > > >
> > > > > But auto allocating/attaching a domain before the driver is probed was
> > > > > already a blocking problem for enabling per-context pagetables for the
> > > > > GPU.  This problem is also now solved with this patch.
> > > > >
> > > > > Fixes: 97890ba9289c dma-mapping: detect and configure IOMMU in 
> > > > > of_dma_configure
> > > > > Tested-by: Douglas Anderson 
> > > > > Signed-off-by: Rob Clark 
> > > > > ---
> > > > > This is an alternative/replacement for [1].  What it lacks in elegance
> > > > > it makes up for in practicality ;-)
> > > > >
> > > > > [1] https://patchwork.freedesktop.org/patch/264930/
> > > > >
> > > > >  drivers/of/device.c | 22 ++
> > > > >  1 file changed, 22 insertions(+)
> > > > >
> > > > > diff --git a/drivers/of/device.c b/drivers/of/device.c
> > > > > index 5957cd4fa262..15ffee00fb22 100644
> > > > > --- a/drivers/of/device.c
> > > > > +++ 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Rob Clark
On Mon, Jun 3, 2019 at 12:57 AM Vivek Gautam
 wrote:
>
>
>
> On 6/3/2019 11:50 AM, Tomasz Figa wrote:
> > On Mon, Jun 3, 2019 at 4:40 AM Rob Clark  wrote:
> >> On Fri, May 10, 2019 at 7:35 AM Rob Clark  wrote:
> >>> On Tue, Dec 4, 2018 at 2:29 PM Rob Herring  wrote:
>  On Sat, Dec 1, 2018 at 10:54 AM Rob Clark  wrote:
> > This solves a problem we see with drm/msm, caused by getting
> > iommu_dma_ops while we attach our own domain and manage it directly at
> > the iommu API level:
> >
> >[0038] user address but active_mm is swapper
> >Internal error: Oops: 9605 [#1] PREEMPT SMP
> >Modules linked in:
> >CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 4.19.3 
> > #90
> >Hardware name: xxx (DT)
> >Workqueue: events deferred_probe_work_func
> >pstate: 80c9 (Nzcv daif +PAN +UAO)
> >pc : iommu_dma_map_sg+0x7c/0x2c8
> >lr : iommu_dma_map_sg+0x40/0x2c8
> >sp : ff80095eb4f0
> >x29: ff80095eb4f0 x28: 
> >x27: ffc0f9431578 x26: 
> >x25:  x24: 0003
> >x23: 0001 x22: ffc0fa9ac010
> >x21:  x20: ffc0fab40980
> >x19: ffc0fab40980 x18: 0003
> >x17: 01c4 x16: 0007
> >x15: 000e x14: 
> >x13:  x12: 0028
> >x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
> >x9 :  x8 : ffc0fab409a0
> >x7 :  x6 : 0002
> >x5 : 0001 x4 : 
> >x3 : 0001 x2 : 0002
> >x1 : ffc0f9431578 x0 : 
> >Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
> >Call trace:
> > iommu_dma_map_sg+0x7c/0x2c8
> > __iommu_map_sg_attrs+0x70/0x84
> > get_pages+0x170/0x1e8
> > msm_gem_get_iova+0x8c/0x128
> > _msm_gem_kernel_new+0x6c/0xc8
> > msm_gem_kernel_new+0x4c/0x58
> > dsi_tx_buf_alloc_6g+0x4c/0x8c
> > msm_dsi_host_modeset_init+0xc8/0x108
> > msm_dsi_modeset_init+0x54/0x18c
> > _dpu_kms_drm_obj_init+0x430/0x474
> > dpu_kms_hw_init+0x5f8/0x6b4
> > msm_drm_bind+0x360/0x6c8
> > try_to_bring_up_master.part.7+0x28/0x70
> > component_master_add_with_match+0xe8/0x124
> > msm_pdev_probe+0x294/0x2b4
> > platform_drv_probe+0x58/0xa4
> > really_probe+0x150/0x294
> > driver_probe_device+0xac/0xe8
> > __device_attach_driver+0xa4/0xb4
> > bus_for_each_drv+0x98/0xc8
> > __device_attach+0xac/0x12c
> > device_initial_probe+0x24/0x30
> > bus_probe_device+0x38/0x98
> > deferred_probe_work_func+0x78/0xa4
> > process_one_work+0x24c/0x3dc
> > worker_thread+0x280/0x360
> > kthread+0x134/0x13c
> > ret_from_fork+0x10/0x18
> >Code: d284 91000725 6b17039f 5400048a (f9401f40)
> >---[ end trace f22dda57f3648e2c ]---
> >Kernel panic - not syncing: Fatal exception
> >SMP: stopping secondary CPUs
> >Kernel Offset: disabled
> >CPU features: 0x0,22802a18
> >Memory Limit: none
> >
> > The problem is that when drm/msm does its own iommu_attach_device(),
> > now the domain returned by iommu_get_domain_for_dev() is drm/msm's
> > domain, and it doesn't have domain->iova_cookie.
> >
> > We kind of avoided this problem prior to sdm845/dpu because the iommu
> > was attached to the mdp node in dt, which is a child of the toplevel
> > mdss node (which corresponds to the dev passed in dma_map_sg()).  But
> > with sdm845, now the iommu is attached at the mdss level so we hit the
> > iommu_dma_ops in dma_map_sg().
> >
> > But auto allocating/attaching a domain before the driver is probed was
> > already a blocking problem for enabling per-context pagetables for the
> > GPU.  This problem is also now solved with this patch.
> >
> > Fixes: 97890ba9289c dma-mapping: detect and configure IOMMU in 
> > of_dma_configure
> > Tested-by: Douglas Anderson 
> > Signed-off-by: Rob Clark 
> > ---
> > This is an alternative/replacement for [1].  What it lacks in elegance
> > it makes up for in practicality ;-)
> >
> > [1] https://patchwork.freedesktop.org/patch/264930/
> >
> >   drivers/of/device.c | 22 ++
> >   1 file changed, 22 insertions(+)
> >
> > diff --git a/drivers/of/device.c b/drivers/of/device.c
> > index 5957cd4fa262..15ffee00fb22 100644
> > --- a/drivers/of/device.c
> > +++ b/drivers/of/device.c
> > @@ -72,6 +72,14 @@ int of_device_add(struct platform_device *ofdev)
> >  return 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Vivek Gautam



On 6/3/2019 11:50 AM, Tomasz Figa wrote:

On Mon, Jun 3, 2019 at 4:40 AM Rob Clark  wrote:

On Fri, May 10, 2019 at 7:35 AM Rob Clark  wrote:

On Tue, Dec 4, 2018 at 2:29 PM Rob Herring  wrote:

On Sat, Dec 1, 2018 at 10:54 AM Rob Clark  wrote:

This solves a problem we see with drm/msm, caused by getting
iommu_dma_ops while we attach our own domain and manage it directly at
the iommu API level:

   [0038] user address but active_mm is swapper
   Internal error: Oops: 9605 [#1] PREEMPT SMP
   Modules linked in:
   CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 4.19.3 #90
   Hardware name: xxx (DT)
   Workqueue: events deferred_probe_work_func
   pstate: 80c9 (Nzcv daif +PAN +UAO)
   pc : iommu_dma_map_sg+0x7c/0x2c8
   lr : iommu_dma_map_sg+0x40/0x2c8
   sp : ff80095eb4f0
   x29: ff80095eb4f0 x28: 
   x27: ffc0f9431578 x26: 
   x25:  x24: 0003
   x23: 0001 x22: ffc0fa9ac010
   x21:  x20: ffc0fab40980
   x19: ffc0fab40980 x18: 0003
   x17: 01c4 x16: 0007
   x15: 000e x14: 
   x13:  x12: 0028
   x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
   x9 :  x8 : ffc0fab409a0
   x7 :  x6 : 0002
   x5 : 0001 x4 : 
   x3 : 0001 x2 : 0002
   x1 : ffc0f9431578 x0 : 
   Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
   Call trace:
iommu_dma_map_sg+0x7c/0x2c8
__iommu_map_sg_attrs+0x70/0x84
get_pages+0x170/0x1e8
msm_gem_get_iova+0x8c/0x128
_msm_gem_kernel_new+0x6c/0xc8
msm_gem_kernel_new+0x4c/0x58
dsi_tx_buf_alloc_6g+0x4c/0x8c
msm_dsi_host_modeset_init+0xc8/0x108
msm_dsi_modeset_init+0x54/0x18c
_dpu_kms_drm_obj_init+0x430/0x474
dpu_kms_hw_init+0x5f8/0x6b4
msm_drm_bind+0x360/0x6c8
try_to_bring_up_master.part.7+0x28/0x70
component_master_add_with_match+0xe8/0x124
msm_pdev_probe+0x294/0x2b4
platform_drv_probe+0x58/0xa4
really_probe+0x150/0x294
driver_probe_device+0xac/0xe8
__device_attach_driver+0xa4/0xb4
bus_for_each_drv+0x98/0xc8
__device_attach+0xac/0x12c
device_initial_probe+0x24/0x30
bus_probe_device+0x38/0x98
deferred_probe_work_func+0x78/0xa4
process_one_work+0x24c/0x3dc
worker_thread+0x280/0x360
kthread+0x134/0x13c
ret_from_fork+0x10/0x18
   Code: d284 91000725 6b17039f 5400048a (f9401f40)
   ---[ end trace f22dda57f3648e2c ]---
   Kernel panic - not syncing: Fatal exception
   SMP: stopping secondary CPUs
   Kernel Offset: disabled
   CPU features: 0x0,22802a18
   Memory Limit: none

The problem is that when drm/msm does its own iommu_attach_device(),
now the domain returned by iommu_get_domain_for_dev() is drm/msm's
domain, and it doesn't have domain->iova_cookie.

We kind of avoided this problem prior to sdm845/dpu because the iommu
was attached to the mdp node in dt, which is a child of the toplevel
mdss node (which corresponds to the dev passed in dma_map_sg()).  But
with sdm845, now the iommu is attached at the mdss level so we hit the
iommu_dma_ops in dma_map_sg().

But auto allocating/attaching a domain before the driver is probed was
already a blocking problem for enabling per-context pagetables for the
GPU.  This problem is also now solved with this patch.

Fixes: 97890ba9289c ("dma-mapping: detect and configure IOMMU in of_dma_configure")
Tested-by: Douglas Anderson 
Signed-off-by: Rob Clark 
---
This is an alternative/replacement for [1].  What it lacks in elegance
it makes up for in practicality ;-)

[1] https://patchwork.freedesktop.org/patch/264930/

  drivers/of/device.c | 22 ++
  1 file changed, 22 insertions(+)

diff --git a/drivers/of/device.c b/drivers/of/device.c
index 5957cd4fa262..15ffee00fb22 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -72,6 +72,14 @@ int of_device_add(struct platform_device *ofdev)
        return device_add(&ofdev->dev);
  }

+static const struct of_device_id iommu_blacklist[] = {
+   { .compatible = "qcom,mdp4" },
+   { .compatible = "qcom,mdss" },
+   { .compatible = "qcom,sdm845-mdss" },
+   { .compatible = "qcom,adreno" },
+   {}
+};

Not completely clear whether this is still needed or not, but this
really won't scale. Why can't the driver for these devices override
whatever has been setup by default?


fwiw, at the moment it is not needed, but it will become needed again
to implement per-context pagetables (although I suppose for this we
only need to blacklist qcom,adreno and not also the display nodes).

So, another case I've come across, on the display side.. I'm working
on handling the case where bootloader enables display (and takes iommu
out of reset).. as soon as DMA domain gets attached we get iommu
faults, because bootloader has 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-03 Thread Tomasz Figa
On Mon, Jun 3, 2019 at 4:40 AM Rob Clark  wrote:
>
> On Fri, May 10, 2019 at 7:35 AM Rob Clark  wrote:
> >
> > On Tue, Dec 4, 2018 at 2:29 PM Rob Herring  wrote:
> > >
> > > On Sat, Dec 1, 2018 at 10:54 AM Rob Clark  wrote:
> > > >
> > > > This solves a problem we see with drm/msm, caused by getting
> > > > iommu_dma_ops while we attach our own domain and manage it directly at
> > > > the iommu API level:
> > > >
> > > >   [0038] user address but active_mm is swapper
> > > >   Internal error: Oops: 9605 [#1] PREEMPT SMP
> > > >   Modules linked in:
> > > >   CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 4.19.3 
> > > > #90
> > > >   Hardware name: xxx (DT)
> > > >   Workqueue: events deferred_probe_work_func
> > > >   pstate: 80c9 (Nzcv daif +PAN +UAO)
> > > >   pc : iommu_dma_map_sg+0x7c/0x2c8
> > > >   lr : iommu_dma_map_sg+0x40/0x2c8
> > > >   sp : ff80095eb4f0
> > > >   x29: ff80095eb4f0 x28: 
> > > >   x27: ffc0f9431578 x26: 
> > > >   x25:  x24: 0003
> > > >   x23: 0001 x22: ffc0fa9ac010
> > > >   x21:  x20: ffc0fab40980
> > > >   x19: ffc0fab40980 x18: 0003
> > > >   x17: 01c4 x16: 0007
> > > >   x15: 000e x14: 
> > > >   x13:  x12: 0028
> > > >   x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
> > > >   x9 :  x8 : ffc0fab409a0
> > > >   x7 :  x6 : 0002
> > > >   x5 : 0001 x4 : 
> > > >   x3 : 0001 x2 : 0002
> > > >   x1 : ffc0f9431578 x0 : 
> > > >   Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
> > > >   Call trace:
> > > >iommu_dma_map_sg+0x7c/0x2c8
> > > >__iommu_map_sg_attrs+0x70/0x84
> > > >get_pages+0x170/0x1e8
> > > >msm_gem_get_iova+0x8c/0x128
> > > >_msm_gem_kernel_new+0x6c/0xc8
> > > >msm_gem_kernel_new+0x4c/0x58
> > > >dsi_tx_buf_alloc_6g+0x4c/0x8c
> > > >msm_dsi_host_modeset_init+0xc8/0x108
> > > >msm_dsi_modeset_init+0x54/0x18c
> > > >_dpu_kms_drm_obj_init+0x430/0x474
> > > >dpu_kms_hw_init+0x5f8/0x6b4
> > > >msm_drm_bind+0x360/0x6c8
> > > >try_to_bring_up_master.part.7+0x28/0x70
> > > >component_master_add_with_match+0xe8/0x124
> > > >msm_pdev_probe+0x294/0x2b4
> > > >platform_drv_probe+0x58/0xa4
> > > >really_probe+0x150/0x294
> > > >driver_probe_device+0xac/0xe8
> > > >__device_attach_driver+0xa4/0xb4
> > > >bus_for_each_drv+0x98/0xc8
> > > >__device_attach+0xac/0x12c
> > > >device_initial_probe+0x24/0x30
> > > >bus_probe_device+0x38/0x98
> > > >deferred_probe_work_func+0x78/0xa4
> > > >process_one_work+0x24c/0x3dc
> > > >worker_thread+0x280/0x360
> > > >kthread+0x134/0x13c
> > > >ret_from_fork+0x10/0x18
> > > >   Code: d284 91000725 6b17039f 5400048a (f9401f40)
> > > >   ---[ end trace f22dda57f3648e2c ]---
> > > >   Kernel panic - not syncing: Fatal exception
> > > >   SMP: stopping secondary CPUs
> > > >   Kernel Offset: disabled
> > > >   CPU features: 0x0,22802a18
> > > >   Memory Limit: none
> > > >
> > > > The problem is that when drm/msm does its own iommu_attach_device(),
> > > > now the domain returned by iommu_get_domain_for_dev() is drm/msm's
> > > > domain, and it doesn't have domain->iova_cookie.
> > > >
> > > > We kind of avoided this problem prior to sdm845/dpu because the iommu
> > > > was attached to the mdp node in dt, which is a child of the toplevel
> > > > mdss node (which corresponds to the dev passed in dma_map_sg()).  But
> > > > with sdm845, now the iommu is attached at the mdss level so we hit the
> > > > iommu_dma_ops in dma_map_sg().
> > > >
> > > > But auto allocating/attaching a domain before the driver is probed was
> > > > already a blocking problem for enabling per-context pagetables for the
> > > > GPU.  This problem is also now solved with this patch.
> > > >
> > > > Fixes: 97890ba9289c dma-mapping: detect and configure IOMMU in 
> > > > of_dma_configure
> > > > Tested-by: Douglas Anderson 
> > > > Signed-off-by: Rob Clark 
> > > > ---
> > > > This is an alternative/replacement for [1].  What it lacks in elegance
> > > > it makes up for in practicality ;-)
> > > >
> > > > [1] https://patchwork.freedesktop.org/patch/264930/
> > > >
> > > >  drivers/of/device.c | 22 ++
> > > >  1 file changed, 22 insertions(+)
> > > >
> > > > diff --git a/drivers/of/device.c b/drivers/of/device.c
> > > > index 5957cd4fa262..15ffee00fb22 100644
> > > > --- a/drivers/of/device.c
> > > > +++ b/drivers/of/device.c
> > > > @@ -72,6 +72,14 @@ int of_device_add(struct platform_device *ofdev)
> > > > return device_add(&ofdev->dev);
> > > >  }
> > > >
> > > > +static const struct of_device_id iommu_blacklist[] = {
> > > > +   { .compatible = "qcom,mdp4" },
> > > > +

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-06-02 Thread Rob Clark
On Fri, May 10, 2019 at 7:35 AM Rob Clark  wrote:
>
> On Tue, Dec 4, 2018 at 2:29 PM Rob Herring  wrote:
> >
> > On Sat, Dec 1, 2018 at 10:54 AM Rob Clark  wrote:
> > >
> > > This solves a problem we see with drm/msm, caused by getting
> > > iommu_dma_ops while we attach our own domain and manage it directly at
> > > the iommu API level:
> > >
> > >   [0038] user address but active_mm is swapper
> > >   Internal error: Oops: 9605 [#1] PREEMPT SMP
> > >   Modules linked in:
> > >   CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 4.19.3 #90
> > >   Hardware name: xxx (DT)
> > >   Workqueue: events deferred_probe_work_func
> > >   pstate: 80c9 (Nzcv daif +PAN +UAO)
> > >   pc : iommu_dma_map_sg+0x7c/0x2c8
> > >   lr : iommu_dma_map_sg+0x40/0x2c8
> > >   sp : ff80095eb4f0
> > >   x29: ff80095eb4f0 x28: 
> > >   x27: ffc0f9431578 x26: 
> > >   x25:  x24: 0003
> > >   x23: 0001 x22: ffc0fa9ac010
> > >   x21:  x20: ffc0fab40980
> > >   x19: ffc0fab40980 x18: 0003
> > >   x17: 01c4 x16: 0007
> > >   x15: 000e x14: 
> > >   x13:  x12: 0028
> > >   x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
> > >   x9 :  x8 : ffc0fab409a0
> > >   x7 :  x6 : 0002
> > >   x5 : 0001 x4 : 
> > >   x3 : 0001 x2 : 0002
> > >   x1 : ffc0f9431578 x0 : 
> > >   Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
> > >   Call trace:
> > >iommu_dma_map_sg+0x7c/0x2c8
> > >__iommu_map_sg_attrs+0x70/0x84
> > >get_pages+0x170/0x1e8
> > >msm_gem_get_iova+0x8c/0x128
> > >_msm_gem_kernel_new+0x6c/0xc8
> > >msm_gem_kernel_new+0x4c/0x58
> > >dsi_tx_buf_alloc_6g+0x4c/0x8c
> > >msm_dsi_host_modeset_init+0xc8/0x108
> > >msm_dsi_modeset_init+0x54/0x18c
> > >_dpu_kms_drm_obj_init+0x430/0x474
> > >dpu_kms_hw_init+0x5f8/0x6b4
> > >msm_drm_bind+0x360/0x6c8
> > >try_to_bring_up_master.part.7+0x28/0x70
> > >component_master_add_with_match+0xe8/0x124
> > >msm_pdev_probe+0x294/0x2b4
> > >platform_drv_probe+0x58/0xa4
> > >really_probe+0x150/0x294
> > >driver_probe_device+0xac/0xe8
> > >__device_attach_driver+0xa4/0xb4
> > >bus_for_each_drv+0x98/0xc8
> > >__device_attach+0xac/0x12c
> > >device_initial_probe+0x24/0x30
> > >bus_probe_device+0x38/0x98
> > >deferred_probe_work_func+0x78/0xa4
> > >process_one_work+0x24c/0x3dc
> > >worker_thread+0x280/0x360
> > >kthread+0x134/0x13c
> > >ret_from_fork+0x10/0x18
> > >   Code: d284 91000725 6b17039f 5400048a (f9401f40)
> > >   ---[ end trace f22dda57f3648e2c ]---
> > >   Kernel panic - not syncing: Fatal exception
> > >   SMP: stopping secondary CPUs
> > >   Kernel Offset: disabled
> > >   CPU features: 0x0,22802a18
> > >   Memory Limit: none
> > >
> > > The problem is that when drm/msm does its own iommu_attach_device(),
> > > now the domain returned by iommu_get_domain_for_dev() is drm/msm's
> > > domain, and it doesn't have domain->iova_cookie.
> > >
> > > We kind of avoided this problem prior to sdm845/dpu because the iommu
> > > was attached to the mdp node in dt, which is a child of the toplevel
> > > mdss node (which corresponds to the dev passed in dma_map_sg()).  But
> > > with sdm845, now the iommu is attached at the mdss level so we hit the
> > > iommu_dma_ops in dma_map_sg().
> > >
> > > But auto allocating/attaching a domain before the driver is probed was
> > > already a blocking problem for enabling per-context pagetables for the
> > > GPU.  This problem is also now solved with this patch.
> > >
> > > Fixes: 97890ba9289c dma-mapping: detect and configure IOMMU in 
> > > of_dma_configure
> > > Tested-by: Douglas Anderson 
> > > Signed-off-by: Rob Clark 
> > > ---
> > > This is an alternative/replacement for [1].  What it lacks in elegance
> > > it makes up for in practicality ;-)
> > >
> > > [1] https://patchwork.freedesktop.org/patch/264930/
> > >
> > >  drivers/of/device.c | 22 ++
> > >  1 file changed, 22 insertions(+)
> > >
> > > diff --git a/drivers/of/device.c b/drivers/of/device.c
> > > index 5957cd4fa262..15ffee00fb22 100644
> > > --- a/drivers/of/device.c
> > > +++ b/drivers/of/device.c
> > > @@ -72,6 +72,14 @@ int of_device_add(struct platform_device *ofdev)
> > > return device_add(&ofdev->dev);
> > >  }
> > >
> > > +static const struct of_device_id iommu_blacklist[] = {
> > > +   { .compatible = "qcom,mdp4" },
> > > +   { .compatible = "qcom,mdss" },
> > > +   { .compatible = "qcom,sdm845-mdss" },
> > > +   { .compatible = "qcom,adreno" },
> > > +   {}
> > > +};
> >
> > Not completely clear to me whether this is still needed or not, but this
> > really won't scale. Why can't the 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2019-05-10 Thread Rob Clark
On Tue, Dec 4, 2018 at 2:29 PM Rob Herring  wrote:
>
> On Sat, Dec 1, 2018 at 10:54 AM Rob Clark  wrote:
> >
> > This solves a problem we see with drm/msm, caused by getting
> > iommu_dma_ops while we attach our own domain and manage it directly at
> > the iommu API level:
> >
> >   [0038] user address but active_mm is swapper
> >   Internal error: Oops: 9605 [#1] PREEMPT SMP
> >   Modules linked in:
> >   CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 4.19.3 #90
> >   Hardware name: xxx (DT)
> >   Workqueue: events deferred_probe_work_func
> >   pstate: 80c9 (Nzcv daif +PAN +UAO)
> >   pc : iommu_dma_map_sg+0x7c/0x2c8
> >   lr : iommu_dma_map_sg+0x40/0x2c8
> >   sp : ff80095eb4f0
> >   x29: ff80095eb4f0 x28: 
> >   x27: ffc0f9431578 x26: 
> >   x25:  x24: 0003
> >   x23: 0001 x22: ffc0fa9ac010
> >   x21:  x20: ffc0fab40980
> >   x19: ffc0fab40980 x18: 0003
> >   x17: 01c4 x16: 0007
> >   x15: 000e x14: 
> >   x13:  x12: 0028
> >   x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
> >   x9 :  x8 : ffc0fab409a0
> >   x7 :  x6 : 0002
> >   x5 : 0001 x4 : 
> >   x3 : 0001 x2 : 0002
> >   x1 : ffc0f9431578 x0 : 
> >   Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
> >   Call trace:
> >iommu_dma_map_sg+0x7c/0x2c8
> >__iommu_map_sg_attrs+0x70/0x84
> >get_pages+0x170/0x1e8
> >msm_gem_get_iova+0x8c/0x128
> >_msm_gem_kernel_new+0x6c/0xc8
> >msm_gem_kernel_new+0x4c/0x58
> >dsi_tx_buf_alloc_6g+0x4c/0x8c
> >msm_dsi_host_modeset_init+0xc8/0x108
> >msm_dsi_modeset_init+0x54/0x18c
> >_dpu_kms_drm_obj_init+0x430/0x474
> >dpu_kms_hw_init+0x5f8/0x6b4
> >msm_drm_bind+0x360/0x6c8
> >try_to_bring_up_master.part.7+0x28/0x70
> >component_master_add_with_match+0xe8/0x124
> >msm_pdev_probe+0x294/0x2b4
> >platform_drv_probe+0x58/0xa4
> >really_probe+0x150/0x294
> >driver_probe_device+0xac/0xe8
> >__device_attach_driver+0xa4/0xb4
> >bus_for_each_drv+0x98/0xc8
> >__device_attach+0xac/0x12c
> >device_initial_probe+0x24/0x30
> >bus_probe_device+0x38/0x98
> >deferred_probe_work_func+0x78/0xa4
> >process_one_work+0x24c/0x3dc
> >worker_thread+0x280/0x360
> >kthread+0x134/0x13c
> >ret_from_fork+0x10/0x18
> >   Code: d284 91000725 6b17039f 5400048a (f9401f40)
> >   ---[ end trace f22dda57f3648e2c ]---
> >   Kernel panic - not syncing: Fatal exception
> >   SMP: stopping secondary CPUs
> >   Kernel Offset: disabled
> >   CPU features: 0x0,22802a18
> >   Memory Limit: none
> >
> > The problem is that when drm/msm does its own iommu_attach_device(),
> > now the domain returned by iommu_get_domain_for_dev() is drm/msm's
> > domain, and it doesn't have domain->iova_cookie.
> >
> > We kind of avoided this problem prior to sdm845/dpu because the iommu
> > was attached to the mdp node in dt, which is a child of the toplevel
> > mdss node (which corresponds to the dev passed in dma_map_sg()).  But
> > with sdm845, now the iommu is attached at the mdss level so we hit the
> > iommu_dma_ops in dma_map_sg().
> >
> > But auto allocating/attaching a domain before the driver is probed was
> > already a blocking problem for enabling per-context pagetables for the
> > GPU.  This problem is also now solved with this patch.
> >
> > Fixes: 97890ba9289c dma-mapping: detect and configure IOMMU in 
> > of_dma_configure
> > Tested-by: Douglas Anderson 
> > Signed-off-by: Rob Clark 
> > ---
> > This is an alternative/replacement for [1].  What it lacks in elegance
> > it makes up for in practicality ;-)
> >
> > [1] https://patchwork.freedesktop.org/patch/264930/
> >
> >  drivers/of/device.c | 22 ++
> >  1 file changed, 22 insertions(+)
> >
> > diff --git a/drivers/of/device.c b/drivers/of/device.c
> > index 5957cd4fa262..15ffee00fb22 100644
> > --- a/drivers/of/device.c
> > +++ b/drivers/of/device.c
> > @@ -72,6 +72,14 @@ int of_device_add(struct platform_device *ofdev)
> > return device_add(&ofdev->dev);
> >  }
> >
> > +static const struct of_device_id iommu_blacklist[] = {
> > +   { .compatible = "qcom,mdp4" },
> > +   { .compatible = "qcom,mdss" },
> > +   { .compatible = "qcom,sdm845-mdss" },
> > +   { .compatible = "qcom,adreno" },
> > +   {}
> > +};
>
> Not completely clear to me whether this is still needed or not, but this
> really won't scale. Why can't the driver for these devices override
> whatever has been setup by default?
>

fwiw, at the moment it is not needed, but it will become needed again
to implement per-context pagetables (although I suppose for this we
only need to blacklist qcom,adreno and not also the display nodes).

The 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2018-12-04 Thread Rob Herring
On Sat, Dec 1, 2018 at 10:54 AM Rob Clark  wrote:
>
> This solves a problem we see with drm/msm, caused by getting
> iommu_dma_ops while we attach our own domain and manage it directly at
> the iommu API level:
>
>   [0038] user address but active_mm is swapper
>   Internal error: Oops: 9605 [#1] PREEMPT SMP
>   Modules linked in:
>   CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 4.19.3 #90
>   Hardware name: xxx (DT)
>   Workqueue: events deferred_probe_work_func
>   pstate: 80c9 (Nzcv daif +PAN +UAO)
>   pc : iommu_dma_map_sg+0x7c/0x2c8
>   lr : iommu_dma_map_sg+0x40/0x2c8
>   sp : ff80095eb4f0
>   x29: ff80095eb4f0 x28: 
>   x27: ffc0f9431578 x26: 
>   x25:  x24: 0003
>   x23: 0001 x22: ffc0fa9ac010
>   x21:  x20: ffc0fab40980
>   x19: ffc0fab40980 x18: 0003
>   x17: 01c4 x16: 0007
>   x15: 000e x14: 
>   x13:  x12: 0028
>   x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
>   x9 :  x8 : ffc0fab409a0
>   x7 :  x6 : 0002
>   x5 : 0001 x4 : 
>   x3 : 0001 x2 : 0002
>   x1 : ffc0f9431578 x0 : 
>   Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
>   Call trace:
>iommu_dma_map_sg+0x7c/0x2c8
>__iommu_map_sg_attrs+0x70/0x84
>get_pages+0x170/0x1e8
>msm_gem_get_iova+0x8c/0x128
>_msm_gem_kernel_new+0x6c/0xc8
>msm_gem_kernel_new+0x4c/0x58
>dsi_tx_buf_alloc_6g+0x4c/0x8c
>msm_dsi_host_modeset_init+0xc8/0x108
>msm_dsi_modeset_init+0x54/0x18c
>_dpu_kms_drm_obj_init+0x430/0x474
>dpu_kms_hw_init+0x5f8/0x6b4
>msm_drm_bind+0x360/0x6c8
>try_to_bring_up_master.part.7+0x28/0x70
>component_master_add_with_match+0xe8/0x124
>msm_pdev_probe+0x294/0x2b4
>platform_drv_probe+0x58/0xa4
>really_probe+0x150/0x294
>driver_probe_device+0xac/0xe8
>__device_attach_driver+0xa4/0xb4
>bus_for_each_drv+0x98/0xc8
>__device_attach+0xac/0x12c
>device_initial_probe+0x24/0x30
>bus_probe_device+0x38/0x98
>deferred_probe_work_func+0x78/0xa4
>process_one_work+0x24c/0x3dc
>worker_thread+0x280/0x360
>kthread+0x134/0x13c
>ret_from_fork+0x10/0x18
>   Code: d284 91000725 6b17039f 5400048a (f9401f40)
>   ---[ end trace f22dda57f3648e2c ]---
>   Kernel panic - not syncing: Fatal exception
>   SMP: stopping secondary CPUs
>   Kernel Offset: disabled
>   CPU features: 0x0,22802a18
>   Memory Limit: none
>
> > The problem is that when drm/msm does its own iommu_attach_device(),
> now the domain returned by iommu_get_domain_for_dev() is drm/msm's
> domain, and it doesn't have domain->iova_cookie.
>
> We kind of avoided this problem prior to sdm845/dpu because the iommu
> was attached to the mdp node in dt, which is a child of the toplevel
> mdss node (which corresponds to the dev passed in dma_map_sg()).  But
> with sdm845, now the iommu is attached at the mdss level so we hit the
> iommu_dma_ops in dma_map_sg().
>
> But auto allocating/attaching a domain before the driver is probed was
> already a blocking problem for enabling per-context pagetables for the
> GPU.  This problem is also now solved with this patch.
>
> Fixes: 97890ba9289c dma-mapping: detect and configure IOMMU in 
> of_dma_configure
> Tested-by: Douglas Anderson 
> Signed-off-by: Rob Clark 
> ---
> This is an alternative/replacement for [1].  What it lacks in elegance
> it makes up for in practicality ;-)
>
> [1] https://patchwork.freedesktop.org/patch/264930/
>
>  drivers/of/device.c | 22 ++
>  1 file changed, 22 insertions(+)
>
> diff --git a/drivers/of/device.c b/drivers/of/device.c
> index 5957cd4fa262..15ffee00fb22 100644
> --- a/drivers/of/device.c
> +++ b/drivers/of/device.c
> @@ -72,6 +72,14 @@ int of_device_add(struct platform_device *ofdev)
> > return device_add(&ofdev->dev);
>  }
>
> +static const struct of_device_id iommu_blacklist[] = {
> +   { .compatible = "qcom,mdp4" },
> +   { .compatible = "qcom,mdss" },
> +   { .compatible = "qcom,sdm845-mdss" },
> +   { .compatible = "qcom,adreno" },
> +   {}
> +};

Not completely clear to me whether this is still needed or not, but this
really won't scale. Why can't the driver for these devices override
whatever has been setup by default?

Rob


Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2018-12-03 Thread Vivek Gautam
On Mon, Dec 3, 2018 at 7:56 PM Rob Clark  wrote:
>
> On Mon, Dec 3, 2018 at 7:45 AM Robin Murphy  wrote:
> >
> > Hi Rob,
> >
> > On 01/12/2018 16:53, Rob Clark wrote:
> > > This solves a problem we see with drm/msm, caused by getting
> > > iommu_dma_ops while we attach our own domain and manage it directly at
> > > the iommu API level:
> > >
> > >[0038] user address but active_mm is swapper
> > >Internal error: Oops: 9605 [#1] PREEMPT SMP
> > >Modules linked in:
> > >CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 4.19.3 #90
> > >Hardware name: xxx (DT)
> > >Workqueue: events deferred_probe_work_func
> > >pstate: 80c9 (Nzcv daif +PAN +UAO)
> > >pc : iommu_dma_map_sg+0x7c/0x2c8
> > >lr : iommu_dma_map_sg+0x40/0x2c8
> > >sp : ff80095eb4f0
> > >x29: ff80095eb4f0 x28: 
> > >x27: ffc0f9431578 x26: 
> > >x25:  x24: 0003
> > >x23: 0001 x22: ffc0fa9ac010
> > >x21:  x20: ffc0fab40980
> > >x19: ffc0fab40980 x18: 0003
> > >x17: 01c4 x16: 0007
> > >x15: 000e x14: 
> > >x13:  x12: 0028
> > >x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
> > >x9 :  x8 : ffc0fab409a0
> > >x7 :  x6 : 0002
> > >x5 : 0001 x4 : 
> > >x3 : 0001 x2 : 0002
> > >x1 : ffc0f9431578 x0 : 
> > >Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
> > >Call trace:
> > > iommu_dma_map_sg+0x7c/0x2c8
> > > __iommu_map_sg_attrs+0x70/0x84
> > > get_pages+0x170/0x1e8
> > > msm_gem_get_iova+0x8c/0x128
> > > _msm_gem_kernel_new+0x6c/0xc8
> > > msm_gem_kernel_new+0x4c/0x58
> > > dsi_tx_buf_alloc_6g+0x4c/0x8c
> > > msm_dsi_host_modeset_init+0xc8/0x108
> > > msm_dsi_modeset_init+0x54/0x18c
> > > _dpu_kms_drm_obj_init+0x430/0x474
> > > dpu_kms_hw_init+0x5f8/0x6b4
> > > msm_drm_bind+0x360/0x6c8
> > > try_to_bring_up_master.part.7+0x28/0x70
> > > component_master_add_with_match+0xe8/0x124
> > > msm_pdev_probe+0x294/0x2b4
> > > platform_drv_probe+0x58/0xa4
> > > really_probe+0x150/0x294
> > > driver_probe_device+0xac/0xe8
> > > __device_attach_driver+0xa4/0xb4
> > > bus_for_each_drv+0x98/0xc8
> > > __device_attach+0xac/0x12c
> > > device_initial_probe+0x24/0x30
> > > bus_probe_device+0x38/0x98
> > > deferred_probe_work_func+0x78/0xa4
> > > process_one_work+0x24c/0x3dc
> > > worker_thread+0x280/0x360
> > > kthread+0x134/0x13c
> > > ret_from_fork+0x10/0x18
> > >Code: d284 91000725 6b17039f 5400048a (f9401f40)
> > >---[ end trace f22dda57f3648e2c ]---
> > >Kernel panic - not syncing: Fatal exception
> > >SMP: stopping secondary CPUs
> > >Kernel Offset: disabled
> > >CPU features: 0x0,22802a18
> > >Memory Limit: none
> > >
> > > The problem is that when drm/msm does it's own iommu_attach_device(),
> > > now the domain returned by iommu_get_domain_for_dev() is drm/msm's
> > > now the domain returned by iommu_get_domain_for_dev() is drm/msm's
> > > domain, and it doesn't have domain->iova_cookie.
> >
> > Does this crash still happen with 4.20-rc? Because as of 6af588fed391 it
> > really shouldn't.
> >
> > > We kind of avoided this problem prior to sdm845/dpu because the iommu
> > > was attached to the mdp node in dt, which is a child of the toplevel
> > > mdss node (which corresponds to the dev passed in dma_map_sg()).  But
> > > with sdm845, now the iommu is attached at the mdss level so we hit the
> > > iommu_dma_ops in dma_map_sg().
> > >
> > > But auto allocating/attaching a domain before the driver is probed was
> > > already a blocking problem for enabling per-context pagetables for the
> > > GPU.  This problem is also now solved with this patch.
> >
> > s/solved/worked around/
> >
> > If you want a guarantee of actually getting a specific hardware context
> > allocated for a given domain, there needs to be code in the IOMMU driver
> > to understand and honour that. Implicitly depending on whatever happens
> > to fall out of current driver behaviour on current systems is not a real
> > solution.
> >
> > > Fixes: 97890ba9289c dma-mapping: detect and configure IOMMU in 
> > > of_dma_configure
> >
> > That's rather misleading, since the crash described above depends on at
> > least two other major changes which came long after that commit.
> >
> > It's not that I don't understand exactly what you want here - just that
> > this commit message isn't a coherent justification for that ;)
> >
> > > Tested-by: Douglas Anderson 
> > > Signed-off-by: Rob Clark 
> > > ---
> > > This is an alternative/replacement for [1].  What it lacks in elegance
> > > it makes up for in practicality ;-)
> > >
> > > [1] 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2018-12-03 Thread Rob Clark
On Mon, Dec 3, 2018 at 7:45 AM Robin Murphy  wrote:
>
> Hi Rob,
>
> On 01/12/2018 16:53, Rob Clark wrote:
> > This solves a problem we see with drm/msm, caused by getting
> > iommu_dma_ops while we attach our own domain and manage it directly at
> > the iommu API level:
> >
> >[0038] user address but active_mm is swapper
> >Internal error: Oops: 9605 [#1] PREEMPT SMP
> >Modules linked in:
> >CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 4.19.3 #90
> >Hardware name: xxx (DT)
> >Workqueue: events deferred_probe_work_func
> >pstate: 80c9 (Nzcv daif +PAN +UAO)
> >pc : iommu_dma_map_sg+0x7c/0x2c8
> >lr : iommu_dma_map_sg+0x40/0x2c8
> >sp : ff80095eb4f0
> >x29: ff80095eb4f0 x28: 
> >x27: ffc0f9431578 x26: 
> >x25:  x24: 0003
> >x23: 0001 x22: ffc0fa9ac010
> >x21:  x20: ffc0fab40980
> >x19: ffc0fab40980 x18: 0003
> >x17: 01c4 x16: 0007
> >x15: 000e x14: 
> >x13:  x12: 0028
> >x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
> >x9 :  x8 : ffc0fab409a0
> >x7 :  x6 : 0002
> >x5 : 0001 x4 : 
> >x3 : 0001 x2 : 0002
> >x1 : ffc0f9431578 x0 : 
> >Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
> >Call trace:
> > iommu_dma_map_sg+0x7c/0x2c8
> > __iommu_map_sg_attrs+0x70/0x84
> > get_pages+0x170/0x1e8
> > msm_gem_get_iova+0x8c/0x128
> > _msm_gem_kernel_new+0x6c/0xc8
> > msm_gem_kernel_new+0x4c/0x58
> > dsi_tx_buf_alloc_6g+0x4c/0x8c
> > msm_dsi_host_modeset_init+0xc8/0x108
> > msm_dsi_modeset_init+0x54/0x18c
> > _dpu_kms_drm_obj_init+0x430/0x474
> > dpu_kms_hw_init+0x5f8/0x6b4
> > msm_drm_bind+0x360/0x6c8
> > try_to_bring_up_master.part.7+0x28/0x70
> > component_master_add_with_match+0xe8/0x124
> > msm_pdev_probe+0x294/0x2b4
> > platform_drv_probe+0x58/0xa4
> > really_probe+0x150/0x294
> > driver_probe_device+0xac/0xe8
> > __device_attach_driver+0xa4/0xb4
> > bus_for_each_drv+0x98/0xc8
> > __device_attach+0xac/0x12c
> > device_initial_probe+0x24/0x30
> > bus_probe_device+0x38/0x98
> > deferred_probe_work_func+0x78/0xa4
> > process_one_work+0x24c/0x3dc
> > worker_thread+0x280/0x360
> > kthread+0x134/0x13c
> > ret_from_fork+0x10/0x18
> >Code: d284 91000725 6b17039f 5400048a (f9401f40)
> >---[ end trace f22dda57f3648e2c ]---
> >Kernel panic - not syncing: Fatal exception
> >SMP: stopping secondary CPUs
> >Kernel Offset: disabled
> >CPU features: 0x0,22802a18
> >Memory Limit: none
> >
> > The problem is that when drm/msm does its own iommu_attach_device(),
> > now the domain returned by iommu_get_domain_for_dev() is drm/msm's
> > domain, and it doesn't have domain->iova_cookie.
>
> Does this crash still happen with 4.20-rc? Because as of 6af588fed391 it
> really shouldn't.
>
> > We kind of avoided this problem prior to sdm845/dpu because the iommu
> > was attached to the mdp node in dt, which is a child of the toplevel
> > mdss node (which corresponds to the dev passed in dma_map_sg()).  But
> > with sdm845, now the iommu is attached at the mdss level so we hit the
> > iommu_dma_ops in dma_map_sg().
> >
> > But auto allocating/attaching a domain before the driver is probed was
> > already a blocking problem for enabling per-context pagetables for the
> > GPU.  This problem is also now solved with this patch.
>
> s/solved/worked around/
>
> If you want a guarantee of actually getting a specific hardware context
> allocated for a given domain, there needs to be code in the IOMMU driver
> to understand and honour that. Implicitly depending on whatever happens
> to fall out of current driver behaviour on current systems is not a real
> solution.
>
> > Fixes: 97890ba9289c dma-mapping: detect and configure IOMMU in 
> > of_dma_configure
>
> That's rather misleading, since the crash described above depends on at
> least two other major changes which came long after that commit.
>
> It's not that I don't understand exactly what you want here - just that
> this commit message isn't a coherent justification for that ;)
>
> > Tested-by: Douglas Anderson 
> > Signed-off-by: Rob Clark 
> > ---
> > This is an alternative/replacement for [1].  What it lacks in elegance
> > it makes up for in practicality ;-)
> >
> > [1] https://patchwork.freedesktop.org/patch/264930/
> >
> >   drivers/of/device.c | 22 ++
> >   1 file changed, 22 insertions(+)
> >
> > diff --git a/drivers/of/device.c b/drivers/of/device.c
> > index 5957cd4fa262..15ffee00fb22 100644
> > --- a/drivers/of/device.c
> > +++ b/drivers/of/device.c
> > 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2018-12-03 Thread Rob Clark
On Mon, Dec 3, 2018 at 7:45 AM Robin Murphy  wrote:
>
> Hi Rob,
>
> On 01/12/2018 16:53, Rob Clark wrote:
> > This solves a problem we see with drm/msm, caused by getting
> > iommu_dma_ops while we attach our own domain and manage it directly at
> > the iommu API level:
> >
> >[0038] user address but active_mm is swapper
> >Internal error: Oops: 9605 [#1] PREEMPT SMP
> >Modules linked in:
> >CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 4.19.3 #90
> >Hardware name: xxx (DT)
> >Workqueue: events deferred_probe_work_func
> >pstate: 80c9 (Nzcv daif +PAN +UAO)
> >pc : iommu_dma_map_sg+0x7c/0x2c8
> >lr : iommu_dma_map_sg+0x40/0x2c8
> >sp : ff80095eb4f0
> >x29: ff80095eb4f0 x28: 
> >x27: ffc0f9431578 x26: 
> >x25:  x24: 0003
> >x23: 0001 x22: ffc0fa9ac010
> >x21:  x20: ffc0fab40980
> >x19: ffc0fab40980 x18: 0003
> >x17: 01c4 x16: 0007
> >x15: 000e x14: 
> >x13:  x12: 0028
> >x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
> >x9 :  x8 : ffc0fab409a0
> >x7 :  x6 : 0002
> >x5 : 0001 x4 : 
> >x3 : 0001 x2 : 0002
> >x1 : ffc0f9431578 x0 : 
> >Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
> >Call trace:
> > iommu_dma_map_sg+0x7c/0x2c8
> > __iommu_map_sg_attrs+0x70/0x84
> > get_pages+0x170/0x1e8
> > msm_gem_get_iova+0x8c/0x128
> > _msm_gem_kernel_new+0x6c/0xc8
> > msm_gem_kernel_new+0x4c/0x58
> > dsi_tx_buf_alloc_6g+0x4c/0x8c
> > msm_dsi_host_modeset_init+0xc8/0x108
> > msm_dsi_modeset_init+0x54/0x18c
> > _dpu_kms_drm_obj_init+0x430/0x474
> > dpu_kms_hw_init+0x5f8/0x6b4
> > msm_drm_bind+0x360/0x6c8
> > try_to_bring_up_master.part.7+0x28/0x70
> > component_master_add_with_match+0xe8/0x124
> > msm_pdev_probe+0x294/0x2b4
> > platform_drv_probe+0x58/0xa4
> > really_probe+0x150/0x294
> > driver_probe_device+0xac/0xe8
> > __device_attach_driver+0xa4/0xb4
> > bus_for_each_drv+0x98/0xc8
> > __device_attach+0xac/0x12c
> > device_initial_probe+0x24/0x30
> > bus_probe_device+0x38/0x98
> > deferred_probe_work_func+0x78/0xa4
> > process_one_work+0x24c/0x3dc
> > worker_thread+0x280/0x360
> > kthread+0x134/0x13c
> > ret_from_fork+0x10/0x18
> >Code: d284 91000725 6b17039f 5400048a (f9401f40)
> >---[ end trace f22dda57f3648e2c ]---
> >Kernel panic - not syncing: Fatal exception
> >SMP: stopping secondary CPUs
> >Kernel Offset: disabled
> >CPU features: 0x0,22802a18
> >Memory Limit: none
> >
> > The problem is that when drm/msm does its own iommu_attach_device(),
> > now the domain returned by iommu_get_domain_for_dev() is drm/msm's
> > domain, and it doesn't have domain->iova_cookie.
>
> Does this crash still happen with 4.20-rc? Because as of 6af588fed391 it
> really shouldn't.

for this hw, I'm still on 4.19, although that does look like it would
avoid the issue.

> > We kind of avoided this problem prior to sdm845/dpu because the iommu
> > was attached to the mdp node in dt, which is a child of the toplevel
> > mdss node (which corresponds to the dev passed in dma_map_sg()).  But
> > with sdm845, now the iommu is attached at the mdss level so we hit the
> > iommu_dma_ops in dma_map_sg().
> >
> > But auto allocating/attaching a domain before the driver is probed was
> > already a blocking problem for enabling per-context pagetables for the
> > GPU.  This problem is also now solved with this patch.
>
> s/solved/worked around/
>
> If you want a guarantee of actually getting a specific hardware context
> allocated for a given domain, there needs to be code in the IOMMU driver
> to understand and honour that. Implicitly depending on whatever happens
> to fall out of current driver behaviour on current systems is not a real
> solution.

ok, fair.. but I'll settle for "works" in the absence of better options..

At some level, it would be nice to be able to optionally specify a
context-bank in the iommu bindings.  But not sure how to make that fit
w/ cb allocated per domain.  And I assume I'm the only one who has
this problem?

> > Fixes: 97890ba9289c dma-mapping: detect and configure IOMMU in 
> > of_dma_configure
>
> That's rather misleading, since the crash described above depends on at
> least two other major changes which came long after that commit.

Fair, when I realized it was the difference in where the iommu
attaches between dpu1 (sdm845) and everything coming before, I should
have removed the tag.

BR,
-R

> It's not that I don't understand exactly what you want here - just that
> this commit message isn't a 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2018-12-03 Thread Robin Murphy

Hi Rob,

On 01/12/2018 16:53, Rob Clark wrote:

This solves a problem we see with drm/msm, caused by getting
iommu_dma_ops while we attach our own domain and manage it directly at
the iommu API level:

   [0038] user address but active_mm is swapper
   Internal error: Oops: 9605 [#1] PREEMPT SMP
   Modules linked in:
   CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 4.19.3 #90
   Hardware name: xxx (DT)
   Workqueue: events deferred_probe_work_func
   pstate: 80c9 (Nzcv daif +PAN +UAO)
   pc : iommu_dma_map_sg+0x7c/0x2c8
   lr : iommu_dma_map_sg+0x40/0x2c8
   sp : ff80095eb4f0
   x29: ff80095eb4f0 x28: 
   x27: ffc0f9431578 x26: 
   x25:  x24: 0003
   x23: 0001 x22: ffc0fa9ac010
   x21:  x20: ffc0fab40980
   x19: ffc0fab40980 x18: 0003
   x17: 01c4 x16: 0007
   x15: 000e x14: 
   x13:  x12: 0028
   x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
   x9 :  x8 : ffc0fab409a0
   x7 :  x6 : 0002
   x5 : 0001 x4 : 
   x3 : 0001 x2 : 0002
   x1 : ffc0f9431578 x0 : 
   Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
   Call trace:
iommu_dma_map_sg+0x7c/0x2c8
__iommu_map_sg_attrs+0x70/0x84
get_pages+0x170/0x1e8
msm_gem_get_iova+0x8c/0x128
_msm_gem_kernel_new+0x6c/0xc8
msm_gem_kernel_new+0x4c/0x58
dsi_tx_buf_alloc_6g+0x4c/0x8c
msm_dsi_host_modeset_init+0xc8/0x108
msm_dsi_modeset_init+0x54/0x18c
_dpu_kms_drm_obj_init+0x430/0x474
dpu_kms_hw_init+0x5f8/0x6b4
msm_drm_bind+0x360/0x6c8
try_to_bring_up_master.part.7+0x28/0x70
component_master_add_with_match+0xe8/0x124
msm_pdev_probe+0x294/0x2b4
platform_drv_probe+0x58/0xa4
really_probe+0x150/0x294
driver_probe_device+0xac/0xe8
__device_attach_driver+0xa4/0xb4
bus_for_each_drv+0x98/0xc8
__device_attach+0xac/0x12c
device_initial_probe+0x24/0x30
bus_probe_device+0x38/0x98
deferred_probe_work_func+0x78/0xa4
process_one_work+0x24c/0x3dc
worker_thread+0x280/0x360
kthread+0x134/0x13c
ret_from_fork+0x10/0x18
   Code: d284 91000725 6b17039f 5400048a (f9401f40)
   ---[ end trace f22dda57f3648e2c ]---
   Kernel panic - not syncing: Fatal exception
   SMP: stopping secondary CPUs
   Kernel Offset: disabled
   CPU features: 0x0,22802a18
   Memory Limit: none

The problem is that when drm/msm does its own iommu_attach_device(),
now the domain returned by iommu_get_domain_for_dev() is drm/msm's
domain, and it doesn't have domain->iova_cookie.


Does this crash still happen with 4.20-rc? Because as of 6af588fed391 it 
really shouldn't.



We kind of avoided this problem prior to sdm845/dpu because the iommu
was attached to the mdp node in dt, which is a child of the toplevel
mdss node (which corresponds to the dev passed in dma_map_sg()).  But
with sdm845, now the iommu is attached at the mdss level so we hit the
iommu_dma_ops in dma_map_sg().

But auto allocating/attaching a domain before the driver is probed was
already a blocking problem for enabling per-context pagetables for the
GPU.  This problem is also now solved with this patch.


s/solved/worked around/

If you want a guarantee of actually getting a specific hardware context 
allocated for a given domain, there needs to be code in the IOMMU driver 
to understand and honour that. Implicitly depending on whatever happens 
to fall out of current driver behaviour on current systems is not a real 
solution.



Fixes: 97890ba9289c dma-mapping: detect and configure IOMMU in of_dma_configure


That's rather misleading, since the crash described above depends on at 
least two other major changes which came long after that commit.


It's not that I don't understand exactly what you want here - just that 
this commit message isn't a coherent justification for that ;)



Tested-by: Douglas Anderson 
Signed-off-by: Rob Clark 
---
This is an alternative/replacement for [1].  What it lacks in elegance
it makes up for in practicality ;-)

[1] https://patchwork.freedesktop.org/patch/264930/

  drivers/of/device.c | 22 ++
  1 file changed, 22 insertions(+)

diff --git a/drivers/of/device.c b/drivers/of/device.c
index 5957cd4fa262..15ffee00fb22 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -72,6 +72,14 @@ int of_device_add(struct platform_device *ofdev)
return device_add(&ofdev->dev);
  }
  
+static const struct of_device_id iommu_blacklist[] = {

+   { .compatible = "qcom,mdp4" },
+   { .compatible = "qcom,mdss" },
+   { .compatible = "qcom,sdm845-mdss" },
+   { .compatible = "qcom,adreno" },
+   {}
+};
+
  /**
   * of_dma_configure - Setup DMA configuration
   * @dev:  Device to apply DMA configuration
@@ 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2018-12-03 Thread Marek Szyprowski
Hi Tomasz,

On 2018-12-03 01:10, Tomasz Figa wrote:
> On Sat, Dec 1, 2018 at 8:54 AM Rob Clark  wrote:
>> This solves a problem we see with drm/msm, caused by getting
>> iommu_dma_ops while we attach our own domain and manage it directly at
>> the iommu API level:
>>
>>   [0038] user address but active_mm is swapper
>>   Internal error: Oops: 9605 [#1] PREEMPT SMP
>>   Modules linked in:
>>   CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 4.19.3 #90
>>   Hardware name: xxx (DT)
>>   Workqueue: events deferred_probe_work_func
>>   pstate: 80c9 (Nzcv daif +PAN +UAO)
>>   pc : iommu_dma_map_sg+0x7c/0x2c8
>>   lr : iommu_dma_map_sg+0x40/0x2c8
>>   sp : ff80095eb4f0
>>   x29: ff80095eb4f0 x28: 
>>   x27: ffc0f9431578 x26: 
>>   x25:  x24: 0003
>>   x23: 0001 x22: ffc0fa9ac010
>>   x21:  x20: ffc0fab40980
>>   x19: ffc0fab40980 x18: 0003
>>   x17: 01c4 x16: 0007
>>   x15: 000e x14: 
>>   x13:  x12: 0028
>>   x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
>>   x9 :  x8 : ffc0fab409a0
>>   x7 :  x6 : 0002
>>   x5 : 0001 x4 : 
>>   x3 : 0001 x2 : 0002
>>   x1 : ffc0f9431578 x0 : 
>>   Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
>>   Call trace:
>>iommu_dma_map_sg+0x7c/0x2c8
>>__iommu_map_sg_attrs+0x70/0x84
>>get_pages+0x170/0x1e8
>>msm_gem_get_iova+0x8c/0x128
>>_msm_gem_kernel_new+0x6c/0xc8
>>msm_gem_kernel_new+0x4c/0x58
>>dsi_tx_buf_alloc_6g+0x4c/0x8c
>>msm_dsi_host_modeset_init+0xc8/0x108
>>msm_dsi_modeset_init+0x54/0x18c
>>_dpu_kms_drm_obj_init+0x430/0x474
>>dpu_kms_hw_init+0x5f8/0x6b4
>>msm_drm_bind+0x360/0x6c8
>>try_to_bring_up_master.part.7+0x28/0x70
>>component_master_add_with_match+0xe8/0x124
>>msm_pdev_probe+0x294/0x2b4
>>platform_drv_probe+0x58/0xa4
>>really_probe+0x150/0x294
>>driver_probe_device+0xac/0xe8
>>__device_attach_driver+0xa4/0xb4
>>bus_for_each_drv+0x98/0xc8
>>__device_attach+0xac/0x12c
>>device_initial_probe+0x24/0x30
>>bus_probe_device+0x38/0x98
>>deferred_probe_work_func+0x78/0xa4
>>process_one_work+0x24c/0x3dc
>>worker_thread+0x280/0x360
>>kthread+0x134/0x13c
>>ret_from_fork+0x10/0x18
>>   Code: d284 91000725 6b17039f 5400048a (f9401f40)
>>   ---[ end trace f22dda57f3648e2c ]---
>>   Kernel panic - not syncing: Fatal exception
>>   SMP: stopping secondary CPUs
>>   Kernel Offset: disabled
>>   CPU features: 0x0,22802a18
>>   Memory Limit: none
>>
>> The problem is that when drm/msm does its own iommu_attach_device(),
>> now the domain returned by iommu_get_domain_for_dev() is drm/msm's
>> domain, and it doesn't have domain->iova_cookie.
>>
>> We kind of avoided this problem prior to sdm845/dpu because the iommu
>> was attached to the mdp node in dt, which is a child of the toplevel
>> mdss node (which corresponds to the dev passed in dma_map_sg()).  But
>> with sdm845, now the iommu is attached at the mdss level so we hit the
>> iommu_dma_ops in dma_map_sg().
>>
>> But auto allocating/attaching a domain before the driver is probed was
>> already a blocking problem for enabling per-context pagetables for the
>> GPU.  This problem is also now solved with this patch.
>>
>> Fixes: 97890ba9289c dma-mapping: detect and configure IOMMU in 
>> of_dma_configure
>> Tested-by: Douglas Anderson 
>> Signed-off-by: Rob Clark 
>> ---
>> This is an alternative/replacement for [1].  What it lacks in elegance
>> it makes up for in practicality ;-)
>>
>> [1] https://patchwork.freedesktop.org/patch/264930/
>>
>>  drivers/of/device.c | 22 ++
>>  1 file changed, 22 insertions(+)
>>
>> diff --git a/drivers/of/device.c b/drivers/of/device.c
>> index 5957cd4fa262..15ffee00fb22 100644
>> --- a/drivers/of/device.c
>> +++ b/drivers/of/device.c
>> @@ -72,6 +72,14 @@ int of_device_add(struct platform_device *ofdev)
>> return device_add(&ofdev->dev);
>>  }
>>
>> +static const struct of_device_id iommu_blacklist[] = {
>> +   { .compatible = "qcom,mdp4" },
>> +   { .compatible = "qcom,mdss" },
>> +   { .compatible = "qcom,sdm845-mdss" },
>> +   { .compatible = "qcom,adreno" },
>> +   {}
>> +};
>> +
>>  /**
>>   * of_dma_configure - Setup DMA configuration
>>   * @dev:   Device to apply DMA configuration
>> @@ -164,6 +172,20 @@ int of_dma_configure(struct device *dev, struct device_node *np, bool force_dma)
>> dev_dbg(dev, "device is%sbehind an iommu\n",
>> iommu ? " " : " not ");
>>
>> +   /*
>> +* There is at least one case where the driver wants to directly
>> +* manage the IOMMU, but if we end up with iommu dma_ops, that
>> +* 

Re: [Freedreno] [PATCH] of/device: add blacklist for iommu dma_ops

2018-12-02 Thread Tomasz Figa
On Sat, Dec 1, 2018 at 8:54 AM Rob Clark  wrote:
>
> This solves a problem we see with drm/msm, caused by getting
> iommu_dma_ops while we attach our own domain and manage it directly at
> the iommu API level:
>
>   [0038] user address but active_mm is swapper
>   Internal error: Oops: 9605 [#1] PREEMPT SMP
>   Modules linked in:
>   CPU: 7 PID: 70 Comm: kworker/7:1 Tainted: GW 4.19.3 #90
>   Hardware name: xxx (DT)
>   Workqueue: events deferred_probe_work_func
>   pstate: 80c9 (Nzcv daif +PAN +UAO)
>   pc : iommu_dma_map_sg+0x7c/0x2c8
>   lr : iommu_dma_map_sg+0x40/0x2c8
>   sp : ff80095eb4f0
>   x29: ff80095eb4f0 x28: 
>   x27: ffc0f9431578 x26: 
>   x25:  x24: 0003
>   x23: 0001 x22: ffc0fa9ac010
>   x21:  x20: ffc0fab40980
>   x19: ffc0fab40980 x18: 0003
>   x17: 01c4 x16: 0007
>   x15: 000e x14: 
>   x13:  x12: 0028
>   x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
>   x9 :  x8 : ffc0fab409a0
>   x7 :  x6 : 0002
>   x5 : 0001 x4 : 
>   x3 : 0001 x2 : 0002
>   x1 : ffc0f9431578 x0 : 
>   Process kworker/7:1 (pid: 70, stack limit = 0x17d08ffb)
>   Call trace:
>iommu_dma_map_sg+0x7c/0x2c8
>__iommu_map_sg_attrs+0x70/0x84
>get_pages+0x170/0x1e8
>msm_gem_get_iova+0x8c/0x128
>_msm_gem_kernel_new+0x6c/0xc8
>msm_gem_kernel_new+0x4c/0x58
>dsi_tx_buf_alloc_6g+0x4c/0x8c
>msm_dsi_host_modeset_init+0xc8/0x108
>msm_dsi_modeset_init+0x54/0x18c
>_dpu_kms_drm_obj_init+0x430/0x474
>dpu_kms_hw_init+0x5f8/0x6b4
>msm_drm_bind+0x360/0x6c8
>try_to_bring_up_master.part.7+0x28/0x70
>component_master_add_with_match+0xe8/0x124
>msm_pdev_probe+0x294/0x2b4
>platform_drv_probe+0x58/0xa4
>really_probe+0x150/0x294
>driver_probe_device+0xac/0xe8
>__device_attach_driver+0xa4/0xb4
>bus_for_each_drv+0x98/0xc8
>__device_attach+0xac/0x12c
>device_initial_probe+0x24/0x30
>bus_probe_device+0x38/0x98
>deferred_probe_work_func+0x78/0xa4
>process_one_work+0x24c/0x3dc
>worker_thread+0x280/0x360
>kthread+0x134/0x13c
>ret_from_fork+0x10/0x18
>   Code: d284 91000725 6b17039f 5400048a (f9401f40)
>   ---[ end trace f22dda57f3648e2c ]---
>   Kernel panic - not syncing: Fatal exception
>   SMP: stopping secondary CPUs
>   Kernel Offset: disabled
>   CPU features: 0x0,22802a18
>   Memory Limit: none
>
> The problem is that when drm/msm does its own iommu_attach_device(),
> now the domain returned by iommu_get_domain_for_dev() is drm/msm's
> domain, and it doesn't have domain->iova_cookie.
>
> We kind of avoided this problem prior to sdm845/dpu because the iommu
> was attached to the mdp node in dt, which is a child of the toplevel
> mdss node (which corresponds to the dev passed in dma_map_sg()).  But
> with sdm845, now the iommu is attached at the mdss level so we hit the
> iommu_dma_ops in dma_map_sg().
>
> But auto allocating/attaching a domain before the driver is probed was
> already a blocking problem for enabling per-context pagetables for the
> GPU.  This problem is also now solved with this patch.
>
> Fixes: 97890ba9289c ("dma-mapping: detect and configure IOMMU in of_dma_configure")
> Tested-by: Douglas Anderson 
> Signed-off-by: Rob Clark 
> ---
> This is an alternative/replacement for [1].  What it lacks in elegance
> it makes up for in practicality ;-)
>
> [1] https://patchwork.freedesktop.org/patch/264930/
>
>  drivers/of/device.c | 22 ++
>  1 file changed, 22 insertions(+)
>
> diff --git a/drivers/of/device.c b/drivers/of/device.c
> index 5957cd4fa262..15ffee00fb22 100644
> --- a/drivers/of/device.c
> +++ b/drivers/of/device.c
> @@ -72,6 +72,14 @@ int of_device_add(struct platform_device *ofdev)
> return device_add(&ofdev->dev);
>  }
>
> +static const struct of_device_id iommu_blacklist[] = {
> +   { .compatible = "qcom,mdp4" },
> +   { .compatible = "qcom,mdss" },
> +   { .compatible = "qcom,sdm845-mdss" },
> +   { .compatible = "qcom,adreno" },
> +   {}
> +};
> +
>  /**
>   * of_dma_configure - Setup DMA configuration
>   * @dev:   Device to apply DMA configuration
> @@ -164,6 +172,20 @@ int of_dma_configure(struct device *dev, struct device_node *np, bool force_dma)
> dev_dbg(dev, "device is%sbehind an iommu\n",
> iommu ? " " : " not ");
>
> +   /*
> +* There is at least one case where the driver wants to directly
> +* manage the IOMMU, but if we end up with iommu dma_ops, that
> +* interferes with the driver's ability to use dma_map_sg() for
> +* cache operations.  Since we don't currently have a better
> +* solution, and this code runs before the