Re: [PATCH 02/25] dma-fence: prime lockdep annotations

2020-07-14 Thread Jason Gunthorpe
On Fri, Jul 10, 2020 at 03:01:10PM +0200, Christian König wrote:
> Am 10.07.20 um 14:54 schrieb Jason Gunthorpe:
> > On Fri, Jul 10, 2020 at 02:48:16PM +0200, Christian König wrote:
> > > Am 10.07.20 um 14:43 schrieb Jason Gunthorpe:
> > > > On Thu, Jul 09, 2020 at 10:09:11AM +0200, Daniel Vetter wrote:
> > > > > Hi Jason,
> > > > > 
> > > > > Below the paragraph I've added after our discussions around dma-fences
> > > > > outside of drivers/gpu. Good enough for an ack on this, or want something
> > > > > changed?
> > > > > 
> > > > > Thanks, Daniel
> > > > > 
> > > > > > + * Note that only GPU drivers have a reasonable excuse for both requiring
> > > > > > + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
> > > > > > + * track asynchronous compute work using &dma_fence. No driver outside of
> > > > > > + * drivers/gpu should ever call dma_fence_wait() in such contexts.
> > > > I was hoping we'd get to 'no driver outside GPU should even use
> > > > dma_fence()'
> > > My last status was that V4L could come use dma_fences as well.
> > I'm sure lots of places *could* use it, but I think I understood that
> > it is a bad idea unless you have to fit into the DRM uAPI?
> 
> It would be a bit questionable if you use the container objects we came up
> with in the DRM subsystem outside of it.
> 
> But using the dma_fence itself makes sense for everything which could do
> async DMA in general.

dma_fence only possibly makes some sense if you intend to expose the
completion outside a single driver. 

The preferred kernel design pattern for this is to connect things with
a function callback.

So the actual use case of dma_fence is quite narrow and tightly linked
to DRM.

I don't think we should spread this beyond DRM, I can't see a reason.
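
The function-callback pattern being contrasted here is, roughly, the
following (a minimal sketch; every name in it is hypothetical, not an
existing kernel API):

/* Completion stays inside one driver, connected by a plain callback. */
struct my_request {
	void (*done)(struct my_request *req, int status);
	void *owner;
};

static void my_hw_irq_complete(struct my_request *req, int status)
{
	/* Only this driver can appear in the chain, so the locking
	 * around the callback can actually be audited. */
	req->done(req, status);
}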

> > You are better to do something contained in the single driver where
> > locking can be analyzed.
> > 
> > > I'm not 100% sure, but wouldn't MMU notifier + dma_fence be a valid use case
> > > for things like custom FPGA interfaces as well?
> > I don't think we should expand the list of drivers that use this
> > technique.
> > 
> > Drivers that can't suspend should pin memory, not use blocking
> > notifiers to create pinned memory.
> 
> Agreed totally, it's a complete pain to maintain even for the GPU drivers.
> 
> Unfortunately that doesn't stop users from requesting it. So I'm pretty
> sure we are going to see more of this.

Kernel maintainers need to say no.

The proper way to do DMA on no-faulting hardware is page pinning.

Trying to improve performance of limited HW by using sketchy
techniques at the cost of general system stability should be a NAK.
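
To make the pinning approach concrete, a minimal sketch for a
hypothetical non-faulting device (my_dev_pin_buffer is made up;
pin_user_pages_fast() and unpin_user_pages() are the real interfaces):

#include <linux/mm.h>

static int my_dev_pin_buffer(unsigned long uaddr, int npages,
			     struct page **pages)
{
	/* Pin for the lifetime of the mapping; FOLL_LONGTERM tells the
	 * core MM that these pages stay pinned indefinitely. */
	int pinned = pin_user_pages_fast(uaddr, npages,
					 FOLL_WRITE | FOLL_LONGTERM, pages);

	if (pinned < 0)
		return pinned;
	if (pinned != npages) {
		/* Partial pin: release what we got and fail cleanly. */
		unpin_user_pages(pages, pinned);
		return -EFAULT;
	}
	/* ... dma_map the pages and program the device ... */
	return 0;
}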

Jason


Re: [PATCH 02/25] dma-fence: prime lockdep annotations

2020-07-14 Thread Jason Gunthorpe
On Fri, Jul 10, 2020 at 04:02:35PM +0200, Daniel Vetter wrote:

> > dma_fence only possibly makes some sense if you intend to expose the
> > completion outside a single driver.
> >
> > The preferred kernel design pattern for this is to connect things with
> > a function callback.
> >
> > So the actual use case of dma_fence is quite narrow and tightly linked
> > to DRM.
> >
> > I don't think we should spread this beyond DRM, I can't see a reason.
> 
> Yeah v4l has a legit reason to use dma_fence, android wants that
> there. 

'legit' in the sense that v4l is supposed to trigger stuff in DRM when
V4L DMA completes? I would still see that as part of DRM.

Or is it building a parallel DRM like DMA completion graph?

> > Trying to improve performance of limited HW by using sketchy
> > techniques at the cost of general system stability should be a NAK.
>
> Well that's pretty much gpu drivers, all the horrors for a bit more speed :-)
> 
> On the text itself, should I upgrade to "must not" instead of "should
> not"? Or more needed?

Fundamentally having some unknowable graph of dependencies where parts
of the graph can be placed in critical regions like notifiers is a
complete maintenance nightmare.
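
A sketch of the kind of collision this graph makes possible, with two
hypothetical drivers (a_fence() and b_fence() stand in for however the
fence ends up shared):

/* Driver A: waits on a cross-driver fence inside an mmu notifier,
 * blocking the reclaim path that invoked it. */
static bool a_invalidate(struct mmu_interval_notifier *mni,
			 const struct mmu_notifier_range *range,
			 unsigned long cur_seq)
{
	dma_fence_wait(a_fence(mni), false);
	return true;
}

/* Driver B: the signalling path allocates with GFP_KERNEL. That
 * allocation can enter reclaim, reclaim can run A's notifier above,
 * and A is waiting on the very fence B has not signalled yet. */
static void b_complete(struct work_struct *work)
{
	void *tmp = kmalloc(64, GFP_KERNEL);

	dma_fence_signal(b_fence(work));
	kfree(tmp);
}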

I think building systems like this should be discouraged :\

Jason


Re: [PATCH 02/25] dma-fence: prime lockdep annotations

2020-07-14 Thread Jason Gunthorpe
On Thu, Jul 09, 2020 at 10:09:11AM +0200, Daniel Vetter wrote:
> Hi Jason,
> 
> Below the paragraph I've added after our discussions around dma-fences
> outside of drivers/gpu. Good enough for an ack on this, or want something
> changed?
> 
> Thanks, Daniel
> 
> > + * Note that only GPU drivers have a reasonable excuse for both requiring
> > + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
> > + * track asynchronous compute work using &dma_fence. No driver outside of
> > + * drivers/gpu should ever call dma_fence_wait() in such contexts.

I was hoping we'd get to 'no driver outside GPU should even use
dma_fence()'

Is that not reasonable?

With your annotations, once anything uses dma_fence it has to assume
the worst cases, right?

Jason


Re: [PATCH 02/25] dma-fence: prime lockdep annotations

2020-07-14 Thread Jason Gunthorpe
On Fri, Jul 10, 2020 at 02:48:16PM +0200, Christian König wrote:
> Am 10.07.20 um 14:43 schrieb Jason Gunthorpe:
> > On Thu, Jul 09, 2020 at 10:09:11AM +0200, Daniel Vetter wrote:
> > > Hi Jason,
> > > 
> > > Below the paragraph I've added after our discussions around dma-fences
> > > outside of drivers/gpu. Good enough for an ack on this, or want something
> > > changed?
> > > 
> > > Thanks, Daniel
> > > 
> > > > + * Note that only GPU drivers have a reasonable excuse for both requiring
> > > > + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
> > > > + * track asynchronous compute work using &dma_fence. No driver outside of
> > > > + * drivers/gpu should ever call dma_fence_wait() in such contexts.
> > I was hoping we'd get to 'no driver outside GPU should even use
> > dma_fence()'
> 
> My last status was that V4L could come use dma_fences as well.

I'm sure lots of places *could* use it, but I think I understood that
it is a bad idea unless you have to fit into the DRM uAPI?

You are better to do something contained in the single driver where
locking can be analyzed.

> I'm not 100% sure, but wouldn't MMU notifier + dma_fence be a valid use case
> for things like custom FPGA interfaces as well?

I don't think we should expand the list of drivers that use this
technique. 

Drivers that can't suspend should pin memory, not use blocking
notifiers to create pinned memory.

Jason


Re: [PATCH 02/25] dma-fence: prime lockdep annotations

2020-07-10 Thread Daniel Vetter
On Fri, Jul 10, 2020 at 4:23 PM Jason Gunthorpe  wrote:
>
> On Fri, Jul 10, 2020 at 04:02:35PM +0200, Daniel Vetter wrote:
>
> > > dma_fence only possibly makes some sense if you intend to expose the
> > > completion outside a single driver.
> > >
> > > The preferred kernel design pattern for this is to connect things with
> > > a function callback.
> > >
> > > So the actual use case of dma_fence is quite narrow and tightly linked
> > > to DRM.
> > >
> > > I don't think we should spread this beyond DRM, I can't see a reason.
> >
> > Yeah v4l has a legit reason to use dma_fence, android wants that
> > there.
>
> 'legit' in the sense that v4l is supposed to trigger stuff in DRM when
> V4L DMA completes? I would still see that as part of DRM.

Yes, and also the other way around. But thus far it hasn't landed.
-Daniel

> Or is it building a parallel DRM like DMA completion graph?
>
> > > Trying to improve performance of limited HW by using sketchy
> > > techniques at the cost of general system stability should be a NAK.
> >
> > Well that's pretty much gpu drivers, all the horrors for a bit more speed :-)
> >
> > On the text itself, should I upgrade to "must not" instead of "should
> > not"? Or more needed?
>
> Fundamentally having some unknowable graph of dependencies where parts
> of the graph can be placed in critical regions like notifiers is a
> complete maintenance nightmare.
>
> I think building systems like this should be discouraged :\

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH 02/25] dma-fence: prime lockdep annotations

2020-07-10 Thread Daniel Vetter
On Fri, Jul 10, 2020 at 3:48 PM Jason Gunthorpe  wrote:
>
> On Fri, Jul 10, 2020 at 03:01:10PM +0200, Christian König wrote:
> > Am 10.07.20 um 14:54 schrieb Jason Gunthorpe:
> > > On Fri, Jul 10, 2020 at 02:48:16PM +0200, Christian König wrote:
> > > > Am 10.07.20 um 14:43 schrieb Jason Gunthorpe:
> > > > > On Thu, Jul 09, 2020 at 10:09:11AM +0200, Daniel Vetter wrote:
> > > > > > Hi Jason,
> > > > > >
> > > > > > Below the paragraph I've added after our discussions around dma-fences
> > > > > > outside of drivers/gpu. Good enough for an ack on this, or want something
> > > > > > changed?
> > > > > >
> > > > > > Thanks, Daniel
> > > > > >
> > > > > > > + * Note that only GPU drivers have a reasonable excuse for both requiring
> > > > > > > + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
> > > > > > > + * track asynchronous compute work using &dma_fence. No driver outside of
> > > > > > > + * drivers/gpu should ever call dma_fence_wait() in such contexts.
> > > > > I was hoping we'd get to 'no driver outside GPU should even use
> > > > > dma_fence()'
> > > > My last status was that V4L could come use dma_fences as well.
> > > I'm sure lots of places *could* use it, but I think I understood that
> > > it is a bad idea unless you have to fit into the DRM uAPI?
> >
> > It would be a bit questionable if you use the container objects we came up
> > with in the DRM subsystem outside of it.
> >
> > But using the dma_fence itself makes sense for everything which could do
> > async DMA in general.
>
> dma_fence only possibly makes some sense if you intend to expose the
> completion outside a single driver.
>
> The preferred kernel design pattern for this is to connect things with
> a function callback.
>
> So the actual use case of dma_fence is quite narrow and tightly linked
> to DRM.
>
> I don't think we should spread this beyond DRM, I can't see a reason.

Yeah v4l has a legit reason to use dma_fence, android wants that
there. There have even been patches proposed years ago, but they never
landed because android is using some vendor hack horror show for camera
drivers right now.

But there is an effort going on to fix that (under the libcamera
heading), and I expect that once we have that, it'll want dma_fence
support. So outright excluding everyone from dma_fence is a bit too
much. They definitely shouldn't be used though for entirely
independent stuff.

> > > You are better to do something contained in the single driver where
> > > locking can be analyzed.
> > >
> > > > I'm not 100% sure, but wouldn't MMU notifier + dma_fence be a valid use case
> > > > for things like custom FPGA interfaces as well?
> > > I don't think we should expand the list of drivers that use this
> > > technique.
> > > 
> > > Drivers that can't suspend should pin memory, not use blocking
> > > notifiers to create pinned memory.
> >
> > Agreed totally, it's a complete pain to maintain even for the GPU drivers.
> >
> > Unfortunately that doesn't stop users from requesting it. So I'm pretty
> > sure we are going to see more of this.
>
> Kernel maintainers need to say no.
>
> The proper way to do DMA on no-faulting hardware is page pinning.
>
> Trying to improve performance of limited HW by using sketchy
> techniques at the cost of general system stability should be a NAK.

Well that's pretty much gpu drivers, all the horrors for a bit more speed :-)

On the text itself, should I upgrade to "must not" instead of "should
not"? Or more needed?
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH 02/25] dma-fence: prime lockdep annotations

2020-07-10 Thread Christian König

Am 10.07.20 um 14:54 schrieb Jason Gunthorpe:
> On Fri, Jul 10, 2020 at 02:48:16PM +0200, Christian König wrote:
> > Am 10.07.20 um 14:43 schrieb Jason Gunthorpe:
> > > On Thu, Jul 09, 2020 at 10:09:11AM +0200, Daniel Vetter wrote:
> > > > Hi Jason,
> > > > 
> > > > Below the paragraph I've added after our discussions around dma-fences
> > > > outside of drivers/gpu. Good enough for an ack on this, or want something
> > > > changed?
> > > > 
> > > > Thanks, Daniel
> > > > 
> > > > > + * Note that only GPU drivers have a reasonable excuse for both requiring
> > > > > + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
> > > > > + * track asynchronous compute work using &dma_fence. No driver outside of
> > > > > + * drivers/gpu should ever call dma_fence_wait() in such contexts.
> > > I was hoping we'd get to 'no driver outside GPU should even use
> > > dma_fence()'
> > My last status was that V4L could come use dma_fences as well.
> I'm sure lots of places *could* use it, but I think I understood that
> it is a bad idea unless you have to fit into the DRM uAPI?

It would be a bit questionable if you use the container objects we came
up with in the DRM subsystem outside of it.

But using the dma_fence itself makes sense for everything which could do
async DMA in general.

> You are better to do something contained in the single driver where
> locking can be analyzed.
> 
> > I'm not 100% sure, but wouldn't MMU notifier + dma_fence be a valid use case
> > for things like custom FPGA interfaces as well?
> I don't think we should expand the list of drivers that use this
> technique.
> 
> Drivers that can't suspend should pin memory, not use blocking
> notifiers to create pinned memory.

Agreed totally, it's a complete pain to maintain even for the GPU drivers.

Unfortunately that doesn't stop users from requesting it. So I'm
pretty sure we are going to see more of this.

Christian.

> Jason




Re: [PATCH 02/25] dma-fence: prime lockdep annotations

2020-07-10 Thread Christian König

Am 10.07.20 um 14:43 schrieb Jason Gunthorpe:
> On Thu, Jul 09, 2020 at 10:09:11AM +0200, Daniel Vetter wrote:
> > Hi Jason,
> > 
> > Below the paragraph I've added after our discussions around dma-fences
> > outside of drivers/gpu. Good enough for an ack on this, or want something
> > changed?
> > 
> > Thanks, Daniel
> > 
> > > + * Note that only GPU drivers have a reasonable excuse for both requiring
> > > + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
> > > + * track asynchronous compute work using &dma_fence. No driver outside of
> > > + * drivers/gpu should ever call dma_fence_wait() in such contexts.
> I was hoping we'd get to 'no driver outside GPU should even use
> dma_fence()'

My last status was that V4L could come use dma_fences as well.

I'm not 100% sure, but wouldn't MMU notifier + dma_fence be a valid use
case for things like custom FPGA interfaces as well?

> Is that not reasonable?
> 
> With your annotations, once anything uses dma_fence it has to assume
> the worst cases, right?

Well a defensive approach is usually the best idea, yes.

Christian.

> Jason




Re: [PATCH 02/25] dma-fence: prime lockdep annotations

2020-07-09 Thread Daniel Vetter
Hi Jason,

Below the paragraph I've added after our discussions around dma-fences
outside of drivers/gpu. Good enough for an ack on this, or want something
changed?

Thanks, Daniel

> + * Note that only GPU drivers have a reasonable excuse for both requiring
> + * &mmu_interval_notifier and &shrinker callbacks at the same time as having to
> + * track asynchronous compute work using &dma_fence. No driver outside of
> + * drivers/gpu should ever call dma_fence_wait() in such contexts.
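
For context, patch 01 of this series added dma_fence_begin_signalling()
and dma_fence_end_signalling(), which mark the fence-signalling critical
sections this paragraph constrains. A hedged sketch of their use (the job
structure and the hardware-submission call are hypothetical):

struct my_job {
	struct dma_fence *done_fence;
};

static void my_job_run(struct my_job *job)
{
	/* Everything between begin and end must be able to finish
	 * without taking dma_resv_lock(), allocating with GFP_KERNEL
	 * or faulting in userspace memory, or lockdep complains. */
	bool cookie = dma_fence_begin_signalling();

	my_hw_submit(job);
	dma_fence_signal(job->done_fence);

	dma_fence_end_signalling(cookie);
}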


On Tue, Jul 07, 2020 at 10:12:06PM +0200, Daniel Vetter wrote:
> Two in one go:
> - it is allowed to call dma_fence_wait() while holding a
>   dma_resv_lock(). This is fundamental to how eviction works with ttm,
>   so required.
> 
> - it is allowed to call dma_fence_wait() from memory reclaim contexts,
>   specifically from shrinker callbacks (which i915 does), and from mmu
>   notifier callbacks (which amdgpu does, and which i915 sometimes also
>   does, and probably always should, but that's kinda a debate). Also
>   for stuff like HMM we really need to be able to do this, or things
>   get real dicey.
> 
> Consequence is that any critical path necessary to get to a
> dma_fence_signal for a fence must never a) call dma_resv_lock nor b)
> allocate memory with GFP_KERNEL. Also by implication of
> dma_resv_lock(), no userspace faulting allowed. That's some supremely
> obnoxious limitations, which is why we need to sprinkle the right
> annotations to all relevant paths.
> 
> The one big locking context we're leaving out here is mmu notifiers,
> added in
> 
> commit 23b68395c7c78a764e8963fc15a7cfd318bf187f
> Author: Daniel Vetter 
> Date:   Mon Aug 26 22:14:21 2019 +0200
> 
> mm/mmu_notifiers: add a lockdep map for invalidate_range_start/end
> 
> that one covers a lot of other callsites, and it's also allowed to
> wait on dma-fences from mmu notifiers. But there's no ready-made
> functions exposed to prime this, so I've left it out for now.
> 
> v2: Also track against mmu notifier context.
> 
> v3: kerneldoc to spec the cross-driver contract. Note that currently
> i915 throws in a hard-coded 10s timeout on foreign fences (not sure
> why that was done, but it's there), which is why that rule is worded
> with SHOULD instead of MUST.
> 
> Also some of the mmu_notifier/shrinker rules might surprise SoC
> drivers, I haven't fully audited them all. Which is infeasible anyway,
> we'll need to run them with lockdep and dma-fence annotations and see
> what goes boom.
> 
> v4: A spelling fix from Mika
> 
> v5: #ifdef for CONFIG_MMU_NOTIFIER. Reported by 0day. Unfortunately
> this means lockdep enforcement is slightly inconsistent, it won't spot
> GFP_NOIO and GFP_NOFS allocations in the wrong spot if
> CONFIG_MMU_NOTIFIER is disabled in the kernel config. Oh well.
> 
> v5: Note that only drivers/gpu has a reasonable (or at least
> historical) excuse to use dma_fence_wait() from shrinker and mmu
> notifier callbacks. Everyone else should either have a better memory
> manager model, or better hardware. This reflects discussions with
> Jason Gunthorpe.
> 
> Cc: Jason Gunthorpe 
> Cc: Felix Kuehling 
> Cc: kernel test robot 
> Reviewed-by: Thomas Hellström  (v4)
> Cc: Mika Kuoppala 
> Cc: Thomas Hellstrom 
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> Cc: linux-r...@vger.kernel.org
> Cc: amd-...@lists.freedesktop.org
> Cc: intel-...@lists.freedesktop.org
> Cc: Chris Wilson 
> Cc: Maarten Lankhorst 
> Cc: Christian König 
> Signed-off-by: Daniel Vetter 
> ---
>  Documentation/driver-api/dma-buf.rst |  6 ++++++
>  drivers/dma-buf/dma-fence.c          | 46 ++++++++++++++++++++++++++++++++++++++++++++++++
>  drivers/dma-buf/dma-resv.c           |  8 ++++++++
>  include/linux/dma-fence.h            |  1 +
>  4 files changed, 61 insertions(+)
> 
> diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
> index 05d856131140..f8f6decde359 100644
> --- a/Documentation/driver-api/dma-buf.rst
> +++ b/Documentation/driver-api/dma-buf.rst
> @@ -133,6 +133,12 @@ DMA Fences
>  .. kernel-doc:: drivers/dma-buf/dma-fence.c
>     :doc: DMA fences overview
>  
> +DMA Fence Cross-Driver Contract
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +.. kernel-doc:: drivers/dma-buf/dma-fence.c
> +   :doc: fence cross-driver contract
> +
>  DMA Fence Signalling Annotations
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>  
> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
> index 0005bc002529..af1d8ea926b3 100644
> --- a/drivers/dma-buf/dma-fence.c
> +++ b/drivers/dma-buf/dma-fence.c
> @@ -64,6 +64,52 @@ static atomic64_t dma_fence_context_counter = ATOMIC64_INIT(1);
>   *   &dma_buf.resv pointer.
>   */
>  
> +/**
> + * DOC: fence cross-driver contract
> + *
> + * Since &dma_fence provide a cross driver contract, all drivers must follow the
> + * same rules:
> + *
> + * * Fences must complete in a reasonable time. Fences which represent kernels
> + *   and shaders submitted by userspace, which could run forever, must be backed
> +
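
The "prime" in the subject line refers to teaching lockdep these
cross-context dependencies once at init time, rather than waiting for a
real collision at runtime. A simplified sketch of the mechanism (not the
patch's literal code; it reuses the annotations from patch 01):

static int __init prime_fence_annotations(void)
{
	bool cookie;

	/* Enter a simulated reclaim context, then take the fence
	 * annotation inside it. Lockdep records "reclaim may wait on
	 * fences", so a GFP_KERNEL allocation inside any fence
	 * signalling critical section is flagged from then on, even
	 * before such a path ever runs for real. */
	fs_reclaim_acquire(GFP_KERNEL);
	cookie = dma_fence_begin_signalling();
	dma_fence_end_signalling(cookie);
	fs_reclaim_release(GFP_KERNEL);

	return 0;
}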