Re: [Mesa-dev] [PATCH 01/11] drm/amdgpu: Comply with implicit fencing rules

2021-05-21 Thread Daniel Vetter
On Fri, May 21, 2021 at 8:08 PM Christian König
 wrote:
>
> > On 21.05.21 at 17:16, Daniel Vetter wrote:
> > On Fri, May 21, 2021 at 05:00:46PM +0200, Bas Nieuwenhuizen wrote:
> >> On Fri, May 21, 2021 at 4:37 PM Daniel Vetter  wrote:
> >>> On Fri, May 21, 2021 at 11:46:23AM +0200, Bas Nieuwenhuizen wrote:
>  On Fri, May 21, 2021 at 11:10 AM Daniel Vetter  
>  wrote:
> > ---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 4 ++--
> >   1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > index 88a24a0b5691..cc8426e1e8a8 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > @@ -617,8 +617,8 @@ static int amdgpu_cs_parser_bos(struct 
> > amdgpu_cs_parser *p,
> >  amdgpu_bo_list_for_each_entry(e, p->bo_list) {
> >  struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
> >
> > -   /* Make sure we use the exclusive slot for shared BOs */
> > -   if (bo->prime_shared_count)
> > +   /* Make sure we use the exclusive slot for all 
> > potentially shared BOs */
> > +   if (!(bo->flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID))
> >  e->tv.num_shared = 0;
>  I think it also makes sense to skip this with
>  AMDGPU_GEM_CREATE_EXPLICIT_SYNC? It can be shared but I don't think
>  anyone expects implicit sync to happen with those.
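As a purely illustrative sketch of the combined check suggested here (not code
from the series), the condition in amdgpu_cs_parser_bos() would then cover both
flags:

    /* only potentially shared, implicitly synced BOs need the exclusive slot */
    if (!(bo->flags & (AMDGPU_GEM_CREATE_VM_ALWAYS_VALID |
                       AMDGPU_GEM_CREATE_EXPLICIT_SYNC)))
            e->tv.num_shared = 0;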
> >>> Ah yes, I missed this entirely. So the "no implicit flag" is already
> >>> there, and the _only_ thing that's missing really is a way to fish out the
> >>> implicit fences, and set them.
> >>>
> >>> https://lore.kernel.org/dri-devel/20210520190007.534046-1-ja...@jlekstrand.net/
> >>>
> >>> So I think all that's really needed in radv is not setting
> >>> RADEON_FLAG_IMPLICIT_SYNC for winsys buffers when Jason's dma-buf ioctl
> >>> are present (means you need to do some import/export and keep the fd
> >>> around for winsys buffers, but shouldn't be too bad), and then control the
> >>> implicit fences entirely explicitly like vk expects.
> >> That is the part I'm less sure about. This is a BO wide flag so we are
> >> also disabling implicit sync in the compositor. If the compositor only
> >> does read stuff that is ok, as the inserted exclusive fence will
> >> work for that. But as I learned recently the app provided buffer may
> >> end up being written to by the X server, which opens a whole can of
> >> potential problems if implicit sync gets disabled between Xserver
> >> operations on the app provided buffer. Hence setting that on the WSI
> >> buffer is a whole new can of potential problems, and hence I've said a
> >> submission-based flag would be preferred.
> >>
> >> I can certainly try it out though.
> > Hm yeah that's the wrong flag. We need a flag on the drm_file which the
> > explicit userspace sets. And which is valid only for itself.
> >
> > There's a nice flags field when creating a ctx, but it's not validated and
> > there's already a comment that we have to filter out garbage priority, so
> > that's no use. I'll whip up something entirely untested just as a draft.
>
> We could provide an IOCTL for the BO to change the flag.

That's not the semantics we need.

> But could we first figure out the semantics we want to use here?
>
> Cause I'm pretty sure we don't actually need those changes at all and as
> said before I'm certainly NAKing things which break existing use cases.

Please read how other drivers do this and at least _try_ to understand
it. I'm really losing my patience here with you NAKing patches you're
not even understanding (or did you actually read and fully understand
the entire story I typed up here, and your NAK is on the entire
thing?). There's not much useful conversation to be had with that
approach. And with drivers I mean kernel + userspace here.

That's the other frustrating part: You're trying to fix this purely in
the kernel. This is exactly one of these issues why we require open
source userspace, so that we can fix the issues correctly across the
entire stack. And meanwhile you're steadfastly refusing to even look
at the userspace side of the picture.

Also I thought through your tlb issue: why are you even putting these
tlb flush fences into the shared dma_resv slots? If you store them
somewhere else in the amdgpu private part, the oversync issue goes
away:
- in your ttm bo move callback, you can just make your bo copy job
depend on them too (you have to anyway)
- even for p2p there's not an issue here, because you have the
->move_notify callback, and can then lift the tlb flush fences from
your private place to the shared slots so the exporter can see them.

The kernel move fences otoh are a bit more nasty to wring through the
p2p dma-buf interface. That one probably needs something new.
-Daniel
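A rough illustrative sketch of the private-fence idea described above; the
struct and helper names here are hypothetical, only the dma_resv calls are real
(2021-era API), and error handling is omitted:

    struct amdgpu_tlb_fence_slot {                  /* hypothetical container */
            struct dma_fence *tlb_flush_fence;      /* kept out of dma_resv */
    };

    /* ttm bo move path: make the copy job depend on the private fence too */
    if (tlb->tlb_flush_fence)
            add_move_job_dependency(job, tlb->tlb_flush_fence); /* hypothetical */

    /* ->move_notify for p2p: lift the fence into the shared slots so the
     * importer can see it */
    if (tlb->tlb_flush_fence) {
            dma_resv_reserve_shared(bo->tbo.base.resv, 1);
            dma_resv_add_shared_fence(bo->tbo.base.resv, tlb->tlb_flush_fence);
    }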

>
> 

Re: [Mesa-dev] [PATCH 01/11] drm/amdgpu: Comply with implicit fencing rules

2021-05-21 Thread Christian König

On 21.05.21 at 17:16, Daniel Vetter wrote:

On Fri, May 21, 2021 at 05:00:46PM +0200, Bas Nieuwenhuizen wrote:

On Fri, May 21, 2021 at 4:37 PM Daniel Vetter  wrote:

On Fri, May 21, 2021 at 11:46:23AM +0200, Bas Nieuwenhuizen wrote:

On Fri, May 21, 2021 at 11:10 AM Daniel Vetter  wrote:

---
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 88a24a0b5691..cc8426e1e8a8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -617,8 +617,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 amdgpu_bo_list_for_each_entry(e, p->bo_list) {
 struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);

-   /* Make sure we use the exclusive slot for shared BOs */
-   if (bo->prime_shared_count)
+   /* Make sure we use the exclusive slot for all potentially 
shared BOs */
+   if (!(bo->flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID))
 e->tv.num_shared = 0;

I think it also makes sense to skip this with
AMDGPU_GEM_CREATE_EXPLICIT_SYNC? It can be shared but I don't think
anyone expects implicit sync to happen with those.

Ah yes, I missed this entirely. So the "no implicit flag" is already
there, and the _only_ thing that's missing really is a way to fish out the
implicit fences, and set them.

https://lore.kernel.org/dri-devel/20210520190007.534046-1-ja...@jlekstrand.net/

So I think all that's really needed in radv is not setting
RADEON_FLAG_IMPLICIT_SYNC for winsys buffers when Jason's dma-buf ioctl
are present (means you need to do some import/export and keep the fd
around for winsys buffers, but shouldn't be too bad), and then control the
implicit fences entirely explicitly like vk expects.

That is the part I'm less sure about. This is a BO wide flag so we are
also disabling implicit sync in the compositor. If the compositor only
does read stuff that is ok, as the inserted exclusive fence will
work for that. But as I learned recently the app provided buffer may
end up being written to by the X server, which opens a whole can of
potential problems if implicit sync gets disabled between Xserver
operations on the app provided buffer. Hence setting that on the WSI
buffer is a whole new can of potential problems, and hence I've said a
submission-based flag would be preferred.

I can certainly try it out though.

Hm yeah that's the wrong flag. We need a flag on the drm_file which the
explicit userspace sets. And which is valid only for itself.

There's a nice flags field when creating a ctx, but it's not validated and
there's already a comment that we have to filter out garbage priority, so
that's no use. I'll whip up something entirely untested just as a draft.


We could provide an IOCTL for the BO to change the flag.

But could we first figure out the semantics we want to use here?

Cause I'm pretty sure we don't actually need those changes at all and as 
said before I'm certainly NAKing things which break existing use cases.


Regards,
Christian.


-Daniel




Are you bored enough to type this up for radv? I'll give Jason's kernel
stuff another review meanwhile.
-Daniel


 e->bo_va = amdgpu_vm_bo_find(vm, bo);
 }
--
2.31.0


--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch




Re: [Mesa-dev] [Intel-gfx] [RFC 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan

2021-05-21 Thread Matthew Brost
On Fri, May 21, 2021 at 01:00:54PM +0100, Tvrtko Ursulin wrote:
> 
> On 19/05/2021 00:58, Matthew Brost wrote:
> > Add entry for i915 new parallel submission uAPI plan.
> > 
> > v2:
> >   (Daniel Vetter):
> >- Expand logical order explanation
> >- Add dummy header
> >- Only allow N BBs in execbuf IOCTL
> >- Configure parallel submission per slot not per gem context
> > 
> > Cc: Tvrtko Ursulin 
> > Cc: Tony Ye 
> > CC: Carl Zhang 
> > Cc: Daniel Vetter 
> > Cc: Jason Ekstrand 
> > Signed-off-by: Matthew Brost 
> > ---
> >   Documentation/gpu/rfc/i915_parallel_execbuf.h | 144 ++
> >   Documentation/gpu/rfc/i915_scheduler.rst  |  53 ++-
> >   2 files changed, 196 insertions(+), 1 deletion(-)
> >   create mode 100644 Documentation/gpu/rfc/i915_parallel_execbuf.h
> > 
> > diff --git a/Documentation/gpu/rfc/i915_parallel_execbuf.h 
> > b/Documentation/gpu/rfc/i915_parallel_execbuf.h
> > new file mode 100644
> > index ..8c64b983ccad
> > --- /dev/null
> > +++ b/Documentation/gpu/rfc/i915_parallel_execbuf.h
> > @@ -0,0 +1,144 @@
> > +#define I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT 2 /* see 
> > i915_context_engines_parallel_submit */
> > +
> > +/*
> > + * i915_context_engines_parallel_submit:
> > + *
> > + * Setup a slot to allow multiple BBs to be submitted in a single execbuf IOCTL.
> > + * Those BBs will then be scheduled to run on the GPU in parallel. Multiple
> > + * hardware contexts are created internally in the i915 to run these BBs. Once a
> > + * slot is configured for N BBs only N BBs can be submitted in each execbuf
> > + * IOCTL and this is implicit behavior (e.g. the user doesn't tell the execbuf
> > + * IOCTL there are N BBs, the execbuf IOCTL knows how many BBs there are based on
> > + * the slot's configuration).
> 
> 1)
> Expand the term slot here with "slot in the context engine map" at least
> once for clarity.
> 

Sure.

> 2)
> About where execbuf will implicitly be finding batches - suggest to also
> cover first/last flag here. I know you have it in the readme but I think it
> is good if uapi header is as self-contained as possible.
>

Yep, good idea.

> > + *
> > + * There are two currently defined ways to control the placement of the
> > + * hardware contexts on physical engines: default behavior (no flags) and
> > + * I915_PARALLEL_IMPLICT_BONDS (a flag). More flags may be added in the
> > + * future as new hardware / use cases arise. Details of how to use this
> > + * interface are documented below, above the flags.
> > + *
> > + * Returns -EINVAL if the hardware context placement configuration is invalid
> > + * or if the placement configuration isn't supported on the platform /
> > + * submission interface.
> > + * Returns -ENODEV if the extension isn't supported on the platform /
> > + * submission interface.
> > + */
> > +struct i915_context_engines_parallel_submit {
> > +   struct i915_user_extension base;
> > +
> > +   __u16 engine_index; /* slot for parallel engine */
> > +   __u16 width;/* number of contexts per parallel engine */
> > +   __u16 num_siblings; /* number of siblings per context */
> > +   __u16 mbz16;
> > +/*
> > + * Default placement behavior (currently unsupported):
> > + *
> > + * Rather than restricting parallel submission to a single class with a
> > + * logically contiguous placement (I915_PARALLEL_IMPLICT_BONDS), add a 
> > mode that
> 
> What do you mean with logically contiguous here? It sounds ambiguous versus
> logical vs "normal" engine instance numbers.
>

This is a little backwards. I think when I wrote this originally the implicit
placement comments were first. I can reword this.
 
> > + * enables parallel submission across multiple engine classes. In this case each
> > + * context's logical engine mask indicates where that context can be placed. It is
> > + * implied in this mode that all contexts have mutually exclusive placement (e.g.
> > + * if one context is running on CS0 no other contexts can run on CS0).
> 
> I think talk about logical context and its mask is too implementation detail
> at the uapi level. Instead I would suggest more userspace programmer centric
> description.

Ok, can you give me suggestion? Writing DOC isn't really my strength.

> 
> > + *
> > + * Example 1 pseudo code:
> > + * CSX[Y] = engine class X, logical instance Y
> > + * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
> > + * set_engines(INVALID)
> > + * set_parallel(engine_index=0, width=2, num_siblings=2,
> > + * engines=CS0[0],CS0[1],CS1[0],CS1[1])
> > + *
> > + * Results in the following valid placements:
> > + * CS0[0], CS1[0]
> > + * CS0[0], CS1[1]
> > + * CS0[1], CS1[0]
> > + * CS0[1], CS1[1]
> > + *
> > + * This can also be thought of as 2 virtual engines:
> > + * VE[0] = CS0[0], CS0[1]
> > + * VE[1] = CS1[0], CS1[1]
> 
> Ah okay so essentially similar to what I was proposing a year ago. But then
> it is no longer "set_parallel" really. It is one slot in 

Re: [Mesa-dev] [Intel-gfx] [RFC 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan

2021-05-21 Thread Matthew Brost
On Thu, May 20, 2021 at 09:41:20PM +0200, Daniel Vetter wrote:
> On Thu, May 20, 2021 at 11:57:44AM +0100, Tvrtko Ursulin wrote:
> > 
> > On 20/05/2021 10:54, Daniel Vetter wrote:
> > > On Wed, May 19, 2021 at 7:19 PM Matthew Brost  
> > > wrote:
> > > > 
> > > > On Wed, May 19, 2021 at 01:10:04PM +0200, Daniel Vetter wrote:
> > > > > On Tue, May 18, 2021 at 04:58:30PM -0700, Matthew Brost wrote:
> > > > > > Add entry for i915 new parallel submission uAPI plan.
> > > > > > 
> > > > > > v2:
> > > > > >   (Daniel Vetter):
> > > > > >- Expand logical order explanation
> > > > > >- Add dummy header
> > > > > >- Only allow N BBs in execbuf IOCTL
> > > > > >- Configure parallel submission per slot not per gem context
> > > > > > 
> > > > > > Cc: Tvrtko Ursulin 
> > > > > > Cc: Tony Ye 
> > > > > > CC: Carl Zhang 
> > > > > > Cc: Daniel Vetter 
> > > > > > Cc: Jason Ekstrand 
> > > > > > Signed-off-by: Matthew Brost 
> > > > > > ---
> > > > > >   Documentation/gpu/rfc/i915_parallel_execbuf.h | 144 
> > > > > > ++
> > > > > >   Documentation/gpu/rfc/i915_scheduler.rst  |  53 ++-
> > > > > >   2 files changed, 196 insertions(+), 1 deletion(-)
> > > > > >   create mode 100644 Documentation/gpu/rfc/i915_parallel_execbuf.h
> > > > > > 
> > > > > > diff --git a/Documentation/gpu/rfc/i915_parallel_execbuf.h 
> > > > > > b/Documentation/gpu/rfc/i915_parallel_execbuf.h
> > > > > > new file mode 100644
> > > > > > index ..8c64b983ccad
> > > > > > --- /dev/null
> > > > > > +++ b/Documentation/gpu/rfc/i915_parallel_execbuf.h
> > > > > > @@ -0,0 +1,144 @@
> > > > > > +#define I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT 2 /* see 
> > > > > > i915_context_engines_parallel_submit */
> > > > > > +
> > > > > > +/*
> > > > > > + * i915_context_engines_parallel_submit:
> > > > > > + *
> > > > > > + * Setup a slot to allow multiple BBs to be submitted in a single 
> > > > > > execbuf IOCTL.
> > > > > > + * Those BBs will then be scheduled to run on the GPU in parallel. 
> > > > > > Multiple
> > > > > > + * hardware contexts are created internally in the i915 run these 
> > > > > > BBs. Once a
> > > > > > + * slot is configured for N BBs only N BBs can be submitted in 
> > > > > > each execbuf
> > > > > > + * IOCTL and this is implicit behavior (e.g. the user doesn't tell 
> > > > > > the execbuf
> > > > > > + * IOCTL there are N BBs, the execbuf IOCTL know how many BBs 
> > > > > > there are based on
> > > > > > + * the slots configuration).
> > > > > > + *
> > > > > > + * There are two currently defined ways to control the placement 
> > > > > > of the
> > > > > > + * hardware contexts on physical engines: default behavior (no 
> > > > > > flags) and
> > > > > > + * I915_PARALLEL_IMPLICT_BONDS (a flag). More flags may be added 
> > > > > > the in the
> > > > > > + * future as new hardware / use cases arise. Details of how to use 
> > > > > > this
> > > > > > + * interface below above the flags.
> > > > > > + *
> > > > > > + * Returns -EINVAL if hardware context placement configuration 
> > > > > > invalid or if the
> > > > > > + * placement configuration isn't supported on the platform / 
> > > > > > submission
> > > > > > + * interface.
> > > > > > + * Returns -ENODEV if extension isn't supported on the platform / 
> > > > > > submission
> > > > > > + * interface.
> > > > > > + */
> > > > > > +struct i915_context_engines_parallel_submit {
> > > > > > +   struct i915_user_extension base;
> > > > > > +
> > > > > > +   __u16 engine_index; /* slot for parallel engine */
> > > > > > +   __u16 width;/* number of contexts per parallel 
> > > > > > engine */
> > > > > > +   __u16 num_siblings; /* number of siblings per context */
> > > > > > +   __u16 mbz16;
> > > > > 
> > > > > Ok the big picture looks reasonable now, the flags still confuse me.
> > > > > 
> > > > 
> > > > Yea, it is a bit confusing.
> > > > 
> > > > > > +/*
> > > > > > + * Default placement behavior (currently unsupported):
> > > > > > + *
> > > > > > + * Rather than restricting parallel submission to a single class 
> > > > > > with a
> > > > > > + * logically contiguous placement (I915_PARALLEL_IMPLICT_BONDS), 
> > > > > > add a mode that
> > > > > > + * enables parallel submission across multiple engine classes. In 
> > > > > > this case each
> > > > > > + * context's logical engine mask indicates where that context can 
> > > > > > placed. It is
> > > > > > + * implied in this mode that all contexts have mutual exclusive 
> > > > > > placement (e.g.
> > > > > > + * if one context is running CS0 no other contexts can run on CS0).
> > > > > > + *
> > > > > > + * Example 1 pseudo code:
> > > > > > + * CSX[Y] = engine class X, logical instance Y
> > > > > > + * INVALID = I915_ENGINE_CLASS_INVALID, 
> > > > > > I915_ENGINE_CLASS_INVALID_NONE
> > > > > > + * set_engines(INVALID)
> > > > > > + * set_parallel(engine_index=0, width=2, num_siblings=2,
> > > > > > + * 

Re: [Mesa-dev] [Intel-gfx] [RFC 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan

2021-05-21 Thread Matthew Brost
On Thu, May 20, 2021 at 09:44:59PM +0200, Daniel Vetter wrote:
> On Thu, May 20, 2021 at 08:10:59AM -0700, Matthew Brost wrote:
> > On Thu, May 20, 2021 at 11:54:25AM +0200, Daniel Vetter wrote:
> > > On Wed, May 19, 2021 at 7:19 PM Matthew Brost  
> > > wrote:
> > > >
> > > > On Wed, May 19, 2021 at 01:10:04PM +0200, Daniel Vetter wrote:
> > > > > On Tue, May 18, 2021 at 04:58:30PM -0700, Matthew Brost wrote:
> > > > > > Add entry for i915 new parallel submission uAPI plan.
> > > > > >
> > > > > > v2:
> > > > > >  (Daniel Vetter):
> > > > > >   - Expand logical order explanation
> > > > > >   - Add dummy header
> > > > > >   - Only allow N BBs in execbuf IOCTL
> > > > > >   - Configure parallel submission per slot not per gem context
> > > > > >
> > > > > > Cc: Tvrtko Ursulin 
> > > > > > Cc: Tony Ye 
> > > > > > CC: Carl Zhang 
> > > > > > Cc: Daniel Vetter 
> > > > > > Cc: Jason Ekstrand 
> > > > > > Signed-off-by: Matthew Brost 
> > > > > > ---
> > > > > >  Documentation/gpu/rfc/i915_parallel_execbuf.h | 144 
> > > > > > ++
> > > > > >  Documentation/gpu/rfc/i915_scheduler.rst  |  53 ++-
> > > > > >  2 files changed, 196 insertions(+), 1 deletion(-)
> > > > > >  create mode 100644 Documentation/gpu/rfc/i915_parallel_execbuf.h
> > > > > >
> > > > > > diff --git a/Documentation/gpu/rfc/i915_parallel_execbuf.h 
> > > > > > b/Documentation/gpu/rfc/i915_parallel_execbuf.h
> > > > > > new file mode 100644
> > > > > > index ..8c64b983ccad
> > > > > > --- /dev/null
> > > > > > +++ b/Documentation/gpu/rfc/i915_parallel_execbuf.h
> > > > > > @@ -0,0 +1,144 @@
> > > > > > +#define I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT 2 /* see 
> > > > > > i915_context_engines_parallel_submit */
> > > > > > +
> > > > > > +/*
> > > > > > + * i915_context_engines_parallel_submit:
> > > > > > + *
> > > > > > + * Setup a slot to allow multiple BBs to be submitted in a single 
> > > > > > execbuf IOCTL.
> > > > > > + * Those BBs will then be scheduled to run on the GPU in parallel. 
> > > > > > Multiple
> > > > > > + * hardware contexts are created internally in the i915 run these 
> > > > > > BBs. Once a
> > > > > > + * slot is configured for N BBs only N BBs can be submitted in 
> > > > > > each execbuf
> > > > > > + * IOCTL and this is implicit behavior (e.g. the user doesn't tell 
> > > > > > the execbuf
> > > > > > + * IOCTL there are N BBs, the execbuf IOCTL know how many BBs 
> > > > > > there are based on
> > > > > > + * the slots configuration).
> > > > > > + *
> > > > > > + * There are two currently defined ways to control the placement 
> > > > > > of the
> > > > > > + * hardware contexts on physical engines: default behavior (no 
> > > > > > flags) and
> > > > > > + * I915_PARALLEL_IMPLICT_BONDS (a flag). More flags may be added 
> > > > > > the in the
> > > > > > + * future as new hardware / use cases arise. Details of how to use 
> > > > > > this
> > > > > > + * interface below above the flags.
> > > > > > + *
> > > > > > + * Returns -EINVAL if hardware context placement configuration 
> > > > > > invalid or if the
> > > > > > + * placement configuration isn't supported on the platform / 
> > > > > > submission
> > > > > > + * interface.
> > > > > > + * Returns -ENODEV if extension isn't supported on the platform / 
> > > > > > submission
> > > > > > + * interface.
> > > > > > + */
> > > > > > +struct i915_context_engines_parallel_submit {
> > > > > > +   struct i915_user_extension base;
> > > > > > +
> > > > > > +   __u16 engine_index; /* slot for parallel engine */
> > > > > > +   __u16 width;/* number of contexts per parallel 
> > > > > > engine */
> > > > > > +   __u16 num_siblings; /* number of siblings per context */
> > > > > > +   __u16 mbz16;
> > > > >
> > > > > Ok the big picture looks reasonable now, the flags still confuse me.
> > > > >
> > > >
> > > > Yea, it is a bit confusing.
> > > >
> > > > > > +/*
> > > > > > + * Default placement behavior (currently unsupported):
> > > > > > + *
> > > > > > + * Rather than restricting parallel submission to a single class 
> > > > > > with a
> > > > > > + * logically contiguous placement (I915_PARALLEL_IMPLICT_BONDS), 
> > > > > > add a mode that
> > > > > > + * enables parallel submission across multiple engine classes. In 
> > > > > > this case each
> > > > > > + * context's logical engine mask indicates where that context can 
> > > > > > placed. It is
> > > > > > + * implied in this mode that all contexts have mutual exclusive 
> > > > > > placement (e.g.
> > > > > > + * if one context is running CS0 no other contexts can run on CS0).
> > > > > > + *
> > > > > > + * Example 1 pseudo code:
> > > > > > + * CSX[Y] = engine class X, logical instance Y
> > > > > > + * INVALID = I915_ENGINE_CLASS_INVALID, 
> > > > > > I915_ENGINE_CLASS_INVALID_NONE
> > > > > > + * set_engines(INVALID)
> > > > > > + * set_parallel(engine_index=0, width=2, num_siblings=2,
> > > > > > + * 

Re: [Mesa-dev] [Intel-gfx] [RFC 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan

2021-05-21 Thread Matthew Brost
On Fri, May 21, 2021 at 10:35:37AM +0200, Christian König wrote:
> On 20.05.21 at 23:38, Jason Ekstrand wrote:
> > On Thu, May 20, 2021 at 10:46 AM Matthew Brost  
> > wrote:
> > > On Thu, May 20, 2021 at 01:11:59PM +0200, Christian König wrote:
> > > > On 19.05.21 at 18:51, Matthew Brost wrote:
> > > > > On Wed, May 19, 2021 at 01:45:39PM +0200, Christian König wrote:
> > > > > > Oh, yeah we call that gang submit on the AMD side.
> > > > > > 
> > > > > > Had already some internal discussions how to implement this, but so 
> > > > > > far
> > > > > > couldn't figure out how to cleanly introduce that into the DRM 
> > > > > > scheduler.
> > > > > > 
> > > > > > Can you briefly describe in a few words how that is supposed to 
> > > > > > work on the
> > > > > > Intel side?
> > On Intel, we actually have two cases which don't fit the current
> > drm/scheduler model well: balanced and bonded.
> > 
> > In the balanced model, we want to submit a batch which can go to any
> > one of some set of engines and we don't care which.  It's up to the
> > kernel to pick an engine.  Imagine you had 64 identical HW compute
> > queues, for instance.  This could be done by making all the identical
> > engines share a single drm_gpu_scheduler and round-robin around the HW
> > queues or something.  I don't know that we strictly need drm/scheduler
> > to be aware of it but it might be nice if it grew support for this
> > mode so we could maintain a 1:1 relationship between HW queues and
> > drm_gpu_schedulers.  That said, I'm not sure how this would play with
> > GuC queues so maybe it doesn't help?
> 
> Oh, we do have support for load balancing like that.
> 
> When you call drm_sched_entity_init() you can give a list of
> drm_gpu_scheduler objects to use round robin for scheduling.
> 
> New jobs are then scheduled to the drm_gpu_scheduler instance which is idle,
> or rather the least busy one.
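For reference, a minimal sketch of the multi-scheduler init Christian refers to,
using the drm_sched_entity_init() signature of this time (ring_a/ring_b are
placeholders, error handling omitted):

    struct drm_gpu_scheduler *sched_list[] = { &ring_a->sched, &ring_b->sched };
    struct drm_sched_entity entity;

    /* jobs pushed to this entity get load-balanced across both schedulers */
    drm_sched_entity_init(&entity, DRM_SCHED_PRIORITY_NORMAL,
                          sched_list, ARRAY_SIZE(sched_list), NULL);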
> 
> > The bonded model is like your ganged, I think.  We want to submit N
> > batches to run in parallel.  And they actually have to be executing on
> > the GPU simultaneously and not just sort-of at similar times.  We need
> > this for video.  There are also potential use-cases in Vulkan or even
> > GL that might be able to use this.  One difference with the balanced
> > mode is that bonds don't, strictly speaking, need to be on the same
> > type of engine.  Imagine, for instance, a 3D batch with a parallel
> > compute batch doing vertex pre-processing.
> > 
> > I'm pretty sure the bonded case is something that the mobile drivers
> > (panfrost, etc.) would like as well for doing Vulkan on tilers where
> > you often have to have two command buffers running in parallel.
> > They're currently doing it by submitting a giant pile of batches where
> > they split the batch and add sync primitives every time some GL call
> > requires them to sync between fragment and vertex pipes.
> 
> Yeah, we have exactly the same problem as well.
> 
> But so far every model we discussed has some drawbacks and it is rather hard
> for the scheduler to guarantee that stuff runs at the same time.
> 
> So if you've got any ideas how to cleanly implement that then they would be
> rather welcome.
> 

Everything Jason said about our submission modes is correct for execlists, we
have balanced + bonded models which are tightly coupled with that backend.

Fortunately with GuC submission most of this complexity goes away as the GuC
handles this for us. E.g. for balanced, when we register a context we just give
it a mask of which physical engines the context is allowed to run on. For parallel
we register N contexts in a single command with the placement information +
submit to all the contexts with a command which moves the tails in LRCs for us.
We really don't need to bake any of this into the DRM scheduler for GuC
submission. 

Execlists are a different story but I think our plan is to do the minimum possible
to plumb that into the DRM scheduler (e.g. leave the balanced / bonded code in
the backend). Could we update the DRM scheduler to understand balanced (it seems
like it already does to some extent) and bonded? Yes we could, but IMO the ROI on
that is low for Intel. The DRM scheduler is quite clean at the moment compared
to our near incomprehensible execlist backend. Execlists are basically a legacy
interface so pushing features which are only required for that backend to the
DRM scheduler doesn't make sense to me. That being said, if this is something
AMD needs in the DRM scheduler of course we can support you.

Matt

> Regards,
> Christian.
> 
> > 
> > So, to sum up, I think there's likely some good collaboration to be
> > had here for everyone. :-)
> > 
> > --Jason
> > 
> > > > > Sure, I've done a quick PoC internally and have been able to hook this
> > > > > into the DRM scheduler.
> > > > > 
> > > > > Basically each BB still maps to a single job as each job is somewhat
> > > > > unique (e.g. each job has its own ring, lrc, seqno, etc...). However 
> > > > > all
> > > > > the 

Re: [Mesa-dev] [PATCH 01/11] drm/amdgpu: Comply with implicit fencing rules

2021-05-21 Thread Daniel Vetter
On Fri, May 21, 2021 at 05:00:46PM +0200, Bas Nieuwenhuizen wrote:
> On Fri, May 21, 2021 at 4:37 PM Daniel Vetter  wrote:
> >
> > On Fri, May 21, 2021 at 11:46:23AM +0200, Bas Nieuwenhuizen wrote:
> > > On Fri, May 21, 2021 at 11:10 AM Daniel Vetter  
> > > wrote:
> > > > ---
> > > >  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 4 ++--
> > > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> > > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > > > index 88a24a0b5691..cc8426e1e8a8 100644
> > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > > > @@ -617,8 +617,8 @@ static int amdgpu_cs_parser_bos(struct 
> > > > amdgpu_cs_parser *p,
> > > > amdgpu_bo_list_for_each_entry(e, p->bo_list) {
> > > > struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
> > > >
> > > > -   /* Make sure we use the exclusive slot for shared BOs */
> > > > -   if (bo->prime_shared_count)
> > > > +   /* Make sure we use the exclusive slot for all 
> > > > potentially shared BOs */
> > > > +   if (!(bo->flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID))
> > > > e->tv.num_shared = 0;
> > >
> > > I think it also makes sense to skip this with
> > > AMDGPU_GEM_CREATE_EXPLICIT_SYNC? It can be shared but I don't think
> > > anyone expects implicit sync to happen with those.
> >
> > Ah yes, I missed this entirely. So the "no implicit flag" is already
> > there, and the _only_ thing that's missing really is a way to fish out the
> > implicit fences, and set them.
> >
> > https://lore.kernel.org/dri-devel/20210520190007.534046-1-ja...@jlekstrand.net/
> >
> > So I think all that's really needed in radv is not setting
> > RADEON_FLAG_IMPLICIT_SYNC for winsys buffers when Jason's dma-buf ioctl
> > are present (means you need to do some import/export and keep the fd
> > around for winsys buffers, but shouldn't be too bad), and then control the
> > implicit fences entirely explicitly like vk expects.
> 
> That is the part I'm less sure about. This is a BO wide flag so we are
> also disabling implicit sync in the compositor. If the compositor only
> does read stuff that is ok, as the inserted exclusive fence will
> work for that. But as I learned recently the app provided buffer may
> end up being written to by the X server, which opens a whole can of
> potential problems if implicit sync gets disabled between Xserver
> operations on the app provided buffer. Hence setting that on the WSI
> buffer is a whole new can of potential problems, and hence I've said a
> submission-based flag would be preferred.
> 
> I can certainly try it out though.

Hm yeah that's the wrong flag. We need a flag on the drm_file which the
explicit userspace sets. And which is valid only for itself.

There's a nice flags field when creating a ctx, but it's not validated and
there's already a comment that we have to filter out garbage priority, so
that's no use. I'll whip up something entirely untested just as a draft.
-Daniel
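A hypothetical sketch of such a per-drm_file opt-out, purely to illustrate the
idea; none of these names exist in amdgpu and the ioctl/ctx-flag plumbing is
left out:

    /* hypothetical new member in struct amdgpu_fpriv, set once by
     * explicit-sync userspace and valid only for its own file descriptor */
    bool    no_implicit_sync;

    /* in amdgpu_cs_parser_bos(): clients that opted out skip the
     * exclusive-slot treatment for potentially shared BOs */
    if (!fpriv->no_implicit_sync &&
        !(bo->flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID))
            e->tv.num_shared = 0;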



> 
> >
> > Are you bored enough to type this up for radv? I'll give Jason's kernel
> > stuff another review meanwhile.
> > -Daniel
> >
> > > > e->bo_va = amdgpu_vm_bo_find(vm, bo);
> > > > }
> > > > --
> > > > 2.31.0
> > > >
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [Mesa-dev] [PATCH 01/11] drm/amdgpu: Comply with implicit fencing rules

2021-05-21 Thread Bas Nieuwenhuizen
On Fri, May 21, 2021 at 4:37 PM Daniel Vetter  wrote:
>
> On Fri, May 21, 2021 at 11:46:23AM +0200, Bas Nieuwenhuizen wrote:
> > On Fri, May 21, 2021 at 11:10 AM Daniel Vetter  
> > wrote:
> > > ---
> > >  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 4 ++--
> > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > > index 88a24a0b5691..cc8426e1e8a8 100644
> > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > > @@ -617,8 +617,8 @@ static int amdgpu_cs_parser_bos(struct 
> > > amdgpu_cs_parser *p,
> > > amdgpu_bo_list_for_each_entry(e, p->bo_list) {
> > > struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
> > >
> > > -   /* Make sure we use the exclusive slot for shared BOs */
> > > -   if (bo->prime_shared_count)
> > > +   /* Make sure we use the exclusive slot for all 
> > > potentially shared BOs */
> > > +   if (!(bo->flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID))
> > > e->tv.num_shared = 0;
> >
> > I think it also makes sense to skip this with
> > AMDGPU_GEM_CREATE_EXPLICIT_SYNC? It can be shared but I don't think
> > anyone expects implicit sync to happen with those.
>
> Ah yes, I missed this entirely. So the "no implicit flag" is already
> there, and the _only_ thing that's missing really is a way to fish out the
> implicit fences, and set them.
>
> https://lore.kernel.org/dri-devel/20210520190007.534046-1-ja...@jlekstrand.net/
>
> So I think all that's really needed in radv is not setting
> RADEON_FLAG_IMPLICIT_SYNC for winsys buffers when Jason's dma-buf ioctl
> are present (means you need to do some import/export and keep the fd
> around for winsys buffers, but shouldn't be too bad), and then control the
> implicit fences entirely explicitly like vk expects.

That is the part I'm less sure about. This is a BO wide flag so we are
also disabling implicit sync in the compositor. If the compositor only
does read stuff that is ok, as the inserted exclusive fence will
work for that. But as I learned recently the app provided buffer may
end up being written to by the X server, which opens a whole can of
potential problems if implicit sync gets disabled between Xserver
operations on the app provided buffer. Hence setting that on the WSI
buffer is a whole new can of potential problems, and hence I've said a
submission-based flag would be preferred.

I can certainly try it out though.

>
> Are you bored enough to type this up for radv? I'll give Jason's kernel
> stuff another review meanwhile.
> -Daniel
>
> > > e->bo_va = amdgpu_vm_bo_find(vm, bo);
> > > }
> > > --
> > > 2.31.0
> > >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch


Re: [Mesa-dev] [PATCH 01/11] drm/amdgpu: Comply with implicit fencing rules

2021-05-21 Thread Daniel Vetter
On Fri, May 21, 2021 at 07:58:57AM -0700, Rob Clark wrote:
> On Fri, May 21, 2021 at 2:10 AM Daniel Vetter  wrote:
> >
> > - msm is mildly entertaining. It also supports MSM_SUBMIT_NO_IMPLICIT,
> >   but because it doesn't use the drm/scheduler it handles fences from
> >   the wrong context with a synchronous dma_fence_wait. See
> >   submit_fence_sync() leading to msm_gem_sync_object(). Investing into
> >   a scheduler might be a good idea.
> 
> Yeah, drm/scheduler is (along with a lot of other things) on the TODO
> list.  But this isn't quite as bad as it sounds because userspace uses
> a u_queue thread to call the submit ioctl rather than blocking the
> driver.  (It also offloads some other work from the driver thread,
> like submit merging to reduce # of ioctls.  Coincidentally that
> arrangement was a step towards preparing userspace for some
> hypothetical non-ioctl uapi ;-))

You're also holding a pile of locks, which I expect will become painful at the
latest with multi-engine buffer sharing. If you push this to the
scheduler then the locks aren't held. Or maybe I've misread the flow, it's
all become a bit of a blur after all these drivers :-)

> OTOH it would be good to move blocking until the system can free
> enough pages to repin bo's out of the ioctl path to better handle some
> memory pressure corner cases without having to be interruptible over a
> lot more of the submit path. Running chrome+android on devices
> without a lot of memory is fun..

Uh that one needs the userspace thread. Or entirely different semantics of
your ioctl, because you're not allowed to allocate memory once any
dma_fence is visible. So offloading the entire pinning to a submit thread
is no-go.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [Mesa-dev] [PATCH 01/11] drm/amdgpu: Comply with implicit fencing rules

2021-05-21 Thread Rob Clark
On Fri, May 21, 2021 at 2:10 AM Daniel Vetter  wrote:
>
> - msm is mildly entertaining. It also supports MSM_SUBMIT_NO_IMPLICIT,
>   but because it doesn't use the drm/scheduler it handles fences from
>   the wrong context with a synchronous dma_fence_wait. See
>   submit_fence_sync() leading to msm_gem_sync_object(). Investing into
>   a scheduler might be a good idea.

Yeah, drm/scheduler is (along with a lot of other things) on the TODO
list.  But this isn't quite as bad as it sounds because userspace uses
a u_queue thread to call the submit ioctl rather than blocking the
driver.  (It also offloads some other work from the driver thread,
like submit merging to reduce # of ioctls.  Coincidentally that
arrangement was a step towards preparing userspace for some
hypothetical non-ioctl uapi ;-))

OTOH it would be good to move blocking until the system can free
enough pages to repin bo's out of the ioctl path to better handle some
memory pressure corner cases without having to be interruptible over a
lot more of the submit path. Running chrome+android on devices
without a lot of memory is fun..

BR,
-R


Re: [Mesa-dev] [PATCH 01/11] drm/amdgpu: Comply with implicit fencing rules

2021-05-21 Thread Daniel Vetter
On Fri, May 21, 2021 at 11:46:23AM +0200, Bas Nieuwenhuizen wrote:
> On Fri, May 21, 2021 at 11:10 AM Daniel Vetter  wrote:
> >
> > Docs for struct dma_resv are fairly clear:
> >
> > "A reservation object can have attached one exclusive fence (normally
> > associated with write operations) or N shared fences (read
> > operations)."
> >
> > https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html#reservation-objects
> >
> > Furthermore, here's a review across all of upstream.
> >
> > First off, render drivers and how they set implicit fences:
> >
> > - nouveau follows this contract, see in validate_fini_no_ticket()
> >
> > nouveau_bo_fence(nvbo, fence, !!b->write_domains);
> >
> >   and that last boolean controls whether the exclusive or shared fence
> >   slot is used.
> >
> > - radeon follows this contract by setting
> >
> > p->relocs[i].tv.num_shared = !r->write_domain;
> >
> >   in radeon_cs_parser_relocs(), which ensures that the call to
> >   ttm_eu_fence_buffer_objects() in radeon_cs_parser_fini() will do the
> >   right thing.
> >
> > - vmwgfx seems to follow this contract with the shotgun approach of
> >   always setting ttm_val_buf->num_shared = 0, which means
> >   ttm_eu_fence_buffer_objects() will only use the exclusive slot.
> >
> > - etnaviv follows this contract, as can be trivially seen by looking
> >   at submit_attach_object_fences()
> >
> > - i915 is a bit of a convoluted maze with multiple paths leading to
> >   i915_vma_move_to_active(). Which sets the exclusive flag if
> >   EXEC_OBJECT_WRITE is set. This can either come as a buffer flag for
> >   softpin mode, or through the write_domain when using relocations. It
> >   follows this contract.
> >
> > - lima follows this contract, see lima_gem_submit() which sets the
> >   exclusive fence when the LIMA_SUBMIT_BO_WRITE flag is set for that
> >   bo
> >
> > - msm follows this contract, see msm_gpu_submit() which sets the
> >   exclusive flag when the MSM_SUBMIT_BO_WRITE is set for that buffer
> >
> > - panfrost follows this contract with the shotgun approach of just
> >   always setting the exclusive fence, see
> >   panfrost_attach_object_fences(). Benefits of a single engine I guess
> >
> > - v3d follows this contract with the same shotgun approach in
> >   v3d_attach_fences_and_unlock_reservation(), but it has at least an
> >   XXX comment that maybe this should be improved
> >
> > - vc4 uses the same shotgun approach of always setting an exclusive
> >   fence, see vc4_update_bo_seqnos()
> >
> > - vgem also follows this contract, see vgem_fence_attach_ioctl() and
> >   the VGEM_FENCE_WRITE. This is used in some igts to validate prime
> >   sharing with i915.ko without the need of a 2nd gpu
> >
> > - virtio follows this contract again with the shotgun approach of
> >   always setting an exclusive fence, see virtio_gpu_array_add_fence()
> >
> > This covers the setting of the exclusive fences when writing.
> >
> > Synchronizing against the exclusive fence is a lot more tricky, and I
> > only spot checked a few:
> >
> > - i915 does it, with the optional EXEC_OBJECT_ASYNC to skip all
> >   implicit dependencies (which is used by vulkan)
> >
> > - etnaviv does this. Implicit dependencies are collected in
> >   submit_fence_sync(), again with an opt-out flag
> >   ETNA_SUBMIT_NO_IMPLICIT. These are then picked up in
> >   etnaviv_sched_dependency which is the
> >   drm_sched_backend_ops->dependency callback.
> >
> > - vc4 seems to not do much here, maybe gets away with it by not having
> >   a scheduler and only a single engine. Since all newer broadcom chips than
> >   the OG vc4 use v3d for rendering, which follows this contract, the
> >   impact of this issue is fairly small.
> >
> > - v3d does this using the drm_gem_fence_array_add_implicit() helper,
> >   which then it's drm_sched_backend_ops->dependency callback
> >   v3d_job_dependency() picks up.
> >
> > - panfrost is nice here and tracks the implicit fences in
> >   panfrost_job->implicit_fences, which again the
> >   drm_sched_backend_ops->dependency callback panfrost_job_dependency()
> >   picks up. It is mildly questionable though since it only picks up
> >   exclusive fences in panfrost_acquire_object_fences(), but not buggy
> >   in practice because it also always sets the exclusive fence. It
> >   should pick up both sets of fences, just in case there's ever going
> >   to be a 2nd gpu in a SoC with a mali gpu. Or maybe a mali SoC with a
> >   pcie port and a real gpu, which might actually happen eventually. A
> >   bug, but easy to fix. Should probably use the
> >   drm_gem_fence_array_add_implicit() helper.
> >
> > - lima is nice and easy, uses drm_gem_fence_array_add_implicit() and
> >   the same schema as v3d.
> >
> > - msm is mildly entertaining. It also supports MSM_SUBMIT_NO_IMPLICIT,
> >   but because it doesn't use the drm/scheduler it handles fences from
> >   the wrong context with a synchronous dma_fence_wait. See
> >   

Re: [Mesa-dev] Freenode fallout

2021-05-21 Thread Simon Ser
On Friday, May 21st, 2021 at 1:49 AM, Lyude Paul  wrote:

> After considering Libera and OFTC as options, the board settled on
> recommending OFTC. The primary reason for this is because OFTC is
> associated with our parent foundation SPI, and has a long and well known
> history of involvement with the open source community. As well, the
> board believes OFTC's current Governance model is a lot clearer than
> Libera's.

I'd personally prefer Libera Chat. They don't yet have a published
formal governance model, but I hope this will come soon. Since they are
the former Freenode staff, I trust them to make sure mistakes from the
past won't be repeated.

Apart from politics, Libera also offers a more modern feature set. This
can ease daily usage, with features such as reliable authentication,
account tracking, and many other IRC protocol improvements. OFTC has
plans to eventually migrate to Solanum, but they don't have the time to
do it for now.

For reference, on OFTC:

CAP LS 302
:kinetic.oftc.net CAP * LS :multi-prefix

And on Libera:

CAP LS 302
:ruthenium.libera.chat CAP * LS :account-notify away-notify chghost 
extended-join multi-prefix sasl=PLAIN,ECDSA-NIST256P-CHALLENGE,EXTERNAL tls 
userhost-in-names account-tag cap-notify echo-message solanum.chat/identify-msg 
solanum.chat/realhost


Re: [Mesa-dev] [Intel-gfx] [RFC 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan

2021-05-21 Thread Tvrtko Ursulin



On 20/05/2021 20:41, Daniel Vetter wrote:

On Thu, May 20, 2021 at 11:57:44AM +0100, Tvrtko Ursulin wrote:


On 20/05/2021 10:54, Daniel Vetter wrote:

On Wed, May 19, 2021 at 7:19 PM Matthew Brost  wrote:


On Wed, May 19, 2021 at 01:10:04PM +0200, Daniel Vetter wrote:

On Tue, May 18, 2021 at 04:58:30PM -0700, Matthew Brost wrote:

Add entry for i915 new parallel submission uAPI plan.

v2:
   (Daniel Vetter):
- Expand logical order explanation
- Add dummy header
- Only allow N BBs in execbuf IOCTL
- Configure parallel submission per slot not per gem context

Cc: Tvrtko Ursulin 
Cc: Tony Ye 
CC: Carl Zhang 
Cc: Daniel Vetter 
Cc: Jason Ekstrand 
Signed-off-by: Matthew Brost 
---
   Documentation/gpu/rfc/i915_parallel_execbuf.h | 144 ++
   Documentation/gpu/rfc/i915_scheduler.rst  |  53 ++-
   2 files changed, 196 insertions(+), 1 deletion(-)
   create mode 100644 Documentation/gpu/rfc/i915_parallel_execbuf.h

diff --git a/Documentation/gpu/rfc/i915_parallel_execbuf.h 
b/Documentation/gpu/rfc/i915_parallel_execbuf.h
new file mode 100644
index ..8c64b983ccad
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_parallel_execbuf.h
@@ -0,0 +1,144 @@
+#define I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT 2 /* see 
i915_context_engines_parallel_submit */
+
+/*
+ * i915_context_engines_parallel_submit:
+ *
+ * Setup a slot to allow multiple BBs to be submitted in a single execbuf IOCTL.
+ * Those BBs will then be scheduled to run on the GPU in parallel. Multiple
+ * hardware contexts are created internally in the i915 to run these BBs. Once a
+ * slot is configured for N BBs only N BBs can be submitted in each execbuf
+ * IOCTL and this is implicit behavior (e.g. the user doesn't tell the execbuf
+ * IOCTL there are N BBs, the execbuf IOCTL knows how many BBs there are based on
+ * the slot's configuration).
+ *
+ * There are two currently defined ways to control the placement of the
+ * hardware contexts on physical engines: default behavior (no flags) and
+ * I915_PARALLEL_IMPLICT_BONDS (a flag). More flags may be added in the
+ * future as new hardware / use cases arise. Details of how to use this
+ * interface are documented below, above the flags.
+ *
+ * Returns -EINVAL if the hardware context placement configuration is invalid or
+ * if the placement configuration isn't supported on the platform / submission
+ * interface.
+ * Returns -ENODEV if the extension isn't supported on the platform / submission
+ * interface.
+ */
+struct i915_context_engines_parallel_submit {
+   struct i915_user_extension base;
+
+   __u16 engine_index; /* slot for parallel engine */
+   __u16 width;/* number of contexts per parallel engine */
+   __u16 num_siblings; /* number of siblings per context */
+   __u16 mbz16;


Ok the big picture looks reasonable now, the flags still confuse me.



Yea, it is a bit confusing.


+/*
+ * Default placement behavior (currently unsupported):
+ *
+ * Rather than restricting parallel submission to a single class with a
+ * logically contiguous placement (I915_PARALLEL_IMPLICT_BONDS), add a mode that
+ * enables parallel submission across multiple engine classes. In this case each
+ * context's logical engine mask indicates where that context can be placed. It is
+ * implied in this mode that all contexts have mutually exclusive placement (e.g.
+ * if one context is running on CS0 no other contexts can run on CS0).
+ *
+ * Example 1 pseudo code:
+ * CSX[Y] = engine class X, logical instance Y
+ * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
+ * set_engines(INVALID)
+ * set_parallel(engine_index=0, width=2, num_siblings=2,
+ * engines=CS0[0],CS0[1],CS1[0],CS1[1])
+ *
+ * Results in the following valid placements:
+ * CS0[0], CS1[0]
+ * CS0[0], CS1[1]
+ * CS0[1], CS1[0]
+ * CS0[1], CS1[1]
+ *
+ * This can also be thought of as 2 virtual engines:
+ * VE[0] = CS0[0], CS0[1]
+ * VE[1] = CS1[0], CS1[1]
+ *
+ * Example 2 pseudo code:
+ * CS[X] = generic engine of same class, logical instance X
+ * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
+ * set_engines(INVALID)
+ * set_parallel(engine_index=0, width=2, num_siblings=3,
+ * engines=CS[0],CS[1],CS[2],CS[0],CS[1],CS[2])
+ *
+ * Results in the following valid placements:
+ * CS[0], CS[1]
+ * CS[0], CS[2]
+ * CS[1], CS[0]
+ * CS[1], CS[2]
+ * CS[2], CS[0]
+ * CS[2], CS[1]
+ *
+ *
+ * This can also be thought of as 2 virtual engines:
+ * VE[0] = CS[0], CS[1], CS[2]
+ * VE[1] = CS[0], CS[1], CS[2]
+
+ * This enables a use case where all engines are created equally, we don't care
+ * where they are scheduled, we just want a certain number of resources, for
+ * those resources to be scheduled in parallel, and possibly across multiple
+ * engine classes.
+ */
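A hypothetical userspace sketch of Example 1 above. Only the fields quoted in
the struct definition come from the RFC header; the trailing engines[] array
and anything past mbz16 are assumptions of this sketch, with engine classes
RENDER/COPY standing in for "CS0"/"CS1":

    struct {
            struct i915_context_engines_parallel_submit ext;
            struct i915_engine_class_instance engines[4];
    } parallel = {
            .ext.base.name    = I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT,
            .ext.engine_index = 0,  /* slot 0 in the context engine map */
            .ext.width        = 2,  /* 2 BBs per execbuf */
            .ext.num_siblings = 2,  /* 2 possible placements per context */
            .engines = {
                    { I915_ENGINE_CLASS_RENDER, 0 },  /* context 0: CS0[0] */
                    { I915_ENGINE_CLASS_RENDER, 1 },  /* context 0: CS0[1] */
                    { I915_ENGINE_CLASS_COPY,   0 },  /* context 1: CS1[0] */
                    { I915_ENGINE_CLASS_COPY,   1 },  /* context 1: CS1[1] */
            },
    };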


So I don't really get what this does compared to setting the flag below.
Is this just about running the batchbuffers the wrong way round, i.e. if
you have (simplest case)

width=2, num_siblings=1, 

Re: [Mesa-dev] [Intel-gfx] [RFC 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan

2021-05-21 Thread Tvrtko Ursulin



On 19/05/2021 00:58, Matthew Brost wrote:

Add entry for i915 new parallel submission uAPI plan.

v2:
  (Daniel Vetter):
   - Expand logical order explanation
   - Add dummy header
   - Only allow N BBs in execbuf IOCTL
   - Configure parallel submission per slot not per gem context

Cc: Tvrtko Ursulin 
Cc: Tony Ye 
CC: Carl Zhang 
Cc: Daniel Vetter 
Cc: Jason Ekstrand 
Signed-off-by: Matthew Brost 
---
  Documentation/gpu/rfc/i915_parallel_execbuf.h | 144 ++
  Documentation/gpu/rfc/i915_scheduler.rst  |  53 ++-
  2 files changed, 196 insertions(+), 1 deletion(-)
  create mode 100644 Documentation/gpu/rfc/i915_parallel_execbuf.h

diff --git a/Documentation/gpu/rfc/i915_parallel_execbuf.h 
b/Documentation/gpu/rfc/i915_parallel_execbuf.h
new file mode 100644
index ..8c64b983ccad
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_parallel_execbuf.h
@@ -0,0 +1,144 @@
+#define I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT 2 /* see 
i915_context_engines_parallel_submit */
+
+/*
+ * i915_context_engines_parallel_submit:
+ *
+ * Setup a slot to allow multiple BBs to be submitted in a single execbuf IOCTL.
+ * Those BBs will then be scheduled to run on the GPU in parallel. Multiple
+ * hardware contexts are created internally in the i915 to run these BBs. Once a
+ * slot is configured for N BBs only N BBs can be submitted in each execbuf
+ * IOCTL and this is implicit behavior (e.g. the user doesn't tell the execbuf
+ * IOCTL there are N BBs, the execbuf IOCTL knows how many BBs there are based on
+ * the slot's configuration).


1)
Expand the term slot here with "slot in the context engine map" at least
once for clarity.


2)
About where execbuf will implicitly be finding batches - suggest to also 
cover first/last flag here. I know you have it in the readme but I think 
it is good if uapi header is as self-contained as possible.



+ *
+ * There are two currently defined ways to control the placement of the
+ * hardware contexts on physical engines: default behavior (no flags) and
+ * I915_PARALLEL_IMPLICT_BONDS (a flag). More flags may be added in the
+ * future as new hardware / use cases arise. Details of how to use this
+ * interface are documented below, above the flags.
+ *
+ * Returns -EINVAL if the hardware context placement configuration is invalid or
+ * if the placement configuration isn't supported on the platform / submission
+ * interface.
+ * Returns -ENODEV if the extension isn't supported on the platform / submission
+ * interface.
+ */
+struct i915_context_engines_parallel_submit {
+   struct i915_user_extension base;
+
+   __u16 engine_index; /* slot for parallel engine */
+   __u16 width;/* number of contexts per parallel engine */
+   __u16 num_siblings; /* number of siblings per context */
+   __u16 mbz16;
+/*
+ * Default placement behavior (currently unsupported):
+ *
+ * Rather than restricting parallel submission to a single class with a
+ * logically contiguous placement (I915_PARALLEL_IMPLICT_BONDS), add a mode 
that


What do you mean with logically contiguous here? It sounds ambiguous 
versus logical vs "normal" engine instance numbers.



+ * enables parallel submission across multiple engine classes. In this case each
+ * context's logical engine mask indicates where that context can be placed. It is
+ * implied in this mode that all contexts have mutually exclusive placement (e.g.
+ * if one context is running on CS0 no other contexts can run on CS0).


I think talk about logical context and its mask is too implementation 
detail at the uapi level. Instead I would suggest more userspace 
programmer centric description.



+ *
+ * Example 1 pseudo code:
+ * CSX[Y] = engine class X, logical instance Y
+ * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
+ * set_engines(INVALID)
+ * set_parallel(engine_index=0, width=2, num_siblings=2,
+ * engines=CS0[0],CS0[1],CS1[0],CS1[1])
+ *
+ * Results in the following valid placements:
+ * CS0[0], CS1[0]
+ * CS0[0], CS1[1]
+ * CS0[1], CS1[0]
+ * CS0[1], CS1[1]
+ *
+ * This can also be thought of as 2 virtual engines:
+ * VE[0] = CS0[0], CS0[1]
+ * VE[1] = CS1[0], CS1[1]


Ah okay so essentially similar to what I was proposing a year ago. But 
then it is no longer "set_parallel" really. It is one slot in the engine 
map, right, with the idea to super class intel_context in the 
implementation?


So really a wide virtual engine, as opposed to single one. In which case 
I think it makes sense to stay close to the existing naming of the 
load_balance extension for consistency. Load_balance_wide? 
Load_balance_parallel? Multi?


I also have to say the notation "CS0[0]" - even I, who know this problem space,
am finding it hard to penetrate what that actually means. (Also
uppercase IMO makes it hard to read, but maybe it is just me.)


Looking a bit lower below, extension seems to be taking a 2d array of 
class:instance pairs, right? If so then reading these docs in order, or 
even just 

Re: [Mesa-dev] [PATCH 01/11] drm/amdgpu: Comply with implicit fencing rules

2021-05-21 Thread Christian König

On 21.05.21 at 11:09, Daniel Vetter wrote:

Docs for struct dma_resv are fairly clear:

"A reservation object can have attached one exclusive fence (normally
associated with write operations) or N shared fences (read
operations)."

https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html#reservation-objects
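To make the quoted contract concrete: with the dma_resv API of this time a
driver is expected to attach fences roughly like this (illustrative only,
error handling omitted):

    /* writers go into the exclusive slot, readers into the shared slots */
    if (writes) {
            dma_resv_add_excl_fence(obj->resv, fence);
    } else {
            dma_resv_reserve_shared(obj->resv, 1);
            dma_resv_add_shared_fence(obj->resv, fence);
    }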

Furthermore, here's a review across all of upstream.

First off, render drivers and how they set implicit fences:

- nouveau follows this contract, see in validate_fini_no_ticket()

nouveau_bo_fence(nvbo, fence, !!b->write_domains);

   and that last boolean controls whether the exclusive or shared fence
   slot is used.

- radeon follows this contract by setting

p->relocs[i].tv.num_shared = !r->write_domain;

   in radeon_cs_parser_relocs(), which ensures that the call to
   ttm_eu_fence_buffer_objects() in radeon_cs_parser_fini() will do the
   right thing.
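The mechanism behind that: ttm_eu_fence_buffer_objects() chooses the fence slot
per entry based on num_shared. A paraphrase of the relevant loop (not a verbatim
copy):

    list_for_each_entry(entry, list, head) {
            if (entry->num_shared)
                    dma_resv_add_shared_fence(entry->bo->base.resv, fence);
            else
                    dma_resv_add_excl_fence(entry->bo->base.resv, fence);
    }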

- vmwgfx seems to follow this contract with the shotgun approach of
   always setting ttm_val_buf->num_shared = 0, which means
   ttm_eu_fence_buffer_objects() will only use the exclusive slot.

- etnaviv follows this contract, as can be trivially seen by looking
   at submit_attach_object_fences()

- i915 is a bit a convoluted maze with multiple paths leading to
   i915_vma_move_to_active(). Which sets the exclusive flag if
   EXEC_OBJECT_WRITE is set. This can either come as a buffer flag for
   softpin mode, or through the write_domain when using relocations. It
   follows this contract.

- lima follows this contract, see lima_gem_submit() which sets the
   exclusive fence when the LIMA_SUBMIT_BO_WRITE flag is set for that
   bo

- msm follows this contract, see msm_gpu_submit() which sets the
   exclusive flag when the MSM_SUBMIT_BO_WRITE is set for that buffer

- panfrost follows this contract with the shotgun approach of just
   always setting the exclusive fence, see
   panfrost_attach_object_fences(). Benefits of a single engine I guess

- v3d follows this contract with the same shotgun approach in
   v3d_attach_fences_and_unlock_reservation(), but it has at least an
   XXX comment that maybe this should be improved

- v4c uses the same shotgun approach of always setting an exclusive
   fence, see vc4_update_bo_seqnos()

- vgem also follows this contract, see vgem_fence_attach_ioctl() and
   the VGEM_FENCE_WRITE. This is used in some igts to validate prime
   sharing with i915.ko without the need of a 2nd gpu

- vritio follows this contract again with the shotgun approach of
   always setting an exclusive fence, see virtio_gpu_array_add_fence()

This covers the setting of the exclusive fences when writing.

Synchronizing against the exclusive fence is a lot more tricky, and I
only spot checked a few:

- i915 does it, with the optional EXEC_OBJECT_ASYNC to skip all
   implicit dependencies (which is used by vulkan)

- etnaviv does this. Implicit dependencies are collected in
   submit_fence_sync(), again with an opt-out flag
   ETNA_SUBMIT_NO_IMPLICIT. These are then picked up in
   etnaviv_sched_dependency which is the
   drm_sched_backend_ops->dependency callback.

- v4c seems to not do much here, maybe gets away with it by not having
   a scheduler and only a single engine. Since all newer broadcom chips than
   the OG vc4 use v3d for rendering, which follows this contract, the
   impact of this issue is fairly small.

- v3d does this using the drm_gem_fence_array_add_implicit() helper,
   which then it's drm_sched_backend_ops->dependency callback
   v3d_job_dependency() picks up.

- panfrost is nice here and tracks the implicit fences in
   panfrost_job->implicit_fences, which again the
   drm_sched_backend_ops->dependency callback panfrost_job_dependency()
   picks up. It is mildly questionable though since it only picks up
   exclusive fences in panfrost_acquire_object_fences(), but not buggy
   in practice because it also always sets the exclusive fence. It
   should pick up both sets of fences, just in case there's ever going
   to be a 2nd gpu in a SoC with a mali gpu. Or maybe a mali SoC with a
   pcie port and a real gpu, which might actually happen eventually. A
   bug, but easy to fix. Should probably use the
   drm_gem_fence_array_add_implicit() helper.

- lima is nice an easy, uses drm_gem_fence_array_add_implicit() and
   the same schema as v3d.

- msm is mildly entertaining. It also supports MSM_SUBMIT_NO_IMPLICIT,
   but because it doesn't use the drm/scheduler it handles fences from
   the wrong context with a synchronous dma_fence_wait. See
   submit_fence_sync() leading to msm_gem_sync_object(). Investing into
   a 

[Mesa-dev] [PATCH 01/11] drm/amdgpu: Comply with implicit fencing rules

2021-05-21 Thread Daniel Vetter
Docs for struct dma_resv are fairly clear:

"A reservation object can have attached one exclusive fence (normally
associated with write operations) or N shared fences (read
operations)."

https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html#reservation-objects
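
Expressed as code, the contract amounts to roughly the following at
fence-attach time (a minimal sketch using the current dma_resv helpers, not
taken from any particular driver; locking, error handling and the
dma_resv_reserve_shared() step are elided):

#include <linux/dma-resv.h>

/* writers take the exclusive slot, readers add a shared fence */
static void attach_implicit_fence(struct dma_resv *resv,
                                  struct dma_fence *fence, bool write)
{
    if (write)
        dma_resv_add_excl_fence(resv, fence);
    else
        dma_resv_add_shared_fence(resv, fence);
}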

Furthermore a review across all of upstream.

First off, render drivers and how they set implicit fences:

- nouveau follows this contract, see in validate_fini_no_ticket()

nouveau_bo_fence(nvbo, fence, !!b->write_domains);

  and that last boolean controls whether the exclusive or shared fence
  slot is used.

- radeon follows this contract by setting

p->relocs[i].tv.num_shared = !r->write_domain;

  in radeon_cs_parser_relocs(), which ensures that the call to
  ttm_eu_fence_buffer_objects() in radeon_cs_parser_fini() will do the
  right thing.

- vmwgfx seems to follow this contract with the shotgun approach of
  always setting ttm_val_buf->num_shared = 0, which means
  ttm_eu_fence_buffer_objects() will only use the exclusive slot.

- etnaviv follows this contract, as can be trivially seen by looking
  at submit_attach_object_fences()

- i915 is a bit of a convoluted maze with multiple paths leading to
  i915_vma_move_to_active(). Which sets the exclusive flag if
  EXEC_OBJECT_WRITE is set. This can either come as a buffer flag for
  softpin mode, or through the write_domain when using relocations. It
  follows this contract.

- lima follows this contract, see lima_gem_submit() which sets the
  exclusive fence when the LIMA_SUBMIT_BO_WRITE flag is set for that
  bo

- msm follows this contract, see msm_gpu_submit() which sets the
  exclusive flag when the MSM_SUBMIT_BO_WRITE is set for that buffer

- panfrost follows this contract with the shotgun approach of just
  always setting the exclusive fence, see
  panfrost_attach_object_fences(). Benefits of a single engine I guess

- v3d follows this contract with the same shotgun approach in
  v3d_attach_fences_and_unlock_reservation(), but it has at least an
  XXX comment that maybe this should be improved

- vc4 uses the same shotgun approach of always setting an exclusive
  fence, see vc4_update_bo_seqnos()

- vgem also follows this contract, see vgem_fence_attach_ioctl() and
  the VGEM_FENCE_WRITE. This is used in some igts to validate prime
  sharing with i915.ko without the need of a 2nd gpu

- virtio follows this contract again with the shotgun approach of
  always setting an exclusive fence, see virtio_gpu_array_add_fence()

This covers the setting of the exclusive fences when writing.
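
For the drivers going through TTM's execbuf utils (radeon, vmwgfx, and
friends), the per-buffer num_shared value is what steers this choice;
ttm_eu_fence_buffer_objects() behaves roughly like this (a simplified
sketch of the behaviour described above, not the verbatim implementation;
ticket handling and locking are elided):

#include <linux/dma-resv.h>
#include <drm/ttm/ttm_execbuf_util.h>

static void sketch_eu_fence_buffer_objects(struct list_head *list,
                                           struct dma_fence *fence)
{
    struct ttm_validate_buffer *entry;

    list_for_each_entry(entry, list, head) {
        struct ttm_buffer_object *bo = entry->bo;

        /* num_shared == 0 means "use the exclusive slot" */
        if (entry->num_shared)
            dma_resv_add_shared_fence(bo->base.resv, fence);
        else
            dma_resv_add_excl_fence(bo->base.resv, fence);
    }
}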

Synchronizing against the exclusive fence is a lot more tricky, and I
only spot checked a few:

- i915 does it, with the optional EXEC_OBJECT_ASYNC to skip all
  implicit dependencies (which is used by vulkan)

- etnaviv does this. Implicit dependencies are collected in
  submit_fence_sync(), again with an opt-out flag
  ETNA_SUBMIT_NO_IMPLICIT. These are then picked up in
  etnaviv_sched_dependency which is the
  drm_sched_backend_ops->dependency callback.

- vc4 seems to not do much here, maybe gets away with it by not having
  a scheduler and only a single engine. Since all newer broadcom chips than
  the OG vc4 use v3d for rendering, which follows this contract, the
  impact of this issue is fairly small.

- v3d does this using the drm_gem_fence_array_add_implicit() helper,
  which its drm_sched_backend_ops->dependency callback
  v3d_job_dependency() then picks up.

- panfrost is nice here and tracks the implicit fences in
  panfrost_job->implicit_fences, which again the
  drm_sched_backend_ops->dependency callback panfrost_job_dependency()
  picks up. It is mildly questionable though since it only picks up
  exclusive fences in panfrost_acquire_object_fences(), but not buggy
  in practice because it also always sets the exclusive fence. It
  should pick up both sets of fences, just in case there's ever going
  to be a 2nd gpu in a SoC with a mali gpu. Or maybe a mali SoC with a
  pcie port and a real gpu, which might actually happen eventually. A
  bug, but easy to fix. Should probably use the
  drm_gem_fence_array_add_implicit() helper.

- lima is nice and easy, uses drm_gem_fence_array_add_implicit() and
  the same schema as v3d (a sketch of this pattern follows below this
  list).

- msm is mildly entertaining. It also supports MSM_SUBMIT_NO_IMPLICIT,
  but because it doesn't use the drm/scheduler it handles fences from
  the wrong context with a synchronous dma_fence_wait. See
  submit_fence_sync() leading to msm_gem_sync_object(). Investing into
  a scheduler might be a good idea.

- all the remaining drivers are ttm based, where I hope they do
  appropriately obey implicit fences already. I didn't do the full
  audit there because a) not following the contract would confuse ttm
  quite well and b) reading non-standard scheduler and submit code
  which isn't based on drm/scheduler is a pain.
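
The collection side that v3d and lima use (and that panfrost should switch
to) looks roughly like this. A sketch only; the my_job structure and the
MY_BO_WRITE flag are made-up names for illustration:

#include <linux/xarray.h>
#include <drm/drm_gem.h>

#define MY_BO_WRITE 0x1  /* illustrative per-BO write flag */

struct my_job {
    struct xarray deps;            /* fences the scheduler will wait on */
    struct drm_gem_object **bos;
    u32 *bo_flags;
    unsigned int bo_count;
};

static int my_job_add_implicit_deps(struct my_job *job)
{
    unsigned int i;
    int ret;

    for (i = 0; i < job->bo_count; i++) {
        /* write access also pulls in all the shared (reader) fences */
        ret = drm_gem_fence_array_add_implicit(&job->deps, job->bos[i],
                                               job->bo_flags[i] & MY_BO_WRITE);
        if (ret)
            return ret;
    }
    return 0;
}

The drm_sched_backend_ops->dependency callback then hands these collected
fences to the scheduler one at a time.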

Onwards to the display side.

- Any driver using the drm_gem_plane_helper_prepare_fb() helper will
  correctly. 

Re: [Mesa-dev] [Intel-gfx] [RFC 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan

2021-05-21 Thread Christian König

Am 20.05.21 um 23:38 schrieb Jason Ekstrand:

On Thu, May 20, 2021 at 10:46 AM Matthew Brost  wrote:

On Thu, May 20, 2021 at 01:11:59PM +0200, Christian König wrote:

Am 19.05.21 um 18:51 schrieb Matthew Brost:

On Wed, May 19, 2021 at 01:45:39PM +0200, Christian König wrote:

Oh, yeah we call that gang submit on the AMD side.

We already had some internal discussions on how to implement this, but so far
couldn't figure out how to cleanly introduce that into the DRM scheduler.

Can you briefly describe in a few words how that is supposed to work on the
Intel side?

On Intel, we actually have two cases which don't fit the current
drm/scheduler model well: balanced and bonded.

In the balanced model, we want to submit a batch which can go to any
one of some set of engines and we don't care which.  It's up to the
kernel to pick an engine.  Imagine you had 64 identical HW compute
queues, for instance.  This could be done by making all the identical
engines share a single drm_gpu_scheduler and round-robin around the HW
queues or something.  I don't know that we strictly need drm/scheduler
to be aware of it but it might be nice if it grew support for this
mode so we could maintain a 1:1 relationship between HW queues and
drm_gpu_schedulers.  That said, I'm not sure how this would play with
GuC queues so maybe it doesn't help?


Oh, we do have support for load balancing like that.

When you call drm_sched_entity_init() you can give a list of 
drm_gpu_scheduler objects to use round robin for scheduling.


New jobs are then scheduled to the drm_gpu_scheduler instance which is 
idle, or rather the least busy one.
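
For reference, the setup looks roughly like this (a minimal sketch;
NUM_QUEUES and the scheduler array are illustrative stand-ins for e.g. one
drm_gpu_scheduler per HW compute queue):

#include <drm/gpu_scheduler.h>

#define NUM_QUEUES 4  /* illustrative: one scheduler per HW queue */

static int init_balanced_entity(struct drm_sched_entity *entity,
                                struct drm_gpu_scheduler **queues)
{
    /*
     * Any scheduler in the list may pick up jobs pushed to this entity;
     * the least busy one is chosen at job push time.
     */
    return drm_sched_entity_init(entity, DRM_SCHED_PRIORITY_NORMAL,
                                 queues, NUM_QUEUES, NULL);
}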



The bonded model is like your ganged, I think.  We want to submit N
batches to run in parallel.  And they actually have to be executing on
the GPU simultaneously and not just sort-of at similar times.  We need
this for video.  There are also potential use-cases in Vulkan or even
GL that might be able to use this.  One difference with the balanced
mode is that bonds don't, strictly speaking, need to be on the same
type of engine.  Imagine, for instance, a 3D batch with a parallel
compute batch doing vertex pre-processing.

I'm pretty sure the bonded case is something that the mobile drivers
(panfrost, etc.) would like as well for doing Vulkan on tilers where
you often have to have two command buffers running in parallel.
They're currently doing it by submitting a giant pile of batches where
they split the batch and add sync primitives every time some GL call
requires them to sync between fragment and vertex pipes.


Yeah, we have exactly the same problem as well.

But so far every model we discussed has some drawbacks and it is rather 
hard for the scheduler to guarantee that stuff runs at the same time.


So if you have any ideas how to cleanly implement that, they would be 
rather welcome.


Regards,
Christian.



So, to sum up, I think there's likely some good collaboration to be
had here for everyone. :-)

--Jason


Sure, I've done a quick PoC internally and have been able to hook this
into the DRM scheduler.

Basically each BB still maps to a single job as each job is somewhat
unique (e.g. each job has its own ring, lrc, seqno, etc...). However all
the jobs configured to run in parallel map to a single sched_entity
which maintains the order each job was generated from the execbuf IOCTL
(1 - N). When the backend receives jobs 1 to N - 1 it basically just
updates some internal state. When the backend sees job N (last job) it
actually does the submit for jobs 1 - N which with GuC submission is a
simple command moving the LRC tail of the N jobs.

Daniel has suggested that we create a single job for the N BBs but that
would be huge rework to the internals of the i915 and likely won't
happen by the time this code first lands.

Also worth noting, one way a job isn't really treated individually is
the excl slot with dma-resv. In that case we create a composite fence of
all jobs (dma_fence_array).
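
For illustration, such a composite fence could be built along these lines
(a sketch only, not the actual i915 code; reference handling is elided):

#include <linux/dma-fence-array.h>

static struct dma_fence *make_parallel_excl_fence(struct dma_fence **job_fences,
                                                  int num_jobs)
{
    struct dma_fence_array *array;

    /* signal_on_any = false: signals only once all N jobs have completed */
    array = dma_fence_array_create(num_jobs, job_fences,
                                   dma_fence_context_alloc(1), 1, false);
    if (!array)
        return NULL;

    return &array->base;
}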

Yeah, that's something we have discussed as well.

How do you prevent the scheduler from over committing to a single ring
buffer in this scenario?


Each job has its own ring; the execbuf IOCTL throttles itself for each
job if there isn't space in the ring. This is exactly the same as
non-parallel submits.

I think this is what you were asking? If not, maybe try explaining the
question a bit more.

Matt


Christian.


Matt


Thanks,
Christian.

Am 19.05.21 um 01:58 schrieb Matthew Brost:

Add entry for i915 new parallel submission uAPI plan.

v2:
(Daniel Vetter):
 - Expand logical order explanation
 - Add dummy header
 - Only allow N BBs in execbuf IOCTL
 - Configure parallel submission per slot not per gem context

Cc: Tvrtko Ursulin 
Cc: Tony Ye 
CC: Carl Zhang 
Cc: Daniel Vetter 
Cc: Jason Ekstrand 
Signed-off-by: Matthew Brost 
---
Documentation/gpu/rfc/i915_parallel_execbuf.h | 144 ++
Documentation/gpu/rfc/i915_scheduler.rst  |  53 ++-
2 files changed,