[Mesa-dev] Linux Graphics Next: Userspace submission update

2021-05-27 Thread Marek Olšák
Hi,

Since Christian believes that, with some changes there, we can't deadlock the
kernel, we just need to make everything nice for userspace too.
Instead of explaining how it will work, I will explain the cases where
future hardware (and its kernel driver) will break existing userspace in
order to protect everybody from deadlocks. Anything that uses implicit sync
will be spared, so X and Wayland will be fine, assuming they don't
import/export fences. Those use cases that do import/export fences might or
might not work, depending on how the fences are used.

One of the necessities is that all fences will become future fences. The
semantics of imported/exported fences will change completely, with new
restrictions on their usage. The restrictions are:


1) Android sync files will be impossible to support, so they won't be
supported (they don't allow future fences).


2) Implicit sync and explicit sync will be mutually exclusive between
processes. A process can either use one or the other, but not both. This is
meant to prevent a deadlock condition with future fences where any process
can malevolently deadlock execution of any other process, even execution of
a higher-privileged process. The kernel will impose the following
restrictions to protect against the deadlock:

a) a process with an implicitly-sync'd imported/exported buffer can't
import/export a fence from/to another process
b) a process with an imported/exported fence can't import/export an
implicitly-sync'd buffer from/to another process
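
A minimal sketch of what that kernel-side enforcement could look like,
assuming a per-DRM-file state that latches on the first cross-process
import/export (all of these names are hypothetical, nothing here is existing
kernel code):

#include <linux/errno.h>

/*
 * Hypothetical per-DRM-file state: latch a sync mode on the first
 * cross-process import/export and reject the opposite mode afterwards.
 */
enum sync_mode {
    SYNC_MODE_NONE,      /* nothing imported/exported yet */
    SYNC_MODE_IMPLICIT,  /* implicitly-sync'd buffers shared with another process */
    SYNC_MODE_EXPLICIT,  /* fences imported from / exported to another process */
};

struct file_sync_state {
    enum sync_mode mode;
};

static int claim_sync_mode(struct file_sync_state *s, enum sync_mode want)
{
    if (s->mode == SYNC_MODE_NONE)
        s->mode = want;
    return s->mode == want ? 0 : -EBUSY;
}

/* Restriction b): called when an implicitly-sync'd buffer crosses processes. */
static int on_implicit_sync_buffer_share(struct file_sync_state *s)
{
    return claim_sync_mode(s, SYNC_MODE_IMPLICIT);
}

/* Restriction a): called when a fence is imported from or exported to
 * another process. */
static int on_fence_import_export(struct file_sync_state *s)
{
    return claim_sync_mode(s, SYNC_MODE_EXPLICIT);
}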

Alternative: A higher-privileged process could enforce both restrictions
instead of the kernel to protect itself from the deadlock, but this would
be a can of worms for existing userspace. It would be better if the kernel
just broke unsafe userspace on future hw, just like sync files.

If both implicit and explicit sync are allowed to occur simultaneously,
sending a future fence that will never signal to any process will deadlock
that process after it acquires the implicit sync lock. That lock is a
sequence number that the process is required to write to memory, and to
signal via a GPU interrupt, within a finite time. This is how the deadlock
can happen:

* The process gets sequence number N from the kernel for an
implicitly-sync'd buffer.
* The process inserts (into the GPU user-mapped queue) a wait for sequence
number N-1.
* The process inserts a wait for a fence, not knowing that the fence will
never signal ==> deadlock.
...
* The process inserts a command to write sequence number N to a
predetermined memory location. (which will make the buffer idle and send an
interrupt to the kernel)
...
* The kernel will terminate the process because it never received the
interrupt (i.e. a less-privileged process has just caused a more-privileged
process to be killed).

The termination is caused by the implicit-sync interrupt that never arrived,
and the only way another process can cause that is by sending a fence that
will never signal. Thus, importing/exporting fences from/to other processes
can't be allowed simultaneously with implicit sync.
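
Written out as pseudo-C, the queued commands from the sequence above would
look roughly like this (the user_queue/gpu_fence types and the queue_*
helpers are hypothetical stand-ins for commands written into the GPU
user-mapped queue, not an existing API):

#include <stdint.h>

struct user_queue;
struct gpu_fence;

void queue_wait_seqno(struct user_queue *q, uint64_t seqno);
void queue_wait_fence(struct user_queue *q, struct gpu_fence *fence);
void queue_emit_work(struct user_queue *q);
void queue_signal_seqno(struct user_queue *q, uint64_t seqno);

/* The deadlocking sequence above: N is handed out by the kernel for the
 * implicitly-sync'd buffer, "imported" was received from another process. */
void submit_with_implicit_and_explicit_sync(struct user_queue *q, uint64_t n,
                                            struct gpu_fence *imported)
{
    queue_wait_seqno(q, n - 1);    /* wait for the previous buffer owner */
    queue_wait_fence(q, imported); /* never signals ==> the GPU stalls here */
    queue_emit_work(q);
    queue_signal_seqno(q, n);      /* N is never written and no interrupt is
                                      sent, so the kernel times out and
                                      terminates the process */
}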


3) Compositors (and other privileged processes, and display flipping) can't
trust imported/exported fences. They need a timeout recovery mechanism from
the beginning, and the following are some possible solutions to timeouts:

a) use a CPU wait with a small absolute timeout, and display the previous
content on timeout
b) use a GPU wait with a small absolute timeout, and conditional rendering
will choose between the latest content (if signalled) and previous content
(if timed out)

The result would be that the desktop can run close to 60 fps even if an app
runs at 1 fps.
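
A minimal sketch of option a), assuming the imported fence is carried as a
DRM syncobj (the transport isn't specified here) and a budget of a few
milliseconds:

#include <stdint.h>
#include <time.h>
#include <xf86drm.h>

struct buffer; /* compositor's buffer abstraction, hypothetical */

struct buffer *pick_buffer_for_flip(int drm_fd, uint32_t syncobj,
                                    struct buffer *latest,
                                    struct buffer *previous)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);

    /* ~4 ms budget: long enough for a healthy client, short enough that the
     * compositor itself stays at refresh rate. */
    int64_t deadline = (int64_t)ts.tv_sec * 1000000000ll +
                       ts.tv_nsec + 4000000ll;

    int ret = drmSyncobjWait(drm_fd, &syncobj, 1, deadline,
                             DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT, NULL);

    /* On timeout (or error) keep showing the previous content instead of
     * stalling the whole desktop on a fence that may never signal. */
    return ret == 0 ? latest : previous;
}

If the wait times out, the compositor keeps flipping the previous buffer and
the misbehaving client only hurts itself.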

*Redefining imported/exported fences and breaking some users/OSs is the
only way to have userspace GPU command submission, and the deadlock example
here is the counterexample proving that there is no other way.*

So, what are the chances this is going to fly with the ecosystem?

Thanks,
Marek
___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev


Re: [Mesa-dev] [Intel-gfx] [RFC PATCH 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan

2021-05-27 Thread Tvrtko Ursulin


On 27/05/2021 00:33, Matthew Brost wrote:

Add entry for i915 new parallel submission uAPI plan.

v2:
  (Daniel Vetter):
   - Expand logical order explaination
   - Add dummy header
   - Only allow N BBs in execbuf IOCTL
   - Configure parallel submission per slot not per gem context
v3:
  (Marcin Ślusarz):
   - Lot's of typos / bad english fixed
  (Tvrtko Ursulin):
   - Consistent pseudo code, clean up wording in descriptions

Cc: Tvrtko Ursulin 
Cc: Tony Ye 
CC: Carl Zhang 
Cc: Daniel Vetter 
Cc: Jason Ekstrand 
Signed-off-by: Matthew Brost 
---
  Documentation/gpu/rfc/i915_parallel_execbuf.h | 145 ++
  Documentation/gpu/rfc/i915_scheduler.rst  |  55 ++-
  2 files changed, 198 insertions(+), 2 deletions(-)
  create mode 100644 Documentation/gpu/rfc/i915_parallel_execbuf.h

diff --git a/Documentation/gpu/rfc/i915_parallel_execbuf.h 
b/Documentation/gpu/rfc/i915_parallel_execbuf.h
new file mode 100644
index ..20de206e3ab4
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_parallel_execbuf.h
@@ -0,0 +1,145 @@
+#define I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT 2 /* see 
i915_context_engines_parallel_submit */
+
+/*
+ * i915_context_engines_parallel_submit:
+ *
+ * Setup a slot in the context engine map to allow multiple BBs to be submitted
+ * in a single execbuf IOCTL. Those BBs will then be scheduled to run on the 
GPU
+ * in parallel. Multiple hardware contexts are created internally in the i915
+ * run these BBs. Once a slot is configured for N BBs only N BBs can be
+ * submitted in each execbuf IOCTL and this is implicit behavior e.g. The user


"i.e. the user" I think.


+ * doesn't tell the execbuf IOCTL there are N BBs, the execbuf IOCTL know how


knows


+ * many BBs there are based on the slots configuration. The N BBs are the last 
N


"slot's" ? Or maybe better fully qualified as "engine map slot"?


+ * buffer objects for first N if I915_EXEC_BATCH_FIRST is set.


s/for/or/


+ *
+ * There are two currently defined ways to control the placement of the
+ * hardware contexts on physical engines: default behavior (no flags) and
+ * I915_PARALLEL_IMPLICIT_BONDS (a flag). More flags may be added the in the
+ * future as new hardware / use cases arise. Details of how to use this
+ * interface above the flags field in this structure.


"are above", "can be found above" ?


+ *
+ * Returns -EINVAL if hardware context placement configuration is invalid or if
+ * the placement configuration isn't supported on the platform / submission
+ * interface.
+ * Returns -ENODEV if extension isn't supported on the platform / submission
+ * inteface.
+ */
+struct i915_context_engines_parallel_submit {
+   struct i915_user_extension base;
+
+   __u16 engine_index; /* slot for parallel engine */
+   __u16 width;/* number of contexts per parallel engine */
+   __u16 num_siblings; /* number of siblings per context */
+   __u16 mbz16;
+/*
+ * Default placement behavior (currently unsupported):
+ *
+ * Allow BBs to be placed on any available engine instance. In this case each
+ * context's engine mask indicates where that context can be placed. It is


Context does not have an engine mask in the uAPI, so I'd explain this in
terms of a list of engines.



+ * implied in this mode that all contexts have mutual exclusive placement.


mutually


+ * e.g. If one context is running CSX[0] no other contexts can run on CSX[0]).


s/If/if, I think.

I don't find CSX[0] readable, nor do I think introducing csx as a term is 
desirable. Lowercase cs (like cs0) is what I would prefer, both for 
readability and because cs = command streamer is likely more obvious. Naming 
like rcs0 (prefix-cs-number) and similar is exposed in kernel logs and error 
state, while csx[] (cs-suffix-number) is not at all.


In placement examples I would refrain from using any prefixes/suffixes 
to designate engine classes and just say cs0/cs1, mentioning the 
same/mixed class requirement separately in the notes if applicable.



+ *
+ * Example 1 pseudo code:
+ * CSX,Y[N] = generic engine class X or Y, logical instance N
+ * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
+ * set_engines(INVALID)
+ * set_parallel(engine_index=0, width=2, num_siblings=2,
+ * engines=CSX[0],CSX[1],CSY[0],CSY[1])
+ *
+ * Results in the following valid placements:
+ * CSX[0], CSY[0]
+ * CSX[0], CSY[1]
+ * CSX[1], CSY[0]
+ * CSX[1], CSY[1]
+ *
+ * This can also be thought of as 2 virtual engines described by 2-D array in
+ * the engines the field:


One "the" too many.


+ * VE[0] = CSX[0], CSX[1]
+ * VE[1] = CSY[0], CSY[1]
+ *
+ * Example 2 pseudo code:
+ * CSX[Y] = generic engine of same class X, logical instance N
+ * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
+ * set_engines(INVALID)
+ * set_parallel(engine_index=0, width=2, num_siblings=3,
+ * engines=CSX[0],CSX[1],CSX[2],CSX[0],CSX[1],CSX[2])
+ *
+ * Results in the following valid 

Re: [Mesa-dev] [Intel-gfx] [RFC PATCH 1/2] drm/doc/rfc: i915 GuC submission / DRM scheduler

2021-05-27 Thread Daniel Vetter
On Thu, May 27, 2021 at 11:06:38AM +0100, Tvrtko Ursulin wrote:
> 
> On 27/05/2021 00:33, Matthew Brost wrote:
> > Add entry for i915 GuC submission / DRM scheduler integration plan.
> > Follow up patch with details of new parallel submission uAPI to come.
> > 
> > v2:
> >   (Daniel Vetter)
> >- Expand explaination of why bonding isn't supported for GuC
> >  submission
> >- CC some of the DRM scheduler maintainers
> >- Add priority inheritance / boosting use case
> >- Add reasoning for removing in order assumptions
> >   (Daniel Stone)
> >- Add links to priority spec
> 
> Where will the outstanding items like, from the top of my head only, error
> capture and open source logging tool be tracked? I thought here but maybe
> not.

I thought the same that we'd put these really important bits into the
rfc/todo here. Matt, can you pls do that?
-Daniel

> 
> Regards,
> 
> Tvrtko
> 
> > Cc: Christian König 
> > Cc: Luben Tuikov 
> > Cc: Alex Deucher 
> > Cc: Steven Price 
> > Cc: Jon Bloomfield 
> > Cc: Jason Ekstrand 
> > Cc: Dave Airlie 
> > Cc: Daniel Vetter 
> > Cc: Jason Ekstrand 
> > Cc: dri-de...@lists.freedesktop.org
> > Signed-off-by: Matthew Brost 
> > ---
> >   Documentation/gpu/rfc/i915_scheduler.rst | 85 
> >   Documentation/gpu/rfc/index.rst  |  4 ++
> >   2 files changed, 89 insertions(+)
> >   create mode 100644 Documentation/gpu/rfc/i915_scheduler.rst
> > 
> > diff --git a/Documentation/gpu/rfc/i915_scheduler.rst 
> > b/Documentation/gpu/rfc/i915_scheduler.rst
> > new file mode 100644
> > index ..7faa46cde088
> > --- /dev/null
> > +++ b/Documentation/gpu/rfc/i915_scheduler.rst
> > @@ -0,0 +1,85 @@
> > +=========================================
> > +I915 GuC Submission/DRM Scheduler Section
> > +=========================================
> > +
> > +Upstream plan
> > +=============
> > +For upstream the overall plan for landing GuC submission and integrating 
> > the
> > +i915 with the DRM scheduler is:
> > +
> > +* Merge basic GuC submission
> > +   * Basic submission support for all gen11+ platforms
> > +   * Not enabled by default on any current platforms but can be enabled via
> > + modparam enable_guc
> > +   * Lots of rework will need to be done to integrate with DRM scheduler so
> > + no need to nit pick everything in the code, it just should be
> > + functional, no major coding style / layering errors, and not regress
> > + execlists
> > +   * Update IGTs / selftests as needed to work with GuC submission
> > +   * Enable CI on supported platforms for a baseline
> > +   * Rework / get CI heathly for GuC submission in place as needed
> > +* Merge new parallel submission uAPI
> > +   * Bonding uAPI completely incompatible with GuC submission, plus it has
> > + severe design issues in general, which is why we want to retire it no
> > + matter what
> > +   * New uAPI adds I915_CONTEXT_ENGINES_EXT_PARALLEL context setup step
> > + which configures a slot with N contexts
> > +   * After I915_CONTEXT_ENGINES_EXT_PARALLEL a user can submit N batches to
> > + a slot in a single execbuf IOCTL and the batches run on the GPU in
> > + paralllel
> > +   * Initially only for GuC submission but execlists can be supported if
> > + needed
> > +* Convert the i915 to use the DRM scheduler
> > +   * GuC submission backend fully integrated with DRM scheduler
> > +   * All request queues removed from backend (e.g. all backpressure
> > + handled in DRM scheduler)
> > +   * Resets / cancels hook in DRM scheduler
> > +   * Watchdog hooks into DRM scheduler
> > +   * Lots of complexity of the GuC backend can be pulled out once
> > + integrated with DRM scheduler (e.g. state machine gets
> > + simplier, locking gets simplier, etc...)
> > +   * Execlist backend will do the minimum required to hook in the DRM
> > + scheduler so it can live next to the fully integrated GuC backend
> > +   * Legacy interface
> > +   * Features like timeslicing / preemption / virtual engines would
> > + be difficult to integrate with the DRM scheduler and these
> > + features are not required for GuC submission as the GuC does
> > + these things for us
> > +   * ROI low on fully integrating into DRM scheduler
> > +   * Fully integrating would add lots of complexity to DRM
> > + scheduler
> > +   * Port i915 priority inheritance / boosting feature in DRM scheduler
> > +   * Used for i915 page flip, may be useful to other DRM drivers as
> > + well
> > +   * Will be an optional feature in the DRM scheduler
> > +   * Remove in-order completion assumptions from DRM scheduler
> > +   * Even when using the DRM scheduler the backends will handle
> > + preemption, timeslicing, etc... so it is possible for jobs to
> > + finish out of orde

Re: [Mesa-dev] [Intel-gfx] [RFC PATCH 1/2] drm/doc/rfc: i915 GuC submission / DRM scheduler

2021-05-27 Thread Tvrtko Ursulin


On 27/05/2021 00:33, Matthew Brost wrote:

Add entry for i915 GuC submission / DRM scheduler integration plan.
Follow up patch with details of new parallel submission uAPI to come.

v2:
  (Daniel Vetter)
   - Expand explaination of why bonding isn't supported for GuC
 submission
   - CC some of the DRM scheduler maintainers
   - Add priority inheritance / boosting use case
   - Add reasoning for removing in order assumptions
  (Daniel Stone)
   - Add links to priority spec


Where will the outstanding items like, from the top of my head only, 
error capture and open source logging tool be tracked? I thought here 
but maybe not.


Regards,

Tvrtko


Cc: Christian König 
Cc: Luben Tuikov 
Cc: Alex Deucher 
Cc: Steven Price 
Cc: Jon Bloomfield 
Cc: Jason Ekstrand 
Cc: Dave Airlie 
Cc: Daniel Vetter 
Cc: Jason Ekstrand 
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: Matthew Brost 
---
  Documentation/gpu/rfc/i915_scheduler.rst | 85 
  Documentation/gpu/rfc/index.rst  |  4 ++
  2 files changed, 89 insertions(+)
  create mode 100644 Documentation/gpu/rfc/i915_scheduler.rst

diff --git a/Documentation/gpu/rfc/i915_scheduler.rst 
b/Documentation/gpu/rfc/i915_scheduler.rst
new file mode 100644
index ..7faa46cde088
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_scheduler.rst
@@ -0,0 +1,85 @@
+=========================================
+I915 GuC Submission/DRM Scheduler Section
+=========================================
+
+Upstream plan
+=============
+For upstream the overall plan for landing GuC submission and integrating the
+i915 with the DRM scheduler is:
+
+* Merge basic GuC submission
+   * Basic submission support for all gen11+ platforms
+   * Not enabled by default on any current platforms but can be enabled via
+ modparam enable_guc
+   * Lots of rework will need to be done to integrate with DRM scheduler so
+ no need to nit pick everything in the code, it just should be
+ functional, no major coding style / layering errors, and not regress
+ execlists
+   * Update IGTs / selftests as needed to work with GuC submission
+   * Enable CI on supported platforms for a baseline
+   * Rework / get CI heathly for GuC submission in place as needed
+* Merge new parallel submission uAPI
+   * Bonding uAPI completely incompatible with GuC submission, plus it has
+ severe design issues in general, which is why we want to retire it no
+ matter what
+   * New uAPI adds I915_CONTEXT_ENGINES_EXT_PARALLEL context setup step
+ which configures a slot with N contexts
+   * After I915_CONTEXT_ENGINES_EXT_PARALLEL a user can submit N batches to
+ a slot in a single execbuf IOCTL and the batches run on the GPU in
+ paralllel
+   * Initially only for GuC submission but execlists can be supported if
+ needed
+* Convert the i915 to use the DRM scheduler
+   * GuC submission backend fully integrated with DRM scheduler
+   * All request queues removed from backend (e.g. all backpressure
+ handled in DRM scheduler)
+   * Resets / cancels hook in DRM scheduler
+   * Watchdog hooks into DRM scheduler
+   * Lots of complexity of the GuC backend can be pulled out once
+ integrated with DRM scheduler (e.g. state machine gets
+ simplier, locking gets simplier, etc...)
+   * Execlist backend will do the minimum required to hook in the DRM
+ scheduler so it can live next to the fully integrated GuC backend
+   * Legacy interface
+   * Features like timeslicing / preemption / virtual engines would
+ be difficult to integrate with the DRM scheduler and these
+ features are not required for GuC submission as the GuC does
+ these things for us
+   * ROI low on fully integrating into DRM scheduler
+   * Fully integrating would add lots of complexity to DRM
+ scheduler
+   * Port i915 priority inheritance / boosting feature in DRM scheduler
+   * Used for i915 page flip, may be useful to other DRM drivers as
+ well
+   * Will be an optional feature in the DRM scheduler
+   * Remove in-order completion assumptions from DRM scheduler
+   * Even when using the DRM scheduler the backends will handle
+ preemption, timeslicing, etc... so it is possible for jobs to
+ finish out of order
+   * Pull out i915 priority levels and use DRM priority levels
+   * Optimize DRM scheduler as needed
+
+New uAPI for basic GuC submission
+=================================
+No major changes are required to the uAPI for basic GuC submission. The only
+change is a new scheduler attribute: I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP.
+This attribute indicates the 2k i915 user priority level
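
A hedged sketch of how userspace could check for the new attribute, assuming
it is reported through the existing I915_PARAM_HAS_SCHEDULER capability
bitmask; the bit value guarded below is an assumption at this RFC stage, not
taken from the patch:

#include <stdbool.h>
#include <xf86drm.h>
#include <i915_drm.h>

/* Assumed bit value; the header may not define it yet at this RFC stage. */
#ifndef I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP
#define I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP (1ul << 5)
#endif

static bool has_static_priority_map(int fd)
{
    int caps = 0;
    struct drm_i915_getparam gp = {
        .param = I915_PARAM_HAS_SCHEDULER,
        .value = &caps,
    };

    /* Older kernels report no scheduler capabilities at all. */
    if (drmIoctl(fd, DRM_IOCTL_I915_GETPARAM, &gp))
        return false;

    return caps & I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP;
}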