On 09.11.2017 14:45, Nicolai Hähnle wrote:
From: Nicolai Hähnle <nicolai.haeh...@amd.com>

Having the gallium driver thread flush in the background should be
sufficient for glFlush semantics.

Various end-of-frame flushes (from st_context_flush and st/dri) still
use a synchronous flush. We should eventually be able to transition
those to asynchronous flushes as well by passing fences explicitly
via the X protocol.

Thanks for the reviews, folks.

This last patch causes a non-deterministic regression in dEQP-EGL.functional.image.render_multiple_contexts.gles2_*_read_pixels

I'm pretty sure that's a test bug, and I filed an issue on Khronos' internal bug tracker (https://gitlab.khronos.org/Tracker/vk-gl-cts/issues/857). Still, I'll hold this particular patch for now, and only push the first three after some more double-checking.

Cheers,
Nicolai


---
  src/mesa/state_tracker/st_cb_flush.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/mesa/state_tracker/st_cb_flush.c b/src/mesa/state_tracker/st_cb_flush.c
index 14bfd5a4684..5f4e2ac3cc1 100644
--- a/src/mesa/state_tracker/st_cb_flush.c
+++ b/src/mesa/state_tracker/st_cb_flush.c
@@ -81,21 +81,21 @@ void st_finish( struct st_context *st )
   */
  static void st_glFlush(struct gl_context *ctx)
  {
     struct st_context *st = st_context(ctx);
      /* Don't call st_finish() here. It is not the state tracker's
       * responsibilty to inject sleeps in the hope of avoiding buffer
       * synchronization issues.  Calling finish() here will just hide
       * problems that need to be fixed elsewhere.
       */
-   st_flush(st, NULL, 0);
+   st_flush(st, NULL, PIPE_FLUSH_ASYNC);
    st_manager_flush_frontbuffer(st);
  }

  /**
   * Called via ctx->Driver.Finish()
   */
  static void st_glFinish(struct gl_context *ctx)
  {



--
Learn how the world really is,
but never forget how it should be.
_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev
