Re: [Mesa-dev] [PATCH v3 23/25] panfrost: Remove unneeded add_bo() in initialize_surface()

2019-09-05 Thread Boris Brezillon
On Thu, 5 Sep 2019 19:28:04 -0400
Alyssa Rosenzweig  wrote:

> Ah, ignore my previous comment. Could we squash this into the patch that
> added the PAN_SHARED_BO_RW define?

Absolutely (I don't know why I did that separately).

> 
> On Thu, Sep 05, 2019 at 09:41:48PM +0200, Boris Brezillon wrote:
> > Should already be added in panfrost_draw_vbo() and panfrost_clear(),
> > no need to add it here too.
> > 
> > Signed-off-by: Boris Brezillon 
> > ---
> >  src/gallium/drivers/panfrost/pan_fragment.c | 3 ---
> >  1 file changed, 3 deletions(-)
> > 
> > diff --git a/src/gallium/drivers/panfrost/pan_fragment.c 
> > b/src/gallium/drivers/panfrost/pan_fragment.c
> > index cbb95b79f52a..00ff363a1bba 100644
> > --- a/src/gallium/drivers/panfrost/pan_fragment.c
> > +++ b/src/gallium/drivers/panfrost/pan_fragment.c
> > @@ -42,9 +42,6 @@ panfrost_initialize_surface(
> >  struct panfrost_resource *rsrc = pan_resource(surf->texture);
> >  
> >  rsrc->slices[level].initialized = true;
> > -
> > -assert(rsrc->bo);
> > -panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_RW);
> >  }
> >  
> >  /* Generate a fragment job. This should be called once per frame. 
> > (According to
> > -- 
> > 2.21.0  


Re: [Mesa-dev] [PATCH] llvmpipe: fix CALLOC vs. free mismatches

2019-09-05 Thread Jose Fonseca
Reviewed-by: Jose Fonseca 


From: srol...@vmware.com 
Sent: Friday, September 6, 2019 03:13
To: Jose Fonseca ; airl...@redhat.com ; mesa-dev@lists.freedesktop.org 
Cc: Roland Scheidegger 
Subject: [PATCH] llvmpipe: fix CALLOC vs. free mismatches

From: Roland Scheidegger 

Should fix some issues we're seeing. And use REALLOC instead of realloc.
---
 src/gallium/drivers/llvmpipe/lp_cs_tpool.c | 6 +++---
 src/gallium/drivers/llvmpipe/lp_state_cs.c | 3 ++-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/src/gallium/drivers/llvmpipe/lp_cs_tpool.c 
b/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
index 04495727e1c..6f1b4e2ee55 100644
--- a/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
+++ b/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
@@ -65,7 +65,7 @@ lp_cs_tpool_worker(void *data)
  cnd_broadcast(&task->finish);
}
mtx_unlock(&pool->m);
-   free(lmem.local_mem_ptr);
+   FREE(lmem.local_mem_ptr);
return 0;
 }

@@ -105,7 +105,7 @@ lp_cs_tpool_destroy(struct lp_cs_tpool *pool)

cnd_destroy(&pool->new_work);
mtx_destroy(&pool->m);
-   free(pool);
+   FREE(pool);
 }

 struct lp_cs_tpool_task *
@@ -148,6 +148,6 @@ lp_cs_tpool_wait_for_task(struct lp_cs_tpool *pool,
mtx_unlock(&pool->m);

cnd_destroy(&task->finish);
-   free(task);
+   FREE(task);
*task_handle = NULL;
 }
diff --git a/src/gallium/drivers/llvmpipe/lp_state_cs.c 
b/src/gallium/drivers/llvmpipe/lp_state_cs.c
index 1645a185cb2..a26cbf4df22 100644
--- a/src/gallium/drivers/llvmpipe/lp_state_cs.c
+++ b/src/gallium/drivers/llvmpipe/lp_state_cs.c
@@ -1123,8 +1123,9 @@ cs_exec_fn(void *init_data, int iter_idx, struct 
lp_cs_local_mem *lmem)
memset(&thread_data, 0, sizeof(thread_data));

if (lmem->local_size < job_info->req_local_mem) {
+  lmem->local_mem_ptr = REALLOC(lmem->local_mem_ptr, lmem->local_size,
+job_info->req_local_mem);
   lmem->local_size = job_info->req_local_mem;
-  lmem->local_mem_ptr = realloc(lmem->local_mem_ptr, lmem->local_size);
}
thread_data.shared = lmem->local_mem_ptr;

--
2.17.1


Re: [Mesa-dev] [PATCH] llvmpipe: fix CALLOC vs. free mismatches

2019-09-05 Thread Dave Airlie
On Fri, 6 Sep 2019 at 12:13,  wrote:
>
> From: Roland Scheidegger 
>
> Should fix some issues we're seeing. And use REALLOC instead of realloc.

Oops sorry

Reviewed-by: Dave Airlie 
> ---
>  src/gallium/drivers/llvmpipe/lp_cs_tpool.c | 6 +++---
>  src/gallium/drivers/llvmpipe/lp_state_cs.c | 3 ++-
>  2 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/src/gallium/drivers/llvmpipe/lp_cs_tpool.c 
> b/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
> index 04495727e1c..6f1b4e2ee55 100644
> --- a/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
> +++ b/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
> @@ -65,7 +65,7 @@ lp_cs_tpool_worker(void *data)
>   cnd_broadcast(&task->finish);
> }
> mtx_unlock(&pool->m);
> -   free(lmem.local_mem_ptr);
> +   FREE(lmem.local_mem_ptr);
> return 0;
>  }
>
> @@ -105,7 +105,7 @@ lp_cs_tpool_destroy(struct lp_cs_tpool *pool)
>
> cnd_destroy(&pool->new_work);
> mtx_destroy(&pool->m);
> -   free(pool);
> +   FREE(pool);
>  }
>
>  struct lp_cs_tpool_task *
> @@ -148,6 +148,6 @@ lp_cs_tpool_wait_for_task(struct lp_cs_tpool *pool,
> mtx_unlock(&pool->m);
>
> cnd_destroy(&task->finish);
> -   free(task);
> +   FREE(task);
> *task_handle = NULL;
>  }
> diff --git a/src/gallium/drivers/llvmpipe/lp_state_cs.c 
> b/src/gallium/drivers/llvmpipe/lp_state_cs.c
> index 1645a185cb2..a26cbf4df22 100644
> --- a/src/gallium/drivers/llvmpipe/lp_state_cs.c
> +++ b/src/gallium/drivers/llvmpipe/lp_state_cs.c
> @@ -1123,8 +1123,9 @@ cs_exec_fn(void *init_data, int iter_idx, struct 
> lp_cs_local_mem *lmem)
> memset(&thread_data, 0, sizeof(thread_data));
>
> if (lmem->local_size < job_info->req_local_mem) {
> +  lmem->local_mem_ptr = REALLOC(lmem->local_mem_ptr, lmem->local_size,
> +job_info->req_local_mem);
>lmem->local_size = job_info->req_local_mem;
> -  lmem->local_mem_ptr = realloc(lmem->local_mem_ptr, lmem->local_size);
> }
> thread_data.shared = lmem->local_mem_ptr;
>
> --
> 2.17.1
>

[Mesa-dev] [PATCH] llvmpipe: fix CALLOC vs. free mismatches

2019-09-05 Thread sroland
From: Roland Scheidegger 

Should fix some issues we're seeing. And use REALLOC instead of realloc.
---
 src/gallium/drivers/llvmpipe/lp_cs_tpool.c | 6 +++---
 src/gallium/drivers/llvmpipe/lp_state_cs.c | 3 ++-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/src/gallium/drivers/llvmpipe/lp_cs_tpool.c 
b/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
index 04495727e1c..6f1b4e2ee55 100644
--- a/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
+++ b/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
@@ -65,7 +65,7 @@ lp_cs_tpool_worker(void *data)
  cnd_broadcast(&task->finish);
}
mtx_unlock(&pool->m);
-   free(lmem.local_mem_ptr);
+   FREE(lmem.local_mem_ptr);
return 0;
 }
 
@@ -105,7 +105,7 @@ lp_cs_tpool_destroy(struct lp_cs_tpool *pool)
 
cnd_destroy(&pool->new_work);
mtx_destroy(&pool->m);
-   free(pool);
+   FREE(pool);
 }
 
 struct lp_cs_tpool_task *
@@ -148,6 +148,6 @@ lp_cs_tpool_wait_for_task(struct lp_cs_tpool *pool,
mtx_unlock(&pool->m);
 
cnd_destroy(&task->finish);
-   free(task);
+   FREE(task);
*task_handle = NULL;
 }
diff --git a/src/gallium/drivers/llvmpipe/lp_state_cs.c 
b/src/gallium/drivers/llvmpipe/lp_state_cs.c
index 1645a185cb2..a26cbf4df22 100644
--- a/src/gallium/drivers/llvmpipe/lp_state_cs.c
+++ b/src/gallium/drivers/llvmpipe/lp_state_cs.c
@@ -1123,8 +1123,9 @@ cs_exec_fn(void *init_data, int iter_idx, struct 
lp_cs_local_mem *lmem)
memset(&thread_data, 0, sizeof(thread_data));
 
if (lmem->local_size < job_info->req_local_mem) {
+  lmem->local_mem_ptr = REALLOC(lmem->local_mem_ptr, lmem->local_size,
+job_info->req_local_mem);
   lmem->local_size = job_info->req_local_mem;
-  lmem->local_mem_ptr = realloc(lmem->local_mem_ptr, lmem->local_size);
}
thread_data.shared = lmem->local_mem_ptr;
 
-- 
2.17.1
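
For context, a minimal sketch of the pairing rule behind this fix, assuming the
usual util/u_memory.h behaviour (in debug builds MALLOC/CALLOC/REALLOC/FREE
route through debug_malloc() and friends, which prepend a tracking header that
libc free()/realloc() know nothing about):

    #include "util/u_memory.h"

    static void
    resize_example(void)
    {
       /* Allocated through the wrapper, so it must be released through
        * the wrapper as well. */
       int *data = CALLOC(16, sizeof(*data));
       size_t size = 16 * sizeof(*data);

       /* REALLOC() takes the old size as well as the new one, which is
        * why the lp_state_cs.c hunk calls it before local_size is
        * updated. */
       data = REALLOC(data, size, 2 * size);
       size = 2 * size;

       FREE(data);   /* matches CALLOC; a plain free(data) would trip the
                      * debug allocator's bookkeeping */
    }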


Re: [Mesa-dev] Enabling freedreno CI in Mesa MRs

2019-09-05 Thread Rob Clark
On Wed, Sep 4, 2019 at 1:42 PM Eric Anholt  wrote:
>
> If you haven't seen this MR:
>
> https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1632
>
> I feel ready to enable CI of freedreno on Mesa MRs.  There are some docs
> here:
>
> https://gitlab.freedesktop.org/mesa/mesa/blob/e81a2d3b40240651f506a2a5afeb989792b3dc0e/.gitlab-ci/README.md
>
> Once we merge this, this will greatly increase Mesa's pre-merge CI
> coverage on MRs by getting us up to GLES3.1 going through the CTS.  Once
> krh is ready to put up an in-progress MR of tess, we can override the
> GLES3.1 run to force-enable 3.2 with the remaining tess issues as
> expected fails, and get a whole lot more API coverage.
>
> As far as stability of this CI, I've been through I think an order of
> magnitude more runs of the CI than are visible from that MR, and I'm
> pretty sure we've got a stable set of tests now -- I'm currently working
> on fixing the flappy tests so we can drop the a630-specific skip list.
> The lab has also been up for long enough that I'm convinced the HW is
> stable enough to subject you all to it.

I won't claim to be an unbiased observer, but I'm pretty excited about
this.  This has been in the works for a while, and I think it is to
the point where we aren't going to get much more useful testing of our
gitlab runners with it living off on a branch, so at some point you
just have to throw the switch.

I'd propose that, unless there are any objections, we land this Monday
morning (PST) on master, to ensure a relatively short turn-around just
in case something goes badly.

(I can be online(ish) over the weekend if we want to throw the switch
sooner.. but I might be AFK here and there to get groceries and things
like that.  So response time might be a bit longer than on a week
day.)

Objections anyone?  Or counter-proposals?

BR,
-R

> Once this is merged, please @anholt me on your MRs if you find spurious
> failures in freedreno so I can go either disable those tests or fix
> them.
>
> For some info on how I set up my DUTs, see
> https://gitlab.freedesktop.org/anholt/mesa/wikis/db410c-setup for
> starting from a pretty normal debian buster rootfs.  I'd love to work
> with anyone on replicating this style of CI for your own hardware lab if
> you're interested, or hooking pre-merge gitlab CI up to your existing CI
> lab if you can make it public-access (panfrost?  Intel's CI?)

Re: [Mesa-dev] [PATCH v3 23/25] panfrost: Remove unneeded add_bo() in initialize_surface()

2019-09-05 Thread Alyssa Rosenzweig
Ah, ignore my previous comment. Could we squash this into the patch that
added the PAN_SHARED_BO_RW define?

On Thu, Sep 05, 2019 at 09:41:48PM +0200, Boris Brezillon wrote:
> Should already be added in panfrost_draw_vbo() and panfrost_clear(),
> no need to add it here too.
> 
> Signed-off-by: Boris Brezillon 
> ---
>  src/gallium/drivers/panfrost/pan_fragment.c | 3 ---
>  1 file changed, 3 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/pan_fragment.c 
> b/src/gallium/drivers/panfrost/pan_fragment.c
> index cbb95b79f52a..00ff363a1bba 100644
> --- a/src/gallium/drivers/panfrost/pan_fragment.c
> +++ b/src/gallium/drivers/panfrost/pan_fragment.c
> @@ -42,9 +42,6 @@ panfrost_initialize_surface(
>  struct panfrost_resource *rsrc = pan_resource(surf->texture);
>  
>  rsrc->slices[level].initialized = true;
> -
> -assert(rsrc->bo);
> -panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_RW);
>  }
>  
>  /* Generate a fragment job. This should be called once per frame. (According 
> to
> -- 
> 2.21.0

Re: [Mesa-dev] [PATCH v3 21/25] panfrost: Add new helpers to describe job dependencies on BOs

2019-09-05 Thread Alyssa Rosenzweig
> --- a/src/gallium/drivers/panfrost/pan_fragment.c
> +++ b/src/gallium/drivers/panfrost/pan_fragment.c
> @@ -44,7 +44,7 @@ panfrost_initialize_surface(
>  rsrc->slices[level].initialized = true;
>  
>  assert(rsrc->bo);
> -panfrost_batch_add_bo(batch, rsrc->bo);
> +panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_RW);
>  }

This should be write-only. The corresponding read would be iff we're
wallpapering, so add an add_bo with RO in the wallpaper drawing routine.

I don't know if it really matters (since we can only have one write at a
time) but let's be precise.

---

On that note, sometimes we stuff multiple related-but-independent
buffers within a single BO, particularly multiple miplevels/cubemap
faces/etc in one BO.  Hypothetically, it is legal to render to multiple
faces independently at once. In practice, I don't know if this case is
hit; if it is, we can of course split up the resource into per-face BOs.

>  _mesa_hash_table_remove_key(ctx->batches, &batch->key);
> +util_unreference_framebuffer_state(&batch->key);

(Remind me where was the corresponding reference..?)
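
(For the record, the matching reference is most likely the one taken by
util_copy_framebuffer_state() when the batch key is filled in patch 11; it
takes a pipe_surface reference on every attachment, and
util_unreference_framebuffer_state() drops those references again:)

    /* Batch creation (patch 11): takes the surface references... */
    util_copy_framebuffer_state(&batch->key, key);

    /* ...and this hunk drops them when the batch goes away: */
    util_unreference_framebuffer_state(&batch->key);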

> +void panfrost_batch_add_fbo_bos(struct panfrost_batch *batch)
> +{
> +for (unsigned i = 0; i < batch->key.nr_cbufs; ++i) {
> +struct panfrost_resource *rsrc = 
> pan_resource(batch->key.cbufs[i]->texture);
> +panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_RW);
> + }
> +
> +if (batch->key.zsbuf) {
> +struct panfrost_resource *rsrc = 
> pan_resource(batch->key.zsbuf->texture);
> +panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_RW);
> +}
> +}

As per above, these should be write-only. Also, is this a duplicate of
the panfrost_batch_add_bo() in panfrost_initialize_surface? It feels like
it. Which one is dead code?
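
Concretely, the suggestion amounts to something like the sketch below.
PAN_SHARED_BO_WR and PAN_SHARED_BO_RO are placeholder names; only
PAN_SHARED_BO_RW appears in this series:

    /* FB attachments are only written by the batch itself... */
    void panfrost_batch_add_fbo_bos(struct panfrost_batch *batch)
    {
            for (unsigned i = 0; i < batch->key.nr_cbufs; ++i) {
                    struct panfrost_resource *rsrc =
                            pan_resource(batch->key.cbufs[i]->texture);
                    panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_WR);
            }

            if (batch->key.zsbuf) {
                    struct panfrost_resource *rsrc =
                            pan_resource(batch->key.zsbuf->texture);
                    panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_WR);
            }
    }

    /* ...and the read side is only needed when wallpapering, so it would be
     * added from the wallpaper draw path instead: */
    panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_RO);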

Re: [Mesa-dev] [PATCH v3 22/25] panfrost: Delay payloads[].offset_start initialization

2019-09-05 Thread Alyssa Rosenzweig
> panfrost_draw_vbo() Might call the primeconvert/without_prim_restart

s/M/m/ s/prime/prim/ but R-b

Re: [Mesa-dev] [PATCH v3 20/25] panfrost: Prepare things to avoid flushes on FB switch

2019-09-05 Thread Alyssa Rosenzweig
R-b

Re: [Mesa-dev] [PATCH v3 19/25] panfrost: Pass a batch to panfrost_set_value_job()

2019-09-05 Thread Alyssa Rosenzweig
R-b

On Thu, Sep 05, 2019 at 09:41:44PM +0200, Boris Brezillon wrote:
> So we can emit SET_VALUE jobs for a batch that's not currently bound
> to the context.
> 
> Signed-off-by: Boris Brezillon 
> ---
>  src/gallium/drivers/panfrost/pan_scoreboard.c | 6 ++
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/pan_scoreboard.c 
> b/src/gallium/drivers/panfrost/pan_scoreboard.c
> index f0771a2c5b56..f340bb62662e 100644
> --- a/src/gallium/drivers/panfrost/pan_scoreboard.c
> +++ b/src/gallium/drivers/panfrost/pan_scoreboard.c
> @@ -270,7 +270,7 @@ panfrost_scoreboard_queue_fused_job_prepend(
>  /* Generates a set value job, used below as part of TILER job scheduling. */
>  
>  static struct panfrost_transfer
> -panfrost_set_value_job(struct panfrost_context *ctx, mali_ptr polygon_list)
> +panfrost_set_value_job(struct panfrost_batch *batch, mali_ptr polygon_list)
>  {
>  struct mali_job_descriptor_header job = {
>  .job_type = JOB_TYPE_SET_VALUE,
> @@ -282,7 +282,6 @@ panfrost_set_value_job(struct panfrost_context *ctx, 
> mali_ptr polygon_list)
>  .unknown = 0x3,
>  };
>  
> -struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
>  struct panfrost_transfer transfer = 
> panfrost_allocate_transient(batch, sizeof(job) + sizeof(payload));
>  memcpy(transfer.cpu, &job, sizeof(job));
>  memcpy(transfer.cpu + sizeof(job), &payload, sizeof(payload));
> @@ -303,11 +302,10 @@ panfrost_scoreboard_set_value(struct panfrost_batch 
> *batch)
>  /* Okay, we do. Let's generate it. We'll need the job's polygon list
>   * regardless of size. */
>  
> -struct panfrost_context *ctx = batch->ctx;
>  mali_ptr polygon_list = panfrost_batch_get_polygon_list(batch, 0);
>  
>  struct panfrost_transfer job =
> -panfrost_set_value_job(ctx, polygon_list);
> +panfrost_set_value_job(batch, polygon_list);
>  
>  /* Queue it */
>  panfrost_scoreboard_queue_compute_job(batch, job);
> -- 
> 2.21.0

Re: [Mesa-dev] [PATCH v3 18/25] panfrost: Use ctx->wallpaper_batch in panfrost_blit_wallpaper()

2019-09-05 Thread Alyssa Rosenzweig
R-b

On Thu, Sep 05, 2019 at 09:41:43PM +0200, Boris Brezillon wrote:
> We'll soon be able to flush a batch that's not currently bound to the
> context, which means ctx->pipe_framebuffer will not necessarily be the
> FBO targeted by the wallpaper draw. Let's prepare for this case and
> use ctx->wallpaper_batch in panfrost_blit_wallpaper().
> 
> Signed-off-by: Boris Brezillon 
> ---
>  src/gallium/drivers/panfrost/pan_blit.c | 9 +
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/pan_blit.c 
> b/src/gallium/drivers/panfrost/pan_blit.c
> index 4be8c044ee2f..2d44f06227bf 100644
> --- a/src/gallium/drivers/panfrost/pan_blit.c
> +++ b/src/gallium/drivers/panfrost/pan_blit.c
> @@ -105,16 +105,17 @@ panfrost_blit(struct pipe_context *pipe,
>  void
>  panfrost_blit_wallpaper(struct panfrost_context *ctx, struct pipe_box *box)
>  {
> +struct panfrost_batch *batch = ctx->wallpaper_batch;
>  struct pipe_blit_info binfo = { };
>  
>  panfrost_blitter_save(ctx, ctx->blitter_wallpaper);
>  
> -struct pipe_surface *surf = ctx->pipe_framebuffer.cbufs[0];
> +struct pipe_surface *surf = batch->key.cbufs[0];
>  unsigned level = surf->u.tex.level;
>  unsigned layer = surf->u.tex.first_layer;
>  assert(surf->u.tex.last_layer == layer);
>  
> -binfo.src.resource = binfo.dst.resource = 
> ctx->pipe_framebuffer.cbufs[0]->texture;
> +binfo.src.resource = binfo.dst.resource = 
> batch->key.cbufs[0]->texture;
>  binfo.src.level = binfo.dst.level = level;
>  binfo.src.box.x = binfo.dst.box.x = box->x;
>  binfo.src.box.y = binfo.dst.box.y = box->y;
> @@ -123,9 +124,9 @@ panfrost_blit_wallpaper(struct panfrost_context *ctx, 
> struct pipe_box *box)
>  binfo.src.box.height = binfo.dst.box.height = box->height;
>  binfo.src.box.depth = binfo.dst.box.depth = 1;
>  
> -binfo.src.format = binfo.dst.format = 
> ctx->pipe_framebuffer.cbufs[0]->format;
> +binfo.src.format = binfo.dst.format = batch->key.cbufs[0]->format;
>  
> -assert(ctx->pipe_framebuffer.nr_cbufs == 1);
> +assert(batch->key.nr_cbufs == 1);
>  binfo.mask = PIPE_MASK_RGBA;
>  binfo.filter = PIPE_TEX_FILTER_LINEAR;
>  binfo.scissor_enable = FALSE;
> -- 
> 2.21.0

Re: [Mesa-dev] [PATCH v3 17/25] panfrost: Pass a batch to functions emitting FB descs

2019-09-05 Thread Alyssa Rosenzweig
Very happily R-b, this is a good cleanup :)

Re: [Mesa-dev] [PATCH v3 16/25] panfrost: Pass a batch to panfrost_{allocate, upload}_transient()

2019-09-05 Thread Alyssa Rosenzweig
> We need that if we want to emit CMDs to a job that's not currenlty

Nit but s/emit CMDs/upload transient buffers/; s/job/batch/;
s/currenlty/currently/

Midgard/Bifrost don't have commands (cf. Utgard), just descriptors and
data buffers. We just call the stuff we submit with the batch "the
command stream" for familiarity with other drivers.

Aside from that tiny `reword`, r-b

Re: [Mesa-dev] [PATCH v3 15/25] panfrost: Move the batch submission logic to panfrost_batch_submit()

2019-09-05 Thread Alyssa Rosenzweig
> +out:
> +if (ctx->batch == batch)
> +panfrost_invalidate_frame(ctx);

Could you explain the logic a bit? I think the idea is that "if this is
the bound batch, the panfrost_context parameters are relevant so
submitting it invalidates those paramaters, but if it's not bound, the
context parameters are for some other batch so we can't invalidate
them". If so, this is good, just add a comment explaining so.

> +/* We always stall the pipeline for correct results since pipelined
> +  * rendering is quite broken right now (to be fixed by the panfrost_job
> +  * refactor, just take the perf hit for correctness)
> +  */
> +drmSyncobjWait(pan_screen(ctx->base.screen)->fd, &ctx->out_sync, 1,
> +   INT64_MAX, 0, NULL);
> +panfrost_free_batch(batch);

Comment is borked but I think you'll reshuffle this later in the series
so don't bother adjusting; I'm not that much of a pedant for commit
history.

---

Conditional on the added comment, you can make this R-b :)

Re: [Mesa-dev] [PATCH v3 14/25] panfrost: Move the fence creation in panfrost_flush()

2019-09-05 Thread Alyssa Rosenzweig
Reviewed-by: Alyssa Rosenzweig 

On Thu, Sep 05, 2019 at 09:41:39PM +0200, Boris Brezillon wrote:
> panfrost_flush() is about to be reworked to flush all pending batches,
> but we want the fence to block on the last one. Let's move the fence
> creation logic in panfrost_flush() to prepare for this situation.
> 
> Signed-off-by: Boris Brezillon 
> ---
>  src/gallium/drivers/panfrost/pan_context.c | 13 +
>  src/gallium/drivers/panfrost/pan_context.h |  3 +++
>  src/gallium/drivers/panfrost/pan_drm.c | 11 ++-
>  src/gallium/drivers/panfrost/pan_screen.h  |  3 +--
>  4 files changed, 15 insertions(+), 15 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/pan_context.c 
> b/src/gallium/drivers/panfrost/pan_context.c
> index e34f5757b1cf..6552052b8cad 100644
> --- a/src/gallium/drivers/panfrost/pan_context.c
> +++ b/src/gallium/drivers/panfrost/pan_context.c
> @@ -1308,7 +1308,6 @@ panfrost_queue_draw(struct panfrost_context *ctx)
>  
>  static void
>  panfrost_submit_frame(struct panfrost_context *ctx, bool flush_immediate,
> -  struct pipe_fence_handle **fence,
>struct panfrost_batch *batch)
>  {
>  panfrost_batch_submit(batch);
> @@ -1316,14 +1315,14 @@ panfrost_submit_frame(struct panfrost_context *ctx, 
> bool flush_immediate,
>  /* If visual, we can stall a frame */
>  
>  if (!flush_immediate)
> -panfrost_drm_force_flush_fragment(ctx, fence);
> +panfrost_drm_force_flush_fragment(ctx);
>  
>  ctx->last_fragment_flushed = false;
>  ctx->last_batch = batch;
>  
>  /* If readback, flush now (hurts the pipelined performance) */
>  if (flush_immediate)
> -panfrost_drm_force_flush_fragment(ctx, fence);
> +panfrost_drm_force_flush_fragment(ctx);
>  }
>  
>  static void
> @@ -1452,7 +1451,13 @@ panfrost_flush(
>  bool flush_immediate = /*flags & PIPE_FLUSH_END_OF_FRAME*/true;
>  
>  /* Submit the frame itself */
> -panfrost_submit_frame(ctx, flush_immediate, fence, batch);
> +panfrost_submit_frame(ctx, flush_immediate, batch);
> +
> +if (fence) {
> +struct panfrost_fence *f = panfrost_fence_create(ctx);
> +pipe->screen->fence_reference(pipe->screen, fence, NULL);
> +*fence = (struct pipe_fence_handle *)f;
> +}
>  
>  /* Prepare for the next frame */
>  panfrost_invalidate_frame(ctx);
> diff --git a/src/gallium/drivers/panfrost/pan_context.h 
> b/src/gallium/drivers/panfrost/pan_context.h
> index 02552ed23de2..6ad2cc81c781 100644
> --- a/src/gallium/drivers/panfrost/pan_context.h
> +++ b/src/gallium/drivers/panfrost/pan_context.h
> @@ -297,6 +297,9 @@ pan_context(struct pipe_context *pcontext)
>  return (struct panfrost_context *) pcontext;
>  }
>  
> +struct panfrost_fence *
> +panfrost_fence_create(struct panfrost_context *ctx);
> +
>  struct pipe_context *
>  panfrost_create_context(struct pipe_screen *screen, void *priv, unsigned 
> flags);
>  
> diff --git a/src/gallium/drivers/panfrost/pan_drm.c 
> b/src/gallium/drivers/panfrost/pan_drm.c
> index e4b75fad4078..47cec9f39fef 100644
> --- a/src/gallium/drivers/panfrost/pan_drm.c
> +++ b/src/gallium/drivers/panfrost/pan_drm.c
> @@ -109,7 +109,7 @@ panfrost_drm_submit_vs_fs_batch(struct panfrost_batch 
> *batch, bool has_draws)
>  return ret;
>  }
>  
> -static struct panfrost_fence *
> +struct panfrost_fence *
>  panfrost_fence_create(struct panfrost_context *ctx)
>  {
>  struct pipe_context *gallium = (struct pipe_context *) ctx;
> @@ -136,8 +136,7 @@ panfrost_fence_create(struct panfrost_context *ctx)
>  }
>  
>  void
> -panfrost_drm_force_flush_fragment(struct panfrost_context *ctx,
> -  struct pipe_fence_handle **fence)
> +panfrost_drm_force_flush_fragment(struct panfrost_context *ctx)
>  {
>  struct pipe_context *gallium = (struct pipe_context *) ctx;
>  struct panfrost_screen *screen = pan_screen(gallium->screen);
> @@ -149,12 +148,6 @@ panfrost_drm_force_flush_fragment(struct 
> panfrost_context *ctx,
>  /* The job finished up, so we're safe to clean it up now */
>  panfrost_free_batch(ctx->last_batch);
>  }
> -
> -if (fence) {
> -struct panfrost_fence *f = panfrost_fence_create(ctx);
> -gallium->screen->fence_reference(gallium->screen, fence, 
> NULL);
> -*fence = (struct pipe_fence_handle *)f;
> -}
>  }
>  
>  unsigned
> diff --git a/src/gallium/drivers/panfrost/pan_screen.h 
> b/src/gallium/drivers/panfrost/pan_screen.h
> index aab141a563c2..4acdd3572c9f 100644
> --- a/src/gallium/drivers/panfrost/pan_screen.h
> +++ b/src/gallium/drivers/panfrost/pan_screen.h
> @@ -123,8 +123,7 @@ pan_screen(struct pipe_screen *p)
>  int
>  panfrost_drm_submit_vs_fs_batch(struct panfrost_batch 

Re: [Mesa-dev] [PATCH v3 10/25] panfrost: Make sure the BO is 'ready' when picked from the cache

2019-09-05 Thread Alyssa Rosenzweig
> Will document that.
+1

> Evict won't help here as memory will only be released after the jobs
> are done using it. And madvise doesn't help either, for the same reason.

Ah-ha, I understand the distinction; thank you.

> The behavior hasn't changed regarding allocation failures: it's still
> an assert(), so the code is not more or less buggy than it was :p. What
> happens when assert()s are disabled? probably a segfault because of a
> NULL pointer dereference. So, adding the fprintf() is probably a good
> idea as a first step, and then we can see if we can handle the OOM case
> gracefully.

Haha, that's reasonable. I'm wondering if we should try some assert-less
stress test but maybe that doesn't matter until "productization".

> > In short, I'm not convinced this algorithm (specifically the last step)
> > is ideal.
> 
> It really depends on how robust you want to be when the system is under
> memory pressure vs how long you accept to wait. Note that, in the worst
> case scenario we wouldn't wait more than we currently do, as having each
> batch wait on BOs of the previous batch is just like the serialization
> we had in panfrost_flush(). I don't see it as a huge problem, but maybe
> I'm wrong.

Ya, I don't know; these seem like hard problems to say the least :-(

> > If there is no memory left for us, is it responsible to continue at all?
> 
> It's not exactly no memory, it's no immediately available memory.

'fraid I don't know enough about Linux allocators to grok the
distinction.

> > Should we just fail the allocation after step 2, and if the caller has a
> > problem with that, it's their issue? Or we abort here after step 2?
> 
> I think that one is a separate issue. I mean, that's something we have
> to handle even if we go through step 3 and step 3 fails. 
>
> > I
> > don't like the robustness implications but low memory behaviour is a
> > risky subject as it is; I don't want to add more unknowns into it --
> > aborting it with an assert(0) is something we can recognize immediately.
> > Strange crashes in random places with no explanation, less so.
> 
> And that hasn't changed. We still have an assert after step 3.

OK, let's hold off until after this series is merged :)

Re: [Mesa-dev] [ANNOUNCE] mesa 19.2.0-rc2

2019-09-05 Thread Dylan Baker
I've added it to the staging/19.2 branch, thanks.

Dylan

Quoting apinheiro (2019-09-05 01:21:23)
> 
> On 5/9/19 0:57, Dylan Baker wrote:
> 
> Hi List,
> 
> I'd like to announce the availability of mesa-19.2.0-rc2. This is the
> culmination of two weeks worth of work. Due to maintenance the Intel CI 
> is not
> running, but I've built and tested this locally. I would have preferred 
> to get
> more testing, but being two weeks out from -rc1 I wanted to get a release 
> out.
> 
> Dylan
> 
> 
> I would like to nominate the following v3d patch:
> 
> "broadcom/v3d: Allow importing linear BOs with arbitrary offset/stride" [1]
> 
> I already mentioned that patch on the "[Mesa-dev] Mesa 19.2.0 release plan"
> thread, but I forgot to CC mesa-stable. Sorry for that.
> 
> FWIW, the patch fixes the following piglit tests:
> 
>spec/ext_image_dma_buf_import/ext_image_dma_buf_import-sample_nv12
>spec/ext_image_dma_buf_import/ext_image_dma_buf_import-sample_yuv420
>spec/ext_image_dma_buf_import/ext_image_dma_buf_import-sample_yvu420
> 
> [1] https://gitlab.igalia.com/graphics/mesa/commit/
> 873b092e9110a0605293db7bc1c5bcb749cf9a28
> 
> 
> 
> 
> Shortlog:
> 
> 
> Alex Smith (1):
>   radv: Change memory type order for GPUs without dedicated VRAM
> 
> Alyssa Rosenzweig (1):
>   pan/midgard: Fix writeout combining
> 
> Andres Rodriguez (1):
>   radv: additional query fixes
> 
> Bas Nieuwenhuizen (3):
>   radv: Use correct vgpr_comp_cnt for VS if both prim_id and 
> instance_id are needed.
>   radv: Emit VGT_GS_ONCHIP_CNTL for tess on GFX10.
>   radv: Disable NGG for geometry shaders.
> 
> Danylo Piliaiev (1):
>   nir/loop_unroll: Prepare loop for unrolling in wrapper_unroll
> 
> Dave Airlie (2):
>   virgl: fix format conversion for recent gallium changes.
>   gallivm: fix atomic compare-and-swap
> 
> Dylan Baker (1):
>   bump version to 19.2-rc2
> 
> Ian Romanick (7):
>   nir/algrbraic: Don't optimize open-coded bitfield reverse when 
> lowering is enabled
>   intel/compiler: Request bitfield_reverse lowering on pre-Gen7 
> hardware
>   nir/algebraic: Mark some value range analysis-based optimizations 
> imprecise
>   nir/range-analysis: Adjust result range of exp2 to account for 
> flush-to-zero
>   nir/range-analysis: Adjust result range of multiplication to 
> account for flush-to-zero
>   nir/range-analysis: Fix incorrect fadd range result for (ne_zero, 
> ne_zero)
>   nir/range-analysis: Handle constants in nir_op_mov just like 
> nir_op_bcsel
> 
> Ilia Mirkin (1):
>   gallium/vl: use compute preference for all multimedia, not just blit
> 
> Jose Maria Casanova Crespo (1):
>   mesa: recover target_check before get_current_tex_objects
> 
> Kenneth Graunke (15):
>   gallium/ddebug: Wrap resource_get_param if available
>   gallium/trace: Wrap resource_get_param if available
>   gallium/rbug: Wrap resource_get_param if available
>   gallium/noop: Implement resource_get_param
>   iris: Replace devinfo->gen with GEN_GEN
>   iris: Fix broken aux.possible/sampler_usages bitmask handling
>   iris: Update fast clear colors on Gen9 with direct immediate writes.
>   iris: Drop copy format hacks from copy region based transfer path.
>   iris: Avoid unnecessary resolves on transfer maps
>   iris: Fix large timeout handling in rel2abs()
>   isl: Drop UnormPathInColorPipe for buffer surfaces.
>   isl: Don't set UnormPathInColorPipe for integer surfaces.
>   util: Add a _mesa_i64roundevenf() helper.
>   mesa: Fix _mesa_float_to_unorm() on 32-bit systems.
>   iris: Fix partial fast clear checks to account for miplevel.
> 
> Lionel Landwerlin (2):
>   util/timespec: use unsigned 64 bit integers for nsec values
>   util: fix compilation on macos
> 
Marek Olšák (18):
>   radeonsi/gfx10: fix the legacy pipeline by storing as_ngg in the 
> shader cache
>   radeonsi: move some global shader cache flags to per-binary flags
>   radeonsi/gfx10: fix tessellation for the legacy pipeline
>   radeonsi/gfx10: fix the PRIMITIVES_GENERATED query if using legacy 
> streamout
>   radeonsi/gfx10: create the GS copy shader if using legacy streamout
>   radeonsi/gfx10: add as_ngg variant for VS as ES to select Wave32/64
>   radeonsi/gfx10: fix InstanceID for legacy VS+GS
>   radeonsi/gfx10: don't initialize VGT_INSTANCE_STEP_RATE_0
>   radeonsi/gfx10: always use the legacy pipeline for streamout
>   radeonsi/gfx10: finish up Navi14, add PCI ID
>   radeonsi/gfx10: add AMD_DEBUG=nongg
>   winsys/amdgpu+radeon: process AMD_DEBUG in addition to R600_DEBUG
>   radeonsi:

Re: [Mesa-dev] [PATCH v3 09/25] panfrost: Rework the panfrost_bo API

2019-09-05 Thread Alyssa Rosenzweig
> > I notice this had a print to stderr before with an assertion out, but
> > now it fails silently. Is this change of behaviour intentional? 
> 
> It is.

Alright! :-)

> > BO
> > creation would previously return a valid BO gauranteed. This is no
> > longer so obviously true -- although I see we later assert that the
> > return is non-NULL in the caller.
> > 
> > Could you help me understand the new logic a bit? Thank you!
> 
> The rationale behind this change is that panfrost_bo_alloc() will
> not be our last option (see patch 9). I can add the fprintf() back in
> this patch, and move it to the caller in patch 9 if you prefer.

Ah, that makes sense; thank you for clarifying!

> > > +if (!(flags & (PAN_ALLOCATE_INVISIBLE | 
> > > PAN_ALLOCATE_DELAY_MMAP)))
> > > +panfrost_bo_mmap(bo);
> > > + else if ((flags & PAN_ALLOCATE_INVISIBLE) && (pan_debug & 
> > > PAN_DBG_TRACE))  
> > 
> > I think the spacing got wacky here (on the beginning of the last line)
> >
> 
> Will fix that.

+1

> > I see we now have the distinction between panfrost_bo_release (cached)
> > and panfrost_bo_free (uncached). I'm worried the distinction might not
> > be obvious to future Panfrost hackers.
> > 
> > Could you add a comment above each function clarifying the cache
> > behaviour?
> 
> Looks like the _release() function can be inlined in
> panfrost_bo_unreference(). I'm still not happy with the
> panfrost_bo_create() name though. Maybe we should rename this one into
> panfrost_get_bo().

I think splitting free/release to separate functions is good; I don't
know that inlining _release() is inherently needed. I'm just wondering
if we want a comment to make the distinction clear for future denizens
trying to figure out which routine to use --- although inlining one
would certainly solve that part...
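
For instance, comments along these lines would make the split clear (sketch):

    /* panfrost_bo_release(): unmap the BO and try to stash it in the BO cache
     * for later reuse; only falls back to really freeing it when the cache
     * refuses to take it.  This is what unreference ends up calling. */

    /* panfrost_bo_free(): destroy the BO immediately, bypassing the cache.
     * Normal code paths should go through panfrost_bo_unreference() instead. */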

> Yes, I guess I got tired splitting things up and decided to group
> changes that were kind of related in a single patch (also don't like
> having 30+ patch series). I'll split that up in v4.

No need to split it for v4; just a general note for future series :)

[Mesa-dev] [Bug 111522] [bisected] Supraland no longer start

2019-09-05 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111522

--- Comment #11 from MWATTT  ---
I can confirm that the last MR + .drirc solves this issue, at least on radv.
Anv may have additional issues.

May be unrelated but I have a lot of
"SPIR-V WARNING:
In file ../src/compiler/spirv/spirv_to_nir.c:826
Decoration not allowed on struct members: SpvDecorationInvariant
" 
in the console


[Mesa-dev] [Bug 111549] 19.2.0_rc1 fails lp_test_arit, u_format_test, PIPE_FORMAT_DXT5_RGBA (unorm8)

2019-09-05 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111549

--- Comment #2 from Roland Scheidegger  ---
This CPU doesn't support SSE 4.1, and some of the fallbacks for rounding aren't
quite kosher, for the sake of simplicity/performance (I believe the format
tests fail for a similar reason).
I suppose it could be fixed, although personally I think it makes a lot of sense
to prioritize performance, especially on such CPUs.
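
As an illustration of the kind of discrepancy involved (an assumption about the
exact fallback used, not taken from the gallivm code): without SSE4.1's ROUNDPS,
a cheap round-to-nearest substitute such as trunc(x + 0.5) disagrees with the
ties-to-even behaviour the bit-exact tests expect:

    #include <math.h>
    #include <stdio.h>

    /* Cheap fallback: correct for most inputs, wrong on ties. */
    static float round_fallback(float x)  { return truncf(x + 0.5f); }

    /* Reference behaviour: round half to even, the default FP rounding mode. */
    static float round_reference(float x) { return nearbyintf(x); }

    int main(void)
    {
       printf("%g vs %g\n", round_fallback(2.5f), round_reference(2.5f));
       /* prints "3 vs 2", enough to fail an exact-match conformance check */
       return 0;
    }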


Re: [Mesa-dev] [PATCH v3 10/25] panfrost: Make sure the BO is 'ready' when picked from the cache

2019-09-05 Thread Boris Brezillon
On Thu, 5 Sep 2019 16:43:23 -0400
Alyssa Rosenzweig  wrote:

> > +bool
> > +panfrost_bo_wait(struct panfrost_bo *bo, int64_t timeout_ns)
> > +{
> > +struct drm_panfrost_wait_bo req = {
> > +.handle = bo->gem_handle,
> > +   .timeout_ns = timeout_ns,
> > +};
> > +int ret;
> > +
> > +ret = drmIoctl(bo->screen->fd, DRM_IOCTL_PANFROST_WAIT_BO, &req);
> > +if (ret != -1)
> > +return true;
> > +
> > +assert(errno == ETIMEDOUT || errno == EBUSY);
> > +return false;
> > +}  
> 
> I would appreciate a comment explaining what the return value of this
> ioctl is. `ret != -1` and asserting an errno is... suspicious? Not
> wrong, to my knowledge, but hard to decipher without context.

Will document that.
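
For what it's worth, the requested documentation could read roughly like this
(semantics inferred from the quoted code only):

    /* panfrost_bo_wait() - block until the GPU is done with a BO.
     *
     * DRM_IOCTL_PANFROST_WAIT_BO returns 0 once the BO is idle.  On failure
     * drmIoctl() returns -1 with the reason in errno; the only failures we
     * expect are ETIMEDOUT/EBUSY (the BO was still busy when the timeout
     * expired), hence the assert().  In that case the BO is reported as not
     * ready and the caller may retry or wait longer.
     */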

> 
> > +/* Before creating a BO, we first want to check the cache but 
> > without
> > + * waiting for BO readiness (BOs in the cache can still be 
> > referenced
> > + * by jobs that are not finished yet).
> > + * If the cached allocation fails we fall back on fresh BO 
> > allocation,
> > + * and if that fails too, we try one more time to allocate from the
> > + * cache, but this time we accept to wait.
> >   */  
> 
> Conceptually:
> 
> We first try a ready BO from the cache. OK.
> 
> If that fails, there is no BO in the cache that is currently ready for
> use; by definition of BO readiness, this is because another concurrent
> job is using it. We then try to create a new BO. Suppose a given job
> uses an average of `b` BOs. Then for `j` concurrent jobs, assuming all
> of these allocations succeed, we have `j * b` BOs in the cache. This is
> an unfortunate bump in memory usage but necessary for pipelining.
> 
> If that allocation fails, by definition of memory allocation failures,
> we ran out of memory and cannot proceed with the allocation. Either:
> 
>  - The BO cache is responsible for this. In this case, continuing to use
>the BO cache (even with the waits) will just dig us deeper into the
>hole. Perhaps we should call bo_evict_all from userspace to handle
>the memory pressure? Or does madvise render this irrelevant?

Evict won't help here as memory will only be released after the jobs
are done using it. And madvise doesn't help either, for the same reason.

> 
>  - The BO cache is not responsible for this. In this case, we could
>continue to use the BO cache, but then either:
> 
>   - There is a BO we can wait for. Then waiting is okay.
>   - There is not. Then that cache fetch fails and we kerplutz.
> What now? If we need an allocation, cache or no cache, if the
> kernel says no, no means no. What then?

The behavior hasn't changed regarding allocation failures: it's still
an assert(), so the code is not more or less buggy than it was :p. What
happens when assert()s are disabled? probably a segfault because of a
NULL pointer dereference. So, adding the fprintf() is probably a good
idea as a first step, and then we can see if we can handle the OOM case
gracefully.

> 
> In short, I'm not convinced this algorithm (specifically the last step)
> is ideal.

It really depends on how robust you want to be when the system is under
memory pressure vs how long you accept to wait. Note that, in the worst
case scenario we wouldn't wait more than we currently do, as having each
batch wait on BOs of the previous batch is just like the serialization
we had in panfrost_flush(). I don't see it as a huge problem, but maybe
I'm wrong.

> 
> If there is no memory left for us, is it responsible to continue at all?

It's not exactly no memory, it's no immediately available memory.

> Should we just fail the allocation after step 2, and if the caller has a
> problem with that, it's their issue? Or we abort here after step 2?

I think that one is a separate issue. I mean, that's something we have
to handle even if we go through step 3 and step 3 fails. 

> I
> don't like the robustness implications but low memory behaviour is a
> risky subject as it is; I don't want to add more unknowns into it --
> aborting it with an assert(0) is something we can recognize immediately.
> Strange crashes in random places with no explanation, less so.

And that hasn't changed. We still have an assert after step 3.

> 
> CC'ing Rob to see if he has any advice re Panfrost madvise interactions
> as well as general kernel OOM policy.
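
To make the three steps concrete, here is a rough sketch of the allocation path
described in the quoted comment. panfrost_bo_alloc() is quoted in patch 09; the
name and signature of the cache-fetch helper are assumptions, not the series'
exact code:

    struct panfrost_bo *
    panfrost_bo_create(struct panfrost_screen *screen, size_t size,
                       uint32_t flags)
    {
            struct panfrost_bo *bo;

            /* 1. Try the cache, but don't wait for BO readiness: cached BOs
             *    may still be referenced by jobs that aren't finished yet. */
            bo = panfrost_bo_cache_fetch(screen, size, flags, true /* dontwait */);

            /* 2. Nothing ready in the cache: fall back on a fresh allocation. */
            if (!bo)
                    bo = panfrost_bo_alloc(screen, size, flags);

            /* 3. No immediately available memory: try the cache one more time,
             *    this time accepting to wait for a BO to become ready. */
            if (!bo)
                    bo = panfrost_bo_cache_fetch(screen, size, flags, false);

            /* Allocation failures still end in an assert(), as discussed. */
            assert(bo);
            return bo;
    }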


[Mesa-dev] [Bug 111522] [bisected] Supraland no longer start

2019-09-05 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111522

Lionel Landwerlin  changed:

           What            |Removed                 |Added
---------------------------------------------------------------------------
       Assignee            |fdo-b...@engestrom.ch   |mesa-dev@lists.freedesktop.org
       Status              |ASSIGNED                |NEEDINFO

--- Comment #10 from Lionel Landwerlin  ---
I've put up another MR :
https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1883

Here is the ~/.drirc I was using :

If you could test this that would be great.


Note that for me with this fix, the game crashes at start with the following
backtrace :

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x7f051ec464e5 in ralloc_parent (ptr=0x7f04fd83d760) at
../src/util/ralloc.c:356
356return info->parent ? PTR_FROM_HEADER(info->parent) : NULL;
[Current thread is 1 (Thread 0x7f0505afb700 (LWP 15827))]
(gdb) bt
#0  0x7f051ec464e5 in ralloc_parent (ptr=0x7f04fd83d760) at
../src/util/ralloc.c:356
#1  0x7f051ec464b8 in ralloc_parent (ptr=0x7f04fc891a00) at
../src/util/ralloc.c:353
#2  0x7f051ec464b8 in ralloc_parent (ptr=0x7f04fce65fe0) at
../src/util/ralloc.c:353
#3  0x7f051ec463d9 in ralloc_adopt (new_ctx=0x7f051ec463d9
, old_ctx=0x7f0505aebde0) at ../src/util/ralloc.c:326
#4  0x7f051e8ff6b3 in anv_pipeline_compile_graphics
(pipeline=0x7f04fd443410, cache=0x8d20800, info=0x7f0505af9c68) at
../src/intel/vulkan/anv_pipeline.c:1448
#5  0x7f051e900b5f in anv_pipeline_init (pipeline=0x7f04fd443410,
device=0x8d208f0, cache=0x8d20800, pCreateInfo=0x7f0505af9c68, alloc=0x8d208f8)
at ../src/intel/vulkan/anv_pipeline.c:1930
#6  0x7f051e9f4793 in gen9_graphics_pipeline_create (_device=0x8d208f0,
cache=0x8d20800, pCreateInfo=0x7f0505af9c68, pAllocator=0x0,
pPipeline=0x7f04a4b01c10)
at ../src/intel/vulkan/genX_pipeline.c:2135
#7  0x7f051e9f5200 in VALGRIND_PRINTF (format=0x7f051e9e82aa
<_anv_combine_address+105> "H\215\065\357\t?") at
/usr/include/valgrind/valgrind.h:6248
#8  0x7f051e27d20c in vkCreateGraphicsPipelines (device=0x8d208f0,
pipelineCache=0x8d20800, createInfoCount=1, pCreateInfos=0x7f0505af9c68,
pAllocator=0x0, pPipelines=0x7f04a4b01c10)
at layersvt/api_dump.cpp:8318
#9  0x7f051dcd7078 in ?? () from
/home/djdeath/.steam/ubuntu12_64/libVkLayer_steam_fossilize.so
#10 0x7f0528057c94 in vkCreateGraphicsPipelines (device=0x8d208f0,
pipelineCache=0x8d20800, createInfoCount=1, pCreateInfos=0x7f0505af9c68,
pAllocator=0x0, pPipelines=0x7f04a4b01c10)
at ../loader/trampoline.c:1275
#11 0x0467195d in
FVulkanPipelineStateCacheManager::CreateGfxPipelineFromEntry(FVulkanPipelineStateCacheManager::FGfxPipelineEntry*,
FVulkanShader**, FVulkanGfxPipeline*) ()
#12 0x04670ee1 in
FVulkanPipelineStateCacheManager::CreateAndAdd(FGraphicsPipelineStateInitializer
const&, FGfxPSIKey,
TSharedPtr,
FGfxEntryKey) ()
#13 0x04674dcb in
FVulkanDynamicRHI::RHICreateGraphicsPipelineState(FGraphicsPipelineStateInitializer
const&) ()
#14 0x0472a1b6 in
PipelineStateCache::GetAndOrCreateGraphicsPipelineState(FRHICommandList&,
FGraphicsPipelineStateInitializer const&, EApplyRendertargetOption) ()
#15 0x04729e3f in SetGraphicsPipelineState(FRHICommandList&,
FGraphicsPipelineStateInitializer const&, EApplyRendertargetOption) ()
#16 0x042c3f86 in
FRCPassPostProcessCombineLUTs::Process(FRenderingCompositePassContext&) ()
#17 0x043820d8 in
FRenderingCompositionGraph::RecursivelyProcess(FRenderingCompositeOutputRef
const&, FRenderingCompositePassContext&) const ()
#18 0x04381d94 in
FRenderingCompositePassContext::Process(TArray const&, char16_t const*) ()
#19 0x042d43f1 in FPostProcessing::Process(FRHICommandListImmediate&,
FViewInfo const&, TRefCountPtr&) ()
#20 0x0415007b in
FDeferredShadingSceneRenderer::Render(FRHICommandListImmediate&) ()
#21 0x0446c7c6 in ?? ()
#22 0x044772ba in ?? ()
#23 0x0391c18f in FNamedTaskThread::ProcessTasksNamedThread(int, bool)
()
#24 0x0391bdf3 in FNamedTaskThread::ProcessTasksUntilQuit(int) ()
#25 0x0475e0b2 in FRenderingThread::Run() ()
#26 0x03954b03 in FRunnableThreadPThread::Run() ()
#27 0x03946aad in FRunnableThreadPThread::_ThreadProc(void*) ()
#28 0x7f054e77d182 in start_thread (arg=) at
pthread_create.c:486
#29 0x7f054dd88b1f in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:95


Though running the game under valgrind shows incorrect free() from the
application so I believe the above backtrace is the result of a previous memory
corruption.

Thanks!


Re: [Mesa-dev] [PATCH v3 09/25] panfrost: Rework the panfrost_bo API

2019-09-05 Thread Boris Brezillon
On Thu, 5 Sep 2019 16:31:04 -0400
Alyssa Rosenzweig  wrote:

> > +static struct panfrost_bo *
> > +panfrost_bo_alloc(struct panfrost_screen *screen, size_t size,
> > +  uint32_t flags)
> > +{  
> ...
> > +ret = drmIoctl(screen->fd, DRM_IOCTL_PANFROST_CREATE_BO, 
> > &create_bo);
> > +if (ret)
> > +return NULL;  
> 
> I notice this had a print to stderr before with an assertion out, but
> now it fails silently. Is this change of behaviour intentional? 

It is.

> BO
> creation would previously return a valid BO gauranteed. This is no
> longer so obviously true -- although I see we later assert that the
> return is non-NULL in the caller.
> 
> Could you help me understand the new logic a bit? Thank you!
> 

The rationale behind this change is that panfrost_bo_alloc() will
not be our last option (see patch 9). I can add the fprintf() back in
this patch, and move it to the caller in patch 9 if you prefer.

> > +if (!(flags & (PAN_ALLOCATE_INVISIBLE | PAN_ALLOCATE_DELAY_MMAP)))
> > +panfrost_bo_mmap(bo);
> > +   else if ((flags & PAN_ALLOCATE_INVISIBLE) && (pan_debug & 
> > PAN_DBG_TRACE))  
> 
> I think the spacing got wacky here (on the beginning of the last line)
>

Will fix that.
 
> > +static void
> > +panfrost_bo_release(struct panfrost_bo *bo)
> > +{
> > +
> > +/* Rather than freeing the BO now, we'll cache the BO for later
> > + * allocations if we're allowed to */
> > +
> > +panfrost_bo_munmap(bo);
> > +
> > +if (panfrost_bo_cache_put(bo))
> > +return;
> > +
> > +panfrost_bo_free(bo);
> > +}  
> 
> I see we now have the distinction between panfrost_bo_release (cached)
> and panfrost_bo_free (uncached). I'm worried the distinction might not
> be obvious to future Panfrost hackers.
> 
> Could you add a comment above each function clarifying the cache
> behaviour?

Looks like the _release() function can be inlined in
panfrost_bo_unreference(). I'm still not happy with the
panfrost_bo_create() name though. Maybe we should rename this one into
panfrost_get_bo().

> 
> -
> 
> Other than these, the cleanup in general seems like a good idea. But in
> general, please try to split up patches like this to aid reviewin. Thank
> you!

Yes, I guess I got tired splitting things up and decided to group
changes that were kind of related in a single patch (also don't like
having 30+ patch series). I'll split that up in v4.

Thanks for the review!

Boris

Re: [Mesa-dev] [PATCH v3 13/25] panfrost: Allow testing if a specific batch is targeting a scanout FB

2019-09-05 Thread Alyssa Rosenzweig
R-b

On Thu, Sep 05, 2019 at 09:41:38PM +0200, Boris Brezillon wrote:
> Rename panfrost_is_scanout() into panfrost_batch_is_scanout(), pass it
> a batch instead of a context and move the code to pan_job.c.
> 
> With this in place, we can now test if a batch is targeting a scanout
> FB even if this batch is not bound to the context.
> 
> Signed-off-by: Boris Brezillon 
> ---
>  src/gallium/drivers/panfrost/pan_context.c | 20 +---
>  src/gallium/drivers/panfrost/pan_context.h |  3 ---
>  src/gallium/drivers/panfrost/pan_job.c | 18 ++
>  src/gallium/drivers/panfrost/pan_job.h |  3 +++
>  src/gallium/drivers/panfrost/pan_mfbd.c|  3 +--
>  5 files changed, 23 insertions(+), 24 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/pan_context.c 
> b/src/gallium/drivers/panfrost/pan_context.c
> index f0cd8cdb12ea..e34f5757b1cf 100644
> --- a/src/gallium/drivers/panfrost/pan_context.c
> +++ b/src/gallium/drivers/panfrost/pan_context.c
> @@ -152,24 +152,6 @@ panfrost_emit_mfbd(struct panfrost_context *ctx, 
> unsigned vertex_count)
>  return framebuffer;
>  }
>  
> -/* Are we currently rendering to the screen (rather than an FBO)? */
> -
> -bool
> -panfrost_is_scanout(struct panfrost_context *ctx)
> -{
> -/* If there is no color buffer, it's an FBO */
> -if (ctx->pipe_framebuffer.nr_cbufs != 1)
> -return false;
> -
> -/* If we're too early that no framebuffer was sent, it's scanout */
> -if (!ctx->pipe_framebuffer.cbufs[0])
> -return true;
> -
> -return ctx->pipe_framebuffer.cbufs[0]->texture->bind & 
> PIPE_BIND_DISPLAY_TARGET ||
> -   ctx->pipe_framebuffer.cbufs[0]->texture->bind & 
> PIPE_BIND_SCANOUT ||
> -   ctx->pipe_framebuffer.cbufs[0]->texture->bind & 
> PIPE_BIND_SHARED;
> -}
> -
>  static void
>  panfrost_clear(
>  struct pipe_context *pipe,
> @@ -2397,7 +2379,7 @@ panfrost_set_framebuffer_state(struct pipe_context 
> *pctx,
>   */
>  
>  struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
> -bool is_scanout = panfrost_is_scanout(ctx);
> +bool is_scanout = panfrost_batch_is_scanout(batch);
>  bool has_draws = batch->last_job.gpu;
>  
>  /* Bail out early when the current and new states are the same. */
> diff --git a/src/gallium/drivers/panfrost/pan_context.h 
> b/src/gallium/drivers/panfrost/pan_context.h
> index 586b6d854b6c..02552ed23de2 100644
> --- a/src/gallium/drivers/panfrost/pan_context.h
> +++ b/src/gallium/drivers/panfrost/pan_context.h
> @@ -315,9 +315,6 @@ panfrost_flush(
>  struct pipe_fence_handle **fence,
>  unsigned flags);
>  
> -bool
> -panfrost_is_scanout(struct panfrost_context *ctx);
> -
>  mali_ptr panfrost_sfbd_fragment(struct panfrost_context *ctx, bool 
> has_draws);
>  mali_ptr panfrost_mfbd_fragment(struct panfrost_context *ctx, bool 
> has_draws);
>  
> diff --git a/src/gallium/drivers/panfrost/pan_job.c 
> b/src/gallium/drivers/panfrost/pan_job.c
> index 56aab13d7d5a..0f7e139f1a64 100644
> --- a/src/gallium/drivers/panfrost/pan_job.c
> +++ b/src/gallium/drivers/panfrost/pan_job.c
> @@ -374,6 +374,24 @@ panfrost_batch_intersection_scissor(struct 
> panfrost_batch *batch,
>  batch->maxy = MIN2(batch->maxy, maxy);
>  }
>  
> +/* Are we currently rendering to the screen (rather than an FBO)? */
> +
> +bool
> +panfrost_batch_is_scanout(struct panfrost_batch *batch)
> +{
> +/* If there is no color buffer, it's an FBO */
> +if (batch->key.nr_cbufs != 1)
> +return false;
> +
> +/* If we're too early that no framebuffer was sent, it's scanout */
> +if (!batch->key.cbufs[0])
> +return true;
> +
> +return batch->key.cbufs[0]->texture->bind & PIPE_BIND_DISPLAY_TARGET 
> ||
> +   batch->key.cbufs[0]->texture->bind & PIPE_BIND_SCANOUT ||
> +   batch->key.cbufs[0]->texture->bind & PIPE_BIND_SHARED;
> +}
> +
>  void
>  panfrost_batch_init(struct panfrost_context *ctx)
>  {
> diff --git a/src/gallium/drivers/panfrost/pan_job.h 
> b/src/gallium/drivers/panfrost/pan_job.h
> index e885d0b9fbd5..ea832f2c3efe 100644
> --- a/src/gallium/drivers/panfrost/pan_job.h
> +++ b/src/gallium/drivers/panfrost/pan_job.h
> @@ -195,4 +195,7 @@ panfrost_scoreboard_queue_fused_job_prepend(
>  void
>  panfrost_scoreboard_link_batch(struct panfrost_batch *batch);
>  
> +bool
> +panfrost_batch_is_scanout(struct panfrost_batch *batch);
> +
>  #endif
> diff --git a/src/gallium/drivers/panfrost/pan_mfbd.c 
> b/src/gallium/drivers/panfrost/pan_mfbd.c
> index 618ebd3c4a19..c89b0b44a47c 100644
> --- a/src/gallium/drivers/panfrost/pan_mfbd.c
> +++ b/src/gallium/drivers/panfrost/pan_mfbd.c
> @@ -455,9 +455,8 @@ panfrost_mfbd_fragment(struct panfrost_context *ctx, bool 
> has_draws)
>   * The exception is ReadPixels, but this is not supported on GLES so 
> we
>   * can saf

Re: [Mesa-dev] [PATCH v3 12/25] panfrost: Get rid of the unused 'flush jobs accessing res' infra

2019-09-05 Thread Alyssa Rosenzweig
Fair enough, R-b

On Thu, Sep 05, 2019 at 09:41:37PM +0200, Boris Brezillon wrote:
> Will be replaced by something similar but using a BOs as keys instead
> of resources.
> 
> Signed-off-by: Boris Brezillon 
> ---
>  src/gallium/drivers/panfrost/pan_context.h |  3 --
>  src/gallium/drivers/panfrost/pan_job.c | 38 --
>  src/gallium/drivers/panfrost/pan_job.h |  8 -
>  3 files changed, 49 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/pan_context.h 
> b/src/gallium/drivers/panfrost/pan_context.h
> index 9723d56ac5f7..586b6d854b6c 100644
> --- a/src/gallium/drivers/panfrost/pan_context.h
> +++ b/src/gallium/drivers/panfrost/pan_context.h
> @@ -114,9 +114,6 @@ struct panfrost_context {
>  struct panfrost_batch *batch;
>  struct hash_table *batches;
>  
> -/* panfrost_resource -> panfrost_job */
> -struct hash_table *write_jobs;
> -
>  /* Within a launch_grid call.. */
>  const struct pipe_grid_info *compute_grid;
>  
> diff --git a/src/gallium/drivers/panfrost/pan_job.c 
> b/src/gallium/drivers/panfrost/pan_job.c
> index 6b0f612bb156..56aab13d7d5a 100644
> --- a/src/gallium/drivers/panfrost/pan_job.c
> +++ b/src/gallium/drivers/panfrost/pan_job.c
> @@ -162,21 +162,6 @@ panfrost_batch_get_polygon_list(struct panfrost_batch 
> *batch, unsigned size)
>  return batch->polygon_list->gpu;
>  }
>  
> -void
> -panfrost_flush_jobs_writing_resource(struct panfrost_context *panfrost,
> - struct pipe_resource *prsc)
> -{
> -#if 0
> -struct hash_entry *entry = 
> _mesa_hash_table_search(panfrost->write_jobs,
> -   prsc);
> -if (entry) {
> -struct panfrost_batch *batch = entry->data;
> -panfrost_batch_submit(job);
> -}
> -#endif
> -/* TODO stub */
> -}
> -
>  void
>  panfrost_batch_submit(struct panfrost_batch *batch)
>  {
> @@ -352,25 +337,6 @@ panfrost_batch_clear(struct panfrost_batch *batch,
>   ctx->pipe_framebuffer.height);
>  }
>  
> -void
> -panfrost_flush_jobs_reading_resource(struct panfrost_context *panfrost,
> - struct pipe_resource *prsc)
> -{
> -struct panfrost_resource *rsc = pan_resource(prsc);
> -
> -panfrost_flush_jobs_writing_resource(panfrost, prsc);
> -
> -hash_table_foreach(panfrost->batches, entry) {
> -struct panfrost_batch *batch = entry->data;
> -
> -if (_mesa_set_search(batch->bos, rsc->bo)) {
> -printf("TODO: submit job for flush\n");
> -//panfrost_batch_submit(job);
> -continue;
> -}
> -}
> -}
> -
>  static bool
>  panfrost_batch_compare(const void *a, const void *b)
>  {
> @@ -414,8 +380,4 @@ panfrost_batch_init(struct panfrost_context *ctx)
>  ctx->batches = _mesa_hash_table_create(ctx,
> panfrost_batch_hash,
> panfrost_batch_compare);
> -
> -ctx->write_jobs = _mesa_hash_table_create(ctx,
> -  _mesa_hash_pointer,
> -  _mesa_key_pointer_equal);
>  }
> diff --git a/src/gallium/drivers/panfrost/pan_job.h 
> b/src/gallium/drivers/panfrost/pan_job.h
> index 6d89603f8798..e885d0b9fbd5 100644
> --- a/src/gallium/drivers/panfrost/pan_job.h
> +++ b/src/gallium/drivers/panfrost/pan_job.h
> @@ -138,14 +138,6 @@ panfrost_batch_init(struct panfrost_context *ctx);
>  void
>  panfrost_batch_add_bo(struct panfrost_batch *batch, struct panfrost_bo *bo);
>  
> -void
> -panfrost_flush_jobs_writing_resource(struct panfrost_context *panfrost,
> - struct pipe_resource *prsc);
> -
> -void
> -panfrost_flush_jobs_reading_resource(struct panfrost_context *panfrost,
> - struct pipe_resource *prsc);
> -
>  void
>  panfrost_batch_submit(struct panfrost_batch *batch);
>  
> -- 
> 2.21.0

Re: [Mesa-dev] [PATCH v3 11/25] panfrost: Use a pipe_framebuffer_state as the batch key

2019-09-05 Thread Alyssa Rosenzweig
Hrm. I'm not sure I'm 100% comfortable using a Gallium object for this,
since many of these properties could be inferred, but this is still
probably the best compromise for now, so R-b.
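(For the record, the kind of trimmed-down key I had in mind would just be
the surfaces, since width/height/nr_cbufs can be derived from them.
Purely illustrative, this is not what the patch does:

        struct panfrost_batch_key {
                struct pipe_surface *cbufs[4];
                struct pipe_surface *zsbuf;
        };

But as said above, reusing pipe_framebuffer_state is fine for now.)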

On Thu, Sep 05, 2019 at 09:41:36PM +0200, Boris Brezillon wrote:
> This way we have all the fb_state information directly attached to a
> batch and can pass only the batch to functions emitting CMDs, which is
> needed if we want to be able to queue CMDs to a batch that's not
> currently bound to the context.
> 
> Signed-off-by: Boris Brezillon 
> ---
>  src/gallium/drivers/panfrost/pan_job.c | 34 +++---
>  src/gallium/drivers/panfrost/pan_job.h |  5 ++--
>  2 files changed, 11 insertions(+), 28 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/pan_job.c 
> b/src/gallium/drivers/panfrost/pan_job.c
> index 7c40bcee0fca..6b0f612bb156 100644
> --- a/src/gallium/drivers/panfrost/pan_job.c
> +++ b/src/gallium/drivers/panfrost/pan_job.c
> @@ -79,21 +79,10 @@ panfrost_free_batch(struct panfrost_batch *batch)
>  
>  struct panfrost_batch *
>  panfrost_get_batch(struct panfrost_context *ctx,
> - struct pipe_surface **cbufs, struct pipe_surface *zsbuf)
> +   const struct pipe_framebuffer_state *key)
>  {
>  /* Lookup the job first */
> -
> -struct panfrost_batch_key key = {
> -.cbufs = {
> -cbufs[0],
> -cbufs[1],
> -cbufs[2],
> -cbufs[3],
> -},
> -.zsbuf = zsbuf
> -};
> -
> -struct hash_entry *entry = _mesa_hash_table_search(ctx->batches, 
> &key);
> +struct hash_entry *entry = _mesa_hash_table_search(ctx->batches, 
> key);
>  
>  if (entry)
>  return entry->data;
> @@ -103,8 +92,7 @@ panfrost_get_batch(struct panfrost_context *ctx,
>  struct panfrost_batch *batch = panfrost_create_batch(ctx);
>  
>  /* Save the created job */
> -
> -memcpy(&batch->key, &key, sizeof(key));
> +util_copy_framebuffer_state(&batch->key, key);
>  _mesa_hash_table_insert(ctx->batches, &batch->key, batch);
>  
>  return batch;
> @@ -124,18 +112,14 @@ panfrost_get_batch_for_fbo(struct panfrost_context *ctx)
>  /* If we already began rendering, use that */
>  
>  if (ctx->batch) {
> -assert(ctx->batch->key.zsbuf == ctx->pipe_framebuffer.zsbuf 
> &&
> -   !memcmp(ctx->batch->key.cbufs,
> -   ctx->pipe_framebuffer.cbufs,
> -   sizeof(ctx->batch->key.cbufs)));
> +assert(util_framebuffer_state_equal(&ctx->batch->key,
> +&ctx->pipe_framebuffer));
>  return ctx->batch;
>  }
>  
>  /* If not, look up the job */
> -
> -struct pipe_surface **cbufs = ctx->pipe_framebuffer.cbufs;
> -struct pipe_surface *zsbuf = ctx->pipe_framebuffer.zsbuf;
> -struct panfrost_batch *batch = panfrost_get_batch(ctx, cbufs, zsbuf);
> +struct panfrost_batch *batch = panfrost_get_batch(ctx,
> +  
> &ctx->pipe_framebuffer);
>  
>  /* Set this job as the current FBO job. Will be reset when updating 
> the
>   * FB state and when submitting or releasing a job.
> @@ -390,13 +374,13 @@ panfrost_flush_jobs_reading_resource(struct 
> panfrost_context *panfrost,
>  static bool
>  panfrost_batch_compare(const void *a, const void *b)
>  {
> -return memcmp(a, b, sizeof(struct panfrost_batch_key)) == 0;
> +return util_framebuffer_state_equal(a, b);
>  }
>  
>  static uint32_t
>  panfrost_batch_hash(const void *key)
>  {
> -return _mesa_hash_data(key, sizeof(struct panfrost_batch_key));
> +return _mesa_hash_data(key, sizeof(struct pipe_framebuffer_state));
>  }
>  
>  /* Given a new bounding rectangle (scissor), let the job cover the union of 
> the
> diff --git a/src/gallium/drivers/panfrost/pan_job.h 
> b/src/gallium/drivers/panfrost/pan_job.h
> index c9f487871216..6d89603f8798 100644
> --- a/src/gallium/drivers/panfrost/pan_job.h
> +++ b/src/gallium/drivers/panfrost/pan_job.h
> @@ -46,7 +46,7 @@ struct panfrost_batch_key {
>  
>  struct panfrost_batch {
>  struct panfrost_context *ctx;
> -struct panfrost_batch_key key;
> +struct pipe_framebuffer_state key;
>  
>  /* Buffers cleared (PIPE_CLEAR_* bitmask) */
>  unsigned clear;
> @@ -127,8 +127,7 @@ panfrost_free_batch(struct panfrost_batch *batch);
>  
>  struct panfrost_batch *
>  panfrost_get_batch(struct panfrost_context *ctx,
> -   struct pipe_surface **cbufs,
> -   struct pipe_surface *zsbuf);
> +   const struct pipe_framebuffer_state *key);
>  
>  struct panfrost_batch *
>  panfrost_get_batch_for_fbo(struct panf

Re: [Mesa-dev] [PATCH v3 10/25] panfrost: Make sure the BO is 'ready' when picked from the cache

2019-09-05 Thread Alyssa Rosenzweig
> +bool
> +panfrost_bo_wait(struct panfrost_bo *bo, int64_t timeout_ns)
> +{
> +struct drm_panfrost_wait_bo req = {
> +.handle = bo->gem_handle,
> + .timeout_ns = timeout_ns,
> +};
> +int ret;
> +
> +ret = drmIoctl(bo->screen->fd, DRM_IOCTL_PANFROST_WAIT_BO, &req);
> +if (ret != -1)
> +return true;
> +
> +assert(errno == ETIMEDOUT || errno == EBUSY);
> +return false;
> +}

I would appreciate a comment explaining what the return value of this
ioctl is. `ret != -1` and asserting an errno is... suspicious? Not
wrong, to my knowledge, but hard to decipher without context.
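
Something along these lines would already help (wording is only a
suggestion, and worth double-checking against the kernel side):

        ret = drmIoctl(bo->screen->fd, DRM_IOCTL_PANFROST_WAIT_BO, &req);

        /* drmIoctl() returns 0 on success and -1 on failure with errno set.
         * Success means the wait completed, i.e. the BO is idle; the only
         * expected failures here are ETIMEDOUT/EBUSY, i.e. the BO is still
         * busy when the timeout expires. */
        if (ret != -1)
                return true;

        assert(errno == ETIMEDOUT || errno == EBUSY);
        return false;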

> +/* Before creating a BO, we first want to check the cache but without
> + * waiting for BO readiness (BOs in the cache can still be referenced
> + * by jobs that are not finished yet).
> + * If the cached allocation fails we fall back on fresh BO 
> allocation,
> + * and if that fails too, we try one more time to allocate from the
> + * cache, but this time we accept to wait.
>   */

Conceptually:

We first try a ready BO from the cache. OK.

If that fails, there is no BO in the cache that is currently ready for
use; by definition of BO readiness, this is because another concurrent
job is using it. We then try to create a new BO. Suppose a given job
uses an average of `b` BOs. Then for `j` concurrent jobs, assuming all
of these allocations succeed, we have `j * b` BOs in the cache. This is
an unfortunate bump in memory usage but necessary for pipelining.

If that allocation fails, by definition of memory allocation failures,
we ran out of memory and cannot proceed with the allocation. Either:

 - The BO cache is responsible for this. In this case, continuing to use
   the BO cache (even with the waits) will just dig us deeper into the
   hole. Perhaps we should call bo_evict_all from userspace to handle
   the memory pressure? Or does madvise render this irrelevant?

 - The BO cache is not responsible for this. In this case, we could
   continue to use the BO cache, but then either:

- There is a BO we can wait for. Then waiting is okay.
- There is not. Then that cache fetch fails and we kerplutz.
  What now? If we need an allocation, cache or no cache, if the
  kernel says no, no means no. What then?

In short, I'm not convinced this algorithm (specifically the last step)
is ideal.
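
For reference, here is the shape of the flow we are discussing, as I read
it (pseudo-C, helper names approximate, not the actual code):

        /* 1. try the cache, but do not wait on busy BOs */
        bo = panfrost_bo_cache_fetch(screen, size, flags, false /* wait */);

        /* 2. no ready BO in the cache: allocate a fresh one */
        if (!bo)
                bo = panfrost_bo_alloc(screen, size, flags);

        /* 3. allocation failed: retry the cache, this time accepting to wait */
        if (!bo)
                bo = panfrost_bo_cache_fetch(screen, size, flags, true /* wait */);

        /* step 3 (and asserting on its result) is the part in question */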

If there is no memory left for us, is it responsible to continue at all?
Should we just fail the allocation after step 2, and if the caller has a
problem with that, it's their issue? Or we abort here after step 2? I
don't like the robustness implications but low memory behaviour is a
risky subject as it is; I don't want to add more unknowns into it --
aborting it with an assert(0) is something we can recognize immediately.
Strange crashes in random places with no explanation, less so.

CC'ing Rob to see if he has any advice re Panfrost madvise interactions
as well as general kernel OOM policy.
___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

Re: [Mesa-dev] [PATCH v3 09/25] panfrost: Rework the panfrost_bo API

2019-09-05 Thread Alyssa Rosenzweig
> +static struct panfrost_bo *
> +panfrost_bo_alloc(struct panfrost_screen *screen, size_t size,
> +  uint32_t flags)
> +{
...
> +ret = drmIoctl(screen->fd, DRM_IOCTL_PANFROST_CREATE_BO, &create_bo);
> +if (ret)
> +return NULL;

I notice this had a print to stderr before with an assertion out, but
now it fails silently. Is this change of behaviour intentional? BO
creation would previously return a valid BO, guaranteed. This is no
longer so obviously true -- although I see we later assert that the
return is non-NULL in the caller.

Could you help me understand the new logic a bit? Thank you!
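
If dropping the message was not intentional, keeping a print on the error
path would be enough. Untested fragment of the same function, just to
illustrate:

        ret = drmIoctl(screen->fd, DRM_IOCTL_PANFROST_CREATE_BO, &create_bo);
        if (ret) {
                fprintf(stderr, "DRM_IOCTL_PANFROST_CREATE_BO failed: %s\n",
                        strerror(errno));
                return NULL;
        }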

> +if (!(flags & (PAN_ALLOCATE_INVISIBLE | PAN_ALLOCATE_DELAY_MMAP)))
> +panfrost_bo_mmap(bo);
> + else if ((flags & PAN_ALLOCATE_INVISIBLE) && (pan_debug & 
> PAN_DBG_TRACE))

I think the spacing got wacky here (on the beginning of the last line)

> +static void
> +panfrost_bo_release(struct panfrost_bo *bo)
> +{
> +
> +/* Rather than freeing the BO now, we'll cache the BO for later
> + * allocations if we're allowed to */
> +
> +panfrost_bo_munmap(bo);
> +
> +if (panfrost_bo_cache_put(bo))
> +return;
> +
> +panfrost_bo_free(bo);
> +}

I see we now have the distinction between panfrost_bo_release (cached)
and panfrost_bo_free (uncached). I'm worried the distinction might not
be obvious to future Panfrost hackers.

Could you add a comment above each function clarifying the cache
behaviour?
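
Even something short would do -- wording obviously up to you, and the
claims below should be double-checked against the actual code:

        /* panfrost_bo_release(): returns the BO to the cache when caching is
         * allowed for this BO (panfrost_bo_cache_put() takes ownership in
         * that case), and falls back to panfrost_bo_free() otherwise. */

        /* panfrost_bo_free(): really destroys the BO, bypassing the cache.
         * Should only be needed by the cache itself (eviction) and by
         * panfrost_bo_release() above. */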

-

Other than these, the cleanup in general seems like a good idea. But
please try to split up patches like this to aid reviewing. Thank you!
___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

Re: [Mesa-dev] [PATCH v3 08/25] panfrost: Rename pan_bo_cache.c into pan_bo.c

2019-09-05 Thread Alyssa Rosenzweig
R-b

On Thu, Sep 05, 2019 at 09:41:33PM +0200, Boris Brezillon wrote:
> So we can move all the BO logic into this file instead of having it
> spread over pan_resource.c, pan_drm.c and pan_bo_cache.c.
> 
> Signed-off-by: Boris Brezillon 
> ---
>  src/gallium/drivers/panfrost/meson.build  | 2 +-
>  src/gallium/drivers/panfrost/{pan_bo_cache.c => pan_bo.c} | 0
>  2 files changed, 1 insertion(+), 1 deletion(-)
>  rename src/gallium/drivers/panfrost/{pan_bo_cache.c => pan_bo.c} (100%)
> 
> diff --git a/src/gallium/drivers/panfrost/meson.build 
> b/src/gallium/drivers/panfrost/meson.build
> index c188274236bb..73c3b54923a4 100644
> --- a/src/gallium/drivers/panfrost/meson.build
> +++ b/src/gallium/drivers/panfrost/meson.build
> @@ -32,7 +32,7 @@ files_panfrost = files(
>  
>'pan_context.c',
>'pan_afbc.c',
> -  'pan_bo_cache.c',
> +  'pan_bo.c',
>'pan_blit.c',
>'pan_job.c',
>'pan_drm.c',
> diff --git a/src/gallium/drivers/panfrost/pan_bo_cache.c 
> b/src/gallium/drivers/panfrost/pan_bo.c
> similarity index 100%
> rename from src/gallium/drivers/panfrost/pan_bo_cache.c
> rename to src/gallium/drivers/panfrost/pan_bo.c
> -- 
> 2.21.0
___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

Re: [Mesa-dev] [PATCH v3 07/25] panfrost: Get rid of the now unused SLAB allocator

2019-09-05 Thread Alyssa Rosenzweig
Glad to see this gone, thank you! R-b

On Thu, Sep 05, 2019 at 09:41:32PM +0200, Boris Brezillon wrote:
> The last users have been converted to use plain BOs. Let's get rid of
> this abstraction. We can always consider adding it back if we need it
> at some point.
> 
> Signed-off-by: Boris Brezillon 
> ---
>  src/gallium/drivers/panfrost/pan_allocate.h | 13 
>  src/gallium/drivers/panfrost/pan_drm.c  | 23 -
>  src/gallium/drivers/panfrost/pan_screen.h   | 11 --
>  3 files changed, 47 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/pan_allocate.h 
> b/src/gallium/drivers/panfrost/pan_allocate.h
> index cf9499154c8b..c0aff62df4a1 100644
> --- a/src/gallium/drivers/panfrost/pan_allocate.h
> +++ b/src/gallium/drivers/panfrost/pan_allocate.h
> @@ -63,23 +63,10 @@ struct panfrost_bo {
>  uint32_t flags;
>  };
>  
> -struct panfrost_memory {
> -/* Backing for the slab in memory */
> -struct panfrost_bo *bo;
> -int stack_bottom;
> -};
> -
>  struct panfrost_transfer
>  panfrost_allocate_transient(struct panfrost_context *ctx, size_t sz);
>  
>  mali_ptr
>  panfrost_upload_transient(struct panfrost_context *ctx, const void *data, 
> size_t sz);
>  
> -static inline mali_ptr
> -panfrost_reserve(struct panfrost_memory *mem, size_t sz)
> -{
> -mem->stack_bottom += sz;
> -return mem->bo->gpu + (mem->stack_bottom - sz);
> -}
> -
>  #endif /* __PAN_ALLOCATE_H__ */
> diff --git a/src/gallium/drivers/panfrost/pan_drm.c 
> b/src/gallium/drivers/panfrost/pan_drm.c
> index 1edbb5bd1dcc..e7dcd2e58751 100644
> --- a/src/gallium/drivers/panfrost/pan_drm.c
> +++ b/src/gallium/drivers/panfrost/pan_drm.c
> @@ -183,29 +183,6 @@ panfrost_drm_release_bo(struct panfrost_screen *screen, 
> struct panfrost_bo *bo,
>  ralloc_free(bo);
>  }
>  
> -void
> -panfrost_drm_allocate_slab(struct panfrost_screen *screen,
> -   struct panfrost_memory *mem,
> -   size_t pages,
> -   bool same_va,
> -   int extra_flags,
> -   int commit_count,
> -   int extent)
> -{
> -// TODO cache allocations
> -// TODO properly handle errors
> -// TODO take into account extra_flags
> -mem->bo = panfrost_drm_create_bo(screen, pages * 4096, extra_flags);
> -mem->stack_bottom = 0;
> -}
> -
> -void
> -panfrost_drm_free_slab(struct panfrost_screen *screen, struct 
> panfrost_memory *mem)
> -{
> -panfrost_bo_unreference(&screen->base, mem->bo);
> -mem->bo = NULL;
> -}
> -
>  struct panfrost_bo *
>  panfrost_drm_import_bo(struct panfrost_screen *screen, int fd)
>  {
> diff --git a/src/gallium/drivers/panfrost/pan_screen.h 
> b/src/gallium/drivers/panfrost/pan_screen.h
> index 7ed5193277ac..96044b8c8b90 100644
> --- a/src/gallium/drivers/panfrost/pan_screen.h
> +++ b/src/gallium/drivers/panfrost/pan_screen.h
> @@ -120,17 +120,6 @@ pan_screen(struct pipe_screen *p)
>  return (struct panfrost_screen *)p;
>  }
>  
> -void
> -panfrost_drm_allocate_slab(struct panfrost_screen *screen,
> -   struct panfrost_memory *mem,
> -   size_t pages,
> -   bool same_va,
> -   int extra_flags,
> -   int commit_count,
> -   int extent);
> -void
> -panfrost_drm_free_slab(struct panfrost_screen *screen,
> -   struct panfrost_memory *mem);
>  struct panfrost_bo *
>  panfrost_drm_create_bo(struct panfrost_screen *screen, size_t size,
> uint32_t flags);
> -- 
> 2.21.0
___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

Re: [Mesa-dev] [PATCH v3 06/25] panfrost: Get rid of unused panfrost_context fields

2019-09-05 Thread Alyssa Rosenzweig
Reviewed-by: Alyssa Rosenzweig 

I wish static analysis and friends would identify these automatically.

On Thu, Sep 05, 2019 at 09:41:31PM +0200, Boris Brezillon wrote:
> Some fields in panfrost_context are unused (probably leftovers from
> previous refactor). Let's get rid of them.
> 
> Signed-off-by: Boris Brezillon 
> ---
>  src/gallium/drivers/panfrost/pan_context.h | 4 
>  1 file changed, 4 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/pan_context.h 
> b/src/gallium/drivers/panfrost/pan_context.h
> index 8f9cc44fedac..9723d56ac5f7 100644
> --- a/src/gallium/drivers/panfrost/pan_context.h
> +++ b/src/gallium/drivers/panfrost/pan_context.h
> @@ -129,8 +129,6 @@ struct panfrost_context {
>  struct panfrost_bo *scratchpad;
>  struct panfrost_bo *tiler_heap;
>  struct panfrost_bo *tiler_dummy;
> -struct panfrost_memory cmdstream_persistent;
> -struct panfrost_memory depth_stencil_buffer;
>  
>  bool active_queries;
>  uint64_t prims_generated;
> @@ -157,8 +155,6 @@ struct panfrost_context {
>   * it is disabled, just equal to plain vertex count */
>  unsigned padded_count;
>  
> -union mali_attr attributes[PIPE_MAX_ATTRIBS];
> -
>  /* TODO: Multiple uniform buffers (index =/= 0), finer updates? */
>  
>  struct panfrost_constant_buffer constant_buffer[PIPE_SHADER_TYPES];
> -- 
> 2.21.0
___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

Re: [Mesa-dev] [PATCH v3 05/25] panfrost: Convert ctx->{scratchpad, tiler_heap, tiler_dummy} to plain BOs

2019-09-05 Thread Alyssa Rosenzweig
Reviewed-by: Alyssa Rosenzweig 

On Thu, Sep 05, 2019 at 09:41:30PM +0200, Boris Brezillon wrote:
> ctx->{scratchpad,tiler_heap,tiler_dummy} are allocated using
> panfrost_drm_allocate_slab() but they never use any of the SLAB-based
> allocation logic. Let's convert those fields to plain BOs.
> 
> Signed-off-by: Boris Brezillon 
> ---
>  src/gallium/drivers/panfrost/pan_context.c | 29 --
>  src/gallium/drivers/panfrost/pan_context.h |  6 ++---
>  src/gallium/drivers/panfrost/pan_drm.c |  4 +--
>  3 files changed, 21 insertions(+), 18 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/pan_context.c 
> b/src/gallium/drivers/panfrost/pan_context.c
> index 292de7fe132c..0fb4c2584e40 100644
> --- a/src/gallium/drivers/panfrost/pan_context.c
> +++ b/src/gallium/drivers/panfrost/pan_context.c
> @@ -83,16 +83,15 @@ panfrost_emit_midg_tiler(
>  
>  
>  /* Allow the entire tiler heap */
> -t.heap_start = ctx->tiler_heap.bo->gpu;
> -t.heap_end =
> -ctx->tiler_heap.bo->gpu + ctx->tiler_heap.bo->size;
> +t.heap_start = ctx->tiler_heap->gpu;
> +t.heap_end = ctx->tiler_heap->gpu + ctx->tiler_heap->size;
>  } else {
>  /* The tiler is disabled, so don't allow the tiler heap */
> -t.heap_start = ctx->tiler_heap.bo->gpu;
> +t.heap_start = ctx->tiler_heap->gpu;
>  t.heap_end = t.heap_start;
>  
>  /* Use a dummy polygon list */
> -t.polygon_list = ctx->tiler_dummy.bo->gpu;
> +t.polygon_list = ctx->tiler_dummy->gpu;
>  
>  /* Disable the tiler */
>  t.hierarchy_mask |= MALI_TILER_DISABLED;
> @@ -116,7 +115,7 @@ panfrost_emit_sfbd(struct panfrost_context *ctx, unsigned 
> vertex_count)
>  .unknown2 = 0x1f,
>  .format = 0x3000,
>  .clear_flags = 0x1000,
> -.unknown_address_0 = ctx->scratchpad.bo->gpu,
> +.unknown_address_0 = ctx->scratchpad->gpu,
>  .tiler = panfrost_emit_midg_tiler(ctx,
>width, height, 
> vertex_count),
>  };
> @@ -144,7 +143,7 @@ panfrost_emit_mfbd(struct panfrost_context *ctx, unsigned 
> vertex_count)
>  
>  .unknown2 = 0x1f,
>  
> -.scratchpad = ctx->scratchpad.bo->gpu,
> +.scratchpad = ctx->scratchpad->gpu,
>  .tiler = panfrost_emit_midg_tiler(ctx,
>width, height, 
> vertex_count)
>  };
> @@ -2565,9 +2564,9 @@ panfrost_destroy(struct pipe_context *pipe)
>  if (panfrost->blitter_wallpaper)
>  util_blitter_destroy(panfrost->blitter_wallpaper);
>  
> -panfrost_drm_free_slab(screen, &panfrost->scratchpad);
> -panfrost_drm_free_slab(screen, &panfrost->tiler_heap);
> -panfrost_drm_free_slab(screen, &panfrost->tiler_dummy);
> +panfrost_drm_release_bo(screen, panfrost->scratchpad, false);
> +panfrost_drm_release_bo(screen, panfrost->tiler_heap, false);
> +panfrost_drm_release_bo(screen, panfrost->tiler_dummy, false);
>  
>  ralloc_free(pipe);
>  }
> @@ -2750,9 +2749,13 @@ panfrost_setup_hardware(struct panfrost_context *ctx)
>  struct pipe_context *gallium = (struct pipe_context *) ctx;
>  struct panfrost_screen *screen = pan_screen(gallium->screen);
>  
> -panfrost_drm_allocate_slab(screen, &ctx->scratchpad, 64*4, false, 0, 
> 0, 0);
> -panfrost_drm_allocate_slab(screen, &ctx->tiler_heap, 4096, false, 
> PAN_ALLOCATE_INVISIBLE | PAN_ALLOCATE_GROWABLE, 1, 128);
> -panfrost_drm_allocate_slab(screen, &ctx->tiler_dummy, 1, false, 
> PAN_ALLOCATE_INVISIBLE, 0, 0);
> +ctx->scratchpad = panfrost_drm_create_bo(screen, 64 * 4 * 4096, 0);
> +ctx->tiler_heap = panfrost_drm_create_bo(screen, 4096 * 4096,
> + PAN_ALLOCATE_INVISIBLE |
> + PAN_ALLOCATE_GROWABLE);
> +ctx->tiler_dummy = panfrost_drm_create_bo(screen, 4096,
> +  PAN_ALLOCATE_INVISIBLE);
> +assert(ctx->scratchpad && ctx->tiler_heap && ctx->tiler_dummy);
>  }
>  
>  /* New context creation, which also does hardware initialisation since I 
> don't
> diff --git a/src/gallium/drivers/panfrost/pan_context.h 
> b/src/gallium/drivers/panfrost/pan_context.h
> index 5af950e10013..8f9cc44fedac 100644
> --- a/src/gallium/drivers/panfrost/pan_context.h
> +++ b/src/gallium/drivers/panfrost/pan_context.h
> @@ -126,10 +126,10 @@ struct panfrost_context {
>  struct pipe_framebuffer_state pipe_framebuffer;
>  struct panfrost_streamout streamout;
>  
> +struct panfrost_bo *scratchpad;
> +struct panfros

Re: [Mesa-dev] [PATCH v3 04/25] panfrost: Make transient allocation rely on the BO cache

2019-09-05 Thread Alyssa Rosenzweig
Reviewed-by: Alyssa Rosenzweig 

> Right now, the transient memory allocator implements its own BO caching
> mechanism, which is not really needed since we already have a generic
> BO cache. Let's simplify things a bit.
> 
> Signed-off-by: Boris Brezillon 
> Alyssa Rosenzweig 
> ---
> Changes in v3:
> * Collect R-b
> 
> Changes in v2:
> * None
> ---
>  src/gallium/drivers/panfrost/pan_allocate.c | 80 -
>  src/gallium/drivers/panfrost/pan_job.c  | 11 ---
>  src/gallium/drivers/panfrost/pan_job.h  |  4 +-
>  src/gallium/drivers/panfrost/pan_screen.c   |  4 --
>  src/gallium/drivers/panfrost/pan_screen.h   | 21 --
>  5 files changed, 16 insertions(+), 104 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/pan_allocate.c 
> b/src/gallium/drivers/panfrost/pan_allocate.c
> index d8a594551c76..a22b1a5a88d6 100644
> --- a/src/gallium/drivers/panfrost/pan_allocate.c
> +++ b/src/gallium/drivers/panfrost/pan_allocate.c
> @@ -34,27 +34,6 @@
>  /* TODO: What does this actually have to be? */
>  #define ALIGNMENT 128
>  
> -/* Allocate a new transient slab */
> -
> -static struct panfrost_bo *
> -panfrost_create_slab(struct panfrost_screen *screen, unsigned *index)
> -{
> -/* Allocate a new slab on the screen */
> -
> -struct panfrost_bo **new =
> -util_dynarray_grow(&screen->transient_bo,
> -struct panfrost_bo *, 1);
> -
> -struct panfrost_bo *alloc = panfrost_drm_create_bo(screen, 
> TRANSIENT_SLAB_SIZE, 0);
> -
> -*new = alloc;
> -
> -/* Return the BO as well as the index we just added */
> -
> -*index = util_dynarray_num_elements(&screen->transient_bo, void *) - 
> 1;
> -return alloc;
> -}
> -
>  /* Transient command stream pooling: command stream uploads try to simply 
> copy
>   * into whereever we left off. If there isn't space, we allocate a new entry
>   * into the pool and copy there */
> @@ -72,59 +51,32 @@ panfrost_allocate_transient(struct panfrost_context *ctx, 
> size_t sz)
>  struct panfrost_bo *bo = NULL;
>  
>  unsigned offset = 0;
> -bool update_offset = false;
>  
> -pthread_mutex_lock(&screen->transient_lock);
> -bool has_current = batch->transient_indices.size;
>  bool fits_in_current = (batch->transient_offset + sz) < 
> TRANSIENT_SLAB_SIZE;
>  
> -if (likely(has_current && fits_in_current)) {
> -/* We can reuse the topmost BO, so get it */
> -unsigned idx = util_dynarray_top(&batch->transient_indices, 
> unsigned);
> -bo = pan_bo_for_index(screen, idx);
> +if (likely(batch->transient_bo && fits_in_current)) {
> +/* We can reuse the current BO, so get it */
> +bo = batch->transient_bo;
>  
>  /* Use the specified offset */
>  offset = batch->transient_offset;
> -update_offset = true;
> -} else if (sz < TRANSIENT_SLAB_SIZE) {
> -/* We can't reuse the topmost BO, but we can get a new one.
> - * First, look for a free slot */
> -
> -unsigned count = 
> util_dynarray_num_elements(&screen->transient_bo, void *);
> -unsigned index = 0;
> -
> -unsigned free = __bitset_ffs(
> -screen->free_transient,
> -count / BITSET_WORDBITS);
> -
> -if (likely(free)) {
> -/* Use this one */
> -index = free - 1;
> -
> -/* It's ours, so no longer free */
> -BITSET_CLEAR(screen->free_transient, index);
> -
> -/* Grab the BO */
> -bo = pan_bo_for_index(screen, index);
> -} else {
> -/* Otherwise, create a new BO */
> -bo = panfrost_create_slab(screen, &index);
> -}
> -
> -panfrost_batch_add_bo(batch, bo);
> -
> -/* Remember we created this */
> -util_dynarray_append(&batch->transient_indices, unsigned, 
> index);
> -
> -update_offset = true;
> +batch->transient_offset = offset + sz;
>  } else {
> -/* Create a new BO and reference it */
> -bo = panfrost_drm_create_bo(screen, ALIGN_POT(sz, 4096), 0);
> +size_t bo_sz = sz < TRANSIENT_SLAB_SIZE ?
> +   TRANSIENT_SLAB_SIZE : ALIGN_POT(sz, 4096);
> +
> +/* We can't reuse the current BO, but we can create a new 
> one. */
> +bo = panfrost_drm_create_bo(screen, bo_sz, 0);
>  panfrost_batch_add_bo(batch, bo);
>  
>  /* Creating a BO adds a reference, and then the job adds a
>   * second one. So we need to pop back one reference

Re: [Mesa-dev] [PATCH v3 03/25] panfrost: Stop passing a ctx to functions being passed a batch

2019-09-05 Thread Alyssa Rosenzweig
Reviewed-by: Alyssa Rosenzweig 

On Thu, Sep 05, 2019 at 09:41:28PM +0200, Boris Brezillon wrote:
> The context can be retrieved from batch->ctx.
> 
> Signed-off-by: Boris Brezillon 
> Alyssa Rosenzweig 
> Reviewed-by: Daniel Stone 
> ---
> Changes in v3:
> * Collect R-bs
> 
> Changes in v2:
> * s/panfrost_job_get_batch_for_fbo/panfrost_get_batch_for_fbo/
> * s/panfrost_job_batch/panfrost_batch/g
> ---
>  src/gallium/drivers/panfrost/pan_context.c |  6 +++---
>  src/gallium/drivers/panfrost/pan_drm.c |  2 +-
>  src/gallium/drivers/panfrost/pan_job.c | 25 +-
>  src/gallium/drivers/panfrost/pan_job.h | 11 --
>  4 files changed, 23 insertions(+), 21 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/pan_context.c 
> b/src/gallium/drivers/panfrost/pan_context.c
> index ce895822014d..292de7fe132c 100644
> --- a/src/gallium/drivers/panfrost/pan_context.c
> +++ b/src/gallium/drivers/panfrost/pan_context.c
> @@ -180,7 +180,7 @@ panfrost_clear(
>  struct panfrost_context *ctx = pan_context(pipe);
>  struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
>  
> -panfrost_batch_clear(ctx, batch, buffers, color, depth, stencil);
> +panfrost_batch_clear(batch, buffers, color, depth, stencil);
>  }
>  
>  static mali_ptr
> @@ -907,7 +907,7 @@ panfrost_emit_for_draw(struct panfrost_context *ctx, bool 
> with_vertex_data)
>  SET_BIT(ctx->fragment_shader_core.unknown2_4, MALI_NO_MSAA, 
> !msaa);
>  }
>  
> -panfrost_batch_set_requirements(ctx, batch);
> +panfrost_batch_set_requirements(batch);
>  
>  if (ctx->occlusion_query) {
>  ctx->payloads[PIPE_SHADER_FRAGMENT].gl_enables |= 
> MALI_OCCLUSION_QUERY | MALI_OCCLUSION_PRECISE;
> @@ -1329,7 +1329,7 @@ panfrost_submit_frame(struct panfrost_context *ctx, 
> bool flush_immediate,
>struct pipe_fence_handle **fence,
>struct panfrost_batch *batch)
>  {
> -panfrost_batch_submit(ctx, batch);
> +panfrost_batch_submit(batch);
>  
>  /* If visual, we can stall a frame */
>  
> diff --git a/src/gallium/drivers/panfrost/pan_drm.c 
> b/src/gallium/drivers/panfrost/pan_drm.c
> index 768d9602eee7..040cb1368e4e 100644
> --- a/src/gallium/drivers/panfrost/pan_drm.c
> +++ b/src/gallium/drivers/panfrost/pan_drm.c
> @@ -355,7 +355,7 @@ panfrost_drm_force_flush_fragment(struct panfrost_context 
> *ctx,
>  ctx->last_fragment_flushed = true;
>  
>  /* The job finished up, so we're safe to clean it up now */
> -panfrost_free_batch(ctx, ctx->last_batch);
> +panfrost_free_batch(ctx->last_batch);
>  }
>  
>  if (fence) {
> diff --git a/src/gallium/drivers/panfrost/pan_job.c 
> b/src/gallium/drivers/panfrost/pan_job.c
> index f136ccb97fcd..0d19c2b4c5cd 100644
> --- a/src/gallium/drivers/panfrost/pan_job.c
> +++ b/src/gallium/drivers/panfrost/pan_job.c
> @@ -54,11 +54,13 @@ panfrost_create_batch(struct panfrost_context *ctx)
>  }
>  
>  void
> -panfrost_free_batch(struct panfrost_context *ctx, struct panfrost_batch 
> *batch)
> +panfrost_free_batch(struct panfrost_batch *batch)
>  {
>  if (!batch)
>  return;
>  
> +struct panfrost_context *ctx = batch->ctx;
> +
>  set_foreach(batch->bos, entry) {
>  struct panfrost_bo *bo = (struct panfrost_bo *)entry->key;
>  panfrost_bo_unreference(ctx->base.screen, bo);
> @@ -195,18 +197,20 @@ panfrost_flush_jobs_writing_resource(struct 
> panfrost_context *panfrost,
> prsc);
>  if (entry) {
>  struct panfrost_batch *batch = entry->data;
> -panfrost_batch_submit(panfrost, job);
> +panfrost_batch_submit(job);
>  }
>  #endif
>  /* TODO stub */
>  }
>  
>  void
> -panfrost_batch_submit(struct panfrost_context *ctx, struct panfrost_batch 
> *batch)
> +panfrost_batch_submit(struct panfrost_batch *batch)
>  {
> +assert(batch);
> +
> +struct panfrost_context *ctx = batch->ctx;
>  int ret;
>  
> -assert(batch);
>  panfrost_scoreboard_link_batch(batch);
>  
>  bool has_draws = batch->last_job.gpu;
> @@ -232,9 +236,10 @@ panfrost_batch_submit(struct panfrost_context *ctx, 
> struct panfrost_batch *batch
>  }
>  
>  void
> -panfrost_batch_set_requirements(struct panfrost_context *ctx,
> -struct panfrost_batch *batch)
> +panfrost_batch_set_requirements(struct panfrost_batch *batch)
>  {
> +struct panfrost_context *ctx = batch->ctx;
> +
>  if (ctx->rasterizer && ctx->rasterizer->base.multisample)
>  batch->requirements |= PAN_REQ_MSAA;
>  
> @@ -336,13 +341,13 @@ pan_pack_color(uint32_t *packed, const union 
> pipe_color_union *color, enum pipe_
>  }
>  
>  void
> -panfrost_batch_clear(struct

Re: [Mesa-dev] [PATCH v3 02/25] panfrost: Pass a batch to panfrost_drm_submit_vs_fs_batch()

2019-09-05 Thread Alyssa Rosenzweig
Reviewed-by: Alyssa Rosenzweig 

On Thu, Sep 05, 2019 at 09:41:27PM +0200, Boris Brezillon wrote:
> Given the function name, it makes more sense to pass it a batch
> directly.
> 
> Signed-off-by: Boris Brezillon 
> Alyssa Rosenzweig 
> Reviewed-by: Daniel Stone 
> ---
> Changes in v3:
> * Collect R-bs
> 
> Changes in v2:
> * s/panfrost_job_get_batch_for_fbo/panfrost_get_batch_for_fbo/
> * s/panfrost_job_batch/panfrost_batch/g
> ---
>  src/gallium/drivers/panfrost/pan_drm.c| 13 ++---
>  src/gallium/drivers/panfrost/pan_job.c|  2 +-
>  src/gallium/drivers/panfrost/pan_screen.h |  3 ++-
>  3 files changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/pan_drm.c 
> b/src/gallium/drivers/panfrost/pan_drm.c
> index 75fc5a726b1f..768d9602eee7 100644
> --- a/src/gallium/drivers/panfrost/pan_drm.c
> +++ b/src/gallium/drivers/panfrost/pan_drm.c
> @@ -248,12 +248,12 @@ panfrost_drm_export_bo(struct panfrost_screen *screen, 
> const struct panfrost_bo
>  }
>  
>  static int
> -panfrost_drm_submit_batch(struct panfrost_context *ctx, u64 first_job_desc,
> +panfrost_drm_submit_batch(struct panfrost_batch *batch, u64 first_job_desc,
>int reqs)
>  {
> +struct panfrost_context *ctx = batch->ctx;
>  struct pipe_context *gallium = (struct pipe_context *) ctx;
>  struct panfrost_screen *screen = pan_screen(gallium->screen);
> -struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
>  struct drm_panfrost_submit submit = {0,};
>  int *bo_handles, ret;
>  
> @@ -293,23 +293,22 @@ panfrost_drm_submit_batch(struct panfrost_context *ctx, 
> u64 first_job_desc,
>  }
>  
>  int
> -panfrost_drm_submit_vs_fs_batch(struct panfrost_context *ctx, bool has_draws)
> +panfrost_drm_submit_vs_fs_batch(struct panfrost_batch *batch, bool has_draws)
>  {
> +struct panfrost_context *ctx = batch->ctx;
>  int ret = 0;
>  
> -struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
> -
>  panfrost_batch_add_bo(batch, ctx->scratchpad.bo);
>  panfrost_batch_add_bo(batch, ctx->tiler_heap.bo);
>  panfrost_batch_add_bo(batch, batch->polygon_list);
>  
>  if (batch->first_job.gpu) {
> -ret = panfrost_drm_submit_batch(ctx, batch->first_job.gpu, 
> 0);
> +ret = panfrost_drm_submit_batch(batch, batch->first_job.gpu, 
> 0);
>  assert(!ret);
>  }
>  
>  if (batch->first_tiler.gpu || batch->clear) {
> -ret = panfrost_drm_submit_batch(ctx,
> +ret = panfrost_drm_submit_batch(batch,
>  panfrost_fragment_job(ctx, 
> has_draws),
>  PANFROST_JD_REQ_FS);
>  assert(!ret);
> diff --git a/src/gallium/drivers/panfrost/pan_job.c 
> b/src/gallium/drivers/panfrost/pan_job.c
> index a019c2adf69a..f136ccb97fcd 100644
> --- a/src/gallium/drivers/panfrost/pan_job.c
> +++ b/src/gallium/drivers/panfrost/pan_job.c
> @@ -211,7 +211,7 @@ panfrost_batch_submit(struct panfrost_context *ctx, 
> struct panfrost_batch *batch
>  
>  bool has_draws = batch->last_job.gpu;
>  
> -ret = panfrost_drm_submit_vs_fs_batch(ctx, has_draws);
> +ret = panfrost_drm_submit_vs_fs_batch(batch, has_draws);
>  
>  if (ret)
>  fprintf(stderr, "panfrost_batch_submit failed: %d\n", ret);
> diff --git a/src/gallium/drivers/panfrost/pan_screen.h 
> b/src/gallium/drivers/panfrost/pan_screen.h
> index 3017b9c154f4..11cbb72075ab 100644
> --- a/src/gallium/drivers/panfrost/pan_screen.h
> +++ b/src/gallium/drivers/panfrost/pan_screen.h
> @@ -39,6 +39,7 @@
>  #include 
>  #include "pan_allocate.h"
>  
> +struct panfrost_batch;
>  struct panfrost_context;
>  struct panfrost_resource;
>  struct panfrost_screen;
> @@ -163,7 +164,7 @@ panfrost_drm_import_bo(struct panfrost_screen *screen, 
> int fd);
>  int
>  panfrost_drm_export_bo(struct panfrost_screen *screen, const struct 
> panfrost_bo *bo);
>  int
> -panfrost_drm_submit_vs_fs_batch(struct panfrost_context *ctx, bool 
> has_draws);
> +panfrost_drm_submit_vs_fs_batch(struct panfrost_batch *batch, bool 
> has_draws);
>  void
>  panfrost_drm_force_flush_fragment(struct panfrost_context *ctx,
>struct pipe_fence_handle **fence);
> -- 
> 2.21.0
___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

Re: [Mesa-dev] [PATCH v3 01/25] panfrost: s/job/batch/

2019-09-05 Thread Alyssa Rosenzweig
Reviewed-by: Alyssa Rosenzweig 

On Thu, Sep 05, 2019 at 09:41:26PM +0200, Boris Brezillon wrote:
> What we currently call a job is actually a batch containing several jobs
> all attached to a rendering operation targeting a specific FBO.
> 
> Let's rename structs, functions, variables and fields to reflect this
> fact.
> 
> Suggested-by: Alyssa Rosenzweig 
> Signed-off-by: Boris Brezillon 
> ---
> Changes in v3:
> * s/panfrost_job_/panfrost_batch_/
> 
> Changes in v2:
> * s/panfrost_job_get_batch_for_fbo/panfrost_get_batch_for_fbo/
> * s/panfrost_job_batch/panfrost_batch/g
> ---
>  src/gallium/drivers/panfrost/pan_allocate.c   |   6 +-
>  src/gallium/drivers/panfrost/pan_blend_cso.c  |   4 +-
>  src/gallium/drivers/panfrost/pan_compute.c|   2 +-
>  src/gallium/drivers/panfrost/pan_context.c|  72 +++
>  src/gallium/drivers/panfrost/pan_context.h|  12 +-
>  src/gallium/drivers/panfrost/pan_drm.c|  33 +--
>  src/gallium/drivers/panfrost/pan_fragment.c   |  20 +-
>  src/gallium/drivers/panfrost/pan_instancing.c |   6 +-
>  src/gallium/drivers/panfrost/pan_job.c| 198 +-
>  src/gallium/drivers/panfrost/pan_job.h|  72 +++
>  src/gallium/drivers/panfrost/pan_mfbd.c   |  30 +--
>  src/gallium/drivers/panfrost/pan_resource.c   |   4 +-
>  src/gallium/drivers/panfrost/pan_scoreboard.c |  22 +-
>  src/gallium/drivers/panfrost/pan_screen.h |   2 +-
>  src/gallium/drivers/panfrost/pan_sfbd.c   |  36 ++--
>  src/gallium/drivers/panfrost/pan_varyings.c   |   4 +-
>  16 files changed, 264 insertions(+), 259 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/pan_allocate.c 
> b/src/gallium/drivers/panfrost/pan_allocate.c
> index 2efb01c75589..d8a594551c76 100644
> --- a/src/gallium/drivers/panfrost/pan_allocate.c
> +++ b/src/gallium/drivers/panfrost/pan_allocate.c
> @@ -63,7 +63,7 @@ struct panfrost_transfer
>  panfrost_allocate_transient(struct panfrost_context *ctx, size_t sz)
>  {
>  struct panfrost_screen *screen = pan_screen(ctx->base.screen);
> -struct panfrost_job *batch = panfrost_get_job_for_fbo(ctx);
> +struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
>  
>  /* Pad the size */
>  sz = ALIGN_POT(sz, ALIGNMENT);
> @@ -111,7 +111,7 @@ panfrost_allocate_transient(struct panfrost_context *ctx, 
> size_t sz)
>  bo = panfrost_create_slab(screen, &index);
>  }
>  
> -panfrost_job_add_bo(batch, bo);
> +panfrost_batch_add_bo(batch, bo);
>  
>  /* Remember we created this */
>  util_dynarray_append(&batch->transient_indices, unsigned, 
> index);
> @@ -120,7 +120,7 @@ panfrost_allocate_transient(struct panfrost_context *ctx, 
> size_t sz)
>  } else {
>  /* Create a new BO and reference it */
>  bo = panfrost_drm_create_bo(screen, ALIGN_POT(sz, 4096), 0);
> -panfrost_job_add_bo(batch, bo);
> +panfrost_batch_add_bo(batch, bo);
>  
>  /* Creating a BO adds a reference, and then the job adds a
>   * second one. So we need to pop back one reference */
> diff --git a/src/gallium/drivers/panfrost/pan_blend_cso.c 
> b/src/gallium/drivers/panfrost/pan_blend_cso.c
> index 43121335f5e7..ab49772f3ba3 100644
> --- a/src/gallium/drivers/panfrost/pan_blend_cso.c
> +++ b/src/gallium/drivers/panfrost/pan_blend_cso.c
> @@ -227,7 +227,7 @@ struct panfrost_blend_final
>  panfrost_get_blend_for_context(struct panfrost_context *ctx, unsigned rti)
>  {
>  struct panfrost_screen *screen = pan_screen(ctx->base.screen);
> -struct panfrost_job *job = panfrost_get_job_for_fbo(ctx);
> +struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
>  
>  /* Grab the format, falling back gracefully if called invalidly 
> (which
>   * has to happen for no-color-attachment FBOs, for instance)  */
> @@ -276,7 +276,7 @@ panfrost_get_blend_for_context(struct panfrost_context 
> *ctx, unsigned rti)
>  memcpy(final.shader.bo->cpu, shader->buffer, shader->size);
>  
>  /* Pass BO ownership to job */
> -panfrost_job_add_bo(job, final.shader.bo);
> +panfrost_batch_add_bo(batch, final.shader.bo);
>  panfrost_bo_unreference(ctx->base.screen, final.shader.bo);
>  
>  if (shader->patch_index) {
> diff --git a/src/gallium/drivers/panfrost/pan_compute.c 
> b/src/gallium/drivers/panfrost/pan_compute.c
> index 50e70cd8298e..51967fe481ef 100644
> --- a/src/gallium/drivers/panfrost/pan_compute.c
> +++ b/src/gallium/drivers/panfrost/pan_compute.c
> @@ -128,7 +128,7 @@ panfrost_launch_grid(struct pipe_context *pipe,
>  memcpy(transfer.cpu + sizeof(job), payload, sizeof(*payload));
>  
>  /* TODO: Do we want a special compute-only batch? */
> -struct panfrost_job *batch = panfrost_get_job_for_fbo(ctx);
> +struct panfrost

[Mesa-dev] [PATCH v3 14/25] panfrost: Move the fence creation in panfrost_flush()

2019-09-05 Thread Boris Brezillon
panfrost_flush() is about to be reworked to flush all pending batches,
but we want the fence to block on the last one. Let's move the fence
creation logic into panfrost_flush() to prepare for this situation.

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_context.c | 13 +
 src/gallium/drivers/panfrost/pan_context.h |  3 +++
 src/gallium/drivers/panfrost/pan_drm.c | 11 ++-
 src/gallium/drivers/panfrost/pan_screen.h  |  3 +--
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_context.c 
b/src/gallium/drivers/panfrost/pan_context.c
index e34f5757b1cf..6552052b8cad 100644
--- a/src/gallium/drivers/panfrost/pan_context.c
+++ b/src/gallium/drivers/panfrost/pan_context.c
@@ -1308,7 +1308,6 @@ panfrost_queue_draw(struct panfrost_context *ctx)
 
 static void
 panfrost_submit_frame(struct panfrost_context *ctx, bool flush_immediate,
-  struct pipe_fence_handle **fence,
   struct panfrost_batch *batch)
 {
 panfrost_batch_submit(batch);
@@ -1316,14 +1315,14 @@ panfrost_submit_frame(struct panfrost_context *ctx, 
bool flush_immediate,
 /* If visual, we can stall a frame */
 
 if (!flush_immediate)
-panfrost_drm_force_flush_fragment(ctx, fence);
+panfrost_drm_force_flush_fragment(ctx);
 
 ctx->last_fragment_flushed = false;
 ctx->last_batch = batch;
 
 /* If readback, flush now (hurts the pipelined performance) */
 if (flush_immediate)
-panfrost_drm_force_flush_fragment(ctx, fence);
+panfrost_drm_force_flush_fragment(ctx);
 }
 
 static void
@@ -1452,7 +1451,13 @@ panfrost_flush(
 bool flush_immediate = /*flags & PIPE_FLUSH_END_OF_FRAME*/true;
 
 /* Submit the frame itself */
-panfrost_submit_frame(ctx, flush_immediate, fence, batch);
+panfrost_submit_frame(ctx, flush_immediate, batch);
+
+if (fence) {
+struct panfrost_fence *f = panfrost_fence_create(ctx);
+pipe->screen->fence_reference(pipe->screen, fence, NULL);
+*fence = (struct pipe_fence_handle *)f;
+}
 
 /* Prepare for the next frame */
 panfrost_invalidate_frame(ctx);
diff --git a/src/gallium/drivers/panfrost/pan_context.h 
b/src/gallium/drivers/panfrost/pan_context.h
index 02552ed23de2..6ad2cc81c781 100644
--- a/src/gallium/drivers/panfrost/pan_context.h
+++ b/src/gallium/drivers/panfrost/pan_context.h
@@ -297,6 +297,9 @@ pan_context(struct pipe_context *pcontext)
 return (struct panfrost_context *) pcontext;
 }
 
+struct panfrost_fence *
+panfrost_fence_create(struct panfrost_context *ctx);
+
 struct pipe_context *
 panfrost_create_context(struct pipe_screen *screen, void *priv, unsigned 
flags);
 
diff --git a/src/gallium/drivers/panfrost/pan_drm.c 
b/src/gallium/drivers/panfrost/pan_drm.c
index e4b75fad4078..47cec9f39fef 100644
--- a/src/gallium/drivers/panfrost/pan_drm.c
+++ b/src/gallium/drivers/panfrost/pan_drm.c
@@ -109,7 +109,7 @@ panfrost_drm_submit_vs_fs_batch(struct panfrost_batch 
*batch, bool has_draws)
 return ret;
 }
 
-static struct panfrost_fence *
+struct panfrost_fence *
 panfrost_fence_create(struct panfrost_context *ctx)
 {
 struct pipe_context *gallium = (struct pipe_context *) ctx;
@@ -136,8 +136,7 @@ panfrost_fence_create(struct panfrost_context *ctx)
 }
 
 void
-panfrost_drm_force_flush_fragment(struct panfrost_context *ctx,
-  struct pipe_fence_handle **fence)
+panfrost_drm_force_flush_fragment(struct panfrost_context *ctx)
 {
 struct pipe_context *gallium = (struct pipe_context *) ctx;
 struct panfrost_screen *screen = pan_screen(gallium->screen);
@@ -149,12 +148,6 @@ panfrost_drm_force_flush_fragment(struct panfrost_context 
*ctx,
 /* The job finished up, so we're safe to clean it up now */
 panfrost_free_batch(ctx->last_batch);
 }
-
-if (fence) {
-struct panfrost_fence *f = panfrost_fence_create(ctx);
-gallium->screen->fence_reference(gallium->screen, fence, NULL);
-*fence = (struct pipe_fence_handle *)f;
-}
 }
 
 unsigned
diff --git a/src/gallium/drivers/panfrost/pan_screen.h 
b/src/gallium/drivers/panfrost/pan_screen.h
index aab141a563c2..4acdd3572c9f 100644
--- a/src/gallium/drivers/panfrost/pan_screen.h
+++ b/src/gallium/drivers/panfrost/pan_screen.h
@@ -123,8 +123,7 @@ pan_screen(struct pipe_screen *p)
 int
 panfrost_drm_submit_vs_fs_batch(struct panfrost_batch *batch, bool has_draws);
 void
-panfrost_drm_force_flush_fragment(struct panfrost_context *ctx,
-  struct pipe_fence_handle **fence);
+panfrost_drm_force_flush_fragment(struct panfrost_context *ctx);
 unsigned
 panfrost_drm_query_gpu_version(struct panfrost_screen *screen);
 int
-- 
2.21.0

___

[Mesa-dev] [PATCH v3 23/25] panfrost: Remove uneeded add_bo() in initialize_surface()

2019-09-05 Thread Boris Brezillon
Should already be added in panfrost_draw_vbo() and panfrost_clear(),
no need to add it here too.

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_fragment.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_fragment.c 
b/src/gallium/drivers/panfrost/pan_fragment.c
index cbb95b79f52a..00ff363a1bba 100644
--- a/src/gallium/drivers/panfrost/pan_fragment.c
+++ b/src/gallium/drivers/panfrost/pan_fragment.c
@@ -42,9 +42,6 @@ panfrost_initialize_surface(
 struct panfrost_resource *rsrc = pan_resource(surf->texture);
 
 rsrc->slices[level].initialized = true;
-
-assert(rsrc->bo);
-panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_RW);
 }
 
 /* Generate a fragment job. This should be called once per frame. (According to
-- 
2.21.0

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

[Mesa-dev] [PATCH v3 21/25] panfrost: Add new helpers to describe job dependencies on BOs

2019-09-05 Thread Boris Brezillon
Batch ordering is, most of the time, enforced by the resources batches
read from and write to. This patch adds some new helpers to keep track
of that and modifies the existing add_bo() helper to take flags encoding
the type of access a batch intends to perform on this BO.

Since all resources are backed by BOs, and
given we might want to describe dependencies on BOs that are not
exposed as resources, we decided to use BOs as keys on our hash tables.
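
Illustrative use, matching the hunks below (variable names are only for
the example): a sampled texture is added with PAN_SHARED_BO_RD, an SSBO
with PAN_SHARED_BO_RW, and batch-private allocations (shader binaries,
transient buffers, ...) with PAN_PRIVATE_BO:

        panfrost_batch_add_bo(batch, tex_rsrc->bo, PAN_SHARED_BO_RD);
        panfrost_batch_add_bo(batch, ssbo_rsrc->bo, PAN_SHARED_BO_RW);
        panfrost_batch_add_bo(batch, variant->bo, PAN_PRIVATE_BO);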

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_allocate.c   |   2 +-
 src/gallium/drivers/panfrost/pan_blend_cso.c  |   2 +-
 src/gallium/drivers/panfrost/pan_context.c|  10 +-
 src/gallium/drivers/panfrost/pan_context.h|   5 +
 src/gallium/drivers/panfrost/pan_drm.c|   6 +-
 src/gallium/drivers/panfrost/pan_fragment.c   |   2 +-
 src/gallium/drivers/panfrost/pan_instancing.c |   2 +-
 src/gallium/drivers/panfrost/pan_job.c| 124 +-
 src/gallium/drivers/panfrost/pan_job.h|  21 ++-
 src/gallium/drivers/panfrost/pan_varyings.c   |   2 +-
 10 files changed, 159 insertions(+), 17 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_allocate.c 
b/src/gallium/drivers/panfrost/pan_allocate.c
index 7938196e3e4f..7b0a7baa32dc 100644
--- a/src/gallium/drivers/panfrost/pan_allocate.c
+++ b/src/gallium/drivers/panfrost/pan_allocate.c
@@ -67,7 +67,7 @@ panfrost_allocate_transient(struct panfrost_batch *batch, 
size_t sz)
 
 /* We can't reuse the current BO, but we can create a new one. 
*/
 bo = panfrost_bo_create(screen, bo_sz, 0);
-panfrost_batch_add_bo(batch, bo);
+panfrost_batch_add_bo(batch, bo, PAN_PRIVATE_BO);
 
 /* Creating a BO adds a reference, and then the job adds a
  * second one. So we need to pop back one reference */
diff --git a/src/gallium/drivers/panfrost/pan_blend_cso.c 
b/src/gallium/drivers/panfrost/pan_blend_cso.c
index 69897be4f007..b27e36a7ce28 100644
--- a/src/gallium/drivers/panfrost/pan_blend_cso.c
+++ b/src/gallium/drivers/panfrost/pan_blend_cso.c
@@ -277,7 +277,7 @@ panfrost_get_blend_for_context(struct panfrost_context 
*ctx, unsigned rti)
 memcpy(final.shader.bo->cpu, shader->buffer, shader->size);
 
 /* Pass BO ownership to job */
-panfrost_batch_add_bo(batch, final.shader.bo);
+panfrost_batch_add_bo(batch, final.shader.bo, PAN_PRIVATE_BO);
 panfrost_bo_unreference(final.shader.bo);
 
 if (shader->patch_index) {
diff --git a/src/gallium/drivers/panfrost/pan_context.c 
b/src/gallium/drivers/panfrost/pan_context.c
index 3e0a3e9df992..c31dc1580524 100644
--- a/src/gallium/drivers/panfrost/pan_context.c
+++ b/src/gallium/drivers/panfrost/pan_context.c
@@ -160,6 +160,7 @@ panfrost_clear(
 struct panfrost_context *ctx = pan_context(pipe);
 struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
 
+panfrost_batch_add_fbo_bos(batch);
 panfrost_batch_clear(batch, buffers, color, depth, stencil);
 }
 
@@ -605,7 +606,7 @@ panfrost_upload_tex(
 
 /* Add the BO to the job so it's retained until the job is done. */
 struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
-panfrost_batch_add_bo(batch, rsrc->bo);
+panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_RD);
 
 /* Add the usage flags in, since they can change across the CSO
  * lifetime due to layout switches */
@@ -724,7 +725,7 @@ static void panfrost_upload_ssbo_sysval(
 struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
 struct panfrost_bo *bo = pan_resource(sb.buffer)->bo;
 
-panfrost_batch_add_bo(batch, bo);
+panfrost_batch_add_bo(batch, bo, PAN_SHARED_BO_RW);
 
 /* Upload address and size as sysval */
 uniform->du[0] = bo->gpu + sb.buffer_offset;
@@ -878,6 +879,7 @@ panfrost_emit_for_draw(struct panfrost_context *ctx, bool 
with_vertex_data)
 struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
 struct panfrost_screen *screen = pan_screen(ctx->base.screen);
 
+panfrost_batch_add_fbo_bos(batch);
 panfrost_attach_vt_framebuffer(ctx);
 
 if (with_vertex_data) {
@@ -929,7 +931,7 @@ panfrost_emit_for_draw(struct panfrost_context *ctx, bool 
with_vertex_data)
 
 panfrost_patch_shader_state(ctx, variant, 
PIPE_SHADER_FRAGMENT, false);
 
-panfrost_batch_add_bo(batch, variant->bo);
+panfrost_batch_add_bo(batch, variant->bo, PAN_PRIVATE_BO);
 
 #define COPY(name) ctx->fragment_shader_core.name = variant->tripipe->name
 
@@ -1389,7 +1391,7 @@ panfrost_get_index_buffer_mapped(struct panfrost_context 
*ctx, const struct pipe
 
 if (!info->has_user_indices) {
 /* Only resources can be directly mapped */
-panfrost_batch_add_bo(batch, rsrc->bo);
+panfrost_batch_add_bo(batch, rsrc->bo, 

[Mesa-dev] [PATCH v3 25/25] panfrost/ci: New tests are passing

2019-09-05 Thread Boris Brezillon
All dEQP-GLES2.functional.fbo.render.texsubimage.* tests are now
passing.

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/ci/expected-failures.txt | 4 
 1 file changed, 4 deletions(-)

diff --git a/src/gallium/drivers/panfrost/ci/expected-failures.txt 
b/src/gallium/drivers/panfrost/ci/expected-failures.txt
index b0fc872a3009..3c707230dd23 100644
--- a/src/gallium/drivers/panfrost/ci/expected-failures.txt
+++ b/src/gallium/drivers/panfrost/ci/expected-failures.txt
@@ -53,10 +53,6 @@ 
dEQP-GLES2.functional.fbo.render.shared_colorbuffer.tex2d_rgb_depth_component16
 
dEQP-GLES2.functional.fbo.render.shared_depthbuffer.rbo_rgb565_depth_component16
 Fail
 
dEQP-GLES2.functional.fbo.render.shared_depthbuffer.tex2d_rgba_depth_component16
 Fail
 
dEQP-GLES2.functional.fbo.render.shared_depthbuffer.tex2d_rgb_depth_component16 
Fail
-dEQP-GLES2.functional.fbo.render.texsubimage.after_render_tex2d_rgba Fail
-dEQP-GLES2.functional.fbo.render.texsubimage.after_render_tex2d_rgb Fail
-dEQP-GLES2.functional.fbo.render.texsubimage.between_render_tex2d_rgba Fail
-dEQP-GLES2.functional.fbo.render.texsubimage.between_render_tex2d_rgb Fail
 dEQP-GLES2.functional.fragment_ops.depth_stencil.random.0 Fail
 dEQP-GLES2.functional.fragment_ops.depth_stencil.random.10 Fail
 dEQP-GLES2.functional.fragment_ops.depth_stencil.random.11 Fail
-- 
2.21.0

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

[Mesa-dev] [PATCH v3 19/25] panfrost: Pass a batch to panfrost_set_value_job()

2019-09-05 Thread Boris Brezillon
So we can emit SET_VALUE jobs for a batch that's not currently bound
to the context.

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_scoreboard.c | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_scoreboard.c 
b/src/gallium/drivers/panfrost/pan_scoreboard.c
index f0771a2c5b56..f340bb62662e 100644
--- a/src/gallium/drivers/panfrost/pan_scoreboard.c
+++ b/src/gallium/drivers/panfrost/pan_scoreboard.c
@@ -270,7 +270,7 @@ panfrost_scoreboard_queue_fused_job_prepend(
 /* Generates a set value job, used below as part of TILER job scheduling. */
 
 static struct panfrost_transfer
-panfrost_set_value_job(struct panfrost_context *ctx, mali_ptr polygon_list)
+panfrost_set_value_job(struct panfrost_batch *batch, mali_ptr polygon_list)
 {
 struct mali_job_descriptor_header job = {
 .job_type = JOB_TYPE_SET_VALUE,
@@ -282,7 +282,6 @@ panfrost_set_value_job(struct panfrost_context *ctx, 
mali_ptr polygon_list)
 .unknown = 0x3,
 };
 
-struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
 struct panfrost_transfer transfer = panfrost_allocate_transient(batch, 
sizeof(job) + sizeof(payload));
 memcpy(transfer.cpu, &job, sizeof(job));
 memcpy(transfer.cpu + sizeof(job), &payload, sizeof(payload));
@@ -303,11 +302,10 @@ panfrost_scoreboard_set_value(struct panfrost_batch 
*batch)
 /* Okay, we do. Let's generate it. We'll need the job's polygon list
  * regardless of size. */
 
-struct panfrost_context *ctx = batch->ctx;
 mali_ptr polygon_list = panfrost_batch_get_polygon_list(batch, 0);
 
 struct panfrost_transfer job =
-panfrost_set_value_job(ctx, polygon_list);
+panfrost_set_value_job(batch, polygon_list);
 
 /* Queue it */
 panfrost_scoreboard_queue_compute_job(batch, job);
-- 
2.21.0

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

[Mesa-dev] [PATCH v3 16/25] panfrost: Pass a batch to panfrost_{allocate, upload}_transient()

2019-09-05 Thread Boris Brezillon
We need that if we want to emit CMDs to a job that's not currently
bound to the context, which in turn will be needed if we want to relax
the job serialization we have right now (only flush jobs when we need
to: on a flush request, or when one job depends on results of other
jobs).

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_allocate.c   | 10 ++--
 src/gallium/drivers/panfrost/pan_allocate.h   |  7 +--
 src/gallium/drivers/panfrost/pan_compute.c| 10 ++--
 src/gallium/drivers/panfrost/pan_context.c| 51 +++
 src/gallium/drivers/panfrost/pan_fragment.c   |  2 +-
 src/gallium/drivers/panfrost/pan_instancing.c |  2 +-
 src/gallium/drivers/panfrost/pan_mfbd.c   |  3 +-
 src/gallium/drivers/panfrost/pan_scoreboard.c |  3 +-
 src/gallium/drivers/panfrost/pan_sfbd.c   |  2 +-
 src/gallium/drivers/panfrost/pan_varyings.c   |  8 +--
 10 files changed, 57 insertions(+), 41 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_allocate.c 
b/src/gallium/drivers/panfrost/pan_allocate.c
index beebb0bc6d7e..7938196e3e4f 100644
--- a/src/gallium/drivers/panfrost/pan_allocate.c
+++ b/src/gallium/drivers/panfrost/pan_allocate.c
@@ -40,10 +40,9 @@
  * into the pool and copy there */
 
 struct panfrost_transfer
-panfrost_allocate_transient(struct panfrost_context *ctx, size_t sz)
+panfrost_allocate_transient(struct panfrost_batch *batch, size_t sz)
 {
-struct panfrost_screen *screen = pan_screen(ctx->base.screen);
-struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
+struct panfrost_screen *screen = pan_screen(batch->ctx->base.screen);
 
 /* Pad the size */
 sz = ALIGN_POT(sz, ALIGNMENT);
@@ -90,9 +89,10 @@ panfrost_allocate_transient(struct panfrost_context *ctx, 
size_t sz)
 }
 
 mali_ptr
-panfrost_upload_transient(struct panfrost_context *ctx, const void *data, 
size_t sz)
+panfrost_upload_transient(struct panfrost_batch *batch, const void *data,
+  size_t sz)
 {
-struct panfrost_transfer transfer = panfrost_allocate_transient(ctx, 
sz);
+struct panfrost_transfer transfer = panfrost_allocate_transient(batch, 
sz);
 memcpy(transfer.cpu, data, sz);
 return transfer.gpu;
 }
diff --git a/src/gallium/drivers/panfrost/pan_allocate.h 
b/src/gallium/drivers/panfrost/pan_allocate.h
index 91c2af9c4f17..f18218fb32a1 100644
--- a/src/gallium/drivers/panfrost/pan_allocate.h
+++ b/src/gallium/drivers/panfrost/pan_allocate.h
@@ -33,7 +33,7 @@
 
 #include "util/list.h"
 
-struct panfrost_context;
+struct panfrost_batch;
 
 /* Represents a fat pointer for GPU-mapped memory, returned from the transient
  * allocator and not used for much else */
@@ -44,9 +44,10 @@ struct panfrost_transfer {
 };
 
 struct panfrost_transfer
-panfrost_allocate_transient(struct panfrost_context *ctx, size_t sz);
+panfrost_allocate_transient(struct panfrost_batch *batch, size_t sz);
 
 mali_ptr
-panfrost_upload_transient(struct panfrost_context *ctx, const void *data, 
size_t sz);
+panfrost_upload_transient(struct panfrost_batch *batch, const void *data,
+  size_t sz);
 
 #endif /* __PAN_ALLOCATE_H__ */
diff --git a/src/gallium/drivers/panfrost/pan_compute.c 
b/src/gallium/drivers/panfrost/pan_compute.c
index 51967fe481ef..4639c1b03c38 100644
--- a/src/gallium/drivers/panfrost/pan_compute.c
+++ b/src/gallium/drivers/panfrost/pan_compute.c
@@ -87,6 +87,9 @@ panfrost_launch_grid(struct pipe_context *pipe,
 {
 struct panfrost_context *ctx = pan_context(pipe);
 
+/* TODO: Do we want a special compute-only batch? */
+struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
+
 ctx->compute_grid = info;
 
 struct mali_job_descriptor_header job = {
@@ -113,7 +116,7 @@ panfrost_launch_grid(struct pipe_context *pipe,
 };
 
 payload->postfix.framebuffer =
-panfrost_upload_transient(ctx, &compute_fbd, 
sizeof(compute_fbd));
+panfrost_upload_transient(batch, &compute_fbd, 
sizeof(compute_fbd));
 
 /* Invoke according to the grid info */
 
@@ -123,13 +126,10 @@ panfrost_launch_grid(struct pipe_context *pipe,
 
 /* Upload the payload */
 
-struct panfrost_transfer transfer = panfrost_allocate_transient(ctx, 
sizeof(job) + sizeof(*payload));
+struct panfrost_transfer transfer = panfrost_allocate_transient(batch, 
sizeof(job) + sizeof(*payload));
 memcpy(transfer.cpu, &job, sizeof(job));
 memcpy(transfer.cpu + sizeof(job), payload, sizeof(*payload));
 
-/* TODO: Do we want a special compute-only batch? */
-struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
-
 /* Queue the job */
 panfrost_scoreboard_queue_compute_job(batch, transfer);
 
diff --git a/src/gallium/drivers/panfrost/pan_context.c 
b/src/gallium/drivers/panfrost/pan_context.c
index fdc62d2f957f..a1b112c08919 100644
--- a/src/gallium/drivers/p

[Mesa-dev] [PATCH v3 20/25] panfrost: Prepare things to avoid flushes on FB switch

2019-09-05 Thread Boris Brezillon
panfrost_attach_vt_xxx() functions are now passed a batch, and the
generated FB desc is kept in panfrost_batch so we can switch FBs
without forcing a flush. The postfix->framebuffer field is restored
on the next attach_vt_framebuffer() call if the batch already has an
FB desc.

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_context.c | 17 +
 src/gallium/drivers/panfrost/pan_job.h |  3 +++
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_context.c 
b/src/gallium/drivers/panfrost/pan_context.c
index c56f404cd9e9..3e0a3e9df992 100644
--- a/src/gallium/drivers/panfrost/pan_context.c
+++ b/src/gallium/drivers/panfrost/pan_context.c
@@ -164,18 +164,16 @@ panfrost_clear(
 }
 
 static mali_ptr
-panfrost_attach_vt_mfbd(struct panfrost_context *ctx)
+panfrost_attach_vt_mfbd(struct panfrost_batch *batch)
 {
-struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
 struct bifrost_framebuffer mfbd = panfrost_emit_mfbd(batch, ~0);
 
 return panfrost_upload_transient(batch, &mfbd, sizeof(mfbd)) | 
MALI_MFBD;
 }
 
 static mali_ptr
-panfrost_attach_vt_sfbd(struct panfrost_context *ctx)
+panfrost_attach_vt_sfbd(struct panfrost_batch *batch)
 {
-struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
 struct mali_single_framebuffer sfbd = panfrost_emit_sfbd(batch, ~0);
 
 return panfrost_upload_transient(batch, &sfbd, sizeof(sfbd)) | 
MALI_SFBD;
@@ -192,12 +190,15 @@ panfrost_attach_vt_framebuffer(struct panfrost_context 
*ctx)
 }
 
 struct panfrost_screen *screen = pan_screen(ctx->base.screen);
-mali_ptr framebuffer = screen->require_sfbd ?
-   panfrost_attach_vt_sfbd(ctx) :
-   panfrost_attach_vt_mfbd(ctx);
+struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
+
+if (!batch->framebuffer)
+batch->framebuffer = screen->require_sfbd ?
+ panfrost_attach_vt_sfbd(batch) :
+ panfrost_attach_vt_mfbd(batch);
 
 for (unsigned i = 0; i < PIPE_SHADER_TYPES; ++i)
-ctx->payloads[i].postfix.framebuffer = framebuffer;
+ctx->payloads[i].postfix.framebuffer = batch->framebuffer;
 }
 
 /* Reset per-frame context, called on context initialisation as well as after
diff --git a/src/gallium/drivers/panfrost/pan_job.h 
b/src/gallium/drivers/panfrost/pan_job.h
index ea832f2c3efe..48d483c9a724 100644
--- a/src/gallium/drivers/panfrost/pan_job.h
+++ b/src/gallium/drivers/panfrost/pan_job.h
@@ -115,6 +115,9 @@ struct panfrost_batch {
 
 /* Polygon list bound to the batch, or NULL if none bound yet */
 struct panfrost_bo *polygon_list;
+
+/* Framebuffer descriptor. */
+mali_ptr framebuffer;
 };
 
 /* Functions for managing the above */
-- 
2.21.0

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

[Mesa-dev] [PATCH v3 22/25] panfrost: Delay payloads[].offset_start initialization

2019-09-05 Thread Boris Brezillon
panfrost_draw_vbo() might call the primconvert/without_prim_restart
helpers, which re-enter ->draw_vbo(). Let's delay the
payloads[].offset_start initialization so we don't initialize those
fields twice.
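
For reference, the re-entrancy looks roughly like this (simplified sketch,
not the actual driver code; the fallback predicates and the primconvert
field name are placeholders):

static void
panfrost_draw_vbo(struct pipe_context *pipe,
                  const struct pipe_draw_info *info)
{
        struct panfrost_context *ctx = pan_context(pipe);

        if (info->primitive_restart && !supported_restart_index(info)) {
                /* Translates the draw and calls pipe->draw_vbo() again */
                util_draw_vbo_without_prim_restart(pipe, info);
                return;
        }

        if (!supported_prim_mode(info->mode)) {
                /* Same story: re-enters ->draw_vbo() with translated info */
                util_primconvert_draw_vbo(ctx->primconvert, info);
                return;
        }

        /* Only the innermost call gets here, so initializing offset_start
         * at this point happens exactly once per emitted draw. */
        ctx->payloads[PIPE_SHADER_VERTEX].offset_start = info->start;
        ctx->payloads[PIPE_SHADER_FRAGMENT].offset_start = info->start;
}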

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_context.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_context.c 
b/src/gallium/drivers/panfrost/pan_context.c
index c31dc1580524..02726e7cd349 100644
--- a/src/gallium/drivers/panfrost/pan_context.c
+++ b/src/gallium/drivers/panfrost/pan_context.c
@@ -1447,9 +1447,6 @@ panfrost_draw_vbo(
 if (panfrost_scissor_culls_everything(ctx))
 return;
 
-ctx->payloads[PIPE_SHADER_VERTEX].offset_start = info->start;
-ctx->payloads[PIPE_SHADER_FRAGMENT].offset_start = info->start;
-
 int mode = info->mode;
 
 /* Fallback unsupported restart index */
@@ -1480,6 +1477,9 @@ panfrost_draw_vbo(
 }
 }
 
+ctx->payloads[PIPE_SHADER_VERTEX].offset_start = info->start;
+ctx->payloads[PIPE_SHADER_FRAGMENT].offset_start = info->start;
+
 /* Now that we have a guaranteed terminating path, find the job.
  * Assignment commented out to prevent unused warning */
 
-- 
2.21.0

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

[Mesa-dev] [PATCH v3 15/25] panfrost: Move the batch submission logic to panfrost_batch_submit()

2019-09-05 Thread Boris Brezillon
We are about to patch panfrost_flush() to flush all pending batches,
not only the current one. In order to do that, we need to move the
'flush single batch' code to panfrost_batch_submit().

While at it, we get rid of the existing pipelining logic, which is
currently unused, and replace it with an unconditional wait at the end
of panfrost_batch_submit(). New pipelining logic will be introduced
later on.
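
For reference, panfrost_batch_submit() ends up with roughly the following
shape after this patch (simplified sketch, not the exact code from the
patch; only helpers that already exist in the driver are named):

void
panfrost_batch_submit(struct panfrost_batch *batch)
{
        struct panfrost_context *ctx = batch->ctx;
        bool has_draws = batch->last_job.gpu;
        int ret;

        /* Kick the vertex/tiler and fragment jobs of this batch. */
        ret = panfrost_drm_submit_vs_fs_batch(batch, has_draws);
        if (ret)
                fprintf(stderr, "panfrost_batch_submit failed: %d\n", ret);

        /* No pipelining for now: unconditionally wait for the submitted
         * jobs to complete before returning. */

        /* Reset the per-frame state and release the batch. */
        panfrost_invalidate_frame(ctx);
        panfrost_free_batch(batch);
}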

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_context.c | 145 +
 src/gallium/drivers/panfrost/pan_context.h |   9 +-
 src/gallium/drivers/panfrost/pan_drm.c |  15 ---
 src/gallium/drivers/panfrost/pan_job.c | 125 +-
 src/gallium/drivers/panfrost/pan_screen.h  |   2 -
 5 files changed, 123 insertions(+), 173 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_context.c 
b/src/gallium/drivers/panfrost/pan_context.c
index 6552052b8cad..fdc62d2f957f 100644
--- a/src/gallium/drivers/panfrost/pan_context.c
+++ b/src/gallium/drivers/panfrost/pan_context.c
@@ -203,7 +203,7 @@ panfrost_attach_vt_framebuffer(struct panfrost_context *ctx)
 /* Reset per-frame context, called on context initialisation as well as after
  * flushing a frame */
 
-static void
+void
 panfrost_invalidate_frame(struct panfrost_context *ctx)
 {
 for (unsigned i = 0; i < PIPE_SHADER_TYPES; ++i)
@@ -1306,130 +1306,6 @@ panfrost_queue_draw(struct panfrost_context *ctx)
 
 /* The entire frame is in memory -- send it off to the kernel! */
 
-static void
-panfrost_submit_frame(struct panfrost_context *ctx, bool flush_immediate,
-  struct panfrost_batch *batch)
-{
-panfrost_batch_submit(batch);
-
-/* If visual, we can stall a frame */
-
-if (!flush_immediate)
-panfrost_drm_force_flush_fragment(ctx);
-
-ctx->last_fragment_flushed = false;
-ctx->last_batch = batch;
-
-/* If readback, flush now (hurts the pipelined performance) */
-if (flush_immediate)
-panfrost_drm_force_flush_fragment(ctx);
-}
-
-static void
-panfrost_draw_wallpaper(struct pipe_context *pipe)
-{
-struct panfrost_context *ctx = pan_context(pipe);
-
-/* Nothing to reload? TODO: MRT wallpapers */
-if (ctx->pipe_framebuffer.cbufs[0] == NULL)
-return;
-
-/* Check if the buffer has any content on it worth preserving */
-
-struct pipe_surface *surf = ctx->pipe_framebuffer.cbufs[0];
-struct panfrost_resource *rsrc = pan_resource(surf->texture);
-unsigned level = surf->u.tex.level;
-
-if (!rsrc->slices[level].initialized)
-return;
-
-/* Save the batch */
-struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
-
-ctx->wallpaper_batch = batch;
-
-/* Clamp the rendering area to the damage extent. The
- * KHR_partial_update() spec states that trying to render outside of
- * the damage region is "undefined behavior", so we should be safe.
- */
-unsigned damage_width = (rsrc->damage.extent.maxx - 
rsrc->damage.extent.minx);
-unsigned damage_height = (rsrc->damage.extent.maxy - 
rsrc->damage.extent.miny);
-
-if (damage_width && damage_height) {
-panfrost_batch_intersection_scissor(batch,
-rsrc->damage.extent.minx,
-rsrc->damage.extent.miny,
-rsrc->damage.extent.maxx,
-rsrc->damage.extent.maxy);
-}
-
-/* FIXME: Looks like aligning on a tile is not enough, but
- * aligning on twice the tile size seems to works. We don't
- * know exactly what happens here but this deserves extra
- * investigation to figure it out.
- */
-batch->minx = batch->minx & ~((MALI_TILE_LENGTH * 2) - 1);
-batch->miny = batch->miny & ~((MALI_TILE_LENGTH * 2) - 1);
-batch->maxx = MIN2(ALIGN_POT(batch->maxx, MALI_TILE_LENGTH * 2),
-   rsrc->base.width0);
-batch->maxy = MIN2(ALIGN_POT(batch->maxy, MALI_TILE_LENGTH * 2),
-   rsrc->base.height0);
-
-struct pipe_scissor_state damage;
-struct pipe_box rects[4];
-
-/* Clamp the damage box to the rendering area. */
-damage.minx = MAX2(batch->minx, rsrc->damage.biggest_rect.x);
-damage.miny = MAX2(batch->miny, rsrc->damage.biggest_rect.y);
-damage.maxx = MIN2(batch->maxx,
-   rsrc->damage.biggest_rect.x +
-   rsrc->damage.biggest_rect.width);
-damage.maxy = MIN2(batch->maxy,
-   rsrc->damage.biggest_rect.y +
-   rsrc->damage.biggest_rect.height);
-
-/* One damage rectangle means we can end up with at most 4 reload
- 

[Mesa-dev] [PATCH v3 18/25] panfrost: Use ctx->wallpaper_batch in panfrost_blit_wallpaper()

2019-09-05 Thread Boris Brezillon
We'll soon be able to flush a batch that's not currently bound to the
context, which means ctx->pipe_framebuffer will not necessarily be the
FBO targeted by the wallpaper draw. Let's prepare for this case and
use ctx->wallpaper_batch in panfrost_blit_wallpaper().

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_blit.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_blit.c 
b/src/gallium/drivers/panfrost/pan_blit.c
index 4be8c044ee2f..2d44f06227bf 100644
--- a/src/gallium/drivers/panfrost/pan_blit.c
+++ b/src/gallium/drivers/panfrost/pan_blit.c
@@ -105,16 +105,17 @@ panfrost_blit(struct pipe_context *pipe,
 void
 panfrost_blit_wallpaper(struct panfrost_context *ctx, struct pipe_box *box)
 {
+struct panfrost_batch *batch = ctx->wallpaper_batch;
 struct pipe_blit_info binfo = { };
 
 panfrost_blitter_save(ctx, ctx->blitter_wallpaper);
 
-struct pipe_surface *surf = ctx->pipe_framebuffer.cbufs[0];
+struct pipe_surface *surf = batch->key.cbufs[0];
 unsigned level = surf->u.tex.level;
 unsigned layer = surf->u.tex.first_layer;
 assert(surf->u.tex.last_layer == layer);
 
-binfo.src.resource = binfo.dst.resource = 
ctx->pipe_framebuffer.cbufs[0]->texture;
+binfo.src.resource = binfo.dst.resource = batch->key.cbufs[0]->texture;
 binfo.src.level = binfo.dst.level = level;
 binfo.src.box.x = binfo.dst.box.x = box->x;
 binfo.src.box.y = binfo.dst.box.y = box->y;
@@ -123,9 +124,9 @@ panfrost_blit_wallpaper(struct panfrost_context *ctx, 
struct pipe_box *box)
 binfo.src.box.height = binfo.dst.box.height = box->height;
 binfo.src.box.depth = binfo.dst.box.depth = 1;
 
-binfo.src.format = binfo.dst.format = 
ctx->pipe_framebuffer.cbufs[0]->format;
+binfo.src.format = binfo.dst.format = batch->key.cbufs[0]->format;
 
-assert(ctx->pipe_framebuffer.nr_cbufs == 1);
+assert(batch->key.nr_cbufs == 1);
 binfo.mask = PIPE_MASK_RGBA;
 binfo.filter = PIPE_TEX_FILTER_LINEAR;
 binfo.scissor_enable = FALSE;
-- 
2.21.0

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

[Mesa-dev] [PATCH v3 13/25] panfrost: Allow testing if a specific batch is targeting a scanout FB

2019-09-05 Thread Boris Brezillon
Rename panfrost_is_scanout() to panfrost_batch_is_scanout(), pass it
a batch instead of a context, and move the code to pan_job.c.

With this in place, we can now test if a batch is targeting a scanout
FB even if this batch is not bound to the context.

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_context.c | 20 +---
 src/gallium/drivers/panfrost/pan_context.h |  3 ---
 src/gallium/drivers/panfrost/pan_job.c | 18 ++
 src/gallium/drivers/panfrost/pan_job.h |  3 +++
 src/gallium/drivers/panfrost/pan_mfbd.c|  3 +--
 5 files changed, 23 insertions(+), 24 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_context.c 
b/src/gallium/drivers/panfrost/pan_context.c
index f0cd8cdb12ea..e34f5757b1cf 100644
--- a/src/gallium/drivers/panfrost/pan_context.c
+++ b/src/gallium/drivers/panfrost/pan_context.c
@@ -152,24 +152,6 @@ panfrost_emit_mfbd(struct panfrost_context *ctx, unsigned 
vertex_count)
 return framebuffer;
 }
 
-/* Are we currently rendering to the screen (rather than an FBO)? */
-
-bool
-panfrost_is_scanout(struct panfrost_context *ctx)
-{
-/* If there is no color buffer, it's an FBO */
-if (ctx->pipe_framebuffer.nr_cbufs != 1)
-return false;
-
-/* If we're too early that no framebuffer was sent, it's scanout */
-if (!ctx->pipe_framebuffer.cbufs[0])
-return true;
-
-return ctx->pipe_framebuffer.cbufs[0]->texture->bind & 
PIPE_BIND_DISPLAY_TARGET ||
-   ctx->pipe_framebuffer.cbufs[0]->texture->bind & 
PIPE_BIND_SCANOUT ||
-   ctx->pipe_framebuffer.cbufs[0]->texture->bind & 
PIPE_BIND_SHARED;
-}
-
 static void
 panfrost_clear(
 struct pipe_context *pipe,
@@ -2397,7 +2379,7 @@ panfrost_set_framebuffer_state(struct pipe_context *pctx,
  */
 
 struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
-bool is_scanout = panfrost_is_scanout(ctx);
+bool is_scanout = panfrost_batch_is_scanout(batch);
 bool has_draws = batch->last_job.gpu;
 
 /* Bail out early when the current and new states are the same. */
diff --git a/src/gallium/drivers/panfrost/pan_context.h 
b/src/gallium/drivers/panfrost/pan_context.h
index 586b6d854b6c..02552ed23de2 100644
--- a/src/gallium/drivers/panfrost/pan_context.h
+++ b/src/gallium/drivers/panfrost/pan_context.h
@@ -315,9 +315,6 @@ panfrost_flush(
 struct pipe_fence_handle **fence,
 unsigned flags);
 
-bool
-panfrost_is_scanout(struct panfrost_context *ctx);
-
 mali_ptr panfrost_sfbd_fragment(struct panfrost_context *ctx, bool has_draws);
 mali_ptr panfrost_mfbd_fragment(struct panfrost_context *ctx, bool has_draws);
 
diff --git a/src/gallium/drivers/panfrost/pan_job.c 
b/src/gallium/drivers/panfrost/pan_job.c
index 56aab13d7d5a..0f7e139f1a64 100644
--- a/src/gallium/drivers/panfrost/pan_job.c
+++ b/src/gallium/drivers/panfrost/pan_job.c
@@ -374,6 +374,24 @@ panfrost_batch_intersection_scissor(struct panfrost_batch 
*batch,
 batch->maxy = MIN2(batch->maxy, maxy);
 }
 
+/* Are we currently rendering to the screen (rather than an FBO)? */
+
+bool
+panfrost_batch_is_scanout(struct panfrost_batch *batch)
+{
+/* If there is no color buffer, it's an FBO */
+if (batch->key.nr_cbufs != 1)
+return false;
+
+/* If we're too early that no framebuffer was sent, it's scanout */
+if (!batch->key.cbufs[0])
+return true;
+
+return batch->key.cbufs[0]->texture->bind & PIPE_BIND_DISPLAY_TARGET ||
+   batch->key.cbufs[0]->texture->bind & PIPE_BIND_SCANOUT ||
+   batch->key.cbufs[0]->texture->bind & PIPE_BIND_SHARED;
+}
+
 void
 panfrost_batch_init(struct panfrost_context *ctx)
 {
diff --git a/src/gallium/drivers/panfrost/pan_job.h 
b/src/gallium/drivers/panfrost/pan_job.h
index e885d0b9fbd5..ea832f2c3efe 100644
--- a/src/gallium/drivers/panfrost/pan_job.h
+++ b/src/gallium/drivers/panfrost/pan_job.h
@@ -195,4 +195,7 @@ panfrost_scoreboard_queue_fused_job_prepend(
 void
 panfrost_scoreboard_link_batch(struct panfrost_batch *batch);
 
+bool
+panfrost_batch_is_scanout(struct panfrost_batch *batch);
+
 #endif
diff --git a/src/gallium/drivers/panfrost/pan_mfbd.c 
b/src/gallium/drivers/panfrost/pan_mfbd.c
index 618ebd3c4a19..c89b0b44a47c 100644
--- a/src/gallium/drivers/panfrost/pan_mfbd.c
+++ b/src/gallium/drivers/panfrost/pan_mfbd.c
@@ -455,9 +455,8 @@ panfrost_mfbd_fragment(struct panfrost_context *ctx, bool 
has_draws)
  * The exception is ReadPixels, but this is not supported on GLES so we
  * can safely ignore it. */
 
-if (panfrost_is_scanout(ctx)) {
+if (panfrost_batch_is_scanout(batch))
 batch->requirements &= ~PAN_REQ_DEPTH_WRITE;
-}
 
 /* Actualize the requirements */
 
-- 
2.21.0

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.o

[Mesa-dev] [PATCH v3 09/25] panfrost: Rework the panfrost_bo API

2019-09-05 Thread Boris Brezillon
* BO-related functions/structs are now exposed in pan_bo.h instead of
  being spread across pan_screen.h/pan_resource.h
* cache-related functions are no longer exposed
* panfrost_bo now has a ->screen field to avoid passing the screen around
* the function names are made consistent (all BO-related functions are
  prefixed with panfrost_bo_)
* release functions are no longer exposed; existing users are converted
  to use panfrost_bo_unreference() instead

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_allocate.c   |   5 +-
 src/gallium/drivers/panfrost/pan_allocate.h   |  20 --
 src/gallium/drivers/panfrost/pan_assemble.c   |   3 +-
 src/gallium/drivers/panfrost/pan_blend_cso.c  |   5 +-
 src/gallium/drivers/panfrost/pan_bo.c | 236 +-
 src/gallium/drivers/panfrost/pan_bo.h |  78 ++
 src/gallium/drivers/panfrost/pan_context.c|  16 +-
 src/gallium/drivers/panfrost/pan_drm.c| 187 +-
 src/gallium/drivers/panfrost/pan_instancing.c |   1 +
 src/gallium/drivers/panfrost/pan_job.c|   7 +-
 src/gallium/drivers/panfrost/pan_mfbd.c   |   1 +
 src/gallium/drivers/panfrost/pan_resource.c   |  38 +--
 src/gallium/drivers/panfrost/pan_resource.h   |   8 +-
 src/gallium/drivers/panfrost/pan_screen.c |   1 +
 src/gallium/drivers/panfrost/pan_screen.h |  24 --
 src/gallium/drivers/panfrost/pan_sfbd.c   |   1 +
 src/gallium/drivers/panfrost/pan_varyings.c   |   1 +
 17 files changed, 338 insertions(+), 294 deletions(-)
 create mode 100644 src/gallium/drivers/panfrost/pan_bo.h

diff --git a/src/gallium/drivers/panfrost/pan_allocate.c 
b/src/gallium/drivers/panfrost/pan_allocate.c
index a22b1a5a88d6..beebb0bc6d7e 100644
--- a/src/gallium/drivers/panfrost/pan_allocate.c
+++ b/src/gallium/drivers/panfrost/pan_allocate.c
@@ -29,6 +29,7 @@
 #include 
 #include 
 #include 
+#include "pan_bo.h"
 #include "pan_context.h"
 
 /* TODO: What does this actually have to be? */
@@ -66,12 +67,12 @@ panfrost_allocate_transient(struct panfrost_context *ctx, 
size_t sz)
TRANSIENT_SLAB_SIZE : ALIGN_POT(sz, 4096);
 
 /* We can't reuse the current BO, but we can create a new one. 
*/
-bo = panfrost_drm_create_bo(screen, bo_sz, 0);
+bo = panfrost_bo_create(screen, bo_sz, 0);
 panfrost_batch_add_bo(batch, bo);
 
 /* Creating a BO adds a reference, and then the job adds a
  * second one. So we need to pop back one reference */
-panfrost_bo_unreference(&screen->base, bo);
+panfrost_bo_unreference(bo);
 
 if (sz < TRANSIENT_SLAB_SIZE) {
 batch->transient_bo = bo;
diff --git a/src/gallium/drivers/panfrost/pan_allocate.h 
b/src/gallium/drivers/panfrost/pan_allocate.h
index c0aff62df4a1..91c2af9c4f17 100644
--- a/src/gallium/drivers/panfrost/pan_allocate.h
+++ b/src/gallium/drivers/panfrost/pan_allocate.h
@@ -43,26 +43,6 @@ struct panfrost_transfer {
 mali_ptr gpu;
 };
 
-struct panfrost_bo {
-/* Must be first for casting */
-struct list_head link;
-
-struct pipe_reference reference;
-
-/* Mapping for the entire object (all levels) */
-uint8_t *cpu;
-
-/* GPU address for the object */
-mali_ptr gpu;
-
-/* Size of all entire trees */
-size_t size;
-
-int gem_handle;
-
-uint32_t flags;
-};
-
 struct panfrost_transfer
 panfrost_allocate_transient(struct panfrost_context *ctx, size_t sz);
 
diff --git a/src/gallium/drivers/panfrost/pan_assemble.c 
b/src/gallium/drivers/panfrost/pan_assemble.c
index b57cd5ef6ad2..79c000367632 100644
--- a/src/gallium/drivers/panfrost/pan_assemble.c
+++ b/src/gallium/drivers/panfrost/pan_assemble.c
@@ -25,6 +25,7 @@
 #include 
 #include 
 #include 
+#include "pan_bo.h"
 #include "pan_context.h"
 
 #include "compiler/nir/nir.h"
@@ -82,7 +83,7 @@ panfrost_shader_compile(
  * I bet someone just thought that would be a cute pun. At least,
  * that's how I'd do it. */
 
-state->bo = panfrost_drm_create_bo(screen, size, PAN_ALLOCATE_EXECUTE);
+state->bo = panfrost_bo_create(screen, size, PAN_ALLOCATE_EXECUTE);
 memcpy(state->bo->cpu, dst, size);
 meta->shader = state->bo->gpu | program.first_tag;
 
diff --git a/src/gallium/drivers/panfrost/pan_blend_cso.c 
b/src/gallium/drivers/panfrost/pan_blend_cso.c
index ab49772f3ba3..69897be4f007 100644
--- a/src/gallium/drivers/panfrost/pan_blend_cso.c
+++ b/src/gallium/drivers/panfrost/pan_blend_cso.c
@@ -29,6 +29,7 @@
 #include "util/u_memory.h"
 #include "pan_blend_shaders.h"
 #include "pan_blending.h"
+#include "pan_bo.h"
 
 /* A given Gallium blend state can be encoded to the hardware in numerous,
  * dramatically divergent ways due to the interactions of blending with
@@ -272,12 +273,12 @@ panfrost_get_blend_for_context(struct panfrost_context 
*ctx, unsi

[Mesa-dev] [PATCH v3 24/25] panfrost: Support batch pipelining

2019-09-05 Thread Boris Brezillon
We adjust the code to explicitly request flushes of the batches
accessing the BOs it cares about. Thanks to that, we can get rid of
the implicit serialization done in panfrost_batch_submit() and
panfrost_set_framebuffer_state(). Finally, panfrost_flush() is changed
to flush all pending batches.
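
To illustrate the new pattern (simplified sketch, not a hunk from the
patch -- the helper below is hypothetical, only the two flush functions
come from this series): before the CPU touches a resource, we now flush
only the batches that actually depend on its BO instead of flushing
everything:

static void
flush_bo_deps_for_cpu_access(struct panfrost_context *ctx,
                             struct panfrost_resource *rsrc,
                             bool cpu_write)
{
        /* A pending GPU writer must always land before the CPU reads
         * or writes the resource. */
        panfrost_flush_batch_writing_bo(ctx, rsrc->bo, true);

        /* If the CPU is about to write, pending GPU readers must land
         * too, otherwise they could observe the new data. */
        if (cpu_write)
                panfrost_flush_batches_reading_bo(ctx, rsrc->bo, true);
}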

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_compute.c  |   2 +-
 src/gallium/drivers/panfrost/pan_context.c  | 145 +---
 src/gallium/drivers/panfrost/pan_job.c  |  15 +-
 src/gallium/drivers/panfrost/pan_resource.c |  26 ++--
 4 files changed, 115 insertions(+), 73 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_compute.c 
b/src/gallium/drivers/panfrost/pan_compute.c
index 4639c1b03c38..036dffbb17be 100644
--- a/src/gallium/drivers/panfrost/pan_compute.c
+++ b/src/gallium/drivers/panfrost/pan_compute.c
@@ -133,7 +133,7 @@ panfrost_launch_grid(struct pipe_context *pipe,
 /* Queue the job */
 panfrost_scoreboard_queue_compute_job(batch, transfer);
 
-panfrost_flush(pipe, NULL, PIPE_FLUSH_END_OF_FRAME);
+panfrost_flush_all_batches(ctx, true);
 }
 
 void
diff --git a/src/gallium/drivers/panfrost/pan_context.c 
b/src/gallium/drivers/panfrost/pan_context.c
index 02726e7cd349..993744a1ffd0 100644
--- a/src/gallium/drivers/panfrost/pan_context.c
+++ b/src/gallium/drivers/panfrost/pan_context.c
@@ -150,6 +150,28 @@ panfrost_emit_mfbd(struct panfrost_batch *batch, unsigned 
vertex_count)
 return framebuffer;
 }
 
+static void
+panfrost_flush_fbo_deps(struct panfrost_context *ctx)
+{
+struct pipe_framebuffer_state *fb = &ctx->pipe_framebuffer;
+for (unsigned i = 0; i < fb->nr_cbufs; i++) {
+if (!fb->cbufs[i])
+continue;
+
+struct panfrost_resource *rsrc = 
pan_resource(fb->cbufs[i]->texture);
+
+panfrost_flush_batch_writing_bo(ctx, rsrc->bo, true);
+panfrost_flush_batches_reading_bo(ctx, rsrc->bo, true);
+}
+
+if (fb->zsbuf) {
+struct panfrost_resource *rsrc = 
pan_resource(fb->zsbuf->texture);
+
+panfrost_flush_batch_writing_bo(ctx, rsrc->bo, true);
+panfrost_flush_batches_reading_bo(ctx, rsrc->bo, true);
+}
+}
+
 static void
 panfrost_clear(
 struct pipe_context *pipe,
@@ -160,6 +182,7 @@ panfrost_clear(
 struct panfrost_context *ctx = pan_context(pipe);
 struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
 
+panfrost_flush_fbo_deps(ctx);
 panfrost_batch_add_fbo_bos(batch);
 panfrost_batch_clear(batch, buffers, color, depth, stencil);
 }
@@ -1324,10 +1347,9 @@ panfrost_flush(
 unsigned flags)
 {
 struct panfrost_context *ctx = pan_context(pipe);
-struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
 
-/* Submit the frame itself */
-panfrost_batch_submit(batch);
+/* Submit all pending jobs */
+panfrost_flush_all_batches(ctx, false);
 
 if (fence) {
 struct panfrost_fence *f = panfrost_fence_create(ctx);
@@ -1433,6 +1455,71 @@ panfrost_statistics_record(
 ctx->tf_prims_generated += prims;
 }
 
+static void
+panfrost_flush_draw_deps(struct panfrost_context *ctx, const struct 
pipe_draw_info *info)
+{
+   struct panfrost_resource *rsrc;
+
+if (ctx->wallpaper_batch)
+return;
+
+panfrost_flush_fbo_deps(ctx);
+
+for (unsigned stage = 0; stage < PIPE_SHADER_TYPES; stage++) {
+for (unsigned i = 0; i < ctx->sampler_view_count[stage]; i++) {
+struct panfrost_sampler_view *view = 
ctx->sampler_views[stage][i];
+
+if (!view)
+continue;
+
+rsrc = pan_resource(view->base.texture);
+panfrost_flush_batch_writing_bo(ctx, rsrc->bo, true);
+}
+
+for (unsigned i = 0; i < 32; i++) {
+if (!(ctx->ssbo_mask[stage] & (1 << i)))
+continue;
+
+rsrc = pan_resource(ctx->ssbo[stage][i].buffer);
+panfrost_flush_batch_writing_bo(ctx, rsrc->bo, true);
+panfrost_flush_batches_reading_bo(ctx, rsrc->bo, true);
+}
+}
+
+if (info->index_size && !info->has_user_indices) {
+struct panfrost_resource *rsrc = 
pan_resource(info->index.resource);
+
+panfrost_flush_batch_writing_bo(ctx, rsrc->bo, true);
+}
+
+for (unsigned i = 0; ctx->vertex && i < ctx->vertex->num_elements; 
i++) {
+struct pipe_vertex_element *velem = &ctx->vertex->pipe[i];
+unsigned vbi = velem->vertex_buffer_index;
+
+if (!(ctx->vb_mask & (1 << vbi)))
+continue;
+
+struct pi

[Mesa-dev] [PATCH v3 17/25] panfrost: Pass a batch to functions emitting FB descs

2019-09-05 Thread Boris Brezillon
So we can emit FB descriptors for a batch that's not currently bound
to the context.

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_context.c  | 36 ++---
 src/gallium/drivers/panfrost/pan_context.h  | 10 +++---
 src/gallium/drivers/panfrost/pan_drm.c  |  2 +-
 src/gallium/drivers/panfrost/pan_fragment.c | 11 +++
 src/gallium/drivers/panfrost/pan_mfbd.c | 25 ++
 src/gallium/drivers/panfrost/pan_sfbd.c | 13 
 6 files changed, 44 insertions(+), 53 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_context.c 
b/src/gallium/drivers/panfrost/pan_context.c
index a1b112c08919..c56f404cd9e9 100644
--- a/src/gallium/drivers/panfrost/pan_context.c
+++ b/src/gallium/drivers/panfrost/pan_context.c
@@ -55,14 +55,12 @@
 /* Framebuffer descriptor */
 
 static struct midgard_tiler_descriptor
-panfrost_emit_midg_tiler(
-struct panfrost_context *ctx,
-unsigned width,
-unsigned height,
-unsigned vertex_count)
+panfrost_emit_midg_tiler(struct panfrost_batch *batch, unsigned vertex_count)
 {
+struct panfrost_context *ctx = batch->ctx;
 struct midgard_tiler_descriptor t = {};
-struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
+unsigned height = batch->key.height;
+unsigned width = batch->key.width;
 
 t.hierarchy_mask =
 panfrost_choose_hierarchy_mask(width, height, vertex_count);
@@ -105,10 +103,11 @@ panfrost_emit_midg_tiler(
 }
 
 struct mali_single_framebuffer
-panfrost_emit_sfbd(struct panfrost_context *ctx, unsigned vertex_count)
+panfrost_emit_sfbd(struct panfrost_batch *batch, unsigned vertex_count)
 {
-unsigned width = ctx->pipe_framebuffer.width;
-unsigned height = ctx->pipe_framebuffer.height;
+struct panfrost_context *ctx = batch->ctx;
+unsigned width = batch->key.width;
+unsigned height = batch->key.height;
 
 struct mali_single_framebuffer framebuffer = {
 .width = MALI_POSITIVE(width),
@@ -117,18 +116,18 @@ panfrost_emit_sfbd(struct panfrost_context *ctx, unsigned 
vertex_count)
 .format = 0x3000,
 .clear_flags = 0x1000,
 .unknown_address_0 = ctx->scratchpad->gpu,
-.tiler = panfrost_emit_midg_tiler(ctx,
-  width, height, vertex_count),
+.tiler = panfrost_emit_midg_tiler(batch, vertex_count),
 };
 
 return framebuffer;
 }
 
 struct bifrost_framebuffer
-panfrost_emit_mfbd(struct panfrost_context *ctx, unsigned vertex_count)
+panfrost_emit_mfbd(struct panfrost_batch *batch, unsigned vertex_count)
 {
-unsigned width = ctx->pipe_framebuffer.width;
-unsigned height = ctx->pipe_framebuffer.height;
+struct panfrost_context *ctx = batch->ctx;
+unsigned width = batch->key.width;
+unsigned height = batch->key.height;
 
 struct bifrost_framebuffer framebuffer = {
 .unk0 = 0x1e5, /* 1e4 if no spill */
@@ -139,14 +138,13 @@ panfrost_emit_mfbd(struct panfrost_context *ctx, unsigned 
vertex_count)
 
 .unk1 = 0x1080,
 
-.rt_count_1 = MALI_POSITIVE(ctx->pipe_framebuffer.nr_cbufs),
+.rt_count_1 = MALI_POSITIVE(batch->key.nr_cbufs),
 .rt_count_2 = 4,
 
 .unknown2 = 0x1f,
 
 .scratchpad = ctx->scratchpad->gpu,
-.tiler = panfrost_emit_midg_tiler(ctx,
-  width, height, vertex_count)
+.tiler = panfrost_emit_midg_tiler(batch, vertex_count)
 };
 
 return framebuffer;
@@ -169,7 +167,7 @@ static mali_ptr
 panfrost_attach_vt_mfbd(struct panfrost_context *ctx)
 {
 struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
-struct bifrost_framebuffer mfbd = panfrost_emit_mfbd(ctx, ~0);
+struct bifrost_framebuffer mfbd = panfrost_emit_mfbd(batch, ~0);
 
 return panfrost_upload_transient(batch, &mfbd, sizeof(mfbd)) | 
MALI_MFBD;
 }
@@ -178,7 +176,7 @@ static mali_ptr
 panfrost_attach_vt_sfbd(struct panfrost_context *ctx)
 {
 struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
-struct mali_single_framebuffer sfbd = panfrost_emit_sfbd(ctx, ~0);
+struct mali_single_framebuffer sfbd = panfrost_emit_sfbd(batch, ~0);
 
 return panfrost_upload_transient(batch, &sfbd, sizeof(sfbd)) | 
MALI_SFBD;
 }
diff --git a/src/gallium/drivers/panfrost/pan_context.h 
b/src/gallium/drivers/panfrost/pan_context.h
index f5e54f862cca..f0578d6808d2 100644
--- a/src/gallium/drivers/panfrost/pan_context.h
+++ b/src/gallium/drivers/panfrost/pan_context.h
@@ -315,17 +315,17 @@ panfrost_flush(
 struct pipe_fence_handle **fence,
 unsigned flags);
 
-mali_ptr panfrost_sfbd_fragment(struct panfrost_context *ctx, bool has_draws);
-mali_ptr p

[Mesa-dev] [PATCH v3 07/25] panfrost: Get rid of the now unused SLAB allocator

2019-09-05 Thread Boris Brezillon
The last users have been converted to use plain BOs. Let's get rid of
this abstraction. We can always consider adding it back if we need it
at some point.

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_allocate.h | 13 
 src/gallium/drivers/panfrost/pan_drm.c  | 23 -
 src/gallium/drivers/panfrost/pan_screen.h   | 11 --
 3 files changed, 47 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_allocate.h 
b/src/gallium/drivers/panfrost/pan_allocate.h
index cf9499154c8b..c0aff62df4a1 100644
--- a/src/gallium/drivers/panfrost/pan_allocate.h
+++ b/src/gallium/drivers/panfrost/pan_allocate.h
@@ -63,23 +63,10 @@ struct panfrost_bo {
 uint32_t flags;
 };
 
-struct panfrost_memory {
-/* Backing for the slab in memory */
-struct panfrost_bo *bo;
-int stack_bottom;
-};
-
 struct panfrost_transfer
 panfrost_allocate_transient(struct panfrost_context *ctx, size_t sz);
 
 mali_ptr
 panfrost_upload_transient(struct panfrost_context *ctx, const void *data, 
size_t sz);
 
-static inline mali_ptr
-panfrost_reserve(struct panfrost_memory *mem, size_t sz)
-{
-mem->stack_bottom += sz;
-return mem->bo->gpu + (mem->stack_bottom - sz);
-}
-
 #endif /* __PAN_ALLOCATE_H__ */
diff --git a/src/gallium/drivers/panfrost/pan_drm.c 
b/src/gallium/drivers/panfrost/pan_drm.c
index 1edbb5bd1dcc..e7dcd2e58751 100644
--- a/src/gallium/drivers/panfrost/pan_drm.c
+++ b/src/gallium/drivers/panfrost/pan_drm.c
@@ -183,29 +183,6 @@ panfrost_drm_release_bo(struct panfrost_screen *screen, 
struct panfrost_bo *bo,
 ralloc_free(bo);
 }
 
-void
-panfrost_drm_allocate_slab(struct panfrost_screen *screen,
-   struct panfrost_memory *mem,
-   size_t pages,
-   bool same_va,
-   int extra_flags,
-   int commit_count,
-   int extent)
-{
-// TODO cache allocations
-// TODO properly handle errors
-// TODO take into account extra_flags
-mem->bo = panfrost_drm_create_bo(screen, pages * 4096, extra_flags);
-mem->stack_bottom = 0;
-}
-
-void
-panfrost_drm_free_slab(struct panfrost_screen *screen, struct panfrost_memory 
*mem)
-{
-panfrost_bo_unreference(&screen->base, mem->bo);
-mem->bo = NULL;
-}
-
 struct panfrost_bo *
 panfrost_drm_import_bo(struct panfrost_screen *screen, int fd)
 {
diff --git a/src/gallium/drivers/panfrost/pan_screen.h 
b/src/gallium/drivers/panfrost/pan_screen.h
index 7ed5193277ac..96044b8c8b90 100644
--- a/src/gallium/drivers/panfrost/pan_screen.h
+++ b/src/gallium/drivers/panfrost/pan_screen.h
@@ -120,17 +120,6 @@ pan_screen(struct pipe_screen *p)
 return (struct panfrost_screen *)p;
 }
 
-void
-panfrost_drm_allocate_slab(struct panfrost_screen *screen,
-   struct panfrost_memory *mem,
-   size_t pages,
-   bool same_va,
-   int extra_flags,
-   int commit_count,
-   int extent);
-void
-panfrost_drm_free_slab(struct panfrost_screen *screen,
-   struct panfrost_memory *mem);
 struct panfrost_bo *
 panfrost_drm_create_bo(struct panfrost_screen *screen, size_t size,
uint32_t flags);
-- 
2.21.0

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

[Mesa-dev] [PATCH v3 08/25] panfrost: Rename pan_bo_cache.c into pan_bo.c

2019-09-05 Thread Boris Brezillon
So we can move all the BO logic into this file instead of having it
spread over pan_resource.c, pan_drm.c and pan_bo_cache.c.

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/meson.build  | 2 +-
 src/gallium/drivers/panfrost/{pan_bo_cache.c => pan_bo.c} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename src/gallium/drivers/panfrost/{pan_bo_cache.c => pan_bo.c} (100%)

diff --git a/src/gallium/drivers/panfrost/meson.build 
b/src/gallium/drivers/panfrost/meson.build
index c188274236bb..73c3b54923a4 100644
--- a/src/gallium/drivers/panfrost/meson.build
+++ b/src/gallium/drivers/panfrost/meson.build
@@ -32,7 +32,7 @@ files_panfrost = files(
 
   'pan_context.c',
   'pan_afbc.c',
-  'pan_bo_cache.c',
+  'pan_bo.c',
   'pan_blit.c',
   'pan_job.c',
   'pan_drm.c',
diff --git a/src/gallium/drivers/panfrost/pan_bo_cache.c 
b/src/gallium/drivers/panfrost/pan_bo.c
similarity index 100%
rename from src/gallium/drivers/panfrost/pan_bo_cache.c
rename to src/gallium/drivers/panfrost/pan_bo.c
-- 
2.21.0

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

[Mesa-dev] [PATCH v3 05/25] panfrost: Convert ctx->{scratchpad, tiler_heap, tiler_dummy} to plain BOs

2019-09-05 Thread Boris Brezillon
ctx->{scratchpad,tiler_heap,tiler_dummy} are allocated using
panfrost_drm_allocate_slab(), but they never use any of the SLAB-based
allocation logic. Let's convert those fields to plain BOs.

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_context.c | 29 --
 src/gallium/drivers/panfrost/pan_context.h |  6 ++---
 src/gallium/drivers/panfrost/pan_drm.c |  4 +--
 3 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_context.c 
b/src/gallium/drivers/panfrost/pan_context.c
index 292de7fe132c..0fb4c2584e40 100644
--- a/src/gallium/drivers/panfrost/pan_context.c
+++ b/src/gallium/drivers/panfrost/pan_context.c
@@ -83,16 +83,15 @@ panfrost_emit_midg_tiler(
 
 
 /* Allow the entire tiler heap */
-t.heap_start = ctx->tiler_heap.bo->gpu;
-t.heap_end =
-ctx->tiler_heap.bo->gpu + ctx->tiler_heap.bo->size;
+t.heap_start = ctx->tiler_heap->gpu;
+t.heap_end = ctx->tiler_heap->gpu + ctx->tiler_heap->size;
 } else {
 /* The tiler is disabled, so don't allow the tiler heap */
-t.heap_start = ctx->tiler_heap.bo->gpu;
+t.heap_start = ctx->tiler_heap->gpu;
 t.heap_end = t.heap_start;
 
 /* Use a dummy polygon list */
-t.polygon_list = ctx->tiler_dummy.bo->gpu;
+t.polygon_list = ctx->tiler_dummy->gpu;
 
 /* Disable the tiler */
 t.hierarchy_mask |= MALI_TILER_DISABLED;
@@ -116,7 +115,7 @@ panfrost_emit_sfbd(struct panfrost_context *ctx, unsigned 
vertex_count)
 .unknown2 = 0x1f,
 .format = 0x3000,
 .clear_flags = 0x1000,
-.unknown_address_0 = ctx->scratchpad.bo->gpu,
+.unknown_address_0 = ctx->scratchpad->gpu,
 .tiler = panfrost_emit_midg_tiler(ctx,
   width, height, vertex_count),
 };
@@ -144,7 +143,7 @@ panfrost_emit_mfbd(struct panfrost_context *ctx, unsigned 
vertex_count)
 
 .unknown2 = 0x1f,
 
-.scratchpad = ctx->scratchpad.bo->gpu,
+.scratchpad = ctx->scratchpad->gpu,
 .tiler = panfrost_emit_midg_tiler(ctx,
   width, height, vertex_count)
 };
@@ -2565,9 +2564,9 @@ panfrost_destroy(struct pipe_context *pipe)
 if (panfrost->blitter_wallpaper)
 util_blitter_destroy(panfrost->blitter_wallpaper);
 
-panfrost_drm_free_slab(screen, &panfrost->scratchpad);
-panfrost_drm_free_slab(screen, &panfrost->tiler_heap);
-panfrost_drm_free_slab(screen, &panfrost->tiler_dummy);
+panfrost_drm_release_bo(screen, panfrost->scratchpad, false);
+panfrost_drm_release_bo(screen, panfrost->tiler_heap, false);
+panfrost_drm_release_bo(screen, panfrost->tiler_dummy, false);
 
 ralloc_free(pipe);
 }
@@ -2750,9 +2749,13 @@ panfrost_setup_hardware(struct panfrost_context *ctx)
 struct pipe_context *gallium = (struct pipe_context *) ctx;
 struct panfrost_screen *screen = pan_screen(gallium->screen);
 
-panfrost_drm_allocate_slab(screen, &ctx->scratchpad, 64*4, false, 0, 
0, 0);
-panfrost_drm_allocate_slab(screen, &ctx->tiler_heap, 4096, false, 
PAN_ALLOCATE_INVISIBLE | PAN_ALLOCATE_GROWABLE, 1, 128);
-panfrost_drm_allocate_slab(screen, &ctx->tiler_dummy, 1, false, 
PAN_ALLOCATE_INVISIBLE, 0, 0);
+ctx->scratchpad = panfrost_drm_create_bo(screen, 64 * 4 * 4096, 0);
+ctx->tiler_heap = panfrost_drm_create_bo(screen, 4096 * 4096,
+ PAN_ALLOCATE_INVISIBLE |
+ PAN_ALLOCATE_GROWABLE);
+ctx->tiler_dummy = panfrost_drm_create_bo(screen, 4096,
+  PAN_ALLOCATE_INVISIBLE);
+assert(ctx->scratchpad && ctx->tiler_heap && ctx->tiler_dummy);
 }
 
 /* New context creation, which also does hardware initialisation since I don't
diff --git a/src/gallium/drivers/panfrost/pan_context.h 
b/src/gallium/drivers/panfrost/pan_context.h
index 5af950e10013..8f9cc44fedac 100644
--- a/src/gallium/drivers/panfrost/pan_context.h
+++ b/src/gallium/drivers/panfrost/pan_context.h
@@ -126,10 +126,10 @@ struct panfrost_context {
 struct pipe_framebuffer_state pipe_framebuffer;
 struct panfrost_streamout streamout;
 
+struct panfrost_bo *scratchpad;
+struct panfrost_bo *tiler_heap;
+struct panfrost_bo *tiler_dummy;
 struct panfrost_memory cmdstream_persistent;
-struct panfrost_memory scratchpad;
-struct panfrost_memory tiler_heap;
-struct panfrost_memory tiler_dummy;
 struct panfrost_memory depth_stencil_buffer;
 
  

[Mesa-dev] [PATCH v3 06/25] panfrost: Get rid of unused panfrost_context fields

2019-09-05 Thread Boris Brezillon
Some fields in panfrost_context are unused (probably leftovers from a
previous refactor). Let's get rid of them.

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_context.h | 4 
 1 file changed, 4 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_context.h 
b/src/gallium/drivers/panfrost/pan_context.h
index 8f9cc44fedac..9723d56ac5f7 100644
--- a/src/gallium/drivers/panfrost/pan_context.h
+++ b/src/gallium/drivers/panfrost/pan_context.h
@@ -129,8 +129,6 @@ struct panfrost_context {
 struct panfrost_bo *scratchpad;
 struct panfrost_bo *tiler_heap;
 struct panfrost_bo *tiler_dummy;
-struct panfrost_memory cmdstream_persistent;
-struct panfrost_memory depth_stencil_buffer;
 
 bool active_queries;
 uint64_t prims_generated;
@@ -157,8 +155,6 @@ struct panfrost_context {
  * it is disabled, just equal to plain vertex count */
 unsigned padded_count;
 
-union mali_attr attributes[PIPE_MAX_ATTRIBS];
-
 /* TODO: Multiple uniform buffers (index =/= 0), finer updates? */
 
 struct panfrost_constant_buffer constant_buffer[PIPE_SHADER_TYPES];
-- 
2.21.0

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

[Mesa-dev] [PATCH v3 12/25] panfrost: Get rid of the unused 'flush jobs accessing res' infra

2019-09-05 Thread Boris Brezillon
Will be replaced by something similar, but using BOs as keys instead
of resources.

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_context.h |  3 --
 src/gallium/drivers/panfrost/pan_job.c | 38 --
 src/gallium/drivers/panfrost/pan_job.h |  8 -
 3 files changed, 49 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_context.h 
b/src/gallium/drivers/panfrost/pan_context.h
index 9723d56ac5f7..586b6d854b6c 100644
--- a/src/gallium/drivers/panfrost/pan_context.h
+++ b/src/gallium/drivers/panfrost/pan_context.h
@@ -114,9 +114,6 @@ struct panfrost_context {
 struct panfrost_batch *batch;
 struct hash_table *batches;
 
-/* panfrost_resource -> panfrost_job */
-struct hash_table *write_jobs;
-
 /* Within a launch_grid call.. */
 const struct pipe_grid_info *compute_grid;
 
diff --git a/src/gallium/drivers/panfrost/pan_job.c 
b/src/gallium/drivers/panfrost/pan_job.c
index 6b0f612bb156..56aab13d7d5a 100644
--- a/src/gallium/drivers/panfrost/pan_job.c
+++ b/src/gallium/drivers/panfrost/pan_job.c
@@ -162,21 +162,6 @@ panfrost_batch_get_polygon_list(struct panfrost_batch 
*batch, unsigned size)
 return batch->polygon_list->gpu;
 }
 
-void
-panfrost_flush_jobs_writing_resource(struct panfrost_context *panfrost,
- struct pipe_resource *prsc)
-{
-#if 0
-struct hash_entry *entry = 
_mesa_hash_table_search(panfrost->write_jobs,
-   prsc);
-if (entry) {
-struct panfrost_batch *batch = entry->data;
-panfrost_batch_submit(job);
-}
-#endif
-/* TODO stub */
-}
-
 void
 panfrost_batch_submit(struct panfrost_batch *batch)
 {
@@ -352,25 +337,6 @@ panfrost_batch_clear(struct panfrost_batch *batch,
  ctx->pipe_framebuffer.height);
 }
 
-void
-panfrost_flush_jobs_reading_resource(struct panfrost_context *panfrost,
- struct pipe_resource *prsc)
-{
-struct panfrost_resource *rsc = pan_resource(prsc);
-
-panfrost_flush_jobs_writing_resource(panfrost, prsc);
-
-hash_table_foreach(panfrost->batches, entry) {
-struct panfrost_batch *batch = entry->data;
-
-if (_mesa_set_search(batch->bos, rsc->bo)) {
-printf("TODO: submit job for flush\n");
-//panfrost_batch_submit(job);
-continue;
-}
-}
-}
-
 static bool
 panfrost_batch_compare(const void *a, const void *b)
 {
@@ -414,8 +380,4 @@ panfrost_batch_init(struct panfrost_context *ctx)
 ctx->batches = _mesa_hash_table_create(ctx,
panfrost_batch_hash,
panfrost_batch_compare);
-
-ctx->write_jobs = _mesa_hash_table_create(ctx,
-  _mesa_hash_pointer,
-  _mesa_key_pointer_equal);
 }
diff --git a/src/gallium/drivers/panfrost/pan_job.h 
b/src/gallium/drivers/panfrost/pan_job.h
index 6d89603f8798..e885d0b9fbd5 100644
--- a/src/gallium/drivers/panfrost/pan_job.h
+++ b/src/gallium/drivers/panfrost/pan_job.h
@@ -138,14 +138,6 @@ panfrost_batch_init(struct panfrost_context *ctx);
 void
 panfrost_batch_add_bo(struct panfrost_batch *batch, struct panfrost_bo *bo);
 
-void
-panfrost_flush_jobs_writing_resource(struct panfrost_context *panfrost,
- struct pipe_resource *prsc);
-
-void
-panfrost_flush_jobs_reading_resource(struct panfrost_context *panfrost,
- struct pipe_resource *prsc);
-
 void
 panfrost_batch_submit(struct panfrost_batch *batch);
 
-- 
2.21.0

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

[Mesa-dev] [PATCH v3 11/25] panfrost: Use a pipe_framebuffer_state as the batch key

2019-09-05 Thread Boris Brezillon
This way, all the fb_state information is directly attached to the
batch, and we can pass only the batch to functions emitting CMDs. This
is needed if we want to be able to queue CMDs to a batch that's not
currently bound to the context.

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_job.c | 34 +++---
 src/gallium/drivers/panfrost/pan_job.h |  5 ++--
 2 files changed, 11 insertions(+), 28 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_job.c 
b/src/gallium/drivers/panfrost/pan_job.c
index 7c40bcee0fca..6b0f612bb156 100644
--- a/src/gallium/drivers/panfrost/pan_job.c
+++ b/src/gallium/drivers/panfrost/pan_job.c
@@ -79,21 +79,10 @@ panfrost_free_batch(struct panfrost_batch *batch)
 
 struct panfrost_batch *
 panfrost_get_batch(struct panfrost_context *ctx,
- struct pipe_surface **cbufs, struct pipe_surface *zsbuf)
+   const struct pipe_framebuffer_state *key)
 {
 /* Lookup the job first */
-
-struct panfrost_batch_key key = {
-.cbufs = {
-cbufs[0],
-cbufs[1],
-cbufs[2],
-cbufs[3],
-},
-.zsbuf = zsbuf
-};
-
-struct hash_entry *entry = _mesa_hash_table_search(ctx->batches, &key);
+struct hash_entry *entry = _mesa_hash_table_search(ctx->batches, key);
 
 if (entry)
 return entry->data;
@@ -103,8 +92,7 @@ panfrost_get_batch(struct panfrost_context *ctx,
 struct panfrost_batch *batch = panfrost_create_batch(ctx);
 
 /* Save the created job */
-
-memcpy(&batch->key, &key, sizeof(key));
+util_copy_framebuffer_state(&batch->key, key);
 _mesa_hash_table_insert(ctx->batches, &batch->key, batch);
 
 return batch;
@@ -124,18 +112,14 @@ panfrost_get_batch_for_fbo(struct panfrost_context *ctx)
 /* If we already began rendering, use that */
 
 if (ctx->batch) {
-assert(ctx->batch->key.zsbuf == ctx->pipe_framebuffer.zsbuf &&
-   !memcmp(ctx->batch->key.cbufs,
-   ctx->pipe_framebuffer.cbufs,
-   sizeof(ctx->batch->key.cbufs)));
+assert(util_framebuffer_state_equal(&ctx->batch->key,
+&ctx->pipe_framebuffer));
 return ctx->batch;
 }
 
 /* If not, look up the job */
-
-struct pipe_surface **cbufs = ctx->pipe_framebuffer.cbufs;
-struct pipe_surface *zsbuf = ctx->pipe_framebuffer.zsbuf;
-struct panfrost_batch *batch = panfrost_get_batch(ctx, cbufs, zsbuf);
+struct panfrost_batch *batch = panfrost_get_batch(ctx,
+  
&ctx->pipe_framebuffer);
 
 /* Set this job as the current FBO job. Will be reset when updating the
  * FB state and when submitting or releasing a job.
@@ -390,13 +374,13 @@ panfrost_flush_jobs_reading_resource(struct 
panfrost_context *panfrost,
 static bool
 panfrost_batch_compare(const void *a, const void *b)
 {
-return memcmp(a, b, sizeof(struct panfrost_batch_key)) == 0;
+return util_framebuffer_state_equal(a, b);
 }
 
 static uint32_t
 panfrost_batch_hash(const void *key)
 {
-return _mesa_hash_data(key, sizeof(struct panfrost_batch_key));
+return _mesa_hash_data(key, sizeof(struct pipe_framebuffer_state));
 }
 
 /* Given a new bounding rectangle (scissor), let the job cover the union of the
diff --git a/src/gallium/drivers/panfrost/pan_job.h 
b/src/gallium/drivers/panfrost/pan_job.h
index c9f487871216..6d89603f8798 100644
--- a/src/gallium/drivers/panfrost/pan_job.h
+++ b/src/gallium/drivers/panfrost/pan_job.h
@@ -46,7 +46,7 @@ struct panfrost_batch_key {
 
 struct panfrost_batch {
 struct panfrost_context *ctx;
-struct panfrost_batch_key key;
+struct pipe_framebuffer_state key;
 
 /* Buffers cleared (PIPE_CLEAR_* bitmask) */
 unsigned clear;
@@ -127,8 +127,7 @@ panfrost_free_batch(struct panfrost_batch *batch);
 
 struct panfrost_batch *
 panfrost_get_batch(struct panfrost_context *ctx,
-   struct pipe_surface **cbufs,
-   struct pipe_surface *zsbuf);
+   const struct pipe_framebuffer_state *key);
 
 struct panfrost_batch *
 panfrost_get_batch_for_fbo(struct panfrost_context *ctx);
-- 
2.21.0

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

[Mesa-dev] [PATCH v3 10/25] panfrost: Make sure the BO is 'ready' when picked from the cache

2019-09-05 Thread Boris Brezillon
This is needed if we want to free the panfrost_batch object at submit
time, so that we don't have to GC the batch on the next job submission.
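
For reference, the new helper's timeout semantics are: a zero timeout makes
panfrost_bo_wait() a non-blocking probe, while INT64_MAX blocks until the
GPU is done with the BO. A minimal usage sketch (illustration only):

        /* Try to recycle only idle BOs first... */
        if (panfrost_bo_wait(bo, 0)) {
                /* BO is ready, safe to hand it out right away. */
        } else {
                /* ...and only block on it as a last resort. */
                panfrost_bo_wait(bo, INT64_MAX);
        }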

Signed-off-by: Boris Brezillon 
---
 src/gallium/drivers/panfrost/pan_bo.c | 68 ++-
 src/gallium/drivers/panfrost/pan_bo.h |  2 +
 2 files changed, 49 insertions(+), 21 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_bo.c 
b/src/gallium/drivers/panfrost/pan_bo.c
index 1f87c18e9ad5..3fd2e179aa72 100644
--- a/src/gallium/drivers/panfrost/pan_bo.c
+++ b/src/gallium/drivers/panfrost/pan_bo.c
@@ -23,6 +23,7 @@
  * Authors (Collabora):
  *   Alyssa Rosenzweig 
  */
+#include 
 #include 
 #include 
 #include 
@@ -99,6 +100,23 @@ panfrost_bo_free(struct panfrost_bo *bo)
 ralloc_free(bo);
 }
 
+bool
+panfrost_bo_wait(struct panfrost_bo *bo, int64_t timeout_ns)
+{
+struct drm_panfrost_wait_bo req = {
+.handle = bo->gem_handle,
+   .timeout_ns = timeout_ns,
+};
+int ret;
+
+ret = drmIoctl(bo->screen->fd, DRM_IOCTL_PANFROST_WAIT_BO, &req);
+if (ret != -1)
+return true;
+
+assert(errno == ETIMEDOUT || errno == EBUSY);
+return false;
+}
+
 /* Helper to calculate the bucket index of a BO */
 
 static unsigned
@@ -136,7 +154,7 @@ pan_bucket(struct panfrost_screen *screen, unsigned size)
 
 static struct panfrost_bo *
 panfrost_bo_cache_fetch(struct panfrost_screen *screen,
-size_t size, uint32_t flags)
+size_t size, uint32_t flags, bool dontwait)
 {
 pthread_mutex_lock(&screen->bo_cache_lock);
 struct list_head *bucket = pan_bucket(screen, size);
@@ -144,27 +162,29 @@ panfrost_bo_cache_fetch(struct panfrost_screen *screen,
 
 /* Iterate the bucket looking for something suitable */
 list_for_each_entry_safe(struct panfrost_bo, entry, bucket, link) {
-if (entry->size >= size &&
-entry->flags == flags) {
-int ret;
-struct drm_panfrost_madvise madv;
+if (entry->size < size || entry->flags != flags)
+continue;
 
-/* This one works, splice it out of the cache */
-list_del(&entry->link);
+if (!panfrost_bo_wait(entry, dontwait ? 0 : INT64_MAX))
+continue;
 
-madv.handle = entry->gem_handle;
-madv.madv = PANFROST_MADV_WILLNEED;
-madv.retained = 0;
+struct drm_panfrost_madvise madv = {
+.handle = entry->gem_handle,
+.madv = PANFROST_MADV_WILLNEED,
+};
+int ret;
 
-ret = drmIoctl(screen->fd, DRM_IOCTL_PANFROST_MADVISE, 
&madv);
-if (!ret && !madv.retained) {
-panfrost_bo_free(entry);
-continue;
-}
-/* Let's go! */
-bo = entry;
-break;
+/* This one works, splice it out of the cache */
+list_del(&entry->link);
+
+ret = drmIoctl(screen->fd, DRM_IOCTL_PANFROST_MADVISE, &madv);
+if (!ret && !madv.retained) {
+panfrost_bo_free(entry);
+continue;
 }
+/* Let's go! */
+bo = entry;
+break;
 }
 pthread_mutex_unlock(&screen->bo_cache_lock);
 
@@ -277,12 +297,18 @@ panfrost_bo_create(struct panfrost_screen *screen, size_t 
size,
 if (flags & PAN_ALLOCATE_GROWABLE)
 assert(flags & PAN_ALLOCATE_INVISIBLE);
 
-/* Before creating a BO, we first want to check the cache, otherwise,
- * the cache misses and we need to allocate a BO fresh from the kernel
+/* Before creating a BO, we first want to check the cache but without
+ * waiting for BO readiness (BOs in the cache can still be referenced
+ * by jobs that are not finished yet).
+ * If the cached allocation fails we fall back on fresh BO allocation,
+ * and if that fails too, we try one more time to allocate from the
+ * cache, but this time we accept to wait.
  */
-bo = panfrost_bo_cache_fetch(screen, size, flags);
+bo = panfrost_bo_cache_fetch(screen, size, flags, true);
 if (!bo)
 bo = panfrost_bo_alloc(screen, size, flags);
+if (!bo)
+bo = panfrost_bo_cache_fetch(screen, size, flags, false);
 
 assert(bo);
 
diff --git a/src/gallium/drivers/panfrost/pan_bo.h 
b/src/gallium/drivers/panfrost/pan_bo.h
index b4bff645e257..5c0a00d944ec 100644
--- a/src/gallium/drivers/panfrost/pan_bo.h
+++ b/src/gallium/drivers/p

[Mesa-dev] [PATCH v3 04/25] panfrost: Make transient allocation rely on the BO cache

2019-09-05 Thread Boris Brezillon
Right now, the transient memory allocator implements its own BO caching
mechanism, which is not really needed since we already have a generic
BO cache. Let's simplify things a bit.
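
For reference, the allocator's contract is unchanged by this patch: callers
still get a CPU/GPU pair and copy their data into it, e.g. (sketch mirroring
what panfrost_upload_transient() already does; the function name here is
just for illustration):

static mali_ptr
upload_transient_example(struct panfrost_context *ctx, const void *data,
                         size_t sz)
{
        struct panfrost_transfer t = panfrost_allocate_transient(ctx, sz);

        memcpy(t.cpu, data, sz);        /* CPU-visible mapping */
        return t.gpu;                   /* GPU address for job descriptors */
}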

Signed-off-by: Boris Brezillon 
Reviewed-by: Alyssa Rosenzweig
---
Changes in v3:
* Collect R-b

Changes in v2:
* None
---
 src/gallium/drivers/panfrost/pan_allocate.c | 80 -
 src/gallium/drivers/panfrost/pan_job.c  | 11 ---
 src/gallium/drivers/panfrost/pan_job.h  |  4 +-
 src/gallium/drivers/panfrost/pan_screen.c   |  4 --
 src/gallium/drivers/panfrost/pan_screen.h   | 21 --
 5 files changed, 16 insertions(+), 104 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_allocate.c 
b/src/gallium/drivers/panfrost/pan_allocate.c
index d8a594551c76..a22b1a5a88d6 100644
--- a/src/gallium/drivers/panfrost/pan_allocate.c
+++ b/src/gallium/drivers/panfrost/pan_allocate.c
@@ -34,27 +34,6 @@
 /* TODO: What does this actually have to be? */
 #define ALIGNMENT 128
 
-/* Allocate a new transient slab */
-
-static struct panfrost_bo *
-panfrost_create_slab(struct panfrost_screen *screen, unsigned *index)
-{
-/* Allocate a new slab on the screen */
-
-struct panfrost_bo **new =
-util_dynarray_grow(&screen->transient_bo,
-struct panfrost_bo *, 1);
-
-struct panfrost_bo *alloc = panfrost_drm_create_bo(screen, 
TRANSIENT_SLAB_SIZE, 0);
-
-*new = alloc;
-
-/* Return the BO as well as the index we just added */
-
-*index = util_dynarray_num_elements(&screen->transient_bo, void *) - 1;
-return alloc;
-}
-
 /* Transient command stream pooling: command stream uploads try to simply copy
  * into whereever we left off. If there isn't space, we allocate a new entry
  * into the pool and copy there */
@@ -72,59 +51,32 @@ panfrost_allocate_transient(struct panfrost_context *ctx, 
size_t sz)
 struct panfrost_bo *bo = NULL;
 
 unsigned offset = 0;
-bool update_offset = false;
 
-pthread_mutex_lock(&screen->transient_lock);
-bool has_current = batch->transient_indices.size;
 bool fits_in_current = (batch->transient_offset + sz) < 
TRANSIENT_SLAB_SIZE;
 
-if (likely(has_current && fits_in_current)) {
-/* We can reuse the topmost BO, so get it */
-unsigned idx = util_dynarray_top(&batch->transient_indices, 
unsigned);
-bo = pan_bo_for_index(screen, idx);
+if (likely(batch->transient_bo && fits_in_current)) {
+/* We can reuse the current BO, so get it */
+bo = batch->transient_bo;
 
 /* Use the specified offset */
 offset = batch->transient_offset;
-update_offset = true;
-} else if (sz < TRANSIENT_SLAB_SIZE) {
-/* We can't reuse the topmost BO, but we can get a new one.
- * First, look for a free slot */
-
-unsigned count = 
util_dynarray_num_elements(&screen->transient_bo, void *);
-unsigned index = 0;
-
-unsigned free = __bitset_ffs(
-screen->free_transient,
-count / BITSET_WORDBITS);
-
-if (likely(free)) {
-/* Use this one */
-index = free - 1;
-
-/* It's ours, so no longer free */
-BITSET_CLEAR(screen->free_transient, index);
-
-/* Grab the BO */
-bo = pan_bo_for_index(screen, index);
-} else {
-/* Otherwise, create a new BO */
-bo = panfrost_create_slab(screen, &index);
-}
-
-panfrost_batch_add_bo(batch, bo);
-
-/* Remember we created this */
-util_dynarray_append(&batch->transient_indices, unsigned, 
index);
-
-update_offset = true;
+batch->transient_offset = offset + sz;
 } else {
-/* Create a new BO and reference it */
-bo = panfrost_drm_create_bo(screen, ALIGN_POT(sz, 4096), 0);
+size_t bo_sz = sz < TRANSIENT_SLAB_SIZE ?
+   TRANSIENT_SLAB_SIZE : ALIGN_POT(sz, 4096);
+
+/* We can't reuse the current BO, but we can create a new one. 
*/
+bo = panfrost_drm_create_bo(screen, bo_sz, 0);
 panfrost_batch_add_bo(batch, bo);
 
 /* Creating a BO adds a reference, and then the job adds a
  * second one. So we need to pop back one reference */
 panfrost_bo_unreference(&screen->base, bo);
+
+if (sz < TRANSIENT_SLAB_SIZE) {
+batch->transient_bo = bo;
+batch->transient_offset = offset + sz;
+}
 }
 
 struct panfrost_t

[Mesa-dev] [PATCH v3 02/25] panfrost: Pass a batch to panfrost_drm_submit_vs_fs_batch()

2019-09-05 Thread Boris Brezillon
Given the function name, it makes more sense to pass it a batch
directly.

Signed-off-by: Boris Brezillon 
Reviewed-by: Alyssa Rosenzweig
Reviewed-by: Daniel Stone 
---
Changes in v3:
* Collect R-bs

Changes in v2:
* s/panfrost_job_get_batch_for_fbo/panfrost_get_batch_for_fbo/
* s/panfrost_job_batch/panfrost_batch/g
---
 src/gallium/drivers/panfrost/pan_drm.c| 13 ++---
 src/gallium/drivers/panfrost/pan_job.c|  2 +-
 src/gallium/drivers/panfrost/pan_screen.h |  3 ++-
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_drm.c 
b/src/gallium/drivers/panfrost/pan_drm.c
index 75fc5a726b1f..768d9602eee7 100644
--- a/src/gallium/drivers/panfrost/pan_drm.c
+++ b/src/gallium/drivers/panfrost/pan_drm.c
@@ -248,12 +248,12 @@ panfrost_drm_export_bo(struct panfrost_screen *screen, 
const struct panfrost_bo
 }
 
 static int
-panfrost_drm_submit_batch(struct panfrost_context *ctx, u64 first_job_desc,
+panfrost_drm_submit_batch(struct panfrost_batch *batch, u64 first_job_desc,
   int reqs)
 {
+struct panfrost_context *ctx = batch->ctx;
 struct pipe_context *gallium = (struct pipe_context *) ctx;
 struct panfrost_screen *screen = pan_screen(gallium->screen);
-struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
 struct drm_panfrost_submit submit = {0,};
 int *bo_handles, ret;
 
@@ -293,23 +293,22 @@ panfrost_drm_submit_batch(struct panfrost_context *ctx, 
u64 first_job_desc,
 }
 
 int
-panfrost_drm_submit_vs_fs_batch(struct panfrost_context *ctx, bool has_draws)
+panfrost_drm_submit_vs_fs_batch(struct panfrost_batch *batch, bool has_draws)
 {
+struct panfrost_context *ctx = batch->ctx;
 int ret = 0;
 
-struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
-
 panfrost_batch_add_bo(batch, ctx->scratchpad.bo);
 panfrost_batch_add_bo(batch, ctx->tiler_heap.bo);
 panfrost_batch_add_bo(batch, batch->polygon_list);
 
 if (batch->first_job.gpu) {
-ret = panfrost_drm_submit_batch(ctx, batch->first_job.gpu, 0);
+ret = panfrost_drm_submit_batch(batch, batch->first_job.gpu, 
0);
 assert(!ret);
 }
 
 if (batch->first_tiler.gpu || batch->clear) {
-ret = panfrost_drm_submit_batch(ctx,
+ret = panfrost_drm_submit_batch(batch,
 panfrost_fragment_job(ctx, 
has_draws),
 PANFROST_JD_REQ_FS);
 assert(!ret);
diff --git a/src/gallium/drivers/panfrost/pan_job.c 
b/src/gallium/drivers/panfrost/pan_job.c
index a019c2adf69a..f136ccb97fcd 100644
--- a/src/gallium/drivers/panfrost/pan_job.c
+++ b/src/gallium/drivers/panfrost/pan_job.c
@@ -211,7 +211,7 @@ panfrost_batch_submit(struct panfrost_context *ctx, struct 
panfrost_batch *batch
 
 bool has_draws = batch->last_job.gpu;
 
-ret = panfrost_drm_submit_vs_fs_batch(ctx, has_draws);
+ret = panfrost_drm_submit_vs_fs_batch(batch, has_draws);
 
 if (ret)
 fprintf(stderr, "panfrost_batch_submit failed: %d\n", ret);
diff --git a/src/gallium/drivers/panfrost/pan_screen.h 
b/src/gallium/drivers/panfrost/pan_screen.h
index 3017b9c154f4..11cbb72075ab 100644
--- a/src/gallium/drivers/panfrost/pan_screen.h
+++ b/src/gallium/drivers/panfrost/pan_screen.h
@@ -39,6 +39,7 @@
 #include 
 #include "pan_allocate.h"
 
+struct panfrost_batch;
 struct panfrost_context;
 struct panfrost_resource;
 struct panfrost_screen;
@@ -163,7 +164,7 @@ panfrost_drm_import_bo(struct panfrost_screen *screen, int 
fd);
 int
 panfrost_drm_export_bo(struct panfrost_screen *screen, const struct 
panfrost_bo *bo);
 int
-panfrost_drm_submit_vs_fs_batch(struct panfrost_context *ctx, bool has_draws);
+panfrost_drm_submit_vs_fs_batch(struct panfrost_batch *batch, bool has_draws);
 void
 panfrost_drm_force_flush_fragment(struct panfrost_context *ctx,
   struct pipe_fence_handle **fence);
-- 
2.21.0


[Mesa-dev] [PATCH v3 00/25] panfrost: Rework the batch pipelining logic

2019-09-05 Thread Boris Brezillon
Hello,

This is actually a v1 except for patches 1 to 4, which have already
been submitted separately.

The goal here is to rework the panfrost_job logic (renamed
panfrost_batch at the beginning of the series) to avoid unnecessary
flushes when we can.

The new solution is based on the VC4/V3D implementation.
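
To make the intent concrete, here is a rough sketch of the per-BO dependency
tracking idea borrowed from VC4/V3D (names and fields are made up for
illustration and do not match the series verbatim): a flush is only forced
when one batch needs data another batch has not submitted yet.

struct panfrost_bo;
struct panfrost_batch;

void panfrost_batch_submit(struct panfrost_batch *batch);

/* Hypothetical per-BO bookkeeping: remember which batch last wrote the BO. */
struct tracked_bo {
        struct panfrost_bo *bo;
        struct panfrost_batch *last_writer; /* batch with pending writes, or NULL */
};

/* Reading a BO only forces submission of the batch still writing to it. */
static void
batch_read_bo(struct panfrost_batch *batch, struct tracked_bo *tracked)
{
        if (tracked->last_writer && tracked->last_writer != batch)
                panfrost_batch_submit(tracked->last_writer);
}

/* Writing just records the new writer; nothing has to be flushed up front. */
static void
batch_write_bo(struct panfrost_batch *batch, struct tracked_bo *tracked)
{
        tracked->last_writer = batch;
}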

Regards,

Boris

Boris Brezillon (25):
  panfrost: s/job/batch/
  panfrost: Pass a batch to panfrost_drm_submit_vs_fs_batch()
  panfrost: Stop passing a ctx to functions being passed a batch
  panfrost: Make transient allocation rely on the BO cache
  panfrost: Convert ctx->{scratchpad,tiler_heap,tiler_dummy} to plain
BOs
  panfrost: Get rid of unused panfrost_context fields
  panfrost: Get rid of the now unused SLAB allocator
  panfrost: Rename pan_bo_cache.c into pan_bo.c
  panfrost: Rework the panfrost_bo API
  panfrost: Make sure the BO is 'ready' when picked from the cache
  panfrost: Use a pipe_framebuffer_state as the batch key
  panfrost: Get rid of the unused 'flush jobs accessing res' infra
  panfrost: Allow testing if a specific batch is targeting a scanout FB
  panfrost: Move the fence creation in panfrost_flush()
  panfrost: Move the batch submission logic to panfrost_batch_submit()
  panfrost: Pass a batch to panfrost_{allocate,upload}_transient()
  panfrost: Pass a batch to functions emitting FB descs
  panfrost: Use ctx->wallpaper_batch in panfrost_blit_wallpaper()
  panfrost: Pass a batch to panfrost_set_value_job()
  panfrost: Prepare things to avoid flushes on FB switch
  panfrost: Add new helpers to describe job depencencies on BOs
  panfrost: Delay payloads[].offset_start initialization
  panfrost: Remove uneeded add_bo() in initialize_surface()
  panfrost: Support batch pipelining
  panfrost/ci: New tests are passing

 .../drivers/panfrost/ci/expected-failures.txt |   4 -
 src/gallium/drivers/panfrost/meson.build  |   2 +-
 src/gallium/drivers/panfrost/pan_allocate.c   |  95 +---
 src/gallium/drivers/panfrost/pan_allocate.h   |  40 +-
 src/gallium/drivers/panfrost/pan_assemble.c   |   3 +-
 src/gallium/drivers/panfrost/pan_blend_cso.c  |   9 +-
 src/gallium/drivers/panfrost/pan_blit.c   |   9 +-
 src/gallium/drivers/panfrost/pan_bo.c | 405 ++
 src/gallium/drivers/panfrost/pan_bo.h |  80 +++
 src/gallium/drivers/panfrost/pan_bo_cache.c   | 167 --
 src/gallium/drivers/panfrost/pan_compute.c|  12 +-
 src/gallium/drivers/panfrost/pan_context.c| 478 +++--
 src/gallium/drivers/panfrost/pan_context.h|  51 +-
 src/gallium/drivers/panfrost/pan_drm.c| 266 +-
 src/gallium/drivers/panfrost/pan_fragment.c   |  32 +-
 src/gallium/drivers/panfrost/pan_instancing.c |   9 +-
 src/gallium/drivers/panfrost/pan_job.c| 493 --
 src/gallium/drivers/panfrost/pan_job.h|  97 ++--
 src/gallium/drivers/panfrost/pan_mfbd.c   |  58 +--
 src/gallium/drivers/panfrost/pan_resource.c   |  64 +--
 src/gallium/drivers/panfrost/pan_resource.h   |   8 +-
 src/gallium/drivers/panfrost/pan_scoreboard.c |  29 +-
 src/gallium/drivers/panfrost/pan_screen.c |   5 +-
 src/gallium/drivers/panfrost/pan_screen.h |  62 +--
 src/gallium/drivers/panfrost/pan_sfbd.c   |  50 +-
 src/gallium/drivers/panfrost/pan_varyings.c   |  13 +-
 26 files changed, 1277 insertions(+), 1264 deletions(-)
 create mode 100644 src/gallium/drivers/panfrost/pan_bo.c
 create mode 100644 src/gallium/drivers/panfrost/pan_bo.h
 delete mode 100644 src/gallium/drivers/panfrost/pan_bo_cache.c

-- 
2.21.0


[Mesa-dev] [PATCH v3 01/25] panfrost: s/job/batch/

2019-09-05 Thread Boris Brezillon
What we currently call a job is actually a batch containing several jobs
all attached to a rendering operation targeting a specific FBO.

Let's rename structs, functions, variables and fields to reflect this
fact.
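
For readers new to the driver, a rough mental model of what one batch now
groups together (illustrative only, not the actual struct layout; the field
names mirror what the diffs touch, with simplified types):

#include <stdbool.h>
#include <stdint.h>

/* One batch collects every hardware job emitted for a single render pass
 * targeting one FBO -- which is why the old "panfrost_job" name was
 * misleading. */
struct batch_sketch {
        uint64_t first_job;   /* GPU address of the first job in the chain */
        uint64_t first_tiler; /* first tiler job, present once draws happen */
        uint64_t last_job;    /* tail of the job chain */
        bool clear;           /* a clear is pending for this render pass */
};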

Suggested-by: Alyssa Rosenzweig 
Signed-off-by: Boris Brezillon 
---
Changes in v3:
* s/panfrost_job_/panfrost_batch_/

Changes in v2:
* s/panfrost_job_get_batch_for_fbo/panfrost_get_batch_for_fbo/
* s/panfrost_job_batch/panfrost_batch/g
---
 src/gallium/drivers/panfrost/pan_allocate.c   |   6 +-
 src/gallium/drivers/panfrost/pan_blend_cso.c  |   4 +-
 src/gallium/drivers/panfrost/pan_compute.c|   2 +-
 src/gallium/drivers/panfrost/pan_context.c|  72 +++
 src/gallium/drivers/panfrost/pan_context.h|  12 +-
 src/gallium/drivers/panfrost/pan_drm.c|  33 +--
 src/gallium/drivers/panfrost/pan_fragment.c   |  20 +-
 src/gallium/drivers/panfrost/pan_instancing.c |   6 +-
 src/gallium/drivers/panfrost/pan_job.c| 198 +-
 src/gallium/drivers/panfrost/pan_job.h|  72 +++
 src/gallium/drivers/panfrost/pan_mfbd.c   |  30 +--
 src/gallium/drivers/panfrost/pan_resource.c   |   4 +-
 src/gallium/drivers/panfrost/pan_scoreboard.c |  22 +-
 src/gallium/drivers/panfrost/pan_screen.h |   2 +-
 src/gallium/drivers/panfrost/pan_sfbd.c   |  36 ++--
 src/gallium/drivers/panfrost/pan_varyings.c   |   4 +-
 16 files changed, 264 insertions(+), 259 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_allocate.c 
b/src/gallium/drivers/panfrost/pan_allocate.c
index 2efb01c75589..d8a594551c76 100644
--- a/src/gallium/drivers/panfrost/pan_allocate.c
+++ b/src/gallium/drivers/panfrost/pan_allocate.c
@@ -63,7 +63,7 @@ struct panfrost_transfer
 panfrost_allocate_transient(struct panfrost_context *ctx, size_t sz)
 {
 struct panfrost_screen *screen = pan_screen(ctx->base.screen);
-struct panfrost_job *batch = panfrost_get_job_for_fbo(ctx);
+struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
 
 /* Pad the size */
 sz = ALIGN_POT(sz, ALIGNMENT);
@@ -111,7 +111,7 @@ panfrost_allocate_transient(struct panfrost_context *ctx, 
size_t sz)
 bo = panfrost_create_slab(screen, &index);
 }
 
-panfrost_job_add_bo(batch, bo);
+panfrost_batch_add_bo(batch, bo);
 
 /* Remember we created this */
 util_dynarray_append(&batch->transient_indices, unsigned, 
index);
@@ -120,7 +120,7 @@ panfrost_allocate_transient(struct panfrost_context *ctx, 
size_t sz)
 } else {
 /* Create a new BO and reference it */
 bo = panfrost_drm_create_bo(screen, ALIGN_POT(sz, 4096), 0);
-panfrost_job_add_bo(batch, bo);
+panfrost_batch_add_bo(batch, bo);
 
 /* Creating a BO adds a reference, and then the job adds a
  * second one. So we need to pop back one reference */
diff --git a/src/gallium/drivers/panfrost/pan_blend_cso.c 
b/src/gallium/drivers/panfrost/pan_blend_cso.c
index 43121335f5e7..ab49772f3ba3 100644
--- a/src/gallium/drivers/panfrost/pan_blend_cso.c
+++ b/src/gallium/drivers/panfrost/pan_blend_cso.c
@@ -227,7 +227,7 @@ struct panfrost_blend_final
 panfrost_get_blend_for_context(struct panfrost_context *ctx, unsigned rti)
 {
 struct panfrost_screen *screen = pan_screen(ctx->base.screen);
-struct panfrost_job *job = panfrost_get_job_for_fbo(ctx);
+struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
 
 /* Grab the format, falling back gracefully if called invalidly (which
  * has to happen for no-color-attachment FBOs, for instance)  */
@@ -276,7 +276,7 @@ panfrost_get_blend_for_context(struct panfrost_context 
*ctx, unsigned rti)
 memcpy(final.shader.bo->cpu, shader->buffer, shader->size);
 
 /* Pass BO ownership to job */
-panfrost_job_add_bo(job, final.shader.bo);
+panfrost_batch_add_bo(batch, final.shader.bo);
 panfrost_bo_unreference(ctx->base.screen, final.shader.bo);
 
 if (shader->patch_index) {
diff --git a/src/gallium/drivers/panfrost/pan_compute.c 
b/src/gallium/drivers/panfrost/pan_compute.c
index 50e70cd8298e..51967fe481ef 100644
--- a/src/gallium/drivers/panfrost/pan_compute.c
+++ b/src/gallium/drivers/panfrost/pan_compute.c
@@ -128,7 +128,7 @@ panfrost_launch_grid(struct pipe_context *pipe,
 memcpy(transfer.cpu + sizeof(job), payload, sizeof(*payload));
 
 /* TODO: Do we want a special compute-only batch? */
-struct panfrost_job *batch = panfrost_get_job_for_fbo(ctx);
+struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
 
 /* Queue the job */
 panfrost_scoreboard_queue_compute_job(batch, transfer);
diff --git a/src/gallium/drivers/panfrost/pan_context.c 
b/src/gallium/drivers/panfrost/pan_context.c
index 2db360b0d490..ce895822014d 100644
--- a/src/gallium/driv

[Mesa-dev] [PATCH v3 03/25] panfrost: Stop passing a ctx to functions being passed a batch

2019-09-05 Thread Boris Brezillon
The context can be retrieved from batch->ctx.

Signed-off-by: Boris Brezillon 
Alyssa Rosenzweig 
Reviewed-by: Daniel Stone 
---
Changes in v3:
* Collect R-bs

Changes in v2:
* s/panfrost_job_get_batch_for_fbo/panfrost_get_batch_for_fbo/
* s/panfrost_job_batch/panfrost_batch/g
---
 src/gallium/drivers/panfrost/pan_context.c |  6 +++---
 src/gallium/drivers/panfrost/pan_drm.c |  2 +-
 src/gallium/drivers/panfrost/pan_job.c | 25 +-
 src/gallium/drivers/panfrost/pan_job.h | 11 --
 4 files changed, 23 insertions(+), 21 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_context.c 
b/src/gallium/drivers/panfrost/pan_context.c
index ce895822014d..292de7fe132c 100644
--- a/src/gallium/drivers/panfrost/pan_context.c
+++ b/src/gallium/drivers/panfrost/pan_context.c
@@ -180,7 +180,7 @@ panfrost_clear(
 struct panfrost_context *ctx = pan_context(pipe);
 struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
 
-panfrost_batch_clear(ctx, batch, buffers, color, depth, stencil);
+panfrost_batch_clear(batch, buffers, color, depth, stencil);
 }
 
 static mali_ptr
@@ -907,7 +907,7 @@ panfrost_emit_for_draw(struct panfrost_context *ctx, bool 
with_vertex_data)
 SET_BIT(ctx->fragment_shader_core.unknown2_4, MALI_NO_MSAA, 
!msaa);
 }
 
-panfrost_batch_set_requirements(ctx, batch);
+panfrost_batch_set_requirements(batch);
 
 if (ctx->occlusion_query) {
 ctx->payloads[PIPE_SHADER_FRAGMENT].gl_enables |= 
MALI_OCCLUSION_QUERY | MALI_OCCLUSION_PRECISE;
@@ -1329,7 +1329,7 @@ panfrost_submit_frame(struct panfrost_context *ctx, bool 
flush_immediate,
   struct pipe_fence_handle **fence,
   struct panfrost_batch *batch)
 {
-panfrost_batch_submit(ctx, batch);
+panfrost_batch_submit(batch);
 
 /* If visual, we can stall a frame */
 
diff --git a/src/gallium/drivers/panfrost/pan_drm.c 
b/src/gallium/drivers/panfrost/pan_drm.c
index 768d9602eee7..040cb1368e4e 100644
--- a/src/gallium/drivers/panfrost/pan_drm.c
+++ b/src/gallium/drivers/panfrost/pan_drm.c
@@ -355,7 +355,7 @@ panfrost_drm_force_flush_fragment(struct panfrost_context 
*ctx,
 ctx->last_fragment_flushed = true;
 
 /* The job finished up, so we're safe to clean it up now */
-panfrost_free_batch(ctx, ctx->last_batch);
+panfrost_free_batch(ctx->last_batch);
 }
 
 if (fence) {
diff --git a/src/gallium/drivers/panfrost/pan_job.c 
b/src/gallium/drivers/panfrost/pan_job.c
index f136ccb97fcd..0d19c2b4c5cd 100644
--- a/src/gallium/drivers/panfrost/pan_job.c
+++ b/src/gallium/drivers/panfrost/pan_job.c
@@ -54,11 +54,13 @@ panfrost_create_batch(struct panfrost_context *ctx)
 }
 
 void
-panfrost_free_batch(struct panfrost_context *ctx, struct panfrost_batch *batch)
+panfrost_free_batch(struct panfrost_batch *batch)
 {
 if (!batch)
 return;
 
+struct panfrost_context *ctx = batch->ctx;
+
 set_foreach(batch->bos, entry) {
 struct panfrost_bo *bo = (struct panfrost_bo *)entry->key;
 panfrost_bo_unreference(ctx->base.screen, bo);
@@ -195,18 +197,20 @@ panfrost_flush_jobs_writing_resource(struct 
panfrost_context *panfrost,
prsc);
 if (entry) {
 struct panfrost_batch *batch = entry->data;
-panfrost_batch_submit(panfrost, job);
+panfrost_batch_submit(job);
 }
 #endif
 /* TODO stub */
 }
 
 void
-panfrost_batch_submit(struct panfrost_context *ctx, struct panfrost_batch 
*batch)
+panfrost_batch_submit(struct panfrost_batch *batch)
 {
+assert(batch);
+
+struct panfrost_context *ctx = batch->ctx;
 int ret;
 
-assert(batch);
 panfrost_scoreboard_link_batch(batch);
 
 bool has_draws = batch->last_job.gpu;
@@ -232,9 +236,10 @@ panfrost_batch_submit(struct panfrost_context *ctx, struct 
panfrost_batch *batch
 }
 
 void
-panfrost_batch_set_requirements(struct panfrost_context *ctx,
-struct panfrost_batch *batch)
+panfrost_batch_set_requirements(struct panfrost_batch *batch)
 {
+struct panfrost_context *ctx = batch->ctx;
+
 if (ctx->rasterizer && ctx->rasterizer->base.multisample)
 batch->requirements |= PAN_REQ_MSAA;
 
@@ -336,13 +341,13 @@ pan_pack_color(uint32_t *packed, const union 
pipe_color_union *color, enum pipe_
 }
 
 void
-panfrost_batch_clear(struct panfrost_context *ctx,
- struct panfrost_batch *batch,
+panfrost_batch_clear(struct panfrost_batch *batch,
  unsigned buffers,
  const union pipe_color_union *color,
  double depth, unsigned stencil)
-
 {
+struct panfrost_context *ctx = batch->ctx;
+
 if (buffers & PIPE_CLEAR

[Mesa-dev] [Bug 111529] EGL_PLATFORM=gbm doesn't expose MESA_query_driver extension

2019-09-05 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111529

--- Comment #6 from Jean Hertel  ---
Curious, as for me it keeps failing to recognize the extension, both with my
own compiled version and with the Arch-provided one.

Any other ideas about what could be wrong?
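
For what it's worth, a minimal check looks roughly like this (a sketch, not
taken from the bug; it assumes the GBM EGLDisplay is already initialized, and
it declares its own function-pointer type rather than relying on a particular
eglext.h revision):

#include <stdio.h>
#include <string.h>
#include <EGL/egl.h>

/* Sketch: report whether EGL_MESA_query_driver is advertised and, if so,
 * print the driver name. Assumes dpy was created via the GBM platform and
 * eglInitialize() already succeeded. */
static void
dump_driver_name(EGLDisplay dpy)
{
        const char *exts = eglQueryString(dpy, EGL_EXTENSIONS);

        if (!exts || !strstr(exts, "EGL_MESA_query_driver")) {
                fprintf(stderr, "EGL_MESA_query_driver not advertised\n");
                return;
        }

        /* eglGetDisplayDriverName(EGLDisplay) is the entry point added by
         * the extension; fetch it at runtime. */
        const char *(*get_name)(EGLDisplay) =
                (const char *(*)(EGLDisplay))
                eglGetProcAddress("eglGetDisplayDriverName");

        if (get_name)
                printf("driver: %s\n", get_name(dpy));
}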

-- 
You are receiving this mail because:
You are the QA Contact for the bug.

Re: [Mesa-dev] [PATCH] android: mesa: revert "Enable asm unconditionally"

2019-09-05 Thread Eric Engestrom
On Thursday, 2019-09-05 17:58:22 +0200, Mauro Rossi wrote:
> Hi Eric, Emil,
> we have Tapani's OK, in my understanding
> 
> Please follow up on this one

Sure, feel free to push the android revert until we can figure out a fix:
Acked-by: Eric Engestrom 


> Mauro
> 
> On Fri, Aug 16, 2019 at 4:29 AM Mauro Rossi  wrote:
>
> > [deeply nested quoted thread trimmed; the quoting was mangled in the
> > archive. The patch under discussion partially reverts 20294dc ("mesa:
> > Enable asm unconditionally, ..."), because the Android makefile build
> > needs assembler optimizations disabled in 32-bit builds to avoid text
> > relocations in libglapi.so.]

[Mesa-dev] [Bug 111444] [TRACKER] Mesa 19.2 release tracker

2019-09-05 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111444

Ian Romanick  changed:

   What|Removed |Added

 Depends on||110295


Referenced Bugs:

https://bugs.freedesktop.org/show_bug.cgi?id=110295
[Bug 110295] DiRT 4 has rendering problems
-- 
You are receiving this mail because:
You are the QA Contact for the bug.
You are the assignee for the bug.

[Mesa-dev] [Bug 111522] [bisected] Supraland no longer start

2019-09-05 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111522

Mark Janes  changed:

   What|Removed |Added

 Status|NEEDINFO|ASSIGNED
   Assignee|mesa-dev@lists.freedesktop. |fdo-b...@engestrom.ch
   |org |

--- Comment #8 from Mark Janes  ---
Eric, can you make a similar fix to what you did for minimagecount dri config?

-- 
You are receiving this mail because:
You are the assignee for the bug.

[Mesa-dev] [Bug 111444] [TRACKER] Mesa 19.2 release tracker

2019-09-05 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111444
Bug 111444 depends on bug 111384, which changed state.

Bug 111384 Summary: [BXT/Iris] (recoverable) GPU hang in SynMark compute CSCloth
https://bugs.freedesktop.org/show_bug.cgi?id=111384

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |WORKSFORME

-- 
You are receiving this mail because:
You are the QA Contact for the bug.
You are the assignee for the bug.

[Mesa-dev] [Bug 111444] [TRACKER] Mesa 19.2 release tracker

2019-09-05 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111444

Mark Janes  changed:

   What|Removed |Added

 Depends on||111405


Referenced Bugs:

https://bugs.freedesktop.org/show_bug.cgi?id=111405
[Bug 111405] Some infinite 'do{}while' loops lead mesa to an infinite
compilation
-- 
You are receiving this mail because:
You are the assignee for the bug.
You are the QA Contact for the bug.

Re: [Mesa-dev] [PATCH] android: mesa: revert "Enable asm unconditionally"

2019-09-05 Thread Mauro Rossi
Hi Eric, Emil,
we have Tapani's OK, in my understanding

Please follow up on this one
Mauro

On Fri, Aug 16, 2019 at 4:29 AM Mauro Rossi  wrote:

> Hi Tapani, Eric,
>
> [deeply nested quoted thread trimmed; the quoting was mangled in the
> archive. The patch under discussion partially reverts 20294dc ("mesa:
> Enable asm unconditionally, ..."), because the Android makefile build
> logic needs assembler optimization disabled in 32-bit builds to avoid
> text relocations in the libglapi.so shared library.]

[Mesa-dev] [Bug 111444] [TRACKER] Mesa 19.2 release tracker

2019-09-05 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111444

Mark Janes  changed:

   What|Removed |Added

 Depends on|110507  |


Referenced Bugs:

https://bugs.freedesktop.org/show_bug.cgi?id=110507
[Bug 110507] [Regression] [Bisected] assert in fragment shader compilation when
SIMD32 is enabled
-- 
You are receiving this mail because:
You are the assignee for the bug.
You are the QA Contact for the bug.

[Mesa-dev] [Bug 111522] [bisected] Supraland no longer start

2019-09-05 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111522

--- Comment #7 from Lionel Landwerlin  ---
Just to confirm, is this the title causing problems?
https://store.steampowered.com/app/813630/Supraland/

-- 
You are receiving this mail because:
You are the assignee for the bug.

Re: [Mesa-dev] [ANNOUNCE] mesa 19.2.0-rc2

2019-09-05 Thread apinheiro


On 5/9/19 0:57, Dylan Baker wrote:

Hi List,

I'd like to announce the availability of mesa-19.2.0-rc2. This is the
culmination of two weeks' worth of work. Due to maintenance, the Intel CI is not
running, but I've built and tested this locally. I would have preferred to get
more testing, but being two weeks out from -rc1 I wanted to get a release out.

Dylan



I would like to nominate the following v3d patch:

"broadcom/v3d: Allow importing linear BOs with arbitrary offset/stride" [1]

I already mentioned that patch on the "[Mesa-dev] Mesa 19.2.0 release 
plan" thread, but I forgot to CC mesa-stable. Sorry for that.


FWIW, the patch fixes the following piglit tests:

spec/ext_image_dma_buf_import/ext_image_dma_buf_import-sample_nv12
spec/ext_image_dma_buf_import/ext_image_dma_buf_import-sample_yuv420
spec/ext_image_dma_buf_import/ext_image_dma_buf_import-sample_yvu420

[1] 
https://gitlab.igalia.com/graphics/mesa/commit/873b092e9110a0605293db7bc1c5bcb749cf9a28

Shortlog:


Alex Smith (1):
   radv: Change memory type order for GPUs without dedicated VRAM

Alyssa Rosenzweig (1):
   pan/midgard: Fix writeout combining

Andres Rodriguez (1):
   radv: additional query fixes

Bas Nieuwenhuizen (3):
   radv: Use correct vgpr_comp_cnt for VS if both prim_id and instance_id 
are needed.
   radv: Emit VGT_GS_ONCHIP_CNTL for tess on GFX10.
   radv: Disable NGG for geometry shaders.

Danylo Piliaiev (1):
   nir/loop_unroll: Prepare loop for unrolling in wrapper_unroll

Dave Airlie (2):
   virgl: fix format conversion for recent gallium changes.
   gallivm: fix atomic compare-and-swap

Dylan Baker (1):
   bump version to 19.2-rc2

Ian Romanick (7):
   nir/algrbraic: Don't optimize open-coded bitfield reverse when lowering 
is enabled
   intel/compiler: Request bitfield_reverse lowering on pre-Gen7 hardware
   nir/algebraic: Mark some value range analysis-based optimizations 
imprecise
   nir/range-analysis: Adjust result range of exp2 to account for 
flush-to-zero
   nir/range-analysis: Adjust result range of multiplication to account for 
flush-to-zero
   nir/range-analysis: Fix incorrect fadd range result for (ne_zero, 
ne_zero)
   nir/range-analysis: Handle constants in nir_op_mov just like nir_op_bcsel

Ilia Mirkin (1):
   gallium/vl: use compute preference for all multimedia, not just blit

Jose Maria Casanova Crespo (1):
   mesa: recover target_check before get_current_tex_objects

Kenneth Graunke (15):
   gallium/ddebug: Wrap resource_get_param if available
   gallium/trace: Wrap resource_get_param if available
   gallium/rbug: Wrap resource_get_param if available
   gallium/noop: Implement resource_get_param
   iris: Replace devinfo->gen with GEN_GEN
   iris: Fix broken aux.possible/sampler_usages bitmask handling
   iris: Update fast clear colors on Gen9 with direct immediate writes.
   iris: Drop copy format hacks from copy region based transfer path.
   iris: Avoid unnecessary resolves on transfer maps
   iris: Fix large timeout handling in rel2abs()
   isl: Drop UnormPathInColorPipe for buffer surfaces.
   isl: Don't set UnormPathInColorPipe for integer surfaces.
   util: Add a _mesa_i64roundevenf() helper.
   mesa: Fix _mesa_float_to_unorm() on 32-bit systems.
   iris: Fix partial fast clear checks to account for miplevel.

Lionel Landwerlin (2):
   util/timespec: use unsigned 64 bit integers for nsec values
   util: fix compilation on macos

Marek Olšák (18):
   radeonsi/gfx10: fix the legacy pipeline by storing as_ngg in the shader 
cache
   radeonsi: move some global shader cache flags to per-binary flags
   radeonsi/gfx10: fix tessellation for the legacy pipeline
   radeonsi/gfx10: fix the PRIMITIVES_GENERATED query if using legacy 
streamout
   radeonsi/gfx10: create the GS copy shader if using legacy streamout
   radeonsi/gfx10: add as_ngg variant for VS as ES to select Wave32/64
   radeonsi/gfx10: fix InstanceID for legacy VS+GS
   radeonsi/gfx10: don't initialize VGT_INSTANCE_STEP_RATE_0
   radeonsi/gfx10: always use the legacy pipeline for streamout
   radeonsi/gfx10: finish up Navi14, add PCI ID
   radeonsi/gfx10: add AMD_DEBUG=nongg
   winsys/amdgpu+radeon: process AMD_DEBUG in addition to R600_DEBUG
   radeonsi: add PKT3_CONTEXT_REG_RMW
   radeonsi/gfx10: remove incorrect ngg/pos_writes_edgeflag variables
   radeonsi/gfx10: set PA_CL_VS_OUT_CNTL with CONTEXT_REG_RMW to fix edge 
flags
   radeonsi: consolidate determining VGPR_COMP_CNT for API VS
   radeonsi: unbind blend/DSA/rasterizer state correctly in delete functions
   radeonsi: fix scratch buffer WAVESIZE setting leading to corruption

Paulo Zanoni (1):
   intel/fs: grab fail_msg from v32 instead of v16 when v32->run_cs fails

Pierre-Eric Pelloux-Prayer (1):
   glsl: replace 'x + (-x)' with constant 0


[Mesa-dev] [Bug 111454] Gallium: removal of the manual defining of PIPE_FORMAT values breaks virgl

2019-09-05 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111454

--- Comment #8 from Gert Wollny  ---
Virgl sends the TGSI as text, so I think we are safe.

At least Dave and I ran the patches through piglit and the virglrenderer CI
on a hardware host (the fdo one only used softpipe), and everything passes as
expected.

-- 
You are receiving this mail because:
You are the assignee for the bug.
You are the QA Contact for the bug.