Re: [Mesa-dev] [PATCH v2 2/2] panfrost: protect access to shared bo cache and transient pool

2019-08-31 Thread Daniel Stone
Hi Boris,

On Sat, 31 Aug 2019 at 18:33, Boris Brezillon wrote:
> On Sat, 31 Aug 2019 17:06:30 +0100, Daniel Stone wrote:
> > On Fri, 30 Aug 2019 at 17:00, Rohan Garg wrote:
> > > Both the BO cache and the transient pool are shared across
> > > contexts. Protect access to these with mutexes.
> >
> > These fixes seem right to me, and (minus the issues Boris pointed
> > out), both are:
> > Reviewed-by: Daniel Stone 
>
> Unfortunately, they've already been applied (maybe I should wait
> a bit longer before applying patches :-/).

No problem, they'd already been reviewed in a previous round anyway. :)

Have a great weekend!

Cheers,
Daniel

Re: [Mesa-dev] [PATCH v2 2/2] panfrost: protect access to shared bo cache and transient pool

2019-08-31 Thread Boris Brezillon
On Sat, 31 Aug 2019 17:06:30 +0100, Daniel Stone wrote:

> Hi Rohan,
> 
> On Fri, 30 Aug 2019 at 17:00, Rohan Garg wrote:
> > Both the BO cache and the transient pool are shared across
> > contexts. Protect access to these with mutexes.
> 
> These fixes seem right to me, and (minus the issues Boris pointed
> out), both are:
> Reviewed-by: Daniel Stone 

Unfortunately, they've already been applied (maybe I should wait
a bit longer before applying patches :-/).

> 
> I think it might be worthwhile pushing both of these to the context in
> the longer term.

Just sent a patch that gets rid of the transient fields, so that leaves
the BO (reuse) cache.

Re: [Mesa-dev] [PATCH v2 2/2] panfrost: protect access to shared bo cache and transient pool

2019-08-31 Thread Daniel Stone
Hi Rohan,

On Fri, 30 Aug 2019 at 17:00, Rohan Garg wrote:
> Both the BO cache and the transient pool are shared across
> contexts. Protect access to these with mutexes.

These fixes seem right to me, and (minus the issues Boris pointed
out), both are:
Reviewed-by: Daniel Stone 

I think it might be worthwhile pushing both of these to the context in
the longer term. I wonder if we should also rename the 'BO cache' to
something else. (If only so I can stop saying the wrong thing about
'the BO cache'! Maybe we could call it 'BO reuse cache' or similar?)

Another use of BO caching (usually 'BO handle cache') is to handle the
exclusivity requirement around GEM handles. For any given kernel
buffer (wherever it comes from: a panfrost_gem_object allocated by us,
a panfrost_gem_object allocated by another process, a V4L2 buffer,
etc), for every DRM connection (a drm_file represents one open() of
the DRM device), there can only be one GEM handle referring to that
buffer.

drmPrimeFDToHandle() is where you will most often bump into this. If a
client imports the same underlying buffer twice, drmPrimeFDToHandle()
may return a GEM handle which you already have open. Since creating
and destroying GEM handles is not refcounted, this means that you may
get two panfrost_bos referring to the same GEM handle. When you close
one of those, the other will be left hanging: either you cannot use it
(because the handle is invalid), or you will end up overlapping with
another buffer which later comes to use the same GEM handle. Neither
result is good.

Given that, on the screen (not on the context, since it needs to be
scoped to every user of the same DRM FD), we need a BO handle cache.
The screen would have a list of every panfrost_bo for that device, and
when panfrost_drm_import_bo() receives a GEM handle back from the
kernel, we need to walk the list of all BOs. If we find a panfrost_bo
with the same handle, we just bump the refcount and return that. This
would need a screen-wide lock on the BO handle list.
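
For illustration, a minimal sketch of what that import path could look
like. Apart from drmPrimeFDToHandle() (the real libdrm call) everything
named here is hypothetical: the screen->bo_handles list and its lock,
the gem_handle/reference fields, and the panfrost_bo_wrap() helper are
assumptions for the sketch, not existing driver API.

#include <pthread.h>
#include <xf86drm.h>
#include "pan_screen.h"

static struct panfrost_bo *
panfrost_bo_import(struct panfrost_screen *screen, int prime_fd)
{
        uint32_t handle;

        /* For one drm_file, the kernel returns the same GEM handle for
         * repeated imports of the same underlying buffer. */
        if (drmPrimeFDToHandle(screen->fd, prime_fd, &handle))
                return NULL;

        pthread_mutex_lock(&screen->bo_handles_lock);

        /* If a panfrost_bo already wraps this handle, share it instead
         * of creating a duplicate that would double-close the handle. */
        list_for_each_entry(struct panfrost_bo, entry, &screen->bo_handles, link) {
                if (entry->gem_handle == handle) {
                        /* Bump the refcount (assumes a pipe_reference field). */
                        pipe_reference(NULL, &entry->reference);
                        pthread_mutex_unlock(&screen->bo_handles_lock);
                        return entry;
                }
        }

        /* First import: wrap the handle and remember it for next time. */
        struct panfrost_bo *bo = panfrost_bo_wrap(screen, handle);
        if (bo)
                list_addtail(&bo->link, &screen->bo_handles);

        pthread_mutex_unlock(&screen->bo_handles_lock);
        return bo;
}

The same lock (and a matching list_del()) would also have to cover the
BO destruction path, so a concurrent import can't pick up a panfrost_bo
whose handle is about to be closed.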

Would you be able to take a look at that please?

Cheers,
Daniel

Re: [Mesa-dev] [PATCH v2 2/2] panfrost: protect access to shared bo cache and transient pool

2019-08-30 Thread Boris Brezillon
On Fri, 30 Aug 2019 18:00:13 +0200, Rohan Garg wrote:

> Both the BO cache and the transient pool are shared across
> contexts. Protect access to these with mutexes.
> 
> Signed-off-by: Rohan Garg 

> --- a/src/gallium/drivers/panfrost/pan_screen.c
> +++ b/src/gallium/drivers/panfrost/pan_screen.c
> @@ -639,8 +639,10 @@ panfrost_create_screen(int fd, struct renderonly *ro)
>                  return NULL;
>          }
>  
> +	pthread_mutex_init(&screen->transient_lock, NULL);

Coding style issue: indentation should use spaces not tabs.

>          util_dynarray_init(&screen->transient_bo, screen);
>  
> +	pthread_mutex_init(&screen->bo_cache_lock, NULL);

We should probably call pthread_mutex_destroy() in
panfrost_destroy_screen().
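
A minimal sketch of that cleanup (assuming it slots into the existing
panfrost_destroy_screen() teardown path):

        /* Mirror the two pthread_mutex_init() calls from
         * panfrost_create_screen() during screen teardown. */
        pthread_mutex_destroy(&screen->transient_lock);
        pthread_mutex_destroy(&screen->bo_cache_lock);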

>          for (unsigned i = 0; i < ARRAY_SIZE(screen->bo_cache); ++i)
>                  list_inithead(&screen->bo_cache[i]);
>  
> diff --git a/src/gallium/drivers/panfrost/pan_screen.h b/src/gallium/drivers/panfrost/pan_screen.h
> index 7991b395f54..e3ea246d3f3 100644
> --- a/src/gallium/drivers/panfrost/pan_screen.h
> +++ b/src/gallium/drivers/panfrost/pan_screen.h
> @@ -104,6 +104,8 @@ struct panfrost_screen {
>  
>          struct renderonly *ro;
>  
> +        pthread_mutex_t transient_lock;
> +
>          /* Transient memory management is based on borrowing fixed-size slabs
>           * off the screen (loaning them out to the batch). Dynamic array
>           * container of panfrost_bo */
> @@ -113,6 +115,8 @@ struct panfrost_screen {
>          /* Set of free transient BOs */
>          BITSET_DECLARE(free_transient, MAX_TRANSIENT_SLABS);
>  
> +	pthread_mutex_t bo_cache_lock;

Same indentation issue.

No need to send a new version, I can fix those issues when applying.

Reviewed-by: Boris Brezillon 

> +
>          /* The BO cache is a set of buckets with power-of-two sizes ranging
>           * from 2^12 (4096, the page size) to 2^(12 + MAX_BO_CACHE_BUCKETS).
>           * Each bucket is a linked list of free panfrost_bo objects. */


[Mesa-dev] [PATCH v2 2/2] panfrost: protect access to shared bo cache and transient pool

2019-08-30 Thread Rohan Garg
Both the BO cache and the transient pool are shared across
contexts. Protect access to these with mutexes.

Signed-off-by: Rohan Garg 
---
 src/gallium/drivers/panfrost/pan_allocate.c |  2 ++
 src/gallium/drivers/panfrost/pan_bo_cache.c | 16 +++++++++++-----
 src/gallium/drivers/panfrost/pan_job.c      |  2 ++
 src/gallium/drivers/panfrost/pan_screen.c   |  2 ++
 src/gallium/drivers/panfrost/pan_screen.h   |  4 ++++
 5 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/src/gallium/drivers/panfrost/pan_allocate.c b/src/gallium/drivers/panfrost/pan_allocate.c
index f549c864c70..fb8b18fe718 100644
--- a/src/gallium/drivers/panfrost/pan_allocate.c
+++ b/src/gallium/drivers/panfrost/pan_allocate.c
@@ -74,6 +74,7 @@ panfrost_allocate_transient(struct panfrost_context *ctx, size_t sz)
         unsigned offset = 0;
         bool update_offset = false;
 
+        pthread_mutex_lock(&screen->transient_lock);
         bool has_current = batch->transient_indices.size;
         bool fits_in_current = (batch->transient_offset + sz) < TRANSIENT_SLAB_SIZE;
 
@@ -131,6 +132,7 @@ panfrost_allocate_transient(struct panfrost_context *ctx, size_t sz)
 
         if (update_offset)
                 batch->transient_offset = offset + sz;
+        pthread_mutex_unlock(&screen->transient_lock);
 
         return ret;
 
diff --git a/src/gallium/drivers/panfrost/pan_bo_cache.c b/src/gallium/drivers/panfrost/pan_bo_cache.c
index 9dd6b694b72..f2f49437a89 100644
--- a/src/gallium/drivers/panfrost/pan_bo_cache.c
+++ b/src/gallium/drivers/panfrost/pan_bo_cache.c
@@ -24,6 +24,7 @@
  *   Alyssa Rosenzweig 
  */
 #include <xf86drm.h>
+#include <pthread.h>
 #include "drm-uapi/panfrost_drm.h"
 
 #include "pan_screen.h"
@@ -84,7 +85,9 @@ panfrost_bo_cache_fetch(
                 struct panfrost_screen *screen,
                 size_t size, uint32_t flags)
 {
+        pthread_mutex_lock(&screen->bo_cache_lock);
         struct list_head *bucket = pan_bucket(screen, size);
+        struct panfrost_bo *bo = NULL;
 
         /* Iterate the bucket looking for something suitable */
         list_for_each_entry_safe(struct panfrost_bo, entry, bucket, link) {
@@ -106,12 +109,13 @@ panfrost_bo_cache_fetch(
                                 continue;
                         }
                         /* Let's go! */
-                        return entry;
+                        bo = entry;
+                        break;
                 }
         }
+        pthread_mutex_unlock(&screen->bo_cache_lock);
 
-        /* We didn't find anything */
-        return NULL;
+        return bo;
 }
 
 /* Tries to add a BO to the cache. Returns if it was
@@ -122,6 +126,7 @@ panfrost_bo_cache_put(
                 struct panfrost_screen *screen,
                 struct panfrost_bo *bo)
 {
+        pthread_mutex_lock(&screen->bo_cache_lock);
         struct list_head *bucket = pan_bucket(screen, bo->size);
         struct drm_panfrost_madvise madv;
 
@@ -133,6 +138,7 @@ panfrost_bo_cache_put(
 
         /* Add us to the bucket */
         list_addtail(&bo->link, bucket);
+        pthread_mutex_unlock(&screen->bo_cache_lock);
 
         return true;
 }
@@ -147,6 +153,7 @@ void
 panfrost_bo_cache_evict_all(
                 struct panfrost_screen *screen)
 {
+        pthread_mutex_lock(&screen->bo_cache_lock);
         for (unsigned i = 0; i < ARRAY_SIZE(screen->bo_cache); ++i) {
                 struct list_head *bucket = &screen->bo_cache[i];
 
@@ -155,7 +162,6 @@ panfrost_bo_cache_evict_all(
                         panfrost_drm_release_bo(screen, entry, false);
                 }
         }
-
-        return;
+        pthread_mutex_unlock(&screen->bo_cache_lock);
 }
 
diff --git a/src/gallium/drivers/panfrost/pan_job.c b/src/gallium/drivers/panfrost/pan_job.c
index 4d8ec2eadc9..4d2908a58b7 100644
--- a/src/gallium/drivers/panfrost/pan_job.c
+++ b/src/gallium/drivers/panfrost/pan_job.c
@@ -67,10 +67,12 @@ panfrost_free_job(struct panfrost_context *ctx, struct panfrost_job *job)
         /* Free up the transient BOs we're sitting on */
         struct panfrost_screen *screen = pan_screen(ctx->base.screen);
 
+        pthread_mutex_lock(&screen->transient_lock);
         util_dynarray_foreach(&job->transient_indices, unsigned, index) {
                 /* Mark it free */
                 BITSET_SET(screen->free_transient, *index);
         }
+        pthread_mutex_unlock(&screen->transient_lock);
 
         /* Unreference the polygon list */
         panfrost_bo_unreference(ctx->base.screen, job->polygon_list);
diff --git a/src/gallium/drivers/panfrost/pan_screen.c b/src/gallium/drivers/panfrost/pan_screen.c
index 5c288f52bbd..5ceceac33ae 100644
--- a/src/gallium/drivers/panfrost/pan_screen.c
+++ b/src/gallium/drivers/panfrost/pan_screen.c
@@ -639,8 +639,10 @@ panfrost_create_screen(int fd, struct renderonly *ro)
                 return NULL;
         }
 
+	pthread_mutex_init(&screen->transient_lock, NULL);
         util_dynarray_init(&screen->transient_bo, screen);
 
+	pthread_mutex_init(&screen->bo_cache_lock, NULL);
         for (unsigned i = 0; i < ARRAY_SIZE(screen->bo_cache); ++i)
                 list_inithead(&screen->bo_cache[i]);
 
diff --git a/src/gallium/drivers/panfrost/pan_screen.h b/src/gallium/drivers/panfrost/pan_screen.h
index 7991b395f54..e3ea246d3f3 100644
--- a/src/gallium/drivers/panfrost/pan_screen.h
+++ b/src/gallium/drivers/panfrost/pan_screen.h
@@ -104,6 +104,8 @@ struct panfrost_screen {
 
         struct renderonly *ro;
 
+        pthread_mutex_t transient_lock;
+
         /* Transient memory management is based on borrowing fixed-size slabs
          * off the screen (loaning them out to the batch). Dynamic array
          * container of panfrost_bo */
@@ -113,6 +115,8 @@ struct panfrost_screen {
         /* Set of free transient BOs */
         BITSET_DECLARE(free_transient, MAX_TRANSIENT_SLABS);
 
+	pthread_mutex_t bo_cache_lock;
+
         /* The BO cache is a set of buckets with power-of-two sizes ranging
          * from 2^12 (4096, the page size) to 2^(12 + MAX_BO_CACHE_BUCKETS).
          * Each bucket is a linked list of free panfrost_bo objects. */