Re: [Mesa-dev] [PATCH 2/2] r600/compute: Mark several functions as static

2018-05-21 Thread Bruno Jimenez
On Mon, 2018-05-21 at 09:34 -0500, Aaron Watry wrote:
> On Mon, May 21, 2018 at 9:34 AM, Aaron Watry wrote:
> > On Mon, May 21, 2018 at 6:25 AM, Bruno Jimenez wrote:
> > > On Fri, 2018-05-18 at 22:52 -0400, Jan Vesely wrote:
> > > > On Fri, 2018-05-18 at 21:31 -0500, Aaron Watry wrote:
> > > > > On Fri, May 18, 2018 at 1:15 PM, Jan Vesely wrote:
> > > > > > On Thu, 2018-05-17 at 19:20 -0500, Aaron Watry wrote:
> > > > > > > They're not used anywhere else, so keep them private
> > > > > > > 
> > > > > > > Signed-off-by: Aaron Watry 
> > > > > > 
> > > > > > I'm not sure what the original purpose of the removed
> > > > > > functions
> > > > > > was. If
> > > > > > you recall please add it to the commit message.
> > > > > 
> > > > > I didn't really bother looking into it when I made this
> > > > > change (I
> > > > > didn't write this code in the first place).  This is
> > > > > something that
> > > > > I've had sitting in my local repo for a while, and just want
> > > > > to
> > > > > stop
> > > > > rebasing it.
> > > > > 
> > > > > It was originally found a while ago when I started looking
> > > > > into the
> > > > > thread-safety (or lack thereof) of the compute pool for r600.
> > > > > Figured
> > > > > there was no point trying to fix code that was unused.
> > > > 
> > > > OK, nvm then. I was just curious how we ended up with so much
> > > > dead
> > > > code.
> > > 
> > > Hiya,
> > > 
> > > I can comment on that (mainly because I was the one who almost
> > > re-wrote
> > > the memory pool).
> > > 
> > > The reason is that I did most of these as a GSoC project, which
> > > started with trying to fix a bug in the code that grew the pool when
> > > memory was needed and continued with improving the pool. These
> > > functions are mostly remnants of the original implementation. If I
> > > remember correctly, I sent a patch to remove them (and also a big
> > > patch adding a comment explaining how the pool worked and why it was
> > > like that), but it never got reviewed.
> > > 
> > > In fact, the implementation IS NOT complete as it lacks a path to
> > > unmap
> > > memory. I also had some patches for this, but as before, they
> > > never got
> > > reviewed...
> > 
> > I'm going to guess that this list contains some of the patches you
> > were talking about:
> > https://patchwork.freedesktop.org/project/mesa/list/?submitter=11660=*
> > 
> > Especially the superseded ones. If you want to point them out, I
> > can
> > try either rebasing or (finally) reviewing them as appropriate.
> 
> And by superseded, I really meant "Not Applicable"... Too early.
> 
> --Aaron

Hi,

I'm afraid those are not all the patches that I had... and, as I said, I
do not have access to the laptop where I believe they still are.

Still, if you are interested in looking into the pool, I can go through
the code and tell you how it works, why it is the way it is, and what is
missing.

Best regards,
Bruno

> 
> > 
> > --Aaron
> > 
> > > 
> > > I might still have the patches on my old laptop, but I don't have
> > > access to it, sorry. Either way, I more or less remember how
> > > everything worked, although I don't have a way of testing it because
> > > I no longer have a computer with an R600 card.
> > > 
> > > If nothing breaks when compiling, you can have my
> > > 
> > > Reviewed-by: Bruno Jiménez 
> > > 
> > > If you need anything regarding this code, just ask and I'll see if I
> > > can dig up from my memory how it worked (and why).
> > > 
> > > Best Regards,
> > > Bruno
> > > 
> > > > 
> > > > > 
> > > > > > Either way:
> > > > > > 
> > > > > > Reviewed-by: Jan Vesely 
> > > > > 
> > > > > For this one, or both?
> > > > 
> > > > for the series.
> > > > 
> > > > thanks,
> > > > Jan
> > > > 
> > > > > 
> > > > > --Aaron
> > > > > 
> > > > > > 
> > > > > > Jan
> > > > > > 
> > > > > > > ---
> > > > > > >  .../drivers/r600/compute_memory_pool.c| 35
> > > > > > > +++
> > > > > > >  .../drivers/r600/compute_memory_pool.h| 24 ---
> > > > > > > --
> > > > > > > 
> > > > > > >  2 files changed, 29 insertions(+), 30 deletions(-)
> > > > > > > 
> > > > > > > diff --git
> > > > > > > a/src/gallium/drivers/r600/compute_memory_pool.c
> > > > > > > b/src/gallium/drivers/r600/compute_memory_pool.c
> > > > > > > index d1ef25ae1e..981d944b8d 100644
> > > > > > > --- a/src/gallium/drivers/r600/compute_memory_pool.c
> > > > > > > +++ b/src/gallium/drivers/r600/compute_memory_pool.c
> > > > > > > @@ -43,6 +43,29 @@
> > > > > > >  #include 
> > > > > > > 
> > > > > > >  #define ITEM_ALIGNMENT 1024
> > > > > > > +
> > > > > > > +/* A few forward declarations of static functions */
> > > > > > > +static void compute_memory_shadow(struct
> > > > > > > compute_memory_pool*
> > > > > > > pool,
> > > > > > > + struct 

Re: [Mesa-dev] [PATCH 2/2] r600/compute: Mark several functions as static

2018-05-21 Thread Aaron Watry
On Mon, May 21, 2018 at 9:34 AM, Aaron Watry  wrote:
> On Mon, May 21, 2018 at 6:25 AM, Bruno Jimenez  wrote:
>> On Fri, 2018-05-18 at 22:52 -0400, Jan Vesely wrote:
>>> On Fri, 2018-05-18 at 21:31 -0500, Aaron Watry wrote:
>>> > On Fri, May 18, 2018 at 1:15 PM, Jan Vesely wrote:
>>> > > On Thu, 2018-05-17 at 19:20 -0500, Aaron Watry wrote:
>>> > > > They're not used anywhere else, so keep them private
>>> > > >
>>> > > > Signed-off-by: Aaron Watry 
>>> > >
>>> > > I'm not sure what the original purpose of the removed functions
>>> > > was. If
>>> > > you recall please add it to the commit message.
>>> >
>>> > I didn't really bother looking into it when I made this change (I
>>> > didn't write this code in the first place).  This is something that
>>> > I've had sitting in my local repo for a while, and just want to
>>> > stop
>>> > rebasing it.
>>> >
>>> > It was originally found a while ago when I started looking into the
>>> > thread-safety (or lack thereof) of the compute pool for r600.
>>> > Figured
>>> > there was no point trying to fix code that was unused.
>>>
>>> OK, nvm then. I was just curious how we ended up with so much dead
>>> code.
>>
>> Hiya,
>>
>> I can comment on that (mainly because I was the one who almost re-wrote
>> the memory pool).
>>
>> The reason is that I did most of these as a GSoC project, which started
>> with trying to fix a bug in the code that grew the pool when memory was
>> needed and continued with improving the pool. These functions are mostly
>> remnants of the original implementation. If I remember correctly, I
>> sent a patch to remove them (and also a big patch adding a comment
>> explaining how the pool worked and why it was like that), but it never
>> got reviewed.
>>
>> In fact, the implementation IS NOT complete as it lacks a path to unmap
>> memory. I also had some patches for this, but as before, they never got
>> reviewed...
>
> I'm going to guess that this list contains some of the patches you
> were talking about:
> https://patchwork.freedesktop.org/project/mesa/list/?submitter=11660=*
>
> Especially the superseded ones. If you want to point them out, I can
> try either rebasing or (finally) reviewing them as appropriate.

And by superseded, I really meant "Not Applicable"... Too early.

--Aaron

>
> --Aaron
>
>>
>> I might still have the patches on my old laptop, but I don't have
>> access to it, sorry. Either way, I more or less remember how everything
>> worked, although I don't have a way of testing it because I no longer
>> have a computer with an R600 card.
>>
>> If nothing breaks when compiling, you can have my
>>
>> Reviewed-by: Bruno Jiménez 
>>
>> If you need anything regarding this code, just ask and I'll see if I
>> can dig up from my memory how it worked (and why).
>>
>> Best Regards,
>> Bruno
>>
>>>
>>> >
>>> > > Either way:
>>> > >
>>> > > Reviewed-by: Jan Vesely 
>>> >
>>> > For this one, or both?
>>>
>>> for the series.
>>>
>>> thanks,
>>> Jan
>>>
>>> >
>>> > --Aaron
>>> >
>>> > >
>>> > > Jan
>>> > >
>>> > > > ---
>>> > > >  .../drivers/r600/compute_memory_pool.c| 35
>>> > > > +++
>>> > > >  .../drivers/r600/compute_memory_pool.h| 24 -
>>> > > > 
>>> > > >  2 files changed, 29 insertions(+), 30 deletions(-)
>>> > > >
>>> > > > diff --git a/src/gallium/drivers/r600/compute_memory_pool.c
>>> > > > b/src/gallium/drivers/r600/compute_memory_pool.c
>>> > > > index d1ef25ae1e..981d944b8d 100644
>>> > > > --- a/src/gallium/drivers/r600/compute_memory_pool.c
>>> > > > +++ b/src/gallium/drivers/r600/compute_memory_pool.c
>>> > > > @@ -43,6 +43,29 @@
>>> > > >  #include 
>>> > > >
>>> > > >  #define ITEM_ALIGNMENT 1024
>>> > > > +
>>> > > > +/* A few forward declarations of static functions */
>>> > > > +static void compute_memory_shadow(struct compute_memory_pool*
>>> > > > pool,
>>> > > > + struct pipe_context *pipe, int device_to_host);
>>> > > > +
>>> > > > +static void compute_memory_defrag(struct compute_memory_pool
>>> > > > *pool,
>>> > > > + struct pipe_resource *src, struct pipe_resource *dst,
>>> > > > + struct pipe_context *pipe);
>>> > > > +
>>> > > > +static int compute_memory_promote_item(struct
>>> > > > compute_memory_pool *pool,
>>> > > > + struct compute_memory_item *item, struct pipe_context
>>> > > > *pipe,
>>> > > > + int64_t allocated);
>>> > > > +
>>> > > > +static void compute_memory_move_item(struct
>>> > > > compute_memory_pool *pool,
>>> > > > + struct pipe_resource *src, struct pipe_resource *dst,
>>> > > > + struct compute_memory_item *item, uint64_t
>>> > > > new_start_in_dw,
>>> > > > + struct pipe_context *pipe);
>>> > > > +
>>> > > > +static void compute_memory_transfer(struct
>>> > > > compute_memory_pool* pool,
>>> > > > + struct pipe_context * pipe, int device_to_host,
>>> > > > + struct compute_memory_item* 

Re: [Mesa-dev] [PATCH 2/2] r600/compute: Mark several functions as static

2018-05-21 Thread Aaron Watry
On Mon, May 21, 2018 at 6:25 AM, Bruno Jimenez  wrote:
> On Fri, 2018-05-18 at 22:52 -0400, Jan Vesely wrote:
>> On Fri, 2018-05-18 at 21:31 -0500, Aaron Watry wrote:
>> > On Fri, May 18, 2018 at 1:15 PM, Jan Vesely wrote:
>> > > On Thu, 2018-05-17 at 19:20 -0500, Aaron Watry wrote:
>> > > > They're not used anywhere else, so keep them private
>> > > >
>> > > > Signed-off-by: Aaron Watry 
>> > >
>> > > I'm not sure what the original purpose of the removed functions
>> > > was. If
>> > > you recall please add it to the commit message.
>> >
>> > I didn't really bother looking into it when I made this change (I
>> > didn't write this code in the first place).  This is something that
>> > I've had sitting in my local repo for a while, and just want to
>> > stop
>> > rebasing it.
>> >
>> > It was originally found a while ago when I started looking into the
>> > thread-safety (or lack thereof) of the compute pool for r600.
>> > Figured
>> > there was no point trying to fix code that was unused.
>>
>> OK, nvm then. I was just curious how we ended up with so much dead
>> code.
>
> Hiya,
>
> I can comment on that (mainly because I was the one who almost re-wrote
> the memory pool).
>
> The reason is that I did most of these as a GSoC project, which started
> with trying to fix a bug in the code that grew the pool when memory was
> needed and continued with improving the pool. These functions are mostly
> remnants of the original implementation. If I remember correctly, I
> sent a patch to remove them (and also a big patch adding a comment
> explaining how the pool worked and why it was like that), but it never
> got reviewed.
>
> In fact, the implementation IS NOT complete as it lacks a path to unmap
> memory. I also had some patches for this, but as before, they never got
> reviewed...

I'm going to guess that this list contains some of the patches you
were talking about:
https://patchwork.freedesktop.org/project/mesa/list/?submitter=11660=*

Especially the superseded ones. If you want to point them out, I can
try either rebasing or (finally) reviewing them as appropriate.

--Aaron

>
> I might still have the patches on my old laptop, but I don't have
> access to it, sorry. Either way, I more or less remember how everything
> worked, although I don't have a way of testing it because I no longer
> have a computer with an R600 card.
>
> If nothing breaks when compiling, you can have my
>
> Reviewed-by: Bruno Jiménez 
>
> If you need anything regarding this code, just ask and I'll see if I
> can dig up from my memory how it worked (and why).
>
> Best Regards,
> Bruno
>
>>
>> >
>> > > Either way:
>> > >
>> > > Reviewed-by: Jan Vesely 
>> >
>> > For this one, or both?
>>
>> for the series.
>>
>> thanks,
>> Jan
>>
>> >
>> > --Aaron
>> >
>> > >
>> > > Jan
>> > >
>> > > > ---
>> > > >  .../drivers/r600/compute_memory_pool.c| 35
>> > > > +++
>> > > >  .../drivers/r600/compute_memory_pool.h| 24 -
>> > > > 
>> > > >  2 files changed, 29 insertions(+), 30 deletions(-)
>> > > >
>> > > > diff --git a/src/gallium/drivers/r600/compute_memory_pool.c
>> > > > b/src/gallium/drivers/r600/compute_memory_pool.c
>> > > > index d1ef25ae1e..981d944b8d 100644
>> > > > --- a/src/gallium/drivers/r600/compute_memory_pool.c
>> > > > +++ b/src/gallium/drivers/r600/compute_memory_pool.c
>> > > > @@ -43,6 +43,29 @@
>> > > >  #include 
>> > > >
>> > > >  #define ITEM_ALIGNMENT 1024
>> > > > +
>> > > > +/* A few forward declarations of static functions */
>> > > > +static void compute_memory_shadow(struct compute_memory_pool*
>> > > > pool,
>> > > > + struct pipe_context *pipe, int device_to_host);
>> > > > +
>> > > > +static void compute_memory_defrag(struct compute_memory_pool
>> > > > *pool,
>> > > > + struct pipe_resource *src, struct pipe_resource *dst,
>> > > > + struct pipe_context *pipe);
>> > > > +
>> > > > +static int compute_memory_promote_item(struct
>> > > > compute_memory_pool *pool,
>> > > > + struct compute_memory_item *item, struct pipe_context
>> > > > *pipe,
>> > > > + int64_t allocated);
>> > > > +
>> > > > +static void compute_memory_move_item(struct
>> > > > compute_memory_pool *pool,
>> > > > + struct pipe_resource *src, struct pipe_resource *dst,
>> > > > + struct compute_memory_item *item, uint64_t
>> > > > new_start_in_dw,
>> > > > + struct pipe_context *pipe);
>> > > > +
>> > > > +static void compute_memory_transfer(struct
>> > > > compute_memory_pool* pool,
>> > > > + struct pipe_context * pipe, int device_to_host,
>> > > > + struct compute_memory_item* chunk, void* data,
>> > > > + int offset_in_chunk, int size);
>> > > > +
>> > > >  /**
>> > > >   * Creates a new pool.
>> > > >   */
>> > > > @@ -106,7 +129,7 @@ void compute_memory_pool_delete(struct
>> > > > compute_memory_pool* pool)
>> > > >   * \returns -1 if it fails, 0 

Re: [Mesa-dev] [PATCH 2/2] r600/compute: Mark several functions as static

2018-05-21 Thread Bruno Jimenez
On Fri, 2018-05-18 at 22:52 -0400, Jan Vesely wrote:
> On Fri, 2018-05-18 at 21:31 -0500, Aaron Watry wrote:
> > On Fri, May 18, 2018 at 1:15 PM, Jan Vesely wrote:
> > > On Thu, 2018-05-17 at 19:20 -0500, Aaron Watry wrote:
> > > > They're not used anywhere else, so keep them private
> > > > 
> > > > Signed-off-by: Aaron Watry 
> > > 
> > > I'm not sure what the original purpose of the removed functions
> > > was. If
> > > you recall please add it to the commit message.
> > 
> > I didn't really bother looking into it when I made this change (I
> > didn't write this code in the first place).  This is something that
> > I've had sitting in my local repo for a while, and just want to
> > stop
> > rebasing it.
> > 
> > It was originally found a while ago when I started looking into the
> > thread-safety (or lack thereof) of the compute pool for r600.
> > Figured
> > there was no point trying to fix code that was unused.
> 
> OK, nvm then. I was just curious how we ended up with so much dead
> code.

Hiya,

I can comment on that (mainly because I was the one who almost re-wrote 
the memory pool).

The reason is that I did most of these as a GSoC project, which started
with trying to fix a bug in the code that grew the pool when memory was
needed and continued with improving the pool. These functions are mostly
remnants of the original implementation. If I remember correctly, I
sent a patch to remove them (and also a big patch adding a comment
explaining how the pool worked and why it was like that), but it never
got reviewed.

In fact, the implementation IS NOT complete as it lacks a path to unmap
memory. I also had some patches for this, but as before, they never got
reviewed...
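
To give a rough idea of the growth path I am talking about: when an
allocation does not fit, the pool is grown and the resident items are
compacted in the same pass. This is only a sketch from memory with
simplified, made-up names (struct pool, pool_grow_defrag), not the
actual driver code:

/* Rough sketch of the idea, not the r600 implementation. */
#include <stdint.h>

struct item {
	int64_t start;        /* offset of the item inside the pool, in dwords */
	int64_t size_in_dw;   /* item size in dwords */
	struct item *next;
};

struct pool {
	int64_t size_in_dw;   /* current pool size in dwords */
	struct item *item_list;
};

/* Grow the pool and, in the same pass, slide every resident item down
 * so the free space becomes one contiguous block at the end (the
 * "grow + defrag" step). */
static void pool_grow_defrag(struct pool *pool, int64_t new_size_in_dw)
{
	int64_t offset = 0;

	if (pool->size_in_dw < new_size_in_dw)
		pool->size_in_dw = new_size_in_dw;   /* real code reallocates the buffer */

	for (struct item *it = pool->item_list; it; it = it->next) {
		it->start = offset;                  /* real code also copies the data */
		offset += it->size_in_dw;
	}
}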

I might still have the patches on my old laptop, but I don't have
access to it, sorry. Either way, I more or less remember how everything
worked, although I don't have a way of testing it because I no longer
have a computer with an R600 card.

If nothing breaks when compiling, you can have my

Reviewed-by: Bruno Jiménez 

If you need anything regarding this code, just ask and I'll see if I
can dig up from my memory how it worked (and why).

Best Regards,
Bruno

> 
> > 
> > > Either way:
> > > 
> > > Reviewed-by: Jan Vesely 
> > 
> > For this one, or both?
> 
> for the series.
> 
> thanks,
> Jan
> 
> > 
> > --Aaron
> > 
> > > 
> > > Jan
> > > 
> > > > ---
> > > >  .../drivers/r600/compute_memory_pool.c| 35
> > > > +++
> > > >  .../drivers/r600/compute_memory_pool.h| 24 -
> > > > 
> > > >  2 files changed, 29 insertions(+), 30 deletions(-)
> > > > 
> > > > diff --git a/src/gallium/drivers/r600/compute_memory_pool.c
> > > > b/src/gallium/drivers/r600/compute_memory_pool.c
> > > > index d1ef25ae1e..981d944b8d 100644
> > > > --- a/src/gallium/drivers/r600/compute_memory_pool.c
> > > > +++ b/src/gallium/drivers/r600/compute_memory_pool.c
> > > > @@ -43,6 +43,29 @@
> > > >  #include 
> > > > 
> > > >  #define ITEM_ALIGNMENT 1024
> > > > +
> > > > +/* A few forward declarations of static functions */
> > > > +static void compute_memory_shadow(struct compute_memory_pool*
> > > > pool,
> > > > + struct pipe_context *pipe, int device_to_host);
> > > > +
> > > > +static void compute_memory_defrag(struct compute_memory_pool
> > > > *pool,
> > > > + struct pipe_resource *src, struct pipe_resource *dst,
> > > > + struct pipe_context *pipe);
> > > > +
> > > > +static int compute_memory_promote_item(struct
> > > > compute_memory_pool *pool,
> > > > + struct compute_memory_item *item, struct pipe_context
> > > > *pipe,
> > > > + int64_t allocated);
> > > > +
> > > > +static void compute_memory_move_item(struct
> > > > compute_memory_pool *pool,
> > > > + struct pipe_resource *src, struct pipe_resource *dst,
> > > > + struct compute_memory_item *item, uint64_t
> > > > new_start_in_dw,
> > > > + struct pipe_context *pipe);
> > > > +
> > > > +static void compute_memory_transfer(struct
> > > > compute_memory_pool* pool,
> > > > + struct pipe_context * pipe, int device_to_host,
> > > > + struct compute_memory_item* chunk, void* data,
> > > > + int offset_in_chunk, int size);
> > > > +
> > > >  /**
> > > >   * Creates a new pool.
> > > >   */
> > > > @@ -106,7 +129,7 @@ void compute_memory_pool_delete(struct
> > > > compute_memory_pool* pool)
> > > >   * \returns -1 if it fails, 0 otherwise
> > > >   * \see compute_memory_finalize_pending
> > > >   */
> > > > -int compute_memory_grow_defrag_pool(struct compute_memory_pool
> > > > *pool,
> > > > +static int compute_memory_grow_defrag_pool(struct
> > > > compute_memory_pool *pool,
> > > >   struct pipe_context *pipe, int new_size_in_dw)
> > > >  {
> > > >   new_size_in_dw = align(new_size_in_dw, ITEM_ALIGNMENT);
> > > > @@ -168,7 +191,7 @@ int compute_memory_grow_defrag_pool(struct
> > > > compute_memory_pool *pool,
> > > >   * \param 

Re: [Mesa-dev] [PATCH 2/2] r600/compute: Mark several functions as static

2018-05-18 Thread Jan Vesely
On Fri, 2018-05-18 at 21:31 -0500, Aaron Watry wrote:
> On Fri, May 18, 2018 at 1:15 PM, Jan Vesely  wrote:
> > On Thu, 2018-05-17 at 19:20 -0500, Aaron Watry wrote:
> > > They're not used anywhere else, so keep them private
> > > 
> > > Signed-off-by: Aaron Watry 
> > 
> > I'm not sure what the original purpose of the removed functions was. If
> > you recall please add it to the commit message.
> 
> I didn't really bother looking into it when I made this change (I
> didn't write this code in the first place).  This is something that
> I've had sitting in my local repo for a while, and just want to stop
> rebasing it.
> 
> It was originally found a while ago when I started looking into the
> thread-safety (or lack thereof) of the compute pool for r600. Figured
> there was no point trying to fix code that was unused.

OK, nvm then. I was just curious how we ended up with so much dead
code.

> 
> > Either way:
> > 
> > Reviewed-by: Jan Vesely 
> 
> For this one, or both?

for the series.

thanks,
Jan

> 
> --Aaron
> 
> > 
> > Jan
> > 
> > > ---
> > >  .../drivers/r600/compute_memory_pool.c| 35 +++
> > >  .../drivers/r600/compute_memory_pool.h| 24 -
> > >  2 files changed, 29 insertions(+), 30 deletions(-)
> > > 
> > > diff --git a/src/gallium/drivers/r600/compute_memory_pool.c 
> > > b/src/gallium/drivers/r600/compute_memory_pool.c
> > > index d1ef25ae1e..981d944b8d 100644
> > > --- a/src/gallium/drivers/r600/compute_memory_pool.c
> > > +++ b/src/gallium/drivers/r600/compute_memory_pool.c
> > > @@ -43,6 +43,29 @@
> > >  #include 
> > > 
> > >  #define ITEM_ALIGNMENT 1024
> > > +
> > > +/* A few forward declarations of static functions */
> > > +static void compute_memory_shadow(struct compute_memory_pool* pool,
> > > + struct pipe_context *pipe, int device_to_host);
> > > +
> > > +static void compute_memory_defrag(struct compute_memory_pool *pool,
> > > + struct pipe_resource *src, struct pipe_resource *dst,
> > > + struct pipe_context *pipe);
> > > +
> > > +static int compute_memory_promote_item(struct compute_memory_pool *pool,
> > > + struct compute_memory_item *item, struct pipe_context *pipe,
> > > + int64_t allocated);
> > > +
> > > +static void compute_memory_move_item(struct compute_memory_pool *pool,
> > > + struct pipe_resource *src, struct pipe_resource *dst,
> > > + struct compute_memory_item *item, uint64_t new_start_in_dw,
> > > + struct pipe_context *pipe);
> > > +
> > > +static void compute_memory_transfer(struct compute_memory_pool* pool,
> > > + struct pipe_context * pipe, int device_to_host,
> > > + struct compute_memory_item* chunk, void* data,
> > > + int offset_in_chunk, int size);
> > > +
> > >  /**
> > >   * Creates a new pool.
> > >   */
> > > @@ -106,7 +129,7 @@ void compute_memory_pool_delete(struct 
> > > compute_memory_pool* pool)
> > >   * \returns -1 if it fails, 0 otherwise
> > >   * \see compute_memory_finalize_pending
> > >   */
> > > -int compute_memory_grow_defrag_pool(struct compute_memory_pool *pool,
> > > +static int compute_memory_grow_defrag_pool(struct compute_memory_pool 
> > > *pool,
> > >   struct pipe_context *pipe, int new_size_in_dw)
> > >  {
> > >   new_size_in_dw = align(new_size_in_dw, ITEM_ALIGNMENT);
> > > @@ -168,7 +191,7 @@ int compute_memory_grow_defrag_pool(struct 
> > > compute_memory_pool *pool,
> > >   * \param device_to_host 1 for device->host, 0 for host->device
> > >   * \see compute_memory_grow_defrag_pool
> > >   */
> > > -void compute_memory_shadow(struct compute_memory_pool* pool,
> > > +static void compute_memory_shadow(struct compute_memory_pool* pool,
> > >   struct pipe_context * pipe, int device_to_host)
> > >  {
> > >   struct compute_memory_item chunk;
> > > @@ -262,7 +285,7 @@ int compute_memory_finalize_pending(struct 
> > > compute_memory_pool* pool,
> > >   * \param dstThe destination resource
> > >   * \see compute_memory_grow_defrag_pool and 
> > > compute_memory_finalize_pending
> > >   */
> > > -void compute_memory_defrag(struct compute_memory_pool *pool,
> > > +static void compute_memory_defrag(struct compute_memory_pool *pool,
> > >   struct pipe_resource *src, struct pipe_resource *dst,
> > >   struct pipe_context *pipe)
> > >  {
> > > @@ -292,7 +315,7 @@ void compute_memory_defrag(struct compute_memory_pool 
> > > *pool,
> > >   * \return -1 if it fails, 0 otherwise
> > >   * \see compute_memory_finalize_pending
> > >   */
> > > -int compute_memory_promote_item(struct compute_memory_pool *pool,
> > > +static int compute_memory_promote_item(struct compute_memory_pool *pool,
> > >   struct compute_memory_item *item, struct pipe_context *pipe,
> > >   int64_t start_in_dw)
> > >  {
> > > @@ -397,7 +420,7 @@ void compute_memory_demote_item(struct 
> > > compute_memory_pool *pool,
> > >   * \param new_start_in_dwThe 

Re: [Mesa-dev] [PATCH 2/2] r600/compute: Mark several functions as static

2018-05-18 Thread Aaron Watry
On Fri, May 18, 2018 at 1:15 PM, Jan Vesely  wrote:
> On Thu, 2018-05-17 at 19:20 -0500, Aaron Watry wrote:
>> They're not used anywhere else, so keep them private
>>
>> Signed-off-by: Aaron Watry 
>
> I'm not sure what the original purpose of the removed functions was. If
> you recall please add it to the commit message.

I didn't really bother looking into it when I made this change (I
didn't write this code in the first place).  This is something that
I've had sitting in my local repo for a while, and just want to stop
rebasing it.

It was originally found a while ago when I started looking into the
thread-safety (or lack thereof) of the compute pool for r600. Figured
there was no point trying to fix code that was unused.
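
For context, by thread-safety I mean serializing the pool's entry
points so two contexts can't grow or defrag it at the same time. Very
roughly, something like the following, with hypothetical names and C11
threads rather than the actual gallium/r600 code:

/* Hypothetical illustration only -- not the r600 code. */
#include <stdint.h>
#include <threads.h>

struct guarded_pool {
	mtx_t lock;
	/* ... actual pool state would live here ... */
};

/* Every public entry point takes the lock before touching the item
 * lists, so concurrent callers can't race on grow/defrag. */
static int guarded_pool_alloc(struct guarded_pool *pool, int64_t size_in_dw)
{
	(void)size_in_dw;
	mtx_lock(&pool->lock);
	/* ... find space, possibly grow/defrag the pool ... */
	mtx_unlock(&pool->lock);
	return 0;
}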

> Either way:
>
> Reviewed-by: Jan Vesely 

For this one, or both?

--Aaron

>
> Jan
>
>> ---
>>  .../drivers/r600/compute_memory_pool.c| 35 +++
>>  .../drivers/r600/compute_memory_pool.h| 24 -
>>  2 files changed, 29 insertions(+), 30 deletions(-)
>>
>> diff --git a/src/gallium/drivers/r600/compute_memory_pool.c 
>> b/src/gallium/drivers/r600/compute_memory_pool.c
>> index d1ef25ae1e..981d944b8d 100644
>> --- a/src/gallium/drivers/r600/compute_memory_pool.c
>> +++ b/src/gallium/drivers/r600/compute_memory_pool.c
>> @@ -43,6 +43,29 @@
>>  #include 
>>
>>  #define ITEM_ALIGNMENT 1024
>> +
>> +/* A few forward declarations of static functions */
>> +static void compute_memory_shadow(struct compute_memory_pool* pool,
>> + struct pipe_context *pipe, int device_to_host);
>> +
>> +static void compute_memory_defrag(struct compute_memory_pool *pool,
>> + struct pipe_resource *src, struct pipe_resource *dst,
>> + struct pipe_context *pipe);
>> +
>> +static int compute_memory_promote_item(struct compute_memory_pool *pool,
>> + struct compute_memory_item *item, struct pipe_context *pipe,
>> + int64_t allocated);
>> +
>> +static void compute_memory_move_item(struct compute_memory_pool *pool,
>> + struct pipe_resource *src, struct pipe_resource *dst,
>> + struct compute_memory_item *item, uint64_t new_start_in_dw,
>> + struct pipe_context *pipe);
>> +
>> +static void compute_memory_transfer(struct compute_memory_pool* pool,
>> + struct pipe_context * pipe, int device_to_host,
>> + struct compute_memory_item* chunk, void* data,
>> + int offset_in_chunk, int size);
>> +
>>  /**
>>   * Creates a new pool.
>>   */
>> @@ -106,7 +129,7 @@ void compute_memory_pool_delete(struct 
>> compute_memory_pool* pool)
>>   * \returns -1 if it fails, 0 otherwise
>>   * \see compute_memory_finalize_pending
>>   */
>> -int compute_memory_grow_defrag_pool(struct compute_memory_pool *pool,
>> +static int compute_memory_grow_defrag_pool(struct compute_memory_pool *pool,
>>   struct pipe_context *pipe, int new_size_in_dw)
>>  {
>>   new_size_in_dw = align(new_size_in_dw, ITEM_ALIGNMENT);
>> @@ -168,7 +191,7 @@ int compute_memory_grow_defrag_pool(struct 
>> compute_memory_pool *pool,
>>   * \param device_to_host 1 for device->host, 0 for host->device
>>   * \see compute_memory_grow_defrag_pool
>>   */
>> -void compute_memory_shadow(struct compute_memory_pool* pool,
>> +static void compute_memory_shadow(struct compute_memory_pool* pool,
>>   struct pipe_context * pipe, int device_to_host)
>>  {
>>   struct compute_memory_item chunk;
>> @@ -262,7 +285,7 @@ int compute_memory_finalize_pending(struct 
>> compute_memory_pool* pool,
>>   * \param dstThe destination resource
>>   * \see compute_memory_grow_defrag_pool and compute_memory_finalize_pending
>>   */
>> -void compute_memory_defrag(struct compute_memory_pool *pool,
>> +static void compute_memory_defrag(struct compute_memory_pool *pool,
>>   struct pipe_resource *src, struct pipe_resource *dst,
>>   struct pipe_context *pipe)
>>  {
>> @@ -292,7 +315,7 @@ void compute_memory_defrag(struct compute_memory_pool 
>> *pool,
>>   * \return -1 if it fails, 0 otherwise
>>   * \see compute_memory_finalize_pending
>>   */
>> -int compute_memory_promote_item(struct compute_memory_pool *pool,
>> +static int compute_memory_promote_item(struct compute_memory_pool *pool,
>>   struct compute_memory_item *item, struct pipe_context *pipe,
>>   int64_t start_in_dw)
>>  {
>> @@ -397,7 +420,7 @@ void compute_memory_demote_item(struct 
>> compute_memory_pool *pool,
>>   * \param new_start_in_dwThe new position of the item in \a item_list
>>   * \see compute_memory_defrag
>>   */
>> -void compute_memory_move_item(struct compute_memory_pool *pool,
>> +static void compute_memory_move_item(struct compute_memory_pool *pool,
>>   struct pipe_resource *src, struct pipe_resource *dst,
>>   struct compute_memory_item *item, uint64_t new_start_in_dw,
>>   struct pipe_context *pipe)
>> @@ -569,7 +592,7 @@ struct compute_memory_item* compute_memory_alloc(
>>   * \param 

Re: [Mesa-dev] [PATCH 2/2] r600/compute: Mark several functions as static

2018-05-18 Thread Jan Vesely
On Thu, 2018-05-17 at 19:20 -0500, Aaron Watry wrote:
> They're not used anywhere else, so keep them private
> 
> Signed-off-by: Aaron Watry 

I'm not sure what the original purpose of the removed functions was. If
you recall please add it to the commit message.
Either way:

Reviewed-by: Jan Vesely 

Jan
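
For the archives: the shape of the change in the quoted patch below is
the usual C pattern for internal linkage, i.e. forward-declare the
helpers near the top of the .c file, mark the definitions static, and
drop the prototypes from the header. A minimal illustration with
made-up names (pool_resize, helper_grow, helper_report), not the real
compute_memory_* helpers:

/* illustration.c -- the linkage pattern only, names are made up */
#include <stdio.h>

/* Forward declarations, so the definitions below can stay in their
 * existing order. */
static int  helper_grow(int current, int needed);
static void helper_report(int size);

/* The only function other files can still call; its prototype stays in
 * the header. */
int pool_resize(int current, int needed)
{
	int size = helper_grow(current, needed);

	helper_report(size);
	return size;
}

/* 'static' gives these internal linkage: they disappear from the
 * external symbol table, and their prototypes can be removed from the
 * header. */
static int helper_grow(int current, int needed)
{
	return current < needed ? needed : current;
}

static void helper_report(int size)
{
	printf("pool size is now %d\n", size);
}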

> ---
>  .../drivers/r600/compute_memory_pool.c| 35 +++
>  .../drivers/r600/compute_memory_pool.h| 24 -
>  2 files changed, 29 insertions(+), 30 deletions(-)
> 
> diff --git a/src/gallium/drivers/r600/compute_memory_pool.c 
> b/src/gallium/drivers/r600/compute_memory_pool.c
> index d1ef25ae1e..981d944b8d 100644
> --- a/src/gallium/drivers/r600/compute_memory_pool.c
> +++ b/src/gallium/drivers/r600/compute_memory_pool.c
> @@ -43,6 +43,29 @@
>  #include 
>  
>  #define ITEM_ALIGNMENT 1024
> +
> +/* A few forward declarations of static functions */
> +static void compute_memory_shadow(struct compute_memory_pool* pool,
> + struct pipe_context *pipe, int device_to_host);
> +
> +static void compute_memory_defrag(struct compute_memory_pool *pool,
> + struct pipe_resource *src, struct pipe_resource *dst,
> + struct pipe_context *pipe);
> +
> +static int compute_memory_promote_item(struct compute_memory_pool *pool,
> + struct compute_memory_item *item, struct pipe_context *pipe,
> + int64_t allocated);
> +
> +static void compute_memory_move_item(struct compute_memory_pool *pool,
> + struct pipe_resource *src, struct pipe_resource *dst,
> + struct compute_memory_item *item, uint64_t new_start_in_dw,
> + struct pipe_context *pipe);
> +
> +static void compute_memory_transfer(struct compute_memory_pool* pool,
> + struct pipe_context * pipe, int device_to_host,
> + struct compute_memory_item* chunk, void* data,
> + int offset_in_chunk, int size);
> +
>  /**
>   * Creates a new pool.
>   */
> @@ -106,7 +129,7 @@ void compute_memory_pool_delete(struct 
> compute_memory_pool* pool)
>   * \returns -1 if it fails, 0 otherwise
>   * \see compute_memory_finalize_pending
>   */
> -int compute_memory_grow_defrag_pool(struct compute_memory_pool *pool,
> +static int compute_memory_grow_defrag_pool(struct compute_memory_pool *pool,
>   struct pipe_context *pipe, int new_size_in_dw)
>  {
>   new_size_in_dw = align(new_size_in_dw, ITEM_ALIGNMENT);
> @@ -168,7 +191,7 @@ int compute_memory_grow_defrag_pool(struct 
> compute_memory_pool *pool,
>   * \param device_to_host 1 for device->host, 0 for host->device
>   * \see compute_memory_grow_defrag_pool
>   */
> -void compute_memory_shadow(struct compute_memory_pool* pool,
> +static void compute_memory_shadow(struct compute_memory_pool* pool,
>   struct pipe_context * pipe, int device_to_host)
>  {
>   struct compute_memory_item chunk;
> @@ -262,7 +285,7 @@ int compute_memory_finalize_pending(struct 
> compute_memory_pool* pool,
>   * \param dstThe destination resource
>   * \see compute_memory_grow_defrag_pool and compute_memory_finalize_pending
>   */
> -void compute_memory_defrag(struct compute_memory_pool *pool,
> +static void compute_memory_defrag(struct compute_memory_pool *pool,
>   struct pipe_resource *src, struct pipe_resource *dst,
>   struct pipe_context *pipe)
>  {
> @@ -292,7 +315,7 @@ void compute_memory_defrag(struct compute_memory_pool 
> *pool,
>   * \return -1 if it fails, 0 otherwise
>   * \see compute_memory_finalize_pending
>   */
> -int compute_memory_promote_item(struct compute_memory_pool *pool,
> +static int compute_memory_promote_item(struct compute_memory_pool *pool,
>   struct compute_memory_item *item, struct pipe_context *pipe,
>   int64_t start_in_dw)
>  {
> @@ -397,7 +420,7 @@ void compute_memory_demote_item(struct 
> compute_memory_pool *pool,
>   * \param new_start_in_dwThe new position of the item in \a item_list
>   * \see compute_memory_defrag
>   */
> -void compute_memory_move_item(struct compute_memory_pool *pool,
> +static void compute_memory_move_item(struct compute_memory_pool *pool,
>   struct pipe_resource *src, struct pipe_resource *dst,
>   struct compute_memory_item *item, uint64_t new_start_in_dw,
>   struct pipe_context *pipe)
> @@ -569,7 +592,7 @@ struct compute_memory_item* compute_memory_alloc(
>   * \param device_to_host 1 for device->host, 0 for host->device.
>   * \see compute_memory_shadow
>   */
> -void compute_memory_transfer(
> +static void compute_memory_transfer(
>   struct compute_memory_pool* pool,
>   struct pipe_context * pipe,
>   int device_to_host,
> diff --git a/src/gallium/drivers/r600/compute_memory_pool.h 
> b/src/gallium/drivers/r600/compute_memory_pool.h
> index 3a17c5176b..2064e56352 100644
> --- a/src/gallium/drivers/r600/compute_memory_pool.h
> +++ b/src/gallium/drivers/r600/compute_memory_pool.h
> @@ -86,39 +86,15 @@ struct compute_memory_pool* 
> compute_memory_pool_new(struct r600_screen *rscreen)

[Mesa-dev] [PATCH 2/2] r600/compute: Mark several functions as static

2018-05-17 Thread Aaron Watry
They're not used anywhere else, so keep them private

Signed-off-by: Aaron Watry 
---
 .../drivers/r600/compute_memory_pool.c| 35 +++
 .../drivers/r600/compute_memory_pool.h| 24 -
 2 files changed, 29 insertions(+), 30 deletions(-)

diff --git a/src/gallium/drivers/r600/compute_memory_pool.c 
b/src/gallium/drivers/r600/compute_memory_pool.c
index d1ef25ae1e..981d944b8d 100644
--- a/src/gallium/drivers/r600/compute_memory_pool.c
+++ b/src/gallium/drivers/r600/compute_memory_pool.c
@@ -43,6 +43,29 @@
 #include 
 
 #define ITEM_ALIGNMENT 1024
+
+/* A few forward declarations of static functions */
+static void compute_memory_shadow(struct compute_memory_pool* pool,
+   struct pipe_context *pipe, int device_to_host);
+
+static void compute_memory_defrag(struct compute_memory_pool *pool,
+   struct pipe_resource *src, struct pipe_resource *dst,
+   struct pipe_context *pipe);
+
+static int compute_memory_promote_item(struct compute_memory_pool *pool,
+   struct compute_memory_item *item, struct pipe_context *pipe,
+   int64_t allocated);
+
+static void compute_memory_move_item(struct compute_memory_pool *pool,
+   struct pipe_resource *src, struct pipe_resource *dst,
+   struct compute_memory_item *item, uint64_t new_start_in_dw,
+   struct pipe_context *pipe);
+
+static void compute_memory_transfer(struct compute_memory_pool* pool,
+   struct pipe_context * pipe, int device_to_host,
+   struct compute_memory_item* chunk, void* data,
+   int offset_in_chunk, int size);
+
 /**
  * Creates a new pool.
  */
@@ -106,7 +129,7 @@ void compute_memory_pool_delete(struct compute_memory_pool* 
pool)
  * \returns -1 if it fails, 0 otherwise
  * \see compute_memory_finalize_pending
  */
-int compute_memory_grow_defrag_pool(struct compute_memory_pool *pool,
+static int compute_memory_grow_defrag_pool(struct compute_memory_pool *pool,
struct pipe_context *pipe, int new_size_in_dw)
 {
new_size_in_dw = align(new_size_in_dw, ITEM_ALIGNMENT);
@@ -168,7 +191,7 @@ int compute_memory_grow_defrag_pool(struct 
compute_memory_pool *pool,
  * \param device_to_host 1 for device->host, 0 for host->device
  * \see compute_memory_grow_defrag_pool
  */
-void compute_memory_shadow(struct compute_memory_pool* pool,
+static void compute_memory_shadow(struct compute_memory_pool* pool,
struct pipe_context * pipe, int device_to_host)
 {
struct compute_memory_item chunk;
@@ -262,7 +285,7 @@ int compute_memory_finalize_pending(struct 
compute_memory_pool* pool,
  * \param dst  The destination resource
  * \see compute_memory_grow_defrag_pool and compute_memory_finalize_pending
  */
-void compute_memory_defrag(struct compute_memory_pool *pool,
+static void compute_memory_defrag(struct compute_memory_pool *pool,
struct pipe_resource *src, struct pipe_resource *dst,
struct pipe_context *pipe)
 {
@@ -292,7 +315,7 @@ void compute_memory_defrag(struct compute_memory_pool *pool,
  * \return -1 if it fails, 0 otherwise
  * \see compute_memory_finalize_pending
  */
-int compute_memory_promote_item(struct compute_memory_pool *pool,
+static int compute_memory_promote_item(struct compute_memory_pool *pool,
struct compute_memory_item *item, struct pipe_context *pipe,
int64_t start_in_dw)
 {
@@ -397,7 +420,7 @@ void compute_memory_demote_item(struct compute_memory_pool 
*pool,
  * \param new_start_in_dw  The new position of the item in \a item_list
  * \see compute_memory_defrag
  */
-void compute_memory_move_item(struct compute_memory_pool *pool,
+static void compute_memory_move_item(struct compute_memory_pool *pool,
struct pipe_resource *src, struct pipe_resource *dst,
struct compute_memory_item *item, uint64_t new_start_in_dw,
struct pipe_context *pipe)
@@ -569,7 +592,7 @@ struct compute_memory_item* compute_memory_alloc(
  * \param device_to_host 1 for device->host, 0 for host->device.
  * \see compute_memory_shadow
  */
-void compute_memory_transfer(
+static void compute_memory_transfer(
struct compute_memory_pool* pool,
struct pipe_context * pipe,
int device_to_host,
diff --git a/src/gallium/drivers/r600/compute_memory_pool.h 
b/src/gallium/drivers/r600/compute_memory_pool.h
index 3a17c5176b..2064e56352 100644
--- a/src/gallium/drivers/r600/compute_memory_pool.h
+++ b/src/gallium/drivers/r600/compute_memory_pool.h
@@ -86,39 +86,15 @@ struct compute_memory_pool* compute_memory_pool_new(struct 
r600_screen *rscreen)
 
 void compute_memory_pool_delete(struct compute_memory_pool* pool);
 
-int compute_memory_grow_defrag_pool(struct compute_memory_pool* pool,
-   struct pipe_context *pipe, int new_size_in_dw);
-
-void compute_memory_shadow(struct compute_memory_pool* pool,
-   struct pipe_context *pipe, int device_to_host);
-
 int compute_memory_finalize_pending(struct compute_memory_pool* pool,
struct