Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-16 Thread Marek Olšák
FYI, I've pushed the patch because it helps simplify the amdgpu winsys
code, and I already have code that depends on it that I don't want to rewrite.

Marek
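
(For reference, a minimal sketch of how the new raw entry points might be
used by a winsys. amdgpu_bo_list_create_raw() follows the signature quoted
later in this thread; the signature assumed for amdgpu_bo_list_destroy_raw()
and the include paths are assumptions, not taken from the patch.)

#include <stdint.h>
#include <amdgpu.h>      /* libdrm_amdgpu */
#include <amdgpu_drm.h>  /* struct drm_amdgpu_bo_list_entry (kernel uapi) */

static int submit_with_raw_list(amdgpu_device_handle dev,
                                struct drm_amdgpu_bo_list_entry *entries,
                                uint32_t num_entries)
{
        uint32_t bo_list;  /* raw KMS handle of the BO list */
        int r;

        /* One ioctl, no per-entry malloc or copy inside libdrm. */
        r = amdgpu_bo_list_create_raw(dev, num_entries, entries, &bo_list);
        if (r)
                return r;

        /* ... pass bo_list to the CS submission here, e.g. the new
         * amdgpu_cs_submit_raw2 (its exact signature is not shown in
         * this thread) ... */

        return amdgpu_bo_list_destroy_raw(dev, bo_list);
}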

On Wed, Jan 16, 2019 at 12:39 PM Marek Olšák  wrote:

> On Wed, Jan 16, 2019 at 9:43 AM Christian König <
> ckoenig.leichtzumer...@gmail.com> wrote:
>
>> Am 16.01.19 um 15:39 schrieb Marek Olšák:
>>
>>
>>
On Wed, Jan 16, 2019, 9:34 AM Koenig, Christian wrote:
>>
>>> Am 16.01.19 um 15:31 schrieb Marek Olšák:
>>>
>>>
>>>
>>> On Wed, Jan 16, 2019, 7:55 AM Christian König <
>>> ckoenig.leichtzumer...@gmail.com wrote:
>>>
 Well if you ask me we should have the following interface for
 negotiating memory management with the kernel:

 1. We have per process BOs which can't be shared between processes.

 Those are always valid and don't need to be mentioned in any BO list
 whatsoever.

 If we knew that a per process BO is currently not in use we can
 optionally tell that to the kernel to make memory management more
 efficient.

 In other words instead of a list of stuff which is used we send down to
 the kernel a list of stuff which is not used any more and that only
 when
 we know that it is necessary, e.g. when a game or application
 overcommits.

>>>
>>> Radeonsi doesn't use this because this approach caused performance
>>> degradation and also drops BO priorities.
>>>
>>>
>>> The performance degradation was mostly due to shortcomings in the LRU which
>>> have by now been fixed.
>>>
>>> BO priorities are a different topic, but could be added to per VM BOs as
>>> well.
>>>
>>
>> What's the minimum drm version that contains the fixes?
>>
>>
>> I've pushed the last optimization this morning. No idea when it really
>> became useful, but the numbers from the closed source clients now look much
>> better.
>>
>> We should probably test and bump the drm version when we are sure that
>> this now works as expected.
>>
>
> We should, but AMD Mesa guys don't have any time.
>
> Marek
>
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-16 Thread Marek Olšák
On Wed, Jan 16, 2019 at 9:43 AM Christian König <
ckoenig.leichtzumer...@gmail.com> wrote:

> Am 16.01.19 um 15:39 schrieb Marek Olšák:
>
>
>
> On Wed, Jan 16, 2019, 9:34 AM Koenig, Christian  wrote:
>
>> Am 16.01.19 um 15:31 schrieb Marek Olšák:
>>
>>
>>
>> On Wed, Jan 16, 2019, 7:55 AM Christian König <
>> ckoenig.leichtzumer...@gmail.com wrote:
>>
>>> Well if you ask me we should have the following interface for
>>> negotiating memory management with the kernel:
>>>
>>> 1. We have per process BOs which can't be shared between processes.
>>>
>>> Those are always valid and don't need to be mentioned in any BO list
>>> whatsoever.
>>>
>>> If we knew that a per process BO is currently not in use we can
>>> optionally tell that to the kernel to make memory management more
>>> efficient.
>>>
>>> In other words instead of a list of stuff which is used we send down to
>>> the kernel a list of stuff which is not used any more and that only when
>>> we know that it is necessary, e.g. when a game or application
>>> overcommits.
>>>
>>
>> Radeonsi doesn't use this because this approach caused performance
>> degradation and also drops BO priorities.
>>
>>
>> The performance degradation was mostly due to shortcomings in the LRU which
>> have by now been fixed.
>>
>> BO priorities are a different topic, but could be added to per VM BOs as
>> well.
>>
>
> What's the minimum drm version that contains the fixes?
>
>
> I've pushed the last optimization this morning. No idea when it really
> became useful, but the numbers from the closed source clients now look much
> better.
>
> We should probably test and bump the drm version when we are sure that
> this now works as expected.
>

We should, but AMD Mesa guys don't have any time.

Marek
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-16 Thread Marek Olšák
On Wed, Jan 16, 2019 at 10:15 AM Bas Nieuwenhuizen 
wrote:

> On Wed, Jan 16, 2019 at 3:38 PM Marek Olšák  wrote:
> >
> >
> >
> > On Wed, Jan 16, 2019, 7:46 AM Bas Nieuwenhuizen  wrote:
> >>
> >> So random questions:
> >>
> >> 1) In this discussion it was mentioned that some Vulkan drivers still
> >> use the bo_list interface. I think that implies radv as I think we're
> >> still using bo_list. Is there any other API we should be using? (Also,
> >> with VK_EXT_descriptor_indexing I suspect we'll be moving more towards
> >> a global bo list instead of a cmd buffer one, as we cannot know all
> >> the BOs referenced anymore, but not sure what end state here will be).
> >>
> >> 2) The other alternative mentioned was adding the buffers directly
> >> into the submit ioctl. Is this the desired end state (though as above
> >> I'm not sure how that works for vulkan)? If yes, what is the timeline
> >> for this that we need something in the interim?
> >
> >
> > Radeonsi already uses this.
> >
> >>
> >> 3) Did we measure any performance benefit?
> >>
> >> In general I'd like to ack the raw bo list creation function as
> >> this interface seems easier to use. The two arrays thing has always
> >> been kind of a pain when we want to use e.g. builtin sort functions to
> >> make sure we have no duplicate BOs, but have some comments below.
> >
> >
> > The reason amdgpu was slower than radeon was because of this inefficient
> bo list interface.
> >
> >>
> >> On Mon, Jan 7, 2019 at 8:31 PM Marek Olšák  wrote:
> >> >
> >> > From: Marek Olšák 
> >> >
> >> > ---
> >> >  amdgpu/amdgpu-symbol-check |  3 ++
> >> >  amdgpu/amdgpu.h| 56
> +-
> >> >  amdgpu/amdgpu_bo.c | 36 
> >> >  amdgpu/amdgpu_cs.c | 25 +
> >> >  4 files changed, 119 insertions(+), 1 deletion(-)
> >> >
> >> > diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
> >> > index 6f5e0f95..96a44b40 100755
> >> > --- a/amdgpu/amdgpu-symbol-check
> >> > +++ b/amdgpu/amdgpu-symbol-check
> >> > @@ -12,20 +12,22 @@ _edata
> >> >  _end
> >> >  _fini
> >> >  _init
> >> >  amdgpu_bo_alloc
> >> >  amdgpu_bo_cpu_map
> >> >  amdgpu_bo_cpu_unmap
> >> >  amdgpu_bo_export
> >> >  amdgpu_bo_free
> >> >  amdgpu_bo_import
> >> >  amdgpu_bo_inc_ref
> >> > +amdgpu_bo_list_create_raw
> >> > +amdgpu_bo_list_destroy_raw
> >> >  amdgpu_bo_list_create
> >> >  amdgpu_bo_list_destroy
> >> >  amdgpu_bo_list_update
> >> >  amdgpu_bo_query_info
> >> >  amdgpu_bo_set_metadata
> >> >  amdgpu_bo_va_op
> >> >  amdgpu_bo_va_op_raw
> >> >  amdgpu_bo_wait_for_idle
> >> >  amdgpu_create_bo_from_user_mem
> >> >  amdgpu_cs_chunk_fence_info_to_data
> >> > @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
> >> >  amdgpu_cs_destroy_syncobj
> >> >  amdgpu_cs_export_syncobj
> >> >  amdgpu_cs_fence_to_handle
> >> >  amdgpu_cs_import_syncobj
> >> >  amdgpu_cs_query_fence_status
> >> >  amdgpu_cs_query_reset_state
> >> >  amdgpu_query_sw_info
> >> >  amdgpu_cs_signal_semaphore
> >> >  amdgpu_cs_submit
> >> >  amdgpu_cs_submit_raw
> >> > +amdgpu_cs_submit_raw2
> >> >  amdgpu_cs_syncobj_export_sync_file
> >> >  amdgpu_cs_syncobj_import_sync_file
> >> >  amdgpu_cs_syncobj_reset
> >> >  amdgpu_cs_syncobj_signal
> >> >  amdgpu_cs_syncobj_wait
> >> >  amdgpu_cs_wait_fences
> >> >  amdgpu_cs_wait_semaphore
> >> >  amdgpu_device_deinitialize
> >> >  amdgpu_device_initialize
> >> >  amdgpu_find_bo_by_cpu_mapping
> >> > diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
> >> > index dc51659a..5b800033 100644
> >> > --- a/amdgpu/amdgpu.h
> >> > +++ b/amdgpu/amdgpu.h
> >> > @@ -35,20 +35,21 @@
> >> >  #define _AMDGPU_H_
> >> >
> >> >  #include 
> >> >  #include 
> >> >
> >> >  #ifdef __cplusplus
> >> >  extern "C" {
> >> >  #endif
> >> >
> >> >  struct drm_amdgpu_info_hw_ip;
> >> > +struct drm_amdgpu_bo_list_entry;
> >> >
> >> >
> /*--*/
> >> >  /* --- Defines
>  */
> >> >
> /*--*/
> >> >
> >> >  /**
> >> >   * Define max. number of Command Buffers (IB) which could be sent to
> the single
> >> >   * hardware IP to accommodate CE/DE requirements
> >> >   *
> >> >   * \sa amdgpu_cs_ib_info
> >> > @@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle
> buf_handle);
> >> >   *and no GPU access is scheduled.
> >> >   *  1 GPU access is in fly or scheduled
> >> >   *
> >> >   * \return   0 - on success
> >> >   *  <0 - Negative POSIX Error code
> >> >   */
> >> >  int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
> >> > uint64_t timeout_ns,
> >> > bool *buffer_busy);
> >> >
> >> > +/**
> >> > + * Creates a BO list handle for command submission.
> >> > + *
> >> > + * \param   dev

Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-16 Thread Bas Nieuwenhuizen
On Wed, Jan 16, 2019 at 3:38 PM Marek Olšák  wrote:
>
>
>
> On Wed, Jan 16, 2019, 7:46 AM Bas Nieuwenhuizen  wrote:
>>
>> So random questions:
>>
>> 1) In this discussion it was mentioned that some Vulkan drivers still
>> use the bo_list interface. I think that implies radv as I think we're
>> still using bo_list. Is there any other API we should be using? (Also,
>> with VK_EXT_descriptor_indexing I suspect we'll be moving more towards
>> a global bo list instead of a cmd buffer one, as we cannot know all
>> the BOs referenced anymore, but not sure what end state here will be).
>>
>> 2) The other alternative mentioned was adding the buffers directly
>> into the submit ioctl. Is this the desired end state (though as above
>> I'm not sure how that works for vulkan)? If yes, what is the timeline
>> for this that we need something in the interim?
>
>
> Radeonsi already uses this.
>
>>
>> 3) Did we measure any performance benefit?
>>
>> In general I'd like to ack the raw bo list creation function as
>> this interface seems easier to use. The two arrays thing has always
>> been kind of a pain when we want to use e.g. builtin sort functions to
>> make sure we have no duplicate BOs, but have some comments below.
>
>
> The reason amdgpu was slower than radeon was because of this inefficient bo 
> list interface.
>
>>
>> On Mon, Jan 7, 2019 at 8:31 PM Marek Olšák  wrote:
>> >
>> > From: Marek Olšák 
>> >
>> > ---
>> >  amdgpu/amdgpu-symbol-check |  3 ++
>> >  amdgpu/amdgpu.h| 56 +-
>> >  amdgpu/amdgpu_bo.c | 36 
>> >  amdgpu/amdgpu_cs.c | 25 +
>> >  4 files changed, 119 insertions(+), 1 deletion(-)
>> >
>> > diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
>> > index 6f5e0f95..96a44b40 100755
>> > --- a/amdgpu/amdgpu-symbol-check
>> > +++ b/amdgpu/amdgpu-symbol-check
>> > @@ -12,20 +12,22 @@ _edata
>> >  _end
>> >  _fini
>> >  _init
>> >  amdgpu_bo_alloc
>> >  amdgpu_bo_cpu_map
>> >  amdgpu_bo_cpu_unmap
>> >  amdgpu_bo_export
>> >  amdgpu_bo_free
>> >  amdgpu_bo_import
>> >  amdgpu_bo_inc_ref
>> > +amdgpu_bo_list_create_raw
>> > +amdgpu_bo_list_destroy_raw
>> >  amdgpu_bo_list_create
>> >  amdgpu_bo_list_destroy
>> >  amdgpu_bo_list_update
>> >  amdgpu_bo_query_info
>> >  amdgpu_bo_set_metadata
>> >  amdgpu_bo_va_op
>> >  amdgpu_bo_va_op_raw
>> >  amdgpu_bo_wait_for_idle
>> >  amdgpu_create_bo_from_user_mem
>> >  amdgpu_cs_chunk_fence_info_to_data
>> > @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
>> >  amdgpu_cs_destroy_syncobj
>> >  amdgpu_cs_export_syncobj
>> >  amdgpu_cs_fence_to_handle
>> >  amdgpu_cs_import_syncobj
>> >  amdgpu_cs_query_fence_status
>> >  amdgpu_cs_query_reset_state
>> >  amdgpu_query_sw_info
>> >  amdgpu_cs_signal_semaphore
>> >  amdgpu_cs_submit
>> >  amdgpu_cs_submit_raw
>> > +amdgpu_cs_submit_raw2
>> >  amdgpu_cs_syncobj_export_sync_file
>> >  amdgpu_cs_syncobj_import_sync_file
>> >  amdgpu_cs_syncobj_reset
>> >  amdgpu_cs_syncobj_signal
>> >  amdgpu_cs_syncobj_wait
>> >  amdgpu_cs_wait_fences
>> >  amdgpu_cs_wait_semaphore
>> >  amdgpu_device_deinitialize
>> >  amdgpu_device_initialize
>> >  amdgpu_find_bo_by_cpu_mapping
>> > diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
>> > index dc51659a..5b800033 100644
>> > --- a/amdgpu/amdgpu.h
>> > +++ b/amdgpu/amdgpu.h
>> > @@ -35,20 +35,21 @@
>> >  #define _AMDGPU_H_
>> >
>> >  #include 
>> >  #include 
>> >
>> >  #ifdef __cplusplus
>> >  extern "C" {
>> >  #endif
>> >
>> >  struct drm_amdgpu_info_hw_ip;
>> > +struct drm_amdgpu_bo_list_entry;
>> >
>> >  
>> > /*--*/
>> >  /* --- Defines 
>> >  */
>> >  
>> > /*--*/
>> >
>> >  /**
>> >   * Define max. number of Command Buffers (IB) which could be sent to the 
>> > single
>> >   * hardware IP to accommodate CE/DE requirements
>> >   *
>> >   * \sa amdgpu_cs_ib_info
>> > @@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle buf_handle);
>> >   *and no GPU access is scheduled.
>> >   *  1 GPU access is in fly or scheduled
>> >   *
>> >   * \return   0 - on success
>> >   *  <0 - Negative POSIX Error code
>> >   */
>> >  int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
>> > uint64_t timeout_ns,
>> > bool *buffer_busy);
>> >
>> > +/**
>> > + * Creates a BO list handle for command submission.
>> > + *
>> > + * \param   dev- \c [in] Device handle.
>> > + *See #amdgpu_device_initialize()
>> > + * \param   number_of_buffers  - \c [in] Number of BOs in the list
>> > + * \param   buffers- \c [in] List of BO handles
>> > + * \param   result - \c [out] Created BO list 

Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-16 Thread Christian König

On 16.01.19 at 15:39, Marek Olšák wrote:



On Wed, Jan 16, 2019, 9:34 AM Koenig, Christian wrote:


On 16.01.19 at 15:31, Marek Olšák wrote:



On Wed, Jan 16, 2019, 7:55 AM Christian König wrote:

Well if you ask me we should have the following interface for
negotiating memory management with the kernel:

1. We have per process BOs which can't be shared between
processes.

Those are always valid and don't need to be mentioned in any
BO list
whatsoever.

If we knew that a per process BO is currently not in use we can
optionally tell that to the kernel to make memory management
more efficient.

In other words instead of a list of stuff which is used we
send down to
the kernel a list of stuff which is not used any more and
that only when
we know that it is necessary, e.g. when a game or application
overcommits.


Radeonsi doesn't use this because this approach caused
performance degradation and also drops BO priorities.


The performance degradation was mostly due to shortcomings in the LRU
which have by now been fixed.

BO priorities are a different topic, but could be added to per VM
BOs as well.


What's the minimum drm version that contains the fixes?


I've pushed the last optimization this morning. No idea when it really 
became useful, but the numbers from the closed source clients now look 
much better.


We should probably test and bump the drm version when we are sure that 
this now works as expected.
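
(As an illustration of what such a gate could look like in userspace, a
minimal sketch: amdgpu_device_initialize() already reports the DRM
major/minor version, so the per-VM-BO path could be keyed on the minor
version once it is bumped; the threshold 99 below is a placeholder, not
the real number that would carry the LRU fixes.)

#include <stdbool.h>
#include <stdint.h>
#include <amdgpu.h>

static bool per_vm_bo_path_usable(int drm_fd, amdgpu_device_handle *dev)
{
        uint32_t major, minor;

        if (amdgpu_device_initialize(drm_fd, &major, &minor, dev))
                return false;

        /* Placeholder threshold: bump once the LRU fixes are guaranteed. */
        return minor >= 99;
}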


Christian.



Marek


Christian.



Marek


2. We have shared BOs which are used by more than one process.

Those are rare and should be added to the per CS list of BOs
in use.


The whole BO list interface Marek tries to optimize here
should be
deprecated and not used any more.

Regards,
Christian.

On 16.01.19 at 13:46, Bas Nieuwenhuizen wrote:
> So random questions:
>
> 1) In this discussion it was mentioned that some Vulkan
drivers still
> use the bo_list interface. I think that implies radv as I
think we're
> still using bo_list. Is there any other API we should be
using? (Also,
> with VK_EXT_descriptor_indexing I suspect we'll be moving
more towards
> a global bo list instead of a cmd buffer one, as we cannot
know all
> the BOs referenced anymore, but not sure what end state
here will be).
>
> 2) The other alternative mentioned was adding the buffers
directly
> into the submit ioctl. Is this the desired end state
(though as above
> I'm not sure how that works for vulkan)? If yes, what is
the timeline
> for this that we need something in the interim?
>
> 3) Did we measure any performance benefit?
>
> In general I'd like to ack the raw bo list creation
function as
> this interface seems easier to use. The two arrays thing
has always
> been kind of a pain when we want to use e.g. builtin sort
functions to
> make sure we have no duplicate BOs, but have some comments
below.
>
> On Mon, Jan 7, 2019 at 8:31 PM Marek Olšák
mailto:mar...@gmail.com>> wrote:
>> From: Marek Olšák mailto:marek.ol...@amd.com>>
>>
>> ---
>>   amdgpu/amdgpu-symbol-check |  3 ++
>>   amdgpu/amdgpu.h            | 56
+-
>>   amdgpu/amdgpu_bo.c         | 36 
>>   amdgpu/amdgpu_cs.c         | 25 +
>>   4 files changed, 119 insertions(+), 1 deletion(-)
>>
>> diff --git a/amdgpu/amdgpu-symbol-check
b/amdgpu/amdgpu-symbol-check
>> index 6f5e0f95..96a44b40 100755
>> --- a/amdgpu/amdgpu-symbol-check
>> +++ b/amdgpu/amdgpu-symbol-check
>> @@ -12,20 +12,22 @@ _edata
>>   _end
>>   _fini
>>   _init
>>   amdgpu_bo_alloc
>>   amdgpu_bo_cpu_map
>>   amdgpu_bo_cpu_unmap
>>   amdgpu_bo_export
>>   amdgpu_bo_free
>>   amdgpu_bo_import
>>   amdgpu_bo_inc_ref
>> +amdgpu_bo_list_create_raw
>> +amdgpu_bo_list_destroy_raw
>>   amdgpu_bo_list_create
>>   amdgpu_bo_list_destroy
>>   amdgpu_bo_list_update
>>   amdgpu_bo_query_info
>>   amdgpu_bo_set_metadata
>>   amdgpu_bo_va_op
>>   amdgpu_bo_va_op_raw
>>   amdgpu_bo_wait_for_idle
>>   amdgpu_create_bo_from_user_mem
>>   amdgpu_cs_chunk_fence_info_to_data
>> @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
>>   

Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-16 Thread Marek Olšák
On Wed, Jan 16, 2019, 9:34 AM Koenig, Christian wrote:

> Am 16.01.19 um 15:31 schrieb Marek Olšák:
>
>
>
> On Wed, Jan 16, 2019, 7:55 AM Christian König <
> ckoenig.leichtzumer...@gmail.com wrote:
>
>> Well if you ask me we should have the following interface for
>> negotiating memory management with the kernel:
>>
>> 1. We have per process BOs which can't be shared between processes.
>>
>> Those are always valid and don't need to be mentioned in any BO list
>> whatsoever.
>>
>> If we knew that a per process BO is currently not in use we can
>> optionally tell that to the kernel to make memory management more
>> efficient.
>>
>> In other words instead of a list of stuff which is used we send down to
>> the kernel a list of stuff which is not used any more and that only when
>> we know that it is necessary, e.g. when a game or application overcommits.
>>
>
> Radeonsi doesn't use this because this approach caused performance
> degradation and also drops BO priorities.
>
>
> The performance degradation was mostly due to shortcomings in the LRU which
> have by now been fixed.
>
> BO priorities are a different topic, but could be added to per VM BOs as
> well.
>

What's the minimum drm version that contains the fixes?

Marek


> Christian.
>
>
> Marek
>
>
>> 2. We have shared BOs which are used by more than one process.
>>
>> Those are rare and should be added to the per CS list of BOs in use.
>>
>>
>> The whole BO list interface Marek tries to optimize here should be
>> deprecated and not used any more.
>>
>> Regards,
>> Christian.
>>
>> Am 16.01.19 um 13:46 schrieb Bas Nieuwenhuizen:
>> > So random questions:
>> >
>> > 1) In this discussion it was mentioned that some Vulkan drivers still
>> > use the bo_list interface. I think that implies radv as I think we're
>> > still using bo_list. Is there any other API we should be using? (Also,
>> > with VK_EXT_descriptor_indexing I suspect we'll be moving more towards
>> > a global bo list instead of a cmd buffer one, as we cannot know all
>> > the BOs referenced anymore, but not sure what end state here will be).
>> >
>> > 2) The other alternative mentioned was adding the buffers directly
>> > into the submit ioctl. Is this the desired end state (though as above
>> > I'm not sure how that works for vulkan)? If yes, what is the timeline
>> > for this that we need something in the interim?
>> >
>> > 3) Did we measure any performance benefit?
>> >
>> > In general I'd like to ack the raw bo list creation function as
>> > this interface seems easier to use. The two arrays thing has always
>> > been kind of a pain when we want to use e.g. builtin sort functions to
>> > make sure we have no duplicate BOs, but have some comments below.
>> >
>> > On Mon, Jan 7, 2019 at 8:31 PM Marek Olšák  wrote:
>> >> From: Marek Olšák 
>> >>
>> >> ---
>> >>   amdgpu/amdgpu-symbol-check |  3 ++
>> >>   amdgpu/amdgpu.h| 56
>> +-
>> >>   amdgpu/amdgpu_bo.c | 36 
>> >>   amdgpu/amdgpu_cs.c | 25 +
>> >>   4 files changed, 119 insertions(+), 1 deletion(-)
>> >>
>> >> diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
>> >> index 6f5e0f95..96a44b40 100755
>> >> --- a/amdgpu/amdgpu-symbol-check
>> >> +++ b/amdgpu/amdgpu-symbol-check
>> >> @@ -12,20 +12,22 @@ _edata
>> >>   _end
>> >>   _fini
>> >>   _init
>> >>   amdgpu_bo_alloc
>> >>   amdgpu_bo_cpu_map
>> >>   amdgpu_bo_cpu_unmap
>> >>   amdgpu_bo_export
>> >>   amdgpu_bo_free
>> >>   amdgpu_bo_import
>> >>   amdgpu_bo_inc_ref
>> >> +amdgpu_bo_list_create_raw
>> >> +amdgpu_bo_list_destroy_raw
>> >>   amdgpu_bo_list_create
>> >>   amdgpu_bo_list_destroy
>> >>   amdgpu_bo_list_update
>> >>   amdgpu_bo_query_info
>> >>   amdgpu_bo_set_metadata
>> >>   amdgpu_bo_va_op
>> >>   amdgpu_bo_va_op_raw
>> >>   amdgpu_bo_wait_for_idle
>> >>   amdgpu_create_bo_from_user_mem
>> >>   amdgpu_cs_chunk_fence_info_to_data
>> >> @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
>> >>   amdgpu_cs_destroy_syncobj
>> >>   amdgpu_cs_export_syncobj
>> >>   amdgpu_cs_fence_to_handle
>> >>   amdgpu_cs_import_syncobj
>> >>   amdgpu_cs_query_fence_status
>> >>   amdgpu_cs_query_reset_state
>> >>   amdgpu_query_sw_info
>> >>   amdgpu_cs_signal_semaphore
>> >>   amdgpu_cs_submit
>> >>   amdgpu_cs_submit_raw
>> >> +amdgpu_cs_submit_raw2
>> >>   amdgpu_cs_syncobj_export_sync_file
>> >>   amdgpu_cs_syncobj_import_sync_file
>> >>   amdgpu_cs_syncobj_reset
>> >>   amdgpu_cs_syncobj_signal
>> >>   amdgpu_cs_syncobj_wait
>> >>   amdgpu_cs_wait_fences
>> >>   amdgpu_cs_wait_semaphore
>> >>   amdgpu_device_deinitialize
>> >>   amdgpu_device_initialize
>> >>   amdgpu_find_bo_by_cpu_mapping
>> >> diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
>> >> index dc51659a..5b800033 100644
>> >> --- a/amdgpu/amdgpu.h
>> >> +++ b/amdgpu/amdgpu.h
>> >> @@ -35,20 +35,21 @@
>> >>   #define _AMDGPU_H_
>> >>
>> >>   #include 
>> >>   #include 
>> >>
>> >>   #ifdef __cplusplus
>> >>   extern 

Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-16 Thread Marek Olšák
On Wed, Jan 16, 2019, 7:46 AM Bas Nieuwenhuizen wrote:

> So random questions:
>
> 1) In this discussion it was mentioned that some Vulkan drivers still
> use the bo_list interface. I think that implies radv as I think we're
> still using bo_list. Is there any other API we should be using? (Also,
> with VK_EXT_descriptor_indexing I suspect we'll be moving more towards
> a global bo list instead of a cmd buffer one, as we cannot know all
> the BOs referenced anymore, but not sure what end state here will be).
>
> 2) The other alternative mentioned was adding the buffers directly
> into the submit ioctl. Is this the desired end state (though as above
> I'm not sure how that works for vulkan)? If yes, what is the timeline
> for this that we need something in the interim?
>

Radeonsi already uses this.


> 3) Did we measure any performance benefit?
>
> In general I'd like to to ack the raw bo list creation function as
> this interface seems easier to use. The two arrays thing has always
> been kind of a pain when we want to use e.g. builtin sort functions to
> make sure we have no duplicate BOs, but have some comments below.
>

The reason amdgpu was slower than radeon was because of this inefficient bo
list interface.


> On Mon, Jan 7, 2019 at 8:31 PM Marek Olšák  wrote:
> >
> > From: Marek Olšák 
> >
> > ---
> >  amdgpu/amdgpu-symbol-check |  3 ++
> >  amdgpu/amdgpu.h| 56 +-
> >  amdgpu/amdgpu_bo.c | 36 
> >  amdgpu/amdgpu_cs.c | 25 +
> >  4 files changed, 119 insertions(+), 1 deletion(-)
> >
> > diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
> > index 6f5e0f95..96a44b40 100755
> > --- a/amdgpu/amdgpu-symbol-check
> > +++ b/amdgpu/amdgpu-symbol-check
> > @@ -12,20 +12,22 @@ _edata
> >  _end
> >  _fini
> >  _init
> >  amdgpu_bo_alloc
> >  amdgpu_bo_cpu_map
> >  amdgpu_bo_cpu_unmap
> >  amdgpu_bo_export
> >  amdgpu_bo_free
> >  amdgpu_bo_import
> >  amdgpu_bo_inc_ref
> > +amdgpu_bo_list_create_raw
> > +amdgpu_bo_list_destroy_raw
> >  amdgpu_bo_list_create
> >  amdgpu_bo_list_destroy
> >  amdgpu_bo_list_update
> >  amdgpu_bo_query_info
> >  amdgpu_bo_set_metadata
> >  amdgpu_bo_va_op
> >  amdgpu_bo_va_op_raw
> >  amdgpu_bo_wait_for_idle
> >  amdgpu_create_bo_from_user_mem
> >  amdgpu_cs_chunk_fence_info_to_data
> > @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
> >  amdgpu_cs_destroy_syncobj
> >  amdgpu_cs_export_syncobj
> >  amdgpu_cs_fence_to_handle
> >  amdgpu_cs_import_syncobj
> >  amdgpu_cs_query_fence_status
> >  amdgpu_cs_query_reset_state
> >  amdgpu_query_sw_info
> >  amdgpu_cs_signal_semaphore
> >  amdgpu_cs_submit
> >  amdgpu_cs_submit_raw
> > +amdgpu_cs_submit_raw2
> >  amdgpu_cs_syncobj_export_sync_file
> >  amdgpu_cs_syncobj_import_sync_file
> >  amdgpu_cs_syncobj_reset
> >  amdgpu_cs_syncobj_signal
> >  amdgpu_cs_syncobj_wait
> >  amdgpu_cs_wait_fences
> >  amdgpu_cs_wait_semaphore
> >  amdgpu_device_deinitialize
> >  amdgpu_device_initialize
> >  amdgpu_find_bo_by_cpu_mapping
> > diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
> > index dc51659a..5b800033 100644
> > --- a/amdgpu/amdgpu.h
> > +++ b/amdgpu/amdgpu.h
> > @@ -35,20 +35,21 @@
> >  #define _AMDGPU_H_
> >
> >  #include 
> >  #include 
> >
> >  #ifdef __cplusplus
> >  extern "C" {
> >  #endif
> >
> >  struct drm_amdgpu_info_hw_ip;
> > +struct drm_amdgpu_bo_list_entry;
> >
> >
> /*--*/
> >  /* --- Defines
>  */
> >
> /*--*/
> >
> >  /**
> >   * Define max. number of Command Buffers (IB) which could be sent to
> the single
> >   * hardware IP to accommodate CE/DE requirements
> >   *
> >   * \sa amdgpu_cs_ib_info
> > @@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle
> buf_handle);
> >   *and no GPU access is scheduled.
> >   *  1 GPU access is in fly or scheduled
> >   *
> >   * \return   0 - on success
> >   *  <0 - Negative POSIX Error code
> >   */
> >  int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
> > uint64_t timeout_ns,
> > bool *buffer_busy);
> >
> > +/**
> > + * Creates a BO list handle for command submission.
> > + *
> > + * \param   dev- \c [in] Device handle.
> > + *See #amdgpu_device_initialize()
> > + * \param   number_of_buffers  - \c [in] Number of BOs in the list
> > + * \param   buffers- \c [in] List of BO handles
> > + * \param   result - \c [out] Created BO list handle
> > + *
> > + * \return   0 on success\n
> > + *  <0 - Negative POSIX Error code
> > + *
> > + * \sa amdgpu_bo_list_destroy_raw()
> > +*/
> > +int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
> > + 

Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-16 Thread Koenig, Christian
On 16.01.19 at 15:31, Marek Olšák wrote:


On Wed, Jan 16, 2019, 7:55 AM Christian König wrote:
Well if you ask me we should have the following interface for
negotiating memory management with the kernel:

1. We have per process BOs which can't be shared between processes.

Those are always valid and don't need to be mentioned in any BO list
whatsoever.

If we knew that a per process BO is currently not in use we can
optionally tell that to the kernel to make memory management more efficient.

In other words instead of a list of stuff which is used we send down to
the kernel a list of stuff which is not used any more and that only when
we know that it is necessary, e.g. when a game or application overcommits.

Radeonsi doesn't use this because this approach caused performance degradation 
and also drops BO priorities.

The performance degradation was mostly due to shortcomings in the LRU which
have by now been fixed.

BO priorities are a different topic, but could be added to per VM BOs as well.

Christian.


Marek


2. We have shared BOs which are used by more than one process.

Those are rare and should be added to the per CS list of BOs in use.


The whole BO list interface Marek tries to optimize here should be
deprecated and not used any more.

Regards,
Christian.

On 16.01.19 at 13:46, Bas Nieuwenhuizen wrote:
> So random questions:
>
> 1) In this discussion it was mentioned that some Vulkan drivers still
> use the bo_list interface. I think that implies radv as I think we're
> still using bo_list. Is there any other API we should be using? (Also,
> with VK_EXT_descriptor_indexing I suspect we'll be moving more towards
> a global bo list instead of a cmd buffer one, as we cannot know all
> the BOs referenced anymore, but not sure what end state here will be).
>
> 2) The other alternative mentioned was adding the buffers directly
> into the submit ioctl. Is this the desired end state (though as above
> I'm not sure how that works for vulkan)? If yes, what is the timeline
> for this that we need something in the interim?
>
> 3) Did we measure any performance benefit?
>
> In general I'd like to ack the raw bo list creation function as
> this interface seems easier to use. The two arrays thing has always
> been kind of a pain when we want to use e.g. builtin sort functions to
> make sure we have no duplicate BOs, but have some comments below.
>
> On Mon, Jan 7, 2019 at 8:31 PM Marek Olšák 
> mailto:mar...@gmail.com>> wrote:
>> From: Marek Olšák mailto:marek.ol...@amd.com>>
>>
>> ---
>>   amdgpu/amdgpu-symbol-check |  3 ++
>>   amdgpu/amdgpu.h| 56 +-
>>   amdgpu/amdgpu_bo.c | 36 
>>   amdgpu/amdgpu_cs.c | 25 +
>>   4 files changed, 119 insertions(+), 1 deletion(-)
>>
>> diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
>> index 6f5e0f95..96a44b40 100755
>> --- a/amdgpu/amdgpu-symbol-check
>> +++ b/amdgpu/amdgpu-symbol-check
>> @@ -12,20 +12,22 @@ _edata
>>   _end
>>   _fini
>>   _init
>>   amdgpu_bo_alloc
>>   amdgpu_bo_cpu_map
>>   amdgpu_bo_cpu_unmap
>>   amdgpu_bo_export
>>   amdgpu_bo_free
>>   amdgpu_bo_import
>>   amdgpu_bo_inc_ref
>> +amdgpu_bo_list_create_raw
>> +amdgpu_bo_list_destroy_raw
>>   amdgpu_bo_list_create
>>   amdgpu_bo_list_destroy
>>   amdgpu_bo_list_update
>>   amdgpu_bo_query_info
>>   amdgpu_bo_set_metadata
>>   amdgpu_bo_va_op
>>   amdgpu_bo_va_op_raw
>>   amdgpu_bo_wait_for_idle
>>   amdgpu_create_bo_from_user_mem
>>   amdgpu_cs_chunk_fence_info_to_data
>> @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
>>   amdgpu_cs_destroy_syncobj
>>   amdgpu_cs_export_syncobj
>>   amdgpu_cs_fence_to_handle
>>   amdgpu_cs_import_syncobj
>>   amdgpu_cs_query_fence_status
>>   amdgpu_cs_query_reset_state
>>   amdgpu_query_sw_info
>>   amdgpu_cs_signal_semaphore
>>   amdgpu_cs_submit
>>   amdgpu_cs_submit_raw
>> +amdgpu_cs_submit_raw2
>>   amdgpu_cs_syncobj_export_sync_file
>>   amdgpu_cs_syncobj_import_sync_file
>>   amdgpu_cs_syncobj_reset
>>   amdgpu_cs_syncobj_signal
>>   amdgpu_cs_syncobj_wait
>>   amdgpu_cs_wait_fences
>>   amdgpu_cs_wait_semaphore
>>   amdgpu_device_deinitialize
>>   amdgpu_device_initialize
>>   amdgpu_find_bo_by_cpu_mapping
>> diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
>> index dc51659a..5b800033 100644
>> --- a/amdgpu/amdgpu.h
>> +++ b/amdgpu/amdgpu.h
>> @@ -35,20 +35,21 @@
>>   #define _AMDGPU_H_
>>
>>   #include 
>>   #include 
>>
>>   #ifdef __cplusplus
>>   extern "C" {
>>   #endif
>>
>>   struct drm_amdgpu_info_hw_ip;
>> +struct drm_amdgpu_bo_list_entry;
>>
>>   
>> /*--*/
>>   /* --- Defines 
>>  */
>>   
>> /*--*/
>>
>>   /**
>>* Define max. number of Command Buffers (IB) which could be 

Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-16 Thread Marek Olšák
On Wed, Jan 16, 2019, 7:55 AM Christian König <
ckoenig.leichtzumer...@gmail.com wrote:

> Well if you ask me we should have the following interface for
> negotiating memory management with the kernel:
>
> 1. We have per process BOs which can't be shared between processes.
>
> Those are always valid and don't need to be mentioned in any BO list
> whatsoever.
>
> If we knew that a per process BO is currently not in use we can
> optionally tell that to the kernel to make memory management more
> efficient.
>
> In other words instead of a list of stuff which is used we send down to
> the kernel a list of stuff which is not used any more and that only when
> we know that it is necessary, e.g. when a game or application overcommits.
>

Radeonsi doesn't use this because this approach caused performance
degradation and also drops BO priorities.

Marek


> 2. We have shared BOs which are used by more than one process.
>
> Those are rare and should be added to the per CS list of BOs in use.
>
>
> The whole BO list interface Marek tries to optimize here should be
> deprecated and not used any more.
>
> Regards,
> Christian.
>
> Am 16.01.19 um 13:46 schrieb Bas Nieuwenhuizen:
> > So random questions:
> >
> > 1) In this discussion it was mentioned that some Vulkan drivers still
> > use the bo_list interface. I think that implies radv as I think we're
> > still using bo_list. Is there any other API we should be using? (Also,
> > with VK_EXT_descriptor_indexing I suspect we'll be moving more towards
> > a global bo list instead of a cmd buffer one, as we cannot know all
> > the BOs referenced anymore, but not sure what end state here will be).
> >
> > 2) The other alternative mentioned was adding the buffers directly
> > into the submit ioctl. Is this the desired end state (though as above
> > I'm not sure how that works for vulkan)? If yes, what is the timeline
> > for this that we need something in the interim?
> >
> > 3) Did we measure any performance benefit?
> >
> > In general I'd like to ack the raw bo list creation function as
> > this interface seems easier to use. The two arrays thing has always
> > been kind of a pain when we want to use e.g. builtin sort functions to
> > make sure we have no duplicate BOs, but have some comments below.
> >
> > On Mon, Jan 7, 2019 at 8:31 PM Marek Olšák  wrote:
> >> From: Marek Olšák 
> >>
> >> ---
> >>   amdgpu/amdgpu-symbol-check |  3 ++
> >>   amdgpu/amdgpu.h| 56 +-
> >>   amdgpu/amdgpu_bo.c | 36 
> >>   amdgpu/amdgpu_cs.c | 25 +
> >>   4 files changed, 119 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
> >> index 6f5e0f95..96a44b40 100755
> >> --- a/amdgpu/amdgpu-symbol-check
> >> +++ b/amdgpu/amdgpu-symbol-check
> >> @@ -12,20 +12,22 @@ _edata
> >>   _end
> >>   _fini
> >>   _init
> >>   amdgpu_bo_alloc
> >>   amdgpu_bo_cpu_map
> >>   amdgpu_bo_cpu_unmap
> >>   amdgpu_bo_export
> >>   amdgpu_bo_free
> >>   amdgpu_bo_import
> >>   amdgpu_bo_inc_ref
> >> +amdgpu_bo_list_create_raw
> >> +amdgpu_bo_list_destroy_raw
> >>   amdgpu_bo_list_create
> >>   amdgpu_bo_list_destroy
> >>   amdgpu_bo_list_update
> >>   amdgpu_bo_query_info
> >>   amdgpu_bo_set_metadata
> >>   amdgpu_bo_va_op
> >>   amdgpu_bo_va_op_raw
> >>   amdgpu_bo_wait_for_idle
> >>   amdgpu_create_bo_from_user_mem
> >>   amdgpu_cs_chunk_fence_info_to_data
> >> @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
> >>   amdgpu_cs_destroy_syncobj
> >>   amdgpu_cs_export_syncobj
> >>   amdgpu_cs_fence_to_handle
> >>   amdgpu_cs_import_syncobj
> >>   amdgpu_cs_query_fence_status
> >>   amdgpu_cs_query_reset_state
> >>   amdgpu_query_sw_info
> >>   amdgpu_cs_signal_semaphore
> >>   amdgpu_cs_submit
> >>   amdgpu_cs_submit_raw
> >> +amdgpu_cs_submit_raw2
> >>   amdgpu_cs_syncobj_export_sync_file
> >>   amdgpu_cs_syncobj_import_sync_file
> >>   amdgpu_cs_syncobj_reset
> >>   amdgpu_cs_syncobj_signal
> >>   amdgpu_cs_syncobj_wait
> >>   amdgpu_cs_wait_fences
> >>   amdgpu_cs_wait_semaphore
> >>   amdgpu_device_deinitialize
> >>   amdgpu_device_initialize
> >>   amdgpu_find_bo_by_cpu_mapping
> >> diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
> >> index dc51659a..5b800033 100644
> >> --- a/amdgpu/amdgpu.h
> >> +++ b/amdgpu/amdgpu.h
> >> @@ -35,20 +35,21 @@
> >>   #define _AMDGPU_H_
> >>
> >>   #include 
> >>   #include 
> >>
> >>   #ifdef __cplusplus
> >>   extern "C" {
> >>   #endif
> >>
> >>   struct drm_amdgpu_info_hw_ip;
> >> +struct drm_amdgpu_bo_list_entry;
> >>
> >>
>  
> /*--*/
> >>   /* --- Defines
>  */
> >>
>  
> /*--*/
> >>
> >>   /**
> >>* Define max. number of Command Buffers (IB) which could be sent to
> the single
> >>* hardware IP to 

Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-16 Thread Christian König
Well if you ask me we should have the following interface for 
negotiating memory management with the kernel:


1. We have per process BOs which can't be shared between processes.

Those are always valid and don't need to be mentioned in any BO list 
whatsoever.


If we knew that a per process BO is currently not in use we can 
optionally tell that to the kernel to make memory management more efficient.


In other words instead of a list of stuff which is used we send down to 
the kernel a list of stuff which is not used any more and that only when 
we know that it is necessary, e.g. when a game or application overcommits.


2. We have shared BOs which are used by more than one process.

Those are rare and should be added to the per CS list of BOs in use.


The whole BO list interface Marek tries to optimize here should be 
deprecated and not used any more.
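
(A purely hypothetical sketch of that model; none of these names exist in
libdrm or the kernel, they are invented only to make the idea concrete.)

#include <stdint.h>

/* Per-process (per-VM) BOs: allocated once, always resident, never named
 * in any per-submission structure. */

/* A command submission would then only list the rare shared/imported BOs: */
struct hyp_cs_request {
        uint32_t  num_shared_bos;       /* usually a handful at most */
        uint32_t *shared_bo_handles;
        /* ... IB chunks, fences, ... */
};

/* Optional hint, sent only when the application overcommits: BOs the
 * process knows it will not touch for a while, so they are evicted first. */
int hyp_vm_bo_mark_idle(void *dev, const uint32_t *idle_bo_handles,
                        uint32_t count);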


Regards,
Christian.

On 16.01.19 at 13:46, Bas Nieuwenhuizen wrote:

So random questions:

1) In this discussion it was mentioned that some Vulkan drivers still
use the bo_list interface. I think that implies radv as I think we're
still using bo_list. Is there any other API we should be using? (Also,
with VK_EXT_descriptor_indexing I suspect we'll be moving more towards
a global bo list instead of a cmd buffer one, as we cannot know all
the BOs referenced anymore, but not sure what end state here will be).

2) The other alternative mentioned was adding the buffers directly
into the submit ioctl. Is this the desired end state (though as above
I'm not sure how that works for vulkan)? If yes, what is the timeline
for this that we need something in the interim?

3) Did we measure any performance benefit?

> In general I'd like to ack the raw bo list creation function as
this interface seems easier to use. The two arrays thing has always
been kind of a pain when we want to use e.g. builtin sort functions to
make sure we have no duplicate BOs, but have some comments below.

On Mon, Jan 7, 2019 at 8:31 PM Marek Olšák  wrote:

From: Marek Olšák 

---
  amdgpu/amdgpu-symbol-check |  3 ++
  amdgpu/amdgpu.h| 56 +-
  amdgpu/amdgpu_bo.c | 36 
  amdgpu/amdgpu_cs.c | 25 +
  4 files changed, 119 insertions(+), 1 deletion(-)

diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
index 6f5e0f95..96a44b40 100755
--- a/amdgpu/amdgpu-symbol-check
+++ b/amdgpu/amdgpu-symbol-check
@@ -12,20 +12,22 @@ _edata
  _end
  _fini
  _init
  amdgpu_bo_alloc
  amdgpu_bo_cpu_map
  amdgpu_bo_cpu_unmap
  amdgpu_bo_export
  amdgpu_bo_free
  amdgpu_bo_import
  amdgpu_bo_inc_ref
+amdgpu_bo_list_create_raw
+amdgpu_bo_list_destroy_raw
  amdgpu_bo_list_create
  amdgpu_bo_list_destroy
  amdgpu_bo_list_update
  amdgpu_bo_query_info
  amdgpu_bo_set_metadata
  amdgpu_bo_va_op
  amdgpu_bo_va_op_raw
  amdgpu_bo_wait_for_idle
  amdgpu_create_bo_from_user_mem
  amdgpu_cs_chunk_fence_info_to_data
@@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
  amdgpu_cs_destroy_syncobj
  amdgpu_cs_export_syncobj
  amdgpu_cs_fence_to_handle
  amdgpu_cs_import_syncobj
  amdgpu_cs_query_fence_status
  amdgpu_cs_query_reset_state
  amdgpu_query_sw_info
  amdgpu_cs_signal_semaphore
  amdgpu_cs_submit
  amdgpu_cs_submit_raw
+amdgpu_cs_submit_raw2
  amdgpu_cs_syncobj_export_sync_file
  amdgpu_cs_syncobj_import_sync_file
  amdgpu_cs_syncobj_reset
  amdgpu_cs_syncobj_signal
  amdgpu_cs_syncobj_wait
  amdgpu_cs_wait_fences
  amdgpu_cs_wait_semaphore
  amdgpu_device_deinitialize
  amdgpu_device_initialize
  amdgpu_find_bo_by_cpu_mapping
diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
index dc51659a..5b800033 100644
--- a/amdgpu/amdgpu.h
+++ b/amdgpu/amdgpu.h
@@ -35,20 +35,21 @@
  #define _AMDGPU_H_

  #include 
  #include 

  #ifdef __cplusplus
  extern "C" {
  #endif

  struct drm_amdgpu_info_hw_ip;
+struct drm_amdgpu_bo_list_entry;

  /*--*/
  /* --- Defines  */
  /*--*/

  /**
   * Define max. number of Command Buffers (IB) which could be sent to the 
single
   * hardware IP to accommodate CE/DE requirements
   *
   * \sa amdgpu_cs_ib_info
@@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle buf_handle);
   *and no GPU access is scheduled.
   *  1 GPU access is in fly or scheduled
   *
   * \return   0 - on success
   *  <0 - Negative POSIX Error code
   */
  int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
 uint64_t timeout_ns,
 bool *buffer_busy);

+/**
+ * Creates a BO list handle for command submission.
+ *
+ * \param   dev- \c [in] Device handle.
+ *See #amdgpu_device_initialize()
+ * \param   number_of_buffers  - 

Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-16 Thread Bas Nieuwenhuizen
So random questions:

1) In this discussion it was mentioned that some Vulkan drivers still
use the bo_list interface. I think that implies radv as I think we're
still using bo_list. Is there any other API we should be using? (Also,
with VK_EXT_descriptor_indexing I suspect we'll be moving more towards
a global bo list instead of a cmd buffer one, as we cannot know all
the BOs referenced anymore, but not sure what end state here will be).

2) The other alternative mentioned was adding the buffers directly
into the submit ioctl. Is this the desired end state (though as above
I'm not sure how that works for vulkan)? If yes, what is the timeline
for this that we need something in the interim?

3) Did we measure any performance benefit?

In general I'd like to ack the raw bo list creation function as
this interface seems easier to use. The two arrays thing has always
been kind of a pain when we want to use e.g. builtin sort functions to
make sure we have no duplicate BOs, but have some comments below.
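
(To make that concrete, a small sketch, not part of the patch: with a
single array of struct drm_amdgpu_bo_list_entry, plain qsort() can sort by
handle and drop duplicates before the list is created.)

#include <stdint.h>
#include <stdlib.h>
#include <amdgpu_drm.h>  /* struct drm_amdgpu_bo_list_entry; path may vary */

static int entry_cmp(const void *a, const void *b)
{
        const struct drm_amdgpu_bo_list_entry *ea = a, *eb = b;

        return (ea->bo_handle > eb->bo_handle) - (ea->bo_handle < eb->bo_handle);
}

/* Sorts the entries by handle and returns the deduplicated count. */
static uint32_t sort_and_dedup(struct drm_amdgpu_bo_list_entry *e, uint32_t n)
{
        uint32_t i, out = 0;

        if (!n)
                return 0;

        qsort(e, n, sizeof(*e), entry_cmp);
        for (i = 1; i < n; i++) {
                if (e[i].bo_handle != e[out].bo_handle)
                        e[++out] = e[i];
        }
        return out + 1;
}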

On Mon, Jan 7, 2019 at 8:31 PM Marek Olšák  wrote:
>
> From: Marek Olšák 
>
> ---
>  amdgpu/amdgpu-symbol-check |  3 ++
>  amdgpu/amdgpu.h| 56 +-
>  amdgpu/amdgpu_bo.c | 36 
>  amdgpu/amdgpu_cs.c | 25 +
>  4 files changed, 119 insertions(+), 1 deletion(-)
>
> diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
> index 6f5e0f95..96a44b40 100755
> --- a/amdgpu/amdgpu-symbol-check
> +++ b/amdgpu/amdgpu-symbol-check
> @@ -12,20 +12,22 @@ _edata
>  _end
>  _fini
>  _init
>  amdgpu_bo_alloc
>  amdgpu_bo_cpu_map
>  amdgpu_bo_cpu_unmap
>  amdgpu_bo_export
>  amdgpu_bo_free
>  amdgpu_bo_import
>  amdgpu_bo_inc_ref
> +amdgpu_bo_list_create_raw
> +amdgpu_bo_list_destroy_raw
>  amdgpu_bo_list_create
>  amdgpu_bo_list_destroy
>  amdgpu_bo_list_update
>  amdgpu_bo_query_info
>  amdgpu_bo_set_metadata
>  amdgpu_bo_va_op
>  amdgpu_bo_va_op_raw
>  amdgpu_bo_wait_for_idle
>  amdgpu_create_bo_from_user_mem
>  amdgpu_cs_chunk_fence_info_to_data
> @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
>  amdgpu_cs_destroy_syncobj
>  amdgpu_cs_export_syncobj
>  amdgpu_cs_fence_to_handle
>  amdgpu_cs_import_syncobj
>  amdgpu_cs_query_fence_status
>  amdgpu_cs_query_reset_state
>  amdgpu_query_sw_info
>  amdgpu_cs_signal_semaphore
>  amdgpu_cs_submit
>  amdgpu_cs_submit_raw
> +amdgpu_cs_submit_raw2
>  amdgpu_cs_syncobj_export_sync_file
>  amdgpu_cs_syncobj_import_sync_file
>  amdgpu_cs_syncobj_reset
>  amdgpu_cs_syncobj_signal
>  amdgpu_cs_syncobj_wait
>  amdgpu_cs_wait_fences
>  amdgpu_cs_wait_semaphore
>  amdgpu_device_deinitialize
>  amdgpu_device_initialize
>  amdgpu_find_bo_by_cpu_mapping
> diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
> index dc51659a..5b800033 100644
> --- a/amdgpu/amdgpu.h
> +++ b/amdgpu/amdgpu.h
> @@ -35,20 +35,21 @@
>  #define _AMDGPU_H_
>
>  #include 
>  #include 
>
>  #ifdef __cplusplus
>  extern "C" {
>  #endif
>
>  struct drm_amdgpu_info_hw_ip;
> +struct drm_amdgpu_bo_list_entry;
>
>  
> /*--*/
>  /* --- Defines  
> */
>  
> /*--*/
>
>  /**
>   * Define max. number of Command Buffers (IB) which could be sent to the 
> single
>   * hardware IP to accommodate CE/DE requirements
>   *
>   * \sa amdgpu_cs_ib_info
> @@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle buf_handle);
>   *and no GPU access is scheduled.
>   *  1 GPU access is in fly or scheduled
>   *
>   * \return   0 - on success
>   *  <0 - Negative POSIX Error code
>   */
>  int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
> uint64_t timeout_ns,
> bool *buffer_busy);
>
> +/**
> + * Creates a BO list handle for command submission.
> + *
> + * \param   dev- \c [in] Device handle.
> + *See #amdgpu_device_initialize()
> + * \param   number_of_buffers  - \c [in] Number of BOs in the list
> + * \param   buffers- \c [in] List of BO handles
> + * \param   result - \c [out] Created BO list handle
> + *
> + * \return   0 on success\n
> + *  <0 - Negative POSIX Error code
> + *
> + * \sa amdgpu_bo_list_destroy_raw()
> +*/
> +int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
> + uint32_t number_of_buffers,
> + struct drm_amdgpu_bo_list_entry *buffers,
> + uint32_t *result);

So AFAIU  drm_amdgpu_bo_list_entry takes a raw bo handle while we
never get a raw bo handle from libdrm_amdgpu. How are we supposed to
fill it in?
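
(One way this could be filled in today, sketched under the assumption that
the KMS handle returned by amdgpu_bo_export() with amdgpu_bo_handle_type_kms
is what the kernel expects in bo_handle; this is an illustration, not
necessarily the usage the patch intends.)

#include <stdint.h>
#include <amdgpu.h>
#include <amdgpu_drm.h>

static int fill_raw_entry(amdgpu_bo_handle bo, uint32_t priority,
                          struct drm_amdgpu_bo_list_entry *entry)
{
        uint32_t kms_handle;
        int r;

        r = amdgpu_bo_export(bo, amdgpu_bo_handle_type_kms, &kms_handle);
        if (r)
                return r;

        entry->bo_handle = kms_handle;
        entry->bo_priority = priority;
        return 0;
}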

What do we win by having the raw handle for the bo_list? If we would
not return the raw handle we would not need 

Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-10 Thread Marek Olšák
On Thu, Jan 10, 2019, 6:51 AM Christian König <
ckoenig.leichtzumer...@gmail.com wrote:

> Am 10.01.19 um 12:41 schrieb Marek Olšák:
>
>
>
> On Thu, Jan 10, 2019, 4:15 AM Koenig, Christian  wrote:
>
>> Am 10.01.19 um 00:39 schrieb Marek Olšák:
>>
>> On Wed, Jan 9, 2019 at 1:41 PM Christian König <
>> ckoenig.leichtzumer...@gmail.com> wrote:
>>
>>> Am 09.01.19 um 17:14 schrieb Marek Olšák:
>>>
>>> On Wed, Jan 9, 2019 at 8:09 AM Christian König <
>>> ckoenig.leichtzumer...@gmail.com> wrote:
>>>
 Am 09.01.19 um 13:36 schrieb Marek Olšák:



 On Wed, Jan 9, 2019, 5:28 AM Christian König <
 ckoenig.leichtzumer...@gmail.com wrote:

> Looks good, but I'm wondering what's the actual improvement?
>

 No malloc calls and 1 less for loop copying the bo list.


 Yeah, but didn't we want to get completely rid of the bo list?

>>>
>>> If we have multiple IBs (e.g. gfx + compute) that share a BO list, I
>>> think it's faster to send the BO list to the kernel only once.
>>>
>>>
>>> That's not really faster.
>>>
>>> The only thing we save is a single loop over all BOs to look up the
>>> handle into a pointer, and that is only a tiny fraction of the overhead.
>>>
>>> The majority of the overhead is locking the BOs and reserving space for
>>> the submission.
>>>
>>> What could really help here is to submit gfx+compute together in just one
>>> CS IOCTL. This way we would need the locking and space reservation only
>>> once.
>>>
>>> It's a bit of work in the kernel side, but certainly doable.
>>>
>>
>> OK. Any objections to this patch?
>>
>>
>> In general I'm wondering if we couldn't avoid adding so much new
>> interface.
>>
>
> There are Vulkan drivers that still use the bo_list interface.
>
>
>> For example we can avoid the malloc() when we just cache the last freed
>> bo_list structure in the device. We would just need an atomic pointer
>> exchange operation for that.
>>
>
>> This way we even don't need to change mesa at all.
>>
>
> There is still the for loop that we need to get rid of.
>
>
> Yeah, but that I'm fine to handle with a amdgpu_bo_list_create_raw which
> only takes the handles and still returns the amdgpu_bo_list structure we
> are used to.
>
> See what I'm mostly concerned about is having another CS function to
> maintain.
>

There is no maintenance cost. It's just a wrapper. Eventually all drivers
will switch to it.

Marek


>
>
>> Regarding optimization, this chunk can be replaced by a cast on 64bit:
>>
>> +chunk_array = alloca(sizeof(uint64_t) * num_chunks);
>> +for (i = 0; i < num_chunks; i++)
>> +chunk_array[i] = (uint64_t)(uintptr_t)&chunks[i];
>>
>> It can't. The input is an array of structures. The ioctl takes an array
> of pointers.
>
>
> Ah! Haven't seen this, sorry for the noise.
>
> Christian.
>
>
> Marek
>
>
>> Regards,
>> Christian.
>>
>>
>> Thanks,
>> Marek
>>
>>
>>
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>
>
>
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-10 Thread Christian König

On 10.01.19 at 12:41, Marek Olšák wrote:



On Thu, Jan 10, 2019, 4:15 AM Koenig, Christian wrote:


On 10.01.19 at 00:39, Marek Olšák wrote:

On Wed, Jan 9, 2019 at 1:41 PM Christian König wrote:

On 09.01.19 at 17:14, Marek Olšák wrote:

On Wed, Jan 9, 2019 at 8:09 AM Christian König wrote:

On 09.01.19 at 13:36, Marek Olšák wrote:



On Wed, Jan 9, 2019, 5:28 AM Christian König wrote:

Looks good, but I'm wondering what's the actual
improvement?


No malloc calls and 1 less for loop copying the bo list.


Yeah, but didn't we want to get completely rid of the bo
list?


If we have multiple IBs (e.g. gfx + compute) that share a BO
list, I think it's faster to send the BO list to the kernel
only once.


That's not really faster.

The only thing we save is a single loop over all BOs to
look up the handle into a pointer, and that is only a tiny
fraction of the overhead.

The majority of the overhead is locking the BOs and reserving
space for the submission.

What could really help here is to submit gfx+compute together
in just one CS IOCTL. This way we would need the locking and
space reservation only once.

It's a bit of work in the kernel side, but certainly doable.


OK. Any objections to this patch?


In general I'm wondering if we couldn't avoid adding so much new
interface.


There are Vulkan drivers that still use the bo_list interface.


For example we can avoid the malloc() when we just cache the last
freed bo_list structure in the device. We would just need an
atomic pointer exchange operation for that.


This way we even don't need to change mesa at all.


There is still the for loop that we need to get rid of.


Yeah, but I'm fine with handling that via an amdgpu_bo_list_create_raw which
only takes the handles and still returns the amdgpu_bo_list structure we
are used to.


See what I'm mostly concerned about is having another CS function to 
maintain.





Regarding optimization, this chunk can be replaced by a cast on 64bit:

+   chunk_array = alloca(sizeof(uint64_t) * num_chunks);
+   for (i = 0; i < num_chunks; i++)
+   chunk_array[i] = (uint64_t)(uintptr_t)&chunks[i];


It can't. The input is an array of structures. The ioctl takes an 
array of pointers.


Ah! Haven't seen this, sorry for the noise.

Christian.



Marek


Regards,
Christian.



Thanks,
Marek



___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-10 Thread Marek Olšák
On Thu, Jan 10, 2019, 4:15 AM Koenig, Christian wrote:

> Am 10.01.19 um 00:39 schrieb Marek Olšák:
>
> On Wed, Jan 9, 2019 at 1:41 PM Christian König <
> ckoenig.leichtzumer...@gmail.com> wrote:
>
>> Am 09.01.19 um 17:14 schrieb Marek Olšák:
>>
>> On Wed, Jan 9, 2019 at 8:09 AM Christian König <
>> ckoenig.leichtzumer...@gmail.com> wrote:
>>
>>> Am 09.01.19 um 13:36 schrieb Marek Olšák:
>>>
>>>
>>>
>>> On Wed, Jan 9, 2019, 5:28 AM Christian König <
>>> ckoenig.leichtzumer...@gmail.com wrote:
>>>
 Looks good, but I'm wondering what's the actual improvement?

>>>
>>> No malloc calls and 1 less for loop copying the bo list.
>>>
>>>
>>> Yeah, but didn't we want to get completely rid of the bo list?
>>>
>>
>> If we have multiple IBs (e.g. gfx + compute) that share a BO list, I
>> think it's faster to send the BO list to the kernel only once.
>>
>>
>> That's not really faster.
>>
>> The only thing we save is a single loop over all BOs to look up the
>> handle into a pointer, and that is only a tiny fraction of the overhead.
>>
>> The majority of the overhead is locking the BOs and reserving space for
>> the submission.
>>
>> What could really help here is to submit gfx+compute together in just one
>> CS IOCTL. This way we would need the locking and space reservation only
>> once.
>>
>> It's a bit of work in the kernel side, but certainly doable.
>>
>
> OK. Any objections to this patch?
>
>
> In general I'm wondering if we couldn't avoid adding so much new interface.
>

There are Vulkan drivers that still use the bo_list interface.


> For example we can avoid the malloc() when we just cache the last freed
> bo_list structure in the device. We would just need an atomic pointer
> exchange operation for that.
>

> This way we even don't need to change mesa at all.
>

There is still the for loop that we need to get rid of.


> Regarding optimization, this chunk can be replaced by a cast on 64bit:
>
> + chunk_array = alloca(sizeof(uint64_t) * num_chunks);
> + for (i = 0; i < num_chunks; i++)
> + chunk_array[i] = (uint64_t)(uintptr_t)&chunks[i];
>
> It can't. The input is an array of structures. The ioctl takes an array of
pointers.
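
(Roughly what the submit path has to do, along the lines of the existing
amdgpu_cs_submit_raw; a sketch that assumes the uapi types from
amdgpu_drm.h and is only meant to illustrate why the per-chunk loop is
needed.)

#include <stdint.h>
#include <alloca.h>
#include <xf86drm.h>
#include <amdgpu_drm.h>

static int submit_chunks(int fd, uint32_t ctx_id, uint32_t bo_list_handle,
                         struct drm_amdgpu_cs_chunk *chunks,
                         uint32_t num_chunks, uint64_t *seq_no)
{
        union drm_amdgpu_cs cs = {0};
        uint64_t *chunk_array = alloca(sizeof(uint64_t) * num_chunks);
        uint32_t i;
        int r;

        /* The ioctl wants an array of userspace addresses, one per chunk,
         * not the chunk structures themselves - hence the loop. */
        for (i = 0; i < num_chunks; i++)
                chunk_array[i] = (uint64_t)(uintptr_t)&chunks[i];

        cs.in.ctx_id = ctx_id;
        cs.in.bo_list_handle = bo_list_handle;
        cs.in.num_chunks = num_chunks;
        cs.in.chunks = (uint64_t)(uintptr_t)chunk_array;

        r = drmCommandWriteRead(fd, DRM_AMDGPU_CS, &cs, sizeof(cs));
        if (!r && seq_no)
                *seq_no = cs.out.handle;
        return r;
}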

Marek


> Regards,
> Christian.
>
>
> Thanks,
> Marek
>
>
>
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-10 Thread Koenig, Christian
On 10.01.19 at 00:39, Marek Olšák wrote:
On Wed, Jan 9, 2019 at 1:41 PM Christian König wrote:
On 09.01.19 at 17:14, Marek Olšák wrote:
On Wed, Jan 9, 2019 at 8:09 AM Christian König wrote:
On 09.01.19 at 13:36, Marek Olšák wrote:


On Wed, Jan 9, 2019, 5:28 AM Christian König wrote:
Looks good, but I'm wondering what's the actual improvement?

No malloc calls and 1 less for loop copying the bo list.

Yeah, but didn't we want to get completely rid of the bo list?

If we have multiple IBs (e.g. gfx + compute) that share a BO list, I think it's 
faster to send the BO list to the kernel only once.

That's not really faster.

The only thing we save is a single loop over all BOs to look up the handle
into a pointer, and that is only a tiny fraction of the overhead.

The majority of the overhead is locking the BOs and reserving space for the 
submission.

What could really help here is to submit gfx+compute together in just one CS
IOCTL. This way we would need the locking and space reservation only once.

It's a bit of work in the kernel side, but certainly doable.

OK. Any objections to this patch?

In general I'm wondering if we couldn't avoid adding so many new interfaces.

For example we can avoid the malloc() when we just cache the last freed bo_list 
structure in the device. We would just need an atomic pointer exchange 
operation for that.

This way we wouldn't even need to change Mesa at all.

Regarding optimization, this chunk can be replaced by a cast on 64bit:

+   chunk_array = alloca(sizeof(uint64_t) * num_chunks);
+   for (i = 0; i < num_chunks; i++)
+   chunk_array[i] = (uint64_t)(uintptr_t)&chunks[i];

Regards,
Christian.


Thanks,
Marek

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-09 Thread Marek Olšák
On Wed, Jan 9, 2019 at 1:41 PM Christian König <
ckoenig.leichtzumer...@gmail.com> wrote:

> Am 09.01.19 um 17:14 schrieb Marek Olšák:
>
> On Wed, Jan 9, 2019 at 8:09 AM Christian König <
> ckoenig.leichtzumer...@gmail.com> wrote:
>
>> Am 09.01.19 um 13:36 schrieb Marek Olšák:
>>
>>
>>
>> On Wed, Jan 9, 2019, 5:28 AM Christian König <
>> ckoenig.leichtzumer...@gmail.com wrote:
>>
>>> Looks good, but I'm wondering what's the actual improvement?
>>>
>>
>> No malloc calls and 1 less for loop copying the bo list.
>>
>>
>> Yeah, but didn't we want to get completely rid of the bo list?
>>
>
> If we have multiple IBs (e.g. gfx + compute) that share a BO list, I think
> it's faster to send the BO list to the kernel only once.
>
>
> That's not really faster.
>
> The only thing we save is a single loop over all BOs to look up the
> handle into a pointer, and that is only a tiny fraction of the overhead.
>
> The majority of the overhead is locking the BOs and reserving space for
> the submission.
>
> What could really help here is to submit gfx+compute together in just one
> CS IOCTL. This way we would need the locking and space reservation only
> once.
>
> It's a bit of work on the kernel side, but certainly doable.
>

OK. Any objections to this patch?

Thanks,
Marek
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-09 Thread Christian König

On 09.01.19 at 17:14, Marek Olšák wrote:
On Wed, Jan 9, 2019 at 8:09 AM Christian König <ckoenig.leichtzumer...@gmail.com> wrote:


On 09.01.19 at 13:36, Marek Olšák wrote:



On Wed, Jan 9, 2019, 5:28 AM Christian König <ckoenig.leichtzumer...@gmail.com> wrote:

Looks good, but I'm wondering what's the actual improvement?


No malloc calls and 1 less for loop copying the bo list.


Yeah, but didn't we want to get completely rid of the bo list?


If we have multiple IBs (e.g. gfx + compute) that share a BO list, I 
think it's faster to send the BO list to the kernel only once.


That's not really faster.

The only thing we save is a single loop over all BOs to look up the 
handle into a pointer, and that is only a tiny fraction of the overhead.


The majority of the overhead is locking the BOs and reserving space for 
the submission.


What could really help here is to submit gfx+compute together in just one 
CS IOCTL. This way we would need the locking and space reservation only 
once.


It's a bit of work on the kernel side, but certainly doable.

Christian.



Marek

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx




Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-09 Thread Christian König

On 09.01.19 at 13:36, Marek Olšák wrote:



On Wed, Jan 9, 2019, 5:28 AM Christian König <ckoenig.leichtzumer...@gmail.com> wrote:


Looks good, but I'm wondering what's the actual improvement?


No malloc calls and 1 less for loop copying the bo list.


Yeah, but didn't we want to get completely rid of the bo list?

Christian.



Marek


Christian.

On 07.01.19 at 20:31, Marek Olšák wrote:
> From: Marek Olšák <marek.ol...@amd.com>
>
> ---
>   amdgpu/amdgpu-symbol-check |  3 ++
>   amdgpu/amdgpu.h            | 56 +-
>   amdgpu/amdgpu_bo.c         | 36 
>   amdgpu/amdgpu_cs.c         | 25 +
>   4 files changed, 119 insertions(+), 1 deletion(-)
>
> diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
> index 6f5e0f95..96a44b40 100755
> --- a/amdgpu/amdgpu-symbol-check
> +++ b/amdgpu/amdgpu-symbol-check
> @@ -12,20 +12,22 @@ _edata
>   _end
>   _fini
>   _init
>   amdgpu_bo_alloc
>   amdgpu_bo_cpu_map
>   amdgpu_bo_cpu_unmap
>   amdgpu_bo_export
>   amdgpu_bo_free
>   amdgpu_bo_import
>   amdgpu_bo_inc_ref
> +amdgpu_bo_list_create_raw
> +amdgpu_bo_list_destroy_raw
>   amdgpu_bo_list_create
>   amdgpu_bo_list_destroy
>   amdgpu_bo_list_update
>   amdgpu_bo_query_info
>   amdgpu_bo_set_metadata
>   amdgpu_bo_va_op
>   amdgpu_bo_va_op_raw
>   amdgpu_bo_wait_for_idle
>   amdgpu_create_bo_from_user_mem
>   amdgpu_cs_chunk_fence_info_to_data
> @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
>   amdgpu_cs_destroy_syncobj
>   amdgpu_cs_export_syncobj
>   amdgpu_cs_fence_to_handle
>   amdgpu_cs_import_syncobj
>   amdgpu_cs_query_fence_status
>   amdgpu_cs_query_reset_state
>   amdgpu_query_sw_info
>   amdgpu_cs_signal_semaphore
>   amdgpu_cs_submit
>   amdgpu_cs_submit_raw
> +amdgpu_cs_submit_raw2
>   amdgpu_cs_syncobj_export_sync_file
>   amdgpu_cs_syncobj_import_sync_file
>   amdgpu_cs_syncobj_reset
>   amdgpu_cs_syncobj_signal
>   amdgpu_cs_syncobj_wait
>   amdgpu_cs_wait_fences
>   amdgpu_cs_wait_semaphore
>   amdgpu_device_deinitialize
>   amdgpu_device_initialize
>   amdgpu_find_bo_by_cpu_mapping
> diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
> index dc51659a..5b800033 100644
> --- a/amdgpu/amdgpu.h
> +++ b/amdgpu/amdgpu.h
> @@ -35,20 +35,21 @@
>   #define _AMDGPU_H_
>
>   #include 
>   #include 
>
>   #ifdef __cplusplus
>   extern "C" {
>   #endif
>
>   struct drm_amdgpu_info_hw_ip;
> +struct drm_amdgpu_bo_list_entry;
>
> /*--*/
>   /* --- Defines  */
> /*--*/
>
>   /**
>    * Define max. number of Command Buffers (IB) which could be
sent to the single
>    * hardware IP to accommodate CE/DE requirements
>    *
>    * \sa amdgpu_cs_ib_info
> @@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle
buf_handle);
>    *                            and no GPU access is scheduled.
>    *                          1 GPU access is in fly or scheduled
>    *
>    * \return   0 - on success
>    *          <0 - Negative POSIX Error code
>    */
>   int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
>                           uint64_t timeout_ns,
>                           bool *buffer_busy);
>
> +/**
> + * Creates a BO list handle for command submission.
> + *
> + * \param   dev                      - \c [in] Device handle.
> + *                              See #amdgpu_device_initialize()
> + * \param   number_of_buffers        - \c [in] Number of BOs in
the list
> + * \param   buffers          - \c [in] List of BO handles
> + * \param   result           - \c [out] Created BO list handle
> + *
> + * \return   0 on success\n
> + *          <0 - Negative POSIX Error code
> + *
> + * \sa amdgpu_bo_list_destroy_raw()
> +*/
> +int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
> +                           uint32_t number_of_buffers,
> +                           struct drm_amdgpu_bo_list_entry
*buffers,
> +                           uint32_t *result);
> +
> +/**
> + * Destroys a BO list handle.
> + *
> + * \param   bo_list  - \c [in] BO list handle.
> + *
> + * \return   0 on success\n
> + *          <0 - Negative POSIX Error code
> + *
> + * \sa amdgpu_bo_list_create_raw(), amdgpu_cs_submit_raw2()
> +*/
> +int amdgpu_bo_list_destroy_raw(amdgpu_device_handle 

Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-09 Thread Marek Olšák
On Wed, Jan 9, 2019, 5:28 AM Christian König <
ckoenig.leichtzumer...@gmail.com wrote:

> Looks good, but I'm wondering what's the actual improvement?
>

No malloc calls and 1 less for loop copying the bo list.

Marek


> Christian.
>
> On 07.01.19 at 20:31, Marek Olšák wrote:
> > From: Marek Olšák 
> >
> > ---
> >   amdgpu/amdgpu-symbol-check |  3 ++
> >   amdgpu/amdgpu.h| 56 +-
> >   amdgpu/amdgpu_bo.c | 36 
> >   amdgpu/amdgpu_cs.c | 25 +
> >   4 files changed, 119 insertions(+), 1 deletion(-)
> >
> > diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
> > index 6f5e0f95..96a44b40 100755
> > --- a/amdgpu/amdgpu-symbol-check
> > +++ b/amdgpu/amdgpu-symbol-check
> > @@ -12,20 +12,22 @@ _edata
> >   _end
> >   _fini
> >   _init
> >   amdgpu_bo_alloc
> >   amdgpu_bo_cpu_map
> >   amdgpu_bo_cpu_unmap
> >   amdgpu_bo_export
> >   amdgpu_bo_free
> >   amdgpu_bo_import
> >   amdgpu_bo_inc_ref
> > +amdgpu_bo_list_create_raw
> > +amdgpu_bo_list_destroy_raw
> >   amdgpu_bo_list_create
> >   amdgpu_bo_list_destroy
> >   amdgpu_bo_list_update
> >   amdgpu_bo_query_info
> >   amdgpu_bo_set_metadata
> >   amdgpu_bo_va_op
> >   amdgpu_bo_va_op_raw
> >   amdgpu_bo_wait_for_idle
> >   amdgpu_create_bo_from_user_mem
> >   amdgpu_cs_chunk_fence_info_to_data
> > @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
> >   amdgpu_cs_destroy_syncobj
> >   amdgpu_cs_export_syncobj
> >   amdgpu_cs_fence_to_handle
> >   amdgpu_cs_import_syncobj
> >   amdgpu_cs_query_fence_status
> >   amdgpu_cs_query_reset_state
> >   amdgpu_query_sw_info
> >   amdgpu_cs_signal_semaphore
> >   amdgpu_cs_submit
> >   amdgpu_cs_submit_raw
> > +amdgpu_cs_submit_raw2
> >   amdgpu_cs_syncobj_export_sync_file
> >   amdgpu_cs_syncobj_import_sync_file
> >   amdgpu_cs_syncobj_reset
> >   amdgpu_cs_syncobj_signal
> >   amdgpu_cs_syncobj_wait
> >   amdgpu_cs_wait_fences
> >   amdgpu_cs_wait_semaphore
> >   amdgpu_device_deinitialize
> >   amdgpu_device_initialize
> >   amdgpu_find_bo_by_cpu_mapping
> > diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
> > index dc51659a..5b800033 100644
> > --- a/amdgpu/amdgpu.h
> > +++ b/amdgpu/amdgpu.h
> > @@ -35,20 +35,21 @@
> >   #define _AMDGPU_H_
> >
> >   #include 
> >   #include 
> >
> >   #ifdef __cplusplus
> >   extern "C" {
> >   #endif
> >
> >   struct drm_amdgpu_info_hw_ip;
> > +struct drm_amdgpu_bo_list_entry;
> >
> >
> > /*--*/
> >   /* --- Defines  */
> > /*--*/
> >
> >   /**
> >* Define max. number of Command Buffers (IB) which could be sent to
> the single
> >* hardware IP to accommodate CE/DE requirements
> >*
> >* \sa amdgpu_cs_ib_info
> > @@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle
> buf_handle);
> >*and no GPU access is scheduled.
> >*  1 GPU access is in fly or scheduled
> >*
> >* \return   0 - on success
> >*  <0 - Negative POSIX Error code
> >*/
> >   int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
> >   uint64_t timeout_ns,
> >   bool *buffer_busy);
> >
> > +/**
> > + * Creates a BO list handle for command submission.
> > + *
> > + * \param   dev  - \c [in] Device handle.
> > + *  See #amdgpu_device_initialize()
> > + * \param   number_of_buffers- \c [in] Number of BOs in the list
> > + * \param   buffers  - \c [in] List of BO handles
> > + * \param   result   - \c [out] Created BO list handle
> > + *
> > + * \return   0 on success\n
> > + *  <0 - Negative POSIX Error code
> > + *
> > + * \sa amdgpu_bo_list_destroy_raw()
> > +*/
> > +int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
> > +   uint32_t number_of_buffers,
> > +   struct drm_amdgpu_bo_list_entry *buffers,
> > +   uint32_t *result);
> > +
> > +/**
> > + * Destroys a BO list handle.
> > + *
> > + * \param   bo_list  - \c [in] BO list handle.
> > + *
> > + * \return   0 on success\n
> > + *  <0 - Negative POSIX Error code
> > + *
> > + * \sa amdgpu_bo_list_create_raw(), amdgpu_cs_submit_raw2()
> > +*/
> > +int amdgpu_bo_list_destroy_raw(amdgpu_device_handle dev, uint32_t
> bo_list);
> > +
> >   /**
> >* Creates a BO list handle for command submission.
> >*
> >* \param   dev - \c [in] Device handle.
> >* See #amdgpu_device_initialize()
> >* \param   number_of_resources - \c [in] Number of BOs in the list
> >* \param   resources   - \c [in] List of BO handles
> >* \param   resource_prios  - \c 

Re: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-09 Thread Christian König

Looks good, but I'm wondering what's the actual improvement?

Christian.

On 07.01.19 at 20:31, Marek Olšák wrote:

From: Marek Olšák 

---
  amdgpu/amdgpu-symbol-check |  3 ++
  amdgpu/amdgpu.h| 56 +-
  amdgpu/amdgpu_bo.c | 36 
  amdgpu/amdgpu_cs.c | 25 +
  4 files changed, 119 insertions(+), 1 deletion(-)

diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
index 6f5e0f95..96a44b40 100755
--- a/amdgpu/amdgpu-symbol-check
+++ b/amdgpu/amdgpu-symbol-check
@@ -12,20 +12,22 @@ _edata
  _end
  _fini
  _init
  amdgpu_bo_alloc
  amdgpu_bo_cpu_map
  amdgpu_bo_cpu_unmap
  amdgpu_bo_export
  amdgpu_bo_free
  amdgpu_bo_import
  amdgpu_bo_inc_ref
+amdgpu_bo_list_create_raw
+amdgpu_bo_list_destroy_raw
  amdgpu_bo_list_create
  amdgpu_bo_list_destroy
  amdgpu_bo_list_update
  amdgpu_bo_query_info
  amdgpu_bo_set_metadata
  amdgpu_bo_va_op
  amdgpu_bo_va_op_raw
  amdgpu_bo_wait_for_idle
  amdgpu_create_bo_from_user_mem
  amdgpu_cs_chunk_fence_info_to_data
@@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
  amdgpu_cs_destroy_syncobj
  amdgpu_cs_export_syncobj
  amdgpu_cs_fence_to_handle
  amdgpu_cs_import_syncobj
  amdgpu_cs_query_fence_status
  amdgpu_cs_query_reset_state
  amdgpu_query_sw_info
  amdgpu_cs_signal_semaphore
  amdgpu_cs_submit
  amdgpu_cs_submit_raw
+amdgpu_cs_submit_raw2
  amdgpu_cs_syncobj_export_sync_file
  amdgpu_cs_syncobj_import_sync_file
  amdgpu_cs_syncobj_reset
  amdgpu_cs_syncobj_signal
  amdgpu_cs_syncobj_wait
  amdgpu_cs_wait_fences
  amdgpu_cs_wait_semaphore
  amdgpu_device_deinitialize
  amdgpu_device_initialize
  amdgpu_find_bo_by_cpu_mapping
diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
index dc51659a..5b800033 100644
--- a/amdgpu/amdgpu.h
+++ b/amdgpu/amdgpu.h
@@ -35,20 +35,21 @@
  #define _AMDGPU_H_
  
  #include 

  #include 
  
  #ifdef __cplusplus

  extern "C" {
  #endif
  
  struct drm_amdgpu_info_hw_ip;

+struct drm_amdgpu_bo_list_entry;
  
  /*--*/

  /* --- Defines  */
  /*--*/
  
  /**

   * Define max. number of Command Buffers (IB) which could be sent to the 
single
   * hardware IP to accommodate CE/DE requirements
   *
   * \sa amdgpu_cs_ib_info
@@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle buf_handle);
   *and no GPU access is scheduled.
   *  1 GPU access is in fly or scheduled
   *
   * \return   0 - on success
   *  <0 - Negative POSIX Error code
   */
  int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
uint64_t timeout_ns,
bool *buffer_busy);
  
+/**

+ * Creates a BO list handle for command submission.
+ *
+ * \param   dev- \c [in] Device handle.
+ *See #amdgpu_device_initialize()
+ * \param   number_of_buffers  - \c [in] Number of BOs in the list
+ * \param   buffers- \c [in] List of BO handles
+ * \param   result - \c [out] Created BO list handle
+ *
+ * \return   0 on success\n
+ *  <0 - Negative POSIX Error code
+ *
+ * \sa amdgpu_bo_list_destroy_raw()
+*/
+int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
+ uint32_t number_of_buffers,
+ struct drm_amdgpu_bo_list_entry *buffers,
+ uint32_t *result);
+
+/**
+ * Destroys a BO list handle.
+ *
+ * \param   bo_list- \c [in] BO list handle.
+ *
+ * \return   0 on success\n
+ *  <0 - Negative POSIX Error code
+ *
+ * \sa amdgpu_bo_list_create_raw(), amdgpu_cs_submit_raw2()
+*/
+int amdgpu_bo_list_destroy_raw(amdgpu_device_handle dev, uint32_t bo_list);
+
  /**
   * Creates a BO list handle for command submission.
   *
   * \param   dev   - \c [in] Device handle.
   *   See #amdgpu_device_initialize()
   * \param   number_of_resources   - \c [in] Number of BOs in the list
   * \param   resources - \c [in] List of BO handles
   * \param   resource_prios- \c [in] Optional priority for each handle
   * \param   result- \c [out] Created BO list handle
   *
   * \return   0 on success\n
   *  <0 - Negative POSIX Error code
   *
- * \sa amdgpu_bo_list_destroy()
+ * \sa amdgpu_bo_list_destroy(), amdgpu_cs_submit_raw2()
  */
  int amdgpu_bo_list_create(amdgpu_device_handle dev,
  uint32_t number_of_resources,
  amdgpu_bo_handle *resources,
  uint8_t *resource_prios,
  amdgpu_bo_list_handle *result);
  
  /**

   * Destroys a BO list handle.
   *
@@ -1580,20 +1612,42 @@ struct drm_amdgpu_cs_chunk;
  struct 

RE: [PATCH libdrm] amdgpu: add a faster BO list API

2019-01-07 Thread Zhou, David(ChunMing)
Looks good to me, Reviewed-by: Chunming Zhou 

> -Original Message-
> From: amd-gfx  On Behalf Of
> Marek Ol?ák
> Sent: Tuesday, January 08, 2019 3:31 AM
> To: amd-gfx@lists.freedesktop.org
> Subject: [PATCH libdrm] amdgpu: add a faster BO list API
> 
> From: Marek Olšák 
> 
> ---
>  amdgpu/amdgpu-symbol-check |  3 ++
>  amdgpu/amdgpu.h| 56
> +-
>  amdgpu/amdgpu_bo.c | 36 
>  amdgpu/amdgpu_cs.c | 25 +
>  4 files changed, 119 insertions(+), 1 deletion(-)
> 
> diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-
> check index 6f5e0f95..96a44b40 100755
> --- a/amdgpu/amdgpu-symbol-check
> +++ b/amdgpu/amdgpu-symbol-check
> @@ -12,20 +12,22 @@ _edata
>  _end
>  _fini
>  _init
>  amdgpu_bo_alloc
>  amdgpu_bo_cpu_map
>  amdgpu_bo_cpu_unmap
>  amdgpu_bo_export
>  amdgpu_bo_free
>  amdgpu_bo_import
>  amdgpu_bo_inc_ref
> +amdgpu_bo_list_create_raw
> +amdgpu_bo_list_destroy_raw
>  amdgpu_bo_list_create
>  amdgpu_bo_list_destroy
>  amdgpu_bo_list_update
>  amdgpu_bo_query_info
>  amdgpu_bo_set_metadata
>  amdgpu_bo_va_op
>  amdgpu_bo_va_op_raw
>  amdgpu_bo_wait_for_idle
>  amdgpu_create_bo_from_user_mem
>  amdgpu_cs_chunk_fence_info_to_data
> @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
> amdgpu_cs_destroy_syncobj  amdgpu_cs_export_syncobj
> amdgpu_cs_fence_to_handle  amdgpu_cs_import_syncobj
> amdgpu_cs_query_fence_status  amdgpu_cs_query_reset_state
> amdgpu_query_sw_info  amdgpu_cs_signal_semaphore
> amdgpu_cs_submit  amdgpu_cs_submit_raw
> +amdgpu_cs_submit_raw2
>  amdgpu_cs_syncobj_export_sync_file
>  amdgpu_cs_syncobj_import_sync_file
>  amdgpu_cs_syncobj_reset
>  amdgpu_cs_syncobj_signal
>  amdgpu_cs_syncobj_wait
>  amdgpu_cs_wait_fences
>  amdgpu_cs_wait_semaphore
>  amdgpu_device_deinitialize
>  amdgpu_device_initialize
>  amdgpu_find_bo_by_cpu_mapping
> diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h index
> dc51659a..5b800033 100644
> --- a/amdgpu/amdgpu.h
> +++ b/amdgpu/amdgpu.h
> @@ -35,20 +35,21 @@
>  #define _AMDGPU_H_
> 
>  #include 
>  #include 
> 
>  #ifdef __cplusplus
>  extern "C" {
>  #endif
> 
>  struct drm_amdgpu_info_hw_ip;
> +struct drm_amdgpu_bo_list_entry;
> 
> /*--*/
>  /* --- Defines  */
> /*--*/
> 
>  /**
>   * Define max. number of Command Buffers (IB) which could be sent to the
> single
>   * hardware IP to accommodate CE/DE requirements
>   *
>   * \sa amdgpu_cs_ib_info
> @@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle
> buf_handle);
>   *and no GPU access is scheduled.
>   *  1 GPU access is in fly or scheduled
>   *
>   * \return   0 - on success
>   *  <0 - Negative POSIX Error code
>   */
>  int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
>   uint64_t timeout_ns,
>   bool *buffer_busy);
> 
> +/**
> + * Creates a BO list handle for command submission.
> + *
> + * \param   dev  - \c [in] Device handle.
> + *  See #amdgpu_device_initialize()
> + * \param   number_of_buffers- \c [in] Number of BOs in the list
> + * \param   buffers  - \c [in] List of BO handles
> + * \param   result   - \c [out] Created BO list handle
> + *
> + * \return   0 on success\n
> + *  <0 - Negative POSIX Error code
> + *
> + * \sa amdgpu_bo_list_destroy_raw()
> +*/
> +int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
> +   uint32_t number_of_buffers,
> +   struct drm_amdgpu_bo_list_entry *buffers,
> +   uint32_t *result);
> +
> +/**
> + * Destroys a BO list handle.
> + *
> + * \param   bo_list  - \c [in] BO list handle.
> + *
> + * \return   0 on success\n
> + *  <0 - Negative POSIX Error code
> + *
> + * \sa amdgpu_bo_list_create_raw(), amdgpu_cs_submit_raw2() */ int
> +amdgpu_bo_list_destroy_raw(amdgpu_device_handle dev, uint32_t
> bo_list);
> +
>  /**
>   * Creates a BO list handle for command submission.
>   *
>   * \param   dev  - \c [in] Device handle.
>   *  See #amdgpu_device_initialize()
>   * \param   number_of_resources  - \c [in] Number of BOs in the list
>   * \param   

[PATCH libdrm] amdgpu: add a faster BO list API

2019-01-07 Thread Marek Olšák
From: Marek Olšák 

---
 amdgpu/amdgpu-symbol-check |  3 ++
 amdgpu/amdgpu.h| 56 +-
 amdgpu/amdgpu_bo.c | 36 
 amdgpu/amdgpu_cs.c | 25 +
 4 files changed, 119 insertions(+), 1 deletion(-)

diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
index 6f5e0f95..96a44b40 100755
--- a/amdgpu/amdgpu-symbol-check
+++ b/amdgpu/amdgpu-symbol-check
@@ -12,20 +12,22 @@ _edata
 _end
 _fini
 _init
 amdgpu_bo_alloc
 amdgpu_bo_cpu_map
 amdgpu_bo_cpu_unmap
 amdgpu_bo_export
 amdgpu_bo_free
 amdgpu_bo_import
 amdgpu_bo_inc_ref
+amdgpu_bo_list_create_raw
+amdgpu_bo_list_destroy_raw
 amdgpu_bo_list_create
 amdgpu_bo_list_destroy
 amdgpu_bo_list_update
 amdgpu_bo_query_info
 amdgpu_bo_set_metadata
 amdgpu_bo_va_op
 amdgpu_bo_va_op_raw
 amdgpu_bo_wait_for_idle
 amdgpu_create_bo_from_user_mem
 amdgpu_cs_chunk_fence_info_to_data
@@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
 amdgpu_cs_destroy_syncobj
 amdgpu_cs_export_syncobj
 amdgpu_cs_fence_to_handle
 amdgpu_cs_import_syncobj
 amdgpu_cs_query_fence_status
 amdgpu_cs_query_reset_state
 amdgpu_query_sw_info
 amdgpu_cs_signal_semaphore
 amdgpu_cs_submit
 amdgpu_cs_submit_raw
+amdgpu_cs_submit_raw2
 amdgpu_cs_syncobj_export_sync_file
 amdgpu_cs_syncobj_import_sync_file
 amdgpu_cs_syncobj_reset
 amdgpu_cs_syncobj_signal
 amdgpu_cs_syncobj_wait
 amdgpu_cs_wait_fences
 amdgpu_cs_wait_semaphore
 amdgpu_device_deinitialize
 amdgpu_device_initialize
 amdgpu_find_bo_by_cpu_mapping
diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
index dc51659a..5b800033 100644
--- a/amdgpu/amdgpu.h
+++ b/amdgpu/amdgpu.h
@@ -35,20 +35,21 @@
 #define _AMDGPU_H_
 
 #include 
 #include 
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
 struct drm_amdgpu_info_hw_ip;
+struct drm_amdgpu_bo_list_entry;
 
 /*--*/
 /* --- Defines  */
 /*--*/
 
 /**
  * Define max. number of Command Buffers (IB) which could be sent to the single
  * hardware IP to accommodate CE/DE requirements
  *
  * \sa amdgpu_cs_ib_info
@@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle buf_handle);
  *and no GPU access is scheduled.
  *  1 GPU access is in fly or scheduled
  *
  * \return   0 - on success
  *  <0 - Negative POSIX Error code
  */
 int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
uint64_t timeout_ns,
bool *buffer_busy);
 
+/**
+ * Creates a BO list handle for command submission.
+ *
+ * \param   dev- \c [in] Device handle.
+ *See #amdgpu_device_initialize()
+ * \param   number_of_buffers  - \c [in] Number of BOs in the list
+ * \param   buffers- \c [in] List of BO handles
+ * \param   result - \c [out] Created BO list handle
+ *
+ * \return   0 on success\n
+ *  <0 - Negative POSIX Error code
+ *
+ * \sa amdgpu_bo_list_destroy_raw()
+*/
+int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
+ uint32_t number_of_buffers,
+ struct drm_amdgpu_bo_list_entry *buffers,
+ uint32_t *result);
+
+/**
+ * Destroys a BO list handle.
+ *
+ * \param   bo_list- \c [in] BO list handle.
+ *
+ * \return   0 on success\n
+ *  <0 - Negative POSIX Error code
+ *
+ * \sa amdgpu_bo_list_create_raw(), amdgpu_cs_submit_raw2()
+*/
+int amdgpu_bo_list_destroy_raw(amdgpu_device_handle dev, uint32_t bo_list);
+
 /**
  * Creates a BO list handle for command submission.
  *
  * \param   dev- \c [in] Device handle.
  *See #amdgpu_device_initialize()
  * \param   number_of_resources- \c [in] Number of BOs in the list
  * \param   resources  - \c [in] List of BO handles
  * \param   resource_prios - \c [in] Optional priority for each handle
  * \param   result - \c [out] Created BO list handle
  *
  * \return   0 on success\n
  *  <0 - Negative POSIX Error code
  *
- * \sa amdgpu_bo_list_destroy()
+ * \sa amdgpu_bo_list_destroy(), amdgpu_cs_submit_raw2()
 */
 int amdgpu_bo_list_create(amdgpu_device_handle dev,
  uint32_t number_of_resources,
  amdgpu_bo_handle *resources,
  uint8_t *resource_prios,
  amdgpu_bo_list_handle *result);
 
 /**
  * Destroys a BO list handle.
  *
@@ -1580,20 +1612,42 @@ struct drm_amdgpu_cs_chunk;
 struct drm_amdgpu_cs_chunk_dep;
 struct drm_amdgpu_cs_chunk_data;
 
 int amdgpu_cs_submit_raw(amdgpu_device_handle dev,
 amdgpu_context_handle context,
 amdgpu_bo_list_handle
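
For reference, a minimal usage sketch of the raw entry points added by this
patch (hypothetical helper; only amdgpu_bo_list_create_raw() and
amdgpu_bo_list_destroy_raw() are used, with the signatures quoted above, and
the amdgpu_cs_submit_raw2() call is left as an outline because its prototype
is truncated in this archive):

#include <stdint.h>
#include <amdgpu.h>
#include <amdgpu_drm.h>	/* struct drm_amdgpu_bo_list_entry */

/* Build a kernel BO list handle directly from drm_amdgpu_bo_list_entry
 * structs (no malloc, no extra copy), use it for one submission, then
 * destroy it again. */
static int submit_with_raw_bo_list(amdgpu_device_handle dev,
				   struct drm_amdgpu_bo_list_entry *entries,
				   uint32_t num_entries)
{
	uint32_t bo_list;	/* plain kernel handle, not amdgpu_bo_list_handle */
	int r;

	r = amdgpu_bo_list_create_raw(dev, num_entries, entries, &bo_list);
	if (r)
		return r;

	/* ... fill a drm_amdgpu_cs_chunk array and pass bo_list to
	 * amdgpu_cs_submit_raw2() here ... */

	return amdgpu_bo_list_destroy_raw(dev, bo_list);
}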