Re: [PATCH v4 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v4)

2018-08-22 Thread Huang Rui
On Wed, Aug 22, 2018 at 11:31:02AM +0800, Huang Rui wrote:
> On Tue, Aug 21, 2018 at 03:54:28PM +0200, Christian König wrote:
> > Am 21.08.2018 um 15:43 schrieb Huang Rui:
> > > [snip - quoted v4 patch and earlier review trimmed; see the original
> > > posting at the end of this thread]

Re: [PATCH v4 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v4)

2018-08-21 Thread Huang Rui
On Tue, Aug 21, 2018 at 03:54:28PM +0200, Christian König wrote:
> Am 21.08.2018 um 15:43 schrieb Huang Rui:
> > On Mon, Aug 20, 2018 at 09:17:12PM +0800, Christian König wrote:
> > > [snip - quoted v4 patch and earlier review trimmed; see the original
> > > posting at the end of this thread]

Re: [PATCH v4 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v4)

2018-08-21 Thread Christian König

Am 21.08.2018 um 15:43 schrieb Huang Rui:
> On Mon, Aug 20, 2018 at 09:17:12PM +0800, Christian König wrote:
> > [snip - quoted v4 patch and earlier review trimmed; see the original
> > posting at the end of this thread]

Re: [PATCH v4 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v4)

2018-08-21 Thread Huang Rui
On Mon, Aug 20, 2018 at 09:17:12PM +0800, Christian König wrote:
> Am 20.08.2018 um 08:05 schrieb Huang Rui:
> > On Fri, Aug 17, 2018 at 06:38:16PM +0800, Koenig, Christian wrote:
> > > [snip - quoted v4 patch and earlier review trimmed; see the original
> > > posting at the end of this thread]

Re: [PATCH v4 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v4)

2018-08-20 Thread Christian König

Am 20.08.2018 um 08:05 schrieb Huang Rui:
> On Fri, Aug 17, 2018 at 06:38:16PM +0800, Koenig, Christian wrote:
> > [snip - quoted v4 patch and earlier review trimmed; see the original
> > posting at the end of this thread]

Re: [PATCH v4 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v4)

2018-08-20 Thread Huang Rui
On Fri, Aug 17, 2018 at 06:38:16PM +0800, Koenig, Christian wrote:
> Am 17.08.2018 um 12:08 schrieb Huang Rui:
> > [snip - quoted v4 patch and review trimmed; see the original posting at
> > the end of this thread]

Re: [PATCH v4 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v4)

2018-08-17 Thread Christian König

Am 17.08.2018 um 12:08 schrieb Huang Rui:
> [snip - quoted commit message, test data and diffstat trimmed; see the
> original posting at the end of this thread]
>
> +static void amdgpu_cs_vm_move_on_lru(struct amdgpu_device *adev,
> +				     struct amdgpu_cs_parser *p)
> +{
> +	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> +	struct amdgpu_vm *vm = &fpriv->vm;
> +
> +	if (vm->validated)

That check belongs inside amdgpu_vm_move_to_lru_tail().
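
(For illustration: a self-contained sketch, with hypothetical simplified
types, of the shape this suggestion implies. The real function operates on
TTM LRU lists; the point here is only where the check lives.)

#include <stdbool.h>
#include <stdio.h>

struct amdgpu_vm {
	bool validated;		/* semantics as in the patch above */
};

/* With the check folded into the callee, the CS path can call this
 * unconditionally after every submission. */
static void amdgpu_vm_move_to_lru_tail(struct amdgpu_vm *vm)
{
	if (!vm->validated)
		return;		/* nothing to do, LRU block stays intact */

	/* stand-in for the real bulk move of the per-VM BOs */
	printf("bulk move per-VM BOs to the LRU tail\n");
}

int main(void)
{
	struct amdgpu_vm vm = { .validated = true };

	amdgpu_vm_move_to_lru_tail(&vm);	/* no if () at the call site */
	return 0;
}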


> +		amdgpu_vm_move_to_lru_tail(adev, vm);
> +}
> +
> [snip - amdgpu_cs_ioctl hunk and start of the amdgpu_vm.c diff trimmed;
> see the original posting at the end of this thread]
>
> +/**
> + * amdgpu_vm_move_to_lru_tail_by_list - move one list of BOs to end of LRU
> + *
> + * @vm: vm providing the BOs
> + * @list: the list that stores the BOs
> + *
> + * Move one list of BOs to the end of the LRU and update the positions.
> + */
> +static void
> +amdgpu_vm_move_to_lru_tail_by_list(struct amdgpu_vm *vm, struct list_head *list)

I don't see much of a point having a separate function for this any more.



[PATCH v4 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v4)

2018-08-17 Thread Huang Rui
I continue to work on the bulk moving, based on the proposal by Christian.

Background:
The amdgpu driver moves all PD/PT and PerVM BOs onto the idle list, then moves
each of them to the end of the LRU list one by one. This causes a large number
of individual BO moves on the LRU and seriously hurts performance.

Christian then provided a workaround that avoids moving PD/PT BOs on the LRU,
with the patch below:
"drm/amdgpu: band aid validating VM PTs"
Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae

However, the proper solution is to bulk move all PD/PT and PerVM BOs on the LRU
instead of moving them one by one.

Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
validated, we move all BOs together to the end of the LRU without dropping the
LRU lock.

While doing so we note the beginning and end of this block in the LRU list.
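
(For context, the "beginning and end of this block" is tracked with the
bulk-move helper structures added earlier in this series; a sketch of their
shape, assuming the variant merged into TTM:)

/* One block of BOs on an LRU, with its first and last entry noted. */
struct ttm_lru_bulk_move_pos {
	struct ttm_buffer_object *first;
	struct ttm_buffer_object *last;
};

/* One such position per resource type and priority; amdgpu keeps an
 * instance of this per VM. */
struct ttm_lru_bulk_move {
	struct ttm_lru_bulk_move_pos tt[TTM_MAX_BO_PRIORITY];
	struct ttm_lru_bulk_move_pos vram[TTM_MAX_BO_PRIORITY];
	struct ttm_lru_bulk_move_pos swap[TTM_MAX_BO_PRIORITY];
};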

Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
we don't move every BO one by one, but instead cut the LRU list into pieces so
that we bulk move everything to the end in just one operation.
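
(Illustration, not part of the patch: the one-operation move boils down to
cutting a [first, last] block out of the list and splicing it back in at the
tail. A self-contained sketch, assuming it mirrors the list_bulk_move_tail()
helper this series adds to <linux/list.h>:)

struct list_head {
	struct list_head *next, *prev;
};

/* Move the whole block [first, last] to the tail of @head with a constant
 * number of pointer updates, instead of one list_move_tail() per entry. */
static void list_bulk_move_tail(struct list_head *head,
				struct list_head *first,
				struct list_head *last)
{
	/* cut the block out of its current position */
	first->prev->next = last->next;
	last->next->prev = first->prev;

	/* splice it back in right before @head, i.e. at the list tail */
	head->prev->next = first;
	first->prev = head->prev;
	last->next = head;
	head->prev = last;
}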

Test data:
+--------------+-----------------+-----------+----------------------------------------+
|              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                   |
|              |Principle(Vulkan)|           |                                        |
+--------------+-----------------+-----------+----------------------------------------+
|              |                 |           |0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K)  |
| Original     |  147.7 FPS      |  76.86 us |0.307 ms(8K) 0.310 ms(16K)              |
+--------------+-----------------+-----------+----------------------------------------+
| Original + WA|                 |           |0.254 ms(1K) 0.241 ms(2K)               |
|(don't move   |  162.1 FPS      |  42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K) |
|PT BOs on LRU)|                 |           |                                        |
+--------------+-----------------+-----------+----------------------------------------+
| Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K)  |
|              |                 |           |0.214 ms(8K) 0.225 ms(16K)              |
+--------------+-----------------+-----------+----------------------------------------+

After testing with the three benchmarks above, covering Vulkan and OpenCL, we
can see a visible improvement over the original, and even better results than
the original with the workaround.

v2: move all BOs, including those on the idle, relocated, and moved lists, to
the end of the LRU and keep them together.
v3: remove an unused parameter and use list_for_each_entry instead of the safe
variant.
v4: move the amdgpu_vm_move_to_lru_tail call after command submission; at that
point all BOs are back on the idle list.

Signed-off-by: Christian König 
Signed-off-by: Huang Rui 
Tested-by: Mike Lothian 
Tested-by: Dieter Nützel 
Acked-by: Chunming Zhou 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 11 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 71 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 11 +-
 3 files changed, 75 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 502b94f..9fbdf02 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1260,6 +1260,16 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 	return 0;
 }
 
+static void amdgpu_cs_vm_move_on_lru(struct amdgpu_device *adev,
+				     struct amdgpu_cs_parser *p)
+{
+	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+	struct amdgpu_vm *vm = &fpriv->vm;
+
+	if (vm->validated)
+		amdgpu_vm_move_to_lru_tail(adev, vm);
+}
+
 int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 {
 	struct amdgpu_device *adev = dev->dev_private;
@@ -1310,6 +1320,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 
 	r = amdgpu_cs_submit(&parser, cs);
 
+	amdgpu_cs_vm_move_on_lru(adev, &parser);
 out:
 	amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
 	return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 9c84770..037cfbc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -268,6 +268,53 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
 }
 
 /**
+ * amdgpu_vm_move_to_lru_tail_by_list - move one list of BOs to end of LRU
+ *
+ * @vm: vm providing the BOs
+ * @list: the list that stores the BOs
+ *
+ * Move one list of BOs to the end of the LRU and update the positions.
+ */
+static void
+amdgpu_vm_move_to_lru_tail_by_list(struct amdgpu_vm *vm, struct list_head *list)
+{
+	struct amdgpu_vm_bo_base *bo_base;
+
+	list_for_each_entry(bo_base, list, vm_status) {
+		struct amdgpu_bo *bo = bo_base->bo;
+
+		if (!bo->parent)
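
(The archive cuts the hunk off here. Assuming the TTM bulk-move helper from
earlier in this series, the loop plausibly continues as sketched below; a
reconstruction for readability, not the verbatim v4 hunk:)

+			continue;
+
+		/* note the BO's block for the later one-operation move */
+		ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
+		if (bo->shadow)
+			ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
+						&vm->lru_bulk_move);
+	}
+}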