Re: [PATCH 11/11] drm/amdgpu: enable GTT PD/PT for raven

2018-08-22 Thread Zhang, Jerry (Junwei)

On 08/22/2018 11:05 PM, Christian König wrote:

Should work on Vega10 as well, but with an obvious performance hit.

Older APUs can be enabled as well, but will probably be more work.

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 11 ++-
  1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 928fdae0dab4..670a42729f88 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -308,6 +308,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, 
struct amdgpu_vm *vm,
list_move(&bo_base->vm_status, &vm->moved);
spin_unlock(&vm->moved_lock);
} else {
+   amdgpu_ttm_alloc_gart(&bo->tbo);
list_move(&bo_base->vm_status, &vm->relocated);
}
}
@@ -396,6 +397,10 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
if (r)
goto error;

+   r = amdgpu_ttm_alloc_gart(&bo->tbo);
+   if (r)
+   return r;
+
r = amdgpu_job_alloc_with_ib(adev, 64, &job);
if (r)
goto error;
@@ -461,7 +466,11 @@ static void amdgpu_vm_bo_param(struct amdgpu_device *adev, 
struct amdgpu_vm *vm,
bp->size = amdgpu_vm_bo_size(adev, level);
bp->byte_align = AMDGPU_GPU_PAGE_SIZE;
bp->domain = AMDGPU_GEM_DOMAIN_VRAM;
-   bp->flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
+   if (bp->size <= PAGE_SIZE && adev->asic_type == CHIP_RAVEN)


Do we need bp->size <= PAGE_SIZE?
It seems it's always less than 12 bits for Raven?

Regards,
Jerry


+   bp->domain |= AMDGPU_GEM_DOMAIN_GTT;
+   bp->domain = amdgpu_bo_get_preferred_pin_domain(adev, bp->domain);
+   bp->flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
+   AMDGPU_GEM_CREATE_CPU_GTT_USWC;
if (vm->use_cpu_for_update)
bp->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
else


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH v6 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v6)

2018-08-22 Thread Zhang, Jerry (Junwei)

On 08/22/2018 06:12 PM, Huang Rui wrote:

I continued working on bulk moving, based on the proposal by Christian.

Background:
The amdgpu driver will move all PD/PT and per-VM BOs onto the idle list, then
move each of them to the end of the LRU list one by one. That causes many BOs
to be moved to the end of the LRU, which seriously impacts performance.

Then Christian provided a workaround to not move PD/PT BOs on the LRU, with the
patch below:
Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae ("drm/amdgpu: band aid
validating VM PTs")

However, the final solution should bulk move all PD/PT and per-VM BOs on the
LRU instead of moving them one by one.

Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
validated, we move all BOs together to the end of the LRU without dropping the
LRU lock.

While doing so we note the beginning and end of this block in the LRU list.

Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
we don't move every BO one by one, but instead cut the LRU list into pieces so
that we bulk move everything to the end in just one operation.
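
As a rough sketch of the idea (not the actual TTM implementation; the real
work is done by ttm_bo_bulk_move_lru_tail(), this just illustrates the
cut-and-splice with the generic kernel list helpers, and the struct and
function names here are made up):

#include <linux/list.h>

/* Remember the first and last BO of the block we put on the LRU. */
struct bulk_move_pos {
	struct list_head *first;
	struct list_head *last;
};

static void bulk_move_tail(struct list_head *lru, struct bulk_move_pos *pos)
{
	LIST_HEAD(tmp);

	if (!pos->first || !pos->last)
		return;

	/* Cut the whole [first, last] block out of the LRU in one go... */
	__list_cut_position(&tmp, pos->first->prev, pos->last);

	/* ...and splice it back in at the tail as a single operation. */
	list_splice_tail(&tmp, lru);
}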

Test data:
+----------------+-------------------+-------------+------------------------------------------+
|                | The Talos         | Clpeak(OCL) | BusSpeedReadback(OCL)                    |
|                | Principle(Vulkan) |             |                                          |
+----------------+-------------------+-------------+------------------------------------------+
| Original       | 147.7 FPS         | 76.86 us    | 0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K)   |
|                |                   |             | 0.307 ms(8K) 0.310 ms(16K)               |
+----------------+-------------------+-------------+------------------------------------------+
| Original + WA  | 162.1 FPS         | 42.15 us    | 0.254 ms(1K) 0.241 ms(2K) 0.230 ms(4K)   |
| (don't move    |                   |             | 0.223 ms(8K) 0.204 ms(16K)               |
| PT BOs on LRU) |                   |             |                                          |
+----------------+-------------------+-------------+------------------------------------------+
| Bulk move      | 163.1 FPS         | 40.52 us    | 0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K)   |
|                |                   |             | 0.214 ms(8K) 0.225 ms(16K)               |
+----------------+-------------------+-------------+------------------------------------------+

After testing with the above three benchmarks, including Vulkan and OpenCL, we
can see a visible improvement over the original, and even better results than
the original with the workaround.

v2: move all BOs, including the idle, relocated, and moved lists, to the end of
the LRU and put them together.
v3: remove unused parameter and use list_for_each_entry instead of the _safe
variant.
v4: move the amdgpu_vm_move_to_lru_tail after command submission; at that time,
all BOs will be back on the idle list.
v5: remove amdgpu_vm_move_to_lru_tail_by_list(), use bulk_moveable instead of
validated, and move ttm_bo_bulk_move_lru_tail() also into
amdgpu_vm_move_to_lru_tail().
v6: clean up and fix return value.

Signed-off-by: Christian König 
Signed-off-by: Huang Rui 
Tested-by: Mike Lothian 
Tested-by: Dieter Nützel 
Acked-by: Chunming Zhou 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c |  3 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 64 +++---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 11 +-
  3 files changed, 57 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 502b94f..8a5e557 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1266,6 +1266,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
union drm_amdgpu_cs *cs = data;
struct amdgpu_cs_parser parser = {};
bool reserved_buffers = false;
+   struct amdgpu_fpriv *fpriv;
int i, r;

if (!adev->accel_working)
@@ -1310,6 +1311,8 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)

r = amdgpu_cs_submit(&parser, cs);

+   fpriv = filp->driver_priv;


A trivial thing: that could be assigned at its definition.
Anyway, it's

Reviewed-by: Junwei Zhang 


+   amdgpu_vm_move_to_lru_tail(adev, &fpriv->vm);
  out:
amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 9c84770..daae0fd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -268,6 +268,47 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
  }

  /**
+ * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
+ *
+ * @adev: amdgpu device pointer
+ * @vm: vm providing the BOs
+ *
+ * Move all BOs to the end of LRU and remember their positions to put them
+ * together.
+ */
+void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
+   struct amdgpu_vm *vm)
+{
+   struct ttm_bo_global 

Re: [PATCH 10/11] drm/amdgpu: add helper for VM PD/PT allocation parameters

2018-08-22 Thread Zhang, Jerry (Junwei)

On 08/22/2018 11:05 PM, Christian König wrote:

Add a helper function to figure them out only once.

Signed-off-by: Christian König 

Reviewed-by: Junwei Zhang 


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 --
  1 file changed, 28 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 87e3d44b0a3f..928fdae0dab4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -446,6 +446,31 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
return r;
  }

+/**
+ * amdgpu_vm_bo_param - fill in parameters for PD/PT allocation
+ *
+ * @adev: amdgpu_device pointer
+ * @vm: requesting vm
+ * @bp: resulting BO allocation parameters
+ */
+static void amdgpu_vm_bo_param(struct amdgpu_device *adev, struct amdgpu_vm 
*vm,
+  int level, struct amdgpu_bo_param *bp)
+{
+   memset(bp, 0, sizeof(*bp));
+
+   bp->size = amdgpu_vm_bo_size(adev, level);
+   bp->byte_align = AMDGPU_GPU_PAGE_SIZE;
+   bp->domain = AMDGPU_GEM_DOMAIN_VRAM;
+   bp->flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
+   if (vm->use_cpu_for_update)
+   bp->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
+   else
+   bp->flags |= AMDGPU_GEM_CREATE_SHADOW;
+   bp->type = ttm_bo_type_kernel;
+   if (vm->root.base.bo)
+   bp->resv = vm->root.base.bo->tbo.resv;
+}
+
  /**
   * amdgpu_vm_alloc_levels - allocate the PD/PT levels
   *
@@ -469,8 +494,8 @@ static int amdgpu_vm_alloc_levels(struct amdgpu_device 
*adev,
  unsigned level, bool ats)
  {
unsigned shift = amdgpu_vm_level_shift(adev, level);
+   struct amdgpu_bo_param bp;
unsigned pt_idx, from, to;
-   u64 flags;
int r;

if (!parent->entries) {
@@ -494,29 +519,14 @@ static int amdgpu_vm_alloc_levels(struct amdgpu_device 
*adev,
saddr = saddr & ((1 << shift) - 1);
eaddr = eaddr & ((1 << shift) - 1);

-   flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
-   if (vm->use_cpu_for_update)
-   flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
-   else
-   flags |= (AMDGPU_GEM_CREATE_NO_CPU_ACCESS |
-   AMDGPU_GEM_CREATE_SHADOW);
+   amdgpu_vm_bo_param(adev, vm, level, &bp);

/* walk over the address space and allocate the page tables */
for (pt_idx = from; pt_idx <= to; ++pt_idx) {
-   struct reservation_object *resv = vm->root.base.bo->tbo.resv;
struct amdgpu_vm_pt *entry = &parent->entries[pt_idx];
struct amdgpu_bo *pt;

if (!entry->base.bo) {
-   struct amdgpu_bo_param bp;
-
-   memset(, 0, sizeof(bp));
-   bp.size = amdgpu_vm_bo_size(adev, level);
-   bp.byte_align = AMDGPU_GPU_PAGE_SIZE;
-   bp.domain = AMDGPU_GEM_DOMAIN_VRAM;
-   bp.flags = flags;
-   bp.type = ttm_bo_type_kernel;
-   bp.resv = resv;
r = amdgpu_bo_create(adev, &bp, &pt);
if (r)
return r;
@@ -2564,8 +2574,6 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
amdgpu_vm *vm,
  {
struct amdgpu_bo_param bp;
struct amdgpu_bo *root;
-   unsigned long size;
-   uint64_t flags;
int r, i;

vm->va = RB_ROOT_CACHED;
@@ -2602,20 +2610,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
amdgpu_vm *vm,
  "CPU update of VM recommended only for large BAR system\n");
vm->last_update = NULL;

-   flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
-   if (vm->use_cpu_for_update)
-   flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
-   else
-   flags |= AMDGPU_GEM_CREATE_SHADOW;
-
-   size = amdgpu_vm_bo_size(adev, adev->vm_manager.root_level);
-   memset(&bp, 0, sizeof(bp));
-   bp.size = size;
-   bp.byte_align = AMDGPU_GPU_PAGE_SIZE;
-   bp.domain = AMDGPU_GEM_DOMAIN_VRAM;
-   bp.flags = flags;
-   bp.type = ttm_bo_type_kernel;
-   bp.resv = NULL;
+   amdgpu_vm_bo_param(adev, vm, adev->vm_manager.root_level, );
r = amdgpu_bo_create(adev, &bp, &root);
if (r)
goto error_free_sched_entity;


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 08/11] drm/amdgpu: add amdgpu_gmc_pd_addr helper

2018-08-22 Thread Zhang, Jerry (Junwei)

On 08/22/2018 11:05 PM, Christian König wrote:

Add a helper to get the root PD address and remove the workarounds from
the GMC9 code for that.

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/Makefile   |  3 +-
  .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |  5 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c|  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c   | 47 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h   |  2 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
  drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c  |  7 +--
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c |  4 --
  drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c   |  7 +--
  9 files changed, 56 insertions(+), 23 deletions(-)
  create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile 
b/drivers/gpu/drm/amd/amdgpu/Makefile
index 860cb8731c7c..d2bafabe585d 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -51,7 +51,8 @@ amdgpu-y += amdgpu_device.o amdgpu_kms.o \
amdgpu_prime.o amdgpu_vm.o amdgpu_ib.o amdgpu_pll.o \
amdgpu_ucode.o amdgpu_bo_list.o amdgpu_ctx.o amdgpu_sync.o \
amdgpu_gtt_mgr.o amdgpu_vram_mgr.o amdgpu_virt.o amdgpu_atomfirmware.o \
-   amdgpu_vf_error.o amdgpu_sched.o amdgpu_debugfs.o amdgpu_ids.o
+   amdgpu_vf_error.o amdgpu_sched.o amdgpu_debugfs.o amdgpu_ids.o \
+   amdgpu_gmc.o

  # add asic specific block
  amdgpu-$(CONFIG_DRM_AMDGPU_CIK)+= cik.o cik_ih.o kv_smc.o kv_dpm.o \
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 7eadc58231f2..2e2393fe09b2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -364,7 +364,6 @@ static int vm_validate_pt_pd_bos(struct amdgpu_vm *vm)
struct amdgpu_bo *pd = vm->root.base.bo;
struct amdgpu_device *adev = amdgpu_ttm_adev(pd->tbo.bdev);
struct amdgpu_vm_parser param;
-   uint64_t addr, flags = AMDGPU_PTE_VALID;
int ret;

param.domain = AMDGPU_GEM_DOMAIN_VRAM;
@@ -383,9 +382,7 @@ static int vm_validate_pt_pd_bos(struct amdgpu_vm *vm)
return ret;
}

-   addr = amdgpu_bo_gpu_offset(vm->root.base.bo);
-   amdgpu_gmc_get_vm_pde(adev, -1, &addr, &flags);
-   vm->pd_phys_addr = addr;
+   vm->pd_phys_addr = amdgpu_gmc_pd_addr(vm->root.base.bo);

if (vm->use_cpu_for_update) {
ret = amdgpu_bo_kmap(pd, NULL);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 17bf63f93c93..d268035cf2f3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -946,7 +946,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
if (r)
return r;

-   p->job->vm_pd_addr = amdgpu_bo_gpu_offset(vm->root.base.bo);
+   p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.base.bo);

if (amdgpu_vm_debug) {
/* Invalidate all BOs to test for userspace bugs */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
new file mode 100644
index ..36058feac64f
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -0,0 +1,47 @@
+/*
+ * Copyright 2018 Advanced Micro Devices, Inc.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ */
+
+#include "amdgpu.h"
+
+/**
+ * amdgpu_gmc_pd_addr - return the address of the root directory
+ *
+ */
+uint64_t amdgpu_gmc_pd_addr(struct amdgpu_bo *bo)


If the function is going to handle all PD addresses, it would be better to call
it from GMC 6/7/8 as well.


+{
+   struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+   uint64_t pd_addr;
+
+   pd_addr = 

Re: [PATCH 2/6] drm/amdgpu: cleanup GPU recovery check a bit

2018-08-22 Thread Huang Rui
On Wed, Aug 22, 2018 at 12:04:53PM +0200, Christian König wrote:
> Check if we should call the function instead of providing the forced
> flag.
> 
> Signed-off-by: Christian König 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h|  3 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 38 
> --
>  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c|  4 ++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c|  3 ++-
>  drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c  |  4 ++--
>  drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c  |  3 ++-
>  7 files changed, 36 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 19ef7711d944..340e40d03d54 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -1158,8 +1158,9 @@ int emu_soc_asic_init(struct amdgpu_device *adev);
>  #define amdgpu_asic_need_full_reset(adev) 
> (adev)->asic_funcs->need_full_reset((adev))
>  
>  /* Common functions */
> +bool amdgpu_device_should_recover_gpu(struct amdgpu_device *adev);
>  int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
> -   struct amdgpu_job* job, bool force);
> +   struct amdgpu_job* job);
>  void amdgpu_device_pci_config_reset(struct amdgpu_device *adev);
>  bool amdgpu_device_need_post(struct amdgpu_device *adev);
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index c23339d8ae2d..9f5e4be76d5e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -3244,32 +3244,44 @@ static int amdgpu_device_reset_sriov(struct 
> amdgpu_device *adev,
>   return r;
>  }
>  
> +/**
> + * amdgpu_device_should_recover_gpu - check if we should try GPU recovery
> + *
> + * @adev: amdgpu device pointer
> + *
> + * Check amdgpu_gpu_recovery and SRIOV status to see if we should try to 
> recover
> + * a hung GPU.
> + */
> +bool amdgpu_device_should_recover_gpu(struct amdgpu_device *adev)
> +{
> + if (!amdgpu_device_ip_check_soft_reset(adev)) {
> + DRM_INFO("Timeout, but no hardware hang detected.\n");
> + return false;
> + }
> +
> + if (amdgpu_gpu_recovery == 0 || (amdgpu_gpu_recovery == -1  &&
> +  !amdgpu_sriov_vf(adev))) {
> + DRM_INFO("GPU recovery disabled.\n");
> + return false;
> + }
> +
> + return true;
> +}
> +
>  /**
>   * amdgpu_device_gpu_recover - reset the asic and recover scheduler
>   *
>   * @adev: amdgpu device pointer
>   * @job: which job trigger hang
> - * @force: forces reset regardless of amdgpu_gpu_recovery
>   *
>   * Attempt to reset the GPU if it has hung (all asics).
>   * Returns 0 for success or an error on failure.
>   */
>  int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
> -   struct amdgpu_job *job, bool force)
> +   struct amdgpu_job *job)
>  {

In my view, we don't actually need an "int" return for this function,
because no caller checks the return value.
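
I.e. the prototype could simply become (hypothetical):

void amdgpu_device_gpu_recover(struct amdgpu_device *adev,
			       struct amdgpu_job *job);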

The rest looks good to me.
Reviewed-by: Huang Rui 

>   int i, r, resched;
>  
> - if (!force && !amdgpu_device_ip_check_soft_reset(adev)) {
> - DRM_INFO("No hardware hang detected. Did some blocks stall?\n");
> - return 0;
> - }
> -
> - if (!force && (amdgpu_gpu_recovery == 0 ||
> - (amdgpu_gpu_recovery == -1  && 
> !amdgpu_sriov_vf(adev {
> - DRM_INFO("GPU recovery disabled.\n");
> - return 0;
> - }
> -
>   dev_info(adev->dev, "GPU reset begin!\n");
>  
>   mutex_lock(&adev->lock_reset);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> index e74d620d9699..68cccebb8463 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> @@ -702,7 +702,7 @@ static int amdgpu_debugfs_gpu_recover(struct seq_file *m, 
> void *data)
>   struct amdgpu_device *adev = dev->dev_private;
>  
>   seq_printf(m, "gpu recover\n");
> - amdgpu_device_gpu_recover(adev, NULL, true);
> + amdgpu_device_gpu_recover(adev, NULL);
>  
>   return 0;
>  }
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
> index 1abf5b5bac9e..b927e8798534 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
> @@ -105,8 +105,8 @@ static void amdgpu_irq_reset_work_func(struct work_struct 
> *work)
>   struct amdgpu_device *adev = container_of(work, struct amdgpu_device,
> reset_work);
>  
> - if (!amdgpu_sriov_vf(adev))
> - amdgpu_device_gpu_recover(adev, NULL, false);
> + if 

Re: [PATCH 07/11] drm/amdgpu: add GMC9 support for PDs/PTs in system memory

2018-08-22 Thread Zhang, Jerry (Junwei)

On 08/22/2018 11:05 PM, Christian König wrote:

Add the necessary handling.

Signed-off-by: Christian König 


It looks like this is going to use GTT for the page tables.
What kind of scenario would use that?
Could it be replaced by the CPU updating the page tables in system memory?

Regards,
Jerry


---
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index e412eb8e347c..3393a329fc9c 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -571,7 +571,7 @@ static uint64_t gmc_v9_0_get_vm_pte_flags(struct 
amdgpu_device *adev,
  static void gmc_v9_0_get_vm_pde(struct amdgpu_device *adev, int level,
uint64_t *addr, uint64_t *flags)
  {
-   if (!(*flags & AMDGPU_PDE_PTE))
+   if (!(*flags & AMDGPU_PDE_PTE) && !(*flags & AMDGPU_PTE_SYSTEM))
*addr = adev->vm_manager.vram_base_offset + *addr -
adev->gmc.vram_start;
BUG_ON(*addr & 0x003FULL);


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 02/11] drm/amdgpu: validate the VM root PD from the VM code

2018-08-22 Thread Zhang, Jerry (Junwei)

Patch 2 ~ 6 are

Reviewed-by: Junwei Zhang 

Jerry

On 08/22/2018 11:05 PM, Christian König wrote:

Preparation for following changes. This validates the root PD twice,
but the overhead of that should be minimal.

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 8 
  1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 73b8dcaf66e6..53ce9982a5ee 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -291,11 +291,11 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, 
struct amdgpu_vm *vm,
list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
struct amdgpu_bo *bo = bo_base->bo;

-   if (bo->parent) {
-   r = validate(param, bo);
-   if (r)
-   break;
+   r = validate(param, bo);
+   if (r)
+   break;

+   if (bo->parent) {
spin_lock(&glob->lru_lock);
ttm_bo_move_to_lru_tail(&bo->tbo);
if (bo->shadow)


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 01/11] drm/amdgpu: remove extra root PD alignment

2018-08-22 Thread Zhang, Jerry (Junwei)

On 08/23/2018 03:46 AM, Alex Deucher wrote:

On Wed, Aug 22, 2018 at 11:05 AM Christian König
 wrote:


Just another leftover from radeon.


I can't remember exactly what chip this was for.  Are you sure this
isn't still required for SI or something like that?


FYI.

Some projects still use SI with amdgpu.

Regards,
Jerry



Alex



Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 4 +---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 3 ---
  2 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 662aec5c81d4..73b8dcaf66e6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2566,8 +2566,6 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
amdgpu_vm *vm,
  {
 struct amdgpu_bo_param bp;
 struct amdgpu_bo *root;
-   const unsigned align = min(AMDGPU_VM_PTB_ALIGN_SIZE,
-   AMDGPU_VM_PTE_COUNT(adev) * 8);
 unsigned long size;
 uint64_t flags;
 int r, i;
@@ -2615,7 +2613,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
amdgpu_vm *vm,
 size = amdgpu_vm_bo_size(adev, adev->vm_manager.root_level);
 memset(&bp, 0, sizeof(bp));
 bp.size = size;
-   bp.byte_align = align;
+   bp.byte_align = AMDGPU_GPU_PAGE_SIZE;
 bp.domain = AMDGPU_GEM_DOMAIN_VRAM;
 bp.flags = flags;
 bp.type = ttm_bo_type_kernel;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 1162c2bf3138..1c9049feaaea 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -48,9 +48,6 @@ struct amdgpu_bo_list_entry;
  /* number of entries in page table */
  #define AMDGPU_VM_PTE_COUNT(adev) (1 << (adev)->vm_manager.block_size)

-/* PTBs (Page Table Blocks) need to be aligned to 32K */
-#define AMDGPU_VM_PTB_ALIGN_SIZE   32768
-
  #define AMDGPU_PTE_VALID   (1ULL << 0)
  #define AMDGPU_PTE_SYSTEM  (1ULL << 1)
  #define AMDGPU_PTE_SNOOPED (1ULL << 2)
--
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: move full access into amdgpu_device_ip_suspend

2018-08-22 Thread Alex Deucher
On Wed, Aug 22, 2018 at 7:26 AM Yintian Tao  wrote:
>
> It will be safer to make full access include both phase1 and phase2.
> Then accessing special registers in either phase1 or phase2 will not
> block any shutdown or suspend process under virtualization.
>
> Signed-off-by: Yintian Tao 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 12 ++--
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index c23339d..6bb0e47 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -1932,9 +1932,6 @@ static int amdgpu_device_ip_suspend_phase1(struct 
> amdgpu_device *adev)
>  {
> int i, r;
>
> -   if (amdgpu_sriov_vf(adev))
> -   amdgpu_virt_request_full_gpu(adev, false);
> -
> amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
> amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
>
> @@ -1953,9 +1950,6 @@ static int amdgpu_device_ip_suspend_phase1(struct 
> amdgpu_device *adev)
> }
> }
>
> -   if (amdgpu_sriov_vf(adev))
> -   amdgpu_virt_release_full_gpu(adev, false);
> -
> return 0;
>  }
>
> @@ -2007,11 +2001,17 @@ int amdgpu_device_ip_suspend(struct amdgpu_device 
> *adev)
>  {
> int r;
>
> +   if (amdgpu_sriov_vf(adev))
> +   amdgpu_virt_request_full_gpu(adev, false);
> +
> r = amdgpu_device_ip_suspend_phase1(adev);
> if (r)
> return r;
> r = amdgpu_device_ip_suspend_phase2(adev);
>
> +   if (amdgpu_sriov_vf(adev))
> +   amdgpu_virt_release_full_gpu(adev, false);
> +
> return r;
>  }
>
> --
> 2.7.4
>
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: KFD co-maintainership and branch model

2018-08-22 Thread Zhang, Jerry (Junwei)

On 08/23/2018 06:25 AM, Felix Kuehling wrote:

Hi all,

Oded has offered to make me co-maintainer of KFD, as he's super busy at
work and less responsive than he used to be.

At the same time we're about to send out the first patches to merge KFD
and AMDGPU into a single kernel module.

With that in mind I'd like to propose to upstream KFD through Alex's
branch in the future. It would avoid conflicts in shared code
(amdgpu_vm.c is most active at the moment) when merging branches, and
make the code flow and testing easier.

Please let me know what you think?


Shall we share the same style as the DC upstream in amdgpu?

For the common code like amdgpu_vm.c, it should ideally work for both KFD and
amdgpu gfx.
If a patch impacts gfx performance but improves compute ability, we may reserve
that as a hybrid patch for the production release only.

Regards,
Jerry



Regards,
   Felix


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: KFD co-maintainership and branch model

2018-08-22 Thread David Airlie
On Thu, Aug 23, 2018 at 8:25 AM, Felix Kuehling  wrote:
> Hi all,
>
> Oded has offered to make me co-maintainer of KFD, as he's super busy at
> work and less responsive than he used to be.
>
> At the same time we're about to send out the first patches to merge KFD
> and AMDGPU into a single kernel module.
>
> With that in mind I'd like to propose to upstream KFD through Alex's
> branch in the future. It would avoid conflicts in shared code
> (amdgpu_vm.c is most active at the moment) when merging branches, and
> make the code flow and testing easier.
>
> Please let me know what you think?
>

Works for me.

Thanks,
Dave.
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


KFD co-maintainership and branch model

2018-08-22 Thread Felix Kuehling
Hi all,

Oded has offered to make me co-maintainer of KFD, as he's super busy at
work and less responsive than he used to be.

At the same time we're about to send out the first patches to merge KFD
and AMDGPU into a single kernel module.

With that in mind I'd like to propose to upstream KFD through Alex's
branch in the future. It would avoid conflicts in shared code
(amdgpu_vm.c is most active at the moment) when merging branches, and
make the code flow and testing easier.

Please let me know what you think?

Regards,
  Felix

-- 
F e l i x   K u e h l i n g
PMTS Software Development Engineer | Linux Compute Kernel
1 Commerce Valley Dr. East, Markham, ON L3T 7X6 Canada
(O) +1(289)695-1597
   _ _   _   _   _
  / \   | \ / | |  _  \  \ _  |
 / A \  | \M/ | | |D) )  /|_| |
/_/ \_\ |_| |_| |_/ |__/ \|   facebook.com/AMD | amd.com

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 08/11] drm/amdgpu: add amdgpu_gmc_pd_addr helper

2018-08-22 Thread Felix Kuehling
Acked-by: Felix Kuehling 

The amdgpu_amdkfd_gpuvm code looked different than I remembered. There
are some important patches missing upstream that I'll roll into my next
patch series.

Regards,
  Felix


On 2018-08-22 11:05 AM, Christian König wrote:
> Add a helper to get the root PD address and remove the workarounds from
> the GMC9 code for that.
>
> Signed-off-by: Christian König 
> ---
>  drivers/gpu/drm/amd/amdgpu/Makefile   |  3 +-
>  .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |  5 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c|  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c   | 47 +++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h   |  2 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
>  drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c  |  7 +--
>  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c |  4 --
>  drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c   |  7 +--
>  9 files changed, 56 insertions(+), 23 deletions(-)
>  create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile 
> b/drivers/gpu/drm/amd/amdgpu/Makefile
> index 860cb8731c7c..d2bafabe585d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/Makefile
> +++ b/drivers/gpu/drm/amd/amdgpu/Makefile
> @@ -51,7 +51,8 @@ amdgpu-y += amdgpu_device.o amdgpu_kms.o \
>   amdgpu_prime.o amdgpu_vm.o amdgpu_ib.o amdgpu_pll.o \
>   amdgpu_ucode.o amdgpu_bo_list.o amdgpu_ctx.o amdgpu_sync.o \
>   amdgpu_gtt_mgr.o amdgpu_vram_mgr.o amdgpu_virt.o amdgpu_atomfirmware.o \
> - amdgpu_vf_error.o amdgpu_sched.o amdgpu_debugfs.o amdgpu_ids.o
> + amdgpu_vf_error.o amdgpu_sched.o amdgpu_debugfs.o amdgpu_ids.o \
> + amdgpu_gmc.o
>  
>  # add asic specific block
>  amdgpu-$(CONFIG_DRM_AMDGPU_CIK)+= cik.o cik_ih.o kv_smc.o kv_dpm.o \
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> index 7eadc58231f2..2e2393fe09b2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> @@ -364,7 +364,6 @@ static int vm_validate_pt_pd_bos(struct amdgpu_vm *vm)
>   struct amdgpu_bo *pd = vm->root.base.bo;
>   struct amdgpu_device *adev = amdgpu_ttm_adev(pd->tbo.bdev);
>   struct amdgpu_vm_parser param;
> - uint64_t addr, flags = AMDGPU_PTE_VALID;
>   int ret;
>  
>   param.domain = AMDGPU_GEM_DOMAIN_VRAM;
> @@ -383,9 +382,7 @@ static int vm_validate_pt_pd_bos(struct amdgpu_vm *vm)
>   return ret;
>   }
>  
> - addr = amdgpu_bo_gpu_offset(vm->root.base.bo);
> - amdgpu_gmc_get_vm_pde(adev, -1, &addr, &flags);
> - vm->pd_phys_addr = addr;
> + vm->pd_phys_addr = amdgpu_gmc_pd_addr(vm->root.base.bo);
>  
>   if (vm->use_cpu_for_update) {
>   ret = amdgpu_bo_kmap(pd, NULL);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index 17bf63f93c93..d268035cf2f3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -946,7 +946,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser 
> *p)
>   if (r)
>   return r;
>  
> - p->job->vm_pd_addr = amdgpu_bo_gpu_offset(vm->root.base.bo);
> + p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.base.bo);
>  
>   if (amdgpu_vm_debug) {
>   /* Invalidate all BOs to test for userspace bugs */
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> new file mode 100644
> index ..36058feac64f
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> @@ -0,0 +1,47 @@
> +/*
> + * Copyright 2018 Advanced Micro Devices, Inc.
> + * All Rights Reserved.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the
> + * "Software"), to deal in the Software without restriction, including
> + * without limitation the rights to use, copy, modify, merge, publish,
> + * distribute, sub license, and/or sell copies of the Software, and to
> + * permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY 
> CLAIM,
> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
> + *
> + * The above copyright notice and this permission notice (including the
> + * next paragraph) shall be included in all copies or substantial portions
> + * of the Software.
> + *
> + */
> +
> +#include "amdgpu.h"

Re: [PATCH 2/2] drm/amdgpu: Change kiq ring initialize sequence on gfx9

2018-08-22 Thread Alex Deucher
On Wed, Aug 22, 2018 at 7:26 AM Rex Zhu  wrote:
>
> 1. initialize kiq before initialize gfx ring.
> 2. set kiq ring ready immediately when kiq initialize
>successfully.
> 3. split function gfx_v9_0_kiq_resume into two functions.
>  gfx_v9_0_kiq_resume is for kiq initialize.
>  gfx_v9_0_kcq_resume is for kcq initialize.
>
> Signed-off-by: Rex Zhu 

Series is:
Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 38 
> ++-
>  1 file changed, 24 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c 
> b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> index 5990e5dc..ed1868a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> @@ -2684,7 +2684,6 @@ static int gfx_v9_0_kiq_kcq_enable(struct amdgpu_device 
> *adev)
> queue_mask |= (1ull << i);
> }
>
> -   kiq_ring->ready = true;
> r = amdgpu_ring_alloc(kiq_ring, (7 * adev->gfx.num_compute_rings) + 
> 8);
> if (r) {
> DRM_ERROR("Failed to lock KIQ (%d).\n", r);
> @@ -3091,26 +3090,33 @@ static int gfx_v9_0_kcq_init_queue(struct amdgpu_ring 
> *ring)
>
>  static int gfx_v9_0_kiq_resume(struct amdgpu_device *adev)
>  {
> -   struct amdgpu_ring *ring = NULL;
> -   int r = 0, i;
> -
> -   gfx_v9_0_cp_compute_enable(adev, true);
> +   struct amdgpu_ring *ring;
> +   int r;
>
> ring = &adev->gfx.kiq.ring;
>
> r = amdgpu_bo_reserve(ring->mqd_obj, false);
> if (unlikely(r != 0))
> -   goto done;
> +   return r;
>
> r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
> -   if (!r) {
> -   r = gfx_v9_0_kiq_init_queue(ring);
> -   amdgpu_bo_kunmap(ring->mqd_obj);
> -   ring->mqd_ptr = NULL;
> -   }
> +   if (unlikely(r != 0))
> +   return r;
> +
> +   gfx_v9_0_kiq_init_queue(ring);
> +   amdgpu_bo_kunmap(ring->mqd_obj);
> +   ring->mqd_ptr = NULL;
> amdgpu_bo_unreserve(ring->mqd_obj);
> -   if (r)
> -   goto done;
> +   ring->ready = true;
> +   return 0;
> +}
> +
> +static int gfx_v9_0_kcq_resume(struct amdgpu_device *adev)
> +{
> +   struct amdgpu_ring *ring = NULL;
> +   int r = 0, i;
> +
> +   gfx_v9_0_cp_compute_enable(adev, true);
>
> for (i = 0; i < adev->gfx.num_compute_rings; i++) {
> ring = &adev->gfx.compute_ring[i];
> @@ -3153,11 +3159,15 @@ static int gfx_v9_0_cp_resume(struct amdgpu_device 
> *adev)
> return r;
> }
>
> +   r = gfx_v9_0_kiq_resume(adev);
> +   if (r)
> +   return r;
> +
> r = gfx_v9_0_cp_gfx_resume(adev);
> if (r)
> return r;
>
> -   r = gfx_v9_0_kiq_resume(adev);
> +   r = gfx_v9_0_kcq_resume(adev);
> if (r)
> return r;
>
> --
> 1.9.1
>
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: Set pasid for compute vm

2018-08-22 Thread Alex Deucher
On Wed, Aug 22, 2018 at 5:31 PM Oak Zeng  wrote:
>

Please provide a patch description.

Alex

> Signed-off-by: Oak Zeng 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h   |  4 ++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c |  8 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c   | 20 +---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h   |  2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_process.c |  4 ++--
>  drivers/gpu/drm/amd/include/kgd_kfd_interface.h  |  4 ++--
>  6 files changed, 28 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
> index a8418a3..8939f54 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
> @@ -157,11 +157,11 @@ uint64_t amdgpu_amdkfd_get_vram_usage(struct kgd_dev 
> *kgd);
>  /* GPUVM API */
>  int amdgpu_amdkfd_gpuvm_create_process_vm(struct kgd_dev *kgd, void **vm,
> void **process_info,
> -   struct dma_fence **ef);
> +   struct dma_fence **ef, unsigned int 
> pasid);
>  int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct kgd_dev *kgd,
> struct file *filp,
> void **vm, void **process_info,
> -   struct dma_fence **ef);
> +   struct dma_fence **ef, unsigned int 
> pasid);
>  void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
> struct amdgpu_vm *vm);
>  void amdgpu_amdkfd_gpuvm_destroy_process_vm(struct kgd_dev *kgd, void *vm);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> index 7eadc58..659c397 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> @@ -1005,7 +1005,7 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void 
> **process_info,
>
>  int amdgpu_amdkfd_gpuvm_create_process_vm(struct kgd_dev *kgd, void **vm,
>   void **process_info,
> - struct dma_fence **ef)
> + struct dma_fence **ef, unsigned int 
> pasid)
>  {
> struct amdgpu_device *adev = get_amdgpu_device(kgd);
> struct amdgpu_vm *new_vm;
> @@ -1016,7 +1016,7 @@ int amdgpu_amdkfd_gpuvm_create_process_vm(struct 
> kgd_dev *kgd, void **vm,
> return -ENOMEM;
>
> /* Initialize AMDGPU part of the VM */
> -   ret = amdgpu_vm_init(adev, new_vm, AMDGPU_VM_CONTEXT_COMPUTE, 0);
> +   ret = amdgpu_vm_init(adev, new_vm, AMDGPU_VM_CONTEXT_COMPUTE, pasid);
> if (ret) {
> pr_err("Failed init vm ret %d\n", ret);
> goto amdgpu_vm_init_fail;
> @@ -1041,7 +1041,7 @@ int amdgpu_amdkfd_gpuvm_create_process_vm(struct 
> kgd_dev *kgd, void **vm,
>  int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct kgd_dev *kgd,
>struct file *filp,
>void **vm, void **process_info,
> -  struct dma_fence **ef)
> +  struct dma_fence **ef, unsigned 
> int pasid)
>  {
> struct amdgpu_device *adev = get_amdgpu_device(kgd);
> struct drm_file *drm_priv = filp->private_data;
> @@ -1054,7 +1054,7 @@ int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct 
> kgd_dev *kgd,
> return -EINVAL;
>
> /* Convert VM into a compute VM */
> -   ret = amdgpu_vm_make_compute(adev, avm);
> +   ret = amdgpu_vm_make_compute(adev, avm, pasid);
> if (ret)
> return ret;
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 662aec5..0f6b304 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -2690,7 +2690,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
> amdgpu_vm *vm,
>   * Returns:
>   * 0 for success, -errno for errors.
>   */
> -int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> +int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm, 
> unsigned int pasid)
>  {
> bool pte_support_ats = (adev->asic_type == CHIP_RAVEN);
> int r;
> @@ -2705,6 +2705,18 @@ int amdgpu_vm_make_compute(struct amdgpu_device *adev, 
> struct amdgpu_vm *vm)
> goto error;
> }
>
> +   if (vm->pasid) {
> +   unsigned long flags;
> +
> +   spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);
> +   r = idr_alloc(&adev->vm_manager.pasid_idr, vm, pasid, pasid + 1,
> + GFP_ATOMIC);
> +   

Re: [PATCH] drm/amdgpu: Set pasid for compute vm

2018-08-22 Thread Felix Kuehling
See comments inline ...

Regards,
  Felix


On 2018-08-22 05:10 PM, Oak Zeng wrote:
> Signed-off-by: Oak Zeng 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h   |  4 ++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c |  8 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c   | 20 +---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h   |  2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_process.c |  4 ++--
>  drivers/gpu/drm/amd/include/kgd_kfd_interface.h  |  4 ++--
>  6 files changed, 28 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
> index a8418a3..8939f54 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
> @@ -157,11 +157,11 @@ uint64_t amdgpu_amdkfd_get_vram_usage(struct kgd_dev 
> *kgd);
>  /* GPUVM API */
>  int amdgpu_amdkfd_gpuvm_create_process_vm(struct kgd_dev *kgd, void **vm,
>   void **process_info,
> - struct dma_fence **ef);
> + struct dma_fence **ef, unsigned int 
> pasid);

vm, process_info and ef are output parameters. pasid is an input
parameter. I'd add that before the output parameters.
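
I.e. the prototype would then look something like this (just illustrating the
suggested parameter order):

int amdgpu_amdkfd_gpuvm_create_process_vm(struct kgd_dev *kgd,
					  unsigned int pasid,
					  void **vm, void **process_info,
					  struct dma_fence **ef);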

>  int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct kgd_dev *kgd,
>   struct file *filp,
>   void **vm, void **process_info,
> - struct dma_fence **ef);
> + struct dma_fence **ef, unsigned int 
> pasid);

Same as above.

>  void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
>   struct amdgpu_vm *vm);
>  void amdgpu_amdkfd_gpuvm_destroy_process_vm(struct kgd_dev *kgd, void *vm);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> index 7eadc58..659c397 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> @@ -1005,7 +1005,7 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void 
> **process_info,
>  
>  int amdgpu_amdkfd_gpuvm_create_process_vm(struct kgd_dev *kgd, void **vm,
> void **process_info,
> -   struct dma_fence **ef)
> +   struct dma_fence **ef, unsigned int 
> pasid)
>  {
>   struct amdgpu_device *adev = get_amdgpu_device(kgd);
>   struct amdgpu_vm *new_vm;
> @@ -1016,7 +1016,7 @@ int amdgpu_amdkfd_gpuvm_create_process_vm(struct 
> kgd_dev *kgd, void **vm,
>   return -ENOMEM;
>  
>   /* Initialize AMDGPU part of the VM */
> - ret = amdgpu_vm_init(adev, new_vm, AMDGPU_VM_CONTEXT_COMPUTE, 0);
> + ret = amdgpu_vm_init(adev, new_vm, AMDGPU_VM_CONTEXT_COMPUTE, pasid);
>   if (ret) {
>   pr_err("Failed init vm ret %d\n", ret);
>   goto amdgpu_vm_init_fail;
> @@ -1041,7 +1041,7 @@ int amdgpu_amdkfd_gpuvm_create_process_vm(struct 
> kgd_dev *kgd, void **vm,
>  int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct kgd_dev *kgd,
>  struct file *filp,
>  void **vm, void **process_info,
> -struct dma_fence **ef)
> +struct dma_fence **ef, unsigned int 
> pasid)
>  {
>   struct amdgpu_device *adev = get_amdgpu_device(kgd);
>   struct drm_file *drm_priv = filp->private_data;
> @@ -1054,7 +1054,7 @@ int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct 
> kgd_dev *kgd,
>   return -EINVAL;
>  
>   /* Convert VM into a compute VM */
> - ret = amdgpu_vm_make_compute(adev, avm);
> + ret = amdgpu_vm_make_compute(adev, avm, pasid);
>   if (ret)
>   return ret;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 662aec5..0f6b304 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -2690,7 +2690,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
> amdgpu_vm *vm,
>   * Returns:
>   * 0 for success, -errno for errors.
>   */
> -int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> +int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm, 
> unsigned int pasid)
>  {
>   bool pte_support_ats = (adev->asic_type == CHIP_RAVEN);
>   int r;
> @@ -2705,6 +2705,18 @@ int amdgpu_vm_make_compute(struct amdgpu_device *adev, 
> struct amdgpu_vm *vm)
>   goto error;
>   }
>  
> + if (vm->pasid) {

This condition should be if (pasid). Or if we always expect a valid
pasid, then you don't need the condition at all.
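
I.e. something like this (untested, just showing the suggested condition):

	if (pasid) {
		unsigned long flags;

		spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);
		r = idr_alloc(&adev->vm_manager.pasid_idr, vm, pasid,
			      pasid + 1, GFP_ATOMIC);
		spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);

		if (r < 0)
			goto error;
	}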

> + unsigned long flags;
> +
> + 

[PATCH] drm/amdgpu: Set pasid for compute vm

2018-08-22 Thread Oak Zeng
Signed-off-by: Oak Zeng 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h   |  4 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c |  8 
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c   | 20 +---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h   |  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_process.c |  4 ++--
 drivers/gpu/drm/amd/include/kgd_kfd_interface.h  |  4 ++--
 6 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
index a8418a3..8939f54 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
@@ -157,11 +157,11 @@ uint64_t amdgpu_amdkfd_get_vram_usage(struct kgd_dev 
*kgd);
 /* GPUVM API */
 int amdgpu_amdkfd_gpuvm_create_process_vm(struct kgd_dev *kgd, void **vm,
void **process_info,
-   struct dma_fence **ef);
+   struct dma_fence **ef, unsigned int 
pasid);
 int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct kgd_dev *kgd,
struct file *filp,
void **vm, void **process_info,
-   struct dma_fence **ef);
+   struct dma_fence **ef, unsigned int 
pasid);
 void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
struct amdgpu_vm *vm);
 void amdgpu_amdkfd_gpuvm_destroy_process_vm(struct kgd_dev *kgd, void *vm);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 7eadc58..659c397 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1005,7 +1005,7 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void 
**process_info,
 
 int amdgpu_amdkfd_gpuvm_create_process_vm(struct kgd_dev *kgd, void **vm,
  void **process_info,
- struct dma_fence **ef)
+ struct dma_fence **ef, unsigned int 
pasid)
 {
struct amdgpu_device *adev = get_amdgpu_device(kgd);
struct amdgpu_vm *new_vm;
@@ -1016,7 +1016,7 @@ int amdgpu_amdkfd_gpuvm_create_process_vm(struct kgd_dev 
*kgd, void **vm,
return -ENOMEM;
 
/* Initialize AMDGPU part of the VM */
-   ret = amdgpu_vm_init(adev, new_vm, AMDGPU_VM_CONTEXT_COMPUTE, 0);
+   ret = amdgpu_vm_init(adev, new_vm, AMDGPU_VM_CONTEXT_COMPUTE, pasid);
if (ret) {
pr_err("Failed init vm ret %d\n", ret);
goto amdgpu_vm_init_fail;
@@ -1041,7 +1041,7 @@ int amdgpu_amdkfd_gpuvm_create_process_vm(struct kgd_dev 
*kgd, void **vm,
 int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct kgd_dev *kgd,
   struct file *filp,
   void **vm, void **process_info,
-  struct dma_fence **ef)
+  struct dma_fence **ef, unsigned int 
pasid)
 {
struct amdgpu_device *adev = get_amdgpu_device(kgd);
struct drm_file *drm_priv = filp->private_data;
@@ -1054,7 +1054,7 @@ int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct kgd_dev 
*kgd,
return -EINVAL;
 
/* Convert VM into a compute VM */
-   ret = amdgpu_vm_make_compute(adev, avm);
+   ret = amdgpu_vm_make_compute(adev, avm, pasid);
if (ret)
return ret;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 662aec5..0f6b304 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2690,7 +2690,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
amdgpu_vm *vm,
  * Returns:
  * 0 for success, -errno for errors.
  */
-int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm)
+int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm, 
unsigned int pasid)
 {
bool pte_support_ats = (adev->asic_type == CHIP_RAVEN);
int r;
@@ -2705,6 +2705,18 @@ int amdgpu_vm_make_compute(struct amdgpu_device *adev, 
struct amdgpu_vm *vm)
goto error;
}
 
+   if (vm->pasid) {
+   unsigned long flags;
+
+   spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);
+   r = idr_alloc(&adev->vm_manager.pasid_idr, vm, pasid, pasid + 1,
+ GFP_ATOMIC);
+   spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);
+
+   if (r < 0)
+   goto error;
+   }
+
/* Check if PD needs to be reinitialized and do it before
 * changing any other state, in case it fails.
 */
@@ -2713,7 +2725,7 @@ 

Re: Possible use_mm() mis-uses

2018-08-22 Thread Linus Torvalds
On Wed, Aug 22, 2018 at 12:44 PM Felix Kuehling  wrote:
>
> You're right, but that's a bit fragile and convoluted. I'll fix KFD to
> handle this more robustly. See the attached (untested) patch.

Yes, this patch that makes the whole "has to use current mm" or uses
"get_task_mm()" looks good from a VM< worry standpoint.

Thanks.

> And
> obviously that opaque pointer didn't work as intended. It just gets
> promoted to an mm_struct * without a warning from the compiler. Maybe I
> should change that to a long to make abuse easier to spot.

Using a "void *" is actually just about the worst possible type for
something that should be a cookie, because it silently translates to
any pointer.

"long" is actually not much better, becuase it will silently convert
to any integer type.

A good fairly type-safe cookie type is actually this:

typedef volatile const struct no_such_struct *cookie_ptr_t;

and now something of type "cookie_ptr_t" is actually very  hard to
convert to other types by mistake.

Note that the "volatile const" is not just random noise - it's so that
it won't even convert without warnings to things that take a "const
void *" as an argument (like, say, the source of 'memcpy()').

So you almost _have_ to explicitly cast it to use it.
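
A minimal sketch of what that could look like (the mm_to_cookie() /
cookie_to_mm() helpers are hypothetical, just to show that every use needs
an explicit cast):

#include <linux/mm_types.h>

/* struct no_such_struct is deliberately never defined anywhere, so the
 * cookie can never be dereferenced by mistake. */
typedef volatile const struct no_such_struct *cookie_ptr_t;

static inline cookie_ptr_t mm_to_cookie(struct mm_struct *mm)
{
	return (cookie_ptr_t)mm;
}

static inline struct mm_struct *cookie_to_mm(cookie_ptr_t cookie)
{
	/* Explicit cast required; passing the cookie straight to a
	 * "const void *" parameter like memcpy()'s source would warn,
	 * thanks to the volatile qualifier. */
	return (struct mm_struct *)cookie;
}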

   Linus
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[pull] amdgpu drm-next-4.19

2018-08-22 Thread Alex Deucher
Hi Dave,

Fixes for 4.19:
- Fix build when KCOV is enabled
- Misc display fixes
- A couple of SR-IOV fixes
- Fence fixes for eviction handling for KFD
- Misc other fixes

The following changes since commit 3d63a3c14741ed015948943076f3c6a2f2cd7b27:

  Merge tag 'drm-msm-next-2018-08-10' of 
git://people.freedesktop.org/~robclark/linux into drm-next (2018-08-17 10:46:51 
+1000)

are available in the git repository at:

  git://people.freedesktop.org/~agd5f/linux drm-next-4.19

for you to fetch changes up to 9d1d02ff36783f954a206dfbf7943b7f2057f58b:

  drm/amd/display: Don't build DCN1 when kcov is enabled (2018-08-21 14:33:59 
-0500)


Alex Deucher (1):
  drm/amdgpu/display: disable eDP fast boot optimization on DCE8

Christian König (3):
  drm/amdgpu: fix incorrect use of fcheck
  drm/amdgpu: fix incorrect use of drm_file->pid
  drm/amdgpu: fix amdgpu_amdkfd_remove_eviction_fence v3

Dmytro Laktyushkin (3):
  drm/amd/display: fix dp_ss_control vbios flag parsing
  drm/amd/display: make dp_ss_off optional
  drm/amd/display: fix dentist did ranges

Evan Quan (1):
  drm/amdgpu: set correct base for THM/NBIF/MP1 IP

Leo (Sunpeng) Li (2):
  Revert "drm/amdgpu/display: Replace CONFIG_DRM_AMD_DC_DCN1_0 with 
CONFIG_X86"
  drm/amd/display: Don't build DCN1 when kcov is enabled

Samson Tam (1):
  drm/amd/display: Do not retain link settings

Yintian Tao (2):
  drm/amdgpu: access register without KIQ
  drm/powerplay: enable dpm under pass-through

 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c   | 103 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c  |  21 ++---
 drivers/gpu/drm/amd/amdgpu/vega20_reg_init.c   |   3 +
 drivers/gpu/drm/amd/amdgpu/vi.c|   4 +-
 drivers/gpu/drm/amd/display/Kconfig|   6 ++
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  |  10 +-
 drivers/gpu/drm/amd/display/dc/Makefile|   2 +-
 .../amd/display/dc/bios/command_table_helper2.c|   2 +-
 drivers/gpu/drm/amd/display/dc/calcs/Makefile  |   2 +-
 drivers/gpu/drm/amd/display/dc/core/dc.c   |  21 -
 drivers/gpu/drm/amd/display/dc/core/dc_debug.c |   2 +-
 drivers/gpu/drm/amd/display/dc/core/dc_link.c  |   6 +-
 drivers/gpu/drm/amd/display/dc/core/dc_resource.c  |  12 +--
 drivers/gpu/drm/amd/display/dc/dc.h|   2 +-
 .../gpu/drm/amd/display/dc/dce/dce_clock_source.c  |   6 +-
 .../gpu/drm/amd/display/dc/dce/dce_clock_source.h  |   2 +-
 drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c|  18 ++--
 drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h|   2 +-
 drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c  |   6 +-
 .../drm/amd/display/dc/dce/dce_stream_encoder.c|  20 ++--
 .../amd/display/dc/dce110/dce110_hw_sequencer.c|  10 +-
 drivers/gpu/drm/amd/display/dc/gpio/Makefile   |   2 +-
 drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c   |   4 +-
 drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c |   4 +-
 drivers/gpu/drm/amd/display/dc/i2caux/Makefile |   2 +-
 drivers/gpu/drm/amd/display/dc/i2caux/i2caux.c |   4 +-
 drivers/gpu/drm/amd/display/dc/inc/core_types.h|   7 +-
 drivers/gpu/drm/amd/display/dc/irq/Makefile|   2 +-
 drivers/gpu/drm/amd/display/dc/irq/irq_service.c   |   2 +-
 drivers/gpu/drm/amd/display/dc/os_types.h  |   2 +-
 .../gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c  |   4 +-
 32 files changed, 154 insertions(+), 141 deletions(-)
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 01/11] drm/amd/display: Eliminate i2c hw function pointers

2018-08-22 Thread sunpeng.li
From: David Francis 

[Why]
The function pointers of the dce_i2c_hw struct were never
accessed from outside dce_i2c_hw.c and had only one version.
As function pointers take up space and make debugging difficult,
and they are not needed in this case, they should be removed.

[How]
Remove the dce_i2c_hw_funcs struct and make static all
functions that were previously a part of it.  Reorder
the functions in dce_i2c_hw.c.

Signed-off-by: David Francis 
Reviewed-by: Sun peng Li 
Acked-by: Leo Li 
---
 drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c | 607 
 drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.h |  29 --
 2 files changed, 291 insertions(+), 345 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c 
b/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c
index 3a63e3c..cd7da59 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c
@@ -36,223 +36,41 @@
 #define FN(reg_name, field_name) \
dce_i2c_hw->shifts->field_name, dce_i2c_hw->masks->field_name
 
-
-static inline void reset_hw_engine(struct dce_i2c_hw *dce_i2c_hw)
-{
-   REG_UPDATE_2(DC_I2C_CONTROL,
-DC_I2C_SW_STATUS_RESET, 1,
-DC_I2C_SW_STATUS_RESET, 1);
-}
-
-static bool is_hw_busy(struct dce_i2c_hw *dce_i2c_hw)
-{
-   uint32_t i2c_sw_status = 0;
-
-   REG_GET(DC_I2C_SW_STATUS, DC_I2C_SW_STATUS, &i2c_sw_status);
-   if (i2c_sw_status == DC_I2C_STATUS__DC_I2C_STATUS_IDLE)
-   return false;
-
-   reset_hw_engine(dce_i2c_hw);
-
-   REG_GET(DC_I2C_SW_STATUS, DC_I2C_SW_STATUS, &i2c_sw_status);
-   return i2c_sw_status != DC_I2C_STATUS__DC_I2C_STATUS_IDLE;
-}
-
-static void set_speed(
-   struct dce_i2c_hw *dce_i2c_hw,
-   uint32_t speed)
-{
-
-   if (speed) {
-   if (dce_i2c_hw->masks->DC_I2C_DDC1_START_STOP_TIMING_CNTL)
-   REG_UPDATE_N(SPEED, 3,
-FN(DC_I2C_DDC1_SPEED, 
DC_I2C_DDC1_PRESCALE), dce_i2c_hw->reference_frequency / speed,
-FN(DC_I2C_DDC1_SPEED, 
DC_I2C_DDC1_THRESHOLD), 2,
-FN(DC_I2C_DDC1_SPEED, 
DC_I2C_DDC1_START_STOP_TIMING_CNTL), speed > 50 ? 2:1);
-   else
-   REG_UPDATE_N(SPEED, 2,
-FN(DC_I2C_DDC1_SPEED, 
DC_I2C_DDC1_PRESCALE), dce_i2c_hw->reference_frequency / speed,
-FN(DC_I2C_DDC1_SPEED, 
DC_I2C_DDC1_THRESHOLD), 2);
-   }
-}
-
-bool dce_i2c_hw_engine_acquire_engine(
-   struct dce_i2c_hw *dce_i2c_hw,
-   struct ddc *ddc)
-{
-
-   enum gpio_result result;
-   uint32_t current_speed;
-
-   result = dal_ddc_open(ddc, GPIO_MODE_HARDWARE,
-   GPIO_DDC_CONFIG_TYPE_MODE_I2C);
-
-   if (result != GPIO_RESULT_OK)
-   return false;
-
-   dce_i2c_hw->ddc = ddc;
-
-
-   current_speed = dce_i2c_hw->funcs->get_speed(dce_i2c_hw);
-
-   if (current_speed)
-   dce_i2c_hw->original_speed = current_speed;
-
-   return true;
-}
-bool dce_i2c_engine_acquire_hw(
-   struct dce_i2c_hw *dce_i2c_hw,
-   struct ddc *ddc_handle)
-{
-
-   uint32_t counter = 0;
-   bool result;
-
-   do {
-   result = dce_i2c_hw_engine_acquire_engine(
-   dce_i2c_hw, ddc_handle);
-
-   if (result)
-   break;
-
-   /* i2c_engine is busy by VBios, lets wait and retry */
-
-   udelay(10);
-
-   ++counter;
-   } while (counter < 2);
-
-   if (result) {
-   if (!dce_i2c_hw->funcs->setup_engine(dce_i2c_hw)) {
-   dce_i2c_hw->funcs->release_engine(dce_i2c_hw);
-   result = false;
-   }
-   }
-
-   return result;
-}
-struct dce_i2c_hw *acquire_i2c_hw_engine(
-   struct resource_pool *pool,
-   struct ddc *ddc)
+static void disable_i2c_hw_engine(
+   struct dce_i2c_hw *dce_i2c_hw)
 {
-
-   struct dce_i2c_hw *engine = NULL;
-
-   if (!ddc)
-   return NULL;
-
-   if (ddc->hw_info.hw_supported) {
-   enum gpio_ddc_line line = dal_ddc_get_line(ddc);
-
-   if (line < pool->pipe_count)
-   engine = pool->hw_i2cs[line];
-   }
-
-   if (!engine)
-   return NULL;
-
-
-   if (!pool->i2c_hw_buffer_in_use &&
-   dce_i2c_engine_acquire_hw(engine, ddc)) {
-   pool->i2c_hw_buffer_in_use = true;
-   return engine;
-   }
-
-
-   return NULL;
+   REG_UPDATE_N(SETUP, 1, FN(SETUP, DC_I2C_DDC1_ENABLE), 0);
 }
 
-static bool setup_engine(
+static void execute_transaction(
struct dce_i2c_hw *dce_i2c_hw)
 {
-   uint32_t i2c_setup_limit = I2C_SETUP_TIME_LIMIT_DCE;
+   REG_UPDATE_N(SETUP, 5,
+FN(DC_I2C_DDC1_SETUP, 

[PATCH 05/11] drm/amd/display: eliminate long wait between register polls on Maximus

2018-08-22 Thread sunpeng.li
From: Ken Chalmers 

[Why]
Now that we "scale" time delays correctly on Maximus (as of diags svn
r170115), the forced "35 ms" wait time becomes 35 ms * 500 = 17.5
seconds, which is far too long.  Even having to repeat polling a
register once causes excessive delays on Maximus.

[How]
Just use the regular wait time passed to the generic_reg_wait()
function.  This is sufficient for Maximus now, and it also means that
there's one less "Maximus-only" code path in DAL.

Also disable the "REG_WAIT taking a while:" message on Maximus, since
things do take a while longer there and 1-2ms delays are not uncommon
(and nothing to worry about).
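
For reference, the kind of polling loop generic_reg_wait() implements
looks roughly like this standalone sketch (read_reg() and all timing
values are stand-ins, not the DC implementation):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Stub register read: pretend the HW becomes ready after a few polls. */
static uint32_t read_reg(void)
{
	static uint32_t status;

	return ++status >= 3 ? 1 : 0;
}

/* Poll until (value & mask) == cond or we run out of tries.  The
 * caller-supplied delay is used as-is on every platform; there is no
 * special-cased extra-long wait for emulation anymore. */
static bool poll_reg(uint32_t mask, uint32_t cond,
		     unsigned int delay_us, unsigned int max_tries)
{
	unsigned int i;

	for (i = 0; i < max_tries; i++) {
		if ((read_reg() & mask) == cond)
			return true;
		usleep(delay_us);	/* udelay()/msleep() in the kernel */
	}
	return false;
}

int main(void)
{
	printf("ready: %d\n", poll_reg(0x1, 0x1, 10, 100));
	return 0;
}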

Signed-off-by: Ken Chalmers 
Reviewed-by: Eric Bernstein 
Acked-by: Leo Li 
---
 drivers/gpu/drm/amd/display/dc/dc_helper.c | 9 ++---
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc_helper.c 
b/drivers/gpu/drm/amd/display/dc/dc_helper.c
index e68077e..fcfd50b 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_helper.c
+++ b/drivers/gpu/drm/amd/display/dc/dc_helper.c
@@ -219,12 +219,6 @@ uint32_t generic_reg_wait(const struct dc_context *ctx,
/* something is terribly wrong if time out is > 200ms. (5Hz) */
ASSERT(delay_between_poll_us * time_out_num_tries <= 20);
 
-   if (IS_FPGA_MAXIMUS_DC(ctx->dce_environment)) {
-   /* 35 seconds */
-   delay_between_poll_us = 35000;
-   time_out_num_tries = 1000;
-   }
-
for (i = 0; i <= time_out_num_tries; i++) {
if (i) {
if (delay_between_poll_us >= 1000)
@@ -238,7 +232,8 @@ uint32_t generic_reg_wait(const struct dc_context *ctx,
field_value = get_reg_field_value_ex(reg_val, mask, shift);
 
if (field_value == condition_value) {
-   if (i * delay_between_poll_us > 1000)
+   if (i * delay_between_poll_us > 1000 &&
+   
!IS_FPGA_MAXIMUS_DC(ctx->dce_environment))
dm_output_to_console("REG_WAIT taking a while: 
%dms in %s line:%d\n",
delay_between_poll_us * i / 
1000,
func_name, line);
-- 
2.7.4



[PATCH 00/11] DC Patches Aug 22, 2018

2018-08-22 Thread sunpeng.li
From: Leo Li 

Summary of change:
* Flattening and cleaning up of i2c code
* Spelling and grammar fixups in amdgpu_dm
* Implement hardware state logging via debugfs

David Francis (4):
  drm/amd/display: Eliminate i2c hw function pointers
  drm/amd/display: Improve spelling, grammar, and formatting of
amdgpu_dm.c comments
  drm/amd/display: Remove redundant i2c structs
  drm/amd/display: Flatten unnecessary i2c functions

Eric Yang (1):
  drm/amd/display: support 48 MHZ refclk off

Ken Chalmers (1):
  drm/amd/display: eliminate long wait between register polls on Maximus

Leo (Sunpeng) Li (1):
  drm/amd/display: Use non-deprecated vblank handler

Nicholas Kazlauskas (2):
  drm/amd/display: Add support for hw_state logging via debugfs
  drm/amd/display: Support reading hw state from debugfs file

SivapiriyanKumarasamy (1):
  drm/amd/display: Fix memory leak caused by missed dc_sink_release

Tony Cheng (1):
  drm/amd/display: dc 3.1.63

 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  | 204 ---
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c  |  86 +++
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.h  |   1 +
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c  |  89 ++-
 drivers/gpu/drm/amd/display/dc/core/dc.c   |  36 +-
 drivers/gpu/drm/amd/display/dc/core/dc_link.c  |   6 +-
 drivers/gpu/drm/amd/display/dc/dc.h|   2 +-
 drivers/gpu/drm/amd/display/dc/dc_helper.c |   9 +-
 drivers/gpu/drm/amd/display/dc/dce/dce_i2c.h   |  33 --
 drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c| 652 +
 drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.h|  34 --
 drivers/gpu/drm/amd/display/dc/dce/dce_i2c_sw.c|  83 +--
 .../drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c  |  24 +-
 drivers/gpu/drm/amd/display/dc/dm_services.h   |  10 +-
 drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h  |   3 +-
 .../gpu/drm/amd/display/include/logger_interface.h |   6 +-
 drivers/gpu/drm/amd/display/include/logger_types.h |   6 +
 17 files changed, 639 insertions(+), 645 deletions(-)

-- 
2.7.4



[PATCH 06/11] drm/amd/display: Fix memory leak caused by missed dc_sink_release

2018-08-22 Thread sunpeng.li
From: SivapiriyanKumarasamy 

[Why]
There is currently an intermittent hang in DTN stress testing
caused by a memory leak: memory is left unfreed during driver
disable.

[How]
Do a dc_sink_release in the case that skips it incorrectly.
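
The underlying bug pattern is an early return that skips a reference
drop; a minimal standalone sketch with hypothetical names (not the
dc_link code):

#include <stdbool.h>
#include <stdlib.h>

struct sink { int refcount; };

static void sink_release(struct sink *s)
{
	if (s && --s->refcount == 0)
		free(s);
}

/* Every exit path that was handed prev_sink must drop the reference,
 * including the early "fail-safe" return that used to leak it. */
static bool detect(struct sink *prev_sink, bool fail_safe)
{
	if (fail_safe) {
		sink_release(prev_sink);	/* the previously missing release */
		return false;
	}

	/* ... normal path, which already released prev_sink ... */
	sink_release(prev_sink);
	return true;
}

int main(void)
{
	struct sink *s = calloc(1, sizeof(*s));

	s->refcount = 1;
	(void)detect(s, true);
	return 0;
}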

Signed-off-by: SivapiriyanKumarasamy 
Reviewed-by: Aric Cyr 
Acked-by: Leo Li 
---
 drivers/gpu/drm/amd/display/dc/core/dc_link.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index 53ce2a9..789689f 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -757,8 +757,12 @@ bool dc_link_detect(struct dc_link *link, enum 
dc_detect_reason reason)
 * fail-safe mode
 */
if (dc_is_hdmi_signal(link->connector_signal) ||
-   dc_is_dvi_signal(link->connector_signal))
+   dc_is_dvi_signal(link->connector_signal)) {
+   if (prev_sink != NULL)
+   dc_sink_release(prev_sink);
+
return false;
+   }
default:
break;
}
-- 
2.7.4



[PATCH 03/11] drm/amd/display: Use non-deprecated vblank handler

2018-08-22 Thread sunpeng.li
From: "Leo (Sunpeng) Li" 

[Why]
drm_handle_vblank is deprecated. Use drm_crtc_handle_vblank instead.

Signed-off-by: Leo (Sunpeng) Li 
Reviewed-by: David Francis 
Acked-by: Leo Li 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 10 --
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 3224bdc..2287f09 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -311,16 +311,14 @@ static void dm_crtc_high_irq(void *interrupt_params)
 {
struct common_irq_params *irq_params = interrupt_params;
struct amdgpu_device *adev = irq_params->adev;
-   uint8_t crtc_index = 0;
struct amdgpu_crtc *acrtc;
 
acrtc = get_crtc_by_otg_inst(adev, irq_params->irq_src - 
IRQ_TYPE_VBLANK);
 
-   if (acrtc)
-   crtc_index = acrtc->crtc_id;
-
-   drm_handle_vblank(adev->ddev, crtc_index);
-   amdgpu_dm_crtc_handle_crc_irq(&acrtc->base);
+   if (acrtc) {
+   drm_crtc_handle_vblank(&acrtc->base);
+   amdgpu_dm_crtc_handle_crc_irq(&acrtc->base);
+   }
 }
 
 static int dm_set_clockgating_state(void *handle,
-- 
2.7.4



[PATCH 09/11] drm/amd/display: Remove redundant i2c structs

2018-08-22 Thread sunpeng.li
From: David Francis 

[Why]
The i2c code contains two structs that duplicate the information
already carried by i2c_payload.

[How]
Replace references to those structs with references to
i2c_payload.

dce_i2c_transaction_request->status was written to but never read,
so all references to it are removed.
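
Sketched standalone, the shape of the consolidation looks like this;
the field names mirror the uses in the diff below, but the struct
definition here is an assumption, not the real DC header:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One payload struct now carries everything the two removed structs
 * duplicated: direction, address, length and data buffer. */
struct i2c_payload {
	bool write;
	uint8_t address;
	uint32_t length;
	uint8_t *data;
};

/* Callers pass the payload straight through instead of first copying
 * it into a dce_i2c_transaction_request (whose status field nobody
 * ever read). */
static bool submit_payload(const struct i2c_payload *payload)
{
	return payload->length > 0 && payload->data;
}

int main(void)
{
	uint8_t buf[2] = { 0 };
	struct i2c_payload p = {
		.write = false, .address = 0x50, .length = 2, .data = buf,
	};

	printf("ok: %d\n", submit_payload(&p));
	return 0;
}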

Signed-off-by: David Francis 
Reviewed-by: Jordan Lazare 
Acked-by: Leo Li 
---
 drivers/gpu/drm/amd/display/dc/dce/dce_i2c.h| 33 --
 drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c | 84 +
 drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.h |  5 --
 drivers/gpu/drm/amd/display/dc/dce/dce_i2c_sw.c | 83 
 4 files changed, 28 insertions(+), 177 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_i2c.h 
b/drivers/gpu/drm/amd/display/dc/dce/dce_i2c.h
index d655f89..a171c5c 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_i2c.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_i2c.h
@@ -30,39 +30,6 @@
 #include "dce_i2c_hw.h"
 #include "dce_i2c_sw.h"
 
-enum dce_i2c_transaction_status {
-   DCE_I2C_TRANSACTION_STATUS_UNKNOWN = (-1L),
-   DCE_I2C_TRANSACTION_STATUS_SUCCEEDED,
-   DCE_I2C_TRANSACTION_STATUS_FAILED_CHANNEL_BUSY,
-   DCE_I2C_TRANSACTION_STATUS_FAILED_TIMEOUT,
-   DCE_I2C_TRANSACTION_STATUS_FAILED_PROTOCOL_ERROR,
-   DCE_I2C_TRANSACTION_STATUS_FAILED_NACK,
-   DCE_I2C_TRANSACTION_STATUS_FAILED_INCOMPLETE,
-   DCE_I2C_TRANSACTION_STATUS_FAILED_OPERATION,
-   DCE_I2C_TRANSACTION_STATUS_FAILED_INVALID_OPERATION,
-   DCE_I2C_TRANSACTION_STATUS_FAILED_BUFFER_OVERFLOW,
-   DCE_I2C_TRANSACTION_STATUS_FAILED_HPD_DISCON
-};
-
-enum dce_i2c_transaction_operation {
-   DCE_I2C_TRANSACTION_READ,
-   DCE_I2C_TRANSACTION_WRITE
-};
-
-struct dce_i2c_transaction_payload {
-   enum dce_i2c_transaction_address_space address_space;
-   uint32_t address;
-   uint32_t length;
-   uint8_t *data;
-};
-
-struct dce_i2c_transaction_request {
-   enum dce_i2c_transaction_operation operation;
-   struct dce_i2c_transaction_payload payload;
-   enum dce_i2c_transaction_status status;
-};
-
-
 bool dce_i2c_submit_command(
struct resource_pool *pool,
struct ddc *ddc,
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c 
b/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c
index cd7da59..2800d3f 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c
@@ -129,7 +129,7 @@ static uint32_t get_speed(
 
 static void process_channel_reply(
struct dce_i2c_hw *dce_i2c_hw,
-   struct i2c_reply_transaction_data *reply)
+   struct i2c_payload *reply)
 {
uint32_t length = reply->length;
uint8_t *buffer = reply->data;
@@ -522,9 +522,9 @@ static uint32_t get_transaction_timeout_hw(
return period_timeout * num_of_clock_stretches;
 }
 
-bool dce_i2c_hw_engine_submit_request(
+bool dce_i2c_hw_engine_submit_payload(
struct dce_i2c_hw *dce_i2c_hw,
-   struct dce_i2c_transaction_request *dce_i2c_request,
+   struct i2c_payload *payload,
bool middle_of_transaction)
 {
 
@@ -541,46 +541,36 @@ bool dce_i2c_hw_engine_submit_request(
 * the number of free bytes in HW buffer (minus one for address)
 */
 
-   if (dce_i2c_request->payload.length >=
+   if (payload->length >=
get_hw_buffer_available_size(dce_i2c_hw)) {
-   dce_i2c_request->status =
-   DCE_I2C_TRANSACTION_STATUS_FAILED_BUFFER_OVERFLOW;
return false;
}
 
-   if (dce_i2c_request->operation == DCE_I2C_TRANSACTION_READ)
+   if (!payload->write)
request.action = middle_of_transaction ?
DCE_I2C_TRANSACTION_ACTION_I2C_READ_MOT :
DCE_I2C_TRANSACTION_ACTION_I2C_READ;
-   else if (dce_i2c_request->operation == DCE_I2C_TRANSACTION_WRITE)
+   else
request.action = middle_of_transaction ?
DCE_I2C_TRANSACTION_ACTION_I2C_WRITE_MOT :
DCE_I2C_TRANSACTION_ACTION_I2C_WRITE;
-   else {
-   dce_i2c_request->status =
-   DCE_I2C_TRANSACTION_STATUS_FAILED_INVALID_OPERATION;
-   /* [anaumov] in DAL2, there was no "return false" */
-   return false;
-   }
 
-   request.address = (uint8_t) dce_i2c_request->payload.address;
-   request.length = dce_i2c_request->payload.length;
-   request.data = dce_i2c_request->payload.data;
+
+   request.address = (uint8_t) ((payload->address << 1) | !payload->write);
+   request.length = payload->length;
+   request.data = payload->data;
 
/* obtain timeout value before submitting request */
 
transaction_timeout = get_transaction_timeout_hw(
-   dce_i2c_hw, dce_i2c_request->payload.length + 1);
+   dce_i2c_hw, payload->length + 1);
 
   

[PATCH 08/11] drm/amd/display: Support reading hw state from debugfs file

2018-08-22 Thread sunpeng.li
From: Nicholas Kazlauskas 

[Why]

Logging hardware state can be done by triggering a write to the
debugfs file. It would also be useful to be able to read the hardware
state from the debugfs file, to generate a clean log without
timestamps.

[How]

Usage: cat /sys/kernel/debug/dri/0/amdgpu_dm_dtn_log

Threading is an obvious concern when dealing with multiple debugfs
operations, and blocking on global state in dm or dc seems unfavorable.

The implementation here adds an extra parameter for the debugfs log
context state. Existing code that made use of DTN_INFO and its
associated macros needed to be refactored to support this.

We don't know the size of the log in advance, so the log string is
reallocated dynamically as it grows. Once the log has been generated
it's copied into the user-supplied buffer for the debugfs read. This
allows seeking, but it's worth noting that, unlike triggering output
via dmesg, the hardware state might change between reads if your
buffer size is too small.
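
The core of the approach -- append into a dynamically grown buffer,
then copy the requested window out -- can be sketched in plain
userspace C, with realloc/memcpy standing in for krealloc and
copy_to_user:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct log_ctx {
	char *buf;
	size_t pos;	/* bytes written so far */
	size_t size;	/* allocated capacity */
};

/* Append, doubling the buffer when full (the kernel side would use
 * krealloc and vsnprintf instead). */
static void log_append(struct log_ctx *ctx, const char *msg)
{
	size_t len = strlen(msg);

	while (ctx->pos + len > ctx->size) {
		size_t nsize = ctx->size ? ctx->size * 2 : 64;
		char *nbuf = realloc(ctx->buf, nsize);

		if (!nbuf)
			return;	/* drop the message on allocation failure */
		ctx->buf = nbuf;
		ctx->size = nsize;
	}
	memcpy(ctx->buf + ctx->pos, msg, len);
	ctx->pos += len;
}

/* Copy at most 'size' bytes starting at *ppos and advance *ppos, so
 * repeated reads (or seeks) walk through the snapshot. */
static size_t log_read(const struct log_ctx *ctx, char *out, size_t size,
		       size_t *ppos)
{
	size_t to_copy = 0;

	if (*ppos < ctx->pos) {
		to_copy = ctx->pos - *ppos;
		if (to_copy > size)
			to_copy = size;
		memcpy(out, ctx->buf + *ppos, to_copy);
		*ppos += to_copy;
	}
	return to_copy;
}

int main(void)
{
	struct log_ctx ctx = { 0 };
	char out[8];
	size_t pos = 0, n;

	log_append(&ctx, "[dtn begin]\n");
	log_append(&ctx, "[dtn end]\n");
	while ((n = log_read(&ctx, out, sizeof(out), &pos)) > 0)
		fwrite(out, 1, n, stdout);
	free(ctx.buf);
	return 0;
}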

Signed-off-by: Nicholas Kazlauskas 
Reviewed-by: Jordan Lazare 
Acked-by: Leo Li 
---
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c  | 39 ++-
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c  | 81 +++---
 .../drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c  | 24 ---
 drivers/gpu/drm/amd/display/dc/dm_services.h   | 10 ++-
 drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h  |  3 +-
 .../gpu/drm/amd/display/include/logger_interface.h |  6 +-
 drivers/gpu/drm/amd/display/include/logger_types.h |  6 ++
 7 files changed, 140 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
index e79ac1e..35ca732 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
@@ -720,16 +720,49 @@ int connector_debugfs_init(struct amdgpu_dm_connector 
*connector)
return 0;
 }
 
+/*
+ * Writes DTN log state to the user supplied buffer.
+ * Example usage: cat /sys/kernel/debug/dri/0/amdgpu_dm_dtn_log
+ */
 static ssize_t dtn_log_read(
struct file *f,
char __user *buf,
size_t size,
loff_t *pos)
 {
-   /* TODO: Write log output to the user supplied buffer. */
-   return 0;
+   struct amdgpu_device *adev = file_inode(f)->i_private;
+   struct dc *dc = adev->dm.dc;
+   struct dc_log_buffer_ctx log_ctx = { 0 };
+   ssize_t result = 0;
+
+   if (!buf || !size)
+   return -EINVAL;
+
+   if (!dc->hwss.log_hw_state)
+   return 0;
+
+   dc->hwss.log_hw_state(dc, &log_ctx);
+
+   if (*pos < log_ctx.pos) {
+   size_t to_copy = log_ctx.pos - *pos;
+
+   to_copy = min(to_copy, size);
+
+   if (!copy_to_user(buf, log_ctx.buf + *pos, to_copy)) {
+   *pos += to_copy;
+   result = to_copy;
+   }
+   }
+
+   kfree(log_ctx.buf);
+
+   return result;
 }
 
+/*
+ * Writes DTN log state to dmesg when triggered via a write.
+ * Example usage: echo 1 > /sys/kernel/debug/dri/0/amdgpu_dm_dtn_log
+ */
 static ssize_t dtn_log_write(
struct file *f,
const char __user *buf,
@@ -744,7 +777,7 @@ static ssize_t dtn_log_write(
return 0;
 
if (dc->hwss.log_hw_state)
-   dc->hwss.log_hw_state(dc);
+   dc->hwss.log_hw_state(dc, NULL);
 
return size;
 }
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
index 86b63ce..39997d9 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
@@ -335,28 +335,91 @@ bool dm_helpers_dp_mst_send_payload_allocation(
return true;
 }
 
-void dm_dtn_log_begin(struct dc_context *ctx)
+void dm_dtn_log_begin(struct dc_context *ctx,
+   struct dc_log_buffer_ctx *log_ctx)
 {
-   pr_info("[dtn begin]\n");
+   static const char msg[] = "[dtn begin]\n";
+
+   if (!log_ctx) {
+   pr_info("%s", msg);
+   return;
+   }
+
+   dm_dtn_log_append_v(ctx, log_ctx, "%s", msg);
 }
 
 void dm_dtn_log_append_v(struct dc_context *ctx,
-   const char *msg, ...)
+   struct dc_log_buffer_ctx *log_ctx,
+   const char *msg, ...)
 {
-   struct va_format vaf;
va_list args;
+   size_t total;
+   int n;
+
+   if (!log_ctx) {
+   /* No context, redirect to dmesg. */
+   struct va_format vaf;
+
+   vaf.fmt = msg;
+   vaf.va = &args;
+
+   va_start(args, msg);
+   pr_info("%pV", &vaf);
+   va_end(args);
 
+   return;
+   }
+
+   /* Measure the output. */
va_start(args, msg);
-   vaf.fmt = msg;
-   vaf.va = &args;
+   n = 

[PATCH 10/11] drm/amd/display: support 48 MHZ refclk off

2018-08-22 Thread sunpeng.li
From: Eric Yang 

[Why]
On PCO and up, whenever the SMU receives a message indicating that
the active display count = 0, it will turn off the 48 MHz TMDP
reference clock by writing 1 to TMDP_48M_Refclk_Driver_PWDN. Once
this clock is off, no PHY register will respond to register access.
This means our current sequence of notifying the display count along
with requesting clocks will cause the driver to hang when accessing
PHY registers after the display count goes to 0.

[How]
Separate the PPSMC_MSG_SetDisplayCount message from the SMU messages
that request clocks, and have display own the sequencing of this
message so that we can send it at the appropriate time.
Do not redundantly power off the HW when entering S3/S4, since
display should already have been called to disable all streams, and
the ASIC will soon be powered down.
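
The sequencing rule reduces to: never let the SMU see a display count
of 0 while PHY registers are still about to be touched. A toy
standalone sketch of that ordering (stub functions, not the actual DC
call chain):

#include <stdio.h>

/* Stubs standing in for the real DC/SMU calls. */
static void set_display_count(int n) { printf("SMU display_count=%d\n", n); }
static void touch_phy(const char *what) { printf("PHY access: %s\n", what); }

/* Disable: touch the PHY while the count (and thus the 48 MHz refclk)
 * is still non-zero, and only then let the count drop to 0. */
static void disable_last_stream(void)
{
	touch_phy("disable stream");
	set_display_count(0);		/* refclk may be gated from here on */
}

/* Enable: raise the count first so the refclk is back up before any
 * PHY register access. */
static void enable_first_stream(void)
{
	set_display_count(1);
	touch_phy("enable stream");
}

int main(void)
{
	enable_first_stream();
	disable_last_stream();
	return 0;
}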

Signed-off-by: Eric Yang 
Reviewed-by: Tony Cheng 
Acked-by: Leo Li 
---
 drivers/gpu/drm/amd/display/dc/core/dc.c | 36 +---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c 
b/drivers/gpu/drm/amd/display/dc/core/dc.c
index b594806..5108873 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -1367,6 +1367,34 @@ static struct dc_stream_status *stream_get_status(
 
 static const enum surface_update_type update_surface_trace_level = 
UPDATE_TYPE_FULL;
 
+static void notify_display_count_to_smu(
+   struct dc *dc,
+   struct dc_state *context)
+{
+   int i, display_count;
+   struct pp_smu_funcs_rv *pp_smu = dc->res_pool->pp_smu;
+
+   /*
+* if function pointer not set up, this message is
+* sent as part of pplib_apply_display_requirements.
+* So just return.
+*/
+   if (!pp_smu->set_display_count)
+   return;
+
+   display_count = 0;
+   for (i = 0; i < context->stream_count; i++) {
+   const struct dc_stream_state *stream = context->streams[i];
+
+   /* only notify active stream */
+   if (stream->dpms_off)
+   continue;
+
+   display_count++;
+   }
+
+   pp_smu->set_display_count(&pp_smu->pp_smu, display_count);
+}
 
 static void commit_planes_do_stream_update(struct dc *dc,
struct dc_stream_state *stream,
@@ -1420,13 +1448,17 @@ static void commit_planes_do_stream_update(struct dc 
*dc,
core_link_disable_stream(pipe_ctx, 
KEEP_ACQUIRED_RESOURCE);

dc->hwss.pplib_apply_display_requirements(
dc, dc->current_state);
+   notify_display_count_to_smu(dc, 
dc->current_state);
} else {

dc->hwss.pplib_apply_display_requirements(
dc, dc->current_state);
+   notify_display_count_to_smu(dc, 
dc->current_state);

core_link_enable_stream(dc->current_state, pipe_ctx);
}
}
 
+
+
if (stream_update->abm_level && 
pipe_ctx->stream_res.abm) {
if (pipe_ctx->stream_res.tg->funcs->is_blanked) 
{
// if otg funcs defined check if 
blanked before programming
@@ -1662,9 +1694,7 @@ void dc_set_power_state(
dc->hwss.init_hw(dc);
break;
default:
-
-   dc->hwss.power_down(dc);
-
+   ASSERT(dc->current_state->stream_count == 0);
/* Zero out the current context so that on resume we start with
 * clean state, and dc hw programming optimizations will not
 * cause any trouble.
-- 
2.7.4



[PATCH 07/11] drm/amd/display: Improve spelling, grammar, and formatting of amdgpu_dm.c comments

2018-08-22 Thread sunpeng.li
From: David Francis 

[Why]
Good spelling and grammar make comments
clearer and more pleasant to read.

Linux has coding standards for comments
that we should try to follow.

[How]
Fix obvious spelling and grammar issues

Ensure all comments use '/*' and '*/' and multi-line comments
follow the Linux convention

Remove line-of-stars comments that do not separate sections
of code and comments referring to lines of code that have
since been removed
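
For reference, the multi-line comment convention being converged on is
the standard kernel style:

	/*
	 * Preferred multi-line comment style: the opening and closing
	 * markers sit on their own lines, and every line in between
	 * starts with " * ".
	 */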

Signed-off-by: David Francis 
Reviewed-by: Nicholas Kazlauskas 
Acked-by: Leo Li 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 189 +-
 1 file changed, 109 insertions(+), 80 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 739a797..66add25 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -75,7 +75,8 @@
 static int amdgpu_dm_init(struct amdgpu_device *adev);
 static void amdgpu_dm_fini(struct amdgpu_device *adev);
 
-/* initializes drm_device display related structures, based on the information
+/*
+ * initializes drm_device display related structures, based on the information
  * provided by DAL. The drm strcutures are: drm_crtc, drm_connector,
  * drm_encoder, drm_mode_config
  *
@@ -237,10 +238,6 @@ get_crtc_by_otg_inst(struct amdgpu_device *adev,
struct drm_crtc *crtc;
struct amdgpu_crtc *amdgpu_crtc;
 
-   /*
-* following if is check inherited from both functions where this one is
-* used now. Need to be checked why it could happen.
-*/
if (otg_inst == -1) {
WARN_ON(1);
return adev->mode_info.crtcs[0];
@@ -266,7 +263,7 @@ static void dm_pflip_high_irq(void *interrupt_params)
amdgpu_crtc = get_crtc_by_otg_inst(adev, irq_params->irq_src - 
IRQ_TYPE_PFLIP);
 
/* IRQ could occur when in initial stage */
-   /*TODO work and BO cleanup */
+   /* TODO work and BO cleanup */
if (amdgpu_crtc == NULL) {
DRM_DEBUG_DRIVER("CRTC is null, returning.\n");
return;
@@ -285,9 +282,9 @@ static void dm_pflip_high_irq(void *interrupt_params)
}
 
 
-   /* wakeup usersapce */
+   /* wake up userspace */
if (amdgpu_crtc->event) {
-   /* Update to correct count/ts if racing with vblank irq */
+   /* Update to correct count(s) if racing with vblank irq */
drm_crtc_accurate_vblank_count(&amdgpu_crtc->base);
 
drm_crtc_send_vblank_event(&amdgpu_crtc->base, 
amdgpu_crtc->event);
@@ -385,8 +382,8 @@ static void amdgpu_dm_fbc_init(struct drm_connector 
*connector)
 
 }
 
-
-/* Init display KMS
+/*
+ * Init display KMS
  *
  * Returns 0 on success
  */
@@ -695,7 +692,7 @@ static int dm_resume(void *handle)
mutex_unlock(&aconnector->hpd_lock);
}
 
-   /* Force mode set in atomic comit */
+   /* Force mode set in atomic commit */
for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i)
new_crtc_state->active_changed = true;
 
@@ -826,24 +823,27 @@ amdgpu_dm_update_connector_after_detect(struct 
amdgpu_dm_connector *aconnector)
 
sink = aconnector->dc_link->local_sink;
 
-   /* Edid mgmt connector gets first update only in mode_valid hook and 
then
+   /*
+* Edid mgmt connector gets first update only in mode_valid hook and 
then
 * the connector sink is set to either fake or physical sink depends on 
link status.
-* don't do it here if u are during boot
+* Skip if already done during boot.
 */
if (aconnector->base.force != DRM_FORCE_UNSPECIFIED
&& aconnector->dc_em_sink) {
 
-   /* For S3 resume with headless use eml_sink to fake stream
-* because on resume connecotr->sink is set ti NULL
+   /*
+* For S3 resume with headless use eml_sink to fake stream
+* because on resume connector->sink is set to NULL
 */
mutex_lock(&dev->mode_config.mutex);
 
if (sink) {
if (aconnector->dc_sink) {
amdgpu_dm_update_freesync_caps(connector, NULL);
-   /* retain and release bellow are used for
-* bump up refcount for sink because the link 
don't point
-* to it anymore after disconnect so on next 
crtc to connector
+   /*
+* retain and release below are used to
+* bump up refcount for sink because the link 
doesn't point
+* to it anymore after disconnect, so on next 
crtc to connector
 * reshuffle by UMD we will get into unwanted 
dc_sink release
  

[PATCH 11/11] drm/amd/display: Flatten unnecessary i2c functions

2018-08-22 Thread sunpeng.li
From: David Francis 

[Why]
The dce_i2c_hw code contained four functions that were only
called in one place and did not have a clearly delineated
purpose.

[How]
Inline these functions, keeping the same functionality.

This is not a functional change.

The functions disable_i2c_hw_engine and release_engine_dce_hw were
pulled into their respective callers.

The most interesting part of this change is the acquire functions.
dce_i2c_hw_engine_acquire_engine was pulled into
dce_i2c_engine_acquire_hw, and dce_i2c_engine_acquire_hw was pulled
into acquire_i2c_hw_engine.

Some notes to show that this change is not a functional change:
-Failure conditions in any function resulted in a cascade of calls that
ended in a 'return NULL'.
Those are replaced with a direct 'return NULL'.

-The variable result is the one from dce_i2c_hw_engine_acquire_engine.
The boolean result used as part of return logic was removed.

-As the second half of dce_i2c_hw_engine_acquire_engine is only executed
if that function is returning true and therefore exiting the do-while
loop in dce_i2c_engine_acquire_hw, those lines were moved outside
of the loop.
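
In miniature, the transformation looks like this standalone sketch
(made-up names, behaviour preserved):

#include <stdbool.h>
#include <stdio.h>

struct hw { int enabled; };

static void reg_update(struct hw *hw, int val)
{
	hw->enabled = val;
}

/* Before, a one-line helper with a single caller:
 *
 *	static void disable_engine(struct hw *hw)
 *	{
 *		reg_update(hw, 0);
 *	}
 *
 * After, its body is pulled into that caller, behaviour unchanged: */
static void release_engine(struct hw *hw, bool keep_power)
{
	if (!keep_power)
		reg_update(hw, 0);	/* was: disable_engine(hw) */
}

int main(void)
{
	struct hw hw = { .enabled = 1 };

	release_engine(&hw, false);
	printf("enabled=%d\n", hw.enabled);
	return 0;
}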

Signed-off-by: David Francis 
Acked-by: Leo Li 
---
 drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c | 111 
 1 file changed, 34 insertions(+), 77 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c 
b/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c
index 2800d3f..40f2d6e 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c
@@ -36,12 +36,6 @@
 #define FN(reg_name, field_name) \
dce_i2c_hw->shifts->field_name, dce_i2c_hw->masks->field_name
 
-static void disable_i2c_hw_engine(
-   struct dce_i2c_hw *dce_i2c_hw)
-{
-   REG_UPDATE_N(SETUP, 1, FN(SETUP, DC_I2C_DDC1_ENABLE), 0);
-}
-
 static void execute_transaction(
struct dce_i2c_hw *dce_i2c_hw)
 {
@@ -348,60 +342,40 @@ static void release_engine(
REG_UPDATE(DC_I2C_CONTROL, DC_I2C_SW_STATUS_RESET, 1);
/* HW I2c engine - clock gating feature */
if (!dce_i2c_hw->engine_keep_power_up_count)
-   disable_i2c_hw_engine(dce_i2c_hw);
+   REG_UPDATE_N(SETUP, 1, FN(SETUP, DC_I2C_DDC1_ENABLE), 0);
 
 }
 
-static void release_engine_dce_hw(
+struct dce_i2c_hw *acquire_i2c_hw_engine(
struct resource_pool *pool,
-   struct dce_i2c_hw *dce_i2c_hw)
-{
-   pool->i2c_hw_buffer_in_use = false;
-
-   release_engine(dce_i2c_hw);
-   dal_ddc_close(dce_i2c_hw->ddc);
-
-   dce_i2c_hw->ddc = NULL;
-}
-
-bool dce_i2c_hw_engine_acquire_engine(
-   struct dce_i2c_hw *dce_i2c_hw,
struct ddc *ddc)
 {
-
+   uint32_t counter = 0;
enum gpio_result result;
uint32_t current_speed;
+   struct dce_i2c_hw *dce_i2c_hw = NULL;
 
-   result = dal_ddc_open(ddc, GPIO_MODE_HARDWARE,
-   GPIO_DDC_CONFIG_TYPE_MODE_I2C);
-
-   if (result != GPIO_RESULT_OK)
-   return false;
-
-   dce_i2c_hw->ddc = ddc;
-
-
-   current_speed = get_speed(dce_i2c_hw);
+   if (!ddc)
+   return NULL;
 
-   if (current_speed)
-   dce_i2c_hw->original_speed = current_speed;
+   if (ddc->hw_info.hw_supported) {
+   enum gpio_ddc_line line = dal_ddc_get_line(ddc);
 
-   return true;
-}
+   if (line < pool->pipe_count)
+   dce_i2c_hw = pool->hw_i2cs[line];
+   }
 
-bool dce_i2c_engine_acquire_hw(
-   struct dce_i2c_hw *dce_i2c_hw,
-   struct ddc *ddc_handle)
-{
+   if (!dce_i2c_hw)
+   return NULL;
 
-   uint32_t counter = 0;
-   bool result;
+   if (pool->i2c_hw_buffer_in_use)
+   return NULL;
 
do {
-   result = dce_i2c_hw_engine_acquire_engine(
-   dce_i2c_hw, ddc_handle);
+   result = dal_ddc_open(ddc, GPIO_MODE_HARDWARE,
+   GPIO_DDC_CONFIG_TYPE_MODE_I2C);
 
-   if (result)
+   if (result == GPIO_RESULT_OK)
break;
 
/* i2c_engine is busy by VBios, lets wait and retry */
@@ -411,45 +385,23 @@ bool dce_i2c_engine_acquire_hw(
++counter;
} while (counter < 2);
 
-   if (result) {
-   if (!setup_engine(dce_i2c_hw)) {
-   release_engine(dce_i2c_hw);
-   result = false;
-   }
-   }
-
-   return result;
-}
-
-struct dce_i2c_hw *acquire_i2c_hw_engine(
-   struct resource_pool *pool,
-   struct ddc *ddc)
-{
-
-   struct dce_i2c_hw *engine = NULL;
-
-   if (!ddc)
+   if (result != GPIO_RESULT_OK)
return NULL;
 
-   if (ddc->hw_info.hw_supported) {
-   enum gpio_ddc_line line = dal_ddc_get_line(ddc);
-
-   if (line < pool->pipe_count)
-   engine = pool->hw_i2cs[line];
-   }
+   

[PATCH 02/11] drm/amd/display: dc 3.1.63

2018-08-22 Thread sunpeng.li
From: Tony Cheng 

Signed-off-by: Tony Cheng 
Reviewed-by: Steven Chiu 
Acked-by: Leo Li 
---
 drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h 
b/drivers/gpu/drm/amd/display/dc/dc.h
index 2bb7719..9ce14a2 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -38,7 +38,7 @@
 #include "inc/compressor.h"
 #include "dml/display_mode_lib.h"
 
-#define DC_VER "3.1.62"
+#define DC_VER "3.1.63"
 
 #define MAX_SURFACES 3
 #define MAX_STREAMS 6
-- 
2.7.4



[PATCH 04/11] drm/amd/display: Add support for hw_state logging via debugfs

2018-08-22 Thread sunpeng.li
From: Nicholas Kazlauskas 

[Why]

We have logging methods for printing hardware state for newer ASICs
but no way to trigger the log output.

[How]

Add support for triggering the output via writing to a debugfs file
entry. Log output currently goes into dmesg for convenience, but
accessing via a read should be possible later.

Signed-off-by: Nicholas Kazlauskas 
Reviewed-by: Jordan Lazare 
Acked-by: Leo Li 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  |  5 ++
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c  | 53 ++
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.h  |  1 +
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c  | 22 +++--
 4 files changed, 77 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 2287f09..739a797 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -480,6 +480,11 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
goto error;
}
 
+#if defined(CONFIG_DEBUG_FS)
+   if (dtn_debugfs_init(adev))
+   DRM_ERROR("amdgpu: failed initialize dtn debugfs support.\n");
+#endif
+
DRM_DEBUG_DRIVER("KMS initialized.\n");
 
return 0;
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
index 0d9e410..e79ac1e 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
@@ -720,3 +720,56 @@ int connector_debugfs_init(struct amdgpu_dm_connector 
*connector)
return 0;
 }
 
+static ssize_t dtn_log_read(
+   struct file *f,
+   char __user *buf,
+   size_t size,
+   loff_t *pos)
+{
+   /* TODO: Write log output to the user supplied buffer. */
+   return 0;
+}
+
+static ssize_t dtn_log_write(
+   struct file *f,
+   const char __user *buf,
+   size_t size,
+   loff_t *pos)
+{
+   struct amdgpu_device *adev = file_inode(f)->i_private;
+   struct dc *dc = adev->dm.dc;
+
+   /* Write triggers log output via dmesg. */
+   if (size == 0)
+   return 0;
+
+   if (dc->hwss.log_hw_state)
+   dc->hwss.log_hw_state(dc);
+
+   return size;
+}
+
+int dtn_debugfs_init(struct amdgpu_device *adev)
+{
+   static const struct file_operations dtn_log_fops = {
+   .owner = THIS_MODULE,
+   .read = dtn_log_read,
+   .write = dtn_log_write,
+   .llseek = default_llseek
+   };
+
+   struct drm_minor *minor = adev->ddev->primary;
+   struct dentry *root = minor->debugfs_root;
+
+   struct dentry *ent = debugfs_create_file(
+   "amdgpu_dm_dtn_log",
+   0644,
+   root,
+   adev,
+   &dtn_log_fops);
+
+   if (IS_ERR(ent))
+   return PTR_ERR(ent);
+
+   return 0;
+}
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.h 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.h
index d9ed1b2..bdef158 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.h
@@ -30,5 +30,6 @@
 #include "amdgpu_dm.h"
 
 int connector_debugfs_init(struct amdgpu_dm_connector *connector);
+int dtn_debugfs_init(struct amdgpu_device *adev);
 
 #endif
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
index 8403b6a..86b63ce 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
@@ -336,14 +336,28 @@ bool dm_helpers_dp_mst_send_payload_allocation(
 }
 
 void dm_dtn_log_begin(struct dc_context *ctx)
-{}
+{
+   pr_info("[dtn begin]\n");
+}
 
 void dm_dtn_log_append_v(struct dc_context *ctx,
-   const char *pMsg, ...)
-{}
+   const char *msg, ...)
+{
+   struct va_format vaf;
+   va_list args;
+
+   va_start(args, msg);
+   vaf.fmt = msg;
+   vaf.va = &args;
+
+   pr_info("%pV", &vaf);
+   va_end(args);
+}
 
 void dm_dtn_log_end(struct dc_context *ctx)
-{}
+{
+   pr_info("[dtn end]\n");
+}
 
 bool dm_helpers_dp_mst_start_top_mgr(
struct dc_context *ctx,
-- 
2.7.4



Re: Possible use_mm() mis-uses

2018-08-22 Thread Linus Torvalds
On Wed, Aug 22, 2018 at 12:37 PM Oded Gabbay  wrote:
>
> Having said that, I think we *are* protected by the mmu_notifier
> release because if the process suddenly dies, we will gracefully clean
> the process's data in our driver and on the H/W before returning to
> the mm core code. And before we return to the mm core code, we set the
> mm pointer to NULL. And the graceful cleaning should be serialized
> with the load_hqd uses.

So I'm a bit nervous about the mmu_notifier model (and the largely
equivalent exit_aio() model for the USB gadget AIO uses).

The reason I'm nervous about it is that the mmu_notifier() gets called
only after the mm_users count has already been decremented to zero
(and the exact same thing goes for exit_aio()).

Now that's fine if you actually get rid of all accesses in
mmu_notifier_release() or in exit_aio(), because the page tables still
exist at that point - they are in the process of being torn down, but
they haven't been torn down yet.

But for something like a kernel thread doing use_mm(), the thing that
worries me is a pattern something like this:

  kwork thread  exit thread
    

mmput() ->
  mm_users goes to zero

  use_mm(mmptr);
  ..

  mmu_notifier_release();
  exit_mm() ->
exit_aio()

and the pattern is basically the same regardless of whether you use
mmu_notifier_release() or depend on some exit_aio() flushing your aio
work: the use_mm() can be called with a mm that has already had its
mm_users count decremented to zero, and that is now scheduled to be
free'd.

Does it "work"? Yes. Kind of. At least if the mmu notifier and/or
exit_aio() actually makes sure to wait for any kwork thread thing. But
it's a bit of a worrisome pattern.

   Linus
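
For contrast, the vhost-style lifetime described above can be sketched
with the real mm API names (a kernel-side sketch, not taken from any
driver; error handling elided):

#include <linux/sched/mm.h>	/* get_task_mm(), mmput() */
#include <linux/mmu_context.h>	/* use_mm(), unuse_mm() */

/* Owner (task context): pin the address space itself -- mm_users --
 * not just the mm_struct, so the page tables stay alive for the
 * worker. */
static struct mm_struct *worker_pin_mm(void)
{
	return get_task_mm(current);	/* NULL for kernel threads */
}

/* Worker (kernel thread): only safe because mm_users > 0 here. */
static void worker_run(struct mm_struct *mm)
{
	use_mm(mm);
	/* ... copy_from_user()/copy_to_user() on behalf of the task ... */
	unuse_mm(mm);
}

/* Teardown: drop the reference; only now may the page tables be torn
 * down. */
static void worker_unpin_mm(struct mm_struct *mm)
{
	if (mm)
		mmput(mm);
}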


Re: Possible use_mm() mis-uses

2018-08-22 Thread Oded Gabbay
On Wed, Aug 22, 2018 at 10:58 PM Linus Torvalds
 wrote:
>
> On Wed, Aug 22, 2018 at 12:37 PM Oded Gabbay  wrote:
> >
> > Having said that, I think we *are* protected by the mmu_notifier
> > release because if the process suddenly dies, we will gracefully clean
> > the process's data in our driver and on the H/W before returning to
> > the mm core code. And before we return to the mm core code, we set the
> > mm pointer to NULL. And the graceful cleaning should be serialized
> > with the load_hqd uses.
>
> So I'm a bit nervous about the mmu_notifier model (and the largely
> equivalent exit_aio() model for the USB gadget AIO uses).
>
> The reason I'm nervous about it is that the mmu_notifier() gets called
> only after the mm_users count has already been decremented to zero
> (and the exact same thing goes for exit_aio()).
>
> Now that's fine if you actually get rid of all accesses in
> mmu_notifier_release() or in exit_aio(), because the page tables still
> exist at that point - they are in the process of being torn down, but
> they haven't been torn down yet.
>
> But for something like a kernel thread doing use_mm(), the thing that
> worries me is a pattern something like this:
>
>   kwork thread  exit thread
>     
>
> mmput() ->
>   mm_users goes to zero
>
>   use_mm(mmptr);
>   ..
>
>   mmu_notifier_release();
>   exit_mm() ->
> exit_aio()
>
> and the pattern is basically the same regardless of whether you use
> mmu_notifier_release() or depend on some exit_aio() flushing your aio
> work: the use_mm() can be called with a mm that has already had its
> mm_users count decremented to zero, and that is now scheduled to be
> free'd.
>
> Does it "work"? Yes. Kind of. At least if the mmu notifier and/or
> exit_aio() actually makes sure to wait for any kwork thread thing. But
> it's a bit of a worrisome pattern.
>
>Linus

Yes, agreed, and that's why we will be on the safe side and eliminate
this pattern from our code and make sure we won't add this pattern in
the future.

Oded


Re: [PATCH 10/11] drm/amdgpu: add helper for VM PD/PT allocation parameters

2018-08-22 Thread Alex Deucher
On Wed, Aug 22, 2018 at 11:06 AM Christian König
 wrote:
>
> Add a helper function to figure them out only once.
>
> Signed-off-by: Christian König 

Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 --
>  1 file changed, 28 insertions(+), 33 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 87e3d44b0a3f..928fdae0dab4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -446,6 +446,31 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
> return r;
>  }
>
> +/**
> + * amdgpu_vm_bo_param - fill in parameters for PD/PT allocation
> + *
> + * @adev: amdgpu_device pointer
> + * @vm: requesting vm
> + * @bp: resulting BO allocation parameters
> + */
> +static void amdgpu_vm_bo_param(struct amdgpu_device *adev, struct amdgpu_vm 
> *vm,
> +  int level, struct amdgpu_bo_param *bp)
> +{
> +   memset(bp, 0, sizeof(*bp));
> +
> +   bp->size = amdgpu_vm_bo_size(adev, level);
> +   bp->byte_align = AMDGPU_GPU_PAGE_SIZE;
> +   bp->domain = AMDGPU_GEM_DOMAIN_VRAM;
> +   bp->flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
> +   if (vm->use_cpu_for_update)
> +   bp->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
> +   else
> +   bp->flags |= AMDGPU_GEM_CREATE_SHADOW;
> +   bp->type = ttm_bo_type_kernel;
> +   if (vm->root.base.bo)
> +   bp->resv = vm->root.base.bo->tbo.resv;
> +}
> +
>  /**
>   * amdgpu_vm_alloc_levels - allocate the PD/PT levels
>   *
> @@ -469,8 +494,8 @@ static int amdgpu_vm_alloc_levels(struct amdgpu_device 
> *adev,
>   unsigned level, bool ats)
>  {
> unsigned shift = amdgpu_vm_level_shift(adev, level);
> +   struct amdgpu_bo_param bp;
> unsigned pt_idx, from, to;
> -   u64 flags;
> int r;
>
> if (!parent->entries) {
> @@ -494,29 +519,14 @@ static int amdgpu_vm_alloc_levels(struct amdgpu_device 
> *adev,
> saddr = saddr & ((1 << shift) - 1);
> eaddr = eaddr & ((1 << shift) - 1);
>
> -   flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
> -   if (vm->use_cpu_for_update)
> -   flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
> -   else
> -   flags |= (AMDGPU_GEM_CREATE_NO_CPU_ACCESS |
> -   AMDGPU_GEM_CREATE_SHADOW);
> +   amdgpu_vm_bo_param(adev, vm, level, &bp);
>
> /* walk over the address space and allocate the page tables */
> for (pt_idx = from; pt_idx <= to; ++pt_idx) {
> -   struct reservation_object *resv = vm->root.base.bo->tbo.resv;
> struct amdgpu_vm_pt *entry = >entries[pt_idx];
> struct amdgpu_bo *pt;
>
> if (!entry->base.bo) {
> -   struct amdgpu_bo_param bp;
> -
> -   memset(&bp, 0, sizeof(bp));
> -   bp.size = amdgpu_vm_bo_size(adev, level);
> -   bp.byte_align = AMDGPU_GPU_PAGE_SIZE;
> -   bp.domain = AMDGPU_GEM_DOMAIN_VRAM;
> -   bp.flags = flags;
> -   bp.type = ttm_bo_type_kernel;
> -   bp.resv = resv;
> r = amdgpu_bo_create(adev, &bp, &pt);
> if (r)
> return r;
> @@ -2564,8 +2574,6 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
> amdgpu_vm *vm,
>  {
> struct amdgpu_bo_param bp;
> struct amdgpu_bo *root;
> -   unsigned long size;
> -   uint64_t flags;
> int r, i;
>
> vm->va = RB_ROOT_CACHED;
> @@ -2602,20 +2610,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
> amdgpu_vm *vm,
>   "CPU update of VM recommended only for large BAR system\n");
> vm->last_update = NULL;
>
> -   flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
> -   if (vm->use_cpu_for_update)
> -   flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
> -   else
> -   flags |= AMDGPU_GEM_CREATE_SHADOW;
> -
> -   size = amdgpu_vm_bo_size(adev, adev->vm_manager.root_level);
> -   memset(&bp, 0, sizeof(bp));
> -   bp.size = size;
> -   bp.byte_align = AMDGPU_GPU_PAGE_SIZE;
> -   bp.domain = AMDGPU_GEM_DOMAIN_VRAM;
> -   bp.flags = flags;
> -   bp.type = ttm_bo_type_kernel;
> -   bp.resv = NULL;
> +   amdgpu_vm_bo_param(adev, vm, adev->vm_manager.root_level, &bp);
> r = amdgpu_bo_create(adev, &bp, &root);
> if (r)
> goto error_free_sched_entity;
> --
> 2.17.1
>

Re: [PATCH 05/11] drm/amdgpu: rename gart.robj into gart.bo

2018-08-22 Thread Alex Deucher
On Wed, Aug 22, 2018 at 11:05 AM Christian König
 wrote:
>
> sed -i "s/gart.robj/gart.bo/" drivers/gpu/drm/amd/amdgpu/*.c
> sed -i "s/gart.robj/gart.bo/" drivers/gpu/drm/amd/amdgpu/*.h
>
> Just cleaning up radeon leftovers.
>
> Signed-off-by: Christian König 

Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c | 32 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h |  2 +-
>  drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c|  4 +--
>  drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c|  4 +--
>  drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c|  4 +--
>  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c|  4 +--
>  6 files changed, 25 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
> index a54d5655a191..f5cb5e2856c1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
> @@ -112,7 +112,7 @@ int amdgpu_gart_table_vram_alloc(struct amdgpu_device 
> *adev)
>  {
> int r;
>
> -   if (adev->gart.robj == NULL) {
> +   if (adev->gart.bo == NULL) {
> struct amdgpu_bo_param bp;
>
> memset(, 0, sizeof(bp));
> @@ -123,7 +123,7 @@ int amdgpu_gart_table_vram_alloc(struct amdgpu_device 
> *adev)
> AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
> bp.type = ttm_bo_type_kernel;
> bp.resv = NULL;
> -   r = amdgpu_bo_create(adev, &bp, &adev->gart.robj);
> +   r = amdgpu_bo_create(adev, &bp, &adev->gart.bo);
> if (r) {
> return r;
> }
> @@ -145,19 +145,19 @@ int amdgpu_gart_table_vram_pin(struct amdgpu_device 
> *adev)
>  {
> int r;
>
> -   r = amdgpu_bo_reserve(adev->gart.robj, false);
> +   r = amdgpu_bo_reserve(adev->gart.bo, false);
> if (unlikely(r != 0))
> return r;
> -   r = amdgpu_bo_pin(adev->gart.robj, AMDGPU_GEM_DOMAIN_VRAM);
> +   r = amdgpu_bo_pin(adev->gart.bo, AMDGPU_GEM_DOMAIN_VRAM);
> if (r) {
> -   amdgpu_bo_unreserve(adev->gart.robj);
> +   amdgpu_bo_unreserve(adev->gart.bo);
> return r;
> }
> -   r = amdgpu_bo_kmap(adev->gart.robj, &adev->gart.ptr);
> +   r = amdgpu_bo_kmap(adev->gart.bo, &adev->gart.ptr);
> if (r)
> -   amdgpu_bo_unpin(adev->gart.robj);
> -   amdgpu_bo_unreserve(adev->gart.robj);
> -   adev->gart.table_addr = amdgpu_bo_gpu_offset(adev->gart.robj);
> +   amdgpu_bo_unpin(adev->gart.bo);
> +   amdgpu_bo_unreserve(adev->gart.bo);
> +   adev->gart.table_addr = amdgpu_bo_gpu_offset(adev->gart.bo);
> return r;
>  }
>
> @@ -173,14 +173,14 @@ void amdgpu_gart_table_vram_unpin(struct amdgpu_device 
> *adev)
>  {
> int r;
>
> -   if (adev->gart.robj == NULL) {
> +   if (adev->gart.bo == NULL) {
> return;
> }
> -   r = amdgpu_bo_reserve(adev->gart.robj, true);
> +   r = amdgpu_bo_reserve(adev->gart.bo, true);
> if (likely(r == 0)) {
> -   amdgpu_bo_kunmap(adev->gart.robj);
> -   amdgpu_bo_unpin(adev->gart.robj);
> -   amdgpu_bo_unreserve(adev->gart.robj);
> +   amdgpu_bo_kunmap(adev->gart.bo);
> +   amdgpu_bo_unpin(adev->gart.bo);
> +   amdgpu_bo_unreserve(adev->gart.bo);
> adev->gart.ptr = NULL;
> }
>  }
> @@ -196,10 +196,10 @@ void amdgpu_gart_table_vram_unpin(struct amdgpu_device 
> *adev)
>   */
>  void amdgpu_gart_table_vram_free(struct amdgpu_device *adev)
>  {
> -   if (adev->gart.robj == NULL) {
> +   if (adev->gart.bo == NULL) {
> return;
> }
> -   amdgpu_bo_unref(&adev->gart.robj);
> +   amdgpu_bo_unref(&adev->gart.bo);
>  }
>
>  /*
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h
> index 9f9e9dc87da1..d7b7c2d408d5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h
> @@ -41,7 +41,7 @@ struct amdgpu_bo;
>
>  struct amdgpu_gart {
> u64 table_addr;
> -   struct amdgpu_bo*robj;
> +   struct amdgpu_bo*bo;
> void*ptr;
> unsignednum_gpu_pages;
> unsignednum_cpu_pages;
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c 
> b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
> index c14cf1c5bf57..c50bd0c46508 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
> @@ -497,7 +497,7 @@ static int gmc_v6_0_gart_enable(struct amdgpu_device 
> *adev)
> int r, i;
> u32 field;
>
> -   if (adev->gart.robj == NULL) {
> +   if (adev->gart.bo == NULL) {
> dev_err(adev->dev, "No VRAM object for PCIE GART.\n");
> 

Re: [PATCH 04/11] drm/amdgpu: move setting the GART addr into TTM

2018-08-22 Thread Alex Deucher
On Wed, Aug 22, 2018 at 11:05 AM Christian König
 wrote:
>
> Move setting the GART addr for window based copies into the TTM code who
> uses it.
>
> Signed-off-by: Christian König 

Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 --
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 5 -
>  2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 391e2f7c03aa..239ccbae09bc 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -82,8 +82,6 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, 
> unsigned size,
> r = amdgpu_ib_get(adev, NULL, size, &(*job)->ibs[0]);
> if (r)
> kfree(*job);
> -   else
> -   (*job)->vm_pd_addr = adev->gart.table_addr;
>
> return r;
>  }
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index c6611cff64c8..b4333f60ed8b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -2048,7 +2048,10 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, 
> uint64_t src_offset,
> if (r)
> return r;
>
> -   job->vm_needs_flush = vm_needs_flush;
> +   if (vm_needs_flush) {
> +   job->vm_pd_addr = adev->gart.table_addr;
> +   job->vm_needs_flush = true;
> +   }
> if (resv) {
> r = amdgpu_sync_resv(adev, &job->sync, resv,
>  AMDGPU_FENCE_OWNER_UNDEFINED,
> --
> 2.17.1
>


Re: [PATCH 03/11] drm/amdgpu: cleanup VM handling in the CS a bit

2018-08-22 Thread Alex Deucher
On Wed, Aug 22, 2018 at 11:05 AM Christian König
 wrote:
>
> Add a helper function for getting the root PD addr and cleanup join the
> two VM related functions and cleanup the function name.
>
> No functional change.
>
> Signed-off-by: Christian König 

Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 160 -
>  1 file changed, 74 insertions(+), 86 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index d42d1c8f78f6..17bf63f93c93 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -804,8 +804,9 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser 
> *parser, int error,
> amdgpu_bo_unref(>uf_entry.robj);
>  }
>
> -static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser *p)
> +static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
>  {
> +   struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
> struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> struct amdgpu_device *adev = p->adev;
> struct amdgpu_vm *vm = >vm;
> @@ -814,6 +815,71 @@ static int amdgpu_bo_vm_update_pte(struct 
> amdgpu_cs_parser *p)
> struct amdgpu_bo *bo;
> int r;
>
> +   /* Only for UVD/VCE VM emulation */
> +   if (ring->funcs->parse_cs || ring->funcs->patch_cs_in_place) {
> +   unsigned i, j;
> +
> +   for (i = 0, j = 0; i < p->nchunks && j < p->job->num_ibs; 
> i++) {
> +   struct drm_amdgpu_cs_chunk_ib *chunk_ib;
> +   struct amdgpu_bo_va_mapping *m;
> +   struct amdgpu_bo *aobj = NULL;
> +   struct amdgpu_cs_chunk *chunk;
> +   uint64_t offset, va_start;
> +   struct amdgpu_ib *ib;
> +   uint8_t *kptr;
> +
> +   chunk = >chunks[i];
> +   ib = >job->ibs[j];
> +   chunk_ib = chunk->kdata;
> +
> +   if (chunk->chunk_id != AMDGPU_CHUNK_ID_IB)
> +   continue;
> +
> +   va_start = chunk_ib->va_start & AMDGPU_VA_HOLE_MASK;
> +   r = amdgpu_cs_find_mapping(p, va_start, &aobj, &m);
> +   if (r) {
> +   DRM_ERROR("IB va_start is invalid\n");
> +   return r;
> +   }
> +
> +   if ((va_start + chunk_ib->ib_bytes) >
> +   (m->last + 1) * AMDGPU_GPU_PAGE_SIZE) {
> +   DRM_ERROR("IB va_start+ib_bytes is 
> invalid\n");
> +   return -EINVAL;
> +   }
> +
> +   /* the IB should be reserved at this point */
> +   r = amdgpu_bo_kmap(aobj, (void **)&kptr);
> +   if (r) {
> +   return r;
> +   }
> +
> +   offset = m->start * AMDGPU_GPU_PAGE_SIZE;
> +   kptr += va_start - offset;
> +
> +   if (ring->funcs->parse_cs) {
> +   memcpy(ib->ptr, kptr, chunk_ib->ib_bytes);
> +   amdgpu_bo_kunmap(aobj);
> +
> +   r = amdgpu_ring_parse_cs(ring, p, j);
> +   if (r)
> +   return r;
> +   } else {
> +   ib->ptr = (uint32_t *)kptr;
> +   r = amdgpu_ring_patch_cs_in_place(ring, p, j);
> +   amdgpu_bo_kunmap(aobj);
> +   if (r)
> +   return r;
> +   }
> +
> +   j++;
> +   }
> +   }
> +
> +   if (!p->job->vm)
> +   return amdgpu_cs_sync_rings(p);
> +
> +
> r = amdgpu_vm_clear_freed(adev, vm, NULL);
> if (r)
> return r;
> @@ -876,6 +942,12 @@ static int amdgpu_bo_vm_update_pte(struct 
> amdgpu_cs_parser *p)
> if (r)
> return r;
>
> +   r = reservation_object_reserve_shared(vm->root.base.bo->tbo.resv);
> +   if (r)
> +   return r;
> +
> +   p->job->vm_pd_addr = amdgpu_bo_gpu_offset(vm->root.base.bo);
> +
> if (amdgpu_vm_debug) {
> /* Invalidate all BOs to test for userspace bugs */
> amdgpu_bo_list_for_each_entry(e, p->bo_list) {
> @@ -887,90 +959,6 @@ static int amdgpu_bo_vm_update_pte(struct 
> amdgpu_cs_parser *p)
> }
> }
>
> -   return r;
> -}
> -
> -static int amdgpu_cs_ib_vm_chunk(struct amdgpu_device *adev,
> -struct amdgpu_cs_parser *p)
> -{
> -   struct amdgpu_ring *ring = 

Re: [PATCH 01/11] drm/amdgpu: remove extra root PD alignment

2018-08-22 Thread Alex Deucher
On Wed, Aug 22, 2018 at 11:05 AM Christian König
 wrote:
>
> Just another leftover from radeon.

I can't remember exactly what chip this was for.  Are you sure this
isn't still required for SI or something like that?

Alex

>
> Signed-off-by: Christian König 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 4 +---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 3 ---
>  2 files changed, 1 insertion(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 662aec5c81d4..73b8dcaf66e6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -2566,8 +2566,6 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
> amdgpu_vm *vm,
>  {
> struct amdgpu_bo_param bp;
> struct amdgpu_bo *root;
> -   const unsigned align = min(AMDGPU_VM_PTB_ALIGN_SIZE,
> -   AMDGPU_VM_PTE_COUNT(adev) * 8);
> unsigned long size;
> uint64_t flags;
> int r, i;
> @@ -2615,7 +2613,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
> amdgpu_vm *vm,
> size = amdgpu_vm_bo_size(adev, adev->vm_manager.root_level);
> memset(, 0, sizeof(bp));
> bp.size = size;
> -   bp.byte_align = align;
> +   bp.byte_align = AMDGPU_GPU_PAGE_SIZE;
> bp.domain = AMDGPU_GEM_DOMAIN_VRAM;
> bp.flags = flags;
> bp.type = ttm_bo_type_kernel;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> index 1162c2bf3138..1c9049feaaea 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> @@ -48,9 +48,6 @@ struct amdgpu_bo_list_entry;
>  /* number of entries in page table */
>  #define AMDGPU_VM_PTE_COUNT(adev) (1 << (adev)->vm_manager.block_size)
>
> -/* PTBs (Page Table Blocks) need to be aligned to 32K */
> -#define AMDGPU_VM_PTB_ALIGN_SIZE   32768
> -
>  #define AMDGPU_PTE_VALID   (1ULL << 0)
>  #define AMDGPU_PTE_SYSTEM  (1ULL << 1)
>  #define AMDGPU_PTE_SNOOPED (1ULL << 2)
> --
> 2.17.1
>


Re: Possible use_mm() mis-uses

2018-08-22 Thread Felix Kuehling

On 2018-08-22 02:13 PM, Christian König wrote:
> Adding Felix because the KFD part of amdgpu is actually his
> responsibility.
>
> If I'm not completely mistaken the release callback of the
> mmu_notifier should take care of that for amdgpu.

You're right, but that's a bit fragile and convoluted. I'll fix KFD to
handle this more robustly. See the attached (untested) patch. And
obviously that opaque pointer didn't work as intended. It just gets
promoted to an mm_struct * without a warning from the compiler. Maybe I
should change that to a long to make abuse easier to spot.

Regards,
  Felix
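
A tiny standalone illustration of that idea -- keeping the mm only as
an integer token so any dereference requires a loud, greppable cast
(made-up names, not the KFD code):

#include <stdint.h>
#include <stdio.h>

/* The mm is stored only as an integer token used for lookup and
 * comparison; accidental dereference is no longer a silent pointer
 * promotion but an explicit cast that stands out in review. */
struct process_entry {
	uintptr_t mm_token;	/* was: void *mm */
};

static int entry_matches(const struct process_entry *e, const void *mm)
{
	return e->mm_token == (uintptr_t)mm;
}

int main(void)
{
	int dummy;
	struct process_entry e = { .mm_token = (uintptr_t)&dummy };

	printf("match: %d\n", entry_matches(&e, &dummy));
	return 0;
}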

>
> Regards,
> Christian.
>
> Am 22.08.2018 um 18:44 schrieb Linus Torvalds:
>> Guys and gals,
>>   this is a *very* random list of people on the recipients list, but we
>> had a subtle TLB shootdown issue in the VM, and that brought up some
>> issues when people then went through the code more carefully.
>>
>> I think we have a handle on the TLB shootdown bug itself. But when
>> people were discussing all the possible situations, one thing that
>> came up was "use_mm()" that takes a mm, and makes it temporarily the
>> mm for a kernel thread (until "unuse_mm()", duh).
>>
>> And it turns out that some of those uses are definitely wrong, some of
>> them are right, and some of them are suspect or at least so overly
>> complicated that it's hard for the VM people to know if they are ok.
>>
>> Basically, the rule for "use_mm()" is that the mm in question *has* to
>> have a valid page table associated with it over the whole use_mm() ->
>> unuse_mm() sequence. That may sound obvious, and I guess it actually
>> is so obvious that there isn't even a comment about it, but the actual
>> users are showing that it's sadly apparently not so obvious after all.
>>
>> There is one user that uses the "obviously correct" model: the vhost
>> driver does a "mmget()" and "mmput()" pair around its use of it,
>> thanks to vhost_dev_set_owner() doing a
>>
>>  dev->mm = get_task_mm(current);
>>
>> to look up the mm, and then the teardown case does a
>>
>>  if (dev->mm)
>>  mmput(dev->mm);
>>  dev->mm = NULL;
>>
>> This is the *right* sequence. A gold star to the vhost people.
>>
>> Sadly, the vhost people are the only ones who seem to get things
>> unquestionably right. And some of those gold star people are also
>> apparently involved in the cases that didn't get things right.
>>
>> An example of something that *isn't* right, is the i915 kvm interface,
>> which does
>>
>>  use_mm(kvm->mm);
>>
>> on an mm that was initialized in virt/kvm/kvm_main.c using
>>
>>  mmgrab(current->mm);
>>  kvm->mm = current->mm;
>>
>> which is *not* right. Using "mmgrab()" does indeed guarantee the
>> lifetime of the 'struct mm_struct' itself, but it does *not* guarantee
>> the lifetime of the page tables. You need to use "mmget()" and
>> "mmput()", which get the reference to the actual process address
>> space!
>>
>> Now, it is *possible* that the kvm use is correct too, because kvm
>> does register a mmu_notifier chain, and in theory you can avoid the
>> proper refcounting by just making sure the mmu "release" notifier
>> kills any existing uses, but I don't really see kvm doing that. Kvm
>> does register a release notifier, but that just flushes the shadow
>> page tables, it doesn't kill any use_mm() use by some i915 use case.
>>
>> So while the vhost use looks right, the kvm/i915 use looks definitely
>> wrong.
>>
>> The other users of "use_mm()" and "unuse_mm()" are less
>> black-and-white right vs wrong..
>>
>> One of the complex ones is the amdgpu driver. It does a
>> "use_mm(mmptr)" deep deep in the guts of a macro that ends up being
>> used in a few places, and it's very hard to tell if it's right.
>>
>> It looks almost certainly buggy (there is no mmget/mmput to get the
>> refcount), but there _is_ a "release" mmu_notifier function and that
>> one - unlike the kvm case - looks like it might actually be trying to
>> flush the existing pending users of that mm.
>>
>> But on the whole, I'm suspicious of the amdgpu use. It smells. Jann
>> Horn pointed out that even if it might be ok due to the mmu_notifier,
>> the comments are garbage:
>>
>>>   Where "process" in the uniquely-named "struct queue" is a "struct
>>>   kfd_process"; that struct's definition has this comment in it:
>>>
>>>     /*
>>>  * Opaque pointer to mm_struct. We don't hold a reference to
>>>  * it so it should never be dereferenced from here. This is
>>>  * only used for looking up processes by their mm.
>>>  */
>>>     void *mm;
>>>
>>>   So I think either that comment is wrong, or their code is wrong?
>> so I'm chalking the amdgpu use up in the "broken" column.
>>
>> It's also actually quite hard to synchronize with some other kernel
>> worker thread correctly, so just on general principles, if you use
>> "use_mm()" it really really should be on something that you've
>> properly gotten a 

Re: Possible use_mm() mis-uses

2018-08-22 Thread Oded Gabbay
On Wed, Aug 22, 2018 at 7:44 PM Linus Torvalds
 wrote:
> One of the complex ones is the amdgpu driver. It does a
> "use_mm(mmptr)" deep deep in the guts of a macro that ends up being
> used in fa few places, and it's very hard to tell if it's right.
>
> It looks almost certainly buggy (there is no mmget/mmput to get the
> refcount), but there _is_ a "release" mmu_notifier function and that
> one - unlike the kvm case - looks like it might actually be trying to
> flush the existing pending users of that mm.
>
> But on the whole, I'm suspicious of the amdgpu use. It smells. Jann
> Horn pointed out that even if it migth be ok due to the mmu_notifier,
> the comments are garbage:
>
> >  Where "process" in the uniquely-named "struct queue" is a "struct
> >  kfd_process"; that struct's definition has this comment in it:
> >
> >/*
> > * Opaque pointer to mm_struct. We don't hold a reference to
> > * it so it should never be dereferenced from here. This is
> > * only used for looking up processes by their mm.
> > */
> >void *mm;
> >
> >  So I think either that comment is wrong, or their code is wrong?
>
> so I'm chalking the amdgpu use up in the "broken" column.
>
Hello Linus,

I looked at the amdkfd code and indeed the comment does not match the
actual code because the mm pointer is clearly dereferenced directly in
the macro you mentioned (read_user_wptr). That macro is used in the
code path of loading a descriptor to the H/W (load_hqd). That function
is called in several cases, where in some of them we are in the
context of the calling process, but in others we are in a kernel
thread context (hence the use_mm). That's why we check these two
situations inside that macro and only do use_mm if we are in a kernel
thread.
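
For reference, the macro pattern in question is roughly this
(simplified from memory, not the verbatim source):

    /* Read a user-space write pointer either directly (same process
     * context) or by temporarily adopting the process mm (kernel
     * thread context, where current->mm is NULL). */
    #define read_user_wptr(mmptr, wptr, dst)                        \
            ({                                                      \
                    bool valid = false;                             \
                    if ((mmptr) && (wptr)) {                        \
                            if ((mmptr) == current->mm) {           \
                                    valid = !get_user((dst), (wptr)); \
                            } else if (current->mm == NULL) {       \
                                    use_mm(mmptr);                  \
                                    valid = !get_user((dst), (wptr)); \
                                    unuse_mm(mmptr);                \
                            }                                       \
                    }                                               \
                    valid;                                          \
            })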

We need to fix that behavior and obviously make sure that in future
code we don't cast this pointer to mm_struct* and dereference it
directly.
Actually, the original code had "mm_struct *mm" instead of "void *mm"
in the structure, and I think the reason we changed it to void* is to
"make sure" that we won't dereference it directly, but clearly that
failed :(

Having said that, I think we *are* protected by the mmu_notifier
release because if the process suddenly dies, we will gracefully clean
the process's data in our driver and on the H/W before returning to
the mm core code. And before we return to the mm core code, we set the
mm pointer to NULL. And the graceful cleaning should be serialized
with the load_hqd uses.
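
In outline, the release path looks like this (a sketch of the idea,
not the exact KFD code):

    /* Called from the mm core on process teardown, while the page
     * tables are still valid. */
    static void kfd_process_notifier_release(struct mmu_notifier *mn,
                                             struct mm_struct *mm)
    {
            struct kfd_process *p =
                    container_of(mn, struct kfd_process, mmu_notifier);

            /* First tear down the process's queues on the H/W, so no
             * further load_hqd (and thus no use_mm) can happen... */

            /* ...then forget the mm so later lookups by mm fail. */
            p->mm = NULL;
    }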

Felix, do you have anything to add here that I might have missed ?

Thanks,
Oded
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 2/5] drm/amdgpu: add ring soft recovery v2

2018-08-22 Thread Marek Olšák
On Wed, Aug 22, 2018 at 12:56 PM Alex Deucher  wrote:
>
> On Wed, Aug 22, 2018 at 6:05 AM Christian König
>  wrote:
> >
> > Instead of hammering hard on the GPU try a soft recovery first.
> >
> > v2: reorder code a bit
> >
> > Signed-off-by: Christian König 
> > ---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  6 ++
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 24 
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  4 
> >  3 files changed, 34 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > index 265ff90f4e01..d93e31a5c4e7 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > @@ -33,6 +33,12 @@ static void amdgpu_job_timedout(struct drm_sched_job 
> > *s_job)
> > struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
> > struct amdgpu_job *job = to_amdgpu_job(s_job);
> >
> > +   if (amdgpu_ring_soft_recovery(ring, job->vmid, 
> > s_job->s_fence->parent)) {
> > +   DRM_ERROR("ring %s timeout, but soft recovered\n",
> > + s_job->sched->name);
> > +   return;
> > +   }
>
> I think we should still bubble up the error to userspace even if we
> can recover.  Data is lost when the wave is killed.  We should treat
> it like a GPU reset.

Yes, please increment gpu_reset_counter, so that we are compliant with
OpenGL. Being able to recover from infinite loops is great, but test
suites also expect this to be properly reported to userspace via the
per-context query.

Also please bump the deadline to 1 second. Even if you kill all
shaders, the IB can also contain CP DMA, which may take longer than 1
ms.
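
Something like this, I mean (untested; assuming gpu_reset_counter is
reachable via ring->adev like elsewhere in the driver):

    /* in amdgpu_job_timedout(): report the recovery like a reset */
    if (amdgpu_ring_soft_recovery(ring, job->vmid, s_job->s_fence->parent)) {
            atomic_inc(&ring->adev->gpu_reset_counter);
            DRM_ERROR("ring %s timeout, but soft recovered\n",
                      s_job->sched->name);
            return;
    }

    /* in amdgpu_ring_soft_recovery(): one second instead of 1 ms */
    ktime_t deadline = ktime_add_ms(ktime_get(), 1000);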

Marek


>
> Alex
>
> > +
> > DRM_ERROR("ring %s timeout, signaled seq=%u, emitted seq=%u\n",
> >   job->base.sched->name, 
> > atomic_read(&ring->fence_drv.last_seq),
> >   ring->fence_drv.sync_seq);
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> > index 5dfd26be1eec..c045a4e38ad1 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> > @@ -383,6 +383,30 @@ void amdgpu_ring_emit_reg_write_reg_wait_helper(struct 
> > amdgpu_ring *ring,
> > amdgpu_ring_emit_reg_wait(ring, reg1, mask, mask);
> >  }
> >
> > +/**
> > + * amdgpu_ring_soft_recovery - try to soft recover a ring lockup
> > + *
> > + * @ring: ring to try the recovery on
> > + * @vmid: VMID we try to get going again
> > + * @fence: timedout fence
> > + *
> > + * Tries to get a ring proceeding again when it is stuck.
> > + */
> > +bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
> > +  struct dma_fence *fence)
> > +{
> > +   ktime_t deadline = ktime_add_us(ktime_get(), 1000);
> > +
> > +   if (!ring->funcs->soft_recovery)
> > +   return false;
> > +
> > +   while (!dma_fence_is_signaled(fence) &&
> > +  ktime_to_ns(ktime_sub(deadline, ktime_get())) > 0)
> > +   ring->funcs->soft_recovery(ring, vmid);
> > +
> > +   return dma_fence_is_signaled(fence);
> > +}
> > +
> >  /*
> >   * Debugfs info
> >   */
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > index 409fdd9b9710..9cc239968e40 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > @@ -168,6 +168,8 @@ struct amdgpu_ring_funcs {
> > /* priority functions */
> > void (*set_priority) (struct amdgpu_ring *ring,
> >   enum drm_sched_priority priority);
> > +   /* Try to soft recover the ring to make the fence signal */
> > +   void (*soft_recovery)(struct amdgpu_ring *ring, unsigned vmid);
> >  };
> >
> >  struct amdgpu_ring {
> > @@ -260,6 +262,8 @@ void amdgpu_ring_fini(struct amdgpu_ring *ring);
> >  void amdgpu_ring_emit_reg_write_reg_wait_helper(struct amdgpu_ring *ring,
> > uint32_t reg0, uint32_t 
> > val0,
> > uint32_t reg1, uint32_t 
> > val1);
> > +bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
> > +  struct dma_fence *fence);
> >
> >  static inline void amdgpu_ring_clear_ring(struct amdgpu_ring *ring)
> >  {
> > --
> > 2.14.1
> >
> > ___
> > amd-gfx mailing list
> > amd-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org

Possible use_mm() mis-uses

2018-08-22 Thread Linus Torvalds
Guys and gals,
 this is a *very* random list of people on the recipients list, but we
had a subtle TLB shootdown issue in the VM, and that brought up some
issues when people then went through the code more carefully.

I think we have a handle on the TLB shootdown bug itself. But when
people were discussing all the possible situations, one thing that
came up was "use_mm()" that takes a mm, and makes it temporarily the
mm for a kernel thread (until "unuse_mm()", duh).

And it turns out that some of those uses are definitely wrong, some of
them are right, and some of them are suspect or at least so overly
complicated that it's hard for the VM people to know if they are ok.

Basically, the rule for "use_mm()" is that the mm in question *has* to
have a valid page table associated with it over the whole use_mm() ->
unuse_mm() sequence. That may sound obvious, and I guess it actually
is so obvious that there isn't even a comment about it, but the actual
users are showing that it's sadly apparently not so obvious after all.

There is one user that uses the "obviously correct" model: the vhost
driver does a "mmget()" and "mmput()" pair around its use of it,
thanks to vhost_dev_set_owner() doing a

dev->mm = get_task_mm(current);

to look up the mm, and then the teardown case does a

if (dev->mm)
mmput(dev->mm);
dev->mm = NULL;

This is the *right* sequence. A gold star to the vhost people.

Sadly, the vhost people are the only ones who seem to get things
unquestionably right. And some of those gold star people are also
apparently involved in the cases that didn't get things right.

An example of something that *isn't* right, is the i915 kvm interface,
which does

use_mm(kvm->mm);

on an mm that was initialized in virt/kvm/kvm_main.c using

mmgrab(current->mm);
kvm->mm = current->mm;

which is *not* right. Using "mmgrab()" does indeed guarantee the
lifetime of the 'struct mm_struct' itself, but it does *not* guarantee
the lifetime of the page tables. You need to use "mmget()" and
"mmput()", which get the reference to the actual process address
space!

Now, it is *possible* that the kvm use is correct too, because kvm
does register a mmu_notifier chain, and in theory you can avoid the
proper refcounting by just making sure the mmu "release" notifier
kills any existing uses, but I don't really see kvm doing that. Kvm
does register a release notifier, but that just flushes the shadow
page tables, it doesn't kill any use_mm() use by some i915 use case.

So while the vhost use looks right, the kvm/i915 use looks definitely wrong.
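
(Spelled out as a minimal sketch, the correct shape is

    struct mm_struct *mm = get_task_mm(current); /* takes an mm_users ref */
    if (!mm)
            return -ESRCH;

    /* ... later, possibly from a kernel thread ... */
    use_mm(mm);
    /* the page tables are guaranteed to still exist here */
    unuse_mm(mm);

    /* ... teardown ... */
    mmput(mm);      /* drops mm_users; page tables may now be freed */

while mmgrab()/mmdrop() only keeps the 'struct mm_struct' allocation
alive, not the address space behind it.)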

The other users of "use_mm()" and "unuse_mm()" are less
black-and-white right vs wrong..

One of the complex ones is the amdgpu driver. It does a
"use_mm(mmptr)" deep deep in the guts of a macro that ends up being
used in a few places, and it's very hard to tell if it's right.

It looks almost certainly buggy (there is no mmget/mmput to get the
refcount), but there _is_ a "release" mmu_notifier function and that
one - unlike the kvm case - looks like it might actually be trying to
flush the existing pending users of that mm.

But on the whole, I'm suspicious of the amdgpu use. It smells. Jann
Horn pointed out that even if it might be ok due to the mmu_notifier,
the comments are garbage:

>  Where "process" in the uniquely-named "struct queue" is a "struct
>  kfd_process"; that struct's definition has this comment in it:
>
>/*
> * Opaque pointer to mm_struct. We don't hold a reference to
> * it so it should never be dereferenced from here. This is
> * only used for looking up processes by their mm.
> */
>void *mm;
>
>  So I think either that comment is wrong, or their code is wrong?

so I'm chalking the amdgpu use up in the "broken" column.

It's also actually quite hard to synchronize with some other kernel
worker thread correctly, so just on general principles, if you use
"use_mm()" it really really should be on something that you've
properly gotten a mm refcount on with mmget(). Really. Even if you try
to synchronize things.

The two final cases are two uses in the USB gadget driver. Again, they
don't have the proper mmget/mmput, but they may be ok simply because
the uses are done for AIO, and the VM teardown is preceded by an AIO
teardown, so the proper serialization may come in from that.

Anyway, sorry for the long email, and the big list of participants and
odd mailing lists, but I'd really like people to look at their
"use_mm()" cases, and ask themselves if they have done enough to
guarantee that the full mm exists. Again, "mmgrab()" is *not* enough
on its own. You need either "mmget()" or some lifetime guarantee.

And if you do have those lifetime guarantees, it would be really nice
to get a good explanatory comment about said lifetime guarantees above
the "use_mm()" call. Ok?

Note that the lifetime rules are very important, because obviously
use_mm() itself is never called 

Re: Possible use_mm() mis-uses

2018-08-22 Thread Linus Torvalds
On Wed, Aug 22, 2018 at 11:33 AM Linus Torvalds
 wrote:
>
> On Wed, Aug 22, 2018 at 11:21 AM Paolo Bonzini  wrote:
> >
> > Yes, KVM is correct but the i915 bits are at least fishy.  It's probably
> > as simple as adding a mmget/mmput pair respectively in kvmgt_guest_init
> > and kvmgt_guest_exit, or maybe mmget_not_zero.
>
> Definitely mmget_not_zero(). If it was just mmgrab()'ed earlier, the
> actual page tables might already be gone.

Side note: we _could_ do the mmget_not_zero() inside use_mm() itself,
if we just knew that the mm was at least mmgrab()'ed correctly.

But for some of the uses, even that isn't clear. It's not entirely
obvious that the "struct mm_struct" exists _at_all_ at that point, and
that a mmget_not_zero() wouldn't just have some use-after-free access.
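
(As a sketch, with made-up names, and only safe when the caller is
already known to hold at least an mmgrab() reference:

    static bool use_mm_get(struct mm_struct *mm)
    {
            if (!mmget_not_zero(mm))        /* mm_users already zero */
                    return false;
            use_mm(mm);
            return true;
    }

    static void unuse_mm_put(struct mm_struct *mm)
    {
            unuse_mm(mm);
            mmput(mm);
    }
)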

Again, independent lifetime rules could show that this isn't the case
(ie "exit_aio() is always called before exit_mmap(), and kill_ioctx()
takes care of it all"), but it would be good to have the users of
"use_mm()" actually verify their lifetime rules are correct and
enforced.

Because quite often, the lifetime rule might not be a mmu notifier or
aio_exit at all, but just be "oh, the user won't exit until this is
all done". But do you *control* the user? What if the user is buggy?

 Linus
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: Possible use_mm() mis-uses

2018-08-22 Thread Linus Torvalds
On Wed, Aug 22, 2018 at 11:21 AM Paolo Bonzini  wrote:
>
> Yes, KVM is correct but the i915 bits are at least fishy.  It's probably
> as simple as adding a mmget/mmput pair respectively in kvmgt_guest_init
> and kvmgt_guest_exit, or maybe mmget_not_zero.

Definitely mmget_not_zero(). If it was just mmgrab()'ed earlier, the
actual page tables might already be gone.

  Linus
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: Possible use_mm() mis-uses

2018-08-22 Thread Zhi Wang

Hi Linus:

Thanks for letting us know that. We will fix this ASAP. The kvmgt.c 
module is a part of the GVT-g code. It's our fault that we didn't find this 
mis-use, not the i915 or KVM guys. Wish they would feel better after seeing 
this message.


Thanks,
Zhi.

On 08/23/18 00:44, Linus Torvalds wrote:

Guys and gals,
  this is a *very* random list of people on the recipients list, but we
had a subtle TLB shootdown issue in the VM, and that brought up some
issues when people then went through the code more carefully.

I think we have a handle on the TLB shootdown bug itself. But when
people were discussing all the possible situations, one thing that
came up was "use_mm()" that takes a mm, and makes it temporarily the
mm for a kernel thread (until "unuse_mm()", duh).

And it turns out that some of those uses are definitely wrong, some of
them are right, and some of them are suspect or at least so overly
complicated that it's hard for the VM people to know if they are ok.

Basically, the rule for "use_mm()" is that the mm in question *has* to
have a valid page table associated with it over the whole use_mm() ->
unuse_mm() sequence. That may sound obvious, and I guess it actually
is so obvious that there isn't even a comment about it, but the actual
users are showing that it's sadly apparently not so obvious after all.

There is one user that uses the "obviously correct" model: the vhost
driver does a "mmget()" and "mmput()" pair around its use of it,
thanks to vhost_dev_set_owner() doing a

 dev->mm = get_task_mm(current);

to look up the mm, and then the teardown case does a

 if (dev->mm)
 mmput(dev->mm);
 dev->mm = NULL;

This is the *right* sequence. A gold star to the vhost people.

Sadly, the vhost people are the only ones who seem to get things
unquestionably right. And some of those gold star people are also
apparently involved in the cases that didn't get things right.

An example of something that *isn't* right, is the i915 kvm interface,
which does

 use_mm(kvm->mm);

on an mm that was initialized in virt/kvm/kvm_main.c using

 mmgrab(current->mm);
 kvm->mm = current->mm;

which is *not* right. Using "mmgrab()" does indeed guarantee the
lifetime of the 'struct mm_struct' itself, but it does *not* guarantee
the lifetime of the page tables. You need to use "mmget()" and
"mmput()", which get the reference to the actual process address
space!

Now, it is *possible* that the kvm use is correct too, because kvm
does register a mmu_notifier chain, and in theory you can avoid the
proper refcounting by just making sure the mmu "release" notifier
kills any existing uses, but I don't really see kvm doing that. Kvm
does register a release notifier, but that just flushes the shadow
page tables, it doesn't kill any use_mm() use by some i915 use case.

So while the vhost use looks right, the kvm/i915 use looks definitely wrong.

The other users of "use_mm()" and "unuse_mm()" are less
black-and-white right vs wrong..

One of the complex ones is the amdgpu driver. It does a
"use_mm(mmptr)" deep deep in the guts of a macro that ends up being
used in a few places, and it's very hard to tell if it's right.

It looks almost certainly buggy (there is no mmget/mmput to get the
refcount), but there _is_ a "release" mmu_notifier function and that
one - unlike the kvm case - looks like it might actually be trying to
flush the existing pending users of that mm.

But on the whole, I'm suspicious of the amdgpu use. It smells. Jann
Horn pointed out that even if it might be ok due to the mmu_notifier,
the comments are garbage:


  Where "process" in the uniquely-named "struct queue" is a "struct
  kfd_process"; that struct's definition has this comment in it:

/*
 * Opaque pointer to mm_struct. We don't hold a reference to
 * it so it should never be dereferenced from here. This is
 * only used for looking up processes by their mm.
 */
void *mm;

  So I think either that comment is wrong, or their code is wrong?


so I'm chalking the amdgpu use up in the "broken" column.

It's also actually quite hard to synchronize with some other kernel
worker thread correctly, so just on general principles, if you use
"use_mm()" it really really should be on something that you've
properly gotten a mm refcount on with mmget(). Really. Even if you try
to synchronize things.

The two final cases are two uses in the USB gadget driver. Again, they
don't have the proper mmget/mmput, but they may be ok simply because
the uses are done for AIO, and the VM teardown is preceded by an AIO
teardown, so the proper serialization may come in from that.

Anyway, sorry for the long email, and the big list of participants and
odd mailing lists, but I'd really like people to look at their
"use_mm()" cases, and ask themselves if they have done enough to
guarantee that the full mm exists. Again, "mmgrab()" is *not* enough
on its own. You need either 

Re: Possible use_mm() mis-uses

2018-08-22 Thread Paolo Bonzini
On 22/08/2018 18:44, Linus Torvalds wrote:
> An example of something that *isn't* right, is the i915 kvm interface,
> which does
> 
> use_mm(kvm->mm);
> 
> on an mm that was initialized in virt/kvm/kvm_main.c using
> 
> mmgrab(current->mm);
> kvm->mm = current->mm;
> 
> which is *not* right. Using "mmgrab()" does indeed guarantee the
> lifetime of the 'struct mm_struct' itself, but it does *not* guarantee
> the lifetime of the page tables. You need to use "mmget()" and
> "mmput()", which get the reference to the actual process address
> space!
> 
> Now, it is *possible* that the kvm use is correct too, because kvm
> does register a mmu_notifier chain, and in theory you can avoid the
> proper refcounting by just making sure the mmu "release" notifier
> kills any existing uses, but I don't really see kvm doing that. Kvm
> does register a release notifier, but that just flushes the shadow
> page tables, it doesn't kill any use_mm() use by some i915 use case.

Yes, KVM is correct but the i915 bits are at least fishy.  It's probably
as simple as adding a mmget/mmput pair respectively in kvmgt_guest_init
and kvmgt_guest_exit, or maybe mmget_not_zero.
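
Roughly these two hunks, I'd guess (untested sketch):

    /* in kvmgt_guest_init(): pin the address space, not just the
     * struct, for as long as kvmgt may use_mm() it */
    if (!mmget_not_zero(kvm->mm))
            return -EINVAL; /* address space already torn down */

    /* in kvmgt_guest_exit(): */
    mmput(info->kvm->mm);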

Paolo
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: Possible use_mm() mis-uses

2018-08-22 Thread Christian König

Adding Felix because the KFD part of amdgpu is actually his responsibility.

If I'm not completely mistaken the release callback of the mmu_notifier 
should take care of that for amdgpu.


Regards,
Christian.

On 22.08.2018 at 18:44, Linus Torvalds wrote:

Guys and gals,
  this is a *very* random list of people on the recipients list, but we
had a subtle TLB shootdown issue in the VM, and that brought up some
issues when people then went through the code more carefully.

I think we have a handle on the TLB shootdown bug itself. But when
people were discussing all the possible situations, one thing that
came up was "use_mm()" that takes a mm, and makes it temporarily the
mm for a kernel thread (until "unuse_mm()", duh).

And it turns out that some of those uses are definitely wrong, some of
them are right, and some of them are suspect or at least so overly
complicated that it's hard for the VM people to know if they are ok.

Basically, the rule for "use_mm()" is that the mm in question *has* to
have a valid page table associated with it over the whole use_mm() ->
unuse_mm() sequence. That may sound obvious, and I guess it actually
is so obvious that there isn't even a comment about it, but the actual
users are showing that it's sadly apparently not so obvious after all.

There is one user that uses the "obviously correct" model: the vhost
driver does a "mmget()" and "mmput()" pair around its use of it,
thanks to vhost_dev_set_owner() doing a

 dev->mm = get_task_mm(current);

to look up the mm, and then the teardown case does a

 if (dev->mm)
 mmput(dev->mm);
 dev->mm = NULL;

This is the *right* sequence. A gold star to the vhost people.

Sadly, the vhost people are the only ones who seem to get things
unquestionably right. And some of those gold star people are also
apparently involved in the cases that didn't get things right.

An example of something that *isn't* right, is the i915 kvm interface,
which does

 use_mm(kvm->mm);

on an mm that was initialized in virt/kvm/kvm_main.c using

 mmgrab(current->mm);
 kvm->mm = current->mm;

which is *not* right. Using "mmgrab()" does indeed guarantee the
lifetime of the 'struct mm_struct' itself, but it does *not* guarantee
the lifetime of the page tables. You need to use "mmget()" and
"mmput()", which get the reference to the actual process address
space!

Now, it is *possible* that the kvm use is correct too, because kvm
does register a mmu_notifier chain, and in theory you can avoid the
proper refcounting by just making sure the mmu "release" notifier
kills any existing uses, but I don't really see kvm doing that. Kvm
does register a release notifier, but that just flushes the shadow
page tables, it doesn't kill any use_mm() use by some i915 use case.

So while the vhost use looks right, the kvm/i915 use looks definitely wrong.

The other users of "use_mm()" and "unuse_mm()" are less
black-and-white right vs wrong..

One of the complex ones is the amdgpu driver. It does a
"use_mm(mmptr)" deep deep in the guts of a macro that ends up being
used in a few places, and it's very hard to tell if it's right.

It looks almost certainly buggy (there is no mmget/mmput to get the
refcount), but there _is_ a "release" mmu_notifier function and that
one - unlike the kvm case - looks like it might actually be trying to
flush the existing pending users of that mm.

But on the whole, I'm suspicious of the amdgpu use. It smells. Jann
Horn pointed out that even if it might be ok due to the mmu_notifier,
the comments are garbage:


  Where "process" in the uniquely-named "struct queue" is a "struct
  kfd_process"; that struct's definition has this comment in it:

/*
 * Opaque pointer to mm_struct. We don't hold a reference to
 * it so it should never be dereferenced from here. This is
 * only used for looking up processes by their mm.
 */
void *mm;

  So I think either that comment is wrong, or their code is wrong?

so I'm chalking the amdgpu use up in the "broken" column.

It's also actually quite hard to synchronize with some other kernel
worker thread correctly, so just on general principles, if you use
"use_mm()" it really really should be on something that you've
properly gotten a mm refcount on with mmget(). Really. Even if you try
to synchronize things.

The two final cases are two uses in the USB gadget driver. Again, they
don't have the proper mmget/mmput, but they may be ok simply because
the uses are done for AIO, and the VM teardown is preceded by an AIO
teardown, so the proper serialization may come in from that.

Anyway, sorry for the long email, and the big list of participants and
odd mailing lists, but I'd really like people to look at their
"use_mm()" cases, and ask themselves if they have done enough to
guarantee that the full mm exists. Again, "mmgrab()" is *not* enough
on its own. You need either "mmget()" or some lifetime guarantee.

And if 

Re: [PATCH] drm/amdgpu: Adjust the VM size based on system memory size

2018-08-22 Thread Christian König

On 22.08.2018 at 19:40, Felix Kuehling wrote:

On 2018-08-22 02:55 AM, Christian König wrote:

On 21.08.2018 at 23:45, Felix Kuehling wrote:

[snip]

+    } else {

+    struct sysinfo si;
+    unsigned int phys_ram_gb;
+
+    /* Optimal VM size depends on the amount of physical
+ * RAM available. Underlying requirements and
+ * assumptions:
+ *
+ *  - Need to map system memory and VRAM from all GPUs
+ * - VRAM from other GPUs not known here
+ * - Assume VRAM <= system memory
+ *  - On GFX8 and older, VM space can be segmented for
+ *    different MTYPEs
+ *  - Need to allow room for fragmentation, guard pages etc.
+ */
+    si_meminfo(&si);
+    phys_ram_gb = ((uint64_t)si.totalram * si.mem_unit) >> 30;

Looks good to me, but I would make sure that round that up before
shifting it.

Hmm, we used to round up. I just removed it because we were told it's
not necessary.

But I guess rounding up to the next power of two increases the available
VM size without increasing page table size. So we should take it if we
can get it for free. I'll reintroduce rounding up.


No, that wasn't what I meant. Rounding up to the next power of two is 
indeed not necessary.


What I meant is that when the installed system memory is 3.9 GB, we only
take 3 GB into account here because we always truncate.
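
I.e. something like (untested):

    si_meminfo(&si);
    /* round up to the next full GB before shifting, so 3.9 GB of
     * installed memory counts as 4 GB instead of 3 GB */
    phys_ram_gb = ((uint64_t)si.totalram * si.mem_unit +
                   (1ULL << 30) - 1) >> 30;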



+    vm_size = min(max(phys_ram_gb * 3, min_vm_size), max_size);

Mhm, "phys_ram_gb * 3"? Maybe add a comment with the rational for that.

Well, the long comment above was meant to justify the factor 3. Maybe I
didn't make that clear enough. 1x is system memory itself, 2x is a wild
guess of VRAM on all GPUs. 3x is room for a second aperture for MTYPE
control, fragmentation and guard pages.


Ah! Yeah that wasn't obvious.

Christian.



Regards,
   Felix


Christian.


   }
     adev->vm_manager.max_pfn = (uint64_t)vm_size << 18;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 1162c2b..ab1d23e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -345,7 +345,7 @@ struct amdgpu_bo_va_mapping
*amdgpu_vm_bo_lookup_mapping(struct amdgpu_vm *vm,
   void amdgpu_vm_bo_trace_cs(struct amdgpu_vm *vm, struct
ww_acquire_ctx *ticket);
   void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
     struct amdgpu_bo_va *bo_va);
-void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t
vm_size,
+void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t
min_vm_size,
  uint32_t fragment_size_default, unsigned max_level,
  unsigned max_bits);
   int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct
drm_file *filp);

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: Adjust the VM size based on system memory size

2018-08-22 Thread Felix Kuehling
On 2018-08-22 02:55 AM, Christian König wrote:
> On 21.08.2018 at 23:45, Felix Kuehling wrote:
[snip]
>> +    } else {
>>> +    struct sysinfo si;
>>> +    unsigned int phys_ram_gb;
>>> +
>>> +    /* Optimal VM size depends on the amount of physical
>>> + * RAM available. Underlying requirements and
>>> + * assumptions:
>>> + *
>>> + *  - Need to map system memory and VRAM from all GPUs
>>> + * - VRAM from other GPUs not known here
>>> + * - Assume VRAM <= system memory
>>> + *  - On GFX8 and older, VM space can be segmented for
>>> + *    different MTYPEs
>>> + *  - Need to allow room for fragmentation, guard pages etc.
>>> + */
>>> +    si_meminfo(&si);
>>> +    phys_ram_gb = ((uint64_t)si.totalram * si.mem_unit) >> 30;
>
> Looks good to me, but I would make sure that round that up before
> shifting it.

Hmm, we used to round up. I just removed it because we were told it's
not necessary.

But I guess rounding up to the next power of two increases the available
VM size without increasing page table size. So we should take it if we
can get it for free. I'll reintroduce rounding up.

>
>>> +    vm_size = min(max(phys_ram_gb * 3, min_vm_size), max_size);
>
> Mhm, "phys_ram_gb * 3"? Maybe add a comment with the rational for that.

Well, the long comment above was meant to justify the factor 3. Maybe I
didn't make that clear enough. 1x is system memory itself, 2x is a wild
guess of VRAM on all GPUs. 3x is room for a second aperture for MTYPE
control, fragmentation and guard pages.
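
As a worked example, assuming a 16 GB system: phys_ram_gb = 16, so
vm_size = min(max(16 * 3, min_vm_size), max_size) = 48 GB before
clamping. That is 16 GB for system memory itself, 16 GB for the
guessed VRAM, and 16 GB for the second aperture, fragmentation and
guard pages.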

Regards,
  Felix

>
> Christian.
>
>>>   }
>>>     adev->vm_manager.max_pfn = (uint64_t)vm_size << 18;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> index 1162c2b..ab1d23e 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> @@ -345,7 +345,7 @@ struct amdgpu_bo_va_mapping
>>> *amdgpu_vm_bo_lookup_mapping(struct amdgpu_vm *vm,
>>>   void amdgpu_vm_bo_trace_cs(struct amdgpu_vm *vm, struct
>>> ww_acquire_ctx *ticket);
>>>   void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
>>>     struct amdgpu_bo_va *bo_va);
>>> -void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t
>>> vm_size,
>>> +void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t
>>> min_vm_size,
>>>  uint32_t fragment_size_default, unsigned max_level,
>>>  unsigned max_bits);
>>>   int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct
>>> drm_file *filp);
>> ___
>> amd-gfx mailing list
>> amd-gfx@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [igt-dev] RFC: Migration to Gitlab

2018-08-22 Thread Rodrigo Vivi
On Wed, Aug 22, 2018 at 05:37:22PM +0100, Daniel Stone wrote:
> Hi Rodrigo,
> 
> On Wed, 22 Aug 2018 at 17:06, Rodrigo Vivi  wrote:
> > On Wed, Aug 22, 2018 at 10:19:19AM -0400, Adam Jackson wrote:
> > > On Wed, 2018-08-22 at 16:13 +0300, Jani Nikula wrote:
> > > > - Sticking to fdo bugzilla and disabling gitlab issues for at least
> > > >   drm-intel for the time being. Doing that migration in the same go is a
> > > >   bit much I think. Reassignment across bugzilla and gitlab will be an
> > > >   issue.
> > >
> > > Can you elaborate a bit on the issues here? The actual move-the-bugs
> > > process has been pretty painless for the parts of xorg we've done so
> > > far.
> >
> > I guess there is nothing against moving the bugs there. The concern is only
> > about doing everything at once.
> >
> > I'm in favor of moving gits for now and after we are confident that
> > everything is there and working we move the bugs.
> 
> As Daniel alluded to, the only issue I really have is moving _all_ the
> kernel repos at once. At the end of the year we'll have easy automatic
> scaling thanks to the independent services being separated. As it is,
> all the GitLab services (apart from CI runners) run on a single
> machine, so we have limited options if it becomes overwhelmed with
> load.
> 
> Do you have a particular concern about the repos?

no concerns from my side about the repos themselves. From my committer
perspective on libdrm and mesa, the migration was really smooth.

> e.g. what would you
> check for to make sure things are 'there and working'?

more in terms of other committers getting used to it, dim working
for most committers, links in documentation and wikis updated...

but no concerns with the infra itself.

> 
> > One question about the bugzilla:
> >
> > Will all the references in all commit messages get outdated after
> > bugzilla is dead?
> > Or bugzilla will stay up for referrence but closed for interaction?
> > or all old closed stuff are always moved and bugzilla.fd.o as well and
> > bugzilla.fd.o will be mirroring gitlab?
> 
> When bugs are migrated from Bugzilla to GitLab, only open bugs are
> migrated. Closed ones are left in place, as is; open ones have a
> comment at the end saying that the bug has moved to GitLab, a URL
> linking to the new GitLab issue, and telling them to please chase it
> up there.
> 
> Even when we move everyone completely off Bugzilla, we will keep it as
> a read-only mirror forever. Even with Phabricator, which very few
> people ever used, has had all its bugs and code review captured and
> archived, so we can continue to preserve all the old content and
> links, without having to run the actual service.

Great!
Thanks for all clarification,
Rodrigo.

> 
> Cheers,
> Daniel
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 2/5] drm/amdgpu: add ring soft recovery v2

2018-08-22 Thread Alex Deucher
On Wed, Aug 22, 2018 at 6:05 AM Christian König
 wrote:
>
> Instead of hammering hard on the GPU try a soft recovery first.
>
> v2: reorder code a bit
>
> Signed-off-by: Christian König 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  6 ++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 24 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  4 
>  3 files changed, 34 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 265ff90f4e01..d93e31a5c4e7 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -33,6 +33,12 @@ static void amdgpu_job_timedout(struct drm_sched_job 
> *s_job)
> struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
> struct amdgpu_job *job = to_amdgpu_job(s_job);
>
> +   if (amdgpu_ring_soft_recovery(ring, job->vmid, 
> s_job->s_fence->parent)) {
> +   DRM_ERROR("ring %s timeout, but soft recovered\n",
> + s_job->sched->name);
> +   return;
> +   }

I think we should still bubble up the error to userspace even if we
can recover.  Data is lost when the wave is killed.  We should treat
it like a GPU reset.

Alex

> +
> DRM_ERROR("ring %s timeout, signaled seq=%u, emitted seq=%u\n",
>   job->base.sched->name, 
> atomic_read(&ring->fence_drv.last_seq),
>   ring->fence_drv.sync_seq);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> index 5dfd26be1eec..c045a4e38ad1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> @@ -383,6 +383,30 @@ void amdgpu_ring_emit_reg_write_reg_wait_helper(struct 
> amdgpu_ring *ring,
> amdgpu_ring_emit_reg_wait(ring, reg1, mask, mask);
>  }
>
> +/**
> + * amdgpu_ring_soft_recovery - try to soft recover a ring lockup
> + *
> + * @ring: ring to try the recovery on
> + * @vmid: VMID we try to get going again
> + * @fence: timedout fence
> + *
> + * Tries to get a ring proceeding again when it is stuck.
> + */
> +bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
> +  struct dma_fence *fence)
> +{
> +   ktime_t deadline = ktime_add_us(ktime_get(), 1000);
> +
> +   if (!ring->funcs->soft_recovery)
> +   return false;
> +
> +   while (!dma_fence_is_signaled(fence) &&
> +  ktime_to_ns(ktime_sub(deadline, ktime_get())) > 0)
> +   ring->funcs->soft_recovery(ring, vmid);
> +
> +   return dma_fence_is_signaled(fence);
> +}
> +
>  /*
>   * Debugfs info
>   */
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index 409fdd9b9710..9cc239968e40 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -168,6 +168,8 @@ struct amdgpu_ring_funcs {
> /* priority functions */
> void (*set_priority) (struct amdgpu_ring *ring,
>   enum drm_sched_priority priority);
> +   /* Try to soft recover the ring to make the fence signal */
> +   void (*soft_recovery)(struct amdgpu_ring *ring, unsigned vmid);
>  };
>
>  struct amdgpu_ring {
> @@ -260,6 +262,8 @@ void amdgpu_ring_fini(struct amdgpu_ring *ring);
>  void amdgpu_ring_emit_reg_write_reg_wait_helper(struct amdgpu_ring *ring,
> uint32_t reg0, uint32_t val0,
> uint32_t reg1, uint32_t val1);
> +bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
> +  struct dma_fence *fence);
>
>  static inline void amdgpu_ring_clear_ring(struct amdgpu_ring *ring)
>  {
> --
> 2.14.1
>
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: Regression: Kaveri VCE ring test fails

2018-08-22 Thread Zhu, Rex
Hi Michel,

Thanks. I know the root cause, and will fix this issue today.

Best Regards
Rex

> -Original Message-
> From: Michel Dänzer 
> Sent: Thursday, August 23, 2018 12:31 AM
> To: Zhu, Rex 
> Cc: amd-gfx@lists.freedesktop.org
> Subject: Regression: Kaveri VCE ring test fails
> 
> 
> Hi Rex,
> 
> 
> I bisected a regression to your "drm/amd/pp: Unify
> powergate_uvd/vce/mmhub to set_powergating_by_smu" change. Since this
> change, the VCE ring test fails on this Kaveri laptop, which prevents the
> amdgpu driver from initializing at all, see the attached kern.log.
> 
> If I disable VCE via the ip_block_mask module parameter, I get an oops in
> kv_dpm_get_sclk called from amdkfd, see the attached no-vce.log.
> 
> 
> Since the change above landed for 4.19-rc1, the regression should be fixed
> before the final 4.19 release if at all possible.
> 
> 
> Note that before this change, the VCE IB test has always (as far as I
> remember) failed on this machine, see the attached vce-ib-fail.log. But I 
> don't
> care about that too much, as I don't need VCE, and I haven't noticed any
> other issues related to that.
> 
> 
> P.S. Somebody else bisected a Mullins issue to the same commit:
> https://bugs.freedesktop.org/show_bug.cgi?id=107595
> 
> --
> Earthling Michel Dänzer   |   http://www.amd.com
> Libre software enthusiast | Mesa and X developer
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [igt-dev] RFC: Migration to Gitlab

2018-08-22 Thread Daniel Stone
Hi Rodrigo,

On Wed, 22 Aug 2018 at 17:06, Rodrigo Vivi  wrote:
> On Wed, Aug 22, 2018 at 10:19:19AM -0400, Adam Jackson wrote:
> > On Wed, 2018-08-22 at 16:13 +0300, Jani Nikula wrote:
> > > - Sticking to fdo bugzilla and disabling gitlab issues for at least
> > >   drm-intel for the time being. Doing that migration in the same go is a
> > >   bit much I think. Reassignment across bugzilla and gitlab will be an
> > >   issue.
> >
> > Can you elaborate a bit on the issues here? The actual move-the-bugs
> > process has been pretty painless for the parts of xorg we've done so
> > far.
>
> I guess there is nothing against moving the bugs there. The concern is only
> about doing everything at once.
>
> I'm in favor of moving gits for now and after we are confident that
> everything is there and working we move the bugs.

As Daniel alluded to, the only issue I really have is moving _all_ the
kernel repos at once. At the end of the year we'll have easy automatic
scaling thanks to the independent services being separated. As it is,
all the GitLab services (apart from CI runners) run on a single
machine, so we have limited options if it becomes overwhelmed with
load.

Do you have a particular concern about the repos? e.g. what would you
check for to make sure things are 'there and working'?

> One question about the bugzilla:
>
> Will all the references in all commit messages get outdated after
> bugzilla is dead?
> Or bugzilla will stay up for referrence but closed for interaction?
> or all old closed stuff are always moved and bugzilla.fd.o as well and
> bugzilla.fd.o will be mirroring gitlab?

When bugs are migrated from Bugzilla to GitLab, only open bugs are
migrated. Closed ones are left in place, as is; open ones have a
comment at the end saying that the bug has moved to GitLab, a URL
linking to the new GitLab issue, and telling them to please chase it
up there.

Even when we move everyone completely off Bugzilla, we will keep it as
a read-only mirror forever. Even Phabricator, which very few
people ever used, has had all its bugs and code review captured and
archived, so we can continue to preserve all the old content and
links, without having to run the actual service.

Cheers,
Daniel
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [igt-dev] RFC: Migration to Gitlab

2018-08-22 Thread Daniel Stone
Hi,

On Wed, 22 Aug 2018 at 15:44, Daniel Vetter  wrote:
> On Wed, Aug 22, 2018 at 3:13 PM, Jani Nikula  
> wrote:
> > Just a couple of concerns from drm/i915 perspective for starters:
> >
> > - Patchwork integration. I think we'll want to keep patchwork for at
> >   least intel-gfx etc. for the time being. IIUC the one thing we need is
> >   some server side hook to update patchwork on git push.
> >
> > - Sticking to fdo bugzilla and disabling gitlab issues for at least
> >   drm-intel for the time being. Doing that migration in the same go is a
> >   bit much I think. Reassignment across bugzilla and gitlab will be an
> >   issue.
>
> Good points, forgot about both. Patchwork reading the mailing list
> should keep working as-is, but the update hook needs looking into.

All the hooks are retained. gitlab.fd.o pushes to git.fd.o, and
git.fd.o still executes all the old hooks. This includes Patchwork,
readthedocs, AppVeyor, and whatever else.

> For merge requests I think best approach is to enable them very
> selectively at first for testing out, and then making a per-subproject
> decision whether they make sense. E.g. I think for maintainer-tools
> integrating make check and the doc building into gitlab CI would be
> sweet, and worth looking into gitlab merge requests just to automate
> that. Again best left out of scope for the initial migration.

You don't need MRs to do that. You can build a .gitlab-ci.yml file
which will run make check or build the docs or whatever, and have that
run on pushes. Anyone can then fork the repo, push their changes to
that fork, and see the CI results from there. It's like Travis: the CI
configuration is a (mutable) part of the codebase, not a separate
'thing' hanging off a specific repo. So if you write the CI pipeline
first, you can have people running CI on push completely independently
of switching the workflow to use MRs.

Cheers,
Daniel
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Regression: Kaveri VCE ring test fails

2018-08-22 Thread Michel Dänzer

Hi Rex,


I bisected a regression to your "drm/amd/pp: Unify
powergate_uvd/vce/mmhub to set_powergating_by_smu" change. Since this
change, the VCE ring test fails on this Kaveri laptop, which prevents
the amdgpu driver from initializing at all, see the attached kern.log.

If I disable VCE via the ip_block_mask module parameter, I get an oops
in kv_dpm_get_sclk called from amdkfd, see the attached no-vce.log.


Since the change above landed for 4.19-rc1, the regression should be
fixed before the final 4.19 release if at all possible.


Note that before this change, the VCE IB test has always (as far as I
remember) failed on this machine, see the attached vce-ib-fail.log. But
I don't care about that too much, as I don't need VCE, and I haven't
noticed any other issues related to that.


P.S. Somebody else bisected a Mullins issue to the same commit:
https://bugs.freedesktop.org/show_bug.cgi?id=107595

-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
Aug 21 10:20:37 thor kernel: [4.429726] [drm] amdgpu kernel modesetting enabled.
Aug 21 10:20:37 thor kernel: [4.434379] thermal LNXTHERM:04: registered as thermal_zone4
Aug 21 10:20:37 thor kernel: [4.435586] ACPI: Thermal Zone [BATZ] (27 C)
Aug 21 10:20:37 thor kernel: [4.438045] ohci-pci: OHCI PCI platform driver
Aug 21 10:20:37 thor kernel: [4.438785] Parsing CRAT table with 1 nodes
Aug 21 10:20:37 thor kernel: [4.440120] ohci-pci :00:12.0: OHCI PCI host controller
Aug 21 10:20:37 thor kernel: [4.440522] Creating topology SYSFS entries
Aug 21 10:20:37 thor kernel: [4.441676] ohci-pci :00:12.0: new USB bus registered, assigned bus number 5
Aug 21 10:20:37 thor kernel: [4.442936] Topology: Add APU node [0x0:0x0]
Aug 21 10:20:37 thor kernel: [4.444233] ohci-pci :00:12.0: irq 18, io mem 0xd684e000
Aug 21 10:20:37 thor kernel: [4.445142] Finished initializing topology
Aug 21 10:20:37 thor kernel: [4.445268] kfd kfd: Initialized module
Aug 21 10:20:37 thor kernel: [4.449415] checking generic (c000 30) vs hw (c000 1000)
Aug 21 10:20:37 thor kernel: [4.449420] fb: switching to amdgpudrmfb from EFI VGA
Aug 21 10:20:37 thor kernel: [4.450767] Console: switching to colour dummy device 80x25
Aug 21 10:20:37 thor kernel: [4.455227] [drm] initializing kernel modesetting (KAVERI 0x1002:0x130A 0x103C:0x2234 0x00).
Aug 21 10:20:37 thor kernel: [4.455359] [drm] register mmio base: 0xD680
Aug 21 10:20:37 thor kernel: [4.455380] [drm] register mmio size: 262144
Aug 21 10:20:37 thor kernel: [4.455415] [drm] add ip block number 0 
Aug 21 10:20:37 thor kernel: [4.455437] [drm] add ip block number 1 
Aug 21 10:20:37 thor kernel: [4.455458] [drm] add ip block number 2 
Aug 21 10:20:37 thor kernel: [4.455479] [drm] add ip block number 3 
Aug 21 10:20:37 thor kernel: [4.455499] [drm] add ip block number 4 
Aug 21 10:20:37 thor kernel: [4.455521] [drm] add ip block number 5 
Aug 21 10:20:37 thor kernel: [4.455542] [drm] add ip block number 6 
Aug 21 10:20:37 thor kernel: [4.455562] [drm] add ip block number 7 
Aug 21 10:20:37 thor kernel: [4.455582] [drm] add ip block number 8 
Aug 21 10:20:37 thor kernel: [4.480652] [drm] BIOS signature incorrect 0 0
Aug 21 10:20:37 thor kernel: [4.480716] resource sanity check: requesting [mem 0x000c-0x000d], which spans more than PCI Bus :00 [mem 0x000c-0x000c3fff window]
Aug 21 10:20:37 thor kernel: [4.480825] caller pci_map_rom+0x58/0xe0 mapping multiple BARs
Aug 21 10:20:37 thor kernel: [4.482188] ATOM BIOS: BR45464.001
Aug 21 10:20:37 thor kernel: [4.483717] [drm] vm size is 64 GB, 2 levels, block size is 10-bit, fragment size is 9-bit
Aug 21 10:20:37 thor kernel: [4.483764] amdgpu :00:01.0: VRAM: 1024M 0x00F4 - 0x00F43FFF (1024M used)
Aug 21 10:20:37 thor kernel: [4.483804] amdgpu :00:01.0: GART: 1024M 0x - 0x3FFF
Aug 21 10:20:37 thor kernel: [4.483850] [drm] Detected VRAM RAM=1024M, BAR=1024M
Aug 21 10:20:37 thor kernel: [4.483875] [drm] RAM width 128bits UNKNOWN
Aug 21 10:20:37 thor kernel: [4.484472] [TTM] Zone  kernel: Available graphics memory: 3568746 kiB
Aug 21 10:20:37 thor kernel: [4.484537] [TTM] Zone   dma32: Available graphics memory: 2097152 kiB
Aug 21 10:20:37 thor kernel: [4.484563] [TTM] Initializing pool allocator
Aug 21 10:20:37 thor kernel: [4.484620] [TTM] Initializing DMA pool allocator
Aug 21 10:20:37 thor kernel: [4.485188] [drm] amdgpu: 1024M of VRAM memory ready
Aug 21 10:20:37 thor kernel: [4.485217] [drm] amdgpu: 3072M of GTT memory ready.
Aug 21 10:20:37 thor kernel: [4.485316] [drm] GART: num cpu pages 262144, num gpu pages 262144
Aug 21 10:20:37 thor kernel: [4.486477] [drm] PCIE GART of 1024M enabled (table at 0x00F40030).
Aug 

Re: RFC: Migration to Gitlab

2018-08-22 Thread Daniel Stone
 Hi,

On Wed, 22 Aug 2018 at 16:02, Emil Velikov  wrote:
> On 22 August 2018 at 12:44, Daniel Vetter  wrote:
> > I think it's time to brainstorm a bit about the gitlab migration. Basic 
> > reasons:
> >
> > - fd.o admins want to deprecate shell accounts and hand-rolled
> > infrastructure, because it's a pain to keep secure
> >
> > - gitlab will allow us to add committers on our own, greatly
> > simplifying that process (and offloading that task from fd.o admins).
>
> Random thought - I really wish the admins spoke early and louder about issues.
> From infra to manpower and adhoc tool maintenance.

I thought I mostly had it covered, but fair enough. What knowledge are
you missing and how should it best be delivered?

One first-order issue is that as documented at
https://www.freedesktop.org/wiki/AccountRequests/ creating accounts
requires manual admin intervention. You can also search the
'freedesktop.org' product on Bugzilla to see the amount of time we
spend shuffling around GPG & SSH keys, which is about the worst
possible use of my time. Many other people have had access to drive
the ancient shell-script frontend to LDAP before, but for some reason
they mostly aren't very enthusiastic about doing it all the time.

In the mesa-dev@ thread about Mesa's migration, which is linked from
my blog post, I went into quite a lot of detail about why Jenkins was
not suitable to roll out across fd.o globally. That knowledge was
gained from trial & error, which was a lot of time burnt. The end
result is that we don't have any CI, except if people hang
Travis/AppVeyor off GitHub mirrors.

You've personally seen what's involved in setting up Git repository
hooks so we can build docs. We can't give access to let people work on
those, without giving them direct access to the literal Git repository
itself on disk. The hooks mostly involve bespoke sets of rsync jobs
and hand-managed SSH credentials and external services to build docs
and so on and so forth. None of this is accessible to a newcomer who
wants to make a non-code contribution: you have to find someone with
access to the repo to go bash some shell scripts directly and hope you
didn't screw it up. None of this is auditable, so if the repo
mysteriously gets wiped, then hopefully there are some backups
somewhere. But there definitely aren't any logs. Luckily we prevent
most people from having access to most repositories via a mandatory
'shell' which only allows people to push Git; this was written by hand
by us 12 years ago.

What else? Our fork of Patchwork was until recently based on an
ancient fork of Django, in a bespoke unreproducible production
environment. Bugzilla is patched for spam and templates (making
upgrades complex), yet we still have a surprising amount of spam pass
through. There's no way to delete spam, but you have to manually move
every bug to the 'spam' group, then go through and delete the user
which involves copying & pasting the email and a few clicks per user.
Mailman is patched to support Recaptcha, bringing more upgrade fun.
ikiwiki breaks all the time because it's designed to have the
public-access web server on the same host as the writeable Git
repositories.

Our servers are several years old and approaching life expiry, and we
have no more spare disk. You can see in #freedesktop IRC the constant
network connectivity issues people have to PSU almost every day. Our
attempts to root-cause and solve those have got nowhere.

I could go on, but the 'moved elsewhere' list in
https://gitlab.freedesktop.org/freedesktop/freedesktop/issues/2
indicates that we do have problems to solve, and that changing
peoples' SSH keys for them isn't the best way for us to be spending
the extremely limited time that we do have.

> > For the full in-depth writeup of everything, see
> >
> > https://www.fooishbar.org/blog/gitlab-fdo-introduction/

If you haven't seen this, or the post it was linked from, they would
be a good start:
https://lists.freedesktop.org/archives/freedesktop/2018-July/000370.html

There's also the long thread on mesa-dev a long time back which covers
a lot of this ground already.

> > - Figuring out the actual migration - we've been adding a pile of
> > committers since fd.o LDAP was converted to gitlab once back in
> > spring. We need to at least figure out how to move the new
> > accounts/committers.
>
> As an observer, allow me to offer some ideas. You've mostly covered them
> all, my emphasis is to seriously stick with _one_ thing at a time.
> Attempting to do multiple things in parallel will end up with
> sub-optimal results.
>
>  - (at random point) cleanup the committers list - people who have not
> contributed in the last year?

libdrm was previously covered under the Mesa ACL. Here are the other
committer lists, which you can see via 'getent group' on annarchy or
another machine:

amdkfd:x:922:fxkuehl,agd5f,deathsimple,danvet,jazhou,jbridgman,hwentland,tstdenis,gitlab-mirror,rui
drm-meson:x:936:narmstrong
drm:x:940:airlied,danvet

Re: [igt-dev] RFC: Migration to Gitlab

2018-08-22 Thread Rodrigo Vivi
On Wed, Aug 22, 2018 at 10:19:19AM -0400, Adam Jackson wrote:
> On Wed, 2018-08-22 at 16:13 +0300, Jani Nikula wrote:
> 
> > - Sticking to fdo bugzilla and disabling gitlab issues for at least
> >   drm-intel for the time being. Doing that migration in the same go is a
> >   bit much I think. Reassignment across bugzilla and gitlab will be an
> >   issue.
> 
> Can you elaborate a bit on the issues here? The actual move-the-bugs
> process has been pretty painless for the parts of xorg we've done so
> far.

I guess there is nothing against moving the bugs there. The concern is only
about doing everything at once.

I'm in favor of moving the git repos for now, and once we are confident that
everything is there and working we can move the bugs.

One question about the bugzilla:

Will all the references in existing commit messages become stale after
bugzilla is dead?
Or will bugzilla stay up for reference but closed for interaction?
Or will all the old closed bugs be migrated as well, with bugzilla.fd.o
mirroring gitlab?

Thanks,
Rodrigo.

> 
> - ajax
> ___
> dim-tools mailing list
> dim-to...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dim-tools
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 11/11] drm/amdgpu: enable GTT PD/PT for raven

2018-08-22 Thread Andrey Grodzovsky



On 08/22/2018 11:05 AM, Christian König wrote:

Should work on Vega10 as well, but with an obvious performance hit.

Older APUs can be enabled as well, but will probably be more work.

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 11 ++-
  1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 928fdae0dab4..670a42729f88 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -308,6 +308,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, 
struct amdgpu_vm *vm,
		list_move(&bo_base->vm_status, &vm->moved);
		spin_unlock(&vm->moved_lock);
	} else {
+		amdgpu_ttm_alloc_gart(&bo->tbo);


Looks like you forgot to check for return value here.
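
One way to handle it, mirroring the validate() error path earlier in
amdgpu_vm_validate_pt_bos() (a quick sketch, not tested):

	r = amdgpu_ttm_alloc_gart(&bo->tbo);
	if (r)
		break;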

Andrey


		list_move(&bo_base->vm_status, &vm->relocated);
}
}
@@ -396,6 +397,10 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
if (r)
goto error;
  
+	r = amdgpu_ttm_alloc_gart(&bo->tbo);

+   if (r)
+   return r;
+
	r = amdgpu_job_alloc_with_ib(adev, 64, &job);
if (r)
goto error;
@@ -461,7 +466,11 @@ static void amdgpu_vm_bo_param(struct amdgpu_device *adev, 
struct amdgpu_vm *vm,
bp->size = amdgpu_vm_bo_size(adev, level);
bp->byte_align = AMDGPU_GPU_PAGE_SIZE;
bp->domain = AMDGPU_GEM_DOMAIN_VRAM;
-   bp->flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
+   if (bp->size <= PAGE_SIZE && adev->asic_type == CHIP_RAVEN)
+   bp->domain |= AMDGPU_GEM_DOMAIN_GTT;
+   bp->domain = amdgpu_bo_get_preferred_pin_domain(adev, bp->domain);
+   bp->flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
+   AMDGPU_GEM_CREATE_CPU_GTT_USWC;
if (vm->use_cpu_for_update)
bp->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
else


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 04/11] drm/amdgpu: move setting the GART addr into TTM

2018-08-22 Thread Christian König
Move setting the GART addr for window based copies into the TTM code
that uses it.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 --
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 5 -
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 391e2f7c03aa..239ccbae09bc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -82,8 +82,6 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, 
unsigned size,
r = amdgpu_ib_get(adev, NULL, size, &(*job)->ibs[0]);
if (r)
kfree(*job);
-   else
-   (*job)->vm_pd_addr = adev->gart.table_addr;
 
return r;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index c6611cff64c8..b4333f60ed8b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -2048,7 +2048,10 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, 
uint64_t src_offset,
if (r)
return r;
 
-   job->vm_needs_flush = vm_needs_flush;
+   if (vm_needs_flush) {
+   job->vm_pd_addr = adev->gart.table_addr;
+   job->vm_needs_flush = true;
+   }
if (resv) {
		r = amdgpu_sync_resv(adev, &job->sync, resv,
 AMDGPU_FENCE_OWNER_UNDEFINED,
-- 
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 02/11] drm/amdgpu: validate the VM root PD from the VM code

2018-08-22 Thread Christian König
Preparation for following changes. This validates the root PD twice,
but the overhead of that should be minimal.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 73b8dcaf66e6..53ce9982a5ee 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -291,11 +291,11 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, 
struct amdgpu_vm *vm,
	list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
struct amdgpu_bo *bo = bo_base->bo;
 
-   if (bo->parent) {
-   r = validate(param, bo);
-   if (r)
-   break;
+   r = validate(param, bo);
+   if (r)
+   break;
 
+   if (bo->parent) {
			spin_lock(&glob->lru_lock);
			ttm_bo_move_to_lru_tail(&bo->tbo);
if (bo->shadow)
-- 
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 06/11] drm/amdgpu: remove gart.table_addr

2018-08-22 Thread Christian König
We can easily figure out the address on the fly.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c | 1 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h | 1 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c  | 4 ++--
 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c | 7 +++
 drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c| 9 +
 drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c| 9 +
 drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c| 9 +
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c| 2 +-
 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c  | 7 +++
 9 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
index f5cb5e2856c1..11fea28f8ad3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
@@ -157,7 +157,6 @@ int amdgpu_gart_table_vram_pin(struct amdgpu_device *adev)
if (r)
amdgpu_bo_unpin(adev->gart.bo);
amdgpu_bo_unreserve(adev->gart.bo);
-   adev->gart.table_addr = amdgpu_bo_gpu_offset(adev->gart.bo);
return r;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h
index d7b7c2d408d5..9ff62887e4e3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h
@@ -40,7 +40,6 @@ struct amdgpu_bo;
 #define AMDGPU_GPU_PAGES_IN_CPU_PAGE (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE)
 
 struct amdgpu_gart {
-   u64 table_addr;
struct amdgpu_bo*bo;
void*ptr;
unsignednum_gpu_pages;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index b4333f60ed8b..e7f73deed975 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1988,7 +1988,7 @@ static int amdgpu_map_buffer(struct ttm_buffer_object *bo,
src_addr = num_dw * 4;
src_addr += job->ibs[0].gpu_addr;
 
-   dst_addr = adev->gart.table_addr;
+   dst_addr = amdgpu_bo_gpu_offset(adev->gart.bo);
dst_addr += window * AMDGPU_GTT_MAX_TRANSFER_SIZE * 8;
amdgpu_emit_copy_buffer(adev, >ibs[0], src_addr,
dst_addr, num_bytes);
@@ -2049,7 +2049,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t 
src_offset,
return r;
 
if (vm_needs_flush) {
-   job->vm_pd_addr = adev->gart.table_addr;
+   job->vm_pd_addr = amdgpu_bo_gpu_offset(adev->gart.bo);
job->vm_needs_flush = true;
}
if (resv) {
diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
index acfbd2d749cf..2baab7e69ef5 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
@@ -37,11 +37,10 @@ u64 gfxhub_v1_0_get_mc_fb_offset(struct amdgpu_device *adev)
 
 static void gfxhub_v1_0_init_gart_pt_regs(struct amdgpu_device *adev)
 {
-   uint64_t value;
+   uint64_t value = amdgpu_bo_gpu_offset(adev->gart.bo);
 
-	BUG_ON(adev->gart.table_addr & (~0x0000FFFFFFFFF000ULL));
-	value = adev->gart.table_addr - adev->gmc.vram_start
-		+ adev->vm_manager.vram_base_offset;
+	BUG_ON(value & (~0x0000FFFFFFFFF000ULL));
+	value -= adev->gmc.vram_start + adev->vm_manager.vram_base_offset;
	value &= 0x0000FFFFFFFFF000ULL;
value |= 0x1; /*valid bit*/
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
index c50bd0c46508..8b313cd00b7e 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
@@ -494,6 +494,7 @@ static void gmc_v6_0_set_prt(struct amdgpu_device *adev, 
bool enable)
 
 static int gmc_v6_0_gart_enable(struct amdgpu_device *adev)
 {
+   uint64_t table_addr = amdgpu_bo_gpu_offset(adev->gart.bo);
int r, i;
u32 field;
 
@@ -532,7 +533,7 @@ static int gmc_v6_0_gart_enable(struct amdgpu_device *adev)
/* setup context0 */
WREG32(mmVM_CONTEXT0_PAGE_TABLE_START_ADDR, adev->gmc.gart_start >> 12);
WREG32(mmVM_CONTEXT0_PAGE_TABLE_END_ADDR, adev->gmc.gart_end >> 12);
-   WREG32(mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR, adev->gart.table_addr >> 12);
+   WREG32(mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR, table_addr >> 12);
WREG32(mmVM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR,
(u32)(adev->dummy_page_addr >> 12));
WREG32(mmVM_CONTEXT0_CNTL2, 0);
@@ -556,10 +557,10 @@ static int gmc_v6_0_gart_enable(struct amdgpu_device 
*adev)
for (i = 1; i < 16; i++) {
if (i < 8)
WREG32(mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR + i,
-  adev->gart.table_addr >> 12);
+  table_addr >> 12);
else

[PATCH 09/11] drm/amdgpu: add amdgpu_gmc_get_pde_for_bo helper

2018-08-22 Thread Christian König
Helper to get the PDE for a PD/PT.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 37 +++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h |  2 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 21 --
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  |  4 +--
 5 files changed, 57 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 36058feac64f..6f79ce108728 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -26,6 +26,38 @@
 
 #include "amdgpu.h"
 
+/**
+ * amdgpu_gmc_get_pde_for_bo - get the PDE for a BO
+ *
+ * @bo: the BO to get the PDE for
+ * @level: the level in the PD hierarchy
+ * @addr: resulting addr
+ * @flags: resulting flags
+ *
+ * Get the address and flags to be used for a PDE.
+ */
+void amdgpu_gmc_get_pde_for_bo(struct amdgpu_bo *bo, int level,
+  uint64_t *addr, uint64_t *flags)
+{
+   struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+   struct ttm_dma_tt *ttm;
+
+   switch (bo->tbo.mem.mem_type) {
+   case TTM_PL_TT:
+   ttm = container_of(bo->tbo.ttm, struct ttm_dma_tt, ttm);
+   *addr = ttm->dma_address[0];
+   break;
+   case TTM_PL_VRAM:
+   *addr = amdgpu_bo_gpu_offset(bo);
+   break;
+   default:
+   *addr = 0;
+   break;
+   }
+	*flags = amdgpu_ttm_tt_pde_flags(bo->tbo.ttm, &bo->tbo.mem);
+   amdgpu_gmc_get_vm_pde(adev, level, addr, flags);
+}
+
 /**
  * amdgpu_gmc_pd_addr - return the address of the root directory
  *
@@ -35,13 +67,14 @@ uint64_t amdgpu_gmc_pd_addr(struct amdgpu_bo *bo)
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
uint64_t pd_addr;
 
-   pd_addr = amdgpu_bo_gpu_offset(bo);
/* TODO: move that into ASIC specific code */
if (adev->asic_type >= CHIP_VEGA10) {
uint64_t flags = AMDGPU_PTE_VALID;
 
-		amdgpu_gmc_get_vm_pde(adev, -1, &pd_addr, &flags);
+		amdgpu_gmc_get_pde_for_bo(bo, -1, &pd_addr, &flags);
pd_addr |= flags;
+   } else {
+   pd_addr = amdgpu_bo_gpu_offset(bo);
}
return pd_addr;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
index 7c469cce0498..0d2c9f65ca13 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
@@ -131,6 +131,8 @@ static inline bool amdgpu_gmc_vram_full_visible(struct 
amdgpu_gmc *gmc)
return (gmc->real_vram_size == gmc->visible_vram_size);
 }
 
+void amdgpu_gmc_get_pde_for_bo(struct amdgpu_bo *bo, int level,
+  uint64_t *addr, uint64_t *flags);
 uint64_t amdgpu_gmc_pd_addr(struct amdgpu_bo *bo);
 
 #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index eb08a03b82a0..72366643e3c2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1428,13 +1428,14 @@ bool amdgpu_ttm_tt_is_readonly(struct ttm_tt *ttm)
 }
 
 /**
- * amdgpu_ttm_tt_pte_flags - Compute PTE flags for ttm_tt object
+ * amdgpu_ttm_tt_pde_flags - Compute PDE flags for ttm_tt object
  *
  * @ttm: The ttm_tt object to compute the flags for
  * @mem: The memory registry backing this ttm_tt object
+ *
+ * Figure out the flags to use for a VM PDE.
  */
-uint64_t amdgpu_ttm_tt_pte_flags(struct amdgpu_device *adev, struct ttm_tt 
*ttm,
-struct ttm_mem_reg *mem)
+uint64_t amdgpu_ttm_tt_pde_flags(struct ttm_tt *ttm, struct ttm_mem_reg *mem)
 {
uint64_t flags = 0;
 
@@ -1448,6 +1449,20 @@ uint64_t amdgpu_ttm_tt_pte_flags(struct amdgpu_device 
*adev, struct ttm_tt *ttm,
flags |= AMDGPU_PTE_SNOOPED;
}
 
+   return flags;
+}
+
+/**
+ * amdgpu_ttm_tt_pte_flags - Compute PTE flags for ttm_tt object
+ *
+ * @ttm: The ttm_tt object to compute the flags for
+ * @mem: The memory registry backing this ttm_tt object
+ */
+uint64_t amdgpu_ttm_tt_pte_flags(struct amdgpu_device *adev, struct ttm_tt 
*ttm,
+struct ttm_mem_reg *mem)
+{
+   uint64_t flags = amdgpu_ttm_tt_pde_flags(ttm, mem);
+
flags |= adev->gart.gart_pte_flags;
flags |= AMDGPU_PTE_READABLE;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 8b3cc6687769..fe8f276e9811 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -116,6 +116,7 @@ bool amdgpu_ttm_tt_userptr_invalidated(struct ttm_tt *ttm,
   int *last_invalidated);
 bool amdgpu_ttm_tt_userptr_needs_pages(struct ttm_tt *ttm);
 bool amdgpu_ttm_tt_is_readonly(struct ttm_tt *ttm);
+uint64_t 
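
For reference, a minimal usage sketch of the new helper (names taken
from the hunks above; level -1 selects the root PD, as in
amdgpu_gmc_pd_addr()):

	uint64_t addr, flags;

	/* bo is a reserved PD/PT buffer object */
	amdgpu_gmc_get_pde_for_bo(bo, -1, &addr, &flags);
	/* the PDE value is then addr | flags, as amdgpu_gmc_pd_addr()
	 * does for the root PD on Vega10 and newer */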

[PATCH 10/11] drm/amdgpu: add helper for VM PD/PT allocation parameters

2018-08-22 Thread Christian König
Add a helper function to figure them out only once.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 --
 1 file changed, 28 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 87e3d44b0a3f..928fdae0dab4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -446,6 +446,31 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
return r;
 }
 
+/**
+ * amdgpu_vm_bo_param - fill in parameters for PD/PT allocation
+ *
+ * @adev: amdgpu_device pointer
+ * @vm: requesting vm
+ * @bp: resulting BO allocation parameters
+ */
+static void amdgpu_vm_bo_param(struct amdgpu_device *adev, struct amdgpu_vm 
*vm,
+  int level, struct amdgpu_bo_param *bp)
+{
+	memset(bp, 0, sizeof(*bp));
+
+   bp->size = amdgpu_vm_bo_size(adev, level);
+   bp->byte_align = AMDGPU_GPU_PAGE_SIZE;
+   bp->domain = AMDGPU_GEM_DOMAIN_VRAM;
+   bp->flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
+   if (vm->use_cpu_for_update)
+   bp->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
+   else
+   bp->flags |= AMDGPU_GEM_CREATE_SHADOW;
+   bp->type = ttm_bo_type_kernel;
+   if (vm->root.base.bo)
+   bp->resv = vm->root.base.bo->tbo.resv;
+}
+
 /**
  * amdgpu_vm_alloc_levels - allocate the PD/PT levels
  *
@@ -469,8 +494,8 @@ static int amdgpu_vm_alloc_levels(struct amdgpu_device 
*adev,
  unsigned level, bool ats)
 {
unsigned shift = amdgpu_vm_level_shift(adev, level);
+   struct amdgpu_bo_param bp;
unsigned pt_idx, from, to;
-   u64 flags;
int r;
 
if (!parent->entries) {
@@ -494,29 +519,14 @@ static int amdgpu_vm_alloc_levels(struct amdgpu_device 
*adev,
saddr = saddr & ((1 << shift) - 1);
eaddr = eaddr & ((1 << shift) - 1);
 
-   flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
-   if (vm->use_cpu_for_update)
-   flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
-   else
-   flags |= (AMDGPU_GEM_CREATE_NO_CPU_ACCESS |
-   AMDGPU_GEM_CREATE_SHADOW);
+	amdgpu_vm_bo_param(adev, vm, level, &bp);
 
/* walk over the address space and allocate the page tables */
for (pt_idx = from; pt_idx <= to; ++pt_idx) {
-   struct reservation_object *resv = vm->root.base.bo->tbo.resv;
struct amdgpu_vm_pt *entry = >entries[pt_idx];
struct amdgpu_bo *pt;
 
if (!entry->base.bo) {
-   struct amdgpu_bo_param bp;
-
-			memset(&bp, 0, sizeof(bp));
-   bp.size = amdgpu_vm_bo_size(adev, level);
-   bp.byte_align = AMDGPU_GPU_PAGE_SIZE;
-   bp.domain = AMDGPU_GEM_DOMAIN_VRAM;
-   bp.flags = flags;
-   bp.type = ttm_bo_type_kernel;
-   bp.resv = resv;
			r = amdgpu_bo_create(adev, &bp, &pt);
if (r)
return r;
@@ -2564,8 +2574,6 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
amdgpu_vm *vm,
 {
struct amdgpu_bo_param bp;
struct amdgpu_bo *root;
-   unsigned long size;
-   uint64_t flags;
int r, i;
 
vm->va = RB_ROOT_CACHED;
@@ -2602,20 +2610,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
amdgpu_vm *vm,
  "CPU update of VM recommended only for large BAR system\n");
vm->last_update = NULL;
 
-   flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
-   if (vm->use_cpu_for_update)
-   flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
-   else
-   flags |= AMDGPU_GEM_CREATE_SHADOW;
-
-   size = amdgpu_vm_bo_size(adev, adev->vm_manager.root_level);
-	memset(&bp, 0, sizeof(bp));
-   bp.size = size;
-   bp.byte_align = AMDGPU_GPU_PAGE_SIZE;
-   bp.domain = AMDGPU_GEM_DOMAIN_VRAM;
-   bp.flags = flags;
-   bp.type = ttm_bo_type_kernel;
-   bp.resv = NULL;
+	amdgpu_vm_bo_param(adev, vm, adev->vm_manager.root_level, &bp);
	r = amdgpu_bo_create(adev, &bp, &root);
if (r)
goto error_free_sched_entity;
-- 
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 03/11] drm/amdgpu: cleanup VM handling in the CS a bit

2018-08-22 Thread Christian König
Add a helper function for getting the root PD addr, join the two VM
related functions, and clean up the function name.

No functional change.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 160 -
 1 file changed, 74 insertions(+), 86 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index d42d1c8f78f6..17bf63f93c93 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -804,8 +804,9 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser 
*parser, int error,
amdgpu_bo_unref(>uf_entry.robj);
 }
 
-static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser *p)
+static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 {
+   struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
struct amdgpu_device *adev = p->adev;
struct amdgpu_vm *vm = >vm;
@@ -814,6 +815,71 @@ static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser 
*p)
struct amdgpu_bo *bo;
int r;
 
+   /* Only for UVD/VCE VM emulation */
+   if (ring->funcs->parse_cs || ring->funcs->patch_cs_in_place) {
+   unsigned i, j;
+
+   for (i = 0, j = 0; i < p->nchunks && j < p->job->num_ibs; i++) {
+   struct drm_amdgpu_cs_chunk_ib *chunk_ib;
+   struct amdgpu_bo_va_mapping *m;
+   struct amdgpu_bo *aobj = NULL;
+   struct amdgpu_cs_chunk *chunk;
+   uint64_t offset, va_start;
+   struct amdgpu_ib *ib;
+   uint8_t *kptr;
+
+			chunk = &p->chunks[i];
+			ib = &p->job->ibs[j];
+   chunk_ib = chunk->kdata;
+
+   if (chunk->chunk_id != AMDGPU_CHUNK_ID_IB)
+   continue;
+
+   va_start = chunk_ib->va_start & AMDGPU_VA_HOLE_MASK;
+			r = amdgpu_cs_find_mapping(p, va_start, &aobj, &m);
+   if (r) {
+   DRM_ERROR("IB va_start is invalid\n");
+   return r;
+   }
+
+   if ((va_start + chunk_ib->ib_bytes) >
+   (m->last + 1) * AMDGPU_GPU_PAGE_SIZE) {
+   DRM_ERROR("IB va_start+ib_bytes is invalid\n");
+   return -EINVAL;
+   }
+
+   /* the IB should be reserved at this point */
+			r = amdgpu_bo_kmap(aobj, (void **)&kptr);
+   if (r) {
+   return r;
+   }
+
+   offset = m->start * AMDGPU_GPU_PAGE_SIZE;
+   kptr += va_start - offset;
+
+   if (ring->funcs->parse_cs) {
+   memcpy(ib->ptr, kptr, chunk_ib->ib_bytes);
+   amdgpu_bo_kunmap(aobj);
+
+   r = amdgpu_ring_parse_cs(ring, p, j);
+   if (r)
+   return r;
+   } else {
+   ib->ptr = (uint32_t *)kptr;
+   r = amdgpu_ring_patch_cs_in_place(ring, p, j);
+   amdgpu_bo_kunmap(aobj);
+   if (r)
+   return r;
+   }
+
+   j++;
+   }
+   }
+
+   if (!p->job->vm)
+   return amdgpu_cs_sync_rings(p);
+
+
r = amdgpu_vm_clear_freed(adev, vm, NULL);
if (r)
return r;
@@ -876,6 +942,12 @@ static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser 
*p)
if (r)
return r;
 
+   r = reservation_object_reserve_shared(vm->root.base.bo->tbo.resv);
+   if (r)
+   return r;
+
+   p->job->vm_pd_addr = amdgpu_bo_gpu_offset(vm->root.base.bo);
+
if (amdgpu_vm_debug) {
/* Invalidate all BOs to test for userspace bugs */
amdgpu_bo_list_for_each_entry(e, p->bo_list) {
@@ -887,90 +959,6 @@ static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser 
*p)
}
}
 
-   return r;
-}
-
-static int amdgpu_cs_ib_vm_chunk(struct amdgpu_device *adev,
-struct amdgpu_cs_parser *p)
-{
-   struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
-   struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
-   struct amdgpu_vm *vm = >vm;
-   int r;
-
-   /* Only for UVD/VCE VM emulation */
-   if (ring->funcs->parse_cs || ring->funcs->patch_cs_in_place) {
-   unsigned i, j;
-
-   for (i = 0, j = 0; i < 
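
To summarize the control flow of the unified function (a sketch in
comments, following the hunks above):

	/* amdgpu_cs_vm_handling(p):
	 *  1. UVD/VCE VM emulation only: kmap each IB chunk, then either
	 *     memcpy + amdgpu_ring_parse_cs() or patch the CS in place;
	 *  2. if the job has no VM, just amdgpu_cs_sync_rings(p);
	 *  3. otherwise clear freed mappings, update the PDs/PTs, reserve
	 *     a shared fence slot and set p->job->vm_pd_addr.
	 */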

[PATCH 05/11] drm/amdgpu: rename gart.robj into gart.bo

2018-08-22 Thread Christian König
sed -i "s/gart.robj/gart.bo/" drivers/gpu/drm/amd/amdgpu/*.c
sed -i "s/gart.robj/gart.bo/" drivers/gpu/drm/amd/amdgpu/*.h

Just cleaning up radeon leftovers.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c | 32 
 drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h |  2 +-
 drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c|  4 +--
 drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c|  4 +--
 drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c|  4 +--
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c|  4 +--
 6 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
index a54d5655a191..f5cb5e2856c1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
@@ -112,7 +112,7 @@ int amdgpu_gart_table_vram_alloc(struct amdgpu_device *adev)
 {
int r;
 
-   if (adev->gart.robj == NULL) {
+   if (adev->gart.bo == NULL) {
struct amdgpu_bo_param bp;
 
		memset(&bp, 0, sizeof(bp));
@@ -123,7 +123,7 @@ int amdgpu_gart_table_vram_alloc(struct amdgpu_device *adev)
AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
bp.type = ttm_bo_type_kernel;
bp.resv = NULL;
-		r = amdgpu_bo_create(adev, &bp, &adev->gart.robj);
+		r = amdgpu_bo_create(adev, &bp, &adev->gart.bo);
if (r) {
return r;
}
@@ -145,19 +145,19 @@ int amdgpu_gart_table_vram_pin(struct amdgpu_device *adev)
 {
int r;
 
-   r = amdgpu_bo_reserve(adev->gart.robj, false);
+   r = amdgpu_bo_reserve(adev->gart.bo, false);
if (unlikely(r != 0))
return r;
-   r = amdgpu_bo_pin(adev->gart.robj, AMDGPU_GEM_DOMAIN_VRAM);
+   r = amdgpu_bo_pin(adev->gart.bo, AMDGPU_GEM_DOMAIN_VRAM);
if (r) {
-   amdgpu_bo_unreserve(adev->gart.robj);
+   amdgpu_bo_unreserve(adev->gart.bo);
return r;
}
-	r = amdgpu_bo_kmap(adev->gart.robj, &adev->gart.ptr);
+	r = amdgpu_bo_kmap(adev->gart.bo, &adev->gart.ptr);
if (r)
-   amdgpu_bo_unpin(adev->gart.robj);
-   amdgpu_bo_unreserve(adev->gart.robj);
-   adev->gart.table_addr = amdgpu_bo_gpu_offset(adev->gart.robj);
+   amdgpu_bo_unpin(adev->gart.bo);
+   amdgpu_bo_unreserve(adev->gart.bo);
+   adev->gart.table_addr = amdgpu_bo_gpu_offset(adev->gart.bo);
return r;
 }
 
@@ -173,14 +173,14 @@ void amdgpu_gart_table_vram_unpin(struct amdgpu_device 
*adev)
 {
int r;
 
-   if (adev->gart.robj == NULL) {
+   if (adev->gart.bo == NULL) {
return;
}
-   r = amdgpu_bo_reserve(adev->gart.robj, true);
+   r = amdgpu_bo_reserve(adev->gart.bo, true);
if (likely(r == 0)) {
-   amdgpu_bo_kunmap(adev->gart.robj);
-   amdgpu_bo_unpin(adev->gart.robj);
-   amdgpu_bo_unreserve(adev->gart.robj);
+   amdgpu_bo_kunmap(adev->gart.bo);
+   amdgpu_bo_unpin(adev->gart.bo);
+   amdgpu_bo_unreserve(adev->gart.bo);
adev->gart.ptr = NULL;
}
 }
@@ -196,10 +196,10 @@ void amdgpu_gart_table_vram_unpin(struct amdgpu_device 
*adev)
  */
 void amdgpu_gart_table_vram_free(struct amdgpu_device *adev)
 {
-   if (adev->gart.robj == NULL) {
+   if (adev->gart.bo == NULL) {
return;
}
-	amdgpu_bo_unref(&adev->gart.robj);
+	amdgpu_bo_unref(&adev->gart.bo);
 }
 
 /*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h
index 9f9e9dc87da1..d7b7c2d408d5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.h
@@ -41,7 +41,7 @@ struct amdgpu_bo;
 
 struct amdgpu_gart {
u64 table_addr;
-   struct amdgpu_bo*robj;
+   struct amdgpu_bo*bo;
void*ptr;
unsignednum_gpu_pages;
unsignednum_cpu_pages;
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
index c14cf1c5bf57..c50bd0c46508 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
@@ -497,7 +497,7 @@ static int gmc_v6_0_gart_enable(struct amdgpu_device *adev)
int r, i;
u32 field;
 
-   if (adev->gart.robj == NULL) {
+   if (adev->gart.bo == NULL) {
dev_err(adev->dev, "No VRAM object for PCIE GART.\n");
return -EINVAL;
}
@@ -588,7 +588,7 @@ static int gmc_v6_0_gart_init(struct amdgpu_device *adev)
 {
int r;
 
-   if (adev->gart.robj) {
+   if (adev->gart.bo) {
dev_warn(adev->dev, "gmc_v6_0 PCIE GART already initialized\n");
return 0;
}
diff --git 

[PATCH 11/11] drm/amdgpu: enable GTT PD/PT for raven

2018-08-22 Thread Christian König
Should work on Vega10 as well, but with an obvious performance hit.

Older APUs can be enabled as well, but will probably be more work.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 928fdae0dab4..670a42729f88 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -308,6 +308,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, 
struct amdgpu_vm *vm,
		list_move(&bo_base->vm_status, &vm->moved);
		spin_unlock(&vm->moved_lock);
	} else {
+		amdgpu_ttm_alloc_gart(&bo->tbo);
		list_move(&bo_base->vm_status, &vm->relocated);
}
}
@@ -396,6 +397,10 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
if (r)
goto error;
 
+	r = amdgpu_ttm_alloc_gart(&bo->tbo);
+   if (r)
+   return r;
+
	r = amdgpu_job_alloc_with_ib(adev, 64, &job);
if (r)
goto error;
@@ -461,7 +466,11 @@ static void amdgpu_vm_bo_param(struct amdgpu_device *adev, 
struct amdgpu_vm *vm,
bp->size = amdgpu_vm_bo_size(adev, level);
bp->byte_align = AMDGPU_GPU_PAGE_SIZE;
bp->domain = AMDGPU_GEM_DOMAIN_VRAM;
-   bp->flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
+   if (bp->size <= PAGE_SIZE && adev->asic_type == CHIP_RAVEN)
+   bp->domain |= AMDGPU_GEM_DOMAIN_GTT;
+   bp->domain = amdgpu_bo_get_preferred_pin_domain(adev, bp->domain);
+   bp->flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
+   AMDGPU_GEM_CREATE_CPU_GTT_USWC;
if (vm->use_cpu_for_update)
bp->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
else
-- 
2.17.1
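
To put a number on the bp->size check, a worked example (assuming the
default 9-bit VM block size and 4K pages; values not taken from the
patch itself):

	/* AMDGPU_VM_PTE_COUNT = 1 << 9 = 512 entries
	 * PD/PT size          = 512 * 8 bytes = 4096 bytes = one 4K page
	 * so non-root PDs/PTs satisfy bp->size <= PAGE_SIZE and become
	 * eligible for GTT placement on Raven.
	 */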

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 01/11] drm/amdgpu: remove extra root PD alignment

2018-08-22 Thread Christian König
Just another leftover from radeon.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 4 +---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 3 ---
 2 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 662aec5c81d4..73b8dcaf66e6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2566,8 +2566,6 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
amdgpu_vm *vm,
 {
struct amdgpu_bo_param bp;
struct amdgpu_bo *root;
-   const unsigned align = min(AMDGPU_VM_PTB_ALIGN_SIZE,
-   AMDGPU_VM_PTE_COUNT(adev) * 8);
unsigned long size;
uint64_t flags;
int r, i;
@@ -2615,7 +2613,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
amdgpu_vm *vm,
size = amdgpu_vm_bo_size(adev, adev->vm_manager.root_level);
	memset(&bp, 0, sizeof(bp));
bp.size = size;
-   bp.byte_align = align;
+   bp.byte_align = AMDGPU_GPU_PAGE_SIZE;
bp.domain = AMDGPU_GEM_DOMAIN_VRAM;
bp.flags = flags;
bp.type = ttm_bo_type_kernel;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 1162c2bf3138..1c9049feaaea 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -48,9 +48,6 @@ struct amdgpu_bo_list_entry;
 /* number of entries in page table */
 #define AMDGPU_VM_PTE_COUNT(adev) (1 << (adev)->vm_manager.block_size)
 
-/* PTBs (Page Table Blocks) need to be aligned to 32K */
-#define AMDGPU_VM_PTB_ALIGN_SIZE   32768
-
 #define AMDGPU_PTE_VALID   (1ULL << 0)
 #define AMDGPU_PTE_SYSTEM  (1ULL << 1)
 #define AMDGPU_PTE_SNOOPED (1ULL << 2)
-- 
2.17.1
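
For the record, the removed alignment was effectively a no-op with
common settings; a worked example assuming a 9-bit block size:

	/* align = min(AMDGPU_VM_PTB_ALIGN_SIZE, AMDGPU_VM_PTE_COUNT(adev) * 8)
	 *       = min(32768, 512 * 8)
	 *       = 4096, i.e. AMDGPU_GPU_PAGE_SIZE, the new byte_align value.
	 */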

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 07/11] drm/amdgpu: add GMC9 support for PDs/PTs in system memory

2018-08-22 Thread Christian König
Add the necessary handling.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index e412eb8e347c..3393a329fc9c 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -571,7 +571,7 @@ static uint64_t gmc_v9_0_get_vm_pte_flags(struct 
amdgpu_device *adev,
 static void gmc_v9_0_get_vm_pde(struct amdgpu_device *adev, int level,
uint64_t *addr, uint64_t *flags)
 {
-   if (!(*flags & AMDGPU_PDE_PTE))
+   if (!(*flags & AMDGPU_PDE_PTE) && !(*flags & AMDGPU_PTE_SYSTEM))
*addr = adev->vm_manager.vram_base_offset + *addr -
adev->gmc.vram_start;
	BUG_ON(*addr & 0xFFFF00000000003FULL);
-- 
2.17.1
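
In other words, a sketch of the resulting GMC9 behaviour (assuming
system pages carry AMDGPU_PTE_SYSTEM from the PDE flags helper):

	/* gmc_v9_0_get_vm_pde():
	 *  - VRAM PD/PT: *addr is translated into the GPU address space,
	 *    *addr = vram_base_offset + *addr - vram_start;
	 *  - system PD/PT (AMDGPU_PTE_SYSTEM set): *addr is already a DMA
	 *    address (ttm->dma_address[0]) and is left untouched.
	 */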

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 08/11] drm/amdgpu: add amdgpu_gmc_pd_addr helper

2018-08-22 Thread Christian König
Add a helper to get the root PD address and remove the workarounds from
the GMC9 code for that.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/Makefile   |  3 +-
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |  5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c|  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c   | 47 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h   |  2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c  |  7 +--
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c |  4 --
 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c   |  7 +--
 9 files changed, 56 insertions(+), 23 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile 
b/drivers/gpu/drm/amd/amdgpu/Makefile
index 860cb8731c7c..d2bafabe585d 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -51,7 +51,8 @@ amdgpu-y += amdgpu_device.o amdgpu_kms.o \
amdgpu_prime.o amdgpu_vm.o amdgpu_ib.o amdgpu_pll.o \
amdgpu_ucode.o amdgpu_bo_list.o amdgpu_ctx.o amdgpu_sync.o \
amdgpu_gtt_mgr.o amdgpu_vram_mgr.o amdgpu_virt.o amdgpu_atomfirmware.o \
-   amdgpu_vf_error.o amdgpu_sched.o amdgpu_debugfs.o amdgpu_ids.o
+   amdgpu_vf_error.o amdgpu_sched.o amdgpu_debugfs.o amdgpu_ids.o \
+   amdgpu_gmc.o
 
 # add asic specific block
 amdgpu-$(CONFIG_DRM_AMDGPU_CIK)+= cik.o cik_ih.o kv_smc.o kv_dpm.o \
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 7eadc58231f2..2e2393fe09b2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -364,7 +364,6 @@ static int vm_validate_pt_pd_bos(struct amdgpu_vm *vm)
struct amdgpu_bo *pd = vm->root.base.bo;
struct amdgpu_device *adev = amdgpu_ttm_adev(pd->tbo.bdev);
struct amdgpu_vm_parser param;
-   uint64_t addr, flags = AMDGPU_PTE_VALID;
int ret;
 
param.domain = AMDGPU_GEM_DOMAIN_VRAM;
@@ -383,9 +382,7 @@ static int vm_validate_pt_pd_bos(struct amdgpu_vm *vm)
return ret;
}
 
-   addr = amdgpu_bo_gpu_offset(vm->root.base.bo);
-	amdgpu_gmc_get_vm_pde(adev, -1, &addr, &flags);
-   vm->pd_phys_addr = addr;
+   vm->pd_phys_addr = amdgpu_gmc_pd_addr(vm->root.base.bo);
 
if (vm->use_cpu_for_update) {
ret = amdgpu_bo_kmap(pd, NULL);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 17bf63f93c93..d268035cf2f3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -946,7 +946,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
if (r)
return r;
 
-   p->job->vm_pd_addr = amdgpu_bo_gpu_offset(vm->root.base.bo);
+   p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.base.bo);
 
if (amdgpu_vm_debug) {
/* Invalidate all BOs to test for userspace bugs */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
new file mode 100644
index ..36058feac64f
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -0,0 +1,47 @@
+/*
+ * Copyright 2018 Advanced Micro Devices, Inc.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ */
+
+#include "amdgpu.h"
+
+/**
+ * amdgpu_gmc_pd_addr - return the address of the root directory
+ *
+ */
+uint64_t amdgpu_gmc_pd_addr(struct amdgpu_bo *bo)
+{
+   struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+   uint64_t pd_addr;
+
+   pd_addr = amdgpu_bo_gpu_offset(bo);
+   /* TODO: move that into ASIC specific code */
+   if (adev->asic_type >= CHIP_VEGA10) {
+   uint64_t 
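
A minimal caller-side sketch of the new helper, mirroring the CS and
KFD hunks above (the GTT case is only enabled by later patches in this
series):

	p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.base.bo);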

Re: RFC: Migration to Gitlab

2018-08-22 Thread Emil Velikov
Hi Dan,

On 22 August 2018 at 12:44, Daniel Vetter  wrote:
> Hi all,
>
> I think it's time to brainstorm a bit about the gitlab migration. Basic 
> reasons:
>
> - fd.o admins want to deprecate shell accounts and hand-rolled
> infrastructure, because it's a pain to keep secure
>
> - gitlab will allow us to add committers on our own, greatly
> simplifying that process (and offloading that task from fd.o admins).
>
Random thought - I really wish the admins had spoken up earlier and louder
about these issues, from infra to manpower and ad-hoc tool maintenance.

> There's also some more benefits we might want to reap, like better CI
> integration for basic build testing - no more "oops didn't build
> drm-misc defconfigs" or "sry, forgot make check in maintainer-tools".
> But that's all fully optional.
>
> For the full in-depth writeup of everything, see
>
> https://www.fooishbar.org/blog/gitlab-fdo-introduction/
>
> I think now is also a good time, with mesa, xorg, wayland/weston and
> others moved, to start thinking about how we'll move drm. There's a
> few things to figure out though:
>
> - We probably want to split out maintainer-tools. That would address
> the concern that there's 50+ committers to an auto-updating shell
> script ...
>
> - We need to figure out how to handle the ACL trickery around drm-tip in 
> gitlab.
>
> - Probably good to stage the migration, with maintainer-tools, igt
> leading. That will also make fd.o admins happy, who want to rework
> their cloud infrastructure a bit before migrating the big kernel repos
> over.
>
> - Figuring out the actual migration - we've been adding a pile of
> committers since fd.o LDAP was converted to gitlab once back in
> spring. We need to at least figure out how to move the new
> accounts/committers.
>
As an observer, allow me to put some ideas. You've mostly covered them
all, my emphasis is to seriously stick with _one_ thing at a time.
Attempting to do multiple things in parallel will end up with
sub-optimal results.

 - (at random point) cleanup the committers list - people who have not
contributed in the last year?
 - setup drm group, copy/migrate accounts - one could even reuse the
existing credentials
 - move git repos to gitlab, the push URL change, cgit mirror
preserves the normal fetch ones as well as PW hooks
 - work out how new accounts are handled - still in bugzilla, otherwise

At this stage the only workflow changes are a) the one-off account setup
and b) the pushURL update.
As a follow-up one can setup anything fancy.
 - migrate PW/other hooks
 - migrate bugs - if applicable
 - add new hooks - DRM docs, other
 - etc


> - Similar, maintainer-tools needs to move. We probably want to move
> all the dim maintained kernel repos in one go, to avoid headaches with
> double-accounts needed for committers.
>
One should be able to create a separate repo for these. And then either:
 - one by one add the required features into the gitlab MR machinery,
 - or, wire the execution in pre/post merge stage.

IIRC there are some upstream requests about the former.

> - CI, linux-next and everyone else should be fine, since the
> cgit/non-ssh paths will keep working (they'll be read-only mirrors).
> Need to double-check that with everyone.
>
> - Some organization structure would be good.
>
> https://cgit.freedesktop.org/drm
>
> libdrm won't be part of the gitlab drm group because that's already
> moved under mesa (and you can't symlink/mulit-home anymore on gitlab):
>
> https://gitlab.freedesktop.org/mesa/drm
>
> But there's also drm_hwcomposer, which we might want to migrate into
> drm too - gitlab requires a containing group, and
> drm_hwcomposer/drm_hwcomposer is a bit silly.
>
It did strike me as odd when drm_hwcomposer/drm_hwcomposer was
introduced. Fortunately moving repos in gitlab is reasonably pain-free.


HTH
Emil
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 1/5] drm/amdgpu: cleanup GPU recovery check a bit

2018-08-22 Thread Andrey Grodzovsky

Series is Acked-by: Andrey Grodzovsky 

Andrey


On 08/22/2018 06:05 AM, Christian König wrote:

Check if we should call the function instead of providing the forced
flag.

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu.h|  3 ++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 38 --
  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c|  4 ++--
  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c|  3 ++-
  drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c  |  4 ++--
  drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c  |  3 ++-
  7 files changed, 36 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 19ef7711d944..340e40d03d54 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1158,8 +1158,9 @@ int emu_soc_asic_init(struct amdgpu_device *adev);
  #define amdgpu_asic_need_full_reset(adev) 
(adev)->asic_funcs->need_full_reset((adev))
  
  /* Common functions */

+bool amdgpu_device_should_recover_gpu(struct amdgpu_device *adev);
  int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
- struct amdgpu_job* job, bool force);
+ struct amdgpu_job* job);
  void amdgpu_device_pci_config_reset(struct amdgpu_device *adev);
  bool amdgpu_device_need_post(struct amdgpu_device *adev);
  
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

index c23339d8ae2d..9f5e4be76d5e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3244,32 +3244,44 @@ static int amdgpu_device_reset_sriov(struct 
amdgpu_device *adev,
return r;
  }
  
+/**

+ * amdgpu_device_should_recover_gpu - check if we should try GPU recovery
+ *
+ * @adev: amdgpu device pointer
+ *
+ * Check amdgpu_gpu_recovery and SRIOV status to see if we should try to 
recover
+ * a hung GPU.
+ */
+bool amdgpu_device_should_recover_gpu(struct amdgpu_device *adev)
+{
+   if (!amdgpu_device_ip_check_soft_reset(adev)) {
+   DRM_INFO("Timeout, but no hardware hang detected.\n");
+   return false;
+   }
+
+   if (amdgpu_gpu_recovery == 0 || (amdgpu_gpu_recovery == -1  &&
+!amdgpu_sriov_vf(adev))) {
+   DRM_INFO("GPU recovery disabled.\n");
+   return false;
+   }
+
+   return true;
+}
+
  /**
   * amdgpu_device_gpu_recover - reset the asic and recover scheduler
   *
   * @adev: amdgpu device pointer
   * @job: which job trigger hang
- * @force: forces reset regardless of amdgpu_gpu_recovery
   *
   * Attempt to reset the GPU if it has hung (all asics).
   * Returns 0 for success or an error on failure.
   */
  int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
- struct amdgpu_job *job, bool force)
+ struct amdgpu_job *job)
  {
int i, r, resched;
  
-	if (!force && !amdgpu_device_ip_check_soft_reset(adev)) {

-   DRM_INFO("No hardware hang detected. Did some blocks stall?\n");
-   return 0;
-   }
-
-	if (!force && (amdgpu_gpu_recovery == 0 ||
-			(amdgpu_gpu_recovery == -1 && !amdgpu_sriov_vf(adev)))) {
-   DRM_INFO("GPU recovery disabled.\n");
-   return 0;
-   }
-
dev_info(adev->dev, "GPU reset begin!\n");
  
	mutex_lock(&adev->lock_reset);

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index e74d620d9699..68cccebb8463 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -702,7 +702,7 @@ static int amdgpu_debugfs_gpu_recover(struct seq_file *m, 
void *data)
struct amdgpu_device *adev = dev->dev_private;
  
  	seq_printf(m, "gpu recover\n");

-   amdgpu_device_gpu_recover(adev, NULL, true);
+   amdgpu_device_gpu_recover(adev, NULL);
  
  	return 0;

  }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
index 1abf5b5bac9e..b927e8798534 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
@@ -105,8 +105,8 @@ static void amdgpu_irq_reset_work_func(struct work_struct 
*work)
struct amdgpu_device *adev = container_of(work, struct amdgpu_device,
  reset_work);
  
-	if (!amdgpu_sriov_vf(adev))

-   amdgpu_device_gpu_recover(adev, NULL, false);
+   if (!amdgpu_sriov_vf(adev) && amdgpu_device_should_recover_gpu(adev))
+   amdgpu_device_gpu_recover(adev, NULL);
  }
  
  /**

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 391e2f7c03aa..265ff90f4e01 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ 
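
The resulting caller pattern, matching the amdgpu_irq.c hunk above (a
sketch, error handling elided):

	if (amdgpu_device_should_recover_gpu(adev))
		amdgpu_device_gpu_recover(adev, job);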

Re: [igt-dev] RFC: Migration to Gitlab

2018-08-22 Thread Daniel Vetter
On Wed, Aug 22, 2018 at 3:13 PM, Jani Nikula
 wrote:
> On Wed, 22 Aug 2018, Daniel Vetter  wrote:
>> Hi all,
>>
>> I think it's time to brainstorm a bit about the gitlab migration. Basic 
>> reasons:
>>
>> - fd.o admins want to deprecate shell accounts and hand-rolled
>> infrastructure, because it's a pain to keep secure
>>
>> - gitlab will allow us to add committers on our own, greatly
>> simplifying that process (and offloading that task from fd.o admins).
>>
>> There's also some more benefits we might want to reap, like better CI
>> integration for basic build testing - no more "oops didn't build
>> drm-misc defconfigs" or "sry, forgot make check in maintainer-tools".
>> But that's all fully optional.
>>
>> For the full in-depth writeup of everything, see
>>
>> https://www.fooishbar.org/blog/gitlab-fdo-introduction/
>>
>> I think now is also a good time, with mesa, xorg, wayland/weston and
>> others moved, to start thinking about how we'll move drm. There's a
>> few things to figure out though:
>>
>> - We probably want to split out maintainer-tools. That would address
>> the concern that there's 50+ committers to an auto-updating shell
>> script ...
>>
>> - We need to figure out how to handle the ACL trickery around drm-tip in 
>> gitlab.
>>
>> - Probably good to stage the migration, with maintainer-tools, igt
>> leading. That will also make fd.o admins happy, who want to rework
>> their cloud infrastructure a bit before migrating the big kernel repos
>> over.
>>
>> - Figuring out the actual migration - we've been adding a pile of
>> committers since fd.o LDAP was converted to gitlab once back in
>> spring. We need to at least figure out how to move the new
>> accounts/committers.
>>
>> - Similar, maintainer-tools needs to move. We probably want to move
>> all the dim maintained kernel repos in one go, to avoid headaches with
>> double-accounts needed for committers.
>>
>> - CI, linux-next and everyone else should be fine, since the
>> cgit/non-ssh paths will keep working (they'll be read-only mirrors).
>> Need to double-check that with everyone.
>>
>> - Some organization structure would be good.
>>
>> https://cgit.freedesktop.org/drm
>>
>> libdrm won't be part of the gitlab drm group because that's already
>> moved under mesa (and you can't symlink/mulit-home anymore on gitlab):
>>
>> https://gitlab.freedesktop.org/mesa/drm
>>
>> But there's also drm_hwcomposer, which we might want to migrate into
>> drm too - gitlab requires a containing group, and
>> drm_hwcomposer/drm_hwcomposer is a bit silly.
>>
>> Note: Access rights can be done at any level in the hierarchy, the
>> organization is orthogonal to commit rights.
>>
>> - Anything else I've forgotten.
>>
>> A lot of this still needs to be figured out first. As a first step I'm
>> looking for volunteers who want to join the fun, besides comments and
>> thoughts on the overall topic of course.
>
> Just a couple of concerns from drm/i915 perspective for starters:
>
> - Patchwork integration. I think we'll want to keep patchwork for at
>   least intel-gfx etc. for the time being. IIUC the one thing we need is
>   some server side hook to update patchwork on git push.
>
> - Sticking to fdo bugzilla and disabling gitlab issues for at least
>   drm-intel for the time being. Doing that migration in the same go is a
>   bit much I think. Reassignment across bugzilla and gitlab will be an
>   issue.

Good points, forgot about both. Patchwork reading the mailing list
should keep working as-is, but the update hook needs looking into.

Disabling gitlab issues is a no-brainer, same with merge requests.
Mesa is already doing that. For bugs I think it's best to entirely
leave them out for now, and maybe reconsider when/if mesa has moved.
Before that I don't think gitlab issues make any sense at all.

For merge requests I think best approach is to enable them very
selectively at first for testing out, and then making a per-subproject
decision whether they make sense. E.g. I think for maintainer-tools
integrating make check and the doc building into gitlab CI would be
sweet, and worth looking into gitlab merge requests just to automate
that. Again best left out of scope for the initial migration.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [igt-dev] RFC: Migration to Gitlab

2018-08-22 Thread Adam Jackson
On Wed, 2018-08-22 at 16:13 +0300, Jani Nikula wrote:

> - Sticking to fdo bugzilla and disabling gitlab issues for at least
>   drm-intel for the time being. Doing that migration in the same go is a
>   bit much I think. Reassignment across bugzilla and gitlab will be an
>   issue.

Can you elaborate a bit on the issues here? The actual move-the-bugs
process has been pretty painless for the parts of xorg we've done so
far.

- ajax
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: Fix page fault and kasan warning on pci device remove.

2018-08-22 Thread Andrey Grodzovsky



On 08/22/2018 01:28 AM, Paul Menzel wrote:

Dear Andrey,


Am 21.08.2018 um 23:23 schrieb Andrey Grodzovsky:

Problem:
When executing echo 1 > /sys/class/drm/card0/device/remove kasan warning
as below, and a page fault happens because adev->gart.pages was already
freed by the time amdgpu_gart_unbind is called.

BUG: KASAN: user-memory-access in amdgpu_gart_unbind+0x98/0x180 [amdgpu]
Write of size 8 at addr 3648 by task bash/1828
CPU: 2 PID: 1828 Comm: bash Tainted: G    W  O 4.18.0-rc1-dev+ #29
Hardware name: Gigabyte Technology Co., Ltd. 
AX370-Gaming/AX370-Gaming-CF, BIOS F3 06/19/2017

Call Trace:
dump_stack+0x71/0xab
kasan_report+0x109/0x390
amdgpu_gart_unbind+0x98/0x180 [amdgpu]
ttm_tt_unbind+0x43/0x60 [ttm]
ttm_bo_move_ttm+0x83/0x1c0 [ttm]
ttm_bo_handle_move_mem+0xb97/0xd00 [ttm]
ttm_bo_evict+0x273/0x530 [ttm]
ttm_mem_evict_first+0x29c/0x360 [ttm]
ttm_bo_force_list_clean+0xfc/0x210 [ttm]
ttm_bo_clean_mm+0xe7/0x160 [ttm]
amdgpu_ttm_fini+0xda/0x1d0 [amdgpu]
amdgpu_bo_fini+0xf/0x60 [amdgpu]
gmc_v8_0_sw_fini+0x36/0x70 [amdgpu]
amdgpu_device_fini+0x2d0/0x7d0 [amdgpu]
amdgpu_driver_unload_kms+0x6a/0xd0 [amdgpu]
drm_dev_unregister+0x79/0x180 [drm]
amdgpu_pci_remove+0x2a/0x60 [amdgpu]
pci_device_remove+0x5b/0x100
device_release_driver_internal+0x236/0x360
pci_stop_bus_device+0xbf/0xf0
pci_stop_and_remove_bus_device_locked+0x16/0x30
remove_store+0xda/0xf0
kernfs_fop_write+0x186/0x220
  __vfs_write+0xcc/0x330
vfs_write+0xe6/0x250
ksys_write+0xb1/0x140
do_syscall_64+0x77/0x1e0
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7f66ebbb32c0

Fix:
Split gmc_v{6,7,8,9}_0_gart_fini to pospone amdgpu_gart_fini to after


pos*t*pone


memory managers are shut down since gart unbind happens
as part of this procudure.


proc*e*dure

Also, I wouldn’t put a dot at the end of the commit message summary.


Signed-off-by: Andrey Grodzovsky 
---
  1 |  0
  drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c |  9 ++---
  drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c | 16 ++--
  drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c | 16 ++--
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 16 ++--
  5 files changed, 8 insertions(+), 49 deletions(-)
  create mode 100644 1

diff --git a/1 b/1
new file mode 100644
index 000..e69de29


What happened here? Is the file `1` needed?

[…]


Kind regards,

Paul


Will fix, thanks.

Andrey

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: Fix page fault and kasan warning on pci device remove.

2018-08-22 Thread Andrey Grodzovsky



On 08/22/2018 02:57 AM, Christian König wrote:

Am 21.08.2018 um 23:23 schrieb Andrey Grodzovsky:

Problem:
When executing echo 1 > /sys/class/drm/card0/device/remove kasan warning
as below, and a page fault happens because adev->gart.pages was already
freed by the time amdgpu_gart_unbind is called.

BUG: KASAN: user-memory-access in amdgpu_gart_unbind+0x98/0x180 [amdgpu]
Write of size 8 at addr 3648 by task bash/1828
CPU: 2 PID: 1828 Comm: bash Tainted: G    W  O 4.18.0-rc1-dev+ #29
Hardware name: Gigabyte Technology Co., Ltd. 
AX370-Gaming/AX370-Gaming-CF, BIOS F3 06/19/2017

Call Trace:
dump_stack+0x71/0xab
kasan_report+0x109/0x390
amdgpu_gart_unbind+0x98/0x180 [amdgpu]
ttm_tt_unbind+0x43/0x60 [ttm]
ttm_bo_move_ttm+0x83/0x1c0 [ttm]
ttm_bo_handle_move_mem+0xb97/0xd00 [ttm]
ttm_bo_evict+0x273/0x530 [ttm]
ttm_mem_evict_first+0x29c/0x360 [ttm]
ttm_bo_force_list_clean+0xfc/0x210 [ttm]
ttm_bo_clean_mm+0xe7/0x160 [ttm]
amdgpu_ttm_fini+0xda/0x1d0 [amdgpu]
amdgpu_bo_fini+0xf/0x60 [amdgpu]
gmc_v8_0_sw_fini+0x36/0x70 [amdgpu]
amdgpu_device_fini+0x2d0/0x7d0 [amdgpu]
amdgpu_driver_unload_kms+0x6a/0xd0 [amdgpu]
drm_dev_unregister+0x79/0x180 [drm]
amdgpu_pci_remove+0x2a/0x60 [amdgpu]
pci_device_remove+0x5b/0x100
device_release_driver_internal+0x236/0x360
pci_stop_bus_device+0xbf/0xf0
pci_stop_and_remove_bus_device_locked+0x16/0x30
remove_store+0xda/0xf0
kernfs_fop_write+0x186/0x220
  __vfs_write+0xcc/0x330
vfs_write+0xe6/0x250
ksys_write+0xb1/0x140
do_syscall_64+0x77/0x1e0
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7f66ebbb32c0

Fix:
Split gmc_v{6,7,8,9}_0_gart_fini to pospone amdgpu_gart_fini to after
memory managers are shut down since gart unbind happens
as part of this procudure.

Signed-off-by: Andrey Grodzovsky 
---
  1 |  0
  drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c |  9 ++---
  drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c | 16 ++--
  drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c | 16 ++--
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 16 ++--
  5 files changed, 8 insertions(+), 49 deletions(-)
  create mode 100644 1

diff --git a/1 b/1
new file mode 100644
index 000..e69de29


Good cleanup, but what the heck is that?

Christian.


Yea, git add *
I will fix and push.

Andrey


diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c

index c14cf1c..0a0a4dc 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
@@ -633,12 +633,6 @@ static void gmc_v6_0_gart_disable(struct 
amdgpu_device *adev)

  amdgpu_gart_table_vram_unpin(adev);
  }
  -static void gmc_v6_0_gart_fini(struct amdgpu_device *adev)
-{
-    amdgpu_gart_table_vram_free(adev);
-    amdgpu_gart_fini(adev);
-}
-
  static void gmc_v6_0_vm_decode_fault(struct amdgpu_device *adev,
   u32 status, u32 addr, u32 mc_client)
  {
@@ -936,8 +930,9 @@ static int gmc_v6_0_sw_fini(void *handle)
    amdgpu_gem_force_release(adev);
  amdgpu_vm_manager_fini(adev);
-    gmc_v6_0_gart_fini(adev);
+    amdgpu_gart_table_vram_free(adev);
  amdgpu_bo_fini(adev);
+    amdgpu_gart_fini(adev);
  release_firmware(adev->gmc.fw);
  adev->gmc.fw = NULL;
  diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c

index 0c3a161..afbadfc 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
@@ -750,19 +750,6 @@ static void gmc_v7_0_gart_disable(struct 
amdgpu_device *adev)

  }
    /**
- * gmc_v7_0_gart_fini - vm fini callback
- *
- * @adev: amdgpu_device pointer
- *
- * Tears down the driver GART/VM setup (CIK).
- */
-static void gmc_v7_0_gart_fini(struct amdgpu_device *adev)
-{
-    amdgpu_gart_table_vram_free(adev);
-    amdgpu_gart_fini(adev);
-}
-
-/**
   * gmc_v7_0_vm_decode_fault - print human readable fault info
   *
   * @adev: amdgpu_device pointer
@@ -1091,8 +1078,9 @@ static int gmc_v7_0_sw_fini(void *handle)
    amdgpu_gem_force_release(adev);
  amdgpu_vm_manager_fini(adev);
-    gmc_v7_0_gart_fini(adev);
+    amdgpu_gart_table_vram_free(adev);
  amdgpu_bo_fini(adev);
+    amdgpu_gart_fini(adev);
  release_firmware(adev->gmc.fw);
  adev->gmc.fw = NULL;
  diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c

index 274c932..d871dae 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
@@ -969,19 +969,6 @@ static void gmc_v8_0_gart_disable(struct 
amdgpu_device *adev)

  }
    /**
- * gmc_v8_0_gart_fini - vm fini callback
- *
- * @adev: amdgpu_device pointer
- *
- * Tears down the driver GART/VM setup (CIK).
- */
-static void gmc_v8_0_gart_fini(struct amdgpu_device *adev)
-{
-    amdgpu_gart_table_vram_free(adev);
-    amdgpu_gart_fini(adev);
-}
-
-/**
   * gmc_v8_0_vm_decode_fault - print human readable fault info
   *
   * @adev: amdgpu_device pointer
@@ -1192,8 +1179,9 @@ static int 
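The gmc_v8_0 hunk above is cut off in the archive; it mirrors the v6/v7
hunks. For reference, the corrected teardown order that all four per-ASIC
sw_fini variants converge on, as a minimal C sketch (gmc_vX_0_sw_fini
stands in for the per-ASIC functions, it is not a literal copy of any one
of them):

	static int gmc_vX_0_sw_fini(void *handle)
	{
		struct amdgpu_device *adev = (struct amdgpu_device *)handle;

		amdgpu_gem_force_release(adev);
		amdgpu_vm_manager_fini(adev);
		/* Free the GART page table BO while the VRAM manager
		 * still exists. */
		amdgpu_gart_table_vram_free(adev);
		/* Shut down the TTM memory managers; evicting the remaining
		 * BOs calls amdgpu_gart_unbind(), so the GART bookkeeping
		 * must still be alive here. */
		amdgpu_bo_fini(adev);
		/* Only now tear down the GART itself. */
		amdgpu_gart_fini(adev);
		release_firmware(adev->gmc.fw);
		adev->gmc.fw = NULL;

		return 0;
	}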

Re: [PATCH v2] drm/amd/display: Fix bug use wrong pp interface

2018-08-22 Thread Deucher, Alexander
Acked-by: Alex Deucher 


From: amd-gfx  on behalf of Rex Zhu 

Sent: Wednesday, August 22, 2018 2:41:19 AM
To: amd-gfx@lists.freedesktop.org; Francis, David; Wentland, Harry
Cc: Zhu, Rex
Subject: [PATCH v2] drm/amd/display: Fix bug use wrong pp interface

Used the wrong pp interface; the original interface is
exposed by dpm on SI and partial CI.

Pointed out by Francis David 

v2: DAL only needs to set min_dcefclk and min_fclk on the SMU,
so use the display_clock_voltage_request interface
instead of updating the whole display configuration.

Acked-by: Alex Deucher 
Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c | 12 ++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
index e5c5b0a..7811d60 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
@@ -480,12 +480,20 @@ void pp_rv_set_display_requirement(struct pp_smu *pp,
 {
 const struct dc_context *ctx = pp->dm;
 struct amdgpu_device *adev = ctx->driver_context;
+   void *pp_handle = adev->powerplay.pp_handle;
 const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+   struct pp_display_clock_request clock = {0};

-   if (!pp_funcs || !pp_funcs->display_configuration_changed)
+   if (!req || !pp_funcs || !pp_funcs->display_clock_voltage_request)
 return;

-   amdgpu_dpm_display_configuration_changed(adev);
+   clock.clock_type = amd_pp_dcf_clock;
+   clock.clock_freq_in_khz = req->hard_min_dcefclk_khz;
+   pp_funcs->display_clock_voltage_request(pp_handle, &clock);
+
+   clock.clock_type = amd_pp_f_clock;
+   clock.clock_freq_in_khz = req->hard_min_fclk_khz;
+   pp_funcs->display_clock_voltage_request(pp_handle, &clock);
 }

 void pp_rv_set_wm_ranges(struct pp_smu *pp,
--
1.9.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [igt-dev] RFC: Migration to Gitlab

2018-08-22 Thread Jani Nikula
On Wed, 22 Aug 2018, Daniel Vetter  wrote:
> Hi all,
>
> I think it's time to brainstorm a bit about the gitlab migration. Basic 
> reasons:
>
> - fd.o admins want to deprecate shell accounts and hand-rolled
> infrastructure, because it's a pain to keep secure
>
> - gitlab will allow us to add committers on our own, greatly
> simplifying that process (and offloading that task from fd.o admins).
>
> There's also some more benefits we might want to reap, like better CI
> integration for basic build testing - no more "oops didn't build
> drm-misc defconfigs" or "sry, forgot make check in maintainer-tools".
> But that's all fully optional.
>
> For the full in-depth writeup of everything, see
>
> https://www.fooishbar.org/blog/gitlab-fdo-introduction/
>
> I think now is also a good time, with mesa, xorg, wayland/weston and
> others moved, to start thinking about how we'll move drm. There's a
> few things to figure out though:
>
> - We probably want to split out maintainer-tools. That would address
> the concern that there's 50+ committers to an auto-updating shell
> script ...
>
> - We need to figure out how to handle the ACL trickery around drm-tip in 
> gitlab.
>
> - Probably good to stage the migration, with maintainer-tools, igt
> leading. That will also make fd.o admins happy, who want to rework
> their cloud infrastructure a bit before migrating the big kernel repos
> over.
>
> - Figuring out the actual migration - we've been adding a pile of
> committers since fd.o LDAP was converted to gitlab once back in
> spring. We need to at least figure out how to move the new
> accounts/committers.
>
> - Similar, maintainer-tools needs to move. We probably want to move
> all the dim maintained kernel repos in one go, to avoid headaches with
> double-accounts needed for committers.
>
> - CI, linux-next and everyone else should be fine, since the
> cgit/non-ssh paths will keep working (they'll be read-only mirrors).
> Need to double-check that with everyone.
>
> - Some organization structure would be good.
>
> https://cgit.freedesktop.org/drm
>
> libdrm won't be part of the gitlab drm group because that's already
> moved under mesa (and you can't symlink/multi-home anymore on gitlab):
>
> https://gitlab.freedesktop.org/mesa/drm
>
> But there's also drm_hwcomposer, which we might want to migrate into
> drm too - gitlab requires a containing group, and
> drm_hwcomposer/drm_hwcomposer is a bit silly.
>
> Note: Access rights can be done at any level in the hierarchy, the
> organization is orthogonal to commit rights.
>
> - Anything else I've forgotten.
>
> A lot of this still needs to be figured out first. As a first step I'm
> looking for volunteers who want to join the fun, besides comments and
> thoughts on the overall topic of course.

Just a couple of concerns from drm/i915 perspective for starters:

- Patchwork integration. I think we'll want to keep patchwork for at
  least intel-gfx etc. for the time being. IIUC the one thing we need is
  some server side hook to update patchwork on git push.

- Sticking to fdo bugzilla and disabling gitlab issues for at least
  drm-intel for the time being. Doing that migration in the same go is a
  bit much I think. Reassignment across bugzilla and gitlab will be an
  issue.

BR,
Jani.


-- 
Jani Nikula, Intel Open Source Graphics Center
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [Intel-gfx] RFC: Migration to Gitlab

2018-08-22 Thread Sean Paul
On Wed, Aug 22, 2018 at 01:44:56PM +0200, Daniel Vetter wrote:
> Hi all,
> 
> I think it's time to brainstorm a bit about the gitlab migration. Basic 
> reasons:
> 
> - fd.o admins want to deprecate shell accounts and hand-rolled
> infrastructure, because it's a pain to keep secure
> 
> - gitlab will allow us to add committers on our own, greatly
> simplifying that process (and offloading that task from fd.o admins).
> 
> There's also some more benefits we might want to reap, like better CI
> integration for basic build testing - no more "oops didn't build
> drm-misc defconfigs" or "sry, forgot make check in maintainer-tools".
> But that's all fully optional.
> 
> For the full in-depth writeup of everything, see
> 
> https://www.fooishbar.org/blog/gitlab-fdo-introduction/
> 
> I think now is also a good time, with mesa, xorg, wayland/weston and
> others moved, to start thinking about how we'll move drm. There's a
> few things to figure out though:
> 
> - We probably want to split out maintainer-tools. That would address
> the concern that there's 50+ committers to an auto-updating shell
> script ...
> 

/me laughs nervously

> - We need to figure out how to handle the ACL trickery around drm-tip in 
> gitlab.
> 
> - Probably good to stage the migration, with maintainer-tools, igt
> leading. That will also make fd.o admins happy, who want to rework
> their cloud infrastructure a bit before migrating the big kernel repos
> over.
> 
> - Figuring out the actual migration - we've been adding a pile of
> committers since fd.o LDAP was converted to gitlab once back in
> spring. We need to at least figure out how to move the new
> accounts/committers.
> 
> - Similar, maintainer-tools needs to move. We probably want to move
> all the dim maintained kernel repos in one go, to avoid headaches with
> double-accounts needed for committers.
> 
> - CI, linux-next and everyone else should be fine, since the
> cgit/non-ssh paths will keep working (they'll be read-only mirrors).
> Need to double-check that with everyone.

They can also pull the trees from git://gitlab.fd.o/blah as normal, just need to
give them new pointers once we're stable.

> 
> - Some organization structure would be good.
> 
> https://cgit.freedesktop.org/drm
> 
> libdrm won't be part of the gitlab drm group because that's already
> moved under mesa (and you can't symlink/multi-home anymore on gitlab):
> 
> https://gitlab.freedesktop.org/mesa/drm
> 
> But there's also drm_hwcomposer, which we might want to migrate into
> drm too - gitlab requires a containing group, and
> drm_hwcomposer/drm_hwcomposer is a bit silly.

This seems fine to me. Our expansion plans likely aren't big enough to warrant a
separate group.

> 
> Note: Access rights can be done at any level in the hierarchy, the
> organization is orthogonal to commit rights.
> 
> - Anything else I've forgotten.
> 
> A lot of this still needs to be figured out first. As a first step I'm
> looking for volunteers who want to join the fun, besides comments and
> thoughts on the overall topic of course.

I'm pretty keen on getting this done, so I'll volunteer some cycles if there's
something that needs doing.

Sean

> 
> Cheers, Daniel
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
> ___
> Intel-gfx mailing list
> intel-...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Sean Paul, Software Engineer, Google / Chromium OS
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 1/3] drm/amdgpu: Don't use kiq in gpu reset

2018-08-22 Thread Deng, Emily
>-Original Message-
>From: Christian König 
>Sent: Wednesday, August 22, 2018 8:24 PM
>To: Deng, Emily ; amd-gfx@lists.freedesktop.org
>Subject: Re: [PATCH 1/3] drm/amdgpu: Don't use kiq in gpu reset
>
>On 22.08.2018 at 06:39, Emily Deng wrote:
>> When in GPU reset, don't use the KIQ; it will generate more TDRs.
>>
>> Signed-off-by: Emily Deng 
>
>Patch #1 is Reviewed-by: Christian König .
>
>Patch #2 is actually not necessary since we should never flush the TLB from
>interrupt context.
Ok, if we have that constraint, then ignore the patch.
>
>Patch #3: I would actually rather keep that an error message because it still
>means that something went wrong.
Ok, then ignore the patch.
>Christian.
>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 15 ---
>>   1 file changed, 4 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> index eec991f..fcdbacb 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> @@ -331,15 +331,8 @@ signed long  amdgpu_kiq_reg_write_reg_wait(struct
>> amdgpu_device *adev,
>>
>>  r = amdgpu_fence_wait_polling(ring, seq, MAX_KIQ_REG_WAIT);
>>
>> -/* don't wait anymore for gpu reset case because this way may
>> - * block gpu_recover() routine forever, e.g. this virt_kiq_rreg
>> - * is triggered in TTM and ttm_bo_lock_delayed_workqueue() will
>> - * never return if we keep waiting in virt_kiq_rreg, which cause
>> - * gpu_recover() hang there.
>> - *
>> - * also don't wait anymore for IRQ context
>> - * */
>> -if (r < 1 && (adev->in_gpu_reset || in_interrupt()))
>> +/* don't wait anymore for IRQ context */
>> +if (r < 1 && in_interrupt())
>>  goto failed_kiq;
>>
>>  might_sleep();
>> @@ -387,8 +380,8 @@ static void gmc_v9_0_flush_gpu_tlb(struct
>amdgpu_device *adev,
>>  u32 tmp = gmc_v9_0_get_invalidate_req(vmid);
>>
>>  if (adev->gfx.kiq.ring.ready &&
>> -(amdgpu_sriov_runtime(adev) ||
>> - !amdgpu_sriov_vf(adev))) {
>> +(amdgpu_sriov_runtime(adev) || !amdgpu_sriov_vf(adev))
>&&
>> +!adev->in_gpu_reset) {
>>  r = amdgpu_kiq_reg_write_reg_wait(adev, hub-
>>vm_inv_eng0_req + eng,
>>  hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
>>  if (!r)
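For context: when the KIQ path is skipped, as it now also is during GPU
reset, gmc_v9_0_flush_gpu_tlb falls back to programming the invalidation
engine directly over MMIO. Roughly, and simplified from memory rather than
quoted from the driver, that fallback looks like:

	spin_lock(&adev->gmc.invalidate_lock);
	WREG32_NO_KIQ(hub->vm_inv_eng0_req + eng, tmp);
	for (j = 0; j < adev->usec_timeout; j++) {
		tmp = RREG32_NO_KIQ(hub->vm_inv_eng0_ack + eng);
		if (tmp & (1 << vmid))	/* ack bit for this VMID */
			break;
		udelay(1);
	}
	spin_unlock(&adev->gmc.invalidate_lock);

That path busy-waits instead of sleeping, which is why it stays usable from
reset context even when the KIQ itself is hung.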

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 1/3] drm/amdgpu: Don't use kiq in gpu reset

2018-08-22 Thread Christian König

On 22.08.2018 at 06:39, Emily Deng wrote:

When in GPU reset, don't use the KIQ; it will generate more TDRs.

Signed-off-by: Emily Deng 


Patch #1 is Reviewed-by: Christian König .

Patch #2 is actually not necessary since we should never flush the TLB from
interrupt context.


Patch #3: I would actually rather keep that an error message because it
still means that something went wrong.


Christian.


---
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 15 ---
  1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index eec991f..fcdbacb 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -331,15 +331,8 @@ signed long  amdgpu_kiq_reg_write_reg_wait(struct 
amdgpu_device *adev,
  
  	r = amdgpu_fence_wait_polling(ring, seq, MAX_KIQ_REG_WAIT);
  
-	/* don't wait anymore for gpu reset case because this way may

-* block gpu_recover() routine forever, e.g. this virt_kiq_rreg
-* is triggered in TTM and ttm_bo_lock_delayed_workqueue() will
-* never return if we keep waiting in virt_kiq_rreg, which cause
-* gpu_recover() hang there.
-*
-* also don't wait anymore for IRQ context
-* */
-   if (r < 1 && (adev->in_gpu_reset || in_interrupt()))
+   /* don't wait anymore for IRQ context */
+   if (r < 1 && in_interrupt())
goto failed_kiq;
  
  	might_sleep();

@@ -387,8 +380,8 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device 
*adev,
u32 tmp = gmc_v9_0_get_invalidate_req(vmid);
  
  		if (adev->gfx.kiq.ring.ready &&

-   (amdgpu_sriov_runtime(adev) ||
-!amdgpu_sriov_vf(adev))) {
+   (amdgpu_sriov_runtime(adev) || !amdgpu_sriov_vf(adev)) &&
+   !adev->in_gpu_reset) {
r = amdgpu_kiq_reg_write_reg_wait(adev, 
hub->vm_inv_eng0_req + eng,
hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
if (!r)


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 1/3] drm/amdgpu: Don't use kiq in gpu reset

2018-08-22 Thread Deng, Emily
Ping..

>-Original Message-
>From: amd-gfx  On Behalf Of Emily
>Deng
>Sent: Wednesday, August 22, 2018 12:39 PM
>To: amd-gfx@lists.freedesktop.org
>Cc: Deng, Emily 
>Subject: [PATCH 1/3] drm/amdgpu: Don't use kiq in gpu reset
>
>When in GPU reset, don't use the KIQ; it will generate more TDRs.
>
>Signed-off-by: Emily Deng 
>---
> drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 15 ---
> 1 file changed, 4 insertions(+), 11 deletions(-)
>
>diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>index eec991f..fcdbacb 100644
>--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>@@ -331,15 +331,8 @@ signed long  amdgpu_kiq_reg_write_reg_wait(struct
>amdgpu_device *adev,
>
>   r = amdgpu_fence_wait_polling(ring, seq, MAX_KIQ_REG_WAIT);
>
>-  /* don't wait anymore for gpu reset case because this way may
>-   * block gpu_recover() routine forever, e.g. this virt_kiq_rreg
>-   * is triggered in TTM and ttm_bo_lock_delayed_workqueue() will
>-   * never return if we keep waiting in virt_kiq_rreg, which cause
>-   * gpu_recover() hang there.
>-   *
>-   * also don't wait anymore for IRQ context
>-   * */
>-  if (r < 1 && (adev->in_gpu_reset || in_interrupt()))
>+  /* don't wait anymore for IRQ context */
>+  if (r < 1 && in_interrupt())
>   goto failed_kiq;
>
>   might_sleep();
>@@ -387,8 +380,8 @@ static void gmc_v9_0_flush_gpu_tlb(struct
>amdgpu_device *adev,
>   u32 tmp = gmc_v9_0_get_invalidate_req(vmid);
>
>   if (adev->gfx.kiq.ring.ready &&
>-  (amdgpu_sriov_runtime(adev) ||
>-   !amdgpu_sriov_vf(adev))) {
>+  (amdgpu_sriov_runtime(adev) || !amdgpu_sriov_vf(adev))
>&&
>+  !adev->in_gpu_reset) {
>   r = amdgpu_kiq_reg_write_reg_wait(adev, hub-
>>vm_inv_eng0_req + eng,
>   hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
>   if (!r)
>--
>2.7.4
>
>___
>amd-gfx mailing list
>amd-gfx@lists.freedesktop.org
>https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] amdgpu: fix multi-process hang issue

2018-08-22 Thread Christian König

On 22.08.2018 at 14:07, Emily Deng wrote:

SWDEV-146499: hang during multi vulkan process testing

cause:
the second frame's PREAMBLE_IB has clear-state
and LOAD actions; those actions ruin the pipeline
that is still processing the previous frame's
workload IB.

fix:
insert a pipeline sync if there is a context switch under
SRIOV (because only SRIOV will report the PREEMPTION flag
to the UMD)

Signed-off-by: Monk Liu 
Signed-off-by: Emily Deng 


Much better, patch is Reviewed-by: Christian König 




---
  drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index 5c22cfd..47817e0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -165,8 +165,10 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
num_ibs,
return r;
}
  
+	need_ctx_switch = ring->current_ctx != fence_ctx;

if (ring->funcs->emit_pipeline_sync && job &&
	((tmp = amdgpu_sync_get_fence(&job->sched_sync, NULL)) ||
+(amdgpu_sriov_vf(adev) && need_ctx_switch) ||
 amdgpu_vm_need_pipeline_sync(ring, job))) {
need_pipe_sync = true;
  
@@ -201,7 +203,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,

}
  
  	skip_preamble = ring->current_ctx == fence_ctx;

-   need_ctx_switch = ring->current_ctx != fence_ctx;
if (job && ring->funcs->emit_cntxcntl) {
if (need_ctx_switch)
status |= AMDGPU_HAVE_CTX_SWITCH;


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] amdgpu: fix multi-process hang issue

2018-08-22 Thread Emily Deng
SWDEV-146499: hang during multi vulkan process testing

cause:
the second frame's PREAMBLE_IB has clear-state
and LOAD actions; those actions ruin the pipeline
that is still processing the previous frame's
workload IB.

fix:
insert a pipeline sync if there is a context switch under
SRIOV (because only SRIOV will report the PREEMPTION flag
to the UMD)

Signed-off-by: Monk Liu 
Signed-off-by: Emily Deng 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index 5c22cfd..47817e0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -165,8 +165,10 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
num_ibs,
return r;
}
 
+   need_ctx_switch = ring->current_ctx != fence_ctx;
if (ring->funcs->emit_pipeline_sync && job &&
	((tmp = amdgpu_sync_get_fence(&job->sched_sync, NULL)) ||
+(amdgpu_sriov_vf(adev) && need_ctx_switch) ||
 amdgpu_vm_need_pipeline_sync(ring, job))) {
need_pipe_sync = true;
 
@@ -201,7 +203,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
num_ibs,
}
 
skip_preamble = ring->current_ctx == fence_ctx;
-   need_ctx_switch = ring->current_ctx != fence_ctx;
if (job && ring->funcs->emit_cntxcntl) {
if (need_ctx_switch)
status |= AMDGPU_HAVE_CTX_SWITCH;
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RFC: Migration to Gitlab

2018-08-22 Thread Daniel Vetter
Hi all,

I think it's time to brainstorm a bit about the gitlab migration. Basic reasons:

- fd.o admins want to deprecate shell accounts and hand-rolled
infrastructure, because it's a pain to keep secure

- gitlab will allow us to add committers on our own, greatly
simplifying that process (and offloading that task from fd.o admins).

There's also some more benefits we might want to reap, like better CI
integration for basic build testing - no more "oops didn't build
drm-misc defconfigs" or "sry, forgot make check in maintainer-tools".
But that's all fully optional.

For the full in-depth writeup of everything, see

https://www.fooishbar.org/blog/gitlab-fdo-introduction/

I think now is also a good time, with mesa, xorg, wayland/weston and
others moved, to start thinking about how we'll move drm. There's a
few things to figure out though:

- We probably want to split out maintainer-tools. That would address
the concern that there's 50+ committers to an auto-updating shell
script ...

- We need to figure out how to handle the ACL trickery around drm-tip in gitlab.

- Probably good to stage the migration, with maintainer-tools, igt
leading. That will also make fd.o admins happy, who want to rework
their cloud infrastructure a bit before migrating the big kernel repos
over.

- Figuring out the actual migration - we've been adding a pile of
committers since fd.o LDAP was converted to gitlab once back in
spring. We need to at least figure out how to move the new
accounts/committers.

- Similar, maintainer-tools needs to move. We probably want to move
all the dim maintained kernel repos in one go, to avoid headaches with
double-accounts needed for committers.

- CI, linux-next and everyone else should be fine, since the
cgit/non-ssh paths will keep working (they'll be read-only mirrors).
Need to double-check that with everyone.

- Some organization structure would be good.

https://cgit.freedesktop.org/drm

libdrm won't be part of the gitlab drm group because that's already
moved under mesa (and you can't symlink/multi-home anymore on gitlab):

https://gitlab.freedesktop.org/mesa/drm

But there's also drm_hwcomposer, which we might want to migrate into
drm too - gitlab requires a containing group, and
drm_hwcomposer/drm_hwcomposer is a bit silly.

Note: Access rights can be done at any level in the hierarchy, the
organization is orthogonal to commit rights.

- Anything else I've forgotten.

A lot of this still needs to be figured out first. As a first step I'm
looking for volunteers who want to join the fun, besides comments and
thoughts on the overall topic of course.

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 2/2] drm/amdgpu: Change kiq ring initialize sequence on gfx9

2018-08-22 Thread Rex Zhu
1. initialize KIQ before initializing the GFX ring.
2. set the KIQ ring ready immediately when the KIQ initializes
   successfully.
3. split gfx_v9_0_kiq_resume into two functions:
   gfx_v9_0_kiq_resume for KIQ initialization,
   gfx_v9_0_kcq_resume for KCQ initialization.

Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 38 ++-
 1 file changed, 24 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 5990e5dc..ed1868a 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -2684,7 +2684,6 @@ static int gfx_v9_0_kiq_kcq_enable(struct amdgpu_device 
*adev)
queue_mask |= (1ull << i);
}
 
-   kiq_ring->ready = true;
r = amdgpu_ring_alloc(kiq_ring, (7 * adev->gfx.num_compute_rings) + 8);
if (r) {
DRM_ERROR("Failed to lock KIQ (%d).\n", r);
@@ -3091,26 +3090,33 @@ static int gfx_v9_0_kcq_init_queue(struct amdgpu_ring 
*ring)
 
 static int gfx_v9_0_kiq_resume(struct amdgpu_device *adev)
 {
-   struct amdgpu_ring *ring = NULL;
-   int r = 0, i;
-
-   gfx_v9_0_cp_compute_enable(adev, true);
+   struct amdgpu_ring *ring;
+   int r;
 
	ring = &adev->gfx.kiq.ring;
 
r = amdgpu_bo_reserve(ring->mqd_obj, false);
if (unlikely(r != 0))
-   goto done;
+   return r;
 
	r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
-   if (!r) {
-   r = gfx_v9_0_kiq_init_queue(ring);
-   amdgpu_bo_kunmap(ring->mqd_obj);
-   ring->mqd_ptr = NULL;
-   }
+   if (unlikely(r != 0))
+   return r;
+
+   gfx_v9_0_kiq_init_queue(ring);
+   amdgpu_bo_kunmap(ring->mqd_obj);
+   ring->mqd_ptr = NULL;
amdgpu_bo_unreserve(ring->mqd_obj);
-   if (r)
-   goto done;
+   ring->ready = true;
+   return 0;
+}
+
+static int gfx_v9_0_kcq_resume(struct amdgpu_device *adev)
+{
+   struct amdgpu_ring *ring = NULL;
+   int r = 0, i;
+
+   gfx_v9_0_cp_compute_enable(adev, true);
 
for (i = 0; i < adev->gfx.num_compute_rings; i++) {
	ring = &adev->gfx.compute_ring[i];
@@ -3153,11 +3159,15 @@ static int gfx_v9_0_cp_resume(struct amdgpu_device 
*adev)
return r;
}
 
+   r = gfx_v9_0_kiq_resume(adev);
+   if (r)
+   return r;
+
r = gfx_v9_0_cp_gfx_resume(adev);
if (r)
return r;
 
-   r = gfx_v9_0_kiq_resume(adev);
+   r = gfx_v9_0_kcq_resume(adev);
if (r)
return r;
 
-- 
1.9.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 1/2] drm/amdgpu: Change kiq initialize/reset sequence on gfx8

2018-08-22 Thread Rex Zhu
1. initialize KIQ before initializing the GFX ring.
2. set the KIQ ring ready immediately when the KIQ initializes
   successfully.
3. split gfx_v8_0_kiq_resume into two functions:
   gfx_v8_0_kiq_resume for KIQ initialization,
   gfx_v8_0_kcq_resume for KCQ initialization.

Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 49 +--
 1 file changed, 30 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
index 6a2296a..903725ba 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
@@ -4622,7 +4622,6 @@ static int gfx_v8_0_kiq_kcq_enable(struct amdgpu_device 
*adev)
queue_mask |= (1ull << i);
}
 
-   kiq_ring->ready = true;
r = amdgpu_ring_alloc(kiq_ring, (8 * adev->gfx.num_compute_rings) + 8);
if (r) {
DRM_ERROR("Failed to lock KIQ (%d).\n", r);
@@ -4949,26 +4948,33 @@ static void gfx_v8_0_set_mec_doorbell_range(struct 
amdgpu_device *adev)
 
 static int gfx_v8_0_kiq_resume(struct amdgpu_device *adev)
 {
-   struct amdgpu_ring *ring = NULL;
-   int r = 0, i;
-
-   gfx_v8_0_cp_compute_enable(adev, true);
+   struct amdgpu_ring *ring;
+   int r;
 
	ring = &adev->gfx.kiq.ring;
 
r = amdgpu_bo_reserve(ring->mqd_obj, false);
if (unlikely(r != 0))
-   goto done;
+   return r;
 
	r = amdgpu_bo_kmap(ring->mqd_obj, &ring->mqd_ptr);
-   if (!r) {
-   r = gfx_v8_0_kiq_init_queue(ring);
-   amdgpu_bo_kunmap(ring->mqd_obj);
-   ring->mqd_ptr = NULL;
-   }
+   if (unlikely(r != 0))
+   return r;
+
+   gfx_v8_0_kiq_init_queue(ring);
+   amdgpu_bo_kunmap(ring->mqd_obj);
+   ring->mqd_ptr = NULL;
amdgpu_bo_unreserve(ring->mqd_obj);
-   if (r)
-   goto done;
+   ring->ready = true;
+   return 0;
+}
+
+static int gfx_v8_0_kcq_resume(struct amdgpu_device *adev)
+{
+   struct amdgpu_ring *ring = NULL;
+   int r = 0, i;
+
+   gfx_v8_0_cp_compute_enable(adev, true);
 
for (i = 0; i < adev->gfx.num_compute_rings; i++) {
	ring = &adev->gfx.compute_ring[i];
@@ -5024,14 +5030,17 @@ static int gfx_v8_0_cp_resume(struct amdgpu_device 
*adev)
return r;
}
 
-   r = gfx_v8_0_cp_gfx_resume(adev);
+   r = gfx_v8_0_kiq_resume(adev);
if (r)
return r;
 
-   r = gfx_v8_0_kiq_resume(adev);
+   r = gfx_v8_0_cp_gfx_resume(adev);
if (r)
return r;
 
+   r = gfx_v8_0_kcq_resume(adev);
+   if (r)
+   return r;
gfx_v8_0_enable_gui_idle_interrupt(adev, true);
 
return 0;
@@ -5334,10 +5343,6 @@ static int gfx_v8_0_post_soft_reset(void *handle)
srbm_soft_reset = adev->gfx.srbm_soft_reset;
 
if (REG_GET_FIELD(grbm_soft_reset, GRBM_SOFT_RESET, SOFT_RESET_CP) ||
-   REG_GET_FIELD(grbm_soft_reset, GRBM_SOFT_RESET, SOFT_RESET_GFX))
-   gfx_v8_0_cp_gfx_resume(adev);
-
-   if (REG_GET_FIELD(grbm_soft_reset, GRBM_SOFT_RESET, SOFT_RESET_CP) ||
REG_GET_FIELD(grbm_soft_reset, GRBM_SOFT_RESET, SOFT_RESET_CPF) ||
REG_GET_FIELD(grbm_soft_reset, GRBM_SOFT_RESET, SOFT_RESET_CPC) ||
REG_GET_FIELD(grbm_soft_reset, GRBM_SOFT_RESET, SOFT_RESET_CPG)) {
@@ -5353,7 +5358,13 @@ static int gfx_v8_0_post_soft_reset(void *handle)
mutex_unlock(>srbm_mutex);
}
gfx_v8_0_kiq_resume(adev);
+   gfx_v8_0_kcq_resume(adev);
}
+
+   if (REG_GET_FIELD(grbm_soft_reset, GRBM_SOFT_RESET, SOFT_RESET_CP) ||
+   REG_GET_FIELD(grbm_soft_reset, GRBM_SOFT_RESET, SOFT_RESET_GFX))
+   gfx_v8_0_cp_gfx_resume(adev);
+
gfx_v8_0_rlc_start(adev);
 
return 0;
-- 
1.9.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amdgpu: move full access into amdgpu_device_ip_suspend

2018-08-22 Thread Yintian Tao
It is safer to make full-access cover both phase1 and phase2.
Then accessing special registers in either phase1 or phase2 will not
block any shutdown or suspend process under virtualization.

Signed-off-by: Yintian Tao 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index c23339d..6bb0e47 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1932,9 +1932,6 @@ static int amdgpu_device_ip_suspend_phase1(struct 
amdgpu_device *adev)
 {
int i, r;
 
-   if (amdgpu_sriov_vf(adev))
-   amdgpu_virt_request_full_gpu(adev, false);
-
amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
 
@@ -1953,9 +1950,6 @@ static int amdgpu_device_ip_suspend_phase1(struct 
amdgpu_device *adev)
}
}
 
-   if (amdgpu_sriov_vf(adev))
-   amdgpu_virt_release_full_gpu(adev, false);
-
return 0;
 }
 
@@ -2007,11 +2001,17 @@ int amdgpu_device_ip_suspend(struct amdgpu_device *adev)
 {
int r;
 
+   if (amdgpu_sriov_vf(adev))
+   amdgpu_virt_request_full_gpu(adev, false);
+
r = amdgpu_device_ip_suspend_phase1(adev);
if (r)
return r;
r = amdgpu_device_ip_suspend_phase2(adev);
 
+   if (amdgpu_sriov_vf(adev))
+   amdgpu_virt_release_full_gpu(adev, false);
+
return r;
 }
 
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 1/2] drm: rename null fence to stub fence in syncobj

2018-08-22 Thread Daniel Vetter
On Wed, Aug 22, 2018 at 05:56:10PM +0800, zhoucm1 wrote:
> 
> 
> On 2018-08-22 17:34, Daniel Vetter wrote:
> > On Wed, Aug 22, 2018 at 04:38:56PM +0800, Chunming Zhou wrote:
> > > stub fence will be used by timeline syncobj as well.
> > > 
> > > Change-Id: Ia4252f03c07a8105491d2791dc7c8c6976682285
> > > Signed-off-by: Chunming Zhou 
> > > Cc: Jason Ekstrand 
> > Please don't expose stuff only used by the drm_syncobj implementation to
> > drivers. Gives us a very unclean driver interface. Imo this should all be
> > left within drm_syncobj.h.
> .c? will fix that.

Yup I meant to leave it all in drm_syncobj.c :-)
-Daniel

> > 
> > See also my comments for patch 2, you leak all the implemenation details
> > to drivers. We need to fix that and have a clear interface.
> Yes, I will address them when I do v2.
> 
> Thanks,
> David Zhou
> > -Daniel
> > 
> > > ---
> > >   drivers/gpu/drm/drm_syncobj.c | 28 ++--
> > >   include/drm/drm_syncobj.h | 24 
> > >   2 files changed, 26 insertions(+), 26 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
> > > index d4f4ce484529..70af32d0def1 100644
> > > --- a/drivers/gpu/drm/drm_syncobj.c
> > > +++ b/drivers/gpu/drm/drm_syncobj.c
> > > @@ -187,39 +187,15 @@ void drm_syncobj_replace_fence(struct drm_syncobj 
> > > *syncobj,
> > >   }
> > >   EXPORT_SYMBOL(drm_syncobj_replace_fence);
> > > -struct drm_syncobj_null_fence {
> > > - struct dma_fence base;
> > > - spinlock_t lock;
> > > -};
> > > -
> > > -static const char *drm_syncobj_null_fence_get_name(struct dma_fence 
> > > *fence)
> > > -{
> > > -return "syncobjnull";
> > > -}
> > > -
> > > -static bool drm_syncobj_null_fence_enable_signaling(struct dma_fence 
> > > *fence)
> > > -{
> > > -dma_fence_enable_sw_signaling(fence);
> > > -return !dma_fence_is_signaled(fence);
> > > -}
> > > -
> > > -static const struct dma_fence_ops drm_syncobj_null_fence_ops = {
> > > - .get_driver_name = drm_syncobj_null_fence_get_name,
> > > - .get_timeline_name = drm_syncobj_null_fence_get_name,
> > > - .enable_signaling = drm_syncobj_null_fence_enable_signaling,
> > > - .wait = dma_fence_default_wait,
> > > - .release = NULL,
> > > -};
> > > -
> > >   static int drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj)
> > >   {
> > > - struct drm_syncobj_null_fence *fence;
> > > + struct drm_syncobj_stub_fence *fence;
> > >   fence = kzalloc(sizeof(*fence), GFP_KERNEL);
> > >   if (fence == NULL)
> > >   return -ENOMEM;
> > >   spin_lock_init(&fence->lock);
> > > - dma_fence_init(&fence->base, &drm_syncobj_null_fence_ops,
> > > + dma_fence_init(&fence->base, &drm_syncobj_stub_fence_ops,
> > >  &fence->lock, 0, 0);
> > >   dma_fence_signal(&fence->base);
> > > diff --git a/include/drm/drm_syncobj.h b/include/drm/drm_syncobj.h
> > > index 3980602472c0..b04c492ddbb5 100644
> > > --- a/include/drm/drm_syncobj.h
> > > +++ b/include/drm/drm_syncobj.h
> > > @@ -30,6 +30,30 @@
> > >   struct drm_syncobj_cb;
> > > +struct drm_syncobj_stub_fence {
> > > + struct dma_fence base;
> > > + spinlock_t lock;
> > > +};
> > > +
> > > +const char *drm_syncobj_stub_fence_get_name(struct dma_fence *fence)
> > > +{
> > > +return "syncobjstub";
> > > +}
> > > +
> > > +bool drm_syncobj_stub_fence_enable_signaling(struct dma_fence *fence)
> > > +{
> > > +dma_fence_enable_sw_signaling(fence);
> > > +return !dma_fence_is_signaled(fence);
> > > +}
> > > +
> > > +const struct dma_fence_ops drm_syncobj_stub_fence_ops = {
> > > + .get_driver_name = drm_syncobj_stub_fence_get_name,
> > > + .get_timeline_name = drm_syncobj_stub_fence_get_name,
> > > + .enable_signaling = drm_syncobj_stub_fence_enable_signaling,
> > > + .wait = dma_fence_default_wait,
> > > + .release = NULL,
> > > +};
> > > +
> > >   /**
> > >* struct drm_syncobj - sync object.
> > >*
> > > -- 
> > > 2.14.1
> > > 
> > > ___
> > > dri-devel mailing list
> > > dri-de...@lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
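
What the review is asking for, roughly: keep everything file-local in
drm_syncobj.c so drivers only ever see the drm_syncobj entry points. A
minimal sketch of that shape, assuming the stub rename from patch 1 (this
is not the eventual patch text):

	/* drivers/gpu/drm/drm_syncobj.c -- nothing below is exported */
	struct drm_syncobj_stub_fence {
		struct dma_fence base;
		spinlock_t lock;
	};

	static const char *drm_syncobj_stub_fence_get_name(struct dma_fence *fence)
	{
		return "syncobjstub";
	}

	static bool drm_syncobj_stub_fence_enable_signaling(struct dma_fence *fence)
	{
		/* The stub is signaled at creation, so there is never
		 * anything left to enable. */
		return !dma_fence_is_signaled(fence);
	}

	static const struct dma_fence_ops drm_syncobj_stub_fence_ops = {
		.get_driver_name = drm_syncobj_stub_fence_get_name,
		.get_timeline_name = drm_syncobj_stub_fence_get_name,
		.enable_signaling = drm_syncobj_stub_fence_enable_signaling,
		.wait = dma_fence_default_wait,
	};

With static linkage the ops table cannot leak into drivers, and the header
keeps exposing only the drm_syncobj_* functions.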
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH v6 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v6)

2018-08-22 Thread Huang Rui
I continued working on bulk moving, based on the proposal by Christian.

Background:
The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, then moves
each of them to the end of the LRU list one by one. That causes many BOs to be
moved to the end of the LRU and seriously impacts performance.

Then Christian provided a workaround to not move PD/PT BOs on LRU with below
patch:
Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae ("drm/amdgpu: band aid
validating VM PTs")

However, the final solution should bulk move all PD/PT and PerVM BOs on the LRU
instead of one by one.

Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
validated we move all BOs together to the end of the LRU without dropping the
lock for the LRU.

While doing so we note the beginning and end of this block in the LRU list.

Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
we don't move every BO one by one, but instead cut the LRU list into pieces so
that we bulk move everything to the end in just one operation.

Test data:
+---------------+-----------------+-----------+----------------------------------------+
|               |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                   |
|               |Principle(Vulkan)|           |                                        |
+---------------+-----------------+-----------+----------------------------------------+
| Original      |  147.7 FPS      |  76.86 us |0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K)  |
|               |                 |           |0.307 ms(8K) 0.310 ms(16K)              |
+---------------+-----------------+-----------+----------------------------------------+
| Original + WA |  162.1 FPS      |  42.15 us |0.254 ms(1K) 0.241 ms(2K) 0.230 ms(4K)  |
| (don't move   |                 |           |0.223 ms(8K) 0.204 ms(16K)              |
| PT BOs on LRU)|                 |           |                                        |
+---------------+-----------------+-----------+----------------------------------------+
| Bulk move     |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K)  |
|               |                 |           |0.214 ms(8K) 0.225 ms(16K)              |
+---------------+-----------------+-----------+----------------------------------------+

After testing with the above three benchmarks, including Vulkan and OpenCL, we
can see a visible improvement over the original, and even better results than
the original with the workaround.

v2: move all BOs, including the idle, relocated, and moved lists, to the end of
the LRU and put them together.
v3: remove an unused parameter and use list_for_each_entry instead of the safe
variant.
v4: move amdgpu_vm_move_to_lru_tail after command submission; at that time,
all BOs will be back on the idle list.
v5: remove amdgpu_vm_move_to_lru_tail_by_list(), use bulk_moveable instead of
validated, and move ttm_bo_bulk_move_lru_tail() also into
amdgpu_vm_move_to_lru_tail().
v6: clean up and fix the return value.

Signed-off-by: Christian König 
Signed-off-by: Huang Rui 
Tested-by: Mike Lothian 
Tested-by: Dieter Nützel 
Acked-by: Chunming Zhou 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c |  3 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 64 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 11 +-
 3 files changed, 57 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 502b94f..8a5e557 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1266,6 +1266,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
union drm_amdgpu_cs *cs = data;
struct amdgpu_cs_parser parser = {};
bool reserved_buffers = false;
+   struct amdgpu_fpriv *fpriv;
int i, r;
 
if (!adev->accel_working)
@@ -1310,6 +1311,8 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
 
	r = amdgpu_cs_submit(&parser, cs);
 
+   fpriv = filp->driver_priv;
+   amdgpu_vm_move_to_lru_tail(adev, &fpriv->vm);
 out:
amdgpu_cs_parser_fini(, r, reserved_buffers);
return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 9c84770..daae0fd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -268,6 +268,47 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
 }
 
 /**
+ * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
+ *
+ * @adev: amdgpu device pointer
+ * @vm: vm providing the BOs
+ *
+ * Move all BOs to the end of LRU and remember their positions to put them
+ * together.
+ */
+void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
+   struct amdgpu_vm *vm)
+{
+   struct ttm_bo_global *glob = adev->mman.bdev.glob;
+   struct amdgpu_vm_bo_base *bo_base;
+
+   if (vm->bulk_moveable) {
+   spin_lock(&glob->lru_lock);
+
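
The hunk is cut off here in the archive. Based on the commit message, the
rest of amdgpu_vm_move_to_lru_tail plausibly continues along these lines
(a reconstruction sketch restating the opening lines for context, not the
literal patch text):

	if (vm->bulk_moveable) {
		spin_lock(&glob->lru_lock);
		/* Nothing was validated: cut the remembered block out of
		 * the LRU and splice it to the tail in one operation. */
		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
		spin_unlock(&glob->lru_lock);
		return;
	}

	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));

	spin_lock(&glob->lru_lock);
	list_for_each_entry(bo_base, &vm->idle, vm_status) {
		struct amdgpu_bo *bo = bo_base->bo;

		if (!bo->parent)
			continue;

		/* Move each BO to the tail while noting the first and last
		 * position of the block in vm->lru_bulk_move. */
		ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
		if (bo->shadow)
			ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
						&vm->lru_bulk_move);
	}
	spin_unlock(&glob->lru_lock);

	vm->bulk_moveable = true;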

Re: [PATCH 2/2] [RFC]drm: add syncobj timeline support

2018-08-22 Thread Daniel Vetter
On Wed, Aug 22, 2018 at 11:59 AM, zhoucm1  wrote:
>
>
> On 2018-08-22 17:31, Daniel Vetter wrote:
>>
>> On Wed, Aug 22, 2018 at 05:28:17PM +0800, zhoucm1 wrote:
>>>
>>>
>>> On 2018-08-22 17:24, Daniel Vetter wrote:

 On Wed, Aug 22, 2018 at 04:49:28PM +0800, Chunming Zhou wrote:
>
> VK_KHR_timeline_semaphore:
> This extension introduces a new type of semaphore that has an integer
> payload
> identifying a point in a timeline. Such timeline semaphores support the
> following operations:
> * Host query - A host operation that allows querying the payload of
> the
>   timeline semaphore.
> * Host wait - A host operation that allows a blocking wait for a
>   timeline semaphore to reach a specified value.
> * Device wait - A device operation that allows waiting for a
>   timeline semaphore to reach a specified value.
> * Device signal - A device operation that allows advancing the
>   timeline semaphore to a specified value.
>
> Since it's a timeline, an earlier time point (PT) is always
> signaled before a later PT.
> a. signal PT design:
> Signal PT fence N depends on the PT[N-1] fence and the signal
> operation fence; when the PT[N] fence is signaled,
> the timeline will increase to the value of PT[N].
> b. wait PT design:
> A wait PT fence is signaled when the timeline reaches its point value.
> When the timeline increases, we compare
> the wait PTs' values with the new timeline value; if a PT's value is
> lower than the timeline value, that wait PT is
> signaled, otherwise it stays in the list. A semaphore wait operation
> can wait on any point of the timeline,
> so an RB tree is needed to order them. And a wait PT can be ahead of
> its signal PT, so we need a submission fence to
> handle that.
>
> TODO:
> CPU query and wait on timeline semaphore.

 Another TODO: igt testcase for all the cornercases. We already have
 other syncobj tests in there.
>>>
>>> Yes, I'm also trying to find where the tests should be written. Could you
>>> give a directory?
>>
>> There's already tests/syncobj_basic.c and tests/syncobj_wait.c. Either
>> extend those, or probably better to start a new tests/syncobj_timeline.c
>> since I expect this will have a lot of corner-cases we need to check.
>
> I failed to find them in both the kernel and libdrm. Could you point out
> which tests you mean?

igt testcases. It's a separate thing with lots of drm tests:

https://cgit.freedesktop.org/drm/igt-gpu-tools

I know that amdgpu has their tests in libdrm (which imo is a bit of an
unfortunate split, but there are also reasons and stuff).

Cheers, Daniel

> Thanks,
> David Zhou
>
>> -Daniel
>>
>>> Thanks,
>>> David Zhou

 That would also help with understanding how this is supposed to be used,
 since I'm a bit too dense to immediately get your algorithm by just
 staring at the code.


> Change-Id: I9f09aae225e268442c30451badac40406f24262c
> Signed-off-by: Chunming Zhou 
> Cc: Christian Konig 
> Cc: Dave Airlie 
> Cc: Daniel Rakos 
> ---
>drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c |   7 +-
>drivers/gpu/drm/drm_syncobj.c  | 385
> -
>drivers/gpu/drm/v3d/v3d_gem.c  |   4 +-
>drivers/gpu/drm/vc4/vc4_gem.c  |   2 +-
>include/drm/drm_syncobj.h  |  45 +++-
>include/uapi/drm/drm.h |   3 +-
>6 files changed, 435 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index d42d1c8f78f6..463cc8960723 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -1105,7 +1105,7 @@ static int
> amdgpu_syncobj_lookup_and_add_to_sync(struct amdgpu_cs_parser *p,
>{
> int r;
> struct dma_fence *fence;
> -   r = drm_syncobj_find_fence(p->filp, handle, &fence);
> +   r = drm_syncobj_find_fence(p->filp, handle, &fence, 0);
> if (r)
> return r;
> @@ -1193,8 +1193,9 @@ static void amdgpu_cs_post_dependencies(struct
> amdgpu_cs_parser *p)
>{
> int i;
> -   for (i = 0; i < p->num_post_dep_syncobjs; ++i)
> -   drm_syncobj_replace_fence(p->post_dep_syncobjs[i],
> p->fence);
> +   for (i = 0; i < p->num_post_dep_syncobjs; ++i) {
> +   drm_syncobj_signal_fence(p->post_dep_syncobjs[i],
> p->fence, 0);
> +   }
>}
>static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
> diff --git a/drivers/gpu/drm/drm_syncobj.c
> b/drivers/gpu/drm/drm_syncobj.c
> index 70af32d0def1..3709f36c901e 100644
> --- a/drivers/gpu/drm/drm_syncobj.c
> +++ b/drivers/gpu/drm/drm_syncobj.c
> @@ -187,6 +187,191 @@ void drm_syncobj_replace_fence(struct 
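
The quoted diff is truncated here. As a reading aid for the wait-PT design
described above, a minimal illustrative sketch (the names and types are
invented, not the patch's actual structures): when the timeline value
advances, every waiting point at or below the new value gets signaled.

	/* Illustrative only. */
	struct example_wait_pt {
		u64 value;
		struct dma_fence fence;
		struct list_head node;
	};

	struct example_timeline {
		u64 value;
		struct list_head wait_pts;	/* ordered by ->value */
	};

	static void example_timeline_advance(struct example_timeline *tl,
					     u64 new_value)
	{
		struct example_wait_pt *pt, *tmp;

		tl->value = new_value;
		/* The patch keeps wait PTs in an RB tree; a sorted list
		 * shows the same idea: signal every PT whose point value
		 * is now at or below the timeline. */
		list_for_each_entry_safe(pt, tmp, &tl->wait_pts, node) {
			if (pt->value > tl->value)
				break;
			dma_fence_signal(&pt->fence);
			list_del(&pt->node);
		}
	}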

[PATCH 4/5] drm/amdgpu: implement soft_recovery for GFX8 v2

2018-08-22 Thread Christian König
Try to kill waves on the SQ.

v2: only for the GFX ring for now.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
index 282dba6cce86..9de940a65c80 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
@@ -6714,6 +6714,18 @@ static void gfx_v8_0_ring_emit_wreg(struct amdgpu_ring 
*ring, uint32_t reg,
amdgpu_ring_write(ring, val);
 }
 
+static void gfx_v8_0_ring_soft_recovery(struct amdgpu_ring *ring, unsigned 
vmid)
+{
+   struct amdgpu_device *adev = ring->adev;
+   uint32_t value = 0;
+
+   value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
+   value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+   value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+   value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
+   WREG32(mmSQ_CMD, value);
+}
+
 static void gfx_v8_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
 enum amdgpu_interrupt_state 
state)
 {
@@ -7171,6 +7183,7 @@ static const struct amdgpu_ring_funcs 
gfx_v8_0_ring_funcs_gfx = {
.init_cond_exec = gfx_v8_0_ring_emit_init_cond_exec,
.patch_cond_exec = gfx_v8_0_ring_emit_patch_cond_exec,
.emit_wreg = gfx_v8_0_ring_emit_wreg,
+   .soft_recovery = gfx_v8_0_ring_soft_recovery,
 };
 
 static const struct amdgpu_ring_funcs gfx_v8_0_ring_funcs_compute = {
-- 
2.14.1
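
For readers without the register spec, an annotated version of that write;
the field meanings in the comments are an interpretation and do not come
from the patch itself:

	uint32_t value = 0;

	/* CMD 0x03: kill the selected waves (rather than halt/resume). */
	value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
	/* MODE 0x01: broadcast the command to the SQs. */
	value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
	/* Restrict the kill to waves whose VMID matches VM_ID, so work
	 * from other processes on the ring survives the recovery attempt. */
	value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
	value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
	WREG32(mmSQ_CMD, value);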

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 2/5] drm/amdgpu: add ring soft recovery v2

2018-08-22 Thread Christian König
Instead of hammering hard on the GPU, try a soft recovery first.

v2: reorder code a bit

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  6 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 24 
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  4 
 3 files changed, 34 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 265ff90f4e01..d93e31a5c4e7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -33,6 +33,12 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
struct amdgpu_job *job = to_amdgpu_job(s_job);
 
+   if (amdgpu_ring_soft_recovery(ring, job->vmid, s_job->s_fence->parent)) 
{
+   DRM_ERROR("ring %s timeout, but soft recovered\n",
+ s_job->sched->name);
+   return;
+   }
+
DRM_ERROR("ring %s timeout, signaled seq=%u, emitted seq=%u\n",
  job->base.sched->name, atomic_read(&ring->fence_drv.last_seq),
  ring->fence_drv.sync_seq);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
index 5dfd26be1eec..c045a4e38ad1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
@@ -383,6 +383,30 @@ void amdgpu_ring_emit_reg_write_reg_wait_helper(struct 
amdgpu_ring *ring,
amdgpu_ring_emit_reg_wait(ring, reg1, mask, mask);
 }
 
+/**
+ * amdgpu_ring_soft_recovery - try to soft recover a ring lockup
+ *
+ * @ring: ring to try the recovery on
+ * @vmid: VMID we try to get going again
+ * @fence: timedout fence
+ *
+ * Tries to get a ring proceeding again when it is stuck.
+ */
+bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
+  struct dma_fence *fence)
+{
+   ktime_t deadline = ktime_add_us(ktime_get(), 1000);
+
+   if (!ring->funcs->soft_recovery)
+   return false;
+
+   while (!dma_fence_is_signaled(fence) &&
+  ktime_to_ns(ktime_sub(deadline, ktime_get())) > 0)
+   ring->funcs->soft_recovery(ring, vmid);
+
+   return dma_fence_is_signaled(fence);
+}
+
 /*
  * Debugfs info
  */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 409fdd9b9710..9cc239968e40 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -168,6 +168,8 @@ struct amdgpu_ring_funcs {
/* priority functions */
void (*set_priority) (struct amdgpu_ring *ring,
  enum drm_sched_priority priority);
+   /* Try to soft recover the ring to make the fence signal */
+   void (*soft_recovery)(struct amdgpu_ring *ring, unsigned vmid);
 };
 
 struct amdgpu_ring {
@@ -260,6 +262,8 @@ void amdgpu_ring_fini(struct amdgpu_ring *ring);
 void amdgpu_ring_emit_reg_write_reg_wait_helper(struct amdgpu_ring *ring,
uint32_t reg0, uint32_t val0,
uint32_t reg1, uint32_t val1);
+bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
+  struct dma_fence *fence);
 
 static inline void amdgpu_ring_clear_ring(struct amdgpu_ring *ring)
 {
-- 
2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 5/5] drm/amdgpu: implement soft_recovery for GFX9

2018-08-22 Thread Christian König
Try to kill waves on the SQ.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 44707f94b2c5..ab5cacea967b 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -4421,6 +4421,18 @@ static void gfx_v9_0_ring_emit_reg_write_reg_wait(struct 
amdgpu_ring *ring,
   ref, mask);
 }
 
+static void gfx_v9_0_ring_soft_recovery(struct amdgpu_ring *ring, unsigned 
vmid)
+{
+   struct amdgpu_device *adev = ring->adev;
+   uint32_t value = 0;
+
+   value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
+   value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+   value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+   value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
+   WREG32(mmSQ_CMD, value);
+}
+
 static void gfx_v9_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
 enum amdgpu_interrupt_state 
state)
 {
@@ -4743,6 +4755,7 @@ static const struct amdgpu_ring_funcs 
gfx_v9_0_ring_funcs_gfx = {
.emit_wreg = gfx_v9_0_ring_emit_wreg,
.emit_reg_wait = gfx_v9_0_ring_emit_reg_wait,
.emit_reg_write_reg_wait = gfx_v9_0_ring_emit_reg_write_reg_wait,
+   .soft_recovery = gfx_v9_0_ring_soft_recovery,
 };
 
 static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_compute = {
-- 
2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 3/5] drm/amdgpu: implement soft_recovery for GFX7

2018-08-22 Thread Christian König
Try to kill waves on the SQ.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
index 95452c5a9df6..a15d9c0f233b 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
@@ -4212,6 +4212,18 @@ static void gfx_v7_0_ring_emit_gds_switch(struct 
amdgpu_ring *ring,
amdgpu_ring_write(ring, (1 << (oa_size + oa_base)) - (1 << oa_base));
 }
 
+static void gfx_v7_0_ring_soft_recovery(struct amdgpu_ring *ring, unsigned 
vmid)
+{
+   struct amdgpu_device *adev = ring->adev;
+   uint32_t value = 0;
+
+   value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
+   value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+   value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+   value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
+   WREG32(mmSQ_CMD, value);
+}
+
 static uint32_t wave_read_ind(struct amdgpu_device *adev, uint32_t simd, 
uint32_t wave, uint32_t address)
 {
WREG32(mmSQ_IND_INDEX,
@@ -5088,6 +5100,7 @@ static const struct amdgpu_ring_funcs 
gfx_v7_0_ring_funcs_gfx = {
.pad_ib = amdgpu_ring_generic_pad_ib,
.emit_cntxcntl = gfx_v7_ring_emit_cntxcntl,
.emit_wreg = gfx_v7_0_ring_emit_wreg,
+   .soft_recovery = gfx_v7_0_ring_soft_recovery,
 };
 
 static const struct amdgpu_ring_funcs gfx_v7_0_ring_funcs_compute = {
-- 
2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 1/5] drm/amdgpu: cleanup GPU recovery check a bit

2018-08-22 Thread Christian König
Check if we should call the function instead of providing the forced
flag.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h|  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 38 --
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c|  4 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c|  3 ++-
 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c  |  4 ++--
 drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c  |  3 ++-
 7 files changed, 36 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 19ef7711d944..340e40d03d54 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1158,8 +1158,9 @@ int emu_soc_asic_init(struct amdgpu_device *adev);
 #define amdgpu_asic_need_full_reset(adev) 
(adev)->asic_funcs->need_full_reset((adev))
 
 /* Common functions */
+bool amdgpu_device_should_recover_gpu(struct amdgpu_device *adev);
 int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
- struct amdgpu_job* job, bool force);
+ struct amdgpu_job* job);
 void amdgpu_device_pci_config_reset(struct amdgpu_device *adev);
 bool amdgpu_device_need_post(struct amdgpu_device *adev);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index c23339d8ae2d..9f5e4be76d5e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3244,32 +3244,44 @@ static int amdgpu_device_reset_sriov(struct 
amdgpu_device *adev,
return r;
 }
 
+/**
+ * amdgpu_device_should_recover_gpu - check if we should try GPU recovery
+ *
+ * @adev: amdgpu device pointer
+ *
+ * Check amdgpu_gpu_recovery and SRIOV status to see if we should try to 
recover
+ * a hung GPU.
+ */
+bool amdgpu_device_should_recover_gpu(struct amdgpu_device *adev)
+{
+   if (!amdgpu_device_ip_check_soft_reset(adev)) {
+   DRM_INFO("Timeout, but no hardware hang detected.\n");
+   return false;
+   }
+
+   if (amdgpu_gpu_recovery == 0 || (amdgpu_gpu_recovery == -1  &&
+!amdgpu_sriov_vf(adev))) {
+   DRM_INFO("GPU recovery disabled.\n");
+   return false;
+   }
+
+   return true;
+}
+
 /**
  * amdgpu_device_gpu_recover - reset the asic and recover scheduler
  *
  * @adev: amdgpu device pointer
  * @job: which job trigger hang
- * @force: forces reset regardless of amdgpu_gpu_recovery
  *
  * Attempt to reset the GPU if it has hung (all asics).
  * Returns 0 for success or an error on failure.
  */
 int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
- struct amdgpu_job *job, bool force)
+ struct amdgpu_job *job)
 {
int i, r, resched;
 
-   if (!force && !amdgpu_device_ip_check_soft_reset(adev)) {
-   DRM_INFO("No hardware hang detected. Did some blocks stall?\n");
-   return 0;
-   }
-
-   if (!force && (amdgpu_gpu_recovery == 0 ||
-   (amdgpu_gpu_recovery == -1  && 
!amdgpu_sriov_vf(adev {
-   DRM_INFO("GPU recovery disabled.\n");
-   return 0;
-   }
-
dev_info(adev->dev, "GPU reset begin!\n");
 
	mutex_lock(&adev->lock_reset);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index e74d620d9699..68cccebb8463 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -702,7 +702,7 @@ static int amdgpu_debugfs_gpu_recover(struct seq_file *m, 
void *data)
struct amdgpu_device *adev = dev->dev_private;
 
seq_printf(m, "gpu recover\n");
-   amdgpu_device_gpu_recover(adev, NULL, true);
+   amdgpu_device_gpu_recover(adev, NULL);
 
return 0;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
index 1abf5b5bac9e..b927e8798534 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
@@ -105,8 +105,8 @@ static void amdgpu_irq_reset_work_func(struct work_struct 
*work)
struct amdgpu_device *adev = container_of(work, struct amdgpu_device,
  reset_work);
 
-   if (!amdgpu_sriov_vf(adev))
-   amdgpu_device_gpu_recover(adev, NULL, false);
+   if (!amdgpu_sriov_vf(adev) && amdgpu_device_should_recover_gpu(adev))
+   amdgpu_device_gpu_recover(adev, NULL);
 }
 
 /**
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 391e2f7c03aa..265ff90f4e01 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -37,7 +37,8 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
  

[PATCH 4/6] drm/amdgpu: implement soft_recovery for GFX7

2018-08-22 Thread Christian König
Try to kill waves on the SQ.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
index 95452c5a9df6..a15d9c0f233b 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
@@ -4212,6 +4212,18 @@ static void gfx_v7_0_ring_emit_gds_switch(struct 
amdgpu_ring *ring,
amdgpu_ring_write(ring, (1 << (oa_size + oa_base)) - (1 << oa_base));
 }
 
+static void gfx_v7_0_ring_soft_recovery(struct amdgpu_ring *ring, unsigned vmid)
+{
+   struct amdgpu_device *adev = ring->adev;
+   uint32_t value = 0;
+
+   value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
+   value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+   value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+   value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
+   WREG32(mmSQ_CMD, value);
+}
+
 static uint32_t wave_read_ind(struct amdgpu_device *adev, uint32_t simd, uint32_t wave, uint32_t address)
 {
WREG32(mmSQ_IND_INDEX,
@@ -5088,6 +5100,7 @@ static const struct amdgpu_ring_funcs gfx_v7_0_ring_funcs_gfx = {
.pad_ib = amdgpu_ring_generic_pad_ib,
.emit_cntxcntl = gfx_v7_ring_emit_cntxcntl,
.emit_wreg = gfx_v7_0_ring_emit_wreg,
+   .soft_recovery = gfx_v7_0_ring_soft_recovery,
 };
 
 static const struct amdgpu_ring_funcs gfx_v7_0_ring_funcs_compute = {
-- 
2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
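
A note on the SQ_CMD programming above: as far as I can tell, CMD=0x03 selects
the kill operation, MODE=0x01 broadcasts it to all wavefronts, and CHECK_VMID=1
restricts the kill to waves belonging to the given VMID, so only the hung
submission is hit while other clients keep running. REG_SET_FIELD is the usual
read-modify-write field helper; a sketch of its definition, reconstructed from
amdgpu.h (treat the exact text as an assumption):

#define REG_FIELD_SHIFT(reg, field) reg##__##field##__SHIFT
#define REG_FIELD_MASK(reg, field) reg##__##field##_MASK

/* Clear the field in orig_val, then OR in field_val shifted into place. */
#define REG_SET_FIELD(orig_val, reg, field, field_val)			\
	(((orig_val) & ~REG_FIELD_MASK(reg, field)) |			\
	 (REG_FIELD_MASK(reg, field) &					\
	  ((field_val) << REG_FIELD_SHIFT(reg, field))))

So the four REG_SET_FIELD calls simply assemble one 32-bit SQ_CMD value that is
then written out with a single WREG32.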


[PATCH 1/6] drm/amdgpu: fix preamble handling

2018-08-22 Thread Christian König
At this point the command submission can still be interrupted.

Signed-off-by: Christian König 
Acked-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 16 +---
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index d42d1c8f78f6..313ac971eaaf 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1015,13 +1015,9 @@ static int amdgpu_cs_ib_fill(struct amdgpu_device *adev,
if (r)
return r;
 
-   if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE) {
-   parser->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
-   if (!parser->ctx->preamble_presented) {
-   parser->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
-   parser->ctx->preamble_presented = true;
-   }
-   }
+   if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
+   parser->job->preamble_status |=
+   AMDGPU_PREAMBLE_IB_PRESENT;
 
if (parser->entity && parser->entity != entity)
return -EINVAL;
@@ -1244,6 +1240,12 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 
amdgpu_cs_post_dependencies(p);
 
+   if ((job->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
+   !p->ctx->preamble_presented) {
+   job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
+   p->ctx->preamble_presented = true;
+   }
+
cs->out.handle = seq;
job->uf_sequence = seq;
 
-- 
2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
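
For context on why the move matters: amdgpu_cs_ib_fill() runs before the
submission is committed, so when the ioctl failed or was interrupted after that
point, ctx->preamble_presented was already true and no later submission would
ever get AMDGPU_PREAMBLE_IB_PRESENT_FIRST. Doing the bookkeeping in
amdgpu_cs_submit(), past the last failure point, keeps the context state
consistent with what actually reached the ring. A standalone sketch of the
invariant (hypothetical names, not driver code):

#include <stdbool.h>

struct ctx { bool preamble_presented; };
struct job { bool has_preamble; bool first_preamble; };

/* May fail or be interrupted; must leave the context untouched. */
static int prepare(struct job *job) { (void)job; return 0; }

static int submit(struct ctx *ctx, struct job *job)
{
	int r = prepare(job);
	if (r)
		return r;	/* context state is still clean for a retry */

	/* Past the last failure point, so recording the preamble is safe. */
	if (job->has_preamble && !ctx->preamble_presented) {
		job->first_preamble = true;
		ctx->preamble_presented = true;
	}
	return 0;		/* committing cannot fail any more */
}

int main(void)
{
	struct ctx c = { false };
	struct job j = { true, false };
	return submit(&c, &j);
}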


[PATCH 6/6] drm/amdgpu: implement soft_recovery for GFX9

2018-08-22 Thread Christian König
Try to kill waves on the SQ.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 44707f94b2c5..ab5cacea967b 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -4421,6 +4421,18 @@ static void gfx_v9_0_ring_emit_reg_write_reg_wait(struct amdgpu_ring *ring,
   ref, mask);
 }
 
+static void gfx_v9_0_ring_soft_recovery(struct amdgpu_ring *ring, unsigned vmid)
+{
+   struct amdgpu_device *adev = ring->adev;
+   uint32_t value = 0;
+
+   value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
+   value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+   value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+   value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
+   WREG32(mmSQ_CMD, value);
+}
+
 static void gfx_v9_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
 enum amdgpu_interrupt_state state)
 {
@@ -4743,6 +4755,7 @@ static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_gfx = {
.emit_wreg = gfx_v9_0_ring_emit_wreg,
.emit_reg_wait = gfx_v9_0_ring_emit_reg_wait,
.emit_reg_write_reg_wait = gfx_v9_0_ring_emit_reg_write_reg_wait,
+   .soft_recovery = gfx_v9_0_ring_soft_recovery,
 };
 
 static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_compute = {
-- 
2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
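
One observation on this hunk: only gfx_v9_0_ring_funcs_gfx gains the callback;
gfx_v9_0_ring_funcs_compute and the KIQ funcs are left untouched, so a hang on
a compute queue still goes through the full reset path. Assuming the SQ kill
works for compute waves as well, which I have not verified, hooking it up would
be a one-line addition to the compute funcs table (untested suggestion):

+	.soft_recovery = gfx_v9_0_ring_soft_recovery,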


[PATCH 3/6] drm/amdgpu: add ring soft recovery v2

2018-08-22 Thread Christian König
Instead of hammering hard on the GPU, try a soft recovery first.

v2: reorder code a bit

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  6 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 24 
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  4 
 3 files changed, 34 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 265ff90f4e01..d93e31a5c4e7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -33,6 +33,12 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
struct amdgpu_job *job = to_amdgpu_job(s_job);
 
+   if (amdgpu_ring_soft_recovery(ring, job->vmid, s_job->s_fence->parent)) {
+   DRM_ERROR("ring %s timeout, but soft recovered\n",
+ s_job->sched->name);
+   return;
+   }
+
DRM_ERROR("ring %s timeout, signaled seq=%u, emitted seq=%u\n",
  job->base.sched->name, atomic_read(&ring->fence_drv.last_seq),
  ring->fence_drv.sync_seq);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
index 5dfd26be1eec..c045a4e38ad1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
@@ -383,6 +383,30 @@ void amdgpu_ring_emit_reg_write_reg_wait_helper(struct amdgpu_ring *ring,
amdgpu_ring_emit_reg_wait(ring, reg1, mask, mask);
 }
 
+/**
+ * amdgpu_ring_soft_recovery - try to soft recover a ring lockup
+ *
+ * @ring: ring to try the recovery on
+ * @vmid: VMID we try to get going again
+ * @fence: timedout fence
+ *
+ * Tries to get a ring proceeding again when it is stuck.
+ */
+bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
+  struct dma_fence *fence)
+{
+   ktime_t deadline = ktime_add_us(ktime_get(), 1000);
+
+   if (!ring->funcs->soft_recovery)
+   return false;
+
+   while (!dma_fence_is_signaled(fence) &&
+  ktime_to_ns(ktime_sub(deadline, ktime_get())) > 0)
+   ring->funcs->soft_recovery(ring, vmid);
+
+   return dma_fence_is_signaled(fence);
+}
+
 /*
  * Debugfs info
  */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 409fdd9b9710..9cc239968e40 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -168,6 +168,8 @@ struct amdgpu_ring_funcs {
/* priority functions */
void (*set_priority) (struct amdgpu_ring *ring,
  enum drm_sched_priority priority);
+   /* Try to soft recover the ring to make the fence signal */
+   void (*soft_recovery)(struct amdgpu_ring *ring, unsigned vmid);
 };
 
 struct amdgpu_ring {
@@ -260,6 +262,8 @@ void amdgpu_ring_fini(struct amdgpu_ring *ring);
 void amdgpu_ring_emit_reg_write_reg_wait_helper(struct amdgpu_ring *ring,
uint32_t reg0, uint32_t val0,
uint32_t reg1, uint32_t val1);
+bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
+  struct dma_fence *fence);
 
 static inline void amdgpu_ring_clear_ring(struct amdgpu_ring *ring)
 {
-- 
2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
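
Worth spelling out the pattern in amdgpu_ring_soft_recovery(): the kill is
re-issued in a loop rather than sent once, presumably because waves that launch
after a kill would otherwise survive, and the loop is bounded by a 1 ms
deadline so the timeout handler can never get stuck here. Success is judged
purely by whether the fence signalled in time. A userspace sketch of the same
bounded-retry shape (standard C; done() and poke() are hypothetical stand-ins
for the fence check and the SQ kill):

#include <stdbool.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Keep poking until done or the budget runs out; the final check
 * decides success, mirroring amdgpu_ring_soft_recovery(). */
static bool bounded_recover(bool (*done)(void), void (*poke)(void),
			    uint64_t budget_us)
{
	uint64_t deadline = now_ns() + budget_us * 1000;

	while (!done() && now_ns() < deadline)
		poke();

	return done();
}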


[PATCH 2/6] drm/amdgpu: cleanup GPU recovery check a bit

2018-08-22 Thread Christian König
Check if we should call the function instead of providing the forced
flag.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h|  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 38 --
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c|  4 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c|  3 ++-
 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c  |  4 ++--
 drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c  |  3 ++-
 7 files changed, 36 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 19ef7711d944..340e40d03d54 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1158,8 +1158,9 @@ int emu_soc_asic_init(struct amdgpu_device *adev);
 #define amdgpu_asic_need_full_reset(adev) (adev)->asic_funcs->need_full_reset((adev))
 
 /* Common functions */
+bool amdgpu_device_should_recover_gpu(struct amdgpu_device *adev);
 int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
- struct amdgpu_job* job, bool force);
+ struct amdgpu_job* job);
 void amdgpu_device_pci_config_reset(struct amdgpu_device *adev);
 bool amdgpu_device_need_post(struct amdgpu_device *adev);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index c23339d8ae2d..9f5e4be76d5e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3244,32 +3244,44 @@ static int amdgpu_device_reset_sriov(struct amdgpu_device *adev,
return r;
 }
 
+/**
+ * amdgpu_device_should_recover_gpu - check if we should try GPU recovery
+ *
+ * @adev: amdgpu device pointer
+ *
+ * Check amdgpu_gpu_recovery and SRIOV status to see if we should try to recover
+ * a hung GPU.
+ */
+bool amdgpu_device_should_recover_gpu(struct amdgpu_device *adev)
+{
+   if (!amdgpu_device_ip_check_soft_reset(adev)) {
+   DRM_INFO("Timeout, but no hardware hang detected.\n");
+   return false;
+   }
+
+   if (amdgpu_gpu_recovery == 0 || (amdgpu_gpu_recovery == -1  &&
+!amdgpu_sriov_vf(adev))) {
+   DRM_INFO("GPU recovery disabled.\n");
+   return false;
+   }
+
+   return true;
+}
+
 /**
  * amdgpu_device_gpu_recover - reset the asic and recover scheduler
  *
  * @adev: amdgpu device pointer
  * @job: which job trigger hang
- * @force: forces reset regardless of amdgpu_gpu_recovery
  *
  * Attempt to reset the GPU if it has hung (all asics).
  * Returns 0 for success or an error on failure.
  */
 int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
- struct amdgpu_job *job, bool force)
+ struct amdgpu_job *job)
 {
int i, r, resched;
 
-   if (!force && !amdgpu_device_ip_check_soft_reset(adev)) {
-   DRM_INFO("No hardware hang detected. Did some blocks stall?\n");
-   return 0;
-   }
-
-   if (!force && (amdgpu_gpu_recovery == 0 ||
-   (amdgpu_gpu_recovery == -1  && !amdgpu_sriov_vf(adev)))) {
-   DRM_INFO("GPU recovery disabled.\n");
-   return 0;
-   }
-
dev_info(adev->dev, "GPU reset begin!\n");
 
 	mutex_lock(&adev->lock_reset);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index e74d620d9699..68cccebb8463 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -702,7 +702,7 @@ static int amdgpu_debugfs_gpu_recover(struct seq_file *m, void *data)
struct amdgpu_device *adev = dev->dev_private;
 
seq_printf(m, "gpu recover\n");
-   amdgpu_device_gpu_recover(adev, NULL, true);
+   amdgpu_device_gpu_recover(adev, NULL);
 
return 0;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
index 1abf5b5bac9e..b927e8798534 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
@@ -105,8 +105,8 @@ static void amdgpu_irq_reset_work_func(struct work_struct *work)
struct amdgpu_device *adev = container_of(work, struct amdgpu_device,
  reset_work);
 
-   if (!amdgpu_sriov_vf(adev))
-   amdgpu_device_gpu_recover(adev, NULL, false);
+   if (!amdgpu_sriov_vf(adev) && amdgpu_device_should_recover_gpu(adev))
+   amdgpu_device_gpu_recover(adev, NULL);
 }
 
 /**
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 391e2f7c03aa..265ff90f4e01 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -37,7 +37,8 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
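
With the check split out, every caller ends up with the same ask-then-act
pattern the amdgpu_irq.c hunk shows:

	if (amdgpu_device_should_recover_gpu(adev))
		amdgpu_device_gpu_recover(adev, job);

while the paths that previously passed force=true, such as the debugfs trigger
above, call amdgpu_device_gpu_recover() directly and skip the check. The
mxgpu_ai.c and mxgpu_vi.c hunks from the diffstat are cut off in this archive;
presumably the VF mailbox handlers are converted the same way.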
  
