Re: [PATCH libdrm] amdgpu: add VM test to exercise max/min address space

2018-10-28 Thread Zhang, Jerry
On Oct 26, 2018, at 18:59, Christian König wrote:
> 
> Make sure the kernel doesn't crash if we map something at the minimum/maximum 
> address.
> 
> Signed-off-by: Christian König 
> ---
> tests/amdgpu/vm_tests.c | 45 -
> 1 file changed, 44 insertions(+), 1 deletion(-)
> 
> diff --git a/tests/amdgpu/vm_tests.c b/tests/amdgpu/vm_tests.c
> index 7b6dc5d6..bbdeef4d 100644
> --- a/tests/amdgpu/vm_tests.c
> +++ b/tests/amdgpu/vm_tests.c
> @@ -31,8 +31,8 @@ static  amdgpu_device_handle device_handle;
> static  uint32_t  major_version;
> static  uint32_t  minor_version;
> 
> -
> static void amdgpu_vmid_reserve_test(void);
> +static void amdgpu_vm_mapping_test(void);
> 
> CU_BOOL suite_vm_tests_enable(void)
> {
> @@ -84,6 +84,7 @@ int suite_vm_tests_clean(void)
> 
> CU_TestInfo vm_tests[] = {
>   { "resere vmid test",  amdgpu_vmid_reserve_test },
> + { "vm mapping test",  amdgpu_vm_mapping_test },
>   CU_TEST_INFO_NULL,
> };
> 
> @@ -167,3 +168,45 @@ static void amdgpu_vmid_reserve_test(void)
>   r = amdgpu_cs_ctx_free(context_handle);
>   CU_ASSERT_EQUAL(r, 0);
> }
> +
> +static void amdgpu_vm_mapping_test(void)
> +{
> + struct amdgpu_bo_alloc_request req = {0};
> + struct drm_amdgpu_info_device dev_info;
> + const uint64_t size = 4096;
> + amdgpu_bo_handle buf;
> + uint64_t addr;
> + int r;
> +
> + req.alloc_size = size;
> + req.phys_alignment = 0;
> + req.preferred_heap = AMDGPU_GEM_DOMAIN_GTT;
> + req.flags = 0;
> +
> + r = amdgpu_bo_alloc(device_handle, &req, &buf);
> + CU_ASSERT_EQUAL(r, 0);
> +
> + r = amdgpu_query_info(device_handle, AMDGPU_INFO_DEV_INFO,
> +   sizeof(dev_info), &dev_info);
> + CU_ASSERT_EQUAL(r, 0);
> +
> + addr = dev_info.virtual_address_offset;
> + r = amdgpu_bo_va_op(buf, 0, size, addr, 0, AMDGPU_VA_OP_MAP);
> + CU_ASSERT_EQUAL(r, 0);

Please confirm:

We may need to unmap the VA before freeing the BO, although this VA range
is unlikely to be used by other test cases.
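
A minimal sketch of that cleanup, reusing buf/size/addr from the hunk above
(illustrative only, not part of the posted patch):

	/* Undo the mapping again before the BO is freed. */
	r = amdgpu_bo_va_op(buf, 0, size, addr, 0, AMDGPU_VA_OP_UNMAP);
	CU_ASSERT_EQUAL(r, 0);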

BTW, is there a chance in practice that a process maps different VA ranges
to the same BO?

Regards,
Jerry

> +
> + addr = dev_info.virtual_address_max - size;
> + r = amdgpu_bo_va_op(buf, 0, size, addr, 0, AMDGPU_VA_OP_MAP);
> + CU_ASSERT_EQUAL(r, 0);
> +
> + if (dev_info.high_va_offset) {
> + addr = dev_info.high_va_offset;
> + r = amdgpu_bo_va_op(buf, 0, size, addr, 0, AMDGPU_VA_OP_MAP);
> + CU_ASSERT_EQUAL(r, 0);
> +
> + addr = dev_info.high_va_max - size;
> + r = amdgpu_bo_va_op(buf, 0, size, addr, 0, AMDGPU_VA_OP_MAP);
> + CU_ASSERT_EQUAL(r, 0);
> + }
> +
> + amdgpu_bo_free(buf);
> +}
> -- 
> 2.17.1
> 

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: fix VM leaf walking

2018-10-28 Thread Zhu, Rex
That sounds reasonable.

Thanks.


Best Regards

Rex



From: Christian König 
Sent: Friday, October 26, 2018 3:34 PM
To: Zhu, Rex; Deucher, Alexander; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: fix VM leaf walking

Yeah, that came to my mind as well.

But this would be the only place where we use a return value instead of
cursor->pfn as the criterion to abort.

So, to be consistent, I'd rather not do this,
Christian.
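
To illustrate the convention, here is a simplified, hypothetical sketch of a
cursor-style walker (not the actual amdgpu_vm.c code): exhaustion is flagged
on the cursor itself, so every caller aborts on the same cursor->pfn check
instead of a return value.

	#include <stdint.h>

	struct pt_cursor {
		uint64_t pfn;		/* ~0ull once the walk is exhausted */
		unsigned int level;	/* 0 = leaf level */
	};

	/* Hypothetical walker step: signals "no more entries" via the sentinel. */
	static void pt_next(struct pt_cursor *cursor)
	{
		if (cursor->level == 0) {
			cursor->pfn = ~0ull;	/* sentinel: walk is done */
			return;
		}
		cursor->level--;
		cursor->pfn++;
	}

	/* Callers check the sentinel before descending further. */
	static void pt_next_leaf(struct pt_cursor *cursor)
	{
		pt_next(cursor);
		while (cursor->pfn != ~0ull && cursor->level > 0)
			cursor->level--;
	}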

On 25.10.18 at 17:43, Zhu, Rex wrote:

How about adding a return value to the function amdgpu_vm_pt_next?

And changing the code like this:



-       amdgpu_vm_pt_next(adev, cursor);
-       while (amdgpu_vm_pt_descendant(adev, cursor));
+       ret = amdgpu_vm_pt_next(adev, cursor);
+       if (!ret)
+               while (amdgpu_vm_pt_descendant(adev, cursor));



Best Regards

Rex

From: amd-gfx On Behalf Of Zhu, Rex
Sent: Thursday, October 25, 2018 11:34 PM
To: Deucher, Alexander; Christian König; amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH] drm/amdgpu: fix VM leaf walking



Patch is Tested-by: Rex Zhu <rex@amd.com>



Regards

Rex



From: amd-gfx <amd-gfx-boun...@lists.freedesktop.org> On Behalf Of Deucher, Alexander
Sent: Thursday, October 25, 2018 11:08 PM
To: Christian König <ckoenig.leichtzumer...@gmail.com>; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: fix VM leaf walking



Acked-by: Alex Deucher <alexander.deuc...@amd.com>





From: amd-gfx <amd-gfx-boun...@lists.freedesktop.org> on behalf of Christian König <ckoenig.leichtzumer...@gmail.com>
Sent: Thursday, October 25, 2018 10:38 AM
To: amd-gfx@lists.freedesktop.org
Subject: [PATCH] drm/amdgpu: fix VM leaf walking



Make sure we don't try to go down further after the leaf walk has already
ended. This fixes a crash with a new VM test.

Signed-off-by: Christian König <christian.koe...@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index db0cbf8d219d..352b30409060 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -542,7 +542,8 @@ static void amdgpu_vm_pt_next_leaf(struct amdgpu_device *adev,
struct amdgpu_vm_pt_cursor *cursor)
 {
 amdgpu_vm_pt_next(adev, cursor);
-   while (amdgpu_vm_pt_descendant(adev, cursor));
+   if (cursor->pfn != ~0ll)
+   while (amdgpu_vm_pt_descendant(adev, cursor));
 }

 /**
--
2.17.1


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH v2] drm/amd/display: set backlight level limit to 1

2018-10-28 Thread Guttula, Suresh
From: "Guttula, Suresh" 

This patch works around a silicon limitation in the PWM duty cycle
when the backlight level goes to 0.

The PWM value is a 16-bit value with a valid range of 1-65535. Whenever
the user requests a PWM value of 0, which is outside that range, the
VBIOS handles it by limiting the value to 1. This patch does the same
in the driver. Neither the driver nor the VBIOS may pass a value of 0:
it is outside the valid PWM range and would produce a high PWM pulse,
which is not the intended behaviour per the HW constraints.

Signed-off-by: suresh guttula 
Reviewed-by: Harry Wentland 
---
v2: comment edited to represent the general use case
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 492230c..be261ef 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1518,6 +1518,13 @@ static int amdgpu_dm_backlight_update_status(struct backlight_device *bd)
 {
struct amdgpu_display_manager *dm = bl_get_data(bd);
 
+   /*
+* PWM interprets 0 as 100% rather than 0% because of a HW
+* limitation for level 0. So limit the minimum brightness
+* level to 1.
+*/
+   if (bd->props.brightness < 1)
+   return 1;
if (dc_link_set_backlight_level(dm->backlight_link,
bd->props.brightness, 0, 0))
return 0;
-- 
2.7.4
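
As a standalone illustration of the clamp the commit message describes, a
hypothetical user-space sketch (not the driver code; the helper name is made up):

	#include <stdint.h>

	/* Keep the 16-bit PWM duty value in its valid 1-65535 range, since
	 * the hardware interprets 0 as 100% duty rather than off. */
	static uint16_t clamp_backlight_level(uint32_t requested)
	{
		if (requested < 1)
			return 1;		/* never hand 0 to the PWM */
		if (requested > 65535)
			return 65535;		/* the PWM register is 16 bits wide */
		return (uint16_t)requested;
	}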

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx