[PATCH] drm/amdgpu:cancel timer of virtual DCE(v2)

2017-11-15 Thread Monk Liu
The virtual DCE timer structure is already released
by sw_fini(), so we need to cancel the timer in
hw_fini(); otherwise the timer canceling is missed.

v2:
use for loop and num_crtc to replace original code

Change-Id: I03d6ca7aa07591d287da379ef4fe008f06edaff6
Signed-off-by: Monk Liu 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/dce_virtual.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
index 39460eb..120dd3b 100644
--- a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
+++ b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
@@ -44,6 +44,9 @@ static void dce_virtual_set_display_funcs(struct amdgpu_device *adev);
 static void dce_virtual_set_irq_funcs(struct amdgpu_device *adev);
 static int dce_virtual_connector_encoder_init(struct amdgpu_device *adev,
  int index);
+static void dce_virtual_set_crtc_vblank_interrupt_state(struct amdgpu_device *adev,
+							int crtc,
+							enum amdgpu_interrupt_state state);
 
 /**
  * dce_virtual_vblank_wait - vblank wait asic callback.
@@ -491,6 +494,13 @@ static int dce_virtual_hw_init(void *handle)
 
 static int dce_virtual_hw_fini(void *handle)
 {
+   struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+   int i = 0;
+
+	for (i = 0; i < adev->mode_info.num_crtc; i++)
+		if (adev->mode_info.crtcs[i])
+			dce_virtual_set_crtc_vblank_interrupt_state(adev, i, AMDGPU_IRQ_STATE_DISABLE);
+
return 0;
 }
 
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amdgpu:cancel timer of virtual DCE(v2)

2017-11-15 Thread Monk Liu
The virtual DCE timer structure is already released
by sw_fini(), so we need to cancel the timer in
hw_fini(); otherwise the timer canceling is missed.

v2:
use for loop and num_crtc to replace original code

Change-Id: I03d6ca7aa07591d287da379ef4fe008f06edaff6
Signed-off-by: Monk Liu 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/dce_virtual.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
index cd4895b4..943efc3 100644
--- a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
+++ b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
@@ -44,6 +44,9 @@ static void dce_virtual_set_display_funcs(struct amdgpu_device *adev);
 static void dce_virtual_set_irq_funcs(struct amdgpu_device *adev);
 static int dce_virtual_connector_encoder_init(struct amdgpu_device *adev,
  int index);
+static void dce_virtual_set_crtc_vblank_interrupt_state(struct amdgpu_device *adev,
+							int crtc,
+							enum amdgpu_interrupt_state state);
 
 /**
  * dce_virtual_vblank_wait - vblank wait asic callback.
@@ -550,6 +553,13 @@ static int dce_virtual_hw_init(void *handle)
 
 static int dce_virtual_hw_fini(void *handle)
 {
+   struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+   int i = 0;
+
+	for (i = 0; i < adev->mode_info.num_crtc; i++)
+		if (adev->mode_info.crtcs[i])
+			dce_virtual_set_crtc_vblank_interrupt_state(adev, i, AMDGPU_IRQ_STATE_DISABLE);
+
return 0;
 }
 
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 2/2] drm/amdgpu:cancel timer of virtual DCE

2017-11-15 Thread Liu, Monk
Good idea

-Original Message-
From: Deucher, Alexander 
Sent: November 16, 2017 12:14
To: Liu, Monk ; amd-gfx@lists.freedesktop.org
Cc: Liu, Monk 
Subject: RE: [PATCH 2/2] drm/amdgpu:cancel timer of virtual DCE

> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf 
> Of Monk Liu
> Sent: Wednesday, November 15, 2017 10:14 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Liu, Monk
> Subject: [PATCH 2/2] drm/amdgpu:cancel timer of virtual DCE
> 
> The virtual DCE timer structure is already released by sw_fini(), so we
> need to cancel the timer in hw_fini(); otherwise the timer canceling is
> missed.
> 
> Change-Id: I03d6ca7aa07591d287da379ef4fe008f06edaff6
> Signed-off-by: Monk Liu 
> ---
>  drivers/gpu/drm/amd/amdgpu/dce_virtual.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
> b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
> index 39460eb..7438491 100644
> --- a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
> +++ b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
> @@ -489,8 +489,21 @@ static int dce_virtual_hw_init(void *handle)
>   return 0;
>  }
> 
> +static void dce_virtual_set_crtc_vblank_interrupt_state(struct amdgpu_device *adev,
> +							int crtc,
> +							enum amdgpu_interrupt_state state);
> +

Please put the forward declaration at the top of the file.  

>  static int dce_virtual_hw_fini(void *handle)  {
> + struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> + int i = 0;
> +
> + while (i < AMDGPU_MAX_CRTCS) {
> + if (adev->mode_info.crtcs[i])
> +		dce_virtual_set_crtc_vblank_interrupt_state(adev, i, AMDGPU_IRQ_STATE_DISABLE);
> + i++;
> + }

I think a for loop is clearer here. Also, why not use adev->mode_info.num_crtc 
so we don’t loop longer than we have to?

Alex

> +
>   return 0;
>  }
> 
> --
> 2.7.4
> 
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 2/2] drm/amdgpu:cancel timer of virtual DCE

2017-11-15 Thread Deucher, Alexander
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Monk Liu
> Sent: Wednesday, November 15, 2017 10:14 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Liu, Monk
> Subject: [PATCH 2/2] drm/amdgpu:cancel timer of virtual DCE
> 
> The virtual DCE timer structure is already released by sw_fini(), so we
> need to cancel the timer in hw_fini(); otherwise the timer canceling is
> missed.
> 
> Change-Id: I03d6ca7aa07591d287da379ef4fe008f06edaff6
> Signed-off-by: Monk Liu 
> ---
>  drivers/gpu/drm/amd/amdgpu/dce_virtual.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
> b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
> index 39460eb..7438491 100644
> --- a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
> +++ b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
> @@ -489,8 +489,21 @@ static int dce_virtual_hw_init(void *handle)
>   return 0;
>  }
> 
> +static void dce_virtual_set_crtc_vblank_interrupt_state(struct amdgpu_device *adev,
> +							int crtc,
> +							enum amdgpu_interrupt_state state);
> +

Please put the forward declaration at the top of the file.  

>  static int dce_virtual_hw_fini(void *handle)
>  {
> + struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> + int i = 0;
> +
> + while (i < AMDGPU_MAX_CRTCS) {
> + if (adev->mode_info.crtcs[i])
> +		dce_virtual_set_crtc_vblank_interrupt_state(adev, i, AMDGPU_IRQ_STATE_DISABLE);
> + i++;
> + }

I think a for loop is clearer here. Also, why not use adev->mode_info.num_crtc 
so we don’t loop longer than we have to?

Alex

> +
>   return 0;
>  }
> 
> --
> 2.7.4
> 
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 1/2] drm/ttm: fix ttm_mem_evict_first once more

2017-11-15 Thread He, Roger
Reviewed-by: Roger He 

Thanks
Roger(Hongbo.He)
-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf Of 
Christian König
Sent: Wednesday, November 15, 2017 8:32 PM
To: amd-gfx@lists.freedesktop.org; dri-de...@lists.freedesktop.org
Subject: [PATCH 1/2] drm/ttm: fix ttm_mem_evict_first once more

The code path isn't hit at the moment, but we need to take the lock to add the 
BO back to the LRU.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/ttm/ttm_bo.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 07d9c6e5b6ca..7c1eac4f4b4b 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -793,10 +793,13 @@ static int ttm_mem_evict_first(struct ttm_bo_device *bdev,
 	spin_unlock(&glob->lru_lock);
 
 	ret = ttm_bo_evict(bo, interruptible, no_wait_gpu);
-	if (locked)
+	if (locked) {
 		ttm_bo_unreserve(bo);
-	else
+	} else {
+		spin_lock(&glob->lru_lock);
 		ttm_bo_add_to_lru(bo);
+		spin_unlock(&glob->lru_lock);
+	}
 
 	kref_put(&bo->list_kref, ttm_bo_release_list);
 	return ret;
--
2.11.0

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 2/2] drm/amdgpu:cancel timer of virtual DCE

2017-11-15 Thread Monk Liu
The virtual DCE timer structure is already released
by sw_fini(), so we need to cancel the timer in
hw_fini(); otherwise the timer canceling is missed.

Change-Id: I03d6ca7aa07591d287da379ef4fe008f06edaff6
Signed-off-by: Monk Liu 
---
 drivers/gpu/drm/amd/amdgpu/dce_virtual.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
index 39460eb..7438491 100644
--- a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
+++ b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
@@ -489,8 +489,21 @@ static int dce_virtual_hw_init(void *handle)
return 0;
 }
 
+static void dce_virtual_set_crtc_vblank_interrupt_state(struct amdgpu_device *adev,
+							int crtc,
+							enum amdgpu_interrupt_state state);
+
 static int dce_virtual_hw_fini(void *handle)
 {
+   struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+   int i = 0;
+
+   while (i < AMDGPU_MAX_CRTCS) {
+   if (adev->mode_info.crtcs[i])
+			dce_virtual_set_crtc_vblank_interrupt_state(adev, i, AMDGPU_IRQ_STATE_DISABLE);
+   i++;
+   }
+
return 0;
 }
 
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 1/2] drm/amdgpu:fix virtual dce bug

2017-11-15 Thread Monk Liu
this fix the issue that access memory after freed
after driver unloaded.

Change-Id: I64e2488c18f5dc044b57c74567785da21fc028da
Signed-off-by: Monk Liu 
---
 drivers/gpu/drm/amd/amdgpu/dce_virtual.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
index a8829af..39460eb 100644
--- a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
+++ b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
@@ -437,6 +437,8 @@ static int dce_virtual_sw_fini(void *handle)
drm_kms_helper_poll_fini(adev->ddev);
 
drm_mode_config_cleanup(adev->ddev);
+	/* clear crtcs pointer to avoid dce irq finish routine access freed data */
+	memset(adev->mode_info.crtcs, 0, sizeof(adev->mode_info.crtcs[0]) * AMDGPU_MAX_CRTCS);
adev->mode_info.mode_config_initialized = false;
return 0;
 }
@@ -723,7 +725,7 @@ static void dce_virtual_set_crtc_vblank_interrupt_state(struct amdgpu_device *ad
					int crtc,
					enum amdgpu_interrupt_state state)
 {
-   if (crtc >= adev->mode_info.num_crtc) {
+   if (crtc >= adev->mode_info.num_crtc || !adev->mode_info.crtcs[crtc]) {
DRM_DEBUG("invalid crtc %d\n", crtc);
return;
}
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH] drm/amdgpu: require a root bus window above 4GB for BAR resize

2017-11-15 Thread Deucher, Alexander


> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Christian König
> Sent: Wednesday, November 15, 2017 3:13 PM
> To: amd-gfx@lists.freedesktop.org
> Subject: [PATCH] drm/amdgpu: require a root bus window above 4GB for
> BAR resize
> 
> Don't even try to resize the BAR when there is no window above 4GB.
> 
> Signed-off-by: Christian König 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 4944f8e1577b..b5a8afdcfb5d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -765,6 +765,9 @@ int amdgpu_device_resize_fb_bar(struct
> amdgpu_device *adev)
>  {
>   u64 space_needed = roundup_pow_of_two(adev->mc.real_vram_size);
>   u32 rbar_size = order_base_2(((space_needed >> 20) | 1)) - 1;
> + struct pci_bus *root;
> + struct resource *res;
> + unsigned i;
>   u16 cmd;
>   int r;
> 
> @@ -772,6 +775,21 @@ int amdgpu_device_resize_fb_bar(struct
> amdgpu_device *adev)
>   if (amdgpu_sriov_vf(adev))
>   return 0;
> 
> + /* Check if the root BUS has 64bit memory resources */
> + root = adev->pdev->bus;
> + while (root->parent)
> + root = root->parent;
> +
> + pci_bus_for_each_resource(root, res, i) {
> + if (res && res->flags & IORESOURCE_MEM_64 &&
> + res->start > 0x100000000ull)
> + break;
> + }
> +
> + /* Trying to resize is pointless without a root hub window above 4GB */
> + if (!res)
> + return 0;
> +
>   /* Disable memory decoding while we change the BAR addresses
> and size */
>   pci_read_config_word(adev->pdev, PCI_COMMAND, &cmd);
>   pci_write_config_word(adev->pdev, PCI_COMMAND,
> --
> 2.11.0
> 
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amdgpu: require a root bus window above 4GB for BAR resize

2017-11-15 Thread Christian König
Don't even try to resize the BAR when there is no window above 4GB.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 4944f8e1577b..b5a8afdcfb5d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -765,6 +765,9 @@ int amdgpu_device_resize_fb_bar(struct amdgpu_device *adev)
 {
u64 space_needed = roundup_pow_of_two(adev->mc.real_vram_size);
u32 rbar_size = order_base_2(((space_needed >> 20) | 1)) - 1;
+   struct pci_bus *root;
+   struct resource *res;
+   unsigned i;
u16 cmd;
int r;
 
@@ -772,6 +775,21 @@ int amdgpu_device_resize_fb_bar(struct amdgpu_device *adev)
if (amdgpu_sriov_vf(adev))
return 0;
 
+   /* Check if the root BUS has 64bit memory resources */
+   root = adev->pdev->bus;
+   while (root->parent)
+   root = root->parent;
+
+   pci_bus_for_each_resource(root, res, i) {
+   if (res && res->flags & IORESOURCE_MEM_64 &&
+	    res->start > 0x100000000ull)
+   break;
+   }
+
+   /* Trying to resize is pointless without a root hub window above 4GB */
+   if (!res)
+   return 0;
+
/* Disable memory decoding while we change the BAR addresses and size */
	pci_read_config_word(adev->pdev, PCI_COMMAND, &cmd);
pci_write_config_word(adev->pdev, PCI_COMMAND,
-- 
2.11.0

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[pull] amdgpu dc drm-next-4.15-dc

2017-11-15 Thread Alex Deucher
Hi Dave,

Various fixes for DC for 4.15.  It doesn't look like you pulled the
smatch fixes for DC that I sent out last week.  Those are also on this branch.

The following changes since commit f368d3bfde225199eef2216b03e0ba4944a3434a:

  amd/display: Fix potential null dereference in dce_calcs.c (2017-11-08 
17:30:11 -0500)

are available in the git repository at:

  git://people.freedesktop.org/~agd5f/linux drm-next-4.15-dc

for you to fetch changes up to 00f713c6dc657397ba37b42d7f6887f526c730c6:

  drm/amd/display: fix MST link training fail division by 0 (2017-11-14 
11:32:46 -0500)


Bhawanpreet Lakha (1):
  drm/amd/display: add flip_immediate to commit update for stream

Charlene Liu (1):
  drm/amd/display: fix AZ clock not enabled before program AZ endpoint

Eric Yang (1):
  drm/amd/display: fix MST link training fail division by 0

Harry Wentland (1):
  drm/amd/display: Fix formatting for null pointer dereference fix

Jerry (Fangzhi) Zuo (1):
  drm/amd/display: Miss register MST encoder cbs

Ken Chalmers (1):
  drm/amd/display: use num_timing_generator instead of pipe_count

Leo (Sunpeng) Li (2):
  drm/amd/display: Fix warnings on S3 resume
  drm/amd/display: Remove dangling planes on dc commit state

Michel Dänzer (1):
  amdgpu/dm: Don't use DRM_ERROR in amdgpu_dm_atomic_check

Roman Li (1):
  drm/amd/display: use configurable FBC option in dm

 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  | 44 +-
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h  |  4 +-
 .../amd/display/amdgpu_dm/amdgpu_dm_mst_types.c| 12 +-
 drivers/gpu/drm/amd/display/dc/core/dc.c   | 42 +++--
 drivers/gpu/drm/amd/display/dc/core/dc_link.c  |  6 ++-
 drivers/gpu/drm/amd/display/dc/core/dc_stream.c|  2 +-
 drivers/gpu/drm/amd/display/dc/dce/dce_audio.c | 31 ++-
 .../drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c  |  2 +-
 8 files changed, 122 insertions(+), 21 deletions(-)
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[pull] amdgpu drm-next-4.15

2017-11-15 Thread Alex Deucher
Hi Dave,

Misc fixes for 4.15.

The following changes since commit a9386bb051931778436db3dd6e3a163f7db92b56:

  Merge tag 'drm-misc-next-fixes-2017-11-08' of 
git://anongit.freedesktop.org/drm/drm-misc into drm-next (2017-11-09 11:59:30 
+1000)

are available in the git repository at:

  git://people.freedesktop.org/~agd5f/linux drm-next-4.15

for you to fetch changes up to 451cc55dd17fa5130f05629ac8d90e32facf27f6:

  drm/amd/pp: fix dpm randomly failed on Vega10 (2017-11-15 14:03:45 -0500)


Christian König (2):
  drm/amdgpu: make AMDGPU_VA_RESERVED_SIZE 64bit
  drm/amdgpu: set f_mapping on exported DMA-bufs

Colin Ian King (1):
  drm/amd/powerplay: fix copy-n-paste error on vddci_buf index

Emily Deng (1):
  drm/amdgpu: Fix null pointer issue in amdgpu_cs_wait_any_fence

Ken Wang (2):
  drm/amdgpu: Remove check which is not valid for certain VBIOS
  drm/amdgpu: Add common golden settings for GFX9

Nicolai Hähnle (1):
  drm/amdgpu/gfx9: implement wave VGPR reading

Rex Zhu (1):
  drm/amd/pp: fix dpm randomly failed on Vega10

Roger He (1):
  drm/amd/amdgpu: if visible VRAM allocation fail, fall back to invisible 
try again

Tom St Denis (1):
  drm/amd/amdgpu: Fix wave mask in amdgpu_debugfs_wave_read() (v2)

ozeng (1):
  drm/amdgpu: Properly allocate VM invalidate eng v2

 drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c   |  6 
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c |  7 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 40 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 10 --
 drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c  |  6 +++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  3 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c  | 19 ++
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c  | 15 ++--
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.c   |  2 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 29 
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h |  1 +
 11 files changed, 87 insertions(+), 51 deletions(-)
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 2/2] dma-buf: try to replace a signaled fence in reservation_object_add_shared_inplace

2017-11-15 Thread Chris Wilson
Quoting Christian König (2017-11-15 18:56:43)
> Am 15.11.2017 um 18:43 schrieb Chris Wilson:
> > Quoting Christian König (2017-11-15 17:34:07)
> >> Am 15.11.2017 um 17:55 schrieb Chris Wilson:
> >>> Quoting Chris Wilson (2017-11-14 14:34:05)
>  Quoting Christian König (2017-11-14 14:24:44)
> > Am 06.11.2017 um 17:22 schrieb Chris Wilson:
> >> Quoting Christian König (2017-10-30 14:59:04)
> >>> @@ -126,17 +127,28 @@ reservation_object_add_shared_inplace(struct 
> >>> reservation_object *obj,
> >>>dma_fence_put(old_fence);
> >>>return;
> >>>}
> >>> +
> >>> +   if (!signaled && dma_fence_is_signaled(old_fence)) {
> >>> +   signaled = old_fence;
> >>> +   signaled_idx = i;
> >>> +   }
> >> How much do we care about only keeping one fence per-ctx here? You 
> >> could
> >> rearrange this to break on old_fence->context == fence->context ||
> >> dma_fence_is_signaled(old_fence) then everyone can use the final block.
> > Yeah, that is what David Zhou suggested as well.
> >
> > I've rejected this approach for now cause I think we still have cases
> > where we rely on one fence per ctx (but I'm not 100% sure).
> >
> > I changed patch #1 in this series as you suggest and going to send that
> > out once more in a minute.
> >
> > Can we get this upstream as is for now? I won't have much more time
> > working on this.
>  Sure, we are only discussing how we might make it look tidier, pure
>  micro-optimisation with the caveat of losing the one-fence-per-ctx
>  guarantee.
> >>> Ah, one thing to note is that extra checking pushed one of our corner
> >>> case tests over its time limit.
> >>>
> >>> If we can completely forgo the one-fence-per-ctx here, what works really
> >>> well in testing is
> >>>
> >>> diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
> >>> index 5319ac478918..5755e95fab1b 100644
> >>> --- a/drivers/dma-buf/reservation.c
> >>> +++ b/drivers/dma-buf/reservation.c
> >>> @@ -104,39 +104,19 @@ reservation_object_add_shared_inplace(struct 
> >>> reservation_object *obj,
> >>> struct reservation_object_list 
> >>> *fobj,
> >>> struct dma_fence *fence)
> >>>{
> >>> -   struct dma_fence *replace = NULL;
> >>> -   u32 ctx = fence->context;
> >>> -   u32 i;
> >>> -
> >>>   dma_fence_get(fence);
> >>>
> >>>   preempt_disable();
> >>>   write_seqcount_begin(&obj->seq);
> >>>
> >>> -   for (i = 0; i < fobj->shared_count; ++i) {
> >>> -   struct dma_fence *check;
> >>> -
> >>> -   check = rcu_dereference_protected(fobj->shared[i],
> >>> - 
> >>> reservation_object_held(obj));
> >>> -
> >>> -   if (check->context == ctx || 
> >>> dma_fence_is_signaled(check)) {
> >>> -   replace = old_fence;
> >>> -   break;
> >>> -   }
> >>> -   }
> >>> -
> >>>   /*
> >>>* memory barrier is added by write_seqcount_begin,
> >>>* fobj->shared_count is protected by this lock too
> >>>*/
> >>> -   RCU_INIT_POINTER(fobj->shared[i], fence);
> >>> -   if (!replace)
> >>> -   fobj->shared_count++;
> >>> +   RCU_INIT_POINTER(fobj->shared[fobj->shared_count++], fence);
> >>>
> >>>   write_seqcount_end(&obj->seq);
> >>>   preempt_enable();
> >>> -
> >>> -   dma_fence_put(replace);
> >>>}
> >>>
> >>>static void
> >>>
> >>>i.e. don't check when not replacing the shared[], on creating the new
> >>>buffer we then discard all the old fences.
> >>>
> >>> It should work for amdgpu as well since you do a ht to coalesce
> >>> redundant fences before queuing.
> >> That won't work for all cases. This way the reservation object would
> >> keep growing without a chance to ever shrink.
> > We only keep the active fences when it grows, which is effective enough
> > to keep it in check on the workloads I can find in the hour since
> > noticing the failure in CI ;)
> 
> Not sure what tests you run, but this change means that we basically 
> just add the fence to an ever growing list of fences on every command 
> submission which is only reaped when we double the list size.

Sure, just the frequency of doubling is also halved everytime. Then the
entire array of shared is reaped when idle. Just throwing the issue out
there; something that is already slow, just got noticeably slower. And
that if we just ignore it, and only reap on reallocate, it vanishes.
(It's just one case that happened to be caught by CI because it
triggered a timeout, what CI isn't telling us is the dramatic improvement
in other cases from 

Re: [PATCH 2/2] dma-buf: try to replace a signaled fence in reservation_object_add_shared_inplace

2017-11-15 Thread Christian König

Am 15.11.2017 um 18:43 schrieb Chris Wilson:

Quoting Christian König (2017-11-15 17:34:07)

Am 15.11.2017 um 17:55 schrieb Chris Wilson:

Quoting Chris Wilson (2017-11-14 14:34:05)

Quoting Christian König (2017-11-14 14:24:44)

Am 06.11.2017 um 17:22 schrieb Chris Wilson:

Quoting Christian König (2017-10-30 14:59:04)

@@ -126,17 +127,28 @@ reservation_object_add_shared_inplace(struct 
reservation_object *obj,
   dma_fence_put(old_fence);
   return;
   }
+
+   if (!signaled && dma_fence_is_signaled(old_fence)) {
+   signaled = old_fence;
+   signaled_idx = i;
+   }

How much do we care about only keeping one fence per-ctx here? You could
rearrange this to break on old_fence->context == fence->context ||
dma_fence_is_signaled(old_fence) then everyone can use the final block.

Yeah, that is what David Zhou suggested as well.

I've rejected this approach for now cause I think we still have cases
where we rely on one fence per ctx (but I'm not 100% sure).

I changed patch #1 in this series as you suggest and going to send that
out once more in a minute.

Can we get this upstream as is for now? I won't have much more time
working on this.

Sure, we are only discussing how we might make it look tidier, pure
micro-optimisation with the caveat of losing the one-fence-per-ctx
guarantee.

Ah, one thing to note is that extra checking pushed one of our corner
case tests over its time limit.

If we can completely forgo the one-fence-per-ctx here, what works really
well in testing is

diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
index 5319ac478918..5755e95fab1b 100644
--- a/drivers/dma-buf/reservation.c
+++ b/drivers/dma-buf/reservation.c
@@ -104,39 +104,19 @@ reservation_object_add_shared_inplace(struct 
reservation_object *obj,
struct reservation_object_list *fobj,
struct dma_fence *fence)
   {
-   struct dma_fence *replace = NULL;
-   u32 ctx = fence->context;
-   u32 i;
-
  dma_fence_get(fence);
   
  preempt_disable();

  write_seqcount_begin(&obj->seq);
   
-   for (i = 0; i < fobj->shared_count; ++i) {

-   struct dma_fence *check;
-
-   check = rcu_dereference_protected(fobj->shared[i],
- reservation_object_held(obj));
-
-   if (check->context == ctx || dma_fence_is_signaled(check)) {
-   replace = old_fence;
-   break;
-   }
-   }
-
  /*
   * memory barrier is added by write_seqcount_begin,
   * fobj->shared_count is protected by this lock too
   */
-   RCU_INIT_POINTER(fobj->shared[i], fence);
-   if (!replace)
-   fobj->shared_count++;
+   RCU_INIT_POINTER(fobj->shared[fobj->shared_count++], fence);
   
  write_seqcount_end(&obj->seq);

  preempt_enable();
-
-   dma_fence_put(replace);
   }
   
   static void


   i.e. don't check when not replacing the shared[], on creating the new
   buffer we then discard all the old fences.

It should work for amdgpu as well since you do a ht to coalesce
redundant fences before queuing.

That won't work for all cases. This way the reservation object would
keep growing without a chance to ever shrink.

We only keep the active fences when it grows, which is effective enough
to keep it in check on the workloads I can find in the hour since
noticing the failure in CI ;)


Not sure what tests you run, but this change means that we basically 
just add the fence to an ever growing list of fences on every command 
submission which is only reaped when we double the list size.


That is a serious no-go as far as I can see. What we could do is improve 
reservation_object_reserve_shared() as well to figure out how many 
signaled fences are actually in the array when we reallocate it.



And on the workloads where it is being
flooded with live fences from many contexts, the order of magnitude
throughput improvement is not easy to ignore.


Well not sure about the Intel driver, but since we started to mostly use 
a single reservation object per process the time spend in those 
functions on amdgpu are completely negligible.


Regards,
Christian.


-Chris



___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH] drm/amdgpu:fix virtual dce bug

2017-11-15 Thread Deucher, Alexander
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Monk Liu
> Sent: Wednesday, November 15, 2017 4:11 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Liu, Monk
> Subject: [PATCH] drm/amdgpu:fix virtual dce bug
> 
> this fix the issue that access memory after freed
> after driver unloaded.
> 
> Change-Id: I64e2488c18f5dc044b57c74567785da21fc028da
> Signed-off-by: Monk Liu 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/dce_virtual.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
> b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
> index a8829af..39460eb 100644
> --- a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
> +++ b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
> @@ -437,6 +437,8 @@ static int dce_virtual_sw_fini(void *handle)
>   drm_kms_helper_poll_fini(adev->ddev);
> 
>   drm_mode_config_cleanup(adev->ddev);
> +	/* clear crtcs pointer to avoid dce irq finish routine access freed data */
> +	memset(adev->mode_info.crtcs, 0, sizeof(adev->mode_info.crtcs[0]) * AMDGPU_MAX_CRTCS);
>   adev->mode_info.mode_config_initialized = false;
>   return 0;
>  }
> @@ -723,7 +725,7 @@ static void
> dce_virtual_set_crtc_vblank_interrupt_state(struct amdgpu_device *ad
>   int crtc,
>   enum
> amdgpu_interrupt_state state)
>  {
> - if (crtc >= adev->mode_info.num_crtc) {
> +	if (crtc >= adev->mode_info.num_crtc || !adev->mode_info.crtcs[crtc]) {
>   DRM_DEBUG("invalid crtc %d\n", crtc);
>   return;
>   }
> --
> 2.7.4
> 
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 2/2] dma-buf: try to replace a signaled fence in reservation_object_add_shared_inplace

2017-11-15 Thread Chris Wilson
Quoting Christian König (2017-11-15 17:34:07)
> Am 15.11.2017 um 17:55 schrieb Chris Wilson:
> > Quoting Chris Wilson (2017-11-14 14:34:05)
> >> Quoting Christian König (2017-11-14 14:24:44)
> >>> Am 06.11.2017 um 17:22 schrieb Chris Wilson:
>  Quoting Christian König (2017-10-30 14:59:04)
> > @@ -126,17 +127,28 @@ reservation_object_add_shared_inplace(struct 
> > reservation_object *obj,
> >   dma_fence_put(old_fence);
> >   return;
> >   }
> > +
> > +   if (!signaled && dma_fence_is_signaled(old_fence)) {
> > +   signaled = old_fence;
> > +   signaled_idx = i;
> > +   }
>  How much do we care about only keeping one fence per-ctx here? You could
>  rearrange this to break on old_fence->context == fence->context ||
>  dma_fence_is_signaled(old_fence) then everyone can use the final block.
> >>> Yeah, that is what David Zhou suggested as well.
> >>>
> >>> I've rejected this approach for now cause I think we still have cases
> >>> where we rely on one fence per ctx (but I'm not 100% sure).
> >>>
> >>> I changed patch #1 in this series as you suggest and going to send that
> >>> out once more in a minute.
> >>>
> >>> Can we get this upstream as is for now? I won't have much more time
> >>> working on this.
> >> Sure, we are only discussing how we might make it look tidier, pure
> >> micro-optimisation with the caveat of losing the one-fence-per-ctx
> >> guarantee.
> > Ah, one thing to note is that extra checking pushed one of our corner
> > case tests over its time limit.
> >
> > If we can completely forgo the one-fence-per-ctx here, what works really
> > well in testing is
> >
> > diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
> > index 5319ac478918..5755e95fab1b 100644
> > --- a/drivers/dma-buf/reservation.c
> > +++ b/drivers/dma-buf/reservation.c
> > @@ -104,39 +104,19 @@ reservation_object_add_shared_inplace(struct 
> > reservation_object *obj,
> >struct reservation_object_list *fobj,
> >struct dma_fence *fence)
> >   {
> > -   struct dma_fence *replace = NULL;
> > -   u32 ctx = fence->context;
> > -   u32 i;
> > -
> >  dma_fence_get(fence);
> >   
> >  preempt_disable();
> >  write_seqcount_begin(&obj->seq);
> >   
> > -   for (i = 0; i < fobj->shared_count; ++i) {
> > -   struct dma_fence *check;
> > -
> > -   check = rcu_dereference_protected(fobj->shared[i],
> > - 
> > reservation_object_held(obj));
> > -
> > -   if (check->context == ctx || dma_fence_is_signaled(check)) {
> > -   replace = old_fence;
> > -   break;
> > -   }
> > -   }
> > -
> >  /*
> >   * memory barrier is added by write_seqcount_begin,
> >   * fobj->shared_count is protected by this lock too
> >   */
> > -   RCU_INIT_POINTER(fobj->shared[i], fence);
> > -   if (!replace)
> > -   fobj->shared_count++;
> > +   RCU_INIT_POINTER(fobj->shared[fobj->shared_count++], fence);
> >   
> >  write_seqcount_end(&obj->seq);
> >  preempt_enable();
> > -
> > -   dma_fence_put(replace);
> >   }
> >   
> >   static void
> >
> >   i.e. don't check when not replacing the shared[], on creating the new
> >   buffer we then discard all the old fences.
> >
> > It should work for amdgpu as well since you do a ht to coalesce
> > redundant fences before queuing.
> 
> That won't work for all cases. This way the reservation object would 
> keep growing without a chance to ever shrink.

We only keep the active fences when it grows, which is effective enough
to keep it in check on the workloads I can find in the hour since
noticing the failure in CI ;) And on the workloads where it is being
flooded with live fences from many contexts, the order of magnitude
throughput improvement is not easy to ignore.
-Chris
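As a toy illustration of the trade-off under discussion (keeping one fence per context versus recycling signaled slots so the array stays bounded), here is a minimal userspace sketch; struct fence, struct res and res_add_shared() are invented stand-ins for this example, not the kernel dma-buf API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of a reservation object's shared-fence array. */
struct fence { unsigned context; bool signaled; };

struct res {
	struct fence *shared[16];
	unsigned shared_count;
};

/* Replace a fence from the same context, or recycle the first signaled
 * slot; only grow the array when neither is found (the v2 behaviour). */
static void res_add_shared(struct res *obj, struct fence *fence)
{
	struct fence *signaled = NULL;
	unsigned signaled_idx = 0, i;

	for (i = 0; i < obj->shared_count; ++i) {
		struct fence *old = obj->shared[i];

		if (old->context == fence->context) {
			obj->shared[i] = fence;	/* one fence per context */
			return;
		}
		if (!signaled && old->signaled) {
			signaled = old;
			signaled_idx = i;
		}
	}

	if (signaled) {
		obj->shared[signaled_idx] = fence; /* recycle signaled slot */
		return;
	}

	obj->shared[obj->shared_count++] = fence;
}
```

Dropping the loop entirely, as in the diff above, trades this bounded growth for a faster add path.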
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 2/2] dma-buf: try to replace a signaled fence in reservation_object_add_shared_inplace

2017-11-15 Thread Christian König

Am 15.11.2017 um 17:55 schrieb Chris Wilson:

Quoting Chris Wilson (2017-11-14 14:34:05)

Quoting Christian König (2017-11-14 14:24:44)

Am 06.11.2017 um 17:22 schrieb Chris Wilson:

Quoting Christian König (2017-10-30 14:59:04)

@@ -126,17 +127,28 @@ reservation_object_add_shared_inplace(struct 
reservation_object *obj,
  dma_fence_put(old_fence);
  return;
  }
+
+   if (!signaled && dma_fence_is_signaled(old_fence)) {
+   signaled = old_fence;
+   signaled_idx = i;
+   }

How much do we care about only keeping one fence per-ctx here? You could
rearrange this to break on old_fence->context == fence->context ||
dma_fence_is_signaled(old_fence) then everyone can use the final block.

Yeah, that is what David Zhou suggested as well.

I've rejected this approach for now cause I think we still have cases
where we rely on one fence per ctx (but I'm not 100% sure).

I changed patch #1 in this series as you suggest and going to send that
out once more in a minute.

Can we get this upstream as is for now? I won't have much more time
working on this.

Sure, we are only discussing how we might make it look tidier, pure
micro-optimisation with the caveat of losing the one-fence-per-ctx
guarantee.

Ah, one thing to note is that extra checking pushed one of our corner
case tests over its time limit.

If we can completely forgo the one-fence-per-ctx here, what works really
well in testing is

diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
index 5319ac478918..5755e95fab1b 100644
--- a/drivers/dma-buf/reservation.c
+++ b/drivers/dma-buf/reservation.c
@@ -104,39 +104,19 @@ reservation_object_add_shared_inplace(struct 
reservation_object *obj,
   struct reservation_object_list *fobj,
   struct dma_fence *fence)
  {
-   struct dma_fence *replace = NULL;
-   u32 ctx = fence->context;
-   u32 i;
-
 dma_fence_get(fence);
  
 preempt_disable();

 write_seqcount_begin(&obj->seq);
  
-   for (i = 0; i < fobj->shared_count; ++i) {

-   struct dma_fence *check;
-
-   check = rcu_dereference_protected(fobj->shared[i],
- reservation_object_held(obj));
-
-   if (check->context == ctx || dma_fence_is_signaled(check)) {
-   replace = old_fence;
-   break;
-   }
-   }
-
 /*
  * memory barrier is added by write_seqcount_begin,
  * fobj->shared_count is protected by this lock too
  */
-   RCU_INIT_POINTER(fobj->shared[i], fence);
-   if (!replace)
-   fobj->shared_count++;
+   RCU_INIT_POINTER(fobj->shared[fobj->shared_count++], fence);
  
 write_seqcount_end(&obj->seq);

 preempt_enable();
-
-   dma_fence_put(replace);
  }
  
  static void


  i.e. don't check when not replacing the shared[], on creating the new
  buffer we then discard all the old fences.

It should work for amdgpu as well since you do a ht to coalesce
redundant fences before queuing.


That won't work for all cases. This way the reservation object would 
keep growing without a chance to ever shrink.


Christian.


-Chris





[PATCH xf86-video-amdgpu] Add amdgpu_dirty_src_drawable helper

2017-11-15 Thread Michel Dänzer
From: Michel Dänzer 

Allows tidying up redisplay_dirty slightly.

Signed-off-by: Michel Dänzer 
---
 src/amdgpu_drv.h | 20 ++--
 src/amdgpu_kms.c |  7 ++-
 2 files changed, 12 insertions(+), 15 deletions(-)

diff --git a/src/amdgpu_drv.h b/src/amdgpu_drv.h
index 4ee13e12b..7e1a40af6 100644
--- a/src/amdgpu_drv.h
+++ b/src/amdgpu_drv.h
@@ -175,26 +175,26 @@ amdgpu_master_screen(ScreenPtr screen)
return screen;
 }
 
-static inline ScreenPtr
-amdgpu_dirty_master(PixmapDirtyUpdatePtr dirty)
+static inline DrawablePtr
+amdgpu_dirty_src_drawable(PixmapDirtyUpdatePtr dirty)
 {
 #ifdef HAS_DIRTYTRACKING_DRAWABLE_SRC
-   ScreenPtr screen = dirty->src->pScreen;
+   return dirty->src;
 #else
-   ScreenPtr screen = dirty->src->drawable.pScreen;
+   return &dirty->src->drawable;
 #endif
+}
 
-   return amdgpu_master_screen(screen);
+static inline ScreenPtr
+amdgpu_dirty_master(PixmapDirtyUpdatePtr dirty)
+{
+   return amdgpu_master_screen(amdgpu_dirty_src_drawable(dirty)->pScreen);
 }
 
 static inline Bool
 amdgpu_dirty_src_equals(PixmapDirtyUpdatePtr dirty, PixmapPtr pixmap)
 {
-#ifdef HAS_DIRTYTRACKING_DRAWABLE_SRC
-   return dirty->src == &pixmap->drawable;
-#else
-   return dirty->src == pixmap;
-#endif
+   return amdgpu_dirty_src_drawable(dirty) == &pixmap->drawable;
 }
 
 
diff --git a/src/amdgpu_kms.c b/src/amdgpu_kms.c
index a5f2040a8..c15711224 100644
--- a/src/amdgpu_kms.c
+++ b/src/amdgpu_kms.c
@@ -479,11 +479,8 @@ dirty_region(PixmapDirtyUpdatePtr dirty)
 static void
 redisplay_dirty(PixmapDirtyUpdatePtr dirty, RegionPtr region)
 {
-#ifdef HAS_DIRTYTRACKING_DRAWABLE_SRC
-   ScrnInfoPtr src_scrn = xf86ScreenToScrn(dirty->src->pScreen);
-#else
-   ScrnInfoPtr src_scrn = xf86ScreenToScrn(dirty->src->drawable.pScreen);
-#endif
+   ScrnInfoPtr src_scrn =
+   xf86ScreenToScrn(amdgpu_dirty_src_drawable(dirty)->pScreen);
 
if (RegionNil(region))
goto out;
-- 
2.15.0
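The pattern in the patch above, centralizing an ABI #ifdef in a single inline accessor so every caller uses one expression, can be sketched in isolation; the names dirty_update and src_drawable below are invented for this sketch, not the real xserver types:

```c
#include <assert.h>

/* Hide the "is src a DrawablePtr or a PixmapPtr?" ABI difference behind
 * one inline helper, as amdgpu_dirty_src_drawable() does above. */
struct drawable { int screen; };
struct pixmap   { struct drawable drawable; };

#ifdef HAS_DIRTYTRACKING_DRAWABLE_SRC
struct dirty_update { struct drawable *src; };
static inline struct drawable *src_drawable(struct dirty_update *d)
{
	return d->src;			/* src already is the drawable */
}
#else
struct dirty_update { struct pixmap *src; };
static inline struct drawable *src_drawable(struct dirty_update *d)
{
	return &d->src->drawable;	/* src is a pixmap; unwrap it */
}
#endif
```

With the helper in place, callers like redisplay_dirty need no #ifdef of their own.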



[PATCH xf86-video-ati] Use correct ScrnInfoPtr in redisplay_dirty

2017-11-15 Thread Michel Dänzer
From: Michel Dänzer 

We used the destination pixmap's screen for flushing drawing commands.
But when we are the master screen, the destination pixmap is from the
slave screen.

Fixes crash when the slave screen isn't using the same acceleration
architecture as us.

Bugzilla: https://bugs.freedesktop.org/103613
Fixes: 01b040b4a807 ("Adapt to PixmapDirtyUpdateRec::src being a
 DrawablePtr")
(Ported from amdgpu commit 3a4f7422913093ed9e26b73ecd7f9e773478cb1e)

Signed-off-by: Michel Dänzer 
---
 src/radeon_kms.c | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/src/radeon_kms.c b/src/radeon_kms.c
index 06c8a47fb..5fcd8f0b7 100644
--- a/src/radeon_kms.c
+++ b/src/radeon_kms.c
@@ -570,7 +570,11 @@ dirty_region(PixmapDirtyUpdatePtr dirty)
 static void
 redisplay_dirty(PixmapDirtyUpdatePtr dirty, RegionPtr region)
 {
-   ScrnInfoPtr pScrn = 
xf86ScreenToScrn(dirty->slave_dst->drawable.pScreen);
+#ifdef HAS_DIRTYTRACKING_DRAWABLE_SRC
+   ScrnInfoPtr src_scrn = xf86ScreenToScrn(dirty->src->pScreen);
+#else
+   ScrnInfoPtr src_scrn = xf86ScreenToScrn(dirty->src->drawable.pScreen);
+#endif
 
if (RegionNil(region))
goto out;
@@ -584,7 +588,7 @@ redisplay_dirty(PixmapDirtyUpdatePtr dirty, RegionPtr 
region)
PixmapSyncDirtyHelper(dirty, region);
 #endif
 
-   radeon_cs_flush_indirect(pScrn);
+   radeon_cs_flush_indirect(src_scrn);
if (dirty->slave_dst->master_pixmap)
 DamageRegionProcessPending(&dirty->slave_dst->drawable);
 
-- 
2.15.0



Re: [PATCH 2/2] dma-buf: try to replace a signaled fence in reservation_object_add_shared_inplace

2017-11-15 Thread Chris Wilson
Quoting Chris Wilson (2017-11-14 14:34:05)
> Quoting Christian König (2017-11-14 14:24:44)
> > Am 06.11.2017 um 17:22 schrieb Chris Wilson:
> > > Quoting Christian König (2017-10-30 14:59:04)
> > >> @@ -126,17 +127,28 @@ reservation_object_add_shared_inplace(struct 
> > >> reservation_object *obj,
> > >>  dma_fence_put(old_fence);
> > >>  return;
> > >>  }
> > >> +
> > >> +   if (!signaled && dma_fence_is_signaled(old_fence)) {
> > >> +   signaled = old_fence;
> > >> +   signaled_idx = i;
> > >> +   }
> > > How much do we care about only keeping one fence per-ctx here? You could
> > > rearrange this to break on old_fence->context == fence->context ||
> > > dma_fence_is_signaled(old_fence) then everyone can use the final block.
> > 
> > Yeah, that is what David Zhou suggested as well.
> > 
> > I've rejected this approach for now cause I think we still have cases 
> > where we rely on one fence per ctx (but I'm not 100% sure).
> > 
> > I changed patch #1 in this series as you suggest and going to send that 
> > out once more in a minute.
> > 
> > Can we get this upstream as is for now? I won't have much more time 
> > working on this.
> 
> Sure, we are only discussing how we might make it look tidier, pure
> micro-optimisation with the caveat of losing the one-fence-per-ctx
> guarantee.

Ah, one thing to note is that extra checking pushed one of our corner
case tests over its time limit.

If we can completely forgo the one-fence-per-ctx here, what works really
well in testing is

diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
index 5319ac478918..5755e95fab1b 100644
--- a/drivers/dma-buf/reservation.c
+++ b/drivers/dma-buf/reservation.c
@@ -104,39 +104,19 @@ reservation_object_add_shared_inplace(struct 
reservation_object *obj,
  struct reservation_object_list *fobj,
  struct dma_fence *fence)
 {
-   struct dma_fence *replace = NULL;
-   u32 ctx = fence->context;
-   u32 i;
-
dma_fence_get(fence);
 
preempt_disable();
 write_seqcount_begin(&obj->seq);
 
-   for (i = 0; i < fobj->shared_count; ++i) {
-   struct dma_fence *check;
-
-   check = rcu_dereference_protected(fobj->shared[i],
- reservation_object_held(obj));
-
-   if (check->context == ctx || dma_fence_is_signaled(check)) {
-   replace = old_fence;
-   break;
-   }
-   }
-
/*
 * memory barrier is added by write_seqcount_begin,
 * fobj->shared_count is protected by this lock too
 */
-   RCU_INIT_POINTER(fobj->shared[i], fence);
-   if (!replace)
-   fobj->shared_count++;
+   RCU_INIT_POINTER(fobj->shared[fobj->shared_count++], fence);
 
 write_seqcount_end(&obj->seq);
preempt_enable();
-
-   dma_fence_put(replace);
 }
 
 static void

 i.e. don't check when not replacing the shared[], on creating the new
 buffer we then discard all the old fences.

It should work for amdgpu as well since you do a ht to coalesce
redundant fences before queuing.
-Chris


[PATCH] drm/amd/display: remove unnecessary cast and use kcalloc instead of kzalloc

2017-11-15 Thread Colin King
From: Colin Ian King 

Use kcalloc instead of kzalloc and the cast on the return from kzalloc is
unnecessary and can be removed.

Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/display/dc/basics/logger.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/basics/logger.c 
b/drivers/gpu/drm/amd/display/dc/basics/logger.c
index e04e8ecd4874..2ff5b467603d 100644
--- a/drivers/gpu/drm/amd/display/dc/basics/logger.c
+++ b/drivers/gpu/drm/amd/display/dc/basics/logger.c
@@ -70,9 +70,8 @@ static bool construct(struct dc_context *ctx, struct 
dal_logger *logger,
 {
/* malloc buffer and init offsets */
logger->log_buffer_size = DAL_LOGGER_BUFFER_MAX_SIZE;
-   logger->log_buffer = (char *)kzalloc(logger->log_buffer_size * 
sizeof(char),
-GFP_KERNEL);
-
+   logger->log_buffer = kcalloc(logger->log_buffer_size, sizeof(char),
+GFP_KERNEL);
if (!logger->log_buffer)
return false;
 
-- 
2.14.1
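The point of preferring kcalloc over kzalloc(n * size) is that the two-argument form can reject a product that would overflow, whereas the open-coded multiplication silently wraps. A userspace sketch of that check (checked_calloc() is a stand-in for kcalloc, not kernel code):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Overflow-checked allocation: refuse n * size products that wrap,
 * the way kcalloc does, instead of allocating a too-small buffer. */
static void *checked_calloc(size_t n, size_t size)
{
	if (size && n > SIZE_MAX / size)
		return NULL;	/* product would overflow: refuse */
	return calloc(n, size);
}
```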



[PATCH umr] Add UMC60 block for vega10

2017-11-15 Thread Tom St Denis
Signed-off-by: Tom St Denis 
---
 scripts/soc15_parse.sh|  1 +
 src/lib/asic/vega10.c |  1 +
 src/lib/ip/CMakeLists.txt |  1 +
 src/lib/ip/umc60.c| 55 +++
 src/lib/ip/umc60_bits.i   | 10 +
 src/lib/ip/umc60_regs.i   | 12 +++
 src/umr.h |  1 +
 7 files changed, 81 insertions(+)
 create mode 100644 src/lib/ip/umc60.c
 create mode 100644 src/lib/ip/umc60_bits.i
 create mode 100644 src/lib/ip/umc60_regs.i

diff --git a/scripts/soc15_parse.sh b/scripts/soc15_parse.sh
index 92c616cab013..2241dc07c601 100644
--- a/scripts/soc15_parse.sh
+++ b/scripts/soc15_parse.sh
@@ -84,6 +84,7 @@ parse_bits ${pk}/vega10/NBIO/nbio_6_1 src/lib/ip/nbio61
 parse_bits ${pk}/vega10/HDP/hdp_4_0 src/lib/ip/hdp40
 parse_bits ${pk}/vega10/MMHUB/mmhub_1_0 src/lib/ip/mmhub10
 parse_bits ${pk}/vega10/MP/mp_9_0 src/lib/ip/mp90
+parse_bits ${pk}/vega10/UMC/umc_6_0 src/lib/ip/umc60
 
 parse_bits ${pk}/raven1/VCN/vcn_1_0 src/lib/ip/vcn10
 parse_bits ${pk}/raven1/DCN/dcn_1_0 src/lib/ip/dcn10
diff --git a/src/lib/asic/vega10.c b/src/lib/asic/vega10.c
index 933d37fcfd24..744de298e44c 100644
--- a/src/lib/asic/vega10.c
+++ b/src/lib/asic/vega10.c
@@ -45,6 +45,7 @@ struct umr_asic *umr_create_vega10(struct umr_options 
*options)
umr_create_thm90(vega10_offs, options),
umr_create_mmhub10(vega10_offs, options),
umr_create_mp90(vega10_offs, options),
+   umr_create_umc60(vega10_offs, options),
NULL);
 }
 
diff --git a/src/lib/ip/CMakeLists.txt b/src/lib/ip/CMakeLists.txt
index a311b141d58a..c9cc1172e715 100644
--- a/src/lib/ip/CMakeLists.txt
+++ b/src/lib/ip/CMakeLists.txt
@@ -47,6 +47,7 @@ add_library(ip OBJECT
   smu713.c
   smu80.c
   thm90.c
+  umc60.c
   uvd40.c
   uvd42.c
   uvd5.c
diff --git a/src/lib/ip/umc60.c b/src/lib/ip/umc60.c
new file mode 100644
index ..30b18a217e1b
--- /dev/null
+++ b/src/lib/ip/umc60.c
@@ -0,0 +1,55 @@
+/*
+ * Copyright 2017 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Tom St Denis 
+ *
+ */
+#include "umr.h"
+
+#include "umc60_bits.i"
+
+static const struct umr_reg_soc15 umc60_registers[] = {
+#include "umc60_regs.i"
+};
+
+struct umr_ip_block *umr_create_umc60(struct umr_ip_offsets_soc15 
*soc15_offsets, struct umr_options *options)
+{
+   struct umr_ip_block *ip;
+
+   ip = calloc(1, sizeof *ip);
+   if (!ip)
+   return NULL;
+
+   ip->ipname = "umc60";
+   ip->no_regs = sizeof(umc60_registers)/sizeof(umc60_registers[0]);
+   ip->regs = calloc(ip->no_regs, sizeof(ip->regs[0]));
+   if (!ip->regs) {
+   free(ip);
+   return NULL;
+   }
+
+   if (umr_transfer_soc15_to_reg(options, soc15_offsets, "UMC", 
umc60_registers, ip)) {
+   free(ip);
+   return NULL;
+   }
+
+   return ip;
+}
diff --git a/src/lib/ip/umc60_bits.i b/src/lib/ip/umc60_bits.i
new file mode 100644
index ..1a3896c90fae
--- /dev/null
+++ b/src/lib/ip/umc60_bits.i
@@ -0,0 +1,10 @@
+static struct umr_bitfield mmUMCCH0_0_EccCtrl[] = {
+{ "RdEccEn", 10, 10, &umr_bitfield_default },
+{ "WrEccEn", 0, 0, &umr_bitfield_default },
+};
+static struct umr_bitfield mmUMCCH0_0_UMC_CONFIG[] = {
+{ "DramReady", 31, 31, &umr_bitfield_default },
+};
+static struct umr_bitfield mmUMCCH0_0_UmcLocalCap[] = {
+{ "EccDis", 0, 0, &umr_bitfield_default },
+};
diff --git a/src/lib/ip/umc60_regs.i b/src/lib/ip/umc60_regs.i
new file mode 100644
index ..822f8f751fc3
--- /dev/null
+++ b/src/lib/ip/umc60_regs.i
@@ -0,0 +1,12 @@
+   { "mmUMCCH0_0_EccCtrl", REG_MMIO, 0x0053, 0, &mmUMCCH0_0_EccCtrl[0], 
sizeof(mmUMCCH0_0_EccCtrl)/sizeof(mmUMCCH0_0_EccCtrl[0]), 0, 0 },
+   { "mmUMCCH1_0_EccCtrl", REG_MMIO, 

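The (truncated) patch above follows umr's table-driven IP-block pattern: a static register table plus a constructor that copies it into a heap-allocated block. A trimmed-down sketch of that shape, with simplified stand-ins for umr's structs (the second register address is an invented placeholder):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-ins for umr_reg_soc15 / umr_ip_block. */
struct reg { const char *name; unsigned addr; };
struct ip_block { const char *ipname; size_t no_regs; struct reg *regs; };

static const struct reg umc_regs[] = {
	{ "mmUMCCH0_0_EccCtrl",    0x0053 },	/* from the patch above */
	{ "mmUMCCH0_0_UMC_CONFIG", 0x0040 },	/* placeholder address */
};

static struct ip_block *create_ip(const char *name,
				  const struct reg *table, size_t n)
{
	struct ip_block *ip = calloc(1, sizeof *ip);

	if (!ip)
		return NULL;
	ip->ipname = name;
	ip->no_regs = n;
	ip->regs = calloc(n, sizeof(ip->regs[0]));
	if (!ip->regs) {
		free(ip);	/* don't leak the block on partial failure */
		return NULL;
	}
	memcpy(ip->regs, table, n * sizeof(table[0]));
	return ip;
}
```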
Re: [PATCH 2/2] drm/ttm: completely rework ttm_bo_delayed_delete

2017-11-15 Thread Michel Dänzer
On 15/11/17 01:31 PM, Christian König wrote:
> There is no guarantee that the next entry on the ddelete list stays on
> the list when we drop the locks.
> 
> Completely rework this mess by moving processed entries on a temporary
> list.
> 
> Signed-off-by: Christian König 

[...]

>  static void ttm_bo_delayed_workqueue(struct work_struct *work)
>  {
>   struct ttm_bo_device *bdev =
>   container_of(work, struct ttm_bo_device, wq.work);
> + unsigned long delay = ((HZ / 100) < 1) ? 1 : HZ / 100;
>  
> - if (ttm_bo_delayed_delete(bdev, false)) {
> - schedule_delayed_work(&bdev->wq,
> -   ((HZ / 100) < 1) ? 1 : HZ / 100);
> - }
> + if (!ttm_bo_delayed_delete(bdev, false))
> + schedule_delayed_work(&bdev->wq, delay);
>  }

Would be better to only add the ! here and leave the rest of this
function unchanged in this patch. The cleanup can be done in a separate
patch.

Other than that, both patches are

Reviewed-and-Tested-by: Michel Dänzer 


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
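The "temporary list" rework being reviewed here boils down to one pattern: instead of trusting that the next entry survives a lock drop, detach the whole list up front and walk the private copy. A minimal sketch with invented types:

```c
#include <assert.h>
#include <stddef.h>

struct node { struct node *next; int done; };

/* Splice the shared delayed-delete list onto a local head (done under
 * the lock in real code), then process entries without ever touching
 * the possibly-mutating shared list again. Returns entries processed. */
static int process_all(struct node **shared_head)
{
	struct node *local = *shared_head;
	int n = 0;

	*shared_head = NULL;	/* shared list is now empty */

	for (struct node *it = local; it; it = it->next) {
		it->done = 1;	/* safe: nobody else sees 'local' */
		n++;
	}
	return n;
}
```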


[PATCH 1/2] drm/ttm: fix ttm_mem_evict_first once more

2017-11-15 Thread Christian König
The code path isn't hit at the moment, but we need to take the lock to
add the BO back to the LRU.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/ttm/ttm_bo.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 07d9c6e5b6ca..7c1eac4f4b4b 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -793,10 +793,13 @@ static int ttm_mem_evict_first(struct ttm_bo_device *bdev,
 spin_unlock(&glob->lru_lock);
 
ret = ttm_bo_evict(bo, interruptible, no_wait_gpu);
-   if (locked)
+   if (locked) {
ttm_bo_unreserve(bo);
-   else
+   } else {
+   spin_lock(&glob->lru_lock);
 ttm_bo_add_to_lru(bo);
+   spin_unlock(&glob->lru_lock);
+   }
 
kref_put(>list_kref, ttm_bo_release_list);
return ret;
-- 
2.11.0



Re: [PATCH libdrm] amdgpu: Disable deadlock test suite for Vega 10

2017-11-15 Thread Christian König

Am 14.11.2017 um 15:07 schrieb Andrey Grodzovsky:

The suite stalls the CP; until root-cause analysis (RCA) is done, the
suite is disabled so it doesn't disrupt regression testing.

Signed-off-by: Andrey Grodzovsky 


Reviewed-by: Christian König 

Since you now have commit rights please try to push by yourself.

Thanks,
Christian.


---
  tests/amdgpu/amdgpu_test.c|  2 +-
  tests/amdgpu/amdgpu_test.h|  5 +
  tests/amdgpu/deadlock_tests.c | 19 +++
  3 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/tests/amdgpu/amdgpu_test.c b/tests/amdgpu/amdgpu_test.c
index 91010dc..ee64152 100644
--- a/tests/amdgpu/amdgpu_test.c
+++ b/tests/amdgpu/amdgpu_test.c
@@ -162,7 +162,7 @@ static Suites_Active_Status suites_active_stat[] = {
},
{
.pName = DEADLOCK_TESTS_STR,
-   .pActive = always_active,
+   .pActive = suite_deadlock_tests_enable,
},
{
.pName = VM_TESTS_STR,
diff --git a/tests/amdgpu/amdgpu_test.h b/tests/amdgpu/amdgpu_test.h
index dd236ed..414fcb8 100644
--- a/tests/amdgpu/amdgpu_test.h
+++ b/tests/amdgpu/amdgpu_test.h
@@ -160,6 +160,11 @@ int suite_deadlock_tests_init();
  int suite_deadlock_tests_clean();
  
  /**

+ * Decide if the suite is enabled by default or not.
+ */
+CU_BOOL suite_deadlock_tests_enable(void);
+
+/**
   * Tests in uvd enc test suite
   */
  extern CU_TestInfo deadlock_tests[];
diff --git a/tests/amdgpu/deadlock_tests.c b/tests/amdgpu/deadlock_tests.c
index f5c4552..84f4deb 100644
--- a/tests/amdgpu/deadlock_tests.c
+++ b/tests/amdgpu/deadlock_tests.c
@@ -36,6 +36,7 @@
  
  #include "amdgpu_test.h"

  #include "amdgpu_drm.h"
+#include "amdgpu_internal.h"
  
  #include 
  
@@ -87,6 +88,24 @@ static void amdgpu_deadlock_helper(unsigned ip_type);

  static void amdgpu_deadlock_gfx(void);
  static void amdgpu_deadlock_compute(void);
  
+CU_BOOL suite_deadlock_tests_enable(void)

+{
+   if (amdgpu_device_initialize(drm_amdgpu[0], &major_version,
+&minor_version, &device_handle))
+   return CU_FALSE;
+
+   if (amdgpu_device_deinitialize(device_handle))
+   return CU_FALSE;
+
+
+   if (device_handle->info.family_id == AMDGPU_FAMILY_AI) {
+   printf("\n\nCurrently hangs the CP on this ASIC, deadlock suite 
disabled\n");
+   return CU_FALSE;
+   }
+
+   return CU_TRUE;
+}
+
  int suite_deadlock_tests_init(void)
  {
struct amdgpu_gpu_info gpu_info = {0};



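The gating logic above reduces to: probe the device once, release it, and report whether the suite should run for that ASIC family. A stripped-down sketch (FAMILY_AI stands in for AMDGPU_FAMILY_AI, and the probing itself is elided):

```c
#include <assert.h>
#include <stdbool.h>

#define FAMILY_AI 141	/* AMDGPU_FAMILY_AI, i.e. Vega 10 */

/* Vega 10 currently hangs the CP in this suite, so keep it off there;
 * everything else runs the deadlock tests as before. */
static bool suite_enabled(unsigned family_id)
{
	return family_id != FAMILY_AI;
}
```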


[PATCH] drm/amdgpu:fix virtual dce bug

2017-11-15 Thread Monk Liu
This fixes an issue where memory is accessed after being
freed once the driver is unloaded.

Change-Id: I64e2488c18f5dc044b57c74567785da21fc028da
Signed-off-by: Monk Liu 
---
 drivers/gpu/drm/amd/amdgpu/dce_virtual.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c 
b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
index a8829af..39460eb 100644
--- a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
+++ b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
@@ -437,6 +437,8 @@ static int dce_virtual_sw_fini(void *handle)
drm_kms_helper_poll_fini(adev->ddev);
 
drm_mode_config_cleanup(adev->ddev);
+   /* clear crtcs pointer to avoid dce irq finish routine access freed 
data */
+   memset(adev->mode_info.crtcs, 0, sizeof(adev->mode_info.crtcs[0]) * 
AMDGPU_MAX_CRTCS);
adev->mode_info.mode_config_initialized = false;
return 0;
 }
@@ -723,7 +725,7 @@ static void 
dce_virtual_set_crtc_vblank_interrupt_state(struct amdgpu_device *ad
int crtc,
enum 
amdgpu_interrupt_state state)
 {
-   if (crtc >= adev->mode_info.num_crtc) {
+   if (crtc >= adev->mode_info.num_crtc || !adev->mode_info.crtcs[crtc]) {
DRM_DEBUG("invalid crtc %d\n", crtc);
return;
}
-- 
2.7.4
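The guard added above follows a common teardown pattern: once sw_fini clears the crtcs[] pointers, any late interrupt-state call must bail out instead of dereferencing freed data. A minimal sketch with simplified stand-ins for the mode_info/crtc structs:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_CRTCS 6

struct crtc { int vblank_enabled; };

struct mode_info {
	struct crtc *crtcs[MAX_CRTCS];
	int num_crtc;
};

/* Returns 0 on success, -1 when the crtc index is out of range or the
 * crtc was already torn down (the case handled with DRM_DEBUG above). */
static int set_vblank_state(struct mode_info *mi, int crtc, int enable)
{
	if (crtc >= mi->num_crtc || !mi->crtcs[crtc])
		return -1;	/* invalid or freed crtc: do nothing */
	mi->crtcs[crtc]->vblank_enabled = enable;
	return 0;
}
```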



Re: [DC] [CRTC:44:crtc-0] vblank wait timed out

2017-11-15 Thread Michel Dänzer
On 14/11/17 07:54 PM, Lazare, Jordan wrote:
> 
> Would you mind attaching the full dmesg? We'll have a look.

Attached.

This is with Alex's drm-next-4.15-dc branch, so it doesn't seem to have
been introduced by this week's DC changes.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
[0.00] Linux version 4.14.0+ (daenzer@kaveri) (gcc version 7.2.0 
(Debian 7.2.0-14)) #292 SMP Mon Nov 13 11:31:30 CET 2017
[0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-4.14.0+ 
root=/dev/mapper/VG--Debian-LV--sid ro radeon.lockup_timeout=0 radeon.bapm=1 
amdgpu.bapm=1 quiet
[0.00] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point 
registers'
[0.00] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[0.00] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[0.00] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[0.00] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, 
using 'compacted' format.
[0.00] e820: BIOS-provided physical RAM map:
[0.00] BIOS-e820: [mem 0x-0x0009] usable
[0.00] BIOS-e820: [mem 0x000a-0x000f] reserved
[0.00] BIOS-e820: [mem 0x0010-0x09d7] usable
[0.00] BIOS-e820: [mem 0x09d8-0x09ff] reserved
[0.00] BIOS-e820: [mem 0x0a00-0xbd27bfff] usable
[0.00] BIOS-e820: [mem 0xbd27c000-0xbd3bcfff] reserved
[0.00] BIOS-e820: [mem 0xbd3bd000-0xbd4b2fff] usable
[0.00] BIOS-e820: [mem 0xbd4b3000-0xbd88afff] ACPI NVS
[0.00] BIOS-e820: [mem 0xbd88b000-0xbe32afff] reserved
[0.00] BIOS-e820: [mem 0xbe32b000-0xbe3e8fff] type 20
[0.00] BIOS-e820: [mem 0xbe3e9000-0xbeff] usable
[0.00] BIOS-e820: [mem 0xbf00-0xbfff] reserved
[0.00] BIOS-e820: [mem 0xf800-0xfbff] reserved
[0.00] BIOS-e820: [mem 0xfdf0-0xfdff] reserved
[0.00] BIOS-e820: [mem 0xfea0-0xfea0] reserved
[0.00] BIOS-e820: [mem 0xfeb8-0xfec01fff] reserved
[0.00] BIOS-e820: [mem 0xfec1-0xfec10fff] reserved
[0.00] BIOS-e820: [mem 0xfec3-0xfec30fff] reserved
[0.00] BIOS-e820: [mem 0xfed0-0xfed00fff] reserved
[0.00] BIOS-e820: [mem 0xfed4-0xfed44fff] reserved
[0.00] BIOS-e820: [mem 0xfed8-0xfed8] reserved
[0.00] BIOS-e820: [mem 0xfedc2000-0xfedc] reserved
[0.00] BIOS-e820: [mem 0xfedd4000-0xfedd5fff] reserved
[0.00] BIOS-e820: [mem 0xfee0-0xfeef] reserved
[0.00] BIOS-e820: [mem 0xff00-0x] reserved
[0.00] BIOS-e820: [mem 0x0001-0x00043f37] usable
[0.00] NX (Execute Disable) protection: active
[0.00] efi: EFI v2.60 by American Megatrends
[0.00] efi:  ACPI 2.0=0xbd4b3000  ACPI=0xbd4b3000  SMBIOS=0xbe29a000  
ESRT=0xbb2b9518  MEMATTR=0xbb05b018 
[0.00] random: fast init done
[0.00] SMBIOS 3.0 present.
[0.00] DMI: Micro-Star International Co., Ltd. MS-7A34/B350 TOMAHAWK 
(MS-7A34), BIOS 1.80 09/13/2017
[0.00] tsc: Fast TSC calibration failed
[0.00] tsc: Using PIT calibration value
[0.00] e820: update [mem 0x-0x0fff] usable ==> reserved
[0.00] e820: remove [mem 0x000a-0x000f] usable
[0.00] e820: last_pfn = 0x43f380 max_arch_pfn = 0x4
[0.00] MTRR default type: uncachable
[0.00] MTRR fixed ranges enabled:
[0.00]   0-9 write-back
[0.00]   A-B write-through
[0.00]   C-F write-protect
[0.00] MTRR variable ranges enabled:
[0.00]   0 base  mask 8000 write-back
[0.00]   1 base 8000 mask C000 write-back
[0.00]   2 base BF00 mask FF00 uncachable
[0.00]   3 disabled
[0.00]   4 disabled
[0.00]   5 disabled
[0.00]   6 disabled
[0.00]   7 disabled
[0.00] TOM2: 00044000 aka 17408M
[0.00] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
[0.00] e820: update [mem 0xbf00-0x] usable ==> reserved
[0.00] e820: last_pfn = 0xbf000 max_arch_pfn = 0x4
[0.00] esrt: Reserving ESRT space from 0xbb2b9518 to 
0xbb2b9550.
[0.00] Base memory trampoline at [8d2280096000] 96000 size 24576
[0.00] Using GB pages for direct mapping
[0.00] BRK 

Re: [DC] [CRTC:44:crtc-0] vblank wait timed out

2017-11-15 Thread Michel Dänzer
On 15/11/17 02:52 AM, Andrey Grodzovsky wrote:
> Can you try this patch ? I noticed there is a new API to use to wait for
> flips instead of what we are using now.

I'll give it a spin, thanks. It'll take some time to have any certainty
it's fixed though.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer


Re: [DC] [CRTC:44:crtc-0] vblank wait timed out

2017-11-15 Thread Michel Dänzer
On 14/11/17 09:18 PM, Matthias wrote:
> Hi,
> 
> might this be related to this bug I reportet some time ago?
> 
> https://bugs.freedesktop.org/show_bug.cgi?id=103489

Seems unlikely. I only see a single "vblank wait timed out" splat when
the display goes off, and no machine hangs.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer


Re: [PATCH] amdgpu: Don't use DRM_ERROR when failing to allocate a BO

2017-11-15 Thread Christian König

Am 14.11.2017 um 18:54 schrieb Deucher, Alexander:

-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
Of Michel Dänzer
Sent: Tuesday, November 14, 2017 12:52 PM
To: amd-gfx@lists.freedesktop.org
Subject: [PATCH] amdgpu: Don't use DRM_ERROR when failing to allocate a
BO

From: Michel Dänzer 

This can be triggered by userspace, e.g. trying to allocate too large a
BO, so it shouldn't log anything by default.

Callers need to handle failure anyway.

Signed-off-by: Michel Dänzer 

Reviewed-by: Alex Deucher 


Reviewed-by: Christian König 




---
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 4 ++--
  2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 951d625bbdd7..04ddd782bf6d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -72,7 +72,7 @@ int amdgpu_gem_object_create(struct amdgpu_device
*adev, unsigned long size,
initial_domain |=
AMDGPU_GEM_DOMAIN_GTT;
goto retry;
}
-   DRM_ERROR("Failed to allocate GEM object (%ld,
%d, %u, %d)\n",
+   DRM_DEBUG("Failed to allocate GEM object (%ld,
%d, %u, %d)\n",
  size, initial_domain, alignment, r);
}
return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 5acf20cfb1d0..3233d5988f66 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -314,8 +314,8 @@ static bool amdgpu_bo_validate_size(struct
amdgpu_device *adev,
return true;

  fail:
-   DRM_ERROR("BO size %lu > total memory in domain: %llu\n", size,
- man->size << PAGE_SHIFT);
+   DRM_DEBUG("BO size %lu > total memory in domain: %llu\n", size,
+ man->size << PAGE_SHIFT);
return false;
  }

--
2.15.0
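The rationale of the DRM_ERROR to DRM_DEBUG demotion above is that a failure userspace can trivially trigger (asking for a too-large BO) should fail quietly, not spam the kernel log at error level. A userspace sketch of the validation shape, with invented names:

```c
#include <assert.h>
#include <stdbool.h>

/* Validate a requested buffer-object size against the domain's total
 * memory. On failure the real driver now logs at debug level only,
 * since the caller must handle a false return anyway. */
static bool validate_bo_size(unsigned long size, unsigned long domain_size)
{
	if (size > domain_size)
		return false;	/* user-triggerable: no error-level log */
	return true;
}
```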
