[PATCH v2] drm/msm/adreno: Add missing MODULE_FIRMWARE macros

2023-06-19 Thread Juerg Haefliger
The driver references some firmware files that don't have corresponding
MODULE_FIRMWARE macros and thus won't be listed via modinfo. Fix that.

Signed-off-by: Juerg Haefliger 

---
v2:
  - Drop addition and removal of zap files (needs more discussion)
  - Add new a690_gmu.bin
  - Update commit subject and message accordingly
---
 drivers/gpu/drm/msm/adreno/adreno_device.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
index cb94cfd137a8..7c1f9a844009 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
@@ -397,10 +397,21 @@ MODULE_FIRMWARE("qcom/a530_zap.mdt");
 MODULE_FIRMWARE("qcom/a530_zap.b00");
 MODULE_FIRMWARE("qcom/a530_zap.b01");
 MODULE_FIRMWARE("qcom/a530_zap.b02");
+MODULE_FIRMWARE("qcom/a540_gpmu.fw2");
 MODULE_FIRMWARE("qcom/a619_gmu.bin");
 MODULE_FIRMWARE("qcom/a630_sqe.fw");
 MODULE_FIRMWARE("qcom/a630_gmu.bin");
 MODULE_FIRMWARE("qcom/a630_zap.mbn");
+MODULE_FIRMWARE("qcom/a640_gmu.bin");
+MODULE_FIRMWARE("qcom/a650_gmu.bin");
+MODULE_FIRMWARE("qcom/a650_sqe.fw");
+MODULE_FIRMWARE("qcom/a660_gmu.bin");
+MODULE_FIRMWARE("qcom/a660_sqe.fw");
+MODULE_FIRMWARE("qcom/a690_gmu.bin");
+MODULE_FIRMWARE("qcom/leia_pfp_470.fw");
+MODULE_FIRMWARE("qcom/leia_pm4_470.fw");
+MODULE_FIRMWARE("qcom/yamato_pfp.fw");
+MODULE_FIRMWARE("qcom/yamato_pm4.fw");
 
 static inline bool _rev_match(uint8_t entry, uint8_t id)
 {
-- 
2.37.2



Re: [PATCH] drm/msm/adreno: Update MODULE_FIRMWARE macros

2023-06-19 Thread Juerg Haefliger
On Fri, 16 Jun 2023 21:25:01 +0530
Akhil P Oommen  wrote:

> On Fri, Jun 16, 2023 at 02:28:15PM +0200, Juerg Haefliger wrote:
> > 
> > Add missing MODULE_FIRMWARE macros and remove some for firmwares that
> > the driver no longer references.
> > 
> > Signed-off-by: Juerg Haefliger 
> > ---
> >  drivers/gpu/drm/msm/adreno/adreno_device.c | 23 ++
> >  1 file changed, 19 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
> > index 8cff86e9d35c..9f70d7c1a72a 100644
> > --- a/drivers/gpu/drm/msm/adreno/adreno_device.c
> > +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
> > @@ -364,17 +364,32 @@ MODULE_FIRMWARE("qcom/a330_pm4.fw");
> >  MODULE_FIRMWARE("qcom/a330_pfp.fw");
> >  MODULE_FIRMWARE("qcom/a420_pm4.fw");
> >  MODULE_FIRMWARE("qcom/a420_pfp.fw");
> > +MODULE_FIRMWARE("qcom/a506_zap.mdt");
> > +MODULE_FIRMWARE("qcom/a508_zap.mdt");
> > +MODULE_FIRMWARE("qcom/a512_zap.mdt");
> >  MODULE_FIRMWARE("qcom/a530_pm4.fw");
> >  MODULE_FIRMWARE("qcom/a530_pfp.fw");
> >  MODULE_FIRMWARE("qcom/a530v3_gpmu.fw2");
> >  MODULE_FIRMWARE("qcom/a530_zap.mdt");
> > -MODULE_FIRMWARE("qcom/a530_zap.b00");
> > -MODULE_FIRMWARE("qcom/a530_zap.b01");
> > -MODULE_FIRMWARE("qcom/a530_zap.b02");  
> Why are these not required when "qcom/a530_zap.mdt" is present?
> 
> mdt & b0* binaries are different partitions of the same secure
> firmware. Even though we specify only the .mdt file here, the PIL driver
> will load the *.b0* file automatically. OTOH, "*.mbn" is a standalone
> unified binary format.

Ah thanks for the clarification.


> If the requirement is to ensure that all necessary firmwares are part of
> your distribution, you should include the *.b0* files too here.

I'll look into that. IMO, everything that the driver can load should be
listed for completeness.

...Juerg


> -Akhil
> 
> > +MODULE_FIRMWARE("qcom/a540_gpmu.fw2");
> > +MODULE_FIRMWARE("qcom/a540_zap.mdt");
> > +MODULE_FIRMWARE("qcom/a615_zap.mdt");
> >  MODULE_FIRMWARE("qcom/a619_gmu.bin");
> >  MODULE_FIRMWARE("qcom/a630_sqe.fw");
> >  MODULE_FIRMWARE("qcom/a630_gmu.bin");
> > -MODULE_FIRMWARE("qcom/a630_zap.mbn");
> > +MODULE_FIRMWARE("qcom/a630_zap.mdt");
> > +MODULE_FIRMWARE("qcom/a640_gmu.bin");
> > +MODULE_FIRMWARE("qcom/a640_zap.mdt");
> > +MODULE_FIRMWARE("qcom/a650_gmu.bin");
> > +MODULE_FIRMWARE("qcom/a650_sqe.fw");
> > +MODULE_FIRMWARE("qcom/a650_zap.mdt");
> > +MODULE_FIRMWARE("qcom/a660_gmu.bin");
> > +MODULE_FIRMWARE("qcom/a660_sqe.fw");
> > +MODULE_FIRMWARE("qcom/a660_zap.mdt");
> > +MODULE_FIRMWARE("qcom/leia_pfp_470.fw");
> > +MODULE_FIRMWARE("qcom/leia_pm4_470.fw");
> > +MODULE_FIRMWARE("qcom/yamato_pfp.fw");
> > +MODULE_FIRMWARE("qcom/yamato_pm4.fw");
> >  
> >  static inline bool _rev_match(uint8_t entry, uint8_t id)
> >  {
> > -- 
> > 2.37.2
> >   





[PATCH] drm/amd/amdgpu: Properly tune the size of struct

2023-06-19 Thread Su Hui
Smatch error:
gpu/drm/amd/amdgpu/amdgv_sriovmsg.h:316:49: error:
static assertion failed: "amd_sriov_msg_pf2vf_info must be 1 KB"
static assertion failed: "amd_sriov_msg_vf2pf_info must be 1 KB"

Fixes: 1721bc1b2afa ("drm/amdgpu: Update VF2PF interface")
Signed-off-by: Su Hui 
---
 drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h b/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h
index 24d42d24e6a0..a482b422fed2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h
@@ -177,10 +177,10 @@ struct amd_sriov_msg_pf2vf_info {
uint64_t mecfw_offset;
/* MEC FW size in BYTE */
uint32_t mecfw_size;
-   /* UVD FW position in BYTE from the start of VF visible frame buffer */
-   uint64_t uvdfw_offset;
/* UVD FW size in BYTE */
uint32_t uvdfw_size;
+   /* UVD FW position in BYTE from the start of VF visible frame buffer */
+   uint64_t uvdfw_offset;
/* VCE FW position in BYTE from the start of VF visible frame buffer */
uint64_t vcefw_offset;
/* VCE FW size in BYTE */
@@ -193,8 +193,8 @@ struct amd_sriov_msg_pf2vf_info {
/* frequency for VF to update the VF2PF area in msec, 0 = manual */
uint32_t vf2pf_update_interval_ms;
/* identification in ROCm SMI */
-   uint64_t uuid;
uint32_t fcn_idx;
+   uint64_t uuid;
/* flags to indicate which register access method VF should use */
union amd_sriov_reg_access_flags reg_access_flags;
/* MM BW management */
@@ -263,7 +263,7 @@ struct amd_sriov_msg_vf2pf_info {
struct {
uint8_t id;
uint32_t version;
-   } ucode_info[AMD_SRIOV_MSG_RESERVE_UCODE];
+   } __packed ucode_info[AMD_SRIOV_MSG_RESERVE_UCODE];
uint64_t dummy_page_addr;
 
/* reserved */
-- 
2.30.2



Re: [PATCH 06/13] drm/amdgpu: use the new drm_exec object for CS v2

2023-06-19 Thread Tatsuyuki Ishi

On 6/20/23 13:07, Tatsuyuki Ishi wrote:
@@ -1296,9 +1271,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,

   */
  r = 0;
  amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) {
-    struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
-
-    r |= !amdgpu_ttm_tt_get_user_pages_done(bo->tbo.ttm, e->range);
+    r |= !amdgpu_ttm_tt_get_user_pages_done(e->bo->tbo.ttm,
+    e->range);
  e->range = NULL;


e->range = NULL; needs to be removed, or it's causing *massive* memory 
leaks.


Actually, I quoted the wrong hunk, the correct one is below.


@@ -928,18 +874,56 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
e->user_invalidated = userpage_invalidated;
}
 
-	r = ttm_eu_reserve_buffers(&p->ticket, &p->validated, true,
-				   &duplicates);
-   if (unlikely(r != 0)) {
-   if (r != -ERESTARTSYS)
-   DRM_ERROR("ttm_eu_reserve_buffers failed.\n");
-   goto out_free_user_pages;
+   drm_exec_while_not_all_locked(&p->exec) {
+   r = amdgpu_vm_lock_pd(&fpriv->vm, &p->exec);
+   drm_exec_continue_on_contention(&p->exec);
+   if (unlikely(r))
+   goto out_free_user_pages;
+
+   amdgpu_bo_list_for_each_entry(e, p->bo_list) {
+   r = drm_exec_prepare_obj(&p->exec, &e->bo->tbo.base, 2);
+   drm_exec_break_on_contention(&p->exec);
+   if (unlikely(r))
+   goto out_free_user_pages;
+
+   e->bo_va = amdgpu_vm_bo_find(vm, e->bo);
+   e->range = NULL;


This causes the leak.


+   }
+   drm_exec_continue_on_contention(&p->exec);
+
+   if (p->uf_bo) {
+   r = drm_exec_prepare_obj(&p->exec, &p->uf_bo->tbo.base, 2);
+   drm_exec_continue_on_contention(&p->exec);
+   if (unlikely(r))
+   goto out_free_user_pages;
+   }
}


Tatsuyuki


Re: [PATCH] drm/drm_gem.c: remove surplus else after return clause

2023-06-19 Thread Sui Jingfeng

ping ?

On 2023/3/14 20:53, Sui Jingfeng wrote:

  else is not generally useful after return

Signed-off-by: Sui Jingfeng <15330273...@189.cn>
---
  drivers/gpu/drm/drm_gem.c | 7 ---
  1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index a6208e2c089b..364e3733af98 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1150,8 +1150,8 @@ int drm_gem_pin(struct drm_gem_object *obj)
  {
if (obj->funcs->pin)
return obj->funcs->pin(obj);
-   else
-   return 0;
+
+   return 0;
  }
  
  void drm_gem_unpin(struct drm_gem_object *obj)

@@ -1172,7 +1172,8 @@ int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
ret = obj->funcs->vmap(obj, map);
if (ret)
return ret;
-   else if (iosys_map_is_null(map))
+
+   if (iosys_map_is_null(map))
return -ENOMEM;
  
  	return 0;


--
Jingfeng



Re: [PATCH 06/13] drm/amdgpu: use the new drm_exec object for CS v2

2023-06-19 Thread Tatsuyuki Ishi

+Boris and +Matthew in case you want to take over this patch set.

Here are some review / testing comments, including those I posted before 
to ease tracking.


On 5/4/23 20:51, Christian König wrote:

Use the new component here as well and remove the old handling.

v2: drop duplicate handling

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu.h |   1 -
  drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c |  71 ++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h |   5 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 210 +---
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h  |   7 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  |  22 --
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h  |   3 -
  7 files changed, 115 insertions(+), 204 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 02b827785e39..eba3e4f01ea6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -133,6 +141,8 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
  
  	list->first_userptr = first_userptr;

list->num_entries = num_entries;
+   sort(array, last_entry, sizeof(struct amdgpu_bo_list_entry),
+amdgpu_bo_list_entry_cmp, NULL);


Previously amdgpu_bo_list_get_list sorted all entries, but this one only 
sorts userptr entries. I think this changes behavior?



@@ -928,18 +874,56 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
e->user_invalidated = userpage_invalidated;
}
  
-	r = ttm_eu_reserve_buffers(&p->ticket, &p->validated, true,
-				   &duplicates);
-   if (unlikely(r != 0)) {
-   if (r != -ERESTARTSYS)
-   DRM_ERROR("ttm_eu_reserve_buffers failed.\n");
-   goto out_free_user_pages;
+   drm_exec_while_not_all_locked(&p->exec) {
+   r = amdgpu_vm_lock_pd(&fpriv->vm, &p->exec);
+   drm_exec_continue_on_contention(&p->exec);


Duplicate handling is needed for pretty much every call of 
amdgpu_vm_lock_pd, as bo->tbo.base.resv == vm->root.bo->tbo.base.resv 
for AMDGPU_GEM_CREATE_VM_ALWAYS_VALID.


I think Boris's suggestion of having this through a common 
DRM_EXEC_FLAG_ALLOW_DUPLICATES flag fits well.



+   if (unlikely(r))
+   goto out_free_user_pages;
+
+   amdgpu_bo_list_for_each_entry(e, p->bo_list) {
+   r = drm_exec_prepare_obj(&p->exec, &e->bo->tbo.base, 2);


Previously there were comments for how the fence count is calculated, 
now they seem to be removed. I'd prefer if they were properly retained 
as finding out who calls drm_resv_add_fence is not trivial, and wrong 
reservation count can also be really hard to debug.


Likewise for amdgpu_vm_lock_pd (which was added in another commit).


+   drm_exec_break_on_contention(&p->exec);
+   if (unlikely(r))
+   goto out_free_user_pages;
+
+   e->bo_va = amdgpu_vm_bo_find(vm, e->bo);
+   e->range = NULL;
+   }
+   drm_exec_continue_on_contention(&p->exec);
+
+   if (p->uf_bo) {
+   r = drm_exec_prepare_obj(&p->exec, &p->uf_bo->tbo.base, 2);
+   drm_exec_continue_on_contention(&p->exec);
+   if (unlikely(r))
+   goto out_free_user_pages;
+   }
}
  
-	amdgpu_bo_list_for_each_entry(e, p->bo_list) {

-   struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
+   amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) {
+   struct mm_struct *usermm;
  
-		e->bo_va = amdgpu_vm_bo_find(vm, bo);

+   usermm = amdgpu_ttm_tt_get_usermm(e->bo->tbo.ttm);
+   if (usermm && usermm != current->mm) {
+   r = -EPERM;
+   goto out_free_user_pages;
+   }
+
+   if (amdgpu_ttm_tt_is_userptr(e->bo->tbo.ttm) &&
+   e->user_invalidated && e->user_pages) {
+   amdgpu_bo_placement_from_domain(e->bo,
+   AMDGPU_GEM_DOMAIN_CPU);
+   r = ttm_bo_validate(&e->bo->tbo, &e->bo->placement,
+   &ctx);
+   if (r)
+   goto out_free_user_pages;
+
+   amdgpu_ttm_tt_set_user_pages(e->bo->tbo.ttm,
+e->user_pages);
+   }
+
+   kvfree(e->user_pages);
+   e->user_pages = NULL;
}
  
	amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold,

@@ -1296,9 +1271,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 */
r = 0;

Re: [PATCH drm-next v5 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI

2023-06-19 Thread Dave Airlie
Since this feature is currently nouveau-only and doesn't disturb
the current nouveau code paths, I'd like to try and get this work in
tree so other drivers can work from it.

If there are any major objections to this, I'm happy to pull it back
out again, but I'd like to get some acks/rb in the next couple of days
in order to land some of it.

Dave.


>
> forgot to add your email address to the patch series - sorry about that.
>
> This series (v5) contains the Documentation changes you requested.
>
> - Danilo
>
> On 6/20/23 02:42, Danilo Krummrich wrote:
> > This patch series provides a new UAPI for the Nouveau driver in order to
> > support Vulkan features, such as sparse bindings and sparse residency.
> >
> > Furthermore, with the DRM GPUVA manager it provides a new DRM core feature 
> > to
> > keep track of GPU virtual address (VA) mappings in a more generic way.
> >
> > The DRM GPUVA manager is intended to help drivers implement 
> > userspace-manageable
> > GPU VA spaces in reference to the Vulkan API. In order to achieve this goal 
> > it
> > serves the following purposes in this context.
> >
> >  1) Provide infrastructure to track GPU VA allocations and mappings,
> > making use of the maple_tree.
> >
> >  2) Generically connect GPU VA mappings to their backing buffers, in
> > particular DRM GEM objects.
> >
> >  3) Provide a common implementation to perform more complex mapping
> > operations on the GPU VA space. In particular splitting and merging
> > of GPU VA mappings, e.g. for intersecting mapping requests or 
> > partial
> > unmap requests.
> >
> > The new VM_BIND Nouveau UAPI builds on top of the DRM GPUVA manager, itself
> > providing the following new interfaces.
> >
> >  1) Initialize a GPU VA space via the new DRM_IOCTL_NOUVEAU_VM_INIT 
> > ioctl
> > for UMDs to specify the portion of VA space managed by the kernel 
> > and
> > userspace, respectively.
> >
> >  2) Allocate and free a VA space region as well as bind and unbind 
> > memory
> > to the GPUs VA space via the new DRM_IOCTL_NOUVEAU_VM_BIND ioctl.
> >
> >  3) Execute push buffers with the new DRM_IOCTL_NOUVEAU_EXEC ioctl.
> >
> > Both, DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC, make use of the 
> > DRM
> > scheduler to queue jobs and support asynchronous processing with DRM 
> > syncobjs
> > as synchronization mechanism.
> >
> > By default DRM_IOCTL_NOUVEAU_VM_BIND does synchronous processing,
> > DRM_IOCTL_NOUVEAU_EXEC supports asynchronous processing only.
> >
> > The new VM_BIND UAPI for Nouveau makes also use of drm_exec (execution 
> > context
> > for GEM buffers) by Christian König. Since the patch implementing drm_exec 
> > was
> > not yet merged into drm-next it is part of this series, as well as a small 
> > fix
> > for this patch, which was found while testing this series.
> >
> > This patch series is also available at [1].
> >
> > There is a Mesa NVK merge request by Dave Airlie [2] implementing the
> > corresponding userspace parts for this series.
> >
> > The Vulkan CTS test suite passes the sparse binding and sparse residency 
> > test
> > cases for the new UAPI together with Dave's Mesa work.
> >
> > There are also some test cases in the igt-gpu-tools project [3] for the new 
> > UAPI
> > and hence the DRM GPU VA manager. However, most of them are testing the DRM 
> > GPU
> > VA manager's logic through Nouveau's new UAPI and should be considered just 
> > as
> > helper for implementation.
> >
> > However, I absolutely intend to change those test cases to proper kunit test
> > cases for the DRM GPUVA manager, once and if we agree on its usefulness and
> > design.
> >
> > [1] 
> > https://gitlab.freedesktop.org/nouvelles/kernel/-/tree/new-uapi-drm-next /
> >  https://gitlab.freedesktop.org/nouvelles/kernel/-/merge_requests/1
> > [2] https://gitlab.freedesktop.org/nouveau/mesa/-/merge_requests/150/
> > [3] 
> > https://gitlab.freedesktop.org/dakr/igt-gpu-tools/-/tree/wip_nouveau_vm_bind
> >
> > Changes in V2:
> > ==
> >Nouveau:
> >  - Reworked the Nouveau VM_BIND UAPI to avoid memory allocations in 
> > fence
> >signalling critical sections. Updates to the VA space are split up 
> > in three
> >separate stages, where only the 2. stage executes in a fence 
> > signalling
> >critical section:
> >
> >  1. update the VA space, allocate new structures and page tables
> >  2. (un-)map the requested memory bindings
> >  3. free structures and page tables
> >
> >  - Separated generic job scheduler code from specific job 
> > implementations.
> >  - Separated the EXEC and VM_BIND implementation of the UAPI.
> >  - Reworked the locking parts of the nvkm/vmm RAW interface, such that
> >(un-)map operations can be executed in fence signalling critical 
> > sections.
> >
> >GPUVA Manager:
> >  - made drm_gpuva_regions optional for 

Re: [PATCH v2] drm/logicvc: Kconfig: select REGMAP and REGMAP_MMIO

2023-06-19 Thread Sui Jingfeng

Hi,

On 2023/6/8 15:15, Paul Kocialkowski wrote:

Hi,

On Thu 08 Jun 23, 10:42, Sui Jingfeng wrote:

The drm/logicvc driver depends on REGMAP and REGMAP_MMIO and should select
these two kconfig options; otherwise the driver fails to compile on platforms
without REGMAP_MMIO selected:

ERROR: modpost: "__devm_regmap_init_mmio_clk" 
[drivers/gpu/drm/logicvc/logicvc-drm.ko] undefined!
make[1]: *** [scripts/Makefile.modpost:136: Module.symvers] Error 1
make: *** [Makefile:1978: modpost] Error 2

Signed-off-by: Sui Jingfeng 

Thanks for the fix, looks good to me!

Acked-by: Paul Kocialkowski 



Thanks a lot,


Please don't forget to push this to drm-misc or drm-tip if you have the time,

as (even though trivial) it's precious for me.



Cheers,

Paul


---
  drivers/gpu/drm/logicvc/Kconfig | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/logicvc/Kconfig b/drivers/gpu/drm/logicvc/Kconfig
index fa7a88368809..1df22a852a23 100644
--- a/drivers/gpu/drm/logicvc/Kconfig
+++ b/drivers/gpu/drm/logicvc/Kconfig
@@ -5,5 +5,7 @@ config DRM_LOGICVC
select DRM_KMS_HELPER
select DRM_KMS_DMA_HELPER
select DRM_GEM_DMA_HELPER
+   select REGMAP
+   select REGMAP_MMIO
help
  DRM display driver for the logiCVC programmable logic block from Xylon
--
2.25.1


--
Jingfeng



Re: [PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA mappings

2023-06-19 Thread kernel test robot
Hi Danilo,

kernel test robot noticed the following build warnings:

[auto build test WARNING on dcb0775d36de28992f56455ab3967b30d380]

url:
https://github.com/intel-lab-lkp/linux/commits/Danilo-Krummrich/drm-execution-context-for-GEM-buffers-v4/20230620-084448
base:   dcb0775d36de28992f56455ab3967b30d380
patch link:https://lore.kernel.org/r/20230620004217.4700-4-dakr%40redhat.com
patch subject: [PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA 
mappings
config: hexagon-randconfig-r041-20230620 
(https://download.01.org/0day-ci/archive/20230620/202306201123.4nvlb3cq-...@intel.com/config)
compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 
4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
reproduce: 
(https://download.01.org/0day-ci/archive/20230620/202306201123.4nvlb3cq-...@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot 
| Closes: 
https://lore.kernel.org/oe-kbuild-all/202306201123.4nvlb3cq-...@intel.com/

All warnings (new ones prefixed by >>):

>> drivers/gpu/drm/drm_gpuva_mgr.c:676:7: warning: format specifies type 
>> 'unsigned long' but the argument has type 'unsigned int' [-Wformat]
 676 | return WARN(check_add_overflow(addr, range, &end),
 |~~~
 677 | "GPUVA address limited to %lu bytes, see 
Documentation.\n",
 | 
~~~
 |   %u
 678 | MTREE_INDEX_SIZE);
 | ^
   drivers/gpu/drm/drm_gpuva_mgr.c:663:26: note: expanded from macro 
'MTREE_INDEX_SIZE'
 663 | #define MTREE_INDEX_SIZE sizeof(MTREE_INDEX_TYPE)
 |  ^
   include/asm-generic/bug.h:133:29: note: expanded from macro 'WARN'
 133 | __WARN_printf(TAINT_WARN, format);   
   \
 | ~~^~~
   include/asm-generic/bug.h:97:48: note: expanded from macro '__WARN_printf'
  97 | warn_slowpath_fmt(__FILE__, __LINE__, taint, arg);   
   \
 |  ^~~
   drivers/gpu/drm/drm_gpuva_mgr.c:1314:25: warning: variable 'prev' set but 
not used [-Wunused-but-set-variable]
1314 | struct drm_gpuva *va, *prev = NULL;
 |^
   2 warnings generated.


vim +676 drivers/gpu/drm/drm_gpuva_mgr.c

   668  
   669  static inline bool
   670  drm_gpuva_check_overflow(u64 addr, u64 range)
   671  {
   672  MTREE_INDEX_TYPE end;
   673  
   674  return WARN(check_add_overflow(addr, range, &end),
   675  "GPUVA address limited to %lu bytes, see 
Documentation.\n",
 > 676  MTREE_INDEX_SIZE);
   677  }
   678  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


Re: [PATCH v6 2/8] PCI/VGA: Deal only with VGA class devices

2023-06-19 Thread Sui Jingfeng

Hi,

On 2023/6/20 02:12, Limonciello, Mario wrote:


On 6/12/2023 2:25 PM, Sui Jingfeng wrote:

From: Sui Jingfeng 

Deal only with the VGA device (pdev->class == 0x0300), so replace the
pci_get_subsys() function with pci_get_class(). Filter the non-VGA display
devices (pdev->class != 0x0300) out. There is no need to process
non-display PCI devices.

Signed-off-by: Sui Jingfeng 
---

This also means that deleting a PCI device no longer needs
to walk the list.

Reviewed-by: Mario Limonciello 


Thanks a lot,

Can you help resend this precious R-B to the v7 of this series [1]?

This is V6.

[1] https://patchwork.freedesktop.org/series/119250/


  drivers/pci/vgaarb.c | 22 --
  1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/pci/vgaarb.c b/drivers/pci/vgaarb.c
index c1bc6c983932..22a505e877dc 100644
--- a/drivers/pci/vgaarb.c
+++ b/drivers/pci/vgaarb.c
@@ -754,10 +754,6 @@ static bool vga_arbiter_add_pci_device(struct pci_dev *pdev)

  struct pci_dev *bridge;
  u16 cmd;
  -    /* Only deal with VGA class devices */
-    if ((pdev->class >> 8) != PCI_CLASS_DISPLAY_VGA)
-    return false;
-
  /* Allocate structure */
  vgadev = kzalloc(sizeof(struct vga_device), GFP_KERNEL);
  if (vgadev == NULL) {
@@ -1500,7 +1496,9 @@ static int pci_notify(struct notifier_block *nb, unsigned long action,

  struct pci_dev *pdev = to_pci_dev(dev);
  bool notify = false;
  -    vgaarb_dbg(dev, "%s\n", __func__);
+    /* Only deal with VGA class devices */
+    if (pdev->class != PCI_CLASS_DISPLAY_VGA << 8)
+    return 0;
    /* For now we're only intereted in devices added and removed. 
I didn't

   * test this thing here, so someone needs to double check for the
@@ -1510,6 +1508,8 @@ static int pci_notify(struct notifier_block *nb, unsigned long action,

  else if (action == BUS_NOTIFY_DEL_DEVICE)
  notify = vga_arbiter_del_pci_device(pdev);
  +    vgaarb_dbg(dev, "%s: action = %lu\n", __func__, action);
+
  if (notify)
  vga_arbiter_notify_clients();
  return 0;
@@ -1534,8 +1534,8 @@ static struct miscdevice vga_arb_device = {
    static int __init vga_arb_device_init(void)
  {
+    struct pci_dev *pdev = NULL;
  int rc;
-    struct pci_dev *pdev;
  rc = misc_register(&vga_arb_device);
  if (rc < 0)
@@ -1545,11 +1545,13 @@ static int __init vga_arb_device_init(void)
    /* We add all PCI devices satisfying VGA class in the arbiter by
   * default */
-    pdev = NULL;
-    while ((pdev =
-    pci_get_subsys(PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
-   PCI_ANY_ID, pdev)) != NULL)
+    while (1) {
+    pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev);
+    if (!pdev)
+    break;
+
  vga_arbiter_add_pci_device(pdev);
+    }
    pr_info("loaded\n");
  return rc;


--
Jingfeng



Re: [PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA mappings

2023-06-19 Thread kernel test robot
Hi Danilo,

kernel test robot noticed the following build warnings:

[auto build test WARNING on dcb0775d36de28992f56455ab3967b30d380]

url:
https://github.com/intel-lab-lkp/linux/commits/Danilo-Krummrich/drm-execution-context-for-GEM-buffers-v4/20230620-084448
base:   dcb0775d36de28992f56455ab3967b30d380
patch link:https://lore.kernel.org/r/20230620004217.4700-4-dakr%40redhat.com
patch subject: [PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA 
mappings
config: m68k-allyesconfig 
(https://download.01.org/0day-ci/archive/20230620/202306201034.gucldv3r-...@intel.com/config)
compiler: m68k-linux-gcc (GCC) 12.3.0
reproduce: 
(https://download.01.org/0day-ci/archive/20230620/202306201034.gucldv3r-...@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot 
| Closes: 
https://lore.kernel.org/oe-kbuild-all/202306201034.gucldv3r-...@intel.com/

All warnings (new ones prefixed by >>):

   In file included from arch/m68k/include/asm/bug.h:32,
from include/linux/bug.h:5,
from include/linux/thread_info.h:13,
from include/asm-generic/preempt.h:5,
from ./arch/m68k/include/generated/asm/preempt.h:1,
from include/linux/preempt.h:78,
from include/linux/spinlock.h:56,
from include/linux/kref.h:16,
from include/drm/drm_gem.h:37,
from drivers/gpu/drm/drm_gpuva_mgr.c:28:
   drivers/gpu/drm/drm_gpuva_mgr.c: In function 'drm_gpuva_check_overflow':
>> drivers/gpu/drm/drm_gpuva_mgr.c:675:21: warning: format '%lu' expects 
>> argument of type 'long unsigned int', but argument 5 has type 'unsigned int' 
>> [-Wformat=]
 675 | "GPUVA address limited to %lu bytes, see 
Documentation.\n",
 | 
^~
   include/asm-generic/bug.h:97:62: note: in definition of macro '__WARN_printf'
  97 | warn_slowpath_fmt(__FILE__, __LINE__, taint, arg);   
   \
 |  ^~~
   drivers/gpu/drm/drm_gpuva_mgr.c:674:16: note: in expansion of macro 'WARN'
 674 | return WARN(check_add_overflow(addr, range, &end),
 |^~~~
   drivers/gpu/drm/drm_gpuva_mgr.c:675:49: note: format string is defined here
 675 | "GPUVA address limited to %lu bytes, see 
Documentation.\n",
 |   ~~^
 | |
 | long unsigned int
 |   %u
   drivers/gpu/drm/drm_gpuva_mgr.c: In function '__drm_gpuva_sm_map':
   drivers/gpu/drm/drm_gpuva_mgr.c:1314:32: warning: variable 'prev' set but 
not used [-Wunused-but-set-variable]
1314 | struct drm_gpuva *va, *prev = NULL;
 |^~~~


vim +675 drivers/gpu/drm/drm_gpuva_mgr.c

   668  
   669  static inline bool
   670  drm_gpuva_check_overflow(u64 addr, u64 range)
   671  {
   672  MTREE_INDEX_TYPE end;
   673  
   674  return WARN(check_add_overflow(addr, range, &end),
 > 675  "GPUVA address limited to %lu bytes, see 
 > Documentation.\n",
   676  MTREE_INDEX_SIZE);
   677  }
   678  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


linux-next: manual merge of the fbdev tree with the drm tree

2023-06-19 Thread Stephen Rothwell
Hi all,

Today's linux-next merge of the fbdev tree got a conflict in:

  drivers/video/fbdev/hitfb.c

between commit:

  bb47f218fd01 ("fbdev/hitfb: Cast I/O offset to address")

from the drm tree and commit:

  dadeeffbe525 ("fbdev: hitfb: Use NULL for pointers")

from the fbdev tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc drivers/video/fbdev/hitfb.c
index 7737923b7a0a,5f544a177033..
--- a/drivers/video/fbdev/hitfb.c
+++ b/drivers/video/fbdev/hitfb.c
@@@ -444,10 -428,10 +444,10 @@@ static int hitfb_suspend(struct device *dev)
  {
u16 v;
  
-   hitfb_blank(1,0);
+   hitfb_blank(1, NULL);
 -  v = fb_readw(HD64461_STBCR);
 +  v = hitfb_readw(HD64461_STBCR);
v |= HD64461_STBCR_SLCKE_IST;
 -  fb_writew(v, HD64461_STBCR);
 +  hitfb_writew(v, HD64461_STBCR);
  
return 0;
  }
@@@ -456,13 -440,13 +456,13 @@@ static int hitfb_resume(struct device *dev)
  {
u16 v;
  
 -  v = fb_readw(HD64461_STBCR);
 +  v = hitfb_readw(HD64461_STBCR);
v &= ~HD64461_STBCR_SLCKE_OST;
msleep(100);
 -  v = fb_readw(HD64461_STBCR);
 +  v = hitfb_readw(HD64461_STBCR);
v &= ~HD64461_STBCR_SLCKE_IST;
 -  fb_writew(v, HD64461_STBCR);
 +  hitfb_writew(v, HD64461_STBCR);
-   hitfb_blank(0,0);
+   hitfb_blank(0, NULL);
  
return 0;
  }




Re: [PATCH drm-next v5 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI

2023-06-19 Thread Danilo Krummrich

Hi Donald,

forgot to add your email address to the patch series - sorry about that.

This series (v5) contains the Documentation changes you requested.

- Danilo

On 6/20/23 02:42, Danilo Krummrich wrote:

This patch series provides a new UAPI for the Nouveau driver in order to
support Vulkan features, such as sparse bindings and sparse residency.

Furthermore, with the DRM GPUVA manager it provides a new DRM core feature to
keep track of GPU virtual address (VA) mappings in a more generic way.

The DRM GPUVA manager is intended to help drivers implement userspace-manageable
GPU VA spaces in reference to the Vulkan API. In order to achieve this goal it
serves the following purposes in this context.

 1) Provide infrastructure to track GPU VA allocations and mappings,
making use of the maple_tree.

 2) Generically connect GPU VA mappings to their backing buffers, in
particular DRM GEM objects.

 3) Provide a common implementation to perform more complex mapping
operations on the GPU VA space. In particular splitting and merging
of GPU VA mappings, e.g. for intersecting mapping requests or partial
unmap requests.

The new VM_BIND Nouveau UAPI builds on top of the DRM GPUVA manager, itself
providing the following new interfaces.

 1) Initialize a GPU VA space via the new DRM_IOCTL_NOUVEAU_VM_INIT ioctl
for UMDs to specify the portion of VA space managed by the kernel and
userspace, respectively.

 2) Allocate and free a VA space region as well as bind and unbind memory
to the GPUs VA space via the new DRM_IOCTL_NOUVEAU_VM_BIND ioctl.

 3) Execute push buffers with the new DRM_IOCTL_NOUVEAU_EXEC ioctl.

Both, DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC, make use of the DRM
scheduler to queue jobs and support asynchronous processing with DRM syncobjs
as synchronization mechanism.

By default DRM_IOCTL_NOUVEAU_VM_BIND does synchronous processing,
DRM_IOCTL_NOUVEAU_EXEC supports asynchronous processing only.

The new VM_BIND UAPI for Nouveau makes also use of drm_exec (execution context
for GEM buffers) by Christian König. Since the patch implementing drm_exec was
not yet merged into drm-next it is part of this series, as well as a small fix
for this patch, which was found while testing this series.

This patch series is also available at [1].

There is a Mesa NVK merge request by Dave Airlie [2] implementing the
corresponding userspace parts for this series.

The Vulkan CTS test suite passes the sparse binding and sparse residency test
cases for the new UAPI together with Dave's Mesa work.

There are also some test cases in the igt-gpu-tools project [3] for the new UAPI
and hence the DRM GPU VA manager. However, most of them are testing the DRM GPU
VA manager's logic through Nouveau's new UAPI and should be considered just as
helpers for implementation.

However, I absolutely intend to change those test cases to proper kunit test
cases for the DRM GPUVA manager, once and if we agree on its usefulness and
design.

[1] https://gitlab.freedesktop.org/nouvelles/kernel/-/tree/new-uapi-drm-next /
 https://gitlab.freedesktop.org/nouvelles/kernel/-/merge_requests/1
[2] https://gitlab.freedesktop.org/nouveau/mesa/-/merge_requests/150/
[3] https://gitlab.freedesktop.org/dakr/igt-gpu-tools/-/tree/wip_nouveau_vm_bind

Changes in V2:
==============
   Nouveau:
 - Reworked the Nouveau VM_BIND UAPI to avoid memory allocations in fence
   signalling critical sections. Updates to the VA space are split up into
   three separate stages, where only the second stage executes in a fence
   signalling critical section:

 1. update the VA space, allocate new structures and page tables
 2. (un-)map the requested memory bindings
 3. free structures and page tables

 - Separated generic job scheduler code from specific job implementations.
 - Separated the EXEC and VM_BIND implementation of the UAPI.
 - Reworked the locking parts of the nvkm/vmm RAW interface, such that
   (un-)map operations can be executed in fence signalling critical
   sections.
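The three-stage split described above can be reduced to a tiny user-space model (all names are illustrative): only stages 1 and 3 allocate or free, so stage 2 is safe to run inside a fence signalling critical section:

```c
#include <stdlib.h>

/* Toy stand-in for preallocated page-table / mapping state. */
struct toy_update {
	int prepared;
	int mapped;
};

/* Stage 1: runs outside the critical section; may sleep and allocate. */
static struct toy_update *stage1_prepare(void)
{
	struct toy_update *upd = calloc(1, sizeof(*upd));

	if (upd)
		upd->prepared = 1;
	return upd;
}

/* Stage 2: fence signalling critical section; must not allocate. */
static void stage2_commit(struct toy_update *upd)
{
	upd->mapped = 1;	/* only flips preallocated state */
}

/* Stage 3: runs after the fence signalled; frees structures. */
static void stage3_cleanup(struct toy_update *upd)
{
	free(upd);
}
```

The point of the split is that a commit step which cannot fail or block may be called from contexts where allocation is forbidden.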

   GPUVA Manager:
 - made drm_gpuva_regions optional for users of the GPUVA manager
 - allow NULL GEMs for drm_gpuva entries
 - switched from drm_mm to maple_tree to track drm_gpuva / drm_gpuva_region
   entries
 - provide callbacks for users to allocate custom drm_gpuva_op structures to
   allow inheritance
 - added user bits to drm_gpuva_flags
 - added a prefetch operation type in order to support generating prefetch
   operations in the same way other operations are generated
 - hand the responsibility for mutual exclusion for a GEM's
   drm_gpuva list to the user; simplified corresponding (un-)link functions

   Maple Tree:
 - I added two maple tree patches to the series, one to support custom tree
   walk macros and one to hand the locking responsibility to the user of the
   GPUVA manager

[PATCH drm-next v5 14/14] drm/nouveau: debugfs: implement DRM GPU VA debugfs

2023-06-19 Thread Danilo Krummrich
Provide the driver indirection iterating over all DRM GPU VA spaces to
enable the common 'gpuvas' debugfs file for dumping DRM GPU VA spaces.

Signed-off-by: Danilo Krummrich 
---
 drivers/gpu/drm/nouveau/nouveau_debugfs.c | 39 +++
 1 file changed, 39 insertions(+)

diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c 
b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
index 99d022a91afc..053f703f2f68 100644
--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
@@ -203,6 +203,44 @@ nouveau_debugfs_pstate_open(struct inode *inode, struct 
file *file)
return single_open(file, nouveau_debugfs_pstate_get, inode->i_private);
 }
 
+static void
+nouveau_debugfs_gpuva_regions(struct seq_file *m, struct nouveau_uvmm *uvmm)
+{
+   MA_STATE(mas, &uvmm->region_mt, 0, 0);
+   struct nouveau_uvma_region *reg;
+
+   seq_puts  (m, " VA regions  | start  | range  | end\n");
+   seq_puts  (m, "\n");
+   mas_for_each(&mas, reg, ULONG_MAX)
+   seq_printf(m, " | 0x%016llx | 0x%016llx | 0x%016llx\n",
+  reg->va.addr, reg->va.range, reg->va.addr + reg->va.range);
+}
+
+static int
+nouveau_debugfs_gpuva(struct seq_file *m, void *data)
+{
+   struct drm_info_node *node = (struct drm_info_node *) m->private;
+   struct nouveau_drm *drm = nouveau_drm(node->minor->dev);
+   struct nouveau_cli *cli;
+
+   mutex_lock(&drm->clients_lock);
+   list_for_each_entry(cli, &drm->clients, head) {
+   struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(cli);
+
+   if (!uvmm)
+   continue;
+
+   nouveau_uvmm_lock(uvmm);
+   drm_debugfs_gpuva_info(m, &uvmm->umgr);
+   seq_puts(m, "\n");
+   nouveau_debugfs_gpuva_regions(m, uvmm);
+   nouveau_uvmm_unlock(uvmm);
+   }
+   mutex_unlock(&drm->clients_lock);
+
+   return 0;
+}
+
 static const struct file_operations nouveau_pstate_fops = {
.owner = THIS_MODULE,
.open = nouveau_debugfs_pstate_open,
@@ -214,6 +252,7 @@ static const struct file_operations nouveau_pstate_fops = {
 static struct drm_info_list nouveau_debugfs_list[] = {
{ "vbios.rom",  nouveau_debugfs_vbios_image, 0, NULL },
{ "strap_peek", nouveau_debugfs_strap_peek, 0, NULL },
+   DRM_DEBUGFS_GPUVA_INFO(nouveau_debugfs_gpuva, NULL),
 };
 #define NOUVEAU_DEBUGFS_ENTRIES ARRAY_SIZE(nouveau_debugfs_list)
 
-- 
2.40.1



[PATCH drm-next v5 13/14] drm/nouveau: implement new VM_BIND uAPI

2023-06-19 Thread Danilo Krummrich
This commit provides the implementation for the new uapi motivated by the
Vulkan API. It allows user mode drivers (UMDs) to:

1) Initialize a GPU virtual address (VA) space via the new
   DRM_IOCTL_NOUVEAU_VM_INIT ioctl for UMDs to specify the portion of VA
   space managed by the kernel and userspace, respectively.

2) Allocate and free a VA space region as well as bind and unbind memory
   to the GPU's VA space via the new DRM_IOCTL_NOUVEAU_VM_BIND ioctl.
   UMDs can request the named operations to be processed either
   synchronously or asynchronously. It supports DRM syncobjs
   (incl. timelines) as synchronization mechanism. The management of the
   GPU VA mappings is implemented with the DRM GPU VA manager.

3) Execute push buffers with the new DRM_IOCTL_NOUVEAU_EXEC ioctl. The
   execution happens asynchronously. It supports DRM syncobj (incl.
   timelines) as synchronization mechanism. DRM GEM object locking is
   handled with drm_exec.

Both DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC use the DRM
GPU scheduler for the asynchronous paths.

Signed-off-by: Danilo Krummrich 
---
 Documentation/gpu/driver-uapi.rst   |3 +
 drivers/gpu/drm/nouveau/Kbuild  |3 +
 drivers/gpu/drm/nouveau/Kconfig |2 +
 drivers/gpu/drm/nouveau/nouveau_abi16.c |   24 +
 drivers/gpu/drm/nouveau/nouveau_abi16.h |1 +
 drivers/gpu/drm/nouveau/nouveau_bo.c|  147 +-
 drivers/gpu/drm/nouveau/nouveau_bo.h|2 +-
 drivers/gpu/drm/nouveau/nouveau_drm.c   |   27 +-
 drivers/gpu/drm/nouveau/nouveau_drv.h   |   59 +-
 drivers/gpu/drm/nouveau/nouveau_exec.c  |  418 +
 drivers/gpu/drm/nouveau/nouveau_exec.h  |   54 +
 drivers/gpu/drm/nouveau/nouveau_gem.c   |   25 +-
 drivers/gpu/drm/nouveau/nouveau_mem.h   |5 +
 drivers/gpu/drm/nouveau/nouveau_prime.c |2 +-
 drivers/gpu/drm/nouveau/nouveau_sched.c |  461 ++
 drivers/gpu/drm/nouveau/nouveau_sched.h |  123 ++
 drivers/gpu/drm/nouveau/nouveau_uvmm.c  | 1979 +++
 drivers/gpu/drm/nouveau/nouveau_uvmm.h  |  107 ++
 18 files changed, 3377 insertions(+), 65 deletions(-)
 create mode 100644 drivers/gpu/drm/nouveau/nouveau_exec.c
 create mode 100644 drivers/gpu/drm/nouveau/nouveau_exec.h
 create mode 100644 drivers/gpu/drm/nouveau/nouveau_sched.c
 create mode 100644 drivers/gpu/drm/nouveau/nouveau_sched.h
 create mode 100644 drivers/gpu/drm/nouveau/nouveau_uvmm.c
 create mode 100644 drivers/gpu/drm/nouveau/nouveau_uvmm.h

diff --git a/Documentation/gpu/driver-uapi.rst 
b/Documentation/gpu/driver-uapi.rst
index 9c7ca6e33a68..c08bcbb95fb3 100644
--- a/Documentation/gpu/driver-uapi.rst
+++ b/Documentation/gpu/driver-uapi.rst
@@ -13,4 +13,7 @@ drm/nouveau uAPI
 VM_BIND / EXEC uAPI
 ---
 
+.. kernel-doc:: drivers/gpu/drm/nouveau/nouveau_exec.c
+:doc: Overview
+
 .. kernel-doc:: include/uapi/drm/nouveau_drm.h
diff --git a/drivers/gpu/drm/nouveau/Kbuild b/drivers/gpu/drm/nouveau/Kbuild
index 5e5617006da5..cf6b3a80c0c8 100644
--- a/drivers/gpu/drm/nouveau/Kbuild
+++ b/drivers/gpu/drm/nouveau/Kbuild
@@ -47,6 +47,9 @@ nouveau-y += nouveau_prime.o
 nouveau-y += nouveau_sgdma.o
 nouveau-y += nouveau_ttm.o
 nouveau-y += nouveau_vmm.o
+nouveau-y += nouveau_exec.o
+nouveau-y += nouveau_sched.o
+nouveau-y += nouveau_uvmm.o
 
 # DRM - modesetting
 nouveau-$(CONFIG_DRM_NOUVEAU_BACKLIGHT) += nouveau_backlight.o
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index a70bd65e1400..c52e8096cca4 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -10,6 +10,8 @@ config DRM_NOUVEAU
select DRM_KMS_HELPER
select DRM_TTM
select DRM_TTM_HELPER
+   select DRM_EXEC
+   select DRM_SCHED
select I2C
select I2C_ALGOBIT
select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
diff --git a/drivers/gpu/drm/nouveau/nouveau_abi16.c 
b/drivers/gpu/drm/nouveau/nouveau_abi16.c
index 82dab51d8aeb..a112f28681d3 100644
--- a/drivers/gpu/drm/nouveau/nouveau_abi16.c
+++ b/drivers/gpu/drm/nouveau/nouveau_abi16.c
@@ -35,6 +35,7 @@
 #include "nouveau_chan.h"
 #include "nouveau_abi16.h"
 #include "nouveau_vmm.h"
+#include "nouveau_sched.h"
 
 static struct nouveau_abi16 *
 nouveau_abi16(struct drm_file *file_priv)
@@ -125,6 +126,17 @@ nouveau_abi16_chan_fini(struct nouveau_abi16 *abi16,
 {
struct nouveau_abi16_ntfy *ntfy, *temp;
 
+   /* When a client exits without waiting for its queued-up jobs to
+* finish it might happen that we fault the channel. This is due to
+* drm_file_free() calling drm_gem_release() before the postclose()
+* callback. Hence, we can't tear down this scheduler entity before
+* uvmm mappings are unmapped. Currently, we can't detect this case.
+*
+* However, this should be rare and harmless, since the channel isn't
+* needed anymore.
+*/
+   nouveau_sched_entity_fini(&chan->sched_entity);
+
/* wait for all activity to 

[PATCH drm-next v5 12/14] drm/nouveau: nvkm/vmm: implement raw ops to manage uvmm

2023-06-19 Thread Danilo Krummrich
The new VM_BIND UAPI uses the DRM GPU VA manager to manage the VA space.
Hence, we need a way to manipulate the MMU's page tables without going
through the internal range allocator implemented by nvkm/vmm.

This patch adds a raw interface for nvkm/vmm to pass the responsibility
for managing the address space and the corresponding map/unmap/sparse
operations to the upper layers.
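The difference between the managed mode and the new raw mode can be sketched with a toy model (illustrative names, trivial bump allocator): in managed mode the vmm picks the address via its internal allocator, while in raw mode the caller-supplied address is used verbatim because the upper layer owns the VA space:

```c
#include <stdint.h>

enum toy_vmm_type { TOY_MANAGED, TOY_RAW };

struct toy_vmm {
	enum toy_vmm_type type;
	uint64_t next_free;	/* trivial bump allocator for managed mode */
};

/* Managed mode picks the address; raw mode trusts the caller's address. */
static uint64_t toy_vmm_map(struct toy_vmm *vmm, uint64_t raw_addr,
			    uint64_t size)
{
	if (vmm->type == TOY_MANAGED) {
		uint64_t addr = vmm->next_free;

		vmm->next_free += size;
		return addr;
	}
	return raw_addr;	/* upper layer manages the VA space */
}
```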

Signed-off-by: Danilo Krummrich 
---
 drivers/gpu/drm/nouveau/include/nvif/if000c.h |  26 ++-
 drivers/gpu/drm/nouveau/include/nvif/vmm.h|  19 +-
 .../gpu/drm/nouveau/include/nvkm/subdev/mmu.h |  20 +-
 drivers/gpu/drm/nouveau/nouveau_svm.c |   2 +-
 drivers/gpu/drm/nouveau/nouveau_vmm.c |   4 +-
 drivers/gpu/drm/nouveau/nvif/vmm.c| 100 +++-
 .../gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c| 213 --
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c | 197 
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h |  25 ++
 .../drm/nouveau/nvkm/subdev/mmu/vmmgf100.c|  16 +-
 .../drm/nouveau/nvkm/subdev/mmu/vmmgp100.c|  16 +-
 .../gpu/drm/nouveau/nvkm/subdev/mmu/vmmnv50.c |  27 ++-
 12 files changed, 566 insertions(+), 99 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/include/nvif/if000c.h 
b/drivers/gpu/drm/nouveau/include/nvif/if000c.h
index 9c7ff56831c5..a5a182b3c28d 100644
--- a/drivers/gpu/drm/nouveau/include/nvif/if000c.h
+++ b/drivers/gpu/drm/nouveau/include/nvif/if000c.h
@@ -3,7 +3,10 @@
 struct nvif_vmm_v0 {
__u8  version;
__u8  page_nr;
-   __u8  managed;
+#define NVIF_VMM_V0_TYPE_UNMANAGED 0x00
+#define NVIF_VMM_V0_TYPE_MANAGED   0x01
+#define NVIF_VMM_V0_TYPE_RAW   0x02
+   __u8  type;
__u8  pad03[5];
__u64 addr;
__u64 size;
@@ -17,6 +20,7 @@ struct nvif_vmm_v0 {
 #define NVIF_VMM_V0_UNMAP  0x04
 #define NVIF_VMM_V0_PFNMAP 0x05
 #define NVIF_VMM_V0_PFNCLR 0x06
+#define NVIF_VMM_V0_RAW0x07
#define NVIF_VMM_V0_MTHD(i) ((i) + 0x80)
 
 struct nvif_vmm_page_v0 {
@@ -66,6 +70,26 @@ struct nvif_vmm_unmap_v0 {
__u64 addr;
 };
 
+struct nvif_vmm_raw_v0 {
+   __u8 version;
+#define NVIF_VMM_RAW_V0_GET0x0
+#define NVIF_VMM_RAW_V0_PUT0x1
+#define NVIF_VMM_RAW_V0_MAP0x2
+#define NVIF_VMM_RAW_V0_UNMAP  0x3
+#define NVIF_VMM_RAW_V0_SPARSE 0x4
+   __u8  op;
+   __u8  sparse;
+   __u8  ref;
+   __u8  shift;
+   __u32 argc;
+   __u8  pad01[7];
+   __u64 addr;
+   __u64 size;
+   __u64 offset;
+   __u64 memory;
+   __u64 argv;
+};
+
 struct nvif_vmm_pfnmap_v0 {
__u8  version;
__u8  page;
diff --git a/drivers/gpu/drm/nouveau/include/nvif/vmm.h 
b/drivers/gpu/drm/nouveau/include/nvif/vmm.h
index a2ee92201ace..0ecedd0ee0a5 100644
--- a/drivers/gpu/drm/nouveau/include/nvif/vmm.h
+++ b/drivers/gpu/drm/nouveau/include/nvif/vmm.h
@@ -4,6 +4,12 @@
 struct nvif_mem;
 struct nvif_mmu;
 
+enum nvif_vmm_type {
+   UNMANAGED,
+   MANAGED,
+   RAW,
+};
+
 enum nvif_vmm_get {
ADDR,
PTES,
@@ -30,8 +36,9 @@ struct nvif_vmm {
int page_nr;
 };
 
-int nvif_vmm_ctor(struct nvif_mmu *, const char *name, s32 oclass, bool managed,
- u64 addr, u64 size, void *argv, u32 argc, struct nvif_vmm *);
+int nvif_vmm_ctor(struct nvif_mmu *, const char *name, s32 oclass,
+ enum nvif_vmm_type, u64 addr, u64 size, void *argv, u32 argc,
+ struct nvif_vmm *);
 void nvif_vmm_dtor(struct nvif_vmm *);
 int nvif_vmm_get(struct nvif_vmm *, enum nvif_vmm_get, bool sparse,
 u8 page, u8 align, u64 size, struct nvif_vma *);
@@ -39,4 +46,12 @@ void nvif_vmm_put(struct nvif_vmm *, struct nvif_vma *);
 int nvif_vmm_map(struct nvif_vmm *, u64 addr, u64 size, void *argv, u32 argc,
 struct nvif_mem *, u64 offset);
 int nvif_vmm_unmap(struct nvif_vmm *, u64);
+
+int nvif_vmm_raw_get(struct nvif_vmm *vmm, u64 addr, u64 size, u8 shift);
+int nvif_vmm_raw_put(struct nvif_vmm *vmm, u64 addr, u64 size, u8 shift);
+int nvif_vmm_raw_map(struct nvif_vmm *vmm, u64 addr, u64 size, u8 shift,
+void *argv, u32 argc, struct nvif_mem *mem, u64 offset);
+int nvif_vmm_raw_unmap(struct nvif_vmm *vmm, u64 addr, u64 size,
+  u8 shift, bool sparse);
+int nvif_vmm_raw_sparse(struct nvif_vmm *vmm, u64 addr, u64 size, bool ref);
 #endif
diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/mmu.h 
b/drivers/gpu/drm/nouveau/include/nvkm/subdev/mmu.h
index 70e7887ef4b4..2fd2f2433fc7 100644
--- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/mmu.h
+++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/mmu.h
@@ -17,6 +17,7 @@ 

[PATCH drm-next v5 11/14] drm/nouveau: chan: provide nouveau_channel_kill()

2023-06-19 Thread Danilo Krummrich
The new VM_BIND UAPI implementation introduced in subsequent commits
will allow asynchronous jobs processing push buffers and emitting fences.

If a job times out, we need a way to recover from this situation. For
now, simply kill the channel to unblock all hung up jobs and signal
userspace that the device is dead on the next EXEC or VM_BIND ioctl.

Signed-off-by: Danilo Krummrich 
---
 drivers/gpu/drm/nouveau/nouveau_chan.c | 14 +++---
 drivers/gpu/drm/nouveau/nouveau_chan.h |  1 +
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c 
b/drivers/gpu/drm/nouveau/nouveau_chan.c
index f47c0363683c..a975f8b0e0e5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_chan.c
+++ b/drivers/gpu/drm/nouveau/nouveau_chan.c
@@ -40,6 +40,14 @@ MODULE_PARM_DESC(vram_pushbuf, "Create DMA push buffers in 
VRAM");
 int nouveau_vram_pushbuf;
 module_param_named(vram_pushbuf, nouveau_vram_pushbuf, int, 0400);
 
+void
+nouveau_channel_kill(struct nouveau_channel *chan)
+{
+   atomic_set(&chan->killed, 1);
+   if (chan->fence)
+   nouveau_fence_context_kill(chan->fence, -ENODEV);
+}
+
 static int
 nouveau_channel_killed(struct nvif_event *event, void *repv, u32 repc)
 {
@@ -47,9 +55,9 @@ nouveau_channel_killed(struct nvif_event *event, void *repv, 
u32 repc)
struct nouveau_cli *cli = (void *)chan->user.client;
 
NV_PRINTK(warn, cli, "channel %d killed!\n", chan->chid);
-   atomic_set(&chan->killed, 1);
-   if (chan->fence)
-   nouveau_fence_context_kill(chan->fence, -ENODEV);
+
+   if (unlikely(!atomic_read(&chan->killed)))
+   nouveau_channel_kill(chan);
 
return NVIF_EVENT_DROP;
 }
diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.h 
b/drivers/gpu/drm/nouveau/nouveau_chan.h
index e06a8ffed31a..e483f4a254da 100644
--- a/drivers/gpu/drm/nouveau/nouveau_chan.h
+++ b/drivers/gpu/drm/nouveau/nouveau_chan.h
@@ -65,6 +65,7 @@ int  nouveau_channel_new(struct nouveau_drm *, struct 
nvif_device *, bool priv,
 u32 vram, u32 gart, struct nouveau_channel **);
 void nouveau_channel_del(struct nouveau_channel **);
 int  nouveau_channel_idle(struct nouveau_channel *);
+void nouveau_channel_kill(struct nouveau_channel *);
 
 extern int nouveau_vram_pushbuf;
 
-- 
2.40.1



[PATCH drm-next v5 10/14] drm/nouveau: fence: fail to emit when fence context is killed

2023-06-19 Thread Danilo Krummrich
The new VM_BIND UAPI implementation introduced in subsequent commits
will allow asynchronous jobs processing push buffers and emitting
fences.

If a fence context is killed, e.g. due to a channel fault, jobs which
are already queued for execution might still emit new fences. In such a
case a job would hang forever.

To fix that, fail to emit a new fence on a killed fence context with
-ENODEV to unblock the job.
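The rule this patch enforces can be reduced to a tiny model (a sketch, not the driver code): once the context is marked killed, emitting fails with -ENODEV instead of producing a fence that can never signal:

```c
#include <errno.h>

/* Minimal model of a fence context with a killed flag. */
struct toy_fence_ctx {
	int killed;
	int emitted;
};

static int toy_fence_emit(struct toy_fence_ctx *fctx)
{
	if (fctx->killed)
		return -ENODEV;	/* unblock the job instead of hanging */
	fctx->emitted++;
	return 0;
}

static void toy_fence_ctx_kill(struct toy_fence_ctx *fctx)
{
	fctx->killed = 1;
}
```

In the real driver the check happens under the fence context lock, so a kill racing with an emit is observed consistently.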

Signed-off-by: Danilo Krummrich 
---
 drivers/gpu/drm/nouveau/nouveau_fence.c | 7 +++
 drivers/gpu/drm/nouveau/nouveau_fence.h | 2 +-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c 
b/drivers/gpu/drm/nouveau/nouveau_fence.c
index e946408f945b..77c739a55b19 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fence.c
+++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
@@ -96,6 +96,7 @@ nouveau_fence_context_kill(struct nouveau_fence_chan *fctx, 
int error)
if (nouveau_fence_signal(fence))
 nvif_event_block(&fctx->event);
}
+   fctx->killed = 1;
 spin_unlock_irqrestore(&fctx->lock, flags);
 }
 
@@ -229,6 +230,12 @@ nouveau_fence_emit(struct nouveau_fence *fence, struct 
nouveau_channel *chan)
 dma_fence_get(&fence->base);
 spin_lock_irq(&fctx->lock);
 
+   if (unlikely(fctx->killed)) {
+   spin_unlock_irq(&fctx->lock);
+   dma_fence_put(&fence->base);
+   return -ENODEV;
+   }
+
if (nouveau_fence_update(chan, fctx))
 nvif_event_block(&fctx->event);
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.h 
b/drivers/gpu/drm/nouveau/nouveau_fence.h
index 7c73c7c9834a..2c72d96ef17d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fence.h
+++ b/drivers/gpu/drm/nouveau/nouveau_fence.h
@@ -44,7 +44,7 @@ struct nouveau_fence_chan {
char name[32];
 
struct nvif_event event;
-   int notify_ref, dead;
+   int notify_ref, dead, killed;
 };
 
 struct nouveau_fence_priv {
-- 
2.40.1



[PATCH drm-next v5 09/14] drm/nouveau: fence: separate fence alloc and emit

2023-06-19 Thread Danilo Krummrich
The new (VM_BIND) UAPI exports DMA fences through DRM syncobjs. Hence,
in order to emit fences within DMA fence signalling critical sections
(e.g. as typically done in the DRM GPU scheduler's run_job() callback) we
need to separate fence allocation and fence emitting.
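The alloc/emit split can be sketched in user-space terms (illustrative names): allocation, which may fail or sleep, happens before entering the signalling path, while emit only publishes a sequence number and never allocates:

```c
#include <stdlib.h>

struct toy_fence {
	unsigned int seqno;
	int emitted;
};

/* May allocate (and hence sleep); called outside the critical section. */
static int toy_fence_new(struct toy_fence **pfence)
{
	*pfence = calloc(1, sizeof(**pfence));
	return *pfence ? 0 : -1;	/* -ENOMEM in kernel terms */
}

/* Allocation-free; safe to call from a signalling critical section. */
static void toy_fence_emit(struct toy_fence *fence, unsigned int *chan_seqno)
{
	fence->seqno = ++(*chan_seqno);
	fence->emitted = 1;
}
```

This mirrors the shape of the patch: every call site first allocates the fence, then emits it, and unrefs the fence if emitting fails.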

Signed-off-by: Danilo Krummrich 
---
 drivers/gpu/drm/nouveau/dispnv04/crtc.c |  9 -
 drivers/gpu/drm/nouveau/nouveau_bo.c| 52 +++--
 drivers/gpu/drm/nouveau/nouveau_chan.c  |  6 ++-
 drivers/gpu/drm/nouveau/nouveau_dmem.c  |  9 +++--
 drivers/gpu/drm/nouveau/nouveau_fence.c | 16 +++-
 drivers/gpu/drm/nouveau/nouveau_fence.h |  3 +-
 drivers/gpu/drm/nouveau/nouveau_gem.c   |  5 ++-
 7 files changed, 59 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/dispnv04/crtc.c 
b/drivers/gpu/drm/nouveau/dispnv04/crtc.c
index a6f2e681bde9..a34924523133 100644
--- a/drivers/gpu/drm/nouveau/dispnv04/crtc.c
+++ b/drivers/gpu/drm/nouveau/dispnv04/crtc.c
@@ -1122,11 +1122,18 @@ nv04_page_flip_emit(struct nouveau_channel *chan,
PUSH_NVSQ(push, NV_SW, NV_SW_PAGE_FLIP, 0x);
PUSH_KICK(push);
 
-   ret = nouveau_fence_new(chan, false, pfence);
+   ret = nouveau_fence_new(pfence);
if (ret)
goto fail;
 
+   ret = nouveau_fence_emit(*pfence, chan);
+   if (ret)
+   goto fail_fence_unref;
+
return 0;
+
+fail_fence_unref:
+   nouveau_fence_unref(pfence);
 fail:
 spin_lock_irqsave(&dev->event_lock, flags);
 list_del(&s->head);
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 057bc995f19b..e9cbbf594e6f 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -820,29 +820,39 @@ nouveau_bo_move_m2mf(struct ttm_buffer_object *bo, int 
evict,
 mutex_lock(&cli->mutex);
 else
 mutex_lock_nested(&cli->mutex, SINGLE_DEPTH_NESTING);
+
 ret = nouveau_fence_sync(nouveau_bo(bo), chan, true, ctx->interruptible);
-   if (ret == 0) {
-   ret = drm->ttm.move(chan, bo, bo->resource, new_reg);
-   if (ret == 0) {
-   ret = nouveau_fence_new(chan, false, &fence);
-   if (ret == 0) {
-   /* TODO: figure out a better solution here
-*
-* wait on the fence here explicitly as going through
-* ttm_bo_move_accel_cleanup somehow doesn't seem to do it.
-*
-* Without this the operation can timeout and we'll fallback to a
-* software copy, which might take several minutes to finish.
-*/
-   nouveau_fence_wait(fence, false, false);
-   ret = ttm_bo_move_accel_cleanup(bo,
-   &fence->base,
-   evict, false,
-   new_reg);
-   nouveau_fence_unref(&fence);
-   }
-   }
+   if (ret)
+   goto out_unlock;
+
+   ret = drm->ttm.move(chan, bo, bo->resource, new_reg);
+   if (ret)
+   goto out_unlock;
+
+   ret = nouveau_fence_new(&fence);
+   if (ret)
+   goto out_unlock;
+
+   ret = nouveau_fence_emit(fence, chan);
+   if (ret) {
+   nouveau_fence_unref(&fence);
+   goto out_unlock;
}
+
+   /* TODO: figure out a better solution here
+*
+* wait on the fence here explicitly as going through
+* ttm_bo_move_accel_cleanup somehow doesn't seem to do it.
+*
+* Without this the operation can timeout and we'll fallback to a
+* software copy, which might take several minutes to finish.
+*/
+   nouveau_fence_wait(fence, false, false);
+   ret = ttm_bo_move_accel_cleanup(bo, &fence->base, evict, false,
+   new_reg);
+   nouveau_fence_unref(&fence);
+
+out_unlock:
mutex_unlock(>mutex);
return ret;
 }
diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c 
b/drivers/gpu/drm/nouveau/nouveau_chan.c
index 1068abe41024..f47c0363683c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_chan.c
+++ b/drivers/gpu/drm/nouveau/nouveau_chan.c
@@ -62,9 +62,11 @@ nouveau_channel_idle(struct nouveau_channel *chan)
struct nouveau_fence *fence = NULL;
int ret;
 
-   ret = nouveau_fence_new(chan, false, &fence);
+   ret = nouveau_fence_new(&fence);
if (!ret) {
-   ret = nouveau_fence_wait(fence, false, false);
+   ret = nouveau_fence_emit(fence, chan);
+   if (!ret)
+ 

[PATCH drm-next v5 08/14] drm/nouveau: move usercopy helpers to nouveau_drv.h

2023-06-19 Thread Danilo Krummrich
Move the usercopy helpers to a common driver header file to make it
usable for the new API added in subsequent commits.

Signed-off-by: Danilo Krummrich 
---
 drivers/gpu/drm/nouveau/nouveau_drv.h | 26 ++
 drivers/gpu/drm/nouveau/nouveau_gem.c | 26 --
 2 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h 
b/drivers/gpu/drm/nouveau/nouveau_drv.h
index 81350e685b50..20a7f31b9082 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drv.h
+++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
@@ -130,6 +130,32 @@ nouveau_cli(struct drm_file *fpriv)
return fpriv ? fpriv->driver_priv : NULL;
 }
 
+static inline void
+u_free(void *addr)
+{
+   kvfree(addr);
+}
+
+static inline void *
+u_memcpya(uint64_t user, unsigned nmemb, unsigned size)
+{
+   void *mem;
+   void __user *userptr = (void __force __user *)(uintptr_t)user;
+
+   size *= nmemb;
+
+   mem = kvmalloc(size, GFP_KERNEL);
+   if (!mem)
+   return ERR_PTR(-ENOMEM);
+
+   if (copy_from_user(mem, userptr, size)) {
+   u_free(mem);
+   return ERR_PTR(-EFAULT);
+   }
+
+   return mem;
+}
+
 #include 
 #include 
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c 
b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 45ca4eb98f54..a48f42aaeab9 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -613,32 +613,6 @@ nouveau_gem_pushbuf_validate(struct nouveau_channel *chan,
return 0;
 }
 
-static inline void
-u_free(void *addr)
-{
-   kvfree(addr);
-}
-
-static inline void *
-u_memcpya(uint64_t user, unsigned nmemb, unsigned size)
-{
-   void *mem;
-   void __user *userptr = (void __force __user *)(uintptr_t)user;
-
-   size *= nmemb;
-
-   mem = kvmalloc(size, GFP_KERNEL);
-   if (!mem)
-   return ERR_PTR(-ENOMEM);
-
-   if (copy_from_user(mem, userptr, size)) {
-   u_free(mem);
-   return ERR_PTR(-EFAULT);
-   }
-
-   return mem;
-}
-
 static int
 nouveau_gem_pushbuf_reloc_apply(struct nouveau_cli *cli,
struct drm_nouveau_gem_pushbuf *req,
-- 
2.40.1



[PATCH drm-next v5 07/14] drm/nouveau: bo: initialize GEM GPU VA interface

2023-06-19 Thread Danilo Krummrich
Initialize the GEM's DRM GPU VA manager interface in preparation for the
(u)vmm implementation, provided by subsequent commits, to make use of it.

Signed-off-by: Danilo Krummrich 
---
 drivers/gpu/drm/nouveau/nouveau_bo.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 7724fe63067d..057bc995f19b 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -215,11 +215,14 @@ nouveau_bo_alloc(struct nouveau_cli *cli, u64 *size, int 
*align, u32 domain,
nvbo = kzalloc(sizeof(struct nouveau_bo), GFP_KERNEL);
if (!nvbo)
return ERR_PTR(-ENOMEM);
+
 INIT_LIST_HEAD(&nvbo->head);
 INIT_LIST_HEAD(&nvbo->entry);
 INIT_LIST_HEAD(&nvbo->vma_list);
 nvbo->bo.bdev = &drm->ttm.bdev;
 
+   drm_gem_gpuva_init(&nvbo->bo.base);
+
/* This is confusing, and doesn't actually mean we want an uncached
 * mapping, but is what NOUVEAU_GEM_DOMAIN_COHERENT gets translated
 * into in nouveau_gem_new().
-- 
2.40.1



[PATCH drm-next v5 06/14] drm/nouveau: get vmm via nouveau_cli_vmm()

2023-06-19 Thread Danilo Krummrich
Provide a getter function for the client's current vmm context. Since
we'll add a new (u)vmm context for UMD bindings in subsequent commits,
this will keep the code clean.

Signed-off-by: Danilo Krummrich 
---
 drivers/gpu/drm/nouveau/nouveau_bo.c   | 2 +-
 drivers/gpu/drm/nouveau/nouveau_chan.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_drv.h  | 9 +
 drivers/gpu/drm/nouveau/nouveau_gem.c  | 6 +++---
 4 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
b/drivers/gpu/drm/nouveau/nouveau_bo.c
index c2ec91cc845d..7724fe63067d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -204,7 +204,7 @@ nouveau_bo_alloc(struct nouveau_cli *cli, u64 *size, int 
*align, u32 domain,
struct nouveau_drm *drm = cli->drm;
struct nouveau_bo *nvbo;
struct nvif_mmu *mmu = >mmu;
-   struct nvif_vmm *vmm = cli->svm.cli ? &cli->svm.vmm : &cli->vmm.vmm;
+   struct nvif_vmm *vmm = &nouveau_cli_vmm(cli)->vmm;
int i, pi = -1;
 
if (!*size) {
diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c 
b/drivers/gpu/drm/nouveau/nouveau_chan.c
index e648ecd0c1a0..1068abe41024 100644
--- a/drivers/gpu/drm/nouveau/nouveau_chan.c
+++ b/drivers/gpu/drm/nouveau/nouveau_chan.c
@@ -148,7 +148,7 @@ nouveau_channel_prep(struct nouveau_drm *drm, struct 
nvif_device *device,
 
chan->device = device;
chan->drm = drm;
-   chan->vmm = cli->svm.cli ? &cli->svm : &cli->vmm;
+   chan->vmm = nouveau_cli_vmm(cli);
 atomic_set(&chan->killed, 0);
 
/* allocate memory for dma push buffer */
diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h 
b/drivers/gpu/drm/nouveau/nouveau_drv.h
index b5de312a523f..81350e685b50 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drv.h
+++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
@@ -112,6 +112,15 @@ struct nouveau_cli_work {
struct dma_fence_cb cb;
 };
 
+static inline struct nouveau_vmm *
+nouveau_cli_vmm(struct nouveau_cli *cli)
+{
+   if (cli->svm.cli)
+   return &cli->svm;
+
+   return &cli->vmm;
+}
+
 void nouveau_cli_work_queue(struct nouveau_cli *, struct dma_fence *,
struct nouveau_cli_work *);
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c 
b/drivers/gpu/drm/nouveau/nouveau_gem.c
index ab9062e50977..45ca4eb98f54 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -103,7 +103,7 @@ nouveau_gem_object_open(struct drm_gem_object *gem, struct 
drm_file *file_priv)
struct nouveau_bo *nvbo = nouveau_gem_object(gem);
struct nouveau_drm *drm = nouveau_bdev(nvbo->bo.bdev);
struct device *dev = drm->dev->dev;
-   struct nouveau_vmm *vmm = cli->svm.cli ? &cli->svm : &cli->vmm;
+   struct nouveau_vmm *vmm = nouveau_cli_vmm(cli);
struct nouveau_vma *vma;
int ret;
 
@@ -180,7 +180,7 @@ nouveau_gem_object_close(struct drm_gem_object *gem, struct 
drm_file *file_priv)
struct nouveau_bo *nvbo = nouveau_gem_object(gem);
struct nouveau_drm *drm = nouveau_bdev(nvbo->bo.bdev);
struct device *dev = drm->dev->dev;
-   struct nouveau_vmm *vmm = cli->svm.cli ? &cli->svm : &cli->vmm;
+   struct nouveau_vmm *vmm = nouveau_cli_vmm(cli);
struct nouveau_vma *vma;
int ret;
 
@@ -269,7 +269,7 @@ nouveau_gem_info(struct drm_file *file_priv, struct 
drm_gem_object *gem,
 {
struct nouveau_cli *cli = nouveau_cli(file_priv);
struct nouveau_bo *nvbo = nouveau_gem_object(gem);
-   struct nouveau_vmm *vmm = cli->svm.cli ? &cli->svm : &cli->vmm;
+   struct nouveau_vmm *vmm = nouveau_cli_vmm(cli);
struct nouveau_vma *vma;
 
if (is_power_of_2(nvbo->valid_domains))
-- 
2.40.1



[PATCH drm-next v5 05/14] drm/nouveau: new VM_BIND uapi interfaces

2023-06-19 Thread Danilo Krummrich
This commit provides the interfaces for the new UAPI motivated by the
Vulkan API. It allows user mode drivers (UMDs) to:

1) Initialize a GPU virtual address (VA) space via the new
   DRM_IOCTL_NOUVEAU_VM_INIT ioctl. UMDs can provide a kernel reserved
   VA area.

2) Bind and unbind GPU VA space mappings via the new
   DRM_IOCTL_NOUVEAU_VM_BIND ioctl.

3) Execute push buffers with the new DRM_IOCTL_NOUVEAU_EXEC ioctl.

Both DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC support
asynchronous processing with DRM syncobjs as synchronization mechanism.

DRM_IOCTL_NOUVEAU_VM_BIND defaults to synchronous processing, while
DRM_IOCTL_NOUVEAU_EXEC supports asynchronous processing only.

Co-authored-by: Dave Airlie 
Signed-off-by: Danilo Krummrich 
---
 Documentation/gpu/driver-uapi.rst |   8 ++
 include/uapi/drm/nouveau_drm.h| 209 ++
 2 files changed, 217 insertions(+)

diff --git a/Documentation/gpu/driver-uapi.rst 
b/Documentation/gpu/driver-uapi.rst
index 4411e6919a3d..9c7ca6e33a68 100644
--- a/Documentation/gpu/driver-uapi.rst
+++ b/Documentation/gpu/driver-uapi.rst
@@ -6,3 +6,11 @@ drm/i915 uAPI
 =
 
 .. kernel-doc:: include/uapi/drm/i915_drm.h
+
+drm/nouveau uAPI
+================
+
+VM_BIND / EXEC uAPI
+-------------------
+
+.. kernel-doc:: include/uapi/drm/nouveau_drm.h
diff --git a/include/uapi/drm/nouveau_drm.h b/include/uapi/drm/nouveau_drm.h
index 853a327433d3..4d3a70529637 100644
--- a/include/uapi/drm/nouveau_drm.h
+++ b/include/uapi/drm/nouveau_drm.h
@@ -126,6 +126,209 @@ struct drm_nouveau_gem_cpu_fini {
__u32 handle;
 };
 
+/**
+ * struct drm_nouveau_sync - sync object
+ *
+ * This structure serves as a synchronization mechanism for (potentially)
+ * asynchronous operations such as EXEC or VM_BIND.
+ */
+struct drm_nouveau_sync {
+   /**
+* @flags: the flags for a sync object
+*
+* The first 8 bits are used to determine the type of the sync object.
+*/
+   __u32 flags;
+#define DRM_NOUVEAU_SYNC_SYNCOBJ 0x0
+#define DRM_NOUVEAU_SYNC_TIMELINE_SYNCOBJ 0x1
+#define DRM_NOUVEAU_SYNC_TYPE_MASK 0xf
+   /**
+* @handle: the handle of the sync object
+*/
+   __u32 handle;
+   /**
+* @timeline_value:
+*
+* The timeline point of the sync object in case the syncobj is of
+* type DRM_NOUVEAU_SYNC_TIMELINE_SYNCOBJ.
+*/
+   __u64 timeline_value;
+};
+
+/**
+ * struct drm_nouveau_vm_init - GPU VA space init structure
+ *
+ * Used to initialize the GPU's VA space for a user client, telling the kernel
+ * which portion of the VA space is managed by the UMD and kernel respectively.
+ */
+struct drm_nouveau_vm_init {
+   /**
+* @unmanaged_addr: start address of the kernel managed VA space region
+*/
+   __u64 unmanaged_addr;
+   /**
+* @unmanaged_size: size of the kernel managed VA space region in bytes
+*/
+   __u64 unmanaged_size;
+};
+
+/**
+ * struct drm_nouveau_vm_bind_op - VM_BIND operation
+ *
+ * This structure represents a single VM_BIND operation. UMDs should pass
+ * an array of this structure via struct drm_nouveau_vm_bind's &op_ptr field.
+ */
+struct drm_nouveau_vm_bind_op {
+   /**
+* @op: the operation type
+*/
+   __u32 op;
+/**
+ * @DRM_NOUVEAU_VM_BIND_OP_MAP:
+ *
+ * Map a GEM object to the GPU's VA space. Optionally, the
+ * &DRM_NOUVEAU_VM_BIND_SPARSE flag can be passed to instruct the kernel to
+ * create sparse mappings for the given range.
+ */
+#define DRM_NOUVEAU_VM_BIND_OP_MAP 0x0
+/**
+ * @DRM_NOUVEAU_VM_BIND_OP_UNMAP:
+ *
+ * Unmap an existing mapping in the GPU's VA space. If the region the mapping
+ * is located in is a sparse region, new sparse mappings are created where the
+ * unmapped (memory backed) mapping was mapped previously. To remove a sparse
+ * region the &DRM_NOUVEAU_VM_BIND_SPARSE flag must be set.
+ */
+#define DRM_NOUVEAU_VM_BIND_OP_UNMAP 0x1
+   /**
+* @flags: the flags for a &drm_nouveau_vm_bind_op
+*/
+   __u32 flags;
+/**
+ * @DRM_NOUVEAU_VM_BIND_SPARSE:
+ *
+ * Indicates that an allocated VA space region should be sparse.
+ */
+#define DRM_NOUVEAU_VM_BIND_SPARSE (1 << 8)
+   /**
+* @handle: the handle of the DRM GEM object to map
+*/
+   __u32 handle;
+   /**
+* @pad: 32 bit padding, should be 0
+*/
+   __u32 pad;
+   /**
+* @addr:
+*
+* the address the VA space region or (memory backed) mapping should be
+* mapped to
+*/
+   __u64 addr;
+   /**
+* @bo_offset: the offset within the BO backing the mapping
+*/
+   __u64 bo_offset;
+   /**
+* @range: the size of the requested mapping in bytes
+*/
+   __u64 range;
+};
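To make the operation encoding concrete, here is a small userspace sketch that mirrors the op and flag values from the hunk above (the GEM handle, GPU VA, and range are invented for illustration; the struct name is changed to signal it is not the real uAPI header):

```c
#include <assert.h>
#include <stdint.h>

/* Local mirrors of the uAPI values above -- illustration only. */
#define DRM_NOUVEAU_VM_BIND_OP_MAP   0x0
#define DRM_NOUVEAU_VM_BIND_OP_UNMAP 0x1
#define DRM_NOUVEAU_VM_BIND_SPARSE   (1u << 8)

struct nouveau_bind_op_example {
	uint32_t op;
	uint32_t flags;
	uint32_t handle;
	uint64_t addr;
	uint64_t bo_offset;
	uint64_t range;
};

/* Build a plain (non-sparse) MAP op; handle and addresses are made up. */
static struct nouveau_bind_op_example example_map(void)
{
	struct nouveau_bind_op_example op = {
		.op     = DRM_NOUVEAU_VM_BIND_OP_MAP,
		.flags  = 0,
		.handle = 5,         /* hypothetical GEM handle */
		.addr   = 0x100000,  /* hypothetical GPU VA */
		.range  = 0x10000,
	};
	return op;
}
```

Note that the SPARSE flag deliberately sits above the low byte, so it never collides with the operation-type values.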
+
+/**
+ * struct drm_nouveau_vm_bind - structure for DRM_IOCTL_NOUVEAU_VM_BIND
+ */
+struct drm_nouveau_vm_bind {
+   /**
+* @op_count: the number of &drm_nouveau_vm_bind_op

[PATCH drm-next v5 04/14] drm: debugfs: provide infrastructure to dump a DRM GPU VA space

2023-06-19 Thread Danilo Krummrich
This commit adds a function to dump a DRM GPU VA space and a macro for
drivers to register the struct drm_info_list 'gpuvas' entry.

Most drivers will likely maintain one DRM GPU VA space per struct
drm_file, but there might also be drivers not having a fixed relation
between DRM GPU VA spaces and a DRM core infrastructure, hence we need the
indirection via the driver iterating its maintained DRM GPU VA spaces.
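A driver-side show callback wired up through the new macro might look roughly like the sketch below. This is kernel-context code and not standalone-compilable; `my_device`, `my_vm`, `vm_list`, and `to_my_device()` are hypothetical driver names, not part of this series:

```c
/* Hypothetical driver sketch -- kernel context, illustration only. */
static int my_gpuvas_show(struct seq_file *m, void *data)
{
	struct drm_info_node *node = m->private;
	struct my_device *mydev = to_my_device(node->minor->dev);
	struct my_vm *vm;
	int ret = 0;

	/* The driver iterates the VA spaces it maintains itself. */
	list_for_each_entry(vm, &mydev->vm_list, head) {
		ret = drm_debugfs_gpuva_info(m, &vm->mgr);
		if (ret)
			break;
	}
	return ret;
}

static const struct drm_info_list my_debugfs_list[] = {
	DRM_DEBUGFS_GPUVA_INFO(my_gpuvas_show, NULL),
};
```

The loop is where the indirection described above happens: the core only dumps one manager at a time, and the driver decides which managers exist.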

Signed-off-by: Danilo Krummrich 
---
 drivers/gpu/drm/drm_debugfs.c | 41 +++
 include/drm/drm_debugfs.h | 25 +
 2 files changed, 66 insertions(+)

diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
index 4855230ba2c6..82180fb1c200 100644
--- a/drivers/gpu/drm/drm_debugfs.c
+++ b/drivers/gpu/drm/drm_debugfs.c
@@ -39,6 +39,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "drm_crtc_internal.h"
 #include "drm_internal.h"
@@ -175,6 +176,46 @@ static const struct file_operations drm_debugfs_fops = {
.release = single_release,
 };
 
+/**
+ * drm_debugfs_gpuva_info - dump the given DRM GPU VA space
+ * @m: pointer to the &seq_file to write
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ *
+ * Dumps the GPU VA mappings of a given DRM GPU VA manager.
+ *
+ * For each DRM GPU VA space drivers should call this function from their
+ * &drm_info_list's show callback.
+ *
+ * Returns: 0 on success, -ENODEV if the &mgr is not initialized
+ */
+int drm_debugfs_gpuva_info(struct seq_file *m,
+  struct drm_gpuva_manager *mgr)
+{
+   DRM_GPUVA_ITER(it, mgr, 0);
+   struct drm_gpuva *va, *kva = &mgr->kernel_alloc_node;
+
+   if (!mgr->name)
+   return -ENODEV;
+
+   seq_printf(m, "DRM GPU VA space (%s) [0x%016llx;0x%016llx]\n",
+  mgr->name, mgr->mm_start, mgr->mm_start + mgr->mm_range);
+   seq_printf(m, "Kernel reserved node [0x%016llx;0x%016llx]\n",
+  kva->va.addr, kva->va.addr + kva->va.range);
+   seq_puts(m, "\n");
+   seq_puts(m, " VAs | start              | range              | end                | object             | object offset\n");
+   seq_puts(m, "-------------------------------------------------------------------------------------\n");
+   drm_gpuva_iter_for_each(va, it) {
+   if (unlikely(va == &mgr->kernel_alloc_node))
+   continue;
+
+   seq_printf(m, " | 0x%016llx | 0x%016llx | 0x%016llx | 0x%016llx | 0x%016llx\n",
+  va->va.addr, va->va.range, va->va.addr + va->va.range,
+  (u64)va->gem.obj, va->gem.offset);
+   }
+
+   return 0;
+}
+EXPORT_SYMBOL(drm_debugfs_gpuva_info);
 
 /**
  * drm_debugfs_create_files - Initialize a given set of debugfs files for DRM
diff --git a/include/drm/drm_debugfs.h b/include/drm/drm_debugfs.h
index 7616f457ce70..cb2c1956a214 100644
--- a/include/drm/drm_debugfs.h
+++ b/include/drm/drm_debugfs.h
@@ -34,6 +34,22 @@
 
 #include 
 #include 
+
+#include 
+
+/**
+ * DRM_DEBUGFS_GPUVA_INFO - &drm_info_list entry to dump a GPU VA space
+ * @show: the _info_list's show callback
+ * @data: driver private data
+ *
+ * Drivers should use this macro to define a &drm_info_list entry to provide a
+ * debugfs file for dumping the GPU VA space regions and mappings.
+ *
+ * For each DRM GPU VA space drivers should call drm_debugfs_gpuva_info() from
+ * their @show callback.
+ */
+#define DRM_DEBUGFS_GPUVA_INFO(show, data) {"gpuvas", show, DRIVER_GEM_GPUVA, data}
+
 /**
  * struct drm_info_list - debugfs info list entry
  *
@@ -134,6 +150,9 @@ void drm_debugfs_add_file(struct drm_device *dev, const char *name,
 
 void drm_debugfs_add_files(struct drm_device *dev,
   const struct drm_debugfs_info *files, int count);
+
+int drm_debugfs_gpuva_info(struct seq_file *m,
+  struct drm_gpuva_manager *mgr);
 #else
 static inline void drm_debugfs_create_files(const struct drm_info_list *files,
int count, struct dentry *root,
@@ -155,6 +174,12 @@ static inline void drm_debugfs_add_files(struct drm_device *dev,
 const struct drm_debugfs_info *files,
 int count)
 {}
+
+static inline int drm_debugfs_gpuva_info(struct seq_file *m,
+struct drm_gpuva_manager *mgr)
+{
+   return 0;
+}
 #endif
 
 #endif /* _DRM_DEBUGFS_H_ */
-- 
2.40.1



[PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA mappings

2023-06-19 Thread Danilo Krummrich
Add infrastructure to keep track of GPU virtual address (VA) mappings
with a dedicated VA space manager implementation.

New UAPIs, motivated by Vulkan sparse memory bindings graphics drivers
start implementing, allow userspace applications to request multiple and
arbitrary GPU VA mappings of buffer objects. The DRM GPU VA manager is
intended to serve the following purposes in this context.

1) Provide infrastructure to track GPU VA allocations and mappings,
   making use of the maple_tree.

2) Generically connect GPU VA mappings to their backing buffers, in
   particular DRM GEM objects.

3) Provide a common implementation to perform more complex mapping
   operations on the GPU VA space. In particular splitting and merging
   of GPU VA mappings, e.g. for intersecting mapping requests or partial
   unmap requests.
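To make purpose 3) concrete: when a new mapping request intersects an existing mapping, the manager must split the old mapping around the request. The remainder arithmetic can be modelled in a few lines of plain C. This is a toy model of the idea only, not the drm_gpuva_mgr implementation (which emits remap/unmap operations rather than byte counts):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model: a request [req, req + req_range) intersects an existing
 * mapping [addr, addr + range). Compute how many bytes of the old
 * mapping survive on each side of the request. Illustration only.
 */
static uint64_t split_prev(uint64_t addr, uint64_t range, uint64_t req)
{
	if (req <= addr || req >= addr + range)
		return 0;            /* no left-hand remainder */
	return req - addr;
}

static uint64_t split_next(uint64_t addr, uint64_t range,
			   uint64_t req, uint64_t req_range)
{
	uint64_t req_end = req + req_range, end = addr + range;

	if (req_end <= addr || req_end >= end)
		return 0;            /* no right-hand remainder */
	return end - req_end;
}
```

A request that exactly covers the old mapping leaves no remainder on either side, which corresponds to a plain unmap-and-remap rather than a split.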

Tested-by: Donald Robson 
Suggested-by: Dave Airlie 
Signed-off-by: Danilo Krummrich 
---
 Documentation/gpu/drm-mm.rst|   42 +
 drivers/gpu/drm/Makefile|1 +
 drivers/gpu/drm/drm_gem.c   |3 +
 drivers/gpu/drm/drm_gpuva_mgr.c | 1971 +++
 include/drm/drm_drv.h   |6 +
 include/drm/drm_gem.h   |   52 +
 include/drm/drm_gpuva_mgr.h |  682 +++
 7 files changed, 2757 insertions(+)
 create mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c
 create mode 100644 include/drm/drm_gpuva_mgr.h

diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index a52e6f4117d6..0a9d54e723a8 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -466,6 +466,48 @@ DRM MM Range Allocator Function References
 .. kernel-doc:: drivers/gpu/drm/drm_mm.c
:export:
 
+DRM GPU VA Manager
+==================
+
+Overview
+--------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+   :doc: Overview
+
+Split and Merge
+---------------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+   :doc: Split and Merge
+
+Locking
+-------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+   :doc: Locking
+
+Examples
+--------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+   :doc: Examples
+
+Quirks
+------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+   :doc: Quirks
+
+DRM GPU VA Manager Function References
+--------------------------------------
+
+.. kernel-doc:: include/drm/drm_gpuva_mgr.h
+   :internal:
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+   :export:
+
 DRM Buddy Allocator
 ===================
 
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 414855e2a463..6d6c9dec66e8 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -45,6 +45,7 @@ drm-y := \
drm_vblank.o \
drm_vblank_work.o \
drm_vma_manager.o \
+   drm_gpuva_mgr.o \
drm_writeback.o
 drm-$(CONFIG_DRM_LEGACY) += \
drm_agpsupport.o \
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 1a5a2cd0d4ec..cd878ebddbd0 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -164,6 +164,9 @@ void drm_gem_private_object_init(struct drm_device *dev,
if (!obj->resv)
obj->resv = &obj->_resv;
 
+   if (drm_core_check_feature(dev, DRIVER_GEM_GPUVA))
+   drm_gem_gpuva_init(obj);
+
drm_vma_node_reset(>vma_node);
INIT_LIST_HEAD(>lru_node);
 }
diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuva_mgr.c
new file mode 100644
index ..66989db49cae
--- /dev/null
+++ b/drivers/gpu/drm/drm_gpuva_mgr.c
@@ -0,0 +1,1971 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2022 Red Hat.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Danilo Krummrich 
+ *
+ */
+
+#include 
+#include 
+
+/**
+ * DOC: Overview
+ *
+ * The DRM GPU VA Manager, represented by struct drm_gpuva_manager keeps track
+ * of a GPU's virtual address (VA) space and manages the corresponding virtual
+ * mappings represented 

[PATCH drm-next v5 02/14] maple_tree: split up MA_STATE() macro

2023-06-19 Thread Danilo Krummrich
Split up the MA_STATE() macro such that components using the maple tree
can easily inherit from struct ma_state and build custom tree walk
macros to hide their internals from users.

Example:

struct sample_iterator {
struct ma_state mas;
struct sample_mgr *mgr;
};

\#define SAMPLE_ITERATOR(name, __mgr, start)\
struct sample_iterator name = { \
.mas = MA_STATE_INIT(&(__mgr)->mt, start, 0),   \
.mgr = __mgr,   \
}

\#define sample_iter_for_each_range(it__, entry__, end__) \
mas_for_each(&(it__).mas, entry__, end__)

--

struct sample *sample;
SAMPLE_ITERATOR(si, mgr, min);

sample_iter_for_each_range(&si, sample, max) {
frob(mgr, sample);
}

Reviewed-by: Liam R. Howlett 
Signed-off-by: Danilo Krummrich 
---
 include/linux/maple_tree.h | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index 1fadb5f5978b..87d55334f1c2 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -423,8 +423,8 @@ struct ma_wr_state {
 #define MA_ERROR(err) \
((struct maple_enode *)(((unsigned long)err << 2) | 2UL))
 
-#define MA_STATE(name, mt, first, end) \
-   struct ma_state name = {\
+#define MA_STATE_INIT(mt, first, end)  \
+   {   \
.tree = mt, \
.index = first, \
.last = end,\
@@ -435,6 +435,9 @@ struct ma_wr_state {
.mas_flags = 0, \
}
 
+#define MA_STATE(name, mt, first, end) \
+   struct ma_state name = MA_STATE_INIT(mt, first, end)
+
 #define MA_WR_STATE(name, ma_state, wr_entry)  \
struct ma_wr_state name = { \
.mas = ma_state,\
-- 
2.40.1



[PATCH drm-next v5 01/14] drm: execution context for GEM buffers v4

2023-06-19 Thread Danilo Krummrich
From: Christian König 

This adds the infrastructure for an execution context for GEM buffers
which is similar to the existing TTM's execbuf util and intended to replace
it in the long term.

The basic functionality is that we abstract the necessary loop to lock
many different GEM buffers with automated deadlock and duplicate handling.

v2: drop xarray and use dynamically resized array instead, the locking
overhead is unnecessary and measurable.
v3: drop duplicate tracking, radeon is really the only one needing that.
v4: fixes issues pointed out by Danilo, some typos in comments and a
helper for lock arrays of GEM objects.

Signed-off-by: Christian König 
---
 Documentation/gpu/drm-mm.rst |  12 ++
 drivers/gpu/drm/Kconfig  |   6 +
 drivers/gpu/drm/Makefile |   2 +
 drivers/gpu/drm/drm_exec.c   | 278 +++
 include/drm/drm_exec.h   | 119 +++
 5 files changed, 417 insertions(+)
 create mode 100644 drivers/gpu/drm/drm_exec.c
 create mode 100644 include/drm/drm_exec.h

diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index a79fd3549ff8..a52e6f4117d6 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -493,6 +493,18 @@ DRM Sync Objects
 .. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
:export:
 
+DRM Execution context
+=====================
+
+.. kernel-doc:: drivers/gpu/drm/drm_exec.c
+   :doc: Overview
+
+.. kernel-doc:: include/drm/drm_exec.h
+   :internal:
+
+.. kernel-doc:: drivers/gpu/drm/drm_exec.c
+   :export:
+
 GPU Scheduler
 =============
 
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index afb3b2f5f425..c2f3d234c89e 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -194,6 +194,12 @@ config DRM_TTM
  GPU memory types. Will be enabled automatically if a device driver
  uses it.
 
+config DRM_EXEC
+   tristate
+   depends on DRM
+   help
+ Execution context for command submissions
+
 config DRM_BUDDY
tristate
depends on DRM
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 7a09a89b493b..414855e2a463 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -78,6 +78,8 @@ obj-$(CONFIG_DRM_PANEL_ORIENTATION_QUIRKS) += drm_panel_orientation_quirks.o
 #
 # Memory-management helpers
 #
+#
+obj-$(CONFIG_DRM_EXEC) += drm_exec.o
 
 obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
 
diff --git a/drivers/gpu/drm/drm_exec.c b/drivers/gpu/drm/drm_exec.c
new file mode 100644
index ..18071bff20f4
--- /dev/null
+++ b/drivers/gpu/drm/drm_exec.c
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+
+#include 
+#include 
+#include 
+
+/**
+ * DOC: Overview
+ *
+ * This component mainly abstracts the retry loop necessary for locking
+ * multiple GEM objects while preparing hardware operations (e.g. command
+ * submissions, page table updates etc..).
+ *
+ * If a contention is detected while locking a GEM object the cleanup procedure
+ * unlocks all previously locked GEM objects and locks the contended one first
+ * before locking any further objects.
+ *
+ * After an object is locked fences slots can optionally be reserved on the
+ * dma_resv object inside the GEM object.
+ *
+ * A typical usage pattern should look like this::
+ *
+ * struct drm_gem_object *obj;
+ * struct drm_exec exec;
+ * unsigned long index;
+ * int ret;
+ *
+ * drm_exec_init(&exec, true);
+ * drm_exec_while_not_all_locked(&exec) {
+ * ret = drm_exec_prepare_obj(&exec, boA, 1);
+ * drm_exec_continue_on_contention(&exec);
+ * if (ret)
+ * goto error;
+ *
+ * ret = drm_exec_prepare_obj(&exec, boB, 1);
+ * drm_exec_continue_on_contention(&exec);
+ * if (ret)
+ * goto error;
+ * }
+ *
+ * drm_exec_for_each_locked_object(&exec, index, obj) {
+ * dma_resv_add_fence(obj->resv, fence, DMA_RESV_USAGE_READ);
+ * ...
+ * }
+ * drm_exec_fini(&exec);
+ *
+ * See struct drm_exec for more details.
+ */
+
+/* Dummy value used to initially enter the retry loop */
+#define DRM_EXEC_DUMMY (void*)~0
+
+/* Unlock all objects and drop references */
+static void drm_exec_unlock_all(struct drm_exec *exec)
+{
+   struct drm_gem_object *obj;
+   unsigned long index;
+
+   drm_exec_for_each_locked_object(exec, index, obj) {
+   dma_resv_unlock(obj->resv);
+   drm_gem_object_put(obj);
+   }
+
+   drm_gem_object_put(exec->prelocked);
+   exec->prelocked = NULL;
+}
+
+/**
+ * drm_exec_init - initialize a drm_exec object
+ * @exec: the drm_exec object to initialize
+ * @interruptible: if locks should be acquired interruptible
+ *
+ * Initialize the object and make sure that we can track locked objects.
+ */
+void drm_exec_init(struct drm_exec *exec, bool interruptible)
+{
+   exec->interruptible = interruptible;
+   exec->objects = 

[PATCH drm-next v5 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI

2023-06-19 Thread Danilo Krummrich
This patch series provides a new UAPI for the Nouveau driver in order to
support Vulkan features, such as sparse bindings and sparse residency.

Furthermore, with the DRM GPUVA manager it provides a new DRM core feature to
keep track of GPU virtual address (VA) mappings in a more generic way.

The DRM GPUVA manager is intended to help drivers implement userspace-manageable
GPU VA spaces in reference to the Vulkan API. In order to achieve this goal it
serves the following purposes in this context.

1) Provide infrastructure to track GPU VA allocations and mappings,
   making use of the maple_tree.

2) Generically connect GPU VA mappings to their backing buffers, in
   particular DRM GEM objects.

3) Provide a common implementation to perform more complex mapping
   operations on the GPU VA space. In particular splitting and merging
   of GPU VA mappings, e.g. for intersecting mapping requests or partial
   unmap requests.

The new VM_BIND Nouveau UAPI builds on top of the DRM GPUVA manager, itself
providing the following new interfaces.

1) Initialize a GPU VA space via the new DRM_IOCTL_NOUVEAU_VM_INIT ioctl
   for UMDs to specify the portion of VA space managed by the kernel and
   userspace, respectively.

2) Allocate and free a VA space region as well as bind and unbind memory
   to the GPUs VA space via the new DRM_IOCTL_NOUVEAU_VM_BIND ioctl.

3) Execute push buffers with the new DRM_IOCTL_NOUVEAU_EXEC ioctl.

Both, DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC, make use of the DRM
scheduler to queue jobs and support asynchronous processing with DRM syncobjs
as synchronization mechanism.

By default DRM_IOCTL_NOUVEAU_VM_BIND does synchronous processing,
DRM_IOCTL_NOUVEAU_EXEC supports asynchronous processing only.

The new VM_BIND UAPI for Nouveau makes also use of drm_exec (execution context
for GEM buffers) by Christian König. Since the patch implementing drm_exec was
not yet merged into drm-next it is part of this series, as well as a small fix
for this patch, which was found while testing this series.

This patch series is also available at [1].

There is a Mesa NVK merge request by Dave Airlie [2] implementing the
corresponding userspace parts for this series.

The Vulkan CTS test suite passes the sparse binding and sparse residency test
cases for the new UAPI together with Dave's Mesa work.

There are also some test cases in the igt-gpu-tools project [3] for the new UAPI
and hence the DRM GPU VA manager. However, most of them are testing the DRM GPU
VA manager's logic through Nouveau's new UAPI and should be considered just as
helper for implementation.

However, I absolutely intend to change those test cases to proper kunit test
cases for the DRM GPUVA manager, once and if we agree on its usefulness and
design.

[1] https://gitlab.freedesktop.org/nouvelles/kernel/-/tree/new-uapi-drm-next /
https://gitlab.freedesktop.org/nouvelles/kernel/-/merge_requests/1
[2] https://gitlab.freedesktop.org/nouveau/mesa/-/merge_requests/150/
[3] https://gitlab.freedesktop.org/dakr/igt-gpu-tools/-/tree/wip_nouveau_vm_bind

Changes in V2:
==
  Nouveau:
- Reworked the Nouveau VM_BIND UAPI to avoid memory allocations in fence
  signalling critical sections. Updates to the VA space are split up in three
  separate stages, where only the 2. stage executes in a fence signalling
  critical section:

1. update the VA space, allocate new structures and page tables
2. (un-)map the requested memory bindings
3. free structures and page tables

- Separated generic job scheduler code from specific job implementations.
- Separated the EXEC and VM_BIND implementation of the UAPI.
- Reworked the locking parts of the nvkm/vmm RAW interface, such that
  (un-)map operations can be executed in fence signalling critical sections.

  GPUVA Manager:
- made drm_gpuva_regions optional for users of the GPUVA manager
- allow NULL GEMs for drm_gpuva entries
- switched from drm_mm to maple_tree to track drm_gpuva / drm_gpuva_region
  entries
- provide callbacks for users to allocate custom drm_gpuva_op structures to
  allow inheritance
- added user bits to drm_gpuva_flags
- added a prefetch operation type in order to support generating prefetch
  operations in the same way other operations are generated
- hand the responsibility for mutual exclusion for a GEM's
  drm_gpuva list to the user; simplified corresponding (un-)link functions

  Maple Tree:
- I added two maple tree patches to the series, one to support custom tree
  walk macros and one to hand the locking responsibility to the user of the
  GPUVA manager without pre-defined lockdep checks.

Changes in V3:
==
  Nouveau:
- Reworked the Nouveau VM_BIND UAPI to do the job cleanup (including page
  table cleanup) within a workqueue rather than the job_free() callback of
  the 

[PATCH 5/8] drm/msm/dpu: drop the dpu_core_perf_crtc_update()'s stop_req param

2023-06-19 Thread Dmitry Baryshkov
The stop_req is true only in the dpu_crtc_disable() case, when
crtc->enable has already been set to false. This renders the stop_req
argument useless. Remove it completely.

Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 12 ++--
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h |  3 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c  |  6 +++---
 3 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
index f8d5c87d0915..773e641eab28 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
@@ -277,7 +277,7 @@ static u64 _dpu_core_perf_get_core_clk_rate(struct dpu_kms *kms)
 }
 
 int dpu_core_perf_crtc_update(struct drm_crtc *crtc,
-   int params_changed, bool stop_req)
+ int params_changed)
 {
struct dpu_core_perf_params *new, *old;
bool update_bus = false, update_clk = false;
@@ -301,13 +301,13 @@ int dpu_core_perf_crtc_update(struct drm_crtc *crtc,
dpu_crtc = to_dpu_crtc(crtc);
dpu_cstate = to_dpu_crtc_state(crtc->state);
 
-   DRM_DEBUG_ATOMIC("crtc:%d stop_req:%d core_clk:%llu\n",
-   crtc->base.id, stop_req, kms->perf.core_clk_rate);
+   DRM_DEBUG_ATOMIC("crtc:%d enabled:%d core_clk:%llu\n",
+   crtc->base.id, crtc->enabled, kms->perf.core_clk_rate);
 
old = &dpu_crtc->cur_perf;
new = &dpu_cstate->new_perf;
 
-   if (crtc->enabled && !stop_req) {
+   if (crtc->enabled) {
/*
 * cases for bus bandwidth update.
 * 1. new bandwidth vote - "ab or ib vote" is higher
@@ -337,7 +337,7 @@ int dpu_core_perf_crtc_update(struct drm_crtc *crtc,
}
 
trace_dpu_perf_crtc_update(crtc->base.id, new->bw_ctl,
-   new->core_clk_rate, stop_req, update_bus, update_clk);
+   new->core_clk_rate, !crtc->enabled, update_bus, update_clk);
 
if (update_bus) {
ret = _dpu_core_perf_crtc_update_bus(kms, crtc);
@@ -355,7 +355,7 @@ int dpu_core_perf_crtc_update(struct drm_crtc *crtc,
if (update_clk) {
clk_rate = _dpu_core_perf_get_core_clk_rate(kms);
 
-   trace_dpu_core_perf_update_clk(kms->dev, stop_req, clk_rate);
-   trace_dpu_core_perf_update_clk(kms->dev, !crtc->enabled, clk_rate);
 
clk_rate = min(clk_rate, kms->perf.max_core_clk_rate);
ret = dev_pm_opp_set_rate(&kms->pdev->dev, clk_rate);
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
index 2bf7836f79bb..c29ec72984b8 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
@@ -58,11 +58,10 @@ int dpu_core_perf_crtc_check(struct drm_crtc *crtc,
  * dpu_core_perf_crtc_update - update performance of the given crtc
  * @crtc: Pointer to crtc
  * @params_changed: true if crtc parameters are modified
- * @stop_req: true if this is a stop request
  * return: zero if success, or error code otherwise
  */
 int dpu_core_perf_crtc_update(struct drm_crtc *crtc,
-   int params_changed, bool stop_req);
+ int params_changed);
 
 /**
  * dpu_core_perf_crtc_release_bw - release bandwidth of the given crtc
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
index ff5d306b95ed..214229d11e3e 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
@@ -718,7 +718,7 @@ static void dpu_crtc_frame_event_cb(void *data, u32 event)
 void dpu_crtc_complete_commit(struct drm_crtc *crtc)
 {
trace_dpu_crtc_complete_commit(DRMID(crtc));
-   dpu_core_perf_crtc_update(crtc, 0, false);
+   dpu_core_perf_crtc_update(crtc, 0);
_dpu_crtc_complete_flip(crtc);
 }
 
@@ -884,7 +884,7 @@ static void dpu_crtc_atomic_flush(struct drm_crtc *crtc,
return;
 
/* update performance setting before crtc kickoff */
-   dpu_core_perf_crtc_update(crtc, 1, false);
+   dpu_core_perf_crtc_update(crtc, 1);
 
/*
 * Final plane updates: Give each plane a chance to complete all
@@ -1100,7 +1100,7 @@ static void dpu_crtc_disable(struct drm_crtc *crtc,
atomic_set(_crtc->frame_pending, 0);
}
 
-   dpu_core_perf_crtc_update(crtc, 0, true);
+   dpu_core_perf_crtc_update(crtc, 0);
 
drm_for_each_encoder_mask(encoder, crtc->dev, crtc->state->encoder_mask)
dpu_encoder_register_frame_event_callback(encoder, NULL, NULL);
-- 
2.39.2



[PATCH 7/8] drm/msm/dpu: drop dpu_core_perf_destroy()

2023-06-19 Thread Dmitry Baryshkov
This function does nothing, just clears several data pointers. Drop it
now.

Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 12 
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h |  6 --
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  1 -
 3 files changed, 19 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
index 78a7e3ea27a4..f779ad544347 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
@@ -394,18 +394,6 @@ int dpu_core_perf_debugfs_init(struct dpu_kms *dpu_kms, struct dentry *parent)
 }
 #endif
 
-void dpu_core_perf_destroy(struct dpu_core_perf *perf)
-{
-   if (!perf) {
-   DPU_ERROR("invalid parameters\n");
-   return;
-   }
-
-   perf->max_core_clk_rate = 0;
-   perf->core_clk = NULL;
-   perf->dev = NULL;
-}
-
 int dpu_core_perf_init(struct dpu_core_perf *perf,
struct drm_device *dev,
const struct dpu_perf_cfg *perf_cfg,
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
index e8a7916b6f71..e1198c104b5e 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
@@ -69,12 +69,6 @@ int dpu_core_perf_crtc_update(struct drm_crtc *crtc,
  */
 void dpu_core_perf_crtc_release_bw(struct drm_crtc *crtc);
 
-/**
- * dpu_core_perf_destroy - destroy the given core performance context
- * @perf: Pointer to core performance context
- */
-void dpu_core_perf_destroy(struct dpu_core_perf *perf);
-
 /**
  * dpu_core_perf_init - initialize the given core performance context
  * @perf: Pointer to core performance context
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index 6e62606e32de..4439147d2c35 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -1162,7 +1162,6 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
return 0;
 
 drm_obj_init_err:
-   dpu_core_perf_destroy(&dpu_kms->perf);
 hw_intr_init_err:
 perf_err:
 power_error:
-- 
2.39.2



[PATCH 8/8] drm/msm/dpu: remove unused fields from struct dpu_core_perf

2023-06-19 Thread Dmitry Baryshkov
Remove dpu_core_perf::dev and dpu_core_perf::debugfs_root fields, they
are not used by the code.

Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 2 --
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h | 4 
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   | 2 +-
 3 files changed, 1 insertion(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
index f779ad544347..7f110d15b101 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
@@ -395,11 +395,9 @@ int dpu_core_perf_debugfs_init(struct dpu_kms *dpu_kms, struct dentry *parent)
 #endif
 
 int dpu_core_perf_init(struct dpu_core_perf *perf,
-   struct drm_device *dev,
const struct dpu_perf_cfg *perf_cfg,
struct clk *core_clk)
 {
-   perf->dev = dev;
perf->perf_cfg = perf_cfg;
perf->core_clk = core_clk;
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
index e1198c104b5e..623e2d058695 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
@@ -27,7 +27,6 @@ struct dpu_core_perf_params {
 /**
  * struct dpu_core_perf - definition of core performance context
  * @dev: Pointer to drm device
- * @debugfs_root: top level debug folder
  * @perf_cfg: Platform-specific performance configuration
  * @core_clk: Pointer to the core clock
  * @core_clk_rate: current core clock rate
@@ -36,8 +35,6 @@ struct dpu_core_perf_params {
  * @enable_bw_release: debug control for bandwidth release
  */
 struct dpu_core_perf {
-   struct drm_device *dev;
-   struct dentry *debugfs_root;
const struct dpu_perf_cfg *perf_cfg;
struct clk *core_clk;
u64 core_clk_rate;
@@ -77,7 +74,6 @@ void dpu_core_perf_crtc_release_bw(struct drm_crtc *crtc);
  * @core_clk: pointer to core clock
  */
 int dpu_core_perf_init(struct dpu_core_perf *perf,
-   struct drm_device *dev,
const struct dpu_perf_cfg *perf_cfg,
struct clk *core_clk);
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index 4439147d2c35..5297cec68c9c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -1115,7 +1115,7 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
dpu_kms->hw_vbif[vbif->id] = hw;
}
 
-   rc = dpu_core_perf_init(&dpu_kms->perf, dev, dpu_kms->catalog->perf,
+   rc = dpu_core_perf_init(&dpu_kms->perf, dpu_kms->catalog->perf,
    msm_clk_bulk_get_clock(dpu_kms->clocks, dpu_kms->num_clocks, "core"));
if (rc) {
DPU_ERROR("failed to init perf %d\n", rc);
-- 
2.39.2



[PATCH 2/8] drm/msm/dpu: drop performance tuning modes

2023-06-19 Thread Dmitry Baryshkov
DPU performance module contains code to change performance state
calculations. In addition to normal (sum plane and CRTC requirements),
it can work in 'minimal' or 'fixed' modes. Both modes are impractical,
since they can easily end up with the display underruns. Userspace also
should not depend on these modes availability, since they are tuned
through debugfs, which might not be available.

Drop relevant code to simplify performance state calculations.

Suggested-by: Konrad Dybcio 
Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 99 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h | 19 
 2 files changed, 4 insertions(+), 114 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
index 1d9d83d7b99e..9902febc72c0 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
@@ -17,20 +17,6 @@
 #include "dpu_crtc.h"
 #include "dpu_core_perf.h"
 
-/**
- * enum dpu_perf_mode - performance tuning mode
- * @DPU_PERF_MODE_NORMAL: performance controlled by user mode client
- * @DPU_PERF_MODE_MINIMUM: performance bounded by minimum setting
- * @DPU_PERF_MODE_FIXED: performance bounded by fixed setting
- * @DPU_PERF_MODE_MAX: maximum value, used for error checking
- */
-enum dpu_perf_mode {
-   DPU_PERF_MODE_NORMAL,
-   DPU_PERF_MODE_MINIMUM,
-   DPU_PERF_MODE_FIXED,
-   DPU_PERF_MODE_MAX
-};
-
 /**
  * _dpu_core_perf_calc_bw() - to calculate BW per crtc
  * @kms:  pointer to the dpu_kms
@@ -118,19 +104,9 @@ static void _dpu_core_perf_calc_crtc(struct dpu_kms *kms,
 
memset(perf, 0, sizeof(struct dpu_core_perf_params));
 
-   if (kms->perf.perf_tune.mode == DPU_PERF_MODE_MINIMUM) {
-   perf->bw_ctl = 0;
-   perf->max_per_pipe_ib = 0;
-   perf->core_clk_rate = 0;
-   } else if (kms->perf.perf_tune.mode == DPU_PERF_MODE_FIXED) {
-   perf->bw_ctl = kms->perf.fix_core_ab_vote;
-   perf->max_per_pipe_ib = kms->perf.fix_core_ib_vote;
-   perf->core_clk_rate = kms->perf.fix_core_clk_rate;
-   } else {
-   perf->bw_ctl = _dpu_core_perf_calc_bw(kms, crtc);
-   perf->max_per_pipe_ib = kms->catalog->perf->min_dram_ib;
-   perf->core_clk_rate = _dpu_core_perf_calc_clk(kms, crtc, state);
-   }
+   perf->bw_ctl = _dpu_core_perf_calc_bw(kms, crtc);
+   perf->max_per_pipe_ib = kms->catalog->perf->min_dram_ib;
+   perf->core_clk_rate = _dpu_core_perf_calc_clk(kms, crtc, state);
 
DRM_DEBUG_ATOMIC(
"crtc=%d clk_rate=%llu core_ib=%llu core_ab=%llu\n",
@@ -286,7 +262,7 @@ void dpu_core_perf_crtc_release_bw(struct drm_crtc *crtc)
 
 static u64 _dpu_core_perf_get_core_clk_rate(struct dpu_kms *kms)
 {
-   u64 clk_rate = kms->perf.perf_tune.min_core_clk;
+   u64 clk_rate = 0;
struct drm_crtc *crtc;
struct dpu_crtc_state *dpu_cstate;
 
@@ -300,9 +276,6 @@ static u64 _dpu_core_perf_get_core_clk_rate(struct dpu_kms *kms)
}
}
 
-   if (kms->perf.perf_tune.mode == DPU_PERF_MODE_FIXED)
-   clk_rate = kms->perf.fix_core_clk_rate;
-
DRM_DEBUG_ATOMIC("clk:%llu\n", clk_rate);
 
return clk_rate;
@@ -409,62 +382,6 @@ int dpu_core_perf_crtc_update(struct drm_crtc *crtc,
 
 #ifdef CONFIG_DEBUG_FS
 
-static ssize_t _dpu_core_perf_mode_write(struct file *file,
-   const char __user *user_buf, size_t count, loff_t *ppos)
-{
-   struct dpu_core_perf *perf = file->private_data;
-   const struct dpu_perf_cfg *cfg = perf->catalog->perf;
-   u32 perf_mode = 0;
-   int ret;
-
-   ret = kstrtouint_from_user(user_buf, count, 0, &perf_mode);
-   if (ret)
-   return ret;
-
-   if (perf_mode >= DPU_PERF_MODE_MAX)
-   return -EINVAL;
-
-   if (perf_mode == DPU_PERF_MODE_FIXED) {
-   DRM_INFO("fix performance mode\n");
-   } else if (perf_mode == DPU_PERF_MODE_MINIMUM) {
-   /* run the driver with max clk and BW vote */
-   perf->perf_tune.min_core_clk = perf->max_core_clk_rate;
-   perf->perf_tune.min_bus_vote =
-   (u64) cfg->max_bw_high * 1000;
-   DRM_INFO("minimum performance mode\n");
-   } else if (perf_mode == DPU_PERF_MODE_NORMAL) {
-   /* reset the perf tune params to 0 */
-   perf->perf_tune.min_core_clk = 0;
-   perf->perf_tune.min_bus_vote = 0;
-   DRM_INFO("normal performance mode\n");
-   }
-   perf->perf_tune.mode = perf_mode;
-
-   return count;
-}
-
-static ssize_t _dpu_core_perf_mode_read(struct file *file,
-   char __user *buff, size_t count, loff_t *ppos)
-{
-   struct dpu_core_perf *perf = file->private_data;
-   int len;
-   char buf[128];
-
-   len = 

[PATCH 4/8] drm/msm/dpu: rework indentation in dpu_core_perf

2023-06-19 Thread Dmitry Baryshkov
dpu_core_perf.c contains several multi-line conditions which are hard to
comprehend because of the indentation. Rework the indentation of these
conditions to make them easier to understand.

Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 13 +
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
index ba146af73bc5..f8d5c87d0915 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
@@ -148,8 +148,8 @@ int dpu_core_perf_crtc_check(struct drm_crtc *crtc,
 
drm_for_each_crtc(tmp_crtc, crtc->dev) {
if (tmp_crtc->enabled &&
-   (dpu_crtc_get_client_type(tmp_crtc) ==
-   curr_client_type) && (tmp_crtc != crtc)) {
+   dpu_crtc_get_client_type(tmp_crtc) == curr_client_type &&
+   tmp_crtc != crtc) {
struct dpu_crtc_state *tmp_cstate =
to_dpu_crtc_state(tmp_crtc->state);
 
@@ -194,8 +194,7 @@ static int _dpu_core_perf_crtc_update_bus(struct dpu_kms *kms,
 
drm_for_each_crtc(tmp_crtc, crtc->dev) {
if (tmp_crtc->enabled &&
-   curr_client_type ==
-   dpu_crtc_get_client_type(tmp_crtc)) {
+   curr_client_type == dpu_crtc_get_client_type(tmp_crtc)) {
dpu_cstate = to_dpu_crtc_state(tmp_crtc->state);
 
perf.bw_ctl += dpu_cstate->new_perf.bw_ctl;
@@ -325,10 +324,8 @@ int dpu_core_perf_crtc_update(struct drm_crtc *crtc,
update_bus = true;
}
 
-   if ((params_changed &&
-   (new->core_clk_rate > old->core_clk_rate)) ||
-   (!params_changed &&
-   (new->core_clk_rate < old->core_clk_rate))) {
+   if ((params_changed && new->core_clk_rate > old->core_clk_rate) ||
+   (!params_changed && new->core_clk_rate < old->core_clk_rate)) {
old->core_clk_rate = new->core_clk_rate;
update_clk = true;
}
-- 
2.39.2



[PATCH 6/8] drm/msm/dpu: use dpu_perf_cfg in DPU core_perf code

2023-06-19 Thread Dmitry Baryshkov
Simplify dpu_core_perf code by using only dpu_perf_cfg instead of the
full-featured catalog data.

Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 52 ---
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h |  8 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  2 +-
 3 files changed, 27 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
index 773e641eab28..78a7e3ea27a4 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
@@ -19,11 +19,11 @@
 
 /**
  * _dpu_core_perf_calc_bw() - to calculate BW per crtc
- * @kms:  pointer to the dpu_kms
+ * @perf_cfg: performance configuration
  * @crtc: pointer to a crtc
  * Return: returns aggregated BW for all planes in crtc.
  */
-static u64 _dpu_core_perf_calc_bw(struct dpu_kms *kms,
+static u64 _dpu_core_perf_calc_bw(const struct dpu_perf_cfg *perf_cfg,
struct drm_crtc *crtc)
 {
struct drm_plane *plane;
@@ -39,7 +39,7 @@ static u64 _dpu_core_perf_calc_bw(struct dpu_kms *kms,
crtc_plane_bw += pstate->plane_fetch_bw;
}
 
-   bw_factor = kms->catalog->perf->bw_inefficiency_factor;
+   bw_factor = perf_cfg->bw_inefficiency_factor;
if (bw_factor) {
crtc_plane_bw *= bw_factor;
do_div(crtc_plane_bw, 100);
@@ -50,12 +50,12 @@ static u64 _dpu_core_perf_calc_bw(struct dpu_kms *kms,
 
 /**
  * _dpu_core_perf_calc_clk() - to calculate clock per crtc
- * @kms:  pointer to the dpu_kms
+ * @perf_cfg: performance configuration
  * @crtc: pointer to a crtc
  * @state: pointer to a crtc state
  * Return: returns max clk for all planes in crtc.
  */
-static u64 _dpu_core_perf_calc_clk(struct dpu_kms *kms,
+static u64 _dpu_core_perf_calc_clk(const struct dpu_perf_cfg *perf_cfg,
struct drm_crtc *crtc, struct drm_crtc_state *state)
 {
struct drm_plane *plane;
@@ -76,7 +76,7 @@ static u64 _dpu_core_perf_calc_clk(struct dpu_kms *kms,
crtc_clk = max(pstate->plane_clk, crtc_clk);
}
 
-   clk_factor = kms->catalog->perf->clk_inefficiency_factor;
+   clk_factor = perf_cfg->clk_inefficiency_factor;
if (clk_factor) {
crtc_clk *= clk_factor;
do_div(crtc_clk, 100);
@@ -92,20 +92,20 @@ static struct dpu_kms *_dpu_crtc_get_kms(struct drm_crtc *crtc)
return to_dpu_kms(priv->kms);
 }
 
-static void _dpu_core_perf_calc_crtc(struct dpu_kms *kms,
+static void _dpu_core_perf_calc_crtc(const struct dpu_perf_cfg *perf_cfg,
struct drm_crtc *crtc,
struct drm_crtc_state *state,
struct dpu_core_perf_params *perf)
 {
-   if (!kms || !kms->catalog || !crtc || !state || !perf) {
+   if (!perf_cfg || !crtc || !state || !perf) {
DPU_ERROR("invalid parameters\n");
return;
}
 
memset(perf, 0, sizeof(struct dpu_core_perf_params));
 
-   perf->bw_ctl = _dpu_core_perf_calc_bw(kms, crtc);
-   perf->core_clk_rate = _dpu_core_perf_calc_clk(kms, crtc, state);
+   perf->bw_ctl = _dpu_core_perf_calc_bw(perf_cfg, crtc);
+   perf->core_clk_rate = _dpu_core_perf_calc_clk(perf_cfg, crtc, state);
 
DRM_DEBUG_ATOMIC(
"crtc=%d clk_rate=%llu core_ab=%llu\n",
@@ -122,6 +122,7 @@ int dpu_core_perf_crtc_check(struct drm_crtc *crtc,
struct dpu_crtc_state *dpu_cstate;
struct drm_crtc *tmp_crtc;
struct dpu_kms *kms;
+   const struct dpu_perf_cfg *perf_cfg;
 
if (!crtc || !state) {
DPU_ERROR("invalid crtc\n");
@@ -129,10 +130,7 @@ int dpu_core_perf_crtc_check(struct drm_crtc *crtc,
}
 
kms = _dpu_crtc_get_kms(crtc);
-   if (!kms->catalog) {
-   DPU_ERROR("invalid parameters\n");
-   return 0;
-   }
+   perf_cfg = kms->perf.perf_cfg;
 
/* we only need bandwidth check on real-time clients (interfaces) */
if (dpu_crtc_get_client_type(crtc) == NRT_CLIENT)
@@ -141,7 +139,7 @@ int dpu_core_perf_crtc_check(struct drm_crtc *crtc,
dpu_cstate = to_dpu_crtc_state(state);
 
/* obtain new values */
-   _dpu_core_perf_calc_crtc(kms, crtc, state, &dpu_cstate->new_perf);
+   _dpu_core_perf_calc_crtc(perf_cfg, crtc, state, &dpu_cstate->new_perf);
 
bw_sum_of_intfs = dpu_cstate->new_perf.bw_ctl;
curr_client_type = dpu_crtc_get_client_type(crtc);
@@ -164,7 +162,7 @@ int dpu_core_perf_crtc_check(struct drm_crtc *crtc,
bw = DIV_ROUND_UP_ULL(bw_sum_of_intfs, 1000);
DRM_DEBUG_ATOMIC("calculated bandwidth=%uk\n", bw);
 
-   threshold = kms->catalog->perf->max_bw_high;
+   threshold = perf_cfg->max_bw_high;
 
DRM_DEBUG_ATOMIC("final threshold bw limit = %d\n", threshold);
 
@@ -212,7 +210,7 @@ static int 

[PATCH 1/8] drm/msm/dpu: drop enum dpu_core_perf_data_bus_id

2023-06-19 Thread Dmitry Baryshkov
Drop the leftover of bus-client -> interconnect conversion, the enum
dpu_core_perf_data_bus_id.

Fixes: cb88482e2570 ("drm/msm/dpu: clean up references of DPU custom bus 
scaling")
Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h | 13 -
 1 file changed, 13 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
index e3795995e145..29bb8ee2bc26 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
@@ -14,19 +14,6 @@
 
 #defineDPU_PERF_DEFAULT_MAX_CORE_CLK_RATE  41250
 
-/**
- * enum dpu_core_perf_data_bus_id - data bus identifier
- * @DPU_CORE_PERF_DATA_BUS_ID_MNOC: DPU/MNOC data bus
- * @DPU_CORE_PERF_DATA_BUS_ID_LLCC: MNOC/LLCC data bus
- * @DPU_CORE_PERF_DATA_BUS_ID_EBI: LLCC/EBI data bus
- */
-enum dpu_core_perf_data_bus_id {
-   DPU_CORE_PERF_DATA_BUS_ID_MNOC,
-   DPU_CORE_PERF_DATA_BUS_ID_LLCC,
-   DPU_CORE_PERF_DATA_BUS_ID_EBI,
-   DPU_CORE_PERF_DATA_BUS_ID_MAX,
-};
-
 /**
  * struct dpu_core_perf_params - definition of performance parameters
  * @max_per_pipe_ib: maximum instantaneous bandwidth request
-- 
2.39.2



[PATCH 3/8] drm/msm/dpu: drop dpu_core_perf_params::max_per_pipe_ib

2023-06-19 Thread Dmitry Baryshkov
The max_per_pipe_ib is a constant across all CRTCs and is read from the
catalog. Drop corresponding calculations and read the value directly at
icc_set_bw() time.

Suggested-by: Konrad Dybcio 
Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 17 +
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h |  2 --
 drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c  |  2 --
 3 files changed, 5 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
index 9902febc72c0..ba146af73bc5 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
@@ -105,13 +105,12 @@ static void _dpu_core_perf_calc_crtc(struct dpu_kms *kms,
memset(perf, 0, sizeof(struct dpu_core_perf_params));
 
perf->bw_ctl = _dpu_core_perf_calc_bw(kms, crtc);
-   perf->max_per_pipe_ib = kms->catalog->perf->min_dram_ib;
perf->core_clk_rate = _dpu_core_perf_calc_clk(kms, crtc, state);
 
DRM_DEBUG_ATOMIC(
-   "crtc=%d clk_rate=%llu core_ib=%llu core_ab=%llu\n",
+   "crtc=%d clk_rate=%llu core_ab=%llu\n",
crtc->base.id, perf->core_clk_rate,
-   perf->max_per_pipe_ib, perf->bw_ctl);
+   perf->bw_ctl);
 }
 
 int dpu_core_perf_crtc_check(struct drm_crtc *crtc,
@@ -199,9 +198,6 @@ static int _dpu_core_perf_crtc_update_bus(struct dpu_kms *kms,
dpu_crtc_get_client_type(tmp_crtc)) {
dpu_cstate = to_dpu_crtc_state(tmp_crtc->state);
 
-   perf.max_per_pipe_ib = max(perf.max_per_pipe_ib,
-   dpu_cstate->new_perf.max_per_pipe_ib);
-
perf.bw_ctl += dpu_cstate->new_perf.bw_ctl;
 
DRM_DEBUG_ATOMIC("crtc=%d bw=%llu paths:%d\n",
@@ -217,7 +213,7 @@ static int _dpu_core_perf_crtc_update_bus(struct dpu_kms *kms,
do_div(avg_bw, (kms->num_paths * 1000)); /*Bps_to_icc*/
 
for (i = 0; i < kms->num_paths; i++)
-   icc_set_bw(kms->path[i], avg_bw, perf.max_per_pipe_ib);
+   icc_set_bw(kms->path[i], avg_bw, kms->catalog->perf->min_dram_ib);
 
return ret;
 }
@@ -320,15 +316,12 @@ int dpu_core_perf_crtc_update(struct drm_crtc *crtc,
 * 2. new bandwidth vote - "ab or ib vote" is lower
 *than current vote at end of commit or stop.
 */
-   if ((params_changed && ((new->bw_ctl > old->bw_ctl) ||
-   (new->max_per_pipe_ib > old->max_per_pipe_ib))) ||
-   (!params_changed && ((new->bw_ctl < old->bw_ctl) ||
-   (new->max_per_pipe_ib < old->max_per_pipe_ib)))) {
+   if ((params_changed && new->bw_ctl > old->bw_ctl) ||
+   (!params_changed && new->bw_ctl < old->bw_ctl)) {
		DRM_DEBUG_ATOMIC("crtc=%d p=%d new_bw=%llu,old_bw=%llu\n",
crtc->base.id, params_changed,
new->bw_ctl, old->bw_ctl);
old->bw_ctl = new->bw_ctl;
-   old->max_per_pipe_ib = new->max_per_pipe_ib;
update_bus = true;
}
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
index e02cc2324af2..2bf7836f79bb 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
@@ -16,12 +16,10 @@
 
 /**
  * struct dpu_core_perf_params - definition of performance parameters
- * @max_per_pipe_ib: maximum instantaneous bandwidth request
  * @bw_ctl: arbitrated bandwidth request
  * @core_clk_rate: core clock rate request
  */
 struct dpu_core_perf_params {
-   u64 max_per_pipe_ib;
u64 bw_ctl;
u64 core_clk_rate;
 };
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
index 1edf2b6b0a26..ff5d306b95ed 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
@@ -1400,8 +1400,6 @@ static int dpu_crtc_debugfs_state_show(struct seq_file *s, void *v)
seq_printf(s, "core_clk_rate: %llu\n",
dpu_crtc->cur_perf.core_clk_rate);
seq_printf(s, "bw_ctl: %llu\n", dpu_crtc->cur_perf.bw_ctl);
-   seq_printf(s, "max_per_pipe_ib: %llu\n",
-   dpu_crtc->cur_perf.max_per_pipe_ib);
 
return 0;
 }
-- 
2.39.2



[PATCH 0/8] drm/msm/dpu: cleanup dpu_core_perf module

2023-06-19 Thread Dmitry Baryshkov
Apply several cleanups to the DPU's core_perf module.

Dmitry Baryshkov (8):
  drm/msm/dpu: drop enum dpu_core_perf_data_bus_id
  drm/msm/dpu: drop performance tuning modes
  drm/msm/dpu: drop dpu_core_perf_params::max_per_pipe_ib
  drm/msm/dpu: rework indentation in dpu_core_perf
  drm/msm/dpu: drop the dpu_core_perf_crtc_update()'s stop_req param
  drm/msm/dpu: use dpu_perf_cfg in DPU core_perf code
  drm/msm/dpu: drop dpu_core_perf_destroy()
  drm/msm/dpu: remove unused fields from struct dpu_core_perf

 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 199 --
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h |  55 +
 drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c  |   8 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |   3 +-
 4 files changed, 47 insertions(+), 218 deletions(-)

-- 
2.39.2



Re: [PATCH v4 4/6] dt-bindings: display: stm32-ltdc: add optional st,fb-bpp property

2023-06-19 Thread Rob Herring
On Mon, Jun 19, 2023 at 09:18:25PM +0100, Conor Dooley wrote:
> Hey,
> 
> On Mon, Jun 19, 2023 at 06:55:23PM +0200, Dario Binacchi wrote:
> > Boards that use the STM32F{4,7} series have limited amounts of RAM. The
> > added property allows to size, within certain limits, the memory footprint
> > required by the framebuffer.
> 
> Hmm, this sounds quite a lot like "software policy", since the actual
> display doesn't have these limitations. Rob, Krzysztof?

Indeed. This doesn't belong in DT.

Rob



Re: [PATCH] drm/virtio: conditionally allocate virtio_gpu_fence

2023-06-19 Thread Dmitry Osipenko
On 6/13/23 20:43, Gurchetan Singh wrote:
> We don't want to create a fence for every command submission.  It's
> only necessary when userspace provides a waitable token for submission.
> This could be:
> 
> 1) bo_handles, to be used with VIRTGPU_WAIT
> 2) out_fence_fd, to be used with dma_fence apis
> 3) a ring_idx provided with VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK
>+ DRM event API
> 4) syncobjs in the future
> 
> The use case for just submitting a command to the host, and expecting
> no response.  For example, gfxstream has GFXSTREAM_CONTEXT_PING that
> just wakes up the host side worker threads.  There's also
> CROSS_DOMAIN_CMD_SEND which just sends data to the Wayland server.
> 
> This prevents the need to signal the automatically created
> virtio_gpu_fence.
> 
> Signed-off-by: Gurchetan Singh 
> ---
>  drivers/gpu/drm/virtio/virtgpu_submit.c | 10 +++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/virtio/virtgpu_submit.c b/drivers/gpu/drm/virtio/virtgpu_submit.c
> index cf3c04b16a7a..add106c06ab2 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_submit.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
> @@ -168,9 +168,13 @@ static int virtio_gpu_init_submit(struct 
> virtio_gpu_submit *submit,
>  
>   memset(submit, 0, sizeof(*submit));
>  
> - out_fence = virtio_gpu_fence_alloc(vgdev, fence_ctx, ring_idx);
> - if (!out_fence)
> - return -ENOMEM;
> + if ((exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_OUT) ||
> + ((exbuf->flags & VIRTGPU_EXECBUF_RING_IDX) &&
> + (vfpriv->ring_idx_mask & BIT_ULL(ring_idx))) ||
> + exbuf->num_bo_handles)
> + out_fence = virtio_gpu_fence_alloc(vgdev, fence_ctx, ring_idx);
> + else
> + out_fence = NULL;
>  
>   err = virtio_gpu_fence_event_create(dev, file, out_fence, ring_idx);
>   if (err) {

Looks okay, code indentation may be improved a tad to make it more eye-friendly:

+   if ((exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_OUT) ||
+  ((exbuf->flags & VIRTGPU_EXECBUF_RING_IDX) && (vfpriv->ring_idx_mask & BIT_ULL(ring_idx))) ||
+exbuf->num_bo_handles)
+   out_fence = virtio_gpu_fence_alloc(vgdev, fence_ctx, ring_idx);
+   else
+   out_fence = NULL;

Checkpatch will complain about this variant, but the complaint can be ignored 
in this case.

-- 
Best regards,
Dmitry



Re: [PATCH] drm/msm: Fix typo in comment

2023-06-19 Thread Dmitry Baryshkov

On 18/06/2023 17:54, zhumao...@208suo.com wrote:

Fix typo in comment of msm_gem.c.

Signed-off-by: Zhu Mao 
---
  drivers/gpu/drm/msm/msm_gem.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)


This patch doesn't apply. Please use git send-email to send patches.

--
With best wishes
Dmitry



Re: [PATCH v3 2/2] drm/msm/dpu: remove struct drm_dsc_config from struct msm_display_info

2023-06-19 Thread Dmitry Baryshkov

On 14/06/2023 01:19, Kuogee Hsieh wrote:

ince struct drm_dsc_config is stored at atomic_enable() instead
of display setup time during boot up, saving struct drm_dsc_config
at struct msm_display_info is not necessary. Lets drop the dsc member
from struct msm_display_info.


With the 'S' in 'Since' brought back in place:

Reviewed-by: Dmitry Baryshkov 



Signed-off-by: Kuogee Hsieh 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 2 --
  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h | 2 --
  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 2 --
  3 files changed, 6 deletions(-)


--
With best wishes
Dmitry



Re: [PATCH v3 1/2] drm/msm/dpu: retrieve DSI DSC struct through priv->dsi[0]

2023-06-19 Thread Dmitry Baryshkov

On 14/06/2023 01:19, Kuogee Hsieh wrote:

Currently struct drm_dsc_config for DSI is populated at display
setup during system boot up. This mechanism works fine with
embedded display but not for pluggable displays as the
struct drm_dsc_config will become stale once external display
is unplugged.

Move storing of DSI DSC struct to atomic_enable() so that same
mechanism will work for both embedded display and pluggable
displays.

Signed-off-by: Kuogee Hsieh 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 42 -
  1 file changed, 30 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
index 2e1873d..e00cd39 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
@@ -543,11 +543,24 @@ bool dpu_encoder_use_dsc_merge(struct drm_encoder *drm_enc)
return (num_dsc > 0) && (num_dsc > intf_count);
  }
  
+static struct drm_dsc_config *dpu_encoder_get_dsc_config(struct drm_encoder *drm_enc)

+{
+   struct msm_drm_private *priv = drm_enc->dev->dev_private;
+   struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(drm_enc);
+   int index = dpu_enc->disp_info.h_tile_instance[0];
+
+if (dpu_enc->disp_info.intf_type == INTF_DSI)
+   return msm_dsi_get_dsc_config(priv->dsi[index]);


Wrong indentation.


+
+   return NULL;
+}
+   


A string of 4 tabs causes checkpatch.pl to report an error.


  static struct msm_display_topology dpu_encoder_get_topology(
struct dpu_encoder_virt *dpu_enc,
struct dpu_kms *dpu_kms,
struct drm_display_mode *mode,
-   struct drm_crtc_state *crtc_state)
+   struct drm_crtc_state *crtc_state,
+   struct drm_dsc_config *dsc)
  {
struct msm_display_topology topology = {0};
int i, intf_count = 0;
@@ -579,7 +592,7 @@ static struct msm_display_topology dpu_encoder_get_topology(
  
  	topology.num_intf = intf_count;
  
-	if (dpu_enc->dsc) {

+   if (dsc) {
/*
 * In case of Display Stream Compression (DSC), we would use
 * 2 DSC encoders, 2 layer mixers and 1 interface
@@ -605,6 +618,7 @@ static int dpu_encoder_virt_atomic_check(
struct drm_display_mode *adj_mode;
struct msm_display_topology topology;
struct dpu_global_state *global_state;
+   struct drm_dsc_config *dsc;
int i = 0;
int ret = 0;
  
@@ -640,7 +654,9 @@ static int dpu_encoder_virt_atomic_check(

}
}
  
-	topology = dpu_encoder_get_topology(dpu_enc, dpu_kms, adj_mode, crtc_state);

+   dsc = dpu_encoder_get_dsc_config(drm_enc);
+
+   topology = dpu_encoder_get_topology(dpu_enc, dpu_kms, adj_mode, crtc_state, dsc);
  
  	/*

 * Release and Allocate resources on every modeset
@@ -1072,14 +1088,12 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc,
dpu_enc->hw_pp[i] = i < num_pp ? to_dpu_hw_pingpong(hw_pp[i])
: NULL;
  
-	if (dpu_enc->dsc) {

-   num_dsc = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state,
-   drm_enc->base.id, DPU_HW_BLK_DSC,
-   hw_dsc, ARRAY_SIZE(hw_dsc));
-   for (i = 0; i < num_dsc; i++) {
-   dpu_enc->hw_dsc[i] = to_dpu_hw_dsc(hw_dsc[i]);
-   dsc_mask |= BIT(dpu_enc->hw_dsc[i]->idx - DSC_0);
-   }
+   num_dsc = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state,
+   drm_enc->base.id, DPU_HW_BLK_DSC,
+   hw_dsc, ARRAY_SIZE(hw_dsc));
+   for (i = 0; i < num_dsc; i++) {
+   dpu_enc->hw_dsc[i] = to_dpu_hw_dsc(hw_dsc[i]);
+   dsc_mask |= BIT(dpu_enc->hw_dsc[i]->idx - DSC_0);
}
  
  	dpu_enc->dsc_mask = dsc_mask;

@@ -1187,6 +1201,8 @@ static void dpu_encoder_virt_atomic_enable(struct drm_encoder *drm_enc,
  
  	dpu_enc = to_dpu_encoder_virt(drm_enc);
  
+	dpu_enc->dsc = dpu_encoder_get_dsc_config(drm_enc);

+
	mutex_lock(&dpu_enc->enc_lock);
	cur_mode = &dpu_enc->base.crtc->state->adjusted_mode;
  
@@ -2109,8 +2125,10 @@ void dpu_encoder_helper_phys_cleanup(struct dpu_encoder_phys *phys_enc)

phys_enc->hw_pp->merge_3d->idx);
}
  
-	if (dpu_enc->dsc)

+   if (dpu_enc->dsc) {
dpu_encoder_unprep_dsc(dpu_enc);
+   dpu_enc->dsc = NULL;
+   }
  
  	intf_cfg.stream_sel = 0; /* Don't care value for video mode */

intf_cfg.mode_3d = dpu_encoder_helper_get_3d_blend_mode(phys_enc);


--
With best wishes
Dmitry



[PATCH v4 19/19] drm/msm/dpu: drop empty features mask INTF_SDM845_MASK

2023-06-19 Thread Dmitry Baryshkov
The INTF_SDM845_MASK features mask is zero. Drop it completely.

Reviewed-by: Marijn Suijten 
Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h | 4 
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h  | 4 
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c  | 2 --
 3 files changed, 10 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
index 4ce25ed4e36f..7d87dc2d7b1b 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
@@ -244,7 +244,6 @@ static const struct dpu_intf_cfg msm8998_intf[] = {
{
.name = "intf_0", .id = INTF_0,
.base = 0x6a000, .len = 0x280,
-   .features = INTF_SDM845_MASK,
.type = INTF_DP,
.controller_id = MSM_DP_CONTROLLER_0,
.prog_fetch_lines_worst_case = 21,
@@ -254,7 +253,6 @@ static const struct dpu_intf_cfg msm8998_intf[] = {
}, {
.name = "intf_1", .id = INTF_1,
.base = 0x6a800, .len = 0x280,
-   .features = INTF_SDM845_MASK,
.type = INTF_DSI,
.controller_id = MSM_DSI_CONTROLLER_0,
.prog_fetch_lines_worst_case = 21,
@@ -264,7 +262,6 @@ static const struct dpu_intf_cfg msm8998_intf[] = {
}, {
.name = "intf_2", .id = INTF_2,
.base = 0x6b000, .len = 0x280,
-   .features = INTF_SDM845_MASK,
.type = INTF_DSI,
.controller_id = MSM_DSI_CONTROLLER_1,
.prog_fetch_lines_worst_case = 21,
@@ -274,7 +271,6 @@ static const struct dpu_intf_cfg msm8998_intf[] = {
}, {
.name = "intf_3", .id = INTF_3,
.base = 0x6b800, .len = 0x280,
-   .features = INTF_SDM845_MASK,
.type = INTF_HDMI,
.prog_fetch_lines_worst_case = 21,
.intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30),
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
index 5ad82b109ebb..66e3573eb613 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
@@ -260,7 +260,6 @@ static const struct dpu_intf_cfg sdm845_intf[] = {
{
.name = "intf_0", .id = INTF_0,
.base = 0x6a000, .len = 0x280,
-   .features = INTF_SDM845_MASK,
.type = INTF_DP,
.controller_id = MSM_DP_CONTROLLER_0,
.prog_fetch_lines_worst_case = 24,
@@ -270,7 +269,6 @@ static const struct dpu_intf_cfg sdm845_intf[] = {
}, {
.name = "intf_1", .id = INTF_1,
.base = 0x6a800, .len = 0x280,
-   .features = INTF_SDM845_MASK,
.type = INTF_DSI,
.controller_id = MSM_DSI_CONTROLLER_0,
.prog_fetch_lines_worst_case = 24,
@@ -280,7 +278,6 @@ static const struct dpu_intf_cfg sdm845_intf[] = {
}, {
.name = "intf_2", .id = INTF_2,
.base = 0x6b000, .len = 0x280,
-   .features = INTF_SDM845_MASK,
.type = INTF_DSI,
.controller_id = MSM_DSI_CONTROLLER_1,
.prog_fetch_lines_worst_case = 24,
@@ -290,7 +287,6 @@ static const struct dpu_intf_cfg sdm845_intf[] = {
}, {
.name = "intf_3", .id = INTF_3,
.base = 0x6b800, .len = 0x280,
-   .features = INTF_SDM845_MASK,
.type = INTF_DP,
.controller_id = MSM_DP_CONTROLLER_1,
.prog_fetch_lines_worst_case = 24,
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
index 4a18fc66a412..3efa22429e5f 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
@@ -95,8 +95,6 @@
 
 #define DSPP_SC7180_MASK BIT(DPU_DSPP_PCC)
 
-#define INTF_SDM845_MASK (0)
-
 #define INTF_SC7180_MASK \
(BIT(DPU_INTF_INPUT_CTRL) | \
 BIT(DPU_INTF_TE) | \
-- 
2.39.2



[PATCH v4 17/19] drm/msm/dpu: inline INTF_BLK and INTF_BLK_DSI_TE macros

2023-06-19 Thread Dmitry Baryshkov
To simplify making changes to the hardware block definitions, expand
corresponding macros. This way, all the changes are more obvious and
visible in the source files.

Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 .../msm/disp/dpu1/catalog/dpu_3_0_msm8998.h   |  52 ++--
 .../msm/disp/dpu1/catalog/dpu_4_0_sdm845.h|  53 ++--
 .../msm/disp/dpu1/catalog/dpu_5_0_sm8150.h|  55 ++--
 .../msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h   |  82 +---
 .../msm/disp/dpu1/catalog/dpu_6_0_sm8250.h|  55 ++--
 .../msm/disp/dpu1/catalog/dpu_6_2_sc7180.h|  28 +++-
 .../msm/disp/dpu1/catalog/dpu_6_3_sm6115.h|  15 ++-
 .../msm/disp/dpu1/catalog/dpu_6_4_sm6350.h|  28 +++-
 .../msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h   |  15 ++-
 .../msm/disp/dpu1/catalog/dpu_6_9_sm6375.h|  15 ++-
 .../msm/disp/dpu1/catalog/dpu_7_0_sm8350.h|  55 ++--
 .../msm/disp/dpu1/catalog/dpu_7_2_sc7280.h|  41 --
 .../msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h  | 120 +-
 .../msm/disp/dpu1/catalog/dpu_8_1_sm8450.h|  55 ++--
 .../msm/disp/dpu1/catalog/dpu_9_0_sm8550.h|  55 ++--
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c|  30 -
 16 files changed, 545 insertions(+), 209 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
index 9181d3ef8013..4ce25ed4e36f 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
@@ -241,18 +241,46 @@ static const struct dpu_dspp_cfg msm8998_dspp[] = {
 };
 
 static const struct dpu_intf_cfg msm8998_intf[] = {
-   INTF_BLK("intf_0", INTF_0, 0x6a000, 0x280, INTF_DP, MSM_DP_CONTROLLER_0, 21, INTF_SDM845_MASK,
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 24),
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 25)),
-   INTF_BLK("intf_1", INTF_1, 0x6a800, 0x280, INTF_DSI, MSM_DSI_CONTROLLER_0, 21, INTF_SDM845_MASK,
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 26),
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 27)),
-   INTF_BLK("intf_2", INTF_2, 0x6b000, 0x280, INTF_DSI, MSM_DSI_CONTROLLER_1, 21, INTF_SDM845_MASK,
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 28),
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 29)),
-   INTF_BLK("intf_3", INTF_3, 0x6b800, 0x280, INTF_HDMI, 0, 21, INTF_SDM845_MASK,
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30),
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31)),
+   {
+   .name = "intf_0", .id = INTF_0,
+   .base = 0x6a000, .len = 0x280,
+   .features = INTF_SDM845_MASK,
+   .type = INTF_DP,
+   .controller_id = MSM_DP_CONTROLLER_0,
+   .prog_fetch_lines_worst_case = 21,
+   .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 24),
+   .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 25),
+   .intr_tear_rd_ptr = -1,
+   }, {
+   .name = "intf_1", .id = INTF_1,
+   .base = 0x6a800, .len = 0x280,
+   .features = INTF_SDM845_MASK,
+   .type = INTF_DSI,
+   .controller_id = MSM_DSI_CONTROLLER_0,
+   .prog_fetch_lines_worst_case = 21,
+   .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 26),
+   .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 27),
+   .intr_tear_rd_ptr = -1,
+   }, {
+   .name = "intf_2", .id = INTF_2,
+   .base = 0x6b000, .len = 0x280,
+   .features = INTF_SDM845_MASK,
+   .type = INTF_DSI,
+   .controller_id = MSM_DSI_CONTROLLER_1,
+   .prog_fetch_lines_worst_case = 21,
+   .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 28),
+   .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 29),
+   .intr_tear_rd_ptr = -1,
+   }, {
+   .name = "intf_3", .id = INTF_3,
+   .base = 0x6b800, .len = 0x280,
+   .features = INTF_SDM845_MASK,
+   .type = INTF_HDMI,
+   .prog_fetch_lines_worst_case = 21,
+   .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30),
+   .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31),
+   .intr_tear_rd_ptr = -1,
+   },
 };
 
 static const struct dpu_perf_cfg msm8998_perf_data = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
index 8119a81ff260..5ad82b109ebb 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
@@ -257,18 +257,47 @@ static const struct dpu_dsc_cfg sdm845_dsc[] = {
 };
 
 static const struct dpu_intf_cfg sdm845_intf[] = {
-   INTF_BLK("intf_0", INTF_0, 0x6a000, 0x280, INTF_DP, 
MSM_DP_CONTROLLER_0, 24, 

[PATCH v4 14/19] drm/msm/dpu: inline MERGE_3D_BLK macros

2023-06-19 Thread Dmitry Baryshkov
To simplify making changes to the hardware block definitions, expand the
corresponding macros. This way all the changes are more obvious and
visible in the source files.

Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 .../msm/disp/dpu1/catalog/dpu_5_0_sm8150.h| 16 +++---
 .../msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h   | 16 +++---
 .../msm/disp/dpu1/catalog/dpu_6_0_sm8250.h| 16 +++---
 .../msm/disp/dpu1/catalog/dpu_7_0_sm8350.h| 16 +++---
 .../msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h  | 16 +++---
 .../msm/disp/dpu1/catalog/dpu_8_1_sm8450.h| 21 +++
 .../msm/disp/dpu1/catalog/dpu_9_0_sm8550.h| 21 +++
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c| 11 --
 8 files changed, 99 insertions(+), 34 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
index 9b2de5986e82..0e09e759dc99 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
@@ -239,9 +239,19 @@ static const struct dpu_pingpong_cfg sm8150_pp[] = {
 };
 
 static const struct dpu_merge_3d_cfg sm8150_merge_3d[] = {
-   MERGE_3D_BLK("merge_3d_0", MERGE_3D_0, 0x83000),
-   MERGE_3D_BLK("merge_3d_1", MERGE_3D_1, 0x83100),
-   MERGE_3D_BLK("merge_3d_2", MERGE_3D_2, 0x83200),
+   {
+   .name = "merge_3d_0", .id = MERGE_3D_0,
+   .base = 0x83000, .len = 0x8,
+   .features = MERGE_3D_SM8150_MASK,
+   }, {
+   .name = "merge_3d_1", .id = MERGE_3D_1,
+   .base = 0x83100, .len = 0x8,
+   .features = MERGE_3D_SM8150_MASK,
+   }, {
+   .name = "merge_3d_2", .id = MERGE_3D_2,
+   .base = 0x83200, .len = 0x8,
+   .features = MERGE_3D_SM8150_MASK,
+   },
 };
 
 static const struct dpu_dsc_cfg sm8150_dsc[] = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
index 683602e54c0e..4d2b0409a244 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
@@ -238,9 +238,19 @@ static const struct dpu_pingpong_cfg sc8180x_pp[] = {
 };
 
 static const struct dpu_merge_3d_cfg sc8180x_merge_3d[] = {
-   MERGE_3D_BLK("merge_3d_0", MERGE_3D_0, 0x83000),
-   MERGE_3D_BLK("merge_3d_1", MERGE_3D_1, 0x83100),
-   MERGE_3D_BLK("merge_3d_2", MERGE_3D_2, 0x83200),
+   {
+   .name = "merge_3d_0", .id = MERGE_3D_0,
+   .base = 0x83000, .len = 0x8,
+   .features = MERGE_3D_SM8150_MASK,
+   }, {
+   .name = "merge_3d_1", .id = MERGE_3D_1,
+   .base = 0x83100, .len = 0x8,
+   .features = MERGE_3D_SM8150_MASK,
+   }, {
+   .name = "merge_3d_2", .id = MERGE_3D_2,
+   .base = 0x83200, .len = 0x8,
+   .features = MERGE_3D_SM8150_MASK,
+   },
 };
 
 static const struct dpu_dsc_cfg sc8180x_dsc[] = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
index a98d63f6c47c..50f857565dbf 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
@@ -239,9 +239,19 @@ static const struct dpu_pingpong_cfg sm8250_pp[] = {
 };
 
 static const struct dpu_merge_3d_cfg sm8250_merge_3d[] = {
-   MERGE_3D_BLK("merge_3d_0", MERGE_3D_0, 0x83000),
-   MERGE_3D_BLK("merge_3d_1", MERGE_3D_1, 0x83100),
-   MERGE_3D_BLK("merge_3d_2", MERGE_3D_2, 0x83200),
+   {
+   .name = "merge_3d_0", .id = MERGE_3D_0,
+   .base = 0x83000, .len = 0x8,
+   .features = MERGE_3D_SM8150_MASK,
+   }, {
+   .name = "merge_3d_1", .id = MERGE_3D_1,
+   .base = 0x83100, .len = 0x8,
+   .features = MERGE_3D_SM8150_MASK,
+   }, {
+   .name = "merge_3d_2", .id = MERGE_3D_2,
+   .base = 0x83200, .len = 0x8,
+   .features = MERGE_3D_SM8150_MASK,
+   },
 };
 
 static const struct dpu_dsc_cfg sm8250_dsc[] = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
index 8a9bfc4af72a..0added438239 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
@@ -237,9 +237,19 @@ static const struct dpu_pingpong_cfg sm8350_pp[] = {
 };
 
 static const struct dpu_merge_3d_cfg sm8350_merge_3d[] = {
-   MERGE_3D_BLK("merge_3d_0", MERGE_3D_0, 0x4e000),
-   MERGE_3D_BLK("merge_3d_1", MERGE_3D_1, 0x4f000),
-   MERGE_3D_BLK("merge_3d_2", MERGE_3D_2, 0x50000),
+   {
+   .name = "merge_3d_0", .id = MERGE_3D_0,
+   .base = 

[PATCH v4 16/19] drm/msm/dpu: inline WB_BLK macros

2023-06-19 Thread Dmitry Baryshkov
To simplify making changes to the hardware block definitions, expand the
corresponding macros. This way all the changes are more obvious and
visible in the source files.

Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 .../drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h | 14 --
 .../drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h | 14 --
 .../drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h | 14 --
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c | 18 --
 4 files changed, 36 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
index c8f4c6326a1a..9148d7da62e4 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
@@ -323,8 +323,18 @@ static const struct dpu_intf_cfg sm8250_intf[] = {
 };
 
 static const struct dpu_wb_cfg sm8250_wb[] = {
-   WB_BLK("wb_2", WB_2, 0x65000, WB_SM8250_MASK, DPU_CLK_CTRL_WB2, 6,
-   VBIF_RT, MDP_SSPP_TOP0_INTR, 4096, 4),
+   {
+   .name = "wb_2", .id = WB_2,
+   .base = 0x65000, .len = 0x2c8,
+   .features = WB_SM8250_MASK,
+   .format_list = wb2_formats,
+   .num_formats = ARRAY_SIZE(wb2_formats),
+   .clk_ctrl = DPU_CLK_CTRL_WB2,
+   .xin_id = 6,
+   .vbif_idx = VBIF_RT,
+   .maxlinewidth = 4096,
+   .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4),
+   },
 };
 
 static const struct dpu_perf_cfg sm8250_perf_data = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
index d7d117e3af36..904c758a60df 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
@@ -148,8 +148,18 @@ static const struct dpu_intf_cfg sc7180_intf[] = {
 };
 
 static const struct dpu_wb_cfg sc7180_wb[] = {
-   WB_BLK("wb_2", WB_2, 0x65000, WB_SM8250_MASK, DPU_CLK_CTRL_WB2, 6,
-   VBIF_RT, MDP_SSPP_TOP0_INTR, 4096, 4),
+   {
+   .name = "wb_2", .id = WB_2,
+   .base = 0x65000, .len = 0x2c8,
+   .features = WB_SM8250_MASK,
+   .format_list = wb2_formats,
+   .num_formats = ARRAY_SIZE(wb2_formats),
+   .clk_ctrl = DPU_CLK_CTRL_WB2,
+   .xin_id = 6,
+   .vbif_idx = VBIF_RT,
+   .maxlinewidth = 4096,
+   .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4),
+   },
 };
 
 static const struct dpu_perf_cfg sc7180_perf_data = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
index 3b67010f336b..7b5c9a77b102 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
@@ -176,8 +176,18 @@ static const struct dpu_dsc_cfg sc7280_dsc[] = {
 };
 
 static const struct dpu_wb_cfg sc7280_wb[] = {
-   WB_BLK("wb_2", WB_2, 0x65000, WB_SM8250_MASK, DPU_CLK_CTRL_WB2, 6,
-   VBIF_RT, MDP_SSPP_TOP0_INTR, 4096, 4),
+   {
+   .name = "wb_2", .id = WB_2,
+   .base = 0x65000, .len = 0x2c8,
+   .features = WB_SM8250_MASK,
+   .format_list = wb2_formats,
+   .num_formats = ARRAY_SIZE(wb2_formats),
+   .clk_ctrl = DPU_CLK_CTRL_WB2,
+   .xin_id = 6,
+   .vbif_idx = VBIF_RT,
+   .maxlinewidth = 4096,
+   .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4),
+   },
 };
 
 static const struct dpu_intf_cfg sc7280_intf[] = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
index 3ea63ca358a4..d2bca1ec0e63 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
@@ -493,24 +493,6 @@ static const struct dpu_dsc_sub_blks dsc_sblk_1 = {
.intr_tear_rd_ptr = _tear_rd_ptr, \
}
 
-/*
- * Writeback blocks config
- */
-#define WB_BLK(_name, _id, _base, _features, _clk_ctrl, \
-   __xin_id, vbif_id, _reg, _max_linewidth, _wb_done_bit) \
-   { \
-   .name = _name, .id = _id, \
-   .base = _base, .len = 0x2c8, \
-   .features = _features, \
-   .format_list = wb2_formats, \
-   .num_formats = ARRAY_SIZE(wb2_formats), \
-   .clk_ctrl = _clk_ctrl, \
-   .xin_id = __xin_id, \
-   .vbif_idx = vbif_id, \
-   .maxlinewidth = _max_linewidth, \
-   .intr_wb_done = DPU_IRQ_IDX(_reg, _wb_done_bit) \
-   }
-
 /*
  * VBIF sub blocks 

[PATCH v4 12/19] drm/msm/dpu: inline LM_BLK macros

2023-06-19 Thread Dmitry Baryshkov
To simplify making changes to the hardware block definitions, expand the
corresponding macros. This way all the changes are more obvious and
visible in the source files.

Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 .../msm/disp/dpu1/catalog/dpu_3_0_msm8998.h   | 55 +
 .../msm/disp/dpu1/catalog/dpu_4_0_sdm845.h| 57 ++
 .../msm/disp/dpu1/catalog/dpu_5_0_sm8150.h| 57 ++
 .../msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h   | 57 ++
 .../msm/disp/dpu1/catalog/dpu_6_0_sm8250.h| 57 ++
 .../msm/disp/dpu1/catalog/dpu_6_2_sc7180.h| 20 +--
 .../msm/disp/dpu1/catalog/dpu_6_3_sm6115.h| 10 +++-
 .../msm/disp/dpu1/catalog/dpu_6_4_sm6350.h| 21 +--
 .../msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h   | 10 +++-
 .../msm/disp/dpu1/catalog/dpu_6_9_sm6375.h| 11 +++-
 .../msm/disp/dpu1/catalog/dpu_7_0_sm8350.h| 57 ++
 .../msm/disp/dpu1/catalog/dpu_7_2_sc7280.h| 28 +++--
 .../msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h  | 53 +++--
 .../msm/disp/dpu1/catalog/dpu_8_1_sm8450.h| 59 +++
 .../msm/disp/dpu1/catalog/dpu_9_0_sm8550.h| 57 ++
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c| 11 
 16 files changed, 487 insertions(+), 133 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
index 6b254753774c..a07c68744b29 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
@@ -139,18 +139,49 @@ static const struct dpu_sspp_cfg msm8998_sspp[] = {
 };
 
 static const struct dpu_lm_cfg msm8998_lm[] = {
-   LM_BLK("lm_0", LM_0, 0x44000, MIXER_MSM8998_MASK,
-   &msm8998_lm_sblk, PINGPONG_0, LM_1, DSPP_0),
-   LM_BLK("lm_1", LM_1, 0x45000, MIXER_MSM8998_MASK,
-   &msm8998_lm_sblk, PINGPONG_1, LM_0, DSPP_1),
-   LM_BLK("lm_2", LM_2, 0x46000, MIXER_MSM8998_MASK,
-   &msm8998_lm_sblk, PINGPONG_2, LM_5, 0),
-   LM_BLK("lm_3", LM_3, 0x47000, MIXER_MSM8998_MASK,
-   &msm8998_lm_sblk, PINGPONG_NONE, 0, 0),
-   LM_BLK("lm_4", LM_4, 0x48000, MIXER_MSM8998_MASK,
-   &msm8998_lm_sblk, PINGPONG_NONE, 0, 0),
-   LM_BLK("lm_5", LM_5, 0x49000, MIXER_MSM8998_MASK,
-   &msm8998_lm_sblk, PINGPONG_3, LM_2, 0),
+   {
+   .name = "lm_0", .id = LM_0,
+   .base = 0x44000, .len = 0x320,
+   .features = MIXER_MSM8998_MASK,
+   .sblk = &msm8998_lm_sblk,
+   .lm_pair = LM_1,
+   .pingpong = PINGPONG_0,
+   .dspp = DSPP_0,
+   }, {
+   .name = "lm_1", .id = LM_1,
+   .base = 0x45000, .len = 0x320,
+   .features = MIXER_MSM8998_MASK,
+   .sblk = &msm8998_lm_sblk,
+   .lm_pair = LM_0,
+   .pingpong = PINGPONG_1,
+   .dspp = DSPP_1,
+   }, {
+   .name = "lm_2", .id = LM_2,
+   .base = 0x46000, .len = 0x320,
+   .features = MIXER_MSM8998_MASK,
+   .sblk = &msm8998_lm_sblk,
+   .lm_pair = LM_5,
+   .pingpong = PINGPONG_2,
+   }, {
+   .name = "lm_3", .id = LM_3,
+   .base = 0x47000, .len = 0x320,
+   .features = MIXER_MSM8998_MASK,
+   .sblk = &msm8998_lm_sblk,
+   .pingpong = PINGPONG_NONE,
+   }, {
+   .name = "lm_4", .id = LM_4,
+   .base = 0x48000, .len = 0x320,
+   .features = MIXER_MSM8998_MASK,
+   .sblk = &msm8998_lm_sblk,
+   .pingpong = PINGPONG_NONE,
+   }, {
+   .name = "lm_5", .id = LM_5,
+   .base = 0x49000, .len = 0x320,
+   .features = MIXER_MSM8998_MASK,
+   .sblk = &msm8998_lm_sblk,
+   .lm_pair = LM_2,
+   .pingpong = PINGPONG_3,
+   },
 };
 
 static const struct dpu_pingpong_cfg msm8998_pp[] = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
index 8661ef2f45e0..786263ed1ef2 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
@@ -137,18 +137,51 @@ static const struct dpu_sspp_cfg sdm845_sspp[] = {
 };
 
 static const struct dpu_lm_cfg sdm845_lm[] = {
-   LM_BLK("lm_0", LM_0, 0x44000, MIXER_SDM845_MASK,
-   &sdm845_lm_sblk, PINGPONG_0, LM_1, DSPP_0),
-   LM_BLK("lm_1", LM_1, 0x45000, MIXER_SDM845_MASK,
-   &sdm845_lm_sblk, PINGPONG_1, LM_0, DSPP_1),
-   LM_BLK("lm_2", LM_2, 0x46000, MIXER_SDM845_MASK,
-   &sdm845_lm_sblk, PINGPONG_2, LM_5, DSPP_2),
-   LM_BLK("lm_3", LM_3, 0x0, MIXER_SDM845_MASK,
-   &sdm845_lm_sblk, PINGPONG_NONE, 0, DSPP_3),
-   LM_BLK("lm_4", LM_4, 0x0, MIXER_SDM845_MASK,
-   &sdm845_lm_sblk, PINGPONG_NONE, 0, 0),
-   

[PATCH v4 15/19] drm/msm/dpu: inline various PP_BLK_* macros

2023-06-19 Thread Dmitry Baryshkov
To simplify making changes to the hardware block definitions, expand the
corresponding macros. This way all the changes are more obvious and
visible in the source files.

Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 .../msm/disp/dpu1/catalog/dpu_3_0_msm8998.h   | 41 ++---
 .../msm/disp/dpu1/catalog/dpu_4_0_sdm845.h| 41 ++---
 .../msm/disp/dpu1/catalog/dpu_5_0_sm8150.h| 67 ++
 .../msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h   | 67 ++
 .../msm/disp/dpu1/catalog/dpu_6_0_sm8250.h| 67 ++
 .../msm/disp/dpu1/catalog/dpu_6_2_sc7180.h| 23 +++--
 .../msm/disp/dpu1/catalog/dpu_6_3_sm6115.h| 12 ++-
 .../msm/disp/dpu1/catalog/dpu_6_4_sm6350.h| 23 +++--
 .../msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h   | 12 ++-
 .../msm/disp/dpu1/catalog/dpu_6_9_sm6375.h| 12 ++-
 .../msm/disp/dpu1/catalog/dpu_7_0_sm8350.h| 67 ++
 .../msm/disp/dpu1/catalog/dpu_7_2_sc7280.h| 45 +++---
 .../msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h  | 61 ++---
 .../msm/disp/dpu1/catalog/dpu_8_1_sm8450.h| 89 ++-
 .../msm/disp/dpu1/catalog/dpu_9_0_sm8550.h| 89 ++-
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c| 21 -
 16 files changed, 527 insertions(+), 210 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
index d5111f3782a2..9181d3ef8013 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
@@ -185,18 +185,35 @@ static const struct dpu_lm_cfg msm8998_lm[] = {
 };
 
 static const struct dpu_pingpong_cfg msm8998_pp[] = {
-   PP_BLK("pingpong_0", PINGPONG_0, 0x70000, PINGPONG_SDM845_TE2_MASK, 0, sdm845_pp_sblk_te,
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
-   PP_BLK("pingpong_1", PINGPONG_1, 0x70800, PINGPONG_SDM845_TE2_MASK, 0, sdm845_pp_sblk_te,
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13)),
-   PP_BLK("pingpong_2", PINGPONG_2, 0x71000, PINGPONG_SDM845_MASK, 0, sdm845_pp_sblk,
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 14)),
-   PP_BLK("pingpong_3", PINGPONG_3, 0x71800, PINGPONG_SDM845_MASK, 0, sdm845_pp_sblk,
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 15)),
+   {
+   .name = "pingpong_0", .id = PINGPONG_0,
+   .base = 0x70000, .len = 0xd4,
+   .features = PINGPONG_SDM845_TE2_MASK,
+   .sblk = &sdm845_pp_sblk_te,
+   .intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+   .intr_rdptr = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12),
+   }, {
+   .name = "pingpong_1", .id = PINGPONG_1,
+   .base = 0x70800, .len = 0xd4,
+   .features = PINGPONG_SDM845_TE2_MASK,
+   .sblk = &sdm845_pp_sblk_te,
+   .intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
+   .intr_rdptr = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13),
+   }, {
+   .name = "pingpong_2", .id = PINGPONG_2,
+   .base = 0x71000, .len = 0xd4,
+   .features = PINGPONG_SDM845_MASK,
+   .sblk = &sdm845_pp_sblk,
+   .intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
+   .intr_rdptr = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 14),
+   }, {
+   .name = "pingpong_3", .id = PINGPONG_3,
+   .base = 0x71800, .len = 0xd4,
+   .features = PINGPONG_SDM845_MASK,
+   .sblk = &sdm845_pp_sblk,
+   .intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
+   .intr_rdptr = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 15),
+   },
 };
 
 static const struct dpu_dsc_cfg msm8998_dsc[] = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
index b6f52b3864ce..8119a81ff260 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
@@ -209,18 +209,35 @@ static const struct dpu_dspp_cfg sdm845_dspp[] = {
 };
 
 static const struct dpu_pingpong_cfg sdm845_pp[] = {
-   PP_BLK("pingpong_0", PINGPONG_0, 0x70000, PINGPONG_SDM845_TE2_MASK, 0, sdm845_pp_sblk_te,
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
-   PP_BLK("pingpong_1", PINGPONG_1, 0x70800, PINGPONG_SDM845_TE2_MASK, 0, sdm845_pp_sblk_te,
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
-   DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13)),
-   PP_BLK("pingpong_2", PINGPONG_2, 0x71000, PINGPONG_SDM845_MASK, 0, sdm845_pp_sblk,
-   

[PATCH v4 06/19] drm/msm/dpu: expand .clk_ctrls definitions

2023-06-19 Thread Dmitry Baryshkov
Use a more standard initialisation style for the .clk_ctrls definitions:
define a single .clk_ctrls field and use a designated array initialiser
inside it.

Reviewed-by: Marijn Suijten 
Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 .../msm/disp/dpu1/catalog/dpu_3_0_msm8998.h   | 22 +
 .../msm/disp/dpu1/catalog/dpu_4_0_sdm845.h| 18 +++---
 .../msm/disp/dpu1/catalog/dpu_5_0_sm8150.h| 18 +++---
 .../msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h   | 18 +++---
 .../msm/disp/dpu1/catalog/dpu_6_0_sm8250.h| 22 +
 .../msm/disp/dpu1/catalog/dpu_6_2_sc7180.h| 12 ++
 .../msm/disp/dpu1/catalog/dpu_6_3_sm6115.h|  6 +++--
 .../msm/disp/dpu1/catalog/dpu_6_4_sm6350.h| 12 ++
 .../msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h   |  6 +++--
 .../msm/disp/dpu1/catalog/dpu_6_9_sm6375.h|  6 +++--
 .../msm/disp/dpu1/catalog/dpu_7_0_sm8350.h| 20 +---
 .../msm/disp/dpu1/catalog/dpu_7_2_sc7280.h| 12 ++
 .../msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h  | 20 +---
 .../msm/disp/dpu1/catalog/dpu_8_1_sm8450.h| 20 +---
 .../msm/disp/dpu1/catalog/dpu_9_0_sm8550.h| 24 ++-
 15 files changed, 133 insertions(+), 103 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
index 30565b245b29..757ac648a692 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
@@ -30,16 +30,18 @@ static const struct dpu_mdp_cfg msm8998_mdp = {
.name = "top_0",
.base = 0x0, .len = 0x458,
.features = BIT(DPU_MDP_VSYNC_SEL),
-   .clk_ctrls[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
-   .clk_ctrls[DPU_CLK_CTRL_VIG1] = { .reg_off = 0x2b4, .bit_off = 0 },
-   .clk_ctrls[DPU_CLK_CTRL_VIG2] = { .reg_off = 0x2bc, .bit_off = 0 },
-   .clk_ctrls[DPU_CLK_CTRL_VIG3] = { .reg_off = 0x2c4, .bit_off = 0 },
-   .clk_ctrls[DPU_CLK_CTRL_DMA0] = { .reg_off = 0x2ac, .bit_off = 8 },
-   .clk_ctrls[DPU_CLK_CTRL_DMA1] = { .reg_off = 0x2b4, .bit_off = 8 },
-   .clk_ctrls[DPU_CLK_CTRL_DMA2] = { .reg_off = 0x2c4, .bit_off = 8 },
-   .clk_ctrls[DPU_CLK_CTRL_DMA3] = { .reg_off = 0x2c4, .bit_off = 12 },
-   .clk_ctrls[DPU_CLK_CTRL_CURSOR0] = { .reg_off = 0x3a8, .bit_off = 16 },
-   .clk_ctrls[DPU_CLK_CTRL_CURSOR1] = { .reg_off = 0x3b0, .bit_off = 16 },
+   .clk_ctrls = {
+   [DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
+   [DPU_CLK_CTRL_VIG1] = { .reg_off = 0x2b4, .bit_off = 0 },
+   [DPU_CLK_CTRL_VIG2] = { .reg_off = 0x2bc, .bit_off = 0 },
+   [DPU_CLK_CTRL_VIG3] = { .reg_off = 0x2c4, .bit_off = 0 },
+   [DPU_CLK_CTRL_DMA0] = { .reg_off = 0x2ac, .bit_off = 8 },
+   [DPU_CLK_CTRL_DMA1] = { .reg_off = 0x2b4, .bit_off = 8 },
+   [DPU_CLK_CTRL_DMA2] = { .reg_off = 0x2c4, .bit_off = 8 },
+   [DPU_CLK_CTRL_DMA3] = { .reg_off = 0x2c4, .bit_off = 12 },
+   [DPU_CLK_CTRL_CURSOR0] = { .reg_off = 0x3a8, .bit_off = 16 },
+   [DPU_CLK_CTRL_CURSOR1] = { .reg_off = 0x3b0, .bit_off = 16 },
+   },
 };
 
 static const struct dpu_ctl_cfg msm8998_ctl[] = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
index 35c495bdcbe9..9fb8ef21c7f0 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
@@ -30,14 +30,16 @@ static const struct dpu_mdp_cfg sdm845_mdp = {
.name = "top_0",
.base = 0x0, .len = 0x45c,
.features = BIT(DPU_MDP_AUDIO_SELECT) | BIT(DPU_MDP_VSYNC_SEL),
-   .clk_ctrls[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
-   .clk_ctrls[DPU_CLK_CTRL_VIG1] = { .reg_off = 0x2b4, .bit_off = 0 },
-   .clk_ctrls[DPU_CLK_CTRL_VIG2] = { .reg_off = 0x2bc, .bit_off = 0 },
-   .clk_ctrls[DPU_CLK_CTRL_VIG3] = { .reg_off = 0x2c4, .bit_off = 0 },
-   .clk_ctrls[DPU_CLK_CTRL_DMA0] = { .reg_off = 0x2ac, .bit_off = 8 },
-   .clk_ctrls[DPU_CLK_CTRL_DMA1] = { .reg_off = 0x2b4, .bit_off = 8 },
-   .clk_ctrls[DPU_CLK_CTRL_DMA2] = { .reg_off = 0x2bc, .bit_off = 8 },
-   .clk_ctrls[DPU_CLK_CTRL_DMA3] = { .reg_off = 0x2c4, .bit_off = 8 },
+   .clk_ctrls = {
+   [DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
+   [DPU_CLK_CTRL_VIG1] = { .reg_off = 0x2b4, .bit_off = 0 },
+   [DPU_CLK_CTRL_VIG2] = { .reg_off = 0x2bc, .bit_off = 0 },
+   [DPU_CLK_CTRL_VIG3] = { .reg_off = 0x2c4, .bit_off = 0 },
+   [DPU_CLK_CTRL_DMA0] = { .reg_off = 0x2ac, .bit_off = 8 },
+   [DPU_CLK_CTRL_DMA1] = { .reg_off = 0x2b4, .bit_off = 8 },
+   [DPU_CLK_CTRL_DMA2] = { .reg_off = 0x2bc, .bit_off = 8 },
+   [DPU_CLK_CTRL_DMA3] 

[PATCH v4 13/19] drm/msm/dpu: inline DSC_BLK and DSC_BLK_1_2 macros

2023-06-19 Thread Dmitry Baryshkov
To simplify making changes to the hardware block definitions, expand the
corresponding macros. This way all the changes are more obvious and
visible in the source files.

Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 .../msm/disp/dpu1/catalog/dpu_3_0_msm8998.h   |  9 -
 .../msm/disp/dpu1/catalog/dpu_4_0_sdm845.h| 17 +++--
 .../msm/disp/dpu1/catalog/dpu_5_0_sm8150.h| 21 +--
 .../msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h   | 31 +---
 .../msm/disp/dpu1/catalog/dpu_6_0_sm8250.h| 21 +--
 .../msm/disp/dpu1/catalog/dpu_6_4_sm6350.h|  6 ++-
 .../msm/disp/dpu1/catalog/dpu_6_9_sm6375.h|  6 ++-
 .../msm/disp/dpu1/catalog/dpu_7_0_sm8350.h| 25 +++--
 .../msm/disp/dpu1/catalog/dpu_7_2_sc7280.h|  7 +++-
 .../msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h  | 37 ---
 .../msm/disp/dpu1/catalog/dpu_8_1_sm8450.h| 25 +++--
 .../msm/disp/dpu1/catalog/dpu_9_0_sm8550.h| 25 +++--
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c| 15 
 13 files changed, 189 insertions(+), 56 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
index a07c68744b29..d5111f3782a2 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
@@ -200,8 +200,13 @@ static const struct dpu_pingpong_cfg msm8998_pp[] = {
 };
 
 static const struct dpu_dsc_cfg msm8998_dsc[] = {
-   DSC_BLK("dsc_0", DSC_0, 0x80000, 0),
-   DSC_BLK("dsc_1", DSC_1, 0x80400, 0),
+   {
+   .name = "dsc_0", .id = DSC_0,
+   .base = 0x80000, .len = 0x140,
+   }, {
+   .name = "dsc_1", .id = DSC_1,
+   .base = 0x80400, .len = 0x140,
+   },
 };
 
 static const struct dpu_dspp_cfg msm8998_dspp[] = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
index 786263ed1ef2..b6f52b3864ce 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
@@ -224,10 +224,19 @@ static const struct dpu_pingpong_cfg sdm845_pp[] = {
 };
 
 static const struct dpu_dsc_cfg sdm845_dsc[] = {
-   DSC_BLK("dsc_0", DSC_0, 0x80000, 0),
-   DSC_BLK("dsc_1", DSC_1, 0x80400, 0),
-   DSC_BLK("dsc_2", DSC_2, 0x80800, 0),
-   DSC_BLK("dsc_3", DSC_3, 0x80c00, 0),
+   {
+   .name = "dsc_0", .id = DSC_0,
+   .base = 0x80000, .len = 0x140,
+   }, {
+   .name = "dsc_1", .id = DSC_1,
+   .base = 0x80400, .len = 0x140,
+   }, {
+   .name = "dsc_2", .id = DSC_2,
+   .base = 0x80800, .len = 0x140,
+   }, {
+   .name = "dsc_3", .id = DSC_3,
+   .base = 0x80c00, .len = 0x140,
+   },
 };
 
 static const struct dpu_intf_cfg sdm845_intf[] = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
index 6b9bfeac6e0a..9b2de5986e82 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
@@ -245,10 +245,23 @@ static const struct dpu_merge_3d_cfg sm8150_merge_3d[] = {
 };
 
 static const struct dpu_dsc_cfg sm8150_dsc[] = {
-   DSC_BLK("dsc_0", DSC_0, 0x80000, BIT(DPU_DSC_OUTPUT_CTRL)),
-   DSC_BLK("dsc_1", DSC_1, 0x80400, BIT(DPU_DSC_OUTPUT_CTRL)),
-   DSC_BLK("dsc_2", DSC_2, 0x80800, BIT(DPU_DSC_OUTPUT_CTRL)),
-   DSC_BLK("dsc_3", DSC_3, 0x80c00, BIT(DPU_DSC_OUTPUT_CTRL)),
+   {
+   .name = "dsc_0", .id = DSC_0,
+   .base = 0x80000, .len = 0x140,
+   .features = BIT(DPU_DSC_OUTPUT_CTRL),
+   }, {
+   .name = "dsc_1", .id = DSC_1,
+   .base = 0x80400, .len = 0x140,
+   .features = BIT(DPU_DSC_OUTPUT_CTRL),
+   }, {
+   .name = "dsc_2", .id = DSC_2,
+   .base = 0x80800, .len = 0x140,
+   .features = BIT(DPU_DSC_OUTPUT_CTRL),
+   }, {
+   .name = "dsc_3", .id = DSC_3,
+   .base = 0x80c00, .len = 0x140,
+   .features = BIT(DPU_DSC_OUTPUT_CTRL),
+   },
 };
 
 static const struct dpu_intf_cfg sm8150_intf[] = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
index 414f0db3306c..683602e54c0e 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
@@ -244,12 +244,31 @@ static const struct dpu_merge_3d_cfg sc8180x_merge_3d[] = 
{
 };
 
 static const struct dpu_dsc_cfg sc8180x_dsc[] = {
-   DSC_BLK("dsc_0", DSC_0, 0x80000, BIT(DPU_DSC_OUTPUT_CTRL)),
-   DSC_BLK("dsc_1", DSC_1, 0x80400, BIT(DPU_DSC_OUTPUT_CTRL)),
-   DSC_BLK("dsc_2", DSC_2, 

[PATCH v4 18/19] drm/msm/dpu: drop empty features mask MERGE_3D_SM8150_MASK

2023-06-19 Thread Dmitry Baryshkov
The MERGE_3D_SM8150_MASK features mask is zero. Drop it completely.

Reviewed-by: Marijn Suijten 
Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h   | 3 ---
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h  | 3 ---
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h   | 3 ---
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h   | 3 ---
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h | 3 ---
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h   | 4 
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h   | 4 
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c   | 2 --
 8 files changed, 25 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
index 341ab9b84d20..e6d4a2bfc2be 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
@@ -273,15 +273,12 @@ static const struct dpu_merge_3d_cfg sm8150_merge_3d[] = {
{
.name = "merge_3d_0", .id = MERGE_3D_0,
.base = 0x83000, .len = 0x8,
-   .features = MERGE_3D_SM8150_MASK,
}, {
.name = "merge_3d_1", .id = MERGE_3D_1,
.base = 0x83100, .len = 0x8,
-   .features = MERGE_3D_SM8150_MASK,
}, {
.name = "merge_3d_2", .id = MERGE_3D_2,
.base = 0x83200, .len = 0x8,
-   .features = MERGE_3D_SM8150_MASK,
},
 };
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
index 8dd36a85b685..b4baf6707018 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
@@ -272,15 +272,12 @@ static const struct dpu_merge_3d_cfg sc8180x_merge_3d[] = 
{
{
.name = "merge_3d_0", .id = MERGE_3D_0,
.base = 0x83000, .len = 0x8,
-   .features = MERGE_3D_SM8150_MASK,
}, {
.name = "merge_3d_1", .id = MERGE_3D_1,
.base = 0x83100, .len = 0x8,
-   .features = MERGE_3D_SM8150_MASK,
}, {
.name = "merge_3d_2", .id = MERGE_3D_2,
.base = 0x83200, .len = 0x8,
-   .features = MERGE_3D_SM8150_MASK,
},
 };
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
index e16ffade5aca..265d88b288b6 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
@@ -273,15 +273,12 @@ static const struct dpu_merge_3d_cfg sm8250_merge_3d[] = {
{
.name = "merge_3d_0", .id = MERGE_3D_0,
.base = 0x83000, .len = 0x8,
-   .features = MERGE_3D_SM8150_MASK,
}, {
.name = "merge_3d_1", .id = MERGE_3D_1,
.base = 0x83100, .len = 0x8,
-   .features = MERGE_3D_SM8150_MASK,
}, {
.name = "merge_3d_2", .id = MERGE_3D_2,
.base = 0x83200, .len = 0x8,
-   .features = MERGE_3D_SM8150_MASK,
},
 };
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
index d5191a663ae1..59a96a4b250c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
@@ -271,15 +271,12 @@ static const struct dpu_merge_3d_cfg sm8350_merge_3d[] = {
{
.name = "merge_3d_0", .id = MERGE_3D_0,
.base = 0x4e000, .len = 0x8,
-   .features = MERGE_3D_SM8150_MASK,
}, {
.name = "merge_3d_1", .id = MERGE_3D_1,
.base = 0x4f000, .len = 0x8,
-   .features = MERGE_3D_SM8150_MASK,
}, {
.name = "merge_3d_2", .id = MERGE_3D_2,
	.base = 0x50000, .len = 0x8,
-   .features = MERGE_3D_SM8150_MASK,
},
 };
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h
index 9f94cc6369dd..7110caae7251 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h
@@ -275,15 +275,12 @@ static const struct dpu_merge_3d_cfg sc8280xp_merge_3d[] 
= {
{
.name = "merge_3d_0", .id = MERGE_3D_0,
.base = 0x4e000, .len = 0x8,
-   .features = MERGE_3D_SM8150_MASK,
}, {
.name = "merge_3d_1", .id = MERGE_3D_1,
.base = 0x4f000, .len = 0x8,
-   .features = MERGE_3D_SM8150_MASK,
}, {
.name = "merge_3d_2", .id = 

[PATCH v4 09/19] drm/msm/dpu: correct indentation for CTL definitions

2023-06-19 Thread Dmitry Baryshkov
Shift the dpu_ctl_cfg contents to correct the indentation of the CTL
blocks. This is done in preparation for expanding the rest of the
hardware block definitions, so that all blocks have similar indentation.

Reviewed-by: Marijn Suijten 
Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 .../msm/disp/dpu1/catalog/dpu_3_0_msm8998.h   | 46 +++---
 .../msm/disp/dpu1/catalog/dpu_4_0_sdm845.h| 46 +++---
 .../msm/disp/dpu1/catalog/dpu_5_0_sm8150.h| 63 +--
 .../msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h   | 63 +--
 .../msm/disp/dpu1/catalog/dpu_6_0_sm8250.h| 63 +--
 .../msm/disp/dpu1/catalog/dpu_6_2_sc7180.h| 30 +
 .../msm/disp/dpu1/catalog/dpu_6_3_sm6115.h|  8 +--
 .../msm/disp/dpu1/catalog/dpu_6_4_sm6350.h| 41 ++--
 .../msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h   |  8 +--
 .../msm/disp/dpu1/catalog/dpu_6_9_sm6375.h|  8 +--
 .../msm/disp/dpu1/catalog/dpu_7_0_sm8350.h| 63 +--
 .../msm/disp/dpu1/catalog/dpu_7_2_sc7280.h| 41 ++--
 .../msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h  | 63 +--
 .../msm/disp/dpu1/catalog/dpu_8_1_sm8450.h| 63 +--
 .../msm/disp/dpu1/catalog/dpu_9_0_sm8550.h| 63 +--
 15 files changed, 309 insertions(+), 360 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
index e0cc1ce3f3e2..6660a55909e7 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
@@ -46,31 +46,27 @@ static const struct dpu_mdp_cfg msm8998_mdp = {
 
 static const struct dpu_ctl_cfg msm8998_ctl[] = {
{
-   .name = "ctl_0", .id = CTL_0,
-   .base = 0x1000, .len = 0x94,
-   .features = BIT(DPU_CTL_SPLIT_DISPLAY),
-   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 9),
-   },
-   {
-   .name = "ctl_1", .id = CTL_1,
-   .base = 0x1200, .len = 0x94,
-   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 10),
-   },
-   {
-   .name = "ctl_2", .id = CTL_2,
-   .base = 0x1400, .len = 0x94,
-   .features = BIT(DPU_CTL_SPLIT_DISPLAY),
-   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 11),
-   },
-   {
-   .name = "ctl_3", .id = CTL_3,
-   .base = 0x1600, .len = 0x94,
-   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 12),
-   },
-   {
-   .name = "ctl_4", .id = CTL_4,
-   .base = 0x1800, .len = 0x94,
-   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 13),
+   .name = "ctl_0", .id = CTL_0,
+   .base = 0x1000, .len = 0x94,
+   .features = BIT(DPU_CTL_SPLIT_DISPLAY),
+   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 9),
+   }, {
+   .name = "ctl_1", .id = CTL_1,
+   .base = 0x1200, .len = 0x94,
+   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 10),
+   }, {
+   .name = "ctl_2", .id = CTL_2,
+   .base = 0x1400, .len = 0x94,
+   .features = BIT(DPU_CTL_SPLIT_DISPLAY),
+   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 11),
+   }, {
+   .name = "ctl_3", .id = CTL_3,
+   .base = 0x1600, .len = 0x94,
+   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 12),
+   }, {
+   .name = "ctl_4", .id = CTL_4,
+   .base = 0x1800, .len = 0x94,
+   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 13),
},
 };
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
index f52e1fa27e2c..8f96a9e4ee4c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
@@ -44,31 +44,27 @@ static const struct dpu_mdp_cfg sdm845_mdp = {
 
 static const struct dpu_ctl_cfg sdm845_ctl[] = {
{
-   .name = "ctl_0", .id = CTL_0,
-   .base = 0x1000, .len = 0xe4,
-   .features = BIT(DPU_CTL_SPLIT_DISPLAY),
-   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 9),
-   },
-   {
-   .name = "ctl_1", .id = CTL_1,
-   .base = 0x1200, .len = 0xe4,
-   .features = BIT(DPU_CTL_SPLIT_DISPLAY),
-   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 10),
-   },
-   {
-   .name = "ctl_2", .id = CTL_2,
-   .base = 0x1400, .len = 0xe4,
-   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 11),
-   },
-   {
-   .name = "ctl_3", .id = CTL_3,
-   .base = 0x1600, .len = 0xe4,
-   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 12),
-   },
-   {
-   .name = "ctl_4", .id = CTL_4,
-   .base = 0x1800, .len = 0xe4,
-   .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 13),
+   .name = "ctl_0", .id = CTL_0,
+   .base = 0x1000, .len = 0xe4,
+   .features = 

[PATCH v4 07/19] drm/msm/dpu: drop zero features from dpu_mdp_cfg data

2023-06-19 Thread Dmitry Baryshkov
Drop useless zero assignments to the dpu_mdp_cfg::features field.

Reviewed-by: Marijn Suijten 
Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
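[Editor's sketch: the change relies on a basic C guarantee. Below is a
minimal standalone illustration (with a made-up struct, not the driver's
real dpu_mdp_cfg) of why dropping ".features = 0" is safe.]

```c
#include <assert.h>

/* Illustrative stand-in for a catalog entry; field names mimic the
 * driver, but the struct itself is made up for this sketch. */
struct demo_mdp_cfg {
	const char *name;
	unsigned long base, len;
	unsigned long features;
};

/* Per C11 6.7.9, members omitted from a designated initializer are
 * zero-initialized, so an explicit ".features = 0" adds nothing. */
static const struct demo_mdp_cfg with_explicit_zero = {
	.name = "top_0",
	.base = 0x0, .len = 0x494,
	.features = 0,
};

static const struct demo_mdp_cfg with_field_omitted = {
	.name = "top_0",
	.base = 0x0, .len = 0x494,
	/* .features omitted: implicitly 0 */
};
```

Both objects are bit-for-bit identical, which is why the patch can delete
the assignments without any behavioural change.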
---
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h  | 1 -
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h  | 1 -
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h  | 1 -
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_4_sm6350.h  | 1 -
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h | 1 -
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_9_sm6375.h  | 1 -
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h  | 1 -
 7 files changed, 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
index ab1820f1ac54..e321cc0a80ee 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
@@ -28,7 +28,6 @@ static const struct dpu_ubwc_cfg sm8250_ubwc_cfg = {
 static const struct dpu_mdp_cfg sm8250_mdp = {
.name = "top_0",
.base = 0x0, .len = 0x494,
-   .features = 0,
.clk_ctrls = {
[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
[DPU_CLK_CTRL_VIG1] = { .reg_off = 0x2b4, .bit_off = 0 },
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
index 2df9a00728c0..1919ee487e68 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
@@ -25,7 +25,6 @@ static const struct dpu_ubwc_cfg sc7180_ubwc_cfg = {
 static const struct dpu_mdp_cfg sc7180_mdp = {
.name = "top_0",
.base = 0x0, .len = 0x494,
-   .features = 0,
.clk_ctrls = {
[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
[DPU_CLK_CTRL_DMA0] = { .reg_off = 0x2ac, .bit_off = 8 },
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h
index 1982654e74a0..0252fe9590e7 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h
@@ -26,7 +26,6 @@ static const struct dpu_ubwc_cfg sm6115_ubwc_cfg = {
 static const struct dpu_mdp_cfg sm6115_mdp = {
.name = "top_0",
.base = 0x0, .len = 0x494,
-   .features = 0,
.clk_ctrls = {
[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
[DPU_CLK_CTRL_DMA0] = { .reg_off = 0x2ac, .bit_off = 8 },
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_4_sm6350.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_4_sm6350.h
index ac237c3197cf..3c2083760294 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_4_sm6350.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_4_sm6350.h
@@ -28,7 +28,6 @@ static const struct dpu_ubwc_cfg sm6350_ubwc_cfg = {
 static const struct dpu_mdp_cfg sm6350_mdp = {
.name = "top_0",
.base = 0x0, .len = 0x494,
-   .features = 0,
.clk_ctrls = {
[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
[DPU_CLK_CTRL_DMA0] = { .reg_off = 0x2ac, .bit_off = 8 },
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h
index 24c4536e7981..54cc6ad8ee36 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h
@@ -23,7 +23,6 @@ static const struct dpu_ubwc_cfg qcm2290_ubwc_cfg = {
 static const struct dpu_mdp_cfg qcm2290_mdp = {
.name = "top_0",
.base = 0x0, .len = 0x494,
-   .features = 0,
.clk_ctrls = {
[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
[DPU_CLK_CTRL_DMA0] = { .reg_off = 0x2ac, .bit_off = 8 },
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_9_sm6375.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_9_sm6375.h
index 099b74be3fd2..f0f6f2d801b4 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_9_sm6375.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_9_sm6375.h
@@ -27,7 +27,6 @@ static const struct dpu_ubwc_cfg sm6375_ubwc_cfg = {
 static const struct dpu_mdp_cfg sm6375_mdp = {
.name = "top_0",
.base = 0x0, .len = 0x494,
-   .features = 0,
.clk_ctrls = {
[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
[DPU_CLK_CTRL_DMA0] = { .reg_off = 0x2ac, .bit_off = 8 },
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
index 7db3a6969189..318bed612da5 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
@@ -27,7 +27,6 @@ static const struct dpu_ubwc_cfg sm8350_ubwc_cfg = {
 static const struct dpu_mdp_cfg 

[PATCH v4 11/19] drm/msm/dpu: inline DSPP_BLK macros

2023-06-19 Thread Dmitry Baryshkov
To simplify making changes to the hardware block definitions, expand the
corresponding macros. This way all the changes are more obvious and
visible in the source files.

Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
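[Editor's sketch: for readers unfamiliar with the trade-off, here is a
minimal illustration (a hypothetical DSPP_BLK_DEMO macro and struct, not
the kernel's exact definitions) showing that the macro form and the
hand-expanded designated-initializer form produce identical objects —
inlining only changes how the source reads.]

```c
#include <assert.h>
#include <string.h>

struct demo_dspp_cfg {
	const char *name;
	int id;
	unsigned long base, len;
};

/* Helper macro in the style of the old DSPP_BLK (simplified). */
#define DSPP_BLK_DEMO(_name, _id, _base) \
	{ \
		.name = _name, .id = _id, \
		.base = _base, .len = 0x1800, \
	}

static const struct demo_dspp_cfg via_macro[] = {
	DSPP_BLK_DEMO("dspp_0", 0, 0x54000),
	DSPP_BLK_DEMO("dspp_1", 1, 0x56000),
};

/* The same table with the macro expanded by hand, as the patch does:
 * every field is now visible (and individually overridable) at the
 * use site, at the cost of some repetition. */
static const struct demo_dspp_cfg expanded[] = {
	{
		.name = "dspp_0", .id = 0,
		.base = 0x54000, .len = 0x1800,
	}, {
		.name = "dspp_1", .id = 1,
		.base = 0x56000, .len = 0x1800,
	},
};
```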
---
 .../msm/disp/dpu1/catalog/dpu_3_0_msm8998.h   | 15 +++---
 .../msm/disp/dpu1/catalog/dpu_4_0_sdm845.h| 29 ++-
 .../msm/disp/dpu1/catalog/dpu_5_0_sm8150.h| 29 ++-
 .../msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h   | 29 ++-
 .../msm/disp/dpu1/catalog/dpu_6_0_sm8250.h| 29 ++-
 .../msm/disp/dpu1/catalog/dpu_6_2_sc7180.h|  8 +++--
 .../msm/disp/dpu1/catalog/dpu_6_3_sm6115.h|  8 +++--
 .../msm/disp/dpu1/catalog/dpu_6_4_sm6350.h|  8 +++--
 .../msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h   |  8 +++--
 .../msm/disp/dpu1/catalog/dpu_6_9_sm6375.h|  8 +++--
 .../msm/disp/dpu1/catalog/dpu_7_0_sm8350.h| 29 ++-
 .../msm/disp/dpu1/catalog/dpu_7_2_sc7280.h|  8 +++--
 .../msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h  | 29 ++-
 .../msm/disp/dpu1/catalog/dpu_8_1_sm8450.h| 29 ++-
 .../msm/disp/dpu1/catalog/dpu_9_0_sm8550.h| 29 ++-
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c|  8 -
 16 files changed, 215 insertions(+), 88 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
index fd0081469a82..6b254753774c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
@@ -174,10 +174,17 @@ static const struct dpu_dsc_cfg msm8998_dsc[] = {
 };
 
 static const struct dpu_dspp_cfg msm8998_dspp[] = {
-   DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
-_dspp_sblk),
-   DSPP_BLK("dspp_1", DSPP_1, 0x56000, DSPP_SC7180_MASK,
-_dspp_sblk),
+   {
+   .name = "dspp_0", .id = DSPP_0,
+   .base = 0x54000, .len = 0x1800,
+   .features = DSPP_SC7180_MASK,
+   .sblk = _dspp_sblk,
+   }, {
+   .name = "dspp_1", .id = DSPP_1,
+   .base = 0x56000, .len = 0x1800,
+   .features = DSPP_SC7180_MASK,
+   .sblk = _dspp_sblk,
+   },
 };
 
 static const struct dpu_intf_cfg msm8998_intf[] = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
index 7ba99060d13d..8661ef2f45e0 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
@@ -152,14 +152,27 @@ static const struct dpu_lm_cfg sdm845_lm[] = {
 };
 
 static const struct dpu_dspp_cfg sdm845_dspp[] = {
-   DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
-_dspp_sblk),
-   DSPP_BLK("dspp_1", DSPP_1, 0x56000, DSPP_SC7180_MASK,
-_dspp_sblk),
-   DSPP_BLK("dspp_2", DSPP_2, 0x58000, DSPP_SC7180_MASK,
-_dspp_sblk),
-   DSPP_BLK("dspp_3", DSPP_3, 0x5a000, DSPP_SC7180_MASK,
-_dspp_sblk),
+   {
+   .name = "dspp_0", .id = DSPP_0,
+   .base = 0x54000, .len = 0x1800,
+   .features = DSPP_SC7180_MASK,
+   .sblk = _dspp_sblk,
+   }, {
+   .name = "dspp_1", .id = DSPP_1,
+   .base = 0x56000, .len = 0x1800,
+   .features = DSPP_SC7180_MASK,
+   .sblk = _dspp_sblk,
+   }, {
+   .name = "dspp_2", .id = DSPP_2,
+   .base = 0x58000, .len = 0x1800,
+   .features = DSPP_SC7180_MASK,
+   .sblk = _dspp_sblk,
+   }, {
+   .name = "dspp_3", .id = DSPP_3,
+   .base = 0x5a000, .len = 0x1800,
+   .features = DSPP_SC7180_MASK,
+   .sblk = _dspp_sblk,
+   },
 };
 
 static const struct dpu_pingpong_cfg sdm845_pp[] = {
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
index 13d86229219e..ab933b5a4806 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
@@ -161,14 +161,27 @@ static const struct dpu_lm_cfg sm8150_lm[] = {
 };
 
 static const struct dpu_dspp_cfg sm8150_dspp[] = {
-   DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
-_dspp_sblk),
-   DSPP_BLK("dspp_1", DSPP_1, 0x56000, DSPP_SC7180_MASK,
-_dspp_sblk),
-   DSPP_BLK("dspp_2", DSPP_2, 0x58000, DSPP_SC7180_MASK,
-_dspp_sblk),
-   DSPP_BLK("dspp_3", DSPP_3, 0x5a000, DSPP_SC7180_MASK,
-_dspp_sblk),
+   {
+   .name = "dspp_0", .id = DSPP_0,
+   .base = 0x54000, .len = 0x1800,
+   .features = DSPP_SC7180_MASK,
+   .sblk = _dspp_sblk,
+   }, {
+ 

[PATCH v4 08/19] drm/msm/dpu: drop zero features from dpu_ctl_cfg data

2023-06-19 Thread Dmitry Baryshkov
Drop useless zero assignments to the dpu_ctl_cfg::features field.

Reviewed-by: Marijn Suijten 
Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h | 3 ---
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h  | 3 ---
 2 files changed, 6 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
index 757ac648a692..e0cc1ce3f3e2 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
@@ -54,7 +54,6 @@ static const struct dpu_ctl_cfg msm8998_ctl[] = {
{
.name = "ctl_1", .id = CTL_1,
.base = 0x1200, .len = 0x94,
-   .features = 0,
.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 10),
},
{
@@ -66,13 +65,11 @@ static const struct dpu_ctl_cfg msm8998_ctl[] = {
{
.name = "ctl_3", .id = CTL_3,
.base = 0x1600, .len = 0x94,
-   .features = 0,
.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 12),
},
{
.name = "ctl_4", .id = CTL_4,
.base = 0x1800, .len = 0x94,
-   .features = 0,
.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 13),
},
 };
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
index 9fb8ef21c7f0..f52e1fa27e2c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
@@ -58,19 +58,16 @@ static const struct dpu_ctl_cfg sdm845_ctl[] = {
{
.name = "ctl_2", .id = CTL_2,
.base = 0x1400, .len = 0xe4,
-   .features = 0,
.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 11),
},
{
.name = "ctl_3", .id = CTL_3,
.base = 0x1600, .len = 0xe4,
-   .features = 0,
.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 12),
},
{
.name = "ctl_4", .id = CTL_4,
.base = 0x1800, .len = 0xe4,
-   .features = 0,
.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 13),
},
 };
-- 
2.39.2



[PATCH v4 10/19] drm/msm/dpu: inline SSPP_BLK macros

2023-06-19 Thread Dmitry Baryshkov
To simplify making changes to the hardware block definitions, expand the
corresponding macros. This way all the changes are more obvious and
visible in the source files.

Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 .../msm/disp/dpu1/catalog/dpu_3_0_msm8998.h   |  81 +++---
 .../msm/disp/dpu1/catalog/dpu_4_0_sdm845.h|  81 +++---
 .../msm/disp/dpu1/catalog/dpu_5_0_sm8150.h|  81 +++---
 .../msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h   |  81 +++---
 .../msm/disp/dpu1/catalog/dpu_6_0_sm8250.h|  81 +++---
 .../msm/disp/dpu1/catalog/dpu_6_2_sc7180.h|  41 +--
 .../msm/disp/dpu1/catalog/dpu_6_3_sm6115.h|  21 +++-
 .../msm/disp/dpu1/catalog/dpu_6_4_sm6350.h|  41 +--
 .../msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h   |  21 +++-
 .../msm/disp/dpu1/catalog/dpu_6_9_sm6375.h|  21 +++-
 .../msm/disp/dpu1/catalog/dpu_7_0_sm8350.h|  81 +++---
 .../msm/disp/dpu1/catalog/dpu_7_2_sc7280.h|  41 +--
 .../msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h  |  81 +++---
 .../msm/disp/dpu1/catalog/dpu_8_1_sm8450.h|  81 +++---
 .../msm/disp/dpu1/catalog/dpu_9_0_sm8550.h| 101 ++
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c|  12 ---
 16 files changed, 751 insertions(+), 196 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
index 6660a55909e7..fd0081469a82 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
@@ -71,22 +71,71 @@ static const struct dpu_ctl_cfg msm8998_ctl[] = {
 };
 
 static const struct dpu_sspp_cfg msm8998_sspp[] = {
-   SSPP_BLK("sspp_0", SSPP_VIG0, 0x4000, 0x1ac, VIG_MSM8998_MASK,
-   msm8998_vig_sblk_0, 0, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG0),
-   SSPP_BLK("sspp_1", SSPP_VIG1, 0x6000, 0x1ac, VIG_MSM8998_MASK,
-   msm8998_vig_sblk_1, 4, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG1),
-   SSPP_BLK("sspp_2", SSPP_VIG2, 0x8000, 0x1ac, VIG_MSM8998_MASK,
-   msm8998_vig_sblk_2, 8, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG2),
-   SSPP_BLK("sspp_3", SSPP_VIG3, 0xa000, 0x1ac, VIG_MSM8998_MASK,
-   msm8998_vig_sblk_3, 12, SSPP_TYPE_VIG, DPU_CLK_CTRL_VIG3),
-   SSPP_BLK("sspp_8", SSPP_DMA0, 0x24000, 0x1ac, DMA_MSM8998_MASK,
-   sdm845_dma_sblk_0, 1, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA0),
-   SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000, 0x1ac, DMA_MSM8998_MASK,
-   sdm845_dma_sblk_1, 5, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA1),
-   SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000, 0x1ac, DMA_CURSOR_MSM8998_MASK,
-   sdm845_dma_sblk_2, 9, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA2),
-   SSPP_BLK("sspp_11", SSPP_DMA3, 0x2a000, 0x1ac, DMA_CURSOR_MSM8998_MASK,
-   sdm845_dma_sblk_3, 13, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA3),
+   {
+   .name = "sspp_0", .id = SSPP_VIG0,
+   .base = 0x4000, .len = 0x1ac,
+   .features = VIG_MSM8998_MASK,
+   .sblk = &msm8998_vig_sblk_0,
+   .xin_id = 0,
+   .type = SSPP_TYPE_VIG,
+   .clk_ctrl = DPU_CLK_CTRL_VIG0,
+   }, {
+   .name = "sspp_1", .id = SSPP_VIG1,
+   .base = 0x6000, .len = 0x1ac,
+   .features = VIG_MSM8998_MASK,
+   .sblk = &msm8998_vig_sblk_1,
+   .xin_id = 4,
+   .type = SSPP_TYPE_VIG,
+   .clk_ctrl = DPU_CLK_CTRL_VIG1,
+   }, {
+   .name = "sspp_2", .id = SSPP_VIG2,
+   .base = 0x8000, .len = 0x1ac,
+   .features = VIG_MSM8998_MASK,
+   .sblk = &msm8998_vig_sblk_2,
+   .xin_id = 8,
+   .type = SSPP_TYPE_VIG,
+   .clk_ctrl = DPU_CLK_CTRL_VIG2,
+   }, {
+   .name = "sspp_3", .id = SSPP_VIG3,
+   .base = 0xa000, .len = 0x1ac,
+   .features = VIG_MSM8998_MASK,
+   .sblk = &msm8998_vig_sblk_3,
+   .xin_id = 12,
+   .type = SSPP_TYPE_VIG,
+   .clk_ctrl = DPU_CLK_CTRL_VIG3,
+   }, {
+   .name = "sspp_8", .id = SSPP_DMA0,
+   .base = 0x24000, .len = 0x1ac,
+   .features = DMA_MSM8998_MASK,
+   .sblk = &sdm845_dma_sblk_0,
+   .xin_id = 1,
+   .type = SSPP_TYPE_DMA,
+   .clk_ctrl = DPU_CLK_CTRL_DMA0,
+   }, {
+   .name = "sspp_9", .id = SSPP_DMA1,
+   .base = 0x26000, .len = 0x1ac,
+   .features = DMA_MSM8998_MASK,
+   .sblk = &sdm845_dma_sblk_1,
+   .xin_id = 5,
+   .type = SSPP_TYPE_DMA,
+   .clk_ctrl = DPU_CLK_CTRL_DMA1,
+   }, {
+   .name = "sspp_10", .id = SSPP_DMA2,
+   .base = 0x28000, .len = 0x1ac,
+   .features = DMA_CURSOR_MSM8998_MASK,
+   .sblk = &sdm845_dma_sblk_2,
+   .xin_id = 

[PATCH v4 05/19] drm/msm/dpu: drop enum dpu_mdp and MDP_TOP value

2023-06-19 Thread Dmitry Baryshkov
Since there is always just a single MDP_TOP instance, drop the enum
dpu_mdp and corresponding index value.

Reviewed-by: Marijn Suijten 
Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h  | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h  | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_4_sm6350.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h  | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_9_sm6375.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h  | 5 -
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c   | 1 -
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.h   | 1 -
 18 files changed, 15 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
index e0d2ee48d733..30565b245b29 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
@@ -27,7 +27,7 @@ static const struct dpu_ubwc_cfg msm8998_ubwc_cfg = {
 };
 
 static const struct dpu_mdp_cfg msm8998_mdp = {
-   .name = "top_0", .id = MDP_TOP,
+   .name = "top_0",
.base = 0x0, .len = 0x458,
.features = BIT(DPU_MDP_VSYNC_SEL),
.clk_ctrls[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
index 72295d5a10dc..35c495bdcbe9 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
@@ -27,7 +27,7 @@ static const struct dpu_ubwc_cfg sdm845_ubwc_cfg = {
 };
 
 static const struct dpu_mdp_cfg sdm845_mdp = {
-   .name = "top_0", .id = MDP_TOP,
+   .name = "top_0",
.base = 0x0, .len = 0x45c,
.features = BIT(DPU_MDP_AUDIO_SELECT) | BIT(DPU_MDP_VSYNC_SEL),
.clk_ctrls[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
index 418312b164b8..cb2716715e3d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
@@ -27,7 +27,7 @@ static const struct dpu_ubwc_cfg sm8150_ubwc_cfg = {
 };
 
 static const struct dpu_mdp_cfg sm8150_mdp = {
-   .name = "top_0", .id = MDP_TOP,
+   .name = "top_0",
.base = 0x0, .len = 0x45c,
.features = BIT(DPU_MDP_AUDIO_SELECT),
.clk_ctrls[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
index ffacf29926b3..a655e84cf147 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
@@ -27,7 +27,7 @@ static const struct dpu_ubwc_cfg sc8180x_ubwc_cfg = {
 };
 
 static const struct dpu_mdp_cfg sc8180x_mdp = {
-   .name = "top_0", .id = MDP_TOP,
+   .name = "top_0",
.base = 0x0, .len = 0x45c,
.features = BIT(DPU_MDP_AUDIO_SELECT),
.clk_ctrls[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
index 86dfc5745630..90e561d086e0 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
@@ -26,7 +26,7 @@ static const struct dpu_ubwc_cfg sm8250_ubwc_cfg = {
 };
 
 static const struct dpu_mdp_cfg sm8250_mdp = {
-   .name = "top_0", .id = MDP_TOP,
+   .name = "top_0",
.base = 0x0, .len = 0x494,
.features = 0,
.clk_ctrls[DPU_CLK_CTRL_VIG0] = { .reg_off = 0x2ac, .bit_off = 0 },
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
index f42f27707453..3aafe4dfb663 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
@@ -23,7 +23,7 @@ static const struct dpu_ubwc_cfg sc7180_ubwc_cfg = {
 };
 
 static const 

[PATCH v4 03/19] drm/msm/dpu: simplify peer LM handling

2023-06-19 Thread Dmitry Baryshkov
For each LM there is at most one peer LM which can be driven by the same
CTL, so there is no need to have a mask instead of just an ID of the peer
LM.

Reviewed-by: Marijn Suijten 
Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
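[Editor's sketch: a small standalone illustration (simplified enums and
structs, not the driver's real ones) of the equivalence this patch
exploits — when a bitmask is known to have at most one bit set, it can
be replaced by the ID it encodes.]

```c
#include <assert.h>

enum demo_lm { LM_0 = 1, LM_1, LM_2, LM_3, LM_MAX };

struct lm_cfg_old { unsigned long lm_pair_mask; };	/* old: one-hot mask */
struct lm_cfg_new { unsigned long lm_pair; };		/* new: peer ID */

/* Old-style lookup: scan the mask for the single set bit. */
static int peer_from_mask(const struct lm_cfg_old *cfg)
{
	int id;

	for (id = LM_0; id < LM_MAX; id++)
		if (cfg->lm_pair_mask & (1UL << id))
			return id;
	return -1;
}

/* New-style lookup: the ID is stored directly; just validate the range. */
static int peer_from_id(const struct lm_cfg_new *cfg)
{
	if (cfg->lm_pair >= LM_0 && cfg->lm_pair < LM_MAX)
		return cfg->lm_pair;
	return -1;
}

static const struct lm_cfg_old demo_old = { .lm_pair_mask = 1UL << LM_1 };
static const struct lm_cfg_new demo_new = { .lm_pair = LM_1 };
```

Both lookups return the same peer; the ID form simply avoids the scan
and the implication that several peers could exist.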
---
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c|  2 +-
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h|  4 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c| 34 +++
 3 files changed, 15 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
index 0de507d4d7b7..30fb5b1f3966 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
@@ -394,7 +394,7 @@ static const struct dpu_sspp_sub_blks qcm2290_dma_sblk_0 = _DMA_SBLK("8", 1);
.features = _fmask, \
.sblk = _sblk, \
.pingpong = _pp, \
-   .lm_pair_mask = (1 << _lmpair), \
+   .lm_pair = _lmpair, \
.dspp = _dspp \
}
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
index b860784ade72..b07caa4b867e 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
@@ -554,14 +554,14 @@ struct dpu_sspp_cfg {
  * @features   bit mask identifying sub-blocks/features
  * @sblk:  LM Sub-blocks information
  * @pingpong:  ID of connected PingPong, PINGPONG_NONE if unsupported
- * @lm_pair_mask:  Bitmask of LMs that can be controlled by same CTL
+ * @lm_pair:   ID of LM that can be controlled by same CTL
  */
 struct dpu_lm_cfg {
DPU_HW_BLK_INFO;
const struct dpu_lm_sub_blks *sblk;
u32 pingpong;
u32 dspp;
-   unsigned long lm_pair_mask;
+   unsigned long lm_pair;
 };
 
 /**
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
index 471842bbb950..e333f4eeafc1 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
@@ -253,28 +253,19 @@ static bool _dpu_rm_needs_split_display(const struct msm_display_topology *top)
 }
 
 /**
- * _dpu_rm_check_lm_peer - check if a mixer is a peer of the primary
+ * _dpu_rm_get_lm_peer - get the id of a mixer which is a peer of the primary
  * @rm: dpu resource manager handle
  * @primary_idx: index of primary mixer in rm->mixer_blks[]
- * @peer_idx: index of other mixer in rm->mixer_blks[]
- * Return: true if rm->mixer_blks[peer_idx] is a peer of
- *  rm->mixer_blks[primary_idx]
  */
-static bool _dpu_rm_check_lm_peer(struct dpu_rm *rm, int primary_idx,
-   int peer_idx)
+static int _dpu_rm_get_lm_peer(struct dpu_rm *rm, int primary_idx)
 {
const struct dpu_lm_cfg *prim_lm_cfg;
-   const struct dpu_lm_cfg *peer_cfg;
 
prim_lm_cfg = to_dpu_hw_mixer(rm->mixer_blks[primary_idx])->cap;
-   peer_cfg = to_dpu_hw_mixer(rm->mixer_blks[peer_idx])->cap;
 
-   if (!test_bit(peer_cfg->id, &prim_lm_cfg->lm_pair_mask)) {
-   DPU_DEBUG("lm %d not peer of lm %d\n", peer_cfg->id,
-   peer_cfg->id);
-   return false;
-   }
-   return true;
+   if (prim_lm_cfg->lm_pair >= LM_0 && prim_lm_cfg->lm_pair < LM_MAX)
+   return prim_lm_cfg->lm_pair - LM_0;
+   return -EINVAL;
 }
 
 /**
@@ -351,7 +342,7 @@ static int _dpu_rm_reserve_lms(struct dpu_rm *rm,
int lm_idx[MAX_BLOCKS];
int pp_idx[MAX_BLOCKS];
int dspp_idx[MAX_BLOCKS] = {0};
-   int i, j, lm_count = 0;
+   int i, lm_count = 0;
 
if (!reqs->topology.num_lm) {
DPU_ERROR("invalid number of lm: %d\n", reqs->topology.num_lm);
@@ -376,16 +367,15 @@ static int _dpu_rm_reserve_lms(struct dpu_rm *rm,
++lm_count;
 
/* Valid primary mixer found, find matching peers */
-   for (j = i + 1; j < ARRAY_SIZE(rm->mixer_blks) &&
-   lm_count < reqs->topology.num_lm; j++) {
-   if (!rm->mixer_blks[j])
+   if (lm_count < reqs->topology.num_lm) {
+   int j = _dpu_rm_get_lm_peer(rm, i);
+
+   /* ignore the peer if there is an error or if the peer was already processed */
+   if (j < 0 || j < i)
continue;
 
-   if (!_dpu_rm_check_lm_peer(rm, i, j)) {
-   DPU_DEBUG("lm %d not peer of lm %d\n", LM_0 + j,
-   LM_0 + i);
+   if (!rm->mixer_blks[j])
continue;
-   }
 
if (!_dpu_rm_check_lm_and_get_connected_blks(rm,
global_state, enc_id, j,
-- 
2.39.2



[PATCH v4 04/19] drm/msm/dpu: drop dpu_mdss_cfg::mdp_count field

2023-06-19 Thread Dmitry Baryshkov
There is always a single MDP TOP block. Drop the mdp_count field and
stop declaring dpu_mdp_cfg instances as arrays.

Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
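[Editor's sketch: the shape of the change, illustrated with stand-in
structs (not the driver's real dpu_mdss_cfg) — an always-length-1 array
plus a count collapses to a plain pointer.]

```c
#include <assert.h>
#include <stddef.h>

struct demo_mdp_cfg { unsigned long base, len; };

/* Old layout: array pointer + element count, even though the count is
 * always 1 because there is a single MDP TOP block. */
struct demo_mdss_old {
	size_t mdp_count;
	const struct demo_mdp_cfg *mdp;
};

/* New layout: just a pointer to the one instance. */
struct demo_mdss_new {
	const struct demo_mdp_cfg *mdp;
};

static const struct demo_mdp_cfg demo_mdp_arr[] = {
	{ .base = 0x0, .len = 0x45c },
};
static const struct demo_mdp_cfg demo_mdp = { .base = 0x0, .len = 0x45c };

static const struct demo_mdss_old old_cfg = {
	/* the kernel spells this ARRAY_SIZE(); written out here */
	.mdp_count = sizeof(demo_mdp_arr) / sizeof(demo_mdp_arr[0]),
	.mdp = demo_mdp_arr,
};
static const struct demo_mdss_new new_cfg = { .mdp = &demo_mdp };
```

Consumers switch from indexing `cfg->mdp[0]` to dereferencing
`cfg->mdp` directly, and the redundant count disappears.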
---
 .../msm/disp/dpu1/catalog/dpu_3_0_msm8998.h   |  7 +---
 .../msm/disp/dpu1/catalog/dpu_4_0_sdm845.h|  7 +---
 .../msm/disp/dpu1/catalog/dpu_5_0_sm8150.h|  7 +---
 .../msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h   |  7 +---
 .../msm/disp/dpu1/catalog/dpu_6_0_sm8250.h|  7 +---
 .../msm/disp/dpu1/catalog/dpu_6_2_sc7180.h|  7 +---
 .../msm/disp/dpu1/catalog/dpu_6_3_sm6115.h|  7 +---
 .../msm/disp/dpu1/catalog/dpu_6_4_sm6350.h|  7 +---
 .../msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h   |  7 +---
 .../msm/disp/dpu1/catalog/dpu_6_9_sm6375.h|  7 +---
 .../msm/disp/dpu1/catalog/dpu_7_0_sm8350.h|  7 +---
 .../msm/disp/dpu1/catalog/dpu_7_2_sc7280.h|  7 +---
 .../msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h  |  7 +---
 .../msm/disp/dpu1/catalog/dpu_8_1_sm8450.h|  7 +---
 .../msm/disp/dpu1/catalog/dpu_9_0_sm8550.h|  7 +---
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h|  1 -
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c| 38 +++
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.h|  8 ++--
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  4 +-
 19 files changed, 41 insertions(+), 115 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
index be0514bf27ec..e0d2ee48d733 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
@@ -26,8 +26,7 @@ static const struct dpu_ubwc_cfg msm8998_ubwc_cfg = {
.highest_bank_bit = 0x2,
 };
 
-static const struct dpu_mdp_cfg msm8998_mdp[] = {
-   {
+static const struct dpu_mdp_cfg msm8998_mdp = {
.name = "top_0", .id = MDP_TOP,
.base = 0x0, .len = 0x458,
.features = BIT(DPU_MDP_VSYNC_SEL),
@@ -41,7 +40,6 @@ static const struct dpu_mdp_cfg msm8998_mdp[] = {
.clk_ctrls[DPU_CLK_CTRL_DMA3] = { .reg_off = 0x2c4, .bit_off = 12 },
.clk_ctrls[DPU_CLK_CTRL_CURSOR0] = { .reg_off = 0x3a8, .bit_off = 16 },
.clk_ctrls[DPU_CLK_CTRL_CURSOR1] = { .reg_off = 0x3b0, .bit_off = 16 },
-   },
 };
 
 static const struct dpu_ctl_cfg msm8998_ctl[] = {
@@ -192,8 +190,7 @@ static const struct dpu_perf_cfg msm8998_perf_data = {
 const struct dpu_mdss_cfg dpu_msm8998_cfg = {
.caps = &msm8998_dpu_caps,
.ubwc = &msm8998_ubwc_cfg,
-   .mdp_count = ARRAY_SIZE(msm8998_mdp),
-   .mdp = msm8998_mdp,
+   .mdp = &msm8998_mdp,
.ctl_count = ARRAY_SIZE(msm8998_ctl),
.ctl = msm8998_ctl,
.sspp_count = ARRAY_SIZE(msm8998_sspp),
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
index b33472625fcb..72295d5a10dc 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
@@ -26,8 +26,7 @@ static const struct dpu_ubwc_cfg sdm845_ubwc_cfg = {
.highest_bank_bit = 0x2,
 };
 
-static const struct dpu_mdp_cfg sdm845_mdp[] = {
-   {
+static const struct dpu_mdp_cfg sdm845_mdp = {
.name = "top_0", .id = MDP_TOP,
.base = 0x0, .len = 0x45c,
.features = BIT(DPU_MDP_AUDIO_SELECT) | BIT(DPU_MDP_VSYNC_SEL),
@@ -39,7 +38,6 @@ static const struct dpu_mdp_cfg sdm845_mdp[] = {
.clk_ctrls[DPU_CLK_CTRL_DMA1] = { .reg_off = 0x2b4, .bit_off = 8 },
.clk_ctrls[DPU_CLK_CTRL_DMA2] = { .reg_off = 0x2bc, .bit_off = 8 },
.clk_ctrls[DPU_CLK_CTRL_DMA3] = { .reg_off = 0x2c4, .bit_off = 8 },
-   },
 };
 
 static const struct dpu_ctl_cfg sdm845_ctl[] = {
@@ -196,8 +194,7 @@ static const struct dpu_perf_cfg sdm845_perf_data = {
 const struct dpu_mdss_cfg dpu_sdm845_cfg = {
.caps = &sdm845_dpu_caps,
.ubwc = &sdm845_ubwc_cfg,
-   .mdp_count = ARRAY_SIZE(sdm845_mdp),
-   .mdp = sdm845_mdp,
+   .mdp = _mdp,
.ctl_count = ARRAY_SIZE(sdm845_ctl),
.ctl = sdm845_ctl,
.sspp_count = ARRAY_SIZE(sdm845_sspp),
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
index 64ed10da1b73..418312b164b8 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
@@ -26,8 +26,7 @@ static const struct dpu_ubwc_cfg sm8150_ubwc_cfg = {
.highest_bank_bit = 0x2,
 };
 
-static const struct dpu_mdp_cfg sm8150_mdp[] = {
-   {
+static const struct dpu_mdp_cfg sm8150_mdp = {
.name = "top_0", .id = MDP_TOP,
.base = 0x0, .len = 0x45c,
.features = BIT(DPU_MDP_AUDIO_SELECT),
@@ -39,7 +38,6 @@ static const struct dpu_mdp_cfg sm8150_mdp[] = {
.clk_ctrls[DPU_CLK_CTRL_DMA1] = { .reg_off = 0x2b4, .bit_off = 8 },
.clk_ctrls[DPU_CLK_CTRL_DMA2] = { .reg_off = 0x2bc, .bit_off = 8 },

[PATCH v4 02/19] drm/msm/dpu: always use MSM_DP/DSI_CONTROLLER_n

2023-06-19 Thread Dmitry Baryshkov
In several catalog entries we did not use existing MSM_DP_CONTROLLER_n
constants. Fill them in. Also use freshly defined MSM_DSI_CONTROLLER_n
for DSI interfaces.

Reviewed-by: Marijn Suijten 
Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h  | 6 +++---
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h   | 8 
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h   | 8 
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h  | 4 ++--
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h   | 8 
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_4_sm6350.h   | 4 ++--
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h  | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_9_sm6375.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h   | 4 ++--
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h   | 2 +-
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h | 4 ++--
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h   | 4 ++--
 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h   | 4 ++--
 15 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
index 7d0d0e74c3b0..be0514bf27ec 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
@@ -139,13 +139,13 @@ static const struct dpu_dspp_cfg msm8998_dspp[] = {
 };
 
 static const struct dpu_intf_cfg msm8998_intf[] = {
-   INTF_BLK("intf_0", INTF_0, 0x6a000, 0x280, INTF_DP, 0, 21, 
INTF_SDM845_MASK,
+   INTF_BLK("intf_0", INTF_0, 0x6a000, 0x280, INTF_DP, 
MSM_DP_CONTROLLER_0, 21, INTF_SDM845_MASK,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 24),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 25)),
-   INTF_BLK("intf_1", INTF_1, 0x6a800, 0x280, INTF_DSI, 0, 21, 
INTF_SDM845_MASK,
+   INTF_BLK("intf_1", INTF_1, 0x6a800, 0x280, INTF_DSI, 
MSM_DSI_CONTROLLER_0, 21, INTF_SDM845_MASK,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 26),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 27)),
-   INTF_BLK("intf_2", INTF_2, 0x6b000, 0x280, INTF_DSI, 1, 21, 
INTF_SDM845_MASK,
+   INTF_BLK("intf_2", INTF_2, 0x6b000, 0x280, INTF_DSI, 
MSM_DSI_CONTROLLER_1, 21, INTF_SDM845_MASK,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 28),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 29)),
INTF_BLK("intf_3", INTF_3, 0x6b800, 0x280, INTF_HDMI, 0, 21, 
INTF_SDM845_MASK,
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
index b6098141bb9b..b33472625fcb 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
@@ -143,16 +143,16 @@ static const struct dpu_dsc_cfg sdm845_dsc[] = {
 };
 
 static const struct dpu_intf_cfg sdm845_intf[] = {
-   INTF_BLK("intf_0", INTF_0, 0x6a000, 0x280, INTF_DP, 0, 24, 
INTF_SDM845_MASK,
+   INTF_BLK("intf_0", INTF_0, 0x6a000, 0x280, INTF_DP, 
MSM_DP_CONTROLLER_0, 24, INTF_SDM845_MASK,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 24),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 25)),
-   INTF_BLK("intf_1", INTF_1, 0x6a800, 0x280, INTF_DSI, 0, 24, 
INTF_SDM845_MASK,
+   INTF_BLK("intf_1", INTF_1, 0x6a800, 0x280, INTF_DSI, 
MSM_DSI_CONTROLLER_0, 24, INTF_SDM845_MASK,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 26),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 27)),
-   INTF_BLK("intf_2", INTF_2, 0x6b000, 0x280, INTF_DSI, 1, 24, 
INTF_SDM845_MASK,
+   INTF_BLK("intf_2", INTF_2, 0x6b000, 0x280, INTF_DSI, 
MSM_DSI_CONTROLLER_1, 24, INTF_SDM845_MASK,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 28),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 29)),
-   INTF_BLK("intf_3", INTF_3, 0x6b800, 0x280, INTF_DP, 1, 24, 
INTF_SDM845_MASK,
+   INTF_BLK("intf_3", INTF_3, 0x6b800, 0x280, INTF_DP, 
MSM_DP_CONTROLLER_1, 24, INTF_SDM845_MASK,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31)),
 };
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h 
b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
index b5f751354267..64ed10da1b73 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
@@ -162,18 +162,18 @@ static const struct dpu_dsc_cfg sm8150_dsc[] = {
 };
 
 static const struct dpu_intf_cfg sm8150_intf[] = {
-   INTF_BLK("intf_0", INTF_0, 0x6a000, 0x280, INTF_DP, 0, 24, 
INTF_SC7180_MASK,
+   INTF_BLK("intf_0", INTF_0, 

[PATCH v4 01/19] drm/msm: enumerate DSI interfaces

2023-06-19 Thread Dmitry Baryshkov
Follow the DP example and define MSM_DSI_CONTROLLER_n enumeration.

Reviewed-by: Abhinav Kumar 
Reviewed-by: Marijn Suijten 
Tested-by: Marijn Suijten 
Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/msm_drv.h | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index e13a8cbd61c9..ad4fad2bcdc8 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -65,6 +65,12 @@ enum msm_dp_controller {
MSM_DP_CONTROLLER_COUNT,
 };
 
+enum msm_dsi_controller {
+   MSM_DSI_CONTROLLER_0,
+   MSM_DSI_CONTROLLER_1,
+   MSM_DSI_CONTROLLER_COUNT,
+};
+
 #define MSM_GPU_MAX_RINGS 4
 #define MAX_H_TILES_PER_DISPLAY 2
 
@@ -117,7 +123,7 @@ struct msm_drm_private {
struct hdmi *hdmi;
 
/* DSI is shared by mdp4 and mdp5 */
-   struct msm_dsi *dsi[2];
+   struct msm_dsi *dsi[MSM_DSI_CONTROLLER_COUNT];
 
struct msm_dp *dp[MSM_DP_CONTROLLER_COUNT];
 
-- 
2.39.2



[PATCH v4 00/19] drm/msm/dpu: another catalog rework

2023-06-19 Thread Dmitry Baryshkov
Having a macro with 10 arguments doesn't seem like a good idea. It makes
it inherently harder to compare the actual structure values. Also this
leads to adding macros covering varieties of the block.

As it was previously discussed, inline all foo_BLK macros in order to
ease performing changes to the catalog data.

Major part of the conversion was performed using vim script found at
[1]. Then some manual cleanups were applied, like dropping fields set to
0.

Dependencies: msm-next-lumag.

Changes since v3:
 - Fixed DSC 1.1 block length to 0x140 (Marijn)
 - Fixed mdp->caps assignment in dpu_hw_mdptop_init() (Marijn)

Changes since v2:
 - Rebased on top of msm-next-lumag
 - Fixed MSM_DP/DSI_CONTROLLER_n usage in sm6350 and sm6375 catalog data
   (Abhinav, Marijn).

Changes since v1:
 - Rebased on top of msm-next
 - Dropped dependency on interrupt rework

[1] https://pastebin.ubuntu.com/p/K6vkjmxZdd/

Dmitry Baryshkov (19):
  drm/msm: enumerate DSI interfaces
  drm/msm/dpu: always use MSM_DP/DSI_CONTROLLER_n
  drm/msm/dpu: simplify peer LM handling
  drm/msm/dpu: drop dpu_mdss_cfg::mdp_count field
  drm/msm/dpu: drop enum dpu_mdp and MDP_TOP value
  drm/msm/dpu: expand .clk_ctrls definitions
  drm/msm/dpu: drop zero features from dpu_mdp_cfg data
  drm/msm/dpu: drop zero features from dpu_ctl_cfg data
  drm/msm/dpu: correct indentation for CTL definitions
  drm/msm/dpu: inline SSPP_BLK macros
  drm/msm/dpu: inline DSPP_BLK macros
  drm/msm/dpu: inline LM_BLK macros
  drm/msm/dpu: inline DSC_BLK and DSC_BLK_1_2 macros
  drm/msm/dpu: inline MERGE_3D_BLK macros
  drm/msm/dpu: inline various PP_BLK_* macros
  drm/msm/dpu: inline WB_BLK macros
  drm/msm/dpu: inline INTF_BLK and INTF_BLK_DSI_TE macros
  drm/msm/dpu: drop empty features mask MERGE_3D_SM8150_MASK
  drm/msm/dpu: drop empty features mask INTF_SDM845_MASK

 .../msm/disp/dpu1/catalog/dpu_3_0_msm8998.h   | 327 
 .../msm/disp/dpu1/catalog/dpu_4_0_sdm845.h| 348 +
 .../msm/disp/dpu1/catalog/dpu_5_0_sm8150.h| 411 ++-
 .../msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h   | 448 +++-
 .../msm/disp/dpu1/catalog/dpu_6_0_sm8250.h| 430 +++-
 .../msm/disp/dpu1/catalog/dpu_6_2_sc7180.h| 184 +--
 .../msm/disp/dpu1/catalog/dpu_6_3_sm6115.h|  88 +++-
 .../msm/disp/dpu1/catalog/dpu_6_4_sm6350.h| 188 ---
 .../msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h   |  88 +++-
 .../msm/disp/dpu1/catalog/dpu_6_9_sm6375.h|  95 +++-
 .../msm/disp/dpu1/catalog/dpu_7_0_sm8350.h| 418 ++-
 .../msm/disp/dpu1/catalog/dpu_7_2_sc7280.h| 244 ++---
 .../msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h  | 484 +-
 .../msm/disp/dpu1/catalog/dpu_8_1_sm8450.h| 445 +++-
 .../msm/disp/dpu1/catalog/dpu_9_0_sm8550.h| 467 -
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c| 130 -
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h|   5 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h   |   5 -
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c|  37 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.h|   9 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |   4 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c|  34 +-
 drivers/gpu/drm/msm/msm_drv.h |   8 +-
 23 files changed, 3321 insertions(+), 1576 deletions(-)

-- 
2.39.2



[PATCH v2] drm/msm/dsi: Document DSC related pclk_rate and hdisplay calculations

2023-06-19 Thread Dmitry Baryshkov
Provide actual documentation for the pclk and hdisplay calculations in
the case of DSC compression being used.

Signed-off-by: Dmitry Baryshkov 
---

Changes since v1:
- Converted dsi_adjust_pclk_for_compression() into kerneldoc (Marijn)
- Added a pointer from dsi_timing_setup() docs to
  dsi_adjust_pclk_for_compression() (Marijn)
- Fixed two typo (Marijn)

---
 drivers/gpu/drm/msm/dsi/dsi_host.c | 40 --
 1 file changed, 38 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c 
b/drivers/gpu/drm/msm/dsi/dsi_host.c
index 3f6dfb4f9d5a..a8a31c3dd168 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -528,6 +528,25 @@ void dsi_link_clk_disable_v2(struct msm_dsi_host *msm_host)
clk_disable_unprepare(msm_host->byte_clk);
 }
 
+/**
+ * dsi_adjust_pclk_for_compression() - Adjust the pclk rate for compression 
case
+ * @mode: the selected mode for the DSI output
+ * @dsc: DRM DSC configuration for this DSI output
+ *
+ * Adjust the pclk rate by calculating a new hdisplay proportional to
+ * the compression ratio such that:
+ * new_hdisplay = old_hdisplay * compressed_bpp / uncompressed_bpp
+ *
+ * Porches do not need to be adjusted:
+ * - For the VIDEO mode they are not compressed by DSC and are passed as is.
+ * - For the CMD mode there are no actual porches. Instead these fields
+ *   currently represent the overhead to the image data transfer. As such, they
+ *   are calculated for the final mode parameters (after the compression) and
+ *   are not to be adjusted either.
+ *
+ *  FIXME: Reconsider this if/when CMD mode handling is rewritten to use
+ *  refresh rate and data overhead as a starting point of the calculations.
+ */
 static unsigned long dsi_adjust_pclk_for_compression(const struct 
drm_display_mode *mode,
const struct drm_dsc_config *dsc)
 {
@@ -926,8 +945,25 @@ static void dsi_timing_setup(struct msm_dsi_host 
*msm_host, bool is_bonded_dsi)
if (ret)
return;
 
-   /* Divide the display by 3 but keep back/font porch and
-* pulse width same
+   /*
+* DPU sends 3 bytes per pclk cycle to DSI. If compression is
+* not used, a single pixel is transferred at each pulse, no
+* matter what bpp or pixel format is used. In case of DSC
+* compression this results (due to data alignment
+* requirements) in a transfer of 3 compressed pixels per pclk
+* cycle.
+*
+* If widebus is enabled, bus width is extended to 6 bytes.
+* This way the DPU can transfer 6 compressed pixels with bpp
+* less or equal to 8 or 3 compressed pixels in case bpp is
+* greater than 8.
+*
+* The back/front porch and pulse width are kept intact.  They
+* represent timing parameters rather than actual data
+* transfer. See the documentation of
+* dsi_adjust_pclk_for_compression().
+*
+* XXX: widebus is not supported by the driver (yet).
 */
h_total -= hdisplay;
hdisplay = 
DIV_ROUND_UP(msm_dsc_get_bytes_per_line(msm_host->dsc), 3);
-- 
2.39.2



Re: [PATCH] drm/msm/dsi: Document DSC related pclk_rate and hdisplay calculations

2023-06-19 Thread Dmitry Baryshkov

On 16/06/2023 15:25, Marijn Suijten wrote:

On 2023-06-16 12:41:52, Dmitry Baryshkov wrote:

Provide actual documentation for the pclk and hdisplay calculations in
the case of DSC compression being used.

Signed-off-by: Dmitry Baryshkov 
---
  drivers/gpu/drm/msm/dsi/dsi_host.c | 35 --
  1 file changed, 33 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c 
b/drivers/gpu/drm/msm/dsi/dsi_host.c
index 3f6dfb4f9d5a..72c377c9c7be 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -528,6 +528,21 @@ void dsi_link_clk_disable_v2(struct msm_dsi_host *msm_host)
clk_disable_unprepare(msm_host->byte_clk);
  }
  
+/*

+ * Adjust the pclk rate by calculating a new hdisplay proportional to


Make this a kerneldoc with:


Ack



 /**
  * dsi_adjust_pclk_for_compression() - Adjust ...


+ * the compression ratio such that:
+ * new_hdisplay = old_hdisplay * compressed_bpp / uncompressed_bpp
+ *
+ * Porches do not need to be adjusted:
+ * - For the VIDEO mode they are not compressed by DSC and are passed as is.


as-is


Cambridge dictionary gives this "as is", without dash.



Though this was never tested nor confirmed by QUIC, but we can assume it
is the case for now?


+ * - For the CMD mode the are no actual porches. Instead they represent the


the are no -> these are not

they currently* represent.  


Ack


Let's make sure that folks read the FIXME
below by perhaps rewording it right into this entry?


I kept it separately, so that the FIXME can be removed once CMD handling 
is reworked.





+ *   overhead to the image data transfer. As such, they are calculated for the
+ *   final mode parameters (after the compression) and are not to be adjusted
+ *   too.
+ *
+ *  FIXME: Reconsider this if/when CMD mode handling is rewritten to use
+ *  refresh rate and data overhead as a starting point of the calculations.
+ */
  static unsigned long dsi_adjust_pclk_for_compression(const struct 
drm_display_mode *mode,
const struct drm_dsc_config *dsc)
  {
@@ -926,8 +941,24 @@ static void dsi_timing_setup(struct msm_dsi_host 
*msm_host, bool is_bonded_dsi)
if (ret)
return;
  
-		/* Divide the display by 3 but keep back/font porch and

-* pulse width same
+   /*
+* DPU sends 3 bytes per pclk cycle to DSI. If compression is
+* not used, a single pixel is transferred at each pulse, no
+* matter what bpp or pixel format is used. In case of DSC
+* compression this results (due to data alignment
+* requirements) in a transfer of 3 compressed pixel per pclk


3 compressed bytes*, not pixels.


No, that's the point. With 6bpp one can think that 4 pixels would fit, 
but they don't.





+* cycle.
+*
+* If widebus is enabled, bus width is extended to 6 bytes.
+* This way the DPU can transfer 6 compressed pixels with bpp


pixels -> bytes?


Same comment, no.




+* less or equal to 8 or 3 compressed pyxels in case bpp is


pixels*... bytes?

And I will ask this **again**: does this mean we can halve pclk?


My guess would be no, since all other data transfers are not scaled by 
wide bus.





+* greater than 8.
+*
+* The back/font porch and pulse width are kept intact.  They
+* represent timing parameters rather than actual data
+* transfer.


See FIXME above on dsi_adjust_pclk_for_compression()?

Thanks so much for finally putting some of this to paper.

- Marijn


+*
+* XXX: widebus is not supported by the driver (yet).
 */
h_total -= hdisplay;
hdisplay = 
DIV_ROUND_UP(msm_dsc_get_bytes_per_line(msm_host->dsc), 3);
--
2.39.2



--
With best wishes
Dmitry



Re: [PATCH] drm/bridge_connector: Handle drm_connector_init_with_ddc() failures

2023-06-19 Thread Laurent Pinchart
Hi Geert,

Thank you for the patch.

On Mon, Jun 19, 2023 at 02:24:21PM +0200, Geert Uytterhoeven wrote:
> drm_connector_init_with_ddc() can fail, but the call in
> drm_bridge_connector_init() does not check that.  Fix this by adding
> the missing error handling.
> 
> Signed-off-by: Geert Uytterhoeven 
> ---
>  drivers/gpu/drm/drm_bridge_connector.c | 12 +---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_bridge_connector.c 
> b/drivers/gpu/drm/drm_bridge_connector.c
> index 19ae4a177ac386b2..d2f5602ad4eb5953 100644
> --- a/drivers/gpu/drm/drm_bridge_connector.c
> +++ b/drivers/gpu/drm/drm_bridge_connector.c
> @@ -317,7 +317,7 @@ struct drm_connector *drm_bridge_connector_init(struct 
> drm_device *drm,
>   struct drm_connector *connector;
>   struct i2c_adapter *ddc = NULL;
>   struct drm_bridge *bridge, *panel_bridge = NULL;
> - int connector_type;
> + int connector_type, ret;

With 'ret' declared on a separate line,

Reviewed-by: Laurent Pinchart 

>  
>   bridge_connector = kzalloc(sizeof(*bridge_connector), GFP_KERNEL);
>   if (!bridge_connector)
> @@ -368,8 +368,14 @@ struct drm_connector *drm_bridge_connector_init(struct 
> drm_device *drm,
>   return ERR_PTR(-EINVAL);
>   }
>  
> - drm_connector_init_with_ddc(drm, connector, &drm_bridge_connector_funcs,
> - connector_type, ddc);
> + ret = drm_connector_init_with_ddc(drm, connector,
> +   &drm_bridge_connector_funcs,
> +   connector_type, ddc);
> + if (ret) {
> + kfree(bridge_connector);
> + return ERR_PTR(ret);
> + }
> +
>   drm_connector_helper_add(connector, &drm_bridge_connector_helper_funcs);
>  
>   if (bridge_connector->bridge_hpd)

-- 
Regards,

Laurent Pinchart


[PATCH] drm/panel: simple: Add connector_type for innolux_at043tn24

2023-06-19 Thread Fabio Estevam
From: Fabio Estevam 

The innolux at043tn24 display is a parallel LCD. Pass the 'connector_type'
information to avoid the following warning:

panel-simple panel: Specify missing connector_type

Signed-off-by: Fabio Estevam 
---
 drivers/gpu/drm/panel/panel-simple.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/panel/panel-simple.c 
b/drivers/gpu/drm/panel/panel-simple.c
index a247a0e7c799..7c80528d571e 100644
--- a/drivers/gpu/drm/panel/panel-simple.c
+++ b/drivers/gpu/drm/panel/panel-simple.c
@@ -2178,6 +2178,7 @@ static const struct panel_desc innolux_at043tn24 = {
.height = 54,
},
.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+   .connector_type = DRM_MODE_CONNECTOR_DPI,
.bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE,
 };
 
-- 
2.34.1



Re: [PATCH v4 4/6] dt-bindings: display: stm32-ltdc: add optional st,fb-bpp property

2023-06-19 Thread Conor Dooley
Hey,

On Mon, Jun 19, 2023 at 06:55:23PM +0200, Dario Binacchi wrote:
> Boards that use the STM32F{4,7} series have limited amounts of RAM. The
> added property allows sizing, within certain limits, the memory footprint
> required by the framebuffer.

Hmm, this sounds quite a lot like "software policy", since the actual
display doesn't have these limitations. Rob, Krzysztof?

> 
> Signed-off-by: Dario Binacchi 
> ---
> 
> (no changes since v1)

Really?
https://lore.kernel.org/all/?q=dfn:st,stm32-ltdc.yaml%20

You sure this shouldn't be "new in v4"?

>  .../devicetree/bindings/display/st,stm32-ltdc.yaml  | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/display/st,stm32-ltdc.yaml 
> b/Documentation/devicetree/bindings/display/st,stm32-ltdc.yaml
> index d6ea4d62a2cf..1c3a3653579f 100644
> --- a/Documentation/devicetree/bindings/display/st,stm32-ltdc.yaml
> +++ b/Documentation/devicetree/bindings/display/st,stm32-ltdc.yaml
> @@ -42,6 +42,12 @@ properties:
>- for internal dpi input of the MIPI DSI host controller.
>Note: These 2 endpoints cannot be activated simultaneously.
>  
> +  st,fb-bpp:

Is there not a more understandable property name than this?
Maybe I just had to think about it because fbdev stuff aint something
I've worked with...

> +$ref: /schemas/types.yaml#/definitions/uint32
> +description: |
> +  bit depth of framebuffer (8, 16 or 32)
> +maxItems: 1

Why not make it an enum, since there are only 4 values?

Cheers,
Conor.

> +
>  required:
>- compatible
>- reg
> -- 
> 2.32.0
> 




Re: [PATCH v6 2/8] PCI/VGA: Deal only with VGA class devices

2023-06-19 Thread Limonciello, Mario



On 6/12/2023 2:25 PM, Sui Jingfeng wrote:

From: Sui Jingfeng 

Deal only with VGA class devices (pdev->class == 0x0300), so replace the
pci_get_subsys() function with pci_get_class(). Filter out the non-VGA
display devices (pdev->class != 0x0300). There is no need to process
non-display PCI devices.

Signed-off-by: Sui Jingfeng 
---

This also means that deleting a PCI device no longer needs
to walk the list.

Reviewed-by: Mario Limonciello 


  drivers/pci/vgaarb.c | 22 --
  1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/pci/vgaarb.c b/drivers/pci/vgaarb.c
index c1bc6c983932..22a505e877dc 100644
--- a/drivers/pci/vgaarb.c
+++ b/drivers/pci/vgaarb.c
@@ -754,10 +754,6 @@ static bool vga_arbiter_add_pci_device(struct pci_dev 
*pdev)
struct pci_dev *bridge;
u16 cmd;
  
-	/* Only deal with VGA class devices */

-   if ((pdev->class >> 8) != PCI_CLASS_DISPLAY_VGA)
-   return false;
-
/* Allocate structure */
vgadev = kzalloc(sizeof(struct vga_device), GFP_KERNEL);
if (vgadev == NULL) {
@@ -1500,7 +1496,9 @@ static int pci_notify(struct notifier_block *nb, unsigned 
long action,
struct pci_dev *pdev = to_pci_dev(dev);
bool notify = false;
  
-	vgaarb_dbg(dev, "%s\n", __func__);

+   /* Only deal with VGA class devices */
+   if (pdev->class != PCI_CLASS_DISPLAY_VGA << 8)
+   return 0;
  
  	/* For now we're only intereted in devices added and removed. I didn't

 * test this thing here, so someone needs to double check for the
@@ -1510,6 +1508,8 @@ static int pci_notify(struct notifier_block *nb, unsigned 
long action,
else if (action == BUS_NOTIFY_DEL_DEVICE)
notify = vga_arbiter_del_pci_device(pdev);
  
+	vgaarb_dbg(dev, "%s: action = %lu\n", __func__, action);

+
if (notify)
vga_arbiter_notify_clients();
return 0;
@@ -1534,8 +1534,8 @@ static struct miscdevice vga_arb_device = {
  
  static int __init vga_arb_device_init(void)

  {
+   struct pci_dev *pdev = NULL;
int rc;
-   struct pci_dev *pdev;
  
	rc = misc_register(&vga_arb_device);

if (rc < 0)
@@ -1545,11 +1545,13 @@ static int __init vga_arb_device_init(void)
  
  	/* We add all PCI devices satisfying VGA class in the arbiter by

 * default */
-   pdev = NULL;
-   while ((pdev =
-   pci_get_subsys(PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
-  PCI_ANY_ID, pdev)) != NULL)
+   while (1) {
+   pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev);
+   if (!pdev)
+   break;
+
vga_arbiter_add_pci_device(pdev);
+   }
  
  	pr_info("loaded\n");

return rc;


Re: [PATCH v4 4/6] dt-bindings: display: stm32-ltdc: add optional st,fb-bpp property

2023-06-19 Thread Rob Herring


On Mon, 19 Jun 2023 18:55:23 +0200, Dario Binacchi wrote:
> Boards that use the STM32F{4,7} series have limited amounts of RAM. The
> added property allows sizing, within certain limits, the memory footprint
> required by the framebuffer.
> 
> Signed-off-by: Dario Binacchi 
> ---
> 
> (no changes since v1)
> 
>  .../devicetree/bindings/display/st,stm32-ltdc.yaml  | 6 ++
>  1 file changed, 6 insertions(+)
> 

My bot found errors running 'make DT_CHECKER_FLAGS=-m dt_binding_check'
on your patch (DT_CHECKER_FLAGS is new in v5.13):

yamllint warnings/errors:

dtschema/dtc warnings/errors:
/builds/robherring/dt-review-ci/linux/Documentation/devicetree/bindings/display/st,stm32-ltdc.yaml:
 properties:st,fb-bpp:maxItems: False schema does not allow 1
from schema $id: http://devicetree.org/meta-schemas/core.yaml#

doc reference errors (make refcheckdocs):

See 
https://patchwork.ozlabs.org/project/devicetree-bindings/patch/20230619165525.1035243-5-dario.binac...@amarulasolutions.com

The base for the series is generally the latest rc1. A different dependency
should be noted in *this* patch.

If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:

pip3 install dtschema --upgrade

Please check and re-submit after running the above command yourself. Note
that DT_SCHEMA_FILES can be set to your schema file to speed up checking
your schema. However, it must be unset to test all examples with your schema.



[PATCH v4 6/6] drm/stm: set framebuffer bit depth through DTS property

2023-06-19 Thread Dario Binacchi
The patch, which is backwards compatible, sets the bit depth of the
framebuffer using the optional property 'st,fb-bpp' in the DTS.

Signed-off-by: Dario Binacchi 

---

Changes in v4:
- Use DTS property instead of module parameter to set the framebuffer bit depth.

Changes in v3:
- drop [4/6] dt-bindings: display: simple: add Rocktech RK043FN48H
  Applied to https://anongit.freedesktop.org/git/drm/drm-misc.git 
(drm-misc-next):
  
https://cgit.freedesktop.org/drm/drm-misc/commit/?id=c42a37a27c777d63961dd634a30f7c887949491a
- drop [5/6] drm/panel: simple: add support for Rocktech RK043FN48H panel
  Applied to https://anongit.freedesktop.org/git/drm/drm-misc.git 
(drm-misc-next)
  
https://cgit.freedesktop.org/drm/drm-misc/commit/?id=13cdd12a9f934158f4ec817cf048fcb4384aa9dc

 drivers/gpu/drm/stm/drv.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/stm/drv.c b/drivers/gpu/drm/stm/drv.c
index 40df7d8c..7a61a3c63469 100644
--- a/drivers/gpu/drm/stm/drv.c
+++ b/drivers/gpu/drm/stm/drv.c
@@ -180,7 +180,9 @@ static const struct dev_pm_ops drv_pm_ops = {
 static int stm_drm_platform_probe(struct platform_device *pdev)
 {
	struct device *dev = &pdev->dev;
+   struct device_node *np = pdev->dev.of_node;
struct drm_device *ddev;
+   u32 fb_bpp = 16;
int ret;
 
DRM_DEBUG("%s\n", __func__);
@@ -203,7 +205,9 @@ static int stm_drm_platform_probe(struct platform_device 
*pdev)
if (ret)
goto err_put;
 
-   drm_fbdev_dma_setup(ddev, 16);
+   of_property_read_u32(np, "st,fb-bpp", &fb_bpp);
+
+   drm_fbdev_dma_setup(ddev, fb_bpp);
 
return 0;
 
-- 
2.32.0



[PATCH v4 4/6] dt-bindings: display: stm32-ltdc: add optional st, fb-bpp property

2023-06-19 Thread Dario Binacchi
Boards that use the STM32F{4,7} series have limited amounts of RAM. The
added property allows sizing, within certain limits, the memory footprint
required by the framebuffer.

Signed-off-by: Dario Binacchi 
---

(no changes since v1)

 .../devicetree/bindings/display/st,stm32-ltdc.yaml  | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/Documentation/devicetree/bindings/display/st,stm32-ltdc.yaml 
b/Documentation/devicetree/bindings/display/st,stm32-ltdc.yaml
index d6ea4d62a2cf..1c3a3653579f 100644
--- a/Documentation/devicetree/bindings/display/st,stm32-ltdc.yaml
+++ b/Documentation/devicetree/bindings/display/st,stm32-ltdc.yaml
@@ -42,6 +42,12 @@ properties:
   - for internal dpi input of the MIPI DSI host controller.
   Note: These 2 endpoints cannot be activated simultaneously.
 
+  st,fb-bpp:
+$ref: /schemas/types.yaml#/definitions/uint32
+description: |
+  bit depth of framebuffer (8, 16 or 32)
+maxItems: 1
+
 required:
   - compatible
   - reg
-- 
2.32.0



[PATCH v4 0/6] Add display support on the stm32f746-disco board

2023-06-19 Thread Dario Binacchi
The series adds support for the display on the stm32f746-disco board,
along with a generic patch that adds the "bpp" parameter to the stm-drm
module. The intention is to allow users to size, within certain limits,
the memory footprint required by the framebuffer.

Changes in v4:
- Use DTS property instead of module parameter to set the framebuffer bit depth.

Changes in v3:
- rename ltdc-pins-a-0 to ltdc-0.
- drop [4/6] dt-bindings: display: simple: add Rocktech RK043FN48H
  Applied to https://anongit.freedesktop.org/git/drm/drm-misc.git 
(drm-misc-next):
  
https://cgit.freedesktop.org/drm/drm-misc/commit/?id=c42a37a27c777d63961dd634a30f7c887949491a
- drop [5/6] drm/panel: simple: add support for Rocktech RK043FN48H panel
  Applied to https://anongit.freedesktop.org/git/drm/drm-misc.git 
(drm-misc-next)
  
https://cgit.freedesktop.org/drm/drm-misc/commit/?id=13cdd12a9f934158f4ec817cf048fcb4384aa9dc

Dario Binacchi (6):
  ARM: dts: stm32: add ltdc support on stm32f746 MCU
  ARM: dts: stm32: add pin map for LTDC on stm32f7
  ARM: dts: stm32: support display on stm32f746-disco board
  dt-bindings: display: stm32-ltdc: add optional st,fb-bpp property
  ARM: dts: stm32: set framebuffer bit depth on stm32f746-disco
  drm/stm: set framebuffer bit depth through DTS property

 .../bindings/display/st,stm32-ltdc.yaml   |  6 +++
 arch/arm/boot/dts/stm32f7-pinctrl.dtsi| 35 +
 arch/arm/boot/dts/stm32f746-disco.dts | 52 +++
 arch/arm/boot/dts/stm32f746.dtsi  | 10 
 drivers/gpu/drm/stm/drv.c |  6 ++-
 5 files changed, 108 insertions(+), 1 deletion(-)

-- 
2.32.0



[PATCH v3 6/6] drm/msm/a6xx: Fix up GMU region reservations

2023-06-19 Thread Konrad Dybcio
Change the order of region allocations to make the addresses match
downstream. This shouldn't matter very much, but helps eliminate one
more difference when comparing register accesses.

Also, make the log region 16K long. That's what it is, unconditionally
on A6xx and A7xx.

Signed-off-by: Konrad Dybcio 
---
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c 
b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 55b12a8066ee..d682c1ed48db 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1640,13 +1640,13 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct 
device_node *node)
goto err_memory;
}
 
-   /* Allocate memory for for the HFI queues */
-   ret = a6xx_gmu_memory_alloc(gmu, &gmu->hfi, SZ_16K, 0, "hfi");
+   /* Allocate memory for the GMU log region */
+   ret = a6xx_gmu_memory_alloc(gmu, &gmu->log, SZ_16K, 0, "log");
	if (ret)
		goto err_memory;

-   /* Allocate memory for the GMU log region */
-   ret = a6xx_gmu_memory_alloc(gmu, &gmu->log, SZ_4K, 0, "log");
+   /* Allocate memory for for the HFI queues */
+   ret = a6xx_gmu_memory_alloc(gmu, &gmu->hfi, SZ_16K, 0, "hfi");
if (ret)
goto err_memory;
 

-- 
2.41.0



[PATCH v3 5/6] drm/msm/a6xx: Improve GMU force shutdown sequence

2023-06-19 Thread Konrad Dybcio
The GMU force shutdown sequence involves some additional register cleanup
which was not implemented previously. Do so.

Signed-off-by: Konrad Dybcio 
---
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c 
b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 9929ff187368..55b12a8066ee 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -893,6 +893,13 @@ static void a6xx_gmu_force_off(struct a6xx_gmu *gmu)
/* Make sure there are no outstanding RPMh votes */
a6xx_gmu_rpmh_off(gmu);
 
+   /* Clear the WRITEDROPPED fields and put fence into allow mode */
+   gmu_write(gmu, REG_A6XX_GMU_AHB_FENCE_STATUS_CLR, 0x7);
+   gmu_write(gmu, REG_A6XX_GMU_AO_AHB_FENCE_CTRL, 0);
+
+   /* Make sure the above writes go through */
+   wmb();
+
/* Halt the gmu cm3 core */
gmu_write(gmu, REG_A6XX_GMU_CM3_SYSRESET, 1);
 

-- 
2.41.0



[PATCH v3 0/6] Adreno QoL changes

2023-06-19 Thread Konrad Dybcio
This series brings some niceties in preparation for A7xx introduction.

It should be fully independent of the GMU wrapper series.

Signed-off-by: Konrad Dybcio 
---
Changes in v3:
- Pull more definitions from mesa
- Decode CP_PROTECT_CNTL bitfields
- Rebase on next-20230619
- Link to v2: 
https://lore.kernel.org/r/20230517-topic-a7xx_prep-v2-0-5b9daa2b2...@linaro.org

Changes in v2:
- Drop switching to using the GMU_AO counter in timestamp
- Add a definition for REG_A6XX_GMU_AHB_FENCE_STATUS_CLR, may be subbed
  with a register sync after mesa MR22901
- Link to v1: 
https://lore.kernel.org/r/20230517-topic-a7xx_prep-v1-0-7a964f2e9...@linaro.org

---
Konrad Dybcio (6):
  drm/msm/a6xx: Add some missing header definitions
  drm/msm/a6xx: Use descriptive bitfield names for CP_PROTECT_CNTL
  drm/msm/a6xx: Skip empty protection ranges entries
  drm/msm/a6xx: Ensure clean GMU state in a6xx_gmu_fw_start
  drm/msm/a6xx: Improve GMU force shutdown sequence
  drm/msm/a6xx: Fix up GMU region reservations

 drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 21 +
 drivers/gpu/drm/msm/adreno/a6xx_gmu.xml.h |  2 ++
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 14 ++
 3 files changed, 29 insertions(+), 8 deletions(-)
---
base-commit: 47045630bc409ce6606d97b790895210dd1d517d
change-id: 20230517-topic-a7xx_prep-787a69c7d0ff

Best regards,
-- 
Konrad Dybcio 



[PATCH v3 3/6] drm/msm/a6xx: Skip empty protection ranges entries

2023-06-19 Thread Konrad Dybcio
Some specific SKUs leave certain protection range registers empty.
Allow for that behavior.

Signed-off-by: Konrad Dybcio 
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c 
b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index cd0c9bccdc19..488c69cf08d3 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -935,8 +935,11 @@ static void a6xx_set_cp_protect(struct msm_gpu *gpu)
  A6XX_CP_PROTECT_CNTL_ACCESS_FAULT_ON_VIOL_EN |
  A6XX_CP_PROTECT_CNTL_LAST_SPAN_INF_RANGE);
 
-   for (i = 0; i < count - 1; i++)
-   gpu_write(gpu, REG_A6XX_CP_PROTECT(i), regs[i]);
+   for (i = 0; i < count - 1; i++) {
+   /* Intentionally skip writing to some registers */
+   if (regs[i])
+   gpu_write(gpu, REG_A6XX_CP_PROTECT(i), regs[i]);
+   }
/* last CP_PROTECT to have "infinite" length on the last entry */
gpu_write(gpu, REG_A6XX_CP_PROTECT(count_max - 1), regs[i]);
 }

-- 
2.41.0



[PATCH v3 4/6] drm/msm/a6xx: Ensure clean GMU state in a6xx_gmu_fw_start

2023-06-19 Thread Konrad Dybcio
While it's not very well understood, there is some sort of a fault
handler implemented in the GMU firmware which triggers when a certain
bit is set, resulting in the M3 core not booting up the way we expect
it to.

Write a magic value to a magic register to hopefully prevent that
from happening.

Signed-off-by: Konrad Dybcio 
---
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c 
b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 5deb79924897..9929ff187368 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -790,6 +790,12 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state)
gmu_write(gmu, REG_A6XX_GMU_AHB_FENCE_RANGE_0,
(1 << 31) | (0xa << 18) | (0xa0));
 
+   /*
+* Snapshots toggle the NMI bit which will result in a jump to the NMI
+* handler instead of __main. Set the M3 config value to avoid that.
+*/
+   gmu_write(gmu, REG_A6XX_GMU_CM3_CFG, 0x4052);
+
chipid = adreno_gpu->rev.core << 24;
chipid |= adreno_gpu->rev.major << 16;
chipid |= adreno_gpu->rev.minor << 12;

-- 
2.41.0
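As an aside on the hunk context above: the chipid word packs the Adreno revision fields by shifting them into place. A minimal sketch of that packing, using hypothetical revision values (core 6, major 3, minor 0, roughly an A630) that are not taken from this patch:

```python
# Hypothetical Adreno revision fields (core.major.minor = 6.3.0, i.e. an A630).
core, major, minor = 6, 3, 0

# Same shifts as in the diff context: core << 24 | major << 16 | minor << 12.
chipid = core << 24
chipid |= major << 16
chipid |= minor << 12

print(hex(chipid))  # prints 0x6030000
```

The patch itself only touches the CM3 config write; the packing is shown purely for orientation.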



[PATCH v3 2/6] drm/msm/a6xx: Use descriptive bitfield names for CP_PROTECT_CNTL

2023-06-19 Thread Konrad Dybcio
We have the necessary information, so explain which bit does what.

Signed-off-by: Konrad Dybcio 
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c 
b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index b3ada1e7b598..cd0c9bccdc19 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -930,7 +930,10 @@ static void a6xx_set_cp_protect(struct msm_gpu *gpu)
 * protect violation and select the last span to protect from the start
 * address all the way to the end of the register address space
 */
-   gpu_write(gpu, REG_A6XX_CP_PROTECT_CNTL, BIT(0) | BIT(1) | BIT(3));
+   gpu_write(gpu, REG_A6XX_CP_PROTECT_CNTL,
+ A6XX_CP_PROTECT_CNTL_ACCESS_PROT_EN |
+ A6XX_CP_PROTECT_CNTL_ACCESS_FAULT_ON_VIOL_EN |
+ A6XX_CP_PROTECT_CNTL_LAST_SPAN_INF_RANGE);
 
for (i = 0; i < count - 1; i++)
gpu_write(gpu, REG_A6XX_CP_PROTECT(i), regs[i]);

-- 
2.41.0
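To double-check that the rename above is purely cosmetic, the named bitfields can be compared with the old literal mask BIT(0) | BIT(1) | BIT(3). The bit positions below are assumed from mesa's a6xx register database; they are not stated in the patch itself:

```python
# Assumed bit positions (from mesa's a6xx register XML, not from this patch).
ACCESS_PROT_EN = 1 << 0
ACCESS_FAULT_ON_VIOL_EN = 1 << 1
LAST_SPAN_INF_RANGE = 1 << 3

named_mask = ACCESS_PROT_EN | ACCESS_FAULT_ON_VIOL_EN | LAST_SPAN_INF_RANGE
old_mask = (1 << 0) | (1 << 1) | (1 << 3)  # what the removed line wrote

print(hex(named_mask), named_mask == old_mask)  # prints 0xb True
```

Both masks evaluate to 0xb, so the register write is unchanged and only readability improves.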



[PATCH v3 1/6] drm/msm/a6xx: Add some missing header definitions

2023-06-19 Thread Konrad Dybcio
Add a definition of the GMU_AHB_FENCE_STATUS_CLR reg and CP_PROTECT_CNTL
bitfields.

This may be substituted with a mesa header sync.

Signed-off-by: Konrad Dybcio 
---
 drivers/gpu/drm/msm/adreno/a6xx_gmu.xml.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.xml.h 
b/drivers/gpu/drm/msm/adreno/a6xx_gmu.xml.h
index 9ab15d91aced..fcd9eb53baf8 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.xml.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.xml.h
@@ -425,6 +425,8 @@ static inline uint32_t A6XX_GMU_GPU_NAP_CTRL_SID(uint32_t 
val)
 
 #define REG_A6XX_GMU_AHB_FENCE_STATUS  0x9313
 
+#define REG_A6XX_GMU_AHB_FENCE_STATUS_CLR  0x9314
+
 #define REG_A6XX_GMU_RBBM_INT_UNMASKED_STATUS  0x9315
 
 #define REG_A6XX_GMU_AO_SPARE_CNTL 0x9316

-- 
2.41.0



[PATCH v3 6/6] drm/panel: sitronix-st7789v: Check display ID

2023-06-19 Thread Miquel Raynal
A very basic debugging rule when a device is connected for the first
time is to access a read-only register which contains known data in
order to ensure the communication protocol is properly working. This
driver lacked any read helper, which is often a critical piece for
speeding up bring-ups.

Add a read helper and use it to verify the communication with the panel
is working as soon as possible in order to inform the user early if this
is not the case.

As this panel may work with no MISO line, the check is skipped in that
case. Upon error, we do not fail probing but just warn the user, in case
the DT description lacks the Rx bus width (which is likely on old
descriptions), to avoid breaking existing devices.

Signed-off-by: Miquel Raynal 
Acked-by: Sam Ravnborg 
Acked-by: Maxime Ripard 
---
 .../gpu/drm/panel/panel-sitronix-st7789v.c| 81 +++
 1 file changed, 81 insertions(+)

diff --git a/drivers/gpu/drm/panel/panel-sitronix-st7789v.c 
b/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
index 8649966ceae8..205de179f7f2 100644
--- a/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
+++ b/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
@@ -110,6 +110,9 @@
return val; \
} while (0)
 
+#define ST7789V_IDS { 0x85, 0x85, 0x52 }
+#define ST7789V_IDS_SIZE 3
+
 struct st7789_panel_info {
const struct drm_display_mode *mode;
u32 bus_format;
@@ -157,6 +160,76 @@ static int st7789v_write_data(struct st7789v *ctx, u8 cmd)
return st7789v_spi_write(ctx, ST7789V_DATA, cmd);
 }
 
+static int st7789v_read_data(struct st7789v *ctx, u8 cmd, u8 *buf,
+unsigned int len)
+{
+   struct spi_transfer xfer[2] = { };
+   struct spi_message msg;
+   u16 txbuf = ((ST7789V_COMMAND & 1) << 8) | cmd;
+   u16 rxbuf[4] = {};
+   u8 bit9 = 0;
+   int ret, i;
+
+   switch (len) {
+   case 1:
+   case 3:
+   case 4:
+   break;
+   default:
+   return -EOPNOTSUPP;
+   }
+
+   spi_message_init(&msg);
+
+   xfer[0].tx_buf = &txbuf;
+   xfer[0].len = sizeof(txbuf);
+   spi_message_add_tail(&xfer[0], &msg);
+
+   xfer[1].rx_buf = rxbuf;
+   xfer[1].len = len * 2;
+   spi_message_add_tail(&xfer[1], &msg);
+
+   ret = spi_sync(ctx->spi, &msg);
+   if (ret)
+   return ret;
+
+   for (i = 0; i < len; i++) {
+   buf[i] = rxbuf[i] >> i | (bit9 << (9 - i));
+   if (i)
+   bit9 = rxbuf[i] & GENMASK(i - 1, 0);
+   }
+
+   return 0;
+}
+
+static int st7789v_check_id(struct drm_panel *panel)
+{
+   const u8 st7789v_ids[ST7789V_IDS_SIZE] = ST7789V_IDS;
+   struct st7789v *ctx = panel_to_st7789v(panel);
+   bool invalid_ids = false;
+   int ret, i;
+   u8 ids[3];
+
+   if (ctx->spi->mode & SPI_NO_RX)
+   return 0;
+
+   ret = st7789v_read_data(ctx, MIPI_DCS_GET_DISPLAY_ID, ids, ST7789V_IDS_SIZE);
+   if (ret)
+   return ret;
+
+   for (i = 0; i < ST7789V_IDS_SIZE; i++) {
+   if (ids[i] != st7789v_ids[i]) {
+   invalid_ids = true;
+   break;
+   }
+   }
+
+   if (invalid_ids)
+   return -EIO;
+
+   return 0;
+}
+
 static const struct drm_display_mode default_mode = {
.clock = 7000,
.hdisplay = 240,
@@ -295,6 +368,14 @@ static int st7789v_prepare(struct drm_panel *panel)
gpiod_set_value(ctx->reset, 0);
msleep(120);
 
+   /*
+* Avoid failing if the IDs are invalid in case the Rx bus width
+* description is missing.
+*/
+   ret = st7789v_check_id(panel);
+   if (ret)
+   dev_warn(panel->dev, "Unrecognized panel IDs");
+
ST7789V_TEST(ret, st7789v_write_command(ctx, MIPI_DCS_EXIT_SLEEP_MODE));
 
/* We need to wait 120ms after a sleep out command */
-- 
2.34.1



[PATCH v3 4/6] drm/panel: sitronix-st7789v: Clarify a definition

2023-06-19 Thread Miquel Raynal
The Sitronix datasheet explains BIT(1) of the RGBCTRL register as the
DOTCLK/PCLK edge used to sample the data lines:

“0” The data is input on the positive edge of DOTCLK
“1” The data is input on the negative edge of DOTCLK

IOW, this bit implies a falling edge and not a high state. Correct the
definition to ease the comparison with the datasheet.

Signed-off-by: Miquel Raynal 
Acked-by: Maxime Ripard 
---
 drivers/gpu/drm/panel/panel-sitronix-st7789v.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/panel/panel-sitronix-st7789v.c 
b/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
index 605b9f6d0f14..d7c5b3ad1baa 100644
--- a/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
+++ b/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
@@ -27,7 +27,7 @@
 #define ST7789V_RGBCTRL_RCM(n) (((n) & 3) << 5)
 #define ST7789V_RGBCTRL_VSYNC_HIGH BIT(3)
 #define ST7789V_RGBCTRL_HSYNC_HIGH BIT(2)
-#define ST7789V_RGBCTRL_PCLK_HIGH  BIT(1)
+#define ST7789V_RGBCTRL_PCLK_FALLING   BIT(1)
 #define ST7789V_RGBCTRL_DE_LOW BIT(0)
 #define ST7789V_RGBCTRL_VBP(n) ((n) & 0x7f)
 #define ST7789V_RGBCTRL_HBP(n) ((n) & 0x1f)
@@ -259,7 +259,7 @@ static int st7789v_prepare(struct drm_panel *panel)
if (ctx->info->mode->flags & DRM_MODE_FLAG_PHSYNC)
polarity |= ST7789V_RGBCTRL_HSYNC_HIGH;
if (ctx->info->bus_flags & DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE)
-   polarity |= ST7789V_RGBCTRL_PCLK_HIGH;
+   polarity |= ST7789V_RGBCTRL_PCLK_FALLING;
if (ctx->info->bus_flags & DRM_BUS_FLAG_DE_LOW)
polarity |= ST7789V_RGBCTRL_DE_LOW;
 
-- 
2.34.1



[PATCH v3 5/6] drm/panel: sitronix-st7789v: Add EDT ET028013DMA panel support

2023-06-19 Thread Miquel Raynal
This panel from Emerging Display Technologies Corporation features an
ST7789V2 LCD controller inside, which is almost identical to what the
Sitronix panel driver already supports.

In practice, the module physical size is specific, and experiments show
that the display will malfunction if any of the following situations
occurs:
* Pixel clock is above 3MHz
* Pixel clock is not inverted
I could not properly identify the reasons behind these failures; scope
captures show valid input signals.

Signed-off-by: Miquel Raynal 
Acked-by: Maxime Ripard 
---
 .../gpu/drm/panel/panel-sitronix-st7789v.c| 25 +++
 1 file changed, 25 insertions(+)

diff --git a/drivers/gpu/drm/panel/panel-sitronix-st7789v.c 
b/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
index d7c5b3ad1baa..8649966ceae8 100644
--- a/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
+++ b/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
@@ -187,6 +187,21 @@ static const struct drm_display_mode t28cp45tn89_mode = {
.flags = DRM_MODE_FLAG_PVSYNC | DRM_MODE_FLAG_NVSYNC,
 };
 
+static const struct drm_display_mode et028013dma_mode = {
+   .clock = 3000,
+   .hdisplay = 240,
+   .hsync_start = 240 + 38,
+   .hsync_end = 240 + 38 + 10,
+   .htotal = 240 + 38 + 10 + 10,
+   .vdisplay = 320,
+   .vsync_start = 320 + 8,
+   .vsync_end = 320 + 8 + 4,
+   .vtotal = 320 + 8 + 4 + 4,
+   .width_mm = 43,
+   .height_mm = 58,
+   .flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC,
+};
+
 struct st7789_panel_info default_panel = {
	.mode = &default_mode,
.invert_mode = true,
@@ -203,6 +218,14 @@ struct st7789_panel_info t28cp45tn89_panel = {
 DRM_BUS_FLAG_PIXDATA_SAMPLE_POSEDGE,
 };
 
+struct st7789_panel_info et028013dma_panel = {
+   .mode = &et028013dma_mode,
+   .invert_mode = true,
+   .bus_format = MEDIA_BUS_FMT_RGB666_1X18,
+   .bus_flags = DRM_BUS_FLAG_DE_HIGH |
+DRM_BUS_FLAG_PIXDATA_SAMPLE_POSEDGE,
+};
+
 static int st7789v_get_modes(struct drm_panel *panel,
 struct drm_connector *connector)
 {
@@ -474,6 +497,7 @@ static void st7789v_remove(struct spi_device *spi)
 static const struct spi_device_id st7789v_spi_id[] = {
	{ "st7789v", (unsigned long) &default_panel },
	{ "t28cp45tn89-v17", (unsigned long) &t28cp45tn89_panel },
+	{ "et028013dma", (unsigned long) &et028013dma_panel },
{ }
 };
 MODULE_DEVICE_TABLE(spi, st7789v_spi_id);
@@ -481,6 +505,7 @@ MODULE_DEVICE_TABLE(spi, st7789v_spi_id);
 static const struct of_device_id st7789v_of_match[] = {
	{ .compatible = "sitronix,st7789v", .data = &default_panel },
	{ .compatible = "inanbo,t28cp45tn89-v17", .data = &t28cp45tn89_panel },
+	{ .compatible = "edt,et028013dma", .data = &et028013dma_panel },
{ }
 };
 MODULE_DEVICE_TABLE(of, st7789v_of_match);
-- 
2.34.1
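A quick sanity check on the mode above: the timings imply a refresh rate just under 30 Hz, which lines up with the cover letter's observation that the panel glitches above 30fps. A rough computation, not part of the patch:

```python
# Timings copied from et028013dma_mode; .clock is given in kHz in drm_display_mode.
clock_hz = 3000 * 1000
htotal = 240 + 38 + 10 + 10   # 298
vtotal = 320 + 8 + 4 + 4      # 336

refresh_hz = clock_hz / (htotal * vtotal)
print(round(refresh_hz, 2))  # prints 29.96
```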



[PATCH v3 3/6] drm/panel: sitronix-st7789v: Use 9 bits per spi word by default

2023-06-19 Thread Miquel Raynal
The Sitronix controller expects 9-bit words; provide this as the default
at probe time rather than specifying it in each and every access.

Signed-off-by: Miquel Raynal 
Reviewed-by: Sam Ravnborg 
Acked-by: Maxime Ripard 
---
 drivers/gpu/drm/panel/panel-sitronix-st7789v.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/panel/panel-sitronix-st7789v.c 
b/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
index 172c6c1fc090..605b9f6d0f14 100644
--- a/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
+++ b/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
@@ -142,7 +142,6 @@ static int st7789v_spi_write(struct st7789v *ctx, enum st7789v_prefix prefix,
u16 txbuf = ((prefix & 1) << 8) | data;
 
	xfer.tx_buf = &txbuf;
-   xfer.bits_per_word = 9;
xfer.len = sizeof(txbuf);
 
	return spi_sync_transfer(ctx->spi, &xfer, 1);
@@ -436,6 +435,11 @@ static int st7789v_probe(struct spi_device *spi)
spi_set_drvdata(spi, ctx);
ctx->spi = spi;
 
+   spi->bits_per_word = 9;
+   ret = spi_setup(spi);
+   if (ret < 0)
+   return dev_err_probe(&spi->dev, ret, "Failed to setup spi\n");
+
	ctx->info = device_get_match_data(&spi->dev);
 
	drm_panel_init(&ctx->panel, dev, &st7789v_drm_funcs,
-- 
2.34.1



[PATCH v3 2/6] dt-bindings: display: st7789v: bound the number of Rx data lines

2023-06-19 Thread Miquel Raynal
The ST7789V LCD controller supports regular SPI wiring, as well as no Rx
data line at all. The operating system needs to know whether it can read
registers from the device or not. Let's detail this specific design
possibility by bounding the spi-rx-bus-width property.

Signed-off-by: Miquel Raynal 
Acked-by: Krzysztof Kozlowski 
---
 .../devicetree/bindings/display/panel/sitronix,st7789v.yaml   | 4 
 1 file changed, 4 insertions(+)

diff --git 
a/Documentation/devicetree/bindings/display/panel/sitronix,st7789v.yaml 
b/Documentation/devicetree/bindings/display/panel/sitronix,st7789v.yaml
index 0ccf0487fd8e..a25df7e1df88 100644
--- a/Documentation/devicetree/bindings/display/panel/sitronix,st7789v.yaml
+++ b/Documentation/devicetree/bindings/display/panel/sitronix,st7789v.yaml
@@ -29,6 +29,10 @@ properties:
   spi-cpha: true
   spi-cpol: true
 
+  spi-rx-bus-width:
+minimum: 0
+maximum: 1
+
 required:
   - compatible
   - reg
-- 
2.34.1
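For illustration, a device tree node using the bounded property might look like the fragment below. This is a hypothetical node, trimmed of the supplies and ports the full binding requires:

```dts
panel@0 {
	compatible = "sitronix,st7789v";
	reg = <0>;
	/* No MISO wired up: the driver can then skip register readback */
	spi-rx-bus-width = <0>;
};
```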



[PATCH v3 0/6] drm/panel: sitronix-st7789v: Support ET028013DMA panel

2023-06-19 Thread Miquel Raynal
Hello,

The aim of this series is to add support for the EDT ET028013DMA
panel. This panel features a Sitronix ST7789V2 LCD controller, which is
already supported mainline (or very close to the ST7789V for which
Maxime added support years ago).

The EDT panel is slightly different in geometry and appears not to
support refresh rates higher than 30fps (above that, glitches are
visible, despite the incoming signals being rather clean). While working
on this panel, I found it quite inconvenient not to be able to read
anything back, as register readback is a great tool for debugging. So
the last patch adds a read helper and uses it to perform a sanity check
at probe time by verifying the Sitronix controller IDs.

This series applies on top of Sebastian's series, which also brings a
number of good improvements to this driver. As Sebastian started and
contributed his work before me, I think his series is close to being
merged, so I adapted my changes on top of it.

Link: https://lore.kernel.org/dri-devel/20230422205012.2464933-1-...@kernel.org/

Thanks,
Miquèl

Changes in v3:
* Following the exchanges with Maxime, existing devices will no longer
  fail to probe if the IDs are wrong, because old DT descriptions might
  miss the Rx bus width parameter.
* Collected tags.

Changes in v2:
* Rebased on top of Sebastian's series and adapted all my changes to the
  existing infrastructure he has already added.
* Collected tags.
* Prevented the ID check to fail if there is no MISO line.
* Used dev_err_probe() when relevant.
* Sorted the IDs in the tables.
* Renamed the panel mode.
* Fixed typos.

Miquel Raynal (6):
  dt-bindings: display: st7789v: Add the edt,et028013dma panel
compatible
  dt-bindings: display: st7789v: bound the number of Rx data lines
  drm/panel: sitronix-st7789v: Use 9 bits per spi word by default
  drm/panel: sitronix-st7789v: Clarify a definition
  drm/panel: sitronix-st7789v: Add EDT ET028013DMA panel support
  drm/panel: sitronix-st7789v: Check display ID

 .../display/panel/sitronix,st7789v.yaml   |   5 +
 .../gpu/drm/panel/panel-sitronix-st7789v.c| 116 +-
 2 files changed, 118 insertions(+), 3 deletions(-)

-- 
2.34.1



[PATCH v3 1/6] dt-bindings: display: st7789v: Add the edt, et028013dma panel compatible

2023-06-19 Thread Miquel Raynal
The ST7789V LCD controller is also embedded in the ET028013DMA
panel. Add a compatible string to describe this other panel.

Signed-off-by: Miquel Raynal 
Acked-by: Krzysztof Kozlowski 
Acked-by: Maxime Ripard 
---
 .../devicetree/bindings/display/panel/sitronix,st7789v.yaml  | 1 +
 1 file changed, 1 insertion(+)

diff --git 
a/Documentation/devicetree/bindings/display/panel/sitronix,st7789v.yaml 
b/Documentation/devicetree/bindings/display/panel/sitronix,st7789v.yaml
index 7c5e4313db1d..0ccf0487fd8e 100644
--- a/Documentation/devicetree/bindings/display/panel/sitronix,st7789v.yaml
+++ b/Documentation/devicetree/bindings/display/panel/sitronix,st7789v.yaml
@@ -16,6 +16,7 @@ allOf:
 properties:
   compatible:
 enum:
+  - edt,et028013dma
   - inanbo,t28cp45tn89-v17
   - sitronix,st7789v
 
-- 
2.34.1



Re: [PATCH v2] drm/i915: Replace kmap() with kmap_local_page()

2023-06-19 Thread Sumitra Sharma
On Sun, Jun 18, 2023 at 11:11:08AM -0700, Ira Weiny wrote:
> Sumitra Sharma wrote:
> > kmap() has been deprecated in favor of the kmap_local_page()
> > due to high cost, restricted mapping space, the overhead of a
> > global lock for synchronization, and making the process sleep
> > in the absence of free slots.
> > 
> > kmap_local_page() is faster than kmap() and offers thread-local
> > and CPU-local mappings, take pagefaults in a local kmap region
> > and preserves preemption by saving the mappings of outgoing tasks
> > and restoring those of the incoming one during a context switch.
> > 
> > The mapping is kept thread local in the function
> > “i915_vma_coredump_create” in i915_gpu_error.c
> > 
> > Therefore, replace kmap() with kmap_local_page().
> > 
> > Suggested-by: Ira Weiny 
> > 
> 
> NIT: No need for the line break between Suggested-by and your signed off line.
> 

Hi Ira,

What does NIT stand for? 

Thank you. I will take care about the line breaks.

> > Signed-off-by: Sumitra Sharma 
> > ---
> > 
> > Changes in v2:
> > - Replace kmap() with kmap_local_page().
> 
> Generally it is customary to attribute a change like this to those who
> suggested it in a V1 review.
> 
> For example:
> 
>   - Tvrtko/Thomas: Use kmap_local_page() instead of page_address()
> 
> Also I don't see Thomas on the new email list.  Since he took the time to
> review V1 he might want to check this version out.  I've added him to the
> 'To:' list.
> 
> Also a link to V1 is nice.  B4 formats it like this:
> 
> - Link to v1: https://lore.kernel.org/all/20230614123556.ga381...@sumitra.com/
> 
> All that said the code looks good to me.  So with the above changes.
> 
> Reviewed-by: Ira Weiny 
> 

I have noted down the points mentioned above. Thank you again.

Since you and Thomas both gave this patch a Reviewed-by tag, I am not
supposed to create another version just to add the above mentions.
Right?


Thanks & regards
Sumitra

PS: I am new to the open source vocabulary terms.

> > - Change commit subject and message.
> > 
> >  drivers/gpu/drm/i915/i915_gpu_error.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c 
> > b/drivers/gpu/drm/i915/i915_gpu_error.c
> > index f020c0086fbc..bc41500eedf5 100644
> > --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> > +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> > @@ -1164,9 +1164,9 @@ i915_vma_coredump_create(const struct intel_gt *gt,
> >  
> > 	drm_clflush_pages(&page, 1);
> >  
> > -   s = kmap(page);
> > +   s = kmap_local_page(page);
> > ret = compress_page(compress, s, dst, false);
> > -   kunmap(page);
> > +   kunmap_local(s);
> >  
> > 	drm_clflush_pages(&page, 1);
> >  
> > -- 
> > 2.25.1
> > 
> 
> 


[PATCH v2 6/6] drm/ttm: Don't shadow the operation context

2023-06-19 Thread Thomas Hellström
ttm_bo_swapout() shadows the ttm operation context which may cause
major confusion in driver callbacks when swapping out !TTM_PL_SYSTEM
memory. Fix this by reusing the operation context argument to
ttm_bo_swapout().

Cc: "Christian König" 
Cc: 
Cc: 
Signed-off-by: Thomas Hellström 
Acked-by: Matthew Brost 
---
 drivers/gpu/drm/ttm/ttm_bo.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index bd5dae4d1624..615d30c4262d 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -1154,7 +1154,6 @@ int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx,
 * Move to system cached
 */
if (bo->resource->mem_type != TTM_PL_SYSTEM) {
-   struct ttm_operation_ctx ctx = { false, false };
struct ttm_resource *evict_mem;
struct ttm_place hop;
 
@@ -1164,7 +1163,7 @@ int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx,
if (unlikely(ret))
goto out;
 
-   ret = ttm_bo_handle_move_mem(bo, evict_mem, true, &ctx, &hop);
+   ret = ttm_bo_handle_move_mem(bo, evict_mem, true, ctx, &hop);
if (unlikely(ret != 0)) {
		WARN(ret == -EMULTIHOP, "Unexpected multihop in swaput - likely driver bug.\n");
goto out;
-- 
2.40.1



Re: [PATCH v2, 3/3] drm/mediatek: dsi: Add dsi cmdq_ctl to send panel initial code

2023-06-19 Thread Matthias Brugger




On 16/06/2023 09:36, Shuijing Li wrote:

For mt8188, add DSI cmdq register control to send long packets for panel
initialization.

Signed-off-by: Shuijing Li 
Signed-off-by: Jitao Shi 


Reviewed-by: Matthias Brugger 


---
Changes in v2:
use mtk_dsi_mask(dsi, DSI_CMDQ_SIZE, CMDQ_SIZE_SEL, CMDQ_SIZE_SEL); directly,
per suggestion from the previous thread:
https://lore.kernel.org/lkml/015f4c60-ed77-9e1f-8a6b-cda6e4f6a...@gmail.com/
---
  drivers/gpu/drm/mediatek/mtk_dsi.c | 7 +++
  1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c 
b/drivers/gpu/drm/mediatek/mtk_dsi.c
index 500a3054282d..8b43d9f48178 100644
--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
+++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
@@ -86,6 +86,7 @@
  
  #define DSI_CMDQ_SIZE		0x60

  #define CMDQ_SIZE 0x3f
+#define CMDQ_SIZE_SEL  BIT(15)
  
  #define DSI_HSTX_CKL_WC		0x64
  
@@ -178,6 +179,7 @@ struct mtk_dsi_driver_data {

const u32 reg_cmdq_off;
bool has_shadow_ctl;
bool has_size_ctl;
+   bool cmdq_long_packet_ctl;
  };
  
  struct mtk_dsi {

@@ -996,6 +998,8 @@ static void mtk_dsi_cmdq(struct mtk_dsi *dsi, const struct mipi_dsi_msg *msg)
  
  	mtk_dsi_mask(dsi, reg_cmdq_off, cmdq_mask, reg_val);

mtk_dsi_mask(dsi, DSI_CMDQ_SIZE, CMDQ_SIZE, cmdq_size);
+   if (dsi->driver_data->cmdq_long_packet_ctl)
+   mtk_dsi_mask(dsi, DSI_CMDQ_SIZE, CMDQ_SIZE_SEL, CMDQ_SIZE_SEL);
  }
  
  static ssize_t mtk_dsi_host_send_cmd(struct mtk_dsi *dsi,

@@ -1200,18 +1204,21 @@ static const struct mtk_dsi_driver_data mt8183_dsi_driver_data = {
.reg_cmdq_off = 0x200,
.has_shadow_ctl = true,
.has_size_ctl = true,
+   .cmdq_long_packet_ctl = false,
  };
  
  static const struct mtk_dsi_driver_data mt8186_dsi_driver_data = {

.reg_cmdq_off = 0xd00,
.has_shadow_ctl = true,
.has_size_ctl = true,
+   .cmdq_long_packet_ctl = false,
  };
  
  static const struct mtk_dsi_driver_data mt8188_dsi_driver_data = {

.reg_cmdq_off = 0xd00,
.has_shadow_ctl = true,
.has_size_ctl = true,
+   .cmdq_long_packet_ctl = true,
  };
  
  static const struct of_device_id mtk_dsi_of_match[] = {


Re: [PATCH V3 4/7] drm/amd/pm: setup the framework to support Wifi RFI mitigation feature

2023-06-19 Thread Lazar, Lijo




On 6/16/2023 12:27 PM, Evan Quan wrote:

With the WBRF feature supported, as a driver responding to the frequencies,
the amdgpu driver is able to do shadow pstate switching to mitigate possible
interference (between its (G-)DDR memory clocks and the local radio module
frequency bands used by Wifi 6/6e/7).

To make the WBRF feature functional, the kernel needs to be configured with
CONFIG_ACPI_WBRF and the platform must be equipped with the necessary ACPI
based mechanism to get the amdgpu driver notified.

Signed-off-by: Evan Quan 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  26 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c  |  63 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  19 ++
  drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c | 184 ++
  drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h |  20 ++
  drivers/gpu/drm/amd/pm/swsmu/smu_internal.h   |   3 +
  6 files changed, 315 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 02b827785e39..2f2ec64ed1b2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -50,6 +50,7 @@
  #include 
  #include 
  #include 
+#include 
  
  #include 

  #include 
@@ -241,6 +242,7 @@ extern int amdgpu_num_kcq;
  #define AMDGPU_VCNFW_LOG_SIZE (32 * 1024)
  extern int amdgpu_vcnfw_log;
  extern int amdgpu_sg_display;
+extern int amdgpu_wbrf;
  
  #define AMDGPU_VM_MAX_NUM_CTX			4096

  #define AMDGPU_SG_THRESHOLD   (256*1024*1024)
@@ -741,6 +743,9 @@ struct amdgpu_reset_domain;
   */
  #define AMDGPU_HAS_VRAM(_adev) ((_adev)->gmc.real_vram_size)
  
+typedef

+void (*wbrf_notify_handler) (struct amdgpu_device *adev);
+
  struct amdgpu_device {
struct device   *dev;
struct pci_dev  *pdev;
@@ -1050,6 +1055,8 @@ struct amdgpu_device {
  
  	booljob_hang;

booldc_enabled;
+
+   wbrf_notify_handler wbrf_event_handler;
  };
  
  static inline struct amdgpu_device *drm_to_adev(struct drm_device *ddev)

@@ -1381,6 +1388,25 @@ static inline int amdgpu_acpi_smart_shift_update(struct drm_device *dev,
 							 enum amdgpu_ss ss_state) { return 0; }
  #endif
  
+#if defined(CONFIG_ACPI_WBRF)

+bool amdgpu_acpi_is_wbrf_supported(struct amdgpu_device *adev);
+int amdgpu_acpi_wbrf_retrieve_exclusions(struct amdgpu_device *adev,
+					 struct wbrf_ranges_out *exclusions_out);
+int amdgpu_acpi_register_wbrf_notify_handler(struct amdgpu_device *adev,
+wbrf_notify_handler handler);
+int amdgpu_acpi_unregister_wbrf_notify_handler(struct amdgpu_device *adev);
+#else
+static inline bool amdgpu_acpi_is_wbrf_supported(struct amdgpu_device *adev) { return false; }
+static inline
+int amdgpu_acpi_wbrf_retrieve_exclusions(struct amdgpu_device *adev,
+					 struct wbrf_ranges_out *exclusions_out) { return 0; }
+static inline
+int amdgpu_acpi_register_wbrf_notify_handler(struct amdgpu_device *adev,
+					 wbrf_notify_handler handler) { return 0; }
+static inline
+int amdgpu_acpi_unregister_wbrf_notify_handler(struct amdgpu_device *adev) { return 0; }
+#endif
+
  #if defined(CONFIG_ACPI) && defined(CONFIG_SUSPEND)
  bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev);
  bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
index aeeec211861c..efbe6dd91d1a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
@@ -1105,3 +1105,66 @@ bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev)
  }
  
  #endif /* CONFIG_SUSPEND */

+
+#ifdef CONFIG_ACPI_WBRF
+bool amdgpu_acpi_is_wbrf_supported(struct amdgpu_device *adev)
+{
+   struct acpi_device *acpi_dev = ACPI_COMPANION(adev->dev);
+
+   if (!acpi_dev)
+   return false;
+
+   return wbrf_supported_consumer(acpi_dev);
+}
+
+int amdgpu_acpi_wbrf_retrieve_exclusions(struct amdgpu_device *adev,
+struct wbrf_ranges_out *exclusions_out)
+{
+   struct acpi_device *acpi_dev = ACPI_COMPANION(adev->dev);
+
+   if (!acpi_dev)
+   return -ENODEV;
+
+   return wbrf_retrieve_exclusions(acpi_dev, exclusions_out);
+}
+
+#define CPM_GPU_NOTIFY_COMMAND 0x55
+static void amdgpu_acpi_wbrf_event(acpi_handle handle, u32 event, void *data)
+{
+   struct amdgpu_device *adev = (struct amdgpu_device *)data;
+
+   if (event == CPM_GPU_NOTIFY_COMMAND &&
+   adev->wbrf_event_handler)
+   adev->wbrf_event_handler(adev);
+}
+
+int amdgpu_acpi_register_wbrf_notify_handler(struct amdgpu_device *adev,
+wbrf_notify_handler handler)

Re: patches dropped from drm-misc-next [Was: Re: [PATCH 00/53] drm: Convert to platform remove callback returning] void

2023-06-19 Thread Geert Uytterhoeven
Hi Jani,

On Mon, Jun 19, 2023 at 4:30 PM Jani Nikula  wrote:
> [Trimmed the recipients considerably; there's really no need to keep
> spamming so many people about this.]

CC sfr

> On Mon, 19 Jun 2023, Uwe Kleine-König  wrote:
> > Not knowing dim I think there is a simple(?) technical solution here: It
> > only has to make sure that after the pull request from drm-misc to drm
> > was sent, no new patches are added to the branch that is merged in next.
>
> The drm-misc-next and drm-intel-next branches are *always* open to
> patches, regardless of the merge window. That's not going to change. We
> never tell people "this is not the right time for your patches" due to
> the merge window, like some subsystems do.

Good (personally, I don't like it when a subsystem is not open to patches,
as it means that when I finally have time to work on patches myself, I
cannot submit them ;-)

> We have separate branches specifically for feeding to linux-next and
> they serve no other purpose. The tooling tries to push the right thing
> there, depending on the last pull request cutoff, so that linux-next
> reflects what it's supposed to, but obviously the tooling doesn't have
> the smarts to figure out when the last pull request is going to be
> sent. (Really, humans don't always get that right either, because
> predicting the future is kind of hard.)

OK. So all of this was a genuine mistake...

> Looks like you hit an issue, and although nobody else has complained
> about this one over the 9 years we've been using dim, it royally
> confused you. Sorry about that. There's always room for improvement in
> the tooling, in the process, and in the human communication.

Thanks for the explanation!

Gr{oetje,eeting}s,

Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds


Re: [PATCH] drm/amdgpu: Add missing MODULE_FIRMWARE macro

2023-06-19 Thread Juerg Haefliger
On Fri, 16 Jun 2023 08:53:20 -0400
Alex Deucher  wrote:

> On Fri, Jun 16, 2023 at 8:11 AM Juerg Haefliger
>  wrote:
> >
> > Add the missing MODULE_FIRMWARE macro for "amdgpu/fiji_smc.bin".
> >
> > Signed-off-by: Juerg Haefliger 
> > ---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > index 5c7d40873ee2..1f83a939d641 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > @@ -92,6 +92,7 @@ MODULE_FIRMWARE("amdgpu/picasso_gpu_info.bin");
> >  MODULE_FIRMWARE("amdgpu/raven2_gpu_info.bin");
> >  MODULE_FIRMWARE("amdgpu/arcturus_gpu_info.bin");
> >  MODULE_FIRMWARE("amdgpu/navi12_gpu_info.bin");
> > +MODULE_FIRMWARE("amdgpu/fiji_smc.bin");  
> 
> This is already specified in smumgr.c.

It sure is. Sorry for the noise :-(

Thanks for looking at it.
..Juerg

 
> Alex
> 
> >
> >  #define AMDGPU_RESUME_MS   2000
> >  #define AMDGPU_MAX_RETRY_LIMIT 2
> > --
> > 2.37.2
> >  



pgpKLEUin_HCy.pgp
Description: OpenPGP digital signature


Re: patches dropped from drm-misc-next [Was: Re: [PATCH 00/53] drm: Convert to platform remove callback returning void]

2023-06-19 Thread Jani Nikula


[Trimmed the recipients considerably; there's really no need to keep
spamming so many people about this.]

On Mon, 19 Jun 2023, Uwe Kleine-König  wrote:
> Not knowing dim I think there is a simple(?) technical solution here: It
> only has to make sure that after the pull request from drm-misc to drm
> was sent, no new patches are added to the branch that is merged in next.

The drm-misc-next and drm-intel-next branches are *always* open to
patches, regardless of the merge window. That's not going to change. We
never tell people "this is not the right time for your patches" due to
the merge window, like some subsystems do.

We have separate branches specifically for feeding to linux-next and
they serve no other purpose. The tooling tries to push the right thing
there, depending on the last pull request cutoff, so that linux-next
reflects what it's supposed to, but obviously the tooling doesn't have
the smarts to figure out when the last pull request is going to be
sent. (Really, humans don't always get that right either, because
predicting the future is kind of hard.)

Looks like you hit an issue, and although nobody else has complained
about this one over the 9 years we've been using dim, it royally
confused you. Sorry about that. There's always room for improvement in
the tooling, in the process, and in the human communication.

BR,
Jani.



-- 
Jani Nikula, Intel Open Source Graphics Center


Re: [PATCH] drm/amdgpu: Remove struct drm_driver.gem_prime_mmap

2023-06-19 Thread Thomas Zimmermann

Hi Christian

On 19.06.23 at 16:13, Christian König wrote:



On 19.06.23 at 16:11, Thomas Zimmermann wrote:

The callback struct drm_driver.gem_prime_mmap has been removed in
commit 0adec22702d4 ("drm: Remove struct drm_driver.gem_prime_mmap").
Do not assign to it. The assigned function, drm_gem_prime_mmap(), is
now the default for the operation, so there is no change in 
functionality.


Signed-off-by: Thomas Zimmermann 
Fixes: 0adec22702d4 ("drm: Remove struct drm_driver.gem_prime_mmap")
Cc: Thomas Zimmermann 
Cc: Alex Deucher 
Cc: "Christian König" 
Cc: "Pan, Xinhui" 
Cc: amd-...@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org


Reviewed-by: Christian König 


Thanks for the quick response. I'll add the patch to drm-misc-next 
immediately, to make the tree's amdgpu build again.


Best regards
Thomas




---
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 1 -
  1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c

index 43613569801b6..07e16ad465d06 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -2879,7 +2879,6 @@ const struct drm_driver amdgpu_partition_driver = {
  .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
  .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
  .gem_prime_import = amdgpu_gem_prime_import,
-    .gem_prime_mmap = drm_gem_prime_mmap,
  .name = DRIVER_NAME,
  .desc = DRIVER_DESC,




--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)


OpenPGP_signature
Description: OpenPGP digital signature


Re: [PATCH 4/9] drm/verisilicon: Add gem driver for JH7110 SoC

2023-06-19 Thread Thomas Zimmermann

Hi

On 02.06.23 at 09:40, Keith Zhao wrote:

This patch implements gem related APIs for JH7100 SoC.


please also see my other reply to this patch. My mail client had a bug 
before I could finish it. Below are some more comments.




Signed-off-by: Keith Zhao 
---

[...]

+#ifndef __VS_GEM_H__
+#define __VS_GEM_H__
+
+#include 
+
+#include 
+#include 
+
+#include "vs_drv.h"
+/*
+ *
+ * @base: drm gem object.
+ * @size: size requested from user
+ * @cookie: cookie returned by dma_alloc_attrs
+ * - not kernel virtual address with DMA_ATTR_NO_KERNEL_MAPPING
+ * @dma_addr: bus address(accessed by dma) to allocated memory region.
+ * - this address could be physical address without IOMMU and
+ * device address with IOMMU.
+ * @dma_attrs: attribute for DMA API
+ * @get_pages: flag for manually applying for non-contiguous memory.
+ * @pages: Array of backing pages.
+ * @sgt: Imported sg_table.
+ *
+ */
+struct vs_gem_object {
+   struct drm_gem_object   base;
+   size_t  size;
+   void *cookie;
+   dma_addr_t  dma_addr;
+   u32 iova;
+   unsigned long   dma_attrs;
+   bool get_pages;
+   struct page **pages;
+   struct sg_table *sgt;
+};
+
+static inline
+struct vs_gem_object *to_vs_gem_object(struct drm_gem_object *obj)
+{
+   return container_of(obj, struct vs_gem_object, base);
+}
+
+struct vs_gem_object *vs_gem_create_object(struct drm_device *dev,
+  size_t size);
+
+int vs_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map);
+void vs_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map);


I'd consider this bad style. Your functions are in the vs_ namespace, so 
they should take a vs_gem_object as first argument. Rather implement 
vs_gem_prime_vmap(struct vs_gem_object *vs_obj, struct iosys_map *map)

and _vunmap() and _mmap().

For the callbacks in struct drm_gem_object_funcs, you can write small 
wrappers around the helpers to do the type casting. See 
drm_gem_shmem_object_mmap() and drm_gem_shmem_mmap() for an example.


https://elixir.bootlin.com/linux/latest/source/include/drm/drm_gem_shmem_helper.h#L233




+
+int vs_gem_prime_mmap(struct drm_gem_object *obj,
+ struct vm_area_struct *vma);
+
+int vs_gem_dumb_create(struct drm_file *file_priv,
+  struct drm_device *drm,
+  struct drm_mode_create_dumb *args);
+
+int vs_gem_mmap(struct file *filp, struct vm_area_struct *vma);
+
+struct sg_table *vs_gem_prime_get_sg_table(struct drm_gem_object *obj);
+
+struct drm_gem_object *vs_gem_prime_import(struct drm_device *dev,
+  struct dma_buf *dma_buf);
+struct drm_gem_object *
+vs_gem_prime_import_sg_table(struct drm_device *dev,
+struct dma_buf_attachment *attach,
+struct sg_table *sgt);
+
+#endif /* __VS_GEM_H__ */


--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)


OpenPGP_signature
Description: OpenPGP digital signature


Re: [PATCH] drm/amdgpu: Remove struct drm_driver.gem_prime_mmap

2023-06-19 Thread Christian König




On 19.06.23 at 16:11, Thomas Zimmermann wrote:

The callback struct drm_driver.gem_prime_mmap has been removed in
commit 0adec22702d4 ("drm: Remove struct drm_driver.gem_prime_mmap").
Do not assign to it. The assigned function, drm_gem_prime_mmap(), is
now the default for the operation, so there is no change in functionality.

Signed-off-by: Thomas Zimmermann 
Fixes: 0adec22702d4 ("drm: Remove struct drm_driver.gem_prime_mmap")
Cc: Thomas Zimmermann 
Cc: Alex Deucher 
Cc: "Christian König" 
Cc: "Pan, Xinhui" 
Cc: amd-...@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org


Reviewed-by: Christian König 


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 1 -
  1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 43613569801b6..07e16ad465d06 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -2879,7 +2879,6 @@ const struct drm_driver amdgpu_partition_driver = {
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
.gem_prime_import = amdgpu_gem_prime_import,
-   .gem_prime_mmap = drm_gem_prime_mmap,
  
  	.name = DRIVER_NAME,

.desc = DRIVER_DESC,




[PATCH] drm/amdgpu: Remove struct drm_driver.gem_prime_mmap

2023-06-19 Thread Thomas Zimmermann
The callback struct drm_driver.gem_prime_mmap has been removed in
commit 0adec22702d4 ("drm: Remove struct drm_driver.gem_prime_mmap").
Do not assign to it. The assigned function, drm_gem_prime_mmap(), is
now the default for the operation, so there is no change in functionality.

Signed-off-by: Thomas Zimmermann 
Fixes: 0adec22702d4 ("drm: Remove struct drm_driver.gem_prime_mmap")
Cc: Thomas Zimmermann 
Cc: Alex Deucher 
Cc: "Christian König" 
Cc: "Pan, Xinhui" 
Cc: amd-...@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 43613569801b6..07e16ad465d06 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -2879,7 +2879,6 @@ const struct drm_driver amdgpu_partition_driver = {
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
.gem_prime_import = amdgpu_gem_prime_import,
-   .gem_prime_mmap = drm_gem_prime_mmap,
 
.name = DRIVER_NAME,
.desc = DRIVER_DESC,
-- 
2.41.0



Requests For Proposals for hosting XDC 2024 are now open

2023-06-19 Thread Ricardo Garcia
Hello everyone!

The X.org board is soliciting proposals to host XDC in 2024. Since XDC
2023 is being held in Europe this year, we've decided to host in North
America. However, the board is open to other locations, especially if
there's an interesting co-location with another conference.

If you're considering hosting XDC, we've assembled a wiki page with
what's generally expected and needed:

https://www.x.org/wiki/Events/RFP/

When submitting your proposal, please make sure to include at least the
key information about the potential location in question, possible
dates along with estimated costs. Proposals can be submitted to board
at foundation.x.org until the deadline of *September 17th, 2023*. 

Additionally, a quick early heads-up to the board if you're
considering hosting would be appreciated, in case we need to adjust the
schedule a bit. Also, earlier is better since there generally will be a
bit of Q&A with organizers.

And if you just have some questions about what organizing XDC entails,
please feel free to chat with previous organizers, or someone from the
board.

Thanks,
Ricardo Garcia, on behalf of X.Org


Re: [PATCH 4/9] drm/verisilicon: Add gem driver for JH7110 SoC

2023-06-19 Thread Thomas Zimmermann



On 02.06.23 at 09:40, Keith Zhao wrote:

This patch implements gem related APIs for JH7100 SoC.

Signed-off-by: Keith Zhao 
---
  drivers/gpu/drm/verisilicon/Makefile |   3 +-
  drivers/gpu/drm/verisilicon/vs_drv.c |   6 +
  drivers/gpu/drm/verisilicon/vs_gem.c | 372 +++
  drivers/gpu/drm/verisilicon/vs_gem.h |  72 ++
  4 files changed, 452 insertions(+), 1 deletion(-)
  create mode 100644 drivers/gpu/drm/verisilicon/vs_gem.c
  create mode 100644 drivers/gpu/drm/verisilicon/vs_gem.h

diff --git a/drivers/gpu/drm/verisilicon/Makefile 
b/drivers/gpu/drm/verisilicon/Makefile
index 64ce1b26546c..30360e370e47 100644
--- a/drivers/gpu/drm/verisilicon/Makefile
+++ b/drivers/gpu/drm/verisilicon/Makefile
@@ -1,6 +1,7 @@
  # SPDX-License-Identifier: GPL-2.0
  
-vs_drm-objs := vs_drv.o

+vs_drm-objs := vs_drv.o \
+   vs_gem.o
  
  obj-$(CONFIG_DRM_VERISILICON) += vs_drm.o
  
diff --git a/drivers/gpu/drm/verisilicon/vs_drv.c b/drivers/gpu/drm/verisilicon/vs_drv.c

index 24d333598477..e0a2fc43b55f 100644
--- a/drivers/gpu/drm/verisilicon/vs_drv.c
+++ b/drivers/gpu/drm/verisilicon/vs_drv.c
@@ -30,6 +30,7 @@
  #include 
  
  #include "vs_drv.h"

+#include "vs_gem.h"
  
  #define DRV_NAME	"starfive"

  #define DRV_DESC  "Starfive DRM driver"
@@ -47,6 +48,7 @@ static const struct file_operations fops = {
.compat_ioctl   = drm_compat_ioctl,
.poll   = drm_poll,
.read   = drm_read,
+   .mmap   = vs_gem_mmap,
  };
  
  static struct drm_driver vs_drm_driver = {

@@ -54,6 +56,10 @@ static struct drm_driver vs_drm_driver = {
.lastclose  = drm_fb_helper_lastclose,
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+   .gem_prime_import   = vs_gem_prime_import,
+   .gem_prime_import_sg_table = vs_gem_prime_import_sg_table,
+   .gem_prime_mmap = vs_gem_prime_mmap,
+   .dumb_create= vs_gem_dumb_create,
.fops   = &fops,
.name   = DRV_NAME,
.desc   = DRV_DESC,
diff --git a/drivers/gpu/drm/verisilicon/vs_gem.c 
b/drivers/gpu/drm/verisilicon/vs_gem.c
new file mode 100644
index ..3f963471c1ab
--- /dev/null
+++ b/drivers/gpu/drm/verisilicon/vs_gem.c
@@ -0,0 +1,372 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2023 VeriSilicon Holdings Co., Ltd.
+ */
+
+#include 
+#include 
+#include 
+
+#include "vs_drv.h"
+#include "vs_gem.h"
+
+static const struct drm_gem_object_funcs vs_gem_default_funcs;
+
+static int vs_gem_alloc_buf(struct vs_gem_object *vs_obj)
+{
+   struct drm_device *dev = vs_obj->base.dev;
+   unsigned int nr_pages;
+   struct sg_table sgt;
+   int ret = -ENOMEM;
+
+   if (vs_obj->dma_addr) {
+   DRM_DEV_DEBUG_KMS(dev->dev, "already allocated.\n");
+   return 0;
+   }
+
+   vs_obj->dma_attrs = DMA_ATTR_WRITE_COMBINE | DMA_ATTR_FORCE_CONTIGUOUS
+  | DMA_ATTR_NO_KERNEL_MAPPING;
+
+   nr_pages = vs_obj->size >> PAGE_SHIFT;
+
+   vs_obj->pages = kvmalloc_array(nr_pages, sizeof(struct page *),
+  GFP_KERNEL | __GFP_ZERO);
+   if (!vs_obj->pages) {
+   DRM_DEV_ERROR(dev->dev, "failed to allocate pages.\n");
+   return -ENOMEM;
+   }
+
+   vs_obj->cookie = dma_alloc_attrs(to_dma_dev(dev), vs_obj->size,
+&vs_obj->dma_addr, GFP_KERNEL,
+vs_obj->dma_attrs);
+
+   if (!vs_obj->cookie) {
+   DRM_DEV_ERROR(dev->dev, "failed to allocate buffer.\n");
+   goto err_free;
+   }
+
+   vs_obj->iova = vs_obj->dma_addr;
+
+   ret = dma_get_sgtable_attrs(to_dma_dev(dev), &sgt,
+   vs_obj->cookie, vs_obj->dma_addr,
+   vs_obj->size, vs_obj->dma_attrs);
+   if (ret < 0) {
+   DRM_DEV_ERROR(dev->dev, "failed to get sgtable.\n");
+   goto err_mem_free;
+   }
+
+   if (drm_prime_sg_to_page_array(&sgt, vs_obj->pages, nr_pages)) {
+   DRM_DEV_ERROR(dev->dev, "invalid sgtable.\n");
+   ret = -EINVAL;
+   goto err_sgt_free;
+   }
+
+   sg_free_table(&sgt);
+
+   return 0;
+
+err_sgt_free:
+   sg_free_table(&sgt);
+err_mem_free:
+   dma_free_attrs(to_dma_dev(dev), vs_obj->size, vs_obj->cookie,
+  vs_obj->dma_addr, vs_obj->dma_attrs);
+err_free:
+   kvfree(vs_obj->pages);
+
+   return ret;
+}
+
+static void vs_gem_free_buf(struct vs_gem_object *vs_obj)
+{
+   struct drm_device *dev = vs_obj->base.dev;
+
+   if (!vs_obj->dma_addr) {
+   DRM_DEV_DEBUG_KMS(dev->dev, "dma_addr is invalid.\n");
+   return;
+   }
+
+   

Re: [PATCH 5/8] drm/etnaviv: avoid runtime PM usage in etnaviv_gpu_bind

2023-06-19 Thread Christian Gmeiner
Hi Lucas

>
> Nothing in this callpath actually touches the GPU, so there is no reason
> to get it out of suspend state here. Only if runtime PM isn't enabled at
> all we must make sure to enable the clocks, so the GPU init routine can
> access the GPU later on.
>
> This also removes the need to guard against the state where the driver
> isn't fully initialized yet in the runtime PM resume handler.
>
> Signed-off-by: Lucas Stach 

Reviewed-by: Christian Gmeiner 

> ---
>  drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 15 +--
>  1 file changed, 5 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c 
> b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> index 57cf77ed2fcf..fb07d0e73802 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> @@ -1735,13 +1735,11 @@ static int etnaviv_gpu_bind(struct device *dev, 
> struct device *master,
> if (ret)
> goto out_workqueue;
>
> -   if (IS_ENABLED(CONFIG_PM))
> -   ret = pm_runtime_get_sync(gpu->dev);
> -   else
> +   if (!IS_ENABLED(CONFIG_PM)) {
> ret = etnaviv_gpu_clk_enable(gpu);
> -   if (ret < 0)
> -   goto out_sched;
> -
> +   if (ret < 0)
> +   goto out_sched;
> +   }
>
> gpu->drm = drm;
> gpu->fence_context = dma_fence_context_alloc(1);
> @@ -1753,9 +1751,6 @@ static int etnaviv_gpu_bind(struct device *dev, struct 
> device *master,
>
> priv->gpu[priv->num_gpus++] = gpu;
>
> -   pm_runtime_mark_last_busy(gpu->dev);
> -   pm_runtime_put_autosuspend(gpu->dev);
> -
> return 0;
>
>  out_sched:
> @@ -1936,7 +1931,7 @@ static int etnaviv_gpu_rpm_resume(struct device *dev)
> return ret;
>
> /* Re-initialise the basic hardware state */
> -   if (gpu->drm && gpu->initialized) {
> +   if (gpu->initialized) {
> ret = etnaviv_gpu_hw_resume(gpu);
> if (ret) {
> etnaviv_gpu_clk_disable(gpu);
> --
> 2.39.2
>


-- 
greets
--
Christian Gmeiner, MSc

https://christian-gmeiner.info/privacypolicy

