On 17.04.2023 17:30, Konrad Dybcio wrote:
> Apart from the already handled data bus (MAS_MDP_Pn<->DDR), there's
> another path that needs to be handled to ensure MDSS functions properly,
> namely the "reg bus", a.k.a. the CPU-MDSS interconnect.
> 
> Gating that path can have a variety of effects, ranging from none at
> all to otherwise inexplicable DSI timeouts.
> 
> On the DPU side, we need to keep the bus alive. The vendor driver
> kickstarts it to max (300Mbps) throughput on first commit, but at the
> cost of a little battery life in the rare DPU-enabled-panel-disabled
> usecases, we can instead request it at DPU init and gate it at suspend.
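(Not part of the diff below, but for context: a rough sketch of how the
path could be picked up at init time, next to the existing MAS_MDP data
paths. The "cpu-cfg" interconnect name and the helper name are just
placeholders here, not necessarily what the bindings or the final patch
use.)

/* Sketch only: acquire the optional CPU-MDSS reg bus path at init. */
static int dpu_kms_parse_reg_bus_icc_path(struct dpu_kms *dpu_kms)
{
	struct device *dev = &dpu_kms->pdev->dev;
	struct icc_path *path;

	/* "cpu-cfg" is a hypothetical interconnect-names entry */
	path = of_icc_get(dev, "cpu-cfg");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* of_icc_get() returns NULL when the path isn't described in DT,
	 * so reg_bus_path simply stays unset on such platforms. */
	dpu_kms->reg_bus_path = path;

	return 0;
}

A matching icc_put() would then go into the destroy path.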
> 
> Signed-off-by: Konrad Dybcio <konrad.dyb...@linaro.org>
> ---
[...]
> @@ -1261,6 +1270,15 @@ static int __maybe_unused dpu_runtime_resume(struct device *dev)
>               return rc;
>       }
>  
> +     /*
> +      * The vendor driver supports setting 76.8 / 150 / 300 Mbps on this
This should obviously have been M>B<ps (megabytes per second), to match MBps_to_icc()..

Konrad
> +      * path, but it seems to go for the highest level when display output
> +      * is enabled and zero otherwise. For simplicity, we can assume that
> +      * DPU being enabled and running implies that.
> +      */
> +     if (dpu_kms->reg_bus_path)
> +             icc_set_bw(dpu_kms->reg_bus_path, 0, MBps_to_icc(300));
> +
>       dpu_vbif_init_memtypes(dpu_kms);
>  
>       drm_for_each_encoder(encoder, ddev)
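(Also just for reference, not part of this hunk: the suspend side would
simply drop the vote; something along these lines in
dpu_runtime_suspend(), with the rest of the function body elided.)

	/* Drop the reg bus vote so the CPU-MDSS path can be gated; the
	 * interconnect framework aggregates requests, so this only takes
	 * effect once no other consumer still needs the path. */
	if (dpu_kms->reg_bus_path)
		icc_set_bw(dpu_kms->reg_bus_path, 0, 0);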
> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
> index d5d9bec90705..c332381d58c4 100644
> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
> @@ -111,6 +111,7 @@ struct dpu_kms {
>       atomic_t bandwidth_ref;
>       struct icc_path *mdp_path[2];
>       u32 num_mdp_paths;
> +     struct icc_path *reg_bus_path;
>  };
>  
>  struct vsync_info {
> 
