Re: [Freedreno] [PATCH v2 4/5] drm/msm/mdss: Handle the reg bus ICC path

2023-04-24 Thread Georgi Djakov

Hi Konrad,

On 18.04.23 15:10, Konrad Dybcio wrote:

Apart from the already handled data bus (MAS_MDP_Pn<->DDR), there's
another path that needs to be handled to ensure MDSS functions properly,
namely the "reg bus", a.k.a the CPU-MDSS interconnect.

Gating that path may have a variety of effects, ranging from none to
otherwise inexplicable DSI timeouts.

On the MDSS side, we only have to ensure that it's on at what Qualcomm
downstream calls "77 MHz", a.k.a. 76.8 Mbps, and turn it off at suspend.

To achieve that, make msm_mdss_icc_request_bw() accept a boolean to
indicate whether we want the buses to be on or off, as this function's
only use is to vote for minimum or no bandwidth at all.

Signed-off-by: Konrad Dybcio 
---
  drivers/gpu/drm/msm/msm_mdss.c | 17 +
  1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_mdss.c b/drivers/gpu/drm/msm/msm_mdss.c

[..]

-static void msm_mdss_icc_request_bw(struct msm_mdss *msm_mdss, unsigned long bw)
+static void msm_mdss_icc_request_bw(struct msm_mdss *msm_mdss, bool enable)
  {
int i;
  
  	for (i = 0; i < msm_mdss->num_mdp_paths; i++)

-		icc_set_bw(msm_mdss->mdp_path[i], 0, Bps_to_icc(bw));
+		icc_set_bw(msm_mdss->mdp_path[i], 0, enable ? Bps_to_icc(MIN_IB_BW) : 0);
+
+	if (msm_mdss->reg_bus_path)
+		icc_set_bw(msm_mdss->reg_bus_path, 0, enable ? 76800 : 0);


Please use Bps_to_icc, kbps_to_icc or any of the other macros.
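
For example, something like this (a sketch — it assumes the 76800 figure
is already in icc units, i.e. kBps; worth double-checking it against the
"76.8 Mbps" mentioned in the commit message):

	if (msm_mdss->reg_bus_path)
		icc_set_bw(msm_mdss->reg_bus_path, 0,
			   enable ? kBps_to_icc(76800) : 0);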

BR,
Georgi


Re: [Freedreno] [PATCH v3 08/11] arm64: dts: qcom: sm8350: Use 2 interconnect cells

2022-12-05 Thread Georgi Djakov

Hi Robert,

On 5.12.22 18:37, Robert Foss wrote:

Use two interconnect cells in order to optionally
support a path tag.

Signed-off-by: Robert Foss 
Reviewed-by: Konrad Dybcio 
---
  arch/arm64/boot/dts/qcom/sm8350.dtsi | 28 ++--
  1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi 
b/arch/arm64/boot/dts/qcom/sm8350.dtsi
index 805d53d91952..434f8e8b12c1 100644
--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
@@ -1543,56 +1543,56 @@ apps_smmu: iommu@15000000 {
 		config_noc: interconnect@1500000 {
 			compatible = "qcom,sm8350-config-noc";
 			reg = <0 0x01500000 0 0xa580>;
-			#interconnect-cells = <1>;
+			#interconnect-cells = <2>;
 			qcom,bcm-voters = <&apps_bcm_voter>;
 		};
 
 		mc_virt: interconnect@1580000 {
 			compatible = "qcom,sm8350-mc-virt";
 			reg = <0 0x01580000 0 0x1000>;
-			#interconnect-cells = <1>;
+			#interconnect-cells = <2>;
 			qcom,bcm-voters = <&apps_bcm_voter>;
 		};

[..]

@@ -1620,8 +1620,8 @@ ipa: ipa@1e40000 {
 			clocks = <&rpmhcc RPMH_IPA_CLK>;
 			clock-names = "core";
 
-			interconnects = <&aggre2_noc MASTER_IPA &mc_virt SLAVE_EBI1>,
-					<&gem_noc MASTER_APPSS_PROC &config_noc SLAVE_IPA_CFG>;
+			interconnects = <&aggre2_noc MASTER_IPA 0 &mc_virt SLAVE_EBI1 0>,
+					<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_IPA_CFG 0>;
 			interconnect-names = "memory",
 					     "config";
  
@@ -1661,7 +1661,7 @@ mpss: remoteproc@4080000 {
 					<&rpmhpd SM8350_MSS>;
 			power-domain-names = "cx", "mss";
 
-			interconnects = <&mc_virt MASTER_LLCC &mc_virt SLAVE_EBI1>;
+			interconnects = <&mc_virt MASTER_LLCC &mc_virt SLAVE_EBI1 0>;


The second cell for the first endpoint is missing, so this should be:
interconnects = <&mc_virt MASTER_LLCC 0 &mc_virt SLAVE_EBI1 0>;

Thanks,
Georgi

  
 			memory-region = <&pil_modem_mem>;
  
@@ -2239,7 +2239,7 @@ cdsp: remoteproc@98900000 {
 					<&rpmhpd SM8350_MXC>;
 			power-domain-names = "cx", "mxc";
 
-			interconnects = <&gem_noc MASTER_CDSP_PROC &mc_virt SLAVE_EBI1>;
+			interconnects = <&gem_noc MASTER_CDSP_PROC 0 &mc_virt SLAVE_EBI1 0>;
 
 			memory-region = <&pil_cdsp_mem>;
  
@@ -2421,14 +2421,14 @@ usb_2_ssphy: phy@88ebe000 {
 
 		dc_noc: interconnect@90c0000 {
 			compatible = "qcom,sm8350-dc-noc";
 			reg = <0 0x090c0000 0 0x4200>;
-			#interconnect-cells = <1>;
+			#interconnect-cells = <2>;
 			qcom,bcm-voters = <&apps_bcm_voter>;
 		};
 
 		gem_noc: interconnect@9100000 {
 			compatible = "qcom,sm8350-gem-noc";
 			reg = <0 0x09100000 0 0xb4000>;
-			#interconnect-cells = <1>;
+			#interconnect-cells = <2>;
 			qcom,bcm-voters = <&apps_bcm_voter>;
 		};
  




Re: [Freedreno] [PATCH 0/3] iommu/drm/msm: Allow non-coherent masters to use system cache

2021-07-28 Thread Georgi Djakov
On Mon, Jan 11, 2021 at 07:45:02PM +0530, Sai Prakash Ranjan wrote:
> commit ecd7274fb4cd ("iommu: Remove unused IOMMU_SYS_CACHE_ONLY flag")
> removed unused IOMMU_SYS_CACHE_ONLY prot flag and along with it went
> the memory type setting required for the non-coherent masters to use
> system cache. Now that system cache support for GPU is added, we will
> need to set the right PTE attribute for GPU buffers to be sys cached.
> Without this, the system cache lines are not allocated for GPU.
> 
> So the patches in this series introduce a new prot flag IOMMU_LLC,
> rename IO_PGTABLE_QUIRK_ARM_OUTER_WBWA to IO_PGTABLE_QUIRK_PTW_LLC
> and make GPU the user of this protection flag.

Hi Sai,

Thank you for the patchset! Are you planning to refresh it, as it does
not apply anymore?

Thanks,
Georgi

> 
> The series slightly depends on following 2 patches posted earlier and
> is based on msm-next branch:
>  * https://lore.kernel.org/patchwork/patch/1363008/
>  * https://lore.kernel.org/patchwork/patch/1363010/
> 
> Sai Prakash Ranjan (3):
>   iommu/io-pgtable: Rename last-level cache quirk to
> IO_PGTABLE_QUIRK_PTW_LLC
>   iommu/io-pgtable-arm: Add IOMMU_LLC page protection flag
>   drm/msm: Use IOMMU_LLC page protection flag to map gpu buffers
> 
>  drivers/gpu/drm/msm/adreno/a6xx_gpu.c   | 3 +++
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c | 2 +-
>  drivers/gpu/drm/msm/msm_iommu.c | 3 +++
>  drivers/gpu/drm/msm/msm_mmu.h   | 4 
>  drivers/iommu/io-pgtable-arm.c  | 9 ++---
>  include/linux/io-pgtable.h  | 6 +++---
>  include/linux/iommu.h   | 6 ++
>  7 files changed, 26 insertions(+), 7 deletions(-)
> 
> 
> base-commit: 00fd44a1a4700718d5d962432b55c09820f7e709
> -- 
> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
> of Code Aurora Forum, hosted by The Linux Foundation
> 


Re: [Freedreno] [PATCH v2 1/7] iommu/io-pgtable: Introduce dynamic io-pgtable fmt registration

2020-12-23 Thread Georgi Djakov

Hi Isaac,

On 22.12.20 2:44, Isaac J. Manjarres wrote:

The io-pgtable code constructs an array of init functions for each
page table format at compile time. This is not ideal, as this
increases the footprint of the io-pgtable code, as well as prevents
io-pgtable formats from being built as kernel modules.

In preparation for modularizing the io-pgtable formats, switch to a
dynamic registration scheme, where each io-pgtable format can register
their init functions with the io-pgtable code at boot or module
insertion time.

Signed-off-by: Isaac J. Manjarres 
---
  drivers/iommu/io-pgtable-arm-v7s.c | 34 +-
  drivers/iommu/io-pgtable-arm.c | 90 ++--
  drivers/iommu/io-pgtable.c | 94 --
  include/linux/io-pgtable.h | 51 +
  4 files changed, 209 insertions(+), 60 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm-v7s.c 
b/drivers/iommu/io-pgtable-arm-v7s.c
index 1d92ac9..89aad2f 100644
--- a/drivers/iommu/io-pgtable-arm-v7s.c
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -28,6 +28,7 @@

[..]

+static int __init arm_lpae_init(void)
+{
+   int ret, i;
+
+   for (i = 0; i < ARRAY_SIZE(io_pgtable_arm_lpae_init_fns); i++) {
+	ret = io_pgtable_ops_register(&io_pgtable_arm_lpae_init_fns[i]);
+   if (ret < 0) {
+   pr_err("Failed to register ARM LPAE fmt: %d\n");


I guess we want to print the format here?
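
Something like this, perhaps (a sketch — it assumes the entries of the
init_fns array carry their enum io_pgtable_fmt in a .fmt member, as the
registration scheme suggests):

	pr_err("Failed to register ARM LPAE fmt %u, err: %d\n",
	       io_pgtable_arm_lpae_init_fns[i].fmt, ret);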

Thanks,
Georgi


[Freedreno] [PATCH] drm/msm: Remove depends on interconnect

2020-09-16 Thread Georgi Djakov
The dependency on interconnect in the Kconfig was introduced to avoid
the case of interconnect=m and driver=y, but the interconnect framework
has been converted from tristate to bool now. Remove the dependency as
the framework can't be a module anymore.

Signed-off-by: Georgi Djakov 
---
 drivers/gpu/drm/msm/Kconfig | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
index 5c55cd0ce9f9..3348969460ab 100644
--- a/drivers/gpu/drm/msm/Kconfig
+++ b/drivers/gpu/drm/msm/Kconfig
@@ -6,7 +6,6 @@ config DRM_MSM
depends on ARCH_QCOM || SOC_IMX5 || (ARM && COMPILE_TEST)
depends on OF && COMMON_CLK
depends on MMU
-   depends on INTERCONNECT || !INTERCONNECT
depends on QCOM_OCMEM || QCOM_OCMEM=n
select QCOM_MDT_LOADER if ARCH_QCOM
select REGULATOR


Re: [Freedreno] [RFC PATCH] interconnect: qcom: add functions to query addr/cmds for a path

2020-07-13 Thread Georgi Djakov
On 7/1/20 07:25, Jonathan Marek wrote:
> The a6xx GMU can vote for ddr and cnoc bandwidth, but it needs to be able
> to query the interconnect driver for bcm addresses and commands.

It's not very clear to me how the GMU firmware would deal with this. Does
anyone have an idea whether the GMU makes any bandwidth decisions? Or is it just
a static configuration that simply enables/disables a TCS?

I think that we can query the address from the cmd-db, but we have to know the
bcm names and the path. All the BCM/TCS information looks to be very low-level
and implementation specific, so exposing it through an API is not very good,
but hard-coding all this information is not good either.
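
For reference, looking up a BCM address in the command DB is already
possible today with something like this (a sketch — it assumes the BCM
resource name, e.g. "MC0", is known in advance, which is exactly the
hard-coding problem described above):

	#include <soc/qcom/cmd-db.h>

	u32 addr = cmd_db_read_addr("MC0");
	if (!addr)
		return -ENODEV;	/* BCM not described in cmd-db */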

Thanks,
Georgi

> 
> I'm not sure what is the best way to go about implementing this, this is
> what I came up with.
> 
> I included a quick example of how this can be used by the a6xx driver to
> fill out the GMU bw_table (two ddr bandwidth levels in this example, note
> this would be using the frequency table in dts and not hardcoded values).
> 
> Signed-off-by: Jonathan Marek 
> ---
>  drivers/gpu/drm/msm/adreno/a6xx_hfi.c | 20 ---
>  drivers/interconnect/qcom/icc-rpmh.c  | 50 +++
>  include/soc/qcom/icc.h| 11 ++
>  3 files changed, 68 insertions(+), 13 deletions(-)


Re: [Freedreno] [v2 1/3] drm/msm/dpu: add support for clk and bw scaling for display

2020-04-15 Thread Georgi Djakov
Hi Krishna,

Thanks for the patch!

On 4/2/20 09:52, Krishna Manikandan wrote:
> This change adds support to scale src clk and bandwidth as
> per composition requirements.
> 
> Interconnect registration for bw has been moved to mdp
> device node from mdss to facilitate the scaling.

No Signed-off-by ?

> ---
>  drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c  | 106 
> +
>  drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c |   5 +-
>  drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h |   4 +
>  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c|  37 -
>  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h|   4 +
>  drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c   |   9 ++-
>  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c  |  82 +++
>  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h  |   4 +
>  8 files changed, 228 insertions(+), 23 deletions(-)
> 
> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c 
> b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c

[..]
> @@ -186,10 +247,21 @@ static int _dpu_core_perf_crtc_update_bus(struct dpu_kms *kms,
>   perf.max_per_pipe_ib = max(perf.max_per_pipe_ib,
>   dpu_cstate->new_perf.max_per_pipe_ib);
>  
> - DPU_DEBUG("crtc=%d bw=%llu\n", tmp_crtc->base.id,
> - dpu_cstate->new_perf.bw_ctl);
> + perf.bw_ctl += dpu_cstate->new_perf.bw_ctl;
> +
> + DPU_DEBUG("crtc=%d bw=%llu paths:%d\n",
> +   tmp_crtc->base.id,
> +   dpu_cstate->new_perf.bw_ctl, kms->num_paths);
>   }
>   }
> +
> + avg_bw = kms->num_paths ?
> + perf.bw_ctl / kms->num_paths : 0;
> +
> + for (i = 0; i < kms->num_paths; i++)
> + icc_set_bw(kms->path[i],
> + Bps_to_icc(avg_bw), (perf.max_per_pipe_ib));

In what units is max_per_pipe_ib? Can you use Bps_to_icc() or kBps_to_icc()?

> +
>   return ret;
>  }
>

[..]

> @@ -1037,8 +1065,15 @@ static int __maybe_unused dpu_runtime_resume(struct device *dev)
>   struct drm_encoder *encoder;
>   struct drm_device *ddev;
>  	struct dss_module_power *mp = &dpu_kms->mp;
> + int i;
>  
>   ddev = dpu_kms->dev;
> +
> + /* Min vote of BW is required before turning on AXI clk */
> + for (i = 0; i < dpu_kms->num_paths; i++)
> + icc_set_bw(dpu_kms->path[i], 0,
> + dpu_kms->catalog->perf.min_dram_ib);

Bps_to_icc() ?
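
That is, something like this (a sketch — assuming min_dram_ib is in
bytes per second, which the units question above should confirm):

	for (i = 0; i < dpu_kms->num_paths; i++)
		icc_set_bw(dpu_kms->path[i], 0,
			   Bps_to_icc(dpu_kms->catalog->perf.min_dram_ib));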

Thanks,
Georgi


Re: [Freedreno] [PATCH 5/5] ARM: dts: qcom: msm8974: add interconnect nodes

2019-10-24 Thread Georgi Djakov
On 24.10.19 г. 10:07 ч., Brian Masney wrote:
> On Wed, Oct 23, 2019 at 04:39:21PM +0300, Georgi Djakov wrote:
>> On 23.10.19 г. 15:47 ч., Brian Masney wrote:
>>> On Wed, Oct 23, 2019 at 02:50:19PM +0300, Georgi Djakov wrote:
>>>> On 13.10.19 г. 11:08 ч., Brian Masney wrote:
>>>>> Add interconnect nodes that are needed to support bus scaling.
>>>>>
>>>>> Signed-off-by: Brian Masney 
>>>>> ---
>>>>>  arch/arm/boot/dts/qcom-msm8974.dtsi | 60 +
>>>>>  1 file changed, 60 insertions(+)
>>>>>
>>>>> diff --git a/arch/arm/boot/dts/qcom-msm8974.dtsi 
>>>>> b/arch/arm/boot/dts/qcom-msm8974.dtsi
>>>>> @@ -1152,6 +1207,11 @@
>>>>> "core",
>>>>> "vsync";
>>>>>  
>>>>> +	interconnects = <&mmssnoc MNOC_MAS_GRAPHICS_3D &bimc BIMC_SLV_EBI_CH0>,
>>>>> +			<&ocmemnoc OCMEM_VNOC_MAS_GFX3D &ocmemnoc OCMEM_SLV_OCMEM>;
>>>>
>>>> Who will be requesting the bandwidth to DDR and ocmem? Is it the display
>>>> or the GPU, or both? The above seem like GPU-related interconnects, so maybe
>>>> these properties should be in the GPU DT node.
>>>
>>> The display is what currently requests the interconnect path,
>>> specifically mdp5_setup_interconnect() in
>>> drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c. The Freedreno GPU bindings
>>> currently don't have interconnect support. Maybe this is something that
>>> I should add to that driver as well?
>>
>> The "mdp0-mem" and "mdp1-mem" paths mentioned in the mdp5_kms.c are the two
>> interconnects between the display and DDR memory.
> 
> OK, I see. Most of the interconnect paths in the downstream MSM 3.4
> sources are configured in device tree using the
> qcom,msm-bus,vectors-KBps property, which is what I was only looking at
> before. The interconnect path for the display is configured directly in
> code (drivers/video/msm/mdss/mdss_mdp.c) to setup a path between
> MSM_BUS_MASTER_MDP_PORT0 and MSM_BUS_SLAVE_EBI_CH0.

Correct!

> 
> In the upstream kernel, it looks like I'll need to
> 
>   1) add support for an optional second interconnect path for ocmem to
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c.

Yes, just check if there is a "gpu-ocmem" path in DT and scale it when needed.
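
Something along these lines in adreno_gpu.c would do (a sketch, untested
— the ocmem_icc_path member name is illustrative; the error handling
mirrors what the existing DDR path does):

	/* The OCMEM path is optional; skip scaling it if not described in DT */
	gpu->ocmem_icc_path = of_icc_get(dev, "gpu-ocmem");
	if (IS_ERR(gpu->ocmem_icc_path))
		gpu->ocmem_icc_path = NULL;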

> 
>   2) add implementations of gpu_get_freq and gpu_set_freq to the
>  adreno_gpu_funcs struct in drivers/gpu/drm/msm/adreno/a3xx_gpu.c.
> 

Maybe; I am not very familiar with the adreno stuff. It might be good to CC the
freedreno guys.

Thanks,
Georgi

Re: [PATCH] qcom: Add BCM vote macro to TCS header

2019-07-24 Thread Georgi Djakov
Hi Jordan,

On 7/17/19 18:53, Jordan Crouse wrote:
> The A6XX family of Adreno GPUs use a microcontroller to control the
> GPU clock independently. The microcontroller also has the capability
> to vote for the bus but doesn't currently do so except for one initial
> vote that is hard coded [1].
> 
> Currently there is no good way to construct a valid TCS command outside
> of the inner workings of the QCOM interconnect driver which is something
> that will need to be addressed for the next generation of GPU drivers.
> 
> To start the process, this change moves the TCS command macros from the
> sdm845 interconnect driver into a soc specific header to make it available
> for future efforts into this area.

I agree that we should move the TCS macros into tcs.h. There is also a similar
macro in drivers/clk/qcom/clk-rpmh.c. Maybe we can replace both with a common
one?
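
For context, the typical use of the macro is building a TCS command (a
sketch — bcm_addr would come from cmd-db and the x/y votes from the
aggregated bandwidth):

	struct tcs_cmd cmd = {
		.addr = bcm_addr,
		.data = BCM_TCS_CMD(true, true, vote_x, vote_y),
	};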

Thanks,
Georgi

> 
> [1] 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/msm/adreno/a6xx_hfi.c#n219
> 
> Signed-off-by: Jordan Crouse 
> ---
> 
>  drivers/interconnect/qcom/sdm845.c | 17 -
>  include/soc/qcom/tcs.h | 17 +
>  2 files changed, 17 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/interconnect/qcom/sdm845.c 
> b/drivers/interconnect/qcom/sdm845.c
> index 4915b78..79b6f01 100644
> --- a/drivers/interconnect/qcom/sdm845.c
> +++ b/drivers/interconnect/qcom/sdm845.c
> @@ -20,23 +20,6 @@
>  #include 
>  #include 
>  
> -#define BCM_TCS_CMD_COMMIT_SHFT  30
> -#define BCM_TCS_CMD_COMMIT_MASK  0x40000000
> -#define BCM_TCS_CMD_VALID_SHFT   29
> -#define BCM_TCS_CMD_VALID_MASK   0x20000000
> -#define BCM_TCS_CMD_VOTE_X_SHFT  14
> -#define BCM_TCS_CMD_VOTE_MASK    0x3fff
> -#define BCM_TCS_CMD_VOTE_Y_SHFT  0
> -#define BCM_TCS_CMD_VOTE_Y_MASK  0xfffc000
> -
> -#define BCM_TCS_CMD(commit, valid, vote_x, vote_y)   \
> - (((commit) << BCM_TCS_CMD_COMMIT_SHFT) |\
> - ((valid) << BCM_TCS_CMD_VALID_SHFT) |   \
> - ((cpu_to_le32(vote_x) & \
> - BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_X_SHFT) |\
> - ((cpu_to_le32(vote_y) & \
> - BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_Y_SHFT))
> -
>  #define to_qcom_provider(_provider) \
>   container_of(_provider, struct qcom_icc_provider, provider)
>  
> diff --git a/include/soc/qcom/tcs.h b/include/soc/qcom/tcs.h
> index 262876a..6012a9e 100644
> --- a/include/soc/qcom/tcs.h
> +++ b/include/soc/qcom/tcs.h
> @@ -53,4 +53,21 @@ struct tcs_request {
>   struct tcs_cmd *cmds;
>  };
>  
> +#define BCM_TCS_CMD_COMMIT_SHFT  30
> +#define BCM_TCS_CMD_COMMIT_MASK  0x40000000
> +#define BCM_TCS_CMD_VALID_SHFT   29
> +#define BCM_TCS_CMD_VALID_MASK   0x20000000
> +#define BCM_TCS_CMD_VOTE_X_SHFT  14
> +#define BCM_TCS_CMD_VOTE_MASK    0x3fff
> +#define BCM_TCS_CMD_VOTE_Y_SHFT  0
> +#define BCM_TCS_CMD_VOTE_Y_MASK  0xfffc000
> +
> +#define BCM_TCS_CMD(commit, valid, vote_x, vote_y)   \
> + (((commit) << BCM_TCS_CMD_COMMIT_SHFT) |\
> + ((valid) << BCM_TCS_CMD_VALID_SHFT) |   \
> + ((cpu_to_le32(vote_x) & \
> + BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_X_SHFT) |\
> + ((cpu_to_le32(vote_y) & \
> + BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_Y_SHFT))
> +
>  #endif /* __SOC_QCOM_TCS_H__ */
> 


Re: [Freedreno] [PATCH 5/5] drm/msm/mdp5: Use the interconnect API

2019-05-29 Thread Georgi Djakov
On 5/8/19 23:42, Rob Clark wrote:
> From: Georgi Djakov 
> 

Let's put some text in the commit message:

The interconnect API provides an interface for consumer drivers to express
their bandwidth needs in the SoC. This data is aggregated and the on-chip
interconnect hardware is configured to the most appropriate power/performance
profile.

Use the API to configure the interconnects and request bandwidth between DDR and
the display hardware (MDP port(s) and rotator downscaler).


> Signed-off-by: Georgi Djakov 
> Signed-off-by: Rob Clark 
> ---
>  drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 14 ++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c 
> b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
> index 97179bec8902..54d2b4c2b09f 100644
> --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
> +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
> @@ -16,6 +16,7 @@
>   * this program.  If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <linux/interconnect.h>
>  #include <linux/of_irq.h>
>  
>  #include "msm_drv.h"
> @@ -1050,6 +1051,19 @@ static const struct component_ops mdp5_ops = {
>  
>  static int mdp5_dev_probe(struct platform_device *pdev)
>  {
> +	struct icc_path *path0 = of_icc_get(&pdev->dev, "port0");
> +	struct icc_path *path1 = of_icc_get(&pdev->dev, "port1");
> +	struct icc_path *path_rot = of_icc_get(&pdev->dev, "rotator");

It would be better to change just the names to "mdp0-mem", "mdp1-mem" and
"rotator-mem" for consistency, and to denote that the path target is the DDR memory.

> +
> + if (IS_ERR(path0))
> + return PTR_ERR(path0);
> + icc_set_bw(path0, 0, MBps_to_icc(6400));
> +
> + if (!IS_ERR(path1))
> + icc_set_bw(path1, 0, MBps_to_icc(6400));
> + if (!IS_ERR(path_rot))
> + icc_set_bw(path_rot, 0, MBps_to_icc(6400));
> +
>   DBG("");
>  	return component_add(&pdev->dev, &mdp5_ops);
>  }
> 

Thanks,
Georgi

Re: [Freedreno] [PATCH 2/5] drm/msm/dpu: Integrate interconnect API in MDSS

2019-05-29 Thread Georgi Djakov
On 5/13/19 17:47, Sean Paul wrote:
> On Wed, May 08, 2019 at 01:42:12PM -0700, Rob Clark wrote:
>> From: Jayant Shekhar 
>>
>> The interconnect framework is designed to provide a
>> standard kernel interface to control the settings of
>> the interconnects on a SoC.
>>
>> The interconnect API uses a consumer/provider-based model,
>> where the providers are the interconnect buses and the
>> consumers could be various drivers.
>>
>> MDSS is one of the interconnect consumers which uses the
>> interconnect APIs to get the path between endpoints and
>> set its bandwidth requirement for the given interconnected
>> path.
>>
>> Changes in v2:
>>  - Remove error log and unnecessary check (Jordan Crouse)
>>
>> Changes in v3:
>>  - Code clean involving variable name change, removal
>>of extra paranthesis and variables (Matthias Kaehlcke)
>>
>> Changes in v4:
>>  - Add comments, spacings, tabs, proper port name
>>and icc macro (Georgi Djakov)
>>
>> Changes in v5:
>>  - Commit text and parenthesis alignment (Georgi Djakov)
>>
>> Changes in v6:
>>  - Change to new icc_set API's (Doug Anderson)
>>
>> Changes in v7:
>>  - Fixed a typo
>>
>> Signed-off-by: Sravanthi Kollukuduru 
>> Signed-off-by: Jayant Shekhar 
>> Signed-off-by: Rob Clark 
>> ---
>>  drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c | 49 ++--
>>  1 file changed, 45 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c 
>> b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
>> index 7316b4ab1b85..e3c56ccd7357 100644
>> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
>> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
>> @@ -4,11 +4,15 @@
>>   */
>>  
>>  #include "dpu_kms.h"
>> +#include <linux/interconnect.h>
>>  
>>  #define to_dpu_mdss(x) container_of(x, struct dpu_mdss, base)
>>  
>>  #define HW_INTR_STATUS  0x0010
>>  
>> +/* Max BW defined in KBps */
>> +#define MAX_BW	6800000
>> +
>>  struct dpu_irq_controller {
>>  unsigned long enabled_mask;
>>  struct irq_domain *domain;
>> @@ -21,8 +25,30 @@ struct dpu_mdss {
>>  u32 hwversion;
>>  struct dss_module_power mp;
>>  struct dpu_irq_controller irq_controller;
>> +struct icc_path *path[2];
>> +u32 num_paths;
>>  };
>>  
>> +static int dpu_mdss_parse_data_bus_icc_path(struct drm_device *dev,
>> +struct dpu_mdss *dpu_mdss)
>> +{
>> +struct icc_path *path0 = of_icc_get(dev->dev, "mdp0-mem");
>> +struct icc_path *path1 = of_icc_get(dev->dev, "mdp1-mem");
>> +
>> +if (IS_ERR(path0))
> 
> of_icc_get can also return NULL, it looks like we also want to guard against
> this case and keep num_paths == 0.

of_icc_get() returns NULL when the interconnect API is not enabled or the DT
properties are not present. In this case, passing a NULL path to the icc
functions is just a nop. So it should be fine either way.
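
In other words, the usual consumer pattern copes with both cases (a
sketch):

	struct icc_path *path = of_icc_get(dev->dev, "mdp0-mem");

	if (IS_ERR(path))	/* a real error, e.g. probe deferral */
		return PTR_ERR(path);

	/* path may be NULL here; icc_set_bw()/icc_put() on NULL are no-ops */
	icc_set_bw(path, 0, kBps_to_icc(MAX_BW));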

Acked-by: Georgi Djakov 

Thanks,
Georgi

Re: [Freedreno] [PATCH v3 2/3] dt-bindings: drm/msm/a6xx: Document interconnect properties for GPU

2019-02-21 Thread Georgi Djakov
Hi,

On 12/20/18 19:30, Jordan Crouse wrote:
> Add documentation for the interconnect and interconnect-names bindings
> for the GPU node as detailed by bindings/interconnect/interconnect.txt.
> 
> Signed-off-by: Jordan Crouse 
> ---
> 
>  Documentation/devicetree/bindings/display/msm/gpu.txt | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/display/msm/gpu.txt 
> b/Documentation/devicetree/bindings/display/msm/gpu.txt
> index 9c89f4fdb8ca..5b04393dcb15 100644
> --- a/Documentation/devicetree/bindings/display/msm/gpu.txt
> +++ b/Documentation/devicetree/bindings/display/msm/gpu.txt
> @@ -20,6 +20,8 @@ Required properties:
> - qcom,adreno-630.2
>  - iommus: optional phandle to an adreno iommu instance
>  - operating-points-v2: optional phandle to the OPP operating points
> +- interconnect: optional phandle to a interconnect provider.  See

Nit: s/interconnect:/interconnects:/
Nit: s/a interconnect/an interconnect/

> +  ../interconnect/interconnect.txt for details.
>  - qcom,gmu: For GMU attached devices a phandle to the GMU device that will
>control the power for the GPU. Applicable targets:
>  - qcom,adreno-630.2
> @@ -68,6 +70,8 @@ Example a6xx (with GMU):
>  
>   operating-points-v2 = <_opp_table>;
>  
> +	interconnects = <&rsc_hlos MASTER_GFX3D &rsc_hlos SLAVE_EBI1>;
> +
>  	qcom,gmu = <&gmu>;
>   };
>  };
> 

Acked-by: Georgi Djakov 

Thanks,
Georgi

Re: [Freedreno] [PATCH] drm/msm/a6xx: Add support for an interconnect path

2019-02-12 Thread Georgi Djakov
Hi Greg,

On 2/12/19 12:16, Greg KH wrote:
> On Tue, Feb 12, 2019 at 11:52:38AM +0200, Georgi Djakov wrote:
>> From: Jordan Crouse 
>>
>> Try to get the interconnect path for the GPU and vote for the maximum
>> bandwidth to support all frequencies. This is needed for performance.
>> Later we will want to scale the bandwidth based on the frequency to
>> also optimize for power but that will require some device tree
>> infrastructure that does not yet exist.
>>
>> v6: use icc_set_bw() instead of icc_set()
>> v5: Remove hardcoded interconnect name and just use the default
>> v4: Don't use a port string at all to skip the need for names in the DT
>> v3: Use macros and change port string per Georgi Djakov
>>
>> Signed-off-by: Jordan Crouse 
>> Acked-by: Rob Clark 
>> Reviewed-by: Evan Green 
>> Signed-off-by: Georgi Djakov 
>> ---
>>
>> Hi Greg,
>>
>> If not too late, could you please take this patch into char-misc-next.
>> It is adding the first consumer of the interconnect API. We are just
>> getting the code in place, without making it functional yet, as some
>> DT bits are still needed to actually enable it. We have Rob's Ack to
>> merge this together with the interconnect code. This patch has already
>> spent some time in linux-next without any issues.
> 
> I have a question about the interconnect code.  Last week I saw a
> presentation about the resctrl/RDT code from ARM that is coming (MPAM),
> and it really looks like the same functionality as this interconnect
> code.  In fact, this code looks like the existing resctrl stuff, right?

Thanks for the question! It's nice that MPAM is moving forward. When I
looked into the MPAM draft spec a year ago, it was an optional
extension mentioning mostly use cases with VMs on server systems.

But anyway, MPAM is only available for ARMv8.2+ cores as an optional
extension and aarch32 is not supported. In contrast to that, the
interconnect code is generic and does not put any limitations on the
platform/architecture that can use it - just the platform specific
implementation would be different. We have discussed in that past that
it can be used even on x86 platforms to provide hints to firmware.

> So why shouldn't we just drop the interconnect code and use resctrl
> instead as it's already merged?

I haven't seen any MPAM code so far, but I assume that we can have an
interconnect provider that implements this MPAM extension for systems
that support it (and want to use it). Currently there are people working
on various interconnect platform drivers from 5 different SoC vendors,
and we have agreed to use common DT bindings (and an API). I doubt that
even a single one of these platforms is based on v8.2+. Probably such
SoCs will be coming in the future, and then I expect people to make use
of MPAM in some interconnect provider driver.

Thanks,
Georgi

[Freedreno] [PATCH] drm/msm/a6xx: Add support for an interconnect path

2019-02-12 Thread Georgi Djakov
From: Jordan Crouse 

Try to get the interconnect path for the GPU and vote for the maximum
bandwidth to support all frequencies. This is needed for performance.
Later we will want to scale the bandwidth based on the frequency to
also optimize for power but that will require some device tree
infrastructure that does not yet exist.

v6: use icc_set_bw() instead of icc_set()
v5: Remove hardcoded interconnect name and just use the default
v4: Don't use a port string at all to skip the need for names in the DT
v3: Use macros and change port string per Georgi Djakov

Signed-off-by: Jordan Crouse 
Acked-by: Rob Clark 
Reviewed-by: Evan Green 
Signed-off-by: Georgi Djakov 
---

Hi Greg,

If not too late, could you please take this patch into char-misc-next.
It is adding the first consumer of the interconnect API. We are just
getting the code in place, without making it functional yet, as some
DT bits are still needed to actually enable it. We have Rob's Ack to
merge this together with the interconnect code. This patch has already
spent some time in linux-next without any issues.

Thanks,
Georgi

 drivers/gpu/drm/msm/Kconfig |  1 +
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c   | 20 
 drivers/gpu/drm/msm/adreno/adreno_gpu.c |  9 +
 drivers/gpu/drm/msm/msm_gpu.h   |  3 +++
 4 files changed, 33 insertions(+)

diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
index cf549f1ed403..78c9e5a5e793 100644
--- a/drivers/gpu/drm/msm/Kconfig
+++ b/drivers/gpu/drm/msm/Kconfig
@@ -5,6 +5,7 @@ config DRM_MSM
depends on ARCH_QCOM || SOC_IMX5 || (ARM && COMPILE_TEST)
depends on OF && COMMON_CLK
depends on MMU
+   depends on INTERCONNECT || !INTERCONNECT
select QCOM_MDT_LOADER if ARCH_QCOM
select REGULATOR
select DRM_KMS_HELPER
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c 
b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index ce1b3cc4bf6d..d1662a75c7ec 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -2,6 +2,7 @@
 /* Copyright (c) 2017-2018 The Linux Foundation. All rights reserved. */
 
 #include <linux/clk.h>
+#include <linux/interconnect.h>
 #include <linux/pm_domain.h>
 #include <linux/pm_opp.h>
 
@@ -84,6 +85,9 @@ bool a6xx_gmu_gx_is_on(struct a6xx_gmu *gmu)
 
 static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
 {
+	struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
+	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+	struct msm_gpu *gpu = &adreno_gpu->base;
int ret;
 
gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
@@ -106,6 +110,12 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
 
gmu->freq = gmu->gpu_freqs[index];
+
+   /*
+* Eventually we will want to scale the path vote with the frequency but
+* for now leave it at max so that the performance is nominal.
+*/
+   icc_set_bw(gpu->icc_path, 0, MBps_to_icc(7216));
 }
 
 void a6xx_gmu_set_freq(struct msm_gpu *gpu, unsigned long freq)
@@ -705,6 +715,8 @@ int a6xx_gmu_reset(struct a6xx_gpu *a6xx_gpu)
 
 int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
 {
+	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+	struct msm_gpu *gpu = &adreno_gpu->base;
 	struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
int status, ret;
 
@@ -720,6 +732,9 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
if (ret)
goto out;
 
+   /* Set the bus quota to a reasonable value for boot */
+   icc_set_bw(gpu->icc_path, 0, MBps_to_icc(3072));
+
a6xx_gmu_irq_enable(gmu);
 
/* Check to see if we are doing a cold or warm boot */
@@ -760,6 +775,8 @@ bool a6xx_gmu_isidle(struct a6xx_gmu *gmu)
 
 int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
 {
+	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+	struct msm_gpu *gpu = &adreno_gpu->base;
 	struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
u32 val;
 
@@ -806,6 +823,9 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
/* Tell RPMh to power off the GPU */
a6xx_rpmh_stop(gmu);
 
+   /* Remove the bus vote */
+   icc_set_bw(gpu->icc_path, 0, 0);
+
clk_bulk_disable_unprepare(gmu->nr_clocks, gmu->clocks);
 
pm_runtime_put_sync(gmu->dev);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c 
b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 2cfee1a4fe0b..27898475cdf4 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -18,6 +18,7 @@
  */
 
 #include <linux/ascii85.h>
+#include <linux/interconnect.h>
 #include <linux/kernel.h>
 #include <linux/pm_opp.h>
 #include <linux/slab.h>
@@ -747,6 +748,11 @@ static int adreno_get_pwrlevels(struct device *dev,
 
DBG("fast_rate=%u, slow_rate=2700", gpu->fast_rate);
 
+   /* Check for an interconnect path for the bus */
+   gpu->icc_path = of_icc_get(dev, NULL);
+	if (IS_ERR(gpu->icc_path))
+		gpu->icc_path = NULL;

Re: [Freedreno] [PATCH v3 1/3] drm/msm/a6xx: Add support for an interconnect path

2019-01-21 Thread Georgi Djakov
Hi Rob,

On 1/18/19 21:16, Rob Clark wrote:
> On Fri, Jan 18, 2019 at 1:06 PM Doug Anderson  wrote:
>>
>> Hi,
>>
>> On Thu, Dec 20, 2018 at 9:30 AM Jordan Crouse  wrote:
>>>
>>> Try to get the interconnect path for the GPU and vote for the maximum
>>> bandwidth to support all frequencies. This is needed for performance.
>>> Later we will want to scale the bandwidth based on the frequency to
>>> also optimize for power but that will require some device tree
>>> infrastructure that does not yet exist.
>>>
>>> v5: Remove hardcoded interconnect name and just use the default
>>
>> nit: ${SUBJECT} says v3, but this is v5.
>>
>> I'll put in my usual plug for considering "patman" to help post
>> patches.  Even though it lives in the u-boot git repo it's still a gem
>> for kernel work.
>> 
>>
>>
>>> @@ -85,6 +89,12 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
>>> dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
>>>
>>> gmu->freq = gmu->gpu_freqs[index];
>>> +
>>> +   /*
>>> +* Eventually we will want to scale the path vote with the 
>>> frequency but
>>> +* for now leave it at max so that the performance is nominal.
>>> +*/
>>> +   icc_set(gpu->icc_path, 0, MBps_to_icc(7216));
>>
>> You'll need to change icc_set() here to icc_set_bw() to match v13, AKA:
>>
>> - https://patchwork.kernel.org/patch/10766335/
>> - https://lkml.kernel.org/r/20190116161103.6937-2-georgi.dja...@linaro.org
>>
>>
>>> @@ -695,6 +707,9 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
>>> if (ret)
>>> goto out;
>>>
>>> +   /* Set the bus quota to a reasonable value for boot */
>>> +   icc_set(gpu->icc_path, 0, MBps_to_icc(3072));
>>
>> This will also need to change to icc_set_bw()
>>
>>
>>> @@ -781,6 +798,9 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
>>> /* Tell RPMh to power off the GPU */
>>> a6xx_rpmh_stop(gmu);
>>>
>>> +   /* Remove the bus vote */
>>> +   icc_set(gpu->icc_path, 0, 0);
>>
>> This will also need to change to icc_set_bw()
>>
>>
>> I have the same questions for this series that I had in response to
>> the email ("[v5 2/3] drm/msm/dpu: Integrate interconnect API in MDSS")
>> 
>>
>>
>> Copy / pasting here (with minor name changes) so folks don't have to
>> follow links / search email.
>>
>> ==
>>
>> I'm curious what the plan is for landing this series.  Rob / Georgi:
>> do you have any preference?  Options I'd imagine:
>>
>> A) Wait until interconnect lands (in 5.1?) and land this through
>> msm-next in the version after (5.2?)
>>
>> B) Georgi provides an immutable branch for interconnect when his lands
>> (assuming he's landing via pull request) and that gets pulled into the
>> the relevant drm tree.
>>
>> C) Rob Acks this series and indicates that it should go in through
>> Georgi's tree (probably only works if Georgi plans to send a pull
>> request).  If we're going this route then (IIUC) we'd want to land
>> this in Georgi's tree sooner rather than later so it can get some bake
>> time?  NOTE: as per my prior reply, I believe Rob has already Acked
>> this patch.
>>
> 
> I'm ok to ack and have it land via Georgi's tree, if Georgi wants to
> do that.  Or otherwise, I could maybe coordinate w/ airlied to send a
> 2nd late msm-next pr including the gpu and display interconnect
> patches.

I'm fine either way. But it would be nice if both patches (this one and
the dt-bindings) went together. The v6 of this patch applies cleanly to my
tree, but the next one (2/3) with the dt-bindings doesn't.

Thanks,
Georgi


Re: [Freedreno] [PATCH v4 2/3] drm/msm/dpu: Integrate interconnect API in MDSS

2019-01-09 Thread Georgi Djakov
Hi Jayant,

On 12/21/18 08:20, Jayant Shekhar wrote:
> From: Sravanthi Kollukuduru 
> 
> The interconnect framework is designed to provide a
> standard kernel interface to control the settings of
> the interconnects on a SoC.
> 
> The interconnect API uses a consumer/provider-based model,
> where the providers are the interconnect buses and the
> consumers could be various drivers.
> 
> MDSS is one of the interconnect consumers which uses the
> interconnect APIs to get the path between endpoints and
> set its bandwidth/latency/QoS requirements for the given

Currently we set only bandwidth.

> interconnected path.
> 
> Changes in v2:
>   - Remove error log and unnecessary check (Jordan Crouse)
> 
> Changes in v3:
>   - Code clean involving variable name change, removal
> of extra paranthesis and variables (Matthias Kaehlcke)
> 
> Changes in v4:
>   - Add comments, spacings, tabs, proper port name
> and icc macro (Georgi Djakov)

The changes should not be part of the commit text, but should move below
the "---" line.

> 
> Signed-off-by: Sravanthi Kollukuduru 
> Signed-off-by: Jayant Shekhar 
> ---
>  drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c | 49 
> +---
>  1 file changed, 45 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c 
> b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> index 38576f8..fcaa71f 100644
> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> @@ -4,11 +4,15 @@
>   */
>  
>  #include "dpu_kms.h"
> +#include <linux/interconnect.h>
>  
>  #define to_dpu_mdss(x) container_of(x, struct dpu_mdss, base)
>  
>  #define HW_INTR_STATUS   0x0010
>  
> +/* Max BW defined in KBps */
> +#define MAX_BW	6800000
> +
>  struct dpu_mdss {
>   struct msm_mdss base;
>   void __iomem *mmio;
> @@ -16,8 +20,30 @@ struct dpu_mdss {
>   u32 hwversion;
>   struct dss_module_power mp;
>   struct dpu_irq_controller irq_controller;
> + struct icc_path *path[2];
> + u32 num_paths;
>  };
>  
> +static int dpu_mdss_parse_data_bus_icc_path(struct drm_device *dev,
> + struct dpu_mdss *dpu_mdss)

Nit: Please align to the open parenthesis.

Otherwise looks good to me.

Thanks,
Georgi


Re: [Freedreno] [PATCH v3 3/3] dt-bindings: msm/disp: Introduce interconnect bindings for MDSS on SDM845

2018-12-19 Thread Georgi Djakov
Hi Sravanthi,

Thanks for the patch!

On 11/22/18 11:06, Sravanthi Kollukuduru wrote:
> Add interconnect properties such as the interconnect provider specifier,
> the edge source and destination ports, which are required by the
> interconnect API to configure the interconnect path for MDSS.
> 
> Changes in v2:
>   - none
> 
> Changes in v3:
>   - Remove common property definitions (Rob Herring)
> 
> Signed-off-by: Sravanthi Kollukuduru 
> ---
>  Documentation/devicetree/bindings/display/msm/dpu.txt | 9 +
>  1 file changed, 9 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/display/msm/dpu.txt 
> b/Documentation/devicetree/bindings/display/msm/dpu.txt
> index ad2e8830324e..d75b4360a4be 100644
> --- a/Documentation/devicetree/bindings/display/msm/dpu.txt
> +++ b/Documentation/devicetree/bindings/display/msm/dpu.txt
> @@ -28,6 +28,11 @@ Required properties:
>  - #address-cells: number of address cells for the MDSS children. Should be 1.
>  - #size-cells: Should be 1.
>  - ranges: parent bus address space is the same as the child bus address 
> space.
> +- interconnects : interconnect path specifier for MDSS according to
> +  Documentation/devicetree/bindings/interconnect/interconnect.txt. Should be
> +  2 paths corresponding to 2 AXI ports.
> +- interconnect-names : MDSS will have 2 port names to differentiate between 
> the
> +  2 interconnect paths defined with interconnect specifier.
>  
>  Optional properties:
>  - assigned-clocks: list of clock specifiers for clocks needing rate 
> assignment
> @@ -86,6 +91,10 @@ Example:
>   interrupt-controller;
>   #interrupt-cells = <1>;
>  
> +	interconnects = <&rsc_hlos 38 &rsc_hlos 512>,
> +			<&rsc_hlos 39 &rsc_hlos 512>;

Please use string names instead of hard-coded integers.

interconnects = <&rsc_hlos MASTER_MDP0 &rsc_hlos SLAVE_EBI1>,
		<&rsc_hlos MASTER_MDP1 &rsc_hlos SLAVE_EBI1>;

> + interconnect-names = "port0", "port1";

We are trying to be more descriptive and include both the source and the
destination like: "mdp0-mem", "mdp1-mem"

> +
>   iommus = <_iommu 0>;
>  
>   #address-cells = <2>;

Otherwise looks good.

Thanks,
Georgi


Re: [Freedreno] [PATCH v3 2/3] drm/msm/dpu: Integrate interconnect API in MDSS

2018-12-19 Thread Georgi Djakov
Hi Sravanthi,

Thanks for the patch!

On 11/22/18 11:06, Sravanthi Kollukuduru wrote:
> The interconnect framework is designed to provide a
> standard kernel interface to control the settings of
> the interconnects on a SoC.
> 
> The interconnect API uses a consumer/provider-based model,
> where the providers are the interconnect buses and the
> consumers could be various drivers.
> 
> MDSS is one of the interconnect consumers which uses the
> interconnect APIs to get the path between endpoints and
> set its bandwidth/latency/QoS requirements for the given
> interconnected path.
> 
> Changes in v2:
>   - Remove error log and unnecessary check (Jordan Crouse)
> 
> Changes in v3:
>   - Code clean involving variable name change, removal
> of extra paranthesis and variables (Matthias Kaehlcke)
> 
> Signed-off-by: Sravanthi Kollukuduru 
> ---
>  drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c | 49 
> 
>  1 file changed, 44 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c 
> b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> index 38576f8b90b6..1387a6b1b39e 100644
> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> @@ -4,10 +4,12 @@
>   */
>  
>  #include "dpu_kms.h"
> +#include <linux/interconnect.h>
>  
>  #define to_dpu_mdss(x) container_of(x, struct dpu_mdss, base)
>  
> -#define HW_INTR_STATUS   0x0010
> +#define HW_INTR_STATUS   0x0010

Unrelated change?

> +#define MAX_BW	6800000

In what units? Maybe add a comment.

>  
>  struct dpu_mdss {
>   struct msm_mdss base;
> @@ -16,8 +18,30 @@ struct dpu_mdss {
>   u32 hwversion;
>   struct dss_module_power mp;
>   struct dpu_irq_controller irq_controller;
> + struct icc_path *path[2];
> + u32 num_paths;
>  };
>  
> +static int dpu_mdss_parse_data_bus_icc_path(
> + struct drm_device *dev, struct dpu_mdss *dpu_mdss)

Nit: Lines should not end with a '('. Please move the first argument up:

static int dpu_mdss_parse_data_bus_icc_path(struct drm_device *dev,
struct dpu_mdss *dpu_mdss)

> +{
> + struct icc_path *path0 = of_icc_get(dev->dev, "port0");
> + struct icc_path *path1 = of_icc_get(dev->dev, "port1");

In DT it's preferred that the name contains both the source and
destination, so maybe of_icc_get(dev->dev, "mdp0-mem") etc.

> +
> + if (IS_ERR(path0))
> + return PTR_ERR(path0);
> +
> + dpu_mdss->path[0] = path0;
> + dpu_mdss->num_paths = 1;
> +
> + if (!IS_ERR(path1)) {
> + dpu_mdss->path[1] = path1;
> + dpu_mdss->num_paths++;
> + }
> +
> + return 0;
> +}
> +
>  static irqreturn_t dpu_mdss_irq(int irq, void *arg)
>  {
>   struct dpu_mdss *dpu_mdss = arg;
> @@ -127,7 +151,11 @@ static int dpu_mdss_enable(struct msm_mdss *mdss)
>  {
>   struct dpu_mdss *dpu_mdss = to_dpu_mdss(mdss);
>  	struct dss_module_power *mp = &dpu_mdss->mp;
> - int ret;
> + int ret, i;
> + u64 avg_bw = dpu_mdss->num_paths ? MAX_BW/dpu_mdss->num_paths : 0;

Nit: Please add spaces around "/"

> +
> + for (i = 0; i < dpu_mdss->num_paths; i++)
> + icc_set(dpu_mdss->path[i], avg_bw, MAX_BW);

Now we have macros in the header, that can be used to specify the
bandwidth units. So please use kBps_to_icc or MBps_to_icc etc. If we
decide in the future to change the internal units, we will be able to do
it without touching the users.
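
So the loop above would become something like (a sketch, keeping the
icc_set() API this series is based on):

	for (i = 0; i < dpu_mdss->num_paths; i++)
		icc_set(dpu_mdss->path[i], kBps_to_icc(avg_bw),
			kBps_to_icc(MAX_BW));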

Thanks,
Georgi

>  
>   ret = msm_dss_enable_clk(mp->clk_config, mp->num_clk, true);
>   if (ret)
> @@ -140,12 +168,15 @@ static int dpu_mdss_disable(struct msm_mdss *mdss)
>  {
>   struct dpu_mdss *dpu_mdss = to_dpu_mdss(mdss);
>  	struct dss_module_power *mp = &dpu_mdss->mp;
> - int ret;
> + int ret, i;
>  
>   ret = msm_dss_enable_clk(mp->clk_config, mp->num_clk, false);
>   if (ret)
>   DPU_ERROR("clock disable failed, ret:%d\n", ret);
>  
> + for (i = 0; i < dpu_mdss->num_paths; i++)
> + icc_set(dpu_mdss->path[i], 0, 0);
> +
>   return ret;
>  }
>  
> @@ -155,6 +186,7 @@ static void dpu_mdss_destroy(struct drm_device *dev)
>   struct msm_drm_private *priv = dev->dev_private;
>   struct dpu_mdss *dpu_mdss = to_dpu_mdss(priv->mdss);
>  	struct dss_module_power *mp = &dpu_mdss->mp;
> + int i;
>  
>   pm_runtime_disable(dev->dev);
>   _dpu_mdss_irq_domain_fini(dpu_mdss);
> @@ -162,6 +194,9 @@ static void dpu_mdss_destroy(struct drm_device *dev)
>   msm_dss_put_clk(mp->clk_config, mp->num_clk);
>  	devm_kfree(&pdev->dev, mp->clk_config);
>  
> + for (i = 0; i < dpu_mdss->num_paths; i++)
> + icc_put(dpu_mdss->path[i]);
> +
>   if (dpu_mdss->mmio)
>  		devm_iounmap(&pdev->dev, dpu_mdss->mmio);
>   dpu_mdss->mmio = NULL;
> @@ -200,6 +235,10 @@ int dpu_mdss_init(struct drm_device *dev)
>   }
>   dpu_mdss->mmio_len = resource_size(res);

Re: [Freedreno] [PATCH 1/1] drm/msm/a6xx: Add support for an interconnect path

2018-12-05 Thread Georgi Djakov
Hi Jordan,

Thanks for the patch!

On 11/29/18 19:26, Jordan Crouse wrote:
> Try to get the interconnect path for the GPU and vote for the maximum
> bandwidth to support all frequencies. This is needed for performance.
> Later we will want to scale the bandwidth based on the frequency to
> also optimize for power but that will require some device tree
> infrastructure that does not yet exist.
> 
> v3: Absolute bandwidth values should be specified in KBps

btw. now i have also included macros in the header, that can be used to
specify the bandwidth units. So now you can now use kBps_to_icc or
MBps_to_icc etc. If we decide at some point that we change the units we
use internally, we will not have update all the users.

> 
> Signed-off-by: Jordan Crouse 
> ---
>  drivers/gpu/drm/msm/adreno/a6xx_gmu.c   | 20 
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c |  9 +
>  drivers/gpu/drm/msm/msm_gpu.h   |  3 +++
>  3 files changed, 32 insertions(+)
> 
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c 
> b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> index 546599a7ab05..fe0f5b10fd9c 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> @@ -3,6 +3,7 @@
>  
>  #include <linux/clk.h>
>  #include <linux/pm_domain.h>
> +#include <linux/interconnect.h>
>  #include <linux/pm_opp.h>

Alphabetic order maybe?

>  #include "a6xx_gpu.h"
> @@ -63,6 +64,9 @@ static bool a6xx_gmu_gx_is_on(struct a6xx_gmu *gmu)
>  
>  static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
>  {
> +	struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
> +	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
> +	struct msm_gpu *gpu = &adreno_gpu->base;
>   int ret;
>  
>   gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
> @@ -85,6 +89,12 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
>   dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
>  
>   gmu->freq = gmu->gpu_freqs[index];
> +
> + /*
> +  * Eventually we will want to scale the path vote with the frequency but
> +  * for now leave it t at max so that the performance is nominal.

An extra t above.

> +  */
> + icc_set(gpu->icc_path, 0, 7216000);
>  }
>  
>  void a6xx_gmu_set_freq(struct msm_gpu *gpu, unsigned long freq)
> @@ -680,6 +690,8 @@ int a6xx_gmu_reset(struct a6xx_gpu *a6xx_gpu)
>  
>  int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
>  {
> +	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
> +	struct msm_gpu *gpu = &adreno_gpu->base;
>  	struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
>   int status, ret;
>  
> @@ -695,6 +707,9 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
>   if (ret)
>   goto out;
>  
> + /* Set the bus quota to a reasonable value for boot */
> + icc_set(gpu->icc_path, 0, 3072000);

Here you can do:
icc_set(gpu->icc_path, 0, MBps_to_icc(3072));

> +
>   a6xx_gmu_irq_enable(gmu);
>  
>   /* Check to see if we are doing a cold or warm boot */
> @@ -735,6 +750,8 @@ bool a6xx_gmu_isidle(struct a6xx_gmu *gmu)
>  
>  int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
>  {
> +	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
> +	struct msm_gpu *gpu = &adreno_gpu->base;
>  	struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
>   u32 val;
>  
> @@ -781,6 +798,9 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
>   /* Tell RPMh to power off the GPU */
>   a6xx_rpmh_stop(gmu);
>  
> + /* Remove the bus vote */
> + icc_set(gpu->icc_path, 0, 0);
> +
>   clk_bulk_disable_unprepare(gmu->nr_clocks, gmu->clocks);
>  
>   pm_runtime_put_sync(gmu->dev);
> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c 
> b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> index 93d70f4a2154..9bab491912cf 100644
> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> @@ -21,6 +21,7 @@
>  #include <linux/kernel.h>
>  #include <linux/pm_opp.h>
>  #include <linux/slab.h>
> +#include <linux/interconnect.h>

Alphabetic order?

>  #include "adreno_gpu.h"
>  #include "msm_gem.h"
>  #include "msm_mmu.h"
> @@ -695,6 +696,11 @@ static int adreno_get_pwrlevels(struct device *dev,
>  
>   DBG("fast_rate=%u, slow_rate=2700", gpu->fast_rate);
>  
> + /* Check for an interconnect path for the bus */
> + gpu->icc_path = of_icc_get(dev, "port0");

I was wondering if port0 is appropriate name here. I assume this is the
port to DDR. Maybe we could name it gfx-mem or gpu-mem. Are there any
other interconnects that need to be scaled on the a6xx?

> + if (IS_ERR(gpu->icc_path))
> + gpu->icc_path = NULL;
> +
>   return 0;
>  }
>  
> @@ -733,10 +739,13 @@ int adreno_gpu_init(struct drm_device *drm, struct 
> platform_device *pdev,
>  
>  void adreno_gpu_cleanup(struct adreno_gpu *adreno_gpu)
>  {
> +	struct msm_gpu *gpu = &adreno_gpu->base;
>   unsigned int i;
>  
>   for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++)
>   release_firmware(adreno_gpu->fw[i]);
>  
> + icc_put(gpu->icc_path);
> +
>  	msm_gpu_cleanup(&adreno_gpu->base);
>  }
> diff --git 

Re: [Freedreno] [PATCH 6/9] PM / OPP: dt-bindings: Add opp-interconnect-bw

2018-09-27 Thread Georgi Djakov
Hi Jordan,

Thanks for the patch!

On 08/27/2018 06:11 PM, Jordan Crouse wrote:
> Add the "opp-interconnect-bw" property to specify the
> average and peak bandwidth for an interconnect path for
> a specific operating power point. A separate bandwidth
> pair can be specified for each of the interconnects
> defined for the device by appending the interconnect
> name to the property.
> 
> Signed-off-by: Jordan Crouse 
> ---
>  Documentation/devicetree/bindings/opp/opp.txt | 36 +++
>  1 file changed, 36 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/opp/opp.txt 
> b/Documentation/devicetree/bindings/opp/opp.txt
> index c396c4c0af92..d714c084f36d 100644
> --- a/Documentation/devicetree/bindings/opp/opp.txt
> +++ b/Documentation/devicetree/bindings/opp/opp.txt
> @@ -170,6 +170,11 @@ Optional properties:
>functioning of the current device at the current OPP (where this property 
> is
>present).
>  
> +- opp-interconnect-bw-<name>: This is an array of pairs specifying the average
> +  and peak bandwidth in bytes per second for the interconnect path known by
> +  'name'.  This should match the name(s) specified by interconnect-names in the
> +  device definition.
> +
>  Example 1: Single cluster Dual-core ARM cortex A9, switch DVFS states 
> together.
>  
>  / {
> @@ -543,3 +548,34 @@ Example 6: opp-microvolt-<name>, opp-microamp-<name>:
>   };
>   };
>  };
> +
> +Example 7: opp-interconnect-bw:
> +(example: leaf device with frequency and BW quotas)
> +
> +/ {
> + soc {
> + gpu@500 {
> + ...
> + interconnects = < 26  512>;
> + interconnect-names = "port0";
> + ...
> + operating-points-v2 = <_opp_table>;
> + };
> + };
> +
> + gpu_opp_table: opp_table0 {
> + compatible = "operating-points-v2";
> +
> +		opp-710000000 {
> +			opp-hz = /bits/ 64 <710000000>;
> +			/* Set peak bandwidth @ 7216000 KB/s */
> +			opp-interconnect-bw-port0 = /bits/ 64 <0 7216000000>;

This seems a bit long. I would suggest the following instead.
If there is only one path:
/* average bandwidth = 0 KB/s,  peak bandwidth = 7216000 KB/s */
opp-bw-KBps = <0 7216000>;
or  
opp-bw-MBps = <0 7216>;

If there are multiple paths:
opp-bw-KBps-port0 = <0 7216000>;
opp-bw-KBps-port1 = <0 100>;

The above follows a convention similar to opp-microvolt, where at
runtime the platform can pick a <name> and a matching opp-bw-KBps-<name>
property. If the platform doesn't pick a specific <name> or <name> does
not match with any of the opp-bw-KBps-<name> properties, then
opp-bw-KBps shall be used if present.
For now supporting only KBps values seems enough to cover all present
platforms, so we can start with this and based on the requirements of
future platforms we can add MBps etc.

Thanks,
Georgi

> + };
> +
> +		opp-210000000 {
> +			opp-hz = /bits/ 64 <210000000>;
> +			/* Set peak bandwidth @ 1200000 KB/s */
> +			opp-interconnect-bw-port0 = /bits/ 64 <0 1200000000>;
> +		};
> + };
> +};
> 