On 6/4/2019 11:44 AM, Anson Huang wrote:
>>>>> As an example, this series implements busfreq for i.MX8MM, whose
>>>>> upstreaming is in progress. Because this relies on ATF to do the
>>>>> frequency scaling, it won't be hard to make it work.
>
> I have the same question as the previous reviewer: is there any branch
> where we can test this series?
I've been looking at this and pushed a fixed-up functional variant to my
personal github:

https://github.com/cdleonard/linux/commits/next_imx8mm_busfreq

It builds, probes, and switches the DRAM frequency between low and high
based on whether ethernet is down or up (for testing purposes). The pile
of out-of-tree patches required to get this working is quite small.

The DRAM frequency switch is done via a clk wrapper previously sent as
an RFC: https://patchwork.kernel.org/patch/10968303/

That part needs more work, but it could serve as a neat encapsulation
similar to the imx_cpu clk used for cpufreq-dt.

> And, from the patch, it has multiple levels description of fabric
> arch, while we ONLY intend to scale "bus" frequency per devices'
> request, here "bus" includes DRAM, NOC and AHB, AXI, should we make it
> more flatter, such as just a virtual fabric as a single provider, and
> then all other devices as nodes under this provider?

The imx8mm interconnect bindings describe many bus endpoints, but all
requests are aggregated into a single platform-level OPP, equivalent to
the "low/audio/high mode" from the NXP tree.

It might be better to associate clks with several ICC nodes and scale
NOC and DRAM separately that way. As far as I understand, an
interconnect provider is free to decide on granularity.

As a wilder idea, it might even be possible to use a standard
"devfreq-with-perfmon" driver for the DDRC and have interconnect
request a minimum frequency from that instead of calling clk_set_rate
on the DRAM clk directly. That could bring features from both worlds,
scaling both proactively and reactively.

--
Regards,
Leonard
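For the archives, here is a toy userspace model (not kernel code) of the
aggregation scheme described above: per-path bandwidth requests are summed
and mapped onto a single platform OPP, mirroring the low/audio/high modes
from the NXP tree. The threshold values and names are made up for
illustration only.

```c
/* Toy model of single-OPP interconnect aggregation.
 * Not kernel code; thresholds are hypothetical. */
#include <assert.h>
#include <stddef.h>

enum platform_opp { OPP_LOW, OPP_AUDIO, OPP_HIGH };

/* Hypothetical bandwidth thresholds in MB/s */
#define AUDIO_BW_THRESHOLD 100u

/* The interconnect core sums the requested bandwidth of all active paths */
static unsigned int aggregate_bw(const unsigned int *requests, size_t n)
{
	unsigned int total = 0;

	for (size_t i = 0; i < n; i++)
		total += requests[i];
	return total;
}

/* Map the aggregate request onto one of three platform-wide modes */
static enum platform_opp pick_opp(unsigned int total_bw)
{
	if (total_bw == 0)
		return OPP_LOW;
	if (total_bw <= AUDIO_BW_THRESHOLD)
		return OPP_AUDIO;
	return OPP_HIGH;
}
```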