Re: Setting CPU clock frequency on early boot

2017-09-05 Thread Vineet Gupta

On 09/05/2017 03:04 PM, Rob Herring wrote:

On Tue, Sep 5, 2017 at 10:37 AM, Alexey Brodkin wrote:

Hello,

I'd like to get some feedback on our idea, and also to check whether
anybody else faces similar situations and, if so, what the best way would
be to implement a generic solution that suits everyone.

So here's our problem:
1. On power-on the hardware might start clocking the CPU at either
   too high a frequency (such that the CPU may get stuck at some point)
   or too low a frequency.

That may all sound stupid, but let me elaborate a bit here.
I'm talking about FPGA-based devboards whose firmware (here I mean just
the image loaded into the FPGA with the CPU implementation, not any
software yet) might not be stable or might even be experimental.

For example, we may deal with dual-core or quad-core designs.
The former might be OK running @100MHz while the latter is only usable
@75MHz and below. The simplest solution might be to use some safe
value until something like CPUfreq kicks in. But we don't yet have
CPUfreq for ARC (we do plan to get it working sometime soon)


But even if we had a cpufreq driver going, I don't think it would be usable for
doing large frequency switches, since in current implementations of SoCs (or FPGAs) the
clk/PLL driving the core (and all the timers etc.) is not fixed like, say, on ARM. And as
discussed before (and pointed out by tglx), the timer subsystem can't tolerate (on
purpose) such large drifts.



which
means that simply changing the CPU frequency once the time-keeping infrastructure
has been brought up is not an option... I.e. we'll end up with the system running
much slower than it otherwise could.

2. Up until now we used to do dirty hacks in early platform init code.
Namely (see axs103_early_init() in arch/arc/plat-axs10x/axs10x.c):
 1) Read the CPU's "clock-frequency" from the .dtb (remember we're still at a very
    early boot stage, so no expanded DevTree exists yet).
 2) Check how many cores we have and which frequency is usable.
 3) Update the PLL settings in place if the new frequency != the one already in the PLL.

Even though this is proven to work, with more platforms in the pipeline
we'll need to copy-paste pretty much the same stuff across all affected
platforms, which is not nice.
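
For illustration, a rough, hedged sketch of what such an early-boot fix-up
could look like. This is not the actual axs103_early_init(); the DT node
path, the 75 MHz clamp policy and the plat_pll_*() helpers are assumptions
made up for the example - only the libfdt accessors and initial_boot_params
are real kernel interfaces.

#include <linux/init.h>
#include <linux/libfdt.h>
#include <linux/of_fdt.h>

static void __init plat_early_fixup_core_clk(unsigned int ncores)
{
    int node;
    const __be32 *prop;
    u32 freq;

    /* 1) Read "clock-frequency" straight from the flat DT blob:
     *    the unflattened device tree does not exist yet this early.
     */
    node = fdt_path_offset(initial_boot_params, "/cpus/cpu@0");
    if (node < 0)
        return;

    prop = fdt_getprop(initial_boot_params, node, "clock-frequency", NULL);
    if (!prop)
        return;
    freq = be32_to_cpup(prop);

    /* 2) Clamp to a value known to be safe for this core count
     *    (assumed policy: quad-core builds are only stable at 75 MHz).
     */
    if (ncores > 2 && freq > 75000000)
        freq = 75000000;

    /* 3) Poke the PLL in place if the current rate differs.
     *    plat_pll_read_rate()/plat_pll_write_rate() are hypothetical
     *    platform helpers standing in for raw register accesses.
     */
    if (freq != plat_pll_read_rate())
        plat_pll_write_rate(freq);
}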

Moreover, back in the day we didn't have a proper clk driver for the CPU's PLL,
so acting on the PLL registers in place was the only thing we were able
to do. Now, with the introduction of a normal clk driver
(see drivers/clk/axs10x/pll_clock.c in linux-next), we'd like to utilize
it and have a cleaner and more universal solution to the problem.

Here's how it could be done - http://patchwork.ozlabs.org/patch/801240/
Basically, in the architecture's time_init() we check whether there's an explicitly
specified "clock-frequency" parameter in the cpu node of the Device Tree, and
if there is one we set it via the just-instantiated clk driver.
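
For reference, a minimal sketch of that idea, assuming a "clock-frequency"
property in the cpu node and a clock reachable via of_clk_get(); the exact
binding and clock lookup in the actual patch may differ.

#include <linux/clk.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/of.h>

static void __init early_set_cpu_clk(void)
{
    struct device_node *cpu = of_get_cpu_node(0, NULL);
    struct clk *core_clk;
    u32 freq;

    if (!cpu)
        return;

    /* Act only if the DT explicitly overrides the frequency */
    if (of_property_read_u32(cpu, "clock-frequency", &freq))
        goto put_node;

    /* The clk driver has just been instantiated by of_clk_init() */
    core_clk = of_clk_get(cpu, 0);
    if (IS_ERR(core_clk))
        goto put_node;

    if (clk_get_rate(core_clk) != freq)
        clk_set_rate(core_clk, freq);

    clk_put(core_clk);
put_node:
    of_node_put(cpu);
}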

The patch looks generally okay. I'd move all the logic to the clock
driver unless perhaps how to set the cpu freq is defined by the
architecture.


But the above patch is clk driver agnostic - otherwise it would have to be added to each
clk driver (axs10x, hsdk - both in linux-next)?
Also note that this code is using a new / ad-hoc DT binding, cpu-freq in the cpu node, to
do the override - is that acceptable?


-Vineet




We may indeed proceed with the mentioned solution for ARC, but if that makes
sense for somebody else it might be worth getting something similar into generic
init code. Any thoughts?

If any ARM platforms are doing something similar, then it's done in
their clock driver via of_clk_init. Or they just wait for cpufreq to
kick in and expect the bootloader to have initialized things to a
reasonable frequency.
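
To sketch the pattern Rob describes: the compatible string and
example_pll_register() below are made up for illustration, and this is not
the axs10x or hsdk driver.

#include <linux/clk.h>
#include <linux/clk-provider.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/of_address.h>

/* Hypothetical stand-in for the driver's own clk_hw registration code */
static struct clk_hw *example_pll_register(struct device_node *np,
                                           void __iomem *base);

static void __init example_core_pll_init(struct device_node *np)
{
    void __iomem *base = of_iomap(np, 0);
    struct clk_hw *hw;
    u32 freq;

    if (!base)
        return;

    hw = example_pll_register(np, base);
    if (IS_ERR(hw))
        return;

    /* Fix up the rate right here, before any consumer sees the clock */
    if (!of_property_read_u32(np, "clock-frequency", &freq))
        clk_set_rate(hw->clk, freq);

    of_clk_add_hw_provider(np, of_clk_hw_simple_get, hw);
}
CLK_OF_DECLARE(example_core_pll, "example,core-pll", example_core_pll_init);

On platforms that call of_clk_init() before their timers are probed, this
fixes up the frequency before the clocksource/clockevent rates are latched,
without any arch-specific code.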





Re: Setting CPU clock frequency on early boot

2017-09-05 Thread Rob Herring
On Tue, Sep 5, 2017 at 10:37 AM, Alexey Brodkin wrote:
> Hello,
>
> I'd like to get some feedback on our idea, and also to check whether
> anybody else faces similar situations and, if so, what the best way would
> be to implement a generic solution that suits everyone.
>
> So here's our problem:
> 1. On power-on the hardware might start clocking the CPU at either
>    too high a frequency (such that the CPU may get stuck at some point)
>    or too low a frequency.
>
>    That may all sound stupid, but let me elaborate a bit here.
>    I'm talking about FPGA-based devboards whose firmware (here I mean just
>    the image loaded into the FPGA with the CPU implementation, not any
>    software yet) might not be stable or might even be experimental.
>
>    For example, we may deal with dual-core or quad-core designs.
>    The former might be OK running @100MHz while the latter is only usable
>    @75MHz and below. The simplest solution might be to use some safe
>    value until something like CPUfreq kicks in. But we don't yet have
>    CPUfreq for ARC (we do plan to get it working sometime soon) which
>    means that simply changing the CPU frequency once the time-keeping
>    infrastructure has been brought up is not an option... I.e. we'll end
>    up with the system running much slower than it otherwise could.
>
> 2. Up until now we used to do dirty hacks in early platform init code.
>    Namely (see axs103_early_init() in arch/arc/plat-axs10x/axs10x.c):
>     1) Read the CPU's "clock-frequency" from the .dtb (remember we're still
>        at a very early boot stage, so no expanded DevTree exists yet).
>     2) Check how many cores we have and which frequency is usable.
>     3) Update the PLL settings in place if the new frequency != the one already in the PLL.
>
>    Even though this is proven to work, with more platforms in the pipeline
>    we'll need to copy-paste pretty much the same stuff across all affected
>    platforms, which is not nice.
>
>    Moreover, back in the day we didn't have a proper clk driver for the CPU's PLL,
>    so acting on the PLL registers in place was the only thing we were able
>    to do. Now, with the introduction of a normal clk driver
>    (see drivers/clk/axs10x/pll_clock.c in linux-next), we'd like to utilize
>    it and have a cleaner and more universal solution to the problem.
>
>    Here's how it could be done - http://patchwork.ozlabs.org/patch/801240/
>    Basically, in the architecture's time_init() we check whether there's an explicitly
>    specified "clock-frequency" parameter in the cpu node of the Device Tree, and
>    if there is one we set it via the just-instantiated clk driver.

The patch looks generally okay. I'd move all the logic to the clock
driver unless perhaps how to set the cpu freq is defined by the
architecture.

> We may indeed proceed with the mentioned solution for ARC, but if that makes
> sense for somebody else it might be worth getting something similar into generic
> init code. Any thoughts?

If any ARM platforms are doing something similar, then it's done in
their clock driver via of_clk_init. Or they just wait for cpufreq to
kick in and expect the bootloader to have initialized things to a
reasonable frequency.

Rob



Re: [PATCH v2 2/2] console: don't select first registered console if stdout-path used

2017-09-05 Thread Petr Mladek
On Mon 2017-08-28 19:58:07, Eugeniy Paltsev wrote:
> In the current implementation we take the first console that
> registers if we didn't select one.
> 
> But if we specify the console via the "stdout-path" property in the device tree
> we don't want the first console that registers here to be selected.
> Otherwise we may choose the wrong console - for example, if some console
> is registered earlier than the console pointed to by the "stdout-path"
> property, because the console pointed to by "stdout-path" can be added as
> preferred quite late - when its driver is probed.

register_console() is a really twisted function. I would like to better
understand your problems before we add yet another twist there.

Could you please be more specific about your problems?
What was the output of "cat /proc/consoles" before and after the fix?
What exactly started and stopped working?


> We retain the previous behavior for the tty0 console (if "stdout-path" is used)
> as a special case:
> tty0 will be registered even if it was specified neither
> in "bootargs" nor in "stdout-path".
> We had to retain this behavior because a lot of ARM boards (and some
> powerpc ones) rely on it.

My main concern is the exception for "tty". Yes, there was a regression
reported in commit c6c7d83b9c9e6a8b3e ("Revert "console: don't
prefer first registered if DT specifies stdout-path""). But is this
the only possible regression?


All this is about the fallback code that tries to enable all
consoles until a real one with a tty binding (newcon->device)
is enabled.
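
Roughly, the behaviour in question looks like this - a heavily simplified
sketch, not the real register_console() code; preferred_console_selected
and have_real_console() are invented placeholders for printk-internal state.

#include <linux/console.h>

static bool preferred_console_selected;   /* placeholder */
static bool have_real_console(void);      /* placeholder */

static void fallback_enable(struct console *newcon)
{
    /* Nothing was selected yet: enable whatever registers... */
    if (!preferred_console_selected && !have_real_console()) {
        newcon->flags |= CON_ENABLED;
        /* ...and a console providing a tty binding (->device)
         * becomes the "real" one (CON_CONSDEV, i.e. /dev/console).
         */
        if (newcon->device)
            newcon->flags |= CON_CONSDEV;
    }
}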

The v1 version of your patch disabled this fallback code when a console
was defined by stdout-path in the device tree. This emulates
defining the console via the console= parameter on the command line.

It might make sense until someone complains that a console is no
longer automatically enabled when it was before. But wait.
Someone already complained about "tty0". We can solve this
by adding an exception for "tty0". And if anyone else complains
about another console, we might need more exceptions.

We might end up with so many exceptions that the fallback code
will always be used. But then we are back at square one
and have the original behavior from before your patch.

This is why I would like to know more about your problem.
We need to decide whether it is more important than a regression,
or whether it can be fixed another way.

Best Regards,
Petr
