On Tue, Sep 29, 2020 at 09:50:51AM +0800, [email protected] wrote:
> From: Quanyang Wang <[email protected]>
> 
> Since sched_clock_read_begin and sched_clock_read_retry are called
> by the notrace function sched_clock, they must not be traceable
> either, or else ftrace_graph_caller recurses endlessly along the
> following path (on arm, for instance):
> 
>   ftrace_graph_caller
>     prepare_ftrace_return
>       function_graph_enter
>         ftrace_push_return_trace
>           trace_clock_local
>             sched_clock
>               sched_clock_read_begin/retry
> 
> Fixes: 1b86abc1c645 ("sched_clock: Expose struct clock_read_data")
> Signed-off-by: Quanyang Wang <[email protected]>
> ---
> Changes:
> V2: Add notrace to sched_clock_read_retry according to Peter's suggestion.
> 
>  kernel/time/sched_clock.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
> index 1c03eec6ca9b..f629e3f5afbe 100644
> --- a/kernel/time/sched_clock.c
> +++ b/kernel/time/sched_clock.c
> @@ -68,13 +68,13 @@ static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
>       return (cyc * mult) >> shift;
>  }
>  
> -struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
> +notrace struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
>  {
>       *seq = raw_read_seqcount_latch(&cd.seq);
>       return cd.read_data + (*seq & 1);
>  }
>  
> -int sched_clock_read_retry(unsigned int seq)
> +int notrace sched_clock_read_retry(unsigned int seq)
>  {
>       return read_seqcount_retry(&cd.seq, seq);
>  }

You're inconsistent in the placement of notrace; I prefer the first, the
second reads as if notrace were the return type instead of a qualifier.
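
For context, notrace is a function attribute, not part of the type: in the
kernel it expands (roughly, depending on configuration) to
__attribute__((no_instrument_function)), defined with the other compiler
attribute macros in include/linux/compiler_types.h. A minimal sketch of the
two placements from the patch, qualifier-style placement first:

  /* Preferred: notrace reads as a qualifier on the function. */
  notrace struct clock_read_data *sched_clock_read_begin(unsigned int *seq);
  notrace int sched_clock_read_retry(unsigned int seq);

  /* The second hunk's placement reads as if notrace were the return type. */
  int notrace sched_clock_read_retry(unsigned int seq);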

In any case,

Acked-by: Peter Zijlstra (Intel) <[email protected]>
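
The quoted commit message describes the recursion; for anyone who wants to
see the same hazard outside the kernel, here is a hedged userspace analogue
(not the kernel's ftrace machinery). With gcc -finstrument-functions, a
profiling hook that timestamps every function entry recurses forever unless
the clock function it calls is excluded from instrumentation, which is the
role notrace plays for sched_clock_read_begin/retry above. The names
demo_clock and traced_work below are purely illustrative.

  #include <stdio.h>
  #include <time.h>

  /* Userspace stand-in for the kernel's notrace qualifier. */
  #define notrace __attribute__((no_instrument_function))

  /* Analogue of sched_clock(): called from the profiling hook below, so it
   * must not itself be instrumented or the hook would recurse until the
   * stack overflows. */
  static notrace unsigned long long demo_clock(void)
  {
          struct timespec ts;
          clock_gettime(CLOCK_MONOTONIC, &ts);
          return (unsigned long long)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
  }

  /* Analogue of the ftrace_graph_caller path: take a timestamp on every
   * instrumented function entry.  The hooks themselves must also be
   * excluded from instrumentation. */
  notrace void __cyg_profile_func_enter(void *fn, void *caller)
  {
          (void)fn; (void)caller;
          demo_clock();   /* safe only because demo_clock() is notrace */
  }

  notrace void __cyg_profile_func_exit(void *fn, void *caller)
  {
          (void)fn; (void)caller;
  }

  static int traced_work(int x)
  {
          return x * 2;   /* instrumented: the hooks above fire here */
  }

  int main(void)
  {
          /* Build with: gcc -finstrument-functions demo.c */
          printf("%d\n", traced_work(21));
          return 0;
  }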
