Hi Peter,

On 9/28/20 6:58 PM, Peter Zijlstra wrote:
On Mon, Sep 28, 2020 at 06:49:52PM +0800, quanyang.w...@windriver.com wrote:
From: Quanyang Wang <quanyang.w...@windriver.com>

Since sched_clock_read_begin() is called by the notrace function
sched_clock(), it must not be traceable either; otherwise
__ftrace_graph_caller runs into an endless loop on the following
path (arm, for instance):

   ftrace_graph_caller
     prepare_ftrace_return
       function_graph_enter
         ftrace_push_return_trace
           trace_clock_local
             sched_clock
               sched_clock_read_begin

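As a rough userspace analogue (illustrative only; none of the names
below are kernel code), the same loop can be reproduced with GCC's
-finstrument-functions: the entry hook needs to read a clock, so the
clock helper has to carry no_instrument_function, which is what the
kernel's notrace macro expands to:

    /* toy.c - userspace analogue of the recursion; build with:
     *     gcc -finstrument-functions toy.c -o toy
     */
    #define notrace __attribute__((no_instrument_function))

    static unsigned long long fake_ns;

    /*
     * The "clock" the entry hook reads. Without notrace, this function
     * would itself be instrumented: entering it fires the hook, which
     * reads the clock, which fires the hook again, and so on.
     */
    static notrace unsigned long long toy_clock(void)
    {
            return ++fake_ns;
    }

    /* The hooks themselves must not be instrumented either. */
    notrace void __cyg_profile_func_enter(void *fn, void *site)
    {
            toy_clock();    /* timestamp entry, like trace_clock_local() */
    }

    notrace void __cyg_profile_func_exit(void *fn, void *site)
    {
    }

    int main(void)
    {
            return 0;
    }
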
Fixes: 1b86abc1c645 ("sched_clock: Expose struct clock_read_data")
Signed-off-by: Quanyang Wang <quanyang.w...@windriver.com>
---
 kernel/time/sched_clock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index 1c03eec6ca9b..58459e1359d7 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -68,7 +68,7 @@ static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
 	return (cyc * mult) >> shift;
 }
 
-struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
+notrace struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
 {
 	*seq = raw_read_seqcount_latch(&cd.seq);
 	return cd.read_data + (*seq & 1);
At the very least sched_clock_read_retry() should also be marked as such.

In fact, sched_clock_read_retry() is inlined by the compiler, so it
doesn't trigger the endless loop. But to be safe, it is better to add
notrace to it as well.

I will send a V2 patch.
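
For reference, a minimal sketch of the extra V2 hunk (the signature is
the one exposed by commit 1b86abc1c645; the function body is omitted
here):

    -int sched_clock_read_retry(unsigned int seq)
    +notrace int sched_clock_read_retry(unsigned int seq)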

Thanks,

Quanyang

But Steve, how come x86 works? Our sched_clock() doesn't have notrace
on it at all.
