With the function graph tracer, each traced function calls sched_clock()
to take a timestamp. As sched_clock() uses
raw_read_seqcount()/read_seqcount_retry(), we must ensure that these do
not in turn trigger the graph tracer. Both functions are marked as
inline. However, if CONFIG_OPTIMIZE_INLINING is set, that may make the
two functions traceable, which they should not be.
Rework so that the functions raw_read_seqcount() and
read_seqcount_retry() are marked with __always_inline so they will be
inlined even if CONFIG_OPTIMIZE_INLINING is turned on.

Acked-by: Will Deacon <[email protected]>
Signed-off-by: Anders Roxell <[email protected]>
---
 include/linux/seqlock.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index bcf4cf26b8c8..1b18e3df186e 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -127,7 +127,7 @@ static inline unsigned __read_seqcount_begin(const seqcount_t *s)
  * seqcount without any lockdep checking and without checking or
  * masking the LSB. Calling code is responsible for handling that.
  */
-static inline unsigned raw_read_seqcount(const seqcount_t *s)
+static __always_inline unsigned raw_read_seqcount(const seqcount_t *s)
 {
 	unsigned ret = READ_ONCE(s->sequence);
 	smp_rmb();
@@ -215,7 +215,8 @@ static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
  * If the critical section was invalid, it must be ignored (and typically
  * retried).
  */
-static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
+static
+__always_inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
 {
 	smp_rmb();
 	return __read_seqcount_retry(s, start);
-- 
2.20.1
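
For illustration only, a minimal standalone C sketch of the distinction
the patch relies on. The helper names read_seq_hint() and
read_seq_forced() are hypothetical, and the __always_inline definition
below only mirrors what the kernel's compiler headers do; it is not the
patched code itself.

/*
 * Hypothetical sketch, not part of the patch: __always_inline is
 * roughly defined in the kernel's compiler headers as below.
 */
#ifndef __always_inline
#define __always_inline inline __attribute__((__always_inline__))
#endif

/*
 * With CONFIG_OPTIMIZE_INLINING=y, plain "inline" is only a hint, so
 * the compiler may emit an out-of-line copy of this helper, which the
 * function graph tracer could then instrument.
 */
static inline unsigned read_seq_hint(const unsigned *seq)
{
	return *(volatile const unsigned *)seq;
}

/*
 * __always_inline forces the body into every caller, so no traceable
 * out-of-line symbol is generated regardless of the config option.
 */
static __always_inline unsigned read_seq_forced(const unsigned *seq)
{
	return *(volatile const unsigned *)seq;
}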

