The following commit has been merged into the locking/core branch of tip:

Commit-ID:     a690ed07353ec45f056b0a6f87c23a12a59c030d
Gitweb:        https://git.kernel.org/tip/a690ed07353ec45f056b0a6f87c23a12a59c030d
Author:        Ahmed S. Darwish <a.darw...@linutronix.de>
AuthorDate:    Thu, 27 Aug 2020 13:40:40 +02:00
Committer:     Peter Zijlstra <pet...@infradead.org>
CommitterDate: Thu, 10 Sep 2020 11:19:29 +02:00

time/sched_clock: Use seqcount_latch_t

Latch sequence counters have unique read and write APIs, and thus
seqcount_latch_t was recently introduced in seqlock.h.

Use that new data type instead of plain seqcount_t. This adds the
necessary type safety and ensures that only latching-safe seqcount
APIs can be used.

Signed-off-by: Ahmed S. Darwish <a.darw...@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Link: https://lkml.kernel.org/r/20200827114044.11173-5-a.darw...@linutronix.de
---
 kernel/time/sched_clock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index 8c6b5fe..0642013 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -35,7 +35,7 @@
  * into a single 64-byte cache line.
  */
 struct clock_data {
-       seqcount_t              seq;
+       seqcount_latch_t        seq;
        struct clock_read_data  read_data[2];
        ktime_t                 wrap_kt;
        unsigned long           rate;
@@ -76,7 +76,7 @@ struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
 
 int sched_clock_read_retry(unsigned int seq)
 {
-       return read_seqcount_retry(&cd.seq, seq);
+       return read_seqcount_latch_retry(&cd.seq, seq);
 }
 
 unsigned long long notrace sched_clock(void)
