The following commit has been merged into the locking/core branch of tip:
Commit-ID: 58faf20a086bd34f91983609e26eac3d5fe76be3
Gitweb: https://git.kernel.org/tip/58faf20a086bd34f91983609e26eac3d5fe76be3
Author: Ahmed S. Darwish <[email protected]>
AuthorDate: Thu, 27 Aug 2020 13:40:37 +02:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Thu, 10 Sep 2020 11:19:28 +02:00
time/sched_clock: Use raw_read_seqcount_latch() during suspend
sched_clock uses seqcount_t latching to switch between two storage
places protected by the sequence counter. This allows it to have
interruptible, NMI-safe, seqcount_t write side critical sections.
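As a rough illustration of that pattern, here is a minimal sketch of a
latched writer and reader (the struct and function names are hypothetical,
not the sched_clock code itself; only the seqcount_t latch helpers are
real kernel API):

#include <linux/seqlock.h>

struct data {
	u64	val;
};

struct latch_demo {
	seqcount_t	seq;
	struct data	copy[2];
};

static void demo_update(struct latch_demo *d, const struct data *new)
{
	raw_write_seqcount_latch(&d->seq);	/* seq now odd: readers use copy[1] */
	d->copy[0] = *new;
	raw_write_seqcount_latch(&d->seq);	/* seq now even: readers use copy[0] */
	d->copy[1] = *new;
}

static struct data demo_read(struct latch_demo *d)
{
	struct data snap;
	unsigned int seq;

	do {
		seq = raw_read_seqcount_latch(&d->seq);
		snap = d->copy[seq & 1];
	} while (read_seqcount_retry(&d->seq, seq));

	return snap;
}

The writer bumps the sequence before touching each copy, so readers are
always steered to the copy that is not currently being modified.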
Since 7fc26327b756 ("seqlock: Introduce raw_read_seqcount_latch()"),
raw_read_seqcount_latch() became the standardized way for seqcount_t
latch read paths. Due to the dependent load, it has one read memory
barrier less than the currently used raw_read_seqcount() API.
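A simplified, hedged sketch of where that barrier difference comes from
(the real helpers in <linux/seqlock.h> carry extra annotations, e.g. for
KCSAN; this is not the exact kernel code):

/* Latch read: no barrier, the returned seq feeds the data index. */
static inline unsigned int sketch_read_seqcount_latch(const seqcount_t *s)
{
	return READ_ONCE(s->sequence);
}

/* Non-latch raw read: orders the sequence load before later data loads. */
static inline unsigned int sketch_raw_read_seqcount(const seqcount_t *s)
{
	unsigned int ret = READ_ONCE(s->sequence);

	smp_rmb();
	return ret;
}

In the latch case the subsequent cd.read_data[seq & 1] access depends on
the loaded sequence value, so that dependency provides the needed ordering
without an explicit smp_rmb().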
Use raw_read_seqcount_latch() for the suspend path.
Commit aadd6e5caaac ("time/sched_clock: Use raw_read_seqcount_latch()")
missed changing that instance of raw_read_seqcount().
References: 1809bfa44e10 ("timers, sched/clock: Avoid deadlock during read from NMI")
Signed-off-by: Ahmed S. Darwish <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
kernel/time/sched_clock.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index 1c03eec..8c6b5fe 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -258,7 +258,7 @@ void __init generic_sched_clock_init(void)
*/
static u64 notrace suspended_sched_clock_read(void)
{
- unsigned int seq = raw_read_seqcount(&cd.seq);
+ unsigned int seq = raw_read_seqcount_latch(&cd.seq);
return cd.read_data[seq & 1].epoch_cyc;
}