hujun260 commented on code in PR #18284:
URL: https://github.com/apache/nuttx/pull/18284#discussion_r2752612308


##########
sched/clock/clock_sched_ticks.c:
##########
@@ -62,13 +64,13 @@ static volatile clock_t g_system_ticks = INITIAL_SYSTEM_TIMER_TICKS;
 
 void clock_increase_sched_ticks(clock_t ticks)
 {
+  irqstate_t flags;
+
   /* Increment the per-tick scheduler counter */
 
-#ifdef CONFIG_SYSTEM_TIME64
-  atomic64_fetch_add((FAR atomic64_t *)&g_system_ticks, ticks);
-#else
-  atomic_fetch_add((FAR atomic_t *)&g_system_ticks, ticks);
-#endif
+  flags = write_seqlock_irqsave(&g_system_tick_lock);

Review Comment:
   The old implementation has a bug: even when `sample` and `verify` compare equal, the result is not guaranteed to be correct, because `g_system_ticks` is updated non-atomically. A single read can itself be torn (observing a mid-update state), and the two torn reads may still happen to match.
   ```
     clock_t sample;
     clock_t verify;
   
     /* 64-bit accesses are not atomic on most architectures. The following
      * loop samples the 64-bit counter twice and retries in the rare case
      * that a 32-bit rollover occurs between samples.
      *
      * If no 32-bit rollover occurs:
      *  - The MS 32 bits of both samples will be identical, and
      *  - The LS 32 bits of the second sample will be greater than or equal
      *    to those of the first.
      */
   
     do
       {
         verify = g_system_ticks;
        sample = g_system_ticks;
       }
     while ((sample & TIMER_MASK32)  < (verify & TIMER_MASK32) ||
            (sample & ~TIMER_MASK32) != (verify & ~TIMER_MASK32));
   
     return sample;
   ```
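  For contrast with the retry loop above, the reader side that pairs with the new `write_seqlock_irqsave()` write path can be sketched in portable C11. This is an illustrative standalone sketch, not NuttX's actual seqlock API; the names `seq_read_begin`, `seq_read_retry`, `read_ticks`, and the globals are all hypothetical. The key difference from the old loop is that the writer's sequence counter, not the value itself, detects concurrent modification, so a torn read is always caught and retried:

  ```c
  #include <assert.h>
  #include <stdatomic.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Minimal seqlock sketch: the writer bumps the sequence to odd before
   * updating the data and to even afterwards; the reader retries if the
   * sequence was odd or changed across the read.
   */

  static atomic_uint g_seq;
  static uint64_t g_ticks;   /* protected 64-bit counter (may tear) */

  static unsigned seq_read_begin(void)
  {
    unsigned s;

    /* Spin until no write is in progress (sequence is even) */

    while ((s = atomic_load_explicit(&g_seq, memory_order_acquire)) & 1u);
    return s;
  }

  static int seq_read_retry(unsigned start)
  {
    /* Order the data read before the re-check of the sequence */

    atomic_thread_fence(memory_order_acquire);
    return atomic_load_explicit(&g_seq, memory_order_relaxed) != start;
  }

  static void seq_write_begin(void)
  {
    atomic_fetch_add(&g_seq, 1);  /* sequence becomes odd */
  }

  static void seq_write_end(void)
  {
    atomic_fetch_add(&g_seq, 1);  /* sequence becomes even again */
  }

  static uint64_t read_ticks(void)
  {
    unsigned s;
    uint64_t v;

    do
      {
        s = seq_read_begin();
        v = g_ticks;              /* may tear; retry will catch it */
      }
    while (seq_read_retry(s));

    return v;
  }

  int main(void)
  {
    seq_write_begin();
    g_ticks += 5;
    seq_write_end();

    printf("%llu\n", (unsigned long long)read_ticks());
    return 0;
  }
  ```

  Unlike the sample/verify loop, the reader never has to reason about which 32-bit half rolled over; any write that overlaps the read, torn or not, changes the sequence and forces a retry.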


