On Thu, Jul 31, 2025 at 10:32:10AM +0200, Igor Mammedov wrote:
> On Wed, 30 Jul 2025 18:15:03 -0400
> Peter Xu <pet...@redhat.com> wrote:
> 
> > On Wed, Jul 30, 2025 at 02:39:33PM +0200, Igor Mammedov wrote:
> > > Make access to the main HPET counter lock-less when the enable/disable
> > > state isn't changing (which is most of the time).
> > > 
> > > A read will fall back to locked access if a state change happens
> > > in the middle of the read, or the read happens in the middle of the
> > > state change.
> > > 
> > > This basically uses the same approach as cpu_get_clock(), except that
> > > instead of busy-waiting it piggybacks on taking the device lock
> > > to wait until the HPET reaches a consistent state.
> > 
> > The open-coded seqlock will add a bit of complexity to the hpet code.  Is
> > it required? IOW, is it common to have concurrent writers while reading?
> 
> The write path has to be lock-protected for correctness' sake even though
> concurrent writers are not likely.

Right.  Though we have seqlock_write_lock() for that, IIUC (even though
maybe we don't need it in hpet's use case..).
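
A hypothetical sketch of what I mean, assuming s->lock is not already held
when hpet_ram_write() gets there (names as in include/qemu/seqlock.h and
qemu/lockable.h, quoted from memory):

    /* serialize writers on the device lock and enter the seqlock
     * write-side critical section in one go */
    seqlock_write_lock(&s->state_version, &s->lock);

    s->config = new_val;
    /* ... enable/disable the main counter, arm/disarm timers ... */

    /* leave the critical section and drop the device lock */
    seqlock_write_unlock(&s->state_version, &s->lock);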

> 
> I've tried seqlock as well; the difference wrt seqlock is only a few LOC,
> and it didn't make the HPET code any simpler.

I tried to do this and it still looks worthwhile, but maybe I missed
something along the way. Please have a look if so.  That is still a lot
of LOC saved; meanwhile, IMHO the important part is that memory barriers
are tricky to always hand-code in users, so I thought it would be nice
to reuse the lock APIs whenever possible.

One example: IIUC, the current patch may have missed the memory barriers
when bumping state_version in hpet_ram_write().
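
To be concrete about the barriers: the seqlock write helpers pair each
sequence bump with an smp_wmb(), roughly like below (quoting
include/qemu/seqlock.h from memory, so the exact comments may differ):

    static inline void seqlock_write_begin(QemuSeqLock *sl)
    {
        qatomic_set(&sl->sequence, sl->sequence + 1);
        /* Write sequence before writing the protected fields.  */
        smp_wmb();
    }

    static inline void seqlock_write_end(QemuSeqLock *sl)
    {
        /* Write the protected fields before finalizing the sequence.  */
        smp_wmb();
        qatomic_set(&sl->sequence, sl->sequence + 1);
    }

With the helpers that ordering comes for free; with an open-coded counter
one has to double-check that the increments themselves provide it.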

> 
> > How bad it is to spin on read waiting for the writer to finish?
> that will waste CPU cycles, and on a large NUMA system it will generate
> more cross-node traffic (i.e. it would scale badly, though TBH
> I don't have numbers; I think measuring it would be hard as it
> would drown in the noise).
> 
> hence I've opted for a more effective option: halting readers
> until the update is done (at the cost of a latency spike when that
> unlikely event happens).

If it is extremely unlikely (IIUC, disabling the HPET while someone is using /
reading the counter.. should never happen in normal production?), would a
spinning read also be fine?  Maybe that's also why I can save more LOC in
the diff below.

In the diff I also removed an "addr <= 0xff" check that might belong to a
prior patch; I don't think it is needed.

Thanks,

diff --git a/hw/timer/hpet.c b/hw/timer/hpet.c
index d822ca1cd0..09a84d19f3 100644
--- a/hw/timer/hpet.c
+++ b/hw/timer/hpet.c
@@ -39,6 +39,7 @@
 #include "system/address-spaces.h"
 #include "qom/object.h"
 #include "qemu/lockable.h"
+#include "qemu/seqlock.h"
 #include "trace.h"
 
 struct hpet_fw_config hpet_fw_cfg = {.count = UINT8_MAX};
@@ -74,7 +75,7 @@ struct HPETState {
     MemoryRegion iomem;
     uint64_t hpet_offset;
     bool hpet_offset_saved;
-    unsigned state_version;
+    QemuSeqLock state_version;
     qemu_irq irqs[HPET_NUM_IRQ_ROUTES];
     uint32_t flags;
     uint8_t rtc_irq_level;
@@ -431,39 +432,17 @@ static uint64_t hpet_ram_read(void *opaque, hwaddr addr,
     trace_hpet_ram_read(addr);
     addr &= ~4;
 
-    if ((addr <= 0xff) && (addr == HPET_COUNTER)) {
+    if (addr == HPET_COUNTER) {
         unsigned version;
-        bool release_lock = false;
-redo:
-        version = qatomic_load_acquire(&s->state_version);
-        if (unlikely(version & 1)) {
-                /*
-                 * Updater is running, state can be inconsistent
-                 * wait till it's done before reading counter
-                 */
-                release_lock = true;
-                qemu_mutex_lock(&s->lock);
-        }
-
-        if (unlikely(!hpet_enabled(s))) {
-            cur_tick = s->hpet_counter;
-        } else {
-            cur_tick = hpet_get_ticks(s);
-        }
-
-        /*
-         * ensure counter math happens before we check version again
-         */
-        smp_rmb();
-        if (unlikely(version != qatomic_load_acquire(&s->state_version))) {
-            /*
-             * counter state has changed, re-read counter again
-             */
-            goto redo;
-        }
-        if (unlikely(release_lock)) {
-            qemu_mutex_unlock(&s->lock);
-        }
+        /* Write update is extremely rare, so spinning is fine */
+        do {
+            version = seqlock_read_begin(&s->state_version);
+            if (unlikely(!hpet_enabled(s))) {
+                cur_tick = s->hpet_counter;
+            } else {
+                cur_tick = hpet_get_ticks(s);
+            }
+        } while (seqlock_read_retry(&s->state_version, version));
         trace_hpet_ram_read_reading_counter(addr & 4, cur_tick);
         return cur_tick >> shift;
     }
@@ -528,11 +507,7 @@ static void hpet_ram_write(void *opaque, hwaddr addr,
             old_val = s->config;
             new_val = deposit64(old_val, shift, len, value);
             new_val = hpet_fixup_reg(new_val, old_val, HPET_CFG_WRITE_MASK);
-            /*
-             * Odd versions mark the critical section, any readers will be
-             * forced into lock protected read if they come in the middle of it
-             */
-            qatomic_inc(&s->state_version);
+            seqlock_write_begin(&s->state_version);
             s->config = new_val;
             if (activating_bit(old_val, new_val, HPET_CFG_ENABLE)) {
                 /* Enable main counter and interrupt generation. */
@@ -551,12 +526,7 @@ static void hpet_ram_write(void *opaque, hwaddr addr,
                     hpet_del_timer(&s->timer[i]);
                 }
             }
-            /*
-             * even versions mark the end of critical section,
-             * any readers started before config change, but were still executed
-             * during the change, will be forced to re-read counter state
-             */
-            qatomic_inc(&s->state_version);
+            seqlock_write_end(&s->state_version);
 
             /* i8254 and RTC output pins are disabled
              * when HPET is in legacy mode */
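
For completeness, the read-side helpers used above already do the
load/rmb dance internally, roughly like this (again from memory of
include/qemu/seqlock.h):

    static inline unsigned seqlock_read_begin(const QemuSeqLock *sl)
    {
        /* Always fail the retry if a write is in progress.  */
        unsigned ret = qatomic_read(&sl->sequence);

        /* Read sequence before reading the protected fields.  */
        smp_rmb();
        return ret & ~1;
    }

    static inline bool seqlock_read_retry(const QemuSeqLock *sl, unsigned start)
    {
        /* Read the protected fields before re-reading the sequence.  */
        smp_rmb();
        return unlikely(qatomic_read(&sl->sequence) != start);
    }

so the do/while loop in hpet_ram_read() only spins while a writer is in the
middle of flipping the enable bit, which should be practically never.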


-- 
Peter Xu

