This makes the refcnt implementation issue memory barriers when releasing
references, splitting memory activity cleanly into preceding and succeeding
stages around the refcnt 1->0 transition.
I think the while loop could be optimized a little by re-reading
r->r_refs just after sleep_finish().  That should avoid re-entering
the SCHED_LOCK()'ed section.  However, that is not part of this patch.

OK?

Index: share/man/man9/refcnt_init.9
===================================================================
RCS file: src/share/man/man9/refcnt_init.9,v
retrieving revision 1.1
diff -u -p -r1.1 refcnt_init.9
--- share/man/man9/refcnt_init.9	11 Sep 2015 19:13:22 -0000	1.1
+++ share/man/man9/refcnt_init.9	13 Mar 2022 11:40:12 -0000
@@ -68,6 +68,18 @@ There may only be one caller to
 per refcnt
 .Fa r .
 .Pp
+.Fn refcnt_rele ,
+.Fn refcnt_rele_wake
+and
+.Fn refcnt_finalize
+provide release memory ordering.
+The caller's prior memory loads and stores are completed
+before the reference is released.
+The functions provide acquire memory ordering after all the references
+have been released.
+This ensures the object's destructor sees all updates
+done during the lifetime of the object.
+.Pp
 .Fn REFCNT_INITIALIZER
 initialises a declaration of a refcnt
 to 1.
 .Sh CONTEXT
Index: sys/kern/kern_synch.c
===================================================================
RCS file: src/sys/kern/kern_synch.c,v
retrieving revision 1.183
diff -u -p -r1.183 kern_synch.c
--- sys/kern/kern_synch.c	10 Mar 2022 15:21:08 -0000	1.183
+++ sys/kern/kern_synch.c	13 Mar 2022 11:40:13 -0000
@@ -825,10 +825,16 @@ refcnt_rele(struct refcnt *r)
 {
 	u_int refcnt;
 
+	membar_exit_before_atomic();
 	refcnt = atomic_dec_int_nv(&r->r_refs);
 	KASSERT(refcnt != ~0);
-	return (refcnt == 0);
+	if (refcnt == 0) {
+		membar_enter_after_atomic();
+		return (1);
+	}
+
+	return (0);
 }
 
 void
@@ -844,12 +850,20 @@ refcnt_finalize(struct refcnt *r, const
 	struct sleep_state sls;
 	u_int refcnt;
 
+	membar_exit_before_atomic();
 	refcnt = atomic_dec_int_nv(&r->r_refs);
+	if (refcnt == 0) {
+		membar_enter_after_atomic();
+		return;
+	}
+
 	while (refcnt) {
 		sleep_setup(&sls, r, PWAIT, wmesg, 0);
 		refcnt = atomic_load_int(&r->r_refs);
 		sleep_finish(&sls, refcnt);
 	}
+	/* Provide acquire ordering after seeing refcnt == 0. */
+	membar_sync();
 }
 
 void
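For illustration only (not part of the patch): the same release/acquire
discipline can be sketched with portable C11 atomics.  The struct layout
and the function name refcnt_rele_c11 below are hypothetical, chosen to
mirror refcnt_rele(); the kernel uses membar_exit_before_atomic() and
membar_enter_after_atomic() instead of C11 fences.

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical userland counterpart of struct refcnt. */
struct refcnt {
	atomic_uint r_refs;
};

static int
refcnt_rele_c11(struct refcnt *r)
{
	/*
	 * Release ordering: the caller's prior loads and stores
	 * complete before the reference is dropped.
	 */
	unsigned int refcnt =
	    atomic_fetch_sub_explicit(&r->r_refs, 1,
	        memory_order_release) - 1;

	assert(refcnt != ~0U);	/* catch 0 -> ~0 underflow */

	if (refcnt == 0) {
		/*
		 * Acquire ordering on the 1->0 transition: the thread
		 * that runs the destructor sees all updates made
		 * during the object's lifetime.
		 */
		atomic_thread_fence(memory_order_acquire);
		return (1);
	}
	return (0);
}
```

A caller would free the object only when the function returns 1, i.e.
only the thread that observed the 1->0 transition runs the destructor.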