Hi Jan,

On 04/02/2026 13:15, Jan Beulich wrote:
The counterpart in vcpu_context_saved() is smp_wmb(), and here we don't
really need any more than a read barrier: The concern expressed in the
comment is void, as updates to the context are necessarily observed ahead
of ->is_running going false, by virtue of said barrier in
vcpu_context_saved().

Fixes: f6790389613c ("xen/arm: sched: Ensure the vCPU context is seen before vcpu_pause() returns")
Signed-off-by: Jan Beulich <[email protected]>

Acked-by: Julien Grall <[email protected]>

---
As to the tag, this is one of the cases where I'm on the edge between
Amends: and Fixes:.

Good question. I would not consider the patch a backport candidate, and it is borderline a bug fix (the barrier is merely too strong), so Amends: may be better. I will leave it up to you when you commit.


Subsequently we may want to move the barrier into the sole common code
caller, seeing that the other barrier also is in common code. Furthermore,
seeing that for all but x86 the function is then entirely empty, we may
want to allow it to be inline to avoid the unnecessary (tail) call.

+1 for both proposals.

--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -342,11 +342,8 @@ void sync_vcpu_execstate(struct vcpu *v)
       * Per vcpu_context_saved(), the context can be observed when
       * v->is_running is false (the caller should check it before calling
       * this function).
-     *
-     * Note this is a full barrier to also prevent update of the context
-     * to happen before it was observed.
       */
-    smp_mb();
+    smp_rmb();
  }
#define NEXT_ARG(fmt, args) \
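As an aside, the pairing the patch relies on can be sketched with C11 acquire/release fences (illustration only, not Xen code; the helper, field names, and value are made up stand-ins for the saved vCPU context and ->is_running):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static int context;              /* stands in for the saved vCPU context */
static atomic_bool is_running;   /* stands in for v->is_running */

/* Mimics vcpu_context_saved(): publish the context, then clear the flag. */
static void *saver(void *arg)
{
    (void)arg;
    context = 42;                                    /* update the context */
    atomic_thread_fence(memory_order_release);       /* ~ smp_wmb() */
    atomic_store_explicit(&is_running, false, memory_order_relaxed);
    return NULL;
}

/* Mimics the waiter calling sync_vcpu_execstate() after seeing
 * is_running go false: a read barrier suffices to observe the context. */
static int run_demo(void)
{
    pthread_t t;
    int seen;

    context = 0;
    atomic_store_explicit(&is_running, true, memory_order_relaxed);
    pthread_create(&t, NULL, saver, NULL);

    while (atomic_load_explicit(&is_running, memory_order_relaxed))
        ;                                            /* spin until flag drops */
    atomic_thread_fence(memory_order_acquire);       /* ~ smp_rmb() */

    seen = context;           /* ordered after the flag read by the fence */
    pthread_join(t, NULL);
    return seen;
}
```

The release fence on the saver side orders the context update before the flag store, so once the waiter observes the flag as false, the acquire (read) fence is all it needs; nothing requires a full smp_mb() on the reading side.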

Cheers,

--
Julien Grall

