Marc,
On Thu, Sep 22, 2022 at 06:01:28PM +0100, Marc Zyngier wrote:
> The current implementation of the dirty ring has an implicit requirement
> that stores to the dirty ring from userspace must be:
>
> - ordered with one another
>
> - visible from another CPU executing a ring reset
>
> While these implicit requirements work well for x86 (and any other
> TSO-like architecture), they do not work for more relaxed architectures
> such as arm64 where stores to different addresses can be freely
> reordered, and loads from these addresses may not observe writes from
> another CPU unless the required barriers (or acquire/release semantics)
> are used.
>
> In order to start fixing this, upgrade the ring reset accesses:
>
> - the kvm_dirty_gfn_harvested() helper now uses acquire semantics
> so it is ordered after all previous writes, including those from
> userspace
>
> - the kvm_dirty_gfn_set_invalid() helper now uses release semantics
> so that the next_slot and next_offset reads don't drift past
> the entry invalidation
>
> This is only a partial fix as the userspace side also needs upgrading.
Paolo has one fix 4802bf910e ("KVM: dirty ring: add missing memory
barrier", 2022-09-01) which has already landed.
I think the other one, for the reset side, was lost too, so I just posted a patch:
https://lore.kernel.org/qemu-devel/[email protected]/
(the link is not yet available, but it should be shortly)
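The reset side then wants the matching release store when an entry is marked
collected, i.e. something like this (again a sketch, assuming the existing
dirty_gfn_set_collected() helper):

    static void dirty_gfn_set_collected(struct kvm_dirty_gfn *gfn)
    {
        /*
         * Publish the collected state only after the entry has been
         * fully consumed; pairs with the acquire in KVM's ring reset.
         */
        qatomic_store_release(&gfn->flags, KVM_DIRTY_GFN_F_RESET);
    }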
>
> Signed-off-by: Marc Zyngier <[email protected]>
> ---
> virt/kvm/dirty_ring.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
> index f4c2a6eb1666..784bed80221d 100644
> --- a/virt/kvm/dirty_ring.c
> +++ b/virt/kvm/dirty_ring.c
> @@ -79,12 +79,12 @@ static inline void kvm_dirty_gfn_set_invalid(struct kvm_dirty_gfn *gfn)
>
> static inline void kvm_dirty_gfn_set_dirtied(struct kvm_dirty_gfn *gfn)
> {
> - gfn->flags = KVM_DIRTY_GFN_F_DIRTY;
> + smp_store_release(&gfn->flags, KVM_DIRTY_GFN_F_DIRTY);
IIUC you meant kvm_dirty_gfn_set_invalid(), as the commit message says?
kvm_dirty_gfn_set_dirtied() is already guarded by an smp_wmb(), and AFAICT
that's already safe; see the sketch below. Otherwise looks good to me.
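For completeness, the push path publishes the entry roughly like this (from
virt/kvm/dirty_ring.c, abbreviated):

    entry->slot = slot;
    entry->offset = offset;
    /* Publish slot/offset before the DIRTY flag becomes visible. */
    smp_wmb();
    kvm_dirty_gfn_set_dirtied(entry);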
Thanks,
> }
>
> static inline bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
> {
> - return gfn->flags & KVM_DIRTY_GFN_F_RESET;
> + return smp_load_acquire(&gfn->flags) & KVM_DIRTY_GFN_F_RESET;
> }
>
> int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring)
> --
> 2.34.1
>
--
Peter Xu