On Thu, Feb 16, 2023, Yu Zhao wrote:
> +static bool kvm_mmu_notifier_test_clear_young(struct mmu_notifier *mn, struct mm_struct *mm,
> +                                              unsigned long start, unsigned long end,
> +                                              unsigned long *bitmap)
> +{
> +     if (kvm_arch_has_test_clear_young())
> +             return kvm_test_clear_young(mmu_notifier_to_kvm(mn), start, end, bitmap);
> +
> +     return false;
> +}
> +
>  static int kvm_mmu_notifier_test_young(struct mmu_notifier *mn,
>                                      struct mm_struct *mm,
>                                      unsigned long address)
> @@ -903,6 +960,7 @@ static const struct mmu_notifier_ops kvm_mmu_notifier_ops = {
>       .clear_flush_young      = kvm_mmu_notifier_clear_flush_young,
>       .clear_young            = kvm_mmu_notifier_clear_young,
>       .test_young             = kvm_mmu_notifier_test_young,
> +     .test_clear_young       = kvm_mmu_notifier_test_clear_young,

I am strongly opposed to adding yet another "young" mmu_notifier hook for this.
I would much rather we extend (and rename) mmu_notifier_clear_young() to take
the bitmap as an optional parameter, and then have KVM hide the details of
whether or not it supports lockless processing of the range/bitmap.

I also think for KVM x86 in particular, this series should first convert to a
lockless walk for aging ranges, and then add the batching.  It might also make
sense to land x86 first and then follow up with ARM and PPC.  I haven't looked
at the ARM or PPC patches in too much depth, but on the x86 side of things KVM
already has the pieces in place to support a fully lockless walk, i.e. the x86
specific changes aren't all that contentious; the only thing we need to figure
out is what to do about nested VMs.
