On Fri, 13 Apr 2012 15:38:41 -0700
Ying Han <[email protected]> wrote:
> mmu_shrink() is heavy in itself: it iterates over all kvms while
> holding the kvm_lock. Rik and I spotted this code during LSF, and it
> turns out we don't need to do that work when there is nothing to
> shrink.
>
We should probably tell the kvm maintainers about this ;)
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -188,6 +188,11 @@ static u64 __read_mostly shadow_mmio_mask;
>
> static void mmu_spte_set(u64 *sptep, u64 spte);
>
> +static inline int get_kvm_total_used_mmu_pages(void)
> +{
> +	return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
> +}
> +
> void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask)
> {
> 	shadow_mmio_mask = mmio_mask;
> @@ -3900,6 +3905,9 @@ static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
> 	if (nr_to_scan == 0)
> 		goto out;
>
> +	if (!get_kvm_total_used_mmu_pages())
> +		return 0;
> +
> 	raw_spin_lock(&kvm_lock);
>
> 	list_for_each_entry(kvm, &vm_list, vm_list) {
> @@ -3926,7 +3934,7 @@ static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
> 	raw_spin_unlock(&kvm_lock);
>
> out:
> -	return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
> +	return get_kvm_total_used_mmu_pages();
> }
>
> static struct shrinker mmu_shrinker = {
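(For context, not from the patch: the quoted diff trails off at the shrinker
definition. In kernels of this era, mmu_shrink() is wired up through the old
single-callback struct shrinker interface, roughly as below. This is a sketch
from memory of arch/x86/kvm/mmu.c, not authoritative; the .seeks value may
differ, and mmu_shrinker_setup() is an illustrative wrapper for what
kvm_mmu_module_init() actually does.)

	#include <linux/shrinker.h>	/* struct shrinker, register_shrinker() */

	static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc);

	static struct shrinker mmu_shrinker = {
		.shrink	= mmu_shrink,		/* pre-3.12 single-callback API */
		.seeks	= DEFAULT_SEEKS * 10,	/* shadow pages are costly to rebuild */
	};

	/* Illustrative: the real registration happens in kvm_mmu_module_init(). */
	static void mmu_shrinker_setup(void)
	{
		register_shrinker(&mmu_shrinker);
	}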
There's a small functional change: percpu_counter_read_positive() is
approximate, so there will be cases where some pages are accounted for
only in the percpu_counter's per-cpu accumulators. In that case
mmu_shrink() will bail out even though there are in fact some freeable
pages available. This is hopefully unimportant.
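To make the approximation concrete, here's a minimal sketch (mine, not from
the patch) of how a percpu_counter can read as zero while real pages sit in
the per-cpu deltas. Note that percpu_counter_init() grew a gfp_t argument in
later kernels:

	#include <linux/percpu_counter.h>
	#include <linux/printk.h>

	static struct percpu_counter demo;

	static void percpu_counter_is_approximate(void)
	{
		percpu_counter_init(&demo, 0);	/* (&demo, 0, GFP_KERNEL) on newer kernels */

		/*
		 * A delta smaller than the batch size stays in this CPU's
		 * accumulator; the global count is not updated yet.
		 */
		percpu_counter_add(&demo, 1);

		/* Reads only the global count, so this may well print 0. */
		pr_info("approx: %lld\n", percpu_counter_read_positive(&demo));

		/* Folds in every CPU's delta under the lock: prints 1. */
		pr_info("exact:  %lld\n", percpu_counter_sum(&demo));

		percpu_counter_destroy(&demo);
	}

percpu_counter_sum() would give the exact answer, but it takes the counter's
spinlock and walks every CPU, which would defeat the point of a cheap
fast-path check in mmu_shrink().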
Do we actually know that this patch helps anything? Any measurements? Is
kvm_total_used_mmu_pages==0 at all common?