merged into previous patch: [PATCH vz8 3/7] kvm: move VMs which we skip during shrink to vm_list tail
--
Best regards,

Konstantin Khorenko,
Virtuozzo Linux Kernel Team

On 06/08/2021 08:58 PM, Valeriy Vdovin wrote:
From: Konstantin Khorenko <[email protected]>

As we skip some VMs during shrink and don't want to iterate them again
and again on each shrink, we move those skipped VMs to the list's tail,
thus we need to use the _safe version of list iteration.

Fixes: bb2d7ab43eba ("kvm: move VMs which we skip during shrink to vm_list tail")
https://jira.sw.ru/browse/PSBM-95077

Signed-off-by: Konstantin Khorenko <[email protected]>

(cherry-picked from bde385cf90bf89b255fe56f90605c07da5577a9e)
https://jira.sw.ru/browse/PSBM-127849
Signed-off-by: Valeriy Vdovin <[email protected]>
---
 arch/x86/kvm/mmu/mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 74456f4a738a..9e58984fdfaf 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6074,13 +6074,13 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
 static unsigned long mmu_shrink_scan(struct shrinker *shrink,
				     struct shrink_control *sc)
 {
-	struct kvm *kvm;
+	struct kvm *kvm, *tmp;
 	int nr_to_scan = sc->nr_to_scan;
 	unsigned long freed = 0;

 	mutex_lock(&kvm_lock);

-	list_for_each_entry(kvm, &vm_list, vm_list) {
+	list_for_each_entry_safe(kvm, tmp, &vm_list, vm_list) {
 		int idx;
 		LIST_HEAD(invalid_list);
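[Not part of the patch, just an illustration of the change above.] The reason the plain iterator breaks here: `list_for_each_entry()` computes the next node from `pos->next` *after* the loop body has run, so once the body calls `list_move_tail()` on the current VM, `pos->next` points back at the list head and the walk ends early, never reaching the remaining VMs. A minimal userspace sketch of both walks, using a toy intrusive list modeled on (but not identical to) the kernel's `<linux/list.h>`; names like `struct vm`, `skip`, and `run_pass` are invented for the demo and do not exist in the patched code:

```c
#include <assert.h>
#include <stddef.h>

struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h) { h->prev = h->next = h; }

static void list_add_tail(struct list_head *e, struct list_head *h)
{
	e->prev = h->prev;
	e->next = h;
	h->prev->next = e;
	h->prev = e;
}

static void list_move_tail(struct list_head *e, struct list_head *h)
{
	e->prev->next = e->next;	/* unlink from current position */
	e->next->prev = e->prev;
	list_add_tail(e, h);		/* relink at the tail */
}

struct vm {
	int skip;			/* pretend this VM has nothing to shrink */
	struct list_head link;
};

#define vm_of(p) ((struct vm *)((char *)(p) - offsetof(struct vm, link)))

/* Unsafe walk: next node is computed from pos AFTER the body ran.
 * Moving a skipped VM to the tail makes pos->next == head, so the
 * loop ends early and silently skips the remaining VMs. */
static int shrink_pass_unsafe(struct list_head *vms)
{
	int freed = 0;
	struct list_head *pos;

	for (pos = vms->next; pos != vms; pos = pos->next) {
		if (vm_of(pos)->skip) {
			list_move_tail(pos, vms);
			continue;
		}
		freed++;
	}
	return freed;
}

/* Safe walk: the next node is saved BEFORE the body may relink pos,
 * which is exactly what list_for_each_entry_safe() does. */
static int shrink_pass_safe(struct list_head *vms)
{
	int freed = 0;
	struct list_head *pos, *tmp;

	for (pos = vms->next, tmp = pos->next; pos != vms;
	     pos = tmp, tmp = pos->next) {
		if (vm_of(pos)->skip) {
			list_move_tail(pos, vms);
			continue;
		}
		freed++;
	}
	return freed;
}

/* Builds the list a -> b(skip) -> c and runs one shrink pass. */
static int run_pass(int (*pass)(struct list_head *))
{
	struct vm a = { .skip = 0 }, b = { .skip = 1 }, c = { .skip = 0 };
	struct list_head vms;

	list_init(&vms);
	list_add_tail(&a.link, &vms);
	list_add_tail(&b.link, &vms);
	list_add_tail(&c.link, &vms);
	return pass(&vms);
}
```

With `a -> b(skip) -> c`, the safe pass visits `a` and `c` (2 shrinkable VMs) while the unsafe pass stops right after moving `b` and only ever sees `a`. Note the demo relies on the skipped entry ending up at the tail so the walk terminates; the real `mmu_shrink_scan()` additionally bounds its work via `sc->nr_to_scan` and breaks out once pages are freed.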
_______________________________________________
Devel mailing list
[email protected]
https://lists.openvz.org/mailman/listinfo/devel
