CC: [email protected]
In-Reply-To: <[email protected]>
References: <[email protected]>
TO: Sean Christopherson <[email protected]>
TO: Paolo Bonzini <[email protected]>
CC: Sean Christopherson <[email protected]>
CC: Vitaly Kuznetsov <[email protected]>
CC: Wanpeng Li <[email protected]>
CC: Jim Mattson <[email protected]>
CC: Joerg Roedel <[email protected]>
CC: [email protected]
CC: [email protected]
CC: Ben Gardon <[email protected]>
Hi Sean,

I love your patch! Perhaps something to improve:

[auto build test WARNING on kvm/queue]
[also build test WARNING on v5.14-rc5 next-20210810]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Sean-Christopherson/KVM-x86-mmu-Fix-unsync-races-within-TDP-MMU/20210811-064845
base:   https://git.kernel.org/pub/scm/virt/kvm/kvm.git queue
:::::: branch date: 7 hours ago
:::::: commit date: 7 hours ago
config: x86_64-rhel-8.3-kselftests (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.3-348-gf0e6938b-dirty
        # https://github.com/0day-ci/linux/commit/d114d08445896b8e18922d411ee4240ece169793
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Sean-Christopherson/KVM-x86-mmu-Fix-unsync-races-within-TDP-MMU/20210811-064845
        git checkout d114d08445896b8e18922d411ee4240ece169793
        # save the attached .config to linux build tree
        make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=x86_64 SHELL=/bin/bash arch/x86/kvm/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>

sparse warnings: (new ones prefixed by >>)
   arch/x86/kvm/mmu/mmu.c:699:9: sparse: sparse: context imbalance in 'walk_shadow_page_lockless_begin' - different lock contexts for basic block
   arch/x86/kvm/mmu/mmu.c: note: in included file (through include/linux/rbtree.h, include/linux/mm_types.h, arch/x86/kvm/irq.h):
   include/linux/rcupdate.h:718:9: sparse: sparse: context imbalance in 'walk_shadow_page_lockless_end' - unexpected unlock
>> arch/x86/kvm/mmu/mmu.c:2622:9: sparse: sparse: context imbalance in 'mmu_try_to_unsync_pages' - different lock contexts for basic block
   arch/x86/kvm/mmu/mmu.c:4491:57: sparse: sparse: cast truncates bits from constant value (ffffff33 becomes 33)
   arch/x86/kvm/mmu/mmu.c:4493:56: sparse: sparse: cast truncates bits from constant value (ffffff0f becomes f)
   arch/x86/kvm/mmu/mmu.c:4495:57: sparse: sparse: cast truncates bits from constant value (ffffff55 becomes 55)

vim +/mmu_try_to_unsync_pages +2622 arch/x86/kvm/mmu/mmu.c

9cf5cf5ad43b293 arch/x86/kvm/mmu.c     Xiao Guangrong      2010-05-24  2590  
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2591  /*
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2592   * Attempt to unsync any shadow pages that can be reached by the specified gfn,
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2593   * KVM is creating a writable mapping for said gfn.  Returns 0 if all pages
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2594   * were marked unsync (or if there is no shadow page), -EPERM if the SPTE must
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2595   * be write-protected.
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2596   */
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2597  int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync,
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2598  			    spinlock_t *unsync_lock)
4731d4c7a07769c arch/x86/kvm/mmu.c     Marcelo Tosatti     2008-09-23  2599  {
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2600  	bool locked_write = !unsync_lock;
5c520e90af3ad54 arch/x86/kvm/mmu.c     Xiao Guangrong      2016-02-24  2601  	struct kvm_mmu_page *sp;
9cf5cf5ad43b293 arch/x86/kvm/mmu.c     Xiao Guangrong      2010-05-24  2602  
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2603  	if (locked_write)
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2604  		lockdep_assert_held_write(&vcpu->kvm->mmu_lock);
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2605  	else
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2606  		lockdep_assert_held_read(&vcpu->kvm->mmu_lock);
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2607  
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2608  	/*
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2609  	 * Force write-protection if the page is being tracked.  Note, the page
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2610  	 * track machinery is used to write-protect upper-level shadow pages,
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2611  	 * i.e. this guards the role.level == 4K assertion below!
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2612  	 */
3d0c27ad6ee465f arch/x86/kvm/mmu.c     Xiao Guangrong      2016-02-24  2613  	if (kvm_page_track_is_active(vcpu, gfn, KVM_PAGE_TRACK_WRITE))
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2614  		return -EPERM;
3d0c27ad6ee465f arch/x86/kvm/mmu.c     Xiao Guangrong      2016-02-24  2615  
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2616  	/*
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2617  	 * The page is not write-tracked, mark existing shadow pages unsync
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2618  	 * unless KVM is synchronizing an unsync SP (can_unsync = false).  In
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2619  	 * that case, KVM must complete emulation of the guest TLB flush before
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2620  	 * allowing shadow pages to become unsync (writable by the guest).
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2621  	 */
5c520e90af3ad54 arch/x86/kvm/mmu.c     Xiao Guangrong      2016-02-24 @2622  	for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn) {
36a2e6774bfb5f3 arch/x86/kvm/mmu.c     Xiao Guangrong      2010-06-30  2623  		if (!can_unsync)
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2624  			return -EPERM;
36a2e6774bfb5f3 arch/x86/kvm/mmu.c     Xiao Guangrong      2010-06-30  2625  
5c520e90af3ad54 arch/x86/kvm/mmu.c     Xiao Guangrong      2016-02-24  2626  		if (sp->unsync)
5c520e90af3ad54 arch/x86/kvm/mmu.c     Xiao Guangrong      2016-02-24  2627  			continue;
9cf5cf5ad43b293 arch/x86/kvm/mmu.c     Xiao Guangrong      2010-05-24  2628  
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2629  		/*
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2630  		 * TDP MMU page faults require an additional spinlock as they
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2631  		 * run with mmu_lock held for read, not write, and the unsync
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2632  		 * logic is not thread safe.
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2633  		 */
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2634  		if (!locked_write) {
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2635  			locked_write = true;
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2636  			spin_lock(unsync_lock);
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2637  
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2638  			/*
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2639  			 * Recheck after taking the spinlock, a different vCPU
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2640  			 * may have since marked the page unsync.  A false
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2641  			 * positive on the unprotected check above is not
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2642  			 * possible as clearing sp->unsync _must_ hold mmu_lock
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2643  			 * for write, i.e. unsync cannot transition from 0->1
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2644  			 * while this CPU holds mmu_lock for read.
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2645  			 */
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2646  			if (READ_ONCE(sp->unsync))
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2647  				continue;
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2648  		}
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2649  
3bae0459bcd5595 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-04-27  2650  		WARN_ON(sp->role.level != PG_LEVEL_4K);
5c520e90af3ad54 arch/x86/kvm/mmu.c     Xiao Guangrong      2016-02-24  2651  		kvm_unsync_page(vcpu, sp);
9cf5cf5ad43b293 arch/x86/kvm/mmu.c     Xiao Guangrong      2010-05-24  2652  	}
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2653  	if (unsync_lock && locked_write)
d114d08445896b8 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-08-10  2654  		spin_unlock(unsync_lock);
3d0c27ad6ee465f arch/x86/kvm/mmu.c     Xiao Guangrong      2016-02-24  2655  
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2656  	/*
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2657  	 * We need to ensure that the marking of unsync pages is visible
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2658  	 * before the SPTE is updated to allow writes because
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2659  	 * kvm_mmu_sync_roots() checks the unsync flags without holding
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2660  	 * the MMU lock and so can race with this.  If the SPTE was updated
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2661  	 * before the page had been marked as unsync-ed, something like the
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2662  	 * following could happen:
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2663  	 *
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2664  	 * CPU 1                    CPU 2
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2665  	 * ---------------------------------------------------------------------
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2666  	 * 1.2 Host updates SPTE
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2667  	 *     to be writable
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2668  	 *                          2.1 Guest writes a GPTE for GVA X.
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2669  	 *                              (GPTE being in the guest page table shadowed
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2670  	 *                               by the SP from CPU 1.)
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2671  	 *                              This reads SPTE during the page table walk.
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2672  	 *                              Since SPTE.W is read as 1, there is no
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2673  	 *                              fault.
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2674  	 *
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2675  	 *                          2.2 Guest issues TLB flush.
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2676  	 *                              That causes a VM Exit.
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2677  	 *
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2678  	 *                          2.3 Walking of unsync pages sees sp->unsync is
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2679  	 *                              false and skips the page.
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2680  	 *
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2681  	 *                          2.4 Guest accesses GVA X.
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2682  	 *                              Since the mapping in the SP was not updated,
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2683  	 *                              so the old mapping for GVA X incorrectly
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2684  	 *                              gets used.
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2685  	 * 1.1 Host marks SP
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2686  	 *     as unsync
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2687  	 *     (sp->unsync = true)
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2688  	 *
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2689  	 * The write barrier below ensures that 1.1 happens before 1.2 and thus
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2690  	 * the situation in 2.4 does not arise.  The implicit barrier in 2.2
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2691  	 * pairs with this write barrier.
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2692  	 */
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2693  	smp_wmb();
578e1c4db22135d arch/x86/kvm/mmu.c     Junaid Shahid       2018-06-27  2694  
0337f585f57fc80 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-06-22  2695  	return 0;
4731d4c7a07769c arch/x86/kvm/mmu.c     Marcelo Tosatti     2008-09-23  2696  }
4731d4c7a07769c arch/x86/kvm/mmu.c     Marcelo Tosatti     2008-09-23  2697  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]
