Re: [Qemu-devel] [PATCH] msix: fix interrupt aggregation problem at the passthrough of NVMe SSD

2019-04-10 Thread Zhuangyanying
> -Original Message- > From: Michael S. Tsirkin [mailto:m...@redhat.com] > Sent: Tuesday, April 09, 2019 11:04 PM > To: Zhuangyanying > Cc: marcel.apfelb...@gmail.com; qemu-devel@nongnu.org; Gonglei (Arei) > > Subject: Re: [PATCH] msix: fix interrup

[Qemu-devel] [PATCH] msix: fix interrupt aggregation problem at the passthrough of NVMe SSD

2019-04-09 Thread Zhuangyanying
From: Zhuang Yanying Recently I tested the performance of NVMe SSD passthrough and found, via /proc/interrupts, that interrupts were aggregated on vcpu0 (or the first vcpu of each NUMA node) when the guest OS was upgraded to SLES 12 SP3 (or RHEL 7.6). But /proc/irq/X/smp_affinity_list shows that the
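As a side note on how such an imbalance is usually observed (this small reader is only an illustration of the diagnostics mentioned in the report, not part of the patch): a few lines of C are enough to dump the per-CPU delivery counts of one IRQ from /proc/interrupts so they can be compared with /proc/irq/X/smp_affinity_list.

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <irq-number>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen("/proc/interrupts", "r");
    if (!f) {
        perror("/proc/interrupts");
        return 1;
    }

    char line[4096], prefix[32];
    snprintf(prefix, sizeof(prefix), "%s:", argv[1]);

    while (fgets(line, sizeof(line), f)) {
        char *p = line;
        while (*p == ' ') {
            p++;
        }
        if (strncmp(p, prefix, strlen(prefix)) != 0) {
            continue;
        }
        /* The columns right after "<irq>:" are the per-CPU delivery counts. */
        p += strlen(prefix);
        int cpu = 0;
        for (char *tok = strtok(p, " \t\n");
             tok && tok[0] >= '0' && tok[0] <= '9';
             tok = strtok(NULL, " \t\n")) {
            printf("CPU%d: %s\n", cpu++, tok);
        }
        break;
    }
    fclose(f);
    return 0;
}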

[Qemu-devel] [PATCH v2 2/3] KVM: MMU: introduce kvm_mmu_write_protect_all_pages

2019-01-24 Thread Zhuangyanying
From: Xiao Guangrong The original idea is from Avi. kvm_mmu_write_protect_all_pages() is extremely fast at write-protecting all of the guest memory. Compared with the ordinary algorithm, which write-protects last-level sptes via the rmap one by one, it simply updates the generation number to
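A toy user-space sketch of the generation-number idea summarized above (an illustration of the approach only, not the KVM code; every name here is made up): bumping one global counter replaces the per-spte rmap walk, and individual entries are re-validated lazily on the next write fault.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct spte {
    uint64_t value;      /* the actual translation bits (omitted here) */
    uint64_t write_gen;  /* generation this entry was made writable in */
};

static uint64_t global_write_gen;

/* O(1): "write protect all pages" is just a counter increment. */
static void write_protect_all(void)
{
    global_write_gen++;
}

static bool spte_is_writable(const struct spte *s)
{
    return s->write_gen == global_write_gen;
}

/* Slow path on a write fault: re-grant write access one entry at a time. */
static void handle_write_fault(struct spte *s)
{
    /* ...a real implementation would mark the page dirty here... */
    s->write_gen = global_write_gen;
}

int main(void)
{
    struct spte s = { .value = 0, .write_gen = 0 };

    handle_write_fault(&s);
    printf("writable before: %d\n", spte_is_writable(&s));
    write_protect_all();                /* the fast, global operation */
    printf("writable after : %d\n", spte_is_writable(&s));
    return 0;
}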

[Qemu-devel] [PATCH v2 3/3] KVM: MMU: fast cleanup D bit based on fast write protect

2019-01-24 Thread Zhuangyanying
From: Zhuang Yanying During live migration of large-memory guests, a vcpu may hang for a long time while migration is starting, e.g. 9 s for a 2 TB guest (linux-5.0.0-rc2 + qemu-3.1.0). The reason is that memory_global_dirty_log_start() takes too long while the vcpu waits for the BQL. The page-by-page D bit

[Qemu-devel] [PATCH v2 1/3] KVM: MMU: introduce possible_writable_spte_bitmap

2019-01-24 Thread Zhuangyanying
From: Xiao Guangrong It is used to track possible writable sptes on the shadow page: the bit is set to 1 for sptes that are already writable or that can be locklessly updated to writable on the fast_page_fault path. A counter for the number of possible writable sptes is also introduced to
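A standalone sketch of the data structure described above (the names, the 512-entry size, and the helpers are assumptions chosen for illustration, not the kernel's definitions): one bit per spte slot of a shadow page plus a counter, so "does this page have any possibly-writable spte?" is a cheap test.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPTES_PER_SHADOW_PAGE 512   /* 512 8-byte entries in a 4 KiB page table */

struct shadow_page {
    uint64_t possible_writable_bitmap[SPTES_PER_SHADOW_PAGE / 64];
    unsigned int possible_writable_count;
};

static void mark_possible_writable(struct shadow_page *sp, unsigned int idx)
{
    uint64_t mask = UINT64_C(1) << (idx % 64);

    if (!(sp->possible_writable_bitmap[idx / 64] & mask)) {
        sp->possible_writable_bitmap[idx / 64] |= mask;
        sp->possible_writable_count++;
    }
}

static void clear_possible_writable(struct shadow_page *sp, unsigned int idx)
{
    uint64_t mask = UINT64_C(1) << (idx % 64);

    if (sp->possible_writable_bitmap[idx / 64] & mask) {
        sp->possible_writable_bitmap[idx / 64] &= ~mask;
        sp->possible_writable_count--;
    }
}

static bool has_possible_writable(const struct shadow_page *sp)
{
    return sp->possible_writable_count != 0;
}

int main(void)
{
    struct shadow_page sp = {0};

    mark_possible_writable(&sp, 100);
    printf("any possibly writable: %d\n", has_possible_writable(&sp));
    clear_possible_writable(&sp, 100);
    printf("any possibly writable: %d\n", has_possible_writable(&sp));
    return 0;
}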

[Qemu-devel] [PATCH v2 0/3] KVM: MMU: fast cleanup D bit based on fast write protect

2019-01-24 Thread Zhuangyanying
From: Zhuang Yanying During live migration of large-memory guests, a vcpu may hang for a long time while migration is starting, e.g. 9 s for a 2 TB guest (linux-5.0.0-rc2 + qemu-3.1.0). The reason is that memory_global_dirty_log_start() takes too long while the vcpu waits for the BQL. The page-by-page D bit
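A back-of-the-envelope check of why the page-by-page pass is so slow: the ~9 s for 2 TB figure is from the cover letter above, while the per-spte cost below is purely an assumed number picked to show that the orders of magnitude line up.

#include <stdio.h>

int main(void)
{
    const double guest_mem_bytes = 2.0 * 1024 * 1024 * 1024 * 1024; /* 2 TB */
    const double page_bytes      = 4096.0;                          /* 4 KiB sptes */
    const double ns_per_spte     = 17.0;  /* assumed cost of one spte update */

    double sptes   = guest_mem_bytes / page_bytes;   /* ~536 million entries */
    double seconds = sptes * ns_per_spte / 1e9;      /* ~9 s */

    printf("%.0f sptes -> ~%.1f s spent while holding the BQL\n", sptes, seconds);
    return 0;
}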

Re: [Qemu-devel] [PATCH 4/4] KVM: MMU: fast cleanup D bit based on fast write protect

2019-01-23 Thread Zhuangyanying
> -Original Message- > From: Sean Christopherson [mailto:sean.j.christopher...@intel.com] > Sent: Tuesday, January 22, 2019 11:17 PM > To: Zhuangyanying > Cc: xiaoguangr...@tencent.com; pbonz...@redhat.com; Gonglei (Arei) > ; qemu-devel@nongnu.org; k...@vger.kernel

Re: [Qemu-devel] [PATCH 4/4] KVM: MMU: fast cleanup D bit based on fast write protect

2019-01-20 Thread Zhuangyanying
> -Original Message- > From: Sean Christopherson [mailto:sean.j.christopher...@intel.com] > Sent: Friday, January 18, 2019 12:32 AM > To: Zhuangyanying > Cc: xiaoguangr...@tencent.com; pbonz...@redhat.com; Gonglei (Arei) > ; qemu-devel@nongnu.org; k...@vger.kernel

[Qemu-devel] [PATCH 4/4] KVM: MMU: fast cleanup D bit based on fast write protect

2019-01-17 Thread Zhuangyanying
From: Zhuang Yanying During live migration of large-memory guests, a vcpu may hang for a long time while migration is starting, e.g. 9 s for a 2 TB guest (linux-5.0.0-rc2 + qemu-3.1.0). The reason is that memory_global_dirty_log_start() takes too long while the vcpu waits for the BQL. The page-by-page D bit

[Qemu-devel] [PATCH 2/4] KVM: MMU: introduce possible_writable_spte_bitmap

2019-01-17 Thread Zhuangyanying
From: Xiao Guangrong It is used to track possible writable sptes on the shadow page: the bit is set to 1 for sptes that are already writable or that can be locklessly updated to writable on the fast_page_fault path. A counter for the number of possible writable sptes is also introduced to

[Qemu-devel] [PATCH 1/4] KVM: MMU: correct the behavior of mmu_spte_update_no_track

2019-01-17 Thread Zhuangyanying
From: Xiao Guangrong The current behavior of mmu_spte_update_no_track() does not match its _no_track() name, as the A/D bits are actually tracked and returned to the caller. This patch introduces a real _no_track() function that updates the spte regardless of the A/D bits and renames the original
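To make the naming complaint concrete, here is a minimal standalone contrast between a tracked and an untracked spte update (bit positions and names are assumptions used only for illustration, not the kernel code):

#include <stdint.h>
#include <stdio.h>

/* Illustrative bit positions (classic x86 PTE accessed/dirty bits). */
#define SPTE_ACCESSED (UINT64_C(1) << 5)
#define SPTE_DIRTY    (UINT64_C(1) << 6)

/* Tracked update: report the old entry's A/D bits back to the caller. */
static uint64_t spte_update_tracked(uint64_t *sptep, uint64_t new_spte)
{
    uint64_t old = *sptep;

    *sptep = new_spte;
    return old & (SPTE_ACCESSED | SPTE_DIRTY);
}

/* What a function named _no_track() is expected to do: just install the value. */
static void spte_update_no_track(uint64_t *sptep, uint64_t new_spte)
{
    *sptep = new_spte;
}

int main(void)
{
    uint64_t spte = SPTE_ACCESSED | SPTE_DIRTY;

    printf("A/D bits returned by tracked update: 0x%llx\n",
           (unsigned long long)spte_update_tracked(&spte, 0));
    spte_update_no_track(&spte, 0);
    return 0;
}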

[Qemu-devel] [PATCH 3/4] KVM: MMU: introduce kvm_mmu_write_protect_all_pages

2019-01-17 Thread Zhuangyanying
From: Xiao Guangrong The original idea is from Avi. kvm_mmu_write_protect_all_pages() is extremely fast at write-protecting all of the guest memory. Compared with the ordinary algorithm, which write-protects last-level sptes via the rmap one by one, it simply updates the generation number to

[Qemu-devel] [PATCH 0/4] KVM: MMU: fast cleanup D bit based on fast write protect

2019-01-17 Thread Zhuangyanying
From: Zhuang Yanying Recently I tested live migration with large-memory guests and found that a vcpu may hang for a long time while migration is starting, e.g. 9 s for 2048 GB (linux-5.0.0-rc2 + qemu-3.1.0). The reason is that memory_global_dirty_log_start() takes too long while the vcpu waits for the BQL. The

[Qemu-devel] [PATCH] KVM: MMU: fast cleanup D bit based on fast write protect

2019-01-12 Thread Zhuangyanying
From: Zhuang Yanying Recently I tested live migration with large-memory guests and found that a vcpu may hang for a long time while migration is starting, e.g. 9 s for 2048 GB (linux-4.20.1 + qemu-3.1.0). The reason is that memory_global_dirty_log_start() takes too long while the vcpu waits for the BQL. The

[Qemu-devel] [RFH]vcpu may hang for up to 4s while starting migration

2018-12-11 Thread Zhuangyanying
From: Zhuang Yanying Hi, recently I tested live migration of a VM with 1 TB of memory and found that a vcpu may hang for up to 4 s while migration is starting. The reason is that memory_global_dirty_log_start() takes too long while the vcpu waits for the BQL. migrate thread    vcpu

[Qemu-devel] [PATCH v3] KVM: x86: Fix nmi injection failure when vcpu got blocked

2017-05-25 Thread Zhuangyanying
From: ZhuangYanying <ann.zhuangyany...@huawei.com> When a spin_lock_irqsave() deadlock occurs inside the guest, the vcpu threads other than the lock-holding one enter the S (sleeping) state because of pvspinlock. If an NMI is then injected via the libvirt API "inject-nmi", the NMI could not be
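The symptom boils down to a blocked vcpu never being woken up for the queued NMI. A toy model of the wake-up condition involved follows; everything here is a stand-in used for illustration, not the patch or the KVM data structures.

#include <stdbool.h>
#include <stdio.h>

/* Toy state only; not the KVM vcpu structure. */
struct toy_vcpu {
    bool interrupt_pending;
    bool nmi_pending;
};

/* A parked (S-state) vcpu has to be woken for a pending NMI as well,
 * not only for a pending interrupt, or the injection is lost until
 * something else wakes it. */
static bool vcpu_has_wakeup_event(const struct toy_vcpu *v)
{
    return v->interrupt_pending || v->nmi_pending;
}

int main(void)
{
    struct toy_vcpu blocked = { .interrupt_pending = false, .nmi_pending = true };

    printf("wake vcpu: %d\n", vcpu_has_wakeup_event(&blocked));
    return 0;
}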

[Qemu-devel] [PATCH v2] KVM: x86: Fix nmi injection failure when vcpu got blocked

2017-05-25 Thread Zhuangyanying
From: ZhuangYanying <ann.zhuangyany...@huawei.com> When a spin_lock_irqsave() deadlock occurs inside the guest, the vcpu threads other than the lock-holding one enter the S (sleeping) state because of pvspinlock. If an NMI is then injected via the libvirt API "inject-nmi", the NMI could not be

Re: [Qemu-devel] [PATCH] Fix nmi injection failure when vcpu got blocked

2017-05-25 Thread Zhuangyanying
> -Original Message- > From: Radim Krčmář [mailto:rkrc...@redhat.com] > Sent: Wednesday, May 24, 2017 10:34 PM > To: Zhuangyanying > Cc: pbonz...@redhat.com; Herongguang (Stephen); qemu-devel@nongnu.org; > Gonglei (Arei); Zhangbo (Oscar); k...@vger.kernel.org > Sub

[Qemu-devel] [PATCH] Fix nmi injection failure when vcpu got blocked

2017-05-23 Thread Zhuangyanying
From: ZhuangYanying <ann.zhuangyany...@huawei.com> Recently I found that an NMI could not be injected into a VM via the libvirt API. To reproduce the problem: 1. use a RHEL 7.3 guest; 2. disable nmi_watchdog and trigger a spinlock deadlock inside the guest, then check the running vcpu thread and make sure it is not vcpu0; 3. inje

Re: [Qemu-devel] [BUG] Migration fails between boards with different PMC counts

2017-04-24 Thread Zhuangyanying
> -Original Message- > From: Daniel P. Berrange [mailto:berra...@redhat.com] > Sent: Monday, April 24, 2017 6:34 PM > To: Dr. David Alan Gilbert > Cc: Zhuangyanying; Zhanghailiang; wangxin (U); qemu-devel@nongnu.org; > Gonglei (Arei); Huangzhichao; pbonz...@redhat

[Qemu-devel] [BUG] Migration fails between boards with different PMC counts

2017-04-24 Thread Zhuangyanying
Hi all, recently I found that migration fails when vPMU is enabled. Migrating vPMU state was introduced in linux-3.10 + qemu-1.7. As long as vPMU is enabled, qemu will save/load the vmstate_msr_architectural_pmu (msr_global_ctrl) register during migration. But global_ctrl is generated based on cpuid(0xA),
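A simplified model of where the mismatch comes from (the helper below is illustrative, not the QEMU code): the architectural PMU advertises its general-purpose counter count in CPUID.0AH:EAX[15:8] and its fixed counter count in EDX[4:0], and the MSR_CORE_PERF_GLOBAL_CTRL enable mask is derived from those counts, so boards with different PMC counts yield different values and the value saved on the source no longer matches what the destination expects.

#include <stdint.h>
#include <stdio.h>

static uint64_t global_ctrl_mask(unsigned gp_counters, unsigned fixed_counters)
{
    /* GP counters enable bits 0..n-1, fixed counters enable bits 32.. */
    uint64_t gp    = (gp_counters    >= 64) ? ~UINT64_C(0)
                                            : (UINT64_C(1) << gp_counters) - 1;
    uint64_t fixed = (fixed_counters >= 32) ? ~UINT64_C(0)
                                            : (UINT64_C(1) << fixed_counters) - 1;

    return gp | (fixed << 32);
}

int main(void)
{
    /* e.g. source board: 4 GP + 3 fixed counters, destination: 8 GP + 3 fixed */
    printf("source mask: 0x%llx\n", (unsigned long long)global_ctrl_mask(4, 3));
    printf("dest   mask: 0x%llx\n", (unsigned long long)global_ctrl_mask(8, 3));
    return 0;
}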

[Qemu-devel] [PATCH] ipmi: fix qemu crash while migrating with ipmi

2016-11-18 Thread Zhuangyanying
From: ZhuangYanying <ann.zhuangyany...@huawei.com> QEMU crashes on the source side while migrating, after the ipmi service is started inside the VM. ./x86_64-softmmu/qemu-system-x86_64 --enable-kvm -smp 4 -m 4096 \ -drive file=/work/suse/suse11_sp3_64_vt,format=raw,if=none,id=drive-virtio-disk0,cach

[Qemu-devel] [PATCH v3] ivshmem: Fix 64 bit memory bar configuration

2016-11-17 Thread Zhuangyanying
From: Zhuang Yanying The device ivshmem property use64=0 is designed to make the device expose a 32 bit shared memory BAR instead of a 64 bit one. The default is a 64 bit BAR, except that pc-1.2 and older retain a 32 bit BAR. A 32 bit BAR can support only up
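A standalone sketch of what use64 is supposed to select (the constants below carry the standard PCI BAR type bits; the helper itself is an illustrative stand-in, not the ivshmem code): with the 64 bit type bit set, the shared-memory BAR can be placed above 4 GiB and can therefore back large shared memory regions.

#include <stdint.h>
#include <stdio.h>

#define PCI_BASE_ADDRESS_SPACE_MEMORY  0x00
#define PCI_BASE_ADDRESS_MEM_TYPE_64   0x04   /* 64 bit BAR, may live above 4 GiB */
#define PCI_BASE_ADDRESS_MEM_PREFETCH  0x08

static uint8_t ivshmem_bar_attr(int use64)
{
    uint8_t attr = PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_PREFETCH;

    if (use64) {
        attr |= PCI_BASE_ADDRESS_MEM_TYPE_64;  /* 64 bit BAR: large shm is fine */
    }
    return attr;                               /* otherwise: legacy 32 bit BAR */
}

int main(void)
{
    printf("use64=0 -> BAR attr 0x%02x\n", ivshmem_bar_attr(0));
    printf("use64=1 -> BAR attr 0x%02x\n", ivshmem_bar_attr(1));
    return 0;
}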

[Qemu-devel] [PATCH v2 1/2] ivshmem: fix misconfig of not_legacy_32bit

2016-11-15 Thread Zhuangyanying
From: ZhuangYanying <ann.zhuangyany...@huawei.com> After commit 5400c02, ivshmem_64bit was renamed to not_legacy_32bit and the implementation of this property changed. Now with use64 = 1 and ~PCI_BASE_ADDRESS_MEM_TYPE_64 (the default for ivshmem), what is actually used is the legacy model, which can not support g

[Qemu-devel] [PATCH v2 0/2] ivshmem: fix misconfig of not_legacy_32bit

2016-11-15 Thread Zhuangyanying
From: ZhuangYanying <ann.zhuangyany...@huawei.com> Recently I tested ivshmem and found that the implementation of use64, i.e. not_legacy_32bit, is odd, or even the opposite of what is intended. Previously, use64 = ivshmem_64bit = 1 meant attr |= PCI_BASE_ADDRESS_MEM_TYPE_64, so ivshmem could support 1G and above packaged int

[Qemu-devel] [PATCH v2 2/2] ivshmem: set not_legacy_32bit to 1 for ivshmem-doorbell and ivshmem-plain

2016-11-15 Thread Zhuangyanying
From: ZhuangYanying <ann.zhuangyany...@huawei.com> Signed-off-by: Zhuang Yanying <ann.zhuangyany...@huawei.com> --- hw/misc/ivshmem.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/hw/misc/ivshmem.c b/hw/misc/ivshmem.c index b897685..abeaf3d 100644 --- a/hw/misc/ivshmem.c +

[Qemu-devel] [PATCH] hw/misc/ivshmem:fix misconfig of not_legacy_32bit

2016-11-14 Thread Zhuangyanying
From: ZhuangYanying <ann.zhuangyany...@huawei.com> After "ivshmem: Split ivshmem-plain, ivshmem-doorbell off ivshmem", ivshmem_64bit was renamed to not_legacy_32bit and the implementation of this property changed. Now with use64 = not_legacy_32bit = 1, the PCI attribu

[Qemu-devel] [PATCH] target-i386/machine: fix migration failure because of Hyper-V HV_X64_MSR_VP_RUNTIME

2016-11-04 Thread Zhuangyanying
From: ZhuangYanying <ann.zhuangyany...@huawei.com> Hyper-V HV_X64_MSR_VP_RUNTIME was introduced in linux-4.4 + qemu-2.5. As long as the KVM module supports it, qemu will save/load the vmstate_msr_hyperv_runtime register during migration. Regardless of whether the hyperv_r
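As far as the truncated summary above suggests, the complaint is that this state is migrated regardless of whether the Hyper-V runtime feature was actually enabled for the guest. A toy model of the gate being argued for follows; the names and the exact condition are assumptions for illustration, not QEMU's vmstate code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct toy_vcpu_state {
    bool     hyperv_runtime_enabled;  /* the hv_runtime cpu flag on or off */
    uint64_t msr_hv_runtime;          /* value reported by KVM, may be nonzero anyway */
};

/* Only put the MSR on the wire when the feature was really enabled,
 * so a destination without the feature never sees the section. */
static bool hyperv_runtime_section_needed(const struct toy_vcpu_state *s)
{
    return s->hyperv_runtime_enabled && s->msr_hv_runtime != 0;
}

int main(void)
{
    struct toy_vcpu_state s = { .hyperv_runtime_enabled = false,
                                .msr_hv_runtime = 12345 };

    printf("send section: %d\n", hyperv_runtime_section_needed(&s));
    return 0;
}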