I do not know why I was removed from the list.
On 05/19/2017 04:09 PM, Jay Zhou wrote:
Hi Paolo and Wanpeng,
On 2017/5/17 16:38, Wanpeng Li wrote:
2017-05-17 15:43 GMT+08:00 Paolo Bonzini <pbonz...@redhat.com>:
Recently, I have tested the performance before migration and after a migration
failure using SPEC CPU2006 (https://www.spec.org/cpu2006/), a standard
performance evaluation tool.
These are the steps:
======
(1) the version of kmod is 4.4.11 (slightly modified) and the version of
qemu is 2.6.0 (slightly modified); the kmod has the following patch applied:
diff --git a/source/x86/x86.c b/source/x86/x86.c
index 054a7d3..75a4bb3 100644
--- a/source/x86/x86.c
+++ b/source/x86/x86.c
@@ -8550,8 +8550,10 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	 */
 	if ((change != KVM_MR_DELETE) &&
 	    (old->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
-	    !(new->flags & KVM_MEM_LOG_DIRTY_PAGES))
-		kvm_mmu_zap_collapsible_sptes(kvm, new);
+	    !(new->flags & KVM_MEM_LOG_DIRTY_PAGES)) {
+		printk(KERN_ERR "zj make KVM_REQ_MMU_RELOAD request\n");
+		kvm_make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
+	}
 
 	/*
 	 * Set up write protection and/or dirty logging for the new slot.
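For context on what this changes: instead of zapping only the collapsible
sptes, every vCPU is asked to reload its MMU, so all sptes are dropped and
rebuilt on subsequent faults. A rough paraphrase of how vcpu_enter_guest()
in arch/x86/kvm/x86.c serviced KVM_REQ_MMU_RELOAD in kernels of that era
(a sketch from memory, not a verbatim quote):

/* sketch, paraphrased from vcpu_enter_guest(); not verbatim upstream code */
if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
	kvm_mmu_unload(vcpu);	/* drop all roots and sptes */

/* ... other request processing ... */

r = kvm_mmu_reload(vcpu);	/* rebuild the root before guest entry */
if (unlikely(r))
	goto out;
/* sptes then repopulate lazily on #PF, which is where huge mappings
 * can be re-established once dirty logging is off */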
Try these modifications to the setup:
1) set up 1G hugetlbfs hugepages and use those for the guest's memory (a
minimal mapping sketch follows below)
2) test both without and with the above patch.
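Not from the original thread, just to make step 1) concrete: a minimal
user-space sketch of mapping one 1G huge page from a hugetlbfs mount and
prefaulting it, which is essentially what QEMU does for guest RAM when
pointed at such a mount. The mount point and file name are assumptions.

/* sketch: map and prefault one 1G hugetlbfs page; assumes a hugetlbfs
 * mount with pagesize=1G at /dev/hugepages-1G (path is an assumption) */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const size_t sz = 1UL << 30;	/* one 1G huge page */
	int fd = open("/dev/hugepages-1G/guest-ram", O_CREAT | O_RDWR, 0600);
	if (fd < 0) { perror("open"); return 1; }
	void *p = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); return 1; }
	memset(p, 0, sz);		/* prefault so the page is resident */
	munmap(p, sz);
	close(fd);
	unlink("/dev/hugepages-1G/guest-ram");
	return 0;
}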
To rule out random memory-allocation effects, I reran the test cases:
(1) setup: start a 4U10G VM (4 vCPUs, 10G of memory) with its memory
preallocated; each vCPU is pinned to a pCPU, and all of these resources
(memory and pCPUs) come from NUMA node 0 (see the allocation/pinning
sketch after step (2))
(2) sequence: first, I run 429.mcf of SPEC CPU2006 before migration and
record a result. Then a migration failure is induced. Finally, I run the
test case again and record another result.
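As an aside (my illustration, not the poster's actual commands): the node-0
preallocation and pinning in step (1) can be expressed with libnuma plus
sched_setaffinity; libvirt/QEMU achieve the same through vcpupin and
memory-backend policies. Sizes and CPU ids here are assumptions.

/* sketch: pin the calling thread to pCPU 0 and preallocate 1G of node-0
 * memory, mirroring "memory and pcpus all from NUMA node 0".
 * Build with: cc pin.c -lnuma */
#define _GNU_SOURCE
#include <numa.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	cpu_set_t set;
	size_t sz = 1UL << 30;
	void *p;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	CPU_ZERO(&set);
	CPU_SET(0, &set);	/* the pCPU a vCPU thread would be pinned to */
	if (sched_setaffinity(0, sizeof(set), &set) != 0) {
		perror("sched_setaffinity");
		return 1;
	}

	p = numa_alloc_onnode(sz, 0);	/* allocate from node 0 */
	if (!p) {
		fprintf(stderr, "numa_alloc_onnode failed\n");
		return 1;
	}
	memset(p, 0, sz);	/* touch it so it is actually resident */
	numa_free(p, sz);
	return 0;
}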
I guess this case purely writes the memory, which means the read-only
mappings will always be dropped by #PF and huge mappings then
re-established. If you benchmarked memory reads, you should observe the
difference.
Thanks!
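To make the write-vs-read point above testable (my sketch, not part of the
thread): time a write sweep against a read sweep over a large buffer inside
the guest. Only the writes fault on write-protected sptes, so with the
reload patch they are what rebuilds huge mappings; a read-dominated sweep
would keep paying the small-page cost after a failed migration.

/* sketch: touch every 4K page of a 1G buffer, once writing and once
 * reading, and compare sweep times (buffer size is an assumption) */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now_sec(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	size_t sz = 1UL << 30;
	volatile uint8_t *buf = malloc(sz);
	uint64_t sink = 0;
	double t0, t1, t2;
	size_t i;

	if (!buf)
		return 1;

	t0 = now_sec();
	for (i = 0; i < sz; i += 4096)
		buf[i] = 1;	/* writes: fault on read-only sptes */
	t1 = now_sec();

	for (i = 0; i < sz; i += 4096)
		sink += buf[i];	/* reads: no write fault taken */
	t2 = now_sec();

	printf("write sweep %.3fs, read sweep %.3fs (sink=%llu)\n",
	       t1 - t0, t2 - t1, (unsigned long long)sink);
	return 0;
}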