On 06/29/2011 07:18 PM, Avi Kivity wrote:
> On 06/29/2011 02:16 PM, Xiao Guangrong wrote:
>> >>  @@ -1767,6 +1874,14 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
>> >>
>> >>        kvm_flush_remote_tlbs(kvm);
>> >>
>> >>  +    if (atomic_read(&kvm->arch.reader_counter)) {
>> >>  +        kvm_mmu_isolate_pages(invalid_list);
>> >>  +        sp = list_first_entry(invalid_list, struct kvm_mmu_page, link);
>> >>  +        list_del_init(invalid_list);
>> >>  +        call_rcu(&sp->rcu, free_pages_rcu);
>> >>  +        return;
>> >>  +    }
>> >>  +
>> >
>> >  I think we should do this unconditionally.  The cost of ping-ponging the 
>> > shared cache line containing reader_counter will increase with large smp 
>> > counts.  On the other hand, zap_page is very rare, so it can be a little 
>> > slower.  Also, fewer code paths = easier to understand.
>> >
>>
>> With soft mmu, zap_page happens very frequently; it caused a performance 
>> regression in my test.
> 
> Any idea what the cause of the regression is?  It seems to me that simply 
> deferring freeing shouldn't have a large impact.
> 

I guess it is because pages are freed too frequently. I ran a test, and it
shows that about 3219 pages are freed per second.

Kernbench performance comparison:

the original way: 3m27.723
freeing all shadow pages in RCU context: 3m30.519
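
For reference, the two paths being compared are roughly the following. This is
only a sketch, not the exact code from the series: kvm_mmu_isolate_pages() is
assumed to have already unhashed the pages, the pages are assumed to stay
chained through sp->link after the invalid_list head is detached, and
kvm_mmu_free_page(sp) is assumed to just release one shadow page without
touching the list.

	/*
	 * Original way: reclaim every zapped page immediately, while
	 * kvm_mmu_commit_zap_page() is still running.
	 */
	while (!list_empty(invalid_list)) {
		sp = list_first_entry(invalid_list, struct kvm_mmu_page, link);
		list_del(&sp->link);
		kvm_mmu_free_page(sp);
	}

	/*
	 * RCU way (the hunk quoted above): detach the whole chain and defer
	 * the actual freeing until a grace period has elapsed, so lockless
	 * readers that bumped reader_counter can still walk the pages.
	 * Sketch of the callback passed to call_rcu():
	 */
	static void free_pages_rcu(struct rcu_head *head)
	{
		struct kvm_mmu_page *sp, *next;

		sp = container_of(head, struct kvm_mmu_page, rcu);
		while (sp) {
			/* Pick up the next page before sp is unlinked and freed. */
			next = list_empty(&sp->link) ? NULL :
			       list_first_entry(&sp->link, struct kvm_mmu_page, link);
			list_del(&sp->link);
			kvm_mmu_free_page(sp);
			sp = next;
		}
	}

Note that the callback cannot take a struct kvm pointer, so whatever frees a
single page there has to work from the kvm_mmu_page alone.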