Marcelo Tosatti wrote:
> On Mon, May 03, 2010 at 09:38:54PM +0800, Gui Jianfeng wrote:
>> Hi Marcelo
>>
>> Actually, it doesn't only affect kvm_mmu_change_mmu_pages() but also
>> kvm_mmu_remove_some_alloc_mmu_pages(), which is called by the mmu
>> shrink routine. This causes the upper layer to get a wrong number, so
>> I think this should be fixed. Here is an updated version.
>>
>> ---
>> From: Gui Jianfeng <guijianf...@cn.fujitsu.com>
>>
>> Currently, in kvm_mmu_change_mmu_pages(kvm, page), "used_pages--" is
>> performed after calling kvm_mmu_zap_page() regardless of whether
>> "page" was actually reclaimed, but a root sp won't be reclaimed by
>> kvm_mmu_zap_page(). So it makes more sense for kvm_mmu_zap_page() to
>> return the total number of reclaimed sps. A new flag is passed to
>> kvm_mmu_zap_page() to indicate whether the top page was reclaimed.
>> kvm_mmu_remove_some_alloc_mmu_pages() also relies on
>> kvm_mmu_zap_page() returning the total reclaimed count.
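>>
>> To illustrate the problem (paraphrasing the current loop, not the
>> exact source), kvm_mmu_change_mmu_pages() assumes the zap always
>> frees the page it picked, which it doesn't for a pinned root sp:
>>
>>     while (used_pages > kvm_nr_mmu_pages) {
>>             struct kvm_mmu_page *page;
>>
>>             page = container_of(kvm->arch.active_mmu_pages.prev,
>>                                 struct kvm_mmu_page, link);
>>             kvm_mmu_zap_page(kvm, page);
>>             used_pages--;   /* overcounts if "page" was a root sp */
>>     }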
> 
> Isn't it simpler to have kvm_mmu_zap_page() return the number of pages
> it actually freed? Then always restart the hash walk if the return is
> positive.
> 

OK. Although in some cases we might encounter an unneeded hash-walk
restart, that's not a big problem. I don't object to this solution and
will post a new patch.
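
Roughly what I have in mind (untested sketch, so the final patch may
differ in detail): kvm_mmu_zap_page() returns the number of sps it
actually freed, kvm_mmu_change_mmu_pages() subtracts that instead of
blindly decrementing, and hash walkers restart on a positive return:

    /* accounting loop: trust the real count */
    while (used_pages > kvm_nr_mmu_pages) {
            page = container_of(kvm->arch.active_mmu_pages.prev,
                                struct kvm_mmu_page, link);
            used_pages -= kvm_mmu_zap_page(kvm, page);
    }

    /* hash walkers (e.g. kvm_mmu_unprotect_page()): if anything was
     * freed, the bucket may have changed under us, so start over */
    restart:
    hlist_for_each_entry_safe(sp, node, nn, bucket, hash_link)
            if (sp->gfn == gfn && !sp->role.direct
                && kvm_mmu_zap_page(kvm, sp))
                    goto restart;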

Thanks,
Gui
