On Wed, Nov 26, 2008 at 01:12:20PM +0200, Avi Kivity wrote:
> Marcelo Tosatti wrote:
>>
>> Here it goes.
>>
>> KVM: MMU: optimize set_spte for page sync
>>
>> The write protect verification in set_spte is unnecessary for page sync.
>>
>> It is guaranteed that, if the unsync spte was writable, the target page
>> does not have a write-protected shadow (if it had, the spte would have
>> been write protected under mmu_lock by rmap_write_protect before).
>>
>> The same reasoning applies to mark_page_dirty: the gfn has already been
>> marked dirty via the page-fault path.
>>
>> The cost of the hash table and memslot lookups is quite significant if
>> the workload is pagetable-write intensive, resulting in increased
>> mmu_lock contention.
>>
>>   
>
> Applied, thanks.

Avi,

Do you have any objections to submitting this patch for 2.6.28? The hash
lookup kills performance for workloads that are pagetable-write and
context-switch intensive.
