On Sat, Apr 04, 2009 at 01:37:39PM +0300, Avi Kivity wrote:
>> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
>> index 09782a9..728be72 100644
>> --- a/arch/x86/kvm/paging_tmpl.h
>> +++ b/arch/x86/kvm/paging_tmpl.h
>> @@ -308,8 +308,14 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
>> break;
>> }
>> - if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep))
>> + if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep)) {
>> + if (level-1 == PT_PAGE_TABLE_LEVEL) {
>> + shadow_page = page_header(__pa(sptep));
>> + if (shadow_page->unsync && shadow_page->global)
>> + kvm_sync_page(vcpu, shadow_page);
>> + }
>> continue;
>> + }
>> if (is_large_pte(*sptep)) {
>> rmap_remove(vcpu->kvm, sptep);
>>
>
> But here the shadow page is already linked? Isn't the root cause that
> an invlpg was called when the page wasn't linked, so it wasn't seen by
> invlpg?
>
> So I thought the best place would be in fetch(), after
> kvm_mmu_get_page(). If we're linking a page which contains global ptes,
> they might be unsynced due to invlpgs that we've missed.
>
> Or am I missing something about the root cause?
The problem is when the page is unreachable because a higher-level path
has been unlinked. Say:

level 4 -> level 3 . level 2 -> level 1 (global unsync)

The dot there means level 3 is not linked to level 2, so invlpg, which
walks down from the root, can't reach the global unsync page at level 1.
kvm_mmu_get_page does sync pages when it finds them, so the code is
already safe for the "linking a page which contains global ptes" case
you mention above.