On Fri, Sep 19, 2008 at 06:22:52PM -0700, Avi Kivity wrote:
> Instead of private, have an object contain both callback and private  
> data, and use container_of().  Reduces the chance of type errors.

OK.

>> +    while (parent->unsync_children) {
>> +            for (i = 0; i < PT64_ENT_PER_PAGE; ++i) {
>> +                    u64 ent = sp->spt[i];
>> +
>> +                    if (is_shadow_present_pte(ent)) {
>> +                            struct kvm_mmu_page *child;
>> +                            child = page_header(ent & PT64_BASE_ADDR_MASK);
>
> What does this do?

Walks all children of the given page, inefficiently. It's replaced later
by the bitmap version.

>> +static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>> +{
>> +    if (sp->role.glevels != vcpu->arch.mmu.root_level) {
>> +            kvm_mmu_zap_page(vcpu->kvm, sp);
>> +            return 1;
>> +    }
>>   
>
> Suppose we switch to real mode, touch a pte, switch back.  Is this handled?

The shadow page will go unsync on a pte touch and will be resynced as
soon as it's visible again (after the return to paging).

Or, while still in real mode, it might be zapped by
kvm_mmu_get_page->kvm_sync_page.

Am I missing something?

>> @@ -991,8 +1066,18 @@ static struct kvm_mmu_page *kvm_mmu_get_
>>               gfn, role.word);
>>      index = kvm_page_table_hashfn(gfn);
>>      bucket = &vcpu->kvm->arch.mmu_page_hash[index];
>> -    hlist_for_each_entry(sp, node, bucket, hash_link)
>> -            if (sp->gfn == gfn && sp->role.word == role.word) {
>> +    hlist_for_each_entry_safe(sp, node, tmp, bucket, hash_link)
>> +            if (sp->gfn == gfn) {
>> +                    if (sp->unsync)
>> +                            if (kvm_sync_page(vcpu, sp))
>> +                                    continue;
>> +
>> +                    if (sp->role.word != role.word)
>> +                            continue;
>> +
>> +                    if (sp->unsync_children)
>> +                            vcpu->arch.mmu.need_root_sync = 1;
>>   
>
> mmu_reload() maybe?

Hum, will think about it.

>>  static int kvm_mmu_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp)
>> -    return 0;
>> +    return ret;
>>  }
>>   
>
> Why does the caller care if zap also zapped some other random pages?  To  
> restart walking the list?

Yes. The next element that for_each_entry_safe saved could have been zapped.

>> +    /* don't unsync if pagetable is shadowed with multiple roles */
>> +    hlist_for_each_entry_safe(s, node, n, bucket, hash_link) {
>> +            if (s->gfn != sp->gfn || s->role.metaphysical)
>> +                    continue;
>> +            if (s->role.word != sp->role.word)
>> +                    return 1;
>> +    }
>>   
>
> This will happen for nonpae paging.  But why not allow it?  Zap all  
> unsynced pages on mode switch.
>
> Oh, if a page is both a page directory and page table, yes.  

Yes. 

> So to allow nonpae oos, check the level instead.

Windows 2008 64-bit also shares pagetables at multiple levels in all
sorts of ways.
