On 03/06/2011 12:48 AM, Marcelo Tosatti wrote:
> On Fri, Mar 04, 2011 at 06:57:27PM +0800, Xiao Guangrong wrote:
>> Do not walk to the next level if the spte maps a large page
>>
>> Signed-off-by: Xiao Guangrong <xiaoguangr...@cn.fujitsu.com>
>> ---
>>  arch/x86/kvm/mmu.c |    3 ++-
>>  1 files changed, 2 insertions(+), 1 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index b9bf016..10e0982 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -3819,7 +3819,8 @@ int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4])
>>      for_each_shadow_entry(vcpu, addr, iterator) {
>>              sptes[iterator.level-1] = *iterator.sptep;
>>              nr_sptes++;
>> -            if (!is_shadow_present_pte(*iterator.sptep))
>> +            if (!is_shadow_present_pte(*iterator.sptep) ||
>> +                  is_last_spte(*iterator.sptep, iterator.level))
>>                      break;
>>      }
>>      spin_unlock(&vcpu->kvm->mmu_lock);
> 
> shadow_walk_okay covers that case.
> 

We can still get a large-mapping pte inside the loop, since shadow_walk_okay() only checks for it at level 1:

static bool shadow_walk_okay(struct kvm_shadow_walk_iterator *iterator)
{
        if (iterator->level < PT_PAGE_TABLE_LEVEL)
                return false;

        if (iterator->level == PT_PAGE_TABLE_LEVEL)
                if (is_large_pte(*iterator->sptep))
                        return false;

        ......
}

If level > 1 and the pte's PSE bit is set, shadow_walk_okay() still returns true, so the walk descends below the large mapping.
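
For reference, the is_last_spte()/is_large_pte() that the patch uses look like this (quoted from memory, please check against your tree):

static int is_large_pte(u64 pte)
{
        return pte & PT_PAGE_SIZE_MASK;
}

static int is_last_spte(u64 pte, int level)
{
        if (level == PT_PAGE_TABLE_LEVEL)
                return 1;
        if (is_large_pte(pte))
                return 1;
        return 0;
}

so the caller has to test is_last_spte() itself if it wants to stop at a large mapping above level 1.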

Also, I think this check is useless:
        if (iterator->level == PT_PAGE_TABLE_LEVEL)
                if (is_large_pte(*iterator->sptep))
                        return false;
since at level 1, pte bit 7 is the PAT bit, not PSE.
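
(For reference, is_large_pte() above just tests bit 7; the mask, again quoted from memory:

#define PT_PAGE_SIZE_MASK       (1ULL << 7)     /* PS bit, meaningful only above level 1 */

At level 1 the hardware defines bit 7 as PAT, so this branch can only ever match a PAT bit.)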