On Wed, Jun 10, 2009 at 12:21:05PM +0300, Avi Kivity wrote:
> Avi Kivity wrote:
>> Marcelo Tosatti wrote:
>>> This way there is no need to add explicit checks in every
>>> for_each_shadow_entry user.
>>>
>>> Signed-off-by: Marcelo Tosatti <[email protected]>
>>>
>>> Index: kvm/arch/x86/kvm/mmu.c
>>> ===================================================================
>>> --- kvm.orig/arch/x86/kvm/mmu.c
>>> +++ kvm/arch/x86/kvm/mmu.c
>>> @@ -1273,6 +1273,11 @@ static bool shadow_walk_okay(struct kvm_
>>>  {
>>>      if (iterator->level < PT_PAGE_TABLE_LEVEL)
>>>          return false;
>>> +
>>> +    if (iterator->level == PT_PAGE_TABLE_LEVEL)
>>> +        if (is_large_pte(*iterator->sptep))
>>> +            return false;
>>>
>>>   
>> s/==/>/?
>>
>
> Ah, it's actually fine.  But changing == to >= will make it 1GB-page ready.

Humpf, better to check the level explicitly before interpreting bit 7, so
let's skip this for 1GB pages.
