On Fri, Aug 29, 2008 at 02:49:38PM +0300, Avi Kivity wrote:
> Yang, Sheng wrote:
>> From: Sheng Yang <[EMAIL PROTECTED]>
>> Date: Fri, 29 Aug 2008 14:02:29 +0800
>> Subject: [PATCH] KVM: MMU: Add shadow_accessed_shift
>>
>
>> static u64 __read_mostly shadow_dirty_mask;
>>
>> void kvm_mmu_set_nonpresent_ptes(u64 trap_pte, u64 notrap_pte)
>> @@ -171,6 +172,8 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64
>> accessed_mask,
>> {
>> shadow_user_mask = user_mask;
>> shadow_accessed_mask = accessed_mask;
>> + shadow_accessed_shift = find_first_bit((unsigned long *)&accessed_mask,
>> + sizeof(accessed_mask));
>>
>
> Ah, I thought ept accessed_mask was zero.
Ah... Indeed I gave a FAKE_ACCESSED_MASK (1 << 62) to EPT when the patch was
checked in. I just wanted to keep the behaviour consistent between EPT and
shadow paging rather than checking "if xxx_mask == 0" all over the place.
I thought the FAKE bit could be set when the EPT entry is new (and maybe
under some rare conditions below), but I neglected the check in
kvm_mmu_access_page() below when I added those fake bits...
>
>> shadow_dirty_mask = dirty_mask;
>> shadow_nx_mask = nx_mask;
>> shadow_x_mask = x_mask;
>> @@ -709,10 +712,10 @@ static int kvm_age_rmapp(struct kvm *kvm,
>> unsigned long *rmapp)
>> int _young;
>> u64 _spte = *spte;
>> BUG_ON(!(_spte & PT_PRESENT_MASK));
>> - _young = _spte & PT_ACCESSED_MASK;
>> + _young = _spte & shadow_accessed_mask;
>>
>
> _young is an int, so will be set to zero on ept due to truncation.
>
>> if (_young) {
>> young = 1;
>> - clear_bit(PT_ACCESSED_SHIFT, (unsigned long *)spte);
>> + clear_bit(shadow_accessed_shift, (unsigned long *)spte);
>> }
>> spte = rmap_next(kvm, rmapp, spte);
>> }
>>
>
> Even if the former is fixed, what does it mean? guest memory will never
> be considered young by Linux, and so will be swapped sooner.
>
> Maybe we ought to cheat in the other direction and return young = 1 for ept.
If I understand correctly, I can't see any point at which we could return
young = 1 except right after an EPT entry is created. For EPT we would
return 0 most of the time once this clear_bit has run.
>
>> @@ -1785,7 +1788,7 @@ static void kvm_mmu_access_page(struct kvm_vcpu
>> *vcpu, gfn_t gfn)
>> && shadow_accessed_mask
>>
>
> Ah, this is to disable this check for ept (I thought accessed_mask was
> zero; why?)
My fault. I overlooked this from the beginning... The check here no longer
fits EPT, since shadow_accessed_mask is no longer zero for EPT. I will
remove it.
>
>> && !(*spte & shadow_accessed_mask)
>> && is_shadow_present_pte(*spte))
>> - set_bit(PT_ACCESSED_SHIFT, (unsigned long *)spte);
>> + set_bit(shadow_accessed_shift, (unsigned long *)spte);
>> }
>>
>
> ... but in any case I don't think this code path is ever hit? maybe
> doing a movsb from mmio to RAM.
Thanks... movsb seems plausible; maybe I just kept this path "in case"... :)
--
regards
Yang, Sheng
>
> --
> I have a truly marvellous patch that fixes the bug which this
> signature is too narrow to contain.
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to [EMAIL PROTECTED]
> More majordomo info at http://vger.kernel.org/majordomo-info.html