On 09/12/2012 10:29 PM, Avi Kivity wrote:
> walk_addr_generic() permission checks are a maze of branchy code, which is
> performed four times per lookup.  It depends on the type of access, efer.nxe,
> cr0.wp, cr4.smep, and in the near future, cr4.smap.
> 
> Optimize this away by precalculating all variants and storing them in a
> bitmap.  The bitmap is recalculated when rarely-changing variables change
> (cr0, cr4) and is indexed by the often-changing variables (page fault error
> code, pte access permissions).

Very elegant!
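To make sure I follow the scheme, here is a quick standalone userspace model of it. The mask values mirror the kernel's PFERR_*_MASK / ACC_*_MASK encodings, and the final `fault = ...` line is my reading of the part of the patch elided below; both are assumptions of this sketch, not the patch itself:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed values, mirroring the kernel's PFERR_ / ACC_ encodings. */
#define PFERR_WRITE_MASK  (1u << 1)
#define PFERR_USER_MASK   (1u << 2)
#define PFERR_FETCH_MASK  (1u << 4)

#define ACC_EXEC_MASK  1u
#define ACC_WRITE_MASK 2u
#define ACC_USER_MASK  4u

/* One byte per pfec >> 1; bit n is "fault" for pte access combination n. */
static uint8_t permissions[16];

static void update_permission_bitmask(bool nx, bool wp, bool smep)
{
	unsigned bit, byte, pfec;
	bool fault, x, w, u, wf, uf, ff;

	for (byte = 0; byte < 16; ++byte) {
		uint8_t map = 0;

		pfec = byte << 1;
		wf = pfec & PFERR_WRITE_MASK;
		uf = pfec & PFERR_USER_MASK;
		ff = pfec & PFERR_FETCH_MASK;
		for (bit = 0; bit < 8; ++bit) {
			x = bit & ACC_EXEC_MASK;
			w = bit & ACC_WRITE_MASK;
			u = bit & ACC_USER_MASK;

			x |= !nx;            /* !efer.nxe: everything executable */
			w |= !wp && !uf;     /* !cr0.wp: supervisor writes to RO pages */
			x &= !(smep && !uf); /* cr4.smep, as posted (but see below) */

			/* My guess at the elided tail of the loop body: */
			fault = (ff && !x) || (wf && !w) || (uf && !u);
			map |= fault << bit;
		}
		permissions[byte] = map;
	}
}

/* The walker then reduces the four branchy checks to one table lookup: */
static bool permission_fault(unsigned pte_access, unsigned pfec)
{
	return (permissions[pfec >> 1] >> pte_access) & 1;
}
```

So each of the four levels of the walk costs one shift and one AND, which is indeed branch-free.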

> 
> The result is short, branch-free code.
> 
> Signed-off-by: Avi Kivity <[email protected]>

> +static void update_permission_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
> +{
> +     unsigned bit, byte, pfec;
> +     u8 map;
> +     bool fault, x, w, u, wf, uf, ff, smep;
> +
> +     smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP);
> +     for (byte = 0; byte < ARRAY_SIZE(mmu->permissions); ++byte) {
> +             pfec = byte << 1;
> +             map = 0;
> +             wf = pfec & PFERR_WRITE_MASK;
> +             uf = pfec & PFERR_USER_MASK;
> +             ff = pfec & PFERR_FETCH_MASK;
> +             for (bit = 0; bit < 8; ++bit) {
> +                     x = bit & ACC_EXEC_MASK;
> +                     w = bit & ACC_WRITE_MASK;
> +                     u = bit & ACC_USER_MASK;
> +
> +                     /* Not really needed: !nx will cause pte.nx to fault */
> +                     x |= !mmu->nx;
> +                     /* Allow supervisor writes if !cr0.wp */
> +                     w |= !is_write_protection(vcpu) && !uf;
> +                     /* Disallow supervisor fetches if cr4.smep */
> +                     x &= !(smep && !uf);

With smep, supervisor mode can still fetch from a page whose pte.u == 0;
smep only forbids supervisor fetches from user pages. So shouldn't this be
x &= !(smep && !uf && u)?
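To make the difference concrete, a tiny standalone check (the function names are mine, purely for illustration):

```c
#include <stdbool.h>

/*
 * Is a fetch allowed?  x = pte executable, smep = cr4.smep,
 * uf = fetch performed from user mode, u = pte.u.
 */
static bool exec_allowed_as_posted(bool x, bool smep, bool uf)
{
	return x && !(smep && !uf);      /* blocks *every* supervisor fetch */
}

static bool exec_allowed_suggested(bool x, bool smep, bool uf, bool u)
{
	return x && !(smep && !uf && u); /* blocks only fetches from user pages */
}
```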

> @@ -3672,20 +3672,18 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
>                               gpa_t *gpa, struct x86_exception *exception,
>                               bool write)
>  {
> -     u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
> +     u32 access = ((kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0)
> +             | (write ? PFERR_WRITE_MASK : 0);
> +     u8 bit = vcpu->arch.access;
> 
> -     if (vcpu_match_mmio_gva(vcpu, gva) &&
> -               check_write_user_access(vcpu, write, access,
> -               vcpu->arch.access)) {
> +     if (vcpu_match_mmio_gva(vcpu, gva)
> +         && ((vcpu->arch.walk_mmu->permissions[access >> 1] >> bit) & 1)) {

Shouldn't this be negated, since the bitmap encodes faults:
!((vcpu->arch.walk_mmu->permissions[access >> 1] >> bit) & 1) ?

Would it be better to introduce a helper function to do the permission check?
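Something like this, say (the struct is cut down to the one relevant field, and the helper's name and signature are only a suggestion):

```c
#include <stdbool.h>
#include <stdint.h>

/* Cut-down stand-in for the relevant field of struct kvm_mmu. */
struct kvm_mmu {
	uint8_t permissions[16];
};

/*
 * Hypothetical helper: returns true if an access with page fault error
 * code @pfec to a pte with access bits @pte_access would fault.
 */
static inline bool permission_fault(struct kvm_mmu *mmu,
				    unsigned pte_access, unsigned pfec)
{
	return (mmu->permissions[pfec >> 1] >> pte_access) & 1;
}
```

Then the call site above reads `!permission_fault(vcpu->arch.walk_mmu, vcpu->arch.access, access)`, which also makes the polarity of the bitmap hard to get wrong.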

