On Wednesday 12 May 2010 14:59:14 Avi Kivity wrote:
> On 05/12/2010 09:33 AM, Sheng Yang wrote:
> > Only modifying certain bits of CR0/CR4 requires a paging mode switch.
> >
> > Add update_rsvd_bits_mask() to handle updating the reserved bits mask
> > when EFER.NX changes.
> >
> >
> > @@ -2335,6 +2335,19 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
> >  	}
> >  }
> >
> > +void update_rsvd_bits_mask(struct kvm_vcpu *vcpu)
> > +{
> > +	if (!is_paging(vcpu))
> > +		return;
> > +	if (is_long_mode(vcpu))
> > +		reset_rsvds_bits_mask(vcpu, PT64_ROOT_LEVEL);
> > +	else if (is_pae(vcpu))
> > +		reset_rsvds_bits_mask(vcpu, PT32E_ROOT_LEVEL);
> > +	else
> > +		reset_rsvds_bits_mask(vcpu, PT32_ROOT_LEVEL);
> > +}
> > +EXPORT_SYMBOL_GPL(update_rsvd_bits_mask);
>
> Needs a kvm_ prefix if made a global symbol. But isn't nx switching
> rare enough that we can reload the mmu completely?
Yes, it should be rare enough. I am OK with keeping it, though "reload
MMU" seems a little misleading here.
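If the export does stay, it would pick up the kvm_ prefix as asked; a
sketch with the body unchanged from the hunk above (the new name is
hypothetical until the series is respun):

void kvm_update_rsvd_bits_mask(struct kvm_vcpu *vcpu)
{
	if (!is_paging(vcpu))
		return;
	if (is_long_mode(vcpu))
		reset_rsvds_bits_mask(vcpu, PT64_ROOT_LEVEL);
	else if (is_pae(vcpu))
		reset_rsvds_bits_mask(vcpu, PT32E_ROOT_LEVEL);
	else
		reset_rsvds_bits_mask(vcpu, PT32_ROOT_LEVEL);
}
EXPORT_SYMBOL_GPL(kvm_update_rsvd_bits_mask);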
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index b59fc67..971a295 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> >
> > @@ -416,6 +416,10 @@ out:
> >  static int __kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
> >  {
> > +	unsigned long old_cr0 = kvm_read_cr0(vcpu);
> > +	unsigned long update_bits = X86_CR0_PG | X86_CR0_PE |
> > +				    X86_CR0_CD | X86_CR0_NW;
>
> PE doesn't affect paging; CD and NW don't either?
Yes, PE on its own doesn't affect paging.
Marcelo commented on CD/NW: if they change, we need to reload the
pdptrs, and therefore reload the MMU.
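A rough sketch of what that implies inside __kvm_set_cr0()
(illustrative only; load_pdptrs() is the existing helper in x86.c, and
the exact condition would follow the SDM rules for PDPTE reloads):

	/* If CR0.CD/CR0.NW change while PAE paging is active, the
	 * cached PDPTRs must be refetched from CR3 and the MMU
	 * context rebuilt. */
	if (is_pae(vcpu) && ((cr0 ^ old_cr0) & (X86_CR0_CD | X86_CR0_NW))) {
		if (!load_pdptrs(vcpu, vcpu->arch.cr3))
			return 1;	/* bad PDPTEs; caller injects #GP */
		kvm_mmu_reset_context(vcpu);
	}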
>
> What about WP?
How would WP affect it?
> > +
> >  	cr0 |= X86_CR0_ET;
> >
> >  #ifdef CONFIG_X86_64
> >
> > @@ -449,7 +453,8 @@ static int __kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
> >  	kvm_x86_ops->set_cr0(vcpu, cr0);
> >
> > -	kvm_mmu_reset_context(vcpu);
> > +	if ((cr0 ^ old_cr0) & update_bits)
> > +		kvm_mmu_reset_context(vcpu);
> >
> >  	return 0;
> >  }
> >
> > @@ -692,6 +698,8 @@ static u32 emulated_msrs[] = {
> >  static int set_efer(struct kvm_vcpu *vcpu, u64 efer)
> >  {
> > +	u64 old_efer = vcpu->arch.efer;
> > +
> >  	if (efer & efer_reserved_bits)
> >  		return 1;
> >
> > @@ -722,6 +730,9 @@ static int set_efer(struct kvm_vcpu *vcpu, u64 efer)
> >  	vcpu->arch.mmu.base_role.nxe = (efer & EFER_NX) && !tdp_enabled;
> >
> > +	if ((efer ^ old_efer) & EFER_NX)
> > +		update_rsvd_bits_mask(vcpu);
> > +
> >  	return 0;
> >  }
>
> I think it's fine to reset the entire mmu context here; most guests
> won't toggle nx all the time. But it needs to be in patch 3, otherwise
> we have a regression between patches 3 and 4.
OK. I would drop patch 3 and keep the mmu reset if you like...
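Concretely, the respin would keep only the full context reset in
set_efer(); a sketch, assuming the existing kvm_mmu_reset_context()
helper (exact hunk subject to the respin):

	/* After vcpu->arch.efer is updated: a rare EFER.NX toggle
	 * just rebuilds the whole MMU context instead of touching
	 * only the reserved-bits masks. */
	if ((efer ^ old_efer) & EFER_NX)
		kvm_mmu_reset_context(vcpu);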
--
regards
Yang, Sheng