On 06/29/2012 05:37 AM, Mao, Junjie wrote:
> >
> > >
> > > static void vmx_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2
> > > *entry) @@ -6610,6 +6641,9 @@ static void prepare_vmcs02(struct
> > kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
> > > page_to_phys(vmx->nested.apic_access_page));
> > > }
> > >
> > > + /* Explicitly disable INVPCID until PCID for L2 guest is
> > > supported */
> > > + exec_control &= ~SECONDARY_EXEC_ENABLE_INVPCID;
> > > +
> >
> > We can't just disable it without the guest knowing. If we don't expose
> > INVPCID through the MSR, then we should fail VMLAUNCH or VMRESUME if
> > this bit is set.
>
> I think this means I can replace the code here with a check in
> nested_vmx_run. Do I understand correctly?
Correct, but the check already exists:
	if (!vmx_control_verify(vmcs12->cpu_based_vm_exec_control,
	        nested_vmx_procbased_ctls_low, nested_vmx_procbased_ctls_high) ||
	    !vmx_control_verify(vmcs12->secondary_vm_exec_control,
	        nested_vmx_secondary_ctls_low, nested_vmx_secondary_ctls_high) ||
	    !vmx_control_verify(vmcs12->pin_based_vm_exec_control,
	        nested_vmx_pinbased_ctls_low, nested_vmx_pinbased_ctls_high) ||
	    !vmx_control_verify(vmcs12->vm_exit_controls,
	        nested_vmx_exit_ctls_low, nested_vmx_exit_ctls_high) ||
	    !vmx_control_verify(vmcs12->vm_entry_controls,
	        nested_vmx_entry_ctls_low, nested_vmx_entry_ctls_high))
	{
		nested_vmx_failValid(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);
		return 1;
	}
So all that is needed is to initialize nested_vmx_secondary_ctls_high properly.
> >
> > > vmcs_write32(SECONDARY_VM_EXEC_CONTROL, exec_control);
> > > }
> > >
> > > @@ -528,6 +528,10 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned
> > long cr0)
> > > return 1;
> > > }
> > >
> > > + if ((old_cr0 & X86_CR0_PG) && !(cr0 & X86_CR0_PG) &&
> > > + kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE))
> > > + return 1;
> >
> > Why check old_cr0?
>
> MOV to CR0 causes a general-protection exception if it would clear CR0.PG to
> 0 while CR4.PCIDE = 1. This check reflects the restriction.
It should not be possible to have cr0.pg=0 and cr4.pcide=1 in the first
place. We are guaranteed that old_cr0.pg=1.
>
> >
> > >
> > > + if ((cr4 & X86_CR4_PCIDE) && !(old_cr4 & X86_CR4_PCIDE)) {
> > > + if (!guest_cpuid_has_pcid(vcpu))
> > > + return 1;
> > > +
> > > + /* PCID can not be enabled when cr3[11:0]!=000H or EFER.LMA=0 */
> > > + if ((kvm_read_cr3(vcpu) & X86_CR3_PCID_MASK)
> > || !is_long_mode(vcpu))
> > > + return 1;
> > > + }
> > > +
> > > if (kvm_x86_ops->set_cr4(vcpu, cr4))
> > > return 1;
> > >
> > > - if ((cr4 ^ old_cr4) & pdptr_bits)
> > > + if (((cr4 ^ old_cr4) & pdptr_bits) ||
> > > + (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
> > > kvm_mmu_reset_context(vcpu);
> >
> >
> > You can do
> >
> >
> > if ((cr4 ^ old_cr4) & (pdptr_bits | X86_CR4_PCIDE))
> > ...
>
> TLB is not invalidated when CR4.PCIDE is changed from 0 to 1.
Ok.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.