Gleb Natapov <[email protected]> wrote on 17/04/2013 05:34:49 PM:
> > @@ -5716,6 +5725,10 @@ static void nested_vmx_failValid(struct
> > X86_EFLAGS_SF | X86_EFLAGS_OF))
> > | X86_EFLAGS_ZF);
> > get_vmcs12(vcpu)->vm_instruction_error = vm_instruction_error;
> > + /*
> > + * We don't need to force a shadow sync because
> > + * VM_INSTRUCTION_ERROR is not shdowed
> ---------------------------------------^ shadowed.
> But let's just request a sync. This is the slow path anyway.
Why, then?
Note that this will require calling copy_shadow_to_vmcs12,
because nested_vmx_failValid can be called while L0 handles an L1
exit (for VMX instruction emulation), and the shadow VMCS could
have been modified by L1 before the exit.
>
> > + */
> > }
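The sync_shadow_vmcs flag discussed above is a classic dirty-flag pattern: the slow path (nested_vmx_failValid, handle_vmptrld) marks the shadow stale, and the hot entry path (vmx_vcpu_run) does the actual copy only when flagged. A minimal standalone sketch of that pattern, with illustrative structure and field names that are stand-ins for the real KVM code, not the actual implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the KVM structures; not real identifiers. */
struct vmcs12_model { int vm_instruction_error; };
struct shadow_model { int vm_instruction_error; };

struct vcpu_model {
	struct vmcs12_model vmcs12;	/* software VMCS visible to L1     */
	struct shadow_model shadow;	/* hardware shadow VMCS            */
	bool sync_shadow_vmcs;		/* "shadow is stale" dirty flag    */
};

/*
 * Emulated VMfailValid: record the error in the software vmcs12 and
 * request a sync, mirroring the slow-path choice in the discussion.
 */
static void fail_valid(struct vcpu_model *v, int err)
{
	v->vmcs12.vm_instruction_error = err;
	v->sync_shadow_vmcs = true;
}

/*
 * Entry path: propagate vmcs12 into the shadow only when flagged,
 * like the check added at the top of vmx_vcpu_run in the patch.
 */
static void vcpu_run_sync(struct vcpu_model *v)
{
	if (v->sync_shadow_vmcs) {
		v->shadow.vm_instruction_error =
			v->vmcs12.vm_instruction_error;
		v->sync_shadow_vmcs = false;
	}
}
```

The point of the flag is that the copy happens at most once per entry, no matter how many slow-path updates preceded it.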
> >
> > /* Emulate the VMCLEAR instruction */
> > @@ -6127,6 +6140,7 @@ static int handle_vmptrld(struct kvm_vcp
> > /* init shadow vmcs */
> > vmcs_clear(shadow_vmcs);
> > vmx->nested.current_shadow_vmcs = shadow_vmcs;
> > + vmx->nested.sync_shadow_vmcs = true;
> > }
> > }
> >
> > @@ -6876,6 +6890,10 @@ static void __noclone vmx_vcpu_run(struc
> > {
> > struct vcpu_vmx *vmx = to_vmx(vcpu);
> > unsigned long debugctlmsr;
> Leave a free line here and move it after if (vmx->emulation_required).
Will do
> > + if (vmx->nested.sync_shadow_vmcs) {
> > + copy_vmcs12_to_shadow(vmx);
> > + vmx->nested.sync_shadow_vmcs = false;
> > + }
> >
> > /* Record the guest's net vcpu time for enforced NMI injections. */
> > if (unlikely(!cpu_has_virtual_nmis() && vmx->soft_vnmi_blocked))
> > @@ -7496,6 +7514,8 @@ static int nested_vmx_run(struct kvm_vcp
> > skip_emulated_instruction(vcpu);
> > vmcs12 = get_vmcs12(vcpu);
> >
> > + if (enable_shadow_vmcs)
> > + copy_shadow_to_vmcs12(vmx);
> And a free line here.
Will do
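The copy_shadow_to_vmcs12 call in nested_vmx_run above is the opposite direction: with VMCS shadowing enabled, L1 may have written shadowed fields via VMWRITE without causing an exit, so L0 must pull those writes back into the software vmcs12 before emulating an instruction that reads it. A minimal sketch of that pull, with illustrative names (nvcpu, pull_shadow) that are not the real KVM identifiers:

```c
#include <assert.h>

/* Illustrative stand-ins for the KVM structures; not real identifiers. */
struct nvcpu {
	long shadow_field;	/* written by L1 through the hardware
				 * shadow VMCS, without an exit        */
	long vmcs12_field;	/* software copy L0 reads while
				 * emulating a VMX instruction         */
};

/*
 * Model of copy_shadow_to_vmcs12: refresh the software vmcs12 from the
 * shadow so L0's emulation sees L1's latest (exit-free) VMWRITEs.
 */
static void pull_shadow(struct nvcpu *v)
{
	v->vmcs12_field = v->shadow_field;
}
```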