Re: [Xen-devel] [PATCH v3 2/2] SVM: introduce a VM entry helper

2018-05-07 Thread Boris Ostrovsky
On 05/07/2018 11:49 AM, Andrew Cooper wrote:
> On 07/05/18 16:46, Boris Ostrovsky wrote:
>> On 05/07/2018 11:29 AM, Andrew Cooper wrote:
>>> On 07/05/18 16:25, Jan Beulich wrote:
>>> On 07.05.18 at 16:19,  wrote:
> On 07/05/18 15:11, Jan Beulich wrote:
> On 04.05.18 at 17:11,  wrote:
>>> --- a/xen/arch/x86/hvm/svm/entry.S
>>> +++ b/xen/arch/x86/hvm/svm/entry.S
>>> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>>>  jmp  .Lsvm_do_resume
>>>  __UNLIKELY_END(nsvm_hap)
>>>  
>>> -call svm_asid_handle_vmrun
>>> -
>>> -cmpb $0,tb_init_done(%rip)
>>> -UNLIKELY_START(nz, svm_trace)
>>> -call svm_trace_vmentry
>>> -UNLIKELY_END(svm_trace)
>>> -
>>> -mov  VCPU_svm_vmcb(%rbx),%rcx
>>> -mov  UREGS_rax(%rsp),%rax
>>> -mov  %rax,VMCB_rax(%rcx)
>>> -mov  UREGS_rip(%rsp),%rax
>>> -mov  %rax,VMCB_rip(%rcx)
>>> -mov  UREGS_rsp(%rsp),%rax
>>> -mov  %rax,VMCB_rsp(%rcx)
>>> -mov  UREGS_eflags(%rsp),%rax
>>> -or   $X86_EFLAGS_MBS,%rax
>>> -mov  %rax,VMCB_rflags(%rcx)
>>> +mov  %rsp, %rdi
>>> +call svm_vmenter_helper
>> While I had committed this earlier today, there's one concern I've just come
>> to think of: Now that we're calling into C land with CLGI in effect (for more
>> than just the trivial svm_trace_vmentry()) we are at risk of confusing
>> parties using local_irq_is_enabled(), first and foremost
>> common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
>> whether the call wouldn't better be framed by a CLI/STI pair.
> I can't see why the SVM vmentry path uses CLGI/STGI in the first place.
>
> The VMX path uses plain cli/sti and our NMI/MCE handlers can cope. 
> Furthermore, processing NMIs/MCEs at this point will be more efficient
> than taking a vmentry then immediately exiting again.
 Perhaps you're right, i.e. we could replace all current CLGI/STGI by
 CLI/STI, adding a single STGI right after VMRUN.
>> The APM says "It is assumed that VMM software cleared GIF some time before
>> executing the VMRUN instruction, to ensure an atomic state switch."
>>
>> Not sure if this is meant as suggestion or requirement.
> Hmm - that can probably be tested with this proposed patch and a very
> high frequency NMI perf counter.


This may only prove that we do need it, if the test without CLGI fails.

If the test passes, I don't think we can say anything one way or the other.

I am adding Suravee and Brian, perhaps they know the answer (or can
check internally).


>
> Basically every other hypervisor does CLGI; VMSAVE (host state); VMLOAD
> (guest state); VMRUN, and Xen's lack of doing this is why we have to
> play with the IDT IST settings, as well as why we can't cope cleanly
> with stack overflows.
>

KVM manipulates both GIF and RFLAGS.IF.

-boris


Re: [Xen-devel] [PATCH v3 2/2] SVM: introduce a VM entry helper

2018-05-07 Thread Andrew Cooper
On 07/05/18 16:46, Boris Ostrovsky wrote:
> On 05/07/2018 11:29 AM, Andrew Cooper wrote:
>> On 07/05/18 16:25, Jan Beulich wrote:
>> On 07.05.18 at 16:19,  wrote:
 On 07/05/18 15:11, Jan Beulich wrote:
 On 04.05.18 at 17:11,  wrote:
>> --- a/xen/arch/x86/hvm/svm/entry.S
>> +++ b/xen/arch/x86/hvm/svm/entry.S
>> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>>  jmp  .Lsvm_do_resume
>>  __UNLIKELY_END(nsvm_hap)
>>  
>> -call svm_asid_handle_vmrun
>> -
>> -cmpb $0,tb_init_done(%rip)
>> -UNLIKELY_START(nz, svm_trace)
>> -call svm_trace_vmentry
>> -UNLIKELY_END(svm_trace)
>> -
>> -mov  VCPU_svm_vmcb(%rbx),%rcx
>> -mov  UREGS_rax(%rsp),%rax
>> -mov  %rax,VMCB_rax(%rcx)
>> -mov  UREGS_rip(%rsp),%rax
>> -mov  %rax,VMCB_rip(%rcx)
>> -mov  UREGS_rsp(%rsp),%rax
>> -mov  %rax,VMCB_rsp(%rcx)
>> -mov  UREGS_eflags(%rsp),%rax
>> -or   $X86_EFLAGS_MBS,%rax
>> -mov  %rax,VMCB_rflags(%rcx)
>> +mov  %rsp, %rdi
>> +call svm_vmenter_helper
> While I had committed this earlier today, there's one concern I've just come
> to think of: Now that we're calling into C land with CLGI in effect (for more
> than just the trivial svm_trace_vmentry()) we are at risk of confusing
> parties using local_irq_is_enabled(), first and foremost
> common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
> whether the call wouldn't better be framed by a CLI/STI pair.
 I can't see why the SVM vmentry path uses CLGI/STGI in the first place.

 The VMX path uses plain cli/sti and our NMI/MCE handlers can cope. 
 Furthermore, processing NMIs/MCEs at this point will be more efficient
 than taking a vmentry then immediately exiting again.
>>> Perhaps you're right, i.e. we could replace all current CLGI/STGI by
>>> CLI/STI, adding a single STGI right after VMRUN.
>
> The APM says "It is assumed that VMM software cleared GIF some time before
> executing the VMRUN instruction, to ensure an atomic state switch."
>
> Not sure if this is meant as suggestion or requirement.

Hmm - that can probably be tested with this proposed patch and a very
high frequency NMI perf counter.

Basically every other hypervisor does CLGI; VMSAVE (host state); VMLOAD
(guest state); VMRUN, and Xen's lack of doing this is why we have to
play with the IDT IST settings, as well as why we can't cope cleanly
with stack overflows.
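
For reference, that sequence has roughly the following shape (a sketch
only, not any hypervisor's actual code: host_vmcb_pa / guest_vmcb_pa are
stand-in symbols, and VMSAVE/VMLOAD/VMRUN all take the VMCB physical
address implicitly in %rax):

        CLGI                          /* GIF=0: mask INTR/NMI/SMI across the switch */
        mov  host_vmcb_pa(%rip), %rax
        VMSAVE                        /* stash host FS/GS/TR/LDTR, KernelGSBase, ... */
        mov  guest_vmcb_pa(%rip), %rax
        VMLOAD                        /* load the guest's copy of that state */
        VMRUN                         /* run the guest; #VMEXIT resumes here, GIF=0 */
        VMSAVE                        /* write guest state back out (%rax restored
                                       * to the guest VMCB PA by #VMEXIT) */
        mov  host_vmcb_pa(%rip), %rax
        VMLOAD                        /* restore host state */
        STGI                          /* GIF=1 again */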

~Andrew


Re: [Xen-devel] [PATCH v3 2/2] SVM: introduce a VM entry helper

2018-05-07 Thread Jan Beulich
>>> On 07.05.18 at 17:46,  wrote:
> On 05/07/2018 11:29 AM, Andrew Cooper wrote:
>> On 07/05/18 16:25, Jan Beulich wrote:
>> On 07.05.18 at 16:19,  wrote:
 On 07/05/18 15:11, Jan Beulich wrote:
 On 04.05.18 at 17:11,  wrote:
>> --- a/xen/arch/x86/hvm/svm/entry.S
>> +++ b/xen/arch/x86/hvm/svm/entry.S
>> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>>  jmp  .Lsvm_do_resume
>>  __UNLIKELY_END(nsvm_hap)
>>  
>> -call svm_asid_handle_vmrun
>> -
>> -cmpb $0,tb_init_done(%rip)
>> -UNLIKELY_START(nz, svm_trace)
>> -call svm_trace_vmentry
>> -UNLIKELY_END(svm_trace)
>> -
>> -mov  VCPU_svm_vmcb(%rbx),%rcx
>> -mov  UREGS_rax(%rsp),%rax
>> -mov  %rax,VMCB_rax(%rcx)
>> -mov  UREGS_rip(%rsp),%rax
>> -mov  %rax,VMCB_rip(%rcx)
>> -mov  UREGS_rsp(%rsp),%rax
>> -mov  %rax,VMCB_rsp(%rcx)
>> -mov  UREGS_eflags(%rsp),%rax
>> -or   $X86_EFLAGS_MBS,%rax
>> -mov  %rax,VMCB_rflags(%rcx)
>> +mov  %rsp, %rdi
>> +call svm_vmenter_helper
> While I had committed this earlier today, there's one concern I've just come
> to think of: Now that we're calling into C land with CLGI in effect (for more
> than just the trivial svm_trace_vmentry()) we are at risk of confusing
> parties using local_irq_is_enabled(), first and foremost
> common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
> whether the call wouldn't better be framed by a CLI/STI pair.
 I can't see why the SVM vmentry path uses CLGI/STGI in the first place.

 The VMX path uses plain cli/sti and our NMI/MCE handlers can cope. 
 Furthermore, processing NMIs/MCEs at this point will be more efficient
 than taking a vmentry then immediately exiting again.
>>> Perhaps you're right, i.e. we could replace all current CLGI/STGI by
>>> CLI/STI, adding a single STGI right after VMRUN.
> 
> 
> The APM says "It is assumed that VMM software cleared GIF some time before
> executing the VMRUN instruction, to ensure an atomic state switch."

Well, that means we might additionally need CLGI right before VMRUN.

> Not sure if this is meant as suggestion or requirement.

How do we find out?

Jan




Re: [Xen-devel] [PATCH v3 2/2] SVM: introduce a VM entry helper

2018-05-07 Thread Boris Ostrovsky
On 05/07/2018 11:29 AM, Andrew Cooper wrote:
> On 07/05/18 16:25, Jan Beulich wrote:
> On 07.05.18 at 16:19,  wrote:
>>> On 07/05/18 15:11, Jan Beulich wrote:
>>> On 04.05.18 at 17:11,  wrote:
> --- a/xen/arch/x86/hvm/svm/entry.S
> +++ b/xen/arch/x86/hvm/svm/entry.S
> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>  jmp  .Lsvm_do_resume
>  __UNLIKELY_END(nsvm_hap)
>  
> -call svm_asid_handle_vmrun
> -
> -cmpb $0,tb_init_done(%rip)
> -UNLIKELY_START(nz, svm_trace)
> -call svm_trace_vmentry
> -UNLIKELY_END(svm_trace)
> -
> -mov  VCPU_svm_vmcb(%rbx),%rcx
> -mov  UREGS_rax(%rsp),%rax
> -mov  %rax,VMCB_rax(%rcx)
> -mov  UREGS_rip(%rsp),%rax
> -mov  %rax,VMCB_rip(%rcx)
> -mov  UREGS_rsp(%rsp),%rax
> -mov  %rax,VMCB_rsp(%rcx)
> -mov  UREGS_eflags(%rsp),%rax
> -or   $X86_EFLAGS_MBS,%rax
> -mov  %rax,VMCB_rflags(%rcx)
> +mov  %rsp, %rdi
> +call svm_vmenter_helper
 While I had committed this earlier today, there's one concern I've just come
 to think of: Now that we're calling into C land with CLGI in effect (for more
 than just the trivial svm_trace_vmentry()) we are at risk of confusing
 parties using local_irq_is_enabled(), first and foremost
 common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
 whether the call wouldn't better be framed by a CLI/STI pair.
>>> I can't see why the SVM vmentry path uses CLGI/STGI in the first place.
>>>
>>> The VMX path uses plain cli/sti and our NMI/MCE handlers can cope. 
>>> Furthermore, processing NMIs/MCEs at this point will be more efficient
>>> than taking a vmentry then immediately exiting again.
>> Perhaps you're right, i.e. we could replace all current CLGI/STGI by
>> CLI/STI, adding a single STGI right after VMRUN.


The APM says "It is assumed that VMM software cleared GIF some time before
executing the VMRUN instruction, to ensure an atomic state switch."

Not sure if this is meant as suggestion or requirement.

-boris

> We want to retain the one STGI on the svm_stgi_label, but I think all
> other CLGI/STGI should be downgraded to CLI/STI.
>
>>> As for running with interrupts disabled, that is already the case on the
>>> VMX side, and I don't see why the SVM side needs to be different.
>> Sure, as does SVM - CLGI is a superset of CLI, after all. My observation
>> was just that this state of interrupts being disabled can't be observed by
>> users of the normal infrastructure (inspecting EFLAGS.IF).
> Ah ok.
>
> ~Andrew



Re: [Xen-devel] [PATCH v3 2/2] SVM: introduce a VM entry helper

2018-05-07 Thread Andrew Cooper
On 07/05/18 16:25, Jan Beulich wrote:
 On 07.05.18 at 16:19,  wrote:
>> On 07/05/18 15:11, Jan Beulich wrote:
>> On 04.05.18 at 17:11,  wrote:
 --- a/xen/arch/x86/hvm/svm/entry.S
 +++ b/xen/arch/x86/hvm/svm/entry.S
 @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
  jmp  .Lsvm_do_resume
  __UNLIKELY_END(nsvm_hap)
  
 -call svm_asid_handle_vmrun
 -
 -cmpb $0,tb_init_done(%rip)
 -UNLIKELY_START(nz, svm_trace)
 -call svm_trace_vmentry
 -UNLIKELY_END(svm_trace)
 -
 -mov  VCPU_svm_vmcb(%rbx),%rcx
 -mov  UREGS_rax(%rsp),%rax
 -mov  %rax,VMCB_rax(%rcx)
 -mov  UREGS_rip(%rsp),%rax
 -mov  %rax,VMCB_rip(%rcx)
 -mov  UREGS_rsp(%rsp),%rax
 -mov  %rax,VMCB_rsp(%rcx)
 -mov  UREGS_eflags(%rsp),%rax
 -or   $X86_EFLAGS_MBS,%rax
 -mov  %rax,VMCB_rflags(%rcx)
 +mov  %rsp, %rdi
 +call svm_vmenter_helper
>>> While I had committed this earlier today, there's one concern I've just come
>>> to think of: Now that we're calling into C land with CLGI in effect (for more
>>> than just the trivial svm_trace_vmentry()) we are at risk of confusing
>>> parties using local_irq_is_enabled(), first and foremost
>>> common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
>>> whether the call wouldn't better be framed by a CLI/STI pair.
>> I can't see why the SVM vmentry path uses CLGI/STGI in the first place.
>>
>> The VMX path uses plain cli/sti and our NMI/MCE handlers can cope. 
>> Furthermore, processing NMIs/MCEs at this point will be more efficient
>> than taking a vmentry then immediately exiting again.
> Perhaps you're right, i.e. we could replace all current CLGI/STGI by
> CLI/STI, adding a single STGI right after VMRUN.

We want to retain the one STGI on the svm_stgi_label, but I think all
other CLGI/STGI should be downgraded to CLI/STI.

>
>> As for running with interrupts disabled, that is already the case on the
>> VMX side, and I don't see why the SVM side needs to be different.
> Sure, as does SVM - CLGI is a superset of CLI, after all. My observation
> was just that this state of interrupts being disabled can't be observed by
> users of the normal infrastructure (inspecting EFLAGS.IF).

Ah ok.

~Andrew


Re: [Xen-devel] [PATCH v3 2/2] SVM: introduce a VM entry helper

2018-05-07 Thread Jan Beulich
>>> On 07.05.18 at 16:19,  wrote:
> On 07/05/18 15:11, Jan Beulich wrote:
> On 04.05.18 at 17:11,  wrote:
>>> --- a/xen/arch/x86/hvm/svm/entry.S
>>> +++ b/xen/arch/x86/hvm/svm/entry.S
>>> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>>>  jmp  .Lsvm_do_resume
>>>  __UNLIKELY_END(nsvm_hap)
>>>  
>>> -call svm_asid_handle_vmrun
>>> -
>>> -cmpb $0,tb_init_done(%rip)
>>> -UNLIKELY_START(nz, svm_trace)
>>> -call svm_trace_vmentry
>>> -UNLIKELY_END(svm_trace)
>>> -
>>> -mov  VCPU_svm_vmcb(%rbx),%rcx
>>> -mov  UREGS_rax(%rsp),%rax
>>> -mov  %rax,VMCB_rax(%rcx)
>>> -mov  UREGS_rip(%rsp),%rax
>>> -mov  %rax,VMCB_rip(%rcx)
>>> -mov  UREGS_rsp(%rsp),%rax
>>> -mov  %rax,VMCB_rsp(%rcx)
>>> -mov  UREGS_eflags(%rsp),%rax
>>> -or   $X86_EFLAGS_MBS,%rax
>>> -mov  %rax,VMCB_rflags(%rcx)
>>> +mov  %rsp, %rdi
>>> +call svm_vmenter_helper
>> While I had committed this earlier today, there's one concern I've just come
>> to think of: Now that we're calling into C land with CLGI in effect (for more
>> than just the trivial svm_trace_vmentry()) we are at risk of confusing
>> parties using local_irq_is_enabled(), first and foremost
>> common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
>> whether the call wouldn't better be framed by a CLI/STI pair.
> 
> I can't see why the SVM vmentry path uses CLGI/STGI in the first place.
> 
> The VMX path uses plain cli/sti and our NMI/MCE handlers can cope. 
> Furthermore, processing NMIs/MCEs at this point will be more efficient
> than taking a vmentry then immediately exiting again.

Perhaps you're right, i.e. we could replace all current CLGI/STGI by
CLI/STI, adding a single STGI right after VMRUN.
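
That is, the entry path would end up shaped roughly as below (a sketch
of the proposal only, eliding everything else on the path; since
#VMEXIT clears GIF, the one STGI after VMRUN is needed either way):

        cli                      /* plain EFLAGS.IF masking, so visible
                                  * to local_irq_is_enabled() */
        ...
        call svm_vmenter_helper  /* C code now runs with IF=0, not GIF=0 */
        ...
        VMRUN                    /* guest runs; #VMEXIT resumes here with GIF=0 */
        STGI                     /* the single remaining STGI */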

> As for running with interrupts disabled, that is already the case on the
> VMX side, and I don't see why the SVM side needs to be different.

Sure, as does SVM - CLGI is a superset of CLI, after all. My observation
was just that this state of interrupts being disabled can't be observed by
users of the normal infrastructure (inspecting EFLAGS.IF).

Jan




Re: [Xen-devel] [PATCH v3 2/2] SVM: introduce a VM entry helper

2018-05-07 Thread Andrew Cooper
On 07/05/18 15:11, Jan Beulich wrote:
 On 04.05.18 at 17:11,  wrote:
>> --- a/xen/arch/x86/hvm/svm/entry.S
>> +++ b/xen/arch/x86/hvm/svm/entry.S
>> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>>  jmp  .Lsvm_do_resume
>>  __UNLIKELY_END(nsvm_hap)
>>  
>> -call svm_asid_handle_vmrun
>> -
>> -cmpb $0,tb_init_done(%rip)
>> -UNLIKELY_START(nz, svm_trace)
>> -call svm_trace_vmentry
>> -UNLIKELY_END(svm_trace)
>> -
>> -mov  VCPU_svm_vmcb(%rbx),%rcx
>> -mov  UREGS_rax(%rsp),%rax
>> -mov  %rax,VMCB_rax(%rcx)
>> -mov  UREGS_rip(%rsp),%rax
>> -mov  %rax,VMCB_rip(%rcx)
>> -mov  UREGS_rsp(%rsp),%rax
>> -mov  %rax,VMCB_rsp(%rcx)
>> -mov  UREGS_eflags(%rsp),%rax
>> -or   $X86_EFLAGS_MBS,%rax
>> -mov  %rax,VMCB_rflags(%rcx)
>> +mov  %rsp, %rdi
>> +call svm_vmenter_helper
> While I had committed this earlier today, there's one concern I've just come
> to think of: Now that we're calling into C land with CLGI in effect (for more
> than just the trivial svm_trace_vmentry()) we are at risk of confusing
> parties using local_irq_is_enabled(), first and foremost
> common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
> whether the call wouldn't better be framed by a CLI/STI pair.

I can't see why the SVM vmentry path uses CLGI/STGI in the first place.

The VMX path uses plain cli/sti and our NMI/MCE handlers can cope. 
Furthermore, processing NMIs/MCEs at this point will be more efficient
than taking a vmentry then immediately exiting again.

As for running with interrupts disabled, that is already the case on the
VMX side, and I don't see why the SVM side needs to be different.

~Andrew


Re: [Xen-devel] [PATCH v3 2/2] SVM: introduce a VM entry helper

2018-05-07 Thread Jan Beulich
>>> On 04.05.18 at 17:11,  wrote:
> --- a/xen/arch/x86/hvm/svm/entry.S
> +++ b/xen/arch/x86/hvm/svm/entry.S
> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>  jmp  .Lsvm_do_resume
>  __UNLIKELY_END(nsvm_hap)
>  
> -call svm_asid_handle_vmrun
> -
> -cmpb $0,tb_init_done(%rip)
> -UNLIKELY_START(nz, svm_trace)
> -call svm_trace_vmentry
> -UNLIKELY_END(svm_trace)
> -
> -mov  VCPU_svm_vmcb(%rbx),%rcx
> -mov  UREGS_rax(%rsp),%rax
> -mov  %rax,VMCB_rax(%rcx)
> -mov  UREGS_rip(%rsp),%rax
> -mov  %rax,VMCB_rip(%rcx)
> -mov  UREGS_rsp(%rsp),%rax
> -mov  %rax,VMCB_rsp(%rcx)
> -mov  UREGS_eflags(%rsp),%rax
> -or   $X86_EFLAGS_MBS,%rax
> -mov  %rax,VMCB_rflags(%rcx)
> +mov  %rsp, %rdi
> +call svm_vmenter_helper

While I had committed this earlier today, there's one concern I've just come
to think of: Now that we're calling into C land with CLGI in effect (for more
than just the trivial svm_trace_vmentry()) we are at risk of confusing
parties using local_irq_is_enabled(), first and foremost
common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
whether the call wouldn't better be framed by a CLI/STI pair.
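
For context, the pieces in question look roughly like this (paraphrased
from memory of the Xen sources of this period, so details may differ):

    /* asm-x86: local_irq_is_enabled() consults EFLAGS.IF only - the
     * GIF state established by CLGI is invisible to it. */
    #define local_irq_is_enabled()              \
    ({  unsigned long flags;                    \
        local_save_flags(flags);                \
        !!(flags & X86_EFLAGS_IF);              \
    })

    /* common/spinlock.c: check_lock() classifies each lock as IRQ-safe
     * or not based on the above, and complains when a lock is seen
     * taken both ways. Under CLGI interrupts really are masked, yet
     * EFLAGS.IF may still read as 1, so the classification goes wrong. */
    static void check_lock(struct lock_debug *debug)
    {
        int irq_safe = !local_irq_is_enabled();
        /* ... */
    }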

Jan




Re: [Xen-devel] [PATCH v3 2/2] SVM: introduce a VM entry helper

2018-05-07 Thread Juergen Gross
On 04/05/18 17:11, Jan Beulich wrote:
> Neither the copying of register values nor the generation of the trace
> entry needs to be done in assembly. The VMLOAD invocation can also be
> further deferred (and centralized). Therefore replace the
> svm_asid_handle_vmrun() invocation with a call to the new helper.
> 
> Similarly move the VM exit side register value copying into
> svm_vmexit_handler().
> 
> Now that we always make it out to guest context after VMLOAD,
> svm_sync_vmcb() no longer overrides vmcb_needs_vmsave, making it
> unnecessary for svm_vmexit_handler() to set the field early.
> 
> Signed-off-by: Jan Beulich 

Release-acked-by: Juergen Gross 


Juergen


[Xen-devel] [PATCH v3 2/2] SVM: introduce a VM entry helper

2018-05-04 Thread Jan Beulich
Neither the copying of register values nor the generation of the trace
entry needs to be done in assembly. The VMLOAD invocation can also be
further deferred (and centralized). Therefore replace the
svm_asid_handle_vmrun() invocation with a call to the new helper.

Similarly move the VM exit side register value copying into
svm_vmexit_handler().

Now that we always make it out to guest context after VMLOAD,
svm_sync_vmcb() no longer overrides vmcb_needs_vmsave, making it
unnecessary for svm_vmexit_handler() to set the field early.

Signed-off-by: Jan Beulich 
---
v3: svm_vmexit_handler() no longer explicitly sets vmcb_sync_state, and
svm_sync_vmcb() no longer converts a needs-vmsave request into
in-sync state. Also move the svm_trace_vmentry() invocation to C.
v2: New.

--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
 jmp  .Lsvm_do_resume
 __UNLIKELY_END(nsvm_hap)
 
-call svm_asid_handle_vmrun
-
-cmpb $0,tb_init_done(%rip)
-UNLIKELY_START(nz, svm_trace)
-call svm_trace_vmentry
-UNLIKELY_END(svm_trace)
-
-mov  VCPU_svm_vmcb(%rbx),%rcx
-mov  UREGS_rax(%rsp),%rax
-mov  %rax,VMCB_rax(%rcx)
-mov  UREGS_rip(%rsp),%rax
-mov  %rax,VMCB_rip(%rcx)
-mov  UREGS_rsp(%rsp),%rax
-mov  %rax,VMCB_rsp(%rcx)
-mov  UREGS_eflags(%rsp),%rax
-or   $X86_EFLAGS_MBS,%rax
-mov  %rax,VMCB_rflags(%rcx)
+mov  %rsp, %rdi
+call svm_vmenter_helper
 
 mov VCPU_arch_msr(%rbx), %rax
 mov VCPUMSR_spec_ctrl_raw(%rax), %eax
@@ -111,16 +96,6 @@ UNLIKELY_END(svm_trace)
 SPEC_CTRL_ENTRY_FROM_VMEXIT /* Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
 /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
-mov  VCPU_svm_vmcb(%rbx),%rcx
-mov  VMCB_rax(%rcx),%rax
-mov  %rax,UREGS_rax(%rsp)
-mov  VMCB_rip(%rcx),%rax
-mov  %rax,UREGS_rip(%rsp)
-mov  VMCB_rsp(%rcx),%rax
-mov  %rax,UREGS_rsp(%rsp)
-mov  VMCB_rflags(%rcx),%rax
-mov  %rax,UREGS_eflags(%rsp)
-
 STGI
 GLOBAL(svm_stgi_label)
 mov  %rsp,%rdi
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -687,10 +687,9 @@ static void svm_sync_vmcb(struct vcpu *v
 if ( new_state == vmcb_needs_vmsave )
 {
 if ( arch_svm->vmcb_sync_state == vmcb_needs_vmload )
-{
 svm_vmload(arch_svm->vmcb);
-arch_svm->vmcb_sync_state = vmcb_in_sync;
-}
+
+arch_svm->vmcb_sync_state = new_state;
 }
 else
 {
@@ -1171,11 +1170,29 @@ static void noreturn svm_do_resume(struc
 
 hvm_do_resume(v);
 
-svm_sync_vmcb(v, vmcb_needs_vmsave);
-
 reset_stack_and_jump(svm_asm_do_resume);
 }
 
+void svm_vmenter_helper(const struct cpu_user_regs *regs)
+{
+struct vcpu *curr = current;
+struct vmcb_struct *vmcb = curr->arch.hvm_svm.vmcb;
+
+svm_asid_handle_vmrun();
+
+if ( unlikely(tb_init_done) )
+HVMTRACE_ND(VMENTRY,
+nestedhvm_vcpu_in_guestmode(curr) ? TRC_HVM_NESTEDFLAG : 0,
+1/*cycles*/, 0, 0, 0, 0, 0, 0, 0);
+
+svm_sync_vmcb(curr, vmcb_needs_vmsave);
+
+vmcb->rax = regs->rax;
+vmcb->rip = regs->rip;
+vmcb->rsp = regs->rsp;
+vmcb->rflags = regs->rflags | X86_EFLAGS_MBS;
+}
+
 static void svm_guest_osvw_init(struct vcpu *vcpu)
 {
 if ( boot_cpu_data.x86_vendor != X86_VENDOR_AMD )
@@ -2621,7 +2638,11 @@ void svm_vmexit_handler(struct cpu_user_
 bool_t vcpu_guestmode = 0;
 struct vlapic *vlapic = vcpu_vlapic(v);
 
-v->arch.hvm_svm.vmcb_sync_state = vmcb_needs_vmsave;
+regs->rax = vmcb->rax;
+regs->rip = vmcb->rip;
+regs->rsp = vmcb->rsp;
+regs->rflags = vmcb->rflags;
+
 hvm_invalidate_regs_fields(regs);
 
 if ( paging_mode_hap(v->domain) )
@@ -3108,8 +3129,6 @@ void svm_vmexit_handler(struct cpu_user_
 }
 
   out:
-svm_sync_vmcb(v, vmcb_needs_vmsave);
-
 if ( vcpu_guestmode || vlapic_hw_disabled(vlapic) )
 return;
 
@@ -3118,17 +3137,8 @@ void svm_vmexit_handler(struct cpu_user_
 intr.fields.tpr =
 (vlapic_get_reg(vlapic, APIC_TASKPRI) & 0xFF) >> 4;
 vmcb_set_vintr(vmcb, intr);
-ASSERT(v->arch.hvm_svm.vmcb_sync_state != vmcb_needs_vmload);
 }
 
-void svm_trace_vmentry(void)
-{
-struct vcpu *curr = current;
-HVMTRACE_ND(VMENTRY,
-nestedhvm_vcpu_in_guestmode(curr) ? TRC_HVM_NESTEDFLAG : 0,
-1/*cycles*/, 0, 0, 0, 0, 0, 0, 0);
-}
-  
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -119,12 +119,6 @@ void __dummy__(void)
 OFFSET(DOMAIN_is_32bit_pv, struct domain, arch.is_32bit_pv);
 BLANK();
 
-OFFSET(VMCB_rax, struct vmcb_struct, rax);
-OFFSET(VMCB_rip, struct vmcb_struct, rip);
-OFFSET(VMCB_rsp, struct vmcb_struct, rsp);
-