On 21/03/17 17:41, James Morse wrote:
> On 21/03/17 17:37, Marc Zyngier wrote:
>> On 21/03/17 17:25, James Morse wrote:
>>> On 21/03/17 17:04, Catalin Marinas wrote:
>>>> On Mon, Mar 06, 2017 at 02:24:34PM +0000, Marc Zyngier wrote:
>>>>> Let's define a new stub hypercall that resets the HYP configuration
>>>>> to its default: hyp-stub vectors, and MMU disabled.
>>>>>
>>>>> Of course, for the hyp-stub itself, this is a trivial no-op.
>>>>> Hypervisors will have a bit more work to do.
>>>>>
>>>>> Signed-off-by: Marc Zyngier <[email protected]>
>>>>> ---
>>>>>  arch/arm64/include/asm/virt.h |  9 +++++++++
>>>>>  arch/arm64/kernel/hyp-stub.S  | 13 ++++++++++++-
>>>>>  2 files changed, 21 insertions(+), 1 deletion(-)
>>>> [...]
>>>>> +ENTRY(__hyp_reset_vectors)
>>>>> + str     lr, [sp, #-16]!
>>>>> + mov     x0, #HVC_RESET_VECTORS
>>>>> + hvc     #0
>>>>> + ldr     lr, [sp], #16
>>>>> + ret
>>>>> +ENDPROC(__hyp_reset_vectors)
>>>>
>>>> Why do we need to specifically preserve lr across the hvc call? Is it
>>>> corrupted by the EL2 code (if yes, are other caller-saved registers that
>>>> need preserving)? I don't see something similar in the arch/arm code.
>>>
>>> Kexec on arm64 needed a register to clobber in the hyp-stub's
>>> el1_sync code. We wanted to preserve all the registers so
>>> soft_restart() could look more like a function call.
>>
>> I don't think we need this anymore. Once we enter __cpu_soft_restart(),
>> there is no turning back. Or am I missing something else?
> 
> My recollection of the history may be wrong: but we needed to mess with
> esr_el2 before we knew it was a soft_restart() call, at which point we
> didn't want to clobber the registers. This was the strange use of x18
> in kexec.

After a bit of digging together with James, we found the guilty one.
The hyp-stub entry code uses lr (aka x30) as a scratch register to
find out whether we've made it here via an HVC instruction. This is an
absolutely pointless test, because by definition HVC is the only way to
get there.

I've ended up with the following patch:

diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index 8ccdd549f7c7..210bd6b3849d 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -55,12 +55,6 @@ ENDPROC(__hyp_stub_vectors)
        .align 11
 
 el1_sync:
-       mrs     x30, esr_el2
-       lsr     x30, x30, #ESR_ELx_EC_SHIFT
-
-       cmp     x30, #ESR_ELx_EC_HVC64
-       b.ne    9f                              // Not an HVC trap
-
        cmp     x0, #HVC_SET_VECTORS
        b.ne    2f
        msr     vbar_el2, x1
@@ -120,18 +114,14 @@ ENDPROC(\label)
  */
 
 ENTRY(__hyp_set_vectors)
-       str     lr, [sp, #-16]!
        mov     x1, x0
        mov     x0, #HVC_SET_VECTORS
        hvc     #0
-       ldr     lr, [sp], #16
        ret
 ENDPROC(__hyp_set_vectors)
 
 ENTRY(__hyp_reset_vectors)
-       str     lr, [sp, #-16]!
        mov     x0, #HVC_RESET_VECTORS
        hvc     #0
-       ldr     lr, [sp], #16
        ret
 ENDPROC(__hyp_reset_vectors)
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 1277f81b63b7..5170ce1021da 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -32,17 +32,17 @@
         * Shuffle the parameters before calling the function
         * pointed to in x0. Assumes parameters in x[1,2,3].
         */
+       str     lr, [sp, #-16]!
        mov     lr, x0
        mov     x0, x1
        mov     x1, x2
        mov     x2, x3
        blr     lr
+       ldr     lr, [sp], #16
 .endm
 
 ENTRY(__vhe_hyp_call)
-       str     lr, [sp, #-16]!
        do_el2_call
-       ldr     lr, [sp], #16
        /*
         * We used to rely on having an exception return to get
         * an implicit isb. In the E2H case, we don't have it anymore.

I'll split that in convenient patches and repost the series.

Thanks,

        M.
-- 
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
[email protected]
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm