On Mon, Oct 26, 2020 at 01:34:40PM +0000, Marc Zyngier wrote:
> On SMC trap, the prefered return address is set to that of the SMC
> instruction itself. It is thus wrong to tyr and roll it back when

Typo: s/tyr/try/

> an SError occurs while trapping on SMC. It is still necessary on
> HVC though, as HVC doesn't cause a trap, and sets ELR to returning
> *after* the HVC.
> 
> It also became apparent that the is 16bit encoding for an AArch32

I guess s/that the is/that there is no/ ?

> HVC instruction, meaning that the displacement is always 4 bytes,
> no matter what the ISA is. Take this opportunity to simplify it.
> 
> Signed-off-by: Marc Zyngier <m...@kernel.org>

Assuming that there is no 16-bit HVC:

Acked-by: Mark Rutland <mark.rutl...@arm.com>
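
For anyone following along, my mental model of the PC semantics being
relied on here (a sketch with made-up guest addresses, not taken from
the patch itself):

    guest @ 0x1000:  hvc #0   ->  ELR_EL2 = 0x1004   // preferred return is *after* the HVC
    guest @ 0x2000:  smc #0   ->  ELR_EL2 = 0x2000   // trapped, preferred return is the SMC itself

so only the HVC case needs the PC rewound before the SError is
injected, and since there is no 16-bit AArch32 HVC encoding the
rewind is always 4 bytes.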

Mark.

> ---
>  arch/arm64/kvm/handle_exit.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 5d690d60ccad..79a720657c47 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -245,15 +245,15 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
>               u8 esr_ec = ESR_ELx_EC(kvm_vcpu_get_esr(vcpu));
>  
>               /*
> -              * HVC/SMC already have an adjusted PC, which we need
> -              * to correct in order to return to after having
> -              * injected the SError.
> +              * HVC already have an adjusted PC, which we need to
> +              * correct in order to return to after having injected
> +              * the SError.
> +              *
> +              * SMC, on the other hand, is *trapped*, meaning its
> +              * preferred return address is the SMC itself.
>                */
> -             if (esr_ec == ESR_ELx_EC_HVC32 || esr_ec == ESR_ELx_EC_HVC64 ||
> -                 esr_ec == ESR_ELx_EC_SMC32 || esr_ec == ESR_ELx_EC_SMC64) {
> -                     u32 adj =  kvm_vcpu_trap_il_is32bit(vcpu) ? 4 : 2;
> -                     *vcpu_pc(vcpu) -= adj;
> -             }
> +             if (esr_ec == ESR_ELx_EC_HVC32 || esr_ec == ESR_ELx_EC_HVC64)
> +                     *vcpu_pc(vcpu) -= 4;
>  
>               return 1;
>       }
> -- 
> 2.28.0
> 