On Mon, Jan 18, 2021 at 09:45:26AM +0000, Marc Zyngier wrote:
> diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
> index 59820f9b8522..bbab2148a2a2 100644
> --- a/arch/arm64/kernel/hyp-stub.S
> +++ b/arch/arm64/kernel/hyp-stub.S
> @@ -77,13 +77,24 @@ SYM_CODE_END(el1_sync)
>  SYM_CODE_START_LOCAL(mutate_to_vhe)
>       // Sanity check: MMU *must* be off
>       mrs     x0, sctlr_el2
> -     tbnz    x0, #0, 1f
> +     tbnz    x0, #0, 2f
>  
>       // Needs to be VHE capable, obviously
>       mrs     x0, id_aa64mmfr1_el1
>       ubfx    x0, x0, #ID_AA64MMFR1_VHE_SHIFT, #4
> -     cbz     x0, 1f
> +     cbz     x0, 2f
>  
> +     // Check whether VHE is disabled from the command line
> +     adr_l   x1, id_aa64mmfr1_val
> +     ldr     x0, [x1]
> +     adr_l   x1, id_aa64mmfr1_mask
> +     ldr     x1, [x1]
> +     ubfx    x0, x0, #ID_AA64MMFR1_VHE_SHIFT, #4
> +     ubfx    x1, x1, #ID_AA64MMFR1_VHE_SHIFT, #4
> +     cbz     x1, 1f
> +     and     x0, x0, x1
> +     cbz     x0, 2f
> +1:

I can see the advantage of the separate id_aa64mmfr1_val/mask
variables, but we could use some asm-offsets here and keep the pointer
indirection simpler in the C code. You'd just need something like
'adr_l x1, mmfr1_ovrd + VAL_OFFSET'.
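Roughly, something like this (the struct and offset names below are
made up for illustration, I haven't checked them against your series):

	/* C side: one override object instead of two variables */
	struct mmfr1_ovrd {
		u64	val;
		u64	mask;
	};

	/* asm-offsets.c: DEFINE() comes from <linux/kbuild.h> */
	DEFINE(MMFR1_OVRD_VAL,  offsetof(struct mmfr1_ovrd, val));
	DEFINE(MMFR1_OVRD_MASK, offsetof(struct mmfr1_ovrd, mask));

The assembly side then becomes one adr_l plus two offset loads:

	adr_l	x1, mmfr1_ovrd
	ldr	x0, [x1, #MMFR1_OVRD_VAL]
	ldr	x1, [x1, #MMFR1_OVRD_MASK]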

Anyway, if you have a strong preference for the current approach, leave
it as is.

-- 
Catalin