Re: [U-Boot] [PATCH v4 06/10] ARM: HYP/non-sec: allow relocation to secure RAM
Hi Marc,

On Sat, 26 Apr 2014 13:17:07 +0100, Marc Zyngier <marc.zyng...@arm.com> wrote:

> The current non-sec switching code suffers from one major issue: it
> cannot run in secure RAM, as a large part of u-boot still needs to be
> run while we're switched to non-secure.
>
> This patch reworks the whole HYP/non-secure strategy by:
> - making sure the secure code is the *last* thing u-boot executes
>   before entering the payload
> - performing an exception return from secure mode directly into the
>   payload
> - allowing the code to be dynamically relocated to secure RAM before
>   switching to non-secure.
>
> This involves quite a bit of horrible code, especially as u-boot
> relocation is quite primitive.
>
> Signed-off-by: Marc Zyngier <marc.zyng...@arm.com>
> ---

This one causes a minor warning to appear when building aarch64 board
vexpress_aemv8a:

/home/albert.u.boot/src/u-boot-arm/arch/arm/lib/bootm.c:189:13: warning:
'do_nonsec_virt_switch' defined but not used [-Wunused-function]
 static void do_nonsec_virt_switch(void)
             ^

Can you look into removing this warning? Thanks in advance.

Amicalement,
--
Albert.

___
U-Boot mailing list
U-Boot@lists.denx.de
http://lists.denx.de/mailman/listinfo/u-boot
Re: [U-Boot] [PATCH v4 06/10] ARM: HYP/non-sec: allow relocation to secure RAM
On Fri, May 02 2014 at 9:30:05 pm BST, Jon Loeliger <loeli...@gmail.com> wrote:

Hi Jon,

> I finally have all this working for me on an A9 system too!

Awesome! Ship it! ;-)

> However, there were a few things that I had to change a bit. For
> example, my CPUs will always come out of reset at 0x0 and I do not
> have the ability to set their first-fetch address to anything else.
> To accommodate this, I need to ensure that the _monitor_vectors are
> loaded at address 0x0, and that the first entry in the exception
> vector (for reset) jumps to some notion of secure_reset code.
>
> So I changed this code:
>
> diff --git a/arch/arm/cpu/armv7/nonsec_virt.S b/arch/arm/cpu/armv7/nonsec_virt.S
> index b5c946f..2a43e3c 100644
> --- a/arch/arm/cpu/armv7/nonsec_virt.S
> +++ b/arch/arm/cpu/armv7/nonsec_virt.S
> @@ -10,10 +10,13 @@
>  #include <linux/linkage.h>
>  #include <asm/gic.h>
>  #include <asm/armv7.h>
> +#include <asm/proc-armv/ptrace.h>
>
>  .arch_extension sec
>  .arch_extension virt
>
> +	.pushsection ._secure.text, "ax"
> +
>  	.align	5
>  /* the vector table for secure state and HYP mode */
>  _monitor_vectors:
> @@ -22,51 +25,86 @@ _monitor_vectors:
>  	adr	pc, _secure_monitor
>  	.word 0
>  	.word 0
> -	adr	pc, _hyp_trap
> +	.word 0
>  	.word 0
>  	.word 0
>
> +.macro is_cpu_virt_capable tmp
> +	mrc	p15, 0, \tmp, c0, c1, 1		@ read ID_PFR1
> +	and	\tmp, \tmp, #CPUID_ARM_VIRT_MASK	@ mask virtualization bits
> +	cmp	\tmp, #(1 << CPUID_ARM_VIRT_SHIFT)
> +.endm
>
> So that it did this too:
>
> @@ -20,15 +20,23 @@
>  	.align	5
>  /* the vector table for secure state and HYP mode */
>  _monitor_vectors:
> -	.word 0		/* reset */
> -	.word 0		/* undef */
> -	adr	pc, _secure_monitor
> +	ldr	pc, _secure_reset	/* reset */
> +	.word 0			/* undef */
> +	adr	pc, _secure_monitor	/* SMC */
>  	.word 0
>  	.word 0
>  	.word 0
>  	.word 0
>  	.word 0
> +
> +_secure_reset:
> +#ifdef CONFIG_SECURE_MONITOR_RESET_FUNCTION
> +	.word	CONFIG_SECURE_MONITOR_RESET_FUNCTION
> +#else
> +	.word	0
> +#endif
> +
>  .macro is_cpu_virt_capable tmp
>
> That enabled me to define CONFIG_SECURE_MONITOR_RESET_FUNCTION in my
> config header file:
>
> /*
>  * With the Secure Monitor at 0x0, its reset vector must also
>  * then point off to the correct out-of-reset entry function.
>  */
> #define CONFIG_SECURE_MONITOR_RESET_FUNCTION	_myplatform_cpu_entry
> #define CONFIG_ARMV7_SECURE_BASE		0x0
>
> That _myplatform_cpu_entry corresponds to your sunxi_cpu_entry code.

Yup, makes sense. Nit-pick: make the _secure_reset a weak symbol that
your platform code will overload, just like the rest of the PSCI stuff.
Saves the #ifdef horror ;-)

> So, yeah, I know that isn't a proper patch and all. :-) I'm just
> sending you more information to ponder for this patch series! If you
> would like to generalize your patch this way, please feel free to do
> so. If not, I can send a proper patch after this hits mainline or so.

My preferred way would indeed be to have a proper patch on top of this
to handle the coming-out-of-reset case. You'll get proper credit for
the idea! :-)

Thanks,

M.
--
Without deviation from the norm, progress is not possible.
Re: [U-Boot] [PATCH v4 06/10] ARM: HYP/non-sec: allow relocation to secure RAM
On Fri, May 02 2014 at 10:03:37 pm BST, Jon Loeliger <loeli...@gmail.com> wrote:

> Marc,
>
> In your nonsec_init code, you suggest this change:
>
> +	mrc	p15, 0, r0, c1, c1, 2
>  	movw	r1, #0x3fff
> -	movt	r1, #0x0006
> -	mcr	p15, 0, r1, c1, c1, 2	@ NSACR = all copros to non-sec
> +	movt	r1, #0x0004
> +	orr	r0, r0, r1
> +	mcr	p15, 0, r0, c1, c1, 2	@ NSACR = all copros to non-sec
>
> Leaving:
>
> 	mrc	p15, 0, r0, c1, c1, 2
> 	movw	r1, #0x3fff
> 	movt	r1, #0x0004
> 	orr	r0, r0, r1
> 	mcr	p15, 0, r0, c1, c1, 2	@ NSACR = all copros to non-sec
>
> That sets all the co-processor bits, but the man page suggests that only

Just to be clear: which document are you referring to?

> copros with bits 10 and 11 should be modified. It also seems that if the

The ARM ARM says that NSACR[13:0] is either RAZ/WI or writable from
secure for unimplemented coprocessors. So I believe the above is safe.

If you wanted to be really picky, you'd start by reading CPACR, write
either 1 or 3 to all the CPn fields, read it back again, see what
sticks, and populate NSACR accordingly. Did I hear someone saying
"Boring"? ;-)

> PLE is enabled, we should mark it NS-enabled at bit 16 also. Perhaps:
>
> 	mrc	p15, 0, r0, c1, c1, 2
> 	movw	r1, #0x0c00
> 	movt	r1, #0x0005
> 	orr	r0, r0, r1
> 	mcr	p15, 0, r0, c1, c1, 2	@ NSACR = all copros to non-sec

We're getting into IMPDEF territory pretty quickly here. PLE only
exists on the A9, and is optional there (and probably doesn't exist on
all versions, if memory serves well...). This could be implemented as a
per-platform optional feature, though.

What do you think?

M.
--
Without deviation from the norm, progress is not possible.
Re: [U-Boot] [PATCH v4 06/10] ARM: HYP/non-sec: allow relocation to secure RAM
On Wed, May 7, 2014 at 2:05 AM, Marc Zyngier <marc.zyng...@arm.com> wrote:

>> That sets all the co-processor bits, but the man page suggests that
>> only copros with bits 10 and 11 should be modified.
>
> Just to be clear: which document are you referring to?

Hmm... Lessee.. Uh, this one:

http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0388i/CIHEAIAJ.html

So, Cortex-A9 TRM 4.3.13. That one happens to be r4p1, but the
description is the same for my part rev (r2p9, IIRC). Anyway, those low
bits are marked as UNK/SBZP, hence my concern for the apparent extra ON
bits.

> The ARM ARM says that NSACR[13:0] is either RAZ/WI or writable from
> secure for unimplemented coprocessors.

The ARM ARM uber alles. :-)

> So I believe the above is safe. If you wanted to be really picky,
> you'd start by reading CPACR, write either 1 or 3 to all the CPn
> fields, read it back again, see what sticks, and populate NSACR
> accordingly. Did I hear someone saying "Boring"? ;-)

I'm sorry, did you say something? Sounded like you said Waw-waw waw-wah
CPn waw ...

> We're getting into IMPDEF territory pretty quickly here. PLE only
> exists on the A9, and is optional there (and probably doesn't exist on
> all versions, if memory serves well...).

Ah. Gotcha. Blah blah Osprey ah-chew! Gesundheit!

> This could be implemented as a per-platform optional feature, though.
> What do you think?

I think we should all convert to A57 on a Dickens ring and be done.

In the meantime, it's likely not worth it to be this picky about the
darn PLE bits, nor the rest of the NSACR bits. Especially if the ARM
ARM says we can let it slide.

> M.
> --
> Without deviation from the norm, progress is not possible.

Yes, yes, Everybody got to deviate from the norm.

jdl
Re: [U-Boot] [PATCH v4 06/10] ARM: HYP/non-sec: allow relocation to secure RAM
>> /*
>>  * With the Secure Monitor at 0x0, its reset vector must also
>>  * then point off to the correct out-of-reset entry function.
>>  */
>> #define CONFIG_SECURE_MONITOR_RESET_FUNCTION	_myplatform_cpu_entry
>> #define CONFIG_ARMV7_SECURE_BASE		0x0
>>
>> That _myplatform_cpu_entry corresponds to your sunxi_cpu_entry code.
>
> Yup, makes sense. Nit-pick: make the _secure_reset a weak symbol that
> your platform code will overload, just like the rest of the PSCI stuff.
> Saves the #ifdef horror ;-)

Oh, good idea. I'll add that bit in. Thanks!

>> So, yeah, I know that isn't a proper patch and all. :-) I'm just
>> sending you more information to ponder for this patch series! If you
>> would like to generalize your patch this way, please feel free to do
>> so. If not, I can send a proper patch after this hits mainline or so.
>
> My preferred way would indeed be to have a proper patch on top of this
> to handle the coming-out-of-reset case. You'll get proper credit for
> the idea! :-)

Will do.

Thanks,
jdl
Re: [U-Boot] [PATCH v4 06/10] ARM: HYP/non-sec: allow relocation to secure RAM
Marc,

I finally have all this working for me on an A9 system too!

However, there were a few things that I had to change a bit. For
example, my CPUs will always come out of reset at 0x0 and I do not have
the ability to set their first-fetch address to anything else. To
accommodate this, I need to ensure that the _monitor_vectors are loaded
at address 0x0, and that the first entry in the exception vector (for
reset) jumps to some notion of secure_reset code.

So I changed this code:

diff --git a/arch/arm/cpu/armv7/nonsec_virt.S b/arch/arm/cpu/armv7/nonsec_virt.S
index b5c946f..2a43e3c 100644
--- a/arch/arm/cpu/armv7/nonsec_virt.S
+++ b/arch/arm/cpu/armv7/nonsec_virt.S
@@ -10,10 +10,13 @@
 #include <linux/linkage.h>
 #include <asm/gic.h>
 #include <asm/armv7.h>
+#include <asm/proc-armv/ptrace.h>

 .arch_extension sec
 .arch_extension virt

+	.pushsection ._secure.text, "ax"
+
 	.align	5
 /* the vector table for secure state and HYP mode */
 _monitor_vectors:
@@ -22,51 +25,86 @@ _monitor_vectors:
 	adr	pc, _secure_monitor
 	.word 0
 	.word 0
-	adr	pc, _hyp_trap
+	.word 0
 	.word 0
 	.word 0

+.macro is_cpu_virt_capable tmp
+	mrc	p15, 0, \tmp, c0, c1, 1		@ read ID_PFR1
+	and	\tmp, \tmp, #CPUID_ARM_VIRT_MASK	@ mask virtualization bits
+	cmp	\tmp, #(1 << CPUID_ARM_VIRT_SHIFT)
+.endm

So that it did this too:

@@ -20,15 +20,23 @@
 	.align	5
 /* the vector table for secure state and HYP mode */
 _monitor_vectors:
-	.word 0		/* reset */
-	.word 0		/* undef */
-	adr	pc, _secure_monitor
+	ldr	pc, _secure_reset	/* reset */
+	.word 0			/* undef */
+	adr	pc, _secure_monitor	/* SMC */
 	.word 0
 	.word 0
 	.word 0
 	.word 0
 	.word 0
+
+_secure_reset:
+#ifdef CONFIG_SECURE_MONITOR_RESET_FUNCTION
+	.word	CONFIG_SECURE_MONITOR_RESET_FUNCTION
+#else
+	.word	0
+#endif
+
 .macro is_cpu_virt_capable tmp

That enabled me to define CONFIG_SECURE_MONITOR_RESET_FUNCTION in my
config header file:

/*
 * With the Secure Monitor at 0x0, its reset vector must also
 * then point off to the correct out-of-reset entry function.
 */
#define CONFIG_SECURE_MONITOR_RESET_FUNCTION	_myplatform_cpu_entry
#define CONFIG_ARMV7_SECURE_BASE		0x0

That _myplatform_cpu_entry corresponds to your sunxi_cpu_entry code.

So, yeah, I know that isn't a proper patch and all. :-) I'm just
sending you more information to ponder for this patch series! If you
would like to generalize your patch this way, please feel free to do
so. If not, I can send a proper patch after this hits mainline or so.

HTH,
jdl

On Sat, Apr 26, 2014 at 7:17 AM, Marc Zyngier <marc.zyng...@arm.com> wrote:
> The current non-sec switching code suffers from one major issue: it
> cannot run in secure RAM, as a large part of u-boot still needs to be
> run while we're switched to non-secure.
>
> This patch reworks the whole HYP/non-secure strategy by:
> - making sure the secure code is the *last* thing u-boot executes
>   before entering the payload
> - performing an exception return from secure mode directly into the
>   payload
> - allowing the code to be dynamically relocated to secure RAM before
>   switching to non-secure.
>
> This involves quite a bit of horrible code, especially as u-boot
> relocation is quite primitive.
>
> Signed-off-by: Marc Zyngier <marc.zyng...@arm.com>
> ---
>  arch/arm/cpu/armv7/nonsec_virt.S | 161 +++
>  arch/arm/cpu/armv7/virt-v7.c     |  59 +-
>  arch/arm/include/asm/armv7.h     |  10 ++-
>  arch/arm/include/asm/secure.h    |  26 +++
>  arch/arm/lib/bootm.c             |  22 +++---
>  5 files changed, 138 insertions(+), 140 deletions(-)
>  create mode 100644 arch/arm/include/asm/secure.h
>
> diff --git a/arch/arm/cpu/armv7/nonsec_virt.S b/arch/arm/cpu/armv7/nonsec_virt.S
> index b5c946f..2a43e3c 100644
> [...]
>  /*
>   * secure monitor handler
>   * U-boot calls this software interrupt in start.S
>   * This is executed on a smc instruction, we use a smc #0 to switch
>   * to non-secure state.
> - * We use
Re: [U-Boot] [PATCH v4 06/10] ARM: HYP/non-sec: allow relocation to secure RAM
Marc,

In your nonsec_init code, you suggest this change:

+	mrc	p15, 0, r0, c1, c1, 2
 	movw	r1, #0x3fff
-	movt	r1, #0x0006
-	mcr	p15, 0, r1, c1, c1, 2	@ NSACR = all copros to non-sec
+	movt	r1, #0x0004
+	orr	r0, r0, r1
+	mcr	p15, 0, r0, c1, c1, 2	@ NSACR = all copros to non-sec

Leaving:

	mrc	p15, 0, r0, c1, c1, 2
	movw	r1, #0x3fff
	movt	r1, #0x0004
	orr	r0, r0, r1
	mcr	p15, 0, r0, c1, c1, 2	@ NSACR = all copros to non-sec

That sets all the co-processor bits, but the man page suggests that
only copros with bits 10 and 11 should be modified. It also seems that
if the PLE is enabled, we should mark it NS-enabled at bit 16 also.
Perhaps:

	mrc	p15, 0, r0, c1, c1, 2
	movw	r1, #0x0c00
	movt	r1, #0x0005
	orr	r0, r0, r1
	mcr	p15, 0, r0, c1, c1, 2	@ NSACR = all copros to non-sec

HTH,
jdl

On Fri, May 2, 2014 at 3:30 PM, Jon Loeliger <loeli...@gmail.com> wrote:
> Marc,
>
> I finally have all this working for me on an A9 system too!
> [...]
> So, yeah, I know that isn't a proper patch and all. :-) I'm just
> sending you more information to ponder for this patch series! If you
> would like to generalize your patch this way, please feel free to do
> so. If not, I can send a proper patch after this hits mainline or so.
>
> HTH,
> jdl
[U-Boot] [PATCH v4 06/10] ARM: HYP/non-sec: allow relocation to secure RAM
The current non-sec switching code suffers from one major issue: it
cannot run in secure RAM, as a large part of u-boot still needs to be
run while we're switched to non-secure.

This patch reworks the whole HYP/non-secure strategy by:
- making sure the secure code is the *last* thing u-boot executes
  before entering the payload
- performing an exception return from secure mode directly into the
  payload
- allowing the code to be dynamically relocated to secure RAM before
  switching to non-secure.

This involves quite a bit of horrible code, especially as u-boot
relocation is quite primitive.

Signed-off-by: Marc Zyngier <marc.zyng...@arm.com>
---
 arch/arm/cpu/armv7/nonsec_virt.S | 161 +++
 arch/arm/cpu/armv7/virt-v7.c     |  59 +-
 arch/arm/include/asm/armv7.h     |  10 ++-
 arch/arm/include/asm/secure.h    |  26 +++
 arch/arm/lib/bootm.c             |  22 +++---
 5 files changed, 138 insertions(+), 140 deletions(-)
 create mode 100644 arch/arm/include/asm/secure.h

diff --git a/arch/arm/cpu/armv7/nonsec_virt.S b/arch/arm/cpu/armv7/nonsec_virt.S
index b5c946f..2a43e3c 100644
--- a/arch/arm/cpu/armv7/nonsec_virt.S
+++ b/arch/arm/cpu/armv7/nonsec_virt.S
@@ -10,10 +10,13 @@
 #include <linux/linkage.h>
 #include <asm/gic.h>
 #include <asm/armv7.h>
+#include <asm/proc-armv/ptrace.h>

 .arch_extension sec
 .arch_extension virt

+	.pushsection ._secure.text, "ax"
+
 	.align	5
 /* the vector table for secure state and HYP mode */
 _monitor_vectors:
@@ -22,51 +25,86 @@ _monitor_vectors:
 	adr	pc, _secure_monitor
 	.word 0
 	.word 0
-	adr	pc, _hyp_trap
+	.word 0
 	.word 0
 	.word 0

+.macro is_cpu_virt_capable tmp
+	mrc	p15, 0, \tmp, c0, c1, 1		@ read ID_PFR1
+	and	\tmp, \tmp, #CPUID_ARM_VIRT_MASK	@ mask virtualization bits
+	cmp	\tmp, #(1 << CPUID_ARM_VIRT_SHIFT)
+.endm
+
 /*
  * secure monitor handler
  * U-boot calls this software interrupt in start.S
  * This is executed on a smc instruction, we use a smc #0 to switch
  * to non-secure state.
- * We use only r0 and r1 here, due to constraints in the caller.
+ * r0, r1, r2: passed to the callee
+ * ip: target PC
  */
 _secure_monitor:
-	mrc	p15, 0, r1, c1, c1, 0		@ read SCR
-	bic	r1, r1, #0x4e			@ clear IRQ, FIQ, EA, nET bits
-	orr	r1, r1, #0x31			@ enable NS, AW, FW bits
+	mrc	p15, 0, r5, c1, c1, 0		@ read SCR
+	bic	r5, r5, #0x4e			@ clear IRQ, FIQ, EA, nET bits
+	orr	r5, r5, #0x31			@ enable NS, AW, FW bits

-	mrc	p15, 0, r0, c0, c1, 1		@ read ID_PFR1
-	and	r0, r0, #CPUID_ARM_VIRT_MASK	@ mask virtualization bits
-	cmp	r0, #(1 << CPUID_ARM_VIRT_SHIFT)
+	mov	r6, #SVC_MODE			@ default mode is SVC
+	is_cpu_virt_capable r4
 #ifdef CONFIG_ARMV7_VIRT
-	orreq	r1, r1, #0x100			@ allow HVC instruction
+	orreq	r5, r5, #0x100			@ allow HVC instruction
+	moveq	r6, #HYP_MODE			@ Enter the kernel as HYP
 #endif

-	mcr	p15, 0, r1, c1, c1, 0		@ write SCR (with NS bit set)
+	mcr	p15, 0, r5, c1, c1, 0		@ write SCR (with NS bit set)
 	isb

-#ifdef CONFIG_ARMV7_VIRT
-	mrceq	p15, 0, r0, c12, c0, 1		@ get MVBAR value
-	mcreq	p15, 4, r0, c12, c0, 0		@ write HVBAR
-#endif
 	bne	1f

 	@ Reset CNTVOFF to 0 before leaving monitor mode
-	mrc	p15, 0, r0, c0, c1, 1		@ read ID_PFR1
-	ands	r0, r0, #CPUID_ARM_GENTIMER_MASK	@ test arch timer bits
-	movne	r0, #0
-	mcrrne	p15, 4, r0, r0, c14		@ Reset CNTVOFF to zero
+	mrc	p15, 0, r4, c0, c1, 1		@ read ID_PFR1
+	ands	r4, r4, #CPUID_ARM_GENTIMER_MASK	@ test arch timer bits
+	movne	r4, #0
+	mcrrne	p15, 4, r4, r4, c14		@ Reset CNTVOFF to zero
 1:
-	movs	pc, lr				@ return to non-secure SVC
-
-_hyp_trap:
-	mrs	lr, elr_hyp	@ for older asm: .byte 0x00, 0xe3, 0x0e, 0xe1
-	mov	pc, lr				@ do no switch modes, but
-						@ return to caller
-
+	mov	lr, ip
+	mov	ip, #(F_BIT | I_BIT | A_BIT)	@ Set A, I and F
+	tst	lr, #1				@ Check for Thumb PC
+	orrne	ip, ip, #T_BIT			@ Set T if Thumb
+	orr	ip, ip, r6			@ Slot target mode in
+	msr	spsr_cxfs, ip			@ Set full SPSR
+	movs	pc, lr				@ ERET to non-secure
+
+ENTRY(_do_nonsec_entry)
+	mov	ip, r0
+	mov	r0, r1
+	mov	r1, r2
+	mov	r2, r3
+	smc	#0