From: Chali Anis <chalian...@gmail.com>

relocate_to_adr copies executable code around and thus needs to
ensure coherence between I$ and D$. When the function was first added,
it didn't maintain cache coherence correctly: while it did call
arm_early_mmu_cache_flush(), back then that function did not invalidate
the I$ after the D$ clean.

This likely went unnoticed, because a comment in relocate_to_adr
suggested that ic ivau invalidates the whole I$, but in reality that
instruction only invalidates the single I$ cache line corresponding to
virtual address 0, if such a line exists.

Back in 2019, sync_caches_for_execution() was introduced, which
correctly invalidates the I$ after the D$ clean, but the invalidation
of address 0 remained.

On a 64-bit Tegra SoC with barebox running as an EFI payload, it was
observed that this instruction triggered a translation fault[1] at
address 0. The reason behind that is not completely understood, but the
fault is fixed by removing these two lines, which are erroneous anyway,
so let's do that.

[1]: https://esr.arm64.dev/#0x96000147

Fixes: 868df08038a9 ("ARM: aarch64: Add relocation support")
Signed-off-by: Chali Anis <chalian...@gmail.com>
Signed-off-by: Ahmad Fatoum <a.fat...@barebox.org>
---
 arch/arm/cpu/setupc_64.S | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/arm/cpu/setupc_64.S b/arch/arm/cpu/setupc_64.S
index 2138c2a600fa..fd95187a0422 100644
--- a/arch/arm/cpu/setupc_64.S
+++ b/arch/arm/cpu/setupc_64.S
@@ -63,9 +63,6 @@ ENTRY(relocate_to_adr)
 
        bl      sync_caches_for_execution
 
-       mov     x0,#0
-       ic      ivau, x0        /* flush icache */
-
        adr_l   x0, 1f
        sub     x0, x0, x20
        add     x0, x0, x21
-- 
2.47.3
