Hi Ard,

Some comments inline!

On Wed, May 11, 2016 at 10:41:57AM +0200, Ard Biesheuvel wrote:
> Instead of cleaning the data cache to the PoU by virtual address and
> subsequently invalidating the entire I-cache, invalidate only the
> range that we just cleaned. This way, we don't invalidate other
> cachelines unnecessarily.
>
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: Ard Biesheuvel <[email protected]>
> ---
>  ArmPkg/Include/Library/ArmLib.h                                | 10 ++++++++--
>  ArmPkg/Library/ArmCacheMaintenanceLib/ArmCacheMaintenanceLib.c |  3 ++-
>  ArmPkg/Library/ArmLib/AArch64/AArch64Support.S                 |  5 +++++
>  ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.S                     |  5 +++++
>  ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.asm                   |  6 ++++++
>  5 files changed, 26 insertions(+), 3 deletions(-)
>
> diff --git a/ArmPkg/Include/Library/ArmLib.h b/ArmPkg/Include/Library/ArmLib.h
> index 1689f0072db6..4608b0cccccc 100644
> --- a/ArmPkg/Include/Library/ArmLib.h
> +++ b/ArmPkg/Include/Library/ArmLib.h
> @@ -183,13 +183,19 @@ ArmInvalidateDataCacheEntryByMVA (
>
>  VOID
>  EFIAPI
> -ArmCleanDataCacheEntryToPoUByMVA(
> +ArmCleanDataCacheEntryToPoUByMVA (
>    IN  UINTN   Address
>    );
>
>  VOID
>  EFIAPI
> -ArmCleanDataCacheEntryByMVA(
> +ArmInvalidateInstructionCacheEntryToPoUByMVA (
> +  IN  UINTN   Address
> +  );
> +
> +VOID
> +EFIAPI
> +ArmCleanDataCacheEntryByMVA (
>  IN  UINTN   Address
>  );
>
> diff --git a/ArmPkg/Library/ArmCacheMaintenanceLib/ArmCacheMaintenanceLib.c b/ArmPkg/Library/ArmCacheMaintenanceLib/ArmCacheMaintenanceLib.c
> index 1045f9068f4d..cc7555061428 100644
> --- a/ArmPkg/Library/ArmCacheMaintenanceLib/ArmCacheMaintenanceLib.c
> +++ b/ArmPkg/Library/ArmCacheMaintenanceLib/ArmCacheMaintenanceLib.c
> @@ -65,7 +65,8 @@ InvalidateInstructionCacheRange (
>    )
>  {
>    CacheRangeOperation (Address, Length, ArmCleanDataCacheEntryToPoUByMVA);
> -  ArmInvalidateInstructionCache ();
> +  CacheRangeOperation (Address, Length,
> +    ArmInvalidateInstructionCacheEntryToPoUByMVA);

Afaics, CacheRangeOperation() always uses the data cache line length. That will
match the I-cache line length in most cases, but it would be more correct to use
ArmInstructionCacheLineLength() for the instruction cache pass. I suggest adding
a cache line length parameter to CacheRangeOperation() so the caller can make
the distinction.

Also, from my reading of the ARMv8 ARM sections B2.6.5 & E2.6.5, we need an ISB
at the end of the whole sequence.
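For reference, the sequence those sections describe for a single line at x0
looks roughly like the following (the range loop and line-size discovery via
CTR_EL0 are omitted; this is a sketch, not the patch's code):

```asm
dc   cvau, x0        // clean D-cache line to PoU
dsb  ish             // make the clean visible before the I-side op
ic   ivau, x0        // invalidate I-cache line to PoU
dsb  ish             // ensure the IC IVAU completes
isb                  // resynchronize the instruction stream
```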

>    return Address;
>  }
>
> diff --git a/ArmPkg/Library/ArmLib/AArch64/AArch64Support.S b/ArmPkg/Library/ArmLib/AArch64/AArch64Support.S
> index 43f7a795acec..9441f47e30ba 100644
> --- a/ArmPkg/Library/ArmLib/AArch64/AArch64Support.S
> +++ b/ArmPkg/Library/ArmLib/AArch64/AArch64Support.S
> @@ -23,6 +23,7 @@ GCC_ASM_EXPORT (ArmInvalidateInstructionCache)
>  GCC_ASM_EXPORT (ArmInvalidateDataCacheEntryByMVA)
>  GCC_ASM_EXPORT (ArmCleanDataCacheEntryByMVA)
>  GCC_ASM_EXPORT (ArmCleanDataCacheEntryToPoUByMVA)
> +GCC_ASM_EXPORT (ArmInvalidateInstructionCacheEntryToPoUByMVA)
>  GCC_ASM_EXPORT (ArmCleanInvalidateDataCacheEntryByMVA)
>  GCC_ASM_EXPORT (ArmInvalidateDataCacheEntryBySetWay)
>  GCC_ASM_EXPORT (ArmCleanDataCacheEntryBySetWay)
> @@ -80,6 +81,10 @@ ASM_PFX(ArmCleanDataCacheEntryToPoUByMVA):
>    dc      cvau, x0    // Clean single data cache line to PoU
>    ret
>
> +ASM_PFX(ArmInvalidateInstructionCacheEntryToPoUByMVA):
> +  ic      ivau, x0    // Invalidate single instruction cache line to PoU
> +  ret
> +
>
>  ASM_PFX(ArmCleanInvalidateDataCacheEntryByMVA):
>    dc      civac, x0   // Clean and invalidate single data cache line
> diff --git a/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.S b/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.S
> index 50c760f335de..c765032c9e4d 100644
> --- a/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.S
> +++ b/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.S
> @@ -18,6 +18,7 @@
>
>  GCC_ASM_EXPORT (ArmInvalidateInstructionCache)
>  GCC_ASM_EXPORT (ArmInvalidateDataCacheEntryByMVA)
> +GCC_ASM_EXPORT (ArmInvalidateInstructionCacheEntryToPoUByMVA)
>  GCC_ASM_EXPORT (ArmCleanDataCacheEntryByMVA)
>  GCC_ASM_EXPORT (ArmCleanDataCacheEntryToPoUByMVA)
>  GCC_ASM_EXPORT (ArmCleanInvalidateDataCacheEntryByMVA)
> @@ -74,6 +75,10 @@ ASM_PFX(ArmCleanDataCacheEntryToPoUByMVA):
>    mcr     p15, 0, r0, c7, c11, 1  @clean single data cache line to PoU
>    bx      lr
>
> +ASM_PFX(ArmInvalidateInstructionCacheEntryToPoUByMVA):
> +  mcr     p15, 0, r0, c7, c5, 1  @Invalidate single instruction cache line to PoU
> +  mcr     p15, 0, r0, c7, c5, 7  @Invalidate branch predictor

Is it reasonable to assume that for every I-cache invalidation by MVA, the
branch predictor will have to be invalidated for that MVA as well? I guess so!

cheers,
Achin

> +  bx      lr
>
>  ASM_PFX(ArmCleanInvalidateDataCacheEntryByMVA):
>    mcr     p15, 0, r0, c7, c14, 1  @clean and invalidate single data cache line
> diff --git a/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.asm b/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.asm
> index a460bd2da7a9..2363ee457632 100644
> --- a/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.asm
> +++ b/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.asm
> @@ -34,6 +34,12 @@ CTRL_I_BIT      EQU     (1 << 12)
>    bx      lr
>
>
> + RVCT_ASM_EXPORT ArmInvalidateInstructionCacheEntryToPoUByMVA
> +  mcr     p15, 0, r0, c7, c5, 1   ; invalidate single instruction cache line to PoU
> +  mcr     p15, 0, r0, c7, c5, 7   ; invalidate branch predictor
> +  bx      lr
> +
> +
>   RVCT_ASM_EXPORT ArmCleanDataCacheEntryToPoUByMVA
>    mcr     p15, 0, r0, c7, c11, 1  ; clean single data cache line to PoU
>    bx      lr
> --
> 2.7.4
>

_______________________________________________
edk2-devel mailing list
[email protected]
https://lists.01.org/mailman/listinfo/edk2-devel