On 09/07/2015 06:17 PM, Ard Biesheuvel wrote:
On 7 September 2015 at 10:41, Heyi Guo <[email protected]> wrote:
On 09/06/2015 07:43 PM, Ard Biesheuvel wrote:
On 6 September 2015 at 10:15, Heyi Guo <[email protected]> wrote:
The code has a simple bug in calculating the aligned page table address:
we need to add (alignment - 1) to the allocated address first and then
mask off the unaligned bits.
Nice find!
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Heyi Guo <[email protected]>
Cc: Leif Lindholm <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
---
ArmPkg/Library/ArmLib/AArch64/AArch64Mmu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/ArmPkg/Library/ArmLib/AArch64/AArch64Mmu.c b/ArmPkg/Library/ArmLib/AArch64/AArch64Mmu.c
index 3d58d5d..4db4bbe 100644
--- a/ArmPkg/Library/ArmLib/AArch64/AArch64Mmu.c
+++ b/ArmPkg/Library/ArmLib/AArch64/AArch64Mmu.c
@@ -381,7 +381,7 @@ GetBlockEntryListFromAddress (
if (TranslationTable == NULL) {
return NULL;
}
- TranslationTable = (UINT64*)((UINTN)TranslationTable & TT_ADDRESS_MASK_DESCRIPTION_TABLE);
+ TranslationTable = (UINT64*)(((UINTN)TranslationTable + TT_ALIGNMENT_DESCRIPTION_TABLE - 1) & TT_ADDRESS_MASK_DESCRIPTION_TABLE);
Could we do (TranslationTable | (TT_ALIGNMENT_DESCRIPTION_TABLE - 1)) + 1
instead of using two different symbolic constants?
// Populate the newly created lower level table
SubTableBlockEntry = TranslationTable;
@@ -409,7 +409,7 @@ GetBlockEntryListFromAddress (
if (TranslationTable == NULL) {
return NULL;
}
- TranslationTable = (UINT64*)((UINTN)TranslationTable & TT_ADDRESS_MASK_DESCRIPTION_TABLE);
+ TranslationTable = (UINT64*)(((UINTN)TranslationTable + TT_ALIGNMENT_DESCRIPTION_TABLE - 1) & TT_ADDRESS_MASK_DESCRIPTION_TABLE);
ZeroMem (TranslationTable, TT_ENTRY_COUNT * sizeof(UINT64));
I think you missed another instance at line 625
So what is hidden from view by all these symbolic constants is that we
are aligning the return value of AllocatePages() to 4 KB, which is
rather pointless, and actually results in every allocation wasting
4 KB. But that also means the buggy rounding code never produces
incorrect values, so the severity of this bug is low.
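To make the two roundings concrete, here is an illustrative comparison
(not part of the patch; the function names are made up):

//
// Illustrative only. TT_ADDRESS_MASK_DESCRIPTION_TABLE clears the low
// 12 bits, matching the 4 KB table alignment. For any input that is
// already 4 KB aligned, such as the return value of AllocatePages (),
// both functions yield the same address, which is why the buggy
// round-down never returned a wrong value in practice.
//
STATIC
UINTN
RoundDownBuggy (
  IN UINTN  Address
  )
{
  // Old code: mask only, which rounds *down* and would point below
  // the allocation if the input were ever unaligned.
  return Address & TT_ADDRESS_MASK_DESCRIPTION_TABLE;
}

STATIC
UINTN
RoundUpFixed (
  IN UINTN  Address
  )
{
  // Fixed code: bias by (alignment - 1) first, then mask, which
  // rounds *up* and stays inside an over-sized allocation.
  return (Address + TT_ALIGNMENT_DESCRIPTION_TABLE - 1) & TT_ADDRESS_MASK_DESCRIPTION_TABLE;
}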
So instead, could we get a comprehensive patch that:
- replaces all three instances with a call to a static function that
does the allocation and the rounding
- applies the rounding only if the alignment exceeds 4 KB
- fixes the rounding logic (a rough sketch of such a helper follows below)
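Something along these lines might work, as a sketch only (the helper
name and the fixed single-page table size are assumptions, not the
actual patch):

STATIC
UINT64 *
AllocateTranslationTable (
  IN UINTN  Alignment
  )
{
  UINTN  Address;

  if (Alignment <= EFI_PAGE_SIZE) {
    //
    // AllocatePages () output is already 4 KB aligned, so no rounding
    // is needed for a 4 KB (or smaller) alignment.
    //
    return (UINT64 *)AllocatePages (1);
  }

  //
  // Over-allocate so that an aligned address is guaranteed to exist
  // inside the buffer, then round up to it.
  //
  Address = (UINTN)AllocatePages (1 + EFI_SIZE_TO_PAGES (Alignment));
  if (Address == 0) {
    return NULL;
  }

  return (UINT64 *)((Address + Alignment - 1) & ~(Alignment - 1));
}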
Good suggestion to make the code cleaner; thanks :)
Just as a note, you could implement something like:

  Table = AllocatePages (size + rounding)
  RoundedTable = (Table | (Alignment - 1)) + 1
  FreePages (Table, RoundedTable - Table)
  FreePages (<the unused pages at the top>)

to immediately free the pages that you will not use after rounding.
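A fleshed-out version of that idea might look like this (a sketch
under the assumption that FreePages () can release a subrange of an
earlier allocation, as gBS->FreePages () allows; the function name is
made up, and Alignment is assumed to be a power of two no smaller than
EFI_PAGE_SIZE):

STATIC
VOID *
AllocateAlignedTable (
  IN UINTN  Pages,
  IN UINTN  Alignment
  )
{
  UINTN  ExtraPages;
  UINTN  Start;
  UINTN  Aligned;
  UINTN  TopSlack;

  // Over-allocate by enough pages to guarantee an aligned window.
  ExtraPages = EFI_SIZE_TO_PAGES (Alignment);
  Start      = (UINTN)AllocatePages (Pages + ExtraPages);
  if (Start == 0) {
    return NULL;
  }

  // Round up to the first aligned address inside the buffer.
  Aligned = (Start + Alignment - 1) & ~(Alignment - 1);

  // Immediately free the unused pages below the aligned window ...
  if (Aligned > Start) {
    FreePages ((VOID *)Start, EFI_SIZE_TO_PAGES (Aligned - Start));
  }

  // ... and the unused pages above it.
  TopSlack = (Start + EFI_PAGES_TO_SIZE (Pages + ExtraPages))
             - (Aligned + EFI_PAGES_TO_SIZE (Pages));
  if (TopSlack > 0) {
    FreePages ((VOID *)(Aligned + EFI_PAGES_TO_SIZE (Pages)),
      EFI_SIZE_TO_PAGES (TopSlack));
  }

  return (VOID *)Aligned;
}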
Can we just use AllocateAlignedPages in MemoryAllocationLib to get the
same result?
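For reference, a minimal usage sketch of that API (assuming
MemoryAllocationLib is usable in this library's context):

#include <Library/MemoryAllocationLib.h>

//
// AllocateAlignedPages () over-allocates, rounds the address up, and
// frees the slack pages internally, i.e. the same pattern as above.
//
UINT64  *TranslationTable;

TranslationTable = (UINT64 *)AllocateAlignedPages (
                               EFI_SIZE_TO_PAGES (TT_ENTRY_COUNT * sizeof (UINT64)),
                               TT_ALIGNMENT_DESCRIPTION_TABLE
                               );
if (TranslationTable == NULL) {
  return NULL;
}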
Heyi
_______________________________________________
edk2-devel mailing list
[email protected]
https://lists.01.org/mailman/listinfo/edk2-devel