On 07/16/13 20:12, Jordan Justen wrote:
> Hmm, I should have mentioned in the commit log that this is a slightly
> simplified version of the code in
> MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c.
> 
> It's not great that I'm duplicating code here, but apparently with the
> new KVM READONLY support, we can't wait for DxeIpl to rebuild the
> tables in RAM.

Can you perhaps still drop "TotalPagesNum" (my point (1))? It is quite
confusing. You could consider that change part of the simplification,
since what's not there can't diverge.

Thanks,
Laszlo

> 
> -Jordan
> 
> On Tue, Jul 16, 2013 at 4:38 AM, Laszlo Ersek <[email protected]> wrote:
>> On 07/15/13 19:37, Jordan Justen wrote:
>>> Previously we would run using page tables built into the
>>> firmware device.
>>>
>>> If a flash memory is available, it is unsafe for the page
>>> tables to be stored in memory since the processor may try
>>> to write to the page table data structures.
>>>
>>> Additionally, when KVM ROM support is enabled for the
>>> firmware device, then PEI fails to boot when the page
>>> tables are in the firmware device.
>>>
>>> Contributed-under: TianoCore Contribution Agreement 1.0
>>> Signed-off-by: Jordan Justen <[email protected]>
>>> ---
>>>  OvmfPkg/Sec/SecMain.c | 165 ++++++++++++++++++++++++++++++++++++++++++++++++-
>>>  1 file changed, 164 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/OvmfPkg/Sec/SecMain.c b/OvmfPkg/Sec/SecMain.c
>>> index 3882dad..0f2b9c9 100644
>>> --- a/OvmfPkg/Sec/SecMain.c
>>> +++ b/OvmfPkg/Sec/SecMain.c
>>> @@ -31,6 +31,8 @@
>>>
>>>  #include <Ppi/TemporaryRamSupport.h>
>>>
>>> +#include <IndustryStandard/X64Paging.h>
>>> +
>>>  #define SEC_IDT_ENTRY_COUNT  34
>>>
>>>  typedef struct _SEC_IDT_TABLE {
>>> @@ -523,6 +525,160 @@ FindImageBase (
>>>    }
>>>  }
>>>
>>> +#if defined (MDE_CPU_X64)
>>> +/**
>>> +  Allocates and fills in the Page Directory and Page Table Entries to
>>> +  establish a 1:1 Virtual to Physical mapping.
>>> +
>>> +  @param  Location   Memory to build the page tables in
>>> +
>>> +**/
>>> +VOID
>>> +Create4GbIdentityMappingPageTables (
>>> +  VOID *Location
>>> +  )
>>> +{
>>> +  UINT32                                        RegEax;
>>> +  UINT32                                        RegEdx;
>>> +  UINT8                                         PhysicalAddressBits;
>>> +  EFI_PHYSICAL_ADDRESS                          PageAddress;
>>> +  UINTN                                         IndexOfPml4Entries;
>>> +  UINTN                                         IndexOfPdpEntries;
>>> +  UINTN                                         IndexOfPageDirectoryEntries;
>>> +  UINT32                                        NumberOfPml4EntriesNeeded;
>>> +  UINT32                                        NumberOfPdpEntriesNeeded;
>>> +  X64_PAGE_MAP_AND_DIRECTORY_POINTER            *PageMapLevel4Entry;
>>> +  X64_PAGE_MAP_AND_DIRECTORY_POINTER            *PageMap;
>>> +  X64_PAGE_MAP_AND_DIRECTORY_POINTER            *PageDirectoryPointerEntry;
>>> +  X64_PAGE_TABLE_ENTRY                          *PageDirectoryEntry;
>>> +  UINTN                                         TotalPagesNum;
>>> +  UINTN                                         BigPageAddress;
>>> +  BOOLEAN                                       Page1GSupport;
>>> +  X64_PAGE_TABLE_1G_ENTRY                       *PageDirectory1GEntry;
>>> +
>>> +  Page1GSupport = FALSE;
>>> +  AsmCpuid (0x80000000, &RegEax, NULL, NULL, NULL);
>>> +  if (RegEax >= 0x80000001) {
>>> +    AsmCpuid (0x80000001, NULL, NULL, NULL, &RegEdx);
>>> +    if ((RegEdx & BIT26) != 0) {
>>> +      Page1GSupport = TRUE;
>>> +    }
>>> +  }
>>
>> Seems to match Page1GB in "4.1.4 Enumeration of Paging Features by CPUID".
>>
>>> +
>>> +  //
>>> +  // Only build entries for the first 4GB at this stage.
>>> +  //
>>> +  // DXE IPL will build tables for the rest of the processor's
>>> +  // address space.
>>> +  //
>>> +  PhysicalAddressBits = 32;
>>> +
>>> +  //
>>> +  // Calculate the table entries needed.
>>> +  //
>>> +  if (PhysicalAddressBits <= 39 ) {
>>> +    NumberOfPml4EntriesNeeded = 1;
>>
>> "Because a PML4E is identified using bits 47:39 of the linear address,
>> it controls access to a 512-GByte region of the linear-address space."
>>
>> OK.
>>
>> (BTW runaway space before closing paren.)
>>
>>> +    NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 30));
>>
>> "Because a PDPTE is identified using bits 47:30 [actually I think this
>> should say 38:30 like a bit higher up] of the linear address, it
>> controls access to a 1-GByte region of the linear-address space."
>>
>> OK.
>>
>>
>>> +  } else {
>>> +    NumberOfPml4EntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 39));
>>> +    NumberOfPdpEntriesNeeded = 512;
>>
>> This is dead code for now, but I think it is correct. Bits 38:30 of the
>> linear address provide 9 bits as PDPTE index, hence 512 == 2**9. OK.
>>
>>> +  }
>>> +
>>> +  //
>>> +  // Pre-allocate big pages to avoid later allocations.
>>> +  //
>>> +  if (!Page1GSupport) {
>>> +    TotalPagesNum = (NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1;
>>> +  } else {
>>> +    TotalPagesNum = NumberOfPml4EntriesNeeded + 1;
>>> +  }
>>
>> (1) This seemed non-obvious, but thankfully the TotalPagesNum variable
>> is never used after this assignment. Can you drop it? :)
>>
>>
>>> +  BigPageAddress = (UINTN) Location;
>>> +
>>> +  //
>>> +  // By architecture only one PageMapLevel4 exists - so let's allocate storage for it.
>>> +  //
>>> +  PageMap         = (VOID *) BigPageAddress;
>>> +  BigPageAddress += SIZE_4KB;
>>
>> OK, we're placing the PML4 table at TopOfCurrentStack (hopefully that's
>> correctly aligned).
>>
>>> +
>>> +  PageMapLevel4Entry = PageMap;
>>> +  PageAddress        = 0;
>>> +  for (IndexOfPml4Entries = 0; IndexOfPml4Entries < NumberOfPml4EntriesNeeded; IndexOfPml4Entries++, PageMapLevel4Entry++) {
>>> +    //
>>> +    // Each PML4 entry points to a page of Page Directory Pointer entries.
>>> +    // So let's allocate space for them and fill them in in the IndexOfPdpEntries loop.
>>> +    //
>>> +    PageDirectoryPointerEntry = (VOID *) BigPageAddress;
>>> +    BigPageAddress += SIZE_4KB;
>>> +
>>> +    //
>>> +    // Make a PML4 Entry
>>> +    //
>>> +    PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry;
>>
>> OK, the low 12 bits seem to be clear here.
>>
>>> +    PageMapLevel4Entry->Bits.ReadWrite = 1;
>>> +    PageMapLevel4Entry->Bits.Present = 1;
>>
>> I guess zero values for WriteThrough and CacheDisabled are OK...
>>
>>> +
>>> +    if (Page1GSupport) {
>>> +      PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry;
>>> +
>>> +      for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += SIZE_1GB) {
>>> +        //
>>> +        // Fill in the Page Directory entries
>>> +        //
>>> +        PageDirectory1GEntry->Uint64 = (UINT64)PageAddress;
>>> +        PageDirectory1GEntry->Bits.ReadWrite = 1;
>>> +        PageDirectory1GEntry->Bits.Present = 1;
>>> +        PageDirectory1GEntry->Bits.MustBe1 = 1;
>>> +      }
>>
>> (2) You're mapping the full 512 GB here for the current PML4E. Is that
>> your intent?
>>
>>
>> The outermost loop seems OK for 1GB pages. I'd prefer if variables were
>> moved to the narrowest possible block scope, but I've gotten comments
>> before that such conflicts with the coding style.
>>
>>> +    } else {
>>> +      for (IndexOfPdpEntries = 0; IndexOfPdpEntries < NumberOfPdpEntriesNeeded; IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
>>> +        //
>>> +        // Each Directory Pointer entry points to a page of Page Directory entries.
>>> +        // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop.
>>> +        //
>>> +        PageDirectoryEntry = (VOID *) BigPageAddress;
>>> +        BigPageAddress += SIZE_4KB;
>>> +
>>> +        //
>>> +        // Fill in a Page Directory Pointer Entry
>>> +        //
>>> +        PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry;
>>> +        PageDirectoryPointerEntry->Bits.ReadWrite = 1;
>>> +        PageDirectoryPointerEntry->Bits.Present = 1;
>>
>> I see, this is why no fourth type was defined; it also seems to explain
>> the grouping of Reserved / MustBeZero / Available in
>> X64_PAGE_MAP_AND_DIRECTORY_POINTER.
>>
>>> +
>>> +        for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) {
>>> +          //
>>> +          // Fill in the Page Directory entries
>>> +          //
>>> +          PageDirectoryEntry->Uint64 = (UINT64)PageAddress;
>>> +          PageDirectoryEntry->Bits.ReadWrite = 1;
>>> +          PageDirectoryEntry->Bits.Present = 1;
>>> +          PageDirectoryEntry->Bits.MustBe1 = 1;
>>> +        }
>>> +      }
>>
>> Seems OK. For 4GB, we have NumberOfPdpEntriesNeeded==4, and we cover 4 *
>> 512 * 2MB = 4096MB.
>>
>> Also, NumberOfPdpEntriesNeeded<512 implies NumberOfPml4EntriesNeeded==1;
>> good.
>>
>>> +
>>> +      for (; IndexOfPdpEntries < 512; IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
>>> +        ZeroMem (
>>> +          PageDirectoryPointerEntry,
>>> +          sizeof(X64_PAGE_MAP_AND_DIRECTORY_POINTER)
>>> +          );
>>> +      }
>>> +    }
>>> +  }
>>> +
>>> +  //
>>> +  // For the PML4 entries we are not using fill in a null entry.
>>> +  //
>>> +  for (; IndexOfPml4Entries < 512; IndexOfPml4Entries++, PageMapLevel4Entry++) {
>>> +    ZeroMem (
>>> +      PageMapLevel4Entry,
>>> +      sizeof (X64_PAGE_MAP_AND_DIRECTORY_POINTER)
>>> +      );
>>> +  }
>>> +
>>> +  AsmWriteCr3 ((UINTN) PageMap);
>>> +}
>>> +#endif
>>> +
>>
>> (3) Can you rename BigPageAddress to NextAllocAddress?
>>
>>>  /*
>>>    Find and return Pei Core entry point.
>>>
>>> @@ -645,7 +801,14 @@ SecCoreStartupWithStack (
>>>    //
>>>    IoWrite8 (0x21, 0xff);
>>>    IoWrite8 (0xA1, 0xff);
>>> -
>>> +
>>> +#if defined (MDE_CPU_X64)
>>> +  //
>>> +  // Create Identity Mapped Pages in RAM
>>> +  //
>>> +  Create4GbIdentityMappingPageTables (TopOfCurrentStack);
>>> +#endif
>>> +
>>>    //
>>>  // Initialize Debug Agent to support source level debug in SEC/PEI phases before memory ready.
>>>    //
>>>
>>
>> Thanks
>> Laszlo
>>
>> _______________________________________________
>> edk2-devel mailing list
>> [email protected]
>> https://lists.sourceforge.net/lists/listinfo/edk2-devel

