I wanted to hold off on responding to this for a while to see if anyone else 
chimed in and because it required quite a bit of thought.  In the end I thought 
I should probably post what I ended up doing in case anyone scans the e-mail 
archives.  [I edited the awful quoting job this employer-mandated e-mail client 
does to hopefully make it a little more readable.]

Background:

I have been tasked with implementing UEFI boot in our VOS operating system.  
We've been using GPT partitions for more than 15 years, but only within our own 
OS...  We haven't had to interact with any other software before this.  We have 
a fault-tolerant OS; so, all disks are RAID1 (software supported).  We don't 
expose the GPT partitioning to our user interface:  we just use it as a 
wrapper for boot support to keep the BIOS from being confused.  The intent was to 
set it up to boot with either the legacy BIOS or UEFI.  At the time, we only 
had a legacy BIOS to test with; so, we never finished the UEFI boot.

I've reviewed our current implementation and found a few minor things wrong; 
so, I have been working on a utility to fix them.  But there might be some more 
issues.  I have three questions, all relating to RAID1.

1.       We have historically paired entire disks when we do RAID1, not 
partitions (we have never supported multiple file system partitions on one 
disk, because it didn't make sense from a performance standpoint).  I believe 
the current initialization uses the same DiskGUID in the GPT header for both 
disks.  I'm assuming that is not going to work properly.  Is that correct?
[Andrew Fish] Herbie,

I'm not sure that a unique DiskGUID is required for RAID1, given the disks are 
mirrors.  I think the ask is that each unique GPT (some software has to create 
it) always gets a new GUID/UUID.
[Robinson, Herbie] I ended up deciding that the GPTs should be unique 
and that only the contents of our specific partition should be treated as 
mirrored.  The main reason was that the UEFI firmware (and 
third-party tools) wouldn't treat the GPTs as paired and update them 
simultaneously; if anything, the firmware would just be confused by the 
duplicated GUIDs.  Another factor is that the disks could be different sizes.  
One would also be obligated to keep the ESPs in sync.  It would entail a 
lot more work, might not be compatible with other software, and wouldn't really 
buy anything useful functionally.
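
To make the decision concrete, here is a minimal sketch of what "one fresh 
DiskGUID per disk" looks like at initialization time.  The struct follows the 
GPT header layout from the UEFI spec; GenerateRandomGuid() is a hypothetical 
stand-in for whatever version-4 GUID source the OS provides, not an EDK2 API:

#include <stdint.h>

/* GPT header layout per the UEFI spec. */
typedef struct {
  uint8_t  Signature[8];      /* "EFI PART" */
  uint32_t Revision;
  uint32_t HeaderSize;
  uint32_t HeaderCRC32;
  uint32_t Reserved;
  uint64_t MyLBA;
  uint64_t AlternateLBA;      /* location of the backup header */
  uint64_t FirstUsableLBA;
  uint64_t LastUsableLBA;
  uint8_t  DiskGUID[16];      /* must be unique per physical disk */
  uint64_t PartitionEntryLBA;
  uint32_t NumberOfPartitionEntries;
  uint32_t SizeOfPartitionEntry;
  uint32_t PartitionEntryArrayCRC32;
} GPT_HEADER;

/* Hypothetical GUID source; any version-4 UUID generator works. */
extern void GenerateRandomGuid(uint8_t Guid[16]);

/* Initialize both mirror members: same geometry, distinct DiskGUIDs,
   so firmware and third-party tools see two independent GPT disks. */
void InitMirrorDiskGuids(GPT_HEADER *DiskA, GPT_HEADER *DiskB)
{
  GenerateRandomGuid(DiskA->DiskGUID);
  GenerateRandomGuid(DiskB->DiskGUID);  /* never copy A's GUID to B */
}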

3.       We have learned over the years that one doesn't allocate an entire 
disk for a RAID (because one may have to replace a drive, and the replacement 
may not come with exactly the same ending LBA).  We are currently leaving some 
space unused at the end.  When we do that, we are not putting the backup GPT 
header at the last LBA of the device.  By my reading of the spec, that is a 
mistake.  I do believe the spec allows me to leave a large gap between the 
LastUsableLBA and the backup GPT header, with the backup table placed anywhere 
within that gap.  Is that correct?
[Andrew Fish] There has been language added over the years to try to help 
people deal with issues like this. The ATA8-ACS language and this section:
"To avoid the need to determine the physical block size and the optimal 
transfer length granularity, software may align GPT partitions at significantly 
larger boundaries. For example, assuming logical block 0 is aligned, it may use 
LBAs that are multiples of 2,048 to align to 1,048,576 byte (1 MiB) boundaries, 
which supports most common physical block sizes and RAID stripe sizes."

I think the "software may align GPT partitions at significantly larger 
boundaries" language in the section above grants you a lot of latitude in how 
you lay out the disks.
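[Robinson, Herbie] For anyone reading the archive later, the arithmetic behind 
that 2,048-LBA rule is just rounding up to a 1 MiB boundary.  This assumes 
512-byte logical blocks, and the helper name is mine, not from the spec:

#include <stdint.h>

/* Round a starting LBA up to the next 1 MiB boundary, assuming
   512-byte logical blocks: 1,048,576 / 512 = 2,048 LBAs. */
static uint64_t AlignLbaTo1MiB(uint64_t Lba)
{
  const uint64_t Boundary = 2048;
  return (Lba + Boundary - 1) / Boundary * Boundary;
}

/* Example: the first usable LBA after a default-sized GPT is 34, and
   AlignLbaTo1MiB(34) == 2048, i.e. the first partition starts at 1 MiB. */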
[Robinson, Herbie] I did, in fact, leave a large hole between the backup 
partition table and the backup GPT header, and at least our BIOS is happy with 
it.
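
In case it helps anyone else, the layout works out roughly as below.  The 
slack and entry-array sizes are illustrative numbers, not anything from the 
spec; the only real constraints are the ordering of the regions and that the 
backup header sits at the disk's last LBA:

#include <stdint.h>

#define ENTRY_ARRAY_LBAS 32    /* 128 entries * 128 bytes / 512-byte LBAs */
#define SLACK_LBAS       2048  /* illustrative hole, roughly 1 MiB */

/* Compute the backup-structure LBAs for a disk of known size.  The
   hole sits between the backup partition table and the backup header,
   which is the arrangement described above. */
void LayoutBackupGpt(uint64_t DiskLastLba,
                     uint64_t *LastUsableLba,
                     uint64_t *BackupArrayLba,
                     uint64_t *BackupHeaderLba)
{
  *BackupHeaderLba = DiskLastLba;          /* spec: backup header at last LBA */
  *LastUsableLba   = DiskLastLba - SLACK_LBAS - ENTRY_ARRAY_LBAS;
  *BackupArrayLba  = *LastUsableLba + 1;   /* table immediately follows the
                                              usable area; the slack is the
                                              hole before the backup header */
}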

And again, thanks for the help.

