Hi,

On Wed, Jan 24, 2024 at 11:17:34AM +0100, Nicolas George wrote:
> Since mdadm can only put its superblock at the end of the device (1.0),
> at the beginning of the device (1.1), or 4 KiB from the beginning (1.2),
> but they still have not invented 1.3 to put the metadata 17 KiB from the
> beginning or the end, which would be necessary to be compatible with
> GPT, we have to partition the drives and put the EFI system partition
> outside the arrays.

Sorry, what is the issue about being compatible with GPT?

For example, here is one of the drives in a machine of mine, and it
is a drive I boot from:

$ sudo gdisk -l /dev/sda
GPT fdisk (gdisk) version 1.0.3

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 7501476528 sectors, 3.5 TiB
Model: INTEL SSDSC2KG03
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): D97BD886-7F31-9E46-B454-6703BC90AF09
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 2048, last usable sector is 7501476494
Partitions will be aligned on 2048-sector boundaries
Total free space is 0 sectors (0 bytes)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1075199   524.0 MiB   EF00  
   2         1075200         3172351   1024.0 MiB  FD00  
   3         3172352         7366655   2.0 GiB     FD00  
   4         7366656        24143871   8.0 GiB     FD00  
   5        24143872      7501476494   3.5 TiB     FD00

Here, sda1 is an EFI System Partition and sda2 is a RAID-1 member. When
assembled it becomes md2, which holds /boot and uses superblock format
1.2:

$ sudo mdadm --detail /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Mon Jun  7 22:21:08 2021
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Jan 21 00:00:07 2024
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : tanq:2  (local to host tanq)
              UUID : ea533a16:63523ac4:da6bf866:508f8f1d
            Events : 459

    Number   Major   Minor   RaidDevice State
       0     259        2        0      active sync   /dev/nvme0n1p2
       1       8        2        1      active sync   /dev/sda2

Thus, grub is installed to sda and nvme0n1.

Have I made an error here?

> Which leads me to wonder if there is an automated way to install GRUB on
> all the EFI partitions.

I just install it on each boot drive, but you have me worried now
that there is something I am ignorant of.
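
For the record, by "install it on each boot drive" I just mean running
grub-install against each ESP in turn; roughly something like this (the
second ESP's device name and the --bootloader-id values are only
examples, adjust to taste):

    $ sudo mount /dev/sda1 /boot/efi
    $ sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi \
          --bootloader-id=debian
    $ sudo umount /boot/efi
    $ sudo mount /dev/nvme0n1p1 /boot/efi
    $ sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi \
          --bootloader-id=debian-backup
    $ sudo umount /boot/efi

Done that way, the firmware ends up with one boot entry per drive.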

There is also the issue of making the ESP redundant. I'd like to put it
in RAID-1, but I've been convinced that it is a bad idea: the firmware
does not understand md RAID. It could read the filesystem anyway if the
array used a 1.0 superblock (metadata at the end, so the FAT filesystem
starts at offset 0 of each member), but if the firmware ever writes to a
member directly, the array silently goes out of sync.
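
For completeness, the trick people describe when they do want the ESP in
RAID is to force that end-of-device metadata at creation time. A rough
sketch, with hypothetical device names (I have not done this myself):

    $ sudo mdadm --create /dev/md9 --level=1 --metadata=1.0 \
          --raid-devices=2 /dev/sda1 /dev/nvme0n1p1
    $ sudo mkfs.vfat -F 32 /dev/md9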

There was a deeper discussion of this issue here:

    https://lists.debian.org/debian-user/2020/11/msg00455.html

As you can see, more people were in favour of manually syncing the ESP
contents to backup ESPs on the other drives, so that the firmware can
choose (or be told to choose) a different ESP if a drive has failed when
it comes time to boot.
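
In practice that syncing can be as simple as an rsync from the primary
ESP to the backup one whenever anything touches it, e.g. assuming the
backup ESP is mounted at /boot/efi2 (a path I am making up here):

    $ sudo rsync -a --delete /boot/efi/ /boot/efi2/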

I don't like it, but…

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting
