Hello,

The Debian installer partitioning tool, partman, is based on libparted. It implements a "DUMP" command that prints information for every partition, including the free-space and metadata pseudo-partitions, by iterating with ped_disk_next_partition(). See dump_info() in <https://salsa.debian.org/installer-team/partman-base/-/blob/master/parted_server.c>.

Trimmed output example (positions and lengths are in bytes):

start-end                  length   type     fs
0-17407                    17408    primary  label (main metadata)
17408-1048575              1031168  primary  free (free space)
(...)
160041868800-160041885695  16896    primary  label (backup metadata)
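
For reference, the traversal itself boils down to something like this (untested sketch against the public libparted API; /dev/sdX is a placeholder and error handling is trimmed, partman's dump_info() formats the same data differently):

#include <stdio.h>
#include <parted/parted.h>

int
main (void)
{
  /* Placeholder device; run against the real device. */
  PedDevice *dev = ped_device_get ("/dev/sdX");
  if (!dev)
    return 1;

  PedDisk *disk = ped_disk_new (dev);
  if (!disk)
    return 1;

  long long ss = dev->sector_size;
  PedPartition *part = NULL;

  /* ped_disk_next_partition() visits every partition, including the
     free-space and metadata pseudo-partitions.  Positions are kept in
     sectors internally; convert to bytes as in the dump above. */
  while ((part = ped_disk_next_partition (disk, part)))
    printf ("%lld-%lld\t%lld\t%s\n",
            part->geom.start * ss,
            (part->geom.end + 1) * ss - 1,
            part->geom.length * ss,
            ped_partition_type_get_name (part->type));

  ped_disk_destroy (disk);
  ped_device_destroy (dev);
  return 0;
}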

I would like to use this information to check whether the GPT metadata overlaps the boot loader area on some ARM SoCs booting from eMMC or SD card. For example, the Freescale/NXP i.MX SoC boot loader area starts at LBA 2 and the Allwinner sunxi SoC boot loader area starts at LBA 16; both overlap the standard GPT layout, which uses LBA 0-33. A non-standard GPT layout can be created with gdisk so that the partition entry array starts at a higher offset, or has fewer partition entries, and does not overlap the boot loader area. libparted creates only the standard GPT layout, but it handles an existing non-standard layout.
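
The check itself is plain interval arithmetic over three GPT header fields (PartitionEntryLBA, NumberOfPartitionEntries, SizeOfPartitionEntry); a minimal sketch, assuming 512-byte sectors:

#include <stdbool.h>
#include <stdint.h>

/* The main GPT metadata is LBA 0 (PMBR), LBA 1 (header) and the
   partition entry array described by the header; the standard layout
   puts 128 entries of 128 bytes at LBA 2, i.e. LBA 2-33.  The header
   at LBA 1 cannot move, so only the entry array matters as long as
   the boot loader area starts at LBA 2 or later. */
static bool
gpt_overlaps_bootloader (uint64_t entry_array_lba,  /* PartitionEntryLBA */
                         uint64_t entry_count,      /* NumberOfPartitionEntries */
                         uint64_t entry_size,       /* SizeOfPartitionEntry */
                         uint64_t boot_start_lba,   /* 2 on i.MX, 16 on sunxi */
                         uint64_t boot_end_lba)     /* last LBA of the boot loader */
{
  const uint64_t sector_size = 512;
  uint64_t array_sectors =
    (entry_count * entry_size + sector_size - 1) / sector_size;
  uint64_t array_end_lba = entry_array_lba + array_sectors - 1;

  return entry_array_lba <= boot_end_lba && array_end_lba >= boot_start_lba;
}

/* Standard layout vs. an i.MX boot loader at LBA 2-2047: overlap.
   Entry array moved to LBA 2048: no overlap.
     gpt_overlaps_bootloader (2, 128, 128, 2, 2047)    -> true
     gpt_overlaps_bootloader (2048, 128, 128, 2, 2047) -> false */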

However, the reported metadata and free-space geometries are wrong when the GPT partition entry array does not start at LBA 2: they are reported as if the array started at LBA 2, as in the example output above. This is misleading and gets in the way of checking whether the GPT metadata overlaps the boot loader area.

Indeed, in libparted/labels/gpt.c, gpt_alloc_metadata() takes only the partition entry count into account, not the partition entry array LBA.

Would creating two metadata partitions, one for the PMBR + GPT header and another for the partition entry array, be a valid solution? And what about the space between the two: would it be accounted as free space? IMO, unused space outside of the data area should be ignored.
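
Something along these lines, modelled on the metadata-helper pattern other label drivers use (untested sketch, not the current gpt.c code; the entry array location and size would have to be kept from the on-disk header, which gpt_alloc_metadata() currently ignores):

/* Hypothetical helper: reserve [start, end] as a metadata
   pseudo-partition via the public constraint API. */
static int
add_metadata_range (PedDisk *disk, PedSector start, PedSector end)
{
  PedPartition *part;
  PedConstraint *constraint;

  part = ped_partition_new (disk, PED_PARTITION_METADATA, NULL, start, end);
  if (!part)
    return 0;

  constraint = ped_constraint_exact (&part->geom);
  if (!ped_disk_add_partition (disk, part, constraint))
    {
      ped_constraint_destroy (constraint);
      ped_partition_destroy (part);
      return 0;
    }

  ped_constraint_destroy (constraint);
  return 1;
}

static int
gpt_alloc_metadata_sketch (PedDisk *disk, PedSector entry_array_lba,
                           PedSector entry_array_sectors)
{
  /* PMBR (LBA 0) + GPT header (LBA 1). */
  if (!add_metadata_range (disk, 0, 1))
    return 0;

  /* Partition entry array wherever the header actually places it. */
  return add_metadata_range (disk, entry_array_lba,
                             entry_array_lba + entry_array_sectors - 1);
}

The backup header and entry array at the end of the disk would presumably need the same treatment.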

PS: Should I file a bug for this issue?
