On Wed, 15 Oct 2025 at 03:30, Lennart Poettering <[email protected]>
wrote:

> On Di, 14.10.25 22:43, Chris Murphy ([email protected]) wrote:
>
> > Last time this came up (5ish years ago) they said no because it's
> > too simple, doesn't support enough of RHEL's use cases. And that
> > they had insufficient bandwidth for more work.
> >
> > And it's the same in Fedora. Fedora Server can't use systemd-boot or
> > plain FAT only $BOOT, because they require support for /boot on
> > mdadm raid1. Same with CentOS and RHEL.
>
> Sorry, but using mdadm raid1 in the boot loader is a really bad idea,
> you have to replicate the whole raid stack, because as mentioned many
> times, *write* support actually matters for boot count/RNG support.
>
>
A big reason for raid1 /boot is that it is one of the least bad options when
dealing with server hardware. Going without raid tends to produce dozens of
ad-hoc 'poor man's raid' solutions that try to keep multiple partitions in
sync with each other, and they invariably run into some consistency problem
at the worst possible time. I had to 'maintain' several of these before
mdadm RAID1 boot via grub was widely available, and most of them made the
nightmare of SysV scripts look pleasant. Why did drive 5 say it was in sync
when it is clearly holding two-month-old boot entries? Why is drive 2 full
on a dozen systems when the partitions are all the same size? Why did the
firmware decide it could boot from drive 3 only when it is in slot 5 (oh,
it's because the kernel is not in the right spot on this drive)?

I remember a lot of different kernel issues coming up where the advice, if
you wanted help, was 'make the drives all raid1, and if it still happens we
can fix it,' because these were oddities. Usually the cargo-cult fix of
making them all raid1 would make the problem go away.

Is there a way there could be a new tool in the systemd family that keeps
multiple boot partitions in sync for booting purposes? I say this because we
need a tool which does this, checks their validity regularly, and handles
'drive A has new content, we need to deal with N other drives' in a common
way. [Where N is set by the various oddities of the server hardware and
whatever business logic someone has to follow, even if it looks insane.]

If there is something like that, then one of the biggest reasons for RAID1
(keeping N drives in sync without manual interaction, or without
my-poor-man-raid.sh breaking today) goes away.
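To make the request concrete, here is a rough sketch of what the
sync-and-verify step could look like. This is not an existing systemd tool;
the temp directories stand in for hypothetical mounted boot partitions so
the sketch is self-contained.

```sh
#!/bin/sh
# Sketch only: replicate a primary boot partition's contents to N
# secondaries and verify them. A real tool would operate on mounted
# ESP/XBOOTLDR partitions and sync incrementally; here temp dirs
# stand in for those mount points.
set -eu

PRIMARY=$(mktemp -d)
MIRROR1=$(mktemp -d)
MIRROR2=$(mktemp -d)

# Stand-in for a freshly installed kernel + loader entry on drive A.
mkdir -p "$PRIMARY/loader/entries"
echo "title Example" > "$PRIMARY/loader/entries/example.conf"

# "Drive A has new content, deal with N other drives": one-way sync,
# dropping stale files first so mirrors never keep months-old entries.
for m in "$MIRROR1" "$MIRROR2"; do
    find "$m" -mindepth 1 -delete
    cp -a "$PRIMARY"/. "$m"/
done

# Regular validity check: compare actual content, not just timestamps.
for m in "$MIRROR1" "$MIRROR2"; do
    if diff -r "$PRIMARY" "$m" >/dev/null; then
        echo "$m: in sync"
    else
        echo "$m: OUT OF SYNC" >&2
    fi
done
```

The real work, as noted above, is in the business logic around N drives and
flaky firmware, not in the copy itself.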



> If you have a software raid setup and want to boot from it, that's OK,
> but then simply install the boot loader individually on each HDD, with
> its own ESP/XBOOTLDR, and apply raid only to the rootfs itself, but
> not ESP/XBOOTLDR. It's the only thing that is reasonably safe, as
> firmware generally has no understanding of linux raid, and you never
> want to risk things go out of sync, because firmware doesn't respect
> raid, but other stuff does. With "bootctl install" you can use
> --esp-path= and --xbootldr-path= to install systemd-boot into as many
> ESPs/XBOOTLDRs as you want. kernel-install has the same.
>
> Lennart
>
> --
> Lennart Poettering, Berlin
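For reference, the per-drive install Lennart describes might look like the
following on a machine with two ESPs. The mount points /efi and /efi2 are
hypothetical examples, and bootctl must run as root against real ESPs:

```sh
# Each drive gets its own, independently valid ESP; no raid metadata
# is involved, so firmware can boot from either drive on its own.
bootctl install --esp-path=/efi
bootctl install --esp-path=/efi2

# Inspect what was installed into the default ESP.
bootctl status
```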


-- 
Stephen Smoogen, Red Hat Automotive
Let us be kind to one another, for most of us are fighting a hard battle.
-- Ian MacClaren
-- 
_______________________________________________
devel mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/[email protected]
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue
