On 1/25/26 02:26, D. R. Evans wrote:
Alexander V. Makartsev wrote on 1/24/26 1:48 PM:
It looks like these warnings appear when you try to install grub onto a
disk that contains a RAID array in a degraded state. [1]
Try to boot from this disk now; if the boot is successful, then you are
done with grub, and the next step is to replace the missing drive and
resync the array.
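For reference, replacing the drive and resyncing usually looks roughly
like this. This is only a sketch: it assumes GPT partitioning, that the
surviving disk is /dev/sda, that the replacement shows up as /dev/sdc,
and that md0/md1 are your arrays, so adjust to your actual layout:
sgdisk --backup=/tmp/sda.table /dev/sda       # save the partition table of the surviving disk
sgdisk --load-backup=/tmp/sda.table /dev/sdc  # copy it onto the replacement disk
sgdisk -G /dev/sdc                            # give the new disk fresh GUIDs
mdadm --manage /dev/md0 --add /dev/sdc1       # add the new partitions back into the arrays
mdadm --manage /dev/md1 --add /dev/sdc2
cat /proc/mdstat                              # watch the resync progress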
OK, boot was NOT successful:
Pity. This means something is wrong with the mdraid setup?
I received the error message: disk mduuid<8d86...0aed> not found, and
then it dropped me to the "grub rescue>" prompt.
I expect that the named missing disk is the one that contained the
running system (sdb) [which is a non-RAID disk, created just for the
purpose of allowing me to boot and execute all these commands].
However, I get the above message when I try to boot from the failed
disk, regardless of whether the non-RAID disk is in the machine.
Your case is interesting enough that I spun up a VM to try to reproduce
the problem you have.
However, my VM boots normally, after a slight delay, with one disk
removed and with MD RAID1 arrays in a clean, degraded state.
There was no "grub rescue>" prompt at any point; it just works.
Still, a quick search on the internet shows that others have had
similar problems with MD arrays and Grub at some point.
Experiments with the VM show that the Grub stage 1 bootloader is capable
enough to identify an MD array, "mount" it for itself, and get into the
ext4 filesystem on that array to access its modules inside the
/boot/grub/i386-pc directory and its /boot/grub/grub.cfg config file.
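If you land at the "grub rescue>" prompt again, you can check this
yourself from there, roughly like this. Note that (md/0) is only an
assumption; "ls" will show the device names grub actually uses:
ls                               # list the devices grub can see, e.g. (hd0), (hd0,msdos1), (md/0)
ls (md/0)/boot/grub              # check that grub can read the filesystem on the array
set                              # show the current prefix and root variables
set prefix=(md/0)/boot/grub      # point them at whatever device really holds /boot
set root=(md/0)
insmod normal                    # if this loads, the modules are reachable
normal                           # and this should bring up the usual menu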
I suspect your MD RAID1 was probably set up in a non-standard way,
somehow, so grub gets confused during system startup or during
"grub-install".
The whole procedure with chroot and "grub-install", as my tests show,
regenerates all the necessary grub modules inside "/boot" and creates a
valid grub config with all the UUIDs needed for the system to start up.
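For completeness, what I tested was essentially the standard recovery
procedure; a sketch, assuming /dev/md1 holds the root filesystem and
/dev/sda is the disk you want to be bootable:
mount /dev/md1 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
grub-install /dev/sda
update-grub
exit                             # leave the chroot before rebooting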
My first guess is that you are using that non-RAID disk, which somehow
still contains MD array metadata.
Maybe all this time we were recovering the wrong disk? Is it possible
that the sda and sdb devices were mixed up at boot time? I'm really
grasping at straws.
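You can check for leftover metadata without changing anything on the
disk; for example (read-only, and /dev/sdb is just an assumption for the
non-RAID disk):
mdadm --examine /dev/sdb*        # prints any MD superblocks found on the disk or its partitions
wipefs /dev/sdb                  # with no options it only lists signatures, it does not erase anything
If a stale superblock really is there, it can be removed later with
"mdadm --zero-superblock" on the affected device, but only after we are
sure it is the wrong disk.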
So my first suggestion is to make a bootable USB thumb drive with a Live
system on it, to exclude possible interference from the other disks.
I also need the output from these commands, to see how your MD arrays
were set up, their current states, the disk partition tables, UUIDs, etc.:
blkid
ls /dev/md*    # Asterisk here because when I set up MD RAID with the
               # Debian installer, the arrays were named md0 and md1.
               # The first is swap and the second is root.
cat /proc/mdstat
mdadm --detail /dev/md*
fdisk -x
cat /etc/fstab    # after chroot, of course
cat /boot/grub/grub.cfg | grep -iE -- "--set=root|root=|insmod"
                  # after chroot, of course
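If it is more convenient, the non-chroot part can be collected into a
single file and pasted in one go (the filename is arbitrary):
{ blkid; echo; cat /proc/mdstat; echo; mdadm --detail /dev/md*; echo; fdisk -x; } > /tmp/raid-report.txt 2>&1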
You can upload it all to a paste service and send just the link:
https://paste.debian.net/
I know it is a lot, but it is hard to suggest anything useful without
seeing the whole picture.
Maybe your experiments with "grub rescue>" will be useful and reveal
more information.
--
With kindest regards, Alexander.
Debian - The universal operating system
https://www.debian.org