On Sun, 25 Jan 2026 at 18:01, D. R. Evans <[email protected]> wrote:
> Hello, David

Hi.

> I will print out your e-mail and study it, and the referenced documents,
> carefully -- putting aside the temptation to rush and skip things in my
> hurry to get things working again.
>
> As the attempt to get everything working without using grub rescue seems
> to be stalled (see my e-mail
> <[email protected]> in another sub-thread),
> trying to understand grub rescue seems like the best use of my time at
> the moment.

If you want to reference other list messages, can you please do that by
providing links into the list archive, which can be found at, for example:

https://lists.debian.org/debian-user/2026/01/threads.html

I think that understanding grub-rescue was always the best use of your
time, because it's simpler and safer than the other stuff you've been
attempting, and we know it's never going to make things more complicated
than they already are. Or were :)

> One comment:
>
> David wrote on 1/25/26 5:04 AM:
>
> > given us some detail to work with. We can see your incorrect 'prefix'
> > (or something like it, because I'm guessing that you retyped that
> > because 'debain@' looks weird.
>
> I agree that it looks weird (and that I had to retype it): it should have
> been "debian@" [which also looks weird to me, but that's what it said].
>
> Having gone through the procedure in the other sub-thread, and thereby
> installing a new MBR

What's also important is that by running 'grub-install' on this drive, at
the same time as installing a new MBR it also (see my reference [wgft])
installed a new "core image", overwriting the one that was there
previously. This new core image will now be providing the initial values
of 'prefix' and 'root' that you show below.

It's curious that previously [2] the core image had no sign of RAID (md
devices) but now it does, even though there was no active RAID when the
new core image was written. I've no experience with RAID so I'm unable to
offer any explanation of this.
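In other words, a single run along the lines of the command below rewrites
both pieces at once: the boot code in the MBR and the core image in the gap
after it, with 'prefix' baked into the core image at install time. This is
only a sketch, not something to run now; the device name /dev/sda and the
mount point /mnt are assumptions for illustration:

```
# Sketch only: reinstall grub on the first disk's MBR, with the
# rescued system's filesystem assumed mounted at /mnt, so that the
# embedded 'prefix' points at /mnt/boot/grub on that filesystem.
grub-install --boot-directory=/mnt/boot /dev/sda
```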
I'm unsure what to trust more: the original simple (hd0) or the new
values. I note that in the new values there is no (md/1), only (md/0).

> what I now see at the "grub rescue>" prompt is quite different (I have to
> take a photo of the screen, upload it to this computer, and type what it
> says into this e-mail; I think I have done that without any mistakes)

I know it's tedious but it's very helpful that you provided this info.

> [recall from that sub-thread that when the RAID MBR was written there
> were two drives in the machine: /dev/sda was the non-booting RAID disk,
> and /dev/sdb was the non-RAID disk that had supplied the running OS]:

I've no interest or time to spend digging into that other thread, but I
follow what you are saying here, and I agree with what you are saying
about the two drives being present as detected by grub, except you have
no way of knowing that /dev/sda corresponds to hd0 and /dev/sdb
corresponds to hd1; it might be the other way around.

If needed, you could figure out which is which if they have any different
content, which you can observe using the 'ls' command in grub-rescue. But
it would be better if you removed all hard drives except the one that you
are trying to rescue; more comments on this below.

> "ls" now returns:
>
> (hd0) (hd0,msdos2) (hd0,msdos1) (hd1) (hd1,msdos5) (hd1,msdos1) (md/0) (fd0)
> error: failure reading sector 0xb30 from 'fd0'.
>
> I interpret that to mean, in order:
> the entire RAID disk (/dev/sda)
> the second partition on the RAID disk (/dev/sda2)
> the first partition on the RAID disk (/dev/sda1)
> the entire non-RAID disk (/dev/sdb)
> the fifth partition on the non-RAID disk (/dev/sdb5)
> the first partition on the non-RAID disk (/dev/sdb1)
> the logical RAID disk
> the floppy drive
> and there was an error accessing the floppy, probably because there was
> nothing in the drive.
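On the "which drive is which" point, the check is simply to list a
directory on each and compare. A sketch, using partition names from your
'ls' output above:

```
grub rescue> ls (hd0,msdos1)/
grub rescue> ls (hd1,msdos1)/
```

One of them should show recognisable contents of the drive it belongs to
(for example a grub/ or boot/ directory); a partition grub cannot read
will report an "unknown filesystem" error instead.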
> "set" now returns ("8d86...0aed" is shorthand for a long UUID):
>
> prefix='(mduuid/8d86...0aed)/boot/grub'
> root = 'mduuid/8d86...0aed'
>
> Right, that's where we are right now. Now to print out your e-mail and
> everything else, and when I get a bit of time -- this afternoon, I hope
> -- I'll try get my head around grub rescue.

Ok. It's important to keep in mind: 'ls' is showing you what's actually
detected right now, whereas 'prefix' and 'root' are showing you what got
written the last time 'grub-install' was run on that drive.

The rest of my comments below are motivated by the fact that your 'ls'
results above contain more than one hard drive (some entries have "hd0"
and some have "hd1").

This is why in the first email I wrote [1] on this subject (in the two
paragraphs beginning "So, ..." and "While ..."), I recommended utilising
grub-rescue *before* ever using 'grub-install' to modify what is on the
drive: 'grub-install' has destroyed the evidence of what was there
previously, which you showed in [2]. That would have avoided these extra
uncertainties that have been introduced.

Any time you use grub-rescue, it will greatly reduce the scope for
confusion if 'ls' reports only one hard drive (hd0), as it did in your
previous message [2]. To achieve this now, I suggest disabling or
removing all drives from the machine except the drive you are trying to
rescue before booting it, so that 'ls' returns only "(hd0) (fd0)" as it
did in your previous email [2]. Hopefully when you do that, you will
still see a 'grub rescue>' prompt that you can use to attempt recovery.

Given that I'm entirely clueless about RAID, I'd suggest ignoring grub's
detection of any devices that look like RAID devices, such as the (md/0)
or (mduuid/8d86..) that you mention above. My thinking on this point is
that by the time we can get the recovered drive to boot successfully, I'm
assuming there will be no RAID active.
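For when you get there, the usual recovery sequence at the 'grub rescue>'
prompt looks roughly like this. A sketch only: '(hd0,msdos1)' is a
placeholder, and you'd substitute whichever device your own 'ls' shows
actually contains a /boot/grub directory:

```
grub rescue> set root=(hd0,msdos1)
grub rescue> set prefix=(hd0,msdos1)/boot/grub
grub rescue> insmod normal
grub rescue> normal
```

If 'insmod normal' succeeds, 'normal' should bring up the ordinary grub
menu, from which you can boot in the usual way. None of these commands
modifies the drive.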
On the other hand, we might want RAID to be active; I don't know. My
focus is on getting the drive to boot somehow, not on getting the
previous RAID working again. If there are any other people reading here
with actual experience of degraded RAID systems, please share your
thoughts on this point.

Emphasising again: grub-rescue will not modify the drive and is just a
way to get the drive to boot one time. What you do after it has booted is
up to you. I imagine that would be the best time to be running
'grub-install', or trying to recover the RAID, or whatever else you need
to do.

[1] https://lists.debian.org/debian-user/2026/01/msg00411.html
[2] https://lists.debian.org/debian-user/2026/01/msg00449.html

