Re: Rescue mode: cannot fix boot after raid1 repair

2014-11-07 Thread Ron Leach

On 06/11/2014 20:46, Don Armstrong wrote:

On Thu, 06 Nov 2014, Ron Leach wrote:

Tried the grub-install, but it failed complaining (loosely) that the
'image was too large for embedding, but the embedded image was needed
for raid or lvm systems'.

I'd mounted /boot2 to retain compatibility with the Lenny build.  I
executed:

# mount /dev/sdb1 /boot2
# grub-install --root-directory=/boot2 /dev/sdb

which failed, complaining that the 'image' was too big.   I tried grub-setup
as well but that complained, too.

Didn't get any further, because the grub-install failed.


Ugh. It's possible that this is because it was partitioned badly, and
the first partition starts too early to fit the full core image.



The problem was (I think) that the Squeeze installer's rescue version of 
Grub might not have been compatible with the Lenny installation in 
some way.  Later on, I found another Lenny CD1 (Lenny 5.0.6) which read 
fine and, with that rescue system, managed to install Grub onto 
/dev/sdb, though on reboot it only reached a Grub prompt.  (But that 
was what you'd suggested, so that was ok.)  Incidentally, Grub on 
Lenny is version 1.96.  From the Grub prompt, I was able to see that

 vmlinuz-xxx, and
 initrd-xxx
were 'seen', so using tab completion at the prompt I was able to 
enter (from the keyboard):


 linux /vm[tab] root=/dev/md1
 initrd /in[tab]
 boot

and the system came up, running on sdb only, with the raid partitions 
active but degraded.


Used parted to partition the new, blank, /dev/sda, including all the 
raid flags, and restored all 6 raid1 arrays.


That left me with the boot problem of only reaching a Grub prompt. 
grub-install didn't solve it - the system still reached only the Grub 
prompt on reboot, whether it booted from sda or from sdb.  Looking 
around with mc, I saw that there were no grub.cfg files - which 
normally hold the Grub menu and commands - and some Grub documents 
pointed me to /etc/default/grub.  Looking at that, I could see that I 
also needed to run

# update-grub
before
# grub-install /dev/sda

After doing that (and making sure that /dev/sda was the first hard 
disk boot choice in the BIOS), the system rebooted perfectly.


I couldn't do the same for /dev/sdb because grub.cfg looks for some 
files on the (non-raid) sda1 filesystem, and if I left that in place 
then the system would not boot from sdb when sda fails.  (I see now 
why you suggested making sda1/sdb1 a raid1 as well; using raid1 here 
would mean that Grub finds the files by searching for the filesystem 
offered by raid, which would be on whichever disk was available. 
Neat.)  I will need to do that, and I need to reread the approach you 
suggested.  In the meantime, I copied the grub installation from 
/dev/sda1 to /dev/sdb1, changed grub.cfg to refer to (hd1,1) instead 
of (hd0,1), and commented out the lines that search for the filesystem 
by UUID.  Even if it doesn't work, I can boot from /dev/sdb by typing 
in the commands, as above.


Thanks for your help, we're back and running, the raid is synced and 
we can boot.  I'll get sda1/sdb1 onto a raid1 in the next few days.
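
For when sda1/sdb1 do get converted, a minimal sketch of that step might 
look like the following - the array name /dev/md0, the ext3 filesystem 
and the backup path are all assumptions, not from the thread, and 
mdadm --create destroys the member partitions' current contents, hence 
the backup first:

```shell
# Assumption: /dev/md0 is free; adjust names to taste.  Back up /boot
# first, because mdadm --create wipes the member partitions.
mkdir -p /root/boot-backup
cp -a /boot/. /root/boot-backup/

# 0.90 metadata lives at the END of the partition, so the filesystem
# inside still starts at sector 0 and older bootloaders can read it.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=0.90 /dev/sda1 /dev/sdb1

# New filesystem on the array, then restore the files.
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
cp -a /root/boot-backup/. /mnt/
# Then point the /boot entry in /etc/fstab at /dev/md0, and re-run
# update-grub and grub-install on both disks.
```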


regards, Ron


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org

Archive: https://lists.debian.org/545d1dc1.1030...@tesco.net



Re: Rescue mode: cannot fix boot after raid1 repair

2014-11-07 Thread Don Armstrong
On Fri, 07 Nov 2014, Ron Leach wrote:
 Late on, I found another Lenny CD1 (Lenny 5.0.6) which read fine and,
 with that rescue system, managed to install Grub onto /dev/sdb though,
 on reboot, it only reached a Grub prompt.

Cool; glad you were able to get this to work.

I probably wasn't clear enough in my original e-mail that I suggested
that you use grub after chrooting into your lenny system. In other
words, use grub that you already had installed, not the one on the
rescue media.

And let me also make a plea for you to strongly consider upgrading this
system to at least squeeze, and hopefully wheezy. Lenny has been EOLed
for quite some time, and has known security exploits.

-- 
Don Armstrong  http://www.donarmstrong.com

You have many years to live--do things you will be proud to remember
when you are old.
 -- Shinka proverb. (John Brunner _Stand On Zanzibar_ p413)


Archive: https://lists.debian.org/20141108012801.gp29...@teltox.donarmstrong.com



Re: Rescue mode: cannot fix boot after raid1 repair

2014-11-06 Thread Don Armstrong
On Thu, 06 Nov 2014, Ron Leach wrote:
 /dev/sdb is partitioned:
 sdb1: /boot2 (this was a simple copy of /boot on /dev/sda1; sda1 and sdb1
 were not raid)
 sdb2: (raid1) /
 sdb3: (raid1) /usr
 sdb4: (raid1) /var
 sdb5: swap
 sdb6: (raid1) /tmp
 sdb7: (raid1) /home
 sdb8: (raid1) /userdata
 
 Could I ask 3 questions?  Is there a way I can check whether there is a
 correct MBR on /dev/sdb?
 
 Secondly, /boot on /dev/sda1 had been flagged as bootable, so to boot from
 /dev/sdb1 should I now flag /dev/sdb1 as bootable and, if so, how should I
 do that from the rescue mode?
 
 Lastly, do I need to run a grub installer and, if so, where ought I be able
 to find that?  I tried running grub-install from both a shell in the root
 filesystem, and from a shell in the rescue system, but grub-install seemed
 not to be there.  Perhaps the shell needs a 'path' to find it or, even, some
 other partitions mounted.
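
(On the first question, one way to inspect the MBR by hand - a sketch, 
not from the original thread - is to dump sector 0 of the disk and look 
for the 0x55 0xAA boot signature at offset 510:

```shell
# Save the first 512-byte sector of /dev/sdb.  On a GPT disk this is
# the protective MBR, which still carries BIOS boot code and signature.
dd if=/dev/sdb of=/tmp/sdb-mbr.bin bs=512 count=1

# A valid MBR ends with bytes 55 aa at offset 0x1fe (510).
hexdump -C /tmp/sdb-mbr.bin | tail -n 2

# 'file' will usually also identify installed boot code in the sector.
file /tmp/sdb-mbr.bin
```
)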

1) Boot off of rescue media (the Debian installer will work fine)
2) Start the raid for / in degraded mode from the rescue media shell
3) Mount the raid for / on /target or similar
4) Bind mount /dev, and mount /proc and /sys on /target/ as appropriate
5) Chroot into /target; start the other raid devices if necessary, and
mount them. You'll at least need /usr.
6) Copy the partition table from /dev/sdb to /dev/sda
7) Add the new partitions from /dev/sda to the raid devices which you
have started
8) At this point, I would suggest imaging /dev/sdb1, then turning
/dev/sdb1 into a real raid device and mounting it on /boot. Copy the
files back by mounting the saved image via a loopback device, not by
dd'ing (or similar) the old /dev/sdb1 contents onto the new raid
device.
9) If necessary, bring networking up, and install/reinstall grub.
10) Run grub-install on /dev/sda and /dev/sdb
11) Make sure that /boot now has the correct kernel. (You'll probably
want to re-install it anyway, because it's highly likely that you
haven't kept /boot2 in sync.)
12) unmount things, exit the chroot, and unmount more.
13) Reboot. You should *at least* get a grub prompt.
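
Sketched as rescue-shell commands - with md numbers, partitions and 
mount points assumed rather than taken from the thread, and sgdisk 
(from the gdisk package) standing in for the GPT copy in step 6 - the 
sequence might look like:

```shell
# Steps 2-7 above, sketched; device names are assumptions.
mdadm --assemble --run /dev/md1 /dev/sdb2   # step 2: start / degraded
mount /dev/md1 /target                      # step 3
mount --bind /dev /target/dev               # step 4
mount -t proc proc /target/proc
mount -t sysfs sys /target/sys
chroot /target /bin/bash                    # step 5
mount /dev/md2 /usr                         # (inside the chroot)

# Step 6: copy the GPT from sdb onto sda, then give the new disk
# fresh partition GUIDs so the two tables don't collide.
sgdisk --replicate=/dev/sda /dev/sdb
sgdisk --randomize-guids /dev/sda

# Step 7: add the fresh partitions to the degraded arrays; the resync
# starts immediately and the system stays usable while it runs.
mdadm --manage /dev/md1 --add /dev/sda2
```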
 

-- 
Don Armstrong  http://www.donarmstrong.com

We cast this message into the cosmos. [...] We are trying to survive
our time so we may live into yours. We hope some day, having solved
the problems we face, to join a community of Galactic Civilizations.
This record represents our hope and our determination and our goodwill
in a vast and awesome universe.
 -- Jimmy Carter on the Voyager Golden Record


Archive: https://lists.debian.org/20141106163011.gk29...@teltox.donarmstrong.com



Re: Rescue mode: cannot fix boot after raid1 repair

2014-11-06 Thread Ron Leach
Don, thank you for a very comprehensive sequence.  I wanted to skip 
the part where the new disk was re-synced because that's going to take 
some 8 hours or so and I rather need to have the basic server up, 
albeit in degraded raid1 state, and then let it resync, so that the 
(few) users can access their files.  (This server's role is to provide 
the file system for the network; it runs NFS and Samba, and timed 
rdiff-backup jobs onto a physically separate filesystem.)


So I went to rebuild grub, after starting the raid array, but before 
partitioning /dev/sda and resyncing the raid.  Essentially, bringing 
Lenny back up, then resyncing the raid, and finally making the boot 
partition into a raid.


Failed though.  Notes in-line, below.

On 06/11/2014 16:30, Don Armstrong wrote:


1) Boot off of rescue media (the Debian installer will work fine)


None of my Lenny install CD-1s would run - I think the CDs may be 
damaged.  Used a Squeeze (testing) XFCE CD, in rescue mode.



2) Start the raid for / in degraded mode from the rescue media shell


Not quite.  The rescue mode installer gives the option to start the 
array, and I started all 6 raid arrays.  (Was that the right thing to 
do?)  Hadn't had a shell offered, at that point.



3) Mount the raid for / on /target or similar


Didn't do this, because I wanted just to load a version of grub that 
would let me boot the degraded system.



4) Bind mount /dev, and mount /proc and /sys on /target/ as appropriate


Didn't do this, either, to get straight to grub.


5) Chroot into /target; start the other raid devices if necessary, and
mount them. You'll at least need /usr.


Took the shell offered by the installer, rooted in /dev/md122 
(was md1, previously).

# ls
showed all the directories that I expected to see, including 
/userdata, so the shell was running in my Lenny '/'.


mounted /dev/md123 on /usr; was able to cd into it and see several 
directories.



6) Copy the partition table from /dev/sdb to /dev/sda


Skipped this.  Also, I don't think I can do this without gdisk (which 
has a partition-table clone command), and gdisk is not installed on my 
Lenny system, nor is it available in the installer rescue. 
Neither was parted.  I tried fdisk, but it complained because /dev/sdb 
is using GPT.

So I decided that I would create the /dev/sda partitions by hand, using 
Gparted (which is on my Lenny system) once I had managed to reboot 
into Lenny.  This is the other reason why I wanted to get booting 
working before resyncing the raid: I can't partition /dev/sda 
using fdisk.


(In the meantime, I've downloaded Gparted live ISO in case I do have 
to revert to your specific ordering of the steps, and I will partition 
/dev/sda first, before then starting with Debian-rescue.)
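
A hand-partitioning sketch with parted (sizes hypothetical; the real 
layout should mirror /dev/sdb's).  One hedged note: on a BIOS-booted 
GPT disk, GRUB 2 needs a small bios_grub partition to embed its core 
image, and its absence is a common cause of 'too large for 
embedding'-style errors like the one below:

```shell
# Hypothetical sizes; mirror /dev/sdb's actual layout for raid members.
parted /dev/sda mklabel gpt

# Tiny partition for GRUB 2 to embed its core image on a BIOS/GPT disk.
parted /dev/sda mkpart grub 1MiB 2MiB
parted /dev/sda set 1 bios_grub on

parted /dev/sda mkpart boot ext3 2MiB 514MiB    # future /boot mirror
parted /dev/sda mkpart root ext3 514MiB 10GiB   # raid1 member for /
parted /dev/sda set 3 raid on                   # mark as Linux RAID
```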



7) Add the new partitions from /dev/sda to the raid devices which you
have started


Haven't done this because /dev/sda isn't partitioned yet, and I wanted 
the /userdata fairly urgently, for the users.



8) At this point, I would suggest imaging /dev/sdb1, and turning
/dev/sdb1 into a real raid device, and using that as /boot and mounting
it on /boot. Copy the files back by mounting the old /dev/sdb1 on a
loopback device, not by dd'ing (or similar) /dev/sdb1 to the new raid
device.


I like that suggestion, and I will do that but I've skipped that for 
now, as well, because I do need the Lenny system up.



9) If necessary, bring networking up, and install/reinstall grub.


I did enable networking.  I don't think the installer used it - 
except, maybe, for the time.



10) Run grub-install on /dev/sda and /dev/sdb


Tried the grub-install, but it failed complaining (loosely) that the 
'image was too large for embedding, but the embedded image was needed 
for raid or lvm systems'.


I'd mounted /boot2 to retain compatibility with the Lenny build.  I 
executed:


# mount /dev/sdb1 /boot2
# grub-install --root-directory=/boot2 /dev/sdb

which failed, complaining that the 'image' was too big.  I tried 
grub-setup as well, but that complained too.


Didn't get any further, because the grub-install failed.


11) Make sure that /boot now has the correct kernel. (You'll probably
want to re-install it anyway, because it's highly likely that you
haven't kept /boot2 in sync.)
12) unmount things, exit the chroot, and unmount more.
13) Reboot. You should *at least* get a grub prompt.



Perhaps I should try with a freshly burned Lenny install CD.  The Lenny 
ISOs are on /userdata in the broken Lenny system (!), so I'll see 
whether the Debian archives have one.


Don, thanks very much for the clear steps; these have been useful. 
I'll continue with trying to restart the Lenny system first.  If that 
does continue to fail, then I'll revert to the sequence you suggest. 
The grub failure is a bit odd, though.


Ron



Re: Rescue mode: cannot fix boot after raid1 repair

2014-11-06 Thread Don Armstrong
On Thu, 06 Nov 2014, Ron Leach wrote:
 I wanted to skip the part where the new disk was re-synced because
 that's going to take some 8 hours or so

Start it now; you don't have to wait for it to finish.

 On 06/11/2014 16:30, Don Armstrong wrote:
 2) Start the raid for / in degraded mode from the rescue media shell
 
 Not quite. The rescue mode installer gives the option to start the
 array, and I started all 6 raid arrays. (Was that the right thing to
 do?) Hadn't had a shell offered, at that point.

That's fine.
 
 3) Mount the raid for / on /target or similar
 
 Didn't do this, because I wanted just to load a version of grub that
 would let me boot the degraded system.

The reason you do this is that you want to use the copy of grub from
your pre-existing installation, with its existing configuration.

 4) Bind mount /dev, and mount /proc and /sys on /target/ as appropriate
 
 Didn't do this, either, to get straight to grub.

The rescue installer might do this for you; I can't remember.
 
 5) Chroot into /target; start the other raid devices if necessary, and
 mount them. You'll at least need /usr.
 
 Took a shell offered by the installer, which was offered in /dev/md122 (was
 md1, previously).
 # ls
 showed all the directories that I expected to see, including /userdata, so
 the shell was running in my Lenny '/'.
 
 mounted /dev/md123 on /usr; was able to cd into it and see several
 directories.

Cool.
 
 6) Copy the partition table from /dev/sdb to /dev/sda
 
 Skipped this. Also, I don't think I can do this without gdisk (which
 has a partition table clone command) and gdisk is not installed on my
 Lenny system, neither is it available in the installer rescue. Neither
 was parted. I tried fdisk, but it complained because /dev/sdb is using
 GPT.

You can use gparted from your lenny system to partition /dev/sda. (Or
install it).

 10) Run grub-install on /dev/sda and /dev/sdb
 
 Tried the grub-install, but it failed complaining (loosely) that the
 'image was too large for embedding, but the embedded image was needed
 for raid or lvm systems'.
 
 I'd mounted /boot2 to retain compatibility with the Lenny build.  I
 executed:
 
 # mount /dev/sdb1 /boot2
 # grub-install --root-directory=/boot2 /dev/sdb
 
 which failed, complaining that the 'image' was too big.   I tried grub-setup
 as well but that complained, too.
 
 Didn't get any further, because the grub-install failed.

Ugh. It's possible that this is because it was partitioned badly, and
the first partition starts too early to fit the full core image.

You might need to partition /dev/sda, and install onto that instead of
/dev/sdb.

The output of 'grub-install -v --root-directory=/boot2 /dev/sdb' would
be useful, though.

-- 
Don Armstrong  http://www.donarmstrong.com

A one-question geek test. If you get the joke, you're a geek: Seen on
a California license plate on a VW Beetle: 'FEATURE'...
 -- Joshua D. Wachs - Natural Intelligence, Inc.


Archive: https://lists.debian.org/20141106204657.gm24...@rzlab.ucr.edu