A few comments in-line and at the bottom.

Date: Sat, 06 Dec 2014 11:32:24 -0500
From: Ted Miller <tedli...@sbcglobal.net>
To: centos@centos.org
Subject: Re: [CentOS] CentOS 7 install software Raid on large drives
error

On 12/05/2014 01:50 PM, Jeff Boyce wrote:

----- Original Message ----- From: "Mark Milhollan" <m...@pixelgate.net>
To: "Jeff Boyce" <jbo...@meridianenv.com>
Sent: Thursday, December 04, 2014 7:18 AM
Subject: Re: [CentOS] CentOS 7 install software Raid on large drives error


On Wed, 3 Dec 2014, Jeff Boyce wrote:

I am trying to install CentOS 7 into a new Dell Precision 3610. I have
two 3
TB drives that I want to setup in software RAID1. I followed the guide
here
for my install as it looked fairly detailed and complete
(http://www.ictdude.com/howto/install-centos-7-software-raid-lvm/).

I suggest using the install guide rather than random crud. The storage
admin guide is fine to read too, but go back to the install guide when
installing.


/mark


Well I thought I had found a decent guide that wasn't random crud, but I
can see now that it was incomplete. I have read the RHEL installation
guide (several times now) and I am still not sure that it has all the
information I am looking for.

I have played around with the automated and the manual disk partitioning
system in the installation GUI numerous times now trying to understand what
it is doing, or more accurately, how it responds to what I am doing. I
have made a couple of observations.

1. The installer requires separate partitions for both /boot and
/boot/efi. It appears that I have to have both of them, not just one.

2. The /boot partition cannot reside on LVM.

3. The options within the installer then appear to allow me to create my
LVM with RAID1, but the /boot and /boot/efi are then outside the RAID.

4. It looks like I can set the /boot partition to be RAID1, but then it is
a separate RAID1 from the LVM RAID1 on the rest of the disk. That results
in two separate RAID1s: a small one for /boot and a much larger one for
the LVM volume group.

I finally manually setup a base partition structure using GParted that
allowed the install to complete using the format below.

sda (3TB)
sda1 /boot/efi fat32 500MB
sda2 /boot fat32 500MB
sdb (3TB)
sdb1 /boot/efi fat32 500MB
sdb2 /boot fat32 500MB

The remaining space was left unpartitioned in GParted, and was then
prepared as LVM RAID1 in the CentOS installer. The installer also put the
/boot/efi and /boot files on sda1 and sda2. I would then have to manually
copy them over to sdb1 and sdb2 if I wanted to be able to boot from drive
sdb if drive sda failed.
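
For reference, a layout like this can also be scripted instead of clicking
through GParted. The following is only a sketch; the device names and
sizes are assumptions, and the commands are destructive, so double-check
the targets:

```shell
# Sketch: GPT with an EFI system partition (/boot/efi) and a /boot
# partition on each disk, matching the layout above. DESTRUCTIVE.
for disk in /dev/sda /dev/sdb; do
    parted -s "$disk" mklabel gpt
    parted -s "$disk" mkpart ESP fat32 1MiB 501MiB    # will hold /boot/efi
    parted -s "$disk" set 1 esp on
    parted -s "$disk" mkpart boot 501MiB 1001MiB      # will hold /boot
done
mkfs.vfat -F32 /dev/sda1    # the ESP must be FAT32
mkfs.vfat -F32 /dev/sdb1
```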

I am not sure that this result is what I really want, as it doesn't mirror
my entire drives. The structure below is what I believe I want to have.

sda & sdb RAID1 to produce md1
md1 partitioned
md1a /boot non-LVM
md1b /boot/efi non-LVM
md1c-f LVM containing /, /var, /home, and swap

Well, the abbreviations may not be the proper syntax, but you probably get
the idea of where I am going. If this is correct, then it looks like I
need to create the RAID from the command line of a rescue disk and set up
the /boot and /boot/efi partitions first, before beginning the installer.
But then again, I could be totally off the mark here, so I am looking for
someone to set me straight. Thanks.
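
If pre-creating the array from a rescue shell turns out to be the way to
go, the outline would be something like the sketch below. The partition
numbers and the volume-group/logical-volume names are assumptions, not
anything the installer mandates:

```shell
# Sketch: mirror a large third partition on each disk, then layer LVM on
# top of the resulting md device. /dev/sda3, /dev/sdb3, and the vg/lv
# names are assumptions.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
pvcreate /dev/md1
vgcreate vg_system /dev/md1
lvcreate -L 50G -n root vg_system
```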

Jeff

The last time I actually needed to do this was probably CentOS 5, so someone will correct me if I have not kept up with all the changes.

1. Even though GRUB2 is capable of booting off of an LVM drive, that capability is disabled in RHEL & CentOS. Apparently RH doesn't feel it is mature yet. Therefore, you need the separate boot partition. (I have a computer running a non-RH GRUB2 installation, and it boots off of LVM OK, but apparently that falls into the "works for me" category.)

Now that you say that, I do recall seeing someone mention it before on this list, but I had not run across it recently in all my Google searching.

2. I cannot comment from experience about the separate partition for /boot/efi, but needing a separate partition surprises me. I have not read about others needing that. I would think that having an accessible /boot partition would suffice.

I tried a lot of different combinations with the installer and pre-partitioning the drives, but I don't recall if I tried putting the /boot and /boot/efi on the same partition outside of the RAID. That may work, but I am not going back to try that combination now.

3. When grub (legacy or grub2) boots off of a RAID1 drive, it doesn't "really" boot off of the RAID. It just finds one of the pair and boots off of that "half" of the RAID. It doesn't understand that this is a RAID drive, but the on-disk structure for RAID1 is such that it just looks like a regular drive to grub. Basically, it always boots off of sda1. If sda fails, you have to physically (or in the BIOS) swap sda and sdb in order for grub to find the RAID copy.

This seems reasonable, and appears to jibe with a lot of the information that I read this weekend.
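
One caveat worth adding here: the "looks like a regular drive" trick
relies on the md superblock sitting at the end of the partition (metadata
format 0.90 or 1.0). The newer 1.2 default puts the superblock near the
start, which offsets the filesystem, so a RAID-unaware boot loader may no
longer find it. When building such an array by hand, the older format can
be requested explicitly; device names in this sketch are assumptions:

```shell
# Sketch: request end-of-partition metadata (1.0) so each member still
# looks like a plain filesystem to a RAID-unaware boot loader.
mdadm --create /dev/md0 --level=1 --metadata=1.0 \
      --raid-devices=2 /dev/sda2 /dev/sdb2
```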

4. At one time, I recall that the process for setting up RAID for the boot drive was basically: a. Create identical boot partitions on both drives (they used to have to be at the beginning of the drive; I don't think that is necessary any more).

Yep, I created an sda1 and sda2 (for /boot/efi and /boot), then created an identical sdb1 and sdb2 using GParted prior to running the installer.

b. Partition the rest of your drive as desired.

What I did here was leave the remaining portion of the drive unpartitioned in GParted, so that I would then use the installer to create the RAID and LVM volume group.

c. Do the install using sda1 as the boot partition (ignore sdb1).

Yep, I had the installer put /boot/efi on sda1 and /boot on sda2. Ignored sdb1 and sdb2 during the installation.

d. After the installation, convert sda1 and sdb1 into a RAID1 array (probably md1 in your case).
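
The classic way to do this conversion without losing data is to build the
array degraded around the empty sdb1, copy /boot into it, and fold sda1 in
afterwards. A sketch, with all device and filesystem names assumed:

```shell
# Sketch: degraded-array conversion of an existing /boot. DESTRUCTIVE
# for /dev/sdb1; device names are assumptions.
mdadm --create /dev/md1 --level=1 --metadata=1.0 \
      --raid-devices=2 missing /dev/sdb1
mkfs.ext4 /dev/md1
mount /dev/md1 /mnt
cp -a /boot/. /mnt/
umount /mnt
# ...repoint /etc/fstab (and the boot loader) at /dev/md1, reboot, then:
mdadm --add /dev/md1 /dev/sda1
```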

I think I am going to leave those partitions outside of a RAID configuration and just do something periodically with rsync to keep them synchronized. It is my understanding that there are not going to be a lot of file changes within these partitions, and this way I don't have two RAID1s on the same set of disks.
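
The rsync approach could look something like this sketch; the mount point,
cron placement, and device names are assumptions:

```shell
# Sketch: periodically mirror the live /boot onto the standby partition,
# e.g. from a script in /etc/cron.daily. /mnt/boot-backup is an
# assumption, as is /dev/sdb2 holding the standby /boot.
mount /dev/sdb2 /mnt/boot-backup
rsync -a --delete /boot/ /mnt/boot-backup/
umount /mnt/boot-backup
```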

e. Go through a process that copies the boot sector information from sda to sdb, so sdb is ready for the scenario mentioned in step 3.

I haven't done this yet; that is my next step. I see plenty of advice for using dd to copy sda1 and sda2 to sdb1 and sdb2. I then also need to make them bootable. I will have to check my notes again to see exactly what to do here.
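
The dd route might look like the sketch below. The efibootmgr step
registers sdb's copy with the firmware so it is selectable if sda dies;
the loader path is an assumption (check what /boot/efi actually contains),
as are the device names:

```shell
# Sketch: clone the ESP and /boot partitions, then add a firmware boot
# entry for the second disk. DESTRUCTIVE for the sdb partitions.
dd if=/dev/sda1 of=/dev/sdb1 bs=4M conv=fsync
dd if=/dev/sda2 of=/dev/sdb2 bs=4M conv=fsync
efibootmgr --create --disk /dev/sdb --part 1 \
           --label "CentOS (sdb)" --loader '\EFI\centos\shimx64.efi'
```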

In summary: grub doesn't understand RAID arrays, but it can be tricked into booting off of a RAID1 disk partition. However, you don't get the full RAID benefits. Yes, you have a backup copy, but grub doesn't know it is there. It's more like you have to put it in grub's way, so that grub trips over it and uses it.

I like that description; put it in grub's way so that it trips over it and uses it.

The only way to find out if your setup has all the pieces in place is to physically remove sda, and see if the boot off of sdb completes or not.

Ted Miller
Indiana, USA

Once I get my boot partitions copied over to sdb and made bootable, I plan on disconnecting sda and verifying that everything boots up properly, probably repeating that a couple of times back and forth with each drive to be sure. Then I will complete my notes on what to do to restore the system when I have to replace a failed drive.
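
Before and after each pull-a-drive test, a couple of read-only checks help
confirm the state (assuming md1 as the array name):

```shell
# Read-only health checks; safe to run on the live system.
cat /proc/mdstat           # array assembly and sync status
mdadm --detail /dev/md1    # per-member state of the mirror
efibootmgr -v              # confirm boot entries for both disks exist
```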

Thanks for your summary of the situation. It confirms most of the information I waded through in Google searches this weekend to see if what I had prepared up to this point was the proper way to meet my objective.

Jeff



_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
