Re: [gentoo-user] raid/partition question

2006-02-20 Thread Richard Fish
On 2/20/06, Nick Smith [EMAIL PROTECTED] wrote:
 I think I'm confusing myself here.  Can you partition a raid device, aka
 /dev/md0?

Yes.  You can either use mdadm to create a partitionable raid device,
or use LVM/EVMS (which would be my recommendation) to create logical
volumes on the array.
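
For example, the two approaches would look roughly like this (the device
names, sizes, and flags are placeholders from memory, so treat this as a
sketch rather than a tested recipe):

  # A: a partitionable md device; --auto=mdp asks mdadm for a device you
  #    can carve up with fdisk afterwards (md_d0p1, md_d0p2, ...)
  mdadm --create /dev/md_d0 --auto=mdp --level=1 --raid-devices=2 \
      /dev/sda2 /dev/sdb2
  fdisk /dev/md_d0

  # B: a plain md array used as an LVM physical volume
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 10G -n root vg0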

Just beware that /boot should either be its own partition (non-raid),
or on a RAID-1 array (with no partitions).  Otherwise the boot loader
will have trouble locating and loading the kernel.

-Richard

-- 
gentoo-user@gentoo.org mailing list



Re: [gentoo-user] raid/partition question

2006-02-20 Thread Boyd Stephen Smith Jr.
On Monday 20 February 2006 09:57, Nick Smith [EMAIL PROTECTED] 
wrote about '[gentoo-user] raid/partition question':
 just wanted to ask before I mess something up.
 I have booted off the install CD and created a raidtab with my mirrored
 drives in it.  I have created the raid.  Now, do I go in and set up the
 partitions I want on that raid?  Or should I have done that before
 creating the raid?  So instead of having one big mirror and then
 partitioning that, do I need to create my separate partitions, then
 mark them as fd, and then create each raid separately?

I would suggest partitioning the drives identically, then using mdadm to 
create your raid devices.  The reason I say this is because the kernel 
does not seem to have any room in the device node space for partitions on 
an md device.

I could be wrong here, but I know that partitioning first and then building 
the arrays will work.
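
In rough outline that would be something like the following; the partition 
numbers and device names are placeholders only:

  # partition sda (marking the raid partitions as type fd, Linux raid
  # autodetect), then copy the same layout onto sdb
  sfdisk -d /dev/sda | sfdisk /dev/sdb

  # mirror each pair of matching partitions
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3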

If you look at the major/minor numbers of IDE devices, you'll see that 
hda and hdb have the same major, but the minor number on hdb is +64, 
which allows 63 recognized partitions / disk labels per IDE device. 
(hda1 is +1 minor from hda, hda2 is +2, etc.; similarly for hdb.)

If you do the same investigation on SCSI/SATA devices, you'll see that sda 
and sdb have the same major number, but the minor number on sdb is +16, so 
only 15 partitions / disk labels are recognized per SCSI/SATA device.  I 
believe we recently had a member of gentoo-user run into this limit.  
(Relying less on partitions will help; I prefer LVM LVs myself.)

Finally, if you look at the software raid devices, you'll see that md0 and 
md1 have the same major number (9), and the minor number on md1 (1) is only 
+1 from the minor number on md0 (0).  Due to this, I fear that the kernel 
may not properly recognize partitions / disk labels on software raid 
devices.
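
You can check the numbers yourself with ls; the listing below is only 
representative (typed from memory, not copied from a real box):

  $ ls -l /dev/hda /dev/hdb /dev/sda /dev/sdb /dev/md0 /dev/md1
  brw-rw----  1 root disk  3,  0 Feb 20 10:00 /dev/hda
  brw-rw----  1 root disk  3, 64 Feb 20 10:00 /dev/hdb
  brw-rw----  1 root disk  8,  0 Feb 20 10:00 /dev/sda
  brw-rw----  1 root disk  8, 16 Feb 20 10:00 /dev/sdb
  brw-rw----  1 root disk  9,  0 Feb 20 10:00 /dev/md0
  brw-rw----  1 root disk  9,  1 Feb 20 10:00 /dev/md1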

It's entirely possible that partitions on software raid devices use a 
different major number and/or dynamic minor numbers, so partitioning the 
raid device may work -- I just can't recommend it, because I don't know 
that it will work, and I do know that partitioning first and then raid-ing 
the partitions works.

As the other poster said, be careful with how you treat your bootable 
partition.  It must be a partition recognized by your bootloader, on a 
disk recognized by the BIOS / EFI, using a filesystem understood by your 
bootloader.  If you use old-style software raid (no superblock; by default 
mdadm does create a superblock), you can use raid 1 for /boot, but each 
component partition should satisfy all the conditions for a bootable 
partition.
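
If you do want that old-style, superblock-less raid 1 for /boot, mdadm's 
build mode should create it; a sketch with placeholder partitions (check 
the man page before relying on it):

  # raid 1 with no persistent superblock; either half can still be read
  # by the bootloader as an ordinary filesystem
  mdadm --build /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1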

-- 
Boyd Stephen Smith Jr.
[EMAIL PROTECTED]
ICQ: 514984 YM/AIM: DaTwinkDaddy
-- 
gentoo-user@gentoo.org mailing list



Re: [gentoo-user] raid/partition question

2006-02-20 Thread Boyd Stephen Smith Jr.
On Monday 20 February 2006 11:51, [EMAIL PROTECTED] wrote about 
'Re: Re: [gentoo-user] raid/partition question':
 As an extension of this question, since I'm working on setting up a
 system now:

 What is better to do with LVM2 after the RAID is created?  I am using
 EVMS also.

 1.  Make all the RAID freespace a big LVM2 container and then create
 LVM2 volumes on top of this big container.

 or

 2.  Parcel out the RAID freespace into LVM2 containers for each partition
 (/, /usr, etc.).

3. Neither.  See below.  First a discussion of the two options.

1. Is fine, but it forces you to choose a single raid level for all your 
data.  I like raid 0 for filesystems that are used a lot, but can easily 
be reconstructed given time (/usr) and especially filesystems that don't 
need to be reconstructed (/var/tmp), raid 5 or 6 for large filesystems 
that I don't want to lose (/home, particularly), and raid 1 for critical, 
but small, filesystems (/boot, maybe).  

2. Is a little silly, since LVM is designed so that you can treat multiple 
pvs as a single pool of data OR you can allocate from a certain pv -- 
whatever suits the task at hand.  So, it rarely makes sense to have 
multiple volume groups; you'd only do this when you want a fault-tolerant 
air-gap between two filesystems.

Failure of a single pv in a vg will require some damage control, maybe a 
little, maybe a lot, but having production encounter any problems just 
because development had a disk go bad is unacceptable in many 
environments.  So, you have a strong argument for separate vgs there.

3. My approach: While I don't use EVMS (the LVM tools are fine with me, at 
least for now) I have a software raid 0 and a hw raid 5 as separate pvs in 
a single vg.  I create and expand lvs on the pv that suits the data.  I 
also have a separate (not under lvm) hw raid 0 for swap and hw raid 6 for 
boot.  I may migrate my swap to LVM in the near future; during my initial 
setup, I feared it was unsafe.  Recent experience tells me that's (most 
likely) not the case.
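
In LVM terms that setup boils down to something like this (the device names 
and sizes here are placeholders, not my actual arrays):

  # one sw raid 0 pv and one hw raid 5 pv in a single vg
  pvcreate /dev/md0 /dev/sdb1
  vgcreate vg0 /dev/md0 /dev/sdb1

  # easily-rebuilt data on the raid 0 pv, important data on the raid 5 pv
  lvcreate -L 20G -n usr  vg0 /dev/md0
  lvcreate -L 80G -n home vg0 /dev/sdb1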

For the uninitiated, you can specify the pv to place lv data on like so:
  lvcreate -L size -n name vg pv
  lvresize -L size vg/lv pv
The second command only affects where new extents are allocated; it will 
not move old extents (use pvmove for that).
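
For example (sizes and names made up):

  lvcreate -L 10G -n vartmp vg0 /dev/md0
  lvresize -L 15G vg0/vartmp /dev/md0   # then grow the filesystem separately
  pvmove /dev/md0 /dev/sdb1             # relocate existing extents between pvs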

-- 
Boyd Stephen Smith Jr.
[EMAIL PROTECTED]
ICQ: 514984 YM/AIM: DaTwinkDaddy
-- 
gentoo-user@gentoo.org mailing list



Re: Re: [gentoo-user] raid/partition question

2006-02-20 Thread brettholcomb
Thank you very much.  I'll need to go back, reread this, and digest it some 
more.  I hadn't thought of doing multiple RAID types on the drives.  I have 
two drives and did RAID1 for /boot, and I was going to RAID1 the rest.  
However, I really want RAID0 for speed and capacity on some filesystems.  
The swap comment is interesting, too.  I have two small partitions for swap, 
one on each drive, and I was going to parallel them per one of DRobbins' 
articles.
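
The parallel swap setup, if I remember the article right, is just giving 
both swap partitions the same priority in fstab; the partition names below 
are made up:

  /dev/sda2   none   swap   sw,pri=1   0 0
  /dev/sdb2   none   swap   sw,pri=1   0 0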



 

-- 
gentoo-user@gentoo.org mailing list