Thank you very much.  I'll need to go back and reread this and digest it some 
more.  I hadn't thought of doing multiple RAID types on the drives.  I have two 
drives and did RAID1 for /boot, and was going to RAID1 the rest.  However, I 
really want RAID0 for speed and capacity on some filesystems.  The swap comment 
is interesting, too.  I have two small partitions for swap - one on each drive - 
and I was going to run them in parallel per one of DRobbins' articles.
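
For reference, my understanding of the parallel-swap setup is just giving both 
partitions the same priority in /etc/fstab, something like this (the device 
names are mine, not gospel):

  /dev/sda2   none   swap   sw,pri=1   0 0
  /dev/sdb2   none   swap   sw,pri=1   0 0

so the kernel stripes swap pages across both drives.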



> 
> From: "Boyd Stephen Smith Jr." <[EMAIL PROTECTED]>
> Date: 2006/02/20 Mon PM 01:30:59 EST
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] raid/partition question
> 
> On Monday 20 February 2006 11:51, [EMAIL PROTECTED] wrote about 
> 'Re: Re: [gentoo-user] raid/partition question':
> > As an extension of this question since I'm working on setting up a
> > system now.
> >
> 
> 3. Neither.  See below.  First a discussion of the two options.
> 
> 1. Is fine, but it forces you to choose a single raid level for all your 
> data.  I like raid 0 for filesystems that are used a lot, but can easily 
> be reconstructed given time (/usr) and especially filesystems that don't 
> need to be reconstructed (/var/tmp), raid 5 or 6 for large filesystems 
> that I don't want to lose (/home, particularly), and raid 1 for critical, 
> but small, filesystems (/boot, maybe).  
> 
> 2. Is a little silly, since LVM is designed so that you can treat multiple 
> pvs as a single pool of data OR you can allocate from a certain pv -- 
> whatever suits the task at hand.  So, it rarely makes sense to have 
> multiple volume groups; you'd only do this when you want a fault-tolerant 
> "air-gap" between two filesystems.
> 
> Failure of a single pv in a vg will require some damage control, maybe a 
> little, maybe a lot, but having production encounter any problems just 
> because development had a disk go bad is unacceptable in many 
> environments.  So, you have a strong argument for separate vgs there.
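
(If I follow, the separate-vg case would look something like this, with the vg 
and device names just placeholders of mine:

  pvcreate /dev/md2 /dev/md3
  vgcreate vg_prod /dev/md2
  vgcreate vg_dev  /dev/md3

so a bad disk under vg_dev never drags vg_prod into the damage control.)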
> 
> 3. My approach: While I don't use EVMS (the LVM tools are fine with me, at 
> least for now) I have a software raid 0 and a hw raid 5 as separate pvs in 
> a single vg.  I create and expand lvs on the pv that suits the data.  I 
> also have a separate (not under lvm) hw raid 0 for swap and hw raid 6 for 
> boot.  I may migrate my swap to LVM in the near future; during my initial 
> setup, I feared it was unsafe.  Recent experience tells me that's (most 
> likely) not the case.
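
(So if I understand, the single-vg-with-two-pvs setup would be created roughly 
like this -- names are placeholders again:

  pvcreate /dev/md0 /dev/sdb1     # /dev/md0 = sw raid 0, /dev/sdb1 = hw raid 5 device
  vgcreate vg0 /dev/md0 /dev/sdb1

and then each lv goes on whichever pv suits the data.)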
> 
> For the uninitiated, you can specify the pv to place lv data on like so:
> lvcreate -L <size> -n <name> <vg> <pv>
> lvresize -L <size> <vg>/<lv> <pv>
> The second command only affects where new extents are allocated; it will not 
> move old extents -- use pvmove for that.
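
(Concretely, I guess that comes out to something like

  lvcreate -L 20G -n usr vg0 /dev/md0    # put the new lv's extents on the raid 0 pv
  lvresize -L 30G vg0/usr /dev/md0       # grow it; the new extents land on that same pv

where vg0, /dev/md0, and the sizes are just example values of mine.)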
> 
> -- 
> Boyd Stephen Smith Jr.
> [EMAIL PROTECTED]
> ICQ: 514984 YM/AIM: DaTwinkDaddy
> -- 
> gentoo-user@gentoo.org mailing list
> 
> 

-- 
gentoo-user@gentoo.org mailing list
