I'd recommend that you allow your RAID controller to do what it does
best and not ask LVM to do what RAID should be doing. Let LVM manage the
volumes.
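For the basic setup, that just means pointing LVM at the single device the RAID layer exposes. A minimal sketch, assuming a software-RAID device at /dev/md0 and hypothetical names vg0/home (a hardware controller would expose something like /dev/sda instead):

```shell
# Initialize the RAID device as an LVM physical volume,
# build one volume group on it, and carve out a logical
# volume with a filesystem. All names here are examples.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 200G -n home vg0
mkfs.ext3 /dev/vg0/home
```

Leave the redundancy and striping decisions to the RAID layer; LVM just sees one big reliable disk.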
I'm guessing that Josh is not new to LVM, but for those people who are,
I highly recommend carefully reading the documentation before
committing production data to an LVM volume. It is surprisingly easy to
accidentally mess up a filesystem, so experimenting a few times with
some storage that contains nothing of value is a very good idea when
you're first learning how to use LVM.
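You don't even need a spare disk to practice on. One way (a sketch, assuming root access and hypothetical file/VG names) is to back a loop device with a sparse file and treat it as a throwaway physical volume:

```shell
# Build a disposable LVM sandbox on a loopback device (needs root).
# The image path and "sandboxvg" are just example names.
dd if=/dev/zero of=/tmp/lvm-sandbox.img bs=1M count=256
LOOP=$(losetup -f --show /tmp/lvm-sandbox.img)
pvcreate "$LOOP"
vgcreate sandboxvg "$LOOP"
lvcreate -L 128M -n testlv sandboxvg
mkfs.ext3 /dev/sandboxvg/testlv
# ...experiment freely; nothing of value is at risk...
# Tear it all down when done:
lvremove -f sandboxvg/testlv
vgremove sandboxvg
pvremove "$LOOP"
losetup -d "$LOOP"
rm /tmp/lvm-sandbox.img
```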
Once you understand how to use LVM properly it is a wonderful thing. You
never again have to deal with jockeying around data between filesystems
or having a rat's nest of mount points just to handle your growing data
storage needs.
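For example, growing a volume when a filesystem fills up is just a couple of commands, assuming ext3 and free extents in the volume group (vg0/home is a hypothetical volume, adjust to taste):

```shell
# Grow the logical volume by 100GB, then grow the
# filesystem to fill it. ext3 can be resized while
# mounted, so no data shuffling or symlinks needed.
lvextend -L +100G /dev/vg0/home
resize2fs /dev/vg0/home
```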
Dan
Josh Sled wrote:
I've been dealing with dwindling disk space, recently. :/ It just bit
me hard after I tried to free up some disk space and found a corner-case
where VirtualBox fails in a disk full situation in such a way as to
render the whole tree of VM snapshots unusable. Thankfully, it's "only"
a couple of weekends of a clean Gentoo install for testing purposes –
only lost time, no lost work – but still really frustrating. :/
In any case, I've been putting off upgrading my storage subsystem for
too long. I don't know this stuff very well, so please check my
thoughts here. :)
My goal is 1-2TB of storage, with basic redundancy, to support my
machine in its roles as my primary development workstation (java
compilation, lots of dvcs activity, &c.) and general media storage for
the household (pictures, mp3s and limited video; we're not running
mythtv or anything, but sometimes do some dvd ripping or BT).
It appears 750GB 7200RPM SATA drives are ~$120.
×4 in RAID 1+0 = 1.5 TB @ $480.
ZFS is totally awesome, but I just can't be arsed to learn Solaris,
right now. :/
It seems like using LVM on top of RAID is still reasonable for
future-proofing storage additions, backup snapshots, &c., even if I'm
not making huge use of it straight away.
Except for a non-RAID/-LVM boot partition, I'm not sure of a good reason
anymore (for my machine roles, anyways) to have separate /home, /var,
/opt, &c. partitions … am I wrong? All I've noticed with having a
separate /, /home and /a (secondary hard drive) is that I spend a lot of
time moving and symlinking stuff between them to balance the two drives
on an ad-hoc basis. :/
It seems like LVM can do striping, but maybe this is silly? Leave it to
RAID, and just use LVM for VM? Or use LVM to compose two 750GB RAID0
drives in a striped manner?
Also, is boot/livecd support for RAID+LVM "there", or should I keep a
separate simple boot drive?