On Tue, Nov 29, 2011 at 1:20 PM, Florian Philipp <[email protected]> wrote:
> Am 29.11.2011 14:44, schrieb Michael Mol:
>> On Tue, Nov 29, 2011 at 2:07 AM, Florian Philipp <[email protected]> wrote:
>>> Am 29.11.2011 05:10, schrieb Michael Mol:
>>>> I've got four 750GB drives in addition to the installed system drive.
>>>>
>>>> I'd like to aggregate them and split them into a few volumes. My first
>>>> inclination would be to raid them and drop lvm on top. I know lvm well
>>>> enough, but I don't remember md that well.
>>>>
>>>> Since I don't recall md well, and this isn't urgent, I figure I can look
>>>> at the options.
>>>>
>>>> The obvious ones appear to be mdraid, dmraid and btrfs. I'm not sure I'm
>>>> interested in btrfs until it's got a fsck that will repair errors, but
>>>> I'm looking forward to it once it's ready.
>>>>
>>>> Any options I missed? What are the advantages and disadvantages?
>>>>
>>>> ZZ
>>>>
>>>
>>> Sounds good so far. Of course, you only need mdraid OR dmraid (md
>>> recommended).
>>
>> dmraid looks rather new on the block. Or, at least, I've been more
>> aware of md than dm over the years. What's its purpose, compared to
>> mdraid? Why is mdraid recommended over it?
>>
>
> dmraid being new? Not really. Anyway: under the hood, md and dm use
> exactly the same code in the kernel; they just provide different
> interfaces. mdraid is a Linux-specific software RAID implemented on top
> of ordinary single-disk controllers. It works like a charm, and any
> Linux system with any disk controller can work with it (if you ever
> change your hardware).
>
> dmraid provides a "fake RAID": a software RAID with support from (or
> rather, under the control of) a cheap on-board RAID controller.
> Performance-wise, it usually doesn't provide any advantage, because the
> kernel driver still has to do all the heavy lifting (which is why it
> uses the same code base as mdraid). Its most important disadvantage is
> that it binds you to the chipset vendor, who determines the on-disk
> layout. Apparently this has gotten better over the last few years
> because of some pretty major consolidation in the chipset market. It can
> be helpful if you want to dual-boot Windows on the same RAID (both
> systems have to use the same disk layout via their respective drivers).
>
>
>>> What kind of RAID level do you want to use, 10 or 5? You
>>> can also split it: use a smaller RAID 10 for performance-critical
>>> partitions like /usr and the more space-efficient RAID 5 for bulk like
>>> videos. You can handle this with one LVM volume group consisting of two
>>> physical volumes. Then you can decide on a per-logical-volume basis
>>> where it should allocate space and also migrate LVs between the two PVs.
>>
>> Since I've got four disks for the pool, I was thinking raid10 with lvm
>> on top, and a single lvm pv above that.
>>
>
> Yeah, that would also be my recommendation. But if storage efficiency is
> more relevant, RAID 5 with 4 disks brings you 750GB more usable storage.
>
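Just to make sure I have the sequence straight before I touch the disks:
I'm assuming the RAID10-plus-LVM setup would look roughly like the below,
with mdraid underneath and a single PV on top. The device names (sdb
through sde), the md device, the VG name and the LV size are just
placeholders, not anything I've actually run yet:

  # build the 4-disk RAID10 array out of the data disks
  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

  # layer LVM on top: one PV, one VG, then carve out LVs as needed
  pvcreate /dev/md0
  vgcreate vg_data /dev/md0
  lvcreate -L 200G -n media vg_data
  mkfs.ext4 /dev/vg_data/media

For the RAID5 comparison, swapping --level=10 for --level=5 should be the
only change needed on the mdadm line.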
It looks like I'll want to try two different configurations, RAID5 and
RAID10. Not for different storage requirements, but because I want to see
exactly what the performance drop is. I wish LVM striping supported data
redundancy. But, then, I wish btrfs were ready...

--
:wq

