>>> On Thu, 21 Feb 2008 13:12:30 -0500, Norman Elton
>>> <[EMAIL PROTECTED]> said:

[ ... ]

normelton> Assuming we go with Guy's layout of 8 arrays of 6
normelton> drives (picking one from each controller),

Guy Watkins proposed another one too:

   «Assuming the 6 controllers are equal, I would make 3 16 disk
    RAID6 arrays using 2 disks from each controller.  That way
    any 1 controller can fail and your system will still be
    running. 6 disks will be used for redundancy.

    Or 8 6 disk RAID6 arrays using 1 disk from each controller.
    That way any 2 controllers can fail and your system will
    still be running. 16 disks will be used for redundancy.
    Might be too excessive!»

So, I would not be overjoyed with either physical configuration,
except in a few particular cases. It is very amusing to read such
worries about host adapter failures, and somewhat depressing to
see "too excessive" used to describe 4+2 parity RAID.

normelton> how would you setup the LVM VolGroups over top of
normelton> these already distributed arrays?

That looks like a trick question, or at least an incorrect
question, because I would rather not do anything like that
except in a very few cases.

However, if one wants to do a bad thing in the least bad way,
perhaps a volume group per array would be least bad.
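
If one must do it, a sketch of that volume-group-per-array
arrangement, reusing the hypothetical /dev/md0../dev/md7 arrays
from the sketch above (the PV/VG/LV names are equally
arbitrary), so that damage to one array cannot take out logical
volumes sitting on the others:

  # One PV, one VG, one LV per RAID6 array; repeat per array.
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate --name lv0 --extents 100%FREE vg0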

Going back to your original question:

  «So... we're curious how Linux will handle such a beast. Has
   anyone run MD software RAID over so many disks? Then piled
   LVM/ext3 on top of that?»

I haven't, because it sounds rather inappropriate to me.

  «Any suggestions?»

Not easy to respond without a clear statement of what the array
will be used for: RAID levels and file systems are very
anisotropic in both performance and resilience, so a particular
configuration may be very good for one workload and quite poor
for another.

For example, a 48-drive RAID0 with 'ext2' on top would be very
good for some cases, but perhaps not for archival :-).
In general, I'd use RAID10 (http://WWW.BAARF.com/), RAID5 in
very few cases, and RAID6 almost never.
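
To illustrate the RAID10 option, something like the following
sketch would do, again with invented device names; with mdadm's
default near-2 layout consecutive devices form mirror pairs, so
the ordering below puts each pair on two different controllers:

  # One RAID10 over all 48 (hypothetical) disks, ordered so
  # that every mirror pair spans two controllers.
  devs=""
  for i in 0 1 2 3 4 5 6 7; do
      for c in 0 1 2 3 4 5; do
          devs="$devs /dev/c${c}d${i}"
      done
  done
  mdadm --create /dev/md0 --level=10 --raid-devices=48 $devs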

In general, current storage practices do not handle very large
single-computer storage pools well (just consider 'fsck' times),
and beyond 10TB I reckon that currently only multi-host
parallel/cluster file systems are good enough, for example
Lustre (for smaller multi-TB filesystems I'd use JFS or XFS).

But then Lustre can also be used on a single machine with
multiple (say 2TB) block devices, and this may be the best
choice here too if a single virtual filesystem is the goal:

  http://wiki.Lustre.org/index.php?title=Lustre_Howto
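
Assuming hypothetical device names and a made-up filesystem name
'bigfs' (the Howto above has the real procedure), the rough
single-host shape would be a combined MGS/MDT plus one OST per
block device:

  # Combined MGS/MDT on one device, OSTs on the others; the
  # 'thishost@tcp0' NID is a placeholder for the local node.
  mkdir -p /mnt/mdt /mnt/ost0 /mnt/bigfs
  mkfs.lustre --fsname=bigfs --mgs --mdt /dev/md0
  mkfs.lustre --fsname=bigfs --ost --mgsnode=thishost@tcp0 /dev/md1
  mount -t lustre /dev/md0 /mnt/mdt
  mount -t lustre /dev/md1 /mnt/ost0
  # ... repeat for the remaining OST devices, then mount the
  # aggregate filesystem as a client:
  mount -t lustre thishost@tcp0:/bigfs /mnt/bigfs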