On Fri, Dec 09, 2011 at 11:36:04AM +0100, Benny Lofgren wrote:
[snip]
> > wd1 =  80 GB, two 40GB partitions
> > wd2 = 120 GB, three 40GB partitions
> > Something like this should work:
> > # bioctl -c 0 -l /dev/wd1a,/dev/wd1d,/dev/wd2a,/dev/wd2d,/dev/wd2e softraid0
> 
> Out of curiosity, have you actually tried something like this? While I'm
> sure it works technically, I'd imagine the performance would be abysmal.

Obviously, an optimal solution would be concatenation.  Since that does not
exist, the closest matching solution without ccd(4) is RAID0.  And no, I 
haven't tried it; what I wrote was nothing more than a thought experiment.
 
> Think about it: When writing a chunk of data, the first part goes to one
> part of the first disk, the next part goes to another part, 40 gigs away,
> then the second disk gets three writes, all separated by long platter
> distances requiring large seek times for *every* write.

The optimal solution would be either a single larger disk or an array of
disks of the same size.  If the working set is lightly loaded, the abysmal
performance might be acceptable.  Interleaving the partitions in the device
list (alternating between the two disks) may help slightly, though I agree
with you: under any significant I/O load this is probably not a usable
solution.  As I wrote earlier in this thread, it may or may not meet the
OP's needs.  If not, the appropriate solution is either alternative hardware
or an alternative OS.
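To make the interleaving point concrete, here is a toy sketch.  It assumes
softraid stripes round-robin across the chunks in the order they are passed
to "bioctl -l" (a simplification; the device names match the earlier
example).  It counts how often consecutive stripes land on the same physical
disk, which is where the long 40 GB seeks come from:

```python
# Toy model: stripe i lives on chunk i % n, so adjacent stripes on the
# same disk force a large seek between far-apart partitions.

def same_disk_adjacencies(chunks):
    """chunks: list of (disk, partition) tuples in 'bioctl -l' order."""
    n = len(chunks)
    # Compare each chunk with the next one, wrapping around cyclically.
    return sum(chunks[i][0] == chunks[(i + 1) % n][0] for i in range(n))

naive       = [("wd1", "a"), ("wd1", "d"), ("wd2", "a"), ("wd2", "d"), ("wd2", "e")]
interleaved = [("wd1", "a"), ("wd2", "a"), ("wd1", "d"), ("wd2", "d"), ("wd2", "e")]

print(same_disk_adjacencies(naive))        # 3 same-disk transitions per 5 stripes
print(same_disk_adjacencies(interleaved))  # 1 same-disk transition per 5 stripes
```

With five chunks on two disks some same-disk adjacency is unavoidable, but
the interleaved ordering cuts it from three transitions per cycle to one,
which is why I'd expect it to perform slightly less badly.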
 
> Also, striping or concatentation without redundancy is generally a very,
> very bad idea for anything but temporary data you can live without...

I agree with you.  I've never used RAID 0 arrays for anything other than 
temporary data space; nested arrays such as "RAID 10" provide the 
availability and redundancy that RAID 0 does not, yet offer similar 
performance characteristics.  (Personally, I wish Berkeley had come up with 
a term other than "RAID 0", one more indicative of its purpose.)
