The primary limitation is probably the rotational speed of the disks and 
how fast you can pull data off the drives. For instance, the big IBM 
drives (20-40 GB) top out at about 27 MB/s for both the 7200 and 10k 
RPM models. The drives to come will have to make trade-offs between 
density and speed, as the technologies in the works have upper 
constraints on one or the other. So, given enough controllers (either 
on-disk SCSI controllers or individual IDE channels per drive), the 
limit will be related to the bandwidth of the disk interface rather 
than the speed of the processor it's talking to.
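
To make the arithmetic concrete, here's a back-of-the-envelope sketch 
(minimal Python; the ~13 MB/s Cheetah rate, the 27 MB/s IBM figure, and 
the 40 MB/s UltraWide bus limit are the numbers from this thread, the 
drive counts are just illustrative):

    # Sequential array throughput is limited by whichever is smaller:
    # the sum of the per-drive rates, or the bus bandwidth.
    def array_throughput_mb_s(drives, per_drive_mb_s, bus_mb_s):
        return min(drives * per_drive_mb_s, bus_mb_s)

    # Four UltraWide Cheetahs at ~13 MB/s each saturate a 40 MB/s bus:
    print(array_throughput_mb_s(4, 13, 40))   # -> 40

    # Two of the 27 MB/s IBM drives already exceed the same bus:
    print(array_throughput_mb_s(2, 27, 40))   # -> 40

Once the bus is the smaller term, adding drives buys you nothing; 
adding buses (more controllers) does.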

On Wed, 3 May 2000, Christopher E. Brown wrote:

> On Sun, 23 Apr 2000, Chris Mauritz wrote:
> 
> > > I wonder what the fastest speed any linux software raid has gotten, it
> > > would be great if the limitation was a hardware limitation i.e. cpu,
> > > (scsi/ide) interface speed, number of (scsi/ide) interfaces, drive
> > > speed. It would be interesting to see how close software raid could get
> > > to its hardware limitations.
> > 
> > RAID0 seemed to scale rather linearly.  I don't think there would be
> > much of a problem getting over 100mbits/sec on an array of 8-10 ultra2
> > wide drives.  I ultimately stopped fiddling with software RAID on my
> > production boxes as I needed something that would reliably do hot
> > swapping of dead drives.  So I've switched to using Mylex ExtremeRAID
> > 1100 cards instead (certainly not the card you want to use for low 
> > budget applications...heh).
> 
> 
>       Umm, I can get 13,000K/sec to/from ext2 from a *single*
> UltraWide Cheetah (best case, *long* reads, no seeks).  100Mbit is only
> 12,500K/sec.
> 
> 
>       A 4 drive UltraWide Cheetah array will top out an UltraWide bus
> at 40MByte/sec, over 3 times the max rate of a 100Mbit ethernet.
> 
> ---
> As folks might have suspected, not much survives except roaches, 
> and they don't carry large enough packets fast enough...
>         --About the Internet and nuclear war.
