> From [EMAIL PROTECTED] Sun Apr 23 01:06:24 2000
> 
> Chris Mauritz wrote:
> > 
> > > From [EMAIL PROTECTED] Sat Apr 22 21:37:37 2000
> > >
> > > Hi, I'm just wondering: has anyone really explored the performance
> > > limitations of Linux RAID?
> > >
> > > Recognising one's limitations is the first step to overcoming them.
> > >
> > > I've found that relative performance increases are better with fewer
> > > drives.
> > >
> > > I've been using RAID for a year or so, and I've never managed to get
> > > a 4-way IDE RAID working efficiently (no timeouts); 2.3.99pre6pre5
> > > is the best I've had, though.
> > >
> > > Anyone have any thoughts on where the bottleneck is, or other
> > > experiences with RAID limitations?
> > 
> > I've not had any real problems using striped sets of SCSI drives as
> > RAID 0 and RAID 5.  You're always going to get rather crappy performance
> > with lots of IDE drives unless you have only 1 drive per channel.  By
> > the time you buy that many controllers, the cost is pretty much a
> > wash with SCSI.
> > 
> 
> I've just managed to set up a 4-way IDE RAID0 that works.
> 
> The only way I can get it working is to use *two* drives per channel.
> I have to do this because I've concluded that I cannot use both my
> onboard HPT366 channels and my PCI HPT366 channels together.
> 
> I've done some superficial performance tests using dd: 55MB/s write,
> 12MB/s read. Interestingly, I got 42MB/s write using just a 2-way IDE
> RAID0, and 55MB/s write with one drive per channel on four channels
> (I had no problem writing, just reading), so surprisingly I don't think
> the drive interface is my bottleneck.
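(For context, a 4-way software RAID0 of this raidtools era was described in /etc/raidtab; the device names below are illustrative, assuming the HPT366 channels appear as hde-hdh with one drive per channel:)

```
raiddev /dev/md0
        raid-level              0
        nr-raid-disks           4
        persistent-superblock   1
        chunk-size              32
        device                  /dev/hde
        raid-disk               0
        device                  /dev/hdf
        raid-disk               1
        device                  /dev/hdg
        raid-disk               2
        device                  /dev/hdh
        raid-disk               3
```

The array is then created once with `mkraid /dev/md0`; with the persistent superblock enabled, the kernel can autostart it at boot.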

I find those numbers rather hard to believe.  I've not yet heard of a
disk (IDE or SCSI) that can reliably dump 22MB/s, which is what your
2-drive setup implies.  Something isn't right.
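For what it's worth, sequential dd numbers like the ones quoted are usually gathered with something like the commands below; the mount point is illustrative, and the test file should be larger than physical RAM so the page cache doesn't inflate the read figure:

```shell
# Sequential write: stream 1 GB of zeros onto the array and time it.
# (Make count large enough that the file exceeds physical RAM.)
time dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=1024

# Flush dirty pages so the write timing isn't just cache.
sync

# Sequential read: stream the file back and discard it.
time dd if=/mnt/raid/testfile of=/dev/null bs=1M
```

Dividing bytes moved by elapsed time gives the MB/s figure; a cached read will report implausibly high numbers, which is one way results like 55MB/s from two spindles can creep in.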

> I think read performance is a known problem, but at least I don't get
> lockups or timeouts anymore.
> 
> IDE will always beat SCSI hands down on price/performance, but SCSI is
> clearly the winner if you want four or more drives in one machine, as
> IDE just doesn't scale.

It's not so hands down anymore.  SCSI drives are becoming quite cheap.
Yes, IDE is cheaper, but not THAT much cheaper...especially if you want
to use more than a couple of disks.

> SCSI drives are >50% dearer than IDE, aren't they?
> 
> What sort of performance do you get from your SCSI sets?

I was getting 45-50MB/s with a striped set of three 50GB Seagate
Barracudas using software RAID0 and an Adaptec 2940U2W controller.
I used bonnie to test, with a data size of 4-5 times the system
memory.
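A run along those lines might look like the following; the mount point and hostname label are illustrative, and classic bonnie takes its -s working-set size in megabytes:

```shell
# Size the working set at ~4x physical RAM so the page cache can't
# mask disk throughput (MemTotal in /proc/meminfo is in kB).
MEM_MB=$(awk '/MemTotal/ {print int($2 / 1024)}' /proc/meminfo)

# Run bonnie against the RAID mount point with that working set.
bonnie -d /mnt/raid -s $((MEM_MB * 4)) -m testbox
```

Sizing the file well past RAM is the point of the 4-5x rule: with a smaller file, the sequential-read column mostly measures memory bandwidth rather than the array.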

> I wonder what the fastest speed any Linux software RAID has achieved.
> It would be great if the limitation turned out to be a hardware one,
> i.e. CPU, (SCSI/IDE) interface speed, number of (SCSI/IDE) interfaces,
> or drive speed. It would be interesting to see how close software RAID
> could get to its hardware limits.

RAID0 seemed to scale rather linearly.  I don't think there would be
much of a problem getting over 100MB/s on an array of 8-10 Ultra2
Wide drives.  I ultimately stopped fiddling with software RAID on my
production boxes, as I needed something that would reliably do hot
swapping of dead drives.  So I've switched to using Mylex ExtremeRAID
1100 cards instead (certainly not the card you want to use for
low-budget applications...heh).

Cheers,

Chris

-- 
Christopher Mauritz
[EMAIL PROTECTED]
