Seth Vidal wrote:
> 
> So my questions are these:
>  Is 90MB/s a reasonable speed to be able to achieve in a raid0 array
> across say 5-8 drives?
> What controllers/drives should I be looking at?


I'm a big IDE fan and have experimented with raid0 a fair bit; I dreamt
of achieving those speeds with 5 UDMA/66 drives across multiple
controllers.

My experience is that IDE performance is seriously restricted under Linux
and definitely doesn't scale linearly.

I have had good experiences with Promise controllers, but will never
again try to use multiple HPT cards.

Under 2.2.x an IDE raid0 array tops out at around 50 MB/s; it effectively
doesn't scale past 3 drives.
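For anyone wanting to reproduce those numbers: in that era a software
raid0 array was described in /etc/raidtab and built with the raidtools
`mkraid`. A minimal sketch of a 3-drive array follows; the device names
and chunk size are illustrative, not taken from any real setup:

```
raiddev /dev/md0
        raid-level              0
        nr-raid-disks           3
        persistent-superblock   1
        chunk-size              64
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdc1
        raid-disk               1
        device                  /dev/hde1
        raid-disk               2
```

Note each drive sits alone on its own IDE channel (hda, hdc, hde);
putting two raid members as master/slave on one channel serializes their
transfers and kills any hope of scaling.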

Under 2.4.x-test you get only a bit more than the speed of a single
drive. It's just f*cked; it has been a known problem for many months, but
it doesn't look like anything much is going to happen to fix it. I
suspect the performance problems may be tied up with the whole latency
thing, or the elevator code, but that's really just me guessing.

As far as I can tell, IDE raid performance is limited by the IDE code,
not the raid code.

I guess the good news is that SCSI performance is said to scale
linearly, and it sounds like in your situation you would have a few $
available to spend on SCSI hardware.

I'm sure a few people here would be interested in seeing some benchmarks
when you do get something happening.
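For rough numbers, a dd-based sequential test is the usual quick check
(hdparm -t gives similar raw-read figures). A minimal sketch; the file
path is an assumption, point it at a filesystem mounted on the array:

```shell
# Write a 64 MB test file; conv=fsync forces the data to disk so the
# write rate dd reports isn't just the page cache.
# /tmp/raidtest is illustrative; use a path on the array under test.
dd if=/dev/zero of=/tmp/raidtest bs=1M count=64 conv=fsync

# Read it back sequentially; dd prints the transfer rate on stderr.
dd if=/tmp/raidtest of=/dev/null bs=1M

# Clean up the test file.
rm -f /tmp/raidtest
```

A freshly written file will often read back from cache, so for an honest
read figure use a file much larger than RAM, or unmount and remount the
filesystem between the write and the read.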

Good luck.

Glenn
