On Sun, Nov 28, 1999 at 06:37:46PM -0500, [EMAIL PROTECTED] wrote:
> On 28 Nov, [EMAIL PROTECTED] wrote:
> [clip]
> > If the controller is built right, there is some potential for a performance
> > increase. The idea is to have more than one drive simultaneously reading
> > into its buffer (they run about 0.5 MB these days). Assuming each drive stays
> > busy, ie, as would be the case in large sequential transfers, it is possible
> > to produce a rate at the host that approaches the combined rate of the
> > drives. Same theory holds for SCSI, or Fibre Channel, for that matter.
> [snip]
>
> The problem with UDMA or any other IDE derivative is that only one
> device can communicate on the bus at a time. In the case of a
SCSI is the same: only one message travels on the bus at any given
time. SCSI simply has a much better scheme for letting devices share
the bus; that is where IDE falls down badly. But at the current price
of IDE drives versus SCSI drives, I'd say it's quite reasonable to use
IDE for arrays of up to, say, 6-8 disks. You can buy enough controllers
to still have just one drive on each bus.
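To make the "one drive per bus" setup concrete, a period /etc/raidtab for a four-drive RAID-0 might look something like this sketch (the device names are assumptions: hda/hdc as the masters of the two onboard channels, hde/hdg as the channels of an add-in controller, with every slave position left empty):

```
raiddev /dev/md0
        raid-level            0
        nr-raid-disks         4
        persistent-superblock 1
        chunk-size            32

        # one drive per channel -- no master/slave sharing
        device                /dev/hda1
        raid-disk             0
        device                /dev/hdc1
        raid-disk             1
        device                /dev/hde1
        raid-disk             2
        device                /dev/hdg1
        raid-disk             3
```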
> master/slave situation, you're looking at no better than single-drive
> performance (caps at about 20MB/s for really good drives). For a
> controller with two channels, primary/secondary, you can get double
> performance, in theory. So in theory you're looking at no better than
> 40MB/s with RAID-0. With UDMA somewhat more realistic numbers are
Except that someone just posted 50+ MB/s on RAID-5 with IDE ;)
That was with enough controllers, though. That's the secret.
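Why "enough controllers" is the secret can be put in a toy model. The numbers below are assumed round figures, not measurements: a 20 MB/s sustained drive rate and a 33 MB/s UDMA/33 channel ceiling. The point is that the array's ceiling scales with the number of channels, not drives, because each IDE channel serializes its devices.

```python
def raid0_ceiling(drives, channels, drive_mb_s=20.0, bus_mb_s=33.0):
    """Theoretical RAID-0 read ceiling in MB/s (toy model).

    Each IDE channel carries one transfer at a time, so a channel
    delivers at most bus_mb_s, and never more than its own drives
    can sustain together. All rates here are assumed round numbers.
    """
    drives_per_channel = drives / channels
    per_channel = min(bus_mb_s, drives_per_channel * drive_mb_s)
    return channels * per_channel

# Two drives as master/slave on one channel: the bus is the bottleneck.
print(raid0_ceiling(2, 1))   # 33.0
# The same two drives on separate channels: the drives are the bottleneck.
print(raid0_ceiling(2, 2))   # 40.0
# Eight drives spread over four channels: four buses running flat out.
print(raid0_ceiling(8, 4))   # 132.0
```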
> probably half that, as I understand it. Someone on here who has more
> experience (I only use SCSI so I'm not an authority by any stretch of
> the imagination), and I know there are plenty, can probably give you
> better figures. I'm just regurgitating what I've seen time and time
> again on this list in the past.
You're absolutely right that, given only one bus, SCSI is superior.
But that point doesn't apply to a lot of small arrays: with the price
advantage of IDE disks today, you can easily afford enough controllers.
The problems start when you need more disks. SCSI is still the way
to go for 10+ disks. Oh, and the cable length restrictions with IDE
are also terrible. But when you run out of space in your cabinet,
you'll most likely also have run out of PCI slots for more controllers,
so there's a nice balance there ;)
I use both SCSI and IDE RAID systems, and I get very nice performance
from all of them; it matches what I would expect.
--
................................................................
: [EMAIL PROTECTED] : And I see the elder races, :
:.........................: putrid forms of man :
: Jakob Østergaard : See him rise and claim the earth, :
: OZ9ABN : his downfall is at hand. :
:.........................:............{Konkhra}...............: