I'd like to explain why the HW numbers aren't strictly the issue in IT
departments. Two things increase annual cost:
non-standard equipment and high-admin equipment. IDE, at half the cost,
non-standard equipment, and high-admin equipment. IDE, at half the cost,
might increase labor costs by much more than the difference in purchase
price. A thousand dollars is roughly equivalent to 10 hours of
technician/admin time. If whoever is responsible for the box has to spend that
extra 10 hours over the course of the year tinkering with it instead
of doing "other things"(tm), then the savings is spent AND you have reduced
performance. This is the single main reason that commercial UNIX still
sells on standard hardware; otherwise everyone would be running Linux.
That argument covers high-admin systems. For non-standard equipment, one
simply has to consider the hours spent re-learning the odd setup
every time someone has to work on it. It effectively reduces, by a LOT, the
number of machines a given SysAdmin can watch over.
But if you really think about it, this is the same issue home users
have; they just don't see it directly. An increased cost of maintenance,
for the home user, means radically reduced productivity (assuming
that building computers isn't the end goal). They might save $1,000
but lose another 10 hours in tinkering time. That's a lot when you
realize that the average home user only has 20 hours per week to devote
to extra-curricular activities.
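To put a number on it, here is a rough back-of-the-envelope sketch (Python,
purely for illustration). The $100/hour rate is an assumption taken from the
"a thousand dollars is roughly 10 hours" figure above, and the prices come
from the array comparison quoted below:

    # How many extra tinkering hours per year it takes for a cheaper box
    # to eat its own purchase-price savings. HOURLY_RATE is an assumption
    # based on the "$1,000 ~= 10 hours of admin time" figure above.
    HOURLY_RATE = 100.0  # assumed cost of one technician/admin hour, in dollars

    def break_even_hours(cheap_price, expensive_price, rate=HOURLY_RATE):
        """Extra admin hours per year at which the hardware savings vanish."""
        return (expensive_price - cheap_price) / rate

    # Prices taken from the 86GB IDE vs 90GB SCSI comparison quoted below.
    print(break_even_hours(1052, 2800))  # -> 17.48 hours/year

Past roughly 17 hours of extra fiddling per year, the "cheap" array is the
expensive one, and that's before counting the lost productivity.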
SCSI is much more manageable than IDE. I spend fewer man-hours getting a
SCSI system running right than a multi-controller IDE system (I have
both, so I know). I can also speak directly to WinNT, because performance
tinker-time kills you there.
Regrettably, I have 100% custom-built server hardware here, because it
was cheaper. Fortunately, it is now all Linux, and regardless of the
hardware it all runs a monolithic, one-size-fits-all kernel. (The WinNT
server is now a Linux box.) I have both IDE and SCSI software RAID. The
performance of the IDE RAID sucks rocks. I also have SCSI hardware RAID,
and it screams.
The main reason I use RAID is reliability; after that comes low
maintenance, followed by performance, with cost last.
Two years ago a main non-RAID system drive (IDE) went bad
(heat-stroke), and I had to completely rebuild the system on the new drive.
It cost me a man-week, which hurt much more than the price of the disk.
Not that SCSI would have helped there (the PS fan quit working).
The drive was replaced by a RAID1 system. Performance dropped to half,
and overnight processing stretched out to four hours, blowing the window
wide open. Does disk-system performance matter? Sometimes it does. In fact,
since disk access is almost ALWAYS the main bottleneck in these days
of 500MHz CPUs and gigabytes of RAM, I submit that it always matters.
Moving one disk out to another IDE controller sped things up, but I had
to tinker like hell to get it to work right, whereas SCSI would have
been much easier.
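If you want a feel for where your own disks stand, a crude sequential-read
timing like the sketch below gives a first impression (Python; the file path
and sizes are assumptions, and a real benchmark such as bonnie will tell you
far more):

    # Crude sequential-read timing -- NOT a proper benchmark.
    # Make FILE_SIZE larger than RAM, or the OS page cache will
    # flatter the numbers badly.
    import os, time

    TEST_FILE  = "/tmp/disk_test.bin"   # hypothetical file on the array under test
    BLOCK_SIZE = 64 * 1024              # 64 KB per read
    FILE_SIZE  = 256 * 1024 * 1024      # 256 MB of test data

    block = b"\0" * BLOCK_SIZE
    with open(TEST_FILE, "wb") as f:    # lay down the test file first
        for _ in range(FILE_SIZE // BLOCK_SIZE):
            f.write(block)

    start = time.time()
    with open(TEST_FILE, "rb") as f:    # then time reading it back
        while f.read(BLOCK_SIZE):
            pass
    elapsed = time.time() - start

    print("sequential read: %.1f MB/s" % (FILE_SIZE / (1024.0 * 1024.0) / elapsed))
    os.remove(TEST_FILE)
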
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Tom Livingston
> Sent: Tuesday, July 27, 1999 7:15 PM
> To: Marc Mutz; Kiyan Azarbar
> Cc: [EMAIL PROTECTED]
> Subject: RE: Suggestions for running RAID5 (3 disks): buy 2 extra
> controllers??
>
>
> Marc Mutz wrote:
> > You should have bought SCSI disks. They might have been cheaper, too,
> > because you need only one controller for three disks. (Sorry - could
> > not resist :-)
>
> I know it's fun for all the server purists to knock EIDE, but it does have
> some advantages:
>
> A mythical ~90GB array:
> (all prices shown at http://www.pricewatch.com)
>
> IDE:
> 6x Maxtor 17.2G udma33 drives: $169/ea = $1014
> 2x Promise ultra33 eide controllers: $19/ea = $38
> Grand total for an 86GB array = $1052
>
> SCSI:
> 6x Hitachi 18.0G uw-fast drives: $445/ea = $2670
> 1x Tekram DC-390U2B controller: $130
> (note you should probably use a dual channel controller)
> Grand total for a 90GB array: $2800
>
> In this situation, an IDE setup is less than half the price of a SCSI setup.
> The IDE drives would be running on separate channels, so your drawbacks would
> be left as: a) SCSI is faster (true, but not night & day) and b) SCSI is more
> manageable (true).
>
> For my work, I wouldn't suggest an IDE array. Not because it wouldn't
> suffice, but IT staff in general has a nearly overwhelming fixation on
> buying the "right" equipment... Notice how many NT licenses were sold a
> number of years ago when it was all the rage.
>
> But for a hobbyist system, or one where you're making a decision on the
> numbers, I think IDE RAID is a serious contender. I am happy with my 70GB
> IDE RAID, and it's active 24x7 on the internet. If I had needed to spend 2x
> what I spent for IDE to buy SCSI, I just wouldn't have been able to build
> the box.
>
> Tom
>