Quoting Ian C. Sison ([EMAIL PROTECTED]):
> On Thu, 27 Jun 2002, Rick Moen wrote:

>> Cutting and pasting from my earlier post:  "hot-fix mapping out of bad
>> sectors automatically, scatter-gather at the hardware level, ability to
>> do genuine low-level reformatting right in the host adapter BIOS, [and]
>> more-stable standards".  Also fewer IRQs consumed, and (usually) lower
>> CPU load.
> 
> I fail to see how these additional  features are actually needed in the
> real world, _using_available_RAID_solutions_, such as 3ware or the
> SCSI-to-IDE-RAID, or FC-to-IDE-RAID boxes from Arena or Promise.

I'm sorry that you fail to see that.  But that's life.

People who've accustomed themselves to living with design-limited
hardware almost always fail to see the point of better hardware they've
never enjoyed.  I know many people who think internal modems are great,
for example.  Hell, some even think their internal _winmodems_ are
great.

Those things I cited are some of the keys to longer service lives and
more reliable service.  You don't believe me?  OK, I can live with that.
I'm not out to convince you; I'm just telling you my views and some of
the reasoning behind them.  Take it or leave it; OK by me, either way.

>> Cutting and pasting from my earlier post:  "hot-fix mapping out of bad
>> sectors automatically,
> 
> Pointless, as with RAID, you simply swap out a bad drive, and replace with
> a new one.

Not pointless.  Again, you've grown accustomed to less-reliable
hardware.  But I infer from the above that you may be unfamiliar with
how hot-fix redirection works.  
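For anyone unfamiliar with it: hot-fix redirection means the controller (or
the drive's firmware) reserves a pool of spare sectors and, when a write to a
sector fails verification, transparently remaps that sector to a spare, so
service continues with no data loss and no immediate drive swap.  A toy sketch
of the bookkeeping in Python (the class and its names are mine, purely
illustrative, and not any vendor's actual firmware logic):

# Toy model of hot-fix (spare-sector) redirection.  A real controller or
# drive does this in firmware; this only illustrates the bookkeeping.

class HotFixDisk:
    def __init__(self, size, spare_count):
        self.data = {}                    # sector number -> contents
        self.remap = {}                   # failed sector -> spare sector
        self.spares = list(range(size, size + spare_count))
        self.bad = set()                  # sectors known to be failing

    def _resolve(self, sector):
        # Follow the remap table to the sector actually in use.
        return self.remap.get(sector, sector)

    def write(self, sector, contents):
        target = self._resolve(sector)
        if target in self.bad:            # write would fail: redirect on the fly
            if not self.spares:
                raise IOError("out of spare sectors")
            target = self.spares.pop(0)
            self.remap[sector] = target   # all future I/O goes to the spare
        self.data[target] = contents

    def read(self, sector):
        return self.data[self._resolve(sector)]

disk = HotFixDisk(size=1000, spare_count=16)
disk.write(42, b"payload")
disk.bad.add(42)                          # sector 42 starts failing
disk.write(42, b"payload, take two")      # transparently lands on a spare
assert disk.read(42) == b"payload, take two"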

>> scatter-gather at the hardware level
> 
> All I/O is performed by a separate RAID controller anyway, so why bother
> with this?

The merit of reordering pending access requests so that they're serviced with
the least total head movement should be obvious.  Seeking is by orders of
magnitude the slowest drive operation, so minimising it yields considerable
benefits.
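To put a rough number on it, here's a toy sketch in Python (invented cylinder
figures, not a benchmark of any real controller) comparing total head travel
when the same pending requests are serviced in arrival order versus reordered
elevator-style:

# Toy comparison of total head travel: pending requests serviced in
# arrival (FIFO) order versus reordered elevator-style (SCAN).
# Cylinder numbers below are invented for illustration only.

def head_travel(start, order):
    # Total cylinders traversed servicing requests in the given order.
    travel, pos = 0, start
    for cyl in order:
        travel += abs(cyl - pos)
        pos = cyl
    return travel

def elevator_order(start, requests):
    # Sweep upward through everything at or above the head, then
    # sweep back down through the rest.
    up = sorted(c for c in requests if c >= start)
    down = sorted((c for c in requests if c < start), reverse=True)
    return up + down

pending = [98, 183, 37, 122, 14, 124, 65, 67]
head = 53

print("FIFO order:    ", head_travel(head, pending))                       # 640 cylinders
print("Elevator order:", head_travel(head, elevator_order(head, pending))) # 299 cylinders

Less than half the head travel for the same work, and hardware that does the
reordering itself delivers that without burdening the host.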

>> ability to do genuine low-level reformatting right in the host adapter
>> BIOS
> 
> For a production system, do you really need to have this feature?  Why not
> throw away the bad drive, and shove in a new one?

The point is that it's _not_ a "bad drive".  People often throw away otherwise
good ATA drives because they don't know how to fix the timing
tracks.  (Please note that you can sometimes download "pseudo-low-level
formatting utilities" specific to that make and model.  But not always,
and you have to search for them.)

I find it scandalous that people put up with this situation.  But, of
course, most simply don't know better.  Again, these are people used to
living with design-compromised hardware, who think what they're used to
is fine for lack of experience of better things.  Some will try to
convince you that what they're used to is _better_ than alternatives
they know much less about, which is a bit ludicrous but extremely
common.

>> more-stable standards".
> 
> If the IDE standards were so unstable, and unsupported, they should not
> have as much acceptance as they have today?

The fact that ATA has over time been a less-stable standard can be seen
in many areas of its operation, and in its inconsistent interoperability
history.  Frankly, the matter should be self-evident.  

>> Also fewer IRQs consumed, and (usually)
> 
> The 3ware escalade only gets 1 IRQ, for a max of 8 drives, 8 IDE busses.

Done through IRQ-sharing.  Historically a bad idea; it can be done properly,
but that's a reservation I have about the hardware.

>> lower CPU load.
> 
> For promise or software raid, yes scsi will have a lower CPU load.  but
> for 3ware, that is debatable, as no benchmarks are available on this yet.

1.  Benchmarks are a common form of fiction, in any event.

2.  CPU loading might well hinge on how well they do bus-mastering.  
Lots of firms have claimed to do bus-mastering DMA well:  This is an
additional common form of fiction.  ;->

However, it should be noted that, in most deployments, Linux boxes have
basically idle CPUs.

> Your statement seems to say that a lower quality SCSI card, on SCSI disks
> on software RAID will work more reliably than IDE drives connected to a
> 3ware controller?

No, my statement does not say that.  That is a very perverse misreading.

> For me, the only reason to get SCSI technology is because of the "better
> quality or workmanship" that goes into the production of these products.

That's one good reason.  I've cited some others.

> But given today's breakneck one-upsmanship between drive producers (and
> software developers' disk requirements), a drive that's 2 years old is
> obsolete, and will be in need of replacement anyway, so why bother with
> 'technology that will last'?

Well, the host adapters' economic service life is much longer -- and you
can redeploy them to lesser roles over time.  Over the years, for example, I've
accumulated a bunch of Adaptec AHA-1542 series cards, which remain
workhorses for tape drives, CD-ROMs, CD-R drives, scanners, and ZIP drives, and
keep being recycled to new machines.   The AHA-2940 series, even the
earliest ones with 20 MB/s bus limits and 1-byte bus width, are still
recyclable over and over even with hard drives -- though the more modern
drives from the less-dusty garage boxes have to hang off 2940UW cards.

> Today's raid technology makes it simple to recover from a hard drive
> failure, so you just need to swap in a new inexpensive disk.  So
> indeed, why bother with SCSI at all?

Quality and reliability.  Fewer hassles.

My view; yours for a small royalty payment and waiver of
reverse-engineering rights.  ;->
