Hi Jim,

Good to have a refreshing debate on this. Thanks for your comments on my
comments :-)

[EMAIL PROTECTED] wrote:

> >>>>> "peo" == Per-Ola Mard <[EMAIL PROTECTED]> writes:
>
> peo> Hi Yuji, Just a general note about disks that I find myself suggesting
> peo> to my customers all the time; the next server you install, make sure you put
> peo> *everything* outside the unix-server cabinet.
>
> I have to disagree...  No slam intended on HDS; I believe they make excellent
> raid systems.

Thanks! I can tell you are a nice guy :-)

>
>
> [...]
>
> peo> There is a lot to gain from doing it this way. It does
> peo> however bring up a couple of disk-availability and management issues
> peo> that the disk subsystem needs to address; like once installed, *never*
> peo> taken down, for instance...
>
> I agree that there is a lot to be gained.  It definitely makes upgrading a
> cpu easy.  However, one of the foundations for why AFS and DFS are easy to
> manage is the ability for servers to come and go without impacting the user
> community.  To have multiple servers all depend on a single (complex)
> subsystem is to give up some of that administrative freedom.

I agree. Then again, both AFS and DCE/DFS are quite complex, as any computer
system is. Getting the design and structure right, in this case of the disk
subsystem, is mandatory. The same holds true for a distributed sibling like
AFS/DCE.
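
To make that a bit more concrete for anyone reading along: the nice thing
with AFS is that you can usually drain a file server before taking it down
by moving its volumes somewhere else, and the clients never notice. A rough
sketch from memory, so check the exact syntax against your own cell; the
volume, server and partition names below are just examples:

    # Move a volume off fs1 so the machine can be serviced
    vos move -id user.peo -fromserver fs1 -frompartition /vicepa \
             -toserver fs2 -topartition /vicepb
    # Clients keep working; the volume location database is updated for them
    vos examine user.peo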

>
>
> I made the choice (what I now call a mistake) once to configure multiple
> fileservers sharing one of these fancy multi-host standalone raid disk
> systems.   It worked just fine until the controller had a problem...  at which
> point both fileservers died.  [ I don't remember the details of the actual
> problem, and I won't name the vendor.  (It wasn't HDS.) However, it was the
> sort of thing that could happen to almost any hardware raid box -- as I
> recall it was a firmware bug. ]

Thanks for making the remark about HDS, I appreciate that. I am not about to
tell you that HDS is the only company on the planet without bugs, but I feel
I need to make another point about the structure and design of a disk
subsystem of this kind. It needs one mandatory design goal: availability.
Without availability the performance figures are worth nothing. In the case
of HDS we claim that *everything* is replaceable online, both hardware
(disks/controllers/cache/ports) and software (firmware upgrade/downgrade).
Once installed, never taken down. This is the base for the design. The only
group of people able to tell whether we succeed with this is our customers,
I guess...

>
>
> peo> Having a Unix background, for me this is a new way to deal with disk and
> peo> it took quite some time for me to get used to the thought of building
> peo> systems this way. Data was ok, but I wanted to keep the operating system
> peo> with all its patches and configurations locally, just in case... Why?
> peo> Really? It's just a disk on a SCSI line in both cases. A real disk in
> peo> one case and a virtual disk in the second case.
>
> Yes... and to implement this virtual disk there is special dedicated hardware
> and custom software.  It's not as simple as a real disk.  It may or may not
> be more reliable than a real disk.. it depends on the implementation.

Agreed! See above.
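
To make the "just a disk on a SCSI line" point concrete: from the host's
point of view the two really are used the same way. A hypothetical Solaris
example (the device names are made up for illustration):

    # Internal disk and array LUN are both plain block devices to the host
    mount /dev/dsk/c0t0d0s6 /export/local     # internal SCSI disk
    mount /dev/dsk/c2t5d0s6 /export/array     # LUN presented by the array
    # The /etc/vfstab entry for the array LUN looks just like a local one:
    # /dev/dsk/c2t5d0s6  /dev/rdsk/c2t5d0s6  /export/array  ufs  2  yes  -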

>
>
> [...]
>
> peo> Basically this is the way mainframes have been built for ages. The
> peo> concept of a local disk is not even on the map of a traditional
> peo> mainframe; all disks are "outside". (Mind you, I don't think it's
> peo> possible to buy e.g. a Sun system without at least one internal
> peo> disk.)  By doing it this way, the unix admin is able to make use
> peo> of the mainframe disk structure/availability standard without losing
> peo> the challenge and benefits of distributed systems that the mainframe
> peo> lacks.
>
> One of the problems with Mainframes was/is the fact that as you pile more
> eggs into that basket, it makes it harder and harder to update low-level
> subsystems -- such as the disk system.  It also forces outages to be
> scheduled to do these updates/upgrades.  In the distributed case, it can be
> arranged for other servers to take over.

Agreed! I do sometimes envy the mainframers the simplicity of having only one
node to worry about (OK, there are clusters of mainframes too, I know...).
Distributed systems tend to get very complex, and they all share the
dependency on networks. The need for redundancy stands true regardless of
whether you have a distributed system or a single-disk-based setup. The
challenge is to stay online no matter what! Right?
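
One way to take the single-controller worry out of the picture, whatever the
setup, is host-based mirroring across two separate subsystems. A rough
Solstice DiskSuite sketch, just to show the idea (the metadevice and disk
names are made up; adapt them to your own config):

    # One submirror on each array (c2 and c3 are separate subsystems)
    metainit d11 1 1 c2t0d0s0
    metainit d12 1 1 c3t0d0s0
    # Build the mirror from the first submirror, then attach the second
    metainit d10 -m d11
    metattach d10 d12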

> When do you get a chance to update
> the microcode in the (single) raid subsystem?

See my comments above. Online.

> Mainframes measure change in
> years, partially for this reason.

Could you clarify this point? Sorry, what do you mean by "measure change in years"?

Take care,
  /peo

