On Thu, Sep 24, 1998 at 03:44:42PM -0500, Peter J. Schoenster wrote:
> When do you buy a DEC alpha or just buy a couple of pentium pros with linux?

It *is* hard to tell.  But I can tell you that I'm not spec'ing Alphas
any more because it looks to me like a dead architecture -- and because
used Sparc hardware is getting really cheap.  Cheap enough, in fact,
that I've sometimes found it cost-effective to buy two or whatever-it-is
and then forget to pay for maintenance contracts on them.  (As long as
one is configured as a hot spare and kept updated, it's cheaper and faster
to just switch over to the backup and then scavenge the original for
spare parts, then donate what's left and take the write-off.)

I don't always use Linux on Intel, btw: I have a decided preference for
BSDI Unix (which is quite similar) in those cases when my customers are
going to need ongoing support and/or don't have a Unix/Linux person
on staff.  (BSDI's support is outrageously good -- I've gotten fixes
to critical problems via email *on the weekend* from them.)

But I do have a very strong preference for distributed solutions
over monolithic ones: I frequently tell people that -- with few
exceptions -- nobody should have a "computer room" anymore, because
that's a tell-tale sign that their system & network architecture
is stuck in the past.  I have developed this preference over years
of working with parallel and distributed systems, and observing
their inherent fault-tolerance, load-balancing, and scalability features.

I'm not talking about client-server here -- that's been dead and gone
for most of the decade, despite the fact that the herd is still
desperately pursuing it.  I'm talking about truly distributed systems
and networks where there are no pure clients and no pure servers.
I'm talking about recognizing that hardware is dirt cheap, freeware
is ubiquitous, powerful and in most cases, far more reliable than
the commercial equivalent, and that clueful administrator/programmer time
is hideously expensive as well as scarce.  So if there's any problem
that looks like it can be solved by lobbing hardware at it,
that's *probably* the right way to approach it.

(Example: Same company I talked about earlier had a rather nasty backup
problem with their large servers.  They solved it by buying an
expensive proprietary software package that ran on
Sun/SGI/Digital/etc. Unix boxes...and then tying up just about the
equivalent of a full-time person @ $90K/year babysitting it when it
whined.  (As well as clogging the heck out of the network whenever backups
were running.)  It would have been much cheaper to simply buy a high-capacity
tape drive for each box, use dump/ufsdump on each machine, and have one
of the operator staff shuffle the tapes every day.  It would also have
allowed them to run all their backups in parallel instead of serially.
It also scales -- if one drive isn't enough, plug in a second one.  Tape
drives are far cheaper than s/w and personnel.)
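The per-machine approach above is simple enough to sketch in a few lines
of shell.  This is just an illustration -- the hostnames and the dump
arguments are invented, and in real life you'd use rsh (or whatever remote
shell you trust) to reach each box -- but it shows the parallel structure:

```shell
# Hypothetical sketch: every server dumps to its OWN local tape drive,
# and all the dumps run at the same time.  The remote-command program
# (rsh in practice) is passed in as the first argument; hostnames and
# the filesystem being dumped are made up for illustration.
run_dumps() {
    rcmd=$1; shift
    for host in "$@"; do
        # each host writes a level-0 dump to its local drive
        "$rcmd" "$host" "ufsdump 0uf /dev/rmt/0 /export" &
    done
    wait    # all dumps proceed in parallel; nothing crosses the network
}
```

Invoked as `run_dumps rsh srv1 srv2 srv3`, the slowest machine determines
the total backup window, instead of the sum of all of them.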

So, to get back to your question, I tend to architect solutions
that rely on lots of cheap little expendable boxes.  I don't think
of computer systems as assets: they are disposable commodities, to
be used up, recycled, donated, and otherwise shuffled off.  It's
my contention that this is a highly cost-effective approach: the expense
of dealing with "legacy" systems can quickly grow to be far greater
than replacing them lock, stock and barrel -- *and* it ties one to
the past, which is a surefire way to set oneself up for future problems.
IMHO, anything more than three years old is a target for replacement,
elimination or disconnection.

One of the inspirations for this approach is a little article that
I'm going to quote here:

        From: Nathaniel Borenstein <[EMAIL PROTECTED]>
        Date: Thu, 21 Jun 90 08:34:25 -0400 (EDT)
        Newsgroups: comp.risks
        Subject: "Artificial Life" out of control

        The latest issue of the Whole Earth Review has an article
        ("Perpetual Novelty") about the growing "artificial life"
        movement, which works to create computer simulations of
        artificial beings, with rather far-fetched and grandiose
        long-term goals.  I was particularly struck by the discussion
        of the idea that some of these people have to release lots of
        relatively dumb robots and simply let them evolve.  Talking
        about one researcher's goals, the article says:

        He wants to flood the world (and beyond) with inexpensive,
        small, ubiquitous thinking things.  He's been making robots
        that weigh less than 10 pounds.  The six-legged walker weighs
        only 3.6 pounds.  It's constructed of model-car parts.  In
        three years, he'd like to have a 1mm (pencil tip-size) robot.
        He has plans to invade the moon with a fleet of shoe-box-size
        robots that can be launched from throw-away rockets.  It's the
        ant strategy:  send an army of dispensable, limited agents
        coordinated on a task, and set them loose.  Some will die, most
        will work, something will get done.  In the time it takes to
        argue about one big sucker, he can have his invasion built and
        delivered.  The motto:

        "Fast, Cheap, and Out of Control."


Consider this in conjunction with the discussion we've had about
database vs. memory-resident flat files.  Or the next time someone
says the phrase "three-tier client-server architecture".

> But doing that with the DNS is beyond me.  

I don't think so -- it's not that hard, and I'll bet you really can do it.
Take a look at this URL, which has a quickie summary of how to pull it off:

        http://mayor.dia.fi.upm.es/~alopez/solaris/sun-managers1/0358.html
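The core trick is just a lookup: map the network the query came from to
the nearest server.  Here's a tiny sketch of that mapping as a shell
function -- the network prefixes and hostnames are entirely invented, and
a real deployment would do this inside the nameserver itself rather than
in a script -- but it makes the idea concrete:

```shell
# Hypothetical sketch: pick a server for www.mydomain.com based on the
# client's IP address.  All prefixes and hostnames below are invented
# for illustration; real address blocks would come from your own data.
geo_server() {
    case $1 in
        193.*|194.*) echo eu1.mydomain.com ;;   # pretend-European blocks
        200.*)       echo sa1.mydomain.com ;;   # pretend-South American block
        202.*)       echo asia1.mydomain.com ;; # pretend-Asian block
        *)           echo us1.mydomain.com ;;   # default: U.S. server
    esac
}
```

One name, several machines; the client never has to know.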

> I was wondering.  I have one domain:
> 
> www.mydomain.com
> 
> and I want to put  that domain in strategic locations:  one server in 
> Europe, one in SA, one in Asia and a couple in the U.S.  I don't want 
> different domain names.  One domain name.  Your type in the url and 
> if you are coming from Europe then you go the European server, if 
> from SA to the SA server etc.  Is that possible?  The web developers 
> upload files to the "core server" and I have some daemon running on 
> the "core server" which detects new files and transfers them to the 
> other servers to maintain synchronicity.  Is there any explanation 
> anywhere about how to do this or why it isn't possible or what is the 
> next best solution?

Absolutely this is possible -- the CPAN archive (for Perl-related
material) works precisely this way.  Check out

        http://www.perl.com/CPAN

for a look at it, and an explanation of how they did it.  I've also
done the same thing in a slightly different way for a very, very
large web site where the flow looks something like this:


Web Developer #1 -----
                     |
Web Developer #2 -----                                  --------- web server 1
                     |   web staging area               |
(etc.) -----------------> (QC is done, anything ----->----------- web server 2
                     |   sensitive is scrutinized)      |
Web Author #1 --------                                  --------- web server 3
                     |
Web Author #2 --------


The flow of work from the developers/authors to the staging area is
manual -- those people working on the site 'submit' their work whenever
they think the time is right.  The flow out to the web servers is automated
and keeps them in lockstep with the latest "gold" copy that's kept internally.
If one of the external web servers fries itself, it can be replaced in
a couple of hours by loading the OS on a new piece of hardware and then
pushing the gold copy of the web site out to it.  (And the gold copy
gets backed up in its own way, internally.)
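The automated half of that flow is nothing exotic.  Here's a rough sketch
of the push step -- directory names are invented, and a real setup would
mirror over the network with rsync or rdist instead of a local copy -- but
the shape is the same: blow away the stale copy, lay down the gold one,
do all the servers at once:

```shell
# Hypothetical sketch of pushing the "gold" copy of the site out to each
# web server.  Destinations here are plain directories for illustration;
# in practice each would be a remote host reached via rsync or rdist.
push_gold() {
    src=$1; shift
    for dest in "$@"; do
        # crude mirror: remove the old copy, then lay down the gold copy
        ( rm -rf "$dest"; cp -R "$src" "$dest" ) &
    done
    wait    # all servers are updated in parallel, in lockstep
}
```

Run from cron every few minutes, this is also your disaster recovery:
a fried server is just one more destination to push to.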

---Rsk
Rich Kulawiec
[EMAIL PROTECTED]