On Fri, Oct 30, 1998 at 10:47:30PM -0500, Bob Munck wrote:
> It never occurred to me that
> the program would still be in use 32 years later, and it
> wasn't, but it made it to 25 years.
Nor should it have occurred to you: running software that's
that old is a sign of gross incompetence and negligence,
and I'd instantly fire anyone in my organization who
allowed it to happen.
Folks, computers are no longer capital investments. They are
consumables, with a short useful lifespan, beyond which they
cease to save/make you money and start instead costing you money.
The same is true of software: look at the incredible sums
of money being wasted band-aiding ancient code that should
have been replaced years ago.
The problem is that bean-counters don't get it. They seem
to think of hardware/software as "investments", which is about
as stupid as thinking of a car as an "investment".
Folks, I want to change your thinking about how you architect systems,
whether they be hardware or software or both. The change is this:
Design them to be thrown away.
Instead of creating monolithic systems -- which cost so
much that you *can't* throw them away, and which you will
no doubt have to continue to spend money on ad infinitum
(see Y2K), design modular systems made up of a *lot* of
low-cost components. This allows you to replace any or
all of the components whenever they become flaky, or
outdated, or obsolete, or too expensive to support --
and keep the system working without missing a beat.
By continuously throwing pieces away and replacing them,
the system never gets "old", per se, never has to be replaced
en masse, and never winds up being a technological dinosaur.
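To make the idea concrete, here's a minimal sketch (Python, purely
illustrative -- the names are mine, not anybody's product): hide each
component behind a small contract, and you can throw the component away
and drop in a replacement without the rest of the system ever noticing.

```python
class Store:
    """The contract the rest of the system codes against."""
    def put(self, key, value):
        raise NotImplementedError
    def get(self, key):
        raise NotImplementedError

class CheapDiskStore(Store):
    """Today's jellybean component: a plain dict under the hood."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class ReplacementStore(Store):
    """Next year's replacement: different internals (a list of pairs),
    same contract -- callers never notice the swap."""
    def __init__(self):
        self._pairs = []
    def put(self, key, value):
        self._pairs.append((key, value))
    def get(self, key):
        # Scan most-recent-first so later puts win, like the dict version.
        for k, v in reversed(self._pairs):
            if k == key:
                return v
        return None

def application(store):
    # The application knows only the Store contract, never the vendor.
    store.put("greeting", "hello")
    return store.get("greeting")
```

Either component plugs into application() unchanged; that's the whole
trick. The cheap part ages, flakes out, or gets obsoleted -- you replace
it; the system keeps running.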
That means:
- writing portable software;
- writing simple applications, not massive over-complex ones;
- using jellybean hardware whenever possible, and avoiding anything
  that needs to be in a "computer room";
- never tying yourself to a single vendor for anything;
- assuming that the hardware will be fast enough, because it probably
  will be;
- learning how to use somebody else's code (the freeware model) to get
  90% of the job done with 10% of the work.
Sure, there are problems that can't be solved this way -- not
many, but there are some. Weather prediction still requires
a supercomputer, for example. But the everyday problems --
web servers and web applications and the like -- are very
susceptible to this methodology.
It astonishes me (and maybe it shouldn't; perhaps I'm not
cynical enough yet) that the masses of short-sighted people
fretting over Y2K problems haven't figured this out yet.
Instead of flat-out replacing their ancient systems --
which has a laundry list of benefits far beyond just
addressing the Y2K issue -- they've elected to keep
them running, even though it's costing them a fortune,
and WILL cost them a fortune. "It's too expensive",
some of them say.
Oh, really?
REALLY?
I don't think so. I think it might possibly be too
expensive in the next quarter, or perhaps even in the
next fiscal year, and I know that bean-counters are
genetically incapable of seeing any further, but it's
the cheaper and better approach for the long run.
I now (momentarily) relinquish the soapbox. ;-)
---Rsk
Rich Kulawiec
[EMAIL PROTECTED]