On Wed, 19 Mar 2008, Nicolas Williams wrote:

> On Wed, Mar 19, 2008 at 02:40:28PM -0700, Darren Reed wrote:
> > My personal theory on why is simple:
> > SMF was developed by developers and not system admins.
>
> As a former sysadmin I believe what's missing is remote access.

As a current sysadmin, I don't.
I'm not interested in remote access to the configuration database on my
machines.  I don't want to reach out and touch my machines (it doesn't
scale), and I don't want others to be able to do so, for what should be
obvious security reasons (sorry, Nico, but Sun doesn't have such a great
track record in this area).

So remote access would be another example of creating a new problem where
I didn't have one before, in the name of solving a problem I don't
actually have.


The problem I actually _have_ is maintaining a distributed computing
environment consisting of many machines running a variety of platforms,
and keeping them all up-to-date with respect to both vendor-provided and
local software, configuration, and policy, modulo explicit variations for
a specific machine or group of machines.

The solution I have involves every machine of a given platform being
installed in exactly the same way (*) from exactly the same OS media, and
then having every machine pull regular updates from a secure, replicated
repository, including both software and configuration.  When a command
has to be invoked for a change to take effect, it is invoked
automatically.  When configuration has to be translated from a common
form into some OS-specific form, that happens too -- usually on the
client, at the same time the centrally-distributed file is merged with
local policy.
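
To make that concrete, the per-file client step looks conceptually
something like the sketch below.  This is illustrative only -- the
repository URL, the paths, and the ntp example are all made up, not our
actual tooling:

    #!/usr/bin/env python3
    # Simplified sketch of the client-side pull step.  The repository
    # URL, file names, and the ntp example are hypothetical.
    import filecmp
    import os
    import shutil
    import subprocess
    import urllib.request

    REPO   = "https://config.example.com/common"  # replicated repository
    COMMON = "/var/cache/site/ntp.conf.common"    # centrally-distributed copy
    LOCAL  = "/etc/site/ntp.conf.local"           # local policy for this machine
    TARGET = "/etc/inet/ntp.conf"                 # OS-specific destination

    def pull():
        # Fetch the common file from the secure, replicated repository.
        with urllib.request.urlopen(REPO + "/ntp.conf") as src, \
                open(COMMON, "wb") as dst:
            shutil.copyfileobj(src, dst)

    def merge():
        # Merge the centrally-distributed file with local policy.  Here
        # the "translation" is plain concatenation; this is where any
        # OS-specific rewriting would happen.
        staged = TARGET + ".new"
        with open(staged, "w") as out:
            for path in (COMMON, LOCAL):
                with open(path) as f:
                    out.write(f.read())
        return staged

    def apply_if_changed(staged):
        # Invoke the "notice the change" command only when the merged
        # result actually differs from what is installed.
        if os.path.exists(TARGET) and filecmp.cmp(staged, TARGET,
                                                  shallow=False):
            os.unlink(staged)
            return
        shutil.move(staged, TARGET)
        subprocess.run(["svcadm", "restart", "ntp"], check=False)

    pull()
    apply_if_changed(merge())

The point is the shape: pull, merge, compare, and only then poke the
service.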

This is not a solution we developed in the last 5 minutes.  It's one we've
been using in production for somewhere around 20 years.

FWIW, I used a similar approach to manage a much smaller shop of MS-DOS
and early Windows systems in the early 1990s.  Like my current site, it
featured a subscription model, where different machines could be
"subscribed" to different software packages and would automatically
receive updates as needed.  This process was managed by a combination of
batch scripts running on the clients (mostly at boot, IIRC) and some tools
running on the distribution server, which was a UNIX box.  The registry
killed it, because it was no longer possible to install an operating
system, a bunch of software packages, and configuration by dropping down a
bunch of files.  I left around that time, and they fell back to a
labor-intensive process of manually maintaining each machine.  Today,
Windows
comes with a fairly powerful set of tools for managing this problem, also
based primarily on a pull model.  But that didn't help us then, and it
wouldn't help today if we tried to pretend that "UNIX" and "Windows"
aren't two completely separate universes.

When we have to write something that compares the common configuration to
some bizarre platform-specific database and applies updates to make them
look the same, we do that, too.  We did it for lp and again for CUPS, when
really, just obeying /etc/printcap would have been fine for us.  But every
time we do such a thing, it becomes a pain for us, because first we have
to spend time developing it, and then we have to maintain it.
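
The shape of such a thing is always the same.  For the CUPS case it is
conceptually along these lines (a sketch only; the printcap parsing and
the lpadmin invocations are simplified, and this is not the code we
actually run):

    # Sketch of the compare-and-reconcile pattern for CUPS.  The
    # printcap parsing and device URIs below are invented; real
    # printcap entries are messier.
    import subprocess

    def desired(path="/etc/printcap"):
        # Assume one "name:device-uri" pair per non-comment line.
        wanted = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    name, _, uri = line.partition(":")
                    wanted[name.split("|")[0]] = uri
        return wanted

    def current():
        # lpstat -p prints one "printer NAME is idle. ..." line per queue.
        out = subprocess.run(["lpstat", "-p"], capture_output=True,
                             text=True).stdout
        return {line.split()[1] for line in out.splitlines()
                if line.startswith("printer")}

    def reconcile():
        wanted, have = desired(), current()
        for name, uri in wanted.items():
            if name not in have:
                subprocess.run(["lpadmin", "-p", name, "-v", uri, "-E"])
        for name in have - wanted.keys():
            subprocess.run(["lpadmin", "-x", name])

    reconcile()

Every platform-specific database we have to target means another one of
these to write and keep working.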


-- Jeff

