On 8/24/07, Mika Borner <mika.borner at bluewin.ch> wrote:
> Let's say you have set up a couple of hundred servers.
> After several months you want to change some configuration
> files on a range of servers.  I guess you don't do it with
> a pkgrm and then run pkgadd with a new responsefile?
> That's because configuration does not belong to the
> package.

No one would pkgrm + pkgadd to reconfigure.  People modify
the configuration files, run the admin command, and so on.
Pretty much every package that I have seen that asks
questions via a request script does so as a means to ensure
that the software "works" after installation.  What results
is normally a minimal configuration that is inadequate for
standard administration.
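
To make that concrete, the kind of change you end up making
afterward looks something like the sketch below.  The config
path, option name, and "myapp" service are made up on my
part; svcadm is only there as the Solaris way to bounce the
service:

    # Flip one option in a config file and restart the service.
    # CONF, the option name, and the "myapp" SMF instance are
    # hypothetical; only svcadm itself is a real command.
    import re
    import subprocess

    CONF = "/etc/myapp/myapp.conf"

    def set_option(path, key, value):
        with open(path) as f:
            text = f.read()
        # Rewrite an existing "key = ..." line, or append one.
        new, count = re.subn(r"(?m)^%s\s*=.*$" % re.escape(key),
                             "%s = %s" % (key, value), text)
        if count == 0:
            new = new.rstrip("\n") + "\n%s = %s\n" % (key, value)
        with open(path, "w") as f:
            f.write(new)

    set_option(CONF, "max_connections", "512")
    subprocess.check_call(["svcadm", "restart", "myapp"])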

> What is needed is an integration with something like
> cfengine, puppet or bcfg2. Something that covers a server's
> whole lifecycle, not only the installation, and scales well
> over a couple of hundred boxes.

Yeah, but which one?  And which one will Red Hat, Novell,
HP, IBM, and Microsoft all agree to use?  In many shops a
tool that doesn't at least pretend to cover every OS isn't
good enough, so a third-party product is brought in to
manage configurations across a wide variety of systems.

> In fact, in my life as a sysadmin, the installation part
> of a server is kid's play. Keeping OS configuration across
> the whole server farm under control is what consumes huge
> amounts of time.

Installation is critical to ensuring that you have a solid,
consistent base.  Once you establish that, it is so much
easier to use whatever tool is chosen.  The next problem to
tackle is the reality that the management tool will have
bugs or lack features, and the servers you need to manage
will similarly have bugs or lack features.  The problem is
compounded when
business needs keep you on older OS releases, severely
outdated patches, or unruly application software.

> How do you want to manage your datacenter?

I've used homegrown tools and a variety of others such as
rdist, BladeLogic, and Opsware.  I've considered cfengine.

I'm not so sure that the tool used is the most important
part of ensuring success.  The most stable environment that
I've managed used homegrown tools that were little more than
a convenient way to say "run this script on these machines"
(a rough sketch of the idea is below).  I've seen the
commercial tools take man-years of effort, with significant
engagement with support, professional services, and
engineering, to get them to work to a fraction of their
anticipated glory.
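
Something along these lines captures the idea.  Treat it as
a sketch only; the host list format, the /tmp staging path,
and the assumption of passwordless ssh are mine for
illustration, not a description of the actual tool:

    #!/usr/bin/env python
    # Sketch of "run this script on these machines": copy a script
    # to each host over ssh and run it, reporting which hosts failed.
    # Assumes ssh keys are set up for non-interactive login.
    import subprocess
    import sys

    def run_on_hosts(hostfile, script):
        failures = []
        with open(hostfile) as f:
            hosts = [line.strip() for line in f if line.strip()]
        for host in hosts:
            try:
                subprocess.check_call(["scp", script, "%s:/tmp/job.sh" % host])
            except subprocess.CalledProcessError:
                failures.append(host)
                continue
            if subprocess.call(["ssh", host, "sh", "/tmp/job.sh"]) != 0:
                failures.append(host)
        return failures

    if __name__ == "__main__":
        failed = run_on_hosts(sys.argv[1], sys.argv[2])
        if failed:
            print("failed on: " + ", ".join(failed))
            sys.exit(1)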

Why did the simple homegrown tool work better?  I would
suggest that it boiled down to how much control the
sysadmins had over specifying the hardware and software
used, the definition of standards, the adherence to those
standards, and ensuring that everyone who had their fingers
in the pot understood all of the above.  Part of the success
was undoubtedly the more modest expectations placed on the
tool.  The more complicated the tool is, the less likely it
is that the entire team that needs to know how to use it
will be able to make good use of it.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
