On Mon, 19 Mar 2012, Miles Fidelman wrote:
New list member here, seems like this might be the place to pose this
question:
I'm getting ready to rebuild a small cluster - mostly used for development,
but likely to move into a production role - and thinking about being prepared
if we need to scale. As a result, I'm looking at ways to move from a rather
ad hoc management approach - configuration notes and task checklists in Word
documents, a spreadsheet of IP addresses and DNS records, bunches of shell
scripts and cron jobs - to something a bit more organized and scalable.
My first thought is to start with a simple database for config info and to
move the scripts under configuration control, maybe adding an orchestration
tool like Rundeck (though call me old-fashioned enough to think about writing
some wrapper code in Tcl).
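(For what it's worth, the "simple database" can start as nothing more than a
flat text file that both humans and scripts can read - a minimal sketch,
assuming made-up hostnames, IPs, and roles:)

```shell
#!/bin/sh
# Sketch: replace the IP/DNS spreadsheet with a plain-text "database"
# that other scripts can query. All hostnames, IPs, and roles below
# are hypothetical examples.

cat > /tmp/hosts.db <<'EOF'
# hostname  ip            role
web01       10.0.1.11     web
web02       10.0.1.12     web
db01        10.0.2.21     database
EOF

# Look up the IP of one host:
awk '$1 == "db01" {print $2}' /tmp/hosts.db

# List every host in a given role (handy for loops in other scripts):
awk '$3 == "web" {print $1}' /tmp/hosts.db
```

(The comment line starting with "#" never matches a real hostname, so it can
stay in the file as documentation.)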
Anyway, when I asked for tool suggestions on one of the devops lists, which I
figured was the place to ask about tools, about all I got were religious
pontifications about Puppet or Chef being the "one true way" - which seems a
bit out of line with my experience of real-world operations, both in the
small and in the large (I've been around both, though more as an architect
for large systems than as an operator).
Which leads me to pose two questions:
1. What is the state of the practice right now? How much is system
administration still a world of traditional approaches, and how much has the
new generation of devops tools caught on outside the core of folks who've
drunk the koolaid? (I'm really trying to get calibrated in reality here.)
2. More specifically: what are people using to manage accumulated scripts
and semi-manual procedures?
You name it, it's probably in use in a very large environment.
At $work we have hundreds of production systems serving over ten million
users (real users, not freebie logins) where the majority of the real
configuration is done by logging into the server and running vi on a file. A
couple of years ago I got a report from 302 production systems listing what
packages were installed; I counted the number of packages on each system and
came up with 149 different package counts among the systems (and this was
with ~100 of the systems being identical).
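(That kind of audit is cheap to reproduce. A minimal sketch, assuming you've
already collected one "hostname package-count" line per system - e.g. from
"rpm -qa | wc -l" on each box - with made-up sample data standing in for the
real report:)

```shell
#!/bin/sh
# Sketch: how many distinct installed-package counts exist across hosts?
# Assumes a report file with one "hostname count" line per system,
# e.g. gathered by running "rpm -qa | wc -l" (or "dpkg -l | grep -c '^ii'")
# on each box. The hostnames and counts below are hypothetical.

cat > /tmp/pkg-report.txt <<'EOF'
web01 412
web02 412
db01 388
db02 391
EOF

# Number of distinct package counts among the systems
# (anything above 1 means the boxes have drifted apart):
awk '{print $2}' /tmp/pkg-report.txt | sort -n | uniq | wc -l
```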
We have other places where there are hundreds of systems configured entirely
through automated tools, where we are utterly confident that all of the
systems are running identical software, even though these are organized into
>100 different farms, each with different configurations and connected to
different networks (in-house tools, dating back 10+ years, built in an
environment that cfengine was not a valid choice for). And we have other
areas where the company has spent millions on commercial automation tools.
And this is all in one company.
I think the one constant is that nobody is completely satisfied with what
they have, and everyone knows things they would like to do to improve matters
(either that, or they are completely ignorant of automation and are wishing
that things were easier to configure, which boils down to the same thing).
I would say that the key is not to try to do everything at once. I would go
through a progression along the following lines:
1. Start off by managing installed software and patches.
2. Move from there to making sure configs are synced between appropriate
boxes (especially primary and backup, where you have them).
3. Then get user management under control.
4. Then work to publish/track your config files from a central master
(even if they are manually edited on that central master).
5. Then work to eliminate the manual editing.
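(Step 2 doesn't need a framework to get started - a checksum comparison is
enough to detect drift. A minimal sketch, with hypothetical paths and a
simulated pair of hosts; in practice the "backup" half would run over ssh,
e.g. "ssh backup01 md5sum /etc/haproxy/haproxy.cfg":)

```shell
#!/bin/sh
# Sketch: detect config drift between a primary and its backup by
# comparing the files byte for byte. Directories here simulate the two
# hosts; file names and contents are made-up examples.

mkdir -p /tmp/primary /tmp/backup
printf 'maxconn 4096\n' > /tmp/primary/app.cfg
printf 'maxconn 2048\n' > /tmp/backup/app.cfg   # deliberately drifted

for f in app.cfg; do
    if cmp -s "/tmp/primary/$f" "/tmp/backup/$f"; then
        echo "in sync:  $f"
    else
        echo "DRIFTED:  $f"
    fi
done
```

(Run from cron, a loop like this over your real config list gives you a daily
drift report long before you have any central master in place.)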
David Lang
_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
http://lopsa.org/