Having used Puppet from 2008 to the present, here are some remarks from
experience:

* It's not about scale, but about executable documentation that tells you
everything that's non-default about any system
* Continuing some task that your colleague was working on is far easier than
before
* Sharing configuration data with tech-savvy clients helps communication
* Our base images are tiny, just enough to run - everything else is a
service dependency that's installed through Puppet
* Version control is wonderful
* Fixing SSL or bash bugs on lots of nodes becomes easy
* When we're done fiddling with a new node, we redeploy the entire server
from scratch to make sure everything is properly deployed through Puppet.
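To make the "executable documentation" point concrete, here's a minimal
sketch of what such a manifest might look like (the class and template
names are illustrative, not from our actual setup):

```puppet
# Everything non-default about NTP on a node, readable as documentation
# and enforced by the agent on every run.
class profile::ntp {
  package { 'ntp':
    ensure => installed,
  }

  # The template name is an assumption for this example.
  file { '/etc/ntp.conf':
    ensure  => file,
    content => template('profile/ntp.conf.erb'),
    require => Package['ntp'],
  }

  service { 'ntp':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/ntp.conf'],
  }
}
```

Reading a class like that top to bottom tells you exactly what was changed
from stock, which is the documentation half of the bargain.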

Here are a few of the default things we manage. There's no way I'd ever
want to go back to manual labor:
* administrative user accounts, including SSH-keys
* networking: resolv.conf, iptables (default policy, roles, per-host), MTA,
NTP
* software: (custom) repository setup, base package set, update
notifications, integrity checking
* monitoring: SNMP, raid monitoring, collectd is on the wish list
* logging: syslog, log rotation
* access: SSH daemon (key auth only) and key distribution
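For the account and SSH access items above, a sketch (the user name and key
are placeholders, and the `file_line` resource assumes the puppetlabs-stdlib
module is available):

```puppet
# An administrative account and its public key, managed together.
user { 'admin':
  ensure     => present,
  groups     => ['wheel'],
  managehome => true,
}

ssh_authorized_key { 'admin@workstation':
  ensure  => present,
  user    => 'admin',
  type    => 'ssh-rsa',
  key     => 'AAAAB3Nza...',  # public key material, elided here
  require => User['admin'],
}

# Key-only authentication: flip one line in sshd_config and bounce sshd.
file_line { 'sshd_no_passwords':
  path   => '/etc/ssh/sshd_config',
  line   => 'PasswordAuthentication no',
  match  => '^#?PasswordAuthentication',
  notify => Service['sshd'],
}

service { 'sshd':
  ensure => running,
  enable => true,
}
```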

On top of that, there's the LAMP stack and other services, also managed
with Puppet, but even just keeping the items above up to date by hand would
be a pain. Having that agent keeping everything in check every 30 minutes
(and the Puppet dashboard would show if a run hung) is a very comforting
thought for me.
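That 30-minute cadence is the agent's stock run interval; in puppet.conf
it corresponds to settings roughly like these:

```ini
# /etc/puppet/puppet.conf (agent side)
[agent]
runinterval = 1800   # check in every 30 minutes (also the default)
report      = true   # send run reports, which is what the dashboard shows
```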

Contact me off list if you have any questions.

Best,

Hans




On Thu, Nov 13, 2014 at 4:17 PM, John Stoffel <j...@stoffel.org> wrote:

> >>>>> "Edward" == Edward Ned Harvey (lopser) <lop...@nedharvey.com>
> writes:
>
> >> From: Edmund White [mailto:ewwh...@mac.com]
> >>
> >> Try Blueprint, then - http://devstructure.com/blueprint/
>
> Edward> That.  Sounds.  Awesome.  Will try, thanks for the suggestion.
>
> This has been an awesome suggestion, and a discussion I've been
> following with a lot of enjoyment; I hope to actually get off my ass
> and start deploying some sort of CM.
>
> I have compute clusters with identical systems which I'd like to bring
> into cohesion with each other, but the learning curve of cfengine2 and
> cfengine3 has always turned me off, even though I keep making
> half-hearted efforts to deploy it.
>
> The other holdback is legacy systems.  Lots of them.  Old crufty
> Solaris 5.8 systems, slightly better 5.9 and now a group of Solaris
> 5.10 Sparc and x86_64 systems, along with 5.11 starting to appear.
> Sigh...
>
> The other big issue has been just getting the rest of the team to
> agree to use this setup.  No sense in doing all this work if I'm not
> going to get anyone else to use it as well.  Which is a management
> issue really, but the biggest stumbling block of all.
>
> So using chef/puppet/salt/ansible/blueprint all fall down on the
> legacy support.  But maybe that's just me being too perfectionist
> here.  But I do want to automate even these Sparc systems, esp the
> standalone Oracle servers which need accounts sync'd between them,
> though not all accounts on all systems.
>
> A pain.  And the one which cfengine, with its C base, seems the best
> way to solve...
>
> So please keep up this discussion, and please keep posting solutions,
> pointers, and maybe even recipes for how some of this could be solved.
>
> John
> _______________________________________________
> Tech mailing list
> Tech@lists.lopsa.org
> https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
> This list provided by the League of Professional System Administrators
>  http://lopsa.org/
>