On Monday, June 16, 2014 2:33:12 PM UTC-5, Stephen Morton wrote:
>
> I've got some newbie puppet questions.
> My team has a tremendous amount of linux/computer knowledge, but we're new 
> to Puppet. 
> We recently started using puppet to manage some 100 servers. Their configs 
> are all pretty similar with some small changes.
>
> ----
> History
>
> Prior to Puppet, we already had a management system that involved having 
> config files under revision control and the config file repo checked out on 
> every server and the repo config files symlinked into the appropriate place 
> in the filesystem. Updating the repo would update these files. This was 
> mostly just great, with the following limitations:
>
>  
>    - If the symlink got broken, it didn't work. 
>    - Some files require very specific ownership, or are required not to 
>    be symlinks (e.g. /etc/sudoers, and /etc/vsftpd/ files I think) 
>    - Updating a daemon's config file does not mean that the daemon is 
>    restarted. e.g. updating /etc/httpd/conf/httpd.conf does not do a "service 
>    httpd reload" 
>    - You can't add a new symlink.
>    - All files must be in revision control to link to. Some 
>    security-sensitive files we want to only be available to some servers and 
>    something like puppet that can send files over the network is a good 
>    solution to this.
>    
> ----
>
> Puppet to the rescue?
>
> So we've tried a very conservative Puppet implementation. We've left our 
> existing infrastructure and we just add new rules in Puppet. So far, we 
> have a single site.pp file and only a dozen or so rules. But already we're 
> seeing problems.
>
>    1. Puppet is good for configuring dynamic stuff that changes. But it 
>    seems silly to have rules for stuff that will be configured just one time 
>    and then will not change. If we set up some files, we don't expect them to 
> disappear. In fact, if they do disappear, we might not want them silently 
> fixed up; we probably want to know what's going on.
>
>
Puppet is fine for stuff that changes from time to time, but it is even 
better suited to stuff that, once configured, is stable for a long time.  The core 
concept around which it is designed is that you describe the state you want 
your machines to be in, and Puppet will both put them in that state and 
make sure they stay there (on a per-run basis).
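
To make that concrete, here is a minimal manifest sketch in the spirit of 
your httpd example (the module source path is an assumption for 
illustration, not something from your setup).  The file resource notifies 
the service, so an updated httpd.conf triggers a reload instead of sitting 
stale the way a symlinked copy would:

```puppet
# Describe the desired state; Puppet converges to it on every run.
# 'source' points at a file served from a module -- path is illustrative.
file { '/etc/httpd/conf/httpd.conf':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
  source => 'puppet:///modules/httpd/httpd.conf',
  notify => Service['httpd'],   # reload/restart httpd when the file changes
}

service { 'httpd':
  ensure => running,
  enable => true,
}
```

On every run Puppet re-checks both resources, so broken ownership, a 
removed file, or a stopped daemon gets repaired (and reported) rather than 
silently lingering.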

If you want Puppet just to check the resources declared for the target node 
without syncing them, then you can run it in --noop mode, and Puppet will 
flag resources that are out of sync.  Alternatively, your manifests can 
declare individual resources to be managed in noop mode if you want finer 
granularity.  In any case, Puppet certainly notifies you when it syncs an 
out of sync resource, both in its output and in the reports it sends back 
to the master (if you enable those).  Additionally, you can use the 
--detailed-exitcodes option to make the agent's return code yield 
information about whether anything changed and/or whether there were any 
failed resources.
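
For the finer-grained variant, the noop metaparameter is set per resource; 
a sketch (the path and content here are purely illustrative):

```puppet
# Audit-only management: Puppet reports when this file drifts from the
# declared content, but does not change it.
file { '/etc/exports':
  ensure  => file,
  content => "/srv/share 10.0.0.0/24(ro)\n",
  noop    => true,
}
```

With --detailed-exitcodes, the agent exits 0 when nothing changed, 2 when 
changes were applied, 4 when resources failed, and 6 when both occurred, so 
a cron wrapper can distinguish "drift detected" from "all quiet" instead of 
discarding everything to /dev/null.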
 

>
>    2. Doing everything in puppet results in ever-growing manifests. I 
>    don't know of a way to specify different manifests, e.g. every 30 minutes 
>    I want Puppet to run and request the lean and mean regular manifest, and 
>    then once a week I want it to run the "make sure everything is in the 
>    right place" manifest. 
>    
>
Yes, everything you configure for Puppet to manage must be described in a 
manifest file; therefore, the more you bring under Puppet management, the 
larger the volume of your manifests.  That's like saying "every time I want 
a new feature in my program, I have to add source code!"

Puppet does offer facilities for limiting the scope of runs.  The main ones 
are the --tags agent option to select a subset of the resources that 
normally would be applied, and schedules 
<http://docs.puppetlabs.com/references/latest/metaparameter.html#schedule> 
to declare master-side limits on when and how frequently particular 
resources and groups of resources should be applied.
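
As a sketch of the schedule approach (the schedule name and paths are made 
up for illustration), an expensive recursive file check can be limited to 
one run per week while the rest of the catalog still applies on every 
half-hourly run:

```puppet
# Declare a schedule, then attach it to the expensive resource via the
# 'schedule' metaparameter.  Resources without a schedule still apply on
# every agent run.
schedule { 'weekly_audit':
  period => weekly,
  repeat => 1,
}

file { '/opt/app/static':
  ensure   => directory,
  recurse  => true,
  source   => 'puppet:///modules/app/static',
  schedule => 'weekly_audit',
}
```

That gives you the "lean and mean regular run" plus a periodic "make sure 
everything is in the right place" pass from a single set of manifests.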

 

>
>    3. Puppet seems very sensitive to network glitches. We run puppet from 
>    a cron job and errors were so frequent that we just started sending all 
>    output to /dev/null.
>    
>
I'm not sure I understand.  What sort of network glitches are we talking 
about?  Are these frequent in your environment?  And what sort of errors?
 

>
>    4. Endless certificate issues. It's crazy. So sometimes hosts would 
>    get "dropped"... for unknown reasons their certificates were no longer 
>    accepted. Because we'd already stopped output (see previous bullet point) 
>    we would not know this and the server would be quietly not updated. And 
>    when you get a certificate problem, often simply deleting the cert on the 
>    agent and master won't fix it. Sometimes a restart of the master service 
>    (or more?) is required.
>    - To me, the solution to this is not "you should run puppet dashboard, 
>       then you'd know". This shouldn't be failing in the first place. If 
>       something is that flaky, I don't want to run it.
>       
> (We're running version 3.4.2 on CentOS 6.5, 64-bit.)
>
> ---
>
> Questions.
>
> So my questions for the above issues are, I guess, as follows:
>
>    1. Is there a common Puppet pattern to address this? Or am I thinking 
>    about things all wrong?
>    2. Is there a way to get puppet to be more fault-tolerant, or at least 
>    complain less?
>
>
If you are not running in --verbose mode (also implied by --test), do not 
have --debug messages enabled, and do not have the 'show_diff' option 
enabled (defaults to disabled, unless you are using --test), then you are 
getting the minimum messages that the agent emits.  You can, however, 
configure Puppet to send them to a logfile or to syslog (--logdest), and if 
you send them to syslog then you can use that subsystem's facilities to 
filter what messages are actually recorded, and where.
 

>
>    3. Are endless certificate woes the norm? Once an agent has 
>    successfully got its certificates working with the server, is it a known 
>    issue that it should sometimes start to subsequently fail?
>
>
No, they are not the norm.  Once a client has a cert signed by a CA that 
the master recognizes (most often the CA provided by the master itself), it 
is normally good until the certificate expires.  The default certificate 
lifetime is <mumble> years.

Is it possible that something is occasionally damaging or removing your 
clients' certificates?  If a client's certificate is occasionally removed, 
then on its next run after such an event the agent will generate a new 
one.  The master will not accept or sign that new cert, however, because it 
already has a signed cert for the requested certname (else the system would 
be wide open to spoofing).  The client will in that case log the new 
certificate generation, but only on the run when it is generated, and that 
could be easy to miss.

We might have other ideas if you provided some additional detail about the 
SSL issues you are seeing.


Overall, though, I wonder whether you might find Puppet's "apply" face to be 
a more comfortable fit for you than the "agent" face.  You already have an 
infrastructure by which you could distribute the manifests and data to each 
server, and you're already running under a separate scheduler rather than 
running Puppet in daemon mode.  Puppet apply does not depend on SSL (since 
it builds catalogs locally, from local files), and it provides more direct, 
file-based mechanisms for selecting which resources to apply.


John

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.