Issue #19153 has been updated by Charlie Sharpsteen.

The core problem here is that Puppet is being run under two scheduling systems, 
cron and mcollective, and this makes it difficult to guarantee that agents will 
not step on each other. There is work underway on #7273 that will make it 
easier to integrate this sort of setup by providing signals to abort/restart an 
agent run that is in progress. Until signaling support is integrated, reverting 
to a system where only one scheduler controls the runs---such as daemons 
with kick, or running everything through mcollective---makes it much easier to 
ensure that two runs aren't scheduled to occur at the same time.

Continuing with your cron/mcollective setup, a couple of ways to mitigate the 
problem:

  - Ensure MCollective uses the `--no-daemonize` flag so that it doesn't kill 
itself off.

  - Consider dropping `ensure => stopped` and just using `enable => false`. This 
will prevent `puppet agent` from launching as a daemon when the machine starts 
up, but it won't go hunting for daemons to kill.

You will still get runs that fail to occur unless the mcollective runs are 
perfectly timed to miss the cron runs, but this should solve the problem of 
partially completed runs being killed off.
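
The second suggestion above, applied to the resource quoted in the original 
report, would look something like this (a sketch; adjust the require edges to 
match your own file resources):

<pre>
service { 'puppet':
        enable      => false,
        require     => File['/etc/cron.d/puppet','/etc/puppet/puppet.conf'],
}
</pre>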
----------------------------------------
Bug #19153: service puppet ensure stopped kills off cron-run puppet with 
"Caught TERM; calling stop"
https://projects.puppetlabs.com/issues/19153#change-84556

Author: Jo Rhett
Status: Needs More Information
Priority: Urgent
Assignee: Jo Rhett
Category: agent
Target version: 3.1.1
Affected Puppet version: 3.1.0
Keywords: 
Branch: 


We have recently switched from puppet agent in daemon mode (for kick) to 
cron-run puppet with mcollective agent. However, I started noticing that puppet 
policies were being inconsistently applied across the hosts. It turns out that 
this policy is the problem:

<pre>
service { 'puppet':
        ensure      => stopped,
        enable      => false,
        require     => File['/etc/cron.d/puppet','/etc/puppet/puppet.conf'],
}
</pre>

I have checked and confirmed that the puppet init script returns the correct 
response while a foreground run is in progress. If I run "puppet agent --test" 
in one window and, while it is running, check the status in another window, it 
reports stopped:

<pre>
root@sj2-noc01 ~$ service puppet status ; echo $?
puppet is stopped
3
</pre>

However, if I run puppet silently with --onetime and --no-daemonize, then the 
init script returns this:

<pre>
root@sj2-noc01 ~$ service puppet status ; echo $?
puppet (pid  30406) is running...
0
</pre>
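
Per the two transcripts above, the difference appears to come down to the 
pidfile check the init script performs. A minimal sketch of that check 
(an assumption, not the actual script: the pidfile path and output format 
are approximations of the stock CentOS behavior):

```shell
# Hedged sketch of what "service puppet status" effectively does:
# report running (exit 0) if the pidfile names a live process,
# otherwise report stopped (exit 3, per LSB convention).
puppet_status() {
    pidfile="${1:-/var/run/puppet/agent.pid}"
    if [ -s "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        echo "puppet ($(cat "$pidfile")) is running..."
        return 0
    fi
    echo "puppet is stopped"
    return 3
}
```

If a --onetime run writes that pidfile while a --test run does not, it would 
explain why only the silent cron-run instance is seen as "running" and killed.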

This causes it to kill itself off and not finish the run. Due to the 
semi-random nature of resource ordering, this happens near the end or near the 
beginning of the puppet run on different hosts. (There are few dependencies on 
the puppet module, so its position in the manifest varies from host to host.)

This is clearly a major flaw. We need the above policy to ensure that no puppet 
daemons are running, but it interferes with the cron-run instance, even though 
a verbose run with --test is handled properly.

Environment is a mixture of CentOS 5 & 6.


