Issue #2888 has been updated by Jeff McCune.
A bit more information after diving into this. First, complicated multi-node scenarios aren't required to reproduce the issue. It was sufficient for me to start three processes on my laptop: a WEBrick master, a daemonized agent with a 1-second runinterval, and bash running a tight while loop with `puppet agent --test` inside it.

With these three processes running, I observed the file referenced by `puppet agent --configprint puppetdlockfile` in 2.7.x coming into and out of existence. Sometimes it contained the PID of the interactive agent; sometimes it contained the PID of the daemonized agent. When it contained a PID, the system worked as I expected: only one of the two scheduled agent processes performed a configuration run.

It only took a few minutes for this scenario to run into the issue. When I observed that no process was performing a configuration run, the file referenced by `puppet agent --configprint puppetdlockfile` contained no PID and was zero length. 2.7.x currently reports this as `Skipping run of Puppet configuration client; administratively disabled; use 'puppet agent --enable' to re-enable.`

I'm going to proceed by tracking down whatever is creating the zero-length file. I suspect changing the code to ensure the PID is written will fix this specific issue.
----------------------------------------
Bug #2888: puppetd doesn't always cleanup lockfile properly
https://projects.puppetlabs.com/issues/2888#change-70908

Author: Peter Meier
Status: Accepted
Priority: Normal
Assignee: Jeff McCune
Category: plumbing
Target version: 3.0.0
Affected Puppet version: 0.25.1
Keywords:
Branch:

OK, I have had the patch from #2661 running for some weeks now and I had nearly no problems anymore. However, from time to time (maybe once or twice a week) a random client doesn't remove its lockfile (@/var/lib/puppet/state/puppetdlock@), and hence future runs fail.
I assume this might still happen due to an uncaught exception (as in #2261); however, the problem is a) hard or nearly impossible to reproduce and b) occurs seemingly at random. The only thing I can see in the logs:

<pre>
Nov 30 19:27:41 foobar puppetd[26228]: Finished catalog run in 98.79 seconds
Nov 30 20:00:02 foobar puppetd[3000]: Could not retrieve catalog from remote server: Error 502 on SERVER: <html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/0.6.39</center>
</body>
</html>
Nov 30 20:00:03 foobar puppetd[3000]: Using cached catalog
Nov 30 20:00:03 foobar puppetd[3000]: Could not retrieve catalog; skipping run
Nov 30 20:00:04 foobar puppetd[12169]: Run of Puppet configuration client already in progress; skipping
Nov 30 20:30:04 foobar puppetd[21230]: Run of Puppet configuration client already in progress; skipping
</pre>

As I run puppetd from cron twice an hour with --splay, I assume the run scheduled between 19:30 and 20:00 was delayed until 20:00. At that time (20:00) a puppetmaster restart happens, which caused the 502. That was the run of PID 3000; the next run (PID 12169) failed, either because PID 3000 was still running or because no puppetd was running anymore and the lockfile had not been removed. Every subsequent run failed as well, since the lockfile was never removed. So somehow puppet doesn't remove lockfiles properly under certain conditions.

PS: If you think it's better to reopen the old bug report, close this one as a duplicate and re-open #2261.
