Issue #2888 has been updated by Josh Cooper.

These are my notes on other issues that we uncovered but that were not 
required to fix the deadlock.

1. In `Puppet::Agent#run`, we check if we're `running?` before acquiring a 
lock, so two agents can race and run in parallel, unaware of each other.
1. In `Puppet::Util::Pidlock#lock` and `#unlock`, we frequently check whether 
the lockfile exists and, if so, read from it. This check-then-read sequence is 
not atomic and can race. In cases where we just want to read the file, we 
should call `File.read` and be prepared to handle `Errno::ENOENT`. 
1. In cases like `lock`, where we need to read the file, check whether the pid 
still exists, and overwrite it if not, we should use an advisory lock (see 
`lib/puppet/external/lock.rb`) to ensure the test-and-set is atomic with 
respect to other puppet processes.
1. In addition, when writing to the pidfile, we don't use `replace_file`, so 
partial reads, including an empty file, are possible.
1. In `Puppet::Util::Pidlock#clear_if_stale`, we call `lock_pid` twice, which 
reads the lockfile twice unnecessarily.
1. In addition, if the lockfile is empty, we end up calling `"".to_i`, which 
returns 0, and then `Process.kill(0, 0)`, which actually succeeds (a pid of 0 
addresses the caller's own process group) even though no process has pid 0. As 
a result, we may think the pidfile is still owned, when really it means we 
should be disabled.
1. In the same method, `Process.kill(0, pid)` raises `Errno::EPERM` if the 
process exists but we don't have permission to signal it. Normally this isn't 
a problem for the agent, since it runs as root.
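
The read-with-rescue pattern from the second note can be sketched as follows. This is an illustrative helper (`read_lockfile` is a hypothetical name, not Puppet's API): rather than a `File.exist?` check followed by a read, which another process can interleave with, we read unconditionally and treat a missing file as "no lock".

```ruby
# Hypothetical helper: read a lockfile without a separate existence check.
# File.exist? followed by File.read is a check-then-act race; instead,
# just read and handle the error that a missing file raises.
def read_lockfile(path)
  File.read(path)
rescue Errno::ENOENT
  nil # file never existed, or was removed out from under us -- not an error
end
```

The caller then only has to handle a `nil` return, and there is no window in which the file can disappear between the check and the read.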
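
The advisory-lock test-and-set from the third note might look roughly like this. It is a sketch under the assumption of POSIX `flock` semantics, not Puppet's actual implementation; `with_pidlock`, `claim_lock`, and the `stale:` callback are hypothetical names for illustration.

```ruby
# Hold an exclusive advisory lock on the pidfile while the block runs.
# Hypothetical helper, assuming flock(2) semantics.
def with_pidlock(path)
  File.open(path, File::RDWR | File::CREAT, 0644) do |f|
    f.flock(File::LOCK_EX) # blocks until no other process holds the lock
    yield f
  end                      # lock is released when the file is closed
end

# Atomically read the recorded pid, decide whether it is stale, and if so
# overwrite it with our own pid. No other cooperating process can
# interleave between the read and the write.
def claim_lock(path, stale: ->(pid) { true })
  with_pidlock(path) do |f|
    old = f.read.strip
    return false unless old.empty? || stale.call(old.to_i)
    f.rewind
    f.truncate(0)
    f.write(Process.pid.to_s)
    true
  end
end
```

Because the lock is advisory, this only protects against other processes that also take the lock, which is exactly the "atomic with respect to other puppet processes" guarantee the note asks for.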
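
The last two notes combine into one stale-pid check. The sketch below (`process_running?` is a hypothetical helper, assuming Ruby 2.6+ for `Integer(..., exception: false)`) guards against the empty-pidfile/pid-0 trap and treats `Errno::EPERM` as "process exists":

```ruby
# Does the pid recorded in the lockfile refer to a live process?
def process_running?(pid_str)
  pid = Integer(pid_str, exception: false)
  # Guard the empty-file trap: "".to_i is 0, and Process.kill(0, 0)
  # signals our own process group, so it would "succeed" spuriously.
  return false if pid.nil? || pid <= 0
  Process.kill(0, pid) # signal 0: existence check only, no signal delivered
  true
rescue Errno::ESRCH
  false # no such process: the lock is stale
rescue Errno::EPERM
  true  # process exists but we may not signal it (e.g. we are not root)
end
```

Note the `Errno::EPERM` branch: without it, a non-root caller would misclassify a live process owned by another user as stale.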
----------------------------------------
Bug #2888: puppetd doesn't always cleanup lockfile properly
https://projects.puppetlabs.com/issues/2888#change-71318

Author: Peter Meier
Status: Merged - Pending Release
Priority: Normal
Assignee: Jeff McCune
Category: plumbing
Target version: 3.0.0
Affected Puppet version: 0.25.1
Keywords: 
Branch: https://github.com/puppetlabs/puppet/pull/1158


OK, I have had the patch from #2661 running for some weeks now, and I have had 
nearly no problems. However, from time to time (maybe once or twice a week) a 
random client doesn't remove its lockfile 
(@/var/lib/puppet/state/puppetdlock@), so future runs fail. I assume this 
might still happen due to an uncaught exception (as in #2261), but the problem 
is a) hard or nearly impossible to reproduce and b) occurs seemingly at 
random. The only thing I can see in the logs:

<pre>
Nov 30 19:27:41 foobar puppetd[26228]: Finished catalog run in 98.79 seconds
Nov 30 20:00:02 foobar puppetd[3000]: Could not retrieve catalog from remote 
server: Error 502 on SERVER: <html>^M <head><title>502 Bad 
Gateway</title></head>^M <body bgcolor="white">^M <center><h1>502 Bad 
Gateway</h1></center>^M <hr><center>nginx/0.6.39</center>^M </body>^M </html>^M
Nov 30 20:00:03 foobar puppetd[3000]: Using cached catalog
Nov 30 20:00:03 foobar puppetd[3000]: Could not retrieve catalog; skipping run
Nov 30 20:00:04 foobar puppetd[12169]: Run of Puppet configuration client 
already in progress; skipping
Nov 30 20:30:04 foobar puppetd[21230]: Run of Puppet configuration client 
already in progress; skipping
</pre>

As I run puppetd from cron twice an hour with --splay, I assume that the run 
between 19:30 and 20:00 got delayed until 20:00. At that time (20:00) a 
puppetmaster restart happens, which caused the 502. That was the run of pid 
3000; the next run (pid 12169) failed, either because pid 3000 was still 
running, or because no puppetd was running anymore and the lockfile hadn't 
been removed. Every subsequent run failed as well, because the lockfile was 
never removed.

So somehow puppet doesn't remove lockfiles properly under certain conditions.

PS: If you think it's better to reopen the old bug report, close this one as a 
duplicate and re-open #2261.

