Issue #1563 has been updated by seanmil.
I have spent some time with Etch, apt-get, osirisd, and Puppet and I believe I
have narrowed down exactly what the problem is/was with the pipe-based method:
Puppet creates the pipe, sets stdin to /dev/null, and points stdout/stderr to
the pipe.
Then Puppet fork()s and exec()s "/usr/bin/apt-get -q -y -o
DPkg::Options::=--force-confold install osirisd", passing the pipe to
apt-get's stdout/stderr as we would hope and expect.
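The setup described above can be sketched roughly like this (a minimal illustration of the pipe-based capture pattern, not Puppet's actual code):

```ruby
# Parent creates a pipe; the child inherits the write end on stdout/stderr
# and exec()s the real command. 'echo' stands in for the apt-get invocation.
rd, wr = IO.pipe

pid = fork do
  rd.close                         # child doesn't need the read end
  $stdin.reopen('/dev/null')
  $stdout.reopen(wr)
  $stderr.reopen(wr)
  exec('echo', 'hello from child') # stand-in for the apt-get command line
end

wr.close                           # parent must drop its copy of the write end
_, status = Process.waitpid2(pid)
output = rd.read                   # EOF arrives once ALL write ends are closed
rd.close
```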
apt-get then spawns a series of about 47 PIDs (presumably from fork/exec'ing
subprocesses and shell commands during script execution; I strace'd it,
following all children), apparently still passing our pipe to all of them on
stdout/stderr.
During the package install the osirisd daemon is started (presumably as part of
a post-install script). This daemon ALSO inherits our pipe on stdout/stderr,
but it doesn't daemonize properly and fails to close stdin/stdout/stderr. Then
it remains running, holding the pipe open beyond the life of apt-get and all of
the other apt-get generated PIDs.
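For contrast, here is a sketch of what a well-behaved daemon is expected to do at startup (the classic double-fork recipe; the reopen step is exactly what osirisd appears to skip, leaving the inherited pipe open):

```ruby
# Illustrative daemonization helper; the reopen calls are the crucial step
# that releases any inherited pipe on stdin/stdout/stderr.
def daemonize
  exit if fork                      # parent exits; child continues
  Process.setsid                    # new session, no controlling terminal
  exit if fork                      # second fork: can't reacquire a terminal
  Dir.chdir('/')
  $stdin.reopen('/dev/null')
  $stdout.reopen('/dev/null', 'w')  # without these reopens, an inherited
  $stderr.reopen('/dev/null', 'w')  # pipe fd stays open for the daemon's life
end
```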
The initial process we forked to exec() apt-get finishes, triggering
continuation from
the Process.waitpid2() in Puppet. Puppet promptly tries to read from the pipe,
but the pipe is still open in osirisd on its stdout/stderr so the data in the
pipe isn't flushed, EOF is never reached, and Puppet never returns from the
IO.read() call (well, never unless someone stops osirisd, releasing and closing
the pipes).
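The underlying pipe semantics can be demonstrated in a few lines: a read end only sees EOF when every copy of the write end is closed, so a single stray holder (the leaked fd in osirisd) is enough to keep IO.read() blocked. A contrived illustration:

```ruby
rd, wr = IO.pipe
extra = wr.dup                  # stands in for the fd leaked into osirisd

wr.write("done\n")
wr.close                        # apt-get exits and closes its write end...

# ...but rd.read would block here forever: EOF needs ALL write ends closed.
extra.close                     # only once the "daemon" dies...
output = rd.read                # ...does read see EOF and return
rd.close
```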
I think it is safe to say that this situation will only occur when the process
Puppet exec()s is, or calls, a poorly written daemon. For "normal" process
runs this won't happen because Puppet won't return from Process.waitpid2()
until the child is done with an exit code, and by the time the child has exited
the pipe resources have been closed and flushed by the OS (if not previously by
the child process).
This is probably more of a minor concern with generic exec{}s, but is clearly a
moderate concern with package{}. Interestingly, this would presumably be most
noticeable with service{} definitions, except the service{} provider calls
Util::execute() with ":squelch => true", which sets stdout/stderr to /dev/null
instead of a pipe, thus nicely side-stepping the block-on-read problem.
So where does this leave us?
1) I think everyone agrees having Puppet block forever (for any reason) is a
bad thing.
2) This can be fixed effectively with temporary files, but temporary files as
Puppet implements them now aren't happy with SELinux.
3) This can be fixed effectively with non-blocking reads on pipes. It works
great for the cases where the programs/daemons are well-behaved but might cause
odd problems for the exec'd children (like apt-get not completing the package
installs) for the corner cases where badly daemonized processes are started
from a Util::execute() with ":squelch => false". In all cases non-blocking
reads cause Puppet to always return. Based on some testing, the corner cases
can be worked around by introducing a "sleep (N)" between the two nonblocking
IO.read() calls. However, while a value of N=3 seems safe for the apt-get
install osirisd case, it seems to me that the safe value of N could vary
depending on what was being run. Also, introducing a sleep measured in seconds
for every Util::execute() call just to compensate for some corner cases caused
by
buggy software seems very suboptimal and like the wrong approach.
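A rough sketch of the non-blocking approach described in (3), with a deadline instead of a fixed sleep (the helper name and the 0.1s poll interval are illustrative, not Puppet's code):

```ruby
require 'io/wait'

# Drain a pipe without ever blocking indefinitely: poll with a short timeout
# and stop at EOF or at the deadline, whichever comes first.
def drain_with_timeout(rd, timeout)
  output = +''
  deadline = Time.now + timeout
  while Time.now < deadline
    next unless rd.wait_readable(0.1)              # nothing yet; poll again
    chunk = rd.read_nonblock(4096, exception: false)
    break if chunk.nil?                            # EOF: all write ends closed
    output << chunk unless chunk == :wait_readable
  end
  output
end

rd, wr = IO.pipe
wr.write("some output\n")
wr.close
result = drain_with_timeout(rd, 2)   # returns promptly even if EOF never came
rd.close
```

The trade-off discussed above remains: if a badly daemonized child holds the write end, this returns at the deadline with whatever was captured, rather than waiting for output that will never be flushed.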
I have posted on the SELinux mailing list to see if there is any way to use
Puppet-generated temporary files in a way that satisfies SELinux for any
possible policy and domain transition. If there is I can try to modify the
temporary file approach to use it.
I'll update this ticket when I know more.
----------------------------------------
Bug #1563: [PATCH] Change Util::Execute to use pipes instead of temporary files
for capturing output
http://projects.reductivelabs.com/issues/show/1563
Author: seanmil
Status: Needs more information
Priority: High
Assigned to: luke
Category: plumbing
Target version: 0.24.6
Complexity: Easy
Affected version: 0.24.5
Keywords: SELinux execute Tempfile
Patch attached to fix reported behavior.
When triggering Puppet runs which included initscript starts/stops I noticed
that I would receive three SELinux AVC denials logged for the process that was
being started/stopped for a file of the form /tmp/puppet.$PID.0. Many of the
system daemons which ship with CentOS 5 have confined SELinux domains which
don't permit access to much of the system - including these Puppet temp files.
Trying to figure out where to create the file (and with which context) for
every service would be impractical (impossible? some services may not have any
context that would be usable for write permissions) so I decided to just
rewrite it to use Unix pipes.
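For reference, the temp-file capture pattern that triggers the AVC denials looks roughly like this (an illustration, not the actual Util::execute code): the child's stdout/stderr are redirected into a Tempfile under /tmp, which a confined SELinux domain may not be permitted to write.

```ruby
require 'tempfile'

out = Tempfile.new('puppet')        # a /tmp/puppet.$PID.0-style file
pid = fork do
  $stdout.reopen(out)               # confined domains may be denied this fd
  $stderr.reopen(out)
  exec('echo', 'captured via tempfile')
end
Process.waitpid2(pid)
out.rewind
output = out.read
out.close!                          # close and unlink the temp file
```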
WorksForMe in my testing.
I'm marking this as high because, depending on what commands are being run and
their SELinux policies, this could cause command output to silently disappear
(other than the denials in the logs). This could be very frustrating for
someone who is trying to use that output.
----------------------------------------