On Wed, Feb 25, 2015 at 03:58:07PM +0100, Gorka Lertxundi wrote:
> Hello,
>
> After John's great post, I tried to solve exactly the same problems you
> describe. I created my own base image, based primarily on John's and
> Phusion's base images.
That's awesome - I get so excited when I hear somebody's actually read,
digested, and taken action based on something I wrote. So cool! :)

> See my thoughts below.
>
> 2015-02-25 12:30 GMT+01:00 Laurent Bercot <ska-skaw...@skarnet.org>:
>
> > (Moving the discussion to the supervision@list.skarnet.org list.
> > The original message is quoted below.)
> >
> > Hi Dreamcat4,
> >
> > Thanks for your detailed message. I'm very happy that s6 found an
> > application in docker, and that there's such an interest for it!
> > skaw...@list.skarnet.org is indeed the right place to reach me and
> > discuss the software I write, but for s6 in particular and process
> > supervisors in general, supervision@list.skarnet.org is the better
> > place - it's full of people with process supervision experience.
> >
> > Your message gives a lot of food for thought, and I don't have time
> > right now to give it all the attention it deserves. Tonight or
> > tomorrow, though, I will; and other people on the supervision list
> > will certainly have good insights.
> >
> > Cheers!
> >
> > -- Laurent
> >
> >
> > On 25/02/2015 11:55, Dreamcat4 wrote:
> >
> >> Hello,
> >> Now there is someone (John Regan) who has made s6 images for docker,
> >> and written a blog post about it. Which is a great effort - and the
> >> reason I've come here. But it gives me a taste of wanting more:
> >> something a bit more foolproof and simpler, to work specifically
> >> inside of docker.
> >>
> >> From that blog post I get a general impression that s6 has many
> >> advantages, and that it may be a good candidate for docker. But I
> >> would be remiss not to ask the developers of s6 themselves to take
> >> some kind of personal interest in considering how s6 might best work
> >> inside of docker specifically. I hope that this is the right mailing
> >> list to reach the s6 developers and discuss such matters. Is this
> >> the correct mailing list for s6 dev discussions?
> >>
> >> I've read and read around the subject of process supervision inside
> >> docker. Various people explain how or why they use various different
> >> process supervisors in docker (not just s6). None of them really
> >> quite seem ideal. I would like to be wrong about that, but nothing
> >> has fully convinced me so far. Perhaps it is a fair criticism to say
> >> that I still have a lot more to learn about process supervisors. But
> >> I have no interest in getting bogged down by that. I already know
> >> more-or-less enough about how docker manages (or rather
> >> mis-manages!) its container processes to have an opinion about what
> >> is needed, from a docker-sided perspective. And I know enough that
> >> the docker project itself won't fix these issues - for one thing
> >> because it doesn't own what runs inside containers, and also because
> >> of its single-process view of things. Anyway, that kind of political
> >> nonsense doesn't matter for our discussion. I just want to have a
> >> technical discussion about what is needed, and what the best way to
> >> solve the problem might be!
> >>
> >>
> >> MY CONCERNS ABOUT USING S6 INSIDE OF DOCKER
> >>
> >> In regard to s6 specifically, these are my currently perceived
> >> shortcomings when using it in docker:
> >>
> >> * it's not clear how to pass program arguments in via CMD and
> >>   ENTRYPOINT in docker
> >>   - in fact I have not seen ANY docker process supervisor solutions
> >>     show how to do this (except perhaps the phusion base image)
> >>
> >> * it is not clear if ENV vars are preserved. That is also something
> >>   essential for docker.
> >>
> >> * s6 has many utilities s6-*
> >>   - not clear which ones are actually required for making a docker
> >>     process supervisor
> >>
> >> * s6 is not available yet as a .deb or .rpm package
> >>   - official packages are helpful because on different distros:
> >>     + standard locations for config files and so on may differ
> >>     + they install man pages too, in the right place
> >>
> >> * s6 is not available as an official single pre-compiled binary file
> >>   for download via wget or curl
> >>   - which would be the most ideal way to install it into a docker
> >>     container
> >>
> >> ^^ Some of these perceived shortcomings are more important /
> >> significant than others! Some are not in the remit of s6 development
> >> to be concerned about. Some are mild nit-picking, or the ignorance of
> >> not knowing, having not actually tried out s6 before.
> >>
> >> But my general point is that it is not clear enough to me (from my
> >> perspective) whether s6 can actually satisfy all of the significant
> >> docker-specific considerations, which I have not properly stated yet.
> >> So here they are listed below…
> >>
> >>
> >> DOCKER-SPECIFIC CONSIDERATIONS FOR A PROCESS SUPERVISOR
> >>
> >> A good process supervisor for docker should ideally:
> >>
> >> * be a single pre-compiled binary program file that can be downloaded
> >>   by curl/wget (or can be installed from .deb or .rpm)
>
> I always include s6, execline and s6-portable-utils in my base images;
> they are pretty small and manageable. You can find/build them using
> this builder container:
>
> https://github.com/glerchundi/container-s6-builder
>
> I try to keep this repo as up to date as possible, so that when Laurent
> releases a new version I publish a new release on github:
>
> https://github.com/glerchundi/container-s6-builder/releases

That's genius, I never thought of using GitHub's release feature like
that. Expect a pull request soon for the other s6 packages.

> >> * can directly take a command and arguments, with argv[] like this:
> >>   "process_supervisor" "my_program_or_script" "my program or script
> >>   arguments…"
>
> What do you want to achieve with that? Get into the container to debug?
> If so, just using CMD with [ "/init" ] is enough, and allows anyone to
> get into the container without running any init process / supervisor.
>
> I cannot imagine why this would be useful, but if it were required,
> registering a new service into s6 at runtime could be a possible
> solution. This service would include your custom program or script.
>
> >> * will pass on all ENV vars to "my_program_or_script" faithfully
>
> I take care of ENVs in my init process.
>
> Concretely, as phusion does, I dump all ENVs into
> /etc/container_environment:
>
> https://github.com/glerchundi/container-base/blob/master/rootfs/init#L19-L30
>
> And then make use of them in this script, which internally uses
> s6-envdir to publish environment variables in the spawned process.
>
> https://raw.githubusercontent.com/glerchundi/container-base/master/rootfs/usr/bin/with-contenv
>
> So all container init.d, service run and service finish scripts will
> always have the environment variables accessible.

I just wanted to let you know this image is pretty sweet and makes mine
look like garbage :)

> >> * will run as PID 1 inside the linux namespace
>
> OK. s6 can run as pid 1, so no problem. The init script I mentioned
> delegates responsibility to s6, so it runs as an init/pid 1/supervisor
> perfectly.
>
> >> * where my_program_or_script may spawn BOTH child AND non-child
> >>   (orphaned) processes
>
> Laurent, correct me if I'm wrong, but this is an intrinsic feature of
> s6: since it can run as pid 1, it inherits all orphaned processes.
>
> >> * when "process_supervisor" (e.g. s6 or whatever) receives a TERM
> >>   signal
> >>   * it faithfully passes that signal to "my_program_or_script"
> >>   * it also passes that signal to any orphaned non-child processes
>
> yes, it can:
>
> https://github.com/glerchundi/container-base/blob/master/rootfs/etc/s6/.s6-svscan/finish#L23-L31

I agree, this seems like the right way to properly stop a Docker
container. Like I said in the other email, I think it's up to the
image creator to do all this - not the responsibility of s6 directly.

> >> * when my_program_or_script dies, or exits
> >>   * clean up ALL remaining non-child orphaned processes afterwards
> >>     * which share the same linux namespace
> >>   - this is VERY important for docker, as docker does not do this
>
> This should be a concern of your concrete service. For example, imagine
> you have a load balancer with two supervised processes, nginx (main)
> and confd. You probably want confd running at all times, no matter why
> it dies. But if your main process dies, you want the container to die
> too. So that should be implemented for your specific use case.
>
> For example:
>
> https://github.com/glerchundi/container-mariadb/blob/master/rootfs/etc/s6/mysql/finish#L8-L13

This is really similar to calling `s6-svscanctl -t /path/to/whatever`.
I'm not sure if s6 does anything subtly different - I'm guessing it
treats receiving a TERM signal in exactly the same way. Just wanted to
raise that and double-check there are no differences.

> >> So to ENSURE these things:
> >>
> >> * to pass full command line arguments and ENV vars to
> >>   "my_program_or_script"
> >>
> >> * ensure that when "my_program_or_script" exits (crashes or normal
> >>   exit)
> >>   * no processes are left running in that linux namespace
> >>
> >> * ensure that when the service receives a TERM (docker stop)
> >>   * no processes are left running in that linux namespace
> >>
> >> * ensure that when the service receives a KILL (docker stop)
> >>   * no processes are left running in that linux namespace
> >>
> >> BUT in addition, any extra configurability should be entirely
> >> optional:
> >>
> >> * to add supplemental run scripts (for support programs like cron
> >>   and rsyslog etc)
> >>   * which is what most general process supervisors consider as their
> >>     main mechanism for starting services
> >>
> >> SO
> >> * if "my_program_or_script" is supplied as an argument, THAT is the
> >>   main process running inside the docker container.
> >> * if no "my_program_or_script" argument is supplied, then use
> >>   whatever other conventional ways to determine the main process
> >>   from the directories of run scripts.
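To make the ENV-handling part above a bit more concrete: a
with-contenv-style wrapper like the one Gorka links to basically boils
down to something like the sketch below. This is just a sketch, not
Gorka's actual script; it assumes the init process has dumped the
environment into /etc/container_environment with one file per variable,
as described above.

    #!/bin/sh
    # with-contenv (sketch): import the container's environment from the
    # envdir written by the init script, then exec into the wrapped
    # command. s6-envdir takes each file in the directory as a variable
    # (file name = variable name, file contents = value) and chain-loads
    # into "$@", so run/finish scripts see the ENV docker passed to pid 1.
    exec s6-envdir /etc/container_environment "$@"

And for the "container should die when the main process dies" pattern
that the mariadb finish script above demonstrates, the finish script of
the main service might look roughly like this (again only a sketch,
assuming /etc/s6 is the scan directory as in the images linked above;
the real script is at the URL Gorka gave):

    #!/bin/sh
    # finish script for the container's "main" service (sketch).
    # s6 runs this after the supervised daemon exits, passing the exit
    # code as $1, so you could also decide here whether to let s6 restart
    # the service instead. Telling s6-svscan (pid 1) to terminate brings
    # the whole supervision tree down, which ends the container.
    exec s6-svscanctl -t /etc/s6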
> >>
> >>
> >> SUMMARY
> >>
> >> Current solutions for docker seem to be "too complex" for various
> >> reasons. A good solution should have:
> >>
> >> * no mandatory ssh server (phusion baseimage fails)
> >> * no python dependency (supervisor fails, and phusion baseimage
> >>   fails)
> >> * no mandatory cron or rsyslog (current s6 base image fails, and
> >>   phusion base image fails)
> >> * no "hundreds of cli tools" - most of which may never be used
> >>   (current s6 base image fails)
> >> * no awkward intermediate bootstrap script required to write ENV
> >>   vars and cmdline args to a file (runit fails)
> >>
> >> So, for one or another of those reasons, it feels like the docker
> >> problem remains unsolved, and is not fully addressed by any
> >> individual solution. What's the best course of action? How can these
> >> problems be solved by one single tool?
> >>
> >> I am hoping that the person who wrote this page:
> >>
> >> http://skarnet.org/software/s6/why.html
> >>
> >> might be able to comment on some of these docker-specific
> >> considerations above? If so, it would really be appreciated.
> >>
> >> Also cc'ing the author of the docker s6 base images, who perhaps can
> >> comment on some of the problems he has encountered. Many thanks for
> >> any comments (but please reply on the mailing list).
> >>
> >> Kind Regards
> >> dreamcat4