Really overwhelmed by all this. It is a much more positive response
than expected, and there are many good things in it. I am very
grateful. There are still some bits I would like us to continue
discussing.

But Gorka - I must say that your new ubuntu base image really seems *a
lot* better than the phusion/baseimage one. It is fantastic, you have
done an excellent job there, and you continue to update it with new
versions of s6, etc. I can't really say thank you enough for that.

Anyway, back to the discussion:

On Wed, Feb 25, 2015 at 3:57 PM, John Regan <j...@jrjrtech.com> wrote:
> On Wed, Feb 25, 2015 at 03:58:07PM +0100, Gorka Lertxundi wrote:
>> Hello,
>>
>> After John's great post, I tried to solve exactly the same problems as you. I
>> created my own base image based primarily on John's and Phusion's base
>> images.
>
> That's awesome - I get so excited when I hear somebody's actually
> read, digested, and taken action based on something I wrote. So cool!
> :)
>
>>
>> See my thoughts below.
>>
>> 2015-02-25 12:30 GMT+01:00 Laurent Bercot <ska-skaw...@skarnet.org>:
>>
>> >
>> >  (Moving the discussion to the supervision@list.skarnet.org list.
>> > The original message is quoted below.)
>> >
>> >  Hi Dreamcat4,
>> >
>> >  Thanks for your detailed message. I'm very happy that s6 found an
>> > application in docker, and that there's such an interest for it!
>> > skaw...@list.skarnet.org is indeed the right place to reach me and
>> > discuss the software I write, but for s6 in particular and process
>> > supervisors in general, supervision@list.skarnet.org is the better
>> > place - it's full of people with process supervision experience.
>> >
>> >  Your message gives a lot of food for thought, and I don't have time
>> > right now to give it all the attention it deserves. Tonight or
>> > tomorrow, though, I will; and other people on the supervision list
>> > will certainly have good insights.
>> >
>> >  Cheers!
>> >
>> > -- Laurent
>> >
>> >
>> > On 25/02/2015 11:55, Dreamcat4 wrote:
>> >
>> >> Hello,
>> >> Now there is someone (John Regan) who has made s6 images for docker.
>> >> And written a blog post about it. Which is a great effort - and the
>> >> reason I've come here. But it gives me a taste of wanting more.
>> >> Something a bit more foolproof, and simpler, to work specifically
>> >> inside of docker.
>> >>
>> >>  From that blog post I get a general impression that s6 has many
>> >> advantages. And it may be a good candidate for docker. But I would be
>> >> remiss not to ask the developers of s6 themselves to take
>> >> some kind of a personal interest in considering how s6 might best
>> >> work inside of docker specifically. I hope that this is the right
>> >> mailing list to reach s6 developers / discuss such matters. Is this
>> >> the correct mailing list for s6 dev discussions?
>> >>
>> >> I've read and read around the subject of process supervision inside
>> >> docker. Various people explain how or why they use various different
>> >> process supervisors in docker (not just s6). None of them really quite
>> >> seem ideal. I would like to be wrong about that but nothing has fully
>> >> convinced me so far. Perhaps it is a fair criticism to say that I
>> >> still have a lot more to learn in regards to process supervisors. But
>> >> I have no interest in getting bogged down by that. To me, I already
>> >> know more-or-less enough about how docker manages (or rather
>> >> mis-manages!) its container processes to have an opinion about what
>> >> is needed, from a docker-side perspective. And I know enough that the
>> >> docker project itself won't fix these issues. For one thing, because
>> >> they do not own what's running on the inside of containers. And also
>> >> because of their single-process view of things. Anyway.
>> >> That kind of political nonsense doesn't matter for our discussion. I
>> >> just want to have a technical discussion about what is needed, and how
>> >> might be the best way to solve the problem!
>> >>
>> >>
>> >> MY CONCERNS ABOUT USING S6 INSIDE OF DOCKER
>> >>
>> >> In regards of s6 only, currently these are my currently perceived
>> >> shortcomings when using it in docker:
>> >>
>> >> * it's not clear how to pass in program arguments via CMD and
>> >> ENTRYPOINT in docker
>> >>    - in fact I have not seen ANY docker process supervisor solutions
>> >> show how to do this (except perhaps phusion base image)
>> >>
>> >> * it is not clear if ENV vars are preserved. That is also something
>> >> essential for docker.
>> >
>> >
>> >> * s6 has many utilities s6-*
>> >>      - not clear which ones are actually required for making a docker
>> >> process supervisor
>> >>
>> >>
>> >> * s6 not available yet as .deb or .rpm package
>> >>      - official packages are helpful because on different distros:
>> >>         + standard locations where to put config files and so on may
>> >> differ.
>> >>         + to install man pages too, in the right place
>> >>
>> >>
>> >> * s6 is not available as an official single pre-compiled binary file for
>> >> download via wget or curl
>> >>     - which would be the most ideal way to install it into a docker
>> >> container
>> >>
>> >
>> >> ^^ Some of these perceived shortcomings are more important /
>> >> significant than others! Some are not in the remit of s6 development
>> >> to be concerned about. Some are mild nit-picking, or the ignorance of
>> >> not knowing, having not actually tried out s6 before.
>> >>
>> >> But my general point is that it is not clear enough to me (from my
>> >> perspective) whether s6 can actually satisfy all of the significant
>> >> docker-specific considerations. Which I have not properly stated yet.
>> >> So here they are listed below…
>> >>
>> >>
>> >> DOCKER-SPECIFIC CONSIDERATIONS FOR A PROCESS SUPERVISOR
>> >>
>> >> A good process supervisor for docker should ideally:
>> >>
>> >> * be a single pre-compiled binary program file. That can be downloaded
>> >> by curl/wget (or can be installed from .deb or .rpm).
>> >>
>> >
>> I always include s6, execline and s6-portable-utils in my base images; they
>> are pretty
>> small and manageable. You could find/build them using this builder
>> container:
>>
>> https://github.com/glerchundi/container-s6-builder
>>
>> I try to keep this repo as updated as possible so that when Laurent
>> releases a
>> version I publish a new release on GitHub:
>>
>> https://github.com/glerchundi/container-s6-builder/releases
>>
>
> That's genius, I never thought of using GitHub's release feature like
> that.

Great work, Gorka, providing these linux x86_64 binaries as GitHub releases.
This was exactly the kind of thing I was hoping / looking for with
regard to that aspect.
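
In case it helps anyone else, roughly what I have in mind for a
Dockerfile is something along these lines (the tarball name below is
made up - pick the real asset from the releases page):

    RUN wget -qO /tmp/s6.tar.gz \
          https://github.com/glerchundi/container-s6-builder/releases/download/vX.Y.Z/s6-vX.Y.Z-linux-amd64.tar.gz \
     && tar -xzf /tmp/s6.tar.gz -C / \
     && rm /tmp/s6.tar.gz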

> Expect a pull request soon for the other s6 packages.
>
>> >
>> >> * can directly take a command and arguments, with argv[] like this:
>> >>      "process_supervisor" "my_program_or_script" "my program or script
>> >> arguments…"
>> >
>> >
>> What do you want to achieve with that? Get into the container to debug? If

Right, so I was half-expecting this kind of response (and from John
Regan too). In my initial post I could not think of a concise enough
way to demonstrate and explain my reasoning behind that specific
request, at least not without entering into a whole other long
discussion that would have detracted / derailed from some of the other
important considerations and discussion points in respect of docker.

Basically, without that capability (which I am aware goes against
convention for process supervisors that occupy pid 1), you are forcing
docker users to choose an XOR (exclusive-OR): either use s6 process
supervision, or keep the ability to specify command line arguments to
their docker containers (via ENTRYPOINT and/or CMD). That essentially
breaks those ENTRYPOINT and CMD features of docker. At least, that is
my understanding of how pretty much all of these process supervisors
behave; it is not a criticism levelled at s6 alone, since you would not
typically expect this feature anyway (before we had containerisation
etc.). It is very docker-specific.
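
Just to make concrete the kind of usage I am talking about (this is a
hypothetical sketch, not how any existing image behaves today - the
/init path and image names are made up):

    # Hypothetical Dockerfile: the supervisor's init is the ENTRYPOINT,
    # and CMD supplies the main program plus its default arguments.
    FROM example/s6-base
    ENTRYPOINT ["/init"]
    CMD ["nginx", "-g", "daemon off;"]

A user could then still override the arguments at run time, as with
any ordinary docker image, e.g.:

    docker run example/app nginx -g "daemon off; error_log stderr;"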

Both of you seem to have stated effectively that you don't really see
such a pressing reason why it is needed.

So it is another thing entirely for me to explain why, and to convince
you both, that being able to continue to use CMD and ENTRYPOINT for
specifying command line arguments remains important after adding a
process supervisor. There are actually many different reasons why that
is desirable (that I can think of right now). But that is another
discussion, and another case for me to make to you.

I would be happy to go into that aspect further. Perhaps off the
mailing list is a better idea, and then I can come back here with a
short summary once that discussion is over and concluded. But I don't
want to waste anyone's time, so please reply and indicate if you would
really like me to go into more depth with better justifications for
why we need that particular feature.

>> so,
>> just using CMD with [ "/init" ] is enough and allows anyone to get into the
>> container
>> without running any init process / supervisor.
>>
>> I cannot imagine why this would be useful, but if it was required,
>> registering a
>> new service at runtime with s6 could be a possible solution. This service
>> would
>> include your custom program or script.

I would appreciate coming back to how we can do this later on, after I
have made a more convincing case for why it's actually needed. My
naive assumption, not knowing any of s6 yet: simply passing on an
argv[] array ought to be possible, and perhaps without too many extra
hassles or hoops to jump through.
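
For example (and this is only my naive sketch of the idea, not
something s6 provides out of the box - the service name "main" and the
paths here are made up), I imagine an /init doing roughly this:

    #!/bin/sh
    # Naive sketch: turn the container's CMD arguments into the run
    # script of a "main" service, then hand pid 1 over to s6-svscan.
    mkdir -p /etc/s6/main
    {
      echo '#!/bin/sh'
      echo "exec $*"   # crude quoting; a real version must preserve argv exactly
    } > /etc/s6/main/run
    chmod +x /etc/s6/main/run
    exec s6-svscan /etc/s6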

>> >> * will pass on all ENV vars to "my_program_or_script" faithfully
>> >>
>> >>
>> I take care of ENV's in my init process.
>>
>> Concretely, as phusion does, I dump all ENV's into
>> /etc/container_environment:
>>
>> https://github.com/glerchundi/container-base/blob/master/rootfs/init#L19-L30
>>
>> And then make use of them in this script which internally uses s6-envdir to
>> publish
>> environment variables in the spawned process.

This is great. Thank you!
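
For anyone else following along, my understanding of the general
mechanism (a rough sketch of the idea, not Gorka's exact scripts) is
roughly this:

    #!/bin/sh
    # At init time, dump the current environment into an envdir, one
    # file per variable (multi-line values are not handled here).
    mkdir -p /etc/container_environment
    env | while IFS='=' read -r name value; do
      printf '%s\n' "$value" > "/etc/container_environment/$name"
    done

    # Later, any run/finish script can re-import those variables with:
    #   exec s6-envdir /etc/container_environment my_program my_args...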

>> https://raw.githubusercontent.com/glerchundi/container-base/master/rootfs/usr/bin/with-contenv
>>
>> So all container init.d, service run and service finish scripts will always
>> have
>> environment variables accessible.
>
> I just wanted to let you know this image is pretty sweet and makes
> mine look like garbage :)
>
>>
>> >> * will run as PID 1 inside the linux namespace
>> >>
>> >>
>> OK. s6 can run as pid 1, so no problem. The init script I mentioned
>> delegates
>> responsibility to s6 so it runs as an init/pid1/supervisor perfectly.

I like your init script a lot, because it is easy to understand and
well commented. And although written in bash, your init script is
short enough to be re-written in the future in any other language, for
example as a pre-compiled 'C' program etc. (should such a thing ever
be needed / worthwhile).

>> >> * where my_program_or_script may spawn BOTH child AND non-child
>> >> (orphaned) processes
>> >>
>> >
>> Laurent, correct me if I'm wrong, but this is an intrinsic feature of s6. As
>> it can run
>> as pid 1 it inherits all orphaned child processes.
>>
>>
>> >
>> >> * when "process_supervisor" (e.g. s6 or whatever) receives a TERM signal
>> >>    * it faithfully passes that signal to "my_program_or_script"
>> >>    * it also passes that signal to any orphaned non-child processes too
>> >>
>> >>
>> yes, it can:
>>
>> https://github.com/glerchundi/container-base/blob/master/rootfs/etc/s6/.s6-svscan/finish#L23-L31

That is awesome ^^. Like the White Whale of Moby Dick. It really felt
good to see this here in your base image. Really appreciate that.
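
For my own notes, the shape of that cleanup stage (a rough sketch of
my own, not the linked script itself) is something like this:

    #!/bin/sh
    # Sketch of the .s6-svscan/finish idea: once all supervised
    # services are down, TERM anything still alive in the container's
    # pid namespace, give it a moment, then KILL any stragglers, so
    # the container always exits cleanly.
    kill -TERM -1 2>/dev/null
    sleep 2
    kill -KILL -1 2>/dev/null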

> I agree, this seems like the right way to properly stop a Docker
> container. Like I said in the other email, I think it's up to the
> image-creator to do all this - not the responsibility of s6 directly.
>
>>
>>
>> > * when my_program_or_script dies, or exits
>> >>    * clean up ALL remaining non-children orphaned processes afterwards
>> >>     * which share the same linux namespace
>> >>        - this is VERY important for docker, as docker does not do this
>> >>
>> >>
>> this should be your concrete service concern. For example, imagine you have
>> a load
>> balancer with two supervised processes, nginx (main) & confd. Probably you
>> want to

Nope. At least for the initial guidelines I gave, the container has
only 1 supervised process (not 2), as per the official docker
guidelines. Having a 2nd or 3rd etc. supervised process means optional
behaviour...

>> have confd always running, no matter why this process could die, but you
>> want it up.
>> Instead, if your main process dies, you want your container to die too. So
>> that should
>> be implemented in your specific use case:
>>
>> For example:
>>
>> https://github.com/glerchundi/container-mariadb/blob/master/rootfs/etc/s6/mysql/finish#L8-L13
>
> This is really similar to calling `s6-svscanctl -t /path/to/whatever`

Sorry, I am not familiar enough with s6 and these things yet to
properly understand that example ^^. But I am guessing that the
general essence of what you were saying is something like this:

"the supervised process should try to shut down gracefully when it
voluntarily exits or receives a TERM signal". Else: "you should be
responsible to write a finish script for that supervised process".

Which is already a given. I am much less concerned about a process in
a docker container failing to handle its own shutdown properly, when
your general .s6 finish script will kill the orphans. That ensures the
docker daemon isn't getting messed up with an inability to stop
containers ('error: no such process'), which was my primary and most
major concern.
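
If I have understood the pattern correctly, a per-service finish
script (my own rough sketch, not the mariadb example linked above)
would look something like this:

    #!/bin/sh
    # Rough sketch of a "main service" finish script. s6-supervise
    # calls finish with the exit code in $1 (256 if the service was
    # killed by a signal). The policy here is a choice: bring the
    # whole supervision tree - and so the container - down whenever
    # the main service stops.
    exec s6-svscanctl -t /etc/s6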

> I'm not sure if s6 will do anything subtly differently, I'm guessing
> it treats receiving a TERM signal in exactly the same way. Just wanted
> to raise that and double-check there's no differences.

^^

>> > So to ENSURE these things:
>> >>
>> >> * to pass full command line arguments and ENV vars to
>> >> "my_program_or_script"
>> >>
>> >> * ensure that when "my_program_or_script" exits, (crashes or normal exit)
>> >>    * no processes are left running in that linux namespace
>> >>
>> >> * ensure that when the service receives a TERM, (docker stop)
>> >>    * no processes are left running in that linux namespace
>> >>
>> >> * ensure that when the service receives a KILL, (docker stop)
>> >>    * no processes are left running in that linux namespace
>> >>
>> >>
>> >> BUT in addition, any extra configurability should be entirely optional:
>> >>
>> >>   * to add supplemental run scripts (for support programs like cron and
>> >> rsyslog etc)
>> >>   * which is what most general process supervisors consider as their
>> >> main mechanism for starting services
>> >>
>> >> SO
>> >>   * if "my_program_or_script" is supplied as an argument, THAT is the
>> >> main process running inside the docker container.
>> >>   * if no "my_program_or_script" argument is supplied, then use
>> >> whatever other conventional ways to determine the main process from
>> >> the directories of run scripts.
>> >>
>> >>
>> >>
>> >> SUMMARY
>> >>
>> >> Current solutions for docker seem to be "too complex" for various
>> >> reasons. Such as:
>> >>
>> >> * no mandatory ssh server (phusion baseimage fails)
>> >> * no python dependency (supervisor fails, and phusion baseimage fails)
>> >> * no mandatory cron or rsyslog (current s6 base image fails, and
>> >> phusion base image fails)
>> >> * no "hundreds of cli tools" - most of which may never be used
>> >> (current s6 base image fails)
>> >> * no awkward intermediate bootstrap script required to write ENV vars
>> >> and cmdline args to a file (runit fails)
>> >>
>> >> So for whichever of those reasons, it feels like the problem for docker
>> >> remains unsatisfied, and not fully addressed by any individual
>> >> solution. What's the best course of action?

Here is an idea for how we might put s6 to the test:

Let's make a minimal busybox base image with s6, plus the other
necessary mechanisms from Gorka's ubuntu image, cutting out all the
apt configuration stuff. I believe this is a good way of demonstrating
s6 inside of docker.

(Busybox does not have bash though.)

Then we might do a couple of useful things with the resulting image:

* Compare file size against the original busybox image (with and without s6).
* Run the unix `time` command (on `docker run <our busybox image>`) to
compare the startup initialisation times with and without s6.

It's just an idea at this point.
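
Something like this is what I have in mind (entirely hypothetical -
the file names and paths are made up, just to show the shape of the
test):

    # Dockerfile for the test image
    FROM busybox
    # statically linked s6 / execline binaries, e.g. from the
    # container-s6-builder releases (ADD auto-extracts a local tarball)
    ADD s6-linux-amd64.tar.gz /
    # rootfs/ holds /init plus the /etc/s6 service directories
    COPY rootfs/ /
    ENTRYPOINT ["/init"]

And then the comparisons:

    docker images                       # compare the image sizes
    time docker run --rm busybox true
    time docker run --rm our-busybox-s6 true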

>> >> How can these problems be solved into one single tool?
>> >>
>> >> I am hoping that the person who wrote this page:
>> >>
>> >> http://skarnet.org/software/s6/why.html
>> >>
>> >> Might be able to comment on some of these above ^^ docker-specific
>> >> considerations? If so please, it would really be appreciated.
>> >>
>> >> Also cc'ing the author of the docker s6 base images. Who perhaps can
>> >> comment about some of the problems he has encountered. Many thanks for
>> >> any comments again (but please reply on the mailing list).
>> >>
>> >>
>> >> Kind Regards
>> >> dreamcat4
>> >>
>> >
>> >
