Re: process supervisor - considerations for docker

2015-03-09 Thread Dreamcat4
On Sun, Mar 1, 2015 at 6:54 PM, John Regan  wrote:
> Quick FYI, busybox tar will extract a tar.gz, you just need to add the z

Ah right. It turns out that the default official busybox image
("latest") does not have the z option yet, because it is too old a
version of busybox. I have kindly asked on their Docker Hub page to
update it.
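
In the meantime the gzip step can be done separately: pipe gunzip output
into plain tar instead of relying on tar's -z flag. A quick sketch with
throwaway file names (all paths here are illustrative):

```shell
# Build a sample .tar.gz, then extract it the busybox-friendly way:
# gunzip to stdout and pipe into plain tar, so no -z flag is needed.
rm -rf /tmp/ztest && mkdir -p /tmp/ztest/src /tmp/ztest/out
echo "hello" > /tmp/ztest/src/file.txt
tar -cf - -C /tmp/ztest/src file.txt | gzip > /tmp/ztest/archive.tar.gz

# Extraction without tar -z: decompress first, untar second.
gunzip -c /tmp/ztest/archive.tar.gz | tar -xf - -C /tmp/ztest/out
cat /tmp/ztest/out/file.txt    # prints: hello
```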

> flag - tar xvzf /path/to/file.tar.gz
>
>
> On March 1, 2015 11:59:33 AM CST, Dreamcat4  wrote:
>>
>> On Sun, Mar 1, 2015 at 5:27 PM, John Regan  wrote:
>>>
>>>  Hi all -
>>>
>>>  Dreamcat4,
>>>  I think I got muddled up a few emails ago and didn't realize what you
>>>  were getting at. An easy-to-use, "extract this and now you're cooking
>>>  with gas" type tarball that works for any distro is an awesome idea!
>>>  My apologies for misunderstanding your idea.
>>>
>>>  The one "con" I foresee (if you can really call it that) is that you can't
>>>  list just a tarball on the Docker Hub. Would it be worth coming up
>>>  with a sort of "flagship image" that makes use of this? I guess we
>>
>>
>> Yeah I see the value in that. Good idea. In the documentation for such
>> example / showcase image, it can include the instruction for general
>> ways (any image).
>>
>>
>> ===
>> I've started playing around with gorka's new tarball now. It seems
>> that ADD isn't decompressing the tarball (when fetched from a remote
>> URL), which is pretty annoying. So ADD is currently 'broken' for what
>> we want it to do.
>>
>> The official Docker people will eventually improve the ADD directive
>> to take optional arguments (--flagX, --flagY, etc.) to let people
>> control the precise behaviour of ADD. There is an open issue on
>> docker; you can track it here:
>>
>> https://github.com/docker/docker/issues/3050
>> ===
>>
>>
>> Until then, these commands will work for busybox image:
>>
>> FROM busybox
>>
>> ADD
>> https://github.com/glerchundi/container-s6-overlay-builder/releases/download/v0.1.0/s6-overlay-0.1.0-linux-amd64.tar.gz
>> /s6-overlay.tar.gz
>> RUN gunzip -c /s6-overlay.tar.gz | tar -xvf - -C / && rm /s6-overlay.tar.gz
>>
>> COPY test.sh /test.sh
>>
>> ENTRYPOINT ["/init"]
>> CMD ["/test.sh"]
>>
>> ^^ Where busybox has a very minimal 'tar' program included. Hence the
>> slightly awkward way of doing things.
>>
>>
>>>  could just start using it in our own images? In the end, it's not a
>>>  big deal - just thought it'd be worth figuring out how to maximize
>>>  exposure.
>>>
>>>  Laurent, Gorka, and Dreamcat4: this is awesome. :)
>>>
>>>  -John
>>>
>>>  On Sun, Mar 01, 2015 at 10:13:24AM +0100, Gorka Lertxundi wrote:

  Hi guys,

  I haven't had much time this week
 due to work and now I am overwhelmed!

  Yesterday, as Dreamcat4 has noticed, I've been working in a version
 that
  gathers all the ideas covered here.

  All,
  * I already converted bash init scripts into execline and make use of
  s6-utils instead of 'linux' ones to facilitate usage in another base
 images.
  * It's important to have just _one_ codebase, this would help focusing
  improvements and problems in one place. I extracted all the elements I
  thought would be useful in a container environment. So, if you all feel
  comfortable we could start discussing bugs, improvements or whatever
 there.
  I called this project/repo container-s6-overlay-builder (
  https://github.com/glerchundi/container-s6-overlay-builder).
  * Now, and after abstracting 's6-overlay', using ubuntu with s6 is a
 matter
  of extracting a tarball. container-base is using
 it already:

 https://github.com/glerchundi/container-base/blob/master/Dockerfile#L73-L75.
  * To sum up, we all agree with this. It is already implemented in the
  overlay:
- Case #1: Common case, start supervision tree up.
  docker run image
- Case #2: Would start a shell without the supervision tree running
  docker run -ti --entrypoint="" base /bin/sh
- Case #3: Would start a shell with the supervision tree up.
  docker run -ti image /bin/sh

  Dreamcat4,
  * Having a tarball with all the needed base elements to get s6 working
 is
  the way to go!

  Laurent,
  * Having a github mirror repo is gonna help spreading the word!
  * Although three init phases are working now I need your help with
 those
  scripts, probably a lot of mistakes were done...
-

 https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage1
-

 https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage2
-

 https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage3
  * I've chosen /etc/s6/.s6-init as the destination folder for the init
  scripts, would you like me to change?

  John,
  About github organization, I think this is no

Re: process supervisor - considerations for docker

2015-03-02 Thread Laurent Bercot

On 02/03/2015 14:24, Dreamcat4 wrote:

For the rough edges: Each item raised as a separate Github issue on your
repo Gorka. Laurent please check them also if you can. They are here


 Working on the overlay atm. We're aware of the issues. Please give
us some time to whip up a new version. Those things aren't instant,
especially since we're not getting paid for it (hint, hint).



* I've chosen /etc/s6/.s6-init as the destination folder for the init
scripts, would you like me to change?


 Yeah, I'll change it in the version I'll send Gorka a pull request
about. Not because of the name, but because you really shouldn't store
anything under the scandir.
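
For context: s6-svscan treats each non-dot subdirectory of the scandir
as a service to supervise, so the scandir should hold service
directories and nothing else. A hedged layout sketch (all paths are
illustrative, not the overlay's actual layout):

```shell
# Illustrative only: directory names are assumptions, not the overlay's
# real paths. The point: the scandir watched by s6-svscan holds service
# directories and nothing else; init scripts live in a sibling directory.
rm -rf /tmp/s6demo
mkdir -p /tmp/s6demo/etc/s6/service/myapp   # scandir entry: one service
mkdir -p /tmp/s6demo/etc/s6/init            # init scripts OUTSIDE the scandir

ls /tmp/s6demo/etc/s6/service               # prints: myapp
```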

--
 Laurent



Re: process supervisor - considerations for docker

2015-03-02 Thread Dreamcat4
On Sun, Mar 1, 2015 at 9:13 AM, Gorka Lertxundi  wrote:
> Hi guys,
>
> I haven't had much time this week due to work and now I am overwhelmed!
>
> Yesterday, as Dreamcat4 has noticed, I've been working in a version that
> gathers all the ideas covered here.
>
> All,
> * I already converted bash init scripts into execline and make use of
> s6-utils instead of 'linux' ones to facilitate usage in another base images.
> * It's important to have just _one_ codebase, this would help focusing
> improvements and problems in one place. I extracted all the elements I
> thought would be useful in a container environment. So, if you all feel
> comfortable we could start discussing bugs, improvements or whatever there.
> I called this project/repo container-s6-overlay-builder (
> https://github.com/glerchundi/container-s6-overlay-builder).
> * Now, and after abstracting 's6-overlay', using ubuntu with s6 is a matter
> of extracting a tarball. container-base is using it already:
> https://github.com/glerchundi/container-base/blob/master/Dockerfile#L73-L75.
> * To sum up, we all agree with this. It is already implemented in the
> overlay:
>   - Case #1: Common case, start supervision tree up.
> docker run image
>   - Case #2: Would start a shell without the supervision tree running
> docker run -ti --entrypoint="" base /bin/sh
>   - Case #3: Would start a shell with the supervision tree up.
> docker run -ti image /bin/sh
>
> Dreamcat4,
> * Having a tarball with all the needed base elements to get s6 working is
> the way to go!
>
> Laurent,
> * Having a github mirror repo is gonna help spreading the word!
> * Although three init phases are working now I need your help with those
> scripts, probably a lot of mistakes were done...

Gorka,
Thank you for doing so much of this. I have been testing it since
yesterday. It is pretty good. Especially:

* The passing of CMD arguments - works well.
* Receiving a TERM - orphan reaping (docker stop) - works well.
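
That docker-stop behaviour comes down to the PID-1 duties /init takes
on: catch TERM, forward it to its children, and reap them with wait. A
minimal hedged sketch of the idea in plain shell (this is not s6's
actual implementation):

```shell
# Hedged sketch of PID-1 duties (not s6's implementation): forward TERM
# down to the child, then reap it with `wait` so no zombie is left.
sleep 30 &
child=$!

trap 'kill -TERM "$child" 2>/dev/null' TERM   # forward the signal down

kill -TERM $$               # simulate `docker stop` signalling this process
wait "$child" 2>/dev/null   # reap: collect the child's exit status
echo "child reaped"
```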


For the rough edges: each item has been raised as a separate GitHub
issue on your repo, Gorka. Laurent, please check them also if you can.
They are here:

Open issues on Gorka's s6-overlay repo:

https://github.com/glerchundi/container-s6-overlay-builder/issues

Subscribe to an issue to track it.
Many thanks.

>   -
> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage1
>   -
> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage2
>   -
> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage3
> * I've chosen /etc/s6/.s6-init as the destination folder for the init
> scripts, would you like me to change?

^^ Laurent

>
> John,
> About github organization, I think this is not the place to discuss about
> it. I really like the idea and I'm open to discuss it but first things
> first, lets focus on finishing this first approach! Still, simple-d and
> micro-d are good names but are tightly coupled to docker *-d, and rocket
> being the relatively the new buzzword (kubernetes is going to support it)
> maybe we need to reconsider them.
>
> rgds,
>
> 2015-02-28 18:57 GMT+01:00 John Regan :
>
>> Sweet. And yeah, as Laurent mentioned in the other email, it's the
>> weekend. Setting dates for this kind of stuff is hard to do, I just
>> work on this in my free time. It's done when it's done.
>>
>> I also agree that s6 is *not* a docker-specific tool, nor should it
>> be. I'm thankful that Laurent's willing to listen to any ideas we
>> might have re: s6 development, but like I said, the goal is *not*
>> "make s6 a docker-specific tool"
>>
>> There's still a few high-level decisions to be made, too, before we
>> really start any work:
>>
>> 1. Goals:
>>   * Are we going to make a series of s6 baseimages (like one
>>   based on Ubuntu, another on CentOS, Alpine, and so on)?
>>   * Should we pick a base distro and focus on creating a series of
>>   platform-oriented images, aimed more at developers (ie, a PHP image, a
>>   NodeJS image, etc)?
>>   * Or should we focus on creating a series of service-oriented
>>   images, ie, an image for running GitLab, an image for running an
>>   XMPP server, etc?
>>
>> Figuring out the overall, high-level focus early will be really
>> helpful in the long run.
>>
>> Options 2 and 3 are somewhat related - you can't really get to 3
>> (create service-oriented images) without getting through 2 (make
>> platform-oriented images) anyway.
>>
>> It's not like a goal would be set in stone, either. If more guys want
>> to get on board and help, we could always sit down and re-evaluate.
>> With more manpower, you could get into doing a whole series of
>> distro-based, service-oriented images (ie, a Ubuntu XMPP server as
>> well as an Alpine XMPP server).
>>
>> But given we're just a few guys, setting a straightforward small focus
>> is probably the way to go. I would vote for either creating a series
>> of baseimages, ori

Re: process supervisor - considerations for docker

2015-03-01 Thread John Regan
Quick FYI, busybox tar will extract a tar.gz, you just need to add the z flag - 
tar xvzf /path/to/file.tar.gz

On March 1, 2015 11:59:33 AM CST, Dreamcat4  wrote:
>On Sun, Mar 1, 2015 at 5:27 PM, John Regan  wrote:
>> Hi all -
>>
>> Dreamcat4,
>> I think I got muddled up a few emails ago and didn't realize what you
>> were getting at. An easy-to-use, "extract this and now you're cooking
>> with gas" type tarball that works for any distro is an awesome idea!
>> My apologies for misunderstanding your idea.
>>
>> The one "con" I foresee (if you can really call it that) is that you can't
>> list just a tarball on the Docker Hub. Would it be worth coming up
>> with a sort of "flagship image" that makes use of this? I guess we
>
>Yeah, I see the value in that. Good idea. The documentation for such an
>example / showcase image can include the instructions for the general
>approach (any image).
>
>
>===
>I've started playing around with gorka's new tarball now. It seems
>that ADD isn't decompressing the tarball (when fetched from a remote
>URL), which is pretty annoying. So ADD is currently 'broken' for what
>we want it to do.
>
>The official Docker people will eventually improve the ADD directive
>to take optional arguments (--flagX, --flagY, etc.) to let people
>control the precise behaviour of ADD. There is an open issue on
>docker; you can track it here:
>
>https://github.com/docker/docker/issues/3050
>===
>
>
>Until then, these commands will work for busybox image:
>
>FROM busybox
>
>ADD
>https://github.com/glerchundi/container-s6-overlay-builder/releases/download/v0.1.0/s6-overlay-0.1.0-linux-amd64.tar.gz
>/s6-overlay.tar.gz
>RUN gunzip -c /s6-overlay.tar.gz | tar -xvf - -C / && rm
>/s6-overlay.tar.gz
>
>COPY test.sh /test.sh
>
>ENTRYPOINT ["/init"]
>CMD ["/test.sh"]
>
>^^ Where busybox has a very minimal 'tar' program included. Hence the
>slightly awkward way of doing things.
>
>
>> could just start using it in our own images? In the end, it's not a
>> big deal - just thought it'd be worth figuring out how to maximize
>> exposure.
>>
>> Laurent, Gorka, and Dreamcat4: this is awesome. :)
>>
>> -John
>>
>> On Sun, Mar 01, 2015 at 10:13:24AM +0100, Gorka Lertxundi wrote:
>>> Hi guys,
>>>
>>> I haven't had much time this week due to work and now I am
>overwhelmed!
>>>
>>> Yesterday, as Dreamcat4 has noticed, I've been working in a version
>that
>>> gathers all the ideas covered here.
>>>
>>> All,
>>> * I already converted bash init scripts into execline and make use
>of
>>> s6-utils instead of 'linux' ones to facilitate usage in another base
>images.
>>> * It's important to have just _one_ codebase, this would help
>focusing
>>> improvements and problems in one place. I extracted all the elements
>I
>>> thought would be useful in a container environment. So, if you all
>feel
>>> comfortable we could start discussing bugs, improvements or whatever
>there.
>>> I called this project/repo container-s6-overlay-builder (
>>> https://github.com/glerchundi/container-s6-overlay-builder).
>>> * Now, and after abstracting 's6-overlay', using ubuntu with s6 is a
>matter
>>> of extracting a tarball. container-base is using it already:
>>>
>https://github.com/glerchundi/container-base/blob/master/Dockerfile#L73-L75.
>>> * To sum up, we all agree with this. It is already implemented in
>the
>>> overlay:
>>>   - Case #1: Common case, start supervision tree up.
>>> docker run image
>>>   - Case #2: Would start a shell without the supervision tree
>running
>>> docker run -ti --entrypoint="" base /bin/sh
>>>   - Case #3: Would start a shell with the supervision tree up.
>>> docker run -ti image /bin/sh
>>>
>>> Dreamcat4,
>>> * Having a tarball with all the needed base elements to get s6
>working is
>>> the way to go!
>>>
>>> Laurent,
>>> * Having a github mirror repo is gonna help spreading the word!
>>> * Although three init phases are working now I need your help with
>those
>>> scripts, probably a lot of mistakes were done...
>>>   -
>>>
>https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage1
>>>   -
>>>
>https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage2
>>>   -
>>>
>https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage3
>>> * I've chosen /etc/s6/.s6-init as the destination folder for the
>init
>>> scripts, would you like me to change?
>>>
>>> John,
>>> About github organization, I think this is not the place to discuss
>about
>>> it. I really like the idea and I'm open to discuss it but first
>things
>>> first, lets focus on finishing this first approach! Still, simple-d
>and
>>> micro-d are good names but are tightly coupled to docker *-d, and
>rocket
>>> being the relatively the new buzzword (kubernetes is going to
>support it)
>>> maybe we need to reconsider them.
>>>
>>> rgds,
>>>
>>> 2015-02-28 18:57 GMT+01:00 John Regan :
>>>
>>> > Sweet. And yeah, as Laurent mentioned in the other email, it

Re: process supervisor - considerations for docker

2015-03-01 Thread Dreamcat4
On Sun, Mar 1, 2015 at 5:27 PM, John Regan  wrote:
> Hi all -
>
> Dreamcat4,
> I think I got muddled up a few emails ago and didn't realize what you
> were getting at. An easy-to-use, "extract this and now you're cooking
> with gas" type tarball that works for any distro is an awesome idea!
> My apologies for misunderstanding your idea.
>
> The one "con" I foresee (if you can really call it that) is that you can't
> list just a tarball on the Docker Hub. Would it be worth coming up
> with a sort of "flagship image" that makes use of this? I guess we

Yeah, I see the value in that. Good idea. The documentation for such an
example / showcase image can include the instructions for the general
approach (any image).


===
I've started playing around with gorka's new tarball now. It seems
that ADD isn't decompressing the tarball (when fetched from a remote
URL), which is pretty annoying. So ADD is currently 'broken' for what
we want it to do.

The official Docker people will eventually improve the ADD directive to
take optional arguments (--flagX, --flagY, etc.) to let people control
the precise behaviour of ADD. There is an open issue on docker; you can
track it here:

https://github.com/docker/docker/issues/3050
===


Until then, these commands will work for busybox image:

FROM busybox

ADD 
https://github.com/glerchundi/container-s6-overlay-builder/releases/download/v0.1.0/s6-overlay-0.1.0-linux-amd64.tar.gz
/s6-overlay.tar.gz
RUN gunzip -c /s6-overlay.tar.gz | tar -xvf - -C / && rm /s6-overlay.tar.gz

COPY test.sh /test.sh

ENTRYPOINT ["/init"]
CMD ["/test.sh"]

^^ Where busybox has a very minimal 'tar' program included. Hence the
slightly awkward way of doing things.


> could just start using it in our own images? In the end, it's not a
> big deal - just thought it'd be worth figuring out how to maximize
> exposure.
>
> Laurent, Gorka, and Dreamcat4: this is awesome. :)
>
> -John
>
> On Sun, Mar 01, 2015 at 10:13:24AM +0100, Gorka Lertxundi wrote:
>> Hi guys,
>>
>> I haven't had much time this week due to work and now I am overwhelmed!
>>
>> Yesterday, as Dreamcat4 has noticed, I've been working in a version that
>> gathers all the ideas covered here.
>>
>> All,
>> * I already converted bash init scripts into execline and make use of
>> s6-utils instead of 'linux' ones to facilitate usage in another base images.
>> * It's important to have just _one_ codebase, this would help focusing
>> improvements and problems in one place. I extracted all the elements I
>> thought would be useful in a container environment. So, if you all feel
>> comfortable we could start discussing bugs, improvements or whatever there.
>> I called this project/repo container-s6-overlay-builder (
>> https://github.com/glerchundi/container-s6-overlay-builder).
>> * Now, and after abstracting 's6-overlay', using ubuntu with s6 is a matter
>> of extracting a tarball. container-base is using it already:
>> https://github.com/glerchundi/container-base/blob/master/Dockerfile#L73-L75.
>> * To sum up, we all agree with this. It is already implemented in the
>> overlay:
>>   - Case #1: Common case, start supervision tree up.
>> docker run image
>>   - Case #2: Would start a shell without the supervision tree running
>> docker run -ti --entrypoint="" base /bin/sh
>>   - Case #3: Would start a shell with the supervision tree up.
>> docker run -ti image /bin/sh
>>
>> Dreamcat4,
>> * Having a tarball with all the needed base elements to get s6 working is
>> the way to go!
>>
>> Laurent,
>> * Having a github mirror repo is gonna help spreading the word!
>> * Although three init phases are working now I need your help with those
>> scripts, probably a lot of mistakes were done...
>>   -
>> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage1
>>   -
>> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage2
>>   -
>> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage3
>> * I've chosen /etc/s6/.s6-init as the destination folder for the init
>> scripts, would you like me to change?
>>
>> John,
>> About github organization, I think this is not the place to discuss about
>> it. I really like the idea and I'm open to discuss it but first things
>> first, lets focus on finishing this first approach! Still, simple-d and
>> micro-d are good names but are tightly coupled to docker *-d, and rocket
>> being the relatively the new buzzword (kubernetes is going to support it)
>> maybe we need to reconsider them.
>>
>> rgds,
>>
>> 2015-02-28 18:57 GMT+01:00 John Regan :
>>
>> > Sweet. And yeah, as Laurent mentioned in the other email, it's the
>> > weekend. Setting dates for this kind of stuff is hard to do, I just
>> > work on this in my free time. It's done when it's done.
>> >
>> > I also agree that s6 is *not* a docker-specific tool, nor should it
>> > be. I'm thankful that Laurent's willing to listen to any ideas we
>> > might have

Re: process supervisor - considerations for docker

2015-03-01 Thread John Regan
Hi all -

Dreamcat4,
I think I got muddled up a few emails ago and didn't realize what you
were getting at. An easy-to-use, "extract this and now you're cooking
with gas" type tarball that works for any distro is an awesome idea!
My apologies for misunderstanding your idea.

The one "con" I foresee (if you can really call it that) is that you can't
list just a tarball on the Docker Hub. Would it be worth coming up
with a sort of "flagship image" that makes use of this? I guess we
could just start using it in our own images? In the end, it's not a
big deal - just thought it'd be worth figuring out how to maximize
exposure.

Laurent, Gorka, and Dreamcat4: this is awesome. :)

-John

On Sun, Mar 01, 2015 at 10:13:24AM +0100, Gorka Lertxundi wrote:
> Hi guys,
> 
> I haven't had much time this week due to work and now I am overwhelmed!
> 
> Yesterday, as Dreamcat4 has noticed, I've been working in a version that
> gathers all the ideas covered here.
> 
> All,
> * I already converted bash init scripts into execline and make use of
> s6-utils instead of 'linux' ones to facilitate usage in another base images.
> * It's important to have just _one_ codebase, this would help focusing
> improvements and problems in one place. I extracted all the elements I
> thought would be useful in a container environment. So, if you all feel
> comfortable we could start discussing bugs, improvements or whatever there.
> I called this project/repo container-s6-overlay-builder (
> https://github.com/glerchundi/container-s6-overlay-builder).
> * Now, and after abstracting 's6-overlay', using ubuntu with s6 is a matter
> of extracting a tarball. container-base is using it already:
> https://github.com/glerchundi/container-base/blob/master/Dockerfile#L73-L75.
> * To sum up, we all agree with this. It is already implemented in the
> overlay:
>   - Case #1: Common case, start supervision tree up.
> docker run image
>   - Case #2: Would start a shell without the supervision tree running
> docker run -ti --entrypoint="" base /bin/sh
>   - Case #3: Would start a shell with the supervision tree up.
> docker run -ti image /bin/sh
> 
> Dreamcat4,
> * Having a tarball with all the needed base elements to get s6 working is
> the way to go!
> 
> Laurent,
> * Having a github mirror repo is gonna help spreading the word!
> * Although three init phases are working now I need your help with those
> scripts, probably a lot of mistakes were done...
>   -
> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage1
>   -
> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage2
>   -
> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage3
> * I've chosen /etc/s6/.s6-init as the destination folder for the init
> scripts, would you like me to change?
> 
> John,
> About github organization, I think this is not the place to discuss about
> it. I really like the idea and I'm open to discuss it but first things
> first, lets focus on finishing this first approach! Still, simple-d and
> micro-d are good names but are tightly coupled to docker *-d, and rocket
> being the relatively the new buzzword (kubernetes is going to support it)
> maybe we need to reconsider them.
> 
> rgds,
> 
> 2015-02-28 18:57 GMT+01:00 John Regan :
> 
> > Sweet. And yeah, as Laurent mentioned in the other email, it's the
> > weekend. Setting dates for this kind of stuff is hard to do, I just
> > work on this in my free time. It's done when it's done.
> >
> > I also agree that s6 is *not* a docker-specific tool, nor should it
> > be. I'm thankful that Laurent's willing to listen to any ideas we
> > might have re: s6 development, but like I said, the goal is *not*
> > "make s6 a docker-specific tool"
> >
> > There's still a few high-level decisions to be made, too, before we
> > really start any work:
> >
> > 1. Goals:
> >   * Are we going to make a series of s6 baseimages (like one
> >   based on Ubuntu, another on CentOS, Alpine, and so on)?
> >   * Should we pick a base distro and focus on creating a series of
> >   platform-oriented images, aimed more at developers (ie, a PHP image, a
> >   NodeJS image, etc)?
> >   * Or should we focus on creating a series of service-oriented
> >   images, ie, an image for running GitLab, an image for running an
> >   XMPP server, etc?
> >
> > Figuring out the overall, high-level focus early will be really
> > helpful in the long run.
> >
> > Options 2 and 3 are somewhat related - you can't really get to 3
> > (create service-oriented images) without getting through 2 (make
> > platform-oriented images) anyway.
> >
> > It's not like a goal would be set in stone, either. If more guys want
> > to get on board and help, we could always sit down and re-evaluate.
> > With more manpower, you could get into doing a whole series of
> > distro-based, service-oriented images (ie, a Ubuntu XMPP server as
> > w

Re: process supervisor - considerations for docker

2015-03-01 Thread Gorka Lertxundi
Hi, I forgot to reference all the involved sources:

- container-skarnet-builder (
https://github.com/glerchundi/container-skarnet-builder): every time
Laurent releases a piece of skarnet software, this builds new statically
linked binaries.
- container-s6-overlay-builder (
https://github.com/glerchundi/container-s6-overlay-builder): this is where
the magic happens; /rootfs contains what we really want to have in our
destination images. Every change produces a new tarball which should
fit in any Linux-based distro.
- container-base (https://github.com/glerchundi/container-base): an example
using ubuntu, a modified version of the one we were talking about.


2015-03-01 10:13 GMT+01:00 Gorka Lertxundi :

> Hi guys,
>
> I haven't had much time this week due to work and now I am overwhelmed!
>
> Yesterday, as Dreamcat4 has noticed, I've been working in a version that
> gathers all the ideas covered here.
>
> All,
> * I already converted bash init scripts into execline and make use of
> s6-utils instead of 'linux' ones to facilitate usage in another base images.
> * It's important to have just _one_ codebase, this would help focusing
> improvements and problems in one place. I extracted all the elements I
> thought would be useful in a container environment. So, if you all feel
> comfortable we could start discussing bugs, improvements or whatever there.
> I called this project/repo container-s6-overlay-builder (
> https://github.com/glerchundi/container-s6-overlay-builder).
> * Now, and after abstracting 's6-overlay', using ubuntu with s6 is a
> matter of extracting a tarball. container-base is using it already:
> https://github.com/glerchundi/container-base/blob/master/Dockerfile#L73-L75
> .
> * To sum up, we all agree with this. It is already implemented in the
> overlay:
>   - Case #1: Common case, start supervision tree up.
> docker run image
>   - Case #2: Would start a shell without the supervision tree running
> docker run -ti --entrypoint="" base /bin/sh
>   - Case #3: Would start a shell with the supervision tree up.
> docker run -ti image /bin/sh
>
> Dreamcat4,
> * Having a tarball with all the needed base elements to get s6 working is
> the way to go!
>
> Laurent,
> * Having a github mirror repo is gonna help spreading the word!
> * Although three init phases are working now I need your help with those
> scripts, probably a lot of mistakes were done...
>   -
> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage1
>   -
> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage2
>   -
> https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage3
> * I've chosen /etc/s6/.s6-init as the destination folder for the init
> scripts, would you like me to change?
>
> John,
> About github organization, I think this is not the place to discuss about
> it. I really like the idea and I'm open to discuss it but first things
> first, lets focus on finishing this first approach! Still, simple-d and
> micro-d are good names but are tightly coupled to docker *-d, and rocket
> being the relatively the new buzzword (kubernetes is going to support it)
> maybe we need to reconsider them.
>
> rgds,
>
> 2015-02-28 18:57 GMT+01:00 John Regan :
>
>> Sweet. And yeah, as Laurent mentioned in the other email, it's the
>> weekend. Setting dates for this kind of stuff is hard to do, I just
>> work on this in my free time. It's done when it's done.
>>
>> I also agree that s6 is *not* a docker-specific tool, nor should it
>> be. I'm thankful that Laurent's willing to listen to any ideas we
>> might have re: s6 development, but like I said, the goal is *not*
>> "make s6 a docker-specific tool"
>>
>> There's still a few high-level decisions to be made, too, before we
>> really start any work:
>>
>> 1. Goals:
>>   * Are we going to make a series of s6 baseimages (like one
>>   based on Ubuntu, another on CentOS, Alpine, and so on)?
>>   * Should we pick a base distro and focus on creating a series of
>>   platform-oriented images, aimed more at developers (ie, a PHP image, a
>>   NodeJS image, etc)?
>>   * Or should we focus on creating a series of service-oriented
>>   images, ie, an image for running GitLab, an image for running an
>>   XMPP server, etc?
>>
>> Figuring out the overall, high-level focus early will be really
>> helpful in the long run.
>>
>> Options 2 and 3 are somewhat related - you can't really get to 3
>> (create service-oriented images) without getting through 2 (make
>> platform-oriented images) anyway.
>>
>> It's not like a goal would be set in stone, either. If more guys want
>> to get on board and help, we could always sit down and re-evaluate.
>> With more manpower, you could get into doing a whole series of
>> distro-based, service-oriented images (ie, a Ubuntu XMPP server as
>> well as an Alpine XMPP server).
>>
>> But given we're just a few guys, setting

Re: process supervisor - considerations for docker

2015-03-01 Thread Gorka Lertxundi
Hi guys,

I haven't had much time this week due to work and now I am overwhelmed!

Yesterday, as Dreamcat4 noticed, I've been working on a version that
gathers all the ideas covered here.

All,
* I already converted the bash init scripts into execline and made use of
s6-utils instead of 'linux' ones, to facilitate usage in other base images.
* It's important to have just _one_ codebase; this would help focus
improvements and problems in one place. I extracted all the elements I
thought would be useful in a container environment. So, if you all feel
comfortable, we could start discussing bugs, improvements or whatever there.
I called this project/repo container-s6-overlay-builder (
https://github.com/glerchundi/container-s6-overlay-builder).
* Now, and after abstracting 's6-overlay', using ubuntu with s6 is a matter
of extracting a tarball. container-base is using it already:
https://github.com/glerchundi/container-base/blob/master/Dockerfile#L73-L75.
* To sum up, we all agree with this. It is already implemented in the
overlay:
  - Case #1: Common case, start supervision tree up.
docker run image
  - Case #2: Would start a shell without the supervision tree running
docker run -ti --entrypoint="" base /bin/sh
  - Case #3: Would start a shell with the supervision tree up.
docker run -ti image /bin/sh

Dreamcat4,
* Having a tarball with all the needed base elements to get s6 working is
the way to go!

Laurent,
* Having a github mirror repo is gonna help spread the word!
* Although the three init phases are working now, I need your help with those
scripts; probably a lot of mistakes were made...
  -
https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage1
  -
https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage2
  -
https://github.com/glerchundi/container-s6-overlay-builder/tree/master/rootfs/etc/s6/.s6-init/init-stage3
* I've chosen /etc/s6/.s6-init as the destination folder for the init
scripts; would you like me to change it?

John,
About the github organization: I think this is not the place to discuss
it. I really like the idea and I'm open to discussing it, but first things
first, let's focus on finishing this first approach! Still, simple-d and
micro-d are good names, but the *-d is tightly coupled to docker, and with
rocket being the relatively new buzzword (kubernetes is going to support it)
maybe we need to reconsider them.

rgds,

2015-02-28 18:57 GMT+01:00 John Regan :

> Sweet. And yeah, as Laurent mentioned in the other email, it's the
> weekend. Setting dates for this kind of stuff is hard to do, I just
> work on this in my free time. It's done when it's done.
>
> I also agree that s6 is *not* a docker-specific tool, nor should it
> be. I'm thankful that Laurent's willing to listen to any ideas we
> might have re: s6 development, but like I said, the goal is *not*
> "make s6 a docker-specific tool"
>
> There's still a few high-level decisions to be made, too, before we
> really start any work:
>
> 1. Goals:
>   * Are we going to make a series of s6 baseimages (like one
>   based on Ubuntu, another on CentOS, Alpine, and so on)?
>   * Should we pick a base distro and focus on creating a series of
>   platform-oriented images, aimed more at developers (ie, a PHP image, a
>   NodeJS image, etc)?
>   * Or should we focus on creating a series of service-oriented
>   images, ie, an image for running GitLab, an image for running an
>   XMPP server, etc?
>
> Figuring out the overall, high-level focus early will be really
> helpful in the long run.
>
> Options 2 and 3 are somewhat related - you can't really get to 3
> (create service-oriented images) without getting through 2 (make
> platform-oriented images) anyway.
>
> It's not like a goal would be set in stone, either. If more guys want
> to get on board and help, we could always sit down and re-evaluate.
> With more manpower, you could get into doing a whole series of
> distro-based, service-oriented images (ie, a Ubuntu XMPP server as
> well as an Alpine XMPP server).
>
> But given we're just a few guys, setting a straightforward small focus
> is probably the way to go. I would vote for either creating a series
> of baseimages, oriented towards other image-makers, or pick Alpine as
> a base, and focus on making small and efficient service-oriented
> images (ie, a 10MB XMPP service, something like that) aimed at
> sysadmins/users.
>
> But I'm open to any of those options, or others, so long as it's
> within the realm of possibility for just a few people working in their
> free time.
>
> 2. Should we form a GitHub org, and what should it be called?
>
> I vote yes, I'll go ahead and make it if you want.
>
> For the org name, I was thinking about starting a series of Alpine
> images aimed at users (like I said, 10MB chat service) under the org
> name "micro-d" (as in, Micro Docker containers), already. If that's the
> focus we go with, then that's probably a pretty OK name.

Re: process supervisor - considerations for docker

2015-02-28 Thread Dreamcat4
Right,
Glad that others (maybe Laurent and Gorka), guys who all have more
experience than me with the 's6' process supervisor, want to do this. I
was just offering to do it in case, so as not to impose extra work on
anybody who is already busy.

Gorka is now working on some changes today. I don't quite know
what those are. It seems like we should check his repos again after he
finishes whatever stuff he's currently doing ATM. Fabulous.


In respect of recent replies:

Laurent,
Of course I don't mind any suitable changes upstreamed (those that can be
upstreamed). And things which are not docker-specific should not be
presented that way (preferably all of them).

Providing that the functionality is added and it can be used with
docker too amongst other tools. Docker is not the only tool which uses
linux namespaces for containerization. And that is the underlying
kernel feature (available in all modern linux kernels). I missed saying
"try to upstream everything". Sorry - I agree it is a good idea to do
that.

The major requirement I am interested to see fulfilled (one way
or another) is a new single-tarball build product / build target that
combines the 2 or more separate s6 packages into a single tarball,
for the docker ADD mechanism to work.

That unified tarball does not need to be named 's6-docker' or anything
that is docker-specific. And it does not need to be officially sanctioned
/ provided by upstream, either…

Upstream would be better of course but we don't wish to impose our
ways upon you!

We could also debate what exactly should or should not be in that
single tarball. Or just not bother arguing specifics, since there is
always the solution of making a 'light' and a 'full' variant: all the
optional tools included in the 'full' version, and just the minimum in
the 'light', where the 'light' appears to be 2 of your current
official tarballs.
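For concreteness, the 'light' variant could be produced by a small repacking script. This is a minimal sketch under stated assumptions: the input tarball names (`execline-*.tar.gz`, `s6-*.tar.gz`) and the output name `s6-docker-light.tar.gz` are illustrative, not upstream-sanctioned names.

```shell
#!/bin/sh
# Sketch: repack the separate execline and s6 release tarballs into one
# unified "light" tarball suitable for a single Dockerfile ADD.
# Input/output filenames are illustrative, not official names.
set -e
mkdir -p staging out
for pkg in execline-*.tar.gz s6-*.tar.gz; do
  # Skip patterns that matched nothing
  if [ -f "$pkg" ]; then
    tar xzf "$pkg" -C staging
  fi
done
# Everything under staging/ becomes the root of the combined tarball.
tar czf out/s6-docker-light.tar.gz -C staging .
echo "built out/s6-docker-light.tar.gz"
```

A 'full' variant would simply extend the input list with the optional tool tarballs.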


John,
Sorry, I don't see the point (anymore) of writing s6-specific base
images when there is no longer any pressing technical reason to do so.
The drawback of choosing that approach is that by publishing some s6
base images, we will always end up neglecting some and leaving other
ones unsupported. Whereas ADD (from a single tarball source) supports
all base images in one swoop, without any effort whatsoever on our
part. And that is a big plus.

Of course, once the universal ADD mechanism is working, it becomes a
completely trivial matter to roll new base images that include s6, for
whichever distros you wish, with just a single extra line in the
Dockerfile. But then it hardly seems worth telling people to use them
rather than ADD, which users can just do directly themselves?

I just get the general impression that most Docker citizens strongly
prefer to use official base images whenever possible. Because it is
what they know and what they trust. Even if a user writes their own
base, they will almost always be FROM ubuntu or FROM debian (or
'alpine' now, whichever one they like the most).

Meaning that: it is harder to get the general docker population to
trust and switch over to some new 's6-*' base image coming from
somewhere which is effectively deemed as being a 3rd party repo.

If we instead tell them to use the ADD method… they
will trust it more, because they all still get to keep their preferred
"FROM ubuntu" line at the top as always…

At least, that's my own thought process / reasoning behind this
opinion, and where I am coming from with this whole 'maybe we should
not do base images anymore' idea.




On Sat, Feb 28, 2015 at 5:57 PM, John Regan  wrote:
> Sweet. And yeah, as Laurent mentioned in the other email, it's the
> weekend. Setting dates for this kind of stuff is hard to do, I just
> work on this in my free time. It's done when it's done.
>
> I also agree that s6 is *not* a docker-specific tool, nor should it
> be. I'm thankful that Laurent's willing to listen to any ideas we
> might have re: s6 development, but like I said, the goal is *not*
> "make s6 a docker-specific tool"
>
> There's still a few high-level decisions to be made, too, before we
> really start any work:
>
> 1. Goals:
>   * Are we going to make a series of s6 baseimages (like one
>   based on Ubuntu, another on CentOS, Alpine, and so on)?
>   * Should we pick a base distro and focus on creating a series of
>   platform-oriented images, aimed more at developers (ie, a PHP image, a
>   NodeJS image, etc)?
>   * Or should we focus on creating a series of service-oriented
>   images, ie, an image for running GitLab, an image for running an
>   XMPP server, etc?
>
> Figuring out the overall, high-level focus early will be really
> helpful in the long run.
>
> Options 2 and 3 are somewhat related - you can't really get to 3
> (create service-oriented images) without getting through 2 (make
> platform-oriented images) anyway.
>
> It's not like a goal would be set in stone, either. If more guys want
> to get on board and help, we could always sit down and re-evaluate.

Re: process supervisor - considerations for docker

2015-02-28 Thread John Regan
Sweet. And yeah, as Laurent mentioned in the other email, it's the
weekend. Setting dates for this kind of stuff is hard to do, I just
work on this in my free time. It's done when it's done.

I also agree that s6 is *not* a docker-specific tool, nor should it
be. I'm thankful that Laurent's willing to listen to any ideas we
might have re: s6 development, but like I said, the goal is *not*
"make s6 a docker-specific tool"

There's still a few high-level decisions to be made, too, before we
really start any work:

1. Goals:
  * Are we going to make a series of s6 baseimages (like one
  based on Ubuntu, another on CentOS, Alpine, and so on)?
  * Should we pick a base distro and focus on creating a series of 
  platform-oriented images, aimed more at developers (ie, a PHP image, a
  NodeJS image, etc)?
  * Or should we focus on creating a series of service-oriented
  images, ie, an image for running GitLab, an image for running an
  XMPP server, etc?

Figuring out the overall, high-level focus early will be really
helpful in the long run.

Options 2 and 3 are somewhat related - you can't really get to 3
(create service-oriented images) without getting through 2 (make
platform-oriented images) anyway.

It's not like a goal would be set in stone, either. If more guys want
to get on board and help, we could always sit down and re-evaluate.
With more manpower, you could get into doing a whole series of
distro-based, service-oriented images (ie, a Ubuntu XMPP server as
well as an Alpine XMPP server).

But given we're just a few guys, setting a straightforward small focus
is probably the way to go. I would vote for either creating a series
of baseimages, oriented towards other image-makers, or pick Alpine as
a base, and focus on making small and efficient service-oriented
images (ie, a 10MB XMPP service, something like that) aimed at
sysadmins/users.

But I'm open to any of those options, or others, so long as it's
within the realm of possibility for just a few people working in their
free time.

2. Should we form a GitHub org, and what should it be called?

I vote yes, I'll go ahead and make it if you want.

For the org name, I was thinking about starting a series of Alpine
images aimed at users (like I said, 10MB chat service) under the org
name "micro-d" (as in, Micro Docker containers), already. If that's the
focus we go with, then that's probably a pretty OK name.

If we go with doing a series of simple, easy-to-use baseimages aimed
at other imagemakers, then probably something like "simple-d" (Simple
Docker containers).

Again, open to suggestions, those are just my initial ideas. The one
thing I would advise against is using s6 in the name, since that
would imply it's a project under the skarnet.org umbrella, which I
don't think this is. It's outside that scope. We can promote how much
we love s6 all we want in the docs, and blog posts, and so on, but
we *shouldn't* do things like call our init "s6-init", name the image
"s6-alpine", stuff like that.

Once we figure out the high-level goals, we can set out a few more
structural-type things.

-John


Re: process supervisor - considerations for docker

2015-02-28 Thread Laurent Bercot

On 28/02/2015 11:58, Laurent Bercot wrote:

  (In case you can't tell: I'm not a github fan.


 Meh. At this time, publicity is a good thing for my software,
even if 1. it's still a bit early, and 2. I have to use tools
I'm not entirely comfortable with. So I set up mirrors of
everything on github.
 https://github.com/s6 in particular.
 Pull to your heart's content and spread the word. Have fun.

--
 Laurent



Re: process supervisor - considerations for docker

2015-02-28 Thread Laurent Bercot

The idea is that with a docker-targeted s6 tarball, it should
universally work on top of any / all base image.


 Just to make things perfectly clear: I am not going to make a
special version of s6 just for Docker. I like Docker, I think it's
a useful tool, and I'm willing to support it, as in make adjustments
to s6 if necessary to ease integration with Docker without impacting
other uses; but I draw the line at adding specific code for it,
because 1. it's yet another slippery slope I'm not treading on, and
2. it would defeat the purpose of Docker. AIUI, Docker images can,
and should, be standard images - you should be able to run a Linux
kernel with init=YOURENTRYPOINT and it should just run, give or take
a few details such as /dev and more generally filesystems. (I don't
know what assumption a docker entrypoint can make: are /proc and
/sys mounted ? is /dev mounted ? is / read-only ? can I create
filesystems ?)
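Laurent's questions about what an entrypoint may assume can also be answered empirically. This is a purely illustrative probe sketch one could run as a container's command; note that a failed write to / can mean either a read-only root or just insufficient privilege, so that last check is only a rough signal.

```shell
#!/bin/sh
# Probe what the container runtime actually provides to the entrypoint.
probe() { [ -e "$2" ] && echo "$1: yes" || echo "$1: no"; }
probe "/proc mounted"  /proc/mounts
probe "/sys mounted"   /sys/class
probe "/dev populated" /dev/null
# Rough read-only check: only a signal, since a permission error looks
# the same as a read-only filesystem here.
if touch /.rwprobe 2>/dev/null; then
  rm -f /.rwprobe
  echo "/ writable: yes"
else
  echo "/ writable: no"
fi
```

Running it as the image command (e.g. `docker run --rm someimage /probe.sh`) would show exactly which of these assumptions hold under a given runtime.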



If what Laurent says is true that the s6 packages are entirely
self-contained.


 It depends on what you mean by self-contained. If you compile and
statically link against a libc such as musl, all you need is the
execline and s6 binaries. Else, you need a libc.so. And for the
build, you also need skalibs.
 It's unclear to me what format a distributed docker image takes.
Is it source, with instructions how to build it ? Is it a binary
image ? Or is it possible to distribute both ?
 Binary images are the most practical, *but* they make assumptions
on the target architecture. Having to provide an image per target
architecture sounds contrary to the purpose of Docker.
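One common mitigation for the per-architecture problem is to map `uname -m` to the right prebuilt tarball at download time. A sketch, using the `s6-overlay-0.1.0-linux-amd64.tar.gz` naming that appears in this thread; the non-amd64 names are guesses, not real release files.

```shell
#!/bin/sh
# Pick a prebuilt tarball name for the current architecture.
# Only the linux-amd64 name appears in the thread; the others are assumed.
tarball_for_arch() {
  case "$1" in
    x86_64)  echo "s6-overlay-0.1.0-linux-amd64.tar.gz" ;;
    aarch64) echo "s6-overlay-0.1.0-linux-arm64.tar.gz" ;;
    armv7l)  echo "s6-overlay-0.1.0-linux-armhf.tar.gz" ;;
    *)       return 1 ;;  # no prebuilt binary for this target
  esac
}
tarball_for_arch "$(uname -m)" || echo "no prebuilt tarball for $(uname -m)" >&2
```

This keeps one documented download line per Dockerfile while still shipping per-arch binaries underneath.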



* re-write /init from bash ---> execline
* add the argv[] support for CMD/ENTRYPOINT arguments


 I can do that with Gorka, with his permission.



   * /init (probably renamed to be 's6-init' or something like that)


 Why ? An init process is an init process is an init process, no
matter the underlying tools.
 Also, please don't use the s6-init name. I'm going to need it for
a future package and I'd like to avoid possible confusions.



* Can probably start work on the first bullet point (convert "/init"
to execline) during this weekend. Unless anyone else would rather jump
in before me and do it. But it seems not.


 Hold your horses, Ben Hur. People usually take a break during the
weekend; give at least John and Gorka some time to answer. It's
John's initial idea and Gorka's image, let them have the final say,
and work on it themselves or delegate tasks as they see fit.



* If Laurent wants to push his core s6 releases (including the docker
specific one) onto Github. Then it would be great for him to make a
"github/s6" org with Gorka, as new home for 's6', or else a git mirror
of the official skarnet.org.


 I'm not moving s6's home. I can set up a mirror for s6 on github, but
I fail to see how this would be useful - it's not as if the skarnet.org
server was overloaded. (The day it's overloaded with s6 pull requests
will be a happy day for me.)
 If it's not about pulling, then what is it about ?

 (In case you can't tell: I'm not a github fan. Technically, it's a
good piece of software, git, at the core, with 2 or 3 layers of all your
standard bloated crap piled onto it - and it shows whenever you're
trying to interoperate with it. Politically, the account creation
procedure makes it very clear that github is about money first. The
only thing I like about github is the presentation, which is directly
inspired from Google. So, yeah, not much to save.)

--
 Laurent



Re: process supervisor - considerations for docker

2015-02-27 Thread Dreamcat4
Okay! All great then.

Now I want us to iterate upon the suggested plan / 'road-map'. I have
hinted that we can use ADD. That is another kind of improvement. I
think it will save us some considerable overhead if we move our code
around a bit, and actually not need to make any special s6 base
images for the remaining distros.

The idea is that with a docker-targeted s6 tarball, it should
universally work on top of any / all base images, if what Laurent says
is true that the s6 packages are entirely self-contained. So it should
'just work' if we put the right things in the tarball.

It can be centrally maintained in one place. No need for us to
duplicate the same thing in multiple distros, and in multiple versions
of each distro (e.g. sid, wheezy). Yet we can still support every one
of those.

I am also dropping (removing) those previous points which John didn't like.
So the revised roadmap would look something like this:

* re-write /init from bash ---> execline
* add the argv[] support for CMD/ENTRYPOINT arguments
* move /init from ubuntu base image --> s6-builder
* create new 's6docker' build product in s6-builder, which contains:
  * execline.tar.gz
  * s6.tar.gz
  * /init (probably renamed to be 's6-init' or something like that)
* replace 'moved' bits in Gorka's pre-existing ubuntu base image
* document how to use it
* blog about it


How we use it:

When s6-builder runs its builds, it can spit out 1 new extra build
target to this location:

https://github.com/glerchundi/container-s6-builder/releases

1 single unified tarball called "s6-docker-2.1.1.2-linux-amd64.tar.gz"
(or whatever we should call it).

That 1 tarball contains all the necessary s6 files required to be used in docker.
Then we can just document it by telling people to put this in their Dockerfile:

FROM debian:wheezy
ADD https://github.com/<...>/s6docker.tar.gz /
ENTRYPOINT ["/s6-init", … ]

All Done!

We can just as easily specify whatever other official image in the FROM line. e.g.

FROM alpine
ADD https://github.com/<...>/s6docker.tar.gz /
ENTRYPOINT ["/s6-init", … ]

Or busybox, arch linux, centos, etc. It should not matter 1 cent or
change the procedure…
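One caveat noted elsewhere in the thread: ADD from a remote URL does not decompress the tarball. Until that is fixed upstream, the same result needs an explicit fetch-and-extract in a RUN step. A hedged sketch; the Dockerfile line in the comment uses a placeholder URL, and the demo below exercises only the extract half on a locally built dummy tarball.

```shell
#!/bin/sh
# What a Dockerfile RUN step would do while remote ADD does not auto-extract:
#   RUN wget -q -O /tmp/s6.tar.gz "$URL" && tar xzf /tmp/s6.tar.gz -C / \
#       && rm /tmp/s6.tar.gz
# The helper below is the extract half, demonstrated without any network.
set -e
unpack_overlay() {
  tarball=$1 rootfs=$2
  tar xzf "$tarball" -C "$rootfs"
}
# Self-contained demo: build a dummy tarball, then unpack it into a fake rootfs.
work=$(mktemp -d)
echo hello > "$work/payload"
tar czf "$work/overlay.tar.gz" -C "$work" payload
mkdir "$work/rootfs"
unpack_overlay "$work/overlay.tar.gz" "$work/rootfs"
```

Once ADD learns to extract remote tarballs, the RUN step collapses back to the single ADD line shown above.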

This way helps us out a lot by reducing the labor. Otherwise I guess we
are giving ourselves a rather daunting task of maintaining multiple
different base images, each of which has multiple versions, etc. And
then when upstream changes them, we have to as well…

Please consider the revised plan.
Many thanks

* Very happy to have a crack at all of the needed development work on
that list, and submit it to Gorka's repos. Although I'm not so
technically adept / familiar with this stuff yet, therefore it may
take me a while to do all that, learning as I go, etc.

* Can't blog about it (no blog myself). So someone else can do that
later on (e.g. John). After everything is done.

* Can probably start work on the first bullet point (convert "/init"
to execline) during this weekend. Unless anyone else would rather jump
in before me and do it. But it seems not.



Possible Github Organization:

So this is another point of discussion.

A new github organisation would not be so essential anymore. At least
for "being a place to house the various s6-base images". Since there
would not be any.

Yet a github organisation can still be used in 2 other ways:

* To have an official-sounding name (and downloads URL) that never need change.

* If Laurent wants to push his core s6 releases (including the docker
specific one) onto Github. Then it would be great for him to make a
"github/s6" org with Gorka, as new home for 's6', or else a git mirror
of the official skarnet.org.

Given the reduced complexity, I don't really care now if we actually
have an 's6' organisation at all. But I am happy to leave such a choice
entirely up to Laurent and Gorka. I mean, if they want to do it, then
they are the official people to decide upon that. I have no personal
opinion myself (either way), as I only contribute a relatively minor
part of the work to help improve it.


Kind Regards
dreamcat4


Re: process supervisor - considerations for docker

2015-02-27 Thread John Regan
> John, I agree with all of your points (and reservations) here in last message.
> 
> And in retrospect, sorry for being a bit argumentative over that
> entrypoint aspect of our discussion. You know, more people tend to
> agree with your way of doing things than mine. But both approaches
> have their own sets of pros + cons. I hope you can see that choosing
> either one is actually a trade, and it just depends on which specific
> aspects someone deems to be more important than the other ones.
> 
> Another thing I didn't say during that discussion, but was irking me
> was that the point you were making was not actually a consideration
> that was specific to s6 in Docker... Because you can effectively make
> exactly that same argument in general and replace Gorka's '/init' with
> just any interpreter such as 'python' 'ruby' or whatever… Meaning it
> may be important to you and a valid point. But isn't so much of a
> consideration that is specific to s6 alone. So I didn't see why it
> should really matter as much in our discussion as these other things.
> However I can totally understand the point you were making there (it
> is a valid one), and where you were coming from. Again, apologies
> about that.

Hey man, no need to apologize about anything! We're a couple of people
with strong opinions, that's all :)

Like I said in the other email, these opinions I have are based on
real-world, "if this service is broken lives are at risk" (no joke)
experience. I just want to try to instill the lessons I've learned about what
makes a good image good.

-John


Re: process supervisor - considerations for docker

2015-02-27 Thread John Regan
> Well, you are then neglecting to see 2 other benefits which I neglected
> to mention anywhere previously:
> 
> 1) Is that anyone can just look in my Dockerfile and see the default
> tvheadend arguments as-is. Without needing to go around chasing them
> being embedded in some script (when only Dockerfile is published to
> DockerHub, not such start scripts).

That's true, that they can just look in the Dockerfile. But I'll argue
the easy way to fix that is by writing a good README file.

When you get a chance take a look at the sameersbn/gitlab readme file:

https://registry.hub.docker.com/u/sameersbn/gitlab/

The guy documents every single environment variable, what they do, a
"quick start" guide, and so on. This is what I strive to do now when I
write a README.

When an image has a solid README and great docs, I don't even feel the
need to look at the Dockerfile. If a regular user feels like they need
to go through my Dockerfile to figure out how to use my image, then my
image sucks.

> 2) Is that when they run my image, they can do a 'docker ps' and see
> the FULL tvheadend arguments, including the default ones in the entry
> point component. Which is then behaving much more like regular 'ps'
> command.
> 

This is true for my method, also. When you run a 'docker ps',
'docker top', or 'docker inspect' you'll see the fully-resolved
command line.
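To illustrate the resolution John is describing (argument values borrowed from the tvheadend example elsewhere in this thread), the effective command is simply ENTRYPOINT followed by CMD, or by whatever arguments were passed to `docker run`:

```shell
#!/bin/sh
# Docker's effective command = ENTRYPOINT + CMD; `docker run image ARGS...`
# replaces CMD with ARGS, and `docker ps`/`docker top`/`docker inspect`
# all report this fully-resolved line.
entrypoint='/init /tvheadend -u hts -g video -c /config'
user_args='--satip_xml http://192.168.1.22:8080/desc.xml --bindaddr 192.168.2.218'
resolved="$entrypoint $user_args"
echo "$resolved"
```

Either split (defaults in ENTRYPOINT, or defaults in a run script) yields the same resolved line; the debate is only about where the defaults live.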

> I actually like those benefits, and as the image writer have the
> responsibility to ensure such things work properly, and to my own
> liking.
> 
> > arguments in the first place. Your ENTRYPOINT just remains "/init",
> > which will launch s6-svscan, which will launch TVHeadEnd. Your CMD
> > array remains null.
> >
> > Now, conceptually, you can think of TVHeadEnd as requiring "system
> > arguments" and "user arguments" -- "system arguments" being those
> > defaults you mentioned.
> 
> Yeah - except I don't actually want to do that, and Docker Inc are
> officially giving me the freedom to use both entrypoint + cmd in any
> ways as I personally see fit.
> 
> >
> > Now just make the TVHeadEnd `run` script something like:
> 
> vv that start script won't be published in 'Dockerfile' tab on
> Dockerhub. Therefore, the more functionality I am pushing from the
> Dockerfile into that script, then the harder my Dockerfile will be for
> other people to understand.
> 
> It's a completely valid way of doing things. I just don't think you
> should try to enforce it as a hard rule in other people's images.
> 

None of this is really a "hard rule" I guess.

But I *have* been using Docker since version 0.5, and use it in
production systems with a team of people. Everything I've learned
about Docker, I've learned "the hard way."

I've made images in the past where the users needed to change the
ENTRYPOINT for different tasks, and it's really, really confusing. You
really save yourself (and everybody) a lot of headache and grief by
not encouraging the user to mess around with the ENTRYPOINT.

-John


Re: process supervisor - considerations for docker

2015-02-27 Thread Dreamcat4
On Fri, Feb 27, 2015 at 5:29 PM, John Regan  wrote:
> Quick preface:
>
> I know I keep crapping on some parts of your ideas here, but I want
> to reiterate that the core idea is absolutely great - easy-to-use images
> based around s6. Letting the user quickly prototype a service by just
> running `docker run imagename command some arguments` is actually a
> *great* idea, since that lets people start making images without
> needing to know a lot of details about the underlying process
> supervisor. It's flexible, I know a lot of people like to initially
> try things "on-the-fly" before sitting down to write a Dockerfile, and
> this allows for that.
>
> You're just getting caught up in some of the finer
> implementation details, like the usage of ENTRYPOINT vs CMD vs
> environment variables and so on, and how to handle shell
> arguments - these parts are solved problems.
>
> So, onwards:
>
>> * Convert Gorka's current "/init" script from bash --> execline
>> * Testing on Gorka's ubuntu base image
>>
>> * Add support for spawning the argv[], for cmd and entry point.
>> * Testing on Gorka's ubuntu base image
>>
>> * Create new base image for Alpine (thanks for mentioning it earlier John)
>> * Possibly create other base image(s), e.g. debian, arch, busybox and so on.
>> * Test them, refine them.
>
> Everything up to here sounds fine and dandy - make some images and get
> them out there, I'm all about that.
>
>>
>> * Document the new set of s6 base images
>> * Blog about them
>
> Also awesome.
>
>> * Inform the phusion people we have created a successor
>
> Hmm, I don't think that'll go over well, nor do I think it's really an
> appropriate thing to do. Some people like the phusion images, for one.
> The whole reason they exist is because a lot of guys keep trying to
> use Docker as VM, and get upset when they can't SSH into the
> container.
>
> I don't see these as a "successor", rather an "alternative."
>
> Besides, none of us would even be concerned about this if Phusion
> hadn't made something in the first place!
>
>> * Inform 1(+) member(s) of Docker Inc to make them aware of new s6-images.
>
> Docker Inc isn't *that* concerned about what types of images are being
> made. They'd probably get that email and say "..alright cool?"
>
> The right approach (in my opinion) is to just build a really cool
> product, then let the product speak for itself.
>
>> Of course the proper channel ATM is to open issues and PR's on Gorka's
>> current s6-ubuntu base image. So we can open those fairly soon and
>> move discussion to there.
>
> It might be worth starting up a github organization or something and
> creating new images under that namespace. I can't speak for Gorka, but
> the proposed image is sufficiently different enough from my existing
> one that I'd have to make a new image anyways. My existing ones don't
> allow for that `docker run imagename command arguments` use-case, for
> me this constitutes a breaking change. I'd rather just deprecate my
> existing ones, and either join in on Gorka's project, or start a new
> one, either way.
>
>>
>>
>> Another thing we don't need to worry about right now (but may become a
>> consideration later on):
>>
>> * Once there are 2+ similar s6 images.
>>   * May be worth to consult Docker Inc employees about official / base
>> image builds on the hub.
>>   * May be worth to consult base image writers (of the base images we
>> are using) e.g. 'ubuntu' etc.
>
> I wouldn't ever really worry about that. The base images don't have
> any kind of process supervisor, and they shouldn't. They're meant to
> be minimal installs for building stuff off of.
>
>>
>>  * It is possible to convert to a github organisation to house multiple images
>>  * Can be helpful to grow support and for others to come on
>> board later on who add new distros.
>>   * May be worth to ensure uniform behaviour of common s6 components
>> across different disto's s6 base images.
>>  * e.g. Central place of structured and consistent documentation
>> that covers all similar s6 base images together.
>
> Yep, see my comments above. I'm all about that.
>
>>
>> Again I'm not mandating that we need to do any of those things at all.
>> As it should not be anything of my decision whatsoever. But good idea
>> to keep those possibilities in mind when doing near-term work. "Try to
>> keep it general" basically.
>>
>> For example:
>>
>> I see a lot of good ideas in Gorka's base image about fixing APT. It
>> may be that some of those great ideas can be fed back upstream to the
>> official ubuntu base image itself. Then (if they are receptive
>> upstream) they can later be removed from Gorka's s6-specific ubuntu base
>> image (being a child of that). Which generally improves the level of
>> standardization, granularity (when people decide on s6 or not),
>> etc.
>
> They probably won't be that receptive - Gorka isn't changing the base
> Ubuntu image *drastically*, but it still deviates from how a normal,
> plain-jane

Re: process supervisor - considerations for docker

2015-02-27 Thread Dreamcat4
On Fri, Feb 27, 2015 at 5:08 PM, John Regan  wrote:
>> Let me explain my point by an example:
>>
>> I am writing an image for tvheadend server. The tvheadend program has
>> some default arguments, which almost always are:
>>
>> -u hts -g video -c /config
>>
>> So then after that we might append user-specific flags. Which for my
>> personal use are:
>>
>> --satip_xml http://192.168.1.22:8080/desc.xml --bindaddr 192.168.2.218
>>
>> So those user flags become set in CMD. As a user I set from my
>> orchestration tool, which is crane. In a 'crane.yml' YAML user
>> configuration file. Then I type 'crane lift', which does the appending
>> (and overriding) of CMD at the end of 'docker run...'.
>>
>> Another user comes along. Their user-specific (last arguments) will be
>> entirely different. And they should naturally use CMD to set them.
>> This is all something you guys have already stated.
>>
>> BUT (as an image writer). I don't want them to wipe out (override) the
>> first "default" part:
>>
>> -u hts -g video -c /config
>>
>> Because the user name, group name, and "/config" configuration dir (a
>> VOLUME) were all hard-coded and baked into the
>> tvheadend image's Dockerfile. HOWEVER, if some very few people do want
>> to override them, they can by setting --entrypoint=. For example to
>> start up an image as a different user.
>>
>> BUT because they are almost never changed, that's why they are
>> tacked onto the end of the entrypoint instead, as that is the best place
>> to put them. It saves every user unnecessarily repeating the same set
>> of default arguments every time in their CMD part. So as an image
>> writer of the tvheadend image, the image's default entry point and cmd
>> are:
>>
>> ENTRYPOINT ["/tvheadend","-u","hts","-g","video","-c","/config"]
>> CMD *nothing* (or else could be "--help")
>>
>> after converting it to use s6 will be:
>>
>> ENTRYPOINT ["/init", "/tvheadend","-u","hts","-g","video","-c","/config"]
>> CMD *nothing* (or else could be "--help")
>>
>> And it's that easy. After making such a change, no user of my
>> tvheadend image will be affected… because users are only meant to
>> override CMD. And if they choose to override the entrypoint (and
>> accidentally remove '/init'), then they are entirely on their own.
>>
>
> Making the ENTRYPOINT have a bunch of defaults like this is actually
> exactly what you *shouldn't* do. Ideally, once you've created your
> baseimage, you shouldn't touch the ENTRYPOINT and CMD ever again in
> any derived image.
>
> So, first things first - if you're building an image with TVHeadEnd
> installed, you may as well take the time to write a script to run it
> in s6, which eliminates the need to change the ENTRYPOINT and CMD

Well, you are then overlooking 2 other benefits which I neglected
to mention previously:

1) Anyone can just look in my Dockerfile and see the default
tvheadend arguments as-is, without having to chase them down
inside some script (only the Dockerfile is published to
Docker Hub, not such start scripts).

2) When they run my image, they can do a 'docker ps' and see
the FULL tvheadend arguments, including the default ones in the entry
point component - which then behaves much more like the regular 'ps'
command.

I actually like those benefits, and as the image writer have the
responsibility to ensure such things work properly, and to my own
liking.
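To make that concrete, here is a small standalone sketch (illustrative only, not taken from the image) of how docker builds the container's command line by appending the CMD array after the ENTRYPOINT array - the combined line is what 'docker ps' displays:

```shell
#!/bin/sh
# Docker concatenates ENTRYPOINT and CMD into the container's argv.
entrypoint="/tvheadend -u hts -g video -c /config"
cmd="--satip_xml http://192.168.1.22:8080/desc.xml --bindaddr 192.168.2.218"
full="$entrypoint $cmd"
# 'docker ps' shows this full combined command line:
echo "$full"
```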

> arguments in the first place. Your ENTRYPOINT just remains "/init",
> which will launch s6-svscan, which will launch TVHeadEnd. Your CMD
> array remains null.
>
> Now, conceptually, you can think of TVHeadEnd as requiring "system
> arguments" and "user arguments" -- "system arguments" being those
> defaults you mentioned.

Yeah - except I don't actually want to do that, and Docker Inc
officially gives me the freedom to use both entrypoint + cmd in any
way I personally see fit.

>
> Now just make the TVHeadEnd `run` script something like:

vv That start script won't be published in the 'Dockerfile' tab on
Docker Hub. Therefore, the more functionality I push from the
Dockerfile into that script, the harder my Dockerfile will be for
other people to understand.

It's a completely valid way of doing things. I just don't think you
should try to enforce it as a hard rule in other people's images.

>
> ```
> #!/bin/sh
>
> DEFAULT_TVHEADEND_SYSTEM_ARGS="-u hts -g video -c /config"
> TVHEADEND_SYSTEM_ARGS=${TVHEADEND_SYSTEM_ARGS:-$DEFAULT_TVHEADEND_SYSTEM_ARGS}
> # the above line will use TVHEADEND_SYSTEM_ARGS if defined, otherwise
> # set TVHEADEND_SYSTEM_ARGS = DEFAULT_TVHEADEND_SYSTEM_ARGS
>
> exec tvheadend $TVHEADEND_SYSTEM_ARGS $TVHEADEND_USER_ARGS
> ```
>
> That's a shell script; I'm sure you could do an execline
> implementation - that detail doesn't particularly matter.
>
> So now, the user can run:
>
> `docker run -e TVHEADEND_USER_ARGS="--satip_xml http:..." tvheadendimage`
>
> If they need to change the default/system argumen

Re: process supervisor - considerations for docker

2015-02-27 Thread John Regan
Quick preface:

I know I keep crapping on some parts of your ideas here, but I would
like to reiterate that the core idea is absolutely great - easy-to-use
images based around s6.
running `docker run imagename command some arguments` is actually a
*great* idea, since that lets people start making images without
needing to know a lot of details about the underlying process
supervisor. It's flexible, I know a lot of people like to initially
try things "on-the-fly" before sitting down to write a Dockerfile, and
this allows for that.

You're just getting caught up in some of the finer
implementation details - the usage of ENTRYPOINT vs CMD vs
environment variables, how to handle shell arguments, and so on.
These parts are solved problems.

So, onwards:

> * Convert Gorka's current "/init" script from bash --> execline
> * Testing on Gorka's ubuntu base image
> 
> * Add support for spawning the argv[], for cmd and entry point.
> * Testing on Gorka's ubuntu base image
> 
> * Create new base image for Alpine (thanks for mentioning it earlier, John)
> * Possibly create other base image(s), e.g. debian, arch, busybox and so on.
> * Test them, refine them.

Everything up to here sounds fine and dandy - make some images and get
them out there, I'm all about that.

> 
> * Document the new set of s6 base images
> * Blog about them

Also awesome.

> * Inform the phusion people we have created a successor

Hmm, I don't think that'll go over well, nor do I think it's really an
appropriate thing to do. Some people like the phusion images, for one.
The whole reason they exist is because a lot of guys keep trying to
use Docker as a VM, and get upset when they can't SSH into the
container.

I don't see these as a "successor", rather an "alternative."

Besides, none of us would even be concerned about this if Phusion
hadn't made something in the first place!

> * Inform 1(+) member(s) of Docker Inc to make them aware of new s6-images.

Docker Inc isn't *that* concerned about what types of images are being
made. They'd probably get that email and say "..alright cool?"

The right approach (in my opinion) is to just build a really cool
product, then let the product speak for itself.

> Of course the proper channel ATM is to open issues and PRs on Gorka's
> current s6-ubuntu base image. So we can open those fairly soon and
> move discussion there.

It might be worth starting up a github organization or something and
creating new images under that namespace. I can't speak for Gorka, but
the proposed image is sufficiently different from my existing
one that I'd have to make a new image anyway. My existing ones don't
allow for that `docker run imagename command arguments` use-case, so for
me this constitutes a breaking change. I'd rather just deprecate my
existing ones, and either join in on Gorka's project, or start a new
one, either way.

> 
> 
> Another thing we don't need to worry about right now (but may become a
> consideration later on):
> 
> * Once there are 2+ similar s6 images.
>   * May be worth to consult Docker Inc employees about official / base
> image builds on the hub.
>   * May be worth to consult base image writers (of the base images we
> are using) e.g. 'ubuntu' etc.

I wouldn't ever really worry about that. The base images don't have
any kind of process supervisor, and they shouldn't. They're meant to
be minimal installs for building stuff off of.

> 
>  * Is it possible to convert to a github organisation to house multiple images?
>  * Can be helpful to grow support, and for others who come on
> board later on to add new distros.
>   * May be worth ensuring uniform behaviour of common s6 components
> across different distros' s6 base images.
>  * e.g. Central place of structured and consistent documentation
> that covers all similar s6 base images together.

Yep, see my comments above. I'm all about that.

> 
> Again, I'm not mandating that we need to do any of those things at all,
> as it should not be my decision whatsoever. But it's a good idea
> to keep those possibilities in mind when doing near-term work. "Try to
> keep it general", basically.
> 
> For example:
> 
> I see a lot of good ideas in Gorka's base image about fixing APT. It
> may be that some of those great ideas can be fed back upstream to the
> official ubuntu base image itself. Then (if they are receptive
> upstream) it can later be removed from Gorka's s6-specific ubuntu base
> image (being a child of that). Which generally improves the level of
> standardization and granularity (when people decide whether to use s6
> or not), etc.

They probably won't be that receptive - Gorka isn't changing the base
Ubuntu image *drastically*, but it still deviates from how a normal,
plain-jane Ubuntu install works. The base images should be as close to
stock as possible, since that's what most people will be coming from.
If the base Ubuntu image does something differently from an actual
base U

Re: process supervisor - considerations for docker

2015-02-27 Thread John Regan
> Let me explain my point by an example:
> 
> I am writing an image for tvheadend server. The tvheadend program has
> some default arguments, which almost always are:
> 
> -u hts -g video -c /config
> 
> So then after that we might append user-specific flags, which for my
> personal use are:
> 
> --satip_xml http://192.168.1.22:8080/desc.xml --bindaddr 192.168.2.218
> 
> So those user flags get set in CMD. As a user, I set them from my
> orchestration tool, which is crane, in a 'crane.yml' YAML user
> configuration file. Then I type 'crane lift', which appends
> (and overrides) CMD at the end of 'docker run...'.
> 
> Another user comes along. Their user-specific (last arguments) will be
> entirely different. And they should naturally use CMD to set them.
> This is all something you guys have already stated.
> 
> BUT (as an image writer) I don't want them to wipe out (override) the
> first "default" part:
> 
> -u hts -g video -c /config
> 
> Because the user name, group name, and "/config" configuration dir (a
> VOLUME) were all hard-coded choices baked into the
> tvheadend image's Dockerfile. HOWEVER, if some very few people do want
> to override them, they can by setting --entrypoint=, for example to
> start up an image as a different user.
> 
> BUT because they are almost never changed, they are
> tacked onto the end of the entrypoint instead, as that is the best place
> to put them. It saves every user from unnecessarily repeating the same
> set of default arguments in their CMD part. So as an image
> writer of the tvheadend image, the image's default entry point and cmd
> are:
> 
> ENTRYPOINT ["/tvheadend","-u","hts","-g","video","-c","/config"]
> CMD *nothing* (or else could be "--help")
> 
> after converting it to use s6 will be:
> 
> ENTRYPOINT ["/init", "/tvheadend","-u","hts","-g","video","-c","/config"]
> CMD *nothing* (or else could be "--help")
> 
> And it's that easy. After making such a change, no user of my
> tvheadend image will be affected… because users are only meant to
> override CMD. And if they choose to override the entrypoint (and
> accidentally remove '/init'), then they are entirely on their own.
> 

Making the ENTRYPOINT have a bunch of defaults like this is actually
exactly what you *shouldn't* do. Ideally, once you've created your
baseimage, you shouldn't touch the ENTRYPOINT and CMD ever again in
any derived image.

So, first things first - if you're building an image with TVHeadEnd
installed, you may as well take the time to write a script to run it
in s6, which eliminates the need to change the ENTRYPOINT and CMD
arguments in the first place. Your ENTRYPOINT just remains "/init",
which will launch s6-svscan, which will launch TVHeadEnd. Your CMD
array remains null.

Now, conceptually, you can think of TVHeadEnd as requiring "system
arguments" and "user arguments" -- "system arguments" being those
defaults you mentioned.

Now just make the TVHeadEnd `run` script something like:

```
#!/bin/sh

DEFAULT_TVHEADEND_SYSTEM_ARGS="-u hts -g video -c /config"
TVHEADEND_SYSTEM_ARGS=${TVHEADEND_SYSTEM_ARGS:-$DEFAULT_TVHEADEND_SYSTEM_ARGS}
# the above line will use TVHEADEND_SYSTEM_ARGS if defined, otherwise
# set TVHEADEND_SYSTEM_ARGS = DEFAULT_TVHEADEND_SYSTEM_ARGS

exec tvheadend $TVHEADEND_SYSTEM_ARGS $TVHEADEND_USER_ARGS
```

That's a shell script; I'm sure you could do an execline
implementation - that detail doesn't particularly matter.
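(As an aside, a rough execline sketch of the same run script might look like the following. This is an untested assumption on my part - double-check the importas flags against the execline documentation before relying on it.)

```
#!/command/execlineb -P
# importas -D substitutes a default when the env var is unset;
# -s splits the value into separate words so each flag becomes
# its own argument.
importas -s -D "-u hts -g video -c /config" SYSTEM_ARGS TVHEADEND_SYSTEM_ARGS
importas -s -D "" USER_ARGS TVHEADEND_USER_ARGS
tvheadend $SYSTEM_ARGS $USER_ARGS
```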

So now, the user can run:

`docker run -e TVHEADEND_USER_ARGS="--satip_xml http:..." tvheadendimage`

If they need to change the default/system arguments for some reason, they can
do that, too:

`docker run -e TVHEADEND_SYSTEM_ARGS="whatever" -e TVHEADEND_USER_ARGS="--satip_xml http:..." tvheadendimage`

From (briefly) reading the crane documentation, that's totally
possible. It would be something like:

```
containers:
  tvheadend:
    image: tvheadendimage
    run:
      env:
        - "TVHEADEND_USER_ARGS=--satip_xml http.."
        - "TVHEADEND_SYSTEM_ARGS=whatever"
```

This is exactly what environment variables are meant for in Docker,
btw, and how most images for a service operate. You define some
service-specific environment variables, and use those to setup the
service at run-time. All those various orchestration tools support
this.
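A tiny standalone sketch of that pattern (the variable names come from the example run script above; the docker '-e' flags are simulated here with plain assignments):

```shell
#!/bin/sh
# Simulate 'docker run -e TVHEADEND_USER_ARGS=...' with an assignment:
TVHEADEND_USER_ARGS="--satip_xml http://192.168.1.22:8080/desc.xml"
DEFAULT_TVHEADEND_SYSTEM_ARGS="-u hts -g video -c /config"
# Use TVHEADEND_SYSTEM_ARGS if the user set it, otherwise fall back:
TVHEADEND_SYSTEM_ARGS=${TVHEADEND_SYSTEM_ARGS:-$DEFAULT_TVHEADEND_SYSTEM_ARGS}
# Show what the run script would exec:
echo tvheadend $TVHEADEND_SYSTEM_ARGS $TVHEADEND_USER_ARGS
```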


Re: process supervisor - considerations for docker

2015-02-27 Thread Dreamcat4
On Fri, Feb 27, 2015 at 1:10 PM, Dreamcat4  wrote:
>> * Once there are 2+ similar s6 images.
>>   * May be worth to consult Docker Inc employees about official / base
>> image builds on the hub.
>
> Here is an example of why we might benefit from seeking help from Docker Inc:
>
> * Multiple FROM images (multiple inheritance).
>
> There should already be an open ticket for this feature (which does
> not exist in Docker). And it seems relevant to our situation.
>
> Or they could make a feature called "flavours" as a way to "tweak"
> base images. Then that would save us some unnecessary duplication of
> work.
>
> For example:
>
> FROM: ubuntu
> FLAVOUR: s6
>
> People could instead do:
>
> FROM: alpine
> FLAVOUR: s6

Oh wait a minute: I'm overcomplicating this. We can already use ADD
for achieving that sort of thing. Instead, the entry would just point
to a github URL to get a single tarball from. Gorka is sort-of already
doing this… just with 2 separate ones, without his /init included
within, which is copied from a local directory etc.
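To sketch that concretely (illustrative only - this reuses the overlay tarball URL mentioned earlier in the thread; since ADD from a remote URL does not auto-extract, a RUN step unpacks it, and the final ENTRYPOINT assumes a future tarball that bundles /init, which the current ones do not):

```dockerfile
FROM alpine
ADD https://github.com/glerchundi/container-s6-overlay-builder/releases/download/v0.1.0/s6-overlay-0.1.0-linux-amd64.tar.gz /tmp/s6-overlay.tar.gz
RUN tar xzf /tmp/s6-overlay.tar.gz -C / && rm /tmp/s6-overlay.tar.gz
# Assumes the overlay provides /init as the supervision entrypoint:
ENTRYPOINT ["/init"]
```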

> Where FLAVOUR: s6 is just a separate aufs layer (added on top of the
> base) at the time the image is built. So s6 is just the s6-part, kept
> independent and separated out from the various base images.
>
> Then we would only need to worry about maintaining an 's6' flavour,
> which is self-contained, bringing everything it needs with it - its
> own 'execline' and other needed s6 support tools. So not depending
> upon anything that may or may not be in the base image (including
> busybox).
>
> Such help from Docker Inc would save us having to maintain many
> individual copies of various base images. So we should tell them about
> it, and let them know that!
>
> The missing capability of multiple FROM: base images (which I believe
> is how it is described in current open ticket(s) on docker/docker) is
> essentially exactly the same idea as this FLAVOUR keyword I have used
> above ^^. They are interchangeable concepts. I've just called it
> something else for the sake of being awkward / whatever.


Re: process supervisor - considerations for docker

2015-02-27 Thread Dreamcat4
> * Once there are 2+ similar s6 images.
>   * May be worth to consult Docker Inc employees about official / base
> image builds on the hub.

Here is an example of why we might benefit from seeking help from Docker Inc:

* Multiple FROM images (multiple inheritance).

There should already be an open ticket for this feature (which does
not exist in Docker). And it seems relevant to our situation.

Or they could make a feature called "flavours" as a way to "tweak"
base images. Then that would save us some unnecessary duplication of
work.

For example:

FROM: ubuntu
FLAVOUR: s6

People could instead do:

FROM: alpine
FLAVOUR: s6

Where FLAVOUR: s6 is just a separate aufs layer (added on top of the
base) at the time the image is built. So s6 is just the s6-part, kept
independent and separated out from the various base images.

Then we would only need to worry about maintaining an 's6' flavour,
which is self-contained, bringing everything it needs with it - its
own 'execline' and other needed s6 support tools. So not depending
upon anything that may or may not be in the base image (including
busybox).

Such help from Docker Inc would save us having to maintain many
individual copies of various base images. So we should tell them about
it, and let them know that!

The missing capability of multiple FROM: base images (which I believe
is how it is described in current open ticket(s) on docker/docker) is
essentially exactly the same idea as this FLAVOUR keyword I have used
above ^^. They are interchangeable concepts. I've just called it
something else for the sake of being awkward / whatever.


Re: process supervisor - considerations for docker

2015-02-27 Thread Dreamcat4
On Fri, Feb 27, 2015 at 10:19 AM, Gorka Lertxundi  wrote:
> Dreamcat4, pull requests are always welcome!
>
> 2015-02-27 0:40 GMT+01:00 Laurent Bercot :
>
>> On 26/02/2015 21:53, John Regan wrote:
>>
>>> Besides, the whole idea here is to make an image that follows best
>>> practices, and best practices state we should be using a process
>>> supervisor that cleans up orphaned processes and stuff. You should be
>>> encouraging people to run their programs, interactively or not, under
>>> a supervision tree like s6.
>>>
>>
>>  The distinction between "process" and "service" is key here, and I
>> agree with John.
>>
>> 
>>  There's a lot of software out there that seems built on the assumption
>> that
>> a program should do everything within a single executable, and that
>> processes
>> that fail to address certain issues are incomplete and the program needs to
>> be patched.
>>
>>  Under Unix, this assumption is incorrect. Unix is mostly defined by its
>> simple and efficient interprocess communication, so a Unix program is best
>> designed as a *set* of processes, with the right communication channels
>> between them, and the right control flow between those processes. Using
>> Unix primitives the right way allows you to accomplish a task with minimal
>> effort by delegating a lot to the operating system.
>>
>>  This is how I design and write software: to take advantage of the design
>> of Unix as much as I can, to perform tasks with the lowest possible amount
>> of code.
>>  This requires isolating basic building blocks, and providing those
>> building
>> blocks as binaries, with the right interface so users can glue them
>> together on the command line.
>>
>>  Take the "syslogd" service. The "rsyslogd" way is to have one executable,
>> rsyslogd, that provides the syslogd functionality. The s6 way is to combine
>> several tools to implement syslogd; the functionality already exists, even
>> if it's not immediately apparent. This command line should do:
>>
>>  pipeline s6-ipcserver-socketbinder /dev/log s6-envuidgid nobody
>> s6-applyuidgid -Uz s6-ipcserverd ucspilogd "" s6-envuidgid syslog
>> s6-applyuidgid -Uz s6-log /var/log/syslogd
>>
>>
> I love puzzles.
>
>
>>  Yes, that's one unique command line. The syslogd implementation will take
>> the form of two long-running processes, one listening on /dev/log (the
>> syslogd socket) as user nobody, and spawning a short-lived ucspilogd
>> process
>> for every connection to syslog; and the other writing the logs to the
>> /var/log/syslogd directory as user syslog and performing automatic
>> rotation.
>> (You can configure how and where things are logged by writing a real s6-log
>> script at the end of the command line.)
>>
>>  Of course, in the real world, you wouldn't write that. First, because s6
>> provides some shortcuts for common operations so the real command lines
>> would be a tad shorter, and second, because you'd want the long-running
>> processes to be supervised, so you'd use the supervision infrastructure
>> and write two short run scripts instead.
>>
>>  (And so, to provide syslogd functionality to one client, you'd really have
>> 1 s6-svscan process, 2 s6-supervise processes, 1 s6-ipcserverd process,
>> 1 ucspilogd process and 1 s6-log process. Yes, 6 processes. This is not as
>> insane as it sounds. Processes are not a scarce resource on Unix; the
>> scarce resources are RAM and CPU. The s6 processes have been designed to
>> take *very* little of those, so the total amount of RAM and CPU they all
>> use is still smaller than the amount used by a single rsyslogd process.)
>>
>>  There are good reasons to program this way. Mostly, it amounts to writing
>> as little code as possible. If you look at the source code for every single
>> command that appears on the insane command line above, you'll find that
>> it's
>> pretty short, and short means maintainable - which is the most important
>> quality to have in a codebase, especially when there's just one guy
>> maintaining it.
>>  Using high-level languages also reduces the source code's size, but it
>> adds the interpreter's or run-time system's overhead, and a forest of
>> dependencies. What is then run on the machine is not lightweight by any
>> measure. (Plus, most of those languages are total crap.)
>>
>>  Anyway, my point is that it often takes several processes to provide a
>> service, and that it's a good thing. This practice should be encouraged.
>> So, yes, running a service under a process supervisor is the right design,
>> and I'm happy that John, Gorka, Les and other people have figured it out.
>>
>>  s6 itself provides the "process supervision" service not as a single
>> executable, but as a set of tools. s6-svscan doesn't do it all, and it's
>> by design. It's just another basic building block. Sure, it's a bit special
>> because it can run as process 1 and is the root of the supervision tree,
>> but that doesn't mean it's a turnkey program - the key lies in how it's
>> used together w

Re: process supervisor - considerations for docker

2015-02-27 Thread Dreamcat4
On Thu, Feb 26, 2015 at 11:40 PM, Laurent Bercot
 wrote:
> On 26/02/2015 21:53, John Regan wrote:
>>
>> Besides, the whole idea here is to make an image that follows best
>> practices, and best practices state we should be using a process
>> supervisor that cleans up orphaned processes and stuff. You should be
>> encouraging people to run their programs, interactively or not, under
>> a supervision tree like s6.
>
>
>  The distinction between "process" and "service" is key here, and I
> agree with John.
>
> 
>  There's a lot of software out there that seems built on the assumption that
> a program should do everything within a single executable, and that
> processes
> that fail to address certain issues are incomplete and the program needs to
> be patched.
>
>  Under Unix, this assumption is incorrect. Unix is mostly defined by its
> simple and efficient interprocess communication, so a Unix program is best
> designed as a *set* of processes, with the right communication channels
> between them, and the right control flow between those processes. Using
> Unix primitives the right way allows you to accomplish a task with minimal
> effort by delegating a lot to the operating system.
>
>  This is how I design and write software: to take advantage of the design
> of Unix as much as I can, to perform tasks with the lowest possible amount
> of code.
>  This requires isolating basic building blocks, and providing those building
> blocks as binaries, with the right interface so users can glue them
> together on the command line.
>
>  Take the "syslogd" service. The "rsyslogd" way is to have one executable,
> rsyslogd, that provides the syslogd functionality. The s6 way is to combine
> several tools to implement syslogd; the functionality already exists, even
> if it's not immediately apparent. This command line should do:
>
>  pipeline s6-ipcserver-socketbinder /dev/log s6-envuidgid nobody
> s6-applyuidgid -Uz s6-ipcserverd ucspilogd "" s6-envuidgid syslog
> s6-applyuidgid -Uz s6-log /var/log/syslogd
>
>  Yes, that's one unique command line. The syslogd implementation will take
> the form of two long-running processes, one listening on /dev/log (the
> syslogd socket) as user nobody, and spawning a short-lived ucspilogd process
> for every connection to syslog; and the other writing the logs to the
> /var/log/syslogd directory as user syslog and performing automatic rotation.
> (You can configure how and where things are logged by writing a real s6-log
> script at the end of the command line.)
>
>  Of course, in the real world, you wouldn't write that. First, because s6
> provides some shortcuts for common operations so the real command lines
> would be a tad shorter, and second, because you'd want the long-running
> processes to be supervised, so you'd use the supervision infrastructure
> and write two short run scripts instead.
>
>  (And so, to provide syslogd functionality to one client, you'd really have
> 1 s6-svscan process, 2 s6-supervise processes, 1 s6-ipcserverd process,
> 1 ucspilogd process and 1 s6-log process. Yes, 6 processes. This is not as
> insane as it sounds. Processes are not a scarce resource on Unix; the
> scarce resources are RAM and CPU. The s6 processes have been designed to
> take *very* little of those, so the total amount of RAM and CPU they all
> use is still smaller than the amount used by a single rsyslogd process.)
>
>  There are good reasons to program this way. Mostly, it amounts to writing
> as little code as possible. If you look at the source code for every single
> command that appears on the insane command line above, you'll find that it's
> pretty short, and short means maintainable - which is the most important
> quality to have in a codebase, especially when there's just one guy
> maintaining it.
>  Using high-level languages also reduces the source code's size, but it
> adds the interpreter's or run-time system's overhead, and a forest of
> dependencies. What is then run on the machine is not lightweight by any
> measure. (Plus, most of those languages are total crap.)
>
>  Anyway, my point is that it often takes several processes to provide a
> service, and that it's a good thing. This practice should be encouraged.
> So, yes, running a service under a process supervisor is the right design,
> and I'm happy that John, Gorka, Les and other people have figured it out.
>
>  s6 itself provides the "process supervision" service not as a single
> executable, but as a set of tools. s6-svscan doesn't do it all, and it's
> by design. It's just another basic building block. Sure, it's a bit special
> because it can run as process 1 and is the root of the supervision tree,
> but that doesn't mean it's a turnkey program - the key lies in how it's
> used together with other s6 and Unix tools.
>  That's why starting s6-svscan directly as the entrypoint isn't such a
> good idea. It's much more flexible to run a script as the entrypoint
> that performs a few basic initialization steps t

Re: process supervisor - considerations for docker

2015-02-27 Thread Gorka Lertxundi
Dreamcat4, pull requests are always welcome!

2015-02-27 0:40 GMT+01:00 Laurent Bercot :

> On 26/02/2015 21:53, John Regan wrote:
>
>> Besides, the whole idea here is to make an image that follows best
>> practices, and best practices state we should be using a process
>> supervisor that cleans up orphaned processes and stuff. You should be
>> encouraging people to run their programs, interactively or not, under
>> a supervision tree like s6.
>>
>
>  The distinction between "process" and "service" is key here, and I
> agree with John.
>
> 
>  There's a lot of software out there that seems built on the assumption
> that
> a program should do everything within a single executable, and that
> processes
> that fail to address certain issues are incomplete and the program needs to
> be patched.
>
>  Under Unix, this assumption is incorrect. Unix is mostly defined by its
> simple and efficient interprocess communication, so a Unix program is best
> designed as a *set* of processes, with the right communication channels
> between them, and the right control flow between those processes. Using
> Unix primitives the right way allows you to accomplish a task with minimal
> effort by delegating a lot to the operating system.
>
>  This is how I design and write software: to take advantage of the design
> of Unix as much as I can, to perform tasks with the lowest possible amount
> of code.
>  This requires isolating basic building blocks, and providing those
> building
> blocks as binaries, with the right interface so users can glue them
> together on the command line.
>
>  Take the "syslogd" service. The "rsyslogd" way is to have one executable,
> rsyslogd, that provides the syslogd functionality. The s6 way is to combine
> several tools to implement syslogd; the functionality already exists, even
> if it's not immediately apparent. This command line should do:
>
>  pipeline s6-ipcserver-socketbinder /dev/log s6-envuidgid nobody
> s6-applyuidgid -Uz s6-ipcserverd ucspilogd "" s6-envuidgid syslog
> s6-applyuidgid -Uz s6-log /var/log/syslogd
>
>
I love puzzles.


>  Yes, that's one unique command line. The syslogd implementation will take
> the form of two long-running processes, one listening on /dev/log (the
> syslogd socket) as user nobody, and spawning a short-lived ucspilogd
> process
> for every connection to syslog; and the other writing the logs to the
> /var/log/syslogd directory as user syslog and performing automatic
> rotation.
> (You can configure how and where things are logged by writing a real s6-log
> script at the end of the command line.)
>
>  Of course, in the real world, you wouldn't write that. First, because s6
> provides some shortcuts for common operations so the real command lines
> would be a tad shorter, and second, because you'd want the long-running
> processes to be supervised, so you'd use the supervision infrastructure
> and write two short run scripts instead.
>
>  (And so, to provide syslogd functionality to one client, you'd really have
> 1 s6-svscan process, 2 s6-supervise processes, 1 s6-ipcserverd process,
> 1 ucspilogd process and 1 s6-log process. Yes, 6 processes. This is not as
> insane as it sounds. Processes are not a scarce resource on Unix; the
> scarce resources are RAM and CPU. The s6 processes have been designed to
> take *very* little of those, so the total amount of RAM and CPU they all
> use is still smaller than the amount used by a single rsyslogd process.)
>
>  There are good reasons to program this way. Mostly, it amounts to writing
> as little code as possible. If you look at the source code for every single
> command that appears on the insane command line above, you'll find that
> it's
> pretty short, and short means maintainable - which is the most important
> quality to have in a codebase, especially when there's just one guy
> maintaining it.
>  Using high-level languages also reduces the source code's size, but it
> adds the interpreter's or run-time system's overhead, and a forest of
> dependencies. What is then run on the machine is not lightweight by any
> measure. (Plus, most of those languages are total crap.)
>
>  Anyway, my point is that it often takes several processes to provide a
> service, and that it's a good thing. This practice should be encouraged.
> So, yes, running a service under a process supervisor is the right design,
> and I'm happy that John, Gorka, Les and other people have figured it out.
>
>  s6 itself provides the "process supervision" service not as a single
> executable, but as a set of tools. s6-svscan doesn't do it all, and it's
> by design. It's just another basic building block. Sure, it's a bit special
> because it can run as process 1 and is the root of the supervision tree,
> but that doesn't mean it's a turnkey program - the key lies in how it's
> used together with other s6 and Unix tools.
>  That's why starting s6-svscan directly as the entrypoint isn't such a
> good idea. It's much more flexible to run a script as th

Re: process supervisor - considerations for docker

2015-02-26 Thread Laurent Bercot

On 26/02/2015 21:53, John Regan wrote:

Besides, the whole idea here is to make an image that follows best
practices, and best practices state we should be using a process
supervisor that cleans up orphaned processes and stuff. You should be
encouraging people to run their programs, interactively or not, under
a supervision tree like s6.


 The distinction between "process" and "service" is key here, and I
agree with John.


 There's a lot of software out there that seems built on the assumption that
a program should do everything within a single executable, and that processes
that fail to address certain issues are incomplete and the program needs to
be patched.

 Under Unix, this assumption is incorrect. Unix is mostly defined by its
simple and efficient interprocess communication, so a Unix program is best
designed as a *set* of processes, with the right communication channels
between them, and the right control flow between those processes. Using
Unix primitives the right way allows you to accomplish a task with minimal
effort by delegating a lot to the operating system.

 This is how I design and write software: to take advantage of the design
of Unix as much as I can, to perform tasks with the lowest possible amount
of code.
 This requires isolating basic building blocks, and providing those building
blocks as binaries, with the right interface so users can glue them
together on the command line.

 Take the "syslogd" service. The "rsyslogd" way is to have one executable,
rsyslogd, that provides the syslogd functionality. The s6 way is to combine
several tools to implement syslogd; the functionality already exists, even
if it's not immediately apparent. This command line should do:

 pipeline s6-ipcserver-socketbinder /dev/log s6-envuidgid nobody s6-applyuidgid -Uz
   s6-ipcserverd ucspilogd ""
   s6-envuidgid syslog s6-applyuidgid -Uz s6-log /var/log/syslogd

 Yes, that's one unique command line. The syslogd implementation will take
the form of two long-running processes, one listening on /dev/log (the
syslogd socket) as user nobody, and spawning a short-lived ucspilogd process
for every connection to syslog; and the other writing the logs to the
/var/log/syslogd directory as user syslog and performing automatic rotation.
(You can configure how and where things are logged by writing a real s6-log
script at the end of the command line.)

 Of course, in the real world, you wouldn't write that. First, because s6
provides some shortcuts for common operations so the real command lines
would be a tad shorter, and second, because you'd want the long-running
processes to be supervised, so you'd use the supervision infrastructure
and write two short run scripts instead.
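 As an illustration (mine, not from this mail): under s6 the listener and the logger would each get a run script, with the logger in a log/ subdirectory of the service directory so that the supervision tree itself maintains the pipe between the two long-running processes, replacing the explicit "pipeline". Paths and directory names below are assumptions:

```shell
# /service/syslogd/run -- the listening half (illustrative sketch):
# bind /dev/log as root, drop to nobody, accept connections,
# and spawn a short-lived ucspilogd per client.
exec s6-ipcserver-socketbinder /dev/log \
     s6-envuidgid nobody s6-applyuidgid -Uz \
     s6-ipcserverd ucspilogd ""

# /service/syslogd/log/run -- the logging half; its stdin is the read end
# of the pipe from the listener. Write logs as user syslog, with automatic
# rotation handled by s6-log.
exec s6-envuidgid syslog s6-applyuidgid -Uz \
     s6-log /var/log/syslogd
```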

 (And so, to provide syslogd functionality to one client, you'd really have
1 s6-svscan process, 2 s6-supervise processes, 1 s6-ipcserverd process,
1 ucspilogd process and 1 s6-log process. Yes, 6 processes. This is not as
insane as it sounds. Processes are not a scarce resource on Unix; the
scarce resources are RAM and CPU. The s6 processes have been designed to
take *very* little of those, so the total amount of RAM and CPU they all
use is still smaller than the amount used by a single rsyslogd process.)

 There are good reasons to program this way. Mostly, it amounts to writing
as little code as possible. If you look at the source code for every single
command that appears on the insane command line above, you'll find that it's
pretty short, and short means maintainable - which is the most important
quality to have in a codebase, especially when there's just one guy
maintaining it.
 Using high-level languages also reduces the source code's size, but it
adds the interpreter's or run-time system's overhead, and a forest of
dependencies. What is then run on the machine is not lightweight by any
measure. (Plus, most of those languages are total crap.)

 Anyway, my point is that it often takes several processes to provide a
service, and that it's a good thing. This practice should be encouraged.
So, yes, running a service under a process supervisor is the right design,
and I'm happy that John, Gorka, Les and other people have figured it out.

 s6 itself provides the "process supervision" service not as a single
executable, but as a set of tools. s6-svscan doesn't do it all, and it's
by design. It's just another basic building block. Sure, it's a bit special
because it can run as process 1 and is the root of the supervision tree,
but that doesn't mean it's a turnkey program - the key lies in how it's
used together with other s6 and Unix tools.
 That's why starting s6-svscan directly as the entrypoint isn't such a
good idea. It's much more flexible to run a script as the entrypoint
that performs a few basic initialization steps then execs into s6-svscan.
Just like you'd do for a real init. :)
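 A minimal entrypoint along those lines might look like this (a sketch under my own assumptions: /service as the scan directory, the s6 binaries on the PATH):

```shell
#!/bin/sh
# Hypothetical /init: a few one-time initialization steps, then exec into
# s6-svscan so the supervision tree becomes the container's main process.
set -e
mkdir -p /service /var/log
# ... any other one-shot setup: environment, permissions, config templating ...
exec s6-svscan /service   # exec: s6-svscan takes over PID 1 and reaps orphans
```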


 


Re: process supervisor - considerations for docker

2015-02-26 Thread John Regan
On Thu, Feb 26, 2015 at 08:23:47PM +, Dreamcat4 wrote:
> You CANNOT enforce specific ENTRYPOINT + CMD usages amongst docker
> users. It will never work because too many people use docker in too
> many different ways. And it does not matter from a technical
> perspective for the solution I have been quietly thinking of (but not
> had an opportunity to share yet).
> 
> It's best to think of ENTRYPOINT (in conventional docker learning,
> before throwing in any /init system) as being "the interpreter", such
> as the "/bin/sh -c" bit that sets up the environment. Like the shebang
> line. Or it could be the python interpreter instead, etc.

I disagree, and I think your second paragraph actually supports my
argument: if you think of ENTRYPOINT as the command for setting up the
environment, then it makes sense to use ENTRYPOINT as the method for
setting up a supervision tree vs not setting up a supervision tree,
because those are two pretty different environments.

People use Docker in tons of different ways, sure. But I'm completely
able to say "this is the entrypoint my image uses, and this is what it
does."

Besides, the whole idea here is to make an image that follows best
practices, and best practices state we should be using a process
supervisor that cleans up orphaned processes and stuff. You should be
encouraging people to run their programs, interactively or not, under
a supervision tree like s6.

Heck, most people don't *care* about this kind of thing because they
don't even know. So if you just make /init the ENTRYPOINT, 99% of
people will probably never even realize what's happening. If they can
run `docker run -ti imagename /bin/sh` and get a working, interactive
shell, and the container exits when they type "exit", then they're
good to go! Most won't even question what the image is up to, they'll
just continue on getting the benefits of s6 without even realizing it.

> 
> My suggestion:
> 
> * /init is launched by docker as the first argument.
> * init checks for "$@". If there are any arguments:
> 
>  * create (from a simple template) a s6 run script
>    * run script launches $1 (first arg) as the command to run
>  * run script template is written with remaining args to $1
> 
>  * proceed normally (inspect the s6 config directory as usual!)
>    * as there should be no breakage of all existing functionality
> 
> * Provided there is no VOLUME sitting on top of the /etc/s6 config directory
> * Then the run script is temporary - it will only last while the
> container is running.
>   * So it won't be there to clean up on any future 'docker run'
> invocations with different arguments.
> 
> The main thing I'm concerned about is preserving proper shell
> quoting, because sometimes args can be like --flag='some thing'.
> 
> One simple way to get proper quoting (in conventional shells
> like bash) may be to use 'set -x' to echo out the line, as the output is
> ensured by the interpreter to be re-executable. Although even if that
> takes care of the quotes, it would still not be good to have
> accidental variable expansion, interpretation of $ ! etc. Maybe I'm
> thinking a bit too far ahead. But we already know that Gorka's '/init'
> script is written in bash.

I think here, you're getting way more caught up in the details of your
idea than you need to be. Shells, arguments, quoting, etc, you're
overcomplicating some of this stuff.


Re: process supervisor - considerations for docker

2015-02-26 Thread Dreamcat4
I think you guys are kinda on the right track. But you are trying to
impose some unneeded restrictions in regard to CMD vs ENTRYPOINT
usage.

To docker, the final combination of ENTRYPOINT + CMD is all that
matters. So long as the first arg is /init. Then any remaining args
should be shifted and used to determine what program is being run
under supervision.

So basically all the user has to care about (to get process
supervision) is to prepend "/init" as the first argument, before
argv[0] (the command to run) and any optional subsequent args.

If there are no arguments after "/init", then it's the normal
multi-process supervision case, handled in the conventional way
(inspect the config directories for each service).

You CANNOT enforce specific ENTRYPOINT + CMD usages amongst docker
users. It will never work because too many people use docker in too
many different ways. And it does not matter from a technical
perspective for the solution I have been quietly thinking of (but not
had an opportunity to share yet).


It's best to think of ENTRYPOINT (in conventional docker learning,
before throwing in any /init system) as being "the interpreter", such
as the "/bin/sh -c" bit that sets up the environment. Like the shebang
line. Or it could be the python interpreter instead, etc.

It's best to understand CMD as being the docker image's user-facing
presentation of its exposed "command interface". So for example if you
ran docker inside of docker, its overridable CMD part would be the
docker subcommands such as 'run', 'stop', etc. And then users can
understand that "docker run docker  " is all they need to
care about from a user perspective. --help usage of the docker image
and so on.

But technically, /init should only ever receive an argv[] array which
is the combined ENTRYPOINT + CMD appended after it (whether the user
has overridden the entrypoint or has no entrypoint set is entirely
their business, and inside of /init it cannot know such things anyway).

My suggestion:

* /init is launched by docker as the first argument.
* init checks for "$@". If there are any arguments:

 * create (from a simple template) a s6 run script
   * run script launches $1 (first arg) as the command to run
 * run script template is written with remaining args to $1

 * proceed normally (inspect the s6 config directory as usual!)
   * as there should be no breakage of all existing functionality

* Provided there is no VOLUME sitting on top of the /etc/s6 config directory
* Then the run script is temporary - it will only last while the
container is running.
   * So it won't be there to clean up on any future 'docker run'
invocations with different arguments.

The main thing I'm concerned about is preserving proper shell
quoting, because sometimes args can be like --flag='some thing'.

One simple way to get proper quoting (in conventional shells
like bash) may be to use 'set -x' to echo out the line, as the output is
ensured by the interpreter to be re-executable. Although even if that
takes care of the quotes, it would still not be good to have
accidental variable expansion, interpretation of $ ! etc. Maybe I'm
thinking a bit too far ahead. But we already know that Gorka's '/init'
script is written in bash.
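One way to sidestep 'set -x' entirely is to quote each argument explicitly when templating the run script. A sketch in plain POSIX sh (function and file names here are hypothetical, and this deliberately avoids bash-isms):

```shell
# Write an s6-style run script that re-executes "$@" with quoting preserved:
# each argument is wrapped in single quotes, with any embedded single quote
# escaped as '\''. Single quotes also suppress $-expansion and ! expansion
# in the generated script.
quote() {
  printf "'%s'" "$(printf '%s' "$1" | sed "s/'/'\\\\''/g")"
}

write_run_script() {
  out=$1; shift
  {
    printf '#!/bin/sh\nexec'
    for a in "$@"; do
      printf ' %s' "$(quote "$a")"
    done
    printf '\n'
  } > "$out"
  chmod +x "$out"
}

# Example: an argument containing a space survives the round trip intact.
# The generated script's second line reads:
#   exec 'printf' '%s\n' '--flag=some thing'
write_run_script /tmp/run printf '%s\n' --flag='some thing'
cat /tmp/run
```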

Sorry for coming at it from a completely different angle!
:)


On Thu, Feb 26, 2015 at 5:03 PM, John Regan  wrote:
>
>
>>  I think you're better off with:
>>
>>  * Case 1 : docker run --entrypoint="" image commandline
>>(with or without -ti depending on whether you need an interactive
>>terminal)
>>  * Case 2 : docker run image
>>  * Case 3: docker run image commandline
>>(with or without -ti depending on whether you need an interactive
>>terminal)
>>
>>  docker run --entrypoint="" -ti image /bin/sh
>>would start a shell without the supervision tree running
>>
>>  docker run -ti image /bin/sh
>>would start a shell with the supervision tree up.
>>
>
> After reading your reasoning, I agree 100% - let -ti drive whether it's 
> interactive, and --entrypoint drive whether there's a supervision tree.
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: process supervisor - considerations for docker

2015-02-26 Thread John Regan


>  I think you're better off with:
>
>  * Case 1 : docker run --entrypoint="" image commandline
>(with or without -ti depending on whether you need an interactive
>terminal)
>  * Case 2 : docker run image
>  * Case 3: docker run image commandline
>(with or without -ti depending on whether you need an interactive
>terminal)
>
>  docker run --entrypoint="" -ti image /bin/sh
>would start a shell without the supervision tree running
>
>  docker run -ti image /bin/sh
>would start a shell with the supervision tree up.
>

After reading your reasoning, I agree 100% - let -ti drive whether it's 
interactive, and --entrypoint drive whether there's a supervision tree. 
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: process supervisor - considerations for docker

2015-02-26 Thread Laurent Bercot

On 26/02/2015 15:38, John Regan wrote:

When you build a Docker image, the ENTRYPOINT is what program you want
to run as PID1 by default. It can be the path to a program, along with some
arguments, or it can be null.

CMD is really just arguments to your ENTRYPOINT, unless ENTRYPOINT is
null, in which case it becomes your effective ENTRYPOINT.

At build-time, you can specify a default CMD, which is what gets run
if no arguments are passed to docker run.


 OK, got it.



I'm going to call these case 1 (command with no supervision environment), case
2 (default supervision environment), and case 3 (supervision environment with
a provided command).

Here's a breakdown of each case's invocation, given a set ENTRYPOINT and CMD:

ENTRYPOINT = null, CMD = /init

* Case 1: `docker run image commandline`
* Case 2: `docker run image`
* Case 3: `docker run image /init commandline`

ENTRYPOINT = /init, CMD = null

* Case 1: `docker run --entrypoint="" image commandline`
* Case 2: `docker run image`
* Case 3: `docker run image commandline`


 Makes sense so far.



Now, something worth noting is that none of these commands runs interactively -
to run interactively, you run something like 'docker run -ti ubuntu /bin/sh'.

-t allocates a TTY, and -i keeps STDIN open.

So, I think the right thing to do is make /init check if there's a TTY
allocated and that STDIN is open, and if so, just exec into the passed
arguments without starting the supervision tree.


 ... but here I disagree.
 Performing different actions when a fd is a tty from when it is not
breaks the principle of least surprise. It's counter-intuitive,
generally bad practice, and simply not what you want in most cases.
It's okay for simple user interface stuff, such as programs that
pretty-print their output when stdout is a terminal, but in the
docker container case I don't think it's justified.



ENTRYPOINT = /init (with TTY/STDIN detection), CMD = null

* Case 1: `docker run -ti image commandline`
* Case 2: `docker run image`
* Case 3: `docker run image commandline`

So basically, if you want to run your command interactively with no execution
environment, you just pass '-ti' to 'docker run' like you normally do. If
you want it to run under the supervision tree, just don't pass the '-ti'
flags. This makes the image work like pretty much every image ever, and
the user doesn't ever need to type out "/init".


 The main problem with that approach is that there's no way to run an
interactive command with the supervision environment - but it's something
that users are very likely to need. What if I want my services to run,
but at the same time I want a shell inside the container to explore and
debug things ?

 I think you're better off with:

 * Case 1 : docker run --entrypoint="" image commandline
(with or without -ti depending on whether you need an interactive terminal)
 * Case 2 : docker run image
 * Case 3: docker run image commandline
(with or without -ti depending on whether you need an interactive terminal)

 docker run --entrypoint="" -ti image /bin/sh
would start a shell without the supervision tree running

 docker run -ti image /bin/sh
would start a shell with the supervision tree up.



Laurent, how hard is it to check if you're attached to a TTY or not?


 It's not hard at all:
 http://pubs.opengroup.org/onlinepubs/9699919799/functions/isatty.html
 (then you need a command line that does this, but I think busybox
provides one, and in the worst case it's trivial to write.)
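 For reference, the shell-level form of that check is tiny: POSIX `[ -t 0 ]` wraps isatty(0), and busybox also ships a `tty` applet.

```shell
# Report whether fd 0 (stdin) is a terminal -- the test an /init doing
# TTY detection would make before deciding how to behave.
if [ -t 0 ]; then
  echo "stdin is a tty"
else
  echo "stdin is not a tty"
fi
```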
 However, I don't think it's a good idea to do so in this case.

--
 Laurent



Re: process supervisor - considerations for docker

2015-02-26 Thread John Regan
On Thu, Feb 26, 2015 at 02:37:23PM +0100, Laurent Bercot wrote:
> On 26/02/2015 14:11, John Regan wrote:
> >Just to clarify," docker run" spins up a new container, so that wouldn't 
> >work for stopping a container. It would just spin up a new container running 
> >"s6-svscanctl -t service"
> >
> >To stop, you run "docker stop "
> 
>  Ha! Shows how much I know about Docker.
>  I believe the idea is sound, though. And definitely implementable.
> 

I figure this is also a good moment to go over ENTRYPOINT and CMD,
since that's come up a few times in the discussion.

When you build a Docker image, the ENTRYPOINT is what program you want
to run as PID1 by default. It can be the path to a program, along with some
arguments, or it can be null.

CMD is really just arguments to your ENTRYPOINT, unless ENTRYPOINT is
null, in which case it becomes your effective ENTRYPOINT.

At build-time, you can specify a default CMD, which is what gets run
if no arguments are passed to docker run.

When you do 'docker run imagename blah blah blah', the 'blah blah
blah' gets passed as arguments to ENTRYPOINT. If you want to specify a
different ENTRYPOINT at runtime, you need to use the --entrypoint
switch.
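The combination rule can be modeled as simple argv concatenation (my own illustrative model, not Docker source; argv is flattened to a single string for brevity):

```shell
# Model: final argv = ENTRYPOINT + (runtime args if given, else default CMD).
combine() {
  entrypoint=$1 cmd=$2 runtime=$3
  if [ -n "$runtime" ]; then args=$runtime; else args=$cmd; fi
  if [ -n "$entrypoint" ]; then
    printf '%s %s\n' "$entrypoint" "$args"
  else
    printf '%s\n' "$args"
  fi
}

combine "/init" "" "commandline"   # -> /init commandline
combine "" "/init" ""              # -> /init
combine "" "/bin/bash" "echo hi"   # -> echo hi  (runtime args replace CMD)
```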

So, for example: the default ubuntu image has a null ENTRYPOINT, and
the default CMD is "/bin/bash". If I run `docker run ubuntu`, then
/bin/bash gets executed (and quits immediately, since it doesn't have
anything to do).

If I run `docker run ubuntu echo hello`, then /bin/echo is executed.

In my Ubuntu baseimage, I made the ENTRYPOINT "s6-svscan /etc/s6". In
hindsight, this probably wasn't the best idea. If the user types
"docker run jprjr/ubuntu-baseimage hey there", then the effective
command becomes "s6-svscan /etc/s6 hey there" - which goes against how
most other Docker images work.

So, if I pull up Laurent's earlier list of options for the client:

> * docker run image commandline
>   Runs commandline in the image without starting the supervision environment.
> * docker run image /init
>   Starts the supervision environment and lets it run forever.
> * docker run image /init commandline
>   Runs commandline in the fully operational supervision environment. When
>commandline exits, the supervision environment is stopped and cleaned up.

I'm going to call these case 1 (command with no supervision environment), case
2 (default supervision environment), and case 3 (supervision environment with
a provided command).

Here's a breakdown of each case's invocation, given a set ENTRYPOINT and CMD:

ENTRYPOINT = null, CMD = /init

* Case 1: `docker run image commandline`
* Case 2: `docker run image`
* Case 3: `docker run image /init commandline`

ENTRYPOINT = /init, CMD = null

* Case 1: `docker run --entrypoint="" image commandline`
* Case 2: `docker run image`
* Case 3: `docker run image commandline`

Now, something worth noting is that none of these commands runs interactively -
to run interactively, you run something like 'docker run -ti ubuntu /bin/sh'.

-t allocates a TTY, and -i keeps STDIN open.

So, I think the right thing to do is make /init check if there's a TTY
allocated and that STDIN is open, and if so, just exec into the passed
arguments without starting the supervision tree.

I'm not going to lie, I don't know the details of how to actually do that.
Assuming that's possible, your use cases become this:

ENTRYPOINT = /init (with TTY/STDIN detection), CMD = null

* Case 1: `docker run -ti image commandline`
* Case 2: `docker run image`
* Case 3: `docker run image commandline`

So basically, if you want to run your command interactively with no execution
environment, you just pass '-ti' to 'docker run' like you normally do. If
you want it to run under the supervision tree, just don't pass the '-ti'
flags. This makes the image work like pretty much every image ever, and
the user doesn't ever need to type out "/init".

Laurent, how hard is it to check if you're attached to a TTY or not? This is
where we start getting into your area of expertise :)

-John


Re: process supervisor - considerations for docker

2015-02-26 Thread Laurent Bercot

On 26/02/2015 14:11, John Regan wrote:

Just to clarify," docker run" spins up a new container, so that wouldn't work for 
stopping a container. It would just spin up a new container running "s6-svscanctl -t 
service"

To stop, you run "docker stop "


 Ha! Shows how much I know about Docker.
 I believe the idea is sound, though. And definitely implementable.

--
 Laurent



Re: process supervisor - considerations for docker

2015-02-26 Thread John Regan
Sorry for the formatting, I'm sending this from my phone.

> Starts the supervision environment and lets it run forever. It can be
stopped from the outside via "docker run image s6-svscanctl -t /service".

Just to clarify," docker run" spins up a new container, so that wouldn't work 
for stopping a container. It would just spin up a new container running 
"s6-svscanctl -t service" 

To stop, you run "docker stop " 

On February 26, 2015 6:59:30 AM CST, Laurent Bercot 
 wrote:
>On 26/02/2015 13:00, Dreamcat4 wrote:
>> ^^ Okay so this is what I have been trying to say but Gorka has put
>> more elegantly here. So you kinda have to try to support both.
>>
>>> * start your supervisor by default: docker run your-image
>>> * get access to the container directly without any s6 process
>started:
>>> docker run your-image /bin/bash
>>> * run a custom script and supervise it: docker run your-image /init
>>> /your-custom-script
>
>  I'm still catching up on Docker and this thread, and working on other
>small things, and I will prepare a general reply from s6's point of
>view, but I have a suggestion for this part: I believe it can be
>implemented in a simple fashion.
>
>- When the image starts, /init sets up and starts the supervision tree,
>but not before forking a "stage 2" process that blocks until the
>supervision tree is operational; the "init-stage1" script in the
>examples/ subdirectory of the s6 package shows how it can be done.
>  - If any arguments have been given to /init, the stage 2 process
>takes them. ("background { init-stage2 $@ }")
>  - The stage 2 process can perform any one-time initialization that
>is needed for the image to be fully operational.
>- If stage 2's command line is empty, the stage 2 process simply exits;
>the whole container keeps running, with a supervision tree inside.
>  - If stage 2's command line is not empty, the stage 2 process execs
>into "foreground { $@ } s6-svscanctl -t /service".
>This will run the given command line, then shut down the container.
>
>  This gives the client the following options:
>
>  * docker run image commandline
>Runs commandline in the image without starting the supervision
>environment.
>  * docker run image /init
>  Starts the supervision environment and lets it run forever. It can be
>stopped from the outside via "docker run image s6-svscanctl -t
>/service".
>  * docker run image /init commandline
>Runs commandline in the fully operational supervision environment. When
>commandline exits, the supervision environment is stopped and cleaned
>up.
>  * In all cases, everything will be run with the right environment
>transmitted from the docker invocation.
>
>If that is the flexibility you want, it can definitely be achieved, and
>it's really easy to do - it's just a question of scripting around the
>s6 primitives. (More on that later.)
>
>  While I'm at it: I definitely recommend checking out Alpine Linux if
>you're looking for a lightweight, practical distribution that makes
>static linking easy. The maintainers are regular contributors to the
>musl libc mailing list and seem intent on keeping things working.
>
>-- 
>  Laurent

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: process supervisor - considerations for docker

2015-02-26 Thread Laurent Bercot

On 26/02/2015 13:00, Dreamcat4 wrote:

^^ Okay so this is what I have been trying to say but Gorka has put
more elegantly here. So you kinda have to try to support both.


* start your supervisor by default: docker run your-image
* get access to the container directly without any s6 process started:
docker run your-image /bin/bash
* run a custom script and supervise it: docker run your-image /init
/your-custom-script


 I'm still catching up on Docker and this thread, and working on other
small things, and I will prepare a general reply from s6's point of
view, but I have a suggestion for this part: I believe it can be
implemented in a simple fashion.

 - When the image starts, /init sets up and starts the supervision tree,
but not before forking a "stage 2" process that blocks until the
supervision tree is operational; the "init-stage1" script in the
examples/ subdirectory of the s6 package shows how it can be done.
 - If any arguments have been given to /init, the stage 2 process
takes them. ("background { init-stage2 $@ }")
 - The stage 2 process can perform any one-time initialization that
is needed for the image to be fully operational.
 - If stage 2's command line is empty, the stage 2 process simply exits;
the whole container keeps running, with a supervision tree inside.
 - If stage 2's command line is not empty, the stage 2 process execs
into "foreground { $@ } s6-svscanctl -t /service".
This will run the given command line, then shut down the container.
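 In plain sh, that flow might be sketched like this (the real thing would use execline's background/foreground as described; the readiness wait is elided, and /service is an assumed scan directory):

```shell
#!/bin/sh
# Sketch of the /init flow described above: fork stage 2, then run the scanner.
stage2() {
  # ... block here until the supervision tree is operational
  #     (see s6's examples/init-stage1 for the real mechanism) ...
  if [ $# -eq 0 ]; then
    exit 0                    # empty command line: tree runs forever
  fi
  "$@"                        # foreground { $@ }
  s6-svscanctl -t /service    # then shut the container down
}
stage2 "$@" &                 # background { init-stage2 $@ }
exec s6-svscan /service
```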

 This gives the client the following options:

 * docker run image commandline
   Runs commandline in the image without starting the supervision environment.
 * docker run image /init
   Starts the supervision environment and lets it run forever. It can be
stopped from the outside via "docker run image s6-svscanctl -t /service".
 * docker run image /init commandline
   Runs commandline in the fully operational supervision environment. When
commandline exits, the supervision environment is stopped and cleaned up.
 * In all cases, everything will be run with the right environment
transmitted from the docker invocation.

 If that is the flexibility you want, it can definitely be achieved, and
it's really easy to do - it's just a question of scripting around the
s6 primitives. (More on that later.)

 While I'm at it: I definitely recommend checking out Alpine Linux if
you're looking for a lightweight, practical distribution that makes
static linking easy. The maintainers are regular contributors to the
musl libc mailing list and seem intent on keeping things working.

--
 Laurent


Re: process supervisor - considerations for docker

2015-02-26 Thread Dreamcat4
On Thu, Feb 26, 2015 at 8:31 AM, Gorka Lertxundi  wrote:
> Hi,
>
> My name is Gorka, not Gornak! It seems like suddenly I discovered that I was
> born in Eastern Europe! hehe :)
>
> I'll answer to both of you, mixed, so try not to get confused.
>
> Let's go,
>
>> > But Gornak - I must say that your new ubuntu base image really seem *a
>> > lot* better than the phusion/baseimage one. It is fantastic and an
>> > excellent job you have done there and you continue to update with new
>> > versions of s6, etc. Can't really say thank you enough for that.
>
>
> Thanks!
>
>> I think if anybody were to start up a new baseimage project, Alpine is
>> the way to go, hands-down. Tiny, efficient images.
>
>
> Wow, I hadn't heard about Alpine Linux. What would differentiate it from,
> for example, busybox with opkg? https://github.com/progrium/busybox. Busybox
> is battle-tested, and having a package manager in it seems the right way.
>
> The problem with these 'not as mainstream as ubuntu' distros is the smaller
> community around them. That community discovers things that you probably
> wouldn't otherwise be aware of: bugfixes, fast security updates, ... . So my
> main concern about the image is not its size but how easily it can be kept
> up to date and secure, no matter its size. Even so, although you probably
> know this, docker stores images incrementally, so the base image is stored
> just once and all app-specific images sit on top of it.
>
> It is always the result of a commitment to ease of use, size and
> maintainability.
>
>>
>> > Great work Gorka for providing these linux x86_64 binaries on Github
>> > releases.
>> > This was exactly the kind of thing I was hoping for / looking for in
>> > regards to that aspect.
>
>
> As I said in my last email, I'll try to keep them updated
>
>> > Right so I was half-expecting this kind of response (and from John
>> > Regan too). In my initial post I could not think of a concise-enough
>> > way to demonstrate and explain my reasoning behind that specific
>> > request. At least not without entering into a whole other big long
>> > discussion that would have detracted / derailed from some of the other
>> > important considerations and discussion points in respect to docker.
>> >
>> > Basically without that capability (which I am aware goes against
>> > convention for process supervisors that occupy pid 1). Then you are
>> > forcing docker users to choose an XOR (exclusive-OR) between either
>> > using s6 process supervision or the ability to specify command line
>> > arguments to their docker containers (via ENTRYPOINT and/or CMD).
>> > Which essentially is like a breakage of those ENTRYPOINT and CMD
>> > features of docker. At least that is my understanding how pretty much
>> > all of these process supervisors behave. And not any criticism
>> > levelled at s6 alone. Since you would not typically expect this
>> > feature anyway (before we had containerisation etc.). It is very
>> > docker-specific.
>> >
>> > Both of you seem to have stated effectively that you don't really see
>> > such a pressing reason why it is needed.
>> >
>> > So then it's another thing entirely for me to explain why and convince
>> > you guys there are good reasons for it being important to be able to
>> > continue to use CMD and ENTRYPOINT for specifying command line
>> > arguments still remains an important thing after adding a process
>> > supervisor. There are actually many different reasons for why that is
>> > desirable (that I can think of right now). But that's another
>> > discussion and case for me to make to you.
>> >
>> > I would be happy to go into that aspect further. Perhaps off-the
>> > mailing list is a better idea. To then come back here again when that
>> > discussion is over and concluded with a short summary. But I don't
>> > want to waste anyone's time so please reply and indicate if you would
>> > really like for me to go into more depth with better justifications
>> > for why we need that particular feature.
>
>
> I don't think it must be one or the other. With CMD [ "/init" ] you can:

^^ Okay so this is what I have been trying to say but Gorka has put
more elegantly here. So you kinda have to try to support both.

> * start your supervisor by default: docker run your-image
> * get access to the container directly without any s6 process started:
> docker run your-image /bin/bash
> * run a custom script and supervise it: docker run your-image /init
> /your-custom-script
>
>>
>> > Would appreciate coming back to how we can do this later on. After I
>> > have made a more convincing case for why it's actually needed. My
>> > naive assumption, not knowing any of s6 yet: It should be that simply
>> > passing on an argv[] array ought to be possible. And perhaps without
>> > too many extra hassles or hoops to jump through.
>
>
> Would appreciate those use-cases! :-)

To give an overview:

* Containers that provide development tools / dev environments - often
that category of docker images take d

Re: process supervisor - considerations for docker

2015-02-26 Thread Gorka Lertxundi
Hi,

My name is Gorka, not Gornak! It seems like I suddenly discovered that I
was born in eastern Europe! hehe :)

I'll answer to both of you, mixed, so try not to get confused.

Lets go,

> But Gornak - I must say that your new ubuntu base image really seems *a
> lot* better than the phusion/baseimage one. It is fantastic and an
> excellent job you have done there and you continue to update with new
> versions of s6, etc. Can't really say thank you enough for that.
>

Thanks!

> I think if anybody were to start up a new baseimage project, Alpine is
> the way to go, hands-down. Tiny, efficient images.


Wow, I hadn't heard about Alpine Linux. What would differentiate it from, for
example, busybox with opkg? https://github.com/progrium/busybox. Busybox is
battle-tested, and having a package manager in it seems the right way.

The problem with these 'not as mainstream as ubuntu' distros is the smaller
community around them. That community discovers things that you probably
wouldn't be aware of: bugfixes, fast security updates, ... . So my main
concern about the image is not its size but how easy it is to keep up to
date and secure, no matter its size. Even so, as you probably know, docker
stores images incrementally, so the base image is stored only once and all
app-specific images are layered on top of it.

It is always the result of a trade-off between ease of use, size and
maintainability.


> Great work Gorka for providing these linux x86_64 binaries on Github
> releases. This was exactly the kind of thing I was hoping for / looking
> for in regards to that aspect.

As I said in my last email, I'll try to keep them updated.

> Right so I was half-expecting this kind of response (and from John
> > Regan too). In my initial post I could not think of a concise-enough
> > way to demonstrate and explain my reasoning behind that specific
> > request. At least not without entering into a whole other big long
> > discussion that would have detracted / derailed from some of the other
> > important considerations and discussion points in respect to docker.
> >
> > Basically without that capability (which I am aware goes against
> > convention for process supervisors that occupy pid 1). Then you are
> > forcing docker users to choose an XOR (exclusive-OR) between either
> > using s6 process supervision or the ability to specify command line
> > arguments to their docker containers (via ENTRYPOINT and/or CMD).
> > Which essentially is like a breakage of those ENTRYPOINT and CMD
> > features of docker. At least that is my understanding of how pretty much
> > all of these process supervisors behave. And not any criticism
> > levelled at s6 alone. Since you would not typically expect this
> > feature anyway (before we had containerisation etc.). It is very
> > docker-specific.
> >
> > Both of you seem to have stated effectively that you don't really see
> > such a pressing reason why it is needed.
> >
> > So then it's another thing entirely for me to explain why and convince
> > you guys there are good reasons why being able to continue to use
> > CMD and ENTRYPOINT for specifying command line arguments remains
> > important after adding a process supervisor. There are actually many
> > different reasons why that is
> > desirable (that I can think of right now). But that's another
> > discussion and case for me to make to you.
> >
> > I would be happy to go into that aspect further. Perhaps off the
> > mailing list is a better idea. To then come back here again when that
> > discussion is over and concluded with a short summary. But I don't
> > want to waste anyone's time so please reply and indicate if you would
> > really like for me to go into more depth with better justifications
> > for why we need that particular feature.
>

I don't think it must be one or the other. With CMD [ "/init" ] you can:
* start your supervisor by default: docker run your-image
* get access to the container directly without any s6 process started:
docker run your-image /bin/bash
* run a custom script and supervise it: docker run your-image /init
/your-custom-script
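
A minimal Dockerfile sketch of that arrangement (the /init path and the
rootfs/ layout are assumptions for illustration, not any particular image's):

```dockerfile
# Hypothetical base image: /init sets up the environment and execs into
# s6-svscan when docker run is given no other command.
FROM your-base-image
COPY rootfs/ /
CMD [ "/init" ]
```

Because CMD is only a default (unlike a hard-wired ENTRYPOINT), whatever
command is passed to docker run simply replaces it, which is what makes all
three invocations above possible.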


> > Would appreciate coming back to how we can do this later on. After I
> > have made a more convincing case for why it's actually needed. My
> > naive assumption, not knowing any of s6 yet: It should be that simply
> > passing on an argv[] array ought to be possible. And perhaps without
> > too many extra hassles or hoops to jump through.
>

Would appreciate those use-cases! :-)


> > >>
> https://github.com/glerchundi/container-base/blob/master/rootfs/etc/s6/.s6-svscan/finish#L23-L31
> >
> > That is awesome ^^. Like the white whale of Moby-Dick. It really felt
> > good to see this here in your base image. Really appreciate that.
>

With some changes to fit my docker base image needs, but that was taken from
Laurent's examples in the s6 tarball, so thanks Laurent :-)


> > Nope. At least for the initial guidelines I gave - the 

Re: process supervisor - considerations for docker

2015-02-25 Thread John Regan
On Thu, Feb 26, 2015 at 01:38:14AM +, Dreamcat4 wrote:
> Really overwhelmed by all this. It is a much more positive response
> than expected. And many good things. I am very grateful. Some bits I
> would still like us to continue discussing.
> 
> But Gornak - I must say that your new ubuntu base image really seems *a
> lot* better than the phusion/baseimage one. It is fantastic and an
> excellent job you have done there and you continue to update with new
> versions of s6, etc. Can't really say thank you enough for that.

Just to toss an idea out there - the phusion baseimage, my baseimage,
and Gornak's are all based off Ubuntu. Our images, while smaller than
phusion's, are still fairly large downloads.

Have any of you guys looked at Alpine Linux? I think the guys at
Alpine may have accidentally created the perfect distro for
Docker. It's a *great* balance between an Ubuntu-based image and a
Busybox/buildroot-based image:

* The base image is something like 10MB, unlike Ubuntu
* It still has a package manager, unlike busybox
* Their packages tend to bring in less additional crap
* You still have that composability that Ubuntu/CentOS/etc images have

I actually just submitted s6 packages to Alpine for testing earlier
today, so hopefully it'll be in their repos at some point.

The fundamental problem with a buildroot/busybox-based image is that
it's really hard to build new images. You can't just say "FROM this;
RUN package-manager install that"

I think if anybody were to start up a new baseimage project, Alpine is
the way to go, hands-down. Tiny, efficient images.
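
As a sketch of that composability (the package name here is an assumption
about what Alpine's repositories carry):

```dockerfile
# Roughly the same one-liner workflow as an Ubuntu/CentOS base,
# but starting from a ~10MB image.
FROM alpine
RUN apk add --update nginx && rm -rf /var/cache/apk/*
```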
> 
> Anyway, back to the discussion:
> 
> On Wed, Feb 25, 2015 at 3:57 PM, John Regan  wrote:
> > On Wed, Feb 25, 2015 at 03:58:07PM +0100, Gorka Lertxundi wrote:
> >> Hello,
> >>
> >> After John's great post, I tried to solve exactly the same problems. 
> >> I
> >> created my own base image based primarily on John's and Phusion's base
> >> images.
> >
> > That's awesome - I get so excited when I hear somebody's actually
> > read, digested, and taken action based on something I wrote. So cool!
> > :)
> >
> >>
> >> See my thoughts below.
> >>
> >> 2015-02-25 12:30 GMT+01:00 Laurent Bercot :
> >>
> >> >
> >> >  (Moving the discussion to the supervision@list.skarnet.org list.
> >> > The original message is quoted below.)
> >> >
> >> >  Hi Dreamcat4,
> >> >
> >> >  Thanks for your detailed message. I'm very happy that s6 found an
> >> > application in docker, and that there's such an interest for it!
> >> > skaw...@list.skarnet.org is indeed the right place to reach me and
> >> > discuss the software I write, but for s6 in particular and process
> >> > supervisors in general, supervision@list.skarnet.org is the better
> >> > place - it's full of people with process supervision experience.
> >> >
> >> >  Your message gives a lot of food for thought, and I don't have time
> >> > right now to give it all the attention it deserves. Tonight or
>> > tomorrow, though, I will; and other people on the supervision list
> >> > will certainly have good insights.
> >> >
> >> >  Cheers!
> >> >
> >> > -- Laurent
> >> >
> >> >
> >> > On 25/02/2015 11:55, Dreamcat4 wrote:
> >> >
> >> >> Hello,
> >> >> Now there is someone (John Regan) who has made s6 images for docker.
> >> >> And written a blog post about it. Which is a great effort - and the
> >> >> reason I've come here. But it gives me a taste of wanting more.
> >> >> Something a bit more foolproof, and simpler, to work specifically
> >> >> inside of docker.
> >> >>
> >> >>  From that blog post I get a general impression that s6 has many
> >> >> advantages. And it may be a good candidate for docker. But I would be
> >> >> remiss not to ask the developers of s6 themselves to take some kind
> >> >> of personal interest in considering how s6 might best
> >> >> work inside of docker specifically. I hope that this is the right
> >> >> mailing list to reach s6 developers / discuss such matters. Is this
> >> >> the correct mailing list for s6 dev discussions?
> >> >>
> >> >> I've read and read around the subject of process supervision inside
> >> >> docker. Various people explain how or why they use various different
> >> >> process supervisors in docker (not just s6). None of them really quite
> >> >> seem ideal. I would like to be wrong about that but nothing has fully
> >> >> convinced me so far. Perhaps it is a fair criticism to say that I
> >> >> still have a lot more to learn in regards to process supervisors. But
> >> >> I have no interest in getting bogged down by that. To me, I already
> >> >> know more-or-less enough about how docker manages (or rather
> >> >> mis-manages!) its container processes to have an opinion about what
> >> >> is needed, from a docker-side perspective. And know enough that the
> >> >> docker project itself won't fix these issues. For one thing because of
> >> >> not owning what's running on the inside of containers. And also
> >> >> because of their single-proces

Re: process supervisor - considerations for docker

2015-02-25 Thread Dreamcat4
Really overwhelmed by all this. It is a much more positive response
than expected. And many good things. I am very grateful. Some bits I
would still like us to continue discussing.

But Gornak - I must say that your new ubuntu base image really seems *a
lot* better than the phusion/baseimage one. It is fantastic and an
excellent job you have done there and you continue to update with new
versions of s6, etc. Can't really say thank you enough for that.

Anyway, back to the discussion:

On Wed, Feb 25, 2015 at 3:57 PM, John Regan  wrote:
> On Wed, Feb 25, 2015 at 03:58:07PM +0100, Gorka Lertxundi wrote:
>> Hello,
>>
>> After John's great post, I tried to solve exactly the same problems. I
>> created my own base image based primarily on John's and Phusion's base
>> images.
>
> That's awesome - I get so excited when I hear somebody's actually
> read, digested, and taken action based on something I wrote. So cool!
> :)
>
>>
>> See my thoughts below.
>>
>> 2015-02-25 12:30 GMT+01:00 Laurent Bercot :
>>
>> >
>> >  (Moving the discussion to the supervision@list.skarnet.org list.
>> > The original message is quoted below.)
>> >
>> >  Hi Dreamcat4,
>> >
>> >  Thanks for your detailed message. I'm very happy that s6 found an
>> > application in docker, and that there's such an interest for it!
>> > skaw...@list.skarnet.org is indeed the right place to reach me and
>> > discuss the software I write, but for s6 in particular and process
>> > supervisors in general, supervision@list.skarnet.org is the better
>> > place - it's full of people with process supervision experience.
>> >
>> >  Your message gives a lot of food for thought, and I don't have time
>> > right now to give it all the attention it deserves. Tonight or
>> > tomorrow, though, I will; and other people on the supervision list
>> > will certainly have good insights.
>> >
>> >  Cheers!
>> >
>> > -- Laurent
>> >
>> >
>> > On 25/02/2015 11:55, Dreamcat4 wrote:
>> >
>> >> Hello,
>> >> Now there is someone (John Regan) who has made s6 images for docker.
>> >> And written a blog post about it. Which is a great effort - and the
>> >> reason I've come here. But it gives me a taste of wanting more.
>> >> Something a bit more foolproof, and simpler, to work specifically
>> >> inside of docker.
>> >>
>> >>  From that blog post I get a general impression that s6 has many
>> >> advantages. And it may be a good candidate for docker. But I would be
>> >> remiss not to ask the developers of s6 themselves to take some kind
>> >> of personal interest in considering how s6 might best
>> >> work inside of docker specifically. I hope that this is the right
>> >> mailing list to reach s6 developers / discuss such matters. Is this
>> >> the correct mailing list for s6 dev discussions?
>> >>
>> >> I've read and read around the subject of process supervision inside
>> >> docker. Various people explain how or why they use various different
>> >> process supervisors in docker (not just s6). None of them really quite
>> >> seem ideal. I would like to be wrong about that but nothing has fully
>> >> convinced me so far. Perhaps it is a fair criticism to say that I
>> >> still have a lot more to learn in regards to process supervisors. But
>> >> I have no interest in getting bogged down by that. To me, I already
>> >> know more-or-less enough about how docker manages (or rather
>> >> mis-manages!) its container processes to have an opinion about what
>> >> is needed, from a docker-side perspective. And know enough that the
>> >> docker project itself won't fix these issues. For one thing because of
>> >> not owning what's running on the inside of containers. And also
>> >> because of their single-process view of things. Anyway.
>> >> That kind of political nonsense doesn't matter for our discussion. I
>> >> just want to have a technical discussion about what is needed, and how
>> >> might be the best way to solve the problem!
>> >>
>> >>
>> >> MY CONCERNS ABOUT USING S6 INSIDE OF DOCKER
>> >>
>> >> In regards of s6 only, currently these are my currently perceived
>> >> shortcomings when using it in docker:
>> >>
>> >> * it's not clear how to pass in programs arguments via CMD and
>> >> ENTRYPOINT in docker
>> >
>> >- in fact i have not seen ANY docker process supervisor solutions
>> >> show how to do this (except perhaps phusion base image)
>> >>
>> >> * it is not clear if ENV vars are preserved. That is also something
>> >> essential for docker.
>> >
>> >
>> >> * s6 has many utilities s6-*
>> >>  - not clear which ones are actually required for making a docker
>> >> process supervisor
>> >>
>> >>
>> > * s6 not available yet as .deb or .rpm package
>> >>  - official packages are helpful because on different distros:
>> >> + standard locations where to put config files and so on may
>> >> differ.
>> >> + to install man pages too, in the right place
>> >>
>> >>
>> > * s6 is not available as official single pre-compiled binary file 

Re: process supervisor - considerations for docker

2015-02-25 Thread John Albietz
Interesting work Gorka! Thx for sharing and keep it up. 

Seeing examples of how people are using s6, particularly including the 
build/compile steps, makes it much easier to test it out etc. 

- John

> On Feb 25, 2015, at 12:23 PM, Gorka Lertxundi  wrote:
> 
> John,
> 
> I got to s6 because I read your post so...big thanks! :-)
> 
> [...] release feature like that. Expect a pull request soon for the other s6
> packages.
> 
> They are always welcomed!
> 
> [...] wanted to let you know this image is pretty sweet and makes mine look
> like garbage :)
> 
> C'mon, lots of ideas were grabbed without your permission, like that great
> COPY /rootfs /!
> 
> [...] the same way. Just wanted to raise that and double-check there's no
> differences.
> 
> Laurent, any differences between `kill -15 1` and `s6-svscanctl -t
> /path/to/whatever`?




Re: process supervisor - considerations for docker

2015-02-25 Thread Laurent Bercot

On 25/02/2015 21:23, Gorka Lertxundi wrote:

Laurent, any differences between `kill -15 1` and `s6-svscanctl -t 
/path/to/whatever`?


 Well, maybe it's not the case with a fake process 1 created by Docker,
but normally "kill -15 1" just doesn't work. The kernel is supposed
to ignore any signal sent to process 1 from another process.
(The kernel itself can raise signals in process 1, though: for instance,
on Linux, a SIGINT when you Ctrl-Alt-Del on the console, if
/proc/sys/kernel/ctrl-alt-del is 0.)

 So, to be safe, use s6-svscanctl -t.

--
 Laurent



Re: process supervisor - considerations for docker

2015-02-25 Thread Gorka Lertxundi
John,

I got to s6 because I read your post so...big thanks! :-)

[...] release feature like that. Expect a pull request soon for the other s6
packages.

They are always welcomed!

[...] wanted to let you know this image is pretty sweet and makes mine look
like garbage :)

C'mon, lots of ideas were grabbed without your permission, like that great COPY
/rootfs /!

[...] the same way. Just wanted to raise that and double-check there's no
differences.

Laurent, any differences between `kill -15 1` and `s6-svscanctl -t
/path/to/whatever`?

Re: process supervisor - considerations for docker

2015-02-25 Thread John Regan
On Wed, Feb 25, 2015 at 03:58:07PM +0100, Gorka Lertxundi wrote:
> Hello,
> 
> After John's great post, I tried to solve exactly the same problems. I
> created my own base image based primarily on John's and Phusion's base
> images.

That's awesome - I get so excited when I hear somebody's actually
read, digested, and taken action based on something I wrote. So cool!
:)

> 
> See my thoughts below.
> 
> 2015-02-25 12:30 GMT+01:00 Laurent Bercot :
> 
> >
> >  (Moving the discussion to the supervision@list.skarnet.org list.
> > The original message is quoted below.)
> >
> >  Hi Dreamcat4,
> >
> >  Thanks for your detailed message. I'm very happy that s6 found an
> > application in docker, and that there's such an interest for it!
> > skaw...@list.skarnet.org is indeed the right place to reach me and
> > discuss the software I write, but for s6 in particular and process
> > supervisors in general, supervision@list.skarnet.org is the better
> > place - it's full of people with process supervision experience.
> >
> >  Your message gives a lot of food for thought, and I don't have time
> > right now to give it all the attention it deserves. Tonight or
> > tomorrow, though, I will; and other people on the supervision list
> > will certainly have good insights.
> >
> >  Cheers!
> >
> > -- Laurent
> >
> >
> > On 25/02/2015 11:55, Dreamcat4 wrote:
> >
> >> Hello,
> >> Now there is someone (John Regan) who has made s6 images for docker.
> >> And written a blog post about it. Which is a great effort - and the
> >> reason I've come here. But it gives me a taste of wanting more.
> >> Something a bit more foolproof, and simpler, to work specifically
> >> inside of docker.
> >>
> >>  From that blog post I get a general impression that s6 has many
> >> advantages. And it may be a good candidate for docker. But I would be
> >> remiss not to ask the developers of s6 themselves to take some kind
> >> of personal interest in considering how s6 might best
> >> work inside of docker specifically. I hope that this is the right
> >> mailing list to reach s6 developers / discuss such matters. Is this
> >> the correct mailing list for s6 dev discussions?
> >>
> >> I've read and read around the subject of process supervision inside
> >> docker. Various people explain how or why they use various different
> >> process supervisors in docker (not just s6). None of them really quite
> >> seem ideal. I would like to be wrong about that but nothing has fully
> >> convinced me so far. Perhaps it is a fair criticism to say that I
> >> still have a lot more to learn in regards to process supervisors. But
> >> I have no interest in getting bogged down by that. To me, I already
> >> know more-or-less enough about how docker manages (or rather
> >> mis-manages!) its container processes to have an opinion about what
> >> is needed, from a docker-side perspective. And know enough that the
> >> docker project itself won't fix these issues. For one thing because of
> >> not owning what's running on the inside of containers. And also
> >> because of their single-process view of things. Anyway.
> >> That kind of political nonsense doesn't matter for our discussion. I
> >> just want to have a technical discussion about what is needed, and how
> >> might be the best way to solve the problem!
> >>
> >>
> >> MY CONCERNS ABOUT USING S6 INSIDE OF DOCKER
> >>
> >> In regards of s6 only, currently these are my currently perceived
> >> shortcomings when using it in docker:
> >>
> >> * it's not clear how to pass in programs arguments via CMD and
> >> ENTRYPOINT in docker
> >
> >- in fact i have not seen ANY docker process supervisor solutions
> >> show how to do this (except perhaps phusion base image)
> >>
> >> * it is not clear if ENV vars are preserved. That is also something
> >> essential for docker.
> >
> >
> >> * s6 has many utilities s6-*
> >>  - not clear which ones are actually required for making a docker
> >> process supervisor
> >>
> >>
> > * s6 not available yet as .deb or .rpm package
> >>  - official packages are helpful because on different distros:
> >> + standard locations where to put config files and so on may
> >> differ.
> >> + to install man pages too, in the right place
> >>
> >>
> > * s6 is not available as official single pre-compiled binary file for
> >> download via wget or curl
> >> - which would be the most ideal way to install it into a docker
> >> container
> >>
> >
> >> ^^ Some of these perceived shortcomings are more important /
> >> significant than others! Some are not in the remit of s6 development
> >> to be concerned about. Some are mild nit-picking, or the ignorance of
> >> not knowing, having not actually tried out s6 before.
> >>
> >> But my general point is that it is not clear-enough to me (from my
> >> perspective) whether s6 can actually satisfy all of the significant
> >> docker-specific considerations. Which I have not properly stated yet.
> >> So he

Re: process supervisor - considerations for docker

2015-02-25 Thread Gorka Lertxundi
Hello,

After John's great post, I tried to solve exactly the same problems. I
created my own base image based primarily on John's and Phusion's base
images.

See my thoughts below.

2015-02-25 12:30 GMT+01:00 Laurent Bercot :

>
>  (Moving the discussion to the supervision@list.skarnet.org list.
> The original message is quoted below.)
>
>  Hi Dreamcat4,
>
>  Thanks for your detailed message. I'm very happy that s6 found an
> application in docker, and that there's such an interest for it!
> skaw...@list.skarnet.org is indeed the right place to reach me and
> discuss the software I write, but for s6 in particular and process
> supervisors in general, supervision@list.skarnet.org is the better
> place - it's full of people with process supervision experience.
>
>  Your message gives a lot of food for thought, and I don't have time
> right now to give it all the attention it deserves. Tonight or
> tomorrow, though, I will; and other people on the supervision list
> will certainly have good insights.
>
>  Cheers!
>
> -- Laurent
>
>
> On 25/02/2015 11:55, Dreamcat4 wrote:
>
>> Hello,
>> Now there is someone (John Regan) who has made s6 images for docker.
>> And written a blog post about it. Which is a great effort - and the
>> reason I've come here. But it gives me a taste of wanting more.
>> Something a bit more foolproof, and simpler, to work specifically
>> inside of docker.
>>
>>  From that blog post I get a general impression that s6 has many
>> advantages. And it may be a good candidate for docker. But I would be
>> remiss not to ask the developers of s6 themselves to take some kind
>> of personal interest in considering how s6 might best
>> work inside of docker specifically. I hope that this is the right
>> mailing list to reach s6 developers / discuss such matters. Is this
>> the correct mailing list for s6 dev discussions?
>>
>> I've read and read around the subject of process supervision inside
>> docker. Various people explain how or why they use various different
>> process supervisors in docker (not just s6). None of them really quite
>> seem ideal. I would like to be wrong about that but nothing has fully
>> convinced me so far. Perhaps it is a fair criticism to say that I
>> still have a lot more to learn in regards to process supervisors. But
>> I have no interest in getting bogged down by that. To me, I already
>> know more-or-less enough about how docker manages (or rather
>> mis-manages!) its container processes to have an opinion about what
>> is needed, from a docker-side perspective. And know enough that the
>> docker project itself won't fix these issues. For one thing because of
>> not owning what's running on the inside of containers. And also
>> because of their single-process view of things. Anyway.
>> That kind of political nonsense doesn't matter for our discussion. I
>> just want to have a technical discussion about what is needed, and how
>> might be the best way to solve the problem!
>>
>>
>> MY CONCERNS ABOUT USING S6 INSIDE OF DOCKER
>>
>> In regards of s6 only, currently these are my currently perceived
>> shortcomings when using it in docker:
>>
>> * it's not clear how to pass in programs arguments via CMD and
>> ENTRYPOINT in docker
>
>- in fact i have not seen ANY docker process supervisor solutions
>> show how to do this (except perhaps phusion base image)
>>
>> * it is not clear if ENV vars are preserved. That is also something
>> essential for docker.
>
>
>> * s6 has many utilities s6-*
>>  - not clear which ones are actually required for making a docker
>> process supervisor
>>
>>
> * s6 not available yet as .deb or .rpm package
>>  - official packages are helpful because on different distros:
>> + standard locations where to put config files and so on may
>> differ.
>> + to install man pages too, in the right place
>>
>>
> * s6 is not available as official single pre-compiled binary file for
>> download via wget or curl
>> - which would be the most ideal way to install it into a docker
>> container
>>
>
>> ^^ Some of these perceived shortcomings are more important /
>> significant than others! Some are not in the remit of s6 development
>> to be concerned about. Some are mild nit-picking, or the ignorance of
>> not knowing, having not actually tried out s6 before.
>>
>> But my general point is that it is not clear-enough to me (from my
>> perspective) whether s6 can actually satisfy all of the significant
>> docker-specific considerations. Which I have not properly stated yet.
>> So here they are listed below…
>>
>>
>> DOCKER-SPECIFIC CONSIDERATIONS FOR A PROCESS SUPERVISOR
>>
>> A good process supervisor for docker should ideally:
>>
>> * be a single pre-compiled binary program file. That can be downloaded
>> by curl/wget (or can be installed from .deb or .rpm).
>>
>
I always include s6, execline, and s6-portable-utils in my base images;
they are pretty small and manageable. You could find/build them using

Re: process supervisor - considerations for docker

2015-02-25 Thread John Regan
Hi Dreamcat4 -

First things first - I can't stress enough how awesome it is to know
people are using/talking about my Docker images, blog posts, and so
on. Too cool!

I've responded to your concerns/questions/etc throughout the email
below.

-John

On Wed, Feb 25, 2015 at 11:32:37AM +, Dreamcat4 wrote:
> Thank you for moving my message Laurent.
> 
> Sorry for the mixup r.e. the mailing lists. I have subscribed to the
> correct list now (for s6 specific).
> 
> On Wed, Feb 25, 2015 at 11:30 AM, Laurent Bercot
>  wrote:
> >
> >  (Moving the discussion to the supervision@list.skarnet.org list.
> > The original message is quoted below.)
> >
> >  Hi Dreamcat4,
> >
> >  Thanks for your detailed message. I'm very happy that s6 found an
> > application in docker, and that there's such an interest for it!
> > skaw...@list.skarnet.org is indeed the right place to reach me and
> > discuss the software I write, but for s6 in particular and process
> > supervisors in general, supervision@list.skarnet.org is the better
> > place - it's full of people with process supervision experience.
> >
> >  Your message gives a lot of food for thought, and I don't have time
> > right now to give it all the attention it deserves. Tonight or
> > tomorrow, though, I will; and other people on the supervision list
> > will certainly have good insights.
> >
> >  Cheers!
> >
> > -- Laurent
> >
> >
> >
> > On 25/02/2015 11:55, Dreamcat4 wrote:
> >>
> >> Hello,
> >> Now there is someone (John Regan) who has made s6 images for docker.
> >> And written a blog post about it. Which is a great effort - and the
> >> reason I've come here. But it gives me a taste of wanting more.
> >> Something a bit more foolproof, and simpler, to work specifically
> >> inside of docker.
> >>
> >>  From that blog post I get a general impression that s6 has many
> >> advantages. And it may be a good candidate for docker. But I would be
> >> remiss not to ask the developers of s6 themselves to take some kind
> >> of personal interest in considering how s6 might best
> >> work inside of docker specifically. I hope that this is the right
> >> mailing list to reach s6 developers / discuss such matters. Is this
> >> the correct mailing list for s6 dev discussions?
> >>
> >> I've read and read around the subject of process supervision inside
> >> docker. Various people explain how or why they use various different
> >> process supervisors in docker (not just s6). None of them really quite
> >> seem ideal. I would like to be wrong about that but nothing has fully
> >> convinced me so far. Perhaps it is a fair criticism to say that I
> >> still have a lot more to learn in regards to process supervisors. But
> >> I have no interest in getting bogged down by that. To me, I already
> >> know more-or-less enough about how docker manages (or rather
> >> mis-manages!) its container processes to have an opinion about what
> >> is needed, from a docker-side perspective. And know enough that the
> >> docker project itself won't fix these issues. For one thing because of
> >> not owning what's running on the inside of containers. And also
> >> because of their single-process view of things. Anyway.
> >> That kind of political nonsense doesn't matter for our discussion. I
> >> just want to have a technical discussion about what is needed, and how
> >> might be the best way to solve the problem!
> >>
> >>
> >> MY CONCERNS ABOUT USING S6 INSIDE OF DOCKER
> >>
> >> In regards of s6 only, currently these are my currently perceived
> >> shortcomings when using it in docker:
> >>
> >> * it's not clear how to pass in programs arguments via CMD and
> >> ENTRYPOINT in docker
> >>- in fact i have not seen ANY docker process supervisor solutions
> >> show how to do this (except perhaps phusion base image)
> >>

To be honest, I just haven't really done that. I usually use
environment variables to setup my services. For example, if I have a
NodeJS service, I'll run something like

`docker run -e NODEJS_SCRIPT="myapp.js" some-nodejs-image`

Then in my NodeJS `run` script, I'd check if that environment variable
is defined and use it as my argument to NodeJS. I'm just making up
this bit of shell code on the fly, it might have syntax errors, but
you should get the idea:

```
if [ -n "$NODEJS_SCRIPT" ]; then
    exec node "$NODEJS_SCRIPT"
else
    printf "NODEJS_SCRIPT undefined\n" >&2
    touch down
    exit 1
fi
```

Another option is to write a script to use as an entrypoint that
handles command arguments, then execs into s6-svscan.
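
A rough sketch of that entrypoint idea follows. The paths, the
/etc/s6 scan directory, and the cmd-args file are illustrative
assumptions for this discussion, not anything s6 itself mandates:

```
#!/bin/sh
# sketch of an image ENTRYPOINT: capture docker's CMD arguments
# somewhere the service `run` scripts can find them, then hand
# PID 1 duties over to the supervisor.

# save whatever CMD was passed (one argument per line)
mkdir -p /var/run/s6
printf '%s\n' "$@" > /var/run/s6/cmd-args

# replace this shell with s6-svscan scanning the service directory
exec s6-svscan /etc/s6
```

A `run` script could then read /var/run/s6/cmd-args to recover the
arguments docker was started with.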

> >> * it is not clear if ENV vars are preserved. That is also something
> >> essential for docker.

In my experience, they are. If you use s6-svscan as your entrypoint (like
I do in my images), then define environment variables via docker's -e
switch they'll be preserved and available in each service's `run` script,
just like in my NodeJS example above.
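
A tiny, generic illustration of why this holds: exported variables
survive exec chains and are inherited by child processes, so anything
docker sets with `-e` is visible in a service's `run` script. This is
plain shell behaviour, nothing s6-specific:

```shell
#!/bin/sh
# stand-in for `docker run -e NODEJS_SCRIPT=myapp.js ...`
NODEJS_SCRIPT="myapp.js"
export NODEJS_SCRIPT

# a child shell (standing in for a service's `run` script)
# sees the exported variable unchanged
sh -c 'printf "child sees: %s\n" "$NODEJS_SCRIPT"'
```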

> >>
> >> * s6 has many utilities s6-*
> >>  - not clear which ones are actually required for making a docker
> >> process supervisor

Re: process supervisor - considerations for docker

2015-02-25 Thread Laurent Bercot


 (Moving the discussion to the supervision@list.skarnet.org list.
The original message is quoted below.)

 Hi Dreamcat4,

 Thanks for your detailed message. I'm very happy that s6 found an
application in docker, and that there's such an interest for it!
skaw...@list.skarnet.org is indeed the right place to reach me and
discuss the software I write, but for s6 in particular and process
supervisors in general, supervision@list.skarnet.org is the better
place - it's full of people with process supervision experience.

 Your message gives a lot of food for thought, and I don't have time
right now to give it all the attention it deserves. Tonight or
tomorrow, though, I will; and other people on the supervision list
will certainly have good insights.

 Cheers!

-- Laurent


On 25/02/2015 11:55, Dreamcat4 wrote:

Hello,
Now there is someone (John Regan) who has made s6 images for docker.
And written a blog post about it. Which is a great effort - and the
reason I've come here. But it gives me a taste of wanting more.
Something a bit more foolproof, and simpler, to work specifically
inside of docker.

 From that blog post I get a general impression that s6 has many
advantages. And it may be a good candidate for docker. But I would be
remiss not to ask the developers of s6 themselves to take a
personal interest in considering how s6 might best
work inside of docker specifically. I hope that this is the right
mailing list to reach s6 developers / discuss such matters. Is this
the correct mailing list for s6 dev discussions?

I've read and read around the subject of process supervision inside
docker. Various people explain how or why they use various different
process supervisors in docker (not just s6). None of them really quite
seem ideal. I would like to be wrong about that but nothing has fully
convinced me so far. Perhaps it is a fair criticism to say that I
still have a lot more to learn in regards to process supervisors. But
I have no interest in getting bogged down by that. To me, I already
know more-or-less enough about how docker manages (or rather
mis-manages!) its container processes to have an opinion about what
is needed, from a docker-sided perspective. And know enough that
docker project itself won't fix these issues. For one thing because of
not owning what's running on the inside of containers. And also
because of their single-process viewpoint on things. Anyway.
That kind of political nonsense doesn't matter for our discussion. I
just want to have a technical discussion about what is needed, and how
might be the best way to solve the problem!


MY CONCERNS ABOUT USING S6 INSIDE OF DOCKER

In regard to s6 specifically, these are my currently perceived
shortcomings when using it in docker:

* it's not clear how to pass in programs arguments via CMD and
ENTRYPOINT in docker
   - in fact i have not seen ANY docker process supervisor solutions
show how to do this (except perhaps phusion base image)

* it is not clear if ENV vars are preserved. That is also something
essential for docker.

* s6 has many utilities s6-*
 - not clear which ones are actually required for making a docker
process supervisor

* s6 not available yet as .deb or .rpm package
 - official packages are helpful because on different distros:
+ standard locations where to put config files and so on may differ.
+ to install man pages too, in the right place

* s6 is not available as official single pre-compiled binary file for
download via wget or curl
- which would be the most ideal way to install it into a docker container
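
(For what such an install could look like today: since ADD does not
decompress a tarball fetched from a remote URL, a RUN step has to do
the fetch and extract itself. The URL below is purely illustrative,
and busybox tar needs the z flag:)

```
FROM busybox
RUN wget -O /tmp/s6.tar.gz \
      https://example.com/s6-static-amd64.tar.gz \
 && tar xzf /tmp/s6.tar.gz -C / \
 && rm /tmp/s6.tar.gz
```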


^^ Some of these perceived shortcomings are more important /
significant than others! Some are not in the remit of s6 development
to be concerned about. Some are mild nit-picking, or the ignorance of
not knowing, having not actually tried out s6 before.

But my general point is that it is not clear-enough to me (from my
perspective) whether s6 can actually satisfy all of the significant
docker-specific considerations. Which I have not properly stated yet.
So here they are listed below…


DOCKER-SPECIFIC CONSIDERATIONS FOR A PROCESS SUPERVISOR

A good process supervisor for docker should ideally:

* be a single pre-compiled binary program file. That can be downloaded
by curl/wget (or can be installed from .deb or .rpm).

* can directly take a command and its arguments, with argv[] like this:
 "process_supervisor" "my_program_or_script" "my program or script
arguments…"

* will pass on all ENV vars to "my_program_or_script" faithfully

* will run as PID 1 inside the linux namespace

* where my_program_or_script may spawn BOTH child AND non-child
(orphaned) processes

* when "process_supervisor" (e.g. s6 or whatever) receives a TERM signal
   * it faithfully passes that signal to "my_program_or_script"
   * it also passes that signal to any orphaned non-child processes too

* when my_program_or_script dies, or exits
   * clean up ALL remaining non-child