Re: register runsvdir as subreaper

2017-02-02 Thread John Regan

On 02/02/2017 03:29 PM, Mitar wrote:

Hi!

Hm, when I read the runit man page I got scared because it tries
to reboot and halt the machine. I am not sure how that will interact
with a Docker container. I also didn't want one extra process in
every container. But you are right, it seems it might be necessary
anyway.

So, let's see. I could then simply use runit as PID 1 inside a Docker
image. /etc/runit/1 could be an empty script (is it even required to
have it, if it's not needed?). /etc/runit/2 would then start runsvdir.
Should it exec into it?

I would then map the Docker stop signal to SIGINT, and I would create a
/etc/runit/ctrlaltdel script which would gracefully call stop on all
services. Or does runit already do that?

If /etc/runit/stopit does not exist, then sending the SIGINT signal to
runit does not do anything besides running the /etc/runit/ctrlaltdel
script?


Mitar
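(For reference, a rough sketch of the runit layout described above, assuming /etc/service as the service directory and the Docker stop signal mapped to SIGINT; the contents are illustrative, not a tested setup:)

```
#!/bin/sh
# /etc/runit/2 -- stage 2: bring up the supervision tree
exec runsvdir -P /etc/service
```

```
#!/bin/sh
# /etc/runit/ctrlaltdel -- runit runs this when it receives SIGINT,
# if the file exists and is executable; stop all services, waiting
# up to 30 seconds for each one
exec sv -w 30 force-stop /etc/service/*
```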


I really recommend checking out s6-overlay; I think it
will do everything you're talking about out of the box. There's no need to
customize the Docker stop signal, and it handles gracefully shutting
down services, reaping orphaned processes, and so on. It's really
useful and will probably save you a lot of time.

https://github.com/just-containers/s6-overlay
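For what it's worth, a minimal Dockerfile along the lines of the s6-overlay README looks something like the sketch below; the release version, base image, and service name are illustrative, so check the project's instructions for current details:

```
FROM ubuntu:16.04

# fetch and unpack the overlay over / (version and URL are illustrative)
ADD https://github.com/just-containers/s6-overlay/releases/download/v1.19.1.1/s6-overlay-amd64.tar.gz /tmp/
RUN tar xzf /tmp/s6-overlay-amd64.tar.gz -C / && rm /tmp/s6-overlay-amd64.tar.gz

# each supervised service gets a run script under /etc/services.d/<name>/
COPY myapp.run /etc/services.d/myapp/run

# /init brings up the supervision tree, forwards signals, and reaps orphans
ENTRYPOINT ["/init"]
```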

-John Regan


Re: register runsvdir as subreaper

2017-02-01 Thread John Regan

On 01/30/2017 11:38 AM, Mitar wrote:

Hi!

I would like to ask if runsvdir could by default be defined as a
subreaper on Linux. If it is already PID 1, then there is no
difference, but sometimes it is not. In that case, when an orphan
process appears under it, it would be re-parented under
runsvdir, mimicking the behavior when runsvdir runs as PID 1.

runit is often used in Docker containers and sometimes you have a
wrapper script which spawns runsvdir as a child. In that case runsvdir
does not run as PID 1.
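(One common mitigation, when the wrapper does not need to stay resident, is to end it with exec so that runsvdir takes over as PID 1; a sketch, with the setup step purely illustrative:)

```
#!/bin/sh
# hypothetical container entrypoint / wrapper script
/usr/local/bin/prepare-environment    # illustrative setup step

# replace this shell with runsvdir so it runs as PID 1 and
# orphaned processes get reparented to it
exec runsvdir -P /etc/service
```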

I have found a similar patch for Debian, but it requested this
feature for runsv. I think that might be misused for making processes that
daemonize in fact stay under runsv. Or maybe that is a future feature
of runit, I'm not sure, but that can be a discussion for some other thread.
I would like to ask that something similar to that patch be done for
runsvdir for now:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=833048

This would really make it easier to use runit inside Docker.

A bit more about subreapers here:

https://unix.stackexchange.com/questions/250153/what-is-a-subreaper-process


Mitar



If you're looking to supervise processes inside containers, I'd 
recommend checking out s6-overlay[1] - it's s6 plus a collection of 
scripts, meant to be used within containers on any Linux distro. It 
handles reaping processes, logging, log rotation - it's a Swiss Army 
knife. Full disclosure, I'm a member of the project :)


[1]: https://github.com/just-containers/s6-overlay


Re: process supervisor - considerations for docker

2015-02-26 Thread John Regan


  I think you're better off with:

  * Case 1 : docker run --entrypoint= image commandline
(with or without -ti depending on whether you need an interactive
terminal)
  * Case 2 : docker run image
  * Case 3: docker run image commandline
(with or without -ti depending on whether you need an interactive
terminal)

  docker run --entrypoint= -ti image /bin/sh
would start a shell without the supervision tree running

  docker run -ti image /bin/sh
would start a shell with the supervision tree up.


After reading your reasoning, I agree 100% - let -ti drive whether it's 
interactive, and --entrypoint drive whether there's a supervision tree. 
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: process supervisor - considerations for docker

2015-02-26 Thread John Regan
On Thu, Feb 26, 2015 at 02:37:23PM +0100, Laurent Bercot wrote:
 On 26/02/2015 14:11, John Regan wrote:
 Just to clarify, docker run spins up a new container, so that wouldn't 
 work for stopping a container. It would just spin up a new container running 
 s6-svscanctl -t service
 
 To stop, you run docker stop container id
 
  Ha! Shows how much I know about Docker.
  I believe the idea is sound, though. And definitely implementable.
 

I figure this is also a good moment to go over ENTRYPOINT and CMD,
since that's come up a few times in the discussion.

When you build a Docker image, the ENTRYPOINT is the program you want
to run as PID 1 by default. It can be the path to a program, along with some
arguments, or it can be null.

CMD is really just arguments to your ENTRYPOINT, unless ENTRYPOINT is
null, in which case it becomes your effective ENTRYPOINT.

At build-time, you can specify a default CMD, which is what gets run
if no arguments are passed to docker run.

When you do 'docker run imagename blah blah blah', the 'blah blah
blah' gets passed as arguments to ENTRYPOINT. If you want to specify a
different ENTRYPOINT at runtime, you need to use the --entrypoint
switch.

So, for example: the default ubuntu image has a null ENTRYPOINT, and
the default CMD is /bin/bash. If I run `docker run ubuntu`, then
/bin/bash gets executed (and quits immediately, since it doesn't have
anything to do).

If I run `docker run ubuntu echo hello`, then /bin/echo is executed.

In my Ubuntu baseimage, I made the ENTRYPOINT s6-svscan /etc/s6. In
hindsight, this probably wasn't the best idea. If the user types
docker run jprjr/ubuntu-baseimage hey there, then the effective
command becomes s6-svscan /etc/s6 hey there - which goes against how
most other Docker images work.
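To make that concrete, here is a sketch of the two Dockerfile layouts being compared; image names and paths are illustrative:

```
# stock-ubuntu style: ENTRYPOINT is null, CMD is the effective command
FROM ubuntu
CMD ["/bin/bash"]
```

```
# baseimage style: the supervision tree is the ENTRYPOINT, so anything
# after the image name on `docker run` gets appended to it
FROM ubuntu
ENTRYPOINT ["s6-svscan", "/etc/s6"]
```

With the second layout, `docker run image hey there` really does become `s6-svscan /etc/s6 hey there`, which is the surprise described above.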

So, if I pull up Laurent's earlier list of options for the client:

 * docker run image commandline
   Runs commandline in the image without starting the supervision environment.
 * docker run image /init
   Starts the supervision environment and lets it run forever.
 * docker run image /init commandline
   Runs commandline in the fully operational supervision environment. When
commandline exits, the supervision environment is stopped and cleaned up.

I'm going to call these case 1 (command with no supervision environment), case
2 (default supervision environment), and case 3 (supervision environment with
a provided command).

Here's a breakdown of each case's invocation, given a set ENTRYPOINT and CMD:

ENTRYPOINT = null, CMD = /init

* Case 1: `docker run image commandline`
* Case 2: `docker run image`
* Case 3: `docker run image /init commandline`

ENTRYPOINT = /init, CMD = null

* Case 1: `docker run --entrypoint= image commandline`
* Case 2: `docker run image`
* Case 3: `docker run image commandline`

Now, something worth noting is that none of these commands runs interactively -
to run interactively, you run something like 'docker run -ti ubuntu /bin/sh'.

-t allocates a TTY, and -i keeps STDIN open.

So, I think the right thing to do is make /init check if there's a TTY
allocated and that STDIN is open, and if so, just exec into the passed
arguments without starting the supervision tree.

I'm not going to lie, I don't know the details of how to actually do that.
Assuming that's possible, your use cases become this:

ENTRYPOINT = /init (with TTY/STDIN detection), CMD = null

* Case 1: `docker run -ti image commandline`
* Case 2: `docker run image`
* Case 3: `docker run image commandline`

So basically, if you want to run your command interactively with no supervision
environment, you just pass '-ti' to 'docker run' like you normally do. If
you want it to run under the supervision tree, just don't pass the '-ti'
flags. This makes the image work like pretty much every image ever, and
the user doesn't ever need to type out /init.

Laurent, how hard is it to check if you're attached to a TTY or not? This is
where we start getting into your area of expertise :)
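(For what it's worth, a plain-sh sketch of one way /init could do that check; `[ -t 0 ]` tests whether stdin is a terminal. This ignores the harder case of running a command under the tree and tearing it down afterwards:)

```
#!/bin/sh
# hypothetical /init
if [ -t 0 ] && [ "$#" -gt 0 ]; then
    # interactive (-ti) and a command was given: skip the supervision
    # tree and just run the command
    exec "$@"
fi

# otherwise, bring up the supervision tree (scan directory is illustrative)
exec s6-svscan /etc/s6
```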

-John


Re: process supervisor - considerations for docker

2015-02-26 Thread John Regan
On Thu, Feb 26, 2015 at 08:23:47PM +, Dreamcat4 wrote:
 You CANNOT enforce specific ENTRYPOINT + CMD usages amongst docker
 users. It will never work because too many people use docker in too
 many different ways. And it does not matter from a technical
 perspective for the solution I have been quietly thinking of (but not
 had an opportunity to share yet).
 
 It's best to think of ENTRYPOINT (in conventional docker learning,
 before throwing in any /init system) as being the interpreter, such
 as the /bin/sh -c bit that sets up the environment. Like the shebang
 line. Or it could be the python interpreter instead, etc.

I disagree, and I think your second paragraph actually supports my
argument: if you think of ENTRYPOINT as the command for setting up the
environment, then it makes sense to use ENTRYPOINT as the method for
setting up a supervision tree vs not setting up a supervision tree,
because those are two pretty different environments.

People use Docker in tons of different ways, sure. But I'm completely
able to say this is the entrypoint my image uses, and this is what it
does.

Besides, the whole idea here is to make an image that follows best
practices, and best practices state we should be using a process
supervisor that cleans up orphaned processes and stuff. You should be
encouraging people to run their programs, interactively or not, under
a supervision tree like s6.

Heck, most people don't *care* about this kind of thing because they
don't even know. So if you just make /init the ENTRYPOINT, 99% of
people will probably never even realize what's happening. If they can
run `docker run -ti imagename /bin/sh` and get a working, interactive
shell, and the container exits when they type exit, then they're
good to go! Most won't even question what the image is up to, they'll
just continue on getting the benefits of s6 without even realizing it.

 
 My suggestion:
 
 * /init is launched by docker as the first argument.
 * init checks for $@. If there are any arguments:
 
  * create (from a simple template) a s6 run script
* run script launches $1 (first arg) as the command to run
  * run script template is written with remaining args to $1
 
  * proceed normally (inspect the s6 config directory as usual!)
* as there should be no breakage of any existing functionality
 
 * Provided there is no VOLUME sat on top of the /etc/s6 config directory
 * Then the run script is temporary - it will only last while the
 container is running.
* So it won't be there anymore to clean up on future 'docker run'
 invocations with different arguments.
 
 The main thing I'm concerned about is preserving proper shell
 quoting, because sometimes args can be like --flag='some thing'.
 
 It may be that one simple way to get proper quoting (in conventional shells
 like bash) is to use 'set -x' to echo out the line, as the output is
 ensured by the interpreter to be re-executable. Although even if that
 takes care of the quotes, it would still not be good to have
 accidental variable expansion, interpretation of $ ! etc. Maybe I'm
 thinking a bit too far ahead. But we already know that Gorka's '/init'
 script is written in bash.

I think here, you're getting way more caught up in the details of your
idea than you need to be. Shells, arguments, quoting, etc, you're
overcomplicating some of this stuff.
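(For instance, the quoting worry mostly goes away if the arguments are never re-serialized into a generated script and are simply passed along as "$@"; a sketch, not a claim about how Gorka's /init works:)

```
#!/bin/sh
# "$@" preserves each argument as a single word, so
#   docker run image myprog --flag='some thing'
# arrives here with "--flag=some thing" intact as one argument
exec "$@"
```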


Re: process supervisor - considerations for docker

2015-02-25 Thread John Regan
Hi Dreamcat4 -

First things first - I can't stress enough how awesome it is to know
people are using/talking about my Docker images, blog posts, and so
on. Too cool!

I've responded to your concerns/questions/etc throughout the email
below.

-John

On Wed, Feb 25, 2015 at 11:32:37AM +, Dreamcat4 wrote:
 Thank you for moving my message Laurent.
 
 Sorry for the mixup r.e. the mailing lists. I have subscribed to the
 correct list now (for s6 specific).
 
 On Wed, Feb 25, 2015 at 11:30 AM, Laurent Bercot
 ska-skaw...@skarnet.org wrote:
 
   (Moving the discussion to the supervision@list.skarnet.org list.
  The original message is quoted below.)
 
   Hi Dreamcat4,
 
   Thanks for your detailed message. I'm very happy that s6 found an
  application in docker, and that there's such an interest for it!
  skaw...@list.skarnet.org is indeed the right place to reach me and
  discuss the software I write, but for s6 in particular and process
  supervisors in general, supervision@list.skarnet.org is the better
  place - it's full of people with process supervision experience.
 
   Your message gives a lot of food for thought, and I don't have time
  right now to give it all the attention it deserves. Tonight or
  tomorrow, though, I will; and other people on the supervision list
  will certainly have good insights.
 
   Cheers!
 
  -- Laurent
 
 
 
  On 25/02/2015 11:55, Dreamcat4 wrote:
 
  Hello,
  Now there is someone (John Regan) who has made s6 images for docker.
  And written a blog post about it. Which is a great effort - and the
  reason I've come here. But it gives me a taste of wanting more.
  Something a bit more foolproof, and simpler, to work specifically
  inside of docker.
 
   From that blog post I get a general impression that s6 has many
  advantages. And it may be a good candidate for docker. But I would be
  remiss not to ask the developers of s6 themselves to take some kind
  of personal interest in considering how s6 might best
  work inside of docker specifically. I hope that this is the right
  mailing list to reach s6 developers / discuss such matters. Is this
  the correct mailing list for s6 dev discussions?
 
  I've read and read around the subject of process supervision inside
  docker. Various people explain how or why they use various different
  process supervisors in docker (not just s6). None of them really quite
  seem ideal. I would like to be wrong about that but nothing has fully
  convinced me so far. Perhaps it is a fair criticism to say that I
  still have a lot more to learn in regards to process supervisors. But
  I have no interest in getting bogged down by that. To me, I already
  know more-or-less enough about how docker manages (or rather
  mis-manages!) its container processes to have an opinion about what
  is needed, from a docker-sided perspective. And know enough that
  docker project itself won't fix these issues. For one thing because of
  not owning what's running on the inside of containers. And also
  because of their single-process viewpoint take on things. Anyway.
  That kind of political nonsense doesn't matter for our discussion. I
  just want to have a technical discussion about what is needed, and how
  might be the best way to solve the problem!
 
 
  MY CONCERNS ABOUT USING S6 INSIDE OF DOCKER
 
  In regards to s6 only, these are my currently perceived
  shortcomings when using it in docker:
 
  * it's not clear how to pass in program arguments via CMD and
  ENTRYPOINT in docker
 - in fact I have not seen ANY docker process supervisor solutions
  show how to do this (except perhaps phusion base image)
 

To be honest, I just haven't really done that. I usually use
environment variables to set up my services. For example, if I have a
NodeJS service, I'll run something like

`docker run -e NODEJS_SCRIPT=myapp.js some-nodejs-image`

Then in my NodeJS `run` script, I'd check if that environment variable
is defined and use it as my argument to NodeJS. I'm just making up
this bit of shell code on the fly, it might have syntax errors, but
you should get the idea:

```
#!/bin/sh
# hypothetical run script for a NodeJS service
if [ -n "$NODEJS_SCRIPT" ]; then
    exec node "$NODEJS_SCRIPT"
else
    printf 'NODEJS_SCRIPT undefined\n' >&2
    touch down    # leave a ./down file so the service defaults to down
    exit 1
fi
```

Another option is to write a script to use as an entrypoint that
handles command arguments, then execs into s6-svscan.
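A rough sketch of that second option, with the file location and service layout purely hypothetical:

```
#!/bin/sh
# hypothetical entrypoint: save the command-line arguments where a
# service's run script can read them back, then hand off to s6-svscan
printf '%s\n' "$@" > /var/run/app.args
exec s6-svscan /etc/s6
```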

  * it is not clear if ENV vars are preserved. That is also something
  essential for docker.

In my experience, they are. If you use s6-svscan as your entrypoint (like
I do in my images) and define environment variables via docker's -e
switch, they'll be preserved and available in each service's `run` script,
just like in my NodeJS example above.

 
  * s6 has many utilities s6-*
   - not clear which ones are actually required for making a docker
  process supervisor

The only *required* programs are the ones in the main s6 and execline
packages.

 
  * s6 not available yet as .deb or .rpm package

Re: process supervisor - considerations for docker

2015-02-25 Thread John Regan
On Wed, Feb 25, 2015 at 03:58:07PM +0100, Gorka Lertxundi wrote:
 Hello,
 
 After John's great post, I tried to solve exactly the same problems as you. I
 created my own base image based primarily on John's and Phusion's base
 images.

That's awesome - I get so excited when I hear somebody's actually
read, digested, and taken action based on something I wrote. So cool!
:)

 
 See my thoughts below.
 
 2015-02-25 12:30 GMT+01:00 Laurent Bercot ska-skaw...@skarnet.org:
 
 
   (Moving the discussion to the supervision@list.skarnet.org list.
  The original message is quoted below.)
 
   Hi Dreamcat4,
 
   Thanks for your detailed message. I'm very happy that s6 found an
  application in docker, and that there's such an interest for it!
  skaw...@list.skarnet.org is indeed the right place to reach me and
  discuss the software I write, but for s6 in particular and process
  supervisors in general, supervision@list.skarnet.org is the better
  place - it's full of people with process supervision experience.
 
   Your message gives a lot of food for thought, and I don't have time
  right now to give it all the attention it deserves. Tonight or
  tomorrow, though, I will; and other people on the supervision list
  will certainly have good insights.
 
   Cheers!
 
  -- Laurent
 
 
  On 25/02/2015 11:55, Dreamcat4 wrote:
 
  Hello,
  Now there is someone (John Regan) who has made s6 images for docker.
  And written a blog post about it. Which is a great effort - and the
  reason I've come here. But it gives me a taste of wanting more.
  Something a bit more foolproof, and simpler, to work specifically
  inside of docker.
 
   From that blog post I get a general impression that s6 has many
  advantages. And it may be a good candidate for docker. But I would be
  remiss not to ask the developers of s6 themselves to take some kind
  of personal interest in considering how s6 might best
  work inside of docker specifically. I hope that this is the right
  mailing list to reach s6 developers / discuss such matters. Is this
  the correct mailing list for s6 dev discussions?
 
  I've read and read around the subject of process supervision inside
  docker. Various people explain how or why they use various different
  process supervisors in docker (not just s6). None of them really quite
  seem ideal. I would like to be wrong about that but nothing has fully
  convinced me so far. Perhaps it is a fair criticism to say that I
  still have a lot more to learn in regards to process supervisors. But
  I have no interest in getting bogged down by that. To me, I already
  know more-or-less enough about how docker manages (or rather
  mis-manages!) its container processes to have an opinion about what
  is needed, from a docker-sided perspective. And know enough that
  docker project itself won't fix these issues. For one thing because of
  not owning what's running on the inside of containers. And also
  because of their single-process viewpoint take on things. Anyway.
  That kind of political nonsense doesn't matter for our discussion. I
  just want to have a technical discussion about what is needed, and how
  might be the best way to solve the problem!
 
 
  MY CONCERNS ABOUT USING S6 INSIDE OF DOCKER
 
  In regards to s6 only, these are my currently perceived
  shortcomings when using it in docker:
 
  * it's not clear how to pass in program arguments via CMD and
  ENTRYPOINT in docker
 
 - in fact I have not seen ANY docker process supervisor solutions
  show how to do this (except perhaps phusion base image)
 
  * it is not clear if ENV vars are preserved. That is also something
  essential for docker.
 
 
  * s6 has many utilities s6-*
   - not clear which ones are actually required for making a docker
  process supervisor
 
 
  * s6 not available yet as .deb or .rpm package
   - official packages are helpful because on different distros:
  + standard locations where to put config files and so on may
  differ.
  + to install man pages too, in the right place
 
 
  * s6 is not available as official single pre-compiled binary file for
  download via wget or curl
  - which would be the most ideal way to install it into a docker
  container
 
 
  ^^ Some of these perceived shortcomings are more important /
  significant than others! Some are not in the remit of s6 development
  to be concerned about. Some are mild nit-picking, or the ignorance of
  not knowing, having not actually tried out s6 before.
 
  But my general point is that it is not clear enough to me (from my
  perspective) whether s6 can actually satisfy all of the significant
  docker-specific considerations. Which I have not properly stated yet.
  So here they are listed below…
 
 
  DOCKER-SPECIFIC CONSIDERATIONS FOR A PROCESS SUPERVISOR
 
  A good process supervisor for docker should ideally:
 
  * be a single pre-compiled binary program file. That can be downloaded
  by curl/wget (or can