On 10/14/2014 10:49 AM, Lars Kellogg-Stedman wrote:
On Tue, Oct 14, 2014 at 02:51:15PM +1100, Angus Lees wrote:
1. It would be good if the "interesting" code came from python sdist/bdists
rather than rpms.

I agree in principle, although starting from packages right now lets
us ignore a whole host of issues.  Possibly we'll make that change down
the road.

2. I think we should separate out "run the server" from "do once-off setup".

Currently the containers run a start.sh that typically sets up the database,
runs the servers, creates keystone users and sets up the keystone catalog.  In
something like k8s, the container will almost certainly be run multiple times
in parallel and restarted numerous times, so all those other steps go against
the service-oriented k8s ideal and are at best wasted.

All the existing containers [*] are designed to be idempotent, which I
think is not a bad model.  Even if we move initial configuration out
of the service containers I think that is a goal we want to preserve.
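The idempotent pattern can be sketched with a guard in the container's start script. (A minimal sketch only; the marker path and echoed steps are illustrative stand-ins, not taken from the actual start.sh, which would run real db_sync/keystone commands.)

```shell
#!/bin/sh
# Idempotency sketch: run once-off setup only if it has not already
# succeeded, then start the long-running service. On restart, the
# marker file causes the setup step to be skipped.
set -e

MARKER="${MARKER:-/tmp/.demo-setup-done}"

if [ ! -f "$MARKER" ]; then
    echo "performing one-time setup"    # e.g. database creation, catalog setup
    touch "$MARKER"                     # record success so restarts skip this
else
    echo "setup already done; skipping"
fi

echo "starting service"                 # a real script would exec the daemon here
```

Because every step either succeeds or is skipped, the container can be restarted any number of times without redoing (or corrupting) the initial setup.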

I pursued exactly the model you suggest on my own when working on an
ansible-driven workflow for setting things up:


Ansible made it easy to support one-off "batch" containers which, as
you say, aren't exactly supported in Kubernetes.  I like your
(ab?)use of restartPolicy; I think that's worth pursuing.
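For reference, the restartPolicy approach for a once-off setup container might look roughly like this pod sketch (hypothetical names throughout; the field layout follows the current Kubernetes API and may differ from the version discussed here):

```yaml
# One-off "batch" pod: OnFailure makes Kubernetes retry the container
# until it exits 0, then leave it stopped rather than restarting it.
apiVersion: v1
kind: Pod
metadata:
  name: keystone-setup
spec:
  restartPolicy: OnFailure
  containers:
    - name: setup
      image: example/keystone-setup    # illustrative image name
      command: ["/bin/sh", "-c", "keystone-manage db_sync"]
```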

I agree that Ansible makes it easy to support one-off batch containers. Ansible rocks.

Which brings me to my question (admittedly, I am a Docker n00b, so please forgive me for the question)...

Can I use your Dockerfiles to build Ubuntu/Debian images instead of only Fedora images? It seems to me that the image-based Docker system makes the resulting container quite brittle, since a) you can't use configuration management systems like Ansible to choose which operating system or package management tools you wish to use, and b) any time you make a change to the image, you need to regenerate the image from a new Dockerfile, start a new container from the new image, shut down the old container, and then change all the other containers that were linked with the old container to point to the new one. All of that could be replaced by a simple apt-get upgrade -y for things like security updates, which for the most part wouldn't require any container rebuilds at all.

So... what am I missing with this? What makes Docker images more ideal than straight up LXC containers and using Ansible to control upgrades/changes to configuration of the software on those containers?

Again, sorry for the n00b question!


[*] That work, which includes rabbitmq, mariadb, keystone, and glance.

I'm open to whether we want to make these as lightweight/independent as
possible (every daemon in an individual container), or limit it to one per
project (eg: run nova-api, nova-conductor, nova-scheduler, etc. all in one
container).

My goal is one-service-per-container, because that generally makes the
question of process supervision and log collection a *host* problem
rather than a *container* problem. It also makes it easier to scale an
individual service, if that becomes necessary.
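Scaling a single service then reduces to adjusting a replica count, e.g. with a replication controller sketch like the following (names and counts are illustrative, not from any actual deployment):

```yaml
# Run N identical copies of one service container; scaling is a matter
# of changing `replicas` rather than touching other services.
apiVersion: v1
kind: ReplicationController
metadata:
  name: glance-api
spec:
  replicas: 3
  selector:
    app: glance-api
  template:
    metadata:
      labels:
        app: glance-api
    spec:
      containers:
        - name: glance-api
          image: example/glance-api    # illustrative image name
```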

OpenStack-dev mailing list
