On 10/14/2014 05:44 PM, Angus Lees wrote:
On Tue, 14 Oct 2014 07:51:54 AM Steven Dake wrote:
Angus,

On 10/13/2014 08:51 PM, Angus Lees wrote:
I've been reading a bunch of the existing Dockerfiles, and I have two
humble requests:


1. It would be good if the "interesting" code came from python sdist/bdists rather than rpms.

This will make it possible to rebuild the containers using code from a
private branch or even unsubmitted code, without having to go through a
redhat/rpm release process first.

I care much less about where the python dependencies come from. Pulling
them from rpms rather than pip/pypi seems like a very good idea, given
the relative difficulty of caching pypi content, and because we also pull in
the required C, etc. libraries for free.


With this in place, I think I could drop my own containers and switch to
reusing kolla's for building virtual testing environments.  This would
make me happy.
I've captured this requirement here:
https://blueprints.launchpad.net/kolla/+spec/run-from-master

I also believe it would be interesting to run from master or a stable
branch for CD.  Unfortunately I'm still working on the nova-compute
docker code, but if someone comes along and picks up that blueprint, I
expect it will get implemented :)  Maybe that could be you.
Yeah I've already got a bunch of working containers that pull from master[1],
but I've been thinking I should change that to use an externally supplied
bdist.  The downside is you quickly end up wanting a docker container to build
your deployment docker container.  I gather this is quite a common thing to
do, but I haven't found the time to script it up yet.

[1] https://github.com/anguslees/kube-openstack/tree/master/docker
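
To make that concrete, the sort of Dockerfile I have in mind looks roughly
like this (just a sketch - the base image, package names and tarball URL are
illustrative, not taken from the kolla tree): python dependencies and native
libraries still come from rpms, and only the service itself is installed from
an sdist.

  FROM fedora:20
  # python dependencies and native libraries from rpms - easy to mirror/cache,
  # and we get the required C libraries for free
  # (plus whatever else the particular service needs)
  RUN yum install -y python-pip python-devel gcc && yum clean all
  # the "interesting" code from an sdist - a private branch, unsubmitted code,
  # or an externally supplied bdist could be ADDed and installed the same way
  RUN pip install http://tarballs.openstack.org/glance/glance-master.tar.gz
  ADD start.sh /start.sh
  CMD ["/start.sh"]

The externally-supplied-bdist variant would just swap the pip line for an ADD
of a locally built tarball.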

I could indeed work on this, and I guess I was gauging the level of enthusiasm
within kolla for such a change.  I don't want to take time away from the
alternative I already have that does what I need, only to then have to push
uphill to get it integrated :/
There would be no uphill push. For milestone #2, I am already going to reorganize the docker directory to support centos+rdo as an alternative to fedora+rdo. Fedora+master is just another directory in this model (or Ubuntu + master if you want that choice as well). IMO the more choice about deployment platforms the better, especially a master model (or more likely a stable branch model).

Regards
-steve

2. I think we should separate out "run the server" from "do once-off
setup".

Currently the containers run a start.sh that typically sets up the
database, runs the servers, creates keystone users and sets up the
keystone catalog.  In something like k8s, the container will almost
certainly be run multiple times in parallel and restarted numerous times,
so all those other steps go against the service-oriented k8s ideal and
are at best wasted.

I suggest making the container contain the deployed code and offer a few
thin scripts/commands for entrypoints.  The main
replicationController/pod _just_ starts the server, and then we have
separate pods (or perhaps even non-k8s container invocations) that do
initial database setup/migrate, and post-install keystone setup.
The server can't start before its configuration is complete.  I guess I
don't quite understand what you mean here when you say we have separate
pods that do the initial database setup/migrate.  Do you mean have
dependencies in some way, or, for example:

glance-registry-setup-pod.yaml - the glance registry setup pod descriptor,
which sets up the db and keystone
glance-registry-pod.yaml - the glance registry pod descriptor, which
starts the application and waits for db/keystone setup

and start these two pods as part of the same selector (glance-registry)?

That idea sounds pretty appealing, although it probably won't be ready to go
for milestone #1.
So the way I do it now, I have a replicationController that starts/manages
(eg) nova-api pods[2].  I separately have a nova-db-sync pod[3] that basically
just runs "nova-manage db sync".

I then have a simple shell script[4] that starts them all at the same time.
The nova-api pods crash and get restarted a few times until the database has
been appropriately configured by the nova-db-sync pod, and then they're fine and
start serving.

When nova-db-sync exits successfully, the pod just sits in state terminated
thanks to restartPolicy: onFailure.  Sometime later I can delete the
terminated nova-db-sync pod, but it's also harmless if I just leave it or even
if it gets occasionally re-run as part of some sort of update.


[2] https://github.com/anguslees/kube-openstack/blob/master/kubernetes-in/nova-api-repcon.yaml
[3] https://github.com/anguslees/kube-openstack/blob/master/kubernetes-in/nova-db-sync-pod.yaml
[4] https://github.com/anguslees/kube-openstack/blob/master/kubecfg-create.sh
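
For reference, [3] boils down to something like the following (abbreviated
and from memory, so don't trust the exact schema or image name - the repo has
the real file):

  kind: Pod
  apiVersion: v1beta1
  id: nova-db-sync
  labels:
    name: nova-db-sync
  desiredState:
    manifest:
      version: v1beta1
      id: nova-db-sync
      # run to completion once; k8s only restarts it if it exits non-zero
      restartPolicy:
        onFailure: {}
      containers:
        - name: nova-db-sync
          image: anguslees/nova-mgmt   # placeholder image name
          command: ["nova-manage", "db", "sync"]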


I'm open to whether we want to make these as lightweight/independent as
possible (every daemon in an individual container), or limit it to one per
project (eg: run nova-api, nova-conductor, nova-scheduler, etc all in one
container).  I think the differences are run-time scalability and
resource attribution vs upfront coding effort, and are not hugely
significant either way.

Post-install catalog setup we can combine into one cross-service setup,
like tripleO does[1].  Although k8s doesn't yet have explicit support for
batch tasks, I'm doing the pre-install setup in restartPolicy: onFailure
pods and it seems to work quite well[2].

(I'm saying "post install catalog setup", but really keystone catalog can
happen at any point pre/post aiui.)
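
A combined cross-service catalog-setup pod along those lines could look
roughly like this (entirely illustrative - the image name and script are made
up, the point is just that it reuses the same restartPolicy: onFailure batch
pattern):

  kind: Pod
  apiVersion: v1beta1
  id: keystone-setup
  labels:
    name: keystone-setup
  desiredState:
    manifest:
      version: v1beta1
      id: keystone-setup
      restartPolicy:
        onFailure: {}
      containers:
        - name: keystone-setup
          image: anguslees/keystone-setup   # made-up image name
          # hypothetical script that registers the users, services and
          # endpoints for every project, a la tripleo's setup-endpoints
          command: ["/bin/sh", "/keystone-setup.sh"]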

[1] https://github.com/openstack/tripleo-incubator/blob/master/scripts/setup-endpoints
[2] https://github.com/anguslees/kube-openstack/blob/master/kubernetes-in/nova-db-sync-pod.yaml
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

