Hi Ramon,

One issue that can arise when a configuration step is introduced to Docker images is the
propagation of configuration changes in a running cluster. If a generic server is configured
at startup, subsequent changes to the configuration have to go live in one of two ways.
1. Kill the existing containers and let the PaaS healing component (e.g. the Replication
Controller in Kubernetes) spawn new ones, so the new configuration is applied to those
containers. This approach is the same for an already configured image.

2. Run some kind of agent in the containers to poll for and pull new configuration into the
existing containers. This approach is used in Apache Stratos, and it tends to introduce
another set of problems when it comes to containers.

IMO, although managing several different, already configured, server images is a bit
cumbersome, it fits the approaches the PaaS layer already recommends for various cluster
management problems. For example, Kubernetes has rolling-update, which takes an image name
as the new version of the server and gradually updates the cluster. This encourages
immutable images, with changes written into new versioned images.

For the maintainability issue of the images, could a Puppet-based image building pipeline be
applied? This would have a single Puppet server (the configuration automation layer) which
would trigger (on demand, or by a hook of some kind) an image build with a certain set of
parameters, resulting in a new (or overwritten) version of the set of Docker images in the
local Docker registry. This event could additionally trigger another execution where the
Kubernetes clusters are updated. This way the configuration is preserved and effectively
separated from the images, and image maintenance is simplified.

Regards,
Chamila de Alwis
Committer and PMC Member - Apache Stratos
Software Engineer | WSO2 | +94772207163
Blog: code.chamiladealwis.com


On Fri, Apr 1, 2016 at 4:58 PM, Imesh Gunaratne <[email protected]> wrote:

> Hi Ramon,
>
> On Thu, Mar 31, 2016 at 9:42 PM, Ramon Gordillo <[email protected]>
> wrote:
>
>> Hi.
>>
>> After doing some research and tests on the Puppet modules and Dockerfiles
>> with WSO2 API Management, I have some thoughts to share on this list.
>>
> Great!
> It's really nice to hear your thoughts on $subject.
>
>> Some particular information, for example the DNS domain name for the
>> cluster, the namespace, etc., is runtime configuration. That is, if we
>> want, for example, to deploy an instance of API Management per project
>> (aka namespace), the current approach requires building a set of Docker
>> containers per project. Currently, we have 10 teams working on their own
>> projects, so you can imagine the maintainability of 10 different sets.
>>
> Creating a container image with all the required configuration is not a
> must; it's just a best practice. If the startup time of the server is not
> a problem, running an orchestration tool at startup should be fine. We
> followed the same pattern a few months back with Stratos + K8S.
>
> However, I personally prefer the fully configured container image
> approach, as such images are less error prone and do not depend on any
> dynamic parameters. It would be much like a product archive which includes
> all configurations: the person who deploys the product just needs to
> extract and run it. If there are any security concerns about the
> credentials and keys packaged, we might need to use a tool like a secure
> wallet.
>
>> Apart from that, there is some configuration information that can be
>> obtained from the Kubernetes master instead of hardcoding it in the
>> container. Even the Kubernetes master information is injected into the
>> containers as environment variables (
>> http://kubernetes.io/docs/user-guide/environment-guide/, see
>> KUBERNETES_SERVICE_HOST, KUBERNETES_SERVICE_PORT).
>>
>> With those considerations, I propose to also use Puppet at runtime when
>> starting the container, to configure it using templated environment
>> variables (which are instantiated at runtime), and to obtain as much
>> information as we can automatically instead of forcing the user to
>> provide it.
>>
> Yes, as I mentioned above, it's a compromise between optimizing the
> startup time and the number of container images we need to maintain.
>
>> Other than that, I also propose to have one startup script per PaaS
>> solution. It would handle the peculiarities of that PaaS and adapt them
>> to the agnostic configuration. For example, in Kubernetes some
>> environment variables are injected into the container by default, but in
>> CloudFoundry, for example, other variables with a different structure are
>> used. With this approach, the container can be used on both of them (even
>> with standalone docker-compose, for example), with just a simple and
>> reusable script for each PaaS solution.
>>
> AFAIU this would only be needed if we are to do configuration at startup.
> If the container image contains all the configuration needed, we may not
> need a PaaS-specific startup script.
>
> Thanks
>
>> What do you think?
>>
>> Thanks.
>>
>> Regards.
>>
>> _______________________________________________
>> Dev mailing list
>> [email protected]
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>
> --
> *Imesh Gunaratne*
> Senior Technical Lead
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: http://imesh.io
> Lean . Enterprise . Middleware
>
> _______________________________________________
> Dev mailing list
> [email protected]
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
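The Puppet-triggered pipeline described above (build a versioned image, push it to the local registry, then roll the Kubernetes cluster forward) could be sketched roughly like this. The registry host, product name, and tag scheme here are illustrative assumptions, not part of any existing setup; `kubectl rolling-update` is the Kubernetes 1.x mechanism mentioned earlier in the thread.

```shell
#!/bin/sh
# Hypothetical build-and-rollout hook, fired by the Puppet server after a
# configuration change. All names below are assumptions for illustration.

REGISTRY="${REGISTRY:-registry.local:5000}"

# Compose an immutable, versioned image tag:
#   <registry>/<product>:<version>-<build>
image_tag() {
  product="$1"; version="$2"; build="$3"
  echo "${REGISTRY}/${product}:${version}-${build}"
}

# Build and push the new image, then gradually update the running
# replication controller to it.
deploy() {
  tag="$(image_tag "$1" "$2" "$3")"
  docker build -t "$tag" .
  docker push "$tag"
  kubectl rolling-update "$1" --image="$tag" --update-period=30s
}

# Only act when explicitly invoked, so the functions can be reused safely.
if [ "${1:-}" = "deploy" ]; then
  deploy "$2" "$3" "$4"
fi
```

Because every change lands in a new tag, rolling back is just another `deploy` with the previous build number, which is exactly the immutable-image property rolling-update encourages.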
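Ramon's per-PaaS startup script could look something like the sketch below: detect the platform from the environment variables it injects, then export one agnostic set for the configuration templates to consume. `KUBERNETES_SERVICE_HOST`/`KUBERNETES_SERVICE_PORT` and CloudFoundry's `VCAP_APPLICATION` are the real injected variables referenced in the thread; the agnostic `CLUSTER_HOST`/`CLUSTER_PORT` names and the standalone defaults are assumptions for illustration.

```shell
#!/bin/sh
# Sketch of a PaaS-agnostic startup adapter. Each platform injects
# different variables; this maps them to one set before the server starts.

# Identify the platform from its characteristic injected variables.
detect_paas() {
  if [ -n "${KUBERNETES_SERVICE_HOST:-}" ]; then
    echo kubernetes
  elif [ -n "${VCAP_APPLICATION:-}" ]; then
    echo cloudfoundry
  else
    echo standalone
  fi
}

# Export the agnostic variables the configuration templates consume.
# The CloudFoundry and standalone mappings are illustrative assumptions.
configure_for_paas() {
  case "$(detect_paas)" in
    kubernetes)
      export CLUSTER_HOST="${KUBERNETES_SERVICE_HOST}"
      export CLUSTER_PORT="${KUBERNETES_SERVICE_PORT}"
      ;;
    cloudfoundry)
      export CLUSTER_HOST="${CF_INSTANCE_IP:-localhost}"
      export CLUSTER_PORT="${PORT:-8080}"
      ;;
    standalone)
      export CLUSTER_HOST="${CLUSTER_HOST:-localhost}"
      export CLUSTER_PORT="${CLUSTER_PORT:-9443}"
      ;;
  esac
}

configure_for_paas
```

With this in place, the image itself stays PaaS-agnostic: the same container runs under Kubernetes, CloudFoundry, or plain docker-compose, and only this small script differs per platform.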
_______________________________________________
Dev mailing list
[email protected]
http://wso2.org/cgi-bin/mailman/listinfo/dev
