On Tue, Aug 30, 2016 at 7:33 AM, Pieter Nagel <[email protected]> wrote:

> Thanks all for the helpful feedback.
>
> On Thu, Aug 25, 2016 at 4:42 PM, Clayton Coleman <[email protected]>
> wrote:
>
>> Please provide any feedback about limitations in export that you can - we
>> intend "create, tweak, export, apply, apply, ..." to function exactly as it
>> would for any config driven system.
>
>
> Here's the ideal scenario I would love to see. (All of this assumes a
> single conceptual app, consisting of multiple related deploymentconfigs,
> stored in a single logical source repository, with the OpenShift objects
> stored in the same repository as the rest of the code.)
>
> Developers edit code for the app, and occasionally need to create or
> delete deploymentconfigs and buildconfigs.
>
> Currently, an OpenShift buildconfig watches a git repository and keeps
> building images according to its own config. But what if the buildconfig's
> yaml itself has been changed, or new buildconfigs added?
>
> Ideally, there'd be a different layer that watches the repository and,
> when there are changes, deletes all
> buildconfigs/deploymentconfigs whose underlying yaml files have been
> deleted, creates all new ones, and runs 'oc apply' on the others. Only then
> do all resulting buildconfigs that watch the same repository go and build
> images with their new, up-to-date configs.
>
> Given this kind of mechanism, one could also ensure that the QA pipeline
> (which watches a pre-release branch of the same repository) runs with the
> exact same set of OpenShift configs.
>
> One small part that would help make this work is an oc subcommand with
> the semantics of `oc create` if the object does not yet exist, and `oc
> apply` if it already exists.
>

oc apply has those semantics today: it creates the object if it does not
exist and patches it toward the file's state if it does.  I'm not sure what
difference you would need.
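For illustration, the create-or-update behavior looks like this (a sketch
only; the filename is hypothetical):

```
# First invocation creates the object; later invocations patch it
# toward the file's state instead of failing with "already exists".
oc apply -f web-dc.yaml
# ...edit web-dc.yaml, then re-run...
oc apply -f web-dc.yaml
```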


>
> A larger step towards this would be an oc subcommand that takes a set of
> yaml/json files on the command line, as well as a selector. This command
> then marks for deletion all objects that match the selector but for which
> no yaml/json has been given, and runs the aforementioned oc create/apply
> on the rest. Then one can run that command with all OpenShift yaml files in
> the repository as input, and use it as a super-duper "get everything
> up-to-date" command. (In production, this will have to be careful not to
> delete objects until all the new deploymentconfigs etc. are up and running.)
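The bookkeeping that subcommand would do can be sketched in plain shell.
This is a hypothetical illustration, not a real oc feature: the object
names (web-dc, old-bc, etc.) are made up, and the "live" set is stubbed
where a real tool would run something like `oc get dc,bc -l app=myapp -o
name`.

```shell
# Objects defined by yaml files in the repo (stubbed):
desired="db-dc web-bc web-dc"
# Objects currently live on the cluster matching the selector (stubbed):
live="db-dc old-bc web-dc"

# Lines occurring an odd number of times are the live-only objects:
# desired entries appear twice (or three times if also live), live-only
# entries appear once, so `uniq -u` leaves exactly the set to delete.
to_delete=$(printf '%s\n' $desired $live $desired | sort | uniq -u)

# Everything defined in the repo gets create-or-update via apply;
# the leftovers get deleted (echoed here rather than executed).
for obj in $desired; do
  echo "oc apply -f ${obj}.yaml"
done
for obj in $to_delete; do
  echo "oc delete ${obj}"
done
```

The same set arithmetic is what a real implementation would do against
live cluster state before issuing any apply or delete calls.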
>

This is the "purge from apply" work (
https://github.com/kubernetes/kubernetes/pull/29551).  The "hold on
deletion" may be something that could be addressed by structuring how
objects relate - so that deleting a config map or secret wouldn't affect
deployed instances of an application.  There are some proposals on that
floating around.
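From the kubectl side, that PR's feature surfaces as the --prune flag to
apply, which deletes objects that match the label selector but are not
present in the supplied files.  A sketch (the directory and selector are
hypothetical, and the flag was alpha at the time):

```
# Apply everything under ./manifests; anything matching app=myapp that is
# no longer defined there gets deleted.
kubectl apply --prune -f ./manifests -l app=myapp
```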



>
>

> The problem I'm trying to solve is this: in the real world, there is a
> tight coupling between the source code for an app and the OpenShift
> objects that host that app. For example, if we rename the environment
> variable that an app uses to find its database, the old deploymentconfig
> with the old variable name makes no sense against an image built from the
> commit that changed it, and the new deploymentconfig with the new variable
> name makes no sense against an image built from a commit before the change.
> We want them all to change in lockstep without extra manual steps.
>
> Currently, we have all kinds of manual steps we run to deploy our
> applications on our servers.
>
> Once we've finished porting to OpenShift, I don't want us to end up with
> just as many manual steps as before, except this time round the steps are
> "run `oc create...`" instead of "ssh to server and run apt-get install...".
>
> --
> Pieter Nagel
> Lautus Solutions (Pty) Ltd
> Building 27, The Woodlands, 20 Woodlands Drive, Woodmead, Gauteng
> 0832587540
>
_______________________________________________
dev mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
