Why do we need parameters? Which parameters are we adding?

On Aug 9, 2017, at 12:21 PM, Cesar Wong <cew...@redhat.com> wrote:

Hi Devan,

You can see my branch here:
https://github.com/csrwng/origin/tree/parameterize_template
(last 5 commits)

Hopefully there should be a PR soon. The REST endpoint should be functional
and the CLI still needs work, but basically the idea is to have the reverse
of the ‘oc process’ command: the input is a list of resources and out comes
a template with parameters.

On Aug 9, 2017, at 11:40 AM, Devan Goodwin <dgood...@redhat.com> wrote:

On Wed, Aug 9, 2017 at 11:44 AM, Cesar Wong <cew...@redhat.com> wrote:

Hi Devan,

This past iteration I started work on this same problem [1]

https://trello.com/c/I2ZJxS94/998-5-improve-oc-export-to-parameterize-containerapppromotion

The problem is broad and the way I decided to break it up is to consider the
export and parameterize operations independently. The export should be
handled by the resource’s strategy as you mentioned in the Kube issue you
opened. The parameterization part can be a follow up to the export. Here’s
an initial document describing it:

https://docs.google.com/a/redhat.com/document/d/15SLkhXRovY1dLbxpWFy_Wfq3I6xMznsOAnopTYrXw_A/edit?usp=sharing


Thanks that was a good read, will keep an eye on this document.

Does anything exist yet for your parameterization code? Curious what
it looks like and if it's something we could re-use yet, what the
inputs and outputs are, etc.


On the export side, I think we need to decide whether there are different
“types” of export, which should affect the logic of the resource strategy.
For example, does a deployment config look different if you’re exporting it
for use in a different namespace vs. a different cluster? If this is the
case, then right now is probably a good time to drive that change to the
upstream API as David suggested.


Is anyone working on a proposal for this export logic upstream? I am
wondering if I should try to put one together if I can find the time.
The general idea (as I understand it) would be to migrate the
currently quite broken export=true param to something strategy-based,
and interpret "true" to mean a strategy that matches what we do today.
The references in code I've seen indicate that the current intention
is to strip anything the user cannot specify themselves.
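
To make that concrete, here is a rough Python sketch (mine, not the
upstream code) of what "strip anything the user cannot specify themselves"
could look like if an exported object is treated as a plain dict; the exact
field list is an assumption on my part, not what export actually does today:

def strip_user_unsettable_fields(obj):
    """Rough sketch: drop fields a user could not have specified themselves.

    The field list is illustrative only; today's export clearly keeps
    namespace, selfLink and parts of status, as the output below shows.
    """
    metadata = obj.get("metadata", {})
    for field in ("uid", "resourceVersion", "creationTimestamp",
                  "selfLink", "generation"):
        metadata.pop(field, None)
    # status is entirely server-populated, so a "strict" strategy
    # would drop it wholesale.
    obj.pop("status", None)
    return obj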




On Aug 9, 2017, at 10:27 AM, Ben Parees <bpar...@redhat.com> wrote:



On Wed, Aug 9, 2017 at 10:00 AM, Devan Goodwin <dgood...@redhat.com> wrote:


On Wed, Aug 9, 2017 at 9:58 AM, Ben Parees <bpar...@redhat.com> wrote:



On Wed, Aug 9, 2017 at 8:49 AM, Devan Goodwin <dgood...@redhat.com>
wrote:


We are working on a more robust project export/import process (into a
new namespace, possibly a new cluster, etc) and have a question on how
to handle image streams.

Our first test was with "oc new-app
https://github.com/openshift/ruby-hello-world.git", which results in an
image stream like the following:

$ oc get is ruby-hello-world -o yaml
apiVersion: v1
kind: ImageStream
metadata:
 annotations:
   openshift.io/generated-by: OpenShiftNewApp
 creationTimestamp: 2017-08-08T12:01:22Z
 generation: 1
 labels:
   app: ruby-hello-world
 name: ruby-hello-world
 namespace: project1
 resourceVersion: "183991"
 selfLink: /oapi/v1/namespaces/project1/imagestreams/ruby-hello-world
 uid: 4bd229be-7c31-11e7-badf-989096de63cb
spec:
 lookupPolicy:
   local: false
status:
 dockerImageRepository: 172.30.1.1:5000/project1/ruby-hello-world
 tags:
 - items:
   - created: 2017-08-08T12:02:04Z
     dockerImageReference: 172.30.1.1:5000/project1/ruby-hello-world@sha256:8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
     generation: 1
     image:
sha256:8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
   tag: latest


If we link up with the kubernetes resource exporting by adding
--export:

$ oc get is ruby-hello-world -o yaml --export
apiVersion: v1
kind: ImageStream
metadata:
 annotations:
   openshift.io/generated-by: OpenShiftNewApp
 creationTimestamp: null
 generation: 1
 labels:
   app: ruby-hello-world
 name: ruby-hello-world
 namespace: default
 selfLink: /oapi/v1/namespaces/default/imagestreams/ruby-hello-world
spec:
 lookupPolicy:
   local: false
status:
 dockerImageRepository: 172.30.1.1:5000/default/ruby-hello-world


This leads to an initial question: what stripped the status tags? I
would have expected this code to live in the image stream strategy:

https://github.com/openshift/origin/blob/master/pkg/image/registry/imagestream/strategy.go
but that does not satisfy RESTExportStrategy, and I wasn't able to
determine where this is happening.

The dockerImageRepository in status remains, but weirdly flips from
"project1" to "default" when doing an export. Should this remain in an
exported IS at all? And if so is there any reason why it would flip
from project1 to default?

Our real problem, however, picks up in the deployment config after
import, where we end up with the following (partial) DC:

apiVersion: v1
kind: DeploymentConfig
metadata:
 annotations:
   openshift.io/generated-by: OpenShiftNewApp
 labels:
   app: ruby-hello-world
 name: ruby-hello-world
 namespace: project2
 selfLink:
/oapi/v1/namespaces/project2/deploymentconfigs/ruby-hello-world
spec:
 template:
   metadata:
     annotations:
       openshift.io/generated-by: OpenShiftNewApp
     labels:
       app: ruby-hello-world
       deploymentconfig: ruby-hello-world
   spec:
     containers:
     - image: 172.30.1.1:5000/project1/ruby-hello-world@sha256:8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
       imagePullPolicy: Always
       name: ruby-hello-world

So our deployment config still refers to a very specific image and
points to the old project. Is there any logic we could apply safely to
address this?

It feels like this should boil down to something like
“ruby-hello-world@sha256:HASH”. Could we watch for
$REGISTRY_IP:PORT/projectname/ during export and strip that leading
portion out? What would be the risks in doing so?



Adding Cesar since he was recently looking at some of the export logic you
have questions about and he's also very interested in this subject since
he's working on a related piece of functionality.  That said:

if you've got an imagechangetrigger in the DC you should be able to strip
the entire image field (it should be repopulated from the ICT imagestream
reference during deployment).  However:


Ok good, so during export we can iterate the image change triggers; if
we see one, we can match up on containerName and strip container.image
for that name.
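
Roughly what I have in mind, as a Python sketch over the exported DC
treated as a plain dict (the trigger field names come from the DC API;
the function itself is hypothetical, not the real export code):

def strip_ict_managed_images(dc):
    """Sketch: blank out container images that an ImageChangeTrigger
    will repopulate from its image stream reference at deployment time.
    """
    managed = set()
    for trigger in dc.get("spec", {}).get("triggers", []):
        if trigger.get("type") != "ImageChange":
            continue
        params = trigger.get("imageChangeParams", {})
        managed.update(params.get("containerNames", []))

    containers = (dc.get("spec", {})
                    .get("template", {})
                    .get("spec", {})
                    .get("containers", []))
    for container in containers:
        if container.get("name") in managed:
            container["image"] = ""
    return dc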


1) you still need to straighten out the ICT reference which is also going
to be pointing to an imagestreamtag in the old project/cluster/whatever


Ok I think we can handle this explicitly. More below though.

2) if you don’t have an ICT reference you do need to sort this out, and
stripping it the way you propose is definitely not a good idea... what’s
going to repopulate that w/ the right prefix/project in the new cluster?
What if the image field was pointing to docker.io or some other external
registry?


I definitely wouldn’t advocate blindly doing so, but rather on export
I believe we can determine the cluster registry IP (if there is one),
then watch for it as we export objects and parameterize it. At this
point it feels like we need to be thinking about generating a template
rather than a flat list of kube API resources (which is what our app
produces right now).
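
As a rough illustration, assuming we already know the internal registry
host and have picked parameter names like REGISTRY_IP and NAMESPACE (both
made up for this sketch):

import re

def parameterize_registry_refs(obj, registry_host, namespace):
    """Sketch: rewrite internal-registry image references so the exported
    object can be re-resolved elsewhere, e.g.
    172.30.1.1:5000/project1/ruby-hello-world@sha256:... becomes
    ${REGISTRY_IP}/${NAMESPACE}/ruby-hello-world@sha256:...
    """
    prefix = re.compile(re.escape(registry_host) + "/" +
                        re.escape(namespace) + "/")

    def walk(node):
        if isinstance(node, dict):
            return {key: walk(value) for key, value in node.items()}
        if isinstance(node, list):
            return [walk(item) for item in node]
        if isinstance(node, str):
            return prefix.sub("${REGISTRY_IP}/${NAMESPACE}/", node)
        return node

    return walk(obj)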



Talk to Cesar.  He’s developing a “templatize this resource object” API
endpoint.  The idea would be to run a flow where you export objects, then
send them through the templatizer.







To clarify, I have been attempting to do as much of this as possible using
the built-in kube API “export” param, but the suggestion above feels like
it should not live there. Our main driver will be an app in-cluster (for
monitoring capacity and archiving dormant projects), so we do have a
place to apply extra logic like this. I'm now thinking our app should
layer this logic in after we fetch the resources using kube’s export
param, and then generate a template.
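
In other words the flow would end with something like this; the Template
shape is the standard v1 Template kind, but the way parameters get handed
in is just an assumption for the sketch:

def build_template(exported_objects, parameter_names):
    """Sketch: wrap already-exported (and parameterized) objects in a
    v1 Template, ready for `oc process` on the receiving side.

    parameter_names is whatever the parameterization step decided it
    needed, e.g. ["REGISTRY_IP", "NAMESPACE"].
    """
    return {
        "apiVersion": "v1",
        "kind": "Template",
        "metadata": {"name": "exported-project"},
        "parameters": [{"name": name, "required": True}
                       for name in parameter_names],
        "objects": exported_objects,
    }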



We need a general solution to the “export this resource for use in another
project/cluster” problem; it would be nice if this could be that.  But as I
said, there are some very intractable problems around how to handle
references.




Side topic: it would be nice if this functionality were available in oc
somewhere (potentially as some new command in the future); we would just
need to solve lookup of the integrated registry IP so we could extract it
to a param.
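
One possible way to do that lookup from inside the project (an assumption
on my part, not existing behavior): derive the host from
status.dockerImageRepository on any image stream, as seen in the export
output above:

def integrated_registry_host(imagestream):
    """Sketch: pull the registry host:port out of an image stream's status.

    status.dockerImageRepository looks like
    172.30.1.1:5000/project1/ruby-hello-world, so everything before the
    first '/' is the integrated registry's host:port. Returns None if the
    field isn't populated.
    """
    repo = imagestream.get("status", {}).get("dockerImageRepository", "")
    return repo.split("/", 1)[0] if repo else None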



yes, it's definitely desirable if we can solve the challenges to make it
generically usable.





In short, you're attempting to tackle a very complex problem where the
answer is frequently "it depends".  We wrote some documentation discussing
some of the considerations when exporting/importing resources between
clusters/projects:

https://docs.openshift.org/latest/dev_guide/application_lifecycle/promoting_applications.html


This is very useful, as is the feedback, thanks! If anyone has
additional edge cases in mind please let us know, or if you believe
this is simply not possible and we shouldn't be trying. However, at
this point I'm still feeling like we can proceed here with the goal of
doing as much as we can: try to ensure the user's project makes it into
its new location, and if something is broken because we missed it, or
it simply has to be broken because we can't make assumptions, they can
fix it themselves.



Defining the boundary conditions of when the user simply has to step in and
manually fix up references/etc is definitely a good idea.  I think anything
that references something outside the current project (whether that means
another project in the same cluster, or another cluster, or an external
registry entirely) qualifies, at a minimum, for a warning to the user of "we
weren't sure how to handle this so we left it alone, but you may need to
update it depending on where you intend to reuse this resource".









All help appreciated, thanks.

Devan





--
Ben Parees | OpenShift








_______________________________________________
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
