We are working on a more robust project export/import process (into a new namespace, possibly a new cluster, etc) and have a question on how to handle image streams.
Our first test was with "oc new-app https://github.com/openshift/ruby-hello-world.git", which results in an image stream like the following:

$ oc get is ruby-hello-world -o yaml
apiVersion: v1
kind: ImageStream
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: 2017-08-08T12:01:22Z
  generation: 1
  labels:
    app: ruby-hello-world
  name: ruby-hello-world
  namespace: project1
  resourceVersion: "183991"
  selfLink: /oapi/v1/namespaces/project1/imagestreams/ruby-hello-world
  uid: 4bd229be-7c31-11e7-badf-989096de63cb
spec:
  lookupPolicy:
    local: false
status:
  dockerImageRepository: 172.30.1.1:5000/project1/ruby-hello-world
  tags:
  - items:
    - created: 2017-08-08T12:02:04Z
      dockerImageReference: 172.30.1.1:5000/project1/ruby-hello-world@sha256:8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
      generation: 1
      image: sha256:8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
    tag: latest

If we link up with the Kubernetes resource exporting by adding --export:

$ oc get is ruby-hello-world -o yaml --export
apiVersion: v1
kind: ImageStream
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: null
  generation: 1
  labels:
    app: ruby-hello-world
  name: ruby-hello-world
  namespace: default
  selfLink: /oapi/v1/namespaces/default/imagestreams/ruby-hello-world
spec:
  lookupPolicy:
    local: false
status:
  dockerImageRepository: 172.30.1.1:5000/default/ruby-hello-world

This leads to an initial question: what stripped the status tags? I would have expected that code to live in the image stream strategy:

https://github.com/openshift/origin/blob/master/pkg/image/registry/imagestream/strategy.go

but that type does not satisfy RESTExportStrategy, so I wasn't able to determine where this is happening. The dockerImageRepository in status remains, but oddly flips from "project1" to "default" when doing an export. Should this field remain in an exported IS at all? And if so, is there any reason why it would flip from project1 to default?
Our real problem, however, picks up in the deployment config after import, where we end up with the following (partial) DC:

apiVersion: v1
kind: DeploymentConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  labels:
    app: ruby-hello-world
  name: ruby-hello-world
  namespace: project2
  selfLink: /oapi/v1/namespaces/project2/deploymentconfigs/ruby-hello-world
spec:
  template:
    metadata:
      annotations:
        openshift.io/generated-by: OpenShiftNewApp
      labels:
        app: ruby-hello-world
        deploymentconfig: ruby-hello-world
    spec:
      containers:
      - image: 172.30.1.1:5000/project1/ruby-hello-world@sha256:8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
        imagePullPolicy: Always
        name: ruby-hello-world

So our deployment config still refers to a very specific image and still points to the old project. Is there any logic we could apply safely to address this? It feels like this should boil down to something like "ruby-hello-world@sha256:HASH"; could we watch for $REGISTRY_IP:PORT/projectname/ during export and strip that leading portion out? What would be the risks in doing so?

All help appreciated, thanks.

Devan

_______________________________________________
dev mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
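To make the question concrete, here is a minimal sketch (Python, all names hypothetical) of the transform we have in mind: strip the prefix only from image references whose registry host looks like an IP:PORT address, as the integrated registry's does, so references to external registries are left untouched. This is just an illustration of the idea, not a claim that this heuristic is safe in all cases.

```python
import re

# Matches a leading "<ipv4>:<port>/<namespace>/" prefix, e.g.
# "172.30.1.1:5000/project1/". Hostname-based registries
# (docker.io, quay.io, ...) deliberately do not match.
REGISTRY_PREFIX = re.compile(r'^\d{1,3}(?:\.\d{1,3}){3}:\d+/[^/]+/')

def strip_registry_prefix(image_ref):
    """Return the image reference without the internal registry/namespace prefix."""
    return REGISTRY_PREFIX.sub('', image_ref)

internal = ('172.30.1.1:5000/project1/ruby-hello-world'
            '@sha256:8d0f81a13ec1b8f8fa4372d26075f0dd'
            '87578fba2ec120776133db71ce2c2074')
external = 'docker.io/library/ruby:2.3'

print(strip_registry_prefix(internal))  # ruby-hello-world@sha256:8d0f...
print(strip_registry_prefix(external))  # unchanged
```

The obvious risk this sketch sidesteps rather than solves: a reference could legitimately point at an IP-addressed registry outside the cluster, and the sha256 digest may only exist in the old project's registry, so a stripped reference is only useful if the image is re-imported or re-tagged into the new namespace.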
