Re: How to pull images from a remote registry with the actual layers instead of just metadata?

2017-11-18 Thread Ben Parees
On Sat, Nov 18, 2017 at 3:16 PM, Joel Pearson  wrote:

> It would introduce a new final layer right? Because after every build,
> OpenShift automatically adds a bunch of labels?


yeah that's true, sorry completely blanked on that.


Re: How to pull images from a remote registry with the actual layers instead of just metadata?

2017-11-18 Thread Joel Pearson
It would introduce a new final layer, right? Because after every build,
OpenShift automatically adds a bunch of labels?
On Sun, 19 Nov 2017 at 7:13 am, Ben Parees  wrote:

> On Sat, Nov 18, 2017 at 2:54 AM, Joel Pearson <
> japear...@agiledigital.com.au> wrote:
>
>> Ahh ok. Is there some way to abuse build configs to push existing images
>> to remote OpenShift registries?
>
>
> technically you could probably have a dockerfile that just says "FROM
> imagex" and nothing else, and put that in a buildconfig.
>
> I'm not sure if that would introduce any new layers during the docker
> build or not.
>
> But it's probably not the right solution for moving images around
> regardless.

Re: How to pull images from a remote registry with the actual layers instead of just metadata?

2017-11-18 Thread Ben Parees
On Sat, Nov 18, 2017 at 2:54 AM, Joel Pearson  wrote:

> Ahh ok. Is there some way to abuse build configs to push existing images
> to remote OpenShift registries?


technically you could probably have a dockerfile that just says "FROM
imagex" and nothing else, and put that in a buildconfig.

I'm not sure if that would introduce any new layers during the docker build
or not.

But it's probably not the right solution for moving images around
regardless.
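
If anyone wants to experiment with that idea anyway, a minimal sketch of such a buildconfig follows. The image, registry, and secret names are placeholders, and (as discussed elsewhere in the thread) the build will still add its own layer and labels, so this is more a curiosity than a recommended promotion path:

```shell
# Write a BuildConfig whose entire Dockerfile is "FROM imagex" and whose
# output pushes to a remote registry (all names here are placeholders).
cat > promote-bc.yaml <<'EOF'
apiVersion: v1
kind: BuildConfig
metadata:
  name: promote-myapp
spec:
  source:
    type: Dockerfile
    dockerfile: "FROM imagex"
  strategy:
    type: Docker
    dockerStrategy: {}
  output:
    to:
      kind: DockerImage
      name: registry-remote.mydomain.com:443/myproject/myapp:latest
    pushSecret:
      name: remote-registry-push
EOF
# Then, against a live cluster:
#   oc create -f promote-bc.yaml && oc start-build promote-myapp
```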



RE: How to pull images from a remote registry with the actual layers instead of just metadata?

2017-11-18 Thread Lars Milland
Hi

 

This limitation (or "design") of oc import-image, together with the limitation of 
docker push that one needs to have the image locally to be able to push it, is 
the reason we have shifted to using Skopeo for all such Docker image importing.

 

We have two OpenShift environments, each with its own internal OpenShift Docker 
registry, one for test and one for production, and we move images up from test 
with Skopeo, using OpenShift service account credentials in Jenkins pipelines 
that run in the production OpenShift environment. That way we ensure that images 
are always available in the embedded OpenShift Docker registry, and the 
OpenShift environment does not depend on other resources such as an external 
Docker registry.

 

We also use Skopeo to copy in Docker images that are not our own, from Docker 
Hub or other registries outside our OpenShift environments.

 

We have added the skopeo command to the Jenkins slave images we use for all 
deployment pipeline activities, so cross-environment image imports can run 
inside our Jenkins pipelines.
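
Outside of Jenkins, the core of the approach is a single skopeo copy between the two registries. A minimal sketch with placeholder registry, namespace, and token names (the pipeline below uses the older atomic: transport; docker:// should also work against these registries):

```shell
# Placeholder names; adjust to your clusters. SRC_TOKEN and DEST_TOKEN are
# OpenShift service-account tokens with pull and push rights respectively.
SRC="docker://registry-test.mydomain.com:443/myproject-preproduction/myapp:latest"
DEST="docker://docker-registry.default.svc:5000/myproject/myapp:latest"

# Set RUN_COPY=1 to actually perform the copy against live registries.
if [ -n "${RUN_COPY:-}" ]; then
  skopeo copy \
    --src-creds "openshift:${SRC_TOKEN}" \
    --dest-creds "openshift:${DEST_TOKEN}" \
    "$SRC" "$DEST"
fi
```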

 

So a Jenkins pipeline that imports a Java application image and activates the 
matching Fabric8-based DeploymentConfig, running in the production OpenShift 
environment on a Maven/Fabric8/Skopeo Jenkins slave and connecting to the 
OpenShift test environment, would look like this:

 

def version() {
  def matcher = readFile('pom.xml') =~ '<version>(.+)</version>'
  matcher ? matcher[0][1] : null
}

def destNamespace = 'myproject'
def srcRegistry = 'registry-test.mydomain.com:443'
def destRegistry = 'docker-registry.default.svc:5000'
def srcNamespace = 'myproject-preproduction'
def application = 'myapp'
def version = version()
def tag = application + '-' + version
def kubernetesserver = 'https://kubernetes.default:443'
def srckubernetesserver = 'https://openshift-test.mydomain.com:8443'
def replicaCount = '2'

node('java8-maven') {
  withEnv(["KUBERNETES_TRUST_CERTIFICATES=true", "KUBERNETES_NAMESPACE=${destNamespace}"]) {

    checkout scm

    stage('Import Image') {
      withCredentials([
          usernamePassword(
              credentialsId: 'test-myproject-builder',
              passwordVariable: 'SRC_TOKEN',
              usernameVariable: 'SRC_USERNAME'),
          usernamePassword(
              credentialsId: 'prod-myproject-builder',
              passwordVariable: 'DEST_TOKEN',
              usernameVariable: 'DEST_USERNAME')
      ]) {
        sh """
          echo "Importing image with Skopeo ${srcRegistry}/${srcNamespace}/${application}:${tag} -> ${destRegistry}/${destNamespace}/${application}:${tag}"

          oc login ${kubernetesserver} --token=${DEST_TOKEN} --insecure-skip-tls-verify
          oc login ${srckubernetesserver} --token=${SRC_TOKEN} --insecure-skip-tls-verify

          skopeo --debug copy --src-tls-verify=false --dest-tls-verify=false \\
              --src-creds openshift:${SRC_TOKEN} --dest-creds openshift:${DEST_TOKEN} \\
              atomic:${srcRegistry}/${srcNamespace}/${application}:${tag} \\
              atomic:${destRegistry}/${destNamespace}/${application}:${tag}

          echo 'Executing deploy of latest DeploymentConfig'
          oc login ${kubernetesserver} --token=\$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) --insecure-skip-tls-verify

          mvn -B -e -Dappargs='--spring.profiles.active=production --spring.cloud.kubernetes.secrets.paths=/tmp/applicationproperties' \\
              -Dmaven.test.skip=true -Djava.net.preferIPv4Stack=true -Dfabric8.mode=openshift \\
              -Dfabric8.skipResourceValidation=true -Dopenshiftnamespace=${destNamespace} \\
              -Dreplicas=${replicaCount} clean fabric8:resource-apply -s devops/maven/settings.xml

          oc rollout latest dc/${application} -n ${destNamespace}
        """
        openshiftVerifyDeployment depCfg: "${application}", namespace: "${destNamespace}", verifyReplicaCount: "${replicaCount}"
      }
    }
  }
}

 

 

 

Best regards

Lars Milland

 

Re: How to pull images from a remote registry with the actual layers instead of just metadata?

2017-11-18 Thread Joel Pearson
Wow! Thanks Lars, I’ll try out your ideas on Monday.
On Sat, 18 Nov 2017 at 10:34 pm, Lars Milland  wrote:


Re: How to pull images from a remote registry with the actual layers instead of just metadata?

2017-11-17 Thread Joel Pearson
Ahh ok. Is there some way to abuse build configs to push existing images
to remote OpenShift registries?
On Sat, 18 Nov 2017 at 6:15 pm, Ben Parees  wrote:

> On Sat, Nov 18, 2017 at 2:12 AM, Joel Pearson <
> japear...@agiledigital.com.au> wrote:
>
>> So there is no way with the oc command to import an image and not have it
>> need the remote to exist after that? I’d just have to use docker push
>> instead?
>
>
> currently that is correct.

Re: How to pull images from a remote registry with the actual layers instead of just metadata?

2017-11-17 Thread Ben Parees
On Sat, Nov 18, 2017 at 2:12 AM, Joel Pearson  wrote:

> So there is no way with the oc command to import an image and not have it
> need the remote to exist after that? I’d just have to use docker push
> instead?


currently that is correct.
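
For what it's worth, the manual docker route mentioned above can be sketched as follows; the registry hostnames, project, and token variables are placeholders:

```shell
# Placeholder image references on the source and destination registries.
SRC_IMAGE="registry-test.mydomain.com:443/myproject/myapp:latest"
DEST_IMAGE="docker-registry.default.svc:5000/myproject/myapp:latest"

# Set RUN_PUSH=1 to actually execute; SRC_TOKEN and DEST_TOKEN are OpenShift
# service-account tokens with access to the respective registries.
if [ -n "${RUN_PUSH:-}" ]; then
  docker login -u openshift -p "$SRC_TOKEN" registry-test.mydomain.com:443
  docker pull "$SRC_IMAGE"
  docker tag "$SRC_IMAGE" "$DEST_IMAGE"
  docker login -u openshift -p "$DEST_TOKEN" docker-registry.default.svc:5000
  docker push "$DEST_IMAGE"
fi
```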


>
> On Sat, 18 Nov 2017 at 6:04 pm, Ben Parees  wrote:
>
>> On Sat, Nov 18, 2017 at 1:13 AM, Lionel Orellana 
>> wrote:
>>
>>> So it sounds like the local option means after it’s pulled once it will
 exist in the local registry?
>>>
>>>
>>> Hmm, it always seems to do the pull-through.
>>> Not sure what will happen if the remote is down.
>>>
>>
>> the blobs will be mirrored in the local registry, but the manifest is not
>> (currently), so the remote still needs to be accessible. The pull should be
>> faster once the blobs have been cached in the local registry (assuming
>> mirroring pull-through is turned on, which I believe it is by default).
>>
>>
>>
>>
>>>
>>> On 18 November 2017 at 16:53, Joel Pearson <
>>> japear...@agiledigital.com.au> wrote:
>>>
 Thanks Lionel. I guess one way to make it secure would be to have a
 certificate that’s valid on the internet. But I guess it’s not really
 important if it’s all internal traffic.

 I’ll try out that local option I think that’s what I want. Because I
 don’t want to have to rely on the remote registry always being there,
 because we’re thinking of shutting down our dev and test clusters at night
 time.

 So it sounds like the local option means after it’s pulled once it will
 exist in the local registry?

 On Sat, 18 Nov 2017 at 4:41 pm, Lionel Orellana 
 wrote:

> Hi Joel,
>
> By default the imported image stream tag will have a reference policy
> of Source. That means the pod will end up pulling the image from the 
> remote
> registry directly. For that to work you have to link a secret containing
> the docker credentials with the deployment's sa. For the default sa this
> looks like this
>
>  oc secrets link default my-dockercfg --for=pull
>
> The other option is to set the istag's reference policy to Local.
>
> tags:
> - annotations: null
>   ...
>   name: latest
>   referencePolicy:
>     type: Local
>
> Now the pod will try to get the image from the local registry which in
> turn will pull from the remote. The registry will look for a dockercfg
> secret with the remote server name. By default communication with the
> remote registry will not use ssl. This is controlled by the istag import
> policy:
>
> importPolicy:
>   insecure: true
>
> I have not been able to get it to work with insecure: false. I can't
> find the right place to put the remote's ca for the registry to use it. 
> But
> it all works well when insecure is true.
>
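
A sketch of wiring the above together with oc, assuming a token-based login to the remote registry; all names and tokens are placeholders, and oc tag gained its --reference-policy and --insecure flags around 3.5/3.6, so check your client version:

```shell
# Placeholders: remote registry, image, and a token with pull rights there.
REMOTE_REGISTRY="registry-test.mydomain.com:443"
REMOTE_IMAGE="${REMOTE_REGISTRY}/myproject-preproduction/myapp:latest"

# Set RUN_IMPORT=1 to actually execute against a cluster.
if [ -n "${RUN_IMPORT:-}" ]; then
  # Credentials the local registry uses when pulling through from the remote.
  oc create secret docker-registry remote-registry \
    --docker-server="$REMOTE_REGISTRY" \
    --docker-username=openshift \
    --docker-password="$REMOTE_TOKEN"
  oc secrets link default remote-registry --for=pull

  # Import the tag with a Local reference policy and an insecure import
  # policy, matching the importPolicy discussed above.
  oc tag --source=docker "$REMOTE_IMAGE" myapp:latest \
    --reference-policy=local --insecure
fi
```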
>
> Cheers
>
> Lionel
>
>
> On 18 November 2017 at 13:59, Joel Pearson <
> japear...@agiledigital.com.au> wrote:
>
>> Hi,
>>
>> I'm using OpenShift 3.6.1 in AWS and I tried using "oc import-image"
>> to pull an image from one openshift cluster to another. I set up the
>> docker
>> secrets, and it appeared to be working as there was a bunch of metadata
>> visible in the image stream.
>>
>> However, when I actually started a pod, it seemed at that point it
>> tried to get the actual layers from the remote registry of the other
>> openshift cluster, at this point it got some authentication error, which 
>> is
>> super bizarre since it happily imported all the metadata fine.
>>
>> Is there some way to actually do the equivalent of docker pull?  So
>> that the image data is transferred in that moment, as opposed to an
>> on-demand "lazy" transfer?
>>
>> Can "oc tag" actually copy the data?
>>
>> Thanks,
>>
>> Joel
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
>>>


-- 
Ben Parees | OpenShift


Re: How to pull images from a remote registry with the actual layers instead of just metadata?

2017-11-17 Thread Joel Pearson
So there is no way with the oc command to import an image and not have it
need the remote to exist after that? I’d just have to use docker push
instead?
On Sat, 18 Nov 2017 at 6:04 pm, Ben Parees  wrote:

> On Sat, Nov 18, 2017 at 1:13 AM, Lionel Orellana 
> wrote:
>
>> So it sounds like the local option means after it’s pulled once it will
>>> exist in the local registry?
>>
>>
>> Hmm It always seems to do the pull-through
>> .
>> Not sure what will happen if the remote is down.
>>
>
> the blobs will be mirrored in the local registry, but the manifest is not
> (currently) so the remote still needs to be accessible, but the pull should
> be faster once the blobs have been cached in the local registry.  (assuming
> mirroring pullthrough is turned on, which by default i believe it is).
>
>
>
>
>>
>> On 18 November 2017 at 16:53, Joel Pearson > > wrote:
>>
>>> Thanks Lionel. I guess one way to make it secure would be to have a
>>> certificate that’s valid on the internet. But I guess it’s not really
>>> important if it’s all internal traffic.
>>>
>>> I’ll try out that local option; I think that’s what I want. I don’t want
>>> to have to rely on the remote registry always being there, because we’re
>>> thinking of shutting down our dev and test clusters at night time.
>>>
>>> So it sounds like the local option means after it’s pulled once it will
>>> exist in the local registry?
>>>
>>> On Sat, 18 Nov 2017 at 4:41 pm, Lionel Orellana 
>>> wrote:
>>>
 Hi Joel,

 By default the imported image stream tag will have a reference policy
 of Source. That means the pod will end up pulling the image from the remote
 registry directly. For that to work you have to link a secret containing
 the docker credentials with the deployment's sa. For the default sa this
 looks like this

  oc secrets link default my-dockercfg --for=pull

 The other option is to set the istag's reference policy to Local.

 tags:
 - annotations: null
   ...
   name: latest
   referencePolicy:
 type: Local

 Now the pod will try to get the image from the local registry which in
 turn will pull from the remote. The registry will look for a dockercfg
 secret with the remote server name. By default communication with the
 remote registry will not use ssl. This is controlled by the istag import
 policy:

 importPolicy:
   insecure: true

 I have not been able to get it to work with insecure: false. I can't
 find the right place to put the remote's ca for the registry to use it. But
 it all works well when insecure is true.


 Cheers

 Lionel


 On 18 November 2017 at 13:59, Joel Pearson <
 japear...@agiledigital.com.au> wrote:

> Hi,
>
> I'm using OpenShift 3.6.1 in AWS and I tried using "oc import-image"
> to pull an image from one openshift cluster to another. I set up the
> docker secrets, and it appeared to be working as there was a bunch of
> metadata
> visible in the image stream.
>
> However, when I actually started a pod, it seemed that at that point it
> tried to get the actual layers from the remote registry of the other
> openshift cluster, and it got an authentication error, which is super
> bizarre since it happily imported all the metadata fine.
>
> Is there some way to actually do the equivalent of docker pull?  So
> that the image data is transferred in that moment, as opposed to an
> on-demand "lazy" transfer?
>
> Can "oc tag" actually copy the data?
>
> Thanks,
>
> Joel
>
>
>

>>
>>
>>
>
>
> --
> Ben Parees | OpenShift
>
>


Re: How to pull images from a remote registry with the actual layers instead of just metadata?

2017-11-17 Thread Ben Parees
On Sat, Nov 18, 2017 at 1:13 AM, Lionel Orellana  wrote:

> So it sounds like the local option means after it’s pulled once it will
>> exist in the local registry?
>
>
> Hmm, it always seems to do the pull-through.
> Not sure what will happen if the remote is down.
>

The blobs will be mirrored in the local registry, but the manifest is not
(currently), so the remote still needs to be accessible; the pull should
be faster once the blobs have been cached in the local registry (assuming
mirroring pullthrough is turned on, which I believe is on by default).
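
For anyone wanting to verify, mirroring is controlled in the registry's
config.yml; a minimal sketch of the relevant section (assuming the stock
openshift middleware) would be:

  middleware:
    repository:
      - name: openshift
        options:
          pullthrough: true
          mirrorpullthrough: true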




>
> On 18 November 2017 at 16:53, Joel Pearson 
> wrote:
>
>> Thanks Lionel. I guess one way to make it secure would be to have a
>> certificate that’s valid on the internet. But I guess it’s not really
>> important if it’s all internal traffic.
>>
>> I’ll try out that local option; I think that’s what I want. I don’t want
>> to have to rely on the remote registry always being there, because we’re
>> thinking of shutting down our dev and test clusters at night time.
>>
>> So it sounds like the local option means after it’s pulled once it will
>> exist in the local registry?
>>
>> On Sat, 18 Nov 2017 at 4:41 pm, Lionel Orellana 
>> wrote:
>>
>>> Hi Joel,
>>>
>>> By default the imported image stream tag will have a reference policy of
>>> Source. That means the pod will end up pulling the image from the remote
>>> registry directly. For that to work you have to link a secret containing
>>> the docker credentials with the deployment's sa. For the default sa this
>>> looks like this
>>>
>>>  oc secrets link default my-dockercfg --for=pull
>>>
>>> The other option is to set the istag's reference policy to Local.
>>>
>>> tags:
>>> - annotations: null
>>>   ...
>>>   name: latest
>>>   referencePolicy:
>>> type: Local
>>>
>>> Now the pod will try to get the image from the local registry which in
>>> turn will pull from the remote. The registry will look for a dockercfg
>>> secret with the remote server name. By default communication with the
>>> remote registry will not use ssl. This is controlled by the istag import
>>> policy:
>>>
>>> importPolicy:
>>>   insecure: true
>>>
>>> I have not been able to get it to work with insecure: false. I can't
>>> find the right place to put the remote's ca for the registry to use it. But
>>> it all works well when insecure is true.
>>>
>>>
>>> Cheers
>>>
>>> Lionel
>>>
>>>
>>> On 18 November 2017 at 13:59, Joel Pearson <
>>> japear...@agiledigital.com.au> wrote:
>>>
 Hi,

 I'm using OpenShift 3.6.1 in AWS and I tried using "oc import-image" to
 pull an image from one openshift cluster to another. I set up the docker
 secrets, and it appeared to be working as there was a bunch of metadata
 visible in the image stream.

 However, when I actually started a pod, it seemed that at that point it
 tried to get the actual layers from the remote registry of the other
 openshift cluster, and it got an authentication error, which is super
 bizarre since it happily imported all the metadata fine.

 Is there some way to actually do the equivalent of docker pull?  So
 that the image data is transferred in that moment, as opposed to an
 on-demand "lazy" transfer?

 Can "oc tag" actually copy the data?

 Thanks,

 Joel



>>>
>
>
>


-- 
Ben Parees | OpenShift


Re: How to pull images from a remote registry with the actual layers instead of just metadata?

2017-11-17 Thread Joel Pearson
Thanks Lionel. I guess one way to make it secure would be to have a
certificate that’s valid on the internet. But I guess it’s not really
important if it’s all internal traffic.

I’ll try out that local option; I think that’s what I want. I don’t want to
have to rely on the remote registry always being there, because we’re
thinking of shutting down our dev and test clusters at night time.

So it sounds like the local option means after it’s pulled once it will
exist in the local registry?
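
Presumably I could also flip an existing tag over when tagging, something
like this (names below are made up):

  oc tag registry.remote.example.com/otherproject/myimage:latest \
    myproject/myimage:latest --reference-policy=local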
On Sat, 18 Nov 2017 at 4:41 pm, Lionel Orellana  wrote:

> Hi Joel,
>
> By default the imported image stream tag will have a reference policy of
> Source. That means the pod will end up pulling the image from the remote
> registry directly. For that to work you have to link a secret containing
> the docker credentials with the deployment's sa. For the default sa this
> looks like this
>
>  oc secrets link default my-dockercfg --for=pull
>
> The other option is to set the istag's reference policy to Local.
>
> tags:
> - annotations: null
>   ...
>   name: latest
>   referencePolicy:
> type: Local
>
> Now the pod will try to get the image from the local registry which in
> turn will pull from the remote. The registry will look for a dockercfg
> secret with the remote server name. By default communication with the
> remote registry will not use ssl. This is controlled by the istag import
> policy:
>
> importPolicy:
>   insecure: true
>
> I have not been able to get it to work with insecure: false. I can't find
> the right place to put the remote's ca for the registry to use it. But it
> all works well when insecure is true.
>
>
> Cheers
>
> Lionel
>
>
> On 18 November 2017 at 13:59, Joel Pearson 
> wrote:
>
>> Hi,
>>
>> I'm using OpenShift 3.6.1 in AWS and I tried using "oc import-image" to
>> pull an image from one openshift cluster to another. I set up the docker
>> secrets, and it appeared to be working as there was a bunch of metadata
>> visible in the image stream.
>>
>> However, when I actually started a pod, it seemed that at that point it
>> tried to get the actual layers from the remote registry of the other
>> openshift cluster, and it got an authentication error, which is super
>> bizarre since it happily imported all the metadata fine.
>>
>> Is there some way to actually do the equivalent of docker pull?  So that
>> the image data is transferred in that moment, as opposed to an on-demand
>> "lazy" transfer?
>>
>> Can "oc tag" actually copy the data?
>>
>> Thanks,
>>
>> Joel
>>
>>
>>
>


How to pull images from a remote registry with the actual layers instead of just metadata?

2017-11-17 Thread Joel Pearson
Hi,

I'm using OpenShift 3.6.1 in AWS and I tried using "oc import-image" to
pull an image from one openshift cluster to another. I set up the docker
secrets, and it appeared to be working as there was a bunch of metadata
visible in the image stream.

However, when I actually started a pod, it seemed that at that point it
tried to get the actual layers from the remote registry of the other
openshift cluster, and it got an authentication error, which is super
bizarre since it happily imported all the metadata fine.

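For context, the setup steps were roughly these (all names and hostnames
below are placeholders):

  oc create secret docker-registry remote-pull-secret \
    --docker-server=registry.remote-cluster.example.com \
    --docker-username=unused \
    --docker-password="$REMOTE_TOKEN"
  oc secrets link default remote-pull-secret --for=pull
  oc import-image myimage:latest \
    --from=registry.remote-cluster.example.com/otherproject/myimage:latest \
    --confirm
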
Is there some way to actually do the equivalent of docker pull?  So that
the image data is transferred in that moment, as opposed to an on-demand
"lazy" transfer?

Can "oc tag" actually copy the data?

Thanks,

Joel