Re: Re: Re: Re: Origin 3.9 (oc cluster up) doesn't use registry-mirror for internal registry

2018-04-22 Thread marc . schlegel
Thanks for the link
I think this is a valid solution for development. In the long run we need
to create custom imagestreams anyway.
Still, I cannot save the YAML because our registry is not in the whitelist,
even when setting the insecure annotation. I double-checked my
docker daemon configuration...

{
  "registry-mirrors": [
    "https://docker.mydomain.com:5000"
  ],
  "insecure-registries": [
    "docker.mydomain.com:5000",
    "172.30.0.0/16"
  ],
  "debug": true,
  "experimental": true
}
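(For completeness: after changing /etc/docker/daemon.json the daemon needs a restart before the mirror and insecure-registry settings take effect. Assuming systemd, something like this also shows what the daemon actually picked up; the exact docker info wording varies between Docker versions:)

$ sudo systemctl restart docker
$ docker info | grep -iA2 "registry mirrors"
$ docker info | grep -iA3 "insecure registries"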




From:    Ben Parees
To:      marc.schle...@sdv-it.de
Cc:      users
Date:    20.04.2018 15:25
Subject: Re: Re: Re: Origin 3.9 (oc cluster up) doesn't use
registry-mirror for internal registry





On Fri, Apr 20, 2018 at 2:49 AM,  wrote:
After setting up the proxy in oc cluster up as well as the daemon 
(including the necessary bypass) the problem remains. 

So I created an admin user, gave it the cluster-admin role, and with that
user I can see all image streams and update them in the web console.

And there I can see the root cause, which is actually an SSL problem:


Internal error occurred: Get https://registry-1.docker.io/v2/: x509: 
certificate signed by unknown authority. Timestamp: 2018-04-20T06:33:47Z 
Error count: 2 

Of course we have our own CA :-)
Is there a way to import our ca-bundle? I did not see anything in "oc 
cluster up --help"

You're seeing this error in the imagestreams during image import?

The easiest thing to do is mark the imagestreams insecure: 
https://docs.openshift.org/latest/dev_guide/managing_images.html#insecure-registries

(Since oc cluster up is intended for dev usage, I am going to make the 
assumption this is a reasonable thing for you to do).
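For example, something along these lines (an untested sketch based on that doc page; "jenkins" and the "openshift" namespace are just placeholders for whichever streams are failing to import):

$ oc annotate is/jenkins -n openshift openshift.io/image.insecureRepository=true --overwrite

# or, per tag, edit the imagestream and set:
#   tags:
#   - name: "2"
#     importPolicy:
#       insecure: true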

If you don't want to do that, you'd need to add the cert to the origin 
image which oc cluster up starts up to run the master.
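For what that could look like, here is a rough, untested sketch; it assumes a CentOS/RHEL-based origin image and a local CA file named mydomain-ca.crt, and it re-tags the rebuilt image as openshift/origin:v3.9.0 only so that oc cluster up picks it up:

$ cat > Dockerfile.origin-ca <<'EOF'
FROM openshift/origin:v3.9.0
# add the corporate CA to the system trust store used by the master process
COPY mydomain-ca.crt /etc/pki/ca-trust/source/anchors/
RUN update-ca-trust extract
EOF
$ docker build -t openshift/origin:v3.9.0 -f Dockerfile.origin-ca .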

 




From:    Ben Parees
To:      marc.schle...@sdv-it.de
Cc:      users
Date:    19.04.2018 16:10
Subject: Re: Re: Origin 3.9 (oc cluster up) doesn't use
registry-mirror for internal registry





On Thu, Apr 19, 2018 at 9:14 AM,  wrote: 
Thanks for the quick replies. 

The HTTP proxy is not enough to get out, since the daemon also uses
protocols other than HTTP.

Right, but it will get the imagestream imported. After that it's up to
your daemon configuration as to whether the pull can occur, and it sounded
like you had already configured your daemon.

  


Changing the image streams seems to be a valid approach; unfortunately I
cannot export them in order to edit them... because they are not there yet.
According to the documentation I need to export the image-stream image by
<name>@<id>.
In order to get the id, I can use oc describe... but see:

$ oc describe is jenkins 
Error from server (NotFound): imagestreams.image.openshift.io "jenkins" 
not found 

So I cannot run 

$ oc export isimage jenkins@??? 
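(The default image streams from oc cluster up would normally live in the openshift project rather than the current one, so, assuming cluster-admin rights, something like this is probably the way to find them:)

$ oc get is -n openshift
$ oc describe is jenkins -n openshift
$ oc get is jenkins -n openshift -o yaml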

I am wondering why the containerized version isn't honoring the settings of
the docker daemon running on my machine. Well, it does when it is pulling
the openshift images:
$ docker images
REPOSITORY                         TAG      IMAGE ID       CREATED       SIZE
openshift/origin-web-console       v3.9.0   60938911a1f9   2 weeks ago   485MB
openshift/origin-docker-registry   v3.9.0   2663c9df9123   2 weeks ago   455MB
openshift/origin-haproxy-router    v3.9.0   c70d45de5384   2 weeks ago   1.27GB
openshift/origin-deployer          v3.9.0   378ccd170718   2 weeks ago   1.25GB
openshift/origin                   v3.9.0   b5f178918ae9   2 weeks ago   1.25GB
openshift/origin-pod               v3.9.0   1b36bf755484   2 weeks ago   217MB

but the image-streams are not pulled.
Nonetheless, when I pull the image manually (docker pull
openshift/jenkins-2-centos7) it works.
So why is the pull not working from inside OpenShift?

regards 
Marc 






You can update the image streams to change the registry. 

You can also set a proxy for the master, which is the process doing the 
imports and which presumably needs the proxy configured, by passing these 
args to oc cluster up: 

  --http-proxy='': HTTP proxy to use for master and builds
  --https-proxy='': HTTPS proxy to use for master and builds
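For example, something like the following (the proxy host and port are hypothetical, and the --no-proxy list is an assumption to keep the internal registry and your mirror from being routed through the proxy):

$ oc cluster up \
    --http-proxy=http://proxy.mydomain.com:3128 \
    --https-proxy=http://proxy.mydomain.com:3128 \
    --no-proxy=172.30.0.0/16,docker.mydomain.com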


I believe that should enable your existing imagestreams (not the ones 
pointing to the proxy url) to import. 





best regards 
Marc 






-- 
Ben Parees | OpenShift




Re: oc port-forward command unexpectedly cuts the proxied connection on Origin 3.7.2

2018-04-22 Thread Aleksandar Lazic
Hi.

On 19.04.2018 at 10:46, Fabio Martinelli wrote:
> Dear Colleagues
>
> In a short time I have to migrate several corporate applications from a
> RedHat6 LXC cluster to a RedHat7 OpenShift Origin 3.7.2 cluster.
>
> The application developers here are used to writing an Ansible playbook
> for each app, so they have explicitly asked me to prepare a base
> CentOS7 container that runs as non-root and features an unprivileged
> SSHd daemon, in order to run their well-tested Ansible playbooks, and
> furthermore to place the container's /home on a dedicated GlusterFS
> volume to make it persistent over time. The last link in this chain
> is the oc port-forward command, which is in charge of connecting the
> developers' workstations to the unprivileged SSHd daemon just for the
> duration of the Ansible playbook run.
>
> This is actually working pretty well, but at a certain point the oc
> port-forward command cuts the connection; the Ansible run is obviously
> affected, which makes the developer experience disappointing. On the
> other end the SSHd process does not stop.
Does the port-forwarding go through a proxy?
Is there a fixed amount of time after which this happens (i.e. a timeout)?
What's in the events when this happens?
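One knob that is sometimes involved (just a guess, not a confirmed diagnosis) is the kubelet's streaming connection idle timeout on the nodes, plus SSH keepalives on the client side. A rough, untested sketch; the paths, service name, port and values are assumptions for a typical Origin 3.7 RPM install:

# on each node, raise the idle timeout for exec/port-forward streams
# (the kubelet default is 4h) by adding to /etc/origin/node/node-config.yaml:
#
#   kubeletArguments:
#     streaming-connection-idle-timeout:
#     - "8h"
#
$ sudo systemctl restart origin-node

# on the developer workstation, keep the forwarded SSH session alive
# (2222 is a placeholder for the unprivileged SSHd port)
$ oc port-forward <pod-name> 2222:2222 &
$ ssh -p 2222 -o ServerAliveInterval=30 -o ServerAliveCountMax=6 user@127.0.0.1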

> Kindly, which settings may I change, both in the Origin master YAML
> files and in the Origin node YAML files, in order to prevent this issue?
>
> I'm aware that the application developers should rewrite their work
> in terms of Dockerfiles, but for the time being they really have no
> time to do that.
>
>
> Many thanks,
> Fabio Martinelli
Best regards
Aleks
ME2Digital



Re: Cleaning up not correct when using GlusterFS?

2018-04-22 Thread Tim Dudgeon

Thanks.
Upgrading from origin 3.7.1 to 3.7.2 fixes the problem.

Tim


On 20/04/18 22:46, Seth Jennings wrote:

Associated bz https://bugzilla.redhat.com/show_bug.cgi?id=1546156

On Fri, Apr 20, 2018 at 4:45 PM, Seth Jennings wrote:


Pretty sure this was fixed in this PR that went into 3.9.

https://github.com/openshift/origin/commit/0727d1d31fad4b4f66eff46fe750f966fab8c28b





On Fri, Apr 20, 2018 at 12:49 PM, Tim Dudgeon <tdudgeon...@gmail.com> wrote:

I believe I'm seeing a problem with GlusterFS volumes
when you terminate a pod that is using a gluster-backed PVC.
This is with Origin 3.7.1. I did this:

1. created a new project
2. deployed a pod
3. added a volume to the pod using a gluster-backed PVC
4. rsh'd to the pod and checked the volume could be written to
5. deleted the project

After step 3 the volume was working OK in the pod and the
volume was reported by heketi.

After step 5 the PVC was no longer present and the glusterfs volume
was no longer seen by heketi (so far so good), but the pod was
stuck in the 'Terminating' state and the project did not get
deleted. It looks like the container that was running in the
pod had been deleted. Even after one hour the pod was still stuck
in the terminating state.

Looking deeper it looks like the mount on the host on which
the pod was running was still present. e.g. this was still
found in /etc/mtab:

10.0.0.15:vol_a8866bf3769c987aee5c919305b89529 /var/lib/origin/openshift.local.volumes/pods/51a4ef9e-44b4-11e8-b523-fa163ea80da9/volumes/kubernetes.io~glusterfs/pvc-28d4eb2e-44b4-11e8-b523-fa163ea80da9 fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0

Manually unmounting this mount resulted in the pod finally
terminating and (after a short delay) the project being deleted.
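(For reference, the manual workaround was presumably something along these lines, run on the node that was hosting the pod; the pod UID and PVC name below are placeholders for the values in the mtab entry above:)

$ grep glusterfs /etc/mtab
$ sudo umount /var/lib/origin/openshift.local.volumes/pods/<pod-uid>/volumes/kubernetes.io~glusterfs/<pvc-name>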

Looks like the cleanup processes are not quite correct?

Tim








