Re: Cleaning up not correct when using GlusterFS?

2018-04-20 Thread Seth Jennings
Associated bz https://bugzilla.redhat.com/show_bug.cgi?id=1546156

On Fri, Apr 20, 2018 at 4:45 PM, Seth Jennings  wrote:

> Pretty sure this was fixed in this PR that went into 3.9.
> https://github.com/openshift/origin/commit/0727d1d31fad4b4f66eff46fe750f9
> 66fab8c28b
>
>
> On Fri, Apr 20, 2018 at 12:49 PM, Tim Dudgeon 
> wrote:
>
>> I believe I'm seeing a problem with using GlusterFS volumes when you
>> terminate a pod that is using a gluster backed PVC. This is with Origin
>> 3.7.1. I did this:
>>
>> 1. create new project
>> 2. deployed a pod
>> 3. added a volume to the pod using  a gluster backed PVC.
>> 4. rsh to the pod and check the volume can be written to
>> 5. delete the project
>>
>> After stage 3 the volume was working OK in the pod and the volume was
>> reported by heketi.
>>
>> After stage 5 the PVC was no longer present, the glusterfs volume was no
>> longer seen by heketi (so far so good) but the pod was stuck in the
>> 'Terminating' state and the project did not get deleted. It looks like the
>> container that was running in the pod had been deleted. Even after one hour
>> it was still stuck in the terminating state.
>>
>> Looking deeper it looks like the mount on the host on which the pod was
>> running was still present. e.g. this was still found in /etc/mtab:
>>
>> 10.0.0.15:vol_a8866bf3769c987aee5c919305b89529
>> /var/lib/origin/openshift.local.volumes/pods/51a4ef9e-44b4-11e8-b523-fa163ea80da9/volumes/kubernetes.io~glusterfs/pvc-28d4eb2e-44b4-11e8-b523-fa163ea80da9
>> fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
>>
>> Manually unmounting this mount resulted in the pod finally terminating
>> and (after a short delay) the project being deleted.
>>
>> Looks like the cleanup processes are not quite correct?
>>
>> Tim
>>
>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Cleaning up not correct when using GlusterFS?

2018-04-20 Thread Seth Jennings
Pretty sure this was fixed in this PR that went into 3.9.
https://github.com/openshift/origin/commit/0727d1d31fad4b4f66eff46fe750f966fab8c28b

On Fri, Apr 20, 2018 at 12:49 PM, Tim Dudgeon  wrote:

> I believe I'm seeing a problem with using GlusterFS volumes when you
> terminate a pod that is using a gluster backed PVC. This is with Origin
> 3.7.1. I did this:
>
> 1. create new project
> 2. deployed a pod
> 3. added a volume to the pod using  a gluster backed PVC.
> 4. rsh to the pod and check the volume can be written to
> 5. delete the project
>
> After stage 3 the volume was working OK in the pod and the volume was
> reported by heketi.
>
> After stage 5 the PVC was no longer present, the glusterfs volume was no
> longer seen by heketi (so far so good) but the pod was stuck in the
> 'Terminating' state and the project did not get deleted. It looks like the
> container that was running in the pod had been deleted. Even after one hour
> it was still stuck in the terminating state.
>
> Looking deeper it looks like the mount on the host on which the pod was
> running was still present. e.g. this was still found in /etc/mtab:
>
> 10.0.0.15:vol_a8866bf3769c987aee5c919305b89529
> /var/lib/origin/openshift.local.volumes/pods/51a4ef9e-44b4-11e8-b523-fa163ea80da9/volumes/kubernetes.io~glusterfs/pvc-28d4eb2e-44b4-11e8-b523-fa163ea80da9
> fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
>
> Manually unmounting this mount resulted in the pod finally terminating and
> (after a short delay) the project being deleted.
>
> Looks like the cleanup processes are not quite correct?
>
> Tim
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Cleaning up not correct when using GlusterFS?

2018-04-20 Thread Jamie Duncan
I'm pretty sure there is a bz for this. Will look when I'm near a normal
screen.

On Fri, Apr 20, 2018, 1:51 PM Tim Dudgeon  wrote:

> I believe I'm seeing a problem with using GlusterFS volumes when you
> terminate a pod that is using a gluster backed PVC. This is with Origin
> 3.7.1. I did this:
>
> 1. create new project
> 2. deployed a pod
> 3. added a volume to the pod using  a gluster backed PVC.
> 4. rsh to the pod and check the volume can be written to
> 5. delete the project
>
> After stage 3 the volume was working OK in the pod and the volume was
> reported by heketi.
>
> After stage 5 the PVC was no longer present, the glusterfs volume was no
> longer seen by heketi (so far so good) but the pod was stuck in the
> 'Terminating' state and the project did not get deleted. It looks like
> the container that was running in the pod had been deleted. Even after
> one hour it was still stuck in the terminating state.
>
> Looking deeper it looks like the mount on the host on which the pod was
> running was still present. e.g. this was still found in /etc/mtab:
>
> 10.0.0.15:vol_a8866bf3769c987aee5c919305b89529
> /var/lib/origin/openshift.local.volumes/pods/51a4ef9e-44b4-11e8-b523-fa163ea80da9/volumes/kubernetes.io~glusterfs/pvc-28d4eb2e-44b4-11e8-b523-fa163ea80da9
> fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
>
> Manually unmounting this mount resulted in the pod finally terminating
> and (after a short delay) the project being deleted.
>
> Looks like the cleanup processes are not quite correct?
>
> Tim
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Cleaning up not correct when using GlusterFS?

2018-04-20 Thread Tim Dudgeon
I believe I'm seeing a problem with using GlusterFS volumes when you 
terminate a pod that is using a gluster backed PVC. This is with Origin 
3.7.1. I did this:


1. create new project
2. deployed a pod
3. added a volume to the pod using a gluster backed PVC.
4. rsh to the pod and check the volume can be written to
5. delete the project
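
For reference, a minimal command-level sketch of the steps above (the project,
app and claim names, the image, and the StorageClass name "glusterfs-storage"
are illustrative assumptions, not taken from the report):

$ oc new-project gluster-test                                   # step 1
$ oc run myapp --image=centos:7 --command -- sleep infinity     # step 2
$ oc set volume dc/myapp --add --name=data --type=pvc \
    --claim-name=gluster-pvc --claim-size=1Gi \
    --claim-class=glusterfs-storage --mount-path=/data          # step 3
$ oc rsh dc/myapp touch /data/write-test                        # step 4
$ oc delete project gluster-test                                # step 5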

After stage 3 the volume was working OK in the pod and the volume was
reported by heketi.


After stage 5 the PVC was no longer present, the glusterfs volume was no
longer seen by heketi (so far so good) but the pod was stuck in the
'Terminating' state and the project did not get deleted. It looks like 
the container that was running in the pod had been deleted. Even after 
one hour it was still stuck in the terminating state.


Looking deeper it looks like the mount on the host on which the pod was 
running was still present. e.g. this was still found in /etc/mtab:


10.0.0.15:vol_a8866bf3769c987aee5c919305b89529 
/var/lib/origin/openshift.local.volumes/pods/51a4ef9e-44b4-11e8-b523-fa163ea80da9/volumes/kubernetes.io~glusterfs/pvc-28d4eb2e-44b4-11e8-b523-fa163ea80da9 
fuse.glusterfs 
rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 
0 0


Manually unmounting this mount resulted in the pod finally terminating 
and (after a short delay) the project being deleted.
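
A rough workaround sketch for the same situation; the pod UID and volume
directory in the path are placeholders to be taken from the actual
/proc/mounts output on the affected node:

# on the node that was running the stuck pod, list leftover gluster mounts
$ grep glusterfs /proc/mounts
# unmount the stale volume belonging to the terminating pod
$ sudo umount /var/lib/origin/openshift.local.volumes/pods/<pod-uid>/volumes/kubernetes.io~glusterfs/<pvc-volume-name>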


Looks like the cleanup processes are not quite correct?

Tim



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Re: Re: Origin 3.9 (oc cluster up) doesn't use registry-mirror for internal registry

2018-04-20 Thread Ben Parees
On Fri, Apr 20, 2018 at 2:49 AM,  wrote:

> After setting up the proxy in oc cluster up as well as the daemon
> (including the necessary bypass) the problem remains.
>
> So I created an admin user to which I gave the cluster-admin role and this
> one can see all image-streams and I can update them in the webconsole.
>
> And here I can see the root cause which is actually caused by SSL
>
>
> Internal error occurred: Get https://registry-1.docker.io/v2/: x509:
> certificate signed by unknown authority. Timestamp: 2018-04-20T06:33:47Z
> Error count: 2
>
> Of course we have our own CA :-)
> Is there a way to import our ca-bundle? I did not see anything in "oc
> cluster up --help"


You're seeing this error in the imagestreams during image import?

The easiest thing to do is mark the imagestreams insecure:
https://docs.openshift.org/latest/dev_guide/managing_images.html#insecure-registries
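
For example, something like this should do it (the annotation name is the one
I recall from that doc page, so double-check there; "jenkins" is just the
image stream from your earlier error):

$ oc annotate is jenkins openshift.io/image.insecureRepository=true
$ oc import-image jenkins   # re-trigger the import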

(Since oc cluster up is intended for dev usage, I am going to make the
assumption this is a reasonable thing for you to do).

If you don't want to do that, you'd need to add the cert to the origin
image which oc cluster up starts up to run the master.
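
Something roughly like this (the tag, file name and trust-store path are
assumptions; rebuilding under the same tag is meant to make oc cluster up use
the local image):

$ cat > Dockerfile.origin-ca <<'EOF'
FROM openshift/origin:v3.9.0
COPY corp-ca.crt /etc/pki/ca-trust/source/anchors/corp-ca.crt
RUN update-ca-trust extract
EOF
$ docker build -t openshift/origin:v3.9.0 -f Dockerfile.origin-ca .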



>
>
>
>
> From: Ben Parees
> To: marc.schle...@sdv-it.de
> Cc: users
> Date: 19.04.2018 16:10
> Subject: Re: Re: Origin 3.9 (oc cluster up) doesn't use
> registry-mirror for internal registry
> --
>
>
>
>
>
> On Thu, Apr 19, 2018 at 9:14 AM, <marc.schle...@sdv-it.de> wrote:
> Thanks for the quick replies.
>
> The http-proxy is not enough to get out, since the daemon also uses
> protocols other than http.
>
> right but it will get the imagestream imported.  After that it's up to
> your daemon configuration as to whether the pull can occur, and it sounded
> like you had already configured your daemon.
>
>
>
>
> Changing the image-streams seems to be a valid approach; unfortunately I
> cannot export them in order to edit them... because they are not there yet.
> According to the documentation I need to export the image-stream by
> @
> In order to get the id, I can use oc describe... but see
>
> $ oc describe is jenkins
> Error from server (NotFound): imagestreams.image.openshift.io "jenkins" not found
>
> So I cannot run
>
> $ oc export isimage jenkins@???
>
> I am wondering why the containerized version isn't honoring the settings of
> the docker-daemon running on my machine. Well it does when it is pulling
> the openshift images
> $ docker images
> REPOSITORY                         TAG      IMAGE ID       CREATED       SIZE
> openshift/origin-web-console       v3.9.0   60938911a1f9   2 weeks ago   485MB
> openshift/origin-docker-registry   v3.9.0   2663c9df9123   2 weeks ago   455MB
> openshift/origin-haproxy-router    v3.9.0   c70d45de5384   2 weeks ago   1.27GB
> openshift/origin-deployer          v3.9.0   378ccd170718   2 weeks ago   1.25GB
> openshift/origin                   v3.9.0   b5f178918ae9   2 weeks ago   1.25GB
> openshift/origin-pod               v3.9.0   1b36bf755484   2 weeks ago   217MB
>
> but the image-streams are not pulled.
> Nonetheless, when I pull the image manually (docker pull
> openshift/jenkins-2-centos7) it works.
> So why is the pull not working from inside OpenShift?
>
> regards
> Marc
>
>
>
>
>
>
> You can update the image streams to change the registry.
>
> You can also set a proxy for the master, which is the process doing the
> imports and which presumably needs the proxy configured, by passing these
> args to oc cluster up:
>
>   --http-proxy='': HTTP proxy to use for master and builds
>   --https-proxy='': HTTPS proxy to use for master and builds
>
>
> I believe that should enable your existing imagestreams (not the ones
> pointing to the proxy url) to import.
>
>
>
>
>
> best regards
> Marc
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
>
>
> --
> Ben Parees | OpenShift
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> 
>
>
>
>
> --
> Ben Parees | OpenShift
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Ben Parees | OpenShift

Re: Re: Re: Re: Origin 3.9 (oc cluster up) doesn't use registry-mirror for internal registry

2018-04-20 Thread marc . schlegel
One more thing: when I change the image-stream to point to our mirror
registry I cannot save the YAML.


Reason: ImageStream "jenkins" is invalid: [spec.tags[1].from.name: 
Forbidden: registry "docker.sdvrz.de:5000" not allowed by whitelist: 
"172.30.1.1:5000", "docker.io:443", "*.docker.io:443", "*.redhat.com:443", 
and 5 more ..., spec.tags[2].from.name: Forbidden: registry 
"docker.sdvrz.de:5000" not allowed by whitelist: "172.30.1.1:5000", 
"docker.io:443", "*.docker.io:443", "*.redhat.com:443", and 5 more ...] 

Why is the internal registry using different settings than my docker daemon on
the host? Our mirror is added as an insecure-registry.
There seems to be no option to change this. I've also looked at the internal
registry's deployment in the "default" project using the cluster-admin user.
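
For reference, that whitelist appears to come from
imagePolicyConfig.allowedRegistriesForImport in the master configuration
rather than from the docker daemon. A rough sketch of adding the mirror there
(the config location and container name are assumptions that depend on how oc
cluster up was started):

# edit the generated master-config.yaml (e.g. under the directory given by
# --host-config-dir) and extend the whitelist, roughly:
#
#   imagePolicyConfig:
#     allowedRegistriesForImport:
#     - domainName: docker.sdvrz.de:5000
#       insecure: true
#
# then restart the master container so it picks up the change
$ docker restart origin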




From: marc.schle...@sdv-it.de
To: users@lists.openshift.redhat.com
Date: 20.04.2018 08:51
Subject: Re: Re: Re: Origin 3.9 (oc cluster up) doesn't use
registry-mirror for internal registry
Sent by: users-boun...@lists.openshift.redhat.com



After setting up the proxy in oc cluster up as well as the daemon 
(including the necessary bypass) the problem remains. 

So I created an admin user to which I gave the cluster-admin role and this
one can see all image-streams and I can update them in the webconsole. 

And here I can see the root cause which is actually caused by SSL 


Internal error occurred: Get https://registry-1.docker.io/v2/: x509: 
certificate signed by unknown authority. Timestamp: 2018-04-20T06:33:47Z 
Error count: 2 

Of course we have our own CA :-)
Is there a way to import our ca-bundle? I did not see anything in "oc 
cluster up --help" 



From: Ben Parees
To: marc.schle...@sdv-it.de
Cc: users
Date: 19.04.2018 16:10
Subject: Re: Re: Origin 3.9 (oc cluster up) doesn't use
registry-mirror for internal registry





On Thu, Apr 19, 2018 at 9:14 AM,  wrote: 
Thanks for the quick replies. 

The http-proxy is not enough to get out, since the daemon also uses
protocols other than http.

right but it will get the imagestream imported.  After that it's up to 
your daemon configuration as to whether the pull can occur, and it sounded 
like you had already configured your daemon. 

 


Changing the image-streams seems to be a valid approach; unfortunately I
cannot export them in order to edit them... because they are not there yet.
According to the documentation I need to export the image-stream by
@
In order to get the id, I can use oc describe... but see

$ oc describe is jenkins 
Error from server (NotFound): imagestreams.image.openshift.io "jenkins" 
not found 

So I cannot run 

$ oc export isimage jenkins@??? 

I am wondering why the containerized version isn't honoring the settings of
the docker-daemon running on my machine. Well it does when it is pulling 
the openshift images 
$ docker images
REPOSITORY                         TAG      IMAGE ID       CREATED       SIZE
openshift/origin-web-console       v3.9.0   60938911a1f9   2 weeks ago   485MB
openshift/origin-docker-registry   v3.9.0   2663c9df9123   2 weeks ago   455MB
openshift/origin-haproxy-router    v3.9.0   c70d45de5384   2 weeks ago   1.27GB
openshift/origin-deployer          v3.9.0   378ccd170718   2 weeks ago   1.25GB
openshift/origin                   v3.9.0   b5f178918ae9   2 weeks ago   1.25GB
openshift/origin-pod               v3.9.0   1b36bf755484   2 weeks ago   217MB

but the image-streams are not pulled.
Nonetheless, when I pull the image manually (docker pull
openshift/jenkins-2-centos7) it works.
So why is the pull not working from inside OpenShift?

regards 
Marc 






You can update the image streams to change the registry. 

You can also set a proxy for the master, which is the process doing the 
imports and which presumably needs the proxy configured, by passing these 
args to oc cluster up: 

  --http-proxy='': HTTP proxy to use for master and builds
  --https-proxy='': HTTPS proxy to use for master and builds


I believe that should enable your existing imagestreams (not the ones 
pointing to the proxy url) to import. 





best regards 
Marc 



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users