Re: Cleaning up not correct when using GlusterFS?

2018-04-22 Thread Tim Dudgeon

Thanks.
Upgrading from Origin 3.7.1 to 3.7.2 fixes the problem.

Tim


On 20/04/18 22:46, Seth Jennings wrote:

Associated bz https://bugzilla.redhat.com/show_bug.cgi?id=1546156

On Fri, Apr 20, 2018 at 4:45 PM, Seth Jennings wrote:


Pretty sure this was fixed in this PR that went into 3.9.

https://github.com/openshift/origin/commit/0727d1d31fad4b4f66eff46fe750f966fab8c28b





On Fri, Apr 20, 2018 at 12:49 PM, Tim Dudgeon wrote:

I believe I'm seeing a problem with using GlusterFS volumes
when you terminate a pod that is using a gluster backed PVC.
This is with Origin 3.7.1. I did this:

1. create new project
2. deployed a pod
3. added a volume to the pod using a gluster backed PVC.
4. rsh to the pod and check the volume can be written to
5. delete the project

After stage 3 the volume was working OK in the pod and the
volume was reported by heketi.

After stage 5 the PVC was no longer present and the GlusterFS
volume was no longer seen by heketi (so far so good), but the
pod was stuck in the 'Terminating' state and the project did
not get deleted. It looks like the container that was running
in the pod had been deleted. Even after one hour it was still
stuck in the Terminating state.

Looking deeper, it looks like the mount on the host on which
the pod was running was still present; e.g. this was still
found in /etc/mtab:

10.0.0.15:vol_a8866bf3769c987aee5c919305b89529
/var/lib/origin/openshift.local.volumes/pods/51a4ef9e-44b4-11e8-b523-fa163ea80da9/volumes/kubernetes.io~glusterfs/pvc-28d4eb2e-44b4-11e8-b523-fa163ea80da9
fuse.glusterfs
rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
0 0

Manually unmounting this mount resulted in the pod finally
terminating and (after a short delay) the project being deleted.

Looks like the cleanup processes are not quite correct?

Tim





Re: Cleaning up not correct when using GlusterFS?

2018-04-20 Thread Seth Jennings
Associated bz https://bugzilla.redhat.com/show_bug.cgi?id=1546156

On Fri, Apr 20, 2018 at 4:45 PM, Seth Jennings  wrote:

> Pretty sure this was fixed in this PR that went into 3.9.
> https://github.com/openshift/origin/commit/0727d1d31fad4b4f66eff46fe750f966fab8c28b
>
>
> On Fri, Apr 20, 2018 at 12:49 PM, Tim Dudgeon wrote:
>
>> I believe I'm seeing a problem with using GlusterFS volumes when you
>> terminate a pod that is using a gluster backed PVC. This is with Origin
>> 3.7.1. I did this:
>>
>> 1. create new project
>> 2. deployed a pod
>> 3. added a volume to the pod using a gluster backed PVC.
>> 4. rsh to the pod and check the volume can be written to
>> 5. delete the project
>>
>> After stage 3 the volume was working OK in the pod and the volume was
>> reported by heketi.
>>
>> After stage 5 the PVC was no longer present and the GlusterFS volume was
>> no longer seen by heketi (so far so good), but the pod was stuck in the
>> 'Terminating' state and the project did not get deleted. It looks like the
>> container that was running in the pod had been deleted. Even after one hour
>> it was still stuck in the Terminating state.
>>
>> Looking deeper, it looks like the mount on the host on which the pod was
>> running was still present; e.g. this was still found in /etc/mtab:
>>
>> 10.0.0.15:vol_a8866bf3769c987aee5c919305b89529
>> /var/lib/origin/openshift.local.volumes/pods/51a4ef9e-44b4-11e8-b523-fa163ea80da9/volumes/kubernetes.io~glusterfs/pvc-28d4eb2e-44b4-11e8-b523-fa163ea80da9
>> fuse.glusterfs
>> rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
>> 0 0
>>
>> Manually unmounting this mount resulted in the pod finally terminating
>> and (after a short delay) the project being deleted.
>>
>> Looks like the cleanup processes are not quite correct?
>>
>> Tim
>>
>>
>>


Re: Cleaning up not correct when using GlusterFS?

2018-04-20 Thread Jamie Duncan
I'm pretty sure there is a bz for this. Will look when I'm near a normal
screen.

On Fri, Apr 20, 2018, 1:51 PM Tim Dudgeon  wrote:

> I believe I'm seeing a problem with using GlusterFS volumes when you
> terminate a pod that is using a gluster backed PVC. This is with Origin
> 3.7.1. I did this:
>
> 1. create new project
> 2. deployed a pod
> 3. added a volume to the pod using a gluster backed PVC.
> 4. rsh to the pod and check the volume can be written to
> 5. delete the project
>
> After stage 3 the volume was working OK in the pod and the volume was
> reported by heketi.
>
> After stage 5 the PVC was no longer present and the GlusterFS volume was
> no longer seen by heketi (so far so good), but the pod was stuck in the
> 'Terminating' state and the project did not get deleted. It looks like
> the container that was running in the pod had been deleted. Even after
> one hour it was still stuck in the Terminating state.
>
> Looking deeper, it looks like the mount on the host on which the pod was
> running was still present; e.g. this was still found in /etc/mtab:
>
> 10.0.0.15:vol_a8866bf3769c987aee5c919305b89529
> /var/lib/origin/openshift.local.volumes/pods/51a4ef9e-44b4-11e8-b523-fa163ea80da9/volumes/kubernetes.io~glusterfs/pvc-28d4eb2e-44b4-11e8-b523-fa163ea80da9
> fuse.glusterfs
> rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
> 0 0
>
> Manually unmounting this mount resulted in the pod finally terminating
> and (after a short delay) the project being deleted.
>
> Looks like the cleanup processes are not quite correct?
>
> Tim
>
>
>


Cleaning up not correct when using GlusterFS?

2018-04-20 Thread Tim Dudgeon
I believe I'm seeing a problem with using GlusterFS volumes when you 
terminate a pod that is using a gluster backed PVC. This is with Origin 
3.7.1. I did this:


1. create new project
2. deployed a pod
3. added a volume to the pod using a gluster backed PVC.
4. rsh to the pod and check the volume can be written to
5. delete the project
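
For reference, the steps above correspond roughly to commands like these
(the project, app, image, claim and mount path names are placeholders,
and the storage class for the gluster-backed claim depends on how it is
named in your cluster):

  # step 1: new project
  oc new-project my-project
  # step 2: deploy a simple app/pod (creates a deployment config)
  oc new-app --name=my-app <some-image>
  # step 3: add a volume backed by a dynamically provisioned gluster PVC
  oc set volume dc/my-app --add --name=gluster-vol \
    --type=persistentVolumeClaim --claim-name=my-claim \
    --claim-size=1Gi --claim-class=glusterfs-storage \
    --mount-path=/data
  # step 4: check the volume can be written to
  oc rsh <pod-name>
  touch /data/test && ls -l /data
  # step 5: delete the project
  oc delete project my-project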

After stage 3 the volume was working OK in the pod and the volume was 
reported by heketi.


After stage 5 the PVC was no longer present and the GlusterFS volume was
no longer seen by heketi (so far so good), but the pod was stuck in the
'Terminating' state and the project did not get deleted. It looks like
the container that was running in the pod had been deleted. Even after
one hour it was still stuck in the Terminating state.
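
For anyone wanting to reproduce the check, something like the following
should show the same picture (the heketi server URL and credentials are
placeholders):

  oc get pods -n my-project    # pod shown as Terminating indefinitely
  oc get pvc -n my-project     # the PVC is already gone
  heketi-cli --server http://heketi.example.com:8080 \
    --user admin --secret <admin-key> volume list    # volume no longer listed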


Looking deeper, it looks like the mount on the host on which the pod was
running was still present; e.g. this was still found in /etc/mtab:


10.0.0.15:vol_a8866bf3769c987aee5c919305b89529 
/var/lib/origin/openshift.local.volumes/pods/51a4ef9e-44b4-11e8-b523-fa163ea80da9/volumes/kubernetes.io~glusterfs/pvc-28d4eb2e-44b4-11e8-b523-fa163ea80da9 
fuse.glusterfs 
rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 
0 0


Manually unmounting this mount resulted in the pod finally terminating 
and (after a short delay) the project being deleted.
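
In case it helps anyone else, the manual workaround on the affected node
was essentially the following (the pod UID and PVC name are the ones
shown in the mtab entry above):

  # on the node where the pod was scheduled
  grep glusterfs /etc/mtab
  umount /var/lib/origin/openshift.local.volumes/pods/<pod-uid>/volumes/kubernetes.io~glusterfs/<pvc-name>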


Looks like the cleanup processes are not quite correct?

Tim



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users