Pretty sure this was fixed by this commit, which went into 3.9:
https://github.com/openshift/origin/commit/0727d1d31fad4b4f66eff46fe750f966fab8c28b

On Fri, Apr 20, 2018 at 12:49 PM, Tim Dudgeon <[email protected]> wrote:

> I believe I'm seeing a problem with GlusterFS volumes when you terminate
> a pod that is using a Gluster-backed PVC. This is with Origin 3.7.1. I
> did this (rough commands are sketched after the list):
>
> 1. created a new project
> 2. deployed a pod
> 3. added a volume to the pod using a Gluster-backed PVC
> 4. rsh'd to the pod and checked that the volume could be written to
> 5. deleted the project
>
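> Roughly, the commands were along these lines (the project, app and claim
> names here are illustrative, not the exact ones I used):
>
>     oc new-project gluster-test                             # step 1
>     oc new-app --docker-image=centos/httpd-24-centos7 \
>         --name=myapp                                        # step 2
>     oc set volume dc/myapp --add --type=persistentVolumeClaim \
>         --claim-name=gluster-pvc --claim-size=1Gi \
>         --claim-class=glusterfs-storage --mount-path=/data  # step 3
>     oc rsh <myapp-pod>     # step 4: then touch /data/test
>     oc delete project gluster-test                          # step 5
>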
> After step 3 the volume was working OK in the pod and was reported by
> heketi.
>
> After step 5 the PVC was no longer present and the GlusterFS volume was
> no longer seen by heketi (so far so good), but the pod was stuck in the
> 'Terminating' state and the project did not get deleted. The container
> that had been running in the pod appears to have been removed. Even
> after an hour the pod was still stuck in the Terminating state.
>
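> The stuck state looked something like this (again, names illustrative):
>
>     oc get pods -n gluster-test      # pod stuck in Terminating
>     oc get project gluster-test      # project also stuck in Terminating
>     oc get pod <myapp-pod> -n gluster-test -o yaml \
>         | grep deletionTimestamp     # deletion was requested long ago
>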
> Looking deeper, the mount on the host on which the pod was running was
> still present; e.g. this entry was still found in /etc/mtab:
>
> 10.0.0.15:vol_a8866bf3769c987aee5c919305b89529 /var/lib/origin/openshift.local.volumes/pods/51a4ef9e-44b4-11e8-b523-fa163ea80da9/volumes/kubernetes.io~glusterfs/pvc-28d4eb2e-44b4-11e8-b523-fa163ea80da9 fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
>
> Manually unmounting it resulted in the pod finally terminating and
> (after a short delay) the project being deleted.
>
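> For anyone hitting the same thing, a stale mount like this can be found
> and cleared on the node with something like:
>
>     # list gluster mounts still known to the kernel
>     grep glusterfs /proc/mounts
>     # unmount the stale pod volume path reported there
>     umount /var/lib/origin/openshift.local.volumes/pods/<pod-uid>/volumes/kubernetes.io~glusterfs/<pvc-name>
>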
> It looks like the cleanup process is not quite right?
>
> Tim
>