I set up the registry, as per the docs, to use a persistent volume claim
backed by GlusterFS. It seemed to work fine for a while... until I decided
to verify that it is actually persistent, so I scaled it down from 1 to 0,
and ever since I can't start it up again. I even rebooted all four of my
nodes, to no avail. The error is always the same:

Unable to mount volumes for pod
"docker-registry-4-pghgo_default(ff06c760-deb3-11e5-97b1-005056b72766)":
unsupported volume type

I installed the cluster using the Ansible playbook method.

How do I find and fix the root cause?


[root@openshift-1 openshift-ansible]# oc get pod docker-registry-4-pghgo -o yaml | grep -i vol -A3 -B2
        level: s0:c1,c0
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /registry
      name: registry-storage
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
--
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - emptyDir: {}
    name: registry-storage
  - name: v1
    persistentVolumeClaim:
      claimName: claim1
  - name: default-token-h911r
    secret:
[root@openshift-1 openshift-ansible]# oc get pvc
NAME        LABELS    STATUS    VOLUME                   CAPACITY   ACCESSMODES   AGE
claim1      <none>    Bound     gluster-default-volume   4Gi        RWX           1d
webclaim1   <none>    Bound     web1-volume              4Gi        RWX           1d
[root@openshift-1 openshift-ansible]# oc get pv
NAME                     LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM               REASON    AGE
gluster-default-volume   <none>    4Gi        RWX           Bound     default/claim1                1d
web1-volume              <none>    4Gi        RWX           Bound     default/webclaim1             1d
[root@openshift-1 openshift-ansible]#
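
One thing I notice in the pod spec above: registry-storage is still an
emptyDir, and claim1 is attached as a separate volume named v1 that no
container actually mounts. If the claim is supposed to back /registry, I
would have expected the volumes section to look roughly like this (just my
guess, reusing the names from the output above; the other volumes would stay
as they are):

```yaml
  volumes:
  # registry-storage backed by the claim instead of an emptyDir,
  # so the existing mountPath: /registry picks it up
  - name: registry-storage
    persistentVolumeClaim:
      claimName: claim1
```

If that reading is right, I assume something like
`oc volume dc/docker-registry --add --overwrite --name=registry-storage -t persistentVolumeClaim --claim-name=claim1`
would replace the emptyDir rather than add a second volume, but I may be
misreading the docs.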


Thanks a lot,
Mohamed.
_______________________________________________
users mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/users