Carlos,
To "clean up" the PV, you need to remove the "instance data" associated
with the binding with the previous PVC. There's a handful of lines that
need to be deleted if you typed `oc edit pv/pv-xxxxx` (and then save the
object). Using the following PV as an example, delete the `claimRef` and
`status` sections of the yaml document, then save & quit. Run `oc get pv`
again it should show up as available.
```
# oc get pv pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    EXPORT_block: "\nEXPORT\n{\n\tExport_Id = 5;\n\tPath = /export/pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9;\n\tPseudo = /export/pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9;\n\tAccess_Type = RW;\n\tSquash = no_root_squash;\n\tSecType = sys;\n\tFilesystem_id = 5.5;\n\tFSAL {\n\t\tName = VFS;\n\t}\n}\n"
    Export_Id: "5"
    Project_Id: "0"
    Project_block: ""
    Provisioner_Id: d5abc261-5fb7-11e7-8769-0a580a800010
    kubernetes.io/createdby: nfs-dynamic-provisioner
    pv.kubernetes.io/provisioned-by: example.com/nfs
  creationTimestamp: 2017-07-05T07:30:36Z
  name: pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9
  resourceVersion: "60641"
  selfLink: /api/v1/persistentvolumes/pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9
  uid: d6521c6e-6153-11e7-b249-000d3a1a72a9
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: nfsdynpvc
    namespace: 3z64o
    resourceVersion: "60470"
    uid: d63a35a5-6153-11e7-b249-000d3a1a72a9
  nfs:
    path: /export/pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9
    server: 172.30.206.205
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-provisioner-3z64o
status:
  phase: Released
```
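If you'd rather not open an editor, the same cleanup can be done with `oc patch`. This is just a sketch using the example PV name above (substitute your own); in practice removing the `claimRef` is the part that matters, and the controller should update the phase on its own:
```
# Non-interactive sketch of the same cleanup (replace the PV name with yours):
# dropping the stale claimRef lets the controller mark the PV Available again.
oc patch pv pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9 --type=json \
  -p '[{"op": "remove", "path": "/spec/claimRef"}]'

# Verify the phase has moved from Released to Available.
oc get pv pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9
```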
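Once the PV shows as Available, if you want to re-bind that exact PV instead of having the provisioner create a fresh one, a new PVC can point at it explicitly via `spec.volumeName`. A rough sketch only, modelled on your jenkins-data claim; the volumeName is a placeholder, not a value from your cluster:
```
# Hypothetical PVC that pre-binds to a specific, already-Available PV.
# Replace <your-released-pv-name> with the name shown by `oc get pv`.
oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: glusterfs-retain
  volumeName: <your-released-pv-name>
EOF
```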
___________________________________________________
LOUIS P. SANTILLAN
Architect, OPENSHIFT & DEVOPS
Red Hat Consulting, <https://www.redhat.com/> Container and PaaS Practice
[email protected] M: 3236334854
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
On Mon, Oct 8, 2018 at 3:04 PM Carlos María Cornejo Crespo <[email protected]> wrote:
> Hi folks,
>
> I'm not able to manually reclaim a pv and would like to know what I'm
> doing wrong.
> My setup is OpenShift 3.9, with GlusterFS installed as part of the
> OpenShift installation.
>
> The inventory setup creates a storage class for gluster and also makes it
> the default one.
>
> As the default setup uses a reclaim policy of Delete, and I want to keep
> the pv when I delete the pvc, I created a new storage class as follows:
>
> # storage class
> apiVersion: storage.k8s.io/v1
> kind: StorageClass
> metadata:
>   annotations:
>     storageclass.kubernetes.io/is-default-class: "false"
>   name: glusterfs-retain
> parameters:
>   resturl: http://myheketi-storage-glusterfs.domainblah.com
>   restuser: admin
>   secretName: heketi-storage-admin-secret
>   secretNamespace: glusterfs
> provisioner: kubernetes.io/glusterfs
> reclaimPolicy: Retain
>
> and if I make a deployment requesting a volume via a pvc, it works well
> and the pv gets bound as expected
>
> # deployment
> - kind: DeploymentConfig
>   apiVersion: v1
>   ..
>   spec:
>     spec:
>       volumeMounts:
>       - name: "jenkins-data"
>         mountPath: "/var/lib/jenkins"
>       volumes:
>       - name: "jenkins-data"
>         persistentVolumeClaim:
>           claimName: "jenkins-data"
>
> # pvc
> - kind: PersistentVolumeClaim
>   apiVersion: v1
>   metadata:
>     name: "jenkins-data"
>   spec:
>     accessModes:
>     - ReadWriteOnce
>     resources:
>       requests:
>         storage: 30Gi
>     storageClassName: glusterfs-retain
>
> Now, if I delete the pvc and try to reclaim that pv by creating a new
> deployment that refers to it, that is when I get the unexpected behaviour.
> A new pvc is created, but that generates a new pv with the same name, and
> the original pv stays as Released and never becomes Available.
>
> How do I manually make it available? According to this
> <https://kubernetes.io/docs/concepts/storage/persistent-volumes/#retain> I
> need to manually clean up the data on the associated storage asset??? How
> am I supposed to do this if the volume has been dynamically provisioned by
> GlusterFS?? I'm pretty sure it must be much simpler than that.
>
> Any advice?
>
> Kind regards,
> Carlos M.
_______________________________________________
users mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/users