Had a similar thing happen with an EBS volume last year. Haven't been able to
replicate it since.
It happened when a node was overloaded and couldn't report its status back;
my best guess was that the volume got mounted onto another node and some sort
of race condition wiped the contents.
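For what it's worth, the general defensive pattern here is to check whether
anything is still mounted under a pod directory before removing it. This is
just a hypothetical sketch of that idea, not kubelet's actual cleanup code;
the function name hasMountUnder and the sample paths are made up for
illustration:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// hasMountUnder reports whether any mount point listed in the given
// /proc/mounts content lies at or below dir. A cleanup routine that
// skips removal when this returns true avoids deleting files out from
// under a volume that is still attached, e.g. when an unmount races
// with orphaned-pod cleanup.
func hasMountUnder(procMounts, dir string) bool {
	sc := bufio.NewScanner(strings.NewReader(procMounts))
	for sc.Scan() {
		// /proc/mounts fields: device, mount point, fstype, options, ...
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 {
			continue
		}
		mountPoint := fields[1]
		if mountPoint == dir || strings.HasPrefix(mountPoint, dir+"/") {
			return true
		}
	}
	return false
}

func main() {
	// Hypothetical /proc/mounts line showing an rbd volume still
	// mounted inside a pod directory.
	mounts := "/dev/rbd0 /var/lib/kubelet/pods/pod-1/volumes/kubernetes.io~rbd/ceph-6704 ext4 rw 0 0\n"
	podDir := "/var/lib/kubelet/pods/pod-1"
	if hasMountUnder(mounts, podDir) {
		fmt.Println("skip removal: volume still mounted")
	} else {
		fmt.Println("safe to remove")
	}
}
```

If the "device or resource busy" error below is hit, the directory removal
fails anyway, but a pre-check like this would keep cleanup from even
attempting it while the mount is live.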

On Fri, 7 Apr 2017 at 22:38 Mateus Caruccio <[email protected]>
wrote:

> Is it possible for this line to run while a PVC is still mounted?
>
>
> https://github.com/openshift/origin/blob/7558d75e1b677c019259136a73abbd625591f5ed/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go#L2123
>
> I got an entire disk erased, with no FS/ceph corruption indications, and
> tons of the following messages:
>
> I0401 19:05:03.804564    1422 kubelet.go:2117] Failed to remove orphaned
> pod "2b46c157-16e5-11e7-9f74-000d3ac02da0" dir; err: remove
> /var/lib/docker/openshift.local.volumes/pods/2b46c157-16e5-11e7-9f74-000d3ac02da0/volumes/
> kubernetes.io~rbd/ceph-6704: device or resource busy
>
>
> Regards,
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
> _______________________________________________
> dev mailing list
> [email protected]
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>