Found this and posted my workaround while waiting for the fix to be
released:
https://github.com/kubernetes/kubernetes/issues/14642#issuecomment-220561817
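In short, the workaround forces a detach/re-attach cycle so the node's view
of the disk matches what GCE reports. A rough sketch of that kind of manual
recovery (the zone is a placeholder, and the exact steps in the linked
comment may differ):

    # Check which instance GCE believes the disk is attached to
    gcloud compute disks describe test-mongo-disk-test --zone <zone>

    # Force-detach the disk from the stuck node so it can be attached cleanly
    gcloud compute instances detach-disk gke-test-cluster-default-pool-a7afbbdb-i0sb \
        --disk test-mongo-disk-test --zone <zone>

    # Delete the stuck pod; the replication controller recreates it,
    # which re-triggers the attach and mount
    kubectl delete pod mongo-controller-1g2xh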
On Friday, May 20, 2016 at 11:07:08 UTC+2, Lukas Sägesser wrote:
>
> A MongoDB pod that had been running for weeks suddenly restarted, and now
> its data disk can't be mounted anymore.
>
> This error repeats without end:
>
>     FirstSeen:  2016-05-20 10:51:49 +0200 CEST
>     LastSeen:   2016-05-20 10:53:31 +0200 CEST
>     Count:      2
>     Name/Kind:  mongo-controller-1g2xh (Pod)
>     Type:       Warning
>     Reason:     FailedMount
>     Source:     {kubelet gke-test-cluster-default-pool-a7afbbdb-i0sb}
>     Message:    Unable to mount volumes for pod
>                 "mongo-controller-1g2xh_default(9838e85c-1e67-11e6-8ecf-42010af000c0)":
>                 Could not attach GCE PD "test-mongo-disk-test". Timeout
>                 waiting for mount paths to be created.
>
> When I SSH into the node, "mount | grep mongo" returns nothing.
> But the GCE Console in my browser lists the disk as attached to that
> node.
>
> How does this happen? Why doesn't the node recover on its own?
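
For anyone else hitting this, the mismatch can be confirmed from both
sides. A quick sketch (names taken from the event above; the zone is a
placeholder, and on GCE a PD normally shows up on the node as
/dev/disk/by-id/google-<disk-name>):

    # On the node: does the kernel see the device, and is anything mounted?
    ls -l /dev/disk/by-id/ | grep test-mongo-disk-test
    mount | grep mongo

    # From a workstation: which instance does GCE list under "users"?
    gcloud compute disks describe test-mongo-disk-test --zone <zone>

If the console reports the disk as attached but the device node never
appears on the VM, the node and the GCE control plane have diverged, which
matches the attach timeout in the event above.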