Data lives in another container attached to the OSD container as a Docker volume.
According to `deis ps -a`, this volume was created two weeks ago, though
all files in `current` are very recent. I suspect that something removed
files in the data volume after the reboot. As the reboot was caused by CoreOS
No, it really was in the cluster; before the reboot the cluster had HEALTH_OK.
Though now I've checked the `current` directory and it doesn't contain any data:
root@staging-coreos-1:/var/lib/ceph/osd/ceph-0# ls current
commit_op_seq meta nosnap omap
while the other OSDs' directories do. It really looks
On Sat, Aug 29, 2015 at 3:32 PM, Евгений Д. wrote:
> I'm running a 3-node cluster with Ceph (it's a Deis cluster, so the Ceph
> daemons are containerized). There are 3 OSDs and 3 mons. After rebooting all
> nodes one by one, all monitors are up, but only two of the three OSDs are up.
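For reference, the usual first checks in this situation are the stock Ceph CLI status commands plus a look at the daemon on the affected node; `osd.0` below is only an assumed id for the down OSD, and the log path assumes the default Ceph layout:

```shell
# What the monitors think: overall health and per-OSD up/in state
ceph -s
ceph osd tree

# Names the down OSD and any degraded/stale PGs explicitly
ceph health detail

# On the affected node: is the containerized OSD daemon actually running,
# and what did it log around the time of the reboot?
docker ps | grep -i osd
tail -n 100 /var/log/ceph/ceph-osd.0.log
```

If `ceph osd tree` shows the OSD as down while the daemon is running, the log is usually the fastest way to see whether it failed to read its data directory or could not reach the monitors.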
I'm running a 3-node cluster with Ceph (it's a Deis cluster, so the Ceph
daemons are containerized). There are 3 OSDs and 3 mons. After rebooting all
nodes one by one, all monitors are up, but only two of the three OSDs are up.
The 'down' OSD is actually running but is never marked up/in.
All three mons are