mer.cz>:
> Is it possible that something else was mounted there?
> Or is it possible nothing was mounted there?
> That would explain such behaviour...
>
> Jan
>
> On 31 Aug 2015, at 17:07, Евгений Д. <ineu.m...@gmail.com> wrote:
>
> No, it really was in the cluster. Before
like something was broken on reboot,
probably during container start, so it's not really related to Ceph. I'll
go with OSD recreation.
Thank you.
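For reference, the usual remove-and-recreate sequence for a dead OSD looks roughly like this. This is a sketch only: the OSD id and device path are placeholders, and the exact steps may differ depending on how the Deis containers deploy Ceph.

```shell
# Placeholder id of the OSD being recreated -- adjust for your cluster.
OSD_ID=1

# Remove the dead OSD from the cluster maps.
ceph osd out "$OSD_ID"
ceph osd crush remove "osd.$OSD_ID"
ceph auth del "osd.$OSD_ID"
ceph osd rm "$OSD_ID"

# Recreate it on the (placeholder) disk; ceph-disk was the standard
# provisioning tool at the time of this thread.
ceph-disk prepare /dev/sdX
ceph-disk activate /dev/sdX1
```

Once the new OSD starts and is marked up/in, backfill onto it begins automatically.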
2015-08-31 11:50 GMT+03:00 Gregory Farnum <gfar...@redhat.com>:
> On Sat, Aug 29, 2015 at 3:32 PM, Евгений Д. <ineu.m...@gmail.com> wrote:
I'm running a 3-node cluster with Ceph (it's a Deis cluster, so the Ceph
daemons are containerized). There are 3 OSDs and 3 mons. After rebooting all
nodes one by one, all monitors are up, but only two of the three OSDs are up.
The 'down' OSD is actually running but is never marked up/in.
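A quick way to see which OSD the monitors consider down is `ceph osd tree`. As an illustration (with made-up sample output, since the real cluster isn't at hand), the down OSD can be picked out like this:

```shell
# Made-up sample of `ceph osd tree` output, for illustration only;
# on a live cluster you would pipe `ceph osd tree` directly.
cat <<'EOF' > /tmp/osd_tree.txt
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT
-1 3.00000 root default
 0 1.00000     osd.0       up      1.00000
 1 1.00000     osd.1       down    0
 2 1.00000     osd.2       up      1.00000
EOF

# Print the name of every OSD the mons see as down.
awk '$4 == "down" {print $3}' /tmp/osd_tree.txt
```

Note that being marked "up" is driven by heartbeats: if the OSD process is running but never marked up, it usually cannot reach the mons or its peers, and `ceph osd in <id>` only helps when the OSD is already up but marked out.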
All three mons are
But this cycle - malloc failure, ENOMEM/OOM killer, corrupted journal,
attempted recovery, ENOMEM/OOM killer again - looks like a bug.
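To confirm that the loop really is OOM-killer driven, the kernel log and the tail of the OSD log are the places to look. A sketch; the log path is the stock default and the osd id is a placeholder, so containerized setups may log elsewhere:

```shell
# Check whether the kernel OOM killer has been killing the OSD process.
dmesg | grep -i -E 'out of memory|killed process'

# Journal-replay crashes usually show up at the end of the OSD log
# (path and osd id are placeholders for the stock layout).
tail -n 50 /var/log/ceph/ceph-osd.1.log
```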
2015-08-19 0:13 GMT+03:00 Евгений Д. ineu.m...@gmail.com:
Hello.
I have a small Ceph cluster running 9 OSDs, using XFS on separate disks and
dedicated partitions on system disk for journals.
After creation it worked fine for a while. Then suddenly one of the OSDs
stopped and wouldn't start, so I had to recreate it. Recovery started.
After a few days of recovery