> I have two nodes with 8 OSDs on each. First node running 2 monitors on
> different virtual machines (mon.1 and mon.2), second node running mon.3
> After several reboots (I have tested power failure scenarios) "ceph -w" on
> node 2 always fails with message:
>
> root@bes-mon3:~# ceph --verbose -w
> Error initializing cluster client: Error
The cluster is simply protecting itself from a split brain situation.
Say you have:
mon.1 mon.2 mon.3
If mon.1 fails, no big deal: you still have 2 of 3 monitors, which is a
majority, so the cluster keeps running.
Now instead, say mon.1 is separated from mon.2 and mon.3 because of a
network partition (trunk failure, whatever). If a single monitor out of
the three could elect itself leader, your monitors could diverge:
self-elected mon.1 thinks it's the leader while mon.{2,3} have elected a
leader amongst themselves. That's why a strict majority of monitors is
required to form a quorum. In your setup, two of the three monitors are
VMs on the same physical host, so when that host goes down mon.3 is left
with only 1 of 3 monitors, which is not a majority, and clients such as
"ceph -w" are refused. The harsh reality is you really need to have
monitors on 3 distinct physical hosts to protect against the failure of
a physical host.
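The majority rule above can be sketched in a few lines of Python. This is
a hypothetical illustration, not actual Ceph code; the monitor names are
the ones from your cluster:

```python
# Hypothetical sketch of the monitor quorum rule, not actual Ceph code.
# A set of reachable monitors may form a quorum only if it is a strict
# majority of all monitors in the monitor map.

def has_quorum(reachable, monmap):
    """True if the reachable monitors are a strict majority of the monmap."""
    return len(set(reachable) & set(monmap)) > len(monmap) // 2

monmap = ["mon.1", "mon.2", "mon.3"]

# Node 1 (hosting the mon.1 and mon.2 VMs) goes down: mon.3 is alone.
print(has_quorum(["mon.3"], monmap))           # False -> clients are refused

# Only mon.1 fails: mon.2 and mon.3 still hold a majority.
print(has_quorum(["mon.2", "mon.3"], monmap))  # True
```

Note that in any partition of three monitors, at most one side can hold a
strict majority, so two sides can never both elect a leader. That is the
split-brain protection.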
--
Kyle
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com