On Fri, Dec 19, 2014 at 6:19 PM, Francois Lafont <[email protected]> wrote:
>
>
> So, indeed, I have to use routing *or* maybe create 2 monitors
> by server like this:
>
> [mon.node1-public1]
>     host     = ceph-node1
>     mon addr = 10.0.1.1
>
> [mon.node1-public2]
>     host     = ceph-node1
>     mon addr = 10.0.2.1
>
> # etc...
>
> But, in this case, the working directories of mon.node1-public1
> and mon.node1-public2 will be in the same disk (I have no
> choice). Is it a problem? Are monitors big consumers of I/O disk?
>
>
Interesting idea.  While you will have an even number of monitors, you'll
still have an odd number of failure domains.  I'm not sure it will work,
though; make sure you test with the leader on each network.  It might
cause problems if the leader ends up on the 10.0.1.0/24 network.
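For that test, `ceph quorum_status` will tell you which monitor is currently
the leader.  Here's a small sketch of checking which network the leader sits
on; the JSON sample is an assumed/abridged version of that command's output,
and the monitor names and addresses are taken from your config above:

```python
import ipaddress
import json

# Assumed/abridged output in the shape of `ceph quorum_status --format json`;
# in practice you would capture the real thing, e.g. with
#   subprocess.check_output(["ceph", "quorum_status", "--format", "json"])
sample = """
{
  "quorum_leader_name": "node1-public1",
  "monmap": {
    "mons": [
      {"name": "node1-public1", "addr": "10.0.1.1:6789/0"},
      {"name": "node1-public2", "addr": "10.0.2.1:6789/0"}
    ]
  }
}
"""

def leader_on_network(status_json, network="10.0.1.0/24"):
    """Return True if the quorum leader's address falls inside `network`."""
    status = json.loads(status_json)
    leader = status["quorum_leader_name"]
    for mon in status["monmap"]["mons"]:
        if mon["name"] == leader:
            ip = mon["addr"].split(":")[0]
            return ipaddress.ip_address(ip) in ipaddress.ip_network(network)
    raise ValueError("leader %s not found in monmap" % leader)

print(leader_on_network(sample))                 # leader 10.0.1.1 is on 10.0.1.0/24
print(leader_on_network(sample, "10.0.2.0/24"))  # but not on 10.0.2.0/24
```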

Monitors can be big consumers of disk IO if there is a lot of cluster
activity.  Monitors record all of the cluster changes in LevelDB and send
copies to all of the daemons.  There have been posts to the ML from people
running out of disk IOps on the monitors, and about the problems that
causes.  The bigger the cluster, the more IOps.  As long as you monitor and
alert on your monitors' disk IOps, I don't think it would be a problem.
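If you don't already have something like iostat graphed, a rough sketch of
the IOps calculation from two /proc/diskstats snapshots taken an interval
apart (the sample lines and the device name "sda" below are illustrative
assumptions, not from your cluster):

```python
def completed_ios(diskstats_text, device):
    """Return reads + writes completed for `device` from /proc/diskstats text.

    After the major/minor/device columns, the 1st field is reads completed
    and the 5th is writes completed (see the kernel's
    Documentation/admin-guide/iostats.rst).
    """
    for line in diskstats_text.splitlines():
        fields = line.split()
        if len(fields) >= 11 and fields[2] == device:
            reads_completed = int(fields[3])
            writes_completed = int(fields[7])
            return reads_completed + writes_completed
    raise ValueError("device %s not found" % device)

# Two fake snapshots, 10 seconds apart (illustrative numbers only).
sample_t0 = "   8       0 sda 1000 50 8000 300 2000 80 16000 900 0 1200 1200"
sample_t1 = "   8       0 sda 1600 50 9000 340 3400 90 21000 1100 0 1500 1500"

interval = 10.0
iops = (completed_ios(sample_t1, "sda") - completed_ios(sample_t0, "sda")) / interval
print(iops)  # ((1600 + 3400) - (1000 + 2000)) / 10 = 200.0
```

In real monitoring you'd read /proc/diskstats for the disk holding the mon
data directory and alert when sustained IOps approach what the disk can do.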
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
