Hi Francois,
Actually, you are discussing two separate questions here. :)
1. With the 5 mons (2 in dc1, 2 in dc2, 1 in the WAN), can the monitors form a
quorum? And how do you offload the mon in the WAN?
Yes and no. In one case, where you lose either DC completely, that's
fine: the remaining 3 monitors can form a quorum.
But if ONLY the link between DC1 and DC2 is cut (i.e., the WAN
connection from both DCs to mon.5 in the WAN remains up), things get
tricky, because the monitors are split-brained: mon.1/mon.2 in DC1 and
mon.3/mon.4 in DC2 have different views of the quorum. I am not sure
whether they could reach an agreement.
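For context, a minimal sketch of the arithmetic (not Ceph code): a monitor quorum needs a strict majority of the monmap, so with 5 monitors, 3 must be able to reach each other, and mon.5 can only ever be part of one side's majority at a time.

```python
def quorum_size(num_mons: int) -> int:
    # A Ceph monitor quorum requires a strict majority of the monmap.
    return num_mons // 2 + 1

print(quorum_size(5))  # 3: any 3 of the 5 monitors that can reach each other
print(quorum_size(4))  # 3: with an even count, a 2/2 split has no quorum
```

This is also why an odd number of monitors is recommended: losing mon.5 (or splitting it off) leaves 4 monitors, which still need 3 for a quorum.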
My $0.02: try to make the VM in the WAN act as a router? Then, if the
link between the DCs is cut, the monitors could use the spare WAN link
to talk to each other. Not sure, though.
The way you propose to offload mon.5 is right. A client randomly picks
one of the monitors provided in ceph.conf (and tries another on
failure). So if mon.5 doesn't appear in ceph.conf at all, no client
will try to use it for its initial connection.
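For example, a client-side ceph.conf in dc1 could list only the local monitors (the IP addresses below are placeholders, not from this thread):

```
[global]
    # Clients in dc1 only learn about the local monitors;
    # mon.5 in the WAN is deliberately omitted.
    mon host = 10.0.1.11,10.0.1.12
```

One caveat: once connected, a client fetches the full monmap from the cluster, so this mainly steers the initial monitor pick rather than hiding mon.5 entirely.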
2. Can the cluster function well?
Well, that depends on your crush ruleset (how the replicas are placed),
the pool size (how many replicas) and min_size (the minimum number of
replicas that must be written for a write to succeed).
For example, suppose your replication rule is: 3 copies, 2 copies in
the local DC and the remaining copy in the geo site. If you set the pool's
min_size to 2, the cluster can still work, with the PGs in degraded mode.
But if you set min_size to 3, all IOs will block waiting for the replica
in the geo site.
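As a sketch, such a placement rule might look like the following in the CRUSH map (the bucket names `dc1` and `geo` are assumptions for illustration; adapt them to your own CRUSH hierarchy):

```
rule two_local_one_geo {
    ruleset 1
    type replicated
    min_size 2
    max_size 3
    # place 2 replicas on distinct hosts in the local DC ...
    step take dc1
    step chooseleaf firstn 2 type host
    step emit
    # ... and 1 replica on a host in the geo site
    step take geo
    step chooseleaf firstn 1 type host
    step emit
}
```

Combined with `ceph osd pool set <pool> min_size 2`, writes would then be acknowledged as soon as the two local copies are safe, even while the geo replica is unreachable.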
Xiaoxi.
-----Original Message-----
From: ceph-users [mailto:[email protected]] On Behalf Of
Francois Lafont
Sent: Monday, April 13, 2015 1:04 AM
To: [email protected]
Subject: [ceph-users] How to dispatch monitors in a multi-site cluster (ie in 2
datacenters)
Hi,
To summarize, my main question is: in a Ceph cluster, is it possible to have,
among the monitors, one monitor that is not very powerful and that may have
network access latency, and still avoid a negative effect on the cluster?
Let me explain the context of my question, because it's important. Let's
suppose I have 2 datacenters: dc1 and dc2. And let's suppose that we can
consider the network connection between dc1 and dc2 as a real LAN, i.e. no
latency problems between dc1 and dc2, etc. I'm thinking about how to dispatch
the monitors between dc1 and dc2. Let's suppose I have 5 monitors. I can put
2 monitors in dc1 and 3 monitors in dc2. If the connection between dc1 and
dc2 is cut, then the cluster in dc2 will continue to work well, because in
dc2 the quorum of monitors is reached, but in dc1 the cluster will be stopped
(no quorum).
Now, what happens if I do this: I put 2 monitors in dc1, 2 monitors in
dc2, and I put the 5th monitor in the WAN, for instance in a VM linked to the
cluster network by a VPN tunnel (one VPN tunnel between
mon.5 and dc1 and one VPN tunnel between mon.5 and dc2). In this case, if the
connection between dc1 and dc2 is cut (but the WAN connection and the VPN
tunnels in dc1 and dc2 are OK), in theory the cluster will continue to work in
dc1 and in dc2, because the quorum is reached on both sides (mon.1, mon.2 and
mon.5 in dc1; mon.3, mon.4 and mon.5 in dc2). Is that correct?
But in this case, how does it work? If a client in dc1 writes data to the OSDs
of dc1, the data will not be present in the OSDs of dc2. It seems to me that
that's a big problem, unless in fact the cluster does not work in the
conditions I've described...
And if mon.5 is not very efficient and has network access latency, is it a
problem for the Ceph clients? If I indicate only the IP addresses of
mon.1 and mon.2 for the clients in dc1 in the ceph.conf file, and only the IP
addresses of mon.3 and mon.4 for the clients in dc2, can I hope to avoid the
slowness that mon.5 in the WAN could generate?
Thanks in advance for your help.
--
François Lafont
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com