On Thu, Mar 16, 2017 at 1:02 PM, Adam Carheden <carhe...@ucar.edu> wrote:
> Ceph can mirror data between clusters
> (http://docs.ceph.com/docs/master/rbd/rbd-mirroring/), but can it
> mirror data between pools in the same cluster?

Unfortunately, that's a negative. The rbd-mirror daemon currently
assumes that the local and remote pool names are the same. Therefore,
you cannot mirror images between a pool named "X" and a pool named
"Y".

> My use case is DR in the event of a room failure. I have a single Ceph
> cluster that spans multiple rooms. The two rooms have separate power
> and cooling, but have a single 10GbE link between them (actually 2 w/
> active-passive failover). I can configure pools and crushmaps to keep
> data local to each room so my single link doesn't become a bottleneck.
> However, I'd like to be able to recover quickly if a room UPS fails.
>
> Ideally I'd like something like this:
>
> HA pool - spans rooms but we limit how much we put on it to avoid
> latency or saturation issues with our single 10GbE link.
> room1 pool - Writes only to OSDs in room 1
> room2 pool - Writes only to OSDs in room 2
> room1-backup pool - Asynchronous mirror of room1 pool that writes only
> to OSDs in room 2
> room2-backup pool - Asynchronous mirror of room2 pool that writes only
> to OSDs in room 1
>
> In the event of a room failure, my very important stuff migrates or
> reboots immediately in the other room without any manual steps. For
> everything else, I manually spin up new VMs (scripted, of course) that
> run from the mirrored backups.
>
> Is this possible?
>
> If I made it two separate Ceph clusters, how would I do the automated
> HA failover? I could have 3 clusters (HA, room1, room2, mirroring
> between room1 and room2), but then each cluster would be so small (2
> nodes, 3 nodes) that node failure becomes more of a risk than room
> failure.

At the current time, I think three separate clusters would be the only
setup that satisfies all of your requirements. While I have never
attempted it, you should be able to run two clusters on the same node
(e.g. the HA cluster gets one OSD per node in both rooms, and each
roomX cluster gets the remaining OSDs on the nodes in its room).
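If it helps, the two-clusters-per-node part is mostly a naming
exercise; something along these lines (untested by me, and the cluster
names "ha" and "room1" are only illustrative):

  # one config (own fsid, mons, keyrings) per cluster on the shared hosts
  /etc/ceph/ha.conf
  /etc/ceph/room1.conf
  # admin commands pick the cluster by name
  ceph --cluster ha status
  ceph --cluster room1 status
  # daemons take --cluster as well (or CLUSTER= in the init environment),
  # so a node can run one "ha" OSD alongside its "room1" OSDs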

>
> (And yes, I do have a 3rd small room with monitors running, so if one
> of the primary rooms goes down, the monitors in the remaining room +
> 3rd room still have a quorum.)
>
> Thanks
> --
> Adam Carheden



-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com