On 10/08/2014 11:00 AM, Aegeaner wrote:
> 
> Hi all!
> 
> For production use, I want to run two Ceph clusters at the same time.
> One is the master cluster and the other is the replication cluster,
> which syncs RBD snapshots with the master cluster at a fixed time (e.g.
> every day), in the way this article describes:
> http://ceph.com/dev-notes/incremental-snapshots-with-rbd/ . In case the
> master cluster is down (I mean, there is some problem with Ceph so that
> the whole cluster is down), I can switch from the master cluster to the
> slave cluster.
> 

Ok, but there will be a sync gap between the master and slave cluster,
since the RBD replication does not happen in real time. You will
lose some data if the master cluster 'burns down'.
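The replication in that article boils down to shipping snapshot deltas
with 'rbd export-diff' / 'rbd import-diff'. A minimal sketch; the pool
'rbd', image 'vm-disk', snapshot names and the 'backup' cluster
config/keyring name are all examples, not your actual names:

```shell
# Day 1: full export of the first snapshot to the backup cluster.
rbd snap create rbd/vm-disk@snap1
rbd export-diff rbd/vm-disk@snap1 - | \
    rbd --cluster backup import-diff - rbd/vm-disk

# Day 2 and onward: take a new snapshot and ship only the delta.
rbd snap create rbd/vm-disk@snap2
rbd export-diff --from-snap snap1 rbd/vm-disk@snap2 - | \
    rbd --cluster backup import-diff - rbd/vm-disk
```

Anything written to the image after the last shipped snapshot is what
you lose when the master goes away.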

> Now the question is: if the master cluster is down, and I have backed
> up all the metadata beforehand (the monitor map, the OSD map, the PG
> map and the CRUSH map), how can I restore the master Ceph cluster from
> these cluster maps? Is there a tool or a defined procedure to do it?
> 

So, what exactly do you mean by 'down'? Down due to what?

In theory it is probably possible to bring a cluster back to life after
it has become corrupted, but on a large deployment there will be a lot
of PGMap and OSDMap changes in a very short period of time.

You will *never* get a consistent snapshot of the whole cluster at a
specific point in time.
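For what it's worth, the individual maps can be dumped with commands
like these (the backup paths are examples). Each dump is only valid for
the instant it was taken, which is exactly the consistency problem:

```shell
# Dump point-in-time copies of the cluster maps, e.g. from cron.
ceph mon getmap -o /backup/monmap
ceph osd getmap -o /backup/osdmap
ceph osd getcrushmap -o /backup/crushmap
ceph pg dump > /backup/pgmap.txt
```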

But the question still stands: explain 'down'. What does it mean in
your case?

You could lose all your monitors at the same time. They can probably be
fixed with a backup of those maps, but I think it comes down to calling
Sage and pulling out your credit card.
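If only the monitor stores are gone, the rough direction is to rebuild
a monitor and inject a backed-up monmap into its store. A hedged
sketch; the mon ID 'a' and the backup path are assumptions:

```shell
# Stop the monitor, inject the saved monmap into its store, restart.
# 'a' is an example mon ID; /backup/monmap is an assumed backup path.
service ceph stop mon.a
ceph-mon -i a --inject-monmap /backup/monmap
service ceph start mon.a
```

But that only covers the monitors; it does not bring back corrupted
OSD data.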

> Thanks!
> 
> ===============
> Aegeaner
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


-- 
Wido den Hollander
Ceph consultant and trainer
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
