I've looked into this a bit, and the best I've come up with is to snapshot all of the RGW pools. I asked a similar question before: http://comments.gmane.org/gmane.comp.file-systems.ceph.user/855
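For reference, pool snapshots are taken with `ceph osd pool mksnap`. The sketch below just prints the command for each pool rather than executing it (it needs a live cluster to do anything real), and the pool names are the usual RGW defaults, which is an assumption -- list your own with `rados lspools` first.

```shell
#!/bin/sh
# Dry-run sketch: print a `ceph osd pool mksnap` command for every RGW pool.
# The pool list below is the common set of RGW defaults (an assumption);
# verify against `rados lspools` before using this for real.
RGW_POOLS=".rgw .rgw.control .rgw.gc .rgw.buckets .rgw.buckets.index .users .users.uid"
SNAP_NAME="pre-restore-$(date +%Y%m%d)"

for pool in $RGW_POOLS; do
    cmd="ceph osd pool mksnap $pool $SNAP_NAME"
    echo "$cmd"   # dry run: on a real cluster, run $cmd instead of echoing it
done
```

Dropping the `echo` runs the snapshots for real; since all of the RGW pools have to be consistent with each other, you'd want to take them while the gateway is quiesced.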

I am planning to have a second cluster for disaster recovery, with some in-house geo-replication.

I haven't actually tried this yet; I've just set up my development cluster, and this is on my list of things to test. The basic idea:

 * Disable geo-replication
 * Manually snapshot the Disaster Recovery cluster (preserving its current state)
 * Roll back all of the RGW pools to the snapshot I want to restore from
 * Manually restore objects from the Disaster Recovery cluster to the
   Production Cluster, probably using s3cmd
 * Roll all of the RGW pools forward again to that most recent snapshot
 * Re-enable geo-replication
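The restore step in the middle could look something like the sketch below, assuming separate s3cmd config files for the two clusters (`~/dr.s3cfg` and `~/prod.s3cfg` are hypothetical names, as is the bucket). It's another dry run that only prints the commands, since it needs live RGW endpoints on both sides; the geo-replication toggle is whatever your in-house replication uses, so it appears as a comment only.

```shell
#!/bin/sh
# Dry-run sketch of the "restore objects via s3cmd" step. Config file names
# and the bucket are placeholders; the pool snapshot/rollback happens on the
# DR cluster beforehand and is not shown here.
BUCKET="${1:-my-bucket}"
WORKDIR="restore-tmp"

# (disable geo-replication first -- site-specific, not shown)

# Pull the objects out of the rolled-back DR cluster, then push them back
# into production. `s3cmd -c <file> sync <src> <dst>` is standard s3cmd usage.
pull="s3cmd -c $HOME/dr.s3cfg sync s3://$BUCKET/ $WORKDIR/"
push="s3cmd -c $HOME/prod.s3cfg sync $WORKDIR/ s3://$BUCKET/"
echo "$pull"   # dry run: execute these against real clusters instead
echo "$push"
```

One design note: pulling to a local working directory and pushing back, rather than cluster-to-cluster, keeps each s3cmd invocation talking to a single endpoint, at the cost of needing scratch space for the restored objects.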


I have several layers of safety above this, so this process is meant to be a last resort, used only after several layers of human and code safeguards have failed. In theory it should never happen, but we all know how that goes.


I would like to discuss how RadosGW snapshots might work, but there doesn't seem to be much interest at this time. The ability to use RadosGW snapshots is somewhat niche.




*Craig Lewis*
Senior Systems Engineer
Office +1.714.602.1309
Email [email protected] <mailto:[email protected]>

*Central Desktop. Work together in ways you never thought possible.*
Connect with us Website <http://www.centraldesktop.com/> | Twitter <http://www.twitter.com/centraldesktop> | Facebook <http://www.facebook.com/CentralDesktop> | LinkedIn <http://www.linkedin.com/groups?gid=147417> | Blog <http://cdblog.centraldesktop.com/>

On 6/20/13 07:59, Mike Bryant wrote:
Hi,
is there any way to create snapshots of individual buckets that can
be restored piecemeal?
i.e. if someone deletes objects by mistake?

Cheers
Mike


--
Mike Bryant | Systems Administrator | Ocado Technology
[email protected] | 01707 382148 | www.ocadotechnology.com


_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
