Hi all, 

Can export-diff work effectively without the fast-diff RBD feature, given
that fast-diff is not supported by kernel RBD?

Maged 

On 2017-10-19 23:18, Oscar Segarra wrote:

> Hi Richard,  
> 
> Thanks a lot for sharing your experience... I have investigated further, and 
> it looks like export-diff is the most common tool used for backups, as you 
> suggested. 
> 
> I will run some tests with export-diff and share my experience. 
> 
> Again, thanks a lot! 
> 
> 2017-10-16 12:00 GMT+02:00 Richard Hesketh <[email protected]>:
> 
>> On 16/10/17 03:40, Alex Gorbachev wrote:
>>> On Sat, Oct 14, 2017 at 12:25 PM, Oscar Segarra <[email protected]> 
>>> wrote:
>>>> Hi,
>>>> 
>>>> In my VDI environment I have configured the suggested Ceph
>>>> design/architecture:
>>>> 
>>>> http://docs.ceph.com/docs/giant/rbd/rbd-snapshot/
>>>> 
>>>> Where I have a Base Image + Protected Snapshot + 100 clones (one for each
>>>> persistent VDI).
>>>> 
>>>> Now, I'd like to configure a backup script/mechanism to perform backups of
>>>> each persistent VDI VM to an external (non ceph) device, like NFS or
>>>> something similar...
>>>> 
>>>> Then, some questions:
>>>> 
>>>> 1.- Has anybody been able to do this kind of backup?
>>> 
>>> Yes, we have been using export-diff successfully (note this is off a
>>> snapshot and not a clone) to back up and restore ceph images to
>>> non-ceph storage.  You can use merge-diff to create "synthetic fulls"
>>> and even do some basic replication to another cluster.
>>> 
>>> http://ceph.com/geen-categorie/incremental-snapshots-with-rbd/
>>> 
>>> http://docs.ceph.com/docs/master/dev/rbd-export/
>>> 
>>> http://cephnotes.ksperis.com/blog/2014/08/12/rbd-replication
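A minimal sketch of the snapshot + export-diff + merge-diff cycle Alex describes (pool, image, and snapshot names here are illustrative, not from the thread):

```shell
# Illustrative names; adjust to your environment.
POOL=rbd
IMG=vdi-vm-01
TODAY=$(date +%Y%m%d)

# Initial full: snapshot the image and export everything up to that snapshot.
rbd snap create ${POOL}/${IMG}@base
rbd export-diff ${POOL}/${IMG}@base ${IMG}-base.diff

# Incremental: snapshot again and export only changes since the last snapshot.
rbd snap create ${POOL}/${IMG}@${TODAY}
rbd export-diff --from-snap base ${POOL}/${IMG}@${TODAY} ${IMG}-${TODAY}.diff

# Collapse the full and the incremental into one "synthetic full".
rbd merge-diff ${IMG}-base.diff ${IMG}-${TODAY}.diff ${IMG}-merged.diff
```

Restoring is then a single rbd import-diff of the merged file onto a freshly created image of the right size.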
>>> 
>>> --
>>> Alex Gorbachev
>>> Storcium
>>> 
>>>> 2.- Is it possible to export the base image in qcow2 format and the
>>>> snapshots in qcow2 format as well, preserving them as "linked clones"?
>>>> 3.- Is it possible to export the base image in raw format and the
>>>> snapshots in raw format as well, and, when recovery is required, import
>>>> both images and "relink" them?
>>>> 4.- What is the suggested solution for this scenario?
>>>> 
>>>> Thanks a lot everybody!
>> 
>> In my setup I back up complete raw disk images to file individually, because 
>> they're easier to manually inspect and recover data from in the event of 
>> catastrophic cluster failure. I haven't personally bothered trying to 
>> preserve the layering between master/clone images in backup form; that 
>> sounds like a lot of effort, and by inspection the amount of space it would 
>> actually save in my use case is minimal.
>> 
>> However, I do use export-diff to make backups efficient: a rolling snapshot 
>> on each RBD is used to export the day's diff out of the cluster, and then 
>> the ceph_apply_diff utility from https://gp2x.org/ceph/ applies that diff 
>> to the raw image file (though I patched it to accept streaming input, 
>> eliminating the need for a temporary file holding the diff). There are a 
>> handful of very large RBDs in my cluster for which exporting the full disk 
>> image takes a prohibitively long time, which made leveraging diffs 
>> necessary.
>> 
>> For a while, I instead just exported diffs and used merge-diff to munge 
>> them together into big super-diffs, and the restoration procedure was to 
>> apply the merged diff to a freshly made image in the cluster. This worked, 
>> but it is a more fiddly recovery process; importing complete disk images is 
>> easier. I don't think it's possible to create two images in the cluster and 
>> then link them into a layering relationship; you'd have to import the base 
>> image, clone it, and then import a diff onto that clone if you wanted to 
>> recreate the original layering.
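Rich's restore outline, as a sketch (image, snapshot, and path names are made up):

```shell
POOL=rbd

# 1. Import the base image and re-create a protected snapshot to clone from.
rbd import /backups/base.raw ${POOL}/base
rbd snap create ${POOL}/base@golden
rbd snap protect ${POOL}/base@golden

# 2. Clone it, then lay the per-VM diff on top of the clone.
rbd clone ${POOL}/base@golden ${POOL}/vdi-vm-01
rbd import-diff /backups/vdi-vm-01.diff ${POOL}/vdi-vm-01
```

Note that import-diff checks for the diff's start snapshot on the target, so the snapshot the diff was exported `--from-snap` has to exist on the clone for the final step to apply.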
>> 
>> Rich
>> 
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

