Thank you both for the feedback. Wolfgang, I was actually using an incarnation 
of your script the other day :) Thanks for that. I'll probably back up all my 
volumes the way Wolfgang does (because it's so simple), but also back up from 
within the VM on the VMs where I have databases, to have a backup for the 
backup. 


Dave Spano 


----- Original Message -----

From: "Wolfgang Hennerbichler" <[email protected]> 
To: "Sebastien Han" <[email protected]> 
Cc: "Dave Spano" <[email protected]>, [email protected] 
Sent: Tuesday, April 9, 2013 5:00:41 AM 
Subject: Re: [ceph-users] Question about Backing Up RBD Volumes in Openstack 

On Tue, Apr 09, 2013 at 10:09:11AM +0200, Sebastien Han wrote: 
> So the memory is _not_ saved, only the disk is. Note that it's always hard to 
> make a consistent snapshot. I assume that freezing the filesystem itself is the 
> only way to get a consistent snapshot, and even then this doesn't mean that 
> your application's data are consistent, in terms of commits for instance with 
> a database. I don't know which applications you run, but in the end you 
> might consider stopping some daemons as well, freezing the fs (to be sure), taking 
> your snapshot, unfreezing, and starting your daemons again. You could reach 100% 
> consistency with this method… I guess :) 

I believe in journaling filesystems and their ability to recover from 'crashes' 
like these. So what I do is just take a snapshot of a running VM. If you 
re-mount that snapshot (e.g. in an rbd child) the journal replay ensures a 
consistent filesystem. Database systems are designed to recover from power 
loss, and this is no different from a power loss. And pg_dump or mysql_dumps 
(or whatever_dump) are a good habit anyway, if the database is small enough. 
I've written up how I do Ceph backups here: 
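
A minimal sketch of that approach, under assumed names: snapshot the running VM's image, clone the snapshot, then map and mount the clone so the journal replays, just as it would after a power loss. Cloning requires an RBD format 2 image and a protected snapshot; pool and image names here are examples.

```shell
#!/bin/sh
# Snapshot a running VM's RBD image, then mount a clone of the snapshot
# to let the filesystem journal replay before backing up the files.
set -e

POOL=rbd
IMAGE=vm-disk
SNAP=nightly-$(date +%Y%m%d)

rbd snap create "$POOL/$IMAGE@$SNAP"
rbd snap protect "$POOL/$IMAGE@$SNAP"     # clones require a protected snapshot
rbd clone "$POOL/$IMAGE@$SNAP" "$POOL/${IMAGE}-verify"

DEV=$(rbd map "$POOL/${IMAGE}-verify")    # map the clone, never the live parent
mount "$DEV" /mnt                         # journal replay happens on mount
# ... read / verify / copy files out of /mnt here ...
umount /mnt
rbd unmap "$DEV"

rbd rm "$POOL/${IMAGE}-verify"            # clean up the throwaway clone
rbd snap unprotect "$POOL/$IMAGE@$SNAP"
```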
I've written up how I do ceph backups here: 

http://www.wogri.at/Ceph-VM-Backup.339.0.html 

Be aware that, for now, disabling rbd_cache in libvirt is safer in terms of 
stability than enabling it (patches are out there but not in the OS packages 
yet). 
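
In the libvirt domain XML that comes down to the per-disk cache mode, roughly like the fragment below (pool/image and device names are examples; with an RBD-backed disk, qemu maps cache='none' to the RBD cache being off):

```
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='rbd/vm-disk'/>
  <target dev='vda' bus='virtio'/>
</disk>
```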

> Cheers. 

my 2c 
Wolfgang 

-- 
http://www.wogri.com 

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
