As a point on this use case:
* someone accidentally removed a thing, and now they need a thing back

MooseFS has an interesting feature that I think would be good
for CephFS and maybe others.

Basically a timed trash bin:
"Deleted files are retained for a configurable period of time (a file
system level "trash bin")"

It's an idea to cover this use case.
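As a rough illustration of the idea (hypothetical paths, entry naming, and retention value — this is not MooseFS or CephFS code), a timed trash bin boils down to two operations: a delete moves the file into a trash area stamped with the deletion time, and a periodic purge permanently removes entries older than the retention period:

```python
import os
import shutil
import time

TRASH_DIR = "/mnt/fs/.trash"    # hypothetical trash location
RETENTION_SECS = 7 * 24 * 3600  # hypothetical retention period: 7 days

def soft_delete(path, trash_dir=TRASH_DIR):
    """Instead of unlinking, move the file into the trash directory,
    prefixing the entry name with the deletion timestamp."""
    os.makedirs(trash_dir, exist_ok=True)
    entry = "%d_%s" % (int(time.time()), os.path.basename(path))
    dest = os.path.join(trash_dir, entry)
    shutil.move(path, dest)
    return dest

def purge_expired(trash_dir=TRASH_DIR, retention=RETENTION_SECS, now=None):
    """Permanently remove trash entries older than the retention period;
    returns the names of the purged entries."""
    now = time.time() if now is None else now
    purged = []
    for entry in os.listdir(trash_dir):
        deleted_at = int(entry.split("_", 1)[0])
        if now - deleted_at > retention:
            os.remove(os.path.join(trash_dir, entry))
            purged.append(entry)
    return purged
```

Restoring an accidentally deleted file then becomes a single move back out of the trash directory, instead of restoring a multi-TB snapshot to recover a few GB.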


On Wed, May 6, 2015 at 3:35 AM Mariusz Gronczewski <
[email protected]> wrote:

> A snapshot on the same storage cluster should definitely NOT be treated
> as a backup.
>
> A snapshot as a source for backup, however, can be a pretty good
> solution for some cases, but not every case.
>
> For example, if using Ceph to serve static web files, I'd rather have
> the possibility to restore a given file from a given path than a
> snapshot of the whole multi-TB cluster.
>
> There are 2 cases for backup restore:
>
> * something failed, need to fix it - usually full restore needed
> * someone accidentally removed a thing, and now they need a thing back
>
> Snapshots fix the first problem but not the second: restoring 7 TB of
> data to recover a few GBs is not reasonable.
>
> As it is now we just back up from inside the VMs (file-based backup) and
> use Puppet to easily recreate machine configs, but if (or rather when)
> we move to an object store, we would back it up in a way that allows for
> partial restore.
>
> On Wed, 6 May 2015 10:50:34 +0100, Nick Fisk <[email protected]> wrote:
> > For me personally I would always feel more comfortable with backups on
> > a completely different storage technology.
> >
> > Whilst there are many things you can do with snapshots and replication,
> > there is always a small risk that whatever causes data loss on your
> > primary system may affect/replicate to your 2nd copy.
> >
> > I guess it all really depends on what you are trying to protect
> > against, but Tape still looks very appealing if you want to maintain a
> > completely isolated copy of data.
> >
> > > -----Original Message-----
> > > From: ceph-users [mailto:[email protected]] On Behalf Of
> > > Alexandre DERUMIER
> > > Sent: 06 May 2015 10:10
> > > To: Götz Reinicke
> > > Cc: ceph-users
> > > Subject: Re: [ceph-users] How to backup hundreds or thousands of TB
> > >
> > > For the moment, you can use snapshots for backup:
> > >
> > > https://ceph.com/community/blog/tag/backup/
> > >
> > > I think that async mirror is on the roadmap
> > > https://wiki.ceph.com/Planning/Blueprints/Hammer/RBD%3A_Mirroring
> > >
> > >
> > >
> > > If you use qemu, you can do a qemu full backup (qemu incremental
> > > backup is coming in qemu 2.4).
> > >
> > >
> > > ----- Original message -----
> > > From: "Götz Reinicke" <[email protected]>
> > > To: "ceph-users" <[email protected]>
> > > Sent: Wednesday, May 6, 2015 10:25:01
> > > Subject: [ceph-users] How to backup hundreds or thousands of TB
> > >
> > > Hi folks,
> > >
> > > besides hardware, performance, and failover design: how do you
> > > manage to back up hundreds or thousands of TB :) ?
> > >
> > > Any suggestions? Best practices?
> > >
> > > A second Ceph cluster at a different location? "Bigger archive" disks
> > > in good boxes? Or tape libraries?
> > >
> > > What kind of backup software can handle such volumes nicely?
> > >
> > > Thanks and regards . Götz
> > > --
> > > Götz Reinicke
> > > IT-Koordinator
> > >
> > > Tel. +49 7141 969 82 420
> > > E-Mail [email protected]
> > >
> > > Filmakademie Baden-Württemberg GmbH
> > > Akademiehof 10
> > > 71638 Ludwigsburg
> > > www.filmakademie.de
> > >
> > > Registered with Amtsgericht Stuttgart, HRB 205016
> > >
> > > Chairman of the Supervisory Board: Jürgen Walter MdL, State Secretary
> > > in the Baden-Württemberg Ministry of Science, Research and the Arts
> > >
> > > Managing Director: Prof. Thomas Schadt
> > >
> > >
> > > _______________________________________________
> > > ceph-users mailing list
> > > [email protected]
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> >
> >
>
>
>
> --
> Mariusz Gronczewski, Administrator
>
> Efigence S. A.
> ul. Wołoska 9a, 02-583 Warszawa
> T: [+48] 22 380 13 13
> F: [+48] 22 380 13 14
> E: [email protected]