Hi. We use Ceph RadosGW S3, and we are very happy with it :). Each administrator is responsible for their own service.
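As a rough illustration of how backups can be driven through one of the S3 clients we use (s3cmd), a sync into a date-prefixed key might look like this. This is a sketch only: the bucket name and local path are hypothetical, and it assumes s3cmd is already configured (~/.s3cfg pointing at the RadosGW endpoint):

```shell
# Sketch: hypothetical bucket and paths; assumes a working ~/.s3cfg
# for the RadosGW S3 endpoint.

# Sync a local directory into a per-day prefix, keeping file attributes
# and never deleting remote objects that vanished locally.
s3cmd sync --preserve --no-delete-removed \
    /srv/data/ \
    "s3://backup-bucket/$(date +%Y-%m-%d)/"

# Quick sanity check of what landed.
s3cmd ls "s3://backup-bucket/$(date +%Y-%m-%d)/"
```

With 3x replication across 3 datacenters on the cluster side, a per-day prefix like this gives a coarse point-in-time view without needing bucket versioning.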
We use the following S3 clients: Linux - s3cmd, duply; Windows - CloudBerry.

P.S. 500 TB of data, 3x replication, 3 datacenters.

Best regards,
Fasikhov Irek Nurgayazovich
Mob.: +79229045757

2017-02-14 12:15 GMT+03:00 Götz Reinicke <[email protected]>:
> Hi,
>
> I guess that's a question that pops up in different places, but I could
> not find any answer which fits my thoughts.
>
> Currently we are starting to use Ceph for file shares of the films produced
> by our students and some Xen/VMware VMs. The VM data is already backed up;
> the films' original footage is stored in other places.
>
> We start with some 100 TB of RBD and mount SMB/NFS shares from the clients.
> Maybe we will look into CephFS soon.
>
> The question is: how would someone handle a backup of 100 TB of data?
> Rsyncing that to another system or buying a commercial backup solution
> does not look that good, e.g. regarding the price.
>
> One thought: is there some sort of best practice in the Ceph world, e.g.
> replicating to another physically independent cluster? Or using more
> replicas, OSDs and nodes and doing snapshots in one cluster?
>
> Having production data and backup on the same hardware currently makes me
> feel not that good either... But the world changes :)
>
> Long story short: how do you back up hundreds of TB?
>
> Curious for suggestions and thoughts. Thanks and regards,
>
> Götz
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
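One concrete way to implement the "replicate to a physically independent cluster" idea from the quoted mail is incremental RBD snapshot shipping with export-diff/import-diff. A minimal sketch, with hypothetical pool, image, and snapshot names, assuming both clusters' configs and keyrings are present on one host as the "prod" and "backup" cluster names:

```shell
# Sketch: hypothetical pool/image/snapshot names; assumes
# /etc/ceph/prod.conf and /etc/ceph/backup.conf (plus keyrings) exist,
# and that the image already has a snapshot named "20170213" on both sides.

POOL=rbd
IMAGE=vm-disk-01
TODAY=$(date +%Y%m%d)

# 1. Take today's snapshot on the production cluster.
rbd --cluster prod snap create "$POOL/$IMAGE@$TODAY"

# 2. Stream only the delta since the previous snapshot and apply it
#    to the matching image on the backup cluster.
rbd --cluster prod export-diff --from-snap 20170213 \
    "$POOL/$IMAGE@$TODAY" - \
  | rbd --cluster backup import-diff - "$POOL/$IMAGE"
```

Since the deltas are small relative to a full 100 TB copy, this scales much better than rsync; from the Jewel release onward, rbd-mirror can automate essentially the same journaled replication between two clusters.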
