On Fri, Feb 17, 2012 at 7:45 AM, Nathan Stratton <[email protected]> wrote:

> On Fri, 17 Feb 2012, Matty wrote:
>
>> How are folks backing up large amounts of data in their gluster file
>> systems? Replication? Snapshots and archival? As file systems grow to
>> 1PB the conventional backup to disk / tape methodology needs to
>> change. We are putting a lot of thought into this subject here.
>>
>
> At least on our end, we don't have backups.... We just make sure we have 2
> copies of the data on RAID6 hardware.
>
This is an interesting question. We are also looking at various ways of
doing backup. Replication protects against a site failure or cluster
failure, but it doesn't protect against software or hardware corruption.
Even though the likelihood of that occurring is low in a replicated
environment, for critical systems it is often still a requirement. On the
other hand, making backup copies of vast amounts of data over the network
is difficult and expensive in terms of both time and cost. I am considering
3 options:
1. Incremental backup - NetBackup (paid) or some other tool onto media
servers with direct-attached cheap storage.
2. Creating hard links and leaving them locally on each node, since our
files are immutable (see the sketch below).
3. Using queues to send a copy offsite.

Still in the process of exploring options.
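
For option 2, a minimal sketch of what I have in mind is below. The paths,
and the assumption that it runs directly on each node against a local data
directory, are hypothetical. Hard links also require the snapshot directory
to sit on the same filesystem as the data, and they only protect against
deletion or rename of the live path, not against corruption of the shared
inode.

#!/usr/bin/env python3
# Sketch of option 2: per-node hard-link "snapshots" of immutable files.
# BRICK_DIR and SNAP_ROOT are hypothetical paths, not from this thread.
import os
import time

BRICK_DIR = "/export/brick1/data"        # local data directory on this node
SNAP_ROOT = "/export/brick1/.snapshots"  # must be on the same filesystem

def snapshot(src_root, snap_root):
    """Recreate the directory tree under a dated snapshot directory and
    hard-link every regular file into it. Hard links consume no extra
    data blocks, so this is cheap when files are immutable."""
    snap_dir = os.path.join(snap_root, time.strftime("%Y%m%d-%H%M%S"))
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target_dir = os.path.normpath(os.path.join(snap_dir, rel))
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(target_dir, name)
            if os.path.isfile(src) and not os.path.islink(src):
                os.link(src, dst)  # same inode, no data copied
    return snap_dir

if __name__ == "__main__":
    print("created snapshot:", snapshot(BRICK_DIR, SNAP_ROOT))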

>
>>
> Nathan Stratton                                CTO, BlinkMind, Inc.
> nathan at robotics.net                         nathan at blinkmind.com
> http://www.robotics.net                        http://www.blinkmind.com
>
> - Ryan
>>
> _______________________________________________
> Gluster-users mailing list
> [email protected]
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
