Hi, I'm faced with growing storage demands in my department. In the near future we will need several hundred TB, mostly large files. At the moment we already have 80 TB of data which gets backed up to tape.
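For a rough sense of scale, the backup-speed numbers quoted later in this mail (45 h for 3.2 TB) can be extrapolated with a quick back-of-envelope sketch. This is only illustrative, and it assumes decimal units (1 TB = 1e12 bytes) and a constant effective rate:

```python
# Back-of-envelope extrapolation of BackupPC backup times.
# Input figures (3.2 TB in 45 h) are the ones quoted in this mail;
# decimal units (1 TB = 1e12 bytes, 1 MB = 1e6 bytes) are an assumption.
observed_tb = 3.2
observed_hours = 45.0

rate_mb_s = observed_tb * 1e12 / (observed_hours * 3600) / 1e6
print(f"effective rate: {rate_mb_s:.1f} MB/s")  # ~19.8 MB/s

for pool_tb in (50.0, 80.0):
    hours = pool_tb / observed_tb * observed_hours
    print(f"{pool_tb:.0f} TB at that rate: {hours:.0f} h (~{hours / 24:.0f} days)")
# 50 TB -> ~703 h (~29 days); 80 TB -> 1125 h (~47 days)
```

At roughly 20 MB/s effective, a single instance would need about a month for a 50 TB pool, which is why splitting across servers or instances comes up below.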
Providing the primary storage is not the big problem; my biggest concern is backing up the data. One option would be a NetApp solution with snapshots. On the other hand, that is a very expensive solution for data that will be written once and then only read again. In short: it should be a cheap solution, but the data must still be backed up. And it would be nice if we could abandon tape backups...

My idea is to use some big RAID 6 arrays for the primary data and create LUNs in slices of at most 10 TB with XFS filesystems. BackupPC would be ideal for the backup because of its pool feature (we already use BackupPC for a smaller amount of data).

Does anyone have experience with BackupPC and a pool size of >50 TB? I'm not sure how well this will work. I see that BackupPC currently needs 45 h to back up 3.2 TB of data, mostly small files. I don't like very large filesystems, but I don't see how this will scale, either with multiple BackupPC servers and smaller filesystems (more than one server will be needed anyway, but I don't want to run 20 or more servers...) or, if possible, with multiple BackupPC instances on the same server, each with its own pool filesystem.

So, is anyone using BackupPC in such an environment?

Ralf

_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/