Lord Sporkton wrote at about 13:46:25 -0700 on Friday, April 26, 2013:
> As mentioned we have multiple customers and departments. It's not just one
> server. Also 50g data bases aren't the largest. We have ones upto 200g
> individual dbs. Also we're using scsi drives which cost a pretty penny.
I don't think BackupPC is the right solution for backing up regularly changing files (like databases) that are 200GB. First, you will likely get very little pooling at the file level, since even a 1-bit change in the DB will require a new pool entry. Second, you will "waste" significant IO and computation resources writing the DB to a flat file, (optionally) compressing it to the cpool, and then one or more times either comparing it to an existing entry and/or computing checksums.

If anything, you want to be doing block-level pooling -- either at the filesystem layer (e.g., ZFS) or by dividing each database manually into smaller, less frequently changing chunks.

If you are indeed talking about files in the 50-200GB range, you are not going to fit more than a handful of files per TB disk... even if you have a RAID array of multiple disks, you are still probably talking about only a small number of files. So you are probably better off writing a simple script that just backs up those few DB files and rotates them if you want to retain several older copies.
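For what it's worth, here is a minimal sketch of the kind of backup-and-rotate script I mean. The file list, destination directory, and retention count (DB_FILES, BACKUP_ROOT, KEEP) are placeholders for illustration -- adjust them for your environment:

    #!/usr/bin/env python3
    """Copy a few large DB files to a dated directory and prune old copies.

    Paths and retention count below are assumptions -- change as needed.
    """
    import shutil
    import sys
    from datetime import datetime
    from pathlib import Path

    DB_FILES = [Path("/var/lib/db/customers.db")]   # files to back up (placeholder)
    BACKUP_ROOT = Path("/backup/db")                # destination (placeholder)
    KEEP = 7                                        # number of old copies to retain

    def main():
        # Each run lands in its own timestamped directory, e.g. /backup/db/20130426-1346
        dest = BACKUP_ROOT / datetime.now().strftime("%Y%m%d-%H%M")
        dest.mkdir(parents=True, exist_ok=True)
        for f in DB_FILES:
            shutil.copy2(f, dest / f.name)          # copy2 preserves mtime/permissions

        # Rotate: drop the oldest run directories beyond the retention count
        runs = sorted(d for d in BACKUP_ROOT.iterdir() if d.is_dir())
        for old in runs[:-KEEP]:
            shutil.rmtree(old)

    if __name__ == "__main__":
        sys.exit(main())

Run it from cron right after your regular DB dump; copying the files directly like this avoids pushing hundreds of GB through BackupPC's pool and checksum machinery on every backup.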
