On Thu, Jan 19, 2006 at 02:28:09PM -0500, Tim Chipman wrote:

> Additionally, I suppose - based on how quick the data pool evolves, rsync
> could be used to update the (rotating disk pool raid5 data) which might
> be faster than doing (reformat and clean copy) each week. Or maybe not.
Given the huge number of files involved, I'd be really surprised if it
weren't faster to reformat and copy each time. Don't copy files, though -
copy the entire filesystem that BackupPC lives on:

  dd if=/dev/backuppcpooldev of=/dev/offsitedev bs=2M

Change this so that you're using the raw devices for your software raidset
or LVM or whatever. Block size probably doesn't matter too much as long as
it's not tiny (dd's default of 512 bytes is tiny). You'll of course have
to unmount the source and target filesystems first.

Copying the data at the file level is potentially safer, since it forces
the filesystem to be traversed, so you'll learn of filesystem-level errors
sooner rather than later.

> (Clearly it gets a bit cumbersome for a data pool that is too large
> .. lugging 20 SATA disks wouldn't be fun

Neither would lugging a similar number of tapes, which is your other
option... actually, given the size of disks, you'd end up with more tapes
than disks.

> .. but for "reasonable" data
> pool size it ?seems? like it could be feasible?)

It seems reasonable, but don't discount the possibility that those disks
will get damaged in transport; tape is much more durable. The risk might
be worth it; just understand the choices you're making.

danno

_______________________________________________
BackupPC-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/
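P.S. A runnable sketch of the block-level copy described above, using
image files as stand-ins for the real block devices (with the actual
/dev/backuppcpooldev and /dev/offsitedev devices you'd unmount both
filesystems first; all file names here are just placeholders):

```shell
# Create a 4 MB stand-in for the source device.
dd if=/dev/urandom of=pool.img bs=1M count=4 2>/dev/null

# The actual copy: raw, block-level, with a non-tiny block size.
dd if=pool.img of=offsite.img bs=2M 2>/dev/null

# Verify the copy byte-for-byte before trusting it offsite.
cmp pool.img offsite.img && echo "copy verified"
```

With real devices you'd also want conv=fsync on the second dd so the data
actually hits the platters before you pull the disks.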
