Re: [BackupPC-users] Copying the pool / filesystem migration
On 10/19/2016 09:59 PM, Michael Stowe wrote:
> On 2016-10-19 12:38, Nick Bright wrote:
>> The real problem I have is in converting the ext3 filesystem to xfs.
>
> You have not asked this question, and I apologize for offering this
> unsolicited advice: don't.
>
> I'd recommend simply moving to ext4, which doesn't have such issues --
> and this you can do by moving the entire image, then converting the
> filesystem.

I agree. Ext4 is usually fine in production.

Another alternative would be to convert the pool itself by moving to
BackupPC 4. Since it does not use hardlinks, moving the pool is trivial.
However, beyond version 4 not yet being stable, this is less feasible in
production because the conversion takes the full length of your backup
retention to complete: it is not retroactive, but uses both pools in
parallel during that time. The benefit is that there would be no gap.

Best regards,

Johan Ehnberg

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, SlashDot.org! http://sdm.link/slashdot
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
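An in-place ext3 to ext4 conversion along the lines Michael suggests can be sketched as below. The device name is hypothetical, and this is a sketch of the standard tune2fs procedure, not a tested recipe; run anything like it only on an unmounted filesystem and only with a verified backup.

```shell
# Hypothetical device holding the BackupPC pool; adjust to your layout.
DEV=/dev/sdb1

# Enable the ext4 on-disk features on the existing ext3 filesystem.
CONVERT="tune2fs -O extents,uninit_bg,dir_index $DEV"

# A forced filesystem check is required afterwards; -D also re-optimizes
# directories, which helps with BackupPC's very large pool directories.
CHECK="e2fsck -fD $DEV"

# The commands are echoed rather than executed here, since the device
# name is a placeholder.
echo "$CONVERT"
echo "$CHECK"
```

After conversion the volume is mounted with -t ext4. Existing files keep their old block mapping; only newly written files use extents.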
Re: [BackupPC-users] Copying the pool / filesystem migration
On 2016-10-19 12:38, Nick Bright wrote:
> Greetings,
>
> I'm in the process of migrating to a new BackupPC server, my old machine
> having software RAID5 on SATA; it was just getting a bit outdated and
> more than a bit starved for IOPS. The new machine (a VM, though the host
> is dedicated) is RAID10 SAS MDL (7.2k rpm) across 8 spindles on a
> P410/512MB FBWC - a far superior build for IOPS.
>
> The old machine is using ext3 on its filesystem, as it was a direct
> filesystem move from the machine before that (which was CentOS 6).
>
> So, I'm stuck with ext3 on slow hardware, trying to move to xfs on the
> new, faster hardware. Getting the data to the new machine is easy
> enough - I've done it twice already: once with an intermediary disk
> physically moved between machines, and once over the network. The
> network is just as fast as a physical disk, since the decrease in speed
> still outweighs having to copy the data twice.
>
> The real problem I have is in converting the ext3 filesystem to xfs.

You have not asked this question, and I apologize for offering this
unsolicited advice: don't.

xfs is a great filesystem, but my own experience is that a
highly-hardlinked load with lots of IOPS under a VM means that subtle
corruption can (and will) creep in, leading to having to dump your
entire pool and start over, or wonder which files you can trust. By
subtle corruption, I mean the kind of corruption that will cause the
xfs mount to panic, but that the xfs tools cannot fix, and sometimes
cannot even recognize. Your first symptom will be that certain backups
fail with mysterious issues copying files. Your mileage may vary, of
course.

I'd recommend simply moving to ext4, which doesn't have such issues --
and this you can do by moving the entire image, then converting the
filesystem.

> I've staged the copy as two different disks in the guest, one
> containing the ext3 filesystem (which I can later dispose of), and one
> containing the xfs filesystem. Using rsync -aH, the copy went to about
> 950/1200 GB, then slowed to a crawl, getting perhaps 2-4 GB per day,
> because it's in the hardlink territory of the BackupPC store.
>
> I tried using BackupPC_tarPCCopy instead of rsync, but the command
> refused to work. It reported an error about the pool root
> configuration, even though the configuration was correct. I was unable
> to resolve the error.
>
> What strategies or suggestions could the community make? At this rate,
> it's going to take another THREE MONTHS to copy the pool between
> filesystems, a time during which this server isn't making backups.
>
> The old server is, but at the end of it all I'm faced with trying to
> merge the pools (probably functionally impossible given the
> performance issues), or having a substantial gap in my backups.
> Neither option is appealing. I'm OK with a gap in backups, but I'd
> like to contain it to a week or two, not an entire quarter.
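For reference, the two-stage pool copy that BackupPC_tarPCCopy is designed for looks roughly like this. All paths are hypothetical, BackupPC must be stopped, and the commands would be run as the backuppc user; treat this as a sketch of the documented approach rather than a drop-in script.

```shell
# Hypothetical source and destination top-level directories.
OLD=/var/lib/backuppc       # existing ext3 pool
NEW=/mnt/newpool/backuppc   # target filesystem

# Stage 1: copy the pool itself. There are no cross-links into pc/ at
# this point, so rsync does not hit the hardlink wall here.
POOL_COPY="rsync -aH $OLD/cpool/ $NEW/cpool/"

# Stage 2: recreate the pc/ trees. BackupPC_tarPCCopy emits a tar stream
# whose file entries are hardlinks into the already-copied pool, which
# avoids rsync's per-inode hardlink table entirely.
PC_COPY="BackupPC_tarPCCopy $OLD/pc | (cd $NEW/pc && tar xPf -)"

# Echoed rather than executed, since the paths are placeholders.
echo "$POOL_COPY"
echo "$PC_COPY"
```

An uncompressed pool/ directory, if present, would need the same stage-1 treatment as cpool/.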
[BackupPC-users] Copying the pool / filesystem migration
Greetings,

I'm in the process of migrating to a new BackupPC server, my old machine
having software RAID5 on SATA; it was just getting a bit outdated and
more than a bit starved for IOPS. The new machine (a VM, though the host
is dedicated) is RAID10 SAS MDL (7.2k rpm) across 8 spindles on a
P410/512MB FBWC - a far superior build for IOPS.

The old machine is using ext3 on its filesystem, as it was a direct
filesystem move from the machine before that (which was CentOS 6).

So, I'm stuck with ext3 on slow hardware, trying to move to xfs on the
new, faster hardware. Getting the data to the new machine is easy enough
- I've done it twice already: once with an intermediary disk physically
moved between machines, and once over the network. The network is just
as fast as a physical disk, since the decrease in speed still outweighs
having to copy the data twice.

The real problem I have is in converting the ext3 filesystem to xfs.
I've staged the copy as two different disks in the guest, one containing
the ext3 filesystem (which I can later dispose of), and one containing
the xfs filesystem. Using rsync -aH, the copy went to about 950/1200 GB,
then slowed to a crawl, getting perhaps 2-4 GB per day, because it's in
the hardlink territory of the BackupPC store.

I tried using BackupPC_tarPCCopy instead of rsync, but the command
refused to work. It reported an error about the pool root configuration,
even though the configuration was correct. I was unable to resolve the
error.

What strategies or suggestions could the community make? At this rate,
it's going to take another THREE MONTHS to copy the pool between
filesystems, a time during which this server isn't making backups.

The old server is, but at the end of it all I'm faced with trying to
merge the pools (probably functionally impossible given the performance
issues), or having a substantial gap in my backups. Neither option is
appealing.
I'm OK with a gap in backups, but I'd like to contain it to a week or
two, not an entire quarter.

--
Nick Bright
Vice President of Technology
Valnet -=- We Connect You -=-
Tel 888-332-1616 x 315 / Fax 620-331-0789
Web http://www.valnet.net/
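One way to keep the gap to days rather than months is a two-pass copy: a long bulk pass while the old server keeps making backups, then a short final pass during a one-off downtime window that only transfers what changed. A minimal sketch, with hypothetical paths:

```shell
# Hypothetical source and destination top-level directories.
OLD=/var/lib/backuppc
NEW=/mnt/newpool/backuppc

# Pass 1: bulk copy while the old server stays live (may take days).
BULK="rsync -aH $OLD/ $NEW/"

# Pass 2: with BackupPC stopped, re-run with --delete; rsync only
# transfers files that changed since pass 1, so downtime stays short.
FINAL="rsync -aH --delete $OLD/ $NEW/"

# Echoed rather than executed, since the paths are placeholders.
echo "$BULK"
echo "$FINAL"
```

Note that pass 1 still pays rsync's hardlink bookkeeping cost; the gain is only that the expensive run happens while backups continue.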