> aren't you increasing the exposure of your production system X2 by giving
> another backup process access to it?
Yes. And it's the right thing to do. A production failure with rapid recovery is manageably bad; having your production and your backups encrypted by ransomware is a business-ending catastrophe. I have an explanation below, but if that much already makes sense to you, you don't need to read on.

ED.

Redundant systems generally increase the likelihood of nuisance failure, but decrease the likelihood of catastrophic failure. This case is no different.

By having two separate backup servers in different locations, maybe with different admins, you expose the primary machines to double the risk: two independent methods of access. Assuming your risk was near zero, doubling it shouldn't be so bad. So yes, there's a greater risk of disruption from having multiple methods of access: x2. Also x2 the network bandwidth.

Now assume the risk of having your backup server compromised is near (but not quite) zero. With a chained setup, you are looking at a non-zero chance of everything you care about getting mangled by a malicious entity who happened to crack a single machine. That's a non-zero chance of total, business-ending failure. Having a separate backup enclave means that killing production and backups simultaneously would require two near-zero-probability hacks occurring in rapid succession: 0.0001 becomes 0.0001^2.

So the risk of simple failure, with reasonable recovery, doubles. But the probability of production and backups getting destroyed at once drops by orders of magnitude.

Other similarly over-cautious industry practices include tape backups going into cold storage, mirrored RAID sets with drives that get pulled and stored in safety deposit boxes, etc.

It may be overkill, and that's your call. I will continue to suggest it, though. Hacking and ransomware are growing problems. A single backup solution guards well against accidents and hardware failure. To guard against mischief and corruption, you want two, and you want them isolated from each other, perhaps from different vendors or using different technologies.

Thank you for reading. I am recovering from back surgery and find myself with more free time than usual. :-)

Ed, the long-winded, self-important explainer and promoter of security practices.
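P.S. A quick sketch of the arithmetic above, in Python. The 0.0001 per-machine compromise probability is purely illustrative, not a measured figure:

    # Illustrative per-machine compromise probability (not a measured figure).
    p = 0.0001

    # Chained backups: one cracked machine can reach production and every
    # backup, so total loss needs only a single compromise.
    chained_total_loss = p              # 0.0001

    # Parallel, isolated enclaves: total loss requires two independent,
    # near-simultaneous compromises.
    parallel_total_loss = p ** 2        # 0.00000001

    # The cost: two access paths into production, so roughly double the
    # chance of a nuisance failure (and double the network bandwidth).
    parallel_nuisance = 2 * p           # 0.0002

    print(chained_total_loss, parallel_total_loss, parallel_nuisance)

Twice the nuisance risk buys a ten-thousand-fold reduction in the odds of losing everything at once.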
> On 2018, Oct 14, at 12:02 PM, Mike Hughes <m...@visionary.com> wrote:
>
> Thanks for the information Ed. I figured I could leave the '-z' off the rsync command.
>
> Regarding parallel backups: I see your point about chains exposing the potential to nuke all backups, but aren't you increasing the exposure of your production system X2 by giving another backup process access to it? Just curious on your thoughts on that, since you seem to have been down this road.
>
> From: ED Fochler <soek...@liquidbinary.com>
> Sent: Sunday, October 14, 2018 10:23:13 AM
> To: General list for user discussion, questions and support
> Subject: Re: [BackupPC-users] syncing local and cloud backups
>
> I can answer the rsync compression question: no. Running gzip'd data through gzip is a waste of CPU power. Depending on your link and CPU speed, it may even slow down your ability to transfer data.
>
> As for the recovery from an rsync'd backup...
> If your /etc/BackupPC and /var/lib/BackupPC directories are already symlinks to other locations, you can easily shut down BackupPC, swap links, and start it up. So long as both systems are running the same version, it should come up cleanly.
>
> I gave up backing up the backup server, though. If you want proper redundancy, you run backups in parallel, not in a chain. If one backup server has access to the other backup server, then it has the potential (if compromised) to destroy all of your backups and originals from one location. Redundant backups should live in separate private enclaves.
>
> ED.
>
> > On 2018, Oct 13, at 8:52 PM, Mike Hughes <m...@visionary.com> wrote:
> >
> > Another related question: does it make sense to use rsync's compression when transferring the cpool? If that data is already compressed, am I gaining much by having rsync try to compress it again?
> > Thanks!
> >
> > From: Mike Hughes <m...@visionary.com>
> > Sent: Friday, October 12, 2018 8:25 AM
> > To: General list for user discussion, questions and support
> > Cc: Craig Barratt
> > Subject: Re: [BackupPC-users] syncing local and cloud backups
> >
> > Cool, thanks for the idea, Craig. So that will provide a backup of the entire cpool and the associated metadata necessary to rebuild hosts in the event of a site loss, but what would that process look like?
> >
> > Say I have the entire ‘/etc/BackupPC’ folder rsynced to an offsite disk. What would the recovery process look like? From what I’m thinking, I’d have to rsync the entire folder back to the destination site, do a fresh install of BackupPC, and associate it with this new folder. Is that about right? Would there not be a method to extract an important bit of data from the cpool without performing an entire site restore? I’m considering the situation where I have data of separate priority: one cpool might contain several TB of files along with a few important servers of higher priority. The only option looks like a full site restore after rsyncing everything back. Am I thinking of this correctly?
> >
> > From: Craig Barratt via BackupPC-users <backuppc-users@lists.sourceforge.net>
> > Sent: Thursday, October 11, 2018 20:01
> > To: General list for user discussion, questions and support <backuppc-users@lists.sourceforge.net>
> > Cc: Craig Barratt <cbarr...@users.sourceforge.net>
> > Subject: Re: [BackupPC-users] syncing local and cloud backups
> >
> > I'd recommend just using rsync if you want to make a remote copy of the cpool, pc and conf directories, to a place that BackupPC doesn't back up.
> >
> > Craig
> >
> > On Thu, Oct 11, 2018 at 10:22 AM Mike Hughes <m...@visionary.com> wrote:
> > Hi BackupPC users,
> >
> > Similar questions have come up a few times, but I have not found anything relating to running multiple pools. Here's our setup:
> > - On-prem dev servers backed up locally to BackupPC (4.x)
> > - Prod servers backed up in the cloud to a separate BackupPC (4.x) instance
> >
> > I'd like to provide disaster recovery options by syncing the dedup'd pools from on-prem to cloud and vice versa, but this would create an infinite loop. Is it possible to place the off-site data into a separate cpool which I could exclude from the sync? It would also be nice to be able to extract files from the synced pool individually, without having to pull down the whole cpool and reproduce the entire BackupPC server.
> >
> > How do others manage on-prem and off-site backup synchronization?
> > Thanks,
> > Mike
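P.P.S. For anyone who lands on this thread later: a minimal sketch of the rsync copy Craig recommends above, assuming a stock BackupPC 4.x layout (conf in /etc/BackupPC, pool under /var/lib/BackupPC). The destination "offsite:/backuppc-mirror/" is a placeholder, not a real host:

    import subprocess

    # The directories Craig names: cpool, pc and conf. Paths assume a
    # default BackupPC 4.x install; adjust if your TopDir differs.
    sources = ["/var/lib/BackupPC/cpool", "/var/lib/BackupPC/pc", "/etc/BackupPC"]
    dest = "offsite:/backuppc-mirror/"   # placeholder destination

    for src in sources:
        # -a preserves ownership, permissions and times. No -z: the cpool
        # is already compressed, so recompressing just burns CPU.
        subprocess.run(["rsync", "-a", src, dest], check=True)

Run it from the backup server on a schedule, and point it at a location that cannot reach back into production.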
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/