Re: [BackupPC-users] HowTo backup __TOPDIR__?
Thomas Birnthaler wrote:
>> What is the best way to synchronize __TOPDIR__ to another location?
>> As I found in many messages, rsync isn't possible because of the
>> expensive memory usage for the hardlinks.
>
> Since version 3.0.0 (protocol 3 on both ends) rsync uses an incremental
> mode to generate and compare the file lists on both sides. Memory usage
> has therefore dropped a lot, because only a small part of the list is in
> memory at any time. But the massive hardlink usage of BackupPC makes
> copying the whole structure very slow, because link creation seems to be
> a very expensive operation on any filesystem (locks?) ...

So it seems possible to make the initial remote copy with dd and run a
daily rsync after that? On a daily basis there are (hopefully) not tons
of new hardlinks that have to be created on the remote side.

>> In my opinion dd or cp -a aren't possible either, because they would
>> copy all the data. That would consume too much time if I synchronize
>> the locations on a daily basis.
>
> Any other tool has the same time consumption if it keeps hardlinks
> (cp, for example, does that with option -l). A somewhat lazy solution
> would be to just copy the pool files (hashes as file names) with rsync
> and create a tar archive of the pc directory.

I would believe that creating the tar archive and copying it to the other
location will consume nearly the same time, space, and bandwidth as dd or
cp. Won't it?

> The time-consuming process of link creation is then deferred to the
> restore case (which may never be needed).
>
> Thomas

br
Matthias
--
Don't Panic

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008
30-Day trial. Simplify your report design, integration and deployment -
and focus on what you do best, core application coding. Discover what's
new with Crystal Reports now.
http://p.sf.net/sfu/bobj-july
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
Re: [BackupPC-users] HowTo backup __TOPDIR__?
While reading Linux Journal I came across something that may do what you
want. You can use the dd and nc commands for exact disk mirroring from
one server to another. The following commands send data from Server1 to
Server2:

  Server2# nc -l 12345 | dd of=/dev/sdb
  Server1# dd if=/dev/sda | nc server2 12345

Make sure that you issue Server2's command first so that it is listening
on port 12345 when Server1 starts sending its data. Unless you are sure
that the disk is not being modified, it is better to boot Server1 from a
RescueCD or LiveCD to do the copy.

Source:
http://www.linuxjournal.com/content/tech-tip-remote-mirroring-using-nc-and-dd

http://www.linux-geex.com

On Thu, Aug 6, 2009 at 9:19 PM, Matthias Meyer matthias.me...@gmx.li wrote:
> [...]
Re: [BackupPC-users] HowTo backup __TOPDIR__?
Matthias Meyer wrote:
> Since version 3.0.0 (protocol 3 on both ends) rsync uses an incremental
> mode to generate and compare the file lists on both sides. So memory
> usage decreased a lot, because just a small part of the list is in
> memory at any time. But the massive hardlink usage of BackupPC causes
> very slow copying of the whole structure, because link creation on any
> filesystem seems to be a very expensive task (locks?) ...
>
> So it seems possible to make the initial remote backup by dd and after
> this a daily rsync? Because on a daily basis there are (hopefully) not
> tons of new hardlinks which have to be created on the remote side.

Rsync still has to traverse all of the pool/pc trees in one pass to match
up the links by inode number, so there is some limit, relative to the RAM
on the machine, to what it can reasonably handle. But with the new
protocol 3 that limit might be fairly large.

> In my opinion dd or cp -a aren't possible either, because they would
> copy all the data. That would consume too much time if I synchronize
> the locations on a daily basis. Any other tool has the same time
> consumption if it keeps hardlinks (cp, e.g., does that with option -l).
> A somewhat lazy solution would be to just copy the pool files (hashes
> as file names) with rsync and create a tar archive of the pc directory.
>
> I would believe that creating the tar archive and copying it to the
> other location will consume nearly the same time, space and bandwidth
> as dd or cp. Won't it?

The BackupPC_tarPCCopy utility makes a tar image containing only the link
information, assuming that you have already copied the pool. Yes, it does
take a very long time to complete a restore, but as a disaster backup it
might be useful. The down side is that you'd need to both rsync the pool
and complete the BackupPC_tarPCCopy run with no changes on the master
server in between.

--
Les Mikesell
lesmikes...@gmail.com
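A sketch of how the two steps Les describes might be combined. The host
name, __TOPDIR__ path, and the assumption that BackupPC_tarPCCopy is on
the PATH are all hypothetical; adjust them for your install, and stop
BackupPC first so nothing changes between the pool copy and the link
copy:

```shell
#!/bin/sh
set -e
# Hypothetical destination host and standard-looking TOPDIR.
TOPDIR=/var/lib/backuppc
DEST=backup2

# Step 1: copy the pool and cpool first; these are plain files, so no
# hardlink bookkeeping is needed yet.
rsync -a "$TOPDIR/pool/" "$DEST:$TOPDIR/pool/"
rsync -a "$TOPDIR/cpool/" "$DEST:$TOPDIR/cpool/"

# Step 2: emit a tar stream containing only the hardlink records for
# the pc/ tree and replay it on the far side. tar needs -P because the
# link targets are absolute paths into the already-copied pool.
BackupPC_tarPCCopy "$TOPDIR/pc" | ssh "$DEST" "cd $TOPDIR && tar -xPf -"
```

The ordering matters: the link records in step 2 are only valid against
the pool state captured in step 1, which is why the master must not
change in between.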
[BackupPC-users] HowTo backup __TOPDIR__?
If I want to have disaster-resistant backups I need to keep the backups
in at least two locations. What is the best way to synchronize
__TOPDIR__ to another location?

As I found in many messages, rsync isn't possible because of the
expensive memory usage for the hardlinks. In my opinion dd or cp -a
aren't possible either, because they would copy all the data. That would
consume too much time if I synchronize the locations on a daily basis.

Thanks
Matthias
--
Don't Panic
Re: [BackupPC-users] HowTo backup __TOPDIR__?
Matthias Meyer wrote:
> If I want to have disaster-resistant backups I need to keep the backups
> in at least two locations. What is the best way to synchronize
> __TOPDIR__ to another location?
>
> As I found in many messages, rsync isn't possible because of the
> expensive memory usage for the hardlinks. In my opinion dd or cp -a
> aren't possible either, because they would copy all the data. That
> would consume too much time if I synchronize the locations on a daily
> basis.

The simple way is to run two independent copies, which also keeps you
from having a single point of failure. If that isn't practical, rotating
external disks that you image-copy locally might work.

--
Les Mikesell
lesmikes...@gmail.com
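The local image-copy idea amounts to a dd from the pool disk to the
external disk. The sketch below uses file-backed images as stand-ins so
it can be tried safely; in real use if= and of= would be block devices
(check names with lsblk before running anything, and stop BackupPC so
the filesystem is quiescent):

```shell
#!/bin/sh
set -e
# File-backed stand-ins for the pool disk and the rotating external
# disk (scratch paths; block devices in real use).
dd if=/dev/zero of=/tmp/pool.img bs=1M count=4 2>/dev/null
dd if=/tmp/pool.img of=/tmp/external.img bs=1M 2>/dev/null
# An image copy is exact, so the two files compare identical.
cmp /tmp/pool.img /tmp/external.img && echo "images identical"
```

Because the copy happens below the filesystem, hardlinks cost nothing
here; the price is copying every block, used or not.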
Re: [BackupPC-users] HowTo backup __TOPDIR__?
> What is the best way to synchronize __TOPDIR__ to another location?
> As I found in many messages, rsync isn't possible because of the
> expensive memory usage for the hardlinks.

Since version 3.0.0 (protocol 3 on both ends) rsync uses an incremental
mode to generate and compare the file lists on both sides. Memory usage
has therefore dropped a lot, because only a small part of the list is in
memory at any time. But the massive hardlink usage of BackupPC makes
copying the whole structure very slow, because link creation seems to be
a very expensive operation on any filesystem (locks?) ...

> In my opinion dd or cp -a aren't possible either, because they would
> copy all the data. That would consume too much time if I synchronize
> the locations on a daily basis.

Any other tool has the same time consumption if it keeps the hardlinks
(cp, for example, does that with option -l). A somewhat lazy solution
would be to just copy the pool files (hashes as file names) with rsync
and create a tar archive of the pc directory. The time-consuming process
of link creation is then deferred to the restore case (which may never
be needed).

Thomas

--
OSTC Open Source Training and Consulting GmbH / HRB Nuernberg 20032
tel +49 911-3474544 / fax +49 911-1806277 / http://www.ostc.de
Delsenbachweg 32 / D-90425 Nuernberg / Geschaeftsfuehrung:
Thomas Birnthaler / +49 171-3047465 / t...@ostc.de / pgp 0xFEE7EB4C
Hermann Gottschalk / +49 173-3600680 / h...@ostc.de / pgp 0x0B2D8EEA
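The cp -l behaviour mentioned above is easy to verify in a scratch
directory (paths arbitrary): the copy is made by creating additional
hardlinks, so no file data is duplicated, but every link still costs a
metadata operation, which is what makes it slow at BackupPC scale:

```shell
#!/bin/sh
set -e
# Scratch demonstration that cp -al links instead of copying data.
rm -rf /tmp/linkdemo
mkdir -p /tmp/linkdemo/src
echo "pool data" > /tmp/linkdemo/src/file
cp -al /tmp/linkdemo/src /tmp/linkdemo/dst
# Both directory entries now point at the same inode, so the file's
# hardlink count is 2.
stat -c %h /tmp/linkdemo/src/file    # prints 2
```
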