[BackupPC-users] I just tried my first restore and it failed, please help.
I found a file on the backup of my Windows laptop and tried to restore it to /root/. The error message is below. I was able to download the file through Firefox by clicking on it, but if I want to restore whole directories this will not work.

Running: /usr/bin/ssh -q -x -l root localhost /bin/gtar -x -p --numeric-owner --same-owner -v -f - -C /root
Running: //usr/share/BackupPC/bin/BackupPC_tarCreate -h nendu -n 5 -s docsandsettings -t -r /All\ Users/Documents -p / /All\ Users/Documents/FastnetOfficeUpdate.exe
Xfer PIDs are now 3476,3478
Tar exited with error 65280 () status
Restore failed: BackupPC_tarCreate failed

Thanks,
Krsnendu dasa

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your opinions on IT business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/
[BackupPC-users] help understanding backup times
Hi, I inherited a BackupPC system from a previous admin, and admit to not having used it before. I'm trying to get a handle on the timing of full and incremental backups. I've read through the docs but still don't understand the odd schedule of backup runs for my systems. Ideally I would like to see incrementals done Sun-Fri and fulls on Sat, all starting about 12:30am each day. But they seem to be all over the place, with days missed, which I don't quite understand. I'm using version 2.1.2.

Here are (what I think are) the relevant config variables I have set:

$Conf{WakeupSchedule} = [0.5];
$Conf{MaxBackups} = 2;
$Conf{MaxBackupPCNightlyJobs} = 4;
$Conf{BackupPCNightlyPeriod} = 1;
$Conf{FullPeriod} = 6.97;
$Conf{IncrPeriod} = 0.97;
$Conf{FullKeepCnt} = 4;
$Conf{FullKeepCntMin} = 8;
$Conf{FullAgeMax} = 120;
$Conf{IncrKeepCnt} = 6;
$Conf{IncrKeepCntMin} = 6;
$Conf{IncrAgeMax} = 60;
$Conf{BlackoutPeriods} = [
    {
        hourBegin => 7.0,
        hourEnd   => 19.5,
        weekDays  => [1, 2, 3, 4, 5],
    },
];

Here's a recent sample of one host's backup runs:

Backup#  Type  Filled  Start Date   Duration/mins  Age/days  Server Backup Path
811      full  yes     12/9  11:23  2.3            25.0      /backup2/pc/copper/811
816      full  yes     12/16 11:32  2.7            17.9      /backup2/pc/copper/816
822      full  yes     12/24 11:00  2.1            10.0      /backup2/pc/copper/822
823      incr  no      12/26 08:58  0.2            8.1       /backup2/pc/copper/823
824      incr  no      12/28 01:44  0.2            6.4       /backup2/pc/copper/824
825      incr  no      12/29 03:17  0.3            5.3       /backup2/pc/copper/825
826      incr  no      12/31 01:29  0.2            3.4       /backup2/pc/copper/826
827      full  yes     1/1   01:18  2.4            2.4       /backup2/pc/copper/827
828      incr  no      1/2   01:15  0.3            1.4       /backup2/pc/copper/828
829      incr  no      1/3   01:47  0.2            0.4       /backup2/pc/copper/829

Note no backup on 12/25 or 12/30. Any help with understanding what's going on, and how I can get the schedule I described above, would be much appreciated.

Thanx,
-Tony

Anthony J. Biacco
Senior Systems/Network Administrator
Decentrix Inc.
303-899-4000 x303
Re: [BackupPC-users] help understanding backup times
On 01/03 10:19 , Anthony J Biacco wrote:
> I've read through the docs but still am not understanding the odd
> schedule of backup runs that are going on for my systems. Ideally I
> would like to see incrementals done sun-fri and fulls on sat. All
> starting about 12:30am each day.

It's not like traditional tape backup, where you set up a cron job to initiate each backup at a particular time you choose. The software figures out what needs to be done, then checks to see if it's allowable. So if you haven't done a full backup for 7 days (the default), it'll initiate a full backup the next time it runs. It will check about once an hour, and see if:
- there's any job that needs to run
- there's any blackout period that would prevent the job from running

If you're using rsync for your backups (which works the best in most situations), neither full nor incremental backups transfer all the data on the machine (full backups just do more thorough checks on it). So it's not that critical to schedule the jobs for a particular time. I personally have no problem working on my workstation while a full backup is going on. Then again, I have a dual-CPU Linux machine. The Windows task scheduler is nowhere near as good, and you may be more likely to run into problems.

> But it seems they're all over the place, with days missed, which I
> don't quite understand.

If you look at the BackupPC logs, you'll probably find that the missed days were ones where other jobs took up all the available time between the blackout hours. You can alleviate this by making per-machine configurations with their own blackout times (which override the one in config.pl). So for instance it may happen that the boss doesn't arrive until 10am, and the backup of his machine takes no more than an hour. You can let his blackout period start as late as 9am (blackouts won't stop a job that's in the middle of running, just prevent a job from starting). His machine could still be backed up while others are prevented from doing so.
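A per-machine override like the one described above might look like this. This is a minimal sketch: the hostname in the file path and the hours chosen are hypothetical, and the exact location of per-PC config files depends on how your BackupPC was installed.

```perl
# Hypothetical per-host config file, e.g. pc/bossbox.pl under your
# BackupPC config directory. Variables set here override config.pl
# for this one host only.
$Conf{BlackoutPeriods} = [
    {
        hourBegin => 9.0,              # blackout starts at 9am instead of 7am,
                                       # so backups may still start until then
        hourEnd   => 19.5,             # blackout ends at 7:30pm
        weekDays  => [1, 2, 3, 4, 5],  # Monday through Friday
    },
];
```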
You may also try upping the number of backups running in parallel; but IME, $Conf{MaxBackups} = 2 is just about right for LAN backups unless you have some really high-end hardware. Any more and the disk contention kills performance.

If you insist on having backups done at a particular time on a particular day, then you can schedule them in cron, using the BackupPC_serverMesg tool:

# do incremental backups of this machine most every workday at noon
00 12 * * 1,2,3,4 backuppc /usr/share/backuppc/bin/BackupPC_serverMesg \
    backup host.example.tld host.example.tld backuppc 0
# do a full backup on fridays
00 12 * * 5 backuppc /usr/share/backuppc/bin/BackupPC_serverMesg \
    backup host.example.tld host.example.tld backuppc 1

Be warned, trying to force things to happen at particular times can easily lead to unexpected results, like too many jobs running simultaneously. Also keep in mind that this command won't absolutely force a job to go off at this time; it's more of a suggestion to the server that it do this backup now, if possible. I would advise you to do this sparingly.

It is also possible to run a backup job by hand, using the BackupPC_dump command; but that is dangerous as it can lead to race conditions with some of the housekeeping processes that BackupPC runs, so only use it for troubleshooting and debugging.

--
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com
Re: [BackupPC-users] receiving mails though backup are disabled; obscure time-out error with smb
Renke writes:
> First one is that some of our clients aren't online any more but we
> would like to keep the backups. According to the help I changed the
> per-client config of $Conf{FullPeriod} to -2. Today I got a mail that
> this machine hasn't been fully backed up for more than 7 days. I
> believed the changed configuration would suppress these messages;
> does anyone have a hint?

I don't believe the messages are suppressed. The reason is that you might use this setting while still backing up the machine via cron or manual backups. To disable the email you also need to set $Conf{EMailNotifyOldBackupDays} to a large value.

> The second one is really strange. The smb backup of our Win2000
> computers is stable, but yesterday I added the first XP machine. Now
> I get every time the error
>     Call timed out: server did not respond after 2 milliseconds opening remote file \Bin\AdvOCR\F (\Bin\AdvOCR\)
> There is no file or directory called \Bin\AdvOCR\F on this machine at
> all, but browsing the backup archive shows that BackupPC creates many
> empty directories which don't exist on the remote machine. My first
> thought was an outdated smb client, but updating didn't fix the
> problem. If I mount the smb share on the console of the backup server
> everything is all right and I can't see the directories BackupPC
> tries to save. The directories all have one-letter names (e.g. a, c,
> I, N, O, ...) - what happened?

The 2 millisecond timeout from smbclient is believed to be caused by anti-virus software that needs a long processing time when the directory or files are large. I'm not sure why the error message doesn't make sense.

Craig
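Putting the two settings Craig mentions together, the per-client config for a retired machine might look like this. This is a sketch under assumptions: the file path and the host name are hypothetical, and the specific value for EMailNotifyOldBackupDays just needs to be comfortably large.

```perl
# Hypothetical per-client file, e.g. pc/oldclient.pl under your
# BackupPC config directory, for a host that is gone but whose
# backups you want to keep.
$Conf{FullPeriod} = -2;                   # disable automatic backups entirely
$Conf{EMailNotifyOldBackupDays} = 3650;   # suppress "no recent backup" mail
                                          # for ~10 years
```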
Re: [BackupPC-users] Out of memory problem
Steve writes:
> I'm having trouble backing up localhost, and have traced it down to this:
>     Out of memory during large request for 16781312 bytes, total sbrk() is 354103296 bytes at /d/3/backup/BackupPC/lib/BackupPC/FileZIO.pm line 191.
> So... I assume that means it choked trying to compress something
> really big, but I thought ZIO had protections against that sort of
> thing, and did compression in smaller chunks or something. Is there
> some way to tweak its handling of expensive compressions, or to get
> it to note the error and still proceed to back up everything else
> instead of aborting the backup?

Yes, FileZIO does make sure it doesn't use too much memory. It is likely that most of the rest of the usage is the entire file list that rsync (and File::RsyncP) keeps in memory. You could add more memory or swap, split the backup into several pieces, or switch to tar for localhost.

Craig
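One way to sketch the "split the backup into several pieces" option is to list several smaller rsync shares for the host instead of one big one, so each run keeps a smaller file list in memory. The directory names below are only examples, not a recommendation for what to include:

```perl
# Hypothetical per-host config: back up localhost as several smaller
# rsync shares rather than a single share covering everything.
$Conf{XferMethod}     = 'rsync';
$Conf{RsyncShareName} = ['/etc', '/home', '/var'];
```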
Re: [BackupPC-users] OK, how about changing the server's backuppc process niceness?
On 1/2/07, Jason Hughes [EMAIL PROTECTED] wrote:
> Good recommendations, Holger. I would add that nicing a process only
> changes its scheduling priority, but does not in any way modify its
> hard disk activity or DMA priority, so until the original poster
> understands what exactly makes the server slow, he's shooting in the
> dark. A busy hard drive usually makes a system feel slower than a
> busy CPU process, because hard disk activity requires a 6-10ms seek
> minimum, plus streaming and unloading to vram, depending on what
> other processes are doing.

Actually, if you are using the cfq IO scheduler in recent Linux kernels, renicing a process prioritizes its IO as well. :-)

-Dave
Re: [BackupPC-users] I just tried my first restore and it failed, please help.
Krsnendu writes:
> I found a file on the backup of my windows laptop and tried to
> restore it to /root/ The error message is below. I was able to
> download the file through Firefox by clicking on the file, but if I
> want to restore whole directories this will not work.
>
> Running: /usr/bin/ssh -q -x -l root localhost /bin/gtar -x -p --numeric-owner --same-owner -v -f - -C /root
> Running: //usr/share/BackupPC/bin/BackupPC_tarCreate -h nendu -n 5 -s docsandsettings -t -r /All\ Users/Documents -p / /All\ Users/Documents/FastnetOfficeUpdate.exe
> Xfer PIDs are now 3476,3478
> Tar exited with error 65280 () status
> Restore failed: BackupPC_tarCreate failed

Does ssh to localhost work? Does /bin/gtar exist on localhost?

Craig