Re: [BackupPC-users] bug regarding error notifications $Conf{EmailNotifyMinDays}

2007-01-24 Thread Craig Barratt
Dan writes:

 I recently had a problem with backuppc's EMailNotifyMinDays
 parameter.
 
 running 2.1.2p1 (will upgrade to 2.1.3, but from skimming changelogs
 this has not been addressed).

Right - no changes in this area.

 Every 3 days or so, I would receive an email from backuppc
 complaining that backups had failed for the first alphabetically-sorted
 host name in my hosts file (i.e., in the example above, arf).
 
 As it turns out this is an oddball host, so I figured something weird
 was broken, and since it wasn't a high priority, I didn't get around
 to fixing it.
 
 Unfortunately, the $Conf{EMailNotifyMinDays} parameter prevented backuppc from
 telling me anything else (such as that all of my other hosts were failing
 backups, or that the disk was full, etc).
 
 Luckily I figured this out during routine testing and didn't miss too
 many days' worth of backups (about 2 weeks as it turned out).
 
 Setting $Conf{EMailNotifyMinDays} to 0 causes all the emails to come through.
 
 At the very least 0 should be the default and a comment should fully
 explain the ramifications.
 
 Better would be to have this never apply to system-wide administrative
 e-mails, and to have it apply to each host individually rather than
 to all hosts that a given user is responsible for.

This is strange.  The code should send admin messages without
caring about the value of $Conf{EMailNotifyMinDays}.  You should
get one every night if something needs attention.  The cache of
email sending times to users is only for per-host messages, not
admin messages.

So there must be some other reason the admin emails were not
sent or delivered.  Your address setting looks correct and you
know that per-host user messages work to that same address.

To debug, copy BackupPC_sendEmail and edit the copy to exit after it
sends the admin email, changing the logic so it always sends it.
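
For example (a rough sketch; the install path is an assumption, so
adjust it to your layout):

    # work on a copy of the notification script
    cp /usr/share/backuppc/bin/BackupPC_sendEmail /tmp/sendEmailTest
    # edit /tmp/sendEmailTest so the admin-mail branch always runs and
    # the script exits right after sending, then run it as backuppc:
    su -s /bin/sh -c /tmp/sendEmailTest backuppc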

I agree $Conf{EMailNotifyMinDays} could/should apply per host, rather
than per user.  I'll add that to the feature request list.

Craig



[BackupPC-users] how to include certain files

2007-01-24 Thread Klaas Vantournhout
Hi,  I'm posting a lot these days to the list.  It is just that I am
fine-tuning my backup server, and want to do it in a nice way.

I was wondering about the following.

In my general config.pl file I have stated that no hidden files should be
backed up, hence I have

$Conf{BackupFilesExclude} = { '*' => [ '.*', '*/.*' ] };

But in version 3.0.0beta3, the user is able to change those variables to
his liking, unless the admin disables this.

So I turn this off and all is safe.

But assume now that user A wants to add the directory .thunderbird to 
his backup.  He should be able to include this in some way.
Any idea how this could be done?

I thought that next to
$Conf{BackupFilesExclude}
there also exists a
$Conf{BackupFilesInclude}
which does the opposite, and that would make it possible if I let users
edit this variable.  Unfortunately it does not exist.
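
One untested thought: if BackupPC passes rsync's '+ ' include prefix
through from the exclude list, a per-host override like

$Conf{BackupFilesExclude} = { '*' => [ '+ /.thunderbird', '.*', '*/.*' ] };

might do it (rsync applies the first matching rule, so the include
must come before the catch-all excludes).  But maybe someone knows a
cleaner way.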

Something else I noticed after switching from version 2.1.0 to 3.0.0
is that the exclude files are no longer appended to the rsync command.

In /var/log/messages I only see the rudimentary command without the
includes and excludes, while the previous version logged them.

Thanks in advance for helping
Klaas
-- 
Several billion trillion tons of superhot
exploding hydrogen nuclei rose slowly above
the horizon and managed to look small, cold
and slightly damp.
Douglas Adams - The Hitchhiker's
Guide to the Galaxy



[BackupPC-users] rsync troubles

2007-01-24 Thread Glenn English
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I started using backuppc about 2 weeks ago (on Debian Linux with rsync
xfers), and about 1 week ago it started moving the entire contents of
the files every time. An incremental of my main server used to take 30
seconds; now it's taking 110 minutes.

Here's what I think may be the relevant config lines.

zbox:/etc/backuppc# grep '} =' *
config.pl:$ENV{'PATH'} = '/bin:/usr/bin';
config.pl:$Conf{ServerHost} = `hostname`;
config.pl:$Conf{ServerPort} = -1;
config.pl:$Conf{ServerMesgSecret} = '';
config.pl:$Conf{MyPath} = '/bin';
config.pl:$Conf{UmaskMode} = 027;
config.pl:$Conf{WakeupSchedule} = [5];
config.pl:$Conf{MaxBackups} = 4;
config.pl:$Conf{MaxUserBackups} = 4;
config.pl:$Conf{MaxPendingCmds} = 10;
config.pl:$Conf{MaxBackupPCNightlyJobs} = 2;
config.pl:$Conf{BackupPCNightlyPeriod} = 1;
config.pl:$Conf{MaxOldLogFiles} = 14;
config.pl:$Conf{DfPath} = '/bin/df';
config.pl:$Conf{DfCmd} = '$dfPath /usr/local';
config.pl:$Conf{SplitPath} = '/usr/bin/split';
config.pl:$Conf{Bzip2Path} = '/usr/bin/bzip2';
config.pl:$Conf{DfMaxUsagePct} = 95;
config.pl:$Conf{TrashCleanSleepSec} = 300;
config.pl:$Conf{DHCPAddressRanges} = [];
config.pl:$Conf{BackupPCUser} = 'backuppc';
config.pl:$Conf{BackupPCUserVerify} = 1;
config.pl:$Conf{HardLinkMax} = 31999;
config.pl:$Conf{TarShareName} = '/';
config.pl:$Conf{FullPeriod} = 6.97;
config.pl:$Conf{IncrPeriod} = 0.97;
config.pl:$Conf{FullKeepCnt} = 1;
config.pl:$Conf{FullKeepCntMin} = 1;
config.pl:$Conf{IncrKeepCnt} = 6;
config.pl:$Conf{IncrKeepCntMin} = 1;
config.pl:$Conf{PartialAgeMax} = 3;
config.pl:$Conf{IncrFill} = 0;
config.pl:$Conf{RestoreInfoKeepCnt} = 10;
config.pl:$Conf{ArchiveInfoKeepCnt} = 10;
config.pl:$Conf{BlackoutBadPingLimit} = 3;
config.pl:$Conf{BlackoutPeriods} = [
config.pl:$Conf{BackupZeroFilesIsFatal} = 1;
config.pl:$Conf{XferMethod} = 'rsync';
config.pl:$Conf{XferLogLevel} = 1;
config.pl:$Conf{TarClientCmd} = '$sshPath -q -x -n -l root $host'
config.pl:$Conf{TarFullArgs} = '$fileList+';
config.pl:$Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
config.pl:$Conf{TarClientRestoreCmd} = '$sshPath -q -x -l root $host'
config.pl:$Conf{TarClientPath} = '/bin/tar';
config.pl:$Conf{RsyncClientPath} = '/usr/bin/rsync';
config.pl:$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';
config.pl:$Conf{RsyncClientRestoreCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';
config.pl:$Conf{RsyncShareName} = '/';
config.pl:$Conf{RsyncdClientPort} = 873;
config.pl:$Conf{RsyncdUserName} = '';
config.pl:$Conf{RsyncdPasswd} = '';
config.pl:$Conf{RsyncdAuthRequired} = 1;
config.pl:$Conf{RsyncCsumCacheVerifyProb} = 0.01;
config.pl:$Conf{RsyncArgs} = [
config.pl:$Conf{RsyncRestoreArgs} = [
config.pl:$Conf{ArchiveDest} = '/tmp';
config.pl:$Conf{ArchiveComp} = 'gzip';
config.pl:$Conf{ArchivePar} = 0;
config.pl:$Conf{ArchiveSplit} = 0;
config.pl:$Conf{ArchiveClientCmd} = '$Installdir/bin/BackupPC_archiveHost'
config.pl:$Conf{SshPath} = '/usr/bin/ssh' if -x '/usr/bin/ssh';
config.pl:$Conf{PingPath} = '/bin/ping';
config.pl:$Conf{PingCmd} = '$pingPath -c 1 $host';
config.pl:$Conf{ServerInitdPath} = '/etc/init.d/backuppc';
config.pl:$Conf{ServerInitdStartCmd} = '$serverInitdPath start';
config.pl:$Conf{CompressLevel} = 0;
config.pl:$Conf{PingMaxMsec} = 20;
config.pl:$Conf{ClientTimeout} = 72000;
config.pl:$Conf{MaxOldPerPCLogFiles} = 12;
config.pl:$Conf{RestorePostUserCmd} = undef;
config.pl:$Conf{ArchivePostUserCmd} = undef;
config.pl:$Conf{ClientNameAlias} = undef;
config.pl:$Conf{SendmailPath} = '/usr/sbin/sendmail';
config.pl:$Conf{EMailNotifyMinDays} = 1;
config.pl:$Conf{EMailFromUserName} = 'backuppc';
config.pl:$Conf{EMailAdminUserName} = 'root';
config.pl:$Conf{EMailUserDestDomain} = 'localhost';
config.pl:$Conf{EMailNoBackupEverSubj} = undef;
config.pl:$Conf{EMailNoBackupEverMesg} = undef;
config.pl:$Conf{EMailNotifyOldBackupDays} = 7.0;
config.pl:$Conf{EMailNoBackupRecentSubj} = undef;
config.pl:$Conf{EMailNoBackupRecentMesg} = undef;
config.pl:$Conf{CgiAdminUserGroup} = 'backuppc';
config.pl:$Conf{CgiURL} = 'http://'.$Conf{ServerHost}.'/backuppc/index.cgi';
config.pl:$Conf{Language} = 'en';
config.pl:$Conf{CgiUserHomePageCheck} = '';
config.pl:$Conf{CgiDateFormatMMDD} = 1;
config.pl:$Conf{CgiNavBarAdminAllHosts} = 1;
config.pl:$Conf{CgiSearchBoxEnable} = 1;
config.pl:$Conf{CgiNavBarLinks} = [
config.pl:$Conf{CgiStatusHilightColor} = {
config.pl:$Conf{CgiHeaders} = '<meta http-equiv="pragma" content="no-cache">';
config.pl:$Conf{CgiImageDir} = '/usr/share/backuppc/image';
config.pl:$Conf{CgiExt2ContentType} = { };
config.pl:$Conf{CgiImageDirURL} = '/backuppc/image';
config.pl:$Conf{CgiCSSFile} = 'BackupPC_stnd.css';
localhost.pl:$Conf{XferMethod} = 'tar';
localhost.pl:$Conf{TarShareName} = ['/etc', '/root' ];
localhost.pl:$Conf{TarClientCmd} = '/usr/bin/env LC_ALL=C $tarPath -c -v -f - -C $shareName'
localhost.pl:$Conf{TarFullArgs} = '$fileList';
localhost.pl:$Conf{TarIncrArgs} =

[BackupPC-users] Archive Backup Error

2007-01-24 Thread Byron Trimble
I'm receiving this error when trying to archive my backups:

Error: /opt/backuppc/bin/BackupPC_tarCreate, compress or split failed 
Please advise!!

Thanks,

Byron J. Trimble
Technical Specialist/Systems Administrator
407-409-5207
[EMAIL PROTECTED]
Skype ID: btrimble




[BackupPC-users] Exclude not working

2007-01-24 Thread James Kyle
I figure I'm doing something wrong here, but I wish to back up
/usr/local, but not /usr/local/var/backups.


_TOPDIR_/pc/localhost/config.pl:

$Conf{RsyncShareName} = ['/usr/local'];
$Conf{BackupFilesExclude} = ['/usr/local/var/backups'];
$Conf{RsyncClientCmd} = '/usr/bin/sudo $rsyncPath $argList';
$Conf{RsyncClientRestoreCmd} = '/usr/bin/sudo $rsyncPath $argList';

localhost full backup log:

Contents of file /usr/local/var/backups/pc/localhost/XferLOG.0.z,  
modified 2007-01-24 12:03:32 (Extracting only Errors)


Running: /usr/bin/sudo /usr/bin/rsync --server --sender --numeric-ids
--perms --owner --group --devices --links --times --block-size=2048
--recursive --exclude=/usr/local/var/backups --ignore-times . /usr/local/

Xfer PIDs are now 315
Got remote protocol 28
Negotiated protocol version 26
Sent exclude: /usr/local/var/backups
Xfer PIDs are now 315,316
[ skipped 2037 lines ]
Done: 1892 files, 214682497 bytes

And yet,
Name     Type  Mode  #  Size  Date Modified
backups  dir   0755  0  544   2007-01-24 -8:00





Re: [BackupPC-users] Archive Backup Error

2007-01-24 Thread Les Mikesell
Byron Trimble wrote:
 I'm receiving this error when trying to archive my backups:

 Error: /opt/backuppc/bin/BackupPC_tarCreate, compress or split failed 
 Please advise!!

   

If your archive didn't specify compression or splitting, it means that
the backuppc user can't open the output device and you need to change
ownership or permissions.  On newer Linux versions, /dev is rebuilt on
every reboot and you have to do something to make such changes stick -
see 'man udev'.
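
For example, a rule along these lines (a sketch: the device name and
rules-file name are assumptions, and udev syntax varies by version)
would reapply ownership at every boot:

    # /etc/udev/rules.d/z60_backuppc.rules (hypothetical)
    KERNEL=="nst0", OWNER="backuppc", GROUP="backuppc", MODE="0660"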

-- 
  Les Mikesell
   [EMAIL PROTECTED]




Re: [BackupPC-users] Exclude not working

2007-01-24 Thread Jason Hughes

All of my excludes look like this:
$Conf{BackupFilesExclude} = ['/proc', '/var/named/chroot/proc', '/mnt', 
'/sys', '/media'];

$Conf{XferMethod} = 'rsyncd';
$Conf{RsyncShareName} = 'wholedrive';

They seem to work fine.  I'm using 3.0.0beta3.  Is your rsync share name 
correct?  Shouldn't your exclude path be _relative_ to that directory 
then?  My wholedrive is /, so my directories are relative.  Yours 
aren't.


JH

James Kyle wrote:
I figure I'm doing something wrong here, but I wish to backup 
/usr/local, but not /usr/local/var/backups


_TOPDIR_/pc/localhost/config.pl:

$Conf{RsyncShareName} = ['/usr/local'];
$Conf{BackupFilesExclude} = ['/usr/local/var/backups'];
$Conf{RsyncClientCmd} = '/usr/bin/sudo $rsyncPath $argList';
$Conf{RsyncClientRestoreCmd} = '/usr/bin/sudo $rsyncPath $argList';

localhost full backup log:

Contents of file /usr/local/var/backups/pc/localhost/XferLOG.0.z, 
modified 2007-01-24 12:03:32 (Extracting only Errors)


Running: /usr/bin/sudo /usr/bin/rsync --server --sender --numeric-ids --perms 
--owner --group --devices --links --times --block-size=2048 --recursive 
--exclude=/usr/local/var/backups --ignore-times . /usr/local/
Xfer PIDs are now 315
Got remote protocol 28
Negotiated protocol version 26
Sent exclude: /usr/local/var/backups
Xfer PIDs are now 315,316
[ skipped 2037 lines ]
Done: 1892 files, 214682497 bytes
  
And yet,
Name     Type  Mode  #  Size  Date Modified
backups  dir   0755  0  544   2007-01-24 -8:00



Re: [BackupPC-users] Exclude not working

2007-01-24 Thread Holger Parplies
Hi,

Jason Hughes wrote on 24.01.2007 at 15:45:16 [Re: [BackupPC-users] Exclude not 
working]:
 James Kyle wrote:
 I figure I'm doing something wrong here, but I wish to backup 
 /usr/local, but not /usr/local/var/backups
 
 _TOPDIR_/pc/localhost/config.pl:
 
 $Conf{RsyncShareName} = ['/usr/local'];
 $Conf{BackupFilesExclude} = ['/usr/local/var/backups'];
  [...]
 
 Shouldn't your exclude path be _relative_ to that directory 
 then?

correct, you should have

$Conf {BackupFilesExclude} = [ '/var/backups' ];

Note that

$Conf {BackupFilesExclude} = [ 'var/backups' ];

would work too, but it would also exclude something like
'/usr/local/some/further/directories/var/backups', should it exist.

I wrote a while back in another thread that exclude paths are anchored to
the root of the share by BackupPC. That is wrong. *Include paths* are
($Conf {BackupFilesOnly} that is), exclude paths aren't. You need to specify
a leading slash if you want to guarantee that only /usr/local/var/backups
matches. Note also that you *could* append a trailing slash ('/var/backups/')
to make /usr/local/var/backups (and its contents) be excluded *only if* it is
a directory. If you, at a later point, replaced it with a softlink to
somewhere else, the softlink (but not the directory it points to, unless that is
also below /usr/local and not excluded) would then be backed up. It probably
doesn't make much difference in this case, but it would if you, for example,
used something like

$Conf {BackupFilesExclude} = [ '/var/backups/*/' ];

(backup all files in /usr/local/var/backups but not the directories). That
is probably not a particularly good example, unless it is coincidentally
the backuppc user's home directory ... you might even be able to do fancy
stuff like

$Conf {BackupFilesExclude} = [ '+ /var/backups/.ssh/', '/var/backups/*/' ];

though that is unfortunately untested (and possibly counter-intuitive). What
I *did* test, though, is that rsync [2.6.4] matches '.ssh' with '*', though
the man page claims that shell filename matching rules are used.

I hope that was not too confusing :).

Regards,
Holger



[BackupPC-users] DLT-V4 Tape drive wit S-ATA

2007-01-24 Thread Bastian Schern
Hello everybody,

does anybody have experience yet with a DLT-V4 streamer with an S-ATA
interface, e.g. the Tandberg DLT-V4
(http://www.tandbergdata.com/support/dlt-v4), under Linux?

Cheers
Bastian







Re: [BackupPC-users] Avoiding long backup times

2007-01-24 Thread Holger Parplies
Hi,

Krsnendu dasa wrote on 21.01.2007 at 13:45:55 [Re: [BackupPC-users] Avoiding 
long backup times]:
 On 20/01/07, Holger Parplies [EMAIL PROTECTED] wrote:
   I read, that backuppc stores every identical file only one time.
 
  Basically true, but BackupPC needs to determine that the file is
  identical to something, and to what. Sparing you the transfer (if
  possible at reasonable cost) is rsync's job. Doing so comes at a cost
  in terms of CPU usage, so you've got the option of using tar if
  bandwidth is cheaper than CPU power. In your case, it obviously is not.
 
   [...] What happens with an unchanged file in a full backup?
 
  tar and smb will transfer it, rsync will not (presuming you mean
  unchanged as in same name, same content).
 
  For a changed file, rsync will try to speed up the transfer. If you append a
  few bytes to a large file, tar/smb will transfer the whole file (even on an
  incremental), while rsync will (basically) transfer some checksums and the
  few bytes only (on full and incremental backups).
 
 So...
 
 With tar and smb the file is transferred, then it is checked to see if
 it is the same as a file already in the pool. If it is already in the
 pool, a link is created and the copied file is deleted.
 
 Whereas rsync checks if the file is the same as another in the pool
 before transferring?

No.

Actually, I was thinking of *native rsync* in what I wrote. As I read the
code (or the comments, rather ;-), the same basically applies to
BackupPC/File::RsyncP, though there are two additional challenges worth
mentioning: compression of the local file and pooling.

As Les pointed out, the point of reference is always the same file in the
previous full backup, not any file in the pool that may (later) turn out to
have identical contents. This reference file may be compressed. In any case,
it's part of a previous backup, so the contents cannot be changed in place.
As I understand it, BackupPC will create a local directory tree containing
the required files (all for a full backup, only the changed ones for an
incremental), much as native rsync would with the '--copy-dest' or
'--compare-dest' options respectively (meaning, an identical file will be
copied locally, not transferred, and a changed file will be partially
transferred if possible), except that BackupPC will already compress the
local file if compression is turned on. Later on, BackupPC_link will
replace files in this directory tree with hardlinks to pool files or add
them to the pool as appropriate.
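
In native-rsync terms, the analogy is roughly this (illustrative only;
the paths are made up, and relative --copy-dest/--compare-dest
arguments resolve against the destination directory):

    # full: unchanged files get copied locally from the reference tree
    rsync -a --copy-dest=../last-full/ host:/share/ /pc/host/new-full/
    # incremental: unchanged files are skipped entirely
    rsync -a --compare-dest=../last-full/ host:/share/ /pc/host/new-incr/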

I like the idea of taking *any* identical file from the pool as reference
though. I don't know if it is possible (i.e. whether the remote rsync will
transfer one checksum covering the whole file before the local part needs to
commit to selecting one file), but it certainly would be a worthwhile speedup.

Regards,
Holger



Re: [BackupPC-users] Avoiding long backup times

2007-01-24 Thread Timothy J. Massey
[EMAIL PROTECTED] wrote on 01/24/2007 09:28:51 PM:

  I like the idea of taking *any* identical file from the pool as reference
  though. I don't know if it is possible (i.e. whether the remote rsync
  will transfer one checksum covering the whole file before the local part
  needs to commit to selecting one file), but it certainly would be a
  worthwhile speedup.

This could be a most valuable speedup for the initial backup of remote 
hosts.  Currently, your choices are to do a painfully slow (or even 
impossibly slow) first backup over a WAN link, or physically move the 
server to the remote location for the first full backup.  This is 
usually difficult, and often prevents the other backups the server has 
to do from working as well.

The way we currently avoid this is to copy all of the remote server's 
data to a removable hard drive, move the hard drive to a computer on the 
backup server's network and create a host to back it up.  Once that 
backup is complete, we change BackupPC's host configuration to point to 
the real server.
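
(That last step can be a simple per-host override; the hostname here
is made up:

    $Conf{ClientNameAlias} = 'realserver.example.com';

the seeded backups stay under the stand-in host's name while new
transfers go to the real server.)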

The other way would be to have some sort of way to transfer a host's 
pool and configuration from one server to another.  Then I could use a 
second BackupPC server to back up at the remote location, then move the 
pool from my portable BackupPC server to the remote backup server.

I would strongly prefer the second solution:  the ability to move a 
single host from one server to another would be *tremendous*.  It would 
make replacing and load balancing backup servers much easier.  When 
you're trying to build a solution that will function for long-term 
online archival storage (we're targeting 7 *years*), the ability to move 
hosts from one system to another (to divide the load on a single server, 
to migrate decommissioned hosts to secondary backup servers, to adjust 
space imbalances among servers or all kinds of other reasons) is *very* 
attractive.

But I would assume that moving just one host out of the pool is going
to be harder than migrating the entire pool to a new host, and we
already know how difficult doing *that* is...  So I'm not holding my breath.

I wonder if I'm just trying to push BackupPC beyond what it is designed 
to do?

Tim Massey



Re: [BackupPC-users] Pooling & Incrementals -- I'm confused

2007-01-24 Thread Holger Parplies
Hi,

Joe Casadonte wrote on 22.01.2007 at 10:02:59 [Re: [BackupPC-users] Pooling &
Incrementals -- I'm confused]:
 I was thinking more about this, and I guess the functional difference
 [between full and incremental backups] is in the time it takes to do the
 backup:

the time it takes to do a backup is not the function of the backup, it's an
unavoidable nuisance ;-).

 From the archive perspective, there seems not to be a difference.

That is not correct. There are at least two differences:

1.) Full backups go to more trouble to determine which files have changed and
which haven't.
tar/smb:  all files are transferred, BackupPC checks (full) vs.
  only newer files (based on timestamp) transferred (incremental)
rsync(d): block checksum comparison (full) vs.
  timestamp/size comparison (incremental)
So, you might miss changed files on an incremental backup which a full
backup will catch. In particular, incremental tar/smb won't transfer moved
(renamed) files, since their timestamps indicate they needn't be.
2.) A full backup constitutes a point of reference for future backups.
Incrementals will transfer everything since the last full backup. That
means you will generally transfer more from day to day. With rsync, a
full backup will transfer insignificantly more than an incremental would
(block checksums), but following incremental backups will restart at 0.
Thus, while a full backup will use (significantly) more resources in terms
of CPU usage and I/O bandwidth, it may save you significant amounts of
network bandwidth on subsequent incremental backups.
For a more detailed comment and a note or two on multi-level incremental
backups, see my note in the thread "Avoiding long backup times" from
20.01.2007.

Marty wrote on 21.01.2007 at 15:33:46 [Re: [BackupPC-users] Pooling &
Incrementals -- I'm confused]:
 The main effect, 
 as you describe, is that file corruption *may* be caught and corrected. 
 Unfortunately this only addresses corruption of the archive, and not in the
 host.  To the contrary, corruption in the host is compounded by full backups,
 which silently supersede the uncorrupted backups and may not be caught
 before the uncorrupted backups expire.

Well, that issue is the same for any backup scheme. If you expire old
backups, their data is lost. With tape backup schemes, you might keep older
tapes, say one full backup from each month. With BackupPC, you might use
exponential backup expiry to achieve much the same (automatically!). True,
you can't expand disk space as easily as you can buy new tapes, but as long
as files *don't* change, keeping extra copies costs you virtually nothing.
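(In config terms that is just one line; the numbers below are only an
illustration. With $Conf{FullPeriod} of about a week,

    $Conf{FullKeepCnt} = [ 4, 2, 3 ];

keeps 4 fulls at one-week spacing, then 2 more at two-week spacing,
then 3 more at four-week spacing.)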

I see your point that changed files missed on incremental backups *might*
indicate corruption on the client side, because a legitimate change would
*usually* update the modification time, but that is not always true, and it
cannot be automatically and generically decided which is the case.
It should be possible to detect and log such a circumstance, I would expect,
but I have no idea how complicated that might be.

 The last time I checked I didn't see any validation of archives in backuppc.

From the comments in the source and a little experimenting (hey, and the
documentation :-) I get the impression that the file name is the md5sum of
the concatenation of the size of the uncompressed file and the contents of
the uncompressed file (for files < 256K that is, see BackupPC::Lib,
sub Buffer2MD5 for details). That's easy enough (algorithmically) to verify.
Expect quite some disk I/O and CPU usage though. For the fun of it, I'll
write a script that does some checking, though I wouldn't know what to do
except print a message on mismatches.
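
As a starting point, here is a minimal sketch of the naming scheme
described above (assumptions: a small, *uncompressed* file; real pool
files are usually compressed, files >= 256K hash only part of their
contents, and digest collisions get a numeric suffix appended):

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);

    # pool digest as described above: md5( size . contents ),
    # for files smaller than 256K only
    sub pool_digest_small {
        my ($path) = @_;
        open my $fh, '<', $path or die "$path: $!";
        binmode $fh;
        my $data = do { local $/; <$fh> };    # slurp the whole file
        close $fh;
        my $size = length $data;
        die "only valid for files < 256K\n" if $size >= 256 * 1024;
        return md5_hex($size . $data);        # size is prepended as text
    }

    print pool_digest_small($ARGV[0]), "\n";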

 This is compounded 
 by the problem of backing up backuppc archives, caused by the huge number of 
hardlinks, which AFAIK is still unsolved and which is not a backuppc problem
per se, but more likely a linux and/or rsync problem.

Backing up the complete pool system is somewhat of a problem, and it *is* a
BackupPC problem. It's not a bug in rsync, cp or tar, it's just a huge
amount of data that needs to be traversed in order to re-create the hardlink
relationships correctly (and a huge amount of disk seeks involved). With
knowledge of how the BackupPC pool system works, it is somewhat easier.
There's even BackupPC_tarPCCopy which seems to perform some highly illegal tar
voodoo by creating tar archives with hardlinks referencing files not in the
archive. I obviously wouldn't admit having tried it and thus can't tell you
how much it speeds up the process, but it's supposed to make it feasible ;-).
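
(From memory of the 3.0 documentation - so treat this as a sketch and
check your version's docs; the pool itself must have been copied to
the new data directory beforehand - its use looks roughly like:

    cd /path/to/new/datadir
    /usr/local/BackupPC/bin/BackupPC_tarPCCopy /path/to/old/datadir/pc | tar xvPf -

with tar's -P matching the documented invocation, since the hardlink
entries point outside the archive into the copied pool.)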

And then there's the archive method for creating a snapshot of a host at one
point in time (corresponding to the tapes you would keep with a tape backup
solution). As far as I know, that includes generating parity information in
addition to a tar archive, optionally even split into fixed size parts that

Re: [BackupPC-users] Avoiding long backup times

2007-01-24 Thread Holger Parplies
Hi,

Timothy J. Massey wrote on 24.01.2007 at 23:21:28 (week 04) [Re:
[BackupPC-users] Avoiding long backup times]:
 [EMAIL PROTECTED] wrote on 01/24/2007 09:28:51 PM:
 
  I like the idea of taking *any* identical file from the pool as reference
  though. I don't know if it is possible (i.e. whether the remote rsync 
  will transfer one checksum covering the whole file before the local part 
  needs to commit to selecting one file), but it certainly would be a
  worthwhile speedup.
 
 This could be a most valuable speedup for the initial backup of remote 
 hosts.

that was what I was thinking. I have, in the meantime, read Craig's
description of rsync operation in the online documentation under "Rsync
checksum caching" though (I should have read that a long time ago).
Apparently, BackupPC sends block checksums to the rsync on the client and
gets back information on which blocks to update and how. That means a file
would have to be selected beforehand. I have no idea if that would be
reversible (e.g. by not starting the remote rsync as --server) or if it
solely depends on the direction the file is transferred (quite likely, as the
receiving end cannot tell the sending end what changes to the file it would
like to make ... it could request specific blocks though). Probably, if it
was possible, BackupPC would already do it. How are multiple --compare-dest
options implemented in native rsync, though?

  Currently, your choices are to do a painfully slow (or even 
 impossibly slow) first backup over a WAN link, or physically move the 
 server to the remote location for the first full backup.  This is 
 usually difficult, and often prevents the other backups the server has 
 to do from working as well.

Or to add directories to the backup set in chunks that can be handled with
the available bandwidth, such as by initially having a long
BackupFilesExclude list and removing entries one by one.
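
Something like this, with made-up paths: start with

    $Conf{BackupFilesExclude} = [ '/home', '/srv', '/var' ];

let the first full complete, then remove '/var' from the list, let the
next one complete, and so on until the list is empty.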

I have some thoughts on transferring a host from one server to another, but
I'll have to get back to that later.

Regards,
Holger
