Re: [BackupPC-users] Renaming host...

2020-02-27 Thread Raman Gupta
I believe you can just rename the host in the config file, and rename
the host directory under `pc` in the backuppc data directory.

After you rename the hosts to whatever makes sense, use
ClientNameAlias to ensure the renamed host points to the *real*
hostname. See 
http://backuppc.sourceforge.net/faq/BackupPC.html#_conf_clientnamealias_.
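
For example, a rough sketch of the steps (host names hypothetical and
paths distro-dependent; stop BackupPC first so nothing runs mid-rename):

    # stop the server
    service backuppc stop
    # 1. rename the host in the hosts file (first column)
    sed -i 's/^kevinpc\b/accounting-pc/' /etc/backuppc/hosts
    # 2. rename the per-host config file, if one exists
    mv /etc/backuppc/pc/kevinpc.pl /etc/backuppc/pc/accounting-pc.pl
    # 3. rename the host directory under the data dir
    mv /var/lib/backuppc/pc/kevinpc /var/lib/backuppc/pc/accounting-pc
    service backuppc start

Then, in the renamed host's config, point the new name back at the real
machine:

    $Conf{ClientNameAlias} = 'kevinpc';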

Regards,
Raman

On Thu, Feb 27, 2020 at 9:02 PM Omidia Backuppc  wrote:
>
> Hi all.
>
> How do you rename hosts in backuppc?
>
> So there are a couple of Windows machines being backed up by BackupPC (3.3.1)
> that are foolishly named after the user, which is great until they leave;
> then the host in BackupPC is the old name, referring to the previous user...
> which is fine until you go through a few generations of this and now it's a
> puzzle to figure out who the heck "kevinpc" refers to.
>
> I know, generic names; they are now. But I'd REALLY like to be able to update
> at least what shows up on the web interface to actually reference something
> that makes sense.  (They regularly check and test their own backups -- I have
> found that to be a rare breed, though I insist on it for everyone anyway.
> This is made a bit less easy because none of the names on the web interface
> refer to something they can make sense of without really thinking about it.)
>
> It would be great to rename all the rest of the stuff too, like the name in
> the config files, even the folder, but that's all a lot less important (and
> maybe harder, I don't know).  I can live with that stuff being wrong/old/a
> puzzle.  It would be nice but doesn't matter much.
>
> But what the user sees... matters a lot.
>
> I know I can retire the host and hide it and add the new one, and it won't
> take up extra disk space, but first there's no history, and second that's not
> the best solution.
>
> This is easy, right?  Am I just being daft?
>
> Help!  =)
>
> Thanks.




Re: [BackupPC-users] Backups missing entire directories: "file has vanished"

2019-05-10 Thread Raman Gupta
Unfortunately, I did not save the pool state before "fixing" the
problem, and I haven't been able to reproduce it. Hopefully there is
something in the logs that will help narrow down the cause. The
snippets I sent to the mailing list were partial greps of the log, but
if you do see something that you need more context for, I still do
have the full XferLog at logging level 6, with rsync -vvv. I saved it
before deleting the bad backups.

Regards,
Raman

On Thu, May 9, 2019 at 2:31 AM Craig Barratt via BackupPC-users
 wrote:
>
> Thanks for the update.
>
> Can you re-create the problem?  It would be great to track down what went 
> wrong.  (I still need to look at your log file snippets in more detail, 
> hopefully this weekend.)
>
> Craig
>
> On Wed, May 8, 2019 at 10:24 PM Raman Gupta  wrote:
>>
>> So the issue was definitely with the prior backups, probably related
>> to borkage due to the cancellation / partial backup I mentioned in my
>> previous message. The last six or seven backups were all incrementals,
>> with the last one being filled. I had tried deleting the last filled
>> incremental several times, but this had never fixed the problem --
>> probably because deleting that backup caused bpc to fill the prior
>> incremental, and whatever was borked in the deleted backup was copied
>> over to the prior one. The solution was to delete all the backups all
>> the way back to the previous filled full backup. Once this was done, a
>> manually triggered full backup then finally completed with no issues.
>>
>> Regards,
>> Raman
>>
>> On Wed, May 8, 2019 at 11:32 AM Raman Gupta  wrote:
>> >
>> > The backup is large, but not huge. I created a new test host pointing
>> > only to /home/raman/x and it worked just fine. I forgot to mention I
>> > also have another BackupPC server, backing up the same client with the
>> > same configuration, and it backs up all of these files from this
>> > client without any issue, so the issue is definitely on the server
>> > side, not on the client side.
>> >
>> > I ran a test with XferLogLevel=6 and "-vvv". Here are some relevant
>> > grep outputs -- `home/raman/x/y/2018` is a directory created in 2018,
>> > while all of the previously created directories in `y` are backing up
>> > without any issue --- this happens to be tax data so I have one
>> > directory for every year, and all 2015 directories and before are
>> > fine, but 2016 and later are not:
>> >
>> > $ BackupPC_zcat /var/lib/BackupPC/pc/edison/XferLOG.3972.z  | grep 
>> > "x/y/2018"
>> > [sender] showing directory home/raman/x/y/2018 because of pattern /home/**
>> > [sender] make_file(home/raman/x/y/2018,*,2)
>> > recv_file_name(home/raman/x/y/2018)
>> > recv_file_name(home/raman/x/y/2018)
>> > G bpc_lstat(home/raman/x/y/2018)
>> > file has vanished: "/home/raman/x/y/2018"
>> > recv_generator(home/raman/x/y/2018,216871)
>> > G bpc_lstat(home/raman/x/y/2018)
>> > G bpc_mkdir(home/raman/x/y/2018, 0700)
>> > [sender] pushing local filters for /home/raman/x/y/2018/
>> > [sender] showing file home/raman/x/y/2018/a.txt.pdf because of pattern 
>> > /home/**
>> > [... same thing for all other subdirectories and files ...]
>> > recv_file_name(home/raman/x/y/2018/a.txt.pdf)
>> > [... same thing for all other subdirectories and files ...]
>> > recv_generator(home/raman/x/y/2018,217359)
>> > G bpc_lstat(home/raman/x/y/2018)
>> > G bpc_mkdir(home/raman/x/y/2018, 040775)
>> > G bpc_lstat(home/raman/x/y/2018)
>> > rsync_bpc: stat "/home/raman/x/y/2018" failed: No such file or directory 
>> > (2)
>> > delete_in_dir(home/raman/x/y/2018)
>> > [generator] pushing local filters for /home/raman/x/y/2018/
>> > G bpc_opendir(home/raman/x/y/2018)
>> > G bpc_lstat(home/raman/x/y/2018/a.txt.pdf)
>> > file has vanished: "/home/raman/x/y/2018/a.txt.pdf"
>> > [... same thing for all other subdirectories and files ...]
>> > recv_generator(home/raman/x/y/2018/a.txt.pdf,217366)
>> > G bpc_lstat(home/raman/x/y/2018/a.txt.pdf)
>> > G bpc_sysCall_poolFileCheck(home/raman/x/y/2018/a.txt.pdf): potential
>> > match /var/lib/BackupPC//cpool/28/56/29571824e1fe8922304bd1924f02ce22
>> > (len = 171990)
>> > G bpc_lstat(home/raman/x/y/2018/a.txt.pdf)
>> > G bpc_open(home/raman/x/y/2018/a.txt.pdf, 0x0, 00) -> 3
>> > G bpc_lgetxattr(home/raman/x/y/2018/a.txt.pdf, user.rsync.%aacl)
>> > G bpc_llistxattr(ho

Re: [BackupPC-users] Backups missing entire directories: "file has vanished"

2019-05-08 Thread Raman Gupta
So the issue was definitely with the prior backups, probably related
to borkage due to the cancellation / partial backup I mentioned in my
previous message. The last six or seven backups were all incrementals,
with the last one being filled. I had tried deleting the last filled
incremental several times, but this had never fixed the problem --
probably because deleting that backup caused bpc to fill the prior
incremental, and whatever was borked in the deleted backup was copied
over to the prior one. The solution was to delete all the backups all
the way back to the previous filled full backup. Once this was done, a
manually triggered full backup then finally completed with no issues.
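
In BackupPC 4 the deletions can be done with the bundled
BackupPC_backupDelete tool. A sketch of the cleanup, with illustrative
install path, host name, and backup numbers (run as the backuppc user,
newest backup first; deleting a backup merges its deltas into the
adjacent one):

    # 3966 is the last filled full; 3967..3972 are the bad incrementals
    su -s /bin/sh backuppc -c \
        '/usr/share/BackupPC/bin/BackupPC_backupDelete -h edison -n 3972'
    # ... repeat for 3971 down to 3967, then kick off a manual full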

Regards,
Raman

On Wed, May 8, 2019 at 11:32 AM Raman Gupta  wrote:
>
> The backup is large, but not huge. I created a new test host pointing
> only to /home/raman/x and it worked just fine. I forgot to mention I
> also have another BackupPC server, backing up the same client with the
> same configuration, and it backs up all of these files from this
> client without any issue, so the issue is definitely on the server
> side, not on the client side.
>
> I ran a test with XferLogLevel=6 and "-vvv". Here are some relevant
> grep outputs -- `home/raman/x/y/2018` is a directory created in 2018,
> while all of the previously created directories in `y` are backing up
> without any issue --- this happens to be tax data so I have one
> directory for every year, and all 2015 directories and before are
> fine, but 2016 and later are not:
>
> $ BackupPC_zcat /var/lib/BackupPC/pc/edison/XferLOG.3972.z  | grep "x/y/2018"
> [sender] showing directory home/raman/x/y/2018 because of pattern /home/**
> [sender] make_file(home/raman/x/y/2018,*,2)
> recv_file_name(home/raman/x/y/2018)
> recv_file_name(home/raman/x/y/2018)
> G bpc_lstat(home/raman/x/y/2018)
> file has vanished: "/home/raman/x/y/2018"
> recv_generator(home/raman/x/y/2018,216871)
> G bpc_lstat(home/raman/x/y/2018)
> G bpc_mkdir(home/raman/x/y/2018, 0700)
> [sender] pushing local filters for /home/raman/x/y/2018/
> [sender] showing file home/raman/x/y/2018/a.txt.pdf because of pattern 
> /home/**
> [... same thing for all other subdirectories and files ...]
> recv_file_name(home/raman/x/y/2018/a.txt.pdf)
> [... same thing for all other subdirectories and files ...]
> recv_generator(home/raman/x/y/2018,217359)
> G bpc_lstat(home/raman/x/y/2018)
> G bpc_mkdir(home/raman/x/y/2018, 040775)
> G bpc_lstat(home/raman/x/y/2018)
> rsync_bpc: stat "/home/raman/x/y/2018" failed: No such file or directory (2)
> delete_in_dir(home/raman/x/y/2018)
> [generator] pushing local filters for /home/raman/x/y/2018/
> G bpc_opendir(home/raman/x/y/2018)
> G bpc_lstat(home/raman/x/y/2018/a.txt.pdf)
> file has vanished: "/home/raman/x/y/2018/a.txt.pdf"
> [... same thing for all other subdirectories and files ...]
> recv_generator(home/raman/x/y/2018/a.txt.pdf,217366)
> G bpc_lstat(home/raman/x/y/2018/a.txt.pdf)
> G bpc_sysCall_poolFileCheck(home/raman/x/y/2018/a.txt.pdf): potential
> match /var/lib/BackupPC//cpool/28/56/29571824e1fe8922304bd1924f02ce22
> (len = 171990)
> G bpc_lstat(home/raman/x/y/2018/a.txt.pdf)
> G bpc_open(home/raman/x/y/2018/a.txt.pdf, 0x0, 00) -> 3
> G bpc_lgetxattr(home/raman/x/y/2018/a.txt.pdf, user.rsync.%aacl)
> G bpc_llistxattr(home/raman/x/y/2018/a.txt.pdf)
> G bpc_read(3 (home/raman/x/y/2018/a.txt.pdf), buf, 171990) tmpFd = -1
> G bpc_close(3 (home/raman/x/y/2018/a.txt.pdf))
> [... same thing for all other subdirectories and files ...]
>
> and some context around the "file has vanished" message:
>
> $ BackupPC_zcat /var/lib/BackupPC/pc/edison/XferLOG.3972.z  | grep -C
> 5 'file has vanished: "/home/raman/x/y/2018"'
> G bpc_readdir -> 1996
> G bpc_lstat(home/raman/x/y/1996)
> [generator] make_file(home/raman/x/y/1996,*,2)
> G bpc_readdir -> 2018
> G bpc_lstat(home/raman/x/y/2018)
> file has vanished: "/home/raman/x/y/2018"
> G bpc_readdir -> 2014
> G bpc_lstat(home/raman/x/y/2014)
> [generator] make_file(home/raman/x/y/2014,*,2)
> G bpc_readdir -> 2011
> G bpc_lstat(home/raman/x/y/2011)
>
> Lastly, I've tried a test of placing a new file into the 2015
> directory, and this file does back up successfully. So it seems like
> new files are captured, as long as they fall into an existing
> directory, but new directories are not.
>
> One thing I did do on this server is cancel the first full backup
> after removing `--one-file-system` and delete the backup from the web
> UI, whereas the other server completed its backup without
> intervention. I wonder if the server pool was corrupted somehow by
> doing that? Also, the `--one-file-system` was pro

Re: [BackupPC-users] Backups missing entire directories: "file has vanished"

2019-05-08 Thread Raman Gupta
The backup is large, but not huge. I created a new test host pointing
only to /home/raman/x and it worked just fine. I forgot to mention I
also have another BackupPC server, backing up the same client with the
same configuration, and it backs up all of these files from this
client without any issue, so the issue is definitely on the server
side, not on the client side.

I ran a test with XferLogLevel=6 and "-vvv". Here are some relevant
grep outputs -- `home/raman/x/y/2018` is a directory created in 2018,
while all of the previously created directories in `y` are backing up
without any issue --- this happens to be tax data so I have one
directory for every year, and all 2015 directories and before are
fine, but 2016 and later are not:

$ BackupPC_zcat /var/lib/BackupPC/pc/edison/XferLOG.3972.z  | grep "x/y/2018"
[sender] showing directory home/raman/x/y/2018 because of pattern /home/**
[sender] make_file(home/raman/x/y/2018,*,2)
recv_file_name(home/raman/x/y/2018)
recv_file_name(home/raman/x/y/2018)
G bpc_lstat(home/raman/x/y/2018)
file has vanished: "/home/raman/x/y/2018"
recv_generator(home/raman/x/y/2018,216871)
G bpc_lstat(home/raman/x/y/2018)
G bpc_mkdir(home/raman/x/y/2018, 0700)
[sender] pushing local filters for /home/raman/x/y/2018/
[sender] showing file home/raman/x/y/2018/a.txt.pdf because of pattern /home/**
[... same thing for all other subdirectories and files ...]
recv_file_name(home/raman/x/y/2018/a.txt.pdf)
[... same thing for all other subdirectories and files ...]
recv_generator(home/raman/x/y/2018,217359)
G bpc_lstat(home/raman/x/y/2018)
G bpc_mkdir(home/raman/x/y/2018, 040775)
G bpc_lstat(home/raman/x/y/2018)
rsync_bpc: stat "/home/raman/x/y/2018" failed: No such file or directory (2)
delete_in_dir(home/raman/x/y/2018)
[generator] pushing local filters for /home/raman/x/y/2018/
G bpc_opendir(home/raman/x/y/2018)
G bpc_lstat(home/raman/x/y/2018/a.txt.pdf)
file has vanished: "/home/raman/x/y/2018/a.txt.pdf"
[... same thing for all other subdirectories and files ...]
recv_generator(home/raman/x/y/2018/a.txt.pdf,217366)
G bpc_lstat(home/raman/x/y/2018/a.txt.pdf)
G bpc_sysCall_poolFileCheck(home/raman/x/y/2018/a.txt.pdf): potential
match /var/lib/BackupPC//cpool/28/56/29571824e1fe8922304bd1924f02ce22
(len = 171990)
G bpc_lstat(home/raman/x/y/2018/a.txt.pdf)
G bpc_open(home/raman/x/y/2018/a.txt.pdf, 0x0, 00) -> 3
G bpc_lgetxattr(home/raman/x/y/2018/a.txt.pdf, user.rsync.%aacl)
G bpc_llistxattr(home/raman/x/y/2018/a.txt.pdf)
G bpc_read(3 (home/raman/x/y/2018/a.txt.pdf), buf, 171990) tmpFd = -1
G bpc_close(3 (home/raman/x/y/2018/a.txt.pdf))
[... same thing for all other subdirectories and files ...]

and some context around the "file has vanished" message:

$ BackupPC_zcat /var/lib/BackupPC/pc/edison/XferLOG.3972.z  | grep -C
5 'file has vanished: "/home/raman/x/y/2018"'
G bpc_readdir -> 1996
G bpc_lstat(home/raman/x/y/1996)
[generator] make_file(home/raman/x/y/1996,*,2)
G bpc_readdir -> 2018
G bpc_lstat(home/raman/x/y/2018)
file has vanished: "/home/raman/x/y/2018"
G bpc_readdir -> 2014
G bpc_lstat(home/raman/x/y/2014)
[generator] make_file(home/raman/x/y/2014,*,2)
G bpc_readdir -> 2011
G bpc_lstat(home/raman/x/y/2011)

Lastly, I've tried a test of placing a new file into the 2015
directory, and this file does back up successfully. So it seems like
new files are captured, as long as they fall into an existing
directory, but new directories are not.

One thing I did do on this server is cancel the first full backup
after removing `--one-file-system` and delete the backup from the web
UI, whereas the other server completed its backup without
intervention. I wonder if the server pool was corrupted somehow by
doing that? Also, the `--one-file-system` was probably added at the
same time as v4 and the resulting pool migrations on subsequent
backups, which would have *removed* a lot of stuff that was being
backed up by v3. I have run a BackupPC_fsck and there were no errors.
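
For completeness, a sketch of that check (install path varies by distro;
best run while no backups are active):

    sudo -u backuppc /usr/share/BackupPC/bin/BackupPC_fsck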

Regards,
Raman

On Wed, May 8, 2019 at 2:11 AM Craig Barratt via BackupPC-users
 wrote:
>
> Thanks for confirming; it's not a charset issue then.
>
> The empty /home due to --one-file-system from 2017 shouldn't make a 
> difference by now, but it's good to keep in mind.
>
> When you look through the XferLOG file, please look out for any other 
> unexpected errors.  You could also increase the XferLogLevel (eg, to 5 or 6).
>
> Also, if the backup size is large, you could create a new test host (use 
> ClientNameAlias to point it back to the right host) that just backs up 
> /home/raman/x or even /home/raman/x/y.  If the same problem happens, then it 
> will be much easier to browse shorter log files.  If the problem doesn't 
> happen in that case, then that's a useful clue too.
>
> Craig
>
> On Tue, May 7, 2019 at 11:02 PM Raman Gupta  wrote:
>>
>> It is an ext4 file

Re: [BackupPC-users] Backups missing entire directories: "file has vanished"

2019-05-08 Thread Raman Gupta
It is an ext4 filesystem. The directories are plain ASCII -- no
strange characters in any way. The `rsync` process on the Linux client
runs as root, and I have verified root has access to these files
without any issue. There are plenty of inodes free (`df -i` shows that
filesystem as only 5% used). There is no file corruption -- all the
data is good.

There is a seemingly weird coincidence here though... my backups have
been running nightly for years. However, in late 2017 I updated
BackupPC on my Fedora box, and `--one-file-system` was added to the
rsync args without me realizing. This caused all backups from that
point forward to a few days ago to be missing all the new files and
modifications in `/home`, which is a mountpoint for a local LVM ext4
partition. It seems that every file and directory that falls into this
category is now failing with this "vanished" error. Is it possible
that BackupPC is confused because it is expecting to find these files
in prior backups?

I will try the debugging you suggest.
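
A minimal sketch of those settings in a per-host config file (the
commented-out --debug line is an rsync 3.1+ option, included here only
as a suggestion):

    $Conf{XferLogLevel} = 6;
    $Conf{RsyncArgsExtra} = [
        '-vvv',
        #'--debug=FLIST,FILTER',  # only if the client rsync supports it
    ];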

Regards,
Raman

On Wed, May 8, 2019 at 1:44 AM Craig Barratt via BackupPC-users
 wrote:
>
> What sort of filesystem is this?  Do those directory names contain non-ascii 
> characters?
>
> However, the "file has vanished" error shouldn't occur on a directory, so 
> something strange is going on.
>
> I'd recommend turning on additional debug in rsync (eg, add -vvv to 
> $Conf{RsyncArgs}, and also look at the --debug option) and looking in the 
> XferLOG file.  When the initial file list is sent, are those directories and 
> their contents present in the file list?
>
> Craig
>
> On Tue, May 7, 2019 at 4:26 PM Michael Stowe  
> wrote:
>>
>> On 2019-05-07 13:39, Raman Gupta wrote:
>>
>> Certain directories (and their contents) on one of my hosts are not getting 
>> backed up at all, even with a “Full” backup.
>>
>> I use rsync as my Xfer method, with BackupPC 4.3.0 on Fedora (rpms 
>> BackupPC-4.3.0-1.fc29.x86_64, BackupPC-XS-0.58-1.fc29.x86_64).
>>
>> Looking at the backup logs, I see messages like the following related to the 
>> directories that are not being backed up:
>>
>> file has vanished: "/home/raman/x/y/a"
>> file has vanished: "/home/raman/x/y/b"
>> file has vanished: "/home/raman/x/y/c"
>>
>> I have other directories and files successfully backed up in 
>> “/home/raman/x/y”, but the directories “a”, “b”, and “c” (and their content) 
>> are not being backed up.
>>
>> Note that these files have not vanished — they are not ephemeral and they 
>> haven't been touched in days. For example:
>>
>>   File: /home/raman/x/y/a
>>   Size: 4096        Blocks: 8          IO Block: 4096   directory
>> Device: fd08h/64776d    Inode: 33037482    Links: 5
>> Access: (0775/drwxrwxr-x)  Uid: ( 1000/   raman)   Gid: ( 1000/   raman)
>> Context: unconfined_u:object_r:user_home_t:s0
>> Access: 2019-05-07 05:05:17.288857497 -0400
>> Modify: 2019-04-30 00:56:22.914849594 -0400
>> Change: 2019-04-30 00:56:22.914849594 -0400
>> Birth: -
>>
>> Any idea what might be happening here?
>>
>> Regards, Raman
>>
>> “File has vanished” issues can be tricky to diagnose if the file appears to 
>> be there. What rsync is really telling you is that it built a file list, and 
>> some of the files or directories from that list are not accessible when it 
>> actually went to read them. Actually being deleted or ephemeral files are 
>> two reasons, but there are others, from filename encoding issues to inode 
>> changes to complications with remotely mounted filesystems to corruption 
>> issues to complex file permissions.
>>
>> While I might check the file's details both before and after the rsync run 
>> to look for changes, I recommend ensuring that these files are reliably 
>> accessible by the rsync user, checking the logs for any problems, and working 
>> through filesystem issues. (XFS is notorious for this sort of thing.) Also, 
>> if the volume is anything other than a local mount, that's where I'd look 
>> first for issues; be aware that rsync's high read volume often exposes 
>> issues not evident under less stressful usage.




[BackupPC-users] Backups missing entire directories: "file has vanished"

2019-05-07 Thread Raman Gupta
Certain directories (and their contents) on one of my hosts are not
getting backed up at all, even with a "Full" backup.

I use rsync as my Xfer method, with BackupPC 4.3.0 on Fedora (rpms
BackupPC-4.3.0-1.fc29.x86_64, BackupPC-XS-0.58-1.fc29.x86_64).

Looking at the backup logs, I see messages like the following related
to the directories that are not being backed up:

file has vanished: "/home/raman/x/y/a"
file has vanished: "/home/raman/x/y/b"
file has vanished: "/home/raman/x/y/c"

I have other directories and files successfully backed up in
"/home/raman/x/y", but the directories "a", "b", and "c" (and their
content) are not being backed up.

Note that these files have *not* vanished -- they are not ephemeral
and they haven't been touched in days. For example:

  File: /home/raman/x/y/a
 Size: 4096        Blocks: 8          IO Block: 4096   directory
Device: fd08h/64776d    Inode: 33037482    Links: 5
Access: (0775/drwxrwxr-x)  Uid: ( 1000/   raman)   Gid: ( 1000/   raman)
Context: unconfined_u:object_r:user_home_t:s0
Access: 2019-05-07 05:05:17.288857497 -0400
Modify: 2019-04-30 00:56:22.914849594 -0400
Change: 2019-04-30 00:56:22.914849594 -0400
Birth: -

Any idea what might be happening here?

Regards,
Raman




[BackupPC-users] Misconfigured or bad backup detection

2019-04-29 Thread Raman Gupta
Hi guys... I just ran into an interesting situation. I was looking for
some files from my backup, and noticed that nothing had been backed up
in my home directory since late 2017.

Upon investigation, it looks like the Fedora packages started adding
"--one-file-system" by default in the "RsyncArgs", which excludes my
"/home" directory! My bad -- I merged in the upstream changes without
thinking deeply about the ramifications of each change. Thankfully
I've not needed these backups, but scary!

Mistakes like this happen and it got me to thinking about ways to
prevent this kind of thing in the future. What mechanisms are people
using today to avoid this?

If a feature to prevent this were added to BackupPC, I was thinking of
something like "canary files" -- known "canaries" that, if present in a
prior backup but no longer present in a new backup, would cause BackupPC
to raise an alert / send an email. The paths to such canary files or
directories could simply be listed in the configuration.
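
One way to approximate this today with existing hooks would be a
post-dump script (the script name here is hypothetical;
$Conf{DumpPostUserCmd} and its $host/$xferOK substitutions are standard):

    $Conf{DumpPostUserCmd} = '/usr/local/bin/check_canaries.sh $host $xferOK';
    # check_canaries.sh would verify that each known canary path appears
    # in the newest backup (e.g. by grepping its XferLOG) and send an
    # alert email if any are missing.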

Regards,
Raman




Re: [BackupPC-users] Anyone have a copy of BackupPC_DeleteFile.pl?

2015-02-07 Thread Raman Gupta
Here is version 0.1.4 from 2009:

https://gist.github.com/rocketraman/ebce662290da354222c2

I don't know if it is the latest that was available before the wiki
disappeared.

Regards,
Raman

On 02/07/2015 11:47 AM, Carl T. Miller wrote:
 When I searched for BackupPC_DeleteFile.pl it appears
 that the only place it was posted was on the wiki
 for BackupPC.  And the URL is no longer valid.
 
 Can someone shoot me a copy of the file or, even
 better, make it available on the Internet?
 
 Thanks!
 
 



Re: [BackupPC-users] BackupPC finds the wrong hosts when ISPs hijack the DNS

2013-09-05 Thread Raman Gupta
On 09/04/2013 11:05 PM, George Adams wrote:
 I am running BackupPC 3.3.0 on an Ubuntu 12.04.3 system.  My hosts (which are 
 all Windows PCs accessed via SMB) are set to dhcp=0, which has worked well 
 over several years/versions of BackupPC.  According to the docs on "How 
 BackupPC Finds Hosts", the sequence is:
 
 - first gethostbyname()
 - then nmblookup if the DNS lookup fails.
 
 However I am on Time Warner/Road Runner which apparently does DNS hijacking 
 in my area.  So EVERY DNS lookup is successful.  Any lookup of a non-existent 
 host returns the TWC search page address.

Do yourself a favor and switch your router to use the OpenDNS or
Google DNS servers. You'll probably get better performance, as well as
real NXDOMAIN responses.
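
A quick way to check whether your resolver hijacks failed lookups
(hostname illustrative):

    dig some-name-that-should-not-exist.example.com
    # an honest resolver returns "status: NXDOMAIN" in the header;
    # a hijacking one returns NOERROR plus the ISP's search-page IP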

Regards,
Raman Gupta



[BackupPC-users] Moving lots of data on a client

2013-08-20 Thread Raman Gupta
I have a client on which about 100 GB of data has been moved from one
directory to another -- otherwise it's exactly the same.

As I understand it, since the data has been moved, BackupPC 3 will
transfer all the data again (and discard it once it realizes the data
is already in the pool), i.e. it does not skip the transfer of each
file even though the checksum is identical to an existing file in the
pool.

I am using the rsync transfer method.

Is there a workaround to prevent all 100 GB of data from being
transferred again?

Regards,
Raman



Re: [BackupPC-users] Moving lots of data on a client

2013-08-20 Thread Raman Gupta
On 08/20/2013 03:27 PM, John Rouillard wrote:
 On Tue, Aug 20, 2013 at 02:23:38PM -0400, Raman Gupta wrote:
 I have a client on which about 100 GB of data has been moved from one
 directory to another -- otherwise it's exactly the same.

 As I understand it, since the data has been moved, BackupPC 3 will
 transfer all the data again (and discard it once it realizes the data
 is already in the pool) i.e. it does not skip the transfer of each
 file even though the checksum is identical to an existing file in the
 pool.
 
 That is also my understanding.
  
 I am using the rsync transfer method.

 Is there a workaround to prevent all 100 GB of data from being
 transferred again?
 
 Your mileage may vary on this, but changing the structure of the data
 on the backuppc system to match what is currently present on the
 client should work.  Assuming you moved the data from:

Thanks John (and Les) for this suggestion. This was my initial thought
as well, but I wanted to get feedback from the list before trying it.

I used mv/cp -rl as suggested to create the target structure in the
last full. I ignored attrib files as suggested by John (clearly, the
manipulation of the last full breaks the attrib data, but that is fine
for this temporary hack).

I then deleted all the incrementals after the last full using J.
Kosowski's deleteBackup script, just to be sure they didn't mess
something up. Since they didn't see the changes made in the full, I
think they were corrupted anyway.

I then ran a new full backup manually, and it worked fine.

Lastly, I debated whether to manually reverse the mv/cp operations
made in the prior last full, to restore it to its original state (with
correct locations and attrib files), or simply delete that full
backup. I think either approach would have been fine, but I opted to
simply delete it to avoid any potential screw-ups.

All-in-all, saved a day or so worth of data transfer successfully.
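
For anyone attempting the same thing, the server-side shuffle looks
roughly like this on BackupPC 3.x (a sketch only; host, backup number,
and directory names are illustrative, and BackupPC should be stopped
while you work):

    # inside the last full; paths are name-mangled ("f" prefix on each
    # component, with "/" in the share name encoded as %2f)
    cd /var/lib/backuppc/pc/myhost/123/f%2f
    cp -rl folddir/fdata fnewdir/fdata   # hard-link copy, no extra space
    rm -rf folddir/fdata
    # attrib files for the moved tree are now stale, which is acceptable
    # for this temporary hack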

Regards,
Raman



Re: [BackupPC-users] Moving lots of data on a client

2013-08-20 Thread Raman Gupta
On 08/20/2013 03:28 PM, Arnold Krille wrote:
 On Tue, 20 Aug 2013 14:23:38 -0400 Raman Gupta rocketra...@gmail.com
 wrote:
 I have a client on which about 100 GB of data has been moved from one
 directory to another -- otherwise it's exactly the same.

 As I understand it, since the data has been moved, BackupPC 3 will
 transfer all the data again (and discard it once it realizes the data
 is already in the pool) i.e. it does not skip the transfer of each
 file even though the checksum is identical to an existing file in the
 pool.

 I am using the rsync transfer method.

 Is there a workaround to prevent all 100 GB of data from being
 transferred again?
 
 I think the workaround is to use rsync as transfer ;-) At least when you
 added the checksum-seed= parameter to your config, it should
 calculate the checksums on the client and compare with the server's
 database and only transfer contents that differ.

No, checksum-seed doesn't help here. BPC transfers all the data again.
I think checksum-seed caches the checksums on the server, but if BPC
thinks the file doesn't exist on the server side at all (which it
doesn't since it has moved locations), then checksum-seed is irrelevant.

Hopefully BPC 4 will be smarter -- I think I saw a post on
backuppc-devel from Craig indicating that it will be.

 Otherwise I would not manually fiddle with the dirs on the server, its
 far less stress and risk for error if you just let backuppc do its
 thing. Even if that means transfering the files again...

I went ahead with the fiddling -- I'm a bit of a daredevil at heart :)

Regards,
Raman



Re: [BackupPC-users] Backup the backup to online provider

2013-06-27 Thread Raman Gupta
On 06/25/2013 12:05 PM, Carl Wilhelm Soderstrom wrote:
 On 06/25 11:55 , Raman Gupta wrote:
 For disaster recovery purposes, I have been periodically backing up my
 BackupPC pool to external storage. I have a small pool of
 approximately 300 GB on a Linux server, and currently use rsync to
 copy the pool to storage and keep it updated.
 
 That mechanism won't scale much farther. Rsync chokes on the number of files
 that BackupPC (at least the 3.x version) uses, because of the hardlinks.

Agreed, which is one of the reasons I want to replace rsync, and takes
me back to my original question:

 Does anyone have any positive or negative experiences to
 share with using CrashPlan, or any similar provider [1], for backing
 up their pool?

On 06/25/2013 12:05 PM, Carl Wilhelm Soderstrom wrote:
 BPC 4 may be better able to replicate its data pool remotely. it's very
 alpha at the moment tho and I don't know if anyone is yet trying to hammer
 out how to do remote replication with it and what the limiting values are.

Yes, I've been following the information about BPC 4 on the dev list.

Regards,
Raman



[BackupPC-users] Backup the backup to online provider

2013-06-25 Thread Raman Gupta
For disaster recovery purposes, I have been periodically backing up my
BackupPC pool to external storage. I have a small pool of
approximately 300 GB on a Linux server, and currently use rsync to
copy the pool to storage and keep it updated.

I am considering moving my DR backup to an online provider such as
CrashPlan. Does anyone have any positive or negative experiences to
share with using CrashPlan, or any similar provider [1], for backing
up their pool?

Regards,
Raman



Re: [BackupPC-users] why are excludes sent to rsync when none are present in the config file?

2011-09-19 Thread Raman Gupta
On 09/18/2011 10:29 PM, Adam Monsen wrote:
 AHA, adding the following entry to BackupFilesOnly '*' worked:
 
   /opt/backup/stage/other/*
 
 Now that directory is backed up. But I don't understand why. :)

Because

/opt/backup/stage/other

backs up the directory entry itself, but not its contents.

/opt/backup/stage/other/*

backs up the directory entry and contents but not subdirectories.

/opt/backup/stage/other/**

backs up the directory and all contents, including subdirectories.

Do "man rsync" and check out the section "INCLUDE/EXCLUDE PATTERN RULES".

Cheers,
Raman



Re: [BackupPC-users] why are excludes sent to rsync when none are present in the config file?

2011-09-19 Thread Raman Gupta
On 09/19/2011 01:04 PM, Bowie Bailey wrote:
 On 9/19/2011 11:16 AM, Raman Gupta wrote:
 On 09/18/2011 10:29 PM, Adam Monsen wrote:
 AHA, adding the following entry to BackupFilesOnly '*' worked:

   /opt/backup/stage/other/*

 Now that directory is backed up. But I don't understand why. :)
 Because

 /opt/backup/stage/other

 backs up the directory entry itself, but not its contents.

 /opt/backup/stage/other/*

 backs up the directory entry and contents but not subdirectories.

 /opt/backup/stage/other/**

 backs up the directory and all contents, including subdirectories.

 Do man rsync and check out the section INCLUDE/EXCLUDE PATTERN RULES.
 
 That's not the behavior that I'm seeing.  I have a linux host that I am
 backing up via rsyncd.  BackupFilesOnly includes /etc and a couple of
 other directories.  This successfully backs up the entire subtree on all
 three specified directories.  No asterisks required anywhere.

You are right, my apologies. Please ignore my comments above.

When using BackupFilesOnly, BackupPC does some manipulation of the
paths before passing them to rsync, so the normal rsync rules don't apply.

I like to pass my arguments directly to the rsync protocol, so I do:

$Conf{RsyncArgsExtra} = [
'--exclude-from=$confDir/pc/$host.exclude',
];

to create per-client exclude files in the form specified by the rsync
man page, which gives me a bit more control over the arguments as well
as externalizes the includes/excludes into separate files.
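
An example of what one of those per-client files might contain (paths
illustrative; plain rsync filter syntax, where a leading "+ " marks an
include exception to a later exclude):

    # $confDir/pc/myhost.exclude
    + /home/raman/projects/
    - /home/raman/*
    /var/cache/
    *.iso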

Cheers,
Raman



Re: [BackupPC-users] aborted by signal=PIPE

2011-01-10 Thread Raman Gupta
On 01/10/2011 04:41 AM, mohammad tayebi wrote:
 *Hi Backuppc Users*
 
  I have a problem.
  My BackupPC server has RAID 5, with /var/lib/backuppc mounted on it.
  
  Of course my question is about this log:
 
 * 2011-01-09 20:00:03 Got fatal error during xfer (aborted by signal=PIPE)*

There are many reasons this could be happening, but to throw one more
possibility your way -- I recently had this problem after upgrading a
client machine to Fedora 14. Subsequently, about 80-90% of backups of
that client started failing.

The solution was simply to upgrade to the latest Fedora 14 kernel.
That was about 2-3 weeks ago, and the abort hasn't occurred even once
since.

Cheers,
Raman



Re: [BackupPC-users] The dread Unable to read 4 bytes / Read EOF: Connection reset by peer

2010-05-11 Thread Raman Gupta
On 05/11/2010 02:52 PM, Nick Bright wrote:
 Precisely correct. I removed those port arguments and set the SSH server
 on the target machine back to port 22 and it's working. Now I need to
 figure out how to properly tell BackupPC that SSH is on a non-standard
 port. I thought that I was doing it properly, but clearly not. All my
 other boxes use the standard ports, which is why I've not had this issue
 before.

My preferred way to do this is to add an entry to
~backuppc/.ssh/config:

Host backuppc_alias
 Hostname realhostname
 Port 222
 Compression yes

would connect the BackupPC alias backuppc_alias to realhostname, port 
222, with compression on.

man ssh_config
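
A quick way to verify the alias before the next backup runs (assuming
the usual setup where BackupPC logs in as root):

    sudo -u backuppc ssh -l root backuppc_alias whoami
    # should print "root" without prompting for a password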

Cheers,
Raman



Re: [BackupPC-users] suggestion for enhancement

2010-04-27 Thread Raman Gupta
On 04/27/2010 02:39 PM, Les Mikesell wrote:
 On 4/27/2010 8:09 AM, Tyler J. Wagner wrote:
 On Tuesday 27 April 2010 05:24:04 chitowner watertower wrote:
 I would like to suggest- if any dev team members happen to see this- that
it would be a nifty option if some kind of progress indicator could be
added to the backupPC interface. AFAICT, when a backup is running, there
are no hourly additions to the log file, or anything else to show how much
has been done in relation to the total job.

 I would be happy just to see the list of files copied so far when using 
 rsync.
 It would be very helpful for identifying when large files in the wrong place
 are causing a backup to slow.

 Sometimes you can get an idea of what is happening by looking at what is
being stored in the pc/<host>/new directory as a backup runs.

Another trick I've often used is to run an strace command, limited to 
file access, on the backuppc dump process on the server:

strace -e trace=file -p <pid>

It'll list all the files as they get written to disk. Really handy.

Cheers,
Raman



Re: [BackupPC-users] high load and stuck processes

2010-03-05 Thread Raman Gupta
On 03/05/2010 09:01 AM, Josh Malone wrote:
 Also - you need a good filesystem to handle lots (or even not so many) of
 backups. I recently switched from EXT3 to EXT4 and saw an order of magnitude
 (I kid you not, 10+ hours to 1) reduction in the backup time and system
 load. Unfortunately, I think this introduced some problems in the RHEL5
 ext4 code so I also switched from 32-bit RHEL5 to 64-bit -- that seems to
 have cleared up the problems.

When you switched to ext4 and got this performance improvement, did 
you simply upgrade your existing ext3 volumes via tune2fs, or did you 
rebuild the entire filesystem so that the existing on-disk structures 
were migrated as well?
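
For reference, the in-place upgrade path being asked about is roughly
(a sketch; device name illustrative, and files written before the
conversion keep their old ext3 on-disk layout until rewritten):

    umount /var/lib/backuppc
    tune2fs -O extents,uninit_bg,dir_index /dev/vg0/backuppc
    e2fsck -fD /dev/vg0/backuppc   # mandatory fsck after changing features
    mount /var/lib/backuppc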

Cheers,
Raman



Re: [BackupPC-users] Using existing rsync exclusion files?

2009-01-17 Thread Raman Gupta
Walter Francis wrote:
 One thing which I really need to be able to do is utilize the rsync exclude
 files which I maintain for each of my backup sets.  Is it possible to do this,
 either via config file, or (ideally), via something like
 /etc/backuppc/exclude/hostname.exclude where hostname is dynamic so you just
 drop in a file for a hostname and those paths are automatically excluded.  I
 started to do something along these lines by tweaking the scripts themselves when
 the rsync command-line options are built, but I decided I'd better ask first.  I
 might be missing something, plus I prefer to not 'lock' myself out of
 upgrades.

I wrote a patch that does exactly this. Look here:

http://article.gmane.org/gmane.comp.sysutils.backup.backuppc.devel/625

Hope that helps!

Cheers,
Raman




Re: [BackupPC-users] Wildly different speeds for hosts

2008-04-14 Thread Raman Gupta
Raman Gupta wrote:
 I have three hosts configured to backup to my PC. Here are the speeds
 from the host summary:
 
 host 1:  24.77 GB,  14,000 files, 18.78 MB/s (slower WAN link)
 host 2:   1.27 GB,   4,000 files,  1.89 MB/s (faster WAN link)
 host 3:   4.82 GB, 190,000 files,  0.66 MB/s (fast LAN link)
 
 They all use rsync with the same setup, other than the exclude list.
 Backups are configured to run one at a time so there is no overlap
 between them.
 
 The speed of host 3 concerns me. Host 3 is by far the beefiest
 machine, and on the fastest network link of all the hosts, but yet
 backs up at only 0.66 MB/s (incrementals are even slower).

Ok, it seems that the number of files has a large non-linear effect on
the performance of BackupPC. I excluded a bunch of stuff from my host
3 backup, and the new stats are:

host 3:4.2 GB,  85,000 files,  2.19 MB/s

For a file count reduction factor of 2.2 (190,000 down to 85,000 files),
there was a speed increase factor of 3.3 (0.66 up to 2.19 MB/s).

Cheers,
Raman



[BackupPC-users] Wildly different speeds for hosts

2008-04-13 Thread Raman Gupta
I have three hosts configured to backup to my PC. Here are the speeds
from the host summary:

host 1:  24.77 GB,  14,000 files, 18.78 MB/s (slower WAN link)
host 2:   1.27 GB,   4,000 files,  1.89 MB/s (faster WAN link)
host 3:   4.82 GB, 190,000 files,  0.66 MB/s (fast LAN link)

They all use rsync with the same setup, other than the exclude list.
Backups are configured to run one at a time so there is no overlap
between them.

The speed of host 3 concerns me. Host 3 is by far the beefiest
machine, and on the fastest network link of all the hosts, but yet
backs up at only 0.66 MB/s (incrementals are even slower).

Cheers,
Raman



[BackupPC-users] Missing backup

2008-03-05 Thread Raman Gupta
The backup for one of my hosts did not run two nights ago. I can't
find any indication of the reason why -- no errors at all. The logs
indicate the backup didn't even start. My other two hosts completed
successfully.

This is the first time this has happened in about three weeks, and
without me doing anything, the backup worked again normally last night.

Since the logs don't indicate anything abnormal, I'm not sure where to
start debugging this. Any clues?

Cheers,
Raman



Re: [BackupPC-users] Missing backup

2008-03-05 Thread Raman Gupta
Les Mikesell wrote:
 Raman Gupta wrote:
 The backup for one of my hosts did not run two nights ago. I can't
 find any indication of the reason why -- no errors at all. The logs
 indicate the backup didn't even start. My other two hosts completed
 successfully.

 This is the first time this has happened in about three weeks, and
 without me doing anything, the backup worked again normally last night.

 Since the logs don't indicate anything abnormal, I'm not sure where to
 start debugging this. Any clues?
 
 If you fill the pool disk to (default) 95%, the backups won't run.  Some
 early versions had a bug where this situation would never trigger emails
 but the current version should send emails after EmailNotifyMinDays
 without backups.  Another possibility is that other backups did not
 complete in time for this one to start before your blackout window.

Checked both of those situations and neither is the cause:

1) Pool file system was recently at 58% (3/5 10:47), today's max is
58% (3/5 03:30) and yesterday's max was 58%.

2) Log snippet from the night previous that worked (host2 is the last
to be done, and the one that didn't work):

2008-03-03 05:16:45 Finished host1 (BackupPC_link host1)
2008-03-03 05:21:40 Finished incr backup on host2
2008-03-03 05:21:40 Running BackupPC_link host2 (pid=2871)
2008-03-03 05:21:42 Finished host2 (BackupPC_link host2)

Log snippet from the night that did not work -- note that host1
finishes at 4:13 am, which is about an hour before the previous night
-- also note the 0 skipped hosts:

2008-03-04 03:30:00 24hr disk usage: 58% max, 58% recent, 0 skipped hosts
[...]
2008-03-04 04:13:10 Finished host1 (BackupPC_link host1)
2008-03-04 04:30:00 Next wakeup is 2008-03-05 03:30:00

Cheers,
Raman



Re: [BackupPC-users] RsyncArgs configuration

2008-02-10 Thread Raman Gupta
Raman Gupta wrote:
 Another thing that would be useful (with or without the change above)
 is if the RsyncArgs had the runtime variable substitution turned on
 for at least the ConfDir and the host name. For example, I tried this:
 
 $Conf{RsyncArgs} = [
 [...],
 '--exclude-from=$confDir/pc/$host.exclude',
 ];
 
 but it did not work.

I went trolling around the source and I think this patch accomplishes
the variable substitution in RsyncArgs:

--- Rsync.pm.orig   2008-02-11 01:09:27.0 -0500
+++ Rsync.pm        2008-02-11 01:54:05.0 -0500
@@ -246,6 +246,18 @@
     # transferred, even though it is a full dump.
     #
     $rsyncArgs = $conf->{RsyncArgs};
+
+    #
+    # Merge variables into $rsyncArgs
+    #
+    my $args = {
+        host    => $t->{host},
+        hostIP  => $t->{hostIP},
+        client  => $t->{client},
+        confDir => $conf->{ConfDir}
+    };
+    $rsyncArgs = $bpc->cmdVarSubstitute($rsyncArgs, $args);
+
     $rsyncArgs = [@$rsyncArgs, @fileList] if ( @fileList );
     $rsyncArgs = [@$rsyncArgs, "--ignore-times"]
         if ( $t->{type} eq "full" );


Cheers,
Raman Gupta



[BackupPC-users] RsyncArgs configuration

2008-02-10 Thread Raman Gupta
I am setting up BackupPC to back up several machines on my home
network. I am converting over from rlbackup, which has served me well
for a couple of years, but BackupPC should be even better. Kudos to
the developers. I do have a couple of nits however...

I find the file include/exclude mechanism in BackupPC (via
$Conf{BackupFilesExclude}) for rsync to be somewhat obtuse. There
doesn't seem to be any easy way to create advanced include/exclude
lists with exceptions, as one can using the --exclude-from and
--include-from arguments to rsync.

Therefore, I have modified the $Conf{RsyncArgs} to add an
--exclude-from parameter, with a path to an external exclude file.
This works fine. But I would like to set this on a per-host basis. To
do this, I need to override all the rsync args, including the "Do not
edit these!" parameters, instead of just my single exclude-from parameter.

Since that is not the ideal situation, I think it would be useful if
BackupPC came with the "Do not edit these!" rsync parameters separate
from the other rsync arguments. Something like:

$Conf{RsyncBaseArgs} = [
#
# Do not edit these!
#
'--numeric-ids',
'--perms',
'--owner',
'--group',
'-D',
'--links',
'--hard-links',
'--times',
'--block-size=2048',
'--recursive',

#
# Rsync >= 2.6.3 supports the --checksum-seed option
# which allows rsync checksum caching on the server.
# Uncomment this to enable rsync checksum caching if
# you have a recent client rsync version and you want
# to enable checksum caching.
#
#'--checksum-seed=32761',
];


$Conf{RsyncArgs} = [
#
# Add additional arguments here
#
'--exclude-from=/etc/BackupPC/rsync.exclude',
];

This would allow users to easily override just the relevant
additional arguments for rsync on a per-host basis.

Another thing that would be useful (with or without the change above)
is if the RsyncArgs had the runtime variable substitution turned on
for at least the ConfDir and the host name. For example, I tried this:

$Conf{RsyncArgs} = [
[...],
'--exclude-from=$confDir/pc/$host.exclude',
];

but it did not work.

Cheers,
Raman Gupta
