Re: [BackupPC-users] full backup

2022-11-29 Thread jbk

On 11/28/22 09:02, Paulo Ricardo Bruck wrote:


Hi all

New adventures using BackupPC 4.0 8)

I was using BackupPC v3 to keep 3 full backups, and it was 
working like a charm 8)


Now, reading the BackupPC 4 documentation, I see it changes 
the way backups are handled.

https://backuppc.github.io/backuppc/BackupPC.html#BackupPC-4.0

Using Ubuntu-22.04 + backuppc-4.4.0-5ubuntu2.

Is it possible to keep 3 full backups and see only the 
content of each one, without merging?


My conf:
$Conf{BackupPCNightlyPeriod} = 1;
$Conf{FullAgeMax} = 10;
$Conf{FullKeepCnt} = [
  3
];
$Conf{FullKeepCntMin} = 3;
$Conf{FullPeriod} = '0.97';
$Conf{IncrKeepCnt} = 0;
$Conf{IncrKeepCntMin} = 0;
$Conf{IncrPeriod} = ;
$Conf{RsyncShareName} = [
  'media'
];

How do I test?
a)
I create 3 directories, 1, 2 and 3, under /media.
Full backup.
Checking the GUI, I can see all 3 directories. Fine, it is 
working 8)


b) Remove directories 1, 2 and 3 and create directories 
3, 4 and 5.

Full backup.
Checking the GUI, I see directories 1, 2, 3, 4 and 5... Not 
what I want.


c) Remove directories 3, 4 and 5 and create directories 
6, 7 and 8.

Full backup.
Checking the GUI, I see directories 1, 2, 3, 4, 5, 6, 7 and 
8... Not what I want. I would like to see 
only directories 6, 7 and 8.


Any help?

Best regards

Since the design is to fill each backup from the previous 
one, and it appears you are running these tests in quick 
succession without a day in between for the nightly job to 
run, your results are as expected. Check back tomorrow and 
see whether the deleted directories/files have then been 
removed from their respective backups. If you click "delete 
backup #" in the GUI, the backup is queued to be deleted 
when the nightly job runs at its scheduled time overnight.
Since deleting a backup's files requires unlinking them from 
the pool, it is not a straightforward delete.
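
If you do not want to wait for the overnight run, one way to check sooner 
(a sketch, assuming a standard package install with the binaries under 
/usr/share/backuppc/bin and the server running as the backuppc user) is to 
ask the running server to start the nightly job by hand:

# Kick off the nightly housekeeping now; queued backup deletions and
# pool cleanup are processed during this run.
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_serverMesg BackupPC_nightly run

Then re-check the host's backup list in the GUI once the job finishes (the 
server LOG shows when BackupPC_nightly completes).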


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] full backup ok, incremental fails

2020-04-26 Thread Graham Seaman

That worked. Thanks for the help!

Graham

On 26/04/2020 17:35, Craig Barratt via BackupPC-users wrote:

Sorry, the correct form should be "$@":

#!/bin/sh -f
exec /bin/tar -c "$@"

(Note that -c must be an option to tar, not to exec.)

Craig
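
For readers following the thread, here is a small standalone illustration of 
the difference (plain POSIX sh; show_args is just a throwaway helper, not 
part of BackupPC):

#!/bin/sh
# "$@" expands to the original arguments, each preserved as its own word;
# "$*" joins them into a single word, which is what mangled the tar options.
show_args() { printf 'arg: [%s]\n' "$@"; }

set -- --newer='2020-04-22 21:18:10' .
show_args "$@"   # arg: [--newer=2020-04-22 21:18:10]
                 # arg: [.]
show_args "$*"   # arg: [--newer=2020-04-22 21:18:10 .]   (one mangled word)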



On Sun, Apr 26, 2020 at 5:14 AM Graham Seaman > wrote:


Hi Craig

I set sudoers to allow backuppc to run tar as root with no
password, and incremental backups work fine.

This is only marginally less secure than the old setup, which
allowed backuppc to run the script which called tar, so I guess I
can live with this.

But in case you have any other ideas, here's my tiny script that's
now definitely what's causing the problem (the quote marks are
double quotes, not two single quotes):

#!/bin/sh -f

exec -c /bin/tar "$*"


Graham
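
For anyone who prefers to keep the wrapper-plus-sudo arrangement instead of 
granting sudo on tar itself, a minimal sketch (the path and the exact 
sudoers spelling are assumptions, not taken from Graham's setup):

#!/bin/sh -f
# /etc/backuppc/localtar/tar_create.sh
# "$@" forwards every argument exactly as received (Craig's fix), and -c is
# passed to tar, not to exec.
exec /bin/tar -c "$@"

together with a sudoers entry limited to that one script, e.g.

backuppc ALL=(root) NOPASSWD: /etc/backuppc/localtar/tar_create.sh

so the backuppc user never needs blanket root access to tar.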


On 26/04/2020 04:09, Craig Barratt via BackupPC-users wrote:

It would be helpful if you included the edited script in your
reply.  Did you use double quotes, or two single quotes?

I'd recommend trying without the script, just to make sure it
works correctly.  Then you can be sure it's an issue with how the
script handles/splits arguments.

Craig

On Sat, Apr 25, 2020 at 2:49 PM Graham Seaman
mailto:gra...@theseamans.net>> wrote:

Craig

Quoting $* gives me a new error:

/bin/tar: invalid option -- ' '

(I get exactly the same error whether I use $incrDate or
$incrDate+)

That script is to avoid potential security problems from
relaxing the rules in sudoers, so I'd rather not get rid of
it, but I'm a bit surprised no-one else has the same problems
(and that it apparently used to work for me once)

Graham


On 25/04/2020 17:59, Craig Barratt via BackupPC-users wrote:

Graham,

Your script is the problem.  Using $* causes the shell to
resplit arguments at whitespace.  To preserve the arguments
you need to put that in quotes:

exec /bin/tar -c "$*"

Craig

On Sat, Apr 25, 2020 at 5:04 AM Graham Seaman
mailto:gra...@theseamans.net>> wrote:

Thanks Craig

That's clearly the problem, but I'm still mystified.

I have backuppc running on my home server; the storage
is on a NAS NFS
mounted on the home server. Backing up other hosts on my
network (both
full and incremental) over rsync works fine.

The home server backs up using tar. The command in the
log is:

Running: /usr/bin/sudo
/etc/backuppc/localtar/tar_create.sh -v -f - -C
/etc --totals --newer=2020-04-22 21:18:10 .

If I set

 $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';


then incremental backups of the home server fail with:

/bin/tar: Substituting -9223372036854775807 for unknown
date format
‘2020-04-22\\’
/bin/tar: 21\:18\:10: Cannot stat: No such file or directory

If instead I set:

$Conf{TarIncrArgs} = '--newer=$incrDate $fileList';

then incremental backups fail with:

/bin/tar: Option --after-date: Treating date
'2020-04-22' as 2020-04-22
00:00:00
/bin/tar: 21\:18\:10: Cannot stat: No such file or directory

Could it be to do with my localtar/tar_create.sh? (I
created this so
long ago I no longer remember where it came from).

This is just:

#!/bin/sh -f
exec /bin/tar -c $*

Thanks again

Graham

On 25/04/2020 02:59, Craig Barratt via BackupPC-users wrote:
> Graham,
>
> This is a problem with shell (likely ssh) escaping of
arguments that
> contain a space.
>
> For incremental backups a timestamp is passed as an
argument to tar
> running on the client.  The argument should be a date
and time, eg:
>
>     --after-date 2020-04-22\ 21:18:10'
>
> Notice there needs to be a backslash before the space,
so it is part of
> a single argument, not two separate arguments.
>
> You can tell BackupPC to escape an argument (to
protect it from passing
> via ssh) by adding a "+" to the end of the argument
name, eg:
>
>     $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>
>
> Craig
>
> On Fri, Apr 24, 2020 at 3:17 PM Graham Seaman
mailto:gra...@theseamans.net>
> 

Re: [BackupPC-users] full backup ok, incremental fails

2020-04-26 Thread Craig Barratt via BackupPC-users
Sorry, the correct form should be "$@":

#!/bin/sh -f
exec /bin/tar -c "$@"

(Note that -c must be an option to tar, not to exec.)

Craig



On Sun, Apr 26, 2020 at 5:14 AM Graham Seaman  wrote:

> Hi Craig
>
> I set sudoers to allow backuppc to run tar as root with no password, and
> incremental backups work fine.
>
> This is only marginally less secure than the old setup, which allowed
> backuppc to run the script which called tar, so I guess I can live with
> this.
>
> But in case you have any other ideas, here's my tiny script that's now
> definitely what's causing the problem (the quote marks are double quotes,
> not two single quotes):
>
> #!/bin/sh -f
>
> exec -c /bin/tar "$*"
>
>
> Graham
>
>
> On 26/04/2020 04:09, Craig Barratt via BackupPC-users wrote:
>
> It would be helpful if you included the edited script in your reply.  Did
> you use double quotes, or two single quotes?
>
> I'd recommend trying without the script, just the make sure it works
> correctly.  Then you can be sure it's an issue with how the script
> handles/splits arguments.
>
> Craig
>
> On Sat, Apr 25, 2020 at 2:49 PM Graham Seaman 
> wrote:
>
>> Craig
>>
>> Quoting $* gives me a new error:
>>
>> /bin/tar: invalid option -- ' '
>>
>> (I get exactly the same error whether I use $incrDate or $incrDate+)
>>
>> That script is to avoid potential security problems from relaxing the
>> rules in sudoers, so I'd rather not get rid of it, but I'm a bit surprised
>> no-one else has the same problems (and that it apparently used to work for
>> me once)
>>
>> Graham
>>
>>
>> On 25/04/2020 17:59, Craig Barratt via BackupPC-users wrote:
>>
>> Graham,
>>
>> Your script is the problem.  Using $* causes the shell the resplit
>> arguments at whitespace.  To preserve the arguments you need to put that in
>> quotes:
>>
>> exec /bin/tar -c "$*"
>>
>> Craig
>>
>> On Sat, Apr 25, 2020 at 5:04 AM Graham Seaman 
>> wrote:
>>
>>> Thanks Craig
>>>
>>> That's clearly the problem, but I'm still mystified.
>>>
>>> I have backuppc running on my home server; the storage is on a NAS NFS
>>> mounted on the home server. Backing up other hosts on my network (both
>>> full and incremental) over rsync works fine.
>>>
>>> The home server backs up using tar. The command in the log is:
>>>
>>> Running: /usr/bin/sudo /etc/backuppc/localtar/tar_create.sh -v -f - -C
>>> /etc --totals --newer=2020-04-22 21:18:10 .
>>>
>>> If I set
>>>
>>>  $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>>>
>>>
>>> then incremental backups of the home server fail with:
>>>
>>> /bin/tar: Substituting -9223372036854775807 for unknown date format
>>> ‘2020-04-22\\’
>>> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>>>
>>> If instead I set:
>>>
>>> $Conf{TarIncrArgs} = '--newer=$incrDate $fileList';
>>>
>>> then incremental backups fail with:
>>>
>>> /bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
>>> 00:00:00
>>> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>>>
>>> Could it be to do with my localtar/tar_create.sh? (I created this so
>>> long ago I no longer remember where it came from).
>>>
>>> This is just:
>>>
>>> #!/bin/sh -f
>>> exec /bin/tar -c $*
>>>
>>> Thanks again
>>>
>>> Graham
>>>
>>> On 25/04/2020 02:59, Craig Barratt via BackupPC-users wrote:
>>> > Graham,
>>> >
>>> > This is a problem with shell (likely ssh) escaping of arguments that
>>> > contain a space.
>>> >
>>> > For incremental backups a timestamp is passed as an argument to tar
>>> > running on the client.  The argument should be a date and time, eg:
>>> >
>>> > --after-date 2020-04-22\ 21:18:10'
>>> >
>>> > Notice there needs to be a backslash before the space, so it is part of
>>> > a single argument, not two separate arguments.
>>> >
>>> > You can tell BackupPC to escape an argument (to protect it from passing
>>> > via ssh) by adding a "+" to the end of the argument name, eg:
>>> >
>>> > $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>>> >
>>> >
>>> > Craig
>>> >
>>> > On Fri, Apr 24, 2020 at 3:17 PM Graham Seaman >> > > wrote:
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > Ok, I guess its this (from the start of XferLOG.bad):
>>> >
>>> > /bin/tar: Option --after-date: Treating date '2020-04-22' as
>>> 2020-04-22
>>> > 00:00:00
>>> > /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>>> >
>>> > which is kind of confusing, as it goes on to copy the rest of the
>>> > directory and then says '0 Errors'. Anyway, its correct that there
>>> is no
>>> > file called '21:18:10'. Any idea why it thinks there should be?
>>> >
>>> > Graham
>>> >
>>> >
>>> > On 24/04/2020 20:59, Craig Barratt via BackupPC-users wrote:
>>> > > Graham,
>>> > >
>>> > > Tar exit status of 512 means it encountered some sort of error
>>> > (eg, file
>>> > > read error) while it was running on the target client.  Please
>>> look at
>>> > > the XferLOG.bad 

Re: [BackupPC-users] full backup ok, incremental fails

2020-04-26 Thread Graham Seaman

Hi Craig

I set sudoers to allow backuppc to run tar as root with no password, and 
incremental backups work fine.


This is only marginally less secure than the old setup, which allowed 
backuppc to run the script which called tar, so I guess I can live with 
this.


But in case you have any other ideas, here's my tiny script that's now 
definitely what's causing the problem (the quote marks are double 
quotes, not two single quotes):


#!/bin/sh -f

exec -c /bin/tar "$*"


Graham


On 26/04/2020 04:09, Craig Barratt via BackupPC-users wrote:
It would be helpful if you included the edited script in your reply.  
Did you use double quotes, or two single quotes?


I'd recommend trying without the script, just to make sure it works 
correctly.  Then you can be sure it's an issue with how the script 
handles/splits arguments.


Craig

On Sat, Apr 25, 2020 at 2:49 PM Graham Seaman > wrote:


Craig

Quoting $* gives me a new error:

/bin/tar: invalid option -- ' '

(I get exactly the same error whether I use $incrDate or $incrDate+)

That script is to avoid potential security problems from relaxing
the rules in sudoers, so I'd rather not get rid of it, but I'm a
bit surprised no-one else has the same problems (and that it
apparently used to work for me once)

Graham


On 25/04/2020 17:59, Craig Barratt via BackupPC-users wrote:

Graham,

Your script is the problem.  Using $* causes the shell to
resplit arguments at whitespace.  To preserve the arguments you
need to put that in quotes:

exec /bin/tar -c "$*"

Craig

On Sat, Apr 25, 2020 at 5:04 AM Graham Seaman
mailto:gra...@theseamans.net>> wrote:

Thanks Craig

That's clearly the problem, but I'm still mystified.

I have backuppc running on my home server; the storage is on
a NAS NFS
mounted on the home server. Backing up other hosts on my
network (both
full and incremental) over rsync works fine.

The home server backs up using tar. The command in the log is:

Running: /usr/bin/sudo /etc/backuppc/localtar/tar_create.sh
-v -f - -C
/etc --totals --newer=2020-04-22 21:18:10 .

If I set

 $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';


then incremental backups of the home server fail with:

/bin/tar: Substituting -9223372036854775807 for unknown date
format
‘2020-04-22\\’
/bin/tar: 21\:18\:10: Cannot stat: No such file or directory

If instead I set:

$Conf{TarIncrArgs} = '--newer=$incrDate $fileList';

then incremental backups fail with:

/bin/tar: Option --after-date: Treating date '2020-04-22' as
2020-04-22
00:00:00
/bin/tar: 21\:18\:10: Cannot stat: No such file or directory

Could it be to do with my localtar/tar_create.sh? (I created
this so
long ago I no longer remember where it came from).

This is just:

#!/bin/sh -f
exec /bin/tar -c $*

Thanks again

Graham

On 25/04/2020 02:59, Craig Barratt via BackupPC-users wrote:
> Graham,
>
> This is a problem with shell (likely ssh) escaping of
arguments that
> contain a space.
>
> For incremental backups a timestamp is passed as an
argument to tar
> running on the client.  The argument should be a date and
time, eg:
>
>     --after-date 2020-04-22\ 21:18:10'
>
> Notice there needs to be a backslash before the space, so
it is part of
> a single argument, not two separate arguments.
>
> You can tell BackupPC to escape an argument (to protect it
from passing
> via ssh) by adding a "+" to the end of the argument name, eg:
>
>     $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>
>
> Craig
>
> On Fri, Apr 24, 2020 at 3:17 PM Graham Seaman
mailto:gra...@theseamans.net>
> >> wrote:
>
>
>
>
>
>
>
>
>     Ok, I guess its this (from the start of XferLOG.bad):
>
>     /bin/tar: Option --after-date: Treating date
'2020-04-22' as 2020-04-22
>     00:00:00
>     /bin/tar: 21\:18\:10: Cannot stat: No such file or
directory
>
>     which is kind of confusing, as it goes on to copy the
rest of the
>     directory and then says '0 Errors'. Anyway, its correct
that there is no
>     file called '21:18:10'. Any idea why it thinks there
should be?
>
>     Graham
>
>
>     On 24/04/2020 20:59, Craig Barratt via BackupPC-users

Re: [BackupPC-users] full backup ok, incremental fails

2020-04-25 Thread Craig Barratt via BackupPC-users
It would be helpful if you included the edited script in your reply.  Did
you use double quotes, or two single quotes?

I'd recommend trying without the script, just to make sure it works
correctly.  Then you can be sure it's an issue with how the script
handles/splits arguments.

Craig

On Sat, Apr 25, 2020 at 2:49 PM Graham Seaman  wrote:

> Craig
>
> Quoting $* gives me a new error:
>
> /bin/tar: invalid option -- ' '
>
> (I get exactly the same error whether I use $incrDate or $incrDate+)
>
> That script is to avoid potential security problems from relaxing the
> rules in sudoers, so I'd rather not get rid of it, but I'm a bit surprised
> no-one else has the same problems (and that it apparently used to work for
> me once)
>
> Graham
>
>
> On 25/04/2020 17:59, Craig Barratt via BackupPC-users wrote:
>
> Graham,
>
> Your script is the problem.  Using $* causes the shell the resplit
> arguments at whitespace.  To preserve the arguments you need to put that in
> quotes:
>
> exec /bin/tar -c "$*"
>
> Craig
>
> On Sat, Apr 25, 2020 at 5:04 AM Graham Seaman 
> wrote:
>
>> Thanks Craig
>>
>> That's clearly the problem, but I'm still mystified.
>>
>> I have backuppc running on my home server; the storage is on a NAS NFS
>> mounted on the home server. Backing up other hosts on my network (both
>> full and incremental) over rsync works fine.
>>
>> The home server backs up using tar. The command in the log is:
>>
>> Running: /usr/bin/sudo /etc/backuppc/localtar/tar_create.sh -v -f - -C
>> /etc --totals --newer=2020-04-22 21:18:10 .
>>
>> If I set
>>
>>  $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>>
>>
>> then incremental backups of the home server fail with:
>>
>> /bin/tar: Substituting -9223372036854775807 for unknown date format
>> ‘2020-04-22\\’
>> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>>
>> If instead I set:
>>
>> $Conf{TarIncrArgs} = '--newer=$incrDate $fileList';
>>
>> then incremental backups fail with:
>>
>> /bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
>> 00:00:00
>> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>>
>> Could it be to do with my localtar/tar_create.sh? (I created this so
>> long ago I no longer remember where it came from).
>>
>> This is just:
>>
>> #!/bin/sh -f
>> exec /bin/tar -c $*
>>
>> Thanks again
>>
>> Graham
>>
>> On 25/04/2020 02:59, Craig Barratt via BackupPC-users wrote:
>> > Graham,
>> >
>> > This is a problem with shell (likely ssh) escaping of arguments that
>> > contain a space.
>> >
>> > For incremental backups a timestamp is passed as an argument to tar
>> > running on the client.  The argument should be a date and time, eg:
>> >
>> > --after-date 2020-04-22\ 21:18:10'
>> >
>> > Notice there needs to be a backslash before the space, so it is part of
>> > a single argument, not two separate arguments.
>> >
>> > You can tell BackupPC to escape an argument (to protect it from passing
>> > via ssh) by adding a "+" to the end of the argument name, eg:
>> >
>> > $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>> >
>> >
>> > Craig
>> >
>> > On Fri, Apr 24, 2020 at 3:17 PM Graham Seaman > > > wrote:
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > Ok, I guess its this (from the start of XferLOG.bad):
>> >
>> > /bin/tar: Option --after-date: Treating date '2020-04-22' as
>> 2020-04-22
>> > 00:00:00
>> > /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>> >
>> > which is kind of confusing, as it goes on to copy the rest of the
>> > directory and then says '0 Errors'. Anyway, its correct that there
>> is no
>> > file called '21:18:10'. Any idea why it thinks there should be?
>> >
>> > Graham
>> >
>> >
>> > On 24/04/2020 20:59, Craig Barratt via BackupPC-users wrote:
>> > > Graham,
>> > >
>> > > Tar exit status of 512 means it encountered some sort of error
>> > (eg, file
>> > > read error) while it was running on the target client.  Please
>> look at
>> > > the XferLOG.bad file carefully to see the specific error from tar.
>> > >
>> > > If you are unable to see the error, please send me the entire
>> > > XferLOG.bad file?
>> > >
>> > > Craig
>> > >
>> > > On Fri, Apr 24, 2020 at 12:13 PM Graham Seaman
>> > mailto:gra...@theseamans.net>
>> > > >>
>> wrote:
>> > >
>> > > I have a persistent problem with backing up one host: I can do
>> > a full
>> > > backup, but an incremental backup fails on trying to transfer
>> > the first
>> > > directory:
>> > >
>> > > tarExtract: Done: 0 errors, 2 filesExist, 81381 sizeExist,
>> 18122
>> > > sizeExistComp, 2 filesTotal, 81381 sizeTotal
>> > > Got fatal error during xfer (Tar exited with error 512 ()
>> status)
>> > > Backup aborted (Tar exited with error 512 () status)
>> > >
>> > > 

Re: [BackupPC-users] full backup ok, incremental fails

2020-04-25 Thread Graham Seaman

Craig

Quoting $* gives me a new error:

/bin/tar: invalid option -- ' '

(I get exactly the same error whether I use $incrDate or $incrDate+)

That script is to avoid potential security problems from relaxing the 
rules in sudoers, so I'd rather not get rid of it, but I'm a bit 
surprised no-one else has the same problems (and that it apparently used 
to work for me once)


Graham


On 25/04/2020 17:59, Craig Barratt via BackupPC-users wrote:

Graham,

Your script is the problem.  Using $* causes the shell to resplit 
arguments at whitespace.  To preserve the arguments you need to put 
that in quotes:


exec /bin/tar -c "$*"

Craig

On Sat, Apr 25, 2020 at 5:04 AM Graham Seaman > wrote:


Thanks Craig

That's clearly the problem, but I'm still mystified.

I have backuppc running on my home server; the storage is on a NAS NFS
mounted on the home server. Backing up other hosts on my network (both
full and incremental) over rsync works fine.

The home server backs up using tar. The command in the log is:

Running: /usr/bin/sudo /etc/backuppc/localtar/tar_create.sh -v -f - -C
/etc --totals --newer=2020-04-22 21:18:10 .

If I set

 $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';


then incremental backups of the home server fail with:

/bin/tar: Substituting -9223372036854775807 for unknown date format
‘2020-04-22\\’
/bin/tar: 21\:18\:10: Cannot stat: No such file or directory

If instead I set:

$Conf{TarIncrArgs} = '--newer=$incrDate $fileList';

then incremental backups fail with:

/bin/tar: Option --after-date: Treating date '2020-04-22' as
2020-04-22
00:00:00
/bin/tar: 21\:18\:10: Cannot stat: No such file or directory

Could it be to do with my localtar/tar_create.sh? (I created this so
long ago I no longer remember where it came from).

This is just:

#!/bin/sh -f
exec /bin/tar -c $*

Thanks again

Graham

On 25/04/2020 02:59, Craig Barratt via BackupPC-users wrote:
> Graham,
>
> This is a problem with shell (likely ssh) escaping of arguments that
> contain a space.
>
> For incremental backups a timestamp is passed as an argument to tar
> running on the client.  The argument should be a date and time, eg:
>
>     --after-date 2020-04-22\ 21:18:10'
>
> Notice there needs to be a backslash before the space, so it is
part of
> a single argument, not two separate arguments.
>
> You can tell BackupPC to escape an argument (to protect it from
passing
> via ssh) by adding a "+" to the end of the argument name, eg:
>
>     $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>
>
> Craig
>
> On Fri, Apr 24, 2020 at 3:17 PM Graham Seaman
mailto:gra...@theseamans.net>
> >>
wrote:
>
>
>
>
>
>
>
>
>     Ok, I guess its this (from the start of XferLOG.bad):
>
>     /bin/tar: Option --after-date: Treating date '2020-04-22' as
2020-04-22
>     00:00:00
>     /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>
>     which is kind of confusing, as it goes on to copy the rest
of the
>     directory and then says '0 Errors'. Anyway, its correct that
there is no
>     file called '21:18:10'. Any idea why it thinks there should be?
>
>     Graham
>
>
>     On 24/04/2020 20:59, Craig Barratt via BackupPC-users wrote:
>     > Graham,
>     >
>     > Tar exit status of 512 means it encountered some sort of error
>     (eg, file
>     > read error) while it was running on the target client. 
Please look at
>     > the XferLOG.bad file carefully to see the specific error
from tar.
>     >
>     > If you are unable to see the error, please send me the entire
>     > XferLOG.bad file?
>     >
>     > Craig
>     >
>     > On Fri, Apr 24, 2020 at 12:13 PM Graham Seaman
>     mailto:gra...@theseamans.net>
>
>     >       >
>     >     I have a persistent problem with backing up one host:
I can do
>     a full
>     >     backup, but an incremental backup fails on trying to
transfer
>     the first
>     >     directory:
>     >
>     >     tarExtract: Done: 0 errors, 2 filesExist, 81381
sizeExist, 18122
>     >     sizeExistComp, 2 filesTotal, 81381 sizeTotal
>     >     Got fatal error during xfer (Tar exited with error 512
() status)
>     >     Backup aborted (Tar exited with error 512 () status)
>     >
>     >     All other hosts work ok. So I'm guessing it must be 

Re: [BackupPC-users] full backup ok, incremental fails

2020-04-25 Thread Craig Barratt via BackupPC-users
Graham,

Your script is the problem.  Using $* causes the shell to resplit
arguments at whitespace.  To preserve the arguments you need to put that in
quotes:

exec /bin/tar -c "$*"

Craig

On Sat, Apr 25, 2020 at 5:04 AM Graham Seaman  wrote:

> Thanks Craig
>
> That's clearly the problem, but I'm still mystified.
>
> I have backuppc running on my home server; the storage is on a NAS NFS
> mounted on the home server. Backing up other hosts on my network (both
> full and incremental) over rsync works fine.
>
> The home server backs up using tar. The command in the log is:
>
> Running: /usr/bin/sudo /etc/backuppc/localtar/tar_create.sh -v -f - -C
> /etc --totals --newer=2020-04-22 21:18:10 .
>
> If I set
>
>  $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>
>
> then incremental backups of the home server fail with:
>
> /bin/tar: Substituting -9223372036854775807 for unknown date format
> ‘2020-04-22\\’
> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>
> If instead I set:
>
> $Conf{TarIncrArgs} = '--newer=$incrDate $fileList';
>
> then incremental backups fail with:
>
> /bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
> 00:00:00
> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>
> Could it be to do with my localtar/tar_create.sh? (I created this so
> long ago I no longer remember where it came from).
>
> This is just:
>
> #!/bin/sh -f
> exec /bin/tar -c $*
>
> Thanks again
>
> Graham
>
> On 25/04/2020 02:59, Craig Barratt via BackupPC-users wrote:
> > Graham,
> >
> > This is a problem with shell (likely ssh) escaping of arguments that
> > contain a space.
> >
> > For incremental backups a timestamp is passed as an argument to tar
> > running on the client.  The argument should be a date and time, eg:
> >
> > --after-date 2020-04-22\ 21:18:10'
> >
> > Notice there needs to be a backslash before the space, so it is part of
> > a single argument, not two separate arguments.
> >
> > You can tell BackupPC to escape an argument (to protect it from passing
> > via ssh) by adding a "+" to the end of the argument name, eg:
> >
> > $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
> >
> >
> > Craig
> >
> > On Fri, Apr 24, 2020 at 3:17 PM Graham Seaman  > > wrote:
> >
> >
> >
> >
> >
> >
> >
> >
> > Ok, I guess its this (from the start of XferLOG.bad):
> >
> > /bin/tar: Option --after-date: Treating date '2020-04-22' as
> 2020-04-22
> > 00:00:00
> > /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
> >
> > which is kind of confusing, as it goes on to copy the rest of the
> > directory and then says '0 Errors'. Anyway, its correct that there
> is no
> > file called '21:18:10'. Any idea why it thinks there should be?
> >
> > Graham
> >
> >
> > On 24/04/2020 20:59, Craig Barratt via BackupPC-users wrote:
> > > Graham,
> > >
> > > Tar exit status of 512 means it encountered some sort of error
> > (eg, file
> > > read error) while it was running on the target client.  Please
> look at
> > > the XferLOG.bad file carefully to see the specific error from tar.
> > >
> > > If you are unable to see the error, please send me the entire
> > > XferLOG.bad file?
> > >
> > > Craig
> > >
> > > On Fri, Apr 24, 2020 at 12:13 PM Graham Seaman
> > mailto:gra...@theseamans.net>
> > > >>
> wrote:
> > >
> > > I have a persistent problem with backing up one host: I can do
> > a full
> > > backup, but an incremental backup fails on trying to transfer
> > the first
> > > directory:
> > >
> > > tarExtract: Done: 0 errors, 2 filesExist, 81381 sizeExist,
> 18122
> > > sizeExistComp, 2 filesTotal, 81381 sizeTotal
> > > Got fatal error during xfer (Tar exited with error 512 ()
> status)
> > > Backup aborted (Tar exited with error 512 () status)
> > >
> > > All other hosts work ok. So I'm guessing it must be a file
> > permission
> > > error. Looking at the files, everything seems to be owned by
> > > backuppc.backuppc, so I don't know where/what else to look
> > for. Any
> > > suggestions?
> > >
> > > Thanks
> > > Graham
> > >
> > >
> > > ___
> > > BackupPC-users mailing list
> > > BackupPC-users@lists.sourceforge.net
> > 
> > >  > >
> > > List:
> > https://lists.sourceforge.net/lists/listinfo/backuppc-users
> > > Wiki:http://backuppc.wiki.sourceforge.net
> > > Project: http://backuppc.sourceforge.net/
> > >
> > >
> > >
> > > ___
> > 

Re: [BackupPC-users] full backup ok, incremental fails

2020-04-25 Thread Graham Seaman
Thanks Craig

That's clearly the problem, but I'm still mystified.

I have backuppc running on my home server; the storage is on a NAS NFS
mounted on the home server. Backing up other hosts on my network (both
full and incremental) over rsync works fine.

The home server backs up using tar. The command in the log is:

Running: /usr/bin/sudo /etc/backuppc/localtar/tar_create.sh -v -f - -C
/etc --totals --newer=2020-04-22 21:18:10 .

If I set

 $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';


then incremental backups of the home server fail with:

/bin/tar: Substituting -9223372036854775807 for unknown date format
‘2020-04-22\\’
/bin/tar: 21\:18\:10: Cannot stat: No such file or directory

If instead I set:

$Conf{TarIncrArgs} = '--newer=$incrDate $fileList';

then incremental backups fail with:

/bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
00:00:00
/bin/tar: 21\:18\:10: Cannot stat: No such file or directory

Could it be to do with my localtar/tar_create.sh? (I created this so
long ago I no longer remember where it came from).

This is just:

#!/bin/sh -f
exec /bin/tar -c $*

Thanks again

Graham

On 25/04/2020 02:59, Craig Barratt via BackupPC-users wrote:
> Graham,
> 
> This is a problem with shell (likely ssh) escaping of arguments that
> contain a space.
> 
> For incremental backups a timestamp is passed as an argument to tar
> running on the client.  The argument should be a date and time, eg:
> 
> --after-date 2020-04-22\ 21:18:10'
> 
> Notice there needs to be a backslash before the space, so it is part of
> a single argument, not two separate arguments.
> 
> You can tell BackupPC to escape an argument (to protect it from passing
> via ssh) by adding a "+" to the end of the argument name, eg:
> 
> $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
> 
> 
> Craig
> 
> On Fri, Apr 24, 2020 at 3:17 PM Graham Seaman  > wrote:
> 
> 
> 
> 
> 
> 
> 
> 
> Ok, I guess its this (from the start of XferLOG.bad):
> 
> /bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
> 00:00:00
> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
> 
> which is kind of confusing, as it goes on to copy the rest of the
> directory and then says '0 Errors'. Anyway, its correct that there is no
> file called '21:18:10'. Any idea why it thinks there should be?
> 
> Graham
> 
> 
> On 24/04/2020 20:59, Craig Barratt via BackupPC-users wrote:
> > Graham,
> >
> > Tar exit status of 512 means it encountered some sort of error
> (eg, file
> > read error) while it was running on the target client.  Please look at
> > the XferLOG.bad file carefully to see the specific error from tar.
> >
> > If you are unable to see the error, please send me the entire
> > XferLOG.bad file?
> >
> > Craig
> >
> > On Fri, Apr 24, 2020 at 12:13 PM Graham Seaman
> mailto:gra...@theseamans.net>
> > >> wrote:
> >
> >     I have a persistent problem with backing up one host: I can do
> a full
> >     backup, but an incremental backup fails on trying to transfer
> the first
> >     directory:
> >
> >     tarExtract: Done: 0 errors, 2 filesExist, 81381 sizeExist, 18122
> >     sizeExistComp, 2 filesTotal, 81381 sizeTotal
> >     Got fatal error during xfer (Tar exited with error 512 () status)
> >     Backup aborted (Tar exited with error 512 () status)
> >
> >     All other hosts work ok. So I'm guessing it must be a file
> permission
> >     error. Looking at the files, everything seems to be owned by
> >     backuppc.backuppc, so I don't know where/what else to look
> for. Any
> >     suggestions?
> >
> >     Thanks
> >     Graham
> >
> >
> >     ___
> >     BackupPC-users mailing list
> >     BackupPC-users@lists.sourceforge.net
> 
> >      >
> >     List:   
> https://lists.sourceforge.net/lists/listinfo/backuppc-users
> >     Wiki:    http://backuppc.wiki.sourceforge.net
> >     Project: http://backuppc.sourceforge.net/
> >
> >
> >
> > ___
> > BackupPC-users mailing list
> > BackupPC-users@lists.sourceforge.net
> 
> > List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
> > Wiki:    http://backuppc.wiki.sourceforge.net
> > Project: http://backuppc.sourceforge.net/
> >
> 
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> 

Re: [BackupPC-users] full backup ok, incremental fails

2020-04-24 Thread Craig Barratt via BackupPC-users
Graham,

This is a problem with shell (likely ssh) escaping of arguments that
contain a space.

For incremental backups a timestamp is passed as an argument to tar running
on the client.  The argument should be a date and time, eg:

--after-date 2020-04-22\ 21:18:10'

Notice there needs to be a backslash before the space, so it is part of a
single argument, not two separate arguments.

You can tell BackupPC to escape an argument (to protect it from passing via
ssh) by adding a "+" to the end of the argument name, eg:

$Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';


Craig
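
To see what that escaping buys you, here is a small local stand-in for ssh 
(hypothetical, not part of BackupPC): like ssh, it joins its arguments with 
spaces into one string and lets a shell re-split it on the far end.

#!/bin/sh
# fake_remote behaves like ssh: the argument list is flattened to one string
# and re-parsed by a remote shell, so unprotected spaces split words again.
fake_remote() { sh -c "printf 'arg: [%s]\n' $*"; }

fake_remote --after-date '2020-04-22 21:18:10'
# arg: [--after-date]
# arg: [2020-04-22]
# arg: [21:18:10]

fake_remote --after-date '2020-04-22\ 21:18:10'   # what the "+" escaping produces
# arg: [--after-date]
# arg: [2020-04-22 21:18:10]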

On Fri, Apr 24, 2020 at 3:17 PM Graham Seaman  wrote:

>
>
>
>
>
>
>
> Ok, I guess its this (from the start of XferLOG.bad):
>
> /bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
> 00:00:00
> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>
> which is kind of confusing, as it goes on to copy the rest of the
> directory and then says '0 Errors'. Anyway, its correct that there is no
> file called '21:18:10'. Any idea why it thinks there should be?
>
> Graham
>
>
> On 24/04/2020 20:59, Craig Barratt via BackupPC-users wrote:
> > Graham,
> >
> > Tar exit status of 512 means it encountered some sort of error (eg, file
> > read error) while it was running on the target client.  Please look at
> > the XferLOG.bad file carefully to see the specific error from tar.
> >
> > If you are unable to see the error, please send me the entire
> > XferLOG.bad file?
> >
> > Craig
> >
> > On Fri, Apr 24, 2020 at 12:13 PM Graham Seaman  > > wrote:
> >
> > I have a persistent problem with backing up one host: I can do a full
> > backup, but an incremental backup fails on trying to transfer the
> first
> > directory:
> >
> > tarExtract: Done: 0 errors, 2 filesExist, 81381 sizeExist, 18122
> > sizeExistComp, 2 filesTotal, 81381 sizeTotal
> > Got fatal error during xfer (Tar exited with error 512 () status)
> > Backup aborted (Tar exited with error 512 () status)
> >
> > All other hosts work ok. So I'm guessing it must be a file permission
> > error. Looking at the files, everything seems to be owned by
> > backuppc.backuppc, so I don't know where/what else to look for. Any
> > suggestions?
> >
> > Thanks
> > Graham
> >
> >
> > ___
> > BackupPC-users mailing list
> > BackupPC-users@lists.sourceforge.net
> > 
> > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> > Wiki:http://backuppc.wiki.sourceforge.net
> > Project: http://backuppc.sourceforge.net/
> >
> >
> >
> > ___
> > BackupPC-users mailing list
> > BackupPC-users@lists.sourceforge.net
> > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> > Wiki:http://backuppc.wiki.sourceforge.net
> > Project: http://backuppc.sourceforge.net/
> >
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] full backup ok, incremental fails

2020-04-24 Thread Graham Seaman







OK, I guess it's this (from the start of XferLOG.bad):

/bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
00:00:00
/bin/tar: 21\:18\:10: Cannot stat: No such file or directory

which is kind of confusing, as it goes on to copy the rest of the
directory and then says '0 Errors'. Anyway, it's correct that there is no
file called '21:18:10'. Any idea why it thinks there should be?

Graham


On 24/04/2020 20:59, Craig Barratt via BackupPC-users wrote:
> Graham,
>
> Tar exit status of 512 means it encountered some sort of error (eg, file
> read error) while it was running on the target client.  Please look at
> the XferLOG.bad file carefully to see the specific error from tar.
> 
> If you are unable to see the error, please send me the entire
> XferLOG.bad file?
> 
> Craig
> 
> On Fri, Apr 24, 2020 at 12:13 PM Graham Seaman  > wrote:
> 
> I have a persistent problem with backing up one host: I can do a full
> backup, but an incremental backup fails on trying to transfer the first
> directory:
> 
> tarExtract: Done: 0 errors, 2 filesExist, 81381 sizeExist, 18122
> sizeExistComp, 2 filesTotal, 81381 sizeTotal
> Got fatal error during xfer (Tar exited with error 512 () status)
> Backup aborted (Tar exited with error 512 () status)
> 
> All other hosts work ok. So I'm guessing it must be a file permission
> error. Looking at the files, everything seems to be owned by
> backuppc.backuppc, so I don't know where/what else to look for. Any
> suggestions?
> 
> Thanks
> Graham
> 
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> 
> List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:    http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
> 
> 
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
> 


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] full backup ok, incremental fails

2020-04-24 Thread Craig Barratt via BackupPC-users
Graham,

Tar exit status of 512 means it encountered some sort of error (eg, file
read error) while it was running on the target client.  Please look at the
XferLOG.bad file carefully to see the specific error from tar.

If you are unable to see the error, please send me the entire XferLOG.bad
file?

Craig

On Fri, Apr 24, 2020 at 12:13 PM Graham Seaman 
wrote:

> I have a persistent problem with backing up one host: I can do a full
> backup, but an incremental backup fails on trying to transfer the first
> directory:
>
> tarExtract: Done: 0 errors, 2 filesExist, 81381 sizeExist, 18122
> sizeExistComp, 2 filesTotal, 81381 sizeTotal
> Got fatal error during xfer (Tar exited with error 512 () status)
> Backup aborted (Tar exited with error 512 () status)
>
> All other hosts work ok. So I'm guessing it must be a file permission
> error. Looking at the files, everything seems to be owned by
> backuppc.backuppc, so I don't know where/what else to look for. Any
> suggestions?
>
> Thanks
> Graham
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Full backup will not complete

2020-03-10 Thread Michael Stowe

On 2020-03-09 02:11, George Campbell wrote:

Hello, I am new to this list, so please let me know if there is anything missing from my question... 

I have setup backuppc as a docker (version 4.3.2) on an Ubuntu server. My client is a Windows 10 running RPC.  

A full backup runs and I can see lots of files on the server. But the backup never seems to finish and gets stuck. I can't see that the backup is incomplete, though I have not tried to test it.  

A few observations:  
- on the client I can see the RPC process running, but it does not seem to do anything with the disk or network 
- the docker is setup to use 8 threads, but only ever uses one. I have changed the MaxBackupPCNightlyJobs to 8.  
- The server and client are both massive with 64/16gb of memory and 12/4 cores respectively. Both, and the network, seem untaxed.  
- I can open a file and restore back to the client, so the config seems to all be working.  

Can someone please suggest what could be wrong, or where I should look next? 


Thanks, George


There's a lot to unpack here, so please bear with me a bit: 


First, I don't know what you mean by RPC -- Windows has an RPC service,
but I'm not sure how that relates to one of the BackupPC transports
(e.g., smb, rsync, ftp), and to make use of the RPC service you'd need
to do some custom work or scripting.  (If this is the case, you may want
to elaborate.) 


Second, ignoring the above and assuming you're using smb or rsync,
you'll need a strategy to deal with windows filesystem semantics and
features.  It's pretty easy to hang either transport by trying to back
up an open file or junction.  Tried and true strategies involve shadow
volumes or ... not doing that. 


MaxBackupPCNightlyJobs will run backup jobs in parallel, but it won't
automatically split your job for you. 


While 16G of memory and 4 cores isn't terrible, it's worth noting that
IO speed is a significant factor, so unless that's enough RAM to cache
your entire backup (or filesystem) then expect to run into IO
constraints -- and note that the time for an individual backup job is
usually proportional to the size of the data. 


And finally, note that there have been various bugs in transports, so
it's probably not a bad idea to be specific about the version in the
case that somebody recognizes it as being problematic in some way.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] full backup of windows client succeeds, incremental fails

2016-07-05 Thread Carl Wilhelm Soderstrom
On 2016-06-09 12:43, Carl Wilhelm Soderstrom wrote:
> I have seen it happen on a couple of occasions where a Windows machine
> (backed up via Cygwin rsyncd, not the minimal rsyncd off the SF page) 
> will suddenly stop working for incremental backups. Full backups will 
> continue to work, but incrementals will start failing with a PIPE error. 

FWIW, the box in question was moved to a different part of the building,
plugged into a different switch, the old user's profile was wiped off and a
new profile created. Now backups seem to work fine - but they don't take as
long, so that may explain things.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] full backup of windows client succeeds, incremental fails

2016-06-09 Thread Carl Wilhelm Soderstrom
On 06/09 02:36 , Les Mikesell wrote:
> It might just be somewhat different timing for that host too - that
> is, there may be a large number of unchanging files or it has slow
> drives that make it take a longer time to find something that changed.

I don't think so.
At this point I'm starting to think it's a networking problem. Lots of
people have been plugging and unplugging networking cables, moving switches
and machines around, and otherwise fiddling with stuff - generally with very
little thought given to network diameter, bisectional bandwidth, etc. I
wouldn't be surprised if BackupPC is just exposing a deeper problem.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] full backup of windows client succeeds, incremental fails

2016-06-09 Thread Michael Stowe
On 2016-06-09 12:43, Carl Wilhelm Soderstrom wrote:
> I have seen it happen on a couple of occasions where a Windows machine
> (backed up via Cygwin rsyncd, not the minimal rsyncd off the SF page) 
> will
> suddenly stop working for incremental backups. Full backups will 
> continue to
> work, but incrementals will start failing with a PIPE error. For 
> example,
> here's a run where I was trying to debug with
> '/usr/share/backuppc/bin/BackupPC_dump -i -v host.example.com':
> 
>   create d 770 544/1049089   0 Program Files (x86)/Common
> Files/Adobe/AAMUpdaterInventory/1.0/AdobeIDCC2015AppLangen_US-11.0
>   create d 770 544/1049089   0 Program Files (x86)/Common
> Files/Adobe/AAMUpdaterInventory/1.0/AdobeIDCC2015AppLangen_US-11.0/11.2.0.100
> Done: 0 files, 0 bytes
> Got fatal error during xfer (aborted by signal=PIPE)
> Backup aborted by user signal
> dump failed: aborted by signal=PIPE
> 
> On the rsyncd side, the rsyncd.log says:
> 2016/06/09 11:26:23 [1728] connect from backuppc.example.com
> (10.77.87.121)
> 2016/06/09 11:26:23 [1728] rsync on . from
> rsyncba...@backuppc.example.com (10.77.87.121)
> 2016/06/09 11:26:23 [1728] building file list
> 2016/06/09 11:30:25 [1728] rsync: [sender] write error: Broken pipe 
> (32)
> 2016/06/09 11:30:25 [1728] rsync error: error in socket IO (code 10) at
> io.c(820) [sender=3.1.2]
> 
> A run with '/usr/share/backuppc/bin/BackupPC_dump -f -v 
> host.example.com'
> will run to completion, but the incremental fails, even if it's run 
> soon
> after a full is run.

This implies that there's something in the comparison of files that's 
breaking the transfer that isn't breaking on a full copy.  I note that 
this can happen if the BackupPC side has performance, timing, or 
corruption issues; the clue is that it happens to incrementals and not 
fulls, so it's not likely to be on the Windows side.

> I've tried updating cygwin, and the rsyncd package was updated. The 
> fact
> that I was able to upgrade a very large number of packages in the 
> cygwin
> installation, indicates to me that it wasn't a cygwin upgrade which 
> broke
> things.

I've had problems with certain versions of cygwin before; the "broken 
pipe" error is very generic and unhelpful, and can happen when it chokes 
on umlauts or when it times out due to fragmentation or due to build 
issues.  If you'd like to check it against a known good version, I do 
maintain one here:

https://github.com/mwstowe/BackupPC-Client/tree/master/backuppc

Note that rsync and its libraries are only a grand total of five files 
(or fewer.)  While you're at it, you might want to pull the rsyncd.conf, 
which has a few options that correct the more sporadic rsync/Windows 
issues.

> This only happens to an occasional Windows machine, and it happens 
> without
> any obvious cause. Not all Windows machines will break, just one.
> 
> Any suggestions on how to debug/fix this? the obvious workaround is 
> just to
> only do full backups, and I've done that in the past, but if I can 
> solve
> this a better way I'd like to do so.
> 
> The next step would be to uninstall and reinstall cygwin. It's possible 
> the
> problem is some corruption in a file which hasn't been upgraded and 
> thus
> repaired.
> --
> Carl Soderstrom
> Systems Administrator
> Real-Time Enterprises
> www.real-time.com


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] full backup of windows client succeeds, incremental fails

2016-06-09 Thread Les Mikesell
On Thu, Jun 9, 2016 at 2:07 PM, Carl Wilhelm Soderstrom
 wrote:
> On 06/09 01:50 , Les Mikesell wrote:
>> Sometimes this is caused by a nat router or stateful firewall
>> (possibly even host firewall software) timing out and breaking a
>> connection due to too much idle time in the traffic.  If you are
>> running over ssh you can usually fix it by enabling keepalives - not
>> sure about the standalone rsyncd options.
>
> Good thought. However, it consistently happens to only one host, doesn't
> seem to correspond to having moved that host to another side of the
> firewall, and both the client and the backup server are on the same
> broadcast domain.
>
> That said, it's at a remote location and I can't trace the cabling myself,
> so it's possible there's a switch or something which is a bit more 'clever'
> than it should be, and is causing this.

It might just be somewhat different timing for that host too - that
is, there may be a large number of unchanging files or it has slow
drives that make it take a longer time to find something that changed.

-- 
   Les Mikesell
 lesmikes...@gmail.com

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] full backup of windows client succeeds, incremental fails

2016-06-09 Thread Carl Wilhelm Soderstrom
On 06/09 01:50 , Les Mikesell wrote:
> Sometimes this is caused by a nat router or stateful firewall
> (possibly even host firewall software) timing out and breaking a
> connection due to too much idle time in the traffic.  If you are
> running over ssh you can usually fix it by enabling keepalives - not
> sure about the standalone rsyncd options.

Good thought. However, it consistently happens to only one host, doesn't
seem to correspond to having moved that host to another side of the
firewall, and both the client and the backup server are on the same
broadcast domain.

That said, it's at a remote location and I can't trace the cabling myself,
so it's possible there's a switch or something which is a bit more 'clever'
than it should be, and is causing this.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] full backup of windows client succeeds, incremental fails

2016-06-09 Thread Les Mikesell
On Thu, Jun 9, 2016 at 12:43 PM, Carl Wilhelm Soderstrom
 wrote:
> I have seen it happen on a couple of occasions where a Windows machine
> (backed up via Cygwin rsyncd, not the minimal rsyncd off the SF page) will
> suddenly stop working for incremental backups. Full backups will continue to
> work, but incrementals will start failing with a PIPE error. For example,
> here's a run where I was trying to debug with
> '/usr/share/backuppc/bin/BackupPC_dump -i -v host.example.com':
>
>   create d 770 544/1049089   0 Program Files (x86)/Common
> Files/Adobe/AAMUpdaterInventory/1.0/AdobeIDCC2015AppLangen_US-11.0
>   create d 770 544/1049089   0 Program Files (x86)/Common
> Files/Adobe/AAMUpdaterInventory/1.0/AdobeIDCC2015AppLangen_US-11.0/11.2.0.100
> Done: 0 files, 0 bytes
> Got fatal error during xfer (aborted by signal=PIPE)
> Backup aborted by user signal
> dump failed: aborted by signal=PIPE

Sometimes this is caused by a nat router or stateful firewall
(possibly even host firewall software) timing out and breaking a
connection due to too much idle time in the traffic.  If you are
running over ssh you can usually fix it by enabling keepalives - not
sure about the standalone rsyncd options.

-- 
Les Mikesell
  lesmikes...@gmail.com
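
If the transfer does go over ssh, the client-side keepalive can be enabled 
per host in the backuppc user's ~/.ssh/config (standard OpenSSH options; the 
host name here is made up):

Host winclient.example.com
    # Send an application-level keepalive every 60 seconds and give up after
    # 5 missed replies, so idle-timeout firewalls don't silently drop the session.
    ServerAliveInterval 60
    ServerAliveCountMax 5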

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Full backup not backing up all files

2014-10-14 Thread Holger Parplies
Hi,

Mikko Kortelainen wrote on 2014-10-14 12:18:00 +0300 [[BackupPC-users] Full 
backup not backing up all files]:
 I have a problem with two BackupPC hosts, yet there's a further third
 host that is working ok.
 
 The problem is that a full backup does not seem to back up all files
 on Windows hosts. An incremental backs up many more files than a full
 one, or so it seems, looking at the version history.
 [...]
 What could be the problem here? Something to do with the Samba version
 (3.5 vs. 4.1)?

so it would seem. Or maybe only a changed default configuration. I don't know
enough about smbclient to guess what it might be, but I know that smb.conf is
relevant for the smbclient command as well.

 What can I do to diagnose this?

A 'diff' of the two smb.conf files might shed some light, presuming it is
configuration related. Otherwise, I would expect the old smbclient package
(along with the same version samba-common) to work on the newer system. You
have several options for trying the downgrade (***Note: I am assuming you have
nothing installed on the system that depends on smbclient/samba-common, i.e.
no samba server, samba configuration tool etc. If in doubt, check with
'apt-cache rdepends samba-common smbclient'***):

- download the packages with a browser and install with 'dpkg -i',
- add a sources.list line for the old distribution on the host(s) with the
  new one and do something like

apt-get update
apt-get install --reinstall smbclient=xyz samba-common=xyz

  (where 'xyz' is the version of smbclient/samba-common on the old system;
  'apt-cache policy smbclient' might help figuring things out), or
- download the packages on the old system with

apt-get -d install --reinstall smbclient samba-common

  (the '-d' switch means 'download only'). Copy them from
  /var/cache/apt/archives over to the new host and install with 'dpkg -i'.

You'll probably need to confirm the downgrade for 'apt-get', and you may
need a '--force-downgrade' switch for dpkg, though I don't expect so.

Presuming downgrading helps and you want to keep things that way, you
should put the packages on hold (so they won't be inadvertently
upgraded at some point), and you should probably browse the changelogs to
see if there were any important security fixes that the old version misses.
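
For the "hold" step, a minimal sketch on Debian/Ubuntu (either form should
work; package names as above):

  apt-mark hold smbclient samba-common
  # or, on older tool sets:
  echo "smbclient hold" | dpkg --set-selections
  echo "samba-common hold" | dpkg --set-selections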

Otherwise, 'apt-get install smbclient samba-common' or simply
'apt-get upgrade' should revert things to the new version.

Hope that helps.

Regards,
Holger

P.S.: There might be some other changed dependencies that I missed. In
  general, apt-get should take care of things. If you use 'dpkg -i',
  you can follow that with 'apt-get -f install' to fix dependencies,
  though in awkward situations this occasionally tends to want to
  fix things by removing the package you just installed. Always
  check what 'apt-get' wants to do, and be very sceptical if it wants
  to remove packages.
  Bottom line: it's easy enough to attempt the downgrade and simply
  *not* do it if it turns out to require more than replacing the two
  packages with older versions.



Re: [BackupPC-users] Full backup not backing up all files

2014-10-14 Thread Mikko Kortelainen
Holger, thanks for your response.

The smb.conf should be pretty much stock on both machines. Besides
comments and whitespace, the only difference is this:

   client ntlmv2 auth = no

That's set on the Samba 3.5 host but not on the 4.1. Actually it seems
that the default for that has changed from Samba 3 (no) to Samba 4
(yes). I set it to no and ran a full backup again but the result was
no different.

A colleague of mine just told me he did a Samba downgrade from 4.1 to
3.5 and tested a quick full backup which was fine. I haven't tried it
myself yet.

What I did do is I ran a full backup command from the command line
with smbclient 4.1 and it stops here:

   0 (0,0 kb/s) \MSDOS.SYS
   94720 (  818,6 kb/s) \msizap.exe
   47772 ( 2744,3 kb/s) \NTDETECT.COM
  297072 ( 4917,1 kb/s) \ntldr
NT_STATUS_SHARING_VIOLATION opening remote file \pagefile.sys (\)
NT_STATUS_SHARING_VIOLATION listing \\*
tar: dumped 26297 files and directories
Total bytes written: 17752618496

So it stops after the first NT_STATUS_SHARING_VIOLATION on a file in
the root directory of C$. There's a bug related to it and tagged to
Samba 3.6:

smbclient TAR omits remainder of directory on access denial.
https://bugzilla.samba.org/show_bug.cgi?id=10605

Perhaps Samba 4.1 suffers from the same bug.

This is pretty bad, as the backup is still listed as a good backup even
though it is far from complete. The error log can contain other
NT_STATUS_SHARING_VIOLATION messages as well, so it is not immediately
obvious that the backup stopped before copying everything it could.
smbclient only skips the remaining files and subdirectories of the
directory where the error occurred and then continues with the next
directory at the same or a higher level. Since the error here is in the
root of C$, there are no same-level or higher-level directories left, so
the backup stops completely.
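
One way to keep a full from tripping over pagefile.sys in the first place is
to exclude such files in config.pl. This is only a workaround, not a fix for
the smbclient bug, and the exact exclude list below is an assumption for a
typical C$ share:

  $Conf{BackupFilesExclude} = {
      'C$' => ['/pagefile.sys', '/hiberfil.sys'],
  };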

As time goes by, the incrementals get deleted from disk, and any files
not included in the full backups will be gone, if I have understood
correctly (?). So there will be at most IncrAgeMax days of complete
backups available. The full backups are just partial.

I guess the only thing I can do now is downgrade to Samba 3.5.

It is quite alarming to realize this thing may bite every user out
there with the default packages and settings on any relatively new
distro. And there is no good warning about it. Looking at the error
log, the problem is not immediately obvious. It is obvious only after
you browse your full backups and discover some stuff is not there.

-Mikko


2014-10-14 15:26 GMT+03:00 Holger Parplies wb...@parplies.de:
 Hi,

 Mikko Kortelainen wrote on 2014-10-14 12:18:00 +0300 [[BackupPC-users] Full 
 backup not backing up all files]:
 I have a problem with two BackupPC hosts, yet there's a further third
 host that is working ok.

 The problem is that a full backup does not seem to back up all files
 on Windows hosts. An incremental backs up many more files than a full
 one, or so it seems, looking at the version history.
 [...]
 What could be the problem here? Something to do with the Samba version
 (3.5 vs. 4.1)?

 so it would seem. Or maybe only a changed default configuration. I don't know
 enough about smbclient to guess what it might be, but I know that smb.conf is
 relevant for the smbclient command as well.

 What can I do to diagnose this?

 A 'diff' of the two smb.conf files might shed some light, presuming it is
 configuration related. Otherwise, I would expect the old smbclient package
 (along with the same version samba-common) to work on the newer system. You
 have several options for trying the downgrade (***Note: I am assuming you have
 nothing installed on the system that depends on smbclient/samba-common, i.e.
 no samba server, samba configuration tool etc. If in doubt, check with
 'apt-cache rdepends samba-common smbclient'***):

 - download the packages with a browser and install with 'dpkg -i',
 - add a sources.list line for the old distribution on the host(s) with the
   new one and do something like

 apt-get update
 apt-get install --reinstall smbclient=xyz samba-common=xyz

   (where 'xyz' is the version of smbclient/samba-common on the old system;
   'apt-cache policy smbclient' might help figuring things out), or
 - download the packages on the old system with

 apt-get -d install --reinstall smbclient samba-common

   (the '-d' switch means 'download only'). Copy them from
   /var/cache/apt/archives over to the new host and install with 'dpkg -i'.

 You'll probably need to confirm the downgrade for 'apt-get' and possibly
 use a '--force-downgrade' switch on dpkg, but I don't think so.

 Presuming downgrading helps and you want to keep things that way, you
 should put the packages on hold (so they won't be inadvertantly
 upgraded at some point), and you should probably browse the changelogs to
 see if there were any important security fixes that the old version misses.

 Otherwise, 'apt-get install smbclient samba-common' or simply
 'apt-get upgrade' 

Re: [BackupPC-users] Full backup locks up computer.

2012-02-21 Thread Steve Blackwell
On Thu, 16 Feb 2012 10:12:23 -0500
Steve Blackwell zep...@cfl.rr.com wrote:

8< snip

 The problem I'm having is that whenever I try to do a full backup, the
 computer locks up. There are no messages in any of the logs to
 indicate what might have caused the problem. Interestingly,
 incremental backups work OK. 

8< snip

Huh! After months of having this problem, last night it did a full
backup, unattended and with no errors. Weird.

Steve

-- 
Changing lives one card at a time

http://www.send1cardnow.com




Re: [BackupPC-users] Full backup on tape

2011-10-11 Thread Les Mikesell
On Tue, Oct 11, 2011 at 10:23 AM, Carlos Albornoz
caralborn...@gmail.com wrote:

 Recently my company acquired a tape backup unit (a TS2900), and I would
 like to send full backups to these tapes - is that possible?
 The idea is to send a monthly backup to tape.

 I ask because I read that BackupPC is designed for disk backup.

You can use the 'archive host' configuration to set up tape output
that you can control from the web interface or just use the command
line BackupPC_tarCreate tool to generate a tar archive of the backup
you want to send to tape.

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Full backup on tape

2011-10-11 Thread Carlos Albornoz
On Tue, Oct 11, 2011 at 12:37 PM, Les Mikesell lesmikes...@gmail.com wrote:
 On Tue, Oct 11, 2011 at 10:23 AM, Carlos Albornoz
 caralborn...@gmail.com wrote:

 Recently my company acquired a tape backup unit (a TS2900), and I would
 like to send full backups to these tapes - is that possible?
 The idea is to send a monthly backup to tape.

 I ask because I read that BackupPC is designed for disk backup.

 You can use the 'archive host' configuration to set up tape output
 that you can control from the web interface or just use the command
 line BackupPC_tarCreate tool to generate a tar archive of the backup
 you want to send to tape.


Hi Les.

And can this 'archive host' be the same BackupPC server, or does it
necessarily have to be another host?


-- 
Carlos Albornoz C.
Linux User #360502
Fono: 97864420



Re: [BackupPC-users] Full backup on tape

2011-10-11 Thread Les Mikesell
On Tue, Oct 11, 2011 at 10:53 AM, Carlos Albornoz
caralborn...@gmail.com wrote:

 And can this 'archive host' be the same BackupPC server, or does it
 necessarily have to be another host?


See:
http://backuppc.sourceforge.net/faq/BackupPC.html#configuring_an_archive_host

An 'archive host' is just a special configuration in the backuppc
server so you can use the same navigation approach as for real hosts
in the web interface, but instead of making a new backup, it lets you
pick existing backups and output a tar archive to your configured
device or location.   But if you want to control it from cron or the
command line it is probably easier to just use the BackupPC_tarCreate
program directly.
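
As a rough sketch of the command-line route (host, share, install path and
tape device are placeholders; check BackupPC_tarCreate's usage output for
your version):

  # as the backuppc user: write the most recent backup (-n -1) of one share to tape
  /usr/share/backuppc/bin/BackupPC_tarCreate -h myhost -n -1 -s /home / \
      | dd of=/dev/nst0 bs=64k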

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Full backup

2009-05-23 Thread Daniel Carrera
Holger Parplies wrote:
 1.) That is what you are requesting BackupPC to do.
 If you want your backups to depend on a different reference point
 than the previous full backup, you can use IncrLevels. An incremental
 backup *can* miss changes. That is highly unlikely with rsync but
 remains possible. With increasing level, chances increase. So, doing
 *exactly* as requested makes sense.
 As with any backup scheme, only full backups are completely reliable.

How can it miss changes? How do full backups fix it?



 2.) Backup dependencies
 A level 1 backup cannot depend on any other level 1 backup (because this
 other backup can - and probably will - expire first).

This is something else I don't understand. Why does it create 
dependencies? I thought BackupPC did something functionally equivalent 
to my hand-links method. With my method, I can safely delete any daily 
backup with full confidence that the others won't be affected, as I'm 
just deleting hard links. As long as a block of data has at least one 
hard link pointing to it (ie. as long as I need it) the file will be 
safe. I thought that BackupPC did something that was functionally 
equivalent with its pooling thing.
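
(For context, the 'hand-links method' is presumably something like the
classic rotate-by-hard-link pattern; this is an assumption about Daniel's
setup, not something stated in the thread:)

  # make a hard-linked copy of the newest tree, then update the live copy in place
  cp -al /backups/daily.0 /backups/daily.1
  rsync -a --delete /data/ /backups/daily.0/

Unchanged files stay shared between daily.0 and daily.1 because rsync
replaces changed files with new inodes rather than rewriting them in place.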

Daniel.



Re: [BackupPC-users] Full backup

2009-05-23 Thread Les Mikesell
Daniel Carrera wrote:
 Holger Parplies wrote:
 1.) That is what you are requesting BackupPC to do.
 If you want your backups to depend on a different reference point
 than the previous full backup, you can use IncrLevels. An incremental
 backup *can* miss changes. That is highly unlikely with rsync but
 remains possible. With increasing level, chances increase. So, doing
 *exactly* as requested makes sense.
 As with any backup scheme, only full backups are completely reliable.
 
 How can it miss changes? How do full backups fix it?

The smb and tar methods use file timestamps to determine which files to 
transfer.  They'll miss new files with back-dated times (like you get 
with unzip, etc.) and the new locations of old files under a renamed 
directory.  And those methods don't have a concept of files that were 
deleted since the last run.  Full backups fix it by copying everything 
(and it doesn't wear your network out...).

 2.) Backup dependencies
 A level 1 backup cannot depend on any other level 1 backup (because this
 other backup can - and probably will - expire first).
 
 This is something else I don't understand. Why does it create 
 dependencies? I thought BackupPC did something functionally equivalent 
 to my hand-links method.

It does that for fulls.  For incrementals the tree just holds the 
changes, saving a lot of work.  On a small scale server that might not 
matter. For one close to capacity, it does.

 With my method, I can safely delete any daily 
 backup with full confidence that the others won't be affected, as I'm 
 just deleting hard links. As long as a block of data has at least one 
 hard link pointing to it (ie. as long as I need it) the file will be 
 safe. I thought that BackupPC did something that was functionally 
 equivalent with its pooling thing.

Again, that's true for fulls - and it knows what it needs to keep to 
support the incrementals.  There is one more layer of complexity here 
though.  There is an  additional hard link in the cpool directory that 
is used to match up new copies of the same content - so these links need 
to be removed when the link count goes down to 1, meaning no backups 
reference it.
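
That link-count reasoning can be seen directly on the pool (the TopDir path
is an assumption; BackupPC_nightly is what actually does the cleanup):

  # compressed-pool files whose only remaining link is the pool entry itself
  find /var/lib/backuppc/cpool -type f -links 1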

-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] Full backup

2009-05-22 Thread Les Mikesell
Daniel Carrera wrote:
 Hello,
 
 If BackupPC uses hard links, what exactly makes a full backup different 
 from an incremental backup? Is it just the --checksum flag for rsync?

It depends on the xfer method.  With smb and tar, a full actually 
transfers everything, with rsync it sets the -i flag so the checksums 
are compared.  In all cases a full rebuilds a complete tree of 
directories and pool links while an incremental does not.

 Suppose that a file has not changed since the last full backup. Will 
 BackupPC re-transmit the file and create a new redundant file on the 
 backup disk? I imagine not. I imagine that it just makes a hard link. In 
 which case, the full backup now starts to look like a plain regular 
 backup with --checksum added.

There are two steps here - the transfer (which smb/tar would do but 
rsync will realize it can skip) and the pooling with hard links.  Note 
that for rsync to avoid the transfer, the same file with the same name 
must appear in the reference backup of the same pc - but pooling with 
hard links will happen if a file with the same content exists anywhere, 
from the same pc or not.

-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] Full backup

2009-05-22 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2009-05-22 15:10:56 -0500 [Re: [BackupPC-users] Full 
backup]:
 Daniel Carrera wrote:
  Hello,
  
  If BackupPC uses hard links, what exactly makes a full backup different 
  from an incremental backup? Is it just the --checksum flag for rsync?
 
 It depends on the xfer method. [...] with rsync it sets the -i flag so
 the checksums are compared.

actually, it's -I (--ignore-times), not -i (--itemize-changes).

 There are two steps here - the transfer (which smb/tar would do but 
 rsync will realize it can skip) and the pooling with hard links.  Note 
 that for rsync to avoid the transfer, the same file with the same name 
 must appear in the reference backup of the same pc

And I apparently never tire of pointing it out: the reference backup for a
full rsync backup is the *previous backup of the host*, the reference backup
for an incremental rsync backup is the *previous backup of lower level* of the
host. Level 1 incrementals will re-transmit any changed files until the next
full backup (because they are relative to the previous full, not to each
other). The next full will not re-transmit these files (unless they have
changed once again). It doesn't need to, because it will check the contents
anyway, so starting from a more recent point cannot introduce any errors.

So, to sum it up, a full backup clears up any errors that may have been
introduced (unlikely with rsync, but possible) and gives a new reference point
for future backups.

Regards,
Holger



Re: [BackupPC-users] Full backup

2009-05-22 Thread Daniel Carrera
Holger Parplies wrote:
 the reference backup
 for an incremental rsync backup is the *previous backup of lower level* of the
 host. Level 1 incrementals will re-transmit any changed files until the next
 full backup (because they are relative to the previous full, not to each
 other).

That seems wasteful. Why is it like that?


 The next full will not re-transmit these files (unless they have
 changed once again).

So it's possible that a full backup runs faster than an incremental 
because it doesn't have to transmit everything again?

Daniel.



Re: [BackupPC-users] Full backup

2009-05-22 Thread Les Mikesell
Daniel Carrera wrote:
 
 the reference backup
 for an incremental rsync backup is the *previous backup of lower level* of 
 the
 host. Level 1 incrementals will re-transmit any changed files until the next
 full backup (because they are relative to the previous full, not to each
 other).
 
 That seems wasteful. Why is it like that?

The original version only had the timestamp-based tar and smb methods 
and work like more traditional backups.  The rsync-in-perl code was 
added later and stayed mostly compatible until the option for 
incremental level settings was added in recent versions.

 The next full will not re-transmit these files (unless they have
 changed once again).
 
 So it's possible that a full backup runs faster than an incremental 
 because it doesn't have to transmit everything again?

The full itself would take as long - but the one following the full 
which becomes the next reference copy could be much faster than another 
incremental based on the older full.

-- 
Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Full backup

2009-05-22 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2009-05-22 18:28:21 -0500 [Re: [BackupPC-users] Full 
backup]:
 Daniel Carrera wrote:
  
  the reference backup for an incremental rsync backup is the
  *previous backup of lower level* of the host. Level 1 incrementals
  will re-transmit any changed files until the next full backup
  (because they are relative to the previous full, not to each other).
  
  That seems wasteful. Why is it like that?
 
 The original version only had the timestamp-based tar and smb methods 
 and work like more traditional backups. [...]

yes, but the question was: why?

1.) That is what you are requesting BackupPC to do.
If you want your backups to depend on a different reference point
than the previous full backup, you can use IncrLevels. An incremental
backup *can* miss changes. That is highly unlikely with rsync but
remains possible. With increasing level, chances increase. So, doing
*exactly* as requested makes sense.
As with any backup scheme, only full backups are completely reliable.

2.) Backup dependencies
A level 1 backup cannot depend on any other level 1 backup (because this
other backup can - and probably will - expire first). As a consequence,
the backup needs to record any changes since the last full (level 0)
backup. BackupPC would need to take two backups into account, if it were
to use the previous incremental as rsync reference: the incremental (for
the rsync algorithm) and the full (for creating the new backup view). If
a file was same as in the incremental, but had changed since the full
backup, it would need to be added to the new backup. That can probably be
implemented (don't know the rsync algorithm well enough to say for sure),
but I wouldn't want to debug it, and I wouldn't feel comfortable about
using it until it had been debugged.
I'm not sure what that would mean for memory requirements considering
large file lists.
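
For the IncrLevels option mentioned in point 1, a minimal config.pl sketch
(the particular level sequence is just an example):

  $Conf{IncrLevels} = [1, 2, 3];

With a setting like that, each incremental after a full is taken at the next
level in the list, so the level 2 and level 3 backups are relative to the
preceding lower-level backup rather than to the full; check the IncrLevels
documentation for how the list is reused once it is exhausted.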

  The next full will not re-transmit these files (unless they have
  changed once again).
  
  So it's possible that a full backup runs faster than an incremental 
  because it doesn't have to transmit everything again?
 
 The full itself would take as long -

Wrong. The full can - in theory - be faster than an incremental would be, but
that will only be the case if bandwidth is your limiting factor and your
backup set is relatively small. In my experience, rsync fulls usually take
significantly longer than incrementals because all files need to be read on
client and server and their checksums calculated and compared (one example I
see here has full backups taking at least an hour - for about 25 GB of data -
and incrementals ranging between 4 minutes and half an hour). An incremental
will stat all files and skip those that are apparently unchanged. If you have
100 GB of data in your backup, that will take at least an hour to read on the
client (and probably more on the server with a compressed pool). You can
probably transfer quite some data in that time. If you only have 10 MB of
data in your backup and you can save re-transferring 5 MB over an ISDN link,
it's obviously a different matter.

 but the one following the full 
 which becomes the next reference copy could be much faster than another 
 incremental based on the older full.

Usually, yes :).

Regards,
Holger



Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-28 Thread Toni Van Remortel
Les Mikesell wrote:
 Toni Van Remortel wrote:
 Toni Van Remortel wrote:
 Anyway, I'm preparing a separate test setup now, to be able to do
 correct tests (so both BackupPC and an rsync tree are using data from
 the same time).
 Test results will be here tomorrow.
   
 So that is today.

 BackupPC full dump, with patch which removed --ignore-times for a full
 backup:
 Done: 507 files, 50731819 bytes
 full backup complete
 real    13m39.796s
 user    0m4.232s
 sys     0m0.556s
 Network IO used: 620MB

 'rsync -auvH --ignore-times' on the same data:
 sent 48 bytes  received 108845 bytes  72595.33 bytes/sec
 total size is 54915491  speedup is 504.31
 real    0m16.978s
 user    0m0.480s
 sys     0m0.468s
 Network IO used: 12.5MB


 Big difference.

 Was the previous backuppc full exactly the same as the local target of
 the stock rsync or had backupc been doing incrementals over a period
 when files were changing and the stock rsync run was updating the same
 place?  Backuppc will transfer anything changed since its last full.
 I'm not sure if this is affected by setting multi-level incrementals
 or not.

As I said: I prepared a backup set for this test. So I did a full backup
and an rsync yesterday. Meanwhile, neither BackupPC nor rsync pulled
data until this morning. So both systems were starting from the same
data, and were asked to update the same data.

Anyway, I'm leaving BackupPC for what it is until this is solved. My
main concerns now are bandwidth efficiency and clear backup disk usage
(which isn't really obvious with BackupPC).

-- 
Toni Van Remortel
Linux System Engineer @ Precision Operations NV
+32 3 451 92 26 - [EMAIL PROTECTED]




Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-28 Thread Les Mikesell
Toni Van Remortel wrote:
 Les Mikesell wrote:
 Toni Van Remortel wrote:
 Toni Van Remortel wrote:
 Anyway, I'm preparing a separate test setup now, to be able to do
 correct tests (so both BackupPC and an rsync tree are using data from
 the same time).
 Test results will be here tomorrow.
   
 So that is today.

 BackupPC full dump, with patch which removed --ignore-times for a full
 backup:
 Done: 507 files, 50731819 bytes
 full backup complete
 real    13m39.796s
 user    0m4.232s
 sys     0m0.556s
 Network IO used: 620MB

 'rsync -auvH --ignore-times' on the same data:
 sent 48 bytes  received 108845 bytes  72595.33 bytes/sec
 total size is 54915491  speedup is 504.31
 real    0m16.978s
 user    0m0.480s
 sys     0m0.468s
 Network IO used: 12.5MB


 Big difference.
 Was the previous backuppc full exactly the same as the local target of
 the stock rsync or had backupc been doing incrementals over a period
 when files were changing and the stock rsync run was updating the same
 place?  Backuppc will transfer anything changed since its last full.
 I'm not sure if this is affected by setting multi-level incrementals
 or not.

 As I said: I prepared a backup set for this test. So I did a full backup
 and an rsync yesterday. Meanwhile, neither BackupPC nor rsync pulled
 data until this morning. So both systems were starting from the same
 data, and were asked to update the same data.
 
 Anyway, I'm leaving BackupPC for what it is until this is solved. My
 main concerns now are bandwidth efficiency and clear backup disk usage
 (which isn't really obvious with BackupPC).

If you go to the backuppc web page for the host, what does the 'File 
Size/Count Reuse Summary' say about the run?  There seems to be 
something very different about your setup than what I see, although I 
haven't measured actual bytes on the wire and I normally run with ssh 
compression on remote targets.

-- 
   Les Mikesell
[EMAIL PROTECTED]




Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-28 Thread Craig Barratt
Toni writes:

 BackupPC full dump, with patch which removed --ignore-times for a full
 backup:
 Done: 507 files, 50731819 bytes
 full backup complete
 real    13m39.796s
 user    0m4.232s
 sys     0m0.556s
 Network IO used: 620MB
 
 'rsync -auvH --ignore-times' on the same data:
 sent 48 bytes  received 108845 bytes  72595.33 bytes/sec
 total size is 54915491  speedup is 504.31
 real    0m16.978s
 user    0m0.480s
 sys     0m0.468s
 Network IO used: 12.5MB

There are two significant anomalies here:

  - The native rsync only sent 48 bytes to the remote rsync.
That means it is not sending block checksums.  Somehow
the --ignore-times option isn't taking effect.

  - Network IO used doesn't make sense: native rsync reports
it sent 48 bytes and received 108K, but network IO is
12.5MB.  Similarly, for BackupPC, the total files size
is around 50MB, but there is 620MB of reported IO.

How are you measuring the network IO?

You should increase $Conf{XferLogLevel} to maybe 5 and send me the
XferLOG file for BackupPC offlist.  It obviously is not skipping
files based on attributes as you intended.  Also, please tell me
exactly where you made the change to remove --ignore-times.  I need
to check the side effects of that change.

BackupPC will be slower than native rsync for various reasons (more disk
seeking as hardlinked pool files over time tend to get spread across the
disk, compression overhead, perl vs compiled C).  But it shouldn't be
this much worse.  Let's take this off line to understand what is going
on.

Craig



Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Paul Archer
What kind of specs does your server have (besides running ZFS)? That is, 
processor, memory, etc.

I've got a P-III 500Mhz with 512MB RAM as my backup server. It also is my 
file server (I want to split those into separate machines, but I can't right 
now), with about 250GB of data. (Most of that is images/videos/mp3s, so I 
leave compression off.) It takes 30 hours to do a full doing an rsync to 
itself, and incrementals take about 3 hours.
That's a fair bit of data for a slow machine, so I'm trying to get an idea 
of what I can do to speed things up.
And FWIW, I am a fan of ZFS, but until I get another box, I can't really 
switch to it.

Paul


7:53am, dan wrote:

 I backup about 6-7Gb during a full backup of one of my sco unix servers
 using rsync over ssh and it takes under an hour.

 4-5Gb on an very old unix machine using rsync on an nfs mount takes just
 over an hour.

 full backups of my laptop are about 8Gb and take about 15 minutes, though it
 is on gigabit and so is the backuppc server, BUT the unix servers are not on
 gigabit, just 100Mb/s ethernet.

 On Nov 27, 2007 12:52 AM, Nils Breunese (Lemonbit) [EMAIL PROTECTED] wrote:

 Toni Van Remortel wrote:

 And I have set up BackupPC here 'as-is' in the first place, but we saw
 that the full backups, that ran every 7 days, took about 3 to 4 days to
 complete, while for the same hosts the incrementals finished in 1 hour.
 That's why I got digging into the principles of BackupPC, as I wanted to
 know why the full backups don't work 'as expected'.

 Well, I can tell you BackupPC using rsync as the Xfermethod is working
 just fine for us. The incrementals don't take days, all seems normal.
 I hope you'll be able to find the problem in your setup.

 Nils Breunese.







---
Perl elegant? Perl is like your grandfather's garage. Sure, he kept
most of it tidy to please your grandmother but there was always one
corner where you could find the most amazing junk. And some days,
when you were particularly lucky, he'd show you how it worked.
--Shawn Corey shawn.corey [at] sympatico.ca--

-10921 days until retirement!-



Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Les Mikesell
Toni Van Remortel wrote:

 But I do know that BackupPC does use more bandwidth.
 Besides: when dumping a full backup, the 'pool' means (I hope): file
 already in pool, using it. If not, then there is a problem, as those
 files are already in another backup set of the test host. But BackupPC
 pulls them over anyway.

I'm not sure what you mean by 'pool' here.  The only thing relevant to 
what a backuppc rsync transfer will copy is the previous full of the 
same machine. Files of the same name in the same location will use the 
rsync algorithm to decide how much, if any, data needs to be copied - 
anything else will be copied in full.  When a newly transferred file is 
being linked to the pool, it may be discovered at that point that 
identical content already exists and a link will be made to save storage 
space.

 It should work the way you expect as-is, although the rsync-in-perl
 that knows how to read the compressed archive is somewhat slower.
 The problem ain't the backup server nor its speed. The problem is the
 data transfer.
 
 And I have set up BackupPC here 'as-is' in the first place, but we saw
 that the full backups, that ran every 7 days, took about 3 to 4 days to
 complete, while for the same hosts the incrementals finished in 1 hour.
 That's why I got digging into the principles of BackupPC, as I wanted to
 know why the full backups don't work 'as expected'.

Did you have a full backup of the same host in place with most files 
unchanged at the time you expected a low bandwidth full to happen?  It 
is still possible for the time to be much longer than an incremental, 
depending on the number of files and the speed and memory of the 
machines but it should not be using much more bandwidth.

-- 
   Les Mikesell
[EMAIL PROTECTED]



Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Gene Horodecki

 I'm not sure what you mean by 'pool' here.  The only thing relevant to 
 what a backuppc rsync transfer will copy is the previous full of the 
 same machine. Files of the same name in the same location will use the 
 rsync algorithm to decide how much, if any, data needs to be copied - 
 anything else will be copied in full.  When a newly transferred file is 
 being linked to the pool, it may be discovered at that point that 
 identical content already exists and a link will be made to save storage 
 space.

Is this true?  Why not just send the checksum/name/date/permissions of the
file first and see if it exists already and link it in if it does.  If the
file does not exist by name but there is a checksum for the file, then just
use the vital data to link in the file and you're done.  I'm thinking
Backuppc shouldn't need to send the entire file for that?

Of course if there is no checksum then it is entirely a new file.  If the
checksum is different but the filename is there then send it via rsync, etc
etc...





Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Les Mikesell
dan wrote:
 the ZFS machine is a nexenta (opensolaris+ubuntu) machine with an
 athlon64x2 3800+ and 1Gb Ram with 2 240Gb sata drives in the array.  it's
 a dell e521

Is nexenta still an active project?  And would you recommend using it?

-- 
  Les Mikesell
[EMAIL PROTECTED]





Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Les Mikesell
Gene Horodecki wrote:
 I'm not sure what you mean by 'pool' here.  The only thing relevant to 
 what a backuppc rsync transfer will copy is the previous full of the 
 same machine. Files of the same name in the same location will use the 
 rsync algorithm to decide how much, if any, data needs to be copied - 
 anything else will be copied in full.  When a newly transferred file is 
 being linked to the pool, it may be discovered at that point that 
 identical content already exists and a link will be made to save storage 
 space.
 
 Is this true?  Why not just send the checksum/name/date/permissions of the
 file first and see if it exists already and link it in if it does.  If the
 file does not exist by name but there is a checksum for the file, then just
 use the vital data to link in the file and you're done.  I'm thinking
 Backuppc shouldn't need to send the entire file for that?

You are talking to a stock rsync on the other end.  I don't think it 
knows about the hashing scheme and collision detection that the backuppc 
pooling mechanism uses for filename generation.  And it's only going to 
matter when you update the same file on a bunch of machines with low 
bandwidth connections anyway.

 Of course if there is no checksum then it is entirely a new file.  If the
 checksum is different but the filename is there then send it via rsync, etc
 etc...

I don't know if the rsync checksum has enough in common with the 
backuppc filename hash to even make a good guess about a matching pooled 
file.  I wouldn't expect it to.

-- 
   Les Mikesell
[EMAIL PROTECTED]




Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Rich Rauenzahn

Gene Horodecki wrote:

I had that problem as well.. so I uhh.. well, I fiddled with the backup
directory on the backuppc server and moved them around so that backuppc
wouldn't see I had moved them remotely.. Not something I would exactly
recommend doing... although it worked.



Great suggestions..  It's too late for me now because the backup (should
be) 95% complete.. but I will remember that for next time.

Tell me, are the directories in the pc/hostname path just regular
directories that have the letter 'f' prepended to them?  Did you have to
reorganize every layer of backups in existence to match, or just one layer?

I'll do this next time.
  


So if I moved, say, /var/www to /home/www, I first made a full backup 
before the move.  Then I moved the www directory on the remote host, 
then went to the backuppc server and moved fwww from /fvar/fwww to 
/fhome/fwww within that latest backup tree.  Then I did another full.   
I think that's what I did anyway... =-)
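
In shell terms, the move Rich describes would look roughly like this
(TopDir, host name and backup number are placeholders):

  cd /var/lib/backuppc/pc/myhost/123   # the most recent backup for that host
  mkdir -p fhome                       # if the new parent isn't in the tree yet
  mv fvar/fwww fhome/fwww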


Rich


Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Gene Horodecki
Sounds reasonable... What did you do about the attrib file?  I noticed
there is a file called 'attrib' in each of the pool directories with some
binary data in it.

Rich Rauenzahn [EMAIL PROTECTED] wrote:

 Gene Horodecki wrote:

  I had that problem as well.. so I uhh.. well, I fiddled with the backup
  directory on the backuppc server and moved them around so that backuppc
  wouldn't see I had moved them remotely.. Not something I would exactly
  recommend doing... although it worked.

  Great suggestions..  It's too late for me now because the backup (should
  be) 95% complete.. but I will remember that for next time.

  Tell me, are the directories in the pc/hostname path just regular
  directories that have the letter 'f' prepended to them?  Did you have to
  reorganize every layer of backups in existence to match, or just one layer?

  I'll do this next time.

 So if I moved, say, /var/www to /home/www, I first made a full backup
 before the move.  Then I moved the www directory on the remote host, then
 went to the backuppc server and moved fwww from /fvar/fwww to /fhome/fwww
 within that latest backup tree.  Then I did another full.   I think that's
 what I did anyway... =-)

 Rich





Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Rich Rauenzahn


Gene Horodecki wrote:
 Sounds reasonable... What did you do about the attrib file?  I noticed 
 there is a file called 'attrib' in each of the pool directories with 
 some binary data in it.

Nothing... it just contains permissions, etc.  That's why I did another 
full after the move -- then all of the metadata is updated correctly.

Rich




Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread dan
nexenta is alive and well.  in fact, check this out.
http://www.nexenta.com/corp/

nexenta is not advancing at the pace of ubuntu though.  i like the ubuntu
system so nexenta is great for me.  if you are not tied to ubuntu, you might
consider opensolaris or solaris10.  solaris10 is not beta software but
opensolaris and nexenta are.



On Nov 27, 2007 5:16 PM, Gene Horodecki [EMAIL PROTECTED] wrote:

 Sounds reasonable... What did you do about the attrib file?  I noticed
 there is a file called 'attrib' in each of the pool directories with some
 binary data in it.

 Rich Rauenzahn [EMAIL PROTECTED] wrote:

 Gene Horodecki wrote:

  I had that problem as well.. so I uhh.. well, I fiddled with the backup
 directory on the backuppc server and moved them around so that backuppc
 wouldn't see I had moved them remotely.. Not something I would exactly
 recommend doing... although it worked.

  Great suggestions..  It's too late for me now because the backup (should
 be) 95% complete.. but I will remember that for next time.

 Tell me, are the directories in the pc/hostname path just regular
 directories that have the letter 'f' prepended to them?  Did you have to
 reorganize every layer of backups in existance to match, or just one layer?

 I'll do this next time.


 So if I moved, say, /var/www to /home/www, I first made a full backup
 before the move.  Then I moved the www directory on the remote host, then
 went to the backuppc server and moved fwww from /fvar/fwww to /fhome/fwww
 within that latest backup tree.  Then I did another full.   I think that's
 what I did anyway... =-)

 Rich






Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Toni Van Remortel
Toni Van Remortel wrote:
 Anyway, I'm preparing a separate test setup now, to be able to do
 correct tests (so both BackupPC and an rsync tree are using data from
 the same time).
 Test results will be here tomorrow.
   
So that is today.

BackupPC full dump, with patch which removed --ignore-times for a full
backup:
Done: 507 files, 50731819 bytes
full backup complete
real    13m39.796s
user    0m4.232s
sys     0m0.556s
Network IO used: 620MB

'rsync -auvH --ignore-times' on the same data:
sent 48 bytes  received 108845 bytes  72595.33 bytes/sec
total size is 54915491  speedup is 504.31
real    0m16.978s
user    0m0.480s
sys     0m0.468s
Network IO used: 12.5MB


Big difference.

-- 
Toni Van Remortel
Linux System Engineer @ Precision Operations NV
+32 3 451 92 26 - [EMAIL PROTECTED]




Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-26 Thread Nils Breunese (Lemonbit)

Toni Van Remortel wrote:


How can I reduce bandwidth usage for full backups?

Even when using rsync, BackupPC does transfer all data on a full backup,
and not only the modified files since the last incremental or full.


That's not true. Only modifications are transferred over the network
when using rsync. Full backups are just more thoroughly checking
whether files have changed (not just comparing timestamps, but
actually checking the contents of the files).


Nils Breunese.




Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-26 Thread Toni Van Remortel
Nils Breunese (Lemonbit) wrote:
 Toni Van Remortel wrote:
 How can I reduce bandwidth usage for full backups?

 Even when using rsync, BackupPC does transfer all data on a full backup,
 and not only the modified files since the last incremental or full.
 That's not true. Only modifications are transfered over the network
 when using rsync. Full backups are just more thoroughly checking
 whether files have changed (not just comparing timestamps, but
 actually checking the contents of the files).
Then I wonder what gets transferred. If I monitor a full dump, the
bandwidth usage is much higher than when I copy it manually. If a
simple 'rsync -auv' takes 2 hours to complete against a backup from 1
day ago, then why does BackupPC take 2 days for the same action?

I'm out of ideas, and hacks.

-- 
Toni Van Remortel
Linux System Engineer @ Precision Operations NV
+32 3 451 92 26 - [EMAIL PROTECTED]




Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-26 Thread Nils Breunese (Lemonbit)

Toni Van Remortel wrote:


Nils Breunese (Lemonbit) wrote:

Toni Van Remortel wrote:

How can I reduce bandwidth usage for full backups?

Even when using rsync, BackupPC does transfer all data on a full backup,
and not only the modified files since the last incremental or full.

That's not true. Only modifications are transferred over the network
when using rsync. Full backups are just more thoroughly checking
whether files have changed (not just comparing timestamps, but
actually checking the contents of the files).

Then I wonder what gets transferred. If I monitor a full dump, the
bandwidth usage is much higher than when I copy it manually. If a
simple 'rsync -auv' takes 2 hours to complete against a backup from 1
day ago, then why does BackupPC take 2 days for the same action?

I'm out of ideas, and hacks.


It might be because BackupPC doesn't run the equivalent of rsync -auv.  
See $Conf{RsyncArgs} in your config.pl for the options used and  
remember rsync is talking to BackupPC's rsync interface, not a stock  
rsync. There's much more going on: the compression, the checksumming,  
the pooling, the nightly jobs (if your backup job really needs two  
days then it probably gets in the way of the nightly jobs), that's all  
not happening when you run a plain rsync -auv. The traffic shouldn't  
be much higher though (after the initial backup of course), I think.


Could you give us some numbers? How much traffic are you seeing for a  
BackupPC backup compared to a 'plain rsync'?


Nils Breunese.

P.S. You might want to check out rdiff-backup (http://www.nongnu.org/rdiff-backup/)
if you're looking for an rsync style incremental backup tool and
don't need the compression, pooling, rotation and web interface that
BackupPC gets you out of the box.




Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-26 Thread Les Mikesell
Toni Van Remortel wrote:

 How can I reduce bandwidth usage for full backups?

 Even when using rsync, BackupPC does transfer all data on a full backup,
 and not only the modified files since the last incremental or full.
 That's not true. Only modifications are transferred over the network
 when using rsync. Full backups just check more thoroughly
 whether files have changed (not just comparing timestamps, but
 actually checking the contents of the files).
 Then I wonder what gets transferred. If I monitor a full dump, the
 bandwidth usage is much higher than when I copy the data manually. If a
 simple 'rsync -auv' takes 2 hours to complete against a backup from 1
 day ago, then why does BackupPC take 2 days for the same action?

On fulls, backuppc adds --ignore-times to the rsync arguments.  Try a 
comparison against that for the bandwidth check.  This makes both ends 
read the entire contents of the filesystem so the time will increase 
correspondingly.  The backuppc side is also uncompressing the stored 
copy and doing the checksum exchange in perl code which will slow it 
down a bit.  You can speed this up some with the checksum-seed option. 
If you don't care about this extra data check, you could probably edit 
Rsync.pm and remove that setting.
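
(If you want to try the checksum-seed route, the BackupPC documentation
describes enabling checksum caching by adding the option to the rsync
argument lists in config.pl. A minimal sketch, assuming the rsync on the
client understands --checksum-seed:)

# Reuse cached block/file checksums on later fulls instead of recomputing
# them in perl each time; 32761 is the fixed seed BackupPC expects.
push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';

# Re-verify a small fraction of the cached checksums on each full as a
# safety check (0.01 is the documented default).
$Conf{RsyncCsumCacheVerifyProb} = 0.01;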

-- 
   Les Mikesell
[EMAIL PROTECTED]


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-26 Thread Les Mikesell
Toni Van Remortel wrote:

 Could you give us some numbers? How much traffic are you seeing for
  a BackupPC backup compared to a 'plain rsync'?
 Full backup, run for the 2nd time today (no changes in files):
 - BackupPC full dump: killed it after 30 mins, as it pulled all data
 again (2.8 GB)

This doesn't make any sense to me.  I run backups on some remote 
machines that could not possibly work if rsync fulls copied unchanged 
data.  How are you measuring the traffic?

 Well, we need the web interface of BackupPC, we need the reporting
 functionality of BackupPC, but we'd like to be more bandwidth efficient.
 Maybe I'll write an Xfer module that uses plain rsync ...

It should work the way you expect as-is, although the rsync-in-perl that 
knows how to read the compressed archive is somewhat slower.

-- 
   Les Mikesell
[EMAIL PROTECTED]

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-26 Thread Nils Breunese (Lemonbit)

Toni Van Remortel wrote:


And I had set up BackupPC here 'as-is' in the first place, but we saw
that the full backups, which ran every 7 days, took about 3 to 4 days to
complete, while for the same hosts the incrementals finished in 1 hour.
That's why I started digging into the principles of BackupPC, as I wanted
to know why the full backups don't work 'as expected'.


Well, I can tell you that BackupPC using rsync as the XferMethod is working
just fine for us. The incrementals don't take days; all seems normal.
I hope you'll be able to find the problem in your setup.


Nils Breunese.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] full backup failed - Got fatal error during xfer (Child exited prematurely)

2006-12-22 Thread Craig Barratt
Jorge writes:

 Remote[2]: file has vanished: /proc/2/exe

You should exclude /proc from the backup by adding it to
$Conf{BackupFilesExclude}.
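
A config.pl sketch of that exclusion (keyed by share name; '*' applies to
all shares, and /sys and /dev are usually worth excluding as well):

# Keep pseudo-filesystems out of the backup; their contents change or
# vanish constantly and aren't useful to back up.
$Conf{BackupFilesExclude} = {
    '*' => [
        '/proc',
        '/sys',
        '/dev',
    ],
};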

Craig

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/