[BackupPC-users] copying the pool

2010-10-04 Thread Chris Purves
I recently copied the pool to a new hard disk following the "Copying the pool" 
instructions from the main documentation.  The documentation says to copy the 
'cpool', 'log', and 'conf' directories using any technique and the 'pc' 
directory using BackupPC_tarPCCopy; however, there is no mention of what to do 
with the 'pool' directory.  I thought it might be created automatically when 
the nightly cleanup runs, but three days later and still no 'pool' directory.

Is this an oversight in the documentation or is the 'pool' directory not 
needed?  I am using BackupPC 3.1.0.
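
For reference, the copy procedure I followed was roughly the following (a 
sketch; the mount points and the BackupPC install path are specific to my 
setup, so adjust accordingly):

# run as the backuppc user; old pool mounted at /mnt/oldpool,
# new pool lives at /var/lib/backuppc
cp -a /mnt/oldpool/cpool /mnt/oldpool/log /mnt/oldpool/conf /var/lib/backuppc/
cd /var/lib/backuppc/pc
/usr/share/backuppc/bin/BackupPC_tarPCCopy /mnt/oldpool/pc | tar xvPf -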

Thanks.

-- 
Chris Purves

I've seen the look of a fat man having dinner. - Frank Sinatra



Re: [BackupPC-users] copying the pool

2010-10-04 Thread Chris Purves
On 04/10/2010 2:20 PM, Robin Lee Powell wrote:
 On Mon, Oct 04, 2010 at 08:56:49AM -0400, Chris Purves wrote:
 I recently copied the pool to a new hard disk following the
 Copying the pool instructions from the main documentation.  The
 documentation says to copy the 'cpool', 'log', and 'conf'
 directories using any technique and the 'pc' directory using
 BackupPC_tarPCCopy; however, there is no mention of what to do
 with the 'pool' directory.  I thought it might be created
 automatically when the nightly cleanup runs, but three days later
 and still no 'pool' directory.

 Is this an oversight in the documentation or is the 'pool'
 directory not needed?  I am using BackupPC 3.1.0.

 Unless you have compression turned off, the pool directory should be
 totally empty.

 If you have compression turned off, the cpool directory should be
 totally empty.

 Whichever one is empty can be ignored.

So it is.  I reconnected and mounted the old drive and the pool directory is 
indeed empty.  Thanks for your reply.
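
For anyone else who wants to double-check which of the two directories their 
installation actually uses, something along these lines makes it obvious 
(paths assume a Debian-style layout):

# CompressLevel > 0 means backups are linked into cpool/; 0 means pool/
grep CompressLevel /etc/backuppc/config.pl
du -sh /var/lib/backuppc/pool /var/lib/backuppc/cpool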


-- 
Chris Purves

What we observe is not nature itself, but nature exposed to our method of 
questioning. - Werner Heisenberg



[BackupPC-users] excludes for smb HOWTO

2010-09-22 Thread Chris Purves
I had a lot of trouble trying to exclude certain files and directories from a 
backup that uses smb as the transfer method.  I was unable to find solutions on 
this mailing list or the samba mailing list, so I did a bunch of testing with 
smbclient.  I found that for smbclient to match properly it is necessary to use 
'\' for directory separators (not '/'), that the last separator must be a double 
backslash ('\\'), and that in the config file each backslash must in turn be 
escaped with another backslash (hence the quadruple backslashes in places).

An example:

$Conf{BackupFilesExclude} = {
  '*' => [
    'Application Data',
    '\\Documents\\\\My Music',
    '\\Downloads\\big_files\\\\debian_install_dvd.iso',
    'ntuser.dat.LOG1',
    '*.lock',
    '*\\Thumbs.db',
    '*\\.*'
  ]
};

In the above example, the 'Application Data' directory is excluded.  To exclude 
'\Documents\My Music', the quadruple backslash is placed before 'My Music' and 
not before 'Documents'.  Next, the file 'debian_install_dvd.iso' in the 
directory '\Downloads\big_files' is excluded; note again that the last directory 
separator has a quadruple backslash.  The next line excludes the file 
'ntuser.dat.LOG1' in the root directory.  The line after that excludes any .lock 
file; note that the asterisk at the beginning matches not only the first part of 
the filename but also the directory tree.  When matching files it is necessary 
to match the directories as well, which is why the next line excludes the 
'Thumbs.db' file found in any directory.  The last line excludes any file or 
directory whose name begins with '.'.

Note that if you use the BackupPC GUI, the backslashes do not need to be 
escaped.  You can use single and double backslashes instead of double and 
quadruple.
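
If you want to test patterns without waiting for a full BackupPC run, you can 
drive smbclient's tar mode by hand, roughly the way BackupPC does.  This is 
only a sketch; the host, share, user and patterns below are placeholders:

# stream a tar of the share to /dev/null, excluding (X) the listed patterns
smbclient //winhost/cdrive -U backup -E -d 1 -c 'tarmode full' \
    -TcX - 'Application Data' '*\Thumbs.db' > /dev/null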

This is how it worked for me using BackupPC 3.1.0 with samba 3.2.5 on Debian 
Lenny backing up a Windows Vista computer.  Since I can't imagine that this is 
how it's supposed to work, you may or may not get the same results with 
different versions of samba or Windows.  But hopefully others are able to make 
use of my trials and heartache.

-- 
Chris Purves

I can calculate the motion of heavenly bodies, but not the madness of people. 
- Sir Isaac Newton



Re: [BackupPC-users] timeout with rsyncd

2009-07-24 Thread Chris Purves
On July 23, 2009 03:39:08 pm Chris Purves wrote:
 On July 23, 2009 09:47:46 am Chris Purves wrote:
  Hello,
  
  I am having a problem on one of the shares for an rsyncd backup.  Normally 
  I back up four different rsyncd shares.  Three go without problems, but the 
  fourth gets a timeout error.  Here is the error from the rsyncd log:
  
  2009/07/22 10:29:52 [370] name lookup failed for 10.221.77.1: Name or 
  service not known
  2009/07/22 10:29:52 [370] connect from UNKNOWN (10.221.77.1)
  2009/07/22 16:29:52 [370] rsync on . from bac...@unknown (10.221.77.1)
  2009/07/22 16:29:52 [370] building file list
  2009/07/22 17:29:58 [370] rsync error: timeout in data send/receive (code 
  30) at io.c(239) [sender=3.0.3]
  
  The timeout of one hour corresponds to the timeout value set in the 
  rsyncd.conf file.  I can run rsync from the command line with the same 
  arguments used by BackupPC and it takes about one minute to complete 
  without any errors.
  
  Below is the XferLOG from backuppc for the same instance.
  
  incr backup started back to 2009-05-29 21:00:03 (backup #226) for directory 
  www
  Connected to vesuvius:873, remote version 30
  Negotiated protocol version 28
  Connected to module www
  Sending args: --server --sender --numeric-ids --perms --owner --group -D 
  --links --hard-links --times --block-size=2048 --recursive 
  --checksum-seed=32761 . .
  Checksum caching enabled (checksumSeed = 32761)
  Sent exclude: /data
  Xfer PIDs are now 20138
create d 775   0/200014096 .
... removed 335 lines ...
create d 755 33/334096 twiki/working/work_areas/TreePlugin
  Remote[1]: rsync error: timeout in data send/receive (code 30) at io.c(239) 
  [sender=3.0.3]
  Read EOF: 
  Tried again: got 0 bytes
  finish: removing in-process file .
  Child is aborting
  Done: 0 files, 0 bytes
  
 
 Adding -vvv to the rsync command shows that it hangs after the file transfer 
 has finished, but it doesn't exit cleanly.  From my rsyncd log:
 
 ...lots of lines removed...
 2009/07/23 21:31:30 [4545] sending file_sum
 2009/07/23 21:31:30 [4545] false_alarms=0 hash_hits=0 matches=0
 2009/07/23 21:31:30 [4545] sender finished 
 twiki/working/work_areas/TreePlugin/WIP_Doc.tree
 2009/07/23 21:31:30 [4545] send_files phase=1
 2009/07/23 21:31:35 [4545] send files finished
 2009/07/23 21:31:35 [4545] total: matches=344  hash_hits=381  false_alarms=0 
 data=117624
 2009/07/23 21:31:35 [4545] sent 547458 bytes  received 3143 bytes  total size 
 49509520
 2009/07/23 21:31:35 [4545] _exit_cleanup(code=0, file=main.c, line=749): 
 about to call exit(0)
 

I was able to solve the problem by removing a ganttproject log file.  There was 
nothing strange about the file other than that it was one of the few that had 
changed since the backups stopped working.  I can provide more information if 
anyone is interested.

-- 
Chris Purves

No one can serve two masters. Either he will hate the one and love the other, 
or he will be devoted to the one and despise the other. You cannot serve both 
God and Money. - Jesus of Nazareth




[BackupPC-users] timeout with rsyncd

2009-07-23 Thread Chris Purves
Hello,

I am having a problem on one of the shares for an rsyncd backup.  Normally I 
back up four different rsyncd shares.  Three go without problems, but the 
fourth gets a timeout error.  Here is the error from the rsyncd log:

2009/07/22 10:29:52 [370] name lookup failed for 10.221.77.1: Name or service 
not known
2009/07/22 10:29:52 [370] connect from UNKNOWN (10.221.77.1)
2009/07/22 16:29:52 [370] rsync on . from bac...@unknown (10.221.77.1)
2009/07/22 16:29:52 [370] building file list
2009/07/22 17:29:58 [370] rsync error: timeout in data send/receive (code 30) 
at io.c(239) [sender=3.0.3]

The timeout of one hour corresponds to the timeout value set in the rsyncd.conf 
file.  I can run rsync from the command line with the same arguments used by 
BackupPC and it takes about one minute to complete without any errors.
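
For reference, the command-line test I used was roughly the following, 
reconstructed from the args shown in the XferLOG below (the destination is 
just a scratch directory):

rsync --numeric-ids --perms --owner --group -D --links --hard-links \
    --times --block-size=2048 --recursive --exclude=/data \
    rsync://vesuvius/www /tmp/www-test/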

Below is the XferLOG from backuppc for the same instance.

incr backup started back to 2009-05-29 21:00:03 (backup #226) for directory www
Connected to vesuvius:873, remote version 30
Negotiated protocol version 28
Connected to module www
Sending args: --server --sender --numeric-ids --perms --owner --group -D 
--links --hard-links --times --block-size=2048 --recursive 
--checksum-seed=32761 . .
Checksum caching enabled (checksumSeed = 32761)
Sent exclude: /data
Xfer PIDs are now 20138
  create d 775   0/200014096 .
  ... removed 335 lines ...
  create d 755 33/334096 twiki/working/work_areas/TreePlugin
Remote[1]: rsync error: timeout in data send/receive (code 30) at io.c(239) 
[sender=3.0.3]
Read EOF: 
Tried again: got 0 bytes
finish: removing in-process file .
Child is aborting
Done: 0 files, 0 bytes




-- 
Chris Purves

Some songs are just like tattoos for your brain...you hear them and they're 
affixed to you. - Carlos Santana



Re: [BackupPC-users] timeout with rsyncd

2009-07-23 Thread Chris Purves
On July 23, 2009 09:47:46 am Chris Purves wrote:
 Hello,
 
 I am having a problem on one of the shares for an rsyncd backup.  Normally I 
 back up four different rsyncd shares.  Three go without problems, but the 
 fourth gets a timeout error.  Here is the error from the rsyncd log:
 
 2009/07/22 10:29:52 [370] name lookup failed for 10.221.77.1: Name or service 
 not known
 2009/07/22 10:29:52 [370] connect from UNKNOWN (10.221.77.1)
 2009/07/22 16:29:52 [370] rsync on . from bac...@unknown (10.221.77.1)
 2009/07/22 16:29:52 [370] building file list
 2009/07/22 17:29:58 [370] rsync error: timeout in data send/receive (code 30) 
 at io.c(239) [sender=3.0.3]
 
 The timeout of one hour corresponds to the timeout value set in the rsyncd.conf 
 file.  I can run rsync from the command line with the same arguments used by 
 BackupPC and it takes about one minute to complete without any errors.
 
 Below is the XferLOG from backuppc for the same instance.
 
 incr backup started back to 2009-05-29 21:00:03 (backup #226) for directory 
 www
 Connected to vesuvius:873, remote version 30
 Negotiated protocol version 28
 Connected to module www
 Sending args: --server --sender --numeric-ids --perms --owner --group -D 
 --links --hard-links --times --block-size=2048 --recursive 
 --checksum-seed=32761 . .
 Checksum caching enabled (checksumSeed = 32761)
 Sent exclude: /data
 Xfer PIDs are now 20138
   create d 775   0/200014096 .
   ... removed 335 lines ...
   create d 755 33/334096 twiki/working/work_areas/TreePlugin
 Remote[1]: rsync error: timeout in data send/receive (code 30) at io.c(239) 
 [sender=3.0.3]
 Read EOF: 
 Tried again: got 0 bytes
 finish: removing in-process file .
 Child is aborting
 Done: 0 files, 0 bytes
 

Adding -vvv to the rsync command shows that it hangs after the file transfer 
has finished, but it doesn't exit cleanly.  From my rsyncd log:

...lots of lines removed...
2009/07/23 21:31:30 [4545] sending file_sum
2009/07/23 21:31:30 [4545] false_alarms=0 hash_hits=0 matches=0
2009/07/23 21:31:30 [4545] sender finished 
twiki/working/work_areas/TreePlugin/WIP_Doc.tree
2009/07/23 21:31:30 [4545] send_files phase=1
2009/07/23 21:31:35 [4545] send files finished
2009/07/23 21:31:35 [4545] total: matches=344  hash_hits=381  false_alarms=0 
data=117624
2009/07/23 21:31:35 [4545] sent 547458 bytes  received 3143 bytes  total size 
49509520
2009/07/23 21:31:35 [4545] _exit_cleanup(code=0, file=main.c, line=749): about 
to call exit(0)



-- 
Chris Purves

Personally, I liked the university. They gave us money and facilities, we 
didn't have to produce anything! You've never been out of college! You don't 
know what it's like out there! I've worked in the private sector. They expect 
results. - Ray Stantz



Re: [BackupPC-users] rsyncd doesn't complete

2006-12-13 Thread Chris Purves
Craig Barratt wrote:
 Chris writes:
 
 I am running backuppc 2.1.1 on debian.  I have an rsync server running on 
 a Windows XP machine (cwRsync).  The two machines are connected via a 
 VPN (L2TP over IPSEC).

 I can use rsync directly with no problems.  The transfer takes about 
 four minutes.  But when backuppc runs, the transfer freezes up after 
 what looks like the last file has been transferred.  After two hours of 
 inactivity, backuppc gives up and closes the connection without leaving 
 even a partial backup.

 A significant difference I noticed between the two cases is that the 
 rsyncd log says "rsync on test" when calling rsync directly, but 
 "rsync on ." when using backuppc.  'test' is the correct name of the 
 rsyncd share.  The only change I made to config.pl concerning rsync was 
 to add --compress as an rsync option.  Any help solving this problem is 
 appreciated.  Log/config file outputs are below.
 
 File::RsyncP doesn't support the --compress option, so you should
 remove it.  Try again and if it still fails please email the XferLOG
 file.
 

That fixed it.  I hadn't realised that BackupPC uses File::RsyncP and 
not the rsync client.
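
For anyone who finds this thread later: the fix was simply to take --compress 
back out of the rsync argument list, so the effective arguments are the stock 
ones again.  Roughly (this mirrors the command I was testing with, not 
necessarily your defaults):

$Conf{RsyncArgs} = [
    '--numeric-ids', '--perms', '--owner', '--group',
    '--devices', '--links', '--times',
    '--block-size=2048', '--recursive',
    # no '--compress': File::RsyncP does not support it
];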

Thanks, Craig.

-- 
Chris




[BackupPC-users] rsyncd doesn't complete

2006-12-12 Thread Chris Purves
Hello,

I am running backuppc 2.1.1 on debian.  I have an rsync server running on 
a Windows XP machine (cwRsync).  The two machines are connected via a 
VPN (L2TP over IPSEC).

I can use rsync directly with no problems.  The transfer takes about 
four minutes.  But when backuppc runs, the transfer freezes up after 
what looks like the last file has been transferred.  After two hours of 
inactivity, backuppc gives up and closes the connection without leaving 
even a partial backup.

A significant difference I noticed between the two cases is that the 
rsyncd log says "rsync on test" when calling rsync directly, but 
"rsync on ." when using backuppc.  'test' is the correct name of the 
rsyncd share.  The only change I made to config.pl concerning rsync was 
to add --compress as an rsync option.  Any help solving this problem is 
appreciated.  Log/config file outputs are below.

Here is the rsyncd log from a straight rsync backup running the command 
'rsync --numeric-ids --perms --owner --group --devices --links --times 
--block-size=2048 --recursive --compress rsync://aims-03/test ~/temp/'

2006/12/12 16:08:02 [1276] 192.168.114.1 is not a known address for 
AURORA: spoofed address?
2006/12/12 16:08:02 [1276] connect from UNKNOWN (192.168.114.1)
2006/12/12 16:08:09 [1276] rsync on test from unknown (192.168.114.1)
2006/12/12 16:08:09 [1276] building file list
2006/12/12 16:08:10 [1276] send unknown [192.168.114.1] test () 
downloads/Production Protocol Nov-15-06.mpp 140288
...remove 1543 lines...
2006/12/12 16:11:51 [1276] send unknown [192.168.114.1] test () 
twiki/tools/upgrade_emails.pl 1768
2006/12/12 16:11:52 [1276] sent 14518417 bytes  received 34386 bytes 
total size 23073568



Here is the rsyncd log when backuppc attempts to perform a backup:

2006/12/12 13:59:59 [944] 192.168.114.1 is not a known address for 
AURORA: spoofed address?
2006/12/12 13:59:59 [944] connect from UNKNOWN (192.168.114.1)
2006/12/12 14:00:06 [944] rsync on . from unknown (192.168.114.1)
2006/12/12 14:00:06 [944] building file list
2006/12/12 14:00:07 [944] send unknown [192.168.114.1] test () 
downloads/Production Protocol Nov-15-06.mpp 140288
...remove 1543 lines...
2006/12/12 14:03:35 [944] send unknown [192.168.114.1] test () 
twiki/upgrade-4.0.5.tgz 286522

then nothing until timeout


My host config file:

$Conf{FullPeriod} = 6.97;
$Conf{IncrPeriod} = 0.97;

$Conf{FullKeepCnt} = 3;
$Conf{FullKeepCntMin} = 1;
$Conf{FullAgeMax} = 90;

$Conf{IncrKeepCnt} = 7;
$Conf{IncrKeepCntMin} = 1;
$Conf{IncrAgeMax} = 30;

$Conf{PartialAgeMax} = 3;

$Conf{RestoreInfoKeepCnt} = 10;
$Conf{ArchiveInfoKeepCnt} = 10;


$Conf{BlackoutBadPingLimit} = 3;
$Conf{BlackoutGoodCnt}  = 7;

$Conf{BlackoutPeriods} = [
 {
 hourBegin =>  7.0,
 hourEnd   =>  1.5,
 weekDays  => [0, 1, 2, 3, 4, 5, 6],
 },
];

$Conf{XferMethod} = 'rsyncd';
$Conf{RsyncShareName} = 'test';
$Conf{RsyncdAuthRequired} = 0;

$Conf{EMailAdminUserName} = 'chris';


-- 
Chris




Re: [BackupPC-users] smb scheduled backup fails, but manual works

2006-12-07 Thread Chris Purves
Travis Fraser wrote:
 On Wed, 2006-12-06 at 17:11 -0700, Chris Purves wrote:

 I have a problem where a scheduled smb backup does not run.  The home 
 page for that machine shows 'Pings to king-graham have failed 109 
 consecutive times.'

 However, I can ping the machine from the command line and I can manually 
 start an incremental backup after which the home page shows 'Pings to 
 king-graham have succeeded 2 consecutive times.'

 Is this a laptop where it might go into suspend or leave the network?

It's a machine that I dual-boot Windows/Linux.  When I boot into Linux the 
hostname changes, so it does effectively leave the network.

-- 
Chris




Re: [BackupPC-users] smb scheduled backup fails, but manual works

2006-12-07 Thread Chris Purves
Tristan Stahnke wrote:
 
 On 12/6/06, Chris Purves [EMAIL PROTECTED] wrote:

 I have a problem where a scheduled smb backup does not run.  The home
 page for that machine shows 'Pings to king-graham have failed 109
 consecutive times.'

 However, I can ping the machine from the command line and I can manually
 start an incremental backup after which the home page shows 'Pings to
 king-graham have succeeded 2 consecutive times.'

 I had a similar problem on my home network when I set the DHCP flag in
 the conf/hosts file to 1.  Setting it to 0 seemed to fix that problem.
 Since I don't have a DNS server on my home network I'm guessing it
 wasn't able to properly query the router for DHCP information, but it
 was still able to communicate to the client when I manually started a
 backup.  If you manually specify the host in /etc/hosts and can ping
 it from the commandline, I believe setting the DHCP flag to 0 should
 fix the problem.  Granted if the IP address of king-graham changes
 perhaps something else is amiss. Good luck.
 
Okay, I set the DHCP flag to 0, and that seems to have fixed the 
problem.  Thanks for the help.

After checking the docs for the DHCP flag, I found that you should only 
set the flag to 1 if nmblookup (or gethostbyname) fails.  In this case 
you need to specify a range of IP addresses and backuppc will query 
those specific addresses to request the hostname from those machines. 
Now that I understand this better, it's clear that I should have set the 
flag to 0.  Thanks Tristan for pointing me in the right direction.
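
For reference, the relevant line in the hosts file now looks like this (the 
second column is the DHCP flag; host and user are of course mine):

# host          dhcp    user     moreUsers
king-graham     0       chris

If the flag really did need to be 1 (i.e. nmblookup is the only way to find 
the machine), you would also give BackupPC a range of addresses to probe, 
along the lines of:

$Conf{DHCPAddressRanges} = [
    { ipAddrBase => '192.168.1', first => 20, last => 60 },
];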

-- 
Chris




[BackupPC-users] smb scheduled backup fails, but manual works

2006-12-06 Thread Chris Purves
Hi,

I have a problem where a scheduled smb backup does not run.  The home 
page for that machine shows 'Pings to king-graham have failed 109 
consecutive times.'

However, I can ping the machine from the command line and I can manually 
start an incremental backup after which the home page shows 'Pings to 
king-graham have succeeded 2 consecutive times.'

Any help is appreciated.  Thanks.

-- 
Chris

