[BackupPC-users] BackupPC misconfiguration Rsync network usage

2009-01-07 Thread William McKee
Hi all,

This evening I tracked down a misconfiguration of BackupPC (v2.1.2) that
was causing a bandwidth spike. I had set IncrPeriod to 0.00, thinking that
no incrementals would get run. Boy, was that wrong! Instead, BackupPC ran
incrementals one after another during off-peak hours. That spiked my
bandwidth with my hosting provider and sent me searching for the culprit.

Because of the holidays, I had forgotten about the IncrPeriod edit, so I
wasn't sure what was causing the spike and went digging through my logs
to try to identify it.

I use VMware on a co-lo server with 3 guests that all get backed up by
BackupPC. I could see that the host was transmitting massive amounts of
data (130 GB), which appeared to be coming from one of the three guests.
However, I couldn't figure out which guest was pushing out the excessive
data.

I went through the usual log files without much luck. I then checked the
ifconfig output inside the guests, which all looked normal. Once I finally
looked at the BackupPC logs for the guest server, I realized what was
happening and corrected the issue by removing my bad entry. I also added
--bwlimit to the RsyncArgs setting in config.pl to keep from maxing out my
bandwidth.
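
For anyone hitting the same thing, here is roughly what the two changes look
like in config.pl. The 1000 KB/s limit is just an example value, and 0.97
days is, if I remember right, the shipped default for IncrPeriod; the rsync
arguments listed are the ones BackupPC was already sending:

$Conf{IncrPeriod} = 0.97;    # a sane period again; 0.00 made BackupPC run
                             # incrementals back to back all night
$Conf{RsyncArgs} = [
    '--numeric-ids', '--perms', '--owner', '--group',
    '--links', '--times', '--block-size=2048', '--recursive',
    # ... keep whatever else was already in the default list ...
    '--bwlimit=1000',        # cap rsync at roughly 1000 KB/s (example value)
];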

However, this all took longer than I'd have liked. I'm stumped as to why
the data transmitted off of the guest did not show up in the ifconfig
output. I know from the logs that the guest is sending data via rsync, yet
it's not showing up in the ifconfig stats (see below). Is this due to the
way that rsync works? I was sending about 450 MB of data every 1-2 hrs
from 8pm to 6am (I can send the logs if that would be of any help). I've
included below the ifconfig outputs for the host (massive TX bytes) and
the guest (normal TX bytes). I would have expected a corresponding amount
of TX bytes for the guest. Thanks for any insight.


Cheers,
William



Output of ifconfig on host (atlas)
eth0  Link encap:Ethernet  HWaddr 00:17:A4:3F:C3:B5  
  inet addr:64.132.42.194  Bcast:64.132.42.207  Mask:255.255.255.240
  inet6 addr: fe80::217:a4ff:fe3f:c3b5/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:67509721 errors:0 dropped:0 overruns:0 frame:3
  TX packets:102403892 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:6915124969 (6.4 GiB)  TX bytes:139582421865 (129.9 GiB)
  Interrupt:16 


Output of ifconfig on guest (wg75)
eth0  Link encap:Ethernet  HWaddr 00:0c:29:2a:5f:cd  
  inet addr:192.168.233.25  Bcast:192.168.233.255  Mask:255.255.255.0
  inet6 addr: fe80::20c:29ff:fe2a:5fcd/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:26307738 errors:0 dropped:0 overruns:0 frame:0
  TX packets:42720081 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:2111065627 (1.9 GB)  TX bytes:2438535854 (2.2 GB)
  Interrupt:17 Base address:0x1400 


-- 
Knowmad Technologies - Open Source Web Solutions
W: http://www.knowmad.com | E: will...@knowmad.com | P: 704.343.9330

--
Check out the new SourceForge.net Marketplace.
It is the best place to buy or sell services for
just about anything Open Source.
http://p.sf.net/sfu/Xq1LFB
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC misconfiguration Rsync network usage

2009-01-07 Thread Vasan
William,

Is the guest machine multi-homed, i.e. does it have multiple network
interface cards? Linux binds an IP address to the entire OS rather than to
a specific interface, unlike some of the other UNIX flavors that bind it
only to the interface. If it really is multi-homed, you might get a clue by
looking at the ifconfig output of all eth? interfaces. There might be
another eth? interface showing the corresponding increase in packets that
you are expecting...

HTH

Vasan

On Wed, Jan 7, 2009 at 12:56 PM, William McKee will...@knowmad.com wrote:

 Hi all,

 This evening I tracked down a configuration error that was causing a
 bandwidth spike due to a misconfiguration of BackupPC (v2.1.2). I set
 the IncrPeriod to 0.00 thinking that no incrementals would get run. Boy
 was that wrong! Instead, it ran incrementals one after another during
 off-peak hours. That spiked my bandwidth with my hosting provider which
 sent me searching for the culprit.

 Because of the holidays, I had forgotten about the edit of the
 IncrPeriod so wasn't sure what was causing the spike. Thus I went
 digging through my logs and such to try to identify the culprit.

 I use VMware on a co-lo server which has 3 guestts that all get backed
 up by BackupPC. I could identify that the host was transmitting massive
 amounts of data (130Gb) which appeared to be coming from one of the
 three guests. However, I couldn't figure out which guest was pushing out
 the excessive data.

 I went through the usual log files without much luck. I then checked the
 ifconfig output which all looked normal inside the hosts. Once I finally
 looked at the BackupPC logs for the guest server, I realized what was
 happening and corrected the issue by removing my bad entry. I also added
 --bwlimit to the RsyncArgs setting in config.pl to control maxing out my
 bandwidth.

 However, this all took longer than I'd have liked. I'm stumped as to why
 the data transmitted off of the guest did not show up in the ifconfig
 output. I know that the guest is sending data via rsync based on the
 logs. However it's not showing up in the ifconfig stats (see below). Is
 this due to the way that rsync works? I was sending about 450Mb of data
 every 1-2 hrs from 8pm - 6am (I can send the logs if that would be of
 any help). I've included below the ifconfig outputs for the host
 (massive TX bytes) and the guest (normal TX bytes). I would have
 expected a corresponding amount of TX bytes for the guest. Thanks for
 any insight.


 Cheers,
 William



 Output of ifconfig on host (atlas)
 eth0  Link encap:Ethernet  HWaddr 00:17:A4:3F:C3:B5
  inet addr:64.132.42.194  Bcast:64.132.42.207  Mask:255.255.255.240
  inet6 addr: fe80::217:a4ff:fe3f:c3b5/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:67509721 errors:0 dropped:0 overruns:0 frame:3
  TX packets:102403892 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:6915124969 (6.4 GiB)  TX bytes:139582421865 (129.9 GiB)
  Interrupt:16


 Output of ifconfig on guest (wg75)
 eth0  Link encap:Ethernet  HWaddr 00:0c:29:2a:5f:cd
  inet addr:192.168.233.25  Bcast:192.168.233.255
  Mask:255.255.255.0
  inet6 addr: fe80::20c:29ff:fe2a:5fcd/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:26307738 errors:0 dropped:0 overruns:0 frame:0
  TX packets:42720081 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:2111065627 (1.9 GB)  TX bytes:2438535854 (2.2 GB)
  Interrupt:17 Base address:0x1400


 --
 Knowmad Technologies - Open Source Web Solutions
 W: http://www.knowmad.com | E: will...@knowmad.com | P: 704.343.9330




[BackupPC-users] Remote backups of a win2003 server keep failing after a certain amount of time.

2009-01-07 Thread Koen Linders
Remote backups of a win2003 server keep failing after a certain amount of
time (almost every time about 20h in, with 11 GB of data already done).

Backup method: Rsyncd
On the client: rsync via DeltaCopy (a great piece of easy-to-configure
software, thanks to someone on this list; it works perfectly for Win2K,
WinXP, Vista, and Win2003)

Slow upload, roughly 512 KB/s
25 GB of data on a separate partition (not the Windows one)

Another win2003 with similar data (but less, about 9 GB) doesn't have this
problem. Its first full needed 24h, at roughly 100 KB/s average.

=> Could it be that a specific file is too big, or has too long a name?
=> Could anyone point me to what to search for?

Greetings,
Koen Linders

Contents of file /var/lib/backuppc/pc/80.201.242.118/LOG.012009, modified
2009-01-07 09:44:29 
2009-01-01 20:00:00 full backup started for directory dDRIVE
2009-01-02 16:23:54 Aborting backup up after signal ALRM
2009-01-02 16:23:55 Got fatal error during xfer (aborted by signal=ALRM)
2009-01-02 16:23:55 Saved partial dump 0
2009-01-02 20:00:00 full backup started for directory dDRIVE
2009-01-03 16:24:01 Aborting backup up after signal ALRM
2009-01-03 16:24:02 Got fatal error during xfer (aborted by signal=ALRM)
2009-01-03 16:24:04 Saved partial dump 0
2009-01-03 17:00:00 full backup started for directory dDRIVE
2009-01-04 13:23:27 Aborting backup up after signal ALRM
2009-01-04 13:23:28 Got fatal error during xfer (aborted by signal=ALRM)
2009-01-04 13:23:28 Saved partial dump 0
2009-01-04 14:00:01 full backup started for directory dDRIVE
2009-01-05 10:26:18 Aborting backup up after signal ALRM
2009-01-05 10:26:20 Got fatal error during xfer (aborted by signal=ALRM)
2009-01-05 10:26:23 Saved partial dump 0
2009-01-05 15:19:04 full backup started for directory dDRIVE
2009-01-06 11:51:22 Aborting backup up after signal ALRM
2009-01-06 11:51:26 Got fatal error during xfer (aborted by signal=ALRM)
2009-01-06 11:51:29 Saved partial dump 0
2009-01-06 11:58:05 full backup started for directory dDRIVE
2009-01-07 09:44:26 Aborting backup up after signal ALRM
2009-01-07 09:44:28 Got fatal error during xfer (aborted by signal=ALRM)
2009-01-07 09:44:29 Saved partial dump 0


Contents of file /var/lib/backuppc/pc/80.201.242.118/XferLOG.0.z, modified
2009-01-07 09:44:28 (Extracting only Errors) 
full backup started for directory dDRIVE
Connected to 80.201.242.118:873, remote version 30
Negotiated protocol version 28
Connected to module dDRIVE
Sending args: --server --sender --numeric-ids --perms --owner --group -D
--links --hard-links --times --block-size=2048 --recursive --ignore-times .
.
Sent exclude: Thumbs.db
Sent exclude: IconCache.db
Sent exclude: Cache
Sent exclude: cache
Sent exclude: /Documents and Settings/*/Local Settings/Temporary Internet
Files
Sent exclude: /Documents and Settings/*/Local Settings/Temp
Sent exclude: /Documents and Settings/*/NTUSER.DAT
Sent exclude: /Documents and Settings/*/ntuser.dat.LOG
Sent exclude: /Documents and Settings/*/Local Settings/Application
Data/Microsoft/Windows/UsrClass.dat
Sent exclude: /Documents and Settings/*/Local Settings/Application
Data/Microsoft/Windows/UsrClass.dat.LOG
Sent exclude: /Documents and Settings/*/Local Settings/Application
Data/Mozilla/Firefox/Profiles/*/Cache
Sent exclude: /Documents and Settings/*/Local Settings/Application
Data/Mozilla/Firefox/Profiles/*/OfflineCache
Sent exclude: /Documents and Settings/*/Recent
Sent exclude: *.lock
Sent exclude: /WINDOWS
Sent exclude: /RECYCLER
Sent exclude: /MSOCache
Sent exclude: /System Volume Information
Sent exclude: /AUTOEXEC.BAT
Sent exclude: /BOOTSECT.BAK
Sent exclude: /CONFIG.SYS
Sent exclude: /hiberfil.sys
Sent exclude: /pagefile.sys
Sent exclude: /Program Files/F-Secure/common/policy.ipf
Sent exclude: NTUSER.DAT.LOG
Sent exclude: NTUSER.dat
Sent exclude: *.tmp
Sent exclude: /Profiles/VTB/*/NTUSER.DAT
Sent exclude: /Profiles/VTB/*/ntuser.dat.LOG
Sent exclude: /Profiles/VTB/*/Local Settings/Application
Data/Microsoft/Windows/UsrClass.dat
Sent exclude: /Profiles/VTB/*/Local Settings/Application
Data/Microsoft/Windows/UsrClass.dat.LOG
Sent exclude: /Profiles/VTB/*/Local Settings/Application
Data/Mozilla/Firefox/Profiles/*/Cache
Sent exclude: /Profiles/VTB/*/Local Settings/Application
Data/Mozilla/Firefox/Profiles/*/Cache
Sent exclude: /Profiles/VTB/*/Local Settings/Application
Data/Mozilla/Firefox/Profiles/*/OfflineCache
Sent exclude: /Profiles/VTB/*/Recent
Sent exclude: /Profiles/VTB/*/NetHood
Sent exclude: /Profiles/VTB/*/Onlangs geopend
Sent exclude: /Profiles/VTB/*/UserData
Sent exclude: /Public/Backup
Xfer PIDs are now 30058
[ skipped 25915 lines ]
Remote[2]: file has vanished: Shares/Audiologie/administratie HOTO en
FM/Facturatie/faktuuraanvraag FM en H.A/FM + hulpmiddelen/~$orbeeld
Veranneman facturatie FM en hulpmiddelen.doc (in dDRIVE)
[ skipped 83 lines ]
finish: removing in-process file Shares/Audiologie/backup/PC
MPI047/Backup.bkf
Child is aborting
Done: 22734 files, 11921915308 bytes
Got fatal error during xfer (aborted by signal=ALRM)

Re: [BackupPC-users] I received the error No files dumped for share

2009-01-07 Thread Omar Llorens Crespo Domínguez
Hi, I have the same problem, but not with a Windows XP machine. I am
trying to back up the same server where I have installed BackupPC.
My configuration is the following:


$Conf{FullKeepCnt} = [
  4,
  0,
  6
];
$Conf{IncrKeepCnt} = 28;
$Conf{TarShareName} = [
  '/etc'

];
$Conf{XferMethod} = 'tar';
$Conf{TarClientCmd} = ' env LC_ALL=C /usr/bin/sudo $tarPath -c -v -f - 
-C $shareName+'
. ' --totals';

$Conf{TarClientRestoreCmd} = ' env LC_ALL=C /usr/bin/sudo $tarPath -x -p 
--numeric-owner --same-owner'
   . ' -v -f - -C $shareName+';


But when I try to do a full backup I get the following error:

Executing DumpPreUserCmd: /var/lib/raa/scripts/precopy /etc
Exec of /var/lib/raa/scripts/precopy /etc failed
Running:  /usr/bin/sudo /bin/tar -c -v -f - -C /etc --totals .
full backup started for directory /etc
Xfer PIDs are now 32571,32570
Exec failed for  /usr/bin/sudo /bin/tar -c -v -f - -C /etc --totals .
tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 
filesTotal, 0 sizeTotal
Executing DumpPostUserCmd: /var/lib/raa/scripts/postcopy /etc
Exec of /var/lib/raa/scripts/postcopy /etc failed
Got fatal error during xfer (No files dumped for share /etc)
Backup aborted (No files dumped for share /etc)
Not saving this as a partial backup since it has fewer files than the prior one 
(got 0 and 0 files versus 0)




Thank you for your answer.

JPL TSOLUCIO S.L
www.tsolucio.com
902 886 938

Craig Barratt wrote:
 Sean writes:

   
 I have tried to do a full backup of a Windows XP PC. The backup is
 successful, although I get the error "No files dumped for share". What
 is wrong?
 

 The backup isn't successful (since no files were dumped for one (or more)
 shares).

 Please look at the XferLOG.bad file (which should be quite short) and
 if the answer isn't apparent, email the contents of the file (or at
 least the first few lines) to this thread.  You should also explain
 which XferMethod you are using and the corresponding Share and
 Include/Exclude settings.

 Craig



Re: [BackupPC-users] BackupPC misconfiguration Rsync network usage

2009-01-07 Thread Adam Goryachev

William McKee wrote:
 I use VMware on a co-lo server which has 3 guestts that all get backed
 up by BackupPC. I could identify that the host was transmitting massive
 amounts of data (130Gb) which appeared to be coming from one of the
 three guests. However, I couldn't figure out which guest was pushing out
 the excessive data.
 
 I went through the usual log files without much luck. I then checked the
 ifconfig output which all looked normal inside the hosts. Once I finally
 looked at the BackupPC logs for the guest server, I realized what was
 happening and corrected the issue by removing my bad entry. I also added
 --bwlimit to the RsyncArgs setting in config.pl to control maxing out my
 bandwidth.
 
 However, this all took longer than I'd have liked. I'm stumped as to why
 the data transmitted off of the guest did not show up in the ifconfig
 output. I know that the guest is sending data via rsync based on the
 logs. However it's not showing up in the ifconfig stats (see below). Is
 this due to the way that rsync works? I was sending about 450Mb of data
 every 1-2 hrs from 8pm - 6am (I can send the logs if that would be of
 any help). I've included below the ifconfig outputs for the host
 (massive TX bytes) and the guest (normal TX bytes). I would have
 expected a corresponding amount of TX bytes for the guest. Thanks for
 any insight.

I would suspect VMware has something to do with that. Try creating
traffic with any other tool, and it likely won't be counted in the way
you think it should be, either.

Another possibility is that the counters wrapped due to the amount of
data... so if they wrapped recently, the values will be very small,
even though a huge amount of data has been transmitted.

There is nothing special that rsync does to cause its bandwidth not to
be counted normally (AFAIK).

Regards,
Adam


Re: [BackupPC-users] Remote backups of a win2003 server keep failing after a certain amount of time.

2009-01-07 Thread Koen Linders
I forgot to add the Windows event log errors:

1) 
The description for Event ID ( 0 ) in Source ( rsyncd ) cannot be found. The
local computer may not have the necessary registry information or message
DLL files to display messages from a remote computer. You may be able to use
the /AUXSOURCE= flag to retrieve this description; see Help and Support for
details. The following information is part of the event: rsyncd: PID 644:
rsync: writefd_unbuffered failed to write 4092 bytes [sender]: Connection
reset by peer (104).

2)
The description for Event ID ( 0 ) in Source ( rsyncd ) cannot be found. The
local computer may not have the necessary registry information or message
DLL files to display messages from a remote computer. You may be able to use
the /AUXSOURCE= flag to retrieve this description; see Help and Support for
details. The following information is part of the event: rsyncd: PID 644:
rsync error: error in rsync protocol data stream (code 12) at
/home/lapo/packaging/rsync-3.0.4-1/src/rsync-3.0.4/io.c(1541)
[sender=3.0.4].

-----Original Message-----
From: Koen Linders [mailto:koen.lind...@koca.be]
Sent: 07 January 2009 10:17
To: BackupPC-users@lists.sourceforge.net
Subject: [BackupPC-users] Remote backups of a win2003 server keep failing
after a certain amount of time.

Remote backups of a win2003 server keep failing after a certain amount of
time (almost every time about 20h in, with 11 GB of data already done).

Backup method: Rsyncd
On the client: rsync via DeltaCopy (a great piece of easy-to-configure
software, thanks to someone on this list; it works perfectly for Win2K,
WinXP, Vista, and Win2003)

Slow upload, roughly 512 KB/s
25 GB of data on a separate partition (not the Windows one)

Another win2003 with similar data (but less, about 9 GB) doesn't have this
problem. Its first full needed 24h, at roughly 100 KB/s average.

=> Could it be that a specific file is too big, or has too long a name?
=> Could anyone point me to what to search for?

Greetings,
Koen Linders

Contents of file /var/lib/backuppc/pc/80.201.242.118/LOG.012009, modified
2009-01-07 09:44:29 
2009-01-01 20:00:00 full backup started for directory dDRIVE
2009-01-02 16:23:54 Aborting backup up after signal ALRM
2009-01-02 16:23:55 Got fatal error during xfer (aborted by signal=ALRM)
2009-01-02 16:23:55 Saved partial dump 0
2009-01-02 20:00:00 full backup started for directory dDRIVE
2009-01-03 16:24:01 Aborting backup up after signal ALRM
2009-01-03 16:24:02 Got fatal error during xfer (aborted by signal=ALRM)
2009-01-03 16:24:04 Saved partial dump 0
2009-01-03 17:00:00 full backup started for directory dDRIVE
2009-01-04 13:23:27 Aborting backup up after signal ALRM
2009-01-04 13:23:28 Got fatal error during xfer (aborted by signal=ALRM)
2009-01-04 13:23:28 Saved partial dump 0
2009-01-04 14:00:01 full backup started for directory dDRIVE
2009-01-05 10:26:18 Aborting backup up after signal ALRM
2009-01-05 10:26:20 Got fatal error during xfer (aborted by signal=ALRM)
2009-01-05 10:26:23 Saved partial dump 0
2009-01-05 15:19:04 full backup started for directory dDRIVE
2009-01-06 11:51:22 Aborting backup up after signal ALRM
2009-01-06 11:51:26 Got fatal error during xfer (aborted by signal=ALRM)
2009-01-06 11:51:29 Saved partial dump 0
2009-01-06 11:58:05 full backup started for directory dDRIVE
2009-01-07 09:44:26 Aborting backup up after signal ALRM
2009-01-07 09:44:28 Got fatal error during xfer (aborted by signal=ALRM)
2009-01-07 09:44:29 Saved partial dump 0


Contents of file /var/lib/backuppc/pc/80.201.242.118/XferLOG.0.z, modified
2009-01-07 09:44:28 (Extracting only Errors) 
full backup started for directory dDRIVE
Connected to 80.201.242.118:873, remote version 30
Negotiated protocol version 28
Connected to module dDRIVE
Sending args: --server --sender --numeric-ids --perms --owner --group -D
--links --hard-links --times --block-size=2048 --recursive --ignore-times .
.
Sent exclude: Thumbs.db
Sent exclude: IconCache.db
Sent exclude: Cache
Sent exclude: cache
Sent exclude: /Documents and Settings/*/Local Settings/Temporary Internet
Files
Sent exclude: /Documents and Settings/*/Local Settings/Temp
Sent exclude: /Documents and Settings/*/NTUSER.DAT
Sent exclude: /Documents and Settings/*/ntuser.dat.LOG
Sent exclude: /Documents and Settings/*/Local Settings/Application
Data/Microsoft/Windows/UsrClass.dat
Sent exclude: /Documents and Settings/*/Local Settings/Application
Data/Microsoft/Windows/UsrClass.dat.LOG
Sent exclude: /Documents and Settings/*/Local Settings/Application
Data/Mozilla/Firefox/Profiles/*/Cache
Sent exclude: /Documents and Settings/*/Local Settings/Application
Data/Mozilla/Firefox/Profiles/*/OfflineCache
Sent exclude: /Documents and Settings/*/Recent
Sent exclude: *.lock
Sent exclude: /WINDOWS
Sent exclude: /RECYCLER
Sent exclude: /MSOCache
Sent exclude: /System Volume Information
Sent exclude: /AUTOEXEC.BAT
Sent exclude: /BOOTSECT.BAK
Sent exclude: /CONFIG.SYS
Sent exclude: /hiberfil.sys
Sent exclude: /pagefile.sys
Sent exclude: /Program 

[BackupPC-users] Quotas with BackupPC

2009-01-07 Thread cedric briner
hello,

aahhaah, the world is not infinite, and neither are my resources
(hardware/brain) :(

Could we implement such a feature?

I'm thinking of putting quotas on my users, telling them how much data
they can back up. To do this, the backup would proceed like this:

  1 - first compute the disk usage (DU), honouring the exclude paths
  2 - if it is not bigger than the quota, start the backup.
  3 - else
  3.1 - send an email to the user asking them to exclude some folders or
some file extensions so that the data to be saved becomes smaller
  3.2 - the user launches a DU Java WebStart application (GUI) that helps
them easily calculate the size of what BackupPC lets them back up
  3.3 - when done, the DU application talks to BackupPC to let it know
about the new configuration and the new size of the data to be saved
  3.4 - BackupPC starts the backup


The Java application should do something like jDiskReport or JDU or ...
  - calculate the DU and display it
  - show the tree to back up
  - set/unset directories to exclude
  - display a small panel of file extensions to exclude (.jpg, .mp3 ...)
  - accept the excluded directories and file extensions as command line
options
  - report the data (excluded directories, excluded file extensions, disk
usage) back to the BackupPC server
  - run as a GUI when launched in step 3.2
  - run with no GUI when launched in step 1


What do you think about such an idea?

cEd

-- 

Cedric BRINER
Geneva - Switzerland



[BackupPC-users] Error: Wrong user: my userid is 48, instead of 150(backuppc)

2009-01-07 Thread Miguel A. Velasco
Hi all, recently I've installed BackupPC 3.1.0 on my CentOS 5.2 server.
When I access the BackupPC management web interface I see "Error: Wrong
user: my userid is 48, instead of 150(backuppc)", where userid 48 is the
apache user and 150 is backuppc.
Following the BackupPC documentation I've checked the following requirements:

1º.- Apache is not using mod_perl: [r...@centos ~]# httpd -l | egrep 
mod_perl
2º.- I think the server has installed Perl setuid emulation:
[r...@centos ~]# ls -ld /usr/bin/sperl5.8.8
-rws--x--x 1 root root 70888 Sep 17 10:40 /usr/bin/sperl5.8.8
[r...@centos ~]#  rpm -qa | grep -i perl-suidperl
perl-suidperl-5.8.8-15.el5_2.1
[r...@centos ~]# ls -ld /usr/bin/suidperl
-rwxr-xr-x 1 root root 14836 Sep 17 10:40 /usr/bin/suidperl
3º.- The BackupPC permissions are:
[r...@centos ~]# ll /usr/share/backuppc/
total 24K
drwxr-xr-x  4 backuppc backuppc 4.0K Nov 22 01:54 .
drwxr-xr-x 76 root root 4.0K Nov 22 01:54 ..
drwxr-xr-x  2 backuppc backuppc 4.0K Jan  6 11:37 cgi-bin
drwxr-xr-x  3 backuppc backuppc 4.0K Nov 22 01:54 html
-rw-r--r--  1 backuppc backuppc  107 Mar 16  2008 index.html

[r...@centos ~]# ll /usr/share/backuppc/cgi-bin/BackupPC_Admin
-rwsr-xr-x 1 backuppc apache 3.9K Mar 16  2008  
 
/usr/share/backuppc/cgi-bin/BackupPC_Admin

[r...@centos ~]# ll /etc/BackupPC/
total 108K
drwxr-x---  2 backuppc apache 4.0K Jan  6 11:38 .
drwxr-xr-x 83 root root   4.0K Jan  7 02:52 ..
-rw-r--r--  1 root root 20 Nov 22 01:55 apache.users
-rwxr-x---  1 backuppc apache  80K Nov 22 02:17 config.pl
-rwxr-x---  1 backuppc apache 2.2K Mar 16  2008 hosts
-rw-r--r--  1 apache   apache   23 Jan  6 11:18 .htaccess

What might I be doing wrong? Please, I would be very grateful if someone
has an answer for me.

Thanks very much for your time.
Miguel Velasco



Re: [BackupPC-users] Remote backups of a win2003 server keep failing after a certain amount of time.

2009-01-07 Thread Holger Parplies
Hi,

Koen Linders wrote on 2009-01-07 10:17:20 - [[BackupPC-users] Remote 
backups of a win2003 server keeps failing after a certain amount of time.]:
 Remote backups of a win2003 server keeps failing after a certain amount of
 time (almost every time about 20h later / data 11 GB already done).
 
 Backup method: Rsyncd
 [...]
 Contents of file /var/lib/backuppc/pc/80.201.242.118/LOG.012009, modified
 2009-01-07 09:44:29 
 2009-01-01 20:00:00 full backup started for directory dDRIVE
 2009-01-02 16:23:54 Aborting backup up after signal ALRM
 2009-01-02 16:23:55 Got fatal error during xfer (aborted by signal=ALRM)
 2009-01-02 16:23:55 Saved partial dump 0

signal ALRM is always caused by BackupPC aborting the backup after
$Conf{ClientTimeout} has passed without BackupPC detecting any progress. In
the case of rsync(d), this needs to account for the complete backup due to the
way it is implemented (for tar type backups, I believe it is only the duration
of the longest file transfer). The default value is 72000 seconds == 20 hours.

Raise $Conf{ClientTimeout} to a value that will allow your backup to complete.
A too high value has the drawback of not detecting stuck backups for that
amount of time. This is nothing to worry about for your first backup. Perhaps
just add a '0' and change the value back after the first backup has
successfully completed. Future rsync(d) backups should hopefully complete
significantly faster.
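
For example, something like this in config.pl (the exact number is up to
you; this just follows the "add a zero" idea):

$Conf{ClientTimeout} = 720000;   # ten times the 72000 second default
# ...and once the first full has completed, change it back:
# $Conf{ClientTimeout} = 72000;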

 2009-01-02 20:00:00 full backup started for directory dDRIVE
 2009-01-03 16:24:01 Aborting backup up after signal ALRM
 2009-01-03 16:24:02 Got fatal error during xfer (aborted by signal=ALRM)
 2009-01-03 16:24:04 Saved partial dump 0

I'm not sure why you repeatedly get a partial dump 0 though instead of the
transfer restarting from the point it was originally interrupted (using the
previous partial as reference). That would allow the backup to eventually
complete even without increasing ClientTimeout, but it does not seem to be
happening in your case.

Regards,
Holger



Re: [BackupPC-users] Remote backups of a win2003 server keep failing after a certain amount of time.

2009-01-07 Thread Koen Linders
Thanks for the reply.

I changed the value to 144000 for this client and will wait another day (or
two). 

Maybe it hangs on a specific file? I hope changing the value worked.

Anyway, thx :)
Koen Linders

-----Original Message-----
From: Holger Parplies [mailto:wb...@parplies.de]
Sent: 07 January 2009 12:35
To: Koen Linders
CC: BackupPC-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Remote backups of a win2003 server keep
failing after a certain amount of time.

Hi,

Koen Linders wrote on 2009-01-07 10:17:20 - [[BackupPC-users] Remote
backups of a win2003 server keeps failing after a certain amount of time.]:
 Remote backups of a win2003 server keeps failing after a certain amount of
 time (almost every time about 20h later / data 11 GB already done).
 
 Backup method: Rsyncd
 [...]
 Contents of file /var/lib/backuppc/pc/80.201.242.118/LOG.012009, modified
 2009-01-07 09:44:29 
 2009-01-01 20:00:00 full backup started for directory dDRIVE
 2009-01-02 16:23:54 Aborting backup up after signal ALRM
 2009-01-02 16:23:55 Got fatal error during xfer (aborted by signal=ALRM)
 2009-01-02 16:23:55 Saved partial dump 0

signal ALRM is always caused by BackupPC aborting the backup after
$Conf{ClientTimeout} has passed without BackupPC detecting any progress. In
the case of rsync(d), this needs to account for the complete backup due to
the way it is implemented (for tar type backups, I believe it is only the
duration of the longest file transfer). The default value is 72000 seconds
== 20 hours.

Raise $Conf{ClientTimeout} to a value that will allow your backup to
complete. A too high value has the drawback of not detecting stuck backups
for that amount of time. This is nothing to worry about for your first
backup. Perhaps just add a '0' and change the value back after the first
backup has successfully completed. Future rsync(d) backups should hopefully
complete significantly faster.

 2009-01-02 20:00:00 full backup started for directory dDRIVE
 2009-01-03 16:24:01 Aborting backup up after signal ALRM
 2009-01-03 16:24:02 Got fatal error during xfer (aborted by signal=ALRM)
 2009-01-03 16:24:04 Saved partial dump 0

I'm not sure why you repeatedly get a partial dump 0 though instead of the
transfer restarting from the point it was originally interrupted (using the
previous partial as reference). That would allow the backup to eventually
complete even without increasing ClientTimeout, but it does not seem to be
happening in your case.

Regards,
Holger




[BackupPC-users] backuppc quotas

2009-01-07 Thread cedric briner
hello,

aahhaah, the world is not infinite, and neither are my resources
(hardware/brain) :(

Could we implement such a feature?

I'm thinking of putting quotas on my users, telling them how much data
they can back up. To do this, the backup would proceed like this:

   1 - first compute the disk usage (DU), honouring the exclude paths
   2 - if it is not bigger than the quota, start the backup.
   3 - else
   3.1 - send an email to the user asking them to exclude some folders or
some file extensions so that the data to be saved becomes smaller
   3.2 - the user launches a DU Java WebStart application (GUI) that helps
them easily calculate the size of what BackupPC lets them back up
   3.3 - when done, the DU application talks to BackupPC to let it know
about the new configuration and the new size of the data to be saved
   3.4 - BackupPC starts the backup


The Java application should do something like jDiskReport or JDU or ...
   - calculate the DU and display it
   - show the tree to back up
   - set/unset directories to exclude
   - display a small panel of file extensions to exclude (.jpg, .mp3 ...)
   - accept the excluded directories and file extensions as command line
options
   - report the data (excluded directories, excluded file extensions, disk
usage) back to the BackupPC server
   - run as a GUI when launched in step 3.2
   - run with no GUI when launched in step 1


What do you think about such an idea?

cEd


-- 

Cedric BRINER
Geneva - Switzerland



Re: [BackupPC-users] Change target directory

2009-01-07 Thread Max Hetrick
Renke Brausse wrote:
 the directory is hard coded, you can only change it at compile time.
 
 The easiest solution is to bind mount /var/lib/backuppc to a directory
 of your choice.

If you've installed with RPMs, say on CentOS or RHEL, then it's not hard 
coded. Or if you've installed it with Debian packages, you can change 
the path.

From the web GUI:
Edit Config -> Server -> Install Path -> TopDir

From the config.pl file:

$Conf{TopDir} = '/path/to/location';

Only thing I've heard is there is a bug with the RHEL/CentOS package 
where the pool directory remains in /var/lib/backuppc if you've 
installed from the RPMs.

Regards,
Max



Re: [BackupPC-users] backuppc quotas

2009-01-07 Thread Andrew Libby


cedric briner wrote:
 hello,
 
 aahhaah the world is not infinite and my resources (hardware/brain)
 neither  :(
 
 Can we implement such feature
 
 I'm thinking to put quota on my user, telling them how much data they
 can backup. To do this, the backup will proceed like:
 
1 - first do a disk usage (DU) with the exclude path
2 - if not bigger than the quota, start the backup.
3 - else
3.1 - send an email to the person and ask it to remove some folder or
 some file extension to make his saving data smaller
3.2  - the user launch a DU java WebStart application (GUI) to help
 him to easily calculate the size of what backuppc lets him to backup
3.3 - when done, the DU application talk with backuppc to let know it
 about the new configuration and the new size of the data to be saved
3.4 - backuppc start the backup
 
 
 The java application should do something like jDiskReport or JDU or ...
- calculate the DU and display it
- able to see the tree to backup
- set/unset directories to exclude from.
- display a small panel of file extensions to remove (.jpg, .mp3 ...)
- able to provide as line command options the exclude directories and
 the file extensions directories)
- talk back to the backuppc server the data (exclude directories,
 exclude file extensions, disk usage)
- launch the application as a GUI when launch in 3.2
- launch the application with no GUI when launch in 1
 
 
 What do you think about such idea.
 
 cEd
 
 

Hi Cedric,

Unless I'm missing something, why wouldn't you implement
quotas on the users data before backups?  Most systems have
this capability already.  It'd be much simpler than trying
get users to prioritize which things they want backed up.

Andy


-- 

===
xforty technologies
Andrew Libby
ali...@xforty.com
www.xforty.com
===




Re: [BackupPC-users] Change target directory

2009-01-07 Thread Renke Brausse
  the directory is hard coded, you can only change it at compile time.
  
  The easiest solution is to bind mount /var/lib/backuppc to a directory
  of your choice.

 Isn't this set in config.pl file using $Conf{TopDir}.

good point - maybe I'm just outdated and stuck to 2.x :)




Re: [BackupPC-users] how to examine progress of backup

2009-01-07 Thread Rob Owens
Les Mikesell wrote:
 Matthias Meyer wrote:
 I use rsyncd to backup both, windows as well as linux clients.
 Is it possible to examine or calculate the progress of an actual running
 backup?
 
 No, gnu tar has a way to get an estimate of the size of an incremental 
 run that amanda uses to help compute what will fit on a tape, but rsync 
 doesn't have an equivalent.
 
 One Idea would be to calculate the duration of the last backups and assume
 the actual backup will have the same performance. But this can be wrong if
 some new (big) files had been added.
 Another Idea is to examine the filespace of the share and calculate the
 estimated duration based on the last transfer rates. But this can be wrong
 if some (big) files are in the exclusion list.

 Any Ideas?
 
 I don't think it is even possible to guess how much rsync will transfer 
 on a changed file without running through the whole comparison.
 
I think if you use the --dry-run option in rsync it may tell you how
much data would have been transferred.

-Rob








Re: [BackupPC-users] Archive full backups?

2009-01-07 Thread Tino Schwarze
On Sun, Jan 04, 2009 at 01:38:19PM +0100, Bernhard Schneck wrote:

 I've been using BackupPC (3.0.0) on Ubuntu (8.x) for a while
 and am quite happy with it ... thanks a lot for the effort
 to all BackupPC developers and contributors!
 
 I've started to look at the Archive functions.
 
 What I want to achieve is to do regular full and incremental
 backups, while archiving the most recent full backups to
 different media regularly (say once per week).
 
 As I understand, Archive will transfer the most recent backup
 to the archive storage location ... even if this is an
 incremental and the corresponding full backup has not been
 archived (which makes the incremental more or less unusable
 for disaster recovery purposes)

No, the archive you get from a full will be identical to the archive you
get from an incremental. The terms "full backup" and "incremental backup"
are only of interest during the data transfer phase from the client to the
BackupPC server. After that, all backups are treated (almost) the same,
with gaps in incrementals being automatically filled from the most recent
incremental or full.

I'm using the archive feature with a custom script to get archives on
tape - the script checks which hosts to archive (based on last archive
time), then creates the archive. When the script is finished (it's got
an enforced timeout), Bacula picks up the archives and writes them to
tape.

HTH,

Tino.

-- 
What we nourish flourishes. - Was wir nähren erblüht.

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de



Re: [BackupPC-users] automated backup of specific dirs to local hdd

2009-01-07 Thread Tino Schwarze
On Wed, Dec 31, 2008 at 02:08:27AM +1000, jed wrote:

 Is this app purely for backup across networks to servers, or is it also 
 perfectly fine for local backups of stipulated dirs?

You could set up a server and have it only backup itself - no problem.

 For starters I'm just wanting to regularly backup my Tbird/FF profiles 
 to a separate hdd on the same Mac..

Then BackupPC is probably overkill for you.

 Can it deal with folders that contain data that's live and may be 
 updating at the time of a backup? (hope that makes sense)

No. There's no standard software to back up live data. You always need a
special solution, like shutting down the program in question.

Are you using Mac OS X? I've heard that Time Machine does an excellent
job of backing up data on Macs...

Hope that helps,

Tino.

-- 
What we nourish flourishes. - Was wir nähren erblüht.

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de



Re: [BackupPC-users] backuppc quotas

2009-01-07 Thread cedric briner
Hi Andy
 Hi Cedric,
 
 Unless I'm missing something, why wouldn't you implement
 quotas on the users data before backups?  Most systems have
 this capability already.  It'd be much simpler than trying
 get users to prioritize which things they want backed up.

I'm not really sure I understand very well what you've said. But the
idea developed here is to tell the user:
   I can provide you a backup of up to 10 GB.

Then, whether you check the size on the server or on the client, the
user will have to do a disk usage (DU) run to find out which
directories or which file extensions they want to exclude.

That said, yes, I agree with you that instead of
  1 - first do a disk usage (DU) with the exclude path
you can directly start with a backup, because you can assume that the
size to be downloaded will be about as high as the last time (so there is
no real need to do a DU before each run). And then it is more efficient
for the client if this verification occurs on the server side.


 Andy
cEd


-- 

Cedric BRINER
Geneva - Switzerland



Re: [BackupPC-users] Remote backups of a win2003 server keep failing after a certain amount of time.

2009-01-07 Thread Jeffrey J. Kosowsky
Koen Linders wrote at about 14:05:56 + on Wednesday, January 7, 2009:
  Thanks for the reply.
  
  I changed the value to 144000 for this client and will wait another day (or
  two). 
  
  Maybe it hangs on a specific file? I hope changing the value worked.
  

That is very likely with Windoze. Rsync with protocol 28 can get stuck
on files with long or weird filenames. If so you will see that
rsync stops partway through by looking at the XferLOG file.

Note this problem doesn't occur with protocol 30 (used by rsync 3.0)
BUT this is not yet supported by perl-File-RsyncP.

You can get around this by using rsyncd (with or without a separate
ssh tunnel) instead of straight rsync.
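
For anyone still on plain rsync over ssh, the switch is roughly the
following in the per-host config. The module name below is the one used in
this thread; the user name and password are only placeholders for whatever
the client's rsyncd.conf (or DeltaCopy setup) defines:

$Conf{XferMethod}         = 'rsyncd';
$Conf{RsyncShareName}     = ['dDRIVE'];   # rsyncd module name on the client
$Conf{RsyncdUserName}     = 'backuppc';   # placeholder
$Conf{RsyncdPasswd}       = 'secret';     # placeholder
$Conf{RsyncdAuthRequired} = 1;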

This question seems to come up once a week or more. If not already in
the FAQ it should probably be added. Maybe it should also be added to
the comments in the config.pl file too.



Re: [BackupPC-users] Change target directory

2009-01-07 Thread Juergen Harms
$Conf{TopDir} appears to be a hard-coded variable. My understanding
is that, in your configuration file, you can set it to any value you
want, but that the result you intuitively expect will only be achieved
as long as you stay within the file-system that contains the BackupPC
configuration data.

Google reveals abundant references to that topic. I had found clear 
instructions on what to do in the wiki, but can't find them any more - 
the wiki appears to have been recently improved. From memory, here are 
the 2 principal alternatives for using a directory on another file-system:

1. mount - as already suggested - your backup file-system directly at
the mount point /var/lib/backuppc
2. do a bind-mount of the backup file-system
  - mount your backup file-system at the mount point of your choice,
e.g. backup-mount
  - do mount --bind backup-mount /var/lib/backuppc
to have this done automatically at boot time, add a line to your
/etc/fstab file
  - backup-mount /var/lib/backuppc none bind

In both cases, set the TopDir variable correspondingly. Also make sure 
to save any data previously stored at /var/lib/backuppc before you use 
that node as a mount point.

This is really from memory: I am far from home and do not have access to 
the system with backuppc and the corresponding notes.

big smiley and cheers!



Re: [BackupPC-users] Change target directory

2009-01-07 Thread Nils Breunese (Lemonbit)
Juergen Harms wrote:

 $Conf{TopDir} appears to be a hardly coded variable. My  
 understanding
 is that, in your configuration file, you can set it to any value you
 want, but that the result you intuitively expect will only be achieved
 as long as you stay within the file-system that contains the backuppc
 configuration data.

The thing is that the TopDir location is also written into some lib  
file at install time. If you install a BackupPC package the same thing  
applies: if you just change it in the config file, it won't work.

 From memory, here are the 2 principal alternatives for using a  
 directory on another file-system:

 1. mount - as already suggested - your backup file-system directly at
the mount point /var/lib/backuppc
 2. do a bind-mount of the backup file-system
  - mount your backup file-system at the mount point of your  
 choice,
e.g. backup-mount
  - do mount --bind backup-mount /var/lib/backuppc
to have this done automatically at boot time, add a line to your
/etc/fstab file
  - backup-mount /var/lib/backuppc none bind

You can also use a plain symlink. These solutions have been posted  
many times to this mailinglist.

Nils Breunese.



[BackupPC-users] tracking cause of backup growth

2009-01-07 Thread John Rouillard
Hi all:

I am having an issue with backup growth. I have approx 100 hosts that
should be in steady state: all have 9 full backups and 14 incrementals
which are the maximum number of retained backups.

The amount of data being backed up shouldn't be varying much, but I
have been continually losing about 1-2 GB/day over the past month or so.

Does anybody have any tools/tricks for auditing the BackupPC pc
directory to figure out which files are using the additional space?

I have done some auditing and I see things like log files changing on
every backup, but the size is larger in some cases, smaller in others
so on average they have the same size for a given day.

-- 
-- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111



Re: [BackupPC-users] backuppc quotas

2009-01-07 Thread Les Mikesell
cedric briner wrote:
 Hi Andy
 Hi Cedric,

 Unless I'm missing something, why wouldn't you implement
 quotas on the users data before backups?  Most systems have
 this capability already.  It'd be much simpler than trying
 get users to prioritize which things they want backed up.
 
 I'm not really sure to very well understand what you've said. But, the 
 idea developed is to tell the user:
I can provide you a backup with a 10GB.

User quotas would normally be used on a server.  I assume you want to 
back up local hard disks where the whole space is available.

 Then whatever you do, checking the size in the server or in the client, 
 the user will have to do a disk usage (DU), to find out which 
 directories or which files extensions they want to exclude.

Your idea of checking the size on the client before starting should work 
and if you have sshd running you should be able to do it as a 
DumpPreUserCmd with UserCmdCheckStatus set to abort on failure. 
However, any per-client size checking is unrealistically simple because 
the server-side space that will be used depends very much on the 
uniqueness of the client's files.  That is, if you add a backup of a 
large directory where the files are all copies of things already backed 
up from other machines it will take very little additional server space.
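
If you do go that route, the per-host config might look roughly like this;
check_quota.sh is a hypothetical script that would run du over the share on
the client and exit non-zero when the quota is exceeded:

$Conf{DumpPreUserCmd}     = '$sshPath -q -x -l root $host /usr/local/bin/check_quota.sh';
$Conf{UserCmdCheckStatus} = 1;   # abort the dump if the pre-dump command fails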

-- 
  Les Mikesell
lesmikes...@gmail.com




[BackupPC-users] vista backup question

2009-01-07 Thread Mark Maciolek
hi,

Running 3.1.0 on CentOS 5.2 with 40 clients so far, mostly Linux systems.
I added my first Vista system today and used DeltaCopy to install rsync.

The backup worked but still had 13446 transfer errors, mainly "file name
too long".

Remote[1]: rsync: readlink_stat(All Users/Application Data/Application 
Data/Application Data/Application Data/Application Data/Application 
Data/Application Data/Application Data/Application Data/...


Basically it is repeating "Application Data" over and over again.

Has anyone seen this issue and know how to prevent it?

Mark
-- 
Mark Maciolek
Network Administrator
Morse Hall 339
862-3050
m...@sr.unh.edu
https://www.sr.unh.edu



[BackupPC-users] Change target directory

2009-01-07 Thread tagore

Hi!

Thank you for answers.

Tagore

+--
|This was sent by hirleve...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





[BackupPC-users] incremental tar xfer errors

2009-01-07 Thread Simone Marzona
Hi

I've got a strange problem doing incrementals with tar over ssh using
--newer=$incrDate+. It seems to be an escaping problem with part of the
time reference for the incremental. The date part of --newer is parsed
correctly, but the time part isn't: it is changed to 00:00:00 and tar
interprets 11:34:33 as a filename that doesn't exist.

extract from logfile

Running: /usr/bin/ssh -c blowfish -C -q -x -n -l backup $host $tarPath
-c -v -v -f - -C /root --totals --newer=2009-01-07 11:34:33 .

incr backup started back to 2009-01-07 11:34:33 (backup #0) for
directory /root
Xfer PIDs are now 25928,25927
/bin/tar: Treating date `2009-01-07' as 2009-01-07 00:00:00 + 0
nanoseconds
/bin/tar: 11\:34\:33: Cannot stat: No such file or directory

The problem disappears if I use mtime instead of incrDate+ in the
incremental options of tar (or if I switch to rsync).

What am I doing wrong?

Thanks in advance.




Re: [BackupPC-users] incremental tar xfer errors

2009-01-07 Thread Jeffrey J. Kosowsky
Simone Marzona wrote at about 22:38:42 +0100 on Wednesday, January 7, 2009:
  Hi
  
  I got a strange problem doing incrementals with tar over ssh using
  --newer=$incrDate+. It seems an escape problem of part of the time
  reference for the incremental. The date part of --newer is parsed
  correctly but the hour part of --newer.. doesn't and is changed in
  00:00:00 and tar interprets 11:34:33 as a filename that doesn't exist.
  
  extract from logfile
  
  Running: /usr/bin/ssh -c blowfish -C -q -x -n -l backup $host $tarPath
  -c -v -v -f - -C /root --totals --newer=2009-01-07 11:34:33 .
  

Well, the problem (obviously) is that there is whitespace between the
date and the time.  Maybe try putting quotation marks around $incrDate+.
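
In other words, the remote tar needs to receive the timestamp as a single
argument, so the command should end up looking roughly like this
(illustrative only, with the value already escaped or quoted):

$tarPath -c -v -v -f - -C /root --totals --newer='2009-01-07 11:34:33' .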

  incr backup started back to 2009-01-07 11:34:33 (backup #0) for
  directory /root
  Xfer PIDs are now 25928,25927
  /bin/tar: Treating date `2009-01-07' as 2009-01-07 00:00:00 + 0
  nanoseconds
  /bin/tar: 11\:34\:33: Cannot stat: No such file or directory
  
  The problem disappears if I use mtime and not incrDate+ (or rsync) in
  the incremental options of tar.
  
  What I'm wrong?
  
  Thanks in advance.
  
  



Re: [BackupPC-users] vista backup question

2009-01-07 Thread Cody Dunne
Hi Mark,

Mark Maciolek wrote:
 hi,
 
 Running 3.1.0 on CentOS 5.2 with 40 clients so far mostly Linux systems. 
 Added my first Vista system today. Used Deltacopy to install rsync.
 
 The backup worked but still had 13446 error transfer, mainly file name 
 too long.
 
 Remote[1]: rsync: readlink_stat(All Users/Application Data/Application 
 Data/Application Data/Application Data/Application Data/Application 
 Data/Application Data/Application Data/Application Data/...
 
 
 Basically it is repeating the Application Data over and over again.
 
 Has anyone seen this issue and know how to prevent it?
 
 Mark

That's because rsync is following the Vista junction points for the old 
XP locations of things. See 
http://www.cs.umd.edu/~cdunne/projs/backuppc_guide.html#Xfer
for the excludes you need to pass to rsync.
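
Roughly, the idea is to exclude the junction points themselves.  As an
illustration only (the exact paths depend on how the share is rooted, so
use the list from the guide above), it looks something like this in
config.pl:

$Conf{BackupFilesExclude} = {
    '*' => [
        '/Documents and Settings',
        '/ProgramData/Application Data',
        '/Users/All Users/Application Data',
        '/Users/*/AppData/Local/Application Data',
    ],
};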

Cody



Re: [BackupPC-users] incremental tar xfer errors

2009-01-07 Thread Craig Barratt
Simone writes:

 I got a strange problem doing incrementals with tar over ssh using
 --newer=$incrDate+. It seems an escape problem of part of the time
 reference for the incremental.

Yes, the escaping isn't happening.  The $incrDate+ form means
to escape the value, so that is what you should use (since you
are running through ssh).

Are you sure $Conf{TarIncrArgs} includes --newer=$incrDate+ rather
than --newer=$incrDate?  Have you checked the per-client config too?
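
For reference, the stock setting should look something like this; make sure
the trailing + on $incrDate hasn't been dropped:

$Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';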

Craig



Re: [BackupPC-users] I received the error No files dumped for share

2009-01-07 Thread Craig Barratt
Omar writes:

 $Conf{TarClientCmd} = ' env LC_ALL=C /usr/bin/sudo $tarPath -c -v -f -
 -C $shareName+'
 . ' --totals';
 
 $Conf{TarClientRestoreCmd} = ' env LC_ALL=C /usr/bin/sudo $tarPath -x -p
 --numeric-owner --same-owner'
. ' -v -f - -C $shareName+';

Both of these are wrong - they start with a space.  BackupPC doesn't
know what program to exec.

You need something like:

$Conf{TarClientCmd} = '/usr/bin/sudo env LC_ALL=C $tarPath -c -v -f - -C 
$shareName+ --totals';
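
The restore command presumably needs the same treatment (sudo first, no
leading space), e.g.:

$Conf{TarClientRestoreCmd} = '/usr/bin/sudo env LC_ALL=C $tarPath -x -p'
    . ' --numeric-owner --same-owner -v -f - -C $shareName+';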

Craig



Re: [BackupPC-users] backuppc quotas

2009-01-07 Thread cedric briner
 Hi Andy
 Hi Cedric,

 Unless I'm missing something, why wouldn't you implement
 quotas on the users data before backups?  Most systems have
 this capability already.  It'd be much simpler than trying
 get users to prioritize which things they want backed up.
 I'm not really sure I fully understand what you've said. But the 
 idea is to tell the user:
    I can provide you with a 10 GB backup.
 
 User quotas would normally be used on a server.  I assume you want to 
 back up local hard disks where the whole space is available.
 
 Then whatever you do, checking the size in the server or in the client, 
 the user will have to do a disk usage (DU), to find out which 
 directories or which files extensions they want to exclude.
 
 Your idea of checking the size on the client before starting should work 
 and if you have sshd running you should be able to do it as a 
 DumpPreUserCmd with UserCmdCheckStatus set to abort on failure.

 However, any per-client size checking is unrealistically simple because 
 the server-side space that will be used depends very much on the 
 uniqueness of the client's files.  That is, if you add a backup of a 
Who cares that files might already be backed up in the pool? Not the 
end user; the end user only cares about saving his data and being able to 
get it back. And you, as the sysadmin, are a bit happier, because you 
have a way to control how much data will be backed up, which will help 
you evaluate your backup system.

The main problem in my situation is that people have huge amounts of 
data stored, and we only want to provide backup for the important parts. 
So we let the users tell BackupPC which directories to back up, and each 
year we decide that people should be able to save, say, 10 GB or 100 GB.

 large directory where the files are all copies of things already backed 
 up from other machines it will take very little additional server space.
Yes, but as already said, that is a nice feature for the sysadmin, 
not for the end user.

-- 

Cedric BRINER
Geneva - Switzerland
