[BackupPC-users] DumpPreUserCmd returned error status 256

2007-06-07 Thread Garith Dugmore
Hi,

When using the following configuration command

$Conf{DumpPreUserCmd} = 'rsync -az --delete ethleen.saao::backupreadonly 
ctfileserver.saao::read_only';

backuppc reports in the log:

2007-06-06 16:58:08 DumpPreUserCmd returned error status 256... exiting


If the command (rsync -az --delete ethleen.saao::backupreadonly 
ctfileserver.saao::read_only) is run from the CLI, it completes 
successfully as root.
An echo $? after the command is run reveals an exit code of 0.

Is there anything else I could check?
I realize I should run this command as the apache user for a true test, but I 
don't see why this would change the result.

Thanks in advance!

-- 

Garith Dugmore
Systems Administrator
South African Astronomical Observatory






Re: [BackupPC-users] DumpPreUserCmd returned error status 256

2007-06-07 Thread Keith Edmunds
Hi Garith

 Is there anything else I could check?
 I realize I should run this command as apache for a true result but I 
 don't see why this would change the result.

I would run the actual command as shown in the log file, and I'd run it as
user 'backuppc'. The actual command presumably starts with
'/usr/bin/ssh ...'

Keith



[BackupPC-users] Phantom directories. Possible bug

2007-06-07 Thread Jesús Martel
Hi!

I have a problem with BackupPC 3.0. The system gives me this error:

[...]
Call timed out: server did not respond after 2 milliseconds listing \E\*
Call timed out: server did not respond after 2 milliseconds listing \I\*
Call timed out: server did not respond after 2 milliseconds listing \I\*
Call timed out: server did not respond after 2 milliseconds listing \L\*
Call timed out: server did not respond after 2 milliseconds listing \R\*
[...]

But BackupPC creates directories which do not exist. The directories E,
I, L and R are phantom directories. Is this a bug?



Re: [BackupPC-users] Grouping hosts and pool

2007-06-07 Thread Holger Parplies
Hi,

Mark Sopuch wrote on 07.06.2007 at 13:36:55 [Re: [BackupPC-users] Grouping 
hosts and pool]:
 Jason M. Kusar wrote:
  Mark Sopuch wrote:
  I'd like to group data (let's just say dept data) from certain hosts 
  together (or actually to separate some from others) to a different 
  filesystem and still keep the deduping pool in a common filesystem. 
  [...]
  Yes, hard links do not work across filesystems.
  [...]
 [...] my concerns lie mainly with certain types of 
 hosts (data) encroaching quite wildly into the shared allocated space 
 under DATA/... thus leaving less room for the incoming data from other 
 hosts. It's a space budgeting and control thing. [...] I am not sure how
 any other quota schemes would work to provide similar capability for soft 
 and hard quota if they are in the same fs and usernames are not stamped 
 around in DATA/... to differentiate such things to those other quota'ing 
 systems. Sure I want to back everything up but I do not want the 
 bulkiest least important thing blocking a smaller top priority backup 
 getting space to write to when there's a mad run of new data.
 
 Hope I am being clear enough. Thanks again.

I believe you are being clear enough, but I doubt you have a clear enough
idea of what you actually want :-).

If multiple hosts/users share the same (de-duplicated) data, which one would
you want it to be accounted to? If a new low-priority backup creates a huge
amount of new data (in the sense that it was not in any of its previous
backups) which is, though, already in the pool (from backups of
high-priority hosts), should that backup fail to link to that data, because
it exceeds its quota? This could happen, for example, when a user downloads
the OpenOffice.org or X11 sources that someone else has also previously
downloaded.
What happens when the high-priority backups using the data expire and only
the low-priority backups remain? Should the low-priority user then be over
quota and have his existing backup removed? What happens to incremental
backups based on that backup?

You see, it's not a simple matter to combine de-duplication and quotas. The
whole point of de-duplication is to not use up disk space for the same data
multiple times. The whole point of quotas is to divide up disk space
according to fixed rules. While it may make sense to add the cost of shared
files to each user's quota (thus counting them multiple times and
consequently having the sum of all used quotas be (possibly many times) larger
than the actual amount of used disk space), that is probably not the way any
conventional quota system works, because it's concerned with dividing up
real physical disk space.

What you probably want is something like 'du $TOPDIR/pc/$hostname' gives you:
take into account de-duplication savings *within* the one host but not those
*between different* hosts. There is no way to achieve this with file
ownerships, because one inode can belong to only one user, regardless of which
link you traverse to access it.

You might be relieved to hear that BackupPC never creates a temporary copy
of data it can find in the pool (so a 1 GB backup does *not* first use up 1 GB
of additional space and then delete duplicate data). You need to be aware,
however, that files not already in the pool are added to the pool by
BackupPC_link, which is run after the backup. That means they are not yet
available for de-duplication during the backup itself. If one backup
includes ten instances of an identical new 1 GB file, it will first store
10 * 1 GB. BackupPC_link will then resolve the duplicates. After
BackupPC_link, only 1 * 1 GB will be used.

The only simple thing I can think of at the moment is to check disk usage with
'du' *after the backup* (after the link phase, if you want to be correct) and
then react to an over-quota situation.
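
A minimal sketch of such a check, assuming a TopDir of /var/lib/backuppc and an
arbitrary 50 GB limit (both are placeholders of mine, not anything BackupPC
provides); the problems listed below apply to it just the same:

#!/usr/bin/perl
# Hypothetical post-backup quota check -- a sketch only.
use strict;
use warnings;

my $topdir  = '/var/lib/backuppc';   # assumed TopDir; adjust to your install
my $host    = shift @ARGV or die "usage: $0 <host>\n";
my $quota_k = 50 * 1024 * 1024;      # example quota: 50 GB, expressed in KB

# 'du' counts each hard-linked file only once per run, so within pc/$host
# this reflects de-duplication *within* the host but not across hosts.
my $out = `du -sk $topdir/pc/$host`;
my ($used_k) = $out =~ /^(\d+)/
    or die "du failed for $topdir/pc/$host\n";

if ($used_k > $quota_k) {
    warn "$host is over quota: ${used_k}K used, limit ${quota_k}K\n";
    # React here: mail an admin, flag the host, skip its next backup, ...
}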

Problem 1: What do you do? Delete the complete backup? Or try to 'trim' it
   down to what would fit? How? Make sure there's no subsequent
   backup running yet that is using the data ...
Problem 2: DumpPostUserCmd runs before BackupPC_link, so the calculated disk
   usage would not be strictly correct if you do it from there.
Problem 3: The corrective measure is taken after the fact. A high priority
   backup may already have been skipped due to DfMaxUsagePct being
   exceeded.
Problem 4: I haven't got a large pool where I could test, but I would expect
   'du' to suffer from the same large-amount-of-files-with-more-than-
   one-hardlink and inodes-spread-out-over-disk problems as pool
   copying encounters. Expect it to be slow and test first if it
   works at all.

As a side note, I'm not sure how well BackupPC handles full pool file
systems. If there were no problems at all, DfMaxUsagePct would not be
needed. A quota system would, in fact, deny BackupPC access to more space on
the disk once the quota was reached, just as if the disk had run full. I doubt
it is a good 

Re: [BackupPC-users] DumpPreUserCmd returned error status 256

2007-06-07 Thread Craig Barratt
Garith Dugmore writes:

 When using the following configuration command
 
 $Conf{DumpPreUserCmd} = 'rsync -az --delete ethleen.saao::backupreadonly 
 ctfileserver.saao::read_only';
 
 backuppc reports in the log:
 
 2007-06-06 16:58:08 DumpPreUserCmd returned error status 256... exiting

BackupPC doesn't use a shell to run external commands,
so you need a full path to the executable.
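
For example, assuming rsync lives at /usr/bin/rsync on the BackupPC server (a
guess; check with "which rsync" and adjust), the setting would become something
like:

# Assumes rsync is installed at /usr/bin/rsync -- adjust the path to your system.
$Conf{DumpPreUserCmd} = '/usr/bin/rsync -az --delete'
                      . ' ethleen.saao::backupreadonly ctfileserver.saao::read_only';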

Craig



Re: [BackupPC-users] DumpPreUserCmd returned error status 256

2007-06-07 Thread Garith Dugmore
Aha, thanks for this. And thanks for such an awesome application!

Craig Barratt wrote:

 Garith Dugmore writes:

  When using the following configuration command

  $Conf{DumpPreUserCmd} = 'rsync -az --delete ethleen.saao::backupreadonly 
  ctfileserver.saao::read_only';

  backuppc reports in the log:

  2007-06-06 16:58:08 DumpPreUserCmd returned error status 256... exiting

 BackupPC doesn't use a shell to run external commands,
 so you need a full path to the executable.

 Craig


-- 

Garith Dugmore
Systems Administrator
South African Astronomical Observatory

Phone:		021 460 9343
Fax:		021 447 3639
SAAO Website:	http://www.saao.ac.za
SALT Website:	http://www.salt.ac.za
Jabber:		[EMAIL PROTECTED]
MSN:		[EMAIL PROTECTED]






[BackupPC-users] Backup won't complete

2007-06-07 Thread Thomas Nygreen
I am trying to use BackupPC to back up two hosts, but it seems like I
need some help. Backing up host#1 works fine, but the other, host#2,
does not. I'm using the rsync method. It looks like all the files are
copied to the /var/lib/backuppc/pc/host2/new folder on the BackupPC
server, but the rsync process doesn't exit. The Xfer log is empty,
NewFileList is complete and the log file just says backup started at...

Running rsync with the same arguments that BackupPC uses works fine, and
ssh -l root host2 whoami prints out root like it should.

Here is all the relevant info I can think of about the three boxes:

Server: Ubuntu 6.10 Edgy Eft
   BackupPC Version: 2.1.2-5ubuntu3 (2.1.2pl1)
   Linux 2.6.15-26-386 #1 PREEMPT Fri Sep 8 19:55:17 UTC 2006 i686 GNU/Linux
   OpenSSH_4.3p2 Debian-5ubuntu1, OpenSSL 0.9.8b 04 May 2006
   rsync  version 2.6.8  protocol version 29
   Disk usage on /var/lib/backuppc : 12%

Changes in config.pl:
$Conf{MaxBackups} = 2;                # default 4
$Conf{MaxUserBackups} = 2;            # default 4
$Conf{BackupFilesExclude} = '/proc';  # default undef
$Conf{XferMethod} = 'rsync';          # default 'smb'
$Conf{CompressLevel} = 3;             # default 0


Host#1: Debian 4.0 Etch (BACKUP WORKS FINE)
   Linux 2.6.18-4-686 #1 SMP Mon Mar 26 17:17:36 UTC 2007 i686 GNU/Linux
   rsync  version 2.6.9  protocol version 29
   OpenSSH_4.3p2 Debian-9, OpenSSL 0.9.8c 05 Sep 2006
Host#1 has no config file

Host#2: Ubuntu 6.06 LTS Dapper Drake Server Edition
   Linux 2.6.15-28-server #1 SMP Thu Feb 1 16:58:14 UTC 2007 i686 GNU/Linux
   rsync  version 2.6.6  protocol version 29
   OpenSSH_4.2p1 Debian-7ubuntu3.1, OpenSSL 0.9.8a 11 Oct 2005
(both rsync and ssh versions are the most recent available in the Ubuntu
apt repositories)

Host2.pl (we have been trying a lot of different excludes to minimize
the size, but here are the essential ones):
#=-*-perl-*-
$Conf{BackupFilesExclude} =
['/pub','/tmp','/proc','/media','/sys/bus/pci/drivers','/var','/root','/usr','/bin','/lib'];


I hope someone can help.

Regards,

Thomas Nygreen




Re: [BackupPC-users] Backup won't complete

2007-06-07 Thread Carl Wilhelm Soderstrom
On 06/07 08:50 , Thomas Nygreen wrote:
 I am trying to use BackupPC to back up two hosts, but it seems like I
 need some help. Backing up host#1 works fine, but the other, host#2,
 does not. I'm using the rsync method. It looks like all the files are
 copied to the /var/lib/backuppc/pc/host2/new folder on the BackupPC
 server, but the rsync process doesn't exit. The Xfer log is empty,
 NewFileList is complete and the log file just says backup started at...

Have you tried debugging by running BackupPC_dump by hand?

[EMAIL PROTECTED]:$ /usr/share/backuppc/bin/BackupPC_dump -f -v host.domain.tld


-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] Backup won't complete (problem solved)

2007-06-07 Thread Thomas Nygreen
Carl Wilhelm Soderstrom wrote:
 Have you tried debugging by running BackupPC_dump by hand?
 
 [EMAIL PROTECTED]:$ /usr/share/backuppc/bin/BackupPC_dump -f -v 
 host.domain.tld
 

Now I have, and by doing so found the problem. It looked like it did 
its job just fine until:

   [some tens of thousands of lines removed]
   Sending empty csums for sys/block/sdb/uevent
   Sending empty csums for sys/bus/ide/drivers/ide-cdrom/bind
   Sending empty csums for sys/bus/ide/drivers/ide-cdrom/unbind
   Segmentation fault

and the same old processes were hanging:

   /usr/bin/perl /usr/share/backuppc/bin/BackupPC_dump -f host2
   /usr/bin/ssh -q -x -l root host2 /usr/bin/rsync --server --sender 
--numeric-ids --perms --owner --group --devices --links --times 
--block-size=2048 --recursive -D lots of excludes --ignore-times . /

After experimenting a lot with excluding parts of the /sys tree, I 
traced the problem to /sys/bus/pci_express. I don't see why. So I 
excluded '/sys/bus/pci_express' and now it works just fine.

Thanks for putting me on the right track!


Thomas Nygreen



Re: [BackupPC-users] Backup won't complete (problem solved)

2007-06-07 Thread Nils Breunese (Lemonbit Internet)
Thomas Nygreen wrote:

 Carl Wilhelm Soderstrom wrote:
 Have you tried debugging by running BackupPC_dump by hand?

 [EMAIL PROTECTED]:$ /usr/share/backuppc/bin/BackupPC_dump -f -v 
 host.domain.tld

 
 Now I have, and by doing so found the problem. It looked like it did 
 its job just fine until:
 
[some tens of thousands of lines removed]
Sending empty csums for sys/block/sdb/uevent
Sending empty csums for sys/bus/ide/drivers/ide-cdrom/bind
Sending empty csums for sys/bus/ide/drivers/ide-cdrom/unbind
Segmentation fault
 
 and the same old processes were hanging:
 
/usr/bin/perl /usr/share/backuppc/bin/BackupPC_dump -f host2
/usr/bin/ssh -q -x -l root host2 /usr/bin/rsync --server --sender 
 --numeric-ids --perms --owner --group --devices --links --times 
 --block-size=2048 --recursive -D lots of excludes --ignore-times . /
 
 After experimenting a lot with excluding parts of the /sys tree, I 
 traced the problem to /sys/bus/pci_express. I don't see why. So I 
 excluded '/sys/bus/pci_express' and now it works just fine.
 
 Thanks for putting me on the right track!

/sys is something you'll probably want to exclude entirely.
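
For example, in Host2.pl, something like this (a sketch only; keep whichever of
the other excludes from the earlier list you still need):

# Exclude the virtual filesystems wholesale instead of chasing
# individual problem entries under /sys.
$Conf{BackupFilesExclude} = ['/proc', '/sys', '/tmp', '/media'];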

Nils Breunese.





Re: [BackupPC-users] Grouping hosts and pool

2007-06-07 Thread Mark Sopuch
Holger Parplies wrote:

 Hi,

 Mark Sopuch wrote on 07.06.2007 at 13:36:55 [Re: [BackupPC-users] Grouping 
 hosts and pool]:
  Jason M. Kusar wrote:
   Mark Sopuch wrote:
   I'd like to group data (let's just say dept data) from certain hosts 
   together (or actually to separate some from others) to a different 
   filesystem and still keep the deduping pool in a common filesystem. 
   [...]
   Yes, hard links do not work across filesystems.
   [...]
  [...] my concerns lie mainly with certain types of 
  hosts (data) encroaching quite wildly into the shared allocated space 
  under DATA/... thus leaving less room for the incoming data from other 
  hosts. It's a space budgeting and control thing. [...] I am not sure how
  any other quota schemes would work to provide similar capability for soft 
  and hard quota if they are in the same fs and usernames are not stamped 
  around in DATA/... to differentiate such things to those other quota'ing 
  systems. Sure I want to back everything up but I do not want the 
  bulkiest least important thing blocking a smaller top priority backup 
  getting space to write to when there's a mad run of new data.

  Hope I am being clear enough. Thanks again.

 I believe you are being clear enough, but I doubt you have a clear enough
 idea of what you actually want :-).

That may be true. I obviously treated hard links as quite 'magical', and 
the reality of their implementation and implications (amongst other 
things) didn't come to mind.

 If multiple hosts/users share the same (de-duplicated) data, which one would
 you want it to be accounted to?

For me, it's more about isolation than accounting. I guess I was looking 
for a common filesystem to pool into plus separate filesystems per 
group of hosts. Each group would have its hosts sandboxed (in a filesystem 
with soft and hard quotas), and alerts about quotas nearing their limits 
would be sent by the file server appliance to the backuppc admins. If a 
per-group sandbox fills then I can live with that and its backups failing, 
but I cannot live with a common pooling filesystem filling up, of course, 
due to its shared nature (dependencies). My efforts would always have ensured 
the common pool is massive enough to cover some concept of a worst case 
(best dedupe case), leaving the sandbox management as my only real concern.

 If you don't expect much duplicate data between hosts (or groups of hosts),
 the best approach is probably really to run independent instances of
 BackupPC.

I think I'll take that advice; given that hard links don't span 
filesystems, I am railroaded anyway, I suspect. Making a group manager 
that would edit symlinks to route the hosts to their respective group 
sandboxes was the other thing I was looking to do, and now I don't need to, 
which is some consolation.

Thanks for the polished explanation, Holger.

-- Mark
