Re: [BackupPC-users] BackupPC_link errors

2007-04-09 Thread Eric Snyder



Craig Barratt wrote:

> Eric writes:
>
> > 2007-04-08 21:20:02 BackupPC_link got error -4 when calling
> > MakeFileLink(/data/BackupPC/backups/pc/slacker/2/fEric/fhello.txt,
> > 5ee4aa1b190383553c1a7712ad260358, 1)
>
> The -4 error means that a file cannot be added to the pool
> (ie: a new hardlink cpool/5/e/e/5ee4aa1b190383553c1a7712ad260358
> to /data/BackupPC/backups/pc/slacker/2/fEric/fhello.txt).
>
> You've checked the obvious things.
>
> >   Filesystem      Inodes   IUsed    IFree IUse% Mounted on
> >   /dev/hdb1     39075840 1011307 38064533    3% /dev/backupdisk
>
> Your TOPDIR is /data/BackupPC/backups, but your disk is mounted
> on /dev/backupdisk.  Is that right?

Yes, this is correct.

> What happens when you run:
>
> df /data/BackupPC/backups/pc/slacker
> df /data/BackupPC/backups/cpool

I get:

[EMAIL PROTECTED]:~# df /data/BackupPC/backups/pc/slacker
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hdb1            307663800 144787820 147247548  50% /dev/backupdisk
[EMAIL PROTECTED]:~# df /data/BackupPC/backups/cpool
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hdb1            307663800 144787820 147247548  50% /dev/backupdisk


> Does it show the same file system?
>
> Does the directory /data/BackupPC/backups/cpool/5/e/e exist?  What happens
> when you try to manually make the link:
>
> su backuppc
> link /data/BackupPC/backups/pc/slacker/2/fEric/fhello.txt
> /data/BackupPC/backups/cpool/5/e/e/5ee4aa1b190383553c1a7712ad260358
link: cannot create link 
`/data/BackupPC/backups/cpool/5/e/e/5ee4aa1b190383553c1a7712ad260358' to 
`/data/BackupPC/backups/pc/slacker/2/fEric/fhello.txt': No such file or 
directory
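
A quick way to tell the failure modes apart, as a sketch (GNU coreutils
assumed): link(2) fails with ENOENT ("No such file or directory") when a
path component is missing, and with EXDEV when the two names are on
different filesystems, so check both:

ls -ld /data/BackupPC/backups/cpool/5/e/e
stat -c '%d %n' /data/BackupPC/backups/pc/slacker/2/fEric/fhello.txt
stat -c '%d %n' /data/BackupPC/backups/cpool

# ls shows whether the pool subdirectory exists at all;
# stat -c '%d' prints the device number, which must match for
# both paths before a hard link can succeed.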


I did a couple of things.

1) I changed the backup directory to one on the same drive that BackupPC
runs from (hda), which holds the symlinks to the other drive. BackupPC
created the links when I ran the backup, so we know that my installation
of BackupPC will make links OK on the same drive it is installed on.


2) I created individual symlinks for each directory (pc, cpool and pool)
that point to the actual directories on hdb and then tried the backup. The
backup ran fine and links were created on the target drive (hdb) inside
/dev/backupdisk/backups/cpool.


It seems that the problem only shows up when I have a single symlink for
my topmost directory, rather than individual symlinks to each directory
one level down. Why would that be?

Thanks,
Eric


Re: [BackupPC-users] BackupPC_link errors

2007-04-09 Thread John Moorhouse
I've also got problems with the link error when I change the topDir from
the default install location (/var/lib/backuppc) to a mount on a RAID
array, i.e. moving all the subdirectories within topDir to the new location.

I've tried mounting the RAID on a partition at root level
('/backup-files'), and most recently I created a directory within
/var/lib/backuppc called raid and mounted it on that. Each time BackupPC
works, in that it backs up and saves the backup files where you would
expect them (i.e. '/backup-files/pc' or /var/lib/backuppc/raid/pc), but it
seems to be unable to add anything into the relevant cpool directory.

I've done the df as suggested:

[EMAIL PROTECTED]:/var/lib$ sudo df /var/lib/backuppc/raid/cpool
Filesystem           1K-blocks  Used Available Use% Mounted on
/dev/md0             625118068 32888 625085180   1% /var/lib/backuppc/raid
[EMAIL PROTECTED]:/var/lib$ sudo df /var/lib/backuppc/raid/pc
Filesystem           1K-blocks  Used Available Use% Mounted on
/dev/md0             625118068 32888 625085180   1% /var/lib/backuppc/raid

Help !!

Thanks

John

Other information on the setup:

From fstab:

/dev/md0  /var/lib/backuppc/raid  reiserfs  user,suid,dev,exec  0  0

(also tried EXT2 with no difference)

The md0 data is:

/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Apr  8 15:10:33 2007
     Raid Level : raid5
     Array Size : 625137152 (596.18 GiB 640.14 GB)
    Device Size : 312568576 (298.09 GiB 320.07 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Apr  9 20:06:56 2007
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 4K

           UUID : 2dd5496a:4a0998eb:f51f310c:dbb2f99b
         Events : 0.25986

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1



>
> df /data/BackupPC/backups/pc/slacker
> df /data/BackupPC/backups/cpool
>
> Does it show the same file system?
>
> Does the directory /data/BackupPC/backups/cpool/5/e/e exist?  What happens 
> when
> you try to manually make the link:
>
> su backuppc
> link /data/BackupPC/backups/pc/slacker/2/fEric/fhello.txt 
> /data/BackupPC/backups/cpool/5/e/e/5ee4aa1b190383553c1a7712ad260358
>
> Craig
>




[BackupPC-users] FW: Call for testers w/ using BackupPC (or equivalent)

2007-04-09 Thread John Villalovos
FYI: this might be useful to try for those of you using ext2/3.
Please send feedback to Theodore Ts'o if you try it out.

John

-Original Message-
From: Theodore Ts'o [mailto:[EMAIL PROTECTED]
Sent: Saturday, April 07, 2007 8:53 AM
To: [EMAIL PROTECTED]; linux-ext4@vger.kernel.org; Villalovos,
John L; Bill Broadley; Lenny Foner
Subject: Call for testers w/ using BackupPC (or equivalent)


For a while now, I've been receiving complaints from users who have been
using BackupPC, or some other equivalent backup program which functions
by using hard links to create incremental backups.  (There may be some
people who are using rsync to do the same thing; if you know of other
such backup programs with such properties, please let me know.)

BackupPC works by creating hard link trees, so that files that have not
changed across incremental backups are stored only once.  With a large
enough filesystem, this is sufficient to cause memory usage issues when
e2fsck needs to run a full check on the filesystem.  There are two causes
of this problem:

* Even if directories are equivalent, Unix does not allow directories to
  be hardlinked, so if a filesystem has 100,000 directories, each
  incremental backup will create 100,000 new directories in the BackupPC
  directory.  E2fsck requires 12 bytes of storage per directory in order
  to store accounting information.

* E2fsck uses an icount abstraction to store the i_links_count
  information from the inode, as well as the number of times an inode is
  actually referenced by a directory.   This abstraction uses an
  optimization based on the observation that on most normal filesystems,
  there are very few hard links (i.e., i_links_count for most regular
  files is 1).   The icount abstraction uses 6 bytes of memory for each
  directory and regular file which has been hardlinked, and two such
  structures are used (one for each of the two counts above).

One such filesystem that was reported to me had 88 million inodes, of
which 11 million were files, and 77 million were directories (!).  This
meant that e2fsck needed to allocate around 881 megabytes of memory in
one contiguous array for the dirinfo data structures, and two
(approximately) 500 megabyte contiguous arrays for the icount
abstraction.
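
Back-of-envelope, using the counts above, those figures check out:

77e6 directories x 12 bytes = 924,000,000 bytes, i.e. ~881 MiB for dirinfo
88e6 inodes      x  6 bytes = 528,000,000 bytes, i.e. ~503 MiB per icount array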

On a 32-bit processor, especially with shared libraries enabled, which
further reduce the available 3GB of address space, e2fsck can very
easily fail to have enough memory.  Using a statically-linked e2fsck can
help, as can moving to a 64-bit processor, but you still need a large
amount of memory.

OK, so that's the problem.  What's the solution?   I have a testing
version of e2fsprogs which uses a scratch directory to store the
in-memory databases in a file instead.  So this won't help on a root
filesystem, since a writeable directory is required, but most of the
time the BackupPC archives should be on a separate filesystem.

To download it, please get e2fsprogs version 1.40-WIP-2007-04-11, which
can be found here:

http://downloads.sourceforge.net/e2fsprogs/e2fsprogs-1.40-WIP-2007-04-11.tar.gz

After you build it, create an /etc/e2fsck.conf file with the following
contents:

[scratch_files]
directory = /var/cache/e2fsck

...and then make sure /var/cache/e2fsck exists by running the command
"mkdir /var/cache/e2fsck".

My initial tests show that e2fsck does run approximately 25% slower with
the scratch_files feature enabled, but it should use a significantly
smaller amount of memory, and so for people who have had their e2fsck
thrashing due to swap activity, it could run faster.  And certainly for
people where e2fsck was failing altogether due to lack of memory and/or
address space, this should allow them to complete.

But because there is this performance tradeoff with using
[scratch_files], I want to be able to give tuning advice for when to
use it and when not to use it.  That's also why there is a
numdirs_threshold parameter in [scratch_files], which can be used to
enable it only on filesystems with a large number of directories (this
tends to be a good marker for filesystems that might need this feature;
but the question is, what should a good default be?)
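
For example (the threshold value here is illustrative, not a recommended
default):

[scratch_files]
directory = /var/cache/e2fsck
numdirs_threshold = 100000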


So what I'm looking for from testers is to run the following experiment:

1) Using your existing e2fsck (please let me know which version), run
   the command:

/sbin/e2fsck -nfvttC0 /dev/sdXX

   ... and send me the output.

   Since the e2fsck is run with the -n option, it is ok to run this on a
   mounted filesystem (but you probably want to do this at night or some
   lightly loaded time since it will slow your fileserver down esp. if
   you try this during peak hours).

   If you know your filesystem will cause e2fsck to fail due to lack of
   memory, of course there's no reason to do this.

2) Using the new version of e2fsck from 1.40-WIP-2007-04-11, run the
   same command again, and send me the output:

e2fsck.new -nfvttC0 /dev/sdXX

   While it is running pass #3, could you also send me the output of
   "ls -s /var/cache/e2fsck"?  I want to see how large the scratch
   files grow.

Re: [BackupPC-users] Resources for fixing File::RsyncP

2007-04-09 Thread Nils Breunese (Lemonbit)

mattkelly wrote:

> I have installed BackupPC 3.0 and have been successful when using smb
> to back Windows boxes up.  However, I have not been able to install
> File-RsyncP-0.68 in order to use rsync with some linux boxes.  Right
> now I only have version 0.52.  I have looked everywhere and not found
> anything to help troubleshoot installing RsyncP 0.68.  I am able to do
> "perl Makefile.PL", but then when doing "make", it spits out a bunch
> of errors.  I've pasted them here: http://rafb.net/p/ZKLlmo76.html .
> If you can take a look and let me know what I am missing here I would
> really appreciate it.  Am I using the wrong version of Perl or
> something?


Looks like you're missing a bunch of header files (lines 4-22 of your
output). On my CentOS system these are part of the glibc-headers
package, though if you're on CentOS/Red Hat/Fedora you might just as
well install perl-File-RsyncP from Dag Wieers' yum repository and not
worry about building it and keeping it up to date yourself.
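
For example (a sketch, assuming the DAG/RPMforge repository is already
configured):

yum install perl-File-RsyncP

or, if you'd rather build from source, install the headers first:

yum install glibc-headers
perl Makefile.PL && make && make test && make install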


Nils Breunese.




Re: [BackupPC-users] Wonderful

2007-04-09 Thread Ski Kacoroski
On Fri, 6 Apr 2007 16:14:43 -0400 "Lowe, Bryan" <[EMAIL PROTECTED]>
wrote:
> I'm running BackupPC on a RHEL4 box to back up 104 hosts, a mixture
> of Linux and Sun Solaris servers.
> 
> I just wanted to say how happy I am with this software.  I'm not very
> good with words, so just let me say thank you thank you thank you for
> producing such a reliable backup method at absolutely no cost to me.
> 
>  
> 
> Regards,
>  
> Bryan Lowe
> Unix Systems Administrator

I second Bryan's sentiments, as do the 1500 users I am backing up who
depend on BackupPC to protect their data.  I am using (8) Debian boxes
to back up 1500 OS X workstations (laptops and desktops).  This software
is a lifesaver.

cheers,

ski

-- 
"When we try to pick out anything by itself, we find it
 connected to the entire universe"John Muir

Chris "Ski" Kacoroski, [EMAIL PROTECTED], 206-501-9803



[BackupPC-users] Resources for fixing File::RsyncP

2007-04-09 Thread mattkelly
Hi all,
I have installed BackupPC 3.0 and have been successful when using smb to 
back Windows boxes up.  However, I have not been able to install 
File-RsyncP-0.68 in order to use rsync with some linux boxes.  Right now 
I only have version 0.52.  I have looked everywhere and not found 
anything to help troubleshoot installing RsyncP 0.68.  I am able to do 
"perl Makefile.PL", but then when doing "make", it spits out a bunch of 
errors.  I've pasted them here:  http://rafb.net/p/ZKLlmo76.html .  If 
you can take a look and let me know what I am missing here I would 
really appreciate it.  Am I using the wrong version of Perl or something?
Thanks!



Re: [BackupPC-users] BackupPC_link errors

2007-04-09 Thread Craig Barratt
Eric writes:

> 2007-04-08 21:20:02 BackupPC_link got error -4 when calling 
> MakeFileLink(/data/BackupPC/backups/pc/slacker/2/fEric/fhello.txt, 
> 5ee4aa1b190383553c1a7712ad260358, 1)

The -4 error means that a file cannot be added to the pool
(ie: a new hardlink cpool/5/e/e/5ee4aa1b190383553c1a7712ad260358
to /data/BackupPC/backups/pc/slacker/2/fEric/fhello.txt).

You've checked the obvious things.

>   Filesystem      Inodes   IUsed    IFree IUse% Mounted on
>   /dev/hdb1     39075840 1011307 38064533    3% /dev/backupdisk

Your TOPDIR is /data/BackupPC/backups, but your disk is mounted
on /dev/backupdisk.  Is that right?

What happens when you run:

df /data/BackupPC/backups/pc/slacker
df /data/BackupPC/backups/cpool

Does it show the same file system?

Does the directory /data/BackupPC/backups/cpool/5/e/e exist?  What happens when
you try to manually make the link:

su backuppc
link /data/BackupPC/backups/pc/slacker/2/fEric/fhello.txt 
/data/BackupPC/backups/cpool/5/e/e/5ee4aa1b190383553c1a7712ad260358

Craig



Re: [BackupPC-users] TrashCleanSleepSec and drive spindown

2007-04-09 Thread Craig Barratt
Simon writes:

> So far I've determined that "$Conf{TrashCleanSleepSec} = '300'" isn't 
> going to be doing me any favours. I've bumped it up to once an hour (and 
> might do so to once a day; I'm only backing up three machines and the 
> data doesn't change at a frantic rate).
> 
> Is this safe?

Yes.

> If BackupPC wakes once an hour, on the occasions when it doesn't need to 
> do anything (i.e. all machines are backed up or can't be pinged), will 
> it touch /var/lib/backuppc/? If so, can I stop it?

No.  Each time it wakes up it will queue all the PCs, and the backups
file for each host will be checked to see if anything needs to be done.

You should reduce $Conf{WakeupSchedule}.
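
For example, in config.pl (a sketch; choose hours that fit your site):

$Conf{WakeupSchedule} = [1, 13];   # wake at 01:00 and 13:00 only
                                   # (BackupPC_nightly runs at the
                                   # first hour listed, so keep at
                                   # least one entry)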

Craig



[BackupPC-users] Does ordering file access by inode improve performance?

2007-04-09 Thread Craig Barratt
Various parts of BackupPC spend a lot of time traversing large
trees of files, including BackupPC_dump, BackupPC_trashClean
and BackupPC_nightly.

As many people have observed, over time BackupPC's pooling results
in directories with files that are widely dispersed across the disk.
This makes disk seeks the performance bottleneck.

Currently BackupPC processes all the files in a directory by
reading the directory and processing each file in the order
returned by the directory read.

Simon Strack from Monash U noted a couple of years ago that disk
seeks can be reduced significantly by sorting the directory read
results by inode.  If numeric inode is closely correlated with the
disk position (ie: block) then the files are processed in an order
that reduces disk seeks.

Perl's built-in directory reading functions just return a file name.
The perl module IO::Dirent additionally returns the inode and file
type, which avoids a stat() on each file.

I'm interested in exploring whether IO::Dirent works with different
operating and file systems and, if so, whether traversing those file
systems by sorting inodes returned by IO::Dirent provides any benefit.
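
The core of the idea looks roughly like this (a minimal sketch; the
attached scripts are the real test, and IO::Dirent's readdirent()
interface is assumed):

use strict;
use warnings;
use IO::Dirent;   # readdirent() returns name, type and inode per entry

my $dir = shift @ARGV or die "usage: $0 DIR\n";
opendir(my $dh, $dir) or die "opendir $dir: $!\n";
my @entries = readdirent($dh);   # list of hashrefs: {name, type, inode}
closedir($dh);

# stat the files in ascending inode order; if inode number roughly
# tracks on-disk position, the disk head sweeps forward instead of
# seeking back and forth
for my $e (sort { $a->{inode} <=> $b->{inode} } @entries) {
    next if $e->{name} eq '.' or $e->{name} eq '..';
    stat("$dir/$e->{name}");
}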

I am asking for some volunteers to do the following:

 - install IO::Dirent from CPAN.

 - unpack the attached tar file in a directory

 - make sure IO::Dirent works (ie: returns correct type and inode
   information) on the file system you will test by running the
   inodeVerify script:

su backuppc
mkdir TOPDIR/temp
cd TOPDIR/temp
inodeVerify

   It should print "IO::Dirent is ok".

   You can remove the temp directory.

 - run the inodeTest benchmark on a large directory tree
   (eg: /data/BackupPC/cpool or /data/BackupPC/cpool/0 or
   /data/BackupPC/cpool/[0-7]).  You need a large enough
   tree to render caching unimportant, eg: to do the
   entire pool:

su backuppc
inodeTest TOPDIR/cpool

   or one of these (1/16 of the pool, 1/4 of the pool or 1/2 of
   the pool respectively):

inodeTest TOPDIR/cpool/0
inodeTest TOPDIR/cpool/[0-3]
inodeTest TOPDIR/cpool/[0-7]

   The benchmark traverses the tree and stats each file,
   first without inode sorting, and then with inode sorting.

   The pair of tests is repeated 3 times, and the first pair
   is ignored to reduce the measurement error due to caching,
   which tends to benefit the second and subsequent runs.

   If the run time on the last 4-5 runs is way shorter than
   the first then caching is dominating and you need to re-run
   with a larger tree.

   The ratio of elapsed time taken for the two non-sorted
   runs to the two sorted runs is printed.

   You should make sure the load from other usage on the file
   system is low, or at least relatively constant, during the
   test - otherwise the results won't be meaningful.

I'd like to get the following info from you: the output from the
two scripts, the OS, the file system type and raid or lvm setup.

Please email info to me off list and I will summarize.

Craig

Attachment: inode.tgz (GNU Unix tar archive)


Re: [BackupPC-users] rsync error: error allocating core memory buffers

2007-04-09 Thread Bernhard Ott
 Original Message 
Subject: Re:[BackupPC-users] rsync error: error allocating core memory 
buffers
From: Holger Parplies <[EMAIL PROTECTED]>
To: John Hannfield <[EMAIL PROTECTED]>
Date: 29.03.2007 02:24

 > Hi,
 >
 > John Hannfield wrote on 28.03.2007 at 16:12:23 [[BackupPC-users] 
rsync error: error allocating core memory buffers]:
 >> Backups work fine, but restores over rsync and ssh are failing with an
 >> rsync error:
 >> [...]
 >> Has anyone seen this before and know of a solution?
 >
 > no, but I notice that you are running different rsync versions (well, 
a new
 > rsync on the host and an older File::RsyncP on the BackupPC server, as it
 > seems):
 >
 >> Got remote protocol 29
 >> Negotiated protocol version 28
 >
Even the latest File::RsyncP only supports protocol versions up
to 28 (as Craig pointed out).

Regards,
Bernhard

