Re: [BackupPC-users] An idea to fix both SIGPIPE and memory issues with rsync

2009-12-16 Thread Robin Lee Powell
On Tue, Dec 15, 2009 at 05:42:55PM -0800, Robin Lee Powell wrote:
 Just to give a sense of scale here:
 
 # date ; find /pictures -xdev -type f -printf '%h\n' > /tmp/dirs ; date
 Tue Dec 15 12:50:57 PST 2009
 Tue Dec 15 17:26:44 PST 2009
 
 (something I ran to try to figure out how to partition the tree)
 
 The other one isn't even close to finishing, as far as I can tell.

Just for amusement:

# date ; find /data -xdev -type f -printf '%h\n' > /tmp/dirs ; date
Tue Dec 15 12:51:17 PST 2009
[snip many errors caused by things disappearing during the walk]
Wed Dec 16 00:55:52 PST 2009

RedHat GFS *really* doesn't like directories with large numbers of
files.  It's not a big fan of stat() calls, either.
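
Once a walk like that finishes, the /tmp/dirs listing (one line per
file's parent directory) makes it easy to see which subtrees dominate;
a quick sketch:

    # files per directory, busiest first
    sort /tmp/dirs | uniq -c | sort -rn | head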

-Robin

-- 
They say:  The first AIs will be built by the military as weapons.
And I'm  thinking:  Does it even occur to you to try for something
other  than  the default  outcome?  See http://shrunklink.com/cdiz
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/



Re: [BackupPC-users] An idea to fix both SIGPIPE and memory issues with rsync

2009-12-16 Thread Carl Wilhelm Soderstrom
On 12/15 05:42 , Robin Lee Powell wrote:
 The other one isn't even close to finishing, as far as I can tell.
 In the face of it taking nigh-on 5 hours just to *walk the tree*,
 from the local host, I haven't been focusing on little things like
 ssh encryption choices too much.  :)

So you're convinced the speed bottleneck is firmly on the filesystem/disk array?

I've noticed that changing the SSH cipher sometimes helps substantially, but
sometimes (often?) not much. It's a worthwhile optimization when it works,
though. I think I've seen a 10-15% improvement in backup times.
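
For reference, a cipher override is just an extra option on the ssh
command, and the effect is easy to measure by hand; a sketch, with
hypothetical host and file names (arcfour was the usual low-CPU choice
at the time):

    # compare raw throughput under the default cipher vs. arcfour
    time ssh root@clienthost 'cat /some/large/file' > /dev/null
    time ssh -c arcfour root@clienthost 'cat /some/large/file' > /dev/null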

As for your link issues: the suggestion that it may be a keepalive problem
reminded me of this tool:
http://freshmeat.net/projects/openssh-watchdog

It may be worth trying that.
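
Stock OpenSSH can also do the keepalive part without a patch or a
watchdog; a sketch, host name hypothetical:

    # send an application-level keepalive every 60s; give up after 3 misses
    ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=3 root@clienthost uptime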

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] An idea to fix both SIGPIPE and memory issues with rsync

2009-12-16 Thread Carl Wilhelm Soderstrom
On 12/16 06:38 , Carl Wilhelm Soderstrom wrote:
 On 12/15 05:42 , Robin Lee Powell wrote:
  The other one isn't even close to finishing, as far as I can tell.
  In the face of it taking nigh-on 5 hours just to *walk the tree*,
  from the local host, I haven't been focusing on little things like
  ssh encryption choices too much.  :)
 
 So you're convinced the speed bottleneck is firmly on the filesystem/disk 
 array?
 
 I've noticed that changing the SSH cipher sometimes helps substantially, but
 sometimes (often?) not much. It's a worthwhile optimization when it works,
 though. I think I've seen a 10-15% improvement in backup times.
 
 As for your link issues: the suggestion that it may be a keepalive problem
 reminded me of this tool:
 http://freshmeat.net/projects/openssh-watchdog
 
 It may be worth trying that.

According to the main page for the openssh-watchdog:
http://www.sc.isc.tohoku.ac.jp/~hgot/sources/openssh-watchdog.html

It looks like Debian has a patch that provides this functionality already.
Down at the bottom of the page it lists other tools with similar
functionality, including:

OpenSSH package from Debian   (`ProtocolKeepAlive' option)

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] An idea to fix both SIGPIPE and memory issues with rsync

2009-12-16 Thread Les Mikesell
Robin Lee Powell wrote:
 On Tue, Dec 15, 2009 at 05:42:55PM -0800, Robin Lee Powell wrote:
 Just to give a sense of scale here:

 # date ; find /pictures -xdev -type f -printf '%h\n' > /tmp/dirs ; date
 Tue Dec 15 12:50:57 PST 2009
 Tue Dec 15 17:26:44 PST 2009

 (something I ran to try to figure out how to partition the tree)

 The other one isn't even close to finishing, as far as I can tell.
 
 Just for amusement:
 
 # date ; find /data -xdev -type f -printf '%h\n' > /tmp/dirs ; date
 Tue Dec 15 12:51:17 PST 2009
 [snip many errors caused by things disappearing during the walk]
 Wed Dec 16 00:55:52 PST 2009
 
 RedHat GFS *really* doesn't like directories with large numbers of
 files.  It's not a big fan of stat() calls, either.

Usually it is just simple physics.  The disk head can only be in one place at a 
time.  It's never where you want it, and it is slow to move.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] An idea to fix both SIGPIPE and memory issues with rsync

2009-12-16 Thread Ralf Gross
Robin Lee Powell schrieb:
 RedHat GFS *really* doesn't like directories with large numbers of
 files.  It's not a big fan of stat() calls, either.


Well, a network cluster filesystem is no fun to back up and might very
well be the bottleneck.

Ralf



Re: [BackupPC-users] An idea to fix both SIGPIPE and memory issues with rsync

2009-12-16 Thread Robin Lee Powell
I want to start by saying that I appreciate all the help and
suggestions y'all have given on something that's obviously not your
problem.  :)  Unfortunately, it looks like this problem is (1) far
more interesting than I thought and (2) might be in BackupPC itself.

On Tue, Dec 15, 2009 at 11:29:46AM -0900, Chris Robertson wrote:
 My guess would be that your firewalls are set up to close
 inactive TCP sessions.  Try adding "-o ServerAliveInterval=60"
 to your RsyncClientCmd (so it looks something like "$sshPath -C -q
 -x -o ServerAliveInterval=60 -l root $host $rsyncPath $argList+")
 and see if that solves your problem.
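
For reference, that suggestion as a config.pl sketch (the rest of the
command follows the stock RsyncClientCmd default):

    $Conf{RsyncClientCmd} = '$sshPath -C -q -x -o ServerAliveInterval=60'
                          . ' -l root $host $rsyncPath $argList+';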

Also suggested by several other people.

Well, umm, it kept the connection open.  Now I've got [counts] 8
BackupPC_dump processes doing absolutely nothing.

The one I was stracing and tcpdumping and so on looks fine: ssh is
sending the keepalives, and they're being responded to
appropriately.  The rsync is up on the client, the dump is up on the
server, all procs are connected to each other correctly.

They're just not *doing* anything.  Nothing has errored out; BackupPC
thinks everything is fine.

Some of the places this is happening are very small backups that
usually take a matter of minutes.

Suddenly this isn't looking like a networking problem anymore; the
networking appears to be just fine.  This is looking like a BackupPC
problem.  My version, by the way, is 3.1.0.  It's like at some point
BackupPC forgot that it was supposed to be trying to back anything
up.

Is it possible the nightly jobs are configured in some weird way
that broke things?

Anyways, the one that has the problem consistently *also* always has
it in exactly the same place; I was watching it in basically every
way possible, so here comes the debugging stuff.  As will become
obvious, I probably need to turn the ssh debugging down and the
BackupPC debugging up.

If, for some reason, you want raw copies of any of this, let me know
and I'll post them.

On the machine in question, we're backing up /pictures/.
/pictures/agate/shared/pictures/ has about a thousand subdirs that
each have about a thousand subdirs; the *vast* majority of the files
on the file system are in there.

The backup always stops doing anything right after lstat of the last
file in /pictures/agate/shared/pictures/0216/2987/.  According to ls
-U, /pictures/agate/shared/pictures/0216/ is the last directory in
/pictures/agate/shared/pictures/, and
/pictures/agate/shared/pictures/0216/2987/ is the last directory in
/pictures/agate/shared/pictures/0216/.  Now, there are other
directories in /pictures/agate/shared/ that come after pictures/ in
ls -U order, but with rsync 3.0 on the client, I'm assuming those are
handled in parallel.

So it looks like the backup is stalling right after finishing
checking all the files.

No new dir is made.  There's a NewFileList, but it's empty.

XferLog.z is mostly useless; too many -v flags in my ssh call,
apparently, but here's the last bit anyways (copied from less):

- -
browse.jpgCE^...@^@w85@I:1^Kdisplay.jpgpA3^...@^@y85@I:1^Ksegment.jpg8D^...@^@t85@I:$^Y3187/DSCN1886_gallery.jpgD4^...@^@}...@i:2^Pfor_cropping.jpg94y...@^@88^...@i:2^Klibrary.jpg8A
^...@^@8A^...@i:2^Qmarket_banner.jpgNC1^...@^@8C^...@i:2
browse.jpg~...@^@80^...@i:1^D.JPG^TA0^I^@8C^...@i:1
_badge.j...@^@87^...@i:2^Lselector.jpg'^...@^@84^...@i:3^Nhop_banner.jpgF788^...@^@89^...@i:2^Kdisplay.jpgBFx...@^@82^...@i:2^ksegment.jpgl...@^@^...@i:%(968/food-wrapping-mr-gallery-x_badge.jpgD5b...@^@^...@i:d^kgallery.jpg^...@^@^...@i:D^Qmarket_banner.jpgA8D1^...@^@^...@i:D^Klibrary.jpgAB^...@^@^...@iBAD^Oshop_banner.jpg8CCB^...@^@:D
browse.jpg^...@^@@I:C^D.jpgE3^c...@^^$@I:C^L_segment.jpgAD
^...@^@@I:d^pfor_cropping.jpg!*...@^z$@I:D^Lselector.jpg96^...@^@^...@i:D^Kdisplay.jpgR1^A^@@I:%^K539/061.JPGDCf...@^@d
 @I:,^l_gallery.jpgr...@^@a @I:-^Klibrary.jpgD7^...@^@c 
@I:-^Pfor_cropping.jpgF9u...@^@` @I:-^Lselector.jpg8C^...@^@b 
@I:-^Qmarket_banner.jpgD2l...@^@d @I:-^oshop_banner.jpg^...@^@c @I:-
browse.jpgEE^...@^@b @I:-^Ksegment.jpg8D^...@^@a 
@I:-^kdisplay.j...@^@^...@b @I:-^ibadge.jpg...@^@c 
@I:$^T7026/blog_browse.jpgD9^O
^...@^@D7o...@i:.^Klibrary.jpgEC^...@^@D9o...@i:.^lselector.jpg=...@^@D8o...@i:.^Kgallery.jpg,^...@^@D5o...@i:.^Qmarket_banner.jpgl
- -

The only relevant, and last, LOG line is: 2009-12-16 00:12:00 incr backup 
started back to 2009-12-12 00:00:08 (backup #0) for directory /

Current time on the box is Wed Dec 16 10:54:14 PST 2009
(boxes agree to within a second; didn't test beyond flipping back
and forth and typing date).

Last bit of strace of the dump proc on the SERVER, which is:

backuppc  4607 24499  0 00:12 ?        00:00:07 /usr/bin/perl
/usr/local/bin/BackupPC_dump -i ey02-s00416_pictures

- -
04:34:54.081334 read(7,
"nner.jpgDX\1\0mW\342I\272-\vlibrary.jpg\177\24\0\0:,\4.JPGN
[remainder of message truncated in the archive]

Re: [BackupPC-users] An idea to fix both SIGPIPE and memory issues with rsync

2009-12-16 Thread Robin Lee Powell
Sorry, a couple of things I forgot.

On Wed, Dec 16, 2009 at 11:15:37AM -0800, Robin Lee Powell wrote:
 Anyways, the one that has the problem consistently *also* always has
 it in exactly the same place; I was watching it in basically every
 way possible, so here comes the debugging stuff.  As will become
 obvious, I probably need to turn the ssh debugging down and the
 BackupPC debugging up.

I take that back; I've got:

$Conf{XferLogLevel} = 2;

and there's nothing from BackupPC in the XferLog around or after the
time it stopped working.  I'm going to set it higher anyways if
that's likely to help; what's a good value?
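
For anyone following along, the knob in question is a per-host
config.pl setting; a sketch (the useful ceiling isn't documented, so
treat the value as a guess):

    $Conf{XferLogLevel} = 5;   # 0 = quiet, 1 = one line per file; higher adds detail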

Also, no significant CPU activity on either side, so it's not like
rsync is sitting there contemplating what files to transfer.

Just for interest, memory usage on the client:

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
21663 root  15   0  436m 402m  640 S0  6.6 198:21.99 /usr/bin/rsync 
--server --sender ...

That's rsync 3.0 *without* --hard-links; a non-trivial burden on a
client machine.

-Robin

-- 
They say:  The first AIs will be built by the military as weapons.
And I'm  thinking:  Does it even occur to you to try for something
other  than  the default  outcome?  See http://shrunklink.com/cdiz
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/



Re: [BackupPC-users] An idea to fix both SIGPIPE and memory issues with rsync

2009-12-16 Thread Robin Lee Powell

One last thing: Stop/Dequeue Backup did the right thing: all parts of
the backup, on both server and client, were torn down correctly.

So, clearly, there was nothing wrong with the communication between
parts as such.

-Robin

-- 
They say:  The first AIs will be built by the military as weapons.
And I'm  thinking:  Does it even occur to you to try for something
other  than  the default  outcome?  See http://shrunklink.com/cdiz
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/



Re: [BackupPC-users] pools are showing data but clients have no backups

2009-12-16 Thread Sabuj Pattanayek
Hi,

Added some debugging code to Storage/Text.pm sub TextFileWrite, which
used to look like this:

rename("$file.new", $file) if ( -f "$file.new" );

I changed it to :

if ( -f "$file.new" ) {
    my $renRet = rename("$file.new", $file);
    if ( $renRet ) {
        print "Rename OK\n";
    }
    else {
        print "Rename failed $file.new to $file\n";
    }
}
else {
    print "failed -f $file.new\n";
}

This is what BackupPC_fixupBackupSummary returns when I run it as root
for one of the backup hosts where backups has only up to #45 and
backups.new has #46:

Reading /glfsdist/backuppc1/pc/home_caox/46/backupInfo
Adding info for backup 46 from backupInfo file
Rename failed /glfsdist/backuppc1/pc/home_caox/backups.new to
/glfsdist/backuppc1/pc/home_caox/backups

[r...@gluster1 ~]# ls -l  /glfsdist/backuppc1/pc/home_caox/backups*
-rw-r- 1 root root 3440 Dec 15 12:41
/glfsdist/backuppc1/pc/home_caox/backups
-rw-r- 1 root root 3514 Dec 16 14:00
/glfsdist/backuppc1/pc/home_caox/backups.new

Let's try a manual move:

[r...@gluster1 ~]# mv /glfsdist/backuppc1/pc/home_caox/backups.new
/glfsdist/backuppc1/pc/home_caox/backups
mv: overwrite `/glfsdist/backuppc1/pc/home_caox/backups'? y
mv: cannot move `/glfsdist/backuppc1/pc/home_caox/backups.new' to
`/glfsdist/backuppc1/pc/home_caox/backups': File exists

hrmm... here's the strace -ff :

stat("/glfsdist/backuppc1/pc/home_caox/backups", y

^ that's me hitting 'y' when it asks if I want to overwrite the file.

{st_mode=S_IFREG|0640, st_size=3440, ...}) = 0
lstat("/glfsdist/backuppc1/pc/home_caox/backups.new",
{st_mode=S_IFREG|0640, st_size=3514, ...}) = 0
lstat("/glfsdist/backuppc1/pc/home_caox/backups",
{st_mode=S_IFREG|0640, st_size=3440, ...}) = 0
stat("/glfsdist/backuppc1/pc/home_caox/backups",
{st_mode=S_IFREG|0640, st_size=3440, ...}) = 0
geteuid()   = 0
getegid()   = 0
getuid()= 0
getgid()= 0
access("/glfsdist/backuppc1/pc/home_caox/backups", W_OK) = 0
rename("/glfsdist/backuppc1/pc/home_caox/backups.new",
"/glfsdist/backuppc1/pc/home_caox/backups") = -1 EEXIST (File exists)
open("/usr/share/locale/locale.alias", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=2528, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
0) = 0x2b6fc324b000
read(3, "# Locale name alias data base.\n#"..., 4096) = 2528
read(3, "", 4096)   = 0
close(3)= 0
munmap(0x2b6fc324b000, 4096)= 0
open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/coreutils.mo",
O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US.utf8/LC_MESSAGES/coreutils.mo",
O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) =
-1 ENOENT (No such file or directory)
open("/usr/share/locale/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY)
= -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) =
-1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1
ENOENT (No such file or directory)
write(2, "mv: ", 4mv: ) = 4
write(2, "cannot move `/glfsdist/backuppc1"..., 104cannot move
`/glfsdist/backuppc1/pc/home_caox/backups.new' to
`/glfsdist/backuppc1/pc/home_caox/backups') = 104
open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) =
-1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) =
-1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1
ENOENT (No such file or directory)
open("/usr/share/locale/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1
ENOENT (No such file or directory)
open("/usr/share/locale/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1
ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT
(No such file or directory)
write(2, ": File exists", 13: File exists)   = 13
write(2, "\n", 1
)   = 1
close(1)= 0
exit_group(1)   = ?

This has got to be some glusterfs weirdness, but why?  Any ideas?
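
A minimal way to reproduce the failure outside of mv and BackupPC,
using the same paths (a sketch; expects the same EEXIST if the
filesystem is at fault):

    perl -e 'rename("/glfsdist/backuppc1/pc/home_caox/backups.new",
                    "/glfsdist/backuppc1/pc/home_caox/backups")
                 or die "rename: $!\n";'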

Thanks,
Sabuj

 The file backups.new is written out by a function called BackupInfoWrite
 (which further uses a function called TextFileWrite), which is called
 both in BackupPC_dump, when a backup completes and in BackupPC_link,
 when the linking of the backup completes. The file backups.new is
 written, verified and then renamed to backups. If the verification
 fails, the rename does not occur. I'd suggest checking for permissions
 issues or file system corruption.

 Chris


Re: [BackupPC-users] Backuppc only works from the command line

2009-12-16 Thread Chris Robertson
M. Sabath wrote:
 Hello all,

 I use backuppc on Debian 5.
 Since I upgraded from Debian 4 to Debian 5 backuppc doesn't run
 automatically.

 Our server runs only during daytime, between 7 am and 7 pm (19:00)
   

Let me see if I have this right...  Your server is only powered on from 
7 am to 7 pm...

 From the command line all works fine.

 Using backuppc with Debian 4.0 all worked fine.

 What am I doing wrong?


 Thank you

 Markus


 --


 Here are some configuration entries of my config.pl which might be
 interesting:

 $Conf{WakeupSchedule} = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
 15, 16, 17, 18, 19, 20, 21, 22, 23];


 $Conf{FullPeriod} = 6.97;
 $Conf{IncrPeriod} = 0.97;

 $Conf{FullKeepCnt} = [3,1,1];

 $Conf{BackupsDisable} = 0;

 $Conf{BlackoutPeriods} = [
 {
  hourBegin =>  7.0,
  hourEnd   => 19.5,
   

...your blackout period covers that whole time...

  weekDays  => [1, 2, 3, 4, 5],
   

...at least on the week days, and you are wondering why BackupPC is not 
working automatically?

 },
 ];
   

Let me know if my understanding is not correct.  Otherwise, I'd suggest 
reading the fine manual: 
http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_blackoutperiods_

Chris




[BackupPC-users] Slow link options

2009-12-16 Thread Kameleon
I have a few remote sites I am wanting to back up using backuppc. However,
two are on slow DSL connections and the other two are on T1s. I did some math
and roughly figured that the DSL connections, having a 256k upload, could do
approximately 108MB/hour of transfer. With these clients having around 65GB
each that would take FOREVER!!!
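
The arithmetic checks out: 256 kbit/s is 32 kB/s, or roughly 112
MB/hour before protocol overhead:

    # 256 kbit/s upload expressed as MB/hour (integer shell arithmetic)
    echo $((256 / 8 * 3600 / 1024))   # prints 112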

I am able to take the backuppc server to 2 of the remote locations (the DSL
ones) and put it on the LAN with the server to be backed up to get the
initial full backup. What I am wondering is this: What do others do with
slow links like this? I need a full backup at least weekly and incrementals
nightly. Is there an easy way around this?

Thanks in advance.


Re: [BackupPC-users] An idea to fix both SIGPIPE and memory issues with rsync

2009-12-16 Thread Les Mikesell
Robin Lee Powell wrote:
 
 They're just not *doing* anything.  Nothing has errored out; BackupPC
 thinks everything is fine.
 
 Some of the places this is happening are very small backups that
 usually take a matter of minutes.
 
 Suddenly this isn't looking like a networking problem anymore; the
 networking appears to be just fine.  This is looking like a BackupPC
 problem.  My version, by the way, is 3.1.0.  It's like at some point
 BackupPC forgot that it was supposed to be trying to back anything
 up.

This sounds vaguely familiar, perhaps like:
http://www.mail-archive.com/backuppc-de...@lists.sourceforge.net/msg00321.html
but I don't know if that had a resolution.  It doesn't seem common, though.

 The backup always stops doing anything right after lstat of the last
 file in /pictures/agate/shared/pictures/0216/2987/.  According to ls
 -U, /pictures/agate/shared/pictures/0216/ is the last directory in
 /pictures/agate/shared/pictures/, and
 /pictures/agate/shared/pictures/0216/2987/ is the last directory in
 /pictures/agate/shared/pictures/0216/.  Now, there are other
 directories in /pictures/agate/shared/ that come after pictures/ in
 ls -U order, but with rsync 3.0 on the client, I'm assuming those are
 handled in parallel.

I don't think that's true.  Backuppc is going to insist on protocol 28 
which, I think, means it has to get the entire directory tree before 
starting the comparison.

 I'm quite stumped.  Any ideas?

Is your client rsync binary up to date with any distribution updates? 
Maybe it is just a bug that has already been fixed.  Also, I thought at 
one point you said this was only happening with incrementals.  Have you 
tried forcing a full run?

-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] Slow link options

2009-12-16 Thread Chris Robertson
Kameleon wrote:
 I have a few remote sites I am wanting to back up using backuppc. 
 However, two are on slow DSL connections and the other two are on T1s. 
 I did some math and roughly figured that the DSL connections, having a 
 256k upload, could do approximately 108MB/hour of transfer. With these 
 clients having around 65GB each that would take FOREVER!!!

 I am able to take the backuppc server to 2 of the remote locations 
 (the DSL ones) and put it on the LAN with the server to be backed up 
 to get the initial full backup. What I am wondering is this: What do 
 others do with slow links like this? I need a full backup at least 
 weekly and incrementals nightly. Is there an easy way around this?

The feasibility of this depends entirely on the rate of change of the 
backup data.  Once you get the initial full, rsync backups only transfer 
changes.  Have a look at the documentation 
(http://backuppc.sourceforge.net/faq/BackupPC.html#backup_basics) for 
more details.


 Thanks in advance.

Chris




Re: [BackupPC-users] An idea to fix both SIGPIPE and memory issues with rsync

2009-12-16 Thread Robin Lee Powell
On Wed, Dec 16, 2009 at 03:13:23PM -0600, Les Mikesell wrote:
 Robin Lee Powell wrote:
  
  They're just not *doing* anything.  Nothing has errored out; BackupPC
  thinks everything is fine.
  
  Some of the places this is happening are very small backups that
  usually take a matter of minutes.
  
  Suddenly this isn't looking like a networking problem anymore; the
  networking appears to be just fine.  This is looking like a BackupPC
  problem.  My version, by the way, is 3.1.0.  It's like at some point
  BackupPC forgot that it was supposed to be trying to back anything
  up.
 
 This sounds vaguely familiar, perhaps like:
 http://www.mail-archive.com/backuppc-de...@lists.sourceforge.net/msg00321.html
 but I don't know if that had a resolution.  It doesn't seem common, though.

That part turns out to be Extremely Lame: it was adding -vv to
ssh.  Drop the -vs and put the -q back in, and that part gets
better.  Which means I'm sort of back to the drawing board.  I'll
try with ServerAliveInterval=60 and -q and let y'all know.

  The backup always stops doing anything right after lstat of the
  last file in /pictures/agate/shared/pictures/0216/2987/.
  According to ls -U, /pictures/agate/shared/pictures/0216/ is the
  last directory in /pictures/agate/shared/pictures/, and
  /pictures/agate/shared/pictures/0216/2987/ is the last directory
  in /pictures/agate/shared/pictures/0216/.  Now, there are other
  directories in /pictures/agate/shared/ that come after pictures/
  in ls -U order, but with rsync 3.0 on the client, I'm assuming
  those are handled in parallel.
 
 I don't think that's true.  Backuppc is going to insist on
 protocol 28 which, I think, means it has to get the entire
 directory tree before starting the comparison.

Then it's even weirder.  :)  But see above.

-Robin

-- 
They say:  The first AIs will be built by the military as weapons.
And I'm  thinking:  Does it even occur to you to try for something
other  than  the default  outcome?  See http://shrunklink.com/cdiz
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/



[BackupPC-users] problem with smb transfers

2009-12-16 Thread Omid
so i've set up my backuppc server again (we got a bunch of new drives and
new server) and rather than upgrade, i just did a fresh installation.  so
i've migrated my settings from v2.1 to v3.2.0beta0, and i'm having a problem
with smb transfers.

they seem to fail.  here's a sample xfer error log file:

Running: /usr/bin/smbclient prettylady\\c\$ -I 192.168.0.244 -U
backupuser -E -N -d 1 -c tarmode\ full -Tc - \\_data_\\\* \\faxserver\\\*
full backup started for share c$
Xfer PIDs are now 20691,20690
Anonymous login successful
[ skipped 1 lines ]
tree connect failed: NT_STATUS_ACCESS_DENIED
Anonymous login successful
[ skipped 1 lines ]
tree connect failed: NT_STATUS_ACCESS_DENIED
tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0
filesTotal, 0 sizeTotal
Got fatal error during xfer (No files dumped for share c$)
Backup aborted (No files dumped for share c$)
Not saving this as a partial backup since it has fewer files than the prior
one (got 0 and 0 files versus 0)



what i don't get is why it's trying to log in anonymously??  it should be
logging in as backupuser, and the smbclient command line seems to indicate
that.  the config file for this machine is:

$Conf{SmbShareName} = 'c$';
$Conf{BackupFilesOnly} = {
  'c$' => [
   '\_data_\*',
   '\faxserver\*'
   ],
};


and in the config.pl, the lines:

$Conf{SmbShareUserName} = 'backupuser';
$Conf{SmbSharePasswd} = '';

exist.

when i try to log into the host from another windows machine, to the c$
share, using the user backupuser and the password, it connects fine.
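
One way to test the same credentials from the BackupPC server itself,
passing the password the way BackupPC does (via the PASSWD environment
variable); a sketch — substitute the real password for '...':

    PASSWD='...' /usr/bin/smbclient //prettylady/c\$ -I 192.168.0.244 \
        -U backupuser -E -d 1 -c 'ls'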

i suspect i've forgotten to update a line in my config.pl.  (i didn't copy
the one from v2.1 over; i did a fresh installation, i'm starting with a
fresh pool, and so i went through the config.pl and set it all up again.)

what did i forget?

thanks!


[BackupPC-users] problem with smb transfers (solved)

2009-12-16 Thread Omid
(unfortunately i don't receive copies of my own emails, but...)

i figured out what was going on.  (or google did... of course...)

from the page:

https://bugs.launchpad.net/ubuntu/+source/backuppc/+bug/283652

I believe that the fix you need is to edit /etc/backuppc/config.pl. There
are three strings:

$Conf{SmbClientFullCmd}
$Conf{SmbClientIncrCmd}, and
$Conf{SmbClientRestoreCmd}

which control Samba backups and restore. In all three strings remove the
-N flag.

My understanding is that the flag is no longer needed, because the login prompt
is automatically suppressed because backuppc passes the password through the
PASSWD environment variable. I reported separately a bug (
https://bugs.launchpad.net/bugs/297025) in the new version of smbclient that
prevents this password from being passed if the -N flag is used.

Let me know if this doesn't work.
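
For reference, the edited line would look roughly like this
(reconstructed from memory of the stock 3.x default, so double-check
against your own config.pl; the Incr and Restore commands get the same
one-flag change):

    $Conf{SmbClientFullCmd} = '$smbClientPath \\\\$host\\$shareName'
            . ' $I_option -U $userName -E -d 1'    # -N removed
            . ' -c tarmode\\ full -Tc$X_option - $fileList';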


but just thought i'd report back!!


Re: [BackupPC-users] Backuppc only works from the command line

2009-12-16 Thread M. Sabath
On Wednesday, 2009-12-16, at 11:27 -0900, Chris Robertson wrote:
 M. Sabath wrote:
  Hello all,
 
  I use backuppc on Debian 5.
  Since I upgraded from Debian 4 to Debian 5 backuppc doesn't run
  automatically.
 
  Our server runs only during daytime, between 7 am and 7 pm (19:00)

 
 Let me see if I have this right...  Your server is only powered on from 
 7 am to 7 pm...
 
  From the command line all works fine.
 
  Using backuppc with Debian 4.0 all worked fine.
 
  What am I doing wrong?
 
 
  Thank you
 
  Markus
 
 
  --
 
 
  Here are some configuration entries of my config.pl which might be
  interesting:
 
  $Conf{WakeupSchedule} = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
  15, 16, 17, 18, 19, 20, 21, 22, 23];
 
 
  $Conf{FullPeriod} = 6.97;
  $Conf{IncrPeriod} = 0.97;
 
  $Conf{FullKeepCnt} = [3,1,1];
 
  $Conf{BackupsDisable} = 0;
 
  $Conf{BlackoutPeriods} = [
  {
  hourBegin =>  7.0,
  hourEnd   => 19.5,

 
 ...your blackout period covers that whole time...
 
  weekDays  => [1, 2, 3, 4, 5],

 
 ...at least on the week days, and you are wondering why BackupPC is not 
 working automatically?
 
  },
  ];

 
 Let me know if my understanding is not correct.  Otherwise, I'd suggest 
 reading the fine manual: 
 http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_blackoutperiods_
 
 Chris
 



Hello Chris,

yes, you are right.
So I have to disable the blackout period.

Is this possible?
Otherwise I will set it to a time when our server is off.

Thank you,

Markus
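
For the record, an empty list disables blackout entirely; a config.pl
sketch:

    $Conf{BlackoutPeriods} = [];   # no blackout windows; backups may start at any wakeup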

