Re: Path to rsync Binary?

2002-10-21 Thread tim.conway
SunOS 5.7, User Commands, rsync(1), last change 25 Jan 2002: -e, --rsh=COMMAND specify rsh replacement; --rsync-path=PATH specify path to rsync on the remote machine. Tim Conway [EMAIL
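A minimal sketch of how those two options combine, assuming the remote host keeps its rsync binary in /opt/local/bin; the host name and every path here are hypothetical placeholders, not from the thread, and the command is echoed rather than run since it would need a live remote host:

```shell
# --rsync-path tells the local rsync where to find the binary on the
# remote side; -e/--rsh selects the remote-shell transport (ssh here).
remote_rsync=/opt/local/bin/rsync   # assumed install location (hypothetical)
cmd="rsync -av -e ssh --rsync-path=$remote_rsync user@remotehost:/export/data/ /local/backup/"
echo "$cmd"
```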

Re: ERROR: buffer overflow in receive_file_entry

2002-10-21 Thread Craig Barratt
Has anyone seen this error: ns1: /acct/peter rsync ns1.pad.com::acct overflow: flags=0xe8 l1=3 l2=20709376 lastname=. ERROR: buffer overflow in receive_file_entry rsync error: error allocating core memory buffers (code 22) at util.c(238) ns1: /acct/peter Either something is wrong with

Any work-around for very large number of files yet?

2002-10-21 Thread Crowder, Mark
Yes, I've read the FAQ, just hoping for a boon... I'm in the process of relocating a large amount of data from one nfs server to another (Network Appliance filers). The process I've been using is to nfs mount both source and

Re: Any work-around for very large number of files yet?

2002-10-21 Thread tim.conway
Mark: You are S.O.L. There's been a lot of discussion on the subject, and so far, the only answer is faster machines with more memory. For my own application, I have had to write my own system, which can be best described as find, sort, diff, grep, cut, tar, gzip. It's a bit more
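A rough, runnable sketch of the manifest-based approach Tim describes (find, sort, compare, tar, gzip), substituting comm for the diff/grep/cut step for brevity; all paths are throwaway demo directories under /tmp, not anything from the thread:

```shell
# Build sorted file manifests of source and destination, then copy only
# the names that exist solely in the source.
src=/tmp/demo_src; dst=/tmp/demo_dst
rm -rf "$src" "$dst"; mkdir -p "$src" "$dst"
echo one > "$src/a"; echo two > "$src/b"
echo one > "$dst/a"                          # already on the destination
( cd "$src" && find . -type f | sort ) > /tmp/src.list
( cd "$dst" && find . -type f | sort ) > /tmp/dst.list
# comm -23 prints lines unique to the first (source) list
comm -23 /tmp/src.list /tmp/dst.list |
    tar -C "$src" -czf /tmp/delta.tgz -T -   # GNU tar: -T - reads names from stdin
tar -C "$dst" -xzf /tmp/delta.tgz
```

Note this only detects names missing on the destination, not changed contents of existing files, which is part of why the pipeline is only a partial substitute for rsync.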

rsync: read error: Connection timed out

2002-10-21 Thread Ryan
I am rsyncing several directories, some of which have over 150,000 files ... I have seen this error message several times: rsync: read error: Connection timed out rsync error: error in rsync protocol data stream (code 12) at io.c(162) rsync: connection unexpectedly closed (359475 bytes read

Re: Any work-around for very large number of files yet?

2002-10-21 Thread jw schultz
On Mon, Oct 21, 2002 at 09:37:45AM -0500, Crowder, Mark wrote: Yes, I've read the FAQ, just hoping for a boon... I'm in the process of relocating a large amount of data from one nfs server to another (Network Appliance filers). The process I've been using is to nfs mount both source and

RE: Any work-around for very large number of files yet?

2002-10-21 Thread Crowder, Mark
JW (and others), Thanks for the input. --whole-file did indeed allow it to reach the failure point faster... I've been experimenting with find/cpio, and there's probably an answer there. Thanks Again, Mark -Original Message- From: jw schultz [mailto:jw@pegasys.ws] Sent: Monday,

pruning old files

2002-10-21 Thread Shinichi Maruyama
jw> In the past I found that using find was quite good for this. jw> Use touch to create a file with a mod_time just before you jw> started the last sync. Then from inside $src run jw> find . -newer $touchfile -print|cpio -pdm $dest For pruning, how about adding this feature to rsync? Is it
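The quoted recipe can be made concrete as a runnable sketch; the directories and timestamp file below are throwaway demo paths, not from the thread:

```shell
# Copy only files modified since the last sync, using a timestamp file
# as the reference point for find -newer.
src=/tmp/prune_src; dest=/tmp/prune_dest
rm -rf "$src" "$dest"; mkdir -p "$src" "$dest"
touch -t 200210210000 /tmp/lastsync        # stamp from the "previous" run
echo stale > "$src/stale"
touch -t 200210200000 "$src/stale"         # older than the stamp: skipped
echo fresh > "$src/fresh"                  # newer than the stamp: copied
( cd "$src" && find . -newer /tmp/lastsync -print | cpio -pdm "$dest" )
```

cpio's pass-through mode (-p) recreates directories (-d) and preserves mtimes (-m) under $dest, so only the post-stamp files land there.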

Re: Rsync and ignore nonreadable and timeout

2002-10-21 Thread tim.conway
All parameters are given as parameter/value pairs, joined by '=' characters. This is important even for apparently simple assertions, as there is only one name for each parameter... i.e. there is no "do not ignore nonreadable" or "do not use chroot", but rather "ignore nonreadable = no" and "use chroot =
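A sketch of what that looks like in rsyncd.conf; the module name and path are made up for illustration, and the point is that the yes/no lives in the value, never in a negated parameter name:

```ini
# Hypothetical module definition
[acct]
    path = /acct
    read only = yes
    # a value, not a separate "do not use chroot" parameter:
    use chroot = no
    # likewise, the negation goes in the value:
    ignore nonreadable = yes
```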

Re: pruning old files

2002-10-21 Thread Brad Hards
On Tue, 22 Oct 2002 10:46, Shinichi Maruyama wrote: jw> In the past I found that using find was quite good for this. jw> Use touch to create a file with a mod_time just before you jw> started the last sync. Then from inside $src run jw> find .

Re: pruning old files

2002-10-21 Thread Shinichi Maruyama
bhards> --exclude-older=SECONDS bhards> exclude files older than SECONDS before bhards> Define older? bhards> Do you mean atime, mtime or ctime? I think mtime is natural, like traditional find's -newer or -mtime. Of course it may be good to be able to specify them, if someone needs
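A runnable sketch of the mtime-based semantics being proposed, expressed with find's -mtime test; the demo directory and file names under /tmp are hypothetical:

```shell
# List files last modified more than 7 days ago -- the set an
# "--exclude-older" style option would skip (or prune).
d=/tmp/prune_demo
rm -rf "$d"; mkdir -p "$d"
echo recent > "$d/new.txt"                 # mtime: now
echo ancient > "$d/old.txt"
touch -t 200210210000 "$d/old.txt"         # mtime: far in the past
find "$d" -type f -mtime +7 -print
```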