Re: I also am getting hang/timeout using rsync 2.4.6 (on solaris too)

2000-10-25 Thread Dave Dykstra

On Wed, Oct 25, 2000 at 04:47:27PM +0200, Eckebrecht von Pappenheim wrote:
 Dave Dykstra wrote:
 
  Were you using rsh, ssh, or daemon mode?
 
 I used ssh 1.2.27 on both machines. Again, with 2.3.1 we didn't have any
 hangs at all...
 
 Eckebrecht

Has that ssh been compiled without USE_PIPES in its configuration?  That's
recently been re-confirmed to be necessary for ssh to work with rsync 2.4.x.

- Dave Dykstra




Re: I also am getting hang/timeout using rsync 2.4.6 -e ssh

2000-10-26 Thread Dave Dykstra

On Sat, Oct 21, 2000 at 05:04:56AM -0700, Harry Putnam wrote:
...
  - versions of OS at both ends
 
 Redhat Linux 6.2  /  FreeBSD-4.0
 
  - versions of ssh at both ends
 
 ssh-1.2.27-5i  /  SSH Version OpenSSH-2.1, protocol versions 1.5/2.0,
 compiled with SSL (0x00904100).
 
  - versions of rsync at both ends (2.4.6 from your mail)
 Linux box:
 rsync version 2.4.1  protocol version 24
 
 FreeBSD box:
 rsync version 2.4.3  protocol version 24


rsync 2.4.1 had known problems with hangs over ssh.  Are you initiating from
the Linux side?   I think it only affects the initiating side.  Please
upgrade to 2.4.6.  I believe 2.4.3 was similar to 2.4.6 with respect to ssh
but it broke rsh connections.  The difference between rsync 2.4.1 and 2.4.3
was that 2.4.3 set O_NONBLOCK on stdin & stdout to ssh.

And has that ssh been compiled without USE_PIPES?



  - netstat output at both ends when frozen
 
 Linux Box:
 [209rt]~ # netstat -t
 Active Internet connections (w/o servers)
 Proto Recv-Q Send-Q Local Address   Foreign Address State 
...
 tcp0  45288 reader.local.lan:1020   satellite.local.lan:ssh ESTABLISHED 
...
 FreeBSD box:
 bsd # netstat -t
 Active Internet connections
 Proto Recv-Q Send-Q  Local Address  Foreign Address(state)
...
 tcp4   0  0  satellite.ssh  reader.1020ESTABLISHED
...


I don't know what netstat used to look like with the hangs rsync 2.4.1 had
over ssh, but the above numbers indicate that either the Linux or FreeBSD
operating system is at fault for not transmitting the send queue from Linux
to FreeBSD.

- Dave Dykstra




Re: rsync server Vs rsync call

2000-11-02 Thread Dave Dykstra

On Thu, Nov 02, 2000 at 06:54:24PM +, Maharajan M wrote:
 Hi All,
 
 I'm newbie for rsync and sorry if this questions are already posted in this
 list.
 
 1) I don't have the root privilage for two unix machines and I wanna mirror 2GB
 of sources between them 
 through very slow internet link. 
 
 Whether I need to start rsync server or call rsync directly. Which one is
 faster?

Should make no difference in speed.

 If I have to start rsync server, then in which  machine I have to start
 and without having root access.

You can start in either one, but I think it is better to pull than push.
You can run without root by using --port to choose an unprivileged port and
"use chroot = no" in the rsyncd.conf.

 If i have to run rsync directly, ie rsync [loacaldir] [remote m/c:remotedir],
 what are the options I have to
 use for faster transfer.

Use -z for a slow link.

 2) Assume that the internet link is down inbetween, how can i setup rsync, so
 that it will automatically resume once the link is up. 

Run it periodically from cron, or run it in a loop that retries as long as
it returns an error code.
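
A rough sketch of such a loop (module and paths made up):

    # retry until rsync succeeds; -z compresses for the slow link
    until rsync -az --timeout=600 serverhost::mymodule/ /local/copy/
    do
        sleep 300    # pause before retrying after a dropped link
    done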

- Dave Dykstra




Re: rsync runtime errors

2000-11-10 Thread Dave Dykstra

On Fri, Nov 10, 2000 at 11:33:15AM +, Maharajan M wrote:
 Hi All,
 
 I'm also getting the below errors from rsync when i tried to sync using rsync
 command.
 
 1) unexpected EOF in read_timeout
 
 2) Invalid file index 45875200 (count=4348)
 
 3) unexpected tag -7
 
 I'm getting above errors when i tried to sync files between 2 machines in a
 internet link. 
 
 What is the problem? Is there any options to avoid these errors? 


You're probably running too old a version of rsync.  Some older versions
in the 2.4.* series had that problem.

- Dave Dykstra




Re: --exclude not used in all cases?

2000-11-22 Thread Dave Dykstra

On Tue, Nov 21, 2000 at 10:57:33PM +0100, Rolf Grossmann wrote:
 Hi,
 
 on Tue, 21 Nov 2000 15:11:22 -0600 Dave Dykstra wrote 
 concerning "Re: --exclude not used in all cases?" something like this:
 
  Looks like the include "/*" from the server side is overriding the
  exclude "crap" on the client side.  If it were in a subdirectory it
  would probably be excluded.
 
 Indeed, if I leave out the /* it works, thank you. Unfortunately that's
 not what I expected, nor what I had in mind in the first place. By using
 include, I was hoping to only include the listed files/directories, so
 for the example I simply used /*. Is there a way to override the server's
 list or can anyone think of a way to write the configuration in a different
 way, so that the default will be not to include a file (without forcing
 a transfer for the specified ones)?
 
 Thanks again, Rolf

I recommend carefully reading over the "EXCLUDE PATTERNS" section in the
rsync man page and trying out examples; the rules are complicated but the
man page lays them out precisely.

I may not be understanding your requirements correctly, but it sounds to me
from your description that all you need to do is to list includes for the
files you want followed by a --exclude '*'.  The key thing to watch out for
with --exclude '*' is that if you want to include any file in a subdirectory
you also need to explicitly include all of its parent directories.
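
For example, to transfer only foo/bar/baz.c and nothing else, something
like this should work (untested sketch, names made up):

    rsync -av --include 'foo/' --include 'foo/bar/' \
        --include 'foo/bar/baz.c' --exclude '*' src/ desthost:dest/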

- Dave Dykstra




Re: strange rsync problem

2000-12-01 Thread Dave Dykstra

On Fri, Dec 01, 2000 at 11:11:00AM -0500, Ernie wrote:
 Hello all
 I recently installed rsync on 2 linux boxes I have here.  I'm trying to rsync a
 very simple 10 byte text file just as a test.  When I run this command:
 rsync -v -e ssh -z file3 scully:/home/ernie
 
 I get prompted for my password

I don't know if you're complaining about the prompt or not, but that's
entirely up to ssh; whatever ssh does to run any command will be the same
here.

 and rsync tells me its building the file list
 then hangs.  I noticed that if I don't specificy a path for rsync, it fails, so
 I copied rsync from /usr/local/bin to /usr/bin (for whatever reason, rsync
 couldn't find it in /usr/local/bin even though its in my path).

Sshd doesn't use your own PATH on the remote machine; it has its own default.
You can specify an explicit path to rsync on the client side with --rsync-path.

 rsync will
 just hang like that, not doing anything ... what am I doing wrong?  The file
 'file3' is only 10 bytes!

Ssh hanging is a known problem in some of the earlier 2.4.x series versions
of rsync, although I don't think it happened on such a small amount of
data.  Try rsync 2.4.6 if you're not already running it.  You also don't
mention what operating system(s) you are using.

- Dave Dykstra




Re: strange rsync problem

2000-12-01 Thread Dave Dykstra

On Fri, Dec 01, 2000 at 01:41:53PM -0500, Ernie wrote:
   You can specify an explicit path to rsync on the client side with --rsync-path.
 
 Oh, duh, i should have remembered that.  I did use the --rsync-path paramter
 with no success.

I presume by "no success" you mean that it still hung, right?  I would
think it should at least have had success finding the remote rsync.  The
rsync-path parameter is really a shell command for whatever login shell
you're using on the remote side so you may have some success debugging it
by using something like
--rsync-path "set -x; /usr/local/bin/rsync"
or
--rsync-path "strace -o /tmp/rsync.strace /usr/local/bin/rsync"
You can also turn on ssh debugging with
-e "ssh -v"


 Hm.  I am running rsync 2.4.6, with OpenSSH 2.3.0p1.  Both boxes have this
 config, and are running linux:


- Dave Dykstra




Re: rsync and exclude patterns

2000-12-05 Thread Dave Dykstra

On Mon, Dec 04, 2000 at 03:05:26PM -0800, Mike Spitzer wrote:
 
 In the exclude pattern section, the rsync man page states:
 
   if  the  pattern starts with a / then it is matched
   against the start of the filename, otherwise it  is
   matched  against the end of the filename.
 
 Consistent with this, excluding "*.c" excludes .c files anywhere in
 the tree.
 
 Also consistent with this, excluding "foo/bar.c" excludes foo/bar.c
 anywhere in the tree (this would exclude foo/bar.c, baz/foo/bar.c, etc.).
 
 But, I'm getting unexpected results from excluding "foo/*.c".  I expected
 this to behave similarly to excluding foo/bar.c, which would be to
 exclude any .c files in any foo directory, anywhere in the tree.  What I
 get instead is that *.c is excluded only in a top-level foo directory.
 foo/bar.c would be excluded, but baz/foo/bar.c would not be excluded.
 This seems inconsistent with the documentation, and inconsistent with
 the "foo/bar.c" exclude behavior.
 
 Is this a bug or am I not understanding the rules?
 
 Thanks.


You understand correctly; this is a known bug.  See

http://lists.samba.org/pipermail/rsync/1999-October/001440.html

There's not an easy solution, however, because the function rsync uses to
do wildcard matches, fnmatch(), cannot completely implement the semantics
as described in the rsync man page.  A workaround for you may be to exclude
"**/foo/*.c", but that's not complete because the semantics of "**" is such
that including it anywhere in a pattern means that all asterisks can cross
slash boundaries.  Thus, it will match baz/foo/oof/bar.c.  As I said back
then, the rsync include/exclude semantics & implementation need to be
completely redone.
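
A quick illustration of the workaround and its limitation (hypothetical
tree):

    # excludes foo/x.c and baz/foo/x.c as desired, but ALSO
    # baz/foo/oof/bar.c, since '**' lets every '*' cross slashes
    rsync -avn --exclude '**/foo/*.c' src/ desthost:dest/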

- Dave Dykstra




Re: Multiple directory copies

2000-12-07 Thread Dave Dykstra

On Thu, Dec 07, 2000 at 05:46:54PM -, John Horne wrote:
 Hello,
 
 I'm trying to arrange via cron for an account with several directories to be
 updated overnight. I do not want everything in the account updated - just
 some of the directories. However, I can't seem to see how I can specify in
 one go to copy more than one directory to the remote account. I thought
 initially of:
 
rsync -e ssh -aq tables/ data/ eros: (eros is the remote host)
 
 but this just copies the directory contents into the home directory of the
 remote account. I cannot seem to specify:
 
rsync -e ssh -aq tables/ eros:tables data/ eros:data
 
 so that the contents of each directory is copied into the relevant remote
 directory. Is this possible? The man page for rsync seems to indicate that
 only one remote directory can be specified at a time.


Having a slash at the end of the source specification removes the base
name of the source from the destination filename.  Use just

rsync -e ssh -aq tables data eros:
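
For contrast, a quick illustration of the trailing-slash rule:

    rsync -e ssh -aq tables  eros:   # creates tables/... under the remote home
    rsync -e ssh -aq tables/ eros:   # copies the *contents* of tables into the home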

- Dave Dykstra




Re: rsync

2000-12-07 Thread Dave Dykstra

On Wed, Dec 06, 2000 at 03:35:57PM +0100, Christian Boesch wrote:
 does anyone know what this error message in the rsync log file means:
 transfer interrupted (code 11) at main.c(278)
 chris

Other people have reported that in older versions of rsync (2.3.* series);
is that what you're running?  If I recall correctly, rsync printed that
message whenever it had some kind of I/O error, but I believe it was always
after having printed another more descriptive message that pointed to the
real problem.

- Dave Dykstra




Re: rsync daemon

2000-12-19 Thread Dave Dykstra

On Tue, Dec 19, 2000 at 04:39:32PM +1100, Ian Millsom wrote:
  What's your version of rsync?  What's your OS, both client and server?
  Please post replies to the list.
 
 Sorry forgot to mention, all servers running redhat 6.2 rsync version
 2.4.1 on all machines
 
 Just after this posting, I had checked my version, to the current one on
 the site, and noticed that there is a later one. I will upgrade all my
 machines and see if this fixes the error.


I noticed this same problem on my redhat 6.2 machine last week.  Check
/var/adm/messages.  Mine reported

inetd[415]: rsync/tcp server failing (looping or being flooded), service 
terminated for 10 min

I could see no way to configure Linux inetd to avoid that, so I ended up
starting rsync as an independent daemon out of /etc/rc.d/rc3.d.
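
A minimal sketch of such a startup script (the file name, rsync path, and
the "pid file" setting are assumptions):

    #!/bin/sh
    # /etc/rc.d/rc3.d/S99rsyncd -- run rsync as a standalone daemon;
    # assumes rsyncd.conf contains "pid file = /var/run/rsyncd.pid"
    case "$1" in
        start) /usr/local/bin/rsync --daemon ;;
        stop)  kill `cat /var/run/rsyncd.pid` ;;
    esac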

- Dave Dykstra




Re: rsync hangs with FreeSwan

2001-01-04 Thread Dave Dykstra

On Thu, Jan 04, 2001 at 12:12:43PM +1000, [EMAIL PROTECTED] wrote:
 Dave_Dykstra_ wrote:
  Because of the pipelined nature of the rsync implementation and the
  back-and-forth nature of the rsync protocol, rsync stresses many TCP
  implementations.  No one else has mentioned a problem with FreeSwan, but I
  suspect a bug with it.  You should at least run "netstat" on both ends and
  let us know what the send and receive queues are on both sides of that TCP
  connection.  In the end it will probably take some intervention from the
  FreeSwan implementers; if you can give them a test case that they can use
  to reproduce it, it will probably help them.
 
 OK, more info. 
 
 I tried the following today over the Freeswan link:
 From the remote host [EMAIL PROTECTED] - which has an (internal) address
 of 192.168.2.254: - I do:
 
 rsync -azCvvc * 203.37.221.107:work/prod-head
...
 203.37.221.107 is the (real) address) of my local system - a temporary
 ppp connection (provided on dialup by the ISP) and the same address as
 the IPSEC link
 
 The local system is raita.finder.com.au with (internal) address of
 192.168.254.245
...
 I get this from the local netstat -t:
 
 Active Internet connections (w/o servers)
 Proto Recv-Q Send-Q Local Address   Foreign AddressState  
 tcp0  0 ts-bris-2-p59.bris:1021 dux.gc.eracom.com.:1021 ESTABLISHED 
 tcp0  37608 ts-bris-2-p59.bri:shell dux.gc.eracom.com.:1023 ESTABLISHED 
 tcp0  0 ts-bris-2-p59.bris:1022 dux.gc.eracom.com:login ESTABLISHED 
 tcp0  0 ts-bris-2-p59.bris:1023 dux.gc.eracom.com:login ESTABLISHED 
...
 ... and this from the remote netstat -t (with irrelevent connections
 removed):
 
 Active Internet connections (w/o servers)
 Proto Recv-Q Send-Q Local Address   Foreign Address
 State  
 ...
 tcp0  0 localhost.localdom:1021 ts-bris-2-p59.bris:1021 ESTABLISHED 
 tcp0  0 localhost.localdom:1023 ts-bris-2-p59.bri:shell ESTABLISHED 
 tcp0700 localhost.localdo:login ts-bris-2-p59.bris:1022 ESTABLISHED 
 tcp0  0 localhost.localdo:login ts-bris-2-p59.bris:1023 ESTABLISHED 
 ...



The fact that there's something in the send queues and nothing in the
receive queues and yet it is hung is an indicator that there's a TCP bug;
once data is passed to the kernel, it is entirely up to the TCP
implementation to make sure that it gets sent to the other side.  Report it
to FreeSwan.  If we're lucky, maybe it's related to the other TCP problems
people have been having with rsync on Linux.

I just ran into a situation today where Solaris 8 hangs in a similar manner
using even straight rcp on a particular file (noticed first in rsync,
reproducible with rcp).  I will report more to this mailing list once I
find out a Sun patch number.  There are a lot of TCP bugs out there, folks.

- Dave Dykstra




Re: two passwords

2001-01-05 Thread Dave Dykstra

On Fri, Jan 05, 2001 at 10:19:34AM -0500, Rick Otten wrote:
  Yes, the error message is coming because ssh is terminating early but I
  don't think that the advice that Jason goes on to give (using extra keys,
  expect, etc) is correct.  There is no reason why rsync can't handle a
  double prompt, because when you use "rsync -e ssh" all the prompting is
  handled completely by ssh; rsync has nothing to do with it.  I just tried
  an example and it worked ok.  Are you getting any other error messages
  before "unexpected EOF in read_timeout"?  My guess is that you aren't
  getting properly authenticated to ssh.  Using "rsync -e 'ssh -v'" may
  give you more info about what's going wrong.
 
 It looks to me like the second password is being required by the shell rather
 than the ssh authentication mechanism...  (sdshell)

That could indeed be a problem because rsync is expecting the first data
over the connection to be coming from its own corresponding executable.  I
just tried for example

rsync -e ssh --rsync-path "echo 'prompt: ';/path/to/rsync"

and it reported

protocol version mismatch - is your shell clean?
(see the rsync man page for an explanation)
Received signal 16.

That's not the error you're seeing though.  I tried redirecting the prompt
to stderr and then it worked after printing the prompt.   I then tried
inserting a "read" and it caused it to hang because rsync isn't reading
input from its own stdin to send to its -e command.  That still doesn't
sound exactly like what you're seeing but I think it's on the right track.
Maybe you need an option for rsync to pass data from its stdin to the
remote side.

- Dave




Re: Need info: problems with 2.3.1?

2001-01-18 Thread Dave Dykstra

On Tue, Jan 16, 2001 at 03:02:13AM -0500, Hal Haygood wrote:
 Due to my ongoing issues with 2.4.6, I'm planning on backrevving our 
 enterprise to 2.3.1.  (We were using 2.2.1 before I started mucking about.)

I am using 2.4.6 throughout my enterprise and have not encountered any
problems that weren't repeatable with 2.3.X.


 Before I do so, I would like to know if 2.3.1 suffers from any of the 
 following issues:

2.3.2 is the latest rsync version that has the "Uggh" SSH buffering
hack so you could use that.

 1) SSH hangs, with any of 1.2.27, 2.3.0, or 2.4.0 (Yes, I know not many 
 people are using 2.4.0 yet, but we will likely be moving towards it soon.)
 
 2) rsh hangs or errors (seems to not be an issue)

I don't think it's any worse than 2.4.6, and I have yet to see any hard
evidence that it is any better.


 3) Include/exclude processing bugs other than the 'foo/*.c' bug already 
 discussed.

http://pserver.samba.org/cgi-bin/cvsweb/rsync/exclude.c gives all the
details.  Looks like there was a problem with a buggy fnmatch in glibc
that was fixed.  I think that the bug only shows up on certain versions of
Linux. See http://lists.samba.org/pipermail/rsync/1999-October/001466.html
and other messages in that thread.

- Dave Dykstra




Re: Sym links on Destination

2001-01-18 Thread Dave Dykstra

You can add the "-L" option.  Unfortunately, that will follow all symlinks
on the source too, not just that one.
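
E.g., your command quoted below would become something like (sketch):

    rsync -rtvL --stats --exclude-from=foo /opt/foo/* user@machine::module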

- Dave Dykstra

On Wed, Jan 17, 2001 at 03:39:29PM -0500, Scott Gribben wrote:
 We have been running rsync for a while and it works great!  But, we just ran
 into a situation I need some quick help on:
 
 Solaris 2.6 to Solaris 2.6, destination running in daemon mode.
 
 We ran out of space on one of the file systems, so we made a soft link (ln
 -s) to a different file system to make use of the extra space over there.
 
 We ran into a problem where rsync overwrote the link with the real directory
 and filled up the file system again!
 
 Originally:
 /opt/foo/bar/stuff
 
 syncing from foo level to destination
 
 What we changed to:
 Bar is a sym link ( cd /opt/foo; ln -s ./bar /newFileSystem/bar after all
 the appropriate cpio'ing was done)
 
 /opt/foo/bar  is /newFileSystem/bar.  When rsync was done (file system full)
 the sym link was gone and overwritten like a regular directory.
 
 Now:
 Back to original
 
 We are running:
 
 Rsync -rtv --stats --exclude-from=foo /opt/foo/* user@machine::module
 
 The docs tell me that it will preserve links on the source, but I need it to
 follow the links on the destination.
 
 I'm in a rush as this has some production issues associated, so any help as
 soon as possible would be great.
 
 THANKS in advance!
 Scott




Re: Source and destination don't match

2001-01-18 Thread Dave Dykstra

On Thu, Jan 18, 2001 at 02:04:46PM -0800, Jeff Kennedy wrote:
 Greetings all,
 
 I have a source directory that is not being touched by anyone, no
 updates or even reads except by the rsync host.  I am using just a
 straight binary, no rsyncd.conf file.  I am using the follwing command:
 
 rsync -avz /source/path/dir /dest/path/dir
 
 Using version 2.4.6 on Solaris 7, source and destination are both on a
 NetApp filer.  Seems to run without incident but du's on both
 directories show a 40MB difference.
 
 Is this normal?  Thanks.


You might need --delete to clean out old files or --sparse to handle
sparse files efficiently.
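
E.g. (sketch):

    rsync -avz --delete --sparse /source/path/dir /dest/path/dir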

- Dave Dykstra




Re: rsync problem

2001-01-24 Thread Dave Dykstra

On Wed, Jan 24, 2001 at 11:11:32AM +1100, Kevin Saenz wrote:
 Ok I have just inherited this system.
 For my lack of understanding please forgive me
 
 I believe that rsync in running in --daemon mode
 the version of rsync we are using is 2.4.6
 also if this helps we are running rsync using the following
 command line
 
 rsync -avz --delete --force --progress 
 --exclude-from=/usr/local/etc/exclude.list server::toor/u/groups/asset 
 /u/groups
 
 this command runs thru a number of files and eventually stops halfway thru
 it's job with the error below
 
 ERROR: out of memory in generate_sums
 Also we have ommitted --progress as well.
 
 Has anyone seen this error is there a way to clear it up


You're probably trying to transfer too many files for the amount
of memory/swap space you have.  Rsync as it is currently implemented
uses up a little bit of memory for every file it touches.  Often people
break up their transfers into smaller pieces by copying each top level
directory separately.
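
A rough sketch of that (all names made up):

    # one rsync run per top-level directory to bound memory use
    for d in /src/tree/*/
    do
        rsync -avz --delete "$d" "desthost::module/`basename $d`/"
    done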

- Dave Dykstra




Re: rsync 2.4.6 hangs in waitid() on Solaris 2.6 system

2001-01-24 Thread Dave Dykstra

On Wed, Jan 24, 2001 at 11:59:18AM -0500, John Stoffel wrote:
 
 
 Hi all,
 
 This is a followup to bug report 2779 on the rsync bug tracking web
 site, I'm also seeing the hang in waitid() on the master process when
 trying to do an rsync on a single host.
 
 Basically, I've got a server with two network interfaces, connected to
 two different NetApps and I'm using rsync to bring them into sync for
 a migration.
 
 Each netapp is on it's own dedicated subnet link, so there's no
 network contention.  Here's  how I'm running it:
 
 # rsync-2.4.6/rsync --archive --delete --exclude ".snapshot/" --exclude ".snapshot" 
--links --recursive --stats --verbose /sqatoast/acme /newtoast192/acme
 
 
 I've also tried 2.4.5 and it too hangs, but with a different set of
 traces, each process (there are three) is just in a poll() loop.
 
 I'm now trying 2.4.4 to see if that will work, but the --exclude
 option seems to have changed how it works as I go back in versions.
 
 Does anyone have a patch for 2.4.6 that will make it work properly
 with Solaris 2.6 servers talking to itself?
 
 Thanks,
 John
John Stoffel - Senior Unix Systems Administrator - Lucent Technologies
[EMAIL PROTECTED] - http://www.lucent.com - 978-952-7548


Other people have reported similar experiences but nobody has pointed to a
problem in rsync; the problem is more likely to be in NFS on the NetApp or
Solaris machines.  I believe most NFS traffic goes over UDP but do you happen
to know if it is using TCP?  We have seen many problems with TCP connections
when rsync is communicating between two different machines.

Try using "-W" to disable the rsync rolling checksum algorithm when copying
between two NFS mounts, because that causes extra NFS traffic.  Rsync's
algorithm is optimized for minimizing network traffic between its two
halves at the expense of extra local access and in your case the "network"
is between processes on the same machine and the "local" is over a
network.
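
E.g., your command with -W added (sketch, excludes omitted):

    rsync -aW --delete --stats --verbose /sqatoast/acme /newtoast192/acme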

- Dave Dykstra




Re: rsync 2.4.6 hangs in waitid() on Solaris 2.6 system

2001-01-24 Thread Dave Dykstra

On Wed, Jan 24, 2001 at 03:48:06PM -0500, John Stoffel wrote:
...
 or it doesn't have a good heuristic that says:
 
 if I don't get *any* info after X seconds, just die
 
 where X would be something like 900 or 1200 seconds, which seems like
 a reasonable number.

Have you tried --timeout?

- Dave Dykstra




Re: The out of memory problem with large numbers of files

2001-01-25 Thread Dave Dykstra

On Thu, Jan 25, 2001 at 04:35:11PM +0100, Schmitt, Martin wrote:
 Dave,
 
 thanks for your reply.
 
  No, it's not an out of memory problem but it is like one of 
  the numerous
  different kinds of hangs that people experience.  Are you 
  copying between
  two places on the same system or are you copying to another 
  system?  What
  kinds of network transport is involved?  What version of 
  rsync were you
  using. 
 
 It's two systems (Sun 450), everything mounted locally, ssh (ssh2, that
 strange commercial one) transport, rsync 2.4.1.
 
 To make it short, here is the command line I used for invocation:
 
 /opt/gnu/bin/rsync --rsh=ssh \
--rsync-path=/opt/gnu/bin/rsync \
--archive \
--verbose \
otherhost:/dir/FOO /test/dir
 
 In order to replicate /dir/FOO on otherhost as /test/dir/FOO on the local
 system.
  
  Breaking up copies into smaller pieces will reduce the memory usage.
 
 I believe I will start going for this solution. I just hoped someone had
 already found a way to work around this through some obscure rsync option.



Ah, rsync 2.4.1 had definite known problems with SSH transport.  Upgrade
to 2.4.6 and your problem should be solved.  It's very easy to compile
from source.  Get it from rsync.samba.org and run configure and make.
You can also use the solaris binary that I maintain on that web site;
I compile it on Solaris 2.5.1 but run it on 2.3 through 8 with no problem.

- Dave Dykstra




Re: How to exclude binary executables?

2001-01-25 Thread Dave Dykstra

On Thu, Jan 25, 2001 at 04:08:56PM +0100, Remko Scharroo wrote:
 First of all, I love rsync. After using mirror and rdist, rsync really
 does it well and fast!
 
 But there is one feature I miss in rsync that rdist has: there seems to
 be no way to exclude binary executables from being copied. Of course, if
 you know the file names you can, but if you don't you can't. Such a
 feature is very helpful when syncing two source distributions on two
 different platforms while avoiding that compiled executables are copied
 with the source.
 
 Is there any treatment for this? If not, is there someone who wants to
 implement it?


That could be done outside of rsync by using find/file/grep/cut and giving
the list of files to rsync in an --include-from file and an --exclude '*'.

Unfortunately rsync can't yet directly read that from stdin although it
just occurred to me that you could probably use "--include-from /dev/fd/0".
You also now need --include '*/' to include all directories too before the
--exclude '*', although a new proposed option --files-from could replace
all the above includes and excludes.  I had offered to implement --files-from
but I first want somebody to respond to my challenge of giving performance
comparisons (http://lists.samba.org/pipermail/rsync/2001-January/003395.html).
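
A rough sketch of that pipeline (untested; the grep pattern depends on what
file(1) prints on your system, and filenames containing ':' would confuse
the cut):

    find . -type f -print | xargs file | grep -v 'executable' |
        cut -d: -f1 | sed 's|^\./||' |
        rsync -av --include '*/' --include-from /dev/fd/0 \
            --exclude '*' . desthost:/dest/dir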

- Dave Dykstra




Re: Solaris 8 rsync problem

2001-01-25 Thread Dave Dykstra

On Wed, Jan 24, 2001 at 08:30:50AM -0800, Adam Wilson wrote:
 
 Hello, I am seeing the following problems when trying
 to perform rsync between a Sun running Solaris 8 and a
 Redhat Linux box.  rsh is already set up to allow
 remote logins, file copies, etc.
 
 
 [cable@galadriel]{1348}% rsync --version
 rsync version 2.4.6  protocol version 24
 
 Written by Andrew Tridgell and Paul Mackerras
 
 [cable@galadriel]{1349}%
 
 
 Now, if I put my source directory with a trailing "/"
 on it, and do archive mode, I get this error.
 
 
 [cable@galadriel]{1349}% rsync -alv --exclude=BACKUP
 /home/cable/src/ITG/Test_4.1.5/ sauron:/home/cable/src
 building file list ... readlink : No such file or
 directory
 readlink : No such file or directory
 readlink : No such file or directory
 readlink : No such file or directory
 readlink : No such file or directory
 readlink : No such file or directory
 readlink : No such file or directory
 done
 wrote 68 bytes  read 16 bytes  168.00 bytes/sec
 total size is 0  speedup is 0.00
 [cable@galadriel]{1350}% 
 
 
 Then, if I leave the same options, but remove the
 trailing "/", I get this scrolling error that is
 infinite.
 
 
 [cable@galadriel]{1351}% rsync -alv --exclude=BACKUP
 /home/cable/src/ITG/Test_4.1.5 sauron:/home/cable/src
 building file list ... opendir(Test_4.1.5): Too many
 open files
 opendir(Test_4.1.5): Too many open files
 opendir(Test_4.1.5): Too many open files
 [... the same error repeats indefinitely ...]


It seems that there must be something strange with the Test_4.1.5
directory.  It seems almost as if it is recursively hardlinked to itself.
Or maybe there's a Test_4.1.5 on the destination side that is symlinked
recursively.


 Finally, if I remove archiving mode and try it both
 with and without the trailing "/", it still doesn't
 work.
 
 
 [cable@galadriel]{1352}% rsync -lv --exclude=BACKUP
 /home/cable/src/ITG/Test_4.1.5/ sauron:/ho
 skipping directory /home/cable/src/ITG/Test_4.1.5/.
 wrote 17 bytes  read 16 bytes  22.00 bytes/sec
 total size is 0  speedup is 0.00
 [cable@galadriel]{1353}% 
 [cable@galadriel]{1353}% rsync -lv --exclude=BACKUP
 /home/cable/src/ITG/Test_4.1.5 sauron:/hom
 skipping directory /home/cable/src/ITG/Test_4.1.5
 wrote 17 bytes  read 16 bytes  66.00 bytes/sec
 total size is 0  speedup is 0.00
 [cable@galadriel]{1354}% 


Right, I wouldn't expect it to do anything without -a.

- Dave Dykstra




Re: rsync exits with 'remsh' error from script

2001-01-25 Thread Dave Dykstra

On Wed, Jan 24, 2001 at 02:24:37PM -0600, Denmark B. Weatherburn wrote:
 Hi Listers,
 
 I hope this posting qualifies for your acceptance.
 I'm working on a Korn shell script to using rsync to synchronize several Sun
 hosts running Solaris 2.7.
 Below is the error message that I get. I'm not sure if there is a log file
 that can provide more information, but I checked several possibilities
 including
 - the rsync configuration on both source and destination
 - the .rhosts (rsh) configuration on both source and destination
 - the backup directory on destination
 BTW, does rsync create the backup directory automatically or do I have to
 create it in my script before the call to rsync.
 - file and directory locations and permissions, etc
 
 bbanksig{opsi}/usr/local/shells/synctime ./syncgnt.sh bbanksp
 24-January-2001 14:07:33
 checking if bbanksp is available...
 machine is up!
 DESTMACH=bbanksp
 Password:
 Failed to exec remsh : No such file or directory
...
 rsync_func()
 {
 /usr/local/bin/sudo /usr/local/bin/rsync \
...


By default rsync uses remsh if remsh exists in the PATH of whoever compiled
rsync.  It must not be in the default PATH provided by sudo.  It's in
/usr/bin/remsh on my Solaris 2.7 machine.  If you have rsh but not remsh, a
workaround would be for you to use '-e rsh' or to give a complete path in
the -e option.
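
E.g. (sketch, keeping your sudo invocation):

    /usr/local/bin/sudo /usr/local/bin/rsync -e /usr/bin/remsh ...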

- Dave Dykstra




Re: The out of memory problem with large numbers of files

2001-01-25 Thread Dave Dykstra

On Thu, Jan 25, 2001 at 11:47:32AM -0500, Lenny Foner wrote:
 While we're discussing memory issues, could someone provide a simple
 answer to the following three questions?
 (a) How much memory, in bytes/file, does rsync allocate?

Andrew Tridgell said 10-14 bytes per file in 
http://lists.samba.org/pipermail/rsync/1998-December/000895.html
where he proposed a mechanism to eliminate it.  He had hoped to implement
it in 1999 but it still hasn't happened.


 (b) Is this the same for the rsyncs on both ends, or is there
 some asymmetry there?
 (c) Does it matter whether pushing or pulling?
...

I don't know, I suggest running a test and watching the process sizes.

 By the way, this does seem to be (once again) a potential argument for
 the --files-from switch:  doing it -that- way means (I hope!) that
 rsync would not be building up an in-memory copy of the filesystem,
 and its memory requirements would presumably only increase until it
 had enough files in its current queue to keep its network connections
 streaming at full speed, and would then basically stabilize.  So
 presumably it might know about the 10-100 files it's currently trying
 to compute checksums for and get across the network, but not 100,000
 files.

No, that behavior should be identical with the --include-from/exclude '*'
approach; I don't believe rsync uses any memory for excluded files.

- Dave Dykstra




Re: can not push to daemon on non-std port.

2001-01-30 Thread Dave Dykstra

On Tue, Jan 30, 2001 at 06:35:56PM -, Wrieth, Henry wrote:
 Greetings,
 
 I primarily use rsync to update remote hosts when source files are edited.
 This means I push to rsync daemons listening on those remote hosts from my
 source host when I need to (on demand).   This is much easier for me than
 running a daemon on the source and executing pulls on all the remote hosts.
 My problem is with the command line syntax of the rsync client.
 
 for copying from a remote rsync server to the local machine (PULL).
 rsync [OPTION]... [USER@]HOST::SRC [DEST] 
 rsync [OPTION]... rsync://[USER@]HOST[:PORT]/SRC [DEST]
 
 for copying from the local machine to a remote rsync server (PUSH). 
 rsync [OPTION]... SRC [SRC]... [USER@]HOST::DEST
 
 There is no url option for pushing to a server.  i.e.:
 rsync [OPTION]...SRC [SRC]... rsync://[USER@]HOST[:PORT]/DEST
 
 The problem: It seems there is no way to push to a rsync daemon if it is not
 running on the default port.  I can not specify a port in the
 '[USER@]HOST::DEST' syntax.  if you say 'HOST:PORT::DEST'  the single ':'
 gets interpreted as the server module which is not found.  
 
 Is there anyway to run an rsync daemon on a non standard port and still be
 able to push stuff to it either with the '::' syntax or the url syntax?
 Sorry, If this was covered recently.  I am new to the list.
 
 Thanks in advance for any help
 --Henry
 
 Regards,
 
 Henry Wrieth


Use --port to select the non-standard port.

The rsync:// syntax doesn't make much sense as a destination because 
in general URLs are for sources, not destinations; that's why it isn't
supported on that end.
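
E.g., to push (the port number is just an example):

    rsync -av --port=8730 srcdir/ user@host::module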

- Dave Dykstra




Re: Transfering File List

2001-01-31 Thread Dave Dykstra

On Tue, Jan 30, 2001 at 10:55:15PM +0100, Otto Wyss wrote:
 I've come across this thread lately and would apreciate an option
 "--files-from -", so I could pipe the filenames from a perl script to
 rsync. Since this option certainly needs the full path with filename,
 does this path starts at the end of the server URL? I.e.
 
 rsync -a --files-from - ftp.de.debian.org::debian ...
 
 do I have to pipe "pool/x/xfree/xfree_4" to rsync?
 
 O. Wyss


If I understand you correctly I believe the answer is yes.  Until the
--files-from is implemented, however, try using
--include-from /dev/fd/0 --exclude '*'
You will also need to include all parent directories of the files you 
want.

I'm still waiting for somebody to do a performance test before implementing
--files-from ...

- Dave Dykstra




Re: Password for SSH2

2001-02-02 Thread Dave Dykstra

Yes, and by the way /etc/rsyncd.secrets isn't for rsh either; it is only
used in rsync daemon mode.

- Dave Dykstra

On Thu, Feb 01, 2001 at 02:08:21PM -0900, Britton wrote:
 
 Copy you .ssh/identity.pub into .ssh/authorized_keys in your account on
 the remote machine.  If I am correct in understanding that what you want
 is passwordless login.  The man page has details on this near the
 beginning.  This method may be subject to the details of sshd
 configuration, which I don't entirely remember, attached is my
 /etc/ssh/sshd_config which I use for passwordless login to localhost to
 test MPICH programs.
 
 Britton Kerin
 
 On Thu, 1 Feb 2001, Ed Young wrote:
 
  Hi,
 
  How do I set up a password for ssh2 to use?
  The /etc/rsyncd.secrets seems to be only for rsh.
 
  Thanks,
 
  Ed Young




Re: rsync hang question (again sorry) with debain and ssh

2001-02-05 Thread Dave Dykstra

On Mon, Feb 05, 2001 at 02:25:18PM +, Dean Scothern wrote:
 Hello,
 
 I'have an intermittent problem regarding transfers using rsync between
 debian potato servers
 In common with others on this list.  when executing:
 rsync -ave remotehost:/usr dir. I get a hang part way through the transfer.

I assume you've got something like "ssh" after the -e.

 This happens
 only on a few machines. I noticed on this list some time ago that there was
 a discussion about changing the behaviour of ssh to solve this, but Iam
 lothe to do this as it defeats the benefits of the debian package system 
 used locally. Are there still issues with using
 rsync with ssh and different file types?
 I can repeat the behaviour with rsync 2.3.2 - 2.4.6 and ssh 1.2.3, 2.2.0

I think that openssh (that's what you're using, right?) already has the
modification to use socketpairs instead of pipes.  Start by checking the
queues in the output of netstat on each side, and if they show something
queued to send you may need to go to tcpdump.  If they don't, you may need
to go to strace to see where things are hanging.

- Dave Dykstra




Re: rsync hang question (again sorry) with debain and ssh

2001-02-05 Thread Dave Dykstra

On Mon, Feb 05, 2001 at 09:57:44AM -0600, Dave Dykstra wrote:
...
 I think that openssh (that's what you're using, right?) already has the
 modification to use socketpairs instead of pipes. 
...

Oh, I just found another relevant message from Tridge:

On Fri, Oct 27, 2000 at 01:36:39PM +1000, Andrew Tridgell wrote:
  All the latest versions of openssh use socketpairs.

 More importantly openssh uses non-blocking IO internally. That solves
 the problem no matter whether it uses pipes or socketpairs.

- Dave Dykstra




Re: unnecessary chown's when uploading to rsync server

2001-02-05 Thread Dave Dykstra

On Mon, Feb 05, 2001 at 02:29:42PM -0500, Diab Jerius wrote:
 I'm uploading files to an rsync server using the options
 
 -vrlptgx
 
 I'm getting tons of chown errors on the server side for directories.
 Now, if it was *supposed* to be chown'ing stuff, I expect the errors,
 as I'm not running the server as root, and there is a different owner
 on the server side.  But, I'm not specifying -o, so it shouldn't be
 trying to chown it (at least I don't think so).
 
 My server config file has
 
   use chroot = no
 
 Am I missing something?
 
 Thanks,
 Diab


-g uses chown.  You need to be the owner of a file to change the group.
Were the files already there owned by somebody else?

- Dave Dykstra




Re: Option --execute=/path/to/script.sh

2001-02-06 Thread Dave Dykstra

On Wed, Feb 07, 2001 at 04:43:02AM +0800, Wrieth, Henry wrote:
 Greetings,
 
 I primarily use rsync to update remote hosts when source files are edited.
 This means from one host, I upload to many rsync daemons listening on those
 remote hosts.   This is much easier for me than running a daemon on the
 source and executing pulls on all the remote hosts.  
 
 My problem is that I often want to execute post-distribution scripts on the
 endpoints (daemons) such as 'bounce_the_server'.  This is similar to
 Interwoven's OpenDeploy feature "deploy_and_run".   It seems trivial to do
 this when running rsync over rsh or ssh since we already have .rhosts trust
 to run rsync itself and thus can run other rsh commands.  But when running
 in client-server mode there would need to be a
 '--execute=/path/to/script.sh' Option to do this.
 
 Has anybody thought about or working on this type of functionality?


Yes, it has been thought about, and I think it is a good idea, but nobody
has done anything about it.  I found an archive of a discussion from two
years ago in the rsync bugs tracker.  See

  http://rsync.samba.org/cgi-bin/rsync/todo?id=1592;user=guest;selectid=1592

- Dave Dykstra




Re: losing leading / when copying symbolic links

2001-02-07 Thread Dave Dykstra

On Wed, Feb 07, 2001 at 05:42:55PM -0500, Diab Jerius wrote:
 Here's a mystery (to me at least!):
 
 I've got a symbolic link on the source disk which looks like this:
 
  ls -l /proj/axaf/Simul/bin/mips4_r10k-IRIX-6/spatquant_bp
 [...]
 /proj/axaf/Simul/bin/mips4_r10k-IRIX-6/spatquant_bp - 
 /proj/axaf/Simul/bin/mips4_r10k-IRIX-6/spatquant_bp-D2304
 
 
 my rsyncd.conf looks like:
 
 [proj_axaf_simul_bin_xfer]
 path = /proj/axaf/Simul/bin
 read only = no
 
 
 I run rsync on the source machine as
 
 rsync -vrlptx --port=9753 \
   --delete \
   --force \
   --include '*/' \
   --include '**mips*-IRIX*/*' \
   --exclude '*' \
   /proj/axaf/Simul/bin/ jeeves::proj_axaf_simul_bin_xfer
 
 
 and it turns the file on the remote disk into:
 
 jeeves-155: ls -l /proj/axaf/Simul/bin/mips4_r10k-IRIX-6/spatquant_bp
 [...]
 /proj/axaf/Simul/bin/mips4_r10k-IRIX-6/spatquant_bp -
 proj/axaf/Simul/bin/mips4_r10k-IRIX-6/spatquant_bp-D2304
 
 Whatever happened to the leading /?
 
 Thanks,
 Diab
 
 


Are you using "use chroot = no" in your rsyncd.conf?  From the rsynd.conf
man page under "use chroot":

For  writing  when  "use chroot" is false, for security reasons
symlinks may only be relative  paths  pointing  to  other  files
within  the  root  path,  and  leading  slashes are removed from
absolute paths.

- Dave Dykstra




Re: problem authenticating to daemon

2001-02-08 Thread Dave Dykstra

On Thu, Feb 08, 2001 at 01:25:22AM -, Wrieth, Henry wrote:
 in the server config file, I tried
 secrets file = /app/rsync/config/secret
 auth users = iwmaster
 where 'cat /app/rsync/config/secret'
 iwmaster:mah
 
 on the client I try:
 /app/rsync/bin/rsync /tmp/mahesh/test webtest::cSend/tmp/mahesh/test
 Password: mah
 @ERROR: auth failed on module cSend
 
 or:
 /app/rsync/bin/rsync /tmp/mahesh/test
 iwmaster@webtest::cSend/tmp/mahesh/test
 Password: mah
 @ERROR: auth failed on module cSend

Set the "log file" rsyncd.conf option (or check syslog) and see if any
error messages are written there.  Details are not sent back to the client
for security reasons.  A common problem is that the secrets file has to be
non-group or world readable unless you set "strict modes = no".
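
For example (using the paths from your message; a sketch):

    # in rsyncd.conf:
    #     log file = /app/rsync/config/rsyncd.log
    # and make the secrets file readable only by the daemon's user:
    chmod 600 /app/rsync/config/secret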


 shouldn't I get prompted for a username and password, not just a password?
 It doesn't work when I use iwmaster@host anyway.  what is wrong with the
 password?
 Is the USER environment variable passed in the rsync:// protocol?

From the rsync man page:

   USER or LOGNAME
  The USER or LOGNAME environment variables are  used
  to  determine  the default username sent to a rsync
  server.



 Are there known issues with authentication in client-server mode which is
 causing this to fail, or did I just do something stupid??

- Dave Dykstra




Re: --execute option

2001-02-12 Thread Dave Dykstra

I'm not going to comment on your entire message, just pieces of it:

On Mon, Feb 12, 2001 at 12:53:39PM -0500, Wrieth, Henry wrote:
...
 -Authentication:
 If we are going to spawn the child as the client user, we need to be assured
 the user is who he or she claims to be.  Currently, rsyncd uses a  password
 file with arbitrary users (users not necessarily on the host) and a clear
 text password, or secrets file.  It is secured by 'root read only'
 permissions. Modules may permission allowed users from that list.   We all
 recognize that the secrets file is a good, but antique idea and does not
 pass audits any longer.  My only recommendation is that we upgrade the login
 functionality to allow native OS usernames and passwords.  Then perhaps, we
 extend this later to include LDAP, radius, kerberos, PKI, etc. 

The reason why the password is in the clear in the secrets file is not an
antique idea: it avoids having to send the passwords in the clear over the
network, which you cannot avoid with native OS usernames and passwords
unless you use encryption like SSH does.  Having the password in the clear
on the server side enables using it as a key for a random number challenge-
response which rsync does behind the scenes.

 --execute
 Once we have child daemons securely spawning as normal users we can have no
 worries about adding an --execute option.  I say this because we are not
 granting any new privileges to the user which they do not already have by
 logging onto the box directly.  Jil can not do things as Jak, Like send
 nasty mail to his boss, and can not do things as root like, rm /.

Keep in mind that it's not just the user id authentication we're concerned
about, it's also the chroot environment.  The chroot is a safeguard against
holes in the rsync server implementation and not one we're usually willing
to give up.


 -Summary
 I think --execute could become a safe and extremely powerful addition to
 rsync by making two relatively simple changes to the existing daemon.
 
   1.) When 'uid=' is not specified in a module spawn the child as
 the client username
   2.) Update the login function to allow native OS username and
 password.  'login=native'
 
 Now we can create an --execute feature to allow execution of arbitrary
 scripts either before or after file deployment and either 'on success'  or
 'on failure'.  
 --execute="/path/doit -q arg" after_deploy on_success
 
 Perhaps it be required that if the --execute option is used then either the
 uid= or login= be enforced.   That is I don't think --execute should be
 allowed in anonymous mode. 

Serious suggestion: from what I understand about your application now, it
seems to me that using '-e ssh' would be the best way for you to go instead
of an rsync daemon.  Is there a reason why that's not an option for you?

- Dave Dykstra




Re: bug with --backup and --exclude

2001-02-12 Thread Dave Dykstra

On Mon, Feb 12, 2001 at 11:26:46AM -0600, Chris Garrigues wrote:
 I found the problem.  In keep_backup, make_file needs to be called with a 
 first argument of -1, not 0.
 
 The patch is attached.


Thanks for tracking that down.  I checked the current code and see that it
is fixed already in 2.4.6 (and that's why I couldn't reproduce it).   I
wasn't aware of the problem or I would have encouraged you to upgrade.  See
Revision 1.6 at

http://pserver.samba.org/cgi-bin/cvsweb/rsync/backup.c

- Dave Dykstra




Re: include/exclude confusing output

2001-02-12 Thread Dave Dykstra

On Sat, Feb 10, 2001 at 08:30:32AM -0800, Harry Putnam wrote:
...
  After looking thru the examples in man page it seems this command line
  should do it:
  
rsync -nav --include "*/" --exclude "alpha/" \
 --exclude "alphaev6/" ftp.wtfo.com::rh-ftp/redhat-7.0/updates/ .
  
  But a dry run shows files under the two alphas also being downloaded.

The problem with that is that --include "*/" matches the alpha and alphaev6
directories first so it never applies the two excludes.
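
Since the first matching pattern wins, putting the excludes first should
work (untested sketch):

    rsync -nav --exclude 'alpha/' --exclude 'alphaev6/' \
        ftp.wtfo.com::rh-ftp/redhat-7.0/updates/ .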


  Trying this then:
  
rsync -nav --include "*/" --exclude "alpha/*" \
 --exclude "alphaev6/*" ftp.wtfo.com::rh-ftp/redhat-7.0/updates/ .
  
  That way neither of the alpha directories is included.
  
  Can't think of any more ways... and seems one of those should do what
  I want.

[Note that the --include "*/" has no effect].

...

 Now I'm even more confused... it turns out that the second example
 shown above does do as I wanted and expected, when used without the
 dryrun `-n' flag.  The alpha and alphaev6 directories are downloaded
 with no files under them.
 
 With the -n flag the output indicates they are not.  Very
 misleading and confusing.


The -n output has several inconsistencies compared to a real run.  It needs
to be cleaned up.  A couple years ago somebody promised to do it but it
never happened.

- Dave Dykstra




Re: Using pipes to feed inclusions to rsync

2001-02-12 Thread Dave Dykstra

On Mon, Feb 12, 2001 at 10:14:22PM +0100, Otto Wyss wrote:
 Is it possible to use pipes when specifying an inclusion list of files?
 I.e. is the following statement possible
 
 rsync -aLPv --include-from - --exclude '*' [source] [destination]
 
 where the - after --include-from denotes STDIN. Is there another way or
 syntax to do this? I'd like to use such a statement in a perl script.
 
 O. Wyss

Most modern operating systems allow you to specify /dev/fd/N where N is
the number of an open file descriptor.  Stdin would be /dev/fd/0.

- Dave Dykstra




Re: stupid question wrote x read y bytes using -n option and 'real' syncing

2001-02-13 Thread Dave Dykstra

On Tue, Feb 13, 2001 at 09:26:39AM +0100, wolf wrote:
 Hello,
 
 Before using rsync 2.4.6 I usually test with the -n Option before rsync.
 
 rsync -avz -n --bwlimit=3 --rsh=ssh --rsync-path=/opt/bin/rsync --delete
 tmp user@remotehost:/home/user
 
 I observed the following difference in the Output 'wrote' and 'read'
 
 a) with -n
 
 wrote 792 bytes  read 128 bytes  40.89 bytes/sec
 total size is 59597  speedup is 64.78
 
 b) 'real' rsync
 
 wrote 10273 bytes  read 452 bytes  612.86 bytes/sec
 total size is 59597  speedup is 5.56
 
 Is this ok (cause rsync tells only how many bytes really 'wrote' and
 'write')?

Yes, that's the way it works.  -n doesn't go through enough of the process
to determine how many bytes will be written & read without it.


 How can I estimate how many data would really transferred before
 rsyncing?

Currently there isn't a way that I know of.

- Dave Dykstra




Re: --execute option

2001-02-13 Thread Dave Dykstra

On Tue, Feb 13, 2001 at 12:45:34PM -0500, Wrieth, Henry wrote:
  In general I think the rsync daemon was designed to 
  be a read-only server with a small amount of support 
  for uploading, and if complex uploading and 
  authentication is needed then there are other tools 
  that can still carry the rsync protocol.
 
 Yes, rsync works very well for what it was designed to do.  Most tools like
 this are designed to work in a pull mode.  That is my problem.
 
 you say, "there are other tools that can still carry the rsync protocol."
 Can you pleas tell me what they are so I can take a look?

I was just referring to things like rsh and ssh.  Basically, anything that
provides a data pipe for rsync to work with.

- Dave Dykstra




Re: readlink: no such file or directory // too many open files // problem

2001-02-21 Thread Dave Dykstra

On Wed, Feb 21, 2001 at 10:32:15AM -0500, Spleen wrote (in private email):
 Thnaks for the reply! Answers to your questions are inline:
 - Original Message -
 From: "Dave Dykstra" [EMAIL PROTECTED]
 To: "Spleen" [EMAIL PROTECTED]
 Sent: Wednesday, February 21, 2001 9:45 AM
 Subject: Re: readlink: no such file or directory // too many open files //
 problem
 
 
  On Tue, Feb 20, 2001 at 03:04:02PM -0500, Spleen wrote:
   Hi,
 I've seen others with this problem, and thought I'd see if anyone had
   yet found a solution. It occurs for me on Solaris8 SPARC, and seems to
 be
   exclusive to solaris8 in my reading of the archive. Has anyone
 succesfully
   run this on Solaris8?
 
 I myself have succesfully managed to transfer one file in the
   hierarchy, and find I can transfer more than one as long as they match
 the
   remotehost::module/filename, ie rsyncmaster::www/* /var/www transfers
 all
   files in module www on rsyncmaster, but as soon as I add -r, -a, or take
   off the /*, back come the readlink errors.
 
 Just thought this might be of use to others. And of course of anyone
   can help me I'd be very thankful.
 
  
   Cameron Macintosh
   Programmer
   PageMail, Inc.
 
 
  Where did you get your binary from?
 
 From the rsync website ( rsync.samba.org??? )
 
 
 Which version of rsync?
 
 2.4.6
 
 If you compile
  from source does it have the same result?
 
 haven't tried it, but will today.


I built the solaris 2.5.1 binary on that web site and Andrew Tridgell built
the Solaris 8 binary.  One person reported different problems with both of
them but his results were inconclusive and I don't think I ever heard if he
compiled from source.  If you have success when building from source, I
would definitely like to have you do the same tests with both of the rsync
2.4.6 solaris binaries.  As far as I know my solaris 2.5.1 binary should
work ok on Solaris 8, and I'd sure like to know if it doesn't.

- Dave Dykstra




Re: Using rsync for incremental backups and the logfile dilemma

2001-02-21 Thread Dave Dykstra

On Thu, Feb 22, 2001 at 01:00:49AM +0800, Hans E. Kristiansen wrote:
 I have a related question to this problem.
 
 We are doing backups from PC clients to a Linux server using rsync, and I
 would like change the full backup to incremental backups.
 
 However, the problem is that I may have used the wrong options
 , --destination-dir rather than the --compare-dest option. However, can I still
 use --compare-dest with a remote directory, e.g. something like:
 
 rsync -ave ssh --compare-dest=backuphost:/home/user/backup/lastfullbackup  \
 /cygdrive/c/My Documents/  \
 backuphost:/home/user/backup/$DATE

No, --compare-dest has to be local.


- Dave Dykstra




Re: SEC: unclassified Cannot create tmp/

2001-02-22 Thread Dave Dykstra

On Thu, Feb 22, 2001 at 04:59:28PM +1100, Wolfe, MR Phillip wrote:
 Hi There,
 
 I running with rsync 2.4.3. I have an rsync server, usual port.
 
 I attempt to copy a file from a client to the server with 
 #pwd
 /tmp
 #rsync -vvv /tmp/filename rsyncserver::trial/tmp
 
 In amongst the verbose output I find the line:
 
 cannot create tmp/.filenameyHaanr : No such file or directory
 
 My log file shows that the module in the rsyncd.conf file is working and
 that bytes are writing and being read.
 
 Has anyone else suffered this type of error??
 
 Any feed back greatly appreciated as I'm at a loss.
 
 Cheers.

Does the module have "read only = no"?  The default is "yes".
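
An upload module needs something like this in rsyncd.conf (the path is
made up):

    [trial]
        path = /some/writable/path
        read only = no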

- Dave Dykstra




Re: Incremental backup - New Story

2001-02-22 Thread Dave Dykstra

On Thu, Feb 22, 2001 at 07:01:30PM +0800, Hans E. Kristiansen wrote:
 using both the --backup and the --backup-dir option, I am now able to take
 incremental backup of changed files. However, with a small problem. I have
 noticed that if the user creates new files ( like new documents ), the are
 added to the destination directory, and not to the backup directory. This is
 rather unfortunate, since I would like them to be in the incremental
 directory. Is there any workaround?
 
 Thanks,
 Hans E..
 

I haven't tried it, but perhaps this would work for you instead of using
--backup and --backup-dir (a rough sketch follows the steps):
1. rsync to your incremental backup directory with a --compare-dest
of your old full backup (be sure to use a full path on --compare-dest
because relative paths are relative to the destination).
2. copy all files from your incremental directory on your backup server
to your full backup directory on your backup server.
3. rsync --delete from your source directory to your full backup; the
only thing that should happen then is deletes since you already
moved over changed files in step #2.
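
A sketch of those three steps (all names and paths are made up, shown with
a local source for simplicity; untested):

    DATE=`date +%Y%m%d`
    # 1. changed/new files land in the incremental directory
    rsync -a --compare-dest=/backup/full /src/dir/ /backup/incr-$DATE/
    # 2. fold the incrementals into the full backup
    cp -pr /backup/incr-$DATE/. /backup/full/
    # 3. pick up only the deletions
    rsync -a --delete /src/dir/ /backup/full/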

- Dave Dykstra




Re: readlink: no such file or directory // too many open files // problem

2001-02-22 Thread Dave Dykstra

On Thu, Feb 22, 2001 at 09:24:36AM -0500, Spleen wrote:
 Ok,
   The problem is gone. rsync now works like a charm!
   I tried using the 251 binary, and had the same issue, however, I had only
 replaced the binary on the rsync client side ( just a mistake, I wasn't
 thinking! )
   I then proceeded to compile from source on one of our Solaris251 boxes (
 using gcc ), and replaced the server binary with this. Problem solved!
   So, the precompiled solaris251 binary might have solved my problem, but I
 never tried that. My self compiled 251 binary definately worked on Solaris
 8, although the client ( also Solaris8 ) is still using the precompiled 251
 binary.

Could you please try the precompiled solaris251 binary on the server side?
I would very much like to know if that binary of mine works on solaris8.
It sounds like the 2.4.6 solaris8 binary on the rsync web site is somehow
broken and we should just delete it.  I'll go ahead and do that.

I would also like to know if you would have the same problem if you
compiled rsync 2.4.6 natively on solaris8.  There may be some problem with
upward source incompatibility.

- Dave Dykstra




Re: Getting rsync to keep UID/GID on rsync'd files

2001-02-23 Thread Dave Dykstra

On Fri, Feb 23, 2001 at 09:43:18PM +, Andrew Clayton wrote:
 
 Hello,
 
 I appologise if this is FAQ.
 
 I have been mirroring a web site between two servers, where the files have
 varying
 permissions, owners and groups. But they have the same UID/GIDs on both
 systems.
 
 I have been using the following command.
 
 rsync --verbose --progress --stats --rsh=/usr/bin/ssh --recursive --times
 --perms --owner --group --links --delete
 --exclude-from=/home/web/rsync-excludes.txt . rigel:/home/html
 
 And this has been working fine. But I really need to automate this.
 Curently I run this
 as root, and I get asked for root's password on rigel.
 
 I've been playing with rsync, but no joy. Basically I want to accomplish the
 above, but
 without using ssh.
 
 As a test I setup the following.
 
 On server1 I created a /etc/rsyncd.conf like

There's probably a way to do it with an rsync daemon but I'd first like to
suggest that you'd be better off using one of the no-password authentication
mechanisms with ssh.  The best one is probably to use ssh-keygen to
generate a key without a passphrase and put the public key half of that
into /.ssh/authorized_keys.  Is that an option?  If you want to limit what
it can do that can be done through the authorized_keys file as well.
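For example, something like this (untested, and the key file names differ
between ssh versions; this is the ssh1 flavor):

    ssh-keygen -N ""        # generate a key with an empty passphrase
    cat ~/.ssh/identity.pub | ssh rigel 'cat >> ~/.ssh/authorized_keys'

To restrict what the key may do, prefix its line in authorized_keys with
options such as command="..." or from="...".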

- Dave Dykstra




Re: I also am getting hang/timeout using rsync 2.4.6 --daemon

2001-02-27 Thread Dave Dykstra

Re: http://lists.samba.org/pipermail/rsync/2001-February/003628.html

Some more email was exchanged the last couple days on the subject of a TCP
hang between Solaris and Linux:



 Date: Mon, 26 Feb 2001 12:06:08 -0600
 From: Dave Dykstra [EMAIL PROTECTED]
 To: "David S. Miller" [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED], [EMAIL PROTECTED]
 Subject: Re: Linux 2.2.16 through 2.2.18preX TCP hang bug triggered by rsync
 
 David,
 
 Regarding the patch you sent us (below) to try to help solve the problem of
 Solaris sending acks with a sequence number that was out-of-window:
 
 We have now completed extensive testing with it and analysis of tcpdumps
 and have determined that the patch is working as expected (accepting the
 acks) but it isn't enough to work around the state that Solaris gets itself
 into; the connection still hangs.  It looks like Alexey was right.  Linux
 is able to make further progress getting data sent to Solaris but it isn't
 enough to recover the whole session; the Linux receive window stays at 0 so
 I presume the rsync application isn't reading the data because it's waiting
 for the Solaris side to complete something.  Oddly, every 30 or 60 seconds
 after this situation occurs, Linux sends another 2-4 1460 byte packets and
 they're acknowledged by Solaris.  It seems unlikely that the rsync
 application would be sending exact multiples of 1460, but I didn't do a
 trace during the hang to see if it was generating extra data for some
 reason.
 
 I have attached the tcpdump in case you're interested.  Recall that 'static'
 is Linux and 'dynamic' is Solaris.  We have added our interpretation on
 some of the lines.
 
 We also have had an initial response from Sun where they recommended
 upgrading with a certain patch but that too hasn't solved the problem (the
 attached tcpdump is with the Solaris patch in place).
 
 Thanks for your help and I'll let you know if we do ever get a satisfactory
 answer from Sun.
 
 - Dave Dykstra




 Date: Tue, 27 Feb 2001 13:13:32 -0600
 From: Dave Dykstra [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED],
   [EMAIL PROTECTED]
 Subject: Re: Linux 2.2.16 through 2.2.18preX TCP hang bug triggered by rsync
 
 
 On Tue, Feb 27, 2001 at 09:51:36PM +0300, [EMAIL PROTECTED] wrote:
  Hello!
  
   into; the connection still hangs.  It looks like Alexey was right.  Linux
   is able to make further progress getting data sent to Solaris but it isn't
   enough to recover the whole session; the Linux receive window stays at 0 so
   I presume the rsync application isn't reading the data because it's waiting
   for the Solaris side to complete something.  Oddly, every 30 or 60 seconds
   after this situation occurs, Linux sends another 2-4 1460 byte packets and
   they're acknowledged by Solaris.
  
  The situation is rather opposite. Solaris does not enter persist mode (#1)
  and sets SND.NXT to an invalid value (#2) (these are two different problems).
  So, its ACKs are valid only when it retransmits. With changes made
  at Linux side the session does not hang, but becomes clocked by Solaris
  retransmission timer. When Solaris retransmits, we see ACK and send
  to new window. Apparently, if window from linux side is not open
  for tcp timeout, solaris aborts session.
 
 That makes sense.
 
 
  I talked to sun engineer; he said they are aware of both these bugs,
  but patch fixing this is still not available.
 
 I'm glad to hear that it's gotten to the engineer who will be able to make
 a difference.  Could you please give me his email address?  When a patch is
 available I'm willing to test it for him.
 
 
  [ Also, he was suprised with rsync. 8)8) Though transmissions with closed
window are legal and it would work if solaris was not buggy,
I have to agree: this application is really stupid, it uses tcp
in maximally suboptimal way.
  ]
  
  Actually, this place is one the most thin place in tcp protocol.
  Due to obscurity of specifications, most of OSes have bugs here
  (both fatal like solaris and not fatal, but still affecting performance).
  To avoid such problems maintainers of rsync could take care of not holding
  window closed. 
 
 I have added the rsync author to the Cc and will forward your message to
 the rsync mailing list.
 
 
  Moreover, I have seen report on deadlock in rsync itself,
  when both sides stay with zero window forever, because both sides
  want to write.
 
 I have been closely following rsync for a couple years now and have never
 seen a confirmed case of that, so I'm quite skeptical.  There have been
 numerous cases of triggering OS bugs, however, so I'm not surprised by your
 comment about it being the "most thin place" in the tcp protocol.
 
 Thanks!
 
 - Dave Dykstra


diff -u --recursive --new-file --exclude=CVS 

Re: I also am getting hang/timeout using rsync 2.4.6 --daemon

2001-02-27 Thread Dave Dykstra

On Tue, Feb 27, 2001 at 03:01:38PM -0500, Scott Russell wrote:
 All -
 
 I understand the focus of the discussion below is Linux - Solaris time
 outs but is anyone else seeing the same problem under Linux - Linux when
 running 2.4.6 rsync --daemon?
 
 Currently I'm seeing it from the client end. Both of the servers I'm pulling
 from were updated to 2.4.6 and then I started seeing problems. Since I don't
 have visibility to the server side there isn't too much else I can say for
 sure.
 
 On my end (client side) the system is Red Hat 6.0 + errata and a 2.2.18
 kernel using rsync 2.4.6. In my logs I see "unexpected EOF in read_timeout"
 usually right after the file list is received. Running rysnc with -v -v -v
 doesn't show anything special about the problem.

Those symptoms are quite different.  I suggest checking the server side
logs first.  The EOF occurs on the client side anytime the server side goes
away prematurely.  You probably aren't yet stressing TCP because I don't
think much bidirectional traffic is exchanged so early, unless your first
file is very large.

- Dave Dykstra




Re: should rsync also be called ssync?

2001-02-28 Thread Dave Dykstra

On Wed, Feb 28, 2001 at 11:04:56AM +1100, Martin Pool wrote:
  Dave Dykstra wrote:
  How does everybody (especially Martin and Tridge) feel about the idea of
  rsync defaulting to "-e ssh" if it is invoked under the name ssync?  Around
  here everybody is being told they should stop using r* commands and start
  using the s* equivalents so it seems a natural changeover.  If there is
  general agreement, I'm willing to implement it and change the default
  installation to install symlinks at bin/ssync and man/man1/ssync.1.
 
 Personally I don't see any need to build a separate binary when a
 shell script or alias would do just as well.  

I certainly wasn't talking about building a separate binary, I was just
talking about putting a symlink called 'ssync' in the standard install if
ssh was found.


 Also, I think in any
 given system people are going to either be using rsh or ssh, not both.
 (Does *anyone* really use rsh rather than daemon or ssh anymore?)

That's very much not the case around here yet.  Most people use only the
intranet and are just starting to get acquainted with ssh because of a
push by corporate security.  It's going to take quite a while before
people transition, and it would be very handy to be able to choose
rsync or ssync commands to choose a different default.  It would also
ease confusion as everybody begins to think "r* means bad security".

Regarding Phil's comment about there not being a need to use up the name
"ssync" in the command name space, I argue that a lot of people who have
heard of rsync and ssh would expect ssync to be a secure version of rsync
and would be confused if somebody used that name for something else.


 It seems fairly clear that it would be good to change over to SSH as
 the default:
 
  * for people who can use either, it's better to be secure by default.
 
  * people still using rsh should be gently encouraged to move to ssh
    unless they specifically choose to stay.
 
  * ssh can (optionally) fall back to using rsh if there's no ssh
server listening.
 
  * forgetting to set RSYNC_RSH or -e is a pretty common problem for
people wanting to start using it.
 
  * if all else fails, you can do
--with-default-rsh=/etc/alternatives/remoteshell, or ln -s rsh ssh
 
 However, I can imagine this causing small problems for people building
 binary distributions.

I don't mind there being a configure option, but I do a wide binary
distribution and would not be able to enable the option and I would object
to it being the default because of the number of things it could break.
For example, I'll expect I'll have a lot of cases where both sshd and rsh
will be enabled for the same server so it won't fall back to rsh, but ssh
will prompt for a password but rsh won't.



On Wed, Feb 28, 2001 at 04:31:44PM +1100, Martin Pool wrote:
 This is my idea of the patch.  Note that this would make ssh the
 default for 2.4.7, unless you specify otherwise at configure or run
 time.
...
 Index: configure.in
 ===
 RCS file: /data/cvs/rsync/configure.in,v
 retrieving revision 1.67
 diff -p -c -r1.67 configure.in
 *** configure.in  2001/02/24 01:37:48 1.67
 --- configure.in  2001/02/28 05:15:17
 *** AC_PROG_CC
 *** 42,47 
 --- 42,62 
   AC_PROG_INSTALL
   AC_SUBST(SHELL)
   
 + AC_ARG_WITH(rsh,
 + [  --with-rsh=COMMAND  use alternative remote shell program],
 + [ RSYNC_RSH="$withval" ],
 + [ RSYNC_RSH=ssh ] )


I object to having RSYNC_RSH be set to ssh by default.  I also object to
it causing an error if ssh is not installed.

I think the name --with-default-rsh is better.


 + 
 + AC_MSG_CHECKING("for $RSYNC_RSH in \$PATH")
 + if locn=`which $RSYNC_RSH`

"which" is not a standard function.  It is not on three of the system
types I build binaries for.  Use AC_PATH_PROG(VARIABLE, PROG-TO-CHECK-FOR).


 + then
 + AC_MSG_RESULT($locn)
 + else
 + AC_MSG_RESULT(no)
 + AC_MSG_WARN("$RSYNC_RSH does not seem to be in \$PATH on this machine")
 + fi
 + AC_DEFINE_UNQUOTED(RSYNC_RSH, "$RSYNC_RSH", [Default remote shell program to use])
 + 
   AC_CHECK_PROG(HAVE_REMSH, remsh, 1, 0)
   AC_DEFINE_UNQUOTED(HAVE_REMSH, $HAVE_REMSH)
...
 Index: rsync.h
 ===
 RCS file: /data/cvs/rsync/rsync.h,v
 retrieving revision 1.97
 diff -p -c -r1.97 rsync.h
 *** rsync.h   2001/02/23 01:02:55 1.97
 --- rsync.h   2001/02/28 05:15:21
 *** enum logcode {FNONE=0, FERROR=1, FINFO=2
 *** 73,84 
   
   #include "config.h"
   
 - #if HAVE_REMSH
 - #define RSYNC_RSH "remsh"
 - #else
 - #define RSYNC_RSH "rsh"
 - #endif
 - 
  #include <sys/types.h>
   
   #ifdef HAVE_UNISTD_H
 --- 73,78 


Here's one problem I see: you're setting HAVE_REMSH but aren't using it
anymore.  You nee

Re: should rsync also be called ssync?

2001-03-01 Thread Dave Dykstra

On Thu, Mar 01, 2001 at 06:31:49PM +1100, Andrew Tridgell wrote:
  It would also ease confusion as everybody begins to think "r* means
  bad security".
 
 I think this argument is a little weak. There are 143 commands
 starting with r on my system. Only 2 or 3 of them suffer from the rsh
 style security problems.

Right, but in particular the network commands that rsync is perceived
to be among.  I assume the "r" in rsync was derived from the "r" in rsh
and probably many other people do too.

On Thu, Mar 01, 2001 at 01:16:29PM +1100, Martin Pool wrote:
 On 28 Feb 2001, Dave Dykstra [EMAIL PROTECTED] wrote:
...
 I guess this is getting off the track, but I think that is a kind of
 weird situation.  If you're allowing rhosts login for rsh, why not
 allow it for ssh?

It is allowed but the problem is that it takes a while for people to figure
out that they also need to have a "known_hosts" entry for the client machine
because only RhostsRSAAuthentication is allowed.  Also, the client isn't
always setuid-root which is also needed for .rhosts.

...

 Why not just do it as a shell script?
 
   #! /bin/sh
   rsync -e ssh "$@"
 
 The GNU Standards (for what they're worth) deprecate having program
 behaviour depend on argv[0], and I'm inclined to agree, especially
 because you're doing to support a program that really should be dead
 by now.

That's an interesting idea that I hadn't thought of for this case.  I'm
assuming you mean including that shell script in the standard package and
not just suggesting that people do this on their own.  I'll consider it
when I get into it, but I can think of a couple minor disadvantages of
doing it as a shell script:

1. It can be tricky to locate another program sometimes.  It's not
always in the PATH.  An absolute path can be used but then the
package isn't relocatable without edits and so far the rsync
package is.  A counter argument of course is that people could just
edit the script to relocate it.

2. It's slightly slower on startup.

Anybody know GNU's reasons against having program behavior depend on
argv[0]?  I guess GNU has had a lot of headaches with people wanting to
install things with a "g" prefix or not; that might explain it.

- Dave Dykstra




Re: To exclude or include... That is the question.

2001-03-02 Thread Dave Dykstra

On Fri, Mar 02, 2001 at 04:27:06PM +1100, Peter Cormick wrote:
 I'm trying to use rsync to backup user home directories.  They all live
 in /home, but so does some other rubbish which I do not want.
 The user ids all start with u, so I tried something like this:
 exclude = *
 include = u*
 But this will only allow processing of file and directories that start
 with u.  What I want is for everything below the ublahblah directory to
 be copied.  How to achieve this... simply?
 
 Many thanks to those that help.
 

Use --include '/u*/' --exclude '/*/' (in that order).
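i.e. something like (untested; the destination is made up):

    rsync -a --include '/u*/' --exclude '/*/' /home/ backuphost:/backup/home/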

- Dave Dykstra




Patch to implement ssync

2001-03-05 Thread Dave Dykstra

Proposed patch to implement ssync follows.  I decided to go ahead with
the shell script idea.  Unless I hear objections, I'll figure on submitting
this to rsync CVS in a day or two.

- Dave Dykstra


*** configure.in.O  Fri Mar  2 15:42:18 2001
--- configure.inFri Mar  2 16:27:53 2001
***
*** 42,47 
--- 42,62 
  AC_PROG_INSTALL
  AC_SUBST(SHELL)
  
+ AC_ARG_WITH(default-ssh,
+   [  --with-default-ssh=PATH default path to ssh for ssync],
+   [ RSYNC_SSH="$withval"
+ AC_MSG_CHECKING(for --with-default-ssh)
+ AC_MSG_RESULT($withval)
+   ],
+   [ AC_PATH_PROG(RSYNC_SSH, ssh) ])
+ AC_SUBST(RSYNC_SSH)
+ INSTALLSSYNC=
+ if test -n "$RSYNC_SSH"; then
+ # this is a Makefile target
+ INSTALLSSYNC=installssync
+ fi
+ AC_SUBST(INSTALLSSYNC)
+ 
  AC_CHECK_PROG(HAVE_REMSH, remsh, 1, 0)
  AC_DEFINE_UNQUOTED(HAVE_REMSH, $HAVE_REMSH)
  
***
*** 290,293 
  AC_SUBST(CC_SHOBJ_FLAG)
  AC_SUBST(BUILD_POPT)
  
! AC_OUTPUT(Makefile lib/dummy zlib/dummy)
--- 305,315 
  AC_SUBST(CC_SHOBJ_FLAG)
  AC_SUBST(BUILD_POPT)
  
! AC_OUTPUT(Makefile lib/dummy zlib/dummy ssync)
! 
! if test -z "$RSYNC_SSH"; then
! # another AC_OUTPUT doesn't work so instead always generate ssync
! #   and then clean it out if don't need it
! echo "removing ssync"
! rm -f ssync
! fi
*** Makefile.in.O   Fri Mar  2 16:03:14 2001
--- Makefile.in Fri Mar  2 16:31:03 2001
***
*** 42,55 
  
  man: rsync.1 rsyncd.conf.5
  
! install: all
-mkdir -p ${bindir}
!   ${INSTALLCMD} -m 755 rsync ${bindir}
-mkdir -p ${mandir}/man1
-mkdir -p ${mandir}/man5
${INSTALLCMD} -m 644 $(srcdir)/rsync.1 ${mandir}/man1
${INSTALLCMD} -m 644 $(srcdir)/rsyncd.conf.5 ${mandir}/man5
  
  install-strip:
$(MAKE) INSTALLCMD='$(INSTALLCMD) -s' install
  
--- 42,61 
  
  man: rsync.1 rsyncd.conf.5
  
! install: installrsync @INSTALLSSYNC@
! 
! installrsync: all
-mkdir -p ${bindir}
!   ${INSTALLCMD} -m 755 $(srcdir)/rsync ${bindir}
-mkdir -p ${mandir}/man1
-mkdir -p ${mandir}/man5
${INSTALLCMD} -m 644 $(srcdir)/rsync.1 ${mandir}/man1
${INSTALLCMD} -m 644 $(srcdir)/rsyncd.conf.5 ${mandir}/man5
  
+ installssync:
+   ${INSTALLCMD} -m 755 $(srcdir)/ssync ${bindir}
+   ${INSTALLCMD} -m 644 $(srcdir)/ssync.1 ${mandir}/man1
+ 
  install-strip:
$(MAKE) INSTALLCMD='$(INSTALLCMD) -s' install
  
*** ssync.in.O  Fri Mar  2 16:32:01 2001
--- ssync.inFri Mar  2 16:11:04 2001
***
*** 0 
--- 1,4 
+ #!/bin/sh
+ prefix=@prefix@
+ exec_prefix=@exec_prefix@
+ exec @bindir@/rsync -e @RSYNC_SSH@ ${1+"$@"}
*** ssync.1.O   Fri Mar  2 16:32:03 2001
--- ssync.1 Mon Mar  5 08:39:40 2001
***
*** 0 
--- 1,13 
+ .TH "ssync" "1" 
+ .SH "NAME" 
+ ssync - alias for "rsync -e ssh"
+ .SH "SYNOPSIS" 
+ .PP 
+ ssync [rsync_options]
+ .SH "DESCRIPTION" 
+ .PP 
+ ssync is a convenience function to invoke "rsync -e ssh" (using full path
+ names for the two programs).  All options are passed directly to rsync.
+ .SH "SEE ALSO" 
+ .PP 
+ rsync(1)




--report and --log-format options (was Re: Moving files with rsync)

2001-03-06 Thread Dave Dykstra

On Tue, Mar 06, 2001 at 07:08:05PM +1030, Alex C wrote:
 I'm working on a project where I need to automate the transfer of
 files securely over a dialup connection. Files need to be moved both
 ways with wildcard pattern matching needed on both sides to find the
 right files.
 
 I've got this working with ssh and scp, but this requires many
 separate ssh invocations (especially for retrieving files, e.g. ssh to
 ls files, scp to copy files, ssh to rm files). There is a noticeable
 delay for each ssh invocation, and this is more error prone since
 accidental disconnection (e.g. of the dialup link) could leave the
 files existing on both sides.
 
 This is what I need:
 - Secure transfers
 - The ability to send/retrieve all files matching wildcard patterns
 - The ability to have files deleted after they have been transferred
 - Atomic operation as much as possible, so that files won't end up
   existing on both sides in the case of an error
 - The ability to do it all with as few reconnections as possible
 
 It looks like rsync would be great for this, since it can work over
 ssh, match wildcards on the remote side with --include etc. but there
 doesn't appear to be a way to remove the files (at least on the remote
 side) after they have been received, and only one transfer direction
 is supported per rsync invocation. Is there a way to get around these
 problems or would I be better off just using ssh or something else?
 Connecting once per send operation and once per receive operation
 would be satisfactory, but moving instead of copying is essential.
 
 I guess what I really want to be able to do is
   rsync --move src dest , src2 dest2 , src3 dest3


I don't think that type of operation is likely to get into rsync itself
but I could certainly see that something could be built successfully on
top of rsync to do that.
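For example, an untested sketch for the send direction only; it deletes the
local files only if the whole transfer succeeded, so an interrupted run
removes nothing:

    #!/bin/sh
    # usage: move-out localdir remotehost:remotedir  (top-level files only)
    rsync -az -e ssh "$1"/ "$2" && rm -f "$1"/*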



 Also, it seems to be possible to send all the files with rsync and
 then remove files based on rsync's output with --log-format=%f, but
 rsync sometimes lists files even if they haven't been successfully
 transferred. Is this a bug? Is the assumption that a file has been
 transferred successfully if it is listed on stdout with --log-format
 and its name did not appear on stderr reasonable?


It's long been desired that the --log-format option be more robust and
provide some guarantees, but it really doesn't unfortunately.  It needs
some close attention to do a good job.

The most recently something related was discussed was last October but I
see it was done in private email so I will attach it here for the record.
Follow the references to previous discussions in January 1999.

- Dave Dykstra


On Wed, Oct 11, 2000 at 11:04:15AM -0500, Dave Dykstra wrote:
 Date: Wed, 11 Oct 2000 11:04:15 -0500
 From: Dave Dykstra [EMAIL PROTECTED]
 To: Douglas N Arnold [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED], Andrew Tridgell [EMAIL PROTECTED], [EMAIL PROTECTED]
 Subject: Re: --report patch for rsync
 
 I'm including Martin Pool in the Cc on this now since he seems to have
 taken over primary maintenance of rsync; it will really be up to him in
 discussion with Andrew (they work in the same office) to decide.  According
 to my email record it looks like you contacted Andrew and me directly, not
 via the rsync mailing list, and didn't send us a patch to look at.  Ah, I
 see the patch is available now on your web page
 http://www.math.psu.edu/dna/synchron
 in the "download" section.
 
 My opinion is that rsync *should* change in scattered places throughout it
 in order to do this properly.  I don't think it should have to be an
 "extensive" change but I do expect it to not be all localized.  Protocol
 changes would be acceptable.
 
 The main thing I don't like about --report is that its format is completely
 set and doesn't give the user any control over the output like --log-format
 can.  Also, I don't see why it should have to imply --dry-run (not to
 mention so many other options); some people might like to find out that
 information about what rsync did, not just what it would do.  My aim is for
 maximum flexibility so that somebody else doesn't need to come along in
 another 6 months with a requirement for something that's slightly different
 and need to add yet another new option.
 
 Another possibility: it seems to me you could do a compromise between your
 current --report and what I had envisioned for --log-format by extending
 the --log-format option to do pretty much exactly what your --report option
 does (except implying --dry-run) but not do the other things I had
 proposed.  If you need to put restrictions on to say that it only works on
 the client side, or that whatever % substitution you choose can't be used
 in combination with some of the others, that's probably something we could
 handle.  Somebody in the future may come along and remove the restrictions.
 
 - Dave Dykstra
 
 
 On Tue, Oct 10, 2000 at 06:07:01PM -0400, Douglas N Arnold wrote:
  
  Dear Dave,
  
 

Re: Should -W be the default when doing a local rsync copy?

2001-03-07 Thread Dave Dykstra

On Wed, Mar 07, 2001 at 11:01:00AM -0600, Dave Dykstra wrote:
 I just did a measurement on a
 copy between two nfs-mounted ~5MB files with and without -W and found it
 took over 3 times longer without the -W.

I forgot to mention that the two files were nearly identical except for
a little bit tacked on the end (it was a backup of a growing mailbox file).

- Dave Dykstra




Re: rsync hard link problems

2001-03-12 Thread Dave Dykstra

On Fri, Mar 09, 2001 at 04:34:53PM -0800, sarahwhispers wrote:
 
 Hi, 
 
 I'm having difficulty using rsync to synchronize 
 hard-linked files.  In some cases, it seems to 
 transfer a hard-linked file again even though 
 one of the other hard-linked files has already 
 been copied.  
...
 The *end* result is always correct, but in the first
 case, 1bar gets sent across, but in the second case 
 3bar is not sent.  It seems to have to do with the 
 fact that 1 comes before 2 and 3 comes after 2.
 
 I want to use rsync to transfer large trees of files 
 with lots of hard links in them. My plan is to start 
 with one or two directories in the tree, then expand 
 to cover more and eventually all.
 
 Obviously if I just started by copying all the files at
 once, it would be ok, but that's not currently an
 option.
 
 Anyone out there know what's going wrong?  Can I do 
 something different to fix it?  Is it a bug?  


I'm afraid that's just the way it is.  Rsync can only manage hard links
between files that are in the same run, and it just keeps track of them in
the order in which it encounters them in each run.  I'm not sure what else
it could do.  You probably need to arrange things so that all files that
are hardlinked together are always in the same run.  Alternatively, after
a file is transferred perhaps you create the hardlinks into the next
destination directory with your own script before letting rsync loose on it.

- Dave Dykstra




Re: [resend] patch: ldap authentication for rsyncd (2.4.6)

2001-03-13 Thread Dave Dykstra

On Tue, Mar 13, 2001 at 03:04:40PM +0100, Stefan Nehlsen wrote:
 hello,
 
 there was absolutely no reaction the first time I posted this in December.
 
 With this patch you may use a ldap server for authentication of rsyncd users.


Can you say more about how you intend to use this?  Perhaps there was no
reaction because nobody else could think of how they would use it
personally; I know that's the case for me.   I know there is resistance to
putting features into rsync if the same thing can be done in other ways.

I'm sure that at a minimum if this were to be accepted into the standard
rsync code base there would have to be corresponding configure and man page
changes before that could happen.

- Dave Dykstra
  (former rsync maintainer)




Re: [resend] patch: ldap authentication for rsyncd (2.4.6)

2001-03-13 Thread Dave Dykstra

On Tue, Mar 13, 2001 at 11:30:43AM -0500, Nicolas Williams wrote:
 Nope. It would be better to integrate GSS and/or SASL support.
 
 PAM is best for initial authentication handling (e.g., user/password).
 
 GSS / SASL are best for network authentication (e.g., Kerberos, PKI,
 etc...).
 
 Nico
 
 
 On Tue, Mar 13, 2001 at 05:19:00PM +0100, Rief, Jacob wrote:
  Hi,
  Wouldn't it be a better idea to use 'pam' for authentication
  instead of doing a hard-wired LDAP-authentication in rsync?
  This would include LDAP authentication through pam_ldap.so.
  Jacob



We had a related discussion to this last month.  See messages from me at
http://lists.samba.org/pipermail/rsync/2001-February/003687.html
http://lists.samba.org/pipermail/rsync/2001-February/003700.html
http://lists.samba.org/pipermail/rsync/2001-February/003703.html


In summary, I think that if you need to have a complicated authentication
mechanism, you should be using ssh or rsh or some other data-pipe-providing
external tool for rsync to run over rather than building it all into rsync
itself.

- Dave Dykstra




Re: handling of uid= parameter (was Re: files can only be read when world-readable)

2001-03-19 Thread Dave Dykstra

On Sat, Mar 17, 2001 at 01:03:24PM +1100, Martin Pool wrote:
 So, I see that in clientserver.c, the uid and gid parameters are
 silently ignored if the daemon is not running as root.  I wonder if we
 should do something differently there.  Perhaps rsync should issue a
 warning if they're present and we're not root.  

It wouldn't hurt to print a warning to the log file, although it's likely
to be ignored.

 Better might be to go
 ahead and try to setuid anyhow, in case the user does have the
 capability to change, and then print a warning if it fails.

The only way that setuid() can succeed when not root is if it is giving up
setuid-bit functionality (chmod 4755), and rsync is not designed to support
that, so I don't see the point.

- Dave Dykstra




Re: Patch to implement ssync

2001-03-20 Thread Dave Dykstra

On Mon, Mar 05, 2001 at 11:31:36AM -0600, Dave Dykstra wrote:
 On Tue, Mar 06, 2001 at 02:10:18AM +1100, Martin Pool wrote:
...
  By all means put the script into the FAQ, or a doc/examples/
  directory, but my preference is that it not be in the main system.
 
 If it's not in the standard package I won't use it because I want people to
 only rely on standard things, and I want to avoid future potential
 nameclashes.   I suppose I could make a whole separate package on
 sourceforge and registered on freshmeat but I'm not sure I'm willing to go
 that far (although I was planning on at least registering the name on
 freshmeat).  Please ask Tridge what he wants to do now.  He previously said
 it was ok with him, but maybe he's changed his mind.  I'll abide by his
 decision.

Just to keep you all informed, Tridge decided to not accept the 'ssync'
patch so I'm going to forget about it.

- Dave Dykstra




Re: exclude list and patterns

2001-03-20 Thread Dave Dykstra

On Tue, Mar 20, 2001 at 12:48:22PM -0500, Magdalena Hewryk wrote:
 Hi,
 
 I'm trying to get an rsync to transfer files using an --exclude-from file.
 
 Syntax:
 ===
 /usr/local/bin/rsync --rsync-path=/usr/local/bin/rsync --rsh "/usr/bin/rsh"
 -av --stats --exclude-from=/export/home/rsync/filelist /prod/app
 host2:/prod/app > /tmp/rsync_list1.log
 
 This is an exclude file:   /export/home/rsync/filelist
 
 profiles/T100.html== works
 listec/comp_[A-Zt2ig7].html  == doesn't work
 listec/comp_[A-Z]*.html  == doesn't work
 listec/[0-9]*  == doesn't work
 .precious/  == works
 
 
 As I noticed when  I put only a pattern like [0-9]* it excludes the right
 files but if I put the pattern in the file name then it doesn't work
 (listec/[0-9]*).  How can I rsync all file names which start with numbers
 but exclude those in /listec directory?  
 
 Any hints?


Your main problem is that since your source directory doesn't end in a
slash, all the files that rsync deals with will have a "app/" prepended to
the path.  You then ran into a bug in rsync's exclude implementation that
prevents patterns with wildcards from matching at the end of a path, which
is how patterns that don't start with a slash are documented to match.

The fix for you is probably to just put a slash at the end of your source
directory.
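i.e., your command becomes:

    /usr/local/bin/rsync --rsync-path=/usr/local/bin/rsync --rsh /usr/bin/rsh \
        -av --stats --exclude-from=/export/home/rsync/filelist \
        /prod/app/ host2:/prod/app > /tmp/rsync_list1.log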

- Dave Dykstra




Re: exclude list and patterns

2001-03-20 Thread Dave Dykstra

On Tue, Mar 20, 2001 at 03:21:54PM -0500, Magdalena Hewryk wrote:
 Hi,
 
 I need to match the pattern:   profiles/[A-Z]{1,3}.html.
 
 When I use the GNU grep I get the correct result:
 =
 [root] # ls -ls * | /usr/local/bin/grep -v "\[A-Z]\{1,3\}.html"
 
2 -rw-r--r--   1 root other  5 Mar 20 14:08 ABCD.html
2 -rw-r--r--   1 root other  5 Mar 20 14:53 ABCDE.html
2 -rw-r--r--   1 root other 10 Mar 20 11:32 T035.html
2 -rw-r--r--   1 root other 10 Mar 20 11:31 T100.html
2 -rw-r--r--   1 root other 15 Mar 20 11:31 not_found.html
 
 Files like A.html, AB.html, ABC.html are not on the list and this is the way
 it should be.
 
 When I try to apply this pattern to the rsync exclude file it doesn't work
 the way I expected.  Rsync copies all files from profiles to host2 instead
 of excluding A.html, AB.html, ABC.html.
 
 profiles/T100.html
 listec/comp_[A-Zt2ig7].html
 profiles/[A-Z]{\1,3\}.html
 listec/[0-9]*
 tmp/
 
 Thanks,
 Magda

Sorry, but that type of wildcard pattern isn't supported by the excludes.
See 'man rsync'.

It is possible to build your own complete list of files to copy and give
them all to rsync, by building a --include list and doing '--exclude *'
at the end.  Currently you need to also either have --include '*/' or
explicitly list all parent directories above the files you want included
or else the exclude '*' will exclude the whole directories.  There's been
talk of adding a --files-from option which would remove this last restriction,
and I even offered to implement it, but I'm still waiting for anybody to
give performance measurements (using rsync 2.3.2 which had an include
optimization that did something similar if there were no wildcards) to show
what the performance impact would be.
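For instance, since GNU egrep does understand {1,3}, an untested sketch of
building such an include list for your case:

    cd /prod/app
    # everything in profiles EXCEPT the [A-Z]{1,3}.html files
    ls profiles | egrep -v '^[A-Z]{1,3}\.html$' | sed 's|^|profiles/|' > /tmp/inc
    # the parent directory must be included explicitly
    rsync -av --include 'profiles/' --include-from=/tmp/inc --exclude '*' \
        . host2:/prod/app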

- Dave Dykstra




Re: exclude list and patterns

2001-03-20 Thread Dave Dykstra

On Tue, Mar 20, 2001 at 05:33:46PM -0500, Alberto Accomazzi wrote:
...
 Dave,
 
 I see you've now mentioned a few times what the performance impact of
 this proposed patch would be, and I can't quite understand what you're
 getting at.  My suggestion of --files-from came from the obvious (at
 least to me) realization that the current include/exclude mechanism is
 confusing to many users, and had nothing to do with performance (at
 least on my mind).  I thought (and still think) that it would provide
 a cleaner interface for performing fine-grained synchronization of
 part of a filesystem, and as such was a desirable feature.
 
 So while I understand the argument of not wanting to clobber rsync
 with a lot of unnecessary features, I thought this one makes sense
 regardless of performance or compatibility issues.  In fact, I think
 it makes sense to have it as a separate option as opposed to kludging
 the equivalent functionality in the include/exclude syntax to avoid
 the proliferation of confusing options and special cases.
 
 Anyway, just wanted to make this point.  As I have mentioned, I don't
 personally *need* this option at the moment, but I think that if
 enough people wanted to see it in rsync it should be implemented
 regardless of what the change in performance may be.


Well the easier syntax only motivates me 90% to personally take the time to
implement the option.  If somebody can show a performance improvement that
will be enough to clinch it for me.  My initial motivation for implementing
the optimization that was taken out in 2.4.0 was performance (which I
hadn't measured), and when Tridge took it out he asked me to show him a
performance gain to justify leaving it in I did some measurements then and
couldn't pursuade myself.  All I'm asking is for somebody to put a little
effort into showing a modest performance difference.

- Dave




Re: exclude list and patterns

2001-03-21 Thread Dave Dykstra

On Tue, Mar 20, 2001 at 06:15:21PM -0500, Alberto Accomazzi wrote:
...
 Well, not to be pedantic here, but how do we measure performance of a
 feature that isn't available yet? 

Essentially the same feature performance-wise was in rsync 2.3.2 and
earlier, when there is an include list that has no wildcards and ends in
exclude '*'.  That's what I'm asking people to measure on some large set of
their files where they think it might make a difference.  The simplest way
to compare is to add one wildcard somewhere to the include list.  Compare
the difference in total elapsed time and possibly CPU time.  Note that when
the optimization kicks in, the parent directories are not required to be
explicitly listed in the include list but when the optimization is off the
parent directories or --include '*/' are required.
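In other words, something along these lines (untested; "rsync-2.3.2" here
stands for a 2.3.2 binary and "list" for your include file):

    # optimization on: no wildcards anywhere in the include list
    time rsync-2.3.2 -a --include-from=list --exclude '*' src/ dest/
    # optimization off: one wildcard, with --include '*/' to supply parents
    time rsync-2.3.2 -a --include '*/' --include-from=list --exclude '*' \
        src/ dest/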

 I guess my point is that Tridge's
 objection to the optimization does not apply here, since this is
 simply a new option rather than a rewrite of code that works already.

It's true that his main objection (the change in the parent directory
include requirement with and without the optimization) does not apply, but
it's still more code to maintain and makes the code more complicated
so it needs proper justification.

 And the new option is there to make the program more user-friendly
 rather than increasing performance.

Different people expressed an interest in it for both reasons.

- Dave Dykstra




Re: Rsync: ssh inside rsync update?

2001-03-21 Thread Dave Dykstra

On Tue, Mar 20, 2001 at 09:06:03PM -0800, Lachlan Cranswick wrote:
 Not sure if this was answered in the recent flurry of postings but
 what is the status on putting a cut-down version of ssh inside
 rsync to allow an "easy" encryption option?

I don't recall anybody suggesting that exactly.  I for one am quite opposed
to the idea.  I think using an external ssh is easy enough.

- Dave Dykstra




Re: rsync stops during transfer

2001-03-22 Thread Dave Dykstra

On Thu, Mar 22, 2001 at 01:25:37PM +0100, Ragnar Wisløff wrote:
 Robert Scholten skrev:
  Hi Ragnar,
 
 Wow, that's quick response! Not waiting for Mir to fall on top of you, 
 I hope ...
 
  It's a common (and nagging) problem.  Could you post again, with:
 
 :-(
 
 
Machine type(s)
 
 2 x Dell PowerEdge 2450 (identical)
 
Operating system(s)
 
 Red Hat Linux 6.2, kernel 2.2.16
 
rsync version(s)
 
 2.4.1 protocol v. 24

That's your problem.  SSH hangs were a known problem in rsync 2.4.1.
Upgrade to 2.4.6.

- Dave Dykstra




Re: RSYNC PROBLEM on DGUX

2001-03-22 Thread Dave Dykstra

On Thu, Mar 22, 2001 at 01:16:15PM -0500, Gerry Maddock wrote:
 I just downloaded the latest rsync and installed it on my DGUX sys. I
 run rsync on all of my linux boxes, and it runs well. Once I had the
 /rsync dir created, wrote an rsyncd.conf in /etc, I started rsync with
 the --daemon option. Next, from one of my linux boxes, I tried to rsync
 files out of my dir listed in rsyncd.conf, but I always get this error:
 [root@penguin /]# /usr/bin/rsync -e ssh 'bailey::tst/*' /tmp/
 @ERROR: invalid gid

Note that when you use the '::' syntax, -e ssh is ignored.

You don't show your rsyncd.conf, but by default it sets 'gid = nobody' and
perhaps you don't have such a group.  See the rsyncd.conf.5 man page.
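For example, an rsyncd.conf along these lines (the path and group here are
only guesses):

    uid = nobody
    gid = nogroup     # or any group that actually exists on the DGUX box
    [tst]
        path = /rsync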

- Dave Dykstra




Re: RSYNC PROBLEM on DGUX

2001-03-23 Thread Dave Dykstra

On Thu, Mar 22, 2001 at 02:11:55PM -0500, Gerry Maddock wrote:
 Thanks Dave that was the problem, Tim Conway helped me with that one. One
 other thing I just noticed is:
 When I rsync from a linux box to a linux box using:
 /usr/bin/rsync -e ssh 'bailey::tst/*' /tmp/
 Linux knows the "*" means all, when I'm on my linux box and I try to sync with
 the DGUX, the DGUX "seems" to treat "*" as an actual file. So, I actually have
 to type in the file or files I want to sync
 IE: /usr/bin/rsync -e ssh 'bailey::tst/test.txt' /tmp/
 WEIRD!

I've seen that before on an operating system that didn't have the glob()
function.  See glob_expand_one() in util.c.

- Dave Dykstra




Re: Rsync: ssh inside rsync update? (fwd)

2001-03-26 Thread Dave Dykstra

On Wed, Mar 21, 2001 at 05:16:13PM +, L. Cranswick wrote:
 
  On Tue, Mar 20, 2001 at 09:06:03PM -0800, Lachlan Cranswick wrote:
   Not sure if this was answered in the recent flurry of postings but
   what is the status on putting a cut-down version of ssh inside
   rsync to allow an "easy" encryption option?
  
  I don't recall anybody suggesting that exactly.  I for one am quite opposed
  to the idea.  I think using an external ssh is easy enough.
 
 May have not paraphrased this correctly(?).
 
 This would be in the archive late October to early November 2000:  
Nov 03 * Martin Pool(113)  Re: Builtin encryption support in


I think that Martin was just discussing an idea and didn't actually
have any plans to go forward.


 Part of the vibe was that running rsync via ssh can be a put-off 
 to implement rsync securely.  Getting rsync running satisfactorily via
 ssh can also assume good and easy access to remote computers
 which may not be the case.  (some sys admins / custodians can
 get freaked when you start talking about tunnelling things
 via ssh - and/or may not have ssh installed)

Starting a user-space sshd shouldn't be much more difficult than a
user-space rsyncd.


 The main concept(?): to encourage people to use rsync securely  -
 such things should  be made easy to do.  The number of
 queries relating to ssh and rsync could infer (or imply?) it
 is only "easy enough" to implement after you have done it a 
 few times?

I understand, but I don't think those considerations are as important as
keeping rsync easy to maintain.  If we can get security by utilizing a very
popular, well understood, well peer-reviewed tool that's separate from
rsync I don't think there's a good reason to clutter up the rsync code
by pulling in pieces of the separate tool into rsync.

- Dave Dykstra




Re: following symbolic links

2001-03-28 Thread Dave Dykstra

On Wed, Mar 28, 2001 at 06:00:35PM +0100, M. Drew Streib wrote:
 On Wed, Mar 28, 2001 at 08:52:01AM -0800, [EMAIL PROTECTED] wrote:
  can you PLEASE tell me how I get rsync to copy file #3 AND the file
  in the sub directory pointed to by the symbolic link ( file #2 ) to 
  from machine #1 to machine #2?
 
 Not quite sure of exactly what you're wanting, but have you looked
 at the '-l' (copy symbolic links) and '-L' (treat symbolic links as
 ordinary files) options?

Also --copy-unsafe-links, which preserves symlinks inside the source tree
but treats symlinks that point outside the source tree as regular files.
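e.g. (host and paths made up):

    rsync -a --copy-unsafe-links machine1:/path/to/src/ /path/to/dest/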

- Dave Dykstra




Re: --include vs --exclude-form

2001-03-29 Thread Dave Dykstra

On Thu, Mar 29, 2001 at 04:28:08PM -0500, Magdalena Hewryk wrote:
 Hi, 
 I am still having a problem with excluding ../tmp directory but copy over
 ../tmp/content.html file and ../tmp/index.html.
 
 I can't --exclude "*" --include "tmp/" --include "tmp/content.html".  I
 simply cannot exclude "*" for I have 1 directories to be updated and I
 cannot include them  with --include option.
 
 
 The syntax below doesn't give me the result I want.
 /usr/local/bin/rsync --rsync-path=/usr/local/bin/rsync --rsh 
  "/usr/bin/rsh" -av --backup --suffix=_prom+
 --include-from=/export/app/infilelist
 --exclude-from=/export/app/outfilelist/
 
 outfilelist
 /tmp
 
 infilelist
 content.html
 index.html
 
 Any advice?


You have to include a parent directory in order to include a file.  Is it
just that you want to exclude all the other files in /tmp?  Using /tmp/* in
outfilelist should work for that.

- Dave Dykstra




Re: problem with deleting...

2001-03-30 Thread Dave Dykstra

On Fri, Mar 30, 2001 at 01:45:40AM +0100, M. Drew Streib wrote:
 On Thu, Mar 29, 2001 at 06:33:37PM -0500, Larry Prikockis wrote:
...
 here's what I'm using:
 
 rsync -e ssh -rca --delete local/. user@remotehost:/remote/

Let me point out that the '-r' is already implied by '-a'. Also, it's
usually highly undesirable to use '-c' because that does a complete
checksum pass through every file on both sides every time; it overrides
the feature of skipping files whose timestamps match.  Usually '-c' is
good for a manual initial sync (for example if you think many files match
but their timestamps might not) but not for repetitive runs from a script.
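In other words, for a script you probably just want:

    rsync -e ssh -a --delete local/. user@remotehost:/remote/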


  Hi!  I'm trying to use rsync over ssh (open_ssh on Redhat Linux 6.2 to be
  exact) in order to update a production web server from a staging server.  
  In the tests I've run so far, everything has worked beautifully *except*
  that files deleted on the staging side stubbornly refuse to be deleted on
  the remote side.  
 
 Please look for an "IO error encountered: skipping deletion" in the rsync
 log (use -v).

I didn't realize that was only printed with -v; I think it should be elevated
to the FERROR level so it will always show.  Anybody disagree?

- Dave Dykstra




Re: Rsync freezing problem

2001-03-30 Thread Dave Dykstra

On Fri, Mar 30, 2001 at 03:03:14PM +0200, Remi Laporte wrote:
 I've noticed that the -v can bring some freezing of rsync.
 
 The tests I've made have shown that the more "v"s I use, the more
 problems happened.
 
 I'm now using this command without problems any more
 rsync -axW --progress --stat --delete --exclude \"lost+found\" /src/
 /target/
 
 Can somebody tell me if he has met difficulties like that with  "-v".
 
 Thank you all.
 -Remi

Yes, that's a known problem, apparently somewhat improved in the latest
sources checked in to CVS.

- Dave Dykstra




Re: The incredible include/exclude dance

2001-04-05 Thread Dave Dykstra

On Wed, Apr 04, 2001 at 08:50:12PM -0700, Harry Putnam wrote:
 
 Once again I find myself in the throes of a major pain in the butt
 figuring out how to exclude the files under directories or at least
 the directories and the files.
 
 Apparently I don't use rsync in enough different circumstances for this
 to become routine.
 
 Every single time I want to use rsync for anything more complex than
 the simplest on disk transfers.  I land smack dab in a large hair
 pulling session about exclude rules.
 
 Currently trying to download the debian distro for my architecture.
 
 The setup under directory `woody/main/' looks like:
 
 drwxrwxr-x4096 2000/12/18 09:40:43 .
 drwxrwxr-x4096 2000/01/16 04:17:17 binary-all
 drwxrwxr-x4096 2001/04/04 12:20:08 binary-alpha
 drwxrwxr-x4096 2001/04/04 12:20:53 binary-arm
 drwxrwxr-x4096 2001/04/04 12:22:42 binary-i386
 drwxrwxr-x4096 2001/04/04 12:24:09 binary-m68k
 drwxrwxr-x4096 2001/04/04 12:25:32 binary-powerpc
 drwxrwxr-x4096 2001/04/04 12:26:46 binary-sparc
 drwxrwxr-x4096 2000/01/16 04:28:31 disks-alpha
 drwxrwxr-x4096 2000/02/07 05:19:17 disks-i386
 drwxrwxr-x4096 2000/03/10 12:03:13 disks-m68k
 drwxrwxr-x4096 2000/03/10 12:03:44 disks-powerpc
 drwxrwxr-x4096 2000/01/16 04:20:23 disks-sparc
 drwxrwxr-x4096 2001/04/04 12:27:41 source
 
 I want only  binary-all/ binary-i386/ and disks-i386
 
 My command line looks like:
 rsync -navvz  --exclude-from=rsync_woody_exclude 
rsync://ftp.debian.org/debian/dists/woody/ .
 
 I'm trying for a dryrun to see how my exclude rules work
 
 cat rsync_woody_exclude
 binary-alpha/*
 binary-arm/*
 binary-m68k/*
 binary-powerpc/*
 binary-spark/*
 disks-alpha/*
 disks-m68k/*
 disks-powerpc/*
 disks-sparc/*
 source/*
 
 But still every damn file on the server turns up in the output,
 including every thing under the ones supposedly excluded.
 
 Also trying with a leading forward slash -- same results.
 
 The man page says, well  point blank really, that this will work.
 
o  if the pattern ends with a  /  then  it  will  only
   match a directory, not a file, link or device.
 
o  if  the  pattern contains a wildcard character from
   the set *?[ then  expression  matching  is  applied
   using  the shell filename matching rules. Otherwise
   a simple string match is used.
 
 What makes this such a gripe is that every single time, a new setup has
 to be jerked with.  
 
 That -n flag should allow me to find out what is going to happen.
 This is not a place where you want to have it wrong.  It would involve
 thousands of files.


Unfortunately you've run into a long known bug.  Here's a message I posted
4 months ago today for somebody who wanted to exclude any path ending in
"foo/*.c":

On Tue, Dec 05, 2000 at 12:09:45PM -0600, Dave Dykstra wrote:
...
 You understand correctly, this is a known bug.  See

 http://lists.samba.org/pipermail/rsync/1999-October/001440.html

 There's not an easy solution, however, because the function rsync uses to
 do wildcard matches, fnmatch(), cannot completely implement the semantics
 as described in the rsync man page.  A workaround for you may be to exclude
 "**/foo/*.c", but that's not complete because the semantics of "**" is such
 that including it anywhere in a pattern means that all asterisks can cross
 slash boundaries.  Thus, it will match baz/foo/oof/bar.c.  As I said back
 then, the rsync include/exclude semantics  implementation needs to be
 completely redone.


In the meantime the man page should probably be changed, but it's bound to
be very difficult to explain.  Anybody interested in taking a crack at it?

- Dave Dykstra




Re: --include index.html vs --exclude-from tmp/*

2001-04-07 Thread Dave Dykstra

On Mon, Apr 02, 2001 at 06:40:01PM -0400, Magdalena Hewryk wrote:
  You have to include a parent directory in order to include a 
  file.  Is it  just that you want to exclude all the other files in /tmp?  
  Using /tmp/* in  outfilelist should work for that.
  
  - Dave Dykstra
  
 
 Dave,
 When I use /tmp/* in my excludefile, it works fine, but I need to exclude
 each tmp directory and include index.html and content.html only.
 
 e.g. for exclusion:
 /tmp
 profiles/tmp
 profiles/advce_for/tmp
 new/tmp
 news/new/tmp
 
 excudelist
 ==
 tmp/*
 
 includelist
 ===
 content.html
 index.html
 
 =---I am getting this result:
 ONLY /tmp is excluded with index.html and content.html mirrored anyway.
 (good)
 Other ../tmp directories are mirrored completely, not just index.html and
 content.html files. (bad result)


I think adding '**/tmp/*' to your excludelist should do it.

- Dave




Re: Rsync: Re: password prompts

2001-04-11 Thread Dave Dykstra

On Sat, Apr 07, 2001 at 02:53:13AM +0100, M. Drew Streib wrote:
 The net-net is:
 
 On the box accepting the connection w/o a password from another box with
 the private key, the security of the accepting box is _only_ as good as
 the account on the originating box.

Strike that "w/o a password" and I agree.  Here is the principle I try to
teach people:

If any host is broken into, NO MATTER WHAT AUTHENTICATION MECHANISM
IS USED to connect from there to a second host, the second host can
also be broken into.

If your password has to pass through the compromised host, it can be
discovered.

The vulnerability on the second host can be limited only by what the
compromised host is permitted to do on the second host, such as some of the
schemes that have been discussed here with the ssh authorized_keys.

- Dave Dykstra




Re: rsync with mtime=1 or 0?

2001-04-11 Thread Dave Dykstra

On Fri, Apr 06, 2001 at 03:20:10PM -0700, Jeff Mandel wrote:
 I'm trying to create a nearline archive.
 
 I don't have another volume big enough to hold a full backup of the
 master volume, so I can't compare the change set in the usual way.
 I would just like to get whatever was modified in the last day - like
 the results of a find -mtime 1 (or 0) would give. Something like:
 rsync -mtime=1 /vol1/ /vol2
 
 This would copy the files modified one day ago, but most files would not
 be copied even though they don't exist on the destination, as the source
 files would be older than a day.
 
 I don't think there's an option to do this with rsync. Did I miss it?
 
 All suggestions welcome.
 
 Jeff


Would some of the files exist on /vol2?  If not, why bother with using
rsync?  Why not just use find and cpio?

Alternatively, you could create an include file with find and pass the
include file to rsync along with --exclude '*' to exclude everything else.
Currently you also need to explicitly include all parent directories of
files you want copied.
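An untested sketch of that second approach (it will also create empty
directories on /vol2 because of the --include '*/'):

    cd /vol1
    # files modified within the last day, anchored at the transfer root
    find . -type f -mtime -1 | sed 's|^\./|/|' > /tmp/inc
    rsync -a --include '*/' --include-from=/tmp/inc --exclude '*' . /vol2
    # or, with find and cpio instead:
    find . -type f -mtime -1 | cpio -pdm /vol2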

- Dave Dykstra




Re: delete does not work as expected

2001-04-13 Thread Dave Dykstra

On Wed, Apr 11, 2001 at 04:32:25PM +0200, Axel Christiansen wrote:
 Hello,
 
 while using rsync to backup a couple of machines i noticed the target dirs
 growing and growing. It looks like not everything related to the
 --delete-excluded will be deleted during the rsync. 
 
 is there something wrong in my rsync call ?
 
 does someone have experience with large transfers? 
 
 
 
 /usr/bin/rsync -avvRb -e /usr/bin/ssh --bwlimit=1000 --timeout=2400
 --force --ignore-errors --delete --delete-excluded  --exclude=/proc**
 --exclude=/mnt** --backup-dir=/temp/luzifer/20010411053000.backup/
 luzifer.webseek.de:/** /backup/luzifer/current/
 
 thx, axel.


There's nothing that looks obviously wrong to me.  If you could construct a
complete simple test case that you could explain to someone so we could
reproduce the problem, that would probably help a lot.

- Dave Dykstra




Re: rsync between partitions

2001-04-13 Thread Dave Dykstra

On Fri, Apr 13, 2001 at 08:44:52PM +0200, Peter T. Breuer wrote:
 "A month of sundays ago Dave Dykstra wrote:"
  This has been asked for before, but the main problem is that rsync builds a
  temporary file and then moves it into place when finished.  If you're
 
 Thanks for the reply.
 
 Can you meet me halfway and add in a command line option to do it
 by overwriting (--use-unsafe-update) instead? 

I don't think that's going to be trivial, but if you can do it without
major hacks it would be a worthwhile option.  Maybe --overwrite-in-place.



  willing to supply enough scratch space on a regular filesystem and then dd
  the result back to the destination partition you can probably do it with
 
 No, that's not an option at the sizes being considered, unfortunately.
 
  a small modification to the rsync source.  You could delete the check
  for !S_ISREG in generator.c where it prints "skipping non-regular file",
 
 Will do.
 
  make the rsync destination be the temporary file on a regular file
  filesystem, and use --compare-dest to get rsync to compare to your target
 
 compare-dest? It's not in my debian rsync (2.3.2), nor in my 2.4.6.
 Oh well.


Look again, it's there.


  partition (assuming the temporary file doesn't exist before you begin).
 
 So you advocate checksumming against the block device and writing to a
 file.  Why not write to the same? Does the code not work with writes to
 a block device? Does it not do lseek(); write()? If it doesn't, surely
 it's easy to change the write routine.

You could write to a block device with extra work but I suggested a regular
filesystem so the normal rsync rename would work; it currently always
chooses a temporary file name in the same directory as the destination file
and renames at the end.



 But I do need to know if the rsync algorithm will go block by block on
 a large file (i.e. partition :), and only send across the changed blocks.
 Knowing that would motivate me heavily to go right in and attack the
 code.

I think that depends on what exactly you mean.  That sounds like a general
description of the rsync algorithm but maybe you have something more
specific in mind.

- Dave Dykstra




Re: rsync between partitions

2001-04-16 Thread Dave Dykstra

On Sat, Apr 14, 2001 at 12:07:46AM +0200, Peter T. Breuer wrote:
...
 Can you point me at how to achieve overwriting instead of going to a
 temporary file? Is it in receiver.c? receive_data()? It seems to rely
 on write_file(). But the fd is already open by the time we get there.
 Can you give me an idea of the call sequence?


Sorry, but I'm really not familiar enough with that part of the rsync code
to be able to advise on how to achieve overwriting vs. using a temporary
file.  Maybe the author Andrew Tridgell could help out.  From my limited
knowledge it doesn't seem like a trivial hack, but maybe it would seem
trivial to him.

- Dave Dykstra




Re: Help

2001-04-16 Thread Dave Dykstra

On Mon, Apr 16, 2001 at 01:02:49PM -0700, Jumblat, Ghassan wrote:
...
 /usr/sbin/rsync -arvzptgo --delete --exclude *core --exclude *.tar --exclude
 *.err --exclude-from $exclude_file ${
 dir[$i]} ${dest_mach}:${loc_dir[$i]} 2>&1 $logfile
...
 building file list ... link_stat s2004can:/etc : No such file or directory
 created directory /tmp/sync_Retail.1229

Rsync thinks you've got 3 directories there because $logfile looks like
a third parameter.  You probably mean to end that by "> $logfile 2>&1".

- Dave Dykstra




--compare-dest usage (was Re: [expert] 8.0 Final)

2001-04-23 Thread Dave Dykstra

On Fri, Apr 20, 2001 at 08:53:23AM -0400, Randy Kramer wrote:
...
 rsync -a -vv --progress --partial [--compare-dest=subdirectory_name] \
 carroll.cac.psu.edu::mandrake-iso/mandrakefreq-i586-20010316.iso \
 mandrakefreq-i586-20010316.iso
...
 In reading about the --compare-dest option, it sounded like I could keep
 two local copies of the file I wanted to update, one in the download
 directory and one in a subdirectory specified in the --compare-dest
 option.  I hoped, that if I restarted rsync after an interruption, it
 would continue building the partially transferred file but use the file
 in the --compare-dest directory for comparison to the original and as
 raw material to avoid transferring duplicate blocks.  

Note that if the target file (mandrakefreq-i586-20010316.iso) is found
in the target directory (.), it will take precedence over a file by
the same name in the compare-dest directory.  Your compare-dest directory
should contain the old complete file but with the same name as the new one.
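For example (untested; the old file name is made up):

    mkdir old
    # the OLD complete image, stored under the NEW file's name
    cp /somewhere/old-complete.iso old/mandrakefreq-i586-20010316.iso
    rsync -a -vv --progress --partial --compare-dest=`pwd`/old \
        carroll.cac.psu.edu::mandrake-iso/mandrakefreq-i586-20010316.iso .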



 It did not work for me, but it might have been because:
 
 -I did not set things up properly -- my hard disk does not have space
 for three copies of the iso, so I burned a CD-Rom and used it for the
 --compare-dest copy.  If I had more hard disk space, I would try this
 again with the third copy in a subdirectory below the directory I am
 transferring the iso into.

The CD-ROM would have to have a file called mandrakefreq-i586-20010316.iso;
is that what it had, or did you burn the whole CD from the iso image?



 -The --compare-dest option only works for entire files -- if you are
 transferring multiple files, those that have been successfully
 transferred are recognized as completed by rsync and are not rechecked,
 those that have not been transferred use the files in --compare-dest as
 the raw material for the resumption of the rsync process.

Yes, files that have been successfully transferred and have a matching
timestamp will be skipped by rsync unless you're using -I, which disables
the skipping of files that have a matching timestamp, as the rsync man page
section on --compare-dest says.

- Dave Dykstra




Re: --compare-dest usage (was Re: [expert] 8.0 Final)

2001-04-24 Thread Dave Dykstra

On Mon, Apr 23, 2001 at 06:57:14PM -0400, David Bolen wrote:
[very good reply to Randy Kramer deleted]

  Aside: I think, based on your previous response, that if I did a
  multifile rsync (say 60 files), and rsync was interrupted after 20
  of the files were rsync'd, the --compare-dest option would work to
  avoid rsync'ing the first 20 files and then rsync would rsync the
  last 40 files in the normal manner (i.e., breaking them into blocks
  of 3000 to 8000 bytes and then comparing them, and transferring only
  the blocks that were different).
 
 I don't think the --compare-dest would be the reason rsync would skip
 the first 20 - it would just see them as existing in the target
 directory at the right date and size.  Where --compare-dest could come
 into play was if they already existed in the separate comparison
 directory, in which case they wouldn't be transferred at all (unless
 you were using the -I option).

The only clarification I would add is to say that --compare-dest only comes
into play when a file does not exist in the target directory (which you
do imply in the first sentence but not in the second).  Any time rsync
finds a file with a matching timestamp and size (when not using -I) in
either the target or compare-dest directory, it will skip the file.  If
it finds a file that does not match the timestamp and size (looking first
in target, then in compare-dest), it will then apply the rsync algorithm
to that file and write output into the target directory.
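
A minimal local demonstration of that order (all paths illustrative):

    mkdir src dest cmp
    echo one > src/a; cp -p src/a cmp/a   # same size and timestamp in compare-dest
    echo two > src/b                      # exists only in src
    rsync -av --compare-dest=../cmp src/ dest/
    # expected: b shows up in dest; a is skipped because cmp/a matches

Note the ../cmp: the relative path is taken from the destination
directory, dest/.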

- Dave Dykstra




Re: delete a single file?

2001-04-25 Thread Dave Dykstra

On Wed, Apr 25, 2001 at 12:06:07PM +0200, andreas haupt wrote:
 Hello all,
 
 a quick one possibly:
   can I use rsync to delete a single file remotely?
   Assume I have deleted a file locally and I want it removed remotely
   but without having to rsync the whole directory.
 
   rsync --delete file remote_machine:.
 
   does not work but gives me a 'link_stat' error because
   the file does not exist anymore locally.
 
 (I know how to use   rsh remote_machine rm file . The question is:
 can rsync do this?)
 
 for those ever curious about the background: I'm syncing big sites once
 per week but might be forced occasionally to delete individual files
 before the big rsync takes place. Using rsync to do this would save me
 time.

You can't use rsync itself, but the remote target to rsh can be a quoted
shell command that runs before the rsync copy and then sends the target
name to stdout:

rsync source remote_machine:'`rm file;echo targetdir`'
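
For example (hypothetical paths, and the remote login shell must
understand backquotes; treat this as an untested sketch):

    rsync -a htdocs/ www1:'`rm -f /web/htdocs/stale.html; echo /web/htdocs`'

The backquoted command runs on the remote side before the transfer
starts, and whatever it echoes becomes the actual target directory.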

- Dave Dykstra




Re: feature-request: libwrap

2001-04-25 Thread Dave Dykstra

On Wed, Apr 25, 2001 at 03:20:58PM -0400, Scott Adkins wrote:
 --On Wednesday, April 25, 2001 3:06 PM -0400 David N. Blank-Edelman 
 [EMAIL PROTECTED] wrote:
 
  Dave Dykstra [EMAIL PROTECTED] writes:
 
  What are the advantages of that over rsyncd.conf's hosts allow and
  hosts deny?
 
  The main advantage would be the ability for sites that already use
  tcpwrappers to centralize their network authorization
  mechanism. Having this information spread out in lots of little
  separate files is harder to maintain than keeping it all under one
  framework in one set of configuration files.
 
  That being said, it is possible to hide rsync daemons behind
  tcpwrappers tcpd, it is just less efficient than having it be built in
  to the server itself (and you still have two sets of config files to
  contend with).
 
 Respectfully,
   David N. Blank-Edelman
 
 I agree with this... In fact, it isn't even difficult to add tcpwrappers
 support to rsync... What would it be?  A few lines of code?  I think the
 most difficult part is adding support to the configuration process of the
 applications that want to use it... but heck, you can rip that code out
 of something else that does that too :-)  In any case, it is cleaner
 to add support directly within the application, since it covers all the
 bases (inetd vs standalone), runs more efficiently (less forking) and
 adds another useful feature to your list (tcpwrappers support!) ;-)
 
 Just another penny to add to the well...


That's a good reason, and I would think that if somebody submitted a good
quality patch to support libwrap it would be accepted.

- Dave Dykstra




Re: rsync, smbmount, NT and timestamps

2001-05-02 Thread Dave Dykstra

On Wed, May 02, 2001 at 11:31:31AM +0100, John N S Gill wrote:
 For some time I've been using rsync to sync up some NT file folders and
 it has been working like a treat.
 
 I use smbmount to mount the NT shares to linux boxes at each end of the
 link and then let rsync do the rest.
 
 Last week the linux boxes were upgraded to redhat 7.1.  I am now using
 the following packages:
 
 samba-2.0.7-36
 samba-client-2.0.7-36
 rsync-2.4.6-2
 
 Since the upgrade I am finding the modify times on the receiving end of
 the job are 2 seconds off from the sending end.
 
 When I use 'ls -l --full-time' on the NT shares i see that all the
 timestamps on the receiving end have an even number in the seconds
 column.  On the sending end I see a mixture of odd and event seconds
 (note most of the files on the sending end were created by users running
 NT itself).
 
 It looks like smbmount/samba can only set the time on my NT shares to
 the nearest 2 seconds, but the problems I'm seeing aren't quite that
 simple.
 
 For instance I see things like this:
 
 (sending machine)
 drwxrwx---1 rems rems  512 Wed Mar 21 09:08:40 2001 X/XYZ
 
 (receiving machine)
 drwxrwx---1 rems rems  512 Wed Mar 21 13:08:38 2001 X/XYZ
 
 If it were as simple as having to round to the nearest even time, then there
 should be no problem if the timestamp at the sending end had an even
 time.
 
 I'm not sure if this is a bug or a feature + if it is a bug whether it
 is rsync, samba or NT that is causing the problem.
 
 The good news is that there is an easy work-around, I have added:
 
  --modify-window 2
 
 to my rsync options.
 
 However, I'd prefer not to have to do this. 


If I remember correctly, that's precisely why that option was added.  I'm
not sure why you didn't see the problem before.  Ah yes, see
http://lists.samba.org/pipermail/rsync/2000-July/002503.html
which says it defaults to 2 on Windows.  It would be good if the man page
said it should be 2 when dealing with a FAT filesystem.
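
Concretely, the workaround looks like this on the command line (paths
illustrative):

    rsync -a --modify-window=2 /mnt/ntshare/ backuphost:/mnt/ntshare/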

- Dave Dykstra




Re: Exclude files (2)...

2001-05-03 Thread Dave Dykstra

On Thu, May 03, 2001 at 01:56:16PM -0600, Jeff Ross wrote:
 No, it was a remote transfer, from one computer on my lan to another.
 
 I wonder if the second -e option, the real '-e ssh', overrides the first,
 mistaken '-e xclude' option?

Ah yes, that's it.

- Dave Dykstra




Re: Move with rsync (was Move with samba)

2001-05-04 Thread Dave Dykstra

On Fri, May 04, 2001 at 08:40:25AM -0400, Benoit Langevin wrote:
 Hi,
 
 I am new to rsync, and it has some advantages, but do you know if I can do
 the equivalent of a move with rsync?  I have a case where I need to delete
 the remote file after retrieving it.  (I know it could be done with ftp get
 and dele, but I am already using rsync to sync files.)


No, there isn't.  Rsync does not support that directly.  When rsync is
going over rsh or ssh you can get it to do arbitrary shell commands before
the transfer over the same connection, but not after.
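
The usual workaround is a separate remote command after a successful pull,
for example (hypothetical paths; this is plain ssh, not an rsync feature):

    rsync -a -e ssh remotehost:/data/file . && ssh remotehost rm /data/file

The rm only runs if rsync exits with status 0, though the file could still
change on the remote side between the copy and the delete.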

- Dave Dykstra




Re: Synchronizing Live File Systems

2001-05-04 Thread Dave Dykstra

On Thu, May 03, 2001 at 11:17:45PM -0400, CLIFFORD ILKAY wrote:
 Hi,
 
 I need to synchronize /home on ProductionServer with /home on BackupServer 
 periodically. ProductionServer has Samba and netatalk running on it and 
 shares files with a network of Windows and Mac OS users. The idea behind 
 BackupServer is if ProductionServer goes down, BackupServer can be pressed 
 into action to replace the down server with minimal interruption. This is 
 not quite fault tolerant but close enough for this purpose. The 
 synchronization interval will be dependent upon the performance of rsync in 
 this scenario so it could be anything from 5 minutes to 15 minutes. The 
 users are creating or editing MS Office files which are stored on the 
 server. What happens when rsync encounters, for example, a Word file that a 
 user is working on? How does rsync deal with files that are in use? Will it 
 skip over the file? Will it just capture whatever is on the disk?

I don't think rsync is a good tool for that application.  If a file is being
modified while rsync is transferring it, the results are undefined.  You'll
be better off with a filesystem that can handle replication.


 How does rsync deal with .AppleDouble files?

I don't know what they are, but I'm sure rsync doesn't do anything special
for them.

- Dave




Re: unexpected EOF in read_timeout

2001-05-08 Thread Dave Dykstra

On Mon, May 07, 2001 at 05:11:23PM -0800, Michael wrote:
 In reading through the archives, I've seen this topic come up several 
 times, but no real solutions. I posted the question on the 
 FAQ-O-MATIC, then realized I should have posted here first.
 
When mirroring a large tree from an rsync server the error
 
unexpected EOF in read_timeout 
unexpected EOF in read_timeout 
 
occurs and the session terminates. This error repeats over and
over resulting in the mirror process never completing. This is
also repeatable on several public servers that have the same
information. I don't know what is happening on the remote end,
but suspect that it is some kind of resource limitation. If the
transfer is attempted for small portions of the tree, all is
well.
 
local system dual Celeron 500 w/500MB RAM, lots of disk
linux-2.2.19
rsync 2.4.6 fresh compile/install
code line:
/usr/bin/rsync -rlptD --timeout=0 --exclude=*~ \
   --delete-after --force $SOURCE $DEST
 
 where source dest =~ 
   some.rsync.server.com::/directory/path/  localpath
 
 This appears to be related to the size of the directory tree that is
 transferred, I think.  Is there some way to limit this so mirroring
 with rsync really works, or must it be done manually for the tree
 slices?
 [EMAIL PROTECTED]


Check the log file of the rsync server.  A number of error messages are
not reported to the client (mostly for security reasons, although some of
them may not actually be a security risk).
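
For example, a line like this in rsyncd.conf (path illustrative):

    log file = /var/log/rsyncd.log

sends the daemon's messages to that file instead of syslog, which is often
easier to check after a failed transfer.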

- Dave Dykstra




Re: include/exclude

2001-05-09 Thread Dave Dykstra

On Wed, May 09, 2001 at 08:57:49AM -0400, [EMAIL PROTECTED] wrote:
 
 I read through the archives, and have gotten a lot of leads, but still
 have not figured out the exact combination for what I am trying to do.
 
 I am copying individual machine's apache logs to a single machine to
 run log analysis on them.  basically, what I have tried:
 
 rsync -ra -v --include /*/2001/05/07/access.log  \
  --exclude * \
--delete-excluded \
  web1::logs /var/tmp/.LogCache/web1
 
 The tree of the log file is broken in to
 
 virtualhostname/YEAR/MONTH/DAY/access.log
 
 My thinking here is that if I run this in a cron job, I will maintain
 two days of raw log data on the single machine that is doing the
 processing.  Any suggestions would be greatly appreciated.

You need to explicitly include each of the parent directories of the
file you want to include, otherwise the --exclude '*' will prevent
rsync from ever looking down into the subdirectories.
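
Using the paths from your command, that would look something like the
following untested sketch; each directory level gets its own include, and
the trailing slashes restrict those patterns to directories:

    rsync -ra -v \
        --include='/*/' \
        --include='/*/2001/' \
        --include='/*/2001/05/' \
        --include='/*/2001/05/07/' \
        --include='/*/2001/05/07/access.log' \
        --exclude='*' --delete-excluded \
        web1::logs /var/tmp/.LogCache/web1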

- Dave Dykstra




Re: Problem with large include files

2001-05-11 Thread Dave Dykstra

On Fri, May 11, 2001 at 11:41:41AM +1200, Wilson, Mark - MST wrote:
 Hi there
  
 I recently tried to do a transfer of a directory tree with about 120,000
 files. I needed to select various files and used the --include-from option
 to select the files I needed to transfer. The include file had 103,555
 filenames in it. The problem I have is that the transfer quit after
 transferring some of the files. I am running the remote end in daemon mode.
 Interestingly the remote daemon spawned for this job was left behind and did
 not quit. I had to kill it when I noticed it many hours later. Unfortunately
 I didn't have any -v options so didn't get any information as to what caused
 it. I will be doing further tests to see if I can get more information.
  
 Are there any restrictions on the amount of files you can have in an include
 file?
  
 The two machines are Sun E1 domains with 12 processors and 12288
 Megabytes of RAM. 
  
 Any ideas on how to crack this would be appreciated.



Ah, perhaps we finally have somebody to perform the test I have been
requesting for months.

Some background: prior to rsync version 2.4.0 there was an optimization in
rsync, which I put in back when I was officially maintaining rsync, that
would kick in whenever there was a list of non-wildcard include patterns
followed by an exclude '*' (and when not using --delete).  The optimization
bypassed the normal recursive traversal of all the files and directly
opened the included files and sent the list over.  A side effect was that
it did not require that all parent directories of included files be
explicitly included, and Andrew didn't like the fact that it behaved
differently when the optimization was in or out, so he removed the
optimization in 2.4.0.  I tried to persuade him to leave it in, and he
asked me to prove that it made a significant performance difference.  I
tried with a list of files that was about as long as I thought I'd ever
need with my application, and I couldn't honestly say that it made a big
difference so it stayed out.

Meanwhile, people on this list have been asking that rsync get a new option
--files-from which would just take a list of files to send.  Many people
want it for convenience and not just performance, but I want to also know
what the performance impact would be.  I offered to implement it in
essentially the same way that my include/exclude '*' optimization was, but
only if somebody would measure the performance difference in their
environment and report the results.  Nobody has done that yet.

So what I'd like you to do is go back to rsync 2.3.2, and report timing
results with and without the optimization.  To turn off the optimization,
all you need to do is add a wildcard to one of the paths.  I'm pretty sure
rsync 2.3.2 only needs to be on the sending side, but to be safe it would
be better to run it on both sides.  Since you say it fails completely with
such a long list of files, perhaps you'll have to cut the list down to some
shorter list until it works without the optimization to do a fair comparison.

- Dave Dykstra




Re: Problem with large include files

2001-05-14 Thread Dave Dykstra

On Mon, May 14, 2001 at 03:12:27PM +1200, Wilson, Mark - MST wrote:
 Dave
 
 A couple of points:
 
 1. I think you are telling me that if I go back to 2.3.2 my problems should
 go away. Is this correct?

Hopefully, assuming you trigger the optimization (no wildcards in the
includes followed by an exclude '*', no --delete).  It depends on what
exactly is failing though.

 2. I rather oversimplified how I am using rsync... Perhaps I had better
 explain. If I can do some testing for you I am happy to do so, however there
 is quite a bit of pressure for me to get my problem fixed. This I must do
 first.
...
 Anyway, the purpose of all this verbosity is twofold. Firstly, you need to
 tell me, given my environment, how you want your testing done, and secondly,
 whether you have any ideas on how to fix my problem. If we can't fix it we will
 have to do the backup to tape and send it on a plane method -which we really
 want to avoid. 

Try rsync 2.3.2 on both ends and see what the result is.  

What transport method are you using (rsh, ssh, or rsync --daemon mode)?




 As a thought, have you or any of the other developers thought of getting
 rsync to operate over a number of streams or to use sliding windows to
 overcome latency effects?

I don't recall that that subject has been discussed much on the mailing list
since I've been participating.  It was my understanding that rsync already
pipelines pretty well in both directions. 

- Dave Dykstra




Re: Problem with large include files

2001-05-14 Thread Dave Dykstra

On Mon, May 14, 2001 at 11:13:40AM -0500, Dave Dykstra wrote:
...
  As a thought, have you or any of the other developers thought of getting
  rsync to operate over a number of streams or to use sliding windows to
  overcome latency effects?
 
 I don't recall that that subject has been discussed much on the mailing list
 since I've been participating.  It was my understanding that rsync already
 pipelines pretty well in both directions. 

Come to think of it, I know from experience that rsync is able to keep the
TCP queues full in both directions during a transfer.  There must be
something else going on.  Does your link perhaps lose a percentage of the
packets, forcing TCP to timeout and retransmit, slowing it down?
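
Some rough ways to check for loss on the path (flags and output vary by
operating system, so treat this as a sketch):

    ping -c 100 desthost             # look at the reported packet loss
    netstat -s | grep -i retrans     # compare retransmit counters
                                     # before and after a transfer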

- Dave Dykstra




Re: Problem with large include files

2001-05-14 Thread Dave Dykstra

On Mon, May 14, 2001 at 01:32:01PM -0700, Wayne Davison wrote:
 On Fri, 11 May 2001, Dave Dykstra wrote:
  The optimization bypassed the normal recursive traversal of all the
  files and directly opened the included files and sent the list over.
 
 There's an alternative to this optimization:  if we could read the
 source files from a file (or stdin), we could use -R to preserve the
 source path.  For instance:
 
 cd /srcpath
 rsync -R --source-files-from=file-list desthost:/destpath
 
 This would allow the behavior of the --include code to remain unchanged
 since the user would be specifying a huge list of files/directories to
 transfer, not a huge list of include directives with a * exclude at
 the end.

Uh, I think you're thinking of the same result as I was with the --files-from
option, but a different implementation.  With --files-from it will leave
the behavior of --include alone, although internally I expect it will still
use the include file list portion of the rsync wire protocol.  I would
still want to have a source path as part of the command line syntax, which
you left out above.  The paths in the --files-from file-list will be relative
to the top level.

- Dave Dykstra




Re: FW: Problem with large include files

2001-05-15 Thread Dave Dykstra

On Tue, May 15, 2001 at 03:31:23PM +1200, Wilson, Mark - MST wrote:
...
 Do you have any idea what the maximum number of files you can have in an
 include file is (for the current version)?

No, I don't.  It probably depends on a lot of variables.

 How do you want your test on 2.3.2 done? ie LAN or high speed WAN, numbers
 of files, sizes of files, things to time, daemon vs rsh.

What I'd like to see is a case that might make the biggest difference with
and without the optimization:
- probably use the LAN
- the largest number of files that you can get to work 
- small files
- time the whole run with the time command, CPU time and elapsed time
- I don't know about daemon vs rsh, but the daemon leaves the most 
under rsync's control so that may be preferable
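
The timing runs could look something like this (file names illustrative;
the second list would contain one wildcard pattern just to defeat the
optimization):

    time rsync -a --include-from=plain.list --exclude='*' /src/ host::module/
    time rsync -a --include-from=wildcard.list --exclude='*' /src/ host::module/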

- Dave Dykstra




Re: FW: Problem with large include files

2001-05-16 Thread Dave Dykstra

On Wed, May 16, 2001 at 05:25:03PM +1200, Wilson, Mark - MST wrote:
 How do I go about registering this bug with the include file?

I don't think there's any point registering it until you have better
confirmed that that is indeed the problem.  Also, if developers have no way
of reproducing the problem it is highly unlikely to get fixed.  If you can
come up with a fix yourself, of course, then a patch could probably be
applied.  Personally I'm not convinced that the problem you're seeing is an
include file problem, but your 2.3.2 testing may give better evidence.

There is an rsync bug tracking system, but I'm not sure how thoroughly
anybody looks at it.  I know I don't; I used to while I maintained rsync
but haven't since.  Martin, do you look at and respond to bug reports in
the rsync bug tracking system?  The main page says there are 404 messages
in the incoming bucket, and I believe they're supposed to get moved to
another bucket once somebody has replied to them.  Currently, posting to
the mailing list is much more likely to get a response.


 It would be good to get this bug fixed, as I would like to be able to go
 back to 2.4.6 (or whatever) as it is faster and it has bandwidth limiting.

It's faster?  Why do you say that?  I don't recall any changes in the 2.4.x
series explicitly related to performance.

 Will let you know the results of the testing.

- Dave Dykstra




Re: temp files during copy

2001-05-17 Thread Dave Dykstra

On Thu, May 17, 2001 at 04:43:50PM -0400, Jim Ogilvie wrote:
 Hi,
 
 I know rsync creates temp files in the destination directory  and
 then at some point renames them to the original file name.  Therefore
 the destination directories need to be larger than the source directories.
 
 I'm trying to find a way to calculate how much larger the destination
 directories need to be.  How does rsync decide when to rename them?  Is it
 by directory?
 
 Thanks,
 
 Jim Ogilvie
 [EMAIL PROTECTED]
 

It's done per file, so you only need extra space for the largest file.
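
So the extra space needed is roughly the size of the single largest source
file, which you can find with something like (GNU find syntax; other finds
differ):

    find /src -type f -printf '%s %p\n' | sort -n | tail -1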

- Dave




Re: --delete not working with many files?

2001-05-22 Thread Dave Dykstra

On Mon, May 21, 2001 at 01:08:21PM +0200, Lasse Andersson wrote:
 
 
 Hi,
 
 If I use rsync on a smaller set of files, the --delete option deletes
 files on the receiving side.
 
 But I cannot get this to work on a big set of files (in this case I try to
 rsync a whole machine, /).
 
 Version of rsync on both sides are 2.4.6
 Sender: Solaris 2.6
 Receiver: FreeBSD 4.1.1
 
 Command used is :
 
 /opt/bin/rsync -vazR --timeout=2400 --bwlimit=500 --numeric-ids \
 --exclude-from=/opt/etc/rsync-exclude.list \
 --delete --delete-excluded \
 -e '/opt/bin/ssh -i /.ssh/identity.rsync -l root -C -x' \
 / recieving-host:/rsync-dest-dir/
 
 
 No files or directories get deleted on the receiver's side.
 
 Maybe I missed something?
 
 Help appreciated.


It's probably not the length of the list that's making the difference,
you're probably running into an I/O error which by default disables
deletions.  This is mentioned under --delete in the man page, but I just
noticed that the man page doesn't explain there that you can have it delete
anyway by using the --ignore-errors option.  It's a bit dangerous to use
--ignore-errors because if there is a serious error you may end up deleting
things you don't want to, so perhaps you can go and correct the error
instead.  Some people have requested that rsync distinguish between different
kinds of errors; for example, a permissions error perhaps shouldn't cause
deletions to stop.  I'm not sure what I think about that, because I can
imagine some scenarios where a permissions error could cause a delete
disaster.
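
If you do decide the risk is acceptable, the option just gets added to the
same command line, for example (destination illustrative):

    rsync -vazR --delete --delete-excluded --ignore-errors \
        -e ssh / desthost:/backup/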

- Dave Dykstra




Re: disk space requirement in using rsync

2001-05-25 Thread Dave Dykstra

On Fri, May 25, 2001 at 11:29:48AM -0400, dywang wrote:
 hi, there,
 
 I understand that rsync needs about 100 bytes for every file to be
 transferred in order to build the file list.
 
 Could anyone tell me where this space is needed: on the machine where rsync
 is initiated, on the remote machine, or both?
 
 e.g. at Box A:
 Box-A#rsync -avz -e ssh dir-a someone@Box-B:
 or
 Box-A#rsync -avz -e ssh someone@Box-B: dir-a .
 
 so in the above example, which box needs the space to build the file list, A
 or B or both?


It's not disk space that's needed, it's process memory.  I don't know
the exact number of bytes or on which side though.

- Dave Dykstra




Re: problems encountered in 2.4.6

2001-05-25 Thread Dave Dykstra

On Fri, May 25, 2001 at 04:33:28PM -0500, Phil Howard wrote:
 Dave Dykstra wrote:
   One possibility here is that I do have /var/run symlinked to /ram/run
   which is on a ramdisk.  So the lock file is there.  The file is there
   but it is empty.  Should it have data in it?  BTW, it was in ramdisk
   in 2.4.4 and this max connections problem did not exist, so if there
   is a ramdisk sensitivity, it's new since 2.4.4.
  
  I don't know if it will show up with data in it or not, I've never tried it.
  You'll probably need to do some straces.
 
 Where is the count of number of current connections supposed to be kept?
 It's obviously not actually being kept in this file, at least not when on
 a ramdisk.  But if it's supposed to be, that's the problem.  OTOH, it is
 easy to get the count out of sync this way, too.  If a process is killed
 or otherwise just dies, the count is higher than real.  When I do multi-
 process servers with controlled process counts, I like to have the parent
 track the number of children running.  Of course that precludes using inetd.

It locks different ranges of bytes of the file rather than keeping a count in
it.  I guess the idea is that if a process dies, the operating system
will automatically remove the lock.

- Dave Dykstra




Re: reset error

2001-05-29 Thread Dave Dykstra

On Fri, May 25, 2001 at 02:52:24PM -0700, Simison, Matthew wrote:
 I am getting this error,
 
 read error: Connection reset by peer
 
 Why is this happening?
 
 Solaris 7 to Solaris 7
 rsync v-2.4.1
 
 rsync -a -z --address ${IP} /data/test user@${hostIP}::root/data
 
 Matt


First, be sure to upgrade to rsync 2.4.6 as 2.4.1 had some severe
problems.  I'm not sure any of them would have affected rsync daemon mode
(that is, using ::) but it definitely affected at least ssh.

Next, as far as I can tell from the rsync man page the --address option
will make no difference unless you're starting up an rsync --daemon.  I
wasn't even aware the option existed until now.

Finally, chances are you've got a problem with your rsyncd.conf file on
the target machine, but I can't tell because you didn't post it.  Many errors
on daemons do not get passed back to the client; look at the daemon log file
(either syslog or use the 'log file' option).  Note that you need to be
extra careful regarding security when writing to a daemon, and that you need
to use the 'read only = false' option.
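
A minimal writable module in rsyncd.conf might look like this (names and
path illustrative):

    [data]
        path = /data
        read only = false
        hosts allow = 192.168.1.0/24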

- Dave Dykstra



