Suddenly broken : connection unexpectedly closed
I always get this message when I rsync to php.net. Here's my command line and the error messages:

    $ rsync -avvvzC rsync.php.net::phpweb /usr/local/WWW/phpweb
    generator(distributions/manual/php_manual_cs.tar.bz2,1405)
    sending sums for 1405
    rsync: connection unexpectedly closed (1954742 bytes read so far)
    rsync error: error in rsync protocol data stream (code 12) at io.c(150)
    _exit_cleanup(code=12, file=io.c, line=150): about to call exit(12)

What's wrong with it, and how can I avoid it?

-- 
KISS: Keep It Simple, Stupid.
Re: Suddenly broken : connection unexpectedly closed
On 16 Apr 2002, Ying-Chieh Liao [EMAIL PROTECTED] wrote:

> I always get this message when I rsync to php.net. Here's my command line and the error messages:
>
>     $ rsync -avvvzC rsync.php.net::phpweb /usr/local/WWW/phpweb
>     generator(distributions/manual/php_manual_cs.tar.bz2,1405)
>     sending sums for 1405
>     rsync: connection unexpectedly closed (1954742 bytes read so far)
>     rsync error: error in rsync protocol data stream (code 12) at io.c(150)
>     _exit_cleanup(code=12, file=io.c, line=150): about to call exit(12)
>
> What's wrong with it, and how can I avoid it?

Is the destination disk perhaps full?

-- 
Martin

-- 
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html
Re: Suddenly broken : connection unexpectedly closed
On Tue, Apr 16, 2002 at 18:26:39 +1000, Martin Pool wrote:

> Is the destination disk perhaps full?

I don't think so; it looks fine:

    /dev/da1s1e  4102037  2331519  1442356  62%  113677  918001  11%  /da1

-- 
Allocate four digits for the year part of a date: a new millennium is coming. --- David Huber
how to take least risk on rsync dir
Hello list,

When I rsync dir_A to dir_B, I'd like to make no change to the original dir_B unless the rsync procedure ends without errors. Therefore, I'm hoping for something like:

    rsync -av dir_A dir_B_tmp
    mv dir_B dir_B.bkup
    mv dir_B_tmp dir_B

This small script can ensure a minimal change window between two versions of the archive. Is this built into the native rsync function, or do I have to write scripts myself?

-- 
Patrick Hsieh [EMAIL PROTECTED]
GPG public key: http://pahud.net/pubkeys/pahudatpahud.gpg
Can rsync update files in place?
I've just subscribed, but a search of the archive doesn't indicate this has been handled before...

Is there a way to get rsync to not create a new file while transferring and then rename it, but instead to update the existing file in place, i.e. simply write those blocks that have been updated and leave the rest alone?

That would be ideal for what I wanted rsync for, namely updating copies of Oracle data files. These reside on a volume of a NetApp Filer where other stuff is also located, and snapshots are made regularly of that volume. Simply deleting the old files and copying them again means that the Filer considers each and every block of the old files old, and that means they get preserved in the snapshots, which in turn means that the snapshots take up WAY too much space (the snapshots are a sort of copy-on-write mechanism: as long as only a few blocks are updated, you only need a few blocks to maintain a snapshot).

I thought that I could get around this by using rsync, because of the way it only transfers those blocks that have changed. I now know a bit more about how it works :-) In no way does rsync increase disk I/O efficiency; it increases network I/O efficiency... That of course is fine, but unfortunately not quite suited to the purpose I had in mind.

How difficult would it be to add an option so that it works in the way I would want? I understand that it would mean losing the moved-data optimization, but for Oracle data files, where relatively little is updated (and probably not moved at all), that wouldn't matter a whole lot. (The set of files I'm talking about is 40GB in about 20 files, with 4 files of 8GB.)

In the meantime I'll probably hack up something that compares two files and then updates those blocks in the second file that differ... won't work over a network though :-)

Thanks,
Paul Slootman
Re: Initial debug of client - Need command line help
Experts, feel free to jump on me for being wrong, but the forking, as I understand it, is integral to the way rsync currently does its work, and I don't think you can really debug it in the classical sense. Anybody found a way around this one?

Tim Conway [EMAIL PROTECTED] 303.682.4917
Philips Semiconductor - Longmont TC
1880 Industrial Circle, Suite D, Longmont, CO 80501
Available via SameTime Connect within Philips, n9hmg on AIM
"There are some who call me... Tim?"

John E. Malmberg [EMAIL PROTECTED] wrote:

> I am trying to debug the RSYNC client and server by single stepping through them. The server seems to be ok up to the point where it is waiting for the client connection. On the client side, I am having trouble finding the right options so that it will connect up to the local server without fork()ing a copy of itself or trying to exec the rsh command.
>
> This is on RSYNC 2.5.5 on OpenVMS Alpha. If the OpenVMS lurkers on this list would like a ZIP archive of what I have so far, please let me know.
>
> Thanks,
> -John
> [EMAIL PROTECTED]
> Personal Opinion Only
timeout error in rsync-2.5.5
Dear all,

I've been trying to track down a problem with timeouts when pulling data from an rsync daemon, and I have now run out of useful ideas. The problem manifests itself when I try to transfer a large directory tree on a slow client machine. What happens is that the client rsync process successfully receives the list of files from the server, then begins checking the local directory tree, taking its sweet time. Since I know that the process is quite slow, I invoke rsync with a timeout of 5 hours to avoid dropping the connection. However, after a little over 1 hour (usually 66 minutes or so), the server process simply gives up.

I have verified the problem under rsync versions 2.3.2, and 2.4.6 and up (including 2.5.5), testing a few different combinations of client/server versions (although the client is always a Linux box and the server always a Solaris box). It looks to me as if something kicks the server out of the select() call at line 202 of io.c (read_timeout) despite the timeout being correctly set to 18000 seconds. Can anybody think of what the problem may be? See all the details below.

Thanks,

-- Alberto

CLIENT:

    [ads@ads-pc ~]$ rsync --version
    rsync version 2.5.5 protocol version 26
    Copyright (C) 1996-2002 by Andrew Tridgell and others
    http://rsync.samba.org/
    Capabilities: 64-bit files, socketpairs, hard links, symlinks, batchfiles,
                  IPv6, 64-bit system inums, 64-bit internal inums
    rsync comes with ABSOLUTELY NO WARRANTY. This is free software, and you
    are welcome to redistribute it under certain conditions. See the GNU
    General Public Licence for details.

    [ads@ads-pc ~]$ rsync -ptv --compress --suffix .old --timeout 18000 -r --delete rsync://adsfore.harvard.edu:1873/text-4097/. /mnt/fwhd0/abstracts/phy/text/
    receiving file list ... done
    rsync: read error: Connection reset by peer
    rsync error: error in rsync protocol data stream (code 12) at io.c(162)
    rsync: connection unexpectedly closed (17798963 bytes read so far)
    rsync error: error in rsync protocol data stream (code 12) at io.c(150)

SERVER:

    adsfore-15: /proj/ads/soft/utils/src/rsync-2.5.5/rsync --version
    rsync version 2.5.5 protocol version 26
    Copyright (C) 1996-2002 by Andrew Tridgell and others
    http://rsync.samba.org/
    Capabilities: 64-bit files, socketpairs, hard links, symlinks, batchfiles,
                  no IPv6, 64-bit system inums, 64-bit internal inums
    rsync comes with ABSOLUTELY NO WARRANTY. This is free software, and you
    are welcome to redistribute it under certain conditions. See the GNU
    General Public Licence for details.

From the log file:

    2002/04/16 08:52:48 [18996] rsyncd version 2.5.5 starting, listening on port 1873
    2002/04/16 09:39:01 [988] rsync on text-4097/. from ads-pc (131.142.43.117)
    2002/04/16 10:51:36 [988] rsync: read error: Connection timed out
    2002/04/16 10:51:36 [988] rsync error: error in rsync protocol data stream (code 12) at io.c(162)

From a truss:

    adsfore-14: truss -d -p 988
    Base time stamp:  1018964639.2848  [ Tue Apr 16 09:43:59 EDT 2002 ]
    poll(0xFFBE4E90, 1, 1800)                       (sleeping...)
    4057.4093 poll(0xFFBE4E90, 1, 1800)             = 1
    4057.4098 read(3, 0xFFBE5500, 4)                Err#145 ETIMEDOUT
    4057.4103 time()                                = 1018968696
    4057.4106 getpid()                              = 988 [18996]
    4057.4229 write(4, " 2 0 0 2 / 0 4 / 1 6   1".., 66)   = 66
    4057.4345 sigaction(SIGUSR1, 0xFFBE4D20, 0xFFBE4DA0)   = 0
    4057.4347 sigaction(SIGUSR2, 0xFFBE4D20, 0xFFBE4DA0)   = 0
    4057.4349 time()                                = 1018968696
    4057.4350 getpid()                              = 988 [18996]
    4057.4352 write(4, " 2 0 0 2 / 0 4 / 1 6   1".., 98)   = 98
    4057.4357 llseek(0, 0, SEEK_CUR)                = 0
    4057.4359 _exit(12)

Alberto Accomazzi                              mailto:[EMAIL PROTECTED]
NASA Astrophysics Data System                  http://adsabs.harvard.edu
Harvard-Smithsonian Center for Astrophysics    http://cfawww.harvard.edu
60 Garden Street, MS 83, Cambridge, MA 02138 USA
-P option fails
Hi,

It seems rsync no longer resumes partial transfers after a SIGINT (CTRL-C). I tried the following:

    % rsync -avzP ~/video/Gone_In_60_Seconds_-_DivX.avi 192.168.0.3:/backup/DivX
    building file list ...
    1 files to consider
    Gone_In_60_Seconds_-_DivX.avi
          262144   0%   10.34kB/s   19:40:22
    [CTRL-C]

Testing the remote size:

    % rsync -avzP 192.168.0.3:/backup/DivX/Gone_In_60_Seconds_-_DivX.avi
    receiving file list ...
    1 files to consider
    -rw-rw-r--      522492 2001/08/27 15:37:07 Gone_In_60_Seconds_-_DivX.avi

Testing resume:

    % rsync -avzP ~/video/Gone_In_60_Seconds_-_DivX.avi 192.168.0.3:/backup/DivX
    building file list ...
    1 files to consider
    Gone_In_60_Seconds_-_DivX.avi
           32768   0%    0.00kB/s    0:00:00

Back to square one: this test shows -P has no effect. Am I missing something?

Thanks in advance for your insight.

(rsync version 2.5.4 protocol version 26 on both sides)

-- 
PHEDRE: Je voulais en mourant prendre soin de ma gloire,
Et dérober au jour une flamme si noire :
Je n'ai pu soutenir tes larmes, tes combats ;
(Phèdre, J-B Racine, acte 1, scène 3)
RE: how to take least risk on rsync dir
Patrick Hsieh [[EMAIL PROTECTED]] writes:

> When I rsync dir_A to dir_B, I hope I won't make any change to the original dir_B unless the rsync procedure ends without errors. Therefore, I hope there's something like:
>
>     rsync -av dir_A dir_B_tmp
>     mv dir_B dir_B.bkup
>     mv dir_B_tmp dir_B
>
> This small script can ensure the minimal change time between two versions of the archive. Is this built into the native rsync function? Do I have to write scripts myself?

rsync's default behavior ensures this sort of minimal change time, but only at a per-file level. That is, each file is actually built as a temporary copy and then only renamed on top of the original file as a final step. Of course, that's largely a requirement so rsync can use the original file as a source for the new file, but it also serves to preclude interruption of the original file as long as possible.

But if you want the same sort of assurances at something larger than a file level (e.g., a directory as above), then yes, you need to impose that on your own. For example, when backing up databases (where I need to keep the database backup and transaction logs in sync), I copy them into a temporary directory and only overlay the primary destination files when fully done.

The simplest way to do it is close to what you have, but there are a few things you need to be aware of. First, you'll want to use the rsync --compare-dest directory so that it can still find the original files on the destination system for its algorithm - otherwise it'll send over the entire contents of the source files and not use what it can from the original files. Second, you need to realize that by default rsync will only copy files that have changed (by default based on size/timestamp, unless you add the -c (checksum) option). So if you do what you have above, you'll end up losing files that hadn't changed, since they won't exist in dir_B_tmp. You can override this with the -I option at the expense of a small amount of extra data transferred for the unchanged files. So you could do something like:

    rsync -av -I --compare-dest=B_tmp_to_B_path dir_A dir_B_tmp

Note that the --compare-dest argument is a relative path to get from the destination directory (dir_B_tmp in this case) back to the original source directory. rsync won't touch the source directory, but it will use the files within it as masters for the new copy. This will result in dir_B_tmp being a complete copy of dir_A, using the original dir_B as a master whenever possible.

This all assumes that you're doing remote copies where the rsync protocol makes sense (you don't show a remote system in your example). If you're just making local copies, it would be better to use -W instead, but you'd still need -I if you wanted files matching those in the current dir_B to be transferred. Then again, for a local setup where you want to update the whole directory, a simple copy may be as effective as rsync, since you're not benefiting from the selection of a subset of files.

-- David

David Bolen, FitLinxx, Inc.
E-mail: [EMAIL PROTECTED]   Phone: (203) 708-5192   Fax: (203) 316-5150
860 Canal Street, Stamford, CT 06902
Re: Can rsync update files in place?
On Tue 16 Apr 2002, Martin Pool wrote:

> On 16 Apr 2002, Paul Slootman [EMAIL PROTECTED] wrote:
>
> > Is there a way to get rsync to not create a new file while transferring and then rename it, but instead to update the existing file in place, i.e. simply write those blocks that have been updated and leave the rest alone?
>
> That would be pretty hard, though not impossible. I suspect it would be roughly as hard to put it into rsync as to write a new program from scratch, depending on what features you need.

That's what I've been thinking :-)

> NetApp machines are kind of a special case because you can't (?) run an rsync program directly on the server. So every block is going across the network in at least one direction.

That's not really a worry, as my goal here is to minimize the number of writes, not the network traffic.

> Perhaps Oracle's native replication features would be better?

Hmm, in theory perhaps, but not for what I want, which is to regularly restore the database so that subsequent application tests are performed on a known-good data set. This means running a number of apps against the database, shutting down the database, restoring the files from backup, rinse, repeat. And this currently plays havoc with the snapshot space...

Thanks,
Paul Slootman
rsync under Mac OS X - advice/alternatives needed
Hi everyone. I want to synchronise (one way) several shared folders between Mac OS X 10.1.3 file servers, and am interested to hear whether anyone has used rsync to do this, or knows whether the version of rsync supplied with Mac OS X is aware of and properly supports Mac OS X and HFS+ volumes, permissions, and ownership information; and if not, whether there is an alternative that is supported (Synchronize Pro! X does not work properly in this regard).

Cheers,
Mark

(I've only subscribed to this list last night, and this is my first post, so if this has already been answered, please don't flame me :-)

-- 
Mark Hodge, Systems Engineer
Information Technology Services, University of Otago
P O Box 56, Dunedin, New Zealand
Phone: +64 3 479 8598 / 021 614 134   Fax: +64 3 479 5080
Email: [EMAIL PROTECTED]
Re: -P option fails
On 16 Apr 2002, Louis-David Mitterrand [EMAIL PROTECTED] wrote:

> Hi, it seems rsync no longer resumes partial transfers after a SIGINT (CTRL-C). I tried the following:

That's a bug in 2.5.4; please upgrade.

-- 
Martin
rsync using krb5 vs. ssh
rsync 2.5.5 (and previous versions) does not operate reliably for me when I use MIT's Kerberos5-1.2.4 rsh, but it does when I use ssh. Can anyone think of a reason? Here's the last error message I got trying it:

    unexpected tag 117
    rsync error: error in rsync protocol data stream (code 12) at io.c(298)
    Exit 12

Just a wild guess: could it be that the read and write system calls in rsync aren't handling EINTR properly? This is one of the few things I can think of that would differ between various transports.
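[Editor's note: the EINTR guess is easy to illustrate. In C (and in Python before PEP 475) a signal delivered mid-call makes read() fail with errno EINTR, and a transport that uses signals internally could expose callers that don't retry. The classic defensive idiom, sketched below, is illustrative and is not rsync's actual I/O layer.]

```python
import errno
import os

def robust_read(fd, nbytes):
    """Retry a read that is interrupted by a signal, rather than
    treating EINTR as a fatal protocol error."""
    while True:
        try:
            return os.read(fd, nbytes)
        except OSError as e:
            if e.errno != errno.EINTR:
                raise          # a real error: propagate it
            # interrupted before any data arrived; just retry
```

(Python 3.5 and later already retry EINTR automatically inside os.read, so the except branch exists here only to show the pattern a C caller must implement by hand.)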
Re: Can rsync update files in place?
On Tue, Apr 16, 2002 at 09:43:12PM +0200, Paul Slootman wrote:

> On Tue 16 Apr 2002, Martin Pool wrote:
>
> > On 16 Apr 2002, Paul Slootman [EMAIL PROTECTED] wrote:
> >
> > > Is there a way to get rsync to not create a new file while transferring and then rename it, but instead to update the existing file in place, i.e. simply write those blocks that have been updated and leave the rest alone?
> >
> > That would be pretty hard, though not impossible. I suspect it would be roughly as hard to put it into rsync as to write a new program from scratch, depending on what features you need.
>
> That's what I've been thinking :-)

If someone wanted to experiment with this kind of thing, pysync would be an ideal prototyping tool. It's a Python implementation of rsync that I wrote for the express purpose of being an easy way to demonstrate and experiment with the algorithm. It includes both the rsync and xdelta algorithms, and is about 500 lines in total, but half of that is comments.

http://freshmeat.net/projects/pysync/

I am currently working on including a swig'ed wrapper for librsync, which should mean you could use it for real-world (Python) applications. The plan is to have this ready by the end of this week.

-- 
ABO: finger [EMAIL PROTECTED] for more info, including pgp key
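[Editor's note: the heart of what pysync demonstrates is the weak rolling checksum that makes rsync's block matching cheap. A simplified sketch follows — the real algorithm pairs this weak sum with a strong (MD4) checksum to confirm matches, and the function names here are illustrative.]

```python
def weak_checksum(block):
    """rsync-style weak checksum over one block: a low half that is
    a plain byte sum, and a high half weighted by position."""
    a = sum(block) & 0xFFFF
    b = sum((len(block) - i) * c for i, c in enumerate(block)) & 0xFFFF
    return (b << 16) | a

def roll(checksum, old_byte, new_byte, blocksize):
    """Slide the window one byte to the right in O(1), instead of
    rescanning the whole block -- the property that makes matching
    at every byte offset affordable."""
    a = checksum & 0xFFFF
    b = (checksum >> 16) & 0xFFFF
    a = (a - old_byte + new_byte) & 0xFFFF
    b = (b - blocksize * old_byte + a) & 0xFFFF
    return (b << 16) | a
```

Rolling the checksum across a file and probing a hash table of the destination's block checksums at each offset is exactly the cheap first pass that decides which data needs to be sent at all.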
Re: Initial debug of client - Need command line help
[EMAIL PROTECTED] wrote:

> Experts, feel free to jump on me for being wrong, but the forking, as I understand it, is integral to the way rsync currently does its work, and I don't think you can really debug it in the classical sense. Anybody found a way around this one?

A multi-process debugger can handle forked processes, but it can be tricky. The big problem with the one that I have is that I do not know how to switch the focus from one process to another while the session the debugger is focused on is running (not at a break point).

I have temporarily put a special debugger break point in the source after the child process is started. If I did not do that, I could lose control of the parent process. It is easy to lose control of a multi-process debugging session.

I am guessing that the initial fork() of RSH is to allow the user to initiate an rsync client session to run in the background without tying up the main shell session. I am just starting to learn the logic flow and command line arguments for rsync.

-John
[EMAIL PROTECTED]
Personal Opinion Only
Re: Initial debug of client - Need command line help
On 16 Apr 2002, John E. Malmberg [EMAIL PROTECTED] wrote:

> [EMAIL PROTECTED] wrote:
>
> > Experts, feel free to jump on me for being wrong, but the forking, as I understand it, is integral to the way rsync currently does its work, and I don't think you can really debug it in the classical sense. Anybody found a way around this one?
>
> A multi-process debugger can handle forked processes, but it can be tricky.

Personally, I just run two or three debuggers (gdb), one on each process, in different windows or virtual terminals. Since rsync only uses Unix IO to synchronize its various moving parts, this works quite nicely. gdb is supposed to be able to catch a child just after it forks, but a few months ago that didn't work. Perhaps it does now. Failing that, I would just put in a sleep(60) at the start of each child.

> I am guessing that the initial fork() of RSH is to allow the user to initiate an rsync client session to run in the background without tying up the main shell session.

OK, really briefly: there are three rsync processes. You can think of them as threads, as they might be in a Java or Windows program. Instead, they're actually separate processes, both because that's much more portable on Unix, and because in some ways it is cleaner. They are:

  generator -- traverses the destination directory, calculating checksums and timestamps of existing files, which it sends to the
  sender    -- traverses the source directory, consumes the checksums and uses them to calculate deltas, which it sends to the
  receiver  -- traverses the destination directory, applies deltas and produces the new files

So obviously the generator and receiver run on the destination machine, and the sender on the source, with a socket in between. The best way to think of this is as two of the producer-consumer patterns from a threading textbook hooked up back to back. The other complicating factor is that we also need to pass error messages around.

Obviously there are some tasks, like deleting files, that I have not mentioned, but they go into fairly obvious places. We also fork some additional processes, particularly, as you noticed, rsh (or ssh, or whatever). This is forked because Unix doesn't have a spawn() system call -- to start a slave process you fork, fiddle with file descriptors, and then exec(). The daemon forks itself into the background as a convenience; the regular program does not do this.

You might enjoy reading Linux Application Development by Johnson & Troan, which has a pretty good overview of how all that stuff works in general. The relevant chapters are not particularly Linux-specific.

-- 
Martin
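[Editor's note: the generator → sender → receiver data flow described above can be mimicked in a few lines. This toy sketch uses illustrative names, zlib.adler32 standing in for rsync's weak+strong checksum pair, a tiny block size, and plain function calls instead of processes and sockets; it also omits the rolling-window optimization, so it is the shape of the pipeline, not the real algorithm.]

```python
import zlib

BLOCK = 4  # absurdly small block size, for illustration only

def generator(old_data):
    """Destination side: checksum each full block of the existing file."""
    return {zlib.adler32(old_data[i:i + BLOCK]): i
            for i in range(0, len(old_data) - BLOCK + 1, BLOCK)}

def sender(new_data, sums):
    """Source side: emit ('copy', offset) where the destination already
    has a matching block, else a one-byte literal."""
    delta, i = [], 0
    while i < len(new_data):
        block = new_data[i:i + BLOCK]
        j = sums.get(zlib.adler32(block))
        if j is not None and len(block) == BLOCK:
            delta.append(("copy", j))   # destination reuses its own data
            i += BLOCK
        else:
            delta.append(("lit", new_data[i:i + 1]))
            i += 1
    return delta

def receiver(old_data, delta):
    """Destination side: rebuild the new file from old blocks plus
    the literal bytes carried in the delta."""
    out = bytearray()
    for op, arg in delta:
        out += old_data[arg:arg + BLOCK] if op == "copy" else arg
    return bytes(out)
```

Only the `delta` list would travel over the socket; everything the destination already holds is referenced by offset rather than resent, which is the whole point of the protocol.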