Re: Pushing hard-linked backups
Matt McCutchen wrote:
> Eric, Sorry for the slow response.

no problem. You're the one who's doing me a favor, so take the time you need.

> Yes, encryption done with --source-filter would work essentially that way.
> The downside compared to something like duplicity is that the backup host
> gets to see everything except the file data (i.e., file names, sizes,
> times, and attributes) and, unless you take additional precautions, can
> manipulate the stored data by mixing and matching different encrypted
> versions of files.

yeah. I keep forgetting that. I think that backup should always be push so
that all of these confidentiality issues can be handled appropriately.

>> rsync/snapshot to trusted host and backing up encfs image of backup
>> directory may be a better solution
>
> Well, if you have the Linux intermediary that would be necessary for EncFS,
> you might prefer duplicity instead.

I have come to appreciate the value of walking a filesystem and pulling a
file out when it needs replacing. Anything that requires me to create a
god-awful command line to extract the file is not fun. Remember, I've got
bum hands, I use speech recognition, and navigating with a GUI is easier
than typing, especially when I can say "mouseclick" instead of something
completely unpronounceable.

---eric

--
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
Re: Rsync and dispersed storage [Re: Pushing hard-linked backups]
Matt McCutchen wrote:
> On Fri, 2007-12-28 at 00:15 -0500, Eric S. Johansson wrote:
>> it is possible, I've seen it done, but I can't find the library/tool
>> anymore.
>
> I'm curious: what was the nature of this tool (if you remember)? A modified
> version of rsync? A dispersed storage service with an rsync daemon
> interface? A virtual filesystem? Did delta transfers work properly?

Unfortunately, my memory is somewhat foggy on the finer points of this
toolkit. What I remember is that it split up the backup image into N parts
and you only needed to recover M of them in order to reconstruct your data
set. I seem to remember it used something analogous to rsync to minimize
unnecessary duplication, but that's only a faint memory. I believe it was
orders of magnitude better than what we have today, and it's a real shame it
never caught on.

>> again with a pre and post processing capability, we could add that
>> functionality in without modding the baseline
>
> It's not clear to me what kind of pre- or postprocessing capability you are
> thinking of that would make dispersed storage with rsync practical. A
> per-file approach with --source-filter and --dest-filter would have the
> same disadvantages as per-file encryption, and by the time you arrange for
> the retrieval filter to combine multiple files, I don't see what you gain
> by having rsync in the picture. Using rsync with a virtual filesystem that
> implements the dispersed storage is more natural.

What seemed like a bright idea during composition appears more like a rusty
clunker after one hits the send key. :-) The original, albeit poorly thought
out, idea was to replicate the directory structure on all destination
machines, then take each file, split it into N parts, and replicate each
part into the right portion of the filesystem hierarchy on the remote
machines. Yes, not the best idea I've ever had. I think the virtual
filesystem does make a lot more sense.
How would you implement caching so that you only need to scan the local
filesystem once instead of every time you compare it against the remote
filesystem?
--no-tweak-hlinked [Re: Pushing hard-linked backups]
On Sat, 2007-12-29 at 21:35 -0500, Matt McCutchen wrote:
> 1. [...] However, now that the module shares files with snapshots, the
> snapshots could become corrupted if the shared files' attributes are
> tweaked via the module. To avoid this, use the --no-tweak-hlinked option
> implemented by my patch available at:
>
> https://bugzilla.samba.org/show_bug.cgi?id=4561#c1
>
> 2. [...] Also, if a push fails and has to be retried, you are at risk of
> corrupting snapshots as in #1. To avoid this, use --no-tweak-hlinked or, if
> you care less about the timeliness of the snapshot, --ignore-existing as
> recommended by the rsync man page.

Here is a note about the status of --no-tweak-hlinked for whoever it may
concern. Wayne said at https://bugzilla.samba.org/show_bug.cgi?id=4768#c5
that he does not plan to add this option to the official rsync. However, I
have found this option very useful in a number of backup scenarios, so I
plan to resume maintaining my own custom version of rsync, which will
support --no-tweak-hlinked and anything else I think is useful. This version
of rsync will be available at:

http://mattmccutchen.net/rsync/

I will not advertise it on the main rsync list any further except when it
solves someone's problem.

Matt
Re: Pushing hard-linked backups
On Tue, 2007-12-25 at 11:18 -0500, Eric S. Johansson wrote:
> so matt, lets go for the rsnapshot push to a benign host for now.

OK. I recommend that you use an rsync daemon on the destination host because
that approach keeps all of the snapshot-management logic in one place and
allows you to reconfigure the daemon without touching the client. The daemon
should be accessed over ssh for security; a single-use daemon invoked over
ssh is the most convenient way to do this.

You'll need to make a directory on the destination host to hold the whole
setup; I'll call it /backup . The client (the Windows laptop) will push its
data to a write-only daemon module (which I'll call "push") that is mapped
to a directory under /backup , say /backup/incoming . The "post-xfer exec"
command for "push" will check whether the run was successful and, if so,
invoke rsnapshot to store the contents of /backup/incoming in a snapshot. I
recommend something like /backup/snapshots for the snapshot root. You can
retrieve backups by plain rsync over ssh, or if you want the setup nicely
encapsulated, you can configure a separate read-only module mapped to
/backup/snapshots for this purpose.

You will need configuration files for the rsync daemon and rsnapshot in
/backup . See the rsyncd.conf(5) man page for information about writing the
daemon configuration file. Since the daemon will be invoked as needed over
ssh, you should not start it manually on the destination host, and the port
and authentication settings are irrelevant. Consider setting "max
connections" to 1. The most important bit is the "post-xfer exec" command,
which you should point to a script that I'll call /backup/kick-rsnapshot .
The script should look like this, where "interval" is the name of your
lowest rsnapshot interval:

    #!/bin/bash
    if [ "$RSYNC_EXIT_STATUS" == "0" ]; then
        rsnapshot -c /backup/rsnapshot.conf interval
    fi

See the rsnapshot(5) man page for information about writing rsnapshot.conf.
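[Editorial note: for concreteness, a minimal rsyncd.conf along the lines
described above might look as follows. The module name and paths are the
ones used in this example; the exact parameter set is a sketch to check
against rsyncd.conf(5), not a tested configuration.]

```conf
# /backup/rsyncd.conf -- sketch only; verify parameters against rsyncd.conf(5)
use chroot = false
max connections = 1

[push]
    path = /backup/incoming
    read only = false
    # make the module write-only so clients cannot read back other data
    write only = true
    # fire the snapshot rotation after each transfer
    post-xfer exec = /backup/kick-rsnapshot
```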
The rsnapshot configuration should list /backup/incoming as the one and only
backup point and /backup/snapshots as the snapshot root. Be sure to enable
"link_dest"; that was the whole point!

Getting rsync on the laptop to access the daemon takes some fancy syntax.
You have to tell it explicitly to use ssh and run the remote rsync process
in /backup so that it will look for rsyncd.conf there:

    rsync -e ssh --rsync-path='cd /backup && rsync' \
        src/ host::push

That's the basic idea. If you run into trouble with this setup, contact me
on- or off-list for additional help.

This simple setup has the disadvantage of wasting disk space by storing an
extra complete copy of the source in /backup/incoming . Here are two
approaches to reduce the space overhead:

1. In the rsnapshot configuration, add --link-dest=/backup/incoming so that
/backup/snapshots/interval.0 ends up being completely hard-linked with
/backup/incoming . Then the overhead is the same as that of an extra
snapshot in which no files changed. However, now that the module shares
files with snapshots, the snapshots could become corrupted if the shared
files' attributes are tweaked via the module. To avoid this, use the
--no-tweak-hlinked option implemented by my patch available at:

https://bugzilla.samba.org/show_bug.cgi?id=4561#c1

2. Use rsnapshot's sync_first mode, but in place of running "rsnapshot
sync", move /backup/incoming to /backup/snapshots/.sync . This is fast and
completely eliminates the space overhead, but then the client has to specify
a --link-dest option so that files in /backup/incoming can be hard-linked
from /backup/snapshots/interval.0 . The daemon won't allow a basis dir path
that looks like it goes outside the module, so you'll have to use a symlink
to /backup/snapshots/interval.0 inside the module. This is a bit ugly. Also,
if a push fails and has to be retried, you are at risk of corrupting
snapshots as in #1. To avoid this, use --no-tweak-hlinked or, if you care
less about the timeliness of the snapshot, --ignore-existing as recommended
by the rsync man page.

Matt
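[Editorial note: a matching rsnapshot.conf might be sketched as below. Paths
and the interval name follow the example in this thread; rsnapshot requires
literal tabs between fields, and both the parameter spellings and the
rsync_long_args value should be checked against rsnapshot(5) rather than
taken as a tested recipe.]

```conf
# /backup/rsnapshot.conf -- sketch only; fields must be tab-separated
snapshot_root	/backup/snapshots/
interval	interval	7

# hard-link unchanged files between successive snapshots
link_dest	1

# approach 1 above: also hard-link against the incoming copy
# (assumption: the extra --link-dest is passed via rsync_long_args)
rsync_long_args	--delete --numeric-ids --link-dest=/backup/incoming

# the one and only backup point
backup	/backup/incoming/	./
```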
Re: Odd behavior with --detect-renamed
On Fri, 2007-12-28 at 17:31 +0100, Erik Pettersson wrote:
> I'm trying out the 'detect-renamed' patch, and I've encountered some odd
> behavior. Basically, what I've noticed is that if I move a file into a
> newly created directory (which is what happens if I rename a directory, for
> example), the file isn't detected as renamed. However, if I manually create
> the new directory on the target side, the file is correctly detected as a
> rename and isn't transferred over the network.
>
> Here I've moved 'file' into a newly created directory 'dir2', which isn't
> available on the target side.
>
> > $ find src
> > src
> > src/dir1
> > src/dir1/dir2
> > src/dir1/dir2/file
> >
> > $ find dst
> > dst
> > dst/dir1
> > dst/dir1/file
> >
> > $ rsync -avv --detect-renamed -e ssh src/dir1 localhost:dst
> > [...]
> > total: matches=0 hash_hits=0 false_alarms=0 data=4544512
>
> The file isn't detected as a rename.

I see what the problem is. When the receiving rsync considers dst/dir1/file
for a rename, dst/dir1/dir2 does not yet exist, so rsync cannot create the
partial dir for dst/dir1/dir2 in order to stage the rename. One workaround
would be to do a preliminary run with --include='*/' --exclude='*' to create
all the necessary directories in the destination before doing the main run.

One possible fix would be to call create_directory_path to ensure that the
destination directory that is the target of the rename exists before
handling the partial dir. Unfortunately, create_directory_path will give any
created directories default permissions instead of flist ones, which could
be bad. To avoid this problem, rsync could look up each intermediate
directory in the flist and, if it is found, give it 700 permissions for
later fixing (like the incremental recursion code does). This is awkward,
which I believe is a sign that the "reverse" design of pre-scanning the
destination files and looking up each source file in turn may be better.
This design was proposed by Charles Perreault to me off-list and quoted in:

http://lists.samba.org/archive/rsync/2007-November/019067.html

Matt
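[Editorial note: the preliminary-run workaround might look like this with
the paths from Erik's example. The second command assumes an rsync built
with the detect-renamed patch; this is a sketch, not a tested recipe.]

```shell
# pass 1: create the directory skeleton only, transferring no file data
rsync -av --include='*/' --exclude='*' -e ssh src/dir1 localhost:dst

# pass 2: the main run; renames can now be staged inside existing dirs
rsync -avv --detect-renamed -e ssh src/dir1 localhost:dst
```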
Rsync and dispersed storage [Re: Pushing hard-linked backups]
On Fri, 2007-12-28 at 00:15 -0500, Eric S. Johansson wrote:
> it is possible, I've seen it done, but I can't find the library/tool
> anymore.

I'm curious: what was the nature of this tool (if you remember)? A modified
version of rsync? A dispersed storage service with an rsync daemon
interface? A virtual filesystem? Did delta transfers work properly?

> again with a pre and post processing capability, we could add that
> functionality in without modding the baseline

It's not clear to me what kind of pre- or postprocessing capability you are
thinking of that would make dispersed storage with rsync practical. A
per-file approach with --source-filter and --dest-filter would have the same
disadvantages as per-file encryption, and by the time you arrange for the
retrieval filter to combine multiple files, I don't see what you gain by
having rsync in the picture. Using rsync with a virtual filesystem that
implements the dispersed storage is more natural.

Matt
Rsync and dispersed storage [Re: Pushing hard-linked backups]
On Thu, 2007-12-27 at 13:19 -0500, Charles Marcus wrote:
> I'd rather see rsync support something like this (if it is even possible
> or practical):
>
> http://www.cleversafe.org/dispersed-storage/how-it-works

Charles,

IMHO, built-in support for dispersed storage is way beyond the scope of
rsync. Just use rsync to access a filesystem on the virtual block device
provided by Cleversafe. It is true that this won't give you delta transfers.
Neither will, say, Amazon S3. It would be interesting to look into
establishing an interface for storage services to provide checksumming
and/or delta-transfer capabilities to programs like rsync.

Matt
Re: Rsync 3.0.0pre7 released
On Thu, 2007-12-27 at 00:05 +0100, Giuliano Gavazzi wrote:
> Number of files: 407317
> Number of files transferred: 377808
>
> has this mismatch something to do with the first including directories and
> the second not?

Yes. "Number of files" counts every file in the file list (regular files,
directories, symlinks, ...), while "Number of files transferred" counts only
regular files whose data needed to be transferred.

Matt
Re: Pushing hard-linked backups
Eric,

Sorry for the slow response.

On Tue, 2007-12-25 at 11:18 -0500, Eric S. Johansson wrote:
> as for encryption, I think it would be possible (assuming mods to rsync) to
> do rsync encrypted copies. if you assume symmetrical encryption and that
> the key and plaintext is managed by one side, specified by command line
> args, it becomes easier (not easy, only easier :-)
>
> [[ related thought. if rsync had a plugin architecture allowing per file
> transformation (pre and post transfer) one could build encryption in as an
> addon]]

There is an experimental branch "source-filter_dest-filter" of rsync that
supports such transformation:

http://gitweb.samba.org/?p=rsync.git;a=shortlog;h=patch/source-filter_dest-filter

> the idea of the encryption extension is that when a file is ready for block
> by block checking, it is copied (replicating TOP (time, ownership and
> permissions)) and encrypted using the given symmetrical key. this should
> yield an identical file if they are the same. if you get the key wrong,
> tough noogies, you copy your entire dataset.

Yes, encryption done with --source-filter would work essentially that way.
The downside compared to something like duplicity is that the backup host
gets to see everything except the file data (i.e., file names, sizes, times,
and attributes) and, unless you take additional precautions, can manipulate
the stored data by mixing and matching different encrypted versions of files.

> rsync/snapshot to trusted host and backing up encfs image of backup
> directory may be a better solution

Well, if you have the Linux intermediary that would be necessary for EncFS,
you might prefer duplicity instead.

> so matt, lets go for the rsnapshot push to a benign host for now.

OK, I will address this soon...

Matt
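[Editorial note: as an illustration of the scheme Eric describes, a filter
invocation on the experimental branch might look roughly like this. The
openssl incantation is an assumption for illustration only: -nosalt is what
makes the output deterministic (identical plaintext yields identical
ciphertext, so unchanged files can match), at a real cost in cryptographic
strength. A sketch, not a vetted recipe.]

```shell
# hypothetical use of the source-filter_dest-filter branch; the filter
# command must be deterministic or unchanged files will never match
rsync -a \
  --source-filter='openssl enc -aes-256-cbc -nosalt -pass file:/backup/key' \
  --dest-filter='openssl enc -d -aes-256-cbc -nosalt -pass file:/backup/key' \
  src/ backuphost:encrypted-backup/
```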
Re: Migrating Rsync Disk
> This might be an issue for a two-way synchronization tool, but not for
> rsync because rsync is stateless. It decides what needs to be copied based
> on a simple comparison of the source and destination, not by keeping any
> information in the source or destination about what copies may have been
> performed in the past. You can rebuild the first server and the copy will
> still be properly incremental.
>
> Matt

Matt,

Thanks so much for your explanation. I am much more at ease knowing the copy
I have is good and won't need to be done again.

Happy New Year,
p.

--
View this message in context: http://www.nabble.com/Migrating-Rsync-Disk-tp14523326p14540259.html
Sent from the Samba - rsync mailing list archive at Nabble.com.
Re: problems using --ignore-existing and filter rules
On Fri, 2007-12-28 at 13:03 -0500, Douglas Wade Needham wrote:
> The command line used is one like the following, while chroot'ed into the
> sandbox, with the attached filter:
>
> rsync -OavzHn --filter="merge /.rsync/filter.dirs" --ignore-existing / viking:/
>
> I have also confirmed it on the latest versions found in FC6 and CentOS
> 4.5 and 5.0. In this case, I have copied things from under / into a
> directory such as /sandbox/rsync_test, added a /.rsync subdirectory to hold
> my test_rsync script and filter file, and after adding a few extra files
> and creating a /opt2 by renaming /opt, running rsync. In each and every
> case, I find that /.rsync and /opt2 are transferred even if listed in a
> protect rule. And this is true regardless of whether or not the path
> specification starts and/or ends in a slash.

Right. As documented in the man page, the sole effect of a protect rule is
to stop a destination file (or directory) from being deleted if it is
extraneous. To stop a destination file from being updated, use an exclude
rule.

> Now, as to why the protect rule vs. exclude rule is important, I want to
> use the filter.dirs file to protect areas which are not a part of the OS,
> such as application data, home directories and such with this file,

If you mean that you don't want these areas processed at all, use an exclude
rule.

> and then have another file protect things such as configuration files
> which are a part of the OS, and should not be pushed once they exist, but
> should be pushed to a server once the server is up and running with a
> minimal OS load.

The --ignore-existing option makes rsync leave existing files alone
throughout the destination. Rsync does not provide a way to selectively
activate this behavior for some areas of the destination. If you want this
behavior for certain areas, you can use two rsync runs: one with
--ignore-existing for those areas, and one without --ignore-existing with
those areas excluded.

Still, note that --ignore-existing operates at the level of individual
files; there is no way to tell rsync not to add new files to an existing
directory. If you need that, then for the first run, instead of passing
--ignore-existing, you should run a list of the "configuration" areas
through a script on the destination machine that filters out those that
already exist and then pass the resulting list to rsync with --files-from.

> (Now if only rsync offered a way to run commands on the remote server when
> certain files were updated...hehe)

You can accomplish this by using an rsync daemon on the destination with a
"post-xfer exec" script that parses the daemon's log for any relevant
updates and runs any appropriate commands. Or you could use a system
administration tool such as Puppet ( https://reductivelabs.com/trac/puppet )
to run commands on a file update whether the update was done via rsync or
other means.

Matt
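[Editorial note: the two-run approach Matt suggests might look like this,
using the poster's host "viking"; the area paths /etc and /var/www are
purely illustrative assumptions, not from the thread.]

```shell
# run 1: "configuration" areas only, leaving existing destination files
# alone (-R preserves the full source paths on the destination)
rsync -aR --ignore-existing /etc /var/www viking:/

# run 2: everything else, with those same areas excluded
rsync -a --exclude=/etc --exclude=/var/www / viking:/
```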
DO NOT REPLY [Bug 3693] rsync -H should break outdated hard-links to identical files
https://bugzilla.samba.org/show_bug.cgi?id=3693

--- Comment #5 from [EMAIL PROTECTED] 2007-12-29 15:36 CST ---

Wayne, if you consider the breaking of outdated hard links not to be part of
the expected behavior of -H, please add a clarification to this effect to
the man page.

--
Configure bugmail: https://bugzilla.samba.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are the QA contact for the bug, or are watching the QA contact.
Re: app works when copied with tar, not with rsync
On Thu, 2007-12-27 at 17:52 -0500, Brian Poole wrote:
> I'm experiencing a most unusual problem. I am copying a directory from one
> server to another (both Solaris 10/SPARC). No applications are using the
> directory in question on either side while the copy is in progress. After
> the copy successfully completes (no errors or warnings), my user brings up
> the application (mix of binaries, java files, and Apache Tomcat servlets)
> contained in this directory on the new server. Or at least, they try to...
>
> If I use rsync to perform this copy (tested with 2.5.5 and 3.0.0pre7), the
> application logs an unusual Java error (reproduced near the bottom) and
> dies. Surely an application issue, right?
>
> Well, it turns out that if I use GNU tar (and I must use GNU tar as the
> directory has very long file names) to copy the same set of files, the
> application works without issue after being copied...

Hmm. I can think of one potential difference between rsync and tar results
that you haven't mentioned: the order of entries within each directory. You
can see the order with "ls -U". Some filesystems, such as ext2, tend to
leave directory entries in the order they were originally created. Others,
such as reiserfs, seem to automatically sort the entries in such a way that
two directories containing the same names always read in the same order. On
a filesystem of the first kind, tar will tend to give you a destination with
the same entry order as the source, while rsync sorts the entries to some
extent.

It is conceivable that your application needs the directory entries to be in
a certain order to work correctly. (Of course, such a dependency constitutes
a bug.) The way I can see this happening for a Java application is if it
uses a classpath entry of the form dir/* (to match any number of jars in the
directory) and multiple jars provide identically named classes. In my tests
with Sun's Java 1.6.0_02 on Linux, the application gets the class from the
jar that comes first in the directory; if rsync reorders the directory, the
application may get the wrong class. If this is indeed the problem, there
isn't much you can do except report the bug and use tar in the meantime.

Matt
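[Editorial note: one quick way to test Matt's hypothesis is to compare the
raw directory order on the two servers; the path below is a hypothetical
stand-in for the application's jar directory.]

```shell
# ls -U lists entries unsorted, i.e. in on-disk order
ssh old-server 'ls -U /path/to/app/lib' > order-old.txt
ls -U /path/to/app/lib > order-new.txt
diff order-old.txt order-new.txt && echo "same order"
```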
DO NOT REPLY [Bug 5166] -o -g options don't work right or I've misread the man pages
https://bugzilla.samba.org/show_bug.cgi?id=5166

[EMAIL PROTECTED] changed:

           What      |Removed  |Added
           --------------------------------
           Status    |ASSIGNED |RESOLVED
           Resolution|         |FIXED

--- Comment #2 from [EMAIL PROTECTED] 2007-12-29 15:06 CST ---

My apologies for taking your time. It was the "use chroot" option. I was
misreading what it did in the man pages and assuming the default was what I
wanted. Thank you for the help; you can close this bug as solved.
DO NOT REPLY [Bug 5166] -o -g options don't work right or I've misread the man pages
https://bugzilla.samba.org/show_bug.cgi?id=5166

[EMAIL PROTECTED] changed:

           What  |Removed |Added
           ----------------------------
           Status|NEW     |ASSIGNED

--- Comment #1 from [EMAIL PROTECTED] 2007-12-29 12:05 CST ---

The -o and -g options fall back to --numeric-ids if name conversion is not
possible. Is an rsync daemon involved in the transfer? If so, see the
comments about numeric-ids in the "use chroot" section of the rsyncd.conf
manpage. If not, run rsync under strace to see why the password/group
lookups are failing.
DO NOT REPLY [Bug 5162] using iconv with pre7 chops last special character in filenames
https://bugzilla.samba.org/show_bug.cgi?id=5162

[EMAIL PROTECTED] changed:

           What  |Removed |Added
           ----------------------------
           Status|NEW     |ASSIGNED

--- Comment #2 from [EMAIL PROTECTED] 2007-12-29 12:02 CST ---

Thanks for the configure patch. I added a slightly changed version to the
latest dev version (which you can download via git, rsync, or the latest
nightly tar file).

To help me debug the problem, can you create a tar file with some filenames
that don't convert correctly? And also specify a name for the source
character set (so that I can use an --iconv=SOURCE,utf8 spec for the test)
and the command line you used.
Re: Migrating Rsync Disk
On Fri, 2007-12-28 at 09:48 -0800, pichi wrote:
> OK thanks for your reply, but it leads me to think of how rsync works. I
> don't need a huge answer, but how will server A (target) know that it has
> rsync(ed) with server B if this is the first time they have talked? I mean
> when I reinstall the OS and rsync, there are no more references on the
> target server, and I would hate to have to do another full copy because it
> took days.

This might be an issue for a two-way synchronization tool, but not for rsync
because rsync is stateless. It decides what needs to be copied based on a
simple comparison of the source and destination, not by keeping any
information in the source or destination about what copies may have been
performed in the past. You can rebuild the first server and the copy will
still be properly incremental.

Matt
DO NOT REPLY [Bug 5167] The size of the transferred part of a file exceeds 2 Gb.
https://bugzilla.samba.org/show_bug.cgi?id=5167

--- Comment #1 from [EMAIL PROTECTED] 2007-12-29 07:39 CST ---

Please show the output of:

    rsync --version

It should show something like:

    Capabilities: 64-bit files, ...

If it doesn't, your version of rsync was simply not compiled with support
for large files.
DO NOT REPLY [Bug 5167] New: The size of the transferred part of a file exceeds 2 Gb.
https://bugzilla.samba.org/show_bug.cgi?id=5167

           Summary: The size of the transferred part of a file exceeds 2 Gb.
           Product: rsync
           Version: 2.6.9
          Platform: x86
        OS/Version: Linux
            Status: NEW
          Severity: normal
          Priority: P3
         Component: core
        AssignedTo: [EMAIL PROTECTED]
        ReportedBy: [EMAIL PROTECTED]
         QAContact: [EMAIL PROTECTED]

rsync -az --timeout=90 --partial --port=873 --password-file=pasword.tmp
[EMAIL PROTECTED]::'mirv/vol0//17 - Check day.vob'
'/media/video/recordings/vol0//17 - Check day.vob'

10 minutes later I pressed Ctrl-C:

rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(276)
[generator=2.6.9]

$ ls -l 17*
-rwxrwxrwx 1 lary lary 2124622946 Dec 25 12:55 17 - Check day.vob

The 2 GB limit is exceeded. I then reran the same command:

rsync -az --timeout=90 --partial --port=873 --password-file=pasword.tmp
[EMAIL PROTECTED]::'mirv/vol0//17 - Check day.vob'
'/media/video/recordings/vol0//17 - Check day.vob'

I watched the rsync transfer with iptraf. After 335 KB, the data transfer
stopped and error messages appeared.

On the server:

Dec 25 07:15:03 mirv-99 rsyncd[26952]: rsync: writefd_unbuffered failed to
write 4 bytes [sender]: Broken pipe (32)
Dec 25 07:15:03 mirv-99 rsyncd[26952]: rsync error: error in rsync protocol
data stream (code 12) at io.c(1122) [sender=2.6.9]

On the client:

io timeout after 120 seconds -- exiting
rsync error: timeout in data send/receive (code 30) at io.c(165)
[receiver=2.6.9]
rsync: connection unexpectedly closed (65 bytes received so far) [generator]
rsync error: error in rsync protocol data stream (code 12) at io.c(453)
[generator=2.6.9]

I have repeated the experiment 3 times. :-(