Re: rsync to fat32 on different platforms
Only one additional remark so far: the problem seems to be connected to Windows XP (tested with SP3 so far). On Vista the same task with the same hard drive runs smoothly, while going back to an XP machine shows the same slow behaviour again.

Andi

On 12.01.2009, at 02:34, Daniel wrote:

Hi Andreas, I had that kind of experience (version 2.6.9) a couple of weeks ago. When I restart the sync job, it's OK. I don't know why either. Have you made any progress on this? -- Sincerely yours, Daniel Li

2009-01-12 === 2009-01-09 22:05:59, written in your letter: ===

Ok, thanks for the inputs. The problem continues, though. I'm using the current cwRsync version now (which includes rsync 3.0.5). The detailed output shows that the receiving of the file names works perfectly, but when the generator starts, things get very slow. It somehow takes unacceptably long for these tasks:

[generator] make_file(x/y.pdf,*,2)

The files in question are sometimes quite large (200-300 MB). I also tried the parameter --size-only, but that did not help either. However: if I cancel the job and run it a second time, these make_file tasks are executed at normal speed for the files that have already been checked, until it reaches the files that were not covered during the first (incomplete) run. And: when I attach the drive to a Darwin PC and sync it there, then go back to the Windows machine and do a sync again, the slow checks begin anew. Can you give more hints based on this information?

Andreas

On 09.01.2009, at 02:40, Matt McCutchen wrote:

On Thu, 2009-01-08 at 11:38 +0100, Andreas Nef wrote: I have a dozen external USB disks which should be kept in sync with a central rsync server. As I can't predict which OS they will be used with, they are formatted with FAT32. If I use them with either Linux (Ubuntu) or OS X, rsync works without problems.
However, when I try the syncing with the HD attached to a Windows PC, the rsync process (using Cygwin) takes close to 40 minutes just to check what needs to be updated. Does anybody know what could be the cause of this strange behaviour? Is it something with permissions (the parameters I used for the initial creation of the copy were -rtluvz --modify-window=1)? Or are there compatibility issues with rsync/Cygwin?

Cygwin might just be slow, or there might genuinely be something strange going on. Increase the verbosity to -vvv to find out more about what is taking so long. Also, you may wish to use rsync >= 3.0.0 on the Windows machine to get the speed benefit of incremental recursion, if you aren't already doing so.

-- Matt

-- Please use reply-all for most replies to avoid omitting the mailing list. To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
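Matt's suggestion can be scripted for repeated test runs. This is a hedged sketch with hypothetical paths (the `/cygdrive/e/` drive letter and `server::backup` destination are assumptions, and `build_diag_cmd` is a made-up helper name); it keeps the original `-rtluvz --modify-window=1` options and adds `-vvv` so the generator's per-file decisions become visible:

```shell
#!/bin/sh
# Hypothetical helper: assemble the diagnostic rsync invocation. It only
# prints the command, so the option set can be inspected before running it.
build_diag_cmd() {
    src="$1"    # e.g. /cygdrive/e/ for the USB disk under cwRsync (assumption)
    dest="$2"   # e.g. a module on the central rsync server (assumption)
    echo "rsync -rtluvz --modify-window=1 -vvv --stats $src $dest"
}

build_diag_cmd /cygdrive/e/ server::backup
```

The `--modify-window=1` matters on FAT32 because that filesystem stores modification times with 2-second resolution, so exact mtime comparison against a Unix filesystem spuriously flags files as changed.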
DO NOT REPLY [Bug 6025] New: 0 files to consider should not return code 23
https://bugzilla.samba.org/show_bug.cgi?id=6025

Summary: 0 files to consider should not return code 23
Product: rsync
Version: 2.6.9
Platform: x86
URL: http://www.paguito.com
OS/Version: Linux
Status: NEW
Severity: major
Priority: P3
Component: core
AssignedTo: way...@samba.org
ReportedBy: paban...@yahoo.com
QAContact: rsync...@samba.org

When rsync is asked to download files and the result is correctly "0 files to consider", either because the directory just happens to be empty at the moment or because the files matching the pattern are legitimately not present in the directory, it returns code 23. This behavior is incorrect: code 23 means "partial transfer due to error". In such a case there was correctly nothing to transfer, so the program should exit gracefully with code 0 (success), since it had nothing to do and therefore did nothing. Returning code 23 is misleading for programmers using rsync inside a shell script, as it leads them to believe that there were actual files to download but that rsync was for some reason unable to download them.

In my case, as an example, I am downloading Apache log files from a test server; these files are sometimes rotated automatically by cron or sometimes deleted manually. When I have rsync download all the log files in the usual Apache log directory /var/log/httpd, and the directory just happens to be empty:

rsync -qPt -e ssh ro...@mysite.com:/var/log/httpd/access_log.* /root/myoldLogs/

I get:

rsync: link_stat /var/log/httpd/access_log.* failed: No such file or directory (2)
rsync error: some files could not be transferred (code 23) at main.c(1385)

IMO, "some files could not be transferred" is incorrect, as there were no files to transfer. Note that removing the --verbose option or adding the --quiet option did not solve the problem.
I have seen some people work around this by using the touch command to create a single empty file that matches the pattern given to rsync, so that "0 files to consider" becomes "1 files to consider" and rsync returns code 0 (success). Thank you. Pablo Angel Rendon

-- Configure bugmail: https://bugzilla.samba.org/userprefs.cgi?tab=email --- You are receiving this mail because: --- You are the QA contact for the bug, or are watching the QA contact.
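Besides the touch workaround, a script can also tolerate this exit code directly. This is a hedged sketch, not a change to rsync itself: `rsync_tolerant` is a hypothetical wrapper name, and it downgrades exit code 23 to 0 only when the output shows that the problem was an unmatched source pattern, using the "No such file or directory" message quoted above:

```shell
#!/bin/sh
# Hypothetical wrapper: run the given command (normally rsync), capture its
# combined output, and treat "pattern matched nothing" as success.
rsync_tolerant() {
    out=$("$@" 2>&1) && rc=0 || rc=$?
    if [ "$rc" -eq 23 ] && printf '%s\n' "$out" | grep -q 'No such file or directory'; then
        rc=0    # nothing matched the source pattern: treat as "nothing to do"
    fi
    printf '%s\n' "$out"
    return "$rc"
}
```

Usage would be, e.g., `rsync_tolerant rsync -qPt -e ssh 'ro...@mysite.com:/var/log/httpd/access_log.*' /root/myoldLogs/`. One caveat: a run that transferred some files but also hit a genuine "No such file" error would be masked too, so this is only safe when the source is a single pattern.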
Rsync crash
Hello, I'm using an rsync server on a Linux package repository (deb packages) to sync all mirrors. The mirrors sync every 10 minutes. Most of the time there is nothing to sync except a timestamp file, so the sync is pretty fast. Anyway, after it has been working well for some time (between 2 and 24 hours), the rsync daemon crashes. It is still running but seems to retry something infinitely. The rsync log doesn't output anything before the crash (just the log of the mirror syncs). I straced the rsync process and got the following output:

select(6, [4 5], NULL, NULL, NULL) = 1 (in [5])
accept(5, {sa_family=AF_INET, sin_port=htons(37870), sin_addr=inet_addr()}, [16]) = 3
rt_sigaction(SIGCHLD, {0x80768c0, [], SA_NOCLDSTOP}, NULL, 8) = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb7e196f8) = 7117
close(3) = 0
select(6, [4 5], NULL, NULL, NULL) = 1 (in [4])
accept(4, {sa_family=AF_INET6, sin6_port=htons(55783), inet_pton(AF_INET6, xxx, sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 3
rt_sigaction(SIGCHLD, {0x80768c0, [], SA_NOCLDSTOP}, NULL, 8) = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb7e196f8) = 7123
close(3) = 0
select(6, [4 5], NULL, NULL, NULL) = 2 (in [4 5])
select(6, [4 5], NULL, NULL, NULL) = 2 (in [4 5])
select(6, [4 5], NULL, NULL, NULL) = 2 (in [4 5])

The last line (the select() call) then repeats infinitely and the rsync daemon is no longer usable (until I restart it).
I paste my rsync configuration:

motd file = /etc/rsyncd.motd
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock
uid = nobody
gid = nobody
use chroot = yes
timeout = 5
max connections = 15
syslog facility = local5

[xx]
path = /srv/web/d
comment = xx
read only = yes
list = yes

Any help will be appreciated :) Cheers, Maxence

-- Maxence DUNNEWIND Contact: maxe...@dunnewind.net Site: http://www.dunnewind.net 02 23 20 35 36 06 32 39 39 93
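Until the underlying bug is fixed, a cron job can detect the stuck daemon and restart it. A hedged sketch (`probe_daemon` is a hypothetical helper name): note that a hung daemon still accepts TCP connections, so the probe has to rely on a short client-side timeout rather than on connect() failing, and the probe command is passed in so it can be adjusted:

```shell
#!/bin/sh
# Hypothetical helper: run the given probe command (normally something like
# "rsync --timeout=10 localhost::", which lists the daemon's modules) and
# report whether the daemon answered.
probe_daemon() {
    if "$@" >/dev/null 2>&1; then
        echo alive
    else
        echo hung-or-down
    fi
}
```

From cron this might look like `probe_daemon rsync --timeout=10 localhost:: | grep -q alive || /etc/init.d/rsync restart` (the init script path is an assumption; adjust for the local system).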
DO NOT REPLY [Bug 6025] 0 files to consider should not return code 23
https://bugzilla.samba.org/show_bug.cgi?id=6025

m...@mattmccutchen.net changed:

What        | Removed | Added
Status      | NEW     | RESOLVED
Resolution  |         | WONTFIX

--- Comment #1 from m...@mattmccutchen.net 2009-01-12 18:49 CST ---

Rsync is following traditional shell behavior, under which a wildcard pattern that matches nothing is left unexpanded. (Actually, in your case, rsync relies on the remote shell to handle the wildcard, but rsync emulates the same behavior when --protect-args is on.) Thus, rsync tries to copy the file /var/log/httpd/access_log.* with an asterisk in the name, which does not exist, hence the code 23. You'll get the same result with cp. This is reasonable because an unmatched pattern is often a mistake rather than a genuine attempt to match zero files.

I suggest that you specify the containing directory as the source and use filters to exclude all the files except the ones you want:

rsync -qPtdO -e ssh --include='access_log.*' --exclude='*' ro...@mysite.com:/var/log/httpd/ /root/myoldLogs/

This will successfully do nothing if there are no matching files. If you don't like including the containing directory in the file list, you could do this instead:

rsync -qPt -e ssh --rsync-path='rsync --files-from=<(find /var/log/httpd/ -maxdepth 1 -name "access_log.*" -printf "%P\n")' ro...@mysite.com:/var/log/httpd/ /root/myoldLogs/
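A third workaround in the same spirit, sketched here as an assumption rather than anything rsync provides: test whether the remote glob expands to anything before invoking rsync at all, so an empty log directory never reaches rsync and the script sees exit 0. `have_matches` is a hypothetical helper; the listing command is a parameter, and in the original setup it would be something like `ssh ro...@mysite.com 'ls /var/log/httpd/access_log.* 2>/dev/null'`:

```shell
#!/bin/sh
# Hypothetical helper: succeed only if the given listing command prints at
# least one line (i.e. the pattern matched something on the remote side).
have_matches() {
    [ -n "$(eval "$1" 2>/dev/null)" ]
}
```

Usage: `have_matches "ssh host 'ls /var/log/httpd/access_log.*'" && rsync -qPt -e ssh 'host:/var/log/httpd/access_log.*' /root/myoldLogs/`. The obvious cost is an extra ssh round trip, and there is a small race if logs rotate between the check and the transfer.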
Re: webdav timeout
On Sat, 2009-01-10 at 18:16 +0100, Kraak Helge wrote: I tried to sync two files (50 MB and 100 MB) with my WebDAV folder using rsync 3.0.5 on Mac OS X (10.4.11), from both Terminal and X11. With the Terminal application the sync always failed with both files. With X11 I was once successful in synchronizing the 50 MB file. The error message is always like this (also when I set the timeout to e.g. 7200 seconds):

io timeout after 1003 seconds -- exiting
rsync error: timeout in data send/receive (code 30) at io.c(200) [sender=3.0.5]

The sync command I used:

/usr/local/bin/rsync --size-only --human-readable --progress --recursive --delete --timeout=1000 --bwlimit=70 --times --stats /Users/Helge/temp /Volumes/helgekraak/ > /Users/Helge/Pictures/rsync_Pictures_`date +%Y_%m_%d__%H_%M`.log

I also tried two other WebDAV services with the same result. When I use the Finder to copy the files it works perfectly, and likewise when I use the cp command in the Terminal or with X11. Do you think this is a bug in rsync? Should I file an issue?

Rsync tends to run into inadequacies in network filesystems more often than cp, since it is a more sophisticated tool. Please obtain a system call trace of all three rsync processes using ktrace so I can see how rsync is hanging and determine whether the fault is with rsync or the WebDAV program.

-- Matt
Re: (Synchronization among clients with history)
On Sat, 2009-01-10 at 15:01 -0600, Jeff Allen wrote: I'm looking to build a rough implementation of a multi-client rdiff-backup system; in order to do this I'm using rsync before rdiff-backup. (We'll say there's a server, Client A, and Client B. Files should be synced between A and B, but the server should keep a master list of all differences and changes made in any file, by any client, in the directory I'm syncing.) Essentially, I envision that syncing client A would go something like this:

1. Rsync down from the server to Client A in order to ensure that any newly-created files added recently by Client B (which would have already been uploaded, via rdiff-backup, to the server) are added to the local directory on Client A.

2. Rdiff-backup from Client A to the server. This will not increment the freshly downloaded files created by Client B, as the modified times are equal. However, it would update those newly-created/edited files on Client A since the last sync.

Do I understand correctly that you're taking advantage of the fact that rdiff-backup leaves the latest files in an ordinary tree that you can read via rsync, provided that you --exclude=/rdiff-backup-data?

However, I will run into problems when I delete a file. If I delete a file off of either client, the file will be un-deleted when I rsync down in step one, as the file would still exist on the server. But if I use rsync --del, it would just delete any and all new files created on a client since the last sync. The best solution I can envision is to write a shell script (or modify the rsync source) which would alter step 1 above to the following:

global variable lastSync; // last synchronization for this client
function syncFile(file, modifiedDate) {
    if (modifiedDate > lastSync) {
        // this must be a new file created from another client:
        // download the file from the server
    } else {
        // the file has been deleted on the client since the last sync:
        // delete the file.
    }
}

It just so happens that I had a similar need a few years ago (but without the need to save history) and made a similar proposal as my first rsync bug: https://bugzilla.samba.org/show_bug.cgi?id=2094 Wayne wisely advised me to use a real two-way synchronization tool such as unison ( http://www.cis.upenn.edu/~bcpierce/unison/ ) instead, and I would give you the same advice. But what makes your case more difficult is that you don't want to write directly to the rdiff-backup dir with unison. If unison had an option to propagate changes in one direction and skip any changes detected in the other direction, you could use that in step 1 and count on the next run of unison to recognize the changes made by rdiff-backup as convergent. Unfortunately, unison has no such option, though you may be able to rig up a script to accomplish this in unison's interactive mode.

Alternatively, you could introduce an intermediate directory containing another copy of the data (which could be on either each client or the server) and use the following procedure:

1. Rsync from the rdiff-backup dir to the intermediate dir.
2. Synchronize the intermediate dir with the client via unison.
3. Back up the intermediate dir to the rdiff-backup dir.

But this uses extra space. Given your requirements for both history and synchronization, you may be better served by using a full version-control tool in place of both rdiff-backup and unison. My personal favorite is git ( http://git.or.cz/ ). The downside is that you'll have to jump through extra hoops if you care about file attributes. See this thread for some ideas (written with reference to git but may apply to other tools too): http://www.gelato.unsw.edu.au/archives/git/0612/index.html#34154

I hope one of these approaches works for you. If not, give me some more information and I will see if I can come up with anything else.

-- Matt
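Jeff's timestamp rule from the pseudocode above can be sketched as a small shell helper. Times are epoch seconds, `sync_decision` is a hypothetical name, and the actual download/delete actions are reduced to printed decisions (in a real script, lastSync would be recorded per client, as the pseudocode's global variable suggests):

```shell
#!/bin/sh
# Hypothetical helper: decide what step 1 should do with a server-side file.
# Per the pseudocode: a file modified after this client's last sync was
# created elsewhere and should be downloaded; an older file must have been
# deleted on this client since the last sync, so delete it instead.
sync_decision() {
    mtime="$1"      # file modification time, epoch seconds
    last_sync="$2"  # time of this client's previous sync, epoch seconds
    if [ "$mtime" -gt "$last_sync" ]; then
        echo download
    else
        echo delete
    fi
}
```

As Matt notes, this per-file rule is essentially what a real two-way synchronizer already implements, with the extra bookkeeping needed to handle clock skew and conflicting edits.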
Re: rsync to fat32 on different platforms
On Mon, 2009-01-12 at 09:58 +0100, Andreas Nef wrote: Only one additional remark so far: The problem seems to be connected to Windows XP (tested SP3 so far). On Vista the same task with the same hard drive runs smoothly, while going back to an XP machine shows the same slow behaviour again.

Then I guess it's a Windows idiosyncrasy. You might get more help on the cwRsync forum: http://www.itefix.no/i2/forum/41

-- Matt
Re: Rsync error 'unexpected tag 93' when --log-file= parameter is present and run from crontab
I have some general remarks about the problem; I hope Wayne will have more specific ideas on how to debug it.

On Sun, 2009-01-11 at 13:33 +0100, Ernst J. Oud wrote:

rsync -vrpth --stats --progress --log-file=/nslu2/rsync.log --log-file-format='%t %i %n%L' --include-from=/nslu2/rsync-files /share/hdd/data/public/Ernst/ 192.168.1.69::rsync-nslu2

However, if I include the same line in crontab, when executed the server reports an "unexpected tag 93" and protocol errors in io.c at line 1169, which is the default handler for communication errors. This *only* happens when the --log-file=/nslu2/rsync.log option is present. BTW, how can an error such as this be "unexpected"? The tag 93 must mean something? Why can't the error handler be more specific about what caused this? At least some information, in a client-server environment, on which side generated the protocol error would help enormously!

The rsync protocol consists of messages of different types, and each type is identified by a tag number. The error means that an rsync process got a message with tag number 93, which is not one of the valid tags whose meaning is defined by the protocol. This is generally a result of the protocol somehow getting out of sync. It is hard to tell which side generated the error, though.

The error message appears in /var/log/messages at the server's end. The client's syslog does not report an error; it only reports that cron has started rsync.

The error should have a [client] or [server] prefix that tells you which side detected the error. That doesn't, however, reveal which side is at fault for the protocol getting out of sync.

I can live with the latter option since it also generates a log file by redirection of rsync's output, but I prefer the --log-file= version since that allows me to change the log format.

You should be able to change the stdout format with --out-format.

-- Matt
Re: how to connect to rsyncd via forwarded ssh port?
On Sat, 2009-01-10 at 22:32 +0100, Matthias Meyer wrote: I'm running an rsync daemon and ssh port forwarding (-R 12345:localhost:873 bac...@server) on a client, because the client should not be reachable except over ssh. The rsync daemon must be accessible because I back up and restore files with BackupPC (a wonderful program which uses perl::rsync). And I can "ssh -p 12345 bac...@localhost" to this client too. If I try to connect (from the machine/user where BackupPC can connect) to a share of the client's rsync daemon, I get different errors:

rsync -e ssh -C -l backup -p 10021 -av /srv/Service/Installs/autosshpatch.bin bac...@localhost:/BACKUP

seems to work, but it does not connect to the share BACKUP; it copies autosshpatch.bin into a file called BACKUP.

Once you start the port forward, the rsync daemon is listening at localhost:12345 as far as your rsync command is concerned. The command should just access the daemon at that port using the double-colon or rsync:// syntax, without specifying anything ssh-related; the existing ssh process will take care of the forwarding:

rsync -av --port=12345 /srv/Service/Installs/autosshpatch.bin localhost::BACKUP

Alternatively:

rsync -av /srv/Service/Installs/autosshpatch.bin rsync://localhost:12345/BACKUP

-- Matt
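The two commands Matt gives address the same daemon; the `host::module` form and the `rsync://host:port/module` form are interchangeable. A tiny sketch that makes the URL form explicit (`daemon_url` is a hypothetical helper; port 12345 and the BACKUP module are the values from this thread):

```shell
#!/bin/sh
# Hypothetical helper: build the rsync:// URL for a daemon reached through a
# forwarded local port, as in Matt's second example.
daemon_url() {
    port="$1"
    module="$2"
    echo "rsync://localhost:$port/$module"
}

daemon_url 12345 BACKUP
```

With the double-colon syntax the non-default port instead has to be supplied separately (via --port or the RSYNC_PORT environment variable), which is why the URL form is often less error-prone for forwarded ports.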
DO NOT REPLY [Bug 5220] Syntax to access a daemon over a socket supplied on a fd
https://bugzilla.samba.org/show_bug.cgi?id=5220

--- Comment #6 from m...@mattmccutchen.net 2009-01-12 23:18 CST ---

I noticed that this was proposed a long time ago: http://lists.samba.org/archive/rsync/2003-March/005418.html It looks like JW Schultz was proposing treating a client with a supplied connection as a special kind of server, with am_server set to a distinctive value. I don't think that approach will work now that the client/server distinction means many things besides whether to make a connection or use a provided one, so the current approach of just hooking the code that makes the connection is correct.
Re: Rsync crash
On Mon, 2009-01-12 at 13:29 +0100, Maxence DUNNEWIND wrote: Anyway, after it has been working well for some time (between 2 and 24 hours), the rsync daemon crashes.

I wouldn't call an infinite loop a crash, but whatever.

It is still running but seems to retry something infinitely. The rsync log doesn't output anything before the crash (just the log of the mirror syncs). I straced the rsync process and got the following output:

select(6, [4 5], NULL, NULL, NULL) = 1 (in [5])
accept(5, {sa_family=AF_INET, sin_port=htons(37870), sin_addr=inet_addr()}, [16]) = 3
rt_sigaction(SIGCHLD, {0x80768c0, [], SA_NOCLDSTOP}, NULL, 8) = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb7e196f8) = 7117
close(3) = 0
select(6, [4 5], NULL, NULL, NULL) = 1 (in [4])
accept(4, {sa_family=AF_INET6, sin6_port=htons(55783), inet_pton(AF_INET6, xxx, sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 3
rt_sigaction(SIGCHLD, {0x80768c0, [], SA_NOCLDSTOP}, NULL, 8) = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb7e196f8) = 7123
close(3) = 0
select(6, [4 5], NULL, NULL, NULL) = 2 (in [4 5])
select(6, [4 5], NULL, NULL, NULL) = 2 (in [4 5])
select(6, [4 5], NULL, NULL, NULL) = 2 (in [4 5])

The last line (the select() call) then repeats infinitely and the rsync daemon is no longer usable (until I restart it).

I see what happened here: incoming connections arrived on two different sockets at the same time, and the select call wasn't prepared to handle that case. I'll submit a patch.

-- Matt
RE: (Synchronization among clients with history)
Matt, thank you so much for your detailed advice; I sincerely appreciate your time and help. I'm beginning to realize that the span of this project probably doesn't merit hacking the source of unison or rsync. The directory is small enough that adding an extra copy of it would be feasible; I think this would be a simple approach which, as far as I've thought it out currently, would accomplish what I need. I hadn't thought of using a versioning tool such as Git. Just judging by a glance through the website, that looks like it would satisfy all my requirements (as I'm not too particular about the permissions of this specific data). I'll definitely look into that further. Thanks again for your help,

- Jeff

From: m...@mattmccutchen.net To: jeffr...@smu.edu Date: Mon, 12 Jan 2009 20:27:20 -0500 CC: rsync@lists.samba.org Subject: Re: (Synchronization among clients with history)