Re: FW: Problem with large include files
On Wed, May 16, 2001 at 05:25:03PM +1200, Wilson, Mark - MST wrote:
> How do I go about registering this bug with the include file?

I don't think there's any point registering it until you have better
confirmed that that is indeed the problem. Also, if developers have no
way of reproducing the problem, it is highly unlikely to get fixed. If
you can come up with a fix yourself, of course, then a patch could
probably be applied. Personally I'm not convinced that the problem
you're seeing is an include file problem, but your 2.3.2 testing may
give better evidence.

There is an rsync bug tracking system, but I'm not sure how thoroughly
anybody looks at it. I know I don't; I used to while I maintained
rsync, but haven't since. Martin, do you look at and respond to bug
reports in the rsync bug tracking system? The main page says there are
404 messages in the incoming bucket, and I believe they're supposed to
get moved to another bucket once somebody has replied to them.
Currently, posting to the mailing list is much more likely to get a
response.

> It would be good to get this bug fixed, as I would like to be able to
> go back to 2.4.6 (or whatever), as it is faster and it has bandwidth
> limiting.

It's faster? Why do you say that? I don't recall any changes in the
2.4.x series explicitly related to performance.

> Will let you know the results of the testing.

- Dave Dykstra
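Dave's point about reproducibility suggests a self-contained test case. The sketch below is hypothetical (paths, the file count, and the variable names are all placeholders, and rsync is only invoked if it happens to be installed): it builds a tree of small files plus an include file naming each of them, roughly the setup under discussion. Raising N toward the failing size would exercise the reported limit.

```shell
# Hypothetical reproduction sketch: build many small files and an
# include file naming each one, then transfer with --include-from.
# N is kept small here; raise it toward the failing size to test.
N=${N:-500}
src=$(mktemp -d); dst=$(mktemp -d); inc=$(mktemp)

i=1
while [ "$i" -le "$N" ]; do
    : > "$src/file$i"          # empty placeholder file
    echo "file$i" >> "$inc"    # one include pattern per file
    i=$((i + 1))
done

# Only run the transfer if rsync is installed on this machine.
if command -v rsync >/dev/null 2>&1; then
    rsync -av --include-from="$inc" --exclude='*' "$src/" "$dst/"
fi
echo "include file entries: $(grep -c '' "$inc")"
```

The `--exclude='*'` after `--include-from` makes the include file authoritative: only the listed names are transferred.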
Re: FW: Problem with large include files
I still have been unable to move to 2.4.6 for a similar reason --
hangs. I haven't ever detected the #files-in-a-dir issue, but I do
still see hang problems in 2.4.6 (both Solaris and Linux). I see this
on localhost-to-localhost rsyncs as well as rsyncs over ssh.

eric

[EMAIL PROTECTED] wrote:
> I've seen a problem similar to the include file one, but with just
> running rsync in the following mode, copying from one directory to
> another:
>
>     rsync -avo --delete --stats /dir1 /dir2
>
> I was using version 2.3.1 for the longest time with no problem, and I
> just recently moved to 2.4.6. If my directories have more than 61,000
> files in them, the process just hangs. Now, in order to use 2.4.6, I
> must use a script that chops the update process into smaller than
> 60,000-file chunks.
>
> Tim
> ==
> Tim W. Renwick  mailto:[EMAIL PROTECTED]    | Put me on the highway
> Philips Semiconductors  (408)474-5370       | and show me a sign
> 1109 McKay Drive, M/S 41  Fax (408)474-5252 | and take it to the
> San Jose, CA 95131  SERI trenwick@usvlsjs1  | limit one more time! - Eagles
>
> [EMAIL PROTECTED]@[EMAIL PROTECTED] on 05/15/2001 10:36:47
> Sent by: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]@SMTP
> cc: [EMAIL PROTECTED]@SMTP
> Subject: RE: FW: Problem with large include files
>
>> How do I go about registering this bug with the include file? It
>> would be good to get this bug fixed, as I would like to be able to
>> go back to 2.4.6 (or whatever), as it is faster and it has bandwidth
>> limiting.
>>
>> Will let you know the results of the testing.
>>
>> Cheers
>> Mark
>>
>> -----Original Message-----
>> From: Dave Dykstra [mailto:[EMAIL PROTECTED]]
>> Sent: Wednesday, 16 May 2001 01:24
>> To: Wilson, Mark - MST
>> Cc: RSync List (E-mail)
>> Subject: Re: FW: Problem with large include files
>>
>> On Tue, May 15, 2001 at 03:31:23PM +1200, Wilson, Mark - MST wrote:
>> ...
>>> Do you have any idea what the maximum number of files you can have
>>> in an include file is (for the current version)?
>>
>> No, I don't. It probably depends on a lot of variables.
>>
>>> How do you want your test on 2.3.2 done? ie LAN or high speed WAN,
>>> numbers of files, sizes of files, things to time, daemon v rsh.
>>
>> What I'd like to see is a case that might make the biggest
>> difference with and without the optimization:
>>
>> - probably use the LAN
>> - the largest number of files that you can get to work
>> - small files
>> - time the whole run with the time command, CPU time and elapsed time
>> - I don't know about daemon vs rsh, but the daemon leaves the most
>>   under rsync's control, so that may be preferable
>>
>> - Dave Dykstra
>>
>> CAUTION - This message may contain privileged and confidential
>> information intended only for the use of the addressee named above.
>> If you are not the intended recipient of this message you are hereby
>> notified that any use, dissemination, distribution or reproduction
>> of this message is prohibited. If you have received this message in
>> error please notify Air New Zealand immediately. Any views expressed
>> in this message are those of the individual sender and may not
>> necessarily reflect the views of Air New Zealand.
>> _________
>> For more information on the Air New Zealand Group, visit us online
>> at http://www.airnewzealand.com or http://www.ansett.com.au
>> _________

--
Eric T. Whiting
AMI Semiconductors
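Tim's chunking workaround can be sketched as follows. Everything here is illustrative (a temp tree stands in for /dir1 and /dir2, and the chunk size is tiny for demonstration). Note that `--delete` is deliberately omitted: deleting against a partial file list would remove files that merely fall outside the current chunk. rsync runs only if it is installed.

```shell
# Sketch of the "chop into chunks" workaround (all names illustrative).
# Build the full file list, split it into pieces below the reported
# ~60,000-file hang threshold, then sync one piece at a time.
src=$(mktemp -d); dst=$(mktemp -d); listdir=$(mktemp -d)
for f in a b c d e; do echo "data" > "$src/$f"; done

chunk=2                          # tiny here; ~50000 in real use
( cd "$src" && find . -type f | sed 's|^\./||' | sort ) > "$listdir/all"
split -l "$chunk" "$listdir/all" "$listdir/chunk."

for piece in "$listdir"/chunk.*; do
    # --include-from + --exclude='*' transfers only the listed files;
    # --delete is omitted on purpose (it would fight the chunking).
    if command -v rsync >/dev/null 2>&1; then
        rsync -av --include-from="$piece" --exclude='*' "$src/" "$dst/"
    fi
done
echo "chunks: $(ls "$listdir" | grep -c '^chunk\.')"
```

With five files and a chunk size of two, `split` produces three pieces, and each rsync pass copies only its piece's files.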
Re: FW: Problem with large include files
On Tue, May 15, 2001 at 03:31:23PM +1200, Wilson, Mark - MST wrote:
...
> Do you have any idea what the maximum number of files you can have in
> an include file is (for the current version)?

No, I don't. It probably depends on a lot of variables.

> How do you want your test on 2.3.2 done? ie LAN or high speed WAN,
> numbers of files, sizes of files, things to time, daemon v rsh.

What I'd like to see is a case that might make the biggest difference
with and without the optimization:

- probably use the LAN
- the largest number of files that you can get to work
- small files
- time the whole run with the time command, CPU time and elapsed time
- I don't know about daemon vs rsh, but the daemon leaves the most
  under rsync's control, so that may be preferable

- Dave Dykstra
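The measurement Dave asks for can be sketched like this (paths and the file count are placeholders, and the transfer runs only if rsync is installed). bash's `time` keyword reports both elapsed time (real) and CPU time (user + sys), which covers both numbers he wants.

```shell
# Sketch of the suggested timing run: many small files on a local
# path, timed with the shell's time keyword (real = elapsed,
# user+sys = CPU). Counts and paths are placeholders.
src=$(mktemp -d); dst=$(mktemp -d)
i=1
while [ "$i" -le 200 ]; do
    echo "payload $i" > "$src/small$i"   # lots of small files
    i=$((i + 1))
done

if command -v rsync >/dev/null 2>&1; then
    time rsync -a "$src/" "$dst/"
fi
echo "files created: $((i - 1))"
```

Running the same tree under 2.3.2 and 2.4.6 and comparing the two `time` reports would show whether the include-file optimization makes a measurable difference.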
RE: FW: Problem with large include files
How do I go about registering this bug with the include file? It would
be good to get this bug fixed, as I would like to be able to go back
to 2.4.6 (or whatever), as it is faster and it has bandwidth limiting.

Will let you know the results of the testing.

Cheers
Mark

-----Original Message-----
From: Dave Dykstra [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 16 May 2001 01:24
To: Wilson, Mark - MST
Cc: RSync List (E-mail)
Subject: Re: FW: Problem with large include files

On Tue, May 15, 2001 at 03:31:23PM +1200, Wilson, Mark - MST wrote:
...
> Do you have any idea what the maximum number of files you can have in
> an include file is (for the current version)?

No, I don't. It probably depends on a lot of variables.

> How do you want your test on 2.3.2 done? ie LAN or high speed WAN,
> numbers of files, sizes of files, things to time, daemon v rsh.

What I'd like to see is a case that might make the biggest difference
with and without the optimization:

- probably use the LAN
- the largest number of files that you can get to work
- small files
- time the whole run with the time command, CPU time and elapsed time
- I don't know about daemon vs rsh, but the daemon leaves the most
  under rsync's control, so that may be preferable

- Dave Dykstra
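The bandwidth limiting Mark mentions is the `--bwlimit` option, which caps transfer I/O at roughly the given number of KBytes per second. A minimal sketch (the local-to-local copy, paths, and sizes are all just to keep it self-contained; rsync runs only if installed):

```shell
# Illustrative --bwlimit use: cap the transfer at ~100 KB/s. The
# local-to-local copy is just to keep the sketch self-contained.
src=$(mktemp -d); dst=$(mktemp -d)
dd if=/dev/zero of="$src/big" bs=1024 count=64 2>/dev/null

if command -v rsync >/dev/null 2>&1; then
    rsync -av --bwlimit=100 "$src/" "$dst/"
fi
echo "source size: $(wc -c < "$src/big") bytes"
```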
RE: FW: Problem with large include files
I've seen a problem similar to the include file one, but with just
running rsync in the following mode, copying from one directory to
another:

    rsync -avo --delete --stats /dir1 /dir2

I was using version 2.3.1 for the longest time with no problem, and I
just recently moved to 2.4.6. If my directories have more than 61,000
files in them, the process just hangs. Now, in order to use 2.4.6, I
must use a script that chops the update process into smaller than
60,000-file chunks.

Tim
==
Tim W. Renwick  mailto:[EMAIL PROTECTED]    | Put me on the highway
Philips Semiconductors  (408)474-5370       | and show me a sign
1109 McKay Drive, M/S 41  Fax (408)474-5252 | and take it to the
San Jose, CA 95131  SERI trenwick@usvlsjs1  | limit one more time! - Eagles

[EMAIL PROTECTED]@[EMAIL PROTECTED] on 05/15/2001 10:36:47
Sent by: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]@SMTP
cc: [EMAIL PROTECTED]@SMTP
Subject: RE: FW: Problem with large include files

> How do I go about registering this bug with the include file? It
> would be good to get this bug fixed, as I would like to be able to go
> back to 2.4.6 (or whatever), as it is faster and it has bandwidth
> limiting.
>
> Will let you know the results of the testing.
>
> Cheers
> Mark
>
> -----Original Message-----
> From: Dave Dykstra [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, 16 May 2001 01:24
> To: Wilson, Mark - MST
> Cc: RSync List (E-mail)
> Subject: Re: FW: Problem with large include files
>
> On Tue, May 15, 2001 at 03:31:23PM +1200, Wilson, Mark - MST wrote:
> ...
>> Do you have any idea what the maximum number of files you can have
>> in an include file is (for the current version)?
>
> No, I don't. It probably depends on a lot of variables.
>
>> How do you want your test on 2.3.2 done? ie LAN or high speed WAN,
>> numbers of files, sizes of files, things to time, daemon v rsh.
>
> What I'd like to see is a case that might make the biggest difference
> with and without the optimization:
>
> - probably use the LAN
> - the largest number of files that you can get to work
> - small files
> - time the whole run with the time command, CPU time and elapsed time
> - I don't know about daemon vs rsh, but the daemon leaves the most
>   under rsync's control, so that may be preferable
>
> - Dave Dykstra
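Dave's daemon-vs-rsh preference can be made concrete with a throwaway daemon config. Everything below is hypothetical (module name, port number, and temp paths are placeholders), the chosen port may be busy on any given machine, and rsync only runs if installed, so the transfer step is strictly best-effort.

```shell
# Sketch of a throwaway daemon for the timing test: the daemon end
# keeps the whole transfer under rsync's control. Module name, port,
# and paths are placeholders.
src=$(mktemp -d); dst=$(mktemp -d); conf=$(mktemp)
echo "hello" > "$src/f1"

# pid/lock files point at writable temp paths so no root is needed.
cat > "$conf" <<EOF
port = 8730
use chroot = no
pid file = $conf.pid
lock file = $conf.lock
[testmod]
    path = $src
    read only = yes
EOF

if command -v rsync >/dev/null 2>&1; then
    rsync --daemon --config="$conf" || true
    sleep 1
    rsync -av "rsync://localhost:8730/testmod/" "$dst/" || true
    kill "$(cat "$conf.pid" 2>/dev/null)" 2>/dev/null || true
fi
echo "config module: $(grep -c '^\[testmod\]' "$conf")"
```

`use chroot = no` lets the daemon run unprivileged, which suits a quick benchmark; the rsh/ssh transport needs no such setup but hands part of the pipeline to the remote shell.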