On Monday, September 8, 2003, at 04:36 AM, David Vaughan wrote:


Hmm. I have now encountered the same problem. As I wrote before, I had no difficulty with 100,000 files and directories 17 deep, but the same folders examined from their parent (the volume name) led to a far higher depth count (it should have been only 18) and did not finish after 30 minutes. There was no crash, per se, just endless processing. The amount of work being done makes this a bad one to try to trace.

From the root you will see /Volumes/, so the scan takes in your other drives as well.


I am wondering if this is a problem arising from the amount of data (roughly 10 MB, estimating a full path for every file) being returned on the stack as the function nears the top.

I suspect only a pointer is on the stack and the data elsewhere.


However, there is a higher copy cost in passing very large strings to functions and returning them. You can get rid of the parameter copy by declaring the parameter as a reference variable. Also, the fastest way to accumulate into a string is with 'put after'; large strings seem to carry about 10% to 15% of growing room, and 'put after' seems to exploit that instead of copying the whole string each time.
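A minimal sketch of both techniques, assuming a made-up handler name (collectFiles) and that the @ prefix marks a reference parameter, as it does in Transcript:

```
on collectFiles pFolder, @rList
  -- rList is passed by reference, so nothing is copied on the way in or out
  set the directory to pFolder
  repeat for each line tFile in the files
    -- 'put ... after' appends into the existing buffer
    -- rather than rebuilding the whole accumulator
    put pFolder & "/" & tFile & return after rList
  end repeat
  repeat for each line tDir in the folders
    if char 1 of tDir is not "." then
      collectFiles pFolder & "/" & tDir, rList
    end if
  end repeat
end collectFiles
```

The caller would create an empty variable and pass it in; when the handler returns, the variable holds the whole list with no return-value copy.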

At some point there might be a virtual memory cost. I wouldn't be surprised if there are some surprises here.

The alternative might be some trigger into circularity, which is what happened with permissions.

I looked at /dev/, aliases and even named pipes, and none of those shows circularity in the structure as seen from folders(). I might have goofed, though, and they might be worth another look. There might also be some unix file type that could cause trouble.
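If circularity is the trigger, one cheap guard is to cap the recursion depth and report anything that blows past it. A rough sketch, assuming a made-up handler name and limit:

```
on walkFolders pFolder, pDepth
  if pDepth > 50 then
    -- 50 is an arbitrary sanity limit; a real tree this deep is suspect,
    -- so log the offending path and back out instead of looping forever
    put "suspiciously deep: " & pFolder & return after msg
    exit walkFolders
  end if
  set the directory to pFolder
  repeat for each line tDir in the folders
    if char 1 of tDir is not "." then
      walkFolders pFolder & "/" & tDir, pDepth + 1
    end if
  end repeat
end walkFolders
```

Running this from the volume root should at least name the folder where the traversal stops making progress.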


Dar Scott
****************************************
    Dar Scott Consulting
    http://www.swcp.com/dsc/
    Programming Services
****************************************

_______________________________________________
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution
