Daniel, this is not exactly THE "huge directory" problem. This one concerns the real filesystem directories used by the txfile store, not the WebDAV collections.
But you are right, the "huge directory" problem is still unsolved. I'm currently not sure where the bottleneck is.
Stefan
There are several problems with "huge directories". Only some of them can really be fixed...
The basic problem, obviously, is that doing a depth-1 propfind (a common operation!) on a directory containing hundreds or thousands of objects takes a long time.
There are two parts to this: slide taking a long time to process the request and generate the response, and then actually transferring the response (over a possibly slow network link). On top of that, the client eventually has to parse that response and do something with it.
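To make that concrete, a depth-1 PROPFIND exchange looks roughly like this (host and path invented for the example); the point is that the 207 Multi-Status body has to carry one <D:response> element per child, so thousands of children mean a correspondingly enormous XML document for slide to build and ship, and for the client to parse:

    PROPFIND /slide/files/bigdir/ HTTP/1.1
    Host: example.org
    Depth: 1
    Content-Type: text/xml; charset="utf-8"

    <?xml version="1.0" encoding="utf-8"?>
    <D:propfind xmlns:D="DAV:">
      <D:allprop/>
    </D:propfind>

and the answer:

    HTTP/1.1 207 Multi-Status
    Content-Type: text/xml; charset="utf-8"

    <?xml version="1.0" encoding="utf-8"?>
    <D:multistatus xmlns:D="DAV:">
      <D:response> ... the collection itself ... </D:response>
      <D:response> ... child 1 ... </D:response>
      <D:response> ... child 2 ... </D:response>
      <!-- one <D:response> per child, repeated thousands of times -->
    </D:multistatus>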
There's not much we can do about the latter parts of this problem - but we can take steps to avoid hitting them in the first place. If a user directly puts a huge number of files into a single directory, there's not too much that can be done. However, we can avoid creating directories within slide with too many children - the DeltaV stuff was a major offender here, but that can be worked around now.
It's important to keep paying attention to that, and to avoid creating things that are going to scale poorly.
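For illustration, here is a minimal sketch of the kind of splitting I mean, assuming a made-up helper (this is not how slide's stores actually lay things out): hash each object's URI into a couple of nested bucket directories, so that no single on-disk directory ever holds more than a bounded number of children.

    // Sketch only: maps an object URI to a fanned-out path such as
    //   <root>/a7/3f/some_uri
    // so no single on-disk directory accumulates thousands of entries.
    // The layout and class name are hypothetical, not slide's actual store code.
    import java.io.File;

    public class FanOutLayout {

        private final File root;

        public FanOutLayout(File root) {
            this.root = root;
        }

        public File fileFor(String uri) {
            int h = uri.hashCode();
            // two levels of 256 buckets each -> at most 65536 leaf directories
            String level1 = String.format("%02x", h & 0xff);
            String level2 = String.format("%02x", (h >> 8) & 0xff);
            File dir = new File(new File(root, level1), level2);
            dir.mkdirs();
            return new File(dir, encode(uri));
        }

        // Flatten the URI into a single safe file name (very naive).
        private static String encode(String uri) {
            return uri.replace('/', '_');
        }
    }

Whatever the exact scheme, the point is simply to bound how many entries any one directory has to hold.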
The other side is optimising slide itself to cope better with huge directories; merely splitting up directories doesn't help here. There are a few paths that could be taken: creating useful fast paths for common operations, doing more operations directly at the store level, avoiding creating/copying so many large internal data structures, and so on. Obviously, this is a lot of work - as Stefan says, it's not obvious where the bottlenecks are; they probably depend on how you're using slide.
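As a hedged example of the "do more at the store level" idea (the interface and names below are invented for illustration, not slide's real store API): a store could expose a single batched call that returns all children of a collection together with their properties, so that a depth-1 propfind makes one store call instead of one per child.

    // Hypothetical illustration, not slide's actual API: a batched lookup
    // so a depth-1 PROPFIND issues one store call instead of one per child.
    import java.util.List;
    import java.util.Map;

    public interface BatchedStore {

        /** One child entry: its URI plus the properties already loaded for it. */
        class ChildEntry {
            public final String uri;
            public final Map<String, String> properties;

            public ChildEntry(String uri, Map<String, String> properties) {
                this.uri = uri;
                this.properties = properties;
            }
        }

        /**
         * Return all children of the collection together with their properties
         * in a single pass, so the caller never does N separate lookups.
         */
        List<ChildEntry> getChildrenWithProperties(String collectionUri);
    }

A store backed by a database could then turn that into a single query; that's exactly the kind of fast path that would matter most for huge directories.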
Mike
