> still a hash. i'm not doing anything particularly clever for speed,
> and it shows in places. listing large directories is the slowest
> operation by far, as it would be in most cases, since several thousand
> "stat" structures have to be dynamically created, one for each entry
> in a directory. i'm not pre-generating anything, however, so in daily
> use, where each client knows exactly where to go, i'm not seeing
> slowdowns.

thanks!

> not that i'm worried: we recently discovered a few misconfigured
> clusters around here (names withheld) that were using ldap and no
> local nameservice caching. each stat on those boxes would take 0.05 ms
> (instead of 0.005 ms) to complete because it needed to contact a server
> for the username lookup. the wait became unbearable above a few
> thousand files in a directory, so people finally started complaining
> after waiting minutes for 'ls -l' to finish.
> things could be way worse, i guess :)

i suppose we could all be forced to reimplement vi for the
apollo landing computer.

- erik
