In message <[EMAIL PROTECTED]>, Matt Dillon writes:
>
> There are a number of issues... well, there is really one big issue, and
> that is the simple fact that there can be upwards of 260,000+ entries
> in the name cache and cache_purgeleafdirs() doesn't scale. It is an
> O(N*M) algorithm.
I agree; I've never been too fond of the purgeleafdirs() code myself,
for that reason and others.
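To make the scaling complaint concrete, here is a purely illustrative model (not the actual kernel code): a purgeleafdirs-style pass walks all N namecache entries once per candidate directory, so purging M directories costs O(N*M) comparisons. All names here are hypothetical.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified model of the scaling problem: for each of
 * the M candidate directory vnodes, the purge scans all N namecache
 * entries to decide whether the directory still has cached children.
 * With N upwards of 260,000 this nested scan is O(N*M). */
struct ncent {
	int dir_id;		/* id of the directory this entry hangs off */
};

/* One O(N) scan over the cache for a single directory. */
static int
count_children(const struct ncent *cache, size_t n, int dir_id)
{
	size_t i;
	int children = 0;

	for (i = 0; i < n; i++)
		if (cache[i].dir_id == dir_id)
			children++;
	return (children);
}

/* Total comparisons performed when M directories are examined. */
static long
purge_cost(size_t n, size_t m)
{
	return ((long)n * (long)m);	/* O(N*M) */
}
```

With N = 260,000 entries and even a modest M, purge_cost() makes the quadratic blowup obvious.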
If we disregard the purgeleafdirs() workaround, the current cache code
was built around the assumption that VM page reclaims would be enough
to keep the vnode cache flushed and any vnode which could be potentially
useful was kept around until it wasn't.
Your patch changes this to the opposite: we kill vnodes as soon as
possible, and pick them off the freelist the next time we hit them,
if they survive that long.
I think that more or less neuters the vfs cache for anything but
open files, which is not in general an optimal solution either.
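As a sketch of the policy being objected to (hypothetical code, not the patch itself): the last vrele() puts the vnode straight on the freelist, and the allocator discards its cached identity the next time it reaches it, unless a lookup re-referenced it first.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of "kill vnodes as soon as possible": a vnode
 * goes on the freelist the moment its last reference drops, and its
 * cached state is thrown away the next time the allocator reaches
 * it -- unless something re-references it before then. */
struct vnode {
	int refcount;
	struct vnode *freenext;
};

static struct vnode *freelist;	/* LIFO for simplicity */

/* Drop a reference; on the last one, queue for reclaim immediately. */
static void
vrele(struct vnode *vp)
{
	if (--vp->refcount == 0) {
		vp->freenext = freelist;
		freelist = vp;
	}
}

/* Re-reference: the vnode "survives" if this happens before reclaim.
 * (A real implementation would also unlink it from the freelist.) */
static void
vref(struct vnode *vp)
{
	vp->refcount++;
}

/* Allocator: reuse the first still-unreferenced vnode on the list. */
static struct vnode *
getnewvnode(void)
{
	while (freelist != NULL) {
		struct vnode *vp = freelist;
		freelist = vp->freenext;
		if (vp->refcount == 0)
			return (vp);	/* cached identity discarded here */
	}
	return (NULL);			/* would allocate fresh memory */
}
```

The point of contention is visible in getnewvnode(): any vnode that is not re-referenced in time loses all its cached state, open files being the only reliable survivors.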
I still lean towards finding a dynamic limit on the number of vnodes
and having the cache operate accordingly, as the least generally
lousy algorithm we can employ.
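One possible shape for such a dynamic limit (hypothetical; the constants and names are illustrative, not actual FreeBSD tunables) is to scale the vnode target with available memory, clamped between a floor and a ceiling:

```c
#include <assert.h>

/* Hypothetical sketch of a dynamic vnode limit: scale the target
 * number of cached vnodes with free memory, clamped to sane bounds.
 * All constants are assumptions for illustration only. */
#define VNODE_MIN	1000L
#define VNODE_MAX	100000L
#define BYTES_PER_VNODE	512L	/* assumed per-vnode memory cost */

static long
dynamic_vnode_limit(long free_bytes)
{
	/* Spend roughly a quarter of free memory on vnodes. */
	long target = free_bytes / (BYTES_PER_VNODE * 4);

	if (target < VNODE_MIN)
		target = VNODE_MIN;
	if (target > VNODE_MAX)
		target = VNODE_MAX;
	return (target);
}
```

The cache would then shed entries whenever the count exceeds the current limit, instead of either keeping everything (the old behavior) or killing vnodes eagerly (the patch).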
Either way, I think that we should not replace the current code with
a new algorithm until we have some solid data for it; it is a complex
interrelationship, and some serious benchmarking is needed before we
can know what to do.
In particular we need to know:
- What ratio of directories are reused, as a function of the
  number of children they have in the cache.
- What ratio of files are reused, as a function of whether or
  not they are open.
- What ratio of files are reused, as a function of the number
  of pages they have in-core.
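Gathering that data could be as simple as a few histograms bumped at reuse time. A hypothetical instrumentation sketch (none of this exists in the tree; it only shows the shape of the data we would want) for the first measurement:

```c
#include <assert.h>

/* Hypothetical instrumentation: count directory-reuse events,
 * bucketed by how many children the directory had in the cache at
 * the moment of reuse (log2 buckets: 0-1, 2-3, 4-7, ...). */
#define NBUCKETS 8

static unsigned long reuse_hist[NBUCKETS];

static int
bucket_for(int nchildren)
{
	int b = 0;

	while (nchildren > 1 && b < NBUCKETS - 1) {
		nchildren >>= 1;
		b++;
	}
	return (b);
}

/* Called (hypothetically) from the lookup path on a cache hit. */
static void
record_dir_reuse(int nchildren)
{
	reuse_hist[bucket_for(nchildren)]++;
}
```

Analogous counters keyed on "open or not" and on resident page count would cover the other two questions; dumped via a sysctl, they would give the benchmark data argued for above.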
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.