On Tue, 23 Aug 2005, Jeffrey Hutzelman wrote:

>> In message <[EMAIL PROTECTED]>, Jeffrey Hutzelman writes:
>>> 67%-full assumption is incorrect.  Perhaps we were too conservative in
>>> bumping the average filesize assumption from 10K to 32K, and it should
>>> really be bigger.

>> I don't believe there should be a linear relationship between the
>> average filesize and the cache size.  When a user increases the disk
>> cache, less kernel memory is available.

> Well, right now we use two numbers. One is a constant; the other is a
> function of the chunk size. It sounds like you're arguing for
> eliminating the constant, or at least limiting its effect as the cache
> size grows very large. Fine, but without data, how do we decide where
> to draw the line? With a small cache and a large chunk size, we need
> that constant to ensure we have enough files.
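
For concreteness, here's roughly the kind of calculation I understand us
to be talking about. This is only a sketch, not the actual afsd code; the
function name, the units and the max() are my own guesses, with the
chunk-derived estimate built on the 67%-full assumption quoted above:

#include <stdio.h>

/* Sketch: estimate the number of cache files from the cache size.
 * "by_filesize" is the constant 32K-average-filesize assumption;
 * "by_chunks" assumes each file fills ~67% of a chunk.  All sizes in KB. */
static long
guess_cache_files(long cache_kb, long chunk_kb)
{
    long by_filesize = cache_kb / 32;                   /* the constant */
    long by_chunks   = (3 * cache_kb) / (2 * chunk_kb); /* chunk-size function */

    /* With a small cache and large chunks, by_chunks alone would give
     * far too few files, which is where the constant earns its keep. */
    return (by_filesize > by_chunks) ? by_filesize : by_chunks;
}

int
main(void)
{
    /* 10 GB cache, 64K chunks: ~327680 files from the constant alone,
     * which is the kind of number that raises the kernel-memory concern
     * mentioned below. */
    printf("%ld\n", guess_cache_files(10L * 1024 * 1024, 64));
    return 0;
}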

There was a suggestion not that long ago to use sqrt() to calculate the
average filesize and thus obtain the limiting effect. Arbitrary, yes, but
it did provide nice values given the assumption that larger caches mean
larger average filesizes. It's certainly better than the hard-coded
average filesize, and should be enough to calm the situation down until a
thorough investigation has been made.
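
To make the limiting effect concrete, here is one reading of the sqrt()
suggestion. Again just a sketch; the KB units, the 32K floor and the name
are my assumptions, not necessarily what was actually proposed:

#include <math.h>
#include <stdio.h>

/* Sketch: let the assumed average filesize grow with the square root of
 * the cache size instead of being a constant, so the file count grows
 * only as sqrt(cache size).  cache_kb is in kilobytes. */
static long
guess_cache_files_sqrt(long cache_kb)
{
    long avg_kb = (long)sqrt((double)cache_kb);  /* assumed average filesize */

    if (avg_kb < 32)
        avg_kb = 32;         /* never assume less than the old 32K constant */
    return cache_kb / avg_kb;
}

int
main(void)
{
    /* 1 GB cache: the fixed 32K rule wants ~32768 files, this one ~1024. */
    printf("%ld\n", guess_cache_files_sqrt(1024L * 1024));
    return 0;
}

(Compile with -lm.)  Whether those are the right units I don't know; the
point is just that the number of files stops growing linearly with the
cache size.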

I also suspect that the main reason people want to limit the number of
files is that certain platforms (32-bit x86 Linux comes to mind) have a
limited amount of kernel memory, and thus fail hard if the user creates a
cache with too many files.

/Nikke
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se     |    [EMAIL PROTECTED]
---------------------------------------------------------------------------
 It's curtains for Windows!
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
_______________________________________________
OpenAFS-devel mailing list
OpenAFS-devel@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-devel
