On Mon, 30 Apr 2001, Justin Erenkrantz wrote:
> > The file buckets code has a default MMAP_LIMIT for a given file of 4MB,
> > just to give you an idea. Any file bigger than that won't get MMAPed in
> > the first place, at least not by the buckets code. Multiply that times
> > some sane number of files you wish to have MMAPed at one time, and voila,
> > you've got your global limit for mod_file_cache.
> >
> > Right?
>
> Correct, but wouldn't it be "fairly" expensive to calculate how much
> space you have allocated with MMAP? Or, you try and keep a static
> "cached" value of how much space you've allocated. I bet this might be
> a good place to use a reader/writer lock (wonder if anyone has
> implemented that - <G>). -- justin
By "fairly expensive", I presume you mean this little block, which is
linear in the number of files cached:
    int mmaped_size = 0;
    foreach apr_bucket b {
        apr_bucket_file *f = b->data;
        if (f->mmap) {
            mmaped_size += b->length;
        }
    }
It's certainly no worse than that.
You can even make it constant time by assuming that none of the files are
MMAPed to start with. Just before you serve a request, check whether the
file is MMAPed. If it isn't before the request but is afterward, add
b->length to mmaped_size. But that might require some kind of locking,
which is (I'm guessing) what you were getting at. Yeah, it could be a bit
hairy to get a precise answer. An estimate might be sufficient and
easier; I don't know for sure.
In response to your followup message: yes, this only keeps track of what
mod_file_cache has done, without accounting for any other code that uses
MMAP. That's why the upper limit must be conservative (as is the 4MB
per-file limit imposed by the buckets code). The 4MB per-file limit might
even be enough by itself, since dividing the address space we're willing
to devote to MMAPs by 4MB yields a large number of files that could be
MMAPed without running out of address space.
At any rate, even if we don't try to track how much address space we've
used up, fixing the leak would still leave us way, way better off than
what we have now, which uses up address space something like:
sum_foreach_file_cached(sizeof(file) * num_requests_where_file_read).
<shrug>
--Cliff
--------------------------------------------------------------
Cliff Woolley
[EMAIL PROTECTED]
Charlottesville, VA