On Mon, 10 Sep 2001, dean gaudet wrote:

> > I don't care if mod_include buffers 200 Megs, as long as it is
> > constantly doing something with the data.  If we have a 200 Meg file
> > that has no SSI tags in it, but we can get all 200 Meg at one time,
> > then we shouldn't have any problem just scanning through the entire
> > 200 Megs very quickly.  Worst case, we do what Brian suggested, and
> > just check the bucket length once we have finished processing all of
> > the data in that bucket.  The buffering only becomes a real problem
> > when we sit waiting for data from a CGI or some other slow content
> > generator.
>
> when you say "buffers 200 Megs" up there do you mean it's mmap()d?
>
> (i think that some of the page freeing decisions in linux are based on
> whether a page is currently mapped or not, and a 200MB file could cause a
> bit of memory shortage...)

It is possible that we're talking about a 200MB MMAP bucket here, yes.

But that won't ever happen in the typical case.  Some module (e.g.
mod_file_cache) would have to have explicitly set up that 200MB MMAP
bucket.

When you read a file bucket and it automatically MMAPs the file, there is
an upper bound of 16MB on the file size eligible for mmap()ing.  Files
bigger than that will be read onto the heap 8KB at a time (one 8KB chunk
per heap bucket).  In that case, mod_include will examine those 8KB heap
buckets until it reaches one that puts it over its buffering threshold,
then flush the data it has already examined down the filter chain.

--Cliff


--------------------------------------------------------------
   Cliff Woolley
   [EMAIL PROTECTED]
   Charlottesville, VA

