Couldn't we have it so that the sub-handlers' request pool is joined
with, or is the same as, the main request's pool (this is different
from the 'connection' pool, right?), so that sub-requests live for the
life of the request?  It looks like that is what apr_pool_join does in
'debug' mode.
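
Roughly what I am picturing (a sketch only -- run_subreq_long_lived is
a made-up helper, and apr_pool_join() only does anything in a
pool-debug build of APR; otherwise it expands to nothing):

    #include "apr_pools.h"
    #include "httpd.h"
    #include "http_request.h"

    /* Run a sub-request whose data must survive until the main
     * request's pool is destroyed. */
    static int run_subreq_long_lived(request_rec *r, const char *uri)
    {
        request_rec *rr = ap_sub_req_lookup_uri(uri, r, r->output_filters);

        /* Debug builds: tell the pool code that rr->pool is to be
         * treated as having the same lifetime as r->pool.  Non-debug
         * builds: this is a no-op macro. */
        apr_pool_join(r->pool, rr->pool);

        /* Don't destroy rr here; any file/mmap buckets it produced may
         * still reference its pool until r->pool goes away. */
        return ap_run_sub_req(rr);
    }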
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Friday, June 01, 2001 1:49 PM
> To: [EMAIL PROTECTED]
> Subject: Re: file/mmap buckets, subrequests, pools, 2.0.18
>
>
> On Fri, 1 Jun 2001, Greg Stein wrote:
>
> > On Fri, Jun 01, 2001 at 11:00:08AM -0700, [EMAIL PROTECTED] wrote:
> > >...
> > > This is relatively simple. A while ago, I changed the default handler
> > > to use the connection pool to fix this problem. A couple of months
> > > ago, Dean pointed out that this was a major resource leak. After
> > > 2.0.16, somebody (Roy?) pointed out that this was a pretty big problem
> > > when serving a lot of very large files on the same connection.
> > >
> > > The solution was a simple loop at the end of the core_output_filter
> > > that reads the data from the file into memory. This is okay to do,
> > > because we are guaranteed to have less than 9k of data. It sounds like
> > > the problem is that we don't read in the data if we have an MMAP, or
> > > we may not be getting into the loop on sub-requests.
> >
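
For reference, the loop being described is roughly the following shape
(a sketch against the current bucket API, not the real
core_output_filter code).  Note that apr_bucket_read() on a FILE bucket
morphs it into a heap bucket, but on an MMAP bucket it just returns a
pointer into the mapping, so the MMAP case needs an explicit copy --
which may be exactly the gap here:

    #include "apr_buckets.h"

    /* Copy a small (< ~9k) brigade's data into memory so it no longer
     * depends on the request pool or an open fd/mapping. */
    static apr_status_t read_brigade_into_memory(apr_bucket_brigade *bb)
    {
        apr_bucket *e;

        for (e = APR_BRIGADE_FIRST(bb);
             e != APR_BRIGADE_SENTINEL(bb);
             e = APR_BUCKET_NEXT(e)) {
            const char *data;
            apr_size_t len;
            apr_status_t rv;

            if (e->length == 0) {
                continue;               /* metadata buckets (EOS etc.) */
            }
            rv = apr_bucket_read(e, &data, &len, APR_BLOCK_READ);
            if (rv != APR_SUCCESS) {
                return rv;
            }
            if (APR_BUCKET_IS_MMAP(e)) {
                /* NULL free function => the heap bucket copies the data */
                apr_bucket *copy = apr_bucket_heap_create(data, len, NULL,
                                                          e->list);
                APR_BUCKET_INSERT_BEFORE(e, copy);
                apr_bucket_delete(e);
                e = copy;
            }
        }
        return APR_SUCCESS;
    }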
> > What about the idea to have setaside() take a pool parameter? The
> > bucket should ensure that its contents live at least as long as the
> > pool.
> >
> > For an MMAP bucket, if the given pool is the same or a subpool of
> > the mmap's pool, then nothing needs to happen. If the pool is a
> > parent of the mmap's pool, then the bucket needs to read its
> > contents into a new POOL bucket attached to the passed-in pool.
> >
> > Other buckets operate similarly. This would ensure that we can
> > safely set aside any type of bucket, for any particular lifetime
> > (whether that is for a connection or a request or whatever).
>
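
Sketched out, the pool-aware setaside Greg describes might look like
the following.  This is only an illustration against the current APR
pool/bucket API (apr_pool_is_ancestor(), apr_bucket_alloc_t, and so
on); the helper name is invented, and a real implementation would dig
data_pool out of the bucket type's private structure (e.g. the pool an
apr_mmap_t was created from):

    #include "apr_buckets.h"
    #include "apr_pools.h"
    #include "apr_strings.h"

    static apr_status_t setaside_into_pool(apr_bucket *e, apr_pool_t *pool,
                                           apr_pool_t *data_pool)
    {
        const char *data;
        apr_size_t len;
        apr_bucket *copy;
        apr_status_t rv;

        /* If `pool` is data_pool itself or one of its subpools, the
         * bucket's data already outlives `pool`: nothing to do. */
        if (data_pool == pool || apr_pool_is_ancestor(data_pool, pool)) {
            return APR_SUCCESS;
        }

        /* Otherwise copy the contents into a POOL bucket tied to the
         * longer-lived pool and swap it in place of the original. */
        rv = apr_bucket_read(e, &data, &len, APR_BLOCK_READ);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        copy = apr_bucket_pool_create(apr_pmemdup(pool, data, len), len,
                                      pool, e->list);
        APR_BUCKET_INSERT_BEFORE(e, copy);
        apr_bucket_delete(e);
        return APR_SUCCESS;
    }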
> Yes, that would work as well. I am beginning to think that this is
> overkill for our use cases, and it wouldn't really solve this problem,
> since the sub_request_output_filter still wouldn't be calling setaside.
> Also, when a regular filter calls setaside, which pool does it use? I
> would guess c->pool, but that could get confusing.
>
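
For the "which pool?" question, the choice in a filter is basically
f->r->pool versus f->c->pool.  A filter-level call might look like this
(illustration only; apr_bucket_setaside() taking a pool is how the
current API ended up, and save_aside is a made-up name):

    #include "httpd.h"
    #include "util_filter.h"
    #include "apr_buckets.h"

    static apr_status_t save_aside(ap_filter_t *f, apr_bucket_brigade *bb)
    {
        apr_bucket *e;
        apr_status_t rv;

        for (e = APR_BRIGADE_FIRST(bb);
             e != APR_BRIGADE_SENTINEL(bb);
             e = APR_BUCKET_NEXT(e)) {
            /* f->r->pool dies with the (sub)request; f->c->pool lives
             * for the whole connection, which is the leak Dean
             * objected to. */
            rv = apr_bucket_setaside(e, f->c->pool);
            if (rv != APR_SUCCESS && rv != APR_ENOTIMPL) {
                return rv;
            }
        }
        return APR_SUCCESS;
    }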
> My only other concern is actually walking all the way back up to ensure
> that the current pool is a descendant of the pool passed to setaside.
> Those tests should be quick, but we will be calling setaside a lot
> through the course of some requests. I am positive that we only want to
> do this "copy anything under 9k to a non-volatile location" in two
> places, whereas setaside is potentially called from every filter. If
> the setaside function is ever called incorrectly, we will end up doing
> the copies far more often than we need/want to.
>
> Those are just my concerns though, not a reason not to do the work. I
> just figure that by getting this stuff out in the open early, we can
> avoid some annoying headaches.
>
> Ryan
>
> ________________________________________________________________________
> Ryan Bloom                                           [EMAIL PROTECTED]
> 406 29th St.
> San Francisco, CA 94131
> ------------------------------------------------------------------------