I've seen this problem with a SOCKS protocol module I wrote.
I'm including a patch that fixes this problem. It does what I described below. In the input filter, it moves the buckets rather than creating a new brigade and then concatenating. In the output filter, it splits the brigade after a FLUSH bucket only if there are buckets after the flush.
Juan
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, June 10, 2003 3:41 PM
To: [EMAIL PROTECTED]
Subject: Re: EOS or FLUSH buckets
Juan Rivera wrote:
> Right, my module leaks memory because the core input and output filters
> split the bucket brigades. So it keeps creating more and more bucket
> brigades that are not released until the connection is gone.
When you see this, are we talking about a lot of HTTP requests pipelined on a
single connection, or a single HTTP request that lasts a long time?
> First of all, I think the split in the core input filter (READBYTES)
> should be optimized, because all it is doing is splitting the brigade to
> concatenate it into another brigade. Wouldn't it be more efficient to
> "move buckets from brigade ctx->b to b" and avoid creating a temporary
> brigade?
>
> So for the output side, when I send a flush, it splits the brigade. If
> the flush is the last bucket, this might not be necessary; what do you
> think?
I'll defer these two questions to our Bucketmeister and/or efficiency experts.
(Cliff? Brian?)
> On the topic of EOS, I think that if the last bucket is an EOS and the
> connection is not keep-alive, it should not hold the data, but it
> currently does.
Maybe. But if it's not a keepalive connection, we should be sending a FLUSH
bucket within microseconds, no? OK, maybe that path could be optimized. But
we'd have to be careful because keepalive connections are very common. We
wouldn't want to penalize the hot path by optimizing for the less common case.
Greg
core.c.patch
