Right, my module effectively leaks memory because the core input and output filters split the bucket brigades. So it keeps creating more and more bucket brigades that are not released until the connection is closed.
When you see this, are we talking about a lot of HTTP requests pipelined on a single connection, or a single HTTP request that lasts a long time?
First of all, I think the split in the core input filter (READBYTES) should be optimized, because all it is doing is splitting the brigade in order to concatenate it into another brigade. Wouldn't it be more efficient to move the buckets from brigade ctx->b to b directly and avoid creating a temporary brigade?
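To make the "move" concrete, here is a sketch on a deliberately simplified model of a brigade as a singly linked list with head/tail pointers. The real filter works with APR's apr_bucket_brigade and its ring macros; the types and names below are illustrative only, not the actual httpd code:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for an APR bucket brigade: a singly linked
 * list of buckets with head and tail pointers.  The real APR types
 * (apr_bucket_brigade, APR_BRIGADE_CONCAT, apr_brigade_split) are
 * deliberately not used so this sketch stays self-contained. */
typedef struct bucket {
    int data;                  /* payload stand-in */
    struct bucket *next;
} bucket;

typedef struct brigade {
    bucket *head;
    bucket *tail;
} brigade;

/* "Move": transfer every bucket from src to the tail of dst in O(1),
 * with no temporary brigade allocated.  This is the operation being
 * proposed in place of the current split-then-concatenate dance. */
static void brigade_move(brigade *dst, brigade *src)
{
    if (src->head == NULL)
        return;                     /* nothing to move */
    if (dst->tail)
        dst->tail->next = src->head; /* append to existing tail */
    else
        dst->head = src->head;       /* dst was empty */
    dst->tail = src->tail;
    src->head = src->tail = NULL;    /* src is now empty, not leaked */
}
```

The point of the sketch is that the transfer is a couple of pointer updates; no temporary brigade object has to be created and left to accumulate on the connection pool.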
On the output side, when I send a flush, it splits the brigade. If the flush is the last bucket, the split might not be necessary; what do you think?
I'll defer these two questions to our Bucketmeister and/or efficiency experts. (Cliff? Brian?)
On the topic of EOS, I think that if the last bucket is an EOS and the connection is not a keepalive connection, the filter should not hold back the data, but it currently does.
Maybe. But if it's not a keepalive connection, we should be sending a FLUSH bucket within microseconds, no? OK, maybe that path could be optimized. But we'd have to be careful because keepalive connections are very common. We wouldn't want to penalize the hot path by optimizing for the less common case.
Greg
