Stas Bekman wrote:
Joe Orton wrote:

On Thu, Jul 01, 2004 at 02:06:33PM -0700, Stas Bekman wrote:
...


$7 = (struct apr_bucket *) 0x1011007
(gdb) print *((*b)->list->prev)
Cannot access memory at address 0x1011007

I don't understand why it doesn't happen on my setup, which seems to be pretty close to Philippe's. I suppose the freed memory just happens to still be valid on my machine, due to compilation differences.

The problem appears to be in Apache, where some downstream filter decides to free the brigade, rendering $bb->cleanup useless, since you can't rely on $bb to be valid at all. That just sucks.


Right; the thing is that currently, there really is no way to actually
"free the brigade"; apr_brigade_destroy() just does an
apr_brigade_cleanup() and unregisters the pool cleanup.  The brigade
structure remains valid until the pool it's allocated from gets
destroyed.
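
For reference, apr_brigade_destroy() is roughly the following (paraphrased from apr-util's apr_brigade.c from memory, so details may differ between versions):

    /* roughly what apr-util does: unregister the pool cleanup that
     * apr_brigade_create() installed, then empty the bucket ring */
    APU_DECLARE(apr_status_t) apr_brigade_destroy(apr_bucket_brigade *b)
    {
        apr_pool_cleanup_kill(b->p, b, brigade_cleanup);
        return apr_brigade_cleanup(b);
    }

    APU_DECLARE(apr_status_t) apr_brigade_cleanup(void *data)
    {
        apr_bucket_brigade *b = data;

        while (!APR_BRIGADE_EMPTY(b)) {
            apr_bucket_delete(APR_BRIGADE_FIRST(b));
        }
        /* note: the brigade struct itself is pool memory and is
         * not freed here */
        return APR_SUCCESS;
    }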

To help track down the bug:

1) build httpd/apr* with -DAPR_BUCKET_DEBUG to enable the brigade
consistency checks; this may show the problem very quickly
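
With APR_BUCKET_DEBUG defined, the brigade/bucket macros validate the bucket ring and abort() as soon as it is corrupted, instead of segfaulting somewhere downstream. If I remember apr_buckets.h right, you can also call the check directly from a suspect filter (my_debug_filter is just an illustrative name; the macro is a no-op unless apr-util itself was compiled with the flag):

    #include "httpd.h"
    #include "util_filter.h"
    #include "apr_buckets.h"

    /* hypothetical pass-through output filter that just checks the
     * brigade before handing it on */
    static apr_status_t my_debug_filter(ap_filter_t *f, apr_bucket_brigade *bb)
    {
        /* abort()s here if the ring pointers are already broken */
        APR_BRIGADE_CHECK_CONSISTENCY(bb);
        return ap_pass_brigade(f->next, bb);
    }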


Philippe, can you please do that? I don't get this problem, so it's probably best if you do it.

Just updated to the latest httpd 2.0.50-dev and recompiled with all the APR_DEBUG_* flags I could find; still the same exact core dump ;(


2) get a minimal repro case.  This is supposed to be failing in the
modperl test suite on a clean build of the httpd/modperl-2.0 HEADs?


Supposedly so.

It's already very minimal -- a trivial loop echoing the data from the client.

But as mentioned, it doesn't fail for me. The data structure is probably freed but not overwritten on my OS, so I can't see the problem.

This raises a bigger question: how do we make sure that Perl variables contain valid pointers? We had the same problem with custom APR::Pool pools. If you create a sub-pool and the parent pool is later destroyed, it destroys all of its sub-pools, and the sub-pool gets invalidated without any way to notify the object on the Perl side. So I rewrote that code to perform some trickery: when the rug is pulled out from under the object, it automatically invalidates itself, avoiding segfaults if someone then tries to use it.

Maybe we need to do something similar for $bb, i.e. if something triggers the execution of its cleanup callbacks, the Perl object pointing to it should be invalidated.

The same goes for things like $r: quite often users write code with an unwanted closure that captures $r, and it then gets used on a following request, usually causing a segfault. If we registered a cleanup handler for every such object that invalidates it when Apache destroys the underlying structure, we could end up with far fewer segfaults. Though I'm not sure how much overhead that would add. Something to ponder.
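
Here's a rough sketch of that invalidation trick in plain APR terms (the wrapper struct and the names are made up for illustration; in mod_perl the "wrapper" is really the guts of the Perl object, and the same pattern would apply to $bb and $r):

    #include <stdio.h>
    #include "apr_general.h"
    #include "apr_pools.h"

    /* hypothetical wrapper that a Perl-level object would point at;
     * it must live outside the pool it tracks, or it dies with it */
    typedef struct {
        apr_pool_t *pool;        /* NULL once the pool is gone */
    } my_pool_wrapper_t;

    /* pool cleanup: runs when the sub-pool (or any ancestor) is destroyed */
    static apr_status_t invalidate_wrapper(void *data)
    {
        my_pool_wrapper_t *w = data;
        w->pool = NULL;          /* later accesses can croak, not segfault */
        return APR_SUCCESS;
    }

    int main(void)
    {
        apr_pool_t *parent, *child;
        my_pool_wrapper_t w;

        apr_initialize();
        apr_pool_create(&parent, NULL);
        apr_pool_create(&child, parent);

        w.pool = child;
        apr_pool_cleanup_register(child, &w, invalidate_wrapper,
                                  apr_pool_cleanup_null);

        apr_pool_destroy(parent);    /* takes child with it, runs the cleanup */

        if (w.pool == NULL)
            printf("wrapper noticed its pool is gone\n");

        apr_terminate();
        return 0;
    }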


--
--------------------------------------------------------------------------------
Philippe M. Chiasson     m/gozer\@(apache|cpan|ectoplasm)\.org/
GPG KeyID : 88C3A5A5     http://gozer.ectoplasm.org/
F9BF E0C2 480E 7680 1AE5 3631 CB32 A107 88C3A5A5
