> -----Original Message-----
> From: Yann Ylavic [mailto:[email protected]]
> Sent: Tuesday, October 6, 2015 17:54
> To: [email protected]
> Subject: Re: svn commit: r1706669 - in /httpd/httpd/trunk: ./ include/
> modules/http/ modules/ssl/ server/ server/mpm/event/ server/mpm/motorz/
> server/mpm/simple/
>
> On Tue, Oct 6, 2015 at 5:34 PM, Graham Leggett <[email protected]> wrote:
> >
> > apr_bucket_simple_copy() looks wrong - in theory we should have a
> proper copy function that does the right thing with the second copy, for
> example by not copying the pool. If we blindly copy the pool (or the
> request containing the pool) I see nothing that would prevent an attempt
> to free the pool twice.
>
> Agreed, we probably need something like this:
>
> Index: server/eor_bucket.c
> ===================================================================
> --- server/eor_bucket.c (revision 1707064)
> +++ server/eor_bucket.c (working copy)
> @@ -91,6 +91,17 @@ static void eor_bucket_destroy(void *data)
> }
> }
>
> +static apr_status_t eor_bucket_copy(apr_bucket *a, apr_bucket **b)
> +{
> + *b = apr_bucket_alloc(sizeof(**b), a->list); /* XXX: check for failure? */
> + **b = *a;
> +
We could use apr_bucket_simple_copy(a, b) instead of the above.
> + /* we don't want the request to be destroyed twice */
> + (*b)->data = NULL;
Hm. Shouldn't the destruction of the *last* EOR bucket of a particular request
be the one that calls eor_bucket_cleanup? That would require some kind of
reference counting, like the one refcount buckets provide.
> +
> + return APR_SUCCESS;
> +}
> +
> AP_DECLARE_DATA const apr_bucket_type_t ap_bucket_type_eor = {
> "EOR", 5, APR_BUCKET_METADATA,
> eor_bucket_destroy,
> @@ -97,6 +108,6 @@ AP_DECLARE_DATA const apr_bucket_type_t ap_bucket_
Regards
Rüdiger