On 09 Jun 2011, at 9:16 AM, Ruediger Pluem wrote:

+        /* make sure we don't read more than 6000 bytes at a time */
+        apr_brigade_partition(bb, (APR_BUCKET_BUFF_SIZE / 4 * 3), &e);

Shouldn't we move this below the check for the metadata bucket?
Why partition again, when we only move a metadata bucket between the brigades?

It was an attempt to reuse e, but it's better if it's done later as you say.
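
For illustration, a minimal sketch of what the loop might look like with the partition moved below the metadata check (ctx->bb and the bucket macros are from the patch above; the encode_ctx wrapper, the filter name and the elided encode step are placeholders, not the committed code):

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

/* hypothetical per-filter state; the real code keeps more than this */
typedef struct {
    apr_bucket_brigade *bb;    /* output brigade, as in the patch */
} encode_ctx;

static apr_status_t encode_out_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    encode_ctx *ctx = f->ctx;
    apr_bucket *e, *after;

    for (e = APR_BRIGADE_FIRST(bb);
         e != APR_BRIGADE_SENTINEL(bb);
         e = APR_BRIGADE_FIRST(bb)) {

        /* metadata buckets are preserved as is */
        if (APR_BUCKET_IS_METADATA(e)) {
            APR_BUCKET_REMOVE(e);
            APR_BRIGADE_INSERT_TAIL(ctx->bb, e);
            continue;
        }

        /* only now, for data buckets, force a bucket boundary at 3/4 of
         * APR_BUCKET_BUFF_SIZE (6000 bytes with the default 8000 byte
         * buffer) so a single read never exceeds that limit
         */
        apr_brigade_partition(bb, (APR_BUCKET_BUFF_SIZE / 4 * 3), &after);

        /* ... read e, encode it into ctx->bb (elided) ... */
        apr_bucket_delete(e);
    }

    return ap_pass_brigade(f->next, ctx->bb);
}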

+        /* metadata buckets are preserved as is */
+        if (APR_BUCKET_IS_METADATA(e)) {
+            /*
+             * Remove metadata bucket from old brigade and insert into the
+             * new.
+             */
+            APR_BUCKET_REMOVE(e);
+            APR_BRIGADE_INSERT_TAIL(ctx->bb, e);
+            continue;

Why do we ignore FLUSH buckets here? Shouldn't we handle them similar to EOS except for the removal of
the filter?

I had understood flush buckets to be metadata buckets, but I checked and that doesn't seem to be the case.

We can pass flush buckets through, but with the understanding that we can't flush the "overflow" value, as that would cause the base64 to be terminated and then continued.
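
If we do pass them, a rough sketch of what that could look like inside the bucket loop (handled like EOS except that the partial "overflow" bytes stay buffered and the filter stays in place; ctx->bb is from the patch, the rest is only an illustration, and the check would have to sit before the generic metadata test if flush buckets do turn out to be metadata):

if (APR_BUCKET_IS_FLUSH(e)) {
    apr_status_t rv;

    /* send everything encoded so far plus the flush downstream, but
     * leave any partial "overflow" bytes unencoded so the base64
     * stream is not terminated and then continued
     */
    APR_BUCKET_REMOVE(e);
    APR_BRIGADE_INSERT_TAIL(ctx->bb, e);
    rv = ap_pass_brigade(f->next, ctx->bb);
    apr_brigade_cleanup(ctx->bb);
    if (rv != APR_SUCCESS) {
        return rv;
    }
    continue;
}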

Metadata buckets from the original brigade that arrive after an EOS bucket are moved to ctx->bb but then silently dropped. Shouldn't we have a

if ((APR_SUCCESS == rv) && (!APR_BRIGADE_EMPTY(ctx->bb))) {
   rv = ap_pass_brigade(f->next, ctx->bb);
}

here for safety reasons?

We do, passing bb rather than ctx->bb, to ensure we've consumed everything. When we leave for the last time, we step out of the way, so any buckets that follow will bypass us completely.
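
As a rough sketch of that tail end (rv, bb and ctx->bb as in the snippets above; ctx->seen_eos is a made-up flag standing in for however the real code records that EOS has been handled):

/* once EOS has been handled and ctx->bb passed on, step out of the
 * way so any later buckets bypass this filter completely; a final
 * pass of the original brigade makes sure nothing left in bb is
 * silently dropped
 */
if (ctx->seen_eos) {
    ap_remove_output_filter(f);
}
if ((APR_SUCCESS == rv) && !APR_BRIGADE_EMPTY(bb)) {
    rv = ap_pass_brigade(f->next, bb);
}
return rv;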

All done in r1134130.

Regards,
Graham