On Mon, 11 Oct 1999, Dmitry Beransky wrote:
> My apologies for continuing this topic, but I've been thinking some more
> about this issue over the weekend. I'm still perplexed by this seemingly
> arbitrary limitation on the number of times a request body can be read. It
> seems that, at least theoretically, it should be possible to cache the
> result of $r->content() or even $r->read() the first time it's called and
> return the cached data on subsequent calls. It should also be possible to
> have $r->read return the cached data even when called from an internal
> redirect (by delegating calls to $r->prev->read, etc). As the size of a
> request body can be arbitrarily large (e.g. file uploads), perhaps it would
> be better not to have the caching behavior turned on by default, but rather
> enable it on a per-request basis.
>
> Again, this is all hypothetical. Can anyone comment on feasibility (and
> usefulness) of such a feature?
I am perplexed as to why you are perplexed: at the low level, data can
only be read from a socket once.
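To illustrate (a minimal sketch of hypothetical handler code, assuming
mod_perl 1.x and a request that carries a body):

    my $len = $r->headers_in->get('Content-length') || 0;
    $r->read(my $body,  $len);   # first read drains the socket
    $r->read(my $again, $len);   # nothing left; $again stays empty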
We have thought about caching that data; there is an experimental option
for Makefile.PL called PERL_STASH_POST_DATA. If you turn it on, you can
get at the data again with $r->subprocess_env("POST_DATA"). This is not
on by default because of the overhead it adds. And, because not all POST
data is read in one clump, what do we do with large multipart file
uploads? It's not a problem that's easy to solve in a general way.
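For the record, using it looks something like this (a sketch, assuming
you rebuilt mod_perl with the option enabled, e.g. something along the
lines of perl Makefile.PL PERL_STASH_POST_DATA=1):

    # in any handler that runs after the body has been read,
    # the stashed copy is available again:
    my $body = $r->subprocess_env("POST_DATA");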
I needed to do this once myself, but only for a given Location, like so:
<Limit POST>
PerlFixupHandler My::fixup_handler
</Limit>

use Apache::Constants qw(DECLINED OK M_GET);

sub My::fixup_handler {
    my $r = shift;
    return DECLINED unless $r->method eq "POST";

    # read the body once, stash it in the query string,
    # then turn the POST into a GET
    $r->args(scalar $r->content);
    $r->method("GET");
    $r->method_number(M_GET);
    $r->headers_in->unset('Content-length');

    return OK;
}
Now when CGI.pm, Apache::Request, or whatever else parses the client
data, it can do so more than once, since $r->args doesn't go away
(unless you make it go away).
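For example (a hypothetical content handler, just to show the effect of
the fixup above):

    package My::Content;
    use Apache::Constants qw(OK);

    sub handler {
        my $r = shift;
        # both calls see the same parameters, because the fixup
        # moved the POST body into the query string
        my %first = $r->args;
        my %again = $r->args;
        $r->send_http_header('text/plain');
        $r->print("name = $first{name}\n");
        return OK;
    }
    1;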
-Doug