This is planned for a future release of speedycgi, though there will
probably be an option to set a maximum number of bytes that can be
buffered before the frontend contacts a perl interpreter and starts
passing the bytes over.
Currently you can get this sort of acceleration for script output if you
use the "speedy" binary (not mod_speedycgi) and set the BufsizGet option
high enough that the frontend can buffer all the output from your script.
The perl interpreter can then detach and go handle other requests while
the frontend process waits for the output to drain.
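For example, something along these lines should show the effect -- the
path to the speedy binary and the output size are just placeholders, and
BufsizGet has to be raised, through whichever option mechanism you
normally use (speedy command line, environment, etc.), to at least the
total size of the script's output:

#!/usr/bin/speedy -w
# Sketch only: assumes BufsizGet has been raised above ~210KB so the
# frontend can buffer the entire response.
use strict;

print "Content-type: text/plain\r\n\r\n";

# Roughly 200KB of output.  Once the frontend has buffered it all, the
# perl backend detaches and is free to serve other requests while the
# client is still downloading.
for my $i (1 .. 2560) {
    print "x" x 80, "\n";
}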
> Perrin Harkins wrote:
> > What I was saying is that it doesn't make sense for one to need fewer
> > interpreters than the other to handle the same concurrency. If you have
> > 10 requests at the same time, you need 10 interpreters. There's no way
> > speedycgi can do it with fewer, unless it actually makes some of them
> > wait. That could be happening, due to the fork-on-demand model, although
> > your warmup round (priming the pump) should take care of that.
> >
> I don't know if Speedy fixes this, but one problem with mod_perl v1 is that
> if, for instance, a large POST request is being uploaded, it ties up a whole
> perl interpreter for as long as the upload is in progress. This is at least
> one place where a Perl interpreter should not be needed.
>
> Of course, this could be overcome if an HTTP Accelerator is used that takes
> the whole request before passing it to a local httpd, but I don't know of
> any proxies that work this way (AFAIK they all pass the packets as they
> arrive).
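
Just to illustrate the idea (this is only a sketch, not an existing
product -- the port numbers and the choice of HTTP::Daemon/LWP are my
own assumptions): a store-and-forward accelerator would read the whole
request from the client before ever touching the backend, roughly:

#!/usr/bin/perl -w
# Toy store-and-forward accelerator.  Listens on port 8000 and forwards
# to an httpd assumed to be on localhost:8080; both ports are arbitrary.
use strict;
use HTTP::Daemon;
use LWP::UserAgent;

my $front = HTTP::Daemon->new(LocalPort => 8000) or die "listen: $!";
my $ua    = LWP::UserAgent->new;

while (my $client = $front->accept) {
    # get_request() doesn't return until the whole request, including
    # any POST body, has been read from the (possibly slow) client, so
    # no backend resources are tied up during the upload.
    while (my $req = $client->get_request) {
        $req->uri("http://localhost:8080" . $req->uri->path_query);
        my $res = $ua->request($req);    # now hand it to the real httpd
        $client->send_response($res);
    }
    $client->close;
}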