Hi Perrin,

I'm trying to achieve the following: when there is an incoming request, I
want to set a time limit within which an answer must be delivered to the
client, no matter what.

However, since the work triggered by the initial request (there is another
request to another site involved) might take much longer than that time
limit, I want that work to finish properly, despite the fact that the
initial request was 'served' already.


TMTOWTDI, but the common way to do this is to add the long-running job to a
job queue, and then redirect the user to a page that polls with JavaScript
requests to see whether the job is done.
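The polling side can be quite small: a handler that just reports whether the job's result exists yet. Here is a minimal sketch, assuming the queued job drops a marker file named after a job ID; the package name, file path, and query parameter are all invented for illustration:

```perl
# Hypothetical status endpoint that the page polls via JavaScript.
# Assumes the queued job creates /tmp/jobs/<id>.done when it finishes.
package My::JobStatus;   # package name invented for this sketch

use strict;
use warnings;
use Apache2::RequestRec ();
use Apache2::RequestIO  ();
use Apache2::Const -compile => qw(OK);

sub handler {
    my $r = shift;

    # Crude query-string parsing, good enough for a sketch.
    my ($id) = ($r->args || '') =~ /id=(\w+)/;

    $r->content_type('application/json');
    if ($id && -e "/tmp/jobs/$id.done") {
        $r->print('{"done":true}');
    }
    else {
        $r->print('{"done":false}');
    }
    return Apache2::Const::OK;
}

1;
```

The JavaScript on the waiting page would simply re-request this URL every few seconds and redirect once it sees done:true.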

It's not such a typical long-running job that I'm doing. It rather goes like this: most of the time I can answer with what I already have, within the acceptable answer time, but sometimes I have to make another request in the background. That too is usually served within acceptable time; _sometimes_ it isn't, so only occasionally does it take longer.

The catch: let's say the backend service is pay-per-use, so I definitely don't want to throw away a request I've already started. If I have launched a request in the background, I want to get its results, even if the initial requester was turned down in the meantime.

If you don't have a job queue and don't want to add one just for this, you
could use a cleanup handler to run the slow stuff after disconnecting:
http://perl.apache.org/docs/2.0/user/handlers/http.html#PerlCleanupHandler
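Registering the slow work that way is just a matter of pushing a handler before returning the response. A minimal sketch, with an invented package My::Handler and a do_slow_work routine standing in for the backend call:

```perl
# Hypothetical sketch: answer the client quickly, then run the slow
# work after the response has been sent and the client disconnected.
package My::Handler;   # package and routine names invented for this sketch

use strict;
use warnings;
use Apache2::RequestRec  ();
use Apache2::RequestIO   ();
use Apache2::RequestUtil ();
use Apache2::Const -compile => qw(OK);

sub handler {
    my $r = shift;

    $r->content_type('text/plain');
    $r->print("Here is the quick answer\n");

    # Runs after the response is delivered; the client is long gone.
    $r->push_handlers(PerlCleanupHandler => \&do_slow_work);

    return Apache2::Const::OK;
}

sub do_slow_work {
    my $r = shift;
    # e.g. make the slow backend request and store the result for later
    return Apache2::Const::OK;
}

1;
```

Note that the cleanup handler runs in the same process, after the connection is closed, which is exactly why it ties up that mod_perl child for the duration.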

I'm afraid that won't fit, actually. It's not a typical cleanup I'm after: I want to avoid abandoning the backend request I've already started just because the original incoming request has been closed. A cleanup handler could relaunch the slow backend request, but then I'd pay for it twice.

That will tie up a mod_perl process though, so it's not a good way to go
for large sites.

I'm aware of that, but that's less of a concern for now.

Many thanks,

Iosif Fettich
