On Tue, 2002-11-19 at 16:28, Stephen Adkins wrote:
> At 08:18 PM 11/18/2002 -0700, Rob Nagler wrote:
> ....
> >We digress.  The problem is to build a UI to Sabre.  I still haven't
> >seen any numbers which demonstrate the simple solution doesn't work.
> >Connecting to Sabre is no different than connecting to an e-commerce
> >gateway.  Both can be done by connecting directly from the Apache
> >child to the remote service and returning a result.
> 
> Hi,
> 
> My question with this approach is not whether it works for synchronous
> execution (the user is willing to wait for the results to come back)
> but whether it makes sense for asynchronous execution (the user will
> come back and get the results later).
> 
> In fact, we provide our users with the option:
> 
>    1. fetch the data now and display it, OR
>    2. put the request in a queue to be fetched and then later displayed
> 
> We have a fixed number of mainframe login IDs, so we can only run a
> limited number (say 4) of them at a time.
> 
> So what I think you are saying for option 2 is:
> 
>    * Apache children (web server processes with mod_perl) have two
>      personalities:
>        - user request processors
>        - back-end work processors
>    * When a user submits work to the queue, the child is acting in a
>      "user request" role and it returns the response quickly.
>    * After detaching from the user, however, it checks to see if fewer
>      than four children are processing the queue and if so, it logs into
>      the mainframe and starts processing the queue.
>    * When it finishes the request, it continues to work the queue until
>      no more work is available, at which time, it quits its "back-end
>      processor" personality and returns to wait for another HTTP request.
> 
> This just seems a bit odd (and unnecessarily complex).
> Why not let there be web server processes and queue worker processes
> and they each do their own job?  Web servers seem to me to be for
> synchronous activity, where the user is waiting for the results.
> 

I am doing something similar right now in a project.  It has to make
approximately 220 requests to outside sources in order to compile a
complete report.  How long a report takes to create varies with the
data sources and network traffic.  This is the solution I currently
have in place:

1) The user visits a web page (handled by mod_perl) and requests a
report.

2) The request parameters are stored in a temp file and the user is
redirected to a wait page.  The time spent on the wait page varies; an
approximate wait time is calculated based on query complexity.  The
user's session is given a key that matches the temp file name.

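The temp file write in step 2 is tiny.  Roughly (the spool directory
and the enqueue_request() name are invented for the example, and I'm
showing Storable where Data::Dumper or XML::Simple would do just as
well):

  use File::Temp qw(tempfile);
  use Storable qw(nstore);

  # Serialize the request parameters into a spool file and return the
  # file name; the caller stashes that name in the user's session.
  sub enqueue_request {
      my (%params) = @_;
      my ($fh, $file) = tempfile('reportXXXXXX',
                                 DIR => '/var/spool/reports');
      close $fh;
      nstore(\%params, $file);
      return $file;
  }
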
3) A separate dedicated server (Proc::Daemon based) picks up the temp
file and spawns a child to process it.  This daemon looks for new temp
files every X seconds, where X is currently 15 seconds but could easily
be adjusted.  It keeps a queue of the temp files it has already seen
and drops them from the queue after 45 minutes, even if they haven't
run.

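The daemon loop itself is not much code.  Stripped down (Proc::Daemon
is real, but the spool directory and process_report() are stand-ins
for my actual logic, and the 45 minute bookkeeping is left out):

  use Proc::Daemon;
  use POSIX ':sys_wait_h';

  Proc::Daemon::Init();          # detach and run in the background

  my %seen;                      # temp files already handed to a child
  while (1) {
      for my $file (glob '/var/spool/reports/report*') {
          next if $file =~ /\.xml$/;   # skip cached results (step 5)
          next if $seen{$file}++;      # only spawn one child per file
          defined(my $pid = fork) or die "fork failed: $!";
          if ($pid == 0) {             # child: do the work, then exit
              process_report($file);
              exit 0;
          }
      }
      1 while waitpid(-1, WNOHANG) > 0;   # reap finished children
      sleep 15;                           # the "every X seconds" part
  }
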
4) The child recreates the user's object and runs the report; when it
completes, it deletes the temp file.  If it fails to complete, the temp
file remains.

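The child's half of that is the inverse of the enqueue step.  Something
like this, where run_report() stands in for the real report code and
naming the cache file "$file.xml" is just one possible convention:

  use Storable qw(retrieve);
  use XML::Simple qw(XMLout);

  sub process_report {
      my ($file) = @_;
      my $params = retrieve($file);        # rebuild the user's request
      my $report = run_report($params);    # stand-in for the real work

      # cache the result as an XML::Simple dump of the report hash
      open my $out, '>', "$file.xml"
          or die "can't write $file.xml: $!";
      print {$out} XMLout($report);
      close $out;

      # Removing the temp file signals success; if anything above dies,
      # the temp file stays put and the wait page keeps waiting.
      unlink $file;
  }
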
5) When the auto-refresh takes place, the system determines whether the
user's request has completed by looking for the temp file named in
their session data.  If the file exists, they are given another wait
page with a 30 to 120 second wait time.  If it doesn't exist, then the
cached information from the report (just an XML file created from an
XML::Simple dump of the hash containing the report data) is processed
and presented as HTML to the user.

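On the mod_perl side the refresh check boils down to a file test.
Roughly ($session, $r, estimate_wait() and the two show_* calls are
placeholders for whatever session and display code you already have):

  use XML::Simple qw(XMLin);

  my $key = $session->{report_file};   # temp file name saved in step 2
  if (-e $key) {
      # still queued or still running: serve another wait page
      show_wait_page($r, estimate_wait($key));
  }
  else {
      # done: read the cached XML::Simple dump and render it as HTML
      my $report = XMLin("$key.xml");
      show_report_page($r, $report);
  }
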
I had attempted a mod_perl-only solution, but I didn't like tying up
the server with additional processing that could be handled externally.
This method also allows the server script to reside on a separate
machine (given some shared filesystem: Samba, NFS, etc.) without having
to recreate an entire mod_perl environment.

This model has eased my testing as well: since the script runs
completely external to the web server, I can run it through a debugger
if needed.  I also use the same script for nightly automated runs of
common reports, to limit the number of real-time requests, since the
data doesn't change that frequently in my case.

> Stephen
> 
> P.S. Another limitation of the "use Apache servers for all server processing"
> philosophy seems to be scheduled events or system events (those not
> initiated by an HTTP request, which are user events).
> 

I agree with Perrin: you can use LWP to emulate a user's HTTP request
if you want to use an HTTP-style request.

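i.e. the scheduled job can just pretend to be the browser.  For example
(URL and form fields invented):

  use LWP::UserAgent;

  my $ua = LWP::UserAgent->new;
  my $response = $ua->post(
      'http://myserver/report/submit',
      { report => 'weekly_sales', user => 'scheduler' },
  );
  warn 'scheduled submit failed: ' . $response->status_line
      unless $response->is_success;
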
cron/at is the best way to handle this (IMHO).  In my case, the cron
job generates the temp files, which then get picked up by the looping
server (a simple non-mod_perl daemon) and processed.  So I don't use
LWP, but I could just as easily send the request to the web server and
have it create the temp files; I just happen to have the logic
abstracted so that mod_perl doesn't need to be involved.
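
The crontab entry for a scheduled report is then nothing special either
(path and arguments made up):

  # enqueue the request every Tuesday at 3:00am
  0 3 * * 2  /usr/local/bin/enqueue_report.pl --report weekly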

Aaron Johnson

> example: Our system allows users to set up a schedule of requests to be run.
> i.e. "Every Tuesday at 3:00am, put this request into the queue".
> This is a scheduled event rather than a user event.
> How is a web server process going to wake up and begin processing this?
> (unless of course everyone who puts something into the queue must send
> a dummy HTTP request to wake up the web servers)
> 


