Tony,
I will concede you wrote an imaginative program, but the questioner said that when his programs were interrupted by a timeout, they corrupted his database. Even if he implemented your solution, he should still take steps to protect the integrity of his database against an unexpected disconnection of any kind.
Your step 5 is essentially an intelligent thumb sucker. A basic thumb sucker is simply a progress monitor, perhaps as simple as a series of dots sent back to the user to hold his attention. That might be enough to keep the user from clicking somewhere else, but it does not deal with unexpected network failure. I like the idea of sending back incremental data, and here is a perfect opportunity to use threads. But if a database is being updated in the background, then either the update process must be asynchronous or a robust 2-phase commit must be modeled.

Depending on the nature of the data being updated, it may make sense to treat a timeout or network interruption as sufficient cause for the transaction to fail, triggering a rollback of the database. My gut feeling, however, is that a simple timeout is not sufficient reason to abort the update. The user pressed the submit button and therefore wants the update to commit. Once that button, or a second confirmation submit button, is pressed, it should become irrelevant whether the user waits around to see the result, unless it is, say, a banking transaction, where it is important to inform the user what state the database is left in. If the thread connected to the browser senses a disconnect, it can then go into asynchronous notification mode and send an email. The email could supply a URL the user could click on to get a status report.
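For example, that fallback path could look something like the rough sketch below. This is not production code: the sendmail path, the user's address, the status URL, and do_one_step() are all invented, and whether a plain CGI actually sees the disconnect depends on the web server.

#!/usr/bin/perl
use strict;
use warnings;

$| = 1;                                  # unbuffered, so the dots actually reach the browser
my $client_gone = 0;
$SIG{PIPE} = sub { $client_gone = 1 };   # writing to a closed connection raises SIGPIPE

print "Content-type: text/plain\n\n";

for my $step (1 .. 20) {
    do_one_step($step);                  # one unit of the database update (placeholder)
    print '.' unless $client_gone;       # the "thumb sucker" progress dots
}

if ($client_gone) {
    # The browser went away: switch to asynchronous notification by email.
    open my $mail, '|-', '/usr/sbin/sendmail -t' or die "sendmail: $!";
    print {$mail} "To: user\@example.com\n",
                  "Subject: Your update has finished\n\n",
                  "The update committed. Status report: http://example.com/status?id=1234\n";
    close $mail;
}

sub do_one_step { sleep 1 }              # stand-in for the real work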
What I think is futile is any programming model that makes a particular browser state essential to a database update. I co-authored a wireless shop-floor data collection application that sent QA inspection data to an MVS mainframe. The wireless network was not robust enough for a 2-phase commit to MVS, exactly like what happens with brittle HTTP connections, so we created a store-and-forward server that managed queued data and guaranteed to deliver it to MVS. This made the connection between the handheld device and MVS asynchronous, and it is very much like the browser model being discussed.

As long as the handheld device could update the local store-and-forward server, the data would eventually be delivered to MVS, even after network outages. If the link between the store-and-forward server and MVS was down, the data simply queued up on the local server. Once the network link was reestablished, the store-and-forward server played catch-up, not sleeping again until all the queued data had been moved successfully. The handheld device could always query the server to learn the current state of a transaction: posted locally and ready for upload, queued pending network transport, completed, or rolled back for some other reason.
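In spirit, the forwarding loop on that server amounted to something like the sketch below. It is not the original code; the spool directory is invented and deliver_to_host() stands in for the real transport to MVS.

#!/usr/bin/perl
use strict;
use warnings;

my $spool = '/var/spool/qa_inspections';    # invented spool directory

while (1) {
    opendir my $dh, $spool or die "cannot open $spool: $!";
    my @queued = sort grep { /\.rec$/ } readdir $dh;
    closedir $dh;

    for my $rec (@queued) {
        my $path = "$spool/$rec";
        if (deliver_to_host($path)) {       # push one queued record upstream
            unlink $path;                   # delivered and acknowledged: dequeue it
        } else {
            last;                           # link is down: leave the rest queued, retry later
        }
    }
    sleep 30;                               # wake up again later and play catch-up
}

sub deliver_to_host {
    my ($path) = @_;
    # The real code would open a connection to the mainframe here and return
    # true only after the host acknowledged the record.
    return 0;
}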
Again, your solution is clever, but data integrity should be independent of browser state, especially since there is no way any server can guarantee to control browser state.
Will
At 07:20 PM 6/20/2004 -0700, Anthony Nemmer wrote:
Let me rephrase what I did to solve this problem. I wrote a web stats package that consisted of a sequence of perl programs that analyzed web log files on a user-demand basis. Depending on the size of the Apache log file, this sequence of programs could, and did, time out when run as part of a CGI script with a client waiting on its output. So here is what I did:
1. User specifies statistics that she wants in an HTML form, presses Submit button.
2. Submit goes to a self-refreshing CGI script that kicks off the sequence of perl programs as an asynchronous subprocess (I think I simply used system("stats.pl parms &");)
(stats.pl then executes the sequence of perl programs that actually does the statistical analysis of the log file)
3. The sequence of perl programs creates a temporary status/lock file with a name that identifies this stats run uniquely (user id . process id . epoch or something like that)
4. The sequence of perl programs writes status/completion information to this status file.
5. The self-refreshing CGI script, meanwhile, reads status information from the same file, formats it, and displays output to the user.
(Output refreshes every 5 seconds or so.)
6. The sequence of perl programs creates the statistics HTML pages, and then, the last thing it does is delete the status file.
7. When the self-refreshing CGI script detects that the status file has been deleted, it knows that the sequence of perl programs is done, and redirects to the HTML page(s) showing the web statistics.
A bit involved but it turned out to be pretty robust. The problem of kill -9'ing subprocesses if the user requests a job abort is left as an exercise for the reader (because I forget how I did it.) ;-)
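In rough outline, the self-refreshing script could look something like the sketch below; the file locations, the stats.pl interface, and the URL layout here are only placeholders, not the exact originals.

#!/usr/bin/perl
use strict;
use warnings;
use CGI qw(:standard);

# The run id is generated once and then carried along in the refresh URL,
# roughly "user id . process id . epoch" as in step 3.
my $id     = param('id') || (($ENV{REMOTE_USER} || 'anon') . ".$$." . time);
my $status = "/tmp/stats.$id.status";
my $self   = url() . "?id=$id";

unless (param('id')) {
    # First hit: create the status file and launch the worker in the background.
    open my $fh, '>', $status or die "cannot create $status: $!";
    print {$fh} "queued\n";
    close $fh;
    system("./stats.pl '$id' &");            # asynchronous subprocess, as in step 2
}

if (-e $status) {
    # Worker still running: show its last status line and refresh in 5 seconds.
    open my $fh, '<', $status or die "cannot read $status: $!";
    chomp(my $line = <$fh> // '');
    close $fh;
    print "Content-type: text/html\n\n",
          "<meta http-equiv=\"refresh\" content=\"5;url=$self\">\n",
          "<p>Working... $line</p>\n";
} else {
    # The worker deleted the status file, so it is finished: redirect to the report.
    print "Location: /stats/$id.html\n\n";
}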
Tony
T. William Schmidt wrote:
Achieving robustness with any of this is very problematic, given the nature of browsers and the HTTP protocol. Can you redesign the programs so that, once one is launched, it does not depend on a continuous browser connection? Make the database programs asynchronous. If the user who requested the transaction must be notified of the program's completion status, send the confirmation by email.
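For example, here is a sketch of that approach, where the CGI answers immediately and a detached child process does the database work and mails the result. The helper routines, the sendmail path, and the address are only placeholders.

#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(setsid);

print "Content-type: text/plain\n\nYour update has been queued; you will get email when it finishes.\n";

defined(my $pid = fork) or die "fork failed: $!";
exit 0 if $pid;                   # parent returns to the web server right away

# Child: detach from the web server so a browser timeout cannot kill the job.
setsid();
close STDIN;
close STDOUT;
close STDERR;

my $ok = eval { do_update(); 1 };            # the long-running database work
notify($ok ? 'update committed' : "update failed: $@");

sub do_update { sleep 60 }                   # stand-in for the real program

sub notify {
    my ($msg) = @_;
    open my $mail, '|-', '/usr/sbin/sendmail -t' or return;
    print {$mail} "To: user\@example.com\nSubject: Job status\n\n$msg\n";
    close $mail;
}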
Will
At 05:38 PM 6/20/2004 -0700, Anthony Nemmer wrote:
[EMAIL PROTECTED] wrote:
In a message dated 20/06/2004 17:48:21 GMT Daylight Time, [EMAIL PROTECTED] writes:
I maintain an administration site that allows users to execute perl programs from their web browser. I've run into a couple of programs that do not complete execution before the browser times out. When the browser quits, the perl program quits before it's done, and I end up with a mess because these perl programs are manipulating our database.
Question: how can I force the browser connection to stay active until after a perl program finishes execution?
The only workaround I've found is to run from the command line. This connection never quits before its time.
I appreciate any advice on this.
I have not really been able to find a way around this, but I did have a similar problem. One solution is to keep writing something to the browser (e.g., a series of dots to show progress); this tends to keep it active, but may not be practical.
The best solution is to implement a lock-out whilst the perl program is running, so that it cannot be run more than once at a time. Simple to do - just set a value in an external file. This can also be used to allow you to interrogate what the perl program is currently doing.
Better than that, write status information to the file and meta-refresh the browser with a CGI script that reads from the status file and displays how far the perl program has gotten or what it is doing. You kill a couple of birds with one file this way.
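A minimal sketch of that idea, with the file name and the work phases invented, might be: a single external file both stops a second concurrent run and records what the program is currently doing, so another script can read it.

#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);
use IO::Handle;

my $lockfile = '/tmp/longjob.status';

# Open without truncating, then try for an exclusive, non-blocking lock.
open my $lock, '>>', $lockfile or die "cannot open $lockfile: $!";
unless (flock $lock, LOCK_EX | LOCK_NB) {
    print "Content-type: text/plain\n\nThe job is already running; try again later.\n";
    exit;
}
$lock->autoflush(1);

for my $phase ('reading log', 'crunching numbers', 'writing report') {
    truncate $lock, 0;                 # replace the old status line
    print {$lock} "$phase\n";          # anything may read this to see progress
    sleep 5;                           # stand-in for the real work in that phase
}

unlink $lockfile;                      # done: remove the file, ending the lock-out
close $lock;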
Tony
-- Rich Mellor RWAP Services 35 Chantry Croft, Kinsley, Pontefract, West Yorkshire, WF9 5JH TEL: 01977 610509 Visit our website at URL:http://www.rwapsoftware.co.uk
Regards, Will Schmidt
WilliamSchmidt.com, LLC 11201 NW 77th Street Terrebonne, OR 97760 541 504-0290 [EMAIL PROTECTED] http://www.williamschmidt.com/