I use a technique I learned from a well-known dBase application called SBT. I add a numeric signature field (usually 2 digits) to every master table (defined as any table that is always updated whenever a transaction occurs). When a user opens a record for editing, I get the value of the signature field for that record. If it's the highest value the field can contain, then someone else is in the middle of saving a transaction and has it locked for editing (more on that later). But if it is NOT the highest value the field can hold, then I read the data for that record into variables. (Variables are nice because you can compare the current field values to their original values to see what changed.)
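The read side of this can be sketched in Python with SQLite as a stand-in for the dBase tables; the `customers` table, the `sig` field name, and the two-digit maximum are my assumptions for illustration, not anything from SBT itself:

```python
import sqlite3

SIG_MAX = 99  # highest value a 2-digit signature field can hold = "locked"

# Hypothetical master table with a signature column
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, sig INTEGER)")
conn.execute("INSERT INTO customers VALUES (1, 'Acme', 7)")

def read_for_edit(conn, rec_id):
    """Return (signature, fields) for editing, or None if signature-locked."""
    row = conn.execute(
        "SELECT sig, name FROM customers WHERE id = ?", (rec_id,)).fetchone()
    if row is None or row[0] == SIG_MAX:
        return None  # someone else is mid-save; treat the record as locked
    # Keep the originals in variables so we can detect what changed later
    return row[0], {"name": row[1]}

result = read_for_edit(conn, 1)
```

The returned signature is what you hold onto for the duration of the edit; it is compared again at save time.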

Now when the user tries to save the transaction, I get the value of the signature field again. If it is the same as the starting value, then I know no one has made any changes since I first read the record. I then set the signature to 99 (or whatever the highest value the field can contain). That is my record lock, so to speak. Because I check for this value before reading data, I effectively lock the record for that user: if someone comes along in the middle of a transaction update and tries to get the data, my own code prevents them until the other user's transaction is finished. I then update the record and set the signature to the starting value + 1, freeing it up for future reads and updates.
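A sketch of the save path, again in Python/SQLite with a hypothetical `customers` table and `sig` field. One liberty taken: the re-check and the "set to 99" claim are folded into a single conditional UPDATE so the two steps can't be interleaved by another user, and the +1 wraps with a modulo so the new signature never lands on the lock value itself:

```python
import sqlite3

SIG_MAX = 99  # signature value that means "locked for saving"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, sig INTEGER)")
conn.execute("INSERT INTO customers VALUES (1, 'Acme', 7)")

def try_save(conn, rec_id, start_sig, new_fields):
    """Re-check the signature, claim the record, update it, then release."""
    # 1. Claim: only succeeds if the signature still matches what we read
    #    at edit time (which also means it is not currently SIG_MAX).
    cur = conn.execute(
        "UPDATE customers SET sig = ? WHERE id = ? AND sig = ?",
        (SIG_MAX, rec_id, start_sig))
    if cur.rowcount == 0:
        return False  # someone saved (or is saving) since we read the record
    # 2. Do the real update while the record is signature-locked.
    conn.execute("UPDATE customers SET name = ? WHERE id = ?",
                 (new_fields["name"], rec_id))
    # 3. Release: starting value + 1, wrapped so it stays below SIG_MAX.
    conn.execute("UPDATE customers SET sig = ? WHERE id = ?",
                 ((start_sig + 1) % SIG_MAX, rec_id))
    conn.commit()
    return True
```

A caller that gets `False` back is in the "someone beat you to it" case described below, and can offer the user the cancel-or-refresh choice.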

BUT if the initial signature and the present value are NOT the same, then someone updated the record while I was editing it. At that point, depending on your application, you can present the user with a choice: cancel, or refresh their edit with the newly saved information and proceed as normal.

A few caveats here. First, you may ask, why not just lock the record when the user reads the data in and keep it that way until the user saves or cancels the update? Okay, now the user leaves for the day, or crashes to desktop in the middle of an edit. The record remains locked forever, unless you run some maintenance script to unlock all records. Boo!

Next, if you use loops to retry record access in the event the record is currently locked, you will not want to use endless loops as you can conceivably attain deadlock. Set a maximum number of times the loop will retry before it gives up and alerts the user.
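A bounded retry loop along those lines might look like this; the retry count and delay are arbitrary, and `try_lock` stands in for whatever signature-check routine your application uses:

```python
import time

MAX_RETRIES = 5  # give up after this many attempts rather than loop forever

def acquire_with_retry(try_lock, delay=0.01):
    """Attempt to lock a record, retrying a bounded number of times."""
    for _ in range(MAX_RETRIES):
        if try_lock():
            return True
        time.sleep(delay)  # brief pause before the next attempt
    return False  # give up; the caller should alert the user
```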

Consider also that from the time the user clicks Save until the transaction is saved in all your tables, the application must not stop for any reason, such as to get input from the user, because again you can end up deadlocked if the user fails to respond or something crashes.

Also, if there are many kinds of transactions which involve different sets of master tables, it is highly advisable to write a function which rechecks the signatures in all the affected tables and attempts to lock the records, before proceeding with any updates to any of the records. You don't want to be in the middle of saving a transaction only to find halfway through that someone posted changes to one of the records you needed to update. Lock down every record that needs to be updated before proceeding with the transaction.
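The lock-everything-first function could be shaped like this; `try_lock` and `unlock` are placeholders for your per-record signature routines, and the back-out-on-failure step is the important part:

```python
def lock_all_or_none(records, try_lock, unlock):
    """Lock every affected record before updating anything.

    If any single lock fails, release the ones already taken and
    report failure, so no records are left stranded mid-transaction.
    """
    locked = []
    for rec in records:
        if not try_lock(rec):
            for r in locked:  # back out: free what we already hold
                unlock(r)
            return False
        locked.append(rec)
    return True  # safe to proceed with the actual updates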

Finally, a maintenance script to "unlock" all records in all tables is a must, because eventually something will go wrong in the middle of a transaction and strand records, preventing them from being updated.
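The maintenance script is the simplest piece; in the same Python/SQLite sketch (table names and the `sig` field are still my stand-ins), it is one UPDATE per master table, run while no users are connected:

```python
import sqlite3

SIG_MAX = 99  # the "locked" sentinel value for a 2-digit signature field

def unlock_all(conn, tables):
    """Maintenance pass: reset any stranded signature locks.

    Run only while no users are connected, so a legitimate in-flight
    save can't be unlocked out from under its owner.
    """
    for table in tables:
        # 0 is an arbitrary "unlocked" value; anything below SIG_MAX works
        conn.execute("UPDATE %s SET sig = 0 WHERE sig = ?" % table, (SIG_MAX,))
    conn.commit()
```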

This all sounds like a lot of trouble, I know, and for simple applications a single table to keep track of records that are allocated to a particular user is better, assuming you have a way of uniquely identifying each connection. (Remember, you may be able to log in to your app with the same credentials from two or more workstations, but they are separate connections nonetheless.) But for a complex application, like an integrated accounting solution, I think the signature method really works best. There is no need to track logged-in users or deal with unexpected disruptions or duplicate credentials from different workstations.

Also I would avoid any automatic record locking methods provided by the database mechanism you use, as these are not always reliable, and do not really deal with the problem of simultaneous multi-table transactions from multiple users. They only prevent data corruption from occurring by preventing simultaneous writes to a single table or database.

Sorry for the lengthy post.

Bob Sneidar
IT Manager
Logos Management
Calvary Chapel CM

On Sep 22, 2008, at 2:46 AM, Klaus Major wrote:

Hi all,

this is a bit off topic, but maybe someone can give me some helpful
hints.

Is someone of you working with databases with multiple users?
If yes, how do you handle "concurrent transactions"?

I mean, how do you solve the possible overwriting of data when both users work with cursors and are allowed to update data, and how do you update a cursor (on the fly?) when its data may have been updated?

Know what I mean?

I searched the net and found some hints, but maybe you have some
really clever
solutions that you are willing to share :-)

Thanks in advance!


Best

Klaus Major
[EMAIL PROTECTED]
http://www.major-k.de


_______________________________________________
use-revolution mailing list
[email protected]
Please visit this url to subscribe, unsubscribe and manage your subscription preferences:
http://lists.runrev.com/mailman/listinfo/use-revolution

