johnf wrote:
> On Sunday 31 December 2006 10:53, Carl Karsten wrote:
>> johnf wrote:
>>> On Sunday 31 December 2006 06:08, Paul McNett wrote:
>>>> Carl Karsten wrote:
>>>>> Much like I want the database engine to make sure I don't create orphan
>>>>> child records, I want some low level generic code that makes sure
>>>>> database updates don't get overridden.  I doubt I am the only one.
>>>> I agree with you, Carl, that Dabo should provide some basic behavior for
>>>> detecting and responding to collisions. Then, the application developer
>>>> can decide to override, circumvent, or expand on the basic behavior Dabo
>>>> provides.
>>>>
>>>> Adding the list of fields to the where clause seems like a pretty smart,
>>>> generic way to accomplish this. VFP got a lot of things right, and this
>>>> is one of them. I don't see why it wouldn't work with all backends, and
>>>> anyway the app developer can set it to not detect but overwrite updates
>>>> if they choose.
>>> Let me get this right.  If I have a 100-field table and I want to change
>>> one field, the update statement will look something like this:
>>> Update table set myfield = "something" where 99 fields = 99 fields???
>> close.
>>
>> First, there are 5 Types (VFP only used 4; your example just made me
>> realize there is a 5th.)
>>
>> So as the developer, you get to pick one.  Your example is closest to the
>> 5th: "PK and All Fields" (which is the same as All Fields), so we will use
>> it.
>>
>> The WHERE will include the modified field too, and in keeping with the
>> Description will start with the PK:
>>
>> Update table set myfield = "something" where
>>      PKField = PKvalue and
>>      myfield = myfield_old_value and
>>      98 fields = 98 values
>>
>> all 100 fields accounted for.
>>
>>  > This will ensure that the fields I didn't change were not changed?  Do I
>>  > have it right?
>>
>> You probably also want to ensure the field you are changing didn't change.
>>
>> "If someone else changes any of these fields, the WhereClause will not be
>> true anymore, and so no records will be updated."
>> http://fox.wikis.com/wc.dll?Wiki~VfpViews~VFP
>>
>> It makes much more sense once you understand it :)
>>
>> If you can figure out a better way of wording it, please do.
>>
>> I think the concept of "target record" or "active record" or "the record we
>> are interested in" needs to be better defined.  I wouldn't want anyone to
>> think there was a record pointer, but the concept is similar (if you
>> squint.)
>>
>> Carl K
>>
>> _______________________________________________
>> Post Messages to: [email protected]
>> Subscription Maintenance: http://leafe.com/mailman/listinfo/dabo-dev
> 
> I think I understand, and your mods to my update statement make sense.  So 
> what we are looking for is NOT a question about concurrency?  This is sort of 
> a "dirty write" or "lost updates" problem: one update overwrites a 
> prior update.
> 

To me it is a concurrency issue: two clients updating the same record "at the 
same time" (which really means the read/edit/write processes overlap.)
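To make the "PK and All Fields" idea above concrete, here is a minimal sketch (not Dabo's actual code; the table and helper names are invented, and sqlite3 is used only to keep the demo self-contained). Every field's old value goes into the WHERE clause, so the UPDATE matches zero rows if anyone else changed the record between our read and our write:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (pk INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.execute("INSERT INTO person VALUES (1, 'Ann', 'Chicago')")

def optimistic_update(conn, table, old_row, changes):
    # Build: UPDATE table SET f1 = ? WHERE pk = ? AND f2 = ? AND ...
    set_sql = ", ".join("%s = ?" % f for f in changes)
    where_sql = " AND ".join("%s = ?" % f for f in old_row)
    sql = "UPDATE %s SET %s WHERE %s" % (table, set_sql, where_sql)
    cur = conn.execute(sql, list(changes.values()) + list(old_row.values()))
    return cur.rowcount  # 0 means the WHERE matched nothing: a collision

old_row = {"pk": 1, "name": "Ann", "city": "Chicago"}  # values as we read them

# Another client updates the record between our read and our write:
conn.execute("UPDATE person SET city = 'Boston' WHERE pk = 1")

changed = optimistic_update(conn, "person", old_row, {"city": "Denver"})
print(changed)  # 0 -> the stale write was blocked, nothing overwritten
```

The caller checks the row count: zero updated rows means a collision, and the framework (or app) can then decide to refetch, merge, or overwrite.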


> OK, let's take the problem from the MsSQL side:  
> 1.  I think _mssql (which works like ADO) will raise an exception when one 
> or more records in your record set have been changed.  However, I think this 
> will not apply to Dabo's way of dealing with a cursor.
> 
> 2. So that leaves us with the way I do it.  I use the rowversion (it used to 
> be the timestamp).  The rowversion works as long as you are using only one 
> database.  Remember, Microsoft guarantees that the number will be unique.

In VFP terms: WhereType = 4 - Key and TimeStamp.

Why "as long as you are using only one database" ?
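As a rough sketch of the Key-and-TimeStamp idea (using a plain integer column in sqlite3 as a stand-in for MSSQL's rowversion; the table and column names are invented for the demo): only the PK and the version read with the row go into the WHERE, and the UPDATE bumps the version so any other writer's stale copy no longer matches.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (pk INTEGER PRIMARY KEY, name TEXT, ver INTEGER)")
conn.execute("INSERT INTO person VALUES (1, 'Ann', 1)")

def versioned_update(conn, pk, read_ver, new_name):
    # WHERE needs only the PK plus the version we read; bumping ver on
    # success invalidates every other client's copy of the row.
    cur = conn.execute(
        "UPDATE person SET name = ?, ver = ver + 1 WHERE pk = ? AND ver = ?",
        (new_name, pk, read_ver))
    return cur.rowcount

first = versioned_update(conn, 1, read_ver=1, new_name="Anne")   # 1: accepted
second = versioned_update(conn, 1, read_ver=1, new_name="Anna")  # 0: stale, blocked
```

The trade-off versus all-fields-in-WHERE: the statement stays short, but it requires a version/timestamp column (or backend support like rowversion) on every table.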

> 
> 
> OK let's take the Postgres way:
> 
> 1.  Again, psycopg.py would raise an exception (depending on the "set 
> transaction isolation" setting), and of course it should be handled 
> in code.  Again, it most likely will not work with Dabo cursors.
> 
> 2. My way is to read the xmax or xmin fields: "WHERE xmin = myxmin" or 
> "xmax IS NULL".  
> 
> I would think Sybase works as MsSQL does.  But I know nothing about MySQL, 
> SQLite, Oracle (some day in the future?) or FireBird.  The best guess is they 
> have something to handle "lost updates".
> 
> So I wonder if adding 98 fields to the WHERE is the best generic way of 
> dealing with "lost updates".  Maybe a better solution would be to add a way 
> to set the WHERE clause to include each database engine's unique way of 
> dealing with "lost updates", similar to the way we deal with the "limit" 
> clauses of the database engines.
> 
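For reference, the Postgres-specific xmin variant described above might build its WHERE clause like this (string construction only; actually running it needs a live Postgres connection, e.g. via psycopg, and the xmin value must be the one read along with the row, e.g. via SELECT xmin, * FROM person). Comparing through xmin::text is one common way to bind the value as an ordinary parameter; the table and column names here are made up.

```python
def xmin_update_sql(table, pk_field):
    # The system column xmin records the transaction that last wrote the
    # row, so it changes on every update -- a built-in version stamp.
    return ("UPDATE %s SET name = %%s "
            "WHERE %s = %%s AND xmin::text = %%s" % (table, pk_field))

print(xmin_update_sql("person", "pk"))
# UPDATE person SET name = %s WHERE pk = %s AND xmin::text = %s
```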

I think you are assuming there is something wrong with having 98 fields in the 
WHERE clause.

Given that I don't see a problem, I think it is better than engine-specific 
code.

Carl K


