This is a big transaction. Is it performed occasionally? Is it 
performed concurrently with other operations? Can it be broken into 
smaller transactions?

On Friday, 9 August 2013 19:16:42 UTC-5, Joe Barnhart wrote:
>
> The overall transaction looks like this:
>
> start the transaction
>   read 10-200 rows into table A
>   read 10-1000 rows into table B
>   read 10-4000 rows into table C (rows depend on ids from A and B inserts)
> commit
>
> If it fails anywhere, I do not want the transaction to succeed.  But if 
> there are duplicate records, I want them silently ignored.
>
> My problem comes from the lack of an "insert or ignore" option on the 
> insert statements.  If any insert encounters a value that is not unique, 
> it rolls back the entire transaction.
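
For what it's worth, stock Postgres has no "insert or ignore", but a 
conditional insert comes close.  A minimal sketch through the DAL's 
executesql(), with invented table and column names:

    # Insert only when no row with this key exists yet.
    # "runner", "race_id" and "bib" are made-up names.
    db.executesql("""
        INSERT INTO runner (race_id, bib, name)
        SELECT %s, %s, %s
        WHERE NOT EXISTS
            (SELECT 1 FROM runner WHERE race_id = %s AND bib = %s)
        """, placeholders=(race_id, bib, name, race_id, bib))

Note this only skips duplicates already visible to your transaction; it 
does not help with case 1 below, where a concurrent transaction commits 
the same key after your NOT EXISTS check has passed.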
>
> I have tried to devise a query that looks for any such collisions in the 
> tables before the inserts.  Nothing has worked; it fails for one of two 
> reasons:
>
> 1.  There are several processes loading the tables simultaneously.  When a 
> transaction begins it has no overlap with the table, but by the time it 
> commits another transaction has gotten there first and now there IS a 
> collision.  The only solution is to wait a random time and try again 
> (expensive).
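
That race is exactly what savepoints solve: instead of retrying the whole 
batch, you roll back just the failed insert and keep going.  A sketch, 
again with a made-up table, issuing the savepoint commands through 
executesql():

    # Wrap each insert in a savepoint so a duplicate kills only
    # that one row instead of aborting the whole transaction.
    for row in rows:
        db.executesql("SAVEPOINT one_row")
        try:
            db.runner.insert(**row)
            db.executesql("RELEASE SAVEPOINT one_row")
        except Exception:
            # most likely a unique violation; undo just this insert
            db.executesql("ROLLBACK TO SAVEPOINT one_row")
    db.commit()  # everything that survived commits together

Catching bare Exception also swallows unrelated errors; narrow it to the 
adapter's IntegrityError if you can.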
>
> 2.  There are records within the file being loaded that are duplicates and 
> cause the failure.  While loading table A it encounters records that would 
> insert the same row twice.  To avoid this I've even tried doing a select 
> to see if the row exists (inside my transaction) before the insert.  For 
> some reason I have never gotten this to work.  The select, even when done 
> in the exact same transaction, literally on the line above the insert, 
> fails to see the record and the error is triggered anyway.
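
For the in-file duplicates at least, de-duplicating in Python before 
touching the database sidesteps the visibility question entirely.  A 
sketch, assuming each row is a dict and (race_id, bib) is the unique key 
(made-up names again):

    # Drop rows that repeat an earlier row's unique key.
    seen = set()
    unique_rows = []
    for row in rows:
        key = (row['race_id'], row['bib'])
        if key not in seen:
            seen.add(key)
            unique_rows.append(row)
    # ...then insert only unique_rows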
>
> All of this is being done in a module, and run in the scheduler.  So my 
> debugging access is somewhat limited in normal operation.  I can bypass the 
> scheduler and insert a file directly to test it.  Otherwise I rely on the 
> stack trace provided by the scheduler.
>
> The savepoint fix is the only method recommended on the Postgres forums, 
> usually when people converting from MySQL complain...
>
> -- Joe
>
>
> On Friday, August 9, 2013 2:39:36 PM UTC-7, Niphlod wrote:
>>
>> On Friday, August 9, 2013 11:11:13 PM UTC+2, Joe Barnhart wrote:
>>>
>>> I need to preserve the database state by keeping the changes to the 
>>> tables in a consistent state.  If I insert a group of records, I need to 
>>> make sure they all succeed or fail.  I can't commit them one at a time in 
>>> this case.
>>>
>>  
>> offtopic2-continued: web2py wraps every request in the exact same manner, 
>> in order to give you atomicity of all db operations in a request. If you 
>> just need those "special insert()s" to happen or not atomically, then you 
>> can totally use the try:except block, for as many insert()s as you like.
>>
>> Every DAL write operation between two commit()s, or between a commit() 
>> and a rollback(), is atomic.
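
Concretely, something like this (made-up tables):

    try:
        db.table_a.insert(name='x')
        db.table_b.insert(name='y')
        db.commit()    # both rows become visible, atomically
    except Exception:
        db.rollback()  # neither row is kept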
>>
>> If instead you need to have:
>>
>> try:
>>     # piece of normal code
>>     try:
>>         # "special insert" handled by those functions
>>     except:
>>         # roll back only the special insert
>>     # another piece of normal code
>> except:
>>     # roll back both
>>
>> then you're asking for nested transactions, which (as I anticipated) will 
>> not work with the try:except block. Usually you can still get around it by 
>> placing commit()s and rollback()s in the correct places (roughly as 
>> sketched below); if you don't want to play with those, your savepoint 
>> issue can very well still be circumvented by a tailored insert.
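
Spelled out, the commit()-placement version of that pseudocode looks 
roughly like this (made-up tables; note the early commit() means the 
first piece of normal code can no longer be rolled back together with 
the special insert, which is why this only approximates a true nested 
transaction):

    try:
        db.log.insert(msg='start')    # normal code
        db.commit()                   # seal it before the risky part
        try:
            db.scores.insert(**row)   # the "special insert"
            db.commit()
        except Exception:
            db.rollback()             # undoes only the special insert
        db.log.insert(msg='done')     # more normal code
        db.commit()
    except Exception:
        db.rollback()                 # undoes whatever followed the
                                      # last successful commit()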
>>
>
