> It isn't unusual for serialisation errors to be reported prior to
> commit: the database usually reports them as soon as it detects the
> problem. The traceback from the error you saw probably tells you
> where it occurred. Try wrapping your try/except block around the
> entire transaction logic rather than just the commit() call.
>
> If you're using PostgreSQL, the
> psycopg2.extensions.TransactionRollbackError exception should cover
> the cases you're interested in.

Yes, you are correct: I get the exception now if I wrap the entire block.
Got it! The error is: "could not serialize access due to concurrent update"
(<class 'psycopg2.extensions.TransactionRollbackError'>).

Should I roll back and try again, or just rerun the block? It may not
improve performance, since I have to do all the work again <sigh>.
Thanks for your help.

> > The same problem occurs with MySQL with InnoDB (with a different
> > error message, though, something about a deadlock being detected).
>
> I would guess MySQL is pretty much the same: it reports the error as
> soon as the problem is detected rather than letting you continue
> until commit.
>
> > What surprises me is why the db server refuses to work. I am
> > positive that the two transactions touch completely different sets
> > of rows, so the modifications should be on different rows.
> >
> > And Storm does not throw an exception in this case that I can catch
> > with except to try again or do something about it.
> >
> > The problem is fixed if I call store.commit() after each object
> > value update (store.flush() is not enough), but that is painfully
> > slow.
> >
> > MyISAM does not have that problem either.
>
> Well, MyISAM doesn't support transactions, so you shouldn't expect it
> to report transaction serialisation errors ...
>
> James.

--
Steve Kieu
Ph: +61-7-3367-3241
sip:*[email protected]
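On the "roll back and try again" question, the usual pattern is: catch the serialization error around the whole transaction body, call rollback() to abort the failed transaction, then rerun the body from the start. A minimal sketch of such a retry wrapper follows; in real use the exception class would be psycopg2.extensions.TransactionRollbackError and the rollback callable would be store.rollback (the retry_exc/rollback parameters and run_with_retry name here are illustrative, not part of Storm's API):

```python
def run_with_retry(txn_body, rollback, retry_exc, max_attempts=5):
    """Run txn_body(); on a serialization conflict, roll back and retry.

    txn_body     -- zero-arg callable doing the updates and the commit
    rollback     -- zero-arg callable aborting the failed transaction
    retry_exc    -- exception class signalling a retryable conflict
    max_attempts -- give up (re-raise) after this many conflicts
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_body()
        except retry_exc:
            rollback()  # must abort the failed transaction before retrying
            if attempt == max_attempts:
                raise   # persistent contention: let the caller see it
```

With Storm this would be called roughly as run_with_retry(do_updates_and_commit, store.rollback, TransactionRollbackError). Retrying does redo the work, but only for the transactions that actually conflicted, which is normally far cheaper than committing after every single object update.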
--
storm mailing list
[email protected]
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/storm
