One caveat with the 'few commits' approach: if you do many operations (especially writes) you might inadvertently lock the tables until the transaction ends, which will in turn block other threads. This can get especially nasty with SQLite. So the bottom line is that there is no strict correlation between the number of commits and execution speed - it depends on what exactly you are doing in your transactions.
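A common middle ground is to commit every N rows, so locks are released periodically without paying the cost of a commit per statement. Here is a minimal sketch of that idea using the stdlib sqlite3 module directly (not the web2py DAL); the table and column names mirror the example below but are otherwise made up:

```python
import sqlite3

BATCH = 1000  # commit every BATCH updates (tuning value, assumption)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE details (id INTEGER PRIMARY KEY, destination_field INTEGER)"
)
conn.executemany(
    "INSERT INTO details (id) VALUES (?)", [(i,) for i in range(10000)]
)
conn.commit()

for i in range(10000):
    conn.execute(
        "UPDATE details SET destination_field=? WHERE id=?", (i, i)
    )
    if i % BATCH == BATCH - 1:
        conn.commit()  # release locks periodically so other threads can proceed
conn.commit()  # flush any remaining uncommitted updates
```

With the web2py DAL the same pattern would be `db.commit()` inside the loop guarded by the same modulo check, but the right batch size depends on your workload and backend.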
Timothy Farrell wrote:
> The fewer commits, the faster the process. Also, with one commit, you
> are less likely to have mixed up data (what happens if two items get
> committed and then power goes out? 5000 items?)
>
> One situation where you might want to put commit in the loop is if
> you're using a db table as a log or history. The point is that the data
> needs to be in there as soon as it can be, and if the server goes down,
> the last entries are the most critical. Also note that the data records
> for a log don't relate to another table for anything critical.
>
> -tim
>
> SergeyPo wrote:
>> In case I need to run many updates to the database in a long cycle,
>> what is preferred for speed - to have one db.commit() at the end, or
>> commit() after every db update? E.g.
>>
>>     for i in range(100000):
>>         db(db.details.id==i).update(**dict([(destination_field, i)]))
>>         db.commit()
>>
>> or
>>
>>     for i in range(100000):
>>         db(db.details.id==i).update(**dict([(destination_field, i)]))
>>     db.commit()
>>
>> (this is an example only; of course the condition db.details.id==i is dumb).
>
> --
> Timothy Farrell <[email protected]>
> Computer Guy
> Statewide General Insurance Agency (www.swgen.com)

