I've seen this kind of transient error before with another web
framework called SkunkWeb (and its ORM "PyDO"), and once with
TurboGears as well -- pretty much any time I've combined MySQLdb,
MySQL (with MyISAM tables, not InnoDB -- I'm switching to InnoDB for
TG), and a Python web framework. The culprits I found were:
OperationalError 2006 ("MySQL server has gone away")
OperationalError 2013 ("Lost connection to MySQL server during query")
ProgrammingError 2014 ("Commands out of sync; you can't run this
command now")
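For what it's worth, all three of those numbers come from the MySQL
client library, and in my experience all three can only be cured by
opening a fresh connection (2014 means the connection's protocol state
is wrecked, not that the query was bad). A small sketch of how one
might classify them before deciding to reconnect -- the function name
and the grouping are mine, not part of MySQLdb:

```python
# Error numbers from the MySQL client library (CR_SERVER_GONE_ERROR,
# CR_SERVER_LOST, CR_COMMANDS_OUT_OF_SYNC). Treating all three as
# "connection is dead" is my own reading of the symptoms.
CONNECTION_DEAD = {
    2006,  # "MySQL server has gone away"
    2013,  # "Lost connection to MySQL server during query"
    2014,  # "Commands out of sync" -- connection state is unusable
}

def needs_reconnect(exc):
    """True if the exception's first argument is one of the error
    numbers that can only be fixed by opening a new connection."""
    errno = exc.args[0] if exc.args else None
    return errno in CONNECTION_DEAD
```

A wrapper could catch OperationalError/ProgrammingError, run the
exception through this check, and reconnect only for these codes
instead of swallowing every operational error.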
I never really got a chance to track down the root cause; it may have
been a timeout in the MySQL server (wait_timeout closing idle
connections, for instance), a quirk of the web framework's DB
connection pooling, a quirk in MySQLdb itself, or some pathological
combination of the above.
If you google some of these error messages, it seems that other users
are commonly afflicted with this problem too -- Django, Webware,
Zope, etc. -- which leads me to believe it's a problem with MySQL or
MySQLdb. Most blog and newsgroup posts mentioning it describe it as
"random" but "frequent."
I implemented a similar (admittedly ghetto) solution -- retrying on
any of those three exceptions by re-establishing the connection and
re-executing the query -- which has worked fine so far. But it's
hardly elegant, and in the back of my mind I'm still afraid of lost
and/or duplicated queries (the consequences of dropping or duplicating
"UPDATE account SET balance=balance+100 WHERE uid=xxx" are
disastrous).
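To make that worry concrete, here's a toy illustration (plain Python,
no real database) of why blindly replaying a non-idempotent UPDATE
after an ambiguous failure applies it twice, and how a version-column
guard makes the replay harmless. The dict stand-in and column names
are invented for the example:

```python
# A single "row", standing in for the account table.
account = {"balance": 100, "version": 1}

def unsafe_credit(amount):
    # Like "UPDATE account SET balance = balance + %s": replaying this
    # after a dropped connection credits the account twice.
    account["balance"] += amount

def safe_credit(amount, expected_version):
    # Like "UPDATE account SET balance = balance + %s,
    #       version = version + 1 WHERE version = %s":
    # a replay matches zero rows, so the credit lands at most once.
    if account["version"] == expected_version:
        account["balance"] += amount
        account["version"] += 1
        return 1  # rows matched
    return 0

# First attempt plus a blind retry: the credit is duplicated.
unsafe_credit(100)
unsafe_credit(100)

# Guarded version: the retry matches zero "rows" and is a no-op.
assert safe_credit(100, expected_version=1) == 1
assert safe_credit(100, expected_version=1) == 0
```

The same effect can be had with a transaction that is rolled back and
re-run as a whole; the point is just that the retry has to be made
idempotent somewhere, rather than replayed statement by statement.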
Hopefully if more people see this problem we can get to the bottom of
it -- I think it's caused a lot of grief and I've never seen a robust
solution to it. Next time I encounter it I'll dig a little deeper.
Update: People in ruby-on-rails land have encountered the same thing,
with similar extensive commentary and hand-wringing (
http://dev.rubyonrails.org/ticket/428 ). The bug is marked as
resolved-fixed, and I don't have time right this second to verify, but
it looks like they check DB connections to see if they're stale before
executing queries. Notably (and appropriately), they also strongly
decided against any automatic query retrying because of data integrity
issues.
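If TG went the same route, the shape of the fix is a thin connection
wrapper that pings before handing out a cursor, so only the liveness
check is ever repeated -- never the query itself. This is a sketch
under the assumption of a DB-API-style connection exposing ping()
(MySQLdb connections do, wrapping the C client's mysql_ping);
PingingConnection and connect_fn are names I made up:

```python
class PingingConnection:
    """Wraps a DB connection and checks it is alive before each use.

    connect_fn is any zero-argument callable returning a fresh
    connection (e.g. a lambda around MySQLdb.connect).
    """

    def __init__(self, connect_fn):
        self._connect = connect_fn
        self._conn = connect_fn()

    def cursor(self):
        # Verify liveness *before* handing out a cursor, so a stale
        # connection is replaced without ever re-running a query.
        try:
            self._conn.ping()
        except Exception:
            self._conn = self._connect()
        return self._conn.cursor()
```

Pinging before every query costs a round trip, but it sidesteps the
data-integrity questions of retry-after-failure entirely, which seems
to be why the Rails folks preferred it.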
Maybe a similar workaround would be appropriate for TG/SQLObject.
-Drew
[EMAIL PROTECTED] wrote:
> I fixed this with this hack, which tries to reestablish a connection
> _once_ on error. This seems to take care of most transient 'server has
> gone away' mysql server errors like timed-out connections, restarted
> mysql server, etc.
>
> Is there a better way of doing this? If not could I suggest that this
> kind of functionality is included in TG? It seems like this _should_ be
> a common problem.
>
> I use mysql 5, TG 0.9a4 fwiw.
>
> Changed the PackageManager code in database.py as follows:
>
> def __get__(self, obj, type):
>     import _mysql_exceptions
>     if not self.hub:
>         self.set_hub()
>     try:
>         data = self.hub.__get__(obj, type)
>     except _mysql_exceptions.OperationalError:
>         # reconnect and try again once
>         self.set_hub()
>         data = self.hub.__get__(obj, type)
>     return data
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups
"TurboGears" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/turbogears
-~----------~----~----~----~------~----~------~--~---