So I haven't given much thought to this setup until recently, which means
some of it could well be silly.

Up until now it's been one global scoped_session per Tornado process, with 
autocommit=True, pool_recycle=14400, and pool_size=20.
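For reference, the setup is roughly the following (SQLite stands in for the
real MySQL URL so the snippet is self-contained, and the legacy
autocommit=True flag is only noted in a comment since it's been removed from
recent SQLAlchemy versions):

```python
# Rough sketch of the current per-process setup (URL is a placeholder;
# SQLite stands in so the snippet runs standalone).
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine(
    "sqlite://",            # really: mysql://user:pass@host/db
    pool_recycle=14400,     # recycle connections after 4 hours
    pool_size=20,
)

# One global scoped_session shared across the whole Tornado process.
# The real setup also passes autocommit=True to sessionmaker(), a
# legacy option that modern SQLAlchemy no longer accepts.
Session = scoped_session(sessionmaker(bind=engine))
```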

IIRC, autocommit=True was a quick fix for the "MySQL has gone away" / 
"Can't reconnect until invalid transaction is rolled back" class of 
problems. To this day I don't know the real pros and cons of 
autocommit=True versus long-running transactions. I'm assuming there's some 
overhead to starting a transaction on every flush, but I have no idea 
whether it's negligible, or whether it's the only overhead involved, etc.
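To make concrete what I mean by the two patterns, here's a contrived
Core-level illustration against a throwaway in-memory SQLite database (so
nothing here says anything about actual MySQL overhead):

```python
# Transaction-per-statement (what autocommit-style usage amounts to)
# vs. one long-running transaction, illustrated with throwaway SQLite.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))

# autocommit-style: every statement gets its own BEGIN ... COMMIT
with engine.connect() as conn:
    for i in range(100):
        with conn.begin():
            conn.execute(text("INSERT INTO t (x) VALUES (:x)"), {"x": i})

# long-running transaction: a single COMMIT covers all 100 inserts
with engine.begin() as conn:
    for i in range(100):
        conn.execute(text("INSERT INTO t (x) VALUES (:x)"), {"x": i})
```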

Of the five approaches for web frameworks described in the docs 
(http://www.sqlalchemy.org/docs/05/session.html#lifespan-of-a-contextual-session), 
I chose the last -- do nothing. However, some of the requests use read-only 
queries that don't require a commit or rollback, and I never explicitly call 
close() or remove(). Is this a mistake? If so, what can happen?

Pretty sure I'm going to be switching to an autocommit=False model and 
adding a session.close() at the end of each request (probably at the 
framework level), so that each request gets a fresh session. I think this 
will help with various problems that have been popping up, including 
ObjectDeletedErrors and StaleDataErrors. On that note, I still don't get why 
you'd ever use autocommit=True without also changing the default 
expire_on_commit to False?
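At the framework level I'm picturing something like this (BaseHandler stands
in for a real tornado.web.RequestHandler, and the hook name is just
illustrative):

```python
# Sketch of the per-request session model (BaseHandler is a stand-in
# for tornado.web.RequestHandler; the finish hook is illustrative).
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")  # really MySQL, as above
Session = scoped_session(sessionmaker(bind=engine))  # autocommit=False is the default

class BaseHandler(object):
    def on_finish(self):
        # Runs at the end of every request: remove() rolls back any
        # open transaction, closes the session, and guarantees the
        # next request starts with a brand-new one.
        Session.remove()
```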

I've also had occasional "MySQL is using too many connections" errors pop up 
with this setup. Perhaps the reason is obvious to someone here?

Cheers,

R

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
