I'm trying to set up an API that uses SQLAlchemy Core (not the ORM) with
PostgreSQL. The server is a Google-managed PostgreSQL instance on an
external IP.
Since I needed to manually pass SSL certificates as connect_args to
create_engine(), plus some additional arguments, I'm calling
create_engine() directly. My questions are related to this.
1. Does this look OK?
import zope.sqlalchemy
from populus_lib.config import in_worker, pg_certs, pg_url
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker


def get_session_factory(engine):
    factory = sessionmaker()
    factory.configure(bind=engine)
    return factory


def get_tm_session(session_factory, transaction_manager):
    dbsession = session_factory()
    zope.sqlalchemy.register(
        dbsession, transaction_manager=transaction_manager)
    return dbsession


def includeme(config):
    settings = config.get_settings()
    settings['tm.manager_hook'] = 'pyramid_tm.explicit_manager'
    config.include('pyramid_tm')

    engine = create_engine(
        pg_url,
        connect_args=pg_certs,
        pool_pre_ping=True,
        # in_worker means production
        pool_reset_on_return='rollback' if in_worker else None,
    )

    session_factory = get_session_factory(engine)
    config.registry['dbsession_factory'] = session_factory
    config.add_request_method(
        lambda r: get_tm_session(session_factory, r.tm),
        'dbsession',
        reify=True,
    )
2. Since I'm not using the ORM, only Core, do I need the
from sqlalchemy.orm import sessionmaker import at all?
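For context, the only ORM piece I touch is sessionmaker itself; all my
actual queries go through the session as Core constructs, roughly like
this (the users table is made up, purely to illustrate):

from sqlalchemy import Column, Integer, MetaData, String, Table

metadata = MetaData()

# hypothetical table, just to show Core-only usage through the session
users = Table(
    'users', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50)),
)


def list_users(request):
    # Session.execute() accepts plain Core statements, no mapped classes
    result = request.dbsession.execute(users.select())
    return {'users': [tuple(row) for row in result]}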
3. Is pool_pre_ping supported with Pyramid's way of session/transaction
handling? I want to be sure that disconnects/reconnects of the external
server are handled automatically, and I think pool_pre_ping is the best
option for this.
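To convince myself, I was planning a quick standalone test along these
lines (a sketch reusing pg_url / pg_certs from above; the idea is to
restart the server between the two connects):

from sqlalchemy import create_engine, text

engine = create_engine(pg_url, connect_args=pg_certs, pool_pre_ping=True)

with engine.connect() as conn:
    print(conn.execute(text('SELECT 1')).scalar())

# ... restart the PostgreSQL server here, so the pooled connection dies ...

# with pool_pre_ping=True the pool should notice the dead connection on
# checkout and replace it, instead of raising an error mid-request
with engine.connect() as conn:
    print(conn.execute(text('SELECT 1')).scalar())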
4. Doesn't pool_reset_on_return conflict with pyramid_tm / session
handling? I only need to disable it in development: the SQL server is in
the US and I'm in Europe, and without this setting SQLAlchemy adds a huge
overhead to each query, around 300 ms.
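For reference, here's roughly how I measured that overhead (a sketch,
same pg_url / pg_certs; the reset ROLLBACK is issued when the connection
goes back to the pool, so I time the whole checkout/execute/return cycle):

import time

from sqlalchemy import create_engine, text

for reset in ('rollback', None):
    engine = create_engine(pg_url, connect_args=pg_certs,
                           pool_reset_on_return=reset)
    # warm up, so the pool already holds an open connection
    with engine.connect() as conn:
        conn.execute(text('SELECT 1'))
    start = time.monotonic()
    with engine.connect() as conn:
        conn.execute(text('SELECT 1'))
    # the return to the pool happens when the with block exits; with
    # reset='rollback' that adds an extra round trip to the US server
    print(reset, time.monotonic() - start)
    engine.dispose()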
5. Finally, what's puzzling me is that if I create a view like this:
from time import sleep

def ping(request):
    print(id(request.dbsession.connection().engine))
    sleep(60)
and run it via curl from two concurrent terminal windows, I get the same
id under pserve / Waitress, but different ids under gunicorn defaults
(which I believe means multiple worker processes).
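To double-check whether that difference is really processes vs. threads,
I was going to extend the view with a process/thread printout, something
like this (a diagnostic sketch only):

import os
import threading
from time import sleep

def ping(request):
    engine = request.dbsession.connection().engine
    # expectation: one pid and one engine id under threaded Waitress,
    # a different pid (and engine id) per gunicorn worker process
    print(os.getpid(), threading.current_thread().name, id(engine))
    sleep(60)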
As I understand it, each worker process needs its own engine instance,
doesn't it? I think the gunicorn behaviour is correct, but the
pserve/Waitress behaviour puzzles me. Is this by design?