I thought I had finally understood what it means to use a connection
pool / QueuePool, but this performance problem puzzles me.
If I run:
for i in range(100):
    with pg_engine.connect() as conn:
        conn.execute('select 1').fetchall()
it takes 47 seconds.
If I run:
with pg_engine.connect() as conn:
    for i in range(100):
        conn.execute('select 1').fetchall()
it takes 17 seconds.
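(For reference, I measured both variants with a plain wall-clock wrapper
along these lines; the `timed` helper is my own scaffolding, not part of
the snippets above:)

import time

def timed(label, fn):
    # crude elapsed wall-clock time around one whole variant
    t0 = time.perf_counter()
    fn()
    print(f'{label}: {time.perf_counter() - t0:.1f}s')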
Isn't the whole point of a connection pool that connections are
kept/cached/reused? If so, why does the first version take almost three
times as long?
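To check my understanding, I'm counting raw DBAPI connects versus pool
checkouts with the documented pool events (the counter names here are
mine):

from sqlalchemy import event

n_connects = 0
n_checkouts = 0

@event.listens_for(pg_engine, 'connect')
def count_connect(dbapi_conn, connection_record):
    # fires only when the pool opens a brand-new DBAPI connection
    global n_connects
    n_connects += 1

@event.listens_for(pg_engine, 'checkout')
def count_checkout(dbapi_conn, connection_record, connection_proxy):
    # fires every time pg_engine.connect() hands out a pooled connection
    global n_checkouts
    n_checkouts += 1

If reuse works the way I expect, n_connects should stay at 1 while
n_checkouts reaches 100 in the first variant.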
pg_engine is set up the following way:
pg_url = f'postgresql+psycopg2://app:{pg_config["password"]}@{pg_config["host"]}/app'
pg_certs = {
    'sslcert': f'{config_dir}/client-cert.pem',
    'sslkey': f'{config_dir}/client-key.pem',
    'sslrootcert': f'{config_dir}/server-ca.pem',
}
pg_engine = create_engine(pg_url, connect_args=pg_certs)
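I don't override any pool settings, so as far as I understand the engine
uses the default QueuePool; spelled out explicitly, my setup should be
equivalent to something like this (the keyword values are the documented
defaults, not anything I pass myself):

from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

pg_engine = create_engine(
    pg_url,
    connect_args=pg_certs,
    poolclass=QueuePool,  # default pool class
    pool_size=5,          # default: up to 5 persistent connections
    max_overflow=10,      # default: 10 extra connections under load
)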
The server is remote and quite far away, so any per-connection overhead
is greatly amplified compared to localhost. Python is 3.7.