Thanks, I had defined the scoped session the wrong way:
engine = create_engine(connectionstring, echo=settings.DEBUG,
                       echo_pool=settings.DEBUG,
                       pool_size=20, max_overflow=400)
session = scoped_session(sessionmaker(bind=engine))
s = session()
instead of:
engine =
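The corrected definition is cut off above. As a hedged sketch (the SQLite URL and the `do_work` name are placeholders, not the original Postgres setup), the usual pattern keeps one module-level scoped_session registry and has each thread call it and later `remove()` it:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

# placeholder engine; the original bound to Postgres with pool_size=20
engine = create_engine("sqlite://")

# module-level registry: each thread that calls Session() gets its own
# thread-local session bound to this engine
Session = scoped_session(sessionmaker(bind=engine))

def do_work():
    s = Session()         # repeated calls in one thread return the same session
    try:
        # ... query / add ...
        s.commit()
    finally:
        Session.remove()  # discard the thread-local session, freeing its connection
```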
Here is a sample SQLAlchemy log output from when the application hangs:
2009-08-03 01:05:20,458 INFO sqlalchemy.engine.base.Engine.0x...1634
{'param_1': None}
2009-08-03 01:05:33,673 INFO sqlalchemy.engine.base.Engine.0x...1634
SELECT tipallarmi.id AS tipallarmi_id, tipallarmi.codice AS
tipallarmi_codice,
After the SELECT there is an INSERT. Can three concurrent threads
inserting data into the same table cause the hang?
If so, how can I avoid it?
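For what it's worth, concurrent inserts into the same table should not hang by themselves as long as each thread commits and releases its session promptly. A minimal sketch of that pattern (table and function names are hypothetical, and a temp-file SQLite database stands in for the original Postgres one, since in-memory SQLite is per-connection):

```python
import os
import tempfile
import threading

from sqlalchemy import create_engine, MetaData, Table, Column, Integer
from sqlalchemy.orm import scoped_session, sessionmaker

# temp file so all threads see the same database
fd, dbpath = tempfile.mkstemp(suffix=".db")
os.close(fd)

engine = create_engine("sqlite:///" + dbpath)
meta = MetaData()
alarms = Table("alarms", meta, Column("id", Integer, primary_key=True))
meta.create_all(engine)

Session = scoped_session(sessionmaker(bind=engine))

def insert_alarm():
    s = Session()
    try:
        s.execute(alarms.insert())  # INSERT ... DEFAULT VALUES
        s.commit()                  # releases locks promptly
    finally:
        Session.remove()            # returns the connection to the pool

threads = [threading.Thread(target=insert_alarm) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The key point is the commit/remove in every thread: a session that never commits keeps its connection checked out, which is what eventually exhausts the pool.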
Please note that until now the same application was using the Django ORM
with no deadlock problems; I only changed the queries to use SQLAlchemy.
Thanks
The problem is reproducible with this simple script:
import samodels as sa
import threading

def insertalarm():
    s = sa.Session
    a = sa.Allarmi(s.query(sa.TipoAllarmi).get(1))
    s.add(a)
    s.commit()

for i in range(100):
    t = threading.Thread(target=insertalarm)
    t.start()
There's nothing wrong with what is illustrated, except that most
workstations can't run anywhere near 100 concurrent threads/INSERTs/PG
connections at the same time.
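One way to stay within the engine's pool limits (a sketch, not part of the original thread; `insert_alarm` here is a stand-in for the poster's function) is to cap concurrency with a worker pool no larger than `pool_size`, rather than spawning 100 simultaneous threads:

```python
from concurrent.futures import ThreadPoolExecutor

def insert_alarm(i):
    # stand-in for the real per-thread session work
    return i

# run the 100 jobs with at most 20 workers, matching pool_size=20,
# instead of 100 simultaneous threads/connections
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(insert_alarm, range(100)))
```

With the worker count bounded, the pool never needs more than 20 connections checked out at once, so `max_overflow` is never stressed.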
It goes without saying that showing fragments of code is not very
useful, since your results cannot be verified.
On Aug 2,
Here's my script; it runs fine:
from sqlalchemy import *
from sqlalchemy.orm import *
e = create_engine('postgres://scott:ti...@localhost/test', echo=True)
m = MetaData(e)
t1 = Table('t1', m, Column('a', Integer, primary_key=True),
           Column('b', Integer))
t2 = Table('t2', m, Column('a', Integer,