Hi all, I'm using SQLAlchemy to access a large table (~280 million
rows), and I'm running into timeout issues: at 30 seconds, SQLAlchemy
gives up.  Rather than getting every table, present and future,
indexed differently, I was wondering whether there is a way, using
session.query (*not* select()), to change the default 30-second
timeout to something more suitable for such large tables.  Everything
I've found on the web points to QueuePool and the pool_timeout
parameter, but it's unclear how that fits in with a mapper and
session.query.  Here is a snippet of the code I'm using:

from sqlalchemy import create_engine, MetaData, Table, Column, Integer
from sqlalchemy.orm import mapper, sessionmaker

session = None
table1 = None

class Object(object):
    # placeholder for the mapped class (defined elsewhere in my real code)
    pass

def createSession():
    global session, table1
    engine = create_engine('mssql://<database>', echo=False)
    metadata = MetaData()
    metadata.bind = engine
    table1 = Table('<Large Table>', metadata,
                   Column('LocationId', Integer, primary_key=True),
                   autoload=True)
    mapper(Object, table1)
    Session = sessionmaker(bind=engine, autoflush=True,
                           transactional=True)
    session = Session()

def getData(lmplocation_id):
    global session, table1
    if not session:
        createSession()
    # filter on the id this function was given
    result = session.query(Object).filter_by(LocationId=lmplocation_id)
    return result
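For reference, the pool_timeout usage the web keeps pointing me at
looks like the sketch below.  As far as I can tell it is the number of
seconds to wait for a connection to become available from the
QueuePool, which sounds like a pool-checkout limit rather than the
per-query timeout I'm after.  (I've used an in-memory SQLite URL here
only so the snippet stands alone; my real URL is the mssql one above.)

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# pool_timeout: seconds to wait for a free connection from the
# QueuePool before giving up -- apparently a pool-checkout setting,
# not a per-query execution timeout.
# (sqlite:// is a stand-in so this runs without a database server.)
engine = create_engine(
    'sqlite://',
    poolclass=QueuePool,
    pool_timeout=120,   # wait up to 120 s for a pooled connection
)
```

If that really only governs pool checkout, it wouldn't help with a
long-running query against the big table, which is why I'm asking how
(or whether) it relates to session.query.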

Thanks,
Chris

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~----------~----~----~----~------~----~------~--~---
