Thanks for your comments, guys. Being new to SQA and not a DB expert, I had
misunderstood the session mechanism. Attached is some code that reproduces my
"problem" (which turns out not to be one).
My understanding was: once an object is queried through a session.query(...)
and kept in memory, its attributes can be read and possibly changed. The
actual changes in the DB are made on a session.commit() call. However, I
assumed the next session.query(...) call would get an up-to-date object from
the database <- that's not the case and I had missed that. What confused me
was that SQL statements were displayed when I turned echo to True (engine
configuration) _but_ these SQL statements do not seem to be issued to the
underlying sqlite database.
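To make that explicit for myself, here is a minimal sketch (the Thing model,
the /tmp path and the values are made up, not taken from the attached code)
of the behaviour I was missing: the second query does emit a SELECT, but the
attribute values of the object already held in the identity map are not
overwritten unless the object is expired or refreshed:

from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Thing(Base):
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)
    value = Column(Integer)

engine = create_engine('sqlite:////tmp/demo.sqlite', echo=True)
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine, expire_on_commit=False)
session = Session()

session.add(Thing(id=1, value=0))
session.commit()

obj = session.query(Thing).filter_by(id=1).one()
# ... suppose another process updates the row to value=42 here ...
obj2 = session.query(Thing).filter_by(id=1).one()  # a SELECT is emitted...
assert obj2 is obj      # ...but it is the same identity-map object
print(obj2.value)       # still shows the in-memory value loaded earlier
session.refresh(obj)    # forces a reload from the database
print(obj.value)        # now reflects what is actually stored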
The attached code contains:
- a reader process which gets the value from the DB
- a writer process which writes it to the DB
If you comment out line #29 of reader.py, the DB change made by the writer is
not seen.
If you uncomment line #29 of reader.py, the DB change made by the writer is
seen.
My (final?) questions are:
- Why does session.query(...) not update the session with fresh values?
Indeed, if I just want the value currently held in the session, all I need
is: my_mapped_object.attribute...
- In my production code, my SQA stuff is encapsulated in a library which is
imported by several processes (see the code below). Do I absolutely need to
call session.refresh(my_object) just after a session.query(...) to be sure
that the data is fresh?
import sqlite3

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.pool import NullPool


class OrmManager:
    def __init__(self, database, metadata, echo=False):
        engine = create_engine('sqlite:///' + database, echo=echo,
                               connect_args={'detect_types':
                                             sqlite3.PARSE_DECLTYPES |
                                             sqlite3.PARSE_COLNAMES},
                               native_datetime=True,
                               poolclass=NullPool,
                               convert_unicode=True)
        metadata.create_all(engine)
        # This factory is thread safe: the same session object is returned
        # to the caller within a given thread. If called from another
        # thread, the returned session object will be different.
        session_factory = sessionmaker(bind=engine, expire_on_commit=False)
        self.session = scoped_session(session_factory)

    def get_session(self):
        session = self.session()
        return session
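For completeness, this is roughly how the manager gets used (a sketch with a
made-up Base.metadata and DB path); scoped_session is what guarantees that
repeated calls from the same thread hand back the same Session instance:

orm = OrmManager('/dev/shm/my_db.sqlite', Base.metadata)
session = orm.get_session()
assert orm.get_session() is session  # same thread -> same Session object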
class MyLibrary:
    ...
    def __init__(self, orm):
        self.orm = orm

    def my_api_method_get_object(self):
        """Get object from database"""
        session = self.orm.get_session()
        my_object = session.query(...).one()
        session.refresh(my_object)  # <- is this line needed?
        return my_object
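And, for comparison, a variant I am considering (a sketch only, hypothetical
method name, same placeholder query as above): expiring the session before
the query, or letting the query overwrite the identity-map entries, instead
of refreshing afterwards:

    def my_api_method_get_object_fresh(self):
        """Variant: make sure attribute values are reloaded from the DB."""
        session = self.orm.get_session()
        # Option A: expire all loaded objects so the query's rows repopulate them
        session.expire_all()
        my_object = session.query(...).one()
        # Option B (instead of A): overwrite identity-map entries on load
        # my_object = session.query(...).populate_existing().one()
        return my_object

I am not sure which of these, if any, is the recommended approach.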
Thanks again for your time supporting a newbie.
Pierre
On Monday, January 20, 2014 5:12:58 PM UTC+1, pr64 wrote:
>
> Hi,
>
> Strange behaviour with SQA in a multi-process environment... I already
> posted on StackOverflow
> <http://stackoverflow.com/questions/21109794/delayed-change-using-sqlalchemy>
> for a web app, but I'm still missing some understanding so I'm posting
> here too.
>
> I've created an application where my "sqa calls" are encapsulated: My
> API's methods always do the same kind of stuff:
> 1- request a session
> 2- do my stuff - mainly requesting the db through session.query(...)
> 3- possibly:
> - modify some mapped object attributes
> - and write to db (session.commit()).
>
> Several consecutive calls to my API's methods return the same session
> object as long as I'm in the same thread, which is the required behaviour.
> Some commits happen but the session is always the same
> (expire_on_commit=False).
>
> # This factory is thread safe: the same session object is returned to the
> # caller within a thread. If called from another thread, the returned
> # session object will be different.
> session_factory = sessionmaker(bind=engine, expire_on_commit=False)
> session_maker = scoped_session(session_factory)
>
> # to get a session:
> session = session_maker()
>
> If I fork another thread, using the API is still safe since the first step
> (getting a session) will return a new session instance (due to
> scoped_session)
>
> Here's my problem:
>
> I have two processes with such an architecture (based on this API). While
> P1 is writing to the DB (session.commit()), P2 is reading from it
> (session.query(...)). The problem is that P2 does not see the DB change and
> has to poll several times before getting it. I am requesting the DB through
> session.query(...) calls from P2 but nothing happens. Sometimes it works,
> though! :-( I don't know why (I did not change the code) -> out of 10
> tries, it works maybe 3 times...
>
> If I change my API to close the session after every commit, P2 sees the
> change, as if committing was not actually writing to disk and closing the
> session did it. I do not understand this behaviour, as the SQA
> documentation encourages creating a global session per thread and working
> with it (which is what the scoped_session mechanism does).
>
> My configuration is: Linux, SQA 0.8.4 and a RAM-based DB
> (/dev/shm/my_db.sqlite, but the problem is still there with a regular
> disk-based DB at ~/my_db.sqlite).
>
> Thanks a lot for your time,
>
> Pierre
>
sqa_multi_process.tar.gz
