You will have this issue with any multi-process application that modifies data within individual processes. You have to decide, on a per-use-case basis, how visible data changes need to be to other processes. If it's the usual "save changes, then on the next hit view the results", your application will have to use a model where the data to be displayed is either loaded fresh from the database every time, or is loaded from some kind of session storage, which may be implemented as additional database tables or some other cross-process serialization scheme such as file-based sessions or memcached. My first Python project was the Myghty template/web framework, which has some typical solutions to this kind of issue built into it, namely file/memcached-based session storage as well as a data caching system.
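To illustrate the file-based session idea (this is just a minimal sketch of the concept, not Myghty's actual API - the function names here are made up for the example), every process reads the session from a shared file on each request rather than trusting its own in-memory state:

```python
import json
import os
import tempfile

# Minimal sketch of file-based cross-process session storage.
# save_session/load_session are illustrative names, not a real API.
SESSION_FILE = os.path.join(tempfile.gettempdir(), "session_demo.json")

def save_session(data):
    # Write the whole session to disk so any process can see it.
    with open(SESSION_FILE, "w") as f:
        json.dump(data, f)

def load_session():
    # Every request loads fresh from the shared file, never from a
    # per-process in-memory cache, so a change made in one apache
    # process is visible to all the others on their next request.
    with open(SESSION_FILE) as f:
        return json.load(f)

# "Process A" saves a change:
save_session({"post_title": "hello world"})
# "Process B" (a different interpreter) sees it on its next request:
print(load_session()["post_title"])  # -> hello world
```

A memcached-backed store works the same way conceptually; only the read/write calls change.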
The ZBlog app that comes with Myghty approaches this issue in the simplest way possible; the very first thing that happens on every HTTP request is an objectstore.clear(), which forces all data to be reloaded from the database for that request. The idea is that since SA allows you to make such great eager-loading queries, the reload operation is a lot quicker than it is with more simplistic data-loading paradigms. Generally, the tradeoff is data being very fresh vs. data being very persistent and already available - this tradeoff has to be decided on a case-by-case basis.
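The effect of clearing at the top of every request can be sketched without SA at all - here a plain dict stands in for the per-process object cache (the real call in SA is objectstore.clear(); everything else in this example is a toy stand-in):

```python
# Toy sketch of "clear the per-process cache on every request".
database = {"post:1": {"title": "old title"}}   # stands in for the real DB
identity_map = {}                               # per-process object cache

def get(key):
    # Return the cached copy if present, otherwise load from the "DB".
    if key not in identity_map:
        identity_map[key] = dict(database[key])
    return identity_map[key]

def handle_request(key):
    # First thing on every request: drop all cached state, so the
    # request always sees what is currently in the database, no matter
    # which process last modified it.
    identity_map.clear()
    return get(key)

get("post:1")                                   # warms this process's cache
database["post:1"]["title"] = "new title"       # another process commits a change
print(handle_request("post:1")["title"])        # -> new title
```

Without the clear() at the start of handle_request, the stale "old title" would be served from this process's cache indefinitely - which is exactly the symptom described below.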
At the moment, it's not within the scope of SQLAlchemy to manage the issue of cross-process caching/persistence; it is just a tool to streamline an application's database conversations. A cross-process caching solution is something else; there might be interesting ways to embed such a solution within SA, but that's a much larger beast to deal with. In any case, SA itself would only propose some kind of plugin architecture to support such systems but wouldn't want to provide the implementation; the implementation would vary tremendously based on the application environment, architecture, and functionality goals.
On Jan 21, 2006, at 6:33 PM, marek wrote:
I have a question regarding the use of Sqlalchemy from mod_python. The
question basically boils down to:
How will Sqlalchemy act when a request to add/delete/update a mapped
object goes to any of 20 embedded python interpreters?
The reason I ask is that I have just started using Sqlalchemy on a
project and have noticed that after making a change on a mapped
object, sometimes the expected result returns and sometimes not.
An example:
I delete a property of a mapped object, and then request a web page
that lists all the mapped objects and their properties.
The request can go to any of 20 apache processes, and when refreshing the list I sometimes get the expected results, but other times the list shows the property that should have been deleted.
My initial feeling, after seeing this, was that Sqlalchemy may do some caching, and depending on which apache process the request goes to, the object in that process may not be correctly synchronized.
Thanks,
Marek
-------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc. Do you grep through
log files
for problems? Stop! Download the new AJAX search engine that makes
searching your log files as easy as surfing the web. DOWNLOAD
SPLUNK!
http://sel.as-us.falkag.net/sel?
cmd=lnk&kid=103432&bid=230486&dat=121642
_______________________________________________
Sqlalchemy-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/sqlalchemy-users