I'm looking at using ZODB with two databases stored in different files,
where different threads will periodically insert, update, and delete
records in both databases.  A couple of questions to make sure this is
feasible:

When one of the threads wants to insert a set of records, I need to ensure
that those changes are committed in a transaction that does not overlap
with the records being inserted by another thread.  It looks like the
default API assumes a single global transaction (transaction.commit()),
but I saw mention of creating your own transaction manager that you can
bind at DB.open() time.
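
For concreteness, here is a minimal sketch of what I think that looks like
(insert_records and the record shape are just placeholders I made up, so
please correct me if I've misread the API):

    import transaction

    def insert_records(db, records):
        # A private transaction manager, so this commit doesn't overlap
        # with whatever other threads commit through the default manager.
        tm = transaction.TransactionManager()
        conn = db.open(transaction_manager=tm)
        try:
            root = conn.root()
            for key, value in records:
                root[key] = value
            tm.commit()
        except Exception:
            tm.abort()
            raise
        finally:
            conn.close()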

Does this mean that, to support this model, I have to re-open the database
and bind a transaction manager in every method where I want to perform a
transactional action?  For the moment I was assuming a shared DB access
object (a singleton access class).
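
What I was assuming so far is roughly this (names and file names are made
up): a single shared object holding the two DB handles, with each method
opening its own connection as in the sketch above:

    from ZODB import DB
    from ZODB.FileStorage import FileStorage

    class Databases:
        """Shared holder for the two DB handles (the singleton access class)."""
        _instance = None

        @classmethod
        def instance(cls):
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance

        def __init__(self):
            self.db_a = DB(FileStorage('a.fs'))
            self.db_b = DB(FileStorage('b.fs'))

So a thread would call insert_records(Databases.instance().db_a, records)
from the sketch above, i.e. the DB objects stay open and only the
connection plus its transaction manager are created per call.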

Or is the better model something like a pool of DB access objects, where
each one maintains a DB handle and a transaction manager, and one is taken
out of the pool each time a transaction needs to run?  I'm a bit worried
about the overhead of re-opening the DB; is this something I shouldn't
worry about?
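
To make that second option concrete, something like this is what I have in
mind (AccessSlot/AccessPool are names I just made up, and I don't know yet
whether keeping connections open like this is sound):

    import queue
    import transaction

    class AccessSlot:
        """One pooled handle: an open connection bound to its own transaction manager."""
        def __init__(self, db):
            self.tm = transaction.TransactionManager()
            self.conn = db.open(transaction_manager=self.tm)

    class AccessPool:
        """Check out a slot per transaction instead of opening a connection each time."""
        def __init__(self, db, size=5):
            self._slots = queue.Queue()
            for _ in range(size):
                self._slots.put(AccessSlot(db))

        def run(self, work):
            slot = self._slots.get()       # block until a slot is free
            try:
                work(slot.conn.root())     # caller mutates objects reachable from root
                slot.tm.commit()
            except Exception:
                slot.tm.abort()
                raise
            finally:
                self._slots.put(slot)

Compared with the first sketch, this avoids calling db.open() on every
insert, which is the overhead I'm unsure about.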


Thanks, I appreciate any advice.  I'm new to Python and ZODB and trying to
figure out the best pattern to support a multi-user web scenario with
multiple databases.

--Jared