"Christopher N. Deckard" <[EMAIL PROTECTED]> writes:
> I have roughly 1,500 people in the database. I wrote a script to
> migrate the people. Basically it does getProperty for each property
> on each person folder, then creates the new person object which uses
> XML. Zope apparently cannot handle this number of operations in one
> transaction. Zope gets slower and slower and eventually becomes
> unresponsive. It looks like the script has completed, but nothing is
> ever committed to the ZODB, and since Zope is unresponsive it must
> be restarted. This, as expected, kills the entire transaction,
> which was never committed.
> Is it known that large numbers of operations, such as above, in a
> single transaction can cause problems? A transaction, of course,
> being a request, and an operation being something like
In practice I've only seen problems when I'm dealing with lots of
data, at which point I need to be thinking about this anyway :)
> I've solved the "problem" by using xmlrpc and for person in people
> calling my migrate_person script for only one person at a time.
> This is SO MUCH FASTER. I previously ran the script that migrates
> all of the people, and after 8 hours it still had not completed.
As Jens replied, this is because the transactions are getting
committed for each person, and you can do this without xmlrpc by
committing in your script.
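A minimal sketch of committing per person inside the script itself. In Zope 2 the builtin `get_transaction()` returns the current transaction; here a stub stands in for it so the batching logic is runnable on its own, and `migrate_person` is a hypothetical placeholder for the real getProperty-to-XML conversion:

```python
# Sketch: commit once per person instead of one huge transaction.
# The real script would call Zope's get_transaction().commit(); this
# stub Transaction class stands in so the loop is runnable as-is.

class Transaction:
    """Stand-in for Zope's transaction object (hypothetical stub)."""
    commits = 0

    def commit(self):
        Transaction.commits += 1

_txn = Transaction()

def get_transaction():
    # In Zope 2 this builtin returns the current transaction.
    return _txn

def migrate_person(person):
    # Placeholder for the real getProperty -> new XML person object.
    return {"name": person}

people = ["person%d" % i for i in range(1500)]
migrated = []
for person in people:
    migrated.append(migrate_person(person))
    # Commit after each person: work is durable, memory stays flat,
    # and no single transaction grows without bound.
    get_transaction().commit()

print(Transaction.commits)  # one commit per person
```

Same effect as the xmlrpc workaround, without the per-request overhead of 1,500 HTTP round trips.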
The possible drawback of committing after each person is that you're
committing a new version of the object being modified. Depending on
how you're storing the stuff this can grow your ZODB - if every person
is a node in a single ParsedXML document, you'd be storing 1500
versions, each one person bigger than the last.
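Back-of-envelope arithmetic for that growth, assuming a hypothetical 1 KB per person: if all 1,500 people live in one ParsedXML document, each commit stores a fresh revision of the whole document, so revision i costs roughly i KB and the revisions sum to about a gigabyte before packing:

```python
# Estimate ZODB growth when each commit rewrites one big document.
# person_kb is an assumed size; the real figure depends on your data.

person_kb = 1    # assumed size per person, in KB
n = 1500         # number of people migrated

# Revision i holds i people, so storage before packing is 1+2+...+n KB.
total_kb = sum(i * person_kb for i in range(1, n + 1))

print(total_kb)       # ~1.1 GB of stacked revisions
print(n * person_kb)  # vs. 1500 KB of live data after a pack
```

Packing the database reclaims the old revisions, but you have to have the disk for them in the meantime.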
You can use transactions and subtransactions to juggle database size,
memory usage, and temp file usage.
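One way that juggling can look, sketched under the Zope 2 convention that `commit(1)` commits a subtransaction (flushing modified objects to a temp file and freeing RAM, without making the work durable) while a final plain `commit()` writes everything to the ZODB as one new version. The Transaction stub and batch size are illustrative:

```python
# Sketch: subtransaction every BATCH people to bound memory, then one
# final durable commit. The stub only counts calls; in Zope 2 you would
# call get_transaction().commit(1) and get_transaction().commit().

class Transaction:
    """Stand-in for Zope's transaction object (hypothetical stub)."""
    def __init__(self):
        self.sub_commits = 0   # flushed to temp file, not yet durable
        self.full_commits = 0  # durable in the ZODB

    def commit(self, subtransaction=0):
        if subtransaction:
            self.sub_commits += 1
        else:
            self.full_commits += 1

txn = Transaction()
BATCH = 100  # assumed batch size; tune against your memory budget

for i in range(1, 1501):
    # ... migrate person i here ...
    if i % BATCH == 0:
        txn.commit(1)  # subtransaction: cap RAM, grow the temp file
txn.commit()           # one durable commit, one new object version

print(txn.sub_commits, txn.full_commits)
```

That trades the 1,500 stored revisions of per-person commits for temp-file usage during the run, at the cost of losing everything if the process dies before the final commit.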
Karl Anderson [EMAIL PROTECTED] http://www.monkey.org/~kra/
Zope-Dev maillist - [EMAIL PROTECTED]
** No cross posts or HTML encoding! **