On Thu, 5 Apr 2001, Michael R. Bernstein wrote:
> I'm trying to find out if there is a point where you start getting
> non-linear performance penalties for additional objects (storing,
> retrieving, or indexing).
I've just finished adding a fairly small number of objects: 5000.
For every 1000 objects added, the Data.fs grew to about 900MB; that's
when things started slowing down, in a non-linear fashion (this is more a
hunch than something I paid much attention to).
I paused the script (fancy Unix command: "^Z") after every 1000th object,
packed the database (which shrank to 19.5MB! Hmpf.) and restarted the
script (again, fancy Unix command: "fg"). Then I was back to the same
speed I had initially.
Does ZODB have a problem with big Data.fs files? Not that I know
of. However, I do have a really fast SCSI subsystem here, so that
shouldn't be a big problem either.
I did some copying around with a couple of gigs, and it seems that my
hunch is right: ZODB does not have a problem with big Data.fs files; the
filesystem does.
This could be caused indirectly by ZODB if it performs too many operations
on the file, but I'm not too concerned about that. I.e., a solution could
be to have ZODB touch the Data.fs less frequently, or do it in another
fashion. However, that wouldn't really solve anything, unless ZODB is a
total maniac with the filesystem.
I'm converting to ReiserFS this afternoon; maybe that will improve things.
Someone told me that ZEO and bulk-adding could be something to look at...
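Haven't tried that yet, but hooking the same loop up to ZEO would look
roughly like this (a sketch; it assumes a ZEO server is already running,
and the localhost:8100 address is made up):

    import transaction
    from ZEO.ClientStorage import ClientStorage
    from ZODB import DB

    # Connect to an assumed ZEO server instead of opening Data.fs directly.
    storage = ClientStorage(('localhost', 8100))
    db = DB(storage)
    conn = db.open()
    root = conn.root()
    # ... same bulk-add loop as above; several clients could now add
    # objects in parallel, and the Data.fs stays with the ZEO server ...
    transaction.commit()
    db.close()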