> That's what I was afraid of.
> FileStorage indexes can't be saved after they reach a certain size,
> where size is roughly based on the number of objects.
> I need to find a way to fix this.
So, from this I infer that our database has grown to the point where
we're hitting some of the current limits (?) of ZODB. Is there any
other possible failure related to this that can be foreseen? I mean, are
there any other limits that could cause problems because of the large
number of objects? It would be very important for us to know.

>>> Also, how many objects are in your database?
>> Hmmm... I have no idea... is there an easy way of calculating that?
> >>> import ZEO.ClientStorage
> >>> len(ZEO.ClientStorage.ClientStorage(addr))
> where addr is the address of your storage server as a (host, port) tuple.
So, this returns 19283681. Does this include object revisions?

In any case, it's not such a surprising number, since we have ~73141
event objects, ~344484 contribution objects, and ~492016 resource
objects, and each of these may contain authors and, for sure, some
associated objects that store different bits of info... So, even if
it doesn't include revisions, 19M is not such a surprising number.
I've also tried to run the "analyze.py" script, but it returns a
stream of "'type' object is unsubscriptable" errors, due to:

classinfo = pickle.loads(record.data)[0]

Any suggestions?
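For what it's worth, the usual cause of that error is that the leading pickle in a data record is sometimes a bare class object rather than a (class, args) tuple, so indexing it with [0] raises TypeError. A minimal, hedged sketch of a more tolerant version of that line (class_from_record and the simulated records below are illustrative stand-ins, not part of analyze.py):

```python
import pickle

def class_from_record(data):
    # A ZODB data record starts with a pickle describing the object's
    # class.  In older records this is a (class, args) tuple; in newer
    # ones it can be the class object itself, and indexing a bare class
    # with [0] raises the "'type' object is unsubscriptable" TypeError.
    classinfo = pickle.loads(data)
    if isinstance(classinfo, tuple):
        classinfo = classinfo[0]
    return classinfo

class Dummy(object):
    pass

# Simulated record headers: old-style tuple vs. bare class.
old_style = pickle.dumps((Dummy, None))
new_style = pickle.dumps(Dummy)
print(class_from_record(old_style) is Dummy)  # True
print(class_from_record(new_style) is Dummy)  # True
```

Swapping the isinstance check into analyze.py in place of the unconditional [0] might let it run to completion on a database with both record styles.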

Also, is there any documentation available on the basic structures of
the database? We found some information spread across different sites,
but we couldn't find exhaustive documentation for the API (information
about the different kinds of persistent classes, etc.).
Is there any documentation on this?




José Pedro Ferreira
(Software Developer, Indico Project)

Geneva, Switzerland

Office 513-R-042
Tel. +41 22 76 77159

For more information about ZODB, see the ZODB Wiki:

ZODB-Dev mailing list  -  ZODB-Dev@zope.org
