OK.
Thanks a lot, once again.
Regards,
Pedro
Jim Fulton wrote:
On May 26, 2009, at 9:35 AM, Jim Fulton wrote:
...
FileStorage indexes can't be saved after they reach a certain size,
where size is roughly based on the number of objects.
I need to find a way to fix this.
The fix
On May 26, 2009, at 9:35 AM, Jim Fulton wrote:
...
FileStorage indexes can't be saved after they reach a certain size,
where size is roughly based on the number of objects.
I need to find a way to fix this.
The fix was pretty easy. I just checked it in and the fix will be in
the next 3.9
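The checked-in fix itself isn't shown in the thread. A speculative sketch of the general idea (writing a large oid-to-position index in bounded chunks instead of one monolithic pickle, so no single dump grows with the number of objects) might look like this; `save_index` and `load_index` are hypothetical names, not the actual ZODB code:

```python
import itertools
import pickle

def save_index(index, f, chunk=1000):
    """Dump a big oid->pos mapping as many small pickles plus a sentinel,
    so no single pickle's size scales with the whole index."""
    items = iter(index.items())
    while True:
        batch = list(itertools.islice(items, chunk))
        if not batch:
            break
        pickle.dump(batch, f)
    pickle.dump(None, f)  # end-of-index marker

def load_index(f):
    """Rebuild the mapping by reading batches until the sentinel."""
    index = {}
    while True:
        batch = pickle.load(f)
        if batch is None:
            return index
        index.update(batch)
```

Round-tripping through a file object recovers the original mapping; each individual pickle stays small regardless of how many objects the storage holds.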
Hanno Schlichting wrote:
Hanno Schlichting recently posted a nice graph showing the persistent
structure of a Plone Page object and its 9 (!) sub-objects.
http://blog.hannosch.eu/2009/05/visualizing-persistent-structure-of.html
That graph isn't quite correct ;-)
workflow_history has
2009/5/27 Chris Withers ch...@simplistix.co.uk:
Laurence Rowe wrote:
Jim Fulton wrote:
Well said. A feature I'd like to add is the ability to have persistent
objects that don't get their own database records, so that you can get the
benefit of having them track their changes without
Chris Withers wrote:
Hanno Schlichting wrote:
Nope. DateTime objects are plain old-style classes and don't inherit
from persistent.*.
Hmm, oh well, my bad...
In that case it must just be that their pickled form is huge compared to
an int ;-)
Sure:
len(cPickle.dumps(DateTime.DateTime(),
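As a rough stand-in (Zope's `DateTime` isn't assumed to be installed here, so the stdlib `datetime` illustrates the same point), the pickled form of a date object really is much larger than a pickled int:

```python
import pickle
from datetime import datetime

# Compare the pickle sizes of a date object and a plain int.
date_size = len(pickle.dumps(datetime(2009, 5, 27)))
int_size = len(pickle.dumps(42))
print(date_size, int_size)  # the datetime pickle is several times larger
```

Zope's `DateTime` stores considerably more instance state than stdlib `datetime`, so the gap in the thread's case would be even wider.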
On Wed, May 27, 2009 at 12:17:36PM +0200, Hanno Schlichting wrote:
Chris Withers wrote:
Hanno Schlichting wrote:
They are incredibly expensive to unpickle since all
the DWIM magic in their __init__ gets called each time, though.
How come? Unpickling doesn't call __init__ and I don't see
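The point that unpickling bypasses `__init__` is easy to verify for new-style classes with plain stdlib `pickle` (no Zope classes assumed here):

```python
import pickle

class Loud:
    calls = 0  # class-level counter of __init__ invocations

    def __init__(self):
        Loud.calls += 1
        self.x = 1

obj = Loud()                   # __init__ runs once here
data = pickle.dumps(obj)
restored = pickle.loads(data)  # __init__ is NOT called again

assert Loud.calls == 1         # only the original construction ran
assert restored.x == 1         # state came straight from the pickle
```

(The picture differs for old-style classes that define `__getinitargs__`, where unpickling does re-run `__init__`; that may be the case Hanno had in mind.)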
Marius Gedminas wrote:
On Wed, May 27, 2009 at 12:17:36PM +0200, Hanno Schlichting wrote:
Chris Withers wrote:
Hanno Schlichting wrote:
They are incredibly expensive to unpickle since all
the DWIM magic in their __init__ gets called each time, though.
How come? Unpickling doesn't call
On Wed, May 27, 2009 at 03:35:26PM +0200, Hanno Schlichting wrote:
Marius Gedminas wrote:
On Wed, May 27, 2009 at 12:17:36PM +0200, Hanno Schlichting wrote:
Chris Withers wrote:
Hanno Schlichting wrote:
They are incredibly expensive to unpickle since all
the DWIM magic in their __init__
Tres Seaver wrote:
Pedro Ferreira wrote:
Thanks a lot for your help. In fact, it was a matter of increasing the
maximum recursion limit.
There's still an unsolved issue, though. Each time we try to recover a
backup using repozo, we get a CRC error. Is this normal? Has it happened
to
Jim Fulton wrote:
On May 25, 2009, at 9:23 AM, Pedro Ferreira wrote:
Dear Andreas, Marius,
This means that you're using ZEO, right? Have you tried to use strace
to see what it's doing? Is it using any CPU time?
Yes, we're using ZEO.
It's doing a lot of lseek() and read() calls, i.e.:
On Tue, May 26, 2009 at 12:00:51PM +0200, Pedro Ferreira wrote:
Jim Fulton wrote:
What was in the ZEO server log when this happened?
2009-05-24T12:22:54 (28965) new connection ('137.138.128.213', 45138):
ManagedServerConnection ('137.138.128.213', 45138)
2009-05-24T12:22:54 (28965) new
On May 26, 2009, at 6:00 AM, Pedro Ferreira wrote:
Jim Fulton wrote:
What was in the ZEO server log when this happened?
2009-05-24T12:22:54 (28965) new connection ('137.138.128.213', 45138):
ManagedServerConnection ('137.138.128.213', 45138)
2009-05-24T12:22:54 (28965) new connection
Hello Jim,
Where's that certain size on the scale?
Tuesday, May 26, 2009, 3:35:56 PM, you wrote:
JF FileStorage indexes can't be saved after they reach a certain size,
JF where size is roughly based on the number of objects.
--
Best regards,
Adam GROSZER
On May 26, 2009, at 10:16 AM, Pedro Ferreira wrote:
That's what I was afraid of.
FileStorage indexes can't be saved after they reach a certain size,
where size is roughly based on the number of objects.
I need to find a way to fix this.
So, from this I infer that our database has grown in
Hi,
Pedro Ferreira wrote:
I've also tried to run the analyze.py script, but it gives me a
stream of "'type' object is unsubscriptable" errors, due to:
classinfo = pickle.loads(record.data)[0]
any suggestion?
I personally apply the attached patch to analyze.py, which does not load
pickled
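The attached patch isn't reproduced in the thread; one way such a patch could identify the class of each record without loading the pickle at all (a hedged sketch, not the actual patch) is to scan opcodes with the stdlib `pickletools` module:

```python
import pickle
import pickletools
from datetime import date

def pickle_class_name(data):
    """Return 'module.class' for the first class referenced in a pickle,
    without importing or instantiating anything."""
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in ("GLOBAL", "INST") and isinstance(arg, str):
            # GLOBAL/INST args come back as "module classname"
            return arg.replace(" ", ".")
    return None

data = pickle.dumps(date(2009, 5, 26), protocol=0)
print(pickle_class_name(data))  # datetime.date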
Jim Fulton wrote:
On May 26, 2009, at 10:16 AM, Pedro Ferreira wrote:
In any case, it's not such a surprising number, since we have ~73141
event objects and ~344484 contribution objects, plus ~492016 resource
objects, and then each one of these may contain authors, and for sure
some
On May 26, 2009, at 11:17 AM, Laurence Rowe wrote:
Jim Fulton wrote:
On May 26, 2009, at 10:16 AM, Pedro Ferreira wrote:
In any case, it's not such a surprising number, since we have ~73141
event objects and ~344484 contribution objects, plus ~492016
resource
objects, and then each one
2009/5/26 Pedro Ferreira jose.pedro.ferre...@cern.ch:
In any case, it's not such a surprising number, since we have ~73141
event objects and ~344484 contribution objects, plus ~492016 resource
objects, and then each one of these may contain authors, and for sure
some associated objects that
On May 26, 2009, at 10:21 AM, Adam GROSZER wrote:
Hello Jim,
Where's that certain size on the scale?
Tuesday, May 26, 2009, 3:35:56 PM, you wrote:
JF FileStorage indexes can't be saved after they reach a certain size,
JF where size is roughly based on the number of objects.
I shouldn't
Well said. A feature I'd like to add is the ability to have persistent
objects that don't get their own database records, so that you can get
the benefit of having them track their changes without incuring the
expense of a separate database object.
Jim
On May 26, 2009, at 11:48 AM, Chris
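The feature Jim describes doesn't exist in ZODB as stated; a toy sketch of the idea (every name below is made up for illustration) would be a sub-object that has no database record of its own but marks its owner changed on mutation, much as `PersistentMapping` does, except stored inline in the owner's record:

```python
class FakePersistent:
    """Stand-in for a real persistent.Persistent object."""
    def __init__(self):
        self.changed = False  # stand-in for _p_changed

class TrackedDict(dict):
    """A dict that would live inside its owner's record, propagating
    change-tracking to the owner instead of keeping its own record."""
    def __init__(self, owner):
        super().__init__()
        self._owner = owner

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self._owner.changed = True  # dirty the owner, not ourselves

doc = FakePersistent()
doc.data = TrackedDict(doc)
doc.data["title"] = "Hello"

assert doc.changed  # the owner is marked dirty; no separate record needed
```

The trade-off is exactly the one discussed above: one database record instead of ten, at the cost of rewriting the whole owner record on any sub-object change.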
Jim Fulton wrote:
Well said. A feature I'd like to add is the ability to have persistent
objects that don't get their own database records, so that you can get
the benefit of having them track their changes without incurring the
expense of a separate database object.
+lots
Hanno
Jim Fulton wrote:
FileStorage indexes can't be saved after they reach a certain size,
where size is roughly based on the number of objects.
I need to find a way to fix this.
It might be interesting to use SQLite for FileStorage indexes. With
SQLite, we wouldn't have to store the whole index
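Shane's suggestion could be prototyped with the stdlib `sqlite3` module: keep the oid-to-file-position index in a SQLite table so entries are updated row by row instead of re-pickling the entire index (a speculative sketch, not anything that exists in ZODB):

```python
import sqlite3

# On disk this database would live alongside Data.fs; in-memory here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE idx (oid BLOB PRIMARY KEY, pos INTEGER NOT NULL)")

def set_pos(oid, pos):
    # Each update touches one row; nothing ever rewrites the whole index.
    conn.execute("INSERT OR REPLACE INTO idx (oid, pos) VALUES (?, ?)",
                 (oid, pos))

def get_pos(oid):
    row = conn.execute("SELECT pos FROM idx WHERE oid = ?", (oid,)).fetchone()
    return row[0] if row else None

set_pos(b"\x00\x00\x00\x00\x00\x00\x00\x01", 4096)
print(get_pos(b"\x00\x00\x00\x00\x00\x00\x00\x01"))  # 4096
```

Saving then becomes a no-op (the index is already on disk), which sidesteps the size limit entirely, at the price of per-commit write overhead.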
On Tue, May 26, 2009 at 04:16:29PM +0200, Pedro Ferreira wrote:
Also, is there any documentation about the basic structures of the
database available? We found some information spread through different
sites, but we couldn't find exhaustive documentation for the API
(information about the
On Tue, May 26, 2009 at 1:58 PM, Shane Hathaway sh...@hathawaymix.org wrote:
Jim Fulton wrote:
FileStorage indexes can't be saved after they reach a certain size,
where size is roughly based on the number of objects.
I need to find a way to fix this.
It might be interesting to use SQLite for
Laurence Rowe wrote:
Jim Fulton wrote:
Well said. A feature I'd like to add is the ability to have persistent
objects that don't get their own database records, so that you can get
the benefit of having them track their changes without incurring the
expense of a separate database object.
Chris Withers wrote:
Laurence Rowe wrote:
Jim Fulton wrote:
Well said. A feature I'd like to add is the ability to have persistent
objects that don't get their own database records, so that you can get
the benefit of having them track their changes without incurring the
expense of a
On Mon, May 25, 2009 at 01:44:46PM +0200, Pedro Ferreira wrote:
We're using ZODB for the Indico Project (at CERN), since 2004, without
any kind of problem. However, today, our database went down and we can't
find a way to recover it. This is a major issue, since we have ~4000
users depending
Dear Andreas, Marius,
This means that you're using ZEO, right? Have you tried to use strace
to see what it's doing? Is it using any CPU time?
Yes, we're using ZEO.
It's doing a lot of lseek() and read() calls, i.e.:
read(6, "eq\7cdatetime\ndatetime\nq\10(U\n\7\326\t\r\f"..., 4096) = 4096
Dear all,
Thanks a lot for your help. In fact, it was a matter of increasing the
maximum recursion limit.
There's still an unsolved issue, though. Each time we try to recover a
backup using repozo, we get a CRC error. Is this normal? Has it happened
to anyone?
I guess we have a very large
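The recursion-limit failure Pedro describes can be reproduced with plain `pickle` on a deeply nested structure; raising `sys.setrecursionlimit` before dumping is the workaround (the depth and limit below are illustrative, not Indico's actual values):

```python
import pickle
import sys

def nested_list(depth):
    # Build a list nested `depth` levels deep.
    root = current = []
    for _ in range(depth):
        current.append([])
        current = current[0]
    return root

deep = nested_list(2000)

try:
    pickle.dumps(deep)          # exceeds the default limit (~1000 frames)
    hit_limit = False
except RecursionError:
    hit_limit = True

sys.setrecursionlimit(20000)    # the workaround: raise the ceiling
data = pickle.dumps(deep)       # now succeeds
```

(In the Python 2 of this thread's era the failure surfaces as `RuntimeError: maximum recursion depth exceeded` rather than `RecursionError`.)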
Pedro Ferreira wrote:
Thanks a lot for your help. In fact, it was a matter of increasing the
maximum recursion limit.
There's still an unsolved issue, though. Each time we try to recover a
backup using repozo, we get a CRC error. Is this normal?
Pedro Ferreira wrote:
Dear all,
Thanks a lot for your help. In fact, it was a matter of increasing the
maximum recursion limit.
There's still an unsolved issue, though. Each time we try to recover a
backup using repozo, we get a CRC error. Is this normal? Has it happened
to anyone?
I
On May 25, 2009, at 9:23 AM, Pedro Ferreira wrote:
Dear Andreas, Marius,
This means that you're using ZEO, right? Have you tried to use
strace
to see what it's doing? Is it using any CPU time?
Yes, we're using ZEO.
It's doing a lot of lseek() and read() calls, i.e.:
read(6,