OK.
Thanks a lot, once again.
Regards,
Pedro
Jim Fulton wrote:
> On May 26, 2009, at 9:35 AM, Jim Fulton wrote:
> ...
>
>> FileStorage indexes can't be saved after they reach a certain size,
>> where size is roughly based on the number of objects.
>>
>> I need to find a way to fix this.
>>
On May 26, 2009, at 9:35 AM, Jim Fulton wrote:
...
> FileStorage indexes can't be saved after they reach a certain size,
> where size is roughly based on the number of objects.
>
> I need to find a way to fix this.
The fix was pretty easy. I just checked it in and the fix will be in
the next 3.
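The actual change isn't shown in the thread, but the underlying problem is that one monolithic pickle has to cover the entire index. A hypothetical sketch of one way around that (not the real ZODB fix, and `save_index`/`load_index` are invented names) is to stream the mapping as a sequence of small pickles:

```python
import io
import pickle

def save_index(index, f, chunk=1000):
    # Write the oid -> file-position mapping as a stream of small pickles,
    # so no single pickle ever has to hold the whole index.
    items = sorted(index.items())
    for i in range(0, len(items), chunk):
        pickle.dump(dict(items[i:i + chunk]), f, 2)
    pickle.dump(None, f, 2)  # sentinel marking the end of the stream

def load_index(f):
    index = {}
    while True:
        part = pickle.load(f)
        if part is None:
            return index
        index.update(part)

buf = io.BytesIO()
save_index({n: n * 64 for n in range(2500)}, buf)
buf.seek(0)
restored = load_index(buf)
```

Each chunk pickles independently, so index size no longer bounds any single pickle.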
On Wed, May 27, 2009 at 03:35:26PM +0200, Hanno Schlichting wrote:
> Marius Gedminas wrote:
> > On Wed, May 27, 2009 at 12:17:36PM +0200, Hanno Schlichting wrote:
> >> Chris Withers wrote:
> >>> Hanno Schlichting wrote:
> They are incredibly expensive to unpickle since all
> the DWIM magic in their __init__ gets called each time, though.
Marius Gedminas wrote:
> On Wed, May 27, 2009 at 12:17:36PM +0200, Hanno Schlichting wrote:
>> Chris Withers wrote:
>>> Hanno Schlichting wrote:
They are incredibly expensive to unpickle since all
the DWIM magic in their __init__ gets called each time, though.
>>> How come? Unpickling doesn't call __init__ and
On Wed, May 27, 2009 at 12:17:36PM +0200, Hanno Schlichting wrote:
> Chris Withers wrote:
> > Hanno Schlichting wrote:
> >> They are incredibly expensive to unpickle since all
> >> the DWIM magic in their __init__ gets called each time, though.
> >
> > How come? Unpickling doesn't call __init__ and
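Marius's point is easy to check directly. Note the thread's nuance: Python 2 old-style classes (which Zope's DateTime was) could have `__init__` re-run at unpickling time via the `__getinitargs__` hook; this Python 3 sketch shows only the default behavior, where unpickling bypasses `__init__`:

```python
import pickle

init_calls = []

class Tracked:
    def __init__(self):
        init_calls.append(1)
        self.x = 1

obj = Tracked()                        # __init__ runs once, here
clone = pickle.loads(pickle.dumps(obj))
# Unpickling did NOT call __init__ again: pickle recreates the
# instance and restores its __dict__ directly.
```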
Chris Withers wrote:
> Hanno Schlichting wrote:
>> Nope. DateTime objects are plain old-style classes and don't inherit
>> from persistent.*.
>
> Hmm, oh well, my bad...
> In that case it must just be that their pickled form is huge compared to
> an int ;-)
Sure:
len(cPickle.dumps(DateTime.Dat
2009/5/27 Chris Withers :
> Laurence Rowe wrote:
>>
>> Jim Fulton wrote:
>>>
>>> Well said. A feature I'd like to add is the ability to have persistent
>>> objects that don't get their own database records, so that you can get the
>>> benefit of having them track their changes without incurring the
>>> expense of a separate database object.
Hanno Schlichting wrote:
>>> Hanno Schlichting recently posted a nice graph showing the persistent
>>> structure of a Plone Page object and its 9 (!) sub-objects.
>>> http://blog.hannosch.eu/2009/05/visualizing-persistent-structure-of.html
>> That graph isn't quite correct ;-)
>>
>> workflow_his
Chris Withers wrote:
> Laurence Rowe wrote:
>> Jim Fulton wrote:
>>> Well said. A feature I'd like to add is the ability to have persistent
>>> objects that don't get their own database records, so that you can get
>>> the benefit of having them track their changes without incurring the
>>> expense of a separate database object.
Laurence Rowe wrote:
> Jim Fulton wrote:
>> Well said. A feature I'd like to add is the ability to have persistent
>> objects that don't get their own database records, so that you can get
>> the benefit of having them track their changes without incurring the
>> expense of a separate database object.
On Tue, May 26, 2009 at 1:58 PM, Shane Hathaway wrote:
> Jim Fulton wrote:
>> FileStorage indexes can't be saved after they reach a certain size,
>> where size is roughly based on the number of objects.
>>
>> I need to find a way to fix this.
>
> It might be interesting to use SQLite for FileStorage indexes.
On Tue, May 26, 2009 at 04:16:29PM +0200, Pedro Ferreira wrote:
> Also, is there any documentation about the basic structures of the
> database available? We found some information spread through different
> sites, but we couldn't find exhaustive documentation for the API
> (information about the d
Jim Fulton wrote:
> FileStorage indexes can't be saved after they reach a certain size,
> where size is roughly based on the number of objects.
>
> I need to find a way to fix this.
It might be interesting to use SQLite for FileStorage indexes. With
SQLite, we wouldn't have to store the whole index in memory.
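Shane's suggestion can be sketched with the stdlib `sqlite3` module. The schema and function names here are hypothetical, not real ZODB code; the point is that the oid-to-offset mapping lives on disk and is never pickled as a whole:

```python
import sqlite3

# Hypothetical oid -> file-offset index kept in SQLite rather than
# as one giant in-memory dict that must be pickled in one piece.
conn = sqlite3.connect(":memory:")   # a real storage would use a file
conn.execute("CREATE TABLE fsindex (oid BLOB PRIMARY KEY, pos INTEGER)")

def set_pos(oid, pos):
    conn.execute("INSERT OR REPLACE INTO fsindex VALUES (?, ?)", (oid, pos))

def get_pos(oid):
    row = conn.execute(
        "SELECT pos FROM fsindex WHERE oid = ?", (oid,)).fetchone()
    return row[0] if row else None

set_pos(b"\x00" * 8, 4096)
```

Lookups then touch only the rows they need, trading memory for per-access I/O.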
Jim Fulton wrote:
> Well said. A feature I'd like to add is the ability to have persistent
> objects that don't get their own database records, so that you can get
> the benefit of having them track their changes without incurring the
> expense of a separate database object.
+lots
Hanno Schlichting
Well said. A feature I'd like to add is the ability to have persistent
objects that don't get their own database records, so that you can get
the benefit of having them track their changes without incurring the
expense of a separate database object.
Jim
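Jim's wished-for feature can be sketched without ZODB at all: an embedded object that owns no database record of its own but flags its container as changed whenever it mutates. All names here are hypothetical, and the `changed` attribute stands in for `_p_changed`:

```python
class Container:
    # Stand-in for a persistent object that has its own database record.
    changed = False

class Embedded:
    # Hypothetical sketch: stored inside its container's record, yet
    # still reports modifications so the container gets re-saved.
    def __init__(self, container):
        object.__setattr__(self, "_container", container)

    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
        self._container.changed = True   # stand-in for _p_changed = True

doc = Container()
body = Embedded(doc)
body.text = "hello"   # marks doc as changed, no separate record needed
```

The real feature would hook into the persistence machinery rather than `__setattr__`, but the change-propagation idea is the same.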
On May 26, 2009, at 11:48 AM, Chris B
2009/5/26 Pedro Ferreira :
> In any case, it's not such a surprising number, since we have ~73141
> event objects and ~344484 contribution objects, plus ~492016 resource
> objects, and then each one of these may contain authors, and for sure
> some associated objects that store different bits of
Jim Fulton wrote:
> On May 26, 2009, at 10:16 AM, Pedro Ferreira wrote:
>> In any case, it's not such a surprising number, since we have ~73141
>> event objects and ~344484 contribution objects, plus ~492016 resource
>> objects, and then each one of these may contain authors, and for sure
>> som
On May 26, 2009, at 11:17 AM, Laurence Rowe wrote:
> Jim Fulton wrote:
>> On May 26, 2009, at 10:16 AM, Pedro Ferreira wrote:
>
>>> In any case, it's not such a surprising number, since we have ~73141
>>> event objects and ~344484 contribution objects, plus ~492016
>>> resource
>>> objects, an
Hi,
Pedro Ferreira wrote:
I've also tried to run the "analyze.py" script, but it returns a
stream of "'type' object is unsubscriptable" errors, due to:
classinfo = pickle.loads(record.data)[0]
any suggestion?
I personally apply the attached patch to analyze.py, which does not load
pickles
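The patch itself isn't reproduced here, but the idea of identifying a record's class without executing its pickle can be sketched with the stdlib `pickletools`, which walks opcodes instead of loading them (assuming protocol 2 or lower, where class references appear as GLOBAL opcodes; `class_of` is an invented name):

```python
import pickle
import pickletools

class Sample:
    pass

def class_of(data):
    # Scan opcodes rather than executing the pickle: the first GLOBAL
    # opcode carries "module name" for the stored object's class.
    for op, arg, pos in pickletools.genops(data):
        if op.name == "GLOBAL":
            module, name = arg.split(" ", 1)
            return "%s.%s" % (module, name)
    return None            # no class reference in this pickle

record = pickle.dumps(Sample(), 2)
```

Because nothing is instantiated, this never trips over classes whose modules can't be imported, which is exactly the failure mode analyze.py hits.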
On May 26, 2009, at 10:16 AM, Pedro Ferreira wrote:
>
>>
>> That's what I was afraid of.
>>
>> FileStorage indexes can't be saved after they reach a certain size,
>> where size is roughly based on the number of objects.
>>
>> I need to find a way to fix this.
> So, from this I infer that our database
On May 26, 2009, at 10:21 AM, Adam GROSZER wrote:
> Hello Jim,
>
> Where's that certain size on the scale?
>
> Tuesday, May 26, 2009, 3:35:56 PM, you wrote:
>
> JF> FileStorage indexes can't be saved after they reach a certain size,
> JF> where size is roughly based on the number of objects.
I
Hello Jim,
Where's that certain size on the scale?
Tuesday, May 26, 2009, 3:35:56 PM, you wrote:
JF> FileStorage indexes can't be saved after they reach a certain size,
JF> where size is roughly based on the number of objects.
--
Best regards,
Adam GROSZER
>
> That's what I was afraid of.
>
> FileStorage indexes can't be saved after they reach a certain size,
> where size is roughly based on the number of objects.
>
> I need to find a way to fix this.
So, from this I infer that our database has grown in such a proportion
that we're reaching some of th
On May 26, 2009, at 6:00 AM, Pedro Ferreira wrote:
> Jim Fulton wrote:
>>
>> What was in the ZEO server log when this happened?
>>
> 2009-05-24T12:22:54 (28965) new connection ('137.138.128.213', 45138):
>
> 2009-05-24T12:22:54 (28965) new connection ('137.138.128.213', 45139):
>
> 2009-05-24T1
On Tue, May 26, 2009 at 12:00:51PM +0200, Pedro Ferreira wrote:
> Jim Fulton wrote:
> > What was in the ZEO server log when this happened?
> >
> 2009-05-24T12:22:54 (28965) new connection ('137.138.128.213', 45138):
>
> 2009-05-24T12:22:54 (28965) new connection ('137.138.128.213', 45139):
>
> 20
Jim Fulton wrote:
>
> On May 25, 2009, at 9:23 AM, Pedro Ferreira wrote:
>
>> Dear Andreas, Marius,
>>
>>> This means that you're using ZEO, right? Have you tried to use strace
>>> to see what it's doing? Is it using any CPU time?
>>>
>>>
>> Yes, we're using ZEO.
>> It's doing a lot of lseek() and read() calls, i.e.:
Tres Seaver wrote:
> Pedro Ferreira wrote:
>
> > Thanks a lot for your help. In fact, it was a matter of increasing the
> > maximum recursion limit.
> > There's still an unsolved issue, though. Each time we try to recover a
> > backup using repozo, we get a CRC error. Is this normal? Has it happened to anyone?
On May 25, 2009, at 9:23 AM, Pedro Ferreira wrote:
> Dear Andreas, Marius,
>
>> This means that you're using ZEO, right? Have you tried to use
>> strace
>> to see what it's doing? Is it using any CPU time?
>>
>>
> Yes, we're using ZEO.
> It's doing a lot of lseek() and read() calls, i.e.:
>
>
Pedro Ferreira wrote:
> Dear all,
>
> Thanks a lot for your help. In fact, it was a matter of increasing the
> maximum recursion limit.
> There's still an unsolved issue, though. Each time we try to recover a
> backup using repozo, we get a CRC error. Is this normal? Has it happened
> to anyone?
>
Pedro Ferreira wrote:
> Thanks a lot for your help. In fact, it was a matter of increasing the
> maximum recursion limit.
> There's still an unsolved issue, though. Each time we try to recover a
> backup using repozo, we get a CRC error. Is this normal?
Dear all,
Thanks a lot for your help. In fact, it was a matter of increasing the
maximum recursion limit.
There's still an unsolved issue, though. Each time we try to recover a
backup using repozo, we get a CRC error. Is this normal? Has it happened
to anyone?
I guess we have a very large database
Dear Andreas, Marius,
> This means that you're using ZEO, right? Have you tried to use strace
> to see what it's doing? Is it using any CPU time?
>
>
Yes, we're using ZEO.
It's doing a lot of lseek() and read() calls, i.e.:
read(6, "eq\7cdatetime\ndatetime\nq\10(U\n\7\326\t\r\f"..., 4096) =
On Mon, May 25, 2009 at 01:44:46PM +0200, Pedro Ferreira wrote:
> We're using ZODB for the Indico Project (at CERN), since 2004, without
> any kind of problem. However, today, our database went down and we can't
> find a way to recover it. This is a major issue, since we have ~4000
> users depending on this application
On 25.05.09 13:44, Pedro Ferreira wrote:
> Hello,
>
> We're using ZODB for the Indico Project (at CERN), since 2004, without
> any kind of problem. However, today, our database went down and we can't
> find a way to recover it. This is a major issue, since we have ~4000
> users depending on this application
Hello,
We're using ZODB for the Indico Project (at CERN), since 2004, without
any kind of problem. However, today, our database went down and we can't
find a way to recover it. This is a major issue, since we have ~4000
users depending on this application, and we're simply not able to access
the d