A new proposal:
http://www.zope.org/Wikis/ZODB/MemorySizeLimitedCache
It outlines how to implement a ZODB cache limited not by the
number of contained objects but by their estimated memory size.
Feedback welcome -- either here or in the Wiki.
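To make the idea concrete, here is a minimal sketch (not the proposal's actual design, which lives in the Wiki): an LRU cache that evicts by accumulated estimated byte size instead of entry count. The class name and the pluggable `estimate` callable are illustrative assumptions; ZODB's real pickle cache is implemented in C.

```python
from collections import OrderedDict
import sys

class SizeLimitedCache:
    """LRU cache bounded by estimated memory size, not object count.

    A sketch of the idea only -- the size estimator is pluggable
    because "estimated memory size" is itself an approximation.
    """

    def __init__(self, max_bytes, estimate=sys.getsizeof):
        self.max_bytes = max_bytes
        self.estimate = estimate          # hypothetical size estimator
        self.current_bytes = 0
        self._data = OrderedDict()        # oid -> (obj, size)

    def __getitem__(self, oid):
        obj, size = self._data[oid]
        self._data.move_to_end(oid)       # mark as most recently used
        return obj

    def __setitem__(self, oid, obj):
        size = self.estimate(obj)
        if oid in self._data:
            self.current_bytes -= self._data.pop(oid)[1]
        self._data[oid] = (obj, size)
        self.current_bytes += size
        # Evict least-recently-used entries until under the byte limit,
        # always keeping at least the entry just added.
        while self.current_bytes > self.max_bytes and len(self._data) > 1:
            _, (_, evicted_size) = self._data.popitem(last=False)
            self.current_bytes -= evicted_size
```

The point of the byte-based bound is that one huge object can evict many small ones, which a count-based cache cannot express.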
--
Dieter
Chris Withers wrote at 2006-10-4 18:23 +0100:
>Dieter Maurer wrote:
>
>>> 3. If repozo is not to blame, what could be?
>>
>> One possibility would be a bad call.
>
>A bad call?
You should read my messages carefully!
E.g. a call where the same incremental backup file
is presented more than once.
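Such a bad call typically comes from scripting the recover step with an unsorted or duplicated file list. A hedged sketch of a sanity check one could run first (repozo does not itself work this way; `deltas` is a hypothetical list of backup file names, relying on repozo's timestamp-based names sorting chronologically):

```python
def check_delta_sequence(deltas):
    """Reject an incremental-backup file list that repeats or reorders files.

    `deltas` is a hypothetical list of repozo backup file names whose
    lexical order is the intended replay order (repozo names backups
    by timestamp, so lexical order is chronological).
    """
    seen = set()
    previous = None
    for name in deltas:
        if name in seen:
            raise ValueError("incremental %r presented more than once" % name)
        if previous is not None and name < previous:
            raise ValueError("incremental %r out of order" % name)
        seen.add(name)
        previous = name
```

Presenting the same incremental twice replays old transactions, which is exactly what produces duplicated transactions with non-increasing tids.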
Dieter Maurer wrote:
>> 3. If repozo is not to blame, what could be?
> One possibility would be a bad call.
A bad call?
Chris
--
Simplistix - Content Management, Zope & Python Consulting
- http://www.simplistix.co.uk
Chris Withers wrote at 2006-10-4 09:45 +0100:
>...rather than just incrementing integers?
>
>I'm asking 'cos I've just started having "time-stamp reduction" errors
>on a production system where a contingent system is having a .fs file
>that's been re-constituted from repozo backups tested with fstest.py...
Chris Withers wrote at 2006-10-4 15:06 +0100:
> ...
>The interesting thing is that it looks like the transactions where the
>time appears to go backwards are duplicates of earlier transactions:
>
>position in file  tid                 time from tid
>31025376233       0x03689abb582f1311  2006-10-
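For reference, a ZODB tid doubles as a timestamp: the high 4 bytes count minutes since 1900-01 in a year/month/day/hour/minute radix, and the low 4 bytes hold the fraction of a minute. A sketch of decoding the tid quoted above, reimplementing the same scheme as ZODB's TimeStamp in plain Python:

```python
def decode_tid(tid):
    """Decode an 8-byte ZODB tid into (year, month, day, hour, minute, seconds).

    The high 32 bits encode ((((year-1900)*12 + month-1)*31 + day-1)*24
    + hour)*60 + minute; the low 32 bits are the fraction of a minute.
    """
    high = int.from_bytes(tid[:4], "big")
    low = int.from_bytes(tid[4:], "big")
    high, minute = divmod(high, 60)
    high, hour = divmod(high, 24)
    high, day = divmod(high, 31)
    year_offset, month = divmod(high, 12)
    seconds = low * 60.0 / 2**32
    return (1900 + year_offset, month + 1, day + 1, hour, minute, seconds)

# The tid from the table above decodes to early October 2006,
# consistent with the truncated "2006-10-" column:
print(decode_tid(bytes.fromhex("03689abb582f1311"))[:5])
# -> (2006, 10, 3, 4, 43)
```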
Hi All,
One of my customers has a large (21GB) production zodb which they back
up onto a contingency server using repozo and rsync. The process is
roughly as follows:
1. pack the production database down to 3 days of history, once a day.
2. create a full backup with repozo and rsync this to the contingency server
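A sketch of what that cycle can look like on the command line. The paths and host name are hypothetical placeholders, and the commands are only echoed here so the sketch is a dry run; the repozo flags (-B backup, -F full, -z gzip, -r repository dir, -f source file, -R recover, -o output) are standard repozo options:

```shell
#!/bin/sh
# Hypothetical locations -- adjust for the real deployment.
DB=/var/zope/var/Data.fs
REPO=/var/backups/zodb
HOST=contingency.example.com

# Full backup (-F forces a full backup even if incrementals exist):
echo "repozo.py -B -F -z -r $REPO -f $DB"
# Incremental backup (repozo continues from the last full backup):
echo "repozo.py -B -z -r $REPO -f $DB"
# Mirror the backup repository to the contingency server:
echo "rsync -a --delete $REPO/ $HOST:$REPO/"
# Reconstitute a Data.fs from the backups on the contingency side:
echo "repozo.py -R -r $REPO -o /var/zope/var/Data.fs"
```

The important invariant is that rsync mirrors the repozo repository as a whole, so the full backup and its incrementals stay consistent with each other.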
...rather than just incrementing integers?
I'm asking 'cos I've just started having "time-stamp reduction" errors
on a production system where a contingent system is having a .fs file
that's been re-constituted from repozo backups tested with fstest.py...
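A "time-stamp reduction" is what a tid monotonicity check reports: FileStorage requires tids to be strictly increasing, and because tids are time-based rather than plain counters, a replayed transaction shows up as time going backwards. A minimal sketch of such a check over a sequence of tids (fstest.py itself parses the FileStorage records; here the tids are simply given as 8-byte values):

```python
def find_tid_reductions(tids):
    """Return (index, previous, current) wherever a tid fails to increase.

    Big-endian 8-byte tids compare correctly as byte strings, so a
    duplicated or reordered transaction -- e.g. from replaying the
    same incremental backup twice -- appears as a reduction here.
    """
    reductions = []
    previous = None
    for i, tid in enumerate(tids):
        if previous is not None and tid <= previous:
            reductions.append((i, previous, tid))
        previous = tid
    return reductions
```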
cheers,
Chris