wichert wrote:
>
> On 2009-9-21 17:38, Martin Aspeli wrote:
>> Maybe so. I've yet to benchmark. If the blob doesn't slow down the
>> "view" use case, then storing the (infrequently-used) raw value in a
>> blob may possibly mean more efficient use of storage and memory. The
>> ZODB cache won't n
This may help: http://plone.org/documentation/how-to/debug-zodb-bloat/
Laurence
Chris Withers wrote:
>
> Hi All,
>
> I have a filestorage being used by Zope 2 that is mysteriously growing.
> I don't have confidence in the Undo tab, since this setup has two
> storages, one mounted into the
Shane's earlier benchmarks show MySQL to be the fastest RelStorage backend:
http://shane.willowrise.com/archives/relstorage-10-and-measurements/
Laurence
2009/10/13 Ross J. Reedstrom :
> Very interesting. I wonder how the postgresql version fares?
>
> Ross
>
>
> On Tue, Oct 13, 2009 at 05:08:07PM
2009/11/13 Martin Aspeli :
> Hanno Schlichting wrote:
>> On Fri, Nov 13, 2009 at 5:40 PM, Jim Fulton wrote:
>>> On Fri, Nov 13, 2009 at 10:18 AM, Mikko Ohtamaa
>>> wrote:
Unfortunately the application having the issues is Plone 3.3. ZODB 3.9
depends on Zope 2.12, right?
>>> ZODB doe
2009/11/20 Chris Withers :
> Jim Fulton wrote:
>> On Thu, Nov 19, 2009 at 7:01 PM, Chris Withers
>> wrote:
>>> Jim Fulton wrote:
There's nothing official or supported about a backup solution without
automated tests.
So I guess there isn't one.
>>> Right, so what does Zope Corp
2009/11/20 Jim Fulton :
> On Fri, Nov 20, 2009 at 9:32 AM, Chris Withers wrote:
> ...
>> I'm not sure how much love repozo needs. It works, and it won't need
>> changing until FileStorage's format changes, which I don't see happening any
>> time soon.
>
> It just occurred to me that repozo doesn't
You must be prepared to abort and retry the whole transaction:
import time
import transaction
from ZODB import POSException

while True:
    try:
        root["one"] = time.asctime()
        transaction.commit()
        break
    except POSException.ConflictError:
        transaction.abort()
t error).
> Try to commit transaction
> we have a conflict
> Try to commit transaction
> root is {'one': 'Thu Nov 26 11:38:35 2009'}
> Try to commit transaction
> we have a conflict
> Try to commit transaction
> root is {'one': 'Thu
2009/12/7 Jose Benito Gonzalez Lopez :
> Dear ZODB developers,
>
> Since some time ago (not sure since when) our database
> has passed from 15GB to 65GB so fast, and it keeps growing
> little by little (2 to 5 GB per day). It is clear that something is not
> correct in it.
>
> We would like to chec
2009/12/9 Pedro Ferreira :
> Hello,
>> Just zodbbrowser with no prefix:
>>
>> http://pypi.python.org/pypi/zodbbrowser
>> https://launchpad.net/zodbbrowser
>>
>> It's a web-app: it can connect to your ZEO server so you can inspect the
>> DB while it's being used.
>>
> We tried this, but we curre
2009/12/17 Mikko Ohtamaa :
> Hi,
>
> I need a little clarification: should properties work on
> Persistent objects? I am running ZODB 3.8.4 on Plone 3.3.
>
> I am using plone.behavior and adapters to retrofit objects with a new
> behavior (HeaderBehavior object). This object is also editable t
2009/12/20 Ross Boylan :
> easy_install ZODB3 looked fairly good during installation until the end:
>
> Processing transaction-1.0.0.tar.gz
> Running transaction-1.0.0\setup.py -q bdist_egg --dist-dir
> c:\users\ross\appdata\local\temp\easy_install-cw1i4f\transaction-1.0.0\egg-dist-tmp-z7nrfd
> A
2009/12/20 Ross Boylan :
> The IPC10 presentation says
> #Works as a side-effect of importing ZODB above
> from Persistence import Persistent
>
> I tried that (with the indicated other imports first). It led to a "No
> module" error.
>
> I tried commenting out the line, since the comment could be i
2010/1/6 Martin Aspeli :
> Hi,
>
> This one is pretty high on the list of weirdest things to have happened
> to me in a while. Basically, I have a persistent object that has an
> empty __dict__ on the first request, until it suddenly decides to have
> data again.
>
> I'm on ZODB 3.9.3, using Zope
I've had a request to add savepoint release support to zope.sqlalchemy
as some databases seem to limit the number of savepoints in a
transaction.
I've added this in a branch of transaction here:
svn+ssh://svn.zope.org/repos/main/transaction/branches/elro-savepoint-release
From the changelog:
*
2010/1/16 Laurence Rowe :
> I'm still not sure this will allow me to add savepoint release support
> to zope.sqlalchemy, as SQLAlchemy has a concept of nested transactions
> rather than savepoints.
> http://groups.google.com/group/sqlalchemy/browse_thread/thread/7a4632587fd977
2010/1/17 Jim Fulton :
> On Sat, Jan 16, 2010 at 1:03 PM, Laurence Rowe wrote:
>> I've had a request to add savepoint release support to zope.sqlalchemy
>> as some databases seem to limit the number of savepoints in a
>> transaction.
>>
>> I've added t
A BTree does not keep track of its length. See BTrees.Length.Length:
http://apidoc.zope.org/++apidoc++/Code/BTrees/Length/Length/index.html
Laurence
On 8 April 2010 16:36, Leszek Syroka wrote:
> Hi,
>
> what is the fastest way of checking the number of elements in an OOBTree?
> Execution time of
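The pattern Laurence points at, keeping a separate counter object next to the BTree instead of iterating it, can be sketched with a stand-in for `BTrees.Length.Length` (illustrative only: the real class is persistent and additionally implements `_p_resolveConflict` so concurrent increments merge instead of conflicting):

```python
# Stand-in for BTrees.Length.Length (the real one resolves write
# conflicts by summing deltas; this sketch only shows the interface).
class Length:
    def __init__(self, value=0):
        self.value = value

    def change(self, delta):
        self.value += delta

    def __call__(self):
        return self.value


# A plain dict stands in for the OOBTree here; the point is that the
# container maintains its own counter instead of walking every bucket.
tree = {}
size = Length()

def insert(key, value):
    if key not in tree:
        size.change(1)
    tree[key] = value

insert("a", 1)
insert("b", 2)
insert("a", 3)  # overwrite: count unchanged
print(size())   # 2
```

Calling `size()` is O(1) and, unlike `len(tree)` on a real BTree, never loads the tree's buckets from the database.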
Running your test script on my small amazon EC2 instance on linux
takes between 0.0 and 0.04 seconds (I had to remove the divide by
total to avoid a zero division error). 0.02 is 5000/s.
Laurence
On 14 April 2010 00:25, Nitro wrote:
40 tps sounds low: are you pushing blob content over the
On 17 April 2010 05:27, Jeff Shell wrote:
> We encountered a problem during an export/import in a Zope 3 based
> application that resulted in something not being importable. This is from our
> very first Zope 3 based application, and I stumbled across some very old
> adapter/utility registratio
I've had this issue reported to me in the context of zope.sqlalchemy,
but have been unable to reproduce it. Others have also seen it, but as
far as I am aware have not been able to reproduce it:
http://www.mail-archive.com/pgsql-hack...@postgresql.org/msg146522.html
As there have now been three si
I suspect that something like 90% of ZODB pickle data will be string
values, so the scope for reducing the space used by a ZODB through the
newer pickle protocol – and even the class registry – is limited.
What would make a significant impact on data size is compression. With
lots of short strings
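The compression point can be illustrated with plain zlib over a pickle (the record contents below are made up; zc.zlibstorage applies the same idea transparently at the storage layer):

```python
import pickle
import zlib

# A made-up record dominated by short, repetitive strings, roughly the
# shape of typical ZODB content data.
record = pickle.dumps({
    "title": "Minutes of the weekly meeting",
    "body": "The meeting was called to order. " * 50,
    "tags": ["meeting", "minutes", "weekly"] * 10,
})
compressed = zlib.compress(record)
print(len(compressed) < len(record))  # True
```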
On 10 May 2010 21:41, Jim Fulton wrote:
> A. Change transaction._transaction.AbortSavepoint to remove the
> datamanager from the transactions resources (joined data managers)
> when the savepoint is rolled back and abort called on the data
> manager. Then, if the data manager rejoins, it wil
I think that moving to an LLTreeSet for the docset will significantly
reduce your memory usage. Non persistent objects are stored as part of
their parent persistent object's record. Each LOBTree object bucket
contains up to 60 (key, value) pairs. When the values are
non-persistent objects they are
On 11 May 2010 15:08, Jim Fulton wrote:
> On Tue, May 11, 2010 at 8:38 AM, Benji York wrote:
>> On Tue, May 11, 2010 at 7:34 AM, Jim Fulton wrote:
>>> [...] The best I've been
>>> able to come up with is something like:
>>>
>>> t = ZODB.transaction(3)
>>> while t.trying:
>>> with t:
I think this means that you are storing all of your data in a single
persistent object, the database root PersistentMapping. You need to
break up your data into persistent objects (instances of objects that
inherit from persistent.Persistent) for the ZODB to have a chance of
performing memory mappi
nality for zope.sqlalchemy - when a large
number of savepoints are used, the eventual commit can lead to a
`RuntimeError: maximum recursion depth exceeded` in SQLAlchemy as it
attempts to unroll its nested subtransactions.
Laurence
On 17 January 2010 15:45, Laurence Rowe wrote:
> 2010/1/17 Jim Fulto
It really depends on what you are trying to achieve.
The simplest solution would probably be to use a geohash string within
an OOBTree.
If you need a full geospatial solution, postgis is featureful and easy
to use, and simple to integrate transactionally with ZODB.
Reinventing the wheel is rarel
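A minimal geohash encoder (an illustrative sketch, not any particular library's implementation) shows why the strings work as OOBTree keys: nearby points share hash prefixes, so lexicographic range scans over the tree find spatial neighbours:

```python
# Standard geohash base-32 alphabet.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=9):
    """Encode a point by bisecting lat/lon ranges, interleaving bits
    (longitude first) and packing each 5 bits into one character."""
    lat_range = [-90.0, 90.0]
    lon_range = [-180.0, 180.0]
    chars = []
    ch = 0
    bit = 0
    use_lon = True
    while len(chars) < precision:
        rng = lon_range if use_lon else lat_range
        val = lon if use_lon else lat
        mid = (rng[0] + rng[1]) / 2.0
        ch <<= 1
        if val >= mid:
            ch |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        use_lon = not use_lon
        bit += 1
        if bit == 5:
            chars.append(BASE32[ch])
            ch = 0
            bit = 0
    return "".join(chars)

print(geohash(0.0, 0.0, 5))  # s0000
```

Longer hashes refine shorter ones, so all points inside a cell can be fetched with an OOBTree range query from `prefix` to `prefix + "~"`.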
On 28 June 2010 15:23, Nitro wrote:
> Am 28.06.2010, 14:10 Uhr, schrieb Dylan Jay :
>
>> I don't use a lot of other indexes other than what comes with plone but
>> I can see the value of what your suggesting in having an installable
>> tested collection of indexes. I can also see that this is a re
On 28 June 2010 19:31, Nitro wrote:
> Am 28.06.2010, 16:52 Uhr, schrieb Laurence Rowe :
>
>> So why don't we all work on the same packages? The main reason is one
>> of legacy. Plone is built on Zope2 and ZCatalog. It works, but it is
>> not without it's issues
On 28 June 2010 21:27, Nitro wrote:
> ZODB is a general python object database with a much wider audience than
> just plone. It suits desktop applications just as well as applications
> you'd normally use twisted and pickle for. Forcing all those zope
> dependencies like buildout on people does no
On 16 August 2010 13:13, Tres Seaver wrote:
> Hanno Schlichting wrote:
>> On Mon, Aug 16, 2010 at 12:14 PM, Pedro Ferreira
>> wrote:
>>> Could this be some problem with using persistent objects as keys in a BTree?
>>> Some comparison problem?
>>
>> I'm not entirely sure about this, but I think us
On 16 August 2010 17:29, Pedro Ferreira wrote:
>
>
>> Consider using one
>> of these alternatives instead:
>>
>> * Set the IOTreeSet as an attribute directly on the persistent object.
>>
>
> You mean on the persistent object I am using as key?
Yes.
>> * Use http://pypi.python.org/pypi/zope.intid
On 23 August 2010 17:51, Jim Fulton wrote:
> It's worth noting that these are not "the" docs. I didn't write or
> review them. I don't have any control over zodb.org. I have no idea
> how to comment on the docs. (I could possibly find out, but I don't have time
> to work that hard.)
...
> This i
On 23 August 2010 19:08, Jim Fulton wrote:
> On Mon, Aug 23, 2010 at 1:08 PM, Laurence Rowe wrote:
>> On 23 August 2010 17:51, Jim Fulton wrote:
>>> It's worth noting that these are not "the" docs. I didn't write or
>>> review them. I don
On 27 September 2010 18:26, Nathan Van Gheem wrote:
> BTW, I thought I could just use the ZPublisherEventsBackup to abort
> every transaction when zope is in read-only... Kind of hacky, but not
> too bad :)
That sounds really evil, but I guess it should work...
plone.app.imaging / plone.scale cr
On 14 October 2010 01:28, Darryl Dixon - Winterhouse Consulting
wrote:
>> On 13/10/2010 15:23, Jim Fulton wrote:
>>> You can connect to the monitor port in 3.9 and earlier,
>>> if the monitor port is configured. In 3.10, the monitor server is
>>> replaced by a ZEO client method, server_status. Th
On 17 November 2010 16:34, Alan Runyan wrote:
>> I have read that there is a problem to implement MS-SQL adapter for
>> Relstorage because the “Two phase commit” feature is not exposed by
>> MS-SQL server .
>
> unsure about that. probably depends on the client access library.
At least when I look
On 17 November 2010 17:05, Laurence Rowe wrote:
> On 17 November 2010 16:34, Alan Runyan wrote:
>>> I have read that there is a problem to implement MS-SQL adapter for
>>> Relstorage because the “Two phase commit” feature is not exposed by
>>> MS-SQL server .
>
I'm not very optimistic about this I'm afraid. First the problems with
using Plone:
* Plone relies heavily on its in ZODB indexes of all content
(portal_catalog). This means that every edit will change lots of
objects (without versioning ~15-20, most of which are in the
catalogue).
* At least w
On 21 January 2011 20:57, Shane Hathaway wrote:
> On 01/21/2011 10:46 AM, Chris Withers wrote:
>> I'm wondering what the recommended maintenance for these two types of
>> storage are that I use:
>>
>> - keep-history=true, never want to lose any revisions
>>
>> My guess is zodbpack with pack-gc as
On 24 January 2011 21:28, Shane Hathaway wrote:
> On 01/24/2011 02:02 PM, Anton Stonor wrote:
>> Hi there,
>>
>> We have recently experienced a couple of PosKey errors with a Plone 4
>> site running RelStorage 1.4.1 and Mysql 5.1.
>>
>> After digging down we found that the objects that were throwi
On 26 January 2011 21:57, Jürgen Herrmann wrote:
> is there a script or some example code to search for cross db
> references?
> i'm also eager to find out... for now i disabled my packing cronjobs.
Packing with garbage collection disabled (pack-gc = false) should
definitely be safe.
Laurence
On 26 January 2011 23:11, Chris Withers wrote:
> On 26/01/2011 22:49, Laurence Rowe wrote:
>>
>> On 26 January 2011 21:57, Jürgen Herrmann
>> wrote:
>>>
>>> is there a script or some example code to search for cross db
>>> references?
>>
On 24 February 2011 10:17, Chris Withers wrote:
> Hi Jim,
>
> The current __exit__ for transaction managers looks like this:
>
> def __exit__(self, t, v, tb):
>     if v is None:
>         self.commit()
>     else:
>         self.abort()
>
> ..which means that if you're using t
On 4 May 2011 10:53, Hanno Schlichting wrote:
> Hi.
>
> I tried to analyze the overhead of changing content in Plone a bit. It
> turns out we write back a lot of persistent objects to the database,
> even though the actual values of these objects haven't changed.
>
> Digging deeper I tried to under
While looking at the Plone versioning code the other day, it struck me
that it would be much more efficient to implement file versioning if
we could rely on blobs never changing after their first commit, as a
copy of the file data would not need to be made proactively in the
versioning repository i
On 9 May 2011 13:32, Hanno Schlichting wrote:
> On Mon, May 9, 2011 at 2:26 PM, Laurence Rowe wrote:
>> While looking at the Plone versioning code the other day, it struck me
>> that it would be much more efficient to implement file versioning if
>> we could rely on blobs
On 6 July 2011 19:44, Jim Fulton wrote:
> We're evaluating AWS for some of our applications and I'm thinking of adding
> some options to support using S3 to store Blobs:
>
> 1. Allow a storage in a ZEO storage server to store Blobs in S3.
> This would probably be through some sort of abstractio
On 7 July 2011 15:18, Jim Fulton wrote:
> On Thu, Jul 7, 2011 at 9:13 AM, Laurence Rowe wrote:
>> On 6 July 2011 19:44, Jim Fulton wrote:
>
> ...
>
>> Adding the ability to store blobs in S3 would be an excellent feature
>> for AWS based deployments. I'm no
On 7 July 2011 16:55, Jim Fulton wrote:
> On Thu, Jul 7, 2011 at 10:49 AM, Laurence Rowe wrote:
> ...
>> One thing I found with my (rather naive) experiments building
>> s3storage a few years ago is that you need to ensure requests to S3
>> are made in parallel to ge
On 18 July 2011 11:07, Pedro Ferreira wrote:
> Hello,
>
> I have an OOTreeSet in my DB that is behaving a bit funny (seems to be
> corrupted). I thought I could get some more information by performing a
> sanity check, but that doesn't seem to help a lot:
>
> """
c in s
> False
c in list
On 18 July 2011 13:08, Pedro Ferreira wrote:
>> TreeSets are essentially BTrees with only keys. This means that the
>> members of a TreeSet must have a stable ordering. I suspect that that
>> c's class does not define the comparison methods (such as __lt__)
>> which means under Python 2 it falls b
Thanks! I'll review and presumably merge this as soon as I can.
>
> JIm
>
> On Mon, Jun 7, 2010 at 12:45 PM, Laurence Rowe wrote:
>> Hi Jim,
>>
>> I've created a new branch for my savepoint release changes following
>> the 1.1 release here:
>>
On 12 October 2011 23:53, Shane Hathaway wrote:
> As I see it, a cache of this type can take 2 basic approaches: it can
> either store {oid: (state, tid)}, or it can store {(oid, tid): (state,
> last_tid)}. The former approach is much simpler, but since memcache has
> no transaction guarantees wha
On 24 January 2012 13:50, steve wrote:
> Hi All,
>
> I apologize for the cross-post but by this mail I simply hope to get a few
> pointers on how to narrow down to the problem I am seeing. I shall post to the
> relevant list if I have further questions.
>
> So here is the issue:
>
> Short descript
On 9 February 2012 11:24, Jim Fulton wrote:
> I'm sorry I haven't had time to look at this. Still don't really.
>
> Thanks Marius!!!
>
> On Wed, Feb 8, 2012 at 6:48 PM, Marius Gedminas wrote:
>> On Thu, Feb 09, 2012 at 01:25:48AM +0200, Marius Gedminas wrote:
>>> On Wed, Feb 08, 2012 at 01:24:55P
On 13 February 2012 10:06, Pedro Ferreira wrote:
>> The OS' file-system cache acts as a storage server cache. The storage
>> server does (essentially) no processing to data read from disk, so an
>> application-level cache would add nothing over the disk cache provided by
>> the storage server.
>
On 13 February 2012 12:39, Pedro Ferreira wrote:
> Hello,
>
> Thanks a lot for your suggestions.
>
>
>> You could try a ZEO fanout setup too, where you have a ZEO server
>> running on each client machine. The intermediary ZEO's client cache
>> (you could put it on tmpfs if you have enough RAM) is
On 14 March 2012 17:47, Jim Fulton wrote:
> I'm pretty happy with how zc.zlibstorage has worked out.
>
> Should I build this into ZODB 3.11?
+1
> BTW, lz4 compression looks interesting.
>
> The Python binding (at least from PyPI) is broken.
> I submitted an issue. Hopefully it will be fixed.
FW
On 20 March 2012 16:52, Adam Tauno Williams wrote:
> Is it possible to open a ZODB in a thread and share it to other threads
> via a filesystem socket or pipe [rather than a TCP connection]? I've
> searched around and haven't found any reference to such a configuration.
This resolved bug report
On 21 March 2012 22:54, Claudiu Saftoiu wrote:
> Hello ZODB List,
>
> (This is also a stackoverflow question - you might prefer the formatting
> there: http://stackoverflow.com/questions/9810116/zodb-database-conflict-fail )
>
> I have a server, and a client.
>
> A client sends a request. The requ
On 30 August 2012 19:19, Shane Hathaway wrote:
> On 08/30/2012 10:14 AM, Marius Gedminas wrote:
>>
>> On Wed, Aug 29, 2012 at 06:30:50AM -0400, Jim Fulton wrote:
>>>
>>> On Wed, Aug 29, 2012 at 2:29 AM, Marius Gedminas
>>> wrote:
On Tue, Aug 28, 2012 at 06:31:05PM +0200, Vincent Pelleti
On 14 October 2012 22:49, Jim Fulton wrote:
> On Sun, Oct 14, 2012 at 5:28 PM, Tres Seaver wrote:
> ...
>>> Well, I don't have time to chase BTrees. This could always be done in
>>> ZODB 5. :)
>>
>> I could help chop BTrees out, if that would be useful: most of the
>> effort will be purely subt
On 14 October 2012 23:33, Jim Fulton wrote:
> On Sun, Oct 14, 2012 at 6:07 PM, Laurence Rowe wrote:
>> On 14 October 2012 22:49, Jim Fulton wrote:
>>> On Sun, Oct 14, 2012 at 5:28 PM, Tres Seaver wrote:
>>> ...
>>>>> Well, I don't have time
On 18 January 2013 10:21, Claudiu Saftoiu wrote:
>
>> > Er, to be clearer: my goal is for the preload to load everything into
>> > the
>> > cache that the query mechanism might use.
>> >
>> > It seems the bucket approach only takes ~10 seconds on the 350k-sized
>> > index
>> > trees vs. ~60-90 sec
On 18 January 2013 10:50, Claudiu Saftoiu wrote:
>
>> > Any suggestions? There must be a way to effectively use indexing with
>> > zodb
>> > and what I'm doing isn't working.
>>
>> Have you confirmed that the ZEO client cache file is being used?
>> Configure logging to display the ZEO messages to
On 8 March 2013 09:38, Claudiu Saftoiu wrote:
> On Fri, Mar 8, 2013 at 12:31 PM, Leonardo Santagada
> wrote:
>>
>>
>> On Fri, Mar 8, 2013 at 2:17 PM, Claudiu Saftoiu
>> wrote:
>>>
>>> Once I know the difference I'll probably be able to answer this myself,
>>> but I wonder why the ZEO server does
It sounds like you're missing some transaction middleware in your wsgi
pipeline. See
https://groups.google.com/forum/#!topic/zope-core-dev/aB5BzvrVJxw for some
clues.
On 22 July 2013 22:58, Suresh V. wrote:
> Also happens with RelStorage trunk from github.
>
> Tests run fine after bumping up th
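The shape of such transaction middleware can be sketched as follows (a minimal stand-in assuming a manager with begin/commit/abort; a real pipeline would use an existing package such as repoze.tm2 rather than this):

```python
# Sketch: commit the transaction when the wrapped WSGI app succeeds,
# abort and re-raise when it fails. `txn_manager` is a hypothetical
# object exposing begin()/commit()/abort().
class TransactionMiddleware:
    def __init__(self, app, txn_manager):
        self.app = app
        self.txn = txn_manager

    def __call__(self, environ, start_response):
        self.txn.begin()
        try:
            result = self.app(environ, start_response)
        except Exception:
            self.txn.abort()
            raise
        self.txn.commit()
        return result


# Tiny demonstration with a recording stand-in manager.
class RecordingManager:
    def __init__(self):
        self.calls = []

    def begin(self):
        self.calls.append("begin")

    def commit(self):
        self.calls.append("commit")

    def abort(self):
        self.calls.append("abort")


def app(environ, start_response):
    return [b"ok"]

txn = RecordingManager()
mw = TransactionMiddleware(app, txn)
print(mw({}, None), txn.calls)  # [b'ok'] ['begin', 'commit']
```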
I'm sure you're probably aware of these, but I thought I'd file this
summary while they were in my head.
There is no history-less FileStorage. It is essentially a transaction log.
Directory Storage has Minimal.py which is history-less, very simple
though it is not proven in production. Could b
Jim Fulton wrote:
I wasn't asking about implementation.
Here are some questions:
- Should this create a new FileStorage? Or should it modify the existing
FileStorage in place?
Probably create a new one (analogous to a pack). Seems safer than
truncating to me.
- Should this work while t
You need to provide the full traceback so we can tell where it is coming
from.
My guess (though I'm surprised by the particular error) is that you have
perhaps got content owned by users in a user folder outside the site
that is no longer accessible when you mount the database on its own. If
Chris,
I think you're looking at forward references when you want to look at
back references.
This might help: http://plone.org/documentation/how-to/debug-zodb-bloat
(you might have to change the refmap to be in a zodb with that much data
though)
Laurence
Chris Withers wrote:
Hi All,
We
Hi,
Several people have made SQLalchemy integrations recently. SQLAlchemy
does not support Two Phase Commit (2PC) so correctly tying it in with
zope's transactions is tricky. With multiple One Phase Commit (1PC)
DataManagers the problem is of course intractable, but given the
popularity of ma
Christian Theune wrote:
We imagine we need two kinds of components to make this work:
1. A query processor that could look like:
class IQueryProcessor(Interface):
    def query(...):
        """Returns a list of matching objects. The parameters are
        specific to the query processor i
It looks like ZODB performance in your test has the same O(log n)
performance as PostgreSQL checkpoints (the periodic drops in your
graph). This should come as no surprise. B-Trees have a theoretical
Search/Insert/Delete time complexity equal to the height of the tree,
which is (up to) log(n).
Matt Hamilton wrote:
David Binger mems-exchange.org> writes:
On Nov 2, 2007, at 6:20 AM, Lennart Regebro wrote:
Lots of people don't do nightly packs, I'm pretty sure such a process
needs to be completely automatic. The question is whether doing it in
a separate process in the background, o
PGStorage does require packing currently, but it would be fairly trivial
to change it to only store single revisions. Postgres would still ensure
mvcc. Then you just need to make sure postgres auto-vacuum daemon is
running.
Laurence
David Pratt wrote:
Yes, Shane had done some benchmarking abo
Sean Allen wrote:
been looking for anything along those lines.
in particular, strategies and gotchas for how to store objects.
everything i've found is basically just a single type of object being
stored.
i'm really interested in tutorials and information on the best ways to
setup
large com
tsmiller wrote:
I have a bookstore that uses the ZODB as its storage. It uses qooxdoo as
the client and CherryPy for the server. The server has a 'saveBookById'
routine that works 'most' of the time. However, sometimes the
transaction.commit() does NOT commit the changes and when I restart my
[EMAIL PROTECTED] wrote:
We have a large dataset of 650,000+ records that I'd like to examine
easily in Python. I have figured out how to put this into a ZODB file
that totals 4 GB in size. But I'm new to ZODB and very large databases,
and have a few questions.
1. The data is in a IOBTree s
s separate
`records` by ZODB. Other objects do not have a _p_oid attribute and
have to be saved as part of their parent record.
Laurence
2008/6/19 <[EMAIL PROTECTED]>:
> Laurence Rowe wrote:
>>
>> [EMAIL PROTECTED] wrote:
>> Does your record class inherit from persistent.
Andreas Jung wrote:
--On 22 June 2008 08:49:32 -0700 tsmiller <[EMAIL PROTECTED]>
wrote:
Gary,
I have been using the ZODB for about a year and a half with a bookstore
application. I am just now about ready to put it out on the internet for
people to use. I have had the same problem with
Backing up a ZODB has always been fairly easy in the past, but with the
introduction of blobs things have got a little more complex.
How should I create a consistent backup of my Data.fs and blob directory?
My inital guess would be to take a copy of the Data.fs, then take a copy of
the blob dire
Hi Adam,
For incremental backups, I presume the procedure would be to first run
repozo on the Data.fs then run rsync on the blobs directory to the backup
blobs directory.
The concerns would be equivalent to those on doing a full backup.
Laurence
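As commands, the procedure above might look like this (paths are hypothetical; the Data.fs backup is taken before the blob copy so the backup never references a blob it lacks, since an extra blob in the backup is harmless but a missing one is not):

```shell
# 1. Incremental FileStorage backup first...
repozo -B -r /backups/filestorage -f /var/zodb/Data.fs
# 2. ...then sync the blob directory into the backup.
rsync -a /var/zodb/blobs/ /backups/blobs/
```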
Adam Groszer-2 wrote:
>
> Hello Laurence,
>
>
Izak Burger-2 wrote:
>
> Dieter Maurer wrote:
>> This is standard behaviour with long running processes on
>> a system without memory compaction:
>
> Of course, I remember now, there was something about that in my
> Operating Systems course ten years ago :-) I suppose the bigger page
> sizes
Leonardo Santagada wrote:
> On Oct 4, 2008, at 12:36 PM, Wichert Akkerman wrote:
>
>> Adam wrote:
>>>
>>> Thanks for that, guys, I've not used a mailing list like this
>>> before so
>>> unsure how to respond.
>>>
>>> If ZODB stores the Package.Module.Class name in the pickle would it
>>> be
>>
Shane Hathaway wrote:
> Benjamin Liles wrote:
>> Currently at the Plone conference it seems that a large number of people
>> are beginning to host their Plone sites on the Amazon EC2 service. A
>> simpleDB adapter might be a good way to provide persistent storage for
>> an EC2 base Zope instance.
Hanno Schlichting wrote:
> Jim Fulton wrote:
>> On Nov 4, 2008, at 12:12 PM, Benji York wrote:
>>
>>> On Tue, Nov 4, 2008 at 12:01 PM, Jim Fulton <[EMAIL PROTECTED]> wrote:
A few months back, there was a lot of discussion here about BTree
performance. I got a sense that maximum BTree-nod
Broken objects occur when the class for a pickled object cannot be
imported. To change the location of a class, you need to provide an
alias at the old location so that the object can be unpickled, i.e.
MyOldClassName = MyNewClassName. You can only remove MyOldClassName
after you have updated a
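The alias technique can be demonstrated with plain pickle (module and class names below are hypothetical; `types.ModuleType` stands in for real modules so the rename can be simulated in one process):

```python
import pickle
import sys
import types

# Simulated "old" code: the class lives at myapp.MyOldClassName.
old_mod = types.ModuleType("myapp")

class MyOldClassName:
    pass

MyOldClassName.__module__ = "myapp"
old_mod.MyOldClassName = MyOldClassName
sys.modules["myapp"] = old_mod
data = pickle.dumps(MyOldClassName())  # records "myapp.MyOldClassName"

# Simulated "new" code: the class was renamed, but an alias is left
# behind at the old name so existing pickles keep loading instead of
# becoming broken objects.
new_mod = types.ModuleType("myapp")

class MyNewClassName:
    pass

MyNewClassName.__module__ = "myapp"
new_mod.MyNewClassName = MyNewClassName
new_mod.MyOldClassName = MyNewClassName  # the alias
sys.modules["myapp"] = new_mod

obj = pickle.loads(data)
print(type(obj) is MyNewClassName)  # True
```

Removing the alias before every stored object has been re-saved under the new name would reintroduce the broken-object errors.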
Shane Hathaway wrote:
>
> I should note that this KeyError occurs while trying to report on a
> KeyError. I need to fix that. Fortunately, the same error pops out anyway.
There's a fix for this in the Jarn branch. Note that to collect more
interesting data it rolls back the load connection at
eastxing wrote:
> Hi,
>
> I am using Plone2.5.5 with Zope2.9.8-final and ZODB3.6.2.Now my Data.fs
> size is nearly 26G with almost 140k Plone objects and more than 4100k
> zope objects in the database. Since 2 moths ago, I could not pack my
> database successfully. Recent days I tried to pack i
For Plone, the standard remedy to this problem is to separate out
portal_catalog into its own storage (ZEO has support for serving
multiple storages). You may then control the object cache size per
storage, setting the one for the portal_catalog storage large enough to
keep all its objects in
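A configuration along these lines (the directive names are from zope.conf, but the mount point, storage name, port, and cache size here are illustrative) would mount the catalog from its own ZEO storage with a larger per-connection object cache:

```
<zodb_db catalog>
    mount-point /plone/portal_catalog
    cache-size 50000
    <zeoclient>
        server localhost:8100
        storage catalog
    </zeoclient>
</zodb_db>
```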
Christian Theune wrote:
> Hi,
>
> On Tue, 2009-04-28 at 13:54 -0400, Jim Fulton wrote:
>> Thanks again!
>>
>> (Note to everyone else, Shane and I discussed this on IRC, along with
>> another alternative that I'll mention below.)
>>
>> I like version 2 better than version 1. I'd be inclined to s
Pedro Ferreira wrote:
> Dear all,
>
> Thanks a lot for your help. In fact, it was a matter of increasing the
> maximum recursion limit.
> There's still an unsolved issue, though. Each time we try to recover a
> backup using repozo, we get a CRC error. Is this normal? Has it happened
> to anyone?
>
Jim Fulton wrote:
> On May 26, 2009, at 10:16 AM, Pedro Ferreira wrote:
>> In any case, it's not such a surprising number, since we have ~73141
>> event objects and ~344484 contribution objects, plus ~492016 resource
>> objects, and then each one of these may contain authors, and for sure
>> som
Jim Fulton wrote:
> Well said. A feature I'd like to add is the ability to have persistent
> objects that don't get their own database records, so that you can get
> the benefit of having them track their changes without incuring the
> expense of a separate database object.
+lots
Hanno Schl
A few weeks ago I converted the "ZODB/ZEO Programming Guide" and a few
more articles into structured text and added them to the zope2docs
buildout. I've now moved them to their own buildout in
svn+ssh://svn.zope.org/repos/main/zodbdocs/trunk and they will soon
appear at http://docs.zope.org/zod
Andreas Jung wrote:
> On 26.05.09 19:08, Andreas Jung wrote:
>> On 26.05.09 18:54, Laurence Rowe wrote:
>>
>>> A few weeks ago I converted the "ZODB/ZEO Programming Guide" and a few
>>> more articles into structured text and added them to the zope
Jim Fulton wrote:
> On May 26, 2009, at 12:54 PM, Laurence Rowe wrote:
>
>> A few weeks ago I converted the "ZODB/ZEO Programming Guide" and a few
>> more articles into structured text and added them to the zope2docs
>> buildout. I've now moved t