Jim Fulton wrote:
> On Feb 12, 2007, at 12:25 PM, Andreas Jung wrote:
>> I have the following script to emulate a long-running write to ZEO
>> by writing 100MB to a page template:
>> import transaction
>> pt = app.foo
>> while 1:
>>     data = '*' * 100000000
>>     T = transaction.begin()
>>     pt.pt_edit(data, 'text/html')
>>     transaction.commit()
>>     print 'done'
>> This script fails badly during the first commit() call. Is this a
>> bug or a feature? I am using Zope 2.10.2 on MacOSX Intel.
> Based on the traceback you gave, this looks like a bug. I've
> noticed, however, that large database records can lead to memory
> errors at sizes much smaller than I would expect. If the problem is
> ultimately traced to a hidden memory error, there's not much that can
> be done. In the long run, I expect we'll advise that "large" objects
> be put in blobs, where "large" might be smaller than one might
> expect. For example, I've seen 90MB records lead to memory errors
> even on machines with hundreds of megabytes free.
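Jim's observation that large records fail at surprisingly small sizes can be roughly illustrated with plain pickle standing in for ZODB's record serialization. This is only a sketch of the general effect, not a measurement of ZODB internals: serializing a big record creates a second full-size copy in memory, and the storage and network layers buffer further copies, so peak usage can be a multiple of the record size (the 10MB figure below is scaled down from the 100MB in the script above):

```python
import pickle

# Scaled-down stand-in for the 100MB record in the script above.
data = '*' * 10_000_000  # 10 MB of payload

# pickle.dumps() builds a second full-size copy of the data in memory;
# in ZODB that pickle is then copied again into storage/transport
# buffers, so peak memory use exceeds the record size itself.
p = pickle.dumps(data)

print(len(p) >= len(data))  # True -- the pickle is at least as large as the payload
```

This is why a 90MB record can exhaust memory on a machine that nominally has far more free: several transient full-size copies must coexist, and a fragmented heap (as Tres suggests below) may have no single contiguous region large enough for any of them.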
That sounds like a fragmented heap.
Tres Seaver +1 540-429-0999 [EMAIL PROTECTED]
Palladion Software "Excellence by Design" http://palladion.com
For more information about ZODB, see the ZODB Wiki:
ZODB-Dev mailing list - ZODB-Dev@zope.org