Re: [ZODB-Dev] RFC: ZODB 4.0 (without persistent)
On Sat, Oct 20, 2012 at 9:37 PM, Tres Seaver tsea...@palladion.com wrote:
> I released BTrees 4.0.0, and created a ZODB branch for the (trivial)
> shift to depending on it: I had to tweak the C header inclusion a bit,
> so that the winbot could create binary eggs. There's a 4.0.1 release
> on PyPI now, which has eggs built for Python 2.6 and 2.7 in 32- and
> 64-bit variants.
>
>   http://svn.zope.org/ZODB/branches/tseaver-btrees_as_egg/
>
> That branch passes all tests, and should be ready for merging.

+1

Hanno
___
For more information about ZODB, see http://zodb.org/
ZODB-Dev mailing list - ZODB-Dev@zope.org
https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] RFC: ZODB 4.0 (without persistent)
On Sun, Oct 14, 2012 at 10:21 PM, Jim Fulton j...@zope.com wrote:
> Goal: Give persistent, ZODB and ZEO their own release cycles.

In which distribution would BTrees end up? I think the pure-Python work
for BTrees isn't finished yet, but I could be wrong. But if we are
extracting packages into separate distributions, we should move BTrees
out as well.

> I propose to release the following:
>
> - ZODB 4 (4.0.0a1 initially)
>   New ZODB (not ZODB3) project that depends on a separate persistent
>   project and that doesn't include ZEO.
>
> - ZEO 4 (4.0.0a1 initially)
>
> - ZODB3 3.11 (3.11.0.a1 initially) that depends on ZODB and ZEO and
>   is otherwise empty.
>
> If there are no objections, I'll release these in a few days.

Sounds good.

Hanno
Re: [ZODB-Dev] SVN: ZODB/trunk/ Note split of persistent.
On Mon, Aug 27, 2012 at 8:57 AM, Adam GROSZER agroszer...@gmail.com wrote:
> Now we have some problems making the binary eggs. See the attached txt
> for the full output.

I think I fixed that on SVN trunk. Looks like on Windows the compiler
complains about mismatches between module names and the init functions
inside the extension.

On Mac OS I only got runtime errors:

  python -c "from persistent import _timestamp"
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
  ImportError: dynamic module does not define init function (init_timestamp)

But I do get a test failure in the new maxint test under Python 2.7 on
a 64-bit Python (sys.maxint == 2**63-1):

  bin/test -t test_assign_p_estimated_size_bigger_than_sys_maxint
  Running zope.testrunner.layer.UnitTests tests:
    Set up zope.testrunner.layer.UnitTests in 0.000 seconds.
  Traceback (most recent call last):
    File "bin/test-ztk-persistent", line 24, in <module>
      '--test-path', '/opt/zope/ztk/trunk/src/persistent',
    File "/opt/eggs/zope.testrunner-4.0.4-py2.7.egg/zope/testrunner/__init__.py", line 30, in run
      failed = run_internal(defaults, args, script_parts=script_parts)
    File "/opt/eggs/zope.testrunner-4.0.4-py2.7.egg/zope/testrunner/__init__.py", line 43, in run_internal
      runner.run()
    File "/opt/eggs/zope.testrunner-4.0.4-py2.7.egg/zope/testrunner/runner.py", line 148, in run
      self.run_tests()
    File "/opt/eggs/zope.testrunner-4.0.4-py2.7.egg/zope/testrunner/runner.py", line 229, in run_tests
      setup_layers, self.failures, self.errors)
    File "/opt/eggs/zope.testrunner-4.0.4-py2.7.egg/zope/testrunner/runner.py", line 390, in run_layer
      return run_tests(options, tests, layer_name, failures, errors)
    File "/opt/eggs/zope.testrunner-4.0.4-py2.7.egg/zope/testrunner/runner.py", line 306, in run_tests
      result.addSuccess(test)
    File "/opt/eggs/zope.testrunner-4.0.4-py2.7.egg/zope/testrunner/runner.py", line 738, in addSuccess
      t = max(time.time() - self._start_time, 0.0)
  OverflowError: long too big to convert

The error is obviously raised in the wrong place. Not sure what C code
causes this.

Hanno
Re: [ZODB-Dev] SVN: ZODB/trunk/ Note split of persistent.
On Mon, Aug 27, 2012 at 3:18 PM, Tres Seaver tsea...@palladion.com wrote:
> I think that was the attempt to convert a too-big number to a C 'long
> long'. I have adjusted the test to use 2**63 - 1 directly.

Yep, tests pass here now.

> I will make a 4.0.2 release after the buildbots report success on
> Windows (assuming they do).

The winbot only builds actual tagged releases. You can follow its
progress at http://winbot.zope.org/builders/wineggbuilder - there's a
run every 30 minutes. I'm confident enough in the fix, so I'd just cut
a new release.

Hanno
Re: [ZODB-Dev] SVN: ZODB/trunk/ Note split of persistent.
On Sat, Aug 25, 2012 at 4:11 PM, Tres Seaver tsea...@palladion.com wrote:
> This failure, and the others like it, indicate that the buildout has
> failed to install persistent correctly (there is nothing
> Windows-specific about the failure)::

I actually got the same error locally. The trouble is the timestamp.py
module and the TimeStamp.c extension. During installation setuptools
creates a wrapper for the extension file, so you end up with
TimeStamp.so but also a TimeStamp.py wrapper. On a case-insensitive
file system this wrapper overwrites the original timestamp.py module.
You likely have a case-sensitive file system and therefore didn't get
this problem.

I think TimeStamp.c has been there before and is imported in other
modules as "from persistent.TimeStamp import TimeStamp". So we should
keep that API and use a different name for the new timestamp.py module.

Hanno
Re: [ZODB-Dev] RFC: release persistent as a standalone package
On Sat, Jun 30, 2012 at 8:02 PM, Tres Seaver tsea...@palladion.com wrote:
> I would like to release a '4.0.0' version of the package, and switch
> the ZODB trunk to pull it in as a dependency (deleting the currently
> included (older) copy of persistent). One possible issue is that I
> have not (yet) made the C extensions work under Python 3.2: I don't
> know whether that should be a blocker for a release.

+1 and thanks a lot!

The missing Python 3 support for the C extensions shouldn't block
anything in my view. To my knowledge we haven't treated this as a
blocker for any of the ZTK packages either. Let's make it work under
Python 3 first and then later make it fast.

Hanno
Re: [ZODB-Dev] Relstorage and MSQL
On 09.05.2012, at 02:34, Dylan Jay d...@pretaweb.com wrote:
> I know it's not all about money but if we were to sponsor development
> of Microsoft SQL Server support for RelStorage, is there someone who
> knows how and has an estimated cost and available time?

It might be helpful if you could add more constraints. Which version of
SQL Server do you want to support? Just 2008 or 2012, or some Express
Edition? Only 64-bit servers and clients, or 32-bit? Are all clients
also on Windows, which OS version, which versions of Python? What
clustering options are you interested in, if any? Do you want memcached
support? And what about blob storage? Are blobs inside the DB in little
chunks enough, or do you want them on the filesystem via a shared
network drive, or the new efficient FILESTREAM support in SQL Server
2008+?

The fewer variables you have, the easier an estimate should be. And
just stating something like the expected data size, number of clients
and availability concerns might help.

Hanno
Re: [ZODB-Dev] all webserver threads blocking on db.open()
I think you might get better help on one of the Pyramid support
channels. Your problems all seem to be related to configuring a web
server in production mode, rather than database issues.

From what I can tell you are dealing with hung requests. I'd look at
the paster configuration options for anything related to timeouts,
thread pools, handling of incomplete requests and so on. Or use a more
production-quality web server like Apache (mod_wsgi) or Nginx
(gevent/gunicorn), which likely has better default configuration
values for these things.

Hanno

On Mon, May 7, 2012 at 5:38 PM, Claudiu Saftoiu csaft...@gmail.com wrote:
> Hello all,
>
> I'm using Repoze.BFG, with paster to launch the webserver. This is a
> similar issue to the one I emailed about before titled "server stops
> handling requests - nowhere near 100% CPU or Memory used". The
> situation is the same. I used z3c.deadlockdebugger, and what I notice
> is that, when the server is blocked, there are about 100 threads
> running (as opposed to 15 or so when the server has just started),
> and all their stack traces look like this:
>
>   Thread 140269004887808:
>     File "/usr/lib/python2.6/threading.py", line 504, in __bootstrap
>       self.__bootstrap_inner()
>     File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
>       self.run()
>     File "/usr/lib/python2.6/threading.py", line 484, in run
>       self.__target(*self.__args, **self.__kwargs)
>     File "/home/tsa/env/lib/python2.6/site-packages/paste/httpserver.py", line 878, in worker_thread_callback
>       runnable()
>     File "/home/tsa/env/lib/python2.6/site-packages/paste/httpserver.py", line 1052, in <lambda>
>       lambda: self.process_request_in_thread(request, client_address))
>     File "/home/tsa/env/lib/python2.6/site-packages/paste/httpserver.py", line 1068, in process_request_in_thread
>       self.finish_request(request, client_address)
>     File "/usr/lib/python2.6/SocketServer.py", line 322, in finish_request
>       self.RequestHandlerClass(request, client_address, self)
>     File "/usr/lib/python2.6/SocketServer.py", line 617, in __init__
>       self.handle()
>     File "/home/tsa/env/lib/python2.6/site-packages/paste/httpserver.py", line 442, in handle
>       BaseHTTPRequestHandler.handle(self)
>     File "/usr/lib/python2.6/BaseHTTPServer.py", line 329, in handle
>       self.handle_one_request()
>     File "/home/tsa/env/lib/python2.6/site-packages/paste/httpserver.py", line 437, in handle_one_request
>       self.wsgi_execute()
>     File "/home/tsa/env/lib/python2.6/site-packages/paste/httpserver.py", line 287, in wsgi_execute
>       self.wsgi_start_response)
>     File "/home/tsa/env/lib/python2.6/site-packages/repoze/zodbconn/connector.py", line 18, in __call__
>       conn = self.db.open()
>     File "/home/tsa/env/lib/python2.6/site-packages/ZODB/DB.py", line 729, in open
>       self._a()
>     File "/usr/lib/python2.6/threading.py", line 123, in acquire
>       rc = self.__block.acquire(blocking)
>
> The server gets to a blocked state every 24 hours or so. Simply
> restarting the webserver works fine; however, I'd like to know what
> the problem is so restarting won't be necessary, and to prevent it
> from getting worse. Any ideas / suggestions?
>
> Thanks,
> - Claudiu
Re: [ZODB-Dev] Upgraded from ZODB 3.8 to 3.10, do I need to upgrade my Data.fs?
Paul Warner paul.warner at gmail.com writes:
> We have been running an in-house application on ZODB 3.8.2 and
> recently upgraded to 3.10.5. We run a ZEO server with FileStorage. It
> seems like everything works great, no actions on our part, but I
> wondered if it is suggested to somehow migrate the on-disk FileStorage
> to take advantage of new features or optimizations in 3.10?

The format and data in the file storage has stayed exactly the same for
quite some time, so no migration is necessary. For 3.10 the index file
format (Data.fs.index) has changed, but this has happened transparently.

ZODB 3.8 introduced experimental support for blobs (binary large
objects), which have gotten stable and production-ready with 3.9.
There's no good documentation for them, but z3c.blobfile might serve as
an example (http://svn.zope.org/z3c.blobfile/trunk/src/z3c/blobfile/).
If you have any data items of 1 MB or more each, you should investigate
blobs. Not having to load all that data into the ZODB caches alone is
really helpful, not to mention the various nice effects of having a
smaller main file storage or the various supported backends for blobs.

Apart from those, there's been a multitude of new config options listed
in the changelogs (http://pypi.python.org/pypi/ZODB3/3.9.7#id14 and
http://pypi.python.org/pypi/ZODB3/3.10.5#id10). It's hard to tell if
any of those would be helpful in your particular case.

Hanno
Re: [ZODB-Dev] How to fix CorruptedDataError: Error reading unknown oid. Found '' at 81036527?
On Thu, Jul 14, 2011 at 4:12 PM, Andreas Jung li...@zopyx.com wrote:
> I followed the documentation at http://pastebin.com/bL0CbBm2

Wow, looks like I didn't specify an expiration date for my paste ;)
If anyone feels like putting that into a more permanent place, feel
free to use it.

Hanno
___
For more information about ZODB, see the ZODB Wiki:
http://www.zope.org/Wikis/ZODB/
ZODB-Dev mailing list - ZODB-Dev@zope.org
https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] How to fix CorruptedDataError: Error reading unknown oid. Found '' at 81036527?
On Thu, Jul 14, 2011 at 5:38 PM, Andreas Jung li...@zopyx.com wrote:
> For the sake of completeness: I found this
> https://mail.zope.org/pipermail/zodb-dev/2008-February/011606.html
> and will try it out.

There are also instructions in my pastebin dump on how to do this - and
quite a bit simpler than Chris' version:

  from persistent import Persistent

  a = Persistent()
  a._p_oid = '\x00\x00\x00\x00\x00\xc9-w'
  a._p_jar = app._p_jar
  app._p_jar._register(a)
  app._p_jar._added[a._p_oid] = a
  transaction.commit()

At Jarn we've used that trick many times to repair broken internals in
the intid/keyreference data structures.

Hanno
Re: [ZODB-Dev] RelStorage zodbconvert, ConflictError on Zope2 startup
On Thu, Jul 14, 2011 at 9:32 PM, Sean Upton sdup...@gmail.com wrote:
> ZODB.POSException.ConflictError: database conflict error (oid 0x00,
> class persistent.mapping.PersistentMapping, serial this txn started
> with 0x038fa89e48c295cc 2011-07-13 14:22:17.053148, serial currently
> committed 0x038b077ae26bec77 2010-12-19 21:46:53.067557)
>
> Full traceback: http://pastie.org/2214036
>
> Any ideas on what I'm doing wrong or what's going on? Where should I
> look into the RelStorage tables for clues?

That's weird: you are getting a conflict error on inserting the root
application object (oid 0x00). So for some reason the startup code
cannot load the existing root object, thinks the database is empty and
tries to start fresh. I'm not sure how that could happen, unless the
ZODB connection strings are wrong or database permissions are not quite
right.

Hanno
Re: [ZODB-Dev] How to fix CorruptedDataError: Error reading unknown oid. Found '' at 81036527?
On Thu, Jul 14, 2011 at 9:40 PM, Jim Fulton j...@zope.com wrote:
> On Thu, Jul 14, 2011 at 3:23 PM, Hanno Schlichting ha...@hannosch.eu wrote:
>> At Jarn we've used that trick many times to repair broken internals
>> in the intid/keyreference data structures.
>
> Do you have any theories why objects are going away for you?

Not on a low level. I know the culprit is five.intid, which basically
registers zope.lifecycle event subscribers for all IPersistent objects
to add and remove intid registrations. That code seems to get things
wrong, but I've never dug into it to figure out under what
circumstances this happens.

I very much believe this is wrong application code in five.intid. It
does have to do some funky tricks with raising NotYet exceptions and
handling those at transaction boundaries, as the intid is calculated
from the _p_oid. A new object doesn't have a _p_oid until the object is
added to the connection. So this is rather tricky code dealing with
low-level assumptions. But I never got a reproducible case. So I've
just fixed the invalid data whenever I hit it and ripped out five.intid
from every project where possible.

To my knowledge no such problems exist in zope.intid/zope.keyreference.

Hanno
Re: [ZODB-Dev] Relstorage Blob support and Oracle
On Fri, Jun 10, 2011 at 7:17 AM, Shane Hathaway sh...@hathawaymix.org wrote:
> BTW, I'd like to release a final of RelStorage 1.5, but the #@$%$
> Windows tests are failing on Windows XP (and probably other versions
> of Windows). I think the problem is caused by file descriptors being
> left open somewhere. If any Windows specialists would like to lend a
> hand and fix the tests on RelStorage trunk, it would sure be
> appreciated.

Are you getting real test failures, or only warnings from the cleanup
module during test teardown? I've got myself a checkout of RelStorage
on Windows and am running the tests now. So far I only get warning
messages like:

  error: uncaptured python exception, closing channel
  __main__.ZEOTestServer :28210 at 0x2c9e108
  (type 'exceptions.WindowsError': [Error 32] Der Prozess kann nicht
  auf die Datei zugreifen, da sie von einem anderen Prozess verwendet
  wird:
  'c:\\users\\hannosch\\appdata\\local\\temp\\BlobAdaptedFileStorageTests3nvzbg\\Data.fs'
  [C:\Python\Python26-Installer\lib\asyncore.py|read|76]
  [C:\Python\Python26-Installer\lib\asyncore.py|handle_read_event|411]
  [c:\users\hannosch\.buildout\eggs\zodb3-3.10.3-py2.6-win-amd64.egg\ZEO\tests\zeoserver.py|handle_accept|100]
  [c:\users\hannosch\.buildout\eggs\zodb3-3.10.3-py2.6-win-amd64.egg\ZEO\tests\zeoserver.py|cleanup|34]
  [c:\users\hannosch\.buildout\eggs\zodb3-3.10.3-py2.6-win-amd64.egg\ZODB\FileStorage\FileStorage.py|cleanup|1253])

where the bit of German text translates to "the process cannot access
the file, as it is in use by another process". This looks like the
typical problem where some code opens a file without explicitly
closing it, but instead relies on garbage collection to do the job
during __del__ of the file object. That generally doesn't work well on
Windows.

Hanno
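The failure mode described above can be sketched generically (this is
an illustration of the pattern, not RelStorage's actual test code): on
Windows, removing a file while another handle to it is still open
raises "[Error 32] file in use", and relying on garbage collection
makes the moment the handle closes unpredictable. Closing
deterministically with a context manager avoids the race:

```python
import os
import tempfile

# Create a scratch file to stand in for a storage file like Data.fs.
fd, path = tempfile.mkstemp()
os.close(fd)

# Fragile pattern: f = open(path, "w") with no explicit close leaves
# the close to __del__, which on Windows may run too late and block a
# later os.remove() in test teardown.

# Robust pattern: the context manager guarantees the handle is closed
# before we try to delete the file.
with open(path, "w") as f:
    f.write("some test data")

os.remove(path)  # safe now: no open handle can block the delete
```

The same reasoning applies to sockets and any other OS resource held by
a test fixture: teardown code should never depend on the garbage
collector for cleanup ordering.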
Re: [ZODB-Dev] Occasional startup errors in serialize.py/_reconstructor
On Mon, May 9, 2011 at 1:14 PM, Andreas Jung li...@zopyx.com wrote:
> Occasionally I receive the following error after starting my instance,
> after the first request. There is no way to recover other than
> restarting the instance... then everything is fine. I don't know why
> this happens from time to time... any clue?
>
> [...]
>   File "/home/ajung/sandboxes/zopyx.authoring/lib/python2.6/copy_reg.py", line 48, in _reconstructor
>     obj = object.__new__(cls)
> TypeError: ('object.__new__(UserDataSchemaProvider) is not safe, use
> Persistence.Persistent.__new__()', <function _reconstructor at
> 0x2aad95da3b18>, (<class 'dgho.onkopedia.userdataschema.UserDataSchemaProvider'>,
> <type 'object'>, None))

What does the code of the dgho...UserDataSchemaProvider class look
like? I'm assuming it's similar to the one from plone.app.users, which
looks like:

  class UserDataSchemaProvider(object):
      implements(IUserDataSchemaProvider)

That one is registered as a global utility via a utility declaration
with a factory. What part of your code tries to store this utility in
the ZODB?

Hanno
Re: [ZODB-Dev] Occasional startup errors in serialize.py/_reconstructor
On Mon, May 9, 2011 at 2:08 PM, Andreas Jung li...@zopyx.com wrote:
> Jup - basically a stripped down version of
> http://svn.plone.org/svn/collective/collective.examples.userdata/trunk/collective/examples/userdata/

Hhm, that code has
http://svn.plone.org/svn/collective/collective.examples.userdata/trunk/collective/examples/userdata/profiles/default/componentregistry.xml
which ends up storing an instance of the object inside the ZODB.

I'm not sure what happens there exactly. The object should definitely
inherit from Persistent if you want to store it in the ZODB. As it is
now, it might only get stored accidentally, if the local site manager's
_utility_registrations object gets _p_changed set to True. In any case,
just putting Persistent into the base classes should be the easiest
fix.

Hanno
Re: [ZODB-Dev] Immutable blobs?
On Mon, May 9, 2011 at 2:26 PM, Laurence Rowe l...@lrowe.co.uk wrote:
> While looking at the Plone versioning code the other day, it struck me
> that it would be much more efficient to implement file versioning if
> we could rely on blobs never changing after their first commit, as a
> copy of the file data would not need to be made proactively in the
> versioning repository in case the blob was changed in a future
> transaction. Subclassing of blobs is not supported, but looking at the
> code I didn't see anything that actively prevented this other than
> Blob.__init__ itself. Is there something I've missed here? I had
> thought that an ImmutableBlob could be implemented by overriding the
> open and consumeFile methods of Blob to prevent modification after
> first commit.

I thought blobs were always immutable by design?

Hanno
Re: [ZODB-Dev] Occasional startup errors in serialize.py/_reconstructor
On Mon, May 9, 2011 at 2:34 PM, Andreas Jung li...@zopyx.com wrote:
> The question is more why this error is happening from time to time
> after the startup, after the first request - this scares me a bit. I
> also encounter a strange issue with PTS from time to time after
> startup... but not as frequently as this one...

PTS (PlacelessTranslationService) had definite problems at various
points. It stored persistent objects in module globals, so you'd get
connection state errors. It also relied on the absolute paths to the
client home being the exact same for all instances, and other fun
stuff. Just upgrade to Plone 4 to get rid of all that madness, as PTS
doesn't store persistent data there anymore.

Hanno
Re: [ZODB-Dev] How to check for setting the same values on persistent objects?
On Thu, May 5, 2011 at 6:27 PM, Alexandre Garel alex.ga...@tarentis.com wrote:
>> I'm assuming doing a general check for old == new is not safe, as it
>> might not be implemented correctly for all objects and doing the
>> comparison might be expensive.
>
> I know very few of the ZODB internals, but in Python old == new does
> not mean old is new.

Sure, but we aren't interested in object identity here. We want to know
if something close to cPickle.dumps(old_data, 1) ==
cPickle.dumps(new_data, 1), for which old_data == new_data is an
approximation, but likely not correct in all cases. Checking for
identity would only work for ints, interned strings and a very few
other things.

> I don't know the way ZODB retrieves a particular object exactly, but I
> assume it does this using _p_oid. So for persistent classes you could
> check old._p_oid == new._p_oid. For string, int you can of course use
> old is new.

The _p_oid of the object stays the same; it's the data it represents
that might change.

Hanno
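To illustrate why == is only an approximation of "the stored pickle
would change" (a standalone sketch using the stdlib pickle module in
place of cPickle):

```python
import pickle

# Two values that compare equal but pickle differently: replacing an
# int with an equal float is "no change" according to ==, yet it would
# still produce a new database record.
old_value, new_value = 1, 1.0

assert old_value == new_value
assert pickle.dumps(old_value, 1) != pickle.dumps(new_value, 1)
```

So an == check can suppress a write that a byte-level comparison would
have kept, and vice versa for objects with a loose __eq__.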
Re: [ZODB-Dev] Speeding up ZODB (was redis cache for RelStorage)
On Fri, May 6, 2011 at 2:22 PM, Pedro Ferreira jose.pedro.ferre...@cern.ch wrote:
> That's hard to do for a project that is already 8 or 9 years old; as
> you can see in the attached file, we've got many cases that fall
> outside your limits. I've noticed, for instance, that pages that
> involve the loading of 200 MaKaC.review.Abstract objects have an awful
> performance record (maybe because we then load for each object a
> handful of other referenced persistent objects).

I'd expect load times per persistent object to vary between 0.1 and
10ms. Over a network connection while sometimes hitting the disk, I'd
expect to see an average of 1ms. If you get something awful like an
Oracle Real Application Cluster, with virtualization, storage area
networks and different data centers involved, you are lucky to see
10ms. If you don't have any of the data in a cache and load hundreds of
objects, you very quickly get into the range of one to multiple seconds
to load the data. If you need to load more than 1000 objects from the
database to render a page, your database schema sucks (tm) ;-)

> But isn't RelStorage supposed to be slower than FileStorage/ZEO?

The benchmarks vary on this a little bit, but for read performance they
are basically the same. You can apply the same tricks, like using SSDs
or bigger OS disk caches, to speed up both of them. RelStorage has
simpler (freely available) clustering solutions via the native database
and supports things like the memcached cache.

Hanno
Re: [ZODB-Dev] Speeding up ZODB (was redis cache for RelStorage)
On Fri, May 6, 2011 at 10:14 PM, Shane Hathaway sh...@hathawaymix.org wrote:
> From my experience, most people who want ZODB to be faster want Zope
> catalogs in particular to be faster. I don't think prefetching can
> make catalogs much faster, though.

I've spent a lot of time lately on making ZCatalog faster. The main
tricks there are to store data in smarter ways, load fewer objects in
the first place, and minimize data sets as early as possible, so the
cost of intersection() and union() gets lower. There's a lot more you
can do to optimize ZCatalog, but prefetching would indeed not help
much. The only cases where you could prefetch are the ones you want to
avoid anyway, like loading an entire BTree or TreeSet because you need
to do a len(tree) or actually iterate over the entire thing.

All that said, once you hit large datasets it gets problematic to do
catalog operations on each Zope client. At some point a centralized
query approach on the server side, or via a web API, wins in terms of
overall resource efficiency.

Hanno
Re: [ZODB-Dev] Speeding up ZODB (was redis cache for RelStorage)
On Thu, May 5, 2011 at 10:43 AM, Pedro Ferreira jose.pedro.ferre...@cern.ch wrote:
> Since we are talking about speed, does anyone have any tips on making
> ZODB (in general) faster?

Query fewer objects from the database. Make sure you don't store lots
of tiny persistent objects in the database; I'd aim for storing data in
chunks of 8-32kb, or use blobs for larger objects. Remember that ZODB
is a key/value storage for the most part. Model your data accordingly.

> In our project, the DB is apparently the bottleneck, and we are
> considering implementing a memcache layer in order to avoid fetching
> so often from the DB.

Before you do that, you might consider switching to RelStorage, which
already has a memcached caching layer in addition to the connection
caches. But remember that throwing more caches at the problem isn't a
solution. It's likely the way you store or query the data from the
database that's not optimal.

> However, we were also wondering if we could in some way take advantage
> of different computer hardware - since the ZEO server is mostly
> single-threaded, we thought of getting a machine with a higher clock
> frequency and larger cache rather than a commodity 8-core server
> (which is what we are using now).

The ZEO server needs almost no CPU power, except for garbage collection
and packing. During normal operations the CPU speed should be
irrelevant.

> Any tips on the kind of hardware that performs best with ZODB/ZEO? Are
> there any adjustments that can be done at the OS or even application
> layer that might improve performance?

Faster disks. Whatever you can do to get faster disks will help
performance, but that's general advice that applies to all database
servers. You can also throw more memory at the db server, so the
operating system's disk cache will kick in and you'll actually read
data from memory instead of the disks.

Hanno
[ZODB-Dev] How to check for setting the same values on persistent objects?
Hi. I tried to analyze the overhead of changing content in Plone a bit. It turns out we write back a lot of persistent objects to the database, even tough the actual values of these objects haven't changed. Digging deeper I tried to understand what happens here: 1. persistent.__setattr__ will always set _p_changed to True and thus cause the object to be written back 2. Some BTree buckets define the VALUE_SAME macro. If the macro is available and the new value is the same as the old, the change is ignored 3. The VALUE_SAME macro is only defined for the int, long and float value variants but not the object based ones 4. All code in Products.ZCatalog does explicit comparisons of the old and new value and ignores non-value-changes. I haven't seen any other code doing this. I'm assuming doing a general check for old == new is not safe, as it might not be implemented correctly for all objects and doing the comparison might be expensive. But I'm still curious if we could do something about this. Some ideas: 1. Encourage everyone to do the old == new check in all application code before setting attributes on persistent objects. Pros: This works today, you know what type of values you are dealing with and can be certain when to apply this, you might be able to avoid some computation if you store multiple values based on the same input data Cons: It clutters all code 2. Create new persistent base classes which do the checking in their __setattr__ methods Pros: A lot less cluttering in the application code Cons: All applications would need to use the new base classes. Developers might not understand the difference between the variants and use the checking versions, even though they store data which isn't cheap to compare 2.a. Create new base classes and do type checking for built-in types Pros: Safer to use than always doing value comparisons Cons: Still separate base classes and overhead of doing type checks 3. 
Compare object state at the level of the pickled binary data This would need to work at the level of the ZODB connection. When doing savepoints or commits, the registered objects flagged as _p_changed would be checked before being added to the modified list. In order to do this, we need to get the old value of the object, either by loading it again from the database or by keeping a cache of the non-modified state of all objects. The latter could be done in persistent.__setattr__, where we add the pristine state of an object into a separate cache before doing any changes to it. This probably should be a cache with an upper limit, so we avoid running out of memory for connections that change a lot of objects. The cache would only need to hold the binary data and not unpickle it. Pros: On the level of the binary data, the comparisons is rather cheap and safe to do Cons: We either add more database reads or complex change tracking, the change tracking would require more memory for keeping a copy of the pristine object. Interactions with ghosted objects and the new cache could be fragile. 4. Compare the binary data on the server side Pros: We can get to the old state rather quickly and only need to deal with binary string data Cons: We make all write operations slower, by adding additional read overhead. Especially those which really do change data. This won't work on RelStorage. We only safe disk space and cache invalidations, but still do the bulk of the work and sent data over the network. I probably missed some approaches here. None of the approaches feels like a good solution to me. Doing it server side (4) is a bad idea in my book. Option 3 seems to be the most transparent and safe version, but is also the most complicated to write with all interactions to other caches. It's also not clear what additional responsibilities this would introduce for subclasses of persistent which overwrite various hooks. 
Maybe option one is the easiest here, but it would need some documentation about this being a best practice. Until now I didn't realize the implications of setting attributes to unchanged values. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
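Idea 2 above (a checking base class) can be sketched without the real persistent package, whose base class sets _p_changed in C. The class name ChangeCheckingBase and its _p_changed handling are illustrative stand-ins for this discussion, not the persistent API:

```python
_marker = object()


class ChangeCheckingBase(object):
    """Sketch of idea 2: only flag the object as changed when the
    new attribute value actually differs from the stored one."""

    _p_changed = False  # stand-in for the persistent machinery

    def __setattr__(self, name, value):
        # Leave ZODB-internal attributes (_p_* / _v_*) alone.
        if not name.startswith(('_p_', '_v_')):
            old = self.__dict__.get(name, _marker)
            if old is not _marker and old == value:
                return  # identical value: no write, no _p_changed
            object.__setattr__(self, '_p_changed', True)
        object.__setattr__(self, name, value)
```

The caveat from the email applies: this silently assumes a correct and cheap __eq__ for every stored value, which is exactly why a general check is unsafe.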
Re: [ZODB-Dev] How to check for setting the same values on persistent objects?
On Wed, May 4, 2011 at 1:09 PM, Laurence Rowe l...@lrowe.co.uk wrote: Persistent objects are also used as a cache and in that case code relies on an object being invalidated to ensure its _v_ attributes are cleared. Comparing at the pickle level would break these caches. So you would expect someone to store _v_ attributes on objects as caches, where that cached data is dependent on more than the data of the object? Do you know of any examples of this? I would expect to see _v_ attributes only being used if the cache data is dependent on the object state itself, i.e. if that doesn't change, then the cached data doesn't have to change either. I suspect that this is only really a problem for the catalogue. Content objects will always change on the pickle level when they are invalidated as they will have their modification date updated. I imagine you also see archetypes doing bad things as it tends to store one persistent object per field, but that is just bad practice. When editing a content object in Plone, there are more than 20 different persistent objects being set to _p_changed = True. There are a couple of them which should change with the modification date, but a whole lot which don't have to change. The other ones include: the container, the container's position map, the persistent mapping containing the workflow history, all base units, the at_references folder, the annotations storage btree, OOBuckets inside that btree, ... and a lot more. We can add code to deal with all of these, but it's a lot of places. Essentially any place that does persistentobject.attribute = value should do the check - that's a whole lot of them. Maybe this is the best we can do and document this as a best practice for ZODB development - I was just trying to see if there's a better way. It would be interesting to see the performance impact of adding newvalue != oldvalue checks on the catalogue data structures.
This would also prevent the unindex logic being called unnecessarily. The catalog isn't a problem, it already has these checks in all places. It is less efficient than it could be, as it needs to do:

old = btree.get(key, None)
if old != new:
    btree[key] = new

So it ends up traversing the btree to the right bucket twice. The int/float based buckets can do the check inside their item assignment, so they avoid the extra traversal. It could be interesting to allow buckets with object values to do the check inside __setitem__ via some additional flag, so the extra traversal could be avoided. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
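The get-then-set pattern described above can be wrapped in a small helper. The name set_if_changed is made up for this sketch, and a plain dict stands in for a BTree (with a real BTree both the get and the set still traverse to the bucket, which is the double traversal noted above):

```python
_missing = object()


def set_if_changed(mapping, key, value):
    """Write ``key`` only when the stored value actually differs.

    Returns True when a write happened.  Avoids dirtying the
    containing bucket for no-op assignments, at the cost of an
    extra lookup.
    """
    old = mapping.get(key, _missing)
    if old is not _missing and old == value:
        return False
    mapping[key] = value
    return True
```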
Re: [ZODB-Dev] [RFC] ZEO: Allow non-packaged products
On Fri, Apr 29, 2011 at 3:24 PM, Vincent Pelletier vinc...@nexedi.com wrote: I need ZEO to be able to find non-packaged products for conflict resolution purposes. As ZEO AFAIK doesn't support this I gave it a quick try. I reached the works-for-me state, which I now would like to get feedback on. Basically, I transposed Zope's products config option to ZEO. Did I miss anything already existing to achieve this? You can achieve this by adding a normal directory to sys.path:

1. create a base directory
2. create a directory called 'Products' in it
3. put the setuptools magic into an __init__.py in the Products directory, so it contains: __import__('pkg_resources').declare_namespace(__name__)
4. add any of your plain 'products' into the Products folder
5. add the base directory to sys.path

Is such a change (functionality-wise) welcome in ZEO? I think it makes little sense to introduce a special legacy Zope2 concept to ZODB3. Especially since there's an easy workaround as described above, and actually repackaging products into full packages is a 15-minute job each. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
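The five steps above can be reproduced end to end. The email shows the setuptools (pkg_resources) namespace spelling; this sketch uses the stdlib pkgutil equivalent so it runs anywhere, and the 'MyProduct' name is invented. It assumes no other 'Products' package already shadows the new one on sys.path:

```python
import os
import sys
import tempfile

# 1. + 2. a base directory with a 'Products' directory inside
base = tempfile.mkdtemp()
products = os.path.join(base, 'Products')
os.mkdir(products)

# 3. the namespace-package __init__.py; the email shows the
# setuptools spelling, this is the stdlib (pkgutil) equivalent:
with open(os.path.join(products, '__init__.py'), 'w') as f:
    f.write("__path__ = __import__('pkgutil').extend_path(__path__, __name__)\n")

# 4. drop a plain 'product' into the Products folder
os.mkdir(os.path.join(products, 'MyProduct'))
with open(os.path.join(products, 'MyProduct', '__init__.py'), 'w') as f:
    f.write("title = 'MyProduct'\n")

# 5. add the base directory to sys.path
sys.path.insert(0, base)

from Products.MyProduct import title
```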
Re: [ZODB-Dev] packing ZODB
Hi. On Thu, Mar 31, 2011 at 12:46 PM, Adam GROSZER agros...@gmail.com wrote: After investigating FileStorage a bit, I found that GC runs on objects, but pack later by transactions. That means that if there's a bigger-ish transaction, we can't get rid of it until all of its objects are GCed (or superseded by newer states). Is that correct? I think so. I think yes, so an idea might be to split up transactions to one transaction per object state somehow and pack again. This would definitely work only offline of course. I think it would be interesting to gather some statistics on this. How often does this actually happen, and how much orphaned data could one get rid of? Implementing something that iterates over transactions, takes their live objects and writes them into a new transaction should be possible without taking the database offline. How is this handled by relstorage? RelStorage does the same. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] Wrong blob returned in one of the zeo clients
On Tue, Mar 1, 2011 at 11:50 PM, Shane Hathaway sh...@hathawaymix.org wrote: On 03/01/2011 02:47 PM, Maurits van Rees wrote: - ZODB3 3.8.6-polling Blobs are stored in postgres with RelStorage. Layout of the blob cache dir is 'zeocache'. I should have spotted this earlier. The zeocache layout was only introduced in ZODB 3.9. I'm surprised it doesn't fail in worse ways under 3.8. I reproduced your setup and I think I found it: shared-blob-dir false seems to be incompatible with ZODB 3.8, because the blob code in ZODB 3.8 constructs blob filenames in an inflexible way. IOW, BlobCacheLayout was never intended to work with ZODB 3.8. I expect my test runner to confirm this within the next couple of hours, then I'll make it so shared-blob-dir false is not available with ZODB 3.8. Indeed. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] blobs missing with relstorage and small blob cache dir
On Mon, Feb 28, 2011 at 4:19 PM, Maurits van Rees m.van.r...@zestsoftware.nl wrote: This is with RelStorage 1.5.0-b1, with blob-dir enabled, shared-blob-dir false and a low blob-cache-size, say 10 bytes. Using Plone 3.3.5, plone.app.blob 1.3, plone.app.imaging 1.0.1, but the same is probably true for non-Plone setups. Blobs are considered experimental in ZODB 3.8. Especially the entire blob cache changed completely for ZODB 3.9. I think you might want to upgrade to Plone 4 and ZODB 3.9 to get a stable environment. Or you'll likely run into more problems. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] blobs missing with relstorage and small blob cache dir
On Mon, Feb 28, 2011 at 5:09 PM, Maurits van Rees m.van.r...@zestsoftware.nl wrote: Op 28-02-11 16:34, Martijn Pieters schreef: On Mon, Feb 28, 2011 at 16:22, Hanno Schlichtingha...@hannosch.eu wrote: Blobs are considered experimental in ZODB 3.8. Especially the entire blob cache changed completely for ZODB 3.9. I think you might want to upgrade to Plone 4 and ZODB 3.9 to get a stable environment. Or you'll likely run into more problems. Thanks for the advice, Hanno, also on the plone-dev list. This is the first time I have read such a clear warning against using blobs in Plone 3 though (same for Zope 2.10.x I guess). http://plone.org/products/plone.app.blob also gives no clear warning. What do others think? To quote Jim: I consider blob support in 3.8 to be somewhat experimental. [1] You can use blobs in Plone 3 / ZODB 3.8. That's how we developed it and then ironed out all the problems. But since we have more stable versions of the software today, it doesn't make a lot of sense to go back to the early development version. ZODB 3.9 has much more stable blob support and direct support for RelStorage (without patches). If you want to use either of those, you do yourself a favor by using the latest stable versions. Hanno [1] https://mail.zope.org/pipermail/zodb-dev/2009-November/012837.html ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] Advice on whether to run relstorage database and Zope on different servers
Hi. On Thu, Feb 24, 2011 at 4:20 PM, Anthony Gerrard anthonygerr...@gmail.com wrote: * I'm familiar with enterprise environments where you would have an app server and a database server but are there any advantages to putting Zope and MySQL on different servers? Database servers (be it MySQL or ZODB) have different requirements than application servers (Zope). For small installations this generally doesn't matter, but if you have larger deployments you can adjust your hardware to fit the different roles. Database servers are generally disk I/O bound. Using the fastest possible and very reliable storage for these can help a lot. Be that either a RAID 1 or 10 of 10k or 15k rpm SAS drives, or a similar RAID of SSDs. Be prepared for some major performance degradation if you put your database on non-local NAS or SAN setups. Unless optimized correctly these will slow things down considerably. The ZODB has almost no CPU load, and the SQL queries done by RelStorage also incur almost no CPU load. Memory isn't all that important for databases as long as the indexes fit into memory and you can handle savepoints / rollback data. Still, some memory for the OS disk cache does help. Application servers are generally CPU bound and in the case of Zope benefit a lot from memory for the connection cache. With RelStorage you can also use other cache setups like the shared memcached one. The disk types of application servers don't matter much and can be simple local disks, as there's no actual business data on the servers but just reproducible configuration setup - if you use proper version control for your code stored elsewhere, that is. * I'd expect a performance hit if we run Zope + MySQL on separate servers but is this hit manageable? As long as the servers are on the same physical network with a low latency (~1ms) there's almost no overhead compared to running things locally.
If you get into higher latencies of 10ms or more, you will see a noticeable drop in performance with Zope/Plone though. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] Issue with Blobstorage for a third client in Plone
On Fri, Feb 11, 2011 at 11:24 PM, Andreas Mantke ma...@gmx.de wrote: I want to create a third instance (client3) in a running zeoserver that uses its own database (data.fs) and blobstorage. This third client should run a repository for extensions. The first client (and the second) is used for a group of documentation writers. Therefore the databases had to be separate. We had to do the same with the blobstorages. This is really a question for a Plone support channel like http://plone.org/support/forums/setup. Given your description I'd probably use a different ZEO server and instance for this use-case, possibly a completely separate buildout. You seem to have two completely independent sites, so there's no point in trying to run them from the same ZEO server process. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] Increasing MAX_BUCKET_SIZE for IISet, etc
Hi. On Thu, Jan 27, 2011 at 9:00 AM, Matt Hamilton ma...@netsight.co.uk wrote: Alas we are. Or rather, alas, ZCatalog does ;) It would be great if it didn't but it's just the way it is. If I have 300,000 items in my site, and every one of them is visible to someone with the 'Reader' role, then the allowedRolesAndUsers index will have an IITreeSet with 300,000 elements in it. Yes, we could try and optimize out that specific case, but there are others like that too. If all of my items have no effective or expires date, then the same happens with the effective range index (DateRangeIndex 'always' set). You are using queryplan in the site, right? The most typical catalog query for Plone consists of something like ('allowedRolesAndUsers', 'effectiveRange', 'path', 'sort_on'). Without queryplan you indeed load the entire tree (or trees inside allowedRolesAndUsers) for each of these indexes. With queryplan it knows from prior execution that the set returned by the path index is the smallest. So it first calculates this. Then it uses this small set (usually 10-100 items per folder) to look inside the other indexes. It then only needs to do an intersection of the small path set with each of the trees. If the path set has fewer than 1000 items, it won't even use the normal intersection function from the BTrees module, but the optimized Cython-based version from queryplan, which essentially does a for-in loop over the path set. Depending on the size ratio between the sets this is up to 20 times faster with in-memory data, and even more so if it avoids database loads. In the worst case you would load a number of buckets equal to the length of the path set; usually you load a lot less. We have large Plone sites in the same range of several hundred thousand items, and with queryplan and blobs we can run them with ZODB cache sizes of less than 100,000 items and memory usage of 500 MB per single-threaded process.
Of course it would still be really good to optimize the underlying data structures, but queryplan should help make this less urgent. Ahh interesting, that is good to know. I've not actually checked the conflict resolution code, but do bucket change conflicts actually get resolved in some sane way, or does the transaction have to be retried? Conflicts inside the same bucket can be resolved and you won't get to see any log message for them. If you get a ConflictError in the logs, it's one where the request is being retried. And imagine if you use zc.zlibstorage to compress records! :) This is Plone 3, which is Zope 2.10.11, does zc.zlibstorage work on that, or does it need newer ZODB? zc.zlibstorage needs a newer ZODB version. 3.10 and up to be exact. Also, unless I can sort out that large number of small pickles being loaded, I'd imagine this would actually slow things down. The Data.fs would be smaller, making it more likely to fit into the OS disk cache. The overhead of uncompressing the data is small compared to the cost of a disk read instead of a memory read. But it's hard to say what exactly happens with the cache ratio in practice. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
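The queryplan optimisation described above — probing the large index once per element of the small path set instead of intersecting whole trees — can be sketched in pure Python. The name intersect_small is invented, and a plain set stands in for an IITreeSet:

```python
def intersect_small(small_set, large_index):
    """Intersect by membership-testing the large structure once per
    element of the small set, rather than walking both structures.

    For a 10-100 element path result this touches only a handful of
    buckets of the large index, instead of loading it wholesale.
    """
    return [docid for docid in small_set if docid in large_index]
```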
Re: [ZODB-Dev] Increasing MAX_BUCKET_SIZE for IISet, etc
On Thu, Jan 27, 2011 at 11:09 AM, Matt Hamilton ma...@netsight.co.uk wrote: Hanno Schlichting hanno at hannosch.eu writes: There still seem to be instances in which the entire set is loaded. This could be an artifact of the fact I am clearing the ZODB cache before each test, which I think seems to be clearing the query plan. Yes. The queryplan is stored in a volatile attribute, so clearing the zodb cache will throw away the plan. The queryplan version integrated into Zope 2.13 stores the plan in a module global with thread locks around it. Speaking of which I saw in the query plan code some hook to load a pre-defined query plan... but I can't see exactly how you supply this plan or in what format it is. Do you use this feature? You get a plan representation by calling:

http://localhost:8080/Plone/@@catalogqueryplan-prioritymap

Then add an environment variable pointing to a variable inside a module:

[instance]
recipe = plone.recipe.zope2instance
environment-vars = CATALOGQUERYPLAN my.customer.module.queryplan

Create that module and put the dump in it. It should start with something like:

# query plan dumped at 'Mon May 24 01:33:28 2010'
queryplan = {
    '/Plone/portal_catalog': {
        ...
    }

You can keep updating this plan with some new data from the dump once in a while. Ideally this plan should be persisted in the database at certain intervals, but we haven't implemented that yet. You don't want to persist the plan in every request doing a catalog query. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] Lazy Instantiation
On Mon, Jan 24, 2011 at 1:43 AM, Leonardo Santagada santag...@gmail.com wrote: On Sat, Jan 22, 2011 at 3:23 PM, Jim Fulton j...@zope.com wrote: a.child == None will because equality comparison will attempt to access methods on the child, which will cause its activation. Why would it unghost the object? I hope I understand this part, someone please correct me ;) The unghostify check is done in cPersistence.c in the Per_getattro function. The Persistence type sets this as: (getattrofunc)Per_getattro, /* tp_getattro */ This is equivalent to overriding __getattribute__ in Python classes. It's not set as tp_getattr, which would be the same as __getattr__. __getattribute__ is called on every attribute access, including method lookups or lookups in the instance dictionary. __getattr__ is only called if there's no method on the type or any of its bases and there's no matching key in the instance dict. So once you want to compare a ghost to something, there are a couple of lookups for __eq__, __cmp__ or similar APIs. Those lookups will cause the object to be unghostified. The is operator compares memory addresses and doesn't call any method on the type, so it doesn't unghostify the object. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
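The __getattribute__ / __getattr__ split described above can be demonstrated in pure Python. The Traced class is a made-up illustration, not the persistent implementation, and one caveat applies: in pure Python, implicit special-method lookups (like ==) bypass instance __getattribute__, so this shows the general lookup mechanism rather than the exact C-level comparison path:

```python
class Traced(object):
    """Records every name seen by __getattribute__ (the Python
    analogue of the tp_getattro slot / Per_getattro)."""

    def __init__(self):
        self.hits = []  # names observed by __getattribute__

    def __getattribute__(self, name):
        # Consulted on *every* attribute access, hit or miss.
        if name != 'hits':
            object.__getattribute__(self, 'hits').append(name)
        return object.__getattribute__(self, name)

    def __getattr__(self, name):
        # Only reached when normal lookup fails (like tp_getattr).
        return 'missing:' + name

    def ping(self):
        return 'pong'
```

Note that `t is None` performs no attribute access at all, which mirrors why `is` never unghostifies a persistent object.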
Re: [ZODB-Dev] Some tests do not pass load's version parameter
Hi. On Mon, Dec 13, 2010 at 10:11 AM, Vincent Pelletier vinc...@nexedi.com wrote: Reading ZODB.interface, I realised the version parameter is defined as mandatory: class IStorage(Interface): [...] def load(oid, version): That parameter was deprecated in 3.9. Looks like the interface wasn't updated. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] Sharing (persisted) strings between threads
On Wed, Dec 8, 2010 at 11:06 AM, Malthe Borch mbo...@gmail.com wrote: With 20 active threads, each having rendered the Plone 4 front page, this approach reduced the memory usage by 70 MB. Did you measure throughput of the system? In the benchmarks I've seen, thread counts of 3 or more will perform worse than one or two threads. At least with the GIL implementation up to Python 2.6 you get much worse performance the more threads you have on multicore systems. There are good explanations of the behavior by David Beazley at http://www.dabeaz.com/blog.html. In default Plone 4 we have two threads per instance. If you have more than a single ZEO instance you should reduce the thread number to one. We also set a default Python checkinterval of 1000 (instructions), which prevents thread switching for long stretches of time to counter the GIL in the two-thread case. So while sharing data between threads might sound interesting, it's not of much help in Python. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
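The checkinterval tuning mentioned above was a Python 2 knob; Python 3.2 replaced it with a time-based setting. A quick sketch of both spellings (the 0.05 value below is only an example, not the Plone default):

```python
import sys

# Python 2 counted bytecode instructions between GIL checks:
#   sys.setcheckinterval(1000)
#
# Python 3.2+ uses a time-based switch interval in seconds; raising
# it similarly keeps a running thread from being preempted as often:
sys.setswitchinterval(0.05)  # default is 0.005 seconds
```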
Re: [ZODB-Dev] 32-bit vs 64-bit - RelStorage on MySQL
On Thu, Nov 18, 2010 at 5:19 PM, Leonardo Santagada santag...@gmail.com wrote: On Thu, Nov 18, 2010 at 1:47 PM, Chris Withers ch...@simplistix.co.uk wrote: On 18/11/2010 15:39, Marius Gedminas wrote: About the only noticeable difference -- other than the obvious memory growth What obvious memory growth? The one from pointers and anything related to memory going from 32bits to 64bits in size. Py_objects get fatter because of that. For Zope based applications I've generally seen 50% to 100% memory growth when moving to 64bit. That's why we stick to 32bit for a number of memory hungry applications (*wink* Plone). Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] Default comparison considered harmful in BTrees.
On Wed, Oct 27, 2010 at 8:56 PM, Jim Fulton j...@zope.com wrote: On Wed, Oct 27, 2010 at 1:59 PM, Hanno Schlichting ha...@hannosch.eu wrote: I suppose that depends on the application. Was the use of None intentional, or the result of sloppy coding? I'd have to check the code. I expect it to be sloppy coding. I'll make a 3.10.1 release without it and a 3.10.2a1 release with it. Thanks a lot for this! I do think these warnings are beneficial, possibly wildly so. :) As I said earlier, in 3.11, it will be an error to use an object with default comparison as a key, but loading state with such objects will only warn. I agree with them being useful. And once I'm not busy at the conference, I'll give it more proper testing. Thanks, Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] Default comparison considered harmful in BTrees.
On Mon, Oct 25, 2010 at 11:51 PM, Jim Fulton j...@zope.com wrote: I'm inclined to treat the use of the comparison operator inherited from object in BTrees to be a bug. I plan to fix this on the trunk. Did you mean to throw warnings for simple built-in types? I'm now getting warnings for simple strings and ints, I'd expect tuples as well. All of these do inherit from object and use the default __cmp__. But their hash implementation used in the default __cmp__ should be safe to use. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] Default comparison considered harmful in BTrees.
On Wed, Oct 27, 2010 at 6:45 PM, Jim Fulton j...@zope.com wrote: If not, I wonder if the existing indexes have some bad values in them that are triggering this somehow. The relevant check is being done when loading state. I bet you have some bad keys (e.g. None) in your data structures. Could you check that? If this is what's happening, then I think the warning is useful. For 3.11 though (aka trunk) I'll rearrange things so the check isn't performed when loading state. Aha. I haven't looked into all the cases, but I did find a None value in the two examples I checked. So in an OOBTree with string keys, None is considered invalid and would need to be an empty string? Thanks! Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
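Why a None key among string keys is trouble: BTrees keep keys totally ordered, and ordering None against strings was only accidentally defined in Python 2 (it sorted by type name, so the ordering depended on interpreter internals); in Python 3 the same comparison raises outright. A quick illustration without BTrees:

```python
# Mixing None with string keys breaks total ordering, which is what
# an ordered container such as a BTree relies on.
keys = ['beta', 'alpha', None]
try:
    ordering = sorted(keys)   # Python 3: unorderable types
except TypeError:
    ordering = None           # the failure mode the warning guards against
```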
Re: [ZODB-Dev] [Zope-dev] Zope Tests: 31 OK, 19 Failed, 2 Unknown
On Fri, Sep 10, 2010 at 10:52 PM, Tres Seaver tsea...@palladion.com wrote: My attempts at diagnosis. Subject: FAILED : Zope Buildbot / zope2 slave-ubuntu64 From: jdriessen at thehealthagency.com Date: Thu Sep 9 09:31:14 EDT 2010 URL: http://mail.zope.org/pipermail/zope-tests/2010-September/019683.html This failure is in the tests for tempstorage.TempStorage, which likely hasn't kept up with recent ZODB 3.10 API changes::

Error in test check_checkCurrentSerialInTransaction (tempstorage.tests.testTemporaryStorage.ZODBProtocolTests)
Traceback (most recent call last):
  File "/usr/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/home/zope/.buildout/eggs/ZODB3-3.10.0b6-py2.6-linux-x86_64.egg/ZODB/tests/BasicStorage.py", line 235, in check_checkCurrentSerialInTransaction
    self._storage.tpc_finish(t)
  File "/home/zope/.buildout/eggs/ZODB3-3.10.0b6-py2.6-linux-x86_64.egg/ZODB/BaseStorage.py", line 295, in tpc_finish
    self._finish(self._tid, u, d, e)
  File "/home/zope/.buildout/eggs/tempstorage-2.11.3-py2.6.egg/tempstorage/TemporaryStorage.py", line 256, in _finish
    referencesf(data, referencesl)
  File "/home/zope/.buildout/eggs/ZODB3-3.10.0b6-py2.6-linux-x86_64.egg/ZODB/serialize.py", line 629, in referencesf
    u.noload()
UnpicklingError: invalid load key, 'x'.

I'm CC'ing the ZODB list in hopes somebody there can tell us what we need to do to get TemporaryStorage to conform to ZODB's expectations. It took me a while to figure this one out. It's not actually an API change but a side-effect of the tempstorage tests reusing some ZODB tests. I fixed it in https://mail.zope.org/pipermail/zodb-checkins/2010-September/012464.html tempstorage always loads the data in _finish and thus tripped over the invalid payload of a simple 'x' string in those tests. The normal ZODB tests don't care about the data payload and thus were happy to ignore the invalid data. With current ZODB trunk I don't get any test failures anymore inside Zope2.
So once 3.10.0b7 is out, we can upgrade to it. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] RFC: deprecate transaction user and description fields in favor of extended info and simplify extended info API
On Thu, Sep 23, 2010 at 4:24 PM, Jim Fulton j...@zope.com wrote: The user and description fields are somewhat archaic. They can only be strings (not unicode) and can as easily be handled as extended info. I propose to deprecate the 'user' and 'description' attributes, and the 'setUser' and 'note' methods, and to add a new 'info' attribute whose attributes can be set to set extended info. For example:

transaction.info.user = u'j1m'

Is this supposed to be extensible in the sense of allowing arbitrary information? In that case I'd prefer this to have a dictionary spelling of:

transaction.info[u'user'] = u'j1m'

That would make it easier to look up all available keys and values in the info attribute. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
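A minimal sketch of the dictionary spelling proposed above. TransactionInfo is a made-up name for illustration, not the transaction package API; the text-key restriction mirrors the unicode-friendly intent of the proposal:

```python
class TransactionInfo(dict):
    """Extended transaction metadata as a plain mapping, so callers
    can enumerate all available keys and values."""

    def __setitem__(self, key, value):
        if not isinstance(key, str):
            raise TypeError('info keys must be text')
        dict.__setitem__(self, key, value)
```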
Re: [ZODB-Dev] ZODB: invalidations out of order
On Tue, Sep 21, 2010 at 3:48 PM, Jim Fulton j...@zope.com wrote: 3.8.6 is available now with this bug fix and a few others. Note that the BTrees 32bit bug fix (OverflowError to TypeError) also needs some adjustments in Zope2 (IIRC in the DateIndex code) and some add-ons (like CMFEditions). So it's not straightforward to use this in older Plone versions. Personally I used 3.8.4 in all Plone 3 versions with blobs. We've run this combination on large data sets without any problems and I know of quite a number of other Plone companies having done the same. I'm uncomfortable with people using 3.8 with blobs in production. I'm going to spend some time today seeing if I can add back enough version api (as opposed to support :) in 3.9 to make it work with Zope 2.10. That's a very noble goal :) Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] ZODB: invalidations out of order
On Tue, Sep 21, 2010 at 4:30 PM, Jim Fulton j...@zope.com wrote: On Tue, Sep 21, 2010 at 10:00 AM, Hanno Schlichting ha...@hannosch.eu wrote: Note that the BTrees 32bit bug fix (OverflowError to TypeError) also needs some adjustments in Zope2 (IIRC in the DateIndex code) and some add-ons (like CMFEditions). So it's not straightforward to use this in older Plone versions. AFAIK, this fix doesn't convert overflow errors to type errors. It simply raises type errors in situations where data were simply stored incorrectly before. I'm surprised that this would cause problems. It was changes like this one http://svn.zope.org/Zope/trunk/src/Products/PluginIndexes/DateIndex/DateIndex.py?rev=115442&r1=115279&r2=115442 BTW, is there a standard way to get Zope 2.10 to use a version of ZODB other than the one that ships with it? It's all just a PYTHONPATH in the end. So if you put something on the path by the name of ZODB first, then the one in the software home inside Zope2 won't be loaded. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
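The "first one on the path wins" behaviour described above can be verified with two throwaway stand-in packages. The directory layout and the name fakezodb are invented for this sketch; with a real Zope 2.10 you would put the newer ZODB ahead of the software home on PYTHONPATH in the same way:

```python
import os
import sys
import tempfile


def make_fake_zodb(version):
    """Create a throwaway package directory reporting ``version``."""
    base = tempfile.mkdtemp()
    pkg = os.path.join(base, 'fakezodb')
    os.mkdir(pkg)
    with open(os.path.join(pkg, '__init__.py'), 'w') as f:
        f.write('version = %r\n' % version)
    return base


bundled = make_fake_zodb('3.7')  # stands in for the bundled copy
newer = make_fake_zodb('3.9')    # stands in for the replacement

sys.path.append(bundled)
sys.path.insert(0, newer)        # earlier path entries shadow later ones

import fakezodb
```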
Re: [ZODB-Dev] ZODB: invalidations out of order
On Tue, Sep 21, 2010 at 5:20 PM, Jim Fulton j...@zope.com wrote: It was changes like this one http://svn.zope.org/Zope/trunk/src/Products/PluginIndexes/DateIndex/DateIndex.py?rev=115442&r1=115279&r2=115442 I don't understand how the BTree changes would run afoul of this. This code should prevent the BTree from overflowing in the first place, no? It should. But it failed to recognize that a Python int on a 64bit platform is too large to fit into an IITreeSet. For some reason this problem only started showing up after the BTree fixes. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] ZODB: invalidations out of order
On Mon, Sep 20, 2010 at 6:00 PM, Jim Fulton j...@zope.com wrote: Do you know that Zope 2.10 won't work with ZODB 3.9? If so, I'm curious why? Zope 2.10 includes ZODB 3.7 by default. 3.8 does work fine with it. But in 3.9 versions got removed. There was a good deal of code in Zope 2 that still claimed to support or depend on versions. We only cleaned out all those places in the Zope 2.12 code. Of course it's possible to create a branch of Zope 2.10 and backport these changes, but since 2.10 is long out of maintenance, those changes would never be released in an official version. Instead we recommend all Plone people to upgrade to Plone 4 which includes Zope 2.12 and ZODB 3.9 and even runs fine with ZODB 3.10. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] ZODB: invalidations out of order
On Mon, Sep 20, 2010 at 8:08 PM, Jim Fulton j...@zope.com wrote: Does Zope 2.10 actually break with ZODB 3.9 if a user doesn't use versions? Or is this a matter of test failures? Last time I checked it did break. There's at least the temporary storage implementation that required changes to deal with the dropped version argument from several APIs. But IIRC there was more stuff in the control panel and startup code. Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
[ZODB-Dev] 3.10 final?
Hi, just wondering what the state of 3.10 final is? We had a couple beta releases already. What needs to be done to move it forward and is there anything the community can help with? Hanno ___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
Re: [ZODB-Dev] Weird KeyError with OOBTree
On Mon, Aug 16, 2010 at 12:14 PM, Pedro Ferreira jose.pedro.ferre...@cern.ch wrote: Could this be some problem with using persistent objects as keys in a BTree? Some comparison problem?

I'm not entirely sure about this, but I think using persistent objects as keys isn't supported. Looking at the code, I doubt using anything except simple types like unicode strings or tuples of simple types will work without further work. From what I can see in the code, BTrees use functions like PyObject_Compare to compare different keys. Persistent doesn't implement any special compare function and falls back to the standard hash algorithm for an object. This happens to be its memory address. The memory address obviously changes over time and the same address gets reused for different objects. I think implementing a stable hash function for your type could make this work though. The ZODB gods correct me please :) Hanno
Re: [ZODB-Dev] Weird KeyError with OOBTree
On Mon, Aug 16, 2010 at 2:04 PM, Pedro Ferreira jose.pedro.ferre...@cern.ch wrote: I think implementing a stable hash function for your type could make this work though. From what I read, ZODB doesn't use hash functions, relying on __cmp__ instead. So, I guess I should make my class non-persistent and implement a __cmp__ function for it...

Right, implementing __cmp__ or all of the rich compare functions would be best. The __hash__ is just used as the default backend for __cmp__ of object. It's probably better to not rely on that indirection and implement compare directly. Hanno
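As a sketch of that advice: give the key class a stable, total ordering based on immutable state instead of relying on the default identity-based comparison. The `Account` class and its `number` attribute are made-up examples, not from this thread, and under Python 2 you would typically implement `__cmp__` directly; here the same idea is shown with rich comparisons.

```python
import functools

# Hypothetical key class with a stable total ordering. An OOBTree needs
# comparisons to give the same answer every time so it can find the
# right bucket again; ordering on immutable state guarantees that.
@functools.total_ordering
class Account(object):
    def __init__(self, number):
        self.number = number  # immutable identity used for ordering

    def __eq__(self, other):
        return self.number == other.number

    def __lt__(self, other):
        return self.number < other.number

# Sorting stands in for BTree bucket placement here: equal-valued
# instances compare equal, and the order is independent of id().
keys = sorted([Account(3), Account(1), Account(2)])
```

With the default comparison, two `Account(3)` instances created at different times would not be treated as the same key; with this ordering they are.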
Re: [ZODB-Dev] ZEO in v3.10 with older clients and older Python
On Thu, Jul 29, 2010 at 1:20 PM, Christian Theune c...@gocept.com wrote: ZEO in version 3.10 is supposed to work with older clients. Is it also intended to work with older clients running Python 2.4? The server needs to run on Python 2.5 due to the with statement.

The docs explicitly state that Python 2.4 is no longer supported at all. Go with the times and use Python 2.6 or 2.7 ;) Hanno
Re: [ZODB-Dev] ZEO in v3.10 with older clients and older Python
On Thu, Jul 29, 2010 at 2:49 PM, Christian Theune c...@gocept.com wrote: On 07/29/2010 02:42 PM, Hanno Schlichting wrote: The docs explicitly state that Python 2.4 is no longer supported at all. Go with the times and use Python 2.6 or 2.7 ;) The docs also say that ZEO supports older clients. The application that I currently have in mind has no chance of moving from Python 2.4 but probably would benefit from a threaded ZEO server, which I'd like to verify.

I think we discussed the Python version support policy on this list in regard to the changes to exception classes. It's documented pretty clearly: ZODB 3.10 requires Python 2.5 or later. Note -- When using ZEO and upgrading from Python 2.4, you need to upgrade clients and servers at the same time, or upgrade clients first and then servers. Clients running Python 2.5 or 2.6 will work with servers running Python 2.4. Clients running Python 2.4 won't work properly with servers running Python 2.5 or later due to changes in the way Python implements exceptions.

So the ZODB 3.10 server requires 2.5. You cannot run clients on Python 2.4 against a server running 2.5. Therefore you will have to update the application at some point. Hanno
Re: [ZODB-Dev] ZEO in v3.10 with older clients and older Python
On Thu, Jul 29, 2010 at 2:57 PM, Christian Theune c...@gocept.com wrote: Thanks for digging this out. I'll try to find the discussion and refresh my memory.

See for example your response here https://mail.zope.org/pipermail/zodb-dev/2010-April/013269.html ;-) And Andreas was the only one responding to the Python 2.5 upgrade proposal at https://mail.zope.org/pipermail/zodb-dev/2009-December/013085.html Hanno
Re: [ZODB-Dev] Remember date of last pack?
On Mon, Jul 26, 2010 at 5:14 PM, Christian Zagrodnick c...@gocept.com wrote: for monitoring if a storage has been packed it would be handy if it remembered the date of its last pack. An easy solution for file storage would be to create a Data.fs.packed file. Other ways to determine the date would be:

- Use age of Data.fs.old: very implicit; file is not available during packing (thus monitoring becomes difficult); it is not required to keep Data.fs.old
- Provide a wrapper script around zeopack which stores the information: while that would work it is rather tedious to provide that again and again for each buildout/installation.

- Configure repozo to do incremental backups and keep some old backups. Configure zeopack to run regularly. Check that you have a recent full repozo backup around, which gets created automatically once you do a pack. Hanno
Re: [ZODB-Dev] Remember date of last pack?
On Mon, Jul 26, 2010 at 8:12 PM, Christian Zagrodnick c...@gocept.com wrote: On 2010-07-26 17:38:10 +0200, Hanno Schlichting said: - Configure repozo to do incremental backups and keep some old backups. Configure zeopack to run regularly. Check that you have a recent full repozo backup around, which gets created automatically once you do a pack. I don't understand how that helps show that the database was packed. Of *course* there is an automatic job to pack the database. But that doesn't necessarily mean that it works. Hence a *test* to monitor that the database was packed.

In this configuration repozo only creates a full backup if the database is packed. Otherwise you will only get incremental backups. Repozo already does a check of the database file to see if it was packed (by comparing the actual data inside the database file). This way you are just reusing this information instead of incurring any extra cost. Hanno
Re: [ZODB-Dev] I released 3.10.0b2 this morning
On Wed, Jul 14, 2010 at 2:47 PM, Jim Fulton j...@zope.com wrote: It got a lot larger recently when someone moved a bunch of reports from the Zope 2 lists. A lot of these bugs are obsolete.

That one was me. I tried hard to move only those bugs which were indeed pure ZODB problems. A lot of them looked like they were edge-cases and feature requests more than anything else. But if any of them are Zope 2 specific, feel free to reassign them back. Hanno
Re: [ZODB-Dev] KeyError in BTrees.check
On Tue, Jul 13, 2010 at 11:23 AM, Suresh V. suresh...@yahoo.com wrote: While trying to run manage_cleanup on my BTreeFolder2, I get a KeyError from the classify function in Module BTrees.check: def classify(obj): return _type2kind[type(obj)] I see that obj is None at this point. Anyway I can patch the code to run around this?

This should be solved in BTreeFolder2 I guess. It shouldn't pass something that is None into the lower-level function. But I thought the manage_cleanup methods were only used years ago, back when BTrees had some internal bookkeeping problems. Do you still get BTree corruption in any recent ZODB3 / Zope 2 combination? The cleanup code is from six years back, before BTreeFolder2 was even part of a Zope 2 release; I was close to removing it the other day. Hanno
Re: [ZODB-Dev] [Checkins] SVN: ZODB/trunk/src/ Merged zagy-lp509801 branch.
Hi Jim. This error is biting us in ZODB 3.8.5 and 3.9.5. Are you intending to backport this and the other bugfixes? Do you want help with backporting, and in what way? Create bugfix branches against the older branches for your review, or directly apply the patches, as you reviewed them already? Thanks, Hanno

On Sun, Jul 11, 2010 at 2:27 PM, Jim Fulton j...@zope.com wrote: Log message for revision 114586: Merged zagy-lp509801 branch. Updating blobs in save points could cause spurious invalidations out of order errors. https://bugs.launchpad.net/zodb/+bug/509801 (Thanks to Christian Zagrodnick for chasing this down.)

Changed:
U ZODB/trunk/src/CHANGES.txt
U ZODB/trunk/src/ZODB/Connection.py
U ZODB/trunk/src/ZODB/tests/testblob.py

-=-
Modified: ZODB/trunk/src/CHANGES.txt
===
--- ZODB/trunk/src/CHANGES.txt 2010-07-11 12:18:52 UTC (rev 114585)
+++ ZODB/trunk/src/CHANGES.txt 2010-07-11 12:27:54 UTC (rev 114586)
@@ -8,6 +8,11 @@
 Bugs fixed
 ----------

+- Updating blobs in save points could cause spurious invalidations
+  out of order errors. https://bugs.launchpad.net/zodb/+bug/509801
+
+  (Thanks to Christian Zagrodnick for chasing this down.)
+
 - When a demo storage push method was used to create a new demo
   storage and the new storage was closed, the original was
   (incorrectly) closed.

Modified: ZODB/trunk/src/ZODB/Connection.py
===
--- ZODB/trunk/src/ZODB/Connection.py 2010-07-11 12:18:52 UTC (rev 114585)
+++ ZODB/trunk/src/ZODB/Connection.py 2010-07-11 12:27:54 UTC (rev 114586)
@@ -328,13 +328,13 @@
     def invalidate(self, tid, oids):
         """Notify the Connection that transaction 'tid' invalidated oids."""
         if self.before is not None:
-            # this is an historical connection. Invalidations are irrelevant.
+            # This is a historical connection. Invalidations are irrelevant.
             return
         self._inv_lock.acquire()
         try:
             if self._txn_time is None:
                 self._txn_time = tid
-            elif tid < self._txn_time:
+            elif (tid < self._txn_time) and (tid is not None):
                 raise AssertionError("invalidations out of order, %r < %r"
                                      % (tid, self._txn_time))
@@ -1121,7 +1121,7 @@
                 # that that the next attribute access of its name
                 # unghostify it, which will cause its blob data
                 # to be reattached cleanly
-                self.invalidate(s, {oid:True})
+                self.invalidate(None, (oid, ))
             else:
                 s = self._storage.store(oid, serial, data, '', transaction)

Modified: ZODB/trunk/src/ZODB/tests/testblob.py
===
--- ZODB/trunk/src/ZODB/tests/testblob.py 2010-07-11 12:18:52 UTC (rev 114585)
+++ ZODB/trunk/src/ZODB/tests/testblob.py 2010-07-11 12:27:54 UTC (rev 114586)
@@ -563,6 +563,35 @@
         db.close()

+def savepoint_commits_without_invalidations_out_of_order():
+    """Make sure transactions with blobs can be commited without the
+    invalidations out of order error (LP #509801)
+
+    >>> bs = create_storage()
+    >>> db = DB(bs)
+    >>> tm1 = transaction.TransactionManager()
+    >>> conn1 = db.open(transaction_manager=tm1)
+    >>> conn1.root.b = ZODB.blob.Blob('initial')
+    >>> tm1.commit()
+    >>> conn1.root.b.open('w').write('1')
+    >>> _ = tm1.savepoint()
+
+    >>> tm2 = transaction.TransactionManager()
+    >>> conn2 = db.open(transaction_manager=tm2)
+    >>> conn2.root.b.open('w').write('2')
+    >>> _ = tm1.savepoint()
+    >>> conn1.root.b.open().read()
+    '1'
+    >>> conn2.root.b.open().read()
+    '2'
+    >>> tm2.commit()
+    >>> tm1.commit() # doctest: +IGNORE_EXCEPTION_DETAIL
+    Traceback (most recent call last):
+    ...
+    ConflictError: database conflict error...
+    >>> db.close()
+    """

 def savepoint_cleanup():
     """Make sure savepoint data gets cleaned up."""
Re: [ZODB-Dev] Restoring from repozo and reusing an index file?
On Fri, Jun 11, 2010 at 3:25 PM, Paul Winkler sli...@gmail.com wrote: I'm preparing to move a zope site from one host to another. We've been planning to use repozo backups to copy the filestorage from the old host to the new one. I'd like to figure out how to minimize downtime while not losing any data. My plan was:

For moving stuff to a new host where a short downtime is planned anyway, I would just use rsync:

1. While the old site is still running, rsync both the Data.fs and index file to the new host
2. Make sure you don't run any zeopack operation between that point and the actual switchover
3. You can keep running rsync a couple of times shortly before the downtime to get the most recent state over
4. Shut down the old site
5. Do a final rsync of both the .fs and .fs.index (this should take a couple of seconds or minutes at most)
6. Start the new site; since the index is up-to-date and matches the Data.fs, startup should take seconds as well

I tend to run rsync via rsync -rP --rsh=ssh. The Data.fs is an append-only file, so rsync is very efficient at handling it. Only zeopack rewrites things all across the file and causes a subsequent rsync to be slow again.

1) would it be safe to copy the index file from the old host and just use that with the filestorage generated by repozo?

It's safe, but it will be ignored and a new index created, so you don't gain anything. But at some point you could upgrade to ZODB 3.10 (currently in beta), where repozo does save the index file for each backup. It will still be slower than rsync for this use-case, though. Hanno
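Why the append-only Data.fs syncs so cheaply can be sketched in a few lines: an incremental copy only has to transfer the bytes appended since the last run. This is an illustration of the principle, not what rsync literally does, and the function name is made up.

```python
import os
import tempfile

def copy_appended_tail(src_path, dst_path, chunk=1 << 20):
    # Resume from however much the destination already has; for an
    # append-only file everything before that offset is unchanged.
    offset = os.path.getsize(dst_path) if os.path.exists(dst_path) else 0
    with open(src_path, "rb") as src, open(dst_path, "ab") as dst:
        src.seek(offset)
        while True:
            data = src.read(chunk)
            if not data:
                break
            dst.write(data)

# Demo: two incremental runs against a growing append-only file.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "Data.fs")
dst = os.path.join(workdir, "copy.fs")
with open(src, "wb") as f:
    f.write(b"transaction-1")
copy_appended_tail(src, dst)
with open(src, "ab") as f:
    f.write(b"transaction-2")
copy_appended_tail(src, dst)  # only the newly appended bytes are read
with open(dst, "rb") as f:
    copied = f.read()
```

This also shows why a zeopack hurts: rewriting earlier bytes invalidates the "everything before the offset is unchanged" assumption, forcing a full re-scan.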
Re: [ZODB-Dev] Changing the pickle protocol?
On Sun, May 23, 2010 at 5:45 PM, Jim Fulton j...@zope.com wrote: On Sat, May 22, 2010 at 12:30 PM, Hanno Schlichting ha...@hannosch.eu wrote: - The code to make the protocol configurable on all levels (storage, index, persistent cache, ...) is large and ugly, I'm puzzled. Why were changes so extensive? All existing code should be able to read protocol 2 pickles. I would have expected a change in ZODB.serialize.ObjectWriter only. Can you explain why more extensive changes were necessary?

They weren't really necessary. I just made the protocol for all the different things configurable. So a ZEO client could use a different protocol than the storage. And the protocol for the ZEO client would influence the persistent cache and the index for that cache and so on. In total there's 17 different cPickle.Pickler objects, which all need to figure out the protocol to use in some way and are currently hardcoded to either protocol 0 or 1. This was motivated by making it easy to test the different protocols against each other in one codebase. If I were to do this for real, I wouldn't make the protocol configurable at all, or only at the storage level.

- Protocol 2 is only more efficient at dealing with boolean values, small tuples and longs - all infrequent in my type of data Hm, interesting. I wasn't aware of those benefits.

This is the full list of new opcodes in protocol 2:

/* Protocol 2. */
#define PROTO    '\x80' /* identify pickle protocol */
#define NEWOBJ   '\x81' /* build object by applying cls.__new__ to argtuple */
#define EXT1     '\x82' /* push object from extension registry; 1-byte index */
#define EXT2     '\x83' /* ditto, but 2-byte index */
#define EXT4     '\x84' /* ditto, but 4-byte index */
#define TUPLE1   '\x85' /* build 1-tuple from stack top */
#define TUPLE2   '\x86' /* build 2-tuple from two topmost stack items */
#define TUPLE3   '\x87' /* build 3-tuple from three topmost stack items */
#define NEWTRUE  '\x88' /* push True */
#define NEWFALSE '\x89' /* push False */
#define LONG1    '\x8a' /* push long from < 256 bytes */
#define LONG4    '\x8b' /* push really big long */

The most interesting is probably longs, quoting the PEP (and confirmed in the code): Pickling and unpickling Python longs takes time quadratic in the number of digits, in protocols 0 and 1. Under protocol 2, new opcodes support linear-time pickling and unpickling of longs. Basically, before protocol 2 the repr() is used, and afterwards there's a dedicated opcode representation.

But none of this is particularly exciting. I expect that protocol 3 as used in Python 3 for unicode/bytes representation is going to be much more interesting. But that's a whole different story. It might get easier if we'd centralize the cPickle.Pickler creation in some helper function, so it could be updated in one place, instead of the 17 current ones. But that's all nice-to-have. Hanno
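The size effect of those opcodes can be checked directly with the stdlib pickle module; a quick sketch (runnable under any modern Python, where `pickle` fills the role the thread's `cPickle` did):

```python
import pickle

# Booleans: protocol 2 has dedicated one-byte NEWTRUE/NEWFALSE opcodes,
# where protocol 0 spells out a text representation.
p0_bool = pickle.dumps(True, protocol=0)
p2_bool = pickle.dumps(True, protocol=2)

# Longs: protocols 0/1 serialize the decimal repr (quadratic to build),
# while protocol 2's LONG1/LONG4 store the raw binary bytes.
big = 2 ** 4096
p0_long = pickle.dumps(big, protocol=0)
p2_long = pickle.dumps(big, protocol=2)
```

For a 4096-bit integer the protocol 0 pickle carries over a thousand decimal digits, while the protocol 2 pickle carries 512 binary bytes plus a few opcode bytes, which matches the "only booleans, small tuples and longs" conclusion above.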
Re: [ZODB-Dev] ZEO and access permissions
On Sat, May 22, 2010 at 2:17 PM, Nitro ni...@dr-code.org wrote: ZEO already supports authenticated logins. Based on the login I'd like people to be able to access some objects and deny access to others. First I thought I'd do the access restrictions on the application level.

That's the only sane thing to do. You want higher-level abstractions to manage security, like granting permissions based on an object's class or its relationship to other objects. Usually you'll also want to go from plain users to groups, or maybe use external authentication services at some point. The database level is the wrong abstraction level to do this. In SQL terms, you are trying to store a full-fledged security policy on each database row. This is going to be prohibitively slow and unmanageable very soon. I think you could extend database users and permissions to manage access on a full database / storage level, and potentially introduce read/write permissions on this level. But anything more fine-grained belongs to the application domain. Hanno
Re: [ZODB-Dev] Changing the pickle protocol?
Hi. Following up on my idea of using pickle protocol 2: I implemented this in a fully configurable fashion on a branch, mainly to ease benchmarking and testing of the different variants. My conclusions (maybe for future reference):

- There's no significant win from just switching the pickle protocol
- The code to make the protocol configurable on all levels (storage, index, persistent cache, ...) is large and ugly; if there's an improvement in the new protocol, I'd change the default without a config option
- There's no significant reduction in size for typical content-management-like data
- Protocol 2 is only more efficient at dealing with boolean values, small tuples and longs - all infrequent in my type of data

Potential follow-up experiments:

- Use protocol 2 in combination with the extension registry, using codes in the 128 to 191 "Reserved for Zope" range for ZODB internal types (BTrees, PersistentMapping and PersistentList) [1]

Cheers, Hanno

[1] http://www.python.org/dev/peps/pep-0307/
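The extension-registry experiment can be sketched with the stdlib copyreg module. `collections.OrderedDict` and the code 150 are arbitrary stand-ins for the ZODB internal types and the "Reserved for Zope" range mentioned above; this is not ZODB code.

```python
import copyreg
import pickle
from collections import OrderedDict

# Baseline: a protocol 2 class reference spells out "module\nname".
plain = pickle.dumps(OrderedDict, protocol=2)

# Register a one-byte extension code (150 sits in the 128-191 range
# PEP 307 reserves for Zope). Afterwards the same class reference is
# pickled as the EXT1 opcode (0x82) plus the code byte (0x96).
copyreg.add_extension("collections", "OrderedDict", 150)
compact = pickle.dumps(OrderedDict, protocol=2)
```

The catch is the hard coupling the thread warns about: any process unpickling `compact` must have registered the exact same code-to-class mapping first, or loading fails.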
Re: [ZODB-Dev] Storage iterators and IndexError
On Fri, May 14, 2010 at 8:43 PM, Jim Fulton j...@zope.com wrote: There's a test for storage iterators that verifies that they raise a special exception that extends StopIteration and IndexError. This makes storage iterators a bit harder to implement than necessary. Does anyone know of a reason why we should have to raise a special error that raises IndexErrors?

Not really. It sounds like it's trying to do both new-style iterators (via __iter__ / next / StopIteration) and old-style iteration (via __getitem__ / IndexError) at once. From http://docs.python.org/reference/datamodel.html about object.__getitem__(self, key): Note for loops expect that an IndexError will be raised for illegal indexes to allow proper detection of the end of the sequence. Hanno
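The old-style sequence protocol the reply refers to is easy to demonstrate: a class with no `__iter__` can still be looped over, with `IndexError` marking the end. `LegacySeq` is a minimal illustration, not storage-iterator code.

```python
# Without __iter__, a for loop falls back to the sequence protocol:
# it calls __getitem__ with 0, 1, 2, ... and treats IndexError as
# the end of the sequence.
class LegacySeq(object):
    def __init__(self, items):
        self._items = items

    def __getitem__(self, index):
        if index >= len(self._items):
            raise IndexError(index)  # tells the for loop to stop
        return self._items[index]

collected = [x for x in LegacySeq(["a", "b", "c"])]
```

An exception class inheriting from both `StopIteration` and `IndexError` satisfies both protocols at once, which is presumably why the special error existed.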
Re: [ZODB-Dev] repozo full backup and index files
On Fri, May 14, 2010 at 10:24 AM, Christian Theune c...@gocept.com wrote: Hmm. If the full backup is just a regular FS file then you could start with the naive approach and just open/close it once after performing a backup as that would create the index file.

Sure. That would be an easy but also rather inefficient way. In my case it takes only about 15 minutes of rather heavy I/O to create the file, but for larger files this time usually goes up. From what I understand repozo does the following:

1. it opens the real file storage file as a read-only FileStorage
2. calculates the byte position of the end of the last complete transaction
3. closes the file storage
4. opens the fs file as a normal binary file
5. copies over all bytes up to the calculated position into a temp file
6. closes the fs file and temp file and renames the temp file according to some timestamp

What I'm wondering is whether you could copy or otherwise create the index as part of the second step. In step 5 we only deal with bytes and cannot parse those, but in step 2 we have a proper ZODB FileStorage object and can to some degree decide which transaction we want. Hanno
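Steps 4-6 of that description can be sketched as plain byte copying. The `end_pos` argument stands in for the position the read-only FileStorage inspection in step 2 would produce; the function name and paths are hypothetical.

```python
import os
import tempfile

def copy_up_to(fs_path, backup_path, end_pos, chunk=1 << 20):
    # Copy the first end_pos bytes into a temp file, then rename it
    # into place (steps 4-6 above; the rename is atomic on POSIX).
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(backup_path))
    try:
        with open(fs_path, "rb") as src, os.fdopen(fd, "wb") as tmp:
            remaining = end_pos
            while remaining > 0:
                data = src.read(min(chunk, remaining))
                if not data:
                    break
                tmp.write(data)
                remaining -= len(data)
        os.rename(tmp_path, backup_path)
    except BaseException:
        os.unlink(tmp_path)
        raise

# Demo with a stand-in file: pretend step 2 found that the last
# complete transaction ends at byte 21, past which lies a partial write.
workdir = tempfile.mkdtemp()
fs = os.path.join(workdir, "Data.fs")
with open(fs, "wb") as f:
    f.write(b"complete-transactions|partial")
copy_up_to(fs, os.path.join(workdir, "backup.fs"), 21)
with open(os.path.join(workdir, "backup.fs"), "rb") as f:
    backup = f.read()
```

Nothing in this byte-level stage can parse transactions, which is exactly the point of the question: the index would have to be captured back in step 2, while the FileStorage object is still open.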
Re: [ZODB-Dev] repozo full backup and index files
On Fri, May 14, 2010 at 4:33 PM, Tres Seaver tsea...@palladion.com wrote: I would be willing to make a stab at this, if we can hold off on a 3.10.0 beta until I've had a chance to try it.

Oh, awesome! Best possible outcome I could hope for - someone else wants to do the work for me ;-) Hanno
[ZODB-Dev] repozo full backup and index files
Hi. I was wondering if there's a specific reason why repozo does not back up or create index files whenever it does a full backup. I understand that creating index files for incremental backups is probably hard, but for a full backup it should be possible. Recreating the index file after restoring a backup can take forever and increases the potential downtime quite a bit. So is this just something nobody ever had time to implement, or are there technical problems with this? Thanks, Hanno
Re: [ZODB-Dev] ZEO monitor server
On Fri, Apr 30, 2010 at 8:07 PM, Jim Fulton j...@zope.com wrote: Does anyone use the monitor server? Would anyone object if it went away, to be replaced by a server method on the standard ZEO protocol?

I don't know anyone using the monitor server. So changing it is not a problem here. We only use the activity monitor on the client side. Hanno
[ZODB-Dev] Changing the pickle protocol?
Hi. The ZODB currently uses a hardcoded pickle protocol 1. There's both the more efficient protocol 2 and, in Python 3, protocol 3. Protocol 2 has seen various improvements in recent Python versions, triggered by its use in memcached. I'd be interested to work on changing the protocol. How should I approach this? I can see three general approaches:

1. Hardcode the version to 2 in all places, instead of 1.
Pros: Easy to do, backwards compatible with all supported Python versions
Cons: Still inflexible

2. Make the protocol version configurable
Pros: Gives control to the user; one could change the protocol used for storages or persistent caches independently
Cons: More overhead, different protocol versions could have different bugs

3. Make the format configurable
Shane made a proposal in this direction at some point. This would abstract the persistent format and allow for different serialization formats. As part of this one could also have different pickle/protocol combinations.
Pros: Lots of flexibility, it might be possible to access the data from different languages
Cons: Even more overhead

If I am to look into any of these options, which one should I look into? Option 1 is obviously the easiest and I made a branch for this at some point already. I'm not particularly interested in option 3 myself, as I haven't had the use-case. Thanks for any advice, Hanno
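In its smallest form, option 2 could be a single helper that every call site uses, so the protocol is decided in one place rather than hardcoded at each Pickler creation. This is a sketch with made-up names (`DEFAULT_PROTOCOL`, `make_pickler`), using the portable `pickle` module rather than the `cPickle` the thread discusses.

```python
import io
import pickle

# Hypothetical central helper: one place decides the protocol, so an
# experiment (or a config option) only has to change DEFAULT_PROTOCOL
# instead of touching every Pickler call site.
DEFAULT_PROTOCOL = 1

def make_pickler(stream, protocol=None):
    if protocol is None:
        protocol = DEFAULT_PROTOCOL
    return pickle.Pickler(stream, protocol)

# A call site can still override per use, e.g. for benchmarking:
buf = io.BytesIO()
make_pickler(buf, protocol=2).dump({"oid": 42})
restored = pickle.loads(buf.getvalue())
```

This is the same idea the thread later converges on ("centralize the cPickle.Pickler creation in some helper function, so it could be updated in one place").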
Re: [ZODB-Dev] Changing the pickle protocol?
On Wed, Apr 28, 2010 at 5:11 PM, Jim Fulton j...@zope.com wrote: Do you know of specific benefits you expect from protocol 2? Any specific reasons you think it would be better in practice?

I have just seen some ongoing work on pickles in recent times, for example from the Python 2.7 what's new: - The pickle and cPickle modules now automatically intern the strings used for attribute names, reducing memory usage of the objects resulting from unpickling. (Contributed by Jake McGuire; issue 5084.) - The cPickle module now special-cases dictionaries, nearly halving the time required to pickle them. (Contributed by Collin Winter; issue 5670.) Unless I've misread the code, these changes only apply to protocol 2. And then there's the old claim in PEP 307 that pickling new-style classes would be more efficient. Finally, Python 3 introduces pickle protocol version 3, which deals explicitly with the new bytes type. There are more changes to the pickle format in Python 3, so that's a separate project. But it suggested to me that the pickle format isn't quite as dead anymore as it used to be.

I've avoided going to protocol 2 for two reasons: - It wasn't clear we'd get a benefit without deeper changes. Those deeper changes might be of value, but only if we're careful about how we make them. In particular, we could replace class names in pickles if we had a registry mapping ints to class names. This could provide a number of benefits beyond smaller pickles, but it needs some thought to get right.

Right. I'm not particularly interested in the pickle class registry. Having a hard dependency between code filling the registry and the actual data has all sorts of implications. I don't really want to go there myself.

- I want zope.xmlpickle to work with ZODB database records and it doesn't support protocol 2 yet. This doesn't have to block moving to protocol 2, but I really would like to have this work if possible.

Ok. I know there are some tools reading the ZODB data on their own, without actually using the APIs. I wouldn't want to break them if there's no clear benefit.

I'm skeptical that there would be enough benefit for protocol 2 without implementing a registry to take advantage of integer pickle codes. The other benefit of protocol 2 has to do with the way instance pickles are constructed and, for persistent objects, ZODB takes a very different approach anyway. I suggest doing some realistic experiments to look at the impact of the change. - Convert an interesting Zope 2 database from protocol 1 to protocol 2. How does this affect database size? - Do some sort of write and read benchmarks using the 2 protocols to see if there's a meaningful benefit.

Ok, thanks. That gives me enough direction to work on some specific benchmarks. Hanno
Re: [ZODB-Dev] Problems in ZEO pack in 3.9.x?
On Mon, Apr 26, 2010 at 7:44 PM, Jim Fulton j...@zope.com wrote: Hm. I don't know if it was intentional to ignore POSKeyErrors in the old pack code. It seems like a bad idea to me. Yep, I was wondering if that was a conscious design choice or just accidental behavior. What do folks think about this? Should missing records be ignored? Or should the missing record cause the pack (or maybe just GC) to fail?

Mmh, I think having the pack succeed would be nice. It can sometimes take a while until you can fix those POSKeyErrors. Not everyone has the skill to do that. Preventing the ZODB from growing indefinitely during that time would be nice. But doing GC on an inconsistent state is probably a bad idea. Hanno
Re: [ZODB-Dev] Problems in ZEO pack in 3.9.x?
On Tue, Apr 27, 2010 at 4:08 PM, Jim Fulton j...@zope.com wrote: On Tue, Apr 27, 2010 at 6:29 AM, Hanno Schlichting ha...@hannosch.eu wrote: But doing GC on an inconsistent state is probably a bad idea. Then I think the current behavior is correct. You can now disable GC using the pack-gc option:

<filestorage>
  pack-gc false
</filestorage>

... which will allow you to pack away old revisions while you research the POSKeyError issue.

Ok, thanks for the clarification. I'm looking into zc.zodbdgc now :) Hanno
Re: [ZODB-Dev] ZODB 3.9.5 has been released
On Fri, Apr 23, 2010 at 11:30 PM, Jim Fulton j...@zope.com wrote: See http://pypi.python.org/pypi/ZODB3/3.9.5 Cool! I added the Python 2.6 / Win64 egg. Hanno
Re: [ZODB-Dev] Using zodb and blobs
On Wed, Apr 14, 2010 at 11:52 AM, Nitro ni...@dr-code.org wrote: Yes, in my case it's nothing critical or related to money. If there's a hardware outage a day of work is lost at worst. In case of corruption (which can happen also without fsync as data within the file can just be garbled) you need a backup anyways.

Usually you will only lose the last transaction and not a day's work. The Data.fs is an append-only file, with one transaction appended after another. If there's a garbled or incomplete write, you'll typically lose the last transaction. The ZODB is smart enough to detect broken transactions and skip them on restart. I have witnessed one ZEO installation myself where the physical machine hosting the ZEO server restarted multiple times a day, over a period of months. Nobody noticed for a long time, as the application was accessible all the time and no data had been lost. Obviously this wasn't a very write-intense application. But it still showed me how stable the ZODB really is. Hanno
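The recovery behaviour described above can be illustrated with a toy append-only log. This is not the real FileStorage record layout, just the same principle: self-delimiting records, and a reader that ignores an incomplete tail.

```python
import struct

# Toy append-only log: each record is a 4-byte big-endian length prefix
# followed by the payload.
def append_record(buf, payload):
    return buf + struct.pack(">I", len(payload)) + payload

def read_records(buf):
    records, pos = [], 0
    while pos + 4 <= len(buf):
        (length,) = struct.unpack(">I", buf[pos:pos + 4])
        if pos + 4 + length > len(buf):
            break  # truncated final record: skip it, like ZODB on restart
        records.append(buf[pos + 4:pos + 4 + length])
        pos += 4 + length
    return records

log = append_record(append_record(b"", b"txn1"), b"txn2")
# Simulate a crash mid-write: a length prefix promising 4 bytes,
# but only 2 bytes of payload made it to disk.
crashed = log + struct.pack(">I", 4) + b"tx"
```

Because earlier bytes are never rewritten, the damage from a crash is confined to the tail, which is exactly why only the last transaction is at risk.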
Re: [ZODB-Dev] DemoStorage stacking in zope.testing layers
Hi. On Mon, Apr 5, 2010 at 4:36 PM, Martin Aspeli optilude+li...@gmail.com wrote: What is the correct way to use DemoStorage stacking in test layers?

Not sure if this helps, but zope.app.testing's functional.py [1] did something similar here (stacking DemoStorages). Maybe the FunctionalTestSetup class, including the way it uses shared state to keep track of the DBs, might be helpful. Hanno

[1] http://svn.zope.org/zope.app.testing/trunk/src/zope/app/testing/functional.py?view=markup
Re: [ZODB-Dev] Understanding the ZODB cache-size option
On Mon, Mar 22, 2010 at 7:05 PM, Jeff Shell j...@bottlerocket.net wrote: Are there any metrics about how to set the ZODB 'cache-size' (or cache-size-bytes) option? We've been using '5000' (arbitrarily chosen) for our Zope 3.4-ish app servers. We have access to zc.z3monitor which can output the number of objects in the object caches (combined) and number of non-ghost objects in the object caches (combined). But I don't understand how to interpret those numbers and use them to make better settings. ZODB cache size optimization is a typical case of black art. There's no really good way to find the perfect number. I would advise against using the cache-size-bytes option. There's a known critical problem with it that is currently only fixed on the 3.9 SVN branch. So stick to the old object count cache-size for now. But generally database caching is a trade-off between performance and available RAM. As the upper limit, you could have your entire data set in each ZODB cache. So you could look at the number of persistent objects in the database and match your cache-size to that number. That's usually not what you want. As a real strategy, you should set up detailed monitoring of the server. Monitor and graph overall RAM usage, RAM usage per Zope process, and the number of DB loads and writes over time. Preferably include some way of measuring application request performance and track CPU and I/O usage on the server hosting the database. If you have those numbers, you can play around with the cache setting and increase it. See what impact it has on your application and data set. At some point you run out of memory and need to decrease the number, or the increased cache size doesn't actually buy you any application performance anymore. For general Zope 3 applications there are no rules of thumb that I know of. The datasets and load patterns of the applications are too different to have any of those.
It's affected a lot by the dataset and whether you use ZODB blobs or another mechanism to store large binary content outside the DB. 5000 persistent objects including 1000 images of 5mb each are obviously very different from 1000 BTree buckets containing only integers. One main advantage of blobs is that they aren't loaded into the ZODB connection caches, so they lower the memory requirements for applications with binary content a lot. In the Plone context I generally use something like the number of content objects in the catalog + 5000 objects for the general application as a starting point. But that makes a lot of assumptions about the type of content and the application in it. Hanno
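A quick back-of-the-envelope calculation (with entirely made-up numbers) shows why an object-count limit alone says little about memory — the same cache-size setting covers wildly different footprints depending on object size:

```python
# Hypothetical per-connection cache memory estimates. The cache-size
# option counts objects, so resident memory depends entirely on how
# big the cached objects actually are.
cache_size = 5000                    # objects per connection cache

avg_small = 2 * 1024                 # e.g. BTree buckets of integers
avg_large = 5 * 1024 * 1024          # e.g. 5 MB images stored in the DB

per_connection_small = cache_size * avg_small   # roughly 10 MB
per_connection_large = cache_size * avg_large   # tens of gigabytes
```

This is why the thread recommends measuring (RAM per process, DB loads/writes) rather than trusting any rule of thumb, and why keeping large binaries in blobs — which bypass the connection cache — changes the equation so much.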
Re: [ZODB-Dev] ZODB on Windows 64bit?
On Sun, Jan 17, 2010 at 2:51 PM, Jim Fulton j...@zope.com wrote: On Fri, Jan 15, 2010 at 10:01 PM, Hanno Schlichting ha...@hannosch.eu wrote: P.S. Running the tests on the checkout of the 3.9.4 tag gives me two test failures, of which one is a test cleanup problem caused by the other. Does this failure occur reliably? Or is it intermittent? I only ran the tests once. Without knowing if my efforts would be useful at all, those 50 minutes of waiting were a bit too much upfront investment for me ;-) I'll try things again and see if I can debug the issue. Thanks, Hanno
Re: [ZODB-Dev] ZODB on Windows 64bit?
On Sat, Jan 16, 2010 at 9:05 AM, Adam GROSZER agros...@gmail.com wrote: Please don't stop with those binaries. There are some win32 users out here ;-) Sure. But Jim is building the 32bit binaries himself. So there's no need for me to do that. As he seems to be building things on WinXP, he probably doesn't have a 64bit environment yet. That's why I'm offering these in addition. Hanno
[ZODB-Dev] ZODB on Windows 64bit?
Hi. I recently updated my Windows build environment to a 64bit Windows 7 environment with cross compilation support for 32/64 bit. I created and uploaded Windows binary eggs for all Zope 2.12.3 dependencies in both flavors for Python 2.6. The last one still missing a 64bit flavor is ZODB3. I created a binary egg of that against Python 2.6.4 based on Visual Studio C++ Express 2008. Is anyone interested in that binary egg or could it go to PyPI? Thanks, Hanno P.S. Running the tests on the checkout of the 3.9.4 tag gives me two test failures, of which one is a test cleanup problem caused by the other. The overall result is: Tests with errors: checkCommitLockVoteAbort (ZEO.tests.testZEO.DemoStorageTests) checkCommitLockVoteAbort (ZEO.tests.testZEO.DemoStorageTests) Total: 3834 tests, 0 failures, 2 errors in 47 minutes 56.331 seconds. The more detailed result is: Running .DemoStorageTests tests: Tear down .ClientStorageSharedBlobsBlobTests in 0.000 seconds. Set up .DemoStorageTests in 0.015 seconds.
Error in test checkCommitLockVoteAbort (ZEO.tests.testZEO.DemoStorageTests) Traceback (most recent call last): File C:\Python\python26\lib\unittest.py, line 279, in run testMethod() File c:\zope\zodb\src\ZEO\tests\CommitLockTests.py, line 171, in checkCommitLockVoteAbort self._dostore() File c:\zope\zodb\src\ZODB\tests\StorageTestBase.py, line 188, in _dostore self._storage.tpc_begin(t) File c:\zope\zodb\src\ZEO\ClientStorage.py, line 1091, in tpc_begin self._server.tpc_begin(id(txn), txn.user, txn.description, File c:\zope\zodb\src\ZEO\ClientStorage.py, line 88, in __getattr__ raise ClientDisconnected() ClientDisconnected Error in test checkCommitLockVoteAbort (ZEO.tests.testZEO.DemoStorageTests) Traceback (most recent call last): File C:\Python\python26\lib\unittest.py, line 289, in run self.tearDown() File c:\zope\zodb\src\ZEO\tests\testZEO.py, line 214, in tearDown StorageTestBase.StorageTestBase.tearDown(self) File c:\zope\zodb\src\ZODB\tests\StorageTestBase.py, line 159, in tearDown ZODB.tests.util.TestCase.tearDown(self) File c:\users\hannosch\.buildout\eggs\zope.testing-3.8.6-py2.6.egg\zope\testing\setupstack.py, line 33, in tearDown f(*p, **k) File c:\users\hannosch\.buildout\eggs\zope.testing-3.8.6-py2.6.egg\zope\testing\setupstack.py, line 51, in rmtree os.rmdir(path) WindowsError: [Error 145] The directory is not empty: 'C:\\zope\\ZODB\\parts\\test\\tmp\\DemoStorageTestsgjkw_l' Ran 50 tests with 0 failures and 2 errors in 3 minutes 43.772 seconds.
Re: [ZODB-Dev] An interface for broken objects?
On Sun, Jan 3, 2010 at 5:38 PM, Jim Fulton j...@zope.com wrote: I suggest including these in the interface. Thanks for fixing this yourself and merging. Sorry, I didn't get around to extending things myself. Hanno
[ZODB-Dev] An interface for broken objects?
Hi. We currently have a package called zope.broken whose entire real content is: import zope.interface class IBroken(zope.interface.Interface): """Marker interface for broken objects""" This is used for example by zope.container, which won't try to set __name__ and __parent__ pointers on these objects. Is this something that could be put into the ZODB itself and have the ZODB.broken.Broken class directly implement this interface? Thanks, Hanno
Re: [ZODB-Dev] An interface for broken objects?
On Thu, Dec 31, 2009 at 6:03 PM, Jim Fulton j...@zope.com wrote: On Thu, Dec 31, 2009 at 11:46 AM, Hanno Schlichting ha...@hannosch.eu wrote: Is this something that could be put into the ZODB itself and have the ZODB.broken.Broken class directly implement this interface? +1. although it shouldn't be an empty interface. Ok. I've gone ahead and made a branch for this at: svn+ssh://svn.zope.org/repos/main/ZODB/branches/hannosch-ibroken It's a single changeset: http://svn.zope.org/ZODB/?rev=107467&view=rev I only specified the custom exception thrown by the __setattr__ in the interface. Everything else is private double underscore methods. I wasn't sure if any of those should really be specified formally. Thanks for considering merging this :) Happy new year, Hanno
Re: [ZODB-Dev] undo (and storage interface) brokenness
On Wed, Dec 23, 2009 at 9:26 PM, Jim Fulton j...@zope.com wrote: Undo is broken in a number of ways. Does anyone care? Does anyone use undo? Speaking from the Zope2/Plone crowd: I'm using it during development with a local file storage at times. In a real ZEO production setup, undo in Plone most of the time only works for a quick "revert the last transaction". For any transaction that happened a while ago, our catalog-overuse will cause some persistent object inside the catalog to have changed in a later transaction. Throwing away all changes done in the meantime is usually not practical. For recovery purposes of older data, we have found a bit of manual work and zc.beforestorage to be the better approach. So if undo is gone from ZEO it wouldn't be tragic. If the "revert the last transaction" case could be retained, that would be nice. Hanno
Re: [ZODB-Dev] undo (and storage interface) brokenness
On Thu, Dec 24, 2009 at 1:34 AM, Martin Aspeli optilude+li...@gmail.com wrote: Hanno Schlichting wrote: Throwing away all changes done in the meantime is usually not practical. ... although sometimes it is preferable to a full backup restore or living with whatever it is you want to restore (usually a delete). Right. That's why we use beforestorage. For recovery purposes of older data, we have found a bit of manual work and zc.beforestorage to be the better approach. What is zc.beforestorage? Look at its PyPI page http://pypi.python.org/pypi/zc.beforestorage Basically it's a convenient way to open a database as it was at a certain point in time. So for Plone in practice you get a separate copy of the live database or from a backup, open it via beforestorage at the time you want and export the content you are looking for via a zexp. Put that zexp into the import directory of your live server and import the content. You usually have to reindex the content, as you cannot take the catalog entries with you. This works nicely for restoring deleted content and we have done so a number of times. If there are unwanted changes made to individual content items, you can revert those via CMFEditions / application level versioning. So if undo is gone from ZEO it wouldn't be tragic. If the "revert the last transaction" case could be retained, that would be nice. I actually think it would be tragic. Or at least pretty miserable. In practice it hardly ever works. Relying on it as a substitute for backups or tested restore procedures is dangerous. People who manage high availability setups can probably find other ways (like very frequent backups and good restore routines). For a lot of average installations, it can be a lifesaver. It was a lifesaver for one of my clients about a week ago, for example. The late night phonecall kind of lifesaver. Frequent backups and good restore routines have nothing to do with high availability setups. They are a basic requirement for any type of setup.
But even if you don't do frequent backups, as long as you have not packed your database, your data is still there and can be retrieved without too much hassle. But doing partial database recovery is something that requires a good deal of knowledge of the underlying application and database. Hanno
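The point-in-time view that zc.beforestorage provides — "open the database as it was just before transaction X" — can be sketched with a toy revision store. Names and layout here are invented for illustration; the real package wraps an actual ZODB storage:

```python
import bisect

class History:
    """Toy point-in-time view over object revisions, in the spirit of
    zc.beforestorage (all names are made up for this sketch)."""

    def __init__(self):
        self.revisions = {}  # oid -> sorted list of (tid, state)

    def store(self, oid, tid, state):
        self.revisions.setdefault(oid, []).append((tid, state))

    def load_before(self, oid, before_tid):
        # Return the latest revision committed strictly before before_tid,
        # i.e. the state a "before" view of the database would serve.
        revs = self.revisions.get(oid, [])
        idx = bisect.bisect_left(revs, (before_tid,)) - 1
        if idx < 0:
            raise KeyError(oid)
        return revs[idx][1]

h = History()
h.store("doc", 10, "first draft")
h.store("doc", 20, "deleted")
```

Because nothing is ever rewritten, an unpacked append-only store always retains the older states, which is what makes this kind of recovery possible at all.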
Re: [ZODB-Dev] ZEO and blobs: missing tmp file breaks transaction on retry
On Fri, Nov 13, 2009 at 5:40 PM, Jim Fulton j...@zope.com wrote: On Fri, Nov 13, 2009 at 10:18 AM, Mikko Ohtamaa mi...@redinnovation.com wrote: Unfortunately the application having the issues is Plone 3.3. ZODB 3.9 depends on Zope 2.12 so, right? ZODB doesn't depend on Zope anything. :) Plone 3.3 may use an earlier version of ZODB, but perhaps it is possible to get it to work with a later one. I wouldn't know. :) Plone 3.x uses Zope 2.10 and ZODB 3.7. Upgrading it to ZODB 3.8.x is trivial. But the changes in ZODB 3.9 (essentially the removal of the version feature) require a bunch of non-trivial changes to Zope2. So only Zope 2.12 works with ZODB 3.9. Anyone using Plone 3.x who wants to use blobs is therefore stuck with ZODB 3.8.x. It's not supported by Plone and considered experimental on all layers :) Hanno
Re: [ZODB-Dev] cache-size-bytes usable?
On Sat, Nov 7, 2009 at 6:34 PM, Andreas Jung li...@zopyx.com wrote: Am 07.11.09 18:25, schrieb Hanno Schlichting: Has anyone used the bytes limited cache in production yet and has any experience with it? I would assume that this feature is working since it comes from the Haufe Zope fork afaik and likely we are using it in production. But I have to check this by digging through the legacy of the original author. Well, I know Jim had to fix a number of bugs already after Dieter's initial patch landed in the codebase. Like "Sizes of new objects weren't added to the object cache size estimation" sometime late in the 3.9.0 process. So the codebase isn't really the same and there have been a huge number of changes in the ZODB codebase compared to what the Haufe Zope fork was based on. That's why I'd be interested to know if someone uses the actual ZODB 3.9.x codebase. Hanno
Re: [ZODB-Dev] URGENT: ZODB down - Important Software Application at CERN
Chris Withers wrote: Hanno Schlichting wrote: Nope. DateTime objects are plain old-style classes and don't inherit from persistent.*. Hmm, oh well, my bad... In that case it must just be that their pickled form is huge compared to an int ;-) Sure: len(cPickle.dumps(DateTime.DateTime(), 1)) == 392 Whereas their float representation is: len(cPickle.dumps(DateTime.DateTime().timeTime(), 1)) == 10 They are incredibly expensive to unpickle since all the DWIM magic in their __init__ gets called each time, though. How come? Unpickling doesn't call __init__ and I don't see why the DWIM magic would be needed anyway, since everything has already been parsed. How would a new instance of a class be constructed without calling __init__ or __new__? Look at the _instantiate method in pickle.py, when it does: value = klass(*args) What happens on unpickling is that a new DateTime instance representing just now is generated and then that instance is updated with the values from the pickle. Hanno
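The size gap discussed above can be reproduced in plain Python, using the standard-library datetime as a stand-in for Zope's DateTime (which isn't assumed to be available here). The exact byte counts differ from the cPickle numbers in the mail, but the ratio makes the same point:

```python
import pickle
from datetime import datetime

# Pickling a rich date object costs far more bytes than pickling
# its plain float timestamp (protocol 1, as the ZODB used).
now = datetime.now()
obj_size = len(pickle.dumps(now, 1))
float_size = len(pickle.dumps(now.timestamp(), 1))
# A protocol-1 float pickle is just BINFLOAT + 8 bytes + STOP = 10 bytes.
```

Storing timestamps as floats and reconstructing date objects on demand is the classic trade suggested by this comparison.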
Re: [ZODB-Dev] URGENT: ZODB down - Important Software Application at CERN
Marius Gedminas wrote: On Wed, May 27, 2009 at 12:17:36PM +0200, Hanno Schlichting wrote: Chris Withers wrote: Hanno Schlichting wrote: They are incredibly expensive to unpickle since all the DWIM magic in their __init__ gets called each time, though. How come? Unpickling doesn't call __init__ and I don't see why the DWIM magic would be needed anyway, since everything has already been parsed. How would a new instance of a class be constructed without calling __init__ or __new__? You're cleverly changing the question ;-) *Most* objects are unpickled by calling __new__ followed by __setstate__, but without calling __init__. Chris is therefore understandably surprised. Hmm, no. From what I read most objects are created by the following: class _EmptyClass: pass value = _EmptyClass() value.__class__ = klass The __new__ is only called when your new-style class has a __getnewargs__ method, which none of the standard types have. And even then it would only be used for pickle protocol 2, but the ZODB uses protocol 1. Old-style classes that define __getinitargs__ will get their __init__ called during unpickling, though, and DateTime is such a class. Ah, ok. I missed that this only happens in conjunction with __getinitargs__. Thanks, Hanno
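The mechanics discussed here are Python 2 internals (old-style classes, _EmptyClass, __getinitargs__). In modern Python 3 the default path Marius describes — reconstruct the instance and restore state without ever calling __init__ — is easy to verify directly:

```python
import pickle

calls = []

class Record:
    def __init__(self, value):
        calls.append("init")
        self.value = value

r = Record(42)
restored = pickle.loads(pickle.dumps(r))
# __init__ ran exactly once, for the original construction; unpickling
# rebuilt the instance (via __reduce_ex__/__setstate__) without it.
```

DateTime was the exception precisely because __getinitargs__ opted it back into the "call __init__ on unpickling" behavior, which is where its per-object cost came from.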
Re: [ZODB-Dev] URGENT: ZODB down - Important Software Application at CERN
Chris Withers wrote: Laurence Rowe wrote: Jim Fulton wrote: Well said. A feature I'd like to add is the ability to have persistent objects that don't get their own database records, so that you can get the benefit of having them track their changes without incurring the expense of a separate database object. +lots Hanno Schlichting recently posted a nice graph showing the persistent structure of a Plone Page object and its 9 (!) sub-objects. http://blog.hannosch.eu/2009/05/visualizing-persistent-structure-of.html That graph isn't quite correct ;-) workflow_history has DateTime objects in it, and I think they get their own pickle. Nope. DateTime objects are plain old-style classes and don't inherit from persistent.*. They are incredibly expensive to unpickle since all the DWIM magic in their __init__ gets called each time, though. Hanno
[ZODB-Dev] Blobs without shared-blob directory
Hi. I have a configuration with one ZEO database server (ZODB 3.8.1) and two physical servers housing four single-threaded ZEO clients each. There's some 15gb of blob data in the (Plone) site. Setting up and maintaining NFS to share direct access to the blob data is beyond what I can support in production right now. So this leaves me with a blob-cache per ZEO client. Am I correct to assume that sharing the blob-cache between the four processes on the same physical machine is not supported, as these don't have any mechanism to synchronize read/write activity to the blob-cache? Or is there something in the blob design that ensures a no-conflict situation? Does the blob-cache grow indefinitely or is there some mechanism to purge / pack it built-in somewhere? Thanks, Hanno
Re: [ZODB-Dev] Blobs without shared-blob directory
Jim Fulton wrote: On May 15, 2009, at 6:22 AM, Hanno Schlichting wrote: Am I correct to assume that sharing the blob-cache between the four processes on the same physical machine is not supported, as these don't have any mechanism to synchronize read/write activity to the blob-cache? Or is there something in the blob design that ensures a no-conflict situation? It probably works in ZODB 3.8, although there are no tests for it. I made sure it works in 3.9 and there are tests for it. This is our (ZC's) production configuration. Ok, I'll not try it for now on a tight deadline, but will consider it for quieter times. Unfortunately 3.9 and Zope 2.10 don't quite want to work with each other. Does the blob-cache grow indefinitely or is there some mechanism to purge / pack it built-in somewhere? ZODB 3.9 has an option to limit the blob cache size. We're using that too. Note that the ZODB 3.9 blob cache uses its own layout to facilitate the management of the cache. You can't use a 3.8 blob cache with 3.9. Ok, thanks! I'll live with some outside cronjob driven purging for now. Hanno
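A cronjob-driven purge along the lines mentioned above might look like this minimal sketch. The function name and the age-based policy are invented for illustration; a real script would also have to avoid racing with blob files the running clients are still using:

```python
import os
import tempfile
import time

def purge_older_than(root, max_age_seconds):
    """Delete cached files untouched for longer than max_age_seconds.
    A crude stand-in for an external blob-cache purge; it relies on
    mtime only and does no locking against active readers."""
    removed = []
    now = time.time()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if now - os.path.getmtime(path) > max_age_seconds:
                os.remove(path)
                removed.append(path)
    return removed

# Demonstrate on a throwaway directory standing in for a blob-cache.
cache = tempfile.mkdtemp()
old = os.path.join(cache, "old.blob")
new = os.path.join(cache, "new.blob")
for p in (old, new):
    with open(p, "wb") as f:
        f.write(b"x")
stale = time.time() - 7200
os.utime(old, (stale, stale))          # pretend this blob is 2 hours old
removed = purge_older_than(cache, 3600)  # purge anything older than 1 hour
```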
Re: [ZODB-Dev] ZODB 3.9.0b1 released
Jim Fulton wrote: I've just released the first beta of ZODB 3.9. Awesome :) I thought there were some outstanding issues with the cache-size-bytes functionality. Got all of these resolved / reviewed by now? Thanks, Hanno
Re: [ZODB-Dev] ZODB 3.9
Chris Withers wrote: Hanno Schlichting wrote: Just be aware that ZODB 3.9 is not compatible with any stable Zope 2.x release. It only works with and is required for Zope 2.12. It can be made to work with prior versions of Zope2 but that is a mild pain. What are the problems with using ZODB 3.9 in Zope 2.12? ZODB 3.9 removed a bunch of deprecated APIs. Look at http://pypi.python.org/pypi/ZODB3/3.9.0a12#change-history to see how much changed in this version. The main things were related to "Versions are no-longer supported.", which changed some low level API used in quite a number of places and meant that some of the stuff in Products.OFSP couldn't possibly work anymore. There were some smaller things as well, like ZopeUndo moving into the Zope2 codebase and such. I think someone reported getting the combination working, but I doubt it's possible without editing the Zope2 source code, which isn't the most maintainable solution. Hanno
Re: [ZODB-Dev] ZODB 3.9
Alan Runyan wrote: Is a ZODB 3.9 beta around the corner? Just be aware that ZODB 3.9 is not compatible with any stable Zope 2.x release. It only works with and is required for Zope 2.12. It can be made to work with prior versions of Zope2 but that is a mild pain. Hanno
Re: [ZODB-Dev] Can I use RelStorage with ZODB3.9.0
Wichert Akkerman wrote: On 3/6/09 6:27 AM, eastxing wrote: About one month ago, I asked a question about 'ZODB pack' and got suggestions to update to a new ZODB version. It took me one month to update my site from Plone2.5.5 (with Zope2.9.6-final, ZODB3.6.2) to Plone3.1.7 (with Zope2.10.7, ZODB3.7.3). Then I updated zasync (a durable task scheduling framework used by Zope2) to use its second-generation replacement -- 'zc.async'. 'zc.async' needs ZODB3.9.0, so I went further and updated to Zope2.11.7 and ZODB3.9.0a12, so far so good. Please note that Zope 2.11.7 is not supported for Plone 3.1.7. You should be able to use ZODB 3.9 with Zope 2.10.x though. Zope 2.11.2 is indeed not officially supported for Plone 3.x but we have nightly test runs passing for months now. So there's a good chance it'll work. If you want to use RelStorage I suggest using ZODB 3.8 with the appropriate patches. This combination is used in production by a number of people and has been tested. ZODB 3.9 introduces a number of API incompatible changes and will not work with Zope 2.11 in general. I had to change the Zope 2 code for 2.12 in quite a number of places to make it work with ZODB 3.9. I'd be surprised if you get this combination working in a reliable way. Hanno
Re: [ZODB-Dev] ZODB alternate serialization format patch
Shane Hathaway wrote: Hanno Schlichting wrote: Shane Hathaway wrote: In addition to making the data format pluggable, the patch puts most of ZODB's dependencies on the cPickle module in one place, so now if we decide to improve our usage of the cPickle module, we can make that change in one place rather than several. I'm wondering if this would be a good opportunity to include a version marker in addition to the protocol format in the APIs? Maybe. Starting with pickle protocol version 2, pickles do start with a version header. Reading pickles is backwards compatible. But specifying the version in which to write pickles needs explicit configuration. The ZODB still uses version one of the pickle protocol throughout and so far it has been complicated and cumbersome to change this in any way. Is it cumbersome? I tried changing to pickle protocol v2 while making this patch. Without this patch it was cumbersome, as there were many places to change the pickle version. Now it probably has become a lot easier ;) Only 4 of the 3000+ ZODB tests failed. One of them failed because we have some pickle introspection code that does not yet understand protocol 2; that should be easy to fix. Another failed because it seems to depend on the exact length of a generated pickle, which is a bit silly. I didn't look at the rest of the test failures because I assume they are all similarly superficial. That sounds easy to fix. While the documentation of Protocol Buffers mentions that they try to be stable and avoid incompatible versions, I don't trust any standard to be so generic that it can avoid incompatible changes over a period of many years. The patch I made might cover this. After this patch, ZODB will look for the serializer format name in curly braces at the beginning of each serialized object. That format name could include a version number if necessary. Right. I was wondering if it would be a good idea to build this in from the start.
My impression is that every data format of any kind of non-trivial complexity will have multiple incompatible versions of the same spec at some point. Maybe it is YAGNI as the formats themselves have a version mechanism in them, though. Hanno
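Shane's curly-brace prefix could carry a version as part of the format name. A hypothetical sketch (registry, names, and `pickle-1` label are all invented; this is not the actual patch's API):

```python
import pickle

# Each entry maps a format name -- which can embed a version, as in
# "pickle-1" -- to a (writer, reader) pair.
SERIALIZERS = {
    "pickle-1": (lambda obj: pickle.dumps(obj, 1), pickle.loads),
}

def dumps(obj, fmt="pickle-1"):
    # Prefix the record with "{format-name}" so readers can dispatch.
    writer, _ = SERIALIZERS[fmt]
    return b"{" + fmt.encode("ascii") + b"}" + writer(obj)

def loads(data):
    # The first "}" closes the prefix; everything after is the payload.
    end = data.index(b"}")
    fmt = data[1:end].decode("ascii")
    _, reader = SERIALIZERS[fmt]
    return reader(data[end + 1:])

roundtripped = loads(dumps({"answer": 42}))
```

Introducing an incompatible revision of a format would then just mean registering "pickle-2" alongside "pickle-1", with old records still readable through their own entry.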
Re: [ZODB-Dev] ZODB alternate serialization format patch
Shane Hathaway wrote: I have just created a patch for ZODB that makes the object serialization format pluggable, meaning that it should now be possible to write Python code that causes ZODB to store in formats other than Python pickles. Cool :) In addition to making the data format pluggable, the patch puts most of ZODB's dependencies on the cPickle module in one place, so now if we decide to improve our usage of the cPickle module, we can make that change in one place rather than several. I'm wondering if this would be a good opportunity to include a version marker in addition to the protocol format in the APIs? The ZODB still uses version one of the pickle protocol throughout and so far it has been complicated and cumbersome to change this in any way. While the documentation of Protocol Buffers mentions that they try to be stable and avoid incompatible versions, I don't trust any standard to be so generic that it can avoid incompatible changes over a period of many years. Hanno
Re: [ZODB-Dev] Minimal changes for Plone 3.1.6 running on ZODB 3.8.1 ?
Russ Ferriday wrote: In a warty old application, I'm hitting memory constraints that I think the 3.8.1 changes in ZODB might fix. We are running both Plone 3.0 and 3.1 in production with ZODB 3.8.1 on two quite large sites (one has RelStorage, the other one blob storage). I'd consider it production ready. Hanno
Re: [ZODB-Dev] Optimize BTree node sizes?
Jim Fulton wrote: On Nov 4, 2008, at 12:12 PM, Benji York wrote: On Tue, Nov 4, 2008 at 12:01 PM, Jim Fulton [EMAIL PROTECTED] wrote: A few months back, there was a lot of discussion here about BTree performance. I got a sense that maximum BTree-node and bucket sizes should be increased. Does anyone have recommendations for new sizes? It'd be cool if the bucket size could be dynamic (say governed by an attribute on the BTree), but I suspect that is dramatically out of scope for what you were planning on doing. I have a list of projects I might try to do for 3.9. One would be to make BTrees subclassable and to modify BTrees to get these limits from class attributes. But whether I get to this or not, if there are more sensible defaults, it would make sense to use them. Personally I think the numbers for the integer BTrees could indeed be increased, but for the object BTrees, a way to adjust the bucket sizes would be more important. The problem for BTrees storing objects is, of course, that you have no idea how large those objects are going to be. Sometimes they might be simple types with just a couple of bytes each, sometimes catalog brains of one kilobyte, but sometimes they might be actual content-like objects several megabytes in size. In general there is no way of knowing what you are going to use the BTrees for. Making it possible for the application code to adjust the values for the particular kind of data is probably better than guessing a better default value. Hanno
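Jim's idea of subclassable BTrees that read their limits from class attributes can be sketched with a toy bucket. The class names and the size 30 are invented for the sketch; the real C-level limits differ per BTree family:

```python
class Bucket:
    """Toy bucket whose split threshold is a class attribute, so a
    subclass can tune it for its data (sketch of the 'limits from
    class attributes' idea, not the real BTrees implementation)."""

    MAX_SIZE = 30  # hypothetical default threshold

    def __init__(self):
        self.items = []

    def insert(self, item):
        # Returns the new right-hand bucket when a split happens,
        # otherwise None.
        self.items.append(item)
        if len(self.items) > self.MAX_SIZE:
            return self._split()
        return None

    def _split(self):
        mid = len(self.items) // 2
        right = type(self)()
        right.items = self.items[mid:]
        self.items = self.items[:mid]
        return right

class BigBucket(Bucket):
    MAX_SIZE = 1000  # tuned for tiny values such as plain integers

b = Bucket()
spill = None
for i in range(31):               # one past the default threshold
    spill = b.insert(i) or spill  # the 31st insert forces a split

big = BigBucket()
overflow = [big.insert(i) for i in range(1000)]  # never splits
```

Application code choosing `BigBucket` for integer-like data and a small-threshold subclass for fat objects is exactly the kind of per-data tuning argued for above.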
Re: [ZODB-Dev] 3.8.1b8 released and would like to release 3.8.1 soon
Dieter Maurer wrote: Wichert Akkerman wrote at 2008-9-24 09:44 +0200: Jim Fulton wrote: I'd appreciate it if people would try it out soon. I can say that the combination of 3.8.1b8 and Dieter's zodb-cache-size-bytes patch does not seem to work. With zodb-cache-size-bytes set to 1 gigabyte on an instance with a single thread and using RelStorage, Zope capped its memory usage at 200mb. I can see two potential reasons (besides a bug in my implementation): * you have not used a very large object count. The most tight restriction (count or size) restricts what can be in the cache. With a small object count, this will be tighter than the byte size restriction. The object count is 65. Without the cache-size-bytes setting this produces a memory load of about one to one and a half gigabytes currently. * Size is only estimated -- not exact. The pickle size is used as size approximation. I would be surprised, however, if the pickle size were five times larger than the real size. IIRC, turned into a packed Data.fs, the whole content is about 25 gigabytes of typical Plone content. I think a possible interaction with RelStorage (which we asked Shane to look into) or Jim's mentioned cPersistence.h change was far more likely causing this. Hanno
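The mechanism under discussion — a cache bounded by estimated bytes, with the pickle size standing in for the real in-memory size — can be sketched like this. A toy for illustration, not Dieter's patch; as the thread notes, the pickle-size estimate can diverge substantially from actual RSS:

```python
import pickle
from collections import OrderedDict

class ByteLimitedCache:
    """Toy byte-bounded LRU cache using pickle size as the estimate."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.total = 0
        self.data = OrderedDict()  # key -> (obj, estimated_size)

    def set(self, key, obj):
        size = len(pickle.dumps(obj, 1))  # pickle size as size estimate
        if key in self.data:
            self.total -= self.data.pop(key)[1]
        self.data[key] = (obj, size)
        self.total += size
        # Evict least-recently-used entries until within the budget.
        while self.total > self.max_bytes and len(self.data) > 1:
            _, (_, evicted_size) = self.data.popitem(last=False)
            self.total -= evicted_size

    def get(self, key):
        obj, _size = self.data[key]
        self.data.move_to_end(key)  # LRU bookkeeping
        return obj

cache = ByteLimitedCache(max_bytes=200)
for i in range(50):
    cache.set(i, "payload-%d" % i)  # old entries get evicted
```

Both limits (count and bytes) act together in the real cache, so whichever is tighter wins, which is the first of Dieter's two explanations above.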
Re: [ZODB-Dev] 3.8.1b8 released and would like to release 3.8.1 soon
Jim Fulton wrote: I'd appreciate it if people would try it out soon. Besides the RelStorage site where we were running into problems as mentioned in the other thread, we also use the ZODB 3.8 branch (beta8 plus the first two beta fixes) in a smaller site with blob storage, for about a week in production now. So far it looks stable. The Data.fs in question is about 4 gigabytes (packed) with about 3.5 million objects in it and about 8 gigabytes of blobs. The site is used heavily and has a certain amount of conflict errors happening. It uses a non-persistent ZEO client cache of one gigabyte in addition to a 15 object count cache size. Hanno