Re: [ZODB-Dev] Question about relstorage and pack-gc
On Fri, Jul 13, 2012 at 7:05 PM, Shane Hathaway sh...@hathawaymix.org wrote:

> On 07/12/2012 01:30 PM, Santi Camps wrote:
>> My specific question is: if I disable pack-gc, can I safely empty the
>> object_ref table and free this space?
>
> Certainly. However, most of the 23 GB probably consists of blobs; blobs
> are not shown in the query results you posted.
>
> Shane

Thanks for your answer. I don't have blobs inside the ZODB, but truncating
the object_ref table has shrunk the database to 12 GB. A great improvement;
I prefer that configuration because deletions are rare in this application.

Thanks

--
Santi Camps
KMKey hacker (http://www.kmkey.com)
Earcon S.L. (http://www.earcon.com)

___
For more information about ZODB, see http://zodb.org/
ZODB-Dev mailing list - ZODB-Dev@zope.org
https://mail.zope.org/mailman/listinfo/zodb-dev
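[Editor's note] For readers hitting the same problem, the cleanup discussed above can be sketched as follows. This is an assumption-laden sketch, not an official procedure: it presumes the RelStorage 1.5 PostgreSQL schema and that pack-gc has already been set to false in the storage configuration (otherwise the next GC pack will simply repopulate these tables). Shane's cleanup advice elsewhere in this thread touches both bookkeeping tables, so both are emptied together here:

```sql
-- Sketch: reclaim garbage-collection bookkeeping space.
-- Only safe once pack-gc is false, since pack then no longer
-- consults the reference graph at all.
TRUNCATE object_ref;
-- object_refs_added records which transactions have already been
-- analyzed; leaving it populated while object_ref is empty would
-- confuse a later GC pack, so truncate it as well.
TRUNCATE object_refs_added;
```

TRUNCATE reclaims the table and index space immediately, which matches the 23 GB to 12 GB shrink reported above.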
[ZODB-Dev] Question about relstorage and pack-gc
Hi

I've been using RelStorage for years in production environments and it's
really amazing: zero big problems. However, I now have some big databases.
I'm examining one of them, 23 GB in total, on PostgreSQL 8.3 with
RelStorage 1.5.1, and I see that the biggest objects are object_ref and its
index:

      relname      | relpages
-------------------+----------
 pg_toast_11069817 |   987842
 object_ref        |   709272
 object_ref_pkey   |   560284
 object_state      |   267525

select count(*) from object_ref;
   count
-----------
 111224549

select count(*) from object_refs_added;
 count
--------
 216341

select count(*) from object_state;
  count
---------
 2545303

The data is consistent: no references to nonexistent or empty transactions.
So the garbage-collection bookkeeping seems to be using a lot of space. My
specific question is: if I disable pack-gc, can I safely empty the
object_ref table and free this space?

Thanks in advance

--
Santi Camps
KMKey hacker (http://www.kmkey.com)
Earcon S.L. (http://www.earcon.com)
[ZODB-Dev] Copying zodb's with relstorage
Hi all

I was trying to copy a RelStorage ZODB and am having some issues. The
original ZODB is mounted using a mount point /original_path. If I restore a
backup of the database and mount it using exactly the same mount point
/original_path in the destination Zope, all goes right. But what I want is
to replicate the original database N times, so I need /destination_pathN as
the mount point. When I do that, the database seems empty (no object is
shown at the mount point). Is there any way to fix this by updating rows in
SQL? I know one way to solve it would be to export and import a ZEXP, but
the database is very big and I'm trying to avoid that.

I've tried these two queries with no effect; these fields seem to be just
informative:

update transaction set description =
    replace(description::text, 'helpdesk_src', 'redesistemas')::bytea;
update transaction set username =
    replace(username::text, 'helpdesk_src', 'redesistemas')::bytea;

Where is the information about parent-child objects stored? Is it the
prev_tid field of the object_state table?

Thanks in advance

--
Santi Camps
KMKey hacker (http://www.kmkey.com)
Earcon S.L. (http://www.earcon.com)
Re: [ZODB-Dev] Copying zodb's with relstorage
On Thu, Feb 10, 2011 at 5:07 PM, Shane Hathaway sh...@hathawaymix.org wrote:

> On 02/10/2011 08:42 AM, Santi Camps wrote:
>> The objective is to duplicate a storage using different mount points.
>> For instance, if we have Database1 - mount_point_1, create Database2 and
>> Database3 as copies of Database1 (using pg_dump / pg_restore), and then
>> mount them as mount_point_2 and mount_point_3.
>
> Yes, but why do you want to do that? There might be a better way to
> accomplish what you're trying to do, or perhaps what you're doing is the
> right thing to do but there's some bug and you need to describe why that
> bug is important.

Well, the original is a base site I want to replicate to be used for
different clients. I have often done that using mount points and ZEXP
export/import, but I was trying to do it by copying the database to avoid
the long and heavy import process.

Thanks

Santi Camps
Re: [ZODB-Dev] Relstorage Database Adapter
On Wed, Nov 24, 2010 at 7:24 PM, Shane Hathaway sh...@hathawaymix.org wrote:

> On 11/24/2010 05:17 AM, Santi Camps wrote:
>> Hi all
>>
>> I've been using RelStorage for a long time with very good results. Until
>> now, I have also used database adapters like ZPsycopgDA to connect to
>> the same SQL database and store information in other application tables.
>> I was thinking of creating a RelStorage database adapter able to do that
>> work using the RelStorage configuration and connections. Some
>> preliminary simple tests seem to work:
>>
>>     def execute_query(conn, cursor, *args, **kw):
>>         query = kw['query']
>>         cursor.execute(query)
>>         return cursor
>>
>>     query = 'SELECT COUNT(*) FROM kmkey_task'
>>     rows = app._p_jar._storage._with_store(execute_query, query=query)
>>     print rows.fetchone()
>>
>> The question is: can I go on? Or could using the same connections as
>> RelStorage cause problems at the ZODB level?
>
> Well, it doesn't look like you're using the two RelStorage connections
> the way they are intended to be used. For the code snippet you gave, it's
> likely that you want to use the load connection rather than the store
> connection.
>
> The load connection ensures consistent reads. RelStorage does a number of
> things to maintain that guarantee. The load connection is never
> committed, only rolled back. The store connection is intended to be used
> only during transaction commit. At the beginning of a commit (tpc_begin),
> RelStorage rolls back the store connection in order to get the most
> recent updates. The store connection and load connection are often out of
> sync with each other, so code that uses the store connection should
> detect and handle conflicting updates.
>
> I suspect the load/store connection split is too complicated for most
> apps that just want to interact with the database, so I haven't exposed
> any documented API. I considered making an API when I worked on
> repoze.pgtextindex, but I concluded that pgtextindex can be a bit lazy
> about consistency and therefore doesn't need all the extra complexity
> that reusing RelStorage connections would bring.
I understand. I will try a simpler approach: creating new connections using
the RelStorage connection string.

Thanks a lot for your explanation.

--
Santi Camps
KMKey hacker (http://www.kmkey.com)
Earcon S.L. (http://www.earcon.com)
[ZODB-Dev] Relstorage Database Adapter
Hi all

I've been using RelStorage for a long time with very good results. Until
now, I have also used database adapters like ZPsycopgDA to connect to the
same SQL database and store information in other application tables. I was
thinking of creating a RelStorage database adapter able to do that work
using the RelStorage configuration and connections. Some preliminary simple
tests seem to work:

    def execute_query(conn, cursor, *args, **kw):
        query = kw['query']
        cursor.execute(query)
        return cursor

    query = 'SELECT COUNT(*) FROM kmkey_task'
    rows = app._p_jar._storage._with_store(execute_query, query=query)
    print rows.fetchone()

The question is: can I go on? Or could using the same connections as
RelStorage cause problems at the ZODB level?

Thanks in advance

--
Santi Camps
KMKey hacker (http://www.kmkey.com)
Earcon S.L. (http://www.earcon.com)
[ZODB-Dev] Relstorage 1.4b3 error
Hi Shane (and others)

I'm just testing RelStorage 1.4b3... it has exciting new features :-) !!

I've started testing the memcached integration, and within a few hours I've
had this error twice:

  - Module ZPublisher.Publish, line 121, in publish
  - Module Zope2.App.startup, line 240, in commit
  - Module transaction._manager, line 96, in commit
  - Module Products.CPSCompat.PatchZODBTransaction, line 175, in commit
  - Module transaction._transaction, line 441, in _commitResources
  - Module ZODB.Connection, line 715, in tpc_finish
  - Module relstorage.storage, line 825, in tpc_finish
  - Module relstorage.storage, line 845, in _finish
  - Module relstorage.cache, line 294, in after_tpc_finish
  - Module memcache, line 360, in incr
  - Module memcache, line 384, in _incrdecr
  ValueError: invalid literal for int(): NOT_FOUND

Retrying the request solves the problem, but I'm reporting it so it's known
(in fact, I don't know whether the problem is in RelStorage or in
memcache). I'm working with Python 2.4.6 and python-memcache module version
1.40.

Thanks a lot for this new release of RelStorage; the performance tests are
amazing.

--
Santi Camps
KMKey hacker (http://www.kmkey.com)
Earcon S.L. (http://www.earcon.com)
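[Editor's note] The traceback shows python-memcache's incr() doing int() on the server reply when the key is missing (expired or evicted), which yields int("NOT_FOUND") and the ValueError above. A defensive wrapper along these lines would avoid the failed request. The FakeClient below is a minimal in-memory stand-in, included only so the sketch is runnable; it is not the real memcache.Client, and safe_incr is a hypothetical helper, not part of RelStorage:

```python
class FakeClient(object):
    """Mimics old python-memcache behaviour: incr() on a missing key
    ends up doing int("NOT_FOUND") and raising ValueError."""

    def __init__(self):
        self._data = {}

    def incr(self, key, delta=1):
        if key not in self._data:
            raise ValueError("invalid literal for int(): NOT_FOUND")
        self._data[key] += delta
        return self._data[key]

    def add(self, key, value):
        # add() only succeeds if the key does not already exist.
        if key in self._data:
            return False
        self._data[key] = value
        return True


def safe_incr(client, key, delta=1):
    """Increment key, recreating it if the server no longer has it."""
    try:
        return client.incr(key, delta)
    except ValueError:
        # Key expired or was evicted: recreate it, then retry once.
        client.add(key, 0)
        return client.incr(key, delta)


client = FakeClient()
print(safe_incr(client, "commit-counter"))  # 1: key was missing, recreated
print(safe_incr(client, "commit-counter"))  # 2: normal increment
```

The same idea (catch the error, re-seed the counter, retry) is what a storage-level fix would have to do, since memcached may drop the key at any time.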
[ZODB-Dev] Strange behaviour with BTreeFolder2
Hi all

I'm having memory problems and, after some debugging with gc, I see some
strange behaviour with BTreeFolder2 (or with BTrees itself, I'm not sure).
I have a BTreeFolder2 with 4 objects, more or less. Some of them are
emails. Just accessing the container, about 700 mails are loaded in
memory!!

    from AccessControl.SecurityManagement import newSecurityManager
    from Testing.makerequest import makerequest

    user = app.beta.acl_users.getUser('manager').__of__(app.beta.acl_users)
    newSecurityManager({}, user)
    app = makerequest(app)
    km = app.beta
    km.portal_repository._tree

    import gc
    objects = gc.get_objects()
    objects2 = [obj for obj in objects if getattr(obj, '__class__', None)]
    emails = [obj for obj in objects2
              if 'mail' in obj.__class__.__name__.lower()]
    print len(emails)
    697

Following gc references, I see these mails are referenced from an OOBucket,
which is referenced by another OOBucket, ... and some of them are
referenced from persistent.PickleCache (even though I've configured
cache-size = 0 in zope.conf). It seems that some buckets and their
referenced objects are read during BTree loading. So the garbage collector
never clears them from memory, and after some hours of intensive work the
server runs out of RAM.

Is that normal? Am I making some mistake? Does anybody know a way to solve
it?

Thanks in advance

--
Santi Camps (Earcon S.L.)
http://www.earcon.com
http://www.kmkey.com
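[Editor's note] The gc walk in the message above can be packaged into a small helper for repeated measurements. This is plain stdlib Python and independent of Zope; the MailMessage class below is just an illustrative stand-in for the persistent email class, not a real ZODB type:

```python
import gc


def count_instances(name_fragment):
    """Count live objects whose class name contains name_fragment."""
    fragment = name_fragment.lower()
    count = 0
    for obj in gc.get_objects():
        cls = getattr(obj, '__class__', None)
        if cls is not None and fragment in cls.__name__.lower():
            count += 1
    return count


class MailMessage(object):  # stand-in for the persistent email class
    pass


# Keep some instances alive, then measure.
kept = [MailMessage() for _ in range(3)]
print(count_instances('mail'))  # at least the 3 instances created above
```

Calling count_instances before and after touching the BTreeFolder2 is a quick way to show whether merely loading the container pulls its values into memory, which is the comparison made later in this thread between FileStorage and RelStorage.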
Re: [ZODB-Dev] Strange behaviour with BTreeFolder2
> > objects = gc.get_objects()
> > objects2 = [obj for obj in objects if getattr(obj, '__class__', None)]
>
> (I'm a bit surprised that you get objects without a __class__ attribute --
> could you elaborate about those? Old-style class instances? Extension
> types?)

No, it's just a copy-paste from an old memory-debugging thread, sorry.

> > emails = [obj for obj in objects2
> >           if 'mail' in obj.__class__.__name__.lower()]
>
> How many different classes are there with 'mail' in the (lowercased) class
> name? Do they all inherit from Persistent?

Just one class, and it inherits from Persistent, yes. Following other
classes leads to the same problem.

> > Following gc references, I see these mails are referenced from an
> > OOBucket, which is referenced by another OOBucket, ...
>
> I assume those mails are values rather than keys.

Yes, the keys are strings with the ids of the objects.

> Your description would make perfect sense to me if your mail classes
> weren't subclasses of Persistent (and the fix would be: make them
> subclass Persistent).

No, all of them inherit from CMF PortalContent, and that from Persistent,
so that is not the problem. I'm using RelStorage, but that shouldn't
matter. I will try the same tests with a FileStorage to be sure.

Thanks for your help

--
Santi Camps (Earcon S.L.)
http://www.earcon.com
http://www.kmkey.com
Re: [ZODB-Dev] Strange behaviour with BTreeFolder2
> I'm using RelStorage, but that shouldn't matter. I will try the same
> tests with a FileStorage to be sure.

Hi all

It seems to be a RelStorage-specific issue. The same tests using
FileStorage don't produce the memory garbage... Could it be something
related to the poll-interval parameter? I've left the default value there.

Thanks again

--
Santi Camps (Earcon S.L.)
http://www.earcon.com
http://www.kmkey.com
Re: [ZODB-Dev] Strange behaviour with BTreeFolder2
> Hmm, I don't know how RelStorage could affect the pickle cache; the two
> have no direct interaction at all. Are you using any ZODB patches other
> than the polling patch?
>
> Shane

Hi Shane

No, just the RelStorage polling patch. I'm not saying that the pickle cache
is affected; I'm saying that loading a BTreeFolder2 seems very affected. In
a FileStorage system, objects contained in the BTreeFolder are not loaded
when the BTreeFolder itself is loaded, but with RelStorage, a lot of
contained objects are loaded just by accessing the container. After that,
these objects are never removed from memory, but the question is why they
are loaded at all. Any ideas?

Thanks

--
Santi Camps (Earcon S.L.)
http://www.earcon.com
http://www.kmkey.com
Re: [ZODB-Dev] Relstorage pack problems
On Fri, Jan 23, 2009 at 8:45 PM, Shane Hathaway sh...@hathawaymix.org wrote:

> Shane Hathaway wrote:
>> Assuming your bad script caused your problem, it is likely that packing
>> will still mess up your database, since you still probably have mixed-up
>> object_state rows. Don't pack until I've had a chance to look again.
>
> Here is some more analysis. Now that I understand you accidentally merged
> two databases into one by forcing copyTransactionsFrom() to run when it
> shouldn't, I looked for the transactions you merged.
>
> First I looked for the OIDs with a confused transaction ID.
>
> => select zoid from current_object where tid != (select max(tid)
>      from object_state where object_state.zoid = current_object.zoid);
>  zoid
> ------
>     7
>    10
>    12
>    11
>     9
>     8
> (6 rows)
>
> Then I listed all non-current transaction IDs for those objects.
>
> => select zoid, tid from object_state where zoid in (7,8,9,10,11,12)
>      and tid != (select tid from current_object
>                  where current_object.zoid = object_state.zoid);
>  zoid |        tid
> ------+--------------------
>     8 | 250499913748614178
>     9 | 250499913748614178
>    10 | 250499913748614178
>    11 | 250499913748614178
>    12 | 250499913748614178
>     7 | 250499913748614178
> (6 rows)
>
> Based on this information and the information in my last email, I can
> deduce that you fortunately merged only two transactions from another
> database, and that while the merge caused conflicts, these objects
> haven't been otherwise modified. Note that the bad database merge could
> have happened at any time, not necessarily November 17 when these
> transactions were created. Anyone with access to your database and your
> broken script could cause this problem again. Fix the script quickly.
>
> Here are the two bad transactions:
>
>  250499913441768123 | initial database creation
>  250499913748614178 | /manage_main\012\012Created Zope Application
>
> You need to delete all traces of these two transactions from your
> database. Before you do, please ensure nothing is actually using them.
> The query below should not return any rows.
> select * from current_object
>   where tid in (250499913441768123, 250499913748614178);
>
> Assuming that query returns no rows, here is how you can remove the bad
> transactions:
>
> update object_state set prev_tid = 0
>   where prev_tid in (250499913441768123, 250499913748614178);
> delete from object_state
>   where tid in (250499913441768123, 250499913748614178);
> delete from object_ref
>   where tid in (250499913441768123, 250499913748614178);
> delete from object_refs_added
>   where tid in (250499913441768123, 250499913748614178);
> delete from transaction
>   where tid in (250499913441768123, 250499913748614178);
> commit;
>
> Once you've done that, you should see no more anomalies in
> current_object:
>
> => select zoid from current_object where tid != (select max(tid)
>      from object_state where object_state.zoid = current_object.zoid);
>  zoid
> ------
> (0 rows)
>
> I used several shortcuts for this solution, particularly the statement
> that sets prev_tid to 0. If you had merged a more complex database, I
> wouldn't have been able to use shortcuts.
>
> I'm glad to know RelStorage didn't do anything wrong after all. Perhaps
> the copyTransactionsFrom() method could work harder to prevent a mishap
> like this, but that method is part of the ZODB API, not RelStorage, so I
> don't have as much control over it.
>
> However, I still don't want you to pack yet, because my experiments with
> packing your database have revealed some unexpected behavior. I'm going
> to look into it.

Thanks again, Shane. We'll fix the script, then try removing these two
transactions and packing on a copy of the database, to see what happens.
The last pack on a copy worked, but then the application raised a
KeyError: 8, probably because this zoid is one of those affected by the
wrong transactions.

--
Santi Camps (Earcon S.L.)
http://www.earcon.com
http://www.kmkey.com
Re: [ZODB-Dev] Relstorage pack problems
On Fri, Jan 23, 2009 at 2:38 AM, Shane Hathaway sh...@hathawaymix.org wrote:

> Santi, I hope you don't mind me discussing your database in public. I'm
> not going to talk about anything that looks like it could be private.
> Other RelStorage users might benefit from the analysis.

Hi Shane

That's right, of course. Furthermore, thanks a lot for this analysis.

> Looking at your database, I see that something bad happened just before
> transaction 250499913441768123. That number is an encoded time stamp:
>
> >>> from ZODB.TimeStamp import TimeStamp
> >>> from ZODB.utils import p64
> >>> str(TimeStamp(p64(250499913441768123)))
> '2008-11-17 19:36:04.913130'
>
> The transaction log entry says "initial database creation", which means
> that the database had no root object (OID 0), so ZODB created one and
> started a brand new database. Strange! This happened about an hour after
> a transaction labeled:
>
> /asp_ekartek/kmkey_iso/portal_setup/manage_doUpgrades
>
> I'm guessing that an upgrade script did something horribly wrong that
> day.

I've been reviewing what happened that day, and I think we are close to
finding the culprit. That day the database was migrated from
DirectoryStorage to RelStorage. AFAIK, the upgrade should have been done in
DirectoryStorage, before the conversion to RelStorage. I don't think the
upgrade can corrupt the database; all operations are high-level ones, and
we never change ZODB objects by hand. So the problem should be in the
conversion process. I attach the script we use to do the conversion. Feel
free to include it in RelStorage if you think it's useful and well done (as
I said, I really don't know much about ZODB; I just mixed zodbconvert.py
with some DirectoryStorage code).

> Furthermore, the entry for OID 0 in the current_object table points to an
> old transaction rather than the most recent transaction that modified OID
> 0. That's not supposed to happen, even when you undo. I hope RelStorage
> didn't do that! Did you or someone on your team change current_object by
> hand?
> I can understand why you would, since a simple modification to
> current_object would be a nice quick fix for the broken upgrade. The fix
> would not be complete, though, because now the object_state table and the
> current_object table disagree on the current state of OID 0.
>
> According to object_state, even now, the current state of OID 0 still
> points to the small object graph that was accidentally created on
> November 17. The pack code relies more on object_state than on
> current_object, so the pack code sees only a handful of objects that are
> reachable. Packing with garbage collection removes everything that is not
> reachable.
>
> The current_object table is really just a cache of object_state. If the
> schema were fully normalized, there would not be a current_object table.
> In theory, the current_object table makes it possible to load ZODB
> objects quickly. But if the current_object table results in problems like
> this, I need to consider alternatives.

We've used the attached script to convert a lot of other databases, which
are packing successfully, so something special must have occurred in this
case. The only explanation I can find is that the conversion was done
without unmounting the DirectoryStorage database from its Zope mount point,
and that caused the problem. The right way we convert databases from DS to
RS is: first detach them from Zope, then convert, and then mount the
resulting RS. But it's possible there was a human mistake that day.

> In any case, I believe you can get out of this mess pretty easily. You
> need to delete the extra object states for OID 0 created on November 17.
> I tried this in my copy of your database:
>
> delete from object_state where zoid = 0
>   and tid in (250499913441768123, 250499913748614178);
>
> After that, "select count(1) from object_state where zoid = 0;" should
> tell you there is only one state in the database for OID 0. Packing
> should work fine then.
> It seemed to do the right thing on my copy, but I don't have your
> application code to check it.

Thank you very much for that information. I really would not have been able
to find it myself. We will try it as soon as possible and let you know the
results.

Regards

--
Santi Camps (Earcon S.L.)
http://www.earcon.com
http://www.kmkey.com

#!/usr/bin/env python
##############################################################################
#
# Copyright (c) 2008 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
# ZODB storage
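[Editor's note] The TimeStamp conversion Shane shows requires ZODB to be installed. The same decoding can be done by hand, since a ZODB transaction id packs a calendar time into the high 32 bits (minutes, via nested year/month/day/hour factors) and the seconds, scaled as a fraction of a minute over 2**32, into the low 32 bits. A pure-Python sketch of that layout, checked against the tid from Shane's analysis:

```python
def decode_tid(tid):
    """Decode a 64-bit ZODB transaction id into a UTC timestamp tuple.

    High 32 bits encode minutes as
        ((((year-1900)*12 + month-1)*31 + day-1)*24 + hour)*60 + minute
    and the low 32 bits encode seconds scaled by 2**32 / 60.
    """
    high, low = tid >> 32, tid & 0xffffffff
    minute = high % 60
    high //= 60
    hour = high % 24
    high //= 24
    day = high % 31 + 1
    high //= 31
    month = high % 12 + 1
    year = high // 12 + 1900
    second = low * 60.0 / 2 ** 32
    return year, month, day, hour, minute, second


# The tid from Shane's analysis decodes to 2008-11-17 19:36:04.91...,
# matching str(TimeStamp(p64(250499913441768123))) above.
year, month, day, hour, minute, second = decode_tid(250499913441768123)
print(year, month, day, hour, minute)  # 2008 11 17 19 36
print(round(second, 5))                # ~4.91313
```

This is handy for turning the tid values in object_state and transaction rows into wall-clock times directly from SQL output, without loading ZODB.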
Re: [ZODB-Dev] Relstorage pack problems
> Interesting. RelStorage is already set up to do something like that. It
> splits packing into two phases. In the first phase, it makes a complete
> list of every row for the second phase to delete. The second phase could
> be delayed as long as we want. Between the two phases, if any database
> connections use objects that have been marked for deletion, we could
> cancel the pack and flag a software bug.
>
>> If you want to make any kind of test with our databases, it will be a
>> pleasure.
>
> It would be very helpful to me if you could provide a copy of your
> database for me to debug. I'm hoping for a compressed SQL dump.
>
> Shane

Hi again

I now know what happens: both the object_ref and object_refs_added tables
are completely empty in my database. I can't understand why, but it
explains the data loss when packing (nothing references anything). So the
mystery is not in RelStorage but in the data.

The only ways to solve it are to disable pack-gc or to try to fill these
tables correctly. Do you know any simple way to do that? Perhaps an
export / import would fill these tables?

Thanks a lot for your help

--
Santi Camps (Earcon S.L.)
http://www.earcon.com
http://www.kmkey.com
Re: [ZODB-Dev] Relstorage pack problems
On Mon, Jan 19, 2009 at 8:22 PM, Shane Hathaway sh...@hathawaymix.org wrote:

> Santi Camps wrote:
>> We've been using RelStorage 1.1.c1 in production environments on
>> PostgreSQL 8.1 for some time. It has been working really well, but
>> yesterday we had a big problem packing a RelStorage ZODB mounted as a
>> Zope mount point.
>
> I'm guessing that your mounted object is not attached to the root of the
> mounted database. In that situation, both RelStorage and FileStorage will
> pack away practically everything in the mounted database, because from
> the viewpoint of the mounted database, the disconnected object is
> unreachable and thus garbage.
>
> A quick workaround is to disable garbage collection in zope.conf; then
> you can pack without losing any objects at all. A proper fix would be to
> make the mounted object reachable from the mounted database root. I
> believe the Zope 2 support for mounted databases does this automatically,
> but it's possible that Zope 3 takes an unwise shortcut.

Sorry, I forgot to say we are using Zope 2.9.4, and that the mounted object
is already reachable from the mounted database root (the ZODB mount point
was added to the root through the ZMI). Also, the resulting database after
the pack is still large, so not everything was deleted; it seems the root
object was simply not found. We are using the same method on a lot of
databases and it has been working fine for a long time, until this case.

I know it's very difficult to say what happened, but I've reported the case
because perhaps it's possible to take some measures to avoid data loss
while packing. I don't know a lot about ZODB, but DirectoryStorage, for
instance, doesn't delete objects immediately; it uses a 'deleted' mark and
doesn't remove them until X days have passed (X configured in the
settings). I love this caution of DirectoryStorage, but I like everything
else about RelStorage :-)

If you want to make any kind of test with our databases, it will be a
pleasure.

Thanks a lot for your help

--
Santi Camps (Earcon S.L.)
http://www.earcon.com
http://www.kmkey.com
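[Editor's note] The quick workaround Shane suggests (packing with garbage collection disabled) is a storage option in the ZConfig configuration. A zope.conf fragment along these lines should do it for RelStorage on PostgreSQL; this is a sketch, and the database name, DSN, and mount point are placeholders, not values from this thread:

```
%import relstorage
<zodb_db main>
    mount-point /
    <relstorage>
        # With pack-gc false, pack removes only old object revisions
        # and never deletes unreachable objects, so a mount-point
        # mishap cannot garbage-collect the whole database.
        pack-gc false
        <postgresql>
            dsn dbname='zodb' host='localhost'
        </postgresql>
    </relstorage>
</zodb_db>
```

The trade-off is that with pack-gc false, truly orphaned objects are never reclaimed, which is exactly the configuration the first thread in this digest ends up preferring.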
[ZODB-Dev] Open Connections with RelStorage
Hi all

RelStorage is really fantastic. I'm starting to use it in some production
environments with PostgreSQL 8.1 and it works really well. But I've
observed that the number of open connections is always increasing and never
decreases. Has anybody seen the same problem? I'm sorry, but I'm not able
to debug ZODB and understand what's happening. Could it be related to
storing files in the ZODB (and so in PostgreSQL, through RelStorage)?

Thanks in advance

--
Santi Camps
Earcon S.L. - http://www.earcon.com - http://www.kmkey.com
Re: [ZODB-Dev] Open Connections with RelStorage
On Thu, Oct 23, 2008 at 11:21 PM, Shane Hathaway [EMAIL PROTECTED] wrote:

> Shane Hathaway wrote:
>> Santi Camps wrote:
>>> I have 8 Zopes with 4 threads each, and I've seen more than 50 open
>>> connections used by RelStorage. I was hoping for one connection per
>>> thread, like the PostgreSQL adapters do. Doesn't the number of
>>> RelStorage connections depend on the number of Zope threads?
>>
>> RelStorage opens 2 connections per thread: one for reading, one for
>> writing.
>
> I should add that RelStorage also opens short-lived connections for
> various operations like computing the size of the storage and packing. Be
> sure to allow room for those as well.
>
> Shane

Thanks a lot for your help. Yes, I have 8 Zope processes to take advantage
of two quad-core processors (so, 8 cores), and it's very interesting to
know the difference between threads and processes when using RelStorage.

Regards

--
Santi Camps
Earcon S.L. - http://www.earcon.com - http://www.kmkey.com
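[Editor's note] The numbers observed in this thread are consistent with Shane's explanation: with two RelStorage connections per Zope thread, this deployment's steady-state ceiling is well above the 50+ connections seen. A quick back-of-the-envelope check, using the figures stated above:

```python
# Expected steady-state PostgreSQL connections for this deployment.
processes = 8            # separate Zope instances (one per core)
threads_per_process = 4  # Zope threads in each instance
conns_per_thread = 2     # RelStorage: one load + one store connection

steady_state = processes * threads_per_process * conns_per_thread
print(steady_state)  # 64, before counting short-lived pack/size connections
```

When sizing max_connections in postgresql.conf, the short-lived connections for packing and size computation mean the limit should sit comfortably above this steady-state figure.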