On Tue, May 18, 2010 at 1:14 PM, Ryan Noon rmn...@gmail.com wrote:
Hi All,
I converted my code to use LOBTrees holding LLTreeSets and it sticks to the
memory bounds and performs admirably throughout the whole process.
Unfortunately opening the database afterwards seems to be really really
On Tue, May 11, 2010 at 7:37 PM, Ryan Noon rmn...@gmail.com wrote:
...
(a pointer to relevant documentation would be really
useful)
A major deficiency of ZODB is that there is effectively no standard
documentation.
I'm working on fixing this.
Jim
--
Jim Fulton
On Tue, May 11, 2010 at 7:37 PM, Ryan Noon rmn...@gmail.com wrote:
Hi Jim,
I'm really sorry for the miscommunication, I thought I made that clear in my
last email:
I'm wrapping ZODB in a 'ZMap' class that just forwards all the dictionary
methods to the ZODB root and allows easy interchangeability with my old
sqlite OODB abstraction.
wordid_to_docset is a ZMap,
I think this means that you are storing all of your data in a single
persistent object, the database root PersistentMapping. You need to
break up your data into persistent objects (instances of objects that
inherit from persistent.Persistent) for the ZODB to have a chance of
performing memory
Thanks Laurence, this looks really helpful. The simplicity of ZODB's
concept and the joy of using it apparently hide some of the complexity
necessary to use it efficiently. I'll check this out when I circle back to
data stuff tomorrow.
Have a great morning/day/evening!
-Ryan
On Tue, May 11,
Thanks for your quick reply!
So, the best place to call those would be during my commit break (whenever I
decide to take it? [which would be less often if I could be sure of no
crashing]). Are there any other problems with the way I was using ZODB in
my code? I really like it, but I recognize that it's a lot more
On Mon, May 10, 2010 at 3:27 PM, Ryan Noon rmn...@gmail.com wrote:
Hi everyone,
I recently switched over some of my home-rolled sqlite backed object
databases into ZODB based on what I'd read and some cool performance numbers
I'd seen. I'm really happy with the entire system so far except for
On Mon, May 10, 2010 at 4:58 PM, Jim Fulton j...@zope.com wrote:
...
The first thing to understand is that options like cache-size and
cache-size-bytes are suggestions, not limits. :) In particular, they
are only enforced:
- at transaction boundaries,
- when an application creates a
Jim Fulton wrote:
On Mon, May 10, 2010 at 3:27 PM, Ryan Noon rmn...@gmail.com wrote:
[snip]
Here's my code:
self.storage = FileStorage(self.dbfile, pack_keep_old=False)
cache_size = 512 * 1024 * 1024
self.db =
On Mon, May 10, 2010 at 5:20 PM, Tres Seaver tsea...@palladion.com wrote:
...
Also note that memory allocated by Python is generally not returned to
the OS when freed.
Python's own internal heap management has gotten noticeably better about
returning reclaimed chunks to the OS in 2.6.
Yeah,
First off, thanks everybody. I'm implementing and testing the suggestions
now. When I said ZODB was more complicated than my solution I meant that
the system was abstracting a lot more from me than my old code (because I
wrote it and knew exactly how to make the cache enforce its limits!).
The
On Mon, May 10, 2010 at 5:39 PM, Ryan Noon rmn...@gmail.com wrote:
Hi all,
I've incorporated everybody's advice, but I still can't get memory to obey
cache-size-bytes. I'm using the new 3.10 from pypi (but the same behavior
happens on the server where I was using 3.10 from the new lucid apt repos).
I'm going through a mapping where we take one long integer
P.S. About the data structures:
wordset is a freshly unpickled python set from my old sqlite oodb thingy.
The new docsets I'm keeping are 'L' arrays from the stdlib array module.
I'm up for using ZODB's builtin persistent data structures if it makes a
lot of sense to do so, but it sorta breaks
I think that moving to an LLTreeSet for the docset will significantly
reduce your memory usage. Non persistent objects are stored as part of
their parent persistent object's record. Each LOBTree object bucket
contains up to 60 (key, value) pairs. When the values are
non-persistent objects they are