On 18 January 2013 10:21, Claudiu Saftoiu csaft...@gmail.com wrote:
Er, to be clearer: my goal is for the preload to load everything into the
cache that the query mechanism might use.
It seems the bucket approach only takes ~10 seconds on the 350k-sized index
trees vs. ~60-90
On 14 October 2012 22:49, Jim Fulton j...@zope.com wrote:
On Sun, Oct 14, 2012 at 5:28 PM, Tres Seaver tsea...@palladion.com wrote:
...
Well, I don't have time to chase BTrees. This could always be done in
ZODB 5. :)
I could help chop BTrees out, if that would be useful: most of the
On 14 October 2012 23:33, Jim Fulton j...@zope.com wrote:
On Sun, Oct 14, 2012 at 6:07 PM, Laurence Rowe l...@lrowe.co.uk wrote:
On 14 October 2012 22:49, Jim Fulton j...@zope.com wrote:
On Sun, Oct 14, 2012 at 5:28 PM, Tres Seaver tsea...@palladion.com wrote:
...
Well, I don't have time
On 30 August 2012 19:19, Shane Hathaway sh...@hathawaymix.org wrote:
On 08/30/2012 10:14 AM, Marius Gedminas wrote:
On Wed, Aug 29, 2012 at 06:30:50AM -0400, Jim Fulton wrote:
On Wed, Aug 29, 2012 at 2:29 AM, Marius Gedminas mar...@gedmin.as
wrote:
On Tue, Aug 28, 2012 at 06:31:05PM +0200,
On 20 March 2012 16:52, Adam Tauno Williams awill...@whitemice.org wrote:
Is it possible to open a ZODB in a thread and share it with other threads
via a filesystem socket or pipe [rather than a TCP connection]? I've
searched around and haven't found any reference to such a configuration.
This
On 14 March 2012 17:47, Jim Fulton j...@zope.com wrote:
I'm pretty happy with how zc.zlibstorage has worked out.
Should I build this into ZODB 3.11?
+1
BTW, lz4 compression looks interesting.
The Python binding (at least from PyPI) is broken.
I submitted an issue. Hopefully it will be
On 13 February 2012 10:06, Pedro Ferreira jose.pedro.ferre...@cern.ch wrote:
The OS' file-system cache acts as a storage server cache. The storage
server does (essentially) no processing to data read from disk, so an
application-level cache would add nothing over the disk cache provided by
On 9 February 2012 11:24, Jim Fulton j...@zope.com wrote:
I'm sorry I haven't had time to look at this. Still don't really.
Thanks Marius!!!
On Wed, Feb 8, 2012 at 6:48 PM, Marius Gedminas mar...@gedmin.as wrote:
On Thu, Feb 09, 2012 at 01:25:48AM +0200, Marius Gedminas wrote:
On Wed, Feb
On 24 January 2012 13:50, steve st...@lonetwin.net wrote:
Hi All,
I apologize for the cross-post but by this mail I simply hope to get a few
pointers on how to narrow down to the problem I am seeing. I shall post to the
relevant list if I have further questions.
So here is the issue:
On 12 October 2011 23:53, Shane Hathaway sh...@hathawaymix.org wrote:
As I see it, a cache of this type can take 2 basic approaches: it can
either store {oid: (state, tid)}, or it can store {(oid, tid): (state,
last_tid)}. The former approach is much simpler, but since memcache has
no
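A minimal sketch of the two cache layouts described above, using plain dicts in place of memcache (the oids, tids, and the `load` helper are illustrative, not RelStorage's actual API):

```python
# Plain dicts standing in for memcache; oids/tids are made-up values.

# Approach 1: one entry per oid, holding only the latest known state.
simple_cache = {}
simple_cache[b'\x00' * 8] = (b'pickled-state', 5000)        # {oid: (state, tid)}

# Approach 2: one entry per (oid, tid), so older revisions can still be
# served to connections reading as of an earlier transaction (MVCC).
mvcc_cache = {}
mvcc_cache[(b'\x00' * 8, 5000)] = (b'pickled-state', 4200)  # (state, last_tid)

def load(cache, oid, before_tid):
    """Newest cached state for oid committed before before_tid."""
    candidates = [(tid, val) for (o, tid), val in cache.items()
                  if o == oid and tid < before_tid]
    return max(candidates)[1][0] if candidates else None

assert load(mvcc_cache, b'\x00' * 8, 6000) == b'pickled-state'
assert load(mvcc_cache, b'\x00' * 8, 4000) is None
```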
On 18 July 2011 11:07, Pedro Ferreira jose.pedro.ferre...@cern.ch wrote:
Hello,
I have an OOTreeSet in my DB that is behaving a bit funny (seems to be
corrupted). I thought I could get some more information by performing a
sanity check, but that doesn't seem to help a lot:
>>> c in s
False
On 18 July 2011 13:08, Pedro Ferreira jose.pedro.ferre...@cern.ch wrote:
TreeSets are essentially BTrees with only keys. This means that the
members of a TreeSet must have a stable ordering. I suspect that
c's class does not define the comparison methods (such as __lt__)
which means under
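A hedged illustration of the requirement above: a class meant for use as a TreeSet member should define comparison methods on an immutable value. The class name is invented, and plain sorting stands in for insertion into an OOTreeSet (on Python 2, BTrees relied on `__cmp__`; defining `__eq__`/`__lt__` plus `__hash__` covers both):

```python
from functools import total_ordering

# Hypothetical key class with a stable total ordering, as BTrees require.
@total_ordering
class EventKey:
    def __init__(self, event_id):
        self.event_id = event_id          # must be immutable in practice

    def __eq__(self, other):
        return self.event_id == other.event_id

    def __lt__(self, other):
        return self.event_id < other.event_id

    def __hash__(self):
        return hash(self.event_id)

# Sorting stands in for OOTreeSet insertion: a stable ordering exists.
members = sorted([EventKey(3), EventKey(1), EventKey(2)])
assert [k.event_id for k in members] == [1, 2, 3]
```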
On 6 July 2011 19:44, Jim Fulton j...@zope.com wrote:
We're evaluating AWS for some of our applications and I'm thinking of adding
some options to support using S3 to store Blobs:
1. Allow a storage in a ZEO storage server to store Blobs in S3.
This would probably be through some sort of
On 7 July 2011 16:55, Jim Fulton j...@zope.com wrote:
On Thu, Jul 7, 2011 at 10:49 AM, Laurence Rowe l...@lrowe.co.uk wrote:
...
One thing I found with my (rather naive) experiments building
s3storage a few years ago is that you need to ensure requests to S3
are made in parallel to get
On 9 May 2011 13:32, Hanno Schlichting ha...@hannosch.eu wrote:
On Mon, May 9, 2011 at 2:26 PM, Laurence Rowe l...@lrowe.co.uk wrote:
While looking at the Plone versioning code the other day, it struck me
that it would be much more efficient to implement file versioning if
we could rely
On 4 May 2011 10:53, Hanno Schlichting ha...@hannosch.eu wrote:
Hi.
I tried to analyze the overhead of changing content in Plone a bit. It
turns out we write back a lot of persistent objects to the database,
even though the actual values of these objects haven't changed.
Digging deeper I
On 24 February 2011 10:17, Chris Withers ch...@simplistix.co.uk wrote:
Hi Jim,
The current __exit__ for transaction managers looks like this:
def __exit__(self, t, v, tb):
    if v is None:
        self.commit()
    else:
        self.abort()
..which means that if
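A small stand-in class (invented for illustration, not the real transaction package) showing the commit-on-success, abort-on-exception semantics of that `__exit__`:

```python
# FakeTxn mimics only the context-manager behaviour quoted above.
class FakeTxn:
    def __init__(self):
        self.outcome = None

    def __enter__(self):
        return self

    def __exit__(self, t, v, tb):
        # Same logic as the quoted __exit__: commit on clean exit,
        # abort when an exception escapes the block.
        if v is None:
            self.outcome = 'commit'
        else:
            self.outcome = 'abort'
        # Returning a falsey value lets the exception propagate.

txn = FakeTxn()
with txn:
    pass
assert txn.outcome == 'commit'

txn = FakeTxn()
try:
    with txn:
        raise ValueError('boom')
except ValueError:
    pass
assert txn.outcome == 'abort'
```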
On 26 January 2011 21:57, Jürgen Herrmann juergen.herrm...@xlhost.de wrote:
is there a script or some example code to search for cross db
references?
i'm also eager to find out... for now i disabled my packing cronjobs.
Packing with garbage collection disabled (pack-gc = false) should
On 26 January 2011 23:11, Chris Withers ch...@simplistix.co.uk wrote:
On 26/01/2011 22:49, Laurence Rowe wrote:
On 26 January 2011 21:57, Jürgen Herrmann juergen.herrm...@xlhost.de
wrote:
is there a script or some example code to search for cross db
references?
i'm also eager to find
On 24 January 2011 21:28, Shane Hathaway sh...@hathawaymix.org wrote:
On 01/24/2011 02:02 PM, Anton Stonor wrote:
Hi there,
We have recently experienced a couple of PosKey errors with a Plone 4
site running RelStorage 1.4.1 and Mysql 5.1.
After digging down we found that the objects that
On 21 January 2011 20:57, Shane Hathaway sh...@hathawaymix.org wrote:
On 01/21/2011 10:46 AM, Chris Withers wrote:
I'm wondering what the recommended maintenance for these two types of
storage are that I use:
- keep-history=true, never want to lose any revisions
My guess is zodbpack with
I'm not very optimistic about this I'm afraid. First the problems with
using Plone:
* Plone relies heavily on its in ZODB indexes of all content
(portal_catalog). This means that every edit will change lots of
objects (without versioning ~15-20, most of which are in the
catalogue).
* At least
On 17 November 2010 16:34, Alan Runyan runy...@gmail.com wrote:
I have read that there is a problem to implement MS-SQL adapter for
Relstorage because the “Two phase commit” feature is not exposed by
MS-SQL server .
unsure about that. probably depends on the client access library.
At least
On 17 November 2010 17:05, Laurence Rowe l...@lrowe.co.uk wrote:
On 17 November 2010 16:34, Alan Runyan runy...@gmail.com wrote:
I have read that there is a problem to implement MS-SQL adapter for
Relstorage because the “Two phase commit” feature is not exposed by
MS-SQL server .
unsure
On 14 October 2010 01:28, Darryl Dixon - Winterhouse Consulting
darryl.di...@winterhouseconsulting.com wrote:
On 13/10/2010 15:23, Jim Fulton wrote:
You can connect to the monitor port in 3.9 and earlier,
if the monitor port is configured. In 3.10, the monitor server is
replaced by a ZEO
On 27 September 2010 18:26, Nathan Van Gheem vangh...@gmail.com wrote:
BTW, I thought I could just use the ZPublisherEventsBackup to abort
every transaction when zope is in read-only... Kind of hacky, but not
too bad :)
That sounds really evil, but I guess it should work...
plone.app.imaging
On 23 August 2010 17:51, Jim Fulton j...@zope.com wrote:
It's worth noting that these are not the docs. I didn't write or
review them. I don't have any control over zodb.org. I have no idea
how to comment on the docs. (I could possibly find out, but I don't have time
to work that hard.)
...
On 23 August 2010 19:08, Jim Fulton j...@zope.com wrote:
On Mon, Aug 23, 2010 at 1:08 PM, Laurence Rowe l...@lrowe.co.uk wrote:
On 23 August 2010 17:51, Jim Fulton j...@zope.com wrote:
It's worth noting that these are not the docs. I didn't write or
review them. I don't have any control over
On 16 August 2010 13:13, Tres Seaver tsea...@palladion.com wrote:
Hanno Schlichting wrote:
On Mon, Aug 16, 2010 at 12:14 PM, Pedro Ferreira
jose.pedro.ferre...@cern.ch wrote:
Could this be some problem with using persistent objects as keys in a BTree?
Some comparison problem?
I'm not
On 16 August 2010 17:29, Pedro Ferreira jose.pedro.ferre...@cern.ch wrote:
Consider using one
of these alternatives instead:
* Set the IOTreeSet as an attribute directly on the persistent object.
You mean on the persistent object I am using as key?
Yes.
* Use
On 28 June 2010 15:23, Nitro ni...@dr-code.org wrote:
On 28.06.2010 at 14:10, Dylan Jay d...@pretaweb.com wrote:
I don't use a lot of other indexes other than what comes with plone but
I can see the value of what you're suggesting in having an installable
tested collection of indexes. I can
On 28 June 2010 19:31, Nitro ni...@dr-code.org wrote:
On 28.06.2010 at 16:52, Laurence Rowe l...@lrowe.co.uk wrote:
So why don't we all work on the same packages? The main reason is one
of legacy. Plone is built on Zope2 and ZCatalog. It works, but it is
not without its issues - we can't
On 28 June 2010 21:27, Nitro ni...@dr-code.org wrote:
ZODB is a general python object database with a much wider audience than
just plone. It suits desktop applications just as well as applications
you'd normally use twisted and pickle for. Forcing all those zope
dependencies like buildout on
It really depends on what you are trying to achieve.
The simplest solution would probably be to use a geohash string within
an OOBTree.
If you need a full geospatial solution, postgis is featureful and easy
to use, and simple to integrate transactionally with ZODB.
Reinventing the wheel is
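As a sketch of the geohash-string-in-an-OOBTree idea above, here is a minimal standard geohash encoder, with a prefix range scan over a plain sorted list standing in for the OOBTree (the coordinates are arbitrary sample points):

```python
import bisect

_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=8):
    """Standard geohash: interleave longitude/latitude bisection bits,
    emitting one base-32 character per 5 bits."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    use_lon, bits, ch, out = True, 0, 0, []
    while len(out) < precision:
        rng, val = (lon_rng, lon) if use_lon else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        ch <<= 1
        if val >= mid:
            ch |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        use_lon = not use_lon
        bits += 1
        if bits == 5:
            out.append(_BASE32[ch])
            bits, ch = 0, 0
    return "".join(out)

# Nearby points share a key prefix, so a sorted key sequence (an OOBTree
# in ZODB; a sorted list here) answers proximity queries as a range scan.
points = [(51.5007, -0.1246), (51.5014, -0.1419), (40.6892, -74.0445)]
keys = sorted(geohash(lat, lon) for lat, lon in points)
prefix = geohash(51.5007, -0.1246)[:4]
lo = bisect.bisect_left(keys, prefix)
hi = bisect.bisect_right(keys, prefix + "z" * 8)
nearby = keys[lo:hi]
assert len(nearby) == 2        # the two London points; New York excluded
```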
for zope.sqlalchemy - when a large
number of savepoints are used, the eventual commit can lead to a
`RuntimeError: maximum recursion depth exceeded` in SQLAlchemy as it
attempts to unroll its nested subtransactions.
Laurence
On 17 January 2010 15:45, Laurence Rowe l...@lrowe.co.uk wrote:
2010/1/17 Jim
On 11 May 2010 15:08, Jim Fulton j...@zope.com wrote:
On Tue, May 11, 2010 at 8:38 AM, Benji York be...@zope.com wrote:
On Tue, May 11, 2010 at 7:34 AM, Jim Fulton j...@zope.com wrote:
[...] The best I've been
able to come up with is something like:
t = ZODB.transaction(3)
while
I think this means that you are storing all of your data in a single
persistent object, the database root PersistentMapping. You need to
break up your data into persistent objects (instances of objects that
inherit from persistent.Persistent) for the ZODB to have a chance of
performing memory
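To illustrate why this matters without importing ZODB: a non-persistent object is pickled inside its parent's record, so a single root mapping means the whole mapping is rewritten on every change, whereas persistent subobjects would each get their own small record. A rough size comparison (the `Item` class is invented):

```python
import pickle

class Item:
    """Stand-in for a non-persistent application object."""
    def __init__(self, n):
        self.n = n

# Everything hung off one plain mapping: ZODB would store this as a
# single record, re-pickled in full on any change to any item.
everything_in_root = {i: Item(i) for i in range(10000)}
whole_record = pickle.dumps(everything_in_root)

# If Item subclassed persistent.Persistent, each instance would get its
# own record, and a change would rewrite only something this big:
one_record = pickle.dumps(Item(42))

assert len(one_record) < len(whole_record) // 100
```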
On 10 May 2010 21:41, Jim Fulton j...@zope.com wrote:
A. Change transaction._transaction.AbortSavepoint to remove the
datamanager from the transactions resources (joined data managers)
when the savepoint is rolled back and abort called on the data
manager. Then, if the data manager
I think that moving to an LLTreeSet for the docset will significantly
reduce your memory usage. Non persistent objects are stored as part of
their parent persistent object's record. Each LOBTree object bucket
contains up to 60 (key, value) pairs. When the values are
non-persistent objects they are
I suspect that something like 90% of ZODB pickle data will be string
values, so the scope for reducing the space used by a ZODB through the
newer pickle protocol – and even the class registry – is limited.
What would make a significant impact on data size is compression. With
lots of short
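A quick stdlib demonstration of that point, on invented sample data full of short repetitive strings as is typical of ZODB object state: compression recovers far more space than a newer pickle protocol could.

```python
import pickle
import zlib

# Invented sample records: short, repetitive string values.
records = [{"title": "Document %d" % i, "state": "published",
            "creator": "admin"} for i in range(1000)]
raw = pickle.dumps(records)
packed = zlib.compress(raw)
assert len(packed) < len(raw) // 2   # compression dwarfs pickle-level savings
```

This is essentially what zc.zlibstorage does at the storage layer, transparently to the application.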
I've had this issue reported to me in the context of zope.sqlalchemy,
but have been unable to reproduce it. Others have also seen it, but as
far as I am aware have not been able to reproduce it:
http://www.mail-archive.com/pgsql-hack...@postgresql.org/msg146522.html
As there have now been three
On 17 April 2010 05:27, Jeff Shell j...@bottlerocket.net wrote:
We encountered a problem during an export/import in a Zope 3 based
application that resulted in something not being importable. This is from our
very first Zope 3 based application, and I stumbled across some very old
Running your test script on my small amazon EC2 instance on linux
takes between 0.0 and 0.04 seconds (I had to remove the divide by
total to avoid a zero division error). 0.02 is 5000/s.
Laurence
On 14 April 2010 00:25, Nitro ni...@dr-code.org wrote:
40 tps sounds low: are you pushing blob
A BTree does not keep track of its length. See BTrees.Length.Length:
http://apidoc.zope.org/++apidoc++/Code/BTrees/Length/Length/index.html
Laurence
On 8 April 2010 16:36, Leszek Syroka leszek.marek.syr...@cern.ch wrote:
Hi,
what is the fastest way of checking the number of elements in
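A minimal sketch of the BTrees.Length.Length pattern referenced above: keep a separate counter object next to the tree and adjust it on every insert or remove, instead of walking the tree to count. This stand-in class omits the real Length's persistence and conflict resolution, and a plain dict stands in for an OOBTree:

```python
# Simplified Length: the real BTrees.Length.Length is Persistent and
# resolves concurrent-increment conflicts via _p_resolveConflict.
class Length:
    def __init__(self, v=0):
        self.value = v

    def change(self, delta):
        self.value += delta

    def __call__(self):
        return self.value

tree = {}          # stands in for an OOBTree
size = Length()

def insert(key, value):
    if key not in tree:         # only count genuinely new keys
        size.change(1)
    tree[key] = value

insert('a', 1)
insert('b', 2)
insert('a', 3)                  # overwrite: count unchanged
assert size() == 2              # O(1), no tree traversal
```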
2010/1/17 Jim Fulton j...@zope.com:
On Sat, Jan 16, 2010 at 1:03 PM, Laurence Rowe l...@lrowe.co.uk wrote:
I've had a request to add savepoint release support to zope.sqlalchemy
as some databases seem to limit the number of savepoints in a
transaction.
I've added this in a branch
2009/12/20 Ross Boylan rossboy...@stanfordalumni.org:
easy_install ZODB3 looked fairly good during installation until the end:
quote
Processing transaction-1.0.0.tar.gz
Running transaction-1.0.0\setup.py -q bdist_egg --dist-dir
2009/12/20 Ross Boylan rossboy...@stanfordalumni.org:
The IPC10 presentation says
#Works as a side-effect of importing ZODB above
from Persistence import Persistent
I tried that (with the indicated other imports first). It led to a No
module error.
I tried commenting out the line, since
2009/12/17 Mikko Ohtamaa mi...@redinnovation.com:
Hi,
I need to have little clarification should properties work on
Persistent objects. I am running ZODB 3.8.4 on Plone 3.3.
I am using plone.behavior and adapters to retrofit objects with a new
behavior (HeaderBehavior object). This object
2009/12/9 Pedro Ferreira jose.pedro.ferre...@cern.ch:
Hello,
Just zodbbrowser with no prefix:
http://pypi.python.org/pypi/zodbbrowser
https://launchpad.net/zodbbrowser
It's a web-app: it can connect to your ZEO server so you can inspect the
DB while it's being used.
We tried this,
2009/12/7 Jose Benito Gonzalez Lopez jose.benito.gonza...@cern.ch:
Dear ZODB developers,
Since some time ago (not sure since when) our database
has passed from 15GB to 65GB so fast, and it keeps growing
little by little (2 to 5 GB per day). It is clear that something is not
correct in it.
2009/11/20 Chris Withers ch...@simplistix.co.uk:
Jim Fulton wrote:
On Thu, Nov 19, 2009 at 7:01 PM, Chris Withers ch...@simplistix.co.uk
wrote:
Jim Fulton wrote:
There's nothing official or supported about a backup solution without
automated tests.
So I guess there isn't one.
Right, so
2009/11/20 Jim Fulton j...@zope.com:
On Fri, Nov 20, 2009 at 9:32 AM, Chris Withers ch...@simplistix.co.uk wrote:
...
I'm not sure how much love repozo needs. It works, and it won't need
changing until FileStorage's format changes, which I don't see happening any
time soon.
It just occurred
2009/11/13 Martin Aspeli optilude+li...@gmail.com:
Hanno Schlichting wrote:
On Fri, Nov 13, 2009 at 5:40 PM, Jim Fulton j...@zope.com wrote:
On Fri, Nov 13, 2009 at 10:18 AM, Mikko Ohtamaa mi...@redinnovation.com
wrote:
Unfortunately the application having the issues is Plone 3.3. ZODB 3.9
This may help: http://plone.org/documentation/how-to/debug-zodb-bloat/
Laurence
Chris Withers wrote:
Hi All,
I have a filestorage being used by Zope 2 that is mysteriously growing.
I don't have confidence in the Undo tab, since this setup has two
storages, once mounted into the
2009/5/27 Chris Withers ch...@simplistix.co.uk:
Laurence Rowe wrote:
Jim Fulton wrote:
Well said. A feature I'd like to add is the ability to have persistent
objects that don't get their own database records, so that you can get the
benefit of having them track their changes without
Jim Fulton wrote:
On May 26, 2009, at 10:16 AM, Pedro Ferreira wrote:
In any case, it's not such a surprising number, since we have ~73141
event objects and ~344484 contribution objects, plus ~492016 resource
objects, and then each one of these may contain authors, and for sure
some
Jim Fulton wrote:
Well said. A feature I'd like to add is the ability to have persistent
objects that don't get their own database records, so that you can get
the benefit of having them track their changes without incurring the
expense of a separate database object.
+lots
Hanno
A few weeks ago I converted the ZODB/ZEO Programming Guide and a few
more articles into structured text and added them to the zope2docs
buildout. I've now moved them to their own buildout in
svn+ssh://svn.zope.org/repos/main/zodbdocs/trunk and they will soon
appear at http://docs.zope.org/zodb
Andreas Jung wrote:
On 26.05.09 19:08, Andreas Jung wrote:
On 26.05.09 18:54, Laurence Rowe wrote:
A few weeks ago I converted the ZODB/ZEO Programming Guide and a few
more articles into structured text and added them to the zope2docs
buildout. I've now moved them to their own buildout
Jim Fulton wrote:
On May 26, 2009, at 12:54 PM, Laurence Rowe wrote:
A few weeks ago I converted the ZODB/ZEO Programming Guide and a few
more articles into structured text and added them to the zope2docs
buildout. I've now moved them to their own buildout in
svn+ssh://svn.zope.org/repos
Pedro Ferreira wrote:
Dear all,
Thanks a lot for your help. In fact, it was a matter of increasing the
maximum recursion limit.
There's still an unsolved issue, though. Each time we try to recover a
backup using repozo, we get a CRC error. Is this normal? Has it happened
to anyone?
I
Christian Theune wrote:
Hi,
On Tue, 2009-04-28 at 13:54 -0400, Jim Fulton wrote:
Thanks again!
(Note to everyone else, Shane and I discussed this on IRC, along with
another alternative that I'll mention below.)
I like version 2 better than version 1. I'd be inclined to simplify
and
For Plone, the standard remedy to this problem is to separate out
portal_catalog into its own storage (zeo has support for serving
multiple storages). You may then control the object cache size per
storage, setting the one for the portal_catalog storage large enough to
keep all its objects
Shane Hathaway wrote:
I should note that this KeyError occurs while trying to report on a
KeyError. I need to fix that. Fortunately, the same error pops out anyway.
There's a fix for this in the Jarn branch. Note that to collect more
interesting data it rolls back the load connection at
eastxing wrote:
Hi,
I am using Plone2.5.5 with Zope2.9.8-final and ZODB3.6.2. Now my Data.fs
size is nearly 26G with almost 140k Plone objects and more than 4100k
zope objects in the database. Since 2 months ago, I could not pack my
database successfully. In recent days I tried to pack it
Broken objects occur when the class for a pickled object cannot be
imported. To change the location of a class, you need to provide an
alias at the old location so that the object can be unpickled, i.e.
MyOldClassName = MyNewClassName. You can only remove MyOldClassName
after you have updated
Shane Hathaway wrote:
Benjamin Liles wrote:
Currently at the Plone conference it seems that a large number of people
are beginning to host their Plone sites on the Amazon EC2 service. A
simpleDB adapter might be a good way to provide persistent storage for
an EC2 base Zope instance. Has
Leonardo Santagada wrote:
On Oct 4, 2008, at 12:36 PM, Wichert Akkerman wrote:
Adam wrote:
Thanks for that, guys, I've not used a mailing list like this
before so
unsure how to respond.
If ZODB stores the Package.Module.Class name in the pickle would it
be
possible for me to simply
Izak Burger-2 wrote:
Dieter Maurer wrote:
This is standard behaviour with long running processes on
a system without memory compaction:
Of course, I remember now, there was something about that in my
Operating Systems course ten years ago :-) I suppose the bigger page
sizes used on
Andreas Jung wrote:
--On 22 June 2008 08:49:32 -0700 tsmiller [EMAIL PROTECTED]
wrote:
Gary,
I have been using the ZODB for about a year and a half with a bookstore
application. I am just now about ready to put it out on the internet for
people to use. I have had the same problem with
`records` by ZODB. Other objects do not have a _p_oid attribute and
have to be saved as part of their parent record.
Laurence
2008/6/19 [EMAIL PROTECTED]:
Laurence Rowe wrote:
[EMAIL PROTECTED] wrote:
Does your record class inherit from persistent.Persistent? 650k integers +
object pointers should
tsmiller wrote:
I have a bookstore that uses the ZODB as its storage. It uses qooxdoo as
the client and CherryPy for the server. The server has a 'saveBookById'
routine that works 'most' of the time. However, sometimes the
transaction.commit() does NOT commit the changes and when I restart
PGStorage does require packing currently, but it would be fairly trivial
to change it to only store single revisions. Postgres would still ensure
mvcc. Then you just need to make sure postgres auto-vacuum daemon is
running.
Laurence
David Pratt wrote:
Yes, Shane had done some benchmarking
Matt Hamilton wrote:
David Binger dbinger at mems-exchange.org writes:
On Nov 2, 2007, at 6:20 AM, Lennart Regebro wrote:
Lots of people don't do nightly packs, I'm pretty sure such a process
needs to be completely automatic. The question is whether doing it in
a separate process in the
It looks like ZODB performance in your test has the same O(log n)
performance as PostgreSQL checkpoints (the periodic drops in your
graph). This should come as no surprise. B-Trees have a theoretical
Search/Insert/Delete time complexity equal to the height of the tree,
which is (up to) log(n).
Christian Theune wrote:
[snip]
We imagine we need two kinds of components to make this work:
1. A query processor that could look like:
class IQueryProcessor(Interface):
    def query(...):
        Returns a list of matching objects. The parameters are
        specific to the query
Chris,
I think you're looking at forward references when you want to look at
back references.
This might help: http://plone.org/documentation/how-to/debug-zodb-bloat
(you might have to change the refmap to be in a zodb with that much data
though)
Laurence
Chris Withers wrote:
Hi All,
Hi,
Several people have made SQLalchemy integrations recently. SQLAlchemy
does not support Two Phase Commit (2PC) so correctly tying it in with
zope's transactions is tricky. With multiple One Phase Commit (1PC)
DataManagers the problem is of course intractable, but given the
popularity of
You need to provide the full traceback so we can tell where it is coming
from.
My guess (though I'm surprised by the particular error) is that you have
perhaps got content owned by users in a user folder outside the site
that is no longer accessible when you mount the database on its own. If
Jim Fulton wrote:
[snip]
I wasn't asking about implementation.
Here are some questions:
- Should this create a new FileStorage? Or should it modify the existing
FileStorage in place?
Probably create a new one (analogous to a pack). Seems safer than
truncating to me.
- Should this work
I'm sure you're probably aware of these, but I thought I'd file this
summary while they were in my head.
There is no history-less FileStorage. It is essentially a transaction log.
Directory Storage has Minimal.py which is history-less, very simple
though it is not proven in production. Could