That's a good thing to know; I will do some testing of my own...
Thanks
Flávio
On Jan 23, 2008 2:36 AM, David Pratt [EMAIL PROTECTED] wrote:
Yes, Shane had done some benchmarking about a year or so ago. PGStorage
was actually faster with small writes but slower for larger ones. As far
as
Jim Fulton wrote:
Chris McDonough did the transaction split off. He's probably the best one
to answer your other questions.
I know, but then he's subscribed to this list afaik, so I'll just wait for
him to respond.
If the tests pass without it, then I think it is a safe bet that it can.
:)
On Jan 22, 2008, at 8:44 PM, Marius Gedminas wrote:
On Tue, Jan 22, 2008 at 05:43:42PM -0200, Flavio Coelho wrote:
Actually what I am trying to run away from is the packing
monster ;-)
I want to be able to use an OO database without the inconvenience
of having it grow out of control.
Jim,
What would you consider a large active database?
curiously,
alan
___
For more information about ZODB, see the ZODB Wiki:
http://www.zope.org/Wikis/ZODB/
ZODB-Dev mailing list - ZODB-Dev@zope.org
http://mail.zope.org/mailman/listinfo/zodb-dev
Flavio Coelho wrote:
Actually what I am trying to run away from is the packing monster ;-)
Jim has done a great deal of work on packing (which will go into 3.9, I
presume) that should make your pack 3 to 6 times faster (depending on
whether you do garbage collection at pack time or not).
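To make the discussion concrete, here is a toy sketch of what packing means; this is not ZODB's actual implementation, and the record layout is invented for illustration: for each object, keep only the newest revision at or before the pack time, while everything newer survives untouched.

```python
def pack(records, pack_time):
    """Toy pack: ``records`` is a list of (oid, tid, data) tuples.

    Keep every revision newer than pack_time, plus the newest
    revision at or before pack_time for each oid; drop the rest.
    (Garbage collection of unreachable objects, which the real
    pack can also do, is omitted here.)
    """
    latest = {}  # oid -> newest tid at or before pack_time
    for oid, tid, _ in records:
        if tid <= pack_time and tid > latest.get(oid, -1):
            latest[oid] = tid
    return [r for r in records
            if r[1] > pack_time or latest.get(r[0]) == r[1]]

history = [(1, 1, 'a'), (1, 2, 'b'), (2, 1, 'x'), (1, 3, 'c')]
packed = pack(history, pack_time=2)
# the superseded revision (1, 1, 'a') is dropped; the rest survive
```

This is only the bookkeeping; the expensive part in a real FileStorage pack is rewriting the data file, which is what the optimizations discussed here speed up.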
PGStorage does require packing currently, but it would be fairly trivial
to change it to store only single revisions. Postgres would still ensure
MVCC. Then you just need to make sure the Postgres auto-vacuum daemon is
running.
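For reference, a minimal postgresql.conf sketch for that last point; these are standard Postgres parameters, but the values shown are placeholders:

```
# postgresql.conf -- autovacuum must be on so that obsolete row
# versions left behind by MVCC are reclaimed automatically
autovacuum = on
autovacuum_naptime = 60        # seconds between autovacuum runs
```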
Laurence
--On 22. Januar 2008 21:17:45 -0500 Stephan Richter
[EMAIL PROTECTED] wrote:
On Tuesday 22 January 2008, Dieter Maurer wrote:
OracleStorage was abandoned because it was almost an order
of magnitude slower than FileStorage.
Actually, Lovely Systems uses PGStorage because it is faster for
Andreas,
Could you try to mount the catalog separately? My understanding is
that the catalog is what makes storages a misery.
CMF/portal_catalog could be mounted as FileStorage,
and CMF could be mounted as PGStorage.
I presume you would see much more reasonable performance?
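For what it's worth, mounting the catalog into its own storage is done with additional `<zodb_db>` sections in zope.conf; a rough sketch follows, where the paths and database names are made up, and a real PGStorage section would use that storage's own config schema rather than `<filestorage>`:

```
<zodb_db main>
    mount-point /
    <filestorage>
        path $INSTANCE/var/Data.fs
    </filestorage>
</zodb_db>

<zodb_db catalog>
    # the catalog subtree lives in its own storage
    mount-point /Plone/portal_catalog
    <filestorage>
        path $INSTANCE/var/catalog.fs
    </filestorage>
</zodb_db>
```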
cheers
--
Alan Runyan
Enfold
Thomas Lotze wrote:
Jim Fulton wrote:
Chris McDonough did the transaction split off. He's probably the best one
to answer your other questions.
I know, but then he's subscribed to this list afaik, so I'll just wait for
him to respond.
I'm afraid I can't look at this right away but I'll put
Likely, but I don't know how to set up my instance so that the
copied instance makes use of the mounted catalog.
What about creating a Plone site with a FileStorage.
Then mount a subfolder into PGStorage say
/Plone/some_folder
then you could create a folder /Plone/new_folder and dump
--On 23. Januar 2008 13:32:29 -0600 Alan Runyan [EMAIL PROTECTED] wrote:
Flavio Coelho wrote at 2008-1-22 17:43 -0200:
...
Actually what I am trying to run away from is the packing monster ;-)
Jim has optimized pack considerably (see zc.FileStorage).
I, too, have worked on pack optimization the last few days (we
cannot yet use Jim's work because we are using ZODB 3.4
On Jan 23, 2008 6:45 PM, Dieter Maurer [EMAIL PROTECTED] wrote:
On Mon, Jan 21, 2008 at 07:15:42PM +0100, Dieter Maurer wrote:
Marius Gedminas wrote at 2008-1-21 00:08 +0200:
Personally, I'd be afraid to use deepcopy on a persistent object.
A deepcopy is likely to be no copy at all.
As Python's deepcopy does not know about object ids, it is likely
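As a side note, deepcopy's behavior is driven entirely by the copy hooks a class defines, so a class can trivially make deepcopy hand back the original object. The toy class below is invented for illustration (it is not ZODB code) and only mimics the "no copy at all" effect being described:

```python
import copy

class RefLike:
    """Toy class that copies itself by reference, loosely imitating
    an object that serializes as an id rather than by value."""
    def __init__(self, oid):
        self.oid = oid

    def __deepcopy__(self, memo):
        # deepcopy consults this hook and gets the original back
        return self

original = RefLike(oid=42)
clone = copy.deepcopy(original)
# the "copy" is the very same object
assert clone is original
```

With real persistent objects the failure mode is subtler, since deepcopy knows nothing about ZODB object ids, but the lesson is the same: don't assume deepcopy gives you an independent object graph.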