Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Dieter Maurer
Alan Runyan wrote at 2008-1-23 13:32 -0600:
> ...
>each record in a catalog may have an object for each index/metadata attribute
>you are capturing.  and possibly a few others.  Each catalog entry contains
>lots of objects per object being indexed.  That is my understanding.

It does not completely fit reality.

There is no correspondence from catalogued objects to a single
or even many persistent objects in the catalog.

The catalog maintains a (non persistent) metadata record for
each catalogued object -- a single one, for all metadata fields.
These records (they are tuples) are maintained in "IOBucket"s.
An "IOBucket" can have up to 60 entries.

Each index maintains some information for a catalogued object --
but not in individual (object specific) objects. Instead
persistent objects (such as "IITreeSet|IOBTree|OIBTree"s) are
used to combine the information about many (about 40 to 120) objects.
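
A rough way to picture the amortization Dieter describes, in plain Python (illustrative only -- the real containers are BTrees/Buckets from the BTrees package; `bucketize` and the record layout here are made up):

```python
# Sketch: metadata tuples for many catalogued objects share one
# bucket-sized container, so one persistent object in the storage
# covers up to BUCKET_SIZE records rather than one record each.
BUCKET_SIZE = 60  # an IOBucket holds up to 60 entries

def bucketize(records):
    """Group (rid, metadata_tuple) pairs into bucket-sized chunks."""
    items = sorted(records.items())
    return [dict(items[i:i + BUCKET_SIZE])
            for i in range(0, len(items), BUCKET_SIZE)]

records = {rid: ("title-%d" % rid, "path/%d" % rid) for rid in range(150)}
buckets = bucketize(records)
print(len(buckets))     # 150 records at 60 per bucket -> 3 buckets
print(len(buckets[0]))  # 60
```

So 150 catalogued objects cost only three persistent "objects" at the storage level, which is why there is no per-document persistent record.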



-- 
Dieter
___
For more information about ZODB, see the ZODB Wiki:
http://www.zope.org/Wikis/ZODB/

ZODB-Dev mailing list  -  ZODB-Dev@zope.org
http://mail.zope.org/mailman/listinfo/zodb-dev


Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Dieter Maurer
Alan Runyan wrote at 2008-1-23 11:24 -0600:
> ...
>My understanding is
>the catalog is what makes storages a misery.

I do not think that this is true.

It is just that the catalog often contains lots of objects -- and maybe
some of them with not-so-good persistency design.

>CMF/portal_catalog mounted as Filestorage
>and CMF could be mounted as PGStorage.
>
>I presume you would see much more reasonable performance?

At the storage level, all objects look identical: a pair of pickles.
Differences are only in the pickle sizes and in the access frequency...
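
A minimal sketch of that "pair of pickles" idea (simplified; the actual ZODB record format differs in detail, and the names here are invented):

```python
import pickle

# A data record's payload is essentially two pickles: one identifying
# the object's class, one holding its state.  The storage never looks
# inside either -- it just sees bytes.
class Doc(object):
    def __init__(self, title):
        self.title = title

obj = Doc("hello")
class_pickle = pickle.dumps((obj.__class__.__module__,
                             obj.__class__.__name__))
state_pickle = pickle.dumps(obj.__dict__)

# From the storage's point of view a catalog bucket and a Plone page
# look alike; only sizes and access frequency differ.
module, name = pickle.loads(class_pickle)
state = pickle.loads(state_pickle)
print(name, state)
```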



-- 
Dieter


Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Zvezdan Petkovic

On Jan 23, 2008, at 4:05 PM, Flavio Coelho wrote:

BTW: stay on the list. I do not like personal emails.

sorry, I never meant to email you personally; this is due to the
default configuration of this list, which sets the reply address to the
poster instead of to the list... So I may have been too quick with the
send button on a reply.



I know this is off the thread topic, but it deserves an explanation.
The above expectation is a misconception.  The list should *not* set a  
Reply-To field.  There are SMTP header fields specific to the list  
such as


Precedence: list
List-Post:  

Your mail user agent is responsible for honoring those header fields.
For example, mutt has Reply, Reply All, and List Reply options.
The last one is suitable for the mailing list communications.
The first one is good for a private reply to the author.

Sadly, the "modern" flashy email agents worry more about HTML
presentation than about providing a "List Reply" option.


Speaking of HTML, it is also preferred to send plain text messages to
the mailing lists, instead of multipart/mixed HTML/text or just HTML.
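
What a "List Reply" does can be sketched with the standard library's email module (the header values below are invented for illustration; a real client would also consult List-ID and friends):

```python
from email.message import Message

# List Reply prefers the list's List-Post header over the author's From.
msg = Message()
msg["From"] = "author@example.org"
msg["List-Post"] = "<mailto:zodb-dev@zope.org>"

def list_reply_target(message):
    """Return the list address if advertised, else fall back to a
    private reply to the author."""
    list_post = message.get("List-Post", "")
    if list_post.startswith("<mailto:") and list_post.endswith(">"):
        return list_post[len("<mailto:"):-1]
    return message["From"]

print(list_reply_target(msg))  # zodb-dev@zope.org
```

With no List-Post header the function falls back to the From address, which is exactly the behavior of a plain "Reply".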


--
Zvezdan Petkovic <[EMAIL PROTECTED]>



Re: [ZODB-Dev] Writing Persistent Class

2008-01-23 Thread Marius Gedminas
On Mon, Jan 21, 2008 at 07:15:42PM +0100, Dieter Maurer wrote:
> Marius Gedminas wrote at 2008-1-21 00:08 +0200:
> >Personally, I'd be afraid to use deepcopy on a persistent object.
> 
> A deepcopy is likely to be no copy at all.
> 
>   As Python's "deepcopy" does not know about object ids, it is likely
>   that the copy result uses the same oids as the original.
>   When you store this copy, objects with the same oid are identified.

This appears not to be the case.  The following script prints "Looks OK":


#!/usr/bin/python
import copy
import transaction
from persistent import Persistent
from ZODB.DB import DB
from ZODB.MappingStorage import MappingStorage

class SampleObject(Persistent):
    pass

db = DB(MappingStorage())
conn = db.open()
conn.root()['obj1'] = SampleObject()
transaction.commit()

# Use a different connection to sidestep the ZODB object cache
conn2 = db.open()
conn2.root()['obj2'] = copy.deepcopy(conn2.root()['obj1'])
transaction.commit()

conn3 = db.open()
obj1 = conn3.root()['obj1']
obj2 = conn3.root()['obj2']
transaction.commit()

if obj1 is obj2:
    print "Fail: instead of a copy we got the same object"
elif obj1._p_oid == obj2._p_oid:
    print "Fail: copy has the same oid"
else:
    print "Looks OK."


Marius Gedminas
-- 
I used to think I was indecisive, but now I'm not so sure.




Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Flavio Coelho
On Jan 23, 2008 6:45 PM, Dieter Maurer <[EMAIL PROTECTED]> wrote:

> Flavio Coelho wrote at 2008-1-22 17:43 -0200:
> > ...
> >Actually what I am trying to run away from is the "packing monster" ;-)
>
> Jim has optimized pack considerably (--> "zc.FileStorage").
>
> I, too, have worked on pack optimization over the last few days (we
> cannot yet use Jim's work because we are using ZODB 3.4 while
> Jim's optimization is for ZODB 3.8) and obtained speedups of
> more than 80 percent.
>
> >I want to be able to use an OO database without the inconvenience of
> >having it growing out of control and then having to spend hours packing
> >the database every once in a while. (I do a lot of writes in my DBs).
> >Does this Holy Grail of databases exist? :-)
>
> The pack equivalent of Postgres is called "vacuum full".
> It is more disruptive than packing.
>
>
> Maybe you could have a look at the old "bsddbstorage".
> It could be configured to not use historical data.
> Support was discontinued due to lack of interest --
> but this is the second time within a week or so that I have
> mentioned it. This may indicate a renewed interest.


Thanks I will look at it.

>
>
>
> BTW: stay on the list. I do not like personal emails.


sorry, I never meant to email you personally; this is due to the default
configuration of this list, which sets the reply address to the poster
instead of to the list... So I may have been too quick with the send
button on a reply.

>
>
>
>
> --
> Dieter
>



-- 
Flávio Codeço Coelho

"My grandfather once told me that there were two kinds of people: those who
do the work and those who take the credit. He told me to try to be in the
first group; there was much less competition."
Indira Gandhi

registered Linux user # 386432
get counted at http://counter.li.org



Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Dieter Maurer
Flavio Coelho wrote at 2008-1-22 17:43 -0200:
> ...
>Actually what I am trying to run away from is the "packing monster" ;-)

Jim has optimized pack considerably (--> "zc.FileStorage").

I, too, have worked on pack optimization over the last few days (we
cannot yet use Jim's work because we are using ZODB 3.4 while
Jim's optimization is for ZODB 3.8) and obtained speedups of
more than 80 percent.

>I want to be able to use an OO database without the inconvenience of having
>it growing out of control and then having to spend hours packing the
>database every once in a while. (I do a lot of writes in my DBs). Does this
>Holy Grail of databases exist? :-)

The pack equivalent of Postgres is called "vacuum full".
It is more disruptive than packing.


Maybe you could have a look at the old "bsddbstorage".
It could be configured to not use historical data.
Support was discontinued due to lack of interest --
but this is the second time within a week or so that I have
mentioned it. This may indicate a renewed interest.


BTW: stay on the list. I do not like personal emails.



-- 
Dieter


Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Andreas Jung



--On 23 January 2008 13:32:29 -0600 Alan Runyan <[EMAIL PROTECTED]> wrote:


Likely, but I don't know how to set up my instance in order to make the
copied instance use the mounted catalog.


What about creating a Plone site with a FileStorage,
then mounting a subfolder into PGStorage, say
/Plone/some_folder.

Then you could create a folder /Plone/new_folder, dump a bunch of
content into it, paste it into /Plone/foo_folder and into
/Plone/some_folder, and see the difference?


If there is a benefit..what would be arguments for running a mixed setup?


My understanding is that each object in a catalog (Plone has 3 catalogs)
has numerous other objects associated with it.  Something like:

each record in a catalog may have an object for each index/metadata
attribute you are capturing, and possibly a few others.  Each catalog
entry contains lots of objects per object being indexed.  That is my
understanding.



That would be an artificial test. You basically want to keep your data in
one storage type -- in this case either FileStorage or PGStorage. Why mix?
For an application like Plone the catalog is as important as the data. You
don't want to lose data in production. Reindexing can be very expensive
and possibly causes longer downtime. On the other hand, you don't want to
run two different storage types at the same time -- one more point of
failure. Copy and paste of a whole site is likely not the common use case,
but burst writes, as in this particular case, appear to be very slow with
PGStorage.


Andreas




Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Alan Runyan
> Likely, but I don't know how to set up my instance in order to make the
> copied instance use the mounted catalog.

What about creating a Plone site with a FileStorage,
then mounting a subfolder into PGStorage, say
/Plone/some_folder.

Then you could create a folder /Plone/new_folder, dump a bunch of content
into it, paste it into /Plone/foo_folder and into /Plone/some_folder, and
see the difference?

> If there is a benefit..what would be arguments for running a mixed setup?

My understanding is that each object in a catalog (Plone has 3 catalogs) has
numerous other objects associated with it.  Something like:

each record in a catalog may have an object for each index/metadata attribute
you are capturing, and possibly a few others.  Each catalog entry contains
lots of objects per object being indexed.  That is my understanding.


-- 
Alan Runyan
Enfold Systems, Inc.
http://www.enfoldsystems.com/
phone: +1.713.942.2377x111
fax: +1.832.201.8856


Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Andreas Jung



--On 23 January 2008 11:24:28 -0600 Alan Runyan <[EMAIL PROTECTED]> wrote:


Andreas,

Could you try to mount the catalog separately?  My understanding is
the catalog is what makes storages a misery.


Likely, but I don't know how to set up my instance in order to make the
copied instance use the mounted catalog.


CMF/portal_catalog mounted as Filestorage
and CMF could be mounted as PGStorage.

I presume you would see much more reasonable performance?



If there is a benefit..what would be arguments for running a mixed setup?

Andreas



Re: [ZODB-Dev] PGStorage

2008-01-23 Thread David Pratt

Graphical test comparison (open office format)
http://pgstorage.cvs.sourceforge.net/pgstorage/PGStorage/tests/comparison.ods

Regards,
David

Alan Runyan wrote:

Andreas,

Could you try to mount the catalog separately?  My understanding is
the catalog is what makes storages a misery.

CMF/portal_catalog mounted as Filestorage
and CMF could be mounted as PGStorage.

I presume you would see much more reasonable performance?

cheers




Re: [ZODB-Dev] Re: Duplicate tests

2008-01-23 Thread Chris McDonough

Thomas Lotze wrote:

Jim Fulton wrote:


Chris McDonough did the transaction split off. He's probably the best one
to answer your other questions.


I know, but then he's subscribed to this list afaik, so I'll just wait for
him to respond.


I'm afraid I can't look at this right away but I'll put it on the list of things 
to do.


- C





If the tests pass without it, then I think it is a safe bet that it can.
:)


And gone it is.





Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Alan Runyan
Andreas,

Could you try to mount the catalog separately?  My understanding is
the catalog is what makes storages a misery.

CMF/portal_catalog mounted as Filestorage
and CMF could be mounted as PGStorage.

I presume you would see much more reasonable performance?

cheers

-- 
Alan Runyan
Enfold Systems, Inc.
http://www.enfoldsystems.com/
phone: +1.713.942.2377x111
fax: +1.832.201.8856


Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Andreas Jung



--On 22 January 2008 21:17:45 -0500 Stephan Richter
<[EMAIL PROTECTED]> wrote:



On Tuesday 22 January 2008, Dieter Maurer wrote:

"OracleStorage" was abandoned because it was almost an order
of magnitude slower than "FileStorage".


Actually, Lovely Systems uses PGStorage because it is faster for them.



It would be interesting to know where PGS is faster than FileStorage. OK,
I just made a simple benchmark to test write performance: I created a
Plone site and then created a copy using copy/paste within the ZMI.

(AMD Dualcore 2.6 GHz, Postgres 7.4.7 running on dedicated DB server):

Copy&Paste using Filestorage: 3-4 seconds
Copy&Paste using PGStorage: 30-40 seconds

I doubt that one could significantly optimize the write performance of
PGStorage. DCOracleStorage had similarly bad performance compared to
FileStorage (10 times slower, as far as I can remember).


Andreas



[ZODB-Dev] Re: PGStorage

2008-01-23 Thread Laurence Rowe
PGStorage does require packing currently, but it would be fairly trivial
to change it to only store single revisions. Postgres would still ensure
MVCC. Then you just need to make sure the Postgres auto-vacuum daemon is
running.
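
The single-revision idea can be sketched in a few lines of plain Python (not PGStorage code; the class and its methods are invented for illustration):

```python
# A store that keeps only the newest revision per oid never accumulates
# garbage from overwritten states, so it never needs packing to reclaim
# old revisions -- at the cost of losing undo/history.
class SingleRevisionStore(object):
    def __init__(self):
        self._current = {}  # oid -> (tid, data)

    def store(self, oid, tid, data):
        # Overwrite in place: the previous revision is simply dropped.
        self._current[oid] = (tid, data)

    def load(self, oid):
        return self._current[oid]

s = SingleRevisionStore()
s.store(b"\0" * 8, 1, b"state-v1")
s.store(b"\0" * 8, 2, b"state-v2")
print(s.load(b"\0" * 8))  # (2, b'state-v2') -- only one revision kept
```

In the Postgres-backed case the dropped row versions are what auto-vacuum reclaims, and Postgres's own MVCC provides the consistent reads that multiple revisions otherwise supply.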


Laurence

David Pratt wrote:
Yes, Shane did some benchmarking about a year or so ago. PGStorage
was actually faster for small writes but slower for larger ones. As for
packing: as a ZODB implementation, packing is still required to reduce
the size of the data in Postgres. BTW Stephan, where is Lovely using
it -- a site example? I had read some time ago that they were exploring
it but not that it was being used.


Regards,
David

Stephan Richter wrote:

On Tuesday 22 January 2008, Dieter Maurer wrote:

"OracleStorage" was abandoned because it was almost an order
of magnitude slower than "FileStorage".


Actually, Lovely Systems uses PGStorage because it is faster for them.

Regards,
Stephan






Re: [ZODB-Dev] PGStorage

2008-01-23 Thread David Pratt

Cool.

Jim Fulton wrote:
Berkeley DB storage didn't work out the first time around.  My
experience optimizing packing for FileStorage reminded me how much I
want an alternative to FileStorage for large active databases.  I intend
to revisit Berkeley DB storage one of these days.



Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Jim Fulton


On Jan 23, 2008, at 9:59 AM, Benji York wrote:


Flavio Coelho wrote:
Actually what I am trying to run away from is the "packing monster" ;-)


Jim has done a great deal of work on packing (that will go into 3.9,
I presume)


and is available now in zc.FileStorage.

that should make your pack 3 to 6 times faster (depending on whether
you do garbage collection at pack time or not).


And consumes twice as much memory, depending on your settings.

For testing, I used a 20G database containing catalog data whose size
was cut in half by packing.  It used around 900MB of memory for
packing with garbage collection. :(  Without GC, it used much less.


Another major benefit of my new packing code is that it does most of  
the work in a separate process, which allows it to take advantage of  
multiple processors.


I want to be able to use an OO database without the inconvenience of
having it growing out of control and then having to spend hours packing
the database every once in a while. (I do a lot of writes in my DBs).
Does this Holy Grail of databases exist? :-)


Why not put the pack in cron?


My new packing code helps a lot, but packing is still very disruptive.

IMO, something that packed incrementally, with disk being freed along  
the way, would be a big improvement. This isn't possible with  
FileStorage.


Jim

--
Jim Fulton
Zope Corporation




Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Jim Fulton


On Jan 23, 2008, at 9:44 AM, Alan Runyan wrote:


Jim,

What would you consider a "large active database"?



I don't have specific metrics.  Generally, databases whose daily growth
is measured in gigabytes and whose size is significantly reduced by
packing are what I have in mind.


Jim

--
Jim Fulton
Zope Corporation




Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Benji York

Flavio Coelho wrote:

Actually what I am trying to run away from is the "packing monster" ;-)


Jim has done a great deal of work on packing (that will go into 3.9, I
presume) that should make your pack 3 to 6 times faster (depending on
whether you do garbage collection at pack time or not).



I want to be able to use an OO database without the inconvenience of having
it growing out of control and then having to spend hours packing the
database every once in a while. (I do a lot of writes in my DBs). Does this
Holy Grail of databases exist? :-)


Why not put the pack in cron?
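
A minimal sketch of that suggestion (the interpreter path, script name, and `--days` option below are hypothetical -- any script that opens the database and packs it would do):

```shell
# Hypothetical crontab entry: pack the database every night at 03:30,
# keeping one day of history.  Adjust the schedule to a low-traffic
# window, since packing is disruptive while it runs.
30 3 * * *  /usr/local/bin/python /opt/zope/bin/pack_db.py --days 1
```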
--
Benji York
Senior Software Engineer
Zope Corporation


Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Alan Runyan
Jim,

What would you consider a "large active database"?

curiously,
alan


Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Jim Fulton


On Jan 22, 2008, at 8:44 PM, Marius Gedminas wrote:


On Tue, Jan 22, 2008 at 05:43:42PM -0200, Flavio Coelho wrote:
Actually what I am trying to run away from is the "packing monster" ;-)

I want to be able to use an OO database without the inconvenience of
having it growing out of control and then having to spend hours packing
the database every once in a while. (I do a lot of writes in my DBs).
Does this Holy Grail of databases exist? :-)


I've learned to love this aspect of FileStorage.  Sure, the Data.fs is
measured in gigabytes, but when the users run to you crying "help! help!
why is this bit of data not what I expected it to be?" you can dig into
object history from a debugzope console and figure out what changed it
and when.  Or restore deleted data, for that matter.



We can have our cake and eat it too.  We can keep multiple revisions  
and pack them incrementally.  Keeping multiple revisions is almost  
necessary, because without doing so, MVCC isn't effective.


The old Berkeley DB storage did this.  It automatically packed records  
older than a configurable time.


Berkeley DB storage didn't work out the first time around.  My
experience optimizing packing for FileStorage reminded me how much I
want an alternative to FileStorage for large active databases.  I
intend to revisit Berkeley DB storage one of these days.


Jim

--
Jim Fulton
Zope Corporation




[ZODB-Dev] Re: Duplicate tests

2008-01-23 Thread Thomas Lotze
Jim Fulton wrote:

> Chris McDonough did the transaction split off. He's probably the best one
> to answer your other questions.

I know, but then he's subscribed to this list afaik, so I'll just wait for
him to respond.

> If the tests pass without it, then I think it is a safe bet that it can.
> :)

And gone it is.

-- 
Thomas





Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Flavio Coelho
That's good to know; I will do some testing of my own...

Thanks

Flávio

On Jan 23, 2008 2:36 AM, David Pratt <[EMAIL PROTECTED]> wrote:

> Yes, Shane did some benchmarking about a year or so ago. PGStorage
> was actually faster for small writes but slower for larger ones. As for
> packing: as a ZODB implementation, packing is still required to
> reduce the size of the data in Postgres. BTW Stephan, where is Lovely
> using it -- a site example? I had read some time ago that they were
> exploring it but not that it was being used.
>
> Regards,
> David
>
> Stephan Richter wrote:
> > On Tuesday 22 January 2008, Dieter Maurer wrote:
> >> "OracleStorage" was abandoned because it was almost an order
> >> of magnitude slower than "FileStorage".
> >
> > Actually, Lovely Systems uses PGStorage because it is faster for them.
> >
> > Regards,
> > Stephan



-- 
Flávio Codeço Coelho

"My grandfather once told me that there were two kinds of people: those who
do the work and those who take the credit. He told me to try to be in the
first group; there was much less competition."
Indira Gandhi

registered Linux user # 386432
get counted at http://counter.li.org



Re: [ZODB-Dev] PGStorage

2008-01-23 Thread Flavio Coelho
Then perhaps I should rephrase my question:

Is there a way, by means of configuration, to reduce the size of the
history, and thereby the cost of packing?

thanks,

Flávio

On Jan 22, 2008 11:44 PM, Marius Gedminas <[EMAIL PROTECTED]> wrote:

> On Tue, Jan 22, 2008 at 05:43:42PM -0200, Flavio Coelho wrote:
> > Actually what I am trying to run away from is the "packing monster" ;-)
> >
> > I want to be able to use an OO database without the inconvenience of
> > having it growing out of control and then having to spend hours packing
> > the database every once in a while. (I do a lot of writes in my DBs).
> > Does this Holy Grail of databases exist? :-)
>
> I've learned to love this aspect of FileStorage.  Sure, the Data.fs is
> measured in gigabytes, but when the users run to you crying "help! help!
> why is this bit of data not what I expected it to be?" you can dig into
> object history from a debugzope console and figure out what changed it
> and when.  Or restore deleted data, for that matter.
>
> Marius Gedminas
> --
> MSDOS didn't get as bad as it is overnight -- it took over ten years
> of careful development.
>-- [EMAIL PROTECTED]


-- 
Flávio Codeço Coelho

"My grandfather once told me that there were two kinds of people: those who
do the work and those who take the credit. He told me to try to be in the
first group; there was much less competition."
Indira Gandhi

registered Linux user # 386432
get counted at http://counter.li.org
