Lalo Martins wrote:
> Well, two betas of OracleStorage in one day, then a month and a
> half of silence. What's the status?

From our perspective, it's what it was then.  We built it for
a customer who later decided they didn't want it. I'm
glad to hear you found it useful.

> What about the other Storage projects? BerkeleyStorage has been
> dead for a year, and I heard pretty nasty words about
> InterbaseStorage. What about someone who wanted to try to port
> OracleStorage to Postgres or some other RDBMS?

We're working on a new suite of Berkeley storages that we are
writing to the latest Berkeley DB APIs. As Chris
pointed out, these are based on a new Berkeley DB extension that 
Andrew Kuchling started and that Robin Dunn is finishing.

There are a number of interesting new features that will eventually
result from these storages:

  - A new pack/garbage-collection approach.  The current storages
    perform a mark-and-sweep garbage collection when packing. It turns
    out that this really isn't very scalable.  I'm going to
    switch to the same garbage-collection strategy that Python uses,
    which is reference counting supplemented by an incremental cyclic-garbage
    detection algorithm. I have a simple storage (no undo or versions) that
    needs no packing if your data structures are free of cycles.

    I'm hopeful that we can make packing automatic and incremental.

  - Undo-log generation will be much faster, at least for common
    cases. Generation of undo logs (for selecting transactions to undo)
    is becoming a significant performance problem with FileStorage.

  - ZEO verification, performed when a client with a persistent
    cache starts up, will be much faster.

  - Policies to control whether multiple revisions are stored,
    or whether revisions are removed by packing, on an object-by-object
    or transaction-by-transaction basis.

    For example, you could have a poll/voting product that didn't allow
    undo of votes and didn't require storing multiple records for them.
    This would be cleaner than mounting a separate database with a simple
    storage.

    You could keep significant historical revisions for important objects, such
    as Wiki pages, templates, scripts, etc., with a policy to decide
    which revisions are significant.

  - The ability to set policies to implement quotas.
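
As a rough sketch of the reference-counting idea above (hypothetical
code, not the actual ZODB or Berkeley storage implementation; the
class and method names are invented for illustration):

```python
# Hypothetical sketch of reference-counting garbage collection for an
# object storage. Each record tracks the oids it references; when an
# object's incoming-reference count drops to zero, its record is
# reclaimed immediately and the collection cascades to its children.
# Cycles are deliberately NOT handled here -- that's why the plan pairs
# reference counting with an incremental cyclic-garbage detector.

class RefCountingStorage:
    ROOT_OID = 0  # the root object is never collected

    def __init__(self):
        self.records = {}    # oid -> (data, set of referenced oids)
        self.refcounts = {}  # oid -> number of incoming references

    def store(self, oid, data, refs):
        old_refs = self.records.get(oid, (None, set()))[1]
        new_refs = set(refs)
        self.records[oid] = (data, new_refs)
        self.refcounts.setdefault(oid, 0)
        # Account for references added by the new record...
        for ref in new_refs - old_refs:
            self.refcounts[ref] = self.refcounts.get(ref, 0) + 1
        # ...and for references the new record dropped.
        for ref in old_refs - new_refs:
            self._decref(ref)

    def _decref(self, oid):
        self.refcounts[oid] -= 1
        if self.refcounts[oid] == 0 and oid != self.ROOT_OID:
            # Reclaim the record and cascade to everything it referenced.
            _, refs = self.records.pop(oid)
            del self.refcounts[oid]
            for ref in refs:
                self._decref(ref)
```

With a scheme like this, a record is reclaimed as soon as its last
incoming reference disappears, so no separate pack pass is needed
unless the data contains cycles.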

I expect that we'll work out a lot of these ideas for Berkeley storages and
then implement them in OracleStorage. Others should then be able to map the
implementation onto other storages.

I'll note, in passing, that it's much nicer working these ideas out
with the Berkeley DB API than dealing with PL/SQL or SQL from Python.

We are also working on ZEO storage replication. This may have a big
impact on the storage API, or lead to specialized APIs for replicated
storages.

> Please help stamp out Data.fs! :-)

I don't think Data.fs will go away.  I do expect it to be relegated to
initial evaluation and development projects. Use of Berkeley DB in
transactional mode requires a significant administration commitment.
Log files need to be purged. Backup and recovery processes need to be in 
place. A similar cost is associated with using Oracle and many other databases,
I expect. People aren't going to want to deal with these issues when 
initially trying and learning Zope (or ZODB).


Jim Fulton           mailto:[EMAIL PROTECTED]   Python Powered!        
Technical Director   (888) 344-4332    
Digital Creations

Zope-Dev maillist  -  [EMAIL PROTECTED]
**  No cross posts or HTML encoding!  **