ForkedStorage, I like it simply for the coolness of the name. :^)

But it sparked a different kind of idea, leveraging a pattern that might 
emerge in Zope 3.

Let's say we had a queue in Zope.  We could asynchronously send changes 
into the queue.  Later, based on some policy (e.g. idle time, clock 
ticks, etc.), those changes would be enacted/committed.

Imagine the queue itself lives in a different storage, likely 
non-versioned, and that the queue is processed every N seconds: it 
takes all the pending work and performs it, but in a subtransaction.

Thus you might send the queue ten increments to a counter, but only one 
combined change will be committed to the main storage.
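A minimal sketch of that coalescing behavior in plain Python (the 
ChangeQueue class and its names are hypothetical, not any Zope API):

```python
class ChangeQueue:
    """Hypothetical coalescing queue: many sends, one commit per object/method."""

    def __init__(self):
        self.pending = {}  # (oid, method) -> accumulated amount

    def send(self, oid, method, amount):
        # Later sends to the same object/method fold into one pending entry.
        key = (oid, method)
        self.pending[key] = self.pending.get(key, 0) + amount

    def process(self, commit):
        # Run every N seconds: perform all queued work in one (sub)transaction.
        for (oid, method), total in sorted(self.pending.items()):
            commit(oid, method, total)
        self.pending.clear()


# Ten increments go in; a single combined commit comes out.
committed = []
q = ChangeQueue()
for _ in range(10):
    q.send('counter', 'increment', 1)
q.process(lambda oid, method, total: committed.append((oid, method, total)))
```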

To let programmers think less about the queue (sending in the object 
reference, the method to use, and the parameters), you could make it 
look like a special form of subtransaction.  That is, you write:

   self.title='Simple change'
   self.body = upload_file

At the transaction level, all enclosed changes are queued for later 
commit.  You don't have to think any differently than you would for 
regular object state management.
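One hedged sketch of how that transparency could work: a wrapper that 
intercepts attribute writes and records them for the next commit cycle 
instead of applying them (DeferredProxy is invented for illustration):

```python
class DeferredProxy:
    """Hypothetical wrapper: attribute writes are queued, not applied."""

    def __init__(self, target, queue):
        object.__setattr__(self, '_target', target)
        object.__setattr__(self, '_queue', queue)

    def __setattr__(self, name, value):
        # Record the change for the next commit cycle instead of applying it.
        self._queue.append((self._target, name, value))

    def __getattr__(self, name):
        # Reads fall through to the wrapped object's current state.
        return getattr(object.__getattribute__(self, '_target'), name)


# Application code stays ordinary; the proxy does the queueing:
#   p = DeferredProxy(doc, queue)
#   p.title = 'Simple change'        # queued, not yet applied
# Later, a queue processor applies everything in one transaction:
#   for target, name, value in queue:
#       setattr(target, name, value)
```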

This pattern applies better when you have a lot of document cataloging 
to be done.  A separate process can wake up, make a ZEO connection, and 
process the queue.  I don't think that indexing documents *has* to be a 
transactional part of every document save.
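A rough shape for that wake-up-and-drain process (every name here is a 
stand-in parameter, not a real ZEO call):

```python
import time

def catalog_loop(open_connection, drain_queue, index, interval=60, max_cycles=None):
    """Hypothetical cron-style cataloger: connect, drain the queue, index, sleep."""
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        conn = open_connection()        # e.g. a ZEO client connection
        for doc in drain_queue(conn):   # documents queued since the last cycle
            index(conn, doc)            # indexing happens outside the save
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval)        # idle until the next wake-up
```

The point of the shape is that document saves never block on indexing; 
the cataloger catches up on its own schedule.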

Under this cron-style approach, you also pay less of a conflict-error 
penalty, as you can increase the backoff period.  There's no web browser 
on the other end, impatiently waiting for their flaming logo. :^)
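Because nothing is waiting on the response, the queue processor can 
retry conflicted work with a generous, growing pause; a sketch 
(ConflictError here is a local stand-in for ZODB's exception):

```python
import time

class ConflictError(Exception):
    """Local stand-in for ZODB's ConflictError."""

def run_with_backoff(work, retries=5, base=0.01):
    """Retry conflicting work, doubling the backoff after each failure."""
    for attempt in range(retries):
        try:
            return work()
        except ConflictError:
            if attempt == retries - 1:
                raise                          # give up after the last attempt
            time.sleep(base * (2 ** attempt))  # back off longer each time
```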

Ahh well, fun to talk about.  Maybe this time next year we can repeat 
the conversation. :^)


Shane Hathaway wrote:
> Jeremy Hylton wrote:
>>>>>>> "CM" == Chris McDonough <[EMAIL PROTECTED]> writes:
>>   >> Completely agreed.  My disagreement is portraying the counter
>>   >> problem as impossible with the zodb.  I think some people, as
>>   >> evidenced by some of the responses, are willing to live with the
>>   >> tradeoffs.  Other people will find managing a log file on disk to
>>   >> be a more manageable solution.
>>   CM> It would be best to make a dual-mode undoing and nonundoing
>>   CM> storage on a per-object basis.
>> I'd really like to do this for ZODB4, but it seems hard to get it into
>> FileStorage, without adding automatic incremental packing to
>> FileStorage.
>> Example: Object A is marked "save enough revisions to do a single
>> undo."  When a transaction updates A and makes older revisions
>> unnecessary, there's no obvious way to remove them without doing a
>> pack.  We could write a garbage collector that removed unneeded things
>> (as opposed to packing everything to a particular date), but it
>> doesn't seem very useful if it needs to be run manually.
> One idea I've been floating in my head is the idea of a "forked" 
> storage, where some objects are stored in an undoable storage and others 
> are stored in a non-undoable storage.  I could try to explain it in 
> English but pseudocode is easier:
> import cPickle
> from cStringIO import StringIO
> class ForkedStorage:
>     def __init__(self, undoable_storage, non_undoable_storage):
>         self.undoable = undoable_storage
>         self.non_undoable = non_undoable_storage
>     def store(self, oid, data, serial):
>         if not serial or serial == '\0' * 8:
>             # For new objects, choose a storage.
>             want_undo = self.wantUndoableStorage(data)
>             if want_undo:
>                 storage = self.undoable
>             else:
>                 storage = self.non_undoable
>         else:
>             # For existing objects, use the storage chosen previously.
>             if self.undoable.load(oid):
>                 storage = self.undoable
>             else:
>                 storage = self.non_undoable
>, data, serial)
>     def load(self, oid):
>         data, serial = self.undoable.load(oid)
>         if not data:
>             data, serial = self.non_undoable.load(oid)
>             if not data:
>                 raise POSException, 'data not found'
>         return data, serial
>     def wantUndoableStorage(self, data):
>         # Unpickle just the class reference at the head of the record.
>         u = cPickle.Unpickler(StringIO(data))
>         module, name = u.load()
>         class_ = getattr(__import__(module), name)
>         if getattr(class_, '_p_undoable', 1):
>             return 1
>         else:
>             return 0
> Only a simple idea. :-)
>> Also, how would you specify the object's packing policy?  I'm
>> thinking an _p_revision_control attribute or something like that.  If
>> the attribute exists on an object, it sets a particular policy for
>> that object.  
>> Do individual transactions need to play in this game, too?  I'm
>> imagining a use case where an object is marked as "no revisions" but
>> you want to be able to undo a particular transaction.  I'm not sure if
>> that means:
>>     - you can undo the transaction, but the "no revisions" object
>>       keeps its current state.
>>     - you can undo the transaction, and because the transaction is
>>       specially marked as undoable, there actually is a revision
>>     - you can't undo the transaction
>> The first choice seems appropriate for a counter (I think), but I'm
>> not sure if it makes sense for all possible revision-less objects.
> The first choice also makes sense for a catalog.  Here's another 
> possible variation: transactions that involve *only* non-undoable 
> objects are non-undoable; all other transactions are undoable and revert 
> the revision of non-undoable objects as well.
> Shane
> _______________________________________________
> Zope-Dev maillist  -  [EMAIL PROTECTED]
> **  No cross posts or HTML encoding!  **
> (Related lists -
> )
