Well, when I read about "commands" that modify the model and are stored in a
log for future replay, it sounds a lot like Prevayler, with the luxury of
automatic "transaction command object" creation based on parameters
extracted via a concern.

With Prevayler, you are supposed to create snapshots of your model in order
to avoid replaying all the commands. Those snapshots can be considered a
cache of the model's value at time t, used to optimise replaying at t+1.
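To make the idea concrete, here is a minimal sketch of that replay-plus-snapshot scheme. All names (`CommandLog`, `takeSnapshot`, the int-valued model) are hypothetical, not Prevayler's actual API: commands are deterministic functions over the model, and the snapshot caches the fold of the log so replay only covers commands recorded afterwards.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch, not the Prevayler API: the model is a single int,
// commands are deterministic functions on it, and the snapshot caches
// the result of folding the log up to a point.
public class CommandLog {
    private final List<UnaryOperator<Integer>> log = new ArrayList<>();
    private int snapshot = 0;      // cached model value at the last snapshot
    private int snapshotIndex = 0; // number of commands the snapshot covers

    public void execute(UnaryOperator<Integer> command) {
        log.add(command);
    }

    // Replay only the commands appended since the last snapshot.
    public int currentValue() {
        int value = snapshot;
        for (int i = snapshotIndex; i < log.size(); i++) {
            value = log.get(i).apply(value);
        }
        return value;
    }

    public void takeSnapshot() {
        snapshot = currentValue();
        snapshotIndex = log.size();
    }
}
```

With a snapshot taken after n commands, computing the value at t+1 replays only the commands appended since the snapshot, instead of the whole log.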

One question is: if your model data is the result of applying a queue of
commands, how do you want to store it? When you think about model migration,
storing all events since the beginning looks efficient, since you only have
to modify the commands' code and replay everything (and, as a bonus, you get
a historical model for the same price).
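A small sketch of that migration-by-replay point, with hypothetical names: the stored events never change, only the code that interprets them does, and replaying the same log under the new code rebuilds the model under the new rules.

```java
import java.util.List;

// Hypothetical sketch: the event log is plain data; "migration" means
// changing the command code and replaying the same log.
public class Migration {
    record Deposit(int amount) {}

    // v1 interpretation of the log: a plain sum.
    static int replayV1(List<Deposit> log) {
        return log.stream().mapToInt(Deposit::amount).sum();
    }

    // v2 interpretation: same stored events, new command code
    // (say, amounts are now tracked in cents).
    static int replayV2(List<Deposit> log) {
        return log.stream().mapToInt(d -> d.amount() * 100).sum();
    }
}
```

The historical model comes for free: replaying a prefix of the log, under either version, gives the model as it stood at that point.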

When you apply a command, you probably need to know the latest value of other
entities, presumably stored in some kind of snapshot. So when you think
about efficiency, you need direct access to that latest value: the snapshot.

Now, what about an "entityStore" containing a queue of events mapped to each
entity identity? What about calculating the actual model value "on the fly"
by applying only that entity's queue of commands (which must obviously be
deterministic)? You effectively extract a reverse tree of commands to apply
in order to get the final entity value. The snapshot is then only an
optimisation, a cache of the last entity calculation. When you want to
migrate your commands, you simply clear the cache; you don't replay anything
"in advance".
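Here is a minimal sketch of that per-entity store; `EntityStore` and its methods are hypothetical names, and the entity value is modelled as a plain string for brevity. The current value is computed lazily by folding only that entity's queue, the result is cached (the per-entity "snapshot"), and migration just clears the cache so nothing is replayed in advance.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Hypothetical "entityStore": one queue of commands per entity identity,
// with the latest value computed lazily and cached per entity.
public class EntityStore {
    private final Map<String, List<UnaryOperator<String>>> queues = new HashMap<>();
    private final Map<String, String> cache = new HashMap<>(); // per-entity snapshot

    public void append(String entityId, UnaryOperator<String> command) {
        queues.computeIfAbsent(entityId, k -> new ArrayList<>()).add(command);
        cache.remove(entityId); // the cached value is now stale
    }

    // Replay only this entity's queue; commands must be deterministic.
    public String currentValue(String entityId) {
        return cache.computeIfAbsent(entityId, id -> {
            String value = "";
            for (UnaryOperator<String> cmd : queues.getOrDefault(id, List.of())) {
                value = cmd.apply(value);
            }
            return value;
        });
    }

    // Migration: clear the cache. Dormant entities are only recomputed
    // when their latest value is actually needed.
    public void clearCache() {
        cache.clear();
    }
}
```

Entities whose branches have "gone to sleep" cost nothing after `clearCache()`: their queues sit untouched until someone asks for their value again.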

One problem I had with Prevayler is snapshot generation. It takes a lot of
resources (O(size of the model)). I partly solved it by partitioning my
model into smaller snapshots, but I still had a problem with "dead" old
data... like last year's "order lines": they were still present in the
snapshot! So I had to remove the old values, and it was a mess.

This seems to be solved here: I can let complete branches of the "reverse
tree of commands" go to sleep gracefully, because I will never traverse those
branches again. The added value is that I could still migrate that "dead"
data to a new version of the commands, since I will only reapply those
commands when I really need to calculate the latest value of my model's
entities...

Anyone still following me?

I will try to summarize.

If you think of an entity as the result of applying a queue of commands
that use other entities, you should be able to create a historical
representation of your complete model on the fly: the snapshot.

But if you consider the snapshot not as a whole, but as a map of entity
values, you should be able to find an entity's latest value by replaying a
much smaller queue of commands (and cache it for future use).

Is that clearer, or helpful?

Cheers,

Philippe


David Leangen-19 wrote:
> 
> 
>> What about using Prevayler ?
>> http://www.prevayler.org/wiki/
> 
> For what? I'm not sure I understand why you are proposing this.
> 
> Recently, I have been using Prevayler a lot. It's great for small  
> projects that can be kept in memory. I don't see the connection with  
> this thread, though...
> 
> Can you clarify?
> 
> 
> Cheers,
> =David
> 
> 
> 
> _______________________________________________
> qi4j-dev mailing list
> [email protected]
> http://lists.ops4j.org/mailman/listinfo/qi4j-dev
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Migration-support%2C-Event-Feeds%2C-and-others...-tp24602284p24622918.html
Sent from the Qi4j-dev mailing list archive at Nabble.com.


_______________________________________________
qi4j-dev mailing list
[email protected]
http://lists.ops4j.org/mailman/listinfo/qi4j-dev
