Hi all,

I understand that to implement Aggregates a la DDD, the proposed ES
changes are the right way to do it. Coincidentally, I was reading
today about Datomic[1], and all this "do not update, but add a time
attribute to the value" is starting to make sense to me.
Using words from their whitepaper: the time to change that is now.

cheers,

 - Tibor

[1] http://datomic.com/datomic-whitepaper.html#data-model

On Fri, Jun 1, 2012 at 5:37 AM, Niclas Hedhman <[email protected]> wrote:
> Gang,
>
> I am contemplating the possibility of going the full distance with DDD
> support and restrictions when it comes to Entities.
>
> * Entities are bound to an Aggregate, and the Aggregate has an
> Aggregate Root, which is the only Entity within the Aggregate that is
> globally reachable.
>
> * Only changes within the Aggregate are atomic. Changes across
> Aggregates are eventually consistent.
>
> * Invariants are declared on the Aggregate Root or assigned to the
> Aggregate Root at assembly.
>
> * Aggregates are declared via @Aggregated annotation on Associations
> and ManyAssociations.
>
> * The aggregated entity's Identity is scoped by the Aggregate Root
> (under the hood, the Aggregate Root's identity is prefixed to the
> aggregated entity's identity).
>
> * When a non-Aggregated Association is traversed, the retrieved
> Entity is read-only.
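[Editorial aside: the identity-scoping rule above could be sketched as below. The @Aggregated annotation and the "/"-separated identity scheme are illustrative assumptions based on the proposal, not actual qi4j API.]

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker for Associations/ManyAssociations that belong to
// the aggregate, e.g.:  @Aggregated Association<OrderLine> line();
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Aggregated {}

public class ScopedIdentity {

    // "Aggregate Root identity is prefixed to the aggregated entity":
    // the aggregated entity's global identity is derived from the root's
    // identity plus a root-scoped local name.
    static String scoped(String rootIdentity, String localName) {
        return rootIdentity + "/" + localName;
    }

    public static void main(String[] args) {
        System.out.println(scoped("order:42", "line:1")); // order:42/line:1
    }
}
```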
>
>
> Would that then mean that UnitOfWork is not needed at all? The
> Aggregate IS effectively the UnitOfWork, obtaining an Aggregate
> can be done directly on the EntityFactory/Module, and the aggregated
> entities are created from the AggregateRoot. Various posts on the DDD
> group also seem to suggest the same thing: IF you are modelling with
> Aggregates, UnitOfWork should not exist.
>
> All in all, this seems to suggest that the whole persistence system
> can be simplified, GoodThing(tm). With the Aggregate being the
> distribution boundary, consistency boundary, transaction boundary and
> concurrency boundary, I think we can obtain a more solid semantic
> model for how things are expected to work, both locally and
> distributed.
>
> To add to the above, I would like to get an asynchronous model in
> place for the Entity Store SPI as well:
>
> * All changes to Entities are captured as Transitions.
>
> * Such transitions are pushed to the Entity Store SPI asynchronously,
> with optimistic success and a callback for success/failures.
>
> * Retrieval is likewise asynchronous. The request contains a callback
> to which the transition stream is delivered.
>
> * Perhaps retrieval requests can be persistent, so that one can
> register a Specification, which will continue to feed the callback
> with all changes matching the specification. Not sure if this will be
> useful though.
>
> This could also simplify the EntityStore SPI quite a bit, since the
> only interfaces needed would be something like:
>
> public interface EventStore<T extends Event>
> {
>
>    void save( Identity identity, Iterable<T> events, EventOutcome<T> handler );
>
>    void load( Identity identity, EventRetriever<T> callback );
> }
>
> public interface EventOutcome<T>
> {
>    void success( Identity id );
>
>    void failures( Identity id, Set<T> eventsNotStored, EventFailureMessage description );
> }
>
> public interface EventRetriever<T>
> {
>    void eventRetrieved( Identity id, T event );
>
>    void noMoreEvents();
> }
>
> public interface StateTransition extends Event // super interface for all ES transitions
> {
>    Identity entityIdentity();
>
>    long sequenceNumber();
>
>    DateTime timestamp();
> }
>
> IF the entity state is represented as a List of Transitions, the
> "current state" must be rebuilt from these transitions, which seems to
> suggest things will be much slower. This is probably true if the
> number of modifications to a Property or Association is magnitudes
> larger than the snapshot value, but only actual trials will tell what
> can be expected, and how much will be serialization overhead versus
> reconstruction of the snapshot state. A later optimization could be to
> allow for "snapshots", which the ES understands as a "temporal
> starting point".
>
>
> What are your thoughts on this? Summary;
>
>  * Aggregate a la DDD.
>  * Event Store model.
>  * Transition events in store.
>  * Asynch model for store/retrieve.
>
>
> Finally, is this too much of a change for 2.0, and should it be
> scheduled for a 3.0 right away? *I* think this is the right way to go,
> and I think some areas in the persistence system will be simplified,
> as well as setting the stage for "historic data" support, stronger
> distribution capabilities, an event sourcing model and much more...
>
> Cheers
> --
> Niclas Hedhman, Software Developer
> http://www.qi4j.org - New Energy for Java
>
> I live here; http://tinyurl.com/3xugrbk
> I work here; http://tinyurl.com/6a2pl4j
> I relax here; http://tinyurl.com/2cgsug
>
> _______________________________________________
> qi4j-dev mailing list
> [email protected]
> http://lists.ops4j.org/mailman/listinfo/qi4j-dev
