On Mon, Jun 13, 2011 at 9:26 PM, Julian Leviston <[email protected]> wrote:
> ... also, the idea of modelling change ITSELF is an appealing one in this
> context, and all changes including data "entry" etc being simply represented
> as a log of mutations using the command pattern. Thus the data represented
> in the first "world" would be mutated and "propagated" to the new world
> (actually more like the "view" of it filtered or some-such) according to the
> new rules, and the inverse would apply as well...

There might be some experience on this in the CQRS-community. They
usually model systems with event sourcing as the primary
representation of state and have to deal with the versioning issues.
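To make the event-sourcing idea concrete, here is a minimal sketch (all names hypothetical, not from any CQRS framework): state is never mutated in place; every change is appended to a log of events, and the current state is just a fold over that log. Older "worlds" can then be re-derived by replaying a prefix of the log, and a new world with new rules is just a different fold over the same events.

```python
from dataclasses import dataclass

# Two hypothetical event types recording mutations to a balance.
@dataclass
class Deposited:
    amount: int

@dataclass
class Withdrawn:
    amount: int

def apply_event(balance, event):
    # Pure step function: (state, event) -> new state.
    if isinstance(event, Deposited):
        return balance + event.amount
    if isinstance(event, Withdrawn):
        return balance - event.amount
    raise TypeError(f"unknown event: {event!r}")

def replay(events, initial=0):
    # Current state is a left fold over the event log; replaying a
    # prefix of the log reconstructs any earlier state.
    state = initial
    for e in events:
        state = apply_event(state, e)
    return state

log = [Deposited(100), Withdrawn(30), Deposited(5)]
print(replay(log))        # 75
print(replay(log[:2]))    # 70 -- the state as of the second event
```

Versioning then becomes a question of upgrading either the events or the fold function, which is exactly where the CQRS experience applies.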

Then there's the experience from working with databases, both
relational and OO. The practice that seems to work is to model new
versions in a backwards-compatible way and then refactor once the old
versions have been completely shut down.

My own thinking in this area is that you handle merges automatically
if you can but fall back on manual intervention if not. Hopefully the
system has a user-base that knows what to do with inconsistent data.
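A sketch of that policy, assuming a simple three-way merge over per-key values (the data model and names are invented for illustration): anything both sides agree on, or only one side changed, merges automatically; anything changed differently on both sides is set aside for a human.

```python
def merge(base, ours, theirs):
    # Three-way merge of dicts: returns (merged, conflicts).
    merged, conflicts = {}, {}
    for key in set(base) | set(ours) | set(theirs):
        b, o, t = base.get(key), ours.get(key), theirs.get(key)
        if o == t:
            merged[key] = o          # both sides agree
        elif o == b:
            merged[key] = t          # only theirs changed
        elif t == b:
            merged[key] = o          # only ours changed
        else:
            conflicts[key] = (o, t)  # both changed: manual intervention

    return merged, conflicts

merged, conflicts = merge(
    base={"x": 1, "y": 2},
    ours={"x": 1, "y": 3},
    theirs={"x": 9, "y": 2},
)
# merged == {"x": 9, "y": 3}, conflicts == {} -- fully automatic here;
# had both sides changed "x", it would land in conflicts instead.
```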

In any case I guess the default behaviour when branching is to simply
diverge; merges, in any direction, should only happen if and when
asked for. The Git workflow seems to work very well. If anything is
broken with it, it is that it tends to express dependencies that
aren't really there, but that isn't a fundamental property of the
DAG model, just a consequence of how the tools steer you. The Darcs
approach, with its theory of patches, may be better in this regard,
though I have no experience working with it.

BR,
John

_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc