Hi,

> On Feb 28, 2012, at 3:54 PM, Marcel Reutegger wrote:
>
> > I'd solve this differently. Saves are always performed on one
> > partition, even if some of the change set actually goes beyond a
> > given partition. this is however assuming that our implementation
> > supports dynamic partitioning and redistribution (e.g. when a new
> > cluster node is added to the federation). in this case the excessive
> > part of the change set would eventually be migrated to the correct
> > cluster node.
>
> I'd like to better understand your approach: if we have, say,
> Partitions P and Q, containing subtrees /p and /q, respectively, then
> a save that spans elements in both /p and /q might be saved in P
> first, and later migrated to Q? What happens if this later migration
> leads to a conflict?
I guess such a conflict would be the result of a concurrent save, i.e.
when there is an additional, conflicting save under /q at the same time.
Good question... CouchDB solves this with a deterministic algorithm that
simply picks one revision as the latest one and flags the conflict.
Maybe we could use something similar?

regards
marcel
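
To make that idea a bit more concrete, here is a rough Java sketch of what
such a deterministic pick could look like. The Revision fields (updateCount,
revisionId) and the exact tie-breaking rule are assumptions for illustration
only, not CouchDB's precise algorithm; the point is just that every cluster
node applies the same total order to the candidate revisions, so all nodes
agree on the winner without coordination, while the losing revisions remain
flagged as conflicts.

import java.util.Comparator;
import java.util.List;

// Sketch of a CouchDB-style deterministic winner pick (illustrative only).
public final class DeterministicConflictResolver {

    // Minimal revision descriptor; field names are hypothetical.
    public static final class Revision {
        final long updateCount;    // number of saves leading to this revision
        final String revisionId;   // e.g. a content hash, unique per revision
        boolean conflict;          // set to true for the losing revisions

        Revision(long updateCount, String revisionId) {
            this.updateCount = updateCount;
            this.revisionId = revisionId;
        }
    }

    // Picks the same winner on every cluster node: prefer the revision with
    // the larger update count, break ties by comparing revision ids.
    public static Revision resolve(List<Revision> candidates) {
        Revision winner = candidates.stream()
                .max(Comparator.comparingLong((Revision r) -> r.updateCount)
                        .thenComparing(r -> r.revisionId))
                .orElseThrow(IllegalArgumentException::new);
        for (Revision r : candidates) {
            r.conflict = (r != winner);   // losers stay visible as conflicts
        }
        return winner;
    }
}

Applied to the migration scenario above, the node holding /q could run the
same pick over the migrated revision and the locally saved one, arrive at
the same winner as every other node, and keep the loser around as a flagged
conflict for the application to resolve later.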
