Hi,

On Thu, Mar 7, 2013 at 2:05 PM, Michael Dürig <[email protected]> wrote:
> Yes I'm also a bit worried about this. Couldn't this lead to cascading
> undos? And further to violation of other application constraints when data
> gets "uncommited"? I think this is a very slippery slope...
The way I see it, the normal case for commits would be to go against the root journal and thus never encounter this issue. Unlike the new MongoMK, which can automatically execute non-conflicting changes in parallel, the SegmentMK would require explicit configuration when improved concurrency is required. In other words, I don't think that for example the default configuration of a SegmentMK cluster should be for each cluster node to have its own journal. They might, but only if explicitly configured that way based on an analysis of the application-level access patterns.

The reasoning behind this design is that I think the optimistic locking mechanism should already buy quite a bit of extra performance for normal deployments, and the remaining cases where even more write throughput is needed typically have some very specific and tightly scoped access patterns. And as mentioned in another email, in my experience such access patterns are seldom conflicting (or come with strict application-level constraints) and are generally resilient against the potential loss of some small fraction of changes.

Thus in practice I don't see such cascading undos or constraint violations becoming much of a problem. But that of course depends on the amount of care put into deciding which operations should be executed against lower-level journals. That's probably an area where good documentation/training and more real-world experience will be needed.

BR,

Jukka Zitting
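P.S. In case it helps the discussion, the optimistic locking I mention could be sketched roughly as below. This is just an illustration of the general compare-and-swap idea, not the actual SegmentMK code; the `Journal` class and all names in it are hypothetical.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

// Hypothetical sketch of optimistic locking on a journal head (not the
// real SegmentMK API). A commit reads the current head revision, computes
// a new revision on top of it, and atomically swaps the head; if another
// writer got there first, the commit is rebased and retried.
class Journal {
    private final AtomicReference<String> head = new AtomicReference<>("r0");

    String commit(UnaryOperator<String> change) {
        while (true) {
            String base = head.get();
            String next = change.apply(base);      // rebase the change on the current head
            if (head.compareAndSet(base, next)) {  // optimistic swap of the head revision
                return next;                       // success: next is now the head
            }
            // another writer won the race; loop and rebase on the new head
        }
    }

    String head() {
        return head.get();
    }
}
```

Commits against the root journal would all funnel through a loop like this, so concurrent non-conflicting writers only pay the cost of an occasional rebase and retry rather than blocking each other.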
