On Thu, Dec 1, 2011 at 3:49 PM, Thomas Mueller <[email protected]> wrote:
> Hi,
>
>> A test-and-set operation necessarily requires at least some level of
>> atomicity which can quickly become a bottleneck for a clustered setup.
>
> Test-and-set is a problem in a clustered 'eventually consistent' model,
> that's true. I don't know how test-and-set could be used in that way.
>
> Possibly the easiest solution is that each node modification sets the node
> type again (to the expected value) even if the user didn't change it.
>
> But I don't think we should try to increase concurrency of write
> operations within the *same* repository because that's not a problem at
> all.
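[Editor's note: the atomicity requirement Thomas mentions can be sketched as follows. This is a minimal illustration, not Jackrabbit code; the class and method names are hypothetical.]

```java
// Minimal sketch: a test-and-set on a node's primary type, modeled as a
// guarded update on a plain String field. The synchronized keyword provides
// the required atomicity on a single JVM; it is exactly this "check, then
// write, atomically" requirement that becomes hard to scale in a clustered,
// eventually consistent setup, where no cheap global lock exists.
public class NodeTypeTas {
    private String primaryType = "nt:unstructured";

    // Atomically: succeed only if the current type matches the caller's
    // expectation, otherwise leave the node untouched and report failure.
    public synchronized boolean testAndSetType(String expected, String newType) {
        if (!primaryType.equals(expected)) {
            return false;
        }
        primaryType = newType;
        return true;
    }

    public synchronized String getType() {
        return primaryType;
    }
}
```

A second caller that still expects the old type then fails cleanly instead of silently overwriting the first caller's change.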
I beg to differ ;) In JR2, saves are serialized. IMO that's a *real*
problem, especially when saving large change sets. This problem can be
addressed e.g. with an MVCC-based model.

Cheers,
Stefan

> Jackrabbit 2 is slow for other reasons. In my view, the main problems
> with Jackrabbit 2 are:
>
> - the more nodes, the slower, because of the randomly distributed node ids
> - the more open sessions, the slower, because of internal event processing
> - the more child nodes, the slower, because the list is stored as one unit
> - Jackrabbit core is slow internally for other reasons we didn't analyze
> - indexing everything with Lucene is a performance problem at some point
> - in a cluster, writes do not scale, as writing is basically synchronized
>
> None of those problems is related to transaction isolation. I'm not saying
> we shouldn't try to use MVCC. But it never was a problem that Jackrabbit 2
> doesn't use MVCC.
>
> Regards,
>
> Thomas
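[Editor's note: the MVCC idea Stefan refers to can be sketched in a few lines. This is an illustrative toy, not the Jackrabbit/Oak design; the class and its API are hypothetical.]

```java
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

// Minimal MVCC sketch: each save publishes a new immutable revision under a
// monotonically increasing revision number instead of mutating shared state
// under a global lock. Readers pinned to a revision see a stable snapshot,
// and concurrent writers never block readers, which is how an MVCC model
// avoids fully serializing saves.
public class MvccStore {
    private final ConcurrentSkipListMap<Long, String> revisions = new ConcurrentSkipListMap<>();
    private final AtomicLong revisionCounter = new AtomicLong();

    public MvccStore(String initial) {
        revisions.put(revisionCounter.get(), initial);
    }

    // A "save": append a new revision and return its number.
    public long save(String content) {
        long rev = revisionCounter.incrementAndGet();
        revisions.put(rev, content);
        return rev;
    }

    // A read against a snapshot: the newest revision at or below the
    // requested revision number.
    public String read(long revision) {
        return revisions.floorEntry(revision).getValue();
    }

    public long head() {
        return revisionCounter.get();
    }
}
```

A real repository would additionally need conflict detection when two revisions touch the same node, and garbage collection of revisions no session can still see.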
