Jackrabbit by default does not lock the underlying nodes that are
being modified; it uses a merge strategy to merge concurrent changes
to identical nodes on save. However, modifications to the same
property, or additions of same-name siblings, are non-mergeable. In
those situations you get an InvalidItemStateException on one of the
updates, which is effectively optimistic locking.
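
To make that concrete, here is a rough sketch of the collision using
the plain JCR API (the path and property name are just made up for
illustration):

    import javax.jcr.InvalidItemStateException;
    import javax.jcr.Node;
    import javax.jcr.Session;

    public class ConflictDemo {
        // Two sessions have both read /content/doc and modify the same
        // property. Whichever one saves second hits the non-mergeable
        // conflict and gets the InvalidItemStateException.
        public static void concurrentWrites(Session s1, Session s2) throws Exception {
            Node doc1 = s1.getRootNode().getNode("content/doc");
            Node doc2 = s2.getRootNode().getNode("content/doc");
            doc1.setProperty("title", "from session one");
            doc2.setProperty("title", "from session two");
            s1.save();                // persists fine
            try {
                s2.save();            // collides with the change s1 just saved
            } catch (InvalidItemStateException e) {
                s2.refresh(false);    // drop the stale change; retry or give up
            }
        }
    }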
If you can't afford to have one update fail, then you need to write
lock the nearest persisted mergeable ancestor to prevent other
threads from beginning a modification.
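
By way of example, what I mean is something along these lines (a
sketch only; it assumes the parent node already carries the
mix:lockable mixin, and the path is invented):

    import javax.jcr.Node;
    import javax.jcr.Session;

    public class LockedUpdate {
        // Lock the nearest already-persisted ancestor before touching its
        // subtree, so a competing writer fails fast with a LockException
        // (and can back off and retry) instead of failing later on save.
        public static void update(Session session, String title) throws Exception {
            Node doc = session.getRootNode().getNode("content/doc");
            doc.lock(true, true);            // deep, session-scoped lock
            try {
                doc.setProperty("title", title);
                doc.addNode("attachment");   // same-name sibling creation no longer races
                session.save();
            } catch (Exception e) {
                session.refresh(false);      // drop the failed changes so unlock below succeeds
                throw e;
            } finally {
                doc.unlock();
            }
        }
    }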
With transactions, there appears to be an added complication, which I
can't claim to fully understand. When there are concurrent creates on
nodes, I have been seeing a failure creating the node version
history subtree. This update was not of the form that was producing
the InvalidItemStateException but was generating something like a "no
such item" exception. I found the solution in this circumstance was
to take an explicit lock, even though the update was mergeable.
So if using transactions, and expecting high levels of concurrent
writes to the same nodes or properties, I have found that explicit
write locks are required to prevent non-mergeable conflicts from
happening.
The slight downside of this is that locking is not free. If operating
in a Jackrabbit cluster, locks will result in more journal records,
although using transactions will reduce the number of journal
records. The other worry is that without ordering the lock sequence
in your application code, deadlocks become possible. It would be nice
not to have to resort to locks at all, and handle the failure at a
higher level.
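
The higher-level handling would be something along the lines of the
following sketch, assuming the unit of work is cheap to recompute on
each attempt (path and property again invented):

    import javax.jcr.InvalidItemStateException;
    import javax.jcr.Node;
    import javax.jcr.Session;

    public class RetryingWriter {
        // No locks at all: just retry the whole unit of work when a save
        // collides. Only sensible if the update can safely be re-applied
        // from scratch each time round the loop.
        public static void updateTitle(Session session, String title) throws Exception {
            for (int attempt = 0; attempt < 3; attempt++) {
                try {
                    Node doc = session.getRootNode().getNode("content/doc");
                    doc.setProperty("title", title);
                    session.save();
                    return;
                } catch (InvalidItemStateException e) {
                    session.refresh(false);   // discard our stale changes before retrying
                }
            }
            throw new Exception("gave up after repeated conflicting saves");
        }
    }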
BTW, there is a bug in 1.4.x that means that empty journal records
are produced (fixed in 1.5.x).
Ian
On 25 Mar 2009, at 21:59, Kaspar Fischer wrote:
Ian, thanks for the post. Can you explain what you mean by locking?
(Deadlocks?)