Hi,
Are you sure the problem is concurrency and not performance? Are
you sure that the persistence manager you use supports a higher
write throughput? Which persistence manager do you use, what write
throughput do you see, and what do you need?
Regards,
Thomas
Hi,
The problem is performance. It comes from the concurrent writes we are doing.
From what I can make out of Jackrabbit's design, the write lock is
acquired at the SharedItemStateManager level, which sits above the
PersistenceManager.
We tried changing to a DB persistence manager.
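To illustrate, here is a simplified sketch of the kind of test we run (the
repository bootstrap, credentials, node names, and counts below are
placeholders, not our real code): each thread uses its own Session and
writes to its own subtree, yet every save() still goes through that single
write lock, regardless of which persistence manager is configured underneath.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

import org.apache.jackrabbit.core.TransientRepository;

public class ConcurrentWriteTest {

    public static void main(String[] args) throws Exception {
        // Placeholder repository; in practice this is the repository
        // configured with the chosen persistence manager.
        final Repository repository = new TransientRepository();

        int threads = 100;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            final int id = i;
            pool.submit(new Runnable() {
                public void run() {
                    Session session = null;
                    try {
                        // one Session per thread (Sessions are not thread-safe)
                        session = repository.login(
                                new SimpleCredentials("admin", "admin".toCharArray()));
                        // each thread writes to its own, disjoint subtree
                        Node base = session.getRootNode().addNode("writer" + id);
                        for (int j = 0; j < 100; j++) {
                            base.addNode("item" + j);
                        }
                        // this save is serialized with all other threads by the
                        // workspace-wide write lock in SharedItemStateManager
                        session.save();
                    } catch (Exception e) {
                        e.printStackTrace();
                    } finally {
                        if (session != null) {
                            session.logout();
                        }
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}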
Hi,
Do you use Day CRX / CQ? If yes, I suggest using the Day support.
Regards,
Thomas
Sure.
We are already going through them. I just wanted to make sure whether we
have some kind of configuration to allow parallel writes in Jackrabbit
when we write to different parts of the repository. What I am getting
here is that writes will be serialized due to a single write lock.
Thanks a lot
Hi,
> What I am getting here is that writes will be
> serialized due to a single write lock
For scalability, you also need scalable hardware. Just using multiple
threads will not improve performance if all the data is then stored on
the same disk.
Regards,
Thomas
Sure.
But with 100 concurrent threads I guess we should not need big
hardware anyway.
Also, if we cluster the repository, I guess we still deal with a global
lock across the cluster nodes, so I am not sure how much we would gain.
Thanks
Shashank