Przemyslaw Pakulski wrote:
But in fact, in a real application most business methods end with either a save, checkin, or commit, and consequently concurrent calls to these methods block each other while waiting for the modified data to be stored. We use the versioning feature intensively, and our performance problems are mainly with write operations. Additionally, we noticed a big performance degradation when we switched to SimpleDBPersistenceManager with MySQL or another database accessed over the network. So it looks like overall performance depends heavily on the PM implementation, because every save/checkin operation waits for the PM to finish all its work. One way to avoid blocking write operations could be a dedicated thread (or threads) responsible for flushing data to the PM, but I don't think Jackrabbit uses asynchronous processing.

Deferred flushing of data to the PM would put the ACID properties of a transaction at risk. I don't think this is a valid approach.
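
For context, the blocking behavior described above can be reproduced with a sketch like the following (assumptions: a local TransientRepository with default admin credentials; the class and node names are made up). Each thread builds its transient changes concurrently, but the store phase of every save() is serialized behind the persistence manager:

import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

import org.apache.jackrabbit.core.TransientRepository;

public class ConcurrentSaveSketch {

    public static void main(String[] args) throws Exception {
        final Repository repository = new TransientRepository();

        Thread[] writers = new Thread[4];
        for (int i = 0; i < writers.length; i++) {
            final String name = "node-" + i;
            writers[i] = new Thread(new Runnable() {
                public void run() {
                    try {
                        Session session = repository.login(
                                new SimpleCredentials("admin", "admin".toCharArray()));
                        try {
                            Node node = session.getRootNode().addNode(name);
                            node.setProperty("payload", "some data");
                            // transient changes were built concurrently, but the
                            // store phase of save() queues on the PM
                            session.save();
                        } finally {
                            session.logout();
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
            writers[i].start();
        }
        for (int i = 0; i < writers.length; i++) {
            writers[i].join();
        }
    }
}

With a slow (e.g. networked) PM the threads spend most of their time waiting on each other's save(), which matches the degradation reported above.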

If there is a singleton component on top of the PM that is responsible for serializing all saves, checkins, or transactions, then naturally using connection pools doesn't help; but maybe that means Jackrabbit is not designed to work effectively in a multithreaded environment.

IMO Jackrabbit works quite well in a multi-threaded environment, but of course there is always room for improvement.

I've created a JIRA issue that deals with concurrency and fine-grained locking enhancements in the SharedItemStateManager: http://issues.apache.org/jira/browse/JCR-314
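
Roughly, the direction such an enhancement could take (illustrative only, not Jackrabbit's actual code) is to replace one exclusive lock around the shared item states with a read-write lock, so reads can proceed concurrently while a store still gets exclusive access:

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical store guarding its state map with a read-write lock.
class ItemStateStoreSketch {

    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final java.util.Map states = new java.util.HashMap();

    Object getItemState(String id) {
        lock.readLock().lock();
        try {
            return states.get(id);   // concurrent reads are allowed
        } finally {
            lock.readLock().unlock();
        }
    }

    void store(String id, Object state) {
        lock.writeLock().lock();
        try {
            states.put(id, state);   // writers remain exclusive
        } finally {
            lock.writeLock().unlock();
        }
    }
}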

Even if using a connection pool is not reasonable in the current design, I think it is worth considering JDBC batch updates instead of single updates to gain better DBPM performance.

That's a very good point. We should definitely look into this in more detail.
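
For reference, the suggestion amounts to something like the following sketch (the table and column names are made up, not the actual DBPM schema): collect the updates of a change log with addBatch() and send them with one executeBatch() call, instead of one executeUpdate() round trip per modified item:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchUpdateSketch {

    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/jackrabbit", "user", "password");
        con.setAutoCommit(false);
        PreparedStatement stmt = con.prepareStatement(
                "update item_state set data = ? where item_id = ?");
        try {
            for (int i = 0; i < 100; i++) {
                stmt.setBytes(1, serialize(i));
                stmt.setString(2, "item-" + i);
                stmt.addBatch();        // queue the update instead of executing it
            }
            stmt.executeBatch();        // send all queued updates at once
            con.commit();
        } finally {
            stmt.close();
            con.close();
        }
    }

    private static byte[] serialize(int i) {
        return ("state-" + i).getBytes(); // placeholder payload
    }
}

Whether the batch actually goes over the wire in a single round trip depends on the JDBC driver, but even without that it removes the per-statement overhead.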

regards
 marcel
