Hi Andreas,

sorry for my very delayed answer.

> In your answer on TheServerSide, you said that "Scalability is mainly
> a matter of choosing and configuring the persistence layer correctly."
> Are there any scenario recommendations / best practices available?
> I'll check out the website again, but insider knowledge is as always
> greatly appreciated.
Generally, the out-of-the-box configuration of Jackrabbit running the
DerbyPersistenceManager yields very acceptable results, both in terms
of performance and in terms of scalability. Since basically every item
is stored as a row, scalability is delegated to Derby and to how many
rows it can hold in a table. None of our tests has ever run into such
a limit.
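If it helps, this is roughly all it takes to run against that default
setup. A minimal sketch only: it assumes the stock repository.xml that
ships with Jackrabbit (which is Derby-backed), and the node name,
property and admin/admin credentials are made up for illustration.

    import javax.jcr.Node;
    import javax.jcr.Repository;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;
    import org.apache.jackrabbit.core.TransientRepository;

    public class DefaultDerbyExample {
        public static void main(String[] args) throws Exception {
            // TransientRepository starts an embedded repository using the
            // default repository.xml, which uses the Derby persistence manager.
            Repository repository = new TransientRepository();
            Session session = repository.login(
                    new SimpleCredentials("admin", "admin".toCharArray()));
            try {
                Node root = session.getRootNode();
                Node doc = root.addNode("scalability-test");
                doc.setProperty("note", "every item ends up as a row in Derby");
                session.save();
            } finally {
                session.logout();
            }
        }
    }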

> "Backup/Restore" operations in my experience usually happen on the
> persistence layer, which means that restore operation (obviously) does
> not go through the normal user API.
> How would a transactional replication be implemented (e.g. from an
> authoring system to a live system in a DMZ)? If a lot of documents
> are involved, for instance after a URL change that affects a lot
> of links, this could easily result in one massive transaction.
> Should this be implemented by accessing the persistence layer directly?
> IIUC this would have the drawback that the JCR implementation couldn't
> be replaced without changing the replication code ...
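Just to illustrate the alternative to touching the persistence layer:
the copy itself can be done entirely through the JCR API, which keeps
the replication code implementation-independent. A rough sketch only;
the method, the sessions and the paths are made up for illustration,
and it assumes both repositories are reachable from the same JVM.

    import java.io.PipedInputStream;
    import java.io.PipedOutputStream;
    import javax.jcr.ImportUUIDBehavior;
    import javax.jcr.Session;

    public class ApiLevelReplication {

        /**
         * Copies a subtree from the authoring session to the live session,
         * using only the JCR API so the code does not depend on Jackrabbit
         * internals.
         */
        public static void replicate(final Session authoring, Session live,
                final String srcPath, String dstParentPath) throws Exception {

            final PipedOutputStream out = new PipedOutputStream();
            PipedInputStream in = new PipedInputStream(out);

            // export in a second thread so the piped streams do not deadlock
            Thread exporter = new Thread(new Runnable() {
                public void run() {
                    try {
                        authoring.exportSystemView(srcPath, out, false, false);
                        out.close();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
            exporter.start();

            live.importXML(dstParentPath, in,
                    ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING);
            live.save();   // one (potentially very large) save on the live side
            exporter.join();
        }
    }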
I think I agree with you: Jackrabbit should be able to run very large
transactions. While there are workarounds for most applications (for
example, splitting the work into smaller transactions, as sketched
below), those are not really desirable.
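For completeness, the workaround I mean is nothing more than flushing
pending changes in chunks. A rough sketch only; the "link" property,
the XPath query and the batch size are made up for illustration, and
the query string is built naively.

    import javax.jcr.Node;
    import javax.jcr.NodeIterator;
    import javax.jcr.Session;
    import javax.jcr.query.Query;
    import javax.jcr.query.QueryManager;

    public class BatchedLinkUpdate {

        private static final int BATCH_SIZE = 100;

        /**
         * Rewrites a link property on many documents, saving every
         * BATCH_SIZE changes instead of in one huge transaction.
         */
        public static void rewriteLinks(Session session, String oldPrefix,
                String newPrefix) throws Exception {
            QueryManager qm = session.getWorkspace().getQueryManager();
            Query query = qm.createQuery(
                    "//*[jcr:like(@link, '" + oldPrefix + "%')]", Query.XPATH);
            NodeIterator nodes = query.execute().getNodes();

            int pending = 0;
            while (nodes.hasNext()) {
                Node node = nodes.nextNode();
                String link = node.getProperty("link").getString();
                node.setProperty("link",
                        newPrefix + link.substring(oldPrefix.length()));
                if (++pending % BATCH_SIZE == 0) {
                    session.save();   // commit this batch
                }
            }
            session.save();           // commit the remainder
        }
    }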

I think the best way forward is to make sure that this is fixed in Jackrabbit.
I don't think this is a design problem; people simply have not requested
the feature often enough for someone to care enough to fix it.

Feel free to file it as an issue and vote for it ;)

Thanks for your input and again sorry for the delayed answer,

regards,
david
