Hi,
We are considering using Cassandra as the back end for the publish
environment; in the author environment we are using MongoDB.
What options do we have for customizing the replication agent to achieve
this?
Regards,
Abhijit
Hi Abhijit,
I assume you refer to replication as implemented in Sling and AEM. Those work
on top of the JCR API, so they are independent of the MicroKernel
implementation.
To run Oak on Cassandra you would need a specific MicroKernel implementation
(presumably based on the DocumentMK). Is that
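For intuition, here is a toy sketch (my own illustration, not Oak's actual DocumentStore API; the class and method names are made up) of the document model the DocumentMK is built on: each node is a "document" keyed by its path, and each property keeps a map of revision to value. A Cassandra-backed store would persist the same shape, e.g. with the document id as the partition key.

```java
import java.util.*;

// Toy illustration of the DocumentMK storage model (NOT Oak's real API):
// documents keyed by path, properties versioned per revision.
public class DocumentModelSketch {
    // documents: id (path) -> property name -> revision -> value
    static final Map<String, Map<String, SortedMap<Long, String>>> docs = new HashMap<>();

    static void setProperty(String id, String name, long revision, String value) {
        docs.computeIfAbsent(id, k -> new HashMap<>())
            .computeIfAbsent(name, k -> new TreeMap<>())
            .put(revision, value);
    }

    // Read the value visible at a revision: the latest change <= revision.
    static String getProperty(String id, String name, long revision) {
        Map<String, SortedMap<Long, String>> doc = docs.get(id);
        if (doc == null) return null;
        SortedMap<Long, String> revs = doc.get(name);
        if (revs == null) return null;
        // headMap is exclusive of its bound, so use revision + 1
        SortedMap<Long, String> head = revs.headMap(revision + 1);
        return head.isEmpty() ? null : head.get(head.lastKey());
    }

    public static void main(String[] args) {
        setProperty("/content/page", "title", 1, "v1");
        setProperty("/content/page", "title", 3, "v2");
        System.out.println(getProperty("/content/page", "title", 2)); // prints v1
        System.out.println(getProperty("/content/page", "title", 5)); // prints v2
    }
}
```

A real backend would implement Oak's DocumentStore SPI against this shape; the revision handling above is deliberately simplified (no branches, no conflict resolution).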
Hi Team,
Currently SegmentNodeStore does not use a BlobStore by default and
stores the binary data within the data tar files. This has the benefits
that
1. Backup is simpler - the user just needs to back up the segmentstore directory
2. No Blob GC - the RevisionGC would also delete the binary content and a
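For comparison, a deployment that does want binaries outside the tar files can plug in an external BlobStore; with the OSGi-based setup this is configured roughly along these lines (PIDs and property names as I remember them from the Oak docs of that era, so please verify against your version):

```
# org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService.config
customBlobStore=B"true"

# org.apache.jackrabbit.oak.plugins.blob.datastore.FileDataStore.config
path="/path/to/datastore"
```

With that in place the backup/GC trade-off flips: binaries live outside the segmentstore, so backup covers two locations and blob GC becomes a separate task.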
On 04/09/2014 12:25, Chetan Mehrotra wrote:
... (supermegacut!)
Thoughts?
Since you mentioned AEM: the deployment based on JR2 already delivers two
different directories, for the repository/segments and for the blobs.
Both AEM and JR2 run separate tasks for cleaning up the blobs, IIRC.
So I'm in favour
Hi,
I think your best bet would be
http://jackrabbit.apache.org/oak/docs/nodestore/documentmk.html
as a general overview (even if it is skewed towards MongoDB), and looking into
http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/
There