Using Cassandra as Back End for publish

2014-09-04 Thread Abhijit Mazumder
Hi,

We are considering using Cassandra as the back end for the publish environment. In the author environment we are using Mongo. What are the options we have to customize the replication agent to achieve this?

Regards,
Abhijit

Re: Using Cassandra as Back End for publish

2014-09-04 Thread Michael Marth
Hi Abhijit,

I assume you refer to replication as implemented in Sling and AEM. Those work on top of the JCR API, so they are independent of the MicroKernel implementation. For running Oak on Cassandra you would need a specific MK implementation (presumably based on the DocumentMK). Is that
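To make the DocumentMK suggestion concrete, the sketch below shows the rough shape of a Cassandra-backed document store. It is a sketch only: the SPI Oak actually defines is `org.apache.jackrabbit.oak.plugins.document.DocumentStore` and should be checked against the Oak source; the `CassandraSession` type, table name, and all method signatures here are hypothetical stand-ins, not the real Oak or Cassandra driver APIs.

```java
// Sketch only: what a Cassandra-backed DocumentMK might look like at the
// storage layer. The CassandraSession interface below is a hypothetical
// stand-in for a real Cassandra driver session.
import java.util.HashMap;
import java.util.Map;

interface CassandraSession {
    Map<String, Object> selectById(String table, String id);
    void upsert(String table, String id, Map<String, Object> props);
    void delete(String table, String id);
}

/** In-memory stand-in for a real Cassandra session, for illustration. */
class InMemoryCassandraSession implements CassandraSession {
    private final Map<String, Map<String, Object>> rows = new HashMap<>();

    public Map<String, Object> selectById(String table, String id) {
        return rows.get(table + "/" + id);
    }

    public void upsert(String table, String id, Map<String, Object> props) {
        rows.put(table + "/" + id, new HashMap<>(props));
    }

    public void delete(String table, String id) {
        rows.remove(table + "/" + id);
    }
}

/** Sketch of the document-store operations a Cassandra MK would need. */
class CassandraDocumentStore {
    private static final String TABLE = "nodes"; // one row per Oak document
    private final CassandraSession session;

    CassandraDocumentStore(CassandraSession session) {
        this.session = session;
    }

    /** Look up a document (node state) by its id. */
    Map<String, Object> find(String id) {
        return session.selectById(TABLE, id);
    }

    /** Create or update a document; Cassandra's upsert semantics fit well. */
    void createOrUpdate(String id, Map<String, Object> props) {
        session.upsert(TABLE, id, props);
    }

    void remove(String id) {
        session.delete(TABLE, id);
    }
}
```

Cassandra's native upsert semantics map reasonably onto the create-or-update pattern a document store needs, which is part of why the DocumentMK (rather than the segment-based MK) is the natural starting point.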

Using BlobStore by default with SegmentNodeStore

2014-09-04 Thread Chetan Mehrotra
Hi Team,

Currently SegmentNodeStore does not use a BlobStore by default and stores the binary data within the data tar files. This has the goodness that:

1. Backup is simpler - the user just needs to back up the segmentstore directory
2. No Blob GC - the RevisionGC would also delete the binary content and a
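For context on the trade-off being discussed: with an external BlobStore configured, small binaries can stay inline in the segment tar files while larger ones are delegated to the blob store, which is exactly why backup turns into a two-directory affair and a separate Blob GC becomes necessary. The self-contained sketch below illustrates that dispatch pattern; the `INLINE_LIMIT` value and class names are illustrative, not Oak's actual ones.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

/** Illustrative dispatch: small binaries stay inline, large ones go external. */
class BinaryWriter {
    // Illustrative threshold; Oak's actual inline size limit differs.
    static final int INLINE_LIMIT = 16 * 1024;

    final Map<String, byte[]> segmentStore = new HashMap<>(); // stands in for the tar files
    final Map<String, byte[]> blobStore = new HashMap<>();    // stands in for an external BlobStore

    /** Returns a reference telling later readers where the binary lives. */
    String writeBinary(byte[] data) {
        String id = UUID.randomUUID().toString();
        if (data.length <= INLINE_LIMIT) {
            segmentStore.put(id, data);
            return "segment:" + id; // backed up with the segmentstore directory
        } else {
            blobStore.put(id, data);
            return "blob:" + id;    // needs separate backup and Blob GC
        }
    }
}
```

Once any reference of the `blob:` kind exists, deleting revisions no longer reclaims the binary's space by itself, which is the Blob GC cost weighed against the deduplication and shared-storage benefits of an external BlobStore.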

Re: Using BlobStore by default with SegmentNodeStore

2014-09-04 Thread Davide Giannella
On 04/09/2014 12:25, Chetan Mehrotra wrote:
> ... (supermegacut!)
> Thoughts?

As you mentioned AEM, the deployment based on JR2 already delivers two different directories for repository/segment and blobs. Both AEM and JR2 used to run separate tasks for cleaning the blobs, IIRC. So I'm in favour

Re: Using Cassandra as Back End for publish

2014-09-04 Thread Michael Marth
Hi,

I think your best guess would be http://jackrabbit.apache.org/oak/docs/nodestore/documentmk.html as a general overview (even if skewed towards MongoDB), and looking into http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/ There