Hi,

On Mon, Jan 13, 2020 at 1:19 AM Marcel Reutegger <[email protected]> wrote:
> Hi,
>
> On 12.01.20, 15:40, "jorgeeflorez ." <[email protected]> wrote:
>
> > If I create two backends (Oak instances), both using the same type of
> > document and blob store, and both pointing to the same "location"
> > (folder in a file system, S3 path, etc.), will they work without
> > collisions or conflicts when reading/storing files?
>
> For the blob stores this is generally true. When it comes to NodeStore
> implementations, only the DocumentNodeStore will also work in a clustered
> setup. The SegmentNodeStore implementation does not support multiple
> active processes working on the same storage.

I'm pretty sure the blob stores need to implement SharedDataStore to work in
this context, or at least if you want data store garbage collection to work.
I'd use OakFileDataStore if you want a filesystem-based blob store,
S3DataStore for S3, and AzureDataStore for Azure; they all implement
SharedDataStore.

-MR
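For anyone following along, here is a rough sketch of what the recommended setup looks like in code: an OakFileDataStore (which implements SharedDataStore) wrapped in a DataStoreBlobStore and handed to a DocumentNodeStore. This is untested and the path, MongoDB URI, and cluster id are placeholders; check the exact builder methods against the Oak version you are on.

```java
// Sketch (untested): a shared, filesystem-based blob store for a clustered
// Oak setup. Assumes oak-core, oak-blob-plugins and oak-store-document are
// on the classpath; "/mnt/shared/datastore" is a placeholder path.
import org.apache.jackrabbit.oak.plugins.blob.datastore.DataStoreBlobStore;
import org.apache.jackrabbit.oak.plugins.blob.datastore.OakFileDataStore;
import org.apache.jackrabbit.oak.plugins.document.DocumentMK;
import org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore;

public class SharedBlobStoreExample {
    public static void main(String[] args) throws Exception {
        // OakFileDataStore implements SharedDataStore, so multiple Oak
        // instances may safely point at the same directory.
        OakFileDataStore fds = new OakFileDataStore();
        fds.setPath("/mnt/shared/datastore"); // shared location (placeholder)
        fds.init(null);

        DataStoreBlobStore blobStore = new DataStoreBlobStore(fds);

        // Each cluster node uses a distinct clusterId; all nodes point at
        // the same MongoDB and the same shared data store.
        DocumentNodeStore ns = new DocumentMK.Builder()
                .setMongoDB("mongodb://localhost:27017", "oak", 16)
                .setBlobStore(blobStore)
                .setClusterId(1)
                .getNodeStore();

        // ... build a repository on top of ns ...
        ns.dispose();
    }
}
```

The same shape applies for S3DataStore or AzureDataStore: configure the store, wrap it in DataStoreBlobStore, and pass it to the node store builder.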
