Hi,

In addition to what Marcel wrote:
With the Mongo storage (and the relational database storage), you can start multiple Oak instances and read/write the data concurrently. You might also need a clustered / sharded storage backend (MongoDB or a database). There are some limitations to index scalability: property indexes are currently global (for the whole repository) and updated synchronously, which means the root document is modified on every commit that changes indexed content. That could become a problem if you have many cluster nodes. It is a theoretical limitation of the current property index implementation; it might not be a problem in practice, and you don't have to use property indexes at all (you could rely purely on Solr indexes instead).

Regards,
Thomas

On 03/07/14 09:11, "Bertrand Delacretaz" <[email protected]> wrote:

>Hi,
>
>What kind of limitations, if any, do people see in growing an
>Oak/Mongo repository to a few billion nodes?
>
>IIRC people were doing tests with a few hundred million nodes in
>Jackrabbit, so given Oak's scalable design I suppose that would work -
>but do any obvious bottlenecks come to mind?
>
>Also, do you have an estimate of the Oak/Mongo overhead in terms of
>storage size, assuming tons of small nodes in the 10kb range?
>
>-Bertrand
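
PS: for readers less familiar with Oak indexing, index definitions live as content under /oak:index. A rough sketch of a synchronous property index versus a Solr index marked asynchronous (node structure shown in the usual shorthand; the index node names "foo" and "solr" are just examples, and the exact set of properties may vary by Oak version):

    /oak:index/foo
      - jcr:primaryType = "oak:QueryIndexDefinition"
      - type = "property"
      - propertyNames = ["foo"]
      - reindex = true          <- synchronous: updated as part of each commit

    /oak:index/solr
      - jcr:primaryType = "oak:QueryIndexDefinition"
      - type = "solr"
      - async = "async"         <- updated by a background job, so commits
                                   don't have to touch the root document

The async flag is what avoids the contention on the root document described above, at the cost of slightly stale query results.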
