On Tue, Nov 1, 2011 at 12:26 PM, Rick Bullotta <[email protected]> wrote:
> Probably the opposite (if you could even do it). You'd lose the LRU caching
> across the boundary.
>
> Is the data being written the same as the data being read, or is there a
> natural segmentation? If so you could implement a crude form of
> sharding/partitioning to avoid "hot spots" (concurrency related) during
> these periods.
It's largely going to be the same data, and it's definitely the same type of data. Basically there will be discrete sets of same-type data (kinda like sub-graphs, but they can be connected) inserted in a batch-like manner. Those discrete sets will then immediately be consumed by processes that use them to calculate additional data and store the results to various data stores, including the same Neo4j db.

After the initial consumption, the data sets will be consumed on demand, but I'm not sure how frequently at this point. It'll likely be something like once or twice a day or even less often, but I'll have to see how the usage patterns emerge after the solution goes live to be sure.

Thanks for the answer, Rick. I'll do some quick-and-dirty testing later this week to guide the decision making on this. I'll see if I can post what I find on this thread afterwards.

-TPP
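P.S. For the quick-and-dirty test, I'm thinking of something along these lines: a minimal sketch against a single embedded instance using the Neo4j 1.x Java API, timing one transaction per batch to mimic the batch-like inserts of each discrete data set. The batch sizes, property keys and relationship type below are made-up placeholders, not anything from the actual model:

    import org.neo4j.graphdb.DynamicRelationshipType;
    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Node;
    import org.neo4j.graphdb.RelationshipType;
    import org.neo4j.graphdb.Transaction;
    import org.neo4j.kernel.EmbeddedGraphDatabase;

    public class BatchInsertTimingTest {
        // Placeholder sizes -- tune to match the real batch volumes.
        private static final int BATCH_COUNT = 10;
        private static final int NODES_PER_BATCH = 10000;

        // Hypothetical relationship type, just to make each set a small sub-graph.
        private static final RelationshipType CONNECTED =
                DynamicRelationshipType.withName("CONNECTED_TO");

        public static void main(String[] args) {
            GraphDatabaseService db = new EmbeddedGraphDatabase("target/test-db");
            try {
                for (int batch = 0; batch < BATCH_COUNT; batch++) {
                    long start = System.currentTimeMillis();

                    // One transaction per batch, mimicking the batch-like insert
                    // of one discrete data set.
                    Transaction tx = db.beginTx();
                    try {
                        Node previous = null;
                        for (int i = 0; i < NODES_PER_BATCH; i++) {
                            Node node = db.createNode();
                            node.setProperty("batch", batch);
                            node.setProperty("value", i);
                            if (previous != null) {
                                // Chain the nodes so each set hangs together.
                                previous.createRelationshipTo(node, CONNECTED);
                            }
                            previous = node;
                        }
                        tx.success();
                    } finally {
                        tx.finish();
                    }

                    long elapsed = System.currentTimeMillis() - start;
                    System.out.println("Batch " + batch + " inserted in " + elapsed + " ms");
                }
            } finally {
                db.shutdown();
            }
        }
    }

If the insert times look reasonable, the next step would be to run the consuming read workload against the same instance in parallel, to see whether the hot-spot/concurrency concern Rick mentioned actually shows up.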

