On Mon, Mar 21, 2011 at 11:07 PM, Rick Bullotta <[email protected]> wrote:
> Here's the quick summary of what we're encountering:
>
> We are inserting large numbers of activity stream entries on a nearly
> constant basis. To optimize transactioning, we queue these up and have a
> single scheduled task that reads the entries from the queue and persists
> them to Neo. Within these transactions, it's possible that a very large
> number of relationships will be created and deleted (sometimes created and
> deleted all within the same transaction, since we are managing something
> similar to an index).
>
> I've noticed that the time required to handle the inserts (not just the
> total, but the time per insert) degrades DRAMATICALLY if there are more
> than a few hundred entries to write. It is very fast if there are < 100
> entries in the batch, but very slow if there are > 1000. With Neo 1.1, we
> did not notice this behavior. We have tried Neo 1.2 and 1.3, and both seem
> to exhibit this behavior.
>
> Can anyone provide any insight into possible causes/fixes?

This sounds similar to what I've seen. For what it's worth, I've observed it
since I started using Neo4j late in the 1.2 days, and it persisted when I
soon migrated to the 1.3 milestones. It has led me to use an externally
managed Lucene index for the cases where the index is meant to grow rapidly.

--
Massimo
http://meridio.blogspot.com

_______________________________________________
Neo4j mailing list
[email protected]
https://lists.neo4j.org/mailman/listinfo/user
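As an aside, a common workaround for this kind of degradation is to cap the size of each transaction: instead of persisting the whole queue in one commit, split it into fixed-size batches and commit each one separately. A minimal sketch of the chunking step is below; the batch size of 500 and the idea of persisting each chunk in its own transaction are my assumptions for illustration, not the original poster's actual code.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchChunker {

    // Split a work queue into fixed-size chunks so that each chunk can be
    // persisted in its own (smaller) Neo4j transaction. The threshold of a
    // few hundred entries is illustrative, chosen because the poster reports
    // good performance below ~100 entries and bad performance above ~1000.
    static <T> List<List<T>> chunk(List<T> queue, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < queue.size(); i += batchSize) {
            int end = Math.min(i + batchSize, queue.size());
            // Copy the sublist so each batch is independent of the queue.
            batches.add(new ArrayList<>(queue.subList(i, end)));
        }
        return batches;
    }

    public static void main(String[] args) {
        // Simulate a queue of 1200 pending activity-stream entries.
        List<Integer> queue = new ArrayList<>();
        for (int i = 0; i < 1200; i++) {
            queue.add(i);
        }

        List<List<Integer>> batches = chunk(queue, 500);
        System.out.println(batches.size());          // 3 batches
        System.out.println(batches.get(2).size());   // last batch holds 200

        // In the real system, each batch would then be written inside its
        // own transaction (beginTx() ... success() ... finish() in the
        // Neo4j 1.x embedded API), keeping per-transaction work bounded.
    }
}
```

Whether this avoids the slowdown depends on where the cost actually comes from (e.g. per-transaction index maintenance), but it keeps each commit in the size range the poster reports as fast.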

