I believe the issue is the size of the transaction, based on a similar
question. Can you batch the inserts into transactions with about 30k per
transaction?
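
The pattern looks roughly like this (a minimal sketch: the real Neo4j 1.x calls
would be db.beginTx(), createNode()/setProperty(), then tx.success() and
tx.finish() at each batch boundary; here a simple commit counter stands in for
the transaction API so only the batching logic is shown):

```java
// Sketch: batch a large insert job into fixed-size transactions
// instead of one huge 100k-node transaction.
public class BatchingSketch {
    static final int BATCH_SIZE = 30_000;

    // Returns how many transactions a run of totalNodes inserts would commit.
    public static int insertAll(int totalNodes) {
        int commits = 0;
        int inBatch = 0;              // begin first transaction here
        for (int i = 0; i < totalNodes; i++) {
            // createNode() and the ~50 setProperty() calls would go here
            inBatch++;
            if (inBatch == BATCH_SIZE) {
                commits++;            // tx.success(); tx.finish(); begin new tx
                inBatch = 0;
            }
        }
        if (inBatch > 0) {
            commits++;                // commit the final partial batch
        }
        return commits;
    }

    public static void main(String[] args) {
        // 100k nodes at 30k per transaction -> 4 commits
        System.out.println(insertAll(100_000));
    }
}
```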

--
Toby Matejovsky


On Thu, Jul 22, 2010 at 11:27 AM, Jeff Klann <[email protected]> wrote:

> I'm stumped on this one.
>
> I'm getting the "fast write performance at first that slows to a crawl"
> issue described in the performance guide, so I increased the Linux
> dirty_page ratio (all the way up to 80%), turned off auto log rotation, and
> increased the size of the memory mapped cache. This issue is still
> happening
> exactly as before.
>
> I've narrowed my problem to this:
>   *If I insert a lot of nodes with about 50 short string properties each,
> the performance slows to a crawl at about 40,000 inserts (and it stays
> slow)* ... however if I don't insert the properties the performance is
> fine.
>
> What am I doing wrong? The machine currently has a small amount of RAM, but
> I don't understand why that would impact pure insertion, and only after
> thousands of inserts. (I don't read the properties back after adding them.)
> I have not used BatchInserter because it is nice to have normal database
> access for some parts of this database builder program I'm writing, but if
> that's the only way, I could refactor. Also, all these inserts are within one
> transaction (about 100k nodes per transaction) - do I need to split this
> into smaller transactions?
>
> Thanks,
> Jeff Klann
> _______________________________________________
> Neo4j mailing list
> [email protected]
> https://lists.neo4j.org/mailman/listinfo/user
>