All transaction state is currently kept in memory on the Java heap, and 20+ million changes are too much to fit in a 4 GB heap. When too much accumulates on the heap, it manifests as those "GC overhead limit exceeded" errors and the database runs slowly, though other things can produce similar symptoms.
Try putting USING PERIODIC COMMIT 10000 in front of your LOAD CSV query. This commits the transaction periodically, which limits the transaction state kept in memory. Unfortunately it also breaks the atomicity of the import: if the load fails partway through, the batches committed so far remain in the database. The full query with this hint applied is shown after the quoted message below.

--
Chris Vest
System Engineer, Neo Technology
[ skype: mr.chrisvest, twitter: chvest ]

On 27 Aug 2014, at 22:31, 'Curtis Mosters' via Neo4j <[email protected]> wrote:

> Let's say I have:
>
> LOAD CSV WITH HEADERS FROM "file:C:/test.txt" AS csvLine
> CREATE (p:Person { person_id: toInt(csvLine.person_id), name: csvLine.name })
>
> I run this query in the browser. I know that it's not the fastest way and I
> should think about using the batch importer, but I really like this approach
> and want to speed it up.
>
> When I first ran it, I got an error after 2 or 3 minutes saying
> "GC overhead limit exceeded". So I set
>
> wrapper.java.initmemory=4096
> wrapper.java.maxmemory=4096
>
> Now the error no longer comes up, but the import is still slow and I can't
> see how much time is left. If you have tips on speeding this up, I would be
> very thankful. =)
>
> PS: the file is 2 GB and has about 20 million entries
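For reference, Curtis's query from the message above with the periodic-commit hint applied looks like this (10000 is just a starting point for the batch size; tune it to your heap):

USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:C:/test.txt" AS csvLine
CREATE (p:Person { person_id: toInt(csvLine.person_id), name: csvLine.name })

With the hint in place, Cypher commits after every 10000 rows, so only roughly one batch of transaction state has to fit on the heap at any one time.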
