Re: [Neo4j] performance when deleting large numbers of nodes

2016-06-20 Thread 'Chris Vest' via Neo4j
If you perform deletes in parallel, it can be worth investing some time in making the code smart enough to choose disjoint data sets for the transactions that run in parallel; e.g. no node should be the start or end node in more than one parallel transaction at a time. That way they won’t contend on locks.
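Chris’ disjoint-set idea can be sketched as a greedy partitioner. All names here are hypothetical and `Rel` is a stand-in for Neo4j’s `Relationship` (modelled as a pair of node ids so the logic is shown in isolation): each relationship goes to the worker that already owns one of its endpoints, or to the smallest group if neither endpoint is owned; a relationship whose endpoints are owned by two different workers is deferred to a later round, so no node is ever locked by two parallel transactions at once.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DisjointBatches {
    // Hypothetical relationship model: just the two endpoint node ids.
    public record Rel(long start, long end) {}

    // Split rels into `workers` groups such that no node appears in more
    // than one group; rels spanning two groups go into a trailing
    // "leftover" list to be handled serially or in the next round.
    public static List<List<Rel>> partition(List<Rel> rels, int workers) {
        List<List<Rel>> groups = new ArrayList<>();
        Map<Long, Integer> owner = new HashMap<>(); // node id -> group index
        for (int i = 0; i < workers; i++) groups.add(new ArrayList<>());
        List<Rel> leftover = new ArrayList<>();
        for (Rel r : rels) {
            Integer a = owner.get(r.start());
            Integer b = owner.get(r.end());
            if (a != null && b != null && !a.equals(b)) {
                leftover.add(r); // endpoints owned by different workers: defer
                continue;
            }
            int g;
            if (a != null) g = a;       // stay with the owner of an endpoint
            else if (b != null) g = b;
            else {
                g = 0;                  // otherwise pick the smallest group
                for (int i = 1; i < workers; i++)
                    if (groups.get(i).size() < groups.get(g).size()) g = i;
            }
            groups.get(g).add(r);
            owner.put(r.start(), g);
            owner.put(r.end(), g);
        }
        groups.add(leftover); // last entry: deferred relationships
        return groups;
    }
}
```

Each of the first `workers` lists can then be deleted in its own transaction on its own thread without lock contention.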

Re: [Neo4j] performance when deleting large numbers of nodes

2016-06-18 Thread 'Michael Hunger' via Neo4j
Shouldn't be slow. Faster disk. Concurrent batches would help.

Re: [Neo4j] performance when deleting large numbers of nodes

2016-06-18 Thread Clark Richey
Yes. That's a lot to delete; doing it in parallel will definitely help.

Re: [Neo4j] performance when deleting large numbers of nodes

2016-06-18 Thread John Fry
Clark - this works. It is still slow. I guess multithreading may help some.

    Transaction tx = db.beginTx();
    //try ( Transaction tx = db.beginTx() ) {
    for (int i = 0; i < rels.size(); i++) {  // 'rels', the '<' and the loop body were eaten by the archive; reconstructed
        rels.get(i).delete();
        txc++;
        if (txc > BATCH_SIZE) {  // original threshold lost; BATCH_SIZE is a placeholder
            txc = 0;
            tx.success();
            tx.close();
            tx = db.beginTx();
        }
    }

Re: [Neo4j] performance when deleting large numbers of nodes

2016-06-18 Thread Clark Richey
Don't nest them. Just create a counter and every x deletes, commit the transaction and open a new one.
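Clark's counter pattern can be sketched like this. `Tx` is a hypothetical stand-in for Neo4j's `org.neo4j.graphdb.Transaction` so the batching logic is testable in isolation; with the embedded API the `beginTx`/`success`/`close` calls would sit in the same positions.

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Minimal stand-in for a transaction handle (hypothetical; in embedded
// Neo4j this would be the Transaction returned by db.beginTx()).
interface Tx {
    void success(); // mark the transaction for commit
    void close();   // commit and release
}

public class BatchedDelete {
    // Deletes items in batches: every batchSize deletions the current
    // transaction is committed and a fresh one opened, keeping the
    // per-transaction memory footprint bounded. Returns the commit count.
    public static <T> int deleteInBatches(List<T> items, int batchSize,
                                          Supplier<Tx> beginTx,
                                          Consumer<T> delete) {
        int commits = 0;
        int txc = 0;
        Tx tx = beginTx.get();
        for (T item : items) {
            delete.accept(item);
            txc++;
            if (txc >= batchSize) {
                tx.success();
                tx.close();
                commits++;
                tx = beginTx.get();
                txc = 0;
            }
        }
        tx.success(); // commit the final (possibly partial) batch
        tx.close();
        commits++;
        return commits;
    }
}
```

With 15M relationships, a batch size in the tens of thousands keeps each commit small enough to avoid the memory thrashing described below in the thread's original post.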

Re: [Neo4j] performance when deleting large numbers of nodes

2016-06-18 Thread John Fry
Thanks Clark - is there any good/recommended way to nest the commits? Thx JF

Re: [Neo4j] performance when deleting large numbers of nodes

2016-06-18 Thread Clark Richey
You need to periodically commit. Holding that many transactions in memory isn't efficient.

[Neo4j] performance when deleting large numbers of nodes

2016-06-18 Thread John Fry
Hello All, I have a graph of about 200M relationships and often I need to delete a large number of them. For the proxy code below I am seeing huge memory usage and memory thrashing when deleting about 15M relationships. When it hits tx.close() I see all CPU cores start working at close to 100%.