Hi Michael,

Some of my workflows create very large numbers of Cypher MERGE and CREATE 
UNIQUE operations - potentially millions of lines.
I break the master file into 50 smaller files and cat these concurrently 
to cypher-shell. Actually it's not quite that easy, as I create 
smaller buckets before cat'ing them into cypher-shell due to limitations of 
the shell's buffering.
I have good reason for using this approach, as the task is additive with 
periodic updates and many Cypher lines are created 'on the fly' due to data 
dependencies.
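For anyone following along, the pipeline above can be sketched roughly as 
below. This is only an illustration, not my exact script: the file names, 
chunk/bucket sizes, and the demo MERGE statements are made up, and the no-op 
SINK function stands in for the real cypher-shell invocation (which would be 
something like `cypher-shell -u neo4j -p <password>`).

```shell
#!/bin/sh
# In real use, SINK would pipe to cypher-shell; here it is a no-op
# sink so the sketch runs standalone.
SINK() { cat > /dev/null; }

# Stand-in for the generated master file (the real one has millions of lines).
seq 1 200 | sed 's/.*/MERGE (n:Item {id: &});/' > master.cypher

# 1) Split the master file into N roughly equal chunk files (GNU split).
N=4
split -n l/$N master.cypher chunk_

# 2) Feed each chunk to the sink concurrently, in smaller buckets;
#    bucketing keeps each piped input small enough for the shell's buffering.
for f in chunk_*; do
  (
    split -l 25 "$f" "${f}.bucket."
    for b in "$f".bucket.*; do
      SINK < "$b"
    done
  ) &
done
wait
```

With the real cypher-shell as the sink, each background subshell holds its own 
connection, so the N chunks ingest in parallel.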

I am able to achieve an ingest rate of about 500 lines of Cypher per 
second using this approach - but it is still not as fast as I would like.
Are there any tweaks that might improve this approach?

Thanks, Wayne

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
