Hi Michael,
I ran the ab script on my instance and got pretty bad performance results:
it took 201 seconds to complete on my 4-core/30G RAM EC2 instance.
<https://lh6.googleusercontent.com/-aO1_mLiWqfY/VQcsejTo6AI/AAAAAAAAFCE/l02yIC-osZs/s1600/Screen%2BShot%2B2015-03-17%2Bat%2B12.47.33%2BAM.png>
>> That test creates 500k minute nodes and 500k minute relationships in a
>> _SINGLE_ transaction, so you need enough RAM to fit it in
I have a 30G RAM machine. Am I missing something in my Neo4j configuration?
Here are some configuration properties:
# conf/neo4j.properties
neostore.nodestore.db.mapped_memory=500M
neostore.relationshipstore.db.mapped_memory=1200M
neostore.propertystore.db.mapped_memory=500M
# conf/neo4j-wrapper.conf
wrapper.java.initmemory=4096
wrapper.java.maxmemory=4096
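If that single 500k-row transaction has to fit in the heap, the 4G configured above may be the limiting factor on this 30G box. A sketch of a larger heap (the exact values here are my guess, not a tested recommendation):

```
# conf/neo4j-wrapper.conf -- more JVM headroom for one very large transaction
wrapper.java.initmemory=8192
wrapper.java.maxmemory=8192
```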
Another question: is it possible to achieve >10K writes/sec using Cypher
over REST? (I am using jadell/neo4jphp for transactional Cypher queries
over REST.)
On 16 March 2015 at 18:49, Michael Hunger wrote:
Hi,
* Use Neo4j 2.2, which is much better at scaling concurrent writes.
* Use a fast SSD (or SSD RAID), many cores, and *many concurrent* requests.
* Try with a simple tool like ab (Apache Bench) before you try a driver.
The test you looked at is a single query that only uses a single CPU.
That test creates 500k minute nodes and 500k minute relationships in a
_SINGLE_ transaction, so you need enough RAM to fit it in; otherwise it
will garbage-collect trying to make more space.
I think I also used neo4j-enterprise for that one.
I ran the test with neo4j-shell and enough RAM (4 or 8G, I don't remember).
For high write loads I recommend disabling the Neo4j 2nd-level cache
(cache_type=none).
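In 2.1.x that setting goes in the main properties file, something like this (sketch):

```
# conf/neo4j.properties -- disable the object cache for write-heavy loads
cache_type=none
```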
I'm attaching my ab test for you; it creates 3 nodes and 2 rels per request.
Running ./run_ab 24 100000 create_plain.json (on a 6-core, 12-HT server) takes:
Document Path: /db/data/transaction/commit
Document Length: 103 bytes
Concurrency Level: 24
Time taken for tests: 4.843 seconds
Complete requests: 100000
Failed requests: 0
Keep-Alive requests: 100000
Total transferred: 30000000 bytes
Total body sent: 43800000
HTML transferred: 10300000 bytes
Requests per second: 20647.55 [#/sec] (mean)
Time per request: 1.162 [ms] (mean)
Time per request: 0.048 [ms] (mean, across all concurrent requests)
Transfer rate: 6049.09 [Kbytes/sec] received
8831.67 kb/s sent
14880.76 kb/s total
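The create_plain.json file itself wasn't shared in the thread; a request body for the transactional endpoint that creates 3 nodes and 2 rels might look like this (the statement, labels, and property names here are my assumption):

```json
{
  "statements": [
    {
      "statement": "CREATE (a:Item {id:{i1}})-[:NEXT]->(b:Item {id:{i2}}), (b)-[:NEXT]->(c:Item {id:{i3}})",
      "parameters": {"i1": 1, "i2": 2, "i3": 3}
    }
  ]
}
```

Presumably run_ab wraps something like ab -k -c 24 -n 100000 -p create_plain.json -T application/json http://localhost:7474/db/data/transaction/commit; the -k would match the 100000 keep-alive requests in the output above.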
On 16.03.2015 at 13:53, Niranjan U wrote:
Hi Michael,
Greetings, and a fine Monday morning to you!
I am currently evaluating Neo4j for a project that will have a high
frequency of writes, and I am playing with a test instance of the community
version.
Briefly, this is my requirement:
I want to evaluate whether Neo4j can scale to 10K writes/sec, which is my
project's requirement. Down the line, this write frequency will increase to
30-40K, and I would also like to compare write-concurrency scaling with
vertical hardware scaling costs. I have read your blog posts, as well as
those of a few others, and see that you have achieved impressive write
speeds.
However, I am having trouble replicating your results on my setup:
*My setup:*
- Amazon EC2 r3.xlarge instance (4 CPU cores, 30G RAM), Ubuntu OS
- I am using the Neo4j server REST APIs (via the popular neo4jphp library
<https://github.com/jadell/neo4jphp>)
- Neo4j version 2.1.7
1) As per this blog post
<http://java.dzone.com/articles/importing-forests-neo4j?page=0,0>, I tried
importing the forest. However, at this point:
// for every Hour, connect 60 minutes
MATCH (hour:Hour)
FOREACH (minute IN range(0,59) |
  CREATE (:Minute {id: minute})-[:PART_OF]->(hour));
the server hit 100% CPU and the query kept executing for more than 5
minutes, nowhere close to the 14 seconds mentioned in the blog. I would
like to know what the issue could be.
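For what it's worth, one way around a single huge transaction like the FOREACH above is to batch the work client-side and send one transactional-Cypher request per slice of hours. A minimal sketch in Python (the endpoint, labels, batch size, and the use of internal node ids are my assumptions; collecting the hour ids is left to the caller):

```python
import json

# Parameterized statement for one batch of hours; {ids} is Neo4j 2.x
# parameter syntax in the transactional endpoint, not a Python format field.
CREATE_MINUTES = (
    "MATCH (hour:Hour) WHERE id(hour) IN {ids} "
    "FOREACH (minute IN range(0,59) | "
    "CREATE (:Minute {id: minute})-[:PART_OF]->(hour))"
)

def batch_payloads(hour_ids, batch_size=1000):
    """Yield one /db/data/transaction/commit request body per batch of hours."""
    for start in range(0, len(hour_ids), batch_size):
        ids = hour_ids[start:start + batch_size]
        yield json.dumps({
            "statements": [
                {"statement": CREATE_MINUTES, "parameters": {"ids": ids}}
            ]
        })

# Example: one year of hours (8760) in batches of 1000 -> 9 requests,
# i.e. 9 reasonably sized transactions instead of one 500k-row monster.
payloads = list(batch_payloads(list(range(8760)), batch_size=1000))
```

Each yielded body would then be POSTed to the transactional endpoint (with neo4jphp, curl, or ab) so no single transaction has to hold all 500k creates in the heap at once.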
2) I ran an ab test on my server, and the results are here
<https://docs.google.com/document/d/1swkDVUSAbmsn_fEZZszDeYV_yiZKXT4ivDX4tmGR18c/edit?usp=sharing>.
For 10K writes with 10 threads, the test took around 20 secs, which is
incredibly slow for an r3.xlarge instance. I would like to know why I am
getting such slow performance.
--
You received this message because you are subscribed to the Google Groups
"Neo4j" group.