Hi Michael,

Seraph uses outdated, 2-year-old APIs (/db/data/cypher and /db/data/node), 
which are not performant, and it also misses relevant headers (e.g. 
X-Stream:true) for those. It also doesn't support HTTP keep-alive. 

Thanks for the clarification. We selected Seraph because your website 
recommended it (http://neo4j.com/developer/javascript/). It was the first 
one on that page, and the page did not suggest that it was outdated. That is 
why we used it. I will rewrite the test to use the driver you suggested.

Is this the correct driver:

https://www.npmjs.com/package/neo4j

in version 2.0.0-RC1? Are there any configuration options I need to be 
aware of to switch on keep-alive?

Configuration for Neo4j is also easy to improve; for your store, 2.5G of 
page-cache memory should be enough.

I will reduce dbms.pagecache.memory to 2.5GB then (thanks to credits from 
Google, the machine has a lot of memory, so we thought the more the better).
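If I got the property name right, the change in conf/neo4j.properties would then simply be:

```
# page cache sized to roughly match the store, per your suggestion
dbms.pagecache.memory=2500m
```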

The warmup is also not sufficient.


And running the queries only once, i.e. with cold caches, is also a 
non-production approach.

There are some 2*10^12 possible combinations for shortest paths. In a 
production environment, pairs are likely to be new and not cache hits. 
Therefore we did not want to test the query cache, but the real performance 
of the computation.
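Just to make the order of magnitude explicit (the profile count N below is a round placeholder, not the exact size of our data set):

```javascript
// Back-of-the-envelope: ordered (source, target) pairs for shortest
// paths grow as N * (N - 1). N is a hypothetical round number here.
var N = 1.5e6;            // assumed number of profiles (placeholder)
var pairs = N * (N - 1);  // ordered pairs, self-pairs excluded
console.log(pairs.toExponential(2)); // on the order of 10^12
```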


I'm currently looking into it and will post a blog post with my 
recommendations next week.

Perfect, I will rerun the tests using your suggestions.

As we all know benchmark tests are always well suited to the publisher :)

The index should be a unique constraint instead.

I will change this from an index to a unique constraint.
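If I read the 2.x Cypher syntax correctly, that should be the following in neo4j-shell (dropping the plain index first; this is my sketch, please correct me if the constraint syntax differs):

```
neo4j-sh (?)$ DROP INDEX ON :PROFILES(_key);
neo4j-sh (?)$ CREATE CONSTRAINT ON (p:PROFILES) ASSERT p._key IS UNIQUE;
```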

Thanks
  Frank




On Sunday, June 7, 2015 at 14:25:34 UTC+2, Michael Hunger wrote:
>
> Hi,
>
> It would have been very nice to be contacted before such an article went 
> out and not called out as part of the post to "defend yourself". Just 
> saying.
>
> Seraph uses outdated, 2-year-old APIs (/db/data/cypher and /db/data/node), 
> which are not performant, and it also misses relevant headers (e.g. 
> X-Stream:true) for those. It also doesn't support HTTP keep-alive. 
>
> I would either use requests directly or perhaps node-neo4j 2.x, would have 
> to test though.
>
> Configuration for Neo4j is also easy to improve; for your store, 2.5G of 
> page-cache memory should be enough.
> The warmup is also not sufficient.
>
> And running the queries only once, i.e. with cold caches, is also a 
> non-production approach.
>
> I'm currently looking into it and will post a blog post with my 
> recommendations next week.
>
> As we all know benchmark tests are always well suited to the publisher :)
>
> The index should be a unique constraint instead.
>
> Cheers, Michael
>
> On 07.06.2015 at 12:33, Frank Celler <[email protected]> wrote:
>
> Hi Christophe,
>
> I'm Frank from ArangoDB. The author of the article, Claudius, is my 
> colleague; he is currently not at his computer, so I will try to answer 
> your questions. Please let me know if you need more information. Any help 
> with the queries is more than welcome. If we can improve them in any way, 
> please let us know.
>
> - we raised the ulimit as requested by neo4j when it started: open files 
> (-n) 40000
>
> - there is one index on PROFILES:
>
> neo4j-sh (?)$ schema
> Indexes
>   ON :PROFILES(_key) ONLINE  
>
> - as far as we understood, there is no need to create an index for edges
>
> - we used "seraph" as the node.js driver, because that was recommended in 
> the node user group
>
> - we set
>
> dbms.pagecache.memory=20g
>
> (we were told in a talk that this is nowadays the only cache parameter 
> that matters).
>
> - we started with 
>
> ./bin/neo4j start
>
> - JVM is
>
> java version "1.7.0_79"
> Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
>
> Thanks for your help
>   Frank
>
> On Friday, June 5, 2015 at 19:25:09 UTC+2, Christophe Willemsen wrote:
>>
>> I have looked at their repository too. Most of the queries seem 'almost' 
>> correct, but there is no information concerning the real schema indexes, 
>> the configuration of the JVM, etc. Also, the results are reported as 
>> throughput, so I will wait for someone perhaps more experienced with 
>> these kinds of benchmarks to reply to it.
>>
>> On Friday, June 5, 2015 at 04:32:59 UTC+2, Michael Hunger wrote:
>>>
>>> I'm currently on the road, but there are several things wrong with it. 
>>> I will look into it in more detail in the next few days.
>>>
>>> Michael
>>>
>>> Sent from my iPhone
>>>
>>> On 04.06.2015 at 12:57, Andrii Stesin <[email protected]> wrote:
>>>
>>> Just ran into the following article (supposedly published today, June 4, 
>>> 2015) which claims to contain a comparison of benchmark results: Native 
>>> multi-model can compete with pure document and graph databases 
>>> <https://www.arangodb.com/2015/06/multi-model-benchmark/>. It makes me 
>>> think that there is something wrong with either their data model or the 
>>> test setup, because the results for Neo4j are surprisingly low.
>>>
>>> Am I the only one out there who feels the same?
>>>
>>> WBR,
>>> Andrii
>>>
>>> -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "Neo4j" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to [email protected].
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>>
