Hi,

Yes, the master/slave mode is only available in the enterprise version.

Cheers
Mark

On 5 November 2014 14:50, Aileen Agricola <[email protected]
> wrote:

> Hi Liu,
>
> I'm forwarding your questions to our Google Group. The community will
> assist.
>
> best,
> Aileen
> ------
>
> I haven't found the master-slave mode in the community version. Is it only
> included in the enterprise version? Thanks
>
> Best regards
>
> LIU
>
>>
>> I would like to ask about purchasing the neo4j enterprise version. How
>> can I purchase it, and how much does it cost? Thanks in advance.
>>
>> Best regards
>>
>
>
>>
>> 2014-10-28 7:58 GMT+08:00 Aileen Agricola <
>> [email protected]>:
>>
>>> Hi Liu,
>>>
>>> You will receive a response directly from our community.
>>>
>>> best,
>>> Aileen
>>>
>>>
>>> Aileen Agricola
>>> Web Program Manager | Neo Technology
>>> [email protected] | 206.437.2524
>>>
>>> *Join us at GraphConnect 2014 SF! graphconnect.com
>>> <http://graphconnect.com/>*
>>> *As a friend of Neo4j, use discount code *KOMPIS
>>> <https://graphconnect2014sf.eventbrite.com/?discount=KOMPIS>* for $100 off
>>> registration*
>>>
>>>
>>> On Mon, Oct 27, 2014 at 4:57 PM, LIU Xiaobing <[email protected]> wrote:
>>>
>>>> Hi Aileen,
>>>>     Thanks. Do you mean that you have forwarded my question below to
>>>> [email protected]? I haven't received any response. If I want to
>>>> find the response, what should I do: wait for an email from
>>>> [email protected], or search the neo4j Google Group? Here is
>>>> my question:
>>>>
>>>> Hi experts,
>>>>     I am encountering a performance problem with neo4j. I am trying to
>>>> write data to neo4j; the data scale is about billions of records, and the
>>>> relationships are person-to-person. When I use py2neo to query and write
>>>> data, I found that it is very slow.
>>>>     The query clause I use:
>>>>     create_rels = (
>>>>         'MERGE (first:{TYPE1} {{id:"{val1}"}}) '
>>>>         'MERGE (second:{TYPE2} {{id:"{val2}"}}) '
>>>>         'MERGE (first)-[r:{RTYPE}]->(second) '
>>>>         'ON CREATE SET r.weight={weight_set} '
>>>>         'ON MATCH SET {weight_compute} '
>>>>         'WITH r SET r.half_life={half_life}, r.update_time=TIMESTAMP(), r.threshold={threshold} '
>>>>         'WITH r WHERE r.weight<r.threshold DELETE r')
>>>>     self.query = neo.CypherQuery(
>>>>         self.graph_db,
>>>>         self.create_rels.format(
>>>>             TYPE1=entity1[0], val1=entity1[1],
>>>>             TYPE2=entity2[0], val2=entity2[1],
>>>>             RTYPE=rel_type, weight_set=weight_set,
>>>>             weight_compute=CYPHER_WEIGHT_COMPUTE,
>>>>             half_life=half_life, threshold=threshold))
>>>>     self.query.execute()
>>>>
>>>>     the CYPHER_WEIGHT_COMPUTE definition is
>>>> "r.weight=r.weight+r.weight*EXP((TIMESTAMP()-r.update_time)/(r.half_life*1.0))"
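For reference, the same per-relationship update can be checked in plain Python (timestamps in milliseconds, matching Cypher's TIMESTAMP(); the function name is mine):

```python
import math

def updated_weight(weight, now_ms, update_time_ms, half_life_ms):
    # Mirrors the Cypher expression:
    #   r.weight + r.weight * EXP((TIMESTAMP() - r.update_time) / (r.half_life * 1.0))
    return weight + weight * math.exp((now_ms - update_time_ms) / (half_life_ms * 1.0))

# With zero elapsed time, exp(0) = 1, so the weight doubles.
print(updated_weight(2.0, 1000, 1000, 500))  # 4.0
```

Note that the exponent is positive, so the added term grows as more time elapses; if a half-life decay was intended, the sign of the exponent may be worth double-checking.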
>>>>
>>>>     The purpose of the clause is to create the nodes and relationship
>>>> when the nodes are not yet in the graph database, and to update the
>>>> relationship's properties if they already exist.
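One thing worth noting about the query above: only the labels and relationship type have to be interpolated into the string; the values can travel as Cypher parameters, which lets the server cache one query plan instead of parsing a fresh statement on every call. A minimal sketch of the idea (helper and variable names are mine, trimmed to the MERGE part; with py2neo the execution would then be roughly `neo.CypherQuery(graph_db, query).execute(**params)` — an assumption about this py2neo version's API):

```python
# Labels/relationship types cannot be parameterized in Cypher, so they are
# interpolated; the id values stay as {val1}/{val2} parameter markers.
TEMPLATE = (
    'MERGE (first:{TYPE1} {{id:{{val1}}}}) '
    'MERGE (second:{TYPE2} {{id:{{val2}}}}) '
    'MERGE (first)-[r:{RTYPE}]->(second)'
)

def build_query(type1, val1, type2, val2, rtype):
    query = TEMPLATE.format(TYPE1=type1, TYPE2=type2, RTYPE=rtype)
    params = {'val1': val1, 'val2': val2}
    return query, params

query, params = build_query('UID', 'u1', 'UID', 'u2', 'KNOWS')
print(query)
# MERGE (first:UID {id:{val1}}) MERGE (second:UID {id:{val2}}) MERGE (first)-[r:KNOWS]->(second)
```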
>>>>     I have tried the following ways to improve performance, but they
>>>> didn't help much.
>>>>     1) Tuning the server configuration files
>>>>     neo4j-wrapper.conf:
>>>>     wrapper.java.initmemory=4096
>>>>     wrapper.java.maxmemory=4096
>>>>     wrapper.java.minmemory=4096
>>>>
>>>>     neo4j.properties:
>>>>     neostore.nodestore.db.mapped_memory=256M
>>>>     neostore.relationshipstore.db.mapped_memory=256M
>>>>     neostore.propertystore.db.mapped_memory=256M
>>>>     neostore.propertystore.db.strings.mapped_memory=128M
>>>>     neostore.propertystore.db.arrays.mapped_memory=128M
>>>>
>>>>     node_auto_indexing=true
>>>>     relationship_auto_indexing=true
>>>>
>>>>     2) Creating uniqueness constraints on node properties in order to
>>>> create indexes
>>>>        Cypher clause: CREATE CONSTRAINT ON (n:UID) ASSERT n.id IS
>>>> UNIQUE
>>>>
>>>>     When I check the load on the server, which is equipped with 16
>>>> four-core processors, I find that the CPU load is very high while the
>>>> network and I/O load are not. Is executing Cypher this way CPU-intensive?
>>>> How else can I improve the performance? Thanks very much.
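If each relationship currently goes out as its own HTTP request, batching is usually the biggest lever: Neo4j 2.1's UNWIND can apply one statement to a whole list of value maps sent as a single parameter, so one request handles many pairs. A sketch of building such a batch (query text and names are mine, simplified to the MERGE part; it would be executed once per batch rather than once per pair):

```python
# One statement processes the whole batch; {rows} is a single Cypher
# parameter holding a list of maps, one per relationship to merge.
BATCH_QUERY = (
    'UNWIND {rows} AS row '
    'MERGE (first:UID {id:row.val1}) '
    'MERGE (second:UID {id:row.val2}) '
    'MERGE (first)-[r:KNOWS]->(second) '
    'ON CREATE SET r.weight=row.weight'
)

def build_rows(pairs):
    """Turn (id1, id2, weight) tuples into the parameter list for one request."""
    return [{'val1': a, 'val2': b, 'weight': w} for a, b, w in pairs]

rows = build_rows([('u1', 'u2', 1.0), ('u1', 'u3', 0.5)])
print(len(rows))  # 2
```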
>>>>
>>>> By the way, the neo4j version is the 2.1.5 stable release, the py2neo
>>>> client version is 1.1.6, and the server has 8 GB of RAM.
>>>>
>>>> Best regards
>>>>
>>>> 2014-10-27 23:13 GMT+08:00 Aileen Agricola <
>>>> [email protected]>:
>>>>
>>>>> Hi Liu,
>>>>>
>>>>> I'm forwarding your question to our google group
>>>>> [email protected]
>>>>> Please provide any additional information there.
>>>>>
>>>>> best,
>>>>>
>>>>
>>>>
>>>> --
>>>> Best Regards
>>>> LIU Xiaobing 刘小兵
>>>>
>>>>
>>>
>>
>>
>> --
>> Best Regards
>> LIU Xiaobing 刘小兵
>>
>>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
>
