I read through some performance tuning docs; would it help if I optimized the 
parameters below? How large should I configure them?

dbms.pagecache.memory
node_cache_size
relationship_cache_size
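
For reference, a hypothetical sizing for an 8 GB machine might look like the 
sketch below (the exact numbers are illustrative, not a recommendation; note 
that the object-cache settings belong to the Neo4j 2.x series, and as far as I 
know the high-performance object cache they tune is an Enterprise feature):

```
# conf/neo4j.properties -- illustrative sizing for an 8 GB machine
# page cache: memory-maps the store files; leave room for the JVM heap + OS
dbms.pagecache.memory=4g

# object cache sizes (Neo4j 2.x; effective with the Enterprise 'hpc' cache)
node_cache_size=1G
relationship_cache_size=1G
```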

the goal we are trying to achieve: even when one user runs a heavily loaded 
query, it should not impact other users' access to neo4j (something like a 
shared pool?)

On Tuesday, April 19, 2016 at 2:20:31 PM UTC+8, Michael Hunger wrote:

> 1. Use labels
>
> you don't need the WITH (except for LIMIT batching)
>
> and you should be able to delete up to 1M entities per transaction with a 
> heap size of 4G
>
> so you can use LIMIT 1000000
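
Applied to the SUBSCRIBE_TO deletion from earlier in the thread, the batched 
form Michael describes might look like this sketch (the WITH carries only the 
relationship, and LIMIT sets the batch size per transaction):

```
MATCH (reader)-[u:SUBSCRIBE_TO]->(book)
WITH u LIMIT 1000000
DELETE u;
```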
>
> Michael
>
> On Tue, Apr 19, 2016 at 3:28 AM, kincheong lau <[email protected] 
> <javascript:>> wrote:
>
>> in the neo4j browser, I usually get a 'disconnected from server...' error 
>> before I realize there's a problem with the query, and by then it has 
>> already slowed down the server, even after I click the (x).
>>
>> yesterday I was trying to delete relationships; there are 8,xxx 
>> SUBSCRIBE_TO relationships, but I needed to break the deletion down using 
>> LIMIT 100 to prevent the server from hanging.
>>
>> match (reader)-[u:SUBSCRIBE_TO]->(book)
>> with reader, u, book 
>> delete u;
>>
>> On Tuesday, April 19, 2016 at 12:34:00 AM UTC+8, Michael Hunger wrote:
>>>
>>> You should be able to abort that long running query by terminating the 
>>> transaction:
>>>
>>> 1. click the (x) in the neo4j browser
>>> 2. press ctrl-c if you use Neo4j shell
>>> 3. if you run the statements programmatically, create a tx (embedded or 
>>> remote) and then call tx.terminate() from another thread.
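
A minimal sketch of option 3 using the embedded Java API of that era (assumes 
the org.neo4j.graphdb classes on the classpath; the database path and timing 
are illustrative only):

```java
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class TerminateExample {
    public static void main(String[] args) throws Exception {
        // path is illustrative; point this at your own store directory
        GraphDatabaseService db =
                new GraphDatabaseFactory().newEmbeddedDatabase("graph.db");
        try (Transaction tx = db.beginTx()) {
            // hand the tx to a watchdog thread that can abort the work;
            // after terminate(), further operations in the tx will throw
            Thread watchdog = new Thread(tx::terminate);
            watchdog.start();
            // ... long-running work would go here ...
            tx.success();
        } catch (Exception e) {
            // a TransactionTerminatedException is expected after terminate()
        } finally {
            db.shutdown();
        }
    }
}
```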
>>>
>>> what was the query?
>>>
>>> you have to share more detail if you want help with query optimization 
>>> (datamodel, query, existing indexes, machine config (or 
>>> graph.db/messages.log) )
>>>
>>> Michael
>>>
>>>
>>> On Mon, Apr 18, 2016 at 11:50 AM, kincheong lau <[email protected]> 
>>> wrote:
>>>
>>>> we encounter problems when some of our developers run a badly optimized 
>>>> query:
>>>> 1. the neo4j server hangs and becomes unresponsive
>>>> 2. there is no way to kill the long-running query
>>>> 3. we usually have no choice but to force-restart neo4j
>>>> 4. we have applied indexes, but they don't seem to help much with our 
>>>> large data volume
>>>>
>>>> we are using the community edition; is there any server configuration we 
>>>> could try to avoid this performance issue?
>>>>
>>>> -- 
>>>> You received this message because you are subscribed to the Google 
>>>> Groups "Neo4j" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>> an email to [email protected].
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>>
